[-] OmnipotentEntity@beehaw.org 4 points 1 day ago

A problem that only affects newbies huh?

Let's say you are writing code intended to be deployed headless in the field, and it should not be allowed to exit in an uncontrolled fashion, because there are communications that need to happen with the hardware to shut it down safely. You're making an autonomous robot or something.

Using Python for this task isn't out of left field, because Python is one of the major languages of ROS, and the most common one at that.

Which of the following Python standard library functions can throw, and what do they throw?

`bytes`, `hasattr`, `len`, `super`, `zip`
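
All of them, as it turns out. Here's a minimal sketch of one failure mode apiece; the inputs are made up, but nothing about them is exotic, and `zip`'s `strict` flag needs Python 3.10+:

```python
# Each lambda raises when called; the loop prints what and why.

class BadSensor:
    def __len__(self) -> int:
        return -1  # a buggy __len__; len() refuses negative sizes

demos = [
    lambda: bytes("abc"),                         # TypeError: string argument without an encoding
    lambda: bytes(-1),                            # ValueError: negative count
    lambda: hasattr(object(), 123),               # TypeError: attribute name must be string
    lambda: len(BadSensor()),                     # ValueError: __len__() should return >= 0
    lambda: super(),                              # RuntimeError: super(): no arguments
    lambda: list(zip([1, 2], [1], strict=True)),  # ValueError: zip() argument 2 is shorter
]

for demo in demos:
    try:
        demo()
    except Exception as exc:
        print(type(exc).__name__, "-", exc)
```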

[-] OmnipotentEntity@beehaw.org 3 points 1 week ago* (last edited 1 week ago)

Oh, I'll try to describe Euler's formula in a way that is intuitive, and maybe you could have come up with it too.

So one way to think about complex numbers, and perhaps an intuitive one, is as a generalization of "positiveness" and "negativeness" from a binary property to a continuous one. Notice that if we multiply -1 by -1 we get 1, so maybe we don't have a straight line of positiveness and negativeness; perhaps it is periodic in some manner.

We can envision that perhaps the imaginary unit, i, is "halfway between" positive and negative, because if we think about what √(-1) could possibly be, the only thing that makes sense is that it's some form of 1 that you have to use twice to make something negative instead of just once. Then it stands to reason that √i is "halfway between" i and 1 on this scale of positive and negative.

If we work out what number √i must be, we get √2/2 + (√2/2)i.

(We can find this by writing (a + bi)^(2) = i, which gives us a^(2) − b^(2) = 0 and 2ab = 1; the first gives a = b, and substituting into the second gives a^(2) = 1/2.)
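
If you'd rather not take my word for the algebra, a quick check with Python's cmath agrees:

```python
import cmath

root_i = cmath.sqrt(1j)                # 1j is Python's spelling of i
print(root_i)                          # (0.7071067811865476+0.7071067811865475j)
print(2 ** 0.5 / 2)                    # 0.7071067811865476, i.e. √2/2
print(cmath.isclose(root_i ** 2, 1j))  # True: squaring it really gives back i
```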

The keen-eyed observer might notice that this value is also equal to sin(45°), and we start to get some ideas about how all of the complex numbers with radius 1 might be somewhat special, each carrying its own amount of "positiveness" or "negativeness" that is somehow unique to it.

So let's represent these values with R ∠ θ, where θ represents the amount of positiveness or negativeness in some way.

We've observed that √i is located 45° from the positive real axis, i is on the imaginary axis at 90°, and -1 is at 180°. If we examine each of these, we find that the real part is cos θ and the imaginary part is sin θ. That's really neat. It means we can represent any complex number as R ∠ θ = R(cos θ + i sin θ).

What happens if we multiply two complex numbers in this form? Well, it turns out that if you remember your trigonometry, you get exactly the angle addition formulas for sin and cos:

(cos θ + i sin θ)(cos φ + i sin φ) = (cos θ cos φ − sin θ sin φ) + i(sin θ cos φ + cos θ sin φ) = cos(θ + φ) + i sin(θ + φ)

So R ∠ θ · S ∠ φ = RS ∠ (θ + φ). But wait a second. That's turning multiplication into addition? Where have we seen something like this before? Exponent rules.

We have a^(n) · a^(m) = a^(n+m). What if, somehow, this angle formula is also an exponent in disguise?

Then you're learning calculus, and you come across Taylor series, and you learn a funny thing: the Taylor series of e^(x) looks a lot like the Taylor series of sine and cosine.

And in fact, if we write out the Taylor series for e^(ix), it exactly matches the Taylor series for cos x + i sin x:

e^(ix) = 1 + ix − x^(2)/2! − ix^(3)/3! + x^(4)/4! + ⋯ = (1 − x^(2)/2! + x^(4)/4! − ⋯) + i(x − x^(3)/3! + ⋯) = cos x + i sin x

So our supposition was correct: it was an exponent in disguise. How wild. Finally we get:

R ∠ θ = Re^(iθ) = R(cos θ + i sin θ)
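
And the identity is easy to spot-check numerically, using nothing beyond the standard library:

```python
import cmath
import math

# Sample a few angles, including the ones used above (45°, 90°, 180°).
for theta in (0.0, math.pi / 4, math.pi / 2, math.pi, 2.5):
    lhs = cmath.exp(1j * theta)                      # e^(iθ)
    rhs = complex(math.cos(theta), math.sin(theta))  # cos θ + i sin θ
    assert cmath.isclose(lhs, rhs), (theta, lhs, rhs)
print("e^(iθ) = cos θ + i sin θ checks out at every sampled angle")
```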

[-] OmnipotentEntity@beehaw.org 3 points 1 week ago

What god formula?

[-] OmnipotentEntity@beehaw.org 3 points 1 week ago

No, I just understand math. So yes.

[-] OmnipotentEntity@beehaw.org 5 points 2 weeks ago* (last edited 2 weeks ago)

Well, 13 microarcseconds is the resolution they claim to be shooting for. The nearest star is 4.2 light-years away, and 13 microarcseconds at 4.2 light-years works out to about 2,500 km; the Earth is about 12,742 km in diameter. So we could theoretically take an approximately 5×5 pixel image of Proxima Centauri b.
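
For anyone who wants to check the arithmetic, here's the back-of-the-envelope version; the only number assumed beyond the comment's own is the length of a light-year:

```python
import math

MICROARCSEC_TO_RAD = 1e-6 / 3600 * math.pi / 180  # microarcseconds to radians
LIGHT_YEAR_M = 9.4607e15                          # meters per light-year

# Small-angle approximation: size = angle (rad) * distance.
spot_m = 13 * MICROARCSEC_TO_RAD * 4.2 * LIGHT_YEAR_M
print(spot_m / 1000)            # ~2504 km per resolution element
print(12742 / (spot_m / 1000))  # ~5.1 pixels across an Earth-sized disc
```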

[-] OmnipotentEntity@beehaw.org 15 points 2 weeks ago

I would be impressed if they risked it. Literally half of Mongolia's population resides in the capital, Ulaanbaatar. If a country bordering Russia were to arrest the sitting Russian president and turn him over to The Hague, there's a non-zero possibility of a retaliatory airstrike on the capital, destroying the country's only major city and killing a significant percentage of its entire population.

[-] OmnipotentEntity@beehaw.org 1 points 1 month ago

No one tell OP that the ml in lemmy.ml is for Marxist-Leninists.

[-] OmnipotentEntity@beehaw.org 2 points 1 month ago

Too bad you'll never receive that option from any manufacturer.

[-] OmnipotentEntity@beehaw.org 7 points 1 month ago

IIRC, some SMR designs also have this property by design, though this is the very first I've heard of it actually being tested at scale.

[-] OmnipotentEntity@beehaw.org 14 points 1 month ago

> The scam is that they are actually doing the work, getting paid well

Listen. I know that there is some really shitty stuff going on in North Korea, and very real threats that their government is capable of carrying out, and it sucks for the people living there who have to do this work under threat of death.

But if you say that "the scam" is they're doing work and receiving full pay for work done, I'm going to make fun of you. Oh no, someone outside of the West did work and was slightly less exploited by capital than usual in the process. Horror upon horror.


Abstract:

Hallucination has been widely recognized to be a significant drawback for large language models (LLMs). There have been many works that attempt to reduce the extent of hallucination. These efforts have mostly been empirical so far, which cannot answer the fundamental question whether it can be completely eliminated. In this paper, we formalize the problem and show that it is impossible to eliminate hallucination in LLMs. Specifically, we define a formal world where hallucination is defined as inconsistencies between a computable LLM and a computable ground truth function. By employing results from learning theory, we show that LLMs cannot learn all of the computable functions and will therefore always hallucinate. Since the formal world is a part of the real world which is much more complicated, hallucinations are also inevitable for real world LLMs. Furthermore, for real world LLMs constrained by provable time complexity, we describe the hallucination-prone tasks and empirically validate our claims. Finally, using the formal world framework, we discuss the possible mechanisms and efficacies of existing hallucination mitigators as well as the practical implications on the safe deployment of LLMs.


Subverting Betteridge's law of headlines. Yes.

submitted 1 year ago* (last edited 1 year ago) by OmnipotentEntity@beehaw.org to c/science@beehaw.org
