71 points · submitted 6 months ago by jordanlund@lemmy.world to c/world@lemmy.world

My favorite part of this story:

"The rocket terminated the flight after judging that the achievement of its mission would be difficult."

"Man, this is too hard, better explode!"

top 15 comments
[-] ptz@dubvee.org 17 points 6 months ago

"The rocket terminated the flight after judging that the achievement of its mission would be difficult."

For a second, I thought the UK had a space program. That just sounds extremely British.

[-] AbouBenAdhem@lemmy.world 4 points 6 months ago

the rocket self-destructs when it detects errors in its flight path, speed or control system that could cause a crash that endangers people on the ground

More AI hallucinations?

[-] ptz@dubvee.org 13 points 6 months ago
[-] AbouBenAdhem@lemmy.world 5 points 6 months ago* (last edited 6 months ago)

Right—the way they’re describing it sounds like they replaced a human range safety officer with an autonomous AI.

[-] Wrench@lemmy.world 9 points 6 months ago

I think you're misusing the term "AI".

This would just be presets that trigger if sensors detect problems; if enough of them trip, an automated response destroys the craft.

That is in no way artificial intelligence. Just automated safety features.

Just like your car deploys an airbag if its sensors detect a collision.

[-] AbouBenAdhem@lemmy.world -2 points 6 months ago* (last edited 6 months ago)

Except for this line: “The rocket terminated the flight after judging that the achievement of its mission would be difficult”.

Either the company president being quoted or the translator seems to be implying that the system is modeling the outcome of the whole mission, not just checking if sensor readings exceed some preset threshold. They’re trying to portray it as an AI-like decision, whether that’s really the case or not.

[-] Wrench@lemmy.world 13 points 6 months ago

It's going to be a combination of red flags that an algorithm weighs, and it triggers the self destruct if a threshold is exceeded. Probably even gives HQ a short window to override it (if comms are working).

It's not going to have a built in "AI" making "intelligent" decisions in a dynamic way. That would be extremely dangerous/unreliable, as well as require a shit ton of processing power.

Stop buying into the AI bullshit. Algorithms != AI
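The "weighted red flags against a threshold" logic described above could be sketched roughly like this. Everything here — the signal names, weights, and threshold — is invented for illustration, not taken from any published flight termination spec:

```python
# Toy sketch of a rule-based flight termination check.
# All signal names, weights, and the threshold are hypothetical.

RED_FLAGS = {
    "off_nominal_trajectory": 0.5,
    "thrust_below_expected":  0.3,
    "attitude_rate_exceeded": 0.4,
    "loss_of_guidance_data":  0.6,
}

TERMINATE_THRESHOLD = 1.0  # combined weight that triggers termination

def should_terminate(active_flags):
    """Sum the weights of the currently active red flags and compare
    against a fixed threshold. No learning, no inference: just presets."""
    score = sum(RED_FLAGS[flag] for flag in active_flags)
    return score >= TERMINATE_THRESHOLD

# One anomaly alone isn't enough; two simultaneous ones cross the line.
print(should_terminate({"thrust_below_expected"}))                              # False
print(should_terminate({"off_nominal_trajectory", "loss_of_guidance_data"}))    # True
```

The point of the sketch is that every path through this code can be exhaustively tested, which is exactly why it's automation rather than "AI."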

[-] idiomaddict@feddit.de 0 points 6 months ago

It’s not buying into AI bullshit to infer some processing and assessment from something said to have decided something. Decisions involve consideration; they’re not like instincts.

It seems like the person saying that misspoke.

[-] MartianSands@sh.itjust.works 3 points 6 months ago

They didn't misspeak, they anthropomorphised. People do that all the time, and calling it an error is pedantic to the point of being incorrect.

Also, that statement was probably in Japanese. You can't read that kind of implication from it, even if it would have been correct to do so in English (which it wouldn't)

[-] idiomaddict@feddit.de -1 points 6 months ago

If it wasn’t misspeaking, then it’s misleadingly inaccurate; calling it a mistake was the charitable reading (though you’re right, the issue could well rest in translation).

[-] jimbolauski@lemm.ee 2 points 6 months ago

They will not put AI on flight-critical pieces for planes. It's impossible to fully test and verify that the software will behave in a predictable fashion. Instead, the AI is used in a layer outside the critical path, and its decisions are vetted by flight-critical pieces.

Destroying the rocket was done after flight critical software calculated the probability of failure as too high.

if (notGoingToMakeIt()) goBoom();
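Expanding the joke slightly: the architecture described above — an advisory estimator whose output is vetted by simple, deterministic flight-critical code — might look like this sketch. All function names, fields, and thresholds are made up:

```python
# Hypothetical sketch: an advisory layer feeds a deterministic,
# fully testable decision layer. Names and numbers are invented.

def advisory_failure_probability(telemetry):
    """Non-critical layer: estimate failure probability. This could be
    anything, even a learned model — its output is advice, not a command."""
    drift = abs(telemetry["trajectory_error_deg"]) / 10.0
    return min(1.0, drift)

def flight_critical_vet(telemetry, advised_p):
    """Critical path: simple rules that gate the advice. These can be
    exhaustively verified, unlike the advisory layer."""
    hard_abort = telemetry["trajectory_error_deg"] > 15.0  # independent hard limit
    return hard_abort or (advised_p > 0.9 and telemetry["thrust_ok"] is False)

telemetry = {"trajectory_error_deg": 18.0, "thrust_ok": True}
p = advisory_failure_probability(telemetry)
if flight_critical_vet(telemetry, p):
    print("terminate flight")  # goBoom()
```

Note that the hard limit fires regardless of what the advisory layer says — the critical path never has to trust it.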

[-] ptz@dubvee.org 9 points 6 months ago* (last edited 6 months ago)

I'm not clear on this particular one, but I believe there can be both. Onboard sensors can initiate flight termination as well as a ground-based range safety officer.

Edit: Yeah. Looks like there have been autonomous methods since about 1998.

An autonomous flight termination system (AFTS) or autonomous flight safety system (AFSS) is a system in which the rocket's flight computer can command flight termination without depending on ground personnel.

[-] Wrench@lemmy.world 6 points 6 months ago

Which in itself is a failsafe in case communication is disrupted. These are good things.

[-] Kbobabob@lemmy.world 2 points 6 months ago

There's a difference between AI and an algorithm. I highly doubt this was AI.

[-] fiat_lux@kbin.social 3 points 6 months ago

Sounds like we had the same programmers. I feel you, Kairos.

this post was submitted on 13 Mar 2024
71 points (97.3% liked)
