92 points · submitted 05 Jun 2024 by floofloof@lemmy.ca to c/technology@beehaw.org
aStonedSanta@lemm.ee 10 points 3 months ago

And these current LLMs aren’t just gonna find sentience for themselves. Sure, they’ll pass a Turing test, but they aren’t alive lol

knokelmaat@beehaw.org 14 points 3 months ago

I think the issue is not whether it's sentient, it's how much agency you give it to control stuff.

Even before the AI craze this was an issue. Imagine you were to create an automatic turret that kills living beings on sight: you'd have to build in a kill switch, or you yourself wouldn't be able to get close enough to turn it off without getting shot.
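
In software terms, that kill-switch requirement usually ends up as a dead-man's switch: the autonomous loop only keeps acting while it can see a fresh, out-of-band "permission to operate" signal, so an operator can stop it without ever getting near it. Here's a rough Python sketch of the pattern; the heartbeat file path and timeout are made up for illustration, not any real system:

```python
import time
from pathlib import Path

# Hypothetical out-of-band control channel: a separate operator process
# touches this file periodically. Deleting it, or just walking away and
# letting it go stale, halts the autonomous loop.
HEARTBEAT = Path("/tmp/operator_heartbeat")  # illustrative path
TIMEOUT_S = 5.0  # how stale the signal may get before we stop

def operator_permits_operation() -> bool:
    """True only if the operator's signal exists and is recent."""
    try:
        age = time.time() - HEARTBEAT.stat().st_mtime
    except FileNotFoundError:
        return False
    return age < TIMEOUT_S

def autonomous_loop() -> None:
    while True:
        # Fail safe: "stopped" is the default state. Acting requires
        # positive, recent permission, rather than the absence of a
        # stop command the system might never receive.
        if not operator_permits_operation():
            print("No fresh operator signal; shutting down.")
            return
        # ... one bounded step of autonomous work would go here ...
        time.sleep(0.5)

if __name__ == "__main__":
    autonomous_loop()
```

The design point is that continuing requires an active external signal while stopping requires nothing at all, which is the opposite of the turret you can't safely walk up to.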

The scary part is that the more complex and adaptive these systems become, the more difficult it can be to stop them once they are in autonomous mode. I think large language models are just another step in that complexity.

An atomic bomb doesn't pass a Turing test, but it's a fucking scary thing nonetheless.
