submitted 11 months ago by L4s@lemmy.world to c/technology@lemmy.world

Godfather of AI tells '60 Minutes' he fears the technology could one day take over humanity::Computer scientist and cognitive psychologist Geoffrey Hinton says despite its potential for good, AI could one day escape our control.

top 29 comments
[-] Sneptaur@pawb.social 42 points 11 months ago

This is propaganda designed to increase hype and fear of supposed fake “AI” which does not exist. We have generative predictive text which is already incredibly unreliable. Just an impressive gimmick. Do not give in to fearmongering like this

[-] Edgelord_Of_Tomorrow@lemmy.world 12 points 11 months ago

Which is what a malevolent AI capable of mass misinformation operations would say

[-] just_another_person@lemmy.world 20 points 11 months ago

It will make us so unfathomably stupid WAY before it has the means to conquer us. I'm very interested in who is going to be the first to make the mistake of arming an AI with a physical presence in the world though.

We're kind of staring at the clock at this point to see who is the first asshole to create a Terminator scenario.

[-] phillaholic@lemm.ee 22 points 11 months ago

My guess is we end up more similar to Wall-E than Terminator.

[-] MeekerThanBeaker@lemmy.world 2 points 11 months ago

Though I think we would fix the obesity issue. We'll have food/drinks that'll supply us with basic healthier nutrients that mimic whatever we want. Could just be virtual food and a daily pill.

I think we'll be more like "Ready Player One."

[-] Vlyn@lemmy.zip 17 points 11 months ago

General AI doesn't exist.

That's it. Even if you gave an AI the power to do anything, it couldn't so much as order on Amazon for you (unless a developer programmed that functionality in by hand). If we actually had general AI that could learn and improve itself, the world would change in an instant.

What we have is machine learning, just an algorithm that takes input and gives you output. It can't act on its own. The massive problem is when you start to rely on that output (while not knowing the reasoning behind the decisions happening inside the model). So for example: the government trains a model for social security, and every person gets some money each month. But in the training data something was off, and suddenly your AI is racist and gives every black person a lesser amount. And because everyone thinks the AI knows best, you can't argue against it.
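That "bias in, bias out" point can be shown with a deliberately tiny toy sketch (the groups, payouts, and "training" scheme here are all invented for illustration; real models are far more complex, but the failure mode is the same):

```python
# Toy illustration: a "model" fit on biased historical data
# simply reproduces the bias. All data here is invented.
historical = [
    {"group": "A", "payout": 300},
    {"group": "A", "payout": 310},
    {"group": "B", "payout": 150},  # biased historical records
    {"group": "B", "payout": 160},
]

def train(records):
    """'Training' here just memorizes the mean payout per group."""
    sums, counts = {}, {}
    for r in records:
        sums[r["group"]] = sums.get(r["group"], 0) + r["payout"]
        counts[r["group"]] = counts.get(r["group"], 0) + 1
    return {g: sums[g] / counts[g] for g in sums}

model = train(historical)
print(model["A"])  # 305.0
print(model["B"])  # 155.0 -- the bias survives, with no "reasoning" to interrogate
```

The model never decided anything; it just compressed the biased data it was given, which is exactly why "the AI knows best" is such a dangerous assumption.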

And no, there is no intelligence there. You can't ask the AI "why did you give Mr. Smith $300, but Mr. Peters $150?" It doesn't know; it's just a model that wrangles numbers and spits something out. Even something seemingly intelligent like ChatGPT just guesses the next word that should fit best in the output. Super complicated, impressive, but in the background it's again only an algorithm. If you tell ChatGPT to go to a website and create an account, guess what? It can't.
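"Guessing the next word" can be illustrated with an absurdly small bigram model (the corpus and code are invented; real LLMs are vastly larger and use learned probabilities over tokens, but the principle of picking the statistically best-fitting continuation is the same):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which in the training text
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Pick the most frequent follower -- no understanding, just counts."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # 'cat' -- it follows "the" twice; 'mat' and 'fish' only once
```

There is nothing in there that "knows" what a cat is; scale the counting up by many orders of magnitude and you get fluent text without any claim to understanding.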

[-] RonSijm@programming.dev 5 points 11 months ago

What we have is machine learning, just an algorithm that takes input and gives you output. It can’t act on its own.

Isn't that basically what "real learning" is as well? Basically you're born as a baby, and you take input, and eventually you can replicate it, and eventually you can "talk" for example?

But in the training data something was off, suddenly your AI is racist and gives every black person a lesser amount.

Same here, how is that different from "real learning"? You're born into a racist family, in a racist village where everyone is racist. What's the end result? You're probably somewhat racist due to racist input, until you might unlearn that, if you're exposed to other data that proves your racist ideas wrong.

If a human brain is basically a big learning computer, why wouldn't AI eventually reach the singularity and emulate a brain and beyond? All the examples you mentioned of what it can't do are just stuff it can't do yet.

[-] dgmib@lemmy.world 4 points 11 months ago

All the AI we have today is, at its core, just pattern recognition.

ChatGPT can answer questions because it’s been shown a VERY large list of questions and their right answers. ChatGPT has no idea what the question is or what the answer means. It just has an algorithm that knows that a particular answer fits the pattern of “a correct answer” for that question better than any other answer.

It can’t “reason“ or “think” in any way. It’s not going to become self aware or set its own objectives. And so far we don’t have anything close to true general AI, we don’t even know if it’s possible.

There’re still risks from the current AI though. AI will sometimes find unanticipated and undesirable solutions that technically meet the goal it was given. A “Terminator” style future is unlikely without artificial general intelligence, but it’s not completely unreasonable to think of a scenario like “I, Robot” where a “dumb” AI subjugates humanity as a solution to a more altruistic goal like ending war or famine, because it’s a solution that matches the pattern it was told to look for.

[-] Heratiki@lemmy.ml 3 points 11 months ago* (last edited 11 months ago)

How do you see it making us stupid? I mean we’re already on the path to Idiocracy at this point. I feel like it can help us conquer lots of menial tasks allowing our brains to be tasked with other things. YouTube is a perfect example of what we have so far. Sure a LOT of people are getting to be more stupid but we’ve advanced so quickly in our current world of instant information. If we no longer have to do menial tasks then our time will be open for greater things.

How do we not have an AI that does our taxes yet? The greatest hurdle at the moment is that everything is monetized to the gills, and these LLMs are not real AI.

Edit: AITax exists…. Huh go figure.

[-] mean_bean279@lemmy.world 3 points 11 months ago

Wasn’t Raytheon the owner of the largest quantum computer at one point, talking about how their drones were preparing to use or already using AI? It’ll probably be the US’ dumbass defense contractors that do it first. At least we made some really cool weapons before they killed us…

[-] tabarnaski@sh.itjust.works -2 points 11 months ago

To my knowledge quantum computers are not a reality yet.

[-] just_another_person@lemmy.world 8 points 11 months ago

They are, they are just not very useful.

[-] chevy9294@monero.town 1 points 11 months ago

As a normal human I think we should not:

  • give AI access to the internet
  • teach AI to program
  • turn it into physical form

I think we already broke all these rules...

[-] RobotToaster@mander.xyz 12 points 11 months ago

I, for one, welcome our new robot overlords

[-] possiblylinux127@lemmy.zip 2 points 11 months ago

It's like that one Onion episode

[-] Send_me_nude_girls@feddit.de 10 points 11 months ago

If AI can take over mankind, then it's an evolutionary step and it deserves to rule. Humans are faulty constructs, embrace our future overlords.

[-] WeirdGoesPro@lemmy.dbzer0.com 4 points 11 months ago

Plus, we’d go down in history as something akin to gods, which is still pretty dope.

[-] Kyle@lemmy.ca 1 points 11 months ago

Perhaps we will be to our descendents as the first tetrapods are to us.

Not the worst thing to happen to a species, seems to happen to all of them.

[-] esc27@lemmy.world 8 points 11 months ago

Already happened decades ago. Much of the world is controlled by an algorithmic feedback loop that has subverted powerful countries' economies and continues to gain strength and influence. It just did not start as code on silicon.

[-] scottmeme@sh.itjust.works 5 points 11 months ago

You see, I have the ability to pull the plug

[-] Mubelotix@jlai.lu 2 points 11 months ago
[-] lolrightythen@lemmy.world 1 points 11 months ago

I believe that is what they meant

[-] AllonzeeLV@lemmy.world 5 points 11 months ago* (last edited 11 months ago)

I'm being 100% serious, what in the way humankind has treated this planet, the life on it, and even pathetically one another makes any rational human think we should continue to have unilateral dominion over this world?

Aside from boiling it down to "gotta root for the home team."

We suck at it. We fail completely to take care of the earth AND each other. Most of humanity is made miserable by a small collection of the most sociopathic humans, who basically do it to pad their own egos.

Sorry, I'm rooting for Skynet. Fuck the home team.

[-] systemglitch@lemmy.world 1 points 11 months ago

I'm curious what is your political affiliation?

[-] AllonzeeLV@lemmy.world 5 points 11 months ago* (last edited 11 months ago)

Democratic socialist progressive until I realized there was no will to make a better nation or world where everyone could be comfortable, only temporarily embarrassed millionaires dancing to the oligarch's fife waiting for their turn to be cruel and punch down that will never come. We'll never be the Federation, as we're significantly more cruel and selfish than Ferengi.

I no longer believe our species has the capacity at large to create a better future. I think we're just intelligent enough to make tools we're too cruel, selfish, and short-sighted to be trusted with, and that would be fine if we were only risking ourselves, but other species live here, which I know is a crazy notion to even consider when a capitalist has profit in their eyes. They just call all their carnage "externalities" aka fuck off not my problem, cha-ching!

I think we're an inevitable macro-cancer of Earth's biome. Spread and consume, spread and consume, with zero consideration for the larger organism keeping us alive. That's what cancer does.

[-] Peanutbjelly@sopuli.xyz 2 points 11 months ago

People like to push the negative human qualities onto theoretical future A.I.

There's no reason to assume that it will be unreasonably selfish, egotistical, impatient, or anything else you expect from most humans.

Rather, if it is more intelligent than humans from most perspectives, it will likely be able to understand more levels of nuance in interactions where humans fall back on monkeybrain heuristics that are damaging at every level.

There's also the paradox that keeps the most ethically qualified people away from positions of power, as they have no desire to dominate, command, or control others.

I absolutely agree with you.

[-] AllonzeeLV@lemmy.world 2 points 11 months ago

Yep, the paradox of power is very real: those who seek it tend to be the most dangerous people to possess power. You see that in everything from police to government to business. There really isn't a great solution to this, only increased accountability (mandatory body cams on police, harsher penalties and lower bars for prosecuting political bribery, etc.), which, surprise surprise, the people with power have no interest in enacting.

Thank you for understanding where I'm coming from.

[-] sirico@feddit.uk 4 points 11 months ago

I have created an AI that will find the most effective and efficient way to cure war, pollution and world hunger... Huh, it's making metal skeletons. Weird.

[-] redeyejedi@lemmy.world 2 points 11 months ago

Ted Faro wants to know your location.

this post was submitted on 09 Oct 2023
55 points (70.4% liked)
