submitted 10 months ago by yuunikki@lemmy.dbzer0.com to c/asklemmy@lemmy.ml

with the way AI is advancing by the week, it just might be a reality

[-] tacosanonymous@lemm.ee 28 points 10 months ago

I think I’d stick to not judging them, but if it were in place of actual socialization, I’d like to get them help.

I don’t see it as a reality. We don't have AI. We have language learning programs that are hovering around mediocre.

[-] yuunikki@lemmy.dbzer0.com 3 points 10 months ago

what if they were so socially introverted that the AI is all they could handle?

[-] jeffw@lemmy.world 19 points 10 months ago

If you’re that crippled by social anxiety, you need help, not isolation with a robot.

[-] givesomefucks@lemmy.world 4 points 10 months ago

Then get professional help if you can't improve on your own.

Social skills aren't innate and some people take longer than others to get them.

Getting help is a lot less embarrassing than living your whole life without social skills. Maybe that's a shrink, maybe that's a day program for people with autism, maybe it's just hanging out with other introverts. But it'll only get better if you put the effort in. If you don't put effort in, don't be surprised when nothing changes.

[-] cheese_greater@lemmy.world 1 points 10 months ago

I don't see it as any more problematic than falling into a YouTube/Wikipedia/Reddit rabbit hole. As long as you don't really believe it's capital-S Sentient, I don't see an issue. I would prefer people with social difficulties practice on ChatGPT, pay attention to the dialectical back and forth, and take lessons away from that to the real world and their interactions with it.

[-] kot@hexbear.net 1 points 10 months ago

We don't have AI. We have language learning programs that are hovering around mediocre.

That's all that AI is. People just watched too many science fiction movies, and fell for the market-y name. It was always about algorithms and statistics, and not about making sentient computers.

[-] novibe@lemmy.ml 0 points 10 months ago* (last edited 10 months ago)

That is really unscientific. There is a lot of research on LLMs showing they have emergent intelligent features, internal models of the world, etc.

And there is nothing to indicate that what we do is not “transforming” in some way. Our minds might be indistinguishable from what we are building towards with AI currently.

And that will likely make more of us start realising that the brain and the mind are not consciousness. We’ll build intelligences, with minds, but without consciousnesses.

this post was submitted on 06 Nov 2023
43 points (87.7% liked)