submitted 1 year ago by L4s@lemmy.world to c/technology@lemmy.world

Tech experts are starting to doubt that ChatGPT and A.I. ‘hallucinations’ will ever go away: ‘This isn’t fixable’::Experts are starting to doubt it, and even OpenAI CEO Sam Altman is a bit stumped.

[-] nxfsi@lemmy.world 83 points 1 year ago

"AI" models are just advanced versions of the next-word suggestion function on your smartphone keyboard, and people expect coherent outputs from them smh
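To make the "next word function" comparison concrete, here is a toy bigram predictor of the kind a keyboard might use: it counts which word tends to follow which in some training text, then suggests the most frequent follower. This is a deliberately minimal illustration, not how a real LLM is built (LLMs learn far richer statistics over tokens), and all names here are made up:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count, for each word in the corpus, which words follow it and how often."""
    words = text.lower().split()
    follows = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Suggest the most frequent follower, like a keyboard's middle suggestion."""
    counter = follows.get(word.lower())
    return counter.most_common(1)[0][0] if counter else None
```

An LLM differs in scale and mechanism, but the training objective is the same in spirit: predict the next token given what came before.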

[-] 1bluepixel@lemmy.world 24 points 1 year ago

Seriously. People like to project forward based on how quickly this technological breakthrough came on the scene, but they don't realize that, barring a few tweaks and improvements here and there, this is it for LLMs. It's the limit of the technology.

That's not to say AI can't improve further, and I'm sure that when it does, it will skillfully integrate LLMs. I also think artists are right to worry about the impact of AI on their fields. But it's a total misunderstanding of the technology to think the current generation will soon become flawless. I'm willing to bet we're currently seeing it at 95% of its ultimate capacity, and that we don't need to worry about AI writing a Hollywood blockbuster any time soon.

In other words, the next step of evolution in the field of AI will require a revolution, not further improvements to existing systems.

[-] postmateDumbass@lemmy.world 7 points 1 year ago

I’m willing to bet we’re currently seeing it at 95% of its ultimate capacity

For free? On the internet?

After a year or two of going live?

[-] tweeks@feddit.nl 2 points 1 year ago

It depends on what you'd call a revolution. Imagine multiple instances working together: orchestrating tasks, having several other instances evaluate progress and flag possible hallucinations, and connecting to services such as Wolfram Alpha for accuracy.

I think the whole orchestration network of instances could functionally surpass us soon in a lot of things if they work together.

But I'd call that evolution. Revolution would indeed be a different technique that we can probably not imagine right now.
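The orchestration idea above can be sketched as a simple loop: one "worker" instance drafts an answer, several "reviewer" instances vote on whether it looks grounded, and the draft is accepted only on a majority vote. The functions `ask_model` and `looks_grounded` below are placeholders for real LLM API calls (or a lookup against a service like Wolfram Alpha); this is a structural sketch, not a working agent system:

```python
from typing import Optional

def ask_model(prompt: str) -> str:
    # Placeholder: a real system would call an LLM API here.
    return f"draft answer for: {prompt}"

def looks_grounded(draft: str, reviewer_id: int) -> bool:
    # Placeholder: a real reviewer instance would be prompted to verify
    # the draft, possibly cross-checking a source like Wolfram Alpha.
    return "draft answer" in draft

def orchestrate(prompt: str, n_reviewers: int = 3, max_rounds: int = 2) -> Optional[str]:
    """Draft, review, and retry until a majority of reviewers accept."""
    for _ in range(max_rounds):
        draft = ask_model(prompt)
        votes = sum(looks_grounded(draft, i) for i in range(n_reviewers))
        if votes > n_reviewers // 2:  # majority accepts the draft
            return draft
    return None  # no draft survived review
```

Whether this counts as evolution or revolution, the control flow itself is ordinary software; the open question is whether reviewer instances can reliably catch each other's hallucinations.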

[-] persolb@lemmy.ml 13 points 1 year ago

It is possible to get coherent output from them though. I’ve been using the ChatGPT API to successfully write ~20 page proposals. Basically give it a prior proposal, the new scope of work, and a paragraph with other info it should incorporate. It then goes through a section at a time.

The numbers and graphics need to be put in after… but the result is better than I’d get from my interns.

I’ve also been using it (Google Bard, mostly, actually) to successfully solve coding problems.

I either need to increase the credit I give LLMs or admit that interns are mostly just LLMs.

[-] WoahWoah@lemmy.world 1 points 1 year ago

Are you using your own application to utilize the API or something already out there? Just curious about your process for uploading and getting the output. I've used it for similar documents, but I've been using the website interface which is clunky.

[-] persolb@lemmy.ml 2 points 1 year ago

Just hacked together python scripts.

pip install openai
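A rough sketch of what such a script might look like, based on the workflow described above (prior proposal + new scope + extra info, generated one section at a time). This assumes the `openai` Python package and an `OPENAI_API_KEY` in the environment; the section names, model name, and prompt wording are all made up for illustration:

```python
SECTIONS = ["Introduction", "Scope of Work", "Approach", "Schedule"]

def build_prompt(section, prior_proposal, new_scope, extra_info):
    """Assemble the per-section prompt (pure function, no API needed)."""
    return (
        f"You are drafting the '{section}' section of a proposal.\n"
        f"Prior proposal for reference:\n{prior_proposal}\n\n"
        f"New scope of work:\n{new_scope}\n\n"
        f"Other info to incorporate:\n{extra_info}\n"
    )

def draft_proposal(prior_proposal, new_scope, extra_info):
    """Generate each section in turn and collect the drafts."""
    # Import lazily so the prompt-building code works without the package.
    from openai import OpenAI
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    drafts = {}
    for section in SECTIONS:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{
                "role": "user",
                "content": build_prompt(section, prior_proposal,
                                        new_scope, extra_info),
            }],
        )
        drafts[section] = resp.choices[0].message.content
    return drafts
```

As the commenter notes, numbers and graphics still have to be inserted by hand afterward; the script only produces the prose.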

[-] WoahWoah@lemmy.world 0 points 1 year ago

Just FYI, I dinked around with the available plugins, and you can do something similar. But, even easier is just to enable "code interpreter" in the beta options. Then you can upload and have it scan documents and return similar results to what we are talking about here.

[-] PrinzMegahertz@lemmy.world 1 points 1 year ago

I recently asked it a very specific domain architecture question about whether a certain application would fit a certain business need, and the answer was very good, showing a solid understanding of architecture, my domain, and the application.

[-] tryptaminev@feddit.de 13 points 1 year ago

It is just that everyone now refers to LLMs when talking about AI, even though the field has so many different aspects to it. Maybe at some point there will be an AI that actually understands the concepts and meanings of things. But that is not learned by unsupervised web crawling.

[-] kromem@lemmy.world 6 points 1 year ago

So is your brain.

Relative complexity matters a lot, even if the underlying mechanisms are similar.

[-] FlyingSquid@lemmy.world 4 points 1 year ago

In the 1980s, Racter was released, and it was only slightly less impressive than current LLMs, mainly because it didn't have an Internet's worth of data to train on. It could still write things like:

Bill sings to Sarah. Sarah sings to Bill. Perhaps they will do other dangerous things together. They may eat lamb or stroke each other. They may chant of their difficulties and their happiness. They have love but they also have typewriters. That is interesting.

If anything, at least that's more entertaining than what modern LLMs can output.

this post was submitted on 02 Aug 2023
356 points (94.1% liked)