submitted 9 months ago* (last edited 9 months ago) by throws_lemy@lemmy.nz to c/technology@lemmy.world

TikTok's parent company, ByteDance, has been secretly using OpenAI's technology to develop its own competing large language model (LLM). "This practice is generally considered a faux pas in the AI world," writes The Verge's Alex Heath. "It's also in direct violation of OpenAI's terms of service, which state that its model output can't be used 'to develop any artificial intelligence models that compete with our products and services.'"

[-] TootSweet@lemmy.world 135 points 9 months ago

OpenAI will steal a whole internet's worth of everybody's data to train their large language model, but they get pissed when others do the same to them.

[-] Chozo@kbin.social 15 points 9 months ago* (last edited 9 months ago)
[-] LWD@lemm.ee 0 points 9 months ago* (last edited 8 months ago)
[-] FaceDeer@kbin.social 9 points 9 months ago

No, even then it isn't. It's not stealing. There is literally a whole different body of law defining stealing versus the body of law that defines copyright and intellectual property. The data is still exactly where it was to begin with, therefore it hasn't been stolen.

I wish people would stop using wildly inaccurate loaded terminology in these discussions simply to score emotional points.

[-] LWD@lemm.ee -2 points 9 months ago* (last edited 8 months ago)
[-] crazyCat@sh.itjust.works 5 points 9 months ago

Their take on it, via Sam Altman, is that the AI is reading and learning from the internet and we can't fault them for that, right? You don't fault a human for using what they've learned, do you? That's the rationale, anyway… I don't know what I think about it, though.

[-] Hacksaw@lemmy.ca 1 points 9 months ago

It's not a PERSON. The only person involved is literally copying the internet and duct-taping it together to form ChatGPT. Then they say "the AI is reading and learning like any human would". No brother, the AI IS MADE FROM a copy of all the stolen words. Before the theft, there is no AI that you can put the words into and have it learn. It's just a matrix filled with trillions of zeroes. It's only an AI AFTER you build it from the stolen data.

[-] cmnybo@discuss.tchncs.de 70 points 9 months ago

Training one AI with the output of another AI will just make an even crappier AI.

[-] FaceDeer@kbin.social 35 points 9 months ago* (last edited 9 months ago)

Ever since that paper about "model decay", this has been a common talking point, and it's greatly misunderstood. Yes, if you just cycle content through AI training over and over through successive generations, you get AIs that lose "fidelity." But that's not what any actual real-world training regimen using synthetic data does. The helper AI is usually used to process input data. For example, if you're training an AI to respond in a chat-like format, you could take raw non-conversational text (like a book) and have the helper AI create a conversation about that content for the new AI to learn from. Or, to take a real-world example, DALL-E 3 was trained by having a helper AI look at pictures and create detailed text descriptions of them to use as the captions associated with the images during training.

OpenAI has put these restrictions in its TOS as a way of trying to "pull up the ladder behind them", preventing rivals from trying to build AIs as good as the ones they have already. Fortunately it's not going to work. There are already open LLMs that can be used as "helpers" without needing OpenAI at all. ByteDance was likely just being lazy here.
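
A minimal sketch of that helper-AI pattern, assuming an OpenAI-style chat client (the model name, prompt, and function here are illustrative placeholders; any open instruction-following model could stand in instead):

```python
# Sketch: use a helper model to turn raw, non-conversational text into
# synthetic Q&A dialogues that a new model can then be trained on.
from openai import OpenAI  # or any local/open model exposing a chat interface

client = OpenAI()

def make_synthetic_dialogue(passage: str) -> str:
    """Ask the helper model to rewrite a passage as a short user/assistant exchange."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; an open model avoids the ToS issue entirely
        messages=[
            {"role": "system", "content": "Turn the passage into a short Q&A dialogue."},
            {"role": "user", "content": passage},
        ],
    )
    return response.choices[0].message.content

# Each (raw passage, generated dialogue) pair becomes one training example for
# the new model -- different from blindly feeding model output back into training.
```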

[-] ripe_banana@lemmy.world 18 points 9 months ago

There is actually a whole subsection of AI focused on training one model with the output of another called knowledge distillation.
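
A rough sketch of the basic distillation loss, assuming a PyTorch setup (the temperature and scaling follow the standard Hinton-style recipe; names are illustrative):

```python
# Sketch: a student model learns to match the teacher's softened output distribution.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    log_student = F.log_softmax(student_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_targets, reduction="batchmean") * temperature ** 2

# Usage: typically combined with the ordinary hard-label loss on real data, e.g.
# loss = distillation_loss(student(x), teacher(x).detach()) + F.cross_entropy(student(x), y)
```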

[-] altima_neo@lemmy.zip 9 points 9 months ago

Works kinda neat with Stable Diffusion tho

[-] ech@lemm.ee 9 points 9 months ago

Depends how it's done. GAN (Generative Adversarial Network) training works with exactly that, having networks train against each other, each improving the other over time.
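
A bare-bones sketch of that adversarial loop, assuming simple PyTorch MLPs as placeholders for the generator and discriminator:

```python
# Sketch: generator G and discriminator D train against each other each step.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    n = real_batch.size(0)
    # Discriminator: push real samples toward 1, generated samples toward 0.
    fake = G(torch.randn(n, latent_dim)).detach()
    d_loss = bce(D(real_batch), torch.ones(n, 1)) + bce(D(fake), torch.zeros(n, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: try to make the discriminator label its samples as real.
    fake = G(torch.randn(n, latent_dim))
    g_loss = bce(D(fake), torch.ones(n, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```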

[-] SeaJ@lemm.ee 8 points 9 months ago* (last edited 9 months ago)

I've watched Multiplicity enough times to know you get a slightly less functional copy.

[-] cybersandwich@lemmy.world 4 points 9 months ago

She touched my peppy, Steve.

[-] FaceDeer@kbin.social -2 points 9 months ago

That wasn't a documentary, and it wasn't about machine learning.

[-] CaptainSpaceman@lemmy.world 7 points 9 months ago

Like photocopying a picture of a turd

[-] LWD@lemm.ee 4 points 9 months ago* (last edited 8 months ago)
[-] ZickZack@fedia.io 2 points 9 months ago

Not necessarily: there have been recent works indicating that the filtering effect of fine-tuned LLMs greatly improves data efficiency (e.g. phi-1). Further, if you have e.g. human selection on top of LLM-generated content, you can get great results, as the LLM generation can be used as a soft curriculum, with the human selection biasing towards higher quality.
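
A toy sketch of that filtering step, where `score_fn` is a placeholder for whatever quality signal is available (a small classifier's "educational value" score in the phi-1 style, or a human rating):

```python
# Sketch: score LLM-generated samples and keep only the top fraction,
# biasing the synthetic training set toward higher quality.
def filter_synthetic_data(samples, score_fn, keep_fraction=0.3):
    """Keep the highest-scoring fraction of generated samples."""
    scored = sorted(samples, key=score_fn, reverse=True)
    cutoff = max(1, int(len(scored) * keep_fraction))
    return scored[:cutoff]
```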

[-] betterdeadthanreddit@lemmy.world 2 points 9 months ago

Sounds like what you'd get if you ordered a ChatGPT off of Wish dot com. Cheap knock-offs that blatantly steal ideas/designs and somewhat work are kinda their thing.

[-] Buttons@programming.dev 22 points 9 months ago

I hope this harms OpenAI in their lawsuits somehow. Their argument of "we can train on the output of others, but nobody can train on our output" has no moral foundation. Pick a lane.

[-] unconsciousvoidling@sh.itjust.works 17 points 9 months ago

Should change its name to ClosedAI.

[-] redcalcium 16 points 9 months ago

A lot of open-source models are actually trained using data from GPT outputs. It's a cheap way to generate huge amounts of training data. The difference is that those models are made by independent researchers, not backed by a huge company for commercial purposes, so OpenAI has probably left them alone.

[-] betterdeadthanreddit@lemmy.world 10 points 9 months ago

Probably an honest mistake. Who hasn't bent down to tie their shoe, lost their balance and accidentally coded up an LLM to steal from an existing product? I'd still trust them to plant listening devices, cameras and keyloggers in my pocket, since they've displayed such a commitment to honesty, integrity and transparency.

[-] filister@lemmy.world -3 points 9 months ago

Yes, and honestly you have also been subject to a lot of propaganda. The US and the US media are vilifying a lot of Chinese companies, while American companies are not much better, if not worse.

[-] webghost0101@sopuli.xyz 2 points 9 months ago

Who hasn't been subject to a lot of propaganda?

This is the misinformation age.

[-] Mahlzeit@feddit.de 6 points 9 months ago

I wonder if that clause is legal. It could be argued that it legitimately protects the capital investment needed to make the model. I'm not sure if that's true, though.

[-] Nick@mander.xyz 1 points 9 months ago

I can't speak for every jurisdiction, but I'd be hard pressed to see why it wouldn't be legal in the US, especially in these circumstances. ByteDance is a massive, legally sophisticated corporation, so they should've been expected to fully read and understand the terms and conditions before accepting them. They probably won't bring a legal challenge, because they know they don't have a particularly strong legal argument or a sympathetic angle to use.

[-] Mahlzeit@feddit.de 3 points 9 months ago
[-] Nick@mander.xyz 1 points 9 months ago

Sorry for the late reply, but this doesn't really seem like it'd come close to invoking any of the US's neutered antitrust enforcement. OpenAI doesn't have a monopoly position to abuse, since there are other large firms offering LLMs that see reasonable amounts of usage. This clause amounts more to an effort to stop reverse engineering than to stifle anyone trying to build an LLM.

[-] Mahlzeit@feddit.de 1 points 9 months ago

I doubt it is clear-cut enough to bring down enforcement in any case. However, that does not mean that the clause is enforceable.

It is easy to circumvent such a ban. Eventually, the only option that MS has is suing. Then what?

[-] Nick@mander.xyz 1 points 9 months ago

Why would the clause be unenforceable? It doesn't violate any of the general principles of contract law. If you intentionally contract around terms that don't violate any existing body of law and don't run counter to the public interest, a court would have no problem enforcing the terms of the contract. They probably wouldn't sue you or me in our individual capacity if we circumvented them. There's a much greater chance of recovery if they go after a company which is pretty clearly using their service in bad faith. If ByteDance wanted to use OpenAI's LLM to train their own, they could've negotiated such a license.
