
Insider report details clash over one board member's criticism in an academic paper.

Kyle Orland - 12/5/2023, 9:31 PM

top 7 comments
[-] autotldr@lemmings.world 5 points 9 months ago

This is the best summary I could come up with:


Toner, who serves as director of strategy and foundational research grants at Georgetown University’s Center for Security and Emerging Technology, allegedly drew Altman's negative attention by co-writing a paper on different ways AI companies can "signal" their commitment to safety through "costly" words and actions.

In the paper, Toner contrasts OpenAI's public launch of ChatGPT last year with Anthropic's "deliberate deci[sion] not to productize its technology in order to avoid stoking the flames of AI hype."

She also wrote that, "by delaying the release of [Anthropic chatbot] Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur."

At the same time, Duhigg's piece also gives some credence to the idea that the OpenAI board felt it needed to be able to hold Altman "accountable" in order to fulfill its mission to "make sure AI benefits all of humanity," as one unnamed source put it.

"It's hard to say if the board members were more terrified of sentient computers or of Altman going rogue," Duhigg writes.

The piece also offers a behind-the-scenes view into Microsoft's three-pronged response to the OpenAI drama and the ways the Redmond-based tech giant reportedly found the board's moves "mind-bogglingly stupid."


The original article contains 414 words, the summary contains 215 words. Saved 48%. I'm a bot and I'm open source!

why repost the same article with the exact same title?

https://sopuli.xyz/post/6648766

[-] topinambour_rex@lemmy.world 5 points 9 months ago

Welcome to lemmy.

[-] Decoy321@lemmy.world 2 points 9 months ago

Because it's an entirely different instance. That's on sopuli, this is on Lemmy.world.

[-] BearOfaTime@lemm.ee -3 points 9 months ago

OK, taking Toner's approach, no one would ever release an AI, because there isn't already one out there.

Don't blame Altman for "lashing out".

What a stupid take, and the board is idiotic for going along with it.

[-] ConstableJelly@kbin.social 7 points 9 months ago* (last edited 9 months ago)

I...don't think that's what the referenced paper was saying. First of all, Toner didn't co-author the paper from her position as an OpenAI board member, but as a CSET director. Secondly, the paper didn't intend to prescribe behaviors to private sector tech companies, but rather investigate "[how policymakers can] credibly reveal and assess intentions in the field of artificial intelligence" by exploring "costly signals...as a policy lever."

The full quote:

By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur. Anthropic achieved this goal by leveraging installment costs, or fixed costs that cannot be offset over time. In the framework of this study, Anthropic enhanced the credibility of its commitments to AI safety by holding its model back from early release and absorbing potential future revenue losses. The motivation in this case was not to recoup those losses by gaining a wider market share, but rather to promote industry norms and contribute to shared expectations around responsible AI development and deployment.

Anthropic is being used here as an example of "private sector signaling," which could theoretically manifest in countless ways. Nothing in the text indicates that OpenAI should have behaved in exactly the same way; rather, the example is held up as a successful contrast to OpenAI's allegedly failed use of the GPT-4 system card as a signal of its commitment to safety.

To more fully understand how private sector actors can send costly signals, it is worth considering two examples of leading AI companies going beyond public statements to signal their commitment to develop AI responsibly: OpenAI’s publication of a “system card” alongside the launch of its GPT-4 model, and Anthropic’s decision to delay the release of its chatbot, Claude.

Honestly, the paper seems really interesting to an AI layman like me, and a critically important subject to explore: empowering policymakers to make informed determinations about regulating a technology that almost everyone except the subject-matter experts themselves will *not* fully understand.

[-] steakmeout@lemmy.world 3 points 9 months ago

Take your deliberate ignorance to reddit.

this post was submitted on 06 Dec 2023
58 points (85.4% liked)

Technology
