sorted by: new top controversial old
[-] danielbln@lemmy.world 4 points 6 months ago

Microsoft's Phi model was largely trained on synthetic data derived from GPT-4.

[-] danielbln@lemmy.world 5 points 6 months ago

If their soft deletes (so instead of actually deleting, it's just a flag on the comment that hides it) then no, it won't make a difference at all.

[-] danielbln@lemmy.world 6 points 6 months ago* (last edited 6 months ago)

Are you asking why stock of a single company is different from "stock" of the richest country and only superpower on earth?

Also, money is liquid, can be spent immediately. Stock is not liquid, it has to be traded, vested, etc. and given enough stock will tank yje value if too much of it is liquified at once.

[-] danielbln@lemmy.world 10 points 6 months ago

I'm German you "hit" a decision.

[-] danielbln@lemmy.world 0 points 6 months ago
[-] danielbln@lemmy.world 0 points 6 months ago

Clearly, because chairs are obviously male (German). Anything else is just silly.

[-] danielbln@lemmy.world 11 points 7 months ago

They run all of Gamepass as well as all of Sony's PS+ on Azure, I think they'll be fine.

[-] danielbln@lemmy.world 6 points 7 months ago

It's so, so, so much better. GenAI is actually useful, crypto is gambling pretending to be a solution in search of a problem.

[-] danielbln@lemmy.world 18 points 7 months ago

In fact, the original script of The Matrix had the machines harvest humans to be used as ultra efficient compute nodes. Executive meddling led to the dumb battery idea .

[-] danielbln@lemmy.world 16 points 7 months ago

Eh, that's not quite true. There is a general alignment tax, meaning aligning the LLM during RLHF lobotomizes it some, but we're talking about usecase specific bots, e.g. for customer support for specific properties/brands/websites. In those cases, locking them down to specific conversations and topics still gives them a lot of leeway, and their understanding of what the user wants and the ways it can respond are still very good.

[-] danielbln@lemmy.world 8 points 7 months ago* (last edited 7 months ago)

Depends on the model/provider. If you're running this in Azure you can use their content filtering which includes jailbreak and prompt exfiltration protection. Otherwise you can strap some heuristics in front or utilize a smaller specialized model that looks at the incoming prompts.

With stronger models like GPT4 that will adhere to every instruction of the system prompt you can harden it pretty well with instructions alone, GPT3.5 not so much.

[-] danielbln@lemmy.world 99 points 7 months ago

I've implemented a few of these and that's about the most lazy implementation possible. That system prompt must be 4 words and a crayon drawing. No jailbreak protection, no conversation alignment, no blocking of conversation atypical requests? Amateur hour, but I bet someone got paid.

5

Workflow: Midjourney for the input images, Pika Labs for the animations, CapCut to tie it together and Elevenlabs for the voices.

view more: next ›

danielbln

joined 1 year ago