[-] Grimy@lemmy.world 4 points 1 day ago* (last edited 1 day ago)

A computer lacks human emotions, more at 6

[-] Grimy@lemmy.world 19 points 2 days ago

Set up the system for them and let it do the talking I guess?

They can have both until they realize they don't need both.

[-] Grimy@lemmy.world 17 points 3 days ago

I make sure to always assume it was nepotism and my confidence remains sky high no matter how long I stay unemployed. It just works.

[-] Grimy@lemmy.world 4 points 5 days ago* (last edited 5 days ago)

Having a phone that can turn into an iPad is probably going to get some use.

There is a practical purpose to a touchscreen; the fact that short-sighted people couldn't see its usefulness is my actual point.

[-] Grimy@lemmy.world 5 points 5 days ago

Someone said the same about touchscreens when you were still a kid. If you don't want it, don't buy it.

[-] Grimy@lemmy.world 4 points 5 days ago

What if Google was God.

[-] Grimy@lemmy.world 29 points 5 days ago

Someone was probably mean to them but it wasn't her. She is also not the first to be attacked.

She described finding herself suddenly surrounded by the pack after they “jumped out” from a drain in Perdana Park at about 6am local time.

“At first, I thought it was a cat, but the creature jumped out and bit me while I was running, and there were many of them ... I could not even stand up when it happened,” she told local media.

[-] Grimy@lemmy.world 16 points 6 days ago

"But honey, the car took me to the strip club, it isn't my fault"

[-] Grimy@lemmy.world 8 points 6 days ago

I'd start eating apples too

[-] Grimy@lemmy.world 0 points 1 week ago

I'll be the first to praise a bill that is actually aimed at helping artists. I'm just being realistic: everything being proposed is catered towards data brokers and the big AI players. If the choice is between artists getting screwed, and artists and society getting screwed, I will choose the former.

I understand it needs to happen, but doing the opposite and playing into OpenAI's hands doesn't really help imo.

[-] Grimy@lemmy.world -1 points 1 week ago

No regulation is going to force them to retroactively take their current models offline.

> Public facing doesn't mean open source.

Never said it was, but public-facing means you can scrape it and use it for ML projects. This has already been decided in courts of law. You can't use data with personal information or data that needs an account to access. Peruse Kaggle for a bit; it's all scraped datasets.

> do you have any idea who I am

I literally don't. I'm assuming you are part of the 99.999% of the population that didn't get upset, just like I assume you have arms and legs.

Did you get upset about online translators when that happened?

I'm also assuming you use AI on a weekly basis like practically everyone else.

You can give me a detailed biography and a list of every device, software, and app you use, and I'll stop assuming. It's fine if I'm wrong; point it out. But it feels like I'm assuming correctly, and instead of admitting it, you would rather get offended.

> the open source bit

Paying 20x more than it currently costs to train a model will affect how many models are trained and given away for free.

> public domain works, it most definitely is enough

Not enough to give a usable and competitive product. What's the point of gimping open source so OpenAI can get all that profit? The jobs will still be lost regardless of whether we can run these models on our own computers or a subscription service is the only option.

> Artists and writers already struggle more than your usual workers.

I can empathize; I know it sucks. But regulations won't change any of that. DeviantArt will sell its dataset, the artists won't be compensated, and they will still have a hard time because these tools will still be available.

> And please don't call me "mad"

You commented under my post with a trite catchphrase. The tone of your comments isn't very nice. I don't know you; I'm going off of how you are saying it, and it's coming off as angry.

[-] Grimy@lemmy.world 0 points 1 week ago

> I couldn't give less of a shit what open ai wants, I'm not fighting for open ai, I'm fighting for all the artists

What you want and what OpenAI wants are the same thing. Regulations directly benefit them by handing them and Google an easy-peasy monopoly. Artists are never getting a dime out of any of this; all the data is already owned by websites and data brokers.

> open ai should be investigated for profiting from data they acquired through the loophole of being non-profit.

This is patently false; there isn't a loophole. Almost all ML projects use public-facing data, and it's accepted and completely legal since it's highly transformative. What do you think translation software or Shazam uses? You probably already use AI multiple times a week. I'm guessing you didn't get mad when all the translators lost their jobs a decade ago.

> What do any of the concerns over the way data acquisition happens have to do with open source?

How can a company actually open-source anything if the costs are so insanely high? It's already above a million dollars in compute for a foundation model; how many open-source projects do you expect if Reddit or Getty gets to tack on another 60 million? Even worse, Microsoft and Google will absolutely pay a premium to keep it out of the hands of their competition. And no, there is simply not enough data in the public domain, and most of it is shit tbh.

You are missing the forest for the trees, and this is by design. There's a reason you are bombarded every day by "AI bad" articles: it's to keep you mad about it so you don't actually think about what these regulations mean.

74
submitted 2 months ago* (last edited 2 months ago) by Grimy@lemmy.world to c/technology@lemmy.world

Meta's issue isn't with the still-being-finalized AI Act, but rather with how it can train models using data from European customers while complying with GDPR — the EU's existing data protection law.

  • Meta announced in May that it planned to use publicly available posts from Facebook and Instagram users to train future models. Meta said it sent more than 2 billion notifications to users in the EU, offering a means for opting out, with training set to begin in June.

  • Meta says it briefed EU regulators months in advance of that public announcement and received only minimal feedback, which it says it addressed.

  • In June — after announcing its plans publicly — Meta was ordered to pause the training on EU data. A couple weeks later it received dozens of questions from data privacy regulators from across the region.

559
submitted 2 months ago* (last edited 2 months ago) by Grimy@lemmy.world to c/technology@lemmy.world

A bipartisan group of senators introduced a new bill to make it easier to authenticate and detect artificial intelligence-generated content and protect journalists and artists from having their work gobbled up by AI models without their permission.

The Content Origin Protection and Integrity from Edited and Deepfaked Media Act (COPIED Act) would direct the National Institute of Standards and Technology (NIST) to create standards and guidelines that help prove the origin of content and detect synthetic content, like through watermarking. It also directs the agency to create security measures to prevent tampering and requires AI tools for creative or journalistic content to let users attach information about their origin and prohibit that information from being removed. Under the bill, such content also could not be used to train AI models.

Content owners, including broadcasters, artists, and newspapers, could sue companies they believe used their materials without permission or tampered with authentication markers. State attorneys general and the Federal Trade Commission could also enforce the bill, which its backers say prohibits anyone from “removing, disabling, or tampering with content provenance information” outside of an exception for some security research purposes.

(A copy of the bill is in the article; here is the important part imo:

Prohibits the use of "covered content" (digital representations of copyrighted works) with content provenance to either train an AI-/algorithm-based system or create synthetic content without the express, informed consent and adherence to the terms of use of such content, including compensation.)
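For illustration only, and not language from the bill: a toy sketch of what signed, tamper-evident provenance metadata could look like. The key, field names, and functions here are all made up; a real standard like the one NIST would draft would use public-key signatures and a formal metadata format.

```python
import hashlib
import hmac
import json

# Hypothetical signing key held by the content owner (illustration only).
SECRET_KEY = b"publisher-signing-key"


def attach_provenance(content: bytes, origin: str) -> dict:
    """Bundle a content hash with origin info and sign it so tampering is detectable."""
    record = {
        "origin": origin,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record


def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True only if both the content and its provenance record are intact."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, record.get("signature", ""))
        and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest()
    )


article = b"Original article text"
prov = attach_provenance(article, origin="example-newspaper.com")
print(verify_provenance(article, prov))         # True: content untouched
print(verify_provenance(b"edited text", prov))  # False: content was altered
```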


Grimy

joined 1 year ago