251
80
submitted 2 months ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

5,000 AI-Controlled Fake X Accounts Linked to China Disinformation Campaign

Researchers have uncovered a network of at least 5,000 fake X (formerly Twitter) accounts that appear to be controlled by AI in a disinformation campaign linked to China – and the activity appears to be heating up as the U.S. election approaches.

The X disinformation network, dubbed “Green Cicada” by researchers, “primarily engages with divisive U.S. political issues and may plausibly be staged to interfere in the upcoming presidential election.”

The network has also amplified divisive political issues in other democracies, including Australia, India, Japan and countries in western Europe.

The finding is the latest example of attempted interference in the U.S. presidential election, which just this month has seen reports of increasing activity by Iran.

[...]

The researchers, from CyberCX, [...] said the network is unlikely to be very effective in its current state, but added that it “is plausible that the network operators are preparing to increase activities in the lead up to the U.S. presidential election.”

Most accounts on the network are currently dormant, but activity increased sharply in July. The network has been rectifying operational errors – including reducing malformed outputs – which could make its activities more effective and harder to detect over time.

The network uses a Chinese-language LLM system and links to an AI researcher affiliated with Tsinghua University and Zhipu AI, a prominent Chinese AI company. So far the actors haven’t had specific political leanings, but instead have focused on amplification of divisive content, “consistent with China’s information operation playbook,” the researchers said.

[...]

The researchers said [that] "our findings also indicate key gaps in X’s willingness and ability to detect inauthentic content. While we have observed X taking sporadic action against Green Cicada Network accounts during our period of monitoring, we have observed a failure to take systemic action against overtly linked accounts."

“We note that X has reversed initiatives put in place by Twitter to combat inauthentic activity, including efforts to detect, label and/or ban inauthentic accounts.”

The researchers said the network is a sign of things to come, with generative AI able to produce “a significant scale of malicious output with limited human oversight, at low cost and with low barriers to entry. It is possible that the system underpinning the network is operated by high-end consumer-grade hardware and is developed by just one individual.

“We assess that a more mature, future version of the system underlying the Green Cicada Network would be extremely difficult for parties other than X to detect.”

252
71
submitted 2 months ago* (last edited 2 months ago) by tychosmoose@lemm.ee to c/technology@beehaw.org

What a bunch of ~~clowns~~ idiots (edited to remove the implication that clowns are genuinely as clueless and incompetent as Sonos execs). When Sonos launched in 2004 they were far ahead of any other company in the connected speaker landscape. And they stayed best-of-the-best for a dozen years. Since the S1/S2 split they have been on a steady downward trajectory with no signs of improvement.

Now another bunch of employees are getting the axe while the decision makers who have steadily ruined their service remain at the helm. Good job, Sonos.

If I were shopping for speakers right now, I know exactly what not to buy.

253
78
submitted 2 months ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/19117230

As X’s owner and most followed user, Elon Musk has increasingly used the social media platform as a microphone to amplify his political views and, lately, those of right-wing figures he’s aligned with. There are few modern parallels to his antics, but then again there are few modern parallels to Elon Musk himself.

254
71
submitted 2 months ago by girlfreddy@lemmy.ca to c/technology@beehaw.org

Kim Dotcom, who is facing criminal charges relating to the defunct file-sharing website Megaupload, is to be extradited to the US, the New Zealand justice minister has said.

German-born Dotcom has New Zealand residency and has been fighting extradition to the US since 2012 following an FBI-ordered raid on his Auckland mansion.

The justice minister, Paul Goldsmith, had signed an extradition order for Dotcom, a spokesperson said on Thursday.

“I considered all of the information carefully, and have decided that Mr Dotcom should be surrendered to the US to face trial,” Goldsmith said in a statement.

In a post on X on Tuesday, Dotcom said: “The obedient US colony in the South Pacific just decided to extradite me for what users uploaded to Megaupload,” in what appears to be a reference to the extradition order.

255
30

What connects a dad living in Lahore in Pakistan, an amateur hockey player from Nova Scotia - and a man named Kevin from Houston, Texas?

They’re all linked to Channel3Now - a website whose story giving a false name for the 17-year-old charged over the Southport attack was widely quoted in viral posts on X. Channel3Now also wrongly suggested the attacker was an asylum seeker who arrived in the UK by boat last year.

This, combined with untrue claims the attacker was a Muslim from other sources, has been widely blamed for contributing to riots across the UK - some of which have targeted mosques and Muslim communities.

[...]

The BBC has tracked down several people linked to Channel3Now.

[...]

The person who gets in touch [from Channel3Now’s official email] says he is called Kevin, and that he is based in Houston, Texas. He declines to share his surname and it is unclear if Kevin is actually who he says he is, but he agrees to answer questions over email.

Kevin says he is speaking to me from the site’s “main office” in the US - which fits with both the timing of posts on some of the site's social media profiles and the times Kevin replies to my emails.

He signs off initially as “the editor-in-chief” before he tells me he is actually the “verification producer”. He refuses to share the name of the owner of the site who he says is worried “not only about himself but also about everyone working for him”.

[...]

Although [there is] no evidence to back up these claims of Russian links to Channel3Now, pro-Kremlin Telegram channels did reshare and amplify the site’s false posts. This is a tactic they often use.

Kevin said the site is a commercial operation and “covering as many stories as possible” helps it generate income. The majority of its stories are accurate - seemingly drawing from reliable sources about shootings and car accidents in the US. However, the site has shared further false speculation about the Southport attacker and also the person who attempted to assassinate Donald Trump.

Following the false Southport story and media coverage about Channel3Now, Kevin says its YouTube channel and almost all of its “multiple Facebook pages” have been suspended, but not its X accounts. A Facebook page called the Daily Felon, which exclusively re-shares content from the site, also remains live.

[...]

Some profiles [across several social media sites] have racked up millions of views over the past week posting about the Southport attacks and subsequent riots. X’s “ads revenue sharing” means that blue-tick users can earn a share of revenue from the ads in their replies.

Estimates from users with fewer than half a million followers who have generated income this way suggest that accounts can make $10-20 per million views or impressions on X. Some of the accounts sharing disinformation rack up more than a million impressions on almost every post, and post several times a day.
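Taking the article's figures at face value, a rough back-of-the-envelope calculation shows how quickly this adds up. In the sketch below, the $10-20 rate comes from the article; the impressions-per-post and posts-per-day numbers are illustrative assumptions, not reported figures:

```python
# Back-of-the-envelope estimate of X ad-revenue-sharing income.
# The $10-20 per million impressions rate comes from the article;
# impressions per post and posting rate are illustrative assumptions.

RATE_LOW, RATE_HIGH = 10, 20        # USD per million impressions (article's estimate)
IMPRESSIONS_PER_POST = 1_200_000    # assumed: "more than a million impressions"
POSTS_PER_DAY = 3                   # assumed: "several times a day"

daily_impressions = IMPRESSIONS_PER_POST * POSTS_PER_DAY
low = daily_impressions / 1_000_000 * RATE_LOW
high = daily_impressions / 1_000_000 * RATE_HIGH

print(f"~{daily_impressions:,} impressions/day -> ${low:.0f}-${high:.0f}/day")
print(f"-> roughly ${low * 30:,.0f}-${high * 30:,.0f}/month")
```

Under those assumptions, a single account would clear roughly $36-72 a day, or around $1,100-2,200 a month, which helps explain why the posting is so relentless.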

Other social media companies - aside from X - also allow users to make money from views. But YouTube, TikTok, Instagram and Facebook have previously de-monetised or suspended some profiles posting content that breaks their guidelines on misinformation. Apart from rules against faked AI content, X does not have guidelines on misinformation.

256
199

While this isn't news about new technology, I thought it was an interesting look at how predatory EULAs can still hurt us, even years later, in seemingly unrelated ways.

Archive.org link

Some key excerpts:

After a doctor suffered a fatal allergic reaction at a Disney World restaurant, Disney is trying to get her widower’s wrongful death lawsuit tossed by pointing to the fine print of a Disney+ trial he signed up for years earlier.

Tangsuan was “highly allergic” to dairy and nuts, and they chose that particular restaurant in part because of its promises about accommodating patrons with food allergies, according to the lawsuit filed in a Florida circuit court.

They allegedly raised the issue upfront, inquired about the safety of specific menu items, had the server confirm with the chef that they could be made allergen-free and asked for confirmation “several more times” after that.

After about 45 minutes, Tangsuan “began having severe difficulty breathing and collapsed to the floor.”

“The medical examiner's investigation determined that [Tangsuan’s] cause of death was as a result of anaphylaxis due to elevated levels of dairy and nut in her system,” according to the lawsuit.

He is seeking more than $50,000 in damages and trial by jury “on all issues so triable.”

In late May, Disney’s lawyers filed a motion asking the circuit court to order Piccolo to arbitrate the case — with them and a neutral third party in private, as opposed to publicly in court — and to pause the legal proceedings in the meantime.

The reason it says Piccolo must be compelled to arbitrate? A clause in the terms and conditions he signed off on when he created a Disney+ account for a month-long trial in 2019.

Disney says Piccolo agreed to similar language again when purchasing park tickets online in September 2023. Whether he actually read the fine print at any point, it adds, is “immaterial.”

“Piccolo ignores that he previously created a Disney account and agreed to arbitrate ‘all disputes’ against ‘The Walt Disney Company or its affiliates’ arising ‘in contract, tort, warranty, statute, regulation, or other legal or equitable basis,’” the motion reads, arguing the language is broad enough to cover Piccolo’s claims.

“There is simply no reading of the Disney+ Subscriber Agreement which would support the notion that Mr. Piccolo agreed to arbitrate claims arising from injuries sustained by his wife at a restaurant located on premises owned by a Disney theme park or resort which ultimately led to her death,” [Piccolo's legal team] wrote in the 123-page filing.

They confirmed he did create a Disney+ account on his PlayStation in 2019, but he believes he canceled the subscription during the trial because he hasn’t found any charges associated with it after that point.

“In effect, WDPR is explicitly seeking to bar its 150 million Disney+ subscribers from ever prosecuting a wrongful death case against it in front of a jury even if the case facts have nothing to do with Disney+,” they wrote.

The court has scheduled a hearing on Disney’s motion for October 2.

257
48

Archived version

One law for megarich provocateur Elon Musk, another for the poor idiots who follow

[...]

Last week in England, a 55-year-old woman was arrested over an online post inaccurately “identifying” the suspect accused of killing three young girls. Chief supt Alison Ross said this was “a stark reminder of the dangers of posting information on social media platforms without checking the accuracy”. She added that “we are all accountable for our actions, whether that be online or in person”.

On Friday, two young men were jailed – one for 20 months, the other for more than three years – for provocative posts. One of them was so thick he posted under his own name while boasting that he had “watched enough CSI programmes” to ensure he would “categorically not be arrested”.

[...]

We’re going to see a lot more of this – including in Ireland [...] Responses to online provocations will therefore have to be aligned too. There will be more prosecutions of individuals who share fake stories that feed into violent attacks on immigrants, religious minorities and asylum seekers.

But the eejits who get caught are low-hanging fruit. Typically, they’ve been hooked by the algorithms engineered by the social media companies to mimic addictive drugs. Their brains have been rewired by constant exposure to conspiracy theories and hate content.

Thus, the war on disinformation begins to look very like the war on drugs. One of the great social disasters of the contemporary world was the policy of arresting and imprisoning drug users. It caused immense harm to individuals, families and communities while doing nothing to stop ruthless suppliers from creating an industry worth hundreds of billions of dollars a year.

The Pablo Escobar of toxic disinformation is Elon Musk. His lying is a fully integrated business. He simultaneously produces, consumes and distributes disinformation.

[...]

Musk isn’t doing this only because incendiary content has become a big driver of X’s business model. He is sincerely committed to far-right provocation.

As time has gone on, Musk has become more and more like his father, a full-on, down-the-rabbit-hole conspiracy theorist. In his biography of Musk, Walter Isaacson quotes one of the father’s emails to his son suggesting that in Musk’s native South Africa “with no Whites here, the Blacks will go back to the trees” and calling Joe Biden a “freak, criminal, paedophile president”.

[...]

What are democracies going to do about this? The only political entity that has shown any real appetite for taking Musk on is the EU Commission, which last month made a preliminary finding that X is in breach of the Digital Services Act because its “verified accounts” are deceptive, its promotion of advertising is not transparent and it refuses to provide real-time data to allow action against disinformation.

This is welcome, but even if X is eventually found guilty, it will face financial penalties, not criminal sanctions. Musk, as we know, is quite happy to lose money in the promotion of far-right causes.

What has to happen is that Musk is held personally to the same standards of criminal justice as random idiots who join his chorus. He is, he says, against two-tier policing. Take him at his word.

258
50

archive.today link to bypass the soft paywall.

259
10
260
46
submitted 2 months ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

Most of the cameras that Israel Police is using to monitor the country as part of the "Hawk-Eye" project are made in China, and in particular manufactured by Dahua. The police are also making use of cameras from Chinese company Hikvision. These two companies have been removed from the national infrastructures of several Western countries in recent years.

Dahua and Hikvision were blacklisted in 2021 by the US Federal Communications Commission (FCC) together with Huawei, China Telecom and ZTE, as companies endangering US national security. US Congress also enacted special legislation prohibiting import and sale of the Chinese companies' products, including the Dahua and Hikvision surveillance cameras, by government companies or any organization that relies on a federal budget.

In the Netherlands, the Amsterdam Municipality also announced that within five years it will replace nearly 1,300 city cameras made in China that were installed on its streets, due to fear of espionage as well as suspicion of complicity in the violation of human rights in the communist country. In addition, as far as is known, due to US suspicions about Chinese cameras, Israeli defense companies like Elbit Systems and Israel Aerospace Industries (IAI) are also required not to use these cameras at all. [Similar moves to ban Chinese surveillance equipment have been underway across other European jurisdictions - ed.]

261
54
submitted 2 months ago by 0x815@feddit.org to c/technology@beehaw.org

Archived link

Israel’s apartheid state and occupation are being sponsored by tech giants, with artificial intelligence (AI) and other surveillance technologies used to deepen the longstanding repression of Palestinians. In 2021’s Operation Guardian of the Walls, which saw Israel bombard the Gaza Strip with airstrikes, leaving one thousand Palestinians displaced and 256 dead, “AI was a force multiplier,” according to an Israeli official. In the years since, companies like Amazon have powered what a recent Amnesty International report dubbed “automated apartheid.” Amazon announced just recently that it would invest another $7.2 billion in Israel through 2037 and extend its web services to the country.

The company claims the beneficiaries of Amazon Web Services (AWS) will be “Israeli entrepreneurs and businesses.” In reality, the primary winner will be the military. AWS will expand “Project Nimbus,” which provides the cloud service ecosystem for Israel, primarily serving the country’s military. (Google also invests in Project Nimbus.)

The project will allow Israeli forces to obtain and retain data on Palestinians and surveil them with facial recognition, clamping down on the right to protest and making Palestinians warier of, say, appearing at a demonstration. Even if they aren’t detained at the protest itself, Palestinians know the numerous watchtowers and checkpoints will capture their faces and they could be arrested later or banned from visiting certain sites. Amnesty International’s report found that protests outside Jerusalem’s Damascus Gate plummeted after the various watchtowers and cameras were erected.

The ties between Israel and Amazon run deep. As of 2019, Israel Aerospace Industries (IAI) had been supplied by Amazon with 80 percent of its aircraft. Buoyed by Amazon’s investment, IAI is implementing autonomous “robo-snipers” and drones across Gaza and the occupied West Bank.

262
277

Yay!

263
37
264
61

TikTok has been sending inaccurate and misleading news-style alerts to users’ phones, including a false claim about Taylor Swift and a weeks-old disaster warning, intensifying fears about the spread of misinformation on the popular video-sharing platform.

Among alerts was a warning about a tsunami in Japan, labeled “BREAKING,” that was posted in late January, three weeks after an earthquake had struck.

The notifications, which sometimes contain summaries from user-generated posts, pop up on screen in the style of a news alert. Researchers say that format, adopted widely to boost engagement through personalized video recommendations, may make users less critical of the veracity of the content and open them up to misinformation.

“Notifications have this additional stamp of authority,” said Laura Edelson, a researcher at Northeastern University, in Boston. “When you get a notification about something, it’s often assumed to be something that has been curated by the platform and not just a random thing from your feed.”

Social media groups such as TikTok, X, and Meta are facing greater scrutiny to police their platforms, particularly in a year of major national elections, including November’s vote in the US. The rise of artificial intelligence adds to the pressure given that the fast-evolving technology makes it quicker and easier to spread misinformation, including through synthetic media, known as deepfakes.

[...]

TikTok, which has more than 1 billion global users, has repeatedly promised to step up its efforts to counter misinformation in response to pressure from governments around the world, including the UK and EU. In May, the video-sharing platform committed to becoming the first major social media network to label some AI-generated content automatically.

[...]

TikTok declined to reveal how the app determined which videos to promote through notifications, but the sheer volume of personalized content recommendations must be “algorithmically generated,” said Dani Madrid-Morales, co-lead of the University of Sheffield’s Disinformation Research Cluster.

Edelson, who is also co-director of the Cybersecurity for Democracy group, suggested that a responsible push notification algorithm could be weighted towards trusted sources, such as verified publishers or officials. “The question is: Are they choosing a high-traffic thing from an authoritative source?” she said. “Or is this just a high-traffic thing?”
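As a thought experiment, the weighting Edelson describes could look something like the sketch below: candidate alerts are ranked by engagement multiplied by a source-trust weight, so an authoritative source with moderate traffic can outrank a viral but unverified one. The trust tiers, weights and data fields here are illustrative assumptions, not anything TikTok has disclosed:

```python
# Illustrative sketch of a trust-weighted push-notification ranker,
# along the lines Edelson suggests. All tiers, weights and fields are
# assumptions for demonstration; they describe no real platform.

CANDIDATES = [
    {"title": "Tsunami warning lifted",  "source": "verified_publisher",  "views": 900_000},
    {"title": "Celebrity rumor",         "source": "unverified_user",     "views": 2_500_000},
    {"title": "Official storm advisory", "source": "government_official", "views": 400_000},
]

# Trust weights let authoritative sources dominate raw traffic.
TRUST = {"government_official": 3.0, "verified_publisher": 2.0, "unverified_user": 0.2}

def score(post):
    """Engagement weighted by source trust, not traffic alone."""
    return post["views"] * TRUST.get(post["source"], 0.1)

for post in sorted(CANDIDATES, key=score, reverse=True):
    print(f"{score(post):>12,.0f}  {post['title']} ({post['source']})")
```

With these made-up weights, the verified publisher's 900,000-view story and the official advisory both outrank the unverified post with 2.5 million views, answering Edelson's question in favor of "a high-traffic thing from an authoritative source".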

265
85

Archived version

Meta announced last month that it would replace CrowdTangle with the “Meta Content Library,” a less powerful tool that will not be made available to media companies.

That will help China, Russia, and other autocratic countries that seek to sow political division in the United States, said Nathan Doctor, senior digital methods manager at the Institute for Strategic Dialogue.

“With these sorts of foreign influence campaigns, probably the biggest thing about them is to keep tabs on them. Otherwise, as we see, they can start to flourish. So as data access…dries up a bit, it becomes a lot more difficult in some cases to identify this kind of stuff and then, you know, reactively deal with it,” Doctor said on Monday during a Center for American Progress online event.

[...]

That shows that social media companies don’t feel much accountability to policy-makers, the press, or the public, Brandi Geurkink, the executive director of the Coalition for Independent Technology Research, said Monday.

“In arguably the largest global election year ever, the fact that a company can…signal their intention to make such a decision and then have such a groundswell of opposition from civil society all around the globe, from lawmakers in the United States and Europe, from journalists, you name it, and continue to go ahead with this decision and not really respond to any of the criticism—that's what I think is the bigger worrying piece,” she said.

266
44
submitted 2 months ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/19060045

Government involvement in content moderation raises serious human rights concerns in every context. Since October 7, social media platforms have been challenged for the unjustified takedowns of pro-Palestinian content—sometimes at the request of the Israeli government—and a simultaneous failure to remove hate speech towards Palestinians. More specifically, social media platforms have worked with the Israeli Cyber Unit—a government office set up to issue takedown requests to platforms.

267
54
submitted 2 months ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/1843814

Complaints were filed in Austria, Belgium, France, Greece, Ireland, Italy, the Netherlands, Spain and Poland by NOYB ('none of your business'), the rights group founded by Austrian Max Schrems. NOYB argues that Elon Musk's Twitter, now X, violated the European Union's General Data Protection Regulation (GDPR).

"As if Meta’s failed attempt to illegally use people’s personal data for AI projects did not send a clear enough message, Twitter is the next US company to just suck up EU users’ data to train AI," the organization says.

Twitter started irreversibly feeding European users’ data into its “Grok” AI technology in May 2024, without ever informing them or asking for their consent.

NOYB says that "Twitter’s blatant ignorance of the law" prompted a surprising response from the "notoriously pro-corporate" Data Protection Commissioner (DPC): the authority has taken court action against Twitter to stop the illegal processing and enforce an order to bring its systems into compliance with the GDPR. However, a court hearing last Thursday revealed that the DPC seems to have been mainly concerned with so-called “mitigation” measures and the fact that Twitter started processing while still in a mandatory consultation process with the DPC under Article 36 GDPR. The DPC does not seem to be pursuing the core violations, NOYB argues.

Max Schrems, Chairman of noyb: “The court documents are not public, but from the oral hearing we understand that the DPC was not questioning the legality of this processing itself. It seems the DPC was concerned with so-called ‘mitigation measures’ and a lack of cooperation by Twitter. The DPC seems to take action around the edges, but shies away from the core problem.”

(Edit: typo.)

268
35
submitted 2 months ago* (last edited 2 months ago) by smallpatatas@lemm.ee to c/technology@beehaw.org

cross-posted from: https://lemm.ee/post/39429322

Interesting essay looking at the role of friction in human development, and how a particular vision of technology's function in society - one that seeks to eliminate friction - paradoxically reduces our autonomy, rather than enhancing it.

This post was reported as spam on technology @ lemmy.world, and was removed, then eventually reinstated, by the mods. The original reason for removal was "it's not really technology-related." I suspect it's being brigaded due to my cryptocurrency criticism, but I have no way to know for sure.

(Edit - update: I have now been banned from technology @ lemmy.world for ... I guess asking the mods how this isn't tech-related? LOL)

269
37
submitted 2 months ago* (last edited 2 months ago) by SweetCitrusBuzz@beehaw.org to c/technology@beehaw.org

Warning for gore and zombies.

A video showing how the tech sector in general, and the video game sector in particular, have no new ideas: they repackage old ones and attract massive investment to sell 'innovated' ideas and products that are no better than the originals they're aping, and in a lot of ways worse.

270
44
submitted 2 months ago by Gaywallet@beehaw.org to c/technology@beehaw.org
271
50
272
56
submitted 2 months ago by alyaza@beehaw.org to c/technology@beehaw.org

archive.is link

Semafor, a global news publication that launched in late 2022, originally focussed on publishing e-mail newsletters. The rise of the newsletter was another strategy for building loyal audiences without relying on social media: rather than try to get readers to visit your Web site, you deliver your content straight to their in-boxes. But over time Semafor’s site has become more important. “It actually felt like a slightly counterintuitive choice to say, ‘We’re going to invest in building a Web page,’ ” Ben Smith, the co-founder of Semafor, told me. Smith was the long-running editor-in-chief of BuzzFeed News, a publication built to distribute content through social media. “We were convinced that home pages were dead. In fact, they were just resting,” he said. (The New Yorker launched a redesigned home page in late 2023, having reached a similar conclusion.)

273
18
274
24

Archived version

"Everyone will go hungry,” one taxi driver said of Wuhan drivers competing against robotaxis from Apollo Go, a subsidiary of Chinese technology giant Baidu.

Ride-hailing and taxi drivers are among the first workers globally to face the threat of job loss from artificial intelligence as thousands of robotaxis hit Chinese streets, economists and industry experts said.

Self-driving technology remains experimental, but China has moved aggressively to green-light trials compared with the U.S., which is quick to launch investigations and suspend approvals after accidents.

Just a few weeks ago, the robotaxi revolution was causing public concern in China, with the issue blowing up on social media after an Apollo Go vehicle ran into a pedestrian in Wuhan. Footage of the incident that spread online sparked a wide debate about the issues created by robotaxis — especially the threat the technology poses to ride-hailing and taxi drivers.

Authorities in Wuhan felt the need at the time to respond to the “rumors” about problems caused by robotaxis. The city’s transportation bureau told domestic media that the local taxi industry is “relatively stable” and that Apollo Go operates only 400 robotaxis in the city, rather than the 1,000 many have claimed online.

Despite the safety concerns, fleets proliferate in China as authorities approve testing to support economic goals. Last year, President Xi Jinping called for “new productive forces,” setting off regional competition.

275
40
submitted 2 months ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/18985984

Palantir, the company named for the dangerous seeing-stones that tended to mislead their users, has announced a partnership with Microsoft to deliver services for classified networks in US defense and intelligence agencies.

“This is a first-of-its-kind, integrated suite of technology that will allow critical national security missions to operationalize Microsoft’s best-in-class large language models (LLMs) via Azure OpenAI Service within Palantir’s AI Platforms (AIP) in Microsoft’s government and classified cloud environments,” the announcement says.

Palantir is a data-analysis company that sucks down huge amounts of personal data to assist governments and companies with surveillance. It is somewhat unclear from the text of the announcement what services Palantir and Microsoft will offer. What we do know is that Microsoft’s Azure cloud services will integrate Palantir products. Previously, Azure incorporated OpenAI’s GPT-4 into a “top secret” version of its software.
