651
47
submitted 5 months ago* (last edited 5 months ago) by hedge@beehaw.org to c/technology@beehaw.org

I was under the impression that Privacy Badger wasn't considered useful any more . . . ? They should've just recommended using Firefox instead, yes?

EDIT: They spoke to Cory Doctorow and Brewster Kahle but, IMHO, didn't give them enough time. They mentioned Mastodon 👍, and described the Fediverse without actually calling it that! A bit frustrating.

652
60
653
34
submitted 5 months ago by hedge@beehaw.org to c/technology@beehaw.org

I've never completely understood this, but I think the answer is probably "no," although I'm not sure. Usually when I leave the house I turn off Wi-Fi and just use mobile data (a habit from my pre-VPN days), although I guess I could just leave Wi-Fi on, since using unfamiliar Wi-Fi with a VPN is OK (unless someone at Starbucks is running the evil-twin router trick . . . ?). I was generally under the impression that mobile data is harder to interfere with than Wi-Fi, but I could well be wrong and my notions out of date. So, if need be, please set me straight. 🙂

654
86

Archive.org link

Some highlights I found interesting:

After Tinucci had cut between 15% and 20% of staffers two weeks earlier, as part of much wider layoffs, she and her team believed Musk would affirm plans for a massive charging-network expansion.

Musk, the employees said, was not pleased with Tinucci’s presentation and wanted more layoffs. When she balked, saying deeper cuts would undermine charging-business fundamentals, he responded by firing her and her entire 500-member team.

The departures have upended a network widely viewed as a signature Tesla achievement and a key driver of its EV sales.

Despite the mass firings, Musk has since posted on social media promising to continue expanding the network. But three former charging-team employees told Reuters they have been fielding calls from vendors, contractors and electric utilities, some of which had spent millions of dollars on equipment and infrastructure to help build out Tesla’s network.

Tesla's energy team, which sells solar and battery-storage products for homes and businesses, was tasked with taking over Superchargers and calling some partners to close out ongoing charger-construction projects, said three of the former Tesla employees.

Tinucci was one of the few high-ranking female Tesla executives. She recently started reporting directly to Musk, following the departure of battery-and-energy chief Drew Baglino, according to four former Supercharger-team staffers. They said Baglino had historically overseen the charging department without much involvement from Musk.

Two former Supercharger staffers called the $500 million expansion budget a significant reduction from what the team had planned for 2024 - but nonetheless a challenge requiring hundreds of employees.

Three of the former employees called the firings a major setback to U.S. charging expansion because of the relationships Tesla employees had built with suppliers and electric utilities.

655
115
submitted 5 months ago by alyaza@beehaw.org to c/technology@beehaw.org
656
139
submitted 5 months ago by Hirom@beehaw.org to c/technology@beehaw.org
657
28
submitted 5 months ago by 0x815@feddit.de to c/technology@beehaw.org

The authors introduce and evaluate an open-source software package and methodological framework for detecting and analysing coordinated behaviour on social media, namely the Coordination Network Toolkit, utilising weighted, directed multigraphs to capture intricate coordination dynamics.
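For a concrete picture of the data structure the abstract mentions, here is a minimal Python sketch of the general idea — not the Coordination Network Toolkit's actual API. Accounts become nodes, and a directed, weighted edge is recorded whenever one account shares the same URL shortly after another, so repeated co-shares accumulate as parallel edges in a multigraph. The ten-minute window, the networkx dependency, and the toy posts are all illustrative assumptions.

```python
# Illustrative sketch only -- not the Coordination Network Toolkit's API.
# Accounts sharing the same URL within a short window get a directed,
# weighted edge; repeated co-shares accumulate in a MultiDiGraph.
from datetime import datetime, timedelta
from itertools import combinations

import networkx as nx  # assumed dependency

WINDOW = timedelta(minutes=10)  # assumed coordination window

# (account, shared_url, timestamp) -- toy data for illustration
posts = [
    ("acct_a", "https://example.com/story", datetime(2024, 5, 1, 12, 0)),
    ("acct_b", "https://example.com/story", datetime(2024, 5, 1, 12, 4)),
    ("acct_c", "https://example.com/story", datetime(2024, 5, 1, 12, 7)),
    ("acct_a", "https://example.com/other", datetime(2024, 5, 1, 13, 0)),
    ("acct_b", "https://example.com/other", datetime(2024, 5, 1, 13, 2)),
]

# Group shares by URL, then connect accounts whose shares fall within the window.
by_url = {}
for account, url, ts in posts:
    by_url.setdefault(url, []).append((account, ts))

g = nx.MultiDiGraph()
for url, shares in by_url.items():
    shares.sort(key=lambda s: s[1])
    for (a1, t1), (a2, t2) in combinations(shares, 2):
        if a1 != a2 and (t2 - t1) <= WINDOW:
            # Edge from the later sharer back to the earlier one, one edge per co-share event.
            g.add_edge(a2, a1, url=url, weight=1)

print(g.number_of_nodes(), "accounts,", g.number_of_edges(), "coordination edges")
```

Edges that accumulate unusually quickly or heavily between the same accounts are then the candidates for coordinated behaviour; the paper's toolkit covers more behaviour types and far larger datasets than this toy example.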

To whom it may concern.

658
163

I hate to go as cliche as "surprising absolutely no one," but really, this is not a surprise.

659
33

This is the alternative Invidious link for the embedded article.

By Mayank Kejriwal, Research Assistant Professor of Industrial & Systems Engineering, University of Southern California.

Many people understand the concept of bias at some intuitive level. In society, and in artificial intelligence systems, racial and gender biases are well documented.

If society could somehow remove bias, would all problems go away? The late Nobel laureate Daniel Kahneman, who was a key figure in the field of behavioral economics, argued in his last book that bias is just one side of the coin. Errors in judgments can be attributed to two sources: bias and noise.

Bias and noise both play important roles in fields such as law, medicine and financial forecasting, where human judgments are central. In our work as computer and information scientists, my colleagues and I have found that noise also plays a role in AI.

Noise in this context means variation in how people make judgments of the same problem or situation. The problem of noise is more pervasive than it first appears. Seminal work dating back to the Great Depression found that different judges gave different sentences for similar cases.

Worryingly, sentencing in court cases can depend on things such as the temperature and whether the local football team won. Such factors, at least in part, contribute to the perception that the justice system is not just biased but also arbitrary at times.

Other examples: Insurance adjusters might give different estimates for similar claims, reflecting noise in their judgments. Noise is likely present in all manner of contests, ranging from wine tastings to local beauty pageants to college admissions.

Noise in the data

On the surface, it doesn’t seem likely that noise could affect the performance of AI systems. After all, machines aren’t affected by weather or football teams, so why would they make judgments that vary with circumstance? On the other hand, researchers know that bias affects AI, because it is reflected in the data that the AI is trained on.

For the new spate of AI models like ChatGPT, the gold standard is human performance on general intelligence problems such as common sense. ChatGPT and its peers are measured against human-labeled commonsense datasets.

Put simply, researchers and developers can ask the machine a commonsense question and compare it with human answers: “If I place a heavy rock on a paper table, will it collapse? Yes or No.” If there is high agreement between the two – in the best case, perfect agreement – the machine is approaching human-level common sense, according to the test.

So where would noise come in? The commonsense question above seems simple, and most humans would likely agree on its answer, but there are many questions where there is more disagreement or uncertainty: “Is the following sentence plausible or implausible? My dog plays volleyball.” In other words, there is potential for noise. It is not surprising that interesting commonsense questions would have some noise.

But the issue is that most AI tests don’t account for this noise in experiments. Intuitively, questions generating human answers that tend to agree with one another should be weighted higher than if the answers diverge – in other words, where there is noise. Researchers still don’t know whether or how to weigh AI’s answers in that situation, but a first step is acknowledging that the problem exists.

Tracking down noise in the machine

Theory aside, the question still remains whether all of the above is hypothetical or if in real tests of common sense there is noise. The best way to prove or disprove the presence of noise is to take an existing test, remove the answers and get multiple people to independently label them, meaning provide answers. By measuring disagreement among humans, researchers can know just how much noise is in the test.
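As a rough illustration of what "measuring disagreement" can mean in practice, here is a minimal Python sketch with made-up data — not the study's actual pipeline. For each question it computes how strongly independent annotators agree with the majority answer: 1.0 means unanimity (no noise on that item), while values near 0.5 on a yes/no question mean the label is close to a coin flip.

```python
from collections import Counter

# Hypothetical annotator answers to a few yes/no commonsense questions.
labels = {
    "Heavy rock on a paper table: will it collapse?": ["yes", "yes", "yes", "yes", "yes"],
    "Is 'My dog plays volleyball' plausible?":        ["no", "no", "yes", "no", "yes"],
    "Does ice cream left in the sun melt?":           ["yes", "yes", "yes", "yes", "no"],
}

for question, answers in labels.items():
    counts = Counter(answers)
    majority_label, majority_count = counts.most_common(1)[0]
    agreement = majority_count / len(answers)  # 1.0 = unanimous, 0.5 = coin flip (binary case)
    print(f"{question}  majority={majority_label}  agreement={agreement:.2f}")
```

Aggregating these per-question agreement scores over an entire test is one simple way to quantify how noisy its gold labels are.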

The details behind measuring this disagreement are complex, involving significant statistics and math. Besides, who is to say how common sense should be defined? How do you know the human judges are motivated enough to think through the question? These issues lie at the intersection of good experimental design and statistics. Robustness is key: One result, test or set of human labelers is unlikely to convince anyone. As a pragmatic matter, human labor is expensive. Perhaps for this reason, there haven’t been any studies of possible noise in AI tests.

To address this gap, my colleagues and I designed such a study and published our findings in Nature Scientific Reports, showing that even in the domain of common sense, noise is inevitable. Because the setting in which judgments are elicited can matter, we did two kinds of studies. One type of study involved paid workers from Amazon Mechanical Turk, while the other study involved a smaller-scale labeling exercise in two labs at the University of Southern California and the Rensselaer Polytechnic Institute.

You can think of the former as a more realistic online setting, mirroring how many AI tests are actually labeled before being released for training and evaluation. The latter is more of an extreme, guaranteeing high quality but at much smaller scales. The question we set out to answer was how inevitable is noise, and is it just a matter of quality control?

The results were sobering. In both settings, even on commonsense questions that might have been expected to elicit high – even universal – agreement, we found a nontrivial degree of noise. The noise was high enough that we inferred that between 4% and 10% of a system’s performance could be attributed to noise.

To emphasize what this means, suppose I built an AI system that achieved 85% on a test, and you built an AI system that achieved 91%. Your system would seem to be a lot better than mine. But if there is noise in the human labels that were used to score the answers, then we’re not sure anymore that the 6% improvement means much. For all we know, there may be no real improvement.
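To make that intuition concrete, here is a small, purely illustrative Monte Carlo sketch; the accuracies, noise level, and test size are assumptions, not figures from the study. It scores two systems with "true" accuracies of 85% and 91% against gold labels, a fraction of which are effectively arbitrary, and shows how the measured gap shrinks and fluctuates from run to run.

```python
import random

random.seed(1)

N_QUESTIONS = 500  # assumed benchmark size
NOISE = 0.07       # assumed fraction of gold labels that are effectively arbitrary

def measured_accuracy(true_acc):
    """Score one system against a gold standard whose labels are partly noisy."""
    correct = 0
    for _ in range(N_QUESTIONS):
        agrees_with_ideal = random.random() < true_acc
        if random.random() < NOISE:
            # Noisy gold label: agreement with the system is essentially a coin flip.
            correct += random.random() < 0.5
        else:
            correct += agrees_with_ideal
    return correct / N_QUESTIONS

for run in range(5):
    a = measured_accuracy(0.85)  # "my" system
    b = measured_accuracy(0.91)  # "your" system
    print(f"run {run}: A={a:.3f}  B={b:.3f}  measured gap={b - a:+.3f}")
```

Even in this toy setup the gap between the two systems wobbles by a few points from run to run, which is the point: once gold labels are noisy, a single-digit difference on a leaderboard is hard to interpret.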

On AI leaderboards, where large language models like the one that powers ChatGPT are compared, performance differences between rival systems are far narrower, typically less than 1%. As we show in the paper, ordinary statistics do not really come to the rescue for disentangling the effects of noise from those of true performance improvements.

Noise audits

What is the way forward? Returning to Kahneman’s book, he proposed the concept of a “noise audit” for quantifying and ultimately mitigating noise as much as possible. At the very least, AI researchers need to estimate what influence noise might be having.

Auditing AI systems for bias is somewhat commonplace, so we believe that the concept of a noise audit should naturally follow. We hope that this study, as well as others like it, leads to their adoption.

660
24
submitted 5 months ago by Five@slrpnk.net to c/technology@beehaw.org
661
29
662
75
submitted 5 months ago* (last edited 5 months ago) by Powderhorn@beehaw.org to c/technology@beehaw.org
663
43
submitted 5 months ago by Gaywallet@beehaw.org to c/technology@beehaw.org
664
118
submitted 5 months ago by mozz@mbin.grits.dev to c/technology@beehaw.org
665
38

A cyberattack on the Ascension health system operating in 19 states across the U.S. forced some of its 140 hospitals to divert ambulances, caused patients to postpone medical tests and blocked online access to patient records.

An Ascension spokesperson said it detected “unusual activity” Wednesday on its computer network systems. Officials refused to say whether the non-profit Catholic health system, based in St. Louis, was the victim of a ransomware attack or whether it had paid a ransom, and it did not immediately respond to an email seeking updates.

But the attack bore the hallmarks of ransomware, and Ascension said it had called in Mandiant, the Google cybersecurity unit that is a leading responder to such attacks. Earlier this year, a cyberattack on Change Healthcare disrupted care systems nationwide, and the CEO of its parent, UnitedHealth Group Inc., acknowledged in testimony to Congress that it had paid a ransom of $22 million in bitcoin.

Ascension said that both its electronic records system and the MyChart system that gives patients access to their records and allows them to communicate with their doctors were offline.

“We have determined this is a cybersecurity incident,” the national Ascension spokesperson’s statement said. “Our investigation and restoration work will take time to complete, and we do not have a timeline for completion.”

To prevent the automated spread of ransomware, hospital IT officials typically take electronic medical records and appointment-scheduling systems offline. UnitedHealth CEO Andrew Witty told congressional committees that Change Healthcare immediately disconnected from other systems to prevent the attack from spreading during its incident.

The Ascension spokesperson's latest statement, issued Thursday, said ambulances had been diverted from “several” hospitals without naming them.

In Wichita, Kansas, local news reports said emergency medical services started diverting all ambulance calls from Ascension hospitals there on Wednesday, though a health system spokesperson said Friday that the full diversion of ambulances ended Thursday afternoon.

The EMS service for Pensacola, Florida, also diverted patients from the Ascension hospital there to other hospitals, its spokesperson told the Pensacola News Journal.

And WTMJ-TV in Milwaukee reported that Ascension patients in the area said they were missing CT scans and mammograms and couldn't refill prescriptions.

Connie Smith, president of the Wisconsin Federation of Nurses and Health Professionals, is among the Ascension providers turning to paper records this week to cope. Smith, who coordinates surgeries at Ascension St. Francis Hospital in Milwaukee, said the hospital didn’t cancel any surgical procedures and continued treating emergency patients.

But she said everything has slowed down because electronic systems are built into the hospital’s daily operations. Younger providers are often unfamiliar with paper copies of essential records and it takes more time to document patient care, check the results of prior lab tests and verify information with doctors’ offices, she said.

Smith said union leaders feel staff and service cutbacks have made the situation even tougher. Hospital staff also have received little information about what led to the attack or when operations might get closer to normal, she said.

“You’re doing everything to the best of your ability but you leave feeling frustrated because you know you could have done things faster or gotten that patient home sooner if you just had some extra hands,” Smith said.

Ascension said its system expected to use “downtime” procedures “for some time” and advised patients to bring notes on their symptoms and a list of prescription numbers or prescription bottles with them to appointments.

Cybersecurity experts say ransomware attacks have increased substantially in recent years, especially in the health care sector. Increasingly, ransomware gangs steal data before activating data-scrambling malware that paralyzes networks. The threat of making stolen data public is used to extort payments. That data can also be sold online.

“We are working around the clock with internal and external advisors to investigate, contain, and restore our systems,” the Ascension spokesperson's latest statement said.

The attack against Change Healthcare earlier this year delayed insurance reimbursements and heaped stress on doctors’ offices around the country. Change Healthcare provides technology used by doctors’ offices and other care providers to submit and process billions of insurance claims a year.

It was unclear Friday whether the same group was responsible for both attacks.

Witty said Change Healthcare's core systems were now fully functional. But company officials have said it may take several months of analysis to identify and notify those who were affected by the attack.

They also have said they see no signs that doctor charts or full medical histories were released after the attack. Witty told senators that UnitedHealth repels an attempted intrusion every 70 seconds.

A ransomware attack in November prompted the Ardent Health Services system, operating 30 hospitals in six states, to divert patients from some of its emergency rooms to other hospitals while postponing certain elective procedures.

666
153

Archived link.

On Jan. 6, 2021, QAnon conspiracy theorists played a significant role in inciting Donald Trump supporters to storm the Capitol building in D.C., hoping to overturn the 2020 election in favor of Trump.

Days later, Twitter suspended tens of thousands of QAnon accounts, effectively banning most users who promote the far-right conspiracy theory.

Now, a new study from Newsguard has uncovered that since Elon Musk acquired the company, QAnon has had a resurgence on X, formerly Twitter, over the past year.

QAnon grows on X

Tracking commonly used QAnon phrases like "QSentMe," "TheGreatAwakening," and "WWG1WGA" (which stands for "Where We Go One, We Go All"), Newsguard found that these QAnon-related slogans and hashtags have increased a whopping 1,283 percent on X under Musk.

From May 1, 2023 to May 1, 2024, there were 1.12 million mentions of these QAnon supporter phrases on X. This was a huge uptick from the 81,100 mentions just one year earlier from May 1, 2022 to May 1, 2023.

One of the most viral QAnon-related posts of the year, on the "Frazzledrip" conspiracy, has received more than 21.8 million views, according to the report. Most concerning, however, is that it was posted by a right-wing influencer who has specifically received support from Musk.

The Jan. 2024 tweet was posted by @dom_lucre, a user with more than 1.2 million followers who commonly posts far-right conspiracy theories. In July 2023, @dom_lucre was suspended on then-Twitter. Responding to @dom_lucre's supporters, Musk shared at the time that @dom_lucre was "suspended for posting child exploitation pictures."

Sharing child sexual abuse material or CSAM would result in a permanent ban on most platforms. However, Musk decided to personally intervene in favor of @dom_lucre and reinstated his account.

Since then, @dom_lucre has posted about how he earns thousands of dollars directly from X. The company allows him to monetize his conspiratorial posts via the platform's official creator monetization program.

Musk has also previously voiced his support for Jacob Chansley, a QAnon follower known as the "QAnon Shaman," who served prison time for his role in the Jan. 6 riot at the Capitol.

The dangers of QAnon

QAnon's adherents follow a number of far-right conspiracy theories, but broadly (and falsely) believe that former President Trump has been secretly battling against a global cabal of Satanic baby-eating traffickers, who just happen to primarily be made up of Democratic Party politicians and Hollywood elites.

Unfortunately, these beliefs have too often turned deadly. Numerous QAnon followers have been involved in killings fueled by their beliefs. In 2022, one Michigan man killed his wife before being fatally shot in a standoff with police. His daughter said her father spiraled out of control as he fell into the QAnon conspiracies. In 2021, another QAnon conspiracy theorist killed his two young children, claiming that his wife had "Serpent DNA" and his children were monsters.

Of course, QAnon never completely disappeared from social media platforms. Its followers still espoused their beliefs over the past few years, albeit in a more coded manner, to circumvent social media platforms' policies. Now, though, QAnon believers are once again being more open about their radical theories.

The looming November 2024 Presidential election likely plays a role in the sudden resurgence of QAnon on X, as QAnon-believing Trump supporters look to help their chosen candidate. However, Musk and X have actively welcomed these users to their social media service, eagerly providing them with a platform to spread their dangerous falsehoods.

667
165
668
86
submitted 5 months ago by 0x815@feddit.de to c/technology@beehaw.org

Archived version

Here is the report (pdf)

Security researchers at Insikt Group identified a malign influence network, CopyCop, skillfully leveraging inauthentic media outlets in the US, UK, and France. This network is suspected to be operated from Russia and is likely aligned with the Russian government. CopyCop extensively used generative AI to plagiarize and modify content from legitimate media sources to tailor political messages with specific biases. This included content critical of Western policies and supportive of Russian perspectives on international issues like the Ukraine conflict and the Israel-Hamas tensions.

CopyCop’s operation involves a calculated use of large language models (LLMs) to plagiarize, translate, and edit content from legitimate mainstream media outlets. By employing prompt engineering techniques, the network tailors this content to resonate with specific audiences, injecting political bias that aligns with its strategic objectives. In recent weeks, alongside its AI-generated content, CopyCop has begun to gain traction by posting targeted, human-produced content that engages deeply with its audience.

The content disseminated by CopyCop spans divisive domestic issues, including perspectives on Russia’s military actions in Ukraine presented in a pro-Russian light and critical viewpoints of Israeli military operations in Gaza. It also includes narratives that influence the political landscape in the US, notably by supporting Republican candidates while disparaging House and Senate Democrats, as well as critiquing the Biden administration’s policies.

The infrastructure supporting CopyCop has strong ties to the disinformation outlet DCWeekly, managed by John Mark Dougan, a US citizen who fled to Russia in 2016. The content from CopyCop is also amplified by well-known Russian state-sponsored actors such as Doppelgänger and Portal Kombat. It also boosts material from other Russian influence operations like the Foundation to Battle Injustice and InfoRos, suggesting a highly coordinated effort.

This use of generative AI to create and disseminate content at scale introduces significant challenges for those tasked with safeguarding elections. The sophisticated narratives, tailored to stir specific political sentiments, make it increasingly difficult for public officials to counteract the rapid spread of these false narratives effectively.

Public-sector organizations are urged to heighten awareness around threat actors like CopyCop and the risks posed by AI-generated disinformation. Legitimate media outlets also face risks, as their content may be plagiarized and weaponized to support adversarial state narratives, potentially damaging their credibility.

669
29
Emoji history: the missing years (blog.gingerbeardman.com)
670
5
RSS and OPML (libranet.de)
submitted 5 months ago* (last edited 5 months ago) by petrescatraian@libranet.de to c/technology@beehaw.org

Can somebody explain to me how OPML works for RSS? Are these files usually imported into RSS reader apps, or are they used where they are? If I import multiple OPML files with multiple feeds, will the feeds from the first OPML be overwritten by those in the second one, or will they add up? Will article read/unread status be synced across multiple devices if I use the same OPML file, or not?
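Not an authoritative answer, but it may help to see what an OPML file actually contains: it is just an XML outline of feed URLs that a reader imports as a list of subscriptions, and the file itself carries no read/unread state. Here is a minimal Python sketch, using only the standard library and made-up feed URLs, that lists the feeds in such a file.

```python
# Minimal sketch: what an OPML subscription list looks like and how a reader
# might ingest it. The feed URLs below are made up for illustration.
import xml.etree.ElementTree as ET

OPML_SAMPLE = """<?xml version="1.0"?>
<opml version="2.0">
  <head><title>My feeds</title></head>
  <body>
    <outline text="Example Tech Blog" type="rss" xmlUrl="https://example.com/feed.xml"/>
    <outline text="Another Site" type="rss" xmlUrl="https://example.org/rss"/>
  </body>
</opml>"""

root = ET.fromstring(OPML_SAMPLE)
for outline in root.iter("outline"):
    url = outline.get("xmlUrl")
    if url:
        # An importer would add each of these as a subscription; no read/unread
        # state is stored anywhere in the file itself.
        print(outline.get("text"), "->", url)
```

In practice most readers treat an OPML import as adding subscriptions to whatever is already there (typically deduplicating identical feed URLs), and any read/unread syncing happens through the reader's own account or sync service rather than through the OPML file.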

671
20

SMIC, China’s biggest contract chip manufacturer, is seen as critical to Beijing’s ambitions of cutting foreign reliance in its domestic semiconductor industry as the U.S. continues to curb China’s tech power. SMIC lags behind Taiwan’s TSMC and South Korea’s Samsung Electronics, according to analysts.

The company’s first-quarter net income plunged 68.9% from a year earlier to $71.79 million, compared with LSEG analysts’ average estimate of $80.49 million.

Gross margin slid to 13.7% in the quarter – the lowest the firm has recorded in nearly 12 years – according to LSEG data.

Revenue for the first quarter was $1.75 billion, up 19.7% from a year earlier, as customers stocked up on chips, SMIC said. This handily beat the LSEG estimate of $1.69 billion.

"In the first quarter, the IC [integrated circuits] industry was still in the recovery stage and customer inventory gradually improved. Compared to three months ago, we have noticed that our global customers are more willing to build up inventory,” SMIC said on Friday.

Customers are building up inventory to brace for competition and respond to market demand, the firm said, adding that it was unable to fulfil a few rush orders in the first quarter as some production lines were running at near maximum capacity.

SMIC’s chips are found in automobiles, smartphones, computers, IoT technologies and others. More than 80% of its revenue in the first quarter came from customers in China, it said.

Bracing for competition

In a bid to build up competitiveness and increase market share, the firm said it was prioritizing areas such as capacity construction and R&D activities for investments.

"[To] ensure that the company maintain its leading position in fierce market competition and maximize the protection of investor interest ... the company plans not to pay dividends for the year 2023,” said SMIC.

“We believe that as long as there’s demand from customers along with our technology and capacity readiness, we can ultimately be bigger, better and stronger despite the fierce competition.”

The company expects second-quarter revenue to rise by 5% to 7% from the first quarter on strong demand, while gross margin could dip further to between 9% and 11%.

“Along with the increase in capacity scale, depreciation is expected to rise quarter by quarter. So the gross margin is expected to decline sequentially,” SMIC said.

The company was placed on a U.S. trade blacklist in 2020, which requires businesses to apply for a license before they can sell to SMIC, limiting its ability to acquire certain U.S. technology.

In a blow to U.S. sanctions, an analysis of Chinese tech giant Huawei’s Mate 60 Pro smartphone launched last year revealed that it runs on a 7-nanometer chip made by SMIC. The smartphone also appears to support 5G connectivity despite U.S. attempts to cut Huawei from key technologies including 5G chips.

TSMC and Samsung began mass producing 7-nanometer chips in 2018 and currently manufacture 3-nanometer chips — a smaller size denotes more advanced technology.

672
50
673
120
674
111
submitted 5 months ago by mozz@mbin.grits.dev to c/technology@beehaw.org
675
19
An Interview With Jack Dorsey (www.piratewires.com)
submitted 5 months ago by jorge@feddit.cl to c/technology@beehaw.org