276
24

Archived version

“Everyone will go hungry,” one taxi driver said of Wuhan drivers competing against robotaxis from Apollo Go, a subsidiary of Chinese technology giant Baidu.

Ride-hailing and taxi drivers are among the first workers globally to face the threat of job loss from artificial intelligence as thousands of robotaxis hit Chinese streets, economists and industry experts said.

Self-driving technology remains experimental, but China has moved aggressively to green-light trials compared with the U.S., which is quick to launch investigations and suspend approvals after accidents.

Just a few weeks ago, the robotaxi rollout stirred public concern in China, with the issue blowing up on social media after an Apollo Go vehicle struck a pedestrian in Wuhan. Footage of the incident spread online and sparked a wide debate about the problems robotaxis create — especially the threat the technology poses to ride-hailing and taxi drivers.

Authorities in Wuhan felt compelled at the time to respond to the “rumors” about problems caused by robotaxis. The city’s transportation bureau told domestic media that the local taxi industry is “relatively stable” and that Apollo Go operates only 400 robotaxis in the city, rather than 1,000 as many have claimed online.

Despite the safety concerns, fleets are proliferating in China as authorities approve testing to support economic goals. Last year, President Xi Jinping called for “new productive forces,” setting off regional competition.

277
40
submitted 2 months ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/18985984

Palantir, the company named for the dangerous seeing-stones that tended to mislead their users, has announced a partnership with Microsoft to deliver services for classified networks in US defense and intelligence agencies.

“This is a first-of-its-kind, integrated suite of technology that will allow critical national security missions to operationalize Microsoft’s best-in-class large language models (LLMs) via Azure OpenAI Service within Palantir’s AI Platforms (AIP) in Microsoft’s government and classified cloud environments,” the announcement says.

Palantir is a data-analysis company that sucks down huge amounts of personal data to assist governments and companies with surveillance. It is somewhat unclear from the text of the announcement what services Palantir and Microsoft will offer. What we do know is that Microsoft’s Azure cloud services will integrate Palantir products. Previously, Azure incorporated OpenAI’s Chat-GPT4 into a “top secret” version of its software.

278
72
submitted 2 months ago by 0x815@feddit.org to c/technology@beehaw.org

cross-posted from: https://feddit.org/post/1814468

Archived link

Here is the study: The CCP’s Digital Charm Offensive

TikTok Stacking Algorithms in Chinese Government’s Favor, Study Claims

A study published on Thursday asserts TikTok’s algorithms promote Chinese Communist Party narratives and suppress content critical of those narratives, a claim the embattled company forcefully denied to KQED.

Titled “The CCP’s Digital Charm Offensive,” the study by the Rutgers University-based Network Contagion Research Institute argues that much of the pro-China content originates from state-linked entities. ByteDance, a Chinese technology company, owns TikTok.

Institute co-founder Joel Finkelstein wrote that this includes media outlets and influencers, such as travel vloggers who post toothlessly about Chinese regions like Xinjiang, where the government has imprisoned more than 1 million Uyghurs and other mostly Muslim minorities.

“This manipulation is not just about content availability; it extends to psychological manipulation, particularly affecting Gen Z users,” Finkelstein wrote.

[...]

An NCRI analysis published in December looked at the volume of posts with certain hashtags — like “Uyghur,” “Xinjiang,” “Tibet” and “Tiananmen” — across TikTok, Instagram and YouTube. That report found **anomalies in TikTok content based on its alignment with the interests of the Chinese government.** For example, researchers wrote, hashtags about Tibet, Hong Kong protests and the Uyghur population appeared to be underrepresented on TikTok compared with Instagram.
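The comparison NCRI describes can be sketched as a simple normalization exercise: a hashtag's share of posts on one platform divided by its share on another, so raw platform size doesn't skew the result. The function and all the numbers below are invented for illustration, not taken from the study.

```python
# Hypothetical sketch of a cross-platform hashtag comparison: compare a
# hashtag's share of total posts on platform A against platform B.
# All volumes below are invented, not the study's data.

def representation_ratio(posts_a, total_a, posts_b, total_b):
    """Ratio of a hashtag's post share on platform A vs. platform B.

    Values well below 1.0 suggest the tag is underrepresented on A.
    """
    share_a = posts_a / total_a
    share_b = posts_b / total_b
    return share_a / share_b

# Invented example volumes:
tiktok = {"#Uyghur": 500, "total": 1_000_000}
instagram = {"#Uyghur": 4_000, "total": 2_000_000}

ratio = representation_ratio(tiktok["#Uyghur"], tiktok["total"],
                             instagram["#Uyghur"], instagram["total"])
print(f"{ratio:.2f}")  # -> 0.25, i.e. far less visible on TikTok
```

A ratio near 1.0 would indicate comparable visibility; the study's claim is that politically sensitive tags land well below that on TikTok.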

279
89
280
394
submitted 2 months ago by remington@beehaw.org to c/technology@beehaw.org

uBlock Origin will soon stop functioning in Chrome as Google transitions to new browser extension rules.

281
81

Archived version

Researchers discovered a significant flaw that allowed them to take full control of the Windows Update process. This enabled them to build Windows Downdate, a tool that downgrades updates and bypasses all verification steps, including integrity verification and Trusted Installer enforcement.

Additionally, after critical OS components were downgraded — including DLLs, drivers and the NT kernel — the OS reported that it was fully updated and was unable to install future updates. Moreover, recovery and scanning tools failed to detect the problem.

Further escalating this attack, the researchers successfully downgraded Credential Guard’s Isolated User Mode process, Secure Kernel, and Hyper-V’s hypervisor to expose past privilege escalation vulnerabilities.

The research concludes with the discovery of multiple ways to disable Windows virtualization-based security (VBS), including Credential Guard and Hypervisor-Protected Code Integrity (HVCI), even when enforced with UEFI locks.

The result is a fully patched Windows machine that is vulnerable to thousands of previously patched vulnerabilities — turning fixed flaws back into zero-days while the operating system still believes it is “fully patched.”

282
73

Archived version

Here is the FCC proposal (pdf)

The U.S. Federal Communications Commission has proposed new rules governing the use of AI-generated phone calls and texts. Part of the proposal centers on creating a clear definition of AI-generated calls, while the rest focuses on consumer protection by requiring companies to disclose when AI is being used in calls or texts.

"This provides consumers with an opportunity to identify and avoid those calls or texts that contain an enhanced risk of fraud and other scams," the FCC said. The agency is also looking to ensure that legitimate uses of AI to help people with disabilities communicate remain protected.

Today's proposal is the latest action by the FCC to regulate how AI is used in robocalls and robotexts. The commission has already moved to place a ban on AI-generated voices in robocalls and has called on telecoms to crack down on the practice. Ahead of this year's November election, there has already been one notable use of AI robocalls attempting to spread misinformation to New Hampshire voters.

283
41
284
17
285
22

Archived version

Nearly 200 nations approved the United Nations Convention against Cybercrime on Thursday afternoon at a special committee meeting that capped months of complicated negotiations. The treaty, expected to win General Assembly approval within months, creates a framework for nations to cooperate against internet-related crimes, including illegal access and interception of computer information, electronic eavesdropping, and online child sex abuse.

Many cited examples of probable downsides like the case against Rappler, an online Philippine news outlet that angered former President Rodrigo Duterte by reporting critically on his deadly crackdown on illegal drugs and alarming human rights record.

[...]

The deal allows two countries to cooperate on any serious crime with a tech link, said Nick Ashton-Hart, spokesman for the Cybersecurity Tech Accord, a group of 158 technology companies.

The United Nations label attached to the convention could provide cover for repressive countries that want to go after people who use the internet in ways they dislike, according to private companies, international civil rights groups and electronic freedom advocates.

“I think it's a blank check for abuse because it has a very broad scope for domestic and cross-border spying and surveillance and a lack of robust checks and balances,” said Katitza Rodríguez, the policy director for global privacy at the Electronic Frontier Foundation.

[...]

“The final result doesn't create more online safety and will be used to justify repression,” Ashton-Hart said.

“It's going to happen more now because now countries that want to do this can point to a UN treaty to justify cooperating on repression,” he said.

286
93
submitted 2 months ago by Hirom@beehaw.org to c/technology@beehaw.org
287
29
submitted 2 months ago by JRepin@lemmy.ml to c/technology@beehaw.org

cross-posted from: https://lemmy.ml/post/18923426

The Israeli army is using Amazon’s cloud service to store surveillance information on Gaza’s population, while procuring further AI tools from Google and Microsoft for military purposes, an investigation reveals.

288
29

alt-text for thumbnail in case it embeds: it is an image of a queer flag with an infinity symbol, on a drawn wooden background with the words "autistic people mistaken for AI" on it

289
98
290
50
submitted 2 months ago by nzmaa@lemy.lol to c/technology@beehaw.org
291
40

Archived version

Microsoft embraces a very specific kind of spyware

Palantir, the company named for the dangerous seeing-stones that tended to mislead their users, has announced a partnership with Microsoft to deliver services for classified networks in US defense and intelligence agencies.

“This is a first-of-its-kind, integrated suite of technology that will allow critical national security missions to operationalize Microsoft’s best-in-class large language models (LLMs) via Azure OpenAI Service within Palantir’s AI Platforms (AIP) in Microsoft’s government and classified cloud environments,” the announcement says.

Palantir is a data-analysis company that sucks down huge amounts of personal data to assist governments and companies with surveillance. It is somewhat unclear from the text of the announcement what services Palantir and Microsoft will offer. What we do know is that Microsoft’s Azure cloud services will integrate Palantir products. Previously, Azure incorporated OpenAI’s Chat-GPT4 into a “top secret” version of its software.

292
14
submitted 2 months ago by t3rmit3@beehaw.org to c/technology@beehaw.org

Hello Bees!

I've got a couple of projects lined up that I want to use SBCs (single-board computers) for, and I admit that I have very little knowledge about how the different SBCs from different manufacturers compare to each other, so I figured I'd get y'all's help.

Project 1: Portable media server

This is something I've been wanting for a while in order to make long car trips that involve low or no internet access more enjoyable. The basic idea I have is an SBC with a 2-4 M.2 SSDs, wireless, and bluetooth, that I can load up with media and run Jellyfin on, and then connect to with whatever devices I have around (whether that's a tablet, a smart tv in a hotel, etc). I want to do this as an SBC versus on a laptop partially so I can power it off my car more easily, and potentially have the car play music from it while driving.

I'm leaning towards something like the CM3588 from FriendlyElec, so I could RAID 5 some 4TB M.2 SSDs and get ~11.5TB usable (which would match my current Jellyfin home server setup). I'd love to hear your thoughts on this for this kind of portable use case, and any recommendations on alternatives or other routes to explore.
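The RAID 5 arithmetic above can be sanity-checked: with N drives, usable space is (N − 1) × drive size, since one drive's worth of capacity goes to parity. A quick sketch (note that four decimal 4TB drives come out to 12TB raw usable, which is roughly 10.9TiB in binary units — close to the ~11.5TB figure depending on how you count):

```python
# RAID 5 capacity check: one drive's worth of space holds parity, and
# marketing terabytes (10^12 bytes) shrink when expressed in tebibytes
# (2^40 bytes). No filesystem overhead is accounted for here.

def raid5_usable_tib(num_drives, drive_tb):
    """Usable TiB for a RAID 5 array of num_drives x drive_tb (decimal TB) drives."""
    usable_bytes = (num_drives - 1) * drive_tb * 10**12
    return usable_bytes / 2**40

print(f"{raid5_usable_tib(4, 4):.2f} TiB")  # roughly 10.9 TiB from 4x 4TB
```

Whether you land nearer 10.9 or 11.5 depends on drive count, decimal vs. binary units, and filesystem overhead.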

Project 2: Miniature AI Machine

I've enjoyed experimenting with LLMs and StableDiffusion, and I want to make something a little faster and more targeted towards AI without building a 5U GPU server (nor do I have a spare $14.5k for a barebones setup of one). I've seen SBCs targeting AI use via baked-in NPUs, or with NPU expansion slots, and I'm interested in what y'all think about this approach.

I've also seen people with rPi clusters ostensibly for ML applications, but never any real write-ups on how these perform compared to a regular (E-)ATX machine with a high-end GPU.

293
28
Introducing Raspberry Pi Pico 2 (www.raspberrypi.com)

We’re happy to announce the launch of Raspberry Pi Pico 2, our second-generation microcontroller board, built on RP2350: a new high-performance, secure microcontroller designed here at Raspberry Pi.

With a higher core clock speed, twice the memory, more powerful Arm cores, new security features, and upgraded interfacing capabilities, Pico 2 delivers a significant performance and feature uplift, while retaining hardware and software compatibility with earlier members of the Pico series.

Pico 2 is on sale now, priced at $5.

294
76
submitted 2 months ago by Dnb@lemmy.dbzer0.com to c/technology@beehaw.org
295
22

Archived version

  • Worldcoin has been suspended in several countries over data protection concerns.
  • Tools for Humanity, the company behind Worldcoin, lobbied Colombian authorities to ease their concerns about privacy.
  • A day after Worldcoin orbs went live, Colombia’s personal data regulator launched an investigation to determine if the service complies with the law.
296
63

Archived version

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

297
17

cross-posted from: https://midwest.social/post/15454358

A new study suggests virtual reality pain relief interventions may be effective at reducing pain in hospitalized populations with cancer.

298
67
submitted 2 months ago* (last edited 2 months ago) by Powderhorn@beehaw.org to c/technology@beehaw.org

I'm leaving the hed as-is per protocol, but the larger story here seems to be we've already hit the point where LLMs produce better prompts for other LLMs than human prompt engineers do.

This is not in my wheelhouse but feels like something of a marker being laid down far sooner than anyone was publicly expressing. The fact itself isn't all that surprising since we don't think in weights, and this is so far domain specific, but people were unironically talking about prompt engineering being a field with a promising future well into this year.

I use ChatGPT daily for work. Much of what I do is rewriting government press releases for a trade publication, so I'll often have ChatGPT paraphrase (the prompt is literally "paraphrase:") paragraphs, which I'll then paste into my working document after comparing to the original and making sure something festive didn't show up in translation.

Sometimes, I have to say "this was a terrible result with almost no deviation from the original and try again," at which point I get the result I'm looking for.

As plagiarism goes, no one's going to rake you over the coals for a press release, written to be run verbatim. And within that subset, government releases are literally public domain. Still, I've got these fucking journalism ethics.

So, I've got my starting text (I've not tried doing a full story in 4o yet) from which I'll write my version knowing that if I do end up changing "enhanced" to "improved" where the latter is the original in the release, I'm agreeing with an editorial decision, not plagiarizing.

For what I do, it's a godsend. For now. But because I can define the steps and reasoning, an LLM can as well, and I see no reason the linked article is wrong in assuming that version would be better than what I do.

From there, I add quotes, usually about where they were in the release but stripped of self-congratulatory bullshit ("remove all references in quotes to figures not quoted themselves in the story and recast with unquoted intro to match the verb form used in the predicate, where the quote picks up" would, frankly, get you 90% of the way there) and compile links ("For all proper nouns encountered, search the Web to find the most recent result from the body issuing the release; if none found, look on other '.gov' sites; if none found, look for '.org' links; if none, stop attempting to link and move on to next proper noun").
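The link-compilation cascade described in that parenthetical is already algorithm-shaped, which is the poster's point. A minimal sketch, with the web search stubbed out (`search` is a hypothetical callable taking a proper noun and a site filter; a real version would hit a search API):

```python
# Sketch of the link-resolution cascade: try the issuing body's own site,
# then other .gov sites, then .org sites, else give up on that noun.
# `search` is a stub standing in for a real search-API call.

def find_link(noun, issuing_domain, search):
    """Return the first URL found for `noun`, trying site filters in order."""
    for site in (issuing_domain, ".gov", ".org"):
        url = search(noun, site)
        if url:
            return url
    return None  # stop attempting to link; move on to the next proper noun

# Stubbed search results for demonstration (invented URL):
fake_index = {("FERC", "ferc.gov"): "https://www.ferc.gov/news"}

def stub_search(noun, site):
    return fake_index.get((noun, site))

print(find_link("FERC", "ferc.gov", stub_search))       # found on issuing site
print(find_link("Acme Corp", "ferc.gov", stub_search))  # None: no link added
```

That the whole policy fits in a dozen lines is exactly why the poster expects an LLM pipeline to absorb this step.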

It sounds like all this (and more!) could be done by LLMs today, relegating me to the role of copyeditor (not the briar patch!). Cool. No one's reading my stories about HVDC transmission lines for my dry wit, so with a proper line of editing, the copy would be just as readable, and I'd have more time to fact-check things or find a deeper resource to add context.

But then how much more quickly do we get to a third layer of machine instructions that takes over everything that can be turned into an algorithm in my new role? At a certain point, all I have to offer that seems unattainable for LLMs (due to different heuristics and garbage training data) even in the medium term is news judgment, which isn't exactly a high-demand skill.

This development worries me far more than anything I've read about LLM advancements in quite some time.

299
68
Microsoft Ruined Windows (www.youtube.com)

Microsoft is ruining Windows. It just keeps getting worse. Whether it be their insistence on AI and cloud garbage, or just a general sense of incompetence, I can’t help but feel like the operating system has seen better days.

Normally I wouldn’t care too much, big tech ruins another thing, whatever.

But the problem is Microsoft has such a dominant market share that you can’t really escape them.

I guess unless you use a Mac or something I don’t know.

300
210

“I think the existing, altruistic, free version of Reddit will continue to exist and grow and thrive just the way it has,” Huffman said. “But now we will unlock the door for new use cases, new types of subreddits that can be built that may have exclusive content or private areas, things of that nature.”


Technology

37664 readers

A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.


This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 2 years ago