
They stopped that ad campaign about 15 years ago, and they started it closer to 20 years ago.

Doctors with Borders Made for Them.

very smart much secure

Upvoted, agreed, and all I really want to say is thanks for bringing more Brian Blessed into my life. Never enough of him.

Montgomery Scott of the Starship Enterprise has entered the chat.

[-] SnotFlickerman@lemmy.blahaj.zone 4 points 2 hours ago* (last edited 2 hours ago)

Every answer so far is wrong.

I wouldn't say wrong so much as leaving out the detail that LLMs aren't evil and that open source LLMs are really what the world should be aiming for, if anything. Like any tool, it can be used as a weapon and for ill purposes. I can use a hammer to build a house as much as I can use it to cave in someone's skull.

But even in the open source world, LLMs have not led to a massive increase in new tools, or a massive increase in bugs found, or a massive increase in open source productivity... all things LLMs promise, but have yet to deliver on in the open source world. Given how much energy they use, we ought to be asking whether it's actually, truly beneficial to burn so much energy on something that has yet to prove it delivers the promised increase in open source productivity.

[-] SnotFlickerman@lemmy.blahaj.zone 0 points 2 hours ago* (last edited 2 hours ago)

Someone's been watching way too many movies and isn't familiar yet with how mind bogglingly stupid "AI" actually is.

JARVIS can think on its own; it doesn't need to be told to do anything. LLMs cannot think on their own; they have no intention and can only respond to input. They cannot create "thoughts" on their own without being prompted by a human.

The reason they spout so much BS is because they don't even really think. They cannot tell the difference between truth and fiction and will be just as happily confident in the truth of their statements whether they are being truthful or lying because they don't know the fucking difference.

We're fucking worlds away from a JARVIS, man.

It's like half the stuff they claim AI does, like those "AI stores" Amazon had, where you just picked up stuff and walked out with it and the "AI would intelligently figure out what you bought and apply it to your account." That "AI" was actually a bunch of low-paid people in third-world countries reviewing video footage. It was never fucking AI to begin with, because nothing we have even comes close to that fucking capability without human intervention.

[-] SnotFlickerman@lemmy.blahaj.zone 0 points 2 hours ago* (last edited 2 hours ago)

Musk just tweeted out a death threat against a Presidential candidate and still has fucking security clearance.

I mean, this really just belies all the lies post-9/11 from the conservatives about how they cared deeply about US security. (It was Security Theater then, it's Security Theater now)

Just like any shitty corporation, security is almost always compromised by high-level officials that think the rules do not apply to them.

How the fuck is this country even still standing at this point with this chicanery and buffoonery at the reins?

[-] SnotFlickerman@lemmy.blahaj.zone 22 points 17 hours ago* (last edited 17 hours ago)

Yeah because Sony's been making real solid business decisions lately to retain players.

Like Helldivers 2 is still a wild success story, right? Right??

Oh... Wait... I guess they alienated a bunch of customers with that.

Or well, what about the PS5, I mean, it's the most popular cons-

Oh... Wait... The PS5 Pro is priced so high that people are buying PCs instead, and they raised the prices of their already absurdly priced controllers with stick drift issues again.


Rootkit Sony, you mean those fucks? Never forget.

[-] SnotFlickerman@lemmy.blahaj.zone 17 points 18 hours ago

Lula doesn't seem quite as straightforwardly pro-Russia as Bolsonaro was, but it's very clear their economic association with Russia makes them unable to be truly neutral arbiters of the situation, no matter how much Lula wants to present Brazil as attempting to be neutral.

[-] SnotFlickerman@lemmy.blahaj.zone 6 points 20 hours ago* (last edited 20 hours ago)

No problem, it's easy enough to happen online, and I was kind of vague.

[-] SnotFlickerman@lemmy.blahaj.zone 7 points 20 hours ago* (last edited 20 hours ago)

It was more of a comment on the meme, not the actual original content of the photo.

If we're taking OP's word on the meme to be true, then it's accepted that most people use clever wordplay to make themselves seem more competent and important on their resumes than they actually are, putting every tiny accomplishment front and center. Some people even outright lie on their resumes to get ahead.

I'm speaking to the idea that perhaps someone like OP also has skills but is more humble in their approach and is willing to rely on their affability over a less colorful resume. I'm personally saddened by it; I think we need a world where an ounce of humility is a good thing, and where being willing to accept our limitations instead of building ourselves up beyond what we really are is good too. The world of resumes refuses any shred of humility, and I think the world suffers for it.

I actually agree with your interpretation of the photo itself.

1

Don't make me tap the beam.

1
submitted 1 week ago* (last edited 1 week ago) by SnotFlickerman@lemmy.blahaj.zone to c/lemmyshitpost@lemmy.world
135
submitted 2 months ago* (last edited 1 month ago) by SnotFlickerman@lemmy.blahaj.zone to c/nostupidquestions@lemmy.world

As stupid as it says on the tin. Can you remove hair clogs with Nair?

EDIT: I don't actually have a drain that needs to be unclogged. This is a showerthoughtquestion.

98
submitted 2 months ago* (last edited 2 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/asklemmy@lemmy.ml

EDIT: Thanks so much everyone. Great answers. This has been fun. Keep it going as long as you want!

DISCLAIMER: Silly Thought Exercise: NOT AN ENDORSEMENT OF REPLACING BIDEN. I personally do not think replacing Biden is a good idea at this stage in the election. I think that's more dangerous than keeping him, sadly, but he's who we've got. I'm just looking for shitposty thoughts on this question, please and thank you.


What over-the-top absurd person would you choose to replace Biden who you think could actually body Trump, and why?

For example, my choice is based on the idea that the only thing that makes a bully like Trump wilt is a bigger bully. Secondly, US citizens love trash talking and sports, and they will absolutely vote for someone who is already famous; they certainly love their celebrities. Finally, what better sport for trash talk than basketball?

In that, my choice would be basketball legend Larry Bird. (he's famously apolitical, so it's hard to know if he would actually be politically aligned against Trump.)

...but, the thing is, Larry Bird is a masterclass trash talker.

And that is really what throws Trump off and throws him into obscene tantrums where his composure is lost and he comes off like a whining loser: when he's been taken down a peg by someone else. Nothing sticks deeper in his craw. I don't think he could handle Larry Bird's level of shit-talk, Bird is like god-tier.

I can imagine Bird calling Trump out and saying he can smell his shit-filled diaper from across the auditorium, though obviously Bird would describe it more colorfully than I could. The thing is, I can also see that absolutely throwing Trump into hysterics.

Also, at 67 Bird's a fucking spring chicken compared to Biden or Trump.

So, I'm hoping for answers that are a bit silly, like this. Larry Bird is obviously not actually a good choice for this. I just like chuckling at the idea, because real life has gotten so absurd I need to hide in even deeper absurdity.


What's your absurd Biden replacement? Please, I think we could use some laughs.

30
submitted 4 months ago* (last edited 4 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/technology@lemmy.world

Copied from Reddit's /r/cscareerquestions:

The US Department of Labor is proposing a rule change that would add STEM occupations to their list of Schedule A occupations. Schedule A occupations are pre-certified and thus employers do NOT have to prove that they first sought American workers for a green card job. This comes on the heels of massive layoffs from the very people pushing this rule change.

From Tech Target:

The proposed exemption could be applied to a broad range of tech occupations including, notably, software engineering -- which represents about 1.8 million U.S. positions, according to U.S. labor statistics data -- and would allow companies to bypass some labor market tests if there's a demonstrated shortage of U.S. workers in an occupation.

Currently, the comments include heavy support from the libertarian think tank Cato and the American Immigration Lawyers Association.

The San Francisco Tech scene has been riddled with CEOs whining over labor shortages for the past few months on Twitter/X amidst a sea of layoffs from Amazon, Meta, Google, Tesla, and much more. Now, we know that it's an attempt at influencing the narrative for these rule changes.

If you are having a hard time finding a job now, this rule change will only make things worse.

From the US Census Bureau:

Does majoring in STEM lead to a STEM job after graduation?

The vast majority (62%) of college-educated workers who majored in a STEM field were employed in non-STEM fields such as non-STEM management, law, education, social work, accounting or counseling. In addition, 10% of STEM college graduates worked in STEM-related occupations such as health care.

The path to STEM jobs for non-STEM majors was narrow. Only a few STEM-related majors (7%) and non-STEM majors (6%) ultimately ended up in STEM occupations.

If you or someone you know has experienced difficulty finding an engineering job post-graduation amidst this so-called shortage, then please submit your story in the remaining few days that the public comment period is still open (ends May 13th).

Public comment can be made, here:

https://www.regulations.gov/document/ETA-2023-0006-0001/comment

Please share this with anyone else you feel will be affected by this rule change.


547
submitted 4 months ago* (last edited 4 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/technology@lemmy.world

Edward Zitron has been reading all of google's internal emails that have been released as evidence in the DOJ's antitrust case against google.

This is the story of how Google Search died, and the people responsible for killing it.

The story begins on February 5th 2019, when Ben Gomes, Google’s head of search, had a problem. Jerry Dischler, then the VP and General Manager of Ads at Google, and Shiv Venkataraman, then the VP of Engineering, Search and Ads on Google properties, had called a “code yellow” for search revenue due to, and I quote, “steady weakness in the daily numbers” and a likeliness that it would end the quarter significantly behind.

HackerNews thread: https://news.ycombinator.com/item?id=40133976

MetaFilter thread: https://www.metafilter.com/203456/The-core-query-softness-continues-without-mitigation

48

Archive Options Failing, Text Follows:

Sam Altman’s Knack for Dodging Bullets—With a Little Help From Bigshot Friends

The OpenAI CEO lost the confidence of top leaders in the three organizations he has directed, yet each time he’s rebounded to greater heights

Minutes after the board of OpenAI fired CEO Sam Altman, saying he failed to be truthful, he exchanged texts with Brian Chesky, the billionaire chief executive of Airbnb.

“So brutal,” Altman wrote to his friend. Later that day, Chesky told Microsoft’s CEO Satya Nadella, OpenAI’s biggest partner, “Sam has the support of the Valley.” It was no exaggeration.

Over the weekend, Altman rallied some of Silicon Valley’s most influential CEOs and investors to his side, including Vinod Khosla, co-founder of Sun Microsystems and the founder of Khosla Ventures, OpenAI’s first venture-capital investor; Ron Conway, an early investor in Google and Facebook; and Nadella. Days later, Altman returned as OpenAI’s chief executive.

Altman’s firing and swift reversal of fortune followed a pattern in his career, which began when he dropped out of Stanford University in 2005 and gained the reputation as a Silicon Valley visionary. Over the past two decades, Altman has lost the confidence of several top leaders in the three organizations he has directed. At every crisis point, Altman, 38 years old, not only rebounded but climbed to more powerful roles with the help of an expanding network of powerful allies.

A group of senior employees at Altman’s first startup, Loopt—a location-based social-media network started in the flip-phone era—twice urged board members to fire him as CEO over what they described as deceptive and chaotic behavior, said people familiar with the matter. But the board, with support from investors at venture-capital firm Sequoia, kept Altman until Loopt was sold in 2012.

Two years later, Altman was a surprise pick to head Y Combinator, the startup incubator that helped launch Airbnb and Dropbox, by its co-founder Paul Graham. Graham had once compared Altman with Steve Jobs and said he was one of the “few people with such force of will that they’re going to get what they want.”

Altman’s job as president of the incubator put him at the center of power in Silicon Valley. It was there he counseled Chesky through Airbnb’s spectacular ascent and helped make grand sums for tech moguls by pointing out promising startups.

In 2019, Altman was asked to resign from Y Combinator after partners alleged he had put personal projects, including OpenAI, ahead of his duties as president, said people familiar with the matter.

This fall, Altman also faced a crisis of trust at OpenAI, the company he navigated to the front of the artificial-intelligence field. In early October, OpenAI’s chief scientist approached some fellow board members to recommend Altman be fired, citing roughly 20 examples of when he believed Altman misled OpenAI executives over the years. That set off weeks of closed-door talks, ending with Altman’s surprise ouster days before Thanksgiving.

Altman’s gifts as a deal-maker, talent scout and pitchman helped turn OpenAI into a business some investors now value at $86 billion. The loyalty he engendered through his success mobilized high-profile supporters after his firing and inspired employees to threaten a mass exit.

“A big secret is that you can bend the world to your will a surprising percentage of the time,” Altman wrote in his personal blog two months before his exit from Y Combinator.

Over his career, Altman has shown skill in bending circumstances to his favor. His ability to bounce back will be tested once again. Scrutiny of his management is expected in coming months. OpenAI’s two new board members have commissioned an outside investigation into the causes of the company’s recent turmoil, conducted by Washington law firm WilmerHale, including Altman’s performance as CEO and the board’s reasons for firing him.

“The senior leadership team was unanimous in asking for Sam’s return as CEO and for the board’s resignation, actions backed by an open letter signed by over 95% of our employees. The strong support from his team underscores that he is an effective CEO,” said an OpenAI spokeswoman.

This article is based on interviews with dozens of executives, engineers, current and former employees and friends of Altman’s, as well as investors.

Center stage

Altman was a 19-year-old Stanford sophomore studying computer science when he stepped into the limelight at a campus entrepreneur event in 2005. He stood onstage, held up a flip phone and said he had just learned all cellphones would soon have a Global Positioning System, now commonly known as GPS.

Altman asked anyone interested to join him to figure out how best to pair the technologies. He and his co-founders decided on a flip-phone app that would let people track their friends on a map, which Altman would later pitch as a remedy for loneliness.

During a later entrepreneurship competition, Altman impressed Patrick Chung, who had just joined New Enterprise Associates, a venture-capital firm, and was one of the event’s judges. NEA teamed up with Sequoia and offered Altman and his team $5 million to pursue their idea.

Altman dropped out of school and Loopt was born. An early investor was Y Combinator, a startup incubator founded by Paul Graham and his then-girlfriend, now wife, Jessica Livingston. Altman soon became a favorite of Graham’s.

A few years after the company’s launch, some Loopt executives voiced frustration with Altman’s management. There were complaints about Altman pursuing side projects, at one point diverting engineers to work on a gay dating app, which they felt came at the expense of the company’s main work.

Senior executives approached the board with concerns that Altman at times failed to tell the truth—sometimes about matters so insignificant one person described them as paper cuts. At one point, they threatened to leave the company if he wasn’t removed as CEO, according to people familiar with the matter. The board backed Altman.

“If he imagines something to be true, it sort of becomes true in his head,” said Mark Jacobstein, co-founder of Jimini Health who served as Loopt’s chief operating officer. “That is an extraordinary trait for entrepreneurs who want to do super ambitious things. It may or may not lead one to stretch, and that can make people uncomfortable.”

Altman doesn’t recall employee complaints beyond the normal annual CEO review process, according to people familiar with his thinking.

Among the most important relationships that Altman made at Loopt was with Sequoia, whose partner, Greg McAdoo, served on Loopt’s board and led the firm’s investment in Y Combinator around that time. Altman also became a scout for Sequoia while at Loopt, and helped the firm make its first investment in the payments firm Stripe—now one of the most valuable U.S. startups.

Michael Moritz, who led Sequoia, personally advised Altman. When Loopt struggled to find buyers, Moritz helped engineer an acquisition by another Sequoia-backed company, the financial technology firm Green Dot.

“I saw in a 19-year-old Sam Altman the same thing that I see now: an intensely focused and brilliant person whom I was willing to bet big on,” said Chung, now managing general partner of Xfund, a venture-capital firm.

Man versus machine

Graham’s selection of Altman to lead Y Combinator in 2014 surprised many in Silicon Valley, given that Altman had never run a successful startup. Altman nonetheless set a high goal—to expand the family run operation into a business empire.

He made as many as 20 introductions a day, helping connect people in Y Combinator’s orbit. He helped Greg Brockman, the former chief technology officer of Stripe, make a mint selling his shares in the successful payments company to buyers including Y Combinator. Brockman co-founded OpenAI in 2015 and became its president.

Altman turned Y Combinator into an investing powerhouse. While serving as president, he kept his own venture-capital firm, Hydrazine, which he launched in 2012. He caused tensions after barring other partners at Y Combinator from running their own funds, including the current chief executive, Garry Tan, and Reddit co-founder Alexis Ohanian. Tan and Ohanian didn’t respond to requests for comment.

Altman also expanded Y Combinator through a nonprofit he created called YC Research, which served as an incubator for Altman’s own projects, including OpenAI. From its founding in 2015, YC Research operated without the involvement of the firm’s longtime partners, fueling their concern that Altman was straying too far from running the firm’s core business.

Altman believed OpenAI was primed for AI breakthroughs, including artificial general intelligence—an AI system capable of performing intellectual tasks as well as or better than humans. Altman helped recruit Ilya Sutskever from Google to OpenAI in 2015, which attracted many of the world’s best AI researchers.

By early 2018, Altman was barely present at Y Combinator’s headquarters in Mountain View, Calif., spending more time at OpenAI, at the time a small research nonprofit, according to people familiar with the matter.

The increasing amount of time Altman spent at OpenAI riled longtime partners at Y Combinator, who began losing faith in him as a leader. The firm’s leaders asked him to resign, and he left as president in March 2019.

Graham said it was his wife’s doing. “If anyone ‘fired’ Sam, it was Jessica, not me,” he said. “But it would be wrong to use the word ‘fired’ because he agreed immediately.”

Jessica Livingston said her husband was correct.

To smooth his exit, Altman proposed he move from president to chairman. He pre-emptively published a blog post on the firm’s website announcing the change. But the firm’s partnership had never agreed, and the announcement was later scrubbed from the post.

For years, even some of Altman’s closest associates—including Peter Thiel, Altman’s first backer for Hydrazine—didn’t know the circumstances behind Altman’s departure.

Resurrection

At OpenAI, Altman recruited talent, oversaw major research advances and secured $13 billion in funding from Microsoft. Sutskever, the company’s chief scientist, directed advances in large language models that helped form the technological foundation for ChatGPT—the phenomenally successful AI chatbot. Sequoia was one of OpenAI’s investors.

As the company grew, management complaints about Altman surfaced.

In early fall this year, Sutskever, also a board member, was upset because Altman had elevated another AI researcher, Jakub Pachocki, to director of research, according to people familiar with the matter.

Sutskever told his board colleagues that the episode reflected a long-running pattern of Altman’s tendency to pit employees against one another or promise resources and responsibilities to two different executives at the same time, yielding conflicts, according to people familiar with the matter.

“Ilya has taken responsibility for his participation in the Board’s actions, and has made clear that he believes Sam is the right person to lead OpenAI,” Alex Weingarten, a lawyer representing Sutskever, said in a statement. He described as inaccurate some accounts given by people familiar with Sutskever’s actions but didn’t identify any alleged inaccuracies.

Altman has said he runs OpenAI in a “dynamic” fashion, at times giving people temporary leadership roles and later hiring others for the job. He also reallocates computing resources between teams with little warning, according to people familiar with the matter.

Other board members already had concerns about Altman’s management. Tasha McCauley, an adjunct senior management scientist at Rand Corp., tried to cultivate relationships with employees as a board member. Past board members chatted regularly with OpenAI executives without informing Altman. Yet during the pandemic, Altman told McCauley he needed to be told if the board spoke to employees, a request that some on the board viewed as Altman limiting the board’s power, people familiar with the matter said.

Around the time Sutskever aired his complaints, the independent board members heard similar concerns from some senior OpenAI executives, people familiar with the discussions said. Some considered leaving the company over Altman’s leadership, the people said.

Altman also misled board members, leaving the impression with one board member that another wanted board member Helen Toner removed, even though it wasn’t true, according to people familiar with the matter, The Wall Street Journal reported.

The board also felt nervous about Altman’s ability to use his Silicon Valley influence, so when members decided to fire him, they kept it a secret until the end. They gave only minutes’ notice to Microsoft, OpenAI’s most important partner. In a statement, the board said Altman had failed to be “consistently candid” and had lost its trust, without giving specific details.

Altman retreated to his 9,500-square-foot house, which overlooks San Francisco in the city’s Russian Hill neighborhood.

One of his key allies was Chesky. Shortly after Altman was fired, Chesky hopped on a video call with Altman and Brockman, who had been removed from the board that day and quit the company in solidarity with Altman. Chesky asked why it happened. Altman theorized it might have been about the dust-up with Toner or Sutskever’s complaints.

Satisfied that it wasn’t a criminal matter, Chesky phoned Nadella, the Microsoft CEO.

A small group of Silicon Valley power brokers, including Chesky and Conway, advised Altman and worked the phones, trying to negotiate with the board.

The board named Emmett Shear, an OpenAI outsider, as interim CEO, drawing threats to resign from most of the company’s employees. In another lucky turn of fortune for Altman, Shear was an ally and a mentor of Chesky’s.

Together, Chesky and Shear helped clear a path for Altman’s return.

362
submitted 10 months ago* (last edited 10 months ago) by SnotFlickerman@lemmy.blahaj.zone to c/asklemmy@lemmy.ml

Money wins, every time. They're not concerned with accidentally destroying humanity with an out-of-control and dangerous AI who has decided "humans are the problem." (I mean, that's a little sci-fi anyway, an AGI couldn't "infect" the entire internet as it currently exists.)

However, it's very clear that the OpenAI board was correct about Sam Altman, given how quickly he and many employees bailed to join Microsoft directly. If he was so concerned with safeguarding AGI, why not spin up a new non-profit?

Oh, right, because that was just Public Relations horseshit to get his company a head-start in the AI space while fear-mongering about what is an unlikely doomsday scenario.


So, let's review:

  1. The fear-mongering about AGI was always just that. How could an intelligence that requires massive amounts of CPU, RAM, and database storage even conceivably be able to leave the confines of its own computing environment? It's not like it can "hop" onto a consumer computer with a fraction of the same CPU power and somehow still be able to compute at the same level. AI doesn't have a "body," and even if it did, it could only affect the world as much as a single body could. All these fears about rogue AGI are total misunderstandings of how computing works.

  2. Sam Altman went for fear-mongering to temper expectations and to make others fear pursuing AGI themselves. He always knew his end goal was profit, but like all good modern CEOs, he has to position himself as somehow caring about humanity when it's clear he couldn't give a flying fuck about anyone but himself and how much money he makes.

  3. Sam Altman talks shit about Elon Musk and how he "wants to save the world, but only if he's the one who can save it." I mean, he's not wrong, but he's also projecting a lot here. He's exactly the fucking same, he claimed only he and his non-profit could "safeguard" AGI and here he's going to work for a private company because hot damn he never actually gave a shit about safeguarding AGI to begin with. He's a fucking shit slinging hypocrite of the highest order.

  4. Last, but certainly not least. Annie Altman, Sam Altman's younger, lesser-known sister, has held for a long time that she was sexually abused by her brother. All of these rich people are all Jeffrey Epstein levels of fucked up, which is probably part of why the Epstein investigation got shoved under the rug. You'd think a company like Microsoft would already know this or vet this. They do know, they don't care, and they'll only give a shit if the news ends up making a stink about it. That's how corporations work.

So do other Lemmings agree, or have other thoughts on this?


And one final point for the right-wing cranks: Not being able to make an LLM say fucked up racist things isn't the kind of safeguarding they were ever talking about with AGI, so please stop conflating "safeguarding AGI" with "preventing abusive racist assholes from abusing our service." They aren't safeguarding AGI when they prevent you from making GPT-4 spit out racial slurs or other horrible nonsense. They're safeguarding their service from loser ass chucklefucks like you.


SnotFlickerman

joined 10 months ago