NHacker Next
Molotov Cocktail Is Hurled at Home of OpenAI CEO Sam Altman (nytimes.com)
MontyCarloHall 2 hours ago [-]
I don't think most people in tech are quite aware of the level of visceral AI hatred amongst non-techies. I've personally witnessed the worst Thanksgiving dinnertable fight I've ever seen (after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash), and a divorce (a very solid marriage between two people who were once both staunchly anti-AI unraveled within weeks after one of them changed their tune and adopted AI at work).
lbarrow 2 hours ago [-]
Spitting your food out because the AI generated the recipe is so clearly irrational that I chuckled a bit on reading that.
dirkc 2 hours ago [-]
People talk about AI getting things wrong all the time, so why is it "so clearly irrational" to be doubtful of a recipe that might include ingredients that can make you sick?
VectorLock 2 hours ago [-]
Because I hope that someone whose hands were required to assemble the recipe didn't blindly add ingredients like "bleach" if the AI happened to hallucinate them.
stvltvs 1 hour ago [-]
A naive hope perhaps, but this ignores the risk of LLMs just creating a bad recipe based on the blind combination of various recipes in their training data.
VectorLock 43 minutes ago [-]
As the parent comment said, the people seemed to be enjoying the food otherwise, so the LLM didn't create an unpalatable combination, and I can't think of any combination of edible and unharmful ingredients that would combine into something harmful (when consumed in reasonable amounts).
defen 1 hour ago [-]
Let's take a second to think about the threat vectors here. The two obvious ones I can think of are: "AI hallucinates and tells you to put non-food into the food" and "AI hallucinates and gives you unsafe prep instructions" (e.g. "heat the chicken to an internal temperature of 110 degrees"). For both of those, it's not clear why "random recipe from an internet blog" is safer than something the AI generates. At some level, if someone is preparing your food, you need to trust that they know how to prepare food, no matter where they're getting their instructions from.
strongpigeon 2 hours ago [-]
Because it assumes the person actually making the food has no common sense?
therouwboat 1 hour ago [-]
We had a billion-dollar AI company install a vending machine that was giving stuff away for free, so maybe AI users don't have common sense.
bloody-crow 23 minutes ago [-]
This is an experiment they ran and were prepared to lose money on. It seems perfectly reasonable for an AI company to test their products in adversarial conditions to have a better understanding of its flaws and limitations.
wpm 1 hour ago [-]
If they're asking an LLM for a recipe, they don't.
bloody-crow 16 minutes ago [-]
That's just pure nonsense. My partner is a very competent cook and she invents new recipes and experiments all the time. I don't see why she can't use LLM output as inspiration to combine with her own expertise, sense of taste, and preferences to come up with an excellent dish.
pixel_popping 1 hours ago [-]
My wife does it all the time, and it's actually decent.
steve1977 2 hours ago [-]
People get things wrong all the time as well, so I wouldn't trust them either.
happytoexplain 2 hours ago [-]
People get things wrong in a different, more observable/predictable way. Sure, we are easily tricked dummies and we can't know if a human is right or wrong, but our human-trust heuristics are highly developed. Our AI-trust heuristics don't exist.
steve1977 1 hour ago [-]
I mean, I've had people serve me expired food and chicken that was half raw. The latter I could observe; the former I couldn't so easily. Both were things that could have made me sick.
happytoexplain 1 hour ago [-]
For sure. I'm not defending human perfection, I'm defending human caution (Disclaimer: The format of the preceding sentence was chosen without AI assistance).
mikestew 1 hour ago [-]
Dunno about you, but I like the increased viscosity in my sauces when I use glue:

https://www.bbc.com/news/articles/cd11gzejgz4o

ikkun 2 hours ago [-]
I could see being concerned about food safety; I wouldn't trust an AI recipe to tell me how long/what temperature to cook chicken, and I might not trust someone who uses AI to generate recipes to know either.
ctoth 1 hour ago [-]
Hi! I love to cook! I also use AI to brainstorm recipes sometimes! Wanna try asking Claude, ChatGPT, Gemini, or even Grok what temperature chicken needs to be cooked to? I just asked Claude: 165°F (74°C) internal temperature.

Where does this come from?

ikkun 37 minutes ago [-]
if you ask that question alone, AI is most likely to get it right, but the usual pitfalls of AI apply; they sometimes randomly get things wrong, people are more likely to miss wrong information when it's surrounded with correct information, and LLMs are specifically good at making text that seems correct on the surface. and in my experience, people often use AI specifically because they don't have a lot of knowledge in an area. if you do already know plenty about cooking, I'm sure using AI is probably fine, I just see it as a red flag.

cooking is also a form of art, with a strong social aspect. using AI for it has a similar ick factor to using generative AI for pictures. I'm not saying I immediately distrust anyone using it, but I do think it's a sign that maybe the person cares a bit less about what they're doing.

miloignis 57 minutes ago [-]
Arguably, that's wrong - not because it's unsafe, but because it's not the best temperature for any part of the chicken I know of. I'm a big J. Kenji López-Alt and Serious Eats fan, and 165 is too hot for good chicken breast and too cool for good dark meat: https://www.seriouseats.com/chicken-thigh-temperature-techni...
happytoexplain 1 hour ago [-]
I can't tell if you're criticizing the parent or are innocently asking how Claude knows the temperature for chicken.

To be clear in the case of the former: Harm data points have approximately one trillion times the weight of no-harm data points, as a rule of thumb.

stvltvs 1 hour ago [-]
Even if it can give the right answer when asked, will it necessarily account for that in a recipe it generates? A beginning cook may not know enough to ask.
lbarrow 1 hour ago [-]
Yea, I suppose that is fair regarding cook timings.
layer8 1 hour ago [-]
I interpret it as an expression of disgust. Similar to how people will stop reading and throw away a good book when they learn the author is a morally reprehensible person.
wak90 49 minutes ago [-]
Like, I wouldn't spit the food out.

But I would be disgusted. Someone told me they planned their vacation with an llm and I couldn't help but express disdain for this friend of mine.

Why are we outsourcing creativity and research and interest in discovery to an llm?

thevinter 35 minutes ago [-]
Probably because the person wasn't interested in planning their vacation and wanted just to enjoy the end result?

Let's not assume different people find the same parts of the process enjoyable.

bloody-crow 11 minutes ago [-]
Really don't get this take. I really hate vacation planning and would outsource this part in a heartbeat. My partner does this for me currently and she seems to quite enjoy it, but if she didn't, the LLM-generated plans I've tried out of curiosity were just as good.
lostmsu 26 minutes ago [-]
> Why are we outsourcing creativity and research and interest in discovery to an llm?

This is also weird. I hate planning vacations, but I like going on them.

pixel_popping 2 hours ago [-]
but was it done with GPT-5.4 xhigh with an adversarial loop?
happytoexplain 2 hours ago [-]
I mostly agree that it's an overreaction. However, "irrational" is a really bad choice of word. Every non-technical person understands that sometimes AI says wrong things - like, random, crazy wrong things, not just a little off. It's just a general rule kept in the back of the mind. Food is easily in that realm of "be careful". Did the AI produce a recipe that would be harmful to you and the cook didn't notice? Almost certainly not. So, sure, they were being over-cautious. But "irrational"? No, no, no. It's definitely rational.

Look at what you're writing.

"Doing X is so clearly irrational that I chuckled a bit."

Please don't perpetuate the image of the elitist techie. That is what was just firebombed.

misiti3780 2 hours ago [-]
lol. If you're against AI recipes, you have bigger problems.
ajross 2 hours ago [-]
The very fact that your takeaway from that story was "look at how dumb my enemies are" is why this is a conflict worth worrying about.

Are you right? Yeah, basically. Are you going to laugh at your stupid neighbors until they burn your house down in rage? Maybe? You don't treat fear with malice.

snielson 1 hour ago [-]
My wife runs a food blog and sometimes uses AI to come up with recipes she tests on us first. One of the best dishes she’s ever made (and one of the best I’ve ever eaten) was pork with an apricot sauce. The pork was fine, but the sauce was absolutely incredible! I’d put it on any kind of meat. Funny thing is, I don’t even like apricots, but the sauce was amazing. My wife does have one advantage, which is that she knows when the AI has hallucinated something crazy and makes appropriate adjustments. I guess it's like anything. AI can be a big help to those who already have a threshold level of background knowledge in a field but can cause big problems for those who don't.
layer8 56 minutes ago [-]
You can’t write something like this and not share the recipe.
TehCorwiz 2 hours ago [-]
Well, Sam Altman and Jensen Huang are going around bragging about how many people they're going to push out of employment. Might have something to do with it.
layer8 39 minutes ago [-]
From a recent NBC News poll, “the only topics that were less popular than AI were the Democratic Party and Iran”: https://www.nbcnews.com/politics/politics-news/poll-majority...
happytoexplain 2 hours ago [-]
There is very strong anti-AI sentiment among "techies" too. It's just not absolute or generalized (AI is a huge umbrella term).
metalliqaz 1 hour ago [-]
You might call me a "techie", and I both use AI and have very strong anti-AI sentiment. I don't think this is a contradiction, because I believe that while the technology itself is not bad, the way that people use it definitely is.

People trust AI outputs in ways they should not. They don't understand its sycophantic design and succumb to AI psychosis. They deploy it in antisocial ways, for war, or spam, or scams. They use it to justify layoffs. They use it as a justification to gobble up public funds. They use it to power their winner-take-all late-stage capitalism economy. It goes on and on.

slopinthebag 1 hour ago [-]
I agree completely. The way it's marketed and used is a big part of my distaste, the other part is big tech / AI companies and their actions and ethics. It's why I'm a huge supporter of open source and locally run models, and I am moving most of my workflow to things that I can run on my own machine, or at least on a GPU that I can rent from a plethora of providers.
linkage 2 hours ago [-]
Politics really is a substitute for religion in America
kelnos 2 hours ago [-]
In secular America at least. Most people in the US are religious, many of them fervently so.

And quite a few of them like to mix their religion with politics.

elephanlemon 2 hours ago [-]
Frankly I think a lot of these people are politics first. How else do you explain the dissonance between Jesus’s teachings and their political opinions?
MiguelX413 24 minutes ago [-]
Their politics are perfectly in line with their Christian-themed cult.
misiti3780 2 hours ago [-]
this is true, but thankfully, religion is declining in America. although if people are replacing it with politics, maybe we need another revival
leosanchez 2 hours ago [-]
Religious people can be anti-AI too.
MontyCarloHall 2 hours ago [-]
Indeed, but the rage I've seen during political fights at family gatherings (and another politics-induced divorce) pales in comparison to the rage I saw in these two anecdotes. The worst political debates I've seen involved raised voices and some name calling, not spitting food and smashing plates. The only other political divorce I've seen slowly simmered over a few years after Trump was first elected, not in a literal matter of weeks.
LooseMarmoset 2 hours ago [-]
From my own perspective, the "visceral hatred" isn't so much at AI (which I use almost exclusively to generate funny pictures of myself and coworkers) but at the executives that view it as a way to enshittify society.

turning myself (an overweight bearded guy) into an animated hula dancer and turning my coworker into the Terminator and sinking into molten steel don't seem to inspire the same hatred. unless you don't like hula dancers.

rishabhaiover 1 hour ago [-]
This was obviously a fictional thanksgiving dinner. Nobody is this geezed up about AI assistance.
TripleTree 1 hour ago [-]
I would absolutely stop eating a meal if I learned AI was involved in creating it. I suppose I wouldn't literally spit it out but I wouldn't take another bite.
stvltvs 1 hour ago [-]
Nobody in your circle of friends/acquaintances perhaps.
rishabhaiover 26 minutes ago [-]
You're okay with sitting in the back seat of a car while it drives you around the city, though.
sillyfluke 47 minutes ago [-]
I must live in the upside down. If there are any ardent anti-AI people I come across they're techies. Whereas non-techies are either oblivious or completely and comically locked-in as caricatured in that South Park episode.
Kon5ole 1 hour ago [-]
The remarkable part of your anecdote is the behavior. Seems to me some humans nowadays are less tolerant of any difference in opinion, AI is just the current reason to pick a fight.

Wonder why that is, and if we'll grow out of it peacefully.

alfalfasprout 2 hours ago [-]
It's quite prevalent in tech too-- however, folks tend to be quiet because the "use AI for everything or else" hammer is being used across the industry.
nothinkjustai 2 hours ago [-]
Not just non-techies. Plenty of techies share that same visceral hatred. Some of them even use these tools themselves, because it’s a complicated issue with nuances.
throwanem 2 hours ago [-]
Surely there must have been underlying tensions in that marriage.

(I don't feel at all confident in that statement; I am requesting reassurance.)

MontyCarloHall 2 hours ago [-]
They are pretty good friends of mine and I never sensed any tension. It really was a marriage-ending bolt out of the blue, like discovering an affair or severe financial infidelity.
throwanem 2 hours ago [-]
I don't really want to say "thank you." That story, more to the point that I can't find a priori cause to doubt it, makes me glad I'm about to go enjoy a gorgeous spring afternoon full of birdsong and sunshine. But I appreciate your taking the time to follow up.
lexandstuff 2 hours ago [-]
I've found that most non-tech people are indifferent or, at worst, utterly bored by any mention of AI.

The tech people are the ones that have the strongest opinions one way or the other.

littlestymaar 2 hours ago [-]
> after someone revealed that their recipe was AI-generated, a couple people literally spat out the food they were enjoying and threw their plates in the trash

Not entirely unwarranted given the track record of LLMs as a chef though:

https://www.theguardian.com/world/2023/aug/10/pak-n-save-sav...

https://www.bbc.com/news/articles/cd11gzejgz4o

Of course it was two years ago and it's unlikely to happen again, but that's the drawback of the “move fast and break things” attitude: sometimes you've broken public perception and it's hard to fix afterwards.

rvz 1 hour ago [-]
Crypto doesn't get that much hatred, since you don't need to participate in the space, even in non-techie circles. It doesn't affect them and can be safely ignored in its own bubble.

Mentioning "AI" in non-techies circles is a bad idea. It tells you that many here are in a massive bubble and unaware of the visceral hate against AI because it directly affects them and they cannot opt-out.

Given that AI takes more than it gives back (jobs, energy, water, houses) of course you will get anti-AI activists.

layer8 59 minutes ago [-]
Except when you’re the victim of ransomware that extorts you to pay some bitcoin. But it seems that fewer people have encountered that than having AI forced upon them.
therobots927 2 hours ago [-]
Most SV people live in a bubble inside of a bubble. They don’t understand how their words come across to a significant portion of the population. If they did they would shut the fuck up.
baal80spam 48 minutes ago [-]
Not sure why you were downvoted so heavily. SV is a bubble if I've ever seen one.
mandeepj 2 hours ago [-]
> a couple people literally spat out the food they were enjoying and threw their plates in the trash

That was an unnecessarily extreme reaction, as if the AI had 3D-printed the ingredients.

strongpigeon 2 hours ago [-]
It is a bit scary how people seem to genuinely be OK with violence (see this reddit thread [0]). Is it just me, or does it feel like the overall "temperature" has gone up?

[0] https://www.reddit.com/r/ChatGPT/comments/1shugf8/firebomb_t...

plorkyeran 1 hour ago [-]
AI company marketing is pretty overwhelmingly "we're going to take away your job and leave you to starve on the streets". People concluding that the public face of this is their enemy who must be stopped is just a really unsurprising outcome.
rvz 1 hour ago [-]
That is what Ilya (and many other employees) (fore)saw.

They did not want a target painted on their backs or being involved with the company responsible for mass job displacement.

Let's hope that SF doesn't turn into a free-for-all after the IPOs, since the silliest thing would be for everyone to move to SF and buy up the houses, and then the have-nots realise who got rich.

I'd donate that money away or give the employees (who have nothing) a one-time bonus / raise like the five-guys owner [0] to not be a target.

[0] https://www.theguardian.com/us-news/2026/mar/27/five-guys-ce...

scoofy 1 hour ago [-]
People become okay with vigilante justice when they see the executive branch as compromised; just look at the insane plot/ending of the film Singham.

Many people see this happening in the US. We should expect to see more vigilante justice and organized crime if we see the executive branch as having a significant principal-agent problem.

layer8 36 minutes ago [-]
It used to be a little less violent: https://www.youtube.com/watch?v=HEMbp6Epfz8
therobots927 2 hours ago [-]
It is scary. You know what’s also scary? Being told a robot is going to take your job and healthcare away.

There’s a lot of scary shit going on.

happytoexplain 2 hours ago [-]
Also scary: Seeing a comment this ostensibly un-controversial in grey.
therobots927 1 hour ago [-]
HN is rigged - downvotes are half fake and explicitly target comments critical of the oligarchy.
pixel_popping 2 hours ago [-]
I agree it is scary, but why would a robot take healthcare away? Wouldn't that be the contrary?
ironman1478 2 hours ago [-]
There are stories about insurance companies using AI when determining if a claim should be let through or denied.

https://www.palmbeachpost.com/story/news/healthcare/2026/03/...

WBrentWilliams 1 hour ago [-]
The quickest way to rile up an existing mob is to make them fear their livelihood is being reduced or removed. The _robot_ is not taking away healthcare, but the effect of the robot existing hits directly at the livelihood of the masses.

In the US, health insurance is largely tied to employment. Health insurance, in a personal economic sense, reduces to being able to pay for healthcare. This policy is largely a leftover of World War II-era employment policies. No one is taking healthcare _away_ from anyone (strictly speaking), but the ability to _pay_ for healthcare is reduced to zero when employment ceases. Accessing the safety net is a separate skillset, and that skillset becomes more difficult to acquire because the political class does not want to provide healthcare for everyone, only for the worthy (their loyal voters).

I grew up in and am still a member of the precariat. I am educated and doing well, but I wear a well-polished pair of golden handcuffs due to how my ability to afford healthcare for myself, and my family, is tied to employment. Politically, I _do not_ like being tied to my employer by such a chain, but my arguments to change the system have been met with quite firm push-back.

stvltvs 1 hour ago [-]
Insurance companies are using AI (whatever that means in this case) to make coverage denial decisions. That can be reasonably summarized as robots are taking away our healthcare.
whimblepop 2 hours ago [-]
Because healthcare in the US is tied to employment. For most people here, losing a job means losing access to healthcare (partially or totally).
cryptonym 2 hours ago [-]
Because the robot would take their job and having a job is a precondition to healthcare (may vary by country)?
therobots927 2 hours ago [-]
1. Americans need a job to get healthcare

2. Robots take away jobs from Americans and the proceeds to go the owner (investor) class

3. Americans no longer have healthcare

Understand?

pixel_popping 1 hour ago [-]
I understand (I'm not from the US); however, wouldn't healthcare in the US get drastically cheaper (even eventually free?) if hospitals/clinics were staffed by humanoids instead of humans?
threecheese 4 minutes ago [-]
This is definitely a potential future state, but not one I could imagine happening soon. Given that the robots currently deployed do not benefit people directly (and even the indirect benefits of lower costs or better investment returns appear to be captured by the upper tiers of the economy), we have no confidence that they would be deployed to benefit anyone but their owners.

More likely near-term states are less rosy, given that intelligence takes off.

WBrentWilliams 1 hour ago [-]
Interesting idea. I cannot say that I can answer affirmatively or negatively. There are also human elements to be considered. Humans are status-seeking social creatures. There will always be a stain on humanoid-delivered care, no matter how high-quality, as being not as high quality as all-human-delivered care. That is, status accounts for a lot.

I can also draw pictures of how dangerous humanoid care can be, as there is a possibility of a break in the chain of responsibility. If a human medical professional messes up, you (or your survivors) can sue and seek damages directly, as well as sue the hospital and insurance system (with mixed results).

With humanoids? Currently, the bar is higher, as the entity being sued is not the hospital, nor a person, nor even a team. The only entities that can be addressed are the corporation that runs the hospital and the corporation that produced the humanoid. These two entities have an incredibly outsized advantage in terms of sheer delaying tactics, not to mention arbitration clauses and other legal innovations. Most of the injured will simply give up, which is a legal win for the two entities.

In my opinion, it will take a large amount of time, damage, and treasure for humanoid care to lower costs. No actor will willingly give up their cash flow. My view may be too strong.

wak90 38 minutes ago [-]
Lol no
sophacles 1 hour ago [-]
Well, in the US you get healthcare from a job (either directly in the form of insurance or indirectly in the form of the money to pay for healthcare). If the robot takes your job, it takes your healthcare too.

You know this, stop pretending otherwise.

misiti3780 2 hours ago [-]
The narrative I'm hearing is that AI breakthroughs will drive the cost of healthcare to zero (i.e. AlphaFold etc.)
metalliqaz 1 hour ago [-]
[dead]
mghackerlady 1 hour ago [-]
People are apathetic at this point. When a large number of Americans can barely afford to live while being threatened with replacement, while the economy booms on the backs of their claimed obsolescence, they don't care that a billionaire could've gotten hurt, especially when that billionaire is working against their interests.
strongpigeon 1 hour ago [-]
I mean, it's also scary because I don't think it works. People should demand a new deal and lobby for that. Throwing molotovs doesn't help with that.
eschaton 1 hour ago [-]
What happens when lobbying for a new deal fails? Do the people just shrug and accept the fate their feudal lords have determined for them?
pixel_popping 1 hour ago [-]
It clearly did open a discourse on HN at least :)
sophacles 1 hour ago [-]
You're just a smidge away from asking why they can't just eat cake...
strongpigeon 1 hour ago [-]
I think you're extrapolating a lot from my comment... One can reasonably think something has to be done to address the current (and upcoming) economic situation and think that molotov cocktails won't help. Acts like these will likely make things much worse before settling into a new situation that's probably just slightly worse.
sophacles 1 hour ago [-]
Wondering why people might want to resist their lives becoming worse at all just so some assholes can gloat about how much richer they became is literally the same as asking why they can't just eat cake.

Thinking something should be done means nothing is being done. The poor in France didn't start with bread riots. They begged and pleaded and asked nicely first, and while lots of people thought something should be done to help them, nothing was.

Thank you for getting over the line.

strongpigeon 57 minutes ago [-]
Being worried that people choose to channel their energy into actions that undoubtedly make their situation worse rather than have a chance of finding a solution is not the same. Or I guess it depends on how you decide to view things as being "literally the same".
sophacles 52 minutes ago [-]
Worry is not an action that makes something better.

People will take actions when the threat is against their livelihood, health and homes, particularly when there is no action being taken on their behalf. Their risk assessment may be different than yours.

MiguelX413 46 minutes ago [-]
They don't really have another choice, do they?
nothinkjustai 2 hours ago [-]
I don’t think it’s surprising - some people already consider the actions of AI execs and tech companies to be synonymous to violence. Like, comparing something like this to destroying the livelihoods of millions of people, a lot of people would consider the latter far worse.

Temperature is certainly going up, but it definitely hasn’t reached historic levels yet lol.

_bohm 1 hour ago [-]
Structural violence is the term most commonly used for this

https://en.wikipedia.org/wiki/Structural_violence

Analemma_ 2 hours ago [-]
Altman keeps on telling people he’s going to take away their jobs. He says that because it gets cred in tech circles, but in America this is an existential threat, not much different from telling someone “I’m going to break your kneecaps”. Of course some subset of people are going to respond with violence.

The sheer tone-deafness of AI marketing is going to come back to bite us very hard. This is probably just the beginning.

nickvec 2 hours ago [-]
https://archive.ph/aoXIY

@dang didn't see this post before posting the archive.ph link at https://news.ycombinator.com/item?id=47722344 - feel free to delete/merge that thread with this one

0cf8612b2e1e 2 hours ago [-]
One thing I have idly wondered is how much the ultra rich protect themselves from theft or kidnapping. Is it just not a real concern?

If Taylor Swift owns a dozen homes, does she have full time security guards at each one? Or just accept some amount of burglary may occur? Do they go everywhere with a guard? Only to public events?

bombcar 1 hour ago [-]
It varies and they don't talk about it (obviously) but you can glean things from various sources. The more "public" the ultra rich are, the more they'll have security, especially noticeable security.

The silent or unknown ones will often still have something (usually a requirement of their or their company's insurance).

Once you graduate from "2, 3, 5 houses" to "mansions" you will have staff at each one, even if relatively bare-bones.

strongpigeon 1 hour ago [-]
I once knew a guy who used to be head of physical security for Bill Gates. He has bodyguards with him all the time and a sizable security team at his home in Medina. You wouldn't believe the number of lunatics who show up at his home unannounced and claim he promised them money (or that they're a relative of his somehow).
ciupicri 2 hours ago [-]
> accept some amount of burglary may occur?

From https://edition.cnn.com/2025/05/13/entertainment/kim-kardash...

> Kim Kardashian, testifying in the trial of the burglars accused of tying her up and robbing her at gunpoint nearly nine years ago, told a Paris court on Tuesday that she “absolutely thought” her assailants would kill her.

> “I have babies, I have to make it home, I have babies,” Kardashian recalled pleading with the armed men, who had broken into her hotel room while she slept during Paris Fashion Week in 2016.

> Facing her alleged attackers for the first time since the heist, the billionaire reality TV star detailed how she was robbed of nearly $10 million in cash and jewelry, including a $4 million engagement ring – gifted to her by her then-husband Kanye West – that was never recovered.

neko_ranger 2 hours ago [-]
you don't need to be a wizard to cast fireball
linkage 2 hours ago [-]
It's funny how he has become the face of AI amongst low-information luddites, while Dario and Demis are under the radar.
smt88 2 hours ago [-]
> face of AI amongst low-information luddites

This is condescending and unfair. Altman, OpenAI, and the media have spent years making Altman the face of AI. His company has (by far) the largest market cap, does the most deals, and has the most users.

I suspect Anthropic/Claude will become as much of a household name as ChatGPT, but it's not even close yet. ChatGPT is almost a generic term for AI chatbots at the moment.

linkage 1 hour ago [-]
You're conflating MAU with economic relevance. The overwhelming majority of ChatGPT users are brokies on the free tier who use it for simple questions, like their homework assignments or relationship advice.

Anthropic, by contrast, is about to release a model so powerful that Scott Bessent and Jay Powell convened an emergency meeting just a few hours ago with the CEOs of America's biggest banks. They are forming contingency plans for the effects Mythos is going to have on the financial markets. Anthropic is also far more consequential to the job market, since it's the biggest and most sophisticated player in the B2B space. And of course, Anthropic has a higher ARR than OpenAI.

MiguelX413 1 hour ago [-]
I think their point stands.
PunchTornado 46 minutes ago [-]
An impostor is an impostor, no matter what the media makes of them. Tbh, it's ok that the plates break over his head since he has done so many bad things previously; he deserves it.
boznz 2 hours ago [-]
I guess this is what we get when the media and politicians go all in with their AI populist hate. I don't think I've seen a positive AI headline outside of the tech press, and even then they are pretty thin. Abundance and growing the pie for everyone is also an outcome if this is done right.
lexicality 2 hours ago [-]
> Abundance and growing the pie for everyone is also an outcome if this is done right.

Do you genuinely believe there's any chance that's going to happen?

boznz 35 minutes ago [-]
I do, because the alternative is unthinkable.
senordevnyc 40 minutes ago [-]
Looking at the last few hundred years of our civilization, absolutely!
MiguelX413 34 minutes ago [-]
Lol
senordevnyc 18 minutes ago [-]
Substantive.

Try this, I'm genuinely curious: if you were going to be born as a random human somewhere on earth, what year would you prefer that to happen?

mghackerlady 1 hours ago [-]
or, hear me out, people are just sick of it? They don't care that their masters are sniffing each other's AI-powered farts to keep the economy afloat on the promise of their obsolescence. Sure, in theory it could be good for them, they could get more work done quickly, but why would they be kept alive if their owners no longer need to rely on them? The ideal business has no expenses, and workers are one of those. Combine that with everything being shit nowadays, and yeah, I can't blame whoever did this.
therobots927 2 hours ago [-]
Think Occupy Wall Street but cranked up significantly.

That’s what’s coming. Like it or not.

linkage 1 hours ago [-]
I hope "cranked up" was a pun
rambrrest 2 hours ago [-]
This will only get worse imo - regardless of how Sam is perceived - there is anger against AI which is growing amongst the people. I think we as a society need to stop and have the conversation and be more thoughtful about how we integrate AI with everything.
pixel_popping 58 minutes ago [-]
I don't think this is possible yet, because many people refuse to believe AI will eventually be better than us at practically anything (at least anything virtual). They keep talking about what's "current," which I think is completely irrelevant to that discussion: people need to assume extreme intelligence and orchestration tools (and robots) will be there, worldwide. It's a *fact*, not just a maybe.
classified 26 minutes ago [-]
Your "fact" is pure vaporware and hallucination.
pixel_popping 15 minutes ago [-]
Let's talk about it again in 5 years, but 1-2 years from now, at the very least, coding will be over in the sense that the best models will do it better than the best humans (or better than 99.99% of them). I don't think I'm hallucinating, no. My own work went from coding + managing + a bunch of other stuff to just orchestrating, and my output is insanely higher. I literally have a bunch of friends who went from coding 8h a day to just "pretending to code," using a bunch of agents and getting paid the same salary for working 30 min a day. That's real, not a hallucination.
classified 10 minutes ago [-]
> in 5 years

That's literally the same argument that the blockchain gurus made, and each following year it was still 5 years in the future. I'm getting strong Real Soon Now™ vibes.

pixel_popping 8 minutes ago [-]
come on, that's very different. This is something current, with practical use cases that are already being implemented across all companies. I don't even know why we compare this with blockchain; blockchain is just some fancy resilient DB with proofs, in the end.
fredgrott 2 hours ago [-]
How to tell it's not AI or AGI: it throws a Molotov cocktail...
pixel_popping 2 hours ago [-]
Yeah, Unitrees wouldn't aim that well.
josefritzishere 2 hours ago [-]
My first thought was false flag. Is that too cynical?
foota 2 hours ago [-]
I would go for out of touch, not cynical. A lot of people really think AI is the devil.
risyachka 2 hours ago [-]
It will be hard to convince them otherwise when their jobs are replaced with AI, and they are in their late 40s or later - with no time to adjust and to learn new craft.
polotics 2 hours ago [-]
Possible, but unlikely. To organise such a stunt and stay undetected, you'd need better consiglieri than what Sam's got, I presume.
josefritzishere 1 hours ago [-]
Like another commenter wrote... anyone can cast a fireball. Sam has been called a sociopath by many who know him personally. So it seems more likely than it might be otherwise.
ReptileMan 18 minutes ago [-]
Nope. So was mine.
SilentM68 1 hours ago [-]
Hmm, that's troubling but predictable.

The idea that AI will bring an age of abundance may be true, but not in the short term. Companies are letting people go, and AI will be blamed for that, whether true or not. For decades the public perception has been that most Tech Bros prioritize profits over the wellbeing of the little guy; in my view that perception is well established, and in some cases well deserved, with no accountability.

It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion, in which British textile workers destroyed the machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted technology and industrialists.

Tech Bros can avoid this by modifying their priorities: prioritize employee rights, and lobby governments to begin implementing some sort of Universal Basic Income, or otherwise provide the means by which people can survive. If not, the government may start marketing Soylent Green to consumers :(

whimblepop 1 hours ago [-]
> It's looking like AI will generate a modern version of the early 1800s Luddite Rebellion, in which British textile workers destroyed the machines that displaced their jobs and prioritized factory owners' profits over workers. They targeted technology and industrialists.

It's worth remembering that the way that ended was extremely bloody, particularly for the Luddites themselves. There were a handful of extreme participants, there was a murder, and there was a hell of a lot of violence directed at anyone perceived as a Luddite, even though actual Luddites mostly avoided violence against other humans.

It would be good if we can somehow avoid such outcomes this time.

SilentM68 1 minutes ago [-]
Greed drives most of the current crop of Tech Bros.

I once had the chance to be a Bro, far richer than any of the current ones, thanks to the still secretive and anonymous "original-sn-adjacent cryptographic collective". Things, however, did not work out in my favor, thanks to other nefarious third-party actors. So I know whereof I speak.

Any outcome is in the hands of the Tech Bros but by the looks of it, greed drives their every action, so things are not looking good!

:(

jorgonda 2 hours ago [-]
[dead]
rvz 2 hours ago [-]
The problem here is that there are no viable solutions to what happens when AI eventually replaces (yes replaces) tens of millions of humans in white collar roles.

All that is being "promised" are vague claims of "abundance". But all I see is this:

"AGI" is going to bring abundance of lots of very angry people and UBI to no-one (because it can never work at a large sustainable scale).

Some people are starting to realise that "AGI" was a grift and a scam, and they are not happy about the lie. The insiders knew it, which is why they increased spending on security and private bodyguards.

operatingthetan 2 hours ago [-]
I don't think the LLM will produce AGI. Just based on how context windows work, the prompt cycle, etc. LLMs aren't out there thinking about stuff in their spare time. The way they appear to have thoughts and a psyche is purely an illusion.
andsoitis 2 hours ago [-]
> LLMs aren't out there thinking about stuff in their spare time.

Agentic changes the calculus.

operatingthetan 2 hours ago [-]
Explain how? Even if you use crons or heartbeats to reactivate the model, it's still dependent on context windows that are quite small. With frontier models I still have to remind them how stuff works, correct things they forgot, and redirect them when they focus on the wrong thing.

Also every AI company is motivated to have us use their models _just enough_ to want to pay for them, but not more than that.
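The point about crons/heartbeats can be sketched in a few lines. This is purely illustrative: `fake_model`, the window size, and the event stream are all invented here, not any real API. An agent "reactivated" on a schedule still only sees a bounded context window, so anything older than the window is simply gone between wake-ups:

```python
from collections import deque

WINDOW = 4  # max messages the (imaginary) model can see per call

def fake_model(context):
    # Stand-in for a real LLM call: just reports what it can still "see".
    return list(context)

def heartbeat_agent(events, window=WINDOW):
    context = deque(maxlen=window)  # older messages fall off automatically
    seen_per_tick = []
    for event in events:            # each event = one heartbeat/cron tick
        context.append(event)
        seen_per_tick.append(fake_model(context))
    return seen_per_tick

ticks = heartbeat_agent([f"msg{i}" for i in range(6)])
# By the final tick, msg0 and msg1 have already aged out of the window.
```

No amount of rescheduling changes the fact that the model's "memory" at each tick is whatever still fits in the window.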

booleandilemma 2 hours ago [-]
It doesn't have to produce AGI and it could still ruin the lives of millions of people. Our society isn't ready for that kind of shock. We can't all be instagram influencers.
fooqux 2 hours ago [-]
Something I often think about is how we can barely define what AGI, consciousness, etc are. We may be pretty sure that what we have currently is an illusion, but at which point is the illusion good enough that it no longer matters? Especially with regards to my first question.

It's hard to say it's not X when we can't really define X.

ethanrutherford 1 hours ago [-]
I would personally argue that it's a lot easier to say something definitely isn't X, with confidence, than to say it definitely is. I don't know what the surface of Jupiter looks like, but I can pretty confidently say it doesn't look like Kansas. I think the better it gets, the easier it will be to spot the shortcomings, because the gap between what it can do well and what it can't will widen. Anything the technology is fundamentally incapable of achieving will be made obvious by the fact that it simply continues not to achieve it. We may not be able to define the totality of what counts as AGI, but the further it progresses, the easier it will be to point out individual things it's definitely missing.
operatingthetan 1 hours ago [-]
I'm not saying we can't build it, but what we have right now certainly is not it. Right now context is just a bunch of text. Surely the human mind's context resembles something more like a graph database. What if we could use a database for context?
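The "database for context" idea could look something like this toy sketch. Everything here (the class, the relation strings, the retrieval scheme) is invented for illustration, not taken from any real system: facts are stored as edges, and "context" is assembled by walking neighbors of the current topic instead of replaying a flat transcript.

```python
from collections import defaultdict

class GraphMemory:
    """Toy graph-shaped memory: nodes are strings, edges are (relation, node)."""

    def __init__(self):
        self.edges = defaultdict(set)

    def remember(self, a, relation, b):
        self.edges[a].add((relation, b))

    def context_for(self, topic, depth=2):
        # Breadth-first walk: collect facts up to `depth` hops from the topic.
        facts, frontier, seen = [], [topic], {topic}
        for _ in range(depth):
            nxt = []
            for node in frontier:
                for rel, other in sorted(self.edges[node]):
                    facts.append(f"{node} {rel} {other}")
                    if other not in seen:
                        seen.add(other)
                        nxt.append(other)
            frontier = nxt
        return facts

mem = GraphMemory()
mem.remember("project", "uses", "postgres")
mem.remember("postgres", "runs_on", "server-a")
mem.remember("unrelated", "is", "noise")
ctx = mem.context_for("project")
# ctx holds only facts reachable from "project"; "unrelated" never appears.
```

The retrieval is topic-relative rather than recency-relative, which is the contrast with a flat text context window.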
sourcegrift 2 hours ago [-]
[flagged]
whimblepop 1 hours ago [-]
Neither statement is an incitement to violence. Charles Koch's support for an open borders policy is on the public record; there's no conspiracy nor assertion of conspiracy: https://www.newyorker.com/culture/annals-of-inquiry/the-case...
EcommerceFlow 1 hours ago [-]
[flagged]
xiphias2 60 minutes ago [-]
AI is unstoppable, as algorithmic improvements are coming at least as fast as compute improvements. One sign of this is that even older GPUs' value is going up.
MiguelX413 53 minutes ago [-]
What exactly does that mean?
EGreg 2 hours ago [-]
I've been saying for years on here...

to the people on HN who are against blockchain but bullish on AI

With blockchain and smart contracts or stupid even memecoins, you can only lose what you voluntarily put in. You had to jump through a few hoops, then maybe you got rugpulled, maybe you became a millionaire.

With AI, regardless of whether you consented or not, you can lose your job, gradually your relationships and sense of purpose. And if some malicious actors want to weaponize it against you, you can lose your reputation, your freedom, get hacked at scale, and much more. The sooner we give biolabs to everyone the sooner someone can create an advanced persistent threat virus online infecting every openclaw machine, or a designer virus with an incubation period of half a year.

And I know what someone on here will always say. There will always be a comment to the effect of "this has always existed, AI is nothing new". But quantity has a quality all its own. Enjoy your AI slop internet dark forest. Until you don't.

kupadapuku 50 minutes ago [-]
[dead]