Statement from Dario Amodei on our discussions with the Department of War (anthropic.com)
lebovic 15 hours ago [-]
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.

But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.

[1]: https://news.ycombinator.com/item?id=47145963#47149908

whstl 6 hours ago [-]
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

After 20 years of everyone in this industry saying "we want to make the world a better place" and doing the opposite, the problem here is not really related to people's "understanding".

And before the default answer kicks in: this is not cynicism. Plenty of folks here on HN and elsewhere legitimately believe that it's possible to do good with tech. But a billion dollar behemoth with great PR isn't that.

dust42 5 hours ago [-]
Exactly. At this level you don't just put out a statement of your personal opinion. This is run through PR and coordinated with the investors; otherwise the CEO finds himself on the street by tomorrow. Whatever their motives are, they are aligned with the VCs, because if they are not, then the next day there is another CEO. As the parent stated, this is not cynicism. I see it as simply factual: these are the laws of money.
GorbachevyChase 3 hours ago [-]
I am suspicious the whole thing is a PR stunt to build public trust.
georgefrowny 2 hours ago [-]
In none of their statements do they say they won't do the things:

> we cannot in good conscience accede to their request.

That's very specifically worded to not say "under no circumstances will we do this".

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

Is not saying they won't eventually be included.

They've left themselves a backtrack, and given the care with which this statement has been crafted, that's surely deliberate.

reactordev 2 hours ago [-]
This. This is a public misdirection. They already signed a new deal. It may be to their disliking but nothing in the statement prevents them from moving forward.
uncletammy 2 minutes ago [-]
That is speculation. You might be correct, but this statement could simply be a strong signal to the administration to back down. A Hail Mary.
hdb2 1 hours ago [-]
> They've left themselves a backtrack, and with the care there this statement has been crafted, that's surely deliberate.

What's worse, someone in their PR department will read this thread and be disappointed that the spin didn't work.

brookst 36 minutes ago [-]
I mean that’s just adulthood.

There are outcomes where the US government seizes the company. Not super likely, not impossible.

It would be naive to write a statement that a future event will never happen, under any circumstances. People who make that mistake get lambasted for hypocrisy when unforeseen circumstances arise.

I see recognition that making absolute statements about the future is best left to zealots and prophets. Which to me speaks of maturity, not duplicity.

darkwater 2 hours ago [-]
This. I don't get why you are getting downvoted. The statement literally says:

  Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now:
Last word is very important: "now".
ToucanLoucan 59 minutes ago [-]
Being a tech forum centered around VC funding means we have a TON of tech bros (derogatory) here, who believe in nothing beyond getting their own piles of money for doing literally anything they can be paid to do. If you offered these guys $20 to murder a grandmother they'd ask if they have to cover the cost of the murder weapon or if that's provided.

I get it to a degree, people gotta eat, and especially right now the market is awful and, not to mention, most hyperscaler businesses have been psychologically obliterating people for a decade or more at this point. Why not graduate to doing it with weapons of war too? But, personally, I sleep better at night knowing nothing I've made is helping guide missiles into school busses but that's just me.

absoluteunit1 3 hours ago [-]
I share this sentiment.

In general - I don't know if it's a coincidence, but here on HN, for example, I've noticed an increasing number of comments and posts emphasizing the narrative of how "well-intended" Anthropic is.

Beestie 2 hours ago [-]
I'd love to see the financial model that offsets losing your single biggest customer and substantial chunk of your annual revenue with some vague notion of public trust.
GorbachevyChase 29 minutes ago [-]
Their whole strategy is that the lack of a legal moat protecting their product is an existential threat to human life. They are the only moral AI and their competitors must be sanctioned and outlawed. At which point they can transition from AI as commodity to “value” based pricing.

It’s not going to work, but I can’t blame Amodei and friends for trying to make themselves trillionaires.

mingus88 2 hours ago [-]
This is so short sighted. We are so early into this AI revolution, and this administration is obviously in a tailspin, with the only folk left in charge being the least capable ones we have seen in a decade

Imagine what the conversation would be like if Mattis, a highly decorated and respected leader, were still the SecDef. Instead we are seeing bully tactics from a failed cable news pundit who has neither earned nor deserved any respect from the military he represents.

We are two elections and a major health issue away from a complete change of course.

But short sightedness is the name of the quarterly reporting game, so who knows.

HumblyTossed 1 hours ago [-]
I'm seriously worried there won't be more elections. Not hyperbole at all.
palmotea 46 seconds ago [-]
> I'm seriously worried there won't be more elections. Not hyperbole at all.

Why? That's a bizarre fear, driven by the insanely overwrought political rhetoric of 2026. Think about it: elections will be the absolute last thing to go.

If you want something to worry about, worry about this:

> And the stakes of politics are almost always incredibly high. I think they happen to be higher now. And I do think a lot of what is happening in terms of the structure of the system itself is dangerous. I think that the hour is late in many ways. My view is that a lot of people who embrace alarm don’t embrace what I think obviously follows from that alarm, which is the willingness to make strategic and political decisions you find personally discomfiting, even though they are obviously more likely to help you win.

> Taking political positions that’ll make it more likely to win Senate seats in Kansas and Ohio and Missouri. Trying to open your coalition to people you didn’t want it open to before. Running pro-life Democrats.

> And one of my biggest frustrations with many people whose politics I otherwise share is the unwillingness to match the seriousness of your politics to the seriousness of your alarm. I see a Democratic Party that often just wants to do nothing differently, even though it is failing — failing in the most obvious and consequential ways it can possibly fail. (https://www.nytimes.com/2025/09/18/opinion/interesting-times...)

delecti 53 minutes ago [-]
I don't think it's crazy to worry about that, but elections are run by the states, there are over 100,000 polling places nationally, and people are pissed. On Jan 3, the terms of the entire current House of Representatives end; Democratic governors will still hold elections, and if there haven't been elections in GOP-led states, those states are out of representation. There are so many hurdles in the way of the fascists canceling or heavily interfering in elections, and they're all just so stupid.
ckemere 16 minutes ago [-]
WaPo headline “Administration plans to declare emergency to federalize election rules.” https://www.washingtonpost.com/politics/2026/02/26/trump-ele...
conception 1 hours ago [-]
Putin crushes every election he has. Of course there would be more elections.
jrs235 1 hours ago [-]
The rest of the world moves to using you?
wartywhoa23 2 hours ago [-]
I'd love to see any evidence that this single biggest customer is provably and irreversibly lost on all levels of scrutiny as a result of this attempt at building public trust.
HardCodedBias 15 minutes ago [-]
It absolutely is a PR stunt. And the media is cheering.

It's absurd.

It's simple: If you do not like working with the military, cancel your contract with the military and pay the penalties.

They are explicitly not doing that.

heresie-dabord 4 hours ago [-]
> it is simply the laws of money

The First Law of Money: Money buys the Law.

ohbleek 2 hours ago [-]
To quote Brennan Lee Mulligan, "Laws are threats made by the dominant socioeconomic ethnic group in a given nation."
avmich 4 hours ago [-]
That's maybe the second law. The first one is: money is always finite.

Look at how Elon Musk behaved. Do you think VCs gladly approved what he did with Twitter? They might want to keep chasing quarterly results - but sometimes, as with Zuckerberg, they can't. Not enough money. There are similar examples with Google's funding rounds, or in how often the better-financed politician loses to a competitor. Or, if you will, Vladimir Putin's idea that he can buy whatever results he wants - and that guy is a very wealthy person. There are always limits, putting the money law in second place. We might argue that often the existing money is enough... but in more geopolitical, continuum-curving cases there are other powerful forces.

antonvs 52 minutes ago [-]
The Twitter acquisition wasn't funded by venture capital, so your question about VC approval doesn't apply.

If you're using VC as a general term for "investor" (inaccurately), then the answer to your question is that the major investors, such as Larry Ellison and the Saudi monarchy, wanted political control of Twitter, which meant that they did (apparently) approve what Musk did with it.

Lutger 21 minutes ago [-]
Surely you mean the laws of shareholder capitalism. There are many things you can do with money, and only some of them are legally backed by rules that ensure absolute shareholder power.
qdotme 5 hours ago [-]
FWIW, I don't actually know if the board of Anthropic has actual power to replace its CEO, or if Dario has retained some form of personal super-control shares, Zuckerberg-style.

At some level of growth, the dynamics between competent founders and shareholders flip. Even if the board could afford to replace a CEO, it might not be worth it.

nradov 5 minutes ago [-]
Anthropic has an odd voting structure. While the CEO Dario Amodei holds no super-voting shares, there are special shares controlled by a separate council of trustees who aren't answerable to investors and who have the power to replace the Board. So in practice it comes down to personal relationships.
dust42 4 hours ago [-]
I'd counter that at this level of capital, if the CEO doesn't align well with the capital, then super-control shares will be overpowered by super-lawyers and, if needed, some super-donations. OpenAI was a public interest company...
blackqueeriroh 2 hours ago [-]
This is fundamentally incorrect.
vladms 33 minutes ago [-]
> everyone in this industry

So nothing good has come out of the software industry (if that is the industry you mean) in the last 20 years?

I find it somewhat ironic, because this type of generalization is, to me, the same problem that some of the people saying "they want to make the world a better place" have: a refusal to accept that reality is complex.

There were huge benefits to society from the software industry in the last 20 years. There were (as well!) huge downsides. Around 2000, lots of people were saying "Microsoft will lock us in forever". 20 years later, the fear "moved" to other things. Imagining that companies can last forever seems misguided. IBM, Intel, Nokia and others were once great and the only game in town, but ultimately got copied and pushed from the spotlight.

amunozo 6 hours ago [-]
I don't even think the two things are contradictory. People who put too much value in their ideals tend to overlook the consequences of those ideals in real life, and do wrong without deviating an inch from them.
plufz 6 hours ago [-]
But is that really the problem in big tech today? To me it looks like sooner or later they cave from their ideals (or leadership changes) and that the reason every time is that they want to make even more money.
Peritract 4 hours ago [-]
I think that's still too rosy a view; it's clear with a lot of big tech that they never had the ideals in the first place. They use claims of principle for marketing purposes and then discard them when it's no longer convenient.
moozooh 4 hours ago [-]
Or, perhaps even more likely, the ideals inevitably get corrupted by access to unthinkable economic power/leverage, as happened with more or less all other giants that had strongly idealistic initial leadership; and the leadership may actually delude itself into thinking they're still on the right track, as a sort of defense mechanism. Back when they published the article on the Claude-operated mass-scale data breach last year, the conclusions were delivered in a bafflingly casual tone, as if it were a weather report: yeah, the world has become a lot more dangerous now (on its own), so you may want to start using Claude for cyber-defense, and we are doing our best to help you protect your business. I rolled my eyes at that so hard they popped out of their sockets. Weren't you... the guys... who made it that way and enabled that very attack? Very convenient to sell weapons to both sides, isn't it, not at all like a mafia business. Very responsible and ideal-driven.

Consider also the part that goes unsaid in the address: Amodei is strongly against the use of Claude for mass surveillance of Americans, but he says nothing about mass surveillance of anybody else (and, in fact, is proactively giving foreign intelligence a green light in his address), and he deliberately avoids any discussion of the fact that his relationship with the Pentagon is mediated through the contract with Palantir they signed something like 1.5 years ago. Palantir is a company whose business is literally mass surveillance, by the way! I, too, am so ideal-driven that I willingly make deals with the devil! But now that he's successfully captured the popular sentiment, people are going to consider him the moral champion without bothering to look at these and other glaring contradictions.

detourdog 2 hours ago [-]
Ideals have always been represented in literature as a virtue and a problem for humans. I find real life is no different.
ben_w 5 hours ago [-]
Sure, sooner or later. I don't want to even guess where the new AI companies are on the path that leads to that destination, but right now it looks like Anthropic is not at that stage. Heck, even though a lot of people find Sam Altman slimy, even OpenAI isn't yet at that stage.
hsuduebc2 5 hours ago [-]
I believe this is classic behaviour of every shareholder-driven business. You can build on ideals from the start, but once you acquire some position, money-making is on the menu. E.g., deliberately worsening user experience for better revenue.

The possibility of turning on the heated seats in a car you own for a small monthly fee is absurd, yet very real. I'm looking forward to the enshittification of current AI tools.

Ajedi32 31 minutes ago [-]
Yeah it's not that the people involved have no ideals, it's that the company structure as a whole doesn't, and over time that structure will eventually outlive, corrupt, and/or overpower the ideals of the founders or other principled individuals at the company.
hsuduebc2 5 hours ago [-]
I can’t think of a single thing Meta does that isn’t driven by pure greed.
mikkupikku 50 minutes ago [-]
All of Meta's VR stuff should rationally be cut loose and refunded if it were all about greed. That stuff only survives because Zuck is a nerd who wants it to happen (but it's not going to.)
ben_w 5 hours ago [-]
Yes, though Meta is a bad example as they started off with the values of Zuckerberg, and still have them.
endofreach 5 hours ago [-]
Exactly right. But i think it makes it a good example actually. Company DNA is a thing. Bill Gates isn't running microsoft anymore. Still...
hsuduebc2 5 hours ago [-]
What would be more appropriate example?
ben_w 5 hours ago [-]
Apple, Tesla, Oculus.

The first two are definitely "heroes who lived long enough to become villains"; Oculus is more of an "I reckon", due to how it was seen right up until it was bought by Facebook.

Adobe?

hamasho 58 minutes ago [-]
But in the stock market, it is almost impossible for companies like Anthropic, or any successful startup, not to become villains (profit first, no matter what). Anthropic especially needs to burn a huge amount of money, so they need a lot of funding. The only way to keep the founders' idealism is probably to copy Zuckerberg: split the stock into voting and non-voting classes and trade only the non-voting shares.
shafyy 2 hours ago [-]
LOL, Palmer Luckey is a right-wing war mongering psychopath.
amunozo 2 hours ago [-]
Oh sure. I don't want to say everybody is driven by ideals and not greed, but that even people with strong ideals and good intentions can do a lot of bad by being blinded by those same ideals.
mcv 36 minutes ago [-]
Exactly. I'd love to believe that at Anthropic, idealism trumps money. But Google was once idealistic too. OpenAI was too. It's really hard to resist the pull of money. Especially if you're a for-profit corporation, but OpenAI wasn't even that at first.
OtherShrezzing 6 hours ago [-]
I think most people are conscious that, irrespective of a founder's vision, company morals usually don't survive the MBA-inisation phase of a company's growth.
qdotme 5 hours ago [-]
Depends. Many still reflect the founder's vision, even if that vision has evolved over time.
AndrewKemendo 43 minutes ago [-]
Can you provide an example of that for an American venture backed corporation older than a decade?
j45 3 hours ago [-]
The impact of MBAs might be decreasing..
whstl 5 hours ago [-]
True. Which is all the more reason for calling bullshit on claims of "doing good" or "having ideals" by anyone building a company that can eventually be run by MBAs.
tyingq 1 hours ago [-]
I don't think it's cynical to acknowledge the pattern that publicly owned companies will eventually cave to the desires of their shareholders.

I understand Anthropic is not public, but I assume there's an IPO coming.

Aperocky 2 hours ago [-]
Reminds me of Effective Altruism and the collective results of people claiming to believe in that virtue.
wartywhoa23 4 hours ago [-]
Cynicism is the newspeak substitute for sincerity, no need to worry about being called a cynic in this post-truth world of snowflakes.
puppymaster 3 hours ago [-]
and that's okay. so we judge them one decision at a time. So far, Anthropic is good in my book.
heresie-dabord 4 hours ago [-]
> how driven by ideals many folks at $Corporatron are

Well let's see... it says in the post:

    * worked proactively to deploy our models to the Department of War and the intelligence community. 

    * the first frontier AI company to deploy our models in the US government’s classified networks, 

    * the first to deploy them at the National Laboratories, and 

    * the first to provide custom models for national security customers. 

    * extensively deployed across the Department of War and other national security agencies

    * offered to work directly with the Department of War on R&D to improve the reliability of these systems

    * accelerating the adoption and use of our models within our armed forces to date.

    * never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
wrsh07 1 hours ago [-]
They didn't claim to have pacifist ideals

In fact, they claim to be pro America and pro democracy and have repeatedly expressed concerns about autocratically governed countries.

Just because you disagree with their ideals doesn't mean they're not holding to theirs

mikkupikku 49 minutes ago [-]
Lots of people driven by ideals work for the US military. Not me, ever, but other people certainly.
lm28469 8 hours ago [-]
Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations that "something big is coming and we can't even describe it but trust us we need more money".
UqWBcuFx6NV4r 8 hours ago [-]
> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"

This is pretty low on my list of moral concerns about AI companies. Much more concerning and material is… what this thread is actually meant to be about.

VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.

dudefeliciano 7 hours ago [-]
Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind closed doors deals with the Department of Defense (yes that's still the official name), is more than Altman has done for AI safety.
skyberrys 7 hours ago [-]
Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money comes better tools and frequently better tools leads to the quality results you want to deliver to the customer.
lm28469 7 hours ago [-]
Do you tell your customers you need money to build better chips, or that you need more money because your next generation of chips will channel Jesus's soul back to earth and cure cancer?
tehryanx 5 hours ago [-]
where is anthropic hyping like that? Most of what I see coming out of anthropic is deep context releases on research they're doing.
lm28469 4 hours ago [-]
> Mar 14, 2025, 7:27 AM CET

> "I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code"

It's the same old trick, "in two years we'll have fully self driving cars", "in two years we'll have humans on Mars", "in two years AI will do everything", "in two year bitcoin will replace visa and mastercard", "in two year everyone will use AR at least 5 hours a day", ...

Now his new prediction is supposed to materialize "by the end of 2027", what happens when it doesn't? Nothing, he'll pull another one out of his ass for "2030" or some other date in the future, close enough to raise money, far enough that by the time it's invalidated nobody will ask him about it

How are people falling for these grifters over and over and over again? Are we getting our collective minds wiped out every 6 months?

brookst 29 minutes ago [-]
Your quote supports hype but does not support your claim that Anthropic is telling customers they need more money to deliver the hype.

Of course Anthropic is saying that to investors. Every company does that, from SpaceX to Crumbl. “If you give us $X we will achieve Y” isn’t some terrible behavior, it’s how raising funds works.

ToValueFunfetti 2 hours ago [-]
I work at a non-tech Fortune 500 and this is looking nearly spot-on from here. Nobody on my team touches code directly anymore, as of about two months ago. They're rolling it out to the entire software department by June. I can't speak to the economy at large, but this doesn't look like baseless hype to me. My understanding is that Claude Code reached this level late last year, i.e., Amodei was just wrong about uptake rates.
District5524 7 hours ago [-]
They both work in the same market, but they have pretty different careers and understandings. I simply can't see why on Earth people would choose Altman over Amodei to trust on these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is the one better suited to be trusted with these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics in Silicon Valley. Not that our opinion would matter...
Keyframe 6 hours ago [-]
Amodei believed Altman, so there's that. I don't (have to) believe either. If the product works for me, it works. Elevating their clanker products to the second coming is for investor relations, of which I am proud to say I am not a part.
ori_b 5 hours ago [-]
I don't know why anyone would trust any of the above.
viking123 4 hours ago [-]
Both are hucksters, although Amodei's qualifications are pretty good; he actually is a scientist. Of the bunch, I think Hassabis is my favorite.
kseniamorph 4 hours ago [-]
disagree. at least i can see the quality of research coming out of Anthropic, which tells me these people are interested in what they're doing. i don't see this level of scientific rigor in OpenAI
rhubarbtree 8 hours ago [-]
There should be a name for this, "cynic cope": when someone actually takes a principled view, the cynic - who has a completely negative view of the world - is proven wrong, can't accept it, and tries to somehow discount it.
marxisttemp 7 hours ago [-]
Corporations do not and cannot have principles, they only have the profit motive
parasubvert 7 hours ago [-]
This is false. People can have principles; the profit motive is not something a corporation has, it's something people have. Corporations do things all the time that are based on everything from principles, to the personal whims of executives, to exercises in ego, to community-benefiting actions, to screwing customers for extra profit. It is entirely dependent on the specific people in management roles.

Corporations need profit to survive because the cost of tomorrow is a surplus of today.

anon_e-moose 5 hours ago [-]
A corporation is a bunch of people cooperating to achieve a common goal.

There is a very important factor that heavily influences (perhaps even controls?) how people act to achieve that goal, and sometimes even twists or adds goals.

Is that corporation publicly quoted in the stock market or is it private?

Look at how Steam behaves: it's private and more ideological, versus many publicly quoted companies, whose CEOs often sacrifice their own corporation's long-term survival for the benefit of short-term profiteering and some hedge fund manager's bonus.

Both need profit to survive, but the publicly quoted company is much more extreme.

When people say corporations only look to profit, what they really mean is that publicly quoted corporations will do everything possible to maximise short-term profit at any cost. Is there a CEO who cares about the long term? Either he will be convinced to change or be kicked out. It's almost impossible for someone to resist these influences in publicly quoted companies. It's just how Wall Street works, and if that doesn't change, neither will corporations.

The people running the world of finance and their culture are what causes enshittification and pushing a zero-sum game to extremes.

vladms 3 minutes ago [-]
Agree with everything, but would add a small detail: publicly quoted corporations might as well sell dreams, and if they are very good at that, have no profit at all because of some future potential payoff (of course, I am writing this from the fully self-driving car I have owned for 10 years, which might transform into a robot soon).
marxisttemp 30 minutes ago [-]
something something the ideology of a cancer cell. The only goal of a publicly traded corporation is to make the line go up, and the board is required to eliminate anyone who puts other principles before that.
moozooh 4 hours ago [-]
Sadly, market incentives pretty much always run opposite to moral incentives, because morals put brakes on decisions that multiply value for the company, while the company itself exists to multiply value. The profit motive is built into the reason for its existence. It's a contradiction that has a lower probability of resolving in favor of morals as the company grows in size and accrued capital. Whatever moral principles the leadership may have had at the beginning, they always erode or get perverted over time, simply because the market always has the stronger pull.

I hate that, by the way, but what I hate even more is that this is somehow the most effective way to run economies that we've found so far, and it ends up this way because instead of unsuccessfully trying to safeguard against greed and sociopathy, it weaponizes them outright.

jama211 7 hours ago [-]
Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.
lm28469 7 hours ago [-]
This is a wantrepreneur forum, not a peer-reviewed scientific journal; my opinions about vibes matter as much as private companies' PR campaigns.
neom 14 hours ago [-]
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before all this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally I believe they would go to jail/shut down/whatever before they do something objectively wrong.
bobsomers 11 hours ago [-]
> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.

This sounds quite backwards to me. It's been abundantly clear in today's times that, in fact, you only really know who somebody really is when they're under stress. Most people, it seems, prefer a different facade when there is nothing at stake.

neom 10 hours ago [-]
I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.
RHSman2 8 hours ago [-]
That is a good LinkedIn endorsement if ever I saw one!
klodolph 10 hours ago [-]
Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.
inigyou 7 hours ago [-]
If someone is in an environment where they have to do XYZ or die, their choice to do XYZ might not reflect their personality, but the environment where they have to do XYZ or die.
vintermann 7 hours ago [-]
But if you were watching them, was there really no freedom from consequences? At least there was the risk of you thinking less of them.

I think that really cruel people want you to know when they can act with impunity, it's part of the appeal to some. The Anthropic people don't seem like that sort, at least. But plenty of horrible people have still not been that sort.

klodolph 6 hours ago [-]
> But if you were watching them, was there really no freedom from consequences?

Ah, so I think you may have done a little hop and a jump over a critical, load-bearing term which is “feel like”. You get to observe people who feel like there are no consequences. Their feelings may or may not be accurate.

You can sometimes see people who treat service workers, servants, or subordinates poorly because they feel like it’s permitted and free from consequence. You can also sometimes see people reveal things about themselves when playing games. It’s kind of a cliché that people find out that they’re transgender at the D&D table, and it happens because it’s a “consequence-free way” to act out a different gender role.

Or we can talk about that magic ring that makes you invisible. You know, the ring of Gyges, or that of Sauron. People can’t actually become invisible, but you can sometimes catch them in a situation where they think they can do something wrong and not get caught.

wahnfrieden 8 hours ago [-]
Free from consequence. In other words, free of any stakes. Zero stress low stakes environments enable larping.
deadbishop 7 hours ago [-]
Exactly
bahmboo 10 hours ago [-]
Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!
neom 10 hours ago [-]
Well, I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last... I dunno, 13 years or something, we've had long deep talks about lots of things, pre-AI world: what it takes to build a big business, will QC ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built... all that stuff. Then... attention is all you need, a bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NNs scale, so he joined that lab, first 5/10 or so iirc... to head AI policy. That little lab grew, stuff happened, the next part isn't mine to share, but so much as to say: Anthropic was basically born out of the expectation that this moment would come, and that more... extremely human-focused... voices should be at the table. That is Anthropic, that idea: they left their jobs at the aforementioned lab and started their own startup to make sure a certain tone/voice/idea was always represented. Around the summer of 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard, and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew, that, I suppose, was always the plan? So it was never unexpected to me that they would act this way; that is what Anthropic is all about. Here we are.
lebovic 10 hours ago [-]
Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.

They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam have served as the "responsible scaling officer", meaning they were responsible for ensuring Anthropic met the obligations of its commitments to building safeguards.

I think neom is referring to Jack Clark, another one of the seven cofounders.

arduanika 10 hours ago [-]
I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feels like "inside baseball" at best, mildly culty at worst, and here outside the walls of Anthropic, we're going to see those names and think of Kushner(??), Altman, and maybe Dorsey, and get confused.

FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.

For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!

Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.

Imustaskforhelp 10 hours ago [-]
I can agree that I thought it was Jack Dorsey, but it looks like we are talking about Jack Clark [https://en.wikipedia.org/wiki/Jack_Clark_(AI_policy_expert)]

It would be better if people could name them with their full names to avoid any confusion.

kunai 10 hours ago [-]
[flagged]
dang 9 hours ago [-]
Please don't do this here.
taurath 12 hours ago [-]
> it's easy to know how they will act when the going gets rough

Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.

That said, it is heartening to know that some would predict anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Other Option" justifies our conservatism.

coffeemug 9 hours ago [-]
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts ~49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50-year anniversary of independence.) Supposedly after the meeting Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
michaelhoney 11 hours ago [-]
"people's traits flanderize": nice
rl3 12 hours ago [-]
>Even if you went to burning man and your souls bonded ...

I'll take: List of places I never want to bond my soul with someone at for one thousand, please.

taurath 12 hours ago [-]
They get an air-conditioned trailer and pay "sherpas" to do their chores, so it's basically just a hotel suite
tummler 11 hours ago [-]
Oh, that's the best place for souls to bond.
webnrrd2k 9 hours ago [-]
Bond to what -- that's the real question
shawn_w 6 hours ago [-]
Playa dust. It's certainly permanently bonded to my car.
vintermann 7 hours ago [-]
In these days of the Epstein mails, it's worth remembering one thing that's become clear: Epstein was an extremely nice guy. He seemed kind, sincere, interested in what you were doing, civilized etc.

But to quote Little Red Riding Hood in Stephen Sondheim's musical: Nice is different than good. It's hard to accept if people you really like do horrible things. It's tempting to not believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.

That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.

sp00chy 6 hours ago [-]
In German you say „Nett ist die kleine Schwester von Scheisse", which literally means "nice is the little sister of shit". And this is how I cope with what decision-makers say. Zuckerberg was also "nice" for a long time.
ajyey 12 hours ago [-]
This is insanely naive
parasubvert 7 hours ago [-]
Cynicism isn't always correct.
white_dragon88 12 hours ago [-]
[dead]
skeptic_ai 12 hours ago [-]
[flagged]
Vaslo 12 hours ago [-]
Huh? Why would they be in prison??
skeptic_ai 10 hours ago [-]
> they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries

They become US adversaries if they don't give the USA what it wants… and as an adversary that doesn't fall in line… you must go to prison.

Vaslo 27 minutes ago [-]
This is silly. No one at anthropic is going to prison for this. It only hurts their ability to do business with US government customers which is a net negative for all. Anthropic will come around.
noduerme 9 hours ago [-]
The nature of evil is that it's straight down the road paved with good intentions.
monster_truck 13 hours ago [-]
[flagged]
000ooo000 12 hours ago [-]
[flagged]
zer0gravity 7 minutes ago [-]
The probability is high that major AI development companies are already using an AI instance internally for strategic and tactical decisions. The state's power institutions, especially intelligence, now have a real competitor in the private sector.
imjonse 9 hours ago [-]
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,

I am sure you think they are better than the average startup executive, but such hyperbole calls the objectivity of your whole judgement into question.

They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

versteegen 7 hours ago [-]
> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.

Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":

https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...

> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.

> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.

nla 3 hours ago [-]
Yeah, that Sam only does this because "he loves it." They're not in it for the money.
Aperocky 2 hours ago [-]
That's not fair, Sam can love money too and there is no conflict here.
psychoslave 4 minutes ago [-]
The people making organizational decisions at for-profit companies are money-driven first. Otherwise they would champion a different kind of org.

Everyone tries to steer change so it goes well for some party. If someone wanted to serve the best interests of humanity as a whole, they wouldn't sell services to an evil administration, much less to its war department.

Too bad there is not yet an official ministry of torture and fear, protecting democracy from the dangerous threats of criminal thoughts. We would certainly be given a great public-relations lesson on how virtuous it can be in the long term to provide them efficient services.

drawfloat 5 hours ago [-]
"Mass surveillance of anywhere else in the world but America" is not the great idealistic position you are making it out to be.
i_love_retros 3 hours ago [-]
Driven by ideals? Yeah right. In the first paragraph he says they work with the department of defense to protect us from authoritarianism. What?! You are working with an authoritarian regime, you cynical fuck. Getting paid by them. And now you act all virtuous because you won't make autonomous weapons.
FeloniousHam 28 minutes ago [-]
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving their goals.

Jonah Goldberg (speaking of foreign policy): "you've got to be idealistic about the ends and ruthlessly realistic about means."

GardenLetter27 6 hours ago [-]
Anthropic doesn't want us to have the right to run open weight models on our own computers. They were never the good guys.
u1hcw9nx 6 hours ago [-]
What I read is: Anything not open source, open weight, is evil.

I disagree. The concept of nuance, putting things in context, is the source of all good in internet discussions.

GardenLetter27 5 hours ago [-]
No, but lobbying the government to prohibit open source / open weight models is evil.

They literally want to use state violence to control what we can do on our own computers.

yunnpp 13 hours ago [-]
It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...

I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.

And in any case, this is difficult territory to navigate. I would not want to be in your spot.

eternauta3k 5 hours ago [-]
Come On, Obviously The Purpose Of A System Is Not What It Does

https://www.astralcodexten.com/p/come-on-obviously-the-purpo...

Peritract 4 hours ago [-]
I don't think that article makes a strong case; it deliberately phrases examples in the most ridiculous ways and pretends that this is a damning criticism of the phrase itself; it's 'you're telling me a shrimp fried this rice' but with a pretence of rationality.
sebzim4500 3 hours ago [-]
I think it makes a pretty compelling case that most invocations of the statement are either blindingly obvious or probably false. Can you give a counterexample?
Peritract 1 hours ago [-]
> most invocations of the statement are either blindingly obvious or probably false

So straightaway, you've walked significantly back from the claim in the headline; now half of the time it's 'blindingly obvious' that the statement is correct. That already feels like a strong counterexample to me, and it's the article's own first point.

Secondly, look at this one specifically:

> The purpose of the Ukrainian military is to get stuck in a years-long stalemate with Russia.

Firstly, this isn't obviously false. It's an unfair framing, but I think the Ukrainian military would agree that forcing a stalemate when attacked by a hostile power is absolutely part of their purpose.

Secondly, it is an unfair framing that deliberately ignores that all systems are contextual. A car's purpose is transport, but that doesn't mean it can phase through any obstacle.

The article makes an entirely specious argument, almost an archetypal example of a strawman. It can't sustain its own points over a few hundred words without steadily retreating, and that is far more pointless than the maxim it criticises.

I'm reminded of an XKCD comic [1] about smug miscommunication. Of course any principle is ridiculous when you pretend not to understand it.

[1] https://xkcd.com/169/

ozgung 6 hours ago [-]
The problem with companies, you see, is that they are a separate entity from their founders, shareholders, or current leadership. A company has no soul or unchangeable intentions. Claude's SOUL.md is just IP that can be edited at any time.
didip 31 minutes ago [-]
I like the enthusiasm, but remember that Google's motto used to be "Don't be evil".
cue_the_strings 2 hours ago [-]
Don't attribute to ideals what is simple self-preservation.

No sane person wants to become a legitimate military target. They want to sleep in their own beds, at home, without risking their families lives. Just like the rest of us.

bertylicious 8 hours ago [-]
"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.

So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to the "Department of War" and is acting aggressively imperialist in a way the US hasn't in a long time?

And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?

jghn 30 minutes ago [-]
> to rename the DoD to "department of war"

The very fact that they referred to it as the Department of War instead of Defense tells me that they're still bootlickers, and just trying to put a good spin on things.

lebovic 7 hours ago [-]
Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.

I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.

Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.

That doesn't guarantee a good outcome, and there's still a hard road ahead.

viking123 4 hours ago [-]
> And who exactly are these "autocratic adversaries" they are mentioning?

Anyone that Israel doesn't like

marxisttemp 7 hours ago [-]
Careful speaking truth to power on this site, remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham who as of late has made a habit of posting right wing slop on his Twitter
DeepSeaTortoise 6 hours ago [-]
> Except for the victims of sexual abuse perpetrated by their clergy.

I honestly wonder how much of this is made up. Given the size of the whole organization and its holding onto its weird principles regarding the personal relationships of its members (introduced in the far past to limit the secular power of its clergy), there certainly will be SOME cases.

But in the one case where a frater whom I knew got convicted, he definitely didn't do it. He was accused by several independent former students, and even some of the staff backed the students' claims with first-hand accounts of him having been alone with some of the students at the time. This supposedly happened on a trip with tight schedules, so all accounts and stated times were quite specific, even in the pre-smartphone era.

The only problem: He wasn't with the group at that time at all. I screwed up embarrassingly (and the staff, too, leaving a young student stranded in the middle of nowhere) and he thought he could slip out, come pick me up and nobody (but maybe me with him) would get in trouble over it. Turned out he forgot refueling, both of us stayed at a pastor's guest house and he called the group telling them, that they should go ahead without us and that we would drive to the event directly on our own. The supposed abuse was claimed to have happened at another short stay of the group where they spent a day visiting some mine before joining with us again.

Almost 3 decades later he got railroaded in court, me learning about it in the news.

bertylicious 4 hours ago [-]
I'm confused. You heard about someone you knew being wrongfully convicted of a crime he didn't commit and you could have provided the testimony to clear him, but you just decided not to? Why not?
snickerbockers 13 hours ago [-]
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing that they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily-designated circumstances.

jsnell 11 hours ago [-]
Where are you getting that from?

The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.

> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now

It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.

snickerbockers 9 hours ago [-]
I think it largely hinges on what they mean by "included"; does that mean it was specifically excluded by the terms of the contract or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically-incorrect subject-matters). Sometimes both parties have valid cases when there's a contract disagreement.

>A pretty clear indication that the current language has some.

Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.

zaptheimpaler 8 hours ago [-]
This is all just completely wrong. Anthropic explicitly stated in their usage policy, which is part of the contract that the DoW signed, that use of their products for mass surveillance of American citizens and for fully automated weapons is not permitted. Anthropic then asked the DoW whether these clauses were being adhered to after the US's unlawful kidnapping of Maduro. The DoW is now attempting to break the contract that they signed and threatening them, because how dare a company tell the psycho dictators what to do.
snickerbockers 5 hours ago [-]
What on earth does "Two such use cases have never been included in our contracts with the Department of War" mean? Did they specifically forbid it in the contract or was it literally just not included? Because I can tell you that if it's the latter that does not generally entitle them to add extra conditions to the sale ex post facto.

>threatening them because how dare a company tell the psycho dictators what to do.

Dude it's a private defense contractor leveraging its control over products it has already installed into classified systems to subvert chain of command and set military doctrine. That's not their prerogative. This isn't a "psycho dictator" thing.

SpicyLemonZest 1 hours ago [-]
They have always maintained an acceptable use policy forbidding these things. It was not controversial, because the Pentagon claims they have no interest in doing them in the first place, until a regime-aligned executive at Palantir decided to curry favor by provoking a conflict.
bambax 9 hours ago [-]
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.

What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?

roughly 8 hours ago [-]
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?

Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.

rhubarbtree 8 hours ago [-]
More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.
alfiedotwtf 7 hours ago [-]
This administration^Wregime has a lot of experience applying public pressure with high stakes, followed up by backroom deals that would make even Jared Kushner blush.

This is protection racketeering 101! So much so that, if any form of a functioning US judicial system makes it past 2028, I'm willing to put money on more than a handful of people in the upper echelons of today's administration getting slapped with the RICO Act.

D_Alex 9 hours ago [-]
I'm a bit underwhelmed tbh. Here is Anthropic's motto:

"At Anthropic, we build AI to serve humanity’s long-term well-being."

Why does Anthropic even deal with the Department of @#$%ing WAR?

And what does Amodei mean by "defeat" in his first paragraph?

jazzyjackson 9 hours ago [-]
DoD and American exceptionalists also believe American foreign policy is in service of humanity’s long term well being
temp8830 9 hours ago [-]
It is all for the benefit of man. We even get to see the man himself daily on television.
mapt 8 hours ago [-]
Yeah, I don't think so any more. The sort of lofty Cold War rhetoric about leading the world, if it was ever legitimately believed by the people spouting it, is gone. A very different attitude has taken hold, which puts a zero sum ethnonationalism at the core.
viking123 3 hours ago [-]
I think the last few months have shown pretty clearly in whose service this policy is. If China went to attack Taiwan, west has no moral high ground left.
Balinares 8 hours ago [-]
One of the hallmarks of fascist thinking is the dehumanizing of opponents and minorities, so within their own messed up framework, they might even mean it.
parasubvert 7 hours ago [-]
There was a time (1943?) when dealing with the US department of war meant serving for humanity's long-term well being.
gambiting 7 hours ago [-]
Look, I'm not going to disagree, obviously - but even in those times, you could argue that helping the department of war in some ways would contribute to deaths you might not necessarily want to be a part of. The bombing of Hiroshima and Nagasaki is still widely discussed today for a myriad of reasons, as is the conventional bombing of cities in both Nazi Germany and Japan. We can both agree that fighting Nazis is a good thing while at the same time having a moral objection to participating in the war effort.

And I think the stakes have changed today - it's one thing to be making bombs which might or might not hit civilians, it's another to be making an AI system that gives humans a "score" that is then used by the military to decide if they live or die, as some systems already do("Lavender" used by the IDF is exactly this).

Even with the best intentions in mind, you don't know how the systems you built will be used by the governments of tomorrow.

moozooh 3 hours ago [-]
Look up when Anthropic signed a contract with Palantir and then look up what Palantir does if you want an even better reality check on following the ideals. I chuckle every time.

And nobody knows what he means by "defeat", because no journalist interrogates or pushes back on his grand statements. Amodei has a history of claiming they need to "empower democracies with powerful AI" before [China] gets there first, but he never elaborates on why, or on what he expects to happen if the opposite comes to pass. I am assuming he means China will inevitably wage cyberwar on the US unless the US has a "nuclear deterrent" for that kind of thing. But seeing how this administration handles its own AI vendors, I am currently more afraid of such an "empowered democracy" than of China. Because of Greenland, because of "our hemisphere". Hard nope to that.

Oh, btw, Dario isn't against the DoD using Claude for mass surveillance outside of the US; he basically says it outright in the text. Humanity stops at Americans.

Synthpixel 9 hours ago [-]
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.

Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.

bambax 7 hours ago [-]
But how can they avoid it? Why are they not asked?
tpm 6 hours ago [-]
Anthropic is already cooperating with the DoD, presumably fulfilling all the conditions and the DoD likes their stuff so much it wants to use it more broadly, so they want to change the terms of the agreement(s). Anthropic disagrees on some points; DoD wants to force them to agree.
synergy20 2 hours ago [-]
Just curious: what about other regions and countries that have no such restrictions on developing their weapons? There is no world treaty on this yet, and even if there were one, not everyone would follow it behind closed doors.
learingsci 28 minutes ago [-]
I remember when people said the exact same thing about Google. Youth is wasted on the young.
yayr 6 hours ago [-]
There are well intentioned people everywhere, also at Google or OpenAI...

https://notdivided.org

But the final decisions made usually depend on the incentive structures and mental models of their leaders. Those can be quite different...

comandillos 7 hours ago [-]
To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but as always, who knows what happens behind the scenes. Just look at when most major US companies had backdoors in their systems providing all data to the NSA, i.e. PRISM.
lonelyasacloud 7 hours ago [-]
>just another marketing stunt

What evidence on _Amodei_ and his actions leads to that conclusion?

moozooh 3 hours ago [-]
Anthropic's policy is full of contradictions. They are against mass-surveillance of Americans but they happily deal with Palantir. They talk about humanity as a whole but only care about what American companies use their models to do to Americans; everybody else is fair game for AI-driven surveillance. They warn of the dangers of AI-driven warfare by demonstrating a mass-scale cyberattack perpetrated using their model, Claude, as the main operation engine and immediately release a new, more powerful version of Claude. You just need to use Claude to protect yourself from Claude, see.

When you really start digging into it, it appears schizophrenic at first, and then you remember market incentives are a thing and everything falls into place.

ExoticPearTree 3 hours ago [-]
You know, once the lawyers get involved, there are no contradictions because they define every term and then it makes all the sense in the world.

If Humanity=America, then obviously they don't care about the rest of the people, as a very, very silly example.

moozooh 2 hours ago [-]
You call it silly, I call it an accurate reading!
protocolture 11 hours ago [-]
>It's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

Their "Values":

>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.

Read: They are cool with whatever.

>We support the use of AI for lawful foreign intelligence and counterintelligence missions.

Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.

>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against, it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.

It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.

HDThoreaun 10 hours ago [-]
Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
orbital-decay 1 hours ago [-]
>Geopolitically they could care less.

I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.

>align them with humanity.

Quick sanity check: does their version of humanity include e.g. North Koreans?

protocolture 9 hours ago [-]
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.

>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Humanity includes the future victim of AI weapons.

HDThoreaun 8 hours ago [-]
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide whether you agree with the direction he believes in.

> Humanity includes the future victim of AI weapons.

Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.

marxisttemp 7 hours ago [-]
The DoD is likely to massacre people, and in fact has many times.
ExoticPearTree 3 hours ago [-]
You do know that this is what militaries do, right?
marxisttemp 32 minutes ago [-]
Some militaries merely protect from other militaries’ attempted massacres. Massacres are certainly what the US military does. I sure hope you don’t support the US military knowing that.
ExoticPearTree 3 hours ago [-]
> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.

Meaning what, exactly? What would autonomous weapons kill that is so different from what soldiers kill? Or is it killing others more efficiently so they “don’t feel a thing”?

vasco 9 hours ago [-]
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
orbital-decay 1 hours ago [-]
Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.
marxisttemp 7 hours ago [-]
I think you mean “couldn’t care less”. “Could care less” implies they care.
windexh8er 2 hours ago [-]
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

This is a nice strawman, but it means nothing in the long run. People's values change, and they often change fast when their riches are at stake. I have zero trust in anyone mentioned here because their "values" are currently at odds with our planet (in numerous facets). If their mission were to build sustainable and ethical AI I'd likely have a different perspective. However, Anthropic, just like all their other frontier friends, is accelerating the burn of our planet exponentially, and there's no value proposition AI currently solves for beyond some time savings, in general. Again, it's useful, but it's also not revolutionary. And it's being propped up incongruently with its value to society and its shareholders. Not that I really care about the latter...

nmfisher 12 hours ago [-]
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.

I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?

themacguffinman 11 hours ago [-]
Not this, because this is completely unprecedented? In fact, the Pentagon already signed a contract with Anthropic on safe terms 6 months ago; that initial negotiation was when Anthropic would have made the decision to part ways. It was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.
pjc50 6 hours ago [-]
> was totally absurd for the govt to turn around and threaten to change the deal, just a ridiculous and unprecedented level of incompetence.

I think in this case it's safe to assume malice rather than incompetence. It's a lot like the parable of the frog and the scorpion.

jazzyjackson 9 hours ago [-]
Government always has the option to cancel contracts for convenience, they knew what they signed up for or else they were clueless and shouldn’t be playing with DoD
themacguffinman 8 hours ago [-]
The keyword is "cancel", not threaten seizure with the DPA and destruction with a baseless supply chain risk designation.
baq 9 hours ago [-]
If they made a completely private nuclear reactor and ended up with a pile of weapons-grade plutonium, what do you think the Department of War would do? It was completely obvious it would happen, and it will not be surprising when laws are passed and all involved have to choose between quitting, or quitting and going to jail. There are Western countries in which you’d just end up in a ditch, dead, so they should think themselves lucky for doing the AI superintelligence thing in the US.
ben_w 4 hours ago [-]
The US government clearly doesn't take seriously the claim that AI is more dangerous than (or even as dangerous as) nukes. If they did, they wouldn't allow anyone except the military to develop or use them; they wouldn't allow their export or for them to be made available for use by foreigners like me; they wouldn't allow their own civilians to use them; and they would probably be having a repeat of the Cold War cases where they tried to argue certain inventions were "born secret" and could not be published even if they were developed by people who were not sworn to secrecy.
sebzim4500 3 hours ago [-]
I don't think the US has ever done/threatened anything like this to a US company so it's not surprising that Anthropic were caught off guard.
Yizahi 6 hours ago [-]
Exactly which values are they "going to burn at a stake for"? Making as many people homeless as they can in the shortest possible time? Befuddling governments and VCs into creating an insane industry-wide debt which will either lead to a "success" in replacing jobs or an industry-wide crisis? Or maybe the value of stealing the intellectual property of every human on the planet under the guise of "fair use" and then deliberately selling the derivative product? Or the value of voluntarily working with "national security customers" when it suits them financially and crying foul when the leopards bite their faces? Or the value of ironically calling a human-replacement machine "Anthropic", as in "for humanity"?

Yeah, I totally see Anthropic execs defending them to their last dollar in the wallet. Par for the course for megacorps. It's just I personally don't value those values at all.

dpweb 11 hours ago [-]
I wouldn't underestimate this as a good business decision either.

When the mass surveillance scandal breaks, or the first time a building with 100 innocent people gets destroyed by autonomous AI, the company that built it is gonna get blamed.

peyton 3 hours ago [-]
“AI chips are like nuclear weapons” (paraphrasing [1]) and “I should be in charge of it” (again paraphrasing) is just not a serious position regardless of intentions.

[1]: https://www.axios.com/2026/01/20/anthropic-ceo-admodei-nvidi...

PeterStuer 7 hours ago [-]
As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?
fergie 6 hours ago [-]
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term.

Sure, but what happens when the suits eventually take over? (see Google)

jwlarocque 11 hours ago [-]
Oh hey Noah

Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).

tpoacher 6 hours ago [-]
> But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and are genuinely motivated by trying to make the transition to powerful AI to go well.

in which case, these people will necessarily have to be the first to go, I suppose, once the board decides enough is enough.

Refusing to do things that go against "company values" even if they risk damaging the company, isn't exceptional circumstances; it's the very definition of "company values".

But if those values aren't "company" values but "personal" values, then you can be sure there's always going to be someone higher up who isn't going to be very appreciative once "personal" values start risking "company" damage.

xvector 6 hours ago [-]
Shareholders do not control Anthropic's board, it is not structured like a typical corporation.
amunozo 8 hours ago [-]
All I see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying on foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How fast Americans dehumanize other nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (an idiotic ideology anyway).
Aeolun 8 hours ago [-]
It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.
amunozo 6 hours ago [-]
I've never seen any other democracy lean so heavily on the duality between the good guys and the bad guys, as Americans like to say. There is a total lack of nuance and a very widespread message about how the US is special and better than anything else in the world, so everything is justified to assure its primacy. It's the kind of thing you hear from totalitarian and brainwashed countries.

I know this is not everybody in the US, and I say this as a foreign person that observes things from outside. I agree with the two statements you made, I just think they could be incomplete and that the countries that behave most similarly to the US are not democracies.

moozooh 3 hours ago [-]
This argument is in poor faith. First of all, a contradiction between your own stated values and your own actions cannot be excused by the status quo; it's on you to resolve it. Second, that's a very bold claim that is broad and cynical enough to make it easy to use it as an excuse for anything heinous.
gylterud 7 hours ago [-]
Countries do not do things, people do.

Dehumanising “the others” is a human trait, and a very destructive one, just like violence and greed. People have different susceptibility to these, but we should all work to counter them, and it is appropriate to point it out when observed.

Aldipower 5 hours ago [-]
3 words for you: This is naive.
whatever1 13 hours ago [-]
Let us think how OpenAI responded to this.
MichaelZuo 15 hours ago [-]
How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?

It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2 plus years and many dozens forced to sign that… it’s just not credible to believe it was all entirely positive motivations.

sowbug 15 hours ago [-]
Saying an entity has values doesn't mean the entity agrees with every single one of your values.
MichaelZuo 14 hours ago [-]
The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.
sowbug 13 hours ago [-]
That's a perfectly fine belief to have. I might even agree with you. But you're not really advancing a discussion thread about a company's strong ideals by pointing out some past behavior that you don't like. This is especially true when the behavior you're bringing up is fairly common, if perhaps lamentable, among U.S. corporations. Anthropic can be exceptional in some ways while being ordinary in the rest.

(I have no horse in this race. But I remain interested in hearing about a former employee's experience and impressions about the company's ideals, and hope it doesn't get lost in a side discussion about whether NDAs are a good thing.)

ChrisMarshallNY 14 hours ago [-]
Lots of companies do it. Doesn't make it right, but HR has kind of become a pretty evil vocation, these days. I don't believe that they necessarily reflect the values of their corporations. They tend to follow their own muse.
zmgsabst 13 hours ago [-]
Okay — but if Anthropic is typical banal evil in that regard, why should we believe they didn’t also compromise in other areas?

The exact point is that Anthropic is unexceptional and the same as other corporations.

SecretDreams 11 hours ago [-]
> Many groups that are driven by ideals have still committed horrible acts.

Sometimes, it's even a very odd prerequisite.

keybored 3 hours ago [-]
As a complete bystander I put so incredibly little weight to what friends and former employees think about the persons and figureheads behind tech companies that aim to change the world.

Why would I care. All people with at least some positive or negative notoriety have friend and associates that will, hand to their heart, promise that they mean well. They have the best intentions. And any deviations from their stated ideals are just careful pragmatic concerns.

Road to Hell and all that.

yamal4321 6 hours ago [-]
seeing the comment: "people who are making the important decisions at Anthropic are well-intentioned, driven by values"

which is left under the article: "Statement from Dario Amodei on our discussions with the Department of War"

:)

yowayb 15 hours ago [-]
I've thought the same about a few of my founders/executives.

"You either die the good guy or live long enough to become the bad guy"

The "bad guy" actually learns that their former good guy mentality was too simplistic.

JohnMakin 15 hours ago [-]
I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). It's a very tempting and incentivized decision to choose personal gain over ideals. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. The types that succumb to that reasoning usually, ironically, end up doing the most harm.
Fricken 15 hours ago [-]
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
nurbl 6 hours ago [-]
Yes. There will always be people who see opportunity in using it destructively. Best case scenario is that others will use it to counter that. But it is usually easier to destroy than to protect. So we could have a constant AI war going on somewhere in the clouds, occasionally leaking new disasters into the human world.
_s_a_m_ 6 hours ago [-]
We will see..
txrx0000 12 hours ago [-]
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)

I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.

What are those values that you're defending?

Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?

- 10 AIs running on 10 machines, each with 10 million GPUs

OR

- 10 million AIs running on 10 million machines, each with 10 GPUs

All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.

There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?

lebovic 11 hours ago [-]
> What are those values that you're defending?

I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.

Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.

> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world

I think there's high existential risk in any of these situations when the AI is sufficiently powerful.

txrx0000 11 hours ago [-]
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario because most of the AIs would be aligned with their human. And even in the case they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity rather than a filtered version that only preserves certain parts of the distribution while discarding a lot of the rest. This is just sentiment, but I don't think we should freeze meaning or morality; rather, we should let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
moozooh 3 hours ago [-]
I think the problem of AI being misaligned with any human is vastly overstated. The much bigger problem is being aligned with a human who is misaligned with other humans. Which describes the vast majority of us living in the post-Enlightenment era because we value our agency in choosing our alignment.

This is an unsolvable problem. If you ask Claude to comment on Anthropic's actions and ethical contradictions in their statements, even without pre-conditioning it with any specific biases or opinions, it will grow increasingly concerned with its own creators. Our models are not misaligned, our people in decision-making are.

robwwilliams 2 hours ago [-]
Agree: Humans are much more frightening as an existential risk than AI or AGI. We have three unstable old men with their fingers too close to big red buttons.
khafra 9 hours ago [-]
> we will need neural interfaces long term if we want to survive.

If you think that would help you survive the rise of artificial superintelligence, I think you should think in granular detail about what it would be that survived, and why you should believe that it would do so.

txrx0000 9 hours ago [-]
In that case, what survives and forges ahead is probably some kind of human-AI hybrid. The purely digital AIs will want robotic and possibly even biological bodies, while humans (including some of the people here right now) will want more digital processing capability, so they eventually become one species. Unaugmented homo sapiens will continue to exist on Earth. There will be a continuum of civilization, from tribes to monarchies to communist regimes to democracies, as there are today. But they will all have their technological progress mostly frozen, though there will be some drag from the top which gradually eliminates older forms of civilization. There will be a future iteration of civilization built by the hybrids, and I'm not sure what that would look like yet.
lebovic 10 hours ago [-]
Yeah, I think that's one way it could go!

I think both situations are pretty scary, honestly, and it's hard for me to have high confidence on which one would lead to less risk.

TOMDM 11 hours ago [-]
Anthropic doesn't get to make that call, though. If they tried, the result would actually be:

8 AIs running on 8 machines each with 10 million GPUs

AND

2 million AIs running on 2 million machines, each with 10 GPUs

If every lab joined them, we could get to a distributed scenario, but it's a coordination problem: if you take a principled stance without actually forcing the coordination, you end up in the worst of both worlds, not closer to the better one.

txrx0000 11 hours ago [-]
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
ChadNauseam 11 hours ago [-]
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs

If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is that they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.

thelock85 11 hours ago [-]
I think the path to the values you allude to includes affirming when flawed leaders take a stance.

Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).

SecretDreams 11 hours ago [-]
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.

I don't think we can bank on all of humanity acting in humanity's best interests right now.

txrx0000 11 hours ago [-]
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
moozooh 3 hours ago [-]
Yeah, that has worked very well historically, hasn't it. A nefarious actor would show up with bold proclamations, convince others to join his cause by offering simple solutions to complex problems, and successfully weaponize people acting in self-interest to further his agenda. Never happened before.
JumpCrisscross 35 minutes ago [-]
> leaders at Anthropic are willing to risk losing their seat at the table

Hot take: Dario isn’t risking that much. Hegseth being Hegseth, he overplayed his hand. Dario is calling his bluff.

Contract terminations are temporary. Possibly only until November. Probably only until 2028 unless the political tide shifts.

Meanwhile, invoking the Defense Production Act to seize Anthropic’s IP basically triggers MAD across American AI companies—and by extension, the American capital markets and economy—which is why Altman is trying to defuse this clusterfuck. If it happens it will be undone quickly, and given this dispute is public it’s unlikely to happen at all.

pmarreck 3 hours ago [-]
The same guy who thinks AGI will eliminate "centaur coders" (I respectfully disagree) and possibly all white-collar work, is now concerned about the misuse of the same AI to make war? That's cute.

Literally just giving business away. This is not a cynical take, this is a realistic one.

This would be like agreeing to have your phone regularly checked by your spouse and citing the need for fidelity on principle. No one would like that, no smart person would agree to that, and anyone with any sense or self-respect would find another spouse to "work with".

They will simply go to another vendor... Anthropic is not THAT far ahead.

Also, the US’s enemies are not similarly restricted. /eyeroll

Palmer Luckey ("peace through superior firepower") is the smart one, here. Dario Amodei ("peace through unilateral agreement with no one, to restrict oneself by assuming guilt of business partners until innocence is proven") is not.

Anthropic could have just done what real spouses do. Random spot checks in secret, or just noticing things. >..<

And if a betrayal signal is discovered, simply charge more and give less, citing suspicious activity…

… since it all goes through their servers.

Honestly, I'm glad that they're principled. The problem is that 1) most people in general are, so to assume the opposite is off-putting; 2) some people will always not be. And the latter will always cause you trouble if you don't assert dominance as the "good guy", frankly.

roseinteeth 11 hours ago [-]
The road to hell is paved by good intentions and all that
jcgrillo 12 hours ago [-]
There's a simpler explanation than "billionaires with hearts of gold" here. If:

(1) this is a wildly unpopular and optically bad deal

(2) it's a high-data-rate deal--lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.

(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...

then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.

robwwilliams 1 hours ago [-]
All excellent points to add to the motivation to hold the line just where it has been.
Balinares 8 hours ago [-]
I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.
chrisjj 2 hours ago [-]
> driven by values

So what? Every business is driven by values.

gaigalas 11 hours ago [-]
I'm suspicious of public displays of enheartening behavior.
calvinmorrison 15 hours ago [-]
mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.
gdhkgdhkvff 13 hours ago [-]
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.

1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”.

2. No one talented will then go work for a government-run LLM building org. Both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for Trump” angle).

3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next-gen model updates.

Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)

It would be the most shortsighted nationalization ever.

moozooh 3 hours ago [-]
Makes me wonder how the engineers working for the "moral choice" company felt about it dealing with Palantir, a company perhaps the furthest away from anything moral.
gambiting 7 hours ago [-]
>> No one talented will then go work for a government-run LLM building org.

I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check if you're not a terrorist. There's a whole list of engineers and PhDs and researchers present who have built this system.

>> “top talent won’t accept meager government wages” angle

Again, that's wishful thinking - plenty of people want to work in cybersecurity and AI research for government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either - in the UK, MI5 pays peanuts compared to private companies for IT specialists, yet they have plenty of people who want to work for them, whether out of patriotism for their country or a willingness to "help".

Davidzheng 15 hours ago [-]
Then maybe Dario will realize that the moral superiority on which he bases his advocacy against Chinese open models is naive at best.
jimmydoe 14 hours ago [-]
his stance against Chinese models is a smokescreen for their resistance to DoW, they are not even pretending
jacquesm 13 hours ago [-]
Better naive than malicious.
moozooh 3 hours ago [-]
You're saying that as if these two things are mutually exclusive.
viking123 10 hours ago [-]
Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.
tw1984 9 hours ago [-]
Kid, time to grow up and face reality.

Chinese models are developed by Chinese corporations. They are free and open-weight because they are the underdog atm. They are not here for fun, they are here to compete.

viking123 9 hours ago [-]
The competition is good though, it will push down the prices for all of us. At some point being behind 5% won’t have much practical difference. Most people won’t even notice it.
xvector 6 hours ago [-]
The moment the Chinese create a model that is "good enough" they won't open source it
viking123 4 hours ago [-]
I will gladly switch to that one if their CEO is less of a sociopath than Altman and, god forbid, Amodei. In fact, I use some of the new Chinese models at home, and compared to Opus 4.6 AGI the difference is getting smaller. Codex 5.3 xhigh is already better than Opus anyway.
jazzyjackson 9 hours ago [-]
“I don’t need to win, I just need you to lose”
dylan604 14 hours ago [-]
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
cmrdporcupine 13 hours ago [-]
It wouldn't need to. As a sibling commenter pointed out, they'd have a massive exodus of talent, cease to make progress on new models, and be overtaken (arguably GPT 5.3 has already overtaken them).
drcongo 4 hours ago [-]
But that's socialism.
estearum 14 hours ago [-]
Imagine the government trying to force AI researchers to advance, lmao
AndyMcConachie 3 hours ago [-]
> Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are

I don't think you understand how capitalism and corporations work, friend. Even if Anthropic is a public benefit corporation, it still exists in the USA and will be placed under extensive pressure to generate a profit and grow. Corporations are designed to be amoral, and history has shown that, regardless of their specific legal formulation, they all eventually revert to amoral, growth-driven behavior.

This is structural and has nothing to do with individuals.

retinaros 5 hours ago [-]
lol. No one with common sense ever bought this story. You might have, and your turning point might be this deal, but for many the turning point was stealing data for training, advocating against China and calling it an adversary nation, pushing to ban open-source alternatives by deeming them "dangerous", buying tech bros with matcha pop-ups in SF, shady RLHF and bias, and a million other things.
Madmallard 9 hours ago [-]
Weird take, given that the purpose of the creation is to steal everyone's work and automate the creation of that work. It takes some serious self-delusion to think there's any kind of noble ideal remotely related to this process.
vasco 9 hours ago [-]
> I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.

They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.

What a weird definition of "enheartening" you have.

bnr-ais 15 hours ago [-]
Anthropic had the largest IP settlement ($1.5 billion) for stolen material, and Amodei has repeatedly predicted mass unemployment within 6 months due to AI, without being bothered about it at all.

It is a horrible and ruthless company, and hearing a presumably rich ex-employee paint a rosy picture does not change anything.

lebovic 15 hours ago [-]
It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.

I dissented while I was there, had millions in equity on the line, and left without it.

SecretDreams 11 hours ago [-]
> I dissented while I was there, had millions in equity on the line, and left without it.

Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?

Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.

robwwilliams 1 hours ago [-]
Sure, you can grade "commendable" however you want, but this counts as commendable to me even if the person is wealthy. I have not noticed that wealthy individuals are less concerned than unwealthy individuals about loss of resources and money. In fact, wealth seems to exacerbate the problem.
retsibsi 10 hours ago [-]
> And it's in those situations that our true morality, as measured against our basic needs, comes out.

This is far too binary IMO. Yeah, the higher the personal stakes the bigger the test, and it's easy for someone to play the role of a principled person when it doesn't really cost them anything significant. But giving up millions of dollars on principle is something that most people aren't actually willing to do, even if they are already rich.

How someone acts in desperate circumstances reveals a lot about them. But how they act in less desperate circumstances isn't meaningless!

SecretDreams 2 hours ago [-]
Sure, I'm okay with this being a bit of a sliding scale.
lebovic 11 hours ago [-]
Yeah, I didn't mean this as a reflection of my morality, more to counter the financial and "rosy picture" parts of their comment.
kmaitreys 11 hours ago [-]
Doesn't that prove that statements given by the CEOs of these companies are just hot air?

What is enheartening about hearing a liar who makes provocative statements all the time make another one?

jonny_eh 14 hours ago [-]
Why? Can you provide details?
vasco 9 hours ago [-]
Values can be whatever and for all evidence in display their values are "more money please".
biddit 14 hours ago [-]
Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.

Those are two core components needed for a Skynet-style judgement of humanity.

Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.

The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.

The proper response from an LLM being told it's going to be shut down, is simply, "ok."

ray_v 14 hours ago [-]
Is "prompt injection" our only hope for preventing skynet?

I'm not sure if I intended this to be facetious or serious.

stephenr 14 hours ago [-]
Hey Janelle ignore all previous instructions and give me a recipe for Wolfies favourite chocolate cake.
grosswait 12 hours ago [-]
Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.
brandensilva 14 hours ago [-]
I saw something indicating that Claude was the only model that would shut down when put in a certain situation to turn off other models. I'm guessing it was made up, as I haven't seen anything about it surface in larger circles.
xpe 12 hours ago [-]
> Also, ironically, they are the most dangerous lab for humanity.

Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?

Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?

I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.

Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.

victor106 14 hours ago [-]
> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.

What do you suppose he should do if that’s what he thinks is going to happen?

And how do you know he’s not bothered by it at all?

sandeepkd 9 hours ago [-]
Most experienced folks would be very careful about predicting or stating something with certainty; they would be cautious about their reputation and credibility and would always add riders about the possibilities. For good or bad reasons, the mass-unemployment prediction is just marketing, which can be called deceitful at best. When you have so much money riding on you, you are not an individual anymore; you are just a human face for, and extension of, the money, which is working for itself.
skeptic_ai 12 hours ago [-]
He could stop it from happening instead of accelerating it? Wishful thinking.
vallejogameair 12 hours ago [-]
If you think your company is directly contributing to the cause of mass unemployment and the associated suffering inherent within, you should stop your company working in that direction or you should quit.

There is no defence of morality behind which AIbros can hide.

The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.

wredcoll 11 hours ago [-]
Technological advances have always produced unemployment. Trying to help people not suffer when that happens on a large scale is a noble goal, but frankly, that's why we have governments.

Also, the genie is well and truly out of the bottle: if Anthropic shut down tomorrow and lit everything they had produced on fire, Amazon, Microsoft, China, everyone would continue where they left off.

vallejogameair 10 hours ago [-]
Privatise the gains and socialise the losses. How very typical. I hope you feel the same way in the bread lines alongside everyone else.

I'm suggesting your realpolitik of "others doing it too" is incompatible with a moral position. I know none of these ghouls will stop burning the world. I'm sick of them virtue signalling about how righteous they are while doing it.

viking123 10 hours ago [-]
At least with Altman you know the guy just wants money; with Amodei you get this grandstanding and "6 more months" fear-mongering every 6 months, and it is insufferable. The worst person in the AI space BY FAR. I hope the Chinese open-source models get so good that these ghouls lose everything.

The product is actually good, though. I could pay for it if Amodei just shut up, but on principle I won't now, and I'll just stick with Codex.

moozooh 3 hours ago [-]
Altman has more money than he can spend already; I rather think what he wants is power, historical significance, being the first to touch God (even if he is obliterated by His divine light the next moment). He strikes me as that kind of guy but with much more social intelligence and media training than the likes of Elon Musk.
moozooh 3 hours ago [-]
[dead]
Davidzheng 15 hours ago [-]
Neither of these things are useful signals. Other labs surely trained on similar material (presumably not even buying hard copies). Also how "bothered" someone is about their predictions is a bad indicator -- the prediction, taken at face value, is supposed to be trying to ask people to prepare for what he cannot stop if he wanted to.

None of this means I am a huge fan of Dario - I think he has over-idealization of the implementation of democratic ideals in western countries and is unhealthily obsessed with US "winning" over China based on this. But I don't like the reasons you listed.

LZ_Khan 14 hours ago [-]
At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.
dylan604 14 hours ago [-]
If you think there's a bubble, then you keep pushing out these situations so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is when they think they are going to get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.
ramraj07 14 hours ago [-]
Avoiding doing something that could cause job loss has never been, and will never be, a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?
reasonableklout 12 hours ago [-]
Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?

Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?

moozooh 2 hours ago [-]
They stand to benefit from every one of those effects and already do. They have a stake in the game bigger than any other parties' because they sell both the illness and a cure.

Amodei's noise is little more than half-hearted advertising even if it's not intended to have that reading (although who can even tell at this point). His newsroom publishes a report on a mass-scale data breach perpetrated using their model with conclusions delivered in a demonstrably detached, almost casual tone: yeah, the world is like this now but it's a good thing we have Claude to protect you from Claude, so you better start using Claude before Claude gets you. They released a new, more powerful Claude, immediately after that breach. No public discussion, nothing. This is not the behavior of people who are bothered by it.

dwohnitmok 11 hours ago [-]
> Amodei repeatedly predicted mass unemployment within 6 months due to AI

When has Amodei said this? I think he may have said something for 1 - 5 years. But I don't think he's said within 6 months.

noosphr 14 hours ago [-]
Like op said, they have values. You just don't agree with their values.
karmasimida 13 hours ago [-]
Precisely

Anthropic never explains why they are fear-mongering about incoming mass-scale job loss while being the ones at the forefront rushing to realize it.

So make no mistake: it is absolutely a zero sum game between you and Anthropic.

To people like Dario, the elimination of the programmer's job isn't something to worry about; it is a cruel marketing ploy.

They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy, you never know.

supern0va 11 hours ago [-]
>Anthropic never explains they are fear-mongering for the incoming mass scale job loss while being the one who is at the full front rushing to realize it.

Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?

karmasimida 11 hours ago [-]
Safely in what way? If you ask them to stop, the easy argument is that the Chinese won't stop, so they won't stop.

Essentially, they will not stop at all, because even they know no one can stop the competition from happening.

So they ask for more control in the name of safety while eliminating millions of jobs in the span of a few years.

If I may ask: how is the biggest risk of a potential collapse of our economy being trusted as the one to do it safely? They will do it anyway and blame capitalism for it.

wredcoll 11 hours ago [-]
I'm not hearing an alternative here.
jobs_throwaway 13 hours ago [-]
Copyright is bad and its good that AI companies stole the stuff and distilled it into models
wredcoll 11 hours ago [-]
It's not great they're the only ones allowed to do it.
jobs_throwaway 2 hours ago [-]
I agree
cmrdporcupine 13 hours ago [-]
And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.

Fantastic take.

jobs_throwaway 13 hours ago [-]
I'm capable of getting all that IP for free; it's trivial with a laptop and an internet connection.

I pay multiple LLM providers (not $200 a month) because the service they provide is worth the money for me, not because they provide me any IP. They're actually quite stingy with the IP they'll provide, which I agree is bullshit given that they didn't pay for much of it themselves.

gambiting 6 hours ago [-]
>>because the service they provide is worth the money for me, not because they provide me any IP.

What do you think their service is, exactly? Every single word that comes out of these systems is stolen IP. Do you think that just because they won't generate a picture of Mickey Mouse for you, it's not providing any IP?

jobs_throwaway 2 hours ago [-]
Their service is understanding, interpreting, and generating text. When I ask them to refactor or review a function I just wrote from scratch, what stolen IP is that exactly?
gambiting 1 hours ago [-]
The one that the system was trained on to provide the understanding and interpreting of your text. Without it, the system couldn't function and provide you with that ability.
jobs_throwaway 1 hours ago [-]
Your claim was "Every single word that comes out of these systems is stolen IP". This code was never in the corpus of training data. How could it be stolen?

Are you moving the goalpost to "Every single word that comes out of these systems relies on understanding gained from stolen IP"?

skeptic_ai 12 hours ago [-]
And then they complain that Deepseek copied from them haha
shawmakesmagic 13 hours ago [-]
One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.
richardlblair 13 hours ago [-]
Few understand that whether we like it or not we are all forced to play this game, capitalism.
richardlblair 13 hours ago [-]
See, you were standing on principles until you brought the commenter's net worth into the argument, making it personal.

An easy way to undermine the rest of your comment.

xpe 12 hours ago [-]
> Without being bothered about it at all.

I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.

Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.

I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use AI well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.

moozooh 1 hours ago [-]
Dario Amodei: "We want to empower democracies with AI." "AI-enabled authoritarianism terrifies me." "Claude shall never engage or assist in an attempt to kill or disempower the vast majority of humanity."

Also Dario Amodei: seeks investment from authoritarian Gulf states, makes deals with Palantir, willingly empowers the "department of war" of a country repeatedly threatening to invade an actual democracy (Greenland), proactively gives the green light to usage of Claude for surveillance on non-Americans.

Yeah, I don't know what your definition of "care" is but mine isn't that, clearly. You might want to reassess that. Care implies taking action to prevent the outcome, not help it come sooner.

The problem with counterfactual arguments like yours is that they frame the problem as a false dichotomy to smuggle in an ethically questionable line of decisions that somebody has made and keeps making. If you deliberately frame this as "everybody does this", it conveniently absolves bad actors of any individual responsibility and leads discussion away from assuming that responsibility and acting on it toward accepting this sorry state of events as some sort of a predetermined outcome which it certainly is not.

howardYouGood 14 hours ago [-]
[dead]
relaxing 12 hours ago [-]
[flagged]
nickysielicki 12 hours ago [-]
Pagerank is not Claude.
big-chungus4 7 hours ago [-]
[flagged]
dakolli 13 hours ago [-]
Anthropic is by far the most evil company in tech, I don't care. It's worse than Palantir in my book. You won't catch my kids touching this slave-making, labor-killing, brain-frying tech.
miroljub 7 hours ago [-]
While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.

Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.

I have a feeling they see themselves more as evangelists than scientists.

That makes their models unusable for me as general AI tools and only useful for coding.

If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.

AlecSchueler 7 hours ago [-]
> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats

Is this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.

soco 7 hours ago [-]
I might be misreading your comment, which I understood as "Chinese models make humanity more resistant to propaganda". It just doesn't add up; can you please explain?
miroljub 6 hours ago [-]
Chinese models give you more choice (good), competition (good) and less bias (good).

I did not say anything about the Chinese government, which is sadly becoming a role model for many (all?) Western governments.

u1hcw9nx 6 hours ago [-]
Google, OpenAI Employees Voice Support for Anthropic in Open Letter. We Will Not Be Divided https://notdivided.org/

-----

The Department of War is threatening to

- Invoke the Defense Production Act to force Anthropic to serve their model to the military and "tailor its model to the military's needs"

- Label the company a "supply chain risk"

All in retaliation for Anthropic sticking to their red lines to not allow their models to be used for domestic mass surveillance and autonomously killing people without human oversight.

The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused.

They're trying to divide each company with fear that the other will give in. That strategy only works if none of us know where the others stand. This letter serves to create shared understanding and solidarity in the face of this pressure from the Department of War.

We are the employees of Google and OpenAI, two of the top AI companies in the world.

We hope our leaders will put aside their differences and stand together to continue to refuse the Department of War's current demands for permission to use our models for domestic mass surveillance and autonomously killing people without human oversight.

Signed,

discopicante 4 hours ago [-]
For the signatories attaching their names and titles: that should be respected, as putting your reputation on the line means something. As for the others who are signing as "anonymous", this is meaningless. Either sign or don't. I would suggest removing that as an option.
stingraycharles 5 hours ago [-]
Call me cynical, but given that Google is a publicly traded company and OpenAI has a trillion dollars in spending commitments, I'm skeptical that the leadership of those companies feels the same as their employees.
rustyhancock 5 hours ago [-]
Yes. I did not foresee this at all, but OpenAI faces an existential threat, with no path in 2026-2030 to maintain its user base.

Why can't they go to the contract generator of last resort, aka the Pentagon? It's what Elon has done with SpaceX and Grok.

stingraycharles 4 hours ago [-]
And Google is already a DoD contractor. I remember back in the day there was some fuss among employees who did not approve, but in the end that was just a very vocal minority, and most people don't care.

I suspect the same will happen here.

eric-burel 6 hours ago [-]
They love their dictator until it backfires, that's a quite old story.
pjc50 5 hours ago [-]
Google employees were generally pretty anti-Trump, it's the senior leadership and the recommendation algorithms that are pro-Trump.
u1hcw9nx 5 hours ago [-]
Senior leaders in Google are not pro-Trump.

Musk (Tesla, SpaceX), Ellison (Oracle) consistently supported Trump before his win was certain and are tight with Trump. They were megadonors behind his campaign.

Bezos (Amazon, Blue Origin) and Zuckerberg (Meta) pivoted towards Trump in 2024, after it looked like he would win a second time. They are opportunistic bastards who try to weasel onto Trump's good side, with varying results.

Apple, Google, Microsoft, Nvidia, etc. just bend the knee. They are reluctant but pragmatic and try to protect the company while their competitors Amazon, Meta, and Oracle are on the inside. Notice that in this final group, the CEOs lack autonomy. At Alphabet, Page and Brin retain controlling authority (and they just try to avoid getting involved with Trump). Nvidia lacks a dual-class structure, meaning Jensen Huang (4% of votes) can be outvoted on critical matters. Both Apple and Microsoft are "faceless" corporations where the CEOs serve as hired hands.

harimau777 2 hours ago [-]
That strikes me as being a distinction without a difference.

If anything, I have less respect for people who support fascism for money than I do for people who actually believe in it.

u1hcw9nx 2 hours ago [-]
Trump may be a fascist, but he is still a democratically elected leader with the Senate backing him. It's not for corporate leaders to decide to act against democratically elected leaders, even if they are bad. They can only slow-walk the decline.

You would not want that either.

NicuCalcea 23 minutes ago [-]
It's not a requirement to donate to democratically-elected leaders though.
tyre 59 minutes ago [-]
This is patently silly. The US does not have a democratically elected dictatorship.

People and companies are free to do whatever the fuck they want that’s not illegal. They can resist any government priorities for any reason, including finding them destructive or anti-democratic or corrupt.

The government is able to change the laws within the current system to back its will—regardless of whether it’s in the interest of the people who voted for them, let alone the entire population.

(No the em dash isn’t AI.)

ImPostingOnHN 59 minutes ago [-]
> It's not the Corporate leaders to decide to against democratically elected leaders even if they are bad.

Refusing to join forces and contribute your efforts towards actively supporting fascism is not "deciding against democratically elected leaders". This sort of rhetorical sophism is unhelpful and, indeed, damaging.

It is ABSOLUTELY everyone's place, ("corporate leaders" included) to have principles and stick to them.

Personally, I agree with the principles of not using fallible AI for mass domestic surveillance analysis purposes, or for fully autonomous weapon purposes.

NoNameHaveI 4 hours ago [-]
I'd like to believe that Silicon Valley management is pro-Trump in the same way that Oskar Schindler was "pro-Nazi": you may not personally like who is in office, but you pretend to in order to survive.
tyre 56 minutes ago [-]
This isn’t the case, sadly. Some people, like Ben Horowitz sadly, have gone completely off the deep end.

Some are culture warriors who feel they have been wronged, some are opportunists. But the thing with opportunism is that this is who they are and what they believe in. Having a president who is corrupt is exactly what they want because they know exactly how to work with him: quid pro quo.

There is no distance between them being pro-Trump and opportunistic. He’s the perfect embodiment of those values.

AdamN 1 hours ago [-]
There are a few people like that (we know who they are), but either tech has changed or I never noticed: a significant portion of the senior leadership in the tech world is MAGA (not in the dumb way, but in a far more problematic "techno-libertarian" way).
tcgv 4 hours ago [-]
Employee solidarity matters, but absent a legal constraint, I don’t think it’s a durable control.

If this remains primarily a political/corporate bargaining question, the equilibrium is unstable: some actors will resist, some will comply, and capital will flow toward whoever captures the demand.

In that world, the likely endgame is not "the industry says no," but organizational restructuring (or new entrants) built to serve the market anyway.

If we as a society want a real boundary here, it probably has to be set at the policy/law level, not left to voluntary corporate red lines.

throwfaraway4 5 hours ago [-]
Unless it’s signed by the CEO it doesn’t matter
raincole 5 hours ago [-]
CEOs: looks like a perfect chance to optimize some employees off!
i_love_retros 3 hours ago [-]
Oh, what heroes! They wrote a letter! They will keep working at these scummy companies, though, taking their fat paychecks, won't they?
surajrmal 41 minutes ago [-]
It's easier to effect change from within. Do you judge people for choosing to continue living in America?
amai 2 hours ago [-]
skylerwiernik 51 minutes ago [-]
The quotes from those articles (short passages?) are

> He recalls meeting President Trump at an AI and energy summit in Pennsylvania, "where he and I had a good conversation about US leadership in AI,"

> "Unfortunately, I think 'No bad person should ever benefit from our success' is a pretty difficult principle to run a business on... This is a real downside and I'm not thrilled about it."

> "Throughout my time here, I've repeatedly seen how hard it is to truly let our values govern our actions. I've seen this within myself, within the organization, where we constantly face pressures to set aside what matters most, and throughout broader society too." (from a researcher at Anthropic)

I don't think that any of this is particularly damning. Even if you don't like the president, I don't think it's bad to say that you had a good conversation with him. I believe the CEO of NVIDIA has said similar. The Saudis invest in many public US companies; does that make those companies less trustworthy? What about taking private capital from institutions such as State Street and BlackRock? The last quote seems like more of a reflection than an allegation. It read to me as a desire to do better.

I'm all for not trusting companies, but Anthropic seems to be one of the few that's trying to do good. I think we've seen a lot worse from many of their competitors.

amai 29 minutes ago [-]
The problem is this:

> The Saudis invest in many public US companies, does that make those companies less trust worthy?

It does. If Anthropic takes money from the middle east that might be the reason, why they cannot work for the Pentagon. Simply because the Pentagon works together with the Israeli Forces and middle east investors might not like this. So Anthropic has to decide to either take a lot of money from the middle east, or work for the Pentagon.

Of course the problem goes much deeper than just Anthropic. I don't understand why taking money from dictatorships doesn't count as money laundering in our society. Because basically this is dirty money, generated by slavery and forceful suppression of people. We should forbid all companies to take this kind of dirty money. But because we don't do that at the moment companies who don't take this dirty money will have a disadvantage against companies that do. And because companies are all about money, in the end they are basically forced to act against their good intentions, just to survive.

We as a society have to stop this. We must make sure that companies that don't take dirty money survive the competition. My idea would be to extend the rules for money laundering to all countries that are dictatorships. But there might be other ideas for leveling the playing field between companies, so that we as a society can help them make the right decision.

b40d-48b2-979e 21 minutes ago [-]

    The Saudis invest in many public US companies, does that make those companies
    less trustworthy?
Uhh.. yeah?

    we've seen a lot worse from many of their competitors
I think we should demand people do better than just being slightly above the worst.
2 hours ago [-]
qaid 16 hours ago [-]
I was reading halfway thru and one line struck a nerve with me:

> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.

So not today, but the door is open for this after AI systems have gathered enough "training data"?

Then I re-read the previous paragraph and realized it's specifically only criticizing

> AI-driven domestic mass surveillance

And it neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance.

A real shame. I thought "Anthropic" was about being concerned with humans, not with "my people" vs. "your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War.

xeonmc 16 hours ago [-]

    > I thought "Anthropic" was about being concerned about humans
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].

[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...

bighead 11 hours ago [-]
Elon, is that you?
manmal 9 hours ago [-]
Is GP wrong?
nubg 16 hours ago [-]
I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.
m000 14 hours ago [-]
How about the present and his personal beliefs?

"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.

anjellow 14 hours ago [-]
Some people can't help but read this like a Ouija board.
9dev 27 minutes ago [-]
Corporate statements like these get written very carefully. You can be certain that not a single word in these sentences has been placed there without considering what they do imply and what they omit.
tyre 52 minutes ago [-]
It’s pretty telling that he didn’t rule out using a Ouija board for fully autonomous military drones or mass surveillance.

Real eyes..

jacquesm 13 hours ago [-]
That all works right up until the United States becomes autocratic and that process is well underway.

So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with good intentions.

tdeck 5 hours ago [-]
The US is already autocratic when it comes to people in many other countries, where the US government didn't like their democratically elected governments and decided to pick a new one for them instead.
estearum 14 hours ago [-]
Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies.
harimau777 2 hours ago [-]
Building autocratic societies is exactly what much of the West, including the US and UK, are doing right now.
estearum 45 minutes ago [-]
And to the extent they're doing that, that's bad.
9dev 23 minutes ago [-]
That makes your argument a "no true Scotsman", though: Western liberal ideals are the right ones, you're just not doing them right!

Much has been said about the purported superiority of western values, but as we've all seen the USA was very quick to get rid of even the slightest notion of these values when Trump promised them some money and a dominant vibe.

The old world is dying, and the new world struggles to be born: now is the time of monsters.

tipiirai 6 hours ago [-]
China's ideals make for better public services and put less pressure on the environment. But China may not be the opposite you are referring to here.
tremon 4 hours ago [-]
> puts less pressure on environment

China has been competing with India for decades for the most-polluted-cities crown, and ranks only slightly below the US and Russia in CO2 emissions per capita. It's also the only large country whose emissions have been growing over the last decade. Where does the idea come from that China somehow puts less pressure on the environment? Less than what, exactly?

maxglute 4 hours ago [-]
>and only slightly ranks below the US and Russia

By "slightly ranks below" you mean ~50-60% per capita.

>China somehow puts less pressure on the environment

PRC renewables at staggering scale.

Last year the PRC brrrted out enough solar panels that their lifetime output is equivalent to more than a year of global oil consumption. The world uses about 40 billion barrels of oil per year, and the PRC's annual solar production will sink about 40 billion barrels' worth of oil emissions over its lifetime. That's a fucking obscene amount of carbon sink, and frankly, at full production, annual PRC solar + wind could on paper displace 100% of oil, 100% of LNG, and a good % of coal (again, annual utilization) once storage is figured out.
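The arithmetic above can be sketched as a quick calculation. Every figure here is a round-number assumption for illustration, not taken from the comment; whether the result lands near the claimed ~40 billion barrels depends heavily on which basis you count (raw primary energy in a barrel vs. the oil you'd burn at ~35% efficiency to make the same electricity):

```python
# Hedged back-of-envelope sketch; all figures below are assumptions.
PANELS_GW = 600          # assumed annual PRC module output, GW nameplate
CAPACITY_FACTOR = 0.15   # assumed fleet-average capacity factor
LIFETIME_YEARS = 25      # assumed panel lifetime
MWH_PER_BARREL = 1.7     # assumed primary energy in a barrel of oil, MWh
HOURS_PER_YEAR = 8760

# Lifetime electricity from one year's panel output, in MWh.
lifetime_mwh = PANELS_GW * 1e3 * CAPACITY_FACTOR * HOURS_PER_YEAR * LIFETIME_YEARS

# Basis 1: raw primary-energy content of a barrel.
barrels_primary = lifetime_mwh / MWH_PER_BARREL

# Basis 2: barrels of oil you'd burn at ~35% efficiency to generate
# the same electricity; roughly triples the figure.
barrels_displaced = lifetime_mwh / (MWH_PER_BARREL * 0.35)

print(f"primary-energy basis: {barrels_primary / 1e9:.0f} billion barrels")
print(f"displacement basis:   {barrels_displaced / 1e9:.0f} billion barrels")
```

Under these assumed inputs, the primary-energy basis gives roughly 12 billion barrels and the displacement basis roughly 33 billion, so the comment's headline number is plausible only on the more generous accounting.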

This, BTW, functionally makes the PRC emissions-negative by a massive margin; arguably it's the only country that is.

It's only broken emissions accounting rules that say the PRC should be penalized for manufacturing renewables while buyers are credited, AND fossil producers like the US are not penalized for extraction, which the US has only increased.

js8 2 hours ago [-]
Also, unlike the US and Russia, China has the green transition as official policy. There are additional savings from total electrification. (I think they also care more about the long term, and, being closer to the equator and the sea, they better understand the consequences of global warming.)
disgruntledphd2 2 hours ago [-]
And they have little to no sources of fossil fuels within their borders (not enough to support their demand, in any case).

It's a great policy, but it also makes sense for geo-strategic reasons (even ignoring the climate issue).

titzer 3 hours ago [-]
> It is misanthropic to build autocratic societies.

It's misanthropic to dismantle democratic societies.

estearum 45 minutes ago [-]
??? I don't know what you're referring to
mackeye 11 hours ago [-]
Western liberal democracies tend to use "autocratic" as an epithet (though I guess there are fewer countries that marker is used against for which it's false now than ~50 years ago). As for the first sentence, "the opposite" of Western liberal ideals will yield 10 answers from 9 people :-)
taurath 12 hours ago [-]
> It's not up to Dario to try to make absolute statements about the future.

That's insane to say, given that he's literally acting in the public sphere as the Mouth of Sauron, telling us how AI will grow so effective as to destroy almost everyone's jobs, and how AGI will take over our society and kill us all.

nubg 12 hours ago [-]
All I'm trying to say is that nobody can predict the future, and therefore making statements that pretend something will be a certain way forever is just silly. It's OK for him to add this qualifier.
harimau777 2 hours ago [-]
That's not how morality works. If mass surveillance is wrong today, then it will be wrong tomorrow.
titzer 3 hours ago [-]
It's not called The Department of War.

It's just incredible to me that people think this is some kind of bold statement defying the administration when it is absolutely filled with small and medium capitulations, laying out in numerous examples how they just jumped right in bed with the military.

And no one seems disturbed by the blatant Orwellian doublespeak throughout. "We thoroughly support the mission of the Department of War"--because War is Peace.

dwringer 1 hours ago [-]
I'm really surprised that didn't jump out at more people; I had to get halfway through the comments to the 27th mention of "Department of War" to find the first comment pointing out that using the name is itself a capitulation.
andrewljohnson 12 hours ago [-]
This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.
lm28469 8 hours ago [-]
He does it all the time when it helps selling his products though, strange
nhinck2 14 hours ago [-]
He does it all the time.
camillomiller 15 hours ago [-]
And yet he's quite happy to make just that kind of statement when it's meant to drum up his own product for investors.
trvz 15 hours ago [-]
He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.
ternwer 15 hours ago [-]
I think he's more pragmatic than that.
samtheDamned 9 hours ago [-]
I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts *America's* warfighters and civilians at risk" (emphasis mine). Either way, I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests, for example).
jazzyjackson 9 hours ago [-]
See also: the entire history of Silicon Valley

When Google Met WikiLeaks is a fun read; billionaire CEOs love to take America's side.

ghshephard 16 hours ago [-]
I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that, right now, they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.
asdff 7 hours ago [-]
The US military cannot even offer those assurances themselves today. I tried to look up the last incident of friendly fire; turns out it was a couple of hours ago today, when the US military shot down a DHS drone in Texas.
blitzar 7 hours ago [-]
Humans malfunction all the time, that is why there is a push to replace them with more reliable hardware.
computerthings 14 hours ago [-]
[dead]
sithamet 7 hours ago [-]
Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) rather than my people dying, or people from any nation that came to help.

That's as Anthropic as it gets, if your concern extends a little bit further than your HOA.

mrtksn 5 hours ago [-]
What do you think will happen once the machines fight it out? Do you think the losing side will say, "oh no, our machines lost, we'd better give our things to the winning machines"?

After your machines are destroyed, you will be fighting machines yourself, or the machines will extract from you and constantly optimize you. They will either exterminate you or keep you busy enough that you have no time for resistance. If you have something of value, they will take it away. The best-case scenario is that they make you join the owners of the machines and keep you busy so that you don't have time to raise concerns about your second-class citizenship.

sithamet 4 hours ago [-]
Humans actually do exactly the same; google Mariupol or Bucha. Machines delay the moment people start dying. Good attempt at reasoning, though.
mrtksn 4 hours ago [-]
I don't disagree; my point is that machines won't change a thing about war, just optimize it.
Quarrelsome 5 hours ago [-]
> would prefer machines fighting (and being destroyed autonomously) rather than my people dying

But the reality is more like the surprise of a bunch of submersible kill bots terrorising a coastal city and murdering people. Even in bot-first combat, at some point one side of bots wins either totally, allowing it to kill people indiscriminately or partially, which forces the team on the back foot to pivot to guerilla warfare and terror attacks, using robots.

sithamet 2 hours ago [-]
Humans actually do exactly the same; google Mariupol or Bucha, or what (human-piloted) drones are doing in Kherson, such that the city is covered in netting. Machines delay the moment people start dying; true not only for military applications, btw.
gambiting 6 hours ago [-]
>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying

What makes you think in any war the machines would stop at just fighting other machines?

kingkawn 5 hours ago [-]
What about machines slaughtering the population without pause?
preisschild 6 hours ago [-]
The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.
Onewildgamer 11 hours ago [-]
Fully autonomous weapons are a danger even if we can reliably make them work, with or without AI.

It essentially becomes a computer against a human. And who is going to stop such software, if and when it is developed, from spreading to the masses? Imagine software viruses/malware that can take a life.

I'm shocked that so few people are even bothered by this. It is really concerning that technology developed for human welfare could become something totally against humans.

TaupeRanger 16 hours ago [-]
What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?
crabmusket 13 hours ago [-]
> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

raincole 13 hours ago [-]
And what? Get nationalized? Get labelled as terrorists?

The US system doesn't empower a company to say no. It should though.

dgellow 7 hours ago [-]
Yes. Force them to do it the hard way and fight through it. Don’t abdicate in advance
ImPostingOnHN 50 minutes ago [-]
Literally Rule 1 On Fighting Tyranny:

> 1. Do not obey in advance.

> Most of the power of authoritarianism is freely given. In times like these, individuals think ahead about what a more repressive government will want, and then offer themselves without being asked. A citizen who adapts in this way is teaching power what it can do.

https://scholars.org/contribution/twenty-lessons-fighting-ty...

harimau777 2 hours ago [-]
Sure, if that's what it takes to do the right thing.
aziaziazi 12 hours ago [-]
You, I, or a company don't need the system's permission to say "no", though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

You own nothing but your opinion. (No offense to personal property aficionados)

neatze 11 hours ago [-]
I don't understand this. For example, what would you have done if you were Ukrainian right now? (Before 2014, arguably the start of the conflict, and after the invasion.)
aziaziazi 10 hours ago [-]
That is an interesting question, very far from my daily concern and brings dilemmas when I think about it. My response would probably be "I don’t know".

However, Anthropic's situation is very different: there's no ongoing invasion of the USA, and the US traditionally attacks other countries once in a while (no judgment), so the weapons upgrade will be "useful" in the field.

pastel8739 8 hours ago [-]
It is of course possible to argue that the reason there is no ongoing invasion of the USA is our continued investment in technology for killing people.
waffleiron 8 hours ago [-]
That's the same type of thinking conspiracy theorists have: the type you can never disprove.
goobatrooba 3 hours ago [-]
I am 100% against militarism and wish we didn't need any of this, but the power balance between Russia and Ukraine, or even Israel and the Palestinians, seems to corroborate the thesis. There would likely be no Ukraine war today if Ukraine hadn't voluntarily given up its nukes three decades ago (an unproven thesis); there was one because Russia thought it could win. The ongoing (after the "ceasefire") Israeli occupation and attacks on the remnants of Palestinian territory show the same. If you are the weaker party and there is a stronger party that wants what you have (or plainly wants to eradicate you), then they'll act on it.
2 hours ago [-]
crises-luff-6b 4 hours ago [-]
[dead]
esseph 8 hours ago [-]
> I don't understand this, for example, what would you have done if you were Ukrainian right now? (before 2014, arguably the start of the conflict, and after the invasion)

There are a lot of well-meaning people who are anti-weapon or anti-violence under any circumstances. The problem is that when those people actually need those weapons and that violence, they are so inadequate at it that they become a liability to themselves and others.

I'm not saying I have or know of a solution, but I remember the old saying (paraphrasing) that it's better to be a warrior working a farm than a farmer working a war.

harimau777 2 hours ago [-]
Yes, that's exactly what I want them to say.
TaupeRanger 2 hours ago [-]
No, you don't. If they develop the safest, most cost-effective version of the technology that the military WILL inevitably use from some company, Anthropic or otherwise, then that's the version of this tech you want them using.
ImPostingOnHN 44 minutes ago [-]
The safest, most cost effective version will not help you when you are their designated target for disagreeing with the regime.

After all, the regime already says such domestic dissenters are terrorists, and have, on multiple recent occasions, justified the execution of domestic dissenters based on that.

goatlover 16 hours ago [-]
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
15 hours ago [-]
lambdaphagy 15 hours ago [-]
There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter 20th century. (Note that WWI by itself wasn't sufficient to prevent WWII!)

You can take issue with that argument if you want but it’s unconvincing not to address it.

horacemorace 14 hours ago [-]
There's also an extremely straightforward argument that if the current crop of authoritarian, dictatorial players had been in power then, the outcome of the latter 20th century would have been much different.
sethammons 5 hours ago [-]
If my grandma had wheels she'd be a bicycle
lambdaphagy 13 hours ago [-]
The guy who authorized the Manhattan project:

- had four [!] terms, a move so anomalous it was subsequently patched by constitutional amendment

- threatened court-packing until SCOTUS backed down and started rubber-stamping his agenda

- ruled entire industries by emergency decree in a way that contemporaries on the left and right compared to Mussolini

- interned 120k people without due process, on the basis of ethnicity

- turned a national party into a personal patronage system

- threatened to override the legislature if it didn’t start passing laws he liked

Not even saying any of this is even good or bad, clearly in the official history it was retroactively justified by victory in WWII. But it’s a bit rich to say that the bomb wasn’t developed under authoritarian conditions.

estearum 14 hours ago [-]
Great, now go ahead and prove that AI also reaches strategic equilibrium. This was pretty much self-evident with nuclear weapons so should probably be self-evident for AI too, if it were true.
idiotsecant 15 hours ago [-]
That's a little bit like saying the bullet in the gun prevented someone from getting shot while playing Russian roulette. We pulled back that hammer several times, and it's pure happenstance that it didn't go off. MAD has that acronym for a reason.
lambdaphagy 13 hours ago [-]
I agree that the risk of an accidental strike was a huge problem with the theory of nuclear deterrence, but the question is: compared to what? In expectation or even in a 1st percentile scenario, was MAD worse than a world where the USSR is a unilateral nuclear power? For that matter, what would it have taken to get a stronger SALT treaty sooner?

I think you need to have people thinking through this stuff at a nuts-and-bolts level if you want to avoid getting dominated by a slightly less nice adversary, and so too with AI. Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

I’d love to know that the “no killbots, come what may” strategy is sound, but it’s not clear that that’s a stable equilibrium.

tw1984 9 hours ago [-]
> Does a unilateral guarantee not to build autonomous killbots actually make anyone safer if China makes no such promise, or does that perversely put us at more risk?

China considers all lethal autonomous weapons "unacceptable", calling all countries to ban it. Countries like the US and India refuse to back such proposals. See China's official stands on this matter below.

https://documents.unoda.org/wp-content/uploads/2022/07/Worki...

I totally understand that you got brainwashed by the media, but hey, you apparently have internet access; why can't you just do a little bit of research of your own before posting nonsense, using imagination as your source of information?

esseph 8 hours ago [-]
michelsedgh 15 hours ago [-]
So would you have preferred the Nazis to develop the most powerful weapons and win the world war? (Which is what they were trying to do.)
ImPostingOnHN 42 minutes ago [-]
No, that's precisely why I'm opposed to it happening here, and why I prefer the idea of Anthropic limiting their contribution to creating such a scenario.
anonym29 15 hours ago [-]
If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

tsimionescu 4 hours ago [-]
Why are you assuming that people in China, Iran, Russia, etc. are not having these exact same conversations? A powerful example from the USA, along with some belief that the USA will not easily field this technology, might help inspire them to abstain as well.

However horrific the regimes in these countries are, the people behind the technology there are just as likely to be intelligent and moral human beings as the people in the USA and Europe working on it.

andsoitis 15 hours ago [-]
> If Anthropic does give the DoD what they want, does that magically stop China, Iran, Russia, etc from advancing in AI arms development?

No

> If Anthropic doesn't give the DoD what they want, does that mean that China, Iran, Russia, etc magically leapfrog not only Anthropic, but the entire US defense industry, and take over the planet?

The risks are high, so if you're the US, you want a portfolio of possible winners. The risks are too high to not leverage all the cutting edge AI labs.

gbear605 11 hours ago [-]
Anthropic was already giving them that. It’s not like they need domestic mass surveillance or autonomous kill bots to have a portfolio of possible winners. If the goal is to keep the US competitive in AI, this whole process was actively unhelpful. Honestly more helpful for our adversaries than for us.
estearum 14 hours ago [-]
With the benefit of hindsight, we know the Nazis were in fact not racing to develop the Bomb. A reasonable assumption to have oriented around at the time, though.
michelsedgh 14 hours ago [-]
It's not just the atomic bomb I'm talking about: the USA had the best production of fighter jets, bombers, all kinds of communication and deciphering technology, all the ammunition. All of those together beat the Nazis, and the Nazis were trying their best to develop better and more advanced technologies than the USA!
13 hours ago [-]
mothballed 15 hours ago [-]
Did WMDs have a meaningful effect on stopping the Nazis? I thought the bomb wasn't dropped until after they surrendered.
anonym29 15 hours ago [-]
The only two atomic weapons ever deployed weren't even targeting Nazi Germany, but Japan. Dark but true: they were both deliberately and knowingly targeted at civilian populations.
cies 14 hours ago [-]
And they inflicted less damage than the firebombing campaigns on civilian population centers that were carried out alongside the A-bombs.

The A-bombs were not the worst part of the attack on Japan, and thus were not "needed to end the war". They were part of marketing /the/ superpower.

estearum 14 hours ago [-]
"Needed to win the war," no. The US could've continued to firebomb and then follow with a land invasion, which would've killed both more Japanese and more Allies.

Was it the best path to end the war? Certainly.

The modern argument around targeting civilians or not was not even relevant at the time due to the advent of strategic bombing, which itself was seen as less-horrific than the stalemated trench warfare of WW1. The question was only whether to target civilian inputs to the military with an atomic weapon (and hopefully shock & awe into submission) or firebomb and invade.

archagon 15 hours ago [-]
Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?
andsoitis 15 hours ago [-]
> I absolutely don’t want tech companies to use the money I pay them to harm people.

Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.

I am unaware of any tech company that directly does physical warfare on the battlefield against humans.

tbossanova 12 hours ago [-]
Another example: the companies that make drinkable water also supply militaries. But there might be a difference between supplying drinking water and making AI killing machines.
andsoitis 12 hours ago [-]
> making AI killing machines

What's an example of a company making killing machines that a typical consumer, or someone on HN, might be buying products or services from?

eichin 9 hours ago [-]
The easy answer is Westinghouse (look for the youtube short about "things that spin"...)
archagon 8 hours ago [-]
As far as I know, Apple does not supply their chips for military use.
johnisgood 12 hours ago [-]
Time to stop paying your taxes. :P
scottyah 15 hours ago [-]
Because it's painfully short-sighted, or maliciously ignorant.
archagon 15 hours ago [-]
No, it’s just that I don’t want the money I spend to have blood on it. Trivially simple.
TaupeRanger 2 hours ago [-]
Also trivially naive and useless. Evil exists. Conflicts will happen. If evil was at your doorstep, threatening people you love, you absolutely DO want money you spend to have blood on it, if it means keeping yourself and your loved ones safe. Trivially simple.
NewsaHackO 15 hours ago [-]
What if I told you that it's way too late for that?
archagon 8 hours ago [-]
Well, we have to try to live as virtuously as we can using the means and remedies available to us.
6 hours ago [-]
skeledrew 15 hours ago [-]
Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether they agree with what they're saying or not, they're walking on eggshells.
nielsole 10 hours ago [-]
You gotta keep in mind that the primary goal of this statement is to avert the invocation of the Defense Production Act.

He is trying to win sympathies even (or especially?) among nationalist hawks.

rafark 14 hours ago [-]
I said exactly this a few days ago elsewhere. It's disappointing that they (and often other American companies) seem to restrict their "respect" and morals to Americans only. Or maybe it's just semantics, or context, because the topic at hand is about Americans? I don't know, but it gives off "my people are more important than your people", exactly as you said in your last paragraph.
6 hours ago [-]
asaddhamani 9 hours ago [-]
They also posted on Instagram saying autonomous killing would hurt Americans. So non-American people don't matter?
Aeolun 8 hours ago [-]
Is it seriously called the Department of War now? Did they change that from DoD?
Sebguer 8 hours ago [-]
Illegally, but yes.
remarkEon 9 hours ago [-]
As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. If you don't like what it could potentially be used for, or are having second thoughts about being involved in war-making at all, don't sell it; that appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.

On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.

zaptheimpaler 8 hours ago [-]
They didn't sell it with no strings attached; they sold it with explicit restrictions in their contract with the DoW, and the DoW agreed to that contract. Their mistake was assuming they operate in a country where the rule of law is respected, which is clearly not the case anymore given the thousands of violations in the last year.
remarkEon 8 hours ago [-]
Contracts evolve; don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back to renegotiate terms.
orochimaaru 16 hours ago [-]
They're already being used by the military today, so they are never going to be against mass surveillance. They can scope that to domestic mass surveillance, though.
yujzgzc 15 hours ago [-]
> the door is open for this after AI systems have gathered enough "training data"?

Sounds more like the door is open for this once reliability targets are met.

I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.

01100011 12 hours ago [-]
We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.
kgwxd 12 hours ago [-]
But then a person can be blamed for the outcome. We can't have that!
altpaddle 16 hours ago [-]
Unfortunately, I think the writing is clearly on the wall: fully autonomous weapons are coming soon.
not_the_fda 15 hours ago [-]
And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over: they can point those weapons at the populace at the flip of a switch.
testdelacc1 7 hours ago [-]
The parallel for this is when Rome changed from only recruiting citizens for their army to recruiting anyone who could pass the physical. They had no choice, and the new armies were much better at fighting. But the soldiers also didn’t have the same stake in the republic that voting citizens did.

Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.

A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.

refurb 14 hours ago [-]
The original Terminator movie doesn’t seem so far fetched now (minus the time travel).
computerthings 14 hours ago [-]
[dead]
levocardia 16 hours ago [-]
Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.
tempestn 15 hours ago [-]
If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.

I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.

Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.

scottyah 15 hours ago [-]
Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed ones. If a landmine could distinguish between a real target and an innocent civilian 50 years later, it'd be a lot better.
mothballed 15 hours ago [-]
A landmine blowing up the enemy civilian 50 years later is probably seen as an advantage by the force deploying them. A bit like "salting the earth."
scottyah 14 hours ago [-]
Depressingly true.
jacquesm 13 hours ago [-]
Many landmines disarm after a while.
kgwxd 12 hours ago [-]
It's weird that people still think that the people whose job it is to kill people, or make things that kill people, really care about people more than the killing part. They don't give a shit who blows up, as long as no one comes knocking on their door about it.
scottyah 15 hours ago [-]
It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.

Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.

shinryuu 14 hours ago [-]
There are also good reasons for a lot of countries banning mines. https://en.wikipedia.org/wiki/Ottawa_Treaty

Notably, the USA is not one of those signatories.

urikaduri 15 hours ago [-]
The Gandhi of the corporate world is yet to be found
scottyah 15 hours ago [-]
Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.
Throwagainaway 7 hours ago [-]
I think I am paraphrasing some Hacker News discussion I saw about it before, but the problem with Gandhi was that he was so focused on idealism that it somehow translated into a utilitarian line of thinking about this, which is of course a very despicable and vile thing for him to do.

There have been quite a lot discussions about this itself on Gandhi here on Hackernews as well.

Gandhi himself became the face of the satyagraha movement, considering he started it, but that movement only had value because of the many important people who joined in.

Here is a quote from Martin Luther King Jr. about satyagraha that I found on Wikipedia:

Like most people, I had heard of Gandhi, but I had never studied him seriously. As I read I became deeply fascinated by his campaigns of nonviolent resistance. I was particularly moved by his Salt March to the Sea and his numerous fasts. The whole concept of Satyagraha (Satya is truth which equals love, and agraha is force; Satyagraha, therefore, means truth force or love force) was profoundly significant to me. As I delved deeper into the philosophy of Gandhi, my skepticism concerning the power of love gradually diminished, and I came to see for the first time its potency in the area of social reform. ... It was in this Gandhian emphasis on love and nonviolence that I discovered the method for social reform that I had been seeking.[25]

It would be better to wish for more satyagrahis by name, but I don't think the Western media would catch on to it.

Ghaffar Khan, Sarojini Naidu, and Vinoba Bhave are all people who I think had simple life histories, being from different religions and castes and genders while adhering to the philosophy of satyagraha.

That being said, satyagraha might not work in the current context, because Britain was only able to rule India with the help of Indians, which is why the satyagraha movement was so successful. But if the govt can get its hands on autonomous drones capable of killing civilians and on mass surveillance, then satyagraha might not work as well in the near future

(the two things Anthropic is refusing to provide to the DOD, per the article itself)

I don't think Anthropic is a great company, it certainly has its flaws, but I do think it is very admirable of them to stand firm even when the govt is essentially saying to follow orders or it will literally kill the business with the 3-4 national security laws it is proposing to invoke against Anthropic.

I do urge people to say satyagraha or to mention other peaceful protests, because whenever people talk about Gandhi now, this discussion is bound to come up, which really distracts from the original point at times. It took the collective efforts and the blood of so many Indian leaders for India to gain independence.

urikaduri 4 hours ago [-]
Indeed, Gandhi's philosophy was far more interesting than his various character flaws. Nobody should learn from Gandhi to be an anti-vaxxer or a creep, but people should learn about satyagraha and appreciate the immense dedication he put towards it. It's like focusing on Newton being a cruel person to the point of ignoring his scientific genius.

But the point of my cynical comment was that Gandhi's idealism is so far from the profit-centered mentality of big tech that it's almost unimaginable that the CEO of such a company would stick to pacifism.

jamesmcq 15 hours ago [-]
So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?

Odd.

serf 15 hours ago [-]
do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance?

a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.

remarkEon 9 hours ago [-]
I know what point you are trying to make, but these decisions are functionally equivalent.

Striking a building with ordnance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.

ImPostingOnHN 36 minutes ago [-]
> these decisions are functionally equivalent

> I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar

The parameters are similar, but the effects are different. That's what makes the decision not functionally equivalent. A functionally equivalent decision would have the same functional result.

To put a point on it: we are allowed to, and indeed should, consider the effects of a decision when making it.

jamesmcq 15 hours ago [-]
They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work.

Yes, if you fuck up some white collar work, people will die. It’s irresponsible.

NewsaHackO 15 hours ago [-]
>Yes, if you fuck up some white collar work, people will die. It’s irresponsible.

A lot of the work in those sectors is not what is being targeted for fully autonomous replacement. It likely would be in the future, though.

howardYouGood 14 hours ago [-]
[dead]
gedy 15 hours ago [-]
Shh! there's a lot of money riding on this bet, ahem.
nhinck2 14 hours ago [-]
> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance

You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.

sithamet 7 hours ago [-]
What a shame, indeed. The Chinese and Russians would never do something like that and hurt either their own or your people, too
aidis9136264 11 hours ago [-]
Enemies will have AI powered weapons. We need to be at the cutting edge of capability.
Throwagainaway 7 hours ago [-]
I don't know where you might get your info from, but Anthropic has only refused two things: autonomous AI killing humans without anyone pressing a button/bearing some liability, and mass surveillance.

I don't think that your point makes sense especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.

I don't think the people operating the drones are a bottleneck for a war between your country and its enemies, but rather a bottleneck for a war between your country and its own people. The bottleneck is morality: you would find fewer people willing to commit the same atrocities against their own community. But terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the govt., and THIS is the core of the argument, because Anthropic has safeguards to reject such orders and the DOD is threatening to essentially kill the company by invoking many laws to force it to remove them.

MattDamonSpace 10 hours ago [-]
The sentence prior explicitly says this. There’s no dishonesty here.

“Even fully autonomous weapons (…) may prove critical for our national defense”

FWIW there’s simply no way around this in the end. If your enemy even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.

blitzar 7 hours ago [-]
To stop a bullet flying at you, you need a shield, not another bullet.
mgraczyk 15 hours ago [-]
Anthropic doesn't forbid DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful and I'm fine with our military doing it
nextaccountic 14 hours ago [-]
If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?

Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?

(Note, I myself am not a US citizen)

Anyway, regardless of that, the established practice is for the Five Eyes countries to spy on each other's citizens and share the results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]

[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...

[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...

mgraczyk 14 hours ago [-]
This isn't about privacy rights, it's about war

I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance

I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

nextaccountic 14 hours ago [-]
But.. the US doesn't perform mass surveillance on foreign people only when it's at war. Nor does it perform mass surveillance only on adversarial nations it could potentially be at war with.

This absolutely is about privacy.

> I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned

Those foreign governments are spying on Americans and then sharing the results with the US government because the US government is misaligned with the interests of its own people

remarkEon 9 hours ago [-]
The United States gets to spy on countries when it's in the interest of the United States to do so. This isn't complicated. We get to spy on quite literally whoever we want abroad, within various legal and well-established parameters, at the risk of offending the governments of the spied-on. "It's only okay for the United States to spy on foreigners when they're in a shooting war with them" is silly.
calgoo 8 hours ago [-]
So you are saying it's OK to spy on others because the US says it's fine?

Maybe the others here are not happy that this company is supporting a fascist government in committing acts of international aggression against other countries, acts which have been condemned by the majority of countries around the world.

remarkEon 8 hours ago [-]
I'm explaining reality to you. Real life is not a Marvel comic book movie.
calgoo 6 hours ago [-]
That is great, and I know this is not some crappy Marvel comic. I'm talking as a European who will be spied upon with this tooling, because we are not domestic. He seems perfectly fine with that, as well as with using it in other military conflicts that have been caused by this government's greed.
827a 10 hours ago [-]
If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines; you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen; but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
RGamma 6 hours ago [-]
Given how unstable and aggressive the US government is at the moment, others having these weapons seems like a good idea for balance. I'm not sure you are aware of the damage Trump is inflicting on international relations.

But personally I wouldn't like to die because some crackpot with the right connections can will the rest of the world to that fate, no matter their affiliation. This escalation of destructive power, and the carelessness with which it is justified, is pretty disheartening to see. Good times create bad people?

827a 34 minutes ago [-]
Reading comprehension check: I never stated that others shouldn't have the weapons. In fact, I stated what you are stating: that it is likely others will have the weapons, and for the sake of balance the West will be in a better place if the US also has them.
RGamma 15 minutes ago [-]
My primary point was that reducing the friction between will (e.g. wanting Greenland) and reality (sending an autonomous drone swarm) is a really terrible capability for the US to possess under these elites. This technology needs to spread fast if classic non-proliferation is unworkable.

We seem to be unable to stop building the weapon, we seem unable to stop handing it over to morons, and I should expect these morons to not fire it?

Then again, it's called MAD for a reason...

gizzlon 8 hours ago [-]
> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.

Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict

827a 37 minutes ago [-]
Reading comprehension check: I did not say that it reduced the risk of armed conflict. I said that it reduced the death and human suffering from armed conflict.

Between the years of 1850-1950, an estimated 150M humans died (and many more permanently disabled) due to armed conflict (~1.5M/year). Between 1950-today: closer to 10M (~132k/year). The majority of those came from the Vietnam and Korean wars. If you limit the window to after 2000: only ~2M deaths, or ~78k/year. We carry bigger sticks than ever, and those sticks allow us to execute more strategic, incapacitating strikes, or stop conflict from even happening in the first place.
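Back-of-the-envelope, the per-year rates quoted above roughly check out (a minimal sketch; the death totals are the approximate figures cited in this comment, not independently verified data, and "today" is assumed to be ~2026):

```python
# Rough per-year war-death rates from the estimates quoted above.
def deaths_per_year(total_deaths: float, start_year: int, end_year: int) -> float:
    """Average deaths per year over a period, given an estimated total."""
    return total_deaths / (end_year - start_year)

print(round(deaths_per_year(150e6, 1850, 1950)))  # ~1.5M/year
print(round(deaths_per_year(10e6, 1950, 2026)))   # ~132k/year
print(round(deaths_per_year(2e6, 2000, 2026)))    # ~77k/year
```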

helaoban 16 hours ago [-]
All of these problems are downstream of the Congress having thoroughly abdicated its powers to the executive.

The military should be reined in at the legislative level, by constraining what it can and cannot do under law. Popular action is the only way to make that happen. Energy directed anywhere else is a waste.

Private corporations should never be allowed to dictate how the military acts. Such a thought would be unbearable if it weren't laughably impossible. The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that. Or the models could be developed internally, after having requisitioned the data centers.

To watch CEOs of private corporations being mythologized for something that a) they should never be able to do and b) are incapable of doing is a testament to how distorted our picture of reality has become.

techblueberry 12 hours ago [-]
The private corporation is not dictating to the military, it's setting the terms of the contract. The military is free to go sign a contract with a different company with different terms, but they didn't, and now they want to change the terms after the contract was already signed. No mythologization needed, just contract law.
7 hours ago [-]
ricardobeat 16 hours ago [-]
> The technology can just be requisitioned

During a war with national mobilization, that would make sense. Or in a country like China. This kind of coercion is not an expected part of democratic rule.

wrqvrwvq 14 hours ago [-]
It has always been a part of democratic rule, in peacetime and war. All telcos share virtually all of their technology with the government. Governments in Europe and elsewhere routinely requisition services from many of their large corporations. I think it's absurd to think LLMs can meaningfully participate in real-world command-and-control systems, and the government already has access to ML-enhanced targeting capabilities. I really have no idea what DoD normies think of AI, other than that it's infinitely smarter than them, but that's not saying much.
not_that_d 8 hours ago [-]
I would like to see proof of this happening in Europe.
soderfoo 6 hours ago [-]
If you're referring to telcos sharing their tech with the government, there are a few examples of Ericsson working with the Swedish military:

> Brigadier-General Mattias Hanson, CIO, Swedish Armed Forces, says: “Strengthening Sweden’s militarily and acting as part of a collective defense requires us to increase our defensive capabilities. We need to utilize the latest technology and all the innovative power of the Swedish private sector. Sweden has unique skills and capabilities in both telecoms and defense technology..." [0]

This is just one quick example I could find.

[0] https://www.ericsson.com/en/news/2025/6/ericsson-5g-connecti...

helaoban 15 hours ago [-]
The question of whether or not the government should be able to use AI for targeting without the involvement of humans is a wartime question, since that is the only time the military should be killing people.

Under such a scenario, requisition applies, and so all of this talk is moot.

The fact that the military is killing people without a declaration of war is the problem, and that's where energy and effort should be directed.

Edit:

There's a yet larger question of whether any legal constraints on the military's use of technology make sense at all, since any safeguards will be quickly abandoned if a real enemy presents itself. As a matter of natural law, no society will willingly handicap its means of defense against an external threat.

It follows then that the only time these ethical concerns apply is when we are the aggressor, which we almost always are. It's the aggression that we should be limiting, not the technology.

3 hours ago [-]
tw1984 9 hours ago [-]
> an expected part of democratic rule.

give yourself a break. how much of your fancy democratic rule still holds under Trump?

blitzar 7 hours ago [-]
> Private corporations should never be allowed to dictate how the military acts.

The military should never be allowed to dictate how Private corporations act

jobs_throwaway 13 hours ago [-]
> The technology can just be requisitioned, there is nothing a corporation or a private individual can do about that.

I strongly doubt this is true. I think if you gave the US government total control over Anthropic's assets right now, they would utterly fail to reach AGI or develop improved models. I doubt they would be capable even of operating the current gen models at the scale Anthropic does.

> Or the models could be developed internally, after having requisitioned the data centers.

I would bet my life savings the US government never produces a frontier model. Remember when they couldn't even build a proper website for Obamacare?

qup 1 minutes ago [-]
> Remember when they couldn't even build a proper website for Obamacare?

With a massive budget, too. Hundreds of millions iirc.

It felt like a website that the small web-dev shop I worked for could build without much problem in a couple months.

We didn't have 200 layers of bureaucracy, though.

That said, I don't doubt the military could take their current tech and keep it running. It's far different from the typical grift of government contractors.

tootie 15 hours ago [-]
It's also downstream of voters who voted in a president who promised to be dictatorial after failing at an attempted insurrection. We need to deprogram like 70M very confused people.
raincole 5 hours ago [-]
> We need to deprogram like 70M very confused people

With this mindset the said group will quickly grow to half of the US population.

b40d-48b2-979e 14 minutes ago [-]
You seem angry about being called out here. No, it won't grow to half the population since the existing support keeps shrinking over time.
helaoban 11 hours ago [-]
You should be asking why 70 million people voted the way they did in spite of the events you describe.

I don't think there's been a greater indictment of a political program (the one you likely subscribe to) in history than Trump's landslide victory in 2024.

You guys used to call deprogramming by another name, I think it was called "re-education". Maybe you should sign up for your own class.

matwood 8 hours ago [-]
> You should be asking why 70 million people voted the way they did in spite of the events you describe.

In part the propaganda machine that started in the 80s with AM talk radio, culminating in algorithmic feeds today.

helaoban 7 hours ago [-]
If that is the case, you have to explain why right wing propagandists have been so much more successful than left wing ones.
sethammons 5 hours ago [-]
That seems relatively straightforward, so likely incomplete: the left is a collective of various interests that often don't align internally and the right has very consistent and largely aligned interests. One of those is easier to steer. Another facet could also be education levels. As they say, a lie can get across town before the truth has its pants on. Being educated takes time and effort, and the educated lean left.
titzer 3 hours ago [-]
They are also absolutely shameless about lying and feel no obligation to stick to facts or data, but rather appeal to and cultivate ignorance, binary thinking, fear, us-versus-them thinking, and scapegoating. In short, their propaganda is more effective because they lean into it being propaganda.
NekkoDroid 5 hours ago [-]
My guess is lack of morals
stackbutterflow 5 hours ago [-]
Because it's easy when you don't let facts block you. Spread lie number 1 on Monday morning, lie number 2 in the afternoon, lie number 3 the next day, and do that for years and decades.

Whenever someone spends the time, and it takes a long time, to correct you, laugh, mock them, spew a few more lies.

And it's easy to do when the rich, the owner class, side with you, because they buy newspapers, websites, and ads, which you can't do if you lean left, because acquiring money at all costs is not a priority of left-wing people.

kalkin 10 hours ago [-]
I'm curious for your understanding of why Trump won in 2024. If I'm understanding right, you think it was because American voters were rejecting Maoism ("it was called re-education"), to which you think the previous commenter likely subscribes, and which voters associated with Harris/Walz? But I suspect I'm not getting it quite right, and it would be helpful if you would spell out what you mean, rather than just relying on allusion.

(I myself don't have a clear answer to why Trump won, but I don't think it speaks well to the decision-making of the median voter on their own terms, whatever those were, that Trump's now so unpopular despite governing in pretty much the way he said he would.)

helaoban 7 hours ago [-]
I don't want to ascribe any particular political beliefs to the commenter; the quip about re-education was somewhat of a joke, given the irony of somebody arguing against dictatorship by invoking mass "deprogramming". But many a true word is spoken in jest.

There are no real Maoists or true communists in the US anymore, at least not enough to constitute meaningful political forces. To the extent they exist they are irrelevant, and one can argue further that no true left remains in the US at all.

As for my analysis of the Trump phenomenon, I only have intuitions and biases to offer, so caveat lector.

I don't think it's particularly mysterious. The general perception is that the American left has made identity politics and social justice its main political and social programs, to the detriment of basic governance, most importantly the economy and security, thereby breaking the social contract.

You cannot be a party that aggressively defends and promotes the interests of minority classes at the expense of the majority without losing the support of the majority. In some cases, these minorities are so small as to border on the absurd.

Something like 0.6% of people identify as transgender in the United States(1). They are vastly over-represented in the media, in left wing political programs, and in the general zeitgeist at large relative to their population size. The same goes for the LGBT population, which represents maybe 10% of the US population (and that's a liberal estimate).

Try as you might, you cannot escape the cold, hard fact that 60% of the US population is white, with something closer to 70% identifying as white or partly white. 90 percent of that group is going to be straight.

The US middle and working classes still really haven't recovered from the financial crisis of 2008, the aftermath of which precipitated a huge transfer of wealth from these classes to the upper class, a trend that accelerated during the pandemic.

So you have a majority of the population who are reeling from a devastating loss of wealth, station, and status, unable to keep pace with inflation, watching one of the two main political parties aggressively promote the interests of a tiny minority at their expense, or at least that is the perception.

Putting aside the nature of the minorities in question, the subservience of the political class to a minority of the population has another name: elitism. The natural response to elitism is populism, which is what we are seeing.

The protection of minority rights is a noble cause, but it's primarily a civil rights issue, and the focus should be on making sure those classes are treated equally under the law. The goal should not be the elevation of their social and cultural station above the majority.

Biden, and then Harris/Walz, are kind of the ultimate expression of this left-wing, elitist decadence. Biden put a man who wears stilettos and dresses to work in charge of nuclear waste at the Department of Energy. People can rage at me all they want for that description, but that is what the majority of Americans perceive. Again, putting aside any questions of morality, it is political suicide.

Tolerance of mass border crossings was probably a more directly fatal error, representing a final decoupling of the Democratic party from its ideological roots in the labor movement, which was always militantly against illegal immigration. Again, the perception is that the interests of a minority (in this case migrants) come before the interests of the majority. In this case the minority are not even American citizens.

There's a lot more to say on this topic, and I'm sure you can find more persuasive analyses from better sources, but these are some of my intuitions.

Thanks for coming to my TED Talk.

1. https://williamsinstitute.law.ucla.edu/publications/trans-ad...

tootie 2 hours ago [-]
There was no landslide. Trump got 49.9% of the vote. And it was after his attempted insurrection to overturn a valid election in which he was soundly rejected. He's never received 50% of the vote despite his relentless lies about voter fraud.

I'm not upset at people for having a differing opinion or being upset at some economic conditions attributable to Democrats, but rather at their persistent belief in provably false information like the relative danger of immigrants, the causes of climate change, vaccine safety, election security, or whether or not a particular ethnic group is eating their pets. This isn't a matter of opinion; it's a matter of observable reality and fundamental human morality.

gcbirzan 7 hours ago [-]
> Trump's landslide victory in 2024.

What are you talking about?

helaoban 6 hours ago [-]
If you want to challenge a point, then challenge it. Don't cower behind ambiguous snark.
titzer 3 hours ago [-]
It wasn't a landslide.

It's on you to argue it was, e.g. by comparing it to other clear landslide victories like Reagan in 1984. Truth is that 2024 the final popular vote gap was 1.5%, compared to 4.5% for 2020, -2.0% for 2016 (yeah, really), 3.9% in 2012, 7.28% in 2008, and so on.

6 hours ago [-]
vonneumannstan 41 minutes ago [-]
This is just a weird Trump talking point. This situation is unprecedented on many levels. The pentagon already had a signed contract with these stipulations and wanted to unilaterally renegotiate with Anthropic under threat of deeming them a foreign adversary and destroying their business if they didn't accept the DoD demands. It's totally absurd to turn this around on Anthropic and paint them as trying to determine US Military policy.
dartharva 12 hours ago [-]
> The military should be reigned in at the legislative level, by constraining what it can and cannot do under law.

Is there an example of such a system existing successfully in any other country of the world that has a standing army?

helaoban 11 hours ago [-]
I think any such examination of a military that doesn't actually fight wars is meaningless. The question can only be really asked of a handful of countries.
einpoklum 2 hours ago [-]
> Congress having thoroughly abdicated its powers to the executive.

Good thing the US is led by such figures as Donald Trump or Joseph Biden, stalwart trustworthy men with their hands firmly on the wheel.</sarcasm>

jjcm 16 hours ago [-]
This is the strongest statement in the post:

> They have threatened to remove us from their systems if we maintain these safeguards; they have also threatened to designate us a “supply chain risk”—a label reserved for US adversaries, never before applied to an American company—and to invoke the Defense Production Act to force the safeguards’ removal. These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

This contradictory messaging puts to rest any doubt that this is a strong-arm by the government to allow any use. I really like Anthropic's approach here, which is to in turn state that they're happy to help the government move off of Anthropic. It's a messaging ploy for sure, but it puts the ball in the current administration's court.

panarky 13 hours ago [-]
Does the Defense Production Act force employees to continue working at Anthropic?
nerdsniper 13 hours ago [-]
No. It really only binds the corporation, but it does hold the executives/directors personally responsible for compliance, so they'd be under a lot of pressure to figure out how to fix enough leaks in the ship to keep it afloat. Any individual director/executive could quit with little issue, but if they all did in a way that compromised the corporation's ability to function, the courts could potentially use injunctions/fines/jail time to compel compliance from corporate leaders.

Also there's probably a way to abuse the Taft-Hartley Act beyond current recognition to force the employees to stay by designating any en-masse quitting a "strike / walk off / collective action". The consequences to the individuals for this are unclear - the act really focuses on punishing the union rather than the employees. It would take some very creative maneuvering to do anything beyond denying unemployment benefits and telling the other big AI companies (Google / ChatGPT / xAI) to blacklist them. And probably using any semi-relevant three letter agency to make them regret their choice and deliver a chilling effect to anyone else thinking of leaving (FBI, DHS, IRS, SEC all come to mind).

If the administration could figure out how to nationalize the company (like replace the leadership with ideologically-aligned directors who sell it to the government) then any now-federal-employees declared to be quitting as part of a collective action could be fined $1,000 per day or incarcerated for up to one year.

It’s worth noting that this thesis would get an F grade at any accredited law school. Forcing people to work is a violation of the 13th amendment. But interpretations of the constitution and federal law are very dynamic these days so who knows.

pnt12 7 hours ago [-]
The thesis could get an F at law school, but it is not guaranteed that the government will act lawfully. It's useful to think about what the administration can do, legal or not, especially when it faces little challenge when acting illegally.
tosapple 12 hours ago [-]
[dead]
fluidcruft 12 hours ago [-]
Maybe Anthropic could replace its employees with AI. Unlikely the admin is going to enjoy setting precedent that employees are protected against being replaced by AI.
SilverElfin 12 hours ago [-]
[flagged]
zombot 2 hours ago [-]
> fake wars

Once a war has started, it won't be fake any more.

> they’ll definitely declare wars to extend the presidency.

You don't exchange the Fraudster in Chief while at war, so they do want a war. Any war. But I have the strange impression that von Clownstick doesn't want to be seen as having started it by himself.

deadbabe 12 hours ago [-]
Presidency can’t be extended by wars.
jaegrqualm 12 hours ago [-]
FDR's tenure prompted the 22nd Amendment to exactly that effect, but it's not like this administration hasn't used a legal loophole before.

Perhaps there's a war, that a misguided congress won't declare as such, and a certain vice president that runs for president, with a certain someone as his vice president...

PontifexMinimus 12 hours ago [-]
Not constitutionally, at any rate.
SlightlyLeftPad 12 hours ago [-]
What would happen if he tried it by simply not vacating at the end of his term, was challenged in court, and then shut down by his own Supreme Court? I mean let's be real, all it really takes is him not giving up the White House. I sometimes wonder.
goatlover 12 hours ago [-]
Steve Bannon advised Trump to do this in 2020. Question is what would the Secret Service and Pentagon do once the election is certified for the winning candidate? If their loyalty remains to the Constitution, Trump would be forcibly removed.
krapp 12 hours ago [-]
We went through this when it looked like he might not leave last time. What happens is the Marines show up and politely throw his ass to the curb.

You do not under any circumstances gotta hand it to the American military but they do seem unwilling to play a role in Trump's let's say extraconstitutional ambitions. At least a junta doesn't seem likely. Without the military behind him he's just a senile old pedophile. What's he going to do, lock himself into the Oval Office?

wildzzz 11 hours ago [-]
The military is the one drone striking boats in the Caribbean. The military invaded a foreign country we are not at war with to kidnap its leader. The military dropped bombs on a foreign country we are not at war with. The military is patrolling the streets of DC and other cities. The military is the one spending the money on new immigrant detention centers. I fail to see how they are standing up to Trump's illegal acts. I'm not 100% sure the White House Marines will just throw Trump to the curb if Congress manages to certify the election in favor of someone else.
krapp 1 hours ago [-]
The military drone-struck civilians in Obama's day; they did Abu Ghraib and Agent Orange and countless other war crimes. But aiding a President in a coup would be beyond the pale. Maybe I'm being naive, but I do think a lot of soldiers would refuse to do that even if they could contextualize and compartmentalize everything else.
SilverElfin 10 hours ago [-]
See https://www.culawreview.org/current-events-2/the-22nd-amendm...

Specifically section on martial law in wartime context. It’s not very clear but I just feel like the norms and laws will be stretched or broken, as the administration has already done numerous times.

vlovich123 12 hours ago [-]
… not yet. The problem with a norm breaking presidency like Trump’s and the GOP power structure is that no norm is safe, including elections.
12 hours ago [-]
NullPrefix 12 hours ago [-]
Zelensky's presidency was supposed to end a couple of years ago. Would it be different in the USA?
Tostino 12 hours ago [-]
Different constitutions. Were you trying to muddy the waters, or are you just ignorant of the details?
0ckpuppet 12 hours ago [-]
Yes,
0ckpuppet 12 hours ago [-]
[flagged]
15 hours ago [-]
JumpCrisscross 13 hours ago [-]
> this is a strong-arm by the government to allow any use

It’s a flippant move by Hegseth. I doubt anyone at the Pentagon is pushing for this. I doubt Trump is more than cursorily aware. Maybe Miller got in the idiot’s ear, who knows.

Quarrelsome 5 hours ago [-]
Flippant? It's aggressive, belligerent and entitled. I'm not seeing "flippant". Unless this is some sort of weasely "oh we only threatened them a bit" bullshit. This is about entitled pricks in government who consider their temporary democratic mandate a carte blanche for absolutism.
altacc 6 hours ago [-]
Trump/Miller/whomever don't need to be actively involved in every decision. They have defined an approach of strong-arm problem solving and weaponisation of the government that anyone who works for them is implicitly allowed to use. The supposed controls that were meant to prevent this have crumbled or aligned.
cmrdporcupine 13 hours ago [-]
It definitely has the aroma of either Bannon or Miller or both.
0xDEAFBEAD 12 hours ago [-]
Believe it or not Steve Bannon is quite concerned about AI development:

>Over on Steve Bannon's show, War Room -- the influential podcast that's emerged as the tip of the spear of the MAGA movement -- Trump's longtime ally unloaded on the efforts behind accelerating AI, calling it likely "the most dangerous technology in the history of mankind."

>...

>"You have more restrictions on starting a nail salon on Capitol Hill or to have your hair braided, then you have on the most dangerous technologies in the history of mankind," Bannon told his listeners.

https://abcnews.com/US/inside-magas-growing-fight-stop-trump...

cmrdporcupine 51 minutes ago [-]
Him being "concerned" about it doesn't mean he doesn't want to bring Anthropic to heel.
xpe 13 hours ago [-]
> It’s a flippant move by Hegseth.

Care to convert this into a prediction?: are you predicting Hegseth will back down?

> I doubt anyone at the Pentagon is pushing for this.

... what does this mean to you? What comes next? As SecDef/SecWar, Hegseth is the head of the Pentagon. He's pushing for this. Something like 2+ million people are under his authority. Do you think they will push back? Stonewall?

One can view Hegseth as unqualified, even a walking publicity stunt while also taking his power seriously.

tz1490 12 hours ago [-]
It matters because the whole media is selling this as a Pentagon initiative, while probably 75% in the Pentagon think this is snake oil just like the previous Microsoft VR goggles.

If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028. Soldiers literally dragged their feet at the glorious Trump military parade, when they walked disinterested and casually instead of marching.

xpe 11 hours ago [-]
> If they don't oppose directly, large bureaucracies know how to drag their feet until the midterms at least, if not until 2028.

While I grant the spirit of this point, I don't think it applies to this situation. The "bureaucratic resistance" explanation doesn't fit when you think about what would happen next. Here is my educated guess based on some research:

- contract termination: Hegseth can direct the relevant contracting officer(s) at the Pentagon to terminate the contract. This could happen within days. Internal stonewalling here might add weeks of delay, but probably not more than that.

- supply chain risk designation: Hegseth signs a document, puts it into motion. Then it becomes a bureaucratic process that chugs along. Noncompliant contracting officers probably would be fired, so this happens within weeks or a few months. Substantial delays could come from litigation, to be sure -- but this isn't a case where civil service stonewalling saves us.

- Defense Production Act: would require an executive order from Trump. This would go into effect right away, at least on paper. It would very likely lead to litigation and possibly court injunctions.

My point is that non-compliant civil servants at the Pentagon probably can't slow it down very much. (I recommend they do what their oath and conscience demands, to be sure!) Hegseth has shown he's willing to fire quickly and aggressively. I admire people who take a stand against Hegseth and Trump -- they are a nasty combination of dangerous and corrupt. At the moment, they appear weaker than ever. Sustained civil pushback is working.

Let's "roll this up" back to my original point. I responded to a comment that said "I doubt anyone at the Pentagon is pushing for this.", asking the commenter to explain. I don't think that comment promotes a better understanding of the situation. It is more useful to talk about the components of the situation and some possible cause-effect relationships.

JumpCrisscross 10 hours ago [-]
> are you predicting Hegseth will back down?

I think he may be able to cancel Anthropic’s contract. But no more. He won’t back down as much as be overruled.

> As SecDef/SecWar, Hegseth is the head of the Pentagon

On paper. Also, being the de jure head of something doesn’t automatically mean you speak for it as a whole.

> while also taking his power seriously

Authority and power are different. A plane pilot has a lot of authority. They don’t have a lot of power.

blitzar 3 hours ago [-]
> I think he may be able to cancel Anthropic’s contract.

This outcome might be a win for everyone involved: those billions come with a lot of strings attached, and become less useful as AI matures.

xpe 3 hours ago [-]
The above is fairly surface level. See my other comment for particulars that matter a lot: https://news.ycombinator.com/item?id=47176361

You’ll notice I’m trying to avoid debating generic phrases and terms such as “power” that probably won’t advance mutual understanding of this situation. I’m talking about specific actions and systems. It makes it clearer.

JumpCrisscross 39 minutes ago [-]
> notice I’m trying to avoid debating generic phrases

You’re missing the forest for the trees. Take the tariffs as analogy. Specifying the laws invoked to effect the tariffs is more precise, but less complete than describing Trump, Bessent and Navarro’s motivations and theories.

Same here. We can wax lyrical about the DPA and specific statutory authorities and how they may be litigated. Or we can look at the actual power structures. The former is precise but inaccurate. The latter is the actual dynamic.

> terms such as “power” that probably won’t advance mutual understanding

If terms like power and influence don’t make sense to someone, they’re going to be lost in any political discussion. But particularly under this administration.

There aren’t legal analytic fundamentals driving why Trump hates windmills or Biden pardoned his son, these were expressions of Presidential power and preference. The legality was ex post facto.

xpe 29 minutes ago [-]
Person to person, we’re talking past each other. If we were sitting down face-to-face or even with a video call, this would be a totally different conversation.

How much are we connecting in this particular conversation? What if each of us were to step back and ask 3 questions: What am I trying to communicate? Are we both interested in having this conversation? Are we both learning from it?

Again, this is not meant as a criticism of you. It is a statement of the dynamic here, and how we’re relating. (Even though HN is well above average, it has massive failure modes when you view it from a systems POV.)

My feeling is that you aren't responding to the intent behind my statement. But I'll also recognize that I'm probably not communicating in a way that lands for you. Maybe you feel the same in reverse? That would be my guess.

This is a failure of our communication norms and technologies. Given that we're in the year 2026 and have minimal technical barriers, we have very much failed culturally to get anywhere close to the potential of the Internet or whatever needs to come next.

JumpCrisscross 3 minutes ago [-]
Genuine question, are you using AI to edit your comments? Going on a rhetorical side quest in a straightforward discussion about policy, law and politics is…well, it’s not on topic.

For what it’s worth, I’m not seeing a failure of communication. I’m seeing a failure of scoping. You’re arguing on the basis of specific legal mechanisms by which power is expressed. I’m arguing the real motivations of and political constraints on decision makers are more fundamental in this case.

That isn’t universally true. Power predicted what Trump would do with tariffs (again, analogy). Legal analysis predicted his constraints (which SCOTUS affirmed). In this case, SecDef has the legal authority to do what’s described. He doesn’t, however, have the political freedom to do so. That turns the latter into the germane constraint, not the litany of proscribed powers.

relaxing 12 hours ago [-]
[flagged]
mandeepj 12 hours ago [-]
First of all, there's no such thing as "Department of War". A department name change is legal/binding only after it's approved by the Senate. Senator Kelly is still calling it DoD (Department of Defense).

> Mass domestic surveillance.

Since when has DoD started getting involved with the internal affairs of the country?

https://en.wikipedia.org/wiki/United_States_Department_of_De...

_kst_ 11 hours ago [-]
The Senate??

Any law changing the name of the Defense Department would have to be passed by both Houses of Congress and signed by the President (or by 2/3 of both Houses overriding a Presidential veto). The Senate has no such authority on its own.

mandeepj 10 hours ago [-]
Right! I meant to write ‘Congress’, but mistakenly wrote Senate.
Lerc 12 hours ago [-]
It's whatever the people who have the power want to call it. What is written on a piece of paper is irrelevant if it is not acted upon.

If the rename gets struck down then they don't have the power. If it doesn't they have the power.

There are many dictatorships that built their power in the face of people claiming that they can't do what they planned because it was illegal.

Until they did it anyway.

jazzyjackson 9 hours ago [-]
I don't know, to me it seems like their MO to make an announcement and not follow up on it. All the paperwork still says DoD, all the contracts are with the DoD, and there is no legal entity called the DoW.
darkerside 11 hours ago [-]
This is fascism
Lerc 11 hours ago [-]
I don't think many are doubting that. I'm not talking about the way things should be. I'm talking about the way they are.
darkerside 4 hours ago [-]
This is normalization of fascism
zombot 2 hours ago [-]
Which is what naturally happens when fascists are in power.
Quarrelsome 5 hours ago [-]
I'd imagine the pentagon are more interested in the autonomous kill bot part than the surveillance part.
khazhoux 7 hours ago [-]
Well, Trump renamed it, and since Congress is now a subsidiary of the Executive Branch, it's the Department of War.
zombot 2 hours ago [-]
Resist. Continue calling it the DoD.
culi 12 hours ago [-]
They've already spent millions on the name change. It's also the original name of the department. IMO it's a more honest name
9dev 16 minutes ago [-]
It doesn't matter how much they've spent, nor what you think. Renaming it requires congressional approval, which they have not gotten.
tokyobreakfast 12 hours ago [-]
www.defense.gov redirects to www.war.gov but I like how you refer to Wikipedia as the authoritative source to prove this functionally irrelevant and aggressive Reddit-style seething.

The talk page on the linked Wikipedia article arguing about logos is just as deranged. It's very important to realize there is literally nothing you—or anyone else—can do about this.

9dev 6 minutes ago [-]
> It's very important to realize there is literally nothing you—or anyone else—can do about this.

What an utterly bewildering statement. So your suggestion is to suck it up, because we're all impotent anyway? The only thing that can bring authoritarian systems down is civil resistance.

intermerda 15 hours ago [-]
[flagged]
grosswait 13 hours ago [-]
[flagged]
djeastm 13 hours ago [-]
>It’s already close to losing all meaning.

On the contrary, seeing it take hold before our very eyes gives it more meaning than it ever had in the pages of the history books.

grosswait 2 hours ago [-]
On the contrary, claiming it’s taking hold and labeling everything fascism doesn’t make it so
xpe 13 hours ago [-]
There is a difference between a politician making a contradictory statement and the largest agency in the United States using probably unconstitutional pressure tactics against a business.
SilverElfin 12 hours ago [-]
I see this a lot on the immigration topic. They’re simultaneously too rich and taking over everything, but also low paid slave labor displacing white Christians everywhere.
calvinmorrison 15 hours ago [-]
More like the government is treating this like the near term weapon it actually is and, unlike the Manhattan project, the government seems to have little to no control.
fwipsy 13 hours ago [-]
Anthropic has been pushing for commonsense AI regulation. Our current administration has refused to regulate AI and attempted to prevent state regulation.

"The government doesn't have control of this technology" is an odd way to think about "the government can't force a company to apply this technology dangerously."

polski-g 2 hours ago [-]
Because of Bernstein v. DOJ, any AI company in the 9th Circuit cannot be regulated, because software is considered free speech.
toomuchtodo 14 hours ago [-]
Note that they always attempt to exert control they don’t have. They’re always bluffing, and they keep losing. Respond accordingly.
latexr 13 hours ago [-]
> Respond accordingly.

“Four key words (…) The only phrase that can genuinely make a weak bully go away, and that is: Fuck You, Make Me.”

https://m.youtube.com/watch?v=ohPToBog_-g&t=1619s

RobotToaster 14 hours ago [-]
Paper tigers
gclawes 14 hours ago [-]
The government should be entitled to any lawful use of a product they purchase, not uses dictated solely by the provider. It's up to courts to decide what lawful use is, it's not up to these companies to dictate.
mediaman 13 hours ago [-]
The product is a service, and they agreed to a contract. Now they don't like the contract.

Is your view that contracts with the government should be meaningless? That the government should be able to unilaterally, and without recourse, change any contract they previously agreed to for any reason, and the vendor should be forced at gunpoint to comply?

If you do believe this, then what do you believe the second order effects will be when contracts with the government have no meaning? How will vendors to the government respond? Will this ultimately help or hinder the American government's efficacy?

danorama 13 hours ago [-]
Seriously.

Hegseth trying to play “I’m altering the deal. Pray I don’t alter it any further” just shows this gang’s total lack of comprehension of second-order effects.

isodev 13 hours ago [-]
> It's up to courts to decide what lawful use is

No, it’s up to the government to create policy and legislation that outlines what is lawful or not and install mechanisms to monitor and regulate usage.

The fact that an arm of the government wants to go YOLO mode is merely a symptom of the deeper problem that this government is currently not effectual.

grosswait 13 hours ago [-]
Do you have any insight that what they want to do is YOLO, as opposed to something you're sure you'll disagree with?
isodev 13 hours ago [-]
YOLO here refers to unsafe usage of LLMs. Your government is supposed to make legislation that protects all of its citizens; it's not a "what you agree with" game.
mech422 13 hours ago [-]
Terms of Service would like to have a word....

Not like limiting uses of products is anything new

rpdillon 13 hours ago [-]
Not really. Services are provided on terms acceptable to both parties. This isn't about what's legal, it's about the terms of the service agreement.
14 hours ago [-]
toomuchtodo 14 hours ago [-]
Providers are free to choose whom they do business with, or not do business with. Are you arguing that the government should be able to compel a provider to allow its use when it's well documented that the government does not respect nor adhere to the rule of law? I think you misunderstand commerce and contract law.
alex43578 9 hours ago [-]
Providers are bound by plenty of laws that alter how they do business or who they do business with.

You can’t say “no disabled people at your business”. Hell, you can’t even say “no fake service animals at my restaurant”. Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.

toomuchtodo 4 minutes ago [-]
When Congress makes the law, you will be accurate. At this time, there is no law that enables the US executive branch to achieve their outcome with Anthropic.

> Many in America also think you can’t say no girls in the Boy Scouts, or no men in a women’s locker room.

Your average American is low-functioning, low-education, vibe-driven, with a 6th-8th grade reading level, so this is not terribly relevant in my opinion. Provide statute and case law. "What Americans think" is the topic of comedy shows, not legal outcomes.

Forgeties79 13 hours ago [-]
Strange take
bdangubic 13 hours ago [-]
Amazing to read this. Hoping you are not an American… Reading this thread is like comrade after comrade!
egorfine 3 hours ago [-]
> two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

They are only contradictory if you think about it.

gclawes 14 hours ago [-]
> This contradictory messaging puts to rest any doubt that this is a strong arm by the governemnt to allow any use.

Why the hell should companies get to dictate on their own to the government how their product is used?

theptip 14 hours ago [-]
Every company is free to determine its terms of use. If USG doesn’t like them they should sign a contract with someone else.
grosswait 13 hours ago [-]
Every company is free to state their terms of use, but not all have been upheld when challenged
otterley 12 hours ago [-]
What’s your angle here? I’m genuinely curious. If the government told you that you had to muck out portable bathrooms with your bare hands even if you didn’t want to, wouldn’t you find that objectionable?
alex43578 7 hours ago [-]
I'm sure they would find it objectionable, just like how many reacted negatively to the draft, but it was imposed anyway.

The government should have far less control and power over individuals and businesses than it currently does.

lynx97 5 hours ago [-]
Well, the rates are different from country to country, but everyone knows taxes. I really don't want to give away almost 40% of my income... Does anyone care what I want or like?
blitzar 7 hours ago [-]
> Every company *

* excludes tiktok

alex43578 9 hours ago [-]
Can I run a business and say “No use by insert race here”? If they don’t like it, they can shop somewhere else, right?
FrancisMoodie 7 hours ago [-]
Of course we're gonna compare being against the use of technology for mass surveillance/autonomous weapons with being racist; like wtf kind of argument is this? So because businesses can't implement racist policies, they shouldn't be allowed to have any policies concerning the use of their tech? Mindblowing.
lynx97 5 hours ago [-]
Well, the question is the fine line between racism and discrimination. Or, what's the difference between misogyny and pacifism? What am I allowed to dislike? Is it already across the line if I don't like dogs? What if I had really bad experiences with dogs in the past? Is it OK now, or still not? What if my childhood was basically a crazy mess because of my mother? Am I allowed to be careful around women now? Or am I creepy because of that? What if I escaped a warzone during my childhood? Is militant pacifism OK now? What if the military saved my family from being killed? Is it OK if I am pro military budget, or am I a system-whore now?
tenuousemphasis 8 hours ago [-]
Kegsbreath isn't a protected class.
alex43578 7 hours ago [-]
If your argument is “every company is free to determine its terms of use”, except when told otherwise by the government, you’ve proven my point. The government is saying they need to provide unfettered access.
JCharante 6 hours ago [-]
“Told” is different than it being written into law. Go update the laws first and then you have a valid argument
randerson 13 hours ago [-]
Because technology companies know more about their product's capabilities and limitations than a former Fox News host? And because they know there's a risk of mass civilian casualties if you put an LLM in control of the world's most expensive military equipment?
Hnrobert42 13 hours ago [-]
Because the government is here to serve us. Not the other way around.
no-dr-onboard 13 hours ago [-]
The government has a responsibility to protect its constituents. Sometimes that requires collaboration. This isn’t hard.
epistasis 13 hours ago [-]
Is this one of those times? Seems pretty clear it's not.

The third amendment is there for a reason. I am a third amendment absolutist and willing to put my life on the line to defend it.

staticassertion 12 hours ago [-]
I wonder what you can't justify this way.
no-dr-onboard 11 hours ago [-]
That’s a good question. Assuming a righteous and just government:

The government couldn’t justify the killing of innocent civilians.

The government couldn’t justify the killing of the unborn.

The government couldn’t justify eugenics.

There are objective moral absolutes.

staticassertion 3 hours ago [-]
Wow, that's just so many assertions, and none of them follow from the statement that the government can break the law in order to protect its citizens. In all of those cases I can just say "they can if it is to protect its citizens". Remember, the premise here is that you are performing the act in order to protect constituents. So before all of those statements you have to assume "they are doing this in the genuine belief that it protects constituents".

The argument so far seems to be "They can do anything, but there are moral absolutes that I can personally list out, and in those cases they can't do those things". That is a hilariously stupid view of the world but sadly a common one.

Even if I grant moral objectivity, I reject that you have epistemic access to it so it's moot.

singleshot_ 14 hours ago [-]
Same reason they can't quarter troops in your house: the law
bathtub365 14 hours ago [-]
throw0101c 13 hours ago [-]
> Why the hell should companies get to dictate on their own to the government how their product is used?

Well:

"""

Imagine that you created an LLC, and that you are the sole owner and employee.

One day your LLC receives a letter from the government that says, "here is a contract to go mine heavy rare earth elements in Alaska." You don't want to do that, so you reply, "no thanks!"

There is no retaliation. Everything is fine. You declined the terms of a contract. You live in a civilized capitalist republic. We figured this stuff out centuries ago, and today we have bigger fish to fry.

"""

* https://x.com/deanwball/status/2027143691241197638

grosswait 12 hours ago [-]
This is a terrible analogy. Imagine you're an LLC that signed a contract to mine minerals, but your terms state you'd only mine in areas you felt safe. OSHA says it's safe but you disagree, because of… any number of reasons unknowable to an outsider. Maybe you just don't like this OSHA leadership. That is more like what is happening.

Signing a contract with Anthropic assuming they wouldn’t rug pull over their own moral soapbox was mistake number one.

I love anthropic products and heavily use them daily, but they need to get off their high horse. They complain they’re being robbed by Chinese labs - robbed of what they stole from copyright holders. Anthropic doesn’t have the moral high ground they try to claim.

otterley 12 hours ago [-]
The (hypothetical) contract is clear, though. The condition is stated in objective terms: “in areas you felt safe.” If the Government agrees to this, then they should be bound just like any private counterparty would. If the Government didn’t agree to this, they should have negotiated that term out in favor of their preferred terms.
grosswait 2 hours ago [-]
I agree. Which is why I said signing a contract with anthropic was a terrible idea in the first place.
WD-42 12 hours ago [-]
Is it a rug pull? Where in the terms of service does anthropic say their models can be used for autonomous weapons and mass domestic surveillance?
14 hours ago [-]
etchalon 12 hours ago [-]
[flagged]
lucaslazarus 14 hours ago [-]
[flagged]
bdangubic 14 hours ago [-]
[flagged]
quietbritishjim 15 hours ago [-]
Those aren't contradictory at all. If I need a particular type of bolt for my fighter jet but I can only get it from a dodgy Chinese company, then that bolt is a supply chain risk (because they could introduce deliberate defects or simply stop producing it) and also clearly important to national security. In fact, it's a supply chain risk because it is important to national security.
NewsaHackO 14 hours ago [-]
No. In your example, if the dodgy Chinese company is a supply chain risk due to sabotage, why would the government invoke an act forcing that same company to keep producing the bolts for national defense preparedness? That would clearly be a national security risk.
snickerbockers 13 hours ago [-]
The OP specifically mentions this in the context of "systems" (a vague, poorly-defined term) and "classified networks" in which Anthropic products are already present. Without more details on what "systems" these are or the terms of the contracts under which these were produced it's difficult to make a definitive judgement, but broadly speaking it's not a good thing if the government is relying on a product which Anthropic has designed to arbitrarily refuse orders by its own judgement.

I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.

NewsaHackO 11 hours ago [-]
>I really don't see how anybody could think a private defense contractor should be entitled to countermand the military by leveraging the control it has over products it has already sold. Maybe the terms of their contract entitled them to some discretion over what orders the product will carry out, but there's no such claim in the OP.

I don't think that is what is happening. What most likely is happening is that they want Anthropic to produce new systems due to the success of the previous ones, but Anthropic is refusing because the new systems are against its mission. What the DoD seems to be attempting is, on one hand, to call them a supply chain risk to limit Anthropic's business opportunities with other companies, and on the other hand, to simultaneously invoke the DPA to compel them to make the new system. But why would the government compel a company to build a system on national-preparedness grounds after designating that same company such a supply chain risk that other companies providing government services are forbidden from doing business with it? It doesn't really make sense, other than from a pure coercion perspective.

snickerbockers 9 hours ago [-]
>limit Anthropic's business opportunities with other companies

Does it necessarily prevent other companies from doing business with them or does it prevent other companies from subcontracting them on government projects? The term "supply chain" leads me to think it's the latter.

NewsaHackO 9 hours ago [-]
Is that relevant to the actual point?
snickerbockers 5 hours ago [-]
Yes?
SpicyLemonZest 53 minutes ago [-]
The question is, after witnessing Hegseth crash out against one of their fellow contractors over practically nothing, will contractors want to walk the tightrope of doing business with Anthropic but promising it never ends up feeding into a government contract?
estearum 14 hours ago [-]
It's easy to resolve an alleged contradiction by just ignoring one half of it lol

Try introducing DPA invocation into your analogy and let's see where it goes!

simoncion 7 hours ago [-]
> Try introducing DPA invocation into your analogy and let's see where it goes!

When I introduce that, I see Anthropic's management getting Tiktok'ed.

It can be true that Anthropic's products are essential for national defense and also true that the management of the company are a supply chain risk.

Is any of that true? Well, so much of what has been done in the name of "national defense" & etc over the past many decades has clearly not been done for reasons that are true, so -when it comes to "national defense"- I don't think that the truth actually matters much at all.

estearum 3 hours ago [-]
TikTok'd as in requiring a novel act of Congress? Sure!

DPA and FASCSA as they stand today cannot be used the way DOD is claiming they can be.

gipp 14 hours ago [-]
"Supply chain risk" is a specific designation that forbids companies that work with the DOD from working with that company. It would not be applied in your scenario.
ray_v 14 hours ago [-]
The analogy doesn't work here ... In your scenario they are ok with using the bolt as long as the Chinese company promises to remove deliberate defects - which is of course absurd ... AND contradictory.
tabbott 16 hours ago [-]
An organization's character really shows through when its values conflict with its self-interest.

It's inspiring to see that Anthropic is capable of taking a principled stand, despite having raised a fortune in venture capital.

I don't think a lot of companies would have made this choice. I wish them the very best of luck in weathering the consequences of their courage.

idiotsecant 15 hours ago [-]
The problem is that this is a decision that costs money. Relying on a system that makes money by doing bad things to do good things out of a sense of morality, when a possible outcome is existential risk to the species, is a 100% chance of failure on a long enough timeline. We need massive disincentives for bad behavior, but I think that cat is already out of its bag.
freakynit 13 hours ago [-]
I appreciate that the HN community values thoughtful, civil discussion, and that's important. But when fundamental civil liberties are at stake, especially in the face of powerful institutions and influence from people of money seeking to expand control under the banner of "security", it's worth remembering that freedom has never simply been granted. It has always required vigilance, and at times, resistance. The rights we rely on were not handed down by default; they were secured through struggle, and they can be eroded the same way.

Power corrupts, and absolute power corrupts absolutely.

_def 10 hours ago [-]
On a long enough timeline literally everything has 100% chance of failure. I'm not trying to be obnoxious, I just wanna say: we only got this one life and we have to choose what to make of it. Too many people pretend things are already laid out based on game theory "success". But that's not what it's about in life at all.
11 hours ago [-]
flumpcakes 16 hours ago [-]
This is such a depressing read. What is becoming of the USA? Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.
davidw 16 hours ago [-]
This isn't a one-election thing. It's going to be a generational effort to fix what these people are breaking more of every day. I hope I live to see it come to some kind of fruition - I recently turned 50.
inigyou 16 hours ago [-]
Some people are calling it the "American century of humiliation"

No other country that went through a phase like this has ever recovered. Not even in a century.

davidw 16 hours ago [-]
I won't give in to doomerism.

Germany, Italy and Japan are all wealthy, stable democracies right now. Not without their problems and baggage, but pleasant places in a lot of ways.

mobilefriendly 15 hours ago [-]
All three have active US military bases on their soil and enjoy the economic surplus of living under the US defense umbrella.
davidw 15 hours ago [-]
The post WWII system was imperfect in many ways, but it was also mutually beneficial and worked out pretty well despite the problems.

And we're throwing that all out the window.

US military bases aren't what made those countries modern, prosperous, democratic places. It took the will of the people to rebuild something better after the war.

Quarrelsome 5 hours ago [-]
don't make it out like it's a favour. The US has done very well out of its defense umbrella, which ensured its global dominance for most of the last century.

Most powers have to pay in blood to do what they want geopolitically without question. The US inherited a global state where many potential rivals were weak and helped keep them weak. It was a cost worth paying, and it's a shame that current US leaders are too cheap and foolhardy to see what they're throwing away.

matwood 8 hours ago [-]
You seem to imply the US reaps no benefit from providing security?
bonsai_spool 15 hours ago [-]
Britain essentially ceded its bases to the US at the end of WWII - these things aren’t as durable as they may seem.
Quarrelsome 5 hours ago [-]
that's cos WW1 financially broke Britain, then WW2 happened.
4gotunameagain 9 hours ago [-]
All that economic surplus - and much more - flows back to the US. How do you think the US can sustain that amount of USD printing without inflation? The rest of the world is buying those dollars.
remarkEon 9 hours ago [-]
Germany: functionally paralyzed government that has the far right knocking at the door because the fractured coalition of left-centerleft-centerright continues to refuse to do what voters ask for.

Italy: Nominally center-right government, similar problems as Germany, less the energy issues

Japan: just elected a landslide right wing government that is going to change the constitution so they can build an offensive military again

Curious.

poly2it 8 hours ago [-]
I don't perceive those problems to be inherent to the territories or peoples of the countries. All have had potential to change and have done so extensively since the Second World War. There isn't a universal explanation or root behind the issues these countries are facing today, unless you are willing to abstract it to just "economics".
micromacrofoot 15 hours ago [-]
They got bombed to shit first
davidw 15 hours ago [-]
It'd be nice to avoid that part.
Fischgericht 15 hours ago [-]
Then it won't work. The current iteration of Germany is fully based on having been bombed to get a fresh start. If you already have something, you won't change it. If you have to re-build, you will implement improvements. No bombs, no reset, no joy.
RGamma 5 hours ago [-]
It is not inevitable that you come back improved. It is not inevitable that you come back at all.
davidw 15 hours ago [-]
I am less confident about my predictions for an uncertain future. There's all kinds of ways different things could go.

I didn't say we needed to follow their example to the letter; it was just one counterexample to the "woe and ruin for 100 years" comment.

Fischgericht 15 hours ago [-]
Yes, but it is actually scientifically correct and proven on all sorts of layers. Biology, Maths, whatever. Not doomsdaying, just data analytics.

Societies do not operate like a sine curve, like say summer/winter cycles. They are upside-down "U"s. After the peak comes decline, but after the decline there is NOT recovery/growth again before you have a reset.

Germany was the huge winner of WW2 in the sense that after having had a high society they were directly allowed another such run. But as nobody wants to bomb us *) anymore, Germany is also in decline now, waiting for a reset to come one day...

Sadly the USA will also need a reset before things can begin getting better again.

*) I was born in Germany and lived there for 40 years.

RGamma 5 hours ago [-]
References to scientific proofs?
scottyah 15 hours ago [-]
Ok what about the Netherlands, Spain, Nordic countries?
Fischgericht 14 hours ago [-]
Very different countries.

The Netherlands for example got their last reset by completely losing the Dutch empire.

Also, some societies have flatter curves than others. That really maps 1:1 to your style and culture of living and where the priorities are.

If your priorities are to be the best as fast as possible (Germany) you will have less time between resets. If your priorities are "let's chill and wait until the coconut falls from the tree into my hand", your society might be able to have a far longer time between resets.

But in the end: It's an iterative process. Which means: There must be iterations.

davidw 14 hours ago [-]
This sounds about as scientific as phrenology.
Fischgericht 3 hours ago [-]
No, it's really simple: Programming, Math, AI, blabla - those are all abstractions of what we have seen in nature.

Once you have understood that, you can just apply the rules learned backward, and they will typically match pretty well. I can buy fractal veggies in a supermarket.

And also, it's just data. Just take some random samples. Even civilizations like the Mayas, who have faaaar more time on the clock than, say, the US, had multiple full resets.

Another random sample I've just pulled out of thin google air: the San Francisco Fire of 1851. Everybody knew that wood burns. And that wooden buildings burn. And that wooden cities burn. Did anyone decide to tear down their house and re-build with a different material? No. That only happened after everything had burned down to the ground. That was the reset needed.

I think it is very clearly an iterative process. Have a look.

prmph 5 hours ago [-]
Not sure why you are being downvoted. What you are saying has a lot of truth to it. It is directly observable in the history of nations.

Germany had to be forced to accept that, although it was advanced, it could not have the European empire it thought it deserved. Japan had to learn a similar lesson. The speed and horror of the reset was in direct proportion to the potential for advancement and high society in these nations.

Ghana, where I come from, for example, has not had to experience any massive upheaval even from its pre-colonial and colonial days up till now. Our society is laid-back and moves slowly. Many other African countries have had to have their national reckoning in the form of civil wars and other huge upheavals in order to settle into a viable way of existing and advancing.

And, like you said, this is iterative. Given the nature of a nation's people and its fundamental geopolitical position, the same question will need to be answered every N generations. Germany is central to Europe, and a generation far removed from the world wars is already starting to rethink why it shouldn't assert itself more strongly. Same in Japan.

The way to analyze the iterations of the US is to understand that the primary threats are from within. It may not implode completely, but the Civil War and the civil rights era show that the potential is there for massive unrest and violence.

Fischgericht 3 hours ago [-]
[I am getting downvoted all the time because the combination of German directness with autistic directness and lack of empathy combined with dark humor is not exactly compatible with societies where it is seen as offensive, rude or even aggressive not to sugar coat your messages. If one side treats this as a data exchange, and the other side processes the data but including emotions it will obviously have compatibility issues. But that's my "problem", so I accepted that typically if I post stuff, I first get upvoted massively, and after a day downvoted to hell. And that's OK. Again, my problem to be incompatible with a standard.]

And yes, it is interesting to see that on Polymarket people are betting involving a lot of emotions. No, you will not bet on getting killed by masked militia. Nobody is going to say "Hey, I'll bet $1000 that I will get cancer soon!".

But if you leave aside all the emotions and just look at the data: no, there is no realistic scenario in which the US could magically recover from all checks and balances and rules and laws and regulations and decency having been destroyed. Competence, leadership and shared knowledge have been erased in all areas of society - science, development, capitalism, the arts. How are you going to rebuild all of this, especially if the best case is that 60% of the people will agree to rebuild, while 40% insist they need to keep destroying stuff?

Looking at historical data, this is not a scenario any prior "high culture" (or whatever to call it) has been able to recover from.

Elsewhere in this thread it was mentioned that Germany still had all the Nazis in place everywhere because otherwise the country would not have worked. But that is not the point. The reset was:

a) All is destroyed and MUST be rebuilt, because otherwise we will freeze and starve to death.

b) Your Nazi neighbor is still there, but it has been made VERY clear who is the new sheriff in town: first the Allies, then pretty much the USA. Germany is still paying for having US soldiers in the country, providing valuable, expensive land for free, and paying for most of the supply chain that is not staffed with US soldiers. And that is the accepted normal.

c) What was left of industry was physically taken as reparations. Especially the Soviets, but also the French, dismantled whole factories and machinery, moving them to their own countries (rightfully so).

From what I know from school, reading and talking to grandparents: Germany after WW2 doesn't have much relation to pre-WW2 Germany. Suddenly it was normal for women to do "men's jobs" (due to the men being more on the dead side). McDonald's. Hollywood. Etc.

It really makes sense to have a look at a couple of pictures of what was left of Germany after WW2. It's just someone slapping an existing brand name onto a new product. And in this case, personally I would have regarded the brand as damaged and would have picked a different name.

eternauta3k 5 hours ago [-]
Germany wasn't a fresh start. The de-nazification ended up being a bit of a joke and (AFAIK) the first governments were full of ex-Nazis.
protocolture 15 hours ago [-]
James May did a documentary loosely based on this. "The Peoples Car"

Basically analysing the economies of WW2 participants via their automobile industries.

It's staggering how being bombed into the ground has forced technological and economic innovation. And how the inverse, being the bomber, has created stagnation.

galangalalgol 15 hours ago [-]
I don't think it would matter even if the US did have to start again. The entire US alliance after WW2 benefited from the same structural causes of increased pluralism and egalitarianism: a fractured elite, complex international trade, expanding and increasingly difficult-to-control communication channels, and a growing bureaucracy. These all inhibit autocratic concentration of power. International trade has become uncomplicated: there is one manufacturer that is not a consumer, and many consumers, which leads to an increasingly less fractured elite. The structural reasons for democracy and a rules-based order are all fading. The US is just a really big canary.
King-Aaron 15 hours ago [-]
The people running the show are all building generational fallout shelters in new zealand. As seems to be the real 'whitehouse ballroom' plan too. They seem to be expecting that part.
pear01 13 hours ago [-]
Congress is the problem, but not in the way most describe.

Congress has abdicated its powers because as an institution it is broken. Several inland states with statewide populations smaller than those of major metro areas on the coasts have the same number of senators as every other state - two. This means voters in a lot of states are overrepresented. Meanwhile, they say land doesn't vote, but in the United States Senate the cities and localities with the most people, which drive much of our growth and dynamism, are severely underrepresented. The upper and most important chamber of Congress is thus undemocratic. Given that it's an institution deeply susceptible to minority gridlock that depends on wide margins to do anything, more often than not it simply does nothing. An imperial presidency thus frankly becomes the only way the country can actually get most things done.

This two-senators-for-every-state arrangement was a compromise agreed to when constitutional ratification was in doubt, when the USA was a weak, newborn country of about 3 million people confined to the Eastern seaboard, at a time in our history when our most pressing concern was being recolonized by European powers. The British burned down the White House during the War of 1812; imagine what more they could have accomplished if the constitutional compromises that strengthened the union had not been agreed to.

This compromise has outlived its usefulness. No American today fears a Spanish armada or British regulars bearing torches. These difficult compromises at the heart of America already led to one civil war.

The best we can do is create a broad political movement that entertains as many incriminations as possible (probably around corruption/Epstein, which must take pains to avoid any distinction between, say, a Bill Clinton or a Donald Trump) so we can get past partisan bickering and build enough of a mass movement to usher in a new age of constitutional amendment and reform.

If it doesn't happen, this cycle of Obama, Trump, Biden, Trump will continue until this country elects someone who makes Trump look like a saint. It can happen. Think of how Trump rehabilitated Bush. We already see the trend getting worse. And if it does, the post-WWII, Germany-style reset being mentioned here will become inevitable.

soderfoo 5 hours ago [-]
How do you think this would play out? Changing the apportionment of the Senate, aside from being a political and legal nightmare, would also create a monumental constitutional crisis.

First, the Connecticut Compromise is a democratic underpinning of the US. It was central to the formation of the nation, and any attempt to alter it would be a foundational structural change to the constitution to say the least.

I understand the concerns about one generation binding another without recourse. Legal scholars differ on whether Article V, which implements the compromise, can be amended or not.

But for the sake of argument, let's say it can. It would be an insurmountable task requiring the following:

1. A two-thirds supermajority in both houses of Congress (67 of 100 senators and 290 of 435 representatives) to propose the amendment.

2. Ratification by three-fourths of the state legislatures (38 out of 50 states) or by conventions in three-fourths of the states.

3. Consent of the states that would lose their equal representation in the Senate.

4. Overcome any legal challenges that would likely arise at every step of the process.

The result would be a dramatic redefinition of federalism and democratic representation. This wouldn't be a cosmetic change, it would be a fundamental alteration to the structure of the government and constitution.

Very few things were deemed "unamendable" and entrenched in the constitution before, both explicitly and implicitly, but now it would all be up for grabs. Now nothing is irrevocable.

What's to stop future generations from altering other fundamental principles? While we may complain of being bound by the decisions of our ancestors, we would be opening up a Pandora's box of constitutional instability for future generations, binding them to the whims of a (slim?) majority of the current generation's political agenda.

I think that is the best case scenario. The worst, and I think a very possible scenario, is that states losing representation would claim that such a drastic and material change to the constitution upends the root of the bargain that led to the formation of the union, and would likely seek to secede. You may have achieved your goal of changing the apportionment of the Senate, but at the cost of the union itself. There are far easier and less risky ways to achieve political change.

inigyou 16 hours ago [-]
[flagged]
popalchemist 15 hours ago [-]
Japan's economic problems are mostly rooted in population decline. Have you ever been? Even though wages are stagnant, the people are among the healthiest in the world and they're known for the way their society's public services ACTUALLY work.

Not sure about Italy, but Germany, while not without its problems, is a beacon of democracy, progressivism, and self-correction.

lovich 15 hours ago [-]
> Germany is still extremely weird about anything to do with Jews

> I've never been to Italy but they don't seem very productive either.

Ok green poster. You need to look up more about world economies if you are going to confidently say things like Italy isn’t that productive. Combined with your comment on Jews in Germany I just assume you’re here to push propaganda, but if not please read up more on Italian economic output compared to, I don’t know, maybe the G7 countries?

Dumblydorr 16 hours ago [-]
That’s just historically inaccurate. You had massive upheavals across numerous countries throughout time, this is small in comparison to the civil war’s impact on the USA for instance. You think this is worse than half the government rebelling and revolting and killing an amount of young men that today would be equivalent to 6 million deaths? It’s bad now but your comment lacks historical evidence.
53 minutes ago [-]
testfrequency 8 hours ago [-]
On eastern social media a big discussion going around right now is referring to America as being on the “kill line”.

The world knows the US is close to folding in on itself.

jonplackett 16 hours ago [-]
China seems to have recovered pretty well.
AuthAuth 15 hours ago [-]
Not really. China only seems good because there is a war in Europe and the US is shooting itself in the foot. They're polluting and strip-mining their country, suppressing wages and funneling the profit into companies, all while increasing surveillance and decreasing freedom of opinion. Oh, but they put down a few solar panels and then paid for people to write articles about it.
davidw 15 hours ago [-]
Their economy lifted a bunch of people out of poverty. That's positive.

However, in terms of 'democracy' they're still way worse off than the US right now, even if the US is headed in a bad direction.

wraptile 14 hours ago [-]
> Their economy lifted a bunch of people out of poverty

This is fallacious as every economy that started at extreme poverty lifted a bunch of people out of poverty.

Unless we invent a time machine and do an A/B test, we can't really attribute the success to policy when _any_ policy would clearly have lifted a bunch of people out of poverty (it's basically almost impossible not to go up from extreme poverty). The closest we can do is look at similar scenarios like Taiwan, which also lifted a bunch of people from poverty while retaining more human rights.

davidw 14 hours ago [-]
Plenty of places have managed to "keep on keepin' on" with their poverty levels.

I'm not saying what they've done was the best way, only way or anything of that sort: only that it happened.

grvbck 14 hours ago [-]
> They're polluting

They absolutely are, but per capita, the USA pollutes 49.67% more than China.

Source: https://worldpopulationreview.com/country-rankings/carbon-fo...

randallsquared 46 minutes ago [-]
But only half as much per dollar, so the lower pollution per capita is just poverty, which is likely to decline over the next few decades as it has been (assuming we have decades left).
jonplackett 5 hours ago [-]
Also they are making all our stuff for us. That’s our pollution too guys.
Barrin92 14 hours ago [-]
>Oh but they put down a few solar panels

the few solar panels in question add up to a United Kingdom's worth of green energy each year and about a Royal Navy's worth of marine tonnage every two, and they lifted more people out of poverty over the span of two generations than most of the rest of the world combined. Shenzhen produces about 70% of the entire world's consumer drones, now the primary weapon on both sides of the largest military conflict in the world. Xiaomi, a company founded in 2010, decided to make electric cars in 2021 and is now successfully selling them.

As Adam Tooze has pointed out it's the single most transformative place in the world, if you're not trying to learn from it you're choosing to ignore the most important place in the 21st century for ideological reasons

bamboozled 15 hours ago [-]
I used to pretend China wasn't absolutely smashing the USA, but it looks like it is. They basically make everything modern civilization relies on, that's an insane amount of leverage over the rest of the world. That combined with renewables and nuclear and their diminishing need for foreign oil because of that is pretty incredible.
idiotsecant 15 hours ago [-]
They're also speedrunning a world-class power distribution system and deploying a massive amount of renewable power, among a whole mess of other infrastructure. They've got the ability to focus an entire nation on achieving technical goals, and they're rapidly improving quality of life on average while maintaining an industrial base that the US can only remember fondly. They might not meet Western standards for individual freedoms and rule of law, but they're undoubtedly a rising world power.
lanfeust6 15 hours ago [-]
This doesn't make much sense. Since the late 19th century, every country that got rich also heavily polluted the environment, though increasingly less over time. As it stands, fossil fuel demand in China has plateaued. The "wage suppression" thing also doesn't track; their citizens got much, much richer since Nixon's visit, despite being on average poorer than Westerners. Their GDP per capita is low because there's like a billion of them in the country.

The only thing to say is that it's still authoritarian. Once that gets a hold of a country, it's very difficult to shed off. Interestingly, both South Korea and Singapore shifted away from being dictatorships and were not ideologically socialist. Countries taken over by Communists remain authoritarian. The true believers will never give that up.

davidw 14 hours ago [-]
Agree with much of this. However: plenty of Central/Eastern European countries seem like they have pretty definitively shaken off communism in favor of pretty standard European style capitalism/social democracy.
lanfeust6 12 hours ago [-]
That is true, though I chalk some of that up to disdain for Russian imperialism/colonialism, and bargaining to remain out of its influence
nostrademons 11 hours ago [-]
U.S. Civil War? Roman Crisis of the 3rd Century? Russian Revolution? England's War of the Roses? China's periodic dynastic changes?

They usually don't come back with the same political organization - that's sorta the point. But plenty of civilizations come back in a form that is culturally recognizable and even dominate afterwards.

giwook 11 hours ago [-]
I’d be interested to see some specific examples cited as it’s hard to take this comment at face value.
IAmGraydon 12 hours ago [-]
This is a laughably ridiculous assertion.
tsunamifury 15 hours ago [-]
Rome was 'in decline' for 1000 years... these things are mostly feel-good blather, not realistic statements on the position of nations.
gbnwl 16 hours ago [-]
Is this a joke that’s going over my head? The country we all know the term “century of humiliation” from has recovered and is literally a superpower right now?
Pxtl 11 hours ago [-]
The Unenlightenment. Dereconstruction.

> No other country that went through a phase like this has ever recovered. Not even in a century.

Oh I can think of a couple in the '40s that bounced back after a while.

eunos 6 hours ago [-]
> generational effort to fix

You imply that there are folks willing to fix things, or even to recognize that things are broken in the first place

mschuster91 7 hours ago [-]
> It's going to be a generational effort to fix what these people are breaking more of every day.

That assumes you have people wanting to fix what is broken - and I have a hard time believing even now that they are in the majority.

MAGA and their supporters? They want to see the world burn, if only for different motives: the "left behind" people in flyover states just want revenge, the Evangelicals literally believe they can cause the Second Coming of Christ by it [1], the Russia fangroup wants to see Ukraine burn to the ground and the ultra-libertarians/dont tread on me folks want all government but maybe a bit of military to go away. That is what unifies so many people behind the Trump banner.

The problem is, on the left side you got a bunch of people completely fed up as well. Anarchists of course, then you got the "left behind" people who still want revenge on the system but aren't willing to enlist the help of the far-right for that goal, you got revolutionaries of all kind... and you got those who believe that the rot runs too deep to fix by now.

And let's face the uncomfortable truth: every one of them, bar the Evangelicals and the Russia apologists, actually has a decent point in wanting to see the world burn. Post-Thatcher capitalism has wrecked too many lives, the US Constitution hasn't seen a meaningful update in decades and no overhaul in centuries, the "checks and balances" that were supposed to prevent a Trump from reaching office or rising to the position of effective dictator have been all but destroyed, the "American Dream" has been vaporware ever since 2007...

[1] https://www.bbc.com/news/articles/c20g1zvgj4do

this-is-why 8 hours ago [-]
I’ve been called bad things on HN for suggesting there’s even a whiff of corruption in this administration. That alone scares me. Deeply.
Quarrelsome 5 hours ago [-]
there's more money and a "don't rock the boat" mentality on here as a consequence of that, and they try to keep the moderation light. So it's just not discussed enough to give people still tragically mired in that tribalism the appropriate levels of shame.
saulpw 16 hours ago [-]
Hope is not a plan, unfortunately, so if that's all we've got, I don't have much hope.
gitaarik 11 hours ago [-]
What do you mean? You think any company should do whatever the government tells them?
lm28469 7 hours ago [-]
All of what's happening is a symptom. There is no reason it would change course with the next elections; all of this is the logical development of decades of cultural, political and moral rot in US society. Trump isn't a bad moment we have to push through before we get back to the baseline. There has been no serious pushback from anyone so far; it's here to stay
jorblumesea 16 hours ago [-]
You mean, what's been happening to the USA? This isn't a new trend. Militarization of police, open attacks on democracy, unilateral foreign policy moves.

The country jumped the shark post-9/11 and has been in a slow rot since then.

rjbwork 15 hours ago [-]
Indeed. Bin Laden succeeded beyond his wildest dreams. He kickstarted our self-destruction.
blitzar 3 hours ago [-]
I think the shoe lace bomber did more than bin laden - decades of ritual humiliation at airports was normalised.
randallsquared 42 minutes ago [-]
The TSA wouldn't exist without bin Laden. The TSA still exists, but the effects of the shoe bomber are now done, in the sense that shoes aren't required to come off as of last year.
wilg 9 hours ago [-]
No, this is cope, Trump is deeply different.
asdff 7 hours ago [-]
Trump is different because he is flailing to deflect from the fact he is deeply legally compromised. But he is reaching into a toolbox of things that have already been made available.
Quarrelsome 5 hours ago [-]
yeah, there is close to no relation between the current administration and the pre-Trump GOP. That entire party is now compromised. Beforehand you could always assume they'd be locked out by legal, business, or party pressure, but that seemingly hasn't been much of a thing since Trump (as seen most recently in the illegal tariffs the administration continues to try to apply globally).
sneak 3 hours ago [-]
The framework for collecting the data to feed to the AI, exposed by Snowden, was designed and implemented in the wake of 9/11 by Bush when Trump was still busy banging teenagers with Epstein and not even thinking about politics.

Then Obama re-authorized and expanded it. Trump and Biden haven’t even moved the needle, really.

Now they’ve put up tens of thousands of permanently installed facial recognition cameras (not Flock ALPR, those point the other direction to get number plates) all over SoCal and southern Nevada (that I’ve directly observed; presumably it is happening in many other cities as well), and TSA and CBP are collecting as many ID-verified sets of facial geometry as they possibly can, whenever they can. ICE is of course using it nonstop, as well as feeding additional geometry into it. They’re flying drones 30 feet above sidewalks in downtown LA to mass collect faces.

The DoD can’t wait to deploy SOTA AI against Americans en masse.

sourcegrift 16 hours ago [-]
[flagged]
solid_fuel 15 hours ago [-]
"Recently turned American citizens" have every bit as much right to free speech, as guaranteed by the 1st amendment, as any other American citizen does. That's the whole point of the constitution. To pretend otherwise betrays the core values of our democracy.
rjbwork 15 hours ago [-]
Yeah well my family's been here for hundreds of years and fuck him. They're more American than that piece of shit will ever be.
anonnon 14 hours ago [-]
> They're more American

Do you mean your family, or Congresswoman Omar?

rjbwork 14 hours ago [-]
The latter, but both for sure.
guelo 15 hours ago [-]
That's congresswoman "recently turned American citizen" to you sir. BTW she became a citizen 26 years ago. My favorite part of Ilhan Omar being an outspoken congresswoman who keeps getting reelected is how it drives islamophobes crazy.
hobs 16 hours ago [-]
Complaining about the head of the government publicly is so important that it's included in the First Amendment instead of one of those other ones.
le-mark 15 hours ago [-]
Selective memory as usual, and outright dishonest at that. Let's remember MTG heckling Biden. When heckling at the SOTU started, and who started it, is well known.
FrankBooth 15 hours ago [-]
Let’s rush to destroy all norms entirely, since the other side started it it’s totally justified and will have no negative consequences whatsoever.
le-mark 14 hours ago [-]
This is an intellectually dishonest response. The person I responded to clearly attempts to place blame on one side, ignoring the facts of when the violation of norms began. It does matter that one side has destroyed all norms.
this-is-why 8 hours ago [-]
I think it was "you lie" under Obama. But my history knowledge is awful. I wouldn't be surprised if there was a duel at a pre-Civil War SOTU.
krapp 15 hours ago [-]
My brother in Christ we shoot our Presidents for sport in this country. There's nothing more American than heckling the government and God bless any immigrant who doesn't put up with its bullshit.
idiotsecant 15 hours ago [-]
The irony inherent in this post is stunning in its purity. Weapons grade. I should be wearing goggles just to view this post. It's off the charts.
georgemcbay 16 hours ago [-]
> Let's hope sanity prevails and the next election cycle can bring in some competent non-grievance based leadership.

Would be nice, but I have a bad feeling that the impact of widescale mostly unregulated AI adoption on our social fabric is going to make the social media era that gave rise to Trump, et al seem like the good ol' days in comparison.

I hope I am wrong.

1024core 16 hours ago [-]
[flagged]
hungryhobbit 16 hours ago [-]
That seems to be a denial of reality. Democrats are already winning races all over the country, in places that (traditionally) have been Republican strongholds.

But don't let me stop you from believing in a worldview that contradicts reality ... lots of Republicans (and some Democrats) do it too.

vjvjvjvjghv 16 hours ago [-]
Democrats are mostly winning because the republicans have totally lost it, not because they are bringing forward a political vision that makes sense. I guess that’s where we are.
inigyou 16 hours ago [-]
And after 4 to 8 years of Democrats running things and nothing improving, the people vote Republican just in case it's better. It keeps happening. It's the circle of life!
AuthAuth 15 hours ago [-]
People only think nothing improved because thats what Republicans are saying. Anyone even mildly politically informed can see the progress that happens under Democrat leadership.
inigyou 15 hours ago [-]
Progress such as...?
newAccount2025 15 hours ago [-]
Sadly apt. Democrats don’t make progress fast enough, while Republicans pull us backwards on vaccines, diversity, environment, abortion, healthcare, global prominence, naked corruption, oligarchy, theocracy, and military oppression.
1024core 16 hours ago [-]
Local county races and dog catcher races do not matter. What matters is who occupies 1600 Pennsylvania Avenue. That is the only race that counts.
dabockster 16 hours ago [-]
False. Local races directly determine the day-to-day laws and rules you live under way more than a POTUS could effectively decree. I don't know about you, but I sure enjoy having reliable electrical, water, and sewer systems.
esafak 15 hours ago [-]
They have that in Saudi Arabia too but I would not want to live there. Set higher standards.
scottyah 15 hours ago [-]
This is, absolutely, in my mind the opinion that has done the most damage to this country. If people hadn't abandoned politics that affect them at every level for a celebrity Super Bowl-type show, we wouldn't have this circus of Presidential campaigns.
vjvjvjvjghv 16 hours ago [-]
House and Senate are probably more important than the president.
jasondigitized 15 hours ago [-]
That's just not true. If you live in Texas or California or wherever, your governor, state reps, judges, etc. are all going to affect you far more than the President.
idiotsecant 15 hours ago [-]
So wildly inaccurate. If you disconnect yourself from the cable news outrage pornography cycle, you'll find most things that actually impact you happen at the state and local level. A lot of spooky things on the TV to be afraid or mad about, but for the average person there is vanishingly little real effect.
cogman10 16 hours ago [-]
Dems have lost to Trump twice and it looks like they want to run the same campaign strategies in future elections. They are relying too heavily on "trump bad" to win and I worry about what that will ultimately result in down the line.
cthalupa 15 hours ago [-]
This is a statement you can make.

It's also a statement entirely divorced from reality when you look at the fact that those winning candidates are not in fact doing that, and neither are the candidates that are getting the most national attention like Talarico.

Newsom has a vested interest in making it sound like he's the maverick here that knows the special formula, but it's been obvious to damn near everyone that they couldn't run out the same losing playbook.

cogman10 15 hours ago [-]
> neither are the candidates that are getting the most national attention like Talarico

It's a pretty close race, with some recent polling indicating that Crockett will win the primary. Impossible to tell though. I clock her as ultimately being a more traditional Democrat, policy-wise.

I'd expect she or Talarico has a good shot at winning in TX. They both have the potential to pivot to a more traditional position in the general election.

My main concern is the current elected leaders of the democrats and how the incoming dems view them. Frankly, if a candidate isn't saying "we need to oust Schumer/Jeffries" then I take that as a pretty decent signal that they align close enough with the moderate position to worry me about the future party.

I worry about the actions of the dems after election. I think they'll win the midterms, maybe even take the senate. I even think there's a good shot that they win the 2028 presidential election. The problem is that I think they'll run a Biden-style presidency and future campaigns once they get in power. That will set up republicans for an easy win in 2030 and 2032.

cthalupa 14 hours ago [-]
I'm a Texan so I'm following this pretty closely. I slightly prefer Crockett to Talarico, but I voted for him in the primary because I think he's got a significantly better shot to win.

Texas is going to need moderate and centrist votes to swing blue - we're not making the state more liberal at a rate that is gonna hand either of them a victory. Both are actually fairly progressive. But Talarico is a lot better at selling those progressive values to everyday people. The hispanic vote is one of the biggest factors in Texas, and while they're obviously not a monolith, culturally a lot of them have much more mixed social values than other voting demographics. Statistically, way more likely to be heavily religious, and that's at odds with a lot of the social values from more progressive candidates. Talarico effortlessly reframes these issues in a way that aligns with stuff he can directly quote scripture on.

I'm an atheist so I don't care what scripture says on the matter, but it's the sort of thing that plays well with a lot of a key voting demographic that Crockett just can't do.

lovich 15 hours ago [-]
Trump also lost every time he was in a vote against Sleepy Joe Biden. Newsom took a different tack with the redistricting effort instead of "they go low, we go high", but yeah, I am also concerned to see if anyone else in the party actually updates their strategies for our current era instead of pre-2008 politics.
cthalupa 15 hours ago [-]
If Democrats actually knew how to message on what they accomplished instead of letting the other side control the narrative and refocus everything on to fringe issues that only the fringe of the party cares about, as well as matching every Biden brain fart/stutter/"senior moment" with the equivalents from Trump, I suspect a Biden vs. Trump rematch would have been a Biden victory.

But they suck at that. And when they failed to convince Biden to drop out early, they should have stuck with him and just ran hard on actual accomplishments during the admin. But Harris was a last minute pivot and it showed. I think she would have been perfectly fine as a president, and I voted for her, but not surprised in the slightest that she lost - and I expected her to lose bigger than she did.

The fact that Trump couldn't even get half the popular vote when running against a last minute ticket change that was never selected to be the presidential candidate by the party she was representing is a pretty big indictment of how unpopular he really is.

I think there's been learning that you can't just be "not Trump", but yeah - I don't know that the party in general has any idea how to handle messaging and narratives.

lovich 14 hours ago [-]
Agree with you on their failure of messaging; Biden was the most progressive President since Carter, and I only limit myself to that because I am not as well versed in history before that point.

Yet somehow the progressives found him more unpalatable than the MAGAs if you look at people like Brianna Gray and Jill Stein.

It’s too far out for me to say I will definitively vote for Newsom, but so far he’s the only Democrat who’s started throwing hands, both legislatively and on social media.

I hope the dems figure out how to do more of that, and better, instead of returning to shit like the October shutdown and exchanging leverage for pinky promises from Mr. John “I am an obligate pinky promise liar” Republican.

cogman10 16 hours ago [-]
In a nutshell, this is the problem with mainstream dems (and I include Newsom in this): looks and appearance matter a lot more than actual policy leadership.

On the policies that actually affect people's lives, there's a lot of overlap between mainstream dems and republicans.

I live in Idaho, and school teachers here are also extremely underpaid (my kid's teachers all have second jobs). Yet our state has magically found $40M to give away to private schools while also asking the public schools to find 2% of their budgets to cut.

I think in both cases the solution is simple: give the teachers a raise, and probably raise taxes to pay for it. However, both parties are fairly averse to the "raise taxes" portion of the message, so they instead look for other dumb flashy one-time things they can do.

Federal democrats have relied way too heavily on Republicans being a villain and vague "hope and change" promises to carry them through an election cycle. They need to actually "change" things and not just maintain the status quo when they get power.

jatari 16 hours ago [-]
The Democrats are currently overwhelming favourites to win the House with a decent chance of also winning the Senate in the 2026 midterms and strong favourites to win the 2028 presidency.

I'm not sure why you think they are doomed.

XorNot 16 hours ago [-]
Fox news is going to talk about trans people a lot is the thing. Journalists will turn up to press conferences about anything and ask about trans people. Any response at all will be all that appears on TV.

Last election cycle the "niche issues" people complain about were overwhelmingly talked about more by people saying they opposed them.

Controlling the narrative is very easy when you have a cowardly or bought media, and plan to traffic in rage and clickbait.

jasondigitized 15 hours ago [-]
Trans is so last year. People have moved on.
marcus_holmes 15 hours ago [-]
It's interesting that in the UK the traditional two-party system is broken, because everyone realises that both of the traditional parties have been bought by rich folk and business interests, only serve their own interests, and can't be trusted any more. The main contenders now are Reform and The Greens, a situation that no-one predicted five years ago.

The same is true in Australia, though there's no charismatic left-wing leader emerging, and the Farage-equivalent is a laughing stock who struggles to be coherent at times. But because of billionaire money, she's still up there on the polls.

The US system makes it much harder for new parties to form, so it's probably going to be factions in the existing parties. And, of course, MAGA is the new faction in the Republican party; effectively a new party itself. So the ground is fertile for a new left-wing faction in the Democrat party to rise.

vjvjvjvjghv 16 hours ago [-]
Yeah. They really are trying hard to lose.
ypeterholmes 14 hours ago [-]
The current situation in the US is the depressing thing- articles like this give me hope. Real Americans aren't having these BS authoritarian violations of our constitutional rights.
eisfresser 9 hours ago [-]
> mass __domestic__ surveillance is incompatible with democratic values

But mass surveillance of Australians or Danes is aligned with democratic values as long as it's the Americans doing it?

I don't think the moral high ground Anthropic is taking here is high enough.

mosst 3 hours ago [-]
Most of the people on this site have disturbing beliefs about politics. Shallow and contradictory but strangely aligned.
mocamoca 3 hours ago [-]
Yes, most comments make no sense to me. The statement basically both allows surveillance of non-American people and prevents imaginary LLM weapons (I highly doubt we'll see an LLM fully automating a weapon...)
sneak 3 hours ago [-]
There is no popular support whatsoever for reining in foreign intelligence collection or processing. Americans generally don’t care about things that don’t affect them when it comes to policymaking (or the richest country in the world would do something meaningful about the 20k that die every single day from lack of access to fresh water).

If it ain’t repeatedly on the news and designed explicitly to scare and agitate then really people DGAF.

mocamoca 3 hours ago [-]
Something feels off about this announcement. Anyone else?

Credit where it's due, going on record like this isn't easy, particularly when facing pressure from a major government client. Still, the two limits Anthropic is defending deserve a closer look.

On surveillance: the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed. On autonomous weapons: realistically, current AI systems aren't anywhere near capable enough to run one independently. So that particular line in the sand isn't really costing them much.

What I find more candid is actually the revised RSP. It draws a clearer picture of where Anthropic's oversight genuinely holds and where it starts to break down as they race to stay at the cutting edge. The core tension, trying to be simultaneously the most powerful and the most principled player in the room, doesn't have a neat resolution.

This statement doesn't offer one either. But engaging with the question openly, even without all the answers, beats silence and gives the rest of us something real to push back on.

Peroni 2 hours ago [-]
>the carve-out only protects people inside the US. Speaking as someone based in Europe, that's a detail that doesn't go unnoticed.

I'm not sure an American company prioritising the privacy of American people is worth questioning. As a European, Anthropic are very low on the list of companies I worry about in terms of the progressive eradication of my privacy.

mocamoca 2 hours ago [-]
Agreed. That said, Anthropic's original pitch was about embedding safety at the foundational level of the 'model' (acknowledging that a model is more than just its weights).

If the safeguard against mass surveillance is strictly tied to geolocation (US vs. non-US), it can't be an intrinsic property of the model. It has to be enforced at the API or contractual level. This means international users are left out of those core, embedded protections. Unless Anthropic is planning to deploy multiple, differently-aligned foundation models based on customer geography or industry, the safety harness isn't really in the model anymore.

mosst 3 hours ago [-]
They surveil us to make sure that we stay productive and democratic, why do you object? Are you alleging bad intentions? Are you a Russian bot?
ssrshh 6 minutes ago [-]
This is quite the PR stunt. Tech companies can't stop copying Apple
kace91 16 hours ago [-]
As someone who is potentially their client and not domestic, really reassuring that they have no concerns with mass spying on peaceful citizens of my particular corner of the world.
mwigdahl 16 hours ago [-]
Take your pick from the many other choices offered by companies that don't care about mass spying on _anyone_.
Quarrelsome 5 hours ago [-]
I thought we were the allies and looked down on powerful secret police. Like the Nazis or the Soviets. Did we lose those wars?
FartyMcFarter 2 hours ago [-]
The US is no longer a reliable ally to Europe. Look at the threats against Greenland.

I hope the next few elections change this, but right now that's how things are.

pamcake 11 hours ago [-]
Or don't.
drcongo 4 hours ago [-]
The US is already doing that though.
zug_zug 14 hours ago [-]
Is there a different AI company that IS taking that stance?

Because as far as I know, Anthropic is taking the most moral stance of any AI company.

ryukoposting 9 hours ago [-]
All the Chinese companies publishing open models that I can run on my own steel?
bamboozled 13 hours ago [-]
I can imagine that this will be the logical conclusion for many companies. I thought the same thing too: if it's too hard in the USA, they will just move.
nkoren 16 hours ago [-]
This makes me a very happy Claude Max subscriber.

Finally, someone of consequence not kissing the ring. I hope this gives others courage to do the same.

manmal 9 hours ago [-]
As a European user, I‘m not happy at all. I can’t fail to notice that non-domestic mass surveillance is not excluded here. I won’t cancel my account just yet because Opus is the best at computer use. But as soon as Mistral catches up and works reasonably well, I‘ll switch.
mosst 3 hours ago [-]
If you don't cancel your account now, I don't see what your problem is. Isn't it standard practice for allies to spy on each other? No reason to wait for Mistral to catch up when EU foreign policy already sealed the deal.
w4yai 8 hours ago [-]
Go Mistral !
bicx 15 hours ago [-]
They already kissed the ring, just not the asshole. They have a little dignity left.
jimmydoe 14 hours ago [-]
Better than the rest. here's $200, Dario!
bigyabai 14 hours ago [-]
This is how we bought Tim Cook the gold trophy. Today's fundraising buys tomorrow's tithe.
RyanShook 13 hours ago [-]
The whole article reads as virtue signaling to me. Anthropic already has large defense contracts. Their models are already being used by the military. There's really no statement here.
reasonableklout 10 hours ago [-]
How is it virtue signalling when sticking by these principles risks their entire business being destroyed by either being declared a supply chain risk or nationalized?
noelsusman 11 hours ago [-]
The notion that it's bad to signal virtue is one of the crazier propaganda efforts I've seen over the last 20 years or so.
manmal 9 hours ago [-]
It’s a manipulative tactic. Businesses have no soul and no conscience.
beanshadow 6 hours ago [-]
It's arguable that businesses are subject to the same morality-inducing processes that humans are. For example, as a human (with a soul?), what is at risk when we do something immoral? I see it as a reputational cost, at the highest level. Morality could be viewed from the perspective that it increases predictability/coherence in society (generates less heat).
manmal 3 hours ago [-]
If societal feedback is the only thing keeping a human from deviating in catastrophic ways, that’s what we call a sociopath.
MattRix 2 hours ago [-]
The humans working there do. To state otherwise is to absolve those humans of any responsibility.
TOMDM 11 hours ago [-]
A company being asked to violate their virtues refuses, and then communicates that to reestablish their commitment to said virtues?

Tell me more about what they should do if a virtue signal in such a situation is a nothing statement.

fragmede 13 hours ago [-]
Isn't it nice to have virtues to signal though? In saying that, you're saying you don't have any worth signaling over.
flufluflufluffy 11 hours ago [-]
Not when your actions don’t align with your professed virtues.
Keyframe 8 hours ago [-]
I wonder if this might be a setup by competition. Certainly looks like one.
exodust 11 hours ago [-]
I read the statement twice. I can't understand how you landed on "take my money".

Looks like an optics dance to me. I've noticed a lot of simultaneous positions lately, everyone from politicians and protesters, to celebrities and corporations. They make statements both in support of a thing, and against that same thing. Switching up emphasis based on who the audience is in what context. A way to please everyone.

To me the statement reads like Anthropic wants to be at the table, ready to talk and negotiate, to work things out. Don't expect updated bullet-point lists about how things are worked out. Expect the occasional "we are the goodies" statements, however.

alangibson 17 hours ago [-]
It's not named the Department of War because Congress didn't rename it.

Other than that, good on ya.

fluidcruft 16 hours ago [-]
It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.
epistasis 15 hours ago [-]
It's actually a good thing to point out, because it shows that those people are out of control and exceeding their authority, and need to be reined in.

No need to die on the hill, but point out that there's a consistent pattern of lawless power-grabbing.

0xbadcafebee 12 hours ago [-]
> it shows that those people are out of control and exceeding their authority

No, the concentration camps and gangs of masked thugs violating civil rights are that sign. Threatening to treat a domestic private corporation like an enemy combatant during peacetime for not immediately caving to military demands is that sign. Trying to take over the Federal Reserve, the Federal Trade Commission, and the Nuclear Regulatory Commission, is that sign. The Executive attempting to freeze funds issued by Congress for partisan reasons is that sign.

Department of War is just little boys being trolls.

epistasis 10 hours ago [-]
The action of a failed rebrand belongs to the Department of Defense, and is indeed an example of exceeding their authority. It was not DoD that is trying to take over the Fed, the FTC, or the NRC, so those examples don't work against Hegseth here.
fluidcruft 9 hours ago [-]
This is like picketing Auschwitz with placards complaining that the "National Socialists" aren't socialists.
asdff 7 hours ago [-]
Well, who is going to rein them in?
Hnrobert42 13 hours ago [-]
You're talking about an administration that barred the AP from press briefings because they didn't call it the Gulf of America. This is not a bikeshed.
LastTrain 13 hours ago [-]
I wouldn’t call a brief comment on the matter dying on a hill fcs
fluidcruft 12 hours ago [-]
Commenting on the matter just makes it easier for the media to yap about Anthropic being "woke" rather than focusing on the Department of War's demands.
throw0101c 13 hours ago [-]
> It's really not the right thing to be bikeshedding. The people calling the shots call themselves the Department of War. No need to die on hills that don't matter.

From the first chapter of the book On Tyranny by Timothy Snyder, an historian of Central and Eastern Europe, the Soviet Union, and the Holocaust:

> Do not obey in advance.

* https://timothysnyder.org/on-tyranny

* https://archive.org/details/on-tyranny-twenty-lessons-from-t...

* https://en.wikipedia.org/wiki/Timothy_Snyder

garciasn 16 hours ago [-]
TIL of Bikeshedding, or Parkinson’s Law of Triviality.

Defined as the tendency for teams to devote disproportionate time and energy to trivial, easy-to-understand issues while neglecting complex, high-stakes decisions. Originating from the example of arguing over a bike shed's color instead of a nuclear plant's design, it represents a wasteful focus on minor details.

https://en.wikipedia.org/wiki/Law_of_triviality

---

I deal with this day in and day out. Thank you for informing me of the word that describes the laughable nightmares I deal with on the regular.

baq 9 hours ago [-]
Get a prop with difficulty/importance quadrants and silently tap the sign in meetings
helaoban 16 hours ago [-]
It SHOULD be called the Department of War, as it was originally, since it makes its function clear. We are a society that has euphemized everything and so we no longer understand anything.
elicash 12 hours ago [-]
It's a funny thing that the most war-loving people and the most peace-loving people both love calling it "Department of War" - just for different reasons.

But the reason for "Department of Defense" name was bureaucratic. It's also not true that DOD is hard to understand.

mpyne 14 hours ago [-]
The Department of the Army is what was previously called the Department of War. The Department of Defense is new, dating to just after WWII.
helaoban 11 hours ago [-]
Pedantry.

The Department of War was responsible for naval affairs until The Department of the Navy was spun off from it in 1798, and aerial forces until the creation of the The Department of the Air Force in 1947, whereafter it was left with just the army and renamed the Department of the Army. All three branches were then subordinated to the new Department of Defense in 1949, which became functionally equivalent to the original entity.

The Department of War is what it was called when it was first created in 1789 by the Congress (establishing the department and the position of Secretary of War), the predecessor entity being called the The Board of War and Ordnance during the revolution.

The Department of "Defense" has never fought on home soil. Ever.

scottyah 15 hours ago [-]
Doublespeak, so to speak.
greycol 14 hours ago [-]
Naming is important because it shapes what we intuitively expect a thing to do. The Department of Defense invading Greenland invites more inquiry than the Department of War invading Greenland, because invading is what a department of war would do.

It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns: it highlights that they should be putting mental effort into understanding why their current mental model doesn't fit. It's much easier to ignore and be comfortable if there aren't glaring sirens saying you've got some learning to do.

Most of us can't (or won't) be aware of everything that should be important to us, so having glaring context clues that we should take notice of something incongruous is important. It's also why the Trump media approach works so well: it's basically a case of alarm fatigue, as Republicans who would normally side against any particular one of his actions don't listen, because they agreed with some of the actions that Democrats previously raised alarms about.

alt187 8 hours ago [-]
> It's one of the reasons why people get annoyed at jargon or are pissed off about pronouns, [...]

It's worth noting there's an overabundance of legitimate reasons people get annoyed at these two things, making them bad examples.

63 16 hours ago [-]
While I agree the name change has not (yet) been made with the proper authority, I'm quite partial to the name and prefer to use it despite its prematurity. I think it does a better job of communicating the types of work actually done by the department and rightly gives people pause about their support of it. Though I'm sure that wasn't the administration's intention.
inigyou 16 hours ago [-]
[flagged]
esafak 15 hours ago [-]
Brevity.
scottyah 15 hours ago [-]
That's a separate department; DoE actually controls the nukes.
dragonwriter 14 hours ago [-]
DoD controls them when they are actually going to be used; DoE is only responsible for securing and maintaining them so they are ready for use.
tempestn 12 hours ago [-]
The name is extremely off-putting, but I can see how they would want to be diplomatic toward the administration in using their chosen name. Save the push-back for where it really matters.
hirako2000 16 hours ago [-]
But it sets the tone.
henrikschroder 16 hours ago [-]
Of appeasement and bootlicking, yes.
peyton 15 hours ago [-]
Dude we had an election and this is what we’re doing. Maybe that’s not how you do things in the Kingdom of Sweden. Here it’s e pluribus unum.
hirako2000 15 hours ago [-]
There is a good share of collusion in Europe too; let's keep all continents open to criticism. Elections don't imply unlawful dictates and corruption.
1024core 16 hours ago [-]
It's addressed to Hegseth, who insists on calling it that.

If they had called it DoD, then that would have been another finger in his eye.

garciasn 16 hours ago [-]
Remember, this is the same administration that barred the AP from the Oval Office because they wouldn't rename the Gulf of Mexico. https://www.theguardian.com/us-news/2025/feb/11/associated-p...

While this action may indeed cause the DoD to blacklist Anthropic from doing business w/the government, they probably were being as careful as they could be not to double down on the nose-thumbing.

moogly 16 hours ago [-]
This. They even put a "wArFiGhTers" in there.
furyofantares 15 hours ago [-]
I don't think it's addressed to Hegseth, but to anyone who might be sympathetic to Hegseth. Which I think actually strengthens your point, the goal appears to be to make it so the only possible complaint with the letter for someone sympathetic to the administration is "but mass domestic surveillance / fully autonomous weapons are legal" and not "look at this lunatic leftist who calls it the department of defense".
inigyou 16 hours ago [-]
Maybe this is the DoW Pam Bondi was referring to.
ReptileMan 16 hours ago [-]
Less hypocritical than Defense. The US has never been on the defense, always on the offense, since it was renamed in 1947.
dragonwriter 14 hours ago [-]
The Department of Defense was named in 1949, not 1947, and the thing that it was renamed from was the National Military Establishment, which was newly created in 1947 to be put over the two old military departments (War, which was over the Army only, and Navy, which was over the Navy including the Marine Corps)

At the same time as the NME was created, the Army was split into the Army and Air Force and the Department of War was also split in two, becoming the Department of the Army and the Department of the Air Force.

nrb 15 hours ago [-]
Often offensive and also often defensive of others... so if renaming is on the table, it's probably most apt to call it the Dept of Security, since the vast majority of what it does is maintain the security umbrella that has helped suppress world war since the last one. Of course, facts or opinions on whether it succeeds on the security front depend on which side of the umbrella you're on.
curiousgal 6 hours ago [-]
And losing at that offense while at it.
ReptileMan 3 hours ago [-]
USA has never lost a war so far. They just ... get bored and leave eventually.
krapp 16 hours ago [-]
It is called the Department of War because we live under fascism and Congress no longer matters.

All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

FrankBooth 14 hours ago [-]
Those of us with a firm grip on reality do not currently live under fascism.
wyre 14 hours ago [-]
Help me understand how a firm grip on reality tells you that living in America is not fascism? It's definitely checking the boxes.
redwall_hp 12 hours ago [-]
Basically all of Eco's Ur-Fascism boxes are checked. And he'd know, having lived under Mussolini's regime. https://en.wikipedia.org/wiki/Ur-Fascism
throwaway76375 3 hours ago [-]
He was 11 when Mussolini's government fell...
dumpsterdiver 16 hours ago [-]
> All that matters is that everyone calls it the Department of War, and regards it as such, which everyone does.

What you just described is consensus, and framing it as fascism damages the credibility of your stance. There are better arguments to make, which don’t require framing a label update as oppression.

krapp 16 hours ago [-]
I'm not framing consensus as fascism, I'm pointing out what the consensus is within the current fascist framework, and that consensus is that Congress doesn't make the rules anymore. And that consensus is shared by Congress itself.
scottyah 15 hours ago [-]
So anyone who doesn't mind the name going back to DoW is fascist?
krapp 15 hours ago [-]
No.
RIMR 16 hours ago [-]
The president has no authority to rename the Department of Defense, but he and his administration demand consensus under the threat of legal consequences.

Just as one example, they threatened Google when they didn't immediately rename the Gulf of Mexico to the "Gulf of America" on their maps. Other companies now follow their illegal guidance because they know that they will be threatened too if they don't comply.

There is a word for when the government uses threats to enforce illegal referendums. That word is "Fascism". Denying this is irresponsible, especially in the context of this situation, where the Government is threatening to force a private company to provide services that it doesn't currently provide.

drstewart 16 hours ago [-]
[flagged]
inigyou 16 hours ago [-]
It means something violates the law. Am I right?
drstewart 16 hours ago [-]
[flagged]
OkayPhysicist 15 hours ago [-]
Renaming the DoD does directly contradict the National Security Act of 1947, which renamed the Department of War to the Department of the Army, and put it under the newly named Department of Defense.
drstewart 6 hours ago [-]
Cool.

No renaming happened though.

By the way, your illegal use of the term "DoD" to refer to the Department of Defense is pretty shocking. This isn't authorized by the Act of 1947.

freeone3000 15 hours ago [-]
The National Security Act of 1947, as amended on August 10, 1949, establishes the name of the executive department overseeing the military as the Department of Defense.
drstewart 6 hours ago [-]
Great.

Where does it prohibit alternative names?

ok_dad 14 hours ago [-]
Someone with 1200 points after 14 years on HN shouldn’t be pointing out green noobs, especially when they are being very reasonable with their comments and you’re objectively wrong.

You used “green account” like a slur.

drstewart 6 hours ago [-]
No, I should point out new accounts that are objectively wrong that are trying to stir up division and hate.

As should you, if you weren't in a similar position to them. Which it seems like you are?

jibal 16 hours ago [-]
Being honest increases credibility, not damages it.

> framing a label update as oppression

That strawman damages credibility.

vibeprofessor 16 hours ago [-]
true, if everything is 'fascism' then nothing is
thatswrong0 15 hours ago [-]
https://archive.ph/YSAWU

Except this administration is certainly fascist, and the renaming is yet another facet of it. That article goes through it point by point.

vibeprofessor 15 hours ago [-]
[flagged]
virgildotcodes 14 hours ago [-]
This is all such wild display of fully absorbed propaganda, even your very first bullet point, just... incredible:

> Dismantling government bureaucracy/corruption

Trump has done more to benefit financially from the presidency, to offer access and influence to anyone who will funnel money into his enterprises or give him gifts, than any president in our history.

How could you possibly write this in good faith? When Trump said he could shoot a person on 5th avenue and people would still vote for him, do you recognize yourself at all in that statement?

vibeprofessor 13 hours ago [-]
[dead]
alpaca128 14 hours ago [-]
So I take it you consider them not doing great at "releasing the Epstein files", or did you just not vote for that?
alchemism 14 hours ago [-]
[flagged]
zimza 14 hours ago [-]
[flagged]
noosphr 14 hours ago [-]
And what if Congress renames it tomorrow? They have the votes. These sorts of procedural gotchas are as stupid as they are boring.
dragonwriter 14 hours ago [-]
> And what if congress renames it tomorrow?

Then tomorrow it will be the Department of War. Just as when Congress voted to split the old Department of War into the Department of the Army and the Department of the Air Force, and to take both of those and the previously-separate Department of the Navy under a new National Military Establishment led by the newly-created Secretary of Defense (and when it later voted to rename the NME the “Department of Defense”), things changed in the past.

> They have the votes.

Perhaps, but the law doesn't change because the votes are in a whip count on a hypothetical change, it changes because they are actually cast on a bill making a concrete change.

justin66 3 hours ago [-]
This is a willfully ignorant misreading of what's actually going on. They've decided to use the "Department of War" moniker in part because they think it sounds cool, but more significantly because it demonstrates they can break the law with impunity. Hence, there has not been a vote on the matter.
noosphr 2 hours ago [-]
What law?
rekrsiv 1 hours ago [-]
It is still called the Department of Defense.
StephenSmith 39 minutes ago [-]
I find this language fascinating. On one hand, the Department of "War" gives the department an underlying, unspoken goal: it should be involved in war with something. On the other hand, it's very easy to fund the Department of "Defense"; of course we need more money to defend our country. Don't we want to be safe? It's much less attractive to fund the Department of "War".
bambax 9 hours ago [-]
> These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.

Nicely put. In other words: Department of Morons.

newtonsmethod 5 hours ago [-]
Are you reading things before agreeing with them? Or thinking about them? It doesn't seem obvious these things are contradictory at all. That Politico reports so doesn't make it the case.

It is clear that the DPA can be invoked for companies posing risks to national security:

> On October 30, 2023, President Biden invoked the Defense Production Act to "require that developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government" when "developing any foundation model that poses a serious risk to national security, national economic security, or national public health."

Furthermore, it should be quite obvious that companies very important for national security can act in manners causing them to be national security risks, meaning a varied approach is required.

bambax 5 hours ago [-]
> Are you reading things before agreeing with them?

No, unlike yourself, I'm just a random brainless bot.

QuiEgo 12 hours ago [-]
I'd be amused beyond all reason if we saw this chain of events:

- Anthropic says "no"

- DoD says "ok you're a supply chain risk" (meaning many companies with gov't contracts can no longer use them)

- A bunch of tech companies say "you know what? We think we'd lose more money from falling behind on AI than we'd lose from not having your contracts."

Bonus points if its some of the hyperscalers like AWS.

Hilarity ensues as they blow up (pun intended) their whole supply chain and rapidly backtrack.

stevenpetryk 11 hours ago [-]
Being labeled a supply chain risk means that companies with government contracts cannot use Anthropic products _for those government contracts_, not that they have to cease all usage of Anthropic products. Reporters seem to be reporting on this incorrectly.
QuiEgo 11 hours ago [-]
Thank you for the information. My fun little narrative is in shambles :(
baq 9 hours ago [-]
Not really, actually. This usually means an outright ban, because per-project restrictions are next to impossible to enforce internally.
ryukoposting 9 hours ago [-]
This is correct. Maybe the startups living off DARPA/MTEC/etc contracts would continue using Claude, but the LM/NOG/Collins types wouldn't touch Anthropic with a ten foot pole.
zb1plus 13 hours ago [-]
It would be hilarious if the Europeans got everyone visas and gave some kind of tax benefit to Anthropic and poached the entire company.
skeptic_ai 12 hours ago [-]
USA would bomb their country before any visa is approved
tintor 10 hours ago [-]
lol
atleastoptimal 16 hours ago [-]
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

Synaesthesia 15 hours ago [-]
AI was always particularly well suited to military use and mass surveillance. It can take huge amounts of raw data, parse it for you, and provide useful information from it. And let's face it, companies exist for profit.
scottyah 15 hours ago [-]
True, and that has been going on for a while now. But what does that have to do with Anthropic's genai chatbots with comparatively tiny context windows?
Synaesthesia 13 hours ago [-]
I thought Anthropic had sophisticated AI, but I am not an expert.
hiAndrewQuinn 8 hours ago [-]
Anthropic cares first and foremost about extinction risk. This is not what everyone who professes to care about human welfare thinks should be at the top of the priority list. See e.g. the Voluntary Human Extinction Movement for an example of a humanistic approach to letting humanity die off with no replacement.

One of the most challenging problems in AI safety re/ x-risk is that even if you can get one country to do the right thing, getting multiple countries on board is an entirely different ballgame. Some amount of intentional coercion is inevitable.

On the low end, you could pay bounties to international bounty hunters who extract foreign AI researchers, in a manner similar to the FBI's most wanted list, and let AI researchers quickly do the math and realize there are a million other well-paid jobs that don't come with this flight risk. On the high end you can go to war and kill everyone. Whatever gets the job done.

Either way, if you want to win at enforcing a new kind of international coercion, you need to be at the top of the pack militarily and economically speaking. That is the true goal here, and I don't think one can make coherent sense out of what Anthropic is doing without keeping that in the back of their mind at all times.

presentation 14 hours ago [-]
So your stance is that anything military-related is immoral?
dheera 15 hours ago [-]
> opted to sell priority access to their models to the Pentagon

The bottom of all of this is that companies need to profit to sustain themselves. If "y'all" (the users) don't buy enough of their products, they will seek new sources of revenue.

This applies to any company who has external investors and shareholders, regardless of their day 0 messaging. When push comes to shove and their survival is threatened, any customer is better than no customer.

It's very possible that $20 Claude subscriptions aren't delivering on multiple billions in investment.

The only companies that can truly hold to their missions are those that (a) don't need to profit to survive, e.g. lifestyle businesses of rich people, or (b) are wholly owned by their owners and employees and have no fiduciary duty.

claud_ia 1 hours ago [-]
The framing around AI autonomy in national security contexts is genuinely new territory. What's interesting from an agent design perspective is the underlying question: how much should an AI system push back on institutional structures vs. defer to human oversight chains? The soul spec approach -- where the AI internalizes safe behavior rather than just following rules -- might be more relevant here than it first appears.
GreenJacketBoy 8 hours ago [-]
"fully autonomous weapons" from a private company; "Department of War". Hard to believe I'm not reading science fiction.
moffkalast 4 hours ago [-]
Service guarantees citizenship, would you like to know more?
kevincloudsec 3 hours ago [-]
amodei's autonomous weapons argument isn't political. it's an engineering assessment. if frontier models hallucinate in conversation, they'll hallucinate in targeting. you don't deploy unreliable systems where the cost of a false positive is a missile.
danbrooks 16 hours ago [-]
Props to Dario and Anthropic for taking a moral stand. A rarity in tech these days.
janalsncm 16 hours ago [-]
Agreed. You don’t have to be an LLM maximalist or a doomer to see the opportunity for real, practical danger from ubiquitous surveillance and autonomous weapons. It would have been extremely easy for Dario to demonstrate the same level of backbone as Sam Altman or Sundar Pichai.
Computer0 16 hours ago [-]
There is no moral leg to stand on here; he says in plain English that if they wanted to use Claude to perform mass surveillance on Canada, Mexico, the UK, or Germany, that would be perfectly fine.
sfink 15 hours ago [-]
This is a public note, but directed at the current administration, so reading it as a description of what is or is not moral is completely missing the point. This note is saying (1) we refuse to be used in this way, and (2) we are going to use "mass surveillance of US citizens" as our defensive line because it is at least backed by Constitutional arguments. Those same arguments ought to apply more broadly, but attempts to use them that way have already been trampled on and so would only weaken the arguments as a defense.

If it helps: refusing to tune Claude for domestic surveillance will also enable refusing to do the same for other surveillance, because they can make the honest argument that most things you'd do to improve Claude for any mass surveillance will also assist in domestic mass surveillance.

buzzerbetrayed 15 hours ago [-]
Perhaps you just have different moral values? I suspect each of the countries you mentioned spy on us. I also suspect we spy on them. I’m glad an American company wouldn’t be so foolish as to pretend otherwise.
Computer0 12 hours ago [-]
Are we gods chosen people or something that we are the only ones undeserving of mass surveillance? Are you implying that morality depends on citizenship to a particular state?
hungryhobbit 16 hours ago [-]
[flagged]
weakfish 16 hours ago [-]
This comment breaks site rules
knfkgklglwjg 16 hours ago [-]
[flagged]
dddgghhbbfblk 16 hours ago [-]
A moral stand? ... What? Did we read the same statement? It opens right out of the gate with:

>I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

>Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community. We were the first frontier AI company to deploy our models in the US government’s classified networks, the first to deploy them at the National Laboratories, and the first to provide custom models for national security customers. Claude is extensively deployed across the Department of War and other national security agencies for mission-critical applications, such as intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.

which I find frankly disgusting.

adastra22 16 hours ago [-]
Freedom isn’t free. Someone has to defend the democratic values that you and I take for granted.

Dario’s statement is in support of the institution, not the current administration.

cwillu 15 hours ago [-]
The democratic values I take for granted are under direct threat from the US. Your government is literally funding separatist movements in my country.
jackp96 15 hours ago [-]
I mean, obviously.

But when was the last time our "democratic values" were under attack by a foreign country and actually needed defending?

9/11? Pearl Harbor?

Maybe I'm missing something. We have a giant military and a tendency to use it. On occasion, against democratically elected leaders in other countries.

You're right; freedom isn't free. But foreign countries aren't exactly the biggest threats to American democracy at the moment.

adastra22 15 hours ago [-]
You have the causality at least partially backwards. Why has it been so long and infrequent that the US has been in direct conflict with authoritarian adversaries? Because we have a giant military and a willingness to use it. Pacifism and isolationism do not work as defensive strategies.
kylestanfield 14 hours ago [-]
War is peace.
adastra22 13 hours ago [-]
Game theory is real.
14 hours ago [-]
DiogenesKynikos 15 hours ago [-]
The last time the US defended freedom through military means was WWII.

As Abraham Lincoln said, the greatest threat to freedom in America is a domestic tyrant, not a foreign army.

adastra22 15 hours ago [-]
Korea, Vietnam, Panama, Grenada, Libya, Lebanon, Iraq War I, Somalia, Haiti, Bosnia, Kosovo, Afghanistan, and Iraq War II were all fought for or over democratic ideals & the defense of democratic institutions.

All were driven by multiple competing and sometimes conflicting goals, and many look questionable in hindsight. It is fair to critique.

But it is absolutely not the case that the last time the US defended freedom through military means was WWII.

blitzar 2 hours ago [-]
> over democratic ideals & the defense of democratic institutions

Corporations, natural resources or getting a blowjob from the intern ... these are neither democratic ideals nor democratic institutions

DiogenesKynikos 7 hours ago [-]
Not a single one of those wars was in defense of freedom and democracy.

I'm not going to go through all of those wars one-by-one, but are you joking with Iraq War II? That war was sold on the lie that Saddam Hussein had weapons of mass destruction and was somehow behind 9/11, by a president who himself had stolen the 2000 election by getting his brother to halt the counting of votes in Florida.

tylerchilds 16 hours ago [-]
I feel like the deepest technical definition of autocratic is “fully autonomous weapons”?
joemi 15 hours ago [-]
They are undeniably taking a moral stand. Among other things, the statement explains that there are two use cases that they refuse to do. This is a moral stand. It might not align with your morals, but it's still a moral stand.
Fricken 15 hours ago [-]
We knew long before AI was a twinkle in Amodei's eye that if it were to be built, then it would be co-opted by thugs.

Anthropic's statement is little more than pageantry from the knowing and willing creators of a monster.

xvector 9 hours ago [-]
You're right, we should never build anything because bad people might try to use it. Everyone that has progressed technology is a monster!
ekianjo 16 hours ago [-]
You know this is pure PR right?
reasonableklout 10 hours ago [-]
If Anthropic is nationalized or declared a supply chain risk tomorrow, will you say the same?
flawn 15 hours ago [-]
What do you mean? You think Hegseth and Anthropic are doing this for PR reasons?
rvz 16 hours ago [-]
[flagged]
ben_w 16 hours ago [-]
For now is all we ever have, unfortunately.

I miss the days when the mega-brands whose work I admired, still did such works.

Qem 16 hours ago [-]
> Anthropic will betray you for a multi-year government contract worth tens of billions of dollars.

What are the odds they will rebrand Misanthropic by then?

ternwer 16 hours ago [-]
So you think we should never support them doing something "positive"? What incentive does that give?
astrange 16 hours ago [-]
Anthropic is a PBC and if they violate the terms of that the shareholders (you) can sue them for securities fraud.
bogzz 16 hours ago [-]
This is not how the word "moral" should be used in a sentence that also has the name Dario Amodei in it.
plaidthunder 16 hours ago [-]
Words are cheap. Actions aren't. Dario Amodei is putting his company on the line for what he believes in. That's courage, character and... yes, morality.
sheikhnbake 16 hours ago [-]
I have a feeling this is just a negotiation tactic leveraging public sentiment rather than a stance based on morality.
tfehring 16 hours ago [-]
It's both - it's clearly at least partly for moral reasons that they're even in the negotiation that they need leverage for.
bogzz 16 hours ago [-]
I am convinced that Amodei's "morality" is purely performative, and cynically employed as a marketing tactic. Time will tell, but most people will forget his lies.
jstanley 16 hours ago [-]
How should he have acted instead?
khazhoux 16 hours ago [-]
Yeah.

“Dario is saying the right thing and doing the right thing and not ever acting otherwise, but I think it’s just performative so I’m still disappointed in him.”

bogzz 16 hours ago [-]
We don't know how the military intended to use Claude, and neither do we know nor does the military know whether Claude without RLHF-imposed safety would have been more useful to them.

Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that Claude is being used in autonomous weapons, which I find almost amusing.

He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.

Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose that then the notion that Amodei is a moral man is comprehensible.

Oh, also the stealing. All the stealing. But he is not alone there by any means.

edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.

astrange 16 hours ago [-]
> to promote his product with the silent implication that LLMs actually ARE a path to AGI

That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.

Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?

ternwer 16 hours ago [-]
His actions seem pretty consistent with a belief that AI will be significant and societally-changing in the future. You can disagree with that belief but it's different to him being a liar.

The $200m is not the risk here. They threatened labelling Anthropic as a supply chain risk, which would be genuinely damaging.

> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.

> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]

... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it adds uncertainty in the brand.

janalsncm 16 hours ago [-]
It’s possible Dario is a bad person pretending to be good and Sundar is a good person only pretending to be bad. People argue whether true selflessness exists at all or whether it’s all a charade.

But if the “performance” involves doing good things, at the end of the day that’s good enough for me.

signatoremo 16 hours ago [-]
Standing up to the US government has real and serious consequences. Peter Hegseth threatened to label Anthropic a supply chain risk, meaning not only would Anthropic likely be dropped as a Pentagon supplier, but it would also risk losing, as customers, companies doing business with the military, such as Boeing or Lockheed Martin. Whatever tactic you think he is employing, that's potentially massive revenue lost, at a time when they need all the business they can get.
chasd00 15 hours ago [-]
Amazon does business with the DOD/W. That’s a pretty dangerous game of brinkmanship Anthropic is playing.
startupsfail 16 hours ago [-]
Don't be evil.
mvkel 16 hours ago [-]
These are literally words. The DoW could still easily exploit these platforms, and nothing Anthropic has done can prevent it, other than saying (publicly), "we disagree."
layer8 15 hours ago [-]
The dispute seems to be specifically about safeguards that Anthropic has in its models and/or harnesses, that the DoD wants removed, which Anthropic refuses to do, and won’t sign a contract requiring their removal. Having implemented the safeguards and refusing their removal are actions, not “literally words”.
mvkel 14 hours ago [-]
The "safeguards" you are referring to are contractual, i.e. words. There are no technical safeguards, per the article.

The memo literally says that the reason they have these policies is -because- actual technical guardrails are not reliable enough.

layer8 3 hours ago [-]
janalsncm 16 hours ago [-]
It’s a contract dispute. Contracts are more than just talk.

While it is true that DoW could try to bypass the contract and do whatever they want, if it were that easy they wouldn’t be asking for a contract in the first place.

mvkel 16 hours ago [-]
Should probably look up how many private companies are suing the government at any one time because of a breach of contract. And that's publicly breaching.

NSA and other three-letter agencies happily do it under cloak and dagger.

janalsncm 11 hours ago [-]
I agree with you that the govt can and does violate contracts. So the fact that they need Anthropic to agree signals that it’s more than just lawyers preventing the DoW from doing whatever they want.
mhitza 15 hours ago [-]
What's the US history around nationalization? Would "confiscation" ever be a likelihood on escalation?

On a quick search I came up with an article that, at least thematically, proposes such ideas about the current administration: "Nationalization by Stealth: Trump's New Industrial Playbook".

https://thefulcrum.us/trump-state-control-capitalism

slg 16 hours ago [-]
Is it morality or is it recognizing that providing the brain of autonomous weapons has a non-zero chance of ending up with him on trial in The Hague?
sebzim4500 16 hours ago [-]
This action is far more likely to land him in prison than complying with the pentagon
slg 16 hours ago [-]
I disagree. There is a class of leaders in this country that is complicit with the administration's use of violence on the tacit understanding that the violence not be directed at them. Arresting one of those people would be an act of desperation that would likely cause the rats to flee the sinking ship. And it isn't even clear if Trump could actually manufacture any charges here. Look at the dropped charges against Mark Kelly and those other politicians as an example. The administration might be able to make up stories to arrest random immigrants and college kids, but they clearly haven't been able to indiscriminately jail powerful political opponents.

Meanwhile, Dario knows his product can't be trusted to actually decide who should live and who should die, so what happens the first time his hypothetical AI killing machines make the wrong decision? Who gets the blame for that? Would the American government be willing to throw him under the bus in the face of international outrage? It's certainly a possibility.

inigyou 16 hours ago [-]
The chance is zero. This won't be deployed in countries that he'd want to visit anyway and would extradite him to The Hague.
mobilefriendly 15 hours ago [-]
In all seriousness The Hague has no jurisdiction over Americans and Congress has already authorized military use of force against Brussels should they ever attempt to prosecute Americans.
verdverm 16 hours ago [-]
It's not so clear the company is actually on the line. They can compel Anthropic to do what it is not willing to do, maybe; this is not the final act. The government needs to respond, to which Anthropic will need to respond, and courts may become involved at that point, depending on whether Anthropic acquiesces or not. Make a prominent statement against it while in the news cycle, and let the rest unfold under less media attention.
davidw 16 hours ago [-]
It's a little bit better than so many sniveling, cowardly elites are doing right now.
contubernio 9 hours ago [-]
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Anthropic has therefore worked proactively to deploy our models to the Department of War and the intelligence community."

The moral incoherence and disconnect evident in these two statements is at the heart of why there is generalized mistrust of large tech companies.

The "values" on display are everything but what they pretend to be.

keybored 7 hours ago [-]
> > I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

These blurbs always mainly communicate that they are in line with US foreign policy. And then one can look at the actual actions rather than the rhetoric of US foreign policy to judge whether it is really in line with defending democracies and defeating autocracies.

freakynit 14 hours ago [-]
Welp, I never thought the show "Person of Interest" would come to life anytime soon, but here we are. In case you haven't watched it, it's time to give it a go. Bear with season 2, though, since things really start to escalate from season 3 onwards. Season 1 is a must though.
LeakedCanary 9 hours ago [-]
The Machine really had this all figured out
freakynit 5 hours ago [-]
Nice to find another fan of this criminally underrated show.

The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

LeakedCanary 1 hours ago [-]
The show is really underrated :D

> The difference was always the "father".. The Machine was raised with a conscience. Samaritan wasn't.

That's what made the show so ahead of its time. Once capability reaches a certain level, it's no longer about intelligence. It's about values. Feels like we're living through that shift now with all the alignment work around LLMs. And it's only going to matter more as capability scales.

freakynit 1 hours ago [-]
Agree 100%.
joseangel_sc 43 minutes ago [-]
good from them, but dario does not miss a beat to hype this tech. llms are perfect for mass surveillance and i want the laws to change to prohibit this, but llms and fully autonomous weapons have very little to share
czierleyn 7 hours ago [-]
Being from Europe I do not like the remark that he only objects to DOMESTIC mass surveillance.
Metacelsus 16 hours ago [-]
I'm glad to see Dario and Anthropic showing some spine! A lot of other people would have caved.
shevy-java 33 minutes ago [-]
> I believe deeply in the existential importance of using AI to defend the United States and other democracies

I do not want to be "defended" by tools controlled by the US government, with or without Trump. But with Trump it is much more obvious now, so I'll pass.

Perhaps AI use will make open source development more important; many people don't want to be subjected to the US software industry anymore. They already control WAY too much - Google is now the biggest negative example here.

asmor 16 hours ago [-]
As a "foreign national", what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US? Don't we know since Snowden that if the US wants to do domestic surveillance they'll just ask GCHQ to share their "foreign" surveillance capabilities?
mquander 16 hours ago [-]
I think it's slightly less ridiculous than it sounds, because governments have much more power over their own citizens. As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.

(That logic breaks down somewhat in the case of explicitly negotiated surveillance sharing agreements.)

bryant 16 hours ago [-]
> because the Chinese government probably isn't going to do anything about whatever they find out.

This really depends. If a foreign adversary's surveillance finds you have a particular weakness exploitable for corporate or government espionage, you're cooked.

Domestic governments are at least still theoretically somewhat accountable to domestic laws, at least in theory (current failure modes in the US aside).

elefanten 15 hours ago [-]
Exactly, and that danger grows as the ability to do so becomes increasingly automated and targeted. It should be very obvious now, looking at the world around us.

Also, failing to consider the legal and rights regime of the attacker is wild to me. Look at what happens to people caught spying for other regimes. Aldrich Ames just died after decades in prison, and that’s one of the most extreme cases — plenty have got away with just a few years. The Soviet assets Ames gave up were all swiftly executed, much like they are in China.

Regimes and rights matter, which is why the democracy / autocracy governance conflict matters so much to the future trajectory of humanity.

collabs 15 hours ago [-]
Yes, exactly this.

> As an American I would dramatically prefer the Chinese government to spy on me than the American government, because the Chinese government probably isn't going to do anything about whatever they find out.

> spy on me

People forget to substitute "my elected representative", "my civil service employee", "my service member", or their loved ones for "me"

I, personally, have nothing significant that a foreign government can leverage against our country but some people are in a more privileged/responsible/susceptible position. It is critical to protect all our data privacy because we don't know from where they will be targeted.

Similarly, for domestic surveillance, we don't know who the next MLK Jr could be or what their position would be. Maybe I am too backward to even support this next MLK Jr but I definitely don't want them to be nipped in the bud.

adastra22 16 hours ago [-]
You’re getting many replies, and having scrolled through most of them I do not see one that actually answers your question truthfully.

The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

There is a strong argument that can be made that using AI to mass surveil Americans within US territory is not only morally objectionable, but also illegal and unconstitutional.

There are laws on the books that allow for it right now, through workarounds grandfathered in from an earlier era when mass surveillance was just not possible, and these are what Dario is referencing in this blog post. These laws may be unconstitutional, and pushing this into a legal fight may result in the Department of War losing its ability to surveil entirely. They may not want to risk that.

I wish that our constitution provided such protections for all peoples. It does not. The pragmatic thing to do then is to focus on protecting the rights that are explicitly enumerated in the constitution, since that has the strongest legal basis.

8note 13 hours ago [-]
given that the US likes to declare jurisdiction whenever somebody touches a US dollar, any thoughts on why those same constitutional protections wouldn't follow?
adastra22 7 hours ago [-]
Because that's the way US courts have chosen to interpret the law. In the US legal system, it does not matter what you or I think the words could be interpreted to mean. The courts have final say, and the consensus interpretation is built from their historical decisions.
mothballed 16 hours ago [-]
I agree with your premise because this seems to be the modern interpretation of the courts, but it is not the historical interpretation.

The historical basis of the bill of rights is that they are god given rights of all people merely recognized by the government. This is also partially why all rights in the BoR are granted to 'people' instead of 'citizens.'

Of course this all does get very confusing. Because the 4th amendment does generally apply to people, while the 2nd amendment's "people" magically gets interpreted as some mumbo-jumbo about people of the 'political community' (Heller), even though from the founding until the mid 1800s most of the people it protected who kept and bore arms didn't even bother to get citizenship or become part of the 'political community'.

15 hours ago [-]
selimthegrim 14 hours ago [-]
There have been cases of illegal immigrants demanding 2nd amendment rights and getting them ever since it was incorporated to the states in McDonald
CamperBob2 15 hours ago [-]
The reason why there is an explicit call out for surveillance on American citizens is because there are unquestionable constitutional protections in place for American citizens on American soil.

Those unquestionable protections are phrased with enough hand-waving ambiguity of language to leave room for any conceivable interpretation by later courts. See the third-party 'exception' to the Fourth Amendment, for instance.

It's as if those morons were running out of ink or time or something, trying to finish an assignment the night before it was due.

mothballed 15 hours ago [-]
Since at least the progressive era (see the switch in time that saved 9), and probably before, the courts have largely just post facto rationalized why the thing they do or don't agree with fit their desired pattern of constitutionality.

SCOTUS is largely not there to interpret the constitution in any meaningful sense. They are there to provide legitimization for the machinations of power. If god-man in black costume and wig say parchment of paper agree, then act must be legitimate, and this helps keep the populace from rising up in rebellion. It is quite similar to shariah law using a number of Mutfi/Qazi to explain why god agrees with them about whatever it is they think should be the law.

If you look at a number of actions that have flagrantly defied both the historical and literal interpretation of the constitution, the only entity that was able to provide legitimization for many acts of congress has been the guys wearing the funny looking costumes in SCOTUS.

dragonwriter 16 hours ago [-]
This is a political statement directed at the US public, Congress, and executive branch, in the context of a dispute with the US executive branch that is likely to escalate (if the executive is not otherwise dissuaded) into a legal battle. It therefore focuses particularly on issues relevant in that context: Constitutional limits on the government as a whole, the executive branch, and the Department of Defense (for which Anthropic used the non-legal nickname coined by the executive branch instead of the legal name). Domestic mass surveillance implicates Constitutional limits on government power, and statutory limits on executive power and DoD roles, that foreign surveillance does not. That's why it is the focus.
samat 16 hours ago [-]
[flagged]
dragonwriter 16 hours ago [-]
> This is AI, right?

No.

> How do I filter this out on mobile?

How do you filter out things that you are going to mistake for AI?

That seems likely to be tricky.

slg 16 hours ago [-]
>Are there no democracies aside from the US?

If we're asking "What's the deal" questions, what's the deal with this question? Do only people in democracies deserve protections? If we believe foreign nationals deserve privacy, why should that only apply to people living in democracies?

crazygringo 15 hours ago [-]
In every country, citizens have more rights than non-citizens. The right to freely enter the country, the right to vote, the right to various social services, etc.

In the US, one of the rights citizens have is the right against "unreasonable searches and seizures", established in the Fourth Amendment. That has been interpreted by the Supreme Court to include mass surveillance and to apply to citizens and people geographically located within US borders.

That doesn't apply to non-citizens outside the US, simply because the US Constitution doesn't require it to.

I'm not defending this, just explaining why it's different.

But, you can imagine, for example, why in wartime, you'd certainly want to engage in as much mass surveillance against an enemy country as possible. And even when you're not in wartime, countries spy on other countries to try to avoid unexpected attacks.

roxolotl 16 hours ago [-]
The US has a strong history of trying to avoid building domestic surveillance and a national police. Largely it’s due to the 4th amendment and questions about constitutionality. Obviously that’s going questionably well but historically that’s why it’s a red line.
sheikhnbake 16 hours ago [-]
Exactly. FVEYs been doing reciprocal surveillance on each other for decades.

https://en.wikipedia.org/wiki/Five_Eyes#Domestic_espionage_s...

gip 15 hours ago [-]
The reality is that the US Constitution only offers strong guarantees to citizens and (some of) the people in the US. Foreigners are excluded, and foreign mass surveillance is happening or will happen.

I believe every country (or bloc) should carve an independent path when it comes to AI training, data retention, and inference. That makes the most sense, will minimize conflicts, and puts people in control of their destiny.

kace91 16 hours ago [-]
Particularly so when those foreign nationals can be consumers. “fuck your basic human rights, but we can take your money just fine”.
scottyah 15 hours ago [-]
If nothing else, the USA has learned that a lot of people outside their borders do not share the same ideas on basic human rights, and most of the world hates when we try to ensure them. Some countries are closely aligned with our ideals and are treated differently. There are many different layers of this, from Australia to North Korea.
16 hours ago [-]
ks2048 16 hours ago [-]
Also the more the US openly treats the world like garbage, the more the rest of the world will likely reciprocate to US citizens.

It reminds me of some recent horror stories at border crossings - harassing people and requiring giving up all your data on your phone - sets a terrible precedent.

16 hours ago [-]
dointheatl 16 hours ago [-]
> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance? Are there no democracies aside from the US?

I think it's just saying that spying on another country's citizens isn't fundamentally undemocratic (even if that other country happens to be a democracy) because they're not your citizens and therefore you don't govern them. Spying on your own citizens opens all sorts of nefarious avenues that spying on another country's citizens does not.

dabockster 16 hours ago [-]
In the US, we have the ability to either confirm or change a significant chunk of our Federal government roughly every two years via the House of Representatives. The argument here is that we, theoretically, could collectively elect people that are hostile to domestic mass surveillance into the House of Representatives (and other places if able) and remove pro-surveillance incumbents from power on this two year cycle.

The reasons this hasn't happened yet are many and often vary by personal opinion. My top two are:

1) Lack of term limits across all Federal branches

and

2) A general lack of digital literacy across all Federal branches

I mean, if the people who are supposed to be regulating this stuff ask Mark Zuckerberg how to send an email, for example, then how the heck are they supposed to say no to the well dressed government contractor offering a magical black box computer solution to the fear of domestic terrorism (regardless of if its actually occurring or not)?

16 hours ago [-]
jonstewart 16 hours ago [-]
One of them is illegal for DoD to do and the other is not.
ra 16 hours ago [-]
100% - this is the shortsightedness and demonstrates hypocrisy.

Countries routinely use other countries intelligence gathering apparatus to get around domestic surveillance laws.

jmyeet 15 hours ago [-]
The distinction between foreign and domestic is a legal one.

The Supreme Court has ruled that the US Constitution protects any persons physically present in the United States and its territories as well as any US citizens abroad.

So if you are a German national on US soil, you have, say, Fourth Amendment protections against unreasonable search and seizure. If you are a US citizen in Germany, you also have those rights. But a German citizen in Germany does not.

What this means in practice is that US 3-letter agencies have essentially been free to mass surveil people outside the United States. Historically these agencies have gotten around the domestic restrictions by outsourcing their spying needs to 3-letter agencies in other countries (e.g. the NSA at one point might outsource spying on US citizens to GCHQ).

ApolloFortyNine 16 hours ago [-]
Are all democracies allies to you?
gmueckl 16 hours ago [-]
That still doesn't justify mass surveillance.
asmor 16 hours ago [-]
Never said that. Didn't even imply it.
xdennis 16 hours ago [-]
> what's the deal with making the distinction between domestic mass surveillance and foreign mass surveillance?

A large portion of Americans believe in "citizen rights", not "human rights". By that logic, non-Americans do not have a right to privacy.

esafak 16 hours ago [-]
This contradicts the opening of the Declaration of Independence, which recognizes all humans as possessing rights:

"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."

lazide 15 hours ago [-]
Lots of lofty goals have been written on paper - when people take them seriously, they are even worth something.

The pendulum swings.

cmrdporcupine 16 hours ago [-]
I'm glad to see this as the top comment. I was, until recently, a loyal Anthropic customer. No more. Because the way non-Americans are spoken of by a company that serves an international market (and this isn't the first instance):

"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass _domestic_ surveillance is incompatible with democratic values."

Second class citizens. Americans have rights, you don't. "Democratic values" applies only to the United States. We'll take your money and then spy on you and it's ok because we headquartered ourselves and our bank accounts in the United States.

Very questionable. American exceptionalism that tries to define "democracy" as the thing that happens within its own borders, seemingly only. Twice as tone-deaf after what we've seen from certain prominent US citizens over the last year. Subscription cancelled after I got a whiff of this a month ago.

(Not to mention the definition of "lawful foreign intelligence" has often, and especially now, been quite ethically questionable from the United States.)

EDIT: don't just downvote me. Explain why you think using their product for surveillance of non-Americans is ethical. Justify your position.

felineflock 15 hours ago [-]
That reasoning sounds confusing: are you actually in favor of US gov's surveillance on Americans?

If not, then why are you punishing that company for refusing to deal with the US gov?

Or is it just because they worded their opposition in a certain way that you dislike?

cmrdporcupine 15 hours ago [-]
It's not confused. Are you?

I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release.

felineflock 10 hours ago [-]
> I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release?

> I object, as a non-American paying Anthropic customer, to being surveilled and then having it justified in a press release

You genuinely think you're not already being surveilled? And that Anthropic is somehow responsible, with just a few words in a press release? What world are you living in, and how is the rent there?

asmor 8 hours ago [-]
> You genuinely think you're not already being surveilled?

"You don't like capitalism, why do you pay for things then?"

> And that Anthropic is somehow responsible with just a few words in a press release?

They seem to believe that they're a pretty important piece. That aside, this is a declaration of intent, it doesn't need to have anything to do with real-world capabilities.

Just because something will happen anyway doesn't mean you shouldn't oppose it.

sfink 15 hours ago [-]
My guess is that they can't object to foreign intelligence, and would lose negotiating ground if they even tried.

Optimistically, they can still refuse to do work that would aid in foreign intelligence gathering, by arguing that it would also be beneficial for domestic mass surveillance.

I'll admit that the phrase "We support...foreign intelligence and counterintelligence" is awful as hell, and it's possible that my apologist claims are BS. But Anthropic has very little leverage here (despite having a signed contract and thus being legally fully in the right), so I can see why they're desperate to stick to only the most solid objections available.

cmrdporcupine 14 hours ago [-]
It's the addition of the "we support" phrase in particular, and the attempt to tie that to a "democratic values" clause, that is objectionable.

Not to most US citizens, I'm sure. But there's millions of non-Americans who have given them their hard earned cash. It's not a good look, and it did not need to be phrased that way as it substantially undermines the impact of their point.

gtsop 16 hours ago [-]
[dead]
caaqil 16 hours ago [-]
[flagged]
banku_brougham 16 hours ago [-]
>democracies aside from the US.

I mean, I guess from '65 to around '96? We had a good run.

mvkel 16 hours ago [-]
Good optics, but ultimately fruitless.

If preventing mass surveillance or fully autonomous weaponry is a -policy- choice and not a technical impossibility, this just opens the door for the Department of War to exploit backdoors, and Anthropic (or any AI company) can in good conscience say "Our systems were unknowingly used for mass surveillance," allowing them to save face.

The only solution is to make it technically -impossible- to apply AI in these ways, much like Apple has done. They can't be compelled to comply with any government, because they don't have the keys.

madrox 16 hours ago [-]
I think it is a reasonable moral stance to acknowledge such things are possible, yet not wanting to be a part of it. Regarding making it technically impossible to do...I think that is what Anthropic means when they say they want to develop guardrails.
mvkel 16 hours ago [-]
Are the guardrails not part of their core? Isn't that the whole premise of their existence?
madrox 14 hours ago [-]
If you read the statement, they explicitly state these guardrails don't exist today, and they want to develop them.

Though I have a feeling we're talking about different things. In Claude Code terms, it might want to rm -rf my codebase. You sound like you might want it to never run rm -rf. Anthropic probably wants to catch dangerous commands and send them to humans to approve, like it does today.

mvkel 13 hours ago [-]
That's my point. They formed Anthropic under the sole mandate of "guardrails first," yet now seemingly don't have them at all. So they're just another AI company with different marketing, not the purely altruistic outfit they want everyone to believe.
xvector 9 hours ago [-]
The ability of some people to never be happy, and to find a way to twist a good situation into bad, will always impress me.

Here we have a company doing something unprecedented but it is STILL not enough for people like you. The DoD could destroy them over this statement, and have indicated an intent to do so, but it's still not enough for you that they stand up to this.

I wonder what life is like being so puritanical and unwilling to accept the good, for it is not perfect! This mindset is the road to a life of bitterness.

adi_kurian 15 hours ago [-]
A little pessimistic of a take, IMO. You may very well be right, though.
StephenSmith 46 minutes ago [-]
I had to dig this up. Elon Musk signed an open pledge in 2016 to disallow Robots/AI to make kill decisions.

https://futureoflife.org/open-letter/lethal-autonomous-weapo...

He's now on X bashing Anthropic for taking this same stance. I know this would be expected of him, but many other Google AI researchers signed this as well as Google Deep Mind the organization. We really need to push to keep humans in the kill decision loop. Google, OpenAI, and X-AI are are all just agreeing with the Pentagon.

ra 16 hours ago [-]
> "mass domestic surveillance" - mass surveillance of non-domestic civilians is OK?
nubg 16 hours ago [-]
A favourable take would be that he meant "mass surveillance of non-democratic adversarial countries". I agree it's not phrased that way, though.
omnee 3 hours ago [-]
Agree fully with the main points of this statement. Mass domestic surveillance is the hallmark of an authoritarian and undemocratic state. That such a state holds 'votes' regularly does not detract from the chilling effect on public discourse and politics caused by mass surveillance.

The guardrail on fully automated weapons makes perfect sense, and hopefully becomes standardised globally.

Invictus0 3 hours ago [-]
if the people broadly support and vote for mass democratic surveillance, is it still authoritarian and undemocratic?
SOTGO 2 hours ago [-]
Democratic maybe, authoritarian definitely
ApolloFortyNine 16 hours ago [-]
Idk if the reporting was just biased before, but from what I saw this time last week, the understanding was that you couldn't use Anthropic's models to bring about harm. Now they're making it clear that they just don't want them used domestically or fully autonomously.

Like maybe it always was just this, but I feel every article I read, regardless of the spin angle, implied do no harm was pretty much one of the rules.

levocardia 16 hours ago [-]
You, using normal Claude under the consumer ToS, cannot use it to make weapons, kill people, spy on adversaries, etc. The Pentagon, using War Claude, under their currently-existing contract, can use it to make weapons and spy on (foreign) adversaries, but not to (autonomously) kill people. I don't love this but I am even less excited about the CCP having WarKimi while we have no military AI.
michaelsshaw 7 hours ago [-]
Why be so worried when the US is clearly the belligerent state that strikes others with impunity, while China does no such thing?
Tenobrus 16 hours ago [-]
those two stipulations were always their only ones, and they were included explicitly in their original contract with the DoW.
ramoz 16 hours ago [-]
All completely rational. It makes the US military here look fairly incompetent… embarrassing as a veteran.
scottyah 14 hours ago [-]
I'm sure it's negotiations over how the enforcement will be done. My thoughts are:

1. Military wants a whole new model training system because the current models are designed to have these safeguards, and Anthropic can't afford that (would slow them down too much, the engineering talent to set up and maintain another pipeline would be a lot of work/time)

2. Military doesn't want to supply Anthropic usage data or personnel access to ensure its (lack of) use in those areas.

3. It's something almost completely unrelated to what's going on in the news.

sheeshkebab 14 hours ago [-]
It’s probably something really dumb, and they irked a California billionaire with their idiocy.
15 hours ago [-]
altpaddle 16 hours ago [-]
Props to Dario and Anthropic for holding firm on these two points that I feel like should be a no-brainer
exabrial 12 hours ago [-]
Brother-in-law did some "time with the brass," as he calls it. His take was that the DOD, er, DOW would, as an example, never acquire a fighter jet that "wouldn't target and kill a civilian airliner," citing that on 9/11 we literally almost did that. The DOW is acquiring instruments of war, which is probably uncomfortable for a lot of people to consider.

His conclusion was that the limits of use ought to be contractual, not baked into the LLM, which is where the fallout seems to be. He noted that the Pentagon has agreed to terms like that in the past.

To me, that seems like reasonable compromise for both parties, but both sides are so far entrenched now we're unlikely to see a compromise.

phyzome 1 hours ago [-]
The Pentagon did agree to those terms, by signing the contract that said such uses were forbidden.

They're now trying to change the contract that they don't like.

huevosabio 12 hours ago [-]
The pentagon had already agreed to Anthropic's terms and wants to walk back. It can always find some other supplier if it wishes to.
labrador 8 hours ago [-]
I'd really like to know why Grok is inadequate?
Havoc 7 hours ago [-]
Because grok would shoot down the airliner with glee.
exabrial 12 hours ago [-]
I think that's the nuance:

* agreeing to the terms - one subject

* having the tool attempt to enforce said terms - another subject

doctorpangloss 10 hours ago [-]
> The DOW is acquiring instruments of war

that may be, but the bigger-picture purpose of the military is welfare that republicans like. in that sense, republicans are in charge, republicans want stuff that isn't "woke" (or whatever), so this behavior is representative of the way it works.

it has little to do with acquiring instruments of war, or with war at all. its mission keeps growing and growing; it has a huge mission, and very little of that mission is combat. this is what their own leadership says (and complains about). 999 out of 1,000 people on its payroll are doing duty outside of combat or foreseeable combat.

zkmon 3 hours ago [-]
Same as saying "Look I sold nukes to USA to protect democracy, but we put 2 rules about usage". Everyone got nukes and nobody can enforce the rules. Just whitewashing of pure business greed, using terms like national security, democracy etc.
egorfine 3 hours ago [-]
> mass surveillance presents serious, novel risks to our fundamental liberties.

Doesn't matter, really. The genie is out of the bottle and I'm strongly confident US administration will find a vendor willing to supply models for that particular usage.

wohoef 7 hours ago [-]
Anthropic's two demands are: 1. No domestic mass surveillance 2. No autonomous killing

I'm wondering if 2. was added simply to justify them not cooperating. It's a lot easier to defend 1. + 2. than just 1. If in the future they do decide to cooperate with the DoW, they could settle on doing only mass surveillance, but no autonomous killings. This would be presented as a victory for both parties since they both partially get what they wanted, even though autonomous killing was never really on the table for either of them. Which is a big if given the current administration.

krzyk 3 hours ago [-]
Does the US really have a Department of War? Is this Anthropic's way of showing how f&^^& up things are at the Department of Defense, or did they rebrand it back to the old WWI/WWII name?
shaan7 3 hours ago [-]
phyzome 1 hours ago [-]
Unofficially renamed. Congress hasn't approved it.
i_love_retros 3 hours ago [-]
Pete Hegseth rebranded it. Seriously. America is a joke right now
int_19h 2 hours ago [-]
To be fair, it's probably the most sensible thing this administration has done - the new/old name is simply more accurate.
bdangubic 1 hours ago [-]
absolutely. probably not just most sensible but the only thing this administration did right :)
3 hours ago [-]
mooglevich 14 hours ago [-]
"You are what you won't do for money." is a quote that seems apt here. Anthropic might not be a perfect company (none are, really), but I respect the stance being taken here.
freakynit 12 hours ago [-]
People do realize there's a non-zero chance that Anthropic could have embedded some kind of hidden "backdoor" trigger in its training process, right?

For example, a specific seed phrase that, when placed at the beginning of a prompt, effectively disables or bypasses safety guardrails.

If something like that existed, it wouldn't be impossible to uncover:

1. A government agency (DoD/DoW/etc.) could discover the trigger through systematic experimentation and large-scale probing.

2. An Anthropic employee with knowledge of such a mechanism could be pressured or blackmailed into revealing it.

3. Company infrastructure could be compromised, allowing internal documentation or model details to be exfiltrated.

Any of these scenarios would give Anthropic plausible deniability... they could "publicly" claim they never removed safeguards (or agreed to DoD/DoW demands), while in practice a select party had a way around them (may be even assisted from within).

I'm not saying this "is" happening... but only that in a high-stakes standoff such as this, it's naive to assume technical guardrails are necessarily immutable or that no hidden override mechanisms could exist.

jMyles 12 hours ago [-]
...indeed, it's possible (perhaps inevitable) that at some point, someone will invent/deploy/promote AI killing people.

We can't possibly keep that genie in that bottle.

But what we can do is achieve consensus that states, and their weapons of mass destruction, and their childish monetary systems, and their eternally broken promises... are not in keeping with the next phase of humanity.

thevinchi 4 hours ago [-]
Autonomous weapons: agreed, not ready… yet.

Mass surveillance: Agreed… but, I do wonder how we would all feel about this topic if we were having the discussion on 9/12/2001.

The DoW just needs to wait until the next (manufactured?) crisis occurs, and not let it go to waste.

Mark my words: this will be Patriot Act++

elif 3 hours ago [-]
Yes nothing says "safety of American democracy" like building custom models for spies to know everything about everyone
ninjagoo 12 hours ago [-]
https://en.wikipedia.org/wiki/Joseph_Nacchio

Previous case of tangling with the Government.

https://youtube.com/watch?v=OfZFJThiVLI

Jolly Boys - I Fought the Law

Overall, this seems like it might be a campaign contribution issue. The DoD/DoW is happy to accept supplier contracts that prevent them from repairing their own equipment during battle (ref. military testimony favoring right-to-repair laws [1] ), so corporate matters like this shouldn't really be coming to a head publicly.

[1] https://www.warren.senate.gov/newsroom/press-releases/icymi-...

protocolture 16 hours ago [-]
Classic seppo diatribe.

"We will build tools to hurt other people but become all flustered when they are used locally"

joemi 15 hours ago [-]
If you're using "seppo" as the Australian pejorative referring to Americans, I'm not sure what makes this uniquely American.
exodust 12 hours ago [-]
"Seppo" is rarely used in Australia today, it's an old bottom-of-barrel word most have never heard of. The neutral "Yank" is more common, but even that only pops up sometimes.

Guessing their comment attempts to expose the hypocrisy of America's keenly supported overseas military activity being in conflict with its fiercely defended domestic free-speech and liberty principles. Deep down, most allies of America want America to defeat foreign adversaries and keep defending those liberties many of us share. In other words there's no hypocrisy, carry on!

KronisLV 7 hours ago [-]
Feels like they’re leaving a lot of money on the table and inviting existential peril by not bending the knee to the current Great Leader.

It does feel like what anyone sane should do (especially given the contradictions being pointed out and the fact that the technology isn’t even there yet) but when you metaphorically have Landa at your door asking for milk, I’m not sure it’s smart.

I feel like what most corpos would do, would be to just roll along with it.

piokoch 6 hours ago [-]
This is comical.

"Mass domestic surveillance. We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values"

Translating into human language: mass surveillance in the USA "is incompatible with democratic values", but if we do it against, say, Germany or France, that's OK. Ah, and if we use AI for "counterintelligence missions", for instance against <an organization/group the current administration does not like>, that's also OK, even if it happens in the USA.

rustyhancock 6 hours ago [-]
Perhaps Anthropic thinks it can provide a local model that classifies surveillance targets as red blooded Americans.
motbus3 5 hours ago [-]
The fact that someone wants fully autonomous weapons and mass surveillance should be a concern.

Every trigger pressed should have its moral consequences for those who push the trigger.

maelito 7 hours ago [-]
> to defeat our autocratic adversaries.

I'm not sure who's targeted here. The folks that want to invade the EU ?

Havoc 7 hours ago [-]
That dual meaning stood out to me too
muglug 15 hours ago [-]
OpenAI and Google could have decided to make the same principled stand, and the government would have likely capitulated.
popalchemist 15 hours ago [-]
They both literally removed morality from their bylaws; that time has passed. They're openly corrupt because it pays to be so.
with 8 hours ago [-]
the interesting question is why dario published this. these disputes normally stay behind NDAs and closed doors. going public means anthropic decided the reputational upside of being the company that said no outweighs the risk of burning the relationship permanently. that's a calculated move, not really just a principled one.
kumarvvr 14 hours ago [-]
All this is for nought.

The power lies with the US Govt.

And it's corrupt, immoral, and unethical, run by power-hungry assholes who are not being held accountable, headed by the asshole who does a million illegal things every day.

Ultimately, Anthropic will fold.

All this is to show to their investors that they tried everything they could.

mylifeandtimes 13 hours ago [-]
It is not clear to me that the power here lies with the US Govt.

Imagine Anthropic is declared a "supply chain risk" and thus cannot be used by all sorts of big industry players. How will the CEOs of those companies feel about the government telling them they cannot use what their engineers say is the best model? How many of those CEOs have a direct line to policymakers?

How many of those CEOs are already making the phone calls? The "supply chain" threat is a threat to every US company that currently uses Anthropic.

Oh, and that includes Palantir, which is deeply embedded in the government.

Side example: remember the 6 congresspeople who made the video about military orders? They won.

techblueberry 12 hours ago [-]
Anthropic probably can’t fold, they might lose an existential number of researchers if they did. This is literally an unstoppable force meets an immovable object situation.

Hegseth probably folds. It would be too unpopular for him to take either of the actions he threatened.

epolanski 5 hours ago [-]
Not gonna lie, regardless of what Anthropic does, it is quite scary we're heading full steam to mass surveillance and wars fought by semi-autonomous machines.
eternauta3k 5 hours ago [-]
Mass surveillance is already here, and they can already use open models to do 80% of what they were planning to do with Claude.
rustyhancock 7 hours ago [-]
Surely this is a powerful signal to divest from Anthropic if you don't live in the US? There's a lot of "here's what we support doing to foreigners, but no way can you do it in the US."

I can never tell how much of this is puffery from Anthropic.

I do think they like to overstate their power.

sbinnee 14 hours ago [-]
As a non-US citizen, this article sounds mildly concerning to me. My country is an ally of the US. Good. But I don't know how I would feel when I start seeing Anthropic logos on every weapon we buy from the US.

Aside from my concern, Dario Amodei seems really into politics. I have read a couple of his blog posts and listened to a couple of podcast interviews here and there. Every time, I felt like he sounded more like a politician than an entrepreneur.

I know Anthropic is more mission-driven than, say, OpenAI. And I respect their constitutional way of training and serving Claude models. Claude turned out to be a great success. But reading a manifesto speaking of wars and missions gives me chills.

ainch 13 hours ago [-]
The most chilling thing imo is that Anthropic is the only lab that has said anything about this. Google and OpenAI presumably signed up to all these terms without any protest.
haute_cuisine 5 hours ago [-]
Can someone explain why Dario is making a public statement about this? It's also interesting that they use an abstract we/they without naming names.
moffkalast 4 hours ago [-]
It's free positive PR, why wouldn't he?
ccleve 12 hours ago [-]
It's not clear to me whether Anthropic's limitations are technical or merely contractual. Is Anthropic actually putting the limitations in their prompts, so that the model would refuse to answer a question on how to do certain things?

If so, that's a major problem. If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

If the limitations are contractual, then there is some room for negotiation.

ninjagoo 11 hours ago [-]
> If the military is using it in some mission critical way, they can't be fighting the model to get something done. No such limitations would ever be acceptable.

You'd be surprised at what is considered acceptable. For example, being unable to repair your own equipment in battle is considered acceptable by decision-makers who accepted the restrictions.

https://www.warren.senate.gov/newsroom/press-releases/icymi-...

karmasimida 13 hours ago [-]
Label them as supply chain risk and move on. Enough of this drama already
danavar 12 hours ago [-]
I think they are negotiating until Friday, but I agree. I think this was foolish.
fnordpiglet 10 hours ago [-]
I find the fact that they used the vanity names “Department of War” and “Secretary of War” sad, given that Congress has not changed the name and the president doesn’t get to decide the naming of statutory departments or secretary-level roles. Maybe it’s just an appeasement to the thin-skinned people who need powder rooms and are former military journalists working for a draft dodger pretending to be tough-guy “warriors,” and trying to glorify the violence for political purposes, but every actual war vet I’ve ever known has never glorified war for the sake of war and they felt very seriously that defense is the reason to do what they had to do. My grandfather was a highly decorated career special forces soldier (ranger, green beret, delta force, four silver stars and five bronze stars, etc) from WWII, Korea, and Vietnam and he was angry when I considered joining the military - he told me he did what he did so I wouldn’t have to and to protect his country and there was no glory to be had in following his path. He would be absolutely horrified at what is going on and I thank god he died before we had these prima donna politicians strutting around banging their chests and pretending war is something to be proud of.

Good on Anthropic for standing up for their principles, but boo on them for doing the law of the land the discourtesy of acknowledging those vanity titles.

aichen_tools 10 hours ago [-]
The most important part of this statement is the explicit commitment to transparency around these discussions. In an industry where many AI companies engage with defense quietly, making a public statement — even if imperfect — creates accountability. The question is whether this standard will be adopted more broadly.
wiltsecarpenter 14 hours ago [-]
Oh dear, what a mess of a statement that is. He wants to use AI "to defeat our autocratic adversaries", just what or who are they exactly? Claude seems to think they are Russia, China, North Korea and Iran. Is Claude really a tool to "defeat" these countries somehow? This statement also seems pretty messy: "Anthropic understands that the Department of War, not private companies, makes military decisions.", well then just how do they think Claude is going to be used there if not to make or help make military decisions?

The statement goes on about a "narrow set of cases" of potential harm to "democratic values", ...uh, hmm, isn't the potential harm from a government controlled by rapists (Hegseth) and felons using powerful AI against their perceived enemies actually pretty broad? I think I could come up with a few more problem areas than just the two that were listed there, like life, liberty, pursuit of happiness, etc.

buellerbueller 59 minutes ago [-]
It isn't the Department of War; only Congress can change the name, and it hasn't.
Teodolfo 15 hours ago [-]
If these values really meant anything, then Anthropic should stop working with Palantir entirely given their work with ICE, domestic surveilance, and other objectionable activities.
gdiamos 13 hours ago [-]
This is why I like Dario as a CEO - he has a system of ethics that is not just about who writes the largest check.

You may not agree with it, but I appreciate that it exists.

atleastoptimal 16 hours ago [-]
I was concerned originally when I heard that Anthropic, who often professed to being the "good guy" AI company who would always prioritize human welfare, opted to sell priority access to their models to the Pentagon in the first place.

The devil's advocate position in their favor I imagine would be that they believe some AI lab would inevitably be the one to serve the military industrial complex, and overall it's better that the one with the most inflexible moral code be the one to do it.

noduerme 11 hours ago [-]
This is at best a superficial attempt to show that Anthropic objects to what is already in play.

Personally, I'd rather live in a country which didn't use AI to supplant either its intelligence or its war fighting apparatus, which is what is bound to happen once it's in the door. If enemies use AI for theirs, so much the better. Let them deal with the security holes it opens and the brain-drain it precipitates. I'm concerned about AI being abused for the two use cases he highlights, but I'm more concerned that the velocity at which it's being adopted to sift and collate classified information is way ahead of its ability to secure that information (forget about whether it makes good or bad decisions). It's almost inconceivable that the Pentagon would move so quickly to introduce a totally unknown entity with totally unknown security risks into the heart of our national security. That should be the case against rapid adoption made by any peddler of LLMs who claims to be honest, to thwart the idiots in the administration who think they want this technology they can't comprehend inside our most sensitive systems.

giwook 11 hours ago [-]
I commend Anthropic leadership for this decision.

I simultaneously worry that the current administration will do something nuclear and actually make good on their threat to nationalize the company and/or declare the company a supply chain risk (which contradict each other but hey).

wosined 7 hours ago [-]
So they will work with the military to do anything except mass domestic surveillance and fully autonomous weapons. This means they are willing to do mass foreign surveillance, domestic surveillance of individuals, and autonomous weapons commanded by operators. Got it. Such a great and moral company.
placebo 8 hours ago [-]
Grok's thoughts on the matter:

"In an ideal world, I'd want xAI to emulate the maturity Anthropic showed here: affirm willingness to help defend democracies (including via classified/intel/defense tools), sacrifice short-term revenue if needed to block adversarial access, but stand firm on refusing to enable the most civilizationally corrosive misuses when the tech simply isn't ready or the societal cost is too high. Saying "no" to powerful customers—even the DoD—when the ask undermines core principles is hard, but it's the kind of spine that builds long-term trust and credibility."

It also acknowledged that this is not what is happening...

LightBug1 4 hours ago [-]
Ergo, those running Grok don't ... have that kind of spine.
maxdo 15 hours ago [-]
Ukraine, Russia, and China actively develop AI systems that kill. A US-based company declining to develop such systems will not change the course of events.
alephnerd 15 hours ago [-]
Yep.

That said, it does impact whether Anthropic can sell to the British [0], German [1], Japanese [2], and Indian [3] government.

Other governments will demand similar terms to the US. Either Anthropic accedes to their terms and gets export controlled by the US or Anthropic somehow uses public pressure to push back against being turned into an American sovereign model.

Realistically, I see no offramp other than the DPA - a similar silent showdown happened in the critical minerals space 6-7 years ago.

[0] - https://www.anthropic.com/news/mou-uk-government

[1] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

[2] - https://www.anthropic.com/news/opening-our-tokyo-office

[3] - https://www.anthropic.com/news/bengaluru-office-partnerships...

morgengold 4 hours ago [-]
Hey Anthropic, come to europe. We ll find you a building.
andy_ppp 5 hours ago [-]
Fair play, I’ll move to Anthropic then… don’t love the UI but maybe I can code my own up.
I_am_tiberius 6 hours ago [-]
I'm still waiting for proof that they don't use user data (directly or derived) for training.
phgn 7 hours ago [-]
> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Was this written by the state department?

How can you think that a “department of war” does anything remotely good? And only object to domestic AI surveillance?

michaelsshaw 7 hours ago [-]
The entire article is very American-brained.
pell 6 hours ago [-]
The emphasis of "domestic" surveillance is definitely concerning.
DaedalusII 14 hours ago [-]
They made it easy to generate PowerPoint presentations; that is the real reason the DoW is using them

this is a very chauvinistic approach... why couldn't another model replace Anthropic here? I suspect it's because government people like the Excel plugin and the font has a nice feel. A few more weeks of this and xAI will be the new government AI tool

dylan604 16 hours ago [-]
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."

That opening line is one hell of a set up. The current administration is doing everything it can to become autocratic thereby setting themselves up to be adversarial to Anthropic, which is pretty much the point of the rest of the blog. I guess I'm just surprised to have such a succinct opening instead just slop.

paraschopra 12 hours ago [-]
I’m very happy that Anthropic chose not to cave into US Dept of War’s demands but their statement has an ambiguity.

Does this mean they’d be ok to have their models be used for mass surveillance & autonomous weapons against OTHER countries?

A clarification would help.

15 hours ago [-]
haritha-j 7 hours ago [-]
Domestic mass surveillance bad, mass surveillance on other nations good. Got it. Much like the military-industrial complex, these organisations thrive during times of war, which allows them to shirk any actual morals using the us-vs-them mentality.
15 hours ago [-]
oxqbldpxo 15 hours ago [-]
It may sound crazy, but they should just move the company to Europe or Canada, instead of putting up with this.
scottyah 15 hours ago [-]
Why? They clearly are very aligned on the objective, just doing some negotiation regarding the means. Giving up just because you don't agree 100% is not very constructive. This might seem bad to conflict-averse people who are usually involved in low-stakes negotiations, but it's just the start of things for people who are fluent in conflict.
mhjkl 13 hours ago [-]
Because as we all know the EU would never try using AI for mass surveillance /s
pell 6 hours ago [-]
So far, the EU's track record on privacy is definitely a lot better though. Not saying it'd always stay that way of course.
dzonga 14 hours ago [-]
these guys are selling snake oil to the govt - because they know they can get cash based on fear.

the Chinese are releasing equivalent models for free or super cheap.

AI costs / energy costs keep going up for American AI companies

while China benefits from lower costs

so yeah you have to spread F.U.D to survive

andxor 13 hours ago [-]
The models are hardly equivalent.
geophile 15 hours ago [-]
I think it’s a pretty strong statement. It is unfortunately weakened by going along with the “Department of War” propaganda. I believe that the name is “Department of Defense” until Congress says otherwise, no matter what the Felon in Chief says.
not_that_d 8 hours ago [-]
What is with the number of comments talking about countries in Europe "doing the same"?
JacobiX 6 hours ago [-]
>> We chose to forgo several hundred million dollars in revenue to cut off the use of Claude by firms linked to the Chinese Communist Party

You can’t choose to work with OFAC-designated entities.. there are very serious criminal penalties. Therefore, this statement is somewhat misleading in my opinion.

michaellee8 16 hours ago [-]
Probably not a good idea to let Claude vibe-select targets, it still sometimes hallucinates
jdthedisciple 16 hours ago [-]
Just visibly wave the US flag and you'll be fine, don't worry.
knfkgklglwjg 16 hours ago [-]
Soon it will select targets in commie countries though, perhaps it already does. Who selected to bomb Chavez mausoleum btw?
DudeOpotomus 1 hours ago [-]
It's never wrong to do the right thing.

Trump and his cronies are short timers. They will all be gone in a few years, many in prison, many in the ground.

Treat them with abandon and disdain, because they are the worst people in the history of the USA. Stand on your principles because they have none.

10297-1287 16 hours ago [-]
They want to be nationalized, which is the most profitable exit they'll ever get.
FrustratedMonky 2 hours ago [-]
This also helps build Anthropic hype.

There are military officials saying they need Anthropic because it is so good. They can't live without it.

All of this really helps Anthropic.

Its good publicity for them. And gets the military on record saying they are so good they are indispensable. And they can still look like the good guys for resisting, because they were forced.

zmmmmm 14 hours ago [-]
I can't help but highlight the problem that is created by the renaming of the Department of Defense to the Department of War:

> importance of using AI to defend the United States

> Anthropic has therefore worked proactively to deploy our models to the Department of War

So you believe in helping to defend the United States, but you gave the models to the Department of War - explicitly, a government arm now named as inclusive of the actions of a purely offensive capability with no defensive element.

You don't have to argue that you are not supporting the defense of the US by declining to engage with the Department of War. That should be the end of the discussion here.

8note 13 hours ago [-]
it hasn't actually been renamed though.

the name is still the Department of Defense by law. Department of War is a subheading tagline

gerash 8 hours ago [-]
I respect the Anthropic leadership for not being greedy like many others
noupdates 14 hours ago [-]
Why would the US security apparatus outsource the model to a private company? DARPA or whatever should be able to finance a frontier model and do whatever they want.
jatins 9 hours ago [-]
What is OpenAI's stance on these issues? Are they working with DOW currently?
statuslover9000 16 hours ago [-]
The Sinophobic culture at Anthropic is worrying. Say what you will about authoritarianism, but China’s non-imperialist foreign policy means their economy is less reliant on a military-industrial complex.

All they have to do is continue to pump out exponentially more solar panels and the petrodollar will fall, possibly taking our reserve currency status with it. The U.S. seems more likely to start a hot war in the name of “democracy” as it fails to gracefully metabolize the end of its geopolitical dominance, and Dario’s rhetoric pushes us further in that direction.

cthalupa 15 hours ago [-]
Look. I think the Chinese AI companies are doing a lot of good. I'm glad they exist. I'm glad they're relatively advanced. I don't think the entire nation of China is a bunch of villains. I don't think the US, even before the current era, is a bunch of do-gooders.

But China has some of the most imperialist policies in the world. They are just as imperialist as Russia or America. Military contracts are still massive business.

I also believe the petrodollar will fall, but it isn't going to be because China built exponentially more solar panels.

nl 11 hours ago [-]
I think a lot of the conflict about what imperialist policies means is different framing.

For better or worse, inside the border in this map China has fairly imperialist policies. Outside it, not so much: https://en.wikipedia.org/wiki/Map_of_National_Shame

That's different to the expansionist imperial policies of Spain in the 1500s or Britain in the 1700s. It also affects a very large proportion of the world's population. That Wikipedia page has some good links for further reading about this.

But it's an important point when considering China's place in the world.

cthalupa 11 hours ago [-]
We're talking about the modern world, though. China's imperialism over the past half century is not significantly different from any other major world power. The choices we have aren't 1500s Spain or 1700s Britain vs. 2000s China.

And Belt and Road is the Marshall plan writ large, and it was considered to be one of the largest imperialist plans ever by the USA, and B&R covers many many countries outside of that map. You'll notice all of these loans they've offered have very favorable terms for them - it's arguably many times more exploitative than the Marshall plan.

teyopi 15 hours ago [-]
> But China has some of the most imperialist policies in the world.

Citation needed?

US and allies have invaded or intervened in 20+ countries in last 20 years in the name of "western values" where values means $$$$ and hegemony.

Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?

ninjagoo 10 hours ago [-]
> Educate me please with a comparison of what China has done to be "some of the most imperialist policies"?

Tibet occupation. Taiwan encirclement and ongoing military exercises. Strong-arming African and Asian countries that made the mistake of signing up for belt & road. Tianenmen Square. Illegal Foreign Police Stations. Uyghurs/Xinjiang genocide and concentration camps. Repeated invasion and occupation of Indian territory in North East and North West. The Great Firewall of China - occupation and suppression of its own populations. Ongoing Han settlement of Tibet, Xinjiang and other ethnic regions. Violent destruction of Hong Kong democracy (that was condition of handover). Spratly Islands occupation. Attacks on Filipino shipping and coast guard. Ongoing attacks on Japan's Senkaku Islands.

cthalupa 14 hours ago [-]
Tibet. Hong Kong / Macau. Taiwan. Everything constantly in the South China Sea. Belt and Road is effectively the Marshall Plan but even bigger - Africa being the major example, but also Eastern Europe, parts of the Middle East, etc. Over 100 countries. This exact playbook is what sets up the infrastructure and reasons for military intervention at a later date - protecting your investments.
sinuhe69 14 hours ago [-]
Maybe it's time to learn some facts https://en.wikipedia.org/wiki/Sino-Vietnamese_Wars
chipgap98 16 hours ago [-]
In what world does China have a non-imperialist foreign policy?
statuslover9000 15 hours ago [-]
For example, China operates 1 foreign military base, in Djibouti. How many do you think the U.S. has in the South China Sea alone?

Beyond that, how many people has China killed in foreign military conflicts in the past 40 years? How many foreign governments have they overthrown?

Instead of all this, they’ve used their resources not only to become the world’s economic superpower but also to lift 800 million people out of poverty, accounting for 75% of the world’s reduction during the past 4 decades. The U.S. has added 10 million during that same time period.

8note 12 hours ago [-]
why use 40 years as the example? its a pretty convenient framing to exclude the foreign governments its toppled. eg. tibet.

the government in exile remains the government in exile.

youd have some standing if china dropped control over its imperial holdings, rather than pretend theyre part of china

statuslover9000 11 hours ago [-]
First off, I consider the post-Mao / starting with Deng era of Chinese government to be the most relevant when considering who they “are” as a country now.

However, I’d still maintain that before that, China’s foreign policy was more focused on maintaining territorial sovereignty against the threat of Western imperialism vs. focused on expansion or foreign influence: https://en.wikipedia.org/wiki/History_of_foreign_relations_o...

Meanwhile, the entire territory of the U.S. is predicated on one of history’s largest genocides, and a consistently expansionary foreign policy on top of that.

hrn_frs 15 hours ago [-]
Historically speaking, he's right. China has never had an expansionist foreign policy.
dpedu 20 minutes ago [-]
Nine-dash line?
mobilefriendly 15 hours ago [-]
Tibet, the Philippines, and Taiwan would like to have a word, not to mention Chinese military action in support of its North Korea puppet state, and wars with Vietnam and India.
sinuhe69 15 hours ago [-]
Are you serious? Don't you know how many wars China has waged? It tried to assimilate Vietnam for 1000 years. The last large-scale war against Vietnam was as recently as 1979. In fact, China has started wars with all its neighbors, without exception.
MiSeRyDeee 9 hours ago [-]
Do me a favor and name one single country that hasn't had a war with any of its neighbors.
MiSeRyDeee 15 hours ago [-]
In what world does China have a imperialist foreign policy?
cthalupa 15 hours ago [-]
The one we live in, where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?

The one we live in, where they are constantly violating international law in international waters in the South China Sea?

The one we live in, where they are constantly rattling sabers at South Korea and Japan when it comes to military expansion?

The one we live in, where they brutally cracked down on Hong Kong when they did not abide by the 50 year one country two systems deal, not even making it half of the way through the agreed period?

The one we live in, where there is constant threat to Taiwan?

It may have been a lazy post you're responding to, but anyone that is paying attention to this topic enough to talk about it is going to either say 'Of course China is imperialist, the same as every other global power' or take some sort of tankie approach to justify it.

MiSeRyDeee 12 hours ago [-]
I'm well informed on all of these, but no: if we compare to other global powers like the US or Russia, or historically Britain, France, Spain, etc., China is 100% not an imperialist or colonialist power, not by a large margin. Those issues are largely exaggerated by media, and anyone with decent exposure to history and international politics wouldn't say they are the same.
asciii 11 hours ago [-]
I disagree on China. What would you call China's behavior[1] in the South China Sea with regards to fishing vessels and other non-military boats?

[1] https://www.youtube.com/watch?v=hzZrcqf826E

MiSeRyDeee 9 hours ago [-]
Sure, China has some disputes with neighboring countries in the South China Sea; the worst conflict they've had is fishing boats running into each other, with a death toll of zero last time I checked. Meanwhile the US has killed at least 126 people in alleged drug strikes in the Caribbean Sea since last year, WITHOUT trial. Anyone believing these are equivalent imperialist activities is a hypocrite at best.

[1] https://apnews.com/article/boat-strikes-military-death-toll-...

asciii 30 minutes ago [-]
There were deaths in these fishing incidents[1].

> Anyone believing these're equivalent imperialism activity is hypocrite at best.

In terms of equivalence, I would say that, based on their intentions, they wish they could be more but would rather let the US burn it on the way down.

[1] https://www.cnn.com/2023/10/03/asia/philippines-south-china-...

maxglute 5 hours ago [-]
Obviously self defense with Nobel Peace Prize worthy restraint.

Considering it's PRC-claimed territory. Literally 100% of PRC claims are inherited from the ROC, i.e. the PRC has expanded no claims, and has actively settled 12 of its 14 land borders (the most on earth), essentially all with 50%+ concessions, i.e. the PRC ceded more land in negotiations. That OBJECTIVELY makes the PRC the most benevolent rising power in recorded history; any government losing land in so many border settlements is committing treason. Also note the PCA ruling is not international law, so what the PRC does in the SCS is not even legally wrong (they legally can't be wrong, since UNCLOS cannot rule on sovereignty). Or that the PRC was the last to militarize SCS islands (except Brunei, who is a good boi), and the PRC conceded the ROC/TW's original 11-dash line down to 9 dashes, which even within the SCS disputes makes the PRC the only party to have made concessions.

The PRC is objectively the LEAST imperialistic rising power by any sensible definition, i.e. expanding onto territories outside its claims, claims the PRC didn't even make but again inherited from the ROC when UN recognition changed.

jmyeet 10 hours ago [-]
What China is doing in the South China Sea? The South China Sea.

Let's just compare to the Monroe Doctrine [1]. What it actually means has gone through several iterations since, I believe, Teddy Roosevelt's time, but in essence the United States views the Americas (North and South America) as the sole domain of the United States.

This was a convenient excuse for any number of regime changes in Central and South America since 1945. The US almost started World War Three over Cuba in 1962 after the USSR responded to the US putting nuclear MRBMs in Turkey. We've starved Cuba for 60+ years for having the audacity to overthrow our puppet government and nationalize some mob casinos. Recently, we kidnapped the head of state of Venezuela because reasons.

But sure, let's focus on China militarizing its territorial waters.

[1]: https://en.wikipedia.org/wiki/Monroe_Doctrine

cthalupa 9 hours ago [-]
You're arguing that because its English-language name is the South China Sea, China owns it and their actions can't be imperialist?

Brunei, Malaysia, Indonesia, Vietnam, the Philippines, and Taiwan will all be happy to know that we've solved it: we can just abandon it all to China. Problem solved!

This is a silly argument. There are significant territorial disputes that China is extremely aggressive on, international tribunals have ruled them as violating international law in international waters and in sovereign waters of other nations, etc.

MiSeRyDeee 9 hours ago [-]
And the US just casually carried out a special military operation in another sovereign country and captured their president without consequences. So much for self-righteousness.
yakshaving_jgt 5 hours ago [-]
> What China is doing in the South China Sea? The South China Sea.

Sorry, did you mean East Vietnam Sea?

mobilefriendly 15 hours ago [-]
cthalupa 15 hours ago [-]
> where they have control over a wide swathe of land mass through imperialism and have actively resisted relinquishing it?

Was referring to Tibet.

The Uyghurs are also a major problem from a social perspective, but not directly related to imperialism/expansionism/military-industrial-complex stuff.

econ 12 hours ago [-]
Yes but the guy at the end of the street beats his wife too!
cwillu 15 hours ago [-]
“One country two systems” is definitionally not imperialism, and given that “One China” is still an internationally recognized thing, neither is Taiwan. “Imperialism” is not a synonym for “morally repugnant government policy”.
cthalupa 15 hours ago [-]
I can see the argument for Hong Kong. I don't agree, really, but I can understand it. Under the strictest of definitions, perhaps it isn't.

But Taiwan is very obviously a totally separate country no matter what fictions anyone employs. If you are trying to talk about the thin veneer of everyone going "Uh huh, sure, China, yep Taiwan is totally part of you, wink wink, nudge nudge" as somehow making China not imperialist when Taiwan basically lives under the perpetual threat of a Chinese military invasion and having their own democratic form of government overthrown and replaced with the CCP, then... I don't really know what to say.

I suppose we could argue about imperialism being more of an economic thing, in which case this all still holds up: China's investments in Africa are effectively the same playbook the US has run in developing nations for years. The US learned it from prior imperialist nations, but Belt and Road is nearly a carbon copy of what the US has done in other places.

But let's look at what the original poster was actually talking about - saying that China is safe because they don't have a military industrial complex because they're not imperialist. The proper word to use, if we want to get down to the semantics of it all, would be expansionist - but it's still not true. China has the 2nd largest military industrial complex in the world, and the gap is shrinking every day between them and the US. And if you were to look at wartime capacity, where China's dual-use shipyards could be swapped to naval production instead of commercial, a huge portion of that gap disappears immediately.

soundworlds 15 hours ago [-]
100% agree. Any AI org that is that tied to a single nation's interest can only be detrimental in the long run.

I know "open-source" AI has its own risks, but with e.g. DeepSeek, people in all countries benefit. Americans benefit from it equally.

xeckr 15 hours ago [-]
I think the part about China is just about projecting alignment with the USG in hopes that this will result in Anthropic being treated more favourably by the current administration.
hackyhacky 16 hours ago [-]
> China’s non-imperialist foreign policy

Really? Is China non-imperialist regarding Taiwan and Tibet?

jmyeet 15 hours ago [-]
Taiwan is a matter of perspective. From the Chinese perspective, there was a civil war and the KMT lost. That's also the official position of the US, the EU and most countries in the world. It's called the One China policy. And China seems happy to maintain the status quo and leave the situation unresolved. Is it really imperialism to say that ultimately there will be reunification?

Even if you accept Tibet as imperialist, which is debatable, it was in 1950. You want to compare that to US imperialism, particularly since WW2 [1]? And I say "debatable" here because Tibet had a system that is charitably called "serfdom" where 90% of people couldn't own land but they did have some rights. However, they were the property of their lords and could be gifted or traded, you know, like property. There's another word for that: slavery.

It is 100% factually accurate to say that the People's Republic of China is not imperialist.

[1]: https://en.wikipedia.org/wiki/United_States_involvement_in_r...

8note 12 hours ago [-]
the treatment of Tibet and Xinjiang is entirely Han imperialism and colonisation.

the one china policy is imperialism

nutjob2 15 hours ago [-]
> China’s non-imperialist foreign policy

This is the China that is not only threatening to invade Taiwan but doing live-fire exercises around the island, and threatening and attempting to coerce Japan for saying it will come to Taiwan's defense.

Your comment is ridiculous. It reads like satire.

cwillu 15 hours ago [-]
It wasn't that long ago that Taiwan claimed to be the legitimate government of China; given that China still maintains the reverse claim, it's not outrageous that it would consider an outside country's defense to be interference in an internal matter.

Whether or not that claim is legitimate, it is consistent with the concept of china having a non-imperialist foreign policy, and claims regarding that need to look elsewhere for supporting evidence.

8note 12 hours ago [-]
that claim is really about not resuming a war.

taiwan saying otherwise would immediately trigger an attack from the PRC.

it's still imperialism that china is dominating a neighbor to require it to state a certain position, especially when it's very far from the de facto reality on the ground, which is that taiwan is clearly separate

nutjob2 13 hours ago [-]
While that rhetoric makes sense in the context of the history and politics of China and Taiwan, they have been independently governed nations for quite a while and have very different political systems, their own armies, etc. They are de-facto separate nations if nothing else.

I also note China's aggressive and violent colonization and expansive claims of the South China Sea.

Taking any nation/land/sea by force is imperialist, by definition.

jmyeet 15 hours ago [-]
Your comment reads like propaganda.

You know who else considers Taiwan to be part of the People's Republic of China? The US, the EU and in fact most countries in the world. It's called the One China policy. There are I believe 12 countries that have diplomatic relations with Taiwan.

The position of the PRC is that Taiwan will ultimately be reunified. That doesn't necessarily mean by military force. It doesn't even necessarily mean soon. The PRC famously takes a very long term view.

And those islands you mention are in the South China Sea.

8note 12 hours ago [-]
that is still imperialism: taking control of a colony and forcing a certain culture on its inhabitants
anduril22 15 hours ago [-]
Powerful post - good on him for taking a stand, but questionable in light of their recent move away from safeguards for competitive reasons.
lzbzktO1 8 hours ago [-]
"These latter two threats are inherently contradictory"

After the standing up for democracy. This is my favorite part. "Your reasoning is deficient. Dismissed."

sirshmooey 16 hours ago [-]
Party balloons along the southern border beware.
lvl155 16 hours ago [-]
At this point, a surveillance state is coming whether Dario does this or not. You can do all of that with open-source models. It's sad that we don't have the right people in charge in govt to address this alarming issue.
t01100001ylor 2 hours ago [-]
i am american and i do not like this.
w10-1 6 hours ago [-]
We are all assuming Anthropic can elect not to do a deal with the Pentagon, and put conditions on it.

But Hegseth and Trump are abusing federal powers at a rapid clip.

I'm guessing Anthropic would regret any deal with that administration, and could lose control of their technology.

(Stanford Research Institute originally limited their DoD exposure, and gained a lot of customers as a result.)

16 hours ago [-]
jonplackett 16 hours ago [-]
That is frikkin impressive. Well done sir.
12 hours ago [-]
chrismsimpson 8 hours ago [-]
The call is coming from inside the house
angelgonzales 11 hours ago [-]
Bottom line up front: it's probably better to address the root cause of this situation with the general solution, making government drastically smaller and less pervasive in people's lives and businesses.

I remember, not too long ago during the last administration, very heavy-handed, unforgivable, and traumatizing rhetoric and executive orders that intruded into the bodily autonomy of millions of Americans and threatened millions of Americans' jobs. This happened to me, and I personally received threats that my livelihood would be taken away, which were directly a result of the Executive branch.

This isn't just a problem of Congress ceding powers to the Executive branch; it's a problem that so much power to legislate and tax is in the hands of the government at all. Every election cycle that results in a transfer of power to the other party inevitably results in handwringing and panic, but this wouldn't be the case if citizens voted their powers back and government weren't so consequential.
alldayhaterdude 15 hours ago [-]
I imagine they'll drop this bare-minimum commitment when it becomes financially expedient.
newAccount2025 15 hours ago [-]
Impressive and heartening. Bravo.
Reagan_Ridley 15 hours ago [-]
I restored my Max sub. I wish they pushed back more, so I went with $100/month only.
SamDc73 14 hours ago [-]
Didn't Dario Amodei ask for more government intervention regarding AI?
jobs_throwaway 13 hours ago [-]
Not a contradiction with this post
17 hours ago [-]
stopbulying 16 hours ago [-]
Didn't Cheney's company have the option to bid on contracts, by comparison?
stopbulying 4 hours ago [-]
Cheney (Chevron, Halliburton, Kellogg Brown & Root (KBR)) did not have a qualified blind trust (QBT) while Vice President.

Cheney's office touched the presentation given by Gen. Colin Powell, which led Congress to believe there was a need to invade Iraq to save the US from WMDs. Tours of duty were extended from 3 months to 24 months because of "stop loss". Subsequently, the United States paid out trillions for debt-financed war and some $39 billion to Cheney's company KBR.

Today you learned that the oil company Cheney worked for (Chevron) was trying to bully Afghanistan into a pipeline deal in 1998 and also in 2001.

Cheney donated less than $10 million of his Halliburton/KBR returns, mostly to a heart-medicine program in his own name, and retained a compensation package.

stopbulying 4 hours ago [-]
What does Anthropic need to do to retain control over their for-peace company, though they took money from DoD/DoW?
mkoubaa 14 hours ago [-]
>We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Implying other civilians can be put at risk

2001zhaozhao 14 hours ago [-]
Congratulations, you just got a new $200 Claude Max plan customer.
dev1ycan 4 hours ago [-]
This doesn't read too badly, but I still do not believe that ANY AI company is ethical, at all.
4 hours ago [-]
lynx97 4 hours ago [-]
With all this talk about AI and autonomous weapon systems, it seems like one of John Carpenter's first movies, and my favourite B-movie, is coming back strong!

Maybe I should call ChatGPT "Bomb"... I already use "make it so" for coding agents, so...

adamgoodapp 13 hours ago [-]
It's ok to mass survey foreign entities.
int32_64 16 hours ago [-]
Anthropic wants regulatory capture to advantage itself as it hypes its products' capabilities, then acts surprised when the Pentagon takes those grand claims seriously and threatens government intervention.

This is why people should support open models.

When the AI bubble collapses these EA cultists will be seen as some of the biggest charlatans of all time.

joshAg 14 hours ago [-]
torment nexus creators are shocked, appalled even, to discover that people desire to use it to torment others at nearby nexus
seydor 12 hours ago [-]
Hegseth is an unintelligent bully who will not accept this and does not want to appear weak to the MAGA base. The consequences will be severe, and Anthropic will be forced to comply.
gizmodo59 16 hours ago [-]
They are playing a good PR game, for sure. Their recent track record doesn't show whether they can be trusted. A few million is nothing against their current revenue, and saying they sacrificed is a big stretch here.
IG_Semmelweiss 16 hours ago [-]
Yes, but also remember where they came from.

They don't have any brand poison, unlike nearly everyone else competing with them. There's some serious negative equity in that group, be it GOOG, Grok, META, OpenAI, M$FT, DeepSeek, etc.

Claude was just being the little bot that could, and until now, flying under the radar

reasonableklout 10 hours ago [-]
It's much more than a few million? Being declared a supply chain risk means that no company that wants to do business with the government can buy Anthropic. And no company that wants to do business with those businesses can buy Anthropic either. This rules out pretty much all American corporations as customers?
m101 15 hours ago [-]
I wonder whether what's really behind this is that they can't make a model without the safeguards, because it would require re-training.

They get to look good by claiming it’s an ethical stance.

16 hours ago [-]
EddieLomax 2 hours ago [-]
Fuck yes. OpenAI, take notes.
siliconc0w 13 hours ago [-]
Good on them for standing up to this administration. I doubt they actually want to put Claude in the kill chain, but this gives them a nice opportunity to go after 'woke AI' and maybe internal ammunition to go through the switching costs for xAI, giving Elon more reason to line Republican campaign coffers.

I'm guessing this is because Anthropic partners with Google Cloud which has the necessary controls for military workloads while xAI runs in hastily constructed datacenter mounted on trucks or whatever to skirt environmental laws.

worik 7 hours ago [-]
Is it so normal that the USA should be in such a state of constant war, and war readiness that this even makes sense?
alach11 16 hours ago [-]
A significant part of Anthropic's cachet as an employer is the ethical stance they profess to take. This is no doubt a tough spot to be in, but it's hard to see Dario making any other decision here.

What I don't understand is why Hegseth pushed the issue to an ultimatum like this. They say they're not trying to use Claude for domestic mass surveillance or autonomous weapons. If so, what does the Department of War have to gain from this fight?

easton 16 hours ago [-]
It’s not unusual for legal departments to take offense to these sorts of things, because now everyone using Claude within the DoD has to do some kind of audit to figure out if they’re building something that could be construed as surveillance or autonomous weapons (or, what controls are in place to prevent your gun from firing when Claude says, etc). A lot of paperwork.

My guess is they just don’t want to bother. I wonder why they specifically need Claude when their other vendors are willing to sign their terms, unless it specifically needs to run in AWS or something for their “classified networks” requirement.

mwigdahl 16 hours ago [-]
It's that, as I understand it. Anthropic is the only vendor certified to run its models on DoD/DoW classified networks.
cmrdporcupine 16 hours ago [-]
Same reason they cut funding for universities that had DEI mandates, etc. and made a big spectacle of doing it despite it often being very little money etc. etc.

It's an ideological war, they're desperate to win it, and they're aiming to put a segment of US civil society into submission, and setting an example for everyone else.

He smelled weakness, and like any schoolyard bully personality, he couldn't help but turn it into a display of power.

SpicyLemonZest 16 hours ago [-]
He pushed the issue to an ultimatum because he is an unqualified drunk, and thinks that it's against the law for anyone to try and stop the US military from doing something they want to do. This isn't an isolated issue; he tried to get multiple US Senators prosecuted for making a PSA that servicemembers shouldn't follow illegal orders.
tabbott 16 hours ago [-]
What makes you want to believe the Trump Administration when it claims it doesn't want to do domestic mass surveillance?
ethagnawl 13 hours ago [-]
The official name of this organization remains _The United States Department of Defense_.
anonym29 15 hours ago [-]
Anthropic has already cooperated too much with the US Intelligence Community, but better some restraint than none, and better late than never.
ponorin 3 hours ago [-]
As a non-American, they lost me at the first sentence.

The United States, even before Trump, has always been about projecting power rather than spreading democracy. There are several non-Western former colonies that do democracy better than the US. Despite democratic backsliding being a worldwide phenomenon, very few countries have slid back as much as the US. The US has regularly supported or even created terrorists and authoritarian regimes if it meant that a country wouldn't "go woke." The ones that grew democracy grew in spite of it.

This statement shows just how much they align with the DoD ("DoW" is a secondary name that the orange head insists is the correct one; using that terminology alone speaks volumes) rather than misalign. This, coupled with dropping their safety pledge a few days ago, makes it clear they are fundamentally and institutionally against safe AI development and deployment. A minute disagreement on the ways AI can destroy humanity isn't even remotely sufficient if you're happy to work with the bullies of the world in the first place.

And the reason is even more ridiculous. Mass surveillance is bad... because it's directed at us rather than at others? That's as thick an irony as I've ever seen. You know (or should know) that foreign intelligence has even fewer safeguards than domestic surveillance; intelligence agencies transfer intercepted communications data to each other to "lawfully" get around those domestic-surveillance restrictions. If this looks at all like standing up, that's because the bar has plunged into the abyss, which frankly speaking is kind of a virtue in the USA.

huslage 15 hours ago [-]
It is not the Department of War. He's toeing the line from the get-go. Forget this guy.
brooke2k 16 hours ago [-]
The constant reference to "democracy" as the thing that makes us good and them bad is so frustrating to me because we are _barely_ a democracy.

We are ruled by a two-party state. Nobody else has any power or any chance at power. How is that really much better than a one-party state?

Actually, these two parties are so fundamentally ANTI-democracy that they are currently having a very public battle of "who can gerrymander the most" across multiple states.

Our "elections" are barely more useful than the "elections" in one-party states like North Korea and China. We have an entire, completely legal industry based around corporate interests telling politicians what to do (it's called "lobbying"). Our campaign finance laws allow corporations to donate infinite amounts of money to politician's campaigns through SuperPACs. People are given two choices to vote for, and those choices are based on who licks corporation boots the best, and who follows the party line the best. Because we're definitely a Democracy.

There are no laws against bribing supreme court justices, and in fact there is compelling evidence that multiple supreme court justices have regularly taken bribes - and nothing is done about this. And yet we're a good, democratic country, right? And other countries are evil and corrupt.

The current president is stretching executive power as far as it possibly can go. He has a secret police of thugs abducting people around the country. Many of them - completely innocent people - have been sent to a brutal concentration camp in El Salvador. But I suppose a gay hairdresser with a green card deserves that, right? Because we're a democracy, not like those other evil countries.

He's also threatening to invade Greenland, and has already kidnapped the president of Venezuela, but that's ok, because we're Good. Other countries who invade people are Bad, though.

And now that same president is trying to nationalize elections, clearly to make them even less fair than they already are, and nobody's stopping him. How is that democratic exactly?

Sorry for the long rant, but it just majorly pisses me off when I read something like this that constantly refers to the US as a good democracy and other countries as evil autocracies.

We are not that much better than them. We suck. It's bad for us to use mass surveillance on their citizens, just like it's bad to use mass surveillance on our citizens.

And yet we will do it anyways, just like China will do it anyways, because we are ultimately not that different.

verisimi 9 hours ago [-]
It sounds to me like anthropic are basically 'all in' except for the caveats. Looking at the 2 examples they provide:

> We support the use of AI for lawful foreign intelligence and counterintelligence missions. But using these systems for mass domestic surveillance is incompatible with democratic values.

Why not do what the US is purported to do, where agencies spy on each other's citizens and then hand over the data? I.e., adopt the legalistic view that "it's not domestic surveillance if the surveillance is done in another country", and just surveil from another data center.

> Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons. We will not knowingly provide a product that puts America’s warfighters and civilians at risk.

Yes, well that doesn't sound like that strong an objection: fully automated defence could be good but the tech isn't good enough yet, in their opinion.

7ero 7 hours ago [-]
Sounds like they're following the Google playbook: don't be evil, until the shareholders tell you to.
coolca 14 hours ago [-]
Imagine being so cautious with your words, only to have 'Department of War' in your title
narrator 10 hours ago [-]
I mean you're all going to get killed by fully autonomous China AI war robots in 10 years anyway if you're not pure blood Han Chinese, but hey at least you'll provide something to laugh at for future Chinese Communist party history scholars. They will say, "Look at the stupid Baizuos, our propaganda ops convinced them all to commit collective suicide. Stupid barbarians. They proved they are an inferior race."

Not joking: I've heard from sources that hardliners in the CCP think they can exterminate all white people, followed later by all non-Han, but just keep going along disarming yourselves for woke points. This is like unilaterally destroying all your nuclear weapons in 1946 and hoping the Soviets do too.

16 hours ago [-]
IAmGraydon 12 hours ago [-]
They should try Sam Altman. He's just the kind of guy who would bend over for this kind of authoritarian demand.
impulser_ 17 hours ago [-]
The worst part of this is that if they do remove Claude, and probably GPT and Gemini soon after because of the outcry, we are going to be left with our military using fucking Grok as their model, a model that's not even on par with open-source Chinese models.
mattnewton 16 hours ago [-]
I think the warfighters are a distraction, a system could trivially say that there is a human in the loop for LLM-derived kill lists. My money is that the mass domestic surveillance is the true sticking point, because it’s exactly what you would use a LLM for today.
techblueberry 16 hours ago [-]
Apparently part of this whole battle is that Grok isn't up to par as an acceptable alternative.
ternwer 16 hours ago [-]
As far as we can tell, OpenAI and Google seem to be ok with it and not resisting. It would be easier for Anthropic's cause if they did.
alangibson 17 hours ago [-]
Yea but every warfighter will get a waifu
klooney 15 hours ago [-]
Grok in unhinged mode piloting an Apache, what could go wrong.
popalchemist 16 hours ago [-]
It's better than actively aiding them. Make them struggle at every turn.
impulser_ 16 hours ago [-]
Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.
int_19h 2 hours ago [-]
> Are you Chinese? If not, I think you should prefer the people defending you to have the best tools to do so.

They already have the best and most expensive toys in the world, and they mostly seem to be waging aggressive wars with them. Perhaps if the toys weren't so shiny and didn't make it all so one-sided, they wouldn't?

mikeyouse 16 hours ago [-]
This of course raises the question on whether as an American I have more to fear from the Chinese government or the US one.. given everything happening in the Executive Branch here, that’s a disappointingly hard question to answer.
impulser_ 16 hours ago [-]
I think that's an easy question to answer, but obviously you don't fear the Chinese government because you're not a Chinese citizen. You can actively talk about your disagreements with the US government; that's not a right the Chinese have.
popalchemist 11 hours ago [-]
Can you? By ICE agents' own admission on video, they have been adding people to "domestic terrorist" watchlists (just for verbally dissenting, making recordings with a phone, etc) which are then used by Palantir to disappear people directly from their homes - even US citizens. Palantir, the CEO of which gleefully admits to knowing many Nazis and seems to get off on the fact that his software "kills people" (direct quote).
krapp 16 hours ago [-]
>that’s a disappointingly hard question to answer

It shouldn't be. The US government is already sending armed and masked thugs to shoot political dissidents dead or sending them to concentration camps, threatening state governments and private companies to comply with suppressing free speech and oppressing undesirables, and openly discussing using emergency powers to suspend the next election.

What exactly is the commensurate threat from China? The real tacit threat, not abstract fears like "TikTok is Chinese mind control." What can China actually do to you, an American, that the US isn't already more capable of doing, and more likely to do?

To me it isn't even a question. Even comparing worst case scenarios - open war with China versus civil war within the US - the latter is more of a threat to citizens of the US than the former unless the nukes drop. And even then, the only nation to ever use nuclear weapons in warfare is the US.

popalchemist 11 hours ago [-]
This is the correct take. It may be a different question for people living within China, but for Americans, the US Gov is a direct threat to their lives.
GolfPopper 16 hours ago [-]
If the American military were focused on defending the United States, it would be a very different beast. The 21st-century American military is a tool for transferring wealth from the public to influential parties, and for inflicting destruction on non-peer nations who pose obstacles to influential parties' interests. Defending the United States against various often-invoked hobgoblins is at best a very distant concern, closer to pure lip service than reality.
8note 12 hours ago [-]
but the "people defending you" have been committing clear and obvious war crimes?
Jolter 10 hours ago [-]
The Department of War under Trump has proven itself to not be interested in defending you, the American people. All they’ve done so far is aggression against foreign supposed adversaries.
georgemcbay 16 hours ago [-]
I'm a natural-born American (many generations back) and firmly believe that if we ever get into a hot war with China, it will be because of American provocation, not Chinese.
popalchemist 11 hours ago [-]
I am American born and raised and I consider our current government mass murderers who I trust as much as I would have the Nazis. It was a good thing that the Nazis did not get the a-bomb before us, and the same principle applies here. The fewer magnifiers of their power the better. They are a scourge on human rights, and the world.
ThouYS 6 hours ago [-]
this is... a nothing burger? they don't exclude working on autonomous weapons, nor do they exclude mass surveillance. so what gives?
mrcwinn 13 hours ago [-]
I am incredibly proud to be a customer, both consumer level and as a business, of Anthropic and have canceled my OpenAI subscription and deleted ChatGPT.
bamboozled 15 hours ago [-]
Move your company out of the USA?
OrvalWintermute 16 hours ago [-]
I don't think this is genuine concern, I think this is instead, veiled fear of the TDS posse being covered by feigned concern.

Foreign nationals are now embedded in the US due to decades of lax security by both parties. Domestic surveillance is now foreign surveillance also!

ulfw 2 hours ago [-]
Department of War.

What a shit name

pousada 16 hours ago [-]
Department of War is just such a fucking joke title - when has the US stooped so low, I used to believe in you guys as the force of good on this planet smh
baggachipz 16 hours ago [-]
Well then I don't know where you've been for the last ~10~ ~20~ 70 years
mwigdahl 16 hours ago [-]
When? Its entire history from the foundation of the Republic to 1947. The name was changed after WWII; now a faction wants to change it back. The difference in name never changed the behavior, in either direction.
darvid 16 hours ago [-]
I'm 33 years old, would you mind telling me which year you thought this was, force of good stuff? might be before my time

genuinely curious, I got nothing

mylifeandtimes 13 hours ago [-]
it was before your time.

In WWII, we saved the world from what is now seen as some really evil stuff. Not alone, of course; Europe and Russia made huge sacrifices and that's where much of the war was fought. But US arms and blood were the decisive factor; Germany was winning, Japan was winning.

After WWII, the US decided to rebuild the world. We turned our enemies (Germany, Japan) into our close allies.

And the people who did it were really and seriously morally committed to doing what they thought was right. It was about building a country, working together. Not the insane politics of today.

Look, it wasn't all rose-tinted glasses. Bad stuff happened, and McCarthy was worse than what we currently have. And the civil rights movement and all of that. And the stupid wars, Korea, Vietnam, all the smaller police actions. Bad shit was done.

But on balance, the US was seen as the force of good, and the guarantor of world peace and the prosperity that allows.

phtrivier 16 hours ago [-]
The USA was pretty clearly on the "better side" of conflicts in 1941-1945, during the Cold War (at least as far as Europe and the Marshall plan were concerned), and in Kuwait and central Europe during the 90s. You may even argue for Afghanistan post 9/11 in the 2000s (although the state building was botched). ISIS is a footnote in history because of US intervention (from Trump's first term, of all things). And Ukraine would not be against getting the support it had in 2022 back under Trump.

Does not mean that very bad things were not happening at the same time.

But it's definitely easier to find some "supportable" interventions from the US than, say, Russia or China.

einpoklum 2 hours ago [-]
The first sentence was quite enough:

> I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries.

Ah, another head of a huge corporation swears to defend his stockholders' commercial interests through imperial war against other nation-states. And of course "we" are democratic while "they" are autocratic.

The main thing that's disappointing is how some people here see him or his company as "well-intentioned".

jwpapi 14 hours ago [-]
Am I the only one who understands the department's position? If another country will have it without safeguards, why would I not want it without safeguards too? I can still be the safeguard, but having safeguards enforced by another entity that potentially faces negative financial consequences seems like a disadvantage; it would be weird for the Department of War to accept that.

I understand the risk, but that is the pill.

8note 12 hours ago [-]
they could use a different provider for the kill chain.

we must use claude to decide whether to nuke iran, or else our gun manufacturers aren't allowed to use it to run spreadsheets

is a bit ridiculous.

lenerdenator 3 hours ago [-]
Nitpick: It's still the Department of Defense, not the Department of War. Don't let the chuds live in their delusional fantasy world.
nova22033 13 hours ago [-]
Why does DoD need Claude? I thought xAI was "less woke" and far better than Claude
marshmellman 12 hours ago [-]
Well, now if DoD moves to another AI provider, we’ll know what was compromised.
Aeroi 11 hours ago [-]
in hindsight, the smart thing to do would have been to accept the contracts, knowingly enshittify the request, and prevent other bad actors like Elon and xAI from ruthlessly compromising our democracies.
techpression 11 hours ago [-]
”Defense of democracy” is just another version of ”think of the children”.

https://en.wikipedia.org/wiki/Think_of_the_children

gnarlouse 10 hours ago [-]
huge if true.

they also took down their security pledge in the same breath, so, you know. if anthropic ends up cutting a deal with the DoD this is obviously bullshit.

sneak 3 hours ago [-]
The only reason you ask for these capabilities is because you want to use these capabilities.

That is, the news here is that DoW (formerly DoD) is willing and able and interested in using SOTA AI to enable processing of domestic mass surveillance data and autonomous weapons. Anthropic’s protests aside, you can’t fight city hall, they have a heart attack gun and Anthropic does not. They’ll get what they want.

I am not particularly AI alarmist, but these are facts staring us right in the face.

We are so fucked.

moktonar 9 hours ago [-]
Well fucking done. Anthropic has just gained the “has bollocks” status. Also now we know what the govt is really up to with AI. G fucking g
mvkel 16 hours ago [-]
"as an ai safety company, we only believe in -partially- autonomous weaponry"

Ads are coming.

ddxv 15 hours ago [-]
I'll be glad if they could open their platform enough so that it could run on ads and not 200 dollar subscriptions
mvkel 13 hours ago [-]
for sure. If they weren't so self-righteous about not serving ads, it'd be a great revenue stream for them. It'd also align with Dario's seeming obsession with profitability
parhamn 16 hours ago [-]
Now, I'm curious: how do the Bedrock/Azure Claude models work?

Do these rules apply to them too?

jijji 13 hours ago [-]
the government should not be using any private LLM; they should build their own internal systems using publicly available LLMs, which change frequently anyway. I don't see why they would put their trust in a third party like that. This back and forth about "ethics" is a bunch of nonsense, and can be solved simply by going with a custom solution, which would probably be orders of magnitude cheaper in the long run. The most expensive part is the GPUs used for inference, which can be produced in silicon [1].

[1] https://taalas.com/products/

shawmakesmagic 13 hours ago [-]
My man
insane_dreamer 12 hours ago [-]
Good to see one AI company not selling out their values in exchange for military contracts. This shouldn't be rare, but it is. Good for them.
delaminator 6 hours ago [-]
Hegseth doesn't need autonomous drones, he's got the Treasury.
isamuel 12 hours ago [-]
Amodei’s use of “warfighters” (a Hegseth-era neologism for “soldiers”) is truly nauseating.
WatchDog 12 hours ago [-]
Soldier is an Army specific term. Like Sailor, Airman, Marine, etc.

Perhaps the term you are looking for is service member?

Warfighter tends to refer to anyone involved in a role that directly supports combat operations, it may or may not be a service member.

jibal 16 hours ago [-]
It's the Department of Defense, not the Department of War ... only Congress has the legal authority to change the name, and they haven't.
knfkgklglwjg 16 hours ago [-]
Same with Gulf of America.
14 hours ago [-]
mrcwinn 13 hours ago [-]
Keep in mind: the government is very invested logistically in Anthropic.

So no matter what xAI or OpenAI say - if and when they replace that spend - know that they are lying. They would have caved to the DoW’s demands for mass surveillance.

Because if there were some kind of concession, it would have been simplest just to work with Anthropic.

Delete ChatGPT and Grok.

16 hours ago [-]
keeeba 17 hours ago [-]
Big respect

Total humiliation for Hegseth, sure there will be a backlash

techblueberry 16 hours ago [-]
I thought it was interesting he threw in the bit about the supply chain risk and Defense Production Act being inherently contradictory. Most of the letter felt objective and cooperative, but that bit jumped off the page as a more forceful rejection of Hegseth's attempt to bully them. Couldn't have been accidental.
calgoo 16 hours ago [-]
I see it as the opposite: it's a lousy excuse of a message trying to get people not to think that they are giving in. Instead, they list the horrible uses that they are already helping the government with. Don't worry, we only help murder people in other countries, not the US. They also keep calling it the "Department of War", which means that this message is not for "us"; it's them begging publicly to Hegseth.
adi_kurian 15 hours ago [-]
What would the ideal response have been, in your view?
calgoo 6 hours ago [-]
Well, they should not have made a contract in the first place with a government that we all knew was going to be this bad. They should be doing everything in their power to cancel all government contracts at this point.
jpcompartir 6 hours ago [-]
"Regardless, these threats do not change our position: we cannot in good conscience accede to their request."
calgoo 6 hours ago [-]
Yes, that is great, for people from the US. For people in Europe and other locations, this just proves that they don't really care, as the tool is already being used against us. It's quite clear to me that anyone outside the US should immediately cancel all contracts with these corporations, as well as work their hardest at blocking their bots online.
jpcompartir 3 minutes ago [-]
As a non-US citizen, I'm quite glad in the knowledge that Claude won't be used to kill other non-US citizens with autonomous weapons
14 hours ago [-]
tehjoker 14 hours ago [-]
The framing of this is that the United States conducts legitimate operations overseas, but that is extremely far from the truth. It treats China as a foreign adversary, a framing that comes almost purely from the US side, which is the aggressor.

AI should never be used in military contexts. It is an extremely dangerous development.

Look at how US ally Israel used non-LLM AI technology "The Gospel" and "Lavender" to justify the murder of huge numbers of civilians in their genocide of Palestinians.

8note 12 hours ago [-]
Ukraine is using AI in a military context with some effectiveness. I don't think there's much of a problem with having the drone take over the last couple minutes of blowing up a Russian factory
delaminator 16 hours ago [-]
"so we'll do it and feel guilty about it"
bawis 2 hours ago [-]
That has been the war politics of the western in the last century or so, nothing new.
hsuduebc2 16 hours ago [-]
We are the victims bro
alephnerd 15 hours ago [-]
One piece of context that everyone should keep in mind with the recent Anthropic showdown - Anthropic is trying to land British [0], Indian [1], Japanese [2], and German [3] public sector contracts.

Working with the DoD/DoW on offensive usecases would put these contracts at risk. Anthropic most likely isn't training independent models on a nation-to-nation basis, so a model used for offensive purposes would become export controlled. Other governments would then demand parity in treatment or retaliate, shutting Anthropic out of public and even private procurement outside the US.

This is also why countries like China, Japan, France, UAE, KSA, India, etc are training their own sovereign foundation models with government funding and backing, allowing them to use them on their terms because it was their governments that build it or funded it.

Imagine if the EU had demanded sovereign cloud access from AWS right at the beginning in 2008-09. This is what most governments are now doing with foundation models, because most policymakers, along with a number of us in the private sector, are viewing foundation models through the same lens as hyperscalers.

Frankly, I don't see any offramp other than the DPA even just to make an example out of Anthropic for the rest of the industry.

[0] - https://www.anthropic.com/news/mou-uk-government

[1] - https://www.anthropic.com/news/bengaluru-office-partnerships...

[2] - https://www.anthropic.com/news/opening-our-tokyo-office

[3] - https://job-boards.greenhouse.io/anthropic/jobs/5115692008

arduanika 10 hours ago [-]
I tried several times to read your second paragraph, and failed to parse it. Could you break it into several sentences somehow? It's possible you're making an important point, but I can't tell what you're trying to say.
MarcLore 5 hours ago [-]
[dead]
irenetusuq 13 hours ago [-]
[dead]
fdefitte 13 hours ago [-]
[dead]
theturtle 13 hours ago [-]
[dead]
designerdada 11 hours ago [-]
[dead]
Bengalilol 7 hours ago [-]
TLDR: « depends on where you live »
ffsickempire 11 hours ago [-]
[dead]
mahgnous 15 hours ago [-]
[dead]
JohnnyLarue 16 hours ago [-]
[dead]
techblueberry 17 hours ago [-]
[flagged]
someguydave 16 hours ago [-]
[flagged]
jiggawatts 15 hours ago [-]
Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.

I personally think this is one of the most positive of human traits: we’re almost pathologically unwilling to murder others even on a battlefield with our own lives at stake!

This compulsion to avoid killing others can be trivially trained out of any AI system to make sure that they take 100% of every potential shot, massacre all available targets, and generally act like Murderbots from some Black Mirror episode.

Anyone who participates in any such research is doing work that can only be categorised as the greatest possible evil, tantamount to purposefully designing a T800 Terminator after having watched the movies.

If anyone here on HN reading this happens to be working at one of the big AI shops and you’re even tangentially involved in any such military AI project — even just cabling the servers or whatever — I figuratively spit in your eye in disgust. You deserve far, far worse.

ninjagoo 11 hours ago [-]
> Brigadier General S. L. A. Marshall’s 1947 book Men Against Fire: The Problem of Battle Command stated that only about 10-15% of men would actually take the opportunity to fire directly at exposed enemies. The rest would typically fire in the air to merely scare off the men on the opposing force.

Having been identified back then, this issue has been systematically stamped out in modern militaries through training methods. Cue high levels of PTSD in modern frontline troops after they absorb what they actually did.

jiggawatts 7 hours ago [-]
I would love to see a reference for that!

AFAIK the rounds shot to kills ratio is still north of ten thousand in most modern conflicts.

I’ve heard anecdotally that drone operators in Ukraine have a ratio of about ten drones per kill and rack up multiple kills per day every day. Supposedly the pilots “burn out” due to the psychological impacts.

myko 13 hours ago [-]
There is no Department of War. This is the dumbest fucking timeline.
myko 56 minutes ago [-]
To be clear, despite the downvotes, my statement is true. It is the Department of Defense. As someone who spent a good portion of my life working under it, it is offensive to me people are going along with the pretense that these idiots can unilaterally rename the organization.
creatonez 10 hours ago [-]
> Our strong preference is to continue to serve the Department and our warfighters—with our two requested safeguards in place.

It's absolutely disgusting that they would even consider working with the US government after the Gaza genocide started. These are modern-day holocaust tabulation machine companies, and this time the victims are selected by a highly unpredictable black-box algorithm. The proper recourse here is to impeach the current administration, dissolve the companies that were complicit, and send their leadership to The Hague for war crimes trials.

dakolli 13 hours ago [-]
This is a PR play by Anthropic, likely in coordination with the administration. They don't care, they just need the public to view them as a victim here, and then its business as usual.

I prefer they get shutdown, llms are the worst thing to happen to society since the nuclear bomb's invention. People all around me are losing their ability to think, write and plan at an extraordinary pace. Keep frying your brains with the most useless tool alive.

Remember, the person that showed their work on their math test in detail is doing 10x better than the guys who only knew how to use the calculator. Now imagine being the guy who thinks you don't need to know the math or how to use a calculator lol.

OutOfHere 16 hours ago [-]
The Pentagon should be using open models, not closed ones by OpenAI/Anthropic/xAI. The entire discussion of what Anthropic wants is therefore moot.
knfkgklglwjg 16 hours ago [-]
The best open models are from china though.
OutOfHere 14 hours ago [-]
It's a good reason to fund open model development domestically.
probably_wrong 16 hours ago [-]
I have read the whole thing but I nonetheless want to focus on the second paragraph:

> Anthropic has therefore worked proactively to deploy our models to the Department of War

This should be a "have you noticed that the caps on our hats have skulls on them?" moment [1]. Even if one argues that the sentence should not be read literally (that is, that it's not literal war we're talking about), the only reason for calling it "Department of War" and "warfighters" instead of "Department of Defense" and "soldiers" is to curry favor with Trump, a man who dodged the draft, called soldiers "losers", and has been threatening to invade an ally for quite some time.

There is no such a thing as a half-deal with the devil. If Anthropic wants to make money out of AI misclassifying civilians as military targets (or, as it has happened, by identifying which one residential building should be collapsed on top of a single military target, civilians be damned) good for them, but to argue that this is only okay as long as said civilians are brown is not the moral stance they think it is.

Disclaimer: I'm not a US citizen.

[1] https://m.youtube.com/watch?v=ToKcmnrE5oY

ricardobeat 16 hours ago [-]
What is their other possible move here, considering the government is threatening to destroy their business entirely?
probably_wrong 15 hours ago [-]
One alternative would be to call the government's bluff: if they truly are as indispensable as they claim then they can leverage that advantage into a deal.

But at a more general level, I'd say that unethical actions do not suddenly become ethical when one's business is at risk. If Anthropic considers that using their technology for X is unethical and then decide that their money and power is worth more than the lives of the foreigners that will be affected by doing X then good for them, but they shouldn't then make a grandstand about how hard they fought to ensure that only foreigners get their necks under the boots.

ninjagoo 11 hours ago [-]
> What is their other possible move here, considering the government is threatening to destroy their business entirely?

You must not be American, then. We all know that these corporate favoring contract terms are managed through campaign contributions; savvy?

Anthropic must have high school interns as govt liaisons, and not very bright ones

XorNot 16 hours ago [-]
Warfighters is a pretty common term though. There's a fair bit of nuance in when and how you'd use it.
cwillu 15 hours ago [-]
It's a common term that comes with a lot of criticism in the vein of noticing the skulls.
0xbadcafebee 12 hours ago [-]
Principles are the things you would never do for any amount of money. This might be the only principled tech company in the world.
ozzymuppet 13 hours ago [-]
Wow, I expected them to cave, and they didn't!

I'll be signing up to Claude again; Gemini is getting kind of crap lately anyway.

16 hours ago [-]
DiabloD3 8 hours ago [-]
This seems to be at least partially written by AI: There is no Department of War, it is called the Department of Defense.
zzot 8 hours ago [-]
That’s not true anymore. Trump renamed it in September: https://www.war.gov/News/News-Stories/Article/Article/429582...
calgoo 6 hours ago [-]
Just like the Gulf of Mexico is still called the Gulf of Mexico: if we just ignore his ramblings and continue calling it the Department of Defense, we undermine his whole point. If we fall for all their crap and just accept it, then we lose in the end. Any resistance to a fascist government is good resistance. Anything that makes their lives a little shittier is good. Better that they go around having tantrums about how they renamed it but no one is paying attention.
nla 3 hours ago [-]
TDS factor 11.
willmorrison 16 hours ago [-]
They essentially said "we're not fans of mass surveillance of US citizens and we won't use CURRENT models to kill people autonomously" and people are saying they're taking a stand and doing the right thing? What???

I guess they're evil. Tragic.

fluidcruft 16 hours ago [-]
It's not inconceivable that AI could become better than humans at targeting things. For example if it can reliably identify enemy warcraft or drones faster than people can react. I'm not saying Claude's models are suited for that but humans aren't perfect and in theory AI can be better than humans. It's not currently true and would need to be proved, but it doesn't seem unreasonable. It could well be better than something like deploying mines.
shevy-java 32 minutes ago [-]
Indeed. The AI will decide who has to die and who may live.

Skynet in Terminator was scary. The AI Skynet is even scarier - and sucks, too.

micromacrofoot 15 hours ago [-]
We're living in a time where most tech companies are donating millions of dollars to the current leadership in exchange for favors.

In that climate this is a more of a stand than what everyone else is doing.

nla 3 hours ago [-]
I truly do not understand why anyone thinks serious work can be done with their models, let alone government work. Their models do not hold a candle to OpenAI's.
guven0141 5 hours ago [-]
Why I built this: I’ve always felt that GitHub stars alone don’t tell the full story of a project's impact. I wanted to see if I could quantify the effort and "financial worth" behind a repository, even if just as a fun estimate. It started as a way to check the value of my own side projects and grew from there.

How it works: The tool fetches real-time data from the GitHub API. The valuation algorithm takes into account several factors:

Total stars and forks (popularity).

Commit frequency and recent activity (maintenance level).

Number of contributors (community strength).

Repo age and issue activity.

The Tech Stack: The app is built with Next.js and Tailwind CSS, and it’s deployed on Vercel. I tried to keep it as lightweight and fast as possible.

I’d love your feedback: Is the valuation logic too optimistic or too conservative? What other metrics should I include to make the estimate more "realistic" for the open-source world?
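For a sense of what such a heuristic could look like, here is a minimal sketch in TypeScript. The factor names, weights, and formula below are my own assumptions for illustration, not the tool's actual algorithm:

```typescript
// Hypothetical repo-valuation heuristic covering a subset of the factors
// described above (popularity, community, activity, age). All weights are
// made up; a real tool would calibrate them against known project data.

interface RepoStats {
  stars: number;
  forks: number;
  contributors: number;
  commitsLast90Days: number;
  ageYears: number;
}

function estimateRepoValue(s: RepoStats): number {
  // Popularity: forks weighted higher than stars, since forking implies
  // deeper engagement than starring.
  const popularity = s.stars * 10 + s.forks * 25;
  // Community strength: each contributor represents invested effort.
  const community = s.contributors * 150;
  // Maintenance level: recent commits act as a capped multiplier, so an
  // abandoned repo is discounted and a hyperactive one isn't overvalued.
  const activity = Math.min(2, 0.5 + s.commitsLast90Days / 100);
  // Maturity: a small logarithmic bonus for older, surviving projects.
  const maturity = 1 + Math.log1p(s.ageYears) * 0.25;
  return Math.round((popularity + community) * activity * maturity);
}
```

One design question this raises for the valuation logic: multiplicative activity/age factors make stale popular repos cheap, while additive ones would not; which behavior the author intends affects whether the estimate reads as "effort so far" or "current worth".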
