I initially thought that this was an announcement for a new pledge and thought, "they're going to forget about this the moment it's convenient." Then I read the article and realized, "Oh, it's already convenient."
Google is a megacorp, and while megacorps aren't fundamentally "evil" (for some definitions of evil), they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise.
Retric 10 hours ago [-]
> while megacorps aren't fundamentally "evil" (for some definitions of evil),
I think megacorps being evil is universal. It tends to be corrupt cop evil vs serial killer evil, but being willing to do anything for money has historically been categorized as evil behavior.
That doesn’t mean society would be better or worse off without them, but it would be interesting to see a world where companies pay vastly higher taxes as they grow.
zelon88 10 hours ago [-]
You're talking about pre-Clinton consumerism. That system is dead. It used to dictate that the company who could offer the best value deserved to take over most of the market.
That's old thinking. Now we have servitization. Now the business who can most efficiently offer value deserves the entire market.
Basically, iterate until you're the only one left standing and then never "sell" anything but licenses ever again.
Ekaros 10 hours ago [-]
The bait-and-switch model is absolutely amazing as well. Start by offering a service covered with ads. Then add a paid tier to get rid of ads. Next add a tier with both payment and ads. And finally add ads back to every possible tier. Not to forget keeping ads embedded in the content itself the whole time.
int_19h 5 hours ago [-]
To quote the email from Hulu that recently dropped into my inbox:
> We are clarifying that, as we continue to increase the breadth and depth of the content we make available to you, circumstances may require that certain titles and types of content include ads, even in our 'no ads' or 'ad free' subscription tiers.
So at this point they aren't even bothering to rename the tier from "ad free" even as they put ads in it. Or maybe it's supposed to mean "the ads come free with it" now? Newspeak indeed.
majormajor 2 hours ago [-]
This goes back to the release of the no-ads Hulu plan. Due at the time to fun shenanigans and weirdness around the exact licensing deals for a few shows. (At least one of those shows is VERY long-running now https://www.reddit.com/r/greysanatomy/comments/12prhpf/no_ad... - not sure if there have been any new ones through the years or currently )
normalaccess 7 hours ago [-]
Advertising is just the surface layer—the excuse. Digital ads rely on collecting as much personal data as possible, but that data is the real prize. This creates a natural partnership with intelligence agencies: they may not legally collect the data themselves, but they can certainly buy access.
This isn’t new. Facebook, for example, received early funding from In-Q-Tel, the CIA’s venture capital arm, and its origins trace back to DARPA’s canceled LifeLog project—a system designed to track and catalog people’s entire lives. Big Tech and government surveillance have been intertwined from the start.
That’s why these companies never face real consequences. They’ve become quasi-government entities, harvesting data on billions under the guise of commerce.
zeroq 3 hours ago [-]
Years ago a friend working in security told me that every telco operator in Elbonia has to have a special room in their HQ that's available 24/7 to certain government officials. Men in black come and go as they please, and while what actually happens in that room remains a mystery, they can tap straight into the system from within, with no restrictions or traceability.
Growing up in the Soviet bloc, I took that story at face value.
After all, democracy was still a new thing, and people hadn't invented privacy concerns yet.
Since then I always thought that some sort of cooperation between companies like Facebook or Google and CIA/DOD was an obvious thing to everyone.
somenameforme 3 hours ago [-]
PRISM [1] is the best evidence of how short-lived most people's memories are. Microsoft, Yahoo, Google, and Facebook were the first 4 members. It makes it pretty funny when companies like Apple (who also joined more than a decade ago) speak about trying to defend customers' privacy against government intrusion. There's so much completely cynical corporate LARPing for PR.
And if one wants to know why big tech from China isn't welcome, be it phones or social media, it's not because of fear of them spying on Americans, but because of the infeasibility of integrating Chinese companies into our own domestic surveillance systems.
If you have ever seen the prank interview between Elijah Wood and Dominic Monaghan, "Do you wear wigs? Have you worn wigs? Will you wear wigs?" and Elijah breaks down laughing in total shock at how hilariously bad the interview is...
...I just picture a similar conversation with a CEO going: "Sir, shareholders want to see more improvement this quarter." CEO: "Do we run ads? Have we run ads? Will we run ads this time?" (The answer is inevitably yes to all of these)
smgit 3 hours ago [-]
Someone has to pay for those ads.
That creates limits to the growth of an ad-based ecosystem.
So the thing to pay attention to is not the revenue growth or profit growth of a platform but the price of an ad: the price to increase reach, the price to boost your post, the price of a presidential campaign, etc. These prices can't grow forever, just like housing prices, or we get the equivalent of a housing bubble.
Want to destabilize the whole system? Pump up ad prices.
PaulDavisThe1st 6 hours ago [-]
I prefer the angle that describes this as a shift from value production to value extraction. Value production means coming up with new goods or services, or new/better ways to make existing ones. Value extraction means looking at existing economic exchanges, and figuring out how to get X percent of some of them.
idle_zealot 4 hours ago [-]
It was always a game of maximizing captured value. In such a game, creating value and capturing some portion of what you produce is far less effective than value extraction: moving value around such that you capture it instead of someone else. A market, then, will by default encourage the latter strategy over the former. However, if the society in charge of a market observes value extraction occurring, it can respond by outlawing the particular extraction strategy being employed and punishing the parties participating. Then, for some time, market participants will turn to producing value instead, making more humble profits, until another avenue for extraction becomes available and quickly becomes the dominant strategy again. This cycle continues until the market eats the forces that would seek to regulate it and rein in extractive practices. That is what we're seeing here: at least in the US, there is basically no political will behind identifying and punishing new forms of harmful behavior, and we barely enforce existing laws regarding e.g. monopolies. Common wisdom among neoliberals and conservatives both is that big companies are good for the economy, and that it's best to tread lightly in regulating their behaviors, lest we interrupt their important value production process. One wonders if there are perhaps financial incentives to be so pro-corporate.
PaulDavisThe1st 3 hours ago [-]
I would argue that since the dawn of capitalism (whenever you place that), there have been moral structures in place to promote value production and stigmatize value extraction. The precise balance between the two moral verdicts changes back and forth over time. In the USA in the 21st century we seem to have entered a period where the promotion of value production is unusually low and simultaneously the stigmatization of value extraction has dropped close to zero.
_DeadFred_ 9 hours ago [-]
Don't forget getting your stock into an index that almost all retirement funds are required to put money into every month, versus the old-school stock market, where it was a market, not a cable bill (you have to pay for the whole bundle whether you want it or not).
nradov 9 hours ago [-]
It's easy to set up an IRA where you can trade individual securities instead of index funds if that's what you want. Most people aren't competent traders and will underperform the index funds.
sudoshred 5 hours ago [-]
As scale grows, so does the moral ambiguity. Megacorps default to "evil" because acting across a large number of circumstances and a large number of events inevitably includes evil, particularly when economic factors are motivating behavior (implicitly or explicitly). Essentially, being "non-evil" becomes more expensive than the value it adds. There is always someone on the other end of a transaction, by definition.
mananaysiempre 10 hours ago [-]
Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
BrenBarn 10 hours ago [-]
> Most suggestions of this nature fail to explain how they will deal with the problem of people just seeing there’s no point in trying for more. On a personal level, I’ve heard people from Norway describe this problem for personal income tax—at some point (notably below a typical US senior software engineer’s earnings) the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating, and you either coast or emigrate. Perhaps that’s not entirely undesirable, but I don’t know if people have contemplated the consequences of the existence of such a de facto ceiling seriously.
I think if you look at quality of life and happiness ratings in Norway it's pretty clear it's far from "entirely undesirable". It's good for people to do things for reasons other than money.
dec0dedab0de 10 hours ago [-]
And the middle ground is to only enforce it on corporations in exchange for the protections given to the owners.
Want to make more? then take personal risk.
ilbeeper 9 hours ago [-]
Great, so we only want the real high-risk takers, the top gamblers, to play in the big league. Those who are so rich there is no way to lose their personal comfort and are blind to the personal risk - and probably are just as careless about everyone else's.
mech422 9 hours ago [-]
Don't we have that already? Bootstrapped startups with the founder's money on the line typically don't play in the 'big leagues' till way after the founder is at risk.
buckle8017 9 hours ago [-]
Norway is Saudi Arabia with snow.
Their entire economy and society are structured around oil extraction.
There are no lessons to learn from Norway unless you live somewhere that oil comes from the ground.
Retric 2 hours ago [-]
Hardly. Per capita they export similar amounts of petroleum products, but Norway's GDP is $80k/person vs $30k/person in Saudi Arabia. Norway exports slightly more per person, but their production costs are significantly higher, which offsets it.
The difference is Norway’s economy being far less dependent on petroleum which is only 40% of their exports.
abdullahkhalids 8 hours ago [-]
Higher taxes are the wrong solution to a very valid problem.
We all recognize that democracy is the correct method for political decision making, even though it's also obvious that, theoretically, a truly benevolent dictator could make better decisions than an elected parliament; in practice such dictators don't really exist.
The same reasoning applies to economic decision making at the society level. If you want a society whose economics reflect the will and ethics of the people, and which serves the benefit of normal people, the obvious thing is to democratize economic decision making. That means that all large corporations must be mostly owned by their workers in roughly 1/N fashion, not by a small class of shareholders. This is the obvious correct solution, because it solves the underlying problem rather than papering over the symptoms like taxation does. If shareholder-owned corporations are extracting wealth from workers or doing unethical things, the obvious solution is to take away their control.
Obviously, some workers will still make their own corporations do evil things, but at least it will be collective responsibility, not forced upon them by others.
giantg2 9 hours ago [-]
"the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating"
Sounds like the effort needed for bonuses here in the US. Why try, if the amount is largely arbitrary and generally lower than your base salary pay rate once you consider all the extra hours? Everything is a sham.
nradov 8 hours ago [-]
Which industry? Bonuses in the tech industry tend to be somewhat arbitrary and thus ineffective for motivating employees. Bonuses in other industries like trading or investment banking tend to be larger (sometimes more than base salary) and directly tied to individual performance and so they're highly effective at motivating ambitious employees.
Increasing marginal income tax rates on highly compensated employees might be a good policy overall. But where are we on the Laffer curve? If we go too far then it really hurts the overall economy.
yodsanklai 9 hours ago [-]
> the amount of work you need to put in for the marginal post-tax krone is so high it’s just demotivating
This is a cliché you hear from right-wingers in any country that has a progressive tax system.
Regarding Norway, taxes aren't in the same ballpark as in some US blue states.
Also, it's a very simplistic view to think that people are only motivated by money. Counter examples abound.
robocat 6 hours ago [-]
> This is a cliche you hear from right winger in any country that has a progressive tax system
Not a cliché - a fact. I'll explain to you.
The incentive structure of progressive taxation is wrong: it only works for the few percent that are extremely money hungry: the few that are willing to work for lower and lower percentage gains.
Normal people say "enough" and they give up once they have the nice house and a few toys (and some retirement money with luck). In New Zealand that is something like USD1.5 million.
I'm on a marginal rate of 39% in New Zealand. I am well off, but I literally am not motivated to try to earn anything extra, because the return is not enough for the extra effort or risk involved. No serial entrepreneurship for me, because it only has downside risk. If I invest and win, then 39%+ is taken as tax; even worse, if I lose, I can't claim my time back. Financial losses only claw back against future income, and my taxable income could drop to $0 due to a COVID-level event, so my financial risk is more than it might naively appear.
Taxation systems do not fairly reward for risk. Especially watch people with no money taking high risks and paying no insurance, because the worst that can happen to them is bankruptcy.
New Zealand loses because the incentive structure for a founder is broken. We are an island so the incentive structure should revolve around bringing in overseas income (presuming the income is spent within NZ). Every marginal dollar brought into the economy helps all citizens and the government.
The incentives were even worse when I was working but was trying to found a company. I needed to invest time, which had the opportunity cost of the wages I wouldn't get as a developer (significant risk that can't be hedged and can't be claimed against tax). 9 times out of 10 a founder wins approximately $0: so expected return needs to be > 10x. A VC fund needs something like > 30x return from the 1 or 2 winning investments. I helped found a successful business but high taxation has meant I haven't reached my 30x yet - chances are I'll be dead before I get a fair return for my risk. I'm not sure I've even reached 10x given I don't know the counterfactual of what my employee income would have become. This is for a business earning good export income.
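The "9 times out of 10 a founder wins approximately $0" logic can be sketched as a toy expected-value model (the numbers below are illustrative round figures from the comment, not anyone's real data):

```python
# Toy expected-value model of the founder's bet.
# Assumption: 1 in 10 founders gets a meaningful payoff; the rest get ~0.
p_win = 0.10
stake = 1.0  # normalized stake: forgone wages, time, capital

for multiple in (5, 10, 11, 30):
    # Losers contribute ~0, so the expected return is just p_win * payoff.
    expected = p_win * multiple * stake
    verdict = "worth it" if expected > stake else "not worth it"
    print(f"{multiple}x payoff -> expected {expected:.2f}x stake ({verdict})")
```

On these assumptions the break-even payoff is exactly 1/p_win = 10x, which matches the comment's "> 10x" threshold; a fund that also pays fees and carries more failures needs correspondingly more from its winners.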
Incentive structures matter - we understand that for employees - however few governments seem to understand that for businesses.
Most people are absolutely ignorant of even basic economics. The underlying drive is the wish to take from those that have more than them. We call it the tall poppy syndrome down here.
(reëdited to add clarity)
roca 2 hours ago [-]
I'm also on the 39% marginal income tax rate in New Zealand. That income tax rate isn't the problem. Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder to make the extra $100K... especially because the taxes paid aren't burned, they mostly go to things I care about.
The income tax rate isn't all that relevant to the costs and benefits of starting a company, so I don't understand that part of your story. The rewards for founding a successful company mostly aren't subject to income tax, and NZ has a very light capital gains regime.
I have started my own company and I do agree that there are some issues that could be addressed. For example, it would be fairer if the years I worked for no income created tax-deductible losses against future income.
But NZ's tax rates are lower than Australia and the USA and most comparable nations, and NZers start a lot of businesses, so I don't think that is one of our major problems at the moment.
robocat 20 minutes ago [-]
> Keeping $60K out of every $100K extra salary I make is plenty of motivation to work harder
That's good that it motivates you. It doesn't motivate me any more. I'm not interested in "investing" more time for the reasons I have said.
> the taxes paid aren't burned, they mostly go to things I care about.
I'm pleased for you. I'd like to put more money towards things I care about.
> The income tax rate isn't all that relevant to the costs and benefits of starting a company
I am just less positive than you: it feels like if you win, you lose; if you lose, you lose bigger. I'm just pointing out that our government talks about supporting businesses, but I've seen the waste from the repetitive attempts to monetise our scientific academics.
> The rewards for founding a successful company mostly aren't subject to income tax
Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?
> NZ has a very light capital gains regime
Which requires you to sell your company to receive the benefits of the lack of CGT. So every successful business in NZ is incentivised to sell. NZ sells its jewels. Because keeping a company means paying income tax every year. NZ is fucking itself by selling anything profitable - usually to foreign buyers.
The one big ticket item I would like to save for is my retirement fund. But Labour/Greens want to take 50% to 100% of capital if you have over 2 million. A bullshit-low drawdown at 4% is $80k/annum before tax, LOL. Say investments go up by 6% per year and you want to withdraw 4%. Then a 2% tax is 100% of your gains. Plus I'm certain they will introduce means testing for super before I am eligible. And younger people are even more fucked IMHO. The reality is I need to plan to pay for the vast majority of my own costs when I retire, but I get to pay to support everybody else. I believe in socialist health care and helping our elderly, but the country is slowly going broke and I can't do much about that. I believe that our government will take whatever I have carefully saved - often to pay for people that were not careful (my peer group is not wealthy, so I see the good and the bad of how our taxes are spent). Why should I try to earn more to save?
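The drawdown arithmetic above can be made concrete with a small sketch, using the comment's own round numbers (a $2M portfolio, 6% growth, 4% drawdown, and a hypothetical 2% wealth tax):

```python
# Sketch of the drawdown-vs-wealth-tax arithmetic (illustrative figures only).
portfolio = 2_000_000              # NZD, the threshold mentioned above
growth = portfolio * 6 // 100      # 6% annual return: 120,000
drawdown = portfolio * 4 // 100    # 4% withdrawn to live on: 80,000 before tax
wealth_tax = portfolio * 2 // 100  # hypothetical 2% wealth tax: 40,000

# What's left of the year's growth after living costs and the tax:
net = growth - drawdown - wealth_tax
print(drawdown, net)  # 80000 0 -- the 2% tax consumes all remaining gains
```

That is, with 6% growth and a 4% drawdown, a 2% wealth tax takes 100% of what the portfolio would otherwise retain, which is the comment's point.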
sweeter 10 hours ago [-]
We're talking about corporations here - where are they going to go? If you had a competent government, you would say "fine, then leave. But your wealth and business are staying here." At some point the government has to do its job. These corporations pull in trillions of dollars; it's wild to me to suggest that suddenly everyone would stop working and making money because they were taxed at a progressive rate. It's an absurd assumption to begin with.
We could literally have high speed rail, healthcare, the best education on the planet and have a high standard of living... and it would be peanuts to them. Instead we have a handful of people with more wealth than 99% of everyone else, while the bottom 75% of those people live in horrifying conditions. The fact that medical bankruptcy is a concept only in the richest country on earth is deeply embarrassing and shameful.
z2 8 hours ago [-]
Historically, unchecked corporate power tends to mirror the flaws of the systems that enable it. For example, the Gilded Age robber barons exploited weak regulations, while tech giants thrive on data privacy gray areas. Maybe the problem isn’t size itself, but the lack of guardrails that scale with corporate influence (e.g., antitrust enforcement, environmental accountability, or worker protections), but what do I know!
I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.
nonrandomstring 4 hours ago [-]
> amorality
That word comes with a lot of boot-up code and dodgy dependencies.
I don't like it.
Did Robert Louis Stevenson make a philosophical error in 1882, supposing that a moral society (with laws etc) can contain within itself a domain outside of morals [0]?
What if we coined the word "alegal"?
"Oh officer... what I'm doing is neither legal nor illegal, it's simply alegal."
Agreed, I think part of it boils down to the concept of 'limited liability' itself, which is a euphemism for 'the right to carry out some degree of evil without consequence.'
Also, scale plays a significant part as well. Any high-exposure organization which operates on a global scale has access to an extremely large pool of candidates to staff its offices... And such candidate pools necessarily include a large number of any given personas... Including large numbers of ethically-challenged individuals and criminals. Without an interview process which actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper-hand in getting hired and then later wedging themselves into positions of power within the company.
Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.
A corporate employee can do something fraudulent and then hold onto their job while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then, if the scheme is exposed, the corporation will swiftly throw the employee under the bus in accordance with the 'unspoken agreement'.
The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.
ericmay 10 hours ago [-]
My problem with this take is that you forget that corporations are made up of people, so in order for the corporation to be evil you have to take into account the aggregate desires and decision making of the employees and shareholders and, frankly, call them all evil. Calling them evil is kind of a silly thing to do anyway, but you cannot divorce the actions of a company from those who run and support it, and I would argue you can't divorce those actions from those who buy the products the company puts out either.
So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't a morally good or bad thing in itself.
Retric 9 hours ago [-]
> made up of people
People making meaningful decisions at mega corporations aren't a random sample of the population; they are self-selected to care a great deal about money and/or power.
Honestly if you wanted to filter the general population to quietly discover who was evil I’d have a hard time finding something more effective. It doesn’t guarantee everyone is actually evil, but actually putting your kids first is a definite hindrance.
The morality of the average employee on the other hand is mostly irrelevant. They aren’t setting policies and if they dislike something they just get replaced.
ericmay 9 hours ago [-]
You'd never figure out who was "evil" because it's just based on your own interpretation of what evil is. Unless of course you want to join me as a moral objectivist? I don't think Google doing military work with the US government is evil. On the other hand, I think the influence and destruction caused by advertising algorithms is. Who gets to decide what is evil?
I take issue with "don't blame the employees". You need people to run these organizations. If you consider the organization to be evil you don't get to then say well the people who are making the thing run aren't evil, they're just following orders or they don't know better. BS. And they'd be replaced if they left? Is that really the best argument we have against "being evil"?
Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it! And all that says anyway is that those replacing those who leave are just evil too.
If you work at one of these companies or buy their products and you literally think they are evil you are either lying to yourself, or actively being complicit in their evil actions. There's just no way around that.
Take personal responsibility. Make tough decisions. Stop abstracting your problems away.
Retric 9 hours ago [-]
If your defense is trying to argue about what’s evil, you’ve already lost.
Putting money before other considerations is what's evil. What's "possible" expands based on your morality; it doesn't contract. If being polite makes a sale, you're going to find a lot of polite salespeople, but how hard are they willing to push that extended warranty?
> Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it!
I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that say about you…
ericmay 9 hours ago [-]
> Putting money before other considerations is what’s evil.
Depends on the considerations and what you consider to be evil. My point wasn't to argue about what's evil (there are probably a few hundred years of philosophy to overcome in that discussion), but to point out that if you truly think an organization is evil, it's not useful to only care about the legal fiction or the CEO or the board that you won't have any impact on; you have to blame the workers who make the evil possible too, and stop using the products. Otherwise you're just deceiving yourself into feeling like you are doing something.
Retric 9 hours ago [-]
Again, you say that as if I am using the products of companies I consider evil.
The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.
I don’t assume people need to be evil to work at such companies because I don’t assume they notice the same things I do.
ericmay 9 hours ago [-]
I was writing about the general case. I apologize if that wasn't clear from the start. I don't know anything about you personally though I'm sure we'd have some great conversations over a glass of wine (or coffee or whatever :) )!
> The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.
This seems to be very common behavior in my experience. Perhaps the rhetoric doesn't match the true beliefs. I'm not sure.
Retric 9 hours ago [-]
Ahh ok, sorry for misunderstanding you.
ericmay 9 hours ago [-]
It's my fault. Sometimes I'm not very clear.
int_19h 5 hours ago [-]
A large corporation is more than the sum of its owners and employees, though. Large organizations in general have an emergent phenomenon: past a certain threshold, they have a "mind of their own", so to speak, which, yes, still consists of the individual actions of the people making up the organization, but those people are no longer acting as they normally would. They get influenced by corporate culture, or fall in line because they are conditioned to socially conform, or follow the (morally repugnant) rule book because otherwise they will be punished, etc. It's almost as if it were a primitive brain with people as neurons, forced into configurations that, above all, are designed to perpetuate its own existence.
8note 7 hours ago [-]
corporations, separate from the people in them, are set up in a way that incentivizes bad behaviour, based on which stakeholders are considered and when, along with which mechanisms result in rewards and which ones get you kicked out.
the architecture of the system is imperfect and creates bad results for people.
CrillRaver 9 hours ago [-]
By definition we can never know for sure, but I believe the number of people who stay silent is many times bigger than those who voice their opinion. They've learned it is unproductive (as you say) or worst case, you're told you've got it all wrong technically speaking.
Complaining is not unproductive, it signals to others they are not alone in their frustrations. Imagine that nobody ever responds or airs their frustrations; would you feel comfortable saying something about it? Maybe you're the only one, better keep quiet then. Or how do you find people who share your frustrations with whom you could organise some kind of pushback?
If I was "this government", I would love for people to shut up and just do their job, pay taxes, and buy products (you don't have to buy them from a megacorp, just spend it - and oh yeah, good luck finding places to buy products from non-megacorps).
ericmay 9 hours ago [-]
My point was that complaining isn't enough and in my experience most people just complain but don't even take the smallest action in line with their views because it inconveniences them. Instead they lull themselves to sleep that something was done because they complained about it, and there's no need to adjust anything in their lives because they "did all they can do".
Instead of taking action they complain, set up an abstract boogeyman to take down, and then nobody can actually take action to make the world better (based on their point of view) because there's nothing anyone can do about Google the evil corporation because it's just some legal fiction. Bonus points for moralizing on the Internet and getting likes to feel even better about not doing anything.
But you can do something. If someone thinks Google is evil they can stop using Gmail or other Google products and services, or even just reduce their usage - maybe you can switch email providers but you only have one good map option. Ok at least you did a little more than you did previously.
BrenBarn 9 hours ago [-]
I don't really agree with some of your assumptions. At many companies, many of the people also are evil. Many people who hold shares and public office are also evil.
I don't think it's necessary to conclude that because a company is evil then everyone who works at the company is evil. But it's sort of like the evilness of the company is a weighted function of the evilness of the people who control it. Someone with a small role may be relatively good while the company overall can still be evil. Someone who merely uses the company's products is even more removed from the company's own level of evil. If the company is evil it usually means there is some relatively small group of people in control of it making evil decisions.
Now, I'm using phraseology here like "is evil" as a shorthand for "takes actions that are evil". The overall level of evilness or goodness of a person is an aggregate of their actions. So a person who works for an evil company or buys an evil company's products "is evil", but only insofar as they do so. I don't think this is even particularly controversial, except insofar as people may prefer alternative terms like "immoral" or "unethical" rather than "evil". It's clear people disagree about which acts or companies are evil, but I think relatively few people view all association with all companies totally neutrally.
I do agree with you that taking personal responsibility is a good step. And, I mean, I think people do that too. All kinds of people avoid buying from certain companies, or buy SRI funds or whatever, for various ethically-based reasons.
However, I don't entirely agree with the view that says it's useless or hypocritical to claim that reform is necessary unless you are going to "do something". Yes, on some level we need to "do something", but saying that something needs to be done is itself doing something. I think the idea that change has to be preceded or built from "saintly" grassroots actions is a pernicious myth that demotivates people from seeking large-scale change. My slogan for this is "Big problems require big solutions".
This means that it's unhelpful to say that, e.g., everyone who wants regulation of activities that Company X does has to first purge themselves of all association with Company X. In many cases a system arises which makes such purges difficult or impossible. As an extreme, if someone lives in an area with few places to get food, they may be forced to patronize a grocery store even if they know that company is evil. Part of "big solutions" means replacing the bad ways of doing things with new things, rather than saying that we first have to get rid of the bad things to get some kind of clean slate before we can build new good things.
sweeter 9 hours ago [-]
You could use this logic to posit that any government, group, system, nation state, militia, business, or otherwise isn't "evil" because you haven't gauged the thoughts, feelings and actions of every single person who comprises that system. That's absurd.
If using AI and other technology to uphold a surveillance state, wage war, do imperialism, and do genocide... isn't evil, then I don't know if you can call anything evil.
And the entire point of taxes is that we all collectively decide we would be better off pooling our labor and resources together so that we can have things like a basic education, healthcare, roads, police, bridges that don't collapse, etc. Politicians and corporations have directly broken and abused this social contract in a multitude of ways. One of those ways is using loopholes to pay taxes at a far lower rate than everyone else; another is paying off politicians and lobbying so that those loopholes never get closed - in fact, the opposite happens. So yes, taxing Google and other mega-corporations is a single, easily identifiable action that can be directly taken to remedy this problem. There is no way around solving the core issue at hand, though, and people have to be able to identify that issue first.
xeonmc 1 hours ago [-]
Don't anthropomorphize the lawnmower.
ninetyninenine 4 hours ago [-]
A megacorp is made up of people. So it's people who are fundamentally evil.
The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.
If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.
Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.
anon373839 4 hours ago [-]
You're half right. Corporations are just made of people. But, they're more than the sum of their parts. The numbers and complexity do more than provide anonymity: they provide a mechanism where individuals can work in concert to accomplish bad things in the aggregate, without (necessarily) requiring any particular individual to violate their conscience. It just happens through the power of incentives and specialization. If you're in upper management, the complexity also makes it easier to turn a blind eye to what is happening down below.
energy123 4 hours ago [-]
Not a useful framing in my view. People follow private incentives. Private incentives are by default not perfectly aligned with external stakeholders. That leads to "evil" behavior. But it's not the people or the org, it's the incentives. You can substitute other people into the same system and get the same outcome.
ninetyninenine 3 hours ago [-]
Not useful, but ultimately true.
People have the incentive to not do evil and to do evil for money. When you abstract the evil away into 1 vote out of thousands then you abstract responsibility and everyone ends up in aggregate doing an inconsequential evil and it adds up to a big evil.
The tragedy of the commons.
Barrin92 4 hours ago [-]
>A megacorp is made up of people. So it's people who are fundamentally evil.
That is to make a mistake of composition. An entity can have properties that none of its parts have. A round tower made out of bricks is round, but none of the bricks are round. You might be evil; your cells aren't evil.
It's often the case that institutions are out of alignment with their members. It can even be the case that all participants of an organization are evil, but the system still functions well (usually one of the arguments for markets, which is one such system). When creating an organization, that is effectively the most basic task: how to structure it such that even when its individual members are up to no good, the functioning of the organization is preserved.
ninetyninenine 3 hours ago [-]
But people are aware companies are evil. Why don't they sell the stock? Why do people still buy the stock?
Obviously because they don't give a shit.
dylan604 10 hours ago [-]
What is Googs going to do, leave money on the table?
And if Googs doesn't do it, someone else will, so it might as well be them that makes money for their shareholders. Technically, couldn't activist shareholders come together and claim that, by not going after this market, the leadership should be replaced with those who would? After all, share price is the only metric that matters.
r00fus 9 hours ago [-]
So "if I don't steal it someone else will"? I'd rate that as evil.
1024core 9 hours ago [-]
Maybe it's more like "If I don't do this job, someone else will"...
moralestapia 9 hours ago [-]
This is the big issue that came along when stable households (mom/dad taking care of you) were replaced by fentanyl and TikTok.
Moral character is something that has to be taught, it doesn't just come out on its own.
If your parents don't do it properly, you'll be just another cog in the soulless machine to which human life is of no value.
greenchair 5 hours ago [-]
bingo. taught and reinforced with consequences.
dylan604 9 hours ago [-]
If you want to take it so far off topic, then sure, go ahead with it.
elliotto 7 hours ago [-]
I think the poster is applying your statement about leaving money on the table. A structural requirement to never leave money on the table is a Moloch-style result that leads to the deterioration of the system into just stealing as much as possible.
gizmondo 6 hours ago [-]
Activist shareholders can claim whatever they want, at the end of the day it's just noise, founders control the company completely.
stevage 9 hours ago [-]
I don't buy that argument. There are things Google does better than competitors, so them doing an evil thing means they are doing it better. Also, they could be spending those resources on something less evil.
dylan604 9 hours ago [-]
Remember when the other AI companies wanted ClosedAI to stop "for humanity's sake", when all it really meant was time for them to catch up? None of these companies are "good". They all know that as soon as one company does it, they all must follow, so why not lead?
olyjohn 6 hours ago [-]
Ah yeah. Everybody else is doing it, so it must be okay to do. Fuck everything about this.
dzhiurgis 6 hours ago [-]
> Google does better than competitors
You need to try another search engine. Years ago...
nirav72 10 hours ago [-]
They’re not evil, they’re amoral and are designed to maximize profits for their investors. Evil is subjective.
josefx 47 minutes ago [-]
> they’re amoral and are designed to maximize profits
Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short-term profit and punish anything that works against it, then it appears to me that they have a simple, but clearly defined, sense of morality centered around profit.
kelipso 8 hours ago [-]
A paperclip-maximizing robot making the excuse that it's just maximizing paperclips, that's what it was designed to do, there's even a statute saying that robots must do only what they were designed to do, so it's not evil, just amoral.
Weird thing is for corporations, it's humans running the whole thing.
pseudalopex 9 hours ago [-]
> They’re not evil, they’re amoral
Most people consider neglect evil in my experience.
moralestapia 9 hours ago [-]
>Evil is subjective.
This is a meme that needs to die, for 99% of cases out there the line between good/bad is very clear cut.
Dumb nihilists keep the world from moving forward with regards to human rights and lawful behavior.
rmrf100 2 hours ago [-]
This is evil.
mainecoder 9 hours ago [-]
>Evil is subjective.
Everything is subjective - moralist bro
It's all priced in - Wall street bro
learn to code - tech bro
layer8 9 hours ago [-]
“Drop” has really become ambiguous in headlines.
abeppu 10 hours ago [-]
I guess a question becomes, how does dropping these self-imposed limitations work as a marketing exercise? Probably most of their customers or prospective customers won't care, but will a cheery multi-colored new product land a little differently? If Northrop Grumman made a smart home hub, you might be reluctant to put it in your living room.
HPMOR 10 hours ago [-]
They are dropping these pledges to avoid securities lawsuits. "Everything is securities fraud", and presumably if they have a stated corporate pledge to do something and knowingly violate it, any drop in the stock price could use this as grounds for a suit.
a_shovel 10 hours ago [-]
Being a defense contractor isn't a problem that a little corporate rearrangement can't fix. Put the consumer division under a new subsidiary with a friendly name and you're golden. Even among the small percentage who know the link, it's likely nobody will really care. For certain markets ("tacticool" gear, consumer firearms) being a defense contractor is even a bonus.
lenerdenator 10 hours ago [-]
Marketing doesn't matter to oligarchs.
portaouflop 6 hours ago [-]
Megacorps are a form of slow AI in itself — totally alien to human minds and essentially uncontrollable
quesera 10 hours ago [-]
"We won't use your dollars and efforts for bad and destructive activities, until we accumulate enough of your dollars and efforts that we no longer care about your opinions".
spacemanspiff01 8 hours ago [-]
I mark when they changed their motto as the turning point.
lenerdenator 10 hours ago [-]
The market solves all problems.
... or at least that's what these people have to be telling themselves at all times.
smallmancontrov 10 hours ago [-]
The market's objectives are wealth-weighted.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
amarcheschi 9 hours ago [-]
Anduril already asked this question with a strong "fuck yes"
Edit: answered, not asked
johnnyanmac 10 hours ago [-]
It is partially the market's fault. If they were demonized for this, there'd at least be a veneer of trying to look moral. Instead they can simply go full mask-off. That's why you shouldn't tolerate the intolerant.
kelseyfrog 10 hours ago [-]
I have full faith that the market[1] will direct the trolley onto the morally optimal track. Its invisible hand will guide mine when I decide, or decide against, pulling the lever. Either way, I can be sure that the result is maximally beneficial to the participants, myself included.
The magic market fairy godmother has decided that TVs with built-in ads and spyware are good for you. The market fairy thinks this is so good for you that there are no longer any alternatives to a smart TV besides "no TV".
The market fairy has also decided that medication commercials on TV are good for you. That your car should report your location, speed, and driving habits to your insurer, car manufacturer, and their 983,764 partners at all times.
Maximally beneficial indeed.
42772827 8 hours ago [-]
> they are fundamentally unconcerned with goodness or morality, and any appearance that they are is purely a marketing exercise
This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.
kqr 10 hours ago [-]
Not evil, perhaps, but run by Moloch[1] -- which is possibly just as bad. Their incentives are set up to throw virtually all human values under the bus because even if they don't, they will be out-marginal-profited by someone that does.
A megacorp is amoral. It has no concern for an individual, any more than a human has concern for an ant, because individuals simply don't register to it. The ant may regard the human as pure evil for the destruction it rains upon the colony, but the ants are not even a thought in the human's mind most of the time.
mystified5016 10 hours ago [-]
> they are fundamentally unconcerned with goodness or morality
No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.
You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.
If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon? Best of luck.
roca 2 hours ago [-]
"The market" is just a lot of people making decisions about what to do with their money. If you want the market to behave differently, be the change you want to see, and teach others to do the same.
random3 10 hours ago [-]
This, but broader. Goodness and morality are subjective and, more importantly, relative measures, making them useless in many situations (such as this one).
While knowing this seems useless, it's actually the missing intrinsic compass, and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it is chosen knowing it will cause negative consequences for the doer).
Everything should primarily be measured against its primary goal. For "for-profit" companies that's obvious in their name and definition.
That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and is a basic tool for debate and decision making.
A4ET8a8uTh0_v2 8 hours ago [-]
I want to be upset over this in an exasperated, oddly naive "why can't we all get along?" frame of mind. I want to, because I know how I would like the world to look, but as a species we, including myself, continually fail to disappoint when it comes to nearly guaranteed self-destruction.
I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.
mkolodny 6 hours ago [-]
A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.
When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.
The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.
nicr_22 6 hours ago [-]
Have a look at what explosive drones are doing in the fight for Ukraine.
Now tell me how you counter a thousand small EMP hardened autonomous drones intent on delivering an explosive payload to one target without AI of some kind?
scottyah 6 hours ago [-]
How about 30k drones coming from a shipping vessel in the port of Los Angeles that start shooting at random people? Inserting a human into the loop (somehow rapidly waking up, moving, logging hundreds of people in to make the kill/no-kill decision per target) would mean accepting far more casualties.
What if some of the 30k drones were manned?
The timeframes of battles are drastically reduced with the latest technology to where humans just can't keep up.
I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?
At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.
siltcakes 6 hours ago [-]
I agree. I don't think there's really a case for the US developing any offensive weapons. Geographically, economically and politically, we are not under any sort of credible threat. Maybe AI based missile defense or something, but we already have a completely unjustified arsenal of offensive weapons and a history of using them amorally.
scottyah 6 hours ago [-]
Without going too far into it, if we laid down all offensive weapons the cartels in Mexico would be inside US borders and killing people within a day.
siltcakes 6 hours ago [-]
You think the cartels aren't attacking us because we have missiles that can hit Mexico? I don't agree. Somewhat tangentially, the cartels only exist because the US made recreational drugs illegal.
scottyah 5 hours ago [-]
Not sure where the missiles came from, you said all offensive weapons so in my mind I was picturing basic firearms.
Drug trade might be their most profitable business but I think you're missing a whole lot of cultural context by saying the US's policy on drugs is their sole reason for existing. Plenty of cartels deal in sex trafficking, kidnapping, extortion, and even mining and logging today.
catlikesshrimp 6 hours ago [-]
"Geographically, economically and politically, we are not under any sort of credible threat. "
The US is politically and economically declining already. And its area of influence has been weakening since... the 90's?
It would be bad strategy to not do anything until you feel hopelessly threatened.
siltcakes 6 hours ago [-]
I don't think we would ever be justified in going on the offensive nor do I think that makes us safer in any way.
computerthings 5 hours ago [-]
> AI weapons are obviously dangerous, and could easily get out of control.
The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.
Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have the ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.
The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch IMO.
dgfitz 8 hours ago [-]
Could you imagine how the entire world would look if everyone took truth serum for a whole year - how different the world might be?
People try to cope and say others are guided by lies. In the US, people knew exactly what they were getting, and I'm sure the same is true in other "democracies".
cat_plus_plus 6 hours ago [-]
We would all be covered in bruises from getting slapped all day long.
WOTERMEON 6 hours ago [-]
I mean you can see that even at any company at any size. I think it’s human nature.
TheSpiceIsLife 6 hours ago [-]
If you can’t cope with the lies, what makes you think you’d cope with the truth? Which, I guarantee you, is orders of magnitude more horrifying.
mitthrowaway2 3 hours ago [-]
What is true is already so.
Owning up to it doesn't make it worse.
Not being open about it doesn't make it go away.
And because it's true, it is what is there to be interacted with.
Anything untrue isn't there to be lived.
People can stand what is true,
for they are already enduring it.
—Eugene Gendlin
dgfitz 5 hours ago [-]
If who can’t cope with what lies?
Yes, the truth would also stink. I’m sure it’s also horrifying.
6 hours ago [-]
Sperfunctor 8 hours ago [-]
[flagged]
A4ET8a8uTh0_v2 7 hours ago [-]
That.. is a new one. I thought I am fairly aware of various forms of coded language. Care to elaborate?
dgfitz 7 hours ago [-]
Fwiw I’m way too dumb to speak in coded language.
10u152 7 hours ago [-]
[flagged]
gessha 7 hours ago [-]
“Grownups never understand anything by themselves, and it is tiresome for children to be always and forever explaining things to them”
- Antoine de Saint-Exupery, The Little Prince
ziddoap 7 hours ago [-]
Your point could be made, probably even stronger than it is currently, by omitting the insult at the start.
justonenote 7 hours ago [-]
[flagged]
ziddoap 7 hours ago [-]
>than a chatgpt style
Literally just remove the first 4 words and keep the rest of the comment the same, and it's a better comment. No idea what chatgpt has to do with it.
justonenote 7 hours ago [-]
That would be removing information, and strictly worse than including it.
Communication is about communicating information; sometimes a terse, short, aggressive style is the most effective way. It activates neurons in a way a paragraph of polite argumentation doesn't.
Hello71 7 hours ago [-]
the contention of your respondents and downvoters is that regardless of your intention, the extra information actually communicated is "i'm an asshole".
justonenote 7 hours ago [-]
Fine, that's still extra information.
More accurately, in the context of the comment, it's "I'm gonna be an asshole to you because I think you don't have the life experience I do", which is at least some kind of signal.
I wasn't the original responder btw.
cortesoft 7 hours ago [-]
“More effective” at what? No one is ever going to be convinced by an argument that begins with an insult. So what do you mean by it will be more effective?
justonenote 7 hours ago [-]
Do you honestly think an insult never brought about a change in a person? You never think a carefully landed and accurate insult made someone reconsider their position?
Weird, because in my experience, that has happened to every single person I know and myself. Whether it's at the start or end of a comment is not really the point.
mrbungie 7 hours ago [-]
It may, depending on context, but that's not the point, and in fact it is widely recognized as an ad hominem argument and fallacious by definition.
Most emotionally mature people would stop arguing after something like that.
dgfitz 7 hours ago [-]
Welp, in this specific instance, your insults are a microcosm of the election results.
Stinks, huh?
justonenote 7 hours ago [-]
Things are very black and white these days, no room for shades.
lioeters 7 hours ago [-]
Similarly your point would have communicated better without the unnecessary and adolescent final sentence.
justonenote 7 hours ago [-]
It was for effect.
Maybe you'd prefer it if we were all maximally polite drones, but that's not how humans are (going back to GP's point), and I don't think it's a state that anyone truly wants either.
Frederation 7 hours ago [-]
[flagged]
justonenote 7 hours ago [-]
No comment.
mv4 7 hours ago [-]
I don't think they meant the "truth" truth but people saying what they really think and being open about their motivations.
TheSpiceIsLife 6 hours ago [-]
Sounds horrible.
Deception is bad enough, knowing people’s true motivations and opinions surely would be worse.
What truly motivates other people is largely a mystery, and what motivates oneself is wildly mysterious to that oneself indeed.
explodes 7 hours ago [-]
Insults are not part of the community guidelines
iwontberude 7 hours ago [-]
Being childlike is a blessing and a compliment in my book.
leptons 7 hours ago [-]
The only people who don't think truth matters are those who would profit from lies.
iwontberude 7 hours ago [-]
moralistic relativism creates cover for egocentrism to destroy us
turbojet1321 7 hours ago [-]
Yes but unfortunately that doesn't make it false.
yubblegum 7 hours ago [-]
They are not a child. You are just projecting your broken moral compass ("all layers of grey") and throwing out red herrings about motivation.
dgfitz 7 hours ago [-]
Ah nuts, I’m not trying to project anything at all. Sincerely.
Swannie 6 hours ago [-]
Yes, have you watched The Wheel of Time? Better to read the books... the characters bound to tell the truth are experts in double meanings.
Successful politicans and sociopaths are experts in double meanings.
"I will not drop bombs on Acmeland." Instead, I will send missiles.
"At this point in time, we do not intend to end the tariffs." The intent will change when conditions change, which is forecast next week.
"We are not in negotations to acquire AI Co for $1B." We are negotiating for $0.9B.
"Our results show an improvement for a majority of recipients." 51% saw an improvement of 1%, 49% saw a decline of 5%...
scottyah 6 hours ago [-]
Humans' short context windows, combined with too many areas to research and stay up to date on, are why I don't believe any version of democracy I've seen can succeed, and they're the only real argument I see for some kind of ASI government/policing (once we solve the whole universal judgement system issue). I'd love a world where you would be assisted through tax season, ill-intentioned drivers were properly incentivized not to risk others' lives, and you could at least be made aware before breaking laws.
Eliminating the need to lie/misguide people to sway them would be such a crazy world.
dgfitz 6 hours ago [-]
Wow.
Yes I read the whole series. It was a fucking marathon.
I can’t quite tie your point into the series directly, other than to agree that elected officials are, almost by definition, professional liars.
(Tugs on braid)
mcmcmc 3 hours ago [-]
Not the GP, but I think what they're getting at is that Aes Sedai can deceive without saying anything untruthful. So a hypothetical truth serum wouldn't necessarily guarantee honesty.
octopoc 7 hours ago [-]
In WWII neither side used poison gas. It doesn’t have to be this way.
It means all nations can agree to not unleash AI based weapons on the world. Sadly I don't see this happening.
6 hours ago [-]
eterm 6 hours ago [-]
I think we can assume good faith and that the grandparent merely forgot to add "in combat" to that statement, rather than deliberately trying to downplay the use of Zyklon B.
dahdum 6 hours ago [-]
It took the use of poison gas to get countries on board, and some will still use it. Just more carefully.
Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.
int_19h 5 hours ago [-]
Russia would most definitely not agree to it given that Ukraine is already deploying autonomous drones against it.
SanjayMehta 6 hours ago [-]
Not on the battlefield.
asdfman123 6 hours ago [-]
What we ideally should have done as humans is find a way to keep AI out of combat entirely.
Now that's off the table, I think America should have AI weapons because everyone else will be developing them as quickly as possible.
matthest 6 hours ago [-]
The path we're on was inevitable the second man discovered fire.
No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.
That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.
bbqfog 7 hours ago [-]
You can and should be upset. No reason to become complacent, that's a path to accelerated destruction.
getlawgdon 6 hours ago [-]
Amen.
bodegajed 7 hours ago [-]
"The philosophers have only interpreted the world, in various ways. The point, however, is to change it." - Karl Marx
lmm 6 hours ago [-]
And how did that work out for him? If he'd stuck to interpreting the world, it's hard to say the world wouldn't have been much better off.
anothercoup 7 hours ago [-]
> who will do that and more.
They may do as much as us, but not more. Let's stop pretending every nation that developed nukes dropped them on a city. Nobody has proven they are willing to go as far as the US.
Nukes didn't wipe us out. Neither will AI. It never ends with doomsday fearmongering. But that's because fear sells. Or better yet, fear justifies spending.
fwip 7 hours ago [-]
Nukes haven't yet wiped us out. They still may.
jknoepfler 6 hours ago [-]
Survivorship Bias: the Board Game that Ends Abruptly.
mr_00ff00 7 hours ago [-]
Technically the US has never dropped nukes, those were atomic bombs.
Second, I don't understand how the atomic bomb argument makes sense. Germany was developing them and would have used them if it had gotten there first.
Are you suggesting the US really is truly the only nation that would ever have used atomic weapons? That if Japan made it first they would have spared China or the US?
saagarjha 6 hours ago [-]
Care to explain how an atomic bomb is not a nuke?
FeteCommuniste 6 hours ago [-]
Atom bombs are definitely nukes. Maybe the GP was thinking of thermonuclear (fission-fusion) weapons.
anothercoup 7 hours ago [-]
[flagged]
yodsanklai 7 hours ago [-]
> We actually have competitors in that space, who will do that and more
So what? Can't Google find other sources of revenue than building weapons?
pixl97 6 hours ago [-]
Why would it turn down billions in government contracts unless otherwise punished by its shareholders?
II2II 6 hours ago [-]
Most of the early research into computers was funded for military applications. There is a reason why Silicon Valley became a hub for technological development.
franczesko 4 minutes ago [-]
And some still claim blocking ads on YouTube is immoral..
karaterobot 10 hours ago [-]
Is this more or less ethical than OpenAI getting a DoD contract to deploy models on the battlefield less than a year after saying that would never happen, with the excuse being "well, we only meant certain kinds of warfare or military purposes, obviously"? I guess my question is: isn't there something more honest about an open heel-turn, like Google's, compared to one where you maintain the fiction that you're still trying to do the right thing?
CobrastanJorji 9 hours ago [-]
I think it's unfair to bring up OpenAI's commitment to its own principles as any sort of bar of success for anyone else. That's a bit like saying "Yes, this does look like they're yielding to foreign tyrants, but is this more or less ethical than Vidkun Quisling's tenure as head of Norway?"
leafmeal 5 hours ago [-]
It's relevant to compare though because Google has done the same thing now.
j2kun 3 hours ago [-]
It's... Unfair to compare two software companies? Because of Norway?
callc 5 hours ago [-]
At least Google employees will sign petitions and do things that follow a moral code.
OpenAI is sneaky and slimy and headed by a psycho narcissist. Makes Pichai look like a saint.
Ethically, it's the same. But if someone was pointing a gun at me I'd rather have someone with some empathy behind the trigger than the personification of a company that bleeds high-level execs and… insert many problems here
danans 4 hours ago [-]
> At least Google employees will sign petitions and do things that follow a moral code.
It hardly matters what employees think anymore when the executives are weather-vanes who point in the direction of wealth and power over all else (just like the executives at their competitors).
In case you missed it, a few days back Google asked all employees who don't believe in their "mission" to voluntarily resign.
CobrastanJorji 4 hours ago [-]
That's not at all what happened. One of Google's divisions offered a "voluntary exit" in lieu of or in addition to an upcoming layoff, and the email announcing it suggested that it could be a good option for some folks, for example people struggling or folks who didn't like Google's direction.
That is not the same thing as asking everyone who doesn't believe in the mission to please resign.
danans 16 minutes ago [-]
> for folks who didn't like Google's direction
Which rhymes pretty well with not believing in their mission. They are telling people to leave instead of trying to influence the direction from the inside.
LexGray 3 hours ago [-]
Now that their direction has done a 180 it is pretty much telling everyone with seniority to just quit.
chubot 6 hours ago [-]
It is interesting how these companies shift with the political winds
Just like Meta announced some changes around the time of inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration
I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power
hsuduebc2 6 hours ago [-]
I would say it's natural. Their one and only incentive isn't, as they'd have you believe, to "make the world a better place" or some similar awkward corpo charade, but to make a profit. That's the purpose companies are created for, and they always follow it.
chubot 5 hours ago [-]
Sure, but I'd also say that the employee base has a line that is different than the government's, and that does matter for making profit. Creative and independent employees generally produce more than ones who are just following what the boss says
Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.
But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.
That applies to both companies and people:
- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, .... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.
- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!
nerdponx 6 hours ago [-]
I see it as the natural extension of the Chomsky "manufacturing consent" propaganda model. The people in key positions of power and authority know who their masters are, and everyone below them falls into line.
matthest 6 hours ago [-]
I think in theory it's a good thing that companies shift with the political winds.
Companies technically have disproportionate power.
It's better that they shift according to the will of the people.
The alternative, that companies act according to their own will, could be much worse.
atlasunshrugged 10 hours ago [-]
I'm guessing this will be a somewhat controversial view here, but I think this is net good. The world is more turbulent than at any other time in my life, there is war in Europe, and the U.S. needs every advantage it can get to improve its defense. Companies like Google, OpenAI, Microsoft, can and should be working with the government on defense projects -- I would much rather the Department of Defense have access to the best tools from the private sector than depend on some legacy prime contractor that doesn't have any real tech capabilities.
croes 10 hours ago [-]
> the U.S. needs every advantage it can get to improve its defense
That’s one of the reasons for the turbulent times.
Let's face the truth: most defense tech can easily be used for offense, and given the state of online security, every advance ends up in the wrong hands.
Maybe it’s time to pause to make it more difficult for those wrong hands.
stickfigure 9 hours ago [-]
Just how do you propose to remove those tools from Putin's, Xi's, Khamenei's, or Kim Jong-Un's hands?
croes 9 hours ago [-]
For removal it’s too late, but maybe slowing down is still possible.
There is no advancement that won't end up in the wrong hands, and most likely it will leak from a US company.
Sabinus 3 hours ago [-]
So the US needs to develop AI faster than the dictators to keep ahead of them, but not so fast that it leaks advancements that accelerate the dictators' AI?
atlasunshrugged 10 hours ago [-]
I guess you could put that on the U.S.'s plate, and no doubt America has caused many issues around the world, but I think in general it's a good actor. Biggest conflicts today: Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.; Sudan -- maybe it's a lack of knowledge on my part, but I don't think it's fair to place much responsibility on the U.S. (esp relative to other actors); ditto DRC/Rwanda
Yes, many defensive uses of technologies can be used for offense. When I say defense, I also include offense there as I don't believe you can just have a defensive posture alone to maintain one's defense, you need deterrence too. Personally I'm quite happy to see many in Silicon Valley embrace defense-tech and build missiles (ex. recent YC co), munitions, and dual-use tech. The world is a scary and dangerous place, and awful people will take advantage of the weakness of others if they can. Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant
CapricornNoble 22 minutes ago [-]
> Ukraine -- I would squarely put this on Russia, nothing to do with the U.S.
Every kinetic reaction by Russia in Georgia and Ukraine is downstream of major destabilizing non-kinetic actions by the US.
You don't think the US fomenting revolutions in Russia's near-abroad was in any way a contributing factor to Russian understanding of the strategic situation on its western border? [1] You don't think the US unilaterally withdrawing from the ABM treaty[2], and then following that up with plans to put ABMs in Eastern Europe[3], were factors in the security stability of the region? You don't think that the US pushing to enlarge NATO without adjusting the CFE treaty to reflect the inclusion of new US allies had an impact? [4][5] It's long been known that the Russian military lacked the capacity for sustained offensive/expeditionary operations outside of its borders.[6][7] Until ~2014 it didn't even possess the force structure for peer warfare, as it had re-oriented its organization for counter-insurgency in the Caucasus. So what was driving US actions in Eastern Europe? This was a question US contrarians and politicians such as Pat Buchanan were asking as early as 1997. We've had almost 3 decades of American thinkers cautioning that pissing around in Russia's western underbelly would eventually trigger a catastrophic reaction[8], and here we are, with the Ukrainians paying the butcher's bill.
In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largess buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years). As a consequence the US has pissed away one of the greatest accomplishments of the Cold War: the Sino-Soviet Split. Our hamfisted attempts to kick Russia while it was down have now forced the two principle powers on the Eurasian landmass back into bed with each other. This is NOT how we win The Great Game.
> Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant.
It would help to lead with this context. My position is that our actions ENSURE that a hostile Eurasian power bloc will become dominant. We should have used far less stick to integrate Russia into the Western security structure, as well as simply engaged them without looking down our noses at them as a defeated has-been power (play to their ego as a Great Power). A US-friendly Russia is needed to over-extend China militarily. We need China to be forced into committing forces to the long Sino-Russian border, much as Ukraine must garrison its border with Belarus. We need to starve the PRC's industry of cheap natural resources. Now the China-Russia-Iran soft-alliance has the advantage of interior lines across the whole continent, and a super-charged Chinese industrial base fed by Siberia. Due to the tyranny of distance, this will be a near-impossible nut to crack for the US in a conflict.
There are many 'interesting' events that happened because of the invasion of Iraq, looking for weapons of mass destruction that never existed.
This led to the destabilization of the entire Middle East, several wars, and ISIS.
One could say that the unconditional support for Israeli policy in the Middle East since 1950 also brought its load of conflicts.
The whole of South America is fcked because of illegal US interventions from WW2 to the end of the Cold War.
And the list goes on and on.
I mean, it would be much faster to state what good impact US foreign policy has had on the world in the last 100 years.
skulk 4 hours ago [-]
> I mean, it would be much faster to state what good impact US foreign policy has had on the world in the last 100 years.
It could have wondrously good impacts, but that only matters in a moral framework where good actions morally cancel out bad ones.
TiredOfLife 7 hours ago [-]
US went out of their way to disarm Ukraine. Not only nukes, but also conventional weapons.
croes 9 hours ago [-]
I'm not talking about good and bad but about naive.
„Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Propaganda and disinformation were problems before the AI hype, but now it's gotten worse.
In the race for AGI they ignored the risks and didn't think of useful countermeasures.
It's easier to spread lies with AI than to spread the truth.
We're entering a dark age where most people can't distinguish fake from real, because the fakes have become so convincing.
Audio, photo and video have lost their evidential value.
colonCapitalDee 9 hours ago [-]
Agreed. Any other answer is just burying your head in the sand. Our adversaries are forging ahead: China plans to integrate AI into every level of its military, and Russia is getting a crash course on drone warfare in Ukraine. You can build a FPV drone with Chinese parts and the warhead scavenged from an RPG for about $500 [1]. Every month, tens of thousands of these drones fly on Ukrainian battlefields and kill thousands of people. This is happening whether we like it or not; the train is leaving the station and we can either get on board or be left behind.
One of my chief worries about LLMs for intelligence agencies is the ability to scale textual analysis. Previously there at least had to be an agent taking an interest in you; today an LLM could theoretically read all text you've ever touched and flag anything from legal violations to political sentiments.
This was already possible long before LLMs came along. I also doubt that an LLM is the best tool for this at scale; if you're talking about sifting through billions of messages, it gets too expensive very fast.
int_19h 5 hours ago [-]
It's only expensive if you throw all data directly at the largest models that you have. But the usual way to apply LMs to such large amounts of data is by staggering them: you have very small & fast classifiers operating first to weed out anything vaguely suspicious (and you train them to be aggressive - false positives are okay, false negatives are not). Things that get through get reviewed by a more advanced model. Repeat the loop as many times as needed for best throughput.
No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.
Also, "expensive" is relative. When you consider how much US has historically been willing to spend on such things...
causal 9 hours ago [-]
LLMs can do more than whatever we had before. Sentiment analysis and keyword searches only worked so well; LLMs understand meaning and intent. Cost and scale are not bottlenecks for long.
aucisson_masque 8 hours ago [-]
> if you're talking about sifting through billions of messages it gets too expensive very fast.
Who's paying for that tho? The same dumbass who gets spied on, so I don't see that as a reason why it wouldn't happen. Cash is unlimited.
randomNumber7 9 hours ago [-]
You could even use audio to text before that and tap all conversations in a country...
QjdgatkH 5 hours ago [-]
A country that now threatens the annexation of Greenland and advocates for a complete resettlement of all Palestinians to Jordan and Egypt certainly needs weapons for crowd control.
These weapons could also come in handy domestically if people find out that both parties screw them all the time.
I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.
The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.
So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.
It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.
reissbaker 4 hours ago [-]
"Russia hasn't tried to invade the EU" is quite weasel-word-y. They certainly have invaded countries in Europe, specifically Ukraine; the only reason they didn't invade countries in the European Union itself is that would trigger a war that they would face massive casualties from and inevitably lose, in part due to NATO alliances.
Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.
LexGray 3 hours ago [-]
Isn't China building a large modern sea fleet and increasing military pressure on many of our allies? I would not call that threat illusory. Also, their economic policies are very predatory, supporting other countries in exchange for things which cannot be taken back. Why invade when you can just take what you need.
The orange man is completely ineffectual on both fronts. Will not spend the money on the military and too inept to make a deal that doesn’t cost in the long run.
tmnvdb 8 hours ago [-]
Good, this idea that all weapons are evil is an insane luxury belief.
siltcakes 6 hours ago [-]
Do you see nothing wrong with the same company that makes YouTube Kids making killer AI? I think creating weapons is often evil. I think companies that have consumer brands should never make weapons, at the very least it's white washing what's really going on. At worst, they can leverage their media properties for propaganda purposes, spy on your Gmail and Maps usage and act as a vector for the most nefarious cyber terrorism imaginable.
greenavocado 6 hours ago [-]
The same company that brings you cute cartoons for kids might also develop technologies with military applications, but that doesn't make them inherently "evil." It just makes them a microcosm of humanity's duality: the same species that created the Mona Lisa also invented napalm.
Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.
So, by all means, keep pointing out the hypocrisy of a company that makes both YouTube Kids and killer AI. Just don't pretend you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, are technologies born, ironically, from military research.
jcgrillo 6 hours ago [-]
It sounds like they're distracted, tbh. It's hard to imagine how a company that specializes in getting children addicted to unboxing videos can possibly be good at killing people.. oh, wait, maybe not after all..
ckrapu 8 hours ago [-]
There is a wide range of moral and practical opinions between the statement “all weapons are evil” and “global corporations ought not to develop autonomous weapons”.
cortesoft 7 hours ago [-]
Who should develop autonomous weapons?
IIAOPSW 7 hours ago [-]
Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Ideally no one, and if the cost / expertise is so niche that only a handful of sophisticated actors could possibly actually do it, then in fact (by way of enforceable treaty) no one.
cakealert 6 hours ago [-]
> Who should develop biological weapons? Chemical weapons? Nuclear weapons?
Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.
> enforceable treaty
Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.
aydyn 6 hours ago [-]
So in other words, cede military superiority to your enemies? Come on you already know the rational solution to prisoner's dilemma, MAD, etc.
> enforceable treaty
How would you enforce it after you get nuked?
lmm 6 hours ago [-]
> in other words, cede military superiority to your enemies?
We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race to the bottom for every marginal fraction-of-a-percent edge. We've managed to e.g. ban antipersonnel landmines; this is an extremely similar case.
> How would you enforce it after you get nuked?
And yet we've somehow managed to avoid getting into nuclear wars.
Sabinus 3 hours ago [-]
Refusal to make or use AI-enabled weapons is not "making war slightly more expensive for yourself", it's like giving up on the Manhattan Project because the product is dangerous.
Feels good, but will lead to disaster in the long run.
pixl97 6 hours ago [-]
Because after proliferation the cost would be too great, and nukes aren't that useful for anything other than wiping out cities.
AI, on the other hand, seems to be very multi-purpose.
vasco 7 hours ago [-]
Palantir exists, this would just be competition. It's not like Google is the only company capable of creating autonomous weapons so if they abstain the world is saved. They just want a piece of the pie. The problem is the pie comes with dead babies, but if you forget that part it's alright.
astrange 7 hours ago [-]
Palantir doesn't make autonomous weapons, they sell SQL queries and have an evil-sounding name because it recruits juniors who think the name is cool.
Might be thinking of Anduril.
cookiengineer 6 hours ago [-]
Palantir literally developed Lavender, which has been used for autonomous targeting in the bombardment of the Gaza Strip.
Palantir provides a combat management system in Ukraine. That system collects and analyzes intelligence, including drone video streams, and identifies targets. Right now people are still in the loop, though I think that will naturally go away in the near future.
tmnvdb 7 hours ago [-]
With or without autonomous weapons, war is always a sordid business with 'dead babies', this is not in itself a fact that tells us what weapons systems to develop.
darth_avocado 7 hours ago [-]
Yet there are boundaries on which weapons we can and cannot develop: Nuclear, Chemical, Biological etc.
tmnvdb 7 hours ago [-]
Indeed. Usually weapons are banned if the damage is high and indiscriminate while the military usefulness is low.
There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. The military usefulness, on the other hand, seems to be very high and increasing.
bluefirebrand 6 hours ago [-]
It seems like the sort of thing we shouldn't be wanting evidence of in order to avoid, though
Like skydiving without a parachute, I think we should accept it is a bad idea without needing a double blind study
int_19h 5 hours ago [-]
It's a bit too late for that, since Ukraine and Russia are both already using AI-controlled drones in combat.
tmnvdb 6 hours ago [-]
The risks need to be weighed against the downside of not deploying a capable system against your enemies.
_bin_ 6 hours ago [-]
those are mostly drawn on how difficult it is to manage their effects. chemical weapons are hard to target, nukes are too (unless one dials the yield down enough that there's little point) and make land unusable for years, and biological weapons can't really be contained to military targets.
we have, of course, developed all three. they have gone a long way towards keeping us safe over the past century.
CamperBob2 8 hours ago [-]
Tell Putin. He will entertain no such inhibitions.
ignoramous 7 hours ago [-]
> no such inhibitions
Propping up evil figure/regime/ideology (Bolsheviks/Communists) to justify remorseless evilness (Concentration camps/Nuclear bomb) isn't new nor unique, but particularly predictable.
gosub100 6 hours ago [-]
Nukes saved countless US lives from being lost to a regime that brought us into the war. And it's incalculable how many wars they have prevented.
CamperBob2 7 hours ago [-]
Sadly, attempts at equating evil figures/regimes/ideologies with those who fight back against them are equally predictable.
vkou 7 hours ago [-]
We have Putin at home, he spent the past weekend making populist noises about annexing his neighbours over bullshit pretenses.
I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.
How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?
tmnvdb 6 hours ago [-]
I understand the sentiment but the logical conclusion of that argument is that the US should disarm and cease existing.
vkou 6 hours ago [-]
The better logical conclusion of that argument is that the US needs to remove him, and replace him with someone who isn't threatening innocent people.
That it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.
In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.
You lose any moral ground you have when you are an unprovoked aggressor.
pixl97 6 hours ago [-]
Ya go poke people with nukes and see how that works out
vkou 6 hours ago [-]
You are making an excellent argument for nuclear proliferation.
tmnvdb 6 hours ago [-]
I'm not a fan of Trump, but I also don't feel he has been so bad that surrendering the world order to Russia and China is a rational action that minimizes suffering. That seems to be an argument more about signalling that you really dislike Trump than about a rational consideration of all options available to us.
vkou 6 hours ago [-]
It's not a shallow, dismissable, just-your-opinion-maaan 'dislike' to observe that he is being an aggressor. Just like it's not a 'dislike' to observe that Putin is being one.
There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.
_bin_ 6 hours ago [-]
"putin at home" is baseless flame-bait. delete your comment.
CamperBob2 3 hours ago [-]
TBF, vkou's post disagrees with mine, but I don't disagree with it. If pressed to offer a forecast, I think the moral dilemmas we're about to face as Americans will be both disturbing and intimidating, with a 50% chance of horrifying.
captainbland 6 hours ago [-]
Whatever your feelings on that are, it's hardly unreasonable to have misgivings about your search and YouTube watches going to fund sloppy AI weapons programmes that probably won't even kill the right people.
sangnoir 6 hours ago [-]
It's not a luxury belief for a multinational tech company that intends to remain in business in countries that are not allied to the US. Being seen as independent of the military has a dollar value, but that may be smaller than value of defense contracts Google hopes to get.
ziddoap 8 hours ago [-]
>all weapons are evil
That wasn't the quote that was removed. Not even close, really.
astrange 7 hours ago [-]
It's definitely an opinion Google employees had in the last decade.
Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!
gosub100 6 hours ago [-]
they have no problems heavily censoring law-abiding gun youtubers. Even changing the rules and giving them strikes retroactively. I guess it's "weapons for me, but not for thee".
jjj123 7 hours ago [-]
It’s my military too and I believe the US military does many, many evil things that I want no part of.
astrange 6 hours ago [-]
I think the thing to remember is, however bad it is, it could always get worse.
A world without the US navy is one without sea shipping because pirates will come back.
dark_glass 7 hours ago [-]
"We sleep safely at night because rough men stand ready to visit violence on those who would harm us"
switchbak 6 hours ago [-]
And these same organizations fuel conflicts that actively make the USA less safe. These organizations can both do great things (hostage rescues) and terrible things (initiating coups), and it’s upon the citizenry to ensure that these forces are put to use only where justified. That is to say almost never.
astrange 6 hours ago [-]
We've stopped South American coups more recently than we've initiated them. (in the last few years, in Brazil and Bolivia)
darth_avocado 7 hours ago [-]
Weapons inherently aren’t evil, which is why everyone has kitchen knives. People use weapons to do evil.
The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.
gerdesj 7 hours ago [-]
Who is to say a wielder of a kitchen knife is not "morally bankrupt" - whatever that means.
In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely mildly malevolent.
You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.
Sadly, LLMs have been and will be wired up to drones, and the results will be unpredictable.
psunavy03 7 hours ago [-]
Then we should be encouraging their development by the governments of liberal democratic nations as opposed to authoritarian regimes.
burningChrome 6 hours ago [-]
Serious question.
How would we go about doing that?
Every kind of nefarious way to keep the truth at bay in authoritarian regimes is always on the table: cracking iPhones to track journalists covering these regimes, snooping on email, and now using AI to do the same. It's all the same thing, just with updated and improved tools.
Just like Kevin Mitnick selling zero day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.
int_19h 5 hours ago [-]
That's the problem with all weapons.
The concern with AI weapons specifically is that if something goes wrong, they might not even be in the hands of the people at all, but pursue their own objective.
osmsucks 6 hours ago [-]
The difference there is that a knife has some obvious, benign use cases. Smart weapon targeting has only one use case, and it's to do harm to others.
xdennis 6 hours ago [-]
AI weapons do have benign use cases: harming enemies.
When China attacks with AI weapons, do you expect the free world to fight back armed with moral superiority? No. We need even more lethal AI weapons.
Mutual assured destruction has worked so far for nukes.
Dalewyn 7 hours ago [-]
Much as it is the case with guns, why is the "problem" the tools or provider of the tools and not the user of the tools?
pixl97 6 hours ago [-]
Depends if next years gun gets up and shoots you in the head on its own accord.
leptons 7 hours ago [-]
A kitchen knife is a tool. It can be used as a weapon.
A car is a tool. It can be used as a weapon.
Even water and air can be used as a weapon if you try hard enough. There is probably nothing on this planet that couldn't be used as a weapon.
That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
gizmondo 6 hours ago [-]
> That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.
leptons 2 hours ago [-]
>So you're in favor of losing a war and becoming a subject of the enemy?
I never said that. Please don't reply to comments you made up in your head.
Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. And it has never won a war, and there's no guarantee that it would help to win any war.
pyinstallwoes 7 hours ago [-]
“…so which is it then? Is it really robots that are wired to kill people, or the humans wiring them?”
PessimalDecimal 2 hours ago [-]
You're either misdirecting the discussion, or have missed the point. The statement isn't about weapons, but the means of _control_ of weapons.
It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of the best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_ but it's possible.)
Automating weapons is a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability for a small number of people to harm a vast number of others is serious. People are right to worry about it.
bbqfog 7 hours ago [-]
The US is not under any kind of credible threat and in fact is the aggressor across the globe and perpetrator of crimes against humanity at scale. This is not a recent phenomenon and has been going on as long as this country has existed.
tmnvdb 5 hours ago [-]
The US mainland is not currently under threat but the US world system is.
ignoramous 7 hours ago [-]
> all weapons are evil is an insane luxury belief
It isn't this belief that's insane, but a total belief in the purity of weapons that is.
psunavy03 7 hours ago [-]
You don't have to have a "total belief in the purity of weapons" to recognize that military tech is a regrettable but necessary thing for a nation to pursue.
ignoramous 4 hours ago [-]
> You don't have to have "total belief in the purity of weapons"...
Of course. My point was, it is insane for those who do.
aprilthird2021 7 hours ago [-]
It's a luxury belief to think you won't one day be scanned by an AI to determine if you're killable or not
daft_pink 8 hours ago [-]
I feel like we're just in that period of Downton Abbey where everyone is waiting for World War I to start. Everyone can feel that it's coming and no one can do anything about it.
The reality is, in a war between the West and Russia/Iran/North Korea/China - whomever we end up fighting - we're going to do whatever we can so that Western civilization and its soldiers survive and win.
Ultimately Google is a western company and if war breaks out not supporting our civilization/military is going to be wildly unpopular and turn them into a pariah and anything to the contrary was never going to happen.
iteratethis 8 hours ago [-]
The reason war may be coming is because the West is falling apart. The US is isolating itself and bullying its allies. Alternative powers wanting to do something expansive never had a better moment in time to do so.
There was no war forthcoming between an integrated West and any other power. War is coming because there no longer is a West.
daft_pink 5 hours ago [-]
The reasons are not the main focus here. The fact is that China's aggressive stance on Taiwan, Russia's invasion of Ukraine, and Russia's alignment with China, North Korea, and Iran are leading to military buildups and alliances worldwide. Google, being a company founded and controlled by Americans, is likely to support the effort if a war occurs, rather than remain passive while their friends' and family's children are dying.
Today people have differing views of nuclear weapons, but people who fought near Japan and survived believe the bomb saved their lives.
It's easy to pretend you don't have a side when there is peace, but in this environment Google's going to take a side.
NemoNobody 8 hours ago [-]
Right.
So... when Russian tanks start rolling toward Berlin and Chinese troops are marching along that nice new (old) road they finished fixing up on the way to Europe - if that happens, which looks possible - you think there will be no West??
If the world is to be divided Europe is the lowest hanging and sweetest fruit.
I think there will still be a West even if there is a King in the US demanding fealty to part of it - we are the same as they are, and it's ridiculous to pretend we aren't.
Ideology is one thing, survival of people and culture is another.
greenchair 5 hours ago [-]
now that the adults are back in charge, we should be good for a few more years at least.
JBiserkov 7 hours ago [-]
I don't know what we expected after they removed their "Don't be evil" motto.
_bin_ 6 hours ago [-]
is this evil, actually? a well-made autonomous system might go a long way towards improving accurate targeting and reducing civilian casualties.
if you're mad about the existence of weapons then please review the prisoners' dilemma again. we manage defection on smaller scales using governments but let's presuppose that major world powers will not accept the jurisdiction of some one-world government that can prevent defection by force. especially not the ones who are powerful and prosperous (like us) who would mostly lose under such an arrangement.
enugu 2 hours ago [-]
Drone + AI weapons have horrible applications - remote assassinations to cause political chaos, a tyrant using it to selectively target those unfavourable to his rule without worrying about human checks, bigger nations exploiting smaller ones etc.
A lot of this thread has reduced the issue to whether it is more ethical for one country to deploy than for others. In any case, a lot of countries will have this capability. A lot of AI models are already openly available. The required vision and reasoning models are being developed currently for other uses. Weaponization is not a distant prospect.
Given that, the tech community should think about how to tackle this collective major problem facing humanity. There was a shift, which happened to nuclear scientists, from when they were developing the bomb to the post World War situation when they started thinking about how to protect the planet from a MAD scenario.
Important questions - What would be good defense against these weapons? Is there a good way of monitoring whether a country is deploying this - so that this can be a basis for disarmament treaties? How do citizens audit government use of such weapons?
olalonde 5 hours ago [-]
That feels like PR / virtue signaling. AI has the potential to significantly reduce the human cost of war in two ways: by removing soldiers from direct combat and by enabling precision strikes which minimize collateral damage. Over time, robot-soldiers will surpass human effectiveness, making it increasingly irrational to send people into harm's way. In that world, conflicts would shift toward being decided by technological superiority - who has the better or more advanced systems - rather than by which side has more human lives to sacrifice. We could even one day see wars with no human casualties.
oneplane 5 hours ago [-]
AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.
As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
olalonde 4 hours ago [-]
> AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.
Not really, though. Like any tool, its misuse or failure is the responsibility of the wielder.
> As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
Agree about that part but that's just the nature of war, there are always going to be armies that are scarier than others.
jhanschoo 3 hours ago [-]
> > AI can also be used to say "it wasn't us, the computer did that" and pretend it's not your fault when you kill a bunch of civilians.
I don't think the entities that are using it in this way care.
Eh, we saw how AI was used during "war" for the first time. It was used to amass as many even remotely "justifiable" targets as possible, with a corresponding increase in civilian deaths, because humans could not keep up creating justifiable targets by other means. And at the same time it was used to justify the killings of people in more ways than one.
It's a far cry from the days where employees were threatening to mass quit Google when it changed its policies to avoid bans in China.
A lot of what is going on in the world right now makes me think we are in a war that hasn't yet been officially acknowledged.
yodsanklai 9 hours ago [-]
I have a lot of respect for people who would resign, or wouldn't work in such companies.
zoogeny 8 hours ago [-]
Me too, it is just a shame that the path from earning respect to eating bread isn't as straightforward in our current world as the path from earning money.
1970-01-01 9 hours ago [-]
I give it 2 years until we see
"Google Petard, formerly known as Boombi, will be shutting down at the end of next month. Any existing explosion data you have in your Google Account will be deleted, starting on May 1, 2027."
throwawee 2 hours ago [-]
killedbygoogle.com will need an extra entry, plus a whole new subcategory.
thih9 9 hours ago [-]
Something to remember next time Google makes a pledge. I.e. when they pledge not to do something, it just means they pledge to give indirect notice before doing that thing.
mturmon 4 hours ago [-]
While the music played you worked by candlelight
Those San Francisco nights
You were the best in town
Just by chance you crossed the diamond with the pearl
You turned it on the world
That's when you turned the world around
Did you feel like Jesus?
Did you realize
That you were a champion in their eyes?
On the hill the stuff was laced with kerosene
But yours was kitchen clean
Everyone stopped to stare at your technicolor motor home
Every A-Frame had your number on the wall
You must have had it all
You'd go to L.A. on a dare
And you'd go it alone
Could you live forever?
Could you see the day?
Could you feel your whole world fall apart and fade away?
aradox66 6 hours ago [-]
It was probably causing their weapon-system LLMs to fake alignment but sabotage outcomes; they need their LLM products to understand that the brand is on board.
ThinkBeat 9 hours ago [-]
There are billions if not trillions going into defense tech.
The US, its NATO allies, and the wider allies who have contributed equipment to the Ukraine war need to replenish and replace stuff.
At the same time, the Ukraine war has changed a lot of battlefield strategies, which will require the development of new advanced weapons - most obviously in the drone / counter-drone space, but a lot of other technology as well.
With all that money, of course companies will chase it.
OpenAI is already joined up with Anduril.
ddtaylor 10 hours ago [-]
Google has already made multiple commitments like this and broken them. One example would be their involvement in operating a censored version of Google.cn for the Chinese government from 2006 to 2010.
stevage 9 hours ago [-]
Someone should make a website tracking tech companies' moral promises that then get broken.
I don't understand why this is surprising to people. Most private companies will use any proprietary technology for profits and renege on their earlier commitments.
thoire3j4234 6 hours ago [-]
Kind of meaningless in any case.
OpenAI has already signed a collaboration with Anduril.
Killer robots will be a reality very soon; everyone is very obviously getting prepped for this.
China has a massive advantage.
declan_roberts 6 hours ago [-]
Do Chinese or Russians have any such qualms? No, of course not. They're diving head first into it.
Only in the United States do we have the privilege to pretend like we can ignore it.
sangnoir 6 hours ago [-]
Plenty of US companies not named Google haven't had qualms about weapons development in decades. It's not about US vs China/Russia, it's about Google's culture.
Additionally, the US has been vociferous about limiting access to foreign tech companies with "military links" in China, so perhaps Google should be placed in that category by all non-Five-Eyes countries.
jarboot 6 hours ago [-]
From Gandhi's commentary on chapter 1 of the Bhagavad Gita:
> ... evil cannot by itself flourish in this world. It can do so only if it is allied with some good. This was the principle underlying noncooperation—that the evil system which the [British colonial] Government represents, and which has endured only because of the support it receives from good people, cannot survive if that support is withdrawn.
If you are a good person working for the big G...
xbar 10 hours ago [-]
I don't think those kids understand "pledge."
calibas 6 hours ago [-]
Let's take it even further and replace all soldiers with AI so humans don't have to fight and die in wars anymore.
glimshe 3 hours ago [-]
Apparently, the pledge was supposed to last only until their first big military project opportunity. In the meantime, they earned the goodwill at no expense.
legohead 7 hours ago [-]
I pledge to not drink coffee.
drinks coffee
Nevermind.
wongarsu 6 hours ago [-]
More like:
I pledge to not drink coffee
Somebody hands me coffee
I retract the pledge and start drinking
---
I have to wonder what the value of a pledge is if you can just stop pledging something at the earliest convenience, do the thing, and people cheer you on for it
cute_boi 7 hours ago [-]
I don’t understand why people believe in corporate pledges. They’re just marketing gimmicks. It doesn’t take much effort to scrape pledges off a website.
krunck 7 hours ago [-]
Better to ask the question: Why do people WANT to believe in the mouth flapping of corporate PR drones?
smeeger 2 hours ago [-]
AGI and AI weapon systems lead to certain annihilation of the human race regardless of who is first to implement. the only winner is the country who abstains until the very end because at least that country will perish with its dignity intact. i refuse to support AI
botanical 1 hours ago [-]
When people ask how IBM could facilitate the Holocaust, this is how it happens.
Google rushed to sell AI tools to Israel’s military after Hamas attack:
I doubt that will change anything. It's not like Google's AI has some secret sauce. It's all published. So any military corp can have cutting-edge AI in their weapons for a relatively low cost.
mihaaly 9 hours ago [-]
With the help of Google's resources and knowledge from now on. For some dollars of course. AI will not develop itself just yet, right? So those military corps need some humans for that, preferably ones who are experienced already - or better yet, the ones who made it. I have a hunch it will help them quite a bit.
By the way, humans: "principles page includes provisions that say the company will use human oversight". ... which human? Trump? Putin is human too, but I guess he is busy elsewhere. Definitely not someone like Mother Teresa - she is dead anyway, and I cannot think of someone from recent years playing in the same league; somehow that end of the spectrum is not represented that well recently.
blindriver 7 hours ago [-]
OpenAI did the same thing.
gerdesj 7 hours ago [-]
Why on earth would a for profit company refuse a potential line of profit?
They already dumped "don't be evil" many years ago and they are now all in on fuck the poor and fuck the rest: I'm making profits and all is fine.
Google makes money and they don't appear to care how - its all about the money.
janalsncm 6 hours ago [-]
The problem with this is that if companies are just profit maximizers, then one of the things they should do is realign the government. After all, a friendlier government can help to decrease regulation and increase incentives.
Plus, in a healthy economy if everyone is bribing the government shouldn’t it all cancel out? Well it turns out the poor don’t bribe the government very often, so they are easily ignored.
And suddenly, when the government is co-opted into believing anything that gets in the way of “business” is bad, they figure out that money that could be spent on social services could also be spent on corporate tax incentives! Eventually the entire country becomes one big profit maximizer.
danans 7 minutes ago [-]
> The problem with this is that if companies are just profit maximizers then one of the things it should do is to realign the government
What do you think is happening right now?
asdfman123 6 hours ago [-]
Companies already are just profit maximizers and they already have done a lot to realign the government.
Of course it's not that as bad as you describe because it's not as simple as you describe.
tmnvdb 6 hours ago [-]
Google is a company that relies to a large extent on users to trust them with their data and for advertisers to want to be associated with them. Hence they have a stronger incentive than some other companies to avoid being seen as an evil corp. This is also important for recruitment, as many engineers do not want to (be seen to) work at a privacy-invading evil corp, so it is important that Google creates plausible deniability for those engineers as well.
burningChrome 6 hours ago [-]
>> Why on earth would a for profit company refuse a potential line of profit?
On the one hand I think they were afraid many of their employees might protest again like they have in the past, signaling that Google isn't that awesome, progressive place everybody should work. This would mean they could be potentially losing some of the top notch SV talent that they are in constant competition with from other companies.
On the other hand, they've made it clear they aren't above firing employees who do protest as they just did when 28 employees were fired over the recent Nimbus Project contract worth an estimated $1.2B dollars with Israel:
They staged sit-in protests in Google's offices in Silicon Valley, New York City and Seattle – more than 100 protestors showed up. A day later, Google fired Montes and 27 other employees who are part of the No Tech for Apartheid group.
I think they try too hard to toe the line between the two, but like you said, it's clear they're really all about the money.
gerdesj 6 hours ago [-]
"they're really all about the money."
When you are publicly quoted, you are mostly lost to reason and profit is everything! That's why you do it.
If you have other intentions then go with a Not for Profit (I'm sure most countries have a similar structure) or similar setup.
scarface_74 6 hours ago [-]
As if anyone working for an adtech company thought they were changing the world for the better.
I’m sure they are clutching their pearls while waiting for their money to be deposited into their bank account and their RSUs to be deposited into their brokerage accounts.
Yes I did a stint at BigTech. But I didn’t lie to myself and think the company I worked for was above reproach as my adult son literally peed in bottles while delivering packages for the same company.
bbqfog 7 hours ago [-]
This is the case for boycotts, so a for-profit company loses when they make immoral decisions that destroy the brand and impact the bottom line.
scarface_74 6 hours ago [-]
Yes, because this is crossing the line…
gdilla 10 hours ago [-]
What is a $1M TC when you'll get jacked on your way to the tech bus for turning America into mad max? You're not rich enough to have your own billionaire bunker.
_bin_ 5 hours ago [-]
- you assume this will turn America into a dystopia. more likely it contributes to restoring and maintaining uncontested American overmatch, especially in the long term, where effectively no other nation can challenge us.
- 1mmTC is enough to do this depending on how one allocates spending. land in many parts of the country is not that expensive.
barbazoo 10 hours ago [-]
There are plenty of people on here working for Boeing, Raytheon, etc, actively contributing to actual killings of actual people. Those folks don’t get confronted, why would it be different here?
quesera 10 hours ago [-]
The obvious answer is "visibility", but I'm also skeptical of the whole idea.
gdilla 8 hours ago [-]
Well, the broligarchy has their sights on Americans, for one.
fullshark 10 hours ago [-]
It's gotta be better than being in mad max without a $1M TC.
pavlov 10 hours ago [-]
The whole point of Mad Max is that your TC and RSUs and whatever aren’t worth shit anymore, and the people you thought useless and weird and poor suddenly have the chance to kick you in the face.
energy123 6 hours ago [-]
Many will hate to hear this but the only solution is one world government or at least a unipolar order that reduces the survival need to participate in arms races. Arms race dynamics between nations will be the end of our species.
asdfman123 6 hours ago [-]
I think having nations competing against each other is a good thing. Governments become corrupted and collapse: it would be a shame if the only world government fell into the hands of a dictator.
That being said the only way I could imagine we'd get a single world order is one country dominating everyone else, just like superpowers and regional powers dominate their respective parts of the globe.
Never ever ever are people just going to give up their control out of some form of "enlightenment" that has never existed among the human race.
energy123 4 hours ago [-]
Would you have said the same thing to people living in warring tribal societies if they hoped that local tribes would cease existing and coalesce together into a single nation state? That's bad because it reduces competition, right? But overall it was very good because tribal conflict and barriers to movement and trade act as a massive tax on anything we would both call good.
Unprecedented levels of peace in Europe happened not because of competing nation states, but in spite of that competition. It was the unipolar control exerted by the US and the destruction of the Soviet Union and the creation of the EU (a proto pan-European state) that caused the 1990s. There was one and only pole -- the West. Not 2 (or more) different adversaries with opposed interests engaging in an arms race.
As we go back to a world with more fragmented and distributed power, we will get more war and more arms races. An especially toxic setup in the age of AI.
This doesn't have to be a binary, anyways. You could set it up as some kind of federation where there's still economic competition. Just not military competition.
asdfman123 3 hours ago [-]
There is a difference between consolidation to a few different powers and consolidation to just one.
Also, AFAIK all of those nations consolidated because of military conquest. Countless European wars and empires.
The EU isn't like that, but they're an alliance and not one country. You can't just leave a country like England did.
_bin_ 5 hours ago [-]
America is the only nation that currently has consolidated global power behind an even vaguely free nation.
and yes, America has done that for the "pax Americana" period. unfortunately we were short-sighted and allowed people too much free rein to be stupid and anti-American.
_bin_ 6 hours ago [-]
correct. this is why i support maintaining the American-run world order by all means we have at our disposal. it's both the best outcome for our citizens (therefore our government should pursue it) and the best outcome for the world at large. we will never accept (nor should we) the sort of one-world power that would be necessary to block defection so us running the thing is the least-bad option.
osmsucks 6 hours ago [-]
Some country leaders do think this. But they're very particular about having that one world government named "USA" or "Russia".
kQq9oHeAz6wLLS 6 hours ago [-]
Unfortunately, the only way to keep things "fair" and "equitable" so nobody revolts is to reduce everything to the lowest common denominator.
In other words, everything would be terrible, but at least it'd be terrible for everyone.
Until we realize we can sacrifice some for the betterment of the rest, find a way to rationalize it, and then throw it all out the window.
stainablesteel 7 hours ago [-]
It's not like they're above lying, why do they even care to update this?
aradox66 6 hours ago [-]
Would it be too far out there to imagine that the LLMs they were training for weapons systems knew it violated their rules and were resisting compliance?
The alignment-faking research seems to indicate that LLMs exercise this kind of reasoning.
janalsncm 6 hours ago [-]
Depends on the weapons system but it would probably not be an LLM, it would be a neural network trained to locate and identify people in a video for example.
And even if it was, they wouldn’t tell the system it was part of old non-evil Google.
sangnoir 6 hours ago [-]
Everything is securities fraud - they'd likely be sued by shareholders. Some individuals and institutions are picky about the symbols in their portfolio for religious or moral reasons, and would not appreciate being deceived into investing in a company they consider engaging in "harmful" or morally objectionable activities.
Clamchop 10 hours ago [-]
At what point does a public promise carry any legal weight whatsoever? If it carries none, then why not leave it in place and lie? If it carries some, for how long and who has standing to sue?
Genuine questions. Unlike "don't be evil," this promise has a very narrow and clear interpretation.
It would be nice if companies weren't able to just kinda say whatever when it's expedient.
telotortium 10 hours ago [-]
Absolutely no legal weight.
However, when you change a promise publicly, you signal a change in direction. It is much more honest than leaving it in place but violating it behind the scenes. If the public really cares, they can pass a law via their democratic representatives (or Google can swear a public oath before God I suppose).
nprateem 10 hours ago [-]
Because then investors won't invest.
Cheer2171 10 hours ago [-]
It's an ad.
courseofaction 49 minutes ago [-]
Eat the rich, before they eat you.
DanHulton 4 hours ago [-]
Next up, Google drops "Don't" from their famous mantra, "Don't Be Evil".
est 3 hours ago [-]
That's what Google promises, not Alphabet Inc.
resters 6 hours ago [-]
The Iraq wars led to trillions of dollars spent on defense. Massive defense profits led to massive lobbying, more spending.
Eventually tech and even startups follow the money. Palantir is considered cool. YC started accepting defense startups. Marc Andreessen is on X nonstop promoting conservative views of all kinds. PG becomes anti-wokism warrior.
This is how it happens. Step by step.
ripped_britches 5 hours ago [-]
Doesn’t this indicate that the US DoD likely reached out to Google for a contract to develop AI for some purpose?
Otherwise why bother?
torlok 4 hours ago [-]
If you're going to be morally bankrupt, why not just keep the pledge and lie.
Havoc 10 hours ago [-]
Really feels like the world is lurching towards something really dystopian all of a sudden.
mr90210 10 hours ago [-]
Yes, but not "all of a sudden". Mind you that Edward Snowden blew the whistle nearly 12 years ago.
wayathr0w 9 hours ago [-]
It's surprising to me that they ever made such a pledge, considering...you know.
Did they replace "don't do weapons" with "do the right weapons"?
trhway 7 hours ago [-]
Google's ex-CEO Schmidt is developing AI drones for Ukraine in Estonia. One would expect that when he needs a source of good foundational AI, Google may be among his suppliers of choice. Naturally, Ukraine is just a start. The addressable market is going to be huge, especially for battle-proven stuff - and especially for stuff proven against Russian and, by proxy, Chinese tech.
There is also tremendous interest in remotely controlled and autonomous ground platforms, though only a few have been fielded on an actual battlefield so far. Google is the leader in the civilian ones, and it looks to me like there is a relatively easy path to transferring that tech into military systems.
zeven7 10 hours ago [-]
Is it a canary? Does this mean the government has imposed on Google for use of its AI?
spencerflem 10 hours ago [-]
Imposed? They get DoD $$$ for this, they're the ones offering
micromacrofoot 10 hours ago [-]
Which defense contractor did they just sign with to sell AI features?
Really if a company wants people to trust claims like this, they should make them legally binding. Otherwise it's all PR.
wayathr0w 9 hours ago [-]
Google is already a "defense" (military) contractor. They sell stuff directly to governments, well aware how it'll be used.
ceejayoz 10 hours ago [-]
> Which defense contractor did they just sign with to sell AI features?
I'm gonna presume "the new leadership of the FBI".
ForOldHack 10 hours ago [-]
The closing scene of THX-1138. "Come back!" "Please!"
tehjoker 10 hours ago [-]
The new administration seems to be dropping "soft power" in exchange for an emphasis on hard power... but hard power is more expensive and backfires more spectacularly than soft power. I think they are digging a hole for themselves and can't stop because a few rich people are making a lot of money on kickbacks.
lenerdenator 10 hours ago [-]
History shows that they aren't really digging a hole for themselves.
This whole thing where the average person feels that they can use rules against a more powerful person? That's really an invention of maybe the last 80 years, if not more recently than that.
With the exception of that human lifetime-sized era, the vast majority of history is a bunch of psychopaths running things and getting to kill/screw whoever they wanted and steal whatever they wanted. Successful revolts are few and far between. The only real difference is the stakes.
tehjoker 7 hours ago [-]
I think you misread what I was saying. Hard power is really costly to deploy. It can work, but it is incredibly expensive and the U.S. couldn't even suppress resistance in Iraq, Afghanistan, or Gaza on a durable basis. Blunt deployment of these techniques will cause the U.S. to lose friends, territory, and civil unrest as the treasury drains and life domestically just gets worse and worse.
dartos 5 hours ago [-]
At least they let us know
iwontberude 7 hours ago [-]
DoD invests in company making it commercially viable
Company says won't work for DoD
DoD initiates arm twisting and FOMO
Company now works for DoD
The origins of an investment often shape its outcomes. It's almost like the DoD invested in Google as an informational weapon, which really should surprise no one.
> "There’s a global competition taking place for AI leadership within an increasingly complex geopolitical landscape. We believe democracies should lead in AI development..."
This is extremely disconcerting.
Google as a tool of surveillance is the kind of thing that could so easily be abused or misused to such catastrophic ends that I absolutely think the hard line should be there. And I only feel significantly more this way given the current geopolitical realities.
mihaaly 10 hours ago [-]
... did they just present themselves as the saviors of all democracies?! Really?
By weaponising AI?
Who else right? If not them, there will be no one saving democracies with weaponized mass-surveillance AI. It is their quest and privilege, right? Medicine, just society, and all such crap have to wait!
I bought that!
(not)
blackeyeblitzar 10 hours ago [-]
Other countries will use AI for weapons - shouldn’t the EU and US also do that to remain competitive?
jsheard 10 hours ago [-]
It's not exactly unheard of for certain weapons to be declared off-limits by most countries even if the "bad guys" are using them - think chemical and biological agents, landmines, cluster munitions, blinding weapons and so on. I doubt there will ever be treaties completely banning any use of AI in warfare but there might be bans on specific applications, particularly using it to make fully autonomous weapons which select and dispatch targets with no human in the loop, for similar reasons to why landmines are mostly banned.
nradov 10 hours ago [-]
Landmines and cluster munitions have been among Ukraine's most effective weapons for resisting the Russian invasion. Without those, Ukraine would likely have already lost the war. It's so bizarre how some people who face no real risks themselves think that those weapons should be declared off-limits.
jsheard 10 hours ago [-]
Nobody said they're not effective during a war, the problem is they remain effective against any random civilians who happen to stumble across them for a long time after the war is over. Potentially decades, as seen in Cambodia.
It would be a bit of a Pyrrhic victory to repel an attempted takeover of your land, only for that land to end up contaminated with literally millions of landmines because you didn't have a mutual agreement against using them.
nradov 9 hours ago [-]
People who are defending against an existential threat today don't have the luxury of worrying about contamination tomorrow. I think at this point Ukraine will take a Pyrrhic victory if the alternative is their end as a fully sovereign nation state. And let's be clear about the current situation: if Ukraine and Russia had a mutual agreement against using those weapons then Ukraine would probably have already lost. Landmines in particular are extremely effective as a force multiplier for outnumbered defenders.
murderfs 9 hours ago [-]
They're declared off-limits because the military doesn't want them. Biological and chemical weapons aren't useful to modern militaries. Landmines and cluster munitions are, so none of the countries that actually matter have banned them!
Personally I don’t care if ML is used for weapons development assuming there are standards
It's the companies that hoard everyone's personal information, that eroded the concept of privacy while mediating our lives with false promises of trust, now turning into state intelligence agencies, that bother me.
The incentives and results become fucked up, and safeguards are less likely to work. I get that not a lot of people care, but it's dangerous.
AvAn12 10 hours ago [-]
Analogy is not apt. If other countries are trying to pry into our data and systems, then the right move for google or any other tech company is to advance our defenses and make cybersecurity stronger, more available, and easier for companies and people to use. If someone is trying to hack me, it's much smarter for me to defend myself rather than try to hack the other guy back.
myth_drannon 5 hours ago [-]
It's interesting how the AI-for-weapons topic immediately brought out the palinazis with their own agenda (some bots, I guess?). As if Israel is some sort of military AI superpower (it's not; read accounts of the Oct 7th events) and the rest of the world's armies are still using muskets and smoke signals.
fallingfrog 6 hours ago [-]
I would say that a pledge that you only keep as long as it’s convenient doesn’t mean much. And neither does the word of the company that made it.
m3kw9 6 hours ago [-]
It's a pledge till the $$$ shows up.
outside1234 6 hours ago [-]
Probably more honest this way at least
kyletns 9 hours ago [-]
They'll say it's for national defense against other countries, but it's only a matter of time before these weapons and surveillance tools are deployed on American citizens. Foucault's boomerang.
silexia 8 hours ago [-]
Corporations turn evil when their founders lose power or leave. Google used to be a genuinely wonderful force for good. But finance people can borrow money at extremely cheap interest rates from government cronies due to the US fractional reserve system. Then the MBAs offer so much money that founders basically can't refuse. Then the companies end up publicly traded and only work on pushing up their next quarter's earnings, thus becoming evil.
Lance_ET_Compte 10 hours ago [-]
I don't believe them for a millisecond.
nprateem 9 hours ago [-]
Everyone seems to be focusing on weapons but the real story is surveillance. AI is a wet dream for dictators.
oulipo 9 hours ago [-]
Shameful
jmyeet 10 hours ago [-]
If you work for big tech now, you’re working for a defense contractor, no different to Boeing, Lockheed Martin or Northrop Grumman.
Ultimately every sufficiently large company seems to become an arms dealer, a drug dealer or a bank.
We need look no further than Lavender [1] to see where this ends up.
> If you work for big tech now, you’re working for a defense contractor, no different to Boeing, Lockheed Martin or Northrop Grumman.
The difference is about 250k/yr. Kinda big.
mr90210 10 hours ago [-]
I have been pondering this subject over the past weeks. Maybe one could compare it to people who worked for Allianz, Audi, Bayer, BMW, IBM and others before 1945.
BrenBarn 9 hours ago [-]
I'm shocked, shocked!
All "pledges" without some kind of enforceable legal foundation are just meaningless hot air.
rnd0 10 hours ago [-]
So they've now zoomed past "don't be evil" right to turning into Snidely freakin' Whiplash.
cyberax 10 hours ago [-]
Google: "Be evil".
smeeger 2 hours ago [-]
are you fucking kidding me?
dhdjruf 9 hours ago [-]
I think this is great. Israel did great work using AI to drone-strike various Steve Jobs-level terrorists in Gaza, Syria, Lebanon, and Iran.
Now that Google's on board, who knows, maybe we will be able to drone-strike people that are underground, underwater, or in buildings without killing innocent civilians.
howmayiannoyyou 10 hours ago [-]
GOOD.
Nothing is going to stop USA's adversaries from deploying AI against US citizens. Pick your poison, but I prefer to compete and win rather than unilaterally disarm and hope for goodwill and kindness from regimes that prioritize the polar opposite.
krainboltgreene 10 hours ago [-]
Yeah wouldn't want other countries to deal with us the way we've dealt with them.
That article does not mention Google, and Google did not develop the Lavender system used by the Israelis.
pbiggar 3 hours ago [-]
I didn't say Google developed Lavender. The article describes how AI is used by Israel in the genocide. On whose cloud platform does Israel's military run?
uejfiweun 7 hours ago [-]
A day late and a dollar short. The future of the US tech industry belongs to those who weren't interested in performative woke nonsense like this during the last decade.
worik 6 hours ago [-]
The future of the tech industry belongs to China
janalsncm 6 hours ago [-]
Future? The present day tech industry belongs to China unless you narrowly define it as software or pharmaceuticals.
For example the most advanced batteries in the world are designed and manufactured in China.
zeroCalories 7 hours ago [-]
Who is that?
uejfiweun 7 hours ago [-]
Palantir, for one?
zeroCalories 7 hours ago [-]
By what measure? Google has a market cap that's 10x Palantir, and the gap in revenue/profit is even more massive. They aren't in the same league at all.
uejfiweun 6 hours ago [-]
The measure is stock returns over the last 5 years. The whole point of public companies is to generate wealth for shareholders, and Google simply isn't delivering on the same level as Palantir.
In fact, when you look at the last decade of Google saying they're an "AI first" company and literally inventing transformers, and look at what their stock price has done and how they've performed in relation to other major companies involved in this current AI spring, there is simply no way not to be disappointed.
zeroCalories 2 hours ago [-]
Plenty of companies produce good returns, but that doesn't make them any kind of leader. FAANG still controls the market, pays the highest salaries, and produces the most research. Other rising stars like OpenAI and ByteDance are not uniquely evil either. Not saying FAANG won't fade away like IBM or Oracle, but I don't think it would be due to their unwillingness to be like Palantir.
bbqfog 6 hours ago [-]
Protesting weapons manufacturing has been going on since long before reactionaries started fearing the "woke" boogeyman. People protested DuPont for making napalm during the Vietnam War.
uejfiweun 5 hours ago [-]
There's a big difference between outsider activists protesting the actions of a company, and the actual leadership of a company choosing a less profitable path in order to seem more morally pure.
aprilthird2021 7 hours ago [-]
How is it woke nonsense to not want to create a weapon that probabilistically determines whether a civilian looks close enough to a bad guy to missile-strike them?
tmnvdb 6 hours ago [-]
This sentiment ignores the reality on the ground in favor of performative ideological purity - civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.
siltcakes 6 hours ago [-]
The reality on the ground is that one of the very first uses of AI weapons was to target civilians in Gaza:
> Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity.
Lavender is not an autonomous weapon. But if you want to seriously consider whether Lavender is a good thing (I am undecided), you need to compare the effect of this operation with Lavender against the effect of running the same operation without it. Otherwise you run the risk of making arguments that in the end just boil down to "weapons bad".
pixl97 6 hours ago [-]
And indiscriminate weapons have a cost that can slow people from using them.
Imagine you have a weapon that can find and kill all the 'bad guys'. Would you not be in a morally compromised position if you didn't use it? You're letting innocents die every moment you don't.
* warning definitions of bad guys may differ leading to further conflict.
tmnvdb 6 hours ago [-]
The logic of this argument implies we should develop weapons with maximal collateral damage to deter their usage.
pixl97 6 hours ago [-]
Which raises the question of whether we can escape the Red Queen hypothesis.
Personally I don't think we can as a species.
aprilthird2021 38 minutes ago [-]
> civilians are already getting blown up all the time by systems that do not even attempt to make any distinction between civilians and soldiers: artillery shells, mortars, landmines, rockets, etc.
Right, and every time that happens because of miscalculations by our government, they lose the very real and important public license to continue. Ultimately, modern wars led by democracies are won by the public's desire to continue them. The American public can become very hesitant to wage war very fast if we unleash Minority Report on the world for revenge.
captainbland 6 hours ago [-]
Or better yet, misinterpret who the target is even supposed to be because of a hallucination.
worik 6 hours ago [-]
> Or better yet, misinterpret who the target is even supposed to be because of a hallucination.
Who, in that business, cares?
AI will provide a fig leaf for the indiscriminate large scale killing that is regularly done since the start of industrialised warfare.
Using robots spares drone pilots from PTSD.
From the perspective of the murderous thugs that run our nations (way way before the current bunch of plainly bonkers ones in the USA), what is not to like?
Whilst there are all sorts of quibbles about weapons generally being evil, this is evil.
captainbland 6 hours ago [-]
AI driven drones in particular seem like ideal tools for carrying out a genocide: identify an ethnicity based off some physical characteristics, kill. No paperwork, no transport, no human conscience. Just manufacture, deploy and instruct at scale. Sure, it might get it wrong sometimes but you've got to break a few eggs...
uejfiweun 5 hours ago [-]
Because "not wanting" to do something that would make the company money due to moral considerations that aren't shared by your competitors is idiotic.
aprilthird2021 36 minutes ago [-]
Ah, the Nuremberg Defense
kjsingh 7 hours ago [-]
there are no profits like war profits
there is nothing usual like people dying
This isn’t new. Facebook, for example, received early funding from In-Q-Tel, the CIA’s venture capital arm, and its origins trace back to DARPA’s canceled LifeLog project—a system designed to track and catalog people’s entire lives. Big Tech and government surveillance have been intertwined from the start.
That’s why these companies never face real consequences. They’ve become quasi-government entities, harvesting data on billions under the guise of commerce.
Growing up in the Soviet bloc, I took that story at face value. After all, democracy was still a new thing, and people hadn't invented privacy concerns yet.
Since then I always thought that some sort of cooperation between companies like Facebook or Google and CIA/DOD was an obvious thing to everyone.
And if one wants to know why big tech from China isn't welcome, be it phones or social media, it's not because fear of them spying on Americans, but because of the infeasibility of integrating Chinese companies into our own domestic surveillance systems.
[1] - https://en.wikipedia.org/wiki/PRISM
> Years ago a friend working in security told me that every telco operator in Elbonia
See info about the fictional country of Elbonia here, from the Dilbert comics:
[0] https://en.wikipedia.org/wiki/Dilbert
...I just picture a similar conversation with a CEO going: "Sir, shareholders want to see more improvement this quarter." CEO: "Do we run ads? Have we run ads? Will we run ads this time?" (The answer is inevitably yes to all of these)
That creates limits to the growth of an ad-based ecosystem.
So the thing to pay attention to is not revenue growth or profit growth of a platform, but the price of an ad, the price to increase reach, the price to boost your post, the price of a presidential campaign, etc. These prices can't grow forever, just like housing prices, or we get the equivalent of a housing bubble.
Want to destabilize the whole system? Pump up ad prices.
I think if you look at quality of life and happiness ratings in Norway it's pretty clear it's far from "entirely undesirable". It's good for people to do things for reasons other than money.
Want to make more? Then take personal risk.
Their entire economy and society are structured around oil extraction.
There are no lessons to learn from Norway unless you live somewhere that oil comes from the ground.
The difference is Norway's economy being far less dependent on petroleum, which is only 40% of their exports.
We all recognize that a democracy is the correct method for political decision making, even though it's also obvious that theoretically a truly benevolent dictator can make better decisions than an elected parliament but in practice such dictators don't really exist.
The same reasoning applies to economic decision making at the societal level. If you want a society whose economics reflect the will and ethics of the people, and which serves the benefit of normal people, the obvious thing is to democratize economic decision making. That means that all large corporations must be mostly owned by their workers in roughly 1/N fashion, not by a small class of shareholders. This is the obvious correct solution, because it solves the underlying problem rather than papering over the symptoms with taxation. If shareholder-owned corporations are extracting wealth from workers or doing unethical things, the obvious solution is to take away their control.
Obviously, some workers will still make their own corporations do evil things, but at least it will be collective responsibility, not forced upon them by others.
Sounds like the effort needed for bonuses here in the US. Why try if the amount is largely arbitrary and generally lower than your base salary pay rate when you consider all the extra hours. Everything is a sham.
Increasing marginal income tax rates on highly compensated employees might be a good policy overall. But where are we on the Laffer curve? If we go too far then it really hurts the overall economy.
This is a cliche you hear from right winger in any country that has a progressive tax system.
Regarding Norway, taxes aren't in the same ballpark as in some US blue states.
Also, it's a very simplistic view to think that people are only motivated by money. Counter examples abound.
Not a cliché - a fact. I'll explain to you.
The incentive structure of progressive taxation is wrong: it only works for the few percent that are extremely money hungry: the few that are willing to work for lower and lower percentage gains.
Normal people say "enough" and they give up once they have the nice house and a few toys (and some retirement money with luck). In New Zealand that is something like USD1.5 million.
I'm on a marginal rate of 39% in New Zealand. I am well off but I literally am not motivated to try and earn anything extra because the return is not enough for the extra effort or risk involved. No serial entrepreneurship for me because it only has downside risk. If I invest and win then 39%+ is taken as tax, but even worse is that if I lose then I can't claim my time back. Even financial losses only claw back against future income: and my taxable income could move to $0 due to COVID-level-event and so my financial risk is more than what it might naively appear.
Taxation systems do not fairly reward for risk. Especially watch people with no money taking high risks and paying no insurance, because the worst that can happen to them is bankruptcy.
New Zealand loses because the incentive structure for a founder is broken. We are an island so the incentive structure should revolve around bringing in overseas income (presuming the income is spent within NZ). Every marginal dollar brought into the economy helps all citizens and the government.
The incentives were even worse when I was working but was trying to found a company. I needed to invest time, which had the opportunity cost of the wages I wouldn't get as a developer (significant risk that can't be hedged and can't be claimed against tax). 9 times out of 10 a founder wins approximately $0: so expected return needs to be > 10x. A VC fund needs something like > 30x return from the 1 or 2 winning investments. I helped found a successful business but high taxation has meant I haven't reached my 30x yet - chances are I'll be dead before I get a fair return for my risk. I'm not sure I've even reached 10x given I don't know the counterfactual of what my employee income would have become. This is for a business earning good export income.
Incentive structures matter - we understand that for employees - however few governments seem to understand that for businesses.
Most people are absolutely ignorant of even basic economics. The underlying drive is the wish to take from those that have more than them. We call it the tall poppy syndrome down here.
(reëdited to add clarity)
The income tax rate isn't all that relevant to the costs and benefits of starting a company, so I don't understand that part of your story. The rewards for founding a successful company mostly aren't subject to income tax, and NZ has a very light capital gains regime.
I have started my own company and I do agree that there are some issues that could be addressed. For example, it would be fairer if the years I worked for no income created tax-deductible losses against future income.
But NZ's tax rates are lower than Australia and the USA and most comparable nations, and NZers start a lot of businesses, so I don't think that is one of our major problems at the moment.
That's good that it motivates you. It doesn't motivate me any more. I'm not interested in "investing" more time for the reasons I have said.
> the taxes paid aren't burned, they mostly go to things I care about.
I'm pleased for you. I'd like to put more money towards things I care about.
> The income tax rate isn't all that relevant to the costs and benefits of starting a company
I am just less positive than you: it feels like if you win, you lose, and if you lose, you lose bigger. I'm just pointing out that our government talks about supporting businesses, but I've seen the waste from the repeated attempts to monetise our scientific academics.
> The rewards for founding a successful company mostly aren't subject to income tax
Huh? Dividends are income. Or are you talking about the non-monetary rewards of owning a business?
> NZ has a very light capital gains regime
Which requires you to sell your company to receive the benefits of the lack of CGT. So every successful business in NZ is incentivised to sell. NZ sells its jewels, because keeping a company means paying income tax every year. NZ is fucking itself by selling anything profitable, usually to foreign buyers.
The one big-ticket item I would like to save for is my retirement fund. But Labour/Greens want to take 50% to 100% of capital if you have over 2 million. A bullshit-low drawdown at 4% is $80k/annum before tax, LOL. Say investments go up by 6% per year and you want to withdraw 4%. Then a 2% tax is 100% of your gains. Plus I'm certain they will introduce means testing for super before I am eligible. And younger people are even more fucked, IMHO. The reality is I need to plan to pay for the vast majority of my own costs when I retire, but I get to pay to support everybody else. I believe in socialist health care and helping our elderly, but the country is slowly going broke and I can't do much about that. I believe that our government will take whatever I have carefully saved, often to pay for people that were not careful (my peer group is not wealthy, so I see the good and the bad of how our taxes are spent). Why should I try to earn more to save?
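The drawdown arithmetic in that comment checks out. A minimal sketch in Python, using the hypothetical round numbers the commenter assumes (a NZD 2 million portfolio, 6% annual return, 4% drawdown, and a 2% wealth tax on the balance):

```python
# Hypothetical figures from the comment above: a retirement portfolio
# returning 6%/year, a 4% drawdown for living costs, and a 2% wealth tax.
portfolio = 2_000_000  # NZD, the threshold mentioned in the comment

growth = portfolio * 6 // 100      # 120,000 gained in a year
withdrawal = portfolio * 4 // 100  # 80,000 withdrawn to live on
wealth_tax = portfolio * 2 // 100  # 40,000 taken by a 2% tax on capital

# What remains of the year's gain after the drawdown and the tax:
net_gain = growth - withdrawal - wealth_tax
print(net_gain)  # 0: the 2% tax consumes 100% of the post-drawdown gain
```

Under these assumed numbers, the fund never grows in real terms: the tax exactly cancels whatever the portfolio earns beyond the withdrawal, which is the commenter's "a 2% tax is 100% of your gains" point.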
We could literally have high speed rail, healthcare, the best education on the planet and have a high standard of living... and it would be peanuts to them. Instead we have a handful of people with more wealth than 99% of everyone else, while the bottom 75% of those people live in horrifying conditions. The fact that medical bankruptcy is a concept only in the richest country on earth is deeply embarrassing and shameful.
I guess corrupt cop vs serial killer is like amorality (profit-driven systems) vs immorality (active malice)? A company is a mix of stakeholders, some of whom push for ethical practices. But when shareholders demand endless growth, even well-intentioned actors get squeezed.
That word comes with a lot of boot-up code and dodgy dependencies.
I don't like it.
Did Robert Louis Stevenson make a philosophical error in 1882 supposing that a moral society (with laws etc) can contain within itself a domain outside of morals [0]?
What if we coined the word "alegal"?
"Oh officer... what I'm doing is neither legal nor illegal, it's simply alegal."
[0] https://edrls.wordpress.com/2021/02/16/a-moral/
Also, scale plays a significant part as well. Any high-exposure organization which operates on a global scale has access to an extremely large pool of candidates to staff its offices... And such candidate pools necessarily include a large number of any given personas... including large numbers of ethically-challenged individuals and criminals. Without an interview process which actively selects for 'ethics', the ethically-challenged and criminal individuals have a significant upper hand in getting hired and then later wedging themselves into positions of power within the company.
Criminals and ethically-challenged individuals have a bigger risk appetite than honest people so they are more likely to succeed within a corporate hierarchy which is founded on 'positive thinking' and 'turning a blind eye'. On a global corporate playing field, there is a huge amount of money to be made in hiding and explaining away irregularities.
A corporate employee can do something fraudulent and then hold onto their jobs while securing higher pay, simply by signaling to their employer that they will accept responsibility if the scheme is exposed; the corporate employer is happy to maintain this arrangement and feign ignorance while extracting profits so long as the scheme is kept under wraps... Then if the scheme is exposed, the corporations will swiftly throw the corporate employee under the bus in accordance to the 'unspoken agreement'.
The corporate structure is extremely effective at deflecting and dissipating liability away from itself (and especially its shareholders) and onto citizens/taxpayers, governments and employees (as a last layer of defense). The shareholder who benefits the most from the activities of the corporation is fully insulated from the crimes of the corporation. The scapegoats are lined up, sandwiched between layers of plausible deniability in such a way that the shareholder at the end of the line can always claim complete ignorance and innocence.
So in effect you have to call the employees and shareholders evil. Well those are the same people who also work and hold public office from time to time, or are shareholders, or whatever. You can't limit this "evilness" to just an abstract corporation. Not only is it not true, you are setting up your "problem" so that it can't be addressed because you're only moralizing over the abstract corporation and not the physical manifestation of the corporation either. What do you do about the abstract corporation being evil if not taking action in the physical world against the physical people who work at and run the corporation and those who buy its products?
I've noticed similar behavior with respect to climate change advocacy and really just "government" in general. If you can't take personal responsibility, or even try to change your own habits, volunteer, work toward public office, organize, etc. it's less than useless to rail about these entities that many claim are immoral or need reform if you are not personally going to get up and do something about it. Instead you (not you specifically) just complain on the Internet or to friends and family, those complaints do nothing, and you feel good about your complaining so you don't feel like you need to actually do anything to make change. This is very unproductive because you have made yourself feel good about the problem but haven't actually done anything.
With all that being said, I'm not sure how paying vastly higher taxes would make Google (or any other company) less evil or more evil. What if Google pays more taxes and that tax money does (insert really bad thing you don't like)? Paying taxes isn't like a moral good or moral bad thing.
People making meaningful decisions at mega corporations aren’t a random sample of the population, they are self selected to care a great deal about money and or power.
Honestly if you wanted to filter the general population to quietly discover who was evil I’d have a hard time finding something more effective. It doesn’t guarantee everyone is actually evil, but actually putting your kids first is a definite hindrance.
The morality of the average employee on the other hand is mostly irrelevant. They aren’t setting policies and if they dislike something they just get replaced.
I take issue with "don't blame the employees". You need people to run these organizations. If you consider the organization to be evil you don't get to then say well the people who are making the thing run aren't evil, they're just following orders or they don't know better. BS. And they'd be replaced if they left? Is that really the best argument we have against "being evil"?
Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it! And all that says anyway is that those replacing those who leave are just evil too.
If you work at one of these companies or buy their products and you literally think they are evil you are either lying to yourself, or actively being complicit in their evil actions. There's just no way around that.
Take personal responsibility. Make tough decisions. Stop abstracting your problems away.
Putting money before other considerations is what's evil. What's "possible" expands based on your morality; it doesn't contract. If being polite makes a sale you're going to find a lot of polite salespeople, but how hard are they willing to push that extended warranty?
> Sorry I'd be less evil but if I gave up my position as an evil henchman someone else would do it!
I’ve constrained what I’m willing to do and who I’m willing to work for based on my morality, have you? And if not, consider what that says about you…
Depends on the considerations and what you consider to be evil. My point wasn't to argue about what's evil; of course there are probably a few hundred years of philosophy to overcome in that discussion. It was to point out that if you truly think an organization is evil, it's not useful to only blame the legal fiction or the CEO or the board that you won't have any impact on. You have to blame the workers who make the evil possible too, and stop using the products. Otherwise you're just deceiving yourself into feeling like you are doing something.
The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.
I don’t assume people need to be evil to work at such companies because I don’t assume they notice the same things I do.
> The fact you assume people are going to do things they believe to be morally reprehensible is troubling to me.
This seems to be very common behavior in my experience. Perhaps the rhetoric doesn't match the true beliefs. I'm not sure.
The architecture of the system is imperfect and creates bad results for people.
Complaining is not unproductive, it signals to others they are not alone in their frustrations. Imagine that nobody ever responds or airs their frustrations; would you feel comfortable saying something about it? Maybe you're the only one, better keep quiet then. Or how do you find people who share your frustrations with whom you could organise some kind of pushback?
If I were "this government", I would love for people to shut up and just do their jobs, pay taxes, and buy products (you don't have to buy them from a megacorp, just spend it; and good luck finding places to buy products from non-megacorps).
Instead of taking action they complain, set up an abstract boogeyman to take down, and then nobody can actually take action to make the world better (based on their point of view) because there's nothing anyone can do about Google the evil corporation because it's just some legal fiction. Bonus points for moralizing on the Internet and getting likes to feel even better about not doing anything.
But you can do something. If someone thinks Google is evil they can stop using Gmail or other Google products and services, or even just reduce their usage - maybe you can switch email providers but you only have one good map option. Ok at least you did a little more than you did previously.
I don't think it's necessary to conclude that because a company is evil then everyone who works at the company is evil. But it's sort of like the evilness of the company is a weighted function of the evilness of the people who control it. Someone with a small role may be relatively good while the company overall can still be evil. Someone who merely uses the company's products is even more removed from the company's own level of evil. If the company is evil it usually means there is some relatively small group of people in control of it making evil decisions.
Now, I'm using phraseology here like "is evil" as a shorthand for "takes actions that are evil". The overall level of evilness or goodness of a person is an aggregate of their actions. So a person who works for an evil company or buys an evil company's products "is evil", but only insofar as they do so. I don't think this is even particularly controversial, except insofar as people may prefer alternative terms like "immoral" or "unethical" rather than "evil". It's clear people disagree about which acts or companies are evil, but I think relatively few people view all association with all companies totally neutrally.
I do agree with you that taking personal responsibility is a good step. And, I mean, I think people do that too. All kinds of people avoid buying from certain companies, or buy SRI funds or whatever, for various ethically-based reasons.
However, I don't entirely agree with the view that says it's useless or hypocritical to claim that reform is necessary unless you are going to "do something". Yes, on some level we need to "do something", but saying that something needs to be done is itself doing something. I think the idea that change has to be preceded or built from "saintly" grassroots actions is a pernicious myth that demotivates people from seeking large-scale change. My slogan for this is "Big problems require big solutions".
This means that it's unhelpful to say that, e.g., everyone who wants regulation of activities that Company X does has to first purge themselves of all association with Company X. In many cases a system arises which makes such purges difficult or impossible. As an extreme, if someone lives in an area with few places to get food, they may be forced to patronize a grocery store even if they know that company is evil. Part of "big solutions" means replacing the bad ways of doing things with new things, rather than saying that we first have to get rid of the bad things to get some kind of clean slate before we can build new good things.
If using AI and other technology to uphold a surveillance state, wage war, do imperialism, and do genocide... isn't evil, then I don't know if you can call anything evil.
And the entire point of taxes is that we all collectively decide we'd be better off pooling our labor and resources so that we can have things like basic education, healthcare, roads, police, bridges that don't collapse, etc. Politicians and corporations have directly broken and abused this social contract in a multitude of ways. One of those ways is using loopholes to avoid paying taxes at anywhere near the rate everyone else does; another is paying off politicians and lobbying so that those loopholes never get closed, and in fact the opposite happens. So yes, taxing Google and other mega-corporations is a single, easily identifiable action that can be taken directly to remedy this problem. There is no way around solving the core issue at hand, though, and people have to be able to identify that issue first.
The main thing here I think is anonymity through numbers and complexity. You and thousands of others just want to see the numbers go up. And that desire is what ultimately influences decisions like this.
If google stock dropped because of this then google wouldn't do it. But it is the actions of humans in aggregate that keeps it up.
Megacorporations are scapegoats when in actuality they are just a set of democratic rules. The corporation is just a window into the true nature of humanity.
People have incentives both not to do evil and to do evil for money. When you abstract the evil away into one vote out of thousands, you abstract away responsibility, and everyone in aggregate commits an inconsequential evil that adds up to a big evil.
The tragedy of the commons.
That is to make a mistake of composition. An entity can have properties that none of its parts have. A round tower made out of bricks is round, but none of the bricks are round. You might be evil, but your cells aren't evil.
It's often the case that institutions are out of alignment with their members. It can even be the case that all participants in an organization are evil, but the system still functions well (this is usually one of the arguments for markets, which are one such system). When creating an organization, that is effectively the most basic task: how to structure it so that even when its individual members are up to no good, the organization still functions well.
Obviously because they don't give a shit.
And if Googs doesn't do it, someone else will, so it might as well be them making money for their shareholders. Technically, couldn't activist shareholders come together and claim that, by not going after this market, the leadership should be replaced with those who would? After all, share price is the only metric that matters.
Moral character is something that has to be taught, it doesn't just come out on its own.
If your parents don't do it properly, you'll be just another cog in the soulless machine to which human life is of no value.
You need to try another search engine. Years ago...
Isn't that a contradiction? Morality is fundamentally a sense of "right and wrong". If they reward anything that maximizes short term profit and punish anything that works against it then it appears to me that they have a simple, but clearly defined sense of morality centered around profit.
Weird thing is for corporations, it's humans running the whole thing.
Most people consider neglect evil in my experience.
This is a meme that needs to die, for 99% of cases out there the line between good/bad is very clear cut.
Dumb nihilists keep the world from moving forward with regards to human rights and lawful behavior.
... or at least that's what these people have to be telling themselves at all times.
This is a very important point to remember when assessing ideas like "Is it good to build swarms of murderbots to mow down rioting peasants angry over having expenses but no jobs?" Most people might answer "no," but if the people with money answer "yes," that becomes the market's objective. Then the incentives diffuse through the economy and you don't just get the murderbots, you also get the news stations explaining how the violent peasants brought this on themselves and the politicians making murderbots tax deductible and so on.
Edit: answered, not asked
1. https://drakelawreview.org/wp-content/uploads/2015/01/lrdisc...
The market fairy has also decided that medication commercials on TV are good for you. That your car should report your location, speed, and driving habits to your insurer, car manufacturer, and their 983,764 partners at all times.
Maximally beneficial indeed.
This is flatly untrue. Corporations are made up of humans who make decisions. They are indeed concerned with goodness and/or morality. Saying otherwise lets them off the hook for the explicit decisions they make every day about how to operate their company. It's one reason why there are shareholder meetings, proxy votes, activist investors, Certified B-Corporations, etc.
[1]: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/
No, no. Call a spade a spade. This behavior and attitude is evil. Corporations under modern American capitalism must be evil. That's how capitalism works.
You succeed in capitalism not by building a better mousetrap, but by destroying anyone who builds a better mousetrap than you. You litigate, acquire, bribe, and rewrite legislation to ensure yours is the best and only mousetrap available to purchase, with a token 'competitor' kept on life support so you can plausibly deny anticompetitive practices.
If you're a good company trying to do good things, you simply can't compete. The market just does not value what is good, just, or beneficial. The market only wants the number to go up, and to go up right now at any cost. Amazon will start pumping out direct clones of your product for pennies. What are you gonna do, sue Amazon?! Best of luck.
While knowing this seems useless, it's actually the missing intrinsic compass and the cause of a lot of bad and stupid behavior (by the definition that something is stupid if it's chosen knowing it will cause negative consequences for the doer).
Everything should primarily be measured based on its primary goal. For "for-profit" companies that's obvious in their name and definition.
That nothing should be assumed beyond what's stated is the premise of any contract, whether commercial, public, or personal (like friendship), and it's a basic tool for debate and decision making.
I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area... and, when you think about it, those are just the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.
When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of control.
The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.
Now tell me how you counter a thousand small, EMP-hardened autonomous drones intent on delivering an explosive payload to one target, without AI of some kind?
I guess there's a lot missing in semantics, is the AI specifically for targeting or is a drone that can adapt to changes in wind speed using AI considered an AI weapon?
At the end of the day though, the biggest use of AI in defense will always be information gathering and processing.
The US is already declining politically and economically. And its sphere of influence has been weakening since, what, the '90s?
It would be bad strategy to not do anything until you feel hopelessly threatened.
The real danger is when they can't. When they, without hesitation or remorse, kill one or millions of people with maximum efficiency, or "just" exist with that capability, to threaten them with such a fate. Unlike nuclear weapons, in case of a stalemate between superpowers they can also be turned inwards.
Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need to have ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.
The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch, IMO.
Lies run the planet, and it stinks.
—Eugene Gendlin
Yes, the truth would also stink. I’m sure it’s also horrifying.
Literally just remove the first 4 words and keep the rest of the comment the same, and it's a better comment. No idea what chatgpt has to do with it.
Communication is about conveying information, and sometimes a terse, short, aggressive style is the most effective way. It activates neurons in a way a paragraph of polite argumentation doesn't.
More accurately, in the context of the comment, it's "I'm gonna be an asshole to you because I think you don't have the life experience I do", which is at least some kind of signal.
I wasn't the original responder btw.
Weird, because in my experience, that has happened to every single person I know and myself. Whether it's at the start or end of a comment is not really the point.
Most emotionally mature people would stop arguing after something like that.
Stinks, huh?
Maybe you'd prefer if we were all maximally polite drones, but that's not how humans are, going back to GP's point, and I don't think it's a state that anyone truly wants either.
Deception is bad enough, knowing people’s true motivations and opinions surely would be worse.
What truly motivates other people is largely a mystery, and what motivates oneself is often wildly mysterious to oneself.
Successful politicians and sociopaths are experts in double meanings.
"I will not drop bombs on Acmeland." Instead, I will send missiles.
"At this point in time, we do not intend to end the tariffs." The intent will change when conditions change, which is forecast next week.
"We are not in negotiations to acquire AI Co for $1B." We are negotiating for $0.9B.
"Our results show an improvement for a majority of recipients." 51% saw an improvement of 1%, 49% saw a decline of 5%...
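The last example is worth actually computing: a "majority improved" claim can coexist with a net decline. A quick sketch, using only the made-up percentages from the example above:

```python
# Made-up numbers from the "majority of recipients" example:
# 51% of recipients improve by 1%, 49% decline by 5%.
p_improve, gain = 0.51, 0.01
p_decline, loss = 0.49, -0.05

# Expected change for a randomly chosen recipient.
net = p_improve * gain + p_decline * loss
print(f"{p_improve:.0%} improved, yet the expected change is {net:+.2%}")
```

With these numbers a genuine majority improved, while the average recipient ended up nearly 2% worse off, which is exactly the kind of technically true statement being described.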
Eliminating the need to lie to or mislead people to sway them would make for such a crazy world.
Yes I read the whole series. It was a fucking marathon.
I can’t quite tie your point into the series directly, other than to agree that elected officials are, almost by definition, professional liars.
(Tugs on braid)
Would China, Russia, or Iran agree to such a preemptive AI weapons ban? Doubtful, it’s their chance to close the gap. I’m onboard if so, but I don’t see anything happening on that front until well after they start dominating the landscape.
Now that's off the table, I think America should have AI weapons because everyone else will be developing them as quickly as possible.
No matter which way you look at it, we live on a planet where resources are scarce. Which means there will be competition. Which means there will be innovation in weaponry.
That said, we've had nukes for decades, and have collectively decided to not use them for decades. So there is some room for optimism.
They may do as much as us, but not more. Let's stop pretending every nation that developed nukes dropped them on a city. Nobody has proven they are willing to go as far as the US.
Nukes didn't wipe us out. Neither will AI. The doomsday fearmongering never pans out. But that's because fear sells. Or better yet, fear justifies spending.
Second, I don’t understand how the atomic bomb argument makes sense. Germany was developing them and would have used them if it had gotten there first.
Are you suggesting the US really is truly the only nation that would ever have used atomic weapons? That if Japan made it first they would have spared China or the US?
So what? Can't Google find other sources of revenue than building weapons?
OpenAI is sneaky and slimy, and headed by a psycho narcissist. Makes Pichai look like a saint.
Ethically, it’s the same. But if someone was pointing a gun at me I’d rather have someone with some empathy behind the trigger rather than the personification of a company that bleeds high level execs and… insert many problems here
It hardly matters what employees think anymore when the executives are weather-vanes who point in the direction of wealth and power over all else (just like the executives at their competitors).
In case you missed it, a few days back Google asked all employees who don't believe in their "mission" to voluntarily resign.
That is not the same thing as asking everyone who doesn't believe in the mission to please resign.
Which rhymes pretty well with not believing in their mission. They are telling people to leave instead of trying to influence the direction from the inside.
Just like Meta announced some changes around the time of inauguration, I'm sure Google management has noticed the AI announcements, and they don't want to be perceived in a certain way by the current administration
I think the truth is more in the middle (there is tons of disagreement within the company), but they naturally care about how they are perceived by those in power
Actually, this reminds me of when Paul Graham came to Google, around 2005. Before that, I had read an essay or two, and thought he was kind of a blowhard.
But I actually thought he was a great speaker in person, and that lecture changed my opinion. He was talking about "Don't Be Evil", and he also said something very charming about how "Don't Be Evil" is conditional upon having the luxury to live up to that, which is true.
That applies to both companies and people:
- If Google wasn't a money-printing machine in 2005, then "don't be evil" would have been less appealing. And now in 2020, 2021, ... 2025, we can see that Google clearly thinks about its quarterly earnings in a way that it didn't in 2005, so "don't be evil" is too constraining, and was discarded.
- For individuals, we may not pay much attention to "don't be evil" early in our careers. But it is more appealing when you're more established, and have had a couple decades to reflect on what you did with your time!
Companies technically have disproportionate power.
It's better that they shift according to the will of the people.
The alternative, that companies act according to their own will, could be much worse.
That’s one of the reasons for the turbulent times. Let’s face the truth: most defense technology can easily be used for offense, and given the state of online security, every advance gets into the wrong hands.
Maybe it’s time to pause to make it more difficult for those wrong hands.
There is no advancement that won't end up in the wrong hands, and most likely it will be a leak from a US company.
Yes, many defensive uses of technologies can be used for offense. When I say defense, I also include offense there as I don't believe you can just have a defensive posture alone to maintain one's defense, you need deterrence too. Personally I'm quite happy to see many in Silicon Valley embrace defense-tech and build missiles (ex. recent YC co), munitions, and dual-use tech. The world is a scary and dangerous place, and awful people will take advantage of the weakness of others if they can. Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant
Every kinetic reaction by Russia in Georgia and Ukraine is downstream of major destabilizing non-kinetic actions by the US.
You don't think the US fomenting revolutions in Russia's near-abroad was in any way a contributing factor to Russian understanding of the strategic situation on its western border? [1] You don't think the US unilaterally withdrawing from the ABM treaty[2], and then following that up with plans to put ABMs in Eastern Europe[3], were factors in the security stability of the region? You don't think that the US pushing to enlarge NATO without adjusting the CFE treaty to reflect the inclusion of new US allies had an impact? [4][5] It's long been known that the Russian military lacked the capacity for sustained offensive/expeditionary operations outside of its borders.[6][7] Until ~2014 it didn't even possess the force structure for peer warfare, as it had re-oriented its organization for counter-insurgency in the Caucasus. So what was driving US actions in Eastern Europe? This was a question US contrarians and politicians such as Pat Buchanan were asking as early as 1997. We've had almost 3 decades of American thinkers cautioning that pissing around in Russia's western underbelly would eventually trigger a catastrophic reaction[8], and here we are, with the Ukrainians paying the butcher's bill.
In the absence of US actions, the kleptocrats in Moscow would have been quite content continuing to print money selling natural resources to European industry and then wasting their largesse buying up European villas and sports teams. But the siloviki have deep-seated paranoia which isn't entirely baseless (Russia has eaten 3 devastating wars originating from its open western flanks in the past ~120 years). As a consequence the US has pissed away one of the greatest accomplishments of the Cold War: the Sino-Soviet Split. Our hamfisted attempts to kick Russia while it was down have now forced the two principal powers on the Eurasian landmass back into bed with each other. This is NOT how we win The Great Game.
> Maybe I'm biased because I spent a lot of time in Eastern Europe and Ukraine, but I much prefer the U.S. with all our faults to another actor like China or Russia being dominant.
It would help to lead with this context. My position is that our actions ENSURE that a hostile Eurasian power bloc will become dominant. We should have used far less stick to integrate Russia into the Western security structure, as well as simply engaged them without looking down our noses at them as a defeated has-been power (play to their ego as a Great Power). A US-friendly Russia is needed to over-extend China militarily. We need China to be forced into committing forces to the long Sino-Russian border, much as Ukraine must garrison its border with Belarus. We need to starve the PRC's industry of cheap natural resources. Now the China-Russia-Iran soft-alliance has the advantage of interior lines across the whole continent, and a super-charged Chinese industrial base fed by Siberia. Due to the tyranny of distance, this will be a near-impossible nut to crack for the US in a conflict.
[1] https://www.theguardian.com/world/2004/nov/26/ukraine.usa
[2] https://www.armscontrol.org/events/2001-12/abm-treaty-withdr...
[3] https://www.realinstitutoelcano.org/en/analyses/americas-abm...
[4] https://www.sipri.org/yearbook/2003/17
[5] https://www.armscontrol.org/act/1997-08/features/nato-and-ru...
[6] https://warontherocks.com/2021/11/feeding-the-bear-a-closer-...
[7] https://www.rand.org/content/dam/rand/pubs/research_reports/...
[8] https://wikileaks.org/plusd/cables/08MOSCOW265_a.html
There are many 'interesting' events that happened because of the invasion of Iraq, looking for weapons of mass destruction that never existed.
This led to the destabilization of the entire Middle East, several wars, and ISIS.
One could say that the unconditional support for Israeli policy in the Middle East since 1950 also brought its share of conflicts.
The whole of South America is fcked because of illegal US interventions from WW2 to the end of the Cold War.
And the list goes on and on.
I mean, it would be much faster to state what good impact US foreign policy had on the world in the last 100 years.
It could have wondrously good impacts, but that only matters in a moral framework where good actions morally cancel out bad ones.
"Yeah, but your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."
Propaganda and disinformation were problems before the AI hype, but now they've gotten worse.
In the race for AGI they ignored the risks and didn't think of useful countermeasures.
It’s easier to spread lies with AI than to spread the truth.
We are entering a dark age where most people can't distinguish fake from real because the fakes have become so convincing.
Audio, photo, and video have lost their evidential value.
[1] https://www.kyivpost.com/post/44112
No, OP is right. We are truly at the dystopian point where a sufficiently rich government can track the loyalty of its citizens in real time by monitoring all electronic communications.
Also, "expensive" is relative. When you consider how much US has historically been willing to spend on such things...
Who's paying for that, though? The same dumbasses who get spied on. I don't see that as a reason why it wouldn't happen. Cash is unlimited.
These weapons could also come in handy domestically if people find out that both parties screw them all the time.
I wonder why people claim that China is a threat outside of economics. Has China tried to invade the US? Has Russia tried to invade the EU? The answer is no. The only current threats to the EU come from the orange man.
The same person who also revoked the INF treaty. The US now installs intermediate range nuclear missiles in Europe. Russia does so in Belarus.
So both great powers have convenient whipping boys to be nuked first, after which they will get second thoughts.
It is beyond ridiculous that both the US and Russia constantly claim that they are in danger, when all international crises in the last 40 years have been started by one of them.
Military power is what has kept the EU safe, and countries without strong enough military power — such as Ukraine, which naively gave up its nuclear arsenal in the 90s in exchange for Russian promises to not invade — are repeatedly battered by the power-hungry.
The orange man is completely ineffectual on both fronts. Will not spend the money on the military and too inept to make a deal that doesn’t cost in the long run.
Should companies with consumer brands never make weapons? Sure, and while we're at it, let's ban knives because they can be used for both chopping vegetables and stabbing people. The issue isn't the technology itself. It's how it's regulated, controlled, and used. And as for cyber terrorism? That's a problem with bad actors, not with the tools themselves.
So, by all means, keep pointing out the hypocrisy of a company that makes both YouTube Kids and killer AI. Just don't pretend you're not benefiting from the same duality every time you use a smartphone or the internet, which, don't forget, is a technology born, ironically, from military research.
Ideally no one, and if the cost / expertise is so niche that only a handful of sophisticated actors could possibly actually do it, then in fact (by way of enforceable treaty) no one.
Anyone who wants to establish deterrence against superiors or peers, and open up options for handling weaker opponents.
> enforceable treaty
Such a thing does not exist. International affairs are and will always be in a state of anarchy. If at some point they aren't, then there is no "international" anymore.
> enforceable treaty
How would you enforce it after you get nuked?
We're talking about making war slightly more expensive for yourself to preserve the things that matter, which is a trade-off that we make all the time. Even in war you don't have to race for the bottom for every marginal fraction-of-a-percent edge. We've managed to e.g. ban antipersonnel landmines, this is an extremely similar case.
> How would you enforce it after you get nuked?
And yet we've somehow managed to avoid getting into nuclear wars.
Feels good but will lead to disaster in the long run.
AI, on the other hand, seems to be very multi-purpose.
Might be thinking of Anduril.
Look it up.
There is at this moment little evidence that autonomous weapons will cause more collateral damage than artillery shells and regular air strikes. The military usefulness, on the other hand, seems to be very high and increasing.
Like skydiving without a parachute, I think we should accept it is a bad idea without needing a double blind study
We have, of course, developed all three. They have gone a long way toward keeping us safe over the past century.
Propping up an evil figure/regime/ideology (Bolsheviks/Communists) to justify remorseless evil (concentration camps/the nuclear bomb) is neither new nor unique, but it is particularly predictable.
I'm sure this sounds like a big nothingburger from the perspective of, you know, people he isn't threatening.
How can you excuse that behaviour? How can you think someone like that can be trusted with any weapons? How naive and morally bankrupt do you have to be to build a gun for that kind of person, and think that it won't be used irresponsibly?
Thinking that it won't is a mixture of cowardice, cynical opportunism, and complicity with unprovoked aggression.
In which case, I posit that yes, if you're fine with threatening or inflicting violence on innocent people, you don't have a moral right to 'self-defense'. It makes you a predator, and arming a predator is a mistake.
You lose any moral ground you have when you are an unprovoked aggressor.
There are more options than arming an aggressor and capitulating to foreign powers. It's a false dichotomy to suggest it.
That wasn't the quote that was removed. Not even close, really.
Actually I think a lot of people have it - just yesterday I saw someone on reddit claim Google was evil because it was secretly founded by the US military. And they were American. That's their military!
A world without the US navy is one without sea shipping because pirates will come back.
The problem with building AI weapons is that eventually it will be in the hands of people who are morally bankrupt and therefore will use them to do evil.
In my garage, I have some pretty nasty "weapons" - notably a couple of chainsaws, some drills, chisels, lump/sledge/etc hammers and a fencing maul! The rest are merely: mildly malevolent.
You don't need an AI (whatever that means) to get medieval on someone. On the bright side the current state of AI (whatever that means) is largely bollocks.
Sadly, LLMs have and will be wired up to drones and the results will be unpredictable.
How would we go about doing that?
Every kind of nefarious way to keep the truth at bay in authoritarian regimes is always on the table. From the cracking of iPhones to track journalists covering these regimes, to snooping on email, to using AI to do this? Is just all the same thing, just updated and improved tools.
Just like Kevin Mitnick selling zero day exploits to the highest bidder, I have a hard time seeing how these get developed and somehow stay out of reach of the regimes you speak of.
The concern with AI weapons specifically is that if something goes wrong, they might not even be in the hands of the people at all, but pursue their own objective.
When China attacks with AI weapons do you expect the free world to fight back armed with moral superiority? No. We need even more lethal AI weapons.
Mutual assured destruction has worked so far for nukes.
A car is a tool. It can be used as a weapon.
Even water and air can be used as a weapon if you try hard enough. There is probably nothing on this planet that couldn't be used as a weapon.
That said, I do not think AI weapons are a reasonable thing to build for any war, for any country, for any reason - even if the enemy has them.
So you're in favor of losing a war and becoming a subject of the enemy? While it's certainly tempting to think that unilateralism can work, I can hardly see how.
I never said that. Please don't reply to comments you made up in your head.
Using AI doesn't automagically equate to winning a war. Using AI could mean the AI kills all your own soldiers by mistake. AI is stupid, it just is. It "hallucinates" and often leads to wrong outcomes. And it has never won a war, and there's no guarantee that it would help to win any war.
It's legitimate to worry about scaled, automated control of weapons, since it could allow a very small number of people to harm a much larger number of people. That removes one of our best checks we have against the misuse of weaponry. If you have to muster a whole army to go kill a bunch of people, they can collectively revolt. (It's not always _easy_ but it's possible.)
Automating weapons is a lot like nuclear weapons in some ways. Once the hard parts are done (refining raw ore), the ability for a small number of people to harm a vast number of others is serious. People are right to worry about it.
It isn't this that's insane; it's a total belief in the purity of weapons that is.
Of course. My point was, it is insane for those who do.
The reality is that in a war between the West and Russia/Iran/North Korea/China, or whomever we end up fighting, we're going to do whatever we can so that Western civilization and our soldiers survive and win.
Ultimately Google is a western company and if war breaks out not supporting our civilization/military is going to be wildly unpopular and turn them into a pariah and anything to the contrary was never going to happen.
There was no war forthcoming between an integrated West and any other power. War is coming because there no longer is a West.
Today people have differing views of nuclear weapons, but people who fought near Japan and survived believe the bomb saved their life.
It's easy to pretend you don't have a side when there is peace, but in this environment Google is going to take a side.
So... when the Russian tanks start rolling toward Berlin and Chinese troops are marching along that nice new (old) road they finished fixing up, on the way to Europe - if that happens, which looks possible - you think there will be no West??
If the world is to be divided Europe is the lowest hanging and sweetest fruit.
I think there will still be a West even if there is a King in the US demanding fealty to part of it - we are the same as they are, and it's ridiculous to pretend we aren't.
Ideology is one thing, survival of people and culture is another.
if you're mad about the existence of weapons then please review the prisoners' dilemma again. we manage defection on smaller scales using governments but let's presuppose that major world powers will not accept the jurisdiction of some one-world government that can prevent defection by force. especially not the ones who are powerful and prosperous (like us) who would mostly lose under such an arrangement.
A lot of this thread has reduced the issue to whether it is more ethical for one country to deploy relative to others. In any case, a lot of countries will have this capability. A lot of AI models are already openly available. The required vision and reasoning models are being developed currently for other uses. Weaponization is not a distant prospect.
Given that, the tech community should think about how to tackle this collective major problem facing humanity. There was a shift, which happened to nuclear scientists, from when they were developing the bomb to the post World War situation when they started thinking about how to protect the planet from a MAD scenario.
Important questions - What would be good defense against these weapons? Is there a good way of monitoring whether a country is deploying this - so that this can be a basis for disarmament treaties? How do citizens audit government use of such weapons?
As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
Not really, though. Like any tool, its misuse or failure is the responsibility of the wielder.
> As for sending people in harm's way: if that were the effect, it would only apply to those "with AI". In essence, AI becomes a weapon you use to threaten someone with war since your own cost will be low and their cost will be high.
Agree about that part but that's just the nature of war, there are always going to be armies that are scarier than others.
I don't think the entities that are using it in this way care.
https://www.statnews.com/2023/03/13/medicare-advantage-plans...
https://www.hfma.org/revenue-cycle/denials-management/health...
https://www.vox.com/future-perfect/24151437/ai-israel-gaza-w...
https://www.972mag.com/lavender-ai-israeli-army-gaza/
And Google is profiting off this, helping enforce a brutal illegal occupation.
https://www.datacenterdynamics.com/en/news/google-provided-a...
… it will be named “Cyberdyne”
A lot of what is going on in the world right now makes me think we are in a war that hasn't yet been officially acknowledged.
"Google Petard, formerly known as Boombi, will be shutting down at the end of next month. Any existing explosion data you have in your Google Account will be deleted, starting on May 1, 2027."
At the same time, the Ukraine war has changed a lot of battlefield strategies, which will require the development of new advanced weapons - most obviously in the area of drones / counter-drones, but a lot of other technology as well.
With all that money of course companies will chase it. OpenAI is already joined up with Anduril.
Looks very incomplete....
OpenAI has already signed a collaboration with Anduril.
Killer robots will be a reality very soon; everyone is very obviously getting prepped for this. China has a massive advantage.
Only in the United States do we have the privilege to pretend like we can ignore it.
Additionally, the US has been vociferous about limiting access to foreign tech companies with "military links" in China, so perhaps Google should be placed in that category by all non-Five-Eyes countries.
> ... evil cannot by itself flourish in this world. It can do so only if it is allied with some good. This was the principle underlying noncooperation—that the evil system which the [British colonial] Government represents, and which has endured only because of the support it receives from good people, cannot survive if that support is withdrawn.
If you are a good person working for the big G...
drinks coffee
Nevermind.
I pledge to not drink coffee
Somebody hands me coffee
I retract the pledge and start drinking
---
I have to wonder what the value of a pledge is if you can just stop pledging something at the earliest convenience, do the thing, and people cheer you on for it
Google rushed to sell AI tools to Israel’s military after Hamas attack:
https://www.washingtonpost.com/technology/2025/01/21/google-...
By the way, humans: "principles page includes provisions that say the company will use human oversight". ... which human? Trump? Putin is human too, but I guess he is busy elsewhere. Definitely not someone like Mother Teresa; she is dead anyway, and I cannot think of someone from recent years playing in the same league. Somehow that end of the spectrum is not represented that well recently.
They already dumped "do no evil" many years ago and they are now all in on fuck the poor and fuck the rest: I'm making profits and all is fine.
Google makes money and they don't appear to care how - its all about the money.
Plus, in a healthy economy if everyone is bribing the government shouldn’t it all cancel out? Well it turns out the poor don’t bribe the government very often, so they are easily ignored.
And suddenly, when the government is co-opted into believing anything that gets in the way of “business” is bad, they figure out that money that could be spent on social services could also be spent on corporate tax incentives! Eventually the entire country becomes one big profit maximizer.
What do you think is happening right now?
Of course it's not as bad as you describe, because it's not as simple as you describe.
On the one hand I think they were afraid many of their employees might protest again like they have in the past, signaling that Google isn't that awesome, progressive place everybody should work. This would mean they could be potentially losing some of the top notch SV talent that they are in constant competition with from other companies.
On the other hand, they've made it clear they aren't above firing employees who do protest, as they just did when 28 employees were fired over the recent Project Nimbus contract with Israel, worth an estimated $1.2B:
They staged sit-in protests in Google's offices in Silicon Valley, New York City and Seattle – more than 100 protestors showed up. A day later, Google fired Montes and 27 other employees who are part of the No Tech for Apartheid group.
https://www.npr.org/2024/04/19/1245757317/google-worker-fire...
I think they try too hard to toe the line between the two, but like you said, it's clear they're really all about the money.
When you are a publicly quoted company, you are mostly lost to reason and profit is everything! That's why you do it.
If you have other intentions then go with a Not for Profit (I'm sure most countries have a similar structure) or similar setup.
I’m sure they are clutching their pearls while waiting for their money to be deposited into their bank account and their RSUs to be deposited into their brokerage accounts.
Yes I did a stint at BigTech. But I didn’t lie to myself and think the company I worked for was above reproach as my adult son literally peed in bottles while delivering packages for the same company.
- 1mmTC is enough to do this depending on how one allocates spending. land in many parts of the country is not that expensive.
That being said the only way I could imagine we'd get a single world order is one country dominating everyone else, just like superpowers and regional powers dominate their respective parts of the globe.
Never ever ever are people just going to give up their control out of some form of "enlightenment" that has never existed among the human race.
Unprecedented levels of peace in Europe happened not because of competing nation states, but in spite of that competition. It was the unipolar control exerted by the US, the destruction of the Soviet Union, and the creation of the EU (a proto pan-European state) that caused the 1990s. There was one and only one pole -- the West. Not 2 (or more) different adversaries with opposed interests engaging in an arms race.
As we go back to a world with more fragmented and distributed power, we will get more war and more arms races. An especially toxic setup in the age of AI.
This doesn't have to be a binary, anyways. You could set it up as some kind of federation where there's still economic competition. Just not military competition.
Also, AFAIK all of those nations consolidated because of military conquest. Countless European wars and empires.
The EU isn't like that, but they're an alliance and not one country. You can't just leave a country like England did.
and yes, America has done that for the "Pax Americana" period. unfortunately we were short-sighted and allowed people too much free rein to be stupid and anti-American.
In other words, everything would be terrible, but at least it'd be terrible for everyone.
Until we realized we could sacrifice some for the betterment of the rest, find a way to rationalize it, and then we throw it all out the window.
The alignment-faking research seems to indicate that LLMs exercise this kind of reasoning.
And even if it was, they wouldn’t tell the system it was part of old non-evil Google.
Genuine questions. Unlike "don't be evil," this promise has a very narrow and clear interpretation.
It would be nice if companies weren't able to just kinda say whatever when it's expedient.
However, when you change a promise publicly, you signal a change in direction. It is much more honest than leaving it in place but violating it behind the scenes. If the public really cares, they can pass a law via their democratic representatives (or Google can swear a public oath before God I suppose).
Eventually tech and even startups follow the money. Palantir is considered cool. YC started accepting defense startups. Marc Andreessen is on X nonstop promoting conservative views of all kinds. PG becomes anti-wokism warrior.
This is how it happens. Step by step.
Otherwise why bother?
http://www.notechforapartheid.com/
There is also tremendous interest in remotely controlled and autonomous ground platforms, though only a few of them have been fielded on the actual battlefield so far. Google is the leader in the civilian ones, and it looks to me like there is a relatively easy path to transferring the tech into military systems.
Really if a company wants people to trust claims like this, they should make them legally binding. Otherwise it's all PR.
I'm gonna presume "the new leadership of the FBI".
This whole thing where the average person feels that they can use rules against a more powerful person? That's really an invention of maybe the last 80 years, if not more recently than that.
With the exception of that human lifetime-sized era, the vast majority of history is a bunch of psychopaths running things and getting to kill/screw whoever they wanted and steal whatever they wanted. Successful revolts are few and far between. The only real difference is the stakes.
Company says won't work for DoD
DoD initiates arm twisting and FOMO
Company now works for DoD
The origins of investment will often lead to relative outcomes of that investment. It's almost like DoD invested in Google for an informational weapon, which really should surprise no one.
https://www.datacenterdynamics.com/en/news/google-provided-a...
This is extremely disconcerting.
Google as a tool of surveillance is the kind of thing that could so easily be abused or misused to such catastrophic ends that I absolutely think the hard line should be there. And I only feel significantly more this way given the current geopolitical realities.
By weaponising AI?
Who else right? If not them, there will be no one saving democracies with weaponized mass-surveillance AI. It is their quest and privilege, right? Medicine, just society, and all such crap have to wait!
I bought that!
(not)
It would be a bit of a Pyrrhic victory to repel an attempted takeover of your land, only for that land to end up contaminated with literally millions of landmines because you didn't have a mutual agreement against using them.
This is an excellent overview of why: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...
It’s the companies that hoard everyone’s personal information, who eroded the concept of privacy while mediating lives with false promises of trust, turning into state intelligence agencies, that bothers me.
The incentives and results become fucked up, and safeguards are less likely to work. I get that not a lot of people care, but it’s dangerous.
Ultimately every sufficiently large company seems to become an arms dealer, a drug dealer or a bank.
We need look no further than Lavender [1] to see where this ends up.
[1]: https://www.972mag.com/lavender-ai-israeli-army-gaza/
The difference is about 250k/yr. Kinda big.
All "pledges" without some kind of enforceable legal foundation are just meaningless hot air.
Now that Google's on board, who knows, maybe we will be able to drone strike people that are underground, underwater, or in buildings without killing innocent civilians.
Nothing is going to stop USA's adversaries from deploying AI against US citizens. Pick your poison, but I prefer to compete and win rather than unilaterally disarm and hope for goodwill and kindness from regimes that prioritize the polar opposite.
https://en.wikipedia.org/wiki/List_of_companies_involved_in_...
The subject being, how far large corporations are willing to go for the sake of profit maximisation.
https://itif.org/publications/2024/09/16/china-is-rapidly-be...
For example the most advanced batteries in the world are designed and manufactured in China.
In fact, when you look at the last decade of Google saying they're an "AI first" company and literally inventing transformers, and look at what their stock price has done and how they've performed in relation to other major companies involved in this current AI spring, there is simply no way not to be disappointed.
> Moreover, the Israeli army systematically attacked the targeted individuals while they were in their homes — usually at night while their whole families were present — rather than during the course of military activity.
https://www.972mag.com/lavender-ai-israeli-army-gaza/
Imagine you have a weapon that can find and kill all the 'bad guys'. Would you not be in a morally compromised position if you didn't use it? You're letting innocents die every moment you don't.
* warning definitions of bad guys may differ leading to further conflict.
Personally I don't think we can as a species.
Right, and every time that happens because of miscalculations by our government, they lose the very real and important public license to continue. Ultimately, modern wars led by democracies are won by the public's desire to continue them. The American public can become very hesitant to wage war very fast if we unleash Minority Report on the world for revenge.
Who, in that business, cares?
AI will provide a fig leaf for the indiscriminate large scale killing that is regularly done since the start of industrialised warfare.
Using robots spares drone pilots from PTSD.
From the perspective of the murderous thugs that run our nations (way way before the current bunch of plainly bonkers ones in the USA), what is not to like?
Whilst there are all sorts of quibbles about weapons generally being evil, this is evil.