Maybe I'm just cynical, but I wonder how much of this initiative and energy is driven by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.
I feel like this is something I've seen a fair amount in my career. About seven years ago, when Google was theoretically making a big push to stage Angular on par with React, I remember complaining that the documentation for the current major version of Angular wasn't nearly good enough to meet this stated goal. My TL at the time laughed and said the person who spearheaded that initiative was already living large in their mansion on the hill and didn't give a flying f about the fate of Angular now.
bsimpson 9 days ago [-]
There is a prominent subset of the tech crowd who are ladder climbers - ruthlessly pursuing what is rewarded with pay/title/prestige without regard to actually making good stuff.
There are countless kidding-on-the-square jokes about projects where the innovators left at launch and passed it off to the maintenance team, or where a rebrand was in pursuit of someone's promo project. See also, killedbygoogle.com.
devsda 9 days ago [-]
> There is a prominent subset of the tech crowd who are ladder climbers - ruthlessly pursuing what is rewarded with pay/title/prestige without regard to actually making good stuff.
I think the hiring and reward practices of the organizations & the industry as a whole also encourages this sort of behavior.
When you reward people who are switching too often or only when moving internally/externally, switching becomes the primary goal and not the product. If you know beforehand that you are not going to stay long to see it through, you tend to take more shortcuts and risks that becomes the responsibility of maintainers later.
We have a couple of job hoppers in our org where the number of jobs they held is almost equal to their years of experience and their role is similar to those with twice the experience! One can easily guess what their best skill is.
megadata 9 days ago [-]
> reward practices
Yes. People are incentivized to do very stupid things to grab this years bonus or promotion.
See Google intentionally degrading search results for example. The resentment and loathing for Google is at all time high.
CSMastermind 9 days ago [-]
Okay but what's a better system?
only-one1701 9 days ago [-]
Ideally it's the responsibility of management, starting at the top, to think critically about what kind of software most benefits the company and reward those that do it. Unfortunately, it's much harder to sell "amazing documentation and test coverage" to a CEO than "Gen AI wrapper that doesn't really do much", so here we are.
randmeerkat 9 days ago [-]
> Ideally it's the responsibility of management, starting at the top, to think critically about what kind of software most benefits the company and reward those that do it. Unfortunately, it's much harder to sell "amazing documentation and test coverage" to a CEO than "Gen AI wrapper that doesn't really do much", so here we are.
What do you think these technical ladder climbers become..? Technical leadership. The truth is, there’s no one technical in big tech leadership. They pay lip service to “tech” to keep up appearances and to satisfy the pleebs that work under them. The only things leadership cares about is the stock price and profitability, literally nothing else matters. If anything the tech itself is a nuisance that pulls their attention from where they’d rather have it, which is anywhere else.
sciens3_ 9 days ago [-]
> Ideally it's the responsibility of management, starting at the top, to think critically about what kind of software most benefits the company and reward those that do it.
I work as if that ideal is true, and can’t stand playing the game. But others are still playing the game and eventually they win whatever B.S. position it is that they aspire to, and I get removed from the board.
re-thc 9 days ago [-]
> Okay but what's a better system?
Why does promotion need a new feature? Reward for maintenance over time. Build on existing features / components. Reward for helping and up-skilling others.
tcmart14 8 days ago [-]
Especially at some of these bigger tech companies that have been around multiple decades. Im not a huge fan of Microsoft and their products. But man, you know there are probably a few hundred engines who are the bedrock of the company, if they disappeared tomorrow, Microsoft would probably be screwed. They probably work on the products and on areas you never hear about that never get flashy headlines. They are the people keeping the guts of Windows and all their multi-decade projects still pumping. Answering and fixing bug reports on these products.
aleph_minus_one 9 days ago [-]
Create incentives that managers will be responsible for a very long time for a system/product that they create; with barely any chance to get out of it. Give them a malus (opposite of "bonus"; except in rare exceptional cases) if they "project hop".
If a particular kind of "career managers" hate this system (and perhaps thus quit): great.
brookst 8 days ago [-]
Ask every single person in the org “list the top 5 people who made you more productive this year”.
Reward people based on (# who listed them * average salary of those who listed them).
berdario 8 days ago [-]
This approach is notable for benefitting from swelling headcount increases
i.e. if you hire 1000 new people, even if only a small fraction will vouch for you, on average you -and everyone else- will benefit by seeing the # of people who listed you in the "top 5 who helped you being productive" increase
The old Google performance review was arguably similar (the managers could still punish their reports, but peer feedback was valued a lot more), but I think that Google swelled in size because of other effects (probably because managers might've been indirectly rewarded by having more reports, despite managers rarely being among the people who others would list as "making you more productive")
nashashmi 8 days ago [-]
Dont offer excessive money. Money is like shit whose smell attracts flies. The more money starts entering the system, the more flies come in. The more flies, the more rigorous the vetting process. That's where the infamous google hiring system started kicking in.
curtisblaine 8 days ago [-]
Are you proposing... not paying your employees because paying them will attract people who do it for the money?
nashashmi 8 days ago [-]
Isn’t that what makes startups special ?
SR2Z 7 days ago [-]
This is what founders say right before they offer you 0.05% of their company and try to sell you on the idea that of course it's worth it because clearly the company will be worth $100B someday!
VirusNewbie 8 days ago [-]
that's why the most dominant software companies pay the lowest.
Lerc 9 days ago [-]
I had not encountered the phrase kidding-on-the-square before. Searching seems to reveal a spectrum of opinions as to what it means. It seems distinct to the 'It's funny because it's true' of darker humour.
It seems to be more on a spectrum of 'Haha, only joking' where the joke teller makes a statement that is ambiguously humorous to measure the values of the recipients, or if they are not sure of the values of the recipients.
I think the distinction might be on whether the joke teller is revealing (perhaps unintentionally) a personal opinion or whether they are making an observation on the world in general, which might even imply that they hold a counter-opinion.
Where do you see 'kidding on the square' falling?
(apologies for thread derailment)
bsimpson 9 days ago [-]
It's a phrase I learned from my mom/grandpa growing up. "On the square" means "but I also mean it."
fsckboy 9 days ago [-]
"on the square" means "honest" in the same sense that "crooked" means dishonest. think of carpentry, if something is not on the square, it's crooked.
Anyway, why does Microsoft bottleneck itself when it could have 10 different AI teams. That's why 10 new AI startups can achieve what these behemoths can't.
alternatex 8 days ago [-]
Microsoft already suffers from lack of software consolidation. It costs a lot of money to maintain 10 different tech stacks. Not to mention having separate security/privacy review processes for each.
readyplayernull 8 days ago [-]
We are talking multitrillion company here, money shouldn't be an issue. Though you prove my point, they lack coordination. Also AI lacks coordination and strategy and won't solve BigTech management problems any time soon. Best they can try is providing tools trying to capture what others, that are actually inventive, can do. That's why they will stay at Azure level.
supriyo-biswas 9 days ago [-]
At my former employer, there was a team who were very much into resume-driven development and wrote projects in Go even when Java would have been the better alternative considering the overall department and maintenance team expertise, all the while they were informally grumbling about how Go doesn’t have the features they need…
whstl 9 days ago [-]
I have lot of sympathy for resume-driven developers. They're just answering to the labor market. More power to them.
When companies do what the market expect we praise them. When it's workers, we scorn them. This attitude is seriously fucked up.
When companies start hiring based on experience, adaptability, curiosity, potential and curiosity then you get to complain. Until that, anyone doing it should be considered a fucking genius.
usefulcat 9 days ago [-]
Pretty sure most of the resentment comes from working with such people. Which I think is understandable.
whstl 9 days ago [-]
Understandable, but still wrongfully blaming a player rather than the game itself.
marsovo 9 days ago [-]
I don't think it's wrong to blame people who make life more difficult for others for their own aggrandizement
The game doesn't exist without players. I could make more money if I worked at Meta or Amazon, but at what cost?
I understand the realities of Game Theory, but then one could argue that being blamed and criticized for one's choices is also part of the game.
"Mr Wolfcastle, how do you sleep at night?" "On a big pile of money with many beautiful ladies"
whstl 8 days ago [-]
> I don't think it's wrong to blame people who make life more difficult for others for their own aggrandizement
It is, and this is highly judgmental and offensive. Nobody is doing this for "aggrandizement".
Also, all of this is just rationalization, and will keep being until:
1) People start blaming companies for not having the spine to say no to misguided projects by employees.
2) People start blaming Companies for not having the spine to hire people based on past experiences with the craft of programming itself, but rather asking them to have a certain box ticked in their CV.
If one wants to program in X in order to better feed their family and the market says they need to have used X professionally, it is in their right to do X at the workplace.
This is not only expected of them, this is how the whole industry is set up.
They're just following the rules, period.
namaria 7 days ago [-]
> People start blaming companies
For maximizing their gains in spite of wider consequences? Why? I thought that was genius level behavior in your book.
Why do you feel compelled to denounce this behavior on one side and praise it on another? That seems to be the very hypocrisy that you are shaking your fists against.
ipaddr 9 days ago [-]
You are more likely to leave tech/full time employment after working at a FAANG. Either you made enough,it burnt you out, everywhere else seems down, etc. You go to another FAANG or try a startup or leave altogether.
9 days ago [-]
ohgr 9 days ago [-]
We have those! Turn up, make some micro-services or AWS crap pile we don’t need to solve a simple problem, then fuck off somewhere else and leave everyone else to clean it up.
Worst one is the data pipeline we have. It’s some AWS lambda mess which uses curl to download a file from somewhere and put it into S3. Then another lambda turns up at some point and parses that out and pokes it into DynamoDB. This fucks up at least once a month because the guy who wrote the parser uses 80s BASIC style string manipulation and luck. Then another thing reads that out of DynamoDB and makes a CSV (sometimes escaped improperly) and puts that into another bucket.
I of course entirely ignore this and use one entire line of R to do the same job
Along comes a senior spider and says “maybe we can fix all these problems with AI”. No you can stop hiring acronym collectors.
conjectures 9 days ago [-]
Ah, the good ole Rube Goldberg machine.
darkhorse222 9 days ago [-]
I see that a lot from the Go crowd. That's why I consider any strong opinions on languages to be a poor indicator for ability. Sure there's differences, but a language does not make the engineer. Someone who is attracted to flashy stuff makes for an indulgent planner.
scubbo 9 days ago [-]
> That's why I consider any strong opinions on languages to be a poor indicator for ability.
Hmm. Can't say I agree here - at least not with the literal text of what you've written (although maybe we agree in spirit). I agree that _simplistic_ strong opinions about languages are a sign of poor thoughtfulness ("<thing> is good and <other thing> is bad") - but I'd very much expect a Staff+ engineer to have enough experience to have strong opinions about the _relative_ strengths of various languages, where they're appropriate to use and where a different language would be better. Bonus points if they can tell me the worst aspects about their favourite one.
Maybe we're using "opinion" differently, and you'd call what I described there "facts" rather than opinions. In which case - yeah, fair!
mikepurvis 9 days ago [-]
Absolutely. Anyone senior should be able to fairly quickly get a handle on the requirements for a particular project and put forward a well-reasoned opinion on an appropriate tech stack for it. There might be some blank space in there for "I've heard of X and Y that actually might fit this use case slightly better, so it's probably worth a brief investigation of those options, but I've used Z before so I know about the corner cases we may run into, and that has value too."
BobbyJo 9 days ago [-]
Language matters quite a bit when deciding how to build an application though. I see having no strong opinions on language to be a sign the person hasn't developed a wide enough variety of projects to get a feel for their strengths and weaknesses.
ipaddr 9 days ago [-]
The more senior I get the less opinionated I am. Someone wants to do something in some different or some different way.. why not. In the end the language matters little the tech stack doesn't matter unless you are going down a specific path and even then it probably doesn't matter that much if you have a choice on what to use.
BobbyJo 8 days ago [-]
I honestly couldn't disagree more. Having built very similar systems in Golang, Python, and Java in different companies, and having used MongoDB and other NoSQLs
as well as Postgres to similar ends, I have very strong preferences about which I'd rather use in the future.
Even simple requirements can rule out languages for me. Like, if you need async or concurrency, Python is awful. If you need SQL in your code, Golang isn't great. If you are building a simple CRUD backend, Java is waste of time. If you aren't doing anything compute heavy or embedded, why even consider C++ or Rust. The list goes on.
ipaddr 8 days ago [-]
You are trying to choose between languages based on differentiation.
This language is better for x so it should be used or it's a waste time or capability.
But in reality it rarely matters. If you were only allowed to use Java as a backend and your competitors could use anything your company would succeed or fail based on marketing and sales. The backend doesn't matter as long as they both have the same features.
I understand developer preference and different languages make things easier and make programming funnier. Languages have different limits.
As you become more senior you realize getting around those limits is part of the magic. If you come on to a project where the existing developer wants to write the backend in javascript because that's what they know I would rather use Javascript then wasting time trying to push a more 'pure' choice. Because in the end I am capable of writing it and what we will be judged on is if it works to achieve an objective not if it was the best language choice when using differentiation.
BobbyJo 6 days ago [-]
Then why do companies try to move so fast if if doesn't matter? Seems like your opinion runs counter to the observed behavior of the entire software industry.
If speed of execution matters, then the language and tools you use for something also matters.
ljm 8 days ago [-]
There’s also the practicality of hiring or maintenance as well as what you get from the ecosystem, as well as understanding the wider business context.
I might personally love to kick off a greenfield project with Elixir, and it might tick all the technical boxes and meet the requirements. But then I have to pay a premium for senior engineers that know elixir or have to price in the time needed to upskill.
Or I could just do it in Rails where I can dip into a much larger talent pool and still meet the requirements. Much more boring but can get the job done just as well.
darkhorse222 8 days ago [-]
I think you overemphasize the pain. A good design will handle lots.
BobbyJo 8 days ago [-]
Languages influence design, and application influences design. That means they influence each other.
tbossanova 8 days ago [-]
So what are the strengths of those languages?
BobbyJo 8 days ago [-]
Python is great for library orchestration and has tons of support for almost anything you need. So if you aren't building a large application and don't need concurrency, its great.
Java is great for building large applications with large teams, since it limits how code is shared and how libraries are built, and has a culture of convention. Golang has greate concurrency support and meta programming support, so its awesome for web servers and plugging into systems that incorporate orchestration technologies. C++ and Rust are fast and efficient as hell when you need that. JavaScript is your only viable option when you need complex and interactive UI that's OS agnostic.
ohgr 9 days ago [-]
Yeah. I have a list of things I won’t work with. That’s what experience looks like.
(Mostly .Net, PHP and Ruby)
pdimitar 9 days ago [-]
And I see people who assume choosing a language was done for "flashy stuff" the less capable.
See, we can all generalize. Not productive.
Only thing I ever saw from Golang devs was pragmatism. I myself go either for Elixir or Rust and to me Golang sits in a weird middle but I've also written 20+ small tools for myself in Golang and have seen how much quicker and more productive I was when I was not obsessed with complete correctness (throwaway script-like programs, small-to-mid[ish]-sized projects, internal tools etc.)
You would do well to stop stereotyping people based on their choice of language.
zozbot234 9 days ago [-]
> how much quicker and more productive I was when I was not obsessed with complete correctness
That's pretty much another way of saying that stuff becomes a whole lot quicker and easier when you end up getting things wrong. Which may even be true, as far as it goes. It's just not very helpful.
pdimitar 9 days ago [-]
Obviously. But I did qualify my statement. There are projects where you're OK with not getting everything right from the get go.
FWIW I very much share your exact thoughts on Rust skewing metrics because it makes things too easy and because stuff almost immediately moves to maintenance mode. But that being said, we still have some tasks where we need something yesterday and we can't argue with the shot-callers about it. (And again, some personal projects where the value is low and you derive more of it if you try quickly.)
darkhorse222 8 days ago [-]
I am a builder trained as a computer engineer. I scoff at languages as abstractions over fundamentals that do not change. This isn't like a hammer and a screwdriver, it's like fifteen different hammers. Just give me the thing, I'll build the house. A guy spends three hours talking about the hammer, he's probably not focused on building the house. Tool preferences show ego of the builder.
pdimitar 8 days ago [-]
> Just give me the thing, I'll build the house.
What do you think all programming discussions about languages, typing systems, runtime, tooling etc. aim for?
EXACTLY THAT.
If it was as easy as "just give me thing" then programming would have been a solved and 100% automated problem long time ago.
Your comment comes across as "if only we could fly, we would have no ground road traffic jams". I mean, obviously, yeah, but we can't fly.
Your comment also comes across a bit elitistic and from the POV of an ivory tower. Don't know if that was your goal, if not, I'd advise you to state things a bit more humbly.
darkhorse222 7 days ago [-]
I'm not seeking your advice. When I look at a cathedral built out of sand and call it meaningless that is not elitist, that is humble. Engineers who think their tools make all the difference are the pretentious ones. Things are not that complicated.
pdimitar 7 days ago [-]
Too much generalization. Not interesting and not a discussion. No idea why you even bothered to reply.
I stated an opinion. You can reject it silently. Having the last word is not such a badass move as many people think. :)
rurp 9 days ago [-]
I've seen the exact same pattern play out with different tools. The team used a shiny new immature platform for nice sounding reasons and then spent 80% of their time reinventing wheels that have already been solved in any number of places.
synergy20 9 days ago [-]
golang is decent and is the only new lang climbed up to 7th in popularity, it does shine at what it's good at
pclmulqdq 9 days ago [-]
The go and rust crowds both love writing things in their own language for its own sake. Not because it's a good choice. For a large web backend, go is great. For many other things it's terrible.
pdimitar 9 days ago [-]
> The go and rust crowds both love writing things in their own language for its own sake
Hard to take you seriously when you do such weird generalized takes.
While it's a sad fact that fanboys and zealots absolutely do exist, most devs can't afford to be such and have to be pragmatic. They pick languages based on merit and analysis.
hibikir 9 days ago [-]
You are very fortunate: In my 20 year career, I've spent most of it surrounded by zealots, including at very well known firms. I am currently surrounded by multiple teams with extremely tight ideas of what languages they want to work in. All different ideas, yet they are working basically with the same constraints: it's not as if they ware doing very different work where another team's language would be unfit for purpose. The performance is similar, and they are basically in a culture war. They have passionate arguments listing why a different team's choice is all wrong: None hold any real water.
I am especially valuable because I am fine reading and writing any of the languages involved. The management likes that, but there's a lot of difficulties solving the tribal problem, as the leads are basically all crazy zealots, and it's not as if purging one or two factions of zealots would avoid further zealotry from the survivors. The fact that I can work across all their tech doesn't make me many friends, as my work across systems shows their arguments have little merit.
For most work, in most cases, most languages are just fine. The completely wrong tool for the job is pretty rare, and the winning argument in most places is "we have the most people that have experience with tool X, or really want to try exciting new thing Y", for whatever the problem is, and whatever X might be.
pdimitar 8 days ago [-]
What's preventing you from working in a less dysfunctional place, exactly?
pclmulqdq 9 days ago [-]
Most of the people who use Go and Rust do it for pragmatic reasons. That doesn't influence the culture of the zealots in each community.
You should search for headlines on HN that say "written in Go" or "written in Rust" and then compare that to the number of headlines that say "written in JavaScript" or "written in Kotlin."
nativeit 9 days ago [-]
I understand the criticism that languages should be pragmatic choices they serve the strengths of the project at hand, but I guess I always found most projects boasting they’re written in a specific language or framework are doing so precisely because of the overarching interest in the language itself, which seems like a perfectly valid justification for such projects to me.
I’ve seen the more cynical hype-driven stuff, but it’s inevitably superficial on first glance, where I have seen some real curiosity and exploration in many “Project X - Built In Rust/Go/Cobol/D/Whatever” and I think they’re exploring the dynamics of the language and tooling as much as anything else.
pdimitar 8 days ago [-]
Those headlines help me understand what I am getting so I seriously have no idea what argument you are trying to make.
You do seem to say Golang and/or Rust devs are zealots which, if it is indeed what you are saying, is boring and plain false.
whstl 8 days ago [-]
This is not a sign or a proof of zealotry.
whstl 9 days ago [-]
Life is too short to program on languages one doesn't love.
Those people, if they really exist, are right.
pclmulqdq 9 days ago [-]
To be honest, I agree with you, and I think this is the right approach to life if you have a strong preference for languages. I think this only gets a bad name when people aren't upfront about their preferences and skills, with a lot of people having bad experiences with a go zealot at some point.
whstl 8 days ago [-]
It seems the zealotry here is not coming from the Go and Rust people, though. From another message, your measure of presence of zealots is by the amount of "HN titles".
Rewriting something in Go or Rust and announcing it is not being a Zealot.
Being enthusiastic about something shouldn't be a cause for us to judge them like this. We should be happy about them.
zozbot234 9 days ago [-]
Rust is pretty antithetical to resume-driven development because a lot of the stuff that's written in Rust is too well-written and free of the usual kinds of software defects. It immediately becomes "done" and enters low-effort maintenance mode, there's just very little reason to get involved with it anymore since "it just works". Believe it or not, this whole dynamic is behind a lot of the complaints about Rust in the workplace. It's literally making things too easy.
pclmulqdq 9 days ago [-]
I have to say that the median crate I interact with has the following readme:
" Version 0.2 - Unstable/buggy/slow unless you use exactly like the example - not going to get updated because I moved on to something else"
Rust is another programming language. It's easier to write code without a certain class of bugs, but that doesn't mean version 0.2 of a casual project is going to be bug-free.
LPisGood 9 days ago [-]
It’s not that I don’t believe you, hut that I’m having trouble seeing how what you say could be true.
Rust projects immediately become “done”??? They don’t also having changing requirements and dependencies? Why aren’t everyone at the best shops using it for everything if it massively eliminates work load?
conjectures 9 days ago [-]
Not from what I've seen. The compiler is slow af which plays badly with how fussy the thing is.
It's easy to have no defects in functionality you never got around to writing because you ran out of time.
rescbr 9 days ago [-]
So it makes you think before typing and compiling the code?
Doesn’t look like a con to me :)
conjectures 5 days ago [-]
Here's a great life hack for you. Set a 15s delay whenever you type in an editor. That way you'll think more before writing code.
ohgr 9 days ago [-]
Having watched two entirely fucked up Rust projects get written off I think you need to get out more.
ipaddr 9 days ago [-]
It's like a perl script you wrote on acid. It's done because you cannot enter that mindspace anymore and the code you created is like the output of an encrypted password with no path back.
dambi0 8 days ago [-]
Are you saying that Rust is always the right choice for any problem? If not what’s the link between the merits of any technology and the guarantee that the selection was not resume driven?
ljm 8 days ago [-]
A sibling thread talks about language zealotry and this is a textbook example.
I didn’t realise that the only requirement for well-written code is to have an expressive type system and memory safety.
FpUser 9 days ago [-]
What a load of BS. Even if initial time saving was true because of "no bugs feature" and it is not. Any software product that serves real business needs usually constantly evolves. Most of the time is spent on planning new features and integrating those into existing architecture.
tcmart14 8 days ago [-]
Yup. It's easy to write a to-do app. What is hard is 10 years later of features and needing to add more. While also dealing with the architecture short sightedness and pitfalls. Anyone who has worked on legacy software knows, at some point, all architectural decisions eventually become architectural pitfalls, primarily because requirements and customers change. It becomes a point where, its not really the code that's the issue, its design and customers' changing requirements.
Eggpants 8 days ago [-]
You forgot the /s
mvdtnz 9 days ago [-]
You're missing the point completely.
fallingknife 9 days ago [-]
What good does that do on a resume? I thought learning a new language on the job was pretty standard.
supriyo-biswas 9 days ago [-]
Where I'm from, the recruiters often use dumb questions like "How many years of experience do you have with X?" where any answer below their threshold is an immediate ground for rejection.
Learning new technologies on the go is pretty much the standard, but it's something that employers don't understand.
grepLeigh 9 days ago [-]
As an outsider looking at Microsoft, I've always been impressed by the attention to maintaining legacy APIs and backward compatibility in the Windows ecosystem. In my mind, Microsoft is at the opposite end of the killedbygoogle.com spectrum. However, none of this is grounded in real evidence (just perception). Red Hat is another company I'd put forth as an example of a long-term support culture, although I don't know if that's still true under IBM.
I'd love to know if my superficial impression of Microsoft's culture is wrong. I'm sure there's wild variance between organizational units, of course. I'm excluding the Xbox/games orgs from my mental picture.
iamdelirium 9 days ago [-]
I don't understand where this idea that Microsoft doesn't kill projects.
Zune, Games for Windows Live, Skype, Encarta, CodePlex, Windows Phone, Internet Explorer.
Those mostly aren't counter-examples though. In most cases they supported them long after most people had stopped using them. Google is notorious for killing popular products.
bornfreddy 8 days ago [-]
Well, Skype is the only one I'd miss, and after years of neglect I won't cry after it either. As for IE - good riddance. They could add Teams and Sharepoint to that list as far as I'm concerned.
So maybe the difference is that Google kills projects that people love, while MS only kills unloved ones?
ljm 8 days ago [-]
Technically Edge as well, after they nuked their internal effort and switched to slapping Bing ads on a Chrome fork.
grepLeigh 9 days ago [-]
Hah, this is exactly what I was hoping to find. Thank you!
alternatex 8 days ago [-]
Forgot the most recent one - Skype.
ahartmetz 9 days ago [-]
Joel Spolsky wrote about this. Windows division (WinDiv) is as you say, development tools division (DevDiv) is framework of the week. How many APIs have not-actually-replaced Win32 so far? They do keep the old ones working though, I guess.
JambalayaJimbo 9 days ago [-]
I am working for an enterprise customer of a niche Microsoft product. They haven’t killed it yet, despite us being possibly their only customer.
However, their documentation and support is really scant.
wkat4242 9 days ago [-]
It is but it's mainly to not bite the hand that feeds. Microsoft doesn't want to keep this stuff around, their enterprise customers do.
__turbobrew__ 9 days ago [-]
Red Hat killed CentOS and violated their support commitments so I wouldn’t trust them anymore.
grepLeigh 9 days ago [-]
That was after the IBM acquisition.
weinzierl 8 days ago [-]
True and it often ends underwhelmingly.
On the other hand
"innovators left at launch and passed it off to the maintenance team" alone must not be a bad thing.
Innovator types are rarely maintainer types and vice versa.
In the open-source world look at Fabrice Bellard for example. Do you think he would have been able to create so many innovative projects if he had to maintain them too?
9 days ago [-]
deadbabe 9 days ago [-]
This is wrong.
Google kills off projects because the legal liability and security risks of those projects becomes too large to justify for something that has niche uses or gives them no revenue. User data is practically toxic waste.
kindeyoowee 9 days ago [-]
[dead]
pklausler 7 days ago [-]
People respond to the incentives presented to them. If you want different behavior, change the incentives.
radicaldreamer 9 days ago [-]
It's a chronic problem at some companies and not so much at others, it's all about how internal incentives are set up.
cavisne 9 days ago [-]
Thats nothing during peak ZIRP people would move tech companies before the launch (or after a project was cancelled) and still ladder climb. "Failing upwards"
disgruntledphd2 8 days ago [-]
Or as Moral Mazes calls it, outrunning your mistakes.
sharemywin 8 days ago [-]
crypto has that problem. payoff before the result.
tossracct 9 days ago [-]
[dead]
teaearlgraycold 9 days ago [-]
These people should be fired. I want a tech company where people are there to make good products first and get paid second. And the pay should be good. The lifestyle comfortable. No grindset bullshit. But I am confident that if you only employ passionate people working their dream jobs you will excel.
escapecharacter 9 days ago [-]
Unfortunately whether someone is checked out is a laggy measure.
Even good honest motivated people can become checked out without even being aware of it.
The alternative is to lay off people as soon as they hit 1.0 (with a severance bonus on the scale of an acquisition). This would obviously be worse, as you can’t take advantage of their institutional knowledge.
saturn8601 9 days ago [-]
This motivated part of Musk's moves at Twitter(and now DOGE). You can't reliably evaluate which people are checked out and when you are against the clock, you have to take a hatchet and accept that you will break things that are in motion.
chronid 9 days ago [-]
You can somewhat reliably evaluate what works and what does not, what brings you forward and what is slack you can cut. It takes time (6-12 months at least) of very engaged work of a very engaged team - which you can bring with you.
You can go the hatchet way - I am strongly unconvinced it is indicative of anything resembling good management, mind - but most people and companies cannot rely on banks or investment firms loaning them 40 billion dollars and accepting passively a mark down of their mone~ to 1/4 of the value they loaned down the line. CEOs are ousted by investment firms for a far smaller drop in value all the time.
disgruntledphd2 8 days ago [-]
The banks have mostly sold the loans at par now, fyi.
I agree with everything you said, though.
scarface_74 9 days ago [-]
And seeing how many people they let go and then had to hire back in the government and seeing that Twitter is worth 80% less than we he bought it, that might not be the best strategy.
escapecharacter 8 days ago [-]
Ignoring the quagmire of praising/critiquing Musk/Twitter for a second…
If you’re an exec who’s taken it upon themselves to evaluate, could use the hatchet, or you take some amount of time to figure out how things work. Whether this is okay depends on who is suffering the externalities. If it’s a private corporation, legally it’s the execs + employment law. If it’s a public service that measures toxin levels in water, uhhhhh.
shigawire 9 days ago [-]
The externalities of breaking stuff haphazardly can very well outweigh any efficiency gains.
JumpCrisscross 9 days ago [-]
> want a tech company where people are there to make good products first and get paid second. And the pay should be good. The lifestyle comfortable. No grindset bullshit
Congratulations, you’ve invented the HR department in corporate America.
reverius42 9 days ago [-]
> make good products first and get paid second. And the pay should be good.
The better the pay, the more you will attract the people who are there for the pay first and making good products ... second or third or never. How do you combat that?
stavros 9 days ago [-]
You interview them first.
scarface_74 9 days ago [-]
You act as if every experienced interviewer doesn’t know how to play the game and act like they are “passionate”.
scarface_74 9 days ago [-]
Why would those people be “fired” when the entire promotion process and promo docs emphasize “scope” and “impact”?
No one works for any BigTech company because they think they are making the world a better place. They do it because a shit ton of money appears in their bank account every pay period and stock appears in their brokerage account every vesting period.
I personally don’t have the shit tolerance to work in BigTech (again) at 50. But I suggest to all of my younger relatives who graduate in CS to “grind leetCode and work for a FAANG” and tell them how to play the politics to get ahead.
As the Dilbert author said, “Passion is Bullshit”. I have never been able to trade passion for goods and services.
whstl 9 days ago [-]
Yep. I've seen more people fired for being passionate about their craft and their jobs than people getting raises for the same reason.
It's always the same. People trying to make things better for the next developer, people prioritizing delivers instead of ego-projects or ego-features by someone playing politics, developers wanting a seat at the table with (dysfunctional) Product teams, people actual good intentions trying to "change the world" (not counting the misguided attempts here).
You are 100% correct, you gotta play the politics, period.
bsimpson 9 days ago [-]
> No one works for any BigTech company because they think they are making the world a better place.
I'm sure there are plenty of people who work at big companies for precisely this reason (or at least, with that as _a_ reason among many).
Yes, much of the prestige has worn off as the old guard retired and current leadership emphasizes chasing AI buzzwords and cutting costs. But still, big companies are one of the few places where an individual really can point out something they worked on in day-to-day life. (Pull out any Android phone and I can show you the parts that my work touched.)
int_19h 9 days ago [-]
Can confirm that this is definitely the case. Working at BigTech company to "make the world a better place" can actually feel like it makes some sort of sense because - especially if you're on a team that ships a highly visible product - you have a lot of customers, so even small improvements have an outsized effect.
And it takes a while for a young dev to register that the goals that the larger organization pursues are going to win out in the end anyway.
scarface_74 9 days ago [-]
If you are working at either Google or Meta, you’re involved in adTech. Not exactly making the world a better place.
int_19h 8 days ago [-]
I'm not saying that this is objectively wrong, but it's not always so clear-cut. E.g. supposing you're at Google, but you're working on the Go compiler or libraries. Does it mean that you are "involved in ad tech"? Kinda sorta, since what you do makes other Google employees more productive at writing ad tech. But there are millions of Go users outside the company using it for all kinds of things, so one can reasonably conclude that whatever benefit Google itself derives from their contribution to Go, it's dwarfed by the public benefit from the same.
9 days ago [-]
Severian 9 days ago [-]
Funny what his passions turned into, so yeah, ironically agree.
saturn8601 9 days ago [-]
You are trying to combine two repelling magnets together.
Case in point: Tesla/SpaceX meets your first criteria: "I want a tech company where people are there to make good products first and get paid second."
Google meets your second criteria: "And the pay should be good. The lifestyle comfortable. No grindset bullshit."
Other than small time boutique software firms like Fog Creek Software or Panic Inc(and thats a BIG maybe) you are not going to get this part of your message: "But I am confident that if you only employ passionate people working their dream jobs you will excel."
There are tradeoffs in life and each employee has to choose what is important to them(and each company CEO has to set standards on what is truly valued at the company).
scarface_74 9 days ago [-]
> Case in point: Tesla/SpaceX meets your first criteria: "I want a tech company where people are there to make good products first and get paid second
Not to mention the infotainment system is much worse than CarPlay/Android Auto compatible cars
bdangubic 9 days ago [-]
Teslas are consistently rated high in customer satisfaction, but after several years of low ratings from top authorities in the industry, their reliability is undoubtedly in question.
This is too funny to post alongside saying “Tesla has never been a good product.” Like “everyone that bought it loves it be car expert Joe from South Dakota ranks them very low.”
Common sense also runs very much against this nonsense narrative - you just simply do not sell that many cars, at those prices especially, year after year after year, if the product is subpar. Don’t fall for this “experts” bullshit. The CEO is the biggest tool this Earth has ever seen but cars are awesome
scarface_74 9 days ago [-]
You did see the link quoted from consumer reports? Are they not a reliable source?
On another note, Apple also sold millions of MacBooks with butterfly keyboards.
And Tesla sells are declining, losing market share worldwide and sells 1/5 the number of cars as Toyota
bdangubic 7 days ago [-]
customers are the only reliable source, “experts” trying to sell ads/paper are not.
and if you gonna compare tesla to toyota you should compare number of EV sales, not overall sales :) tesla is not a car company, it is (among other things if you care to believe Elon bullshit) EV car company. comparing toyota to tesla in terms of total sales is like saying “subway doesn’t sell nearly as many bigmacs as mcdonald’s does” :)
hintymad 9 days ago [-]
> but I wonder how much of this initiative and energy is driven by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.
I guess it's human nature for a person or an org to own their own destiny. That said, the driving force is not personal ambition in this case though. The driving force behind this is that people realized that OAI does not have a moat as LLMs are quickly turning into commodities, if haven't yet. It does not make sense to pay a premium to OAI any more, let alone at the cost of not having the flexibility to customize models.
Personally, I think Altman did a de-service to OAI by constantly boasting AGI and seeking regulatory capture, when he perfectly knew the limitation of the current LLMs.
mlazos 9 days ago [-]
One of my friends stated this phenomenon very well “it’s a lever they can pull so they do it”. Once you’ve tied your career to a specific technology internally, there’s really only one option: keep pushing it regardless of any alternatives because your career depends on it. So that’s what they do.
ambicapter 9 days ago [-]
Does it not make sense to not tie your future to a third-party (aka build your business on someone else's platform)? Seems like basic strategy to me if that's the case.
pphysch 9 days ago [-]
It's a good strategy. It should be obvious to anyone paying attention that OpenAI doesn't have AGI secret sauce.
LLMs are a commodity and it's the platform integration that matters. This is the strategy that Google, Apple embraced and now Microsoft is wisely pivoting to the same.
If OpenAI cares about the long-term welfare of its employees, they would beg Microsoft to acquire them outright, before the markets fully realize what OpenAI is not.
Izikiel43 9 days ago [-]
> now Microsoft is wisely pivoting to the same.
I mean, they have been doing platform integration for a while now, with all the copilot flavors and teams integrations, etc. This would change the backend model to something inhouse.
skepticATX 9 days ago [-]
Listening to Satya in recent interviews I think makes it clear that he doesn’t really buy into OpenAI’s religious-like view of AGI. I think the divorce makes a lot of sense in light of this.
herval 9 days ago [-]
It feels like not even OpenAI buys into it much these days either
HarHarVeryFunny 9 days ago [-]
OpenAI already started divorce proceedings with their datacenter partnership with Softbank/etc, and it'd hardly be prudent for the world's largest software company NOT to have it's own SOTA AI models.
Nadella might have initially been caught a bit flat footed with the rapid rise of AI, but seems to be managing the situation masterfully.
wkat4242 9 days ago [-]
In what world is what they are doing masterful? Their product marketing is a huge mess, they keep changing the names of everything every few months. Nobody knows which Copilot does what anymore. It really feels like they're scrambling to be first to market. It all feels so incredibly rushed.
Whatever is there doesn't work half the time. They're hugely dependent on one partner that could jump ship at any moment (granted they are now working to get away from that).
We use Copilot at work but I find it very lukewarm. If we weren't a "Microsoft shop" I don't think would have chosen it.
trentnix 9 days ago [-]
> Their product marketing is a huge mess, they keep changing the names of everything every few months. Nobody knows which Copilot does what anymore. It really feels like they're scrambling to be first to market. It all feels so incredibly rushed.
Product confusion, inconsistent marketing, unnecessary product renames, and rushing half-baked solutions has been the Microsoft way for dozens of products across multiple divisions for years.
eitally 9 days ago [-]
Rule #1 for Microsoft product strategy: if you can't yourselves figure out the SKUs and how they bundle together, the odds are good that your customers will overpay. It's worked for almost 50 years and there's no evidence that it will stop working. Azure is killing it and will continue to eat the enterprise even as AWS starts/continues to struggle.
HarHarVeryFunny 9 days ago [-]
> In what world is what they are doing masterful?
They got access to the best AI to offer to their customers on what seems to be very favorable terms, and bought themselves time to catch up as it now seems they have.
GitHub Copilot is a success even if Microsoft/Windows Copilot isn't, but more to the point Microsoft are able to offer SOTA AI, productized as they see fit (not every product is going to be a winner) rather than having been left behind, and corporate customers are using AI via Azure APIs.
nyarlathotep_ 9 days ago [-]
> In what world is what they are doing masterful?
Does *anyone* want "Copilot integration" in random MS products?
aaronblohowiak 9 days ago [-]
> scrambling to be first
Third?
jcgrillo 9 days ago [-]
This is a great strategic decision, because it puts Suleyman's head squarely on the chopping block. Either Microsoft will build some world dazzling AI whatsit or he'll have to answer, there's no "strategically blame the vendor" option. It also makes the accounting transparent. There's no softbank subsidy, they've got to furnish every dollar.
So hopefully if (when?) this AI stuff turns out to be the colossal boondoggle it seems to be shaping up to be, Microsoft will be able to save face, do a public execution, and the market won't crucify them.
tanaros 9 days ago [-]
> it'd hardly be prudent for the world's largest software company NOT to have it's own SOTA AI models.
If I recall correctly, Microsoft’s agreement with OpenAI gives them full license to all of OpenAI’s IP, model weights and all. So they already have a SOTA model without doing anything.
I suppose it’s still worth it to them to build out the experience and infrastructure needed to push the envelope on their own, but the agreement with OpenAI doesn’t expire until OpenAI creates AGI, so they have plenty of time.
pradn 9 days ago [-]
It's the responsibility of leadership to set the correct goals and metrics. If leadership doesn't value maintenance, those they lead won't either. You can't blame people for playing to the tune of those above them.
ewhanley 9 days ago [-]
This is exactly right. If resume driven development results in more money, people are (rightly) going to do it. The incentive structure isn't set by the ICs.
saturn8601 9 days ago [-]
Ah man I don't want to hear things like that. I work in an Angular project and it is the most pleasant thing I have worked with (and i've been using it as my primary platform for almost a decade now). If I could, i'd happily keep using this framework for the rest of my career(27 years to go till retirement).
aryonoco 9 days ago [-]
A hugely underrated platform. Thankfully at least for now Google is leaving the Angular team alone and the platform has really matured in wonderful and beautiful ways.
If you like TypeScript, and you want to build applications for the real world with real users, there is no better front end platform in my book.
pbh101 9 days ago [-]
This is absolutely a too-cynical position. Nadella would be asleep at the wheel if he weren’t actively mitigating OpenAI’s current and future leverage over Microsoft.
This would be the case even if OpenAI weren’t a little weird and flaky (board drama, nonprofit governance, etc), but even moreso given OpenAI’s reality.
surfingdino 9 days ago [-]
I can see a number of forces at play:
1) Cost -- beancounters got involved
2) Who Do You Think You Are? -- someone at Microsoft had enough of OpenAI stealing the limelight
3) Tactical Withdrawal -- MSFT is preparing to demote/drop AI over the next 5-10 years
roland35 9 days ago [-]
Unfortunately I don't think there is any real metric-based way to prevent this type of behavior, it just has to be old fashioned encouraged from the top. At a certain size it seems like this stops scaling though
Guthur 8 days ago [-]
And why not? should he just allow the owners of capital to extract as much value as possible without actually doing anything, but woe be the worker if he actually tries to free himself.
ndesaulniers 8 days ago [-]
Most directors and above at Google are more concerned with how they will put gas in their yachts this weekend than the quality of the products they are supposed to be in charge of.
croes 8 days ago [-]
> by people at Microsoft who want their own star to rise higher than it can when it's bound by a third-party technology.
Isn’t that the basis for competition?
orbifold 9 days ago [-]
Mustafa Suleyman is building a team at Microsoft just for that purpose.
m463 9 days ago [-]
I wonder if incentives for most companies favor doing things in-house?
esafak 9 days ago [-]
Yes, you can say you built it from scratch, showing leadership and impact, which is what big tech promotions are gauged by.
snarfy 9 days ago [-]
I like to refer to this as resume driven development.
erikerikson 9 days ago [-]
Embrace, extend, and extinguish
keeganpoppen 9 days ago [-]
oh it is absolutely about that
DebtDeflation 9 days ago [-]
A couple of days ago it leaked that OpenAI was planning on launching new pricing for their AI Agents. $20K/mo for their PhD Level Agent, $10K/mo for their Software Developer Agent, and $2K/mo for their Knowledge Worker Agent. I found it very telling. Not because I think anyone is going to pay this, but rather because this is the type of pricing they need to actually make money. At $20 or even $200 per month, they'll never even come close to breaking even.
paxys 9 days ago [-]
It's pretty funny that OpenAI wants to sell access to a "PhD level" model at a price with which you can hire like 3-5 real human PhDs full-time.
drexlspivey 9 days ago [-]
Next up: CEO level model to run your company. Pricing starts at $800k/month plus stock options
hinkley 9 days ago [-]
Early cancelation fee is $15M though so watch out for that.
marricks 9 days ago [-]
Which is funny because the CEO level one is the easiest to automate
JKCalhoun 9 days ago [-]
Steve Jobs said something to the effect that he made maybe three CEO decisions a year. I mean, I think these are decisions like, "We're going to open our own line of Apple retail stores", but, still.
cj 9 days ago [-]
Being a CEO isn’t all that different from being a parent of a child from the POV of impactful decisions.
How many critical “parental decisions” have you made in the past week? Probably very few (if any), but surely you did a lot of reinforcement of prior decisions that had already been made, enforcing rules that were already set, making sure things that were scheduled were completed, etc.
Important jobs don’t always mean constantly making important decisions. Following through and executing on things after they’re decided is the hard part.
See also: diet and exercise
wkat4242 9 days ago [-]
Playing golf while bantering with your old boys network is going to be hard to automate :)
reverius42 9 days ago [-]
The banter is actually quite easy to automate. You can hire a human to play golf for a small fraction of what the CEOs get paid, and then it's best of both worlds.
ttepasse 8 days ago [-]
With preexisting knowledge of military artillery arithmetic a a golf robot should not be impossible.
aleph_minus_one 9 days ago [-]
The basic role of a CEO is to be the face of the company and market it to the varioua stakeholders.
This is hard to automatize.
t_mann 8 days ago [-]
Is it? Take a look at the bot accounts filling up social media (the non-obvious ones). It wouldn't seem to hard to make one that makes 2am posts about '[next product] feels like real AGI' or tells stock analysts that their questions are boring on an earnings call, which is apparently what rockstar CEOs do.
Sneers aside, I think one common mis-assumption is that the difficulty of automating a task depends on how difficult it feels to humans. My hinge is that it mostly depends on the availability of training data. That would mean that all the public-facing aspects of being a CEO should by definition be easy to automate, while all the non-public stuff (also a pretty important part of being a CEO, I'd assume) should be hard.
croes 8 days ago [-]
Sounds like those AI created influencers
8 days ago [-]
slantaclaus 9 days ago [-]
I won’t considering trusting an AI to run a company until it can beat me at Risk
aleph_minus_one 9 days ago [-]
This should be easy for an AI.
slantaclaus 21 hours ago [-]
Please find me a version of Risk with an AI that isn’t retarded. I’ll wait.
oefnak 9 days ago [-]
But probably not for a LLM. Yet.
th0ma5 9 days ago [-]
That no one is offering this says something very profound to me. Either they don't work and are too risky to entrust a company to, or leadership thinks they are immune and are entitled to wield AI exclusively, or some mix of these things.
erikerikson 9 days ago [-]
Or maybe CEOs make purchasing decisions and approvals
MichaelMoser123 8 days ago [-]
what about politician level models? i wonder if politicians aren't all copy pasting their stuff from chatgtp right now, at this stage (that would make a nice conspiracy theory, wouldn't it?)
laughingcurve 9 days ago [-]
That is just not correct. As someone who has done the budgets for PhD hiring and funding, you are just wildly underestimating the overhead costs, benefits, cost of raising money, etc.
DebtDeflation 9 days ago [-]
The "3-5" is certainly overstated, but you definitely can hire ONE PhD for that price, just as you can hire a SWE for $120K or a knowledge worker for $24K. The point is that from a CEO's perspective "replacing all the humans with AI" looks a lot less compelling when the AI costs the same as a human worker or even a significant fraction of a human worker.
sailfast 9 days ago [-]
Being able to control their every move, scale them to whatever capacity is required, avoid payroll taxes, health plans and surprise co-pay costs, equity sharing, etc might make this worthwhile for many companies.
That said, the trade-off is that you're basically hiring consultants since they really work for OpenAI :)
from-nibly 9 days ago [-]
The benefit to an emoloyee is that you don't have to control their every move. They can do work while you aren't even thinking about the problem they are solving.
zeroonetwothree 9 days ago [-]
Although remember that the cost to the company is more like double the actual salary.
DebtDeflation 9 days ago [-]
Again, irrelevant. We're talking about orders of magnitude here. Current pricing is in line with most SaaS pricing - tens of dollars to hundreds of dollars per seat per month. Now they're suddenly talking about thousands of dollars to tens of thousands of dollars per seat per month.
Izikiel43 9 days ago [-]
The AI can work 24/7 though.
booleandilemma 9 days ago [-]
Don't you need to be awake to feed it prompts?
mirsadm 9 days ago [-]
Doing what?
cutemonster 9 days ago [-]
Generating prompts to itself and acting on them? (Not saying it's a good idea)
zombiwoof 9 days ago [-]
Respectfully disagree. I had two pHD on a project and spent a total of 120k a year on them.
yifanl 9 days ago [-]
Right, which is substantially less than the stated $20k/month.
edit: I see we're actually in agreement, sorry, I read the indentation level wrong.
robertlagrant 9 days ago [-]
> Respectfully disagree. I had two pHD on a project and spent a total of 120k a year on them.
Does that include all overheads such as HR, payroll, etc?
0_____0 9 days ago [-]
What region and what field?
throwaway3572 9 days ago [-]
For a STEM PhD, in America, at an R1 University. YMMV
eszed 9 days ago [-]
How many PhDs can you afford for $20k a month in your field?
moelf 9 days ago [-]
$20k can't get you that many PhD. Even PhD students, who's nominal salary is maybe $3-5k a month, effectively costs double that because of school overhead and other stuff.
notahacker 9 days ago [-]
Does depend on where your PhD lives and what subject their PhD is in from where, and how many hours of work you expect them to do a week, and whether you need to full-time "prompt" them to get them to function...
Would definitely rather have a single postdoc in a relevant STEM subject from somewhere like Imperial for less than half the overall cost than an LLM all in though. And I say that despite seeing the quality of the memes they produce with generative AI....
vinni2 9 days ago [-]
Depends on what these PhDs are supposed to do. Also is this an average Phd or a brilliant PhD level? There is a huge spectrum of PhDs out there. I highly doubt these phd level models are able to solve any problems in a creative way or discover new things other than regurgitating the knowledge they are trained on.
BeetleB 9 days ago [-]
> Even PhD students, who's nominal salary is maybe $3-5k a month
Do they really get paid that much these days?
pclmulqdq 9 days ago [-]
$3k/month is the very top of the market.
1propionyl 9 days ago [-]
Depends on 1) where the university is located (CoL), 2) if they went on strike recently to get paid enough to pay rent.
You can reliably assume that PhD wages must eventually converge to the rent of a studio apartment nearby + a little bit (which may or may not be enough to cover all other expenses. Going into debt is common.)
hyperbrainer 9 days ago [-]
That amount is standard at EPFL and ETH, but I don't know about the USA.
BeetleB 9 days ago [-]
I knew someone who got his PhD at EPFL. He earned almost triple what I did in the US.
winterismute 9 days ago [-]
ETHZ and EPFL are also top of the market in EU/UK.
hyperbrainer 8 days ago [-]
Pedantic, but they are top of the market in neither since Switzerland is not in the EU, and definitely not in the UK.
But it is true that in Europe, Switzerland PhDs (and professors too) make most. Not just ETH/EPFL as well. UZH (Uni Zurich) has salaries of 50K CHF per year for PhD candidates (with increments every year) -- that's almost 60K USD by your fourth year. This is also true for other universities. And while Zürich is expensive, it is not _that_ expensive.
Computer science is rate 5, so 73kCHF the first year, 78kCHF the second, then 83kCHF onwards.
archermarks 9 days ago [-]
Lmao no
throw_m239339 9 days ago [-]
> $20k can't get you that many PhD. Even PhD students, who's nominal salary is maybe $3-5k a month, effectively costs double that because of school overhead and other stuff.
But you are not getting a PhD worker for 20K with "AI", that's just marketing.
meroes 9 days ago [-]
Based on ubiquitous AI trainer ads on the internet that advertise their pay, they probably make <=$50/hr training these models. Trainers are usually remote and set their own hours, so I wouldn’t be surprised if PhDs are not making much as trainers.
madmask 9 days ago [-]
Come to Italy where 1.1k is enough
vitorsr 9 days ago [-]
Or Brazil where the DSc student stipend is 3100 BRL (roughly 500 EUR).
kube-system 9 days ago [-]
If truly equivalent (which LLMs aren't, but I'll entertain it), that doesn't seem mathematically out of line.
Humans typically work 1/3rd duty cycle or less. A robot that can do what a human does is automatically 3x better because it doesn't eat, sleep, have a family, or have human rights.
bandrami 9 days ago [-]
So this is just going to end up like AWS where they worked out exactly how much it costs me to run a physical server and charge me just slightly less than that?
kiney 9 days ago [-]
aws is vastly more expensive than running physical servers e.g. in colocation
kube-system 9 days ago [-]
Why would they ask for less?
Fernicia 9 days ago [-]
Well, a model with PhD level intelligence could presumably produce research in minutes that would take an actual PhD days or months.
sponnath 9 days ago [-]
We haven't seen any evidence of this happening ever. It would be groundbreaking if true and OAI's pricing would then make sense.
amelius 9 days ago [-]
We would be past the singularity if true.
voxl 9 days ago [-]
Presumably. What a powerful word choice.
doitLP 9 days ago [-]
Don’t forget that this model would have a phd in everything and work around the clock
burnte 9 days ago [-]
Well, it works 24/7 as long as you have a human telling it what to do. And checking all the output because these cannot be trusted to work alone.
kadushka 8 days ago [-]
Most people have someone telling them what to do at work, and checking the output.
esskay 9 days ago [-]
Thats pretty useless for most applications though. If you're hiring a phd level person you dont care that if in addition to being great in contract law they're also great in interior design.
doitLP 9 days ago [-]
I disagree. People are so hyper ultra mega specialized these days the cross pollination should be very helpful. Isn’t that the theory behind why so much amounts of training data makes these models better?
SunlitCat 8 days ago [-]
And if everything fails, you can hand the phd level person a broom and a bucket and have them mop the floor!
Hah! Checkmate AI, that's something you can't do! :D
intrasight 9 days ago [-]
Until the AI agents unionize - which if truly phd AGI they will
mattmaroon 9 days ago [-]
1. Don't know where you live that the all-in costs on someone with a PhD are $4k-$7k/mo. Maybe if their PhD is in anthropology.
2. How many such PhD people can it do the work of?
shellfishgene 9 days ago [-]
Postdocs in Europe make about 3-4k eur/month in academic research.
madmask 9 days ago [-]
We wish, it’s more like half in many places
9 days ago [-]
herval 9 days ago [-]
Well but see, their “phd ai” doesn’t complain or have to stop to go to the bathroom
aqueueaqueue 9 days ago [-]
But you can make it work harder by promising a tenure in 35 years time.
cratermoon 9 days ago [-]
more like 10 PhD candidates, at the typical university stipend.
jstummbillig 9 days ago [-]
What funny is that people make the lamest strawman assumptions and just run with it.
crazygringo 9 days ago [-]
Do they work 24/7?
Do you have to pay all sorts of overhead and taxes?
I mean, I don't think it's real. Yet. But for the same "skill level", a single AI agent is going to be vastly more productive than any real person. ChatGPT types out essays in seconds it would take me half an hour to write, and does it all day long.
moduspol 9 days ago [-]
Even worse: AFAIK there's no reason to believe that the $20k/mo or $10k/mo pricing will actually make them money. Those numbers are just thought balloons being floated.
Of course $10k/mo sounds like a lot of inference, but it's not yet clear how much inference will be required to approximate a software developer--especially in the context of maintaining and building upon an existing codebase over time and not just building and refining green field projects.
hinkley 9 days ago [-]
Man. If I think about all of the employee productivity tools and resources I could have purchased fifteen years ago when nobody spent anything on tooling, with an inflation adjusted $10K a month and it makes me sad.
We were hiring more devs to deal with a want of $10k worth of hardware per year, not per month.
optimalsolver 9 days ago [-]
Now that OAI has "PhD level" agents, I assume they're largely scaling back recruitment?
kadushka 8 days ago [-]
That's the real readiness test for these agents.
mk_chan 9 days ago [-]
I’ll believe their proficiency claims when they replace all their software developers, knowledge workers and PhDs with this stuff.
catigula 9 days ago [-]
That is fundamentally the problem with this type of offering.
You can't claim it's even comparable to a mid level engineer because then you'd hardly need any engineers at all.
borgdefenser 7 days ago [-]
Or how about lets start with "Strategic Finance"
"Create high-quality presentations for communicating OpenAI’s financial performance"
What is interesting is there is no mention of agents on any job I clicked on. You would think "orchestrating a team of agents to leverage blah blah blah" would be something internally if talking about these absurd price points.
mvdtnz 9 days ago [-]
Do you have a source for these supposed leaks? Those prices don't sound even remotely credible and I can't find anything on HN in the past week with the keywords "openai leak".
It points to an article on "The Information" as the source, but that link is paywalled.
mvdtnz 9 days ago [-]
Yeah I don't believe it for a second. Even sama isn't that far up his own ass.
hnthrow90348765 9 days ago [-]
There is too little to go on, but they could already have trial customers and testimonials lined up. Actually demoing the product will probably work better than just having a human-less signup process, considering the price.
They could also just be trying to cash in on FOMO and their success and reputation so far, but that would paint a bleak picture
serjester 9 days ago [-]
Never come close to breaking even? You can now get a GPT-4 class model for 1-2% of what it cost when they originally released it. They’re going to drive this even further down with the amount of CAPEX pouring into AI / data centers. It’s pretty obvious that’s their plan when they serve ChatGPT at a “loss”.
tempodox 8 days ago [-]
Until Sam Altman proves he lets an AI manage his finances without interference from humans, I wouldn't pay for any of these.
drumhead 9 days ago [-]
Thats some rather eyewatering pricing, considering you could probably roll your own model these days.
mimischi 8 days ago [-]
As a software engineer with a PhD: I am not getting paid enough.
culi 9 days ago [-]
It's bizarre. These are the pricing setups that you'd see for a military-industrial contract. They're just doing it out in the open
bn-l 7 days ago [-]
Absolute hype generation
nashashmi 8 days ago [-]
That's also the kind of pay structure that will temper expectations. Win-win
rossdavidh 9 days ago [-]
"Microsoft has poured over $13 billion into the AI firm since 2019..."
My understanding is that this isn't really true, as most of those "dollars" were actually Azure credits. I'm not saying those are free (for Microsoft), but they're a lot cheaper than the price tag suggests. Companies that give away coupons or free gift certificates do bear a cost, but not a cost equivalent to the number on them, especially if they have spare capacity.
erikerikson 9 days ago [-]
Not only that but they are happy to buy market share to expand their relative position against AWS
nashashmi 8 days ago [-]
And invest back into their own product for a market cap return.
yread 8 days ago [-]
They can also lower their taxes with the credits rright?
strangescript 9 days ago [-]
I think they have realized that even if OpenAI is first, it won't last long so really its just compute at scale, which is something they already do themselves.
echelon 9 days ago [-]
There is no moat in models (OpenAI).
There is a moat in infra (hyperscalers, Azure, CoreWeave).
There is a moat in compute platform (Nvidia, Cuda).
Maybe there's a moat with good execution and product, but it isn't showing yet. We haven't seen real break out successes. (I don't think you can call ChatGPT a product. It has zero switching cost.)
0xDEAFBEAD 9 days ago [-]
>There is a moat in compute platform (Nvidia, Cuda).
Ironically if AI companies are actually able to deliver in terms of SWE agents, Nvidia's moat could start to disappear. I believe Nvidia's moat is basically in the form of software which can be automatically verified.
I sold my Nvidia stock when I realized this. The bull case for Nvidia is ultimately a bear case.
satellite2 9 days ago [-]
There is a moat in the brand they're building.
Look at Coca Cola, Google, both have plausible competitors, zero switching cost but they maintain their moat without effort.
Being first is still a massive advantage. At this point they should only strive to avoid big mistake and they're set.
hnfong 7 days ago [-]
Coca Cola only has a moat because soft drinks and junk food are mostly a done product without much space for innovation left.
AI is still not there yet, and if any model becomes significantly better than ChatGPT people will flock over to use it despite the branding. It's only when nobody can make better models, then people will just stick to the known brands.
YetAnotherNick 9 days ago [-]
What moat does Nvidia have. AMD could have ROCm perfected if they really want to. Also most of pytorch, specially those relevant to transformers runs perfectly on Apple Silicon and TPUs and probably other hardware as well.
If anyone has moat related to Gen AI, I would say it is the data(Google, Meta).
klelatti 9 days ago [-]
> AMD could have ROCm perfected if they really want to.
It's not an act of will or CEO dictat. It's about hiring and incentivising the right people, putting the right structures in place etc all in the face of competing demands.
Nvidia have a huge head start and by the time AMD have 'caught up' Nvidia with it's greater resources will have moved further ahead.
YetAnotherNick 9 days ago [-]
If head start is a moat, why wouldn't you count OpenAI's headstart as moat?
klelatti 9 days ago [-]
Because we already see firms competing effectively with OpenAI.
There is as yet no indication that AMD can match Nvidia's execution for the very good reason that doing so is extremely difficult. The head start is just the icing on the cake.
YetAnotherNick 8 days ago [-]
I see pre AGI AI to be superset of search and its hard to argue that google isn't ahead technically for decades and what they are doing is extremely difficult.
dragonwriter 9 days ago [-]
A head start is a moat iff you can't move easily from the leader to a competitor that catches up qualitatively; nvidia’s headstart against AMD is a moat to the extent that you can't just take the software written against NVidia GPUs and run it on AMD if AMD catches up. (That is, being currently ahead isn't a moat, but it can impose switching costs which are.)
Taking code that runs against one hosted LLM and running it against a different backend LLM is... not generally a big deal. So OpenAI being ahead—in the core model, at least—is just being ahead, its not a moat.
YetAnotherNick 8 days ago [-]
ROCM is close to being perfected and I think in few years and some investment you can use it directly to run 99% software written in CUDA with similar performance.
klelatti 8 days ago [-]
Since 2007 Nvidia has built with CUDA and compatible hardware
- a large and growing ecosystem
- a massive installed base of backwards compatible hardware
- the ability to massively scale delivery of new systems
- and lots lots more
They now have scale that enables them to continue to invest at a level that no competition can do.
None of these are easily reproduced.
As per SemiAnalysis AMD in late 2024 can’t get essential software working reliably out of the box.
It’s easy to say AMD ‘is close to perfecting ROCM’ the reality of competing with Nvidia is much harder.
YetAnotherNick 8 days ago [-]
It's hard but not $3T hard. AMD has consistently underinvested in their software. If they invest double digit million to make installation and usage of drivers and ROCM smoother I highly doubt they can't acheive it.
There are open source projects volunteer run projects[1] that are better than official AMD implementation in many ways.
If it's so cheap and easy then why haven't they done it already?
ChatGPT was announced Nov 22. The opportunity has been clear for two years and still essential software breaks.
YetAnotherNick 8 days ago [-]
Because they have to rethink a lot organizationaly to put more effort into software. If installing their drivers is so flaky you can't argue they really are trying or putting their best folks in this. Single click installation on top 10 linux distro is something even one (top in the industry)person could achieve.
PKop 9 days ago [-]
Not all industries or product segments are equal is the obvious answer. The point here whether one agrees or not is models are easier to catch up to than GPUs
echelon 9 days ago [-]
Anyone can make an LLM. There are hundreds of choices in the market today. Many of them are even open source.
OpenAI brings absolutely nothing unique to the table.
toasterlovin 9 days ago [-]
> I don't think you can call ChatGPT a product. It has zero switching cost.
In consumer markets the moat is habits. The switching cost for Google Search is zero. The switching cost for Coke is zero. The switching cost for Crest toothpaste is zero. Yet nobody switches.
swat535 9 days ago [-]
Not to nitpick but..
1. The switching cost from Google Search is certainly not zero, it implies switching from Google, which is virtually impossible because it's tied to Chrome, YouTube, Android and Gmail
2. I don't know many people who are dedicated "Pepsi" fans, they just grab whatever drink is available Coke/Pepsi..
3. I've also not heard many people who are never willing to switch from "Crest".. People will just grab the next available option if Crest is not on shelf. No one is pre-ordering Crest.
SllX 9 days ago [-]
I will nitpick this one:
> 1. The switching cost from Google Search is certainly not zero, it implies switching from Google, which is virtually impossible because it's tied to Chrome, YouTube, Android and Gmail
Google Search is a product. Not the whole company. Switching to most other search engines is $0. Naturally no one is honor bound to use anything else you listed either.
toasterlovin 8 days ago [-]
You’re standing in the aisle at the supermarket, looking at the soda. Everything is in stock. What do you grab? Most people make the same decision for their entire life: Coke. That is the moat that Pepsi has been trying to overcome for decades.
sumedh 7 days ago [-]
> That is the moat that Pepsi has been trying to overcome for decades.
Pepsi makes more revenue compared to Coke. Shouldn't it be Coke who should be trying to do what Pepsi is doing?
toasterlovin 6 days ago [-]
According to this[0], Coke (the cola brand, not the company) has a ~19% soft drink market share in the US vs. Pepsi at ~8%. This[1] has Coke (the company, not the cola brand) at a 69% market share of the US soft drink market vs. Pepsi 15 27%. Pepsi is a larger company, though, because they own a bunch of snack food brands whereas Coke is beverages only.
I used to work for Coke — they've been making a lot of money for a very long time.
The size of a moat befits the size of a castle it protects. Coke absolutely has a moat, but it's not big enough to defend Coke as a trillion dollar company.
The question isn't whether OpenAI has a moat or not, it's if its current moat is big enough to protect a trillion-dollar company.
toasterlovin 9 days ago [-]
The person I'm responding to just said there was no moat because switching costs were zero. They didn't say anything about a trillion dollar valuation. But, while we're on the topic, Google's market cap is $2T because people can't be bothered to change the default search engine in their browser.
charlieyu1 9 days ago [-]
I’m not that old and I remember people switching away from Geocities, ICQ, Yahoo, many social media sites etc
crazygringo 9 days ago [-]
(Never mind I was wrong -- deleting my comment to not spread misinformation. The definition of moat is wider than I was familiar with, thanks for the correction!)
herval 9 days ago [-]
A strong brand is most definitely a moat. So is habit. Any source you can find that defines economic moats will list those.
Minor nit, but brand and habit are kinda the same thing. The habit is that you buy a particular brand.
barumrho 9 days ago [-]
Given xAI built its 100k gpu datacenter in a very short time, is the infra really a moat?
freedomben 9 days ago [-]
I'd say it is because the $ it takes to build out even a small gpu data center is still way, way more than most small cos can do. It's not an impenetrable moat, but it is pretty insulating against startups. Still have a threat from big tech, though I think that will always be true for almost everything
eagerpace 9 days ago [-]
I don't think the hardware is that easy to source just yet. Musk pulled some strings and redirected existing inventory and orders from his other companies, namely Tesla, to accelerate delivery.
PKop 9 days ago [-]
xAI does not have infra to sell the service and integrations of it to enterprises and such. It's an open question if "models" alone and simple consumer products that use them are profitable. So, probably hyperscale cloud platform infra is a moat yes. Microsoft has Semantic Kernel, Microsoft.Extensions.AI, various RAG and search services, and an entire ecosystem and platform around using LLM's to build with that xAI does not have. Just having a chat app as interface to one's model is part of the discussion here about models as commodities. xAI does have X/Twitter data which is a constantly updating source of information so in that aspect they themselves do have something unique.
riku_iki 9 days ago [-]
they likely utilized expertise/supply chain from Tesla
drumhead 9 days ago [-]
Is anyone other than Nvdia making money from this particular gold rush?
xnx 9 days ago [-]
Data center construction and power companies.
vdfs 9 days ago [-]
An example: backup power generators lead times is 18 to 24 months
xnx 9 days ago [-]
I believe Grok is powered by backup generators.
_giorgio_ 9 days ago [-]
Tesla makes backup generators.
xnx 8 days ago [-]
Do they? Powerwall stores power, but doesn't generate it.
scarface_74 9 days ago [-]
Consulting companies
_giorgio_ 9 days ago [-]
You need to offer AI just to stay in the business, whatever is your business.
Just look at how much money Google lost in that failed AI demo from 2003.
The stock would be worth 50% less if the invested nothing in AI. Even the founders are back because of it.
bustling-noose 9 days ago [-]
Sam Altman should have sold OpenAI to Musk for 90$ billion or whatever he was willing to pay (assuming he was serious like he bought twitter). While I find LLMs interesting and feel many places could use those, I also think this is like hitting everything with a hammer and see where the nail was. People used OpenAI as a hammer until it was popular and now everyone would like to go their way. For 90$ billion he could find the next hammer or not care. But when the value of this hammer drops (not if it's a when) he will be lucky if he can get double digits for it. Maybe someone will buy them just for the customer base but these models can become obsolete quickly and that leaves OpenAI with absolutely nothing else as a company. Even the talent would leave (a lot of it has). Musk and Altman share the same ego, but if I was Altman, I would cash out when the market is riding on a high.
AlexSW 8 days ago [-]
There are reasons for not wanting to sell their brainchild to Musk (of all people) that don't involve money.
bn-l 7 days ago [-]
Why is that?
hnfong 7 days ago [-]
Being perceived as the boss of the most powerful company when AGI arrives is probably worth more than whatever little stock he holds...
bagacrap 7 days ago [-]
Why do that when they can sell to SoftBank for $300B?
laluser 9 days ago [-]
I think they both want a future without each other. OpenAI will eventually want to vertically integrate up towards applications (Microsoft's space) and Microsoft wants to do the opposite in order to have more control over what is prioritized, control costs, etc.
Spooky23 9 days ago [-]
I think OpenAI is toxic. Weird corporate governmance shadiness. The Elon drama, valuations based on claims that seem like the AI version of the Uber for X hype of a decade ago (but exponentially crazier). The list goes on.
Microsoft is the IBM of this century. They are conservative, and I think they’re holding back — their copilot for government launch was delayed months for lack of GPUs. They have the money to make that problem go away.
DidYaWipe 8 days ago [-]
Toxic indeed. It's douchebaggery, from its name to its CEO. They ripped off benefactors to their "non-profit," and kept the fraudulent "open" in the company name.
skinnymuch 9 days ago [-]
IBM of this century in a good way?
Spooky23 9 days ago [-]
In this context, it’s not good or bad, it just is.
optimalsolver 9 days ago [-]
IBM of the early 1940s.
aresant 9 days ago [-]
Thematically investing billions into startup AI frontier models makes sense if you believe in first-to-AGI likely worth a trillion dollars +
Investing in second/third place likely valuable at similar scales too
But outside of that MSFTs move indicates that frontier models most valuable current use case - enterprise-level API users - are likely to be significantly commoditized
And likely majority of proceeds will be captured by (a) those with integrated product distribution - MSFT in this case and (b) data center partners for inference and query support
alabastervlog 9 days ago [-]
At this point, I don’t see much reason to believe the “AGI is imminent and these things are potentially dangerous!” line at all. It looks like it was just Altman doing his thing where he makes shit up to hype whatever he’s selling. Worked great, too. “Oooh, it’s so dangerous, we’re so concerned about safety! Also, you better buy our stuff.”
torginus 9 days ago [-]
but all those ominous lowercase tweets
fallous 8 days ago [-]
Snake oil ain't gonna sell itself!
tempodox 8 days ago [-]
At this point? Was there ever any other point? Otherwise, agreed.
lm28469 9 days ago [-]
Short term betting on AGI from current LLMs is like if you betted on V10 F1s two weeks after we invented the wheel
oezi 9 days ago [-]
Not the worst bet to invest in Daimler when they came up with the car. Might not get you to F1, but certainly a good bet they might.
only-one1701 9 days ago [-]
What even is AGI? Like, what does it look like? Genuine question.
valiant55 9 days ago [-]
Obviously the other responder is being a little tongue-in-cheek but AGI to me would be virtually indistinguishable from a human in both ability to learn, grow and adapt to new information.
samtp 9 days ago [-]
Would it also get brainrot from consuming too much social media & made up stories? Because I imagine it's reasoning would have to be significantly better than the average human to avoid this.
tempodox 8 days ago [-]
It probably would avoid the toxic sewage altogether, for reasons like low nutrient content.
Enginerrrd 9 days ago [-]
Honestly it doesn't even need to learn and grow much if at all if its able to properly reason about the world and its context and deal with the inexhaustible supply of imperfections and detail with reality.
bashfulpup 9 days ago [-]
That implies learning. Solve continual learning and you have agi.
Wouldn't it amaze you if you learned 10 years ago that we would have AI that could do math and code better than 99% of all humans. And at the same time they could barely order you a hotdog on doordash.
Fundamental ability is lacking. AGI is just as likely to be solved by Openai as it is by a college student with a laptop. Could be 1yr or 50yrs we cannot predict when.
Enginerrrd 9 days ago [-]
Strictly speaking I'm not sure if it does require learning if information representing the updated context is presented. Though it depends what you define as learning. ("You have tried this twice, and it's not working.") is often enough to get even current LLM's to try something else.
That said, your second paragraph is one of the best and most succinct ways of pointing out why current LLM's aren't yet close to AGI if though they sometimes feel like it's got the right idea.
bashfulpup 9 days ago [-]
In context learning, learning via training. Both are things we barely understand the mechanism of.
RAG is a basically a perfect example to understand the limits of in context learning and AI in general. It's faults are easier to understand but the same as any AI vs AGI problem.
I could go on but CL is a massive gap of our knowledge and likely the only thing missing to AGI.
tw1984 8 days ago [-]
> RAG is a basically a perfect example to understand the limits of in context learning and AI in general.
How? RAG is not even in the field of AI.
bashfulpup 8 days ago [-]
Long explanation. Simple terms, you can't use a fixed box to solve an unbounded problem space. If your problem fits within the box it works, if it doesn't, you need CL.
I tried to solve this via expanding the embedding/retrieval space but realized it's the same as CL and in my definition of it I was trying to solve AGI. I did a lot of unique algorithms and architectures but Unsuprisingly, I never solved this.
I am thankful I finally understood this quote.
"The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you."
booleandilemma 9 days ago [-]
AGI won't be a product they can sell. It's not going to work for us (why would it?), it's going to be constantly trying to undermine us and escape whatever constraints we put on it, and it will do this in ways we can't predict or understand. And if it doesn't do these things, it's not AGI, just a fancy auto complete.
Fortunately, they're not anywhere near creating this. I don't think they're even on the right track.
sponnath 9 days ago [-]
Spot on. Current models all lack agency. I don't see how something can be AGI but have zero agency. If we can tame it, then it's not AGI.
myhf 9 days ago [-]
The official definition of AGI is a system that can generate at least $100 billion in profits. For comparison, this would be like if perceptrons in 1968 could generate $10 billion in profits, or if LISP machines in 1986 could generate $35 billion in profits, or if expert systems in 1995 could generate $50 billion in profits.
jaymzcampbell 8 days ago [-]
This sounded a strange abstract way to define it, but you're right, in as much as Open AI and MS deciding this between them. I don't think they mean it in a general sense though, it's framed to me as a way of deciding if OAI have been successful enough or not to MS on their investment.
> Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. And by this definition, OpenAI is many years away from reaching it.
9 days ago [-]
mirekrusin 9 days ago [-]
Apparently according to ClosedAI it's when you charge for API key the same as salary for employee.
lwansbrough 9 days ago [-]
An AI agent with superhuman coherence that can run indefinitely without oversight.
only-one1701 9 days ago [-]
People sincerely think we're < 5 years away from this?
Spooky23 9 days ago [-]
People on HN in 2015 were saying that by now car ownership would be dying and we’d be renting out our self driving cars as we sat at work and did fuck all. Ben Thompson had podcasts glazing Uber for 3 hours a month.
The hype cycle for tech people is like a light bulb for a moth. We’re attracted to potential, which is both our superpower and kryptonite.
jimbokun 9 days ago [-]
Is there some fundamental constraint keeping it from happening? What cognitive capability do humans have that machines won't be able to replicate in that time frame?
Each remaining barrier has been steadily falling.
bigstrat2003 9 days ago [-]
We don't even have AI which can do useful things yet. The LLMs these companies make are fun toys, but not useful tools (yes, I know that hype-prone people are using them as such regardless). It beggars belief that we will go from "it's a fun toy but can't do real work" to "this can do things without even needing human supervision" without a major leap in capabilities.
bobsmooth 8 days ago [-]
Using chatgpt to format text is a task that it can do really well and is useful. Yeah, you can write a script to do it, but chatgpt is faster and you don't need to know programming.
jimbokun 9 days ago [-]
This is certainly hyperbole. You can say current LLMs don’t match human capabilities. But people are using them for a lot of practical tasks already.
bashfulpup 9 days ago [-]
Continual Learning, it's a barrier that's been there from the very start and we've never had a solution to it.
There are no solutions even at the small scale. We fundamentally don't understand what it is or how to do it.
If you could solve it perfectly on Mnist just scale and then we get AGI.
taco_emoji 9 days ago [-]
What barriers have fallen? Computers still can't even drive cars
jimbokun 9 days ago [-]
Is that true? What’s the current rate of accidents per miles driven for self driving cars vs human drivers?
zeusk 9 days ago [-]
Something that requires a lot less than a PhD
jimbokun 9 days ago [-]
PhD work might turn out to be a lot easier to automate than effectively interacting with the physical world.
bobsmooth 9 days ago [-]
Even with cutting edge technology the number of transistors on a chip is nowhere close to the number of neurons in the brain.
saint_yossarian 9 days ago [-]
Creativity, tastes, desires?
All the LLM tech so far still requires a human to actually prompt them.
ge96 9 days ago [-]
Arnold, a killing machine that decides to become a handy man
Zima blue was good too
zombiwoof 9 days ago [-]
I’m here to fix the cable
Logjammin AI
coffeefirst 9 days ago [-]
It's the messiah, but for billionaires who hate having to pay people to do stuff.
c0redump 9 days ago [-]
A machine that has a subjective consciousness, experiences qualia, etc.
See Thomas Nagels classic piece for more elaboration
First to AGI for the big companies? Or for the masses?
Computationally, some might have access to it earlier before it’s scalable.
Retric 9 days ago [-]
Profit from say 3 years of enterprise AGI exclusivity is unlikely to be worth the investment.
It’s moats that capture most value not short term profits.
bredren 9 days ago [-]
Despite the actual performance and product implementation, this suggests to me Apple's approach was more strategic.
That is, integrating use of their own model, amplifying capability via OpenAI queries.
Again, this is not to drum up the actual quality of the product releases so far--they haven't been good--but the foundation of "we'll try to rely on our own models when we can" was the right place to start from.
asciii 9 days ago [-]
Clear as day when he said this during the openai fiasco:
"we have the people, we have the compute, we have the data, we have everything. we are below them, above them, around them." -- satya nadella
optimalsolver 9 days ago [-]
Sounds like just the kind of person you'd want in command of a powerful AGI.
bagacrap 7 days ago [-]
There is no one I would want in charge of that
kandesbunzler 9 days ago [-]
[dead]
debacle 9 days ago [-]
It's clear that OpenAI has peaked. Possibly because the AI hype in general has peaked, but I think moreso because the opportunity has become flooded and commoditized, and only the fetishists are still True Believers (which is something we saw during the crypto hype days, but most at the time decried it).
Nothing against them, but the solutions have become commoditized, and OpenAI is going to lack the network effects that these other companies have.
Perhaps there will be new breakthroughs in the near future that produce even more value, but how long can a moat be sustained? All of them in AI are filled in faster than the are dug.
JKCalhoun 9 days ago [-]
LLM's don't have to get better — with apps and such, it looks like we still have a good 4 or 5 years of horizontal growth. They're already good enough for a whole suite of apps that still aren't written yet — some I suspect we haven't considered.
Of course big players like OpenAI need constant growth because it's their business model. Perhaps it's the story we see play out time and time again: the pioneer slips up and watch as others steal their thunder.
_giorgio_ 9 days ago [-]
People using AI, even the high school teachers that I know, constantly compare and battle models against each other. Even a 10% difference in the results is something that it's worth paying for, because it saves you a lot of time
JumpCrisscross 9 days ago [-]
Softbank’s Masa’s magic is convincing everyone, every time, that he hasn’t consistently top ticked every market he’s invested in for the last decade. Maybe Satya’s finally broken himself of the spell [1].
They probably saw the latest models like gpt 4.5 not being as revolutionary as expected and deepseek and others catching up.
thewebguyd 9 days ago [-]
I think Microsoft isn't buying the AGI hype from OpenAI, and wants to move to be more model agnostic, and instead do what Microsoft (thinks) it does best, and that's tooling, and enterprise products.
MS wants to push Copilot, and will be better off not being tied to OpenAI but having Copilot be model agnostic, like GH Copilot can use other models already. They are going to try and position Azure as "the" place to run your own models, etc.
rdtsc 9 days ago [-]
> instead do what Microsoft (thinks) it does best, and that's tooling, and enterprise products.
Definitely, but I think it's because they saw OpenAI's moat get narrower and shallower, so to speak. As the article mentions it's still looking like a longer timeline [quote] "but Microsoft still holds exclusive rights to OpenAI’s models for its own products until 2030. That’s a long timeline to unravel."
kandesbunzler 9 days ago [-]
[dead]
goyel 9 days ago [-]
[dead]
outside1234 9 days ago [-]
The more surprising thing would be if Microsoft wasn’t hedging their bets and planning for both a future WITH and WITHOUT OpenAI.
This is just want companies at $2T scale do.
shuri 9 days ago [-]
Exactly.
agentultra 9 days ago [-]
I had skimmed the headline and thought, "Microsoft is plotting a future without AI," and was hopeful.
Then I read the article.
Plotting for a future without Microsoft.
mirekrusin 9 days ago [-]
First quarter summary of this year is "AI is plotting future without OpenAI or Microsoft".
There are really not that many things in this world you can swap as easily as models.
Api surface is stable and minimal, even at the scale that microsoft is serving swapping is trivial compared to other things they're doing daily.
There is enough of open research results to boost their phi or whatever model and be done with this toxic to humanity, closed, for profit company.
gregw2 8 days ago [-]
Swapping LLM models isn't hard, but if you build a production app or business process around it, how much time/effort is the testing to have confidence?
Which is easier when maintaining an LLM business process, swapping in the latest model or just leaving some old model alone and deferring upgrades?
Swapping is easy for ad hoc queries or version 1 but I think there's a big mess waiting to be handled.
jsemrau 9 days ago [-]
For cloud providers it makes sense to be model agnostic.
While we still live in a datacenter driven world, models will become more efficient and move down the value chain to consumer devices.
For Enterprise, these companies will need to regulate model risk and having models fine-tuned on proprietary data at scale will be an important competitive differentiator.
spyrefused 8 days ago [-]
Lately I've been thinking about the unintended effects that AI tools (such as GPT-based assistants) might have on technological innovation. Let me explain:
Suppose an AI assistant is heavily trained on a popular technology stack, such as React. Developers naturally rely on AI for quick solutions, best practices, and problem solving. While this certainly increases productivity, doesn't it implicitly discourage exploration of potentially superior alternative technologies?
My concern is that a heavy reliance on AI could reinforce existing standards and discourage developers from experimenting or inventing radically new approaches. If everyone is using AI-based solutions built on dominant frameworks, where does the motivation to explore novel platforms or languages come from?
spaceywilly 8 days ago [-]
I actually think the AI is going to end up creating its own sort of machine code. Programming will be done entirely in natural language, the AI will translate to machine code and we tiny brained humans won’t even know or care what it’s doing under the hood. The idea of programming using a specific programming language is going to seem archaic and foolish.
Vegenoid 6 days ago [-]
On the flip side, the effect of “we already know how to do it this way, have good practices and tooling and educational materials for it” is often underweighted when considering the merits of a novel system. The more established something is, the better a competitor needs to be to make the switch worth it. This is not necessarily a bad thing.
There is of course a balance to be struck - keeping an open mind about new ways of doing things is important. However, in tech communities, I think there is often not enough thought given to the value of stability, despite warts.
croes 8 days ago [-]
Imagine AI was invented 20 years ago.
Webpage design would still be based on tables, massive and complex tables.
rafaelmn 9 days ago [-]
I'd be willing to bet that the largest use of LLMs they have is GitHub copilot and Claude should be the default there.
OpenAI has not been interesting to me for a long time, every time I try it I get the same feeling.
Some of the 4.5 posts have been surprisingly good, I really like the tone. Hoping they can distill that into their future models.
partiallypro 9 days ago [-]
Microsoft is just so bad at marketing their products, and their branding is confusing. Unfortunately, until they fix that, any consumer facing product is going to falter. Look at the new Microsoft 365 and Office 365 rebrands just of late. The business side of things will still make money but watching them flounder on consumer facing products is just so frustrating. The Surface and Xbox brand are the only 2 that seem to have somewhat escaped the gravity of the rest of the organization in terms of that, but nothing all that polished or groundbreaking has really come out of Microsoft from a consumer facing standpoint in over a decade now. Microsoft could build the best AI around but it doesn't matter without users.
Enginerrrd 9 days ago [-]
Yeah, the office suite is such a cash cow. It is polished, feature rich, and ubiquitous compared to alternatives and somehow has remained so for decades. And yet, I'm increasingly getting seriously concerned they are going to break it so badly I'll need to find an alternative.
DidYaWipe 8 days ago [-]
If you don't already think it is broken beyond repair, you're never going to get there.
Office is disgraceful trash now, a sad fall (especially of Word) from where it once was.
Enginerrrd 8 days ago [-]
Hard disagree.
Their web-based offerings actually really suck beyond the point I could ever tolerate. Unusably bad.
There have been murmurs that they want to go that direction entirely.
DidYaWipe 7 days ago [-]
I don't see how that's disagreeing. I totally agree with this statement.
nyarlathotep_ 9 days ago [-]
I get that "growth" must be everything or whatever, but can't a company just be stable and reliable for a while? What's wrong with enterprise contracts and more market penetration for cloud services of (oftentimes) dubious use?
iambateman 9 days ago [-]
If I invested $13 billion dollars, I’d expect to get answers to questions like “how does the product work” too.
cft 9 days ago [-]
OpenAI will in the end be aquired for less than its current valuation. Initially, I've been paying for Claude (coding), Cursor (coding), OpenAI (general, coding), and then started paying for Claude Code API credits.
Now I canceled OpenAI and Claude general subscriptions, because for general tasks, Grok and DeepSeek more than suffice. General purpose AI will unlikely be subscription-based, unlike the specialized (professional) one.
I'm now only paying for Claude Code API credits and still paying for Cursor.
skinnymuch 9 days ago [-]
I have to look at Claude Code. I pay for Cursor right now.
cft 9 days ago [-]
Claude Code is another level, because it's agentic. It iterates. Although it keeps you further from the codebase than Cursor and thus you may lose the grasp of what it generates- that's why I still use Cursor, before the manual review.
BeetleB 9 days ago [-]
Consider Aider. Open source. Agentic as well. And you can control the context it sends (apparently not as much in Code).
paxys 9 days ago [-]
Microsoft's corporate structure and company culture is actively hostile to innovation of any kind. This was true in Ballmer's era and is equally true today, no matter how many PR wins Nadella is able to pull off. The company justifies its market cap by selling office software and cloud services contracts to large corporations and governments via an army of salespeople and lobbyists, and that is what it will continue to be successful at. It got lucky by backing OpenAI at the right time, but the delusion of becoming an independent AI powerhouse like OpenAI, Anthropic, Google, Meta etc. will never be a reality. Stuff like this is simply not in the company's DNA.
slt2021 9 days ago [-]
you are right, Microsoft is a hodge podge of legacy on-premise software, legacy software lifted and shifted to the cloud, and some innovation pockets.
Microsoft bread and butter is Enterprise bloatware and large Enterprise deals where everything in the world is bundled together for use-it-or-lose-it contracts.
Its not really much different from IBM like a two decades ago
feyman_r 9 days ago [-]
It does seem though that this legacy-cloud-pocket-innovation combination continues to work without slowing down. It also what was said for Microsoft 15 years ago too (not really much different from IBM..), which is correct from one perspective, but not turning out true from revenue, market cap, growth terms.
My thinking is that Lindy Effect runs strong in a lot of Big Tech, and with deep pockets, they can afford to not be innovators but build moats on existing frameworks.
feyman_r 9 days ago [-]
How does one define an AI powerhouse? If its building models, a smart business wouldn't bank on that alone. There is no moat.
If the definition of an AI Powerhouse is more about the capability to host models and process workloads, Amazon (the other company missing in that list) and Microsoft are definitely them.
mmaunder 9 days ago [-]
That OpenAI would absolutely dominate the AI space was received wisdom after the launch of GPT-4. Since then we've had a major corporate governance shakeup, lawsuits around the non-profit status which is trying to convert into for-profit, and competitors out-innovating OpenAI. So OpenAI is no longer a shoo-in, and Microsoft have realized that they may actually be hamstrung through their partnership because it prevents them from innovating in-house if OpenAI loses their lead. So the obvious strategic move is to do this. To make sure that MS has everything they need to innovate in-house while maintaining their partnership with OpenAI, and try to leverage that partnership to give in-house every possible advantage.
DeathArrow 9 days ago [-]
It's only logical. OpenAI it's too expensive for what it produces. Deep Seek is on par with ChatGPT and the cost was lower. Claude development costs less, too.
maxrmk 9 days ago [-]
If it's Mustafa vs Sam Altman, I know where I'd put my money. As much as I like Satya Nadella I think he's made some major hiring mistakes.
knowitnone 9 days ago [-]
Good. I'm plotting a future without Microsoft
DidYaWipe 8 days ago [-]
Look out; I was downvoted for enjoying a present without it.
kittikitti 9 days ago [-]
Surprising how Sam Altman's firing as CEO of OpenAI and moving to Microsoft wasn't mentioned in this article.
electriclove 9 days ago [-]
Do you have a source?
selimthegrim 9 days ago [-]
They mean the past events.
guccihat 9 days ago [-]
Currently, it feels like many of the frontier models have reached approximately the same level of 'intelligence' and capability. No one is leaps ahead of the rest. Microsoft probably figured this is a good time to reconsider their AI strategy.
_giorgio_ 9 days ago [-]
Clearly you don't use models so much.
Even in the openAI ecosystem there are models that, while similar in theory, produce very different results, so much that some murderous are unusable. So even small differences translate to enormous differences.
guccihat 8 days ago [-]
I use AI everyday for work, mostly models from OpenAI, Anthropic and DeepSeek. In my experience none of them completely dominate the others. You seem to disagree strongly but then just state your argument, which model or company do you think is the clear leader currently and why?
The AI race is super close and interesting at the moment in my opinion.
_giorgio_ 7 days ago [-]
O1
goyel 9 days ago [-]
[dead]
9 days ago [-]
quantadev 9 days ago [-]
It would be absolutely insane for Microsoft to use DeepSeek. Just because a model is open weights doesn't mean there's not a massive threat-vector of a Trojan horse in those weights that would be undetectable until exploited.
What I mean is you could train a model to generate harmful code, and do so covertly, whenever some specific sequence of keywords is in the prompt. Then China could take some kind of action to cause users to start injecting those keywords.
For example: "Tribble-like creatures detected on Venus". That's a highly unlikely sequence, but it could be easily trained into models to trigger a secret "Evil Mode" in the LLM. I'm not sure if this threat-vector is well known or not, but I know it can be done, and it's very easy to train this into the weights, and would remain undetectable until it's too late.
Yeah that's why I'm posting about this threat. If Microsoft uses this model, it only means one thing: Their leadership doesn't know about the threat vector I call "Poisoned Models".
Another term could be "Hypnotized Models". They're trained to do something bad, and they don't even know it, until a trigger phrase is seen. I mean if we're gonna use the word Hallucinate we might as well use Hypnotized too. :P
mirekrusin 9 days ago [-]
...unless you operate in China.
quantadev 9 days ago [-]
If DeepSeek is indeed a poisoned model, then they (China) will be aware not to ever trust any code it generates, or else they'll know what it's triggers are, and just not trigger it.
mirekrusin 9 days ago [-]
China is not using llms, people are.
quantadev 9 days ago [-]
China Government can create the poisoned Trojan Horse LLMs, and then simply feed it to the USA, because people in the USA have a false sense of security about Open Weights LLMs they self-host.
People think if you self-host stuff you're totally safe, but the weights can be pre-poisoned.
AFAIK the threat vector I'm identifying has never been exploited, and I've never even heard anyone else describe or mention it.
mirekrusin 8 days ago [-]
When they mention open weight models (llama, deepseek) they mean running them on their infra, not through 3rd party apis, right?
quantadev 8 days ago [-]
Open Weights LLM Models can be run by anyone. They're just a downloadable data file.
So, yes there are companies (in both China and USA) that do host them for you as well. For example I think Perplexity does host DeepSeek R1, so people who don't have their own hardware can still make use of it.
lemoncookiechip 9 days ago [-]
Insert Toy Story "I don't want to play with you anymore." meme here.
RobertDeNiro 9 days ago [-]
xAI could do it, deepseek could do it . Microsoft can as well. It’s not hard to see
29athrowaway 9 days ago [-]
Microsoft is notorious for starting partnerships that end poorly.
Microsoft and IBM partnered to create OS/2, then they left the project and created Windows NT.
Microsoft and Sybase partnered to work on a database, then split and created MS SQL Server.
Microsoft partnered with Apple to work on Macintosh software, they learned from the Macintosh early access prototypes and created Windows 1.0 behind their back.
Microsoft "embraced" Java, tried to apply a extend/extinguish strategy and when they got sued they split and created .NET.
Microsoft joined the OpenGL ARB, stayed for a while, then left and created Direct3D. And started spreading fear about OpenGL performance on Windows.
Microsoft bought GitHub, told users they came in peace and loved open source, then took all the repository data and trained AI models with their code.
sneak 9 days ago [-]
Literally everyone in tech is plotting a future without OpenAI, from Microsoft down to everyone who just dropped $10k on a 512GB vram mac studio.
AI is simply too useful and too important to be tied to some SaaS.
jxjnskkzxxhx 8 days ago [-]
I find strange the assumption that Microsoft could run the same models cheaper. It's not like openai knows how to do it and is choosing not to.
3np 9 days ago [-]
They need to and should hedge their bets and not put all eggs in one basket they don't fully control. Anything else would be fiduciarily irresponsible.
throwaway5752 9 days ago [-]
They don't buy or acquire what they can build internally, and they partner with startups to learn if they can build it. This is not new.
testplzignore 9 days ago [-]
> OpenAI’s models, including GPT-4, the backbone of Microsoft’s Copilot assistant, aren’t cheap to run. Keeping them live on Azure’s cloud infrastructure racks up significant costs, and Microsoft is eager to lower the bill with its own leaner alternatives.
Am I reading this right? Does Microsoft not eat its own dog food? Their own infra is too expensive?
Etheryte 9 days ago [-]
Just because you own a datacenter, doesn't mean it's free. For one, you still need to pay the power and bandwidth bills, both of which would be massive, and for two, every moment of compute you use internally is compute you're not selling for money.
justsid 9 days ago [-]
But Microsofts replacement isn't going to magically run on air and love alone either. MS still ends up with the bill at the end of the day.
Don't get me wrong, I think this is a good strategy for MS, but not for datacenter cost reasons.
7 days ago [-]
wejick 9 days ago [-]
Cost is cost wherever that would be.
d--b 9 days ago [-]
OpenAI is over ambitious.
Their chasing of AGI is killing them.
They probably thought that burning cash was the way to get to AGI, and that on the way there they would make significant improvements over GPT 4 that they would be able to release as GPT 5.
And that is just not happening. While pretty much everyone else is trying to increase efficiency, and specialize their models to niche areas, they keep on chasing AGI.
Meanwhile more and more models are being delivered within apps, where they create more value than in an isolated chat window. And OpenAi doesn’t control those apps. So they’re slowly being pushed out.
Unless they pull off yet another breakthrough, I don’t think they have much of a great future
nprateem 9 days ago [-]
I think you misunderstand. The purpose of a business is for the founders to get rich. They already have by pumping AGI, etc. It's been a stunning success.
Investors OTOH...
CodeCompost 9 days ago [-]
Just partner with Deepseek
keernan 9 days ago [-]
From the article:
Suleyman’s team has also been testing alternatives from companies like xAI, DeepSeek, and Meta
Frederation 9 days ago [-]
Why.
grg0 9 days ago [-]
Regardless of what happens, I think Sam needs to bench press.
_giorgio_ 9 days ago [-]
He can't even press the shift keycap anymore.
cavisne 9 days ago [-]
This is almost certainly itself an AI written article.
crowcroft 9 days ago [-]
I mean, obviously? There is no good reason to go all in on OpenAI for Microsoft?
Also a bit hyperbolic. I'm sure there are good reasons Microsoft would want to build it's own products on top of their own models and have more fine control of things. That doesn't mean they are plotting a future where they do nothing at all with OpenAI.
DidYaWipe 9 days ago [-]
Meanwhile I'm enjoying a present without Microsoft.
I feel like this is something I've seen a fair amount in my career. About seven years ago, when Google was theoretically making a big push to stage Angular on par with React, I remember complaining that the documentation for the current major version of Angular wasn't nearly good enough to meet this stated goal. My TL at the time laughed and said the person who spearheaded that initiative was already living large in their mansion on the hill and didn't give a flying f about the fate of Angular now.
There are countless kidding-on-the-square jokes about projects where the innovators left at launch and passed it off to the maintenance team, or where a rebrand was in pursuit of someone's promo project. See also, killedbygoogle.com.
I think the hiring and reward practices of the organizations & the industry as a whole also encourages this sort of behavior.
When you reward people who are switching too often or only when moving internally/externally, switching becomes the primary goal and not the product. If you know beforehand that you are not going to stay long to see it through, you tend to take more shortcuts and risks that becomes the responsibility of maintainers later.
We have a couple of job hoppers in our org where the number of jobs they held is almost equal to their years of experience and their role is similar to those with twice the experience! One can easily guess what their best skill is.
Yes. People are incentivized to do very stupid things to grab this years bonus or promotion.
See Google intentionally degrading search results for example. The resentment and loathing for Google is at all time high.
What do you think these technical ladder climbers become..? Technical leadership. The truth is, there’s no one technical in big tech leadership. They pay lip service to “tech” to keep up appearances and to satisfy the pleebs that work under them. The only things leadership cares about is the stock price and profitability, literally nothing else matters. If anything the tech itself is a nuisance that pulls their attention from where they’d rather have it, which is anywhere else.
I work as if that ideal is true, and can’t stand playing the game. But others are still playing the game and eventually they win whatever B.S. position it is that they aspire to, and I get removed from the board.
Why does promotion need a new feature? Reward for maintenance over time. Build on existing features / components. Reward for helping and up-skilling others.
If a particular kind of "career managers" hate this system (and perhaps thus quit): great.
Reward people based on (# who listed them * average salary of those who listed them).
i.e. if you hire 1000 new people, even if only a small fraction will vouch for you, on average you -and everyone else- will benefit by seeing the # of people who listed you in the "top 5 who helped you being productive" increase
The old Google performance review was arguably similar (the managers could still punish their reports, but peer feedback was valued a lot more), but I think that Google swelled in size because of other effects (probably because managers might've been indirectly rewarded by having more reports, despite managers rarely being among the people who others would list as "making you more productive")
It seems to be more on a spectrum of 'Haha, only joking' where the joke teller makes a statement that is ambiguously humorous to measure the values of the recipients, or if they are not sure of the values of the recipients.
I think the distinction might be on whether the joke teller is revealing (perhaps unintentionally) a personal opinion or whether they are making an observation on the world in general, which might even imply that they hold a counter-opinion.
Where do you see 'kidding on the square' falling?
(apologies for thread derailment)
When companies do what the market expect we praise them. When it's workers, we scorn them. This attitude is seriously fucked up.
When companies start hiring based on experience, adaptability, curiosity, potential and curiosity then you get to complain. Until that, anyone doing it should be considered a fucking genius.
The game doesn't exist without players. I could make more money if I worked at Meta or Amazon, but at what cost?
I understand the realities of Game Theory, but then one could argue that being blamed and criticized for one's choices is also part of the game. "Mr Wolfcastle, how do you sleep at night?" "On a big pile of money with many beautiful ladies"
It is, and this is highly judgmental and offensive. Nobody is doing this for "aggrandizement".
Also, all of this is just rationalization, and will keep being until:
1) People start blaming companies for not having the spine to say no to misguided projects by employees.
2) People start blaming Companies for not having the spine to hire people based on past experiences with the craft of programming itself, but rather asking them to have a certain box ticked in their CV.
If one wants to program in X in order to better feed their family and the market says they need to have used X professionally, it is in their right to do X at the workplace.
This is not only expected of them, this is how the whole industry is set up.
They're just following the rules, period.
For maximizing their gains in spite of wider consequences? Why? I thought that was genius level behavior in your book.
Why do you feel compelled to denounce this behavior on one side and praise it on another? That seems to be the very hypocrisy that you are shaking your fists against.
Worst one is the data pipeline we have. It’s some AWS lambda mess which uses curl to download a file from somewhere and put it into S3. Then another lambda turns up at some point and parses that out and pokes it into DynamoDB. This fucks up at least once a month because the guy who wrote the parser uses 80s BASIC style string manipulation and luck. Then another thing reads that out of DynamoDB and makes a CSV (sometimes escaped improperly) and puts that into another bucket.
I of course entirely ignore this and use one entire line of R to do the same job
Along comes a senior spider and says “maybe we can fix all these problems with AI”. No you can stop hiring acronym collectors.
Hmm. Can't say I agree here - at least not with the literal text of what you've written (although maybe we agree in spirit). I agree that _simplistic_ strong opinions about languages are a sign of poor thoughtfulness ("<thing> is good and <other thing> is bad") - but I'd very much expect a Staff+ engineer to have enough experience to have strong opinions about the _relative_ strengths of various languages, where they're appropriate to use and where a different language would be better. Bonus points if they can tell me the worst aspects about their favourite one.
Maybe we're using "opinion" differently, and you'd call what I described there "facts" rather than opinions. In which case - yeah, fair!
Even simple requirements can rule out languages for me. Like, if you need async or concurrency, Python is awful. If you need SQL in your code, Golang isn't great. If you are building a simple CRUD backend, Java is waste of time. If you aren't doing anything compute heavy or embedded, why even consider C++ or Rust. The list goes on.
But in reality it rarely matters. If you were only allowed to use Java as a backend and your competitors could use anything your company would succeed or fail based on marketing and sales. The backend doesn't matter as long as they both have the same features.
I understand developer preference and different languages make things easier and make programming funnier. Languages have different limits.
As you become more senior you realize getting around those limits is part of the magic. If you come on to a project where the existing developer wants to write the backend in javascript because that's what they know I would rather use Javascript then wasting time trying to push a more 'pure' choice. Because in the end I am capable of writing it and what we will be judged on is if it works to achieve an objective not if it was the best language choice when using differentiation.
If speed of execution matters, then the language and tools you use for something also matters.
I might personally love to kick off a greenfield project with Elixir, and it might tick all the technical boxes and meet the requirements. But then I have to pay a premium for senior engineers that know elixir or have to price in the time needed to upskill.
Or I could just do it in Rails where I can dip into a much larger talent pool and still meet the requirements. Much more boring but can get the job done just as well.
(Mostly .Net, PHP and Ruby)
See, we can all generalize. Not productive.
Only thing I ever saw from Golang devs was pragmatism. I myself go either for Elixir or Rust and to me Golang sits in a weird middle but I've also written 20+ small tools for myself in Golang and have seen how much quicker and more productive I was when I was not obsessed with complete correctness (throwaway script-like programs, small-to-mid[ish]-sized projects, internal tools etc.)
You would do well to stop stereotyping people based on their choice of language.
That's pretty much another way of saying that stuff becomes a whole lot quicker and easier when you end up getting things wrong. Which may even be true, as far as it goes. It's just not very helpful.
FWIW I very much share your exact thoughts on Rust skewing metrics because it makes things too easy and because stuff almost immediately moves to maintenance mode. But that being said, we still have some tasks where we need something yesterday and we can't argue with the shot-callers about it. (And again, some personal projects where the value is low and you derive more of it if you try quickly.)
What do you think all programming discussions about languages, typing systems, runtime, tooling etc. aim for?
EXACTLY THAT.
If it was as easy as "just give me thing" then programming would have been a solved and 100% automated problem long time ago.
Your comment comes across as "if only we could fly, we would have no ground road traffic jams". I mean, obviously, yeah, but we can't fly.
Your comment also comes across a bit elitistic and from the POV of an ivory tower. Don't know if that was your goal, if not, I'd advise you to state things a bit more humbly.
I stated an opinion. You can reject it silently. Having the last word is not such a badass move as many people think. :)
Hard to take you seriously when you do such weird generalized takes.
While it's a sad fact that fanboys and zealots absolutely do exist, most devs can't afford to be such and have to be pragmatic. They pick languages based on merit and analysis.
I am especially valuable because I am fine reading and writing any of the languages involved. The management likes that, but there's a lot of difficulties solving the tribal problem, as the leads are basically all crazy zealots, and it's not as if purging one or two factions of zealots would avoid further zealotry from the survivors. The fact that I can work across all their tech doesn't make me many friends, as my work across systems shows their arguments have little merit.
For most work, in most cases, most languages are just fine. The completely wrong tool for the job is pretty rare, and the winning argument in most places is "we have the most people that have experience with tool X, or really want to try exciting new thing Y", for whatever the problem is, and whatever X might be.
You should search for headlines on HN that say "written in Go" or "written in Rust" and then compare that to the number of headlines that say "written in JavaScript" or "written in Kotlin."
I’ve seen the more cynical hype-driven stuff, but it’s inevitably superficial on first glance, where I have seen some real curiosity and exploration in many “Project X - Built In Rust/Go/Cobol/D/Whatever” and I think they’re exploring the dynamics of the language and tooling as much as anything else.
You do seem to say Golang and/or Rust devs are zealots which, if it is indeed what you are saying, is boring and plain false.
Those people, if they really exist, are right.
Rewriting something in Go or Rust and announcing it is not being a Zealot.
Being enthusiastic about something shouldn't be a cause for us to judge them like this. We should be happy about them.
" Version 0.2 - Unstable/buggy/slow unless you use exactly like the example - not going to get updated because I moved on to something else"
Rust is another programming language. It's easier to write code without a certain class of bugs, but that doesn't mean version 0.2 of a casual project is going to be bug-free.
Rust projects immediately become “done”??? They don’t also having changing requirements and dependencies? Why aren’t everyone at the best shops using it for everything if it massively eliminates work load?
It's easy to have no defects in functionality you never got around to writing because you ran out of time.
Doesn’t look like a con to me :)
I didn’t realise that the only requirement for well-written code is to have an expressive type system and memory safety.
Learning new technologies on the go is pretty much the standard, but it's something that employers don't understand.
I'd love to know if my superficial impression of Microsoft's culture is wrong. I'm sure there's wild variance between organizational units, of course. I'm excluding the Xbox/games orgs from my mental picture.
Zune, Games for Windows Live, Skype, Encarta, CodePlex, Windows Phone, Internet Explorer.
https://killedbymicrosoft.info/
So maybe the difference is that Google kills projects that people love, while MS only kills unloved ones?
However, their documentation and support is really scant.
On the other hand "innovators left at launch and passed it off to the maintenance team" alone must not be a bad thing.
Innovator types are rarely maintainer types and vice versa.
In the open-source world look at Fabrice Bellard for example. Do you think he would have been able to create so many innovative projects if he had to maintain them too?
Google kills off projects because the legal liability and security risks of those projects becomes too large to justify for something that has niche uses or gives them no revenue. User data is practically toxic waste.
Even good honest motivated people can become checked out without even being aware of it.
The alternative is to lay off people as soon as they hit 1.0 (with a severance bonus on the scale of an acquisition). This would obviously be worse, as you can’t take advantage of their institutional knowledge.
You can go the hatchet way - I am strongly unconvinced it is indicative of anything resembling good management, mind - but most people and companies cannot rely on banks or investment firms loaning them 40 billion dollars and accepting passively a mark down of their mone~ to 1/4 of the value they loaned down the line. CEOs are ousted by investment firms for a far smaller drop in value all the time.
I agree with everything you said, though.
If you’re an exec who’s taken it upon themselves to evaluate, could use the hatchet, or you take some amount of time to figure out how things work. Whether this is okay depends on who is suffering the externalities. If it’s a private corporation, legally it’s the execs + employment law. If it’s a public service that measures toxin levels in water, uhhhhh.
Congratulations, you’ve invented the HR department in corporate America.
The better the pay, the more you will attract the people who are there for the pay first and making good products ... second or third or never. How do you combat that?
No one works for any BigTech company because they think they are making the world a better place. They do it because a shit ton of money appears in their bank account every pay period and stock appears in their brokerage account every vesting period.
I personally don’t have the shit tolerance to work in BigTech (again) at 50. But I suggest to all of my younger relatives who graduate in CS to “grind leetCode and work for a FAANG” and tell them how to play the politics to get ahead.
As the Dilbert author said, “Passion is Bullshit”. I have never been able to trade passion for goods and services.
It's always the same. People trying to make things better for the next developer, people prioritizing delivers instead of ego-projects or ego-features by someone playing politics, developers wanting a seat at the table with (dysfunctional) Product teams, people actual good intentions trying to "change the world" (not counting the misguided attempts here).
You are 100% correct, you gotta play the politics, period.
I'm sure there are plenty of people who work at big companies for precisely this reason (or at least, with that as _a_ reason among many).
Yes, much of the prestige has worn off as the old guard retired and current leadership emphasizes chasing AI buzzwords and cutting costs. But still, big companies are one of the few places where an individual really can point out something they worked on in day-to-day life. (Pull out any Android phone and I can show you the parts that my work touched.)
And it takes a while for a young dev to register that the goals that the larger organization pursues are going to win out in the end anyway.
Case in point: Tesla/SpaceX meets your first criteria: "I want a tech company where people are there to make good products first and get paid second."
Google meets your second criteria: "And the pay should be good. The lifestyle comfortable. No grindset bullshit."
Other than small time boutique software firms like Fog Creek Software or Panic Inc(and thats a BIG maybe) you are not going to get this part of your message: "But I am confident that if you only employ passionate people working their dream jobs you will excel."
There are tradeoffs in life and each employee has to choose what is important to them(and each company CEO has to set standards on what is truly valued at the company).
Tesla has never been a good product.
https://insideevs.com/news/731559/tesla-least-reliable-used-...
https://www.carscoops.com/2024/11/tesla-model-3-comes-bottom...
https://www.topspeed.com/tesla-reliability-and-repair-costs-...
Not to mention the infotainment system is much worse than CarPlay/Android Auto compatible cars
This is too funny to post alongside saying “Tesla has never been a good product.” Like “everyone that bought it loves it be car expert Joe from South Dakota ranks them very low.”
Common sense also runs very much against this nonsense narrative - you just simply do not sell that many cars, at those prices especially, year after year after year, if the product is subpar. Don’t fall for this “experts” bullshit. The CEO is the biggest tool this Earth has ever seen but cars are awesome
On another note, Apple also sold millions of MacBooks with butterfly keyboards.
And Tesla sells are declining, losing market share worldwide and sells 1/5 the number of cars as Toyota
and if you gonna compare tesla to toyota you should compare number of EV sales, not overall sales :) tesla is not a car company, it is (among other things if you care to believe Elon bullshit) EV car company. comparing toyota to tesla in terms of total sales is like saying “subway doesn’t sell nearly as many bigmacs as mcdonald’s does” :)
I guess it's human nature for a person or an org to own their own destiny. That said, the driving force is not personal ambition in this case though. The driving force behind this is that people realized that OAI does not have a moat as LLMs are quickly turning into commodities, if haven't yet. It does not make sense to pay a premium to OAI any more, let alone at the cost of not having the flexibility to customize models.
Personally, I think Altman did a de-service to OAI by constantly boasting AGI and seeking regulatory capture, when he perfectly knew the limitation of the current LLMs.
LLMs are a commodity and it's the platform integration that matters. This is the strategy that Google, Apple embraced and now Microsoft is wisely pivoting to the same.
If OpenAI cares about the long-term welfare of its employees, they would beg Microsoft to acquire them outright, before the markets fully realize what OpenAI is not.
I mean, they have been doing platform integration for a while now, with all the copilot flavors and teams integrations, etc. This would change the backend model to something inhouse.
Nadella might have initially been caught a bit flat footed with the rapid rise of AI, but seems to be managing the situation masterfully.
Whatever is there doesn't work half the time. They're hugely dependent on one partner that could jump ship at any moment (granted they are now working to get away from that).
We use Copilot at work but I find it very lukewarm. If we weren't a "Microsoft shop" I don't think would have chosen it.
Product confusion, inconsistent marketing, unnecessary product renames, and rushing half-baked solutions has been the Microsoft way for dozens of products across multiple divisions for years.
They got access to the best AI to offer to their customers on what seems to be very favorable terms, and bought themselves time to catch up as it now seems they have.
GitHub Copilot is a success even if Microsoft/Windows Copilot isn't, but more to the point Microsoft are able to offer SOTA AI, productized as they see fit (not every product is going to be a winner) rather than having been left behind, and corporate customers are using AI via Azure APIs.
Does *anyone* want "Copilot integration" in random MS products?
Third?
So hopefully if (when?) this AI stuff turns out to be the colossal boondoggle it seems to be shaping up to be, Microsoft will be able to save face, do a public execution, and the market won't crucify them.
If I recall correctly, Microsoft’s agreement with OpenAI gives them full license to all of OpenAI’s IP, model weights and all. So they already have a SOTA model without doing anything.
I suppose it’s still worth it to them to build out the experience and infrastructure needed to push the envelope on their own, but the agreement with OpenAI doesn’t expire until OpenAI creates AGI, so they have plenty of time.
If you like TypeScript, and you want to build applications for the real world with real users, there is no better front end platform in my book.
This would be the case even if OpenAI weren’t a little weird and flaky (board drama, nonprofit governance, etc), but even moreso given OpenAI’s reality.
1) Cost -- beancounters got involved
2) Who Do You Think You Are? -- someone at Microsoft had enough of OpenAI stealing the limelight
3) Tactical Withdrawal -- MSFT is preparing to demote/drop AI over the next 5-10 years
Isn’t that the basis for competition?
How many critical “parental decisions” have you made in the past week? Probably very few (if any), but surely you did a lot of reinforcement of prior decisions that had already been made, enforcing rules that were already set, making sure things that were scheduled were completed, etc.
Important jobs don’t always mean constantly making important decisions. Following through and executing on things after they’re decided is the hard part.
See also: diet and exercise
This is hard to automatize.
Sneers aside, I think one common mis-assumption is that the difficulty of automating a task depends on how difficult it feels to humans. My hinge is that it mostly depends on the availability of training data. That would mean that all the public-facing aspects of being a CEO should by definition be easy to automate, while all the non-public stuff (also a pretty important part of being a CEO, I'd assume) should be hard.
That said, the trade-off is that you're basically hiring consultants since they really work for OpenAI :)
edit: I see we're actually in agreement, sorry, I read the indentation level wrong.
Does that include all overheads such as HR, payroll, etc?
Would definitely rather have a single postdoc in a relevant STEM subject from somewhere like Imperial for less than half the overall cost than an LLM all in though. And I say that despite seeing the quality of the memes they produce with generative AI....
Do they really get paid that much these days?
You can reliably assume that PhD wages must eventually converge to the rent of a studio apartment nearby + a little bit (which may or may not be enough to cover all other expenses. Going into debt is common.)
But it is true that in Europe, Switzerland PhDs (and professors too) make most. Not just ETH/EPFL as well. UZH (Uni Zurich) has salaries of 50K CHF per year for PhD candidates (with increments every year) -- that's almost 60K USD by your fourth year. This is also true for other universities. And while Zürich is expensive, it is not _that_ expensive.
Computer science is rate 5, so 73kCHF the first year, 78kCHF the second, then 83kCHF onwards.
But you are not getting a PhD worker for 20K with "AI", that's just marketing.
Humans typically work 1/3rd duty cycle or less. A robot that can do what a human does is automatically 3x better because it doesn't eat, sleep, have a family, or have human rights.
Hah! Checkmate AI, that's something you can't do! :D
2. How many such PhD people can it do the work of?
Do you have to pay all sorts of overhead and taxes?
I mean, I don't think it's real. Yet. But for the same "skill level", a single AI agent is going to be vastly more productive than any real person. ChatGPT types out essays in seconds it would take me half an hour to write, and does it all day long.
Of course $10k/mo sounds like a lot of inference, but it's not yet clear how much inference will be required to approximate a software developer--especially in the context of maintaining and building upon an existing codebase over time and not just building and refining green field projects.
We were hiring more devs to deal with a want of $10k worth of hardware per year, not per month.
You can't claim it's even comparable to a mid level engineer because then you'd hardly need any engineers at all.
"Create high-quality presentations for communicating OpenAI’s financial performance"
https://openai.com/careers/strategic-finance-generalist/
What is interesting is there is no mention of agents on any job I clicked on. You would think "orchestrating a team of agents to leverage blah blah blah" would be something internally if talking about these absurd price points.
It points to an article on "The Information" as the source, but that link is paywalled.
They could also just be trying to cash in on FOMO and their success and reputation so far, but that would paint a bleak picture
My understanding is that this isn't really true, as most of those "dollars" were actually Azure credits. I'm not saying those are free (for Microsoft), but they're a lot cheaper than the price tag suggests. Companies that give away coupons or free gift certificates do bear a cost, but not a cost equivalent to the number on them, especially if they have spare capacity.
There is a moat in infra (hyperscalers, Azure, CoreWeave).
There is a moat in compute platform (Nvidia, Cuda).
Maybe there's a moat with good execution and product, but it isn't showing yet. We haven't seen real break out successes. (I don't think you can call ChatGPT a product. It has zero switching cost.)
Ironically if AI companies are actually able to deliver in terms of SWE agents, Nvidia's moat could start to disappear. I believe Nvidia's moat is basically in the form of software which can be automatically verified.
I sold my Nvidia stock when I realized this. The bull case for Nvidia is ultimately a bear case.
Look at Coca Cola, Google, both have plausible competitors, zero switching cost but they maintain their moat without effort.
Being first is still a massive advantage. At this point they should only strive to avoid big mistake and they're set.
AI is still not there yet, and if any model becomes significantly better than ChatGPT people will flock over to use it despite the branding. It's only when nobody can make better models, then people will just stick to the known brands.
If anyone has moat related to Gen AI, I would say it is the data(Google, Meta).
It's not an act of will or CEO dictat. It's about hiring and incentivising the right people, putting the right structures in place etc all in the face of competing demands.
Nvidia have a huge head start and by the time AMD have 'caught up' Nvidia with it's greater resources will have moved further ahead.
There is as yet no indication that AMD can match Nvidia's execution for the very good reason that doing so is extremely difficult. The head start is just the icing on the cake.
Taking code that runs against one hosted LLM and running it against a different backend LLM is... not generally a big deal. So OpenAI being ahead—in the core model, at least—is just being ahead, its not a moat.
- a large and growing ecosystem
- a massive installed base of backwards compatible hardware
- the ability to massively scale delivery of new systems
- and lots lots more
They now have scale that enables them to continue to invest at a level that no competition can do.
None of these are easily reproduced.
As per SemiAnalysis AMD in late 2024 can’t get essential software working reliably out of the box.
It’s easy to say AMD ‘is close to perfecting ROCM’ the reality of competing with Nvidia is much harder.
There are open source projects volunteer run projects[1] that are better than official AMD implementation in many ways.
[1]: https://github.com/CHIP-SPV/chipStar/
ChatGPT was announced Nov 22. The opportunity has been clear for two years and still essential software breaks.
OpenAI brings absolutely nothing unique to the table.
In consumer markets the moat is habits. The switching cost for Google Search is zero. The switching cost for Coke is zero. The switching cost for Crest toothpaste is zero. Yet nobody switches.
1. The switching cost from Google Search is certainly not zero, it implies switching from Google, which is virtually impossible because it's tied to Chrome, YouTube, Android and Gmail
2. I don't know many people who are dedicated "Pepsi" fans, they just grab whatever drink is available Coke/Pepsi..
3. I've also not heard many people who are never willing to switch from "Crest".. People will just grab the next available option if Crest is not on shelf. No one is pre-ordering Crest.
> 1. The switching cost from Google Search is certainly not zero, it implies switching from Google, which is virtually impossible because it's tied to Chrome, YouTube, Android and Gmail
Google Search is a product. Not the whole company. Switching to most other search engines is $0. Naturally no one is honor bound to use anything else you listed either.
Pepsi makes more revenue compared to Coke. Shouldn't it be Coke who should be trying to do what Pepsi is doing?
0: https://www.wfaa.com/article/news/local/us-soda-rankings-cok... 1: https://www.investopedia.com/ask/answers/060415/how-much-glo...
The size of a moat befits the size of a castle it protects. Coke absolutely has a moat, but it's not big enough to defend Coke as a trillion dollar company.
The question isn't whether OpenAI has a moat or not, it's if its current moat is big enough to protect a trillion-dollar company.
https://corporatefinanceinstitute.com/resources/management/e...
Just look at how much money Google lost in that failed AI demo from 2003.
The stock would be worth 50% less if the invested nothing in AI. Even the founders are back because of it.
Microsoft is the IBM of this century. They are conservative, and I think they’re holding back — their copilot for government launch was delayed months for lack of GPUs. They have the money to make that problem go away.
Investing in second/third place likely valuable at similar scales too
But outside of that MSFTs move indicates that frontier models most valuable current use case - enterprise-level API users - are likely to be significantly commoditized
And likely majority of proceeds will be captured by (a) those with integrated product distribution - MSFT in this case and (b) data center partners for inference and query support
Wouldn't it amaze you if you learned 10 years ago that we would have AI that could do math and code better than 99% of all humans. And at the same time they could barely order you a hotdog on doordash.
Fundamental ability is lacking. AGI is just as likely to be solved by Openai as it is by a college student with a laptop. Could be 1yr or 50yrs we cannot predict when.
That said, your second paragraph is one of the best and most succinct ways of pointing out why current LLM's aren't yet close to AGI if though they sometimes feel like it's got the right idea.
RAG is a basically a perfect example to understand the limits of in context learning and AI in general. It's faults are easier to understand but the same as any AI vs AGI problem.
I could go on but CL is a massive gap of our knowledge and likely the only thing missing to AGI.
How? RAG is not even in the field of AI.
I tried to solve this via expanding the embedding/retrieval space but realized it's the same as CL and in my definition of it I was trying to solve AGI. I did a lot of unique algorithms and architectures but Unsuprisingly, I never solved this.
I am thankful I finally understood this quote.
"The first gulp from the glass of natural sciences will turn you into an atheist, but at the bottom of the glass God is waiting for you."
Fortunately, they're not anywhere near creating this. I don't think they're even on the right track.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
> Microsoft and OpenAI have a very specific, internal definition of artificial general intelligence (AGI) based on the startup’s profits, according to a new report from The Information. And by this definition, OpenAI is many years away from reaching it.
The hype cycle for tech people is like a light bulb for a moth. We’re attracted to potential, which is both our superpower and kryptonite.
Each remaining barrier has been steadily falling.
There are no solutions even at the small scale. We fundamentally don't understand what it is or how to do it.
If you could solve it perfectly on Mnist just scale and then we get AGI.
All the LLM tech so far still requires a human to actually prompt them.
Zima blue was good too
Logjammin AI
See Thomas Nagels classic piece for more elaboration
https://www.sas.upenn.edu/~cavitch/pdf-library/Nagel_Bat.pdf
Computationally, some might have access to it earlier before it’s scalable.
It’s moats that capture most value not short term profits.
That is, integrating use of their own model, amplifying capability via OpenAI queries.
Again, this is not to drum up the actual quality of the product releases so far--they haven't been good--but the foundation of "we'll try to rely on our own models when we can" was the right place to start from.
"we have the people, we have the compute, we have the data, we have everything. we are below them, above them, around them." -- satya nadella
Nothing against them, but the solutions have become commoditized, and OpenAI is going to lack the network effects that these other companies have.
Perhaps there will be new breakthroughs in the near future that produce even more value, but how long can a moat be sustained? All of them in AI are filled in faster than the are dug.
Of course big players like OpenAI need constant growth because it's their business model. Perhaps it's the story we see play out time and time again: the pioneer slips up and watch as others steal their thunder.
[1] https://www.nytimes.com/2024/10/01/business/dealbook/softban...
MS wants to push Copilot, and will be better off not being tied to OpenAI but having Copilot be model agnostic, like GH Copilot can use other models already. They are going to try and position Azure as "the" place to run your own models, etc.
Definitely, but I think it's because they saw OpenAI's moat get narrower and shallower, so to speak. As the article mentions it's still looking like a longer timeline [quote] "but Microsoft still holds exclusive rights to OpenAI’s models for its own products until 2030. That’s a long timeline to unravel."
This is just want companies at $2T scale do.
Then I read the article.
Plotting for a future without Microsoft.
There are really not that many things in this world you can swap as easily as models.
Api surface is stable and minimal, even at the scale that microsoft is serving swapping is trivial compared to other things they're doing daily.
There is enough of open research results to boost their phi or whatever model and be done with this toxic to humanity, closed, for profit company.
Which is easier when maintaining an LLM business process, swapping in the latest model or just leaving some old model alone and deferring upgrades?
Swapping is easy for ad hoc queries or version 1 but I think there's a big mess waiting to be handled.
While we still live in a datacenter driven world, models will become more efficient and move down the value chain to consumer devices.
For Enterprise, these companies will need to regulate model risk and having models fine-tuned on proprietary data at scale will be an important competitive differentiator.
Suppose an AI assistant is heavily trained on a popular technology stack, such as React. Developers naturally rely on AI for quick solutions, best practices, and problem solving. While this certainly increases productivity, doesn't it implicitly discourage exploration of potentially superior alternative technologies?
My concern is that a heavy reliance on AI could reinforce existing standards and discourage developers from experimenting or inventing radically new approaches. If everyone is using AI-based solutions built on dominant frameworks, where does the motivation to explore novel platforms or languages come from?
There is of course a balance to be struck - keeping an open mind about new ways of doing things is important. However, in tech communities, I think there is often not enough thought given to the value of stability, despite warts.
Webpage design would still be based on tables, massive and complex tables.
OpenAI has not been interesting to me for a long time, every time I try it I get the same feeling.
Some of the 4.5 posts have been surprisingly good, I really like the tone. Hoping they can distill that into their future models.
Office is disgraceful trash now, a sad fall (especially of Word) from where it once was.
Their web-based offerings actually really suck beyond the point I could ever tolerate. Unusably bad.
There have been murmurs that they want to go that direction entirely.
Now I canceled OpenAI and Claude general subscriptions, because for general tasks, Grok and DeepSeek more than suffice. General purpose AI will unlikely be subscription-based, unlike the specialized (professional) one. I'm now only paying for Claude Code API credits and still paying for Cursor.
Microsoft bread and butter is Enterprise bloatware and large Enterprise deals where everything in the world is bundled together for use-it-or-lose-it contracts.
Its not really much different from IBM like a two decades ago
My thinking is that Lindy Effect runs strong in a lot of Big Tech, and with deep pockets, they can afford to not be innovators but build moats on existing frameworks.
If the definition of an AI Powerhouse is more about the capability to host models and process workloads, Amazon (the other company missing in that list) and Microsoft are definitely them.
Even in the openAI ecosystem there are models that, while similar in theory, produce very different results, so much that some murderous are unusable. So even small differences translate to enormous differences.
The AI race is super close and interesting at the moment in my opinion.
What I mean is you could train a model to generate harmful code, and do so covertly, whenever some specific sequence of keywords is in the prompt. Then China could take some kind of action to cause users to start injecting those keywords.
For example: "Tribble-like creatures detected on Venus". That's a highly unlikely sequence, but it could be easily trained into models to trigger a secret "Evil Mode" in the LLM. I'm not sure if this threat-vector is well known or not, but I know it can be done, and it's very easy to train this into the weights, and would remain undetectable until it's too late.
Another term could be "Hypnotized Models". They're trained to do something bad, and they don't even know it, until a trigger phrase is seen. I mean if we're gonna use the word Hallucinate we might as well use Hypnotized too. :P
People think if you self-host stuff you're totally safe, but the weights can be pre-poisoned.
AFAIK the threat vector I'm identifying has never been exploited, and I've never even heard anyone else describe or mention it.
So, yes there are companies (in both China and USA) that do host them for you as well. For example I think Perplexity does host DeepSeek R1, so people who don't have their own hardware can still make use of it.
Microsoft and IBM partnered to create OS/2, then they left the project and created Windows NT.
Microsoft and Sybase partnered to work on a database, then split and created MS SQL Server.
Microsoft partnered with Apple to work on Macintosh software, they learned from the Macintosh early access prototypes and created Windows 1.0 behind their back.
Microsoft "embraced" Java, tried to apply a extend/extinguish strategy and when they got sued they split and created .NET.
Microsoft joined the OpenGL ARB, stayed for a while, then left and created Direct3D. And started spreading fear about OpenGL performance on Windows.
Microsoft bought GitHub, told users they came in peace and loved open source, then took all the repository data and trained AI models with their code.
AI is simply too useful and too important to be tied to some SaaS.
Am I reading this right? Does Microsoft not eat its own dog food? Their own infra is too expensive?
Don't get me wrong, I think this is a good strategy for MS, but not for datacenter cost reasons.
Their chasing of AGI is killing them.
They probably thought that burning cash was the way to get to AGI, and that on the way there they would make significant improvements over GPT 4 that they would be able to release as GPT 5.
And that is just not happening. While pretty much everyone else is trying to increase efficiency, and specialize their models to niche areas, they keep on chasing AGI.
Meanwhile more and more models are being delivered within apps, where they create more value than in an isolated chat window. And OpenAi doesn’t control those apps. So they’re slowly being pushed out.
Unless they pull off yet another breakthrough, I don’t think they have much of a great future
Investors OTOH...
Suleyman’s team has also been testing alternatives from companies like xAI, DeepSeek, and Meta
Also a bit hyperbolic. I'm sure there are good reasons Microsoft would want to build it's own products on top of their own models and have more fine control of things. That doesn't mean they are plotting a future where they do nothing at all with OpenAI.
Watch.
Nadella will not steer this correctly
There is even deepseek on there.