These kinds of deals were very much à la mode just prior to the .com crash. Companies would buy advertising, then the websites and ad agencies would buy their services and they'd spend it again on advertising. The end result was immense revenue without profit.
zemvpferreira 3 hours ago [-]
There’s one key difference in my opinion: pre-.com deals were buying revenue with equity and nothing else. It was growth for growth’s sake. All that scale delivered mostly nothing.
OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
Arkhaine_kupo 3 hours ago [-]
> they’re using their equity to buy compute that is critical to improving their core technology
But we know that growth in the models is not exponential, it's much closer to logarithmic. So they spend the same equity to get diminishing results.
The ad spend was a merry-go-round; this is a flywheel whose turning grinds its gears down until it's a smooth burr. The math of the rising stock prices only begins to make sense if there is a possible breakthrough that changes the flywheel into a rocket, but as it stands it's running a lemonade stand where you reinvest profits into lemons that give out less juice.
J_McQuade 2 hours ago [-]
There is something about an argument made almost entirely out of metaphors that amuses me to the point of not being able to take it seriously, even if I actually agree with it.
powerhouse007 2 hours ago [-]
As much as I dislike metaphors, this sounded reasonable to me. Just don't go poking holes in the metaphor instead of the real argument.
gilleain 2 hours ago [-]
Indeed, poking holes in the metaphor is like putting a pin in a balloon, rather than knocking it out of the park by addressing the real argument.
DenisM 2 hours ago [-]
OpenAI invests heavily into integration with other products. If model development stalls they just need to be not worse than other stalled models while taking advantage of brand recognition and momentum to stay ahead in other areas.
In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns - it forces competition to do the same, and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of startup business is a popularity contest - number one is more attractive for the sheer fact of being number one. If you’re a very rational investor and don’t believe in the product you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
chii 2 hours ago [-]
The bigger threat is if their models "stall", while a new up-start discovers an even better model/training method.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
DenisM 5 minutes ago [-]
They can also buy out the startup or match the development by hiring more people. Their comp packages are very competitive.
otherjason 2 hours ago [-]
But, if model development stalls, and everyone else is stalled as well, then what happens to turn the current wildly-unprofitable industry into something that "it makes sense to keep spending billions" on?
accrual 1 hours ago [-]
I suspect if model development stalls we may start to see more incremental releases to models, perhaps with specific fixes or improvements, updates to a certain cutoff date, etc. So less fanfare, but still some progress. Worth spending billions on? Probably not, but the next best avenue would be to continue developing deeper and deeper LLM integrations to stay relevant and in the news.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
camdenreslink 35 minutes ago [-]
If model development stalls, then the open weight free models will eventually totally catch up. The model itself will become a complete commodity.
brokencode 48 minutes ago [-]
Yeah, except you can keep on squeezing these lemons for a long time before they run out of juice.
Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.
The models are already useful for many applications, and they are being integrated into more business and consumer products every day.
Adoption is what will turn the flywheel into a rocket.
some_guy_nobel 13 minutes ago [-]
> OpenAI applies the same strategy, but they’re using their equity to buy compute that is critical to improving their core technology. It’s circular, but more like a flywheel and less like a merry-go-round. I have some faith it could go another way.
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
zemvpferreira 5 minutes ago [-]
Happy to have provided. I’m not an AI bull and not in any way invested in the U.S. economy besides a little money in funds, but I do try to think about the war of today vs the war of yesterday. Hopefully that’s always en vogue.
_heimdall 2 hours ago [-]
I think that, at best, that description boils down to Nvidia, Oracle, etc inventing fake wealth to build something and OpenAI building their own fake wealth by getting to use that new compute effectively for free.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
slashdev 2 hours ago [-]
The same way the stock market invents a trillion dollars of fake wealth on a strong up day?
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
_heimdall 60 minutes ago [-]
The stock market isn't inventing money. Those investing in the stock market might be, those buying on leverage for example.
Capital markets weren't intended for round trip schemes. If a company on paper hands 100B to another company who gives it back to the first company, that money never existed and that is capital markets being defrauded rather than working as expected.
rapind 2 hours ago [-]
I think it's worse. The US market feels like a casino to me right now and grift is at an all time high. We're not getting good economic data, it's super unpredictable, and private equity is a disaster waiting to happen IMO. For sure there are smart people able to make money on the gamble, but it's not my jam.
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
slashdev 1 hours ago [-]
More money is lost by bears fighting a bull market, than in actual bear market crashes.
I’ve made that mistake already.
I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.
teiferer 2 hours ago [-]
> It seems very bubbly to me, but not dotcom level bubbly.
Not? Money is thrown after people without really looking at the details, just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.
slashdev 1 hours ago [-]
Nvidia has a trailing PE of 50. Cisco was 200 at the height of the dotcom bubble.
Nowhere near that level. There’s real demand and real revenue this time.
It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
_heimdall 57 minutes ago [-]
We shouldn't judge whether an indicator is stable or okay only by checking whether it's at the highest value in history.
PE ratios of 50 make no sense; there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations and this isn't one of them.
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
slashdev 47 minutes ago [-]
I’m not asking if it makes sense, I’m simply pointing out that by that measure this is much less extreme than 2000. As I stated, I think we’re in a bubble, so valuations won’t make much sense.
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
staticautomatic 2 hours ago [-]
I sell you a cat for $1B and you sell me a dog for $1B and now we’re both billionaires! Whether the capital markets “want” that or not it’s still silly.
slashdev 1 hours ago [-]
If we’re both willing to pay that in a free market economy, then we both leave the deal happy.
Things are worth what people are willing to pay for them. And that can change over time.
Sentiment matters more than fundamental value in the short term.
Long term, on a timescale of a decade or more, it’s different.
overfeed 21 minutes ago [-]
> If we’re both willing to pay that in a free market economy
The thing is: you've paid nothing - all you did was trade pets and play an accounting trick to make them seem more valuable than they are.
fireflash38 11 minutes ago [-]
Is that not fraud?
0xbadcafebee 2 hours ago [-]
Eventually when ChatGPT replaces Google Search, they will run ads, and so have that whole revenue stream. Still isn't enough money to buy the trillions worth of infrastructure they want, but it might be enough to keep the lights on.
schmidtleonard 1 hours ago [-]
That's an insightful point! Making insightful points like that one is taxing on the brain, you should consider an electrolyte drink like Brawndo™ (it's got what plants crave) to keep yourself sharp!
Ugh I hate it so much, but you're right, it's coming.
bayarearefugee 2 hours ago [-]
> critical to improving their core technology
It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.
bgwalter 2 hours ago [-]
Dotcom scams included "vendor financing", where telecom equipment providers invested in their customers who built infrastructure:
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
api 2 hours ago [-]
The assumption is that they have a large moat.
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
simgt 2 hours ago [-]
> This will be true if (as I believe) AI will plateau as we run out of training data.
Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.
delis-thumbs-7e 2 hours ago [-]
As much as ChatGPT says I’m basically a genius for asking it for good vegan cake recipes, I don’t think that provides it any data it doesn’t already have that makes it any better. Also, at this point the massive increases in data and computing power seem to bring ever-decreasing improvements (and sometimes just decline), so it seems we are simply hitting a limit this kind of architecture can achieve no matter what you throw at it.
DenisM 2 hours ago [-]
ChatGPT chat logs contain massive amount of data teased out of people’s brains. But much of it is lore, biases, misconceptions, memes. There are nuggets of gold in there but it’s not at all clear if there’s a good way to extract them, and until then chat logs will make things worse, not better.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at long tail.
api 58 minutes ago [-]
When I say models will plateau I don't mean there will be no progress. I mean progress will slow down since we'll be scraping the bottom of the barrel for training data. We might never quite run out but once we've sampled every novel, web site, scientific paper, chat log, broadcast transcript, and so on, we've exhausted the rich sources for easy gains.
DenisM 7 minutes ago [-]
Chat logs don’t run out. We may run out of novelty in those logs, at which point we may have run out of human knowledge.
Or not - there’s still knowledge in people’s heads that isn’t bleeding into AI chats.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
delis-thumbs-7e 2 hours ago [-]
Apple’s new M5 can run models over 10B parameters, and if they give their new Studio next year enough juice, it can maybe run a 30B local model. How long until you can run a full GPT-5 on your laptop or home server with a few grand worth of hardware? And what is going to happen to all these GPU farms, since as I understand they are fairly useless for anything else?
treis 1 hours ago [-]
Very few people own top of the line Macs and most interactions are on phones these days. We are many generations of phones away from running GPT-5 on a phone without murdering your battery.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
api 1 hours ago [-]
Quantized, a top-end Mac can run models up to about 200B (with 128GiB of unified RAM). They'll run a little slow but they're usable.
This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
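A rough sanity check on those numbers (a sketch; it assumes 4-bit quantization and ignores KV-cache and activation overhead, which add to the footprint):

```python
def quantized_weight_gib(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight memory in GiB for a quantized model."""
    bytes_total = params_billion * 1e9 * bits_per_weight / 8
    return bytes_total / 2**30

# A 200B-parameter model quantized to 4 bits per weight:
print(round(quantized_weight_gib(200, 4), 1))  # 93.1 -- fits in 128 GiB of unified RAM
```

So the ~200B ceiling for a 128GiB machine checks out, with some headroom left for the KV cache.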
moralestapia 3 hours ago [-]
>they’re using their equity to buy compute that is critical to improving their core technology
That's only like 1/8th of the flywheel, though.
runarberg 2 hours ago [-]
Wasn’t there also a bunch of telecom infrastructure created in the dot-com bubble, tangible products created, etc? Things like servers, telephone wires, underwater internet cables, tech-storefronts, internet satellites, etc.
spogbiper 2 hours ago [-]
so much fiber was run that in the US over 90% of it wasn't even used
ignoramous 3 hours ago [-]
> There’s one key difference in my opinion
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
afavour 3 hours ago [-]
> Nvidia's stock rally leaves it with a LOT of room to fund big bets right now
And then what happens if the stock collapses?
mulmen 2 hours ago [-]
Hence the emphasis on right now.
SecretDreams 2 hours ago [-]
> I have some faith it could go another way.
I wonder how they felt during the .com era.
brazukadev 2 hours ago [-]
Yes, this time is different, trust big bro sama.
boringg 28 minutes ago [-]
The original "Tech" boom was an infrastructure boom by the telecoms, funded by leveraged debt. It was an overbuild mismatched with market timing. If you brought the timeline forward to when that infrastructure was actually used (late 2000s), you probably would never have had the crash.
This boom is a data center boom with AI being the software layer/driver. This one potentially has a lot longer to run, even though everyone is freaking out now. If you believe AI is rebuilding compute, then this changes our compute paradigm in the future. And as long as we don't get an over-leveraged build-out without revenue coming in the door - I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear circular deal flow is not a good look.
I can see the both sides of bull and bear at this moment.
Circular investments were also a compounding factor in the Japanese asset price bubble.
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant") to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
Here is a charitable perspective on what's happening:
- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
- Nvidia instead invests in other companies that use their gpus by providing them deals that must be spent on nvidia products.
- This accelerates the growth of these companies, drives further lock in to nvidia's platform, and gives nvidia an equity stake in these companies.
- Since growth for these companies is accelerated, future revenue will be brought forward for nvidia and since these investments must be spent on nvidia gpus it drives further lock in to their platform.
- Nvidia also benefits from growth due to the equity they own.
This is all dependent on token economics being or becoming profitable. Everything seems to indicate that once the models are trained, they are extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even) then I don't see how this doesn't benefit Nvidia massively.
pols45 3 hours ago [-]
Yup. Not just Nvidia. Just look at the quarterly results reported by Amazon, Google, Meta, Microsoft and Apple. Each one is reporting revenues never before seen in history. If you make 100 Billion a quarter you have to spend it on something.
These guys are running hyper-optimized cash extraction mega machines. There is no comparison to previous bubbles, because no such companies ever existed in the past.
solarwindy 3 hours ago [-]
100 billion a quarter is Alphabet, right? Given how much click fraud there is, and that every org and business under the sun is held to ransom to feature on the SERP for their own name even — it’s tempting to say Google’s become a private tax on everything.
daedrdev 19 minutes ago [-]
No, Apple also has 100 billion dollars in revenue despite floundering AI and running a very hardware dependent business.
trollbridge 3 hours ago [-]
Odd how they are simultaneously having large layoffs even as reporting record revenues.
The question is where the profits are.
jcheng 2 hours ago [-]
The layoffs at Amazon and Microsoft are not due to lack of profits. They’re massively profitable right now.
They're "massively profitable" because they're laying off large portions of a major cost center - labor - and backloading upcoming data center construction costs. As those come due, and labor needs rise again, that profit disappears.
wmeredith 2 hours ago [-]
Amazon - 14,000 layoffs; significant
Microsoft - 14,000 (multiple rounds); significant
Meta - 600 layoffs; insignificant for the company's size
Google - several hundred layoffs; insignificant for the company's size
So many such profitable companies are the best possible evidence for the need for drastic antitrust intervention. The lack of competition and regulation is leading to a massive drain on every other sector.
belter 3 hours ago [-]
> Everything seems to indicate that once the models are trained, they are extremely profitable
Some data would reinforce your case. Do you have it?
Right. As far as I can tell, OpenAI, Grok, etc sell me tokens at a loss.
But I am having a hard time figuring out how to turn tokens into money (i.e. increased productivity). I can justify $40-$200 per developer per month on tokens but not more than that.
koolba 2 hours ago [-]
There’s about 5M software devs in the US so even at $1000/year/person spend, that’s only $5B of revenue to go around. Theres plenty of other uses cases but focusing on pure tech usage, it’s hard to see how the net present value of that equates to multiple trillions of dollars across the ecosystem.
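The arithmetic above, plus a crude valuation of that revenue stream (a sketch; the 10% discount rate and the perpetuity framing are illustrative assumptions, not figures from the thread):

```python
devs = 5_000_000        # ~5M US software developers (commenter's figure)
spend_per_dev = 1_000   # $/year/person (commenter's figure)
annual_revenue = devs * spend_per_dev
print(annual_revenue)   # 5000000000 -> $5B/year, as stated above

# Value that stream as a simple perpetuity at an illustrative 10% discount rate:
discount_rate = 0.10
npv = annual_revenue / discount_rate
print(npv)              # 50000000000.0 -> $50B, far short of "multiple trillions"
```

Even with generous assumptions, pure dev-tool spend alone doesn't get anywhere near trillion-dollar valuations, which is the commenter's point.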
treis 2 hours ago [-]
It's the first new way of interacting with computers since the iPhone. It's going to be massively valuable and OpenAI is essentially guaranteed to be one of the players.
_aavaa_ 13 minutes ago [-]
Why is their product not palm? Or windows mobile?
schwarzrules 2 hours ago [-]
I'm not trying to be annoying, but surely if you'd justify spending $200/developer/month, you could afford $250/month...
The reason I wonder about that is because that also seems to be the dynamic with all these deals and valuations. Surely if OpenAI would pay $30 billion on data centers, they could pay $40 billion, right? I'm not exactly sure where the price escalations actually top out.
h2zizzle 2 hours ago [-]
No? That's a 25% expense increase. You just ate the margins on my product/service, and then some.
simianwords 2 hours ago [-]
why would they sell to you at a loss when they have been cutting prices by 2x every year or so for the last 3 years? People wanted to purchase the product at price X in 2023, and now the same product costs 10 times less. Do you think they were always selling at a loss?
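A quick consistency check on those figures (a sketch; the "2x every year" and "10 times less" numbers are the commenter's claims, not verified):

```python
# Compound three annual halvings, roughly 2023 -> 2026:
price = 1.0
for _ in range(3):
    price /= 2
print(price)  # 0.125 -- an 8x drop, in the ballpark of the claimed "10 times less"
```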
simianwords 3 hours ago [-]
Inference cost has been going down for a while now. At what point do you think it will be profitable? When cost goes down by 2x? 5x?
bwfan123 3 hours ago [-]
This is behind a paywall. Is there a free link you can share ?
belter 2 hours ago [-]
Would love to, and it's normally what I do, but archive.is is currently down. At least here from the outer belt.
Eisenstein 3 hours ago [-]
Your conclusion about training being the cost factor that will eventually align with profitability in the inference phases relies on training new models not being an endless arms race.
vmg12 3 hours ago [-]
If the inference is profitable and training new models is actually an endless arms race that's actually the best outcome for nvidia specifically.
Eisenstein 2 hours ago [-]
Only in the short term.
TZubiri 3 hours ago [-]
I'd hazard a guess that there's nothing tech-specific here and that fraudulent schemes are well defined enough for the SEC and commercial courts to take action if something is not kosher.
datadrivenangel 3 hours ago [-]
It's usually not actually fraud. It's like Amazon reinvesting back into growth, except the unit economics don't work if everyone cashes out at the same time, and if anyone starts cashing out, the growth stops and everyone cashes out before it's too late.
CPLX 3 hours ago [-]
Exactly, everything old is new again. This was one of the drivers of the original dot-com bubble.
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
boringg 6 minutes ago [-]
Real question -- how else is OpenAI supposed to fund itself? It has capital requirements that the most moneyed business companies can't provide. So it has to come up with ways to get access to money while de-risking the terms. Not saying the circularity works but I don't know how else you raise at their scale.
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.
FloorEgg 51 minutes ago [-]
What's crazy is that with the 2021 changes to IRC § 174, most software R&D spending is considered capital investment and can't be immediately expensed. It has to be amortized over 5 years.
I don't know how that 11.5B number was derived, but I would wager that the net loss on income statement is a lot lower than the net negative cash flow on cash flow statement.
If that 11.5B is net profit/loss, then whatever the portion of the expense part of the calculation that's software R&D could be 5x larger if it weren't for the new amortization rule.
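To illustrate the mechanics the commenter is describing (a sketch; §174 amortization for domestic R&D uses straight-line over 5 years with a midpoint convention, giving roughly a 10% deduction in year 1, but treat the dollar figures as purely illustrative):

```python
def first_year_deduction(rd_spend: float, amortize: bool) -> float:
    """Year-1 tax deduction for domestic software R&D spend.

    Old rule: fully expensed immediately.
    Post-change IRC §174: straight-line over 5 years with a mid-year
    convention, so only half of one year's amortization (10%) in year 1.
    """
    return rd_spend * 0.10 if amortize else rd_spend

spend = 1_000_000_000  # $1B of R&D, illustrative
print(first_year_deduction(spend, amortize=False))  # 1000000000 -> old rule
print(first_year_deduction(spend, amortize=True))   # 100000000.0 -> new rule
```

Under the new rule, 90% of that year's R&D spend still hits cash flow but not the income statement, which is why reported losses can diverge so far from cash burn.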
gausswho 36 minutes ago [-]
Wasn't that change cancelled this summer?
d-moon 31 minutes ago [-]
It was
guywithahat 2 hours ago [-]
It's incredible how Tesla used to lose a few hundred million a year and analysts would freak out claiming they'd never be profitable. Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
dmoy 2 hours ago [-]
> Now Rivian can lose 5 billion a year and I don't hear anything about it, and OpenAI can lose 11 billion in a quarter and Microsoft
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
Schiendelman 1 hours ago [-]
Rivian lost something like $5B in 2024, but they're on track to only lose $2.25B in 2025. That trend line is clear. In 2026 they release a much lower cost model, and a lot of that loss has been development of that model. They probably won't achieve profitability in 2026, but if they get their loss down to $1B in 2026, in 2027 we'll likely see them go net positive.
accrual 2 hours ago [-]
> like the WeWork CEO flying couches to offices in private jets
I found there was more than just couches on the WeWork private jets:
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
teiferer 1 hours ago [-]
> we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody and the opposite is that's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really contends that it's a bubble.
bobxmax 1 hours ago [-]
[dead]
OtherShrezzing 2 hours ago [-]
>and OpenAI can lose 11 billion in a quarter and Microsoft still backs them.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive the difficult times if the bubble bursts because of a minor player - for example, if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
TYPE_FASTER 50 minutes ago [-]
> Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
whimsicalism 1 hours ago [-]
They’re dependent on usage of their cloud. I don’t agree that they are as dependent on OAI as you suggest. Ultimately, we’ve unlocked a new paradigm and people need GPUs to do things - regardless of whether that GPU is running OAI branded software or not.
adastra22 1 hours ago [-]
Why? Microsoft has permanent, royalty free access to the frontier models. If OpenAI went under, MSFT would continue hosting GPT-5 on Azure, GitHub Copilot, etc. and not be affected in the slightest.
lokar 2 hours ago [-]
Very few industries are “deeply profitable” absent the illegal abuse of monopoly power
potato3732842 1 hours ago [-]
Don't forget the perfectly legal use of legislation and bureaucratic precedent that gives them "soft/lossy monopoly" power or all but forces people to do business with them.
lokar 1 hours ago [-]
OpenAI is pretty clearly pushing for complex government regulation as a way to protect their lead and prevent new entrants in the market.
Iulioh 1 hours ago [-]
And as we saw, once a model is trained you need very little compute to run it, and there is very little advantage between being the 1st model and the 10th.
A monopoly in this field is impossible; your product won't ever be so good that the competition doesn't make sense.
Add to this that AGI is impossible with LLMs...
lokar 55 minutes ago [-]
I’m not so sure. Look for more gov regulations that make it hard for startups. Look for stricter enforcement of copyright (or even updates to laws) once the big players have secured licensing deals, to cut off the supply of cheap training data.
bigwheels 1 hours ago [-]
> this feels a little like the WeWork CEO flying couches to offices in private jets
Fascinating! I unearthed the TL;DR for anyone else interested:
* WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.
* The G650ER was customized with two bedrooms and a conference table.
* Neumann used the jet extensively for global travel, meetings, and family trips.
* The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.
That was back in the mid-2010s, right? Companies had yet to reach $1T valuations. $5B against $1T is a drop in the bucket.
boringg 25 minutes ago [-]
You can't honestly be comparing a shitty real estate play like WeWork to the real functional benefits people get out of ChatGPT.
ChatGPT was mind-blowing when you first used it. WeWork is a real estate play fronted by a self-aggrandizing, self-dealing CEO.
raincole 2 hours ago [-]
And did people listen to those "analyses" and dump Tesla, or its stock kept skyrocketing?
random9749832 1 hours ago [-]
Tesla is a meme stock.
misiti3780 26 minutes ago [-]
not true.
alfalfasprout 49 minutes ago [-]
Investors are trying to bet on OpenAI being the first to replace all human skilled labor. Of course, this is foolish for a few reasons:
1. Performance of AI tools is improving, but only marginally so in practice
2. If human labor was replaced, it's the start of global societal collapse so any winnings would be moot.
randomNumber7 2 hours ago [-]
The winner takes it all, so it is reasonable to bet big to be the one.
anonymousiam 2 hours ago [-]
The one what? What is the secret sauce that will distinguish one LLM from another? Is it patentable? What's going to prevent all of the free LLMs from winning the prize? An AI crash seems inevitable.
schnitzelstoat 2 hours ago [-]
It could end up like Search did, at first you had Lycos, AskJeeves, Altavista etc. and then Google became absolutely dominant.
They want to be the Google in this scenario.
skeeter2020 2 hours ago [-]
Then they're doing it backwards. Google first built a far superior product, then pursued all the tricks to maintain their monopoly. OpenAI at best has the illusion of a superior product, and even that is a stretch.
camdenreslink 43 minutes ago [-]
Google was by far the best product. Maybe an LLM provider will emerge in that way, but it seems they are all very similar in capability right now.
wyre 28 minutes ago [-]
I don't believe Google won the search engine wars because they had the best product. While that may be true, they won because of the tools they provided to their users: email, cloud storage, docs/sheets/drive, Chrome, etc.
ncls 14 minutes ago [-]
They were already pretty dominant in search by the time they released most if not all of those. They got into that position by being the better search engine - better results and nicer to use (clean design, faster loading times).
j16sdiz 2 hours ago [-]
You need the infrastructure, not just the model.
The model can be free, but the infrastructure (data center) ain't.
Workaccount2 2 hours ago [-]
The goal isn't to be the best LLM, the goal is to be the first self-improving LLM.
On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.
kurisufag 2 hours ago [-]
The moment properly self-improving AI (that doesn't run into some logistic upper bound of performance) is released, the economy breaks.
The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; and if not, anyone with a bootstrap level of compute will eventually be able to do anything at all.
It's not a race for ROI; it's to have your name go in the book as one of the guys who first obsoleted the relationship between effort, willpower, intelligence, etc., and the ability to bring arbitrary change to the world.
forgetfulness 1 hours ago [-]
The machine god would still need resources provided by humans on their terms to run; the AI wouldn't sweat spending, for instance, 5 straight years of its immortality just to figure out a 10-year plan to eventually run at 5% less power than now, but humans may not be willing to foot the bill for this.
There’s no guarantee that the singularity makes economic sense for humans.
kurisufag 37 minutes ago [-]
Presuming the kind of runaway superintelligence people usually discuss, the sort with agency, this just turns into a boxing problem.
Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?
adastra22 1 hours ago [-]
Maybe on paper, but only on paper. There are so many half-baked assumptions in that self-improvement logic.
weregiraffe 23 minutes ago [-]
Self-improving LLM is as probable as a perpetual motion machine.
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI could build a smart AI, it would mean that the stupid AI was actually smart; otherwise it wouldn't have been able to.
Joel_Mckay 1 hours ago [-]
Silicon Valley capital investment firms have always exploited regulatory capture to "compete". The public simply has a ridiculously short memory of the losers pushed out of the market during the loss-leader-to-exploitation transition phase.
Currently, the trend is not whether one technology will outpace the other in the "AI" hype-cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but it does create perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off anticipating the rise of real "AI".
One day real "AI" may exist, but a LLM or current reasoning model is unlikely going to make that happen. It is absolutely hilarious there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3
simonsarris 2 hours ago [-]
I don't really believe that, and I thought it was interesting on Meta's earnings call that Zuck (or the COO) said that it seems unlikely at this point that a single company will dominate every use of LLMs/image models, and that we should expect to see specialization going forward.
aaronblohowiak 2 hours ago [-]
Do you have any reasoning to support the notion that this market is winner takes all?
chii 2 hours ago [-]
With enough money to lobby, they can make it a winner takes all market (ala, a regulated monopoly).
deadbabe 10 minutes ago [-]
But then you get stuff like Deepseek R1.
whimsicalism 1 hours ago [-]
Want to bet? I see this claim all over the internet and do not believe it for a moment.
loudmax 1 hours ago [-]
As I understand the argument, it's that AI will reach a level where it's smart enough to improve itself, leading to a feedback loop where it takes off like a rocket. In this scenario, whoever is in second place is left so far in the dust that it doesn't matter. Whichever model is number one is so smart that it's able to absorb all economic demand, and all the other models will be completely obsolete.
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
runarberg 2 hours ago [-]
Does the winner take it all?
I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and into billionaires via e.g. hedge funds, bailouts, dividend payouts, underpay, wage theft, etc. And the more they blow up this bubble, the more money they can exploit out of workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for you.
boringg 9 minutes ago [-]
As an aside, does anyone get the feeling that NYT is also training its fire on all California tech companies these days? I know that NYT really doesn't like California (it never has - from restaurants to culture to business), but I'm curious if other people see that as well?
rchaud 2 hours ago [-]
OpenAI is raising funding based on its own forecasts for AI demand growth, and sending most of it to Oracle, MSFT, Nvidia as well as paying insiders enormous salaries.
There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.
When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.
What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?
Given that AI is a national security matter now, I'd expect the U.S. to step in and rescue certain companies in the event of a crash. However, I'd give higher odds to NVIDIA than to OpenAI. Weights are easily transferable and the expertise is in the engineers, but the ability to continue making advanced chips is not as easily transferred.
philipwhiuk 3 hours ago [-]
If they're too-important-to-fail they're too important not to be broken up or nationalised.
jimbokun 2 hours ago [-]
While that is a sensible opinion the 2008 crash showed that it is not the opinion of decision makers in the US.
whimsicalism 1 hours ago [-]
I’m curious if those of you calling for nationalization have worked for the government or a state-owned enterprise like Amtrak. People should witness the effects of long-term public sector ownership on productivity and effectiveness in a workplace.
saulpw 28 minutes ago [-]
Yeah, like IBM and Intel and GE and GM are shining examples of how effectively the private sector runs companies. Maybe large enterprises are by their nature inefficient. Maybe productivity isn't the best metric for a utility. We could, for instance, prioritize resiliency, longevity, accessibility, and environmental concerns.
whimsicalism 26 minutes ago [-]
Even those problematic companies exemplify the difference: when enterprises are mismanaged and fail, capital is reallocated away from them.
embedding-shape 3 hours ago [-]
Why is ML knowledge "in the engineers" while chip manufacturing apparently sits in the company/hardware/something else than the engineers/humans?
NBJack 3 hours ago [-]
Read up a bit on the effort needed to get a fab going, and the yield rates. While engineers are crucial in the setup, the fab itself is not as 'fungible' as the employees involved.
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
embedding-shape 1 hours ago [-]
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
whimsicalism 1 hours ago [-]
There’s a lot in LLM training that is pretty commodity at this point. The difficulty is in data - and a large part of why it has gotten more challenging is simply that some of the best sources of data have locked down against scraping post-2022 and it is less permissible to use copyrighted data than the “move fast and break things” pre-2023 era.
As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between China and the US is that they have fewer data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
noosphr 26 minutes ago [-]
>Otherwise we'd see SOTA models from new groups every month
We do.
It's just that startups don't go after the frontier models but niche spaces which are under served and can be explored with a few million in hardware.
Just like how OpenAI made GPT-2 before they made GPT-3.
embedding-shape 15 minutes ago [-]
> We do.
> It's just that startups don't go after the frontier models but niche spaces
But both of "New SOTA models every month" and "Startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today at least) or we don't, maybe because the labs are focusing on non-SOTA instead.
marcyb5st 51 minutes ago [-]
Fully agree. I also think we are deep into diminishing-returns territory.
If I had to guess, OAI and others pay top dollar for talent that has a higher probability of discovering the next "attention" mechanism, and investors are betting this is coming soon (hence the huge capitalizations and willingness to live with $11B losses/quarter). If they lose patience throwing money at the problem, I see only a few players remaining in the race, because they have other revenue streams.
trollbridge 3 hours ago [-]
Right. I could spin up a strong ML team, an AI startup, build a foundational model, etc., given a reasonable amount of seed capital.
Build a chip fab? I've got no idea where to start, or where to even find people to hire, and I know the equipment we'd need to acquire would also be quite difficult to get at any price.
wongarsu 3 hours ago [-]
But the fabs don't belong to NVIDIA, they belong to TSMC. I have no doubt that Taiwan and maybe even the US government would step in to save TSMC if for some reason it got existential problems, but that doesn't provide an argument for saving NVIDIA
OfficialTurkey 3 hours ago [-]
> I can spin up a strong ML team through hiring in probably 6-12 months with the right funding.
Mark Zuckerberg would like a word with you
singron 3 hours ago [-]
Nvidia isn't a fab.
tonyarkles 3 hours ago [-]
First-order: because of the capex and lead times. If you grab a bunch of world-class ML folks and put them in a room together, they're going to be able to start producing world-class work together. If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
embedding-shape 3 hours ago [-]
> If you grab a bunch of world-class chip designers in the same scenario but don't have world-class fabs for them to use, they're not going to be able to ship competitive designs.
But why such an unfair comparison?
Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?
jimbokun 2 hours ago [-]
Much easier and cheaper to source computers than a fab.
embedding-shape 2 hours ago [-]
Right, but to source a fab you need experience as well, nothing you can just hire a random person to do exactly.
tonyarkles 48 minutes ago [-]
To simplify it down even more:
- For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
- For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.
embedding-shape 40 minutes ago [-]
> - For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".
jeffwask 3 hours ago [-]
The start-up costs of creating a new chip manufacturer are significantly higher (you can't just SaaS your way into factories), and the chips themselves are more subject to IP and patents owned by that company.
bob1029 3 hours ago [-]
One person can implement a transformer model from scratch in a weekend. Hardware is not the valuable part of machine learning. Data and how it is used are.
The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
chermi 2 hours ago [-]
Umm, part of it does. It's necessary but not sufficient, at least to achieve it on the timescales we've seen. Scale is part of the "magic".
tehjoker 3 hours ago [-]
That's true, but without the kind of horsepower provided by modern hardware (even though I'm skeptical that it's all needed, especially given DeepSeek's amazing results), AI would be nearly impossible.
There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.
chermi 2 hours ago [-]
Deepseek required existing models that required the horsepower.
jimbokun 2 hours ago [-]
Chip designs have strong IP protections.
AI models do not. Sure you can't just copy the exact floating point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.
embedding-shape 1 hours ago [-]
> But with enough capital you can train a model just as good, as the training and inference techniques are well known
You're not alone in believing that money alone can train a good model, and I've already answered elsewhere why things aren't as easy as you believe. But besides this, where are y'all getting that from? Is there some popular social media influencer that keeps parroting this, or where does it come from? Clearly you're not involved in those processes/workflows yourself, or you wouldn't claim it's just a money problem - so where are you all getting this from?
thesz 3 hours ago [-]
Chip manufacturing is extremely time consuming, especially when we are talking about masks for lithography.
The rights on masks for chips and their parts (IPs) belong to companies.
And one definitely does not want these masks to be sold during bankruptcy process to (arbitrary) higher bidders.
lz400 2 hours ago [-]
Even if/when the bubble pops, I don't think NVIDIA is even close to need rescuing or being in trouble. They might end being worth 2 trillion instead of 5 but they're still selling GPUs nobody else knows how to make that power one of the most important technologies in the world. Also, all their other divisions.
The .com bubble didn't stop the internet or e-commerce; they still won, revolutionized everything, etc. etc. Just because there's a bubble doesn't mean AI won't be successful. It will be, almost for sure. We've all used it, it's truly useful and transformative. Let's not miss the forest for the trees.
mv4 10 minutes ago [-]
How's this legal? Smaller businesses get in trouble for creative deals leading to inflated earnings.
gregoriol 3 hours ago [-]
Will Sam Altman's fall be as legendary as Sam Bankman-Fried's?
h2zizzle 2 hours ago [-]
I'm assuming Altman wasn't screwing his CFO and letting her post to 4chan about it, so probably not that bad.
Lionga 5 minutes ago [-]
Altman was screwing / raping his sister so not quite sure who is worse.
baggachipz 3 hours ago [-]
Hopefully moreso
whimsicalism 1 hours ago [-]
ressentiment: the forum
layer8 2 hours ago [-]
SBF’s fall is almost forgotten already.
Hilift 2 hours ago [-]
Most of the funds lost to SBF were recovered. And CZ has a pardon. Crypto has evaporated about $2 trillion in assets since then.
kyruzic 2 hours ago [-]
The funds in USD were recovered because bitcoin's value is 5x higher than it was when he got arrested.
7thpower 2 hours ago [-]
And a set of fundamentally sound investments, including Anthropic iirc.
hiddencost 2 hours ago [-]
I don't understand why you think it's OK to flagrantly violate financial laws for consumer protection, just because the bet got lucky?
Flockster 4 hours ago [-]
Okay, that article is a little bit shallow. It just summarises the headlines of the last few weeks of circular deals. But is there a more in-depth article that sheds a little more light on what this actually means, from a financial perspective?
He also has a podcast called Better Offline, which is slightly too ad-heavy for my taste. Nevertheless, with my meagre understanding of large corporate finances, I was not able to find any errors in his core argument, regardless of his somewhat sensationalist style of writing.
OfficialTurkey 3 hours ago [-]
My complaint about Ed Zitron is that he's _always_ shouting into the void about something. A lot of the issues he covers are legitimate and deserve the scorn he gives them but at some point it became hard for me to sort the signal from the noise.
enraged_camel 2 hours ago [-]
Ed Zitron sucks because he constantly spitballs on easy to confirm topics and keeps being wrong in ways that should be trivial to check and fix. Case in point:
It’s probably hard to do that in a news context because the real rationales are pretty tight.
Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era, or a metastasized financial cancer that’s going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
gitremote 3 hours ago [-]
> Reality lies in the middle
The argument to moderation/middle ground fallacy is a fallacy.
Not really. The idea that reality lies _in_ the middle is fairly coherent. It's not, on its face, absolutely true, but there are an infinite number of options between two outcomes, so the odds are overwhelmingly in favor of the truth lying somewhere in between. Is either side totally right about every single point of contention between them? Probably not, so the answer is likely in the middle. The fallacy is a lot easier to see when you're arguing about one precise point. In that case, one side is probably right and the other wrong. But in cases where a side is talking about a complex event with a multitude of data points, both extremes are likely not completely correct and the answer does, indeed, lie in between the extremes.
The fallacy is the claim that the truth lies _at_ the middle, not in the middle.
philistine 2 hours ago [-]
You're thinking in one dimension. Truth. Add another dimension, time, and now we're talking about reality.
Ultimately, if both sides have a true argument, the real issue is which will happen first in time. Will AI change the world before the whole circular investment vehicle implodes? Or after, like what happened with the dotcom boom?
gitremote 1 hours ago [-]
Flat-earthers: The earth is flat.
Round-earthers: The earth is round.
"Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.
parineum 1 hours ago [-]
"Round" does not mean spherical and both of these claims are falsifiable and mutually exclusive.
The AI situation doesn't have two mutually exclusive claims; it has two claims on opposite sides of economic and cultural impact that are differences of magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
suddenlybananas 2 hours ago [-]
>infinite number of options between two outcomes so the odds are overwhelmingly in the favor that the truth lies somewhere in between
This is totally fallacious.
parineum 1 hours ago [-]
It isn't.
"AI is a bubble" and "AI is going to replace all human jobs" is, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
mamonster 1 hours ago [-]
It is completely fallacious.
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, 0.5X + 0.5 Y is maybe true). Why?
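The hypersphere point above checks out numerically (a quick sketch of my own, nothing more): the fraction of a d-dimensional ball's volume lying within radius r of the center scales as r^d, so as d grows, almost all of the volume ends up in a thin shell near the surface, not "in the middle":

```python
# Fraction of a unit d-ball's volume inside radius 0.99 is 0.99**d,
# so the remainder sits in the outer 1% shell.
for d in (2, 100, 1000):
    shell = 1 - 0.99 ** d
    print(f"d={d}: {shell:.4f} of the volume lies in the outer 1% shell")
```

In 2 dimensions the outer shell holds about 2% of the volume; by d=1000 it holds essentially all of it.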
suddenlybananas 1 hours ago [-]
I agree there is a large continuum of possibilities, but that does not mean that something in the middle is more likely, that is the fallacious step in the reasoning.
afavour 2 hours ago [-]
> Depending on your POV OpenAI and the surrounding AI hype machine is at the extremes either the dawn of a new era
Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and the dawn of a new era.
They are funneling rich family-office money into the bank accounts of their personnel. Not bad, not bad.
nocoolnametom 46 minutes ago [-]
The fact that it is private equity that is going to evaporate when the bubble bursts is the only silver lining I can see. However, my natural cynicism makes me bet they'll spend whatever they've got left over on their pet politicians to use government (i.e., public funding) to bail themselves back out.
skeeter2020 2 hours ago [-]
>> figure out how to innovate on the financial model
Does it feel rather Orwellian that the original geeks now seem to be the same people who - far from claiming technological innovation as their own - completely discount it, so that apparently the important thing now is the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but about the funding announcements from the investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...
whimsicalism 1 hours ago [-]
this is all a function of the media reporting, the change in ‘nerd culture’ has been vastly overreported.
these companies are staffed by spectrum-y nerds that we are being desperately propagandized into thinking are actually frat ‘bros’.
uberdru 3 hours ago [-]
I was at a bitcoin conference in 2018. One guy in the booth told me that the company had set up a $100M fund to fund startups that agreed to build apps on their blockchain. I wonder where they are now?
ZiiS 3 hours ago [-]
As long as they kept another $100M in coins then fairly happy.
rw3 3 hours ago [-]
Hank Green did a vlog on this a few weeks ago and it's a great explainer.
This comment is pretty depressing but it seems to be the path we're headed to:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe in and you can say that video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
philistine 1 hours ago [-]
> We already have some people in pretty powerful positions doing this to manipulate their bases.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
dizzydes 2 hours ago [-]
I feel bad for the guy, but I think this confusion will be extraordinary and will drive people off the internet.
throwaway106382 3 hours ago [-]
We are heading to an apocalyptic level of psychosis where human beings won't even believe the things they see with their own eyes are real anymore because of being flooded with AI slop 24/7/365.
jimbokun 2 hours ago [-]
We desperately need a technological solution to be able to somehow "sign" images and videos as being real and not generated or manipulated by AI.
I have no idea how such a thing would work.
SoftTalker 2 hours ago [-]
It won't work, because most people do not understand what a digital signature is and they will just say that has been faked as well.
jimbokun 2 hours ago [-]
There was a discussion on here recently about a new camera that could prove images taken with it weren't AI fakes, and most of the comments were skeptical anyone would care about such things.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
I can understand how someone's approach can be "hack all the things", however, at some point you run into the fundamental boundaries of the box you are in and you can't hack your way around those.
jacquesm 3 hours ago [-]
That doesn't really matter: as long as there are idiots who will buy your inflated stock you've externalized the problem for yourself whilst staying within the box.
delis-thumbs-7e 3 hours ago [-]
Lazy Susan is not a hack - it’s a scam.
jwpapi 1 hours ago [-]
When I was 16 I started working at a startup buying and reselling used electronics.
There were like 5 competitors all trying to become the one that takes it all. Afaik, after 10 years some closed or restructured, but most of them burnt a lot of money. One (let's call him the indie dev) made a lot of money building a simple comparison platform and taking 10-20% on all deals.
This is n=1, but I think it still made me really averse to raising money.
sigbottle 55 minutes ago [-]
Weird angle, but isn't "believing there will be a crash" sort of framing it as if this were still normal market dynamics?
OpenAI, and AI in general, has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facets of society, especially government, that realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
1899-12-30 4 minutes ago [-]
A crash in the stock market doesn't necessarily mean a crash in the real market. Whether the AI bubble burst is dot-com style vs. a GFC-level debacle depends on how much critical financial infrastructure is at risk during the debt deleveraging. If you look at GDP growth during those two periods, the dot-com era was a mild stagnation compared to the GFC's actual GDP decline.
random9749832 1 hours ago [-]
A lose lose situation for most people. Either the stock market crashes or AI progress meets expectations in the coming years and people start losing jobs.
barbazoo 36 minutes ago [-]
So real estate it is after all?!
AndrewDucker 3 hours ago [-]
The most interesting thing here is that it's now reached the NY Times.
trollbridge 3 hours ago [-]
“I’m not hearing any music.”
crazygringo 3 hours ago [-]
This is such a strange article -- there's nothing particularly unusual going on here.
The first example basically stands in for all of them: Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) from Microsoft than they would by raising from other random investors and then buying the cloud compute retail from Microsoft.
This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.
Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.
delis-thumbs-7e 2 hours ago [-]
I remember the same argument being used before the 2008 crash.
Point is that all of these companies need to start making real profits, and pretty damn big ones, otherwise all of this will collapse. Problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that they will.
And who's gonna foot the bill when it falls? Let me guess… Where have I seen this before…?
matwood 13 minutes ago [-]
> Point is that all of this companies need to start making real profits and pretty damn big ones
MS, Meta, Google, Apple, Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case where they bounced from blowing excess cash on the metaverse and now to AI.
crazygringo 2 hours ago [-]
That's fine, but that's a separate conversation. Maybe this is a bubble, maybe it isn't.
My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.
So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".
delis-thumbs-7e 2 hours ago [-]
I don’t know the financial world well enough to judge, but can you give me examples from other companies or sectors where a company X funds a company Y with tens to hundreds of billions that company Y then uses to buy a service from company X?
Furthermore, yes, it might be business as usual, but so is fraud and god knows what else in this particular political era. To strengthen your argument you have to show not only that the phenomenon is common, but that it is good for the overall economy.
methodical 1 hours ago [-]
Circularly passing around tens to hundreds of billions of dollars for things which don't exist and may never exist, to fund a technology that hasn't A. lived up to the hype they've marketed or B. proven any strategy to break even, is fundamentally not that much different from the way Enron strategically boosted its revenue numbers by passing money between shell corporations that its CFO created.
The main difference of course being that these are actual companies as opposed to just entities intently designed to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine as compared with the fraudulent case of Enron, the net effect is still the same; these companies are posting crazy quarter over quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders are cashing out to the tunes of hundreds of millions of dollars.
I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).
Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.
philipwhiuk 3 hours ago [-]
> Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure.
This isn't deceiving any investors.
It's Microsoft increasing its revenue by selling its stock.
crazygringo 3 hours ago [-]
Microsoft isn't selling any stock. It's using its cash.
And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.
jpollock 3 hours ago [-]
The last time this hit the news, it was the dotcom bubble, and Nortel was in a similar position with startups, taking equity for equipment.
crazygringo 3 hours ago [-]
No, that's not the last time this hit the news. This happens literally all the time. Again, this is just business as usual. It's not specific to AI, it's not specific to tech, and it's nothing to do with bubbles.
gdulli 2 hours ago [-]
Sometimes additional context can take the same action that looks harmless in a vacuum and turn it into a bad idea or even a crime!
crazygringo 2 hours ago [-]
Then it would be great to have that context that shows criminality. Because that's an extraordinary claim you're suggesting, which is going to require actual evidence.
As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.
So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?
Eisenstein 3 hours ago [-]
The bubble part is that nvidia is getting revenue from people investing money in their hardware in order to sell something that has not yet been shown to be profitable. If it turns out no one can make enough money selling AI generated data to justify the costs spent on the compute needed to generate it at the current rate, then what nvidia are selling becomes much less valuable, and the whole thing collapses. We haven't figured out yet whether or not that will be the case.
crazygringo 3 hours ago [-]
But that has nothing to do with the arrangement of deals here.
If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.
The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.
bwfan123 2 hours ago [-]
These deals happen all the time. The case for a bubble is the following.
When Microsoft offers cloud-credits in exchange for openai equity, what it has effectively done is to purchase its own azure revenues. ie, a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening for all clouds right now wherein their revenues are inflated by uneconomic ai purchases. This is also happening for the gpu chip vendors as well, wherein they are offering cash or warrants to fund their own chip sales.
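To make that concrete, here's a toy sketch of the round trip (hypothetical figures, not the actual deal terms):

```python
# Illustrative only: a vendor funds its own revenue via an
# equity-for-credits deal. Figures are made up.
def round_trip(investment):
    """The vendor invests `investment` in a customer, and the customer
    spends all of it back on the vendor's cloud. Revenue gets booked,
    but the loop itself brings in no net cash."""
    revenue_booked = investment              # the customer's spend comes straight back
    net_cash_change = revenue_booked - investment
    return revenue_booked, net_cash_change

revenue, net_cash = round_trip(13e9)
print(revenue)   # 13000000000.0 booked as cloud revenue
print(net_cash)  # 0.0 -- no net cash generated by the loop
```

The vendor does receive equity in the customer, so the deal isn't value-free; the point is only that the booked revenue isn't independent demand.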
crazygringo 2 hours ago [-]
But nobody is falling for the "illusion of revenue growth". This is out in the open. This isn't a scam. Investors know this and are pricing accordingly. They see the revenue growth but also see the decrease in cash.
What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.
There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.
bwfan123 2 hours ago [-]
There will be increased transparency since microsoft will now have to report on the performance of its openai equity [1]. The concern is that while chatgpt is a great app, the economic benefits of the current investments are being questioned. There is starting to be skepticism of ai as the public starts to get jaded. This happens in all fads. That explains why the media is buzzing with articles like these which are becoming increasingly critical while earlier they were all aboard the ai-train.
The Luddites are out in full force today. The anti-tech sentiment is strong. Am I sure this isn’t Reddit?
schnitzelstoat 2 hours ago [-]
Yeah, Reddit has a strong anti-AI sentiment.
I'm not anti-AI, I'm just sceptical that it's as powerful as the AI companies are making out. I don't think we are anywhere near AGI, like centuries away.
I also don't think AI is going to be able to do all human jobs, in the physical world we have seen relatively little progress in robotics compared to the leaps made with transformers. And in the information world, while the LLMs can assist in many tasks and make workers more efficient I don't think they can entirely replace programmers (who are the expensive workers).
So yeah, I just don't think we are going to see the kind of world-changing benefits that OpenAI etc. are promising and which their valuations appear to be based upon.
jplusequalt 32 minutes ago [-]
"Luddite" is the new "DEI" for tech-bros.
brazukadev 2 hours ago [-]
have you not been here for long enough? HN crowd is treating genAI the same way it treats blockchain, nothing new.
barbazoo 32 minutes ago [-]
My team has shipped now heavily used features that are built with GenAI under the hood. I have a hard time not seeing the value in that technology.
Personally I haven’t seen blockchain make any impact whatsoever but maybe it’s just a little more niche or just a different one.
llbbdd 1 hours ago [-]
Worse, honestly. There was a strong case to be made that crypto was absent of a real problem to solve. Meanwhile people use GenAI for real work every day and a disturbing cut of HN has their ears plugged, insisting it's a bubble and that everyone is lying about it working.
throwaway106382 3 hours ago [-]
Speedrunning to "too big to fail". Turn on the infinite money printers and feed them directly into Sam Altman's bank account or the Chinese/Russians/Iranians/Boogeymen will destroy us all.
throwaway106382 3 hours ago [-]
Isn't paying a company to dig a hole who then pays you the same amount to fill said hole illegal?
baggachipz 3 hours ago [-]
In a fair and just system with appropriate oversight, yes. So in this instance, no.
baq 3 hours ago [-]
Even worse in VAT countries, where such carousels make you eligible for a VAT refund on technically zero added value.
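A stylized sketch of that carousel effect (hypothetical 20% rate; real missing-trader fraud also exploits cross-border zero-rating, which this omits):

```python
# Stylized VAT carousel: goods circle between two shells; the "missing
# trader" collects VAT and vanishes while the counterparty's refund
# claim is honored. Integer math, hypothetical rate.
VAT_RATE_PCT = 20

def carousel_loss(price, loops):
    """Tax revenue lost: VAT collected but never remitted,
    summed over each circuit of the same goods."""
    return price * VAT_RATE_PCT // 100 * loops

print(carousel_loss(1_000_000, 3))  # 600000 lost to the treasury over 3 loops
```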
danans 2 hours ago [-]
Only if you defraud investors in hole-digging corp and hole-filling corp by claiming that by doing this you will be able to extract Unobtanium, which will make both companies 1000x profitable.
throwaway106382 2 hours ago [-]
This is just starting to sound more and more like "we're almost at AGI I promise bro just need one more round of investment bro please just one trillion more dollars please bro".
chermi 2 hours ago [-]
Yes, but what does that have to do with this situation? The hole served no purpose. The companies are using the GPUs.
brazukadev 2 hours ago [-]
99% of code I generated using genAI served no purpose at the end of the day
chermi 2 hours ago [-]
Ok. Maybe use it better? Or don't use it at all. Doesn't mean it's not being used to some end, unlike a hole.
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are came from older models being used by people who don't know how to use LLMs. I'm not someone who thinks they're perfect or even great yet, but they're not dirt.
zetanor 3 hours ago [-]
Not if it increases the GDP.
throwaway106382 3 hours ago [-]
Well, I've got great news then: 92% of GDP growth in the first half of 2025 was hole-filling companies paying hole-digging companies to dig holes and paying them in kind to fill them up again.
what could possibly go wrong
TZubiri 3 hours ago [-]
Seems like a net loss due to transactional costs.
klustregrif 3 hours ago [-]
The increase in value of the companies outweighs the transactional costs and then you borrow against the value of the company and make new circular deals. It works really well for a very long time and then at some point it doesn’t. The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
automatic6131 2 hours ago [-]
> The trick of the game is to get big corps involved and key decision makers so that the government bails out everyone in the end.
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
Or we'll get more.
throwaway106382 3 hours ago [-]
They are banking on:
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
seems like it's like a mix of enron, subprime mortgages, and .com boom all in one
megaloblasto 2 hours ago [-]
Everyone loves to compare AI with the dot com bubble. My question is, were there any policies put in place after the dot com bubble to mitigate a similar crash? Or did we learn nothing?
anonymousiam 2 hours ago [-]
Sort of like Bitcoin...
9cb14c1ec0 3 hours ago [-]
Complex and circular deals lead to the downfall of Enron. Just saying...
adaisadais 3 hours ago [-]
I’ve been listening to “The Smartest Guys In The Room” (the definitive book on Enron and their scandal), and one of the ways Enron continued to grow and grow is by setting up a really complicated system of special-purpose entities to move debt off its balance sheet.
While it was sorta legal (at the time), it was not ethical, and it led to the massive collapse of one of the largest companies in America at the time.
Makes you wonder if AI is in such a bubble. (It is).
sergiotapia 3 hours ago [-]
When the AI bubble pops, what will happen to the software engineering jobs?
afavour 3 hours ago [-]
There will be a bunch of layoffs and slowly they'll rehire back to pre-hysteria levels. I think the world is still going to need software engineers no matter what but companies will slow down on new features etc in an economic crunch.
forgetfulness 2 hours ago [-]
The ripple effect will be felt hard, as American engineers are squeezed between offshoring and more engineers with Big Tech resumes being released into the market, and returnees push down wages in their home countries in turn.
miltonlost 3 hours ago [-]
They'll have to come in and redo all the work that people put onto LLMs as actual engineering software. The number of features I've worked on that could have been done with normal computing practices but instead shoehorned in bad AI to make decisions/routing logic is too high.
kakacik 3 hours ago [-]
If it pops, some ai engineers will need to start doing some normal work again, and rest of us... we just continue doing what we were doing for past decades.
Or maybe not, nobody knows the future any more than the next guy in line.
brazukadev 2 hours ago [-]
free AI credits will be a thing of the past, "productivity" (real or not) will dive and real software engineering will become a moat again.
guluarte 59 minutes ago [-]
this seems like a fake circular economy: MS invests in OpenAI, which spends the money on Azure; Amazon invests in Anthropic, which pays AWS for hardware and infra; Nvidia invests in OpenAI, which uses the money to buy Nvidia hardware; etc.
JohnMakin 2 hours ago [-]
"Circular deals" feels like an awfully cute way to say "fraud"
jmyeet 2 hours ago [-]
Many here now didn't live through the dot-com bubble as an adult so can't really appreciate what it was like. The hype was something hard to describe. Financial analysts and journalists struggled to come up with ways to describe the health of these "companies". My favorite was what revenue multiple companies would trade at.
But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.
OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.
Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.
OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.
If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.
I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around where NVidia comes out on top with a bunch of equity in these companies should they become trillion dollar companies.
Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.
coldfireza 2 hours ago [-]
it's a bubble
coldfireza 2 hours ago [-]
It's a bubble
ForHackernews 3 hours ago [-]
"You give me a million GPUs for free, I'll announce that you have sacrificed a million GPUs to the machine gods, and your stock price will spike 200 times the value of those GPUs."
righthand 3 hours ago [-]
I honestly don’t get it. People love being swindled? Or people have enough cash to throw into the swindling machine even for no gain? Must be nice.
ak_111 3 hours ago [-]
You can make a lot of money in swindles and bubbles if you time your exit well. There were a fair few opportunistic investors who did well in the NFT craze, speculating while knowing full well that NFTs were a craze that would go to zero.
jacquesm 3 hours ago [-]
The Greater Fool theory of investing strikes again.
itsnowandnever 3 hours ago [-]
everything will eventually go to zero. we look at some of these things and laugh because we're pretty sure they're going to go to zero within weeks or months vs years. but by the end of all of our lifetimes, most of the companies on the stock market will be replaced. the few that won't are probably investment banks like goldman sachs
itsnowandnever 3 hours ago [-]
these deals are made as part of a market so it's more like musical chairs where every time you change a chair you get a ton of money but you don't want to be the one that's stuck without a chair at the end
ceejayoz 3 hours ago [-]
They've all realized the guy without the chair can be the taxpayer.
KaiserPro 3 hours ago [-]
Modern finance is all about debt.
Central banks don't print money[1] but commercial banks do. Think about it like this: someone deposits $100. The bank pays interest, and to make the money to pay that interest, ~$90 is loaned out to someone.
Now, I still have a bank slip that says $100 in the account, and the bank has given $90 of that to someone else. We now have $190 in the economy! The catch is, that money needs to be paid back, so when people need to call in that cash, suddenly the economy only has $10, because the loan needed to be paid back, causing a cash vacuum.
But that paying back is also where the profit is, because you sell off the loan book, and you can get all your money back, including future interest. So you have lent out $90, sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30
That $30 comes pretty much from nowhere. (there are caveats....)
Now we have my bank account, after say a year, with $104 in it; the bank has $26 pure profit, AND someone has a bond "worth" $90 which pays $8 a year. But guess what, that bond is also a store of value. So even though it's debt, it acts as money/value/whatever.
Now, the numbers are made up, so are the percentages. but the broad thrust is there.
[1] they do
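The loop above is the textbook money multiplier; with a 10% reserve ratio it compounds like this (a toy sketch with made-up numbers, as the parent says, not a model of real banking):

```python
# Deposit -> lend -> redeposit loop with a 10% reserve ratio.
def broad_money(initial_deposit, reserve_ratio, rounds):
    """Total deposits outstanding after `rounds` of re-lending."""
    total, deposit = 0.0, float(initial_deposit)
    for _ in range(rounds):
        total += deposit
        deposit *= (1 - reserve_ratio)   # the rest is lent out and redeposited
    return total

print(broad_money(100, 0.10, 2))    # 190.0 -- the parent's $100 + $90
print(broad_money(100, 0.10, 100))  # approaches the 1/r limit of 1000
```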
In that sense it makes sense to keep spending billions even if model development is nearing diminishing returns: it forces the competition to do the same, and in that game victory belongs to the guy with deeper pockets.
Investors know that, too. A lot of the startup business is a popularity contest: number one is more attractive for the sheer fact of being number one. If you're a very rational investor and don't believe in the product, you still have to play this game because others are playing it, making it true. The vortex will not stop unless limited partners start pushing back.
What _could_ prevent this from happening is the lack of available data today - everybody and their dog is trying to keep crawlers off, or make sure their data is no longer "safe"/"easy" to be used to train with.
The new OpenAI browser integration would be an example. Mostly the same model, but with a whole new channel of potential customers and lock in.
Even if the model training part becomes less worthwhile, you can still use the data centers for serving API calls from customers.
The models are already useful for many applications, and they are being integrated into more business and consumer products every day.
Adoption is what will turn the flywheel into a rocket.
I'm commenting here in case a large crash occurs, to have a nice relic of the zeitgeist of the time.
There are physical products involved, but the situation otherwise feels very similar to ads prior to dotcom.
That's capital markets working as intended. It's not necessarily doomed to end in a fiery crash, although corrections along the way are a natural part of the process.
It seems very bubbly to me, but not dotcom level bubbly. Not yet anyway. Maybe we're in 1998 right now.
Capital markets weren't intended for round-trip schemes. If a company on paper hands $100B to another company, which gives it back to the first company, that money never existed, and that is capital markets being defrauded rather than working as expected.
I don't tend to benefit from my predictions as things always take longer to unfold than I think they will, but I'm beyond bearish at present. I'd rather play blackjack.
I’ve made that mistake already.
I’m nervous about the economic data and the sky high valuations, but I’ll invest with the trend until the trend changes.
Not? Money is thrown at people without anyone really looking at the details, everyone just trying to get in on the hype train? That's exactly what the dotcom bubble felt like.
Nowhere near that level. There’s real demand and real revenue this time.
It won’t grow as fast as investors expect, which makes it a bubble if I’m right about that. But not comparable to the dotcom bubble. Not yet anyway.
PE ratios of 50 make no sense, there is no justification for such a ratio. At best we can ignore the ratio and say PE ratios are only useful in certain situations and this isn't one of them.
Imagine if we applied similar logic to other potential concerns. Is a genocide of 500,000 people okay because others have done drastically more?
If you have a better measure, share it. I trust data more than your or my feelings on the matter.
Things are worth what people are willing to pay for them. And that can change over time.
Sentiment matters more than fundamental value in the short term.
Long term, on a timescale of a decade or more, it’s different.
The thing is: you've paid nothing - all you did was trade pets and played an accounting trick to make them seem more valuable than they are.
Ugh I hate it so much, but you're right, it's coming.
It is at the very least highly debatable how much their core technology is improving from generation to generation despite the ballooning costs.
https://time.com/archive/6931645/how-the-once-luminous-lucen...
The customers bought real equipment that was claimed to be required for the "exponential growth" of the Internet. It is very much like building data centers.
If they don't then they're spending a ton of money to level up models and tech now, but others will eventually catch up and their margins will vanish.
This will be true if (as I believe) AI will plateau as we run out of training data. As this happens, CPU process improvements and increased competition in the AI chip / GPU space will make it progressively cheaper to train and run large models. Eventually the cost of making models equivalent in power to OpenAI's models drops geometrically to the point that many organizations can do it... maybe even eventually groups of individuals with crowdfunding.
OpenAI's current big spending is helping bootstrap this by creating huge demand for silicon, and that is deflationary in terms of the cost of compute. The more money gets dumped into making faster cheaper AI chips the cheaper it gets for someone else to train GPT-5+ competitors.
The question is whether there is a network effect moat similar to the strong network effect moats around OSes, social media, and platforms. I'm not convinced this will be the case with AI because AI is good at dealing with imprecision. Switching out OpenAI for Anthropic or Mistral or Google or an open model hosted on commodity cloud is potentially quite easy because you can just prompt the other model to behave the same way... assuming it's similar in power.
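If training costs keep dropping geometrically as described above, say halving each year per the claim upthread, the catch-up arithmetic is quick (hypothetical figures):

```python
# Years of annual cost halving before a frontier-scale training run
# fits a given budget. Starting cost and budget are hypothetical.
def years_until_affordable(cost_now, budget):
    years, cost = 0, float(cost_now)
    while cost > budget:
        cost /= 2        # cost halves once per year
        years += 1
    return years

print(years_until_affordable(1_000_000_000, 10_000_000))  # 7
```

On those assumptions, a $1B frontier run fits a $10M budget in seven years, which is why a durable moat has to come from somewhere other than raw training cost.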
Why would they run out of training data? They needed external data to bootstrap, now it's going directly to them through chatgpt or codex.
I’m thinking they eventually figure out who is the source of good data for a given domain, maybe.
Even if that is solved, models are terrible at long tail.
Or not - there's still knowledge in people's heads that is not bleeding into AI chat.
One implication here is that chats will morph to elicit more conversation to keep mining that mine. Which may lead to the need to enrage users to keep engagement.
Even if that weren't true having your software be cheaper to run is not a bad thing. It makes the software more valuable in the long run.
This is a pricey machine though. But 5-10 years from now I can imagine a mid-range machine running 200-400B models at a usable speed.
That's only like 1/8th of the flywheel, though.
The other difference (besides Sam's deal making ability) is, willing investors: Nvidia's stock rally leaves it with a LOT of room to fund big bets right now. While in Oracle's case, they probably see GenAI as a way to go big in the Enterprise Cloud business.
And then what happens if the stock collapses?
I wonder how they felt during the .com era.
This boom is a data center boom with AI as the software layer/driver. It potentially has a lot longer to run, even though everyone is freaking out now. If you believe AI is rebuilding compute, then it changes our compute paradigm going forward - as long as we don't get an over-leveraged build-out without revenue coming in the door. I think we are seeing a lot of revenue come in for certain applications.
The companies that are all smoke and mirrors built on chatGPT with little defensibility are probably the same as the ones you are referring to in the current era. Or the AI tooling companies.
To be clear circular deal flow is not a good look.
I can see both the bull and bear sides at this moment.
2020: https://www.youtube.com/watch?v=rpiZ0DkHeGE 2019: https://www.cadtm.org/spip.php?page=imprimer&id_article=1732...
The practice was known as “zaitech”
> zaitech - financial engineering
> In 1984, Japan’s Ministry of Finance permitted companies to operate special accounts for their shareholdings, known as tokkin accounts. These accounts allowed companies to trade securities without paying capital gains tax on their profits.
> At the same time, Japanese companies were allowed to access the Eurobond market in London. Companies issued warrant bonds, a combination of traditional corporate bonds with an option (the “warrant”) to purchase shares in the company at a specified price before expiry. Since Japanese shares were rising, the warrants became more valuable, allowing companies to issue bonds with low-interest payments.
> The companies, in turn, placed the money they raised into their tokkin accounts that invested in the stock market. Note the circularity: companies raised money by selling warrants that relied on increasing stock prices, which was used to buy more shares, thus increasing their gains from investing in the stock market.
https://www.capitalmind.in/insights/lost-decades-japan-1980s...
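The circularity in that description can be made concrete with a toy simulation. Every parameter below is invented for illustration; this is not a model of actual 1980s Japanese markets:

```python
# Toy model of the zaitech loop described above. Firms sell warrants whose
# proceeds scale with the current stock price, then reinvest those proceeds
# in the same market, nudging the price up again.
def simulate(rounds: int, price: float = 100.0,
             raise_ratio: float = 0.1, impact: float = 0.02) -> float:
    """Return the stock price after `rounds` of the raise-and-reinvest loop."""
    for _ in range(rounds):
        proceeds = raise_ratio * price  # warrants are worth more as price rises
        price += impact * proceeds      # buying pressure lifts the price further
    return price
```

The loop compounds because each round's price gain funds the next round's buying, and it only works while prices keep rising; a flat market starves the warrant channel and the whole mechanism unwinds.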
- Nvidia has too much cash because of massive profits and has nowhere to reinvest them internally.
- Nvidia instead invests in other companies that use their gpus by providing them deals that must be spent on nvidia products.
- This accelerates the growth of these companies, drives further lock in to nvidia's platform, and gives nvidia an equity stake in these companies.
- Since growth for these companies is accelerated, future revenue will be brought forward for nvidia and since these investments must be spent on nvidia gpus it drives further lock in to their platform.
- Nvidia also benefits from growth due to the equity they own.
This is all dependent on token economics being or becoming profitable. Everything seems to indicate that once the models are trained, they are extremely profitable and that training is the big money drain. If these models become massively profitable (or at least break even) then I don't see how this doesn't benefit Nvidia massively.
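A back-of-envelope version of that claim, with entirely made-up numbers: if the inference margin per token is positive, the question is simply how many tokens it takes to pay back the training run.

```python
# Millions of tokens needed to recoup a training run, given per-million-token
# price and serving cost. All figures below are hypothetical.
def breakeven_mtok(training_cost: float, price_per_mtok: float,
                   serve_cost_per_mtok: float) -> float:
    margin = price_per_mtok - serve_cost_per_mtok
    if margin <= 0:
        raise ValueError("no positive margin: inference never repays training")
    return training_cost / margin

# Hypothetical: $100M run, $10 charged per 1M tokens, $2 to serve them.
mtok = breakeven_mtok(100e6, 10.0, 2.0)
print(f"{mtok / 1e6:,.1f} trillion tokens to break even")  # 12.5 trillion
```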
These guys are running hyper-optimized cash-extraction mega-machines. There is no comparison to previous bubbles, because no such companies ever existed in the past.
The question is where the profits are.
https://www.macrotrends.net/stocks/charts/MSFT/microsoft/ebi...
https://www.macrotrends.net/stocks/charts/AMZN/amazon/ebitda
Microsoft - 14,000 (multiple rounds); significant
Meta - 600 layoffs; insignificant for a company of that size
Google - "Several hundred" layoffs; insignificant for a company of that size
Apple - No layoffs
Source: https://techcrunch.com/2025/10/24/tech-layoffs-2025-list/
Some data would reinforce your case. Do you have it?
Here is my data point: "You Have No Idea How Screwed OpenAI Actually Is" - https://wlockett.medium.com/you-have-no-idea-how-screwed-ope...
The reason I wonder about that is because that also seems to be the dynamic with all these deals and valuations. Surely if OpenAI would pay $30 billion on data centers, they could pay $40 billion, right? I'm not exactly sure where the price escalations actually top out.
https://www.theregister.com/2025/10/29/microsoft_earnings_q1...
Microsoft seemingly just revealed that OpenAI lost $11.5B last quarter
This money is well beyond VC capability.
Either this lets them build to net positive without dying from painful financing terms, or they explode spectacularly. Given their rate of adoption, it seems to be the former.
I don't know how that 11.5B number was derived, but I would wager that the net loss on income statement is a lot lower than the net negative cash flow on cash flow statement.
If that 11.5B is net profit/loss, then whatever the portion of the expense part of the calculation that's software R&D could be 5x larger if it weren't for the new amortization rule.
I do think this is going to be a deeply profitable industry, but this feels a little like the WeWork CEO flying couches to offices in private jets
Rivian stock is down 90%, and I fairly regularly read financial news about it having bad earnings, stock going even lower, worst-in-industry reliability, etc etc.
I don't know why you don't hear about it, but it might be because it's already looking dead in the water so there's no additional news juice to squeeze out of it.
I found there was more than just couches on the WeWork private jets:
https://www.inverse.com/input/tech/weworks-adam-neumann-got-...
We had an impressive new technology (the Web), and everyone could see it was going to change the world, which fueled a huge gold rush that turned into a speculative bubble. And yes, ultimately the Web did change the world and a lot of people made a lot of money off of it. But that largely happened later, after the bubble burst, and in ways that people didn't quite anticipate. Many of the companies people were making big bets on at the time are now fertile fodder for YouTube video essays on spectacular corporate failures, and many of the ones that are dominant now were either non-existent or had very little mindshare back in the late '90s.
For example, the same year the .com bubble burst, Google was a small new startup that failed to sell their search engine to Excite, one of the major Web portal sites at the time. Excite turned them down because they thought $750,000 was too high a price. 2 years later, after the dust had started to settle, Excite was bankrupt and Google was Google.
And things today sure do strike me as being very similar to things 25, 30 years ago. We've got an exciting new technology, we've got lots of hype and exuberant investment, we've got one side saying we're in a speculative bubble, and the other side saying no this technology is the real deal. And neither side really wants to listen to the more sober voices pointing out that both these things have been true at the same time many times in the past, so maybe it's possible for them to both be true at the same time in the present, too. And, as always, the people who are most confident in their ability to predict the future ultimately prove to be no more clairvoyant than the rest of us.
Um I think nobody is really denying that we are in a bubble. It's normal for new tech and the hype around it. Eventually the bad apples are weeded out and some things survive, others die out.
The first disagreement is how big the bubble is, i.e. how much air is in it that could vanish. And that's because of the second disagreement, which is about how useful this tech is and how much potential it has. It's clear that it has some undeniable usefulness. But some people think we'll soon have AGI replacing everybody and the opposite is that's all useless crap beyond a few niche applications. Most people fall somewhere in between, with a somewhat bimodal split between optimists and skeptics. But nobody really contends that it's a bubble.
For Microsoft, and the other hyperscalers supporting OpenAI, they're all absolutely dependent on OpenAI's success. They can realistically survive through the difficult times, if the bubble bursts because of a minor player - for example if Coreweave or Mistral shuts down. But if the bubble bursts because the most visible symbol of AI's future collapses, the value-destruction for Microsoft's shareholders will be 100x larger than OpenAI's quarterly losses. The question for Microsoft is literally as fundamental as "do we want to wipe $1tn off our market cap, or eat $11bn losses per quarter for a few years?" and the answer is pretty straightforward.
Altman has played an absolute blinder by making the success of his company a near-existential issue for several of the largest companies to have ever existed.
Yeah true, the whole pivot from non-profit to Too Big to Fail is pretty amazing tbh.
Monopoly in this field is impossible; your product won't ever be so good that competition doesn't make sense.
Add to this that AGI is impossible with LLMs...
Fascinating! I unearthed the TL;DR for anyone else interested:
* WeWork purchased a $60 million Gulfstream G650ER private jet for Neumann's use.
* The G650ER was customized with two bedrooms and a conference table.
* Neumann used the jet extensively for global travel, meetings, and family trips.
* The jet was also used to transport items like a "sizable chunk" of marijuana in a cereal box, which might be worse and more negligent than couches.
Sources:
https://www.vanityfair.com/hollywood/2022/03/adam-neumann-re...
https://nypost.com/2021/07/17/the-shocking-ways-weworks-ex-c...
ChatGPT was mind blowing when you first used it. WeWork is a real estate play fronted by a self aggrandizing self dealing CEO.
1. Performance of AI tools is improving, but only marginally so in practice.
2. If human labor were replaced, it would be the start of global societal collapse, so any winnings would be moot.
They want to be the Google in this scenario.
The model can be free, but the infrastructure (data center) ain't.
On paper, whoever gets there first, along with the needed compute to hand over to the AI, wins the race.
The AI, having theoretically the capacity to do anything better than everyone else, will not need support (in resources or otherwise) from any other business, except perhaps once to kickstart its exponential growth. If it's guarded, every other company becomes instantly worthless in the long term; if not, anyone with a bootstrap level of compute will also be able to do anything, on a long enough time frame.
It's not a race for ROI, it's to have your name go in the book as one of the guys that first obsoleted the relationship between effort, willpower, intelligence, etc. and the ability to bring arbitrary change to the world.
There’s no guarantee that the singularity makes economic sense for humans.
Are we /confident/ a machine god with `curl` can't gain its own resilient foothold on the world?
Practically, LLMs train on data. Any output of an LLM is a derivative of the training data and can't teach it anything new.
Conceptually, if a stupid AI can build a smart AI, it would mean that the stupid AI is actually smart, otherwise it wouldn't have been able to.
Currently, the trend is not whether one technology will outpace the other in the "AI" hype-cycle ( https://en.wikipedia.org/wiki/Gartner_hype_cycle ), but it does create perceived asymmetry with skilled-labor pools. That alone is valuable leverage to a corporation, and people are getting fired or ripped off anticipating the rise of real "AI".
https://www.youtube.com/watch?v=_zfN9wnPvU0
One day real "AI" may exist, but an LLM or current reasoning model is unlikely to make that happen. It is absolutely hilarious that there is a cult-like devotion to the AstroTurf marketing.
The question is never whether this is right or wrong... but simply how one may personally capture revenue before the Trough of disillusionment. =3
This would be a terrifyingly dystopian outcome. Whoever owns this super intelligence is not going to use it for the good of humanity, they're going to use it for personal enrichment. Sam Altman says OpenAI will cure cancer, but in practice they're rolling out porn. There's more immediate profit to be made from preying on loneliness and delusion than there is from empowering everyone. If you doubt the other CEOs would do the same, just look at them kissing the ass of America's wannabe dictator in the White House.
Another possible outcome is that no single model or company wins the AI race. Consumers will choose the AI models that best suit their varying needs, and suppliers will compete on pricing and capability in a competitive free market. In this future, the winners will be companies and individuals who make best use of AI to provide value. This wouldn't justify the valuations of the largest AI companies, and it's absolutely not the future that they want.
I agree this is a reasonable bet, though for a different reason: I believe this is large-scale exploitation where money is systematically siphoned away from workers and into billionaires via e.g. hedge funds, bailouts, dividend payouts, underpay, wage theft, etc. And the more they blow out this bubble, the more money they can exploit from workers. As such it is not really a bet, but rather the cost of business. Profits are guaranteed as long as workers are willing to work for them.
There are some interesting parallels here with the business model described in the book Confessions of an Economic Hitman. Developing countries take out huge loans from US lenders to build an electric grid, based on inflated forecasts from US consultancies they hired. The countries take on the debt, but the money mostly bypasses them and lands in the pockets of US engineering firms doing the construction, and government insiders taking kickbacks for greasing the wheels.
When the forecasted growth in industrial production fails to materialize, the countries are unable to repay the debt and have no option but to offer the US access to their resources, ports and votes in the UN.
What happens when OpenAI's forecasts of gargantuan growth fail to materialize and they're unable to sell more stock to pay off lenders? Does Uncle Sam step in with a bailout for "national security" reasons?
I can spin up a strong ML team through hiring in probably 6-12 months with the right funding. Building a chip fab and getting it to a sensible yield would take 3-5 years, significantly more funding, strong supply lines, etc.
Not sure what to call this except "HN hubris" or something.
There are hundreds of companies who thought (and still think) the exact same thing, and even after 24 months or more of "the right funding" they still haven't delivered the results.
I think you're misunderstanding how difficult all of this is, if you think it's merely a money problem. Otherwise we'd see SOTA models from new groups every month, which we obviously aren't, we have a few big labs iteratively progressing SOTA, with some upstarts appearing sometimes (DeepSeek, Kimi et al) but it isn't as easy as you're trying to make it out to be.
As you mentioned, multiple no-name Chinese companies have done it and published many of their results. There is a commodity recipe for dense transformer training. The difference between Chinese and US companies is that they have fewer data restrictions.
I think people overindex on the Meta example. It’s hard to fully understand why Meta/llama have failed as hard as they have - but they are an outlier case. Microsoft AI only just started their efforts in earnest and are already beating Meta shockingly.
We do.
It's just that startups don't go after the frontier models but niche spaces which are under served and can be explored with a few million in hardware.
Just like how open AI made gpt2 before they made gpt3.
> It's just that startups don't go after the frontier models but niche spaces
But both of "New SOTA models every month" and "Startups don't go for SOTA" cannot be true at the same time. Either we get new SOTA models from new groups every month (not true today at least) or we don't, maybe because the labs are focusing on non-SOTA instead.
If I had to guess, OAI and others pay top dollar for talent that has a higher probability of discovering the next "attention" mechanism, and investors are betting this is coming soon (hence the huge capitalizations and willingness to live with $11B losses/quarter). If they lose patience in throwing money at the problem, I see only a few players remaining in the race, because they have other revenue streams.
Build a chip fab? I’ve got no idea where to start, or where to even find people to hire, and I know the equipment we’d need to acquire would also be quite difficult to get at any price.
Mark Zuckerberg would like a word with you
But why such an unfair comparison?
Instead of comparing "skilled people with hardware VS skilled people without hardware", why not compare it to "a bunch of world-class ML folks" without any computers to do the work, how could they produce world-class work then?
- For the ML team, you need money. Money to pay them and money to get access to GPUs. You might buy the GPUs and make your own server farm (which also takes time) or you might just burn all that money with AWS and use their GPUs. You can trade off money vs. time.
- For the chip design team, you need money and time. There's no workaround for the time aspect of it. You can't spend more money and get a fab quicker.
Even if you do those things though, it doesn't guarantee success or you'll be able to train something bigger. For that you need knowledge, hard work and expertise, regardless of how much money you have. It's not a problem you can solve by throwing money at it, although many are trying. You can increase the chances of hopefully discovering something novel that helps you build something SOTA, but as current history tells us, it isn't as easy as "ML Team + Money == SOTA model in a few months".
The "magic of AI" doesn't live inside an Nvidia GPU. There are billions of dollars of marketing being deployed to convince you it does. As soon as the market realizes that nvidia != magic AI box, the music should stop pretty quickly.
There are some important innovations on the algorithm / network structure side, but all these ideas are only able to be tried because the hardware supports it. This stuff has been around for decades.
AI models do not. Sure you can't just copy the exact floating point values without permission. But with enough capital you can train a model just as good, as the training and inference techniques are well known.
You're not alone in believing just money can train a good model, and I've already answered elsewhere why things aren't so easy as you believe, but besides this, where are y'all getting that from? Is there some popular social media influencer that keeps parroting this or where it comes from? Clearly you're not involved in those processes/workflows yourself, then you wouldn't claim it's just a money problem, so where are you all getting this from?
The rights on masks for chips and their parts (IPs) belong to companies.
And one definitely does not want these masks to be sold during a bankruptcy process to (arbitrary) higher bidders.
The .com bubble didn't stop the internet or e-commerce; they still won, revolutionized everything, etc. Just because there's a bubble doesn't mean AI won't be successful. It will be, almost for sure. We've all used it; it's truly useful and transformative. Let's not miss the forest for the trees.
He also has a podcast called Better Offline, which is slightly too ad heavy for my taste. Nevertheless, with my meagre understanding of the large corporate finances I was not able to find any errors in his core argument regardless of his somewhat sensationalist style of writing.
https://bsky.app/profile/notalawyer.bsky.social/post/3ltkami...
Depending on your POV, OpenAI and the surrounding AI hype machine is, at the extremes, either the dawn of a new era or a metastasized financial cancer that’s going to implode the economy. Reality lies in the middle, and nobody really knows how the story is going to end.
In my personal opinion, “financial innovation” (see: the weird opaque deals funding the frantic data center construction) and bullshit like these circular deals driving speculation is a story we’ve seen time and time again, and it generally ends the same way.
An organization that I’m familiar with is betting on the latter - putting off a $200M data center replacement, figuring they’ll acquire one or two in 2-3 years for $0.20 on the dollar when the PE/private debt market implodes.
The argument to moderation/middle ground fallacy is a fallacy.
https://en.wikipedia.org/wiki/Argument_to_moderation
The fallacy is claiming that the truth lies _at_ the middle, not that it lies somewhere in the middle.
Ultimately, if both sides have a true argument, the real issue is which will happen first in time? Will AI change the world before the whole circular investment vehicle implode? Or after, like happened with the dotcom boom?
Round-earthers: The earth is round.
"Reality lies in the middle" argument: The earth is oblong, not a perfect sphere, so both sides were right.
The AI situation doesn't have two mutually exclusive claims; it has two claims at opposite ends of economic and cultural impact that differ in magnitude and direction.
AI can both be a bubble and revolutionary, just like the internet.
This is totally fallacious.
"AI is a bubble" and "AI is going to replace all human jobs" is, essentially, the two extremes I'm seeing. AI replacing some jobs (even if partially) and the bubble-ness of the boom are both things that exist on a line between two points. Both can be partially true and exist anywhere on the line between true and false.
No jobs replaced<-------------------------------------->All jobs replaced
Bubble crashes the economy and we all end up dead in a ditch from famine<---------------------------------------->We all end up super rich in the post scarcity economy
For one, in higher dimensions, most of the volume of a hypersphere is concentrated near the border.
Secondly, and it is somewhat related, you are implicitly assuming some sort of convexity argument (X is maybe true, Y is maybe true, 0.5X + 0.5 Y is maybe true). Why?
Eh, in a way they're not mutually exclusive. Look back at the dot com crash: it was all about things like online shopping, which we absolutely take for granted and use every day in 2025. Same for the video game crash in the 80s. They are both an overhyped bubble and the dawn of a new era.
Does it feel rather Orwellian that the original geeks now seem to be the same people who - forget about claiming technological innovation as their own - completely discount it, and that apparently the important thing is now the creativity in funding an enterprise? We don't hear about the breakthroughs from the technologists, but the funding announcements from the investors and CEOs. It's not about the benefits of the technology, but how they're going to pay for it. Seems like a wildly perverse version of wag the dog...
these companies are staffed by spectrum-y nerds that we are being desperately propagandized into thinking are actually frat ‘bros’.
This comment is pretty depressing, but it seems to be the path we're headed down:
> It's bad enough that people think fake videos are real, but they also now think real videos are fake. My channel is all wildlife that I filmed myself in my own yard, and I've had people leaving comments that it's AI, because the lighting is too pretty or the bird is too cute. The real world is pretty and cute all the time, guys! That's why I'm filming it!
Combine this with selecting only what you want to believe in and you can say that video/image that goes against your "facts" is "fake AI". We already have some people in pretty powerful positions doing this to manipulate their bases.
You don't have to be vague. Let's be specific. The President of the United States implied a very real voiceover of President Reagan was AI. Reagan was talking about the fallacy of tariffs as engines of economic growth, and it was used in an ad by the government of Ontario to sow divide within Republicans. It worked, and the President was nakedly mad at being told by daddy Reagan.
I have no idea how such a thing would work.
This is an example of how people viscerally hate anyone passing off AI generated images and video as real.
There were like 5 competitors all trying to become the winner who takes it all. Afaik, after 10 years some closed or restructured, but most of them burnt a lot of money. One guy, let's call him an indie dev, made a lot of money building a simple comparison platform and taking 10-20% on all deals.
This is n=1, but I think it still made me really averse to raising money.
OpenAI, and AI in general, has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facets of society, especially government, that, like, realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
The first example basically stands in for all of them -- Microsoft invests $13B in OpenAI, and OpenAI spends $13B on Azure. This is literally just OpenAI purchasing Microsoft cloud usage with OpenAI's stock rather than its cash. There is nothing unusual, illicit, or deceptive about this. This is entirely normal. You can finance your spending through debt or equity. They're financing through equity, as most startups do, and they presumably get a better deal (better rates, more guaranteed access) via Microsoft than via other random investors and then buying the cloud compute retail from Microsoft.
This isn't deceiving any investors. This is all out in the open. And it's entirely normal business practice. Nothing of this is an indicator of a bubble or anything.
Or take the deal with Oracle -- Oracle is building data centers for OpenAI, with the guarantee that OpenAI will use them. That's just... a regular business deal. What is even newsworthy about this? NYT thinks these are "circular" deals, but by this logic every deal is a "circular" deal, because both sides benefit. This is just... normal capitalism.
The point is that all of these companies need to start making real profits, and pretty damn big ones, otherwise all of this will collapse. The problem is that unless Altman has some super-intelligent super-AI hidden in his closet, it is very unlikely that they will.
And who's gonna foot the bill when it falls? Let me guess… Where have I seen this before…?
MS, Meta, Google, Apple, Nvidia make enormous profits. I think part of this AI push we're seeing is that all of these companies have so much money they don't know how to spend it all. Meta is a great case where they bounced from blowing excess cash on the metaverse and now to AI.
My point is that the way it's all being financed is just regular financing. This article is trying to present the way it's being funded as novel, as "complex and circular", when it's not. This is how funding and investment works 365 days a year in all sectors. Nothing about the funding arrangements is a bubble indicator.
So this is a strange article from the NYT, because it's trying to present normal everyday financing deals as uniquely "complex and circular".
Furthermore, yes, it might be business as usual, but so is fraud and god knows what else in this particular political era. To strengthen your argument you have to show not only that the phenomenon is common, but that it is good for the overall economy.
The main difference of course being that these are actual companies as opposed to just entities intently designed to inflate the apparent financials. While it seems like that difference means this situation is perfectly fine as compared with the fraudulent case of Enron, the net effect is still the same; these companies are posting crazy quarter over quarter revenue growth, sending their stock prices to crazy highs and P/E multiples, while the insiders are cashing out to the tunes of hundreds of millions of dollars.
I don't really see how exactly you're trying to make the argument that it may or may not be a bubble, it objectively meets the definition of a bubble in the traditional economic sense (when an asset's market price surges significantly above its intrinsic value, driven by speculative behavior rather than fundamental factors). These companies are massively overvalued on the speculative value of AI, despite AI having not yet shown much economic viability for actual profit (not just revenue).
Worse yet, it's not just one company with inflated numbers, it's pretty much the entire top end of the market. To compare it to the dot com bubble wouldn't be a stretch, it'd basically be apples to apples as far as I see it.
This isn't deceiving any investors.
It's Microsoft increasing its revenue by selling its stock.
And an increase in revenue isn't the point. Microsoft isn't doing this to try to bump its short-term stock price or anything -- investors know where revenue is coming from. Microsoft is doing it because it thinks OpenAI is a good investment and wants to make money with that investment and have greater control.
As for "bad ideas", businesses make tons of decisions every day that turn out to be good or bad in hindsight. So again, more specifics are needed here.
So what exactly are you suggesting? What context do you think the NYT chose to omit, and why would they omit it if it was meaningful?
If it's a bubble, then it will pop. If it's not a bubble, then all these investments will turn out to be great. But that's a different question.
The point is, all these deals happen all the time. They're not some kind of sign of a bubble. They happen just as much in non-bubbles. They're just capitalism working as usual.
When Microsoft offers cloud credits in exchange for OpenAI equity, what it has effectively done is purchase its own Azure revenues, i.e., a company uses its own cash to purchase its own revenues. This produces an illusion of revenue growth which is not economically sustainable. This is happening for all clouds right now, wherein their revenues are inflated by uneconomic AI purchases. This is also happening for the GPU chip vendors, wherein they are offering cash or warrants to fund their own chip sales.
What Microsoft is actually doing is taking the large profits it would have otherwise made on its cloud compute with retail customers, losing much/all of those profits as it sells the compute more cheaply to OpenAI, and converting those lost profits into ownership of OpenAI because Microsoft's goal is to own more of OpenAI.
There is nothing "bubble" about this. Microsoft isn't some opaque startup investors don't understand. All of this is incredibly transparent.
[1] https://news.ycombinator.com/item?id=45719669
I'm not anti-AI, I'm just sceptical that it's as powerful as the AI companies are making out. I don't think we are anywhere near AGI, like centuries away.
I also don't think AI is going to be able to do all human jobs; in the physical world we have seen relatively little progress in robotics compared to the leaps made with transformers. And in the information world, while LLMs can assist in many tasks and make workers more efficient, I don't think they can entirely replace programmers (who are the expensive workers).
So yeah, I just don't think we are going to see the kind of world-changing benefits that OpenAI etc. are promising and which their valuations appear to be based upon.
Personally I haven’t seen blockchain make any impact whatsoever but maybe it’s just a little more niche or just a different one.
Keep in mind also that the models are going to continue improving, if only on cost. Just a significant cost reduction allows for more "thinking" mode use.
Most of the reports about how useless LLMs are came from older models being used by people who don't know how to use LLMs. I'm not someone who thinks they're perfect or even great yet, but they're not dirt.
what could possibly go wrong
This is bad. We should not shrug our shoulders and go "Oh ho, this is how the game is played" as though we can substitute cynicism for wisdom. We should say "this is bad, this is a moral hazard, and we should imprison and impoverish those who keep trying it".
Or we'll get more.
* stock prices increasing more than the non-existent money being burnt
* they are now too big to fail - turn on the real money printers and feed it directly into their bank accounts so the Chinese/Russians/Iranians/Boogeymen don't kill us all
While it was sorta legal (at the time) it was not ethical and led to a massive collapse of the #1 company at the time.
Makes you wonder if AI is in such a bubble. (It is).
Or maybe not; nobody knows the future any more than the next guy in line.
But the major takeaway was that almost none of these companies were real businesses. This is why I laughed at dot-com comparisons in the 2010s around the tech giants because Apple, Google, Microsoft, etc were money-printing machines on a scale we have trouble comprehending. That doesn't make them immune to economic struggles. Ad spending with Google will rise and fall with the economy.
OpenAI has a paper valuation in the hundreds of billions of dollars now and no prospect of a revenue model that will justify that for many, many years.
Currently, the hardware is a barrier to entry but that won't last. It has parallels in the dot-com era too when servers were expensive. The cost of training LLMs is (at least) halving every year. We're probably reaching the limits of what these transformers can do and we'll need another big breakthrough to improve.
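The "halving every year" claim above is just compound decay; with an arbitrary illustrative starting cost (not a real figure), the upper bound after n years is c0 / 2**n:

```python
# Back-of-envelope: if training cost at least halves annually,
# cost after n years is at most c0 / 2**n. c0 is invented.
c0 = 100.0
costs = [c0 / 2**n for n in range(6)]
print(costs)  # [100.0, 50.0, 25.0, 12.5, 6.25, 3.125]
```

Five years of halving cuts the cost by 32x, which is why a hardware barrier to entry is hard to defend for long.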
OpenAI's moat is tenuous. Their value is in the model they don't release. But DeepSeek is a warning shot that it will be in somebody's geopolitical interest, probably China's, to prevent a US tech monopoly on AI.
If you look at these AI companies, so many of them are basically scams. I saw a video about a household humanoid robot that was, surprise surprise, just someone in a VR suit. Many cities have delivery drones now but somebody is remotely driving them.
I saw somebody float the theory that the super-profitable big tech companies are engaging in layoffs not because they don't need people but to pay for the GPUs. It's an interesting idea. A lot of these NVidia deals are just moving money around, where NVidia comes out on top with a bunch of equity should these companies ever reach trillion-dollar valuations.
Oh and take out data center building from the US economy and we're in recession. I do think this is a bubble and it will burst sooner rather than later.
Central banks don't print money[1], but commercial banks do. Think about it like this: someone deposits $100. The bank pays interest on it, and to earn the money to pay that interest, ~$90 is loaned out to someone else.
Now, I still have a bank slip that says there's $100 in the account, and the bank has given $90 of that to someone else, so we now have $190 in the economy! The catch is that the money needs to be paid back: if people try to call in that cash at once, only $10 is actually on hand, because the rest is out on loan, causing a cash vacuum.
But that paying back is also where the profit is, because you can sell off the loan book and get all your money back, including future interest. You have lent out $90 and sold the right to collect the repayments to someone else as a bond, so you now have $120, a profit of $30.
That $30 comes pretty much from nowhere (there are caveats).
Now my bank account, after say a year, has $104 in it; the bank has $26 of pure profit; AND someone holds a bond "worth" $90 which pays $8 a year. But guess what: that bond is also a store of value. So even though it's debt, it acts as money/value/whatever.
Now, the numbers are made up, and so are the percentages, but the broad thrust is there.
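The deposit-and-lend story above can be sketched as toy arithmetic (same made-up numbers as the comment): a $100 deposit backs a $90 loan, so the depositor's slip plus the borrower's cash "is" $190, even though only $100 of base money ever existed.

```python
# One round of lending: bank keeps $10 in reserve, lends the rest.
deposit = 100
loan = 90
apparent_money = deposit + loan
print(apparent_money)  # 190

# If each loan is redeposited and 90% is re-lent, round after round,
# the total approaches deposit / reserve_ratio = $1000 (geometric series).
total, d = 0.0, float(deposit)
for _ in range(200):
    total += d
    d *= 0.9
print(round(total))  # 1000
```

The limit is the classic money-multiplier result: the same $100 of base money supports up to $1000 of claims, which is exactly why a sudden call on the cash causes the vacuum described above.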
[1] they do