AMD to buy Silo AI for $665M (ft.com)
helsinkiandrew 104 days ago [-]
mastax 104 days ago [-]
I’d argue that a factor in CUDA’s success is their army of in-house researchers which use CUDA to do novel things. Sometimes those things get turned into products (OptiX) other times they are essentially DevRel to show off what the hardware can do and documentation for how to do it. Additionally I’m sure they use pre-release hardware and software and give feedback about how to improve it.

I don’t know what AMD has in mind for this acquisition but I could see there being a lot of value having an in house LLM team to create models for customers to build on, run in benchmarks, and improve their products.

eightysixfour 104 days ago [-]
Yes, nvidia spends a lot of time and money developing software that induces demand for their GPUs.
xyst 104 days ago [-]
Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.

I don’t remember any alternatives in uni. Maybe OpenCL but only lightly mentioned

wsay 104 days ago [-]
As someone who has designed and taught those courses, my experience (admittedly only one person's) is that you pick what will work with the least hassle - because you'll have plenty of hassle elsewhere and probably no real time to deal with any of it without making more.
sillysaurusx 104 days ago [-]
This is actually one of my favorite comments of all time, because it's how software wins. The software that students use is the software the industry uses about five years later.
nh2 104 days ago [-]
Not always.

One of our machine learning courses was taught in Matlab.

Unsurprisingly, nobody used Matlab after uni, or 5 years later.

lmpdev 103 days ago [-]
Also did an algorithms in machine learning course in matlab

It’s a great language choice for it

It weeded out the script kiddies who incorrectly signed up wanting a Tensorflow or PyTorch course

It’s a fairly bland and slow but usable language for the task

Shits me off to no end that a lot of engineering courses more or less indoctrinate their students into using it unconditionally, though

Octave exists but is a relative pain to use

plaguuuuuu 103 days ago [-]
It's still a pain spending time learning matlab syntax/semantics when you could just, idk, use C or Haskell instead
gibolt 104 days ago [-]
Matlab is fairly easy to work with (initially) and is great when learning a new concept, instead of learning that plus arbitrary syntax of the tool.

It isn't particularly fast though, and the simplicity quickly becomes an obstacle when solving a real problem.

TremendousJudge 103 days ago [-]
My experience in university was the exact opposite. The stuff we were using was 5-10 years behind what industry was using.
chii 104 days ago [-]
> The software that students use is the software the industry uses about five years later.

which is why it's anti-competitive for a company to sponsor university courses (such as providing educational versions for free). It should be disallowed, unless the course is _specifically_ teaching the software, rather than a general course.

paulddraper 104 days ago [-]
That's competitive, not anti-competitive.

Anti-competitive means others are not allowed to do the same.

chii 104 days ago [-]
> others are not allowed to do the same.

it's usually the case that the sponsor is the sole sponsor (i.e., the course does not teach both X and Y, esp. if X is given to the uni for free).

It's anti-competitive to allow companies to embed themselves in general courses, even if it is not so by the letter of the law.

zamfi 104 days ago [-]
Sort of -- but basically no course is going to teach X and Y, if they're functionally equivalent ways to learn about Z, because almost no course is specifically about X or Y, it's about Z, and learning both X and Y isn't germane to learning Z, just learning one is enough.

As long as the companies behind X and Y both have a fair shot at sponsorship, this isn't really anti-competitive. It's literally a competition in which the companies compete for student and faculty attention.

Anti-competitive would be a company saying "you must teach X and not Y in your class about Z because you use Xco's mail services" or some other such abuse of one contractual relationship for an unrelated gain.

paulddraper 104 days ago [-]
They say "hey if you want to teach a class using X, we'll sponsor it."

A competitor can compete for that sponsorship. So long as it's done on the direct merit of the value, there's no problem.

Anti-competitive would be providing products or services and forcibly leveraging that into an unrelated contract.

fngjdflmdflg 104 days ago [-]
>Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.

Do you have a source for this claim? Or do you simply mean that since they spend money making it better, professors end up using it of their own accord?

physicsguy 104 days ago [-]
I hold an Nvidia instructor's cert from when I worked in academia. They even give you access to hardware while you're running courses on it. It's super easy and totally free.
nojvek 104 days ago [-]
I won an Nvidia GPU while I was doing my advanced graphics course for making custom shaders.

Had to buy a new power supply just so I could use it.

flakiness 104 days ago [-]
They co-author the definitive CUDA textbook, and it's based on their sponsored class (You can find the story in the intro of the book.) https://www.amazon.com/Programming-Massively-Parallel-Proces...
YetAnotherNick 104 days ago [-]
Co-authoring a book is not "a metric shit ton of money".
chaostheory 103 days ago [-]
No, I think it's a source for the claim, not actual evidence of what they spent it on.
helloericsf 104 days ago [-]
OpenCL was discussed more frequently in classes about a decade ago. However, I haven't heard it mentioned in the last five years or so.
eddiewithzato 103 days ago [-]
Yea, people tried to push OpenCL back then; it was simply inferior
Izikiel43 104 days ago [-]
Opencl is horrible compared to cuda
Narhem 104 days ago [-]
Especially since AMD and nVidia have similar costs for a GPU
kcb 104 days ago [-]
AMD has hip which is basically a CUDA clone.
pjmlp 104 days ago [-]
Only for those that equate CUDA to C++ only, and poor tooling.
kcb 103 days ago [-]
They've replicated many of the libraries as well. But yea haven't personally tried it.
Narhem 104 days ago [-]
Not exactly but they give massive discounts and the tools are much more appropriate to use for late undergrads and grads.
light_hue_1 104 days ago [-]
> Nvidia also spends a metric shit ton of money to make sure professors use and teach on their platform.

Nah. People teach what they use because that's what's easy.

baumy 104 days ago [-]
It's definitely both.

I'm sure plenty of professors use CUDA in their courses because it's what they actually use. At the same time, in 2013 when I was in college I took a course on "parallel computing" as a CS elective. The professor told us on day 1 that NVidia was sponsoring the course and had donated a bunch of GPUs to the clusters we could remotely connect into for the sake of the class. Naturally we used CUDA exclusively.

I know for a fact that this happened at a lot of schools. I don't know if it's still happening since I'm not in that world anymore, but don't see why it would have stopped.

Narhem 104 days ago [-]
CUDA is extremely simple; the classes might as well be on rails. OpenCL is nearly impossible without graphics and/or CUDA, distributed computing, or operating systems experience.
kimixa 104 days ago [-]
I'm not sure if I really agree - the level of abstraction used for each is extremely similar. There's not really any "Graphics Pipeline Specifics" pollution in OpenCL or CUDA.

You can pretty much translate something like https://github.com/jcupitt/opencl-experiments/blob/master/Op... by string-replacing function names.
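
To make that concrete, here's a minimal, hypothetical sketch of that kind of textual translation in Python - in the spirit of AMD's hipify-perl tool, which really does work largely by string substitution (the mapping table below is a tiny, abbreviated subset, not the real one):

    # Naive CUDA-runtime -> HIP renamer; real hipify tables cover hundreds of symbols.
    CUDA_TO_HIP = {
        "cuda_runtime.h": "hip/hip_runtime.h",
        "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
        "cudaMemcpyDeviceToHost": "hipMemcpyDeviceToHost",
        "cudaDeviceSynchronize": "hipDeviceSynchronize",
        "cudaMalloc": "hipMalloc",
        "cudaMemcpy": "hipMemcpy",
        "cudaFree": "hipFree",
    }

    def naive_hipify(source: str) -> str:
        # Replace longest names first so "cudaMemcpyHostToDevice"
        # isn't clobbered by the shorter "cudaMemcpy" rule.
        for name in sorted(CUDA_TO_HIP, key=len, reverse=True):
            source = source.replace(name, CUDA_TO_HIP[name])
        return source

    print(naive_hipify("cudaMalloc(&buf, n); cudaDeviceSynchronize();"))
    # -> hipMalloc(&buf, n); hipDeviceSynchronize();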

alphabeta2024 104 days ago [-]
You get free access to hardware for courses if you teach CUDA courses.
thanhan201 104 days ago [-]
[flagged]
Narhem 104 days ago [-]
No, Nvidia makes great tooling. As a startup, if I had to pick a development tool, AMD fails repeatedly while Nvidia tooling is Matlab levels of useful.

Those companies have money to make ‘nice’ things which open source software doesn’t have the time to do.

For $100M you could probably make some pretty sweet clones, if AMD is hiring anybody to man that position.

xzel 104 days ago [-]
I'm not sure if I'm in the minority here, but "Matlab levels of tooling" is an insult. Their guides were always two or three steps short of being useful - just enough to make you think whatever they were selling would solve your problems, but never enough when really building a solution.
Narhem 101 days ago [-]
Before Visual Studio Code, the lagginess of Eclipse always pushed me to use Notepad++. Which in turn pushed me away from Java.
cosentiyes 104 days ago [-]
> matlab level of usefulness

that's a little harsh :D

eightysixfour 104 days ago [-]
I don’t understand what you are disagreeing with.

Nvidia makes software that induces demand for their products. Sometimes that software is a tool, or a platform, or an ML model, or foundational research on algorithms.

Dalewyn 104 days ago [-]
>Those companies have money to make ‘nice’ things which open source software doesn’t have the time to do.

I would posit it's a lack of will rather than time.

lmpdev 103 days ago [-]
More so a lack of will to effectively mass-organise

Thousands of OSS devs would be willing to devote serious time to it, but can’t/won’t run the gauntlet of starting such a ludicrously large project from scratch

It’s easy to contribute, difficult to be the one organising the contributions

A real “where do I even begin” problem

KeplerBoy 104 days ago [-]
The biggest frameworks are still from other players though. Pytorch, tensorflow and jax aren't funded by Nvidia.
stanleykm 104 days ago [-]
But they are built on top of nvidia tooling and you can use nvidia tools to do more extensive profiling than other players offer.
KeplerBoy 104 days ago [-]
True, gotta love Nsight Systems and Compute.

That's the first hurdle of working with AMD GPUs: I have no idea what the GPU is actually doing, because there is no quality profiler.

Conscat 104 days ago [-]
Is Omniperf/Omnitrace not very good? I haven't used it, but I have been using Nsight Systems recently and it looks comparable to me at a glance.
eightysixfour 104 days ago [-]
That’s ignoring a huge swath of software nvidia uses to push industry forwards (in the direction they want it to go).

Omniverse, Isaac, Metropolis, Rapids, etc.

fortran77 104 days ago [-]
Yeah, but the frameworks use CUDA in their NVIDIA implementation, don't they?
mcbuilder 104 days ago [-]
No way would any of those have bindings to backend libraries like cuDNN.
cityofdelusion 104 days ago [-]
The success of CUDA is basically a dual effect of AMD devices being inefficient and bad for years, plus AMD having no answer to CUDA for a solid 7+ years while the foundations of GPGPU were being laid down.

Mindshare shifts slowly.

chidli1234 104 days ago [-]
It's an acquisition, usually for patents/IP. There will be layoffs.
ahartmetz 104 days ago [-]
AMD is (according to their own statements) in the process of picking up a lot of software manpower. And wages in Finland are European tier, not US West Coast. Why lay them off?
Yizahi 103 days ago [-]
Because nobody has ever been fired or fiscally punished for firing an excessive number of people? :) People are notoriously bad at predicting potential positives, so when you fire people, nobody can prove that something wasn't created. In reverse, it is possible to blame people for overhiring, because that can be supported by hard numbers.
pjietr 104 days ago [-]
I guess their growth strategy was mostly about hiring every person who is Finnish, or living in Finland (and then in other countries), with a PhD in some quantitative topic, and then marketing "we have xx PhDs" as a consultancy for all your projects. So you are probably right that not all of these are needed anymore?
Narhem 104 days ago [-]
Why Finnish and not American or Persian?
ErikBjare 103 days ago [-]
Finnish company
thenaturalist 104 days ago [-]
Huge congratulations to the founders, and what a nice mark for the European (and Nordic) startup community.

It's gonna be quite interesting to see if this works out strategically.

I guess the bet is an in-house army of PhDs vs. having a CUDA - which you don't have as a second mover here - and assuming PhDs tightly coupled with the hardware can outperform an open framework / push Triton to parity with CUDA over time.

sva_ 104 days ago [-]
Congrats to the founders indeed, but

> what a nice mark for the European (and Nordirc) startup community.

Not sure if it is a great win for the EU at large if their AI startups get bought up by American companies though, to be fair.

thenaturalist 104 days ago [-]
The economy and startup world isn't a zero sum game.

Ultimately the AI play* is open source for the foreseeable future, even more so for AMD if they want to sell their chips.

And if Silo AI's people accelerate competition in the AI HW space by accelerating Triton development/ raising the industry's competitive edge against Nvidia, we all benefit from stronger competition.

And in most other European startup hot spots, senior staff/ founders with previous exits reinvested their earnings into the domestic startup scene through founding again or becoming Business Angels or going VC.

I see this as a huge net win.

* EDIT: For integrating with compute, I guess.

627467 104 days ago [-]
Actually, it is zero sum. There are finite resources, human talent, and centers of decision making. Yeah, a European startup gets American money today, and the American decision-making center grows larger. Whether the money paid to Europeans now is used to prop up a new generation of startups - in any meaningful way - remains to be seen. Most likely, these senior staff/founders will allocate their cash where it is more efficient, and I doubt it will be (meaningfully) in Europe.
thenaturalist 104 days ago [-]
The fact you're posting this comment in a thread about a press release announcing the acquisition of a European startup is in itself a counterexample, wouldn't you agree?

One of the Cofounders of Silo is ex-Nokia...

Should tell you everything about zero-sum games.

Sure, the US is the dominant financial and technological economy on the planet and that will not change for the foreseeable future.

But implying a globalized, technology enabled economy will behave in a zero-sum fashion is just plain wrong.

The US is where it is today because post-WWII it ingeniously recognized the value of free and global trade and invested heavily in its Navy to enable and protect said trade.

Instead of making things on your own in the US, you could sit in New York and invest globally - the value of your investment and access to its dividends guaranteed by the power of the US military.

Relative value against the status quo is created every day everywhere by millions of smart people.

What Europe - and Finland in this example - has is a century-old tradition and established infrastructure for higher education.

That investment will continue to pay off for the foreseeable future.

ClumsyPilot 104 days ago [-]
> ingeniously recognized the value of free and global trade and invested heavily in its Navy to enable and protect said trade

This reads like a person taking credit for the sun rising in the east and setting in the west.

The United States is rich for three reasons:

Firstly, the USA stole textile and other technology from the British Empire.

Secondly, gen 1 ‘non-free trade’ empires like the British got demolished in the war. All of the world’s industrial nations were in ruins.

Thirdly, the 'genius' was the Marshall Plan: giving reconstruction loans to the British and the French that they could only spend on American products - remember, their industry was demolished - further stimulating the American economy.

Global trade grew after 1955 when we invented containerisation.

And the USA does not really believe in global free trade - that's for its club of friends. Everyone else gets a sudden 100% solar panel tax or a 100% EV tax when they want to export to the US. Or they get a sudden coup if their government wanted to stop exporting bananas to the US.

mistrial9 103 days ago [-]
Oversimplified explanations here - great for relieving some hostility, perhaps, but not complete. Lots of holes in this blanket explanation.

History of trade protectionism? That is older than sailing ships. Yes, the achievements of old Europe include igniting the largest and deadliest wars in world history. The USA benefited from not being destroyed? Sure, OK... maybe war is a bad idea for prosperity.

Modern trade value might be divided into "oil and gas" and then everything else; "arms trade" and then everything else. Big Pharma? OK, you got me, yes the US rules it financially, but then old Europe has some assets like that too, just not on the front stage - thinking of textile dye chemistry as an example.

The USA has no special awards for inventing trade protectionism, just a vigorous practice at the right time due to the idiocy of others.

You speak English pretty well.. maybe that language is part of the success here? many other angles easily come to mind...

bee_rider 104 days ago [-]
Don’t European programmers make much less than Americans? I wouldn’t be surprised if they kept a pretty big footprint over there.

Big picture the US unemployment rate is quite a bit lower than the EU, so I’m sure any global company is happy to draw from the bigger pool.

Finally, benefits can be unbalanced in favor of one entity or another without being zero sum. Even if the US benefits more from this deal, the purchasing company, AMD, still turns sand into extremely valuable electronics. That’s not a zero-sum activity.

mistrial9 103 days ago [-]
> US unemployment rate

do not believe this number.. it is a manipulated statistic on the front line of old class wars regarding labor versus capital. hint- capital interests know the Federal government very well

xmprt 104 days ago [-]
The world is very much not zero sum and never has been. If it was then we'd be stuck in the stone age because every advancement that benefits one person would hurt the rest. Instead we see that over the course of history the average wealth of the world has gone up. There are certainly some negative intangibles that come from that (eg. climate change or ecosystem collapse) but it's hard to quantify if that makes it zero sum and even so, the context of this thread is about human vs human zero sum games.
jltsiren 103 days ago [-]
The world was effectively a zero-sum game until the industrial revolution. For most of human history, the average growth per capita was something like 0.01%/year. There was some growth, but you could not see it in a single lifetime. Which is why conquering your neighbors, taking their possessions, and enslaving them was such a popular form of business.
xmprt 103 days ago [-]
I'm not sure I even agree with this. If by conquering your neighbors, your civilization grows 5x (as opposed to just having you + your neighbors), then doesn't that by definition mean it's not zero sum?
singhrac 104 days ago [-]
I’m not sure we agree on what zero sum here means, but one direct consequence of having a decent exit here is that the investors in Silo will get a capital return they can use to raise more funds.

I don’t know what the founders of Silo will do, but the investors are in the business of investing, and incrementally the viability of being an AI VC in this area has gone up (depends on the counterfactual but I think cash exit is better than some chance of IPO).

mgfist 104 days ago [-]
You say it's zero-sum then in the next sentence say "whether the money paid into Europeans now is used to prop up new generation of startups - in any meaningful way - will remain to be seen", which surely implies that it's not necessarily zero-sum.
nl 104 days ago [-]
https://en.wikipedia.org/wiki/Jevons_paradox might be interesting to read.
blackhawkC17 104 days ago [-]
Europe’s tech ecosystem will still benefit a lot regardless. Zero-sum thinking is not good- it causes economic regression and poverty in the long run.
fauigerzigerk 104 days ago [-]
>Not sure if it is a great win for the EU at large if their AI startups get bought up by American companies though, to be fair.

That would be a concern if the plan was to move the entire team to the US. But if the Finland based company just becomes a part of AMD then I see little downside. Some very competent people in Finland now have $665M to fund new startups.

Ultimately I think the most important question is where the interesting and high productivity work gets done. That's the place that benefits most.

Rinzler89 104 days ago [-]
>That would be a concern if the plan was to move the entire team to the US.

The issue is that all that Finnish labor now fuels a US tech giant whose profit center is in the US, not in the EU, therefore mostly boosting the US economy in the process.

Then there's also the trade barriers that come with now becoming a US tech company instead of a Finnish one. You can't sell to China, and other countries on the US's shit list without Uncle Sam's approval.

fauigerzigerk 104 days ago [-]
>The issue is that all that Finnish labor now fuels a US tech giant whose profit center is in the US, not in the EU, therefore mostly boosting the US economy in the process.

No, this is not how it works. Assuming Silo AI continues to operate out of Finland, its investments, the consumption of its employees and its exports will continue to count towards Finland's GDP just like before. Any profits go to AMD shareholders all over the world, not just in the US. The strategic alignment between Silo AI and AMD may well benefit both Finland and the US.

We have a similar debate in the UK regarding DeepMind. And yes it's true, if you assume that DeepMind or Silo AI would have become world dominating tech behemoths in their own right, then it would have been better for Britain/Finland if they hadn't been sold.

But it's also possible that the UK and Finnish operations are ultimately more successful as part of Google/AMD because they benefit from strategic opportunities they wouldn't otherwise have.

I'm not saying that headquarters don't matter or that there are no downsides (e.g wrt corporation tax). What I am saying is that it's not automatically a bad thing for a country if a company gets sold to a foreign corporation.

One thing is for sure. It's far better to have a lot of US subsidiaries in the country than watching your graduates and startup founders leave for the US.

pjc50 104 days ago [-]
> US tech giant whose profit center is in the US, not in the EU, therefore mostly boosting the US economy in the process

More a matter of accounting than reality. For years, Apple deliberately did not repatriate its profits, to avoid tax, keeping them out of the US economy. https://www.cnbc.com/2018/01/17/it-looks-like-apple-is-bring...

The question of where a profit is actually made for a multinational company can be very unclear.

user90131313 103 days ago [-]
Yes indeed, they bought it for peanuts. WeWork got billions, as did many other BS startups. $665 million is almost free.
mistrial9 104 days ago [-]
they have loyal and stable staff with healthy family lives unlike 8 of 10 California companies
blackhawkC17 104 days ago [-]
Employee loyalty isn’t a good thing. One of the best things about Silicon Valley is that people can swiftly change companies when they get higher offers. Non-competes are void in California.

There’s a reason US salaries for software devs are 2-5x EU salaries for similar roles.

snowpid 104 days ago [-]
What if I told you that non-competes aren't a thing in Germany? (And in a big part of the US, too.)
storyinmemo 104 days ago [-]
Well I'd tell you that they aren't a thing in California.
talldayo 104 days ago [-]
...as of six months ago.
Alupis 104 days ago [-]
Anyone could ask you to sign a non-compete. But in California, they have been legally unenforceable for as long as I have been alive.

What changed is that they now cannot make employment conditional on signing this unenforceable contract.

zombiwoof 104 days ago [-]
As someone who has been stuck in Silicon Valley for 20 years I can say hands down the German and European teams I’ve worked with far outshine the hacker ego Hollywood hipster techbros of San Francisco. Yet the latter make 2-5x the income.
snowpid 104 days ago [-]
Thanks :)

But I guess there are also very capable American teams and narcissistic European CS folks.

(I guess it is a very good question why this difference exists and how to change economic policy.)

rangestransform 104 days ago [-]
employee loyalty is a good thing if it's bought and not expected
p_j_w 104 days ago [-]
>There’s a reason US salaries for software devs are 2-5x EU salaries for similar roles.

When you account for medical costs, rent (especially compared to the localities in the USA that provide these huge salaries), extra vacation time, and for those with children, education and child care, this gap narrows considerably.

Rent alone... one can find a reasonable spot in Berlin for ~$1300/mo. Good luck finding more than a shared box in the Tenderloin for that much in the Bay Area.

blackhawkC17 104 days ago [-]
> When you account for medical costs, rent (especially compared to the localities in the USA that provide these huge salaries), extra vacation time, and for those with children, education and child care, this gap narrows considerably.

That's what Europeans generally say to justify or cope with their low salaries, but it's not true. After accounting for all these, an SV, NYC, Seattle, etc., engineer ends up with far more disposable income than their EU counterpart.

The US has the highest average disposable income worldwide; the rest almost don't come close [1]. That's why it has much more entrepreneurial activity.

Yes, the US isn't perfect, but the EU doesn't come close to the US in terms of money for highly skilled professional workers.

1- https://www.statista.com/statistics/725764/oecd-household-di...

mminer237 104 days ago [-]
I agree with you, but OECD's disposable income does not include housing, education, healthcare, or childcare unless they're paid through taxes.
p_j_w 104 days ago [-]
>After accounting for all these, an SV, NYC, Seattle, etc., engineer ends up with far more disposable income than their EU counterpart.

I said it narrows the gap, not closes it.

>https://www.statista.com/statistics/725764/oecd-household-di...

Your link is behind a paywall, I can't view that data.

blackhawkC17 104 days ago [-]
Point noted, Wikipedia breaks down the data better - https://en.wikipedia.org/wiki/Disposable_household_and_per_c....
p_j_w 102 days ago [-]
Housing is notably not included here. Further, while it says government- or non-profit-provided health care and education are included in the incomes for countries where they're available, the countries where they are NOT available don't show a reduced disposable income to pay for these things. These data do not show what you think they show. They already show a smaller gap than you had originally implied, and as I stated earlier, that gap is going to narrow substantially when the rest of these factors are taken into account.
SSLy 104 days ago [-]
My cursory understanding is that Silo is a developer of LLMs that run on top of compute platforms. Isn't the problem with no one using AMD's accelerators the fact that their programming environment is sub-par compared to CUDA, or even Apple's?
btown 104 days ago [-]
The underinvestment in, and abandonment of, a project for a CUDA compatibility layer https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq by AMD a few months ago hints that they no longer see CUDA compatibility as a goal. Perhaps they see Silo as a way to jumpstart bringing ROCm to parity with CUDA's toolkit. It's hard to understand if there's an underlying strategy to how they'll stay relevant from a software perspective when they're abandoning projects like this.

Discussion: https://news.ycombinator.com/item?id=39344815

IshKebab 104 days ago [-]
It makes no business sense for them to try to get CUDA compatibility. That would just cement CUDA as the de facto standard, at which point they are locked in to playing catch up forever as nVidia intentionally adds features to break compatibility.

Much more sensible to work on getting rock solid support for their own standards into all the major ML platforms/libraries.

joe_the_user 104 days ago [-]
It depends what one means by "business sense". Compatible makers have profited - they did profit during the PC era. Indeed, one of AMD's core businesses is making x86-compatible CPUs.

Nvidia as standard-maker is limited in what breaking changes it can introduce - these can harm their customers as much as they harm the competition. Intel failed to force all their changes on AMD as the x86 market expanded (notably, the current iteration of the CPU standard was set by AMD after Intel was unable to sell their completely new standard).

Still, I'd acknowledge that "business sense" today follows the approach of only aiming for markets the company can completely control and by that measure, CUDA compatibility isn't desirable.

mandevil 104 days ago [-]
I think the key is that CUDA is much more like the Microsoft Windows software part of the duopoly than the Intel x86 hardware part of the old Wintel duopoly. At best, back in the glory days of that era, you could have weird hackish things like WINE ... until Microsoft's business model changed and it started being interested in supporting virtualization to build up Azure.

The key is that while there were many clones of x86, there never really was an attempt at a company built around "run MS Windows programs natively", because maintaining software compatibility is an order of magnitude harder than doing it for hardware.

joe_the_user 104 days ago [-]
CUDA is absolutely not equivalent to Windows as a platform. It's essentially a single API, not a huge, multilayer and futzy platform with multiple weirdly behaved APIs.

Moreover, companies aren't buying GPUs to keep their huge stable of legacy applications running. They want to create new AI applications and CUDA is a simple API for doing that (at a certain level).

mandevil 104 days ago [-]
CUDA is a programming language with libraries (cuBLAS, cuSPARSE, etc.) that are constantly having things added and try to maintain backwards compatibility. It's not as big and hefty as all of Win32 sure, but it's still far more difficult than x86 compatibility.
joe_the_user 103 days ago [-]
Those libraries are written in CUDA. Windows has had a multitude of APIs and layers (DOS, Win32, etc.), none of which were written in an underlying language/API.

Microsoft's entire history is around building a moat of APIs because the PC software industry has such a wide variety. Nvidia has, so far, been focused on building actually useful things for developers. Basically, where all the other manufacturers viewed their chips as special-purpose devices, Nvidia allowed developers to treat their chips as generic parallel processors, and this facilitated the current AI revolution/bubble. Now that Nvidia has created this market, it can charge by the use rather than charging by processing power. The thing is that Nvidia's large potential competitors simply don't want to create clones even if they could, because clones would have to be sold by processing power rather than with a markup for their usefulness. It's worth looking at the list of x86-compatible makers [1]. Making an x86 wasn't quite something you could do in your garage, but clearly the barriers to entry weren't huge. But any Nvidia compatible is going to cost a large amount of capital and can only sell by processor power, and so AMD, Intel, and similar-sized entities don't have an interest in doing this.

[1] https://en.wikipedia.org/wiki/List_of_x86_manufacturers


seunosewa 104 days ago [-]
That's an unfortunate choice. AMD has excelled at making compatible hardware; it has not had much software success, if any.
Narhem 104 days ago [-]
It takes a lot of design skill to make something complex simple. I have a lot of doubt that AMD's department even has design skills.
Narhem 104 days ago [-]
CUDA is a decided abstraction; with OpenCL, I wouldn't be surprised if eventually they pick a different abstraction to describe the interface they use for writing programs.
slashdave 104 days ago [-]
They don't need parity. They just need ROCm (or OpenCL) to be a reasonably viable alternative.
doikor 104 days ago [-]
They have been using/building stuff for the LUMI supercomputer, which has a bit over 12,000 MI250X GPUs:

https://ir.amd.com/news-events/press-releases/detail/1206/am...

“Silo AI has been a pioneer in scaling large language model training on LUMI, Europe’s fastest supercomputer powered by over 12,000 AMD Instinct MI250X GPUs,”

anewhnaccount2 104 days ago [-]
They successfully trained LLMs on LUMI, which has AMD Instinct MI250X GPUs. This perhaps provides a hint about one angle on why AMD is interested.
zacksiri 104 days ago [-]
It makes sense then for AMD to buy them out.

If they've trained LLMs with LUMI, which has a lot of Instinct GPUs, there is a high chance they've had to work through and solve a lot of the gaps in software support from AMD.

They may have already figured out a lot of stuff and kept it all proprietary and AMD buying them out is a quick way to get access to all the solutions.

I suspect AMD is trying to fast track their software stack and this acquisition allows them to do just that.

rcarmo 104 days ago [-]
I am curious if the models are any good, though. The landscape is so fragmented I never heard of Poro.
ghnws 104 days ago [-]
Poro (reindeer in Finnish) is specifically developed to be used in Finnish. GPT and other general models struggle with less-used languages. Unfortunately, this sale likely means this development will cease.
rcarmo 103 days ago [-]
Reindeer is a great name, and gives me an idea - next time I create an Azure OpenAI resource (depending on model availability and data residency requirements, sometimes you need to create more than one) I'm going to start going through Santa's reindeer names.
hrududuu 104 days ago [-]
GPT-4 or even 3.5 is quite good at Finnish. Was there ever a benchmark against closed-source models?
pantalaimon 104 days ago [-]
So AMD wants to know how they did it, I understand.
antupis 104 days ago [-]
Silo mainly does consulting, and those models were kind of done on the side. But great for the founders, and a truly weird acquisition for AMD.
pjietr 104 days ago [-]
Silo.ai is mostly a consulting house for various proof-of-concept type projects, with the LLM product being only a recent addition
Keyframe 104 days ago [-]
Maybe they can use LLMs now to program their platform for them? </snark or not really>
csomar 104 days ago [-]
Sure, buying this company for $600M will fix everything.
sqeaky 104 days ago [-]
That is the most sarcastic thing I have read in weeks.

But isn't getting a software stack exactly the kind of thing they need? Is there no overlap between the skills at the purchased company and the skills needed to make the AMD software stack not suck?

lyu07282 104 days ago [-]
That assumes that the reason AMD's software stack sucks is because of skill, not company culture, management or other reasons that won't change with this acquisition.
throwway120385 104 days ago [-]
If it's a culture problem and the C-suite is aware of it, then one reason to buy a company with a working software stack is to percolate their culture into your company so you can be successful.
szundi 104 days ago [-]
Hopefully the CEO of the acquired company then at least gets a director role at AMD, not subordinated to a supposedly subpar-cultured director already at AMD.
aardvarkr 104 days ago [-]
I have a friend working there and it's a bunch of old curmudgeons stuck in their ways. Good luck changing the culture with a single acquisition.
throwway120385 104 days ago [-]
The company I used to work for is doing this to the engineering org in my current employer. It requires the leadership from the old company to be embedded in very senior positions, and it requires buy-in from the existing C-suite. There's a lot of backroom politics to change culture along with a bunch of work to prove yourselves to people who aren't involved in the backroom. There have been a bunch of points at which I didn't think it would continue but so far the original team has been pretty successful at rising.

Think of it as a reverse McDonnell-Douglas.

imtringued 104 days ago [-]
I also have this impression. The software problems that are plaguing AMD are in the "less than $10 million" range, if they hired the right people to work on the most severe bugs in their GPU drivers and let them do their job.
baobabKoodaa 104 days ago [-]
Sure, there is some overlap. Is that overlap worth $665M?
rvnx 104 days ago [-]
Yes, it instantly added (at least partially) +$12B to AMD's valuation. This shows investors that AMD is still in the race.
drexlspivey 104 days ago [-]
You shouldn't attribute that to the acquisition. The stock went up 3.8% today, but also 4% on Monday, 4.8% last Friday, 4.2% last Tuesday, etc.
sqeaky 104 days ago [-]
Going straight by stock price isn't very valuable unless you're selling immediately.

600 million dollars is a lot, and in order for that 12 billion increase to stick around, this team-up needs to present a lot of value. I'm optimistic, but I'm also an outsider.

baobabKoodaa 104 days ago [-]
Yeah I saw the stock market uptick, but that is a kneejerk reaction by the public markets. It's not as if the public market participants have had ample time to evaluate the merits of the acquisition, and even then, if they are right or not.
szundi 104 days ago [-]
Anyway, it seems the market thinks this is a 20x-value acquisition.
eysgshsvsvsv 104 days ago [-]
You all live in a simple world where complex systems are fixed by simple statements like "a software stack is all they need".
sqeaky 104 days ago [-]
Why the personal attack?

I said that I interpreted the previous comment as sarcastic so I could be called out if it wasn't. The author hasn't yet disagreed. And I think sarcasm is warranted in a space that has witnessed so many bad acquisitions.

On software at AMD: if my world is so simple, please explain where I am wrong. I never said this was a simple solution; I implied there was some overlap in the needed skills.

ROCm sucks; it has licensing and apparently usability issues. It has had performance issues, and that is getting better. It isn't in a lot of the places it needs to be for it to be considered a default choice.

Apparently, Silo uses AMD stuff to do ML work. Apparently, they have domain experts in this space. It seems likely that getting input from such people could positively influence the ML and hardware.

Of course there will be complexity in this process. This is a 600 million dollar deal involving thousands of people (not just Silo employees, but AMD people, regulators, stakeholders, etc). I don't think anyone is implying this is simple.

I only wanted to say, "This isn't obviously dumb".

mindcrime 104 days ago [-]
I'm curious about these "licensing issues" you speak of. From what I've seen, the vast majority of the ROCm components are MIT licensed, with a few bits of Apache license and NCSA Open Source License mixed in. Could you possibly elaborate on that?
sqeaky 104 days ago [-]
It has been a while, but the last time I got the ROCm drivers and some other items I needed from them, there was a really weird proprietary license. That might not be the case anymore; my information might be stale.
anonym29 104 days ago [-]
"fix"? What is there to fix? AMD has been simultaneously fighting Intel and Nvidia, two MUCH larger companies, and it's been winning the fight against Intel for close to a decade now.

It's certainly not Lisa Su's fault that the clowns over at Intel got stuck on variations of 14nm (with clever marketing names like 14nm+++++) for nearly a decade, but credit certainly is hers for introducing Zen and putting AMD back on top of the x86 market.

With the new x870(e) motherboards and Granite Ridge chips right around the corner, effortlessly destroying the pyrotechnic processing units known as Raptor Lake, it's honestly a miracle to me that Intel's stock price is still as high as it is.

Guess Wall Street still loves those billions of forcefully confiscated taxpayer dollars being doled out by Uncle Sam to a graying dinosaur like Intel who couldn't even compete without those handouts... the quality of their marketplace offerings certainly isn't what's keeping that valuation up!

philistine 104 days ago [-]
I’m also bullish on Intel, but clearly not as much as you. Intel is transitioning right now. x86 is never going to reclaim the crown of most important architecture, so Intel is trying its best to become a foundry for all the fabless customers out there. It’s going to take a long while, but right now they’re the best company to compete with TSMC in ten years. If Apple uses their foundries next decade, you’ll know Intel is back on top.
throwaway2037 103 days ago [-]

    > x86 is never going to reclaim the crown of most important architecture
To be clear, I assume you are including 32-bit and 64-bit, e.g., x86-64. I am surprised by this comment. To me, x86 won the architecture battle because of Linux (and less Microsoft Windows). Nothing is so cheap to deploy and maintain as a Linux server that runs x86-64 procs. Yes, I know you can buy single board computers, but x86 wins in the triangulation of dollars-watts-performance. If you disagree, what do you think is the most important architecture today?
philistine 103 days ago [-]
ARM. Every second that passes, for each person born who will use a computer, at least two people are born who will never use a computer. Those two people will own a phone, though, and that phone will have an ARM chip.

As time goes on, more and more ARM chips will take roles traditionally taken by x86-64. Hell, ARM is already the best-selling architecture. Laws of scale will dictate that investments in x86-64 will fail to keep pace with ARM. Apple Silicon is already showing a small fragment of that effect. The chips are incredibly competitive, and for what Apple has chosen to focus on (perf-per-watt), unbeatable. ARM investments by other companies are catching up to Apple, and x86-64 does not make enough money to reverse that trend.

talldayo 101 days ago [-]
There's a small hitch. Unless China is allowed to directly pilfer ARM IP, they're going to pull off a RISC-V (or homegrown ISA) transition at the drop of a hat. And if American phone manufacturers are intent on diversifying, they very well might too; there's little forcing you to use ARM as your RISC ISA on Android or iOS. Competitive ARM cores don't grow on trees, and the licensing fee doesn't make it very attractive to compete unless you have a de facto ISA license like Apple or Nvidia. And both of those companies have dabbled in RISC-V too. The intent to usurp ARM's throne has been clear ever since Nvidia was blocked from outright buying it in its entirety.

Look at it this way; Apple can design their own cores whether they use ARM or RISC-V, and control the software from top to bottom either way. Nvidia's already shipping RISC-V microcontrollers to cut down on manufacturing margins, and without better options the rest of the world might follow. ARM's dominance is only possible if better RISC options don't exist; and for anyone that's not ARM the idea of IP serfdom sounds awful.

seabird 104 days ago [-]
This site is full of people with the west coast VC-driven-tech bizarro world blinders on. If AMD just keeps at what they've been doing well (matching or beating Intel processors) instead of chasing after the latest buzzword grift bubble, they're doomed in the eyes of people with that mindset.
throwway120385 104 days ago [-]
AMD needs to expand the user base of their GPUs away from gaming and desktop graphics. Buying an AI company that is using their stack for compute is a really good way of learning how to do that. It's essentially now an in-house team to dogfood all of your brand new products and tell your other engineering teams what they're doing wrong.

In my mind it's not about AI per se, but about using the hot use case for GPU to drive meaningful change in your software stack. There are tons and tons and tons of GPGPU users out there who aren't training LLMs but who need a high-quality compute stack.

mandevil 104 days ago [-]
I think AMD's concern is that x86 might not be much of a market in 10 years. Between Apple, Amazon Graviton, and Nvidia Grace Hopper's ARM CPU, we are seeing a sustained, successful attack on x86 the likes of which we haven't seen... ever? With sustained and successful non-x86 desktops, servers, and next-gen datacenter platforms, where does that leave AMD? (Intel has a little more diversification because of its foundry opportunities, but is in the same boat.)
light_hue_1 104 days ago [-]
> "fix"? What is there to fix? AMD has been simultaneously fighting Intel and Nvidia, two MUCH larger companies, and it's been winning the fight against Intel for close to a decade now.

There's everything to fix. AMD is sitting on a gold mine and is squandering massive amounts of money every month that they don't just get their shitty software stack in order.

AMD could be as rich as NVIDIA. Instead, Lisa Su for some insane reason refuses to build even the most mediocre ML-capable libraries for their GPUs.

If I could ask anyone in the ML world at the moment what the heck they're thinking, it would be her. Nothing about AMD's actions on this topic has made sense for years. If I were the board, I'd be talking about her exit for wasting such an opportunity.

latchkey 104 days ago [-]
> Instead, Lisa Su for some insane reason refuses to build even the most mediocre ML-capable libraries for their GPUs.

Spending $665m on a company that builds AI tooling, is a refusal?

lostmsu 103 days ago [-]
Considering people mostly bitch about AMD driver bugs and their GPU computing-related SDKs, I don't think buying a high-level AI sweat shop is going to fix their issues. I am not even sure that in the entire Silo AI you could find 5 people who understand AMD GPU assembly code and can effectively debug generated kernels.

They'd get more value offering $500k+ comp to a few people from https://handmadecities.com/

latchkey 103 days ago [-]
> I don't think buying a high-level AI sweat shop is going to fix their issues... They'd get more value...

It is a wonder why you aren't the CEO.

lostmsu 102 days ago [-]
That's all right: she is 20 or so years older, I'm still on track to get there.
rdtsc 104 days ago [-]
What do you think they should do?
littlecranky67 104 days ago [-]
I've been thinking that NVDA stock is massively overpriced - yes, AI is a hot topic, but their only advantage is the software stack. It is just a matter of time until Intel and AMD realize that they should join hands and do an open-source CUDA alternative for their respective GPUs (yes, Intel has competitive GPUs, and just like AMD and Nvidia they will try to get a share of the AI chip market).
cityofdelusion 104 days ago [-]
Problem is the CUDA advantage is gigantic, and it has been known for years in GPGPU processing, way before AI was a meme. AMD has lost countless developers over the years just on hello-world-style projects. Developers had a solid 6-7 years of living with OpenCL while the green rival had a very mature and nice CUDA sitting there. I've been out of that world for a while now, but it was truly painful and turned a lot of devs off programming AMD devices. Now there's a big moat of entrenched developers that could take decades to displace. It's like trying to displace C++ with Java 22 - possible, but it's a slow, slow trudge, and everyone still remembers Java 1.4.
YetAnotherNick 104 days ago [-]
No, the amount of CUDA code written for PyTorch could easily be rewritten for AMD with a few million or tens of millions in investment. The problem is that it is damn near impossible to get good performance on AMD. For complicated CUDA programs like flash attention (a few hundred lines of code), no number of developers could write those few hundred lines for AMD and get the same performance.
pzo 104 days ago [-]
Even worse: GPGPU is not only about LLMs or even ML. It's also for computer vision, signal processing, and point-cloud processing. E.g., OpenCV has a backend for CUDA, and Open3D and PCL are the same. Even Apple is kind of worse than AMD regarding the ecosystem of libraries and open-source high-performance algorithms - when I tried to port an ICP pipeline to Apple Metal there was nothing there; most libraries and research code target only CUDA.
pjmlp 104 days ago [-]
While I agree with the sentiment towards CUDA, the example is a bit off, given that C++ basically lost all mindshare in distributed computing to Java and others, and is hardly visible in the CNCF projects landscape.

Displacing C++ in compiler development and HFT/HPC/GPGPU with Java 22 is most likely not happening; everywhere else it has been losing mindshare, and the current cybersecurity laws versus WG21's attitude towards them don't help.

breggles 104 days ago [-]
"AMD is among several companies contributing to the development of an OpenAI-led rival to Cuda, called Triton, which would let AI developers switch more easily between chip providers. Meta, Microsoft and Intel have also worked on Triton."

Last paragraph

singhrac 104 days ago [-]
This is a bit misleading since Triton is a bit higher level than CUDA. But the idea is kind of right - there's active development of AMD and Intel backends, and PyTorch is investing in Triton as well.
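
To give a sense of that level: Triton kernels are written in Python against block-level primitives, and the compiler handles the per-thread details that CUDA C++ makes you write by hand. A minimal vector-add sketch (essentially the standard Triton tutorial example, so treat the exact names and launch parameters as indicative):

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
        pid = tl.program_id(axis=0)                    # one program instance per block
        offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
        mask = offsets < n_elements                    # guard the ragged last block
        x = tl.load(x_ptr + offsets, mask=mask)
        y = tl.load(y_ptr + offsets, mask=mask)
        tl.store(out_ptr + offsets, x + y, mask=mask)

    x = torch.rand(4096, device="cuda")  # on ROCm builds of PyTorch this targets AMD GPUs
    y = torch.rand(4096, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)

The same kernel source can then be lowered to different vendors' backends, which is the switching-between-chip-providers point the article is making.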
dehrmann 104 days ago [-]
NVDA's moat is overstated. There are several deep-pocketed players with pretty good AI chips. The big players are training models at such a large scale that they can afford to back them with different architectures. Smaller players use frameworks like PyTorch and TensorFlow, but those are backed by big players buying from Nvidia.

But valuation isn't the NVDA trade right now; it's that there's still a bigger fool.

nipponese 104 days ago [-]
NVDA P/E ratio 78.70

AMD P/E ratio 263.25

If NVDA is overpriced, AMD is REALLY over-priced.

hmm37 104 days ago [-]
AMD's P/E ratio is that high due to their purchase of Xilinx. Its forward P/E ratio is much, much lower, in the 50s.
drexlspivey 104 days ago [-]
I'm curious, how does the all-stock acquisition that closed 2.5 years ago affect their trailing P/E but not their forward P/E ?
staticman2 103 days ago [-]
Different accounting methods, from what I gather. The acquisition is being accounted for over a 5-year period for the trailing P/E but is not included in the forward P/E over this period. This really shows how P/E is not a great metric in a vacuum.
lostmsu 103 days ago [-]
Thank you all for the explanation. I was very puzzled seeing AMD's P/E, as a complete dilettante in financial reporting.
rubatuga 104 days ago [-]
When you see p/e mentioned in a debate run far away.
swores 104 days ago [-]
I wish people wouldn't post such pointless comments - the only users who get any value from reading your sentence are people who already share your view and can go "hah yeah!", while you couldn't be bothered to explain why it's your view to anyone who doesn't already think the same thing. Literally no benefit over not saying anything. Sorry to be blunt.
killerstorm 103 days ago [-]
Perhaps a better metric would be price/revenue.

Profits are very volatile. E.g., if AMD doubles its revenue, profits might go up 10x, as R&D costs do not depend on the number of units sold.
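
A toy illustration with made-up numbers, assuming costs are mostly fixed R&D:

    revenue 10, costs  9  ->  profit  1
    revenue 20, costs 10  ->  profit 10   (2x revenue, 10x profit)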

airstrike 104 days ago [-]
"just" a matter of time... If it were that easy, it would have already been done, or so they say. Also don't forget network effects
nabla9 104 days ago [-]
> Intel has competitive GPUs

No, they don't. Both Intel and AMD compare their newest GPUs favorably against Nvidia's H100, which has been on the market longer and is soon to be replaced, and then it's never against the H100 NVL, for a reason.

Intel and AMD can sell their GPUs only at a lower profit margin. If they could match FLOPS per total cost of ownership, they would sell much better.

Both are years behind.

latchkey 104 days ago [-]
Benchmarks were just run; the MI300X is on par with or better than an H100. The next generation of MI (MI325X) is coming out at the end of the year and those specs look fantastic too, especially on the all-important memory front. 288GB is fantastic.

Both companies will leapfrog each other with new releases. Anyone who believes that there should be only a single vendor for all AI compute will quickly find themselves on the wrong side of history.

talldayo 104 days ago [-]
> 288GB is fantastic

This reminds me of those "192GB is fantastic" people that bought maxed-out M2 Ultras for AI inference. It can be awesome, but you need a substantial amount of interconnect bandwidth and powerful enough local compute before it's competitive. In products where AI is an afterthought, you're fighting against much different constraints than just having a lot of high-bandwidth memory.

I've always rooted for Team Red when they made an effort to do things open-source and transparently. They're a good role-model for the rest of the industry, in a certain sense. But I have to make peace with the fact that client-side AI running on my AMD machines isn't happening. Meanwhile, I've been using CUDA, CUDNN, CUBLAS, DLSS, on my Nvidia machine for years. On Linux!

latchkey 104 days ago [-]
This response feels like you could be conflating desktop usage with enterprise compute?
nabla9 104 days ago [-]
Comparisons against H100 I have seen are always:

  8x AMD MI300X (192GB, 750W)   
  8x H100 SXM5 (80GB, 700W) 
Never against 8x H100 NVL (188GB, <800W)

What the customer does not see is how AMD must spend 2 times more money to produce a chip that is competitive against an architecture that is soon 2 years old.

latchkey 104 days ago [-]
> Never against 8x H100 NVL (188GB, <800W)

Probably because they aren't widely available yet. It is also a dual card to get that much memory, which is still less than 192GB and far less than 288GB.

https://www.anandtech.com/show/18780/nvidia-announces-h100-n...

> What the customer does not see is how AMD must spend 8-10 times more money to produce a chip that is competitive against architecture that is soon 2 years old.

Source?

claytonjy 103 days ago [-]
As a sibling comment mentioned, the 188GB is for a pair. The memory bump comes from enabling the 6th block of memory that is otherwise disabled on H100s. I assume an "NVL box" is still 8 total GPUs, so more like

    8x H100 NVL (94GB, 800W)
The AMD box has a lot more GPU memory.
RyanShook 104 days ago [-]
2024 YTD returns: NVDA 172%, AMD 27%, INTC -30%
cj 104 days ago [-]
Stocks of companies that develop extremely niche and technical things are a tiny sliver of the stock market that I actually think communities like HN would be better at valuing than the market.

Technology stocks are the only ones I personally day trade, for that reason. Example: at the beginning of the pandemic lockdowns, any HN user could have anticipated increased internet usage, bought Cloudflare/Fastly stock, and made a lot of money before the rest of the market realized that CDN companies would significantly benefit from that specific macro event.

I'm not convinced the market (or market analysts) has a deep understanding of Nvidia's long-term advantage. If it did, we would have seen a much slower and steadier valuation increase rather than the meteoric rise. A meteoric stock price rise/fall means the market is having trouble valuing the stock.

In other words, stock prices don't add much to the conversation.

storyinmemo 104 days ago [-]
Intel's profit and revenue have declined for 3 consecutive years. Their price-to-earnings ratio is 36.

Nvidia's revenue is now greater than Intel's, with 20% of the employees that Intel has. Their P/E ratio is 78, roughly double that of Intel.

The market valued Nvidia as growing and Intel as not.

postmeta 104 days ago [-]
PyTorch already supports AMD with device="cuda".

ROCm/HIP support is already open-sourced:

https://github.com/pytorch/pytorch/blob/fb8876069d89aaf27cc9...
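
For illustration, a small sketch of what that looks like in practice (assuming a ROCm build of PyTorch and a supported AMD GPU; torch.version.hip is None on CUDA builds):

    import torch

    # On ROCm wheels, HIP devices are exposed through the familiar CUDA API.
    print(torch.cuda.is_available())       # True with a supported AMD GPU
    print(torch.version.hip)               # ROCm/HIP version string on ROCm builds

    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))  # reports the AMD GPU
        x = torch.ones(1024, device="cuda")   # "cuda" maps to the HIP backend here
        print((x + x).sum().item())           # 2048.0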

throwaway2037 103 days ago [-]
In my view, LLMs are just the first step in the AI journey. The LLM boom will help Nvidia grow very fast and increase R&D. During this time, I expect new AI leaps that are not LLM-related. To be clear: I'm not talking about AGI, but rather other practical advances.
wmf 104 days ago [-]
They've been working on that for years.
dylan604 104 days ago [-]
Yeah? And? So? As if CUDA was developed overnight and never worked on again. Such a weak comment
zokier 104 days ago [-]
AMD has been working on GPGPU for at least as long as Nvidia.

AMD's "CTM" SDK was released in 2006, the same year as CUDA. In 2007 they released the Stream SDK. Then they had the "APP SDK" for a while, which IIRC coincided with their OpenCL phase. And now they've landed on ROCm.

Meanwhile, Nvidia has kept trucking along with just CUDA.

baobabKoodaa 104 days ago [-]
Happy to see this acquisition landing in Finland, but I have to wonder how the purchase price is justified. Silo AI is primarily a consulting company doing "traditional" kinds of AI consulting projects. Their LLM project is like a side hustle for the company.
nicce 104 days ago [-]
Personally, I am a bit sad that nothing stays in Finland. Too many promising companies have been sold to foreign buyers recently, just because the founders were looking for an exit strategy (not claiming that is the case here). Not good for Finland in general.
baobabKoodaa 104 days ago [-]
Well, in this case the company appears grossly overpriced. So even though Finland lost an AI startup, it gained money that is worth more than the startup. That money will to a large extent flow back into the Finnish economy in the form of taxes, investment in new startups, etc.
nicce 104 days ago [-]
> That money will to a large extent flow back into the Finnish economy in the form of taxes, investment in new startups, etc.

Short-term gains, in terms of taxes.

Otherwise, there are no guarantees of that. Shareholders might just build themselves a castle. Who knows. Or move away to a different country.

thenaturalist 104 days ago [-]
> Shareholders might just build themselves a castle.

And then be left with nothing?

Look at Silo's About page.

The people who started this are not slackers, nor did they already have so much money that they could have bought a third Porsche.

Do you think these people will pull back and do nothing as their ability to benefit from and shape the technological advances happening just increases with this exit?

I highly doubt that.

> Or move away to a different country.

And then?

Capital is global. And as per these statistics [0], Finland ranked 4th in per capita VC money invested in 2018, far ahead of France and Germany.

As per this [1] article from May, Finland received the most private equity and VC investment adjusted for GDP in all of Europe in 2023.

Finland is an attractive country to invest in, and I highly doubt native speakers with an excellent local network - i.e. with much more expertise than the average non-Finnish-speaking investor - will fail to be aware of that and capitalize on it.

[0]: https://www.statista.com/statistics/879124/venture-capital-a...

[1]: https://www.goodnewsfinland.com/en/articles/breaking-news/20...

bjornsing 104 days ago [-]
But hats off to Finland for producing these companies. Here in Sweden there’s pretty much nothing in cloud computing or AI, AFAIK.
kakoni 104 days ago [-]
Well, in Finland we seem to produce promising "early-stage" companies which are then eagerly sold to bigger players, whereas in Sweden there is the will (and capital) to keep growing them.
Ekaros 104 days ago [-]
I'm more for taking money off the table when possible. Future returns are future returns; they can materialize, but they might not.
nicce 104 days ago [-]
But if that happens (almost) every time for a company with potential, then you will likely never have a successful company in Finland where the decision making also stays in Finland and the money benefits the country on a larger scale.

There is a saying, "don't sell the cow when you can sell the milk" - maybe there is still some wisdom in it... but Finland keeps selling the cow and buying the milk back over and over again. And then they wonder why the state of the economy is so sad and why they never see a "new Nokia".

bee_rider 104 days ago [-]
Looking at their “about” page,

https://www.silo.ai/about

It looks like they employ 300 "AI experts". So I guess AMD paid roughly $2M a pop. I'm not sure how to put that into perspective really, though…

throw0101c 104 days ago [-]
> It looks like 300 “AI experts” employed. So I guess they have paid $2M a pop.

What was the per employee acquisition cost of WhatsApp (who had 50 employees, IIRC)?

dghlsakjg 104 days ago [-]
When acquiring a telecommunications network, I suspect that network size (user count) is far more relevant for valuation. If anything, having a low employee count with a massive network like WhatsApp's was probably a huge selling point.
_flux 104 days ago [-]
Nice to see AMD finally doing something about competing in the compute market (LLM being the hottest thing at the moment)!

Though apparently MI300X is a fine product as well. But it still needs code.

moffkalast 104 days ago [-]
If they'd spent the $665M on improving ROCm instead, they'd get a hell of a lot more return on it.
vegabook 104 days ago [-]
This is an indictment of Lisa Su's own ROCm strategy: an implicit admission of failure, without explicitly admitting it. I predict this acquisition will cause even more software schizophrenia inside AMD, as multiple conflicting teams pinball their way around towards nowhere in particular.
lostmsu 103 days ago [-]
I have to agree. I doubt AI architects who use high-level libraries are going to fix AMD's bottom line. Maybe only as a marketing ploy for future sales.
samuell 104 days ago [-]
jacobgorm 104 days ago [-]
This happened after Silo trained an LLM on the AMD-powered LUMI supercomputer.
petesergeant 104 days ago [-]
Seems like an excellent exit strategy in hindsight. Spend a gazillion dollars of investor money on AMD hardware, then get bought by AMD because you worked out how to use that hardware.
jacobgorm 104 days ago [-]
baobabKoodaa 104 days ago [-]
Where in that source does it claim that Silo didn't have to pay to use the hardware?
jacobgorm 104 days ago [-]
I don't think it is possible to pay for access to LUMI. I know my company has been in talks about getting free access, as it sits underutilized most of the time. These supercomputers are mostly vanity projects for EU politicians; there is no commercial use case.
Certhas 104 days ago [-]
I don't know about LUMI specifically, but top-tier scientific supercomputers should typically have an 80+% utilisation rate:

https://doku.lrz.de/usage-statistics-for-supermuc-ng-1148309...

Smaller machines tend to run lower, from what I have seen. If you give a large enough pool of scientists access to significant compute resources, they will generally figure out how to saturate them. Also, scientific teams often can't pay for top software engineers; lots of hardware is a way to compensate for inefficient code. If LUMI is underutilized to such an extent, someone is funking up.

There is of course no commercial use case for these computers. That's not the point of these machines.

Pandabob 104 days ago [-]
Came here to point this out. Silo never had to invest huge amounts in GPUs. A shrewd move by the founders.
rubatuga 104 days ago [-]
There is debate about public investment into private ventures, but in this case it may provide long term benefits to Finland
petesergeant 104 days ago [-]
Even better!
stefan_ 104 days ago [-]
An inverse Nadella, wherein you buy a chunk of OpenAI and they turn around and buy a bunch of Azure time (then give it away to people on ChatGPT cuz ain't no one about making money in that business)
kakoni 104 days ago [-]
Well actually it was Silo + Turku University's TurkuNLP group [1]

[1] https://www.amd.com/content/dam/amd/en/documents/resources/c...

pavlov 104 days ago [-]
The economic mood in Finland is downright depressed [1]. This kind of news is therefore extremely welcome, because it indicates there's a way forward out of the old-industry doldrums, where people are still moaning about closed paper mills and Nokia's failure 15 years ago.

$665M USD isn't a staggering number by Silicon Valley standards, but it's very significant for a nation of five million people that hasn't seen global startup successes like neighboring Sweden with Spotify and others.

[1] The actual level of depression is somewhat hard to track because Finns are always pessimistic regardless of how well they're doing. (This also makes them the happiest people on Earth in polls. The situation right now is never quite as bad as one had expected beforehand, so when a pollster calls to ask, the conclusion must be that they're pretty happy with things overall at that specific moment, but surely everything is going in the wrong direction anyway.)

SebaSeba 104 days ago [-]
Contrary to what you say, Finnish startups have been very successful. Here are just a couple of examples:

- Supercell sold an 81.4% stake to Tencent in 2018 at a valuation of $10.2 billion.

- Wolt was acquired by DoorDash in 2021 at a valuation of $8.1 billion.

The list is much longer, including startups that currently generate revenues of tens or hundreds of millions a year and have not been sold.

pavlov 104 days ago [-]
These two are great success stories, but they’re also the only Finnish unicorn exits in the post-Nokia era.

The exits were somewhat less exciting to founders than these numbers suggest. Supercell sold 51% to SoftBank already in 2013 for 1.1B EUR. And Wolt’s purchase price was paid entirely in DoorDash stock which was down 75% by the time the lockups expired.

Startups generating low-hundreds of millions in annual revenue just aren’t unicorns anymore, unless they happen to be AI.

SebaSeba 104 days ago [-]
Both Supercell and Wolt still have their headquarters firmly in Finland. The founders and Finnish early investors have gained hundreds of millions or billions of euros in wealth, which they have further spent and invested in Finland. They have paid huge amounts of taxes and keep doing all of this, since they are still located in Finland. It's hard to downplay the value of that, IMO. Rovio wasn't a complete disaster either: it made billions of euros over many years and was later sold to Sega for >$700 million. It still has its HQ in Finland.

There are plenty of interesting and fast-growing startups still left here, for example Supermetrics, Varjo, Smartly, Iceye, and Aiven, to name a few. IMO you are being pessimistic.

SebaSeba 104 days ago [-]
In any case, I agree that the acquisition is great news and that the economy is in a depression. :) A huge part of it is that Finnish mortgage rates are mostly tied directly to Euribor, unlike in other euro countries, so when interest rates went up post-Covid, Finns got f*cked. Hopefully the Euribor rate will come down and mortgage payments will start to shrink as the loans are paid off.
thenaturalist 104 days ago [-]
In 2023, Finland received the most private equity and VC investment adjusted for GDP in all of Europe: https://www.goodnewsfinland.com/en/articles/breaking-news/20...
pavlov 104 days ago [-]
Which is great, but doesn’t move the needle of popular perception the same way as large acquisitions and IPOs do.

The start of the startup investment pipeline in Finland has been flowing pretty well. The outputs at the end of the pipeline have been more questionable. Silo’s acquisition is a positive example of activity at that end.

non-e-moose 104 days ago [-]
Former AMD employee here (2007-2012). AMD 'dropped the ball' BADLY in 2012 when then-VP Ben Bar-Haim decided to do a software purge and focused on retaining the over-bureaucratic folks of ATI/Markham. Net result: Nvidia picked up a lot of very smart researchers and developers from AMD (I know of a couple who were thoroughly disgusted with AMD management at that time).

He also trashed a lot of good and useful software projects for seemingly protectionist reasons (if it wasn't ATI/Markham, it was dumped).

glzone1 104 days ago [-]
Wasn't there a point at which AMD was actually looking at buying Nvidia, but Jensen wanted to be something like CEO? Jensen actually worked at AMD, so there was already a connection there.

Instead, AMD bought ATI, which if I remember correctly was barely hanging on. Not saying it was a bad purchase, just interesting that they made a bet on ATI (which always had buggy drivers in my experience), a company that hadn't really demonstrated success... how decisions ripple for a while.

JonChesterfield 104 days ago [-]
Interesting context, thank you.
trhway 104 days ago [-]
$2M/head. That is a steal, even by European standards.

"In July 2024, Silo AI has 300+ employees out of which 125+ hold a PhD degree."

high_na_euv 104 days ago [-]
What are they going to give AMD that justifies this high a price?
thenaturalist 104 days ago [-]
Expert talent and probably some in-house tech.

Mainly talent, I guess, which they can put to work accelerating Triton development, their alternative to CUDA.

duxup 104 days ago [-]
Does the desirable talent in this case have equity / future vesting equity that is a part of the price?

I just wonder, because many decades ago I was part of a company that wanted to get into a market. They bought a little startup, and over the course of a year everyone quit, and the project eventually folded entirely ;) It was sort of hilarious, but also bizarre that the acquiring company didn't think of that.

thenaturalist 104 days ago [-]
I mean, buying a "private AI lab" doesn't sound to me like they got the purchase price's worth in IP, i.e. something so desirable that nobody else has unlocked it and that lends itself particularly well to being integrated with AMD tech.

Let's see if more details come to light, but a good part of that price was surely spent on people.

It'd be hilarious indeed if they weren't able to retain them, or hadn't properly incentivized them to stay.

stanleykm 104 days ago [-]
Did you mean ROCm? AFAIK Triton is a Python framework that sits on top of CUDA and ROCm.
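
For reference, a Triton kernel is just decorated Python, and the same source can target either backend. A minimal sketch (vector add), assuming torch and triton are installed:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        pid = tl.program_id(0)                    # one program instance per block
        offs = pid * BLOCK + tl.arange(0, BLOCK)  # element offsets for this block
        mask = offs < n                           # guard the tail of the array
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    n = 1 << 20
    x = torch.rand(n, device="cuda")  # "cuda" also covers AMD GPUs on ROCm builds
    y = torch.rand(n, device="cuda")
    out = torch.empty_like(x)
    add_kernel[(triton.cdiv(n, 1024),)](x, y, out, n, BLOCK=1024)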
mindcrime 104 days ago [-]
I don't know how this specific acquisition is going to work out, but at least we can say one thing. This represents some kind of response to the constant chorus of "AMD don't appreciate the importance of software. AMD should invest more in software. CUDA, CUDA, CUDA" comments that one always hears when AMD is mentioned.

Of course there's room to debate the details here: would they have, perhaps, been better off investing that money in their existing software team(s)? Or spinning up (a) new team(s) from scratch? Who's to say. But at least it shows some intention on their part to beef up their software stance, and generally speaking that feels like a positive step to me.

But then again, I'm an AMD fan boi who is invested in the ROCm ecosystem, so I'm not entirely unbiased. But I think the overall point stands, regardless of that.

AlotOfReading 104 days ago [-]
AMD has also been doing a bunch of hiring for their software teams. I've seen a few colleagues that AMD previously couldn't have afforded accept offers to work on GPU stuff.
viewtransform 104 days ago [-]
Look at the AMD careers website. There are a lot of software jobs related to AI, and the pay has gone up to $300K/yr (in Santa Clara).
Aaronstotle 104 days ago [-]
This and the MI300X make me hopeful for AMD.
latchkey 104 days ago [-]
It really is a fantastic piece of hardware. We just need the software to catch up.
dotnet00 104 days ago [-]
Which, to be fair, has been an apt description of AMD GPUs for the better part of a decade: great hardware, god-awful software, and an even worse long-term software strategy.

It's why the 'fine wine' spin on the long-term performance of AMD GPUs exists in gaming circles.

latchkey 104 days ago [-]
You're totally right. That said, spending $665M on an AI company seems, at first glance, like a step in the right direction. I'm sure there are a thousand ways they could have spent that much money, but hey... I do appreciate them at least trying to do something to resolve the issue. Another way to think of it is that there is now a whole team that isn't dedicated to Nvidia.
dotnet00 104 days ago [-]
Yeah, I'm not arguing against this acquisition, just commenting on how things have been so far. At this point I'm kind of apathetic: it's good if whatever they do eventually fixes their software woes, and I'll come back to their stuff then. If not, I'm fine with sticking to CUDA for now.

Ultimately they're all GPU programming languages; once you're good with one, switching to another is not that hard (as long as the supporting software is good, of course).

daghamm 104 days ago [-]
Since FT is paywalled and the press release link from Silo is currently pointing nowhere:

https://www.silo.ai/blog/amd-to-acquire-silo-ai-to-expand-en...

I've no idea what is going on. This is 5 times bigger than all of AMD's other AI acquisitions in the last 12 months combined. The only link between Silo and AMD is that Silo has been using an AMD accelerator cluster for training.

cooper_ganglia 104 days ago [-]
I honestly don't understand how paywalled links get so much traction; most people probably can't even engage with the material. Thanks for the direct link to Silo AI's press release!
idunnoman1222 104 days ago [-]
Because everyone knows how to use archive.org
thenaturalist 104 days ago [-]
See @helsinkiandrew's comment; he posted the de-paywalled link: https://archive.ph/33O61
YinLuck- 104 days ago [-]
[dead]
getcrunk 104 days ago [-]
I'm still pissed that when they finally brought ROCm support for their GPUs to Windows, it started with the 6800 XT… I have the 6700 XT.
hiddencost 104 days ago [-]
This seems like a pure acqui-hire, and not at a good price? $2M/head with a four-year lock-in isn't great.
woadwarrior01 104 days ago [-]
Looks like a consulting company [1] at first glance. Also, an empty HuggingFace account [2].

[1]: https://www.silo.ai/

[2]: https://huggingface.co/SiloAI

anewhnaccount2 104 days ago [-]
The models are here: https://huggingface.co/LumiOpen
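
A minimal sketch of loading one of them with standard Hugging Face tooling; the Poro-34B model id below is taken from that page (check there for current names), and note that a model this size needs serious hardware:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Sketch only: Poro-34B is one of the LUMI-trained LumiOpen models;
    # at ~34B parameters it needs a large GPU (or several) to run.
    name = "LumiOpen/Poro-34B"
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("Suomi on", return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    print(tok.decode(out[0], skip_special_tokens=True))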
baobabKoodaa 104 days ago [-]
Who is downvoting this? You are correct. Silo.AI is a consulting company with an LLM side hustle. This acquisition is weird.
m3kw9 104 days ago [-]
Looks like they have expertise in using AMD GPUs to train LLMs, and they will be tasked with helping AMD catch up to CUDA.
wantsanagent 104 days ago [-]
I'm curious how this deal happened. There are a lot of LLM shops out there; how did this Nordic company get AMD's attention, and why did AMD think this company stood out from the crowd?
m3kw9 104 days ago [-]
They had their team use AMD hardware to train LLMs.
samuell 104 days ago [-]
They've been around since before the LLM era? (I learned about them in 2018)
Workaccount2 104 days ago [-]
Imagine AMD had simply put that $665M into tooling and driver development. The stock probably would have doubled.
thenaturalist 104 days ago [-]
What's the difference from what they did with this acquisition?

Who's going to improve tooling and develop drivers?

PhD-level AI experts such as those employed by Silo AI, probably, right?

EDIT: For context [0], Nvidia invested billions into CUDA development way back when it was unsexy.

Clearly a second mover won't need that much; Nvidia proved the market.

But a billion doesn't seem like a large sum for the potential upside of AMD capturing a significantly larger share of the budgets going into AI - many times the value of this acquisition.

0: https://www.newyorker.com/magazine/2023/12/04/how-jensen-hua...

Workaccount2 104 days ago [-]
Perhaps their goal is to develop an LLM and then prompt it to fix ROCm.
speed_spread 104 days ago [-]
The org structure and culture dynamics of large companies like AMD make it very difficult to achieve quality results when starting from scratch. $665M might well have been too much money, putting too much pressure on delivering results for anything valuable to emerge. A $665M acquisition means they know exactly what they are getting, and they are getting it _now_.
mistrial9 104 days ago [-]
Also note they paid in cash... usually a premium in itself.
petesergeant 104 days ago [-]
> Imagine AMD simply put that $665M into tooling and driver development

Feels like a company saying they're going to "spend a few weeks paying down tech debt", which generally amounts to nothing getting done. Progress happens in creative pursuit of another goal and under hard constraints, in my experience. You can fix a specific piece of tech debt while working on an adjacent product feature, and you can create great tooling and drivers while working on a product that needs them, but money set aside for greenfield development often just gets set alight. I have worked at at least one very well-funded place where the lack of product focus, and thus the lack of any constraints, led to endless wheel-spinning under the guise of "research".

duxup 104 days ago [-]
I always wonder about these thought experiments. Given a few good, talented people and good management... you'd think they'd be able to put a team together, but maybe talent in this area is few and far between?

To be clear, I'm not disagreeing; I really don't know. But yeah, $665M could do a lot.

short_sells_poo 104 days ago [-]
You are basically paying some premium for the fact that someone already did the hiring and built the talent pool and a cohesive team. Doing that from scratch is a multi-year project, so they basically bought a shortcut.
duxup 104 days ago [-]
Yeah, I get the general idea that you're paying more for the assembled team and the software/experience.

It's just always wonky, as acquisitions generally don't seem to be 100% known quantities/outcomes. People pay big premiums for what sometimes turns out to be nothing.

That package of talent etc. is handy, but it also seems like it sometimes makes it harder to really know what you'll get out of it. It's an interesting dynamic.

yoouareperfect 104 days ago [-]
If AMD and Intel team up on software to replace CUDA, then I'm selling all my NVDA stock and even shorting it short term.
machinekob 104 days ago [-]
They tried SYCL and no one is using it
ColonelPhantom 104 days ago [-]
SYCL also doesn't meet the listed requirement of 'Intel and AMD teaming up'. Intel seems to be the only hardware vendor that actually cares about SYCL, and AMD is instead backing HIP, which is 'standardized' but boils down to 'just take CUDA and run :s/cu/hip/g'.

If AMD were to work on SYCL tooling and, say, build a 'syclcc' next to 'hipcc' that ingested SYCL to run on ROCm, I feel like interest in SYCL could grow, since Intel is already supporting it properly and it would then actually be a cross-vendor standard.

Codeplay (which is part of Intel) does provide 'plugins' to run oneAPI (SYCL) on NVIDIA and AMD hardware, which is great, but they are still being made, indirectly, by Intel, who in the end wants to sell Intel hardware.
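
For what it's worth, the ':s/cu/hip/g' quip isn't far off. AMD's real tools are hipify-perl and hipify-clang, which handle many more cases, but the bulk of a port is a mechanical rename of API prefixes. A toy sketch of the idea (abridged mapping, illustration only):

    # Toy illustration only: real porting uses AMD's hipify-perl/hipify-clang,
    # which also handle kernel-launch syntax and many more APIs. The point is
    # that most of the work is a mechanical rename of API prefixes.
    CUDA_TO_HIP = {
        "cuda_runtime.h": "hip/hip_runtime.h",
        "cudaMalloc": "hipMalloc",
        "cudaMemcpy": "hipMemcpy",
        "cudaFree": "hipFree",
        "cudaDeviceSynchronize": "hipDeviceSynchronize",
    }

    def toy_hipify(source: str) -> str:
        for cuda_name, hip_name in CUDA_TO_HIP.items():
            source = source.replace(cuda_name, hip_name)
        return source

    print(toy_hipify("cudaMalloc(&buf, n); cudaMemcpy(dst, src, n, cudaMemcpyHostToDevice);"))
    # -> hipMalloc(&buf, n); hipMemcpy(dst, src, n, hipMemcpyHostToDevice);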

dst_ 103 days ago [-]
I'm quite surprised by this acquisition (and its scale) for a few reasons:

1. It's a consulting firm and not a product shop, so you're only paying for people, although from the start they've tried really hard to brand themselves as a startup.

2. They've been training an LLM, but mainly with taxpayer money on a government supercomputer (which uses AMD chips). That LLM is arguably their only product, and it's completely open source.

3. Some of the founders are well connected in Finland, which has given them a seat (and visibility) in government initiatives, but this is mainly BS.

4. They have the least rigorous hiring process I’ve ever witnessed.

I remember checking them out in 2018 when I was looking into switching companies. Back then when I checked their folks on LinkedIn, the prior experience in AI most of their people had was taking a few Coursera courses.

Later they called me and asked me about joining and I’ve never had an interview where no one asked anything technical beyond what I’ve worked on.

I also hope employees get something out of this, because their offer to me was a revenue share on billed hours with no mention of equity. However, my understanding is that they had a large investment from private equity (Altor), so the mission was to make the PE firm money, not the employees.

They've allowed part-time arrangements where you keep working at a university, so I assume this has been quite attractive for many researchers, i.e. you get paid something on top of the crappy university pay and you also get to see the real problems companies have.

rasz 103 days ago [-]
It will work just as well as the $1.9B Pensando acquisition, or the $334M SeaMicro one.
rreichman 103 days ago [-]
Was Pensando a bad acquisition? Isn't it a bit early to tell?
karlzt 103 days ago [-]
I wonder why it is 665 instead of 666?
hi 104 days ago [-]
Anyone know a timeline for AMD on MLPerf?
latchkey 104 days ago [-]
It won't be for a while. It really takes someone to focus on this, and it isn't just AMD: the team at MLPerf will need to step in as well, and from my discussions with them, they are busy enough as it is with their own goals.

My company, Hot Aisle, has a box of MI300X GPUs (soon to be +16 more) that we have dedicated as a free resource for unbiased benchmarking. That's instigated articles like the Chips & Cheese one and the Nscale benchmarks post...

https://chipsandcheese.com/2024/06/25/testing-amds-giant-mi3...

https://www.nscale.com/blog/nscale-benchmarks-amd-mi300x-gpu...

georgehotz 104 days ago [-]
AMD is already on MLPerf in the form of the tinybox red :)
Kelteseth 104 days ago [-]
They should have bought tiny for 600 million ;)
latchkey 104 days ago [-]
Now do #mi300x. I've already offered you the compute resources, but you called me an AMD shill, lol... pot kettle... ¯\_(ツ)_/¯
nashashmi 104 days ago [-]
AMD once bought an ARM server manufacturer. It went down the tubes. I don't think this will work either.
bot0047 104 days ago [-]
If Nvidia is IBM, then AMD could be the next Microsoft.
hmaxwell 104 days ago [-]
This is a nothing burger compared to Amazon and Google giving $4B and $2B, respectively, to Anthropic.
jenny2244 104 days ago [-]
[flagged]
uptownfunk 104 days ago [-]
Smells fishy on antitrust grounds.
lopkeny12ko 104 days ago [-]
Wow. I hope this is blocked by the DoJ on antitrust grounds.
duxup 104 days ago [-]
Arguably, as far as antitrust grounds go, wouldn't AMD becoming a more viable competitor in the AI space be ... good?