Assuming 8MB per instance, in theory I could run over 2,000,000 copies of DOOM on this thing at the same time.
Would love to know what the framerate would be
Hope I get crazy rich one day so I can spend money doing stupid stuff like this.
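Back-of-envelope in Python, assuming a flat 8MB per instance and all 16TB usable (both optimistic guesses):

    total_ram = 16 * 1024**4          # 16 TB in bytes
    per_instance = 8 * 1024**2        # 8 MB per DOOM instance (a guess)
    print(total_ram // per_instance)  # 2097152 -> ~2.1 million copies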
throwup238 24 hours ago [-]
Now you just need to figure out how to simulate transistors in an instance of the game, so that you can port DOOM to run on a 2,000,000 transistor DOOMputer.
yetihehe 22 hours ago [-]
Does anyone have any idea what throughput you can achieve with this? Is it simply 128 × 5600 MT/s, which would mean ~700 GB/s?
smolder 22 hours ago [-]
64 channels, 2 DIMMs per channel, so I guess half that.
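Strictly, the peak is per channel, not per DIMM. A napkin estimate, assuming the standard 64-bit (8-byte) DDR5 data path per channel:

    channels = 64
    transfer_rate = 5600e6            # DDR5-5600: 5.6 GT/s
    bytes_per_transfer = 8            # 64-bit data path per channel
    peak = channels * transfer_rate * bytes_per_transfer
    print(f"{peak / 1e9:.0f} GB/s")   # ~2867 GB/s theoretical peak

Sustained real-world throughput lands well below the theoretical peak.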
znpy 19 hours ago [-]
It sounds less interesting when you realize the system has four processors, so you're getting "only" four terabytes per CPU, which isn't that much more than what you can currently do on a single socket.
Some applications get latency spikes when dealing with NUMA systems.
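One common mitigation is pinning each job to a single node so its memory stays local. A minimal sketch, assuming node 0 owns CPUs 0-31 (check /sys/devices/system/node/ for the real topology):

    import os
    # Restrict this process to node 0's CPUs; Linux's default first-touch
    # policy then keeps its allocations in node-local memory. Strict memory
    # binding needs numactl/libnuma, e.g. `numactl --membind=0 --cpunodebind=0`.
    os.sched_setaffinity(0, set(range(32)))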
tuananh 1 day ago [-]
it's 16TB of DDR5 btw
metadat 1 day ago [-]
Yes, 128 DIMMs × 128GB.
Good for a database, maybe.
What else?
smolder 23 hours ago [-]
Serving remote desktops to several hundred developers. Maybe a video content server for a Netflix- or YouTube-type business. Hosting a large search index? Some kind of scientific computing?
guenthert 21 hours ago [-]
Numeric simulation (HPC). Some, not all, simulations need lots of memory. In 2018 the larger servers running such workloads had 1 TiB, so I'm not the least surprised that six years later it's 16.
HeatrayEnjoyer 22 hours ago [-]
A half dozen GPT-4 instances
metadat 15 hours ago [-]
LLM inference processors (GPUs) don't use DDR; they use special, costly stacked HBM mounted on the GPU package.
I tested out running Llama on a 512GB machine, and it's rather slow and inefficient. Maybe 1 token/sec.
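The slowness is mostly memory bandwidth: at batch size 1, each generated token streams roughly the whole model through the memory bus, so tokens/sec is capped at bandwidth divided by model size. Illustrative numbers (guesses, not measurements):

    # tokens/sec <= sustained memory bandwidth / bytes read per token
    bandwidth_gb_s = 100    # plausible sustained CPU memory bandwidth
    model_gb = 130          # e.g. a ~65B-parameter model at fp16
    print(bandwidth_gb_s / model_gb)  # ~0.77 tokens/sec

which roughly matches the ~1 token/sec I saw.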
rustcleaner 1 day ago [-]
Large Language Models.
moomoo11 24 hours ago [-]
Dumb question, but why don't we see more cracked-out high-memory machines? I mean like 1 petabyte of RAM.
Or do these already exist?
guenthert 21 hours ago [-]
I'd think the market share for applications which need a huge amount of memory but little CPU processing power and memory bandwidth is rather small.
Lenovo's slides indicate that they foresee this server being used for in-memory databases.
Weren't there also distributed filesystems where the metadata server couldn't be scaled out?
eqvinox 17 hours ago [-]
We don't see more of these machines because most tasks are better served by a higher number of smaller machines. The only benefit of boxes like this is having all of that RAM in one box. Very few use cases need that.
moomoo11 6 hours ago [-]
Would be fun for a graph db
rustcleaner 1 day ago [-]
Qubes OS.