Quake runs in just 276 kB RAM on the Arduino Nano Matter board (community.silabs.com)
Dwedit 2 days ago [-]
The game is not running from RAM; it is running from flash ROM. This means that code and static data can be placed in ROM rather than in RAM.

This is comparable to the GBA, which has 384KB of total RAM and a ROM cartridge slot for storing the game code and data. But the GBA runs at only 16 MHz, while the EFR32MG24 used for this project is overclocked to 136.5 MHz.

smirutrandola 2 days ago [-]
The article says that even if you put all the static data in flash, you still have to fit about 1.5 MB of non-static data if you don't optimize it. Besides that, all the graphics are loaded from the relatively slow external SPI flash, which tops out at 17 MB/s even with the overclock. Yes, the GBA is much slower, but access to cartridge data is faster than 17 MB/s (and the random-read speed is in the 100 ns range, not the 1-2 us range).
zahlman 2 days ago [-]
>This is comparable to the GBA, which has 384KB of total RAM

I assume you are thinking of the 32KiB of on-chip work RAM plus 256KiB of on-board work RAM plus 96KiB of video RAM. But pedantically there is also a 1KiB region of palette RAM and 1KiB of "object attribute memory", separate from the VRAM, making 386KiB total. (Not counting the I/O control registers, which one ordinarily wouldn't think of as "memory" but get a dedicated region of that address space.)

Aside from the ROM on a cartridge - up to 32MiB - there is 16KiB of BIOS ROM, and the system can address 64KiB of EEPROM for game save data.

https://problemkaputt.de/gbatek.htm#gbamemorymap
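
Summing the regions listed above:

  32 (IWRAM) + 256 (EWRAM) + 96 (VRAM) + 1 (palette) + 1 (OAM) = 386 KiB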

Dwedit 2 days ago [-]
I really don't count Palette and OAM as extra memory, despite me having used unused palette memory as a place to store sound sample data.
smirutrandola 2 days ago [-]
Link to the video showing Quake running: https://www.youtube.com/watch?v=hVnfwzxTJ00
vardump 2 days ago [-]
A great achievement, given the hardware.

Quake will probably run at 60 FPS on the RP2350, double buffered and with full sound quality. But it's nowhere near as hard to achieve that as on the Arduino Nano Matter board. The RP2350 has 520 kB RAM, a dual-core Cortex-M33, and can even run at 300 MHz (150 MHz nominal).

Earlier: https://news.ycombinator.com/item?id=41195669

bittwiddle 2 days ago [-]
Impressive memory optimizations. Streaming out converted pixel values was a neat way of pulling off the "framebuffer" without having enough memory to store all the 16-bit values. Solid engineering.
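For anyone curious, a minimal sketch of what that streaming could look like (hypothetical names, assuming an 8-bit palettized internal buffer and an RGB565 LCD; this is not the project's actual code):

  #include <stdint.h>
  #include <stddef.h>

  #define SCREEN_W 320

  /* Hypothetical platform hook: on real hardware this would push the pixels
     to the LCD over SPI/DMA. Stubbed out here for illustration. */
  static void lcd_write_pixels(const uint16_t *px, size_t count)
  {
      (void)px; (void)count;
  }

  /* Convert one 8-bit palettized scanline to RGB565 and stream it out,
     so only a single line of 16-bit pixels ever lives in RAM.
     palette565 is assumed to hold the 256 palette entries in RGB565. */
  void stream_scanline(const uint8_t *line8, const uint16_t *palette565)
  {
      uint16_t line16[SCREEN_W];            /* 640 bytes, reused every line */
      for (int x = 0; x < SCREEN_W; x++)
          line16[x] = palette565[line8[x]];
      lcd_write_pixels(line16, SCREEN_W);
  }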
Dwedit 2 days ago [-]
Ooh, once we get to "streaming pixel values" out, then we're secretly using the LCD screen's internal memory as a second framebuffer.
marlone93 2 days ago [-]
The LCD's internal memory is write-only and is used just to hold the image being shown. Unlike the GBA, where the video RAM is like general-purpose RAM, just slower.
mrguyorama 1 days ago [-]
It is no different than when the Atari 2600 used the slow fade of the CRT's phosphor coating as its "framebuffer".

Anything you send out over a dedicated video port is gone to you.

thesnide 2 days ago [-]
This shows the power of having a fixed computing budget.

A lot of modern software should really be written this way to limit the amount of energy used. Especially on laptops, but also in the cloud.

Yet it is almost never worth it to optimize compared to adding more features to fill out a list ;)

lacoolj 2 days ago [-]
what's with the website load time? like individual elements on this page taking multiple seconds to show. is it not 2024 yet?
bragr 2 days ago [-]
Having a CDN doesn't help your performance when you tell it not to cache the page

  bragr@<>:~$ dig +short community.silabs.com
  community.silabs.com.00da0000000l2kimas.live.siteforce.com.
  sdc.prod.communities.salesforce.cdn.edgekey.net.
  e78038.dsca.akamaiedge.net.
  173.223.234.17
  173.223.234.11
  bragr@<>:~$ curl -Is https://community.silabs.com/s/share/a5UVm000000Vi1ZMAS/quake-ported-to-arduino-nano-matter-and-sparkfun-thing-plus-matter-boards?language=en_US | grep -i cache
  cache-control: no-cache,must-revalidate,max-age=0,no-store,private
  x-origin-cache-control: no-cache,must-revalidate,max-age=0,no-store,private
That said, the assets are cacheable, so there was probably just a thundering herd for the assets until they were well cached by Akamai's mid and edge tiers
toast0 2 days ago [-]
When I've used a CDN, there were separate headers to control the CDN with the same semantics as cache-control... so you can serve the cache-control you want to browsers and control the CDN separately.

If it doesn't feel like it's cached, it probably isn't; but you can't assume the cache-control headers you see are controlling the CDN.
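
For example, with a CDN that honors Surrogate-Control (support and exact header names vary by CDN; this is just the general pattern, not necessarily what Akamai uses here):

  Cache-Control: no-store, private      <- what browsers see and obey
  Surrogate-Control: max-age=300        <- what the CDN obeys (and typically strips)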

bragr 2 days ago [-]
Depends on the Akamai property config which could be anything. IIRC by default it uses the standard cache headers and doesn't strip or rewrite them, although it definitely can.
iknowstuff 2 days ago [-]
ugh, old.reddit.com sends a no-store when signed in and it's driving me mad because it breaks the back/forward cache.
ahoka 2 days ago [-]
All “security” guidelines blindly suggest no-store. Also private with no-store makes no sense.
Muromec 2 days ago [-]
>individual elements on this page taking multiple seconds to show. is it not 2024 yet?

It's exactly what 2024 feels like. Future sucks.

Gee101 2 days ago [-]
Maybe it's running on an Arduino Nano Matter.
ant6n 2 days ago [-]
The real hackery is the port for the GBA mentioned in the article (running at 16.7 MHz): https://www.xda-developers.com/how-quake-ported-game-boy-adv...
smirutrandola 2 days ago [-]
Yes that is really impressive.

Still, it was done with 50% more memory, 1/3 of the resolution, and without implementing all of the game's features.

vardump 1 days ago [-]
But with a fraction of the CPU resources. The Arduino Nano Matter's Cortex-M33 is overclocked to 135 MHz, while the GBA's ARM7TDMI runs at a mere 16.78 MHz.

The ARM7TDMI takes 1-4 cycles to perform a simple 32-bit x 32-bit multiply, depending on the multiplier operand. I believe the Cortex-M33 takes just 1 cycle to do the same. The ARM7TDMI has no divide instruction and, critically, no FPU, which Quake requires.

The GBA has only 32 kB of zero-wait-state RAM (aka internal work RAM), versus 276 kB on the Arduino Nano Matter.

The GBA's 256 kB RAM block (external work RAM) has a massive 6-cycle access time when loading a 32-bit value.

It's a true miracle someone even managed to get 1/3 of the resolution on this weak hardware!

marlone93 1 days ago [-]
I think the article says the same. The GBA port is impressive.

I guess an FPU would not even be required at 120 px of horizontal resolution.

The CM33 does even more in a single cycle: two 16-bit multiplications plus addition and accumulation, for instance (see the sketch below).

Still, it is the first time the "full" Quake has been ported in less than 300 kB.
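
To illustrate that dual 16-bit MAC: a minimal sketch using the CMSIS __SMLAD intrinsic (assuming a Cortex-M33 with the DSP extension, with the CMSIS core headers already pulled in via the device header; the function and names here are made up):

  #include <stdint.h>
  #include <string.h>
  /* __SMLAD is provided by the CMSIS core headers (core_cm33.h) on cores
     with the DSP extension, normally included via the vendor device header. */

  /* Dot product of two int16 buffers: each __SMLAD does two 16x16 multiplies
     plus the accumulation, nominally in a single cycle. */
  int32_t dot16(const int16_t *a, const int16_t *b, int n)
  {
      int32_t acc = 0;
      int i;
      for (i = 0; i + 1 < n; i += 2) {
          uint32_t pa, pb;
          memcpy(&pa, &a[i], sizeof pa);   /* pack two 16-bit lanes */
          memcpy(&pb, &b[i], sizeof pb);
          acc = (int32_t)__SMLAD(pa, pb, (uint32_t)acc);
      }
      if (i < n)                           /* odd tail element */
          acc += (int32_t)a[i] * b[i];
      return acc;
  }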

vardump 1 days ago [-]
Agreed on the other counts, except for the FPU.

Quake performs one FPU divide per pixel for texture mapping perspective correction.

The ARM7TDMI does not have any kind of divide, so perspective correction is tricky, even at just 120 px horizontally.

marlone93 1 days ago [-]
Afaik, Quake does not do one divide per pixel; it does it in steps of 8 pixels (see dscan.c in winquake). Yes, there is no divide instruction, but instead of a software divide taking hundreds of cycles, tables and other approximations could be used. Of course, div/vdiv, which take only 14 cycles or less, are a strong boost on the CM4/CM33.
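
For context, a rough sketch of the 8-pixel stepping (not Quake's actual code: names are made up, and the fixed-point stepping and clamping of the real routine are omitted):

  #include <stdint.h>

  /* Perspective-correct span drawing with one divide per 8-pixel segment:
     compute exact texture coords at the segment ends (1/z divide), then
     linearly interpolate s,t across the segment. */
  void draw_span(uint8_t *dst, const uint8_t *tex, int tex_w,
                 float sdivz, float tdivz, float zi,                  /* s/z, t/z, 1/z at span start */
                 float sdivz_step, float tdivz_step, float zi_step,   /* per-pixel gradients */
                 int count)
  {
      float z = 1.0f / zi;                    /* divide at the span start */
      float s = sdivz * z, t = tdivz * z;

      while (count > 0) {
          int run = count > 8 ? 8 : count;

          /* One divide per segment: exact coords at the segment end. */
          float zi_end    = zi    + zi_step    * run;
          float sdivz_end = sdivz + sdivz_step * run;
          float tdivz_end = tdivz + tdivz_step * run;
          float z_end = 1.0f / zi_end;
          float s_end = sdivz_end * z_end, t_end = tdivz_end * z_end;

          /* Linear interpolation in between (fixed-point 16.16 in the real code). */
          float ds = (s_end - s) / run, dt = (t_end - t) / run;
          for (int i = 0; i < run; i++) {
              *dst++ = tex[(int)t * tex_w + (int)s];
              s += ds;
              t += dt;
          }

          s = s_end; t = t_end;
          sdivz = sdivz_end; tdivz = tdivz_end; zi = zi_end;
          count -= run;
      }
  }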
vardump 1 days ago [-]
Oh, it divides only once every 8 pixels and interpolates in between and still looks so good? I stand corrected.

By the way, it's "d_scan.c" for anyone who's trying to web search for it.

marlone93 1 days ago [-]
It means almost an order of magnitude fewer divisions (and fewer of the associated calculations as well).

Quake had to do this because one divide per pixel would have been too much, especially for a low-end Pentium, when it was released in 1996. And it is not even noticeable, especially at low res.

badsectoracula 21 hours ago [-]
Depends on the textures used. High-contrast textures with vertical lines (e.g. dark wood on a bright wall) would make the distortion very visible even at 320x200. However, most of the game's textures are not like that.

There are some user-made maps, however, where this can be seen (e.g. I remember playing a map that was supposed to be inside a fantasy town, and it used a bunch of wood-on-wall textures that made the distortion apparent).

rasz 15 hours ago [-]
Abrash did this in Quake because those divides are _free_ when interleaved with other code. The Pentium FPU is pipelined: you can push an FDIV, then FXCH to other data and do something else for a while instead of waiting for the result. The price is hand-tuned assembly code that, in 1996, was fast only on Intel's FPU. AMD caught up in 1998-99, finally implementing pipelined FDIV and 0-cycle FXCH.

https://www.phatcode.net/res/224/files/html/ch63/63-02.html

smirutrandola 11 hours ago [-]
OMG, that link (and its parent) is extremely interesting! Thanks for sharing!
rasz 15 hours ago [-]
>In a PC or in other Quake ports, all the data is available from RAM (if not even from the CPU data cache), which had a relatively high bandwidth and low latency even back to 1996. In fact, the bandwidth for sequential reads varied a lot but with a 40 MHz EDO 64-bit DRAM (already available on 1996) one could get a maximum throughput of 320 MB/s.

The youth, so sweet and naive :) EDO RAM on an average Pentium motherboard does around 50-70 MB/s. 256-1024 KB of L2 cache bumps that to 70-120 MB/s depending on chipset and cache type (and obviously on the usage pattern; Quake wasn't optimized in that respect at all). The tiny 8 KB of L1 stays below 200 MB/s.

smirutrandola 11 hours ago [-]
I think that figure is of course referring to the peak bandwidth in burst mode, where you are sequentially accessing data (tacc = 25 ns), in the ideal case. In that sense, the figures given for the Cortex-M33 are to be taken as absolute maximums as well. 50-70 MB/s sounds representative of random reads of 8-byte blocks, where you pay the full access time. It will go even lower if you just do byte accesses.

Quake was indeed optimized to work on such 1996 Pentium PCs. Look for instance at how the edge/surface/span arrays are allocated on the stack: extra space is allocated to make sure the data can be aligned to the cache line size.
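
Something along these lines (a simplified sketch of the over-allocate-and-align trick; the names and sizes here are made up, not the actual Quake declarations):

  #include <stdint.h>

  #define CACHE_LINE 32   /* assumed cache line size */

  void r_scene_setup(void)
  {
      /* Over-allocate by one cache line... */
      uint8_t span_raw[4096 + CACHE_LINE];

      /* ...then round the pointer up so the working data starts on a
         cache-line boundary. */
      uint8_t *spans = (uint8_t *)(((uintptr_t)span_raw + CACHE_LINE - 1)
                                   & ~(uintptr_t)(CACHE_LINE - 1));
      (void)spans;   /* the renderer would use this aligned pointer */
  }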

rasz 9 hours ago [-]
50-70 MB/s is the best-case burst scenario, a linear read into nothing, on contemporary 1996 chipsets/CPUs. Moving data more or less halves that; writing is slightly faster than reading due to cache lookups.

320 MB/s is even faster than the theoretical maximum of EDO on the Pentium platform: 8 bytes x 66 MHz / 5-2-2-2 timings = <260 MB/s burst (rough arithmetic below).

Quake optimized for prefilling caches, but not for contemporary cache sizes. https://dependency-injection.com/2mb-cache-benchmarks/ Doom gains a tiny amount when going from 256 KB to 512 KB; Quake gains linearly all the way to a mind-bogglingly absurd 2 MB of L2. It could really have benefited from data-oriented design, but there was no tooling for that at the time, not to mention the time crunch; Abrash did all he could under the circumstances.
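
One way to read the 5-2-2-2 figure (assuming 32-byte cache-line bursts on a 66 MHz bus; a rough back-of-the-envelope, not a measured number):

  32 bytes per burst, 5+2+2+2 = 11 bus cycles per burst
  32 B x 66.7 MHz / 11 ≈ 194 MB/s peak, before refresh and page misses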

smirutrandola 6 hours ago [-]
I don't think the initial access time (latency) should be included when speaking about peak bandwidth; otherwise it is not the peak bandwidth.

Peak bandwidth should be considered with an ideally infinite (large enough) payload, so that latency becomes negligible. Once you have these two values, latency and peak bandwidth, you can estimate your (still theoretical, of course) performance for a given transfer size.

The article uses the 240-320 MB/s peak bandwidth and 110-130 ns latency for a comparison with the external flash used here, which has a latency in the us range and a peak bandwidth of 17 MB/s (arguably assuming an infinite payload, since 136.5/8 is about 17, i.e. without counting the initial setup time).

Still, even if you compare the actual speeds of a 1996 Pentium with the theoretical external flash figures cited in the article, the conclusion does not change: the external flash is much slower than what you could get even in 1996.

anthk 2 days ago [-]
This is witchcraft...