uPlot maintainer here. this looks interesting, i'll do a deeper dive soon :)
some notes from a very brief look at the 1M demo:
- sampling has a risk of eliminating important peaks; uPlot does not do it, so for an apples-to-apples perf comparison you have to turn that off. see https://github.com/leeoniya/uPlot/pull/1025 for more details on the drawbacks of LTTB
- when doing nothing / idle, there is significant cpu being used, while canvas-based solutions will use zero cpu when the chart is not actively being updated (with new data or scale limits). i think this can probably be resolved in the WebGPU case with some additional code that pauses the updates.
- creating multiple charts on the same page with GL (e.g. dashboard) has historically been limited by the fact that Chrome is capped at 16 active GL contexts that can be acquired simultaneously. Plotly finally worked around this by using https://github.com/greggman/virtual-webgl
> data: [[0, 1], [1, 3], [2, 2]]
this data format, unfortunately, necessitates the allocation of millions of tiny arrays. i would suggest switching to a columnar data layout.
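for illustration, something shaped like this (a sketch, not a prescription for your exact types):

    // row-oriented: one tiny array allocated per point
    const rows: [number, number][] = [[0, 1], [1, 3], [2, 2]];

    // columnar: two flat typed arrays, zero per-point allocations,
    // and ready to upload to a GPU buffer as-is
    const data = {
      x: new Float64Array([0, 1, 2]),
      y: new Float64Array([1, 3, 2]),
    };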
Really appreciate you taking the time to look, Leon - uPlot has been a huge inspiration for proving that browser charts don't have to be slow.
Both points are fair:
1. LTTB peak elimination - you're right, and that PR is a great reference. For the 1M demo specifically, sampling is on by default to show the "it doesn't choke" story. Users can set sampling: 'none' for apples-to-apples comparison. I should probably add a toggle in the demo UI to make that clearer.
2. Idle CPU - good catch. Right now the render loop is probably ticking even when static. That's fixable - should be straightforward to only render on data change or interaction. Will look into it.
Would love your deeper dive feedback when you get to it. Always more to learn from someone who's thought about this problem as much as you have.
dapperdrake 3 days ago [-]
Blind sampling like this makes it useless for real-world statistics of the kind your users care about.
And column-oriented data is a must. Look at R's data frames, pandas, polars, numpy, SQL, and even Fortran's matrix layout.
Also need specialized, explicitly targetable support for Float32Array and Float64Array. Both API and ABI are necessary if you want to displace incumbents.
There is huge demand for a good web implementation. This is what it takes.
I once had to deal with many millions of data points for an application. I ended up mip-mapping them client-side.
But regarding sampling, if it's a line chart, you can sample adaptively by checking whether the next point makes a meaningfully visible difference measured in pixels compared to its neighbours. When you tune it correctly, you can drop most points without the difference being noticeable.
I didn't find anyone else doing that at the time, and some people seemed to have trouble accepting it as a viable solution, but if you think about it, it doesn't actually make sense to plot, say, 1 million points in a line chart 1000 pixels wide. On average that would be 1000 points per pixel.
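A sketch of the test (the 0.5px threshold and the scale functions are placeholders, not the exact code I used):

    // Keep a point only when it moves the line a visible amount relative to
    // the last point we kept; endpoints are always retained.
    function adaptiveSample(xs: Float64Array, ys: Float64Array,
                            xToPx: (x: number) => number,
                            yToPx: (y: number) => number): number[] {
      const keep = [0];
      for (let i = 1; i < xs.length - 1; i++) {
        const j = keep[keep.length - 1];
        if (Math.abs(xToPx(xs[i]) - xToPx(xs[j])) >= 0.5 ||
            Math.abs(yToPx(ys[i]) - yToPx(ys[j])) >= 0.5) {
          keep.push(i);
        }
      }
      keep.push(xs.length - 1);
      return keep; // indices of retained points
    }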
PaulDavisThe1st 3 days ago [-]
We routinely face this in the audio world when drawing waveforms. You typically have on the order of 10-100k samples per second, durations of 10s-1000s of seconds, and pixel widths of on the order of 1-10k pixels.
Bresenham's is one algorithm historically used to downsample the data, but a lot of contemporary audio software doesn't use that. In Ardour (a cross-platform, libre, open source DAW), we actually compute and store min/max-per-N-samples and use that for plotting (and as the basis for further downsampling).
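In a sketch (TypeScript for illustration - Ardour's real implementation is C++), the stored reduction is roughly:

    // One min and one max per bucket of N samples; drawing a vertical line
    // from min to max per pixel preserves every peak exactly.
    function minMaxReduce(samples: Float32Array, n: number): { mins: Float32Array; maxs: Float32Array } {
      const buckets = Math.ceil(samples.length / n);
      const mins = new Float32Array(buckets).fill(Infinity);
      const maxs = new Float32Array(buckets).fill(-Infinity);
      for (let i = 0; i < samples.length; i++) {
        const b = (i / n) | 0; // bucket index
        if (samples[i] < mins[b]) mins[b] = samples[i];
        if (samples[i] > maxs[b]) maxs[b] = samples[i];
      }
      return { mins, maxs };
    }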
leeoniya 3 days ago [-]
I discovered flot during my academic research career circa 2008 and it saved my ass more times than I can count. I just wanted to say thank you for that. I wouldn't be where I am today without your help :)
leeoniya 3 days ago [-]
hey!
> But regarding sampling, if it's a line chart, you can sample adaptively by checking whether the next point makes a meaningfully visible difference measured in pixels compared to its neighbours.
uPlot basically does this (see sibling comment), so hopefully that's some validation for you :)
dapperdrake 3 days ago [-]
This is a good sampling transform to offer. Call it "co-domain awareness" or something.
vlovich123 4 days ago [-]
Are there any techniques using wavelet decomposition to decimate the high-frequency component while retaining peaks? I feel like that's a more principled approach than sampling, but I haven't seen any literature describing the specific techniques (unless the idea is fundamentally unsound, which is not obvious to me).
huntergemmer 4 days ago [-]
Interesting idea - I haven't explored wavelet-based approaches but the intuition makes sense: decompose into frequency bands, keep the low-frequency trend, and selectively preserve high-frequency peaks that exceed some threshold.
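For concreteness, one level of a Haar decomposition is just a few lines (a toy sketch assuming an even-length input, not a committed design):

    // averages carry the trend; details carry the high-frequency residual.
    // Decimation would keep the averages and re-inject only the details whose
    // magnitude exceeds a peak threshold.
    function haarLevel(data: Float64Array): { avg: Float64Array; detail: Float64Array } {
      const half = data.length >> 1;
      const avg = new Float64Array(half);
      const detail = new Float64Array(half);
      for (let i = 0; i < half; i++) {
        avg[i] = (data[2 * i] + data[2 * i + 1]) / 2;
        detail[i] = (data[2 * i] - data[2 * i + 1]) / 2;
      }
      return { avg, detail };
    }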
My concern would be computational cost for real-time/streaming use cases. LTTB is O(n) and pretty cache-friendly. Wavelet transforms are more expensive, though maybe a GPU compute shader could make it viable.
The other question is whether it's "visually correct" for charting specifically. LTTB optimizes for preserving the visual shape of the line at a given resolution. Wavelet decomposition optimizes for signal reconstruction - not quite the same goal.
That said, I'd be curious to experiment. Do you have any papers or implementations in mind? Would make for an interesting alternative sampling mode.
vlovich123 3 days ago [-]
I don't. I just remember watching a presentation on it and it always struck me that wavelets are an incredibly powerful and underutilized technique for data reduction while preserving quality in a quantifiable and mathematically justifiable way.
I don't have any papers in mind, but I do think the critique around visual shape vs. signal reconstruction may not be accurate, given that wavelets are seeing a lot of adoption in the visual space (JPEG 2000 being the leading edge in that field). Might also be interesting to try DCT. I think these will perform better than LTTB (the compute cost is higher, of course, but there's also HW acceleration for some of these, or will be over time).
dapperdrake 3 days ago [-]
This might be because JPEG already does a DCT.
dapperdrake 3 days ago [-]
Doesn't FFT depend at least on a "representative" sample of the entire dataset?
Sounds like what makes sql joins NP-hard.
vlovich123 1 day ago [-]
No, the FFT is perfectly information-preserving by definition. That's why there's an inverse FFT operation that restores the original signal without any loss (well, modulo accumulated floating-point error when working in the discrete rather than symbolic space).
dapperdrake 3 days ago [-]
This really depends on your problem domain.
apitman 3 days ago [-]
> creating multiple charts on the same page with GL (e.g. dashboard) has historically been limited by the fact that Chrome is capped at 16 active GL contexts that can be acquired simultaneously. Plotly finally worked around this by using https://github.com/greggman/virtual-webgl
Sometimes I like to ponder on the immense amount of engineering effort expended on working around browser limitations.
dapperdrake 3 days ago [-]
Think of it as finally targeting a smartphone. People like beautiful pictures. And your phone is already in your hand.
aurbano 4 days ago [-]
Not much to add, but as a very happy uPlot user here - just wanted to say thank you for such an amazing library!!
leeoniya 4 days ago [-]
yw!
sarusso 4 days ago [-]
What I did in a few projects to plot aggregated (resampled) data without losing peaks was to plot it over an area chart representing the min-max values before aggregation (resampling). It worked pretty well.
Bengalilol 3 days ago [-]
One small thing I noticed: when you zoom in or out (or change the time span), the y-axis stays the same instead of adapting to the visible data.
dapperdrake 3 days ago [-]
Both are useful. With the y-axis staying the same there is a stable point of reference. Then you can see how sub-samples behave relative to your whole sample.
Cabal 3 days ago [-]
I wouldn't spend too much of your time deep diving - it's an AI slop project.
zokier 4 days ago [-]
If you have tons of datapoints, one cool trick is to do intensity modulation of the graph instead of simple "binary" display. Basically for each pixel you'd count how many datapoints it covers and map that value to color/brightness of that pixel. That way you can visually make out much more detail about the data.
In electronics world this is what "digital phosphor" etc does in oscilloscopes, which started out as just emulating analog scopes. Some examples are visible here https://www.hit.bme.hu/~papay/edu/DSOdisp/gradient.htm
huntergemmer 4 days ago [-]
Great suggestion - density mapping is a really effective technique for overplotted data. Instead of drawing 1M points where most overlap, you're essentially rendering a heatmap of point concentration.
WebGPU compute shaders would be perfect for this - bin the points into a grid, count per cell, then render intensity. Could even do it in a single pass.
I've been thinking about this for scatter plots especially, where you might have clusters that just look like solid blobs at full zoom-out. A density mode would reveal the structure. Added to the ideas list - thanks for the suggestion!
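Roughly, the binning step looks like this on the CPU (a sketch - the compute-shader version would be one atomicAdd per point into a storage buffer):

    // Count how many points land in each grid cell, then map count -> brightness.
    function densityGrid(xs: Float32Array, ys: Float32Array,
                         xMin: number, xMax: number, yMin: number, yMax: number,
                         cols: number, rows: number): Uint32Array {
      const counts = new Uint32Array(cols * rows);
      for (let i = 0; i < xs.length; i++) {
        const cx = Math.floor(((xs[i] - xMin) / (xMax - xMin)) * cols);
        const cy = Math.floor(((ys[i] - yMin) / (yMax - yMin)) * rows);
        if (cx >= 0 && cx < cols && cy >= 0 && cy < rows) counts[cy * cols + cx]++;
      }
      return counts; // shade each cell with e.g. log1p(count) / log1p(maxCount)
    }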
akomtu 4 days ago [-]
You don't need webgpu for that. It's a standard vertex shader -> fragment shader pass with the blending mode set to addition.
MindSpunk 4 days ago [-]
Drawing lots of single pixels with alpha blending is probably one of the least efficient ways to use the rasterizer though. A good compute shader implementation would be substantially faster.
akomtu 4 days ago [-]
At 1M points it hardly makes a difference. Besides, 1 point -> 1 pixel mapping is good enough for a demo, but in practice it will produce nasty aliasing artifacts because real datasets aren't aligned with pixel coordinates. So you have to draw each point as a 2x2 square at least with precise shading, and we are back to the rasterizer pipeline. Edit: what actually needs to be computed is the integral of the points dataset over each square pixel, and that depends on the shape of each point, even if it's smaller than a pixel.
dheera 3 days ago [-]
Aren't we at petaflops now with GPUs? 1M or even 1G points should be no issue if it renders to a framebuffer and doesn't go through mountains of JS framework rubbish followed by mountains of GTK/Qt/.NET rubbish.
rustystump 3 days ago [-]
Not true. Fill rate and memory speed are still huge bottlenecks. The issue is not "rubbish" but memory speed. It is almost always memory: speed, cache, RAM, disk, etc.
There is this misconception that if one uses JS or C# to tell a GPU what to do, it is somehow slower than Rust. It only is if you're crunching the data yourself; moving memory to the GPU and telling the GPU to crunch is virtually identical.
dheera 3 days ago [-]
PCIe 6.0 x16 delivers ~128 GB/s so the billion points can be loaded in milliseconds onto the GPU. The GPU's memory is much faster.
rustystump 3 days ago [-]
Most consumers don't have that, and at 60 fps you are already maxing it out (and more), assuming the OS is doing nothing else. Bandwidth, even on GPUs, is still the bottleneck.
Even then, when you write to a framebuffer directly on the GPU, if the locations of the points are not contiguous you are thrashing. Rendering points very fast is still very much about reducing the data set down to bypass all the layers of memory walls.
dapperdrake 3 days ago [-]
No difference for human visuals or no difference for discrete data or no difference for "continuous" f32 data?
vanderZwan 4 days ago [-]
That works if more overdraw = more intensity is all you care about, and may very well be good enough for many kinds of charts. But with heat map plots one usually wants a proper mapping of some intensity domain to a color map and a legend with a color gradient that tells you which color represents which value. Which requires binning, counting per bin, and determining the min and max values.
akomtu 4 days ago [-]
Emm.. no, you just do one render pass to a temp framebuffer with 1 red channel, then another fragment shader maps it to an RGB palette.
vanderZwan 3 days ago [-]
Wait, does additional blending let you draw to temp framebuffers with high precision and without clamping? Even so you'd still need to know the maximum value of the temp framebuffer though.
akomtu 3 days ago [-]
That's what EXT_float_blend does. It's true, though, that you can't find the global min/max in webgl2. This could be done, theoretically, with mipmaps if only those mipmaps supported the max function.
vanderZwan 2 days ago [-]
Couldn't you do that manually with a simple downscaling filter? I'd be very shocked if fragment shaders did not have a min or max function.
Repeatedly shrinking by a factor of two means log2(max(width, height)) passes, each pass is a quarter of the pixels of the previous pass so that's a total of 4/3 times the pixels of the original image. Should be low enough overhead, right?
akomtu 2 days ago [-]
Sure, that will work, but it's log2 passes + temp framebuffers. As for overhead, I'm afraid it will eat a couple fps if you run it on every frame. In practice, though, I'm not sure that finding the exact maximum is that valuable for rendering: a good guess based on the dataset type will do. For example, if you need to render N points that tend to congregate in the center, using sqrt(N) as the heuristic for the maximum works very well.
jsmailes 3 days ago [-]
That digital phosphor effect is fascinating! As someone who works frequently with DSP and occasionally with analogue signals, it's incredible to see how you can pull out the carrier/modulation just by looking at (effectively) a moving average. It's also interesting to see just how much they have to do behind the scenes to emulate a fairly simple physical effect.
leeoniya 4 days ago [-]
agreed, heatmaps with logarithmic cell intensity are the way to go for massive datasets in things like 10,000-series line charts and scatter plots. you can generally drill downward from these, as needed.
dapperdrake 3 days ago [-]
Good idea.
Add a Lab color space option for this though, like the color theme solarized-light.
Also add options to side-step red-green blindness and blue-yellow blindness.
hienyimba 4 days ago [-]
Right on time.
We’ve been working on a browser-based Link Graph (OSINT) analysis tool for months now (https://webvetted.com/workbench). The graph charting tools on the market are pretty basic for the kind of charting we are looking to do (think 1000s of connected/disconnected nodes/edges). Being able to handle 1M points is a dream.
This will come in very handy.
huntergemmer 4 days ago [-]
That's a cool project! Just checked out the workbench. I should be upfront though: ChartGPU is currently focused on traditional 2D charts (line, bar, scatter, candlestick, etc.), not graph/network visualization with nodes and edges. That said, the WebGPU rendering patterns would translate well to force-directed graphs. The scatter renderer already handles thousands of instanced points - extending that to edges wouldn't be a huge leap architecturally.
Is graph visualization something you'd want as part of ChartGPU, or would a separate "GraphGPU" type library make more sense? Curious how you're thinking about it.
agentcoops 4 days ago [-]
Really fantastic work! Can't wait to play around with your library. I did a lot of work on this at a past job long ago and the state of JS tooling was so inadequate at the time we ended up building an in-house Scala visualization library to pre-render charts...
More directly relevant, I haven't looked at the D3 internals for a decade, but I wonder if it might be tractable to use your library as a GPU rendering engine. I guess the big question for the future of your project is whether you want to focus on the performance side of certain primitives or expand the library to encompass all the various types of charts/customization that users might want. Probably that would just be a different project entirely/a nightmare, but if feasible even for a subset of D3 you would get infinitely customizable charts "for free." https://github.com/d3/d3-shape might be a place to look.
In my past life, the most tedious aspect of building such a tool was how different graph standards and expectations are across different communities (data science, finance, economics, natural sciences, etc). Don't get me started about finance's love for double y-axis charts... You're probably familiar with it, but https://www.amazon.com/Grammar-Graphics-Statistics-Computing... is fantastic if you continue on your own path chart-wise and you're looking for inspiration.
huntergemmer 4 days ago [-]
Thanks - and great question about direction. My current thinking: Focus on performance-first primitives for the core library. The goal is "make fast charts easy" not "make every chart possible." There are already great libraries for infinite customization (D3, Observable Plot) - but they struggle at scale.
That said, the ECharts-style declarative API is intentionally designed to be "batteries included" for common cases. So it's a balance: the primitives are fast, but you get sensible defaults for the 80% use case without configuring everything. Double y-axis is a great example - that's on the roadmap because it's so common in finance and IoT dashboards. Same with annotations, reference lines, etc. Haven't read the Grammar of Graphics book but it's been on my list - I'll bump it up. And d3-shape is a great reference for the path generation patterns. Thanks for the pointers!
Question: What chart types or customization would be most valuable for your use cases?
agentcoops 4 days ago [-]
Most of my use cases these days are for hobby projects, which I would bucket into the "data science"/"data journalism" category. I think this is the easiest audience to develop for, since people usually don't have any strict disciplinary norms apart from clean and sensible design. I mention double y-axes because in my own past library I stupidly assumed no sensible person would want such a chart -- only to have to rearchitect my rendering engine once I learned it was one of the most popular charts in finance.
That is, you're definitely developing the tool in a direction that I and I think most Hacker News readers will appreciate and it sounds like you're already thinking about some of the most common "extravagances" (annotations, reference lines, double y-axis etc). As OP mentioned, I think there's a big need for more performant client-side graph visualization libraries, but that's really a different project. Last I looked, you're still essentially stuck with graphviz prerendering for large enough graphs...
huntergemmer 4 days ago [-]
Ha - the double y-axis story is exactly why I want to get it right. Better to build it in properly than bolt it on later.
"Data science/data journalism" is a great way to frame the target audience. Clean defaults, sensible design, fast enough that the tool disappears and you just see the data.
And yeah, graphviz keeps coming up in this thread - clearly a gap in the ecosystem. Might be a future project, but want to nail the 2D charting story first and foremost.
Thanks for the thoughtful feedback - this is exactly the kind of input that shapes the roadmap.
graphviz 4 days ago [-]
Gratifying that it's still useful.
A lot of improvements are possible, based on 20 years of progress in interactive systems, and just overall computing performance.
lmeyerov 3 days ago [-]
You may enjoy Graphistry (eg, pygraphistry, GraphistryJS), where our users regularly do 1M+ graph elements interactively, such as for event & entity data. Webgl frontend, GPU server backend for layouts too intense for frontend. We have been working on stability over the last year with large-scale rollout users (esp cyber, IT, social, finance, and supply chain), and now working on the next 10X+ of visual scaling. Python version: https://github.com/graphistry/pygraphistry . It includes many of the various tricks mentioned here, like GPU hitmapping, and we helped build various popular libs like apache arrow for making this work end-to-end :)
Most recently adding to the family is our open source GFQL graph language & engine layer (cypher on GPUs, including various dataframe & binary format support for fast & easy large data loading), and under the louie.ai umbrella, piloting genAI extensions
my 2 cents: I'm one of these people that could possibly use your tool. However, the website doesn't give me much info. I'd urge you to add some more pages that showcase the product and what it can do in more detail. Would help capture more people, imo.
Also, fixes based on earlier suggestions and feedback, as noted before:
- Data zoom slider bug has been fixed (no longer snapping to the left or right)
- Idle CPU usage bug (added user controls along with more clarity to 1M point benchmark)
13 hours on the front page, 140+ comments and we're incorporating feedback as it comes in.
This is why HN is the best place to launch. Thanks everyone :)
mcintyre1994 3 days ago [-]
Pretty sure you have an extra 60x multiplier on all those time frames. Eg 1s shows 1 minute, 15m looks like 15 hours, 1D looks like 2 months.
huntergemmer 3 days ago [-]
Update: Patched idle CPU usage while nothing is being rendered.
One thing to note: I added a toggle to "Benchmark mode" in the 1M benchmark example - this preserves the benchmark capability while demonstrating efficient idle behavior.
Another thing to note: Do not be alarmed when you see the FPS counter display 0 (lol), that is by design :) Frames are rendered efficiently. If there's nothing to render (no dirty frames) nothing is rendered. The chart will still render at full speed when needed, it just doesn't waste cycles rendering the same static image 60 times per second.
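For the curious, the pattern is roughly this (a simplified sketch, not the exact internals):

    // Schedule a frame only when something changed; an idle chart costs ~zero CPU.
    let scheduled = false;

    declare function render(): void; // stands in for the real draw entry point

    function markDirty(): void {
      // called on new data, zoom/pan, resize, theme change, ...
      if (scheduled) return;
      scheduled = true;
      requestAnimationFrame(() => {
        scheduled = false;
        render();
      });
    }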
Blown away by all of you amazing people and your support today :)
azangru 4 days ago [-]
Bug report: there is something wrong with the slider below the chart in the million-points example:
While dragging, the slider does not stay under the cursor, but instead moves by unexpected distances.
huntergemmer 4 days ago [-]
Thanks - you're the second person to report this! Same issue as the Mac M1 scrollbar bug reported earlier.
Looks like the data zoom slider has a momentum/coordinate mapping issue. Bumping this up the priority list since multiple people are hitting it.
virgil_disgr4ce 4 days ago [-]
I also experienced this behavior :)
barrell 4 days ago [-]
I just rewrote all the graphs on phrasing [1] to WebGL. Mostly because I wanted custom graphs that didn't look like graphs, but also because I wanted to be able to animate several tens of thousands of metrics at a time.
After the initial setup and learning curve, it was actually very easy. All in all, way less complicated than all the performance hacks I had to do to get 0.01% of the data to render half as smoothly using d3.
Although this looks next level. I make sure all the computation happens in a single O(n) loop, but the main loop still takes place on the CPU. Very well done.
To anyone on the fence: GPU charting seemed crazy to me beforehand (classic overengineering), but it ends up being much simpler (and much, much, much smoother) than traditional charts!
TimeLine maintainer here. Their demo for live-streamed data [0] in a line plot is surprisingly bad given how slick the rest of it seems. For comparison, this [1] is a comparatively smooth demo of the same goal, but running entirely on the main thread and using the classic "2d" canvas rendering mode.
Given that the author's post and comments all sound like they were run through an LLM, I'm not at all surprised.
janice1999 4 days ago [-]
That was obvious before even looking at the repo because the OP used "the core insight" in the intro. Other telltale signs of these type of AI projects:
- new account
- spamming the project to HN, reddit etc the moment the demo half works
- single contributor repo
- Huge commits minutes apart
- repo is less than a week old (sometimes literally hours)
- half the commits start with "Enhance"
- flashy demo that hides issues immediately obvious to experts in the field
- author has slop AI project(s)
OP uses more than one branch so he's more sophisticated than most.
yogitakes 4 days ago [-]
Congrats, but 1M is nothing spectacular for apps in finance.
Here’s a demo of the WIP rendering engine we’re working on, which boosted our previous capability of 10M data points to 100M data points.
@huntergemmer - assuming you are the author: curious about your experience using .claude and .cursor (I see sub-agents defined under those folders). What percent of your time on this project would you say was raw coding vs. prompting? And perhaps any other insights you may have on using these tools to build a library - I see your first commit was only 5 days ago.
mikepurvis 4 days ago [-]
I've always been a bit skeptical of JS charting libs that want to bring the entire data to the client and do the rendering there, vs at least having the option to render image tiles on the server and then stream back tooltips and other interactive elements interactively.
However, this is pretty great; there really aren't that many use cases that require more than a million points. You might finally unseat dygraphs as the gold standard in this space.
zozbot234 4 days ago [-]
> render image tiles on the server and then stream back tooltips and other interactive elements interactively.
I guess the real draw here is smooth scrolling and zooming, which is hard to do with server-rendered tiles. There's also the case of fully local use, where server rendering doesn't make much sense.
tomjakubowski 4 days ago [-]
> I've always been a bit skeptical of JS charting libs that want to bring the entire data to the client and do the rendering there
The computer on my desk only costs me the electric power to run it, and there's 0 network latency between it and the monitor on which I'm viewing charts. If I am visualizing some data and I want to rapidly iterate on the visualization or interact with it, there's no more ideal place for the data to reside than right there. DDR5 and GPUs will be cheap again, some day.
dapperdrake 3 days ago [-]
And with a JS-friendly tool you can also test your plots on a tablet and a phone in your local wifi.
internetter 4 days ago [-]
> I've always been a bit skeptical of JS charting libs that want to bring the entire data to the client and do the rendering there, vs at least having the option to render image tiles on the server and then stream back tooltips and other interactive elements interactively.
I agree, unfortunately no library I've found supports this. I currently SSR plots to SVG using observable plot and JSDom [0]. This means there is no javascript bundle, but also no interactivity, and observable doesn't have a method to generate a small JS sidecar to add interactivity. I suppose you could progressive enhance, but plot is dozens of kilobytes that I'd frankly rather not send.
I’ve had a lot of success rendering svg charts via Airbnb’s visx on top of React Server Components, then sprinkling in interactivity with client components. Worth looking into if you want that balance.
It’s more low level than a full charting library, but most of it can run natively on the server with zero config.
I’ve always found performance to be kind of a drag with server side dom implementations.
mikepurvis 4 days ago [-]
There's no question that it's a huge step up in complexity to wire together such tightly-linked front and backend components, but it is done for things like GIS, where you want data overlays.
I think it's just a different mindset; GIS libs like Leaflet kind of assume they're the centerpiece of the app and can dictate a bunch of structure around how things are going to work, whereas charting libs benefit a lot more from "just add me to your webpack bundle and call one function with an array and a div ID, I promise not to cause a bunch of integration pain!"
Last time I tried to use it for dashboarding, I found Kibana did extremely aggressive down-sampling to the point that it was averaging out the actual extremes in the data that I needed to see.
dapperdrake 3 days ago [-]
The API and ABI for this are tricky to get right.
volkercraig 4 days ago [-]
> I kept hitting the same wall: charting libraries that claim to be "fast" but choke past 100K data points
Haha, Highcharts is a running joke around my office because of this. Every few years the business will bring in consultants to build some interface for us, and every time we have to explain to them that Highcharts, even with its turbo mode enabled, chokes on our data streams almost immediately.
ranger_danger 4 days ago [-]
No Firefox support? It has had WebGPU support since version 141.
Even when I turn on dom.webgpu.enabled, I still get "WebGPU is disabled by blocklist" even though your domain is not in the blocklist, and even if I turn on gfx.webgpu.ignore-blocklist.
embedding-shape 4 days ago [-]
Works for me with 146.0.1 (Linux) and having dom.webgpu.enabled set to true.
tonyplee 4 days ago [-]
Works for me too 145/Windows - default settings.
Very cool project. Thanks!!!
jsheard 4 days ago [-]
Which platform? I think FF has only shipped WebGPU on Windows so far.
ranger_danger 4 days ago [-]
Linux. Apparently it's supported on both, but only enabled by default on Windows. I manually enabled it, but it's still not working for me.
Quick update: Just shipped a fix for the data zoom slider bug that several of you reported (thanks d--b, azangru, and others).
The slider should now track the cursor correctly on macOS. If you tried the million-points demo earlier and the zoom felt off, give it another shot.
This is why I love launching on HN - real feedback from people actually trying the demos. Keep it coming! :)
Tiberium 3 days ago [-]
The project (even if it's made with the help of LLMs) is nice, but the author writing all of his HN comments with LLMs is not.
deepfriedrice 3 days ago [-]
Yeah this entire thread has a weird vibe. OP is clearly a competent engineer to have wrangled LLMs into building this (whether a 5 day old vibe code lib can survive this initial virality will be interesting to see), but seeing so much engagement with prototypical vacant LLM output is eerie
pier25 4 days ago [-]
Very cool. Shame there's not a webgl fallback though. It will be a couple of years until webgpu adoption is good enough.
And even if WebGPU is enabled, the implementation might still be broken or inefficient in various ways. For example, Firefox uses some ridiculous polling-based approach [1] to check for completion, which disqualifies the implementation for many performance-critical applications.
Please support a fallback, ideally a 2D one too. WebGPU and WebGL are a privacy nightmare and the former is also highly experimental. I don't mind sub-60 FPS rendering, but I'd hate having to enable either of them just to see charts if websites were to adopt this library.
The web is already bad requiring JavaScript to merely render text and images. Let's not make it any worse.
dapperdrake 3 days ago [-]
WebGL punts to WebGPU for decent compute shaders.
sroussey 4 days ago [-]
It’s available everywhere if you are on the newest OS and newest browser.
The biggest issue is macOS users with newer Safari on older macOS.
kawogi 4 days ago [-]
Support for Firefox on Linux is still only in nightly (unless that changed "very" recently)
This blocks progress (and motivation) on some of my projects.
Joeboy 4 days ago [-]
Apparently you can turn it on with about:config / dom.webgpu.enabled
But personally, I'm not going to start turning on unsafe things in my browser so I can see the demo. I tried firefox and chromium and neither worked so pfft, whatever.
SeasonalEnnui 4 days ago [-]
What's the best way to get all those points from a backend into the frontend webgpu compute shader?
There doesn't seem to be a communication mechanism with minimal memcpy and no serialization/deserialization; the security boundary makes this difficult.
I have a backend array of 10M i16 points, I want to get this into the frontend (with scale & offset data provided via side channel to the compute shader).
As it stands, I currently process on the backend and send the frontend a bitmap or simplified SVG. I'm curious to know about the opposite approach.
olau 3 days ago [-]
Not sure, but I solved a similar problem many years ago, and ended up concluding it was silly to send all the data to the client when the client didn't have the visual resolution to show it anyway. So I sampled it adaptively server-side by precomputing and storing multiple zoom levels. That way the client-side chart app would get the points and you could zoom in, but you'd only ever retrieve about 1000-2000 points at a time.
SeasonalEnnui 3 days ago [-]
Yeah, I agree. I'd like to get an idea of the order-of-magnitude difference between the two approaches by trying it out, but realistically I don't think there's an easy way to get an i16 raw array into the browser runtime with minimal overhead (WebRTC maybe?)
dapperdrake 3 days ago [-]
That was also my research group's approach.
shunia_huang 3 days ago [-]
I'm not so good at English, but the points are:
- WebSocket to send raw point data batch by batch
- Strip the float values to integers if possible, or multiply before sending if it won't exceed Number.MAX_SAFE_INTEGER or something alike (see the sketch below)
- The frontend should build a wrapper around the received raw data for indexing, so there's no need to modify the data
- There should be drawing/chart libraries that handle the rendering quite well given a proper data format and batched data
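A sketch of the second point (the names and the i16 choice are mine, not from a specific library):

    // Quantize float samples into i16 plus a scale/offset header: 2 bytes per
    // sample on the wire; receiver reconstructs v = (q + 32768) * scale + offset.
    function quantizeI16(values: Float64Array): { scale: number; offset: number; data: Int16Array } {
      let min = Infinity, max = -Infinity;
      for (const v of values) { if (v < min) min = v; if (v > max) max = v; }
      const scale = (max - min) / 65535 || 1; // avoid divide-by-zero on flat data
      const data = new Int16Array(values.length);
      for (let i = 0; i < values.length; i++) {
        data[i] = Math.round((values[i] - min) / scale) - 32768;
      }
      return { scale, offset: min, data };
    }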
rustystump 3 days ago [-]
I did something similar for syncing 10m particles in a sim for a multiplayer test. The gist is that at a certain scale it is cheaper to send a frame buffer but the scale needs to be massive.
For this, compress/quantize the numbers and then pass that directly to the GPU after it comes off the network. Have a compute shader on the GPU decompress it before writing to a framebuffer. This is what high-performance lidar streaming renderers do, as lidar data is packed efficiently for transport.
fulafel 3 days ago [-]
Look up transferable objects; it's not new. The fetch API can get you ArrayBuffers that you can shuffle around zero-copy, both to WebGL buffers and to web workers.
But minimizing copying or avoiding format conversions doesn't necessarily get you the best performance, of course.
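Roughly like this (the endpoint name is made up; the transfer list is what makes the postMessage zero-copy):

    // fetch raw i16 samples as one ArrayBuffer, view it without copying,
    // then hand the buffer to a worker via the transfer list (no clone)
    const resp = await fetch('/api/samples.bin');
    const buf = await resp.arrayBuffer();
    const samples = new Int16Array(buf); // zero-copy view
    const worker = new Worker('decode-worker.js');
    worker.postMessage({ samples }, [buf]); // buf is transferred, not copied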
SeasonalEnnui 3 days ago [-]
I had a look; that certainly looks like part of the solution. Now I need to get that array buffer from my backend into a browser-runtime transferable object.
I tried it out, fetching i8 arrays from a localhost server, sending to webgpu and rendering the waveform. Wow, faster than I expected, 2 billion points/sec.
lmeyerov 3 days ago [-]
Apache Arrow is great here; it's basically the reason we wrote the initial JS tier - for easier shuttling from cloud GPUs & cloud analytics pipelines to WebGL in the browser.
jeffbee 4 days ago [-]
The number of points actually being rendered doesn't seem to warrant the webgpu implementation. It's similar to the number of points that cubism.js could throw on the screen 15 years ago.
altern8 4 days ago [-]
All charts in the demo failed for me.
Error message: "WebGPU Error: Failed to request WebGPU adapter. No compatible adapter found. This may occur if no GPU is available or WebGPU is disabled.".
kettlecorn 4 days ago [-]
Does your browser support WebGPU yet? It's likely it does not.
WebGPU is supported on Chrome and on the latest version of Safari. On Linux with all browsers WebGPU is only supported via an experimental flag.
altern8 4 days ago [-]
I'm not sure. I'm using the latest version of Chrome.
Maybe I messed with the settings at some point and disabled something.
mahkoh 4 days ago [-]
WebGPU seems to be enabled by default in chromium 144 on linux at least on AMD GPUs.
mholt 3 days ago [-]
I'm getting the same error, but ://gpu shows that WebGPU is "Hardware accelerated"
embedding-shape 4 days ago [-]
Fun benchmark :) I'm getting 165 fps (screen refresh rate), 4.5-5.0 in GPU time and 1.0 - 1.2 in CPU time on a 9970x + RTX Pro 6000. Definitely the smoothest graph viewer I've used in a browser with that amount of data, nicely done!
Would be great if you had a button there one can press that runs a 10-15 second benchmark and then prints a min/max report. Maybe it could even include loading/unloading the data too, so we get ranges that are easier to share and compare between machines :)
huntergemmer 4 days ago [-]
165 fps on that setup - that's awesome to hear! Thanks for testing on high-end hardware.
Love the benchmark button idea. A "Run Benchmark" mode that captures:
- Load time
- GPU time
- CPU time
- Min/max/avg FPS over 10-15 seconds
- Hardware info
Then export a shareable summary or even a URL with encoded results. Would make for great comparison threads.
Adding this to the roadmap - would make a great v0.2 feature. Thanks for the suggestion!
zamadatix 4 days ago [-]
Just to emphasize how good the performance is, I get 34.7 FPS on the Million Points demo... with sampling disabled and fully zoomed out!!!
ColinEberhardt 3 days ago [-]
Nice work!
D3fc maintainer here. A few years back we added WebGL support to D3fc (a component library for people building their own charts with D3), allowing it to render 1m+ datapoints:
That's what I'm using now, but I gave it too much data and it takes like a minute to render, so I'm quite interested in this.
huntergemmer 3 days ago [-]
Not yet - area charts work but stacking isn't implemented. I'll add this today for you :)
pdyc 4 days ago [-]
Wow, this is great. I practically gave up on rendering large data in EasyAnalytica because plotting millions of points becomes a bad experience, especially in dashboards with multiple charts. My current solution is to downsample to give an “overview” and use zoom to allow viewing “detailed” data, but that code is fragile.
One more issue is that some browser and OS combinations do not support WebGPU, so we will still have to rely on existing libraries in addition to this, but it feels promising.
samradelie 4 days ago [-]
Fantastic, Hunter - congrats!
I've been looking for a follow-up to uPlot - Lee, who made uPlot, is a genius and that tool is so powerful. However, I need OffscreenCanvas running charts 100% in worker threads. Can ChartGPU support this?
I started an Opus 4.5 rewrite of uPlot to decouple it from its DOM reliance, but your project is another level of genius.
I hope there is consideration for running your library 100% in a worker thread ( the data munging pre-chart is very heavy in our case )
Again, congrats!
huntergemmer 4 days ago [-]
Thanks! Leon's uPlot is fantastic - definitely an inspiration.
Worker thread support via OffscreenCanvas is a great idea and WebGPU does support it. I haven't tested ChartGPU in a worker context yet, but the architecture should be compatible - we don't rely on DOM for rendering, only for the HTML overlay elements (tooltips, axis labels, legend).
The main work would be:
1. Passing the OffscreenCanvas to the worker
2. Moving the tooltip/label rendering to message-passing or a separate DOM layer
For your use case with heavy data munging, you could also run just the data processing in a worker and pass the processed arrays to ChartGPU on the main thread - that might be a quicker win.
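Rough shape of the wiring (untested, and assuming the init path can accept an OffscreenCanvas - not a confirmed API yet):

    // main thread: hand the canvas to a worker
    const canvas = document.querySelector('canvas')!;
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('chart-worker.js', { type: 'module' });
    worker.postMessage({ canvas: offscreen }, [offscreen]);

    // chart-worker.js: WebGPU is available in workers, so rendering can live here
    self.onmessage = async (e: MessageEvent) => {
      const ctx = (e.data.canvas as OffscreenCanvas).getContext('webgpu');
      const adapter = await navigator.gpu.requestAdapter();
      const device = await adapter!.requestDevice();
      // ...configure ctx with the device and render as usual
    };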
Would you open an issue on GitHub? I'd love to understand your specific workload better. This feels like a v0.2 feature worth prioritizing.
samradelie 4 days ago [-]
You have a good point about doing zero copy transferables which would probably work.
There is certainly something beautiful about your charting GPU code being part of a file that runs completely isolated in another thread, along with our websocket data fire hose.
Architecturally, that could be something interesting: you expose a typed API wrapping postMessage, where consumers wanting to bind the main thread to a worker thread provide the offscreen canvas as well as a stream of normalized touch, pointer, keyboard, and wheel events. Then your worker listeners handle these incoming events and treat them as if they came straight from event listeners on the main thread; effectively, your library becomes thread-agnostic.
I'd be happy to discuss this on GitHub. I'll try to get to that today. See you there.
I am in the same boat. A current user and fan of uPlot starting to hit performance limits. Thank you for this library, I will start testing it soon.
On the topic of worker thread support: in my current project I have multiple data sources, each handled by its own worker. Copying data between a worker and the main thread - even processed - can be an expensive operation. Avoiding it can further help with performance.
facontidavide 4 days ago [-]
Cool to see that this project started 5 days ago!
Unfortunately, I cannot make it work on my system (Ubuntu, Chrome, WebGPU enabled as described in the documentation).
On the other hand, it works on my Android phone...
Funny enough, I am doing something very similar: a portable C++ (Windows, Linux, macOS) charting library that also compiles to WASM and runs in the browser...
I am still at day 2, so see you in 3 days, I guess!
ivanjermakov 4 days ago [-]
I was able to make WebGPU work (and work well!) in Chrome Linux by enabling Vulkan renderer in Chrome flags.
Good catch! Thanks for actually clicking around and finding this - added to my issue tracker.
fourthark 3 days ago [-]
Can't tell what this demo is streaming, looks like a static line but it's working hard on something. It can't seem to decide whether to display the top number in red or green either.
d--b 4 days ago [-]
This looks great. Quick feedback, scrollbars don't work well on my mac mini M1.
The bar seems to move twice as fast as the mouse.
huntergemmer 4 days ago [-]
Thanks for the bug report! That's the data zoom slider - sounds like a momentum/inertia scrolling issue on macOS.
Which demo were you on? (million-points, live-streaming, or sampling?) I'll test on M1 today and get a fix out.
Really appreciate you taking the time to try it :)
qayxc 4 days ago [-]
Same issue on Windows - doesn't seem to be OS-related, but a general problem.
The sliders and the zoom are basically unusable.
monegator 4 days ago [-]
On windows 10, too.
Firefox 147.0.1 (You may want to update your "supported" chart! Firefox has WebGPU now)
abuldauskas 4 days ago [-]
I also noticed it. On million-points. MacBook Pro M2 on Firefox Nightly 148.0a1 (2026-01-09) (aarch64)
mikepurvis 4 days ago [-]
I see the same on Windows 11, both FF and Chrome.
smusamashah 4 days ago [-]
Can it scroll while populating? I was trying to chart heart rate using various libs, captured at 60fps from the camera (finger on camera with the flash on). Raw drawing with canvas was faster than any lib.
Drawing and scrolling live data was a problem for one lib (don't remember which) because it was drawing the whole thing on every frame.
But still, love to see it. WebGPU will surely go forward slowly as these things naturally do, but practical experimentation is essential.
utf_8x 3 days ago [-]
If this doesn't work for you on a reasonably recent version of Firefox, you can enable experimental (but in my experience quite stable) WebGPU support in `about:config` by setting `dom.webgpu.enabled` to true.
imiric 4 days ago [-]
This is great, but I don't see it being useful for most use cases.
Most high-level charting libraries already support downsampling. Rendering data that is not visible is a waste of CPU cycles anyway. This type of optimization is very common in 3D game engines.
Also, modern CPUs can handle rendering of even complex 2D graphs quite well. The insanely complex frontend stacks and libraries, a gazillion ads and trackers, etc., are a much larger overhead than rendering some interactive charts in a canvas.
I can see GPU rendering being useful for applications where real-time updates are critical, and you're showing dozens of them on screen at once, in e.g. live trading. But then again, such applications won't rely on browsers and web tech anyway.
dfortes 4 days ago [-]
> You are an elite WebGPU developer with deep expertise in modern GPU programming for the web. You are a master of the WebGPU API, WGSL shader language, compute shaders, and high-performance graphics rendering across all major browsers.
This is just embarrassing.
facontidavide 3 days ago [-]
Since I still see this on the front page, I am taking the liberty to plug my own fun experiment (still WIP).
By no means is it as nice-looking as your demo, but it is interesting to ME...
C++, compiled to WASM, using WebGL. Works on Firefox too. M4 decimation.
There is also ggplot, ggplot2 and the Grammar of Graphics by Leland Wilkinson. Sadly, Algebra is so incompatible with Geometry that I found the book beautiful but useless for my problem domains after buying and reading and pondering it.
mitdy 4 days ago [-]
What purposes have you found for rendering so many datapoints? It seems like at a certain point, say above a few thousand, it becomes difficult to discriminate, and in many cases less useful to render more.
dapperdrake 4 days ago [-]
When did WebGPU become good enough at compute shaders? When I tried and failed at digging through the spec about a year ago it was very touch and go.
Maybe I'm just bad at reading specifications or finding the right web browser.
embedding-shape 4 days ago [-]
In Chromium it's been good for a good while; the jury's still out on when it'll be good in Firefox. Safari I have no clue about, nor whatever Microsoft calls their browser today.
buibuibui 3 days ago [-]
Is there a best practice for streaming and plotting large signal data (e.g. >1M data points of multiple sine waves) from a Python backend (e.g. numpy + FastAPI) to the frontend?
My current solution: fetch the ADC data, convert the bytes to base64, and embed that in JSON sent to the frontend. The frontend reverses the process and plots it with ECharts.
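One alternative I've been considering is skipping the base64/JSON step and fetching raw bytes directly - base64 inflates the payload by ~33% and adds a decode pass. A sketch of the frontend half (endpoint name made up; the backend would return raw little-endian float32 with Content-Type: application/octet-stream):

    const resp = await fetch('/api/adc-block');              // assumed endpoint
    const wave = new Float32Array(await resp.arrayBuffer()); // no parse, no base64
    // hand `wave` to the chart as a flat column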
dapperdrake 3 days ago [-]
How else is the data going to make it to your phone?
elAhmo 4 days ago [-]
Safari on latest Sequoia doesn't support this. Given that many people will not upgrade to the latest version, it is a shame Safari is behind these things.
dapperdrake 3 days ago [-]
Wait until you hear about the pixel size restrictions on safari canvases.
dangoodmanUT 4 days ago [-]
Some of these don't feel like 60fps, like the streaming one. I don't really know how to verify that, though. Or maybe I'm just too used to 144fps.
reactordev 3 days ago [-]
Nice. Add some extras like being able to draw lines (for the candlesticks) and bands. Could just be an add on. The ability not only to plot things, but point them out is a must for me.
I’ve written several of these in the past. Was going to write one in pure WebGPU for a project I’m working on but you beat me to it and now I feel compelled to try yours before going down yet another charting rabbit hole.
akst 3 days ago [-]
I think I’d be interested in seeing something written about the architectural decisions: how the architecture and your experience writing it differed from other non-GPU projects (charting ideally, but non-charting is fine too), and any unique hurdles you encountered in building this project.
mholt 3 days ago [-]
Hmm, I'm getting "Failed to request WebGPU adapter. No compatible adapter found. This may occur if no GPU is available or WebGPU is disabled." but brave://flags reports that WebGPU is "Hardware accelerated." Any way for me to try this out?
How do you think this is possible? Because on React Native, most of the graph libs are on the CPU or Skia (which is good, but still utilises the CPU for path rendering).
amirhirsch 4 days ago [-]
Very nice. There is an issue with panning on the million-point demo - it currently does not redraw until the dragging velocity is below some threshold, but it should look like the points are just panned into frame. It is probably enough to just get rid of the dragging velocity threshold, but it sometimes helps to cache an entire frame around the visible range.
jhatemyjob 4 days ago [-]
I don't really care about this, like at all. But I just wanted to say, that's an amazing name. Well done.
akdor1154 4 days ago [-]
The rendering is very cool, but what I really want is this as a renderer I can plug into Vega.
Vega/Vega-Lite have amazing charting expressivity in their spec language; most other charting libs don't come close. It would be very cool to be able to take advantage of that.
Moosdijk 4 days ago [-]
There seems to be a webgl render engine suitable for vega [0].
Have you tried it, and if so, what was your experience?
this is so well done, thanks for sharing it. i've been trying to communicate with people how we are living in the golden age of dev where things that previously couldn't have been created, now can be. this is an amazing example of that.
KellyCriterion 4 days ago [-]
Curious:
How do TradingView et al. solve this problem? They should have the same limitations?
(Actually, I'm a user of the site, though I never got around to digging into how they made it.)
artursapek 4 days ago [-]
Tradingview’s charts couldn’t handle a million data points. They typically just render a few thousand candlesticks at a time, which is trivial with well optimized Canvas code.
justplay 4 days ago [-]
Amazing. I can't express how thankful I am for you building this.
rzmmm 3 days ago [-]
Very cool project. Edward Tufte argued decades ago that great visualizations maximize the data-ink ratio. This is what he meant ;)
deburo 4 days ago [-]
Nicely done. Will you be able to render 3D donuts? And even animations, say pick a slice & see it tear apart from the donut.
huntergemmer 4 days ago [-]
Thanks! Currently focused on 2D charts. That's where the "big data" performance problem is most painful.
3D is coming (it's the same rendering pipeline), but I'd want to get the 2D story solid first before expanding scope.
The slice animation is doable though - we already have animation infrastructure for transitions. An "explode slice on click" effect would be a fun addition to the pie/donut charts.
What's your use case? Dashboard visuals or something else?
dvh 4 days ago [-]
Doesn't work on my Android phone because there's no GPU (but I have WebGL - is that not enough?)
Awesome stuff. Just curious: how much of this was written by AI autonomously?
Andr2Andr 4 days ago [-]
Will it be possible to plot large graphs/ networks with thousands of nodes?
kayson 4 days ago [-]
Zoom doesn't seem to work on Firefox mobile. Just zooms the whole page in.
escapecharacter 4 days ago [-]
I'd love to know if this is compatible as embedded in a Jupyter Notebook.
btbuildem 4 days ago [-]
I like how you used actual financial data for the candlestick example :)
lacoolj 4 days ago [-]
Doesn't work for me? Latest chrome, RTX 4080, what am I missing?
cuvinny 4 days ago [-]
I had to enable it in both Firefox (about:config search webgpu) and in Chrome (chrome://flags and enable Unsafe WebGPU Support) on my linux machine.
lacoolj 2 days ago [-]
ahhh ok thanks!
mdulcio 4 days ago [-]
Have you tried rendering 30 different instances at the same time?
rustystump 3 days ago [-]
There is no problem with using AI, but I strongly suspect that anyone can ask Claude to make a GPU version of uPlot and get a result of similar quality to this. AI emojis included.
The code in the repo is pretty awful, with zero abstraction of the duplicated render pipeline building and AI slop comments all over the place like "last resort do this". Do not use this for production code. Instead, prompt the AI yourself and use your own slop.
The performance here is also terrible given it is GPU-based. A GPU-based renderer done correctly should be able to hit 50-100M blocks/lines etc. at 60fps while zooming/panning.
It is a testament to how good AI is, though, and to the power of the Dunning-Kruger effect.
I hope you have a way to monetize/productize this, because this has three.js potential. I love this. Keep goin! And make it safe (a way to fund, don't overextend via OSS). Good luck, bud.
Also, you are a master of naming. ChartGPU is a great name, lol!
huntergemmer 4 days ago [-]
Thanks! The name was honestly just "what does this do" + "how does it do it" haha.
Interesting you mention three.js - there's definitely overlap in the WebGPU graphics space. My focus is specifically on 2D data visualization (time series, financial charts, dashboards), but I could see the rendering patterns being useful elsewhere.
On sustainability - still figuring that out. For now it's a passion project, but I've thought about a "pro" tier for enterprise features (real-time collaboration, premium chart types) while keeping the core MIT forever. Open to ideas if you have thoughts.
Appreciate the kind words! :)
PxldLtd 4 days ago [-]
Have you thought about leaning into some of the fintech space? They'd happily pay for the sorts of features they need to stream financial data (which is usually bazillions of data points) and graph it efficiently.
Off the top of my head, look into Order Book Heatmaps, 3D Volatility Surfaces, Footprint Charts/Volatility deltas. Integrating drawing tools like Fibonacci Retracements, Gann Fans etc. It would make it very attractive to people willing to pay.
huntergemmer 3 days ago [-]
Absolute gold comment here :)
This comment was buried yesterday. I'm sorry for the late response!
I was thinking about a pro tier for this kind of specialized stuff. Core stays MIT forever, but fintech tooling could be paid.
Of the chart types you listed, is there a preference for what gets done first?
Order Book Heatmaps first?
PxldLtd 3 days ago [-]
No problem :) I asked a friend who's a bit closer to the space and he agrees, definitely Order Book Heatmaps. The speed you're getting would make this a killer feature.
Competitors typically have to snapshot/aggregate because their graphing libraries are heavily CPU-bound. Being able to visualise level 2/3 data without downsampling is a big win. Also, being able to smoothly roll back through the last 12hrs of tick-level history would be really neat too.
I'd say the bare minimum feature set outside of that is going to be:
- Non linear X axis for gaps/sessions
- Crosshairs that snap to OHLC data
- Logarithmic scales, Candlesticks, Heikin-Ashi, and Volume profiles
- Getting the 'feel' nice so that you can quickly scale and drag (people are real sticklers for the feel of these tools)
- Solid callbacks for events for exchange integration; people hate leaving their charts to place an order (e.g. onOrderModify etc.)
- Provide a nice websocket data ingestion pipeline
- Provide an api so devs can integrate their own indicators, some sort of 'layers' API or something.
Sorry if I can't be of more help as I'm just a hobbyist in this area!
keepamovin 2 days ago [-]
hunter, that's why your licensing is super important. If you don't lock it down you are doing free R&D for giant firms that have the money to pay you but will just rip you off if they can. I speak from extensive OSS experience. The feel-good of giving away wears off; make the right choices with regard to IP and you can capture the value you are creating for the people who use it.
PxldLtd 17 hours ago [-]
This is a really good point. Competitors in this space have a lot of resources so there's a tightrope to walk if you go the OSS core route. Any of these competitors could leverage your core and provide many more features than you could reasonably implement.
BSL/BUSL seems like a good fit for licensing here. It's technically source available instead of open source but just adds the layer that a competitor can't be built using your core. Otherwise the core is free to modify and fork. AGPL might be an option but I fear it would scare off a lot of companies in the space who have policies against AGPL licensed code but you'd get to keep advertising as OSS.
lelanthran 3 days ago [-]
"The core insight" ...
acedTrex 4 days ago [-]
Both a .cursor AND a .claude folder, what a yikes. Slop post galore
logicallee 4 days ago [-]
.c? what a yikes. I took a quick look at the code and this application doesn't even have any machine code in it, it's just words like "while", "for", "if", "else" and other English words - someone back in the 1970s, I'm sure.
facontidavide 4 days ago [-]
Soon there will be only 3 factors that we will care about: API (easy to use and integrate), behavior (does it do what I want it to do?) and testability (do I have sufficient guarantee that the code doesn't have errors?).
The fact that the code was generated by a human or a machine is less and less important.
acedTrex 4 days ago [-]
And how do you verify those three things in a rapid low effort fashion?
embedding-shape 4 days ago [-]
Why don't you judge the results, if the slop is so easy to detect, instead of using the mere indication of a particular tool to mean it's slop? Lazy.
stephenhumphrey 4 days ago [-]
Healthy skepticism is certainly laudable, but too many llmuddites seem rather aggressive while whistling past their own graveyards.
embedding-shape 4 days ago [-]
Yeah, hypers need to cool their language down a bit and llmuddites need to acquire a bit of nuance. New technology tends to create large camps initially on both sides :)
acedTrex 4 days ago [-]
Because it is not reasonable to expend high effort to verify something that took no effort to create. That is not a workable long term solution. Instead you have to rely on low effort signals to signify if something is WORTH expending energy on.
embedding-shape 4 days ago [-]
I didn't read your comment, because it would take me longer than just writing: your comment looks like LLM slop.
acedTrex 4 days ago [-]
If that's your signal then you lock yourself out of all parts of this website; not the best heuristic.
embedding-shape 4 days ago [-]
Yeah, that'd be stupid right? I have the same mindset with projects and code. If it's a good project, it's a good project, I don't care what IDE they used. If the code is shit it's shit regardless of how it was produced. But at least point to something concrete so me and others could potentially learn something, or understand something better.
keepamovin 4 days ago [-]
Agree, but, ah, can you illuminate? <totally-offtopic data-but="i am intensely curious"> Quite amazing 6000 points in under 3 months. Yuge. OK, a few almost-1K posts, but they drop off. You must have some mega-point comments. Can you, eh, "point me" to a comment of yours that has a super amount of upvotes?
Sorry if this is weird, it's just I've never personally experienced a comment with anything more than 100 - 200 points. And that was RARE. I totally get if you don't want to...but like, what were your "kilopoint" comments, or thereabouts? </offtopic>
embedding-shape 4 days ago [-]
Yeah, thanks for giving me a reality-check on my HN addiction, really need to put back my original /etc/hosts file it seems :|
So, apparently these are my five most upvoted comments (based on going through the first 100 pages of my own comments...):
- 238 - https://news.ycombinator.com/item?id=46574664 - Story: Don't fall into the anti-AI hype
- 127 - https://news.ycombinator.com/item?id=46114263 - Story: Mozilla's latest quagmire
- 92 - https://news.ycombinator.com/item?id=45900337 - Story: Yt-dlp: External JavaScript runtime now required f...
- 78 - https://news.ycombinator.com/item?id=46056395 - Story: I don't care how well your "AI" works
- 73 - https://news.ycombinator.com/item?id=46635212 - Story: The Palantir app helping ICE raids in Minneapolis
I think if you too retire, have nothing specific to do for some months, and have too much free time to discuss with strangers on the internet about a wide range of topics you ideally have strong opinions about, you too can get more HN karma than you know what to do with :)
buckle8017 4 days ago [-]
WebGPU is a security nightmare.
The idea that GPU vendors are going to care about memory access violations over raw performance is absurd.
the__alchemist 4 days ago [-]
Security is one aspect to consider. It's not a veto button!
buckle8017 4 days ago [-]
It's absolutely a veto button on something so pervasive.
What is wrong with you JavaScript bros.
the__alchemist 4 days ago [-]
Not a JS bro here; low-level embedded/scientific programmer who does a lot of graphics and general compute work on GPUs.
uPlot has a 2M datapoint demo here, if interested: https://leeoniya.github.io/uPlot/bench/uPlot-10M.html
Am interested in collaborating.
https://www.linkedin.com/in/huntergemmer/
this is, effectively, what uPlot does, too: https://github.com/leeoniya/uPlot/issues/1119
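(For readers following along, the min/max-per-bucket idea is small enough to sketch - a generic version, not uPlot's or Ardour's actual code:)

    // Keep per-bucket min and max so peaks survive downsampling.
    // Returns 2 points per bucket; buckets is roughly the plot width in px.
    // Assumes values.length >= buckets.
    function minMaxDecimate(values, buckets) {
      const out = new Float64Array(buckets * 2);
      const perBucket = values.length / buckets;
      for (let b = 0; b < buckets; b++) {
        const start = Math.floor(b * perBucket);
        const end = Math.min(values.length, Math.floor((b + 1) * perBucket));
        let mn = Infinity, mx = -Infinity;
        for (let i = start; i < end; i++) {
          if (values[i] < mn) mn = values[i];
          if (values[i] > mx) mx = values[i];
        }
        out[2 * b] = mn;
        out[2 * b + 1] = mx;
      }
      return out;
    }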
I discovered flot during my academic research career circa 2008 and it saved my ass more times than I can count. I just wanted to say thank you for that. I wouldn't be where I am today without your help :)
> But regarding sampling, if it's a line chart, you can sample adaptively by checking whether the next point makes a meaningfully visible difference measured in pixels compared to its neighbours.
uPlot basically does this (see sibling comment), so hopefully that's some validation for you :)
My concern would be computational cost for real-time/streaming use cases. LTTB is O(n) and pretty cache-friendly. Wavelet transforms are more expensive, though maybe a GPU compute shader could make it viable.
The other question is whether it's "visually correct" for charting specifically. LTTB optimizes for preserving the visual shape of the line at a given resolution. Wavelet decomposition optimizes for signal reconstruction - not quite the same goal.
That said, I'd be curious to experiment. Do you have any papers or implementations in mind? Would make for an interesting alternative sampling mode.
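(For concreteness on the cost comparison, the whole LTTB pass is roughly this - a simplified sketch over columnar x/y arrays, not ChartGPU's exact implementation:)

    // Largest-Triangle-Three-Buckets, one O(n) pass. Picks, per bucket, the
    // point forming the largest triangle with the previous pick and the
    // average of the next bucket. First and last points are always kept.
    function lttb(x, y, threshold) {
      const n = x.length;
      if (threshold >= n || threshold < 3) return { x, y };
      const outX = new Float64Array(threshold);
      const outY = new Float64Array(threshold);
      const every = (n - 2) / (threshold - 2);
      let a = 0; // index of previously selected point
      outX[0] = x[0]; outY[0] = y[0];
      for (let i = 0; i < threshold - 2; i++) {
        // average point of the *next* bucket
        let avgX = 0, avgY = 0;
        const ns = Math.floor((i + 1) * every) + 1;
        const ne = Math.min(n, Math.floor((i + 2) * every) + 1);
        for (let j = ns; j < ne; j++) { avgX += x[j]; avgY += y[j]; }
        avgX /= ne - ns; avgY /= ne - ns;
        // best point in the *current* bucket by (doubled) triangle area
        const start = Math.floor(i * every) + 1;
        const end = Math.floor((i + 1) * every) + 1;
        let maxArea = -1, pick = start;
        for (let j = start; j < end; j++) {
          const area = Math.abs(
            (x[a] - avgX) * (y[j] - y[a]) - (x[a] - x[j]) * (avgY - y[a]));
          if (area > maxArea) { maxArea = area; pick = j; }
        }
        outX[i + 1] = x[pick]; outY[i + 1] = y[pick];
        a = pick;
      }
      outX[threshold - 1] = x[n - 1]; outY[threshold - 1] = y[n - 1];
      return { x: outX, y: outY };
    }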
I don't have any papers in mind, but I do think the critique around visual shape vs signal reconstruction may not be accurate, given that wavelets are seeing a lot of adoption in the visual space (JPEG 2000 being the leading edge in that field). Might also be interesting to use DCT. I think these will perform better than LTTB (of course the compute cost is higher, but there's HW acceleration for some of these, or will be over time).
Sounds like what makes SQL joins NP-hard.
Sometimes I like to ponder on the immense amount of engineering effort expended on working around browser limitations.
In electronics world this is what "digital phosphor" etc does in oscilloscopes, which started out as just emulating analog scopes. Some examples are visible here https://www.hit.bme.hu/~papay/edu/DSOdisp/gradient.htm
There is this misconception that if one uses JS or C# to tell a GPU what to do it is somehow slower than Rust. It only is if you're crunching the data yourself; moving memory to the GPU and telling the GPU to crunch is virtually identical.
Even then, when you write to a framebuffer directly on the GPU, if the locations of the points are not contiguous you are thrashing. Rendering points very fast is still very much about reducing the data set down to bypass all the layers of memory walls.
Repeatedly shrinking by a factor of two means log2(max(width, height)) passes; each pass touches a quarter of the pixels of the previous one, so the total work is the geometric series 1 + 1/4 + 1/16 + ... = 4/3 times the pixels of the original image. Should be low enough overhead, right?
Add a Lab color space option for this though, like the color theme solarized-light.
Also add options to side-step red-green blindness and blue-yellow blindness.
We've been working on a browser-based link graph (OSINT) analysis tool for months now (https://webvetted.com/workbench). The graph charting tools on the market are pretty basic for the kind of charting we are looking to do (think 1000s of connected/disconnected nodes/edges). Being able to handle 1M points is a dream.
This will come in very handy.
Is graph visualization something you'd want as part of ChartGPU, or would a separate "GraphGPU" type library make more sense? Curious how you're thinking about it.
More directly relevant, I haven't looked at the D3 internals for a decade, but I wonder if it might be tractable to use your library as a GPU rendering engine. I guess the big question for the future of your project is whether you want to focus on the performance side of certain primitives or expand the library to encompass all the various types of charts/customization that users might want. Probably that would just be a different project entirely/a nightmare, but if feasible even for a subset of D3 you would get infinitely customizable charts "for free." https://github.com/d3/d3-shape might be a place to look.
In my past life, the most tedious aspect of building such a tool was how different graph standards and expectations are across different communities (data science, finance, economics, natural sciences, etc). Don't get me started about finance's love for double y-axis charts... You're probably familiar with it, but https://www.amazon.com/Grammar-Graphics-Statistics-Computing... is fantastic if you continue on your own path chart-wise and you're looking for inspiration.
That said, the ECharts-style declarative API is intentionally designed to be "batteries included" for common cases. So it's a balance: the primitives are fast, but you get sensible defaults for the 80% use case without configuring everything. Double y-axis is a great example - that's on the roadmap because it's so common in finance and IoT dashboards. Same with annotations, reference lines, etc. Haven't read the Grammar of Graphics book but it's been on my list - I'll bump it up. And d3-shape is a great reference for the path generation patterns. Thanks for the pointers!
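To give a feel for the declarative shape (option names here follow ECharts conventions and are hypothetical - double y-axis itself is still roadmap, not shipped):

    // Hypothetical config sketch: ECharts-style declarative API, two y-axes.
    // ChartGPU's real option names may differ; priceColumn/volumeColumn are
    // assumed to be columnar typed arrays.
    const chart = new ChartGPU(document.getElementById('chart'), {
      xAxis: { type: 'time' },
      yAxis: [
        { name: 'Price', position: 'left' },
        { name: 'Volume', position: 'right' },
      ],
      series: [
        { type: 'line', yAxisIndex: 0, data: priceColumn },
        { type: 'bar', yAxisIndex: 1, data: volumeColumn },
      ],
    });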
Question: What chart types or customization would be most valuable for your use cases?
That is, you're definitely developing the tool in a direction that I (and, I think, most Hacker News readers) will appreciate, and it sounds like you're already thinking about some of the most common "extravagances" (annotations, reference lines, double y-axis, etc). As OP mentioned, I think there's a big need for more performant client-side graph visualization libraries, but that's really a different project. Last I looked, you're still essentially stuck with graphviz prerendering for large enough graphs...
"Data science/data journalism" is a great way to frame the target audience. Clean defaults, sensible design, fast enough that the tool disappears and you just see the data.
And yeah, graphviz keeps coming up in this thread - clearly a gap in the ecosystem. Might be a future project, but want to nail the 2D charting story first and foremost.
Thanks for the thoughtful feedback - this is exactly the kind of input that shapes the roadmap.
A lot of improvements are possible, based on 20 years of progress in interactive systems, and just overall computing performance.
The most recent addition to the family is our open source GFQL graph language & engine layer (Cypher on GPUs, including various dataframe & binary format support for fast & easy large data loading), and, under the louie.ai umbrella, we're piloting genAI extensions.
You can now render up to 5 million candles. Just tested it - achieved 104 FPS with 5M candles streaming at 20 ticks/second.
Demo: https://chartgpu.github.io/ChartGPU/examples/candlestick-str...
Also fixed from earlier suggestions and feedback as noted before:
- Data zoom slider bug has been fixed (no longer snapping to the left or right)
- Idle CPU usage bug (added user controls along with more clarity to 1M point benchmark)
13 hours on the front page, 140+ comments and we're incorporating feedback as it comes in.
This is why HN is the best place to launch. Thanks everyone :)
One thing to note: I added a toggle to "Benchmark mode" in the 1M benchmark example - this preserves the benchmark capability while demonstrating efficient idle behavior.
Another thing to note: Do not be alarmed when you see the FPS counter display 0 (lol), that is by design :) Frames are rendered efficiently. If there's nothing to render (no dirty frames) nothing is rendered. The chart will still render at full speed when needed, it just doesn't waste cycles rendering the same static image 60 times per second.
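For the curious, the pattern is just a dirty flag in front of requestAnimationFrame. A generic sketch (not our exact internals):

    // Render-on-demand: draw only when something marked the frame dirty.
    // render() stands in for the actual WebGPU encode/submit.
    let scheduled = false;
    function invalidate() {
      if (scheduled) return; // a frame is already queued
      scheduled = true;
      requestAnimationFrame(() => {
        scheduled = false;
        render(); // exactly one draw per invalidation burst
      });
    }
    // anything that changes what's on screen calls invalidate():
    canvas.addEventListener('pointermove', invalidate); // crosshair moved
    // setData()/zoom/pan handlers call it too; when nothing changes, no rAF
    // is queued and CPU/GPU usage drops to zero
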
Blown away by all of you amazing people and your support today :)
https://chartgpu.github.io/ChartGPU/examples/million-points/...
While dragging, the slider does not stay under the cursor, but instead moves by unexpected distances.
Looks like the data zoom slider has a momentum/coordinate mapping issue. Bumping this up the priority list since multiple people are hitting it.
After the initial setup and learning curve, it was actually very easy. All in all, way less complicated than all the performance hacks I had to do to get 0.01% of the data to render half as smooth using d3.
Although this looks next level. I make sure all the computation happens in a single O(n) loop, but the main loop still takes place on the CPU. Very well done.
To anyone on the fence, GPU charting seemed crazy to me beforehand (classic overengineering) but it ends up being much simpler (and much much much smoother) than traditional charts!
[1] https://phrasing.app
[0]: https://chartgpu.github.io/ChartGPU/examples/live-streaming/...
[1]: https://crisislab-timeline.pages.dev/examples/live-with-plug...
[1]: https://github.com/ChartGPU/ChartGPU/blob/main/.cursor/agent...
[2]: https://github.com/ChartGPU/ChartGPU/blob/main/.claude/agent...
- new account
- spamming the project to HN, reddit etc the moment the demo half works
- single contributor repo
- Huge commits minutes apart
- repo is less than a week old (sometimes literally hours)
- half the commits start with "Enhance"
- flashy demo that hides issues immediately obvious to experts in the field
- author has slop AI project(s)
OP uses more than one branch so he's more sophisticated than most.
Here's a demo of a WIP rendering engine we're working on that boosted our previous capability of 10M data points to 100M data points.
https://x.com/TapeSurfApp/status/2009654004893339903?s=20
https://plotly.com/python/performance/
However, this is pretty great; there really aren't that many use cases that require more than a million points. You might finally unseat dygraphs as the gold standard in this space.
I guess the real draw here is smooth scrolling and zooming, which is hard to do with server-rendered tiles. There's also the case of fully local use, where server rendering doesn't make much sense.
The computer on my desk only costs me the electric power to run it, and there's 0 network latency between it and the monitor on which I'm viewing charts. If I am visualizing some data and I want to rapidly iterate on the visualization or interact with it, there's no more ideal place for the data to reside than right there. DDR5 and GPUs will be cheap again, some day.
I agree; unfortunately no library I've found supports this. I currently SSR plots to SVG using Observable Plot and JSDom [0]. This means there is no JavaScript bundle, but also no interactivity, and Observable doesn't have a method to generate a small JS sidecar to add interactivity. I suppose you could progressively enhance, but Plot is dozens of kilobytes that I'd frankly rather not send.
[0] https://github.com/boehs/site/blob/master/conf/templating/ma...
It’s more low level than a full charting library, but most of it can run natively on the server with zero config.
I’ve always found performance to be kind of a drag with server side dom implementations.
I think it's just a different mindset; GIS libs like Leaflet kind of assume they're the centerpiece of the app and can dictate a bunch of structure around how things are going to work, whereas charting libs benefit a lot more from "just add me to your webpack bundle and call one function with an array and a div ID, I promise not to cause a bunch of integration pain!"
Last time I tried to use it for dashboarding, I found Kibana did extremely aggressive down-sampling to the point that it was averaging out the actual extremes in the data that I needed to see.
Haha, Highcharts is a running joke around my office because of this. Every few years the business will bring in consultants to build some interface for us, and every time we will have to explain to them that Highcharts, even with its turbo mode enabled, chokes on our data streams almost immediately.
Even when I turn on dom.webgpu.enabled, I still get "WebGPU is disabled by blocklist" even though your domain is not in the blocklist, and even if I turn on gfx.webgpu.ignore-blocklist.
Very cool project. Thanks!!!
https://caniuse.com/webgpu
The slider should now track the cursor correctly on macOS. If you tried the million-points demo earlier and the zoom felt off, give it another shot.
This is why I love launching on HN - real feedback from people actually trying the demos. Keep it coming! :)
https://caniuse.com/webgpu
[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1870699
And there is the issue of getting the browser to use the correct GPU in the first place, but that is a different can of worms.
Please support a fallback, ideally a 2D one too. WebGPU and WebGL are a privacy nightmare and the former is also highly experimental. I don't mind sub-60 FPS rendering, but I'd hate having to enable either of them just to see charts if websites were to adopt this library.
The web is already bad requiring JavaScript to merely render text and images. Let's not make it any worse.
Biggest issue is macOS users with newer Safari on older macOS.
This blocks progress (and motivation) on some of my projects.
But personally, I'm not going to start turning on unsafe things in my browser so I can see the demo. I tried firefox and chromium and neither worked so pfft, whatever.
There doesn't seem to be a communication mechanism that has minimal memcopy or no serialization/deserialization; the security boundary makes this difficult.
I have a backend array of 10M i16 points, I want to get this into the frontend (with scale & offset data provided via side channel to the compute shader).
As it stands, I currently process on the backend and send the frontend a bitmap or simplified SVG. I'm curious to know about the opposite approach.
For this, compress/quantize the numbers and then pass them directly to the GPU after they come off the network. Have a compute shader on the GPU decompress before writing to a framebuffer. This is what high-performance lidar streaming renderers do, as lidar data is packed efficiently for transport.
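Rough shape of that with WebGPU (assuming an initialized GPUDevice `device`; the exact quantization scheme and names are up to you):

    // Ship quantized i16 samples to the GPU untouched, dequantize in a
    // compute shader. scale/offset arrive via a uniform (side channel).
    const raw = new Int16Array(10_000_000); // e.g. straight off the WebSocket
    const samples = device.createBuffer({
      size: raw.byteLength, // 20 MB as i16 vs 40 MB as f32; 4-byte aligned
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
    });
    device.queue.writeBuffer(samples, 0, raw); // one copy, no JS conversion

    // WGSL has no 16-bit int type, so the shader views the buffer as
    // array<u32> and unpacks two sign-extended samples per word:
    const wgsl = `
      @group(0) @binding(0) var<storage, read> samples : array<u32>;

      fn sample_at(i: u32, scale: f32, offset: f32) -> f32 {
        let word = samples[i >> 1u];
        let half = (word >> (16u * (i & 1u))) & 0xffffu;
        let q = bitcast<i32>(half << 16u) >> 16u; // sign-extend the i16
        return f32(q) * scale + offset;
      }
    `;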
But minimizing copying or avoiding format conversions doesn't necessarily get you best performance of course.
Error message: "WebGPU Error: Failed to request WebGPU adapter. No compatible adapter found. This may occur if no GPU is available or WebGPU is disabled.".
WebGPU is supported on Chrome and on the latest version of Safari. On Linux with all browsers WebGPU is only supported via an experimental flag.
Maybe I messed with the settings at some point and disabled something.
Would be great if you had a button there one can press that does a 10-15 second benchmark and then prints a min/max report; maybe it could even include loading/unloading the data too, so we get some ranges that are easier to share and compare between machines :)
Love the benchmark button idea. A "Run Benchmark" mode that captures:
- Load time
- GPU time
- CPU time
- Min/max/avg FPS over 10-15 seconds
- Hardware info
Then export a shareable summary or even a URL with encoded results. Would make for great comparison threads.
Adding this to the roadmap - would make a great v0.2 feature. Thanks for the suggestion!
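The FPS-capture piece would be something like this (sketch only, not shipped yet):

    // Sample frame times for ~15 s, then report min/avg/max FPS.
    function runBenchmark(durationMs = 15_000) {
      return new Promise((resolve) => {
        const frames = [];
        let last = performance.now();
        const start = last;
        function tick(now) {
          frames.push(now - last); // per-frame delta in ms
          last = now;
          if (now - start < durationMs) requestAnimationFrame(tick);
          else {
            const fps = frames.map((dt) => 1000 / dt);
            resolve({
              min: Math.min(...fps).toFixed(1),
              max: Math.max(...fps).toFixed(1),
              avg: (fps.reduce((a, b) => a + b, 0) / fps.length).toFixed(1),
              frames: frames.length,
            });
          }
        }
        requestAnimationFrame(tick);
      });
    }

    // usage: runBenchmark().then((r) => console.log(JSON.stringify(r)));

From there, encoding the result object into a shareable URL is straightforward.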
D3fc maintainer here. A few years back we added WebGL support to D3fc (a component library for people building their own charts with D3), allowing it to render 1m+ datapoints:
https://blog.scottlogic.com/2020/05/01/rendering-one-million...
That's what I'm using now but I gave it too much data and it takes like a minute to render so I'm quite interested in this.
One more issue is that some browser and OS combinations do not support WebGPU, so we will still have to rely on existing libraries in addition to this, but it feels promising.
I've been looking for a followup to uPlot - Lee who made uPlot is a genius and that tool is so powerful, however I need OffscreenCanvas running charts 100% in worker threads. Can ChartGPU support this?
I started an Opus 4.5 rewrite of uPlot to decouple it from its DOM reliance, but your project is another level of genius.
I hope there is consideration for running your library 100% in a worker thread (the data munging pre-chart is very heavy in our case).
Again, congrats!
Worker thread support via OffscreenCanvas is a great idea and WebGPU does support it. I haven't tested ChartGPU in a worker context yet, but the architecture should be compatible - we don't rely on DOM for rendering, only for the HTML overlay elements (tooltips, axis labels, legend).
The main work would be:
1. Passing the OffscreenCanvas to the worker
2. Moving the tooltip/label rendering to message-passing or a separate DOM layer
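Step 1 is standard browser plumbing rather than anything ChartGPU-specific. Roughly:

    // main thread: hand the canvas to a worker; WebGPU can render there.
    const canvas = document.querySelector('canvas');
    const offscreen = canvas.transferControlToOffscreen();
    const worker = new Worker('chart-worker.js'); // hypothetical file
    worker.postMessage({ canvas: offscreen }, [offscreen]); // transfer, not copy

    // chart-worker.js: navigator.gpu is available in workers
    self.onmessage = async ({ data }) => {
      const adapter = await navigator.gpu.requestAdapter();
      const device = await adapter.requestDevice();
      const ctx = data.canvas.getContext('webgpu');
      ctx.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
      // ...chart setup and render loop live entirely in the worker
    };
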
For your use case with heavy data munging, you could also run just the data processing in a worker and pass the processed arrays to ChartGPU on the main thread - that might be a quicker win.
Would you open an issue on GitHub? I'd love to understand your specific workload better. This feels like a v0.2 feature worth prioritizing.
There is certainly something beautiful about your charting GPU code being part of a file that runs completely isolated in another thread along with our websocket data fire hose.
Architecturally, that could be something interesting: you could expose a typed API wrapping postMessage, where consumers wanting to bind the main thread to a worker thread provide the offscreen canvas as well as a stream of normalized touch, pointer, keyboard, and wheel events. Then your worker-side listeners could handle these incoming events and treat them as if they came directly from event listeners on the main thread; effectively, your library becomes thread-agnostic.
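Something like this (all names hypothetical):

    // main thread: forward normalized input into the worker
    const forward = (e) => worker.postMessage({
      kind: 'pointer', type: e.type,
      x: e.offsetX, y: e.offsetY, buttons: e.buttons,
    });
    for (const t of ['pointerdown', 'pointermove', 'pointerup'])
      canvas.addEventListener(t, forward);
    canvas.addEventListener('wheel', (e) => worker.postMessage({
      kind: 'wheel', x: e.offsetX, y: e.offsetY, deltaY: e.deltaY,
    }), { passive: true });

    // worker side: replay into the same handlers a main-thread build would
    // use (chart.handlePointer/handleWheel are placeholders)
    self.onmessage = ({ data }) => {
      if (data.kind === 'pointer') chart.handlePointer(data);
      else if (data.kind === 'wheel') chart.handleWheel(data);
    };
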
I'd be happy to discuss this on GitHub. I'll try to get to that today. See you there.
On the topic of support for worker threads, in my current project I have multiple data sources, each handled by its own worker. Copying data between worker and main thread - even processed - can be an expensive operation. Avoiding it can further help with performance.
Funny enough, I am doing something very similar: a portable C++ (Windows, Linux, macOS) charting library that also compiles to WASM and runs in the browser...
I am still at day 2, so see you in 3 days, I guess!
Which demo were you on? (million-points, live-streaming, or sampling?) I'll test on M1 today and get a fix out.
Really appreciate you taking the time to try it :)
Drawing and scrolling live data was a problem for a lib (don't remember which one) because it was drawing the whole thing on every frame.
Although dragging the slider at the bottom is currently kind of broken as mentioned in another comment, seems like they are working on it though.
All the optimizations mentioned except LTTB downsampling in compute shaders can be done in WebGL.
Web charts with >1M points and 60 FPS zooming/panning have been available since 2019. For example, here's a line chart with 100M points (100x more): https://lightningchart.com/lightningchart-js-demos/100M/
But still, love to see it. WebGPU will surely go forward slowly as these things naturally do, but practical experimentation is essential.
Most high-level charting libraries already support downsampling. Rendering data that is not visible is a waste of CPU cycles anyway. This type of optimization is very common in 3D game engines.
Also, modern CPUs can handle rendering of even complex 2D graphs quite well. The insanely complex frontend stacks and libraries, a gazillion ads and trackers, etc., are a much larger overhead than rendering some interactive charts in a canvas.
I can see GPU rendering being useful for applications where real-time updates are critical, and you're showing dozens of them on screen at once, in e.g. live trading. But then again, such applications won't rely on browsers and web tech anyway.
This is just embarrassing.
By no means is it as nice looking as your demo, but it is interesting to ME... C++, compiled to WASM, using WebGL. Works on Firefox too. M4 decimation.
https://one-million-points-wasm.netlify.app/
http://perceptualedge.com/examples.php
There is also ggplot, ggplot2 and the Grammar of Graphics by Leland Wilkinson. Sadly, Algebra is so incompatible with Geometry that I found the book beautiful but useless for my problem domains after buying and reading and pondering it.
Maybe I am just bad at reading specifications or finding the right web browser.
I’ve written several of these in the past. Was going to write one in pure WebGPU for a project I’m working on but you beat me to it and now I feel compelled to try yours before going down yet another charting rabbit hole.
How do you think this is possible? Because on React Native most of the graph libs run on the CPU or use Skia (which is good but still utilises the CPU for path rendering).
Vega/VGlite have amazing charting expressivity in their spec language, most other charting libs don't come close. It would be very cool to be able to take advantage of that.
[0] https://github.com/vega/vega-webgl-renderer
3D is coming (it's the same rendering pipeline), but I'd want to get the 2D story solid first before expanding scope.
The slice animation is doable though - we already have animation infrastructure for transitions. An "explode slice on click" effect would be a fun addition to the pie/donut charts.
What's your use case? Dashboard visuals or something else?
The code in the repo is pretty awful with zero abstraction of duplicated render pipeline building and ai slop comments all over the place like “last resort do this”. Do not use this for production code. Instead, prompt the ai yourself and use your own slop.
The performance here is also terrible given it is gpu based. A gpu based renderer done correctly should be able to hit 50-100m blocks/lines etc at 60fps zoom/panning.
It is a testament to how good AI is though, and to the power of the Dunning-Kruger effect.
I hope you have a way to monetize/productize this, because this has three.js potential. I love this. Keep goin! And make it safe (a way to fund, don't overextend via OSS). Good luck, bud.
Also, you are a master of naming. ChartGPU is a great name, lol!
Interesting you mention three.js - there's definitely overlap in the WebGPU graphics space. My focus is specifically on 2D data visualization (time series, financial charts, dashboards), but I could see the rendering patterns being useful elsewhere.
On sustainability - still figuring that out. For now it's a passion project, but I've thought about a "pro" tier for enterprise features (real-time collaboration, premium chart types) while keeping the core MIT forever. Open to ideas if you have thoughts.
Appreciate the kind words! :)