Show HN: A physically-based GPU ray tracer written in Julia (makie.org)
krastanov 2 hours ago [-]
As an aside, it is really interesting to see a computational package that, while supporting multiple GPU vendors, was first vetted on AMD, not NVIDIA. It is encouraging to see ROCm finally shaking off its reputation for poor support.
simondanisch 1 hours ago [-]
Well, I do hate vendor lock-in with a passion ;) But yeah, a lot has happened - this likely wouldn't have been possible one or two years ago!
bobajeff 50 minutes ago [-]
It says:

>the reference implementation from Physically Based Rendering (Pharr, Jakob, Humphreys)

I'd like to know a little about the process you went through for the port. That book* sounds like an excellent resource to start from, but what was it like using it and the code?

* https://pbrt.org/

simondanisch 30 minutes ago [-]
I've done lots of manual refactoring of the initial prototype in Trace.jl (by Anton Smirnov, who I think ported an earlier version of the pbrt book). This helped me familiarize myself with the math, the infrastructure, and the general problems a ray tracer faces, and it laid the groundwork for the overall architecture and for knowing what to pay attention to for fast GPU execution. One key insight was that it's possible to avoid an UberMaterial and instead use a MultiTypeSet for storing different materials and lights, which allows fast, concretely typed iteration.
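Roughly, the idea is the following (a simplified sketch with made-up types, not the actual Trace.jl/MultiTypeSet code):

    # Simplified sketch of the idea (hypothetical types, not the real MultiTypeSet API):
    # keep one concretely typed vector per material type instead of a single
    # Vector{AbstractMaterial}, so every inner loop stays type-stable on the GPU.
    struct Lambertian; albedo::Float32; end
    struct Dielectric; ior::Float32; end

    struct MaterialSet{T<:Tuple}
        groups::T    # e.g. (Vector{Lambertian}, Vector{Dielectric})
    end

    # unrolls over the tuple, so f always sees a concrete element type
    foreach_material(f, set::MaterialSet) =
        foreach(group -> foreach(f, group), set.groups)

    mats = MaterialSet(([Lambertian(0.5f0)], [Dielectric(1.5f0)]))
    foreach_material(println, mats)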

Then I found that pbrt had moved away from that initial design, and I used Claude Code to port large parts of the new C++ code to Julia. This led to a pretty bad port, and I had lots of back and forth to fix bugs, improve the GPU acceleration, make the code more concise and "Julian", and correct the AI's mistakes and bogus design decisions ;) This polish isn't really over yet, but it works well enough and is fast enough for a beta release!

amelius 2 hours ago [-]
Is the material description part of the language the same as in PBRT?

I'm asking because I had a lot of trouble trying to describe interfaces between materials, only to find out that what I wanted to do was not possible in PBRT without modifying the code. Apparently, in PBRT a material can only have one other material touching it. So, for example, rendering a glass filled with water and ice is not possible without hacks. From a user's point of view this is a bit of a let-down, of course.

Context: https://news.ycombinator.com/item?id=45668543

simondanisch 2 hours ago [-]
Nope, we made a complete high-level Julia interface, and I plan to have the Makie API be the main user-facing scene description, which I think can be more descriptive than pbrt!
amelius 2 hours ago [-]
Ok. Did you see this:

https://blog.yiningkarlli.com/2019/05/nested-dielectrics.htm...

And I'm curious how you solve it.

simondanisch 1 hours ago [-]
Sorry, I was on my phone. This doesn't seem to be a problem of the description language, but rather of how the integrator and materials work internally, so this currently works the same way in Julia. I do think, though, that it's more approachable to add experimental features like this in the Julia version. It would certainly be an interesting project! Over time I do want to move further away from the pbrt-v4 architecture towards something much more modular and easy to extend. I feel like the overlap resolution should happen at scene creation time, to avoid an expensive priority stack at ray tracing time - then it would just be a matter of better tracking the media at boundary crossings. But I haven't really thought this through, of course ;)
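For reference, the priority idea from that post would look roughly like this (just a sketch of the approach described there, nothing that's implemented yet):

    # Rough sketch of the nested-dielectrics priority idea from the linked post
    # (not implemented like this here): track the media a ray is currently inside
    # and let the highest priority win where volumes overlap.
    struct MediumEntry
        id::Int
        priority::Int    # e.g. ice > water > glass > air
        ior::Float32
    end

    active_medium(stack::Vector{MediumEntry}) =
        isempty(stack) ? nothing : argmax(m -> m.priority, stack)

    # a boundary is "real" only if the entered medium out-prioritizes
    # everything we are already inside of
    function enter!(stack::Vector{MediumEntry}, m::MediumEntry)
        real_boundary = all(other -> m.priority >= other.priority, stack)
        push!(stack, m)
        return real_boundary
    end

    exit!(stack::Vector{MediumEntry}, id::Int) = filter!(m -> m.id != id, stack)
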
the_harpia_io 26 minutes ago [-]
honestly the AMD-first bit surprised me - usually ROCm support is an afterthought or just broken outright.

curious about BVH traversal specifically. dynamic dispatch patterns across GPU backends can get weird fast. did KernelAbstractions hold up there or were there vendor-specific fallbacks needed for the heavier acceleration structure work?

simondanisch 13 minutes ago [-]
Well, I'm a bit of an AMD "fanboy" and really dislike NVIDIA's vendor lock-in. I'm not sure what you mean by dynamic dispatch across GPU backends - nothing should be dynamic there, and most of the simpler primitives map quite nicely between vendors (e.g. local memory, work groups, etc.). To be honest, the BVH/TLAS has been pretty simple compared to the wavefront infrastructure. We haven't done anything fancy yet, but the performance is still really good.

I'm sure there are still lots of things we can do to improve performance, but right now I've concentrated on getting something usable out. We're mostly matching pbrt-v4 performance, but I couldn't compare against their NVIDIA-only GPU acceleration without an NVIDIA GPU. I can just say that the performance is MUCH better than what I initially aimed for, and it feels as usable as some of the state-of-the-art renderers I've been using. A 1:1 comparison is still missing, though, since it's not easy to do one without comparing apples to oranges (already mapping materials and light types from one renderer to another is not trivial).
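To give an idea of what I mean by the primitives mapping cleanly: a trivial KernelAbstractions kernel (just an illustration, not code from the ray tracer) looks the same regardless of the backend:

    # Minimal KernelAbstractions example (illustration only, not from the ray tracer):
    # the same kernel runs unchanged on CPU, CUDA, AMDGPU, Metal or oneAPI arrays.
    using KernelAbstractions

    @kernel function scale!(A, s)
        i = @index(Global)
        @inbounds A[i] *= s
    end

    A = rand(Float32, 1024)            # swap in a ROCArray/CuArray to run on a GPU
    backend = get_backend(A)
    scale!(backend, 64)(A, 2f0; ndrange = length(A))
    KernelAbstractions.synchronize(backend)
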
blueaquilae 1 hours ago [-]
That's an impressive accomplishment and a fantastic tool to explore.
NoboruWataya 2 hours ago [-]
I don't hear nearly as much about Julia as I used to. A few years ago the view was that it was about to replace Python as the language of choice for data science. Seems like that didn't happen?
simondanisch 1 hours ago [-]
I think the hype has slowed down, but all the growth statistics haven't. Personally, I think Julia is the only language in which I could implement something like Makie without running into a maintenance nightmare, and with Julia, GPU programming is actually fun, high level, and composes well, which I miss in most other languages. So I don't really care whether it replaces Python or not. I do think that to replace Python, Julia will need to solve compilation latency, ship AOT binaries, and maybe interpret more of the glue code, which currently introduces quite a lot of compilation overhead without much gain in performance.
electroly 23 minutes ago [-]
I don't know about everyone else, but slow Julia compilation continues to cause me ongoing suffering to this day. I don't think they're ever going to "fix" this. On a standard GitHub Actions Windows worker, installing the public Julia packages I use, precompiling, and compiling the sysimage takes over an hour. That's not an exaggeration. I had to juice the worker up to a custom 4x sized worker to get the wall clock time to something reasonable.

It took me days to get that build to work; doing this compilation once in CI so you don't have to do it on every machine is trickier than it sounds in Julia. The "obvious" way (install packages in Docker, run container on target machine) does not work because Julia wants to see exactly the same machine that it was precompiled on. It ends up precompiling again every time you run the container on other machines. I nearly shed a tear the first time I got Julia not to precompile everything again on a new machine.

R and Python are done in five minutes on the standard worker and it was easy; it's just the amount of time it takes to download and extract the prebuilt binaries. Do that inside a Docker container and it's portable as expected. I maintain Linux and Windows environments for the three languages and Julia causes me the most headaches, by far. I absolutely do not care about the tiny improvement in performance from compiling for my particular microarch; I would opt into prebuilt x86_64 generic binaries if Julia had them. I'm very happy to take R and Python's prebuilt binaries.
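
For context, the sysimage step is basically PackageCompiler's create_sysimage; a stripped-down version of what the CI job does looks like this (placeholder package names, not my exact script):

    # Stripped-down version of the CI sysimage step (placeholder packages):
    using PackageCompiler

    create_sysimage(
        [:DataFrames, :CSV];                          # whatever packages the project uses
        sysimage_path = "JuliaSysimage.dll",          # .so on Linux
        precompile_execution_file = "precompile.jl",  # script exercising typical workloads
    )
    # then launch on the target machine with: julia -J JuliaSysimage.dll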

bobajeff 1 hours ago [-]
As someone who currently dabbles in both: that prediction seems a bit unrealistic. Julia is a fantastic language, but it has some trade-offs that need to be considered. Probably the most well known is `time to first x`. Julia, like Python, is comfortable to use in notebooks, but loading libraries can take a minute, compared to Python where it happens right away. That may lead you to not reach for it when you want to quickly test something, especially plotting. You can mitigate this somewhat by loading all the libraries you'll ever need at startup (preferably long before you are ready to experiment), but that assumes you already know which libraries you'll need for what you want to try.
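(By loading at startup I mean something like putting your usual imports in ~/.julia/config/startup.jl, e.g.:)

    # ~/.julia/config/startup.jl - pay the package load cost once per session,
    # before you actually need anything (package names are just examples)
    using Plots
    using DataFrames
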
simondanisch 58 minutes ago [-]
What prediction? Maybe I need to rephrase what I said: my prediction is that if Julia ever wants to have a shot at replacing Python, it absolutely has to solve the time-to-first-x problem! That's what I mean by shipping fully ahead-of-time compiled binaries and interpreting more glue code - both have the potential to solve it.
bobajeff 44 minutes ago [-]
The prediction I was referring to was the one in the parent comment. (The one I was commenting under)
simondanisch 40 minutes ago [-]
Ah sorry :D
IshKebab 1 hours ago [-]
IMO it just had too many rough edges. Very slow compilation, correctness issues (https://yuri.is/not-julia/), kinda janky tooling (not nearly as bad as pip tbf). Even basic language mistakes like implicit variable declaration and 1-based indexing (in 2012??).

Yes, 1-based indexing is a mistake. It leads to significantly less elegant code - especially for generic code - and 0-based indexing is no harder to understand than 1-based for people capable of programming. Fight me.

simondanisch 1 hours ago [-]
lol. There's not much to fight about, since how you want to write code is a very personal matter. It's evident that all the capable programmers in the Julia community have found satisfactory ways to get around it, so if you haven't yet, I don't see how that's a Julia problem ;) I can only say I haven't had a single problem with one-based indexing in 12 years of developing Julia code. I also haven't run into many correctness issues compared to other languages I've been using. I think Yuri had also been using lots of packages which weren't very mature. How on earth can you compare a 10-year-old library with lots of maintainers to packages created in one year by one person? That's at least what Yuri's critique boils down to for me.
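The generic-code part in particular is mostly a non-issue if you write index-agnostic code, e.g. (simplified example):

    # Index-agnostic style: works for 1-based Arrays, OffsetArrays, views, etc.
    function mysum(xs::AbstractVector)
        s = zero(eltype(xs))
        for i in eachindex(xs)    # no assumption that indices start at 1 (or 0)
            s += xs[i]
        end
        return s
    end

    first_and_last(xs) = (xs[begin], xs[end])
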
LoganDark 2 hours ago [-]
On iOS Safari the videos are fullscreening themselves as I scroll. I've seen this on other blogs before but I don't know what causes it. Super annoying
simondanisch 2 hours ago [-]
Ugh, yeah, I had some super weird bugs like this in Safari; still haven't found the source :(
embedding-shape 1 hours ago [-]
Don't quote me on this, but I think there is a "playsinline" / "webkit-playsinline" attribute for the video element that you need to add to avoid that, and if it autoplays you need to set "muted" too. I've also had this happen, and I think one or both of those solved it last time.