Faster asin() was hiding in plain sight (16bpp.net)
jason_s 1 hour ago [-]
While I'm glad to see the OP got a good minimax solution at the end, it seems like the article missed one of the key points: the error waveform over the interval of interest is what matters, and if you don't see the characteristic minimax-like wiggle, you're wasting an easy opportunity for improvement.

Taylor series in general are a poor choice, and Pade approximants of Taylor series are equally poor. If you're going to use Pade approximants, they should be of the original function.

I prefer Chebyshev approximation, which is often close enough to the more complicated Remez algorithm: https://www.embeddedrelated.com/showarticle/152.php
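
For a concrete picture, here is a minimal sketch of Chebyshev approximation along the lines of that article (degree and interval are arbitrary choices here, not from the article): sample the function at Chebyshev nodes, take cosine sums for the coefficients, then evaluate with Clenshaw's recurrence.

    #include <math.h>

    #define N 8  /* number of Chebyshev coefficients (degree N-1) */

    /* Coefficients of f on [a,b], sampled at the Chebyshev nodes. */
    void cheb_fit(double (*f)(double), double a, double b, double c[N]) {
        double fv[N];
        for (int j = 0; j < N; j++) {
            double t = cos(M_PI * (j + 0.5) / N);          /* node in [-1,1] */
            fv[j] = f(0.5 * (b - a) * t + 0.5 * (b + a));  /* mapped to [a,b] */
        }
        for (int k = 0; k < N; k++) {
            double s = 0.0;
            for (int j = 0; j < N; j++)
                s += fv[j] * cos(M_PI * k * (j + 0.5) / N);
            c[k] = 2.0 * s / N;  /* c[0] is used at half weight below */
        }
    }

    /* Clenshaw evaluation of 0.5*c[0] + sum_{k>=1} c[k]*T_k(t). */
    double cheb_eval(const double c[N], double a, double b, double x) {
        double t = (2.0 * x - a - b) / (b - a);  /* map [a,b] -> [-1,1] */
        double d = 0.0, dd = 0.0;
        for (int k = N - 1; k >= 1; k--) {
            double tmp = d;
            d = 2.0 * t * d - dd + c[k];
            dd = tmp;
        }
        return t * d - dd + 0.5 * c[0];
    }

E.g. cheb_fit(asin, -0.5, 0.5, c). Note the fit degrades toward |x| = 1, where asin's derivative blows up, which is exactly why the good closed forms factor out sqrt(1-x) first.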

xt00 21 minutes ago [-]
To be accurate, this is originally from Cecil Hastings, "Approximations for Digital Computers" (Princeton, 1955), pages 159-163; there are actually multiple versions of the approximation with different constants. So the original work was done with the goal of being performant on computers of the 1950s. The famous Abramowitz and Stegun handbook then reproduced it, with permission, as formula 4.4.45, and the Nvidia Cg library wrote code based on that formula, likely with some optimizations.
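
For reference, here is what that 4.4.45 form looks like in C (coefficients as given in A&S and the Cg docs, quoted from memory, so double-check against the source; the stated error bound is 5e-5):

    #include <math.h>

    /* A&S 4.4.45 (Hastings' coefficients): for 0 <= x <= 1,
       asin(x) = pi/2 - sqrt(1-x)*(a0 + a1*x + a2*x^2 + a3*x^3) + eps,
       |eps| <= 5e-5. Extended to negative x via the odd symmetry. */
    float asin_hastings(float x) {
        float a = fabsf(x);
        float p = 1.5707288f
                + a * (-0.2121144f
                + a * ( 0.0742610f
                + a * (-0.0187293f)));
        return copysignf(1.5707963f - sqrtf(1.0f - a) * p, x);
    }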
LegionMammal978 2 hours ago [-]
In general, I find that minimax approximation is an underappreciated tool, especially the quite simple Remez algorithm to generate an optimal polynomial approximation [0]. With some modifications, you can adapt it to optimize for either absolute or relative error within an interval, or even come up with rational-function approximations. (Though unfortunately, many presentations of the algorithm use overly-simple forms of sample point selection that can break down on nontrivial input curves, especially if they contain small oscillations.)

[0] https://en.wikipedia.org/wiki/Remez_algorithm
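
For context, the property the Remez iteration converges to is Chebyshev's equioscillation characterization of the minimax polynomial (the "minimax-like wiggle" mentioned upthread): the best degree-n polynomial p^* for f on [a,b] is the one whose error attains its maximum with alternating sign at n+2 points,

    f(x_i) - p^*(x_i) = \sigma (-1)^i \, \|f - p^*\|_\infty, \quad a \le x_0 < \cdots < x_{n+1} \le b, \ \sigma = \pm 1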

herf 53 minutes ago [-]
They teach a lot of Taylor/Maclaurin series in math classes (and trig implementations are sometimes taught as CORDIC, which is an old method too), but these are not used much in actual FPUs and libraries. Maybe we should update the curricula so people know better ways.
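
For anyone who hasn't seen it, a minimal sketch of CORDIC in rotation mode (doubles and ldexp stand in here for the fixed-point shift-and-add a hardware implementation would use):

    #include <math.h>

    #define ITERS 24
    static double angle[ITERS];   /* atan(2^-i) rotation table */
    static double gain_inv = 1.0;

    void cordic_init(void) {
        for (int i = 0; i < ITERS; i++) {
            angle[i] = atan(ldexp(1.0, -i));
            gain_inv *= cos(angle[i]);  /* 1/gain = prod cos(atan(2^-i)) */
        }
    }

    /* Rotation mode: valid for |theta| <= sum atan(2^-i) ~ 1.74 rad. */
    void cordic_sincos(double theta, double *s, double *c) {
        double x = gain_inv, y = 0.0, z = theta;  /* start pre-scaled by 1/gain */
        for (int i = 0; i < ITERS; i++) {
            double d = (z >= 0.0) ? 1.0 : -1.0;
            double nx = x - d * ldexp(y, -i);     /* ldexp = shift by i bits */
            y = y + d * ldexp(x, -i);
            x = nx;
            z -= d * angle[i];
        }
        *c = x;  /* ~cos(theta) */
        *s = y;  /* ~sin(theta) */
    }

Each iteration rotates by atan(2^-i), so a hardware datapath needs only adds and shifts per step; the constant gain is folded into the starting vector.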
jason_s 1 hour ago [-]
Not sure I would call Remez "simple"... it's all relative; I prefer Chebyshev approximation, which is simpler than Remez.
LegionMammal978 40 minutes ago [-]
Perhaps, but at least I find it very simple for the optimality properties it gives: there is no inherent need to say, "I know that better parameters likely exist, but the algorithm to find them would be hopelessly expensive," as is the case in many forms of minimax optimization.
stephencanon 52 minutes ago [-]
Ideally either one is just a library call to generate the coefficients. Remez can get into trouble near the endpoints of the interval for asin and require a little bit of manual intervention, however.
patchnull 2 hours ago [-]
The huge gap between the Intel (1.5x) and M4 (1.02x) speedups is the most interesting result here. Apple almost certainly uses similar polynomial approximations inside their libm already, tuned for the M-series pipeline. glibc on x86 tends to be more conservative with precision, leaving more room on the table. The Cg version is from Abramowitz and Stegun formula 4.4.45, which has been a staple of shader math for decades. Funny how knowledge gets siloed: game devs and GPU folks have known about this class of approximation forever, but it rarely crosses into general systems programming.
stephencanon 1 hour ago [-]
These sorts of approximations (and more sophisticated methods) are fairly widely used in systems programming, as seen by the fact that Apple's asin is only a couple percent slower and sub-ulp accurate (https://members.loria.fr/PZimmermann/papers/accuracy.pdf). I would expect to get similar performance on non-Apple x86 using Intel's math library, which does not seem to have been measured, and significantly better performance while preserving accuracy using a vectorized library call.

The approximation reported here is slightly faster but only accurate to about 2.7e11 ulp. That's totally appropriate for the graphics use in question, but no one would ever use it for a system library; less than half the bits are good.

Also worth noting that it's possible to go faster without further loss of accuracy--the approximation uses a correctly rounded square root, which is much more accurate than the rest of the approximation deserves. An approximate square root will deliver the same overall accuracy and much better vectorized performance.

patchnull 59 minutes ago [-]
Great point about the approximate sqrt being low-hanging fruit. The correctly rounded sqrt is doing way more work than the rest of the pipeline deserves at that error budget. I wonder if the author benchmarked with rsqrtss plus a Newton-Raphson refinement step — on x86 that gives you roughly 23 bits of precision for a fraction of the latency of sqrtss, which is still massive overkill for a 2.7e11 ulp result but would show an even bigger speedup.
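
For reference, a sketch of that combination (my code, not the author's benchmark; assumes SSE and x > 0):

    #include <immintrin.h>

    /* rsqrtss (~12-bit estimate) plus one Newton-Raphson step gives
       roughly 23 bits; sqrt(x) is then recovered as x * (1/sqrt(x)).
       Assumes x > 0 (x == 0 would produce inf * 0 = NaN here). */
    static inline float approx_sqrtf(float x) {
        __m128 vx = _mm_set_ss(x);
        __m128 r  = _mm_rsqrt_ss(vx);  /* r ~= 1/sqrt(x), ~12 bits */
        /* One NR iteration: r' = r * (1.5 - 0.5 * x * r * r) */
        __m128 rr = _mm_mul_ss(r, r);
        __m128 t  = _mm_sub_ss(_mm_set_ss(1.5f),
                               _mm_mul_ss(_mm_mul_ss(_mm_set_ss(0.5f), vx), rr));
        r = _mm_mul_ss(r, t);
        return _mm_cvtss_f32(_mm_mul_ss(vx, r));
    }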
stephencanon 46 minutes ago [-]
For the asinf libcall on macOS/x86, my former colleague Eric Postpischil invented the novel (at least at the time, I believe) technique of using a Remez-optimized refinement polynomial following rsqrtss instead of the standard Newton-Raphson iteration coefficients, which allowed him to squeeze out just enough extra precision to make the function achieve sub-ulp accuracy. One of my favorite tricks.

We didn't carry that algorithm forward to arm64, sadly, because Apple's architects made fsqrt fast enough that it wasn't worth it in scalar contexts.

Pannoniae 1 hour ago [-]
Yeah, the only big problem with approx. sqrt is that it's not consistent across systems; for example, Intel and AMD implement RSQRT differently... Fine for graphics, but if you need consistency, that messes things up.
stephencanon 58 minutes ago [-]
Newer rsqrt approximations (ARM NEON and SVE, and the AVX512F approximations on x86) make the behavior architectural so this is somewhat less of a problem (it still varies between _architectures_, however).
AlotOfReading 2 hours ago [-]
I'm pretty sure it's not faster, but it was fun to write:

    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    float fast_asin(float x) {
        float x2 = 1.0f - fabsf(x);
        uint32_t i;
        memcpy(&i, &x2, sizeof i);       /* bit-cast float -> u32 */
        i = 0x5f3759df - (i >> 1);       /* Quake-style rsqrt magic */
        float inv;
        memcpy(&inv, &i, sizeof inv);    /* inv ~= 1/sqrt(x2), so x2*inv ~= sqrt(x2) */
        const float h = 1.57079632679f;  /* pi/2 */
        return copysignf(h - h * (x2 * inv), x);
    }
Courtesy of evil floating point bithacks.
def-pri-pub 2 hours ago [-]
> floating point bithacks

The forbidden magic

chuckadams 1 hour ago [-]
You brought Zalgo. I blame this decade on you.
adampunk 55 minutes ago [-]
// what the fuck
moffkalast 1 hour ago [-]
> float fast_asinine(float x) {

FTFY :P

exmadscientist 59 minutes ago [-]
This line:

> This amazing snippet of code was languishing in the docs of dead software, which in turn the original formula was scrawled away in a math textbook from the 60s.

was kind of telling for me. I have some background in this sort of work (and long ago concluded that there was pretty much nothing you can do to improve on existing code, unless either you have some new specific hardware or domain constraint, or you're just looking for something quick-n-dirty for whatever reason, or are willing to invest research-paper levels of time and effort) and to think that someone would call Abramowitz and Stegun "a math textbook from the 60s" is kind of funny. It's got a similar level of importance to its field as Knuth's Art of Computer Programming or stuff like that. It's not an obscure text. Yeah, you might forget what all is in it if you don't use it often, but you'd go "oh, of course that would be in there, wouldn't it...."

wolfi1 47 minutes ago [-]
Abramowitz/Stegun was updated in 2010 and now lives here: https://dlmf.nist.gov/
sixo 20 minutes ago [-]
It appears that the real lesson here was to lean quite a bit more on theory than a programmer's usual roll-your-own heuristic would suggest.

A fantastic amount of collective human thought has been dedicated to function approximations in the last century; Taylor methods are over 200 years old and unlikely to come close to state-of-the-art.

scottlamb 2 hours ago [-]
Isn't the faster approach SIMD [edit: or GPU]? A 1.05x to 1.90x speedup is great. A 16x speedup is better!

They could be orthogonal improvements, but if I were prioritizing, I'd go for SIMD first.

I searched for asin in Intel's intrinsics guide. They have an AVX-512 intrinsic `_mm512_asin_ps`, but it says "sequence" rather than single-instruction. Presumably the actual sequence they use is in some header file somewhere, but I don't know offhand where to look, so I don't know how it compares to a SIMDified version of `fast_asin_cg`.

https://www.intel.com/content/www/us/en/docs/intrinsics-guid...

TimorousBestie 1 hour ago [-]
I don’t know much about raytracing but it’s probably tricky to orchestrate all those asin calls so that the input and output memory is aligned and contiguous. My uneducated intuition is that there’s little regularity as to which pixels will take which branches and will end up requiring which asin calls, but I might be wrong.
scottlamb 1 hour ago [-]
I'd expect it to come down to data-oriented design: SoA (structure of arrays) rather than AoS (array of structures).

I skimmed the author's source code, and this is where I'd start: https://github.com/define-private-public/PSRayTracing/blob/8...

Instead of an `_objects`, I might try for a `_spheres`, `_boxes`, etc. (Or just `_lists` still using the virtual dispatch but for each list, rather than each object.) The `asin` seems to be used just for spheres. Within my `Spheres::closest_hit` (note plural), I'd work to SIMDify it. (I'd try to SIMDify the others too of course but apparently not with `asin`.) I think it's doable: https://github.com/define-private-public/PSRayTracing/blob/8...

I don't know much about ray tracers either (having only written a super-naive one back in college) but this is the general technique used to speed up games, I believe. Besides enabling SIMD, it's more cache-efficient and minimizes dispatch overhead.

edit: there's also stuff that you can hoist in this impl. Restructuring as SoA isn't strictly necessary to do that, but it might make it more obvious and natural. As an example, this `ray_dir.length_squared()` is the same for the whole list. You'd notice that when iterating over the spheres. https://github.com/define-private-public/PSRayTracing/blob/8...
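
To make the SoA idea concrete, a hypothetical sketch (names and layout mine, not from the author's repo) of a plural Spheres hit test with the per-ray term hoisted:

    #include <math.h>
    #include <stddef.h>

    /* SoA: one array per field, so the loop below is cache-friendly
       and easy for the compiler (or intrinsics) to vectorize. */
    struct Spheres {
        size_t count;
        float *cx, *cy, *cz;  /* centers */
        float *r2;            /* radius squared, precomputed */
    };

    int spheres_closest_hit(const struct Spheres *s,
                            float ox, float oy, float oz,  /* ray origin */
                            float dx, float dy, float dz,  /* ray direction */
                            float *t_out) {
        float a = dx*dx + dy*dy + dz*dz;  /* hoisted: ray_dir.length_squared() */
        int best = -1;
        float best_t = INFINITY;
        for (size_t i = 0; i < s->count; i++) {
            float mx = ox - s->cx[i], my = oy - s->cy[i], mz = oz - s->cz[i];
            float half_b = mx*dx + my*dy + mz*dz;
            float cc = mx*mx + my*my + mz*mz - s->r2[i];
            float disc = half_b*half_b - a*cc;
            if (disc < 0.0f) continue;          /* ray misses this sphere */
            float t = (-half_b - sqrtf(disc)) / a;
            if (t > 0.0f && t < best_t) { best_t = t; best = (int)i; }
        }
        *t_out = best_t;
        return best;  /* index of nearest hit, or -1 */
    }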

TimorousBestie 1 hour ago [-]
This tracks with my experience and seems reasonable, yes. I tend to SoA all the things, sometimes to my coworkers’ amusement/annoyance.
Am4TIfIsER0ppos 1 hour ago [-]
I don't do much float work, but I don't think there is a single regular sine instruction, only the old x87 float-stack ones.

I was curious what "sequence" would end up being, but my compiler is too old for that intrinsic. Even godbolt didn't help for gcc or clang, but it did reveal that icc produced a call: https://godbolt.org/z/a3EsKK4aY

orangepanda 2 hours ago [-]
> Nobody likes throwing away work they've done

I like throwing away work I've done. Frees up my mental capacity for other work to throw away.

empiricus 40 minutes ago [-]
Does anyone know of resources for the algos used in the HW implementations of math functions? I mean the algos inside the CPUs and GPUs: how do they trade off transistor count, power consumption, and cycles, and which algos allow this?
glitchc 1 hour ago [-]
The 4% improvement doesn't seem like it's worth the effort.

On a general note, instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs. So, replacing one with the other will not confer much benefit, as evidenced by the results. They're all typically implemented using LUTs, and it's hard to beat the performance of an optimized LUT, which is basically a multiplexer connected to some dedicated memory cells in hardware.

kstrauser 58 minutes ago [-]
> The 4% improvement doesn't seem like it's worth the effort.

People have gotten PhDs for smaller optimizations. I know. I've worked with them.

> instructions like division and square root are roughly equal to trig functions in cycle count on modern CPUs.

What's the x86-64 opcode for arcsin?

charcircuit 15 minutes ago [-]
The effort of typing about 10 words into an LLM is minimal.
ok123456 43 minutes ago [-]
Chebyshev approximation for asin: asin(x) = sum(4*T_n(x) / (pi*n*n), n odd); the even terms are 0.
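
A quick numerical sanity check of that series, using T_n(cos t) = cos(n*t) (the partial sums only converge like O(1/N), so expect agreement to about five decimal places here):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        const double pi = acos(-1.0);
        double x = 0.5, t = acos(x), s = 0.0;
        for (int n = 1; n <= 99999; n += 2)   /* odd terms only */
            s += 4.0 * cos(n * t) / (pi * n * n);
        printf("series %.6f  vs  asin %.6f\n", s, asin(x));
        return 0;
    }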
stephc_int13 1 hour ago [-]
My favorite tool for experimenting with math approximations is lolremez. And you can easily ask an LLM to do it for you.
drsopp 2 hours ago [-]
Did some quick calculations, and at this precision, it seems a table lookup might be able to fit in the L1 cache depending on the CPU model.
Pannoniae 2 hours ago [-]
Microbenchmarks. A LUT will win many of them, but you pessimise the rest of the code. So unless a significant (read: 20+%) portion of your code goes into the LUT, there isn't much point in bothering. For almost any pure calculation without I/O, it's better to do the arithmetic than the memory access.
jcalvinowens 1 hour ago [-]
Locality within the LUT matters too: if you know you're looking up identical or nearby-enough values to benefit from caching, an LUT can be more of a win. You only pay the cache cost for the portion you actually touch at runtime.

I could imagine some graphics workloads tend to compute asin() repeatedly with nearby input values. But I'd guess the locality isn't local enough to matter; only eight double-precision floats fit in a cache line.

jcalvinowens 1 hour ago [-]
Surely the loss in precision of a 32KB LUT for double precision asin() would be unacceptable?
Jyaif 1 hour ago [-]
By interpolating between values you can get excellent results with LUTs much smaller than 32KB. Will it be faster than the computation from the OP? That I don't know.
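
For scale, a hypothetical 4096-entry (16 KB) linearly interpolated table looks like this; note the error is worst near |x| = 1, where asin's derivative is unbounded:

    #include <math.h>

    #define LUT_N 4096
    static float lut[LUT_N + 1];  /* 16 KB plus one guard entry */

    void lut_init(void) {
        for (int i = 0; i <= LUT_N; i++)
            lut[i] = asinf(-1.0f + 2.0f * (float)i / LUT_N);
    }

    float asin_lut(float x) {
        float f = (x + 1.0f) * (0.5f * LUT_N);  /* map [-1,1] -> [0,N] */
        int i = (int)f;
        if (i < 0) i = 0;                       /* clamp out-of-range input */
        if (i >= LUT_N) i = LUT_N - 1;
        float t = f - (float)i;
        return lut[i] + t * (lut[i + 1] - lut[i]);  /* linear interpolation */
    }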
drsopp 14 minutes ago [-]
I experimented a bit with the code, trying various tables with different datatypes. There is enough noise from the Monte Carlo that it makes no difference if you use smaller data types than double or float. Even dropping interpolation worked fine, and got the speed on par with the best in the article, but not faster.
jcalvinowens 6 minutes ago [-]
Does your benchmark use sequential or randomly ordered inputs? That would make a substantial difference with an LUT, I would think. But I'm guessing. Maybe 32K is so small it doesn't matter (if almost all of the LUT sits in the cache and is never displaced).

> if you use smaller data types than double or float. Even dropping interpolation worked fine,

That's kinda tautological, isn't it? Of course reduced precision is acceptable where reduced precision is acceptable... I guess I'm assuming double precision was used for a good reason; it often isn't :)

jcalvinowens 54 minutes ago [-]
I'm very skeptical you wouldn't get perceptible visual artifacts if you rounded the trig functions to 4096 linear approximations. But I'd be happy to be proven wrong :)
groundzeros2015 2 hours ago [-]
I don’t want to fill up L1 for sin.
erichocean 2 hours ago [-]
Ideal HN content, thanks!
adampunk 2 hours ago [-]
We love to leave faster functions languishing in library code. The basis for Q3A’s fast inverse square root had been sitting in fdlibm since 1986, on the net since 1993: https://www.netlib.org/fdlibm/e_sqrt.c