NHacker Next
At the Mountains of Madness (antithesis.com)
dreamcompiler 5 days ago [-]
I once had a robotic cat litter box that cleaned itself. Except once a week it would get clogged and I would have to spend a quality hour disassembling it, scrubbing off the feces embedded on the delicate parts, and reassembling it. Every two weeks when it heated up its artificial litter to dry, it would have missed a small piece of cat shit that when baked, filled my house with an aroma that I would not recommend you even try to imagine.

And of course it needed special, expensive supplies that you had to buy from the manufacturer because the bottles had numbered chips.

I eventually threw the damn thing out and now I just use a manual litter box. Takes 15 seconds a day to clean. It's a chore but it's a small predictable chore.

When I read about NixOS I remember that robot litter box. It seems like it solves a real problem of difficulty X but it brings five brand new problems from a parallel universe you didn't know existed and they're all written in an indecipherable language and have difficulty 10X.

lexlash 5 days ago [-]
Nix, and NixOS, are designed for those of us who have to clean 10,000 proverbial litter boxes every day. I use Nix fairly extensively at work; I use it very little at home, where I don't need to worry about what dependency someone took on a specific version of Python five years ago, etc.

It's like k8s, imo - it solves some real problems at scale but is rarely going to be a good idea for individual users.

Zababa 5 days ago [-]
It's also nice in the small. At home I like using datasette to search SQLite databases. One day it decided to stop working. I tried reinstalling it with pipx, didn't work either. nix-shell -p datasette, it works.
brookst 5 days ago [-]
You are an excellent writer.

I have not used Nix, but I have had a similar bad experience with a similar-but-different automatic litter box. Your “small predictable chore” point is spot on: how much human grief is created by elaborate, expensive, unreliable solutions to minor annoyances?

UniverseHacker 5 days ago [-]
If you enjoyed that litter box experience you should consider owning an older high mileage and high end German luxury car… ideally something with a double digit number of cylinders, and a computer controlled adjustable… everything
hackernudes 5 days ago [-]
A lot of Linux users are tinkerers. In my experience, Nix provides almost an endless sink for tinkering. There are escape hatches to "just make stuff work" but it becomes a fun challenge to do it the nix way.
throwaway7ahgb 5 days ago [-]
This also describes DIY home automation for me. I spend hours setting up some fancy automation routines that work great. Then, a few months later something fails and I spend more time fixing it than the automation affords me in the first place.
Zababa 5 days ago [-]
In my experience it kind of just works. It takes a bit more maintenance than Arch, around an hour a month at most. I had to reinstall it once and it was way easier than my Arch reinstall, since way more stuff is in text files I can commit.

Also, since it's very very easy to roll back to a previous version, managing unpredictable issues is easy. I have a colleague who lost lots of time to an Arch kernel panic during an update that required a reinstall. On NixOS I can reboot, choose the last derivation, work, and fix it if/when I want.

huppeldepup 5 days ago [-]
What you’re describing is called the law of conservation of misery. Misery is like a water-filled party balloon. If you squeeze it in one place it expands in another, often between your fingers.
NikkiA 5 days ago [-]
And then some Go fanatic will come along and use this experience to explain why it makes perfect sense to static link bits of the standard library into every single binary ever produced, and dynamic linking was the worst idea since mankind's ancestors crawled out of the ocean and past the whales crawling back in.
egorfine 5 days ago [-]
Why am I sensing systemd vibes from your message lol
dreamcompiler 5 days ago [-]
Don't get me started on crappy cancerous operating systems that grow like an out-of-control tumor but happen to have the Linux kernel attached as an appendage.
egorfine 4 days ago [-]
Absolutely fuck systemd. I have been ranting about it just recently[1] and have created a script[2] to rip this cancer out of my host OS.

[1] https://ubuntuforums.org/showthread.php?t=2498615 [2] https://gist.github.com/egorFiNE/30ee7910ca4b7b9b706d385e432...

finnh 6 days ago [-]
Every time I look at NixOS, I think that it perfectly solves a problem that I only have once every 5 years, when buying a new computer. I think I even looked into it once to automate that exact process, but that idea fell apart at the first line of Nix syntax. I'll stick with OSX and `brew bundle` I guess...

But then I read a piece like this and remember that some people do have to plumb the depths of C/C++ linkers, and I'm glad I'm not one of them.

Great post! FWIW I always want to know the prompt text when seeing an AI-generated image, I wish there were a convention around that.

api 5 days ago [-]
I think it solves a problem that shouldn't exist: managing complex state in an operating system.

OSes should be mostly immutable. Apps should own their state. Everything else should be in a neat tidy box that is portable. Mobile almost gets this right.

The idea of installing things "on" the OS needs to die, badly. It's a security and privacy nightmare because it means everything more or less has root, and it makes every OS install a special snowflake that is under the hood a giant mixed pile of shit.

lexlash 5 days ago [-]
NixOS aside, Nix manages state _outside_ of, and independent of, your operating system, which is why it's so damn useful.

With Nix, I can build OCI images the exact same way every time; with Docker, I have to hope that the `apt update` thrown in at the top doesn't accidentally put me on a new major version of some dependency that breaks the rest of the script. I tend to deal with Dockerfiles written five or more years ago, so I will admit to bias here.

I'll also admit that I don't really enjoy NixOS. It's neat enough on my headless devices but not something I'd want to try to daily drive; I'm more a fan of the Universal Blue / Project Bluefin approach.

finnh 5 days ago [-]
For sure. Qubes OS is an interesting step in that direction. Mobile does mostly get it right - and yet the devices are single-user only, it's so odd. The fact that I can't share an iPad with my kid without needing to fully disable Messages, Photos, etc is crazy-making.
akvadrako 2 days ago [-]
Image-based OSes like Fedora Silverblue are largely immutable. It's a much better model and I think it's catching on.
TacticalCoder 5 days ago [-]
> ... because it means everything more or less has root, and it makes every OS install a special snowflake that is under the hood a giant mixed pile of shit.

In many Linux distros systemd as PID 1 comes to mind...

lmz 5 days ago [-]
If anything systemd is a step against "a giant mixed pile of shit" in that the "shit" is single-source now as opposed to cobbled together from 10 different projects.
lloeki 5 days ago [-]
> Every time I look at NixOS, I think that it perfectly solves a problem that I only have once every 5 years, when buying a new computer. I think I even looked into it once to automate that exact process, but that idea fell apart at the first line of Nix syntax. I'll stick with OSX and `brew bundle` I guess...

To each their own!

With two Mac laptops, each with a Linux VM, plus five Raspberry Pi and a Mac Mini under Asahi on NixOS it's been a godsend to have a consistent management system and setup with reusable bricks, that is also able to remote build on the Mini for the Pis.

That plus shell.nix and direnv, and you can pry Nix from my cold, dead hands.

lexlash 5 days ago [-]
Chiming in to say that direnv is one of the greatest projects I've ever come across and it gets damn near everything right out of the box - you can also use it without any Nix at all. (It makes a nice gateway to Nix, though; once you have your directory-based env vars, it's a shorter hop to directory-based package configuration...)
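For readers who haven't tried it, here is a minimal sketch of the directory-based env vars being described (assumes direnv is installed and hooked into your shell; the project name and variable are made up for illustration):

```shell
# Sketch: per-directory environment with direnv (hypothetical project).
mkdir -p myproject
cat > myproject/.envrc <<'EOF'
export DATABASE_URL=postgres://localhost/dev
# with nix-direnv installed, `use nix` here would load shell.nix too
EOF
# direnv refuses to load a new .envrc until you approve it:
#   direnv allow myproject
# after that, the vars appear on `cd myproject` and vanish on `cd ..`
```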
lloeki 5 days ago [-]
Absolutely. I use it both with and without Nix.

By direnv's design, this vscode extension restores sanity in vscode env handling mess†: https://marketplace.visualstudio.com/items?itemName=mkhl.dir...

† Depending on how you (re)start vscode (terminal vs launchd) it's going to either have some project env vars or not. e.g do `code /some/path` in a terminal and it inherits env vars from the terminal, which is nonsense on macOS because then if you reopen the project the env vars are gone because it's been relaunched by launchd. Dunno if it has been fixed but it was even worse when a vscode process initially started via terminal would have env vars inherited for all subsequently opened projects, even different ones.

pxc 4 days ago [-]
Nix and direnv is such an insanely good combo. I use them together, typically via devenv, the latter sometimes as a library on top of a plain flake.nix, other times with the full devenv CLI and experience. I love both for different use cases. Really pleasant.
shrx 6 days ago [-]
Some generators like Automatic1111 embed the prompt in the image metadata.
Filligree 5 days ago [-]
ComfyUI embeds the entire graph, but only if you use its internal “save image” function.
sjburt 6 days ago [-]
It seems like every article about nix goes on and on about DLL hell. I've been using Debian/Ubuntu for 15+ years and never really experienced dependency hell. I guess maybe this is thanks to hard work by Debian maintainers and rarely needing to run a bleeding edge library, but also, why do we need to run bleeding edge versions of everything and then invent an incredibly complicated scheme to keep multiple copies of each library, most of which are completely compatible with each other?

And then when there's a security problem, who goes and checks that every version of every dependency of every application has actually been patched and updated? Why would I want to roll a system back to an (definitely insecure) state of a few months ago?

What problem does Nix solve that SO numbers (properly used) don't?

I have many of the same questions about Snap and even Docker.
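For context, the SO-number scheme the question refers to is just a naming convention plus symlinks; a miniature sketch with made-up file names:

```shell
# Sketch: classic SONAME versioning in miniature.
# Distros ship one real file plus two symlinks:
touch libfoo.so.1.2.3                 # the actual library
ln -sf libfoo.so.1.2.3 libfoo.so.1    # SONAME link: loaded at run time
ln -sf libfoo.so.1     libfoo.so      # dev link: what `-lfoo` finds at link time
ls -l libfoo.so*
# Bumping 1.2.3 -> 1.2.4 is invisible to binaries; an ABI break means a
# new SONAME (libfoo.so.2), and both majors can coexist on disk.
```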

klodolph 6 days ago [-]
I’m using Nix for development and generally I agree.

The first catch is that I want to be able to update my system on a regular basis, and keep using exactly the same dependencies in my project after an update. Maybe I’m in the middle of working on a change.

The second catch is that sometimes my development environment is really weird, and the packages I need aren’t in Debian. At least, not the versions I want. Nix can handle cross-compilation environments and you can use it for embedded development. You stick your entire development toolchain (arm-none-eabi-gcc, whatever) inside your development environment.

> Why would I want to roll a system back to an (definitely insecure) state of a few months ago?

Periodically, I want to update everything in my development environment to the latest version of everything. Sometimes, something will break. Maybe a new version of GCC reveals previously undiscovered bugs in my code. Maybe a function gets removed from a library (I’ve seen it happen). In Nix, it’s pretty easy to pin my entire development environment to an old version, while I’m still updating the rest of my system. I can also get the same environment on either Linux or macOS with relatively minimal hassle (with the note that I’ve run into several packages that just don’t run on macOS, which required me to make “fixed” versions).

Also keep in mind when I say “Nix”, I’m talking about nixpkgs. I’m not using NixOS and I just don’t care about NixOS.

Nix also has its pain points. I think of it as being like a coarse-grained Bazel with a ton of packages.

nyarlathotep_ 6 days ago [-]
My Nix experience is limited, so forgive my ignorance here, but is it possible to create a development environment for an "older" project as well?

Say I need some 3.20 version of CMake and gcc 9/whatever or something--i assume such a thing is possible, but I've not seen a simple way to "pin versions" of things the way you would in say a language's package manager.

klodolph 6 days ago [-]
My Nix experience is pretty limited, too. Nix is not great at pinning to specific versions.

If your older project was made in Nix, it’s no problem. You just check out the old copy of the project and you automatically get the old copy of the dependencies.

If your old project needs some specific major version of GCC, going back to like 4.8, there are specific packages in Nix. You just add “gcc48” to your dependencies and you get GCC 4.8. You still get newer versions of e.g. binutils.

If your old project needs a specific version of CMake, I know two ways to get that, but they’re a little ugly.

First method is to import an old <nixpkgs> containing the right version of CMake, and then import that into your environment. You search through Git history of the nixpkgs repository until you find one with the correct version. Yes, this sounds awful. It’s not that bad. I’m not sure how to do this with flakes.
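A sketch of the "import an old nixpkgs" method just described; the `<rev>` below is a placeholder for whichever nixpkgs commit carried the CMake version you need, not a real revision:

```shell
# Sketch: write a shell.nix that imports a pinned, older nixpkgs.
cat > shell.nix <<'EOF'
let
  # <rev> = placeholder for a nixpkgs commit with the right CMake
  oldPkgs = import (fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/<rev>.tar.gz") {};
in oldPkgs.mkShell { packages = [ oldPkgs.cmake ]; }
EOF
# then: nix-shell   (drops you into an env with the old cmake)
```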

You can also copy the CMake derivation into your project and modify it to compile & build the version of CMake you like. This is the approach I would normally use, most of the time.

There may be easier ways to do this. I’m not sure.

nyarlathotep_ 6 days ago [-]
Thanks, that clears up my understanding. I'd suspected it was something along those lines.
lexlash 5 days ago [-]
https://www.nixhub.io/ - and I’ve seen others - make the searching easy. It’s odd to me that I rely so much on web based tools (like search.nixos.org!) to look up nixpkgs details for my commandline but eh.

For a flake, you’d specify an input of that specific revision of nixpkgs and then refer to your CMake dependency with respect to that input. You may end up with - by design - a lot of duplicated dependencies, but it’d work.

https://blog.mplanchard.com/posts/installing-a-specific-vers... is a nice writeup (not mine) with examples of a few different ways to do this in flakes and not-flakes.

klodolph 5 days ago [-]
Yeah, relying on `search.nixos.org` is a little weird. Package discovery overall is a little weird… I think I usually do it through nix repl.
HideousKojima 6 days ago [-]
I run into DLL hell any time I try to install some software that isn't in some sort of package repository or other happy path. Most recent example I can think of was about a year ago when I helped my father-in-law install Klipper on an RPi4 so that he could do input shaping on his 3d prints. All of the guides and documentation seemed to assume that you were using a specific version of Linux on the RPi, and if you weren't (like my FIL) then welcome to dependency hell. Took several hours of pulling my hair out to resolve them all, for a non-developer it would have been impossible.
michaelmrose 5 days ago [-]
So the installation story for Klipper is hilariously bad. It's a shell script that apt-installs and pip-installs things as root. It is the very definition of non-portable.

The distribution story for Python apps in general is fairly bad and the developer really takes it to the next level.

This isn't DLL hell as a function of leaving the happy path its DLL hell as a function of burning the happy path whilst dancing naked on the ashes of the greenery that once was the happy path.

See Calibre for willful disobedience instead of insanity. It eschews system libraries and uses a shell script to install, but it dumps everything, including Python libs, in a single directory.

HideousKojima 5 days ago [-]
Well it's slightly comforting to know that my experience with Klipper wasn't unique, at least. I have an Ender 3 but don't really do anything beyond the basics with it so I hadn't heard of Klipper prior.
lexlash 5 days ago [-]
You don't need to run bleeding edge versions unless you feel like it; there's a stable release with rolling security patches every 6 months (current is 24.05, next will be 24.11).

You don't need to keep multiple copies of each library - but you _can_ when you find out that an update broke something you care about while still updating everything else on your system. You aren't rolling back your entire system state, just the...light-cone of the one tool that has issues.

The problem with SO numbers is that your Python/Ruby/Java/NodeJS packaging and tooling doesn't respect that at all. If you can satisfy all of your dependencies using the Debian-maintained repositories great! When you can't, Nix provides a harm-reduction framework.

Nix also makes certain hard things trivial - like duplicating the exact system state that someone else used to build a thing some months/years ago, or undoing the equivalent of a `dist-upgrade` gone awry.

> And then when there's a security problem, who goes and checks that every version of every dependency of every application has actually been patched and updated?

The nixpkgs maintainers, same as the Debian maintainers. Repology's down right now but nixpkgs seems to do quite well on a CVE level.

> Why would I want to roll a system back to an (definitely insecure) state of a few months ago?

Insecure is sometimes preferable to down. Being able to inspect an older/insecure state with new/secure tools is neat.

> I have many of the same questions about Snap and even Docker.

Snap and Docker solve similar problems that most people don't have. Same with k8s. You might just not have these problems - I have a screwdriver on my desk that's specifically for opening up GameCube consoles (so it's longer than the one I use to open up N64 cartridges, even though it's the same shape); unless you have that specific need, it'd be completely pointless in your toolbox and cause you trouble every time you tried to use it.

TacticalCoder 5 days ago [-]
> I've been using Debian/Ubuntu for 15+ years and never really experienced dependency hell.

Debian (or derivatives) too here and once in a very rare while I encounter some dependency hell.

IIRC the last problematic one was trying to add JPEG XL support to Emacs. Emacs uses ImageMagick under the hood to display pictures (for example from image-dired) if I'm not mistaken. But the version of ImageMagick shipped with the latest Debian stable (Bookworm) doesn't support JPEG XL yet.

Something like that.

It does happen but I do agree that it's highly uncommon.

> Why would I want to roll a system back to an (definitely insecure) state of a few months ago?

A just question!

klodolph 6 days ago [-]
> No such file or directory

Anyone who’s run into this problem remembers it! (This isn’t a Nix problem—this is just the baffling errors you get because a.out exists, but one of the libraries it needs does not, and the error message doesn’t distinguish that case.)
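The same baffling error is easy to reproduce without ELF at all, because exec reports ENOENT for a missing interpreter, not for the file you named; a self-contained sketch:

```shell
# Sketch: a script whose interpreter doesn't exist fails with
# "No such file or directory" even though the script plainly exists.
# ELF binaries with a missing ld-linux.so path fail the same way.
cat > demo.sh <<'EOF'
#!/no/such/interpreter
echo unreachable
EOF
chmod +x demo.sh
./demo.sh || echo "exec failed, yet:"
ls -l demo.sh    # the file is right there
rm -f demo.sh
```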

Anyway, Nix.

Nix has the Nix way of building things. Nix doesn’t give you standard tools. It gives you wrappers around the standard tools that force you to do things a certain way. Part of that is futzing around with RPATH—because Nix stores everything in an unusual location. The user experience around this is awful, if you ever run into a case where Nix’s tooling doesn’t automatically do the right thing for you. It’s not just RPATH, but also other paths.

What’s the solution?

Honestly—I think it would make sense for Nix to have a “cross compilation” mode where you tell it to cross-compile for other Linuxes. You know, something like pkgsCross.x86_64-generic-linux. This comes with all the cross-compilation headaches, but you know what? You are cross-compiling.

lilyball 6 days ago [-]
Nix does have tools for running stuff in an FHS container. Something I have considered but not yet attempted is to use this to wrap the build such that building the binary happens in the FHS container (using the unwrapped versions of the compiler and associated tooling).
6 days ago [-]
throwway120385 6 days ago [-]
Yeah I wish the ld-linux.so interpreter would actually indicate when a file it's trying to link wasn't found. Something like "unable to locate shared object BLAH" would go a long way. It's like a rite of passage the first time you debug something like that.
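In the meantime, glibc's loader can be asked to narrate its search, which usually names the missing object; a glibc-specific sketch (LD_DEBUG is documented in the ld.so man page):

```shell
# Sketch: trace the dynamic loader's library search for any command.
# Lines like "find library=libfoo.so.1" show each object being resolved,
# so a failing one stands out. (glibc only; no output for static binaries.)
LD_DEBUG=libs /bin/true 2>&1 | head -n 20
# LD_DEBUG=help lists the other trace categories (bindings, symbols, ...)
```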
nyarlathotep_ 6 days ago [-]
I've only encountered this once and I've not forgotten it.

Downloaded a release of some binary for Linux, but I'd downloaded the FreeBSD built binary. Was lost until I explored in the same fashion as the author.

wwilson 6 days ago [-]
Post author here. Feel free to ask me any questions about the piece of software that I most regret having had to write.
dasyatidprime 6 days ago [-]
I would like to half-seriously recommend that you overwrite a different character than the first when mangling the environment variable name. Specifically one beyond the third, so as to stay within the LD_ namespace (not yours exactly, but at least easier to keep track of and more justifiably excludable from random applications) and deny someone ten years from now the exciting journey of figuring out why their MD_PRELOAD environment variable is overwritten with garbage on some systems. How do you feel about LD_PRELOAF?

Also it's probably better to leave LD_PRELOAD properly unset rather than just null if it was unset before; in particular I wonder if empty-but-set might still trip some software's “someone is playing tricks” alarms.
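The unset-vs-empty distinction is observable from plain shell, which is why it can trip "someone is playing tricks" checks; a small sketch:

```shell
# Sketch: an empty-but-set LD_PRELOAD is a different state from unset.
unset LD_PRELOAD
echo "unset case: '${LD_PRELOAD+set}'"    # prints: unset case: ''
export LD_PRELOAD=""
echo "empty case: '${LD_PRELOAD+set}'"    # prints: empty case: 'set'
# Programs that check only for the variable's *presence* will treat the
# second state as "a preload is configured".
```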

There are probably other ways this is less than robust…

(hi, I kind of have a Thing for GNU and Linux innards sometimes)

wwilson 6 days ago [-]
Good suggestion on leaving LD_PRELOAD unset if it was previously unset. We will fix that.

I’m torn on whether MD_PRELOAD or LD_PRELOAF is more obnoxious to other programs.

Fun fact: A previous version of this program used an even more inscrutable `env[0][0]+=1`, which is great as a sort of multilingual C/English pun, but terrible in the way that all “clever” code is terrible.

foobiekr 6 days ago [-]
As an aside, a lot of people don't know about ldd, and introducing it to them is very cool, but it should almost always come with a warning: ldd _may_ execute arbitrary code. This is in the ldd man page, but most people never read documentation. It is unsafe to use on any binary you don't otherwise believe to be safe.
wwilson 6 days ago [-]
Great point! I'll update the post to mention that.
colinsane 6 days ago [-]
> This minimal meta-loader will totally work if you invoke it directly like `$ meta_loader.sh foo`, and it will totally not work if you hardcode its path (or a symlink to it) in the ELF headers of a binary.

why not have `foo` be a shell script which invokes the meta loader on the "real" foo? like:

```
#!/bin/sh
# file: /bin/foo

# invoke the real "foo" (renamed e.g. ".foo-wrapped" or "/libexec/foo" or
# anything else easy for the loader to locate but unlikely to be invoked
# accidentally)
exec meta_loader.sh .foo-wrapped "$@"
```

it's a common enough idiom that nixpkgs provides the `wrapProgram` function to generate these kinds of wrapper scripts during your build: even with an option to build a statically-linked binary wrapper instead of a shell-script wrapper (`makeBinaryWrapper`).

Klaster_1 6 days ago [-]
No questions about Madness, but I really enjoyed the article tone and playfulness. Thank you.
adamgordonbell 6 days ago [-]
Love it. I came to this same insight about nix and containers being two approaches to a dynamic linking work around, but via a different path, of building my own little container runtime.

Feels like we are building things whose original purpose is now holding us back, but path dependence leaves us stuck wrapping abstractions in other abstractions.

pxc 4 days ago [-]
Why did you want to use Nix to make impure binaries for other distros? Much of the appeal of it for me is using it to distribute software in a more reliably portable way, but that of course always means shipping a whole chunk of /nix/store, one way or another.

What made your team/company want to use Nix to build binaries and then strip them down for old-fashioned, dependency hell-ish distribution? Why not install Nix on your target systems or use Nix bundle, generate containers, etc.?

foobiekr 6 days ago [-]
It's a shame that DLL hell was never resolved in the obvious way: deduplication of identical libraries through cryptographic hashes. Containers basically threw away any hope of sharing the bytes on disk - and more importantly _in ram_. Disk bytes are cheap, ram bytes are not, let alone TLB space, branch predictor context, and so on.

There was a middle ground possible at one point where containers were packaged with all of their dependencies, but a container installer would fragment this assembly into cryptographically verifiable shared dependencies; we lost that because it was hard.
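The deduplication idea can be sketched in miniature with plain files and hardlinks; this is roughly what `nix-store --optimise` does for /nix/store, though a container runtime would have to do it at layer-unpack time:

```shell
# Sketch: content-addressed dedup. Identical bytes hash identically,
# so a store keeps one copy and hardlinks every other name to it.
printf 'pretend library bytes\n' > liba.so
cp liba.so libb.so                      # two copies, two inodes
h=$(sha256sum liba.so | cut -d' ' -f1)  # same hash for both, of course
mkdir -p store/"$h"
mv liba.so store/"$h"/lib.so
ln -f store/"$h"/lib.so liba.so         # both names now share one inode:
ln -f store/"$h"/lib.so libb.so         # one copy on disk, one in RAM cache
```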

xenophonf 6 days ago [-]
> deduplication of identical libraries through cryptographic hashes

Isn't that how the .NET CLR's global assembly cache works?

foobiekr 5 days ago [-]
A lot of things work that way because it’s obvious to everyone except the container runtimes on Linux.
pxc 4 days ago [-]
The container runtimes have to cope with Dockerfiles and similar, which know nothing about packages. To get the kind of granularity you want here, you have to do actual packaging work, which is the thing Docker sold everyone on avoiding.

If you are willing to do that kind of packaging work you can get the best of both worlds today with Nix or Guix. But containers are attractive because you can chuck whatever pathological build process your developers have evolved over the decades into a Containerfile and it'll mostly work.

rbanffy 6 days ago [-]
> deduplication of identical libraries through cryptographic hashes

Or, maybe, adding a version string to the file name, so if you were compiled with data structures for libFoo1 (found in libFoo.h, provided by libFoo1-devel) you'll link to libFoo1 and not libFoo or libFoo2.

foobiekr 5 days ago [-]
Never use a string. What if the string is wrong? Use hashes.
rbanffy 5 days ago [-]
If the file for version 2 of libFoo is libFoo1, it only means someone shouldn’t be naming things.

Being in the file name makes it trivial to retrieve and immediately obvious to a human reading the information.

jadbox 6 days ago [-]
I'd like to say that while this article is WAY outside my wheelhouse, I liked the writing style, and the AI illustrations felt like they were emotionally additive to the section rather than just a distraction (to me). Also, my head hurts still trying to understand this cursed thing: https://github.com/antithesishq/madness/blob/main/pkgs/madne...
o11c 5 days ago [-]
Somehow your website has semi-broken scrolling, which is impressive since that normally only happens when Javascript is enabled.

Also, please QEFS (quote every string) in your shell script fragments.

jcgrillo 6 days ago [-]
Also not a question, just want to say that "crt glow" (or maybe "Cerenkov glow"?) effect upon hovering over a link is awesome.
trod123 6 days ago [-]
Hi Will, I'm curious what your thoughts are about the Nix uniqueness problem, and the characterization of failures, or lack thereof, under undefined behavior's failure domains. Exception handling generally requires a defined and deterministic state, which can't be guaranteed given the design choices made to resolve DLL hell under Nix (i.e. it's a stochastic process).

I mention this since it is a similar form of the problem you mention in writing this piece of software, that can lead to madness.

Also, operationally, the troubleshooting problem-space of keeping things running segments nicely into deterministic and non-deterministic regions, of which the latter ends up costing orders of magnitude more (as a function of time to resolve), since you can't perturb individual subsystems to test for correct function; without determinism and time-invariance as system properties, testing piecemeal runs into contradictions in stochastic processes.

Hashing by rigorous definition is non-unique (i.e. its like navigating a circle), and there is no proof of uniformity. So problems in this space would be in the latter region.

While there are heuristics from cryptography that suggest initializing the fields with fractional cube roots brings more uniformity to the examined space than not, there is no proof of such.

When building resilient systems, engineers often try to remove any brittle features that promote failures.

Interestingly, as a side note, ldd output injects non-determinism into the pipe by flattening empty columns non-deterministically. If you ldd the ssh client, you'll see the null state for each input-to-output mapping has more than a single meaning/edge on the traversal depending on object type. This violates the 1:1 unique input-output state graph/map required for determinism as a property, though it won't be evident until you use the output as an input that maps problematically later in automation (i.e. grepping the output with a regex will silently fail, providing what looks like legitimate output if one doesn't look too closely).

PaX ended up forking the project with the fix, because the maintainers refused to admit the problem (reported 2016, forked in 2018), the bug remains in all current versions of ldd (to my knowledge).

While based in theory, these types of problems crop up everywhere in computation and few seem to recognize them.

Working with system's properties, and whether they are preserved; informs on whether the system can be safely and consistently used in later automated processes, as well as maintained at cheap cost.

Businesses generally need a supportable and defensible infrastructure.

limaoscarjuliet 6 days ago [-]
Been there, done that. In my case, I symlinked myself out of this mess rather than modify ELF.
swayvil 6 days ago [-]
Dig the purple anteater pix.
georgewsinger 6 days ago [-]
=======Technical Summary========

Here's a problem with NixOS:

1. Suppose we have a `./nixos_binary_program_with_glibc-newer` compiled on a NixOS machine against bleeding edge `glibc-newer`.

2. `./nixos_binary_program_with_glibc-newer` will have `/nix/store/glibc-newer/linux-ld.so` path hardcoded into its ELF header which will be used when the program launches to find all of the program's shared libraries, and so forth. (And this is a fact that `ldd` will obfuscate!).

3. When `./nixos_binary_program_with_glibc-newer` is distributed to machines which use `glibc-older` instead of `glibc-newer`, the hardcoded `linux-ld.so` from (2) will fail to be found, leading to a launch error.

4. (3) will also happen on machines which don't use nix in the first place.

=======Will's Solution========

1. Use `patchelf` to hardcode a standard FHS `ld-linux.so` location into `nixos_binary_program_with_glibc-newer`'s ELF header (using e.g. `/lib64/ld-linux-x86-64.so.2` as the path)

2. Use a metaloader to launch `nixos_binary_program_with_glibc-newer` with an augmented `RPATH` which has a bunch of different `/nix/store/glibc-newer` paths, so that nix machines can find a suitable `ld-linux.so` to launch the program with.

This will make `nixos_binary_program_with_glibc-newer` work on any machine, including both non-nix machines and nix machines (which might be running older versions of glibc by default)!
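A sketch of the moving part in step 1, the hardcoded "program interpreter" (assumes binutils is installed; the patchelf line is illustrative, not the actual Antithesis invocation):

```shell
# Sketch: inspect the interpreter path hardcoded into an ELF header.
# On a conventional distro this prints /lib64/ld-linux-x86-64.so.2;
# on a NixOS-built binary it prints a /nix/store/...-glibc/... path.
readelf -l /bin/sh | grep -i 'program interpreter'
# patchelf can rewrite it in place, e.g. (run against your own binary):
#   patchelf --set-interpreter /lib64/ld-linux-x86-64.so.2 ./program
```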

AdamH12113 6 days ago [-]
I'm still confused why static linking isn't a more common solution to versioning issues. Software developers normally have no problem using an order of magnitude more resources to solve organizational problems. Is there any technical advantage to dynamic linking other than smaller binaries and maybe slightly faster load times from disk?
anyfoo 6 days ago [-]
Static linking essentially freezes not only the ABI to the kernel, but many implementation details of the linked libraries as well. Including, for example, how library client code talks to daemons, or formats of files directly read in by the libraries. The timezone configuration would be one instance, or things related to NSS.

It's really not viable in a lot of cases, unless you like rebuilding (or at least relinking) with every system software update.

And then there's of course the memory savings. macOS and iOS for example have giant "shared caches" which are mapped into all processes and comprise all the system libraries. (Other OSs often do this on the individual shared library level.) With static linking, you'd instead have many copies of lots of potentially-but-not-necessarily identical library code pages in DRAM.

nyarlathotep_ 6 days ago [-]
Seems it's rare to hear a defense of dynamic linking (aside from vague allusions to "resource use"), especially in light of "successor languages" seemingly moving away from this approach. Thanks for this.
klodolph 6 days ago [-]
IIRC the underlying implementation may be different on other systems. I think in particular, DNS resolution.

Linux is the only system where static linking all the way really makes any sense. For most systems, you don’t get a stable syscall ABI. Instead, you get a stable ABI to the library which does syscalls for you… Windows has kernel32.dll, macOS has libSystem.

Note that on Linux, the vDSO is dynamically linked.

Compilation speed is a big plus. For large projects, linking time can easily dominate the time needed for incremental rebuilds.

Due to tooling issues, PIE is a lot easier to get with dynamic linking, and PIE gives you better ASLR. The issues are solvable: you can have static PIE if you compile all your static libs as PIE, but you don't get that out of the box.

wwilson 6 days ago [-]
Biggest advantages I know of for dynamic linking:

* You can use the LD_PRELOAD trick to override behavior at runtime.

* You can run with entirely different implementations of the dynamically linked library in different places.

* Software can pick up interface-compatible upgrades to its dependencies without being re-compiled and distributed again.

We use all three of these tricks in our SDKs, FWIW. But it is still a giant pain in the ass.
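The first bullet amounts to launching the process with `LD_PRELOAD` pointing at an interposing shared object, which the dynamic loader consults before the binary's own dependencies. A minimal sketch (the `.so` path in the usage comment is hypothetical):

```python
import os
import subprocess

def run_with_preload(argv, preload_so):
    # Symbols defined in preload_so win resolution over the binary's own
    # shared libraries, letting you override behavior at runtime.
    env = dict(os.environ, LD_PRELOAD=preload_so)
    return subprocess.run(argv, env=env)

# e.g. run_with_preload(["./my_app"], "/opt/hooks/libintercept.so")
```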

lexlash 5 days ago [-]
Off the top of my head:

1) glibc doesn't really support static linking, and musl requires you to understand how musl differs from glibc (DNS resolution being a favorite), so you always end up with at least that one dynamic dependency, at which point you might as well have more dynamic dependencies

2) static binaries take significantly more time to build, and engineers really hate waiting - more than they care about wasted resources :)

3) static linking means having to re-ship your entire app/binary when a dependency needs patching - and I'm not sure how many tools are smart enough to detect vulnerable versions of static-linked dependencies in a binary vs. those that scan hashes in /usr/lib and so on. If your tool is tiny this doesn't matter, but if it's not, you end up in a lot of pain

4) licensing of dependencies that are statically linked in is sometimes legally interesting or different versus dynamic linking, but I'm not sure how many people actually think about that one

I've also personally had all kinds of weird issues trying to build static binaries of various tools/libraries; since it's not as common a need, expect to have to put in a bunch of effort on edge cases.

Resource usage _does_ come up - a great example of this is how Apple handled Swift for a while - every Swift application had to bundle the full runtime, effectively shipping a static build, and a number of organizations rejected Swift entirely because it led to downloads large enough that Apple would push them to Wi-Fi or users would complain. :)

partdavid 6 days ago [-]
In addition to the problems others mention, dynamic object dependencies are only one slice of a dependency pie; if you can config-manage your way out of all of the others then maybe dynamic libs aren't really much of a problem? I think this is why container images became so popular: they close over a lot more dependencies than just dynamic libs.

And the scope of the solution isn't very wide. In the Go community it's common to distribute statically-linked binaries because it solves so many problems--but it just kind of moves them to installation or configuration time because you have to pick a platform and platform version and so forth to find the binary you need, if you want your tool to work on more than one of them.

banish-m4 5 days ago [-]
Completely missed that Nix solves RPM dependency hell, which is a superset of the shared library hell.

Another problem not solved by NixOS and most other distros is conflating and mixing dependencies in a messy, fragile way rather than having a clear separation between the OS and add-ons that FreeBSD and others have. Congruent with this is proper configuration and lifecycle management.

I'm also wondering about the security of this RPATH approach, if it does or doesn't introduce vulnerabilities.

bbor 6 days ago [-]
I love what they’re going for, but I couldn’t help but react negatively at finding out that I had been hyped up for a post on some small technical topic for an OS I don’t know of. Maybe title it “At the Mountains of NIXos Madness”? But then again I’m just a grouch! Well written article regardless, from what I was able to get out of it
pizzalife 6 days ago [-]
Calling binaries using ld-linux used to be a popular way to get around noexec on filesystems, since the libraries are usually in a place that is executable.
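i.e. rather than exec'ing the file directly, you ask the loader (which lives on an exec-able filesystem) to map and start it for you. A sketch, assuming a glibc x86-64 system where the loader sits at the standard path:

```python
import subprocess

LOADER = "/lib64/ld-linux-x86-64.so.2"  # glibc loader on x86-64

def loader_argv(binary, *args):
    # The loader itself is the program the kernel execs, so the target
    # binary only needs to be readable, not executable.
    return [LOADER, binary, *args]

# e.g. subprocess.run(loader_argv("/mnt/noexec/some_tool", "--help"))
```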
NoraCodes 6 days ago [-]
What is that abominable diffusion output doing at the top of an otherwise interesting article?
wwilson 6 days ago [-]
Our artist is on vacation, and some fool gave the CEO access to Midjourney.
bloopernova 6 days ago [-]
In my opinion, generative AI pictures make a blog post feel cheaper and less truthful. Just my view, I fully accept that I'm probably in a minority of opinion.
pizzalife 6 days ago [-]
I agree with you since it adds absolutely no value to the article. Technical articles don't need unrelated pictures that add huge page breaks.
didsomeonesay 6 days ago [-]
FWIW, I enjoyed how the pictures were adding a little theme, were consistent and broke up the reading nicely without being too "noisy" (compared to e.g. technical articles full of meme pictures).
imagineerschool 6 days ago [-]
These synthetic artifacts will come to be regarded as psychological asbestos.

Please consider labelling it, and giving provenance data. And protecting public sanity by putting it behind a clickwall.

knowaveragejoe 6 days ago [-]
What is this trifling and snobby driveby commentary doing in the comments of an otherwise interesting article?
NoraCodes 6 days ago [-]
I think it's useful to impose a social cost for using plagiarism machines to make slop.
dhash 6 days ago [-]
I loved this post, and patchelf is a real gem of a utility.
DoreenMichele 6 days ago [-]
tl;dr: we are open-sourcing an internal tool that solves a problem that we think many NixOS shops are likely to run into. The rest of this post is just the story of how we came to write this tool, which is totally a skippable story.

The tool happens to be called Madness, thus the Lovecraftian reference in this piece.

Madness enables you to easily run the same binary on NixOS and non-NixOS systems

