NHacker Next
Containerization is a Swift package for running Linux containers on macOS (github.com)
commandersaki 1 days ago [-]
Video about it here: https://developer.apple.com/videos/play/wwdc2025/346/

Looks like each container gets its own lightweight Linux VM.

Can take it for a spin by downloading the container tool from here: https://github.com/apple/container/releases (needs macOS 26)

WhyNotHugo 22 hours ago [-]
> Looks like each container gets its own lightweight Linux VM.

That sounds pretty heavyweight. A project with 12 containers will run 12 kernels instead of 1?

Curious to see metrics on this approach.

haiku2077 19 hours ago [-]
This is the approach used by Kata Containers/Firecracker. It's not much heavier than the shared kernel approach, but has significantly better security. A bug in the container runtime doesn't immediately break the separation between containers.

The performance overhead of the VM is minimal; the main tradeoff is container startup time.

Yeroc 13 hours ago [-]
I wonder why Apple cared so much about the security aspect as to take the isolated VM approach versus the shared VM approach. It seems unlikely that Apple hardware is going to be used to host containerized applications in production, where this would be more of a concern. On the other hand, it's more likely to be used for development purposes, where the memory overhead could be a bigger concern.
ghostly_s 13 hours ago [-]
> Seems unlikely that Apple hardware is going to be used to host containerized applications in production

I imagine this is certainly happening already inside Apple datacenters.

haiku2077 13 hours ago [-]
One of the use cases for this feature is for macOS desktop apps to run Linux sidecars, so this needed to be secure for end user devices.
surajrmal 18 hours ago [-]
Ram overhead can be nontrivial. Each kernel has its own page cache.
haiku2077 18 hours ago [-]
On a non Linux OS that should be offset by being able to allocate RAM separately to each container instead of the current approach in Docker Desktop where a static slice of your system memory is always allocated to the Docker VM.
fpoling 14 hours ago [-]
This is a feature targeting developers, or perhaps apps running on an end-user machine, where page cache sharing between applications or containers does not typically yield much RAM saving.

The Linux kernel overhead itself, while non-trivial, is still very manageable in those settings. AWS Nitro's stripped-down VM kernel is about 40 MB; I suppose Apple's solution will be similar.
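As a rough back-of-envelope sketch of the one-kernel-per-container cost (the ~40 MB figure is the AWS Nitro number from the comment above; the container count and the helper are made up for illustration):

```python
# Rough back-of-envelope: kernel RAM cost of one VM per container versus
# one shared VM. Ignores page cache duplication and guest userspace, which
# add more overhead in the per-container-VM case.
KERNEL_MB = 40  # ~size of a stripped-down VM kernel (AWS Nitro figure)

def kernel_overhead_mb(containers: int, per_container_vm: bool) -> int:
    """Total RAM consumed by guest kernels alone, in MB."""
    kernels = containers if per_container_vm else 1
    return kernels * KERNEL_MB

print(kernel_overhead_mb(12, per_container_vm=True))   # one VM per container: 480
print(kernel_overhead_mb(12, per_container_vm=False))  # single shared VM: 40
```

Even in the worst case this is a few hundred MB for a dozen containers, which supports the "manageable in those settings" point.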

arijun 22 hours ago [-]
Is that not the premise of docker?
rtkwe 21 hours ago [-]
No, it's the opposite: the entire premise of Docker over VMs is that you run one instance of all the shared OS stuff, so it takes fewer resources than a VM, and the portable images are smaller because they don't contain an OS image.
dwaite 16 hours ago [-]
The premise is containerization, not necessarily particular resource usage by the host running the containers.

For hosted services, you want to choose - is it worth running a single kernel with a lot of containers for the cost savings from shared resources, or isolate them by making them different VMs. There are certainly products for containers which lean towards the latter, at least by default.

For development it matters a lot less, as long as the sum resources of containers you are planning to run don't overload the system.

rtkwe 10 hours ago [-]
The VM option is relatively new, and the original idea was to provide that isolation without the weight of a VM. Also, I'm not sure that Docker didn't coin the word "containerization"; I've always associated it specifically with the kind of packaging Docker provides, and don't remember it being mentioned around VMs.
pjmlp 13 hours ago [-]
With Windows containers you can choose whether the kernel is shared across containers or not; it is only in Linux containers mode that the kernel gets shared.
WhyNotHugo 21 hours ago [-]
Nope, docker uses the host's kernel, so there are zero additional kernels.

On non-Linux, you obviously need an additional kernel running (the Linux kernel). In this case, there are N additional kernels running.

quietbritishjim 21 hours ago [-]
> On non-Linux, you obviously need an additional kernel running (the Linux kernel).

That seems to be true in practice, but I don't think it's obviously true. As WSL1 shows, it's possible to make an emulation layer for Linux syscalls on top of quite a different operating system.

capitol_ 19 hours ago [-]
I would draw the opposite conclusion from the WSL1 attempt.

It was a strategy that failed in practice and needed to be replaced with a vm based approach.

The Linux kernel has a huge surface area with some subtle behavior in it. There was no economic way to replicate all of that and keep it up to date in a proprietary kernel. Especially as the VM tech is well established and reusable.

paulryanrogers 20 hours ago [-]
WSL1 wasn't really a VM though? IIRC it was implementing syscalls over the Windows kernel.
quietbritishjim 20 hours ago [-]
Indeed, WSL1 isn't a VM. As I said, it's just:

> an emulation layer for Linux syscalls on top of quite a different operating system.

My point was that, in principle, it could be possible to implement Linux containers on another OS without using VMs.

However, as you said (and so did I), in practice no one has. Probably because it's just not worth the effort compared to just using a VM. Especially since all your containers can share a single VM, so you end up only running 2 kernels (rather than e.g. 11 for 10 containers). That's exactly how Docker on WSL2 works.
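The kernel-counting arithmetic in this subthread can be written out explicitly (the model names below are mine, for illustration only):

```python
def kernels_running(containers: int, model: str) -> int:
    """Number of kernels running for a given container setup.

    'native'    - Linux host, containers share the host kernel
    'shared-vm' - non-Linux host, all containers inside one Linux VM
                  (how Docker on WSL2 / Docker Desktop works)
    'vm-each'   - non-Linux host, one Linux VM per container
                  (Apple's Containerization approach)
    """
    if model == "native":
        return 1                # host kernel only
    if model == "shared-vm":
        return 2                # host kernel + one guest Linux kernel
    if model == "vm-each":
        return 1 + containers   # host kernel + one guest kernel per container
    raise ValueError(f"unknown model: {model}")

print(kernels_running(10, "shared-vm"))  # 2
print(kernels_running(10, "vm-each"))    # 11
```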

derekdb 16 hours ago [-]
gVisor has basically re-implemented most of the syscall API, but only when the host is also Linux.
ongy 20 hours ago [-]
I think that's the point. You don't have to run the full kernel to run some linux tools.

Though I don't think it ever supported docker. And wasn't really expected to, since the entire namespaces+cgroup stuff is way deeper than just some surface level syscall shims.

asveikau 17 hours ago [-]
And long before WSL, *BSD was doing this with the Linux syscall abi.
lloeki 13 hours ago [-]
> On non-Linux, you obviously need an additional kernel running (the Linux kernel)

Only "obvious" for running Linux processes using Linux container facilities (cgroups)

Windows has its own native facilities allowing Windows processes to be containerised. It just so happens that in addition to that, there's WSL2 at hand to run Linux processes (containerised or not).

There is nothing preventing Apple from implementing Darwin-native facilities so that Darwin processes could be containerised. It would actually be very nice to be able to distribute/spin up arbitrary macOS environments with some minimal CLI + CLT base† and run build/test stuff without having to spawn full-blown macOS VMs.

† "base" in the BSD sense.

karel-3d 20 hours ago [-]
eh docker desktop nowadays runs VMs even on Linux
speedgoose 20 hours ago [-]
Docker Desktop is non free proprietary software that isn’t very good anyway.
detaro 22 hours ago [-]
no.
AdamN 20 hours ago [-]
I could imagine one Linux kernel running in a VM (on top of MacOS) and then containers inside that host OS. So 1 base instance (MacOS), 1 hypervisor (Linux L0), 12 containers (using that L0 kernel).
haiku2077 19 hours ago [-]
That's how Docker Desktop for Mac works. With Apples approach you have 12 VMs with 12 Linux kernels.
OJFord 1 days ago [-]
The submission is about https://github.com/apple/containerization, not https://github.com/apple/container.

The former is for apps to ship with container sidecars (and cooler news IMO); the latter is 'I am a developer and I want to `docker run ...`'.

(Oh, and container has a submission here: https://news.ycombinator.com/item?id=44229239)

badc0ffee 1 days ago [-]
The former is the framework enabling Linux containers on lightweight VMs and the latter is a tool using that framework.
solarexplorer 21 hours ago [-]
I would assume that "lightweight" in this case means that they share a single Linux kernel. Or that there is an emulation layer that maps the Linux Kernel API to macOS. In any case, I don't think that they are running a Linux kernel per container.
ylk 20 hours ago [-]
You don’t have to assume, the docs in the repo tell you that it does run a Linux kernel in each VM. It’s one container per VM.
solarexplorer 16 hours ago [-]
Good call, thanks for clarifying!
commandersaki 11 hours ago [-]
"Lightweight" in the sense that the VM contains one static executable that runs the container, and not a full fledged Ubuntu VM (e.g. Colima).
paxys 1 days ago [-]
Also works on macOS 15, but they mentioned that some networking features will be limited.
selkin 1 days ago [-]
It seems to work on macOS 15 as well, with some limitations[0].

[0] https://github.com/apple/container/blob/main/docs/technical-...

philips 11 hours ago [-]
Shoutout to Michael Crosby, the person in this video, who was instrumental in getting Open Containers (https://opencontainers.org) to v1.0. He was a steady and calm force through a very rocky process.
discohead 11 hours ago [-]
"A new report from Protocol today details that Apple has gone on a cloud computing hiring spree over the last few months... Michael Crosby, one of a handful of ex-Docker engineers to join Apple this year. Michael is who we can thank for containers as they exist today. He was the powerhouse engineer behind all of it, said a former colleague who asked to remain anonymous."

https://9to5mac.com/2020/05/11/apple-cloud-computing/

zmmmmm 1 days ago [-]
interesting choice - doesn't that then mean that container-to-container integration is going to be harder, and there's a lot of overhead per container? I would have thought a shared VM made more sense. I wonder what attracted them to this.
pxc 1 days ago [-]
It seems great from a security perspective, and a little bit nice from a networking perspective.
selimnairb 20 hours ago [-]
The "one IP per container" approach (instead of shared IPs) is similar to how kubernetes pods work.
mickdarling 21 hours ago [-]
I can see the decision to do it this way being related to their private secure cloud infrastructure for AI tools.
JoBrad 20 hours ago [-]
I like the security aspect. Maybe DNS works, and you can use that for communication between containers?
honkycat 12 hours ago [-]
> Looks like each container gets its own lightweight Linux VM.

We're through the looking glass here, people

zoobab 1 days ago [-]
"Looks like each container gets its own lightweight Linux VM."

Not a container "as such" then.

How hard is it to emulate linux system calls?

teruakohatu 1 days ago [-]
> How hard is it to emulate linux system calls?

It’s doable but a lot more effort. Microsoft did it with WSL1 and abandoned it with WSL2.

tsimionescu 1 days ago [-]
Note that they didn't "do it" for WSL1; they started doing it, realized it was far too much work to cover everything, and abandoned the approach in favor of VMs. It's not like WSL1 was a fully functioning Linux emulator on top of Windows; it was still very far from it, even though it could do many common tasks.
benwad 1 days ago [-]
I've always wondered why only Linux can do 'true' containers without VMs. Is there a good blog post or something I can read about the various technical hurdles?
NexRebular 24 hours ago [-]
> I've always wondered why only Linux can do 'true' containers without VMs.

Solaris/illumos has been able to do actual "containers" since 2004[0] and FreeBSD has had jails even before that[1].

[0] https://www.usenix.org/legacy/event/lisa04/tech/full_papers/... [1] https://papers.freebsd.org/2000/phk-jails.files/sane2000-jai...

syhol 21 hours ago [-]
Many OS's have their own (sometimes multiple) container technologies, but the ecosystem and zeitgeist revolves around OCI Linux containers.

So it's more cultural than technical. I believe you can run OCI Windows containers on Windows with no VM, although I haven't tried this myself.

bayindirh 1 days ago [-]
BSD can do BSD containers with Jails for more than a decade now?

Due to the innate features of a container, it can only be of the same OS as the host it runs on, since containers have no kernel of their own. Otherwise you need to go the VM route.

dwaite 16 hours ago [-]
In this context (OCI containers) that seems very inaccurate. For instance, ocijail is a two year old project still considered experimental.
soupbowl 15 hours ago [-]
FreeBSD has beta podman (OCI) support right now, using freebsd base images not Linux. It is missing some features but coming along.
tsimionescu 23 hours ago [-]
I'm not sure about MacOS, but otherwise all major OSs today can run containers natively. However, the interest in non-Linux containers is generally very very low. You can absolutely run Kubernetes as native Windows binaries [0] in native Windows containers, but why would you?

Note that containers, by definition, rely on the host OS kernel. So a Windows container can only run Windows binaries that interact with Windows syscalls. You can't run Linux binaries in a Windows container any more than you can run them on Windows directly. You can run Word in a Windows container, but not GCC.

[0] https://learn.microsoft.com/en-us/virtualization/windowscont...

kcoddington 21 hours ago [-]
I wouldn't think there are many use cases for Windows, but I imagine supporting legacy .NET Framework apps would be a major one.
tsimionescu 19 hours ago [-]
Is there any limitation in running older .NET Framework versions on current Windows? Back when I was using it, you could have multiple versions installed at the same time, I think.
pjmlp 13 hours ago [-]
You can, but there are companies that also want to deploy different kinds of Windows software into Kubernetes clusters and so on.

Some examples would be Sitecore XP/XM, SharePoint, Dynamics deployments.

notpushkin 1 days ago [-]
Windows can do “true” containers, too. These containers won’t run Linux images, though.
dijit 22 hours ago [-]
Can it? As far as I understood windows containers required Hyper-V and the images themselves seem to contain an NT kernel.

Not that it helps them run on any Windows version other than the one they were built on, it seems.

noisem4ker 22 hours ago [-]
Source?

The following piece of documentation disagrees:

https://learn.microsoft.com/en-us/virtualization/windowscont...

> Containers build on top of the host operating system's kernel (...), and contain only apps and some lightweight operating system APIs and services that run in user mode

> You can increase the security by using Hyper-V isolation mode to isolate each container in a lightweight VM

pjmlp 13 hours ago [-]
Yes, it is based on Windows Jobs API.

Additionally you can decide if the images contain the kernel, or not.

There is nothing in OS containers that specifies a golden rule for how the kernel sharing takes place.

Remember containers predate Linux.

dwaite 16 hours ago [-]
Every OS can theoretically do 'true' containers without VMs - for containers which match the host platform.

You can have Windows containers running on Windows, for instance.

Containers themselves are a packaging format, and do rather little to solve the problem of e.g. running Linux-compiled executables on macOS.

ownagefool 1 days ago [-]
Containers are essentially just a wrapper tool for a linux kernel feature called cgroups, with some added things such as layered fs and the distribution method.

You can also use just use cgroups with systemd.

Now, you could implement something fairly similar in each OS, but you wouldn't be able to use the vast majority of contained software, because it's ultimately linux software.

xrisk 1 days ago [-]
cgroups is for controlling resource allocation (CPU, RAM, etc). What you mean is probably namespaces.
ownagefool 22 hours ago [-]
It's technically both I guess, but fair correction.
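The division of labor the two comments above are circling can be sketched as data (the facility names are the standard Linux ones; the descriptions are my paraphrase):

```python
# On Linux, "containers" combine two separate kernel facilities:
# namespaces isolate what a process can *see*, cgroups limit what it
# can *use*. Runtimes like Docker layer image distribution and an
# overlay filesystem on top of both.
namespaces = {   # isolation
    "mnt":  "filesystem mount table",
    "pid":  "process IDs",
    "net":  "network interfaces, routes, ports",
    "uts":  "hostname and domain name",
    "ipc":  "System V IPC, POSIX message queues",
    "user": "UID/GID mappings",
}

cgroups = {      # resource control
    "cpu":    "CPU time shares and quotas",
    "memory": "RAM and swap limits",
    "io":     "block device bandwidth",
    "pids":   "maximum number of processes",
}

for kind, facilities in (("namespaces", namespaces), ("cgroups", cgroups)):
    print(kind, "->", ", ".join(sorted(facilities)))
```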
anthk 21 hours ago [-]
Containers don't virtualize, just separate environments.
NexRebular 1 days ago [-]
> How hard is it to emulate linux system calls?

FreeBSD has linuxulator and illumos comes with lx-zones that allow running some native linux binaries inside a "container". No idea why Apple didn't go for similar option.

citrin_ru 24 hours ago [-]
FreeBSD's Linux emulation has been in development for 20 (maybe even 30) years. While Apple could throw some $$$ at getting it implemented in a couple of years, using virtualisation requires much less development time (so it's cheaper).
rcleveng 19 hours ago [-]
Apple's already got the Virtualization framework and hypervisor (https://developer.apple.com/documentation/virtualization), so adding the rest of the container ecosystem seems like a natural next step.

It puts them on par with Windows, which has container support with a free option. Plus I imagine it's a good way to pressure-test Swift as a language, to make sure it really can be the systems programming language they are betting it can and will be.

OrbStack has a great UX and experience, so I imagine this will eat into Docker Desktop on Mac more than OrbStack.

masklinn 22 hours ago [-]
Because that's a huge investment for something they have no reason or desire to productize.
surajrmal 18 hours ago [-]
Syscalls are just a fraction of the surface area. There are many files in many different virtual filesystems you need to implement, plus things like SELinux, eBPF, io_uring, etc. It's also a constantly shifting target. The VM API is much simpler, relatively stable, and already implemented.

Emulating Linux only makes sense on devices with constrained resources.

throwaway1482 16 hours ago [-]
> How hard is it to emulate linux system calls?

Just replace the XNU kernel with Linux already.

sangeeth96 1 days ago [-]
The CLI from the press release/WWDC session is at https://github.com/apple/container which I think is what many like myself would be interested in. I was hoping this'd be shipped with the newest Xcode Beta but that doesn't seem to be the case. Prebuilt packages are missing at the moment but they are working on it: https://github.com/apple/container/issues/54
n2d4 1 days ago [-]
Seems prebuilt packages were released exactly one minute after your comment: https://github.com/apple/container/releases/tag/0.1.0
sangeeth96 1 days ago [-]
Beat me to it, thanks!
julik 18 hours ago [-]
Really curious how this improves the filesystem bridging situation (which with Docker Desktop was basically bouncing from "bad" to "worse" and back over the years). Or whether it changes it at all.
torginus 15 hours ago [-]
I'm just taking a wild guess here, but I'd guess it's not a problem - WSL2 works afaik by having a native ext4 partition, and the Windows kernel accesses it. Intra-OS file perf is great, but using Windows to access Linux files is slow.

MacOS just understands ext4 directly, and should be able to read/write it with no performance penalty.

dwaite 16 hours ago [-]
I would imagine it is low lift - using https://developer.apple.com/documentation/virtualization/sha... which is already built into the OS
merb 13 hours ago [-]
If they wanted to improve the situation they would’ve needed to ship an apfs driver and a Linux kernel. Sadly they didn’t.
wmf 12 hours ago [-]
APFS isn't the solution since you can't have two kernels accessing the same FS. The solution is probably something like virtio-fs with DAX.
benwaffle 14 hours ago [-]
I wonder how it compares to orbstack
candiddevmike 1 days ago [-]
Wonder how Docker feels about this. I'd assume a decent amount of Docker for Desktop is on Mac...
paxys 1 days ago [-]
Well it makes developing Docker Desktop infinitely easier for them, since they no longer need to start their own Linux VM under the hood. I think the software is "sticky" enough that people will still prefer to use Docker Desktop for the familiar CLI and UX, Docker Compose, and all the Docker-specific quirks that make migrating to a different container runtime basically impossible.
pxc 1 days ago [-]
Docker Desktop on Windows uses WSL to provide the Docker daemon, doesn't it? So Docker Desktop has a history of leaning into OS offerings for virtualizing Linux like this where they can.
cosmotic 1 days ago [-]
Docker desktop on macos uses the Apple virtualization framework to run a Linux VM these days.

https://developer.apple.com/documentation/virtualization

prmoustache 1 days ago [-]
I never used docker desktop and am struggling to understand what you are supposed to be doing with a gui in a docker/container context.
stackskipton 1 days ago [-]
GUI lets you look at logs quickly, there is buttons to click quickly open http://localhost:<port>, stop and start containers, get shell in container and bunch of other stuff that people developing or testing against containers need locally.
prmoustache 1 days ago [-]
I am surprised a developer would not have chosen to redirect the port at run time already and would not be running the containers in the foreground in the first place.
stackskipton 1 days ago [-]
So many developers don't learn Docker. I'm an Ops type person; outside FAANG, most devs are just flinging code at the screen to close JIRA tickets, get the build to go green, and collect a paycheck to go home. Docker, that's for us Ops people who have built that rickety pipeline that somehow manages to get your code into a container and into the Kubernetes cluster.
phinnaeus 1 days ago [-]
Dropbox copypasta goes here
Spivak 1 days ago [-]
I mean maybe but if you run your containers not via a GUI you get most of that for free or at worst with a docker logs or docker exec command.

Do people learn docker not via the CLI?

lenerdenator 1 days ago [-]
They do, then they realize that it's not the core component of their jobs (unless they're ops) and it is easier to press a "stop" button to kill containers, at least in their use case.
DanHulton 1 days ago [-]
I did. Well, I did until I found lazydocker, a TUI that handles the majority of the day-to-day stuff that I need to do that isn't already written into tasks in my justfile: https://github.com/jesseduffield/lazydocker
queenkjuul 1 days ago [-]
I for one have been using docker on Linux for years and have to use a Mac at work, and I'm totally baffled by the fact i need to install docker desktop to use the CLI and don't get why you'd need or want the GUI.

And like I'm not all anti-GUI, it's just that docker is one of those things I've never even imagined using a GUI for

spockz 1 days ago [-]
You don’t have to install docker desktop. The cli can be installed via homebrew. (Co)Lima, podman, or others, can be used to create a VM running the docker engine.

It’s just that Docker Desktop makes it easy and also provides other integrations like file system sharing etc.

selcuka 1 days ago [-]
I mean, it's nice to have a GUI when running multiple containers on Docker, or Kubernetes, but I've never used Docker Desktop on my work Mac either.

For Kubernetes, something like K9s [1] or Headlamp [2] works fine. I remember seeing something similar for Docker but I can't remember the name.

[1] https://k9scli.io/ [2] https://headlamp.dev/

TheDong 1 days ago [-]
I think there's a difference in that dropbox was targeted at regular users, not just developers.

I think docker desktop and apple's containerization are both targeted firmly at developers only.

It's like programming, sure it's possible to write code in microsoft office or xcode or vscode, but all programmers I've met opt for ed or vi.

pjmlp 13 hours ago [-]
Developers are users as well, I don't get the macho thing that developers always have to do it the hard way.
dajtxx 5 hours ago [-]
The problem (as far as I can tell) is that for Windows and MacOS you can't install the docker daemon etc without installing Desktop.

I have a Mac for work and containers are a pain. I've tried Podman, UTM, colima, Docker Desktop etc and it all boils down to the same thing - run a linux VM and have the command line utils cooperate with the VM to run the containers.

It comes down to which solution has the least friction and irritations and Docker might still win there.

My current setup is UTM running a debian VM which I share my source directory with and ssh into to run docker. This is simpler for my brain to understand because the linux VM isn't a hidden component I forget to manage.

But it's not obvious how to mount the shared directory and I'm constantly running into networking problems - currently I cannot connect as myself and must sudo ssh for it to work. A reboot (of the Mac) used to fix it, but no longer does. I've given up trying to fix it and just sudo.

arjonagelhout 1 days ago [-]
For me, Docker Desktop is simply an easy way to launch the Docker daemon and inspect some created images and their corresponding logs. Other than that, the cli suffices.
hiccuphippo 1 days ago [-]
We had to remove Docker Desktop at my job (I think they started asking for money?) and moved to Lima/Colima. If this project means one less program to configure to get my docker containers running then I'm all for it.
queenkjuul 1 days ago [-]
Docker desktop for commercial use requires a license and they don't release binaries for Mac other than desktop. Seems like their one route to monetization. I use docker for literally only one build that doesn't work on native macOS so i love the idea of switching to a simple standalone CLI
bruckie 1 days ago [-]
I use Rancher Desktop plus the FOSS docker CLI from Homebrew. Works well, and has no licensing issues.
pxc 1 days ago [-]
Imo, the GUI isn't really the most important part of things like Docker Desktop.

The nice part is that they (a) set up the Linux VM that runs the Docker daemon for you and (b) handle the socket forwarding magic that lets you communicate with it "directly" by running the Docker client on the host OS. This is likewise true for Podman Desktop, Rancher Desktop, etc.

The GUI is icing on the cake, imo.
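The "socket forwarding magic" mentioned above mostly comes down to where the client is told to send its API calls. A sketch of the endpoint resolution (DOCKER_HOST and the default unix socket path are real Docker conventions; the helper function is mine, for illustration):

```python
# The docker CLI talks to the daemon over a socket whose location is
# controlled by the DOCKER_HOST environment variable, defaulting to a
# local unix socket. Docker Desktop, Colima, etc. work by making that
# socket (or a tcp endpoint) reach the daemon inside their Linux VM.
from urllib.parse import urlparse

def docker_endpoint(env: dict) -> tuple[str, str]:
    """Return the (transport, address) a Docker-style client would dial."""
    host = env.get("DOCKER_HOST", "unix:///var/run/docker.sock")
    u = urlparse(host)
    if u.scheme == "unix":
        return ("unix", u.path)    # local socket, possibly forwarded into a VM
    if u.scheme == "tcp":
        return ("tcp", u.netloc)   # daemon reachable over the network
    raise ValueError(f"unsupported scheme: {u.scheme}")

print(docker_endpoint({}))                                      # default unix socket
print(docker_endpoint({"DOCKER_HOST": "tcp://127.0.0.1:2375"})) # explicit tcp endpoint
```

This is why alternative runtimes can slot in under the same CLI and tooling: as long as something answers the Docker API at that endpoint, the client doesn't care what's behind it.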

bdcravens 1 days ago [-]
Very few use the GUI for things other than configuring Docker engine settings like memory, etc.
coredog64 20 hours ago [-]
Unless this provides an extremely compatible Docker socket implementation, this is the answer. When Docker changed the licensing for Docker Desktop, my previous employer made it harder to get permission. However, there were a few tools that were in common usage, and once you mentioned that you used them, you got your permission.

Some progress has been made to create a non-Docker implementation that integrates with all those random tools that expect to be able to yeet bytes into or out of the Docker socket, but I still hit blockers the last time I tried.

n2d4 1 days ago [-]
This doesn't compete with Docker for Desktop, as it's more low-level than that.

Docker for Desktop sits on-top of container/virtualization software (Hypervisor.framework and QEMU on Mac, WSL on Windows, containerd on Linux). So there's a good chance that future versions of Docker for Desktop will use this library, but they don't really compete with each other.

cogman10 1 days ago [-]
Probably about the same way they feel about podman.
baby_souffle 1 days ago [-]
I guess it'll depend on whether or not this starts shipping by default with new macOS installs.

If it doesn't, then it's still a toss-up whether a user chooses docker/podman/this, etc.

If it ends up shipping by default and is largely compatible with the same command line flags and socket API... Then docker has a problem.

For what it's worth, I prefer podman but even on Linux where the differentiators should be close to zero, I still find certain things that only docker does.

Kwpolska 1 days ago [-]
Podman is fairly niche. This is an Apple product that Apple developer circles will push hard.
hocuspocus 1 days ago [-]
Alternatives to Docker Desktop aren't niche at all since Docker started charging money.

My org's management wasn't taking the issue seriously, but once the subscription cost reached one FTE's salary, they started listening to people who had already switched to Podman, Rancher or (Co)Lima.

m463 1 days ago [-]
and telemetry on macos
cogman10 1 days ago [-]
I agree, Apple has a lot of weight. Podman, however, also has a fair bit of heft behind it (IBM via Redhat).

I'll not deny that it's a bit niche, but not so much so that it's completely unknown.

xp84 1 days ago [-]
I apologize if this sounds like a hot take, but "Apple developer circles," as in, people who use Xcode at all and care about any part of Apple's toolchain[0], is a very small number of people compared to "all developers who happen to code on Macs." In my experience at least, the typical developer who uses macOS codes in VSCode on JS, Python, etc., groans when some file association accidentally launches Xcode, and would likely prefer to use normal Docker like they do on their Linux servers, rather than proprietary Darwin weirdness.

"Apple developer circles" to me means the few, mostly indie, developers who build non-Electron Mac apps and non-React Native iOS apps, but those developers are mostly writing client code and don't even touch servers.

All this said, my above "gut feelings" don't explain why Apple would have bothered spending their time making this when Orbstack, Docker, etc. already meet the needs of the developers on Mac who actually need and use containers.

[0]: besides the "Command line tools" that allow compilation to work, of course.

rollcat 23 hours ago [-]
> All this said, my above "gut feelings" don't explain why Apple would have bothered spending their time making this when Orbstack, Docker, etc. already meet the needs of the developers on Mac who actually need and use containers.

Before Orbstack, running Docker on Macs was a total pain - the official desktop app is so awful, I doubt anyone at Docker is actually using it. Nevertheless, it was still too useful to let it pass. It was time either Docker or Apple stepped up, but they are both 10 years late to this party. Orbstack fixed the problem.

It would be interesting to see the reaction from Danny Lee, he's hanging out on HN sometimes. I hope this framework ends up being a building block, rather than outright competition.

jeroenhd 1 days ago [-]
Podman is the easy go-to for companies that don't like how Docker Desktop requires a license.

I'm sure Apple will try to push their own version of Docker but I'm not sure if they'll be able to win over any Docker Desktop businesses unless their tool also works on other operating systems.

pjmlp 1 days ago [-]
On Windows it is Rancher Desktop that tends to be used, especially since podman only lately started offering an easy GUI.

Sadly all of them are Electron based.

hocuspocus 20 hours ago [-]
Most of my coworkers on Windows use none of these desktop applications, there's very little value in their features if you're already using WSL2 and the docker integration of your favorite IDE.
smw 17 hours ago [-]
You're missing the fact that "docker desktop" actually provides the docker daemon as well as a GUI. There are alternatives for both Mac and Windows, but I'd wager that many people use "docker desktop" just for the ability to run docker containers from the cli.
hocuspocus 43 minutes ago [-]
Windows and macOS are specifically not the same here.

WSL2 provides everything you need to install the docker daemon and CLI, and the VS Code extension gives you a pretty decent GUI, there's no need for anything else really.

pjmlp 19 hours ago [-]
Well, it depends if people have a background as Windows developers, or UNIX refugees on Windows.
blinded 1 days ago [-]
Also the second they started charging podman dev picked up and that has gotten real good.
sneak 1 days ago [-]
Docker Desktop is closed source proprietary software and this is free software, so this is a win (for us, at least).
spockz 1 days ago [-]
At first I thought this sounded like a blend of the virtualisation framework with a firecracker style lightweight kernel.

This project has its own kernel, but it also seems to be able to use the Firecracker one. I wonder what the advantages are. Even smaller? Making use of some Apple Silicon properties?

Has anyone tried it already and is it fast? Compared to podman on Linux or Docker Desktop for Mac?

rfoo 1 days ago [-]
The advantage is, now there's an Apple team working on it. They will be bothered by their own bugs and hopefully get them fixed.

Virtualization.framework and co were buggy af when introduced, and even after a few major macOS versions there are still lots of annoyances, for example the one documented in "Limitations on macOS 15" of this project, or the half-assed memory ballooning support.

Hypervisor.framework, on the other hand, is mostly okay, but then you need to write a lot more code. Hypervisor.framework is equivalent to KVM and Virtualization.framework is equivalent to QEMU.

egorfine 23 hours ago [-]
> They will be bothered by their own bugs

Laughs in Xcode

Aaron2222 23 hours ago [-]
And QEMU on macOS uses Virtualization.framework for hardware virtualisation.
pxc 1 days ago [-]
This is the most surprising and interesting part, imo:

> Contributions to `container` are welcomed and encouraged. Please see our main contributing guide for more information.

This is quite unusual for Apple, isn't it? WebKit was basically a hostile fork of KHTML, Darwin has basically been something they throw parts of over the wall every now and then, etc.

I hope this and other projects Apple has recently put up on GitHub see fruitful collaboration from user-developers.

I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux. Over the past couple of years, Apple Silicon has convinced me to use an Apple computer as my main laptop at home (more comparable, Linux-friendly alternatives seem closer now than when I got my personal MacBook, and I'm still excited for them). This kind of thing seems like a positive change that lets me feel less conflicted.

Anyway, success here could perhaps be part of a virtuous cycle of increasing community collaboration in the way Apple engages with open-source. I imagine a lot of developers, like me, would both personally benefit from this and respect Apple for it.

boxed 1 days ago [-]
> WebKit was basically a hostile fork of KHTML

Chromium is a hostile fork of WebKit. WebKit was a rather polite fork of KHTML; it's just that Apple had a team of full-time programmers, so KHTML couldn't keep up with the upstream requests and gave up, since WebKit did a better job anyway.

I personally would LOVE if a corporation did this to any of my open source projects.

todotask2 1 days ago [-]
And the creator of KHTML is now part of WebKit team at Apple.

Even KDE eventually dropped KHTML in favor of its successor: WebKit-based engines (like QtWebKit, and later Qt WebEngine, based on Chromium).

A web engine isn't just software; it needs to keep evolving.

Recognising the value of someone's work is better than ignoring it and trying to build everything from scratch on your own; Microsoft's Internet Explorer did not last.

bigyabai 1 days ago [-]
Blink is the hostile fork of WebKit. And you would not like it if any corporation did this to your open source project; on HN alone I see a small army's worth of people who bitch about websites built for Chrome but not Safari. That's how Konqueror users felt back when Apple didn't collaborate downstream, so turnabout is truly fair play.
kergonath 1 days ago [-]
> That's how Konquerer users felt back when Apple didn't collaborate downstream, so turnabout is truly fair play.

You are rewriting history here. The main KHTML developers were hired by Apple and Konqueror got on with the new engine. There was no fuss and no drama.

The reason why it’s fair play is that the license allows it. Google is no white knight out to avenge the poor KHTML users from 2003.

mattl 1 days ago [-]
I think there was some perceived initial concern about the patches provided by Apple to KHTML, inasmuch as they now had to merge a huge amount of code into the project and much of it was (IIRC) lots and lots of ifdef statements.
kergonath 13 hours ago [-]
Yes, but the main issue was the volume and frequency of patches, not that the patches were intentionally hard to upstream (you can always complain about style, though). I don’t have the links handy right now but I remember the discussions amongst the KHTML devs at the time.
pxc 22 hours ago [-]
> There was no fuss and no drama.

I didn't write my initial comment here to relitigate this, but you are absolutely the one rewriting history. I remember reading about it because I was a KDE user at the time. But sources are easy to find; there are blog posts and press articles cited in Wikipedia. Here's a sample from one:

> Do you have any idea how hard it is to be merging between two totally different trees when one of them doesn't have any history? That's the situation KDE is in. We created the khtml-cvs list for Apple, they got CVS accounts for KDE CVS. What did we get? We get periodical code bombs in the form of them releasing WebCore. Many of us wanted to even sign NDA's with Apple to at least get access to the history of their internal vcs and be able to be merging the changes incrementally, the way they can right now. Nothing came out of it. They do the very, very minimum required by LGPL.

> And you know what? That's their right. They made a conscious decision about not working with KDE developers. All I'm asking for is that all the clueless people stop talking about the cooperation between Safari/Konqueror developers and how great it is. There's absolutely nothing great about it. In fact "it" doesn't exist. Maybe for Apple - at the very least for their marketing people. Clear?

https://web.archive.org/web/20100529065425/http://www.kdedev...

From another, the very developer they later hired described the same frustrations in more polite language:

> As is somewhat well known, Apple's initial involvement in the open-source project known at KHTML was tense. KHTML developers like Lars were frustrated with Apple's bare-bones commitment to contributing their changes back to the project. "It was hard, and in some cases impossible to pick apart Apple's changes and apply them back to KHTML," he told us. Lars went on to say, "This kind of development is really not what I wanted to do. Developers want to spend their time implementing new features and solving problems, not cherry picking through giant heaps of code for hours at a time."

https://arstechnica.com/gadgets/2007/06/ars-at-wwdc-intervie...

This uncooperative situation persisted for the first 3 or 4 years of the lifetime of Apple's fork, at least.

> The reason why it’s fair play is that the license allows it. Google is no white knight out to avenge the poor KHTML users from 2003.

You're right about this, though.

Anyway, there's no need to deny or erase this in order to defend Apple. Just pointing to other open-source projects they released or worked with in the intervening years, as many other commenters have done in reply to my initial comment, is sufficient!

bigyabai 16 hours ago [-]
Okay. Just make sure nobody searches up the KDE blogs from back then, it might derail your argument.

> Google is no white knight out to avenge the poor KHTML users from 2003.

Nope. They're here to molest your runtime. Portions of it are not expected to survive the assault.

Normally, this is where I'd say "us Linux and Mac users should join arms and fight the corporations!" but that bridge has been burning for almost 20 years now. These days I'm quite content with Safari's fate regardless of how cruelly it's treated; after all, the license allows it. No fuss, and no drama. Healthy as a horse, honest.

kergonath 13 hours ago [-]
The developers moved on, that's all; that's why there was no fork and no momentum behind the original KHTML library. WebKit quickly became the gold standard at the time of the Acid tests, replaced KHTML in most places, and nobody looked back. It remained functionally identical, except that it had orders of magnitude more resources than before.

There’s more blood and drama every time there’s a GTK update.

> These days I'm quite content with Safari's fate regardless of how cruel it's treated; after all, the license allows it. No fuss, and no drama.

Well, bitching is not very productive. We can regret a Blink monoculture, but it would have been exactly the same if Chrome kept using WebKit (if anything, that would have been worse), or if they switched to Gecko. The drama with Chrome has nothing to do with who forked whom.

boxed 24 hours ago [-]
> And you would not like if any corporations did this to your Open Source project; on HN alone I see a small army's worth of people who bitch about websites built for Chrome but not Safari.

Those are unrelated things.

bigyabai 16 hours ago [-]
What engine is Blink based on? You can Google it.
holycrapwhodat 1 days ago [-]
> WebKit was basically a hostile fork of KHTML...

WebKit has been a fully proper open source project, with an open bug tracker, patch review, commit history, etc., since 2005.

Swift has been a similarly open project since 2015.

Timeline-wise, a new high profile open source effort in 2025 checks out.

jen20 1 days ago [-]
FoundationDB has been a fully proper open source project since 2018…
willtemperley 1 days ago [-]
I find Apple to be very collaborative on OSS - I hacked up a feature I needed in swift-protobuf and over a couple of weeks two Apple engineers and one Google engineer spent a significant amount of time reviewing and helping me out. It was a good result and a great learning experience.
gigatexal 1 days ago [-]
I too am more of a reluctant convert to Mac from Linux. It really does just work most of the time for me in the work context. It allows me to get my job done and not worry because it’s the most supported platform at the office. Shrug. But also the hardware is really really really nice.

I do have a personal MacBook Pro that I maxed out (https://gigatexal.blog/pages/new-laptop/new-laptop.html) but I do miss tinkering with my i3 setup and trying out new distros etc. I might get a used ThinkPad just for this.

But yeah my Mac personal or work laptop just works and as I get older that’s what I care about more.

Going to try out this container binary from them. Looks interesting.

roughly 1 days ago [-]
If you’re looking for a hobby computer, Framework’s laptops are a lot of fun. There’s something about a machine that’s so intentionally designed to be opened up and tinkered with - it’s not my daily driver, but it’s my go to for silly projects now.
zapzupnz 1 days ago [-]
It's not that surprising. Much of Swift and its frameworks are contributed by the open source community.
pxc 1 days ago [-]
That's true, but I always thought of Swift as exceptional in this because Swift is a programming language, and this has become the norm for programming languages in my lifetime.

If my biases are already outdated, I'm happy to learn that. Either way, my hopes are the same. :)

samtheprogram 1 days ago [-]
Jai has been one exception I can think of here. It hasn’t been publicly released yet, either (you can email/request pre-release access, though)
klausa 1 days ago [-]
What’s Jai?
noufalibrahim 1 days ago [-]
Apple has a lot of good stuff out there, doesn't it? Aren't LLVM and CUPS theirs, more or less?
kergonath 1 days ago [-]
They gave up on CUPS, which was left in limbo for way too long. Now it’s been forked, but I don’t know how successful that fork is.

They took over LLVM by hiring Chris Lattner. It was still a significant investment, and they kept pouring resources into it for a long while before it got really widespread adoption. And yes, that project is still going.

merb 13 hours ago [-]
Tbf, if you look at all the printer drivers out there, you know why they dropped it. PPD is also not a good standard. I mean, it would not be too bad, but what printer developers do to make their shitty printers work… (like adding binary command filters and stuff, binary tray management extensions…) Xerox, for one example, ships really strange drivers. Most of the time I use their Windows PPD and strip the binary stuff.
pxc 6 hours ago [-]
CUPS is still the only print system macOS has. Apple never dropped it in the sense of ceasing to use it! "They dropped it" only in the sense of more or less ceasing to maintain it (there was only one commit in the course of about a year, and no patches accepted from outside contributors at that time) until it eventually had to be forked.

The name stands for Common Unix Printing System, and Apple CUPS ceased to meaningfully be that after its author left the company. But Apple still uses CUPS in their operating systems!

Squarex 24 hours ago [-]
cups seems to be properly maintained now https://github.com/openprinting/cups
kergonath 13 hours ago [-]
Yes, that’s the fork I mentioned. The last version of Apple CUPS seems to be 3 years old https://www.cups.org/ .
compiler-guy 13 hours ago [-]
Apple is heavily involved in LLVM, but so are several other companies. Most prominently Google, which contributes a huge amount, and much of the testing infrastructure. But also Sony and SiFive and others as well.

It’s all very corporate, but also widely distributed and widely owned.

overfeed 1 days ago [-]
> I'm a F/OSS guy at heart who has reluctantly become a daily Mac user due to corporate constraints that preclude Linux

I suspect this move was designed to stop losing people like you to WSL.

guztaver 1 days ago [-]
As a long-time Linux user, I can confidently say that the experience of using an M1 Pro is significantly superior to WSL on Windows!

I can happily use my Mac as my primary machine without much hassle, just like I would often do with WSL.

mikepurvis 1 days ago [-]
I'm in that camp— I was an Intel Mac user for a decade across three different laptops, and switched to WSL about six years ago. Haven't strongly considered returning.
ma5ter 1 days ago [-]
> I suspect this move was designed to stop losing people like you to WSL.

I am thinking the same; the Docker Desktop experience was not that great, at least on Intel Macs.

1 days ago [-]
nhumrich 1 days ago [-]
Since this is touching Linux, and Linux is copyleft, they _have_ to do this.
mirashii 1 days ago [-]
In addition to the other comments about the fact that this wasn't forced to adopt the GPL, even if it were, there's nothing in the license that forces you to work with the community to take contributions from the public. You can have an entirely closed development process, take no feedback, accept no patches, and release no source code until specifically asked to do so.

They don't have to do literally any of this.

pxc 1 days ago [-]
Right! The exciting thing is the approach, not the license.
n2d4 1 days ago [-]
Touching Linux would not be enough. It would have to be a derivative work, which this is (probably?) not.

Besides, I think OP wasn't talking about licenses; Apple has a lot of software under FOSS licenses. But usually, with their open-source projects, they reject most incoming contributions and don't really foster a community for them.

TimTheTinker 1 days ago [-]
> derivative work

Or distributing builds of something that statically links to it. (Which some would argue creates a derivative work.)

jen20 1 days ago [-]
This doesn’t do that, though.
pxc 1 days ago [-]
If the license of this project were determined by obligations to the Linux kernel, it would be GPLv2, not Apache License 2.0!
Aurornis 1 days ago [-]
The comment was about them welcoming contributions, not making it open source.
sho_hn 1 days ago [-]
So both of the other two big desktop OSs now have official mechanisms to run Linux VMs to host Linux-native applications.

You can make some kind of argument from this that Linux has won; certainly the Linux syscall API is now perhaps the most ubiquitous application API.

sangeeth96 1 days ago [-]
> Linux has won

Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems is not particularly a victory, if you look at it from that perspective. It just highlights the piss-poor state of the Linux desktop even after all these years. For the average person it's still terrible on every front, and something I still have a hard time recommending when things so often go belly up.

Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".

omnimus 1 days ago [-]
I am a UX designer and forever Mac user. I also try Fedora on random stuff. I am not sure why, but the last time I tried it I got Blender-circa-10-years-ago vibes from the desktop Linux GNOME.

Everybody has been making fun of Blender forever, but they consistently made things better step by step, and suddenly, after a few UX enhancements, the wind started to shift. It completely flipped and now everybody is using it.

I wouldn't be surprised if desktop Linux's days are still ahead. It's not only Valve and gaming. Many things seem to be starting to work in tandem: Wayland, PipeWire, Flatpak, atomic distros… hey, even GNOME is starting to look pretty.

robertlagrant 22 hours ago [-]
It definitely could happen, but there are a few things standing in the way of it:

- there's not one desktop Linux that everyone uses (or even uses by default), and it's not resolving any time soon

- I use Ubuntu+Gnome by default, and I wouldn't say it looks great at all, other than the nice Ubuntu desktop background, and the large pretty sidebar icons

- open source needs UX people to make their stuff look professional. I'm looking at you, LibreOffice

DrScientist 22 hours ago [-]
Forget looks - I'd just be happy with rock solid.

The standard Ubuntu+Gnome desktop crashes far too often.

Now I have no idea whose fault that is (graphics driver, window system, or desktop code, or all three), but it's been a persistent problem for Linux 'desktops' over many, many years.

omnimus 18 hours ago [-]
Imho the bright side is that this has solutions and it is getting better. Linux can be very stable; look at servers, Android, or even the Steam Deck. It's mostly a hardware lottery, which means it comes down to hardware companies' support.
newdee 15 hours ago [-]
Atomic distros (fedora’s specifically) are what got me to stick to desktop Linux. That was after seeing how well the Steam Deck worked, and therefore Proton. I haven’t reinstalled in almost 2 years. Not even got the distro itch once.
akie 1 days ago [-]
I've been hearing that for 20 years though...
cromka 1 days ago [-]
And that’s exactly what OP alluded to in their Blender comparison.
omnimus 1 days ago [-]
So what? It just means aspirations have been there.

I’ve not been waiting 20 years for Linux. But looking at it right now, things seem pretty positive to me.

jeroenhd 1 days ago [-]
The problem with the Linux desktop isn't usability, it's the lack of corporate control software. Without corporate MDM and antivirus, you'll find it rather annoying to get a native Linux desktop in many companies.

For Windows and MacOS you can throw a few quick bucks over the wall and tick a whole bunch of ISO checkboxes. For Linux, you need more bespoke software customized to your specific needs, and that requires more work. Sure, the mindless checkboxes add nothing to whatever compliance you're actually trying to achieve, but in the end the auditor is coming over with a list of checkboxes that determine whether you pass or not.

I haven't had a Linux system collapse on me for years now thanks to Flatpak and all the other tools that remove the need for scarcely maintained external repositories in my package manager. I find Windows to be an incredible drag to install compared to any other operating system, though. Setup takes forever, updates take even longer, there's a pretty much mandatory cloud login now, and the desktop looks like a KDE distro tweaked to hell (in a bad way).

Gnome's "who needs a start button when there's one on the keyboard" approach may take some getting used to, but Valve's SteamOS shows that if you prevent users from mucking about with the system internals because gary0x136 on Arch Forums said you need to remove all editors but vi, you end up with a pretty stable system.

kstrauser 1 days ago [-]
In defense of MDM, those checkboxes aren’t even entirely useless. It’s so nice being able to demonstrate that every laptop in the company has an encrypted hard drive, which you should be doing anyway. It turns a lost or stolen laptop from a major situation to a minor financial loss and inconvenience.

Yes, a lot of MDM features are just there to check ISO-whatever boxes. Some are legitimately great, though. And yes, even though I’m personally totally comfortable running a Linux laptop, come SOC2 audit time it’s way harder to prove that a bunch of Linux boxes meet required controls when you can’t just screenshot the Jamf admin page and call it good.

HdS84 1 days ago [-]
We introduced MDM for our Mac boxes early this year. Over half(!) had outdated macOS versions and had missed multiple major updates. Before that, it was always really obvious that you needed to run the newest version ASAP ("ASAP" meaning all dev tools run on the newest version, which was not a given, so a few weeks' delay was OK). We have lots of Linux boxes and I suspect their patch state is even worse, but how to check that? There are a dozen distros and a few self-built systems...
kstrauser 19 hours ago [-]
That was our experience, too. Sales people never update. They just don’t.

One day I asked our CFO something, and watched him log into his laptop with like 4 keypresses. And that’s how we got more complex password requirements deployed everywhere.

Having spent a few years as a CISO, I now understand much more about why we have all those pain-in-the-neck controls. There’s a saying about OSHA regulations that each rule is written in blood. I don’t know what the SOC2 version of that is, but there should be one.

HdS84 13 hours ago [-]
Yes, halfway decent security runs counter to most people's inclinations, like OSHA or medicine rules. So enforcement is important, though it is annoying.
kstrauser 12 hours ago [-]
I've gotten a lot of mileage out of explaining why we're enforcing controls. "OK, as an engineer, I'm not fond of this either, but here's why it's important..." goes a long way.
ndriscoll 19 hours ago [-]
Do those MDM solutions look into the Linux VMs? Because once I get one of those Rube Goldberg machines working-ish, I'm naturally going to do my best to never touch it/never update anything. Native Linux tends to Just Work and has easy rollbacks, so it's fine to update.
HdS84 13 hours ago [-]
Probably not... So they will have issues, too.
jeppester 21 hours ago [-]
> Before you jump on me, every year, I install the latest Fedora/Ubuntu (supposedly the noob-friendly recommendations) on a relatively modern PC/Laptop and not once have I stopped and thought "huh, this is actually pretty usable and stable".

Funnily enough that's how I feel every time I use Windows or Mac. Yet I'm not bold enough to call them "piss poor". I'm pretty sure I - mostly - feel like that because they are different from what I'm used to.

nsagent 17 hours ago [-]
As someone who grew up running Microsoft OSes, starting with DOS, then Windows and who has used a Mac laptop since the Windows Vista days, my perspective on the usability of Linux Desktop is unrelated to it simply being "different from what I'm used to."

Transitioning from Windows to Mac was much more of an adjustment than Linux Desktop. It's just that Linux has too many rough edges. While it's possible I've simply been unlucky, every time I've tried Linux it's been small niggling issue after small niggling issue that I have to work around, and it feels like a death of a thousand paper cuts. (BTW I first tried Linux desktop back in the late 90s and most recently used it as my main work laptop for 9 months this past year.)

Note, I'm more than happy to use Linux as a server. I run Linux servers at home and have for decades. But the desktop environments I've tried have all been irksome.

Note that I'm not mentioning particular distros or desktop environments because I've tried various over the years.

jeppester 15 hours ago [-]
It's hard to guess why you have such an experience when you are not being more precise than "issue after issue", but it would seem plausible that you are using hardware with poor support.

After all there are plenty of people - including me - who do not share that experience at all.

sho_hn 1 days ago [-]
I'd say that's a fairly web development-centric take. I work at an embedded shop that happily puts a few million cars running Linux on the road every year, and we have hundreds of devs mainly running Linux to develop for Linux.
sangeeth96 1 days ago [-]
The average person is not dishing out software that runs on millions of cars from the average PC/laptop they got off the shelf at their Best Buy equivalent. I’d say the same for the average developer. I’d also guess that, given a choice and unless there are technical limitations that prevent it, even the devs in your shop would prefer to switch to a usable daily-driver OS to get things done.

The desktop marketshare stats back me up on the earlier point and last I checked, no distro got anywhere close?

Sure, Android is the exception (if we agree to consider) but until we get serious dev going there and until Android morphs into a full-fledged desktop OS, my point stands.

etra0 1 days ago [-]
Well, don't forget there's a fully fledged console now too, which, by the way, runs games made for Windows on Linux, with better performance.

And yes, that's bought by the 'average person'.

23 hours ago [-]
sho_hn 22 hours ago [-]
> I’d also guess if given a choice and unless there are technical limitations that prevent it from being so, even the devs in your shop would rather prefer to switch to a usable daily driver OS to get things done.

On the contrary, our devs generally clamor for expanded Linux support from company IT.

There's just no other OS that's anywhere near as useful for real software engineering that isn't on a web stack.

MacOS is a quirky almost-Linux where you have to fiddle with Homebrew to get useful tools. On Windows you end up installing three copies of half of the Linux userspace via WSL, Cygwin and Chocolatey to get things done. All the real tools are generally the open source ones that run better on native Linux, with Windows equivalents often proprietary and dead/abandoned.

Let me give you a basic embedded SW example: Proxying a serial connection over a TCP or UDP socket. This is super trivial on Linux with standard tools you get in every distro. You can get similar tools for Windows (virtual COM port drivers, etc.), but they're harder to trust (pre-compiled binaries with no source), often half-abandoned (last release 2011 or something) and unreliable. And the Linux tools are fiddly to build on MacOS because it's just not the standard. This pattern replicates across many different problems. It's simply less headache to run the OS where things just work and are one package manager invocation away.
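The serial-to-socket proxy described above can be sketched with nothing but the Python standard library. This is a simplified, single-client illustration, not any particular tool mentioned in the thread; the function name and port are made up, and any already-open file descriptor stands in for the serial device:

```python
import os
import select
import socket


def proxy_serial_over_tcp(fd, host="127.0.0.1", port=9000):
    """Relay bytes between an already-open serial-style fd and one TCP client.

    On Linux, `fd` would typically come from os.open("/dev/ttyUSB0", os.O_RDWR);
    for testing without hardware, one end of a pty pair works just as well.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()  # block until one client connects
    try:
        while True:
            readable, _, _ = select.select([fd, conn], [], [])
            if fd in readable:  # serial -> TCP
                data = os.read(fd, 4096)
                if not data:
                    break
                conn.sendall(data)
            if conn in readable:  # TCP -> serial
                data = conn.recv(4096)
                if not data:
                    break
                os.write(fd, data)
    finally:
        conn.close()
        srv.close()
```

The stock-tool version is roughly a one-liner, something like `socat TCP-LISTEN:9000,reuseaddr /dev/ttyUSB0,raw` (device path and port illustrative), which is the "standard tools in every distro" point being made.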

There's simply significant swaths of software development where Linux and Linux-friendly Open Source tools/projects have hands-down won, are the ubiquitous and well-maintained option, and on the other systems to have to jump through hoops and take extra steps to set up a pseudo-Linux to get things done.

Honestly, there's also the fact that MacOS and Windows users are just as used to their systems as Linux users are to theirs, and are equally blind to all the bugs, hoops and steps they have to take. If you're a regular, happy Linux user and attempt to switch (and I have done this just recently, actually, porting a library and GUI app to control/test/debug servo motors to Windows), the amount of headache to endure on the other operating systems just to get set up with a productive environment is staggering, not to mention the amount of crap you have to click away. Granted, MacOS is a fair bit less annoying than Windows in the latter regard, though.

I'll happily claim that Linux today is the professional option for professional developers, anyhow. And you web folks would likely be surprised how much of the code of the browser engines your ecosystem relies on was written and continues to be written on Linux desktops (I was there :-), and ditto for a lot of the backend stuff you're building your apps on, and a fair amount of the high-end VFX/graphics and audio SW used to make the movies you're watching, and so on and so forth.

Are there more web devs churning out CRUD apps and their mobile wrappers on MacOS in the absolute? For sure, by orders of magnitude. But the real stuff happens on Linux, and my advice to young devs who want to get good and do stuff that matters (as someone who hires them) is to get familiar with that environment.

graemep 24 hours ago [-]
> Just highlights the piss-poor state of Linux desktop even after all these years.

What exactly is wrong with it? I prefer KDE to either Windows or MacOS. Obviously a Linux desktop is not going to be identical to whatever you use so there is a learning curve, but the same is true, and to a much greater extent, for moving from Windows to MacOS.

> layman to sanely develop programs for Linux systems

> or the average person

The "layman" or "average person" does not develop software.

The average person has plenty of problems dealing with Windows. They are just used to putting up with being unable to get things to work. Ran into that (a multi-function printer/scanner not working fully) with someone just yesterday.

If you find it hard to adjust to a Linux desktop you should not be developing software (at any rate not developing software that matters to anyone).

I have switched a lot of people to Linux (my late dad, my ex-wife, my daughter's primary school principal) who preferred it to Windows and my kids grew up using it. No problems.

sangeeth96 23 hours ago [-]
> What exactly is wrong with it? I prefer KDE to either Windows or MacOS.

KDE is my choice as well (Xfce #2) if I have to be stuck with a Linux distro for a long period, but I'd rather not put myself in that position because it's still going to be a nightmare. My most recent install from this year of Kubuntu/KDE Fedora had strange bugs where applications froze and quitting them was more painful than on macOS/Windows, or where software updates through their app store thingy ended up in some weird state that wouldn't reset no matter how many times I rebooted, hard crashes, and so on, on a relatively modern PC (5900X, RTX 3080, 32G RAM). I had to figure out the commands to force-reset/clean up things surrounding the package management in order to continue to install/update packages. This is the kind of thing I never face with Apple silicon Macs or even Windows 10/11.

This is a dealbreaker for the vast majority of people but let's come to your more interesting take:

> If you find it hard to adjust to a Linux desktop you should not be developing software

And that sums up the vast majority of Linux users who still think every other year is the year of the "Linux desktop". It's that deeply ignorant attitude, instead of acknowledging all these years of clusterfuck after clusterfuck of GUIs, desktop environments, underlying tech changes (Xorg, Wayland) and myriad confusing package distribution choices (debs, rpms, snaps, flatpaks, AppImages and so on), that ensures no sane person is ever going to embrace a Linux distro as their daily driver.

You need a reality reset if you think getting used to Linux is a qualifier to making great software.

graemep 18 hours ago [-]
> KDE is my choice as well (Xfce #2) if I have to be stuck with a Linux distro for a long period but I'd rather not put myself in that position because it's still going to be a nightmare. My most recent install from this year of Kubuntu/KDE Fedora had strange bugs where applications froze and quitting them were more painful than macOS/Windows, or that software updates through their app store thingy end up in some weird state that won't reset no matter how many times I reboot, hard crashes and so on on a relatively modern PC (5900X, RTX 3080, 32G RAM).

A matter of your experience. It's not something that happens to me or anyone I know personally. Even using a less newbie-friendly distro (I use Manjaro) it's very rare.

I have not tried Fedora for many years, but the last time I did it was not a particularly easy distro to use. It is also a testbed for RHEL and CentOS, so it should be expected to be a bit unstable.

> It's that deeply ignorant attitude instead of acknowledging all these years of clusterfuck after clusterfuck of GUIs, desktop envs, underlying tech changes (Xorg, Wayland) and myriads of confusing package distribution choices (debs, rpms, snaps, flatpaks, appimages and so on)

Most of which is hidden from the user behind appstores. The only thing non-geek users need to know is which DE they prefer (or they can let someone else pick it for them, or use the distro default).

Even a user who wants to tinker only needs to know one of the distribution formats, one desktop environment. You are free to learn about more, but there is absolutely no need to. You also need to learn these if you use WSL or some other container.

> You need a reality reset if you think getting used to Linux is a qualifier to making great software.

What I said is that the inability to cope with the tiny learning curve of adjusting to a different desktop look and feel is a disqualifier for being a developer.

Every non-technical user who switches from Windows to macOS does it, so it's very odd that it would be a barrier for a developer.

regularfry 23 hours ago [-]
If you're just kicking the tyres on Fedora or Ubuntu, you're not getting KDE. I love it myself, but I know it's there. The average curious person is going to get whatever Gnome thinks they deserve at that point in time.
graemep 18 hours ago [-]
Gnome being the default does probably harm Linux desktop adoption.

On the other hand do people care that much about DEs? Most people just want to start their web browser or whatever.

afavour 19 hours ago [-]
> If you find it hard to adjust to a Linux desktop you should not be developing software

For most it’s not a case of whether you can do it, it’s whether it’s worth doing it. For me Linux lacks the killer feature that makes any of that adjustment worth my (frankly, valuable) time. That’s doubly so for any of us that develop user facing software: our users aren’t going to be on Linux so we need to have a more mainstream OS to hand for testing anyway.

ndriscoll 18 hours ago [-]
If you're developing server software (presumably you are if using containers), it's going to run on Linux, so desktop Linux is by far the sanest choice with the least moving parts.
graemep 18 hours ago [-]
Certainly, but then that is also a valid objection (and one I have heard) for switching from Windows to MacOS.

The objection is really "I do not want to use anything different", which is fine. After many years of using Linux I feel the same about using Windows or macOS.

> For me Linux lacks the killer feature that makes any of that adjustment worth my (frankly, valuable) time

It lacks all the irritants in Windows 11 every Windows user seems to complain of?

> That’s doubly so for any of us that develop user facing software: our users aren’t going to be on Linux so we need to have a more mainstream OS to hand for testing anyway.

So for desktop software that is not cross-platform, yes. If you are developing Windows software you need Windows.

If you are developing server software it will probably be deployed to Linux, if you are developing web apps the platform is the browser and the OS is irrelevant, and if you are developing cross platform desktop apps then you need to test on all of them so you need all.

Cthulhu_ 24 hours ago [-]
Linux has not won on the desktop and probably never will, granted. But Linux has won for running server-side/headless software, and has done so for years.

That said, counterpoint to my own, Android is Linux and has billions of installations, and SteamOS is Linux. I think the next logical step for SteamOS is desktop PCs, since (anecdotally) gaming PCs only really play games and use a browser or web-tech-based software like Discord. If that does happen, it'll be a huge boost to Linux on the consumer desktop.

bigstrat2003 1 days ago [-]
> not once have I stopped and thought "huh, this is actually pretty usable and stable".

I think we need to have a specific audience in mind when saying whether or not it's stable. My Arch desktop (user: me) is actually really stable, despite the reputation. I have something that goes sideways maybe once a year or so, and it's a fairly easy fix for me when that does happen. But despite that, I would never give my non-techy parents an Arch desktop. Different users can have different ideas of stable.

cromka 1 days ago [-]
My problem with Arch 12 years ago was exactly that things would just randomly stop working, and I often wouldn't know until I needed them. Where I drew the line was when I needed to open a USB pendrive and it wouldn't mount; if I remember correctly it was something related to udisks at the time and a race condition. I spent like 30 minutes looking into it and it was just embarrassing, as I had someone over my shoulder waiting for those files.

This is when I gave up and switched to Apple. I am now moving back to Linux, but Arch still seems too hacky and too loosely organized to be considered trustworthy. So, Ubuntu or Debian it is, but I haven't fully decided yet.

Still, I would be happy to be convinced otherwise. I’m particularly surprised Steam uses it for their OS.

serbuvlad 1 days ago [-]
I have been using arch for about a year now.

I've crapped my system on install, or when trying to reconfigure core features.

Updates? 0 issues. Like genuinely, none.

I've used Ubuntu and Mint before, and Arch "just works" more than either of them in my experience.

spooneybarger 23 hours ago [-]
I had awful experiences with arch over a decade ago. I started using it again last year and it's been completely solid and the least problematic Linux distribution that I've used in ages.
vbezhenar 24 hours ago [-]
I'm not going to jump on you, but for me Linux is much more friendly than Windows or macOS. I tried to use macOS, just because their Apple Silicon computers are so powerful, but in the end I abandoned it and switched back to a ThinkPad with Linux. Windows is outright unusable and macOS is barely usable for me, while Linux just works.
qudat 21 hours ago [-]
FOSS OS dev is slow but is built on cross collaboration so the foundation is strong. Corporate OS has the means to tune to end user usage and can move very fast when business interests align with user experience.

When you are a DE that’s embedded in FOSS no one has an appetite to fund user experience the same way as corporate OS can.

We do have examples where this can work, like the Steam Deck/SteamOS, but it's almost counter to market incentives because of how slow dev can become.

I see the same problem with chat and protocol adoption. IRC as a protocol is too slow for companies who want to move fast and provide excellent UX, so they ditch cross collaboration in order to move fast.

ladyanita22 24 hours ago [-]
The moment I read "Needing two of the most famous non-Linux operating systems for the layman to sanely develop programs for Linux systems" I knew this comment would be a big pile of opinions not backed by facts.
heavyset_go 1 days ago [-]
In my experience, Linux is great for the type of user who would be well-suited with a Chromebook. Stick a browser, office suite and Zoom on it, and enable automatic updates, and they'll be good to go.
cosmic_cheese 1 days ago [-]
Linux is great for users on the extreme ends of the spectrum, with grandma who only needs email on one end and tiling WM terminal juggler on the other. Where it gets muddy is for everybody in the middle.

That’s not to say it can’t or doesn’t work for some people in the middle, but for this group it’s much more likely that there’s some kind of fly in the soup that’s preventing them from switching.

It’s where I’m at. I keep secondary/tertiary Linux boxes around and stay roughly apprised of the state of the Linux desktop but I don’t think I could ever use it as my “daily driver” unless I wrote my own desktop environment because nothing out there checks all of the right boxes.

heavyset_go 1 days ago [-]
> Linux is great for users on the extreme ends of the spectrum, with grandma who only needs email on one end and tiling WM terminal juggler on the other.

> That’s not to say it can’t or doesn’t work for some people in the middle, but for this group it’s much more likely that there’s some kind of fly in the soup that’s preventing them from switching.

Generally agree with these points with some caveats when it comes to "extremes".

I think for middle to power users, as long as their apps and workflows have a happy path on Linux, their needs are served. That happy path necessarily has to exist either by default or provisioned by employers/OEMs, and excludes anything that requires more than a button push like the terminal.

This is just based on my own experience, I know several people ranging from paralegals working on RHEL without even knowing they're running Linux, to people in VFX who are technically skilled in their niche, but certainly aren't sys admins or tiling window manager users.

Then there are the ~dozen casual gamers with Steam Decks who are served well by KDE on their handhelds, a couple moved over to Linux to play games seemingly without issue.

cosmic_cheese 17 hours ago [-]
Using Linux is definitely easier when there’s just one thing you’re doing primarily, as is often the case in corporate settings. When things start to fall apart for me is with heavier multitasking (more than 2-3 windows) and doing a wide variety of things, as one might with their primary home machine.
Hard_Space 24 hours ago [-]
Well-observed. I come back to check out the state of the Linux desktop every 2-3 years, and I always find that the latest layer/s of instrumentality and GUI are thin as frosting on a cake - as soon as you need anything that's not in the box, you're immediately in Sudo-land.
xd1936 18 hours ago [-]
Fedora/Debian + AMD ThinkPad here. Haven't had any crashes or instability in 5+ years.
sfpotter 1 days ago [-]
Terrible on every front? I'm sorry, but it's hard to take this seriously. I've been daily driving Fedora with Cinnamon for the past 4 years and it works just fine. I use Mac and Windows on a regular basis and both are chock full of AI bloatware and random BS. On the same hardware, Linux absolutely runs circles around Windows 10 and Windows 11. If the application you need to use doesn't run on Linux; well, OK... not much you can do about that. But to promote that grievance to "terrible on every front" is ridiculous.
MantisShrimp90 20 hours ago [-]
Meh, you're making the same mistake most do on this one. You're treating the Linux desktop like it's comparable, even though those two non-Linux operating systems are made by some of the biggest companies ever, with a lot of engineering hours paid to lock people in.

Plus, one could argue they've actually just established dominance through market lockin by ensuring the culture never had a chance and making operating system moves hard for the normal person.

But more importantly if we instead consider the context that this is largely a collection of small utilities made by volunteers vs huge companies with paid engineering teams, one should be amazed at how comparable they are at all.

Lio 1 days ago [-]
I disagree. The only feature I miss on Linux is the ctrl-scroll to zoom feature of macOS.

If Gnome implemented that as well as macOS does I’d happily switch permanently.

esskay 1 days ago [-]
The only feature? Like across the entire OS? Pretty broad. If you were right then adoption would be orders of magnitude higher.
Lio 20 hours ago [-]
It's the only feature I missed. That doesn't mean that you won't be looking for something else. I run almost the same FOSS day to day on both Mac and Linux.

I've worked in jobs that only used Linux as the day to day desktop operating system. I currently work on macOS.

What features do you think are missing?

pjmlp 1 days ago [-]
In the server room, yes, but only in the sense that UNIX has won, and Linux is the cheapest way to acquire UNIX, with the BSDs sadly looking on from their little corner.

However on embedded and desktop, the market belongs to others, like Zephyr, NuttX, Arduino, VxWorks, INTEGRITY,... and naturally the Apple, Google and Microsoft offerings.

Also Linux is an implementation detail on serverless/lambda deployments, only relevant to infrastructure teams.

inopinatus 22 hours ago [-]
BSD has nothing to feel mournful about. Its derivatives are frequently found in the data center, but largely unremarked because it’s under the black box of storage and network appliances.

And it’s in incredible numbers - hundreds of millions of units - of game consoles.

The BSD family isn’t taking a bow in public, that’s all.

pjmlp 19 hours ago [-]
Orbis OS has very little of FreeBSD, if that is what you mean.

And outside Netflix, there aren't many big shots talking about it nowadays.

inopinatus 11 hours ago [-]
It looks exactly like a BSD syscall table. Including one I wrote an implementation of. https://www.psdevwiki.com/ps5/Syscalls
pjmlp 3 hours ago [-]
Usually an OS is a little bit more than a syscall table.
aljgz 1 days ago [-]
Well. It can also be argued that the other two platforms are finding ways to allow using Linux without leaving those platforms, which slows down market share of Linux on desktop as the primary OS.
selcuka 1 days ago [-]
> which slows down market share of Linux on desktop as the primary OS

I think what slows down market share of Linux on desktop is Linux on desktop itself.

I use Linux, and I understand that it's a very hard job to take it to the level of Windows or macOS, but it is what it is.

heavyset_go 1 days ago [-]
It makes Linux the common denominator between all platforms, which could potentially mean that it gets adopted as a base platform API like POSIX is/was.

More software gets developed for that base Linux platform API, which makes releasing Linux-native software easier/practically free, which in turn makes desktop Linux an even more viable daily driver platform because you can run the same apps you use on macOS or Windows.

pjmlp 1 days ago [-]
As someone that was once upon a time a FOSS zealot with M$ on email signature and all, the only reason I care about Linux on the desktop is exactly Docker containers, everything else I use the native platform software.

Eventually I got practical and fed up with ways of Linux Desktop.

gf000 24 hours ago [-]
The thing is... I am forced to use Windows for my current job, and it is so much worse than the Linux desktop has ever been in the last 10-15 years that I'm honestly baffled.

Like, suspend/wake is honestly 100% reliable compared to whatever my Windows 11 laptop does; random freezes; and updates are still a decade behind something like NixOS (I can just start an update and, since the system is immutable, it won't disturb me in any shape or form).

wolvesechoes 23 hours ago [-]
My corporate Windows laptop is awful, but that's because it's corporate. At home I used Linux exclusively from 2019 to 2024. Then I switched to Windows 11 LTSC IoT (yes yes, piracy bad) and I haven't looked back.
pjmlp 22 hours ago [-]
Don't mistake Windows for the corporate compliance junk loaded onto it; that stuff doesn't work properly regardless of the OS.
heavyset_go 1 days ago [-]
> Eventually I got practical and fed up with ways of Linux Desktop.

I was in the same boat and used macOS for a decade since it was practical for my needs.

These days I find it easier to do my work on Linux, ironically cross-platform development & audio. At least in my experience, desktop Linux is stable, works with my commercial apps, and things like collaboration over Zoom/Meet/etc with screen sharing actually work out of the box, so it ticks all of my boxes. This certainly wasn't the case several years ago, where Linux incompatibility and instability could be an issue when it comes to collaboration and just getting work done.

pjmlp 1 days ago [-]
Yet, just last year I ended up getting rid of a mini PC, because I was stupid enough not to validate that its UEFI firmware would talk to Linux.

I spent several months trying to make it work, across a couple of distros and partition layouts, only managing to boot them if placed on external storage.

Until I can walk into a Media Markt kind of store and get a PC, of whatever shape, with something like Ubuntu pre-installed, and every single hardware feature works without a "yes, but", I don't care.

heavyset_go 24 hours ago [-]
I'm not trying to convince you, I'm just sharing my experience.

IMO, just like with macOS, one should buy hardware based on whether their OS supports it. There are plenty of mini PCs with Linux pre-installed or with support if you just Google the model + Linux. There's entire sites like this where you can look up computers and components by model and check whether there is support: https://linux-hardware.org/?view=computers

You can even sort mini PCs on Amazon based on whether they come with Linux: https://www.amazon.com/Mini-Computers-Linux-Desktop/s?keywor...

The kernel already has workarounds for poorly implemented firmware, ACPI, etc. There's only so much that can be done to support bespoke platforms when manufacturers don't put in the work to be compatible, so buy from the ones that do.

> Until I can get into Media Market kind of store and get a PC, of whatever shape, with something like Ubuntu pre-installed, and everything single hardware feature works without "yes but", I am not caring.

You can go to Dell right now and buy laptops pre-installed with Ubuntu instead of Windows: https://www.dell.com/en-us/shop/dell-laptops/scr/laptops/app...

pjmlp 24 hours ago [-]
Yes, I know those as well. My Asus netbook (remember those?) came with Linux pre-installed; the WLAN and GL ES support was never as good as on the Windows side, and once Flash was gone I never got VAAPI to work in more recent distros. It eventually died, 2009-2024.

Notice how quickly this has turned into the usual Linux forums kind of discussion that we have been having for the last 30 years regarding hardware support?

flmontpetit 19 hours ago [-]
Fascinating to me how Windows and Linux have cross-pollinated each other through things like WSL and Proton. Platform convergence might become a thing within our lifetimes.
sbarre 19 hours ago [-]
I made a "long bet" with a friend about a decade ago that by 2030 'Microsoft Windows' would just be a proprietary window manager running on Linux (similar - in broad strokes - to the MacOS model that has Darwin under the hood).

I don't think I'll make my 2030 date at this point but there might be some version of Windows like this at some point.

I also recognize that Windows' need to remain backwards compatible might prevent this, unless there's a Rosetta-style emulation layer to handle all the Win32 APIs etc..

flmontpetit 16 hours ago [-]
I think Microsoft will let Windows slowly die over the years. I am certain that at the strategy level, they have already accepted that their time as a device platform vendor will not last. Windows will be on life support for a while, as MS slowly corrals its massive client base onto its SaaS platforms, before it becomes a relic of the past. Beyond that point, the historical x86 PC-compatible platform lineage will either die with it, or be fully overtaken by Desktop Linux whereupon it will slowly lose ground to non-x86 proprietary platforms over the years.

The average end user will be using some sort of Tivoized device, which will be running a closed-source fork of an open-source kernel, with state-of-the-art trusted computing modules making sure nobody can run any binaries that weren't digitally signed and distributed through an "app store" owned by the device vendor and from which they get something like a 25% cut of all sales.

In other words, everything will be a PlayStation, and Microsoft will be selling their SaaS services to enterprise users through those. That is my prediction.

inopinatus 22 hours ago [-]
That isn’t exactly new, the hypervisor underneath has been in macOS for years, but poorly exploited. It’s gained a few features but what’s really substantial today are the (much) enhanced ergonomics on top.
sho_hn 22 hours ago [-]
I know, but they've invested some effort into e.g. a custom Linux kernel config and vminitd+RPC for this, so the optimizations specific to running containerized Linux apps are new.
jeroenhd 1 days ago [-]
Linux has already won, in the form of Android and to an extent ChromeOS. Many people just don't recognize it as such because most of the system isn't the X11/Wayland desktop stack the "normal" Linux distros use.

Hell, Samsung is delivering Linux to the masses in the form of Wayland + PulseAudio under the brand name "Tizen". Unlike desktop land, Tizen has been all-in on Wayland since 2013 and it's been doing fine.

pjmlp 1 days ago [-]
Google could replace the Linux kernel with something else and no one would notice, other than OEMs and people rooting their devices.

Likewise with ChromeOS.

They are Pyrrhic victories.

As for Tizen, interesting that Samsung hasn't yet completely lost interest on it.

gf000 24 hours ago [-]
Ah yeah, isn't that the definition of something you don't directly depend on? Of course they "could just replace the OS", I can also just write a new web browser and use it to browse the web as it's supposedly a standard.

Except neither will support even a fraction of the originals' capabilities, at much worse performance and millions of incompatibilities at every corner.

pjmlp 22 hours ago [-]
The kernel, not the OS.

The OS is a mix of Java, Kotlin, JavaScript, NDK APIs and the standard ISO C and ISO C++ libraries.

steeleduncan 1 days ago [-]
> Google could replace Linux kernel with something else and no one would notice, other than OEMs and people rooting their devices.

This would be better phrased as "If Google could replace the Linux kernel with something else, no one would notice."

Google have spent a decade trying to replace Linux with something else (Fuchsia), and don't seem to have gotten anywhere.

pjmlp 1 days ago [-]
Don't mistake company politics between the ChromeOS, Android and Fuchsia business units for the technical feasibility of actually doing so.

Also don't forget Fuchsia has mostly been a way to keep valuable engineers at Google, a retention project.

They haven't been trying to replace anything as such, and the Linux kernel on Android even has userspace drivers with a stable ABI for Java and C++, and Rust in the kernel, all features upstream will never get.

Or on Rust's case, Google didn't bother with the drama, they decided to include it, and that was it.

v5v3 23 hours ago [-]
HarmonyOS has its own non-Linux kernel, so Linux now has a major competitor that will be present in a huge number of devices.

https://en.m.wikipedia.org/wiki/HarmonyOS_NEXT

tannhaeuser 22 hours ago [-]
"It" (aka the cloud providers) has won in the foobar POSIX department such that only a full Linux VM can run your idiosyncractic web apps despite or actually because of hundreds of package managers and dependency resolution and late binding mechanisms, yes.
duped 1 days ago [-]
Except for graphics, audio, and GUIs for which no good solutions exist
heavyset_go 1 days ago [-]
I'd consider revisiting this. These days you can do studio level video production, graphics and pro audio on Linux using native commercial software from a bare install on modern distributions.

I do pro audio on Linux, my commercial DAWs, VSTs, etc are all Linux-native these days. I don't have to think about anything sound-wise because Pipewire handles it all automatically. IMO, Linux has arrived when it comes to this niche recently, five years ago I'd have to fuck around with JACK, install/compile a realtime kernel and wouldn't have as many DAWs & VSTs available.

Similarly, I have a friend in video production and VFX whose studio uses Linux everywhere. Blender, DaVinci Resolve, etc make that easy.

There is a lack of options when it comes to pro illustration and raster graphics. The Adobe suite reigns supreme there.

wwweston 17 hours ago [-]
Can you tell me more about the audio work you’re doing (sound design? instrument tracking? mixing? mastering? god help you live sound?) and the distro and applications you use?

I am more amateur/hobbyist than pro, but this is the primary reason I’m on macOS and I wouldn’t mind reasons to try Linux again (Ubuntu Studio ~8 years ago was my last foray).

heavyset_go 14 hours ago [-]
> (sound design? instrument tracking? mixing? mastering? god help you live sound?)

This minus live sound, and I stick exclusively to MIDI controllers.

> and the distro and applications you use?

I'm on EndeavourOS, which is just Arch with a GUI installer + some default niceties.

I came from using Reaper on macOS, which is native on Linux, but was really impressed with Bitwig Studio[1] so I use that for most of everything.

I really like u-he & TAL's commercial offerings, Vital, and I got mileage out of pages like this[2] that list plugins that are Linux compatible. I'm insane so I also sometimes use paid Windows plugins over Yabridge, which works surprisingly well, but my needs have been suited well by what's available for Linux.

There's also some great open source plugins like Surge XT, Dexed & Vaporizer2, and unique plugins like ChowMatrix.

> I wouldn’t mind reasons to try Linux again (Ubuntu Studio ~8 years ago was my last foray).

IMO the state of things is pretty nice now, assuming your hardware and software needs can be met. If you give it a try, I think a rolling release would be best, as you really want the latest Pipewire/Wireplumber support you can get.

[1] https://www.bitwig.com/

[2] https://linuxdaw.org/

wwweston 10 hours ago [-]
Thanks -- great to have the overview!
stebian_dable 1 days ago [-]
Affinity suite has decent Wine community support by the way for raster / vector graphics.
paxys 1 days ago [-]
Is it winning if you are the only one playing the game?

Brag about this to an average Windows or Mac user and they will go "huh?" and "what is Linux?"

sho_hn 1 days ago [-]
> Is it winning if you are the only one playing the game?

Depending on what you mean with "the game", I'd say even more so.

MS/Apple used to vilify or ridicule Linux; now they need to distribute it to make their own products whole, because it turns out that having an open source general-purpose OS is so convenient and useful that it's been utilized in lots of interesting ways (containers, for example) that simply weren't available for the proprietary OS implementations. I'd say it's a remarkable development.

yjftsjthsd-h 1 days ago [-]
By that logic, this feature and WSL shouldn't exist.
paxys 1 days ago [-]
They exist because Linux server developers would rather use Windows or macOS as their primary desktop OS than Linux. That's not a flex for the Linux desktop. Quite the opposite.
Lio 1 days ago [-]
Equally, they exist because mac and windows users would rather use Linux for their server operating system than anything else and that’s not a flex for Apple or Microsoft either.
heavyset_go 1 days ago [-]
In my experience, it isn't Linux server developers who decide what platform their organizations provision on their employees' devices. That's up to management and IT departments who prefer the simplicity of employees using the same systems they do, and prefer to utilize the competencies in macOS/Windows administration their IT departments have.
Yasuraka 17 hours ago [-]
I am on my third employer in 5 years, and every dev team I came across that had the choice picked Linux.

I personally don't know a dev worth their salt who'd prefer Windows.

baq 1 days ago [-]
Trust me I’d rather use Linux than macOS, that’s after 2.5 years of full time work on a beefy MacBook Pro. The problem is that it isn’t possible to buy a machine as good as the MacBook which runs Linux. Asahi is not ready and won’t be for years, if ever.
whatevermom 1 days ago [-]
Asahi is not that bad, did you try it out? I've been building a Sway configuration from scratch on it for two weeks and it's working pretty well. I did a ton of administrative stuff with it yesterday without much trouble, other than the key bindings being a bit weird coming from macOS.
baq 1 days ago [-]
Last time I checked M3 support is not coming anytime soon. M1 is kinda sorta maybe good enough sometimes? But it wouldn't be my main dev box.
gf000 23 hours ago [-]
The flex is that you could have just used "server developers" and it would have meant the exact same thing.
OJFord 1 days ago [-]
'Linux with macOS.'
bakztfutur3 1 days ago [-]
[flagged]
sitole 1 days ago [-]
Has anyone tried turning on nested virt yet? Since the new container CLI spins each container in its own lightweight Linux VM via Virtualization.framework, I’m wondering whether the framework will pass the virtualization extensions through so we can modprobe kvm inside the guest.

Apple’s docs say nested virtualization is only available on M3-class Macs and newer (VZGenericPlatformConfiguration.isNestedVirtualizationSupported) developer.apple.com, but I don’t see an obvious flag in the container tooling to enable it. Would love to hear if anyone’s managed to get KVM (or even qemu-kvm) running inside one of these VMs.

pmarreck 18 hours ago [-]
So the x64 containers will run fine on Apple Silicon?
dwaite 16 hours ago [-]
On an ARM Linux target, they do support Rosetta 2 translation of Intel binaries under virtualization. I do not know if their containerization supports it.

https://developer.apple.com/documentation/virtualization/run...

Given that they announced a timeline for sunsetting Rosetta 2, it may be low priority.

pmarreck 9 hours ago [-]
x64 is not going away anytime soon, so that’s unfortunate
roberttod 1 days ago [-]
I need to look into this a little more, but can anyone tell me if this could be used to bundle a Linux container into a MacOS app? I can think of a couple of places that might be useful, for example giving a GPT access to a Linux environment without it having access to run root CLI commands.
paxys 1 days ago [-]
Yes, as long as you are okay with your app only working on macOS 26. Otherwise you can already achieve what you want using Virtualization.framework directly, though it'll be a little more work.
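The Virtualization.framework route is roughly this much work. A minimal sketch of booting a bundled Linux guest from an app; the kernel/initrd/disk paths are placeholders you'd ship inside the app bundle, and the app needs the com.apple.security.virtualization entitlement:

```swift
import Virtualization

// Placeholder paths: you supply your own kernel, initrd and root disk image.
let kernelURL = URL(fileURLWithPath: "/path/to/vmlinux")
let initrdURL = URL(fileURLWithPath: "/path/to/initrd")
let diskURL   = URL(fileURLWithPath: "/path/to/rootfs.img")

// Boot a Linux kernel directly, no bootloader image needed.
let bootLoader = VZLinuxBootLoader(kernelURL: kernelURL)
bootLoader.initialRamdiskURL = initrdURL
bootLoader.commandLine = "console=hvc0 root=/dev/vda"

let config = VZVirtualMachineConfiguration()
config.bootLoader = bootLoader
config.cpuCount = 2
config.memorySize = 1 * 1024 * 1024 * 1024  // 1 GiB

// Attach the root filesystem as a virtio block device.
let attachment = try VZDiskImageStorageDeviceAttachment(
    url: diskURL, readOnly: false)
config.storageDevices = [VZVirtioBlockDeviceConfiguration(attachment: attachment)]

try config.validate()
let vm = VZVirtualMachine(configuration: config)
vm.start { result in
    // The guest is now a sandboxed Linux environment the app controls,
    // e.g. for running tools without giving anything root on the host.
    print(result)
}
```

The container-shaped plumbing on top (image pulls, rootfs assembly, an agent like vminitd inside the guest) is the part the new packages give you for free.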
OJFord 1 days ago [-]
Yes, that's exactly what it's for.
paxys 1 days ago [-]
Thinking about this a bit more, one immediate issue I see with adoption is that the idea of launching each container in its own VM to fully isolate it and give it its own IP, while neat, doesn't really translate to Linux or Windows. This means that if you have a team of developers and a single one of them doesn't have a Mac, your local dev model is already broken. So I can't see a way to easily replace Docker/Compose with this.
dontdoxxme 1 days ago [-]
It translates exactly to Kubernetes, though, just without the concept of pods. I don't see anything in this that would stop them adding pods on top later, which would allow Kubernetes- or Compose-like setups (multiple containers in the same pod).
qalmakka 23 hours ago [-]
that's nice and all, but where are the native Darwin containers? Why is it OK for Apple to keep squeezing people running macOS CI jobs into buying stupid Mac Minis to put in racks just to avoid a mess? Just pull in FreeBSD jails!
egorfine 23 hours ago [-]
This is my pain point.

I would really like to have a macOS (not just Darwin) container, but it seems that it is not possible. I don't remember the specifics, but there was a discussion here on HN a couple of months ago and someone with intimate Darwin knowledge explained why.

SamuelAdams 1 days ago [-]
I wonder if this will dramatically improve gaming on a Mac? Valve has been making games more reliable due to Steam Deck, and gaming on Linux is getting better every year.

Could games be run inside a virtual Linux environment, rather than Apple’s Metal or similar tool?

This would also help game developers - now they only need to build for Windows, Linux, and consoles.

pxc 1 days ago [-]
Apple's Virtualization Framework doesn't support 3D acceleration for non-macOS guests.
throwaway127482 1 days ago [-]
Isn't the Linux gaming stuff really an emulator for Windows games? So it'd be like Windows emulation inside Linux virtualization inside macOS?
lxgr 1 days ago [-]
As far as I understand, it's a modified/extended version of Wine, which, as the name suggests, is not an emulator (but rather a userspace reimplementation of the Windows API, including a layer that translates DirectX to OpenGL/Vulkan).

The reverse, i.e. running Linux binaries on Windows or macOS, is not easily possible without virtualization, since Linux uses direct syscalls instead of always going through a dynamically linked static library that can take care of compatibility in the way that Wine does. (At the very least, it requires kernel support, like WSL1; Wine is all userspace.)
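To make the syscall point concrete: a Linux binary can invoke the kernel directly by syscall number, with no dynamically linked library in between, so there is no userspace seam for a Wine-style shim to intercept. A minimal sketch (Python's ctypes standing in for a compiled binary; the syscall numbers are assumptions for x86-64 and aarch64 Linux):

```python
import ctypes, os, platform

# Syscall numbers are per-architecture; these are assumptions for
# x86-64 (1) and aarch64 (64) Linux.
SYS_write = 64 if platform.machine() == "aarch64" else 1

libc = ctypes.CDLL(None, use_errno=True)
r, w = os.pipe()
msg = b"direct syscall"

# Invoke the kernel by number, bypassing the write() libc wrapper entirely.
# This is the interface a static Linux binary uses, and it's why a userspace
# compatibility layer on a foreign kernel has nothing to hook.
n = libc.syscall(SYS_write, w, msg, len(msg))
os.close(w)
received = os.read(r, 64)
print(received)  # b'direct syscall'
```

A Windows binary, by contrast, reaches the kernel through ntdll.dll, which is exactly the kind of layer Wine can replace.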

izacus 1 days ago [-]
No, and with the sunset of Rosetta, they'll kill off many of the few games that run on macOS.
golf1052 1 days ago [-]
According to reporting Rosetta will still be supported for old games that rely on Intel code

> But after that, Rosetta will be pared back and will only be available to a limited subset of apps—specifically, older games that rely on Intel-specific libraries but are no longer being actively maintained by their developers. Devs who want their apps to continue running on macOS after that will need to transition to either Apple Silicon-native apps or universal apps that run on either architecture.

https://arstechnica.com/gadgets/2025/06/apple-details-the-en...

izacus 19 hours ago [-]
Oh, that's good news.
wmf 1 days ago [-]
Windows games already run on macOS via WINE. Using a VM would just add overhead not reduce it.
october8140 1 days ago [-]
I imagine running in a VM would hurt performance a lot.
lxgr 1 days ago [-]
Not necessarily. For example, the Xbox 360 runs every game in a hypervisor, so technically, everything is running in a VM.

It's all a question of using the right/performant hardware interfaces, e.g. IOMMU-based direct hardware access rather than going through software emulation for performance-critical devices.

solomatov 1 days ago [-]
Does anyone know whether they have optimized memory management, i.e. the VM not consuming more RAM than required?
dontdoxxme 1 days ago [-]
nasretdinov 16 hours ago [-]
From that document I read that it does, but it doesn't release memory if the app starts consuming less. It does support memory ballooning, though, so the VM only consumes as much RAM as the maximum amount the app has requested.
jbverschoor 1 days ago [-]
In my opinion this is a step towards the Apple cloud hosting.

They have Xcode cloud.

The $4B contract with Amazon ends, and it’s highly profitable.

Build a container, deploy on Apple, perhaps with access to their CPUs

paxys 1 days ago [-]
It's quite a stretch to go from Apple launching container support for macOS to "they are going to compete with AWS". Especially considering Apple's own server workloads and data storage are mostly on GCP.
slroger 2 hours ago [-]
Yeah, that would be great. I don't understand why they don't explore this option.
n2d4 1 days ago [-]
It's still virtualization, so it'll necessarily be (slightly) slower than just running Linux natively. I don't think Apple's hardware makes up for that, certainly not at the price point at which they sell it.
jbverschoor 14 hours ago [-]
Compared to EC2? You've got to be kidding me.
newman314 1 days ago [-]
I wonder how this will affect apps like Orbstack
SparkyMcUnicorn 1 days ago [-]
My guess is that Orbstack might switch to using this, and it'll just be a more competitive space with better open source options popping up.

People still want the nice UI/UX, and this is just a Swift package.

jbverschoor 1 days ago [-]
Orbstack also does kubernetes etc
9dev 1 days ago [-]
Huh. I suppose it’s a good thing I never came around to migrating our team from docker desktop to Orbstack, even though it seems like they pioneered a lot of the Apple implementation perks…
xp84 1 days ago [-]
I still haven't heard why anyone would prefer the new Apple-proprietary thing vs Orbstack. I would not hold my breath on it being better.
zshrc 1 days ago [-]
Wild because "Apple proprietary" is on GitHub and Orbstack is closed source but go off I guess.
xp84 1 days ago [-]
You got me. It was super inaccurate to use "proprietary" here (though if i understand correctly, podman is another option that is FOSS).

License aside, though, I would still bet that relying on the Apple-specific version of something like this will cause headaches for teams unless you're operating in an environment that's all-in on Apple. Like, your CI tooling in the cloud runs on a Mac, that degree of vendor loyalty. I've never seen any shop like that.

Plus when this tooling does have interoperability bugs, I do not trust Apple to prioritize or even notice the needs of people like me, and they're the ones in charge of the releases.

9dev 1 days ago [-]
If Apple is committed to containers on MacOS, it makes sense to use their implementation over a third party. They know their own platform more intimately, can push for required kernel changes internally if necessary, and will provide this feature free of charge since it's in their own interest to do so—as apparent from the fact the source is published on GitHub, under Apache.

As opposed to that, there's OrbStack, a venture-backed closed source application thriving off of user licenses, developed by a small team. As empathetic as I am with them, I know where I bet my money on in this race.

rollcat 23 hours ago [-]
> As opposed to that, there's OrbStack, a venture-backed closed source application thriving off of user licenses, developed by a small team. As empathetic as I am with them, I know where I bet my money on in this race.

Orbstack started out as one kid with a passion for reducing the suffering of the masses, and from day 1 he was relentless about making the experience as smooth as possible, even for the weirdos like me (e.g. I have a very elaborate ssh config). He was very careful and thoughtful about choosing a monetisation model that wouldn't hinder people exactly like him - passionate hackers on a shoestring budget.

Yeah, it's now venture-backed. I'm not concerned, as long as Danny is in charge.

jpgvm 1 days ago [-]
It's the other way around, the Apple code is FOSS, Apache 2 to be specific.

Presumably it's not as good right now but where it ends up depends entirely on Apple's motivation. When they are determined they can build very good things.

st3fan 1 days ago [-]
Hear, hear... I prefer these new built-in tools. Who cares if it's "proprietary" open source? It works with standard OCI containers. Goodbye, Docker.app
bdcravens 1 days ago [-]
They could replace their underlying implementations with this, and for most users, they wouldn't notice the difference, other than any performance gains.
cedws 1 days ago [-]
Forget Linux containers on Mac, as far as I’m concerned that’s already a solved problem. What about Mac containers? We still don’t have a way to run a macOS app with its own process namespace/filesystem in 2025. And with all this AI stuff, there’s a need to minimise blast radius of a rogue app more than ever.
hadlock 1 days ago [-]
Is there any demand for Mac binaries in production? I can't think of a single major cloud provider that offers Mac-hardware-based k8s, nor why you'd want to pay the premium over commodity hardware. Linux seems to be the lingua franca of containerized software distribution. Even Windows support for containers is sketchy at best.
vineyardmike 1 days ago [-]
> I can't think of a single major cloud provider that offers Mac hardware based k8s nor why you'd want to pay the premium over commodity hardware

If you're a dev team that creates Mac/iOS/iPad/etc apps, you might want Mac hardware in your CI/CD stack. Cloud providers do offer virtual Macs for this purpose.

If you're a really big company (eg. a top-10 app, eg. Google) you might have many teams that push lots of apps or app updates. You might have a CI/CD workflow that needs to scale to a cluster of Macs.

Also, I'm pretty sure apple at least partially uses Apple hardware in the serving flow (eg. "Private Cloud Compute") and would have an interest in making this work.

Oh, and it'd be nice to be able to better sand-box untrusted software running on my personal dev machine.

alwillis 1 days ago [-]
> uses Apple hardware in the serving flow (eg. "Private Cloud Compute")

Private Cloud Compute is different hardware: https://security.apple.com/blog/private-cloud-compute/

vineyardmike 1 days ago [-]
> The root of trust for Private Cloud Compute is our compute node: custom-built server hardware that brings the power and security of Apple silicon to the data center, with the same hardware security technologies used in iPhone, including the Secure Enclave and Secure Boot. We paired this hardware with a new operating system: a hardened subset of the foundations of iOS and macOS

I would call this "Apple hardware" even if it's not the same thing you can buy at an Apple Store.

jurip 1 days ago [-]
I don't think the parent was asking for server side macOS containerization, but desktop. It'd be nice to put something like Cursor in a sandbox where it really couldn't rm -rf your home directory. I'd love to do the same thing with every app that comes with an installer.
duped 18 hours ago [-]
You already can with `sandbox-exec`. And the entire entitlements design is there to force apps to have granular permissions.
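For anyone who hasn't seen it: `sandbox-exec` takes a profile written in Apple's Scheme-like sandbox profile language (SBPL). A rough sketch of a deny-by-default, read-only profile (the exact operation names below are best-effort recollections, not checked against a current macOS):

```
(version 1)
(deny default)
(allow process*)
(allow file-read*)
(allow sysctl-read)
```

Run with something like `sandbox-exec -f readonly.sb ls ~`. Apple has marked `sandbox-exec` deprecated for years, though it still ships.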
hadlock 1 days ago [-]
I've had really poor experience doing anything with container deployed consumer apps in Linux. As soon as you even look at going out of the happy path, things immediately start going sideways.
duped 18 hours ago [-]
flatpak and snap are both containerization-adjacent technologies for consumer apps, docker containers are not really intended for that use case.
hamandcheese 17 hours ago [-]
I think at one point (many years ago) I read that imgix.com was using macs for their image processing CDN nodes.

In my experience, the only use case for cloud macs is CI/CD (and boy does it suck to use macOS in the cloud).

tgma 1 days ago [-]
Mm... AppStore and Gatekeeper?
outcoldman 1 days ago [-]
Not sure what exactly is happening, but feels very slow. Builds are taking way longer. Tried to run builder with -c and -m to add more CPU and memory.
tibbar 1 days ago [-]
What setup are you comparing this to? In the past silicon Macs plus, say, Rancher Desktop have been happy to pretend to build an x86 image for me, but those images have generally not actually worked for me on actual x86 hardware.
outcoldman 1 days ago [-]
Comparing to Docker for Mac. Running on MBA M2. Building a 5GB image (packaging enterprise software).

Docker for Mac builds it in 4 minutes.

container tool... 17 minutes. Maybe even more. And I did set the cpu and memory for the builder as well to higher number than defaults (similar what Docker for Mac is set for). And in reality it is not the build stage, but "=> exporting to oci image format" that takes forever.

Running containers - have not seen any issues yet.

rfoo 1 days ago [-]
This does not support memory ballooning yet. But they have documented custom kernel support, so, goodbye OrbStack.
worldsavior 1 days ago [-]
Orbstack is docker. People might still prefer docker.
arianvanp 11 hours ago [-]
Them synthesizing an EXT4 filesystem from tarball layers instead of using something like EROFS is extremely odd. Really strange design.
dang 1 days ago [-]
Related ongoing threads:

Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)

Apple announces Foundation Models and Containerization frameworks, etc - https://news.ycombinator.com/item?id=44226978 - June 2025 (345 comments)

(Normally we'd merge them but it seems there are significant if subtle differences)

filleokus 1 days ago [-]
Looks cool! In the short demo [0] they mention "within a few hundred milliseconds" as VM boot time (I assume?). I wonder how much tweaking they had to do, because this is using the Virtualization.framework, which has been around a while and is used by Docker Desktop / Colima / UTM (as an option).

I wonder what the memory overhead is, especially if running multiple containers - as that would spin up multiple VM's.

[0]: https://developer.apple.com/videos/play/wwdc2025/346 10:10 and forwards

open592 1 days ago [-]
They include the kernel config here[0]

> Containers achieve sub-second start times using an optimized Linux kernel configuration[0] and a minimal root filesystem with a lightweight init system.

[0]: https://github.com/apple/containerization/blob/main/kernel/c...

miovoid 1 days ago [-]
I hope it will support nested virtualization.
mustache_kimono 1 days ago [-]
This is great. Also about time, etc.

But is it also finally time to fix dtrace on MacOS[0]?

[0]: https://developer.apple.com/forums/thread/735939?answerId=76...

mattclarkdotnet 1 days ago [-]
Spoiler alert: it’s not containers.

It’s some nice tooling wrapped around lightweight VMs, so basically WSL2

amazingman 1 days ago [-]
Are the lightweight VMs running containers?
cromka 1 days ago [-]
WSL1, rather.
sampton 1 days ago [-]
Apple please expose GPU cores to the VMs.
emmelaich 1 days ago [-]
I've used pytorch successfully in a MacOS VM on MacOS using https://tart.run/ so I'd expect it to work here too.
emmelaich 8 hours ago [-]
update: torch for Linux on ARM isn't built with Apple's MPS support so it didn't work with the pip install version. Perhaps it's possible to compile from scratch to have it.
lisperforlife 1 days ago [-]
You can use libkrun to pretty much do the same thing.
joshdavham 1 days ago [-]
Will this likely have any implications for tools like ‘act’ for running local GitHub actions? I’ve had some trouble running act on apple silicon in the past.
OJFord 1 days ago [-]
In theory could make it more seamless, so installation instructions didn't include 'you must have a functioning docker engine' etc. - but in practice I assume it's a platform-agnostic non-Swift tool that isn't interested in a macOS-specific framework to make it smoother on just one platform.
xmorse 16 hours ago [-]
Is this basically the same thing as Orbstack?
fralix 19 hours ago [-]
And when will we get a native macOS OCI container engine?!
peterpost2 1 days ago [-]
Terrible name. Look like a neat product though!
bravesoul2 1 days ago [-]
Tailored Swift would be better
badc0ffee 17 hours ago [-]
TAYNE (short for conTAYNEr): https://www.youtube.com/watch?v=a8K6QUPmv8Q
sirjaz 14 hours ago [-]
This is just WSL2 from Microsoft, albeit with an Apple spin.
pmarreck 18 hours ago [-]
Prefer the Nix approach unless a container approach is absolutely necessary.
m3kw9 1 days ago [-]
I’m already running docker on macOS what’s the difference?
omeid2 1 days ago [-]
This is really bad news for Linux on Desktop.

Many developers I know don't use macOS mainly because they depend on containers and virtualization is slow, but if Apple can pull off efficient virtualization and good system integration (port mapping, volumes), then it will eat away at a large share of Linux systems.

sneak 1 days ago [-]
Surprising to me that this uses swift CLI tools (free software) and make, not Xcode.
w10-1 1 days ago [-]
Containers are mainly for CI+testing and for Linux workflows, so xcodebuild is not really an option.
detourdog 1 days ago [-]
Xcode also has command line tools that can do the same.
sneak 1 days ago [-]
Obtaining and using Xcode requires submitting to an additional license contract from Apple. Swift and Make do not.
detourdog 1 days ago [-]
Are you sure about that? I mean, accepting license agreements is pretty standard and doesn't bother me.

This guide seems to have no specific license agreement.

https://www.freecodecamp.org/news/install-xcode-command-line...

sneak 1 days ago [-]
Accepting license agreements isn’t standard because EULAs aren’t standard. Each one is a contract and each one is unique.

Just because you click through them all without reading doesn’t mean they are all equivalent. Xcode has an EULA. Swift and Make do not, being free software.

They are not the same.

detourdog 23 hours ago [-]
I think anyone using Swift CLI tools would be bound to the same EULA as the Xcode CLI tools. I worked with a German developer who did all his Xcode work in Emacs with GCC in the Objective-C days. Some people like to stand on ideals and complain; others like to get work done.
sneak 21 hours ago [-]
No, Swift is free software (Apache licensed). Open source is incompatible with EULAs. An EULA makes the software nonfree.
detourdog 18 hours ago [-]
GCC is free and can compile Objective-C.
jamie0 1 days ago [-]
Disappointing there's still no namespacing in Darwin for macOS containers. Would be great to run xcodebuild in a container.
ANGXL123 1 days ago [-]
[dead]
9d 1 days ago [-]
[flagged]
alexjplant 1 days ago [-]
> Let's run linux inside a container inside docker inside macos inside an ec2 macos instance inside a aws internal linux host inside a windows pc inside the dreaming mind of a child.

Not even the first non-hyperbolic part of what you wrote is correct. "Container" most often refers to OS-level virtualization on Linux hosts using a combination of cgroups, namespaces, SDN, and some mount magic (among other things). macOS is BSD-based and therefore doesn't support the first two things in that list. Apple can either write a compatibility shim that emulates this functionality or virtualize the Linux kernel to support it. They chose the latter. There is no Docker involved.

This is a completely sane and smart thing for them to do. Given the choice I'd still much rather run Linux but this brings macOS a step closer to parity with such.

9d 1 days ago [-]
To be honest, I don't know what Docker or any of these things are. I just wanted to sound smart so I could fit in and people would like me.
rvz 1 days ago [-]
Requires an Apple Silicon Mac to run.

> You need an Apple silicon Mac to build and run Containerization.

> To build the Containerization package, your system needs either:

> macOS 15 or newer and Xcode 26 Beta

> macOS 26 Beta 1 or newer

Those on Intel Macs, this is your last chance to switch to Apple Silicon, (Sequoia was the second last)[0] as macOS Tahoe is the last version to support Intel Macs.

[0] https://news.ycombinator.com/item?id=41560423

haiku2077 1 days ago [-]
Also, there are some really amazing deals on used/refurb M2 Macs out there. ~$700 for a MacBook Air is a pretty great value, if you can live with 16GB of RAM and an okay but not amazing screen.
paxys 1 days ago [-]
$450 for a M4 Mac mini (at Microcenter, but Best Buy will price match) is possibly the best computer hardware deal out there. It is an incredible machine.
xp84 1 days ago [-]
Having run a Mac mini with a 256GB internal drive for 2-3 years, I will dispute that anyone should buy base models, for this reason: macOS makes it as painful as possible for you to use external drives. For instance, no software for "cloud" drives (google drive, onedrive, icloud drive) is allowed to locate its local copy on an "External" drive, so you can't keep your files locally and in the cloud; you have to pick one. Photos, at least, can have its library moved.

I like the hardware; I hate the absurdly greedy storage and RAM prices.

samtheprogram 1 days ago [-]
> For instance, no software for "cloud" drives (google drive, onedrive, icloud drive) is allowed to locate its local copy on an "External" drive

Source? Is this self-imposed, or what does “allowed” mean?

Even if true, technical people can work around this by either spoofing a non-external drive or using `ln`, no?

haiku2077 1 days ago [-]
> Even if true, technical people can work around this by either spoofing a non-external drive or using `ln`, no?

IIRC Google Drive for Desktop won't sync the target of a symbolic link. It will sync the target of a hard link, but hard links can only target the same filesystem that the link is on, so you can't target an external drive on macOS AFAIK.

I can't speak for the other software you mentioned.
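The same-filesystem constraint on hard links is easy to demonstrate. A small sketch (Python in place of `ln`, but it goes through the same underlying `link(2)`/`symlink(2)` calls):

```python
import os, tempfile

d = tempfile.mkdtemp()
src = os.path.join(d, "original.bin")
with open(src, "wb") as f:
    f.write(b"synced data")

# A hard link shares the target's inode, so it must live on the same
# filesystem; linking across devices fails with EXDEV ("cross-device link").
hard = os.path.join(d, "hardlink.bin")
os.link(src, hard)
same_inode = os.stat(src).st_ino == os.stat(hard).st_ino

# A symlink is just a stored path and may point at another volume, but a
# sync client that resolves it still ends up reading the external drive.
sym = os.path.join(d, "symlink.bin")
os.symlink(src, sym)

print(same_inode)  # True
```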

GeekyBear 1 days ago [-]
The M4 Mini ships with 16 Gigs of RAM minimum and accepts third party SSD replacements.
xp84 1 days ago [-]
Not SSDs. Weird little proprietary NAND modules that someone reverse-engineered and that Apple will hopefully not issue a software update to brick. The controller part of the SSD is in the CPU. For “reasons” I guess
GeekyBear 15 hours ago [-]
> The controller part of the SSD is in the CPU. For “reasons” I guess

Probably because Apple spent half a billion dollars for the patent portfolio of a company building enterprise SSD controllers a decade ago. People seem to like data storage integrity.

> Anobit appears to be applying a lot of signal processing techniques in addition to ECC to address the issue of NAND reliability and data retention. In its patents there are mentions of periodically refreshing cells whose voltages may have drifted, exploiting some of the behaviors of adjacent cells and generally trying to deal with the things that happen to NAND once it's been worn considerably.

Through all of these efforts, Anobit is promising significant improvements in NAND longevity and reliability.

https://www.anandtech.com/show/5258/apple-acquires-anobit-br...

paxys 1 days ago [-]
AI FOMO at least made them bump the base RAM to 16GB. 256GB is pitiful but manageable if you don't need to handle large files. And jumping up to $800 just for another 256GB is absolutely not worth it.
xp84 1 days ago [-]
I agree on your second point, which is why unless Apple ever moves away from only having 2010-era storage amounts and absurd prices to size up, from now on I'll be buying used only. Just picked up an M3 MacBook Air with 16GB and 1TB SSD, mint condition, for under a grand.
xp84 1 days ago [-]
Indeed. I just grabbed a mint M3 MBA on ebay for about $950 with a 1TB ssd (which tbh was my main need to upgrade this family member in the first place, as they weren't CPU-bound on the old M1). Wild deals to be had!
socalgal2 1 days ago [-]
a 30% discount for a 3 yr old machine is good? A new one is $999.
DrBenCarson 1 days ago [-]
When the 3yo machine does 100% of what you need without missing a beat and has 8h screen-on battery life, yes, yes, it is
nicoburns 1 days ago [-]
Even better deals on M1s which aren't much slower than M2s
keysdev 1 days ago [-]
Any Linux or BSD with good hardware support for Intel Macs?
bjackman 1 days ago [-]
For the older ones with Broadcom WiFi I was able to get stock Ubuntu working great by following this:

https://askubuntu.com/questions/55868/installing-broadcom-wi...

Not sure about the newer ones.

Gathering this information and putting together a distro to rescue old Macbooks from the e-waste bin would be a worthwhile project. As far as I can tell they're great hardware.

I imagine things get harder once you get into the USB-C era.

monkey_monkey 1 days ago [-]
This site is very useful for getting Linux on more recent Intel Macs - I was able to get Ubuntu running on a 2018 MBA

https://t2linux.org/

pjmlp 1 days ago [-]
That was officially communicated at the state of the union session.
justinzollars 1 days ago [-]
I'm excited to run Systemd on mac!
watersb 1 days ago [-]
:-)

It isn't systemd:

> Containers achieve sub-second start times using an optimized Linux kernel config, minroot filesystem, and a lightweight init system, vminitd

https://github.com/apple/containerization/blob/main/vminitd

trallnag 1 days ago [-]
Wouldn't be surprised if this goes through the same process Windows users did with WSL. Starting out with no systemd, to community-developed systemd-in-a-bottle setups, to proper systemd integration
TacticalCoder 1 days ago [-]
> I'm excited to run Systemd on mac!

OCI containers are conventionally "one container, one process": at the very least, the container's main server process runs as PID 1 (other processes may be spawned at times, but typically the container's main process is PID 1).

Containerization is literally the antithesis of systemd.

So I don't understand your comment.

IshKebab 1 days ago [-]
Getting worried about WSL I see!
SkepticalWhale 1 days ago [-]
Whenever I have to develop on Windows, I clone my repos and run neovim/docker inside WSL, for the improved performance (versus copying/mounting files from the Windows host) and for Linux itself. The dev experience is actually pretty good once you get there.

I'm not sure this is the same thing, though. This feels more like Docker Desktop running on a lightweight VM, like Colima. Am I wrong?

metaltyphoon 1 days ago [-]
This is my same workflow even for C#
tgma 1 days ago [-]
I'm glad this will kill the Docker Desktop clone business on Mac. Friend company got hit by using one of the free ones and got rug pulled by them.
m463 1 days ago [-]
I think this is purely a checkbox feature to compare against WSL. Otherwise Apple just wouldn't get involved (not the engineers, who would do lots of good things, but the management that let this out).
throwaway1482 18 hours ago [-]
If they're going this way, why not just replace the macOS kernel (XNU) with Linux? They'll get so much more.
badc0ffee 17 hours ago [-]
Because the rest of the system uses a bunch of things that have no drop-in Linux equivalent - SIP, Mach ports, firmlinks, etc.
throwaway1482 17 hours ago [-]
Those can be emulated with the likes of SELinux, sockets, and bind mounts. It will take a lot of effort and some adaptation, but it could be done.
bdcravens 1 days ago [-]
Cool, but until someone (Apple or otherwise) implements Docker Compose on top of this, it's unlikely to see much use.
conradludgate 21 hours ago [-]
You only need to expose a docker daemon, which docker compose will use. The daemon is just a unix socket to a process that manages the containers, which is very likely a trivial change on top of the existing container codebase.

For instance, Orbstack implements the docker daemon socket protocol, so despite not being docker, it still allows using docker compose where containers are created inside of Orbstack.
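The "it's just a unix socket" point is worth spelling out. A toy sketch of the idea below (the `/version` endpoint and JSON shape are simplified stand-ins, not the real Docker Engine API):

```python
import http.client, json, os, socket, tempfile, threading
from http.server import BaseHTTPRequestHandler, HTTPServer

SOCK = os.path.join(tempfile.mkdtemp(), "daemon.sock")

# Toy stand-in for a Docker-compatible daemon: plain HTTP over a unix socket.
class UnixHTTPServer(HTTPServer):
    address_family = socket.AF_UNIX
    def server_bind(self):
        # Bind to a filesystem path instead of a host/port pair.
        self.socket.bind(self.server_address)
        self.server_name, self.server_port = "localhost", 0

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"ApiVersion": "1.43"}).encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence request logging
        pass

server = UnixHTTPServer(SOCK, Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: roughly what a CLI does when DOCKER_HOST=unix:///... is set.
class UnixHTTPConnection(http.client.HTTPConnection):
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(SOCK)

conn = UnixHTTPConnection("localhost")
conn.request("GET", "/version")
version = json.loads(conn.getresponse().read())
print(version)  # {'ApiVersion': '1.43'}
```

Any daemon that speaks enough of the Engine API on such a socket can then be targeted with e.g. `DOCKER_HOST=unix:///path/to/daemon.sock docker compose up`.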
