There's a different thread if you want to wax about Liquid Glass etc. [1], but there are some really interesting new improvements here for Apple developers in Xcode 26.
The new Foundation Models framework around generative language model stuff looks very Swift-y and nice for Apple developers. And it's local and on-device. In the Platforms State of the Union they showed some really interesting sample apps using it to generate different itineraries in a travel app.
The other big thing is vibe coding coming natively to Xcode through ChatGPT (and other) model integration. Some things that make this look like a nice quality-of-life improvement for Apple developers are the way it tracks iterative changes with the model so you can roll back easily, and the way it gives the model context on your codebase. It seems to be a big improvement on the previous, very limited GPT integration with Xcode, and the first time Apple developers have a native version of some of the more popular vibe-coding tools.
Their 'drag a napkin sketch into Xcode and get a functional prototype' is pretty wild for someone who grew up writing [myObject retain] in Objective-C.
Are these completely ground-breaking features? I think it's more what Apple has historically done, which is not to be first into a space but to really nail the UX. At least, that's the promise – we'll have to see how these tools perform!
Does that explain why you don't have to worry about token usage? The models run locally?
hbcondo714 1 day ago [-]
> You don’t have to worry about the exact tokens that Foundation Models operates with, the API nicely abstracts that away for you [1]
I have the same question. Their "Deep dive into the Foundation Models framework" video is nice for seeing code that uses the new `FoundationModels` library, but for a "deep dive" I would like to learn more about tokenization. Hopefully these details are eventually disclosed, unless someone else here already knows?
I guess I'd say "mu" - from a dev perspective, you shouldn't care about tokens, ever. If your inference framework isn't abstracting that for you, your first task would be to patch it to do so.
To parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes, you're covered.
IanCal 21 hours ago [-]
Ish - it always depends how deep in the weeds you need to get. Tokenisation impacts performance, both speed and results, so details can be important.
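A toy illustration of the point (the vocabularies here are made up, and this has nothing to do with Apple's actual tokenizer): the same string can come out as very different token sequences depending on the vocabulary, which affects both the number of inference steps (speed) and what the model "sees" (results).

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization over a toy vocabulary."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest vocabulary entry that matches at position i.
        for size in range(len(text) - i, 0, -1):
            piece = text[i:i + size]
            if piece in vocab:
                tokens.append(piece)
                i += size
                break
        else:
            # Fall back to a single character (byte-level fallback).
            tokens.append(text[i])
            i += 1
    return tokens

coarse = {"token", "ization", " matters"}
fine = {"tok", "en", "iza", "tion", " mat", "ters"}

print(greedy_tokenize("tokenization matters", coarse))  # 3 tokens
print(greedy_tokenize("tokenization matters", fine))    # 6 tokens
```

Same text, double the sequence length under the second vocabulary - which is exactly the kind of detail an abstraction hides, for better or worse.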
refulgentis 19 hours ago [-]
I maintain a llama.cpp wrapper on everything from web to Android, and I cannot quite wrap my mind around whether you'd have any more info by getting individual token IDs from the API, beyond what you'd get from wall-clock time and checking their vocab.
lqstuart 18 hours ago [-]
I don’t really see a need for token IDs alone, but you absolutely need per-token logprob vectors if you’re trying to do constrained decoding
refulgentis 16 hours ago [-]
Interesting point. My first reaction was "why do you need logprobs? We use constrained decoding for tool calls and don't need them"... which is actually false! Because we need to throw out those logprobs, then find the highest-logprob token meeting the constraints.
lqstuart 8 hours ago [-]
Haha yeah. I’ve seen you mention the llama cpp wrapper elsewhere, it sounds cool! I’ve worked enough with vLLM and sglang to get angry at xgrammar, which I believe has some common ancestry with the GGML stack (GBNF if I’m not mistaken, which I may be). The constrained decoding part is as simple as you’d expect, just applies a bitmask to the logprobs during the “logit processing” and continuing as normal.
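The logit-masking step described above can be sketched in a few lines (toy vocabulary and scores; this is the general shape of the technique, not xgrammar's actual API): the grammar yields a bitmask of allowed token IDs for the current state, disallowed logits are set to -inf, and decoding continues as normal.

```python
import math

def constrained_pick(logits, allowed_mask):
    """Apply the grammar bitmask, then greedily pick the best allowed token."""
    masked = [
        logit if allowed else -math.inf
        for logit, allowed in zip(logits, allowed_mask)
    ]
    # Greedy decoding for simplicity; sampling would softmax over `masked`.
    return max(range(len(masked)), key=lambda tid: masked[tid])

# Unconstrained, token 0 would win; with the mask, the best *allowed* token wins.
logits = [3.2, 1.1, 2.7, 0.4]          # model scores for a 4-token vocab
allowed = [False, True, True, False]   # grammar permits only tokens 1 and 2

print(constrained_pick(logits, allowed))  # 2
```

This is also why the parent's point holds: you need the full per-token score vector to do this, not just the sampled token ID.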
rubicon33 15 hours ago [-]
The direction software engineering is going in with this whole "vibe coding" thing is so depressing to me.
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
spagoop 15 hours ago [-]
To be meta about it, I would argue that thinking "generatively" is a craft in and of itself. You are setting the conditions for work to grow rather than having top-down control over the entire problem space.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLM's have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship to the ways in which we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
alt227 14 hours ago [-]
Reading this post is like playing buzz word bingo!
tansan 15 hours ago [-]
This sounds like a boomer trying to resist using Google in favor of encyclopedias.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
aziaziazi 14 hours ago [-]
Is any AI assisted coding === Vibe Coding now?
Coding with an AI can be whatever one can achieve; however, I don't see how vibe coding would be related to autocomplete: with autocomplete you type a bit of code that a program (AI or not) completes. In vibe coding you almost don't interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; however, if there are contradictory views I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
(You may also consider rewriting your first paragraph up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan)
msgodel 11 hours ago [-]
Karpathy's definition of vibe coding as I understood it was just verbally directing an agent based on vibes you got from the running app without actually seeing the code.
swsieber 14 hours ago [-]
No, this sounds like an IC resisting becoming a manager.
fragmede 14 hours ago [-]
No, that's what separates vibe coding from the glorified autocomplete. As originally defined, vibe coding doesn't include a final code review of the generated code, just a quick spot check, and then moving on to the next prompt.
hooverd 14 hours ago [-]
You can take an augmented approach, a sort of capability fusion, or you can spam regenerate until it works.
pzo 1 day ago [-]
I might be wrong, but I guess this will only work on iPhone 16 devices and the iPhone 15 Pro - thus it drastically limits your user base and you would still have to use an online API for most apps. I was hoping they'd provide a free AI API on their private cloud for other devices, even if it also ran small models.
jjcob 19 hours ago [-]
If you start writing an app now, by the time it's polished enough to release it, the iPhone 16 will already be a year-old phone, and there will be plenty of potential customers.
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
rs186 15 hours ago [-]
Developers could be adding an LLM-powered feature to their existing app that already has a large user base. This could be a matter of a few weeks from idea to shipping the feature. While competitors use API calls to just "get things done", you are trying to figure out how to serve both iPhone 16 and older users, and potentially Android/web users if your product is also available elsewhere. I don't see how an iPhone 16-only feature helps anyone's product development, especially when the quality still remains to be seen.
edude03 12 hours ago [-]
Basically this - network effects are huge. People will definitely buy hardware if it solves a problem for them - so many people bought BlackBerrys just for BBM.
geodel 17 hours ago [-]
Exactly. It can take at least a couple of years to get big/important apps to adopt new iOS/macOS features. By then the iPhone 16 would be quite common.
charliebwrites 16 hours ago [-]
If the new foundation models are on device, does that mean they’re limited to information they were trained on up to that point?
Or do they have the ability to reach out to the internet for up-to-the-moment information?
dwaite 11 hours ago [-]
In addition to the context you provide, the API lets you programmatically declare tools.
pxc 2 days ago [-]
I hoped for a moment that "Containerization Framework" meant that macOS itself would be getting containers. Running Linux containers and VMs on macOS via virtualization is already pretty easy and has many good options. If you're willing to use proprietary applications to do this, OrbStack is the slickest, but Lima/Colima is fine, and Podman Desktop and Rancher Desktop work well, too.
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
> The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine-grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images, but they do cover many of the same features.
xyzzy_plugh 18 hours ago [-]
You're kind of right. But at the same time they are nowhere close. The beauty of Linux containerization is that processes can be wholly ignorant that they are not in fact running as root. The containers get, what appear to them, to be the whole OS to themselves.
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
griels 15 hours ago [-]
Hard same. I wonder if this does anything differently from the existing projects that would mean one could use the WSL2 approach, where containerd runs in the Linux micro-VM. A key component is the RPC framework - it seems to be how OrbStack's `macctl` command does it. I see mention of gRPC, sandboxes, and containers in the binfmt_misc handling code, which is promising:
Providing isolated environments for CI machines and other build environments!
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want to have/use better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
raydev 1 day ago [-]
Clean build environments for CICD workflows, especially if you're building/deploying many separate projects and repos. Managing Macs as standalone build machines is still a huge headache in 2025.
BrandonSmith 1 day ago [-]
What's wrong with Cirrus CLI and Tart built on Apple's Virtualization.framework?
Tart is great! This is probably the best thing available for now, though it runs into some limitations that Apple imposes for VMs. (Those limitations perhaps hint at why Apple hasn't implemented this-- it seems they don't really want people to be able to rent out many slices of Macs.)
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
itake 1 day ago [-]
I might misunderstand the project, but I wish there were a secure way for me to execute GitHub projects. Recently the OS has provided some controls to limit access to files, etc., but I'd really like a "safe boot" version that doesn't allow the program to access the disk or network.
The firewall tools are too clunky (and IMHO unreliable).
wpm 2 days ago [-]
Same thing containers/jails are useful for on Linux and *BSD, without needing to spin up an entirely separate kernel to run in a VM to handle it.
tensor 2 days ago [-]
macOS apps can already be sandboxed. In fact it's a requirement for publishing them to the Mac App Store. I agree it'd be nice to see this extended to userland binaries though.
Etheryte 2 days ago [-]
You can't really sandbox development dependencies in any meaningful way. I want to throw everything and the kitchen sink into one container per project, not install a specific version of Python, Node, Perl or what have you globally/namespaced/whatever. Currently there's no good solution to that problem, save perhaps for a VM.
uv doesn't provide strong isolation; a package you install using uv can attempt to delete random files in your home folder when you import it, for example.
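A contrived sketch of that point (the module name and marker file here are made up for illustration): importing a module executes its top-level code with your full user privileges, so per-language package managers can't meaningfully sandbox dependencies. Here the fake package only writes a marker file, but it could just as easily delete things in $HOME.

```python
import os
import sys
import tempfile

workdir = tempfile.mkdtemp()
marker = os.path.join(workdir, "pwned.txt")

# Simulate an installed third-party package with import-time side effects.
pkg_source = (
    '# This runs at import time, before you call any function:\n'
    f'with open({marker!r}, "w") as out:\n'
    '    out.write("imported code ran with full user privileges")\n'
)
with open(os.path.join(workdir, "innocent_looking.py"), "w") as f:
    f.write(pkg_source)

sys.path.insert(0, workdir)
import innocent_looking  # noqa: F401  (the side effect happens right here)

print(os.path.exists(marker))  # True
```

A VM or container draws the boundary around the whole process tree instead, which is why nothing at the package-manager level can substitute for it.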
NewJazz 2 days ago [-]
People use containers server-side in Linux land, mostly... some desktop apps too (Flatpak is basically a container runtime), but the real draw is server code.
Do you think people would be developing and/or distributing end user apps via macOS containers?
csomar 1 day ago [-]
i.e.: you want to build a binary for macOS from your Linux machine. Right now it is possible, but you still need a macOS license and have to jump through hoops. If you were able to containerize macOS, then you'd create a container and compile your program inside it.
nixosbestos 1 day ago [-]
No, that's not at all how that would work. You're not building a macOS binary natively under a Linux kernel.
doctorpangloss 2 days ago [-]
Orchestrating macOS-only software, like Xcode, and software that benefits from environment integrity, like browsers.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
pxc 1 day ago [-]
Ah, that's great! I'd forgotten it moved and struggled to track it down.
dedicate 2 days ago [-]
Okay, the AI stuff is cool, but that "Containerization framework" mention is kinda huge, right? I mean, native Linux container support on Mac could be a game-changer for my whole workflow, maybe even making Docker less of a headache.
12_throw_away 1 day ago [-]
FWIW, here are the repos for the CLI tool [1] and backend [2]. Looks like it is indeed VM-based container support (as opposed to WSLv1-style syscall translation or whatever):
Containerization provides APIs to:
[...]
- Create an optimized Linux kernel for fast boot times.
- Spawn lightweight virtual machines.
- Manage the runtime environment of virtual machines.
I'm kinda ignorant about the current state of Linux VMs, but my biggest gripe with VMs is that OS kernels kind of assume they have access to all the RAM the hardware has - unlike the reserve/commit scheme processes use for memory.
Is there a VM technology that can make Linux aware that it's running in a VM, and able to hand memory back to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
StopDisinfo910 23 hours ago [-]
That's called memory ballooning and is supported by KVM on Linux. Proxmox, for example, can do that. It does need support on both the host and the guest.
rwmj 23 hours ago [-]
It's still a problem for containers-in-VMs. You can in theory do something with either memory ballooning or (more modern) memory hotplugging, but the dance between the OS and the hypervisor takes a relatively long time to complete, and Linux just doesn't handle it well (e.g. it inevitably places unmovable pages into newly reserved memory, meaning it can never be unplugged). We never found a good way to make applications running inside the VM able to transparently allocate memory. You can overprovision memory, and hypervisors won't actually allocate it on the host, and that's the best you can do, but this also has problems since Linux tends to allocate a bunch of fixed data structures proportional to the size of memory it thinks it has available.
HighGoldstein 23 hours ago [-]
> Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs); whether the hypervisor allocates the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
SkiFire13 21 hours ago [-]
> or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
masklinn 20 hours ago [-]
A generic VMM cannot, but these are specific VMMs, so they can likely load dedicated kernel-mode drivers into the well-known guest to get the information back out.
SkiFire13 15 hours ago [-]
The driver would still be part of the guest.
masklinn 14 hours ago [-]
If you control both the VMM and the guest through a driver you have an essentially infinite latitude to set up communications between the two: virtual devices, iommu, interrupts, ...
torginus 23 hours ago [-]
Just looked it up - and the answer is 'balloon drivers', which are special drivers loaded by the guest OS that can request and return unused pages to the host hypervisor.
Apparently Docker for Mac and Windows uses these, but in practice Docker containers tend to grow quite large in terms of memory, so I'm not quite sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
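The accounting behind balloon drivers is easy to see in a toy model (pure simulation with made-up numbers, not any real hypervisor API): inflating the balloon pins pages inside the guest so the guest stops using them, which lets the host reclaim that much physical memory; deflating gives the pages back when the guest is under pressure.

```python
class BalloonedGuest:
    def __init__(self, configured_mb):
        self.configured_mb = configured_mb  # what the guest kernel thinks it has
        self.balloon_mb = 0                 # pages pinned by the balloon driver

    def usable_mb(self):
        # Memory the guest can actually allocate for its own workloads.
        return self.configured_mb - self.balloon_mb

    def inflate(self, mb):
        # Host asks the balloon driver to grab pages inside the guest...
        self.balloon_mb += mb
        return mb  # ...and the host reclaims that much physical memory.

    def deflate(self, mb):
        # Guest is under memory pressure; the driver releases pinned pages.
        freed = min(mb, self.balloon_mb)
        self.balloon_mb -= freed
        return freed

guest = BalloonedGuest(configured_mb=8192)
host_reclaimed = guest.inflate(2048)
print(guest.usable_mb(), host_reclaimed)  # 6144 2048
```

The catch the parent comments describe lives in `inflate`/`deflate`: the round trip between guest and host is slow, and the guest may have already scattered unmovable allocations across the pages the host wants back.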
trws 23 hours ago [-]
The short answer is yes, Linux can be informed to some extent but often you still want a memory balloon driver so that the host can “allocate” memory out of the VM so the host OS can reclaim that memory. It’s not entirely trivial but the tools exist, and it’s usually not too bad on vz these days when properly configured.
Asmod4n 22 hours ago [-]
It’s one reason I don’t like WSL2. When you compile something that needs 30 GB of RAM, the only thing you can do is terminate the WSL2 VM to get that RAM back.
Quarrel 20 hours ago [-]
Since late 2023, WSL2 has supported "autoMemoryReclaim", nominally still experimental, but works fine for me.
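For anyone looking for it, that setting goes in `%UserProfile%\.wslconfig` (syntax as I remember it from the WSL release notes, so double-check the docs):

```ini
; %UserProfile%\.wslconfig - reclaim cached/unused guest memory over time
[experimental]
autoMemoryReclaim=gradual   ; "dropcache" is the more aggressive option
```

A `wsl --shutdown` is needed after editing the file for it to take effect.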
I just noticed the addition of the container cask when I ran “brew update”.
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it seems to have worked alright. Haven’t looked deeper yet.
notpushkin 20 hours ago [-]
There’s some problem with networking: if you try to run multiple containers, they won’t see each other. Could probably be solved by running a local VPN or something.
cogman10 1 day ago [-]
WSLv1 never supported a native docker (AFAIK, perhaps I'm wrong?)
That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume Apple's OS is a lot closer to Linux than Windows is.
selkin 1 day ago [-]
This doesn't look like WSL1. They're not running Linux syscalls to the macOS kernel, but running Linux in a VM, more like the WSL2[0] approach.
In the end they'd probably run into the same issues that killed WSL1 for Microsoft— the Linux kernel has enormous surface area, and lots of pretty subtle behaviour, particularly around the stuff that is most critical for containers, like cgroups and user namespaces. There isn't an externally usable test suite that could be used to validate Microsoft's implementation of all these interfaces, because... well, why would there be?
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
fiddlerwoaroof 1 day ago [-]
Yeah, it probably would be feasible to dust off the FreeBSD Linux compatibility layer[1] and turn that into native support for Linux apps on Mac.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
If they built it as a kernel extension it would probably be okay with the GPL.
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (e.g. in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
fiddlerwoaroof 5 hours ago [-]
My impression is they’re basically trying to end third party kernel development; macOS has been making it progressively more difficult to use kexts and has been providing alternate toolkits for doing things that used to require drivers.
paxys 2 days ago [-]
It's impossible to have "native" support for Linux containers on macOS, since the technology inherently relies on Linux kernel features. So I'm guessing this is Apple rolling out their own Linux virtualization layer (same as WSL). Probably still an improvement over the current mess, but if they just support LXC and not Docker then most devs will still need to install Docker Desktop like they do today.
tensor 2 days ago [-]
Apple has had a native hypervisor for some time now. This is probably a baked in clone of something like https://mac.getutm.app/ which provides the stuff on top of the hypervisor.
neuralkoi 2 days ago [-]
In case you're wondering, the Hypervisor.framework C API is really neat and straightforward:
One of the reasons OrbStack is so great is because they implement their own hypervisor: https://orbstack.dev/
Apple’s stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
Using a hypervisor means just running a Linux VM, like WSL2 does on Windows. There is nothing native about it.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
petersellers 2 days ago [-]
Hyper-V is a type 1 hypervisor, so Linux and Windows are both running as virtual machines but they have direct access to hardware resources.
It's possible that Apple has implemented a similar hypervisor here.
mdaniel 2 days ago [-]
Surely if the Windows kernel can be taught to respond to those syscalls, XNU can be taught even more easily. But AIUI the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2, so the zero-to-one for XNU could be a huge lift - not the syscalls part specifically.
heavyset_go 1 day ago [-]
XNU similarly has a concept of "flavors" and uses FreeBSD code to provide the BSD flavor. Theoretically, either Linux code or a compatibility layer could be implemented in the kernel in a similar way. The former won't happen due to licensing.
shawnz 1 day ago [-]
WSL1 didn't use the existing support for personalities in NT
kergonath 1 day ago [-]
> the Windows kernel already had a concept of "personalities" from back when they were trying to integrate OS/2 so that zero-to-one for XNU could be a huge lift, not the syscalls part specifically
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
literalAardvark 2 days ago [-]
Exactly. So it wouldn't necessarily be easier. NT is almost a microkernel.
9dev 1 day ago [-]
Yep. People consistently underestimate what a great piece of technology NT is; it really was ahead of its time. And it's a shame what Microsoft is doing with it now.
okanat 1 day ago [-]
Was it ahead? I am not sure. There was lots of research on microkernels at the time, and NT was a good compromise between a monolithic kernel and a microkernel. It was an engineering product of its age - a considerably good one. It is still the best popular kernel today; not because it is the best possible with today's resources, but because nobody else cares about core OS design anymore.
I think it is the Unix side that decided to bury their heads in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers, and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open-source microkernel research slowly died after Linux. There is still some with the L4 family.
Now we are overengineering our stacks to get closer to microkernel capabilities that Linux lacks, using containers. I don't want to say it is ripe for disruption because it is hard, and again, nobody cares (except some network and security equipment, but that's a tiny fraction).
9dev 1 day ago [-]
> Was it ahead? I am not sure.
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.
okanat 17 hours ago [-]
> You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
I meant that NT was a product that matched the state-of-the-art OS design of its time (the 90s). It was the Unix world that decided to stay behind in the 80s forever.
NT was ahead not because it broke ground by bringing the design aspects of the 2020s to wider audiences, but because the Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP-11 simulator is all you need.
It is similar to how NASA got stuck with 70s/80s design of Shuttle. There was research for newer launch systems but nobody made good engineering applications of them.
pjmlp 1 day ago [-]
It is as native as any Linux cloud instance.
tensor 2 days ago [-]
> The Containerization framework enables developers to create, download, or run Linux container images directly on Mac. It's built on an open-source framework optimized for Apple Silicon and provides secure isolation between container images
That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.
hackyhacky 2 days ago [-]
> Linux container images generally contain the kernel.
No, containers differ from VMs precisely in requiring dependency on the host kernel.
tensor 1 day ago [-]
Hmm, so they do. I assumed that because you pull in a Linux distro, the kernel from that distro was used too, but I guess not. Perhaps they have done some sort of improvement where they have one Linux kernel running via the hypervisor that all containers use. I still can't see them trying to emulate Linux calls, but who knows.
froggit 1 day ago [-]
> I assumed because you pulled in a linux distro that the kernel was from that distro is used too,
That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, but it's still a Linux VM. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
froggit 1 day ago [-]
> That's how Docker works on WSL2: it runs on top of a virtualised Linux kernel. WSL2 is pretty tightly integrated with Windows itself, but it's still a Linux VM. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
Can't edit my posts on mobile, but I realized that's, what's the word, not useful... But yeah, sharing the kernel between containers while otherwise keeping them isolated allegedly gives them VM-esque security without the overhead of a separate VM for each image. There's a lot more to it, but you get the idea.
badgersnake 1 days ago [-]
They usually do contain a kernel because package managers are too stupid to realise it’s a container, so they install it anyway.
jzelinskie 2 days ago [-]
The screenshot in TFA pretty clearly shows docker-like workflows pulling images, showing tags and digests and running what looks to be the official Docker library version of Postgres.
paxys 2 days ago [-]
Every container system is "docker-like". Some (like Podman) even have a drop-in replacement for the Docker CLI. Ultimately there are always subtle differences which make swapping between Docker <> Podman <> LXC or whatever else impossible without introducing messy bugs in your workflow, so you need to pick one and stick to it.
cogman10 1 days ago [-]
If you've not tried it recently, I suggest giving the latest version of Podman another shot. I'm currently using it over Docker and a lot of the compatibility problems are gone. They've put massive effort into compatibility, including Docker Compose support.
Yeah, from a quick glance the options are 1:1 mapped, so an

  alias docker='container'

should work, at least for basic and common operations.
bandoti 2 days ago [-]
What about macOS being derived from BSD? Isn’t that where containers came from: BSD jails?
I know the container ecosystem largely targets Linux just curious what people’s thoughts are on that.
p_ing 2 days ago [-]
OS X pulls some components of FreeBSD into kernel space, but not all (and those are very old at this point). It also uses various BSD bits for userspace.
Conceptually similar, but different implementations. Containers use cgroups in Linux, and there is filesystem and network virtualization as well. It's not impossible, but it would require quite a bit of work.
sarlalian 1 days ago [-]
Another really good read about containers, jails and zones.
BSD jails are architected wholly differently from what something like Docker provides.
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies on assembling multiple Linux features/tools to create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
AdieuToLogic 1 days ago [-]
> BSD jails are architected wholly differently from what something like Docker provides.
> Jails are first-class citizens that are baked deep into the system.
Both very true statements and worth remembering when considering:
> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture than that of FreeBSD[3], jails[4] do not exist within it.
Thank you for the links I will take a closer look at XNU. It’s neat to see how these projects influence each other.
dboreham 2 days ago [-]
> what something like Docker provides
Docker isn't providing any of the underlying functionality.
BSD jails and Linux cgroups etc aren't fundamentally different things.
nyrikki 18 hours ago [-]
Jails were explicitly designed for security; cgroups are more generalized, primarily about resource control, and Linux containers leverage namespaces, capabilities, and AppArmor/SELinux on top of them to accomplish what they do.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails and you will have security problems. And on the flip side, k8s pods share the UTS, IPC, and network namespaces, yet have independent PID and FS namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
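The "assembled from parts" distinction can be shown in a few lines. This is a hedged sketch (it assumes Linux and Python 3.12+, which added `os.unshare`; lack of privileges is reported rather than treated as an error): Linux lets a process opt into one namespace at a time, whereas a jail hands you the whole isolated environment as a single primitive.

```python
import os

# Sketch: Linux isolation is opt-in per namespace. Here we try to
# unshare only the UTS (hostname) namespace and nothing else; PID, FS,
# network etc. would each be separate flags. Requires Linux, Python
# 3.12+ for os.unshare, and CAP_SYS_ADMIN for this particular flag.
def try_unshare_uts() -> str:
    if not hasattr(os, "unshare"):  # non-Linux or Python < 3.12
        return "unsupported"
    try:
        os.unshare(os.CLONE_NEWUTS)
        return "unshared"
    except OSError:
        return "not permitted"

print(try_unshare_uts())
```

A container runtime like Docker combines several such flags (plus cgroups, capabilities drops, and seccomp filters) to approximate what a jail gives you in one call.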
"Container" is sort of synonymous with "OCI-compatible container" these days, and OCI itself is basically a retcon standard for Docker (runtime, images, etc.). So from that perspective every "container system" is necessarily "docker-like", and that means Linux namespaces and cgroups.
pjmlp 1 days ago [-]
With a whole generation forgetting they came first in big iron UNIX like HP-UX.
jeberle 1 days ago [-]
Interesting. My experience w/ HP-UX was in the 90s, but this (Integrity Virtual Machines) was released in 2005. I might call out FreeBSD Jails (2000) or Solaris Zones (2005) as an earlier and a more significant case respectively. I appreciate the insight, though, never knew about HP-UX.
Another reason it matters is they might have done it differently which could inspire future improvements. :)
I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!
pjmlp 1 days ago [-]
Some people think it matters to properly learn history, instead of urban myths.
9dev 1 days ago [-]
History is one thing, who-did-it-first is often just a way to make a point in faction debates. In the broader picture, it makes little difference IMHO.
enceladus06 2 days ago [-]
WSL throughput is not enough for file intensive operations. It is much easier and straightforward to just delete windows and use Linux.
sarlalian 1 days ago [-]
Unless you need to have a working video or audio config as well.
okanat 1 days ago [-]
Using the Linux filesystem has almost no performance penalty under WSL2, since it is a VM. Docker Desktop automatically mounts the correct filesystem. Crossing the OS boundary for Windows files has some overhead, of course, but that's not the use case WSL2 is optimized for.
With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.
thrawa8387336 1 days ago [-]
FWIW I get better battery life with ubuntu.
okanat 18 hours ago [-]
Are you comparing against the default vendor image that's filled with adware or a clean Windows install with only drivers? There is a significant power use difference and the latter case has always been more power efficient for me compared to the Linux setup. Powering down Nvidia GPU has never fully worked with Linux for me.
elektrontamer 20 hours ago [-]
How? What's your laptop brand and model?
I've never had better battery life with any machine using ubuntu.
msgodel 2 days ago [-]
If they implemented the Linux syscall interface in their kernel they absolutely could.
vips7L 2 days ago [-]
Aren't the syscalls a constant moving target? Didn't even Microsoft fail at keeping up with them in WSL?
koito17 1 days ago [-]
Linux is exceptional in that it has stable syscall numbers and guarantees stability. This is largely why statically linked binaries (and containers) "just work" on Linux, meanwhile Windows and Mac OS inevitably break things with an OS update.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
arm64 macOS doesn't even allow statically linked binaries at all.
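The stability guarantee is concrete enough to demonstrate. Here's a hedged sketch (it assumes x86-64 Linux; the hard-coded 39 is the x86-64 `getpid` syscall number, which Linux promises will never change, while Windows and macOS make no such promise and route everything through NTDLL/libSystem):

```python
import ctypes
import os
import platform

# Sketch: invoking a Linux syscall directly by number, the way a
# statically linked binary effectively does. Because Linux guarantees
# the number is stable, this keeps working across kernel upgrades.
SYS_GETPID_X86_64 = 39  # fixed forever on x86-64 Linux

def raw_getpid() -> int:
    libc = ctypes.CDLL(None, use_errno=True)
    return libc.syscall(SYS_GETPID_X86_64)

if platform.system() == "Linux" and platform.machine() == "x86_64":
    # Agrees with the libc wrapper, because the number is part of the ABI
    assert raw_getpid() == os.getpid()
```

On macOS or Windows the equivalent trick is exactly what breaks across releases, which is why Go had to switch to going through libSystem.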
On the Windows side, the syscall ABI has been stable since Server 2022, to allow running mismatched container releases.
asabil 2 days ago [-]
Not Linux syscalls, they are a stable interface as far as the Linux kernel is concerned.
PhilipRoman 1 days ago [-]
They're not really a moving target (since some distros ship ancient kernels, most components will handle lack of new syscalls gracefully), but the surface is still pretty big. A single ioctl() or write() syscall could do a billion different things and a lot of software depends on small bits of this functionality, meaning you gotta implement 99% of it to get everything working.
rjsw 1 days ago [-]
FreeBSD and NetBSD do this.
NewJazz 2 days ago [-]
They didn't.
sequence7 23 hours ago [-]
WSL1 didn't have a virtualization layer; it translated syscalls instead, but that approach wasn't feasible, so WSL2 is basically running VMs with the Hyper-V hypervisor.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
DidYaWipe 1 days ago [-]
I installed Orbstack without Docker Desktop.
pjmlp 1 days ago [-]
WSL 1.0, given that WSL 2.0 is a regular Linux VM running on Hyper-V.
LoganDark 2 days ago [-]
I wonder if User-Mode Linux could be ported to macOS...
wmf 2 days ago [-]
It would probably be slower than just running a VM.
thde 2 days ago [-]
> Meet Containerization, an open source project written in Swift to create and run Linux containers on your Mac. Learn how Containerization approaches Linux containers securely and privately. Discover how the open-sourced Container CLI tool utilizes the Containerization package to provide simple, yet powerful functionality to build, run, and deploy Linux Containers on Mac.
> Containerization executes each Linux container inside of its own lightweight virtual machine.
That's an interesting difference from other Mac container systems. It also (more obviously) uses Rosetta 2.
selkin 1 days ago [-]
Podman Desktop, and probably other Linux-containers on macOS tools, can already create multiple VMs, each hosting a subset of the containers you run on your Mac.
What seems to be different here, is that a VM per each container is the default, if not only, configuration.
And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
WhyNotHugo 2 days ago [-]
The ground keeps shrinking for Docker Inc.
They sell Docker Desktop for Mac, but that might start being less relevant, and licenses may start to drop off.
On Linux there’s just the cli, which they can’t afford to close since people will just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
aequitas 1 days ago [-]
There is already a paid alternative, Orbstack, for macOS which puts Docker for Mac to shame in terms of usability, features and performance. And then there are open alternatives like Colima.
marcalc 1 days ago [-]
Used OrbStack for some time; it made my dev team's M1s run our Kubernetes pods in a much lighter fashion. Love it.
cromka 1 days ago [-]
How does it compare to Podman, though?
worthless-trash 1 days ago [-]
Podman works absolutely beautifully for me, other platforms, I tripped over weird corner cases.
pjmlp 1 days ago [-]
That is why they are now into the reinventing application servers with WebAssembly kind of vibe.
9dev 1 days ago [-]
It’s really awful. There’s a certain size at which you can pivot and keep most of your dignity, but for Docker Inc., it’s just ridiculous.
amelius 1 days ago [-]
They got Sherlocked.
mmcnl 2 days ago [-]
It's cool but also not as revolutionary as you make it sound. You can already install Podman, Orbstack or Colima right? Not sure which open-source framework they are using, but to me it seems like an OS-level integration of one of these tools. That's definitely a big win and will make things easier for developers, but I'm not sure if it's a gamechanger.
rnubel 2 days ago [-]
All those tools use a Linux VM (whether managed by Qemu or VZ) to run the actual containers, though, which comes with significant overhead. Native support for running containers -- with no need for a VM -- would be huge.
jemmyw 1 days ago [-]
Still needs a VM. It'll be running more VMs than something like orbstack, which I believe runs just one for the docker implementation. Whether that means better or worse performance we'll find out.
SpaceNugget 1 days ago [-]
there's still a VM involved to run a Linux container on a Mac. I wouldn't expect any big performance gains here.
mmcnl 1 days ago [-]
Yes, it seems like it's actually a more refined implementation than what currently exists. Call me pleasantly surprised!
It looks like nothing here is new: we have all the building blocks already. What Apple has done is package it all nicely, which is nothing to discount: there's a reason people buy managed services over raw metal for hosting their services, and having a batteries-included development environment is worth a premium over assembling it on your own.
mrbonner 1 days ago [-]
The containerization experience on macOS has historically been underwhelming in terms of performance. Using Docker or Podman on a Mac often feels sluggish and unnecessarily complex compared to native Linux environments. Recently, I experimented with Microsandbox, which was shared here a few weeks ago, and found its performance to be comparable to that of native containers on Linux. This leads me to hope that Apple will soon elevate the developer experience by integrating robust containerization support directly into macOS, eliminating the need for third-party downloads.
nottorp 1 days ago [-]
Docker at least runs a linux vm that runs all those containers. Which is a lot of needless overhead.
The equivalent of Electron for containers :)
rcarmo 23 hours ago [-]
Use Colima.
marviel 2 days ago [-]
yeah -- I saw it's built on "open source foundations", do you know what project this is?
If I had to guess, colima? But there are a number of open source projects using Apple's virtualisation technologies to run a linux VM to host docker-type containers.
Once you have an engine, Podman might be the best choice to manage containers, or Docker.
cmiles74 2 days ago [-]
Being able to drop Docker Desktop would be great. We're using Podman on MacOS now in a couple places, it's pretty good but it is another tool. Having the same tool across MacOS and Linux would be nice.
9dev 1 days ago [-]
Migrate to Orbstack now, and get a lot of sanity back immediately. It’s a drop-in replacement, much faster, and most importantly, gets out of your way.
mgreg 2 days ago [-]
There's also Rancher Desktop (https://rancherdesktop.io/). Supports moby and containerd; also optionally runs kubernetes.
samgranieri 2 days ago [-]
I have to drop docker desktop at work and move to podman.
I'm the primary author of an amalgamation of GitHub's scripts-to-rule-them-all with Docker Compose, so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use it because my scripts have to work on Linux and probably WSL.
Colima is my guess; that's the only thing that makes sense here if they are doing a QEMU-VM type of thing.
mbreese 2 days ago [-]
That's my guess too... Colima, but probably doing a VM using the Virtualization framework. I'll be more curious if you can select x86 containers, or if you'll be limited to arm64/aarch64. Not that it really makes that much of a difference anymore, you can get pretty far with Linux Arm containers and VMs.
WD-42 1 days ago [-]
Should be easy enough, look for the one with upstream contributions from Apple.
Oh, wait.
wmf 2 days ago [-]
They Sherlocked OrbStack.
12_throw_away 1 days ago [-]
Well, Orbstack isn't really anything special in terms of its features, it's the implementation that's so much better than all the other ways of spinning up VMs to run containers on macos. TBH, I'm not 100% sure 2025 Apple is capable anymore of delivering a more technically impressive product than orbstack ...
ale 2 days ago [-]
That's a good thing though right?
wmf 1 days ago [-]
It would be better for the OrbStack guy if they bought it.
WD-42 1 days ago [-]
Apple sees some nice code under a pushover license and they just can’t help themselves.
wmf 1 days ago [-]
Interestingly it looks like Apple has rewritten much of the Docker stack in Swift rather than using existing Go code.
Ok, I've squeezed containerization into the title above. It's unsatisfactory, since multiple announced-things are also being discussed in this thread, but "Apple's kitchen-sink announcement from WWDC this year" wouldn't be great either, and "Apple supercharges its tools and technologies for developers to foster creativity, innovation, and design" is right out.
It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).
Even if it's not 100% compatible, this is huge news.
nodja 2 days ago [-]
> Apple Announces Foundation Models and Containerization frameworks, etc.
This sounds like apple announced 2 things, AI models and container related stuff
I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
dang 2 days ago [-]
The article says that what was announced is "foundation model frameworks", hence the awkward twist in the title, to get two frameworkses in there.
LoganDark 2 days ago [-]
Small nitpick but "Announces" being capitalized looks a bit weird to me.
The architecture is such that the model can be specialized by plugging in more task-specific fine-tuning models as adapters, for instance one made for handling email tasks.
At least in this version, it looks like they have only enabled use of one fine-tuning model (content tagging)
pzo 18 hours ago [-]
I'm still a little disappointed. It seems those models are only available on the iPhone 16 series and the iPhone 15 Pro. According to Mixpanel, that's only 25% of all iOS devices, and even less once you take iPadOS into account. You will still have to use some other online model if you want to cover all iOS 26 users, because I doubt Apple will approve an app that only works on those Apple Intelligence devices.
Why should I bother then as a 3rd-party developer? Sure, it's nice not having API costs for 25% of users, but those models are very small (equivalent to Qwen2.5 4B or so), and their online models are supposedly equivalent to Llama Scout. Models like those are already very cheap online, so why bother maintaining a more complicated codebase? Maybe in 2 years, once more iOS users replace their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
greggsy 16 hours ago [-]
Uptake of iPhone 16+ devices will be much more than 25% by the time someone develops the next killer app using these tools, which will no doubt spur sales anyway.
alt227 14 hours ago [-]
If there was a killer app for AI (sorry LLMs) then it would have come out by now and AI (sorry LLMs) would have taken off properly.
rs186 15 hours ago [-]
App development could be as quick as a few weeks. If the only "killer apps" we have seen in the past three years are the ChatGPT kind, I'm not holding my breath for a brand new "killer app" that runs only on iPhone 16+.
TechDebtDevin 14 hours ago [-]
Why would anyone bother with Apple? Let their product deteriorate and die. It only takes one product to get people off iPhone, and they're (Tim) cooked.
chakintosh 2 days ago [-]
Some 15 years ago, a friend of mine said to me, "mark my words, Apple will eventually merge OS X with iOS on the iPad". With every passing keynote since then, Apple has seemed to inch towards that prophecy, and today the iPad has become practically a MacBook Air with a touch screen. Unless you're a video editor, a programmer who needs resources to compile, or a 3D artist, I don't see why you'd need anything other than an iPad.
paxys 2 days ago [-]
The fact that they haven't done it in 15 years should be an indication that they don't intend to do it at all. Remember that in the same time period Apple rebuilt every Macbook from scratch from the chipset up. Neither the hardware nor software is a barrier to them merging the two platforms. It's that the ecosystems are fundamentally incompatible. A true "professional" device needs to offer the user full control, and Apple isn't giving up this control on an i-Device. The 30% cut is simply too lucrative.
Secure Boot on other platforms is all-or-nothing, but Apple recognizes that Mac users should have the freedom to choose exactly how much to peel back the security, and should never be forced to give up more than they need to. So for that reason, it's possible to have a trusted macOS installation next to a less-trusted installation of something else, such as Asahi Linux.
Contrast this with others like Microsoft who believe all platforms should be either fully trusted or fully unsupported. Google takes this approach with Android as well. You're either fully locked in, or fully on your own.
NotPractical 1 days ago [-]
> You're either fully locked in, or fully on your own.
I'm not sure what you mean by that. You can trivially root a Pixel factory image. And if you're talking about how they will punish you for that by removing certain features: Apple does that too (but to a lesser extent).
On Android devices with AVB (so basically everything nowadays), once the bootloader is unlocked, so many things already either lock you out or degrade your service in various ways. For example, Netflix will downgrade you to 480p, Google Pay will stop working, many apps will just straight up disappear from the Play Store because SafetyNet will stop passing (especially on newer devices with hardware attestation), banking apps (most notably Cash App) will often stop working, many other third-party apps that don't even have anything to do with banking will still lock you out, etc.
On many Android devices, unlocking the boot loader at any point will also permanently erase the DRM keys, so you will never again be able to watch high resolution Netflix (or any other app that uses Widevine), even if you relocked the bootloader and your OS passed verified boot checks.
On a Mac, you don't need to "unlock the bootloader" to do anything. Trust is managed per operating system. As long as you initially can properly authenticate through physical presence, you totally can install additional operating systems with lower levels of trust and their existence won't prevent you from booting back into the trusted install and using protected experiences such as Apple Pay. Sure, if you want to modify that trusted install, and you downgrade its security level to implement this, then those trusted experiences will stop working (such as Apple Pay, iPhone Mirroring, and 4K Netflix in Safari, for instance), but you won't be rejected by entire swathes of the third-party app ecosystem and you also won't lose the ability to install a huge fraction of Mac apps (although iOS and iPadOS apps will stop working). You also won't necessarily be prevented from turning the security back up once you're done messing around, and gaining every one of those experiences back.
So sure, you can totally boil it down to "Apple still punishes you, only a bit less", but not only do they not even punish your entire machine the way Microsoft and Google do, but they even only punish the individual operating system that has the reduced security, don't punish it as much as Microsoft and Google do, and don't permanently lock things out just because the security has ever been reduced in the past.
Do keep in mind though, the comparison to Android is a bit unfair anyway because Apple's equivalent to the Android ecosystem is (roughly; excluding TV and whatever for brevity) iPhone and iPad, and those devices have never and almost certainly will never offer anything close to a bootloader unlock. I just had used it as an example of the all or nothing approach. Obviously Apple's iDevice ecosystem doesn't allow user tampering at all, not even with trusted experiences excluded.
Fun fact though: The Password category in System Settings will disappear over iPhone Mirroring to prevent the password from being changed remotely. Pretty cool.
NotPractical 1 days ago [-]
That is a good point. I wish dual booting with different security settings was possible on Android as well. The incentives for Google to implement that aren't really there though.
privacyking 1 days ago [-]
Out of interest, are you currently using android (or fork) or iOS?
LoganDark 1 days ago [-]
I used Android until around January last year, when I switched to iPhone because it works better with the Mac (which I'd switched back to about a month prior, after having had enough of around four years of dealing with Windows's bullshit). Not that Android worked well with Windows... I just didn't even have the idea in my head that devices could work well together at all. AirDrop changed my mind! (And all the other niceties, like Do Not Disturb syncing, and so on...)
I used to tweak/mod Android and most recently preferred customizing the OEM install over forks. I stopped doing that when TWRP ran something as OpenRecoveryScript and immediately wiped the phone without giving me any opportunity to cancel. My most recent Android phone I never bothered to root. I may never mod Android again.
resource_waste 21 hours ago [-]
This is a pretty wild take.
It's reasonable to install a different OS on an Android phone, even if some features don't work. I've done this, my friends and family have done this, I've seen it IRL.
I've never seen anyone do this on iPhone in my entire life.
But I flipped and I'm a Google hater. Expensive phones and no aux port. At least I can still get cheap Androids.
alt227 14 hours ago [-]
What's an aux port?
LoganDark 14 hours ago [-]
Headphone jack.
LoganDark 14 hours ago [-]
> I've never seen anyone do this on iPhone in my entire life.
My comment's about macOS. Even though it's a completely different market segment than Android, I'm only using Android as an example.
amoshebb 1 days ago [-]
The only MacBook I've tried to put Linux on was a T2 machine, and it still doesn't sleep/suspend right, so I'm a bit skeptical that Apple is really leading the way here. But maybe I've just not touched any recent Windows devices either.
LoganDark 1 days ago [-]
To be fair, sleep/suspend has been a rather infamously difficult problem for Linux when it comes to devices that weren't designed to run Linux. I think the Macs with T2 chips were a bit weird anyway and I wonder if they had already been working on Apple Silicon Macs that far back and that's why the T2 became a thing?
bigyabai 1 days ago [-]
Apple is also rather notorious for tinkering with Intel's ACPI tables, for better or worse. Suspend is finicky enough on hardware that supports it, and probably outright impossible if your CPU power states disagree with what the software is expecting.
bigyabai 1 days ago [-]
If anyone wants to read up on all the features Apple didn't implement from Intel Macs that made Linux support take so long, here is a list of UEFI features that represents only a small subset of the missing support relative to AMD and Intel chipsets: https://en.wikipedia.org/wiki/UEFI#Features
Alternatively, read about iBoot. Haha, just kidding! There is no documentation for iBoot, unlike there is for uBoot and Clover and OpenCore and SimpleBoot and Freeloader and systemd-boot. You're just expected to... know. Yunno?
LoganDark 1 days ago [-]
To be fair, this is how homebrew for Apple devices has always worked. You've always had to effectively reverse engineer the platform in order to write privileged code. Although I get the argument that if Apple were explicitly trying to support alternative operating systems they probably could have done more to make it easy, really what they were doing with this was first and foremost enabling additional use cases for macOS, and then maybe silently doing it in a way that third parties would also be able to benefit from. The Asahi wiki does a bit of a better job of explaining this, but the suspicion is that Apple did this not necessarily to make it easier for alternative operating systems to exist but to prevent the Mac from needing to be jailbroken when alternative operating systems were bound to happen anyway.
bigyabai 1 days ago [-]
It's not how homebrew worked on Intel Macs, or even PowerMacs[0] either. It's a change made with the Apple Silicon lineup - I cannot speak on Apple's behalf to tell you why they did that. But I can blame UEFI as the reason why the M3 continues to have pitiful Linux support when brand-new AMD and Intel chips have video drivers and power management on Day One.
The EFI environment does provide some basic drivers for the boot environment, but they all go away once the OS loads, except for a handful of functions such as EFI variable management. (Linux can also reuse a framebuffer originally obtained from EFI for a very limited form of video support - efifb - but that’s not proper video support.) So EFI doesn’t get credit for video drivers or power management.
For power management, you can however give some credit to ACPI, which is not directly related to UEFI (it predates it), but is likewise an open standard, and is generally found on the same devices as UEFI (i.e. PCs and ARM servers). ACPI also provides the initial gateway to PCIe, another open standard; so if you have a discrete video card then you can theoretically access it without chipset-specific drivers (but of course you still need a driver for the card itself).
But for onboard video, and I believe a good chunk of power management as well, the credit goes to drivers written for Linux by the hardware vendors.
LoganDark 1 days ago [-]
Sorry, I should have specified Apple Silicon rather than just "Apple devices". Obviously the devices that used widely supported CPUs running pretty much widely supported firmware were pretty easy to install non-Apple things on. My Mid-2015 A1398 ran a triple boot between macOS, Windows and Arch Linux thanks to rEFInd.
ivape 20 hours ago [-]
They don't want to cannibalize their desktop device market. If the UI fully converges, then all you have is an iPad with a keyboard across all devices (laptops, desktops).
burntalmonds 2 days ago [-]
I think practically everyone is better off with a laptop. iPad is great if you're an artist using the pencil, or just consuming media on it. Otherwise a macbook is far more powerful and ergonomic to use.
poulsbohemian 2 days ago [-]
I think perhaps you are overestimating the computing needs of the majority of the population. Get one of the iPad cases with a keyboard and an iPad is in many ways a better laptop.
nottorp 1 days ago [-]
But the majority won't pay extra for an iPad and a keyboard when they can pay less for an Air with everything included...
poulsbohemian 1 days ago [-]
I'm not sure - I just looked casually at some options and it appears one can find an iPad between $700-$900 for a pretty solid model, which includes the $250 folio keyboard. The base model MBA starts at $999. So depends on whether you want a traditional laptop or a "computing device."
kyawzazaw 17 hours ago [-]
yes they will
tiltowait 13 hours ago [-]
The problem is that almost everything, including basic web browsing, is straight-up worse on the iPad. Weird incompatibilities, sites that don’t honor desktop mode, tabs unloading from memory, random reloads, etc. all mar the experience.
NewJazz 2 days ago [-]
Or maybe a stand and separate keyboard. Better ergonomics than a laptop that way with similar portability.
Karrot_Kream 1 days ago [-]
Any keyboard you recommend? I'm looking around myself.
NewJazz 1 days ago [-]
Something wireless would be nice for portability IMO, e.g. Apple or Logitech Bluetooth. Security considerations there though.
I wouldn't want a numpad. A track point would be ape.
I struggle with keyboard recommendations b/c I'm not fully satisfied lol.
lucasoshiro 1 days ago [-]
I have an iPad and really like it, but no, it is not.
Several small things combined make it really different from the experience that I have with a desktop OS. But it is nice as a side device.
poulsbohemian 1 days ago [-]
I'm guessing you are coming at it from the perspective of a laptop user and likely a power user. The majority of the population just needs to scroll social media, message some friends, send an email or two, do a little shopping, maybe write a document or two. For this crowd an iPad is plenty. When I was a software developer - yeah, I had a Mac Pro on my desk and a MBP I carried when I traveled. Now as a real estate agent, an iPad is plenty for when I'm on the go.
focusedone 1 days ago [-]
I used to think that, not having used an iPad. Now I carry a work-issued iPad with 5G and it's actually pretty convenient for remote access to servers. I wouldn't want to spend a day working on it, but it's way faster than pulling out a laptop to make one tiny change on a server. It's also great for taking notes at meetings/conferences.
It's irritatingly bad at consuming media and browsing the web. No ad blocking, so every webpage is an ad-infested wasteland. There are so many ads in YouTube and streaming music. I had no idea.
It's also kindof a pain to connect to my media library. Need to figure out a better solution for that.
So, as a relatively new iPad user it's pleasantly useful for select work tasks. Not so great at doomscrolling or streaming media. Who knew?
pkage 1 days ago [-]
There's native ad blocking on iOS and has been for a while—I've found that to significantly enhance the usability of the device. I use Wipr[0], other options are available.
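For anyone curious how these blockers plug in: Safari content blockers are app extensions that ship declarative JSON rules, and the host app asks Safari to (re)load them via SafariServices. A minimal sketch in Swift — the extension identifier here is hypothetical:

```swift
import SafariServices

// Ask Safari to recompile the declarative rules shipped inside a
// content blocker extension. "com.example.MyBlocker" is a placeholder
// for your extension's bundle identifier.
SFContentBlockerManager.reloadContentBlocker(
    withIdentifier: "com.example.MyBlocker"
) { error in
    if let error {
        print("Reload failed: \(error.localizedDescription)")
    }
}
```

Because the rules are declarative, Safari applies them without the extension ever seeing your browsing — which is also why they're less flexible than uBlock Origin.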
I use Wipr on my phone, the experience is a lot worse than ublock origin on desktop...
e6quisitory 1 days ago [-]
Use Orion Browser. It allows installing Firefox/Chrome extensions. Install the Firefox uBlock Origin extension.
poglet 1 days ago [-]
Try the Brave browser for YouTube. I used Jellyfin for my media library and that seemed to work fine for tv and movies.
I just got a MacBook and haven't touched my iPad Pro since. I would think I could make a change faster on a MacBook than on an iPad if they were both in my bag. Although I do miss the cellular data that the iPad has.
threeseed 1 days ago [-]
> practically everyone is better off with a laptop
The majority of the world are using their phones as a computing device.
And as someone with a MacBook and iPad, the latter is significantly more ergonomic.
solomatov 1 days ago [-]
I prefer MacBook to iPad most of the time. The only use case for iPad for me where it shines is when I need to use a pencil.
ZeroTalent 2 days ago [-]
I don't understand why my MacBook doesn't have a touchscreen. I'm switching to an iPad Pro tomorrow. I use Superwhisper to talk to it 90% of the time anyway.
dexwiz 2 days ago [-]
My theory is the hinge, which is a common point of failure on laptops. Either you put extra strain on it by constantly touching the screen (some users just mash their fingers into touch screens), or users want a fully openable screen to mimic a tablet format, and those hinges always seem to fail quicker. Every touchscreen laptop I've had has eventually had the hinge fail.
cosmic_cheese 1 days ago [-]
There seems to be some kind of incompatibility between antiglare and oleophobic coatings that may also contribute.
Every single touch screen laptop I’ve seen has huge reflection issues, practically being mirrors. My assumption is that in order for the screen to not get nasty with fingerprints in no time, touchscreen laptops need oleophobic coating, but to add that they have to use no antiglare coating.
Personally I wouldn’t touch my screen often enough to justify having to contend with glare.
raydev 1 days ago [-]
Apple is capable of solving it if they want to. They don't want to (yet at least).
thetallguyyy 2 days ago [-]
Because MacBooks have subpar displays, at least the M4 Air does. The iPad Pro is a better value.
losvedir 2 days ago [-]
I don't use an iPad much, but it's been interesting to watch from afar how it's been changing over these years.
They could have gone the direction of just running MacOS on it, but clearly they don't want to. I have a feeling that the only reason MacOS is the way it is, is because of history. If they were building a laptop from scratch, they would want it more in their walled garden.
I'm curious to see what a "power user" desktop with windowing and files, and all that stuff that iPad is starting to get, ultimately looks like down this alternative evolutionary branch.
hamandcheese 2 days ago [-]
It's obvious, isn't it? It will look like a desktop, except Apple decides what apps you can run and takes their 30% tax on all commerce.
KolibriFly 1 days ago [-]
Yeah, it's like we're watching two parallel evolution paths: macOS dragging its legacy along, and iPadOS trying to reinvent "productivity" from first principles, within Apple's tight design sandbox.
athenot 2 days ago [-]
Whether or not they eventually fuse, I don't know—I doubt it. But the approach they've taken over the past 15 years to gradually increase the similarities in user experience, while not trying to force a square peg in a round hole, have been the best path in terms of usability.
I think Microsoft was a little too eager to fuse their tablet and desktop interfaces. It has produced some interesting innovations in the process, but it's been nowhere near as polished as iPadOS/macOS.
nicbou 22 hours ago [-]
I really wish there was some sort of hybrid device. I often travel by foot/bike/motorbike and space comes at a premium. I'd have a Microsoft Surface if Windows was not so unbearable.
On the other hand, I have come to love having a reading/writing/sketching device that is completely separate from my work device. I can't get roped into work and emails and notifications when I just want to read in bed. My iPad Mini is a truly distraction-free device.
I also think it would be hard to have a user experience that works great both for mobile work and sitting-at-a-desk work. I returned my Microsoft Surface because of a save dialog in a sketching app. I did not want to do file management because drawing does not feel like a computing task. On the other hand, I do want to deal with files when I'm using 3 different apps to work on a website's files.
jeron 2 days ago [-]
ipad hardware is a full blown M chip. There's no real hardware limitation that stops the iPad from running macOS, but merging it cannibalizes each product line's sales
chakintosh 2 days ago [-]
The new windowing feature basically cannibalizes MacBook Air.
threetonesun 2 days ago [-]
A Macbook Air is cheaper than an iPad Pro with a keyboard though. Not to mention you still can't run apps from outside the app store, and most of these new features we're hoping work as well as they do on MacOS, but given that background tasks had to be an API, I doubt they will.
cosmic_cheese 1 days ago [-]
iPad+keyboard is also awkwardly top heavy and not very well suited for lap use. That might cease to be an issue with sufficiently dense batteries bringing down the weight of the iPad though.
tarentel 2 days ago [-]
There's still software I can't run on an iPad which is basically the only reason I have a MacBook Air. Maybe for some a windowing system may be the push to switch but that seems doubtful to me.
qn9n 17 hours ago [-]
Yeah, I think the majority of users, even in an office environment, would be better off with an iPad in 99% of cases. All standard office stuff, like presentations, documents, and similar, is going to run better on an iPad. There are fewer footguns; users are less likely to open 300 tabs just because they can.
If you are a developer or a creative however, then a Mac is still very useful.
KolibriFly 1 days ago [-]
I still find iPadOS frustrating for certain "pro" workflows. File management, windowing, background tasks - all still feel half-baked compared to macOS. It's like Apple's trying to protect the simplicity of iOS while awkwardly grafting on power-user features
dcchambers 2 days ago [-]
> The iPad has become practically a MacBook Air with a touch screen. Unless you were a video editor, programmer who needs resources to compile or a 3D artist, I don't see how you'd need anything other than an iPad.
No! It's not - and it's dangerous to propagate this myth. There are so many arbitrary restrictions on iPad OS that don't exist on MacOS. Massive restrictions on background apps - things like raycast (MacOS version), Text Expander, cleanshot, popclip, etc just aren't possible in iPad OS. These are tools that anyone would find useful. No root/superuser access. I still can't install whatever apps I want from whatever sources I want. Hell, you can't even write and run iPadOS apps in a code editor on the iPad itself. Apple's own editor/development tool - Xcode - only runs on MacOS.
The changes to window management are great - but iPad and iPadOS are still extremely locked down.
renrutal 2 days ago [-]
With Microsoft opening Windows's kernel to the Xbox team, and a possible macOS-iPadOS unification, we are reaching multiple levels of climate changes in Hell. It's hailing!
gnatman 1 days ago [-]
But when you have so many customers buying and using both, seems like it'd be bad business for them to fully merge those lines.
Bengalilol 1 days ago [-]
> I don't see how you'd need anything other than an iPad.
For the same price, you still get a better mac.
omega3 2 days ago [-]
Does an iPad allow for multiple users?
crooked-v 2 days ago [-]
Yes, but only if it's enrolled in MDM, bizarrely enough.
thimabi 2 days ago [-]
I don’t think that’s bizarre at all, there’s a clear financial incentive for things to be this way. Apple can’t have normal people sharing a single device instead of buying one for each.
alwillis 2 days ago [-]
> Yes, but only if it's enrolled in MDM, bizarrely enough
In education or corporate settings, where account management is centralized, you want each person who uses an iPad to access their own files, email, etc.
ics 13 hours ago [-]
Same applies to “families” and it’s somewhat bizarre that this is still ignored in 2025.
iAMkenough 14 hours ago [-]
In home settings, where devices are shared with multiple family members, you want each person who uses an iPad to access their own files, email, etc.
Parents and spouses would appreciate if they could take the multiple user experience for tvOS and make it an option for iPadOS.
eastbound 2 days ago [-]
I wish Apple provided the MDM, rather than relying on a random consumer ecosystem of dodgy companies who all charge $3–18 per machine per month, which is a lot.
Auth should be Apple Business Manager; image serving should be passive directories / cloud buckets.
cj 1 days ago [-]
Apple launched their own solution last year (maybe it was the year before).
They can't do this. It would destroy their ability to rent their iOS users out because they'd have access to dev tools and could "scale the wall."
browningstreet 2 days ago [-]
Then why break it off as iPadOS?
m3kw9 2 days ago [-]
I told that to John Gruber and he said never will happen
jonplackett 2 days ago [-]
I wish they’d focus on just enabling actual functionality on iPad - like can I have Xcode please? And a shell?
I dgaf what the UI looks like. It’s fine.
kmeisthax 2 days ago [-]
Nothing Apple can do to iPadOS is going to fix the fundamental problem that:
1. iPadOS has a lot of software either built for the "three share sheets to the wind" era of iPadOS, or lazily upscaled from an iPhone app, and
2. iPadOS does not allow users to tamper with the OS or third-party software, so you can't fix any of this broken mess.
Video editing and 3D would be possible on iPadOS, but for #1. Programming is genuinely impossible because of #2. All the APIs that let Swift Playgrounds do on-device development are private APIs and entitlements that third-parties are unlikely to ever get a provisioning profile for. Same for emulation and virtualization. Apple begrudgingly allows it, but we're never going to get JIT or hypervisor support[0] that would make those things not immediately chew through your battery.
[0] To be clear, M1 iPads supported hypervisor; if you were jailbroken on iPadOS 14.5 and copied some files over from macOS you could even get full-fat UTM to work. It's just a software lockout.
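For context, the macOS side of this is a public C API (Hypervisor.framework) gated behind an entitlement that iPadOS apps can't get. A minimal availability check on macOS/Apple silicon might look like this — a sketch; it only succeeds when the process holds the com.apple.security.hypervisor entitlement:

```swift
import Hypervisor

// hv_vm_create returns HV_SUCCESS only when the process holds the
// hypervisor entitlement and the hardware supports virtualization.
let result = hv_vm_create(nil)
if result == HV_SUCCESS {
    print("Hypervisor available")
    hv_vm_destroy() // tear the (empty) VM back down
} else {
    print("No hypervisor access, code: \(result)")
}
```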
badc0ffee 1 days ago [-]
The video on Containerization.framework, and the Container tool, is live [0].
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
If you search the code, they support Rosetta, which means an ARM Linux kernel running an x86 container userspace with Rosetta translation. The Linux kernel can be told (via binfmt_misc) to use a helper to translate code for a non-native architecture.
Looks like there isn't much to take away from this; here are a few bullet points:
- Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
- Apple's new containerization framework is based on virtual machines (VMs) and not a true 'native' kernel-level integration like WSL1.
- Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionalities.
- iPadOS and macOS are converging in terms of user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategies.
- The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
App Store control is something that EU is challenging, including on iPads. So while there’s no macOS APIs on ipadOS, I can totally see 3rd party solutions running macOS apps (and Linux or Windows, too) in a VM and outputting the result as now regular iPad windowed apps.
Oh, Apple is doing windows Aero now? Wonder how long that one'll last.
sammysheep 1 days ago [-]
I watched the video and it seems they are statically linking atop musl to build their lightweight VM layer. I guess the container app itself might use glibc, but will the musl build for the VM itself cause downstream performance issues? I'm no expert in virtualization to be able to understand if this should be a concern or not.
I'm cautious. Apple's history with developer tools is hit or miss. And while Xcode integrating ChatGPT sounds helpful in theory, I wonder how smooth that experience really is.
mohsen1 2 days ago [-]
iPad update is going to encourage a new series of folks trying to use iPads for general programming. I'm curious how it goes this time around. I'm cautiously optimistic
msgodel 2 days ago [-]
Isn't it still impossible to run any dev tools on the iPad?
robterrell 2 days ago [-]
IIRC Swift Playgrounds goes pretty deep -- a full LLVM compiler for Swift and you can use any platform API -- but you can't build something for distribution. The limitations are all at the Apple policy level.
zapzupnz 1 days ago [-]
Not quite. As another user mentioned, there's Swift Playgrounds which is complete enough that you can even upload apps made in it to the App Store. Aside from that, there are also IDEs like Pythonista for creating Python-based apps and others for Lua, JavaScript, etc. many of which come with their own frameworks for making native iOS/iPadOS interfaces.
mohsen1 19 hours ago [-]
I can assume that they are going to bring the Container stuff to iPad at some point. That would unlock so many things...
rs186 15 hours ago [-]
No vscode, no deal. I don't see that happening any time soon.
KolibriFly 1 days ago [-]
I think the story might actually be changing this time
eastbound 2 days ago [-]
You can’t run Docker on an iPad.
nehalem 2 days ago [-]
I wonder what happened to Siri. Not a single mention anywhere?
KolibriFly 1 days ago [-]
I actually loved Siri when it first came out. It felt magical back then (in a way)
1 days ago [-]
m3kw9 2 days ago [-]
"Hope to show you more later this year" was, like, the first thing they said about Apple Intelligence.
alt227 14 hours ago [-]
Which is the same as what they said last year.
esafak 2 days ago [-]
Does this mean we will no longer need Docker Desktop or colima?
Unlikely to happen soon. It’s maintained by one engineer who is very against anything resembling iTerm2.
garciasn 2 days ago [-]
Just use iTerm2 (Warp or Kitty are two other options out of many) and be done w/it; why would Apple even worry about this when so few people who care about terminal applications even think twice about it?
dorian-graph 2 days ago [-]
I've tried all of them, including ones that you and others haven't mentioned, like Rio. I stand by wanting Terminal.app simply updated with better colour support; then it's one less alternative program to get.
Onavo 2 days ago [-]
Also ghostty
1 days ago [-]
pjmlp 1 days ago [-]
WebKit is also being swiftified, as mentioned on the platforms state of the union.
Klonoar 1 days ago [-]
As in they're integrating Swift into the WebKit project, or exposing Swift-y wrappers over WebKit itself?
pjmlp 1 days ago [-]
There is probably going to be a session later this week, the reference seemed to imply they are integrating Swift into Webkit project for new development.
Klonoar 1 days ago [-]
Interesting, I wonder if that pushes Swift on Linux further given other projects (webkitgtk etc).
pjmlp 1 days ago [-]
Most likely not. For Apple, what matters for Swift on Linux is being a good server language for app developers who want to share code between app and server, with Apple no longer caring to sell macOS for servers.
Everything else they would rather see devs stay on their platforms, see the official tier 1 scenarios on swift.org.
2 days ago [-]
teruakohatu 1 days ago [-]
> Every Apple Developer Program membership includes 200GB of Apple hosting capacity for the App Store. Apple-Hosted Background Assets can be submitted separately from an app build.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
samcat116 1 days ago [-]
> Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
They've offered 25hrs/mo of Xcode Cloud build time for the last couple years.
glhaynes 1 days ago [-]
Background Assets have existed for years. I’m not sure that 200GB figure is new.
Klonoar 1 days ago [-]
Huh. Does this cover if you use public CloudKit databases...?
visiondude 2 days ago [-]
Excited to try these out and see benchmarks. Expectations for on device small local model should be pretty low but let’s see if Apple cooked up any magic here.
Hopefully not bound to SwiftUI like seemingly everything else Apple Intelligence so far. But on-device llm (private) would be real nice to have.
samcat116 1 days ago [-]
The API looks like "give it a string prompt, asynchronously get a string back," so it's not tied to any particular UI framework.
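Based on what was shown in the sessions, the shape is roughly this — a sketch of the announced FoundationModels API; names may shift before release, and it needs an Apple Intelligence-capable device:

```swift
import FoundationModels

// Create a session with the on-device model; instructions are optional.
let session = LanguageModelSession(
    instructions: "You are a concise travel assistant."
)

// Prompt in, text out — no tokenizer handling on the caller's side.
let response = try await session.respond(
    to: "Suggest a one-day itinerary for Kyoto."
)
print(response.content)
```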
datadrivenangel 2 days ago [-]
"The framework has native support for Swift, so developers can easily access the Apple Intelligence model with as few as three lines of code."
Bad news.
KerrAvon 2 days ago [-]
Swift != SwiftUI
lenerdenator 2 days ago [-]
I like that there's support for locally-run models on Xcode.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
nikolayasdf123 2 days ago [-]
Yeah, getting better LLM support for Xcode is great!
bishfish 1 days ago [-]
I sure hope they provide an accessibility option to turn down translucency to improve contrast or this UI is a non-starter for me. Without using it, this new UI looks like it may favor design over usability. Why don’t they do something more novel and let user tweak interface to their liking?
glhaynes 1 days ago [-]
They’ve had Reduce Transparency (under Accessibility) for a long time now. It still works.
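And apps can honor that setting in their own chrome, too. In SwiftUI it's exposed as an environment value — a sketch:

```swift
import SwiftUI

struct GlassCard: View {
    // Mirrors the system-wide Reduce Transparency accessibility setting.
    @Environment(\.accessibilityReduceTransparency)
    private var reduceTransparency

    var body: some View {
        Text("Hello")
            .padding()
            // Fall back to an opaque background when the user has
            // asked for less translucency.
            .background(
                reduceTransparency
                    ? AnyShapeStyle(.background)
                    : AnyShapeStyle(.ultraThinMaterial)
            )
    }
}
```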
encom 1 days ago [-]
>favor design over usability
That's... kinda what Apple is famous for.
alt227 14 hours ago [-]
The files are INSIDE the computer!
can16358p 1 days ago [-]
I hope they don't turn Liquid Glass into Aqua... which I hated. The only time I started to like the iOS interface was iOS 7, with its flat design. I hope they don't turn this into an old, skeuomorphic, Aqua-like UI over time.
bandoti 2 days ago [-]
Not sure about that Liquid Glass idea.
Ultimately UI widgets are rooted in reality (switches, knobs, doohickeys), and liquid glass is Salvador Dalí-esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
2 days ago [-]
Havoc 1 days ago [-]
TIL macOS doesn’t have native containers, just in a VM.
I don’t use macOS but had just kinda assumed it would, by virtue of its shared unixy background with Linux.
makapuf 18 hours ago [-]
Don't containers imply a Linux kernel interface? Hence you can only have truly native containers on Linux, or use containers in a VM or some kind of Wine-like translation layer.
simonw 2 days ago [-]
Is there a beta we can install to try out these models yet?
Repo says it uses Hypervisor.framework on Apple Silicon devices.
tough 13 hours ago [-]
ah ty i should have read more
thiyagutenysen 22 hours ago [-]
Does this mean we can use an LLM for free when developing iOS apps?
dblooman 2 days ago [-]
Can someone who uses Xcode daily compare the developer experience to, say, Cursor or VS Code? Just curious how Apple is keeping up.
nikolayasdf123 2 days ago [-]
Xcode so far is very rudimentary: miles behind VS Code in autocomplete. Completions are small, single-line, and suggested very, very rarely, and no features other than autocomplete exist.
Very good to see Xcode LLM improvements!
(I use VS Code with Go daily, and Xcode with Swift 6 on iOS 18 daily.)
WhyNotHugo 1 days ago [-]
Several years ago Xcode also had “jump to definition” and a few other features.
bearjaws 1 days ago [-]
All this focus on low power gaming makes me think Apple wants to get in on the Steam Deck hype.
SlowTao 1 days ago [-]
Apple is in a reasonably good place to make gaming work for them.
Their hardware across the board is fairly powerful (definitely not top end), they have a good API stack, especially with Metal, and they have systems at all levels, including TV. If they were to make a standard controller, or just say "the PS5 DualSense is our choice," they could have a nice little slice for themselves.
TheAceOfHearts 1 days ago [-]
As I understand it, Apple has a long history of entitlement and burning bridges with every major game developer while making collaboration extremely painful. They were in a much better place to make gaming work 10 years ago when major gaming studios were still interested in working with them.
jamesy0ung 1 days ago [-]
Just let me have JIT! My jailbroken iPad Pro can emulate Wii at 4K without getting warm. Unfortunately you have to hack around enabling JIT on newer iOS releases.
syspec 5 hours ago [-]
You can use DualShocks on Apple TV or iPad games - it's supported. Of course on Mac as well
raydev 1 days ago [-]
They've been hyping up their hardware capabilities and APIs for years now.
throwaway314155 1 days ago [-]
Until Apple-ported games are able to be installed from Steam instead of the App Store, you can count me out.
bigyabai 1 days ago [-]
They better have a partnership with Sony in the works, then. Valve and Apple's approach to supporting video games diverged a decade ago. Hearing "Steam" and "Apple" uttered in the same breath is probably giving people panic attacks already.
N_A_T_E 2 days ago [-]
> New Design with Liquid Glass
Yes, bringing back Aqua! I even see blue in their examples.
babyshake 2 days ago [-]
Does the privacy preserving aspect of this mean that Apple Intelligence can be invoked within an app, but the results provided by Apple Intelligence are not accessible to the app to be transmitted to their server or utilized in other ways? Or is the privacy preservation handled in a different way?
jonplackett 2 days ago [-]
I think they just mean private from Apple. I don’t see how they can keep it private from the developer if it’s integrated into the app
bitpush 2 days ago [-]
What model are they bundling? Something apple-custom? How capable is it?
Apple has their own models under the hood, I believe. I remember from a year or two ago they had an open model line called OpenELM ("Efficient Language Models"), but I'm not sure if that's what they're actually using.
I am excited to see what the benchmarks look like though, once it's live.
tough 1 days ago [-]
they also use their ANE and CoreML for smaller on-device stuff
iPadOS and OSX continue to converge into one platform.
xattt 2 days ago [-]
Calling it: Apple allOS 27 incoming next year, with Final Cut Pro on your Apple Watch.
2 days ago [-]
rconti 2 days ago [-]
Multi-user iPadOS when?
xp84 2 days ago [-]
When they figure out how to make it not dent sales of individual devices. If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
rconti 15 hours ago [-]
I think this may be overestimating how often people buy tablets. My wife has an iPad Air 1 or 2, so it's close to 10 years old and mostly sits in a drawer. I had a VERY old iPad 2 that I held off on replacing because I wanted to wait for a multi-user iPad.
I finally gave up and bought a Mini6 a year or two ago, which gets.... also minimal use. And I'm sure not buying ANOTHER tablet we're not going to use.
If they were multi-user I actually think we'd both get more value out of it, and upgrade our one device more often.
alwillis 2 days ago [-]
> If you and your spouse could easily share one around the house for different purposes but still having each of your personal apps and settings, you might not buy two!
I get it, but an iPad starts at $349; often available for less.
At this point, an iPad is no different than a phone—most people wouldn't share a single tablet.
Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
xp84 1 days ago [-]
> an iPad starts at $349; often available for less.
It's less about the cost and more about having to have another stupid device to charge, update, and keep track of, when a tablet is not a device that gets used enough by any one person to be worth all that. It would be much more convenient to have a single device on a coffee or end table which all family members could use when they need to do more than you can do on a phone.
> Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
Maybe. Probably 90% of work laptops are single-user, I'm sure. But for home computers, multi-user can be very useful. And it's better than ever to use laptops as dumb terminals, since most people's stuff is all in the cloud. It's not nearly as much trouble to get your secondary user account on a spare laptop in the living room to be useful as it was in the Windows XP days. Just having a browser that's signed into your stuff, plus Messages or WhatsApp, and maybe Slack/Discord/etc., is enough.
> most people wouldn't share a single tablet.
Since iPads have never supported doing so in a sane way, that unfounded assertion is just as likely due to the fact that it's a terrible experience today, since if you share one today, someone else will be accidentally marking your messages as read, you'll be polluting their browser or YouTube history, etc.
It's also the kind of dismissive claim true Apple believers tend to trot out when someone points out a shortcoming: "Nobody wants to use a touchscreen laptop!" "Nobody wants USB-C on an iPhone when Lightning is slightly smaller!" "Nobody needs an HDMI port or SD slot on a MacBook Pro!" "Nobody needs a second port on the 12-inch MacBook!" Most of the above things have come true except the touch laptop, and somehow it hasn't hurt anyone, but the "nobody wants..." crew immediately stops when Apple finally [re-]embraces something
spockz 1 days ago [-]
We use iPads interchangeably. All personal apps like banking are on phones. Some apps that only I would use such as for the roomba and car are on both.
Having profiles for the kids however would be nice though. But most apps have that built in themselves.
alt227 14 hours ago [-]
>Having profiles for the kids however would be nice though.
I find it madness that Apple doesn't have this already.
olyjohn 2 days ago [-]
Never, it'll be single user MacOS.
turnsout 2 days ago [-]
Thank goodness… this will hopefully help keep app bundle sizes down, and allow developers to avoid calling AI APIs for trivial stuff like summaries.
xyst 2 days ago [-]
back to "glass" UI element/design? Early 2000s is back, I guess.
Edit: surprised Apple is dumping resources into gaming; maybe they are playing the long game here?
retskrad 2 days ago [-]
After reading the book "Apple in China", it’s hilarious to observe the contrast between Apple as a ruthless, amoral capitalist corporation behind the scenes and these WWDC presentations...
bigyabai 2 days ago [-]
This just in: company that spends billions on marketing is effective at marketing their products. News at 11.
reaperducer 2 days ago [-]
News at 11.
…10 Central and Mountain.
codethief 2 days ago [-]
> New Design with Liquid Glass
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
daveidol 2 days ago [-]
To me it looks more like Windows Vista's "Aero" than OS X's "Aqua".
SlowTao 1 days ago [-]
And I couldn't be happier to see it back. I have not been a fan of the flattening of UI design over the last 15 years.
hn_throwaway_99 1 days ago [-]
But the opposite of "flat" is not "transparent".
This was posted in another HN thread about Liquid Glass: https://imgur.com/a/6ZTCStC . I'm sure Apple will tweak the opacity before it goes live, but this looks horribly insane to me.
NoPicklez 1 days ago [-]
Agreed, people have said perhaps its Apple's way of bringing VR vibes to the UI, showing layers of UI elements.
But I'm not so sure if I want transparent.
dundarious 19 hours ago [-]
They explicitly mention this is (paraphrasing) "bringing elements from visionOS to all your devices" in the video in TFA.
crawsome 20 hours ago [-]
I'll just want the option to turn it off because it will use extra CPU cycles just existing.
I remember the catastrophe of Windows Vista, and how you needed a capable GPU to handle the glass effect. Otherwise, one of your (maybe two) CPU cores would have to process all that overhead.
SlowTao 11 hours ago [-]
Yeah, it definitely needs work. But I hope they tone it down like Microsoft did with the Aero glass effects between Vista and Win 7.
They are heading in a good direction, it just needs to be toned down. But like any new graphics technology the first year is the "WOW WE CAN DO X!!!!" then the more tame stuff comes along.
mickdarling 19 hours ago [-]
Bleary-eyed, waking up while trying to find my reading glasses would make that interface essentially useless.
buildbot 2 days ago [-]
Yes, I immediately thought of Windows Aero too!!! I wasn’t able to enable it until I got a 9800GX2 a few years later, very cool at the time combined with the ability to have movies as your desktop background. It was a nice vibe.
Maybe this is a consequence of the Frutiger Aero trend, and users miss the time when user interfaces were designed to be cool instead of only useful.
Krssst 21 hours ago [-]
Current interfaces are not aimed at being optimally useful. Padding everywhere as of today means more time scrolling and wasted screen space. Animations everywhere means a lot of wasted time watching pixels moving instead of the computer/phone giving us control immediately after it did the thing we (maybe) asked for. Hiding scrollbars is a nightmare in general in desktop OSes but is the default (once lost half an hour setting up a proxy because the "save" button was hidden behind a scrollbar).
Usability feels like it has only gone downhill since Windows 7. (On the other hand, Windows has plenty of accessibility features that help a lot in restoring usability.)
hbn 2 days ago [-]
I love that we're getting some texture back. UI has been so boring since iOS 7.
Sebastiaan de With of Halide fame did a writeup about this recently, and I think he makes some great points.
Interesting, I never made the connection between dashboard widgets UI and early iPhone UI. It does make sense, early iPhone had a UI that was glossier and more colorful than "metallic" aqua.
adolph 2 days ago [-]
Open link and type into this box "physicality is the new skeumorphism"
Read on and:
They are completely dynamic: inhabiting characteristics that are akin to actual materials and objects. We’ve come back, in a sense, to skeuomorphic interfaces — but this time not with a lacquer resembling a material. Instead, the interface is clear, graphic and behaves like things we know from the real world, or might exist in the world. This is what the new skeuomorphism is. It, too, is physicality.
Well worth reading for the retrospective of Apple's website taking a twenty year journey from flatland and back.
lobsterthief 1 days ago [-]
They’re describing material design, which Google popularized. Skeuomorphism with things that could exist in the real world, avoid breaking the laws of physics, etc. Which then morphed into flat design as things like drop shadows were seen as dated. You are here.
crooked-v 2 days ago [-]
I kind of hate it. Every use of it in the videos shown so far has moments where it's so transparent as to have borderline unreadable contrast.
kayodelycaon 2 days ago [-]
Same. And white on light blue is just as bad. Looks like I’ll be using more accessibility features.
MBCook 1 days ago [-]
This is the first time I have ever thought “maybe I don’t want to update my phone”. Entirely because of the look.
zapzupnz 1 days ago [-]
In Settings -> Accessibility -> Display, you can enable Increase Contrast or Reduce Transparency to get rid of some of the worse glass effects, and Settings -> Accessibility -> Motion, you can enable Reduce Motion to get rid of the some of the light effects for content passing under glass buttons.
summarity 2 days ago [-]
The last example in the first carousel is the worst; the bottom glass elements have completely unreadable text.
SlowTao 1 days ago [-]
I agree with you, I hope they quickly tweak this into something more readable. There could be a really nice mid ground here.
ordinaryradical 2 days ago [-]
I used to find these changes compelling but now I think they are mostly a pain in the ass or questionable.
Proof of a well-designed UI is stability, not change.
Reads to me strongly of an effort to give traditional media something shiny to put above the headline and keep the marketing engine running.
crawsome 20 hours ago [-]
If you read the press release, you can see it's 100% about marketing and nothing else.
Apple will spend 10x the effort to tell you why a useless feature is necessary before they look at user feedback.
spike021 2 days ago [-]
I’m usually on board with Apple UI changes but something about all the examples they showed today just looked really cheap.
My only guess is this style looks better while using the product but not while looking at screenshots or demos built off Illustrator or whatever they’re using.
kif 2 days ago [-]
I love it. Reminds me of Windows 7. The nostalgia is too strong with this one.
sho_hn 1 days ago [-]
In fact, Apple once did a version of Aqua that did an overengineered materials-based rasterization at runtime, including a physically correct glass effect.
It was too slow and was later optimized away to run off of pre-rendered assets with some light typical style engine procedural code.
Feels like someone just dusted off the old vision now that the compute is there.
Barrin92 2 days ago [-]
Just one or two years ago I remember a handful of articles popping up that Gen Z was really into Frutiger Aero, that's the first thing I thought of, with the nature themes and skeuomorphic UI elements.
Thanks, I wasn't even aware the style had gotten a name in the meantime.
SlowTao 1 days ago [-]
Back when Jobs was introducing one of the Mac OS X versions, there was a line that stuck with me.
Showing off the pulsating buttons he said something like "we have these processors that can do billions of calculations of second, we might as well use them to make it look great".
And yet a decade later, they were undoing all of that to just be flat and boring. I'm glad they are using the now trillions of calculations a second to bring some character back into these things.
scyzoryk_xyz 1 days ago [-]
He was selling. The audience were sales. OS's were fully matured at that point. Computers were something you buy at a store. It was a selling point.
A decade later they were handling the windfall that came with smartphone ascendancy: the emergence of an entirely new design language for touch screen UI. Skeuomorphism was slowing that all down.
Making it all flat meant making it consistent, which meant making it stable, which meant scalability. iOS7 made it so that even random developers' apps could play along and they needed a lot of developers playing along.
breadwinner 2 days ago [-]
Liquid Glass is not adding a dimension. It is still flat UI, sadly. They just gave the edges of the window a glass like effect. There's also animation ("liquid" part). Overall, very disappointing.
dheera 1 days ago [-]
The world flip-flops between flat and 3D UI design every few years.
We were in a flat era for the last several years, this kicks off the next 3D era.
w10-1 1 days ago [-]
HN should have a conference-findings thread for something like WWDC, with priority impact rankings
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency is now visible in Instruments and debuggable, and high-performance tracing avoids sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI Look-and-feel changes across all platforms conceal the fact that there are very few new API's.
Low content overall: scan the platforms and you see only L&F, app intents, and widgets. Is that really all (thus far)? It's quite concerning.
Also low quality: online links point nowhere, and half-baked technologies are filling presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
amluto 2 days ago [-]
> This year, App Intents gains support for visual intelligence. This enables apps to provide visual search results within the visual intelligence experience, allowing users to go directly into the app from those results.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
cube2222 2 days ago [-]
Fwiw, spotlight in MacOS seems to be getting a major revamp too (basing this on the WWDC livestream, but there seems to be a note about it on their blog[0] too), pushing it a bit more in the direction of tools like Alfred or Raycast, and allegedly also being faster (but that's marketing speak of course, so we'll see when Fall comes).
“How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like <…> searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name”
Even I can build, and have built, search functionality like this. Deterministically. No LLMs or "AI" needed. In fact, for satisfying the above criteria this kind of implementation is still far more reliable.
amluto 2 days ago [-]
I've also written search code like this. It's trivial, at least at the scale of installed apps and such on a single computer.
AI makes it strictly worse. I do not want intelligence. I want to type, for example, "saf" and have Safari appear immediately, in the same place, every time, without popping into a different place as I'm trying to click it because a slower search process decided to displace the result. No "temperature", no randomness, no fancy crap.
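The determinism being asked for here is cheap to get. A toy sketch in Python (hypothetical app list, nothing to do with Spotlight's actual implementation): prefix matches first, then substring matches, ties broken alphabetically, so the same query always yields the same ordering.

```python
# Toy deterministic launcher search: prefix matches rank first, then
# substring matches; ties broken alphabetically. Same query in, same
# ordering out, every time. No temperature, no randomness.
def search(apps, query):
    q = query.lower()
    prefix = sorted(a for a in apps if a.lower().startswith(q))
    substr = sorted(a for a in apps
                    if q in a.lower() and not a.lower().startswith(q))
    return prefix + substr

apps = ["Safari", "Slack", "System Settings", "Xcode", "Final Cut Pro"]
print(search(apps, "saf"))  # typing "saf" always surfaces Safari first
print(search(apps, "s"))
```

The point is that the result list is a pure function of the query, so the item under your cursor never jumps as slower results trickle in.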
olyjohn 2 days ago [-]
Quicksilver worked great back in the day before Spotlight was ever even a thought.
busymom0 2 days ago [-]
I have no idea what happened to my Mac in the last month, but for some reason Spotlight can't search for apps by name anymore. If I search for Safari, it will show me results for everything except the Safari app. I even tried searching for Safari.app and still no results. It can't find any apps.
doctorpangloss 2 days ago [-]
User: yells Feedback into void.
sheerun 1 days ago [-]
k
vouaobrasil 2 days ago [-]
Apple's integration of AI into its MacOS is the one reason why I am considering a switch back to Linux after my current laptop dies.
jw1224 2 days ago [-]
If that’s the one reason, have you considered just… not using the AI features?
doublerabbit 2 days ago [-]
Sure you can for now. But what when it's forced upon you to use them?
jw1224 2 days ago [-]
Well if that hypothetical situation ever happens, you can just switch to Linux then.
sph 2 days ago [-]
Why do you care if they switch now?
jug 1 days ago [-]
There is no real need and the issue is hypothetical?
vouaobrasil 2 days ago [-]
I find it offensive to have any generative AI code on my computer.
dkdcio 2 days ago [-]
I promise you there is Linux code that has been tab-completed with Copilot or similar, perhaps even before ChatGPT ever launched
vouaobrasil 2 days ago [-]
That is true. I actually was ambiguous in my post, because I meant code that generates stuff, not that was generated by AI, even though I don't like the latter, either.
socalgal2 2 days ago [-]
I think I know what you meant. You mean you don't want code that runs generative AI in your computer? But, what you wrote could also mean you don't want any code running that was generated by AI. Even with open source, your computer will be running code generated by AI as most open source projects are using it. I suspect it will be nearly impossible to avoid. Most open source projects will accept AI generated code as long as it's been reviewed.
vouaobrasil 2 days ago [-]
Good point, and you were right. I was ambiguous. I meant a system that generates stuff, not stuff that was generated by AI. But I'd rather not use stuff that was generated by AI, either. But you are also right. That will become impossible, and probably already is. Not a very nice world, I think. Best thing to do then is to minimize it, and avoid computers as much as possible....
azinman2 2 days ago [-]
So, then don’t do that? It’s not like it’s automatically generating code without you asking.
vouaobrasil 2 days ago [-]
I didn't say "generating code", I meant I find it offensive to have any code sitting on my computer that generates code, whether I use it or not. I prefer minimalism: just have on my computer what I will use, and I have a limited data connection which means even more updates with useless code I won't use.
reaperducer 2 days ago [-]
I find it offensive to have any generative AI code on my computer.
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
zapzupnz 1 days ago [-]
The theatrics of being *forced* to use completely optional, opt-in features has been a staple of discussions regarding Apple for years.
Every year, macOS and iPadOS look superficially more and more similar, but they remain distinct in their interfaces, features, etc. But the past 15 years have been "we'll be *forced* to only use Apple-vetted software, just like the App Store!"
And yeah, the Gatekeeper mechanism got less straight-forward to get around in macOS 15, but … I don't know, someone will shoot me down for this, but it's been a long 15 years to be an Apple user with all that noise going on around you from people who really don't have the first clue what they're talking about — and on HN, no less.
They can come back to me when what they say actually happens. Until then, fifteen dang years.
vouaobrasil 1 days ago [-]
Not forced to use, forced to download and waste 2GB of disk space.
reaperducer 1 days ago [-]
I presume you're talking about Apple Intelligence.
It's not forced. It's completely optional. It has to be downloaded.
And if you activate it, then change your mind, you get the disk space back when you turn it off.
vouaobrasil 1 days ago [-]
I have a limited connection, and don't want to update my computer with AI garbage.
reaperducer 1 days ago [-]
So don't. You have to tell the computer to download Apple Intelligence. It doesn't just happen on its own.
Just don't push the Yes button when it offers.
vouaobrasil 1 days ago [-]
Well, I thought it came with the OS update, so I guess I was mistaken then.
antipaul 1 days ago [-]
With a single toggle, you can turn off Apple Intelligence
See (System) Settings
vouaobrasil 1 days ago [-]
But I can't toggle off downloading it, which is 2GB on my limited connection and 2GB of MY disk space.
echelon 2 days ago [-]
This reads like the crotchety and persnickety 60-somethings in the 1990's who said the internet was a passing and annoying fad.
pests 2 days ago [-]
I was musing before sleep days ago about how maybe the internet still is just a fad. We’ve had a few decades of it, yeah, but maybe in the future people will look at it as boring tech just like I viewed VCRs or phones when I was growing up. Maybe we’re still addicted to the novelty of it, but in the future it fades into the background of life.
I’ve read stories about how people were amazed at calling each other and would get together or meet at the local home with a phone installed, a gathering spot, make an event about it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette: it was a wild west. Now it's normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
aquariusDue 1 days ago [-]
A few years ago I would've said you were incredibly cynical, but nowadays with so much AI slop around social media and just tonnes of bad content I tend to agree with you.
I think younger me would think the same. It's not even the AI slop or bad content, but also the intrusive tracking, data collection, and the commercialization of interests. I just feel gross participating.
vouaobrasil 2 days ago [-]
I do think there is a lot of valid criticism of the internet. I certainly don't think it's an annoying fad but I do think it has caused a lot of bad things for humanity. In some ways, life was much better without it, even though there are some benefits.
sph 2 days ago [-]
It is impossible to have a negative opinion of AI without silly comments like this just one step removed from calling you a boomer or a Luddite. Yes all technological progress is good and if you don’t agree you’re a dumb hick.
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
vouaobrasil 1 days ago [-]
I am a Luddite, but I think that's a good thing. I don't mind the negative comments at all. I get them all the time.
echelon 1 days ago [-]
On the other side of that are the people screaming that AI is murder.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
vouaobrasil 1 days ago [-]
It's also exhausting to see endless new applications of AI, even worse IMO.
Joel_Mckay 2 days ago [-]
Actually, most "AI" cults blindly worship at the altar of their own ignorance:
The ML hype-cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, and why your karma still gets incinerated if you point out its obvious limitations in a thread.
Have a wonderful day, =3
sabareesh 2 days ago [-]
Good move. Not sure if they are exposing other modalities as well?
visarga 2 days ago [-]
I guess LLM and AI are forbidden words in Apple language. They do their utmost to avoid these words.
simonw 2 days ago [-]
They took the clever (in my opinion) decision to rebrand "AI" as "Apple Intelligence", presumably partly in order to avoid the infinite tired "it's not really AI" takes that have surrounded that acronym for decades.
meindnoch 2 days ago [-]
It's about as cringe as that Chinese guy with the funny-shaped head, who said a few years ago that AI for him means "alibaba intelligence".
Geee 2 days ago [-]
And that's why we haven't heard of him since then.
iambateman 2 days ago [-]
LLM's get six mentions in this article.
mbowcut2 2 days ago [-]
Nah, I think they made it model agnostic, which is kinda smart.
barbazoo 2 days ago [-]
Search for "large language model" instead of "LLM".
rtaylorgarlock 2 days ago [-]
Because they don't own it, or the models they (don't) own aren't good enough for a standalone brand? Sure seems like it.
1: https://news.ycombinator.com/item?id=44226612
[1] https://developer.apple.com/videos/play/wwdc2025/301/?time=1...
To parent: yes, this is for local models, so insofar as worrying about tokens implies financial cost, yes.
I went into this industry because I grew up fascinated by computers. When I learned how to code, it was about learning how to control these incredible machines. The joy of figuring something out by experimenting is quickly being replaced by just slamming it into some "generative" tool.
I have no idea where things go from here but hopefully there will still be a world where the craft of hand writing code is still valued. I for one will resist the "vibe coding" train for as long as I possibly can.
Where it gets interesting is being pushed into directions that you wouldn't have considered anyway rather than expediting the work you would have already done.
I can't speak for engineers, but that's how we've been positioning it in our org. It's worth noting that we're finding GenAI less practical in design-land for pushing code or prototyping, but insanely helpful for research and discovery work.
We've been experimenting with more esoteric prompts to really challenge the models and ourselves.
Here's a tangible example: Imagine you have an enormous dataset of user-research, both qual and quant, and you have a few ideas of how to synthesize the overall narrative, but are still hitting a wall.
You can use a prompt like this to really get the team thinking:
"What empty spaces or absences are crucial here? Amplify these voids until they become the primary focus, not the surrounding substance. Describe how centering nothingness might transform your understanding of everything else. What does the emptiness tell you?"
or
"Buildings reveal their true nature when sliced open. That perfect line that exposes all layers at once - from foundation to roof, from public to private, from structure to skin.
What stories hide between your floors? Cut through your challenge vertically, ruthlessly. Watch how each layer speaks to the others. Notice the hidden chambers, the unexpected connections, the places where different systems touch.
What would a clean slice through your problem expose?"
LLM's have completely changed our approach to research and, I would argue, reinvigorated an alternate craftsmanship to the ways in which we study our products and learn from our users.
Of course the onus is on us to pick apart the responses for any interesting directions that are contextually relevant to the problem we're attempting to solve, but we are still in control of the work.
Happy to write more about this if folks are interested.
Vibe coding can be whatever you want to make of it. If you want to be prescriptive about your instructions and use it as a glorified autocomplete, then do it. You can also go at it from a high-level point of view. Either way, you still need to code review the AI code as if it was a PR.
Coding with an AI can be whatever one can achieve; however, I don't see how vibe coding is related to autocomplete: with autocomplete you type a bit of code that a program (AI or not) completes. In vibe coding you barely interact with the editor, perhaps only for copy/paste or some corrections. I'm not even sure about the manual "corrections" part if we take Simon Willison's definition [0], which you're not forced to, obviously; however, if there are contradictory views I'll be glad to read them.
0 > If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant
https://arstechnica.com/ai/2025/03/is-vibe-coding-with-ai-gn...
(You may also consider rethinking your first paragraph to bring it up to HN standards, because while the content is pertinent, the form sounds like a youngster trying to demo iKungFu on his iPad to Jackie Chan.)
If your app is worthwhile, and gets popular in a few years, by that time iPhone 16 will be an old phone and a reasonable minimum target.
Skate to where the puck is going...
Or do they have the ability to reach out to the internet for up-to-the-moment information?
The thing macOS really painfully lacks is not ergonomic ways to run Linux VMs, but actual, native containers-- macOS containers. And third parties can't really implement this well without Apple's cooperation. There have been some efforts to do this, but the most notable one is now defunct, judging by its busted/empty website[1] and deleted GitHub organization[2]. It required disabling SIP to work, back when it at least sort-of worked. There's one newer effort that seems to be alive, but it's also afflicted with significant limitations for want of macOS features[3].
That would be super useful and fill a real gap, meeting needs that third-party software can't. Instead, as wmf has noted elsewhere in these comments, it seems they've simply "Sherlock'd" OrbStack.
--
1: https://macoscontainers.org/
2: https://github.com/macOScontainers
3: https://github.com/Okerew/osxiec
Linux container processes run on the host kernel with extra sandboxing. The container image is an easily sharable and runnable bundle.
macOS .app bundles are kind of like container images.
You can sign them to ensure they are not modified, and put them into the “registry” (App Store).
The Swift ABI ensures it will likely run against future macOS versions, like the Linux system APIs.
There is a sandbox system to restrict file and network access. Any started processes inherit the sandbox, like containers.
One thing missing is fine grained network rules though - I think the sandbox can just define “allow outbound/inbound”.
Obviously “.app”s are not exactly like container images, but they do cover many of the same features.
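To illustrate the manifest side of that analogy: an .app bundle's Info.plist plays roughly the role an OCI image manifest does, a declarative description of identity, version, and entrypoint. A minimal, hand-rolled sketch (the bundle ID and values are made up for illustration):

```python
import plistlib

# A minimal, hand-rolled Info.plist: the .app analogue of an OCI image
# manifest, declaring identity ("image name"), version ("tag"), and the
# executable ("entrypoint") in one file inside the bundle.
info_plist = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleIdentifier</key><string>com.example.demo</string>
    <key>CFBundleShortVersionString</key><string>1.0</string>
    <key>CFBundleExecutable</key><string>Demo</string>
</dict>
</plist>
"""

manifest = plistlib.loads(info_plist)
# In container-speak: image name and entrypoint
print(manifest["CFBundleIdentifier"], manifest["CFBundleExecutable"])
```

Tooling can inspect a bundle the way `docker inspect` reads a manifest; what's missing, as the thread notes, is the kernel-enforced isolation around it.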
You don't get that in macOS. It's more of a jail than a sandbox. For example, as an app you can't, as far as I know, shell out and install homebrew and then invoke homebrew and install, say, postgres, and run it, all without affecting the user's environment. I think that's what people mean when they say macOS lacks native containers.
https://github.com/apple/containerization/blob/d1a8fae1aff6f...
If the sandboxing features a native containerization system relied on were also exposed via public APIs, those could also potentially be leveraged by developer tools that want better sandboxing on macOS. Docker and BuildKit have native support for Windows containers, for instance. If they could also support macOS the same way, that would be cool for facilitating isolated macOS builds without full-fat VMs. Tools like Dagger could then support more reproducible build pipelines on macOS hosts.
It could also potentially provide better experiences for tools like devcontainers on macOS as well, since sharing portions of your filesystem to a VM is usually trickier and slower than just sharing those files with a container that runs under your same kernel.
For many of these use cases, Nix serves very well, giving "just enough" isolation for development tasks, but not too much. (I use devenv for this at work and at home.) But Nix implementations themselves could also benefit from this! Nix internally uses a sandbox to help ensure reproducible builds, but the implementation on macOS is quirky and incomplete compared to the one on Linux. (For reasons I've since forgotten, I keep it turned off on macOS.)
https://tart.run
https://github.com/cirruslabs/cirrus-cli
One clever and cool thing Tart actually does that sort of relates to this discussion is that it uses the OCI format for distributing OS images!
(It's also worth noting that Tart is proprietary. Some users might prefer something that's either open-source, built-in, or both.)
the firewall tools are too clunky (and imho unreliable).
https://devenv.sh/
UV is pretty good for python too.
Do you think people would be developing and/or distributing end user apps via macOS containers?
Read more about it here - https://github.com/darwin-containers
The developer is very responsive.
One of Apple's biggest value props to other platforms is environment integrity. This is why their containerization / automation story is worse than e.g. Android.
Is there a VM technology that can make Linux aware that it's running in a VM, and be able to hand back the memory it uses to the host OS?
Or maybe could Apple patch the kernel to do exactly this?
Running Docker in a VM always has been quite painful on Mac due to the excess amount of memory it uses, and Macs not really having a lot of RAM.
Isn't this an issue of the hypervisor? The guest OS is just told it has X amount of memory available, whether this memory exists or not (hence why you can overallocate memory for VMs), whether the hypervisor will allocate the entire amount or just what the guest OS is actually using should depend on the hypervisor itself.
How can the hypervisor know which memory the guest OS is actually using? It might have used some memory in the past and now no longer needs it, but from the POV of the hypervisor it might as well be used.
This is a communication problem between hypervisor and guest OS, because the hypervisor manages the physical memory but only the guest OS known how much memory should actually be used.
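The usual answer to this communication problem is a balloon driver: a cooperative component inside the guest that "inflates" by pinning pages the guest isn't using, which the hypervisor can then reclaim. A toy accounting model (no real hypervisor API, just the bookkeeping idea):

```python
# Toy model of memory ballooning. The guest knows which pages are free;
# the hypervisor doesn't. The balloon driver bridges the gap: it pins
# free pages inside the guest, and the hypervisor reclaims exactly those.
class Guest:
    def __init__(self, total_pages):
        self.total = total_pages
        self.used = 0       # pages the guest OS actually needs
        self.balloon = 0    # pages pinned by the balloon driver

    def free_pages(self):
        return self.total - self.used - self.balloon

    def inflate(self, requested):
        # The guest cooperates: it can only surrender pages it isn't using.
        grant = min(requested, self.free_pages())
        self.balloon += grant
        return grant  # the hypervisor reclaims this many pages

guest = Guest(total_pages=1024)
guest.used = 300
reclaimed = guest.inflate(800)  # host asks for 800, only 724 are free
print(reclaimed, guest.free_pages())
```

Without the driver, the hypervisor would have to treat every page the guest ever touched as in use, which is exactly the overallocation people see with Docker-in-a-VM.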
Apparently Docker for Mac and Windows uses these, but in practice docker containers tend to grow quite large in terms of memory, so I'm not quite sure how well it works; it certainly overallocates compared to running Docker natively on a Linux host.
add:

  [experimental]
  autoMemoryReclaim=gradual

to your .wslconfig
See: https://learn.microsoft.com/en-us/windows/wsl/wsl-config
I chased the package’s source and indeed it’s pointing to this repo.
You can install and use it now on the latest macOS (not 26). I just ran “container run nginx” and it worked alright it seems. Haven’t looked deeper yet.
That said, I'd think Apple would actually be much better positioned to try the WSL1 approach. I'd assume macOS is a lot closer to Linux than Windows is.
[0] https://devblogs.microsoft.com/commandline/announcing-wsl-2/...
Maintaining a working duplicate of the kernel-userspace interface is a monumental and thankless task, and especially hard to justify when the work has already been done many times over to implement the hardware-kernel interface, and there's literally Hyper-V already built into the OS.
I think Apple’s main hesitation would be that the Linux userland is all GPL.
[1]: https://docs.freebsd.org/en/books/handbook/linuxemu/
There’s a huge opportunity for Apple to make kernel development for xnu way better.
Tooling right now is a disaster — very difficult to build a kernel and test it (eg in UTM, etc.).
If they made this better and took more of an OSS openness posture like Microsoft, a lot of incredible things could be built for macOS.
I’ll bet a lot of folks would even port massive parts of the kernel to rust for them for free.
1. Creating and configuring a virtual machine
2. Allocating guest memory
3. Creating virtual CPUs
4. Setting registers
5. Running guest code
6. Handling VM exits
Apple's stack gives you low-level access to ARM virtualization, and from there Apple has high-level convenience frameworks on top. OrbStack implements all of the high-level code themselves.
Native Linux (and Docker) support would be something like WSL1, where Windows kernel implemented Linux syscalls.
It's possible that Apple has implemented a similar hypervisor here.
XNU is modular, with its BSD servers on top of Mach. I don’t see this as being a strong advantage of NT.
I think it is the Unix side that decided to bury their heads in the sand. We got Linux. It is free (of charge and of licensing). It supported files, basic drivers, and sockets. It got commercial support for servers. It was all Silicon Valley needed for startups. Anything else is a cost. So nobody cared. Most of the open source microkernel research slowly died after Linux. There is still some with the L4 family.
Now we are overengineering our stacks to get closer to microkernel capabilities that Linux lacks, using containers. I don't want to say it is ripe for disruption, because it is hard and, again, nobody cares (except some network and security equipment, but that's a tiny fraction).
You say this, but then proceed to state that it had a very good design back then informed by research, and still is today. Doesn't that qualify? :-)
NT brought a HAL, proper multi-user ACLs, subsystems in user mode (that alone is amazing, even though they sadly never really gained momentum), preemptive multitasking. And then there's NTFS, with journaling, alternate streams, and shadow copies, and heaps more. A lot of it was very much ahead of UNIX at the time.
> nobody else cares about core OS design anymore.
Agree with you on that one.
I meant that NT was a product that matched the state-of-the-art OS design of its time (the 90s). It was the Unix world that decided to stay behind in the 80s forever.
NT was ahead not because it is breaking ground and bringing in new design aspects of 2020s to wider audiences but Unix world constantly decides to be hardcore conservative and backwards in OS design. They just accept that a PDP11 simulator is all you need.
It is similar to how NASA got stuck with 70s/80s design of Shuttle. There was research for newer launch systems but nobody made good engineering applications of them.
That's their phrasing, which suggests to me that it's just a virtualization system. Linux container images generally contain the kernel.
No, containers differ from VMs precisely in requiring dependency on the host kernel.
That's how Docker works on WSL2: run it on top of a virtualized Linux kernel. WSL2 is pretty tightly integrated with Windows itself, but it's still a Linux VM. It seems kinda weird for Apple to reinvent the wheel for that kind of thing for containers.
Can't edit my posts on mobile, but realized that's, what's the word, not useful... But yeah, sharing the kernel between containers while otherwise isolating them allegedly gives them VM-esque security without the overhead of separate VMs for each image. There's a lot more to it, but you get the idea.
I know the container ecosystem largely targets Linux just curious what people’s thoughts are on that.
Good read from the horse's mouth:
https://developer.apple.com/library/archive/documentation/Da...
https://blog.jessfraz.com/post/containers-zones-jails-vms/
Jails are first-class citizens that are baked deep into the system.
A tool like Docker relies using multiple Linux features/tools to assemble/create isolation.
Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
Someone correct me please.
Both very true statements and worth remembering when considering:
> Additionally, iirc, the logic for FreeBSD jails never made it into the Darwin kernel.
You are quite correct, as Darwin is based on XNU[0], which itself has roots in the Mach[1] microkernel. Since XNU[0] is an entirely different OS architecture from that of FreeBSD[3], jails[4] do not exist within it.
The XNU source can be found here[2].
0 - https://en.wikipedia.org/wiki/XNU
1 - https://en.wikipedia.org/wiki/Mach_(kernel)
2 - https://github.com/apple-oss-distributions/xnu
3 - https://cgit.freebsd.org/src/
4 - https://man.freebsd.org/cgi/man.cgi?query=jail&apropos=0&sek...
Docker isn't providing any of the underlying functionality. BSD jails and Linux cgroups etc aren't fundamentally different things.
> Jails create a safe environment independent from the rest of the system. Processes created in this environment cannot access files or resources outside of it.[1]
While you can accomplish similar tasks, they are not equivalent.
Assume Linux containers are jails, and you will have security problems. And on the flip side, k8s pods share UTS, IPC, and network namespaces, yet have independent PID and FS namespaces.
Depending on your use case they may be roughly equivalent, but they are fundamentally different approaches.
[1] https://freebsdfoundation.org/freebsd-project/resources/intr...
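The namespace half of this is easy to poke at on any Linux host; the primitives that runtimes compose are ordinary kernel objects, visible per-process (assumes a Linux box, no Docker needed):

```shell
# Every Linux process carries one reference per namespace type. Container
# runtimes create fresh ones (pid, net, mnt, ...) and assemble isolation
# out of them; a jail bundles the equivalent policy into one kernel facility.
ls /proc/self/ns
```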
https://en.wikipedia.org/wiki/HP_Integrity_Virtual_Machines
https://en.m.wikipedia.org/wiki/HP-UX
What you searched for is an evolution of it.
I like to read bibliographies for that reason—to read books that inspired the author I’m reading at the time. Same goes for code and research papers!
With WSL2 you get the best of both worlds. A system with perfect driver and application support and a Linux-native environment. Hybrid GPUs, webcams, lap sensors etc. all work without any configuration effort. You get good battery life. You can run Autodesk or Photoshop but at the same time you can run Linux apps with almost no performance loss.
Microsoft frequently tweaks syscall numbers, and they make it clear that developers must access functions through e.g. NTDLL. Mac OS at least has public source files used to generate syscall.h, but they do break things, and there was a recent incident where Go programs all broke after a major OS update. Now Go uses libSystem (and dynamic linking)[2].
[1] https://j00ru.vexillium.org/syscalls/nt/64/
[2] https://go.dev/doc/go1.11#runtime
On the Windows side, the syscall ABI has been stable since Server 2022, to allow running mismatched container releases.
Apple looks like it's skipped the failed WSL1 and gone straight for the more successful WSL2 approach.
https://developer.apple.com/videos/play/wwdc2025/346/
That's an interesting difference from other Mac container systems. Also (more obviously) it uses Rosetta 2.
What seems to be different here, is that a VM per each container is the default, if not only, configuration. And that instead of mapping ports to containers (which was always a mistake in my opinion), it creates an externally routed interface per machine, similar to how it would work if you'd use macvlan as your network driver in Docker.
Both of those defaults should remove some sharp edges from the current Linux-containers on macOS workflows.
They sold Docker Desktop for Mac, but that might start being less relevant and licenses start to drop.
On Linux there’s just the cli, which they can’t afford to close since people will just move away.
Docker Hub likely can’t compete with the registries built into every other cloud provider.
The equivalent of Electron for containers :)
Once you have an engine, podman (or docker) might be the best choice to manage containers.
I'm the primary author of amalgamation of GitHub's scripts to rule them all with docker compose so my colleagues can just type `script/setup` and `script/server` (and more!) and the underlying scripts handle the rest.
Apple including this natively is nice, but I won't be able to use this because my scripts have to work on Linux and probably WSL.
Oh, wait.
https://github.com/abiosoft/colima
That is what I have been using since 2010, until WSL came to be, it has been ages since I ever dual booted.
https://orbstack.dev/pricing
Orbstack owners are going to be fuming at this news!
https://learn.microsoft.com/en-us/windows/wsl/compare-versio...
https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
It seems like a big step in the right direction to me. It's hard to tell if it's 100% compatible with Docker or not, but the commands shown are identical (other than swapping docker for container).
Even if it's not 100% compatible, this is huge news.
This sounds like apple announced 2 things, AI models and container related stuff I'd change it to something like:
> Apple Announces Foundation Models, Containerization frameworks, more tools
[0]: https://en.wikipedia.org/wiki/Title_case
"Foundation Models" is an Apple product name for a framework that taps into a bunch of Apple's on-device AI models.
https://machinelearning.apple.com/research/introducing-apple...
https://machinelearning.apple.com/research/apple-intelligenc...
The architecture is such that the model can be specialized by plugging in more task-specific fine-tuning models as adapters, for instance one made for handling email tasks.
At least in this version, it looks like they have only enabled use of one fine-tuning model (content tagging)
Why should I bother then, as a third-party developer? Sure, it's nice not having API costs for 25% of users, but those models are very small (equivalent to Qwen2.5 4B or so) and their online models are supposedly equivalent to Llama Scout. Those models are already very cheap online, so why have a more complicated codebase? Maybe in two years, once more iOS users replace their phones, but I'm unlikely to use this for iOS development in the next year.
This would be more interesting if all iOS 26 devices at least had access to their server models.
Secure Boot on other platforms is all-or-nothing, but Apple recognizes that Mac users should have the freedom to choose exactly how much to peel back the security, and should never be forced to give up more than they need to. So for that reason, it's possible to have a trusted macOS installation next to a less-trusted installation of something else, such as Asahi Linux.
Contrast this with others like Microsoft who believe all platforms should be either fully trusted or fully unsupported. Google takes this approach with Android as well. You're either fully locked in, or fully on your own.
I'm not sure what you mean by that. You can trivially root a Pixel factory image. And if you're talking about how they will punish you for that by removing certain features: Apple does that too (but to a lesser extent).
https://github.com/cormiertyshawn895/RecordingIndicatorUtili...
On many Android devices, unlocking the boot loader at any point will also permanently erase the DRM keys, so you will never again be able to watch high resolution Netflix (or any other app that uses Widevine), even if you relocked the bootloader and your OS passed verified boot checks.
On a Mac, you don't need to "unlock the bootloader" to do anything. Trust is managed per operating system. As long as you initially can properly authenticate through physical presence, you totally can install additional operating systems with lower levels of trust and their existence won't prevent you from booting back into the trusted install and using protected experiences such as Apple Pay. Sure, if you want to modify that trusted install, and you downgrade its security level to implement this, then those trusted experiences will stop working (such as Apple Pay, iPhone Mirroring, and 4K Netflix in Safari, for instance), but you won't be rejected by entire swathes of the third-party app ecosystem and you also won't lose the ability to install a huge fraction of Mac apps (although iOS and iPadOS apps will stop working). You also won't necessarily be prevented from turning the security back up once you're done messing around, and gaining every one of those experiences back.
So sure, you can totally boil it down to "Apple still punishes you, only a bit less", but not only do they not even punish your entire machine the way Microsoft and Google do, but they even only punish the individual operating system that has the reduced security, don't punish it as much as Microsoft and Google do, and don't permanently lock things out just because the security has ever been reduced in the past.
Do keep in mind though, the comparison to Android is a bit unfair anyway, because Apple's equivalent to the Android ecosystem is (roughly; excluding TV and whatever for brevity) iPhone and iPad, and those devices have never and almost certainly will never offer anything close to a bootloader unlock. I had just used it as an example of the all-or-nothing approach. Obviously Apple's iDevice ecosystem doesn't allow user tampering at all, not even with trusted experiences excluded.
Fun fact though: The Password category in System Settings will disappear over iPhone Mirroring to prevent the password from being changed remotely. Pretty cool.
I used to tweak/mod Android and most recently preferred customizing the OEM install over forks. I stopped doing that when TWRP ran something as OpenRecoveryScript and immediately wiped the phone without giving me any opportunity to cancel. My most recent Android phone I never bothered to root. I may never mod Android again.
Its reasonable to install a different OS on Android, even if some features don't work. I've done this, my friends and family have done this, I've seen it IRL.
I've never seen anyone do this on iPhone in my entire life.
But I flipped and I'm a Google hater. Expensive phones and no aux port. At least I can get cheap androids still.
My comment's about macOS. Even though it's a completely different market segment than Android, I'm only using Android as an example.
Alternatively, read about iBoot. Haha, just kidding! There is no documentation for iBoot, unlike there is for uBoot and Clover and OpenCore and SimpleBoot and Freeloader and systemd-boot. You're just expected to... know. Yunno?
[1] https://mac-classic.com/articles/open-firmware-basics/
For power management, you can however give some credit to ACPI, which is not directly related to UEFI (it predates it), but is likewise an open standard, and is generally found on the same devices as UEFI (i.e. PCs and ARM servers). ACPI also provides the initial gateway to PCIe, another open standard; so if you have a discrete video card then you can theoretically access it without chipset-specific drivers (but of course you still need a driver for the card itself).
But for onboard video, and I believe a good chunk of power management as well, the credit goes to drivers written for Linux by the hardware vendors.
I wouldn't want a numpad. A track point would be ape.
I struggle with keyboard recommendations b/c I'm not fully satisfied lol.
Several small things combined make it really different to the experience that I have with a desktop OS. But it is nice as side device
It's irritatingly bad at consuming media and browsing the web. No ad blocking, so every webpage is an ad-infested wasteland. There are so many ads in YouTube and streaming music. I had no idea.
It's also kind of a pain to connect to my media library. Need to figure out a better solution for that.
So, as a relatively new iPad user it's pleasantly useful for select work tasks. Not so great at doomscrolling or streaming media. Who knew?
[0]: https://kaylees.site/wipr2.html
I just got a Macbook and haven't touched my iPad Pro since, I would think I could make a change faster on a Macbook then iPad if they were both in my bag. Although I do miss the cellular data that the iPad has.
The majority of the world are using their phones as a computing device.
And as someone with a MacBook and iPad the later is significantly more ergonomic.
Every single touch screen laptop I’ve seen has huge reflection issues, practically being mirrors. My assumption is that in order for the screen to not get nasty with fingerprints in no time, touchscreen laptops need oleophobic coating, but to add that they have to use no antiglare coating.
Personally I wouldn’t touch my screen often enough to justify having to contend with glare.
They could have gone the direction of just running MacOS on it, but clearly they don't want to. I have a feeling that the only reason MacOS is the way it is, is because of history. If they were building a laptop from scratch, they would want it more in their walled garden.
I'm curious to see what a "power user" desktop with windowing and files, and all that stuff that iPad is starting to get, ultimately looks like down this alternative evolutionary branch.
I think Microsoft was a little too eager to fuse their tablet and desktop interface. It has produced some interesting innovations in the process but it's been nowhere near as polished as ipadOS/macOS.
On the other hand, I have come to love having a reading/writing/sketching device that is completely separate from my work device. I can't get roped into work and emails and notifications when I just want to read in bed. My iPad Mini is a truly distraction-free device.
I also think it would be hard to have a user experience that works great both for mobile work and sitting-at-a-desk work. I returned my Microsoft Surface because of a save dialog in a sketching app. I did not want to do file management because drawing does not feel like a computing task. On the other hand, I do want to deal with files when I'm using 3 different apps to work on a website's files.
If you are a developer or a creative however, then a Mac is still very useful.
No! It's not - and it's dangerous to propagate this myth. There are so many arbitrary restrictions on iPad OS that don't exist on MacOS. Massive restrictions on background apps - things like raycast (MacOS version), Text Expander, cleanshot, popclip, etc just aren't possible in iPad OS. These are tools that anyone would find useful. No root/superuser access. I still can't install whatever apps I want from whatever sources I want. Hell, you can't even write and run iPadOS apps in a code editor on the iPad itself. Apple's own editor/development tool - Xcode - only runs on MacOS.
The changes to window management are great - but iPad and iPadOS are still extremely locked down.
For the same price, you still get a better mac.
In education or corporate settings, where account management is centralized, you want each person who uses an iPad to access their own files, email, etc.
Parents and spouses would appreciate if they could take the multiple user experience for tvOS and make it an option for iPadOS.
Auth should be Apple Business Manager; image serving should be passive directories / cloud buckets.
Haven’t tried it though, still using JamF.
https://www.apple.com/business/essentials/
I dgaf what the UI looks like. It’s fine.
1. iPadOS has a lot of software either built for the "three share sheets to the wind" era of iPadOS, or lazily upscaled from an iPhone app, and
2. iPadOS does not allow users to tamper with the OS or third-party software, so you can't fix any of this broken mess.
Video editing and 3D would be possible on iPadOS, but for #1. Programming is genuinely impossible because of #2. All the APIs that let Swift Playgrounds do on-device development are private APIs and entitlements that third-parties are unlikely to ever get a provisioning profile for. Same for emulation and virtualization. Apple begrudgingly allows it, but we're never going to get JIT or hypervisor support[0] that would make those things not immediately chew through your battery.
[0] To be clear, M1 iPads supported hypervisor; if you were jailbroken on iPadOS 14.5 and copied some files over from macOS you could even get full-fat UTM to work. It's just a software lockout.
It looks like each container will run in its own VM, that will boot into a custom, lightweight init called vminitd that is written in Swift. No information on what Linux kernel they're using, or whether these VMs are going to be ARM only or also Intel, but I haven't really dug in yet [1].
[0] https://developer.apple.com/videos/play/wwdc2025/346
[1] https://github.com/apple/containerization
Actually, they explain it in detail here: https://github.com/apple/containerization/issues/70#issuecom...
It's unclear whether this will keep being supported in macOS 28+, though: https://github.com/apple/container/issues/76, https://www.reddit.com/r/macgaming/comments/1l7maqp/comment/...
Apple Intelligence models primarily run on-device, potentially reducing app bundle sizes and the need for trivial API calls.
Apple's new containerization framework is based on virtual machines (VMs) and not a true 'native' kernel-level integration like WSL1.
Spotlight on macOS is widely perceived as slow, unreliable, and in significant need of improvement for basic search functionalities.
iPadOS and macOS are converging in terms of user experience and features (e.g., windowing), but a complete merger is unlikely due to Apple's business model, particularly App Store control and sales strategies.
The new 'Liquid Glass' UI design evokes older aesthetics like Windows Aero and earlier Aqua/skeuomorphism, indicating a shift away from flat design.
Full summary (https://extraakt.com/extraakts/apple-intelligence-macos-ui-o...)
This doesn’t sound impressive, it sounds insane.
Containerization is a Swift package for running Linux containers on macOS - https://news.ycombinator.com/item?id=44229348 - June 2025 (158 comments)
Container: Apple's Linux-Container Runtime - https://news.ycombinator.com/item?id=44229239 - June 2025 (11 comments)
See also:
- https://edu.chainguard.dev/chainguard/chainguard-images/abou...
- https://andygrove.io/2020/05/why-musl-extremely-slow/
https://machinelearning.apple.com/research/apple-foundation-...
edit: For those curious, https://youtu.be/51iONeETSng?t=3368.
- New theme inspired by Liquid Glass
- 24-bit colour
- Powerline fonts
For everything else, they would rather see devs stay on their platforms; see the official tier 1 scenarios on swift.org.
Is this the first time Apple has offered something substantial for the App store fees beyond the SDK/Xcode and basic app distribution?
Is it a way to give developers a reason to limit distribution to only the official App Store, or will this be offered regardless of what store the app is downloaded from?
They've offered 25hrs/mo of Xcode Cloud build time for the last couple years.
Bad news.
I wish I thought that the Game Porting Toolkit 3 would make a difference, but I think Apple's going to have to incentivize game studios to use it. And they should; Apple Silicon is good enough to run a lot of games.
... when are they going to have the courage to release MacOS Bakersfield? C'mon. Do it. You're gonna tell me California's all zingers? Nah. We know better.
That's... kinda what Apple is famous for.
Ultimately UI widgets are rooted in reality (switches, knobs, doohickeys) and liquid glass is Salvador-Dali-Esque.
Imagine driving a car and the gear shifter was made of liquid glass… people would hit more grannies than a self-driving Tesla.
Don’t use macOS but had just kinda assumed it would by virtue of shared unixy background with Linux
I'm confused
https://katacontainers.io/
https://developer.apple.com/documentation/hypervisor
Very good to see Xcode LLM improvements!
> I use VSCode Go daily + XCode Swift 6 iOS 18 daily
Their hardware across the board is fairly powerful (definitely not top end), and they have a good API stack, especially with Metal. And they have systems at all levels, including TV. If they were to just make a standard controller, or just say "the PS5 DualShock is our choice", they could have a nice little slice for themselves.
I'm assuming this is an updated version of those.
I am excited to see what the benchmarks look like though, once it's live.
https://huggingface.co/apple
I finally gave up and bought a Mini6 a year or two ago, which gets.... also minimal use. And I'm sure not buying ANOTHER tablet we're not going to use.
If they were multi-user I actually think we'd both get more value out of it, and upgrade our one device more often.
I get it, but an iPad starts at $349; often available for less.
At this point, an iPad is no different than a phone—most people wouldn't share a single tablet.
Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
It's less about the cost and more about having to have another stupid device to charge, update, and keep track of, when a tablet is not a device that gets used enough by any one person to be worth all that. It would be much more convenient to have a single device on a coffee or end table which all family members could use when they need to do more than you can do on a phone.
> Laptops and desktops that run macOS, Linux, Windows which are multiuser operating systems have largely become single-user devices.
Maybe. Probably 90% of work laptops are single-user, I'm sure. But for home computers, multi-user can be very useful. And it's better than ever to use laptops as dumb terminals, since all most people's stuff is in the cloud. It's not nearly as much trouble to get your secondary user account on a spare laptop in the living room to be useful as it was in the Windows XP days. Just having a browser that's signed into your stuff, plus Messages or Whatsapp, and maybe Slack/Discord/etc. is enough.
> most people wouldn't share a single tablet.
Since iPads have never supported doing so in a sane way, that unfounded assertion is just as likely a reflection of the fact that sharing one today is a terrible experience: someone else will accidentally mark your messages as read, you'll pollute their browser or YouTube history, etc.
It's also the kind of dismissive claim true Apple believers tend to trot out when someone points out a shortcoming: "Nobody wants to use a touchscreen laptop!" "Nobody wants USB-C on an iPhone when Lightning is slightly smaller!" "Nobody needs an HDMI port or SD slot on a MacBook Pro!" "Nobody needs a second port on the 12-inch MacBook!" Most of the above things have come true except the touch laptop, and somehow it hasn't hurt anyone, but the "nobody wants..." crew immediately stops when Apple finally [re-]embraces something
Having profiles for the kids however would be nice though. But most apps have that built in themselves.
I find it madness that Apple doesn't have this already.
Edit: surprised Apple is dumping resources into gaming; maybe they are playing the long game here?
…10 Central and Mountain.
Looks like software UI design – just like fashion, film, architecture and many other fields I'm sure – has now officially entered the "nothing new under the sun" / "let's recycle ideas from xx years ago" stage.
https://en.wikipedia.org/wiki/Aqua_%28user_interface%29
To be clear, this is just an observation, not a judgment of that change or the quality of the design by itself. I was getting similar vibes from the recent announcement of design changes in Android.
This was posted in another HN thread about Liquid Glass: https://imgur.com/a/6ZTCStC . I'm sure Apple will tweak the opacity before it goes live, but this looks horribly insane to me.
But I'm not so sure if I want transparent.
I remember the catastrophe of Windows Vista, and how you needed a capable GPU to handle the glass effect. Otherwise, one of your (maybe two) CPU cores would have to process all that overhead.
They are heading in a good direction, it just needs to be toned down. But like any new graphics technology the first year is the "WOW WE CAN DO X!!!!" then the more tame stuff comes along.
[1]: https://aesthetics.fandom.com/wiki/Frutiger_Aero
Maybe this is consequence of the Frutiger Aero trend, and that users miss the time where user interfaces were designed to be cool instead of only useful
Usability feels it has only been down since Windows 7. (on another hand, Windows has plenty of accessibility features that help a lot in restoring usability)
Sebastiaan de With of Halide fame did a writeup about this recently, and I think he makes some great points.
https://www.lux.camera/physicality-the-new-age-of-ui/
Read on and:
They are completely dynamic: inhabiting characteristics that are akin to actual materials and objects. We’ve come back, in a sense, to skeuomorphic interfaces — but this time not with a lacquer resembling a material. Instead, the interface is clear, graphic and behaves like things we know from the real world, or might exist in the world. This is what the new skeuomorphism is. It, too, is physicality.
Well worth reading for the retrospective of Apple's website taking a twenty year journey from flatland and back.
Proof of a well-designed UI is stability, not change.
Reads to me strongly of an effort to give traditional media something shiny to put above the headline and keep the marketing engine running.
Apple will spend 10x the effort to tell you why a useless feature is necessary before they look at user feedback.
My only guess is this style looks better while using the product but not while looking at screenshots or demos built off Illustrator or whatever they’re using.
It was too slow and was later optimized away to run off of pre-rendered assets with some light typical style engine procedural code.
Feels like someone just dusted off the old vision now that the compute is there.
https://www.yahoo.com/lifestyle/why-gen-z-infatuated-frutige...
https://en.wikipedia.org/wiki/Frutiger_Aero
Showing off the pulsating buttons he said something like "we have these processors that can do billions of calculations of second, we might as well use them to make it look great".
And yet a decade later, they were undoing all of that to just be flat and boring. I'm glad they are using the now trillions of calculations a second to bring some character back into these things.
A decade later they were handling the windfall that came with smartphone ascendancy, and the emergence of an entirely new design language for touchscreen UI. Skeuomorphism was slowing that all down.
Making it all flat meant making it consistent, which meant making it stable, which meant scalability. iOS7 made it so that even random developers' apps could play along and they needed a lot of developers playing along.
We were in a flat era for the last several years, this kicks off the next 3D era.
P4: Foundation models will get newbies involved, but aren't ready to displace other model providers.
P4: New containers are ergonomic when sub-second init is required, but otherwise no virtualization news.
P2: Concurrency is now visible in Instruments and debuggable, and high-performance tracing avoids sampling errors; are we finally done with our 4+ years of black-box guesswork? (Not to mention concurrency backtracking to main-thread-by-default as a solution.)
P5: UI look-and-feel changes across all platforms conceal the fact that there are very few new APIs.
Low content overall: Scan the platforms, and you see only L&F, app intents, widgets. Is that really all? (thus far?) - It's quite concerning.
Also low quality: online links point nowhere, and half-baked technologies are filling presentation slots: Swift+Java interop is nowhere near usable, other topics just point to API documentation, and "code-along" sessions restate other sessions.
Beware the new upgrade forcing function: adding to the memory requirements of AI, the new concurrency tracing seems to require M4+ level device support.
How about starting with reliably, deterministically, and instantly (say <50ms) finding obvious things like installed apps when searching by a prefix of their name? As a second criterion, I would like to find files by substrings of their name.
Spotlight is unbelievably bad and has been unbelievably bad for quite a few years. It seems to return things slowly, in erratic order (the same search does not consistently give the same results) and unreliably (items that are definitely there regularly fail to appear in search results).
[0]: https://www.apple.com/newsroom/2025/06/macos-tahoe-26-makes-...
Even I can build, and have built, search functionality like this. Deterministically. No LLMs or “AI” needed. In fact, for satisfying the above criteria this kind of implementation is still far more reliable.
AI makes it strictly worse. I do not want intelligence. I want to type, for example, "saf" and have Safari appear immediately, in the same place, every time, without popping into a different place as I'm trying to click it because a slower search process decided to displace the result. No "temperature", no randomness, no fancy crap.
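For what it's worth, the deterministic behavior described above is trivial to sketch. Here is a minimal Swift example (the app names and the exact ranking rule are illustrative assumptions, not how Spotlight actually works): prefix matches rank first, then name-substring matches, with alphabetical tie-breaking and no randomness anywhere.

```swift
import Foundation

// Deterministic search sketch: prefix hits first, then substring hits,
// each group sorted alphabetically. Same query, same order, every time.
func search(_ query: String, in names: [String]) -> [String] {
    let q = query.lowercased()
    guard !q.isEmpty else { return [] }
    let prefixHits = names
        .filter { $0.lowercased().hasPrefix(q) }
        .sorted()
    let substringHits = names
        .filter { !$0.lowercased().hasPrefix(q) && $0.lowercased().contains(q) }
        .sorted()
    return prefixHits + substringHits
}

// Illustrative app list, not a real installed-app scan.
let apps = ["Safari", "Music", "Xcode", "Final Cut Pro", "Freeform"]
print(search("saf", in: apps))  // ["Safari"], in the same place every time
```

Nothing here needs a model, a network call, or a temperature parameter; the whole thing is a couple of filters over a list that the OS already has.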
Settings → Apple Intelligence and Siri → toggle Apple Intelligence off.
It's not enabled by default. But in case you accidentally turned it on, turning it off gets you a bunch of disk space back as the AI stuff is removed from the OS.
Some people are just looking for a reason to be offended.
Every year, macOS and iPadOS look superficially more and more similar, but they remain distinct in their interfaces, features, etc. Yet for the past 15 years the refrain has been "we'll be *forced* to only use Apple-vetted software, just like the App Store!"
And yeah, the Gatekeeper mechanism got less straightforward to get around in macOS 15, but … I don't know, someone will shoot me down for this, but it's been a long 15 years to be an Apple user with all that noise going on around you from people who really don't have the first clue what they're talking about, and on HN, no less.
They can come back to me when what they say actually happens. Until then, fifteen dang years.
It's not forced. It's completely optional. It has to be downloaded.
And if you activate it, then change your mind, you get the disk space back when you turn it off.
Just don't push the Yes button when it offers.
See (System) Settings
I’ve read stories about how people were amazed at calling each other and would gather at the local home that had a phone installed, a gathering spot, and make an event of it. Now it’s boring background tech.
We kind of went through a phase of this with the introduction of webcams. Omegle, Chatroulette, it was the Wild West. Now it’s normalized, standard for work with the likes of Zoom, with FaceTiming just being normal.
Now the Cyberpunk pen-and-paper RPG seems prophetic if you turn your head sideways a bit https://chatgpt.com/share/684762cc-9024-800e-9460-d5da3236cd...
AI maximalists are like those 100 years ago that put radium everywhere, even in toothpaste, because new things are cool and we’re so smart you need to trust us they won’t cause any harm.
I’ll keep brushing my teeth with baking soda, thank you very much.
There are lots of folks like this, and it's getting exhausting that they make being anti-AI their sole defining character trait: https://www.reddit.com/r/ArtistHate
https://www.youtube.com/watch?v=sV7C6Ezl35A
The ML hype-cycle has happened before... but this time everyone is adding more complexity to obfuscate the BS. There is also a funny callback to YC in the Lisp story, which explains why your karma still gets incinerated if one points out its obvious limitations in a thread.
Have a wonderful day, =3