AWS Adds support for nested virtualization (github.com)
boulos 59 minutes ago [-]
I feel vindicated :). We put in a lot of effort with great customers to get nested virtualization running well on GCE years ago, and I'm glad to hear AWS is coming around.

You can tell people to just do something else, that there's probably a more natural solution, etc., but sometimes you're willing to sacrifice some peak performance just to have that uniformity of operations and control.

anurag 2 hours ago [-]
This is a big deal because you can now run Firecracker/other microVMs in an AWS VM instead of expensive AWS bare-metal instances.

GCP has had nested virtualization for a while.

iJohnDoe 2 hours ago [-]
Was hoping this comment would be here. Firecracker and microVMs are a good use case. Also, being able to simply test and develop is a nice-to-have.

Nested virtualization can mean a lot of things. Not just full VMs.

parhamn 2 hours ago [-]
what's the ~perf hit of something like this?
otterley 2 hours ago [-]
As a practical matter, anywhere from 5-15%.
largbae 2 hours ago [-]
Nowadays nesting just adds extra operating-system overhead, and costs I/O performance if your VM doesn't have paravirtualization drivers installed. CPUs all have hardware support.
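
A quick way to check from inside the inner VM whether it actually got paravirtual devices, per the point above, is to look for loaded virtio modules. A minimal sketch (assuming a Linux guest; the sample string below stands in for real /proc/modules contents):

```python
# Sketch, assuming a Linux guest: scan /proc/modules (same content as
# `lsmod` shows) for loaded virtio drivers. Without them, nested I/O
# falls back to full device emulation and performance suffers.
def has_virtio(modules_text: str) -> bool:
    """True if any loaded kernel module is a virtio driver."""
    return any(line.split()[0].startswith("virtio")
               for line in modules_text.splitlines() if line.strip())

# Illustrative sample; on a real guest, read the file instead:
#   modules_text = open("/proc/modules").read()
sample = "virtio_net 45056 0 - Live 0x0\nvirtio_blk 20480 2 - Live 0x0\n"
print(has_virtio(sample))  # True
```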
BobbyTables2 11 minutes ago [-]
Is nested VMX virtualization in the Linux kernel really that stable?

The technical details are a lot more complex than most realize.

Single-level VMX virtualization is relatively straightforward, even if there are a lot of details to juggle with VMCS setup and handling exits.

Nested virtualization is a whole other animal: one now has to handle not just the extra level but many things the hardware normally does, plus juggle internal state during transitions between levels.

The LKML is filled with discussions and debates where very sharp contributors are trying to make sense of how it would work.

Amazon turning the feature on is one thing. It working 100% perfectly is quite another…
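
One concrete check related to all this: from inside a guest you can at least see whether the outer hypervisor exposes the hardware virtualization extensions, which is the prerequisite for loading KVM a level down. A minimal sketch (assuming a Linux guest; the sample string stands in for real /proc/cpuinfo output):

```python
# Sketch, assuming a Linux guest: look for the vmx (Intel) or svm (AMD)
# CPU flags, which the outer hypervisor must expose before nested KVM
# can work at all.
def virt_flags(cpuinfo_text: str) -> set:
    """Return the hardware-virtualization flags present in /proc/cpuinfo text."""
    flags = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())
    return flags & {"vmx", "svm"}

# Illustrative sample; on a real guest:
#   cpuinfo_text = open("/proc/cpuinfo").read()
sample = "processor\t: 0\nflags\t\t: fpu vme vmx ept vpid\n"
print(virt_flags(sample))  # {'vmx'}
```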

leetrout 26 minutes ago [-]
> Nested virtualization is supported only on 8th generation Intel-based instance types (c8i, m8i, r8i, and their flex variants). When nested virtualization is enabled, Virtual Secure Mode (VSM) is automatically disabled for the instance.
ohthehugemanate 38 minutes ago [-]
I wonder if this is connected to Azure launching OpenShift Virtualization on "Boost" SKUs? There are a lot of VMware customers going to OpenShift Virt, and apparently the CPU/memory overhead on Azure maxes out around 10% under full load... but then Hyper-V has been doing a lot of work on it. No idea if Nitro includes any of the KVM-on-KVM passthrough of full KVM, to give it an edge here.
sitole 3 hours ago [-]
Support for nested virtualization has been added to the main SDKs. In the us-west-2 region, you can already see the "Nested Virtualization" option and use it with the new M8id, C8id, and R8id instance types.

This is really big news for micro-VM sandbox solutions like E2B, which I work on.

aliljet 33 minutes ago [-]
I wonder if this will extend SEV-SNP and TDX to the child VMs?
leetrout 12 minutes ago [-]
It says VSM is automatically disabled... so I would assume not.
blibble 3 hours ago [-]
welcome AWS to 2018!
ssl-3 2 hours ago [-]
Yep. It's pretty boring. I've been using it at home for years and years with libvirt on very not-special consumer hardware. I guess the AWS clown is finally catching up on this one little not-new-at-all thing.
otterley 2 hours ago [-]
I was an Amazon EC2 Specialist SA in a prior role, so I know a little about this.

If EC2 were like your home server, you might be right. And an EC2 bare metal instance is the closest approximation to that. On bare metal, you've always been free to run your own VMs, and we had some customers who rolled their own nested VM implementations on it.

But EC2 is not like your home server. There are some nontrivial considerations and requirements to offer nested virtualization at cloud scale:

1. Ensuring virtualized networking (VPC) works with nested VMs as well as with the primary VM

2. Making sure the environment (VMM etc) is sufficiently hardened to meet AWS's incredibly stringent security standards so that nesting doesn't pose unintended threats or weaken EC2's isolation properties. EC2 doesn't use libvirt or an off-the-shelf KVM. See https://youtu.be/cD1mNQ9YbeA?si=hcaZaV2W_hcEIn9L&t=1095 and https://youtu.be/hqqKi3E-oG8?si=liAfollyupYicc_L&t=501

3. Ensuring performance and reliability meets customer standards

4. Building a rock-solid control plane around it all

It's not a trivial matter of flipping a bit.

ssl-3 19 minutes ago [-]
There's no better way to get good, correct information than to say something misguided and/or wrong.

Thanks for the well-reasoned response.

QuinnyPig 1 hours ago [-]
I always enjoy the color you add to these conversations. Thanks!
raw_anon_1111 59 minutes ago [-]
Seriously curious, don’t Firecracker VMs already run on EC2 instances under the hood when they host Lambda and Fargate?
otterley 55 minutes ago [-]
Unfortunately I'm not at liberty to dive deep into those details. I will say that Firecracker can be used on bare metal EC2 instances, whether you're a public customer or AWS itself. :-)
raw_anon_1111 37 minutes ago [-]
I guess I should have peeked at the source code when I was there…
wmf 42 minutes ago [-]
Since I don't work for AWS I'm allowed to say that at the scale of millions/billions of microVMs you're better off running them on bare metal instances to avoid the overhead of nested virtualization.
sitole 1 hours ago [-]
Nitro is very interesting stuff
ilaksh 56 minutes ago [-]
I wonder if providers like Hetzner and Digital Ocean etc. will get this someday also.
gerdesj 2 hours ago [-]
Could someone explain why this might be a big deal?

I remember playing with nested virty some years ago and deciding it was a backwards step except for PoCs and the like. Given I haven't personally run out of virty gear, I've never needed to do a PoC.

dboreham 27 seconds ago [-]
If you have some workload that creates VMs, now you can run that workload on EC2 rather than having to use bare metal or some other provider that allows nested virtualization. There are many many such workloads. Just to give one example: testing a build system that spins up VMs to host CI jobs.
paulfurtado 2 hours ago [-]
It is great for isolation. There are so many VM based containerization solutions at this point, like Kata Containers, gvisor, and Firecracker. With kata, your kubernetes pods run in isolated VMs. It also opens the door for live migration of apps between ec2 instances, making some kinds of maintenance easier when you have persistent workloads. Even if not for security, there are so many ways a workload can break a machine such that you need to reboot or replace (like detaching an ebs volume with a mounted xfs filesystem at the wrong moment).

The place I've probably wanted it the most though is in CI/CD systems: it's always been annoying to build and test system images in EC2 in a generic way.

It also allows for running other third party appliances unmodified in EC2.

But also, almost every other execution environment offers this: GCP, VMWare, KVM, etc, so it's frustrating that EC2 has only offered it on their bare metal instance types. When ec2 was using xen 10+ years ago, it made sense, but they've been on kvm since the inception of nitro.

UltraSane 2 hours ago [-]
You can now run VMs inside a cheaper AWS instance instead of having to pay for an entire bare-metal instance. This is useful for things like network simulation where you use QEMU to emulate network hardware.
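
As an illustration of what that unlocks: with /dev/kvm available inside a regular instance, QEMU can run with hardware acceleration instead of falling back to slow TCG emulation. A hypothetical invocation (the image name and sizes are made up; this just prints the command rather than launching anything):

```python
# Sketch: a QEMU command line that relies on /dev/kvm, which nested
# virtualization now exposes inside ordinary (non-bare-metal) instances.
# All file names and sizes here are illustrative.
cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",                 # hardware acceleration via the nested /dev/kvm
    "-cpu", "host",
    "-m", "1024",
    "-drive", "file=router.qcow2,if=virtio",
    "-netdev", "user,id=n0",
    "-device", "virtio-net-pci,netdev=n0",
    "-nographic",
]
print(" ".join(cmd))
```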
ATechGuy 2 hours ago [-]
Would love to see performance numbers with nested virtualization, particularly that of IO-bound workloads.
dk8996 2 hours ago [-]
Would these things be good for openclaw, agents?
CuriouslyC 1 hours ago [-]
Yeah, though honestly if I'm deploying anything I'd just build an image with nix rather than use nested virtualization.
api 2 hours ago [-]
What's the performance impact for nested virtualization in general? I'd think this would be adding multiple layers of MMU overhead.
blibble 1 hours ago [-]
depends on the workload and how they've done it

pure CPU should be essentially unaffected, if they're not emulating the MMU/page tables in software

the difference in IO ranges from barely measurable to absolutely horrible, depending on their implementation

traps/vmexits have another layer to pass through (and back)
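
A back-of-envelope way to see why the workload matters so much (all numbers here are illustrative assumptions, not measurements):

```python
# Sketch: nested overhead scales with how exit-heavy the workload is,
# since each trap/vmexit now passes through an extra layer. The per-exit
# cost and the ~2x nesting penalty are illustrative assumptions.
def nested_overhead(exits_per_sec, exit_cost_us, nesting_penalty=2.0):
    """Fraction of CPU time lost to exits when each exit costs ~2x more."""
    flat = exits_per_sec * exit_cost_us / 1e6
    return min(flat * nesting_penalty, 1.0)

# A compute-bound guest with few exits barely notices nesting:
print(f"{nested_overhead(1_000, 5):.1%}")   # 1.0%
# An I/O-heavy guest taking 50k exits/sec hurts badly:
print(f"{nested_overhead(50_000, 5):.1%}")  # 50.0%
```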

dwattttt 2 hours ago [-]
From memory, the virtualisation operations themselves aren't nested. The VM instructions interact with the external virtualisation hardware, so it's more of a cooperative situation, e.g. a guest can create & manage virtualisation structures that are run alongside it.

I don't know if this applies to the specific nested virtualisation AWS are providing though.

otterley 1 hours ago [-]
As a practical matter, anywhere from 5-15%.
gchamonlive 2 hours ago [-]
Highly doubt that
farklenotabot 2 hours ago [-]
Sounds expensive for legacy apps
dangoodmanUT 2 hours ago [-]
hell yes, finally
bagels 2 hours ago [-]
"* *Feature*: Launching nested virtualization. This feature allows you to run nested VMs inside virtual (non-bare metal) EC2 instances."