When Intel and AMD added hardware virtualization support to their CPUs, it was a boon to those of us trying to virtualize operating systems that don't know they're being virtualized. KVM, the somewhat-native Linux virtualizer built into the kernel, only operates in this mode, so you need a CPU with Intel VT-x or AMD-V support built into it and the ability to turn it on in the BIOS. This seems like a really great thing, but in reality anyone wanting to extract maximum performance out of their virtualization machine is going to be using paravirtualization anyway (sorry, KVM). Everyone is working on paravirtualization, including the KVM folks. So the VT extensions in the CPU really aren't getting used much except by those that don't have a choice (ahem, KVM). A year from now, people using KVM will be using paravirtualized drivers because they're just plain faster. VirtualBox uses a mix of software emulation and virtualization depending on what's fastest. Xen will run in either full virtualization (HVM) or paravirtualization (PV) mode depending on how you set it up. There are limitations to PV mode, though, because it's using the same QEMU code that KVM uses. They will reach parity in installation modes between PV and HVM soon, though. The cool thing about everyone using QEMU (KVM, Xen, VirtualBox, etc.) is that if you change the code once, the rest get the updates. Isn't that the way it's supposed to work in the open source world?
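If you want to know whether your CPU has those VT extensions before you try KVM, the kernel exposes the CPU feature flags in /proc/cpuinfo: `vmx` means Intel VT-x, `svm` means AMD-V. Here's a quick sketch (my own helper, not anything from KVM itself) that checks for them:

```python
def has_hw_virt(cpuinfo_text):
    """Return 'vmx', 'svm', or None based on the flags line of /proc/cpuinfo."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            if "vmx" in flags:
                return "vmx"   # Intel VT-x
            if "svm" in flags:
                return "svm"   # AMD-V
    return None

# On a real Linux box you'd feed it the live file:
# print(has_hw_virt(open("/proc/cpuinfo").read()))
```

Remember that even if the flag shows up, the BIOS can still have the feature disabled, so check there too.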
Anyway, the point of this article is not to talk about VT but IOMMU. The thing about virtualization systems like Parallels and Xen is that they're passing PCI devices through to guest operating systems, which is a good thing, but because they haven't had hardware support for it, it's been a bit hacked up. There have been a number of security vulnerabilities with PCI passthrough in Xen. All of this is changing because Intel and AMD are adding device virtualization to their chipsets. Intel announced it in 2006 and has a number of chipsets that support it, and AMD will have their first out next year.
How it works... We've had memory management units (MMUs) since about the Motorola 68030 and Intel 486 chips. An MMU lets us do really cool tricks with renaming memory so that applications think they're accessing it somewhere else; virtual memory takes advantage of this capability. Problem is, you can't use an MMU to remap device IO space, and we never really needed that functionality until we started virtualizing. Here's a scenario: your video card wants to access its memory at hexadecimal 0. Problem is, if you're passing that device through to a guest, you can't have the host and the guest accessing the same device at the same time at the same address. The guest OS expects to find the device at hex 0, but the guest's memory allocation starts much higher than that. What an IOMMU can do is remap the real physical IO memory addresses inside the host. That way the guest looks at hexadecimal 0 and finds what it's looking for. There's currently a huge performance hit for doing this, but I think it's only temporary as the technology matures.
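To make the remapping idea concrete, here's a toy model (entirely mine, all names made up for illustration, nothing like a real hypervisor's implementation): an IOMMU is basically a translation table for device/guest addresses, the same way the MMU's page tables translate process addresses. It translates at page granularity, and an access to an unmapped address faults instead of hitting the host's memory, which is where the security benefit comes from.

```python
PAGE = 4096  # translate at page granularity, like a real page table

class ToyIommu:
    """Toy model: maps guest-visible IO pages to host physical pages."""

    def __init__(self):
        self.table = {}  # guest page number -> host page number

    def map(self, guest_addr, host_addr):
        # Mappings are page-aligned, as in real page tables.
        assert guest_addr % PAGE == 0 and host_addr % PAGE == 0
        self.table[guest_addr // PAGE] = host_addr // PAGE

    def translate(self, guest_addr):
        page, offset = divmod(guest_addr, PAGE)
        if page not in self.table:
            # Unmapped access faults instead of reaching host memory.
            raise PermissionError("IOMMU fault: unmapped IO address")
        return self.table[page] * PAGE + offset

# The guest thinks the video card's memory is at hex 0; the host
# actually placed it at 0xE0000000 (an address I picked arbitrarily).
iommu = ToyIommu()
iommu.map(0x0, 0xE0000000)
print(hex(iommu.translate(0x10)))  # -> 0xe0000010
```

The guest keeps using the address it expects, the host keeps the device wherever it really lives, and the table in the middle keeps the two from stepping on each other.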
So you want to set up an IOMMU system for virtualization? Problem is, getting someone, anyone, to tell you which chipsets support it is like pulling teeth. You'd think Intel would be yelling it from the rooftops since AMD's response isn't ready yet, but they're not, which makes me wonder about other things.
So, straight from the horse's mouth, here are the Intel chipsets that support IOMMU or, as they call it, VT-d. Note that the motherboard manufacturer may not have included the ability to turn it on in the BIOS! Beware...
VT-d is enabled on the following chipsets:
Intel Q35 GMCH with ICH9 DO (Bearlake chipset)
The following chipsets have VT-d capability, but OEMs may not have enabled in systems based on these:
VT-d will be enabled on these future products:
Intel Q45 (Eaglelake)
For Intel Desktop Boards, these have VT-d support enabled:
These future Intel Desktop Boards will have VT-d support:
And even though the response above did not include the Intel 5400 chipset, it's been listed as having IOMMU support on Intel's site, and I've found positive reports from people in the Xen community about its VT-d support, so I'm adding it here.
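Once you have one of these boards, the quickest sanity check on Linux is the kernel log: the BIOS advertises VT-d through an ACPI table called DMAR, and the kernel prints a message when it actually turns the IOMMU on. Here's a rough heuristic of mine that scans dmesg text for both; the exact message strings vary by kernel version, so treat these substrings as assumptions, not gospel:

```python
def vtd_markers(dmesg_text):
    """Heuristic scan of dmesg output for two VT-d hints:
    - 'acpi: dmar'  : the BIOS exposed the DMAR table (VT-d present)
    - an 'iommu' + 'enabled' line : the kernel actually switched it on
    The substrings are assumptions; kernels word these lines differently."""
    text = dmesg_text.lower()
    return {
        "dmar_table": "acpi: dmar" in text,
        "iommu_on": any("iommu" in line and "enabled" in line
                        for line in text.splitlines()),
    }

# On a live system you'd pipe in `dmesg` output, e.g.:
# import subprocess
# print(vtd_markers(subprocess.run(["dmesg"], capture_output=True,
#                                  text=True).stdout))
```

If the DMAR table shows up but the IOMMU doesn't come on, look for a BIOS switch or a kernel boot parameter before blaming the chipset.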
I find it amazing how much VT-d support is being downplayed considering how important it is. The only way to do PCI passthrough securely is via an IOMMU. Having said that, it's not as big an issue as everyone is making it out to be. After all, if you're not virtualizing, the host OS has complete access to all PCI devices anyway, right? So PCI passthrough probably isn't any worse than not virtualizing at all from a security standpoint.