Friday, July 20, 2007

OLS Notes, Xen on KVM Talk

So, unfortunately (or wait, maybe that's the point of a good conference!), I spent a lot of time networking and multitasking during the conference, in between sessions and sometimes during sessions. As a result, getting my notes cleaned up and posted didn't happen in real time quite like I hoped. But, there are still a few cool things worth noting from the sessions that I attended.

For instance, Ryan Harper gave an interesting talk about how to run Xen guests on top of KVM. This is especially cool because it sidesteps the core of the Xen vs. KVM debates and allows guests to run in either environment. So, test and compare, pick your favorite, and the same guests will run.

Ryan spent a bit of time talking about difficulties in working with Xen which make KVM a bit more attractive, at least for Xen/guest developers. Specifically, rebooting the hypervisor on changes is a bit painful - all guests obviously have to be restarted (hey, where is guest checkpointing? ;-). And, whenever Linux or Xen changes, you often have to rebuild and reboot both. You also occasionally need to restart the user space daemons when running Xen, usually because of changes in Xen or the client.

From an end user perspective, there are also a number of existing issues, including a reboot to install Xen the first time (or on some updates), and the fact that the paravirt ops for guests are still not mature and not yet fully interoperable. From the distro perspective, the two primary distros are on different snapshot timelines, and all of the smaller distros are obviously running with a huge variety of release dates, stability goals, paravirt awareness, etc. So, the dream of running any guest on any distro + Xen, or on any Xen release, without problems has not quite materialized yet. The installation and management tools for guests or the DOM0 are not consistent across the board, either. And, there is not currently (maybe some day?) a set of paravirt ops for the Xen domain zero (DOM0). This means that the current DOM0 build is fairly incestuous with the Xen hypervisor, and thus the two often need to be built in lockstep (the guest paravirt ops work is moving forward reasonably well and could be in Linux 2.6.23).
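
As a rough illustration of what "paravirt ops" means here (a simplified, hypothetical sketch in C, not the actual Linux structure, whose hooks and names differ): the kernel calls through a table of function pointers instead of executing privileged operations directly, and whichever backend is detected at boot - native, Xen, or something else - fills in the table.

    /* Simplified illustration of the paravirt-ops pattern; the real Linux
     * structure has many more hooks and different names. */
    #include <stdio.h>

    struct pv_ops_example {
        void (*write_cr3)(unsigned long pgd);   /* switch page tables */
        void (*cpu_halt)(void);                 /* idle the CPU */
    };

    /* Native backend: would execute the privileged instructions directly. */
    static void native_write_cr3(unsigned long pgd) { printf("native: mov %%cr3, 0x%lx\n", pgd); }
    static void native_halt(void)                   { printf("native: hlt\n"); }

    /* Hypothetical Xen-style backend: privileged ops become hypercalls. */
    static void xen_write_cr3(unsigned long pgd)    { printf("xen: hypercall mmu_update(0x%lx)\n", pgd); }
    static void xen_halt(void)                      { printf("xen: hypercall sched_op(yield)\n"); }

    static struct pv_ops_example pv_ops;  /* chosen once, early at boot */

    int main(void)
    {
        int running_on_xen = 1;  /* pretend the environment was detected as Xen */

        /* At boot, one backend's implementations are installed in the table. */
        if (running_on_xen) {
            pv_ops.write_cr3 = xen_write_cr3;
            pv_ops.cpu_halt  = xen_halt;
        } else {
            pv_ops.write_cr3 = native_write_cr3;
            pv_ops.cpu_halt  = native_halt;
        }

        /* Core kernel code then calls through the table, unaware of the backend. */
        pv_ops.write_cr3(0x1000);
        pv_ops.cpu_halt();
        return 0;
    }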

So, if Xen has all these hangups, why even use it rather than switching entirely to KVM? Well, Lguest and KVM require hardware virtualization, for one. Second, Xen is already pervasive in Red Hat and SUSE releases as well as existing appliances, and there are a few interfaces which still perform poorly in KVM, such as Direct Paging. Xen also has ports which enable Solaris and NetBSD/FreeBSD to run as Domain 0, for those that want that sort of thing. So far, KVM is completely Linux specific for the (roughly) equivalent domain 0 activities.

KVM does have some significant advantages, though. For instance, there is a huge body of re-usable code such as device drivers and hardware enablement. KVM is already upstream in mainline Linux. It is integrated with QEMU for full hardware emulation/virtualization. And, it has well established interfaces for the user and for userspace/kernel interactions.
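
To give a feel for how small that userspace/kernel boundary is, here is a minimal sketch of the KVM ioctl interface exposed through /dev/kvm - open the device, check the API version, and ask the kernel for a VM file descriptor (error handling abbreviated; further ioctls on the VM fd would set up memory and vCPUs):

    /* Minimal sketch of the KVM userspace/kernel interface via /dev/kvm. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <linux/kvm.h>

    int main(void)
    {
        int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);
        if (kvm < 0) {
            perror("open /dev/kvm");
            return 1;
        }

        /* The API version should match what the headers were built against. */
        int version = ioctl(kvm, KVM_GET_API_VERSION, 0);
        printf("KVM API version: %d\n", version);

        /* A new VM is just another file descriptor; vCPUs and memory
         * regions are then configured with further ioctls on it. */
        int vm = ioctl(kvm, KVM_CREATE_VM, 0);
        if (vm < 0)
            perror("KVM_CREATE_VM");
        else
            printf("created VM fd %d\n", vm);

        if (vm >= 0)
            close(vm);
        close(kvm);
        return 0;
    }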

Working with Xen guests on KVM also helps expand the virtualization community (note, this is like the forking that James Bottomley talked about in his keynote speech at the end of the conference, and adds value in exactly the ways that he described there). It also helps to expand the KVM community to systems without hardware support. Ryan expects that this approach will enable virtualization on an as-needed basis, simply by creating loadable modules for guest, kvm, and xen, and potentially even for various versions of those, such as Xen 3.0.3 and Xen 3.1.

His work focused on some key design points: keeping the monitor small and simple, no hypercalls, keeping the shadow page tables in the monitor, and using existing Linux capabilities whenever possible. He also did not add any requirement to run Xen daemons on the host - those can be hidden in KVM or QEMU.
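
For anyone unfamiliar with the shadow page table technique he mentioned, the gist (a deliberately toy sketch, not Ryan's actual code) is that the monitor composes the guest's own mapping (guest-virtual to guest-physical) with its private knowledge of where guest memory really lives (guest-physical to host-physical), and hands the hardware the combined result:

    /* Toy sketch of the shadow page table idea: compose the guest's
     * GVA -> GPA mapping with the monitor's GPA -> HPA mapping,
     * producing the GVA -> HPA entries the hardware actually uses. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12
    #define GUEST_PAGES 16

    /* Toy "guest page table": index = guest virtual page, value = guest physical page. */
    static uint64_t guest_pt[GUEST_PAGES];

    /* Monitor's private map: guest physical page -> host physical page. */
    static uint64_t gpa_to_hpa[GUEST_PAGES];

    /* Shadow page table loaded into the MMU: guest virtual page -> host physical page. */
    static uint64_t shadow_pt[GUEST_PAGES];

    /* Rebuild one shadow entry after the guest changes its page table. */
    static void shadow_update(unsigned gv_page)
    {
        uint64_t gp_page = guest_pt[gv_page];
        shadow_pt[gv_page] = gpa_to_hpa[gp_page];
    }

    int main(void)
    {
        /* Pretend the monitor placed guest physical page 3 at host physical page 42. */
        gpa_to_hpa[3] = 42;

        /* The guest maps its virtual page 5 to its physical page 3 ... */
        guest_pt[5] = 3;

        /* ... and the monitor shadows that mapping with the real host page. */
        shadow_update(5);

        printf("guest VA page 5 -> host PA 0x%llx\n",
               (unsigned long long)(shadow_pt[5] << PAGE_SHIFT));
        return 0;
    }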

A final note on KVM - it is not a replacement for existing Xen deployments; use dedicated Xen for top performance.
