[Devops] From OpenVZ to LXC on Debian & Ubuntu
michael.renner at amd.co.at
Fri Jul 20 15:11:49 CEST 2012
If you need a primer on container-based virtualization and why it's interesting, see these two (a bit dated) papers:
Coming from a Debian background there are three things to consider when talking about LXC.
*) The kernel support. The implementation is quite good these days, with the relevant features having been worked on for close to 5 years.
*) The userland tools, which are clearly in the early stages of their development cycle and haven't seen many (large-scale) production environments yet, compared to OpenVZ's vzctl
*) The integration work done by Debian & Ubuntu
LXC is _almost_ feature-complete compared to OpenVZ these days, but not quite ready for production use (Ubuntu) or even dangerous to use (Debian).
Getting LXC & guests up and running is on par with OpenVZ from a time & complexity PoV, which is a good thing. Limiting CPU & memory usage for guests works, and introspection and configuration aren't too different from OpenVZ.
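For illustration, limiting a guest's CPU & memory boils down to a few cgroup keys in the container's config file (guest name and values below are made-up examples, not recommendations):

```shell
# /var/lib/lxc/guest1/config -- resource limits via cgroups (sketch)
lxc.cgroup.cpuset.cpus = 0-1               # pin the guest to CPUs 0 and 1
lxc.cgroup.cpu.shares = 512                # relative CPU weight (default is 1024)
lxc.cgroup.memory.limit_in_bytes = 512M    # cap the guest's memory at 512 MiB
```

The same knobs can be inspected and changed at runtime with lxc-cgroup, e.g. `lxc-cgroup -n guest1 memory.limit_in_bytes`.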
Hiding dangerous parts:
sysfs, proc pidfs (/proc/$PID/...), proc sysctlfs (/proc/sys) and the VFS itself are namespaced. What has been disregarded so far are boring things like /proc/sysrq-trigger (which allows any guest to reboot the host, among other things) and /proc/kcore (the system's memory as seen by the kernel).
The current stop-gap measure would be to use AppArmor, but this really should be integrated into the kernel.
AppArmor is a security module for Linux, extending the default UNIX/POSIX-defined discretionary access control. Compared to SELinux it doesn't use extended attributes on files to define permissions but plain VFS paths, which makes it considerably easier to maintain.
In a nutshell, you define which operations a given process is allowed to perform on which paths.
For LXC guests the bare minimum would be to lock down /proc/sysrq-trigger and /proc/kcore. Ubuntu has integrated native AppArmor support into its lxc package and ships nice default profiles. This is completely missing from LXC upstream as well as from Debian's LXC package.
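To give an idea what that bare minimum looks like, here's a hypothetical, untested AppArmor profile fragment denying the two files mentioned above (the profile name is made up; Ubuntu's shipped profiles are far more comprehensive):

```
# Sketch of an AppArmor profile fragment for LXC guests
profile lxc-guest-minimal {
  deny /proc/sysrq-trigger rw,   # no rebooting the host from inside a guest
  deny /proc/kcore r,            # no reading the host's kernel memory
}
```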
Linux offers (AFAIK!) no general-purpose (V)FS quota interface - and thus there is no quota support in LXC when run in the default configuration (guest root = /var/lib/lxc/$guestname/rootfs). OpenVZ had its own simfs, which wrapped the host's VFS and bolted quota support on top of it.
You can band-aid this by creating a separate filesystem on an LVM volume for each guest, but this comes at a much higher IOPS cost, since reads & writes aren't as local anymore and there is more housekeeping to be done.
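A sketch of that workaround, assuming an existing volume group (the volume group, guest name and size here are examples; this has to run as root on the host):

```shell
# One filesystem per guest: its size doubles as a hard per-guest disk quota.
lvcreate -L 10G -n guest1-root vg0
mkfs.ext4 /dev/vg0/guest1-root
mkdir -p /var/lib/lxc/guest1/rootfs
mount /dev/vg0/guest1-root /var/lib/lxc/guest1/rootfs
```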
To lock down access to the kernel log ring buffer ("dmesg") you actually have to disable a syscall, aptly named syslog(2), not to be confused with the Unix logging standard of the same name. There's talk about using seccomp for this, but that is probably a few months, if not years, out.
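One partial mitigation that seems possible today, assuming a 2.6.38+ kernel with CAP_SYSLOG: set kernel.dmesg_restrict=1 on the host (so reading the ring buffer requires that capability) and drop the capability from the guest. A sketch, untested:

```
# On the host, e.g. in /etc/sysctl.conf:
#   kernel.dmesg_restrict = 1
# In the guest's lxc config:
lxc.cap.drop = syslog
```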
Migration of live guests:
Migration of running containers is usually done in a three-step process.
During the first pass the filesystem and a snapshot of the running processes (memory, SYSV IPC resources, fds, sockets, etc.) are copied. After this is completed, all processes get frozen on the source host, and a second copy pass is done over the filesystem and process state, picking up the changes that happened in the meantime. Then the processes get destroyed on the source host and unfrozen/thawed on the destination host, resuming operation unfazed.
Freezing & thawing are already supported in LXC; what's been missing for a long time was the ability to recreate TCP sockets on the destination host, but TCP connection repair has been merged in 3.5. I don't know whether something else is missing or whether live migration of LXC containers is soon on the horizon.
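Lacking checkpoint/restore of process state, the closest you can get today is a freeze-assisted filesystem migration, where the guest is restarted rather than resumed on the destination. A rough sketch (destination host "dest" and guest name are examples):

```shell
rsync -a /var/lib/lxc/guest1/ dest:/var/lib/lxc/guest1/   # first pass, guest still running
lxc-freeze -n guest1                                      # freeze all processes in the guest
rsync -a /var/lib/lxc/guest1/ dest:/var/lib/lxc/guest1/   # second pass picks up the delta
lxc-stop -n guest1                                        # stop it on the source...
ssh dest lxc-start -n guest1 -d                           # ...and boot it on the destination
```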
In the meanwhile, if you're serious about LXC I'd suggest looking at Ubuntu, since the Debian packages (missing AppArmor support) and container templates (a freshly created guest doesn't boot without some polishing) aren't too nice at the moment and probably won't be fixed in time for Wheezy.
You can find our collection of information including step-by-step notes on how to get LXC running on Debian at http://titanpad.com/ep/pad/view/ro.PHwVPcirW2K/rev.3326
Thanks to Bernhard Miklautz, Stefan Schlesinger and Christian Hofstädtler who helped to compile the information so far. Stefan Schlesinger also has a blog post in the works focusing a bit more on the practical side of things.
All the best,
 Overview of the LXC development process compiled by Stefan Schlesinger:
• 2.6.24 -- Cgroups: Task control groups
• 2.6.25 -- Cgroups: Memory Resource Controller, Sysfs: Initial version of Network Namespaces
• 2.6.26 -- Cgroups: Device Whitelists
• 2.6.27 -- UID namespaces: First appearance of User Namespaces (still incomplete)
• 2.6.28 -- Cgroups: Container Freezer http://lwn.net/Articles/287435/
• 2.6.29 -- Cgroups: Swap Management Feature for Memory Resource Controller, Devpts: multiple instances support
• 2.6.30 -- Cgroups: Per-cgroup utime/stime statistics, struct mem_cgroup memory improvements
• 2.6.32 -- Cgroups: Add support for named cgroups.
• 2.6.34 -- Cgroups: Implement Memory Thresholds + Eventfd API for notification
• 2.6.35 -- Sysfs: Tagged Directories/Network Namespaces
• 2.6.37 -- Cgroups: I/O Throttling support (blkio, doesn't seem to be supported by lxc configuration yet)
• 2.6.38 -- Cgroups: performance improvements on smp systems for cpu-cgroups
• 3.0 -- Cgroups: ??? http://kernelnewbies.org/Linux_3.0
• 3.1 -- Tomoyo Policy namespace support (MAC Framework)
• 3.2 -- Sysfs: Tagged Files
• 3.3 -- Cgroups: Per group TCP buffer limits https://lwn.net/Articles/470656/
• 3.3 -- Network priority control group
• 3.5 -- TCP Connection Repair http://lwn.net/Articles/495304/