SUSE Linux Enterprise Server 11 SP4
Virtualization with Linux Containers (LXC) #
Abstract#
LXC is a lightweight “virtualization” method to run multiple virtual units (containers, akin to “chroot”) simultaneously on a single control host. Containers are isolated with Kernel Control Groups (cgroups) and Kernel Namespaces.
LXC provides operating system-level virtualization where the Kernel controls the isolated containers. With full virtualization solutions such as Xen or KVM (typically managed through libvirt), a hypervisor presents a complete virtual hardware environment to each virtual machine and controls it.
1 Terminology #
- chroot
A change root (chroot, or change root jail) is a section in the file system that is isolated from the rest of the file system. For this purpose, the chroot command is used to change the root of the file system. A program executed in such a “chroot jail” cannot access files outside the designated directory tree.
- cgroups
Kernel Control Groups (commonly referred to as just “cgroups”) are a Kernel feature that allows aggregating or partitioning tasks (processes) and all their children into hierarchically organized groups to isolate resources.
- Container
A “virtual machine” on the host server that can run any Linux system, for example openSUSE, SUSE Linux Enterprise Desktop, or SUSE Linux Enterprise Server.
- Container Name
A name that refers to a container. The name is used by the lxc commands.
- Kernel Namespaces
A Kernel feature to isolate some resources like network, users, and others for a group of processes.
- LXC Host Server
The system that runs the LXC software and provides the containers as well as management and control capabilities through cgroups.
2 Overview #
Conceptually, LXC can be seen as an improved chroot technique. The difference is that a chroot environment separates only the file system, whereas LXC goes further and provides resource management and control via cgroups.
Benefits of LXC #
Isolating applications and operating systems through containers.
Providing nearly native performance as LXC manages allocation of resources in real-time.
Controlling network interfaces and applying resources inside containers through cgroups.
Limitations of LXC #
All LXC containers run inside the host system's Kernel; a container cannot run its own, different Kernel.
Only Linux “guest” operating systems are supported.
LXC is not a full virtualization solution like Xen or KVM.
Security depends on the host system; LXC by itself is not secure. If you need a secure system, use KVM.
3 Setting up an LXC Host #
The LXC host provides the cgroups and controls all containers.
Procedure 1: Preparing an LXC Host #
Install the following packages:
lxc
bridge-utils
Check if everything is prepared for LXC:
lxc-checkconfig
You should see the word enabled on each checked item.
If you want to access the virtual container's ethernet interface, create a network bridge. A network bridge allows sharing the network link on the physical interface of the host (eth0):
Open YaST and go to Network Devices › Network Settings.
Click Add.
Select Bridge as device type. Proceed with Next.
Activate Dynamic Address and select DHCP.
Choose your bridged device(s), usually eth0. Proceed with Next. Optionally check your devices with the ifconfig command. Close the module.
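Behind the scenes, YaST stores the bridge as an ifcfg file under /etc/sysconfig/network/. The following is a minimal sketch of such a configuration, assuming the physical interface is eth0; the values are illustrative, and the example writes to a temporary directory instead of the system path so it has no side effects:

```shell
# Sketch of a bridge configuration file (ifcfg-br0) as used on SLES.
# On a real host this would live in /etc/sysconfig/network/ifcfg-br0.
cfgdir=$(mktemp -d)
cat > "$cfgdir/ifcfg-br0" <<'EOF'
STARTMODE='auto'
BOOTPROTO='dhcp'
BRIDGE='yes'
BRIDGE_PORTS='eth0'
BRIDGE_STP='off'
EOF
cat "$cfgdir/ifcfg-br0"
```

After changing such a file on a real system, restart the network service so the bridge comes up.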
If you have created a network bridge, assign its interface zone:
Start YaST and go to Security and Users › Firewall.
Open the Interfaces tab.
Select your bridge device (usually br0).
Click Change and select Internal Zone. Proceed with OK.
Finish with Next.
LXC starts the cgroup service automatically. The LXC host is now prepared for setting up containers.
4 Setting up LXC Containers with YaST #
A container is a “virtual machine” that can be started, stopped, connected, or disconnected in YaST. The last two actions are only available in the GUI version, not when YaST is running in text mode. If you use YaST in a text console, use the lxc-console command as described in Procedure 5, “Starting, Accessing, and Stopping Your Container Manually”.
To set up an LXC container with YaST, proceed as follows:
Procedure 2: Creating a Container with YaST #
Open YaST and go to the LXC module.
Click Create.
Enter a name for your container in the Name field.
Select a Linux distribution (only SLES is supported) from the pop-up menu.
Enter the bridge for your LXC container. If you do not have a bridge yet, create one as described in Procedure 1, “Preparing an LXC Host”.
If needed, enter a password to log in to the LXC container. If you leave the password field empty, the standard password “root” is used for this container.
Finish with Create. YaST then prepares the container; this action takes some time.
After YaST has finished the preparation, click Start to launch the LXC container.
Procedure 3: Starting, Accessing, and Stopping Your Container with YaST #
Select the container and click Start.
Click the Connect button. A new terminal window opens.
Log in with user root and your password from Step 6 of Procedure 2, “Creating a Container with YaST”. If you did not set a password, use “root”.
Make your changes in your container.
When you are finished, save all your work and log out.
Click the Disconnect button to close the terminal. It is still possible to reconnect to your container by clicking Connect.
To shut down the container entirely, click the Stop button.
5 Setting up LXC Containers Manually #
A container is a “virtual machine” that can be started, stopped, frozen, or cloned (to name but a few tasks). To set up an LXC container, proceed as follows:
Procedure 4: Creating a Container Manually #
Create a configuration file (named lxc_vps0.conf in this example) with the container name in it and edit it according to the following example:

lxc.utsname = vps0                       1
lxc.network.type = veth                  2
lxc.network.flags = up                   3
lxc.network.link = br0                   4
lxc.network.hwaddr = 00:30:6E:08:EC:80   5
lxc.network.ipv4 = 192.168.1.10          6
lxc.network.name = eth0                  7

1. Container name; it should also be used when naming the configuration file.
2. Type of network virtualization to be used for the container. The option veth defines a peer network device. It is created with one side assigned to the container and the other side attached to the bridge named by the lxc.network.link option.
3. Network actions. The value up in this case activates the network.
4. MAC address of the virtual interface. This MAC address needs to be unique in your network and different from the host MAC address.
5. Host network interface to be used for the container.
6. IPv4 address assigned to the virtualized interface. Use the address 0.0.0.0 to make use of DHCP. Use lxc.network.ipv6 if you need IPv6 support.
7. Dynamically allocated interface name. This option renames the interface inside the container.
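The example configuration can also be written from a script, for instance with a heredoc. This sketch writes it to a temporary directory so it has no side effects; on a real host you would write the file wherever you keep your container configurations:

```shell
# Write the example container configuration to a file.
# Values mirror the example above; adjust them to your network.
workdir=$(mktemp -d)
cat > "$workdir/lxc_vps0.conf" <<'EOF'
lxc.utsname = vps0
lxc.network.type = veth
lxc.network.flags = up
lxc.network.link = br0
lxc.network.hwaddr = 00:30:6E:08:EC:80
lxc.network.ipv4 = 192.168.1.10
lxc.network.name = eth0
EOF
cat "$workdir/lxc_vps0.conf"
```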
More example files can be found in /usr/share/doc/packages/lxc/examples/. Find details about all options in the lxc.conf man page.
Create a container by using the configuration file from Step 1. A list of available templates is located in /usr/share/lxc/templates/.

lxc-create -t TEMPLATE -f lxc_vps0.conf -n CONTAINER

Replace CONTAINER with the value you specified for lxc.utsname in the configuration file (vps0 in this example). Replace the placeholder TEMPLATE with your preferred template name.
Downloading and installing the base packages for openSUSE or SUSE Linux Enterprise Server will take some time. The container will be created in /var/lib/lxc/CONTAINER, and its configuration files will be stored under /etc/lxc/.
Finalize the configuration of the container:
Change the root path to the installed LXC container with the chroot command:

chroot /var/lib/lxc/CONTAINER_NAME/rootfs/

Change the password for user root with passwd root.
Create an operator user without root privileges:

useradd -m operator

Change the operator's password:

passwd operator

Leave the chroot environment with exit.
Procedure 5: Starting, Accessing, and Stopping Your Container Manually #
Start the container:
lxc-start -d -n CONTAINER_NAME
Connect to the container and log in:
lxc-console -n CONTAINER_NAME
Always stop and remove your container with these two steps:

lxc-stop -n CONTAINER_NAME
lxc-destroy -n CONTAINER_NAME
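If you stop and remove containers frequently, the two commands can be wrapped in a small helper. The function name stop_and_destroy is hypothetical, and lxc-stop and lxc-destroy must be in the PATH when it is called:

```shell
# Hypothetical convenience wrapper around the two-step shutdown.
# The container is destroyed only if it stopped cleanly.
stop_and_destroy() {
  name=$1
  lxc-stop -n "$name" && lxc-destroy -n "$name"
}
```

Call it as, for example, stop_and_destroy vps0.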
6 Starting Containers at Boot Time #
LXC containers can be started at boot time. However, you need to follow
certain conventions. Every container has a subdirectory with its name in
/etc/lxc/, for example,
/etc/lxc/my-sles. This directory needs to be created
once. There you place your configuration file (named
config).
To set up the automatic start of LXC containers, proceed as follows:
Activate the cgroup service with insserv boot.cgroup. This has to be done only once to enable this service at boot time. The command will populate the /sys/fs/cgroup directory.
Create a directory /etc/lxc/CONTAINER.
Copy your configuration file to /etc/lxc/CONTAINER/config.
Run /etc/init.d/boot.cgroup start to set up cgroups properly.
Run /etc/init.d/lxc start to start your containers.
Wait a few seconds and run /etc/init.d/lxc list to print the state of all your containers.
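The per-container directory layout described above can be sketched as follows. To keep the example side-effect free, it uses a temporary directory in place of /etc/, and a stand-in configuration file; the container name my-sles matches the example above:

```shell
# Recreate the /etc/lxc/CONTAINER/config layout under a temporary root.
etcroot=$(mktemp -d)
container=my-sles
# Stand-in for your real configuration file (e.g. lxc_vps0.conf).
printf 'lxc.utsname = %s\n' "$container" > "$etcroot/my.conf"
mkdir -p "$etcroot/lxc/$container"
cp "$etcroot/my.conf" "$etcroot/lxc/$container/config"
ls "$etcroot/lxc/$container"
```

On a real host, the same mkdir and cp steps target /etc/lxc/ directly and require root privileges.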
After this procedure, your LXC containers are correctly configured. To start them automatically the next time you boot your computer, use insserv lxc.
7 For More Information #
- LXC Home Page
- Kernel Control Groups (cgroups)
http://www.suse.com/doc/sles11/book_sle_tuning/data/cha_tuning_cgroups.html
- Managing Virtual Machines with libvirt
http://www.suse.com/doc/sles11/book_sles_kvm/data/part_managing_virtual.html
- LXC Container Driver
8 Legal Notice #
Copyright © 2006–2025 SUSE LLC and contributors. All rights reserved.
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled “GNU Free Documentation License”.
For SUSE trademarks, see http://www.suse.com/company/legal/. All other third party trademarks are the property of their respective owners. A trademark symbol (®, ™ etc.) denotes a SUSE or Novell trademark; an asterisk (*) denotes a third party trademark.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors, nor the translators shall be held liable for possible errors or the consequences thereof.