documentation.suse.com / SUSE Linux Enterprise Server Documentation / Virtualization Guide / Managing Virtual Machines with LXC / Linux Containers
Applies to SUSE Linux Enterprise Server 15 SP2

34 Linux Containers

34.1 Setting Up LXC Distribution Containers

A container is self-contained software that includes an application's code and all its dependencies. A containerized application can be deployed quickly and run reliably in a computing environment.

To set up an LXC container, you need to create a root file system containing the guest distribution.

Procedure 34.1: Creating a Root File System

There is currently no GUI to create a root file system. Run the virt-create-rootfs command as root to set up a new root file system. Follow the steps below to create a new root file system in /path/to/rootfs.

Important: Registration Code Needed

virt-create-rootfs requires a registration code to set up a SUSE Linux Enterprise Server root file system.

  1. Run the virt-create-rootfs command:

    # virt-create-rootfs --root /PATH/TO/ROOTFS --distro SLES-12.0 -c REGISTRATION_CODE
  2. Change the root path to the root file system with the chroot command:

    # chroot /path/to/rootfs
  3. Change the password for user root with passwd.

  4. Create an operator user without root privileges:

    useradd -m operator
  5. Change the operator's password:

    passwd operator
  6. Leave the chroot environment with exit.

Procedure 34.2: Defining the Container
  1. Start Virtual Machine Manager.

  2. (Optional) If not already present, add a local LXC connection by clicking File › Add Connection.

    Select LXC (Linux Containers) as the hypervisor and click Connect.

  3. Select the localhost (LXC) connection and choose File › New Virtual Machine.

  4. Activate Operating system container and click Forward.

  5. Type the path to the root file system from Procedure 34.1, “Creating a Root File System” and click the Forward button.

  6. Choose the maximum amount of memory and CPUs to allocate to the container. Then click the Forward button.

  7. Type in a name for the container. This name will be used for all virsh commands on the container.

    Click Advanced options. Select the network to connect the container to and click the Finish button: the container will then be created and started. A console will also be automatically opened.
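
If you prefer the command line over Virtual Machine Manager, a similar result can be achieved with virt-install. The following is only a sketch: the container name, memory size, and root file system path are example values, and the exact options supported should be checked in the virt-install man page for your version.

 > virt-install --connect lxc:/// --name MYCONTAINER --memory 512 \
       --filesystem /path/to/rootfs,/ --init /sbin/init --noautoconsole

This creates and starts the container; you can then attach to it with virsh -c lxc:/// console MYCONTAINER.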

Procedure 34.3: Configuring IP Addresses for Network Interfaces

Network devices and hostdev devices with network capabilities can be provided with one or more IP addresses to set on the network device in the guest. However, some hypervisors or network device types will simply ignore them or only use the first one.

  1. Edit the container XML configuration using virsh:

    > virsh -c lxc:/// edit MYCONTAINER
  2. The following example shows how to set one or multiple IP addresses:

    [...]
    <devices>
     <interface type='network'>
      <source network='default'/>
      <target dev='vnet0'/>
      <ip address='192.168.122.5' prefix='24'/>
      <ip address='192.168.122.5' prefix='24' peer='10.0.0.10'/>
      <route family='ipv4' address='192.168.122.0' prefix='24'
             gateway='192.168.122.1'/>
      <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
     </interface>
     [...]
     <hostdev mode='capabilities' type='net'>
      <source>
       <interface>eth0</interface>
      </source>
      <ip address='192.168.122.6' prefix='24'/>
      <route family='ipv4' address='192.168.122.0' prefix='24' gateway='192.168.122.1'/>
      <route family='ipv4' address='192.168.122.8' gateway='192.168.122.1'/>
     </hostdev>
    </devices>
    [...]

    The attributes used in the example have the following meaning:

    peer

    Optional attribute. Holds the IP address of the other end of a point-to-point network device.

    family

    Can be set to either ipv4 or ipv6.

    address

    Contains the IP address.

    prefix

    Optional parameter (will be automatically set if not specified). Defines the number of 1 bits in the netmask. For IPv4, the default prefix is determined according to the network class (A, B, or C). For IPv6, the default prefix is 64.

    gateway

    If you do not specify a default gateway in the XML file, none will be set.

  3. You can also add route elements to define IP routes to add in the guest. These are used by the LXC driver.

    [...]
    <devices>
     <interface type='ethernet'>
      <source>
       <ip address='192.168.123.1' prefix='24'/>
       <ip address='10.0.0.10' prefix='24' peer='192.168.122.5'/>
       <route family='ipv4' address='192.168.42.0' prefix='24'
              gateway='192.168.123.4'/>
      </source>
      [...]
     </interface>
     [...]
    </devices>
    [...]

    Network devices of type ethernet can optionally be provided with one or multiple IP addresses and with one or multiple routes to set on the host side of the network device.

    These are configured as ip and route subelements of the source element of the interface. They have the same attributes as the similarly named elements used to configure the guest side of the interface (see the step above). In the example above, the two ip elements set a first and a second IP address for the host side of the network device of type ethernet, and the route element defines a route to set on the host side of that device.

    Find further details about the attributes of this element at https://libvirt.org/formatnetwork.html#elementsStaticroute.

  4. Save the changes and exit the editor.
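
The edited configuration only takes effect once the container has been restarted. Assuming the container is called MYCONTAINER, it can be restarted and inspected as follows (virsh destroy stops the container immediately, without a graceful shutdown); inside the container, check the result with ip addr show:

 > virsh -c lxc:/// destroy MYCONTAINER
 > virsh -c lxc:/// start MYCONTAINER
 > virsh -c lxc:/// console MYCONTAINER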

Note: Container Network

To configure the container network, edit the /etc/sysconfig/network/ifcfg-* files.
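
For example, a simple static configuration could look like the following sketch; the interface name eth0 and the address are placeholders to be adapted to your setup:

 # /etc/sysconfig/network/ifcfg-eth0 -- example static configuration
 STARTMODE='auto'
 BOOTPROTO='static'
 IPADDR='192.168.122.10/24'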

34.2 Setting Up LXC Application Containers

Libvirt also allows you to run single applications in containers instead of full-blown Linux distributions. In the following example, a shell will be started in its own container.

Procedure 34.4: Defining an Application Container Using Virtual Machine Manager
  1. Start Virtual Machine Manager.

  2. (Optional) If not already present, add a local LXC connection by clicking File › Add Connection.

    Select LXC (Linux Containers) as the hypervisor and click Connect.

  3. Select the localhost (LXC) connection and choose File › New Virtual Machine.

  4. Activate Application container and click Forward.

    Set the path to the application to be launched. As an example, the field is filled with /bin/sh, which is fine for creating a first container. Click Forward.

  5. Choose the maximum amount of memory and CPUs to allocate to the container. Click Forward.

  6. Type in a name for the container. This name will be used for all virsh commands on the container.

    Click Advanced options. Select the network to connect the container to and click Finish. The container will be created and started. A console will be opened automatically.

    Note that the container will be destroyed after the application has finished running.
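
The same kind of application container can also be defined from the command line with virt-install. This is only a sketch; the container name and memory size are example values, and the available options should be checked in the virt-install man page:

 > virt-install --connect lxc:/// --name MYSHELL --memory 128 --init /bin/sh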

34.3 Securing a Container Using AppArmor

By default, containers are not secured using AppArmor or SELinux. There is no graphical user interface to change the security model for a libvirt domain, but virsh will help.

  1. Edit the container XML configuration using virsh:

    > virsh -c lxc:/// edit MYCONTAINER
  2. Add the following to the XML configuration, save it and exit the editor.

    <domain>
        ...
        <seclabel type="dynamic" model="apparmor"/>
        ...
    </domain>
  3. With this configuration, an AppArmor profile for the container will be created in the /etc/apparmor.d/libvirt directory. The default profile only allows the minimum applications to run in the container. This can be changed by modifying the libvirt-CONTAINER-uuid file: this file is not overwritten by libvirt.
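
To verify that a profile was generated and loaded after restarting the container, you can, for example, list the generated files and query AppArmor (aa-status is part of the AppArmor utilities):

 # ls /etc/apparmor.d/libvirt
 # aa-status | grep libvirt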

34.4 Differences between the libvirt LXC Driver and LXC

SUSE Linux Enterprise Server 11 SP3 shipped LXC, while SUSE Linux Enterprise Server 12 comes with the libvirt LXC driver, sometimes called libvirt-lxc to avoid confusion. Containers are not managed or configured in the same way by these tools. Here is a non-exhaustive list of differences.

The main difference is that domain configuration in libvirt is an XML file, while LXC configuration is a properties file. Most of the LXC properties can be mapped to the domain XML. The properties that cannot be migrated are:

  • lxc.network.script.up: this script can be implemented using the /etc/libvirt/hooks/network libvirt hook, though the script will need to be adapted (a minimal skeleton is shown after this list).

  • lxc.network.ipv*: libvirt cannot set the container network configuration from the domain configuration.

  • lxc.network.name: libvirt cannot set the container network card name.

  • lxc.devttydir: libvirt does not allow changing the location of the console devices.

  • lxc.console: there is currently no way to log the output of the console into a file on the host for libvirt LXC containers.

  • lxc.pivotdir: libvirt does not allow fine-tuning the directory used for pivot_root; /.olroot is used.

  • lxc.rootfs.mount: libvirt does not allow fine-tuning this.
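
Regarding the network hook mentioned above: libvirt runs an executable placed at /etc/libvirt/hooks/network, passing the object name and the operation as command line arguments and the relevant XML description on standard input. The exact arguments depend on the libvirt version, so check the libvirt hooks documentation before relying on them. The following skeleton only logs its invocation and is meant as a starting point for porting an lxc.network.script.up script:

 #!/bin/sh
 # /etc/libvirt/hooks/network -- minimal skeleton, adapt as needed.
 # Typical arguments (check the libvirt hooks documentation):
 #   $1: object name, $2: operation (for example "started"), $3: sub-operation.
 LOG=/var/log/libvirt-network-hook.log
 {
   echo "$(date): network hook called with: $*"
   cat                      # the XML description arrives on standard input
 } >> "$LOG"
 exit 0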

LXC VLAN networks automatically create the VLAN interface on the host and then move it into the guest namespace. The libvirt-lxc configuration can mention a VLAN tag ID only for Open vSwitch tap devices or for PCI pass-through of SR-IOV VFs. The conversion tool therefore requires the user to manually create the VLAN interface on the host side.

LXC rootfs can also be an image file, but LXC brute-forces the mount to try to detect the proper file system format. libvirt-lxc can mount image files of several formats, but the 'auto' value for the format parameter is explicitly not supported. This means that the generated configuration will need to be tweaked by the user to get a proper match in that case.
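
Such a tweak typically means specifying the image format explicitly. The following domain XML fragment is a sketch of mounting a raw image file as the container root through a loop device; the file name is an example, and the supported driver type and format values should be checked against the libvirt documentation for your version:

 <filesystem type='file'>
  <driver type='loop' format='raw'/>
  <source file='/var/lib/libvirt/images/mycontainer-rootfs.img'/>
  <target dir='/'/>
 </filesystem>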

LXC can support any cgroup configuration, even future ones, while the libvirt domain configuration needs to map each of them.

LXC can mount block devices in the rootfs, but it cannot mount raw partition files: the file needs to be manually attached to a loop device first. libvirt-lxc, on the other hand, can mount both block devices and partition files of any format.
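
For the plain LXC case, such a partition file can be attached to the first free loop device with losetup; the image path below is a placeholder:

 # losetup -f --show /var/lib/lxc/mycontainer/partition.img
 /dev/loop0

The printed loop device (/dev/loop0 here) can then be referenced as a block device in the LXC configuration.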

34.5 Sharing Namespaces across Containers

Like Docker Open Source Engine, libvirt allows you to inherit namespaces from other containers or processes, for example to share the network namespace. The following example shows how to share the required namespaces.

<domain type='lxc' xmlns:lxc='http://libvirt.org/schemas/domain/lxc/1.0'>
 [...]
 <lxc:namespace>
  <lxc:sharenet type='netns' value='red'/>
  <lxc:shareuts type='name' value='CONTAINER_1'/>
  <lxc:shareipc type='pid' value='12345'/>
 </lxc:namespace>
</domain>

The netns option is specific to sharenet. Use it to use an existing network namespace (instead of creating a new network namespace for the container). In this case, the privnet option will be ignored.
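
In the example above, the value red of the sharenet element refers to an existing network namespace on the host. Assuming it is a named namespace as created with the ip tool, it can be set up beforehand like this:

 # ip netns add red
 # ip netns list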

34.6 For More Information

LXC Container Driver

https://libvirt.org/drvlxc.html