
11 Working with Images

Note

This document provides a collection of worksheets describing the creation and setup of appliances for a number of different target environments.

11.1 Deploy ISO Image on a USB Stick

In KIWI NG all generated ISO images are created to be hybrid. This means the image can be used as a CD/DVD or as a disk, because the ISO image also has a partition table embedded. With more and more computers delivered without a CD/DVD drive, this becomes increasingly important.

The very same ISO image can be copied onto a USB stick and used as a bootable disk. The following procedure shows how to do this:

  1. Plug in a USB stick

    Once plugged in, check which Unix device name the stick was assigned. The following command provides an overview of all Linux storage devices:

    $ lsblk
  2. Dump the ISO image onto the USB stick:

    Warning

    Make sure the selected device really points to your stick, because the following operation cannot be undone and will destroy all data on the selected device.

    $ dd if={exc_image_base_name}.x86_64-1.15.3.iso of=/dev/<stickdevice>
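
    If your dd comes from GNU coreutils, progress reporting and an explicit cache flush can optionally be added; these extra options are a suggestion, not part of the original command:

    $ dd if={exc_image_base_name}.x86_64-1.15.3.iso of=/dev/<stickdevice> bs=4M status=progress && sync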
  3. Boot from your USB Stick

    Activate booting from USB in your BIOS/UEFI. Since firmware implementations differ in how this is done, consult your user manual. Many firmware implementations offer a boot menu which can be activated at boot time.

11.2 Deploy ISO Image as a File on a FAT32-Formatted USB Stick

In KIWI NG, all generated ISO images are created to be hybrid. This means the image can be used as a CD/DVD or as a disk. Deploying such an image onto a disk like a USB stick normally destroys all existing data on the device. Most USB sticks are pre-formatted with a FAT32 Windows file system, so a different deployment method is needed to keep the existing data on the stick untouched.

The following deployment process copies the ISO image as an additional file to the USB stick and makes the USB stick bootable. The ability to boot from the stick is configured through a SYSLINUX feature which allows loopback mounting an ISO file and booting the kernel and initrd directly from it.

The initrd loaded in this process must also be able to loopback mount the ISO file to access the root filesystem and boot the live system. The dracut initrd system used by KIWI NG provides this feature upstream under the name “iso-scan”. Therefore all KIWI NG generated live ISO images support this deployment mode.

An extra tool named live-grub-stick exists for copying the ISO file onto the USB stick and setting up the SYSLINUX bootloader to make use of the “iso-scan” feature. The following procedure shows how to set up the USB stick with live-grub-stick:

  1. Install the live-grub-stick package from software.opensuse.org:
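
    On openSUSE, for example, this can be done with zypper; availability of the package in your configured repositories is an assumption:

    $ sudo zypper install live-grub-stick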

  2. Plug in a USB stick

    Once plugged in, check which Unix device name the FAT32 partition was assigned. The following command provides an overview of all storage devices and their filesystems:

    $ sudo lsblk --fs
  3. Call the live-grub-stick command as follows:

    Assuming “/dev/sdz1” was the FAT32 partition selected from the output of the lsblk command:

    $ sudo live-grub-stick {exc_image_base_name}.x86_64-1.15.3.iso /dev/sdz1
  4. Boot from your USB Stick

    Activate booting from USB in your BIOS/UEFI. Since firmware implementations differ in how this is done, consult your user manual. EFI booting from an ISO image is not supported at the moment; for EFI booting use the --isohybrid option with live-grub-stick, but note that all data on the stick will then be lost. Many firmware implementations offer a boot menu which can be activated at boot time. Usually this can be reached by pressing the Esc or F12 keys.

11.3 Booting a Live ISO Image from Grub2

In KIWI NG, all generated ISO images are created to be hybrid. This means the image can be used as a CD/DVD or as a disk, because the ISO image also has a partition table embedded. With more and more computers delivered without a CD/DVD drive, this becomes increasingly important. Deploying such an image onto a disk like a USB stick normally destroys all existing data on the device and also prevents using the stick as a data storage device. Most USB sticks are pre-formatted with a FAT32 or exFAT Windows file system, so a different deployment method is needed to keep the existing data on the stick untouched.

Fortunately, Grub2 supports booting directly from ISO files, no matter whether it is installed on your computer’s hard drive or on a USB stick. The following deployment process copies the ISO image as an additional file to the USB stick or hard drive. The ability to boot from the disk is configured through a Grub2 feature which allows loopback mounting an ISO file and booting the kernel and initrd directly from it.

The initrd loaded in this process must also be able to loopback mount the ISO file to access the root filesystem and boot the live system. Almost every Linux distribution supports FAT32, and more and more of them also support exFAT. For hard drives, Linux filesystems are also supported.

The dracut initrd system used by KIWI NG provides this feature upstream under the name “iso-scan/filename”. Therefore all KIWI NG generated live ISO images support this deployment mode.

The following procedure shows how to set up Grub2 on your hard drive:

  1. Copy the ISO image to a folder of your choice on your hard drive.

  2. Add the following code to the “grub.cfg” file:

    Be sure to set the correct path to the ISO image; you can choose your own menu name. The drive identification for Grub2 can be checked at boot time by pressing the ‘c’ key and typing ‘ls’.

    submenu "Boot from openSUSE ISO" {
         iso_path="(hd0,gpt2)/path/to/openSUSE.iso"
         export iso_path
         loopback loop "$iso_path"
         root=(loop)
         source /boot/grub2/loopback.cfg
         loopback --delete loop
    }
  3. Restart your computer and select the added menu.

For USB sticks, the procedure is identical: install Grub2 on the USB drive and follow the steps above. The use of scripts such as “MultiOS-USB” is strongly recommended.
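
As a rough sketch, assuming the stick's first partition is mounted at /mnt/usb and the stick device is /dev/sdX (both placeholders), installing Grub2 for BIOS booting onto the stick could look like this:

$ sudo grub2-install --target=i386-pc --boot-directory=/mnt/usb/boot /dev/sdX

The grub.cfg with the submenu shown above would then be placed at /mnt/usb/boot/grub2/grub.cfg.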

11.4 Image Description for Amazon EC2

A virtual disk image which is able to boot in the Amazon EC2 cloud framework has to comply with the following constraints:

  • Xen tools and libraries must be installed

  • cloud-init package must be installed

  • cloud-init configuration for Amazon must be provided

  • Grub bootloader modules for Xen must be installed

  • AWS tools must be installed

  • Disk size must be set to 10G

  • Kernel parameters must allow for xen console

To meet these requirements, add or update the KIWI NG image description as follows:

  1. Software packages

    Make sure to add the following packages to the package list

    Note

    Package names used in the following list match the package names of the SUSE distribution and might be different on other distributions.

    <package name="aws-cli"/>
    <package name="grub2-x86_64-xen"/>
    <package name="xen-libs"/>
    <package name="xen-tools-domU"/>
    <package name="cloud-init"/>
  2. Image Type definition

    Update the oem image type setup as follows

    <type image="oem"
          filesystem="ext4"
          kernelcmdline="console=xvc0 multipath=off net.ifnames=0"
          devicepersistency="by-label"
          firmware="ec2">
      <bootloader name="grub2" timeout="1"/>
      <size unit="M">10240</size>
      <machine xen_loader="hvmloader"/>
      <oemconfig>
          <oem-resize>false</oem-resize>
      </oemconfig>
    </type>
  3. Cloud Init setup

    Cloud-init is a service which runs at boot time and allows customizing the system by activating one or more cloud-init modules. For Amazon EC2 the following configuration file /etc/cloud/cloud.cfg needs to be provided as part of the overlay files in your KIWI NG image description:

    users:
      - default
    
    disable_root: true
    preserve_hostname: false
    syslog_fix_perms: root:root
    
    datasource_list: [ NoCloud, Ec2, None ]
    
    cloud_init_modules:
      - migrator
      - bootcmd
      - write-files
      - growpart
      - resizefs
      - set_hostname
      - update_hostname
      - update_etc_hosts
      - ca-certs
      - rsyslog
      - users-groups
      - ssh
    
    cloud_config_modules:
      - mounts
      - ssh-import-id
      - locale
      - set-passwords
      - package-update-upgrade-install
      - timezone
    
    cloud_final_modules:
      - scripts-per-once
      - scripts-per-boot
      - scripts-per-instance
      - scripts-user
      - ssh-authkey-fingerprints
      - keys-to-console
      - phone-home
      - final-message
      - power-state-change
    
    system_info:
      default_user:
        name: ec2-user
        gecos: "cloud-init created default user"
        lock_passwd: True
        sudo: ["ALL=(ALL) NOPASSWD:ALL"]
        shell: /bin/bash
      paths:
        cloud_dir: /var/lib/cloud/
        templates_dir: /etc/cloud/templates/
      ssh_svcname: sshd

An image built with the above setup can be uploaded into the Amazon EC2 cloud and registered as an image. For further information on how to upload to EC2 see: ec2uploadimg
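
For reference, building the image from such a description uses the standard KIWI NG invocation; the description path and target directory below are placeholders:

$ sudo kiwi-ng system build \
      --description <path-to-ec2-description> \
      --target-dir /tmp/ec2-result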

11.5 Image Description for Microsoft Azure

A virtual disk image which is able to boot in the Microsoft Azure cloud framework has to comply with the following constraints:

  • Hyper-V tools must be installed

  • Microsoft Azure Agent must be installed

  • Disk size must be set to 30G

  • Kernel parameters must allow for serial console

To meet these requirements, update the KIWI NG image description as follows:

  1. Software packages

    Make sure to add the following packages to the package list

    Note

    Package names used in the following list match the package names of the SUSE distribution and might be different on other distributions.

    <package name="hyper-v"/>
    <package name="python-azure-agent"/>
  2. Image Type definition

    Update the oem image type setup as follows

    <type image="oem"
          filesystem="ext4"
          kernelcmdline="console=ttyS0 rootdelay=300 net.ifnames=0"
          devicepersistency="by-uuid"
          format="vhd-fixed"
          formatoptions="force_size"
          bootpartition="true"
          bootpartsize="1024">
      <bootloader name="grub2" timeout="1"/>
      <size unit="M">30720</size>
      <oemconfig>
          <oem-resize>false</oem-resize>
      </oemconfig>
    </type>

An image built with the above setup can be uploaded into the Microsoft Azure cloud and registered as an image. For further information on how to upload to Azure see: azurectl
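
Before uploading, the resulting fixed-size VHD can be inspected with qemu-img; the .vhdfixed file extension is an assumption derived from the configured vhd-fixed format:

$ qemu-img info {exc_image_base_name}.x86_64-1.15.3.vhdfixed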

11.6 Image Description for Google Compute Engine

A virtual disk image which is able to boot in the Google Compute Engine cloud framework has to comply with the following constraints:

  • KIWI NG type must be an expandable disk

  • Google Compute Engine init must be installed

  • Disk size must be set to 10G

  • Kernel parameters must allow for serial console

To meet these requirements, update the KIWI NG image description as follows:

  1. Software packages

    Make sure to add the following packages to the package list

    Note

    Package names used in the following list match the package names of the SUSE distribution and might be different on other distributions.

    <package name="google-compute-engine-init"/>
  2. Image Type definition

    To allow the image to be expanded to the configured disk geometry of the instance started by Google Compute Engine, it is suggested to let KIWI NG’s OEM boot code take over that task. It would also be possible to use cloud-init’s resize module, but we found conflicts when the two cloud-init systems, google-compute-engine-init and cloud-init, were used together. Thus for now we stick with KIWI NG’s boot code, which can resize the disk from within the initrd before the system gets activated through systemd.

    Change the oem image type setup into an expandable type as follows:

    <type image="oem"
          initrd_system="dracut"
          filesystem="ext4"
          kernelcmdline="console=ttyS0,38400n8 net.ifnames=0"
          format="gce">
      <bootloader name="grub2" timeout="1"/>
      <size unit="M">10240</size>
      <oemconfig>
          <oem-resize>true</oem-resize>
          <oem-swap>false</oem-swap>
      </oemconfig>
    </type>

An image built with the above setup can be uploaded into the Google Compute Engine cloud and registered as an image. For further information on how to upload to Google see: google-cloud-sdk on software.opensuse.org
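
As a sketch, an upload via the Google Cloud SDK could look like the following; the bucket name, image name and the produced .tar.gz file name are assumptions:

$ gsutil cp {exc_image_base_name}.x86_64-1.15.3.tar.gz gs://my-bucket/
$ gcloud compute images create my-gce-image \
      --source-uri gs://my-bucket/{exc_image_base_name}.x86_64-1.15.3.tar.gz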

11.7 Image Description for Vagrant

Vagrant is a framework to implement consistent processing/testing work environments based on virtualization technologies. To run a system, Vagrant needs so-called boxes. A box is a TAR archive containing a virtual disk image and some metadata.

To build Vagrant boxes, you can use Packer, which is provided by HashiCorp itself. Packer is based on the official installation media (DVDs) as shipped by the distribution vendor.

The KIWI NG way of building images might be helpful if such a medium does not exist or does not suit your needs, for example if the distribution is still under development or you want to use a collection of your own repositories. Note that, in contrast to Packer, KIWI NG only supports the libvirt and VirtualBox providers. Other providers require a different box layout that is currently not supported by KIWI NG.

In addition, you can use the KIWI NG image description as source for the Open Build Service which allows building and maintaining boxes.

Vagrant expects boxes to be set up in a specific way (for details refer to the Vagrant box documentation). Applied to the referenced KIWI NG image description from Section 10.2, “Build a Virtual Disk Image”, the following steps are required:

  1. Update the image type setup

    <type image="oem" filesystem="ext4" format="vagrant">
        <bootloader name="grub2" timeout="0"/>
        <vagrantconfig provider="libvirt" virtualsize="42"/>
        <size unit="G">42</size>
        <oemconfig>
            <oem-resize>false</oem-resize>
        </oemconfig>
    </type>

    This modifies the type to build a Vagrant box for the libvirt provider including a pre-defined disk size. The disk size is optional, but recommended to provide some free space on disk.

    For the VirtualBox provider, the additional attribute virtualbox_guest_additions_present can be set to true when the VirtualBox guest additions are installed in the KIWI NG image:

    <type image="oem" filesystem="ext4" format="vagrant">
        <bootloader name="grub2" timeout="0"/>
        <vagrantconfig
          provider="virtualbox"
          virtualbox_guest_additions_present="true"
          virtualsize="42"
        />
        <size unit="G">42</size>
        <oemconfig>
            <oem-resize>false</oem-resize>
        </oemconfig>
    </type>

    The resulting Vagrant box then uses the vboxsf module for the synchronized folder instead of rsync, which is used by default.

  2. Add mandatory packages

    <package name="sudo"/>
    <package name="openssh"/>
  3. Add additional packages

    If you have set the attribute virtualbox_guest_additions_present to true, add the VirtualBox guest additions. For openSUSE the following packages are required:

    <package name="virtualbox-guest-tools"/>
    <package name="virtualbox-guest-x11"/>
    <package name="virtualbox-guest-kmp-default"/>

    Otherwise, you must add rsync:

    <package name="rsync"/>

    Note that KIWI NG cannot verify whether these packages are installed. If they are missing, the resulting Vagrant box will be broken.

  4. Add Vagrant user

    <users group='vagrant'>
        <user name='vagrant' password='vh4vw1N4alxKQ' home='/home/vagrant'/>
    </users>

    This adds the vagrant user to the system and applies the name of the user as the password for login.

  5. Configure SSH, the default shared folder and sudo permissions

    Vagrant expects that it can log in as the user vagrant using an insecure public key. Furthermore, Vagrant usually uses /vagrant as the default shared folder and assumes that the vagrant user can invoke commands via sudo without having to enter a password.

    This can be achieved using the function baseVagrantSetup in config.sh:

    baseVagrantSetup
  6. Additional customizations:

    In addition to baseVagrantSetup, you might also want to ensure the following (see the sketch after this list):

    • If you have installed the VirtualBox guest additions into your box, then also load the vboxsf kernel module.

    • When building boxes for libvirt, ensure that the default wired networking interface is called eth0 and uses DHCP. This is necessary because libvirt uses dnsmasq to issue IP addresses to the VMs. This step can be omitted for VirtualBox boxes.
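
The following is a sketch of how these customizations could be appended to config.sh after the baseVagrantSetup call; the configuration file paths assume a systemd-based SUSE image:

# Load the vboxsf kernel module at boot (VirtualBox boxes only).
echo "vboxsf" > /etc/modules-load.d/vboxsf.conf

# Configure the default wired interface eth0 with DHCP (libvirt boxes).
# This assumes predictable interface naming is disabled, for example via
# net.ifnames=0 on the kernel command line.
cat > /etc/sysconfig/network/ifcfg-eth0 <<EOF
BOOTPROTO='dhcp'
STARTMODE='auto'
EOF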

An image built with the above setup creates a Vagrant box file with the extension .vagrant.libvirt.box or .vagrant.virtualbox.box. Add the box file to Vagrant with the command:

vagrant box add my-box image-file.vagrant.libvirt.box
Note

Using the box with the libvirt provider requires, alongside a correct Vagrant installation:

  • the plugin vagrant-libvirt to be installed

  • a running libvirtd daemon

Once added to Vagrant, boot the box and log in with the following sequence of vagrant commands:

vagrant init my-box
vagrant up --provider libvirt
vagrant ssh

11.7.1 Customizing the embedded Vagrantfile

Warning

This is an advanced topic and not required for most users

Vagrant boxes ship with an embedded Vagrantfile that carries settings specific to the box, for instance the synchronization mechanism for the shared folder. KIWI NG generates such a file automatically for you, and it should be sufficient for most use cases.

If a box requires different settings in the embedded Vagrantfile, the user can provide KIWI NG with a path to an alternative via the attribute embedded_vagrantfile of the vagrantconfig element: it specifies a relative path to the Vagrantfile that will be included in the finished box.

In the following example snippet from config.xml we add a custom MyVagrantfile into the box (the file should be in the image description directory next to config.sh):

<type image="oem" filesystem="ext4" format="vagrant">
    <bootloader name="grub2" timeout="0"/>
    <vagrantconfig
      provider="libvirt"
      virtualsize="42"
      embedded_vagrantfile="MyVagrantfile"
    />
    <size unit="G">42</size>
    <oemconfig>
        <oem-resize>false</oem-resize>
    </oemconfig>
</type>
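
For illustration, a minimal MyVagrantfile could contain just the shared folder setup; the settings shown are an assumption of what a box author might want, not a requirement:

Vagrant.configure("2") do |config|
  config.vm.synced_folder ".", "/vagrant", type: "rsync"
end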

The option to provide a custom Vagrantfile can be combined with the usage of profiles (see Section 7.4, “Image Profiles”), so that certain builds use the automatically generated Vagrantfile (the VirtualBox build in the following example) while others get a customized one (the libvirt profile in the following example):

<?xml version="1.0" encoding="utf-8"?>

<image schemaversion="8.0" name="{exc_image_base_name}">
  <!-- description goes here -->
  <profiles>
    <profile name="libvirt" description="Vagrant Box for Libvirt"/>
    <profile name="virtualbox" description="Vagrant Box for VirtualBox"/>
  </profiles>

  <!-- general preferences go here -->

  <preferences profiles="libvirt">
    <type
      image="oem"
      filesystem="ext4"
      format="vagrant">
        <bootloader name="grub2" timeout="0"/>
        <vagrantconfig
          provider="libvirt"
          virtualsize="42"
          embedded_vagrantfile="LibvirtVagrantfile"
        />
        <size unit="G">42</size>
        <oemconfig>
            <oem-resize>false</oem-resize>
        </oemconfig>
   </type>
   </preferences>
   <preferences profiles="virtualbox">
     <type
       image="oem"
       filesystem="ext4"
       format="vagrant">
         <bootloader name="grub2" timeout="0"/>
         <vagrantconfig
           provider="virtualbox"
           virtualbox_guest_additions_present="true"
           virtualsize="42"
         />
         <size unit="G">42</size>
         <oemconfig>
             <oem-resize>false</oem-resize>
         </oemconfig>
     </type>
   </preferences>

   <!-- remaining box description -->
 </image>

11.8 Image Description Encrypted Disk

A virtual disk image can be partially or fully encrypted using the LUKS extension supported by KIWI NG. In a fully encrypted image, the data in /boot is encrypted as well. Such an image requests the passphrase for the master key to be entered at the bootloader stage. A partially encrypted image keeps /boot unencrypted on an extra boot partition. Such an image requests the passphrase for the master key later in the boot process, when the root partition gets accessed by the systemd mount service. In any case the master passphrase is requested only once.

Update the KIWI NG image description as follows:

  1. Software packages

    Make sure to add the following package to the package list

    Note

    Package names used in the following list match the package names of the SUSE distribution and might be different on other distributions.

    <package name="cryptsetup"/>
  2. Image Type definition

    Update the oem image type setup as follows

    Full disk encryption including /boot:
    <type image="oem" filesystem="ext4" luks="linux" bootpartition="false">
        <oemconfig>
            <oem-resize>false</oem-resize>
        </oemconfig>
    </type>
    Encrypted root partition with an unencrypted extra /boot partition:
    <type image="oem" filesystem="ext4" luks="linux" bootpartition="true">
        <oemconfig>
            <oem-resize>false</oem-resize>
        </oemconfig>
    </type>
    Note

    The value of the luks attribute sets the master passphrase for the LUKS keyring. Therefore the XML description becomes security-critical and should only be readable by trustworthy people. Alternatively, the credentials can be stored in a key file and referenced as:

    <type luks="file:///path/to/keyfile"/>
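
A key file suitable for such a reference can be created beforehand, for example as follows; the path and key size are arbitrary choices:

$ dd if=/dev/urandom of=/path/to/keyfile bs=1k count=4
$ chmod 0600 /path/to/keyfile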

11.9 Deploy and Run System in a RamDisk

If a machine should run the OS completely in memory without the need for any persistent storage, deploying the image into a ramdisk serves this purpose. KIWI NG allows creating a bootable ISO image which deploys the image into a ramdisk and activates that image with the following oem type definition:

<type image="oem"
      filesystem="ext4"
      installiso="true"
      initrd_system="dracut"
      installboot="install"
      kernelcmdline="rd.kiwi.ramdisk ramdisk_size=2048000">
    <bootloader name="grub2" timeout="1"/>
    <oemconfig>
        <oem-skip-verify>true</oem-skip-verify>
        <oem-unattended>true</oem-unattended>
        <oem-unattended-id>/dev/ram1</oem-unattended-id>
        <oem-swap>false</oem-swap>
        <oem-multipath-scan>false</oem-multipath-scan>
     </oemconfig>
 </type>

The type specification above builds an installation ISO image which deploys the System Image into the specified ramdisk device (/dev/ram1). The ISO image boots with a short timeout of 1 second and just runs through the process without asking any questions. In a ramdisk deployment the optional target verification, swap space and multipath targets are out of scope and therefore disabled.

The configured size of the ramdisk specifies the size of the OS disk and must be at least the size of the System Image. The disk size can be configured with the following value in the kernelcmdline attribute:

  • ramdisk_size=kbyte-value

For example, the value ramdisk_size=2048000 used above reserves roughly 2 GB.

An image built with the above setup can be tested in QEMU as follows:

$ sudo qemu -cdrom \
      {exc_image_base_name}.x86_64-1.15.3.install.iso \
      -serial stdio
Note

Enough Main Memory

The machine, no matter if it is a virtual machine like QEMU or a real machine, must provide enough RAM to hold the image in the ramdisk as well as enough RAM to operate the OS and its applications. The KIWI NG built image with the extension .raw provides the System Image which gets deployed into the RAM space. Subtract the size of the System Image from the RAM space the machine offers and make sure the result is still big enough for the use case of the appliance; for example, a 2 GB System Image on a machine with 8 GB of RAM leaves about 6 GB for the running system. In case of a virtual machine, attach enough main memory to fit this calculation. In case of QEMU this can be done with the -m option.

Like all other oem KIWI NG images, the ramdisk setup also supports all the deployment methods explained in Section 10.3.1, “Deployment Methods”. This means it is also possible to dump the ISO image on a USB stick, let the system boot from it, and unplug the stick from the machine, because the system was deployed into RAM.

Note

Limitations Of RamDisk Deployments

Only standard images which can be booted by a simple root mount and root switch can be used. Usually KIWI NG calls kexec after deployment, such that the correct dracut initrd created for the image boots the image. In case of a RAM-only system, kexec does not work because it would lose the ramdisk contents. Thus the dracut initrd driving the deployment is also the environment used to boot the system. There are cases where this environment is not suitable to boot the system.

11.10 Custom Disk Partitions

KIWI NG has its own partitioning schema which is defined according to several different user configurations: boot firmware, boot partition, expandable layouts, etc. Those supported features have an impact on the partitioning schema.

MBR or GUID partition tables are not flexible, carry limitations, and are tied to a specific disk geometry. Because of that, the preferred alternative to disk layouts based on traditional partition tables is using flexible approaches like logical volumes.

However, under certain conditions additional entries in the low-level partition table are needed. For this purpose the <partitions> section exists and allows adding custom entries, as shown in the following example:

<partitions>
    <partition name="var" size="100" mountpoint="/var" filesystem="ext3"/>
</partitions>

Each <partition> entry puts a partition of the configured size in the low-level partition table, creates a filesystem on it, and includes it in the system’s fstab file. If parts of the root filesystem are moved into their own partition, as is the case for /var in the above example, this partition will also contain the data installed to that area during the image creation process.

The following attributes must or can be set to configure a partition entry:

name=”identifier”

Mandatory name of the partition as handled by KIWI NG.

Note

The following reserved names cannot be used because they are already represented by existing attributes: root, readonly, boot, prep, spare, swap, efi_csm and efi.

partition_name=”name”

Optional name of the partition as it appears when listing the table contents with tools like gdisk. If no name is set, KIWI NG constructs a name of the form p.lx(identifier_from_name_attr).

partition_type=”type_identifier”

Optional partition type identifier as handled by KIWI NG. Allowed values are t.linux and t.raid. If not specified t.linux is the default.

size=”size_string”

Mandatory size of the partition. A size string can end with M or G to indicate a megabyte or gigabyte value. Without a unit specification, megabytes are used.

mountpoint=”path”

Mandatory mountpoint to mount the partition in the system.

filesystem=”btrfs|ext2|ext3|ext4|squashfs|xfs”

Mandatory filesystem configuration to create one of the supported filesystems on the partition.

clone=”number”

Optional setting to indicate that this partition should be cloned the given number of times. A cloned partition is content-wise an exact byte-for-byte copy of the origin. However, to avoid conflicts at boot time, the UUID of any cloned partition is made unique. In the sequence of partitions, the clone(s) will always be created first, followed by the partition considered the origin. The origin partition is the one that will be referenced and used by the system.
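
For illustration, an entry combining several of the attributes described above could look like this (all values are arbitrary examples):

<partitions>
    <partition name="var" partition_name="p.lxvar" partition_type="t.linux"
               size="500M" mountpoint="/var" filesystem="ext4" clone="1"/>
</partitions>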

Despite the customization options of the partition table shown above, the following limitations apply:

  1. By default the root partition is always the last one

    Disk images built with KIWI NG are designed to be expandable. For this feature to work, the partition containing the system rootfs must always be the last one. If this is unwanted for some reason, KIWI NG offers an opportunity for one extra/spare partition with the option to also be placed at the end of the table. For details look up spare_part in Section 8.1, “Image Description Elements”.

  2. By default there are no gaps in the partition table

    Partitions are configured such that there are no gaps in the table of the image. However, leaving some space free at the end of the partition geometry is possible in the following ways:

    • Deploy with unpartitioned free space.

      To leave space unpartitioned on first boot of a disk image it is possible to configure an <oem-systemsize> which is smaller than the disk the image gets deployed to. Details about this setting can be found in Section 8.1, “Image Description Elements”

    • Build with unpartitioned free space.

      To leave space unpartitioned at build time of the image, it is possible to disable <oem-resize> and configure an <oem-systemsize> which is smaller than the KIWI NG calculated disk size or the fixed disk size set via the <size> element.

    • Build with unpartitioned free space.

      Some unpartitioned free space on the disk can be set using the unpartitioned attribute of the size element in the type’s section. For details see Section 10.2.2, “Modifying the Size of the Image”.

    • Resize built image adding unpartitioned free space.

      A built image can be resized by using the kiwi-ng image resize command to set a new, extended size for the disk. See the KIWI NG commands docs, Section 4.8, “kiwi-ng image resize”.

11.11 Custom Disk Volumes

KIWI NG supports defining custom volumes by using the Logical Volume Manager (LVM) of the Linux kernel or by setting volumes at the filesystem level when the filesystem supports it (e.g. btrfs).

Volumes are defined in the KIWI NG description file config.xml using the systemdisk element, which is a child of type. Volumes themselves are added via (multiple) volume child elements of the systemdisk element:

<image schemaversion="8.0" name="openSUSE-Leap-15.1">
  <type image="oem" filesystem="btrfs">
    <systemdisk name="vgroup" preferlvm="true">
      <volume name="usr/lib" size="1G" label="library"/>
      <volume name="@root" freespace="500M"/>
      <volume name="etc_volume" mountpoint="etc" copy_on_write="false"/>
      <volume name="bin_volume" size="all" mountpoint="/usr/bin" quota="2G"/>
    </systemdisk>
  </type>
</image>

Additional non-root volumes are created for each volume element. Volume details can be defined by setting the following volume attributes:

  • name: Required attribute representing the volume’s name. Additionally, this attribute is interpreted as the mountpoint if the mountpoint attribute is not used.

  • mountpoint: Optional attribute that specifies the mountpoint of this volume.

  • size: Optional attribute to set the size of the volume. If no suffix (M or G) is used, then the value is considered to be in megabytes.

    Note

    Special name for the root volume

    One can use the @root name to refer to the volume mounted at /, in case some specific size attributes for the root volume have to be defined. For instance:

    <volume name="@root" size="4G"/>

    In addition to the custom size of the root volume it’s also possible to setup the name of the root volume as follows:

    <volume name="@root=rootlv" size="4G"/>

    If no name for the root volume is specified, the default name LVRoot applies.

  • freespace: Optional attribute defining the additional free space added to the volume. If no suffix (M or G) is used, the value is considered to be in megabytes.

  • label: Optional attribute to set filesystem label of the volume.

  • copy_on_write: Optional attribute to set the filesystem copy-on-write attribute for this volume.

  • quota: Optional attribute for the btrfs filesystem only. Allows to specify a quota size for the generated volume.

  • filesystem_check: Optional attribute to indicate that this filesystem should be validated by a filesystem check. Whether the check is actually performed depends on systemd and filesystem-specific components. If not set or set to false, no system component is triggered to run a filesystem check, with the result that this filesystem is never checked. The latter is the default.

  • arch: Optional attribute to create the volume only if it matches the specified host architecture. Multiple architecture names can be specified as a comma-separated list (see the example below).
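
    For example, a volume created only on selected architectures could be declared as follows (name and size are arbitrary):

    <volume name="home" size="2G" arch="x86_64,aarch64"/>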

Warning

The size attributes for filesystem volumes, as for btrfs, are ignored and have no effect.

The systemdisk element additionally supports the following optional attributes:

  • name: The volume group name, by default kiwiVG is used. This setting is only relevant for LVM volumes.

  • preferlvm: Boolean value instructing KIWI NG to prefer LVM even if the used filesystem has its own volume management system.

11.12 Partition Clones

KIWI NG allows creating block-level clones of certain partitions used in the image. Clones can be created from the root, boot and any other partition listed in the <partitions> element.

A partition clone is a simple byte dump from one block storage device to another. On its own, this would cause conflicts during boot of the system, because unique identifiers like the UUID of a filesystem would no longer be unique. The clone feature of KIWI NG takes care of this and re-creates the relevant unique identifiers per cloned partition. KIWI NG allows this even for complex partitions like LVM, LUKS or RAID.

The partition clone(s) will always appear first in the partition table, followed by the origin partition. The origin partition is the one whose identifier will be referenced and used by the system. By default no cloned partition will be mounted or used by the system at boot time.

Let’s take a look at the following example:

<type image="oem"
      root_clone="1"
      boot_clone="1"
      firmware="uefi"
      filesystem="xfs"
      bootpartition="true"
      bootfilesystem="ext4">
    <partitions>
        <partition name="home" size="10" mountpoint="/home" filesystem="ext3" clone="2"/>
    </partitions>
    <oemconfig>
        <oem-systemsize>2048</oem-systemsize>
        <oem-resize>false</oem-resize>
    </oemconfig>
    <size unit="G">100</size>
</type>

With the above setup KIWI NG will create a disk image that contains the following partition table:

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048            6143   2.0 MiB     EF02  p.legacy
   2            6144           47103   20.0 MiB    EF00  p.UEFI
   3           47104          661503   300.0 MiB   8300  p.lxbootclone1
   4          661504         1275903   300.0 MiB   8300  p.lxboot
   5         1275904         1296383   10.0 MiB    8300  p.lxhomeclone1
   6         1296384         1316863   10.0 MiB    8300  p.lxhomeclone2
   7         1316864         1337343   10.0 MiB    8300  p.lxhome
   8         1337344         3864575   2 GiB       8300  p.lxrootclone1
   9         3864576         6287326   2 GiB       8300  p.lxroot

When booting the system, only the origin partitions p.lxboot, p.lxroot and p.lxhome will be mounted and visible in e.g. /etc/fstab, the bootloader or the initrd. Thus partition clones are present as a data source but are not relevant for the operating system from a functional perspective.

As shown in the above example, there is one clone request each for the root and boot partitions, and two clone requests for the home partition. KIWI NG does not sanity-check the provided number of clones (e.g. whether your partition table can hold that many partitions).

Warning

There is a limit to how many partitions a partition table can hold. This also limits how many clones can be created.

When using the root_clone attribute, it is important to specify a size for the part of the system that represents the root partition. As of today, KIWI NG does not automatically divide the root partition into two identical pieces; to create a clone of the partition, a size specification is required. In the above example the size for root is provided via the oem-systemsize element. Using a root clone and fixed size values has the following consequences:

  1. The resize capability must be disabled. This is done via the oem-resize element. The reason is that only the last partition in the partition table can be resized without destroying data. If there is a clone of the root partition, it should never be resized, because then the two partitions would differ in size and no longer be clones of each other.

  2. There can be unpartitioned space left. In the above example the overall disk size is set to 100G. The sum of all partition sizes will be smaller than this value, and no resize is available anymore. Depending on the overall size setup for the disk, this will leave unpartitioned space free on the disk.

11.12.1 Use Case

Potential use cases for which a clone of one or more partitions is useful include, among others:

Factory Resets:

Creating an image with the option to roll back to the state of the system at deployment time can be very helpful for disaster recovery.

System Updates with Rollbacks, e.g. A/B:

Creating an image which holds extra space allowing modified data to be rolled back can make a system more robust. For example, in a simple A/B update concept, partition A would get updated and the system would flip to B if A is considered broken after applying the update.

Note

Most probably, any use case based on partition clones requires additional software to manage them. KIWI NG provides the option to create the clone layout, but it does not provide the software to implement the actual use case for which the partition clones are needed.

Developers writing applications based on a clone layout created with KIWI NG can leverage the metadata file /config.partids. This file is created at build time and contains the mapping between the partition name and the actual partition number in the partition table. For partition clones, the following naming convention applies:

kiwi_(name)PartClone(id)="(partition_number)"

The (name) is either taken from the name attribute of the <partition> element or it is a fixed name assigned by KIWI NG. There are the following reserved partition names for which cloning is supported:

  • root

  • readonly

  • boot

For the mentioned example this will result in the following /config.partids:

kiwi_BiosGrub="1"
kiwi_EfiPart="2"
kiwi_bootPartClone1="3"
kiwi_BootPart="4"
kiwi_homePartClone1="5"
kiwi_homePartClone2="6"
kiwi_HomePart="7"
kiwi_rootPartClone1="8"
kiwi_RootPart="9"
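
Since the file consists of plain shell variable assignments, a deployment script could source it to map partition names to partition numbers; the disk device name below is an assumption:

$ source /config.partids
$ echo "/dev/sda${kiwi_homePartClone1}"
/dev/sda5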

11.13 Setting Up a Network Boot Server

To be able to deploy a system through the PXE boot protocol, you need to set up a network boot server providing the DHCP and TFTP services. With dnsmasq, a utility exists which allows setting up all needed services at once:

11.13.1 Installing and Configuring DHCP and TFTP with dnsmasq

The following instructions can only serve as an example. Depending on your network structure, the IP addresses, ranges and domain settings need to be adapted to allow the DHCP server to work within your network. If you already have a DHCP server running in your network, make sure that the filename and next-server directives are correctly set on this server.

The following steps describe how to set up dnsmasq to work as DHCP and TFTP server.

  1. Install the dnsmasq package.

  2. Create the file /etc/dnsmasq.conf and insert the following content:

    # Don't function as a DNS server.
    port=0
    
    # Log information about DHCP transactions.
    log-dhcp
    
    # Set the root directory for files available via TFTP,
    # usually "/srv/tftpboot":
    tftp-root=TFTP_ROOT_DIR
    
    enable-tftp
    
    dhcp-range=BOOT_SERVER_IP,proxy

    In the next step, a boot method must be chosen. There is the PXE loader provided via pxelinux.0 from the syslinux package, and there is the GRUB loader provided via the grub package.

    Note

    Placeholders

    Replace all placeholders (written in uppercase) with data fitting your network setup.

2.1. Insert the following content to use pxelinux.0:

# The boot filename, Server name, Server Ip Address
dhcp-boot=pxelinux.0,,BOOT_SERVER_IP

# Disable re-use of the DHCP servername and filename fields as extra
# option space. That's to avoid confusing some old or broken
# DHCP clients.
dhcp-no-override

# PXE menu.  The first part is the text displayed to the user.
# The second is the timeout, in seconds.
pxe-prompt="Booting FOG Client", 1

# The known types are x86PC, PC98, IA64_EFI, Alpha, Arc_x86,
# Intel_Lean_Client, IA32_EFI, BC_EFI, Xscale_EFI and X86-64_EFI
# This option is first and will be the default if there is no input
# from the user.
pxe-service=X86PC, "Boot to FOG", pxelinux.0
pxe-service=X86-64_EFI, "Boot to FOG UEFI", ipxe
pxe-service=BC_EFI, "Boot to FOG UEFI PXE-BC", ipxe
Note

On boot of a network client with this configuration, the default pxelinux.0 config file is expected at TFTP_ROOT_DIR/pxelinux.cfg/default.

2.2. Insert the following content to use grub:

# The boot filename, Server name, Server Ip Address
dhcp-boot=boot/grub2/i386-pc/core.0,,BOOT_SERVER_IP

When using grub, the grub module referenced by dhcp-boot must be generated. To do this, change into the directory TFTP_ROOT_DIR and create the file setvars.conf with the following content:

set root=(tftp)
set net_default_server=BOOT_SERVER_IP
set prefix=boot/grub2

Now call the following commands to create the grub module:

$ grub2-mknetdir --net-directory=TFTP_ROOT_DIR --subdir=boot/grub2
$ grub2-mkimage -O i386-pc-pxe \
    --output boot/grub2/i386-pc/core.0 \
    --prefix=/boot/grub2 \
    -c setvars.conf \
  pxe tftp
Note

On boot of a network client with this configuration, the grub config file is expected at TFTP_ROOT_DIR/boot/grub2/grub.cfg.

  3. Run the dnsmasq server by calling:

    systemctl start dnsmasq

11.14 Build PXE Root File System Image for the legacy netboot infrastructure

  • how to build a PXE file system image

  • how to setup the PXE file system image on the PXE server

  • how to run it with QEMU

PXE is a network boot protocol that is shipped with most BIOS implementations. The protocol sends a DHCP request to get an IP address. When an IP address is assigned, it uses the TFTP protocol to download a kernel and boot instructions. Contrary to other images built with KIWI NG, a PXE image consists of separate boot, kernel and root filesystem images, since those images need to be made available in different locations on the PXE boot server.

A root filesystem image which can be deployed via KIWI NG’s PXE netboot infrastructure represents the system rootfs in a Linux filesystem. A user could loop mount the image and access the contents of the root filesystem. The image does not contain any information about the system disk, its partitions or the bootloader setup. All of this information is provided by a client configuration file on the PXE server which controls how the root filesystem image should be deployed.

Many different deployment strategies are possible, e.g. root over NBD (Network Block Device), AoE (ATA over Ethernet), or NFS, for diskless and diskful clients. This particular example shows how to build an overlayfs-based union system based on openSUSE Leap for a diskless client which receives the squashfs-compressed root filesystem image in a ramdisk, overlaid via overlayfs, and writes new data into another ramdisk on the same system. As diskless client, a QEMU virtual machine is used.

  1. Make sure you have checked out the example image descriptions, see Section 2.4, “Example Appliance Descriptions”.

  2. Build the image with KIWI NG:

    $ sudo kiwi-ng --profile Flat system build \
        --description kiwi/build-tests/x86/tumbleweed/test-image-pxe \
        --set-repo http://download.opensuse.org/tumbleweed/repo/oss \
        --target-dir /tmp/mypxe-result
  3. Change into the build directory:

    $ cd /tmp/mypxe-result
  4. Copy the initrd and the kernel to /srv/tftpboot/boot:

    $ cp *.initrd /srv/tftpboot/boot/initrd
    $ cp *.kernel /srv/tftpboot/boot/linux
  5. Copy the system image and its MD5 sum to /srv/tftpboot/image:

    $ cp kiwi-test-image-pxe.x86_64-1.15.3 /srv/tftpboot/image
    $ cp kiwi-test-image-pxe.x86_64-1.15.3.md5 /srv/tftpboot/image
  6. Adjust the PXE configuration file. The configuration file controls which kernel and initrd are loaded and which kernel parameters are set. A template has been installed at /srv/tftpboot/pxelinux.cfg/default from the kiwi-pxeboot package. The minimal configuration required to boot the example image looks like the following:

    DEFAULT KIWI-Boot
    
    LABEL KIWI-Boot
        kernel boot/linux
        append initrd=boot/initrd
        IPAPPEND 2
  7. Create the image client configuration file:

    $ vi /srv/tftpboot/KIWI/config.default
    
    IMAGE=/dev/ram1;kiwi-test-image-pxe.x86_64;1.15.3;192.168.100.2;4096
    UNIONFS_CONFIG=/dev/ram2,/dev/ram1,overlay

    All PXE boot based deployment methods are controlled by a client configuration file. The above configuration tells the client where to find the image and how to activate it. In this case the image will be deployed into a ramdisk (ram1) and overlay mounted such that all write operations land in another ramdisk (ram2). KIWI NG supports a variety of different deployment strategies based on the rootfs image created beforehand. For details, refer to PXE Client Setup Configuration.

  8. Connect the client to the network and boot. This can also be done in a virtualized environment using QEMU as follows:

    $ sudo qemu -boot n -m 4096

11.14.1 PXE Client Setup Configuration

All PXE boot based deployment methods are controlled by configuration files located in /srv/tftpboot/KIWI on the PXE server. Such a configuration file can either be client-specific (config.MAC_ADDRESS, for example config.00.AB.F3.11.73.C8), or generic (config.default).

In an environment with heterogeneous clients, this makes it possible to have a default configuration suitable for the majority of clients, configurations suitable for groups of clients (for example machines with similar or identical hardware), and individual configurations for selected machines.

The configuration file contains data about the image as well as configuration, synchronization, and partition parameters. The configuration file has the following general format:

IMAGE="device;name;version;srvip;bsize;compressed,...,"

DISK="device"
PART="size;id;Mount,...,size;id;Mount"
RAID="raid-level;device1;device2;..."

AOEROOT=ro-device[,rw-device]
NBDROOT="ip-address;export-name;device;swap-export-name;swap-device;write-export-name;write-device"
NFSROOT="ip-address;path"

UNIONFS_CONFIGURATION="rw-partition,compressed-partition,overlayfs"

CONF="src;dest;srvip;bsize;[hash],...,src;dest;srvip;bsize;[hash]"

KIWI_BOOT_TIMEOUT="seconds"
KIWI_KERNEL_OPTIONS="opt1 opt2 ..."

REBOOT_IMAGE=1
RELOAD_CONFIG=1
RELOAD_IMAGE=1
Note

Quoting the Values

The configuration file is sourced by Bash, so the same quoting rules as for Bash apply.

Not all configuration options need to be specified. It depends on the setup of the client which configuration values are required. The following is a collection of client setup examples which covers all supported PXE client configurations.

11.14.1.1 Setup Client with Remote Root

To serve the image from a remote location and redirect all write operations to a tmpfs, the following setup is required:

# When using AoE, see vblade toolchain for image export

AOEROOT=/dev/etherd/e0.1
UNIONFS_CONFIG=tmpfs,aoe,overlay

# When using NFS, see exports manual page for image export

NFSROOT="192.168.100.2;/srv/tftpboot/image/root"
UNIONFS_CONFIG=tmpfs,nfs,overlay

# When using NBD, see nbd-server manual page for image export

NBDROOT=192.168.100.2;root_export;/dev/nbd0
UNIONFS_CONFIG=tmpfs,nbd,overlay

The above setup shows the most common use case, where the image built with KIWI NG is populated over the network using either AoE, NBD or NFS in combination with overlayfs, which redirects all write operations to be local to the client. In any case, a setup of either AoE, NBD or NFS on the image server is required beforehand.

11.14.1.2 Setup Client with System on Local Disk

To deploy the image on a local disk the following setup is required:

Note

In the referenced x86/tumbleweed/test-image-pxe XML description, the pxe type must be changed as follows and the image needs to be rebuilt:

<type image="pxe" filesystem="ext3" boot="netboot/suse-tumbleweed"/>
IMAGE="/dev/sda2;kiwi-test-image-pxe.x86_64;1.15.3;192.168.100.2;4096"
DISK="/dev/sda"
PART="5;S;X,X;L;/"

The setup above creates a partition table on sda with a 5 MB swap partition (no mountpoint); the rest of the disk becomes a Linux(L) partition with / as mountpoint. The (X) in the PART setup is a placeholder indicating the default behaviour.

11.14.1.3 Setup Client with System on Local MD RAID Disk

To deploy the image on a local disk with prior software RAID configuration, the following setup is required:

Note

In the referenced x86/tumbleweed/test-image-pxe XML description, the pxe type must be changed as follows and the image needs to be rebuilt:

<type image="pxe" filesystem="ext3" boot="netboot/suse-tumbleweed"/>
RAID="1;/dev/sda;/dev/sdb"
IMAGE="/dev/md1;kiwi-test-image-pxe.x86_64;1.15.3;192.168.100.2;4096"
PART="5;S;x,x;L;/"

The first parameter of the RAID line is the RAID level. So far only raid1 (mirroring) is supported. The second and third parameters specify the raid disk devices which make up the array. If a RAID line is present, all partitions in PART will be created as RAID partitions. The first RAID is named md0, the second one md1, and so on. It is required to specify the correct RAID partition in the IMAGE line according to the PART setup. In this case md0 is reserved for the swap space and md1 is reserved for the system.

11.14.1.4 Setup Loading of Custom Configuration File(s)

In order to load, for example, a custom /etc/hosts file on the client, the following setup is required:

CONF="hosts;/etc/hosts;192.168.1.2;4096;ffffffff"

On boot of the client, KIWI NG’s boot code fetches the hosts file from the root of the server (192.168.1.2) with a 4k blocksize and deploys it as /etc/hosts on the client. The protocol is TFTP by default but can be changed via the kiwiservertype kernel commandline option. For details, see Setup a Different Download Protocol and Server.

11.14.1.5 Setup Client to Force Reload Image

To force the reload of the system image even if the image on the disk is up-to-date, the following setup is required:

RELOAD_IMAGE=1

The option only applies to configurations with a DISK/PART setup

11.14.1.6 Setup Client to Force Reload Configuration Files

To force the reload of all configuration files specified in CONF, the following setup is required:

RELOAD_CONFIG=1

By default, only configuration files which have changed according to their md5sum value are reloaded. With the above setup, all files will be reloaded from the PXE server. The option only applies to configurations with a DISK/PART setup.

11.14.1.7 Setup Client for Reboot After Deployment

To reboot the system after the initial deployment process is done the following setup is required:

REBOOT_IMAGE=1

11.14.1.8 Setup custom kernel boot options

To deactivate kernel mode setting on local boot of the client, the following setup is required:

KIWI_KERNEL_OPTIONS="nomodeset"
Note

This does not influence the kernel options passed to the client if it boots from the network. In order to set those up, the PXE configuration on the PXE server needs to be changed.

11.14.1.9 Setup a Custom Boot Timeout

To set up a custom 10 second timeout for the local boot of the client, the following setup is required:

KIWI_BOOT_TIMEOUT="10"
Note

This does not influence the boot timeout if the client boots from the network.

11.14.1.10 Setup a Different Download Protocol and Server

By default, all downloads controlled by the KIWI NG linuxrc code are performed by an atftp call using the TFTP protocol. With PXE, the download protocol is fixed, and thus you cannot change the way the kernel and the boot image (initrd) are downloaded. As soon as Linux takes over, the download protocols http, https and ftp are supported as well. KIWI NG uses the curl program to support the additional protocols.

To select one of the additional download protocols, the following kernel parameters need to be specified:

kiwiserver=192.168.1.1 kiwiservertype=ftp

To set up these parameters, edit the file /srv/tftpboot/pxelinux.cfg/default on your PXE boot server and change the append line accordingly.

Note

Once configured, all downloads except for kernel and initrd are controlled by the given server and protocol. You need to make sure that this server provides the same directory and file structure as initially provided by the kiwi-pxeboot package.

11.15 Booting a Root Filesystem from Network

In KIWI NG, the kiwi-overlay dracut module can be used to boot from a remotely exported root filesystem. The exported device is visible as a block device on the network client. KIWI NG supports the two export backends NBD (Network Block Device) and AoE (ATA over Ethernet) for this purpose. A system booted in this mode reads the contents of the root filesystem from the remote location and targets any write action into RAM by default. The kernel cmdline option rd.root.overlay.write can be used to specify an alternative device to use for writing. The two layers (read and write) are combined using the overlayfs filesystem.

For remote boot of a network client, the PXE boot protocol is used. This functionality requires a network boot server set up on the system. Details on how to set up such a server can be found in Section 11.13, “Setting Up a Network Boot Server”.

Before the KIS image can be built, the following configuration step is required:

  • Create a dracut configuration to include the kiwi-overlay module:

    $ cd kiwi/build-tests/x86/tumbleweed/test-image-pxe
    $ mkdir -p root/etc/dracut.conf.d
    $ cd root/etc/dracut.conf.d
    $ echo 'add_dracutmodules+=" kiwi-overlay "' > overlay.conf

Now the KIS image can be built as shown in Section 10.6, “Build KIS Image (Kernel, Initrd, System)”. After the build, the following configuration steps are required to boot from the network:

  1. Copy initrd/kernel from the KIS build to the PXE server

    The PXE boot process loads the configured kernel and initrd from the PXE server. For this reason, the following files must be copied to the PXE server as follows:

    $ cp *.initrd /srv/tftpboot/boot/initrd
    $ cp *.kernel /srv/tftpboot/boot/linux
  2. Export Root FileSystem to the Network

    Access to the root filesystem is implemented using either the AoE or the NBD protocol. This requires exporting the root filesystem image as a remote block device:

    Export via AoE:

    Install the vblade package on the system which is expected to export the root filesystem

    Note

    Not all versions of AoE are compatible with every kernel. This means the kernel on the system exporting the root filesystem image must provide an aoe kernel module compatible with the kernel used inside the root filesystem image.

    Once done, export the filesystem from the KIS build above as follows:

    $ vbladed 0 1 IFACE {exc_image_base_name}.x86_64-1.15.3

    The above command exports the given filesystem image file as a block storage device to the network of the given IFACE. On any machine except the one exporting the file, it appears as /dev/etherd/e0.1 once the aoe kernel module is loaded. The two numbers, 0 and 1 in the above example, specify the major and minor number used in the device node name on the reading side, in this case e0.1.

    Note

    Only machines in the same network as the given IFACE can see the exported block device.

    Export via NBD:

    Install the nbd package on the system which is expected to export the root filesystem

    Once done, export the filesystem from the KIS build above as follows:

    # Attach the root filesystem image from the KIS build to a loop device
    $ losetup /dev/loop0 {exc_image_base_name}.x86_64-1.15.3

    # Configure the NBD server to export the loop device under the name "export"
    $ vi /etc/nbd-server/config

      [generic]
          user = root
          group = root
      [export]
          exportname = /dev/loop0

    # Start the NBD server
    $ nbd-server
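
    Before configuring the boot entry, the export can optionally be verified from another machine using nbd-client. This is a sketch; the server IP address is hypothetical and the nbd kernel module must be available on the client:

    $ sudo modprobe nbd
    $ sudo nbd-client -N export 192.168.1.10 /dev/nbd0
    $ lsblk /dev/nbd0
    $ sudo nbd-client -d /dev/nbd0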
  3. Setup boot entry in the PXE configuration

    Note

    The following step assumes that the pxelinux.0 loader has been configured on the boot server to boot up network clients.

    Edit the file /srv/tftpboot/pxelinux.cfg/default and create a boot entry of the form:

    Using NBD:
    LABEL Overlay-Boot
        kernel boot/linux
        append initrd=boot/initrd root=overlay:nbd=server-ip:export

    The boot parameter root=overlay:nbd=server-ip:export specifies the NBD server IP address and the name of the export as used in /etc/nbd-server/config.

    Using AoE:
    LABEL Overlay-Boot
        kernel boot/linux
        append initrd=boot/initrd root=overlay:aoe=AOEINTERFACE

    The boot parameter root=overlay:aoe=AOEINTERFACE specifies the interface name as it was exported by the vbladed command.

  4. Boot from the Network

    Within the network which has access to the PXE server and the exported root filesystem image, any network client can now boot the system. A test based on QEMU can be done as follows:

    $ sudo qemu -boot n
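
    The shorthand above assumes a QEMU wrapper where network boot is available on the default NIC. A more explicit invocation might look like the following sketch; the bridge name br0 is an assumption and qemu-bridge-helper must be configured on the host:

    $ sudo qemu-system-x86_64 -boot n -m 2048 \
          -netdev bridge,id=net0,br=br0 \
          -device virtio-net-pci,netdev=net0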

11.16 Booting a Live ISO Image from Network

In KIWI NG, live ISO images can be configured to boot via the PXE boot protocol. This functionality requires a network boot server setup on the system. Details on how to set up such a server can be found in Section 11.13, “Setting Up a Network Boot Server”.

After the live ISO was built as shown in Section 10.1, “Build an ISO Hybrid Live Image”, the following configuration steps are required to boot from the network:

  1. Extract initrd/kernel From Live ISO

    The PXE boot process loads the configured kernel and initrd from the PXE server. For this reason, those two files must be extracted from the live ISO image and copied to the PXE server as follows:

    $ mount {exc_image_base_name}.x86_64-1.15.3.iso /mnt
    $ cp /mnt/boot/x86_64/loader/initrd /srv/tftpboot/boot/initrd
    $ cp /mnt/boot/x86_64/loader/linux /srv/tftpboot/boot/linux
    $ umount /mnt
    Note

    This step must be repeated for any new build of the live ISO image.
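
    Since the extraction must happen for every build, it can be scripted. The following is a minimal sketch, assuming the loader paths shown above:

    #!/bin/bash
    # publish-live-boot-files.sh: copy kernel/initrd from a live ISO to the PXE server
    set -e
    iso=$1
    mount -o loop,ro "$iso" /mnt
    cp /mnt/boot/x86_64/loader/initrd /srv/tftpboot/boot/initrd
    cp /mnt/boot/x86_64/loader/linux /srv/tftpboot/boot/linux
    umount /mnt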

  2. Export Live ISO To The Network

    Access to the live ISO file is implemented using the AoE protocol in KIWI NG. This requires exporting the live ISO file as a remote block device, which is typically done with the vblade toolkit. Install the following package on the system that is expected to export the live ISO image:

    $ zypper in vblade
    Note

    Not all AoE versions are compatible with every kernel. The kernel on the system exporting the live ISO image must provide an aoe kernel module that is compatible with the kernel used in the live ISO image.

    Once done, export the live ISO image as follows:

    $ vbladed 0 1 INTERFACE {exc_image_base_name}.x86_64-1.15.3.iso

    The above command exports the given ISO file as a block storage device to the network of the given INTERFACE. On any machine except the one exporting the file, it will appear as /dev/etherd/e0.1 once the aoe kernel module has been loaded. The two numbers, 0 and 1 in the above example, specify the major and minor number used in the device node name on the reading side, in this case e0.1. The numbers given at export time must match the AOEINTERFACE name as described in the next step.

    Note

    Only machines on the same network as the given INTERFACE can see the exported live ISO image. If virtual machines are the target to boot the live ISO image, they can all be connected through a bridge. In this case INTERFACE is the bridge device. The availability scope of the live ISO image on the network is in general not influenced by KIWI NG and is a task for the network administrators.

  3. Setup live ISO boot entry in PXE configuration

    Note

    The following step assumes that the pxelinux.0 loader has been configured on the boot server to boot up network clients.

    Edit the file /srv/tftpboot/pxelinux.cfg/default and create a boot entry of the form:

    LABEL Live-Boot
        kernel boot/linux
        append initrd=boot/initrd rd.kiwi.live.pxe root=live:AOEINTERFACE=e0.1
    • The boot parameter rd.kiwi.live.pxe tells the KIWI NG boot process to setup the network and to load the required aoe kernel module.

    • The boot parameter root=live:AOEINTERFACE=e0.1 specifies the interface name as it was exported by the vbladed command from the last step. Currently only AoE (ATA over Ethernet) is supported.

  4. Boot from the Network

    Within the network which has access to the PXE server and the exported live ISO image, any network client can now boot the live system. A test based on QEMU is done as follows:

    $ sudo qemu -boot n

11.17 Setting Up YaST at First Boot

To use YaST in a non-interactive way, create a YaST profile that tells the autoyast module what to do. To create the profile, run:

yast autoyast

Once the YaST profile exists, update the KIWI NG XML description as follows:

  1. Edit the KIWI NG XML file and add the following package to the <packages type="image"> section:

    <package name="yast2-firstboot"/>
  2. Copy the YaST profile file as overlay file to your KIWI NG image description overlay directory:

    cd IMAGE_DESCRIPTION_DIRECTORY
    mkdir -p root/etc/YaST2
    cp PROFILE_FILE root/etc/YaST2/firstboot.xml
  3. Copy and activate the YaST firstboot template. This is done with the following instructions, which need to be added to the KIWI NG config.sh stored in the image description directory:

    sysconfig_firstboot=/etc/sysconfig/firstboot
    sysconfig_template=/var/adm/fillup-templates/sysconfig.firstboot
    if [ ! -e "${sysconfig_firstboot}" ]; then
        cp "${sysconfig_template}" "${sysconfig_firstboot}"
    fi
    
    touch /var/lib/YaST2/reconfig_system

11.18 Add or Update the Fstab File

In KIWI NG, all major filesystems that were created at image build time are handled by KIWI NG itself and set up in /etc/fstab. Thus there is usually no need to add entries or to change the ones added by KIWI NG. However, depending on where the image is deployed later, it might be required to pre-populate fstab with entries that are unknown at the time the image is built.

Possible use cases are for example:

  • Adding NFS locations that should be mounted at boot time. Using autofs would be an alternative to avoid additional fstab entries. Either way, the information about the NFS location makes the image specific to the target network: independent of the mount method, either fstab or the automount map has to provide it.

  • Adding or changing entries in a read-only root system, which become effective on first boot but cannot be added at that time because of the read-only characteristics.

Note

Modifications to the fstab file are a critical change. If done incorrectly, the system might no longer boot. In addition, this type of modification makes the image specific to its target and creates a dependency on the target hardware, network, etc. Thus this feature should be used with care.

The optimal way to provide custom fstab information is through a package. If this can’t be done the files can also be provided via the overlay file tree of the image description.

KIWI NG supports three ways to modify the contents of the /etc/fstab file:

Providing an /etc/fstab.append file

If that file exists in the image root tree, KIWI NG takes its contents and appends them to the existing /etc/fstab file. The provided /etc/fstab.append file is deleted after successful modification.

Providing an /etc/fstab.patch file

The /etc/fstab.patch file represents a patch that is applied to /etc/fstab using the patch program. This method also allows changing the existing contents of /etc/fstab. On success, /etc/fstab.patch is deleted.

Providing an /etc/fstab.script file

The /etc/fstab.script file represents an executable that is called as a chrooted process. This method is the most flexible one and allows applying arbitrary changes. On success, /etc/fstab.script is deleted.
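
For illustration, here are minimal sketches of the three variants. Server names, devices, and mount points are hypothetical:

# root/etc/fstab.append: lines appended verbatim to /etc/fstab
nfs.example.com:/exports/data  /mnt/data  nfs  defaults,_netdev  0 0

A patch file can be produced by diffing an edited copy against the original, assuming a copy of the image's generated /etc/fstab is available:

$ diff -u fstab.orig fstab.new > root/etc/fstab.patch

#!/bin/bash
# root/etc/fstab.script: executed as a chrooted process in the image root
echo 'tmpfs /var/tmp tmpfs size=512M 0 0' >> /etc/fstab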

Note
Note

All three variants to handle the fstab file can be used together. Appending happens first, patching afterwards, and the script call is last. When using the script call, there is no validation that checks whether the script actually handles fstab or any other file in the image rootfs.

11.19 Building Images with Profiles

KIWI NG supports so-called profiles inside the XML image description. Profiles act as namespaces for additional settings to be applied on top of the defaults. For further details, see Section 7.4, “Image Profiles”.

11.19.1 Local Builds

To execute local KIWI NG builds with a specific, selected profile, add the command line flag --profile=$PROFILE_NAME:

$ sudo kiwi-ng --type oem --profile libvirt system build \
      --description kiwi/build-tests/x86/leap/test-image-vagrant \
      --set-repo obs://openSUSE:Leap:15.5/standard \
      --target-dir /tmp/myimage
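
Since the --profile flag can be specified multiple times, several profiles can also be activated in one build. The second profile name below is only an example:

$ sudo kiwi-ng --type oem --profile libvirt --profile extra system build \
      --description kiwi/build-tests/x86/leap/test-image-vagrant \
      --set-repo obs://openSUSE:Leap:15.5/standard \
      --target-dir /tmp/myimage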

Consult the manual page of kiwi for further details: Section 4.1.1, “SYNOPSIS”.

11.19.2 Building with the Open Build Service

The Open Build Service (OBS) supports profiles via the multibuild feature.

To enable and use the profiles, follow these steps:

  1. Add the following XML comment to your config.xml:

    <!-- OBS-Profiles: @BUILD_FLAVOR@ -->

    It must be added before the opening <image> element and after the <?xml?> element, e.g.:

    <?xml version="1.0" encoding="utf-8"?>
    <!-- OBS-Profiles: @BUILD_FLAVOR@ -->
    <image schemaversion="8.0" name="kiwi-test-image-vagrant">
      <!-- snip -->
    </image>
  2. Add a file _multibuild into your package’s repository with the following contents:

    <multibuild>
      <flavor>profile_1</flavor>
      <flavor>profile_2</flavor>
    </multibuild>

    Add a line <flavor>$PROFILE</flavor> for each profile that you want OBS to build.

Note that, by default, OBS excludes the build without any profile enabled.

Running a build of a multibuild enabled repository via osc can be achieved via the -M $PROFILE flag:

$ osc build -M $PROFILE

11.20 Building in the Open Build Service

Abstract

This document gives a brief overview of how to build images with KIWI NG version 10.2.2 inside the Open Build Service. A tutorial on the Open Build Service itself can be found at: https://en.opensuse.org/openSUSE:Build_Service_Tutorial

The next generation KIWI NG is fully integrated with the Open Build Service. To get started, it is best to check out one of the integration test image build projects from the base testing project Virtualization:Appliances:Images:Testing_$ARCH:$DISTRO at:

https://build.opensuse.org

For example, the test images for SUSE on x86 can be found in the corresponding Testing_x86 projects.

11.20.1 Advantages of using the Open Build Service (OBS)

The Open Build Service offers multiple advantages over running KIWI NG locally:

  • OBS will host the latest successful build for you, without you having to set up a server yourself.

  • As KIWI NG is fully integrated into OBS, OBS will automatically rebuild your images if one of the included packages, one of its dependencies, or KIWI NG itself gets updated.

  • The builds will no longer have to be executed on your own machine, but will run on OBS, thereby saving you resources. Nevertheless, if a build fails, you get a notification via email (if enabled in your user’s preferences).

11.20.2 Differences Between Building Locally and on OBS

Note that there are a number of differences when building images with KIWI NG using the Open Build Service. An image that builds locally just fine might not build on OBS without modifications.

The notable differences to running KIWI NG locally include:

  • OBS will pick the KIWI NG package from the repositories configured in your project, which will most likely not be the same version that you are running locally. This is especially relevant when building images for older versions like SUSE Linux Enterprise. Therefore, include the custom appliances repository as described in the following section: Recommendations.

  • When KIWI NG runs on OBS, OBS will extract the list of packages from config.xml and use it to create a build root. In contrast to a local build (where your distribution's package manager resolves the dependencies and installs the packages), OBS will not build your image if there are multiple packages that could be chosen to satisfy the dependencies of your packages. This results in errors like the following:

    unresolvable: have choice for SOMEPACKAGE: SOMEPACKAGE_1 SOMEPACKAGE_2

    This can be solved by explicitly specifying one of the two packages in the project configuration via the following setting:

    Prefer: SOMEPACKAGE_1

    Place the above line into the project configuration, which can be accessed either via the web interface (click on the tab Project Config on your project’s main page) or via osc meta -e prjconf.

    Warning

    We strongly encourage you to remove your repositories from config.xml and move them to the repository configuration in your project's settings. This usually prevents the issue of having a choice between multiple package versions and results in a much smoother experience when using OBS.

  • By default, OBS builds only a single build type and the default profile. If your appliance uses multiple build types, put each build type into a profile, as OBS cannot handle multiple build types.

    There are two options to build multiple profiles on OBS:

    1. List all profiles in an OBS-Profiles comment placed below the XML declaration (<?xml ..?>):

      <?xml version="1.0" encoding="utf-8"?>
      
      <!-- OBS-Profiles: foo_profile bar_profile -->
      
      <image schemaversion="8.0" name="openSUSE-Leap-15.1">
        <!-- image description with the profiles foo_profile and bar_profile -->
      </image>
    2. Use the multibuild feature.

    The first option is simpler to use, but has the disadvantage that your appliances are built sequentially. The multibuild feature allows building each profile as a single package, thereby enabling parallel execution, but it requires an additional _multibuild file. For the above example, config.xml would have to be adapted as follows:

    <?xml version="1.0" encoding="utf-8"?>
    
    <!-- OBS-Profiles: @BUILD_FLAVOR@ -->
    
    <image schemaversion="8.0" name="openSUSE-Leap-15.1">
      <!-- image description with the profiles foo_profile and bar_profile -->
    </image>

    The file _multibuild would have the following contents:

    <multibuild>
      <flavor>foo_profile</flavor>
      <flavor>bar_profile</flavor>
    </multibuild>
  • Subfolders in OBS projects are ignored by default by osc and must be explicitly added via osc add $FOLDER. Bear that in mind when adding the overlay files inside the root/ directory to your project.

  • OBS ignores file permissions. Therefore config.sh and images.sh will always be executed through BASH (see also: Section 7.6, “User-Defined Scripts”).

11.21 Using SUSE Product ISO To Build

When building an image with KIWI NG, the image description usually points to a number of public or private package source repositories from which the new image root tree is created. Alternatively, the vendor-provided product ISO image(s) can be used. The contents of the ISO (DVD) media also provide package source repositories, but organized in a vendor-specific structure. As a user, it is important to know about this structure so that the KIWI NG image description can consume it.

To use a SUSE product media the following steps are required:

  • Mount the ISO media from file or DVD drive:

    $ sudo mount Product_ISO_file.iso|DVD_drive /media/suse
  • Lookup all Product and Module directories:

    Below /media/suse there is a directory structure which provides package repositories in directories starting with Product-XXX and Module-XXX. Which location a package or one of its dependencies is taken from depends on the package list in the KIWI NG image description. Therefore it is best practice to browse through all the directories and create a <repository> definition for each of them in the KIWI NG image description, as the following example shows:

    <repository alias="DVD-1-Product-SLES">
        <source path="file:///media/suse/Product-SLES"/>
    </repository>
    
    <repository alias="DVD-1-Module-Basesystem">
        <source path="file:///media/suse/Module-Basesystem"/>
    </repository>
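
    To get an overview of the available repositories before writing the <repository> definitions, the mount point can simply be listed. The directory names below are only examples and vary by product:

    $ ls /media/suse
    Module-Basesystem  Module-Server-Applications  Product-SLES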

Once all the individual product and module repositories have been added to the KIWI NG image description, the build process can be started as usual.

Note

Because of the manual mount process, the /media/suse location stays busy after KIWI NG has created the image. Cleaning up this resource is the responsibility of the user and is not done by KIWI NG.

11.22 Circumvent Debian Bootstrap

When building Debian-based images, KIWI NG uses apt in the bootstrap and the system phase to create the image root tree. However, apt does not support a native way to bootstrap an empty root tree. Therefore the bootstrap phase uses apt only to resolve the given bootstrap packages and to download them from the given repositories. The list of packages is then manually extracted into the new root tree, which is not exactly the same as if apt had installed them natively. For the purpose of creating an initial tree to begin with, this procedure is acceptable though.

If, for some reason, this bootstrap procedure is not applicable, KIWI NG allows for an alternative process based on a prebuilt bootstrap-root archive provided as a package.

To make use of a bootstrap_package, the name of that package needs to be referenced in the KIWI NG description as follows:

<packages type="bootstrap" bootstrap_package="bootstrap-root">
    <package name="a"/>
    <package name="b"/>
</packages>

The bootstrap process now changes in a way that the provided bootstrap_package bootstrap-root is installed on the build host machine. Next, KIWI NG searches for a tar archive file /var/lib/bootstrap/bootstrap-root.ARCH.tar.xz, where ARCH is the name of the host architecture, e.g. x86_64. If found, the archive is unpacked and serves as the bootstrap root tree to begin with. The optionally provided additional bootstrap packages, a and b in this example, are installed like system packages via chroot and apt. Usually no additional bootstrap packages are needed, as they could all be handled as system packages.
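
To double check that the installed bootstrap_package provides the archive at the expected location, the directory can simply be listed. The file name below follows the example above:

$ ls /var/lib/bootstrap/
bootstrap-root.x86_64.tar.xz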

11.22.1 How to Create a bootstrap_package

Changing the setup in KIWI NG to use a bootstrap_package rather than KIWI NG's Debian bootstrap method comes with the task of creating the package that provides the bootstrap root tree. There is more than one way to do this. The following procedure is just one example and requires some background knowledge about the Open Build Service (OBS) and its KIWI NG integration.

  1. Create an OBS project and repository setup that matches your image target

  2. Create an image build package

    osc mkpac bootstrap-root
  3. Create the following appliance.kiwi file

    <image schemaversion="7.4" name="bootstrap-root">
        <description type="system">
            <author>The Author</author>
            <contact>author@example.com</contact>
            <specification>prebuilt bootstrap rootfs for ...</specification>
        </description>
    
        <preferences>
            <version>1.0.1</version>
            <packagemanager>apt</packagemanager>
            <type image="tbz"/>
        </preferences>
    
        <repository type="rpm-md">
            <source path="obsrepositories:/"/>
        </repository>
    
        <packages type="image">
            <package name="gawk"/>
            <package name="apt-utils"/>
            <package name="debconf"/>
            <package name="mawk"/>
            <package name="libpam-runtime"/>
            <package name="util-linux"/>
            <package name="systemd"/>
            <package name="init"/>
            <package name="gnupg"/>
            <package name="iproute2"/>
            <package name="iptables"/>
            <package name="iputils-ping"/>
            <package name="ifupdown"/>
            <package name="isc-dhcp-client"/>
            <package name="netbase"/>
            <package name="dbus"/>
            <package name="xz-utils"/>
            <package name="usrmerge"/>
            <package name="language-pack-en"/>
        </packages>
    
        <packages type="bootstrap"/>
    </image>

    osc add appliance.kiwi
    osc ci
  4. Package the image build results into a Debian package

    In step 3 the bootstrap root tarball was created, but not yet packaged. A Debian package is needed so that it can be referenced with the bootstrap_package attribute and provided by a repository. The simplest way to package the bootstrap-root tarball is to create another package in OBS and use the tarball file as its source.
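
    Outside of OBS, a minimal local equivalent can be sketched with dpkg-deb. All file names and metadata below are illustrative only; the tarball is assumed to be the result of the build above:

    $ mkdir -p bootstrap-root-pkg/var/lib/bootstrap bootstrap-root-pkg/DEBIAN
    $ cp bootstrap-root.x86_64-1.0.1.tar.xz \
         bootstrap-root-pkg/var/lib/bootstrap/bootstrap-root.x86_64.tar.xz
    $ printf '%s\n' \
        'Package: bootstrap-root' \
        'Version: 1.0.1' \
        'Architecture: amd64' \
        'Maintainer: The Author <author@example.com>' \
        'Description: prebuilt bootstrap rootfs' \
        > bootstrap-root-pkg/DEBIAN/control
    $ dpkg-deb --build bootstrap-root-pkg bootstrap-root_1.0.1_amd64.deb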