documentation.suse.com / Building Linux System Appliances / Building Images for Supported Types

10 Building Images for Supported Types

Note

This document provides an overview of how to build and use the KIWI NG supported image types. All images that we provide for testing use the root password: linux

10.1 Build an ISO Hybrid Live Image

  • how to build an ISO image

  • how to run the image with QEMU

A Live ISO image is a system on a removable media, for example a CD/DVD or a USB stick. Booting a Live ISO image does not interfere with other system storage components, making it a useful portable system for demonstration, testing, and debugging.

To add a Live ISO build to your appliance, create a type element with image set to iso in the config.xml file as shown below:

<image schemaversion="8.0" name="Tumbleweed_appliance">
  <!-- snip -->
  <preferences>
    <type image="iso" primary="true" flags="overlay" hybridpersistent_filesystem="ext4" hybridpersistent="true"/>
    <!-- additional preferences -->
  </preferences>
  <!-- snip -->
</image>

The following attributes of the type element are relevant when building Live ISO images:

  • flags: Specifies the dracut module to use.

    If set to overlay, the kiwi-live dracut module supplied by KIWI NG is used for booting.

    If set to dmsquash, the dracut-supplied dmsquash-live module is used for booting.

    Both modules support a different set of live features. For details, see Section 10.1.1, “overlay or dmsquash”.

  • filesystem: Specifies the root filesystem for the live system.

    If set to squashfs, the root filesystem is written into a squashfs image. This option is not compatible with device-mapper specific features of the dmsquash-live dracut module. In that case, use overlayfs.

    If set to a value different from squashfs, the root filesystem is written into a filesystem image of the specified type, and the filesystem image is then written into a squashfs image for compression.

    The default value of this option is ext4.

  • hybridpersistent: Accepts true or false. If set to true, the resulting image is created with a COW file to keep data persistent over a reboot.

  • hybridpersistent_filesystem: The filesystem used for the COW file. Valid values are ext4 or xfs, with ext4 being the default.

With the appropriate settings specified in config.xml, you can build an image using KIWI NG:

$ sudo kiwi-ng system build \
      --description kiwi/build-tests/x86/leap/test-image-live \
      --set-repo obs://openSUSE:Leap:15.5/standard \
      --target-dir /tmp/myimage

The resulting image is saved in /tmp/myimage, and the image can be tested with QEMU:

$ sudo qemu -cdrom \
      kiwi-test-image-live.x86_64-1.15.3.iso \
      -m 4096 -serial stdio

The image is now complete and ready to use. See Section 11.1, “Deploy ISO Image on a USB Stick” and Section 11.2, “Deploy ISO Image as File on a FAT32 Formatted USB Stick” for further information concerning deployment.
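
As a hedged preview of the USB deployment covered in those sections, a hybrid ISO can be written to a stick with dd. The sketch below uses scratch files as stand-ins for the real ISO and the USB device node (which would be something like /dev/sdb; verify with lsblk first, as dd overwrites the target completely):

```shell
# Stand-ins: demo.iso replaces the KIWI NG built ISO, usb-stick.img
# replaces the USB device node. Substitute real paths for a deployment.
ISO=demo.iso
TARGET=usb-stick.img
printf 'hybrid-iso-payload' > "$ISO"       # placeholder ISO content
dd if="$ISO" of="$TARGET" bs=4M conv=fsync 2>/dev/null
cmp -s "$ISO" "$TARGET" && echo "image written"
```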

10.1.1 overlay or dmsquash

Whether you choose the overlay or dmsquash dracut module depends on the features you intend to use. The overlay module supports only overlayfs based overlays, but with automatic creation of a writable layer for persistence. The dmsquash module supports overlayfs as well as device-mapper based overlays.

The following list describes important Live ISO features and their support status in the overlay and dmsquash modules.

ISO scan

Usable in the same way with both dracut modules. This feature allows booting the Live ISO as a file from a grub loopback configured bootloader. The live-grub-stick tool is one example that uses this feature. For details on how to set up ISO scan with the overlay module, see Section 11.2, “Deploy ISO Image as File on a FAT32 Formatted USB Stick”

ISO in RAM completely

Usable with the dmsquash module through rd.live.ram. The overlay module does not support this mode. However, KIWI NG supports RAM-only systems as an OEM deployment into RAM from an install ISO media. For details on how to set up RAM-only deployments in KIWI NG, see: Section 11.9, “Deploy and Run System in a RamDisk”

Overlay based on overlayfs

Usable with both dracut modules. The readonly root filesystem is overlaid with a readwrite filesystem using the kernel overlayfs filesystem.

Overlay based on device mapper snapshots

Usable with the dmsquash module. A squashfs compressed readonly root is overlaid with a readwrite filesystem using a device mapper snapshot.

Media Checksum Verification

Boot the Live ISO for checksum verification only. This is possible with both modules, but the overlay module uses the checkmedia tool, whereas the upstream dmsquash module uses checkisomd5. The verification process is triggered by passing the kernel option mediacheck for the overlay module and rd.live.check for the dmsquash module.

Live ISO through PXE boot

Boot the Live image via the network. This is possible with both modules, but they use different technologies. The overlay module supports network boot only in combination with the AoE (ATA over Ethernet) protocol. For details see Section 11.16, “Booting a Live ISO Image from Network”. The dmsquash module supports network boot by fetching the ISO image into memory from root=live: using the livenet module.

Persistent Data

Keep new data persistent on a writable storage device. This can be done with both modules, but in different ways. The overlay module activates persistence with the kernel boot parameter rd.live.overlay.persistent. If the persistent setup cannot be created, the fallback to the non-persistent mode applies automatically. The overlay module auto-detects whether it is used on a disk or on an ISO scan loop booted from a file. If booted as a disk, persistence is set up on a new partition of that disk. If loop booted from a file, persistence is set up in a new cow file. The cow file/partition setup can be influenced with the kernel boot parameters rd.live.overlay.cowfs and rd.live.cowfile.mbsize. The dmsquash module configures persistence through the rd.live.overlay option exclusively and does not support the automatic creation of a write partition in disk mode.
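
For example, a persistent Live boot using the parameters named above could pass the following on the kernel command line (the values shown are illustrative):

```
rd.live.overlay.persistent rd.live.overlay.cowfs=ext4 rd.live.cowfile.mbsize=500
```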

10.2 Build a Virtual Disk Image

  • define a simple disk image in the image description

  • build a simple disk image

  • run it with QEMU

A simple virtual disk image is a compressed system disk with additional metadata useful for cloud frameworks like Amazon EC2, Google Compute Engine, or Microsoft Azure. It is used as the native disk of a system, and it does not require an additional installation workflow or a complex first boot setup procedure.

To enable KIWI NG to build a simple disk image, add a type element with image="oem" in config.xml, with the oem-resize option disabled. An example configuration for a 42 GB VMDK image with 512 MB RAM, an IDE controller and a bridged network interface is shown below:

<image schemaversion="8.0" name="Tumbleweed_appliance">
  <!-- snip -->
  <preferences>
    <type image="oem" filesystem="ext4" format="vmdk">
      <bootloader name="grub2" timeout="0"/>
      <size unit="G">42</size>
      <oemconfig>
          <oem-resize>false</oem-resize>
      </oemconfig>
      <machine memory="512" guestOS="suse" HWversion="4">
        <vmdisk id="0" controller="ide"/>
        <vmnic driver="e1000" interface="0" mode="bridged"/>
      </machine>
    </type>
    <!-- additional preferences -->
  </preferences>
  <!-- snip -->
</image>

The following attributes of the type element deserve attention when building simple disk images:

  • format: Specifies the format of the virtual disk, possible values are: gce, ova, qcow2, vagrant, vmdk, vdi, vhd, vhdx and vhd-fixed.

  • formatoptions: Specifies additional format options passed to qemu-img. formatoptions is a comma-separated list of format specific options in a name=value format as expected by qemu-img. KIWI NG forwards the settings from the attribute as a parameter to the -o option in the qemu-img call.
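
As a sketch of how formatoptions maps onto the qemu-img call (the option names are illustrative qemu-img vmdk options; the actual conversion is performed internally by KIWI NG):

```shell
# formatoptions value as it would appear in the type element
formatoptions="adapter_type=lsilogic,subformat=streamOptimized"
# KIWI NG forwards the attribute verbatim to qemu-img's -o option:
echo qemu-img convert -f raw -O vmdk -o "$formatoptions" disk.raw disk.vmdk
```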

The bootloader, size and machine child elements of type can be used to customize the virtual machine image. These elements are described in the following sections: Setting up the Bootloader in the Image, Modifying the Size of the Image and Customizing the Virtual Machine.

Once your image description is finished, you can build the image using the following KIWI NG command:

$ sudo kiwi-ng system build \
     --description kiwi/build-tests/x86/leap/test-image-disk-simple \
     --set-repo obs://openSUSE:Leap:15.5/standard \
     --target-dir /tmp/myimage

The resulting .raw image is stored in /tmp/myimage.

You can test the image using QEMU:

$ sudo qemu \
    -drive file=kiwi-test-image-disk-simple.x86_64-1.15.3.raw,format=raw,if=virtio \
    -m 4096

For information on how to setup a Vagrant system, see: Section 11.7, “Image Description for Vagrant”.

10.2.1 Setting up the Bootloader in the Image

<preferences>
  <type>
     <bootloader name="grub2"/>
  </type>
</preferences>

The bootloader element defines which bootloader to use in the image, and the element offers several options for customizing its configuration.

For details, see: Section 8.1.4.14, “<preferences><type><bootloader>”

10.2.2 Modifying the Size of the Image

The size child element of type specifies the size of the resulting disk image. The following example shows an image description, where 20 GB are added to the virtual machine image, of which 5 GB are left unpartitioned:

<preferences>
  <type image="oem" format="vmdk">
    <size unit="G" additive="true" unpartitioned="5">20</size>
    <oemconfig>
        <oem-resize>false</oem-resize>
    </oemconfig>
  </type>
</preferences>

The following optional attributes can be used to further customize the image size:

  • unit: Defines the unit used for the provided numerical value, possible values are M for megabytes and G for gigabytes. The default unit is megabytes.

  • additive: Boolean value that determines whether the provided value is added to the current image size (additive="true") or whether it is the total size (additive="false"). The default value is false.

  • unpartitioned: Specifies the amount of space in the image that is left unpartitioned. The attribute uses either the same unit as defined in the unit attribute or the default unit.
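
The arithmetic behind the example above can be sketched as follows (the 8192 MB base image size is an illustrative assumption; KIWI NG computes the real base size from the root tree):

```shell
base_mb=8192           # size of the built root tree (assumed)
add_gb=20              # <size additive="true" unit="G">20</size>
unpartitioned_gb=5     # unpartitioned="5"
total_mb=$(( base_mb + add_gb * 1024 ))              # additive: value is added
partitioned_mb=$(( total_mb - unpartitioned_gb * 1024 ))
echo "disk=${total_mb}MB partitioned=${partitioned_mb}MB"
```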

10.2.3 Customizing the Virtual Machine

The machine child element of type can be used to customize the virtual machine configuration, including the number of CPUs and the connected network interfaces.

The following attributes are supported by the machine element:

  • ovftype: The OVF configuration type. The Open Virtualization Format is a standard for describing virtual appliances and distributing them in an archive called Open Virtual Appliance (OVA). The standard describes the major components associated with a disk image. The exact specification depends on the product using the format. Supported values are zvm, powervm, xen and vmware.

  • HWversion: The virtual machine’s hardware version (vmdk and ova formats only). Refer to https://kb.vmware.com/s/article/1003746 for further information on which value to choose.

  • arch: the VM architecture (vmdk format only). Valid values are ix86 (i586 and i686) and x86_64.

  • xen_loader: the Xen target loader which is expected to load the guest. Valid values are: hvmloader, pygrub and pvgrub.

  • guestOS: The virtual guest OS identification string for the VM (only applicable for the vmdk and ova formats; note that the name designation is different for the two formats). For vmware ovftools, guestOS is a VMX GuestOS, not a VIM GuestOS. For instance, the correct value for 64-bit Ubuntu is “ubuntu-64”, not “ubuntu64Guest”. See GUEST_OS_KEY_MAP in guest_os_tables.h at https://github.com/vmware/open-vm-tools for other guestOS values.

  • min_memory: The virtual machine’s minimum memory in MB (ova format only).

  • max_memory: The virtual machine’s maximum memory in MB (ova format only).

  • min_cpu: The virtual machine’s minimum CPU count (ova format only).

  • max_cpu: The virtual machine’s maximum CPU count (ova format only).

  • memory: The virtual machine’s memory in MB (all formats).

  • ncpus: The number of virtual CPUs available to the virtual machine (all formats).

machine also supports additional child elements that are covered in the following subsections.

10.2.3.1 Modifying the VM Configuration Directly

The vmconfig-entry element is used to add entries directly into the virtual machine’s configuration file. This is currently only supported for the vmdk format where the provided strings are directly pasted into the .vmx file.

The vmconfig-entry element has no attributes and can appear multiple times. The entries are added to the configuration file in the provided order. Note that KIWI NG does not check the entries for correctness.

The following example adds the two entries numvcpus = "4" and cpuid.coresPerSocket = "2" into the VM configuration file:

<preferences>
  <type image="oem" filesystem="ext4" format="vmdk">
    <machine memory="512" guestOS="suse" HWversion="4">
      <vmconfig-entry>numvcpus = "4"</vmconfig-entry>
      <vmconfig-entry>cpuid.coresPerSocket = "2"</vmconfig-entry>
    </machine>
  </type>
</preferences>

10.2.3.2 Adding Network Interfaces to the VM

Network interfaces can be explicitly specified for the VM when required via the vmnic element. This makes it possible to add another bridged interface or to specify the driver to be used.

Note that this element is used for the vmdk image format only.

The following example adds a bridged network interface that uses the e1000 driver:

<preferences>
  <type image="oem" filesystem="ext4" format="vmdk">
    <machine memory="4096" guestOS="suse" HWversion="4">
      <vmnic driver="e1000" interface="0" mode="bridged"/>
    </machine>
  </type>
</preferences>

The vmnic element supports the following attributes:

  • interface: Mandatory interface ID for the VM’s network interface.

  • driver: An optional driver.

  • mac: The MAC address of the specified interface.

  • mode: The mode of the interface.

Note that KIWI NG does not verify the values of the attributes; it only inserts them into the appropriate configuration files.

10.2.3.3 Specifying Disks and Disk Controllers

The vmdisk element can be used to customize the disks and disk controllers for the virtual machine. This element can be specified for each disk or disk controller present.

Note that this element is used for vmdk and ova image formats only.

The following example adds a disk with the ID 0 that uses an IDE controller:

<preferences>
  <type image="oem" filesystem="ext4" format="vmdk">
    <machine memory="512" guestOS="suse" HWversion="4">
      <vmdisk id="0" controller="ide"/>
    </machine>
  </type>
</preferences>

Each vmdisk element can be further customized using optional attributes.

10.2.3.4 Adding CD/DVD Drives

KIWI NG supports adding IDE and SCSI CD/DVD drives to the virtual machine using the vmdvd element for the vmdk image format. The following example adds two drives: one with a SCSI and another with an IDE controller:

<preferences>
  <type image="oem" filesystem="ext4">
    <machine memory="512" xen_loader="hvmloader">
      <vmdvd id="0" controller="scsi"/>
      <vmdvd id="1" controller="ide"/>
    </machine>
  </type>
</preferences>

The vmdvd element features two mandatory attributes:

  • id: The CD/DVD ID of the drive.

  • controller: The CD/DVD controller used for the VM guest. Valid values are ide and scsi.

10.3 Build an Expandable Disk Image

  • build an expandable disk image

  • deploy an expandable disk image

  • run the deployed system

An expandable disk represents the system disk with the capability to automatically expand the disk and its filesystem to a custom disk geometry. This allows deploying the same disk image on target systems with different hardware setups.

The following example shows how to build and deploy an expandable disk image based on openSUSE Leap using a QEMU virtual machine as a target system:

  1. Make sure you have checked out the example image descriptions (see Section 2.4, “Example Appliance Descriptions”).

  2. Build an image with KIWI NG:

    $ sudo kiwi-ng --type oem system build \
        --description kiwi/build-tests/x86/leap/test-image-disk \
        --set-repo obs://openSUSE:Leap:15.5/standard \
        --target-dir /tmp/myimage

    The resulting image is saved in /tmp/myimage.

    • The disk image with the suffix .raw is an expandable virtual disk. It can expand itself to a custom disk geometry.

    • The installation image with the suffix install.iso is a hybrid installation system which contains the disk image and is capable of installing this image on any target disk.

10.3.1 Deployment Methods

The goal of an expandable disk image is to provide the virtual disk data for OEM vendors to support easy deployment of the system to physical storage media.

Basic deployment strategies are as follows:

  1. Manual Deployment

    Manually deploy the disk image onto the target disk.

  2. CD/DVD Deployment

    Boot the installation image and let KIWI NG’s installer deploy the disk image from CD/DVD or USB stick onto the target disk.

  3. Network Deployment

    PXE boot the target system and let KIWI NG’s installer deploy the disk image from the network onto the target disk.

10.3.2 Manual Deployment

The manual deployment method can be tested using virtualization software like QEMU and an additional large virtual target disk. To do this, follow the steps below.

  1. Create a target disk:

    $ qemu-img create target_disk 20g
    Note

    Retaining the Disk Geometry

    If the target disk geometry is less than or equal to the geometry of the disk image itself, the disk expansion that is performed on a physical disk install during the boot workflow is skipped and the original disk geometry stays unchanged.

  2. Dump disk image on target disk:

    $ dd if=kiwi-test-image-disk.x86_64-1.15.3.raw of=target_disk conv=notrunc
  3. Boot the target disk:

    $ sudo qemu -hda target_disk -m 4096 -serial stdio

    On first boot of the target_disk, the system is expanded to the configured storage layout. By default, the system root partition and filesystem are resized to the maximum free space available.

10.3.3 CD/DVD Deployment

The deployment from CD/DVD via an installation image can also be tested using virtualization software such as QEMU. To do this, follow the steps below.

  1. Create a target disk:

    Follow the steps above to create a virtual target disk

  2. Boot the installation image as CD/DVD with the target disk attached.

    $ sudo qemu -cdrom \
          kiwi-test-image-disk.x86_64-1.15.3.install.iso -hda target_disk \
          -boot d -m 4096 -serial stdio
    Note

    USB Stick Deployment

    Like any other ISO image built with KIWI NG, the installation image is also a hybrid image. Thus, it can also be used on USB stick and serve as installation media as explained in Section 10.1, “Build an ISO Hybrid Live Image”

10.3.4 Network Deployment

The network deployment process downloads the disk image from a PXE boot server. This requires a PXE network boot server to be set up as described in Section 11.13, “Setting Up a Network Boot Server”

If the PXE server is running, the following steps show how to test the deployment process over the network using a QEMU virtual machine as a target system:

  1. Create an installation PXE TAR archive along with your disk image by replacing the following configuration in kiwi/build-tests/x86/leap/test-image-disk/appliance.kiwi

    Find the line below:

    <type image="oem" installiso="true"/>

    Modify the line as follows:

    <type image="oem" installpxe="true"/>
  2. Rebuild the image, unpack the resulting kiwi-test-image-disk.x86_64-1.15.3.install.tar.xz file to a temporary directory, and copy the initrd and kernel images to the PXE server.

    1. Unpack installation tarball:

      mkdir /tmp/pxe && cd /tmp/pxe
      tar -xf kiwi-test-image-disk.x86_64-1.15.3.install.tar.xz
    2. Copy kernel and initrd used for PXE boot:

      scp pxeboot.kiwi-test-image-disk.x86_64-1.15.3.initrd PXE_SERVER_IP:/srv/tftpboot/boot/initrd
      scp pxeboot.kiwi-test-image-disk.x86_64-1.15.3.kernel PXE_SERVER_IP:/srv/tftpboot/boot/linux
  3. Copy the disk image, MD5 file, system kernel, initrd and bootoptions to the PXE boot server.

    Activation of the deployed system is done via kexec of the kernel and initrd provided here.

    1. Copy system image and MD5 checksum:

      scp kiwi-test-image-disk.x86_64-1.15.3.xz PXE_SERVER_IP:/srv/tftpboot/image/
      scp kiwi-test-image-disk.x86_64-1.15.3.md5 PXE_SERVER_IP:/srv/tftpboot/image/
    2. Copy kernel, initrd and bootoptions used for booting the system via kexec:

      scp kiwi-test-image-disk.x86_64-1.15.3.initrd PXE_SERVER_IP:/srv/tftpboot/image/
      scp kiwi-test-image-disk.x86_64-1.15.3.kernel PXE_SERVER_IP:/srv/tftpboot/image/
      scp kiwi-test-image-disk.x86_64-1.15.3.config.bootoptions PXE_SERVER_IP:/srv/tftpboot/image/
      Note

      The config.bootoptions file is used with kexec to boot the previously dumped image. This file specifies the root of the dumped image, and it can include other boot options. The file provided with the KIWI NG built image is tied to the image present in the PXE TAR archive. If other images are deployed, the file must be modified to match the correct root reference.

  4. Add/Update the kernel command line parameters.

    Edit your PXE configuration (for example pxelinux.cfg/default) on the PXE server, and add the following parameters to the append line, as shown below:

    append initrd=boot/initrd rd.kiwi.install.pxe rd.kiwi.install.image=tftp://192.168.100.16/image/kiwi-test-image-disk.x86_64-1.15.3.xz

    The location of the image is specified as a source URI that can point to any location supported by the curl command. KIWI NG uses curl to fetch the data from this URI. This means that the image, MD5 file, system kernel and initrd can be fetched from any server, and they do not need to be stored on the PXE_SERVER.

    By default, KIWI NG does not use specific curl options or flags. However, it is possible to specify desired options by adding the rd.kiwi.install.pxe.curl_options flag to the kernel command line (curl options are passed as comma-separated values), for example:

    rd.kiwi.install.pxe.curl_options=--retry,3,--retry-delay,3,--speed-limit,2048

    The above instructs KIWI NG to run curl as follows:

    curl --retry 3 --retry-delay 3 --speed-limit 2048 -f <url>

    This can be particularly useful when the deployment infrastructure requires specific download configuration. For example, setting more robust retries over an unstable network connection.

    Note

    KIWI NG replaces commas with spaces and appends the result to the curl command. Keep that in mind, because command-line options that include commas break the command.
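
    The comma-to-space expansion can be illustrated with a short sketch (the URL is a placeholder):

```shell
# Value of rd.kiwi.install.pxe.curl_options as passed on the kernel command line
opts="--retry,3,--retry-delay,3,--speed-limit,2048"
# KIWI NG replaces commas with spaces before invoking curl
expanded=$(printf '%s' "$opts" | tr ',' ' ')
echo curl $expanded -f "<url>"
```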

    Note

    The initrd and Linux Kernel for PXE boot are always loaded via TFTP from the PXE_SERVER.

  1. Create a target disk.

    Follow the steps above to create a virtual target disk.

  2. Connect the client to the network and boot QEMU with the target disk attached to the virtual machine:

    $ sudo qemu -boot n -hda target_disk -m 4096
    Note

    QEMU bridged networking

    To connect QEMU to the network, we recommend setting up a network bridge on the host system and connecting QEMU to it via a custom /etc/qemu-ifup configuration. For details, see https://en.wikibooks.org/wiki/QEMU/Networking

10.3.5 OEM Customization

The deployment process of an OEM image can be customized using the oemconfig element. This element is a child section of the type element, for example:

<oemconfig>
  <oem-swapsize>512</oem-swapsize>
</oemconfig>

Below is a list of optional oemconfig element settings.

oemconfig.oem-resize

Determines if the disk has the capability to expand itself to a new disk geometry or not. By default, this feature is activated. The implementation of the resize capability is done in a dracut module packaged as dracut-kiwi-oem-repart. If oem-resize is set to false, the installation of the corresponding dracut package can be skipped as well.

oemconfig.oem-boot-title

By default, the string OEM is used as the boot manager menu entry when KIWI creates the GRUB configuration during deployment. The oem-boot-title element allows you to set a custom name for the GRUB menu entry. This value is represented by the kiwi_oemtitle variable in the initrd.

oemconfig.oem-bootwait

Determines if the system waits for user interaction before continuing the boot process after the disk image has been dumped to the designated storage device (default value is false). This value is represented by the kiwi_oembootwait variable in the initrd.

oemconfig.oem-reboot

When enabled, the system is rebooted after the disk image has been deployed to the designated storage device (default value is false). This value is represented by the kiwi_oemreboot variable in the initrd.

oemconfig.oem-reboot-interactive

When enabled, the system is rebooted after the disk image has been deployed to the designated storage device (default value is false). Before the reboot, a message is displayed, and it must be acknowledged by the user for the system to reboot. This value is represented by the kiwi_oemrebootinteractive variable in the initrd.

oemconfig.oem-silent-boot

Determines if the system boots in silent mode after the disk image has been deployed to the designated storage device (default value is false). This value is represented by the kiwi_oemsilentboot variable in the initrd.

oemconfig.oem-shutdown

Determines if the system is powered down after the disk image has been deployed to the designated storage device (default value is false). This value is represented by the kiwi_oemshutdown variable in the initrd.

oemconfig.oem-shutdown-interactive

Determines if the system is powered down after the disk image has been deployed to the designated storage device (default value is false). Before the shutdown, a message is displayed, and it must be acknowledged by the user for the system to power off. This value is represented by the kiwi_oemshutdowninteractive variable in the initrd.

oemconfig.oem-swap

Determines if a swap partition is created. By default, no swap partition is created. This value is represented by the kiwi_oemswap variable in the initrd.

oemconfig.oem-swapname

Specifies the name of the swap space. By default, the name is set to LVSwap. The default indicates that this setting is only useful in combination with the LVM volume manager. In this case, the swap space is configured as a volume in the volume group, and every volume requires a name. The name specified here is used as the name of the swap volume.

oemconfig.oem-swapsize

Specifies the size of the swap partition. If a swap partition is created but its size is not specified, KIWI calculates the size of the swap partition and creates it at initial boot time. In this case, the swap partition size equals twice the amount of RAM of the system. This value is represented by the kiwi_oemswapMB variable in the initrd.

oemconfig.oem-systemsize

Specifies the size the operating system is allowed to occupy on the target disk. The size limit does not include any swap space or recovery partition considerations. In a setup without the systemdisk element, this value specifies the size of the root partition. In a setup that includes the systemdisk element, this value specifies the size of the LVM partition that contains all specified volumes. This means that the sum of all specified volume sizes plus the sum of the specified freespace for each volume must be smaller than or equal to the size specified with the oem-systemsize element. This value is represented by the variable kiwi_oemrootMB in the initrd.

oemconfig.oem-unattended

The installation of the image to the target system occurs automatically without requiring user interaction. If multiple possible target devices are discovered, the image is deployed to the first device. This value is represented by the kiwi_oemunattended variable in the initrd.

oemconfig.oem-unattended-id

Selects a target disk device for the installation according to the specified device ID. The device ID corresponds to the name of the device for the configured devicepersistency. By default, it is the by-uuid device name. If no representation exists, for example for ramdisk devices, the UNIX device node can be used to select one. The given name must be present in the device list detected by KIWI.

oemconfig.oem-skip-verify

Disables the checksum verification process after installing the image to the target disk. The verification process computes the checksum of the image installed to the target. This value is then compared to the initrd-embedded checksum generated at build time of the image. Depending on the size of the image and the power of the machine, computing the checksum may take time.

10.3.6 Installation Media Customization

The installation media created for OEM network or CD/DVD deployments can be customized with the installmedia section. It is a child section of the type element, for example:

<installmedia>
  <initrd action="omit">
    <dracut module="network-legacy"/>
  </initrd>
</installmedia>

The installmedia section is only available for OEM image types that include the request to create installation media.

The initrd child element of installmedia lists dracut modules. The element’s action attribute determines whether the dracut module is omitted (action="omit") or added (action="add"). Use action="set" to use only the listed modules and nothing else (that is, none of the dracut modules included by default).
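
By analogy with the example above, a hedged sketch that adds a module instead of omitting one (the module name is illustrative; substitute the dracut module you actually need):

```xml
<installmedia>
  <initrd action="add">
    <dracut module="network-legacy"/>
  </initrd>
</installmedia>
```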

10.4 Build a Container Image

  • basic configuration explanation

  • how to build a Container Image

  • how to run it with a Container Runtime

KIWI NG can build native container images from scratch or based on existing images. KIWI NG container images are considered native because a KIWI NG tarball image can be loaded directly into container runtimes like Podman, Docker or containerd, including common container configurations.

The container configuration metadata is supplied to KIWI NG as part of the Section 1.1.1, “Components of an Image Description” using the <containerconfig> tag. The following configuration metadata can be specified.

containerconfig attributes:

  • name: Specifies the repository name of the container image.

  • tag: Sets the tag of the container image.

  • maintainer: Specifies the author of the container. Equivalent to the MAINTAINER directive in a Dockerfile.

  • user: Sets the user name or user ID (UID) to be used when running entrypoint and subcommand. Equivalent of the USER directive of a Dockerfile.

  • workingdir: Sets the working directory to be used when running cmd and entrypoint. Equivalent of the WORKDIR directive in a Dockerfile.

containerconfig child tags:

  • subcommand: Provides the default execution parameters of the container. Equivalent of the CMD directive in a Dockerfile.

  • labels: Adds custom metadata to an image using key-value pairs. Equivalent to one or more LABEL directives in a Dockerfile.

  • expose: Defines which ports can be exposed to the outside when running this container image. Equivalent to one or more EXPOSE directives in a Dockerfile.

  • environment: Sets environment variables using key-value pairs. Equivalent to one or multiple ENV directives in a Dockerfile.

  • entrypoint: Sets the binary to use for executing all commands inside the container. Equivalent of the ENTRYPOINT directive in a Dockerfile.

  • volumes: Creates mountpoints with the given name and marks them to hold external volumes from the host or from other containers. Equivalent to one or more VOLUME directives in a Dockerfile.

  • stopsignal: Sets the system signal that will be sent to the container to make it exit. This signal can be a signal name in the format SIG[NAME], for instance SIGKILL, or an unsigned number that matches a position in the kernel's signal table, for instance 9. The default is SIGTERM if not defined.
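As a generic illustration (not a KIWI NG command), the equivalence of the two stopsignal forms can be checked on any Linux build host with the shell's kill builtin:

```shell
# On Linux, signal number 15 is SIGTERM (the default stop signal)
# and 9 is SIGKILL; the kill builtin shows the number-to-name mapping.
kill -l 15
kill -l 9
```

So `<stopsignal>SIGKILL</stopsignal>` and `<stopsignal>9</stopsignal>` refer to the same signal.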

Other Dockerfile directives, such as RUN, COPY or ADD, can be mapped to KIWI NG using the Section 1.1.1, “Components of an Image Description” script file to run Bash commands, or the Section 1.1.1, “Components of an Image Description” to include additional files.
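Putting the attributes and child tags together, a containerconfig section might look like the following sketch. All values are placeholders, and only a subset of child tags is shown; consult the KIWI NG schema documentation for the exact sub-element syntax:

```xml
<containerconfig name="my-container" tag="latest" maintainer="tux">
    <entrypoint execute="/bin/bash"/>
    <expose>
        <port number="80"/>
    </expose>
    <labels>
        <label name="purpose" value="demo"/>
    </labels>
</containerconfig>
```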

The following example illustrates how to build a container image based on openSUSE Leap:

  1. Make sure you have checked out the example image descriptions (see Section 2.4, “Example Appliance Descriptions”).

  2. Include the Virtualization/containers repository into your list (replace the placeholder <DIST> with the name of the desired distribution):

    $ zypper addrepo http://download.opensuse.org/repositories/Virtualization:/containers/<DIST> container-tools
  3. Install the umoci and skopeo tools:

    $ zypper in umoci skopeo
  4. Build an image with KIWI NG:

    $ sudo kiwi-ng system build \
        --description kiwi/build-tests/x86/leap/test-image-docker \
        --set-repo obs://openSUSE:Leap:15.5/standard \
        --target-dir /tmp/myimage
  5. Test the container image.

    First load the new image into your container runtime:

    $ podman load -i kiwi-test-image-docker.x86_64-1.15.3.docker.tar.xz

    Then run the image:

    $ podman run --rm -it buildsystem /bin/bash

10.5 Build a WSL Container Image

KIWI NG can build WSL images using the appx utility. Make sure you have installed the package that provides the command on your build host.

Once appx is installed on the build host, the following image type setup is required in the XML description config.xml:

<type image="appx" metadata_path="/meta/data"/>

The /meta/data path specifies a path that provides additional information required for the WSL-DistroLauncher. This component consists of a Windows executable (.exe) file and an AppxManifest.xml file that references other files, such as icons and resource configurations, for the startup of the container under Windows.

Note
Note

/meta/data

Except for the root filesystem tarball, KIWI NG is not responsible for providing the metadata required for the WSL-DistroLauncher. The given metadata path is expected to contain all the needed information. Typically, this information is delivered in a package provided by the distribution and installed on the build host.

10.5.1 Setup of the WSL-DistroLauncher

The contents of AppxManifest.xml are changed by KIWI NG if a containerconfig section is provided in the XML description. In the context of a WSL image, the following container configuration parameters are taken into account:

<containerconfig name="my-wsl-container">
    <history
        created_by="Organisation"
        author="Name"
        application_id="AppIdentification"
        package_version="https://docs.microsoft.com/en-us/windows/uwp/publish/package-version-numbering"
        launcher="WSL-DistroLauncher-exe-file"
    >Container Description Text</history>
</containerconfig>

All information provided here, including the entire section, is optional. If the information is not specified, the existing AppxManifest.xml is left untouched.

created_by

Specifies the name of a publisher organization. An appx container must be signed with a digital signature. If the image is built in the Open Build Service (OBS), this is done automatically. Outside of OBS, you must make sure that the given publisher organization name matches the certificate used for signing.

author

Provides the name of the author and maintainer of this container.

application_id

Specifies an ID name for the container. The name must start with a letter, and only alphanumeric characters are allowed. KIWI NG does not validate the specified name string, because there are no common criteria across the various container architectures.

package_version

Specifies the version identification for the container. KIWI NG validates it against the Microsoft Package Version Numbering rules.

launcher

Specifies the binary file name of the launcher .exe file.
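The package_version described above follows the Major.Minor.Build.Revision form, i.e. four dot-separated unsigned numbers such as 2003.12.0.0. A minimal shell sketch of that pattern check might look as follows (a hypothetical helper for illustration, not part of KIWI NG, which performs its own validation):

```shell
# Hypothetical helper: check that a version string has the
# Major.Minor.Build.Revision form (four dot-separated numbers).
is_package_version() {
    if printf '%s\n' "$1" | grep -Eq '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$'; then
        echo valid
    else
        echo invalid
    fi
}

is_package_version "2003.12.0.0"
is_package_version "2003.12"
```

See the linked Microsoft Package Version Numbering rules for additional constraints beyond the basic shape.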

Warning
Warning

KIWI NG does not check the configuration in AppxManifest.xml for validity or completeness.

The following example shows how to build a WSL image based on openSUSE Tumbleweed:

  1. Check the example image descriptions, see Section 2.4, “Example Appliance Descriptions”.

  2. Include the Virtualization/WSL repository in the repository list (replace <DIST> with the desired distribution):

    $ zypper addrepo http://download.opensuse.org/repositories/Virtualization:/WSL/<DIST> WSL
  3. Install the fb-util-for-appx utility and the package that provides the WSL-DistroLauncher metadata. See the previous note on /meta/data.

    $ zypper in fb-util-for-appx DISTRO_APPX_METADATA_PACKAGE
    Note
    Note

    When building images with the Open Build Service, make sure to add the packages from the zypper command above to the project configuration via osc meta -e prjconf, along with the line support: PACKAGE_NAME for each package that needs to be installed on the Open Build Service worker that runs the KIWI NG build process.
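    As a sketch, the corresponding prjconf lines for the packages from the zypper call above could look like this (DISTRO_APPX_METADATA_PACKAGE remains the distribution-specific placeholder used earlier):

    ```
    support: fb-util-for-appx
    support: DISTRO_APPX_METADATA_PACKAGE
    ```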

  4. Configure the image type:

    Add the following type and container configuration to kiwi/build-tests/x86/tumbleweed/test-image-wsl/appliance.kiwi:

    <type image="appx" metadata_path="/meta/data">
        <containerconfig name="Tumbleweed">
            <history
                created_by="SUSE"
                author="KIWI-Team"
                application_id="tumbleweed"
                package_version="2003.12.0.0"
                launcher="openSUSE-Tumbleweed.exe"
            >Tumbleweed Appliance text based</history>
        </containerconfig>
    </type>
    Warning
    Warning

    If the configured metadata path does not exist, the build will fail. Furthermore, KIWI NG does not check whether the metadata is complete or is valid according to the requirements of the WSL-DistroLauncher.

  5. Build the image with KIWI NG:

    $ sudo kiwi-ng system build \
        --description kiwi/build-tests/x86/tumbleweed/test-image-wsl \
        --set-repo http://download.opensuse.org/tumbleweed/repo/oss \
        --target-dir /tmp/myimage

10.5.2 Testing the WSL image

For testing the image, you need a Windows 10 system. Before you proceed, enable the optional feature named Microsoft-Windows-Subsystem-Linux. For further details on how to set up the Windows machine, see: Windows Subsystem for Linux

10.6 Build KIS Image (Kernel, Initrd, System)

A KIS image is a collection of image components that are not associated with a dedicated use case. This means that as far as KIWI NG is concerned, it is not known in which environment these components are expected to be used. The predecessor of this image type was called pxe under the assumption that the components will be used in a PXE boot environment. However, this assumption is not always true, and the image components may be used in different ways. Because there are so many possible deployment strategies for a kernel plus initrd and optional system root filesystem, KIWI NG provides this as the universal KIS type.

The former pxe image type still exists, but it is expected to be used only in combination with the legacy netboot infrastructure, as described in Section 11.14, “Build PXE Root File System Image for the legacy netboot infrastructure”.

To add a KIS build to an appliance, create a type element with image set to kis in the config.xml as shown below:

<preferences>
    <type image="kis"/>
</preferences>

With this image type setup, KIWI NG builds a kernel and initrd not associated with any system root file system. Normally, such an image is only useful with certain custom dracut extensions as part of the image description.

The following attributes of the type element are often used when building KIS images:

  • filesystem: Specifies the root filesystem and triggers the build of an additional filesystem image of that type. The generated kernel command-line options file (append file) then also includes a root= parameter that references the UUID of this filesystem image. Whether or not to use the information from the append file is up to the deployment process.

  • kernelcmdline: Specifies kernel command-line options that become part of the generated kernel command-line options file (append file). By default, the append file contains no information, or only the reference to the root UUID if the filesystem attribute is used.

All other attributes of the type element that apply to an optional root filesystem image remain in effect in the system image of a KIS image as well.
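For example, a KIS type that additionally builds an ext4 root filesystem image and sets extra kernel parameters could be declared as follows (attribute values are illustrative):

```xml
<preferences>
    <type image="kis" filesystem="ext4" kernelcmdline="console=ttyS0"/>
</preferences>
```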

With the appropriate settings present in config.xml, you can use KIWI NG to build the image:

$ sudo kiwi-ng --type kis system build \
    --description kiwi/build-tests/x86/tumbleweed/test-image-pxe \
    --set-repo http://download.opensuse.org/tumbleweed/repo/oss \
    --target-dir /tmp/myimage

The resulting image components are saved in /tmp/myimage. Outside of a deployment infrastructure, the example KIS image can be tested with QEMU as follows:

$ sudo qemu \
    -kernel /tmp/myimage/*.kernel \
    -initrd /tmp/myimage/*.initrd \
    -append "$(cat /tmp/myimage/*.append) rw" \
    -drive file=/tmp/myimage/kiwi-test-image-pxe.*-1.15.3,if=virtio,driver=raw \
    -serial stdio
Note
Note

Testing the components of a KIS image normally requires a deployment infrastructure and a deployment process. An example of a deployment infrastructure using PXE is provided by KIWI NG with the netboot infrastructure. However, that netboot infrastructure is no longer developed and only kept for compatibility reasons. For details, see Section 11.14, “Build PXE Root File System Image for the legacy netboot infrastructure”.

10.7 Build an AWS Nitro Enclave

  • how to build an AWS Nitro Enclave

  • how to test the enclave via QEMU

AWS Nitro Enclaves enables customers to create isolated compute environments to further protect and securely process highly sensitive data such as personally identifiable information (PII), healthcare, financial, and intellectual property data within their Amazon EC2 instances. Nitro Enclaves uses the same Nitro Hypervisor technology that provides CPU and memory isolation for EC2 instances. For further details, see https://aws.amazon.com/ec2/nitro/nitro-enclaves

To add an enclave build to your appliance, create a type element with image set to enclave in the config.xml file as shown below:

<image schemaversion="8.0" name="kiwi-test-image-nitro-enclave">
  <!-- snip -->
  <profiles>
    <profile name="default" description="CPIO: default profile" import="true"/>
    <profile name="std" description="KERNEL: default kernel" import="true"/>
  </profiles>
  <preferences>
    <type image="enclave" enclave_format="aws-nitro" kernelcmdline="reboot=k panic=30 pci=off console=ttyS0 i8042.noaux i8042.nomux i8042.nopnp i8042.dumbkbd random.trust_cpu=on rdinit=/sbin/init"/>
    <!-- additional preferences -->
  </preferences>
  <packages type="image" profiles="std">
     <package name="kernel"/>
  </packages>
  <!-- more packages -->
  <!-- snip -->
</image>

The following attributes of the type element are relevant:

  • enclave_format: Specifies the enclave target.

    Currently, only the aws-nitro enclave target is supported.

  • kernelcmdline: Specifies the kernel command line suitable for the enclave.

    An enclave is a system that runs completely in RAM, loaded from an enclave binary format that includes the kernel, initrd, and the kernel command line suitable for the target system.

With the appropriate settings specified in config.xml, you can build an image using KIWI NG:

$ sudo kiwi-ng system build \
      --description kiwi/build-tests/x86/rawhide/test-image-nitro-enclave \
      --set-repo 'https://mirrors.fedoraproject.org/metalink?repo=rawhide&arch=x86_64' \
      --target-dir /tmp/myimage

The resulting image is saved in /tmp/myimage, and the image can be tested with QEMU:

$ sudo qemu-system-x86_64 \
      -M nitro-enclave,vsock=c \
      -m 4G \
      -nographic \
      -chardev socket,id=c,path=/tmp/vhost4.socket \
      -kernel kiwi-test-image-nitro-enclave.eif

The image is now complete and ready to use. You can access the system via ssh through a vsock connection into the guest. To establish a vsock connection, you must forward the connection through the guest AF_VSOCK socket. This can be done via a ProxyCommand setup of the host ssh as follows:

$ vi ~/bin/vsock-ssh.sh

#!/bin/bash
# Take a host name like "21.vsock" and use the part before
# the first dot as the vsock CID
CID=$(echo "$1" | cut -d . -f 1)
socat - "VSOCK-CONNECT:${CID}:22"
$ vi ~/.ssh/config

host *.vsock
  ProxyCommand ~/bin/vsock-ssh.sh %h

After setting up the ssh proxy, log in to the enclave via vsock as follows:

$ ssh root@21.vsock
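The host name convention used by the proxy (everything before the first dot is treated as the vsock CID) can be verified in isolation with plain shell tools:

```shell
# Extract the CID the same way vsock-ssh.sh does for "21.vsock".
host="21.vsock"
CID=$(echo "$host" | cut -d . -f 1)
echo "$CID"
```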