SUSE Linux Enterprise Desktop 12 SP4

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Publication Date: September 19, 2019
About This Guide
Available Documentation
Feedback
Documentation Conventions
About the Making of This Documentation
I Common Tasks
1 Bash and Bash Scripts
1.1 What is The Shell?
1.2 Writing Shell Scripts
1.3 Redirecting Command Events
1.4 Using Aliases
1.5 Using Variables in Bash
1.6 Grouping and Combining Commands
1.7 Working with Common Flow Constructs
1.8 For More Information
2 sudo
2.1 Basic sudo Usage
2.2 Configuring sudo
2.3 Common Use Cases
2.4 More Information
3 YaST Online Update
3.1 The Online Update Dialog
3.2 Installing Patches
3.3 Automatic Online Update
4 YaST
4.1 Advanced Key Combinations
5 YaST in Text Mode
5.1 Navigation in Modules
5.2 Advanced Key Combinations
5.3 Restriction of Key Combinations
5.4 YaST Command Line Options
6 Managing Software with Command Line Tools
6.1 Using Zypper
6.2 RPM—the Package Manager
7 System Recovery and Snapshot Management with Snapper
7.1 Default Setup
7.2 Using Snapper to Undo Changes
7.3 System Rollback by Booting from Snapshots
7.4 Enabling Snapper in User Home Directories
7.5 Creating and Modifying Snapper Configurations
7.6 Manually Creating and Managing Snapshots
7.7 Automatic Snapshot Clean-Up
7.8 Frequently Asked Questions
8 Remote Access with VNC
8.1 The vncviewer Client
8.2 Remmina: the Remote Desktop Client
8.3 One-time VNC Sessions
8.4 Persistent VNC Sessions
8.5 Encrypted VNC Communication
9 File Copying with RSync
9.1 Conceptual Overview
9.2 Basic Syntax
9.3 Copying Files and Directories Locally
9.4 Copying Files and Directories Remotely
9.5 Configuring and Using an Rsync Server
9.6 For More Information
10 GNOME Configuration for Administrators
10.1 Starting Applications Automatically
10.2 Automounting and Managing Media Devices
10.3 Changing Preferred Applications
10.4 Adding Document Templates
10.5 For More Information
II Booting a Linux System
11 Introduction to the Boot Process
11.1 Terminology
11.2 The Linux Boot Process
12 UEFI (Unified Extensible Firmware Interface)
12.1 Secure Boot
12.2 For More Information
13 The Boot Loader GRUB 2
13.1 Main Differences between GRUB Legacy and GRUB 2
13.2 Configuration File Structure
13.3 Configuring the Boot Loader with YaST
13.4 Differences in Terminal Usage on z Systems
13.5 Helpful GRUB 2 Commands
13.6 More Information
14 The systemd Daemon
14.1 The systemd Concept
14.2 Basic Usage
14.3 System Start and Target Management
14.4 Managing Services with YaST
14.5 Customization of systemd
14.6 Advanced Usage
14.7 More Information
III System
15 32-Bit and 64-Bit Applications in a 64-Bit System Environment
15.1 Runtime Support
15.2 Kernel Specifications
16 journalctl: Query the systemd Journal
16.1 Making the Journal Persistent
16.2 journalctl Useful Switches
16.3 Filtering the Journal Output
16.4 Investigating systemd Errors
16.5 Journald Configuration
16.6 Using YaST to Filter the systemd Journal
17 Basic Networking
17.1 IP Addresses and Routing
17.2 IPv6—The Next Generation Internet
17.3 Name Resolution
17.4 Configuring a Network Connection with YaST
17.5 NetworkManager
17.6 Configuring a Network Connection Manually
17.7 Setting Up Bonding Devices
17.8 Setting Up Team Devices for Network Teaming
18 Printer Operation
18.1 The CUPS Workflow
18.2 Methods and Protocols for Connecting Printers
18.3 Installing the Software
18.4 Network Printers
18.5 Configuring CUPS with Command Line Tools
18.6 Printing from the Command Line
18.7 Special Features in SUSE Linux Enterprise Desktop
18.8 Troubleshooting
19 The X Window System
19.1 Installing and Configuring Fonts
19.2 For More Information
20 Accessing File Systems with FUSE
20.1 Configuring FUSE
20.2 Mounting an NTFS Partition
20.3 For More Information
21 Managing Kernel Modules
21.1 Listing Loaded Modules with lsmod and modinfo
21.2 Adding and Removing Kernel Modules
22 Dynamic Kernel Device Management with udev
22.1 The /dev Directory
22.2 Kernel uevents and udev
22.3 Drivers, Kernel Modules and Devices
22.4 Booting and Initial Device Setup
22.5 Monitoring the Running udev Daemon
22.6 Influencing Kernel Device Event Handling with udev Rules
22.7 Persistent Device Naming
22.8 Files used by udev
22.9 For More Information
23 Live Patching the Linux Kernel Using kGraft
23.1 Advantages of kGraft
23.2 Low-level Function of kGraft
23.3 Installing kGraft Patches
23.4 Patch Lifecycle
23.5 Removing a kGraft Patch
23.6 Stuck Kernel Execution Threads
23.7 The kgr Tool
23.8 Scope of kGraft Technology
23.9 Scope of SLE Live Patching
23.10 Interaction with the Support Processes
24 Special System Features
24.1 Information about Special Software Packages
24.2 Virtual Consoles
24.3 Keyboard Mapping
24.4 Language and Country-Specific Settings
25 Persistent Memory
25.1 Introduction
25.2 Terms
25.3 Use Cases
25.4 Tools for Managing Persistent Memory
25.5 Setting Up Persistent Memory
25.6 For More Information
IV Services
26 Time Synchronization with NTP
26.1 Configuring an NTP Client with YaST
26.2 Manually Configuring NTP in the Network
26.3 Dynamic Time Synchronization at Runtime
26.4 Setting Up a Local Reference Clock
26.5 Clock Synchronization to an External Time Reference (ETR)
27 Sharing File Systems with NFS
27.1 Overview
27.2 Installing NFS Server
27.3 Configuring Clients
27.4 For More Information
28 Samba
28.1 Terminology
28.2 Installing a Samba Server
28.3 Configuring a Samba Server
28.4 Configuring Clients
28.5 Samba as Login Server
28.6 Advanced Topics
28.7 For More Information
29 On-Demand Mounting with Autofs
29.1 Installation
29.2 Configuration
29.3 Operation and Debugging
29.4 Auto-Mounting an NFS Share
29.5 Advanced Topics
V Mobile Computers
30 Mobile Computing with Linux
30.1 Laptops
30.2 Mobile Hardware
30.3 Cellular Phones and PDAs
30.4 For More Information
31 Using NetworkManager
31.1 Use Cases for NetworkManager
31.2 Enabling or Disabling NetworkManager
31.3 Configuring Network Connections
31.4 NetworkManager and Security
31.5 Frequently Asked Questions
31.6 Troubleshooting
31.7 For More Information
32 Power Management
32.1 Power Saving Functions
32.2 Advanced Configuration and Power Interface (ACPI)
32.3 Rest for the Hard Disk
32.4 Troubleshooting
32.5 For More Information
VI Troubleshooting
33 Help and Documentation
33.1 Documentation Directory
33.2 Man Pages
33.3 Info Pages
33.4 Online Resources
34 Gathering System Information for Support
34.1 Displaying Current System Information
34.2 Collecting System Information with Supportconfig
34.3 Submitting Information to Global Technical Support
34.4 Analyzing System Information
34.5 Gathering Information during the Installation
34.6 Support of Kernel Modules
34.7 For More Information
35 Common Problems and Their Solutions
35.1 Finding and Gathering Information
35.2 Installation Problems
35.3 Boot Problems
35.4 Login Problems
35.5 Network Problems
35.6 Data Problems
A Documentation Updates
A.1 February 2019 (Documentation Maintenance Update for SUSE Linux Enterprise Desktop 12 SP4)
A.2 December 2018 (Initial Release of SUSE Linux Enterprise Desktop 12 SP4)
A.3 October 2018 (Documentation Maintenance Update for SUSE Linux Enterprise Desktop 12 SP3)
A.4 September 2017 (Initial Release of SUSE Linux Enterprise Desktop 12 SP3)
A.5 November 2016 (Initial Release of SUSE Linux Enterprise Desktop 12 SP2)
A.6 March 2016 (Documentation Maintenance Update for SUSE Linux Enterprise Desktop 12 SP1)
A.7 December 2015 (Initial Release of SUSE Linux Enterprise Desktop 12 SP1)
A.8 February 2015 (Documentation Maintenance Update)
A.9 October 2014 (Initial Release of SUSE Linux Enterprise Desktop 12)
B An Example Network
C GNU Licenses
C.1 GNU Free Documentation License

Copyright © 2006–2019 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see http://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

About This Guide

This guide is intended for use by professional network and system administrators during the operation of SUSE® Linux Enterprise. As such, it is solely concerned with ensuring that SUSE Linux Enterprise is properly configured and that the required services on the network are available to allow it to function properly as initially installed. This guide does not cover the process of ensuring that SUSE Linux Enterprise offers proper compatibility with your enterprise's application software or that its core functionality meets those requirements. It assumes that a full requirements audit has been done and the installation has been requested, or that a test installation for such an audit has been requested.

This guide contains the following:

Support and Common Tasks

SUSE Linux Enterprise offers a wide range of tools to customize various aspects of the system. This part introduces a few of them.

System

Learn more about the underlying operating system by studying this part. SUSE Linux Enterprise supports several hardware architectures and you can use this to adapt your own applications to run on SUSE Linux Enterprise. The boot loader and boot procedure information assists you in understanding how your Linux system works and how your own custom scripts and applications may blend in with it.

Services

SUSE Linux Enterprise is designed to be a network operating system. SUSE® Linux Enterprise Desktop includes client support for many network services. It integrates well into heterogeneous environments including MS Windows clients and servers.

Mobile Computers

Laptops, and the communication between mobile devices like PDAs or cellular phones and SUSE Linux Enterprise, need special attention. Learn about power conservation and the integration of different devices into a changing network environment, and get acquainted with the background technologies that provide the needed functionality.

Troubleshooting

Provides an overview of finding help and additional documentation when you need more information or want to perform specific tasks. There is also a list of the most frequent problems with explanations of how to fix them.

1 Available Documentation

Note
Note: Online Documentation and Latest Updates

Documentation for our products is available at http://www.suse.com/documentation/, where you can also find the latest updates, and browse or download the documentation in various formats.

In addition, the product documentation is usually available in your installed system under /usr/share/doc/manual.

The following documentation is available for this product:

Article “Installation Quick Start”

Lists the system requirements and guides you step-by-step through the installation of SUSE Linux Enterprise Desktop from DVD, or from an ISO image.

Book “Deployment Guide”

Shows how to install single or multiple systems and how to exploit the product-inherent capabilities for a deployment infrastructure. Choose from various approaches, ranging from a local installation or a network installation server to a mass deployment using a remote-controlled, highly-customized, and automated installation technique.

Administration Guide

Covers system administration tasks like maintaining, monitoring and customizing an initially installed system.

Book “Security Guide”

Introduces basic concepts of system security, covering both local and network security aspects. Shows how to use the product-inherent security software like AppArmor or the auditing system that reliably collects information about any security-relevant events.

Book “System Analysis and Tuning Guide”

An administrator's guide for problem detection, resolution and optimization. Find how to inspect and optimize your system by means of monitoring tools and how to efficiently manage resources. Also contains an overview of common problems and solutions and of additional help and documentation resources.

Book “GNOME User Guide”

Introduces the GNOME desktop of SUSE Linux Enterprise Desktop. It guides you through using and configuring the desktop and helps you perform key tasks. It is intended mainly for end users who want to make efficient use of GNOME as their default desktop.

2 Feedback

Several feedback channels are available:

Bugs and Enhancement Requests

For services and support options available for your product, refer to http://www.suse.com/support/.

Help for openSUSE is provided by the community. Refer to https://en.opensuse.org/Portal:Support for more information.

To report bugs for a product component, go to https://scc.suse.com/support/requests, log in, and click Create New.

User Comments

We want to hear your comments about and suggestions for this manual and the other documentation included with this product. Use the User Comments feature at the bottom of each page in the online documentation or go to http://www.suse.com/documentation/feedback.html and enter your comments there.

Mail

For feedback on the documentation of this product, you can also send a mail to doc-team@suse.com. Make sure to include the document title, the product version and the publication date of the documentation. To report errors or suggest enhancements, provide a concise description of the problem and refer to the respective section number and page (or URL).

3 Documentation Conventions

The following notices and typographical conventions are used in this documentation:

  • /etc/passwd: directory names and file names

  • PLACEHOLDER: replace PLACEHOLDER with the actual value

  • PATH: the environment variable PATH

  • ls, --help: commands, options, and parameters

  • user: users or groups

  • package name: name of a package

  • Alt, Alt–F1: a key to press or a key combination; keys are shown in uppercase as on a keyboard

  • File, File › Save As: menu items, buttons

  • Dancing Penguins (Chapter Penguins, ↑Another Manual): This is a reference to a chapter in another manual.

  • Commands that must be run with root privileges. Often you can also prefix these commands with the sudo command to run them as non-privileged user.

    root # command
    tux > sudo command
  • Commands that can be run by non-privileged users.

    tux > command
  • Notices

    Warning
    Warning: Warning Notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important
    Important: Important Notice

    Important information you should be aware of before proceeding.

    Note
    Note: Note Notice

    Additional information, for example about differences in software versions.

    Tip
    Tip: Tip Notice

    Helpful information, like a guideline or a piece of practical advice.

4 About the Making of This Documentation

This documentation is written in GeekoDoc, a subset of DocBook 5. The XML source files were validated by jing (see https://code.google.com/p/jing-trang/), processed by xsltproc, and converted into XSL-FO using a customized version of Norman Walsh's stylesheets. The final PDF is formatted through FOP from Apache Software Foundation. The open source tools and the environment used to build this documentation are provided by the DocBook Authoring and Publishing Suite (DAPS). The project's home page can be found at https://github.com/openSUSE/daps.

The XML source code of this documentation can be found at https://github.com/SUSE/doc-sle.

Part I Common Tasks

1 Bash and Bash Scripts

Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.

2 sudo

Many commands and system utilities need to be run as root to modify files and/or perform tasks that only the super user is allowed to. For security reasons and to avoid accidentally running dangerous commands, it is generally advisable not to log in directly as root. Instead, it is recommended to work as a normal, unprivileged user and use the sudo command to run commands with elevated privileges.

3 YaST Online Update

SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”, Section 10.5 “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updating software packages: YaST Online Update.

4 YaST

YaST is the installation and configuration tool for SUSE Linux Enterprise Desktop. It has a graphical interface and the capability to customize your system quickly during and after the installation. It can be used to set up hardware, configure the network, system services, and tune your security settings.

5 YaST in Text Mode

This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.

6 Managing Software with Command Line Tools

This chapter describes Zypper and RPM, two command line tools for managing software. For a definition of the terminology used in this context (for example, repository, patch, or update) refer to Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”, Section 10.1 “Definition of Terms”.

7 System Recovery and Snapshot Management with Snapper

Being able to create file system snapshots that allow rollbacks on Linux is a feature that was often requested in the past. Snapper, together with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.

Btrfs, a new copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point of time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with SUSE Linux Enterprise Server 12 it is also possible to boot from Btrfs snapshots—see Section 7.3, “System Rollback by Booting from Snapshots” for more information.

8 Remote Access with VNC

Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.

SUSE Linux Enterprise Desktop supports two different kinds of VNC sessions: One-time sessions that live as long as the VNC connection from the client is kept up, and persistent sessions that live until they are explicitly terminated.

9 File Copying with RSync

Today, a typical user has several computers: home and workplace machines, a laptop, a smartphone or a tablet. This makes the task of keeping files and documents in sync across multiple devices all the more important.

10 GNOME Configuration for Administrators

This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.

1 Bash and Bash Scripts

Abstract

Today, many people use computers with a graphical user interface (GUI) like GNOME. Although they offer lots of features, their use is limited when it comes to the execution of automated tasks. Shells are a good addition to GUIs and this chapter gives you an overview of some aspects of shells, in this case Bash.

1.1 What is The Shell?

Traditionally, the shell is Bash (Bourne again Shell). When this chapter speaks about “the shell” it means Bash. More shells than Bash are available (ash, csh, ksh, zsh, …), each employing different features and characteristics. If you need further information about other shells, search for shell in YaST.

1.1.1 Knowing the Bash Configuration Files

A shell can be invoked as an:

  1. Interactive login shell.  This is used when logging in to a machine, invoking Bash with the --login option or when logging in to a remote machine with SSH.

  2. Ordinary interactive shell.  This is normally the case when starting xterm, konsole, gnome-terminal or similar tools.

  3. Non-interactive shell.  This is used when invoking a shell script at the command line.

Depending on which type of shell you use, different configuration files are being read. The following tables show the login and non-login shell configuration files.

Table 1.1: Bash Configuration Files for Login Shells

File

Description

/etc/profile

Do not modify this file, otherwise your modifications can be destroyed during your next update!

/etc/profile.local

Use this file if you extend /etc/profile

/etc/profile.d/

Contains system-wide configuration files for specific programs

~/.profile

Insert user specific configuration for login shells here

Note that the login shell also sources the configuration files listed under Table 1.2, “Bash Configuration Files for Non-Login Shells”.

Table 1.2: Bash Configuration Files for Non-Login Shells

/etc/bash.bashrc

Do not modify this file, otherwise your modifications can be destroyed during your next update!

/etc/bash.bashrc.local

Use this file to insert your system-wide modifications for Bash only

~/.bashrc

Insert user specific configuration here

Additionally, Bash uses some more files:

Table 1.3: Special Files for Bash

File

Description

~/.bash_history

Contains a list of all commands you have typed

~/.bash_logout

Executed when logging out

~/.alias

User defined aliases of frequently used commands. See man 1 alias for more details about how to define aliases.
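
Which of these files are read depends on how the shell was invoked. To see which type the current shell is, Bash itself can be queried. The following is a minimal, Bash-specific sketch (the echo lines are only illustrative; you could add such a test to ~/.bashrc to watch when it is sourced):

if shopt -q login_shell; then
  echo "Login shell: /etc/profile and ~/.profile were sourced"
else
  echo "Non-login shell: /etc/bash.bashrc and ~/.bashrc were sourced"
fi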

1.1.2 The Directory Structure

The following table provides a short overview of the most important higher-level directories that you find on a Linux system. Find more detailed information about the directories and important subdirectories in the following list.

Table 1.4: Overview of a Standard Directory Tree

Directory

Contents

/

Root directory—the starting point of the directory tree.

/bin

Essential binary files, such as commands that are needed by both the system administrator and normal users. Usually also contains the shells, such as Bash.

/boot

Static files of the boot loader.

/dev

Files needed to access host-specific devices.

/etc

Host-specific system configuration files.

/home

Holds the home directories of all users who have accounts on the system. However, root's home directory is not located in /home but in /root.

/lib

Essential shared libraries and kernel modules.

/media

Mount points for removable media.

/mnt

Mount point for temporarily mounting a file system.

/opt

Add-on application software packages.

/root

Home directory for the superuser root.

/sbin

Essential system binaries.

/srv

Data for services provided by the system.

/tmp

Temporary files.

/usr

Secondary hierarchy with read-only data.

/var

Variable data such as log files.

/windows

Only available if you have both Microsoft Windows* and Linux installed on your system. Contains the Windows data.

The following list provides more detailed information and gives some examples of which files and subdirectories can be found in the directories:

/bin

Contains the basic shell commands that may be used both by root and by other users. These commands include ls, mkdir, cp, mv, rm and rmdir. /bin also contains Bash, the default shell in SUSE Linux Enterprise Desktop.

/boot

Contains data required for booting, such as the boot loader, the kernel, and other data that is used before the kernel begins executing user-mode programs.

/dev

Holds device files that represent hardware components.

/etc

Contains local configuration files that control the operation of programs like the X Window System. The /etc/init.d subdirectory contains LSB init scripts that can be executed during the boot process.

/home/USERNAME

Holds the private data of every user who has an account on the system. The files located here can only be modified by their owner or by the system administrator. By default, your e-mail directory and personal desktop configuration are located here in the form of hidden files and directories, such as .gconf/ and .config.

Note
Note: Home Directory in a Network Environment

If you are working in a network environment, your home directory may be mapped to a directory in the file system other than /home.

/lib

Contains the essential shared libraries needed to boot the system and to run the commands in the root file system. The Windows equivalent for shared libraries are DLL files.

/media

Contains mount points for removable media, such as CD-ROMs, flash disks, and digital cameras (if they use USB). /media generally holds any type of drive except the hard disk of your system. When your removable medium has been inserted or connected to the system and has been mounted, you can access it from here.

/mnt

This directory provides a mount point for a temporarily mounted file system. root may mount file systems here.

/opt

Reserved for the installation of third-party software. Optional software and larger add-on program packages can be found here.

/root

Home directory for the root user. The personal data of root is located here.

/run

A tmpfs directory used by systemd and various components. /var/run is a symbolic link to /run.

/sbin

As the s indicates, this directory holds utilities for the superuser. /sbin contains the binaries essential for booting, restoring and recovering the system in addition to the binaries in /bin.

/srv

Holds data for services provided by the system, such as FTP and HTTP.

/tmp

This directory is used by programs that require temporary storage of files.

Important
Important: Cleaning up /tmp at Boot Time

Data stored in /tmp is not guaranteed to survive a system reboot. It depends, for example, on settings made in /etc/tmpfiles.d/tmp.conf.

/usr

/usr has nothing to do with users, but is the acronym for Unix system resources. The data in /usr is static, read-only data that can be shared among various hosts compliant with the Filesystem Hierarchy Standard (FHS). This directory contains all application programs including the graphical desktops such as GNOME and establishes a secondary hierarchy in the file system. /usr holds several subdirectories, such as /usr/bin, /usr/sbin, /usr/local, and /usr/share/doc.

/usr/bin

Contains generally accessible programs.

/usr/sbin

Contains programs reserved for the system administrator, such as repair functions.

/usr/local

In this directory the system administrator can install local, distribution-independent extensions.

/usr/share/doc

Holds various documentation files and the release notes for your system. In the manual subdirectory find an online version of this manual. If more than one language is installed, this directory may contain versions of the manuals for different languages.

Under packages find the documentation included in the software packages installed on your system. For every package, a subdirectory /usr/share/doc/packages/PACKAGENAME is created that often holds README files for the package and sometimes examples, configuration files or additional scripts.

If Howtos are installed on your system, /usr/share/doc also holds the howto subdirectory, in which you find additional documentation for many tasks related to the setup and operation of Linux software.

/var

Whereas /usr holds static, read-only data, /var is for data which is written during system operation and thus is variable data, such as log files or spooling data. For an overview of the most important log files you can find under /var/log/, refer to Table 35.1, “Log Files”.

/windows

Only available if you have both Microsoft Windows and Linux installed on your system. Contains the Windows data available on the Windows partition of your system. Whether you can edit the data in this directory depends on the file system your Windows partition uses. If it is FAT32, you can open and edit the files in this directory. For NTFS, SUSE Linux Enterprise Desktop also includes write access support. However, the driver for the NTFS-3g file system has limited functionality.

1.2 Writing Shell Scripts

Shell scripts provide a convenient way to perform a wide range of tasks: collecting data, searching for a word or phrase in a text and other useful things. The following example shows a small shell script that prints a text:

Example 1.1: A Shell Script Printing a Text
#!/bin/sh 1
# Output the following line: 2
echo "Hello World" 3

1

The first line begins with the Shebang characters (#!), which indicate that this file is a script. The script is executed with the interpreter specified after the Shebang, in this case /bin/sh.

2

The second line is a comment beginning with the hash sign. It is recommended to comment difficult lines to remember what they do.

3

The third line uses the built-in command echo to print the corresponding text.

Before you can run this script you need some prerequisites:

  1. Every script should contain a Shebang line (as in the example above). If the line is missing, you need to call the interpreter manually.

  2. You can save the script wherever you want. However, it is a good idea to save it in a directory where the shell can find it. The search path in a shell is determined by the environment variable PATH. Usually a normal user does not have write access to /usr/bin. Therefore it is recommended to save your scripts in the user's directory ~/bin/. In the example above, the script is saved as ~/bin/hello.sh.

  3. The script needs executable permissions. Set the permissions with the following command:

    chmod +x ~/bin/hello.sh

If you have fulfilled all of the above prerequisites, you can execute the script in the following ways:

  1. As Absolute Path.  The script can be executed with an absolute path. In our case, it is ~/bin/hello.sh.

  2. Everywhere.  If the PATH environment variable contains the directory where the script is located, you can execute the script with hello.sh.
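
Putting these prerequisites together, a typical sequence for installing and running the example script could look like this (a sketch; it assumes hello.sh was created in the current directory and that ~/bin is not yet part of PATH):

mkdir -p ~/bin                    # create the directory if it does not exist yet
mv hello.sh ~/bin/                # move the script into the search path
chmod +x ~/bin/hello.sh           # make it executable
export PATH="$HOME/bin:$PATH"     # extend PATH for the current session
hello.sh                          # the script is now found from everywhere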

1.3 Redirecting Command Events

Each command can use three channels, either for input or output:

  • Standard Output.  This is the default output channel. Whenever a command prints something, it uses the standard output channel.

  • Standard Input.  If a command needs input from users or other commands, it uses this channel.

  • Standard Error.  Commands use this channel for error reporting.

To redirect these channels, there are the following possibilities:

Command > File

Saves the output of the command into a file; an existing file is overwritten. For example, the ls command writes its output into the file listing.txt:

ls > listing.txt
Command >> File

Appends the output of the command to a file. For example, the ls command appends its output to the file listing.txt:

ls >> listing.txt
Command < File

Reads the file as input for the given command. For example, the read command reads the first line of the file foo into the variable a:

read a < foo
Command1 | Command2

Redirects the output of the left command as input for the right command. For example, the cat command outputs the content of the /proc/cpuinfo file. This output is used by grep to filter only those lines which contain cpu:

cat /proc/cpuinfo | grep cpu

Every channel has a file descriptor: 0 (zero) for standard input, 1 for standard output and 2 for standard error. You can insert these file descriptors before a < or > character. For example, the following line searches for files whose names start with foo, but suppresses error messages by redirecting them to /dev/null:

find / -name "foo*" 2>/dev/null
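
The file descriptors can also be combined. For example, the following sketch writes regular output and error messages to separate files, or merges both channels into a single file (the file names are examples):

find /etc -name "*.conf" > found.txt 2> errors.txt   # separate channels
find /etc -name "*.conf" > all.txt 2>&1              # send standard error to standard output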

1.4 Using Aliases

An alias is a shortcut definition of one or more commands. The syntax for an alias is:

alias NAME=DEFINITION

For example, the following line defines an alias lt that outputs a long listing (option -l), sorts it by modification time (-t), and prints it in reverse sorted order (-r):

alias lt='ls -ltr'

To view all alias definitions, use alias. Remove your alias with unalias and the corresponding alias name.
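
Aliases defined at the command line are lost when the shell exits. To keep an alias permanently, add it to one of the configuration files from Section 1.1.1, for example ~/.bashrc. A minimal sketch:

echo "alias lt='ls -ltr'" >> ~/.bashrc   # persist the alias for future shells
source ~/.bashrc                         # re-read the file in the running shell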

1.5 Using Variables in Bash

A shell variable can be global or local. Global variables, or environment variables, can be accessed in all shells. In contrast, local variables are visible in the current shell only.

To view all environment variables, use the printenv command. If you need to know the value of a variable, insert the name of your variable as an argument:

printenv PATH

A variable, be it global or local, can also be viewed with echo:

echo $PATH

To set a local variable, use a variable name followed by the equal sign, followed by the value:

PROJECT="SLED"

Do not insert spaces around the equal sign, otherwise you get an error. To set an environment variable, use export:

export NAME="tux"

To remove a variable, use unset:

unset NAME

The following table contains some common environment variables which can be used in your shell scripts:

Table 1.5: Useful Environment Variables

HOME

the home directory of the current user

HOST

the current host name

LANG

when a tool is localized, it uses the language from this environment variable. English can also be set to C

PATH

the search path of the shell, a list of directories separated by colons

PS1

specifies the normal prompt printed before each command

PS2

specifies the secondary prompt printed when you execute a multi-line command

PWD

current working directory

USER

the current user
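
These variables can be used in scripts like any other variable. A short sketch combining some of them (the output depends on your environment):

echo "User $USER on host $HOST is working in $PWD"
echo "The search path is $PATH"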

1.5.1 Using Argument Variables

For example, if you have the script foo.sh you can execute it like this:

foo.sh "Tux Penguin" 2000

To access all the arguments which are passed to your script, you need positional parameters. These are $1 for the first argument, $2 for the second, and so on. You can directly access up to nine parameters this way; further arguments need braces, as in ${10}. To get the script name, use $0.

The following script foo.sh prints all arguments from 1 to 4:

#!/bin/sh
echo \"$1\" \"$2\" \"$3\" \"$4\"

If you execute this script with the above arguments, you get:

"Tux Penguin" "2000" "" ""

1.5.2 Using Variable Substitution

Variable substitutions apply a pattern to the content of a variable either from the left or right side. The following list contains the possible syntax forms:

${VAR#pattern}

removes the shortest possible match from the left:

file=/home/tux/book/book.tar.bz2
echo ${file#*/}
home/tux/book/book.tar.bz2
${VAR##pattern}

removes the longest possible match from the left:

file=/home/tux/book/book.tar.bz2
echo ${file##*/}
book.tar.bz2
${VAR%pattern}

removes the shortest possible match from the right:

file=/home/tux/book/book.tar.bz2
echo ${file%.*}
/home/tux/book/book.tar
${VAR%%pattern}

removes the longest possible match from the right:

file=/home/tux/book/book.tar.bz2
echo ${file%%.*}
/home/tux/book/book
${VAR/pattern_1/pattern_2}

replaces the first match of PATTERN_1 in the content of VAR with PATTERN_2:

file=/home/tux/book/book.tar.bz2
echo ${file/tux/wilber}
/home/wilber/book/book.tar.bz2
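
Variable substitution is handy for batch renaming. The following sketch (file names assumed) changes the extension of all .txt files in the current directory to .bak:

for f in *.txt; do
  mv "$f" "${f%.txt}.bak"   # %.txt removes the .txt suffix from the right
done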

1.6 Grouping and Combining Commands

Shells allow you to concatenate and group commands for conditional execution. Each command returns an exit code which determines the success or failure of its operation. If it is 0 (zero), the command was successful; any other value marks an error that is specific to the command.

The following list shows how commands can be grouped:

Command1 ; Command2

executes the commands in sequential order. The exit code is not checked. The following line displays the content of the file with cat and then prints its file properties with ls regardless of their exit codes:

cat filelist.txt ; ls -l filelist.txt
Command1 && Command2

runs the right command, if the left command was successful (logical AND). The following line displays the content of the file and prints its file properties only when the previous command was successful (compare it with the previous entry in this list):

cat filelist.txt && ls -l filelist.txt
Command1 || Command2

runs the right command, when the left command has failed (logical OR). The following line creates a directory in /home/wilber/bar only when the creation of the directory in /home/tux/foo has failed:

mkdir /home/tux/foo || mkdir /home/wilber/bar
funcname(){ ... }

creates a shell function. You can use the positional parameters to access its arguments. The following line defines the function hello to print a short message:

hello() { echo "Hello $1"; }

You can call this function like this:

hello Tux

which prints:

Hello Tux
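
The exit code evaluated by && and || is also available in the special variable $?. A short illustration (the path is an example):

ls /tmp/does-not-exist
echo $?   # non-zero, because ls failed
ls /tmp
echo $?   # 0, because the listing succeeded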

1.7 Working with Common Flow Constructs

To control the flow of your script, a shell has while, if, for and case constructs.

1.7.1 The if Control Command

The if command is used to check expressions. For example, the following code tests whether the current user is Tux:

if test $USER = "tux"; then
  echo "Hello Tux."
else
  echo "You are not Tux."
fi

The test expression can be as complex or as simple as needed. The following expression checks if the file foo.txt exists:

if test -e /tmp/foo.txt ; then
  echo "Found foo.txt"
fi

The test expression can also be abbreviated with square brackets:

if [ -e /tmp/foo.txt ] ; then
  echo "Found foo.txt"
fi
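
Quoting the variable in a test (for example "$USER") keeps the expression valid even if the variable is empty or contains spaces. Conditions can also be combined, as in this sketch:

if [ "$USER" = "tux" ] && [ -e /tmp/foo.txt ]; then
  echo "Tux found foo.txt"
fi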

Find more useful expressions at http://www.cyberciti.biz/nixcraft/linux/docs/uniqlinuxfeatures/lsst/ch03sec02.html.

1.7.2 Creating Loops with the for Command

The for loop allows you to execute commands for a list of entries. For example, the following code prints some information about the PNG files in the current directory:

for i in *.png; do
 ls -l $i
done
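
The while and case constructs mentioned at the beginning of this section follow similar patterns. A minimal sketch of each:

# while: repeat the body as long as the test succeeds
i=1
while [ "$i" -le 3 ]; do
  echo "Run $i"
  i=$((i+1))
done

# case: branch on a pattern match against the first argument
case "$1" in
  start) echo "Starting" ;;
  stop)  echo "Stopping" ;;
  *)     echo "Usage: $0 {start|stop}" ;;
esac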

1.8 For More Information

Important information about Bash is provided in the man pages (man bash).

2 sudo

Many commands and system utilities need to be run as root to modify files and/or perform tasks that only the super user is allowed to. For security reasons and to avoid accidentally running dangerous commands, it is generally advisable not to log in directly as root. Instead, it is recommended to work as a normal, unprivileged user and use the sudo command to run commands with elevated privileges.

On SUSE Linux Enterprise Desktop, sudo is configured by default to work similarly to su. However, sudo offers the possibility to allow users to run commands with the privileges of any other user in a highly configurable manner. This can be used to assign roles with specific privileges to certain users and groups. For example, it is possible to allow members of the group users to run the command frobnicate as wilber, with the restriction that no arguments are specified. While su always requires the root password for authentication with PAM, sudo can be configured to authenticate with your own credentials. This increases security by not having to share the root password.

2.1 Basic sudo Usage

sudo is simple to use, yet very powerful.

2.1.1 Running a Single Command

Logged in as a normal user, you can run any command as root by adding sudo before it. It will prompt for the root password and, if authenticated successfully, run the command as root:

tux > id -un 1
tux
tux > sudo id -un
root's password: 2
root
tux > id -un
tux 3
tux > sudo id -un
4
root

1

The id -un command prints the login name of the current user.

2

The password is not shown during input, neither as clear text nor as bullets.

3

Only commands started with sudo are run with elevated privileges. If you run the same command without the sudo prefix, it is run with the privileges of the current user again.

4

For a limited amount of time, you do not need to enter the root password again.

Tip
Tip: I/O Redirection

I/O redirection does not work as you would probably expect:

tux > sudo echo s > /proc/sysrq-trigger
bash: /proc/sysrq-trigger: Permission denied
tux > sudo cat < /proc/1/maps
bash: /proc/1/maps: Permission denied

Only the echo/cat binary is run with elevated privileges, while the redirection is performed by the user's shell with user privileges. You can either start a shell like in Section 2.1.2, “Starting a Shell” or use the dd utility instead:

echo s | sudo dd of=/proc/sysrq-trigger
sudo dd if=/proc/1/maps | cat
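
Alternatively, the whole command line, including the redirection, can be run by a root shell. A sketch:

sudo sh -c 'echo s > /proc/sysrq-trigger'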

2.1.2 Starting a Shell

Having to add sudo before every command can be cumbersome. While you could specify a shell as a command (sudo bash), it is recommended to use one of the built-in mechanisms to start a shell instead:

sudo -s (<command>)

Starts a shell specified by the SHELL environment variable or the target user's default shell. If a command is given, it is passed to the shell (with the -c option); otherwise the shell runs in interactive mode.

tux:~ > sudo -s
root's password:
root:/home/tux # exit
tux:~ > 
sudo -i (<command>)

Like -s, but starts the shell as a login shell. This means that the shell's start-up files (.profile etc.) are processed and the current working directory is set to the target user's home directory.

tux:~ > sudo -i
root's password:
root:~ # exit
tux:~ > 

2.1.3 Environment Variables

By default, sudo does not propagate environment variables:

tux > ENVVAR=test env | grep ENVVAR
ENVVAR=test
tux > ENVVAR=test sudo env | grep ENVVAR
root's password:
1
tux > 

1

The empty output shows that the environment variable ENVVAR did not exist in the context of the command run with sudo.

This behavior can be changed by the env_reset option, see Table 2.1, “Useful Flags and Options”.
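
If a specific variable should always survive, it can be added to the env_keep list in a sudoers drop-in file (see Section 2.2.1, “Editing the Configuration Files”; the variable name below is an example):

# /etc/sudoers.d/keepvars, created with: sudo visudo -f /etc/sudoers.d/keepvars
Defaults env_keep += "ENVVAR"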

2.2 Configuring sudo

sudo is a very flexible tool with extensive configuration.

Note
Note: Locked yourself out of sudo

If you accidentally locked yourself out of sudo, use su - and the root password to get a root shell. To fix the error, run visudo.

2.2.1 Editing the Configuration Files

The main policy configuration file for sudo is /etc/sudoers. As it is possible to lock yourself out of the system due to errors in this file, it is strongly recommended to use visudo for editing. It will prevent simultaneous changes to the opened file and check for syntax errors before saving the modifications.

Despite its name, you can also use editors other than vi by setting the EDITOR environment variable, for example:

sudo EDITOR=/usr/bin/nano visudo

However, the /etc/sudoers file itself is supplied by the system packages and modifications may break on updates. Therefore, it is recommended to put custom configuration into files in the /etc/sudoers.d/ directory. Any file in there is automatically included. To create or edit a file in that subdirectory, run:

sudo visudo -f /etc/sudoers.d/NAME

Alternatively with a different editor (for example nano):

sudo EDITOR=/usr/bin/nano visudo -f /etc/sudoers.d/NAME
Note
Note: Ignored Files in /etc/sudoers.d

The #includedir command in /etc/sudoers, used for /etc/sudoers.d, ignores files that end in ~ (tilde) or contain a . (dot).

For more information on the visudo command, run man 8 visudo.

2.2.2 Basic sudoers Configuration Syntax

In the sudoers configuration files, there are two types of options: strings and flags. While strings can contain any value, flags can be turned either ON or OFF. The most important syntax constructs for sudoers configuration files are:

# Everything on a line after a # gets ignored 1
Defaults !insults # Disable the insults flag 2
Defaults env_keep += "DISPLAY HOME" # Add DISPLAY and HOME to env_keep
tux ALL = NOPASSWD: /usr/bin/frobnicate, PASSWD: /usr/bin/journalctl 3

1

There are two exceptions: #include and #includedir are normal commands. A # followed by digits specifies a UID.

2

Remove the ! to set the specified flag to ON.

3

See Section 2.2.3, “Rules in sudoers”.

Table 2.1: Useful Flags and Options

Option name

Description

Example

targetpw

This flag controls whether the invoking user is required to enter the password of the target user (ON) (for example, root) or of the invoking user (OFF).

Defaults targetpw # Turn targetpw flag ON
rootpw

If set, sudo prompts for the root password instead of the target user's or the invoking user's password. The default is OFF.

Defaults !rootpw # Turn rootpw flag OFF
env_reset

If set, sudo constructs a minimal environment with only TERM, PATH, HOME, MAIL, SHELL, LOGNAME, USER, USERNAME, and SUDO_* set. Additionally, variables listed in env_keep get imported from the calling environment. The default is ON.

Defaults env_reset # Turn env_reset flag ON
env_keep

List of environment variables to keep when the env_reset flag is ON.

# Set env_keep to contain EDITOR and PROMPT
Defaults env_keep = "EDITOR PROMPT"
Defaults env_keep += "JRE_HOME" # Add JRE_HOME
Defaults env_keep -= "JRE_HOME" # Remove JRE_HOME
env_delete

List of environment variables to remove when the env_reset flag is OFF.

# Set env_delete to contain EDITOR and PROMPT
Defaults env_delete = "EDITOR PROMPT"
Defaults env_delete += "JRE_HOME" # Add JRE_HOME
Defaults env_delete -= "JRE_HOME" # Remove JRE_HOME

The Defaults token can also be used to create aliases for a collection of users, hosts, and commands. Furthermore, it is possible to apply an option only to a specific set of users.

For detailed information about the /etc/sudoers configuration file, consult man 5 sudoers.

2.2.3 Rules in sudoers

Rules in the sudoers configuration can be very complex, so this section will only cover the basics. Each rule follows the basic scheme ([] marks optional parts):

#Who      Where         As whom      Tag                What
User_List Host_List = [(User_List)] [NOPASSWD:|PASSWD:] Cmnd_List
Syntax for sudoers Rules
User_List

One or more (separated by ,) identifiers: Either a user name, a group in the format %GROUPNAME or a user ID in the format #UID. Negation can be performed with a ! prefix.

Host_List

One or more (separated by ,) identifiers: Either a (fully qualified) host name or an IP address. Negation can be performed with a ! prefix. ALL is the usual choice for Host_List.

NOPASSWD:|PASSWD:

The user will not be prompted for a password when running commands matching Cmnd_List after NOPASSWD:.

PASSWD is the default; it only needs to be specified when both are on the same line:

tux ALL = PASSWD: /usr/bin/foo, NOPASSWD: /usr/bin/bar
Cmnd_List

One or more (separated by ,) specifiers: A path to an executable, followed by allowed arguments or nothing.

/usr/bin/foo     # Anything allowed
/usr/bin/foo bar # Only "/usr/bin/foo bar" allowed
/usr/bin/foo ""  # No arguments allowed

ALL can be used as User_List, Host_List, and Cmnd_List.

A rule that allows tux to run all commands as root without entering a password:

tux ALL = NOPASSWD: ALL

A rule that allows tux to run systemctl restart apache2:

tux ALL = /usr/bin/systemctl restart apache2

A rule that allows tux to run wall as admin with no arguments:

tux ALL = (admin) /usr/bin/wall ""
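
To verify which rules apply to a user, sudo can list them:

sudo -l             # list the rules that apply to the invoking user
sudo -l -U wilber   # as root: list the rules that apply to user wilber
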
Warning
Warning: Dangerous constructs

Constructs of the kind

ALL ALL = ALL

must not be used without Defaults targetpw, otherwise anyone can run commands as root.

2.3 Common Use Cases

Although the default configuration is often sufficient for simple setups and desktop environments, custom configurations can be very useful.

2.3.1 Using sudo without root Password

In cases with special restrictions (“user X can only run command Y as root”), the default configuration that asks for the root password is not suitable. In other cases, it is still favorable to have some kind of separation of privileges. By convention, members of the group wheel can run all commands with sudo as root.

  1. Add yourself to the wheel group

    If your user account is not already a member of the wheel group, add it by running sudo usermod -a -G wheel USERNAME and logging out and in again. Verify that the change was successful by running groups USERNAME.

  2. Make authentication with the invoking user's password the default.

    Create the file /etc/sudoers.d/userpw with visudo (see Section 2.2.1, “Editing the Configuration Files”) and add:

    Defaults !targetpw
  3. Select a new default rule.

    Depending on whether you want users to re-enter their passwords, uncomment the specific line in /etc/sudoers and comment out the default rule.

    ## Uncomment to allow members of group wheel to execute any command
    # %wheel ALL=(ALL) ALL
    
    ## Same thing without a password
    # %wheel ALL=(ALL) NOPASSWD: ALL
  4. Make the default rule more restrictive

    Comment out or remove the allow-everything rule in /etc/sudoers:

    ALL     ALL=(ALL) ALL   # WARNING! Only use this together with 'Defaults targetpw'!
    Warning
    Warning: Dangerous rule in sudoers

    Do not forget this step, otherwise any user can execute any command as root!

  5. Test the configuration

    Try to run sudo as member and non-member of wheel.

    tux:~ > groups
    users wheel
    tux:~ > sudo id -un
    tux's password:
    root
    wilber:~ > groups
    users
    wilber:~ > sudo id -un
    wilber is not in the sudoers file.  This incident will be reported.
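
Instead of uncommenting the rule in /etc/sudoers itself, the wheel rule can also be placed in a drop-in file, which keeps the packaged file untouched (a sketch; the file name is arbitrary):

# /etc/sudoers.d/wheel, created with: sudo visudo -f /etc/sudoers.d/wheel
%wheel ALL=(ALL) ALL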

2.3.2 Using sudo with X.Org Applications

When starting graphical applications with sudo, you will encounter the following error:

tux > sudo xterm
xterm: Xt error: Can't open display: %s
xterm: DISPLAY is not set

YaST will pick the ncurses interface instead of the graphical one.

To use X.Org applications started with sudo, the environment variables DISPLAY and XAUTHORITY need to be propagated. To configure this, create the file /etc/sudoers.d/xorg (see Section 2.2.1, “Editing the Configuration Files”) and add the following line:

Defaults env_keep += "DISPLAY XAUTHORITY"

If not set already, set the XAUTHORITY variable as follows:

export XAUTHORITY=~/.Xauthority

Now X.Org applications can be run as usual:

sudo yast2

2.4 More Information

A quick overview of the available command line switches can be retrieved by running sudo --help. An explanation and other important information can be found in the man page man 8 sudo, while the configuration is documented in man 5 sudoers.

3 YaST Online Update

SUSE offers a continuous stream of software security updates for your product. By default, the update applet is used to keep your system up-to-date. Refer to Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”, Section 10.5 “Keeping the System Up-to-date” for further information on the update applet. This chapter covers the alternative tool for updating software packages: YaST Online Update.

The current patches for SUSE® Linux Enterprise Desktop are available from an update software repository. If you have registered your product during the installation, an update repository is already configured. If you have not registered SUSE Linux Enterprise Desktop, you can do so by starting the Product Registration in YaST. Alternatively, you can manually add an update repository from a source you trust. To add or remove repositories, start the Repository Manager with Software › Software Repositories in YaST. Learn more about the Repository Manager in Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”, Section 10.4 “Managing Software Repositories and Services”.

Note
Note: Error on Accessing the Update Catalog

If you are not able to access the update catalog, this might be because of an expired subscription. Normally, SUSE Linux Enterprise Desktop comes with a one-year or three-year subscription, during which you have access to the update catalog. This access will be denied after the subscription ends.

If access to the update catalog is denied, you will see a warning message prompting you to visit the SUSE Customer Center and check your subscription. The SUSE Customer Center is available at https://scc.suse.com/.

SUSE provides updates with different relevance levels:

Security Updates

Fix severe security hazards and should always be installed.

Recommended Updates

Fix issues that could compromise your computer.

Optional Updates

Fix non-security relevant issues or provide enhancements.

3.1 The Online Update Dialog

To open the YaST Online Update dialog, start YaST and select Software › Online Update. Alternatively, start it from the command line with yast2 online_update.

The Online Update window consists of four sections.

YaST Online Update
Figure 3.1: YaST Online Update

The Summary section on the left lists the available patches for SUSE Linux Enterprise Desktop. The patches are sorted by security relevance: security, recommended, and optional. You can change the view of the Summary section by selecting one of the following options from Show Patch Category:

Needed Patches (default view)

Non-installed patches that apply to packages installed on your system.

Unneeded Patches

Patches that either apply to packages not installed on your system, or patches whose requirements have already been fulfilled (because the relevant packages have already been updated from another source).

All Patches

All patches available for SUSE Linux Enterprise Desktop.

Each list entry in the Summary section consists of a symbol and the patch name. For an overview of the possible symbols and their meaning, press Shift–F1. Actions required by Security and Recommended patches are automatically preset. These actions are Autoinstall, Autoupdate and Autodelete.

If you install an up-to-date package from a repository other than the update repository, the requirements of a patch for this package may be fulfilled with this installation. In this case a check mark is displayed in front of the patch summary. The patch will be visible in the list until you mark it for installation. This will in fact not install the patch (because the package is already up-to-date), but mark the patch as having been installed.

Select an entry in the Summary section to view a short Patch Description at the bottom left corner of the dialog. The upper right section lists the packages included in the selected patch (a patch can consist of several packages). Click an entry in the upper right section to view details about the respective package that is included in the patch.

3.2 Installing Patches

The YaST Online Update dialog allows you to either install all available patches at once or manually select the desired patches. You may also revert patches that have been applied to the system.

By default, all new patches (except optional ones) that are currently available for your system are already marked for installation. They will be applied automatically once you click Accept or Apply. If one or multiple patches require a system reboot, you will be notified about this before the patch installation starts. You can then either decide to continue with the installation of the selected patches, skip the installation of all patches that need rebooting and install the rest, or go back to the manual patch selection.

Procedure 3.1: Applying Patches with YaST Online Update
  1. Start YaST and select Software › Online Update.

  2. To automatically apply all new patches (except optional ones) that are currently available for your system, press Apply or Accept.

  3. To first modify the selection of patches that you want to apply:

    1. Use the respective filters and views that the interface provides. For details, refer to Section 3.1, “The Online Update Dialog”.

    2. Select or deselect patches according to your needs and wishes by right-clicking the patch and choosing the respective action from the context menu.

      Important
      Important: Always Apply Security Updates

      Do not deselect any security-related patches without a very good reason. These patches fix severe security hazards and prevent your system from being exploited.

    3. Most patches include updates for several packages. If you want to change actions for single packages, right-click a package in the package view and choose an action.

    4. To confirm your selection and apply the selected patches, proceed with Apply or Accept.

  4. After the installation is complete, click Finish to leave the YaST Online Update. Your system is now up-to-date.

3.3 Automatic Online Update

YaST also offers the possibility to set up an automatic update with a daily, weekly, or monthly schedule. To use the respective module, you need to install the yast2-online-update-configuration package first.

By default, updates are downloaded as delta RPMs. Since rebuilding RPM packages from delta RPMs is a memory- and processor-intensive task, certain setups or hardware configurations might require you to disable the use of delta RPMs for the sake of performance.

Some patches, such as kernel updates or packages requiring license agreements, require user interaction, which would cause the automatic update procedure to stop. You can configure YaST to skip patches that require user interaction.

Procedure 3.2: Configuring the Automatic Online Update
  1. After installation, start YaST and select Software › Online Update Configuration.

    Alternatively, start the module with yast2 online_update_configuration from the command line.

  2. Activate Automatic Online Update.

  3. Choose the update interval: Daily, Weekly, or Monthly.

  4. To automatically accept any license agreements, activate Agree with Licenses.

  5. Select Skip Interactive Patches if you want the update procedure to proceed fully automatically.

    Important
    Important: Skipping Patches

    If you choose to skip patches that require interaction, run a manual Online Update occasionally to install those patches, too. Otherwise you might miss important patches.

  6. To automatically install all packages recommended by updated packages, activate Include Recommended Packages.

  7. To disable the use of delta RPMs (for performance reasons), deactivate Use Delta RPMs.

  8. To filter the patches by category (such as security or recommended), activate Filter by Category and add the appropriate patch categories from the list. Only patches of the selected categories will be installed. Others will be skipped.

  9. Confirm your configuration with OK.

The automatic online update does not automatically restart the system afterward. If there are package updates that require a system reboot, you need to reboot the system manually.

4 YaST

YaST is the installation and configuration tool for SUSE Linux Enterprise Desktop. It has a graphical interface and the capability to customize your system quickly during and after the installation. It can be used to set up hardware, configure the network and system services, and tune your security settings.

4.1 Advanced Key Combinations

YaST has a set of advanced key combinations.

Print Screen

Take and save a screenshot. May not be available when YaST is running under some desktop environments.

Shift+F4

Enable/disable the color palette optimized for vision impaired users.

Shift+F7

Enable/disable logging of debug messages.

Shift+F8

Open a file dialog to save log files to a non-standard location.

Ctrl+Shift+Alt+D

Send a DebugEvent. YaST modules can react to this by executing special debugging actions. The result depends on the specific YaST module.

Ctrl+Shift+Alt+M

Start/stop macro recorder.

Ctrl+Shift+Alt+P

Replay macro.

Ctrl+Shift+Alt+S

Show style sheet editor.

Ctrl+Shift+Alt+T

Dump widget tree to the log file.

Ctrl+Shift+Alt+X

Open a terminal window (xterm). Useful for the installation process via VNC.

Ctrl+Shift+Alt+Y

Show widget tree browser.

5 YaST in Text Mode

This section is intended for system administrators and experts who do not run an X server on their systems and depend on the text-based installation tool. It provides basic information about starting and operating YaST in text mode.

YaST in text mode uses the ncurses library to provide an easy pseudo-graphical user interface. The ncurses library is installed by default. The minimum supported size of the terminal emulator in which to run YaST is 80x25 characters.

Figure 5.1: Main Window of YaST in Text Mode

When you start YaST in text mode, the YaST control center appears (see Figure 5.1). The main window consists of three areas. The left frame features the categories to which the various modules belong. This frame is active when YaST is started and therefore it is marked by a bold white border. The active category is selected. The right frame provides an overview of the modules available in the active category. The bottom frame contains the buttons for Help and Quit.

When you start the YaST control center, the category Software is selected automatically. Use ↓ and ↑ to change the category. To select a module from the category, activate the right frame with → and then use ↓ and ↑ to select the module. Keep the arrow keys pressed to scroll through the list of available modules. After selecting a module, press Enter to start it.

Various buttons or selection fields in the module contain a highlighted letter (yellow by default). Use Alt+highlighted_letter to select a button directly instead of navigating there with →|. Exit the YaST control center by pressing Alt+Q or by selecting Quit and pressing Enter.

Tip
Tip: Refreshing YaST Dialogs

If a YaST dialog gets corrupted or distorted (for example, while resizing the window), press Ctrl+L to refresh and restore its contents.

5.1 Navigation in Modules

The following description of the control elements in the YaST modules assumes that all function keys and Alt key combinations work and are not assigned to different global functions. Read Section 5.3, “Restriction of Key Combinations” for information about possible exceptions.

Navigation among Buttons and Selection Lists

Use →| to navigate among the buttons and frames containing selection lists. To navigate in reverse order, use Alt+→| or Shift+→| combinations.

Navigation in Selection Lists

Use the arrow keys (↑ and ↓) to navigate among the individual elements in an active frame containing a selection list. If individual entries within a frame exceed its width, use Shift+→ or Shift+← to scroll horizontally to the right and left. Alternatively, use Ctrl+E or Ctrl+A. This combination can also be used if using → or ← results in changing the active frame or the current selection list, as in the control center.

Buttons, Radio Buttons, and Check Boxes

To select buttons with empty square brackets (check boxes) or empty parentheses (radio buttons), press Space or Enter. Alternatively, radio buttons and check boxes can be selected directly with Alt+highlighted_letter. In this case, you do not need to confirm with Enter. If you navigate to an item with →|, press Enter to execute the selected action or activate the respective menu item.

Function Keys

The function keys (F1 ... F12) enable quick access to the various buttons. Available function key combinations (FX) are shown in the bottom line of the YaST screen. Which function keys are actually mapped to which buttons depends on the active YaST module, because the different modules offer different buttons (Details, Info, Add, Delete, etc.). Use F10 for Accept, OK, Next, and Finish. Press F1 to access the YaST help.

Using Navigation Tree in ncurses Mode

Some YaST modules use a navigation tree in the left part of the window to select configuration dialogs. Use the arrow keys (↑ and ↓) to navigate in the tree. Use Space to open or close tree items. In ncurses mode, Enter must be pressed after a selection in the navigation tree to show the selected dialog. This is intentional behavior to save time-consuming redraws when browsing through the navigation tree.

Selecting Software in the Software Installation Module

Use the filters on the left side to limit the amount of displayed packages. Installed packages are marked with the letter i. To change the status of a package, press Space or Enter. Alternatively, use the Actions menu to select the needed status change (install, delete, update, taboo or lock).

Figure 5.2: The Software Installation Module

5.2 Advanced Key Combinations

YaST in text mode has a set of advanced key combinations.

Shift+F1

Show a list of advanced hotkeys.

Shift+F4

Change the color scheme.

Ctrl+\

Quit the application.

Ctrl+L

Refresh screen.

Ctrl+D F1

Show a list of advanced hotkeys.

Ctrl+D Shift+D

Dump dialog to the log file as a screenshot.

Ctrl+D Shift+Y

Open YDialogSpy to see the widget hierarchy.

5.3 Restriction of Key Combinations

If your window manager uses global Alt combinations, the Alt combinations in YaST might not work. Keys like Alt or Shift can also be occupied by the settings of the terminal.

Replacing Alt with Esc

Alt shortcuts can be executed with Esc instead of Alt. For example, Esc H replaces Alt+H. (First press Esc, then press H.)

Backward and Forward Navigation with Ctrl+F and Ctrl+B

If the Alt and Shift combinations are occupied by the window manager or the terminal, use the combinations Ctrl+F (forward) and Ctrl+B (backward) instead.

Restriction of Function Keys

Certain function keys (F1 ... F12) might be occupied by the terminal and may not be available for YaST. However, the Alt key combinations and function keys should always be fully available on a pure text console.

5.4 YaST Command Line Options

Besides the text mode interface, YaST provides a pure command line interface. To get a list of YaST command line options, enter:

yast -h

5.4.1 Starting the Individual Modules

To save time, the individual YaST modules can be started directly. To start a module, enter:

yast <module_name>

View a list of all module names available on your system with yast -l or yast --list. Start the network module, for example, with yast lan.

5.4.2 Installing Packages from the Command Line

If you know a package name and the package is provided by any of your active installation repositories, you can use the command line option -i to install the package:

yast -i <package_name>

or

yast --install <package_name>

The package name can be a single short package name (for example gvim), which is installed with dependency checking, or the full path to an RPM package, which is installed without dependency checking.

If you need a command line based software management utility with functionality beyond what YaST provides, consider using Zypper. This utility uses the same software management library that is also the foundation for the YaST package manager. The basic usage of Zypper is covered in Section 6.1, “Using Zypper”.

5.4.3 Command Line Parameters of the YaST Modules

To use YaST functionality in scripts, YaST provides command line support for individual modules. Not all modules have command line support. To display the available options of a module, enter:

yast <module_name> help
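
For example, to display the command line actions of the network module (lan) mentioned above, enter:

tux > yast lan help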

If a module does not provide command line support, the module is started in text mode and the following message appears:

This YaST module does not support the command line interface.

6 Managing Software with Command Line Tools

Abstract

This chapter describes Zypper and RPM, two command line tools for managing software. For a definition of the terminology used in this context (for example, repository, patch, or update) refer to Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”, Section 10.1 “Definition of Terms”.

6.1 Using Zypper

Zypper is a command line package manager for installing, updating and removing packages as well as for managing repositories. It is especially useful for accomplishing remote software management tasks or managing software from shell scripts.

6.1.1 General Usage

The general syntax of Zypper is:

zypper [--global-options] COMMAND [--command-options] [arguments]

The components enclosed in brackets are not required. See zypper help for a list of general options and all commands. To get help for a specific command, type zypper help COMMAND.

Zypper Commands

The simplest way to execute Zypper is to type its name, followed by a command. For example, to apply all needed patches to the system, use:

tux > sudo zypper patch
Global Options

Additionally, you can choose from one or more global options by typing them immediately before the command:

tux > sudo zypper --non-interactive patch

In the above example, the option --non-interactive means that the command is run without asking anything (automatically applying the default answers).

Command-Specific Options

To use options that are specific to a particular command, type them immediately after the command:

tux > sudo zypper patch --auto-agree-with-licenses

In the above example, --auto-agree-with-licenses is used to apply all needed patches to a system without you being asked to confirm any licenses. Instead, licenses will be accepted automatically.

Arguments

Some commands require one or more arguments. For example, when using the command install, you need to specify which package or which packages you want to install:

tux > sudo zypper install mplayer

Some options also require a single argument. The following command will list all known patterns:

tux > zypper search -t pattern

You can combine all of the above. For example, the following command will install the aspell-de and aspell-fr packages from the factory repository while being verbose:

tux > sudo zypper -v install --from factory aspell-de aspell-fr

The --from option makes sure to keep all repositories enabled (for solving any dependencies) while requesting the package from the specified repository.

Most Zypper commands have a dry-run option that does a simulation of the given command. It can be used for test purposes.

tux > sudo zypper remove --dry-run MozillaFirefox

Zypper supports the global --userdata STRING option. You can specify a string with this option, which gets written to Zypper's log files and plug-ins (such as the Btrfs plug-in). It can be used to mark and identify transactions in log files.

tux > sudo zypper --userdata STRING patch

6.1.2 Installing and Removing Software with Zypper

To install or remove packages, use the following commands:

tux > sudo zypper install PACKAGE_NAME
sudo zypper remove PACKAGE_NAME
Warning
Warning: Do Not Remove Mandatory System Packages

Do not remove mandatory system packages like glibc, zypper, or kernel. If they are removed, the system can become unstable or stop working altogether.

6.1.2.1 Selecting Which Packages to Install or Remove

There are various ways to address packages with the commands zypper install and zypper remove.

By Exact Package Name
tux > sudo zypper install MozillaFirefox
By Exact Package Name and Version Number
tux > sudo zypper install MozillaFirefox-52.2
By Repository Alias and Package Name
tux > sudo zypper install mozilla:MozillaFirefox

Where mozilla is the alias of the repository from which to install.

By Package Name Using Wild Cards

You can select all packages that have names starting or ending with a certain string. Use wild cards with care, especially when removing packages. The following command will install all packages starting with Moz:

tux > sudo zypper install 'Moz*'
Tip
Tip: Removing all -debuginfo Packages

When debugging a problem, you sometimes need to temporarily install a lot of -debuginfo packages which give you more information about running processes. After your debugging session finishes, clean the environment by running the following:

tux > sudo zypper remove '*-debuginfo'
By Capability

For example, if you want to install a package without knowing its exact name (such as a Perl module known only by its capability), capabilities come in handy. The following command installs the package that provides the capability firefox:

tux > sudo zypper install firefox
By Capability, Hardware Architecture, or Version

Together with a capability, you can specify a hardware architecture and a version:

  • The name of the desired hardware architecture is appended to the capability after a full stop. For example, to specify the AMD64/Intel 64 architectures (which in Zypper is named x86_64), use:

    tux > sudo zypper install 'firefox.x86_64'
  • Versions must be appended to the end of the string and must be preceded by an operator: < (less than), <= (less than or equal), = (equal), >= (greater than or equal), > (greater than).

    tux > sudo zypper install 'firefox>=52.2'
  • You can also combine a hardware architecture and version requirement:

    tux > sudo zypper install 'firefox.x86_64>=52.2'
By Path to the RPM File

You can also specify a local or remote path to a package:

tux > sudo zypper install /tmp/install/MozillaFirefox.rpm
tux > sudo zypper install http://download.example.com/MozillaFirefox.rpm

6.1.2.2 Combining Installation and Removal of Packages

To install and remove packages simultaneously, use the +/- modifiers. To install emacs and simultaneously remove vim, use:

tux > sudo zypper install emacs -vim

To remove emacs and simultaneously install vim, use:

tux > sudo zypper remove emacs +vim

To prevent a package name starting with - from being interpreted as a command option, always use it as the second argument. If this is not possible, precede it with --:

tux > sudo zypper install -emacs +vim       # Wrong
tux > sudo zypper install vim -emacs        # Correct
tux > sudo zypper install -- -emacs +vim    # Correct
tux > sudo zypper remove emacs +vim         # Correct

6.1.2.3 Cleaning Up Dependencies of Removed Packages

To automatically remove any packages that become unneeded after removing a specified package, use the --clean-deps option:

tux > sudo zypper rm PACKAGE_NAME --clean-deps

6.1.2.4 Using Zypper in Scripts

By default, Zypper asks for a confirmation before installing or removing a selected package, or when a problem occurs. You can override this behavior using the --non-interactive option. This option must be given before the actual command (install, remove, or patch), as can be seen in the following:

tux > sudo zypper --non-interactive install PACKAGE_NAME

This option allows the use of Zypper in scripts and cron jobs.
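
For example, a nightly maintenance script could combine --non-interactive with the patch command; a minimal sketch, assuming it is run as root (the file name and schedule are up to you):

#!/bin/sh
# Refresh all repositories, then apply all needed patches
# without prompting; accept required licenses automatically.
zypper --non-interactive refresh
zypper --non-interactive patch --auto-agree-with-licenses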

6.1.2.5 Installing or Downloading Source Packages

To install the corresponding source package of a package, use:

tux > zypper source-install PACKAGE_NAME

When executed as root, the default location to which source packages are installed is /usr/src/packages/; when run as a user, it is ~/rpmbuild. These values can be changed in your local rpm configuration.

This command will also install the build dependencies of the specified package. If you do not want this, add the switch -D:

tux > sudo zypper source-install -D PACKAGE_NAME

To install only the build dependencies, use -d.

tux > sudo zypper source-install -d PACKAGE_NAME

Of course, this will only work if you have the repository with the source packages enabled in your repository list (it is added by default, but not enabled). See Section 6.1.5, “Managing Repositories with Zypper” for details on repository management.

A list of all source packages available in your repositories can be obtained with:

tux > zypper search -t srcpackage

You can also download source packages for all installed packages to a local directory. To download source packages, use:

tux > zypper source-download

The default download directory is /var/cache/zypper/source-download. You can change it using the --directory option. To only show missing or extraneous packages without downloading or deleting anything, use the --status option. To delete extraneous source packages, use the --delete option. To disable deleting, use the --no-delete option.

6.1.2.6 Installing Packages from Disabled Repositories

Normally you can only install or refresh packages from enabled repositories. The --plus-content TAG option helps you specify repositories to be refreshed, temporarily enabled during the current Zypper session, and disabled after it completes.

For example, to enable repositories that may provide additional -debuginfo or -debugsource packages, use --plus-content debug. You can specify this option multiple times.

To temporarily enable such 'debug' repositories to install a specific -debuginfo package, use the option as follows:

tux > sudo zypper --plus-content debug \
   install "debuginfo(build-id)=eb844a5c20c70a59fc693cd1061f851fb7d046f4"

The build-id string is reported by gdb for missing debuginfo packages.

6.1.2.7 Utilities

To verify whether all dependencies are still fulfilled and to repair missing dependencies, use:

tux > zypper verify

In addition to dependencies that must be fulfilled, some packages recommend other packages. These recommended packages are only installed if actually available and installable. In case recommended packages were made available after the recommending package has been installed (by adding additional packages or hardware), use the following command:

tux > sudo zypper install-new-recommends

This command is very useful after plugging in a Web cam or Wi-Fi device. It will install drivers for the device and related software, if available. Drivers and related software are only installable if certain hardware dependencies are fulfilled.

6.1.3 Updating Software with Zypper

There are three different ways to update software using Zypper: by installing patches, by installing a new version of a package or by updating the entire distribution. The latter is achieved with zypper dist-upgrade. Upgrading SUSE Linux Enterprise Desktop is discussed in Book “Deployment Guide”, Chapter 16 “Upgrading SUSE Linux Enterprise”.

6.1.3.1 Installing All Needed Patches

To install all officially released patches that apply to your system, run:

tux > sudo zypper patch

All patches available from repositories configured on your computer are checked for their relevance to your installation. If they are relevant (and not classified as optional or feature), they are installed immediately. Note that the official update repository is only available after registering your SUSE Linux Enterprise Desktop installation.

If a patch that is about to be installed includes changes that require a system reboot, you will be warned beforehand.

The plain zypper patch command does not apply patches from third party repositories. To update also the third party repositories, use the --with-update command option as follows:

tux > sudo zypper patch --with-update

To install also optional patches, use:

tux > sudo zypper patch --with-optional

To install all patches relating to a specific Bugzilla issue, use:

tux > sudo zypper patch --bugzilla=NUMBER

To install all patches relating to a specific CVE database entry, use:

tux > sudo zypper patch --cve=NUMBER

For example, to install a security patch with the CVE number CVE-2010-2713, execute:

tux > sudo zypper patch --cve=CVE-2010-2713

To install only patches which affect Zypper and the package management itself, use:

tux > sudo zypper patch --updatestack-only

Bear in mind that other command options that would also update other repositories will be dropped if you use the --updatestack-only command option.

6.1.3.2 Listing Patches

To find out whether patches are available, Zypper allows viewing the following information:

Number of Needed Patches

To list the number of needed patches (patches that apply to your system but are not yet installed), use patch-check:

tux > zypper patch-check
Loading repository data...
Reading installed packages...
5 patches needed (1 security patch)

This command can be combined with the --updatestack-only option to list only the patches which affect Zypper and the package management itself.

List of Needed Patches

To list all needed patches (patches that apply to your system but are not yet installed), use list-patches:

tux > zypper list-patches
Loading repository data...
Reading installed packages...

Repository     | Name        | Version | Category | Status  | Summary
---------------+-------------+---------+----------+---------+---------
SLES12-Updates | SUSE-2014-8 | 1       | security | needed  | openssl: Update for OpenSSL
List of All Patches

To list all patches available for SUSE Linux Enterprise Desktop, regardless of whether they are already installed or apply to your installation, use zypper patches.

It is also possible to list and install patches relevant to specific issues. To list specific patches, use the zypper list-patches command with the following options:

By Bugzilla Issues

To list all needed patches that relate to Bugzilla issues, use the option --bugzilla.

To list patches for a specific bug, you can also specify a bug number: --bugzilla=NUMBER. To search for patches relating to multiple Bugzilla issues, add commas between the bug numbers, for example:

tux > zypper list-patches --bugzilla=972197,956917
By CVE Number

To list all needed patches that relate to an entry in the CVE database (Common Vulnerabilities and Exposures), use the option --cve.

To list patches for a specific CVE database entry, you can also specify a CVE number: --cve=NUMBER. To search for patches relating to multiple CVE database entries, add commas between the CVE numbers, for example:

tux > zypper list-patches --cve=CVE-2016-2315,CVE-2016-2324

To list all patches regardless of whether they are needed, use the option --all additionally. For example, to list all patches with a CVE number assigned, use:

tux > zypper list-patches --all --cve
Issue | No.           | Patch             | Category    | Severity  | Status
------+---------------+-------------------+-------------+-----------+----------
cve   | CVE-2015-0287 | SUSE-SLE-Module.. | recommended | moderate  | needed
cve   | CVE-2014-3566 | SUSE-SLE-SERVER.. | recommended | moderate  | not needed
[...]

6.1.3.3 Installing New Package Versions

If a repository contains only new packages, but does not provide patches, zypper patch does not show any effect. To update all installed packages with newer available versions (while maintaining system integrity), use:

tux > sudo zypper update

To update individual packages, specify the package with either the update or install command:

tux > sudo zypper update PACKAGE_NAME
sudo zypper install PACKAGE_NAME

A list of all new installable packages can be obtained with the command:

tux > zypper list-updates

Note that this command only lists packages that match the following criteria:

  • has the same vendor as the already installed package,

  • is provided by repositories with at least the same priority as the already installed package,

  • is installable (all dependencies are satisfied).

A list of all new available packages (regardless of whether they are installable or not) can be obtained with:

tux > sudo zypper list-updates --all

To find out why a new package cannot be installed, use the zypper install or zypper update command as described above.

6.1.3.4 Identifying Orphaned Packages

Whenever you remove a repository from Zypper or upgrade your system, some packages can end up in an orphaned state. These orphaned packages no longer belong to any active repository. The following command gives you a list of them:

tux > sudo zypper packages --orphaned

With this list, you can decide if a package is still needed or can be removed safely.

6.1.4 Identifying Processes and Services Using Deleted Files

When patching, updating or removing packages, there may be running processes on the system that continue to use files that have been deleted by the update or removal. Use zypper ps to list processes using deleted files. In case the process belongs to a known service, the service name is listed, making it easy to restart the service. By default zypper ps shows a table:

tux > zypper ps
PID   | PPID | UID | User  | Command      | Service      | Files
------+------+-----+-------+--------------+--------------+-------------------
814   | 1    | 481 | avahi | avahi-daemon | avahi-daemon | /lib64/ld-2.19.s->
      |      |     |       |              |              | /lib64/libdl-2.1->
      |      |     |       |              |              | /lib64/libpthrea->
      |      |     |       |              |              | /lib64/libc-2.19->
[...]
PID: ID of the process
PPID: ID of the parent process
UID: ID of the user running the process
User: Login name of the user running the process
Command: Command used to execute the process
Service: Service name (only if command is associated with a system service)
Files: The list of the deleted files

The output format of zypper ps can be controlled as follows:

zypper ps -s

Create a short table not showing the deleted files.

tux > zypper ps -s
PID   | PPID | UID  | User    | Command      | Service
------+------+------+---------+--------------+--------------
814   | 1    | 481  | avahi   | avahi-daemon | avahi-daemon
817   | 1    | 0    | root    | irqbalance   | irqbalance
1567  | 1    | 0    | root    | sshd         | sshd
1761  | 1    | 0    | root    | master       | postfix
1764  | 1761 | 51   | postfix | pickup       | postfix
1765  | 1761 | 51   | postfix | qmgr         | postfix
2031  | 2027 | 1000 | tux     | bash         |
zypper ps -ss

Show only processes associated with a system service.

PID   | PPID | UID  | User    | Command      | Service
------+------+------+---------+--------------+--------------
814   | 1    | 481  | avahi   | avahi-daemon | avahi-daemon
817   | 1    | 0    | root    | irqbalance   | irqbalance
1567  | 1    | 0    | root    | sshd         | sshd
1761  | 1    | 0    | root    | master       | postfix
1764  | 1761 | 51   | postfix | pickup       | postfix
1765  | 1761 | 51   | postfix | qmgr         | postfix
zypper ps -sss

Only show system services using deleted files.

avahi-daemon
irqbalance
postfix
sshd
zypper ps --print "systemctl status %s"

Show the commands to retrieve status information for services which might need a restart.

systemctl status avahi-daemon
systemctl status irqbalance
systemctl status postfix
systemctl status sshd
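
Combining these output formats with a shell loop lets you restart all affected services in one pass; a minimal sketch (review the list before restarting services on a production system):

tux > zypper ps -sss | while read service; do sudo systemctl restart "$service"; done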

For more information about service handling refer to Chapter 14, The systemd Daemon.

6.1.5 Managing Repositories with Zypper

All installation or patch commands of Zypper rely on a list of known repositories. To list all repositories known to the system, use the command:

tux > zypper repos

The result will look similar to the following output:

Example 6.1: Zypper—List of Known Repositories
tux > zypper repos
# | Alias        | Name          | Enabled | Refresh
--+--------------+---------------+---------+--------
1 | SLEHA-12-GEO | SLEHA-12-GEO  | Yes     | No
2 | SLEHA-12     | SLEHA-12      | Yes     | No
3 | SLES12       | SLES12        | Yes     | No

When specifying repositories in various commands, an alias, URI or repository number from the zypper repos command output can be used. A repository alias is a short version of the repository name for use in repository handling commands. Note that the repository numbers can change after modifying the list of repositories. The alias will never change by itself.

By default, details such as the URI or the priority of the repository are not displayed. Use the following command to list all details:

tux > zypper repos -d

6.1.5.1 Adding Repositories

To add a repository, run

tux > sudo zypper addrepo URI ALIAS

URI can either be an Internet repository, a network resource, a directory or a CD or DVD (see http://en.opensuse.org/openSUSE:Libzypp_URIs for details). The ALIAS is a shorthand and unique identifier of the repository. You can freely choose it, with the only exception that it needs to be unique. Zypper will issue a warning if you specify an alias that is already in use.
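
For example, to add a network repository (the URI below is hypothetical) under the alias example and list the result afterward:

tux > sudo zypper addrepo http://download.example.com/repositories/example/ example
tux > zypper repos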

6.1.5.2 Refreshing Repositories

Zypper enables you to fetch changes in packages from configured repositories. To fetch the changes, run:

tux > sudo zypper refresh
Note
Note: Default Behavior of zypper

By default, some commands perform refresh automatically, so you do not need to run the command explicitly.

When used with the --plus-content option, the refresh command also lets you fetch changes in disabled repositories:

tux > sudo zypper --plus-content refresh

This option fetches changes in repositories, but keeps the disabled repositories in the same state—disabled.

6.1.5.3 Removing Repositories

To remove a repository from the list, use the command zypper removerepo together with the alias or number of the repository you want to delete. For example, to remove the repository SLEHA-12-GEO from Example 6.1, “Zypper—List of Known Repositories”, use one of the following commands:

tux > sudo zypper removerepo 1
tux > sudo zypper removerepo "SLEHA-12-GEO"

6.1.5.4 Modifying Repositories

Enable or disable repositories with zypper modifyrepo. You can also alter the repository's properties (such as refreshing behavior, name or priority) with this command. The following command will enable the repository named updates, turn on auto-refresh and set its priority to 20:

tux > sudo zypper modifyrepo -er -p 20 'updates'

Modifying repositories is not limited to a single repository—you can also operate on groups (see the example after this list):

-a: all repositories
-l: local repositories
-t: remote repositories
-m TYPE: repositories of a certain type (where TYPE can be one of the following: http, https, ftp, cd, dvd, dir, file, cifs, smb, nfs, hd, iso)
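
For example, to turn on auto-refresh for all repositories at once (a sketch; you can verify the result with zypper repos -d):

tux > sudo zypper modifyrepo -ar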

To rename a repository alias, use the renamerepo command. The following example changes the alias from Mozilla Firefox to firefox:

tux > sudo zypper renamerepo 'Mozilla Firefox' firefox

6.1.6 Querying Repositories and Packages with Zypper

Zypper offers various methods to query repositories or packages. To get lists of all products, patterns, packages or patches available, use the following commands:

tux > zypper products
tux > zypper patterns
tux > zypper packages
tux > zypper patches

To query all repositories for certain packages, use search. To get information regarding particular packages, use the info command.

6.1.6.1 zypper search Usage

The zypper search command works on package names, or, optionally, on package summaries and descriptions. Strings wrapped in / are interpreted as regular expressions. By default, the search is not case-sensitive.

Simple search for a package name containing fire
tux > zypper search "fire"
Simple search for the exact package MozillaFirefox
tux > zypper search --match-exact "MozillaFirefox"
Also search in package descriptions and summaries
tux > zypper search -d fire
Only display packages not already installed
tux > zypper search -u fire
Display packages containing the string fir not followed by e
tux > zypper se "/fir[^e]/"

6.1.6.2 zypper what-provides Usage

To search for packages which provide a special capability, use the command what-provides. For example, if you want to know which package provides the Perl module SVN::Core, use the following command:

tux > zypper what-provides 'perl(SVN::Core)'

The command what-provides PACKAGE_NAME is similar to rpm -q --whatprovides PACKAGE_NAME, but RPM is only able to query the RPM database (that is, the database of all installed packages). Zypper, on the other hand, will tell you about providers of the capability from any repository, not only those that are installed.

6.1.6.3 zypper info Usage

To query single packages, use info with an exact package name as an argument. This displays detailed information about a package. In case the package name does not match any package name from repositories, the command outputs detailed information for non-package matches. If you request a specific type (by using the -t option) and the type does not exist, the command outputs other available matches but without detailed information.

If you specify a source package, the command displays binary packages built from the source package. If you specify a binary package, the command outputs the source packages used to build the binary package.

To also show what is required/recommended by the package, use the options --requires and --recommends:

tux > zypper info --requires MozillaFirefox

6.1.7 Configuring Zypper

Zypper now comes with a configuration file, allowing you to permanently change Zypper's behavior (either system-wide or user-specific). For system-wide changes, edit /etc/zypp/zypper.conf. For user-specific changes, edit ~/.zypper.conf. If ~/.zypper.conf does not yet exist, you can use /etc/zypp/zypper.conf as a template: copy it to ~/.zypper.conf and adjust it to your liking. Refer to the comments in the file for help about the available options.
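
A quick way to create the user-specific configuration from the system-wide template:

tux > cp /etc/zypp/zypper.conf ~/.zypper.conf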

6.1.8 Troubleshooting

If you have trouble accessing packages from configured repositories (for example, Zypper cannot find a certain package even though you know it exists in one of the repositories), refreshing the repositories may help:

tux > sudo zypper refresh

If that does not help, try

tux > sudo zypper refresh -fdb

This forces a complete refresh and rebuild of the database, including a forced download of raw metadata.

6.1.9 Zypper Rollback Feature on Btrfs File System

If the Btrfs file system is used on the root partition and snapper is installed, Zypper automatically calls snapper when committing changes to the file system to create appropriate file system snapshots. These snapshots can be used to revert any changes made by Zypper. See Chapter 7, System Recovery and Snapshot Management with Snapper for more information.

6.1.10 For More Information

For more information on managing software from the command line, enter zypper help, zypper help COMMAND or refer to the zypper(8) man page. For a complete and detailed command reference, cheat sheets with the most important commands, and information on how to use Zypper in scripts and applications, refer to http://en.opensuse.org/SDB:Zypper_usage. A list of software changes for the latest SUSE Linux Enterprise Desktop version can be found at http://en.opensuse.org/openSUSE:Zypper_versions.

6.2 RPM—the Package Manager

RPM (RPM Package Manager) is used for managing software packages. Its main commands are rpm and rpmbuild. The powerful RPM database can be queried by the users, system administrators and package builders for detailed information about the installed software.

Essentially, rpm has five modes: installing, uninstalling or updating software packages; rebuilding the RPM database; querying the RPM database or individual RPM archives; checking the integrity of packages; and signing packages. rpmbuild can be used to build installable packages from pristine sources.

Installable RPM archives are packed in a special binary format. These archives consist of the program files to install and certain meta information used during the installation by rpm to configure the software package or stored in the RPM database for documentation purposes. RPM archives normally have the extension .rpm.

Tip
Tip: Software Development Packages

For several packages, the components needed for software development (libraries, headers, include files, etc.) have been put into separate packages. These development packages are only needed if you want to compile software yourself (for example, the most recent GNOME packages). They can be identified by the name extension -devel, such as the packages alsa-devel and gimp-devel.

6.2.1 Verifying Package Authenticity

RPM packages have a GPG signature. To verify the signature of an RPM package, use the command rpm --checksig PACKAGE-1.2.3.rpm to determine whether the package originates from SUSE or from another trustworthy facility. This is especially recommended for update packages from the Internet.
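
For example, to check a locally downloaded update package (the file name below is hypothetical):

tux > rpm --checksig MozillaFirefox-52.2-1.x86_64.rpm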

While fixing issues in the operating system, you might need to install a Problem Temporary Fix (PTF) into a production system. The packages provided by SUSE are signed against a special PTF key. However, in contrast to SUSE Linux Enterprise 11, this key is not imported by default on SUSE Linux Enterprise 12 systems. To manually import the key, use the following command:

tux > sudo rpm --import \
/usr/share/doc/packages/suse-build-key/suse_ptf_key.asc

After importing the key, you can install PTF packages on your system.

6.2.2 Managing Packages: Install, Update, and Uninstall

Normally, the installation of an RPM archive is quite simple: rpm -i PACKAGE.rpm. With this command the package is installed, but only if its dependencies are fulfilled and if there are no conflicts with other packages. If dependencies are not met, rpm prints an error message naming the packages that need to be installed. In the background, the RPM database ensures that no conflicts arise—a specific file can only belong to one package. By choosing different options, you can force rpm to ignore these defaults, but this is only for experts. Otherwise, you risk compromising the integrity of the system and possibly jeopardize the ability to update the system.

The options -U or --upgrade and -F or --freshen can be used to update a package (for example, rpm -F PACKAGE.rpm). This command removes the files of the old version and immediately installs the new files. The difference between the two options is that -U installs packages that previously did not exist in the system, while -F merely updates previously installed packages. When updating, rpm updates configuration files carefully using the following strategy:

  • If a configuration file was not changed by the system administrator, rpm installs the new version of the appropriate file. No action by the system administrator is required.

  • If a configuration file was changed by the system administrator before the update, rpm saves the changed file with the extension .rpmorig or .rpmsave (backup file) and installs the version from the new package. This is done only if the originally installed file and the newer version are different. If this is the case, compare the backup file (.rpmorig or .rpmsave) with the newly installed file and make your changes again in the new file. Afterward, delete all .rpmorig and .rpmsave files to avoid problems with future updates.

  • .rpmnew files appear if the configuration file already exists and if the noreplace label was specified in the .spec file.

Following an update, .rpmsave and .rpmnew files should be removed after comparing them, so they do not obstruct future updates. The .rpmorig extension is assigned if the file has not previously been recognized by the RPM database.

Otherwise, .rpmsave is used. In other words, .rpmorig results from updating from a foreign format to RPM, while .rpmsave results from updating from an older RPM to a newer RPM. .rpmnew does not disclose any information as to whether the system administrator has made any changes to the configuration file. A list of these files is available in /var/adm/rpmconfigcheck. Some configuration files (like /etc/httpd/httpd.conf) are not overwritten to allow continued operation.

The -U switch is not just an equivalent to uninstalling with the -e option and installing with the -i option. Use -U whenever possible.

To remove a package, enter rpm -e PACKAGE. This command only deletes the package if there are no unresolved dependencies. It is theoretically impossible to delete Tcl/Tk, for example, as long as another application requires it. Even in this case, RPM calls for assistance from the database. If such a deletion is, for whatever reason, impossible (even if no additional dependencies exist), it may be helpful to rebuild the RPM database using the option --rebuilddb.

6.2.3 Delta RPM Packages

Delta RPM packages contain the difference between an old and a new version of an RPM package. Applying a delta RPM onto an old RPM results in a completely new RPM. It is not necessary to have a copy of the old RPM because a delta RPM can also work with an installed RPM. The delta RPM packages are even smaller in size than patch RPMs, which is an advantage when transferring update packages over the Internet. The drawback is that update operations with delta RPMs involved consume considerably more CPU cycles than plain or patch RPMs.

The makedeltarpm and applydeltarpm binaries are part of the delta RPM suite (package deltarpm) and help you create and apply delta RPM packages. With the following command, you can create a delta RPM called new.delta.rpm. The command assumes that old.rpm and new.rpm are present:

tux > sudo makedeltarpm old.rpm new.rpm new.delta.rpm

Using applydeltarpm, you can reconstruct the new RPM from the file system if the old package is already installed:

tux > sudo applydeltarpm new.delta.rpm new.rpm

To derive it from the old RPM without accessing the file system, use the -r option:

tux > sudo applydeltarpm -r old.rpm new.delta.rpm new.rpm

See /usr/share/doc/packages/deltarpm/README for technical details.

6.2.4 RPM Queries

With the -q option rpm initiates queries, making it possible to inspect an RPM archive (by adding the option -p) and to query the RPM database of installed packages. Several switches are available to specify the type of information required. See Table 6.1, “The Most Important RPM Query Options”.

Table 6.1: The Most Important RPM Query Options

-i

Package information

-l

File list

-f FILE

Query the package that contains the file FILE (the full path must be specified with FILE)

-s

File list with status information (implies -l)

-d

List only documentation files (implies -l)

-c

List only configuration files (implies -l)

--dump

File list with complete details (to be used with -l, -c, or -d)

--provides

List features of the package that another package can request with --requires

--requires, -R

Capabilities the package requires

--scripts

Installation scripts (preinstall, postinstall, uninstall)

For example, the command rpm -q -i wget displays the information shown in Example 6.2, “rpm -q -i wget”.

Example 6.2: rpm -q -i wget
Name        : wget
Version     : 1.14
Release     : 17.1
Architecture: x86_64
Install Date: Mon 30 Jan 2017 14:01:29 CET
Group       : Productivity/Networking/Web/Utilities
Size        : 2046483
License     : GPL-3.0+
Signature   : RSA/SHA256, Thu 08 Dec 2016 07:48:44 CET, Key ID 70af9e8139db7c82
Source RPM  : wget-1.14-17.1.src.rpm
Build Date  : Thu 08 Dec 2016 07:48:34 CET
Build Host  : sheep09
Relocations : (not relocatable)
Packager    : https://www.suse.com/
Vendor      : SUSE LLC <https://www.suse.com/>
URL         : http://www.gnu.org/software/wget/
Summary     : A Tool for Mirroring FTP and HTTP Servers
Description :
Wget enables you to retrieve WWW documents or FTP files from a server.
This can be done in script files or via the command line.
Distribution: SUSE Linux Enterprise 12

The option -f only works if you specify the complete file name with its full path. Provide as many file names as desired. For example:

tux > rpm -q -f /bin/rpm /usr/bin/wget
rpm-4.11.2-15.1.x86_64
wget-1.14-17.1.x86_64

If only part of the file name is known, use a shell script as shown in Example 6.3, “Script to Search for Packages”. Pass the partial file name to the script shown as a parameter when running it.
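
The referenced script is along the following lines; a minimal sketch that searches the file lists of all installed packages and prints the package owning each match:

Example 6.3: Script to Search for Packages
#!/bin/sh
# Search the file lists of all installed packages for the
# pattern given as the first argument and print the owning
# package for each matching file.
for i in $(rpm -q -a -l | grep "$1"); do
    echo "\"$i\" is in package:"
    rpm -q -f "$i"
    echo ""
done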

The command rpm -q --changelog PACKAGE displays a detailed list of change information about a specific package, sorted by date.
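
Since the change log can be long, it is convenient to pipe the output to a pager, for example:

tux > rpm -q --changelog wget | less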

With the installed RPM database, verification checks can be made. Initiate these with -V, or --verify. With this option, rpm shows all files in a package that have been changed since installation. rpm uses eight character symbols to give some hints about the following changes:

Table 6.2: RPM Verify Options

5

MD5 check sum

S

File size

L

Symbolic link

T

Modification time

D

Major and minor device numbers

U

Owner

G

Group

M

Mode (permissions and file type)

In the case of configuration files, the letter c is printed. For example, for changes to /etc/wgetrc (wget package):

tux > rpm -V wget
S.5....T c /etc/wgetrc

The files of the RPM database are placed in /var/lib/rpm. If the partition /usr has a size of 1 GB, this database can occupy nearly 30 MB, especially after a complete update. If the database is much larger than expected, it is useful to rebuild the database with the option --rebuilddb. Before doing this, make a backup of the old database. The cron script cron.daily makes daily copies of the database (packed with gzip) and stores them in /var/adm/backup/rpmdb. The number of copies is controlled by the variable MAX_RPMDB_BACKUPS (default: 5) in /etc/sysconfig/backup. The size of a single backup is approximately 1 MB for 1 GB in /usr.

6.2.5 Installing and Compiling Source Packages

All source packages carry a .src.rpm extension (source RPM).

Note
Note: Installed Source Packages

Source packages can be copied from the installation medium to the hard disk and unpacked with YaST. They are not, however, marked as installed ([i]) in the package manager. This is because the source packages are not entered in the RPM database. Only installed operating system software is listed in the RPM database. When you install a source package, only the source code is added to the system.

The following directories must be available for rpm and rpmbuild in /usr/src/packages (unless you specified custom settings in a file like /etc/rpmrc):

SOURCES

for the original sources (.tar.bz2 or .tar.gz files, etc.) and for distribution-specific adjustments (mostly .diff or .patch files)

SPECS

for the .spec files, similar to a meta Makefile, which control the build process

BUILD

all the sources are unpacked, patched and compiled in this directory

RPMS

where the completed binary packages are stored

SRPMS

where the source RPMs are stored

When you install a source package with YaST, all the necessary components are installed in /usr/src/packages: the sources and the adjustments in SOURCES and the relevant .spec file in SPECS.

Warning
Warning: System Integrity

Do not experiment with system components (glibc, rpm, etc.), because this endangers the stability of your system.

The following example uses the wget.src.rpm package. After installing the source package, you should have files similar to those in the following list:

/usr/src/packages/SOURCES/wget-1.11.4.tar.bz2
/usr/src/packages/SOURCES/wgetrc.patch
/usr/src/packages/SPECS/wget.spec

rpmbuild -bX /usr/src/packages/SPECS/wget.spec starts the compilation. X is a wild card for various stages of the build process (see the output of --help or the RPM documentation for details). The following is merely a brief explanation:

-bp

Prepare sources in /usr/src/packages/BUILD: unpack and patch.

-bc

Do the same as -bp, but with additional compilation.

-bi

Do the same as -bp, but with additional installation of the built software. Caution: if the package does not support the BuildRoot feature, you might overwrite configuration files.

-bb

Do the same as -bi, but with the additional creation of the binary package. If the compile was successful, the binary should be in /usr/src/packages/RPMS.

-ba

Do the same as -bb, but with the additional creation of the source RPM. If the compilation was successful, the binary should be in /usr/src/packages/SRPMS.

--short-circuit

Skip some steps.

The binary RPM created can now be installed with rpm -i or, preferably, with rpm -U. Installation with rpm makes it appear in the RPM database.
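
For example, to build both the binary and the source RPM for the wget example above in a single run:

tux > rpmbuild -ba /usr/src/packages/SPECS/wget.spec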

Keep in mind that the BuildRoot directive in the spec file has been deprecated since SUSE Linux Enterprise Desktop 12. If you still need this feature, use the --buildroot option as a workaround. For more detailed background information, see the support database at https://www.suse.com/support/kb/doc?id=7017104.

6.2.6 Compiling RPM Packages with build

The danger with many packages is that unwanted files are added to the running system during the build process. To prevent this, use build, which creates a defined environment in which the package is built. To establish this chroot environment, the build script must be provided with a complete package tree. This tree can be made available on the hard disk, via NFS, or from DVD. Set the position with build --rpms DIRECTORY. Unlike rpm, the build command looks for the .spec file in the source directory. To build wget (like in the above example) with the DVD mounted in the system under /media/dvd, use the following commands as root:

root # cd /usr/src/packages/SOURCES/
root # mv ../SPECS/wget.spec .
root # build --rpms /media/dvd/suse/ wget.spec

Subsequently, a minimum environment is established at /var/tmp/build-root. The package is built in this environment. Upon completion, the resulting packages are located in /var/tmp/build-root/usr/src/packages/RPMS.

The build script offers several additional options. For example, you can cause the script to prefer your own RPMs, omit the initialization of the build environment, or limit the rpm command to one of the above-mentioned stages. Access additional information with build --help and by reading the build man page.

6.2.7 Tools for RPM Archives and the RPM Database

Midnight Commander (mc) can display the contents of RPM archives and copy parts of them. It represents archives as virtual file systems, offering all usual menu options of Midnight Commander. Display the HEADER with F3. View the archive structure with the cursor keys and Enter. Copy archive components with F5.

A full-featured package manager is available as a YaST module. For details, see Book “Deployment Guide”, Chapter 10 “Installing or Removing Software”.

7 System Recovery and Snapshot Management with Snapper

Abstract

Being able to take file system snapshots that allow rollbacks on Linux is a feature that was often requested in the past. Snapper, together with the Btrfs file system or thin-provisioned LVM volumes, now fills that gap.

Btrfs, a new copy-on-write file system for Linux, supports file system snapshots (a copy of the state of a subvolume at a certain point of time) of subvolumes (one or more separately mountable file systems within each physical partition). Snapshots are also supported on thin-provisioned LVM volumes formatted with XFS, Ext4 or Ext3. Snapper lets you create and manage these snapshots. It comes with a command line and a YaST interface. Starting with SUSE Linux Enterprise Server 12 it is also possible to boot from Btrfs snapshots—see Section 7.3, “System Rollback by Booting from Snapshots” for more information.

Using Snapper you can perform the following tasks: undo system changes made by YaST and Zypper, restore files from previous snapshots, and perform a system rollback by booting from a snapshot.

7.1 Default Setup

Snapper on SUSE Linux Enterprise Desktop is set up to serve as an undo and recovery tool for system changes. By default, the root partition (/) of SUSE Linux Enterprise Desktop is formatted with Btrfs. Taking snapshots is automatically enabled if the root partition (/) is big enough (approximately more than 16 GB). Taking snapshots on partitions other than / is not enabled by default.

Tip
Tip: Enabling Snapper in the Installed System

If you disabled Snapper during the installation, you can enable it at any time later. To do so, create a default Snapper configuration for the root file system by running

tux > sudo snapper -c root create-config /

Afterward enable the different snapshot types as described in Section 7.1.3.1, “Disabling/Enabling Snapshots”.

Keep in mind that snapshots require a Btrfs root file system with subvolumes set up as proposed by the installer and a partition size of at least 16 GB.
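
To verify that the configuration has been created, list all existing Snapper configurations:

tux > sudo snapper list-configs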

When a snapshot is created, both the snapshot and the original point to the same blocks in the file system. So, initially a snapshot does not occupy additional disk space. If data in the original file system is modified, changed data blocks are copied while the old data blocks are kept for the snapshot. Therefore, a snapshot occupies the same amount of space as the data modified. So, over time, the amount of space a snapshot allocates constantly grows. As a consequence, deleting files from a Btrfs file system containing snapshots may not free disk space!

Note
Note: Snapshot Location

Snapshots always reside on the same partition or subvolume on which the snapshot has been taken. It is not possible to store snapshots on a different partition or subvolume.

As a result, partitions containing snapshots need to be larger than normal partitions. The exact amount strongly depends on the number of snapshots you keep and the amount of data modifications. As a rule of thumb, consider using twice the size you normally would. To prevent disks from running out of space, old snapshots are automatically cleaned up. Refer to Section 7.1.3.4, “Controlling Snapshot Archiving” for details.

7.1.1 Types of Snapshots

Although snapshots themselves do not differ in a technical sense, we distinguish between three types of snapshots, based on the events that trigger them:

Timeline Snapshots

A single snapshot is created every hour. Old snapshots are automatically deleted. By default, the first snapshots of the last ten days, months, and years are kept. Timeline snapshots are disabled by default.

Installation Snapshots

Whenever one or more packages are installed with YaST or Zypper, a pair of snapshots is created: one before the installation starts (Pre) and another one after the installation has finished (Post). In case an important system component such as the kernel has been installed, the snapshot pair is marked as important (important=yes). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten regular snapshots (including administration snapshots) are kept. Installation snapshots are enabled by default.

Administration Snapshots

Whenever you administer the system with YaST, a pair of snapshots is created: one when a YaST module is started (Pre) and another when the module is closed (Post). Old snapshots are automatically deleted. By default the last ten important snapshots and the last ten regular snapshots (including installation snapshots) are kept. Administration snapshots are enabled by default.
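
To check which of these snapshots are marked as important, inspect the Userdata column of the snapper list output. A minimal sketch (the grep filter is just one convenient way to narrow down the list):

tux > sudo snapper list | grep important=yes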

7.1.2 Directories That Are Excluded from Snapshots

Some directories need to be excluded from snapshots for different reasons. The following list shows all directories that are excluded:

/boot/grub2/i386-pc, /boot/grub2/x86_64-efi, /boot/grub2/powerpc-ieee1275, /boot/grub2/s390x-emu

A rollback of the boot loader configuration is not supported. The directories listed above are architecture-specific. The first two directories are present on AMD64/Intel 64 machines, the latter two on IBM POWER and on IBM z Systems, respectively.

/home

If /home does not reside on a separate partition, it is excluded to avoid data loss on rollbacks.

/opt, /var/opt

Third-party products usually get installed to /opt. It is excluded to avoid uninstalling these applications on rollbacks.

/srv

Contains data for Web and FTP servers. It is excluded to avoid data loss on rollbacks.

/tmp, /var/tmp, /var/cache, /var/crash

All directories containing temporary files and caches are excluded from snapshots.

/usr/local

This directory is used when manually installing software. It is excluded to avoid uninstalling these installations on rollbacks.

/var/lib/libvirt/images

The default location for virtual machine images managed with libvirt. Excluded to ensure virtual machine images are not replaced with older versions during a rollback. By default, this subvolume is created with the option no copy on write.

/var/lib/mailman, /var/spool

Directories containing mails or mail queues are excluded to avoid a loss of mails after a rollback.

/var/lib/named

Contains zone data for the DNS server. Excluded from snapshots to ensure a name server can operate after a rollback.

/var/lib/mariadb, /var/lib/mysql, /var/lib/pgsql

These directories contain database data. By default, these subvolumes are created with the option no copy on write.

/var/log

Log file location. Excluded from snapshots to allow log file analysis after the rollback of a broken system.

7.1.3 Customizing the Setup

SUSE Linux Enterprise Desktop comes with a reasonable default setup, which should be sufficient for most use cases. However, all aspects of taking automatic snapshots and snapshot keeping can be configured according to your needs.

7.1.3.1 Disabling/Enabling Snapshots

Each of the three snapshot types (timeline, installation, administration) can be enabled or disabled independently.

Disabling/Enabling Timeline Snapshots

Enabling:  snapper -c root set-config "TIMELINE_CREATE=yes"

Disabling:  snapper -c root set-config "TIMELINE_CREATE=no"

Timeline snapshots are enabled by default, except for the root partition.

Disabling/Enabling Installation Snapshots

Enabling:  Install the package snapper-zypp-plugin

Disabling:  Uninstall the package snapper-zypp-plugin

Installation snapshots are enabled by default.

Disabling/Enabling Administration Snapshots

Enabling:  Set USE_SNAPPER to yes in /etc/sysconfig/yast2.

Disabling:  Set USE_SNAPPER to no in /etc/sysconfig/yast2.

Administration snapshots are enabled by default.

7.1.3.2 Controlling Installation Snapshots

Taking snapshot pairs upon installing packages with YaST or Zypper is handled by the snapper-zypp-plugin. An XML configuration file, /etc/snapper/zypp-plugin.conf, defines when to make snapshots. By default the file looks like the following:

 1 <?xml version="1.0" encoding="utf-8"?>
 2 <snapper-zypp-plugin-conf>
 3  <solvables>
 4   <solvable match="w"1 important="true"2>kernel-*3</solvable>
 5   <solvable match="w" important="true">dracut</solvable>
 6   <solvable match="w" important="true">glibc</solvable>
 7   <solvable match="w" important="true">systemd*</solvable>
 8   <solvable match="w" important="true">udev</solvable>
 9   <solvable match="w">*</solvable>4
10  </solvables>
11 </snapper-zypp-plugin-conf>

1

The match attribute defines whether the pattern is a Unix shell-style wild card (w) or a Python regular expression (re).

2

If the given pattern matches and the corresponding package is marked as important (for example kernel packages), the snapshot will also be marked as important.

3

Pattern to match a package name. Based on the setting of the match attribute, special characters are either interpreted as shell wild cards or regular expressions. This pattern matches all package names starting with kernel-.

4

This line unconditionally matches all packages.

With this configuration, snapshot pairs are made whenever a package is installed (line 9). When packages matching one of the rules marked as important (kernel, dracut, glibc, systemd, or udev) are installed, the snapshot pair is also marked as important (lines 4 to 8). All rules are evaluated.

To disable a rule, either delete it or deactivate it using XML comments. To prevent the system from making snapshot pairs for every package installation for example, comment line 9:

 1 <?xml version="1.0" encoding="utf-8"?>
 2 <snapper-zypp-plugin-conf>
 3  <solvables>
 4   <solvable match="w" important="true">kernel-*</solvable>
 5   <solvable match="w" important="true">dracut</solvable>
 6   <solvable match="w" important="true">glibc</solvable>
 7   <solvable match="w" important="true">systemd*</solvable>
 8   <solvable match="w" important="true">udev</solvable>
 9   <!-- <solvable match="w">*</solvable> -->
10  </solvables>
11 </snapper-zypp-plugin-conf>
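
If you need finer-grained control, you can also add your own rules. The following line is a hypothetical example (not part of the default configuration) that uses a Python regular expression to additionally mark snapshots as important whenever a package whose name starts with grub2 or shim is installed. Place it inside the <solvables> element; all rules are evaluated:

<solvable match="re" important="true">(grub2|shim).*</solvable>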

7.1.3.3 Creating and Mounting New Subvolumes

Creating a new subvolume underneath the / hierarchy and permanently mounting it is supported. Such a subvolume will be excluded from snapshots. You need to make sure not to create it inside an existing snapshot, since you would not be able to delete snapshots anymore after a rollback.

SUSE Linux Enterprise Desktop is configured with the /@/ subvolume which serves as an independent root for permanent subvolumes such as /opt, /srv, /home and others. Any new subvolumes you create and permanently mount need to be created in this initial root file system.

To do so, run the following commands. In this example, a new subvolume /usr/important is created from /dev/sda2.

tux > sudo mount /dev/sda2 -o subvol=@ /mnt
tux > sudo btrfs subvolume create /mnt/usr/important
tux > sudo umount /mnt

The corresponding entry in /etc/fstab needs to look like the following:

/dev/sda2 /usr/important btrfs subvol=@/usr/important 0 0
Tip
Tip: Disable Copy-On-Write (cow)

A subvolume may contain files that constantly change, such as virtualized disk images, database files, or log files. If so, consider disabling the copy-on-write feature for this volume, to avoid duplication of disk blocks. Use the nodatacow mount option in /etc/fstab to do so:

/dev/sda2 /usr/important btrfs nodatacow,subvol=@/usr/important 0 0

Alternatively, to disable copy-on-write for single files or directories, use the command chattr +C PATH.
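
A minimal sketch of the chattr approach (the directory name is only an example): set the attribute on a new, empty directory so that files created in it afterward inherit the No_COW attribute. Note that setting +C on files that already contain data has no reliable effect, so apply it before the data is written. You can verify the attribute with lsattr -d.

tux > sudo mkdir /var/lib/mydb
tux > sudo chattr +C /var/lib/mydb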

7.1.3.4 Controlling Snapshot Archiving

Snapshots occupy disk space. To prevent disks from running out of space and thus causing system outages, old snapshots are automatically deleted. By default, up to ten important installation and administration snapshots and up to ten regular installation and administration snapshots are kept. If these snapshots occupy more than 50% of the root file system size, additional snapshots will be deleted. A minimum of four important and two regular snapshots are always kept.

Refer to Section 7.5.1, “Managing Existing Configurations” for instructions on how to change these values.

7.1.3.5 Using Snapper on Thin-Provisioned LVM Volumes

Apart from snapshots on Btrfs file systems, Snapper also supports taking snapshots on thin-provisioned LVM volumes (snapshots on regular LVM volumes are not supported) formatted with XFS, Ext4 or Ext3. For more information and setup instructions on LVM volumes, refer to Book “Deployment Guide”, Chapter 9 “Advanced Disk Setup”, Section 9.2 “LVM Configuration”.

To use Snapper on a thin-provisioned LVM volume you need to create a Snapper configuration for it. On LVM it is required to specify the file system with --fstype=lvm(FILESYSTEM). ext3, ext4 or xfs are valid values for FILESYSTEM. Example:

tux > sudo snapper -c lvm create-config --fstype="lvm(xfs)" /thin_lvm

You can adjust this configuration according to your needs as described in Section 7.5.1, “Managing Existing Configurations”.

7.2 Using Snapper to Undo Changes

Snapper on SUSE Linux Enterprise Desktop is preconfigured to serve as a tool that lets you undo changes made by zypper and YaST. For this purpose, Snapper is configured to create a pair of snapshots before and after each run of zypper and YaST. Snapper also lets you restore system files that have been accidentally deleted or modified. Timeline snapshots for the root partition need to be enabled for this purpose—see Section 7.1.3.1, “Disabling/Enabling Snapshots” for details.

By default, automatic snapshots as described above are configured for the root partition and its subvolumes. To make snapshots available for other partitions such as /home for example, you can create custom configurations.

Important
Important: Undoing Changes Compared to Rollback

When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:

Undoing Changes

When undoing changes as described in the following, two snapshots are compared and the changes between them are reverted. Using this method also allows you to explicitly select the files that should be restored.

Rollback

When doing rollbacks as described in Section 7.3, “System Rollback by Booting from Snapshots”, the system is reset to the state at which the snapshot was taken.

When undoing changes, it is also possible to compare a snapshot against the current system. Restoring all files from such a comparison has the same result as a rollback. However, the method described in Section 7.3, “System Rollback by Booting from Snapshots” should be preferred for rollbacks, since it is faster and allows you to review the system before doing the rollback.

Warning
Warning: Data Consistency

There is no mechanism to ensure data consistency when creating a snapshot. Whenever a file (for example, a database) is written at the same time as the snapshot is created, the snapshot will contain a corrupted or partly written version of that file. Restoring such a file will cause problems. Furthermore, some system files such as /etc/mtab must never be restored. Therefore it is strongly recommended to always closely review the list of changed files and their diffs. Only restore files that really belong to the action you want to revert.

7.2.1 Undoing YaST and Zypper Changes

If you set up the root partition with Btrfs during the installation, Snapper—preconfigured for undoing YaST or Zypper changes—will automatically be installed. Every time you start a YaST module or a Zypper transaction, two snapshots are created: a pre-snapshot capturing the state of the file system before the start of the module and a post-snapshot after the module has finished.

Using the YaST Snapper module or the snapper command line tool, you can undo the changes made by YaST/Zypper by restoring files from the pre-snapshot. By comparing two snapshots, the tools also allow you to see which files have been changed. You can also display the differences between two versions of a file (diff).

Procedure 7.1: Undoing Changes Using the YaST Snapper Module
  1. Start the Snapper module from the Miscellaneous section in YaST or by entering yast2 snapper.

  2. Make sure Current Configuration is set to root. This is always the case unless you have manually added own Snapper configurations.

  3. Choose a pair of pre- and post-snapshots from the list. Both YaST and Zypper snapshot pairs are of the type Pre & Post. YaST snapshots are labeled as zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).

  4. Click Show Changes to open the list of files that differ between the two snapshots.

  5. Review the list of files. To display a diff between the pre- and post-version of a file, select it from the list.

  6. To restore one or more files, select the relevant files or directories by activating the respective check box. Click Restore Selected and confirm the action by clicking Yes.

    To restore a single file, activate its diff view by clicking its name. Click Restore From First and confirm your choice with Yes.

Procedure 7.2: Undoing Changes Using the snapper Command
  1. Get a list of YaST and Zypper snapshots by running snapper list -t pre-post. YaST snapshots are labeled as zypp(y2base) in the Description column; Zypper snapshots are labeled zypp(zypper).

    tux > sudo snapper list -t pre-post
    Pre # | Post # | Pre Date                      | Post Date                     | Description
    ------+--------+-------------------------------+-------------------------------+--------------
    311   | 312    | Tue 06 May 2014 14:05:46 CEST | Tue 06 May 2014 14:05:52 CEST | zypp(y2base)
    340   | 341    | Wed 07 May 2014 16:15:10 CEST | Wed 07 May 2014 16:15:16 CEST | zypp(zypper)
    342   | 343    | Wed 07 May 2014 16:20:38 CEST | Wed 07 May 2014 16:20:42 CEST | zypp(y2base)
    344   | 345    | Wed 07 May 2014 16:21:23 CEST | Wed 07 May 2014 16:21:24 CEST | zypp(zypper)
    346   | 347    | Wed 07 May 2014 16:41:06 CEST | Wed 07 May 2014 16:41:10 CEST | zypp(y2base)
    348   | 349    | Wed 07 May 2014 16:44:50 CEST | Wed 07 May 2014 16:44:53 CEST | zypp(y2base)
    350   | 351    | Wed 07 May 2014 16:46:27 CEST | Wed 07 May 2014 16:46:38 CEST | zypp(y2base)
  2. Get a list of changed files for a snapshot pair with snapper status PRE..POST. Files with content changes are marked with c, files that have been added are marked with + and deleted files are marked with -.

    tux > sudo snapper status 350..351
    +..... /usr/share/doc/packages/mikachan-fonts
    +..... /usr/share/doc/packages/mikachan-fonts/COPYING
    +..... /usr/share/doc/packages/mikachan-fonts/dl.html
    c..... /usr/share/fonts/truetype/fonts.dir
    c..... /usr/share/fonts/truetype/fonts.scale
    +..... /usr/share/fonts/truetype/みかちゃん-p.ttf
    +..... /usr/share/fonts/truetype/みかちゃん-pb.ttf
    +..... /usr/share/fonts/truetype/みかちゃん-ps.ttf
    +..... /usr/share/fonts/truetype/みかちゃん.ttf
    c..... /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
    c..... /var/lib/rpm/Basenames
    c..... /var/lib/rpm/Dirnames
    c..... /var/lib/rpm/Group
    c..... /var/lib/rpm/Installtid
    c..... /var/lib/rpm/Name
    c..... /var/lib/rpm/Packages
    c..... /var/lib/rpm/Providename
    c..... /var/lib/rpm/Requirename
    c..... /var/lib/rpm/Sha1header
    c..... /var/lib/rpm/Sigmd5
  3. To display the diff for a certain file, run snapper diff PRE..POST FILENAME. If you do not specify FILENAME, a diff for all files will be displayed.

    tux > sudo snapper diff 350..351 /usr/share/fonts/truetype/fonts.scale
    --- /.snapshots/350/snapshot/usr/share/fonts/truetype/fonts.scale       2014-04-23 15:58:57.000000000 +0200
    +++ /.snapshots/351/snapshot/usr/share/fonts/truetype/fonts.scale       2014-05-07 16:46:31.000000000 +0200
    @@ -1,4 +1,4 @@
    -1174
    +1486
     ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso10646-1
     ds=y:ai=0.2:luximr.ttf -b&h-luxi mono-bold-i-normal--0-0-0-0-c-0-iso8859-1
    [...]
  4. To restore one or more files, run snapper -v undochange PRE..POST FILENAMES. If you do not specify FILENAMES, all changed files will be restored.

    tux > sudo snapper -v undochange 350..351
         create:0 modify:13 delete:7
         undoing change...
         deleting /usr/share/doc/packages/mikachan-fonts
         deleting /usr/share/doc/packages/mikachan-fonts/COPYING
         deleting /usr/share/doc/packages/mikachan-fonts/dl.html
         deleting /usr/share/fonts/truetype/みかちゃん-p.ttf
         deleting /usr/share/fonts/truetype/みかちゃん-pb.ttf
         deleting /usr/share/fonts/truetype/みかちゃん-ps.ttf
         deleting /usr/share/fonts/truetype/みかちゃん.ttf
         modifying /usr/share/fonts/truetype/fonts.dir
         modifying /usr/share/fonts/truetype/fonts.scale
         modifying /var/cache/fontconfig/7ef2298fde41cc6eeb7af42e48b7d293-x86_64.cache-4
         modifying /var/lib/rpm/Basenames
         modifying /var/lib/rpm/Dirnames
         modifying /var/lib/rpm/Group
         modifying /var/lib/rpm/Installtid
         modifying /var/lib/rpm/Name
         modifying /var/lib/rpm/Packages
         modifying /var/lib/rpm/Providename
         modifying /var/lib/rpm/Requirename
         modifying /var/lib/rpm/Sha1header
         modifying /var/lib/rpm/Sigmd5
         undoing change done
Warning
Warning: Reverting User Additions

Reverting user additions via undoing changes with Snapper is not recommended. Since certain directories are excluded from snapshots, files belonging to these users will remain in the file system. If a user with the same user ID as a deleted user is created, this user will inherit the files. Therefore it is strongly recommended to use the YaST User and Group Management tool to remove users.

7.2.2 Using Snapper to Restore Files

Apart from the installation and administration snapshots, Snapper creates timeline snapshots. You can use these backup snapshots to restore files that have accidentally been deleted or to restore a previous version of a file. By using Snapper's diff feature you can also find out which modifications have been made at a certain point in time.

Being able to restore files is especially interesting for data that may reside on subvolumes or partitions for which snapshots are not taken by default. To be able to restore files from home directories, for example, create a separate Snapper configuration for /home that takes automatic timeline snapshots. See Section 7.5, “Creating and Modifying Snapper Configurations” for instructions.
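
A minimal sketch of such a setup, assuming /home resides on its own Btrfs partition or subvolume (the configuration name home is just an example):

tux > sudo snapper -c home create-config /home
tux > sudo snapper -c home set-config "TIMELINE_CREATE=yes"

Afterward, hourly timeline snapshots of /home are created and cleaned up according to the TIMELINE_LIMIT_* values of the new configuration.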

Warning
Warning: Restoring Files Compared to Rollback

Snapshots taken from the root file system (defined by Snapper's root configuration), can be used to do a system rollback. The recommended way to do such a rollback is to boot from the snapshot and then perform the rollback. See Section 7.3, “System Rollback by Booting from Snapshots” for details.

Performing a rollback would also be possible by restoring all files from a root file system snapshot as described below. However, this is not recommended. You may restore single files, for example a configuration file from the /etc directory, but not the complete list of files from the snapshot.

This restriction only affects snapshots taken from the root file system!

Procedure 7.3: Restoring Files Using the YaST Snapper Module
  1. Start the Snapper module from the Miscellaneous section in YaST or by entering yast2 snapper.

  2. Choose the Current Configuration from which to choose a snapshot.

  3. Select a timeline snapshot from which to restore a file and choose Show Changes. Timeline snapshots are of the type Single with a description value of timeline.

  4. Select a file from the text box by clicking the file name. The difference between the snapshot version and the current system is shown. Activate the check box to select the file for restore. Do so for all files you want to restore.

  5. Click Restore Selected and confirm the action by clicking Yes.

Procedure 7.4: Restoring Files Using the snapper Command
  1. Get a list of timeline snapshots for a specific configuration by running the following command:

    tux > sudo snapper -c CONFIG list -t single | grep timeline

    CONFIG needs to be replaced by an existing Snapper configuration. Use snapper list-configs to display a list.

  2. Get a list of changed files for a given snapshot by running the following command:

    tux > sudo snapper -c CONFIG status SNAPSHOT_ID..0

    Replace SNAPSHOT_ID by the ID for the snapshot from which you want to restore the file(s).

  3. Optionally list the differences between the current file version and the one from the snapshot by running

    tux > sudo snapper -c CONFIG diff SNAPSHOT_ID..0 FILENAME

    If you do not specify FILENAME, the differences for all files are shown.

  4. To restore one or more files, run

    tux > sudo snapper -c CONFIG -v undochange SNAPSHOT_ID..0 FILENAME1 FILENAME2

    If you do not specify file names, all changed files will be restored.

7.3 System Rollback by Booting from Snapshots

The GRUB 2 version included with SUSE Linux Enterprise Desktop can boot from Btrfs snapshots. Together with Snapper's rollback feature, this allows you to recover a misconfigured system. Only snapshots created for the default Snapper configuration (root) are bootable.

Important
Important: Supported Configuration

As of SUSE Linux Enterprise Desktop 12 SP4 system rollbacks are only supported if the default subvolume configuration of the root partition has not been changed.

When booting a snapshot, the parts of the file system included in the snapshot are mounted read-only; all other file systems and parts that are excluded from snapshots are mounted read-write and can be modified.

Important
Important: Undoing Changes Compared to Rollback

When working with snapshots to restore data, it is important to know that there are two fundamentally different scenarios Snapper can handle:

Undoing Changes

When undoing changes as described in Section 7.2, “Using Snapper to Undo Changes”, two snapshots are compared and the changes between these two snapshots are reverted. Using this method also allows you to explicitly exclude selected files from being restored.

Rollback

When doing rollbacks as described in the following, the system is reset to the state at which the snapshot was taken.

To do a rollback from a bootable snapshot, the following requirements must be met. When doing a default installation, the system is set up accordingly.

Requirements for a Rollback from a Bootable Snapshot
  • The root file system needs to be Btrfs. Booting from LVM volume snapshots is not supported.

  • The root file system needs to be on a single device, a single partition and a single subvolume. Directories that are excluded from snapshots such as /srv (see Section 7.1.2, “Directories That Are Excluded from Snapshots” for a full list) may reside on separate partitions.

  • The system needs to be bootable via the installed boot loader.

To perform a rollback from a bootable snapshot, do as follows:

  1. Boot the system. In the boot menu choose Bootable snapshots and select the snapshot you want to boot. The snapshots are listed by date—the most recent snapshot is listed first.

  2. Log in to the system. Carefully check whether everything works as expected. Note that you cannot write to any directory that is part of the snapshot. Data you write to other directories will not get lost, regardless of what you do next.

  3. Depending on whether you want to perform the rollback or not, choose your next step:

    1. If the system is in a state where you do not want to do a rollback, reboot to boot into the current system state. You can then choose a different snapshot, or start the rescue system.

    2. To perform the rollback, run

      tux > sudo snapper rollback

      and reboot afterward. On the boot screen, choose the default boot entry to reboot into the reinstated system. A snapshot of the file system status before the rollback is created. The default subvolume for root will be replaced with a fresh read-write snapshot. For details, see Section 7.3.1, “Snapshots after Rollback”.

      It is useful to add a description for the snapshot with the -d option. For example:

      tux > sudo snapper rollback -d "New file system root since rollback on DATE TIME"
Tip
Tip: Rolling Back to a Specific Installation State

If snapshots are not disabled during installation, an initial bootable snapshot is created at the end of the initial system installation. You can go back to that state at any time by booting this snapshot. The snapshot can be identified by the description after installation.

A bootable snapshot is also created when starting a system upgrade to a service pack or a new major release (provided snapshots are not disabled).

7.3.1 Snapshots after Rollback

Before a rollback is performed, a snapshot of the running file system is created. The description references the ID of the snapshot that was restored in the rollback.

Snapshots created by rollbacks receive the value number for the Cleanup attribute. The rollback snapshots are therefore automatically deleted when the set number of snapshots is reached. Refer to Section 7.7, “Automatic Snapshot Clean-Up” for details. If the snapshot contains important data, extract the data from the snapshot before it is removed.
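
If a rollback snapshot contains data you want to keep permanently, one way is to remove its cleanup algorithm so it no longer joins the automatic clean-up queue. A minimal sketch, assuming the snapshot in question has the number 3:

tux > sudo snapper modify --cleanup-algorithm "" 3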

7.3.1.1 Example of Rollback Snapshot

For example, after a fresh installation the following snapshots are available on the system:

root # snapper --iso list
Type   | # |     | Cleanup | Description           | Userdata
-------+---+ ... +---------+-----------------------+--------------
single | 0 |     |         | current               |
single | 1 |     |         | first root filesystem |
single | 2 |     | number  | after installation    | important=yes

After running sudo snapper rollback, snapshot 3 is created, containing the state of the system before the rollback was executed. Snapshot 4 is the new default Btrfs subvolume and thus the system after a reboot.

root # snapper --iso list
Type   | # |     | Cleanup | Description           | Userdata
-------+---+ ... +---------+-----------------------+--------------
single | 0 |     |         | current               |
single | 1 |     | number  | first root filesystem |
single | 2 |     | number  | after installation    | important=yes
single | 3 |     | number  | rollback backup of #1 | important=yes
single | 4 |     |         |                       |

7.3.2 Accessing and Identifying Snapshot Boot Entries

To boot from a snapshot, reboot your machine and choose Start Bootloader from a read-only snapshot. A screen listing all bootable snapshots opens. The most recent snapshot is listed first, the oldest last. Use the ↑ and ↓ keys to navigate and press Enter to activate the selected snapshot. Activating a snapshot from the boot menu does not reboot the machine immediately, but rather opens the boot loader of the selected snapshot.

Boot Loader: Snapshots
Figure 7.1: Boot Loader: Snapshots

Each snapshot entry in the boot loader follows a naming scheme which makes it possible to identify it easily:

[*]1OS2 (KERNEL3,DATE4TTIME5,DESCRIPTION6)

1

If the snapshot was marked important, the entry is marked with a *.

2

Operating system label.

3

Kernel version.

4

Date in the format YYYY-MM-DD.

5

Time in the format HH:MM.

6

This field contains a description of the snapshot. In case of a manually created snapshot this is the string created with the option --description or a custom string (see Tip: Setting a Custom Description for Boot Loader Snapshot Entries). In case of an automatically created snapshot, it is the tool that was called, for example zypp(zypper) or yast_sw_single. Long descriptions may be truncated, depending on the size of the boot screen.

Tip
Tip: Setting a Custom Description for Boot Loader Snapshot Entries

It is possible to replace the default string in the description field of a snapshot with a custom string. This is for example useful if an automatically created description is not sufficient, or a user-provided description is too long. To set a custom string STRING for snapshot NUMBER, use the following command:

tux > sudo snapper modify --userdata "bootloader=STRING" NUMBER

The description should be no longer than 25 characters—everything that exceeds this size will not be readable on the boot screen.

7.3.3 Limitations

A complete system rollback, restoring the complete system to the identical state as it was in when a snapshot was taken, is not possible.

7.3.3.1 Directories Excluded from Snapshots

Root file system snapshots do not contain all directories. See Section 7.1.2, “Directories That Are Excluded from Snapshots” for details and reasons. As a general consequence, data from these directories is not restored, resulting in the following limitations.

Add-ons and Third Party Software may be Unusable after a Rollback

Applications and add-ons that install data in subvolumes excluded from the snapshot, such as /opt, may not work after a rollback if other parts of the application data are installed on subvolumes included in the snapshot. Re-install the application or the add-on to solve this problem.

File Access Problems

If an application had changed file permissions and/or ownership in between snapshot and current system, the application may not be able to access these files. Reset permissions and/or ownership for the affected files after the rollback.

Incompatible Data Formats

If a service or an application has established a new data format in between snapshot and current system, the application may not be able to read the affected data files after a rollback.

Subvolumes with a Mixture of Code and Data

Subvolumes like /srv may contain a mixture of code and data. A rollback may result in non-functional code. A downgrade of the PHP version, for example, may result in broken PHP scripts for the Web server.

User Data

If a rollback removes users from the system, data that is owned by these users in directories excluded from the snapshot is not removed. If a user with the same user ID is created, this user will inherit the files. Use a tool like find to locate and remove orphaned files.

7.3.3.2 No Rollback of Boot Loader Data

A rollback of the boot loader is not possible, since all stages of the boot loader must fit together. This cannot be guaranteed when doing rollbacks of /boot.

7.4 Enabling Snapper in User Home Directories

You may enable snapshots for users' home directories, which supports a number of use cases:

  • Individual users may manage their own snapshots and rollbacks.

  • System users, for example database, system, and network admins who want to track copies of configuration files, documentation, and so on.

  • Samba shares with home directories and Btrfs backend.

Each user's directory is a Btrfs subvolume of /home. It is possible to set this up manually (see Section 7.4.3, “Manually Enabling Snapshots in Home Directories”). However, a more convenient way is to use pam_snapper. The pam_snapper package installs the pam_snapper.so module and helper scripts, which automate user creation and Snapper configuration.

pam_snapper provides integration with the useradd command, pluggable authentication modules (PAM), and Snapper. By default it creates snapshots at user login and logout, and also creates time-based snapshots as some users remain logged in for extended periods of time. You may change the defaults using the normal Snapper commands and configuration files.

7.4.1 Installing pam_snapper and Creating Users

The easiest way is to start with a new /home directory formatted with Btrfs, and no existing users. Install pam_snapper:

root # zypper in pam_snapper

Add this line to /etc/pam.d/common-session:

session optional pam_snapper.so

Use the /usr/lib/pam_snapper/pam_snapper_useradd.sh script to create a new user and home directory. By default the script performs a dry run. Edit the script to change DRYRUN=1 to DRYRUN=0. Now you can create a new user:

root # /usr/lib/pam_snapper/pam_snapper_useradd.sh \
username group passwd=password
Create subvolume '/home/username'
useradd: warning: the home directory already exists.
Not copying any file from skel directory into it.

The files from /etc/skel will be copied into the user's home directory at their first login. Verify that the user's configuration was created by listing your Snapper configurations:

root # snapper list --all
Config: home_username, subvolume: /home/username
Type   | # | Pre # | Date | User | Cleanup | Description | Userdata
-------+---+-------+------+------+---------+-------------+---------
single | 0 |       |      | root |         | current     |

Over time, this output will become populated with a list of snapshots, which the user can manage with the standard Snapper commands.
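
Provided the configuration grants the user the required permissions (see Section 7.5.1.2, “Using Snapper as Regular User”), the user can, for example, manually create a snapshot before a larger cleanup. The description string is just an example:

username:~ > snapper -c home_username create --description "before cleanup"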

7.4.2 Removing Users

Remove users with the /usr/lib/pam_snapper/pam_snapper_userdel.sh script. By default it performs a dry run, so edit it to change DRYRUN=1 to DRYRUN=0. This removes the user, the user's home subvolume, and the Snapper configuration, and deletes all snapshots.

root # /usr/lib/pam_snapper/pam_snapper_userdel.sh username

7.4.3 Manually Enabling Snapshots in Home Directories

These are the steps for manually setting up users' home directories with Snapper. /home must be formatted with Btrfs, and the users not yet created.

root # btrfs subvol create /home/username
root # snapper -c home_username create-config /home/username
root # sed -i -e "s/ALLOW_USERS=\"\"/ALLOW_USERS=\"username\"/g" \
/etc/snapper/configs/home_username
root # yast users add username=username home=/home/username password=password
root # chown username.group /home/username
root # chmod 755 /home/username/.snapshots

7.5 Creating and Modifying Snapper Configurations

The way Snapper behaves is defined in a configuration file that is specific for each partition or Btrfs subvolume. These configuration files reside under /etc/snapper/configs/.

In case the root file system is big enough (approximately 12 GB), snapshots are automatically enabled for the root file system / upon installation. The corresponding default configuration is named root. It creates and manages the YaST and Zypper snapshots. See Section 7.5.1.1, “Configuration Data” for a list of the default values.

Note
Note: Minimum Root File System Size for Enabling Snapshots

As explained in Section 7.1, “Default Setup”, enabling snapshots requires additional free space in the root file system. The amount depends on the number of packages installed and the amount of changes made to the volume that is included in snapshots. The snapshot frequency and the number of snapshots that get archived also matter.

There is a minimum root file system size that is required in order to automatically enable snapshots during the installation. This size is approximately 12 GB. This value may change in the future, depending on architecture and the size of the base system. It depends on the values for the following tags in the file /control.xml from the installation media:

<root_base_size>
<btrfs_increase_percentage>

It is calculated with the following formula: ROOT_BASE_SIZE * (1 + BTRFS_INCREASE_PERCENTAGE/100)
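
As an illustration only (the actual values in /control.xml may differ): with a ROOT_BASE_SIZE of 3 GB and a BTRFS_INCREASE_PERCENTAGE of 300, the resulting minimum size would be 3 GB * (1 + 300/100) = 12 GB.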

Keep in mind that this value is a minimum size. Consider using more space for the root file system. As a rule of thumb, double the size you would use when not having enabled snapshots.

You may create your own configurations for other partitions formatted with Btrfs or existing subvolumes on a Btrfs partition. In the following example we will set up a Snapper configuration for backing up the Web server data residing on a separate, Btrfs-formatted partition mounted at /srv/www.

After a configuration has been created, you can either use snapper itself or the YaST Snapper module to restore files from these snapshots. In YaST you need to select your Current Configuration, while you need to specify your configuration for snapper with the global switch -c (for example, snapper -c myconfig list).

To create a new Snapper configuration, run snapper create-config:

tux > sudo snapper -c www-data1 create-config /srv/www2

1

Name of configuration file.

2

Mount point of the partition or Btrfs subvolume on which to take snapshots.

This command will create a new configuration file /etc/snapper/configs/www-data with reasonable default values (taken from /etc/snapper/config-templates/default). Refer to Section 7.5.1, “Managing Existing Configurations” for instructions on how to adjust these defaults.

Tip
Tip: Configuration Defaults

Default values for a new configuration are taken from /etc/snapper/config-templates/default. To use your own set of defaults, create a copy of this file in the same directory and adjust it to your needs. To use it, specify the -t option with the create-config command:

tux > sudo snapper -c www-data create-config -t MY_DEFAULTS /srv/www

7.5.1 Managing Existing Configurations

The snapper command offers several subcommands for managing existing configurations. You can list, show, delete and modify them:

List Configurations

Use the command snapper list-configs to get all existing configurations:

tux > sudo snapper list-configs
Config | Subvolume
-------+----------
root   | /
usr    | /usr
local  | /local
Show a Configuration

Use the subcommand snapper -c CONFIG get-config to display the specified configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs. See Section 7.5.1.1, “Configuration Data” for more information on the configuration options.

To display the default configuration run

tux > sudo snapper -c root get-config
Modify a Configuration

Use the subcommand snapper -c CONFIG set-config OPTION=VALUE to modify an option in the specified configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs. Possible values for OPTION and VALUE are listed in Section 7.5.1.1, “Configuration Data”.

Delete a Configuration

Use the subcommand snapper -c CONFIG delete-config to delete a configuration. CONFIG needs to be replaced by a configuration name shown by snapper list-configs.

7.5.1.1 Configuration Data

Each configuration contains a list of options that can be modified from the command line. The following list provides details for each option. To change a value, run snapper -c CONFIG set-config "KEY=VALUE".

ALLOW_GROUPS, ALLOW_USERS

Grants regular users permission to use snapshots. See Section 7.5.1.2, “Using Snapper as Regular User” for more information.

The default value is "".

BACKGROUND_COMPARISON

Defines whether pre and post snapshots should be compared in the background after creation.

The default value is "yes".

EMPTY_*

Defines the clean-up algorithm for snapshot pairs with identical pre and post snapshots. See Section 7.7.3, “Cleaning Up Snapshot Pairs That Do Not Differ” for details.

FSTYPE

File system type of the partition. Do not change.

The default value is "btrfs".

NUMBER_*

Defines the clean-up algorithm for installation and admin snapshots. See Section 7.7.1, “Cleaning Up Numbered Snapshots” for details.

QGROUP / SPACE_LIMIT

Adds quota support to the clean-up algorithms. See Section 7.7.5, “Adding Disk Quota Support” for details.

SUBVOLUME

Mount point of the partition or subvolume to snapshot. Do not change.

The default value is "/".

SYNC_ACL

If Snapper is used by regular users (see Section 7.5.1.2, “Using Snapper as Regular User”), the users must be able to access the .snapshots directories and to read files within them. If SYNC_ACL is set to yes, Snapper automatically makes them accessible using ACLs for users and groups from the ALLOW_USERS or ALLOW_GROUPS entries.

The default value is "no".

TIMELINE_CREATE

If set to yes, hourly snapshots are created. Valid values: yes, no.

The default value is "no".

TIMELINE_CLEANUP / TIMELINE_LIMIT_*

Defines the clean-up algorithm for timeline snapshots. See Section 7.7.2, “Cleaning Up Timeline Snapshots” for details.

7.5.1.2 Using Snapper as Regular User

By default Snapper can only be used by root. However, there are cases in which certain groups or users need to be able to create snapshots or undo changes by reverting to a snapshot:

  • Web site administrators who want to take snapshots of /srv/www

  • Users who want to take a snapshot of their home directory

For these purposes Snapper configurations that grant permissions to users and/or groups can be created. The corresponding .snapshots directory needs to be readable and accessible by the specified users. The easiest way to achieve this is to set the SYNC_ACL option to yes.

Procedure 7.5: Enabling Regular Users to Use Snapper

Note that all steps in this procedure need to be run by root.

  1. If not existing, create a Snapper configuration for the partition or subvolume on which the user should be able to use Snapper. Refer to Section 7.5, “Creating and Modifying Snapper Configurations” for instructions. Example:

    tux > sudo snapper --config web_data create-config /srv/www
  2. The configuration file is created under /etc/snapper/configs/CONFIG, where CONFIG is the value you specified with -c/--config in the previous step (for example /etc/snapper/configs/web_data). Adjust it according to your needs; see Section 7.5.1, “Managing Existing Configurations” for details.

  3. Set values for ALLOW_USERS and/or ALLOW_GROUPS to grant permissions to users and/or groups, respectively. Multiple entries need to be separated by Space. To grant permissions to the user www_admin for example, run:

    tux > sudo snapper -c web_data set-config "ALLOW_USERS=www_admin" SYNC_ACL="yes"
  4. The given Snapper configuration can now be used by the specified user(s) and/or group(s). You can test it with the list command, for example:

    www_admin:~ > snapper -c web_data list

7.6 Manually Creating and Managing Snapshots

Snapper is not restricted to creating and managing snapshots automatically by configuration; you can also create snapshot pairs (before and after) or single snapshots manually using either the command-line tool or the YaST module.

All Snapper operations are carried out for an existing configuration (see Section 7.5, “Creating and Modifying Snapper Configurations” for details). You can only take snapshots of partitions or volumes for which a configuration exists. By default the system configuration (root) is used. If you want to create or manage snapshots for your own configuration you need to explicitly choose it. Use the Current Configuration drop-down box in YaST or specify the -c global switch on the command line (snapper -c MYCONFIG COMMAND).

7.6.1 Snapshot Metadata

Each snapshot consists of the snapshot itself and some metadata. When creating a snapshot you also need to specify the metadata. Modifying a snapshot means changing its metadata—you cannot modify its content. Use snapper list to show existing snapshots and their metadata:

snapper --config home list

Lists snapshots for the configuration home. To list snapshots for the default configuration (root), use snapper -c root list or snapper list.

snapper list -a

Lists snapshots for all existing configurations.

snapper list -t pre-post

Lists all pre and post snapshot pairs for the default (root) configuration.

snapper list -t single

Lists all snapshots of the type single for the default (root) configuration.

The following metadata is available for each snapshot:

  • Type: Snapshot type, see Section 7.6.1.1, “Snapshot Types” for details. This data cannot be changed.

  • Number: Unique number of the snapshot. This data cannot be changed.

  • Pre Number: Specifies the number of the corresponding pre snapshot. For snapshots of type post only. This data cannot be changed.

  • Description: A description of the snapshot.

  • Userdata: An extended description where you can specify custom data in the form of a comma-separated key=value list: reason=testing, project=foo. This field is also used to mark a snapshot as important (important=yes) and to list the user that created the snapshot (user=tux).

  • Cleanup-Algorithm: Cleanup-algorithm for the snapshot, see Section 7.7, “Automatic Snapshot Clean-Up” for details.

7.6.1.1 Snapshot Types

Snapper knows three different types of snapshots: pre, post, and single. Physically they do not differ, but Snapper handles them differently.

pre

Snapshot of a file system before a modification. Each pre snapshot has got a corresponding post snapshot. Used for the automatic YaST/Zypper snapshots, for example.

post

Snapshot of a file system after a modification. Each post snapshot has got a corresponding pre snapshot. Used for the automatic YaST/Zypper snapshots, for example.

single

Stand-alone snapshot. Used for the automatic hourly snapshots, for example. This is the default type when creating snapshots.

7.6.1.2 Cleanup-algorithms

Snapper provides three algorithms to clean up old snapshots. The algorithms are executed in a daily cron job. It is possible to define the number of different types of snapshots to keep in the Snapper configuration (see Section 7.5.1, “Managing Existing Configurations” for details).

number

Deletes old snapshots when a certain snapshot count is reached.

timeline

Deletes old snapshots having passed a certain age, but keeps several hourly, daily, monthly, and yearly snapshots.

empty-pre-post

Deletes pre/post snapshot pairs with empty diffs.

7.6.2 Creating Snapshots

Creating a snapshot is done by running snapper create or by clicking Create in the YaST module Snapper. The following examples explain how to create snapshots from the command line. It should be easy to adapt them when using the YaST interface.

Tip
Tip: Snapshot Description

You should always specify a meaningful description to be able to identify the snapshot's purpose later. Even more information can be specified via the user data option.

snapper create --description "Snapshot for week 2 2014"

Creates a stand-alone snapshot (type single) for the default (root) configuration with a description. Because no cleanup-algorithm is specified, the snapshot will never be deleted automatically.

snapper --config home create --description "Cleanup in ~tux"

Creates a stand-alone snapshot (type single) for a custom configuration named home with a description. Because no cleanup-algorithm is specified, the snapshot will never be deleted automatically.

snapper --config home create --description "Daily data backup" --cleanup-algorithm timeline>

Creates a stand-alone snapshot (type single) for a custom configuration named home with a description. The snapshot will automatically be deleted when it meets the criteria specified for the timeline cleanup-algorithm in the configuration.

snapper create --type pre --print-number --description "Before the Apache config cleanup" --userdata "important=yes"

Creates a snapshot of the type pre and prints the snapshot number. First command needed to create a pair of snapshots used to save a before and after state. The snapshot is marked as important.

snapper create --type post --pre-number 30 --description "After the Apache config cleanup" --userdata "important=yes"

Creates a snapshot of the type post paired with the pre snapshot number 30. Second command needed to create a pair of snapshots used to save a before and after state. The snapshot is marked as important.

snapper create --command COMMAND --description "Before and after COMMAND"

Automatically creates a snapshot pair before and after running COMMAND. This option is only available when using snapper on the command line.
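
For example, to capture the state of the system before and after an online update in a single step (the command string is only an example):

tux > sudo snapper create --command "zypper -n up" --description "Before and after online update"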

7.6.3 Modifying Snapshot Metadata

Snapper allows you to modify the description, the cleanup algorithm, and the user data of a snapshot. All other metadata cannot be changed. The following examples explain how to modify snapshots from the command line. It should be easy to adapt them when using the YaST interface.

To modify a snapshot on the command line, you need to know its number. Use snapper list to display all snapshots and their numbers.

The YaST Snapper module already lists all snapshots. Choose one from the list and click Modify.

snapper modify --cleanup-algorithm "timeline" 10

Modifies the metadata of snapshot 10 for the default (root) configuration. The cleanup algorithm is set to timeline.

snapper --config home modify --description "daily backup" --cleanup-algorithm "" 120

Modifies the metadata of snapshot 120 for a custom configuration named home. A new description is set and the cleanup algorithm is unset.

7.6.4 Deleting Snapshots

To delete a snapshot with the YaST Snapper module, choose a snapshot from the list and click Delete.

To delete a snapshot with the command line tool, you need to know its number. Get it by running snapper list. To delete a snapshot, run snapper delete NUMBER.

Deleting the current default subvolume snapshot is not allowed.

When deleting snapshots with Snapper, the freed space will be claimed by a Btrfs process running in the background. Thus the visibility and the availability of free space are delayed. In case you need the space freed by deleting a snapshot to be available immediately, use the option --sync with the delete command.

Tip
Tip: Deleting Snapshot Pairs

When deleting a pre snapshot, you should always delete its corresponding post snapshot (and vice versa).

snapper delete 65

Deletes snapshot 65 for the default (root) configuration.

snapper -c home delete 89 90

Deletes snapshots 89 and 90 for a custom configuration named home.

snapper delete --sync 23

Deletes snapshot 23 for the default (root) configuration and makes the freed space available immediately.

Tip
Tip: Delete Unreferenced Snapshots

Sometimes the Btrfs snapshot is present but the XML file containing the metadata for Snapper is missing. In this case the snapshot is not visible for Snapper and needs to be deleted manually:

btrfs subvolume delete /.snapshots/SNAPSHOTNUMBER/snapshot
rm -rf /.snapshots/SNAPSHOTNUMBER
Tip
Tip: Old Snapshots Occupy More Disk Space

If you delete snapshots to free space on your hard disk, make sure to delete old snapshots first. The older a snapshot is, the more disk space it occupies.

Snapshots are also automatically deleted by a daily cron job. Refer to Section 7.6.1.2, “Cleanup-algorithms” for details.

7.7 Automatic Snapshot Clean-Up

Snapshots occupy disk space and over time the amount of disk space occupied by the snapshots may become large. To prevent disks from running out of space, Snapper offers algorithms to automatically delete old snapshots. These algorithms differentiate between timeline snapshots and numbered snapshots (administration plus installation snapshot pairs). You can specify the number of snapshots to keep for each type.

In addition to that, you can optionally specify a disk space quota, defining the maximum amount of disk space the snapshots may occupy. It is also possible to automatically delete pre and post snapshot pairs that do not differ.

A clean-up algorithm is always bound to a single Snapper configuration, so you need to configure algorithms for each configuration. To prevent certain snapshots from being automatically deleted, refer to the FAQ entry How to make a snapshot permanent? in Section 7.8, “Frequently Asked Questions”.

The default setup (root) is configured to do clean-up for numbered snapshots and empty pre and post snapshot pairs. Quota support is enabled—snapshots may not occupy more than 50% of the available disk space of the root partition. Timeline snapshots are disabled by default, therefore the timeline clean-up algorithm is also disabled.

7.7.1 Cleaning Up Numbered Snapshots

Cleaning up numbered snapshots—administration plus installation snapshot pairs—is controlled by the following parameters of a Snapper configuration.

NUMBER_CLEANUP

Enables or disables clean-up of installation and admin snapshot pairs. If enabled, snapshot pairs are deleted when the total snapshot count exceeds a number specified with NUMBER_LIMIT and/or NUMBER_LIMIT_IMPORTANT and an age specified with NUMBER_MIN_AGE. Valid values: yes (enable), no (disable).

The default value is "yes".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_CLEANUP=no"
NUMBER_LIMIT / NUMBER_LIMIT_IMPORTANT

Defines how many regular and/or important installation and administration snapshot pairs to keep. Only the youngest snapshots will be kept. Ignored if NUMBER_CLEANUP is set to "no".

The default value is "2-10" for NUMBER_LIMIT and "4-10" for NUMBER_LIMIT_IMPORTANT.

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_LIMIT=10"
Important
Important: Ranged Compared to Constant Values

In case quota support is enabled (see Section 7.7.5, “Adding Disk Quota Support”) the limit needs to be specified as a minimum-maximum range, for example 2-10. If quota support is disabled, a constant value, for example 10, needs to be provided, otherwise cleaning-up will fail with an error.

NUMBER_MIN_AGE

Defines the minimum age in seconds a snapshot must have before it can automatically be deleted. Snapshots younger than the value specified here will not be deleted, regardless of how many exist.

The default value is "1800".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "NUMBER_MIN_AGE=864000"
Note
Note: Limit and Age

NUMBER_LIMIT, NUMBER_LIMIT_IMPORTANT and NUMBER_MIN_AGE are always evaluated. Snapshots are only deleted when all conditions are met.

If you always want to keep the number of snapshots defined with NUMBER_LIMIT* regardless of their age, set NUMBER_MIN_AGE to 0.

The following example shows a configuration to keep the last 10 important and regular snapshots regardless of age:

NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=10
NUMBER_LIMIT=10
NUMBER_MIN_AGE=0

On the other hand, if you do not want to keep snapshots beyond a certain age, set NUMBER_LIMIT* to 0 and provide the age with NUMBER_MIN_AGE.

The following example shows a configuration to only keep snapshots younger than ten days:

NUMBER_CLEANUP=yes
NUMBER_LIMIT_IMPORTANT=0
NUMBER_LIMIT=0
NUMBER_MIN_AGE=864000

7.7.2 Cleaning Up Timeline Snapshots

Cleaning up timeline snapshots is controlled by the following parameters of a Snapper configuration.

TIMELINE_CLEANUP

Enables or disables clean-up of timeline snapshots. If enabled, snapshots are deleted when the total snapshot count exceeds a number specified with TIMELINE_LIMIT_* and an age specified with TIMELINE_MIN_AGE. Valid values: yes, no.

The default value is "yes".

Example command to change or set:

tux > sudo snapper -c CONFIG set-config "TIMELINE_CLEANUP=yes"
TIMELINE_LIMIT_DAILY, TIMELINE_LIMIT_HOURLY, TIMELINE_LIMIT_MONTHLY, TIMELINE_LIMIT_WEEKLY, TIMELINE_LIMIT_YEARLY

Number of snapshots to keep for hour, day, month, week, and year.

The default value for each entry is "10", except for TIMELINE_LIMIT_WEEKLY, which is set to "0" by default.

TIMELINE_MIN_AGE

Defines the minimum age in seconds a snapshot must have before it can automatically be deleted.

The default value is "1800".

Example 7.1: Example timeline configuration
TIMELINE_CLEANUP="yes"
TIMELINE_CREATE="yes"
TIMELINE_LIMIT_DAILY="7"
TIMELINE_LIMIT_HOURLY="24"
TIMELINE_LIMIT_MONTHLY="12"
TIMELINE_LIMIT_WEEKLY="4"
TIMELINE_LIMIT_YEARLY="2"
TIMELINE_MIN_AGE="1800"

This example configuration enables hourly snapshots which are automatically cleaned up. TIMELINE_MIN_AGE and TIMELINE_LIMIT_* are always both evaluated. In this example, the minimum age of a snapshot before it can be deleted is set to 30 minutes (1800 seconds). Since we create hourly snapshots, this ensures that only the latest snapshots are kept. If TIMELINE_LIMIT_DAILY is set to a non-zero value, the first snapshot of the day is kept, too.

Snapshots to be Kept
  • Hourly: The last 24 snapshots that have been made.

  • Daily: The first snapshot made each day is kept for the last seven days.

  • Monthly: The first snapshot made on the last day of the month is kept for the last twelve months.

  • Weekly: The first snapshot made on the last day of the week is kept for the last four weeks.

  • Yearly: The first snapshot made on the last day of the year is kept for the last two years.

7.7.3 Cleaning Up Snapshot Pairs That Do Not Differ

As explained in Section 7.1.1, “Types of Snapshots”, whenever you run a YaST module or execute Zypper, a pre snapshot is created on start-up and a post snapshot is created when exiting. In case you have not made any changes, there will be no difference between the pre and post snapshots. Such empty snapshot pairs can be deleted automatically by setting the following parameters in a Snapper configuration:

EMPTY_PRE_POST_CLEANUP

If set to yes, pre and post snapshot pairs that do not differ will be deleted.

The default value is "yes".

EMPTY_PRE_POST_MIN_AGE

Defines the minimum age in seconds a pre and post snapshot pair that does not differ must have before it can automatically be deleted.

The default value is "1800".

7.7.4 Cleaning Up Manually Created Snapshots

Snapper does not offer custom clean-up algorithms for manually created snapshots. However, you can assign the number or timeline clean-up algorithm to a manually created snapshot. If you do so, the snapshot will join the clean-up queue for the algorithm you specified. You can specify a clean-up algorithm when creating a snapshot, or by modifying an existing snapshot:

snapper create --description "Test" --cleanup-algorithm number

Creates a stand-alone snapshot (type single) for the default (root) configuration and assigns the number clean-up algorithm.

snapper modify --cleanup-algorithm "timeline" 25

Modifies the snapshot with the number 25 and assigns the clean-up algorithm timeline.
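
To verify the assignment, list the snapshots again and check the Cleanup column; for the hypothetical snapshot 25 from the example above it should now read timeline:

tux > sudo snapper list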

7.7.5 Adding Disk Quota Support

In addition to the number and/or timeline clean-up algorithms described above, Snapper supports quotas. You can define what percentage of the available space snapshots are allowed to occupy. This percentage value always applies to the Btrfs subvolume defined in the respective Snapper configuration.

If Snapper was enabled during the installation, quota support is enabled automatically. If you enable Snapper manually at a later point in time, you can enable quota support by running snapper setup-quota. This requires a valid configuration (see Section 7.5, “Creating and Modifying Snapper Configurations” for more information).

Quota support is controlled by the following parameters of a Snapper configuration.

QGROUP

The Btrfs quota group used by Snapper. This value is set by snapper setup-quota and should not be changed manually unless you are familiar with man 8 btrfs-qgroup. If it is not set, run snapper setup-quota.

SPACE_LIMIT

Limit of space snapshots are allowed to use in fractions of 1 (100%). Valid values range from 0 to 1 (0.1 = 10%, 0.2 = 20%, ...).
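
Example command to change or set (a sketch limiting snapshots to 50% of the available space):

tux > sudo snapper -c CONFIG set-config "SPACE_LIMIT=0.5"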

The following limitations and guidelines apply:

  • Quotas are only activated in addition to an existing number and/or timeline clean-up algorithm. If no clean-up algorithm is active, quota restrictions are not applied.

  • With quota support enabled, Snapper will perform two clean-up runs if required. The first run applies the rules specified for number and timeline snapshots. Only if the quota is still exceeded after this run will the quota-specific rules be applied in a second run.

  • Even with quota support enabled, Snapper always keeps the number of snapshots specified with the NUMBER_LIMIT* and TIMELINE_LIMIT* values, even if this causes the quota to be exceeded. It is therefore recommended to specify ranged values (MIN-MAX) for NUMBER_LIMIT* and TIMELINE_LIMIT* to ensure the quota can be applied.

    If, for example, NUMBER_LIMIT=5-20 is set, Snapper will perform a first clean-up run and reduce the number of regular numbered snapshots to 20. In case these 20 snapshots exceed the quota, Snapper will delete the oldest ones in a second run until the quota is met. A minimum of five snapshots will always be kept, regardless of the amount of space they occupy.

7.8 Frequently Asked Questions

Why does Snapper Never Show Changes in /var/log, /tmp and Other Directories?

Some directories are deliberately excluded from snapshots. See Section 7.1.2, “Directories That Are Excluded from Snapshots” for a list and the reasons. To exclude a path from snapshots, a subvolume is created for that path.

How much disk space is used by snapshots? How to free disk space?

Displaying the amount of disk space a snapshot allocates is currently not supported by the Btrfs tools. However, if you have quota enabled, it is possible to determine how much space would be freed if all snapshots were deleted:

  1. Get the quota group ID (1/0 in the following example):

    tux > sudo snapper -c root get-config | grep QGROUP
    QGROUP                 | 1/0
  2. Rescan the subvolume quotas:

    tux > sudo btrfs quota rescan -w /
  3. Show the data of the quota group (1/0 in the following example):

    tux > sudo btrfs qgroup show / | grep "1/0"
    1/0           4.80GiB    108.82MiB

    The third column shows the amount of space that would be freed when deleting all snapshots (108.82MiB).

To free space on a Btrfs partition containing snapshots you need to delete unneeded snapshots rather than files. Older snapshots occupy more space than recent ones. See Section 7.1.3.4, “Controlling Snapshot Archiving” for details.

Doing an upgrade from one service pack to another results in snapshots occupying a lot of disk space on the system subvolumes, because a lot of data gets changed (package updates). Manually deleting these snapshots after they are no longer needed is recommended. See Section 7.6.4, “Deleting Snapshots” for details.

Can I Boot a Snapshot from the Boot Loader?

Yes—refer to Section 7.3, “System Rollback by Booting from Snapshots” for details.

How to make a snapshot permanent?

Currently Snapper does not offer means to prevent a snapshot from being deleted manually. However, you can prevent snapshots from being automatically deleted by clean-up algorithms. Manually created snapshots (see Section 7.6.2, “Creating Snapshots”) have no clean-up algorithm assigned unless you specify one with --cleanup-algorithm. Automatically created snapshots always either have the number or timeline algorithm assigned. To remove such an assignment from one or more snapshots, proceed as follows:

  1. List all available snapshots:

    tux > sudo snapper list -a
  2. Memorize the number of the snapshot(s) you want to prevent from being deleted.

  3. Run the following command and replace the number placeholders with the number(s) you memorized:

    tux > sudo snapper modify --cleanup-algorithm "" #1 #2 #n
  4. Check the result by running snapper list -a again. The entry in the column Cleanup should now be empty for the snapshots you modified.

Where can I get more information on Snapper?

See the Snapper home page at http://snapper.io/.

8 Remote Access with VNC

Abstract

Virtual Network Computing (VNC) enables you to control a remote computer via a graphical desktop (as opposed to remote shell access). VNC is platform-independent and lets you access the remote machine from any operating system.

SUSE Linux Enterprise Desktop supports two different kinds of VNC sessions: One-time sessions that live as long as the VNC connection from the client is kept up, and persistent sessions that live until they are explicitly terminated.

Note
Note: Session Types

A machine can offer both kinds of sessions simultaneously on different ports, but an open session cannot be converted from one type to the other.

8.1 The vncviewer Client

To connect to a VNC service provided by a server, a client is needed. The default in SUSE Linux Enterprise Desktop is vncviewer, provided by the tigervnc package.

8.1.1 Connecting Using the vncviewer CLI

To start your VNC viewer and initiate a session with the server, use the command:

tux > vncviewer jupiter.example.com:1

Instead of the VNC display number you can also specify the port number with two colons:

tux > vncviewer jupiter.example.com::5901
Note
Note: Display and Port Number

The actual display or port number you specify in the VNC client must be the same as the display or port number picked by the vncserver command on the target machine. See Section 8.4, “Persistent VNC Sessions” for further info.

8.1.2 Connecting Using the vncviewer GUI

If you run vncviewer without specifying --listen or a host to connect to, it shows a window asking for connection details. Enter the host into the VNC server field as described in Section 8.1.1, “Connecting Using the vncviewer CLI” and click Connect.

Figure 8.1: vncviewer

8.1.3 Notification of Unencrypted Connections

The VNC protocol supports different kinds of encrypted connections, not to be confused with password authentication. If a connection does not use TLS, the text (Connection not encrypted!) can be seen in the window title of the VNC viewer.

8.2 Remmina: the Remote Desktop Client

Remmina is a modern and feature-rich remote desktop client. It supports several access methods, for example VNC, SSH, RDP, or Spice.

8.2.1 Installation

To use Remmina, verify whether the remmina package is installed on your system, and install it if not. Remember to install the VNC plug-in for Remmina as well:

root # zypper in remmina remmina-plugin-vnc

8.2.2 Main Window

Run Remmina by entering the remmina command.

Figure 8.2: Remmina's Main Window

The main application window shows the list of stored remote sessions. Here you can add and save a new remote session, quick-start a new session without saving it, start a previously saved session, or set Remmina's global preferences.

8.2.3 Adding Remote Sessions

To add and save a new remote session, click Add new session in the top left of the main window. The Remote Desktop Preference window opens.

Figure 8.3: Remote Desktop Preference

Complete the fields that specify your newly added remote session profile. The most important are:

Name

Name of the profile. It will be listed in the main window.

Protocol

The protocol to use when connecting to the remote session, for example VNC.

Server

The IP or DNS address and display number of the remote server.

User name, Password

Credentials to use for remote authentication. Leave empty for no authentication.

Color depth, Quality

Select the best options according to your connection speed and quality.

Select the Advanced tab to enter more specific settings.

Tip
Tip: Disable Encryption

If the communication between the client and the remote server is not encrypted, activate Disable encryption; otherwise, the connection fails.

Select the SSH tab for advanced SSH tunneling and authentication options.

Confirm with Save. Your new profile will be listed in the main window.

8.2.4 Starting Remote Sessions

You can either start a previously saved session, or quick-start a remote session without saving the connection details.

8.2.4.1 Quick-starting Remote Sessions

To start a remote session quickly without adding and saving connection details, use the drop-down box and text field at the top of the main window.

Figure 8.4: Quick-starting

Select the communication protocol from the drop-down box, for example 'VNC', then enter the VNC server's DNS or IP address followed by a colon and a display number, and confirm with Enter.

8.2.4.2 Opening Saved Remote Sessions

To open a specific remote session, double-click it from the list of sessions.

8.2.4.3 Remote Sessions Window

Remote sessions are opened in tabs of a separate window. Each tab hosts one session. The toolbar on the left of the window helps you manage the windows/sessions; for example toggle fullscreen mode, resize the window to match the display size of the session, send specific keystrokes to the session, take screenshots of the session, or set the image quality.

Figure 8.5: Remmina Viewing SLES 15 Remote Session

8.2.5 Editing, Copying, and Deleting Saved Sessions

To edit a saved remote session, right-click its name in Remmina's main window and select Edit. Refer to Section 8.2.3, “Adding Remote Sessions” for the description of the relevant fields.

To copy a saved remote session, right-click its name in Remmina's main window and select Copy. In the Remote Desktop Preference window, change the name of the profile, optionally adjust relevant options, and confirm with Save.

To delete a saved remote session, right-click its name in Remmina's main window and select Delete. Confirm with Yes in the next dialog.

8.2.6 Running Remote Sessions from the Command Line

If you need to open a remote session from the command line or from a batch file without first opening the main application window, use the following syntax:

tux > remmina -c profile_name.remmina

Remmina's profile files are stored in the .local/share/remmina/ directory in your home directory. To determine which profile file belongs to the session you want to open, run Remmina, click the session name in the main window, and read the path to the profile file in the window's status line at the bottom.

Figure 8.6: Reading Path to the Profile File

While Remmina is not running, you can rename the profile file to a more reasonable file name, such as sle15.remmina. You can even copy the profile file to a custom directory and run it from there using the remmina -c command.
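
A minimal sketch of that workflow (the original numeric profile file name is hypothetical):

tux > mv ~/.local/share/remmina/1540296125000.remmina ~/.local/share/remmina/sle15.remmina
tux > remmina -c ~/.local/share/remmina/sle15.remmina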

8.3 One-time VNC Sessions

A one-time session is initiated by the remote client. It starts a graphical login screen on the server. This way you can choose the user which starts the session and, if supported by the login manager, the desktop environment. When you terminate the client connection to such a VNC session, all applications started within that session will be terminated, too. One-time VNC sessions cannot be shared, but it is possible to have multiple sessions on a single host at the same time.

Procedure 8.1: Enabling One-time VNC Sessions
  1. Start YaST › Network Services › Remote Administration (VNC).

  2. Check Allow Remote Administration Without Session Management.

  3. Activate Enable access using a web browser if you plan to access the VNC session in a Web browser window.

  4. If necessary, also check Open Port in Firewall (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via Firewall Details.

  5. Confirm your settings with Next.

  6. In case not all needed packages are available yet, you need to approve the installation of missing packages.

    Tip
    Tip: Restart the Display Manager

    YaST makes changes to the display manager settings. You need to log out of your current graphical session and restart the display manager for the changes to take effect.

Figure 8.7: Remote Administration

8.3.1 Available Configurations

The default configuration on SUSE Linux Enterprise Desktop serves sessions with a resolution of 1024x768 pixels at a color depth of 16-bit. The sessions are available on ports 5901 for regular VNC viewers (equivalent to VNC display 1) and on port 5801 for Web browsers.

Other configurations can be made available on different ports. Ask your system administrator for details if you need to modify the configuration.

VNC display numbers and X display numbers are independent in one-time sessions. A VNC display number is manually assigned to every configuration that the server supports (:1 in the example above). Whenever a VNC session is initiated with one of the configurations, it automatically gets a free X display number.

By default, both the VNC client and server try to communicate securely via a self-signed SSL certificate, which is generated after installation. You can either use the default one, or replace it with your own. When using the self-signed certificate, you need to confirm its signature before the first connection.

8.3.2 Initiating a One-time VNC Session

To connect to a one-time VNC session, a VNC viewer must be installed. See also Section 8.1, “The vncviewer Client”.

8.3.3 Configuring One-time VNC Sessions

You can skip this section if you do not need or want to modify the default configuration.

One-time VNC sessions are started via the systemd socket xvnc.socket. By default it offers six configuration blocks: three for VNC viewers (vnc1 to vnc3), and three serving a Java applet (vnchttpd1 to vnchttpd3). By default only vnc1 and vnchttpd1 are active.

To activate the VNC server socket at boot time, run the following command:

tux > sudo systemctl enable xvnc.socket

To start the socket immediately, run:

tux > sudo systemctl start xvnc.socket

The Xvnc server can be configured via the server_args option. For a list of options, see Xvnc --help.

When adding custom configurations, make sure they are not using ports that are already in use by other configurations, other services, or existing persistent VNC sessions on the same host.
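
To check which ports are already in use on the host, you can, for example, list the listening TCP sockets with ss (the configurations described here use ports in the 58xx and 59xx ranges):

tux > ss -tln | grep ':5[89]'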

Activate configuration changes by entering the following command:

tux > sudo systemctl reload xvnc.socket
Important
Important: Firewall and VNC Ports

When activating Remote Administration as described in Procedure 8.1, “Enabling One-time VNC Sessions”, the ports 5801 and 5901 are opened in the firewall. If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the respective ports when activating additional ports for VNC sessions. See Book “Security Guide”, Chapter 15 “Masquerading and Firewalls” for instructions.

8.4 Persistent VNC Sessions

A persistent session can be accessed from multiple clients simultaneously. This is ideal for demonstration purposes where one client has full access and all other clients have view-only access. Another use case is training sessions, where the trainer might need access to the trainee's desktop.

Tip
Tip: Connecting to a Persistent VNC Session

To connect to a persistent VNC session, a VNC viewer must be installed. Refer to Section 8.1, “The vncviewer Client” for more details.

There are two types of persistent VNC sessions:

8.4.1 VNC Session Initiated Using vncserver

This type of persistent VNC session is initiated on the server. The session and all applications started in this session run regardless of client connections until the session is terminated. Access to persistent sessions is protected by two possible types of passwords:

  • a regular password that grants full access or

  • an optional view-only password that grants non-interactive (view-only) access.

A session can have multiple client connections of both kinds at once.

Procedure 8.2: Starting a Persistent VNC Session using vncserver
  1. Open a shell and make sure you are logged in as the user that should own the VNC session.

  2. If the network interface serving the VNC sessions is protected by a firewall, you need to manually open the port used by your session in the firewall. If starting multiple sessions you may alternatively open a range of ports. See Book “Security Guide”, Chapter 15 “Masquerading and Firewalls” for details on how to configure the firewall.

    vncserver uses the ports 5901 for display :1, 5902 for display :2, and so on. For persistent sessions, the VNC display and the X display usually have the same number.

  3. To start a session with a resolution of 1024x768 pixels and a color depth of 16-bit, enter the following command:

    tux > vncserver -alwaysshared -geometry 1024x768 -depth 16

    The vncserver command picks an unused display number when none is given and prints its choice. See man 1 vncserver for more options.

When running vncserver for the first time, it asks for a password for full access to the session. If needed, you can also provide a password for view-only access to the session.

The passwords you provide here are also used for future sessions started by the same user. They can be changed with the vncpasswd command.

Important
Important: Security Considerations

Make sure to use strong passwords of significant length (eight or more characters). Do not share these passwords.

To terminate the session, shut down the desktop environment running inside the VNC session from within the VNC viewer, as you would shut down a regular local X session.

If you prefer to manually terminate a session, open a shell on the VNC server and make sure you are logged in as the user that owns the VNC session you want to terminate. Run the following command to terminate the session that runs on display :1:

tux > vncserver -kill :1

8.4.1.1 Configuring Persistent VNC Sessions

Persistent VNC sessions can be configured by editing $HOME/.vnc/xstartup. By default this shell script starts the same GUI/window manager it was started from. In SUSE Linux Enterprise Desktop this will either be GNOME or IceWM. If you want to start your session with a window manager of your choice, set the variable WINDOWMANAGER:

WINDOWMANAGER=gnome vncserver -geometry 1024x768
WINDOWMANAGER=icewm vncserver -geometry 1024x768
Note
Note: One Configuration for Each User

Persistent VNC sessions are configured in a single per-user configuration. Multiple sessions started by the same user will all use the same start-up and password files.

8.4.2 VNC Session Initiated Using vncmanager

Procedure 8.3: Enabling Persistent VNC Sessions
  1. Start YaST › Network Services › Remote Administration (VNC).

  2. Activate Allow Remote Administration With Session Management.

  3. Activate Enable access using a web browser if you plan to access the VNC session in a Web browser window.

  4. If necessary, also check Open Port in Firewall (for example, when your network interface is configured to be in the External Zone). If you have more than one network interface, restrict opening the firewall ports to a specific interface via Firewall Details.

  5. Confirm your settings with Next.

  6. In case not all needed packages are available yet, you need to approve the installation of missing packages.

    Tip
    Tip: Restart the Display Manager

    YaST makes changes to the display manager settings. You need to log out of your current graphical session and restart the display manager for the changes to take effect.

8.4.2.1 Configuring Persistent VNC Sessions

After you enable the VNC session management as described in Procedure 8.3, “Enabling Persistent VNC Sessions”, you can normally connect to the remote session with your favorite VNC viewer, such as vncviewer or Remmina. You will be presented with the login screen. After you log in, the 'VNC' icon will appear in the system tray of your desktop environment. Click the icon to open the VNC Session window. If it does not appear or if your desktop environment does not support icons in the system tray, run vncmanager-controller manually.

Figure 8.8: VNC Session Settings

There are several settings that influence the VNC session's behavior:

Non-persistent, private

This is equivalent to a one-time session. It is not visible to others and will be terminated after you disconnect from it. Refer to Section 8.3, “One-time VNC Sessions” for more information.

Persistent, visible

The session is visible to other users and keeps running even after you disconnect from it.

Session name

Here you can specify the name of the persistent session so that it is easily identified when reconnecting.

No password required

The session will be freely accessible without having to log in under user credentials.

Require user login

You need to log in with a valid user name and password to access the session. List the valid user names in the Allowed users text box.

Allow one client at a time

Prevents multiple users from joining the session at the same time.

Allow multiple clients at a time

Allows multiple users to join the persistent session at the same time. Useful for remote presentations or training sessions.

Confirm with OK.

8.4.2.2 Joining Persistent VNC Sessions

After you set up a persistent VNC session as described in Section 8.4.2.1, “Configuring Persistent VNC Sessions”, you can join it with your VNC viewer. After your VNC client connects to the server, you will be prompted to choose whether you want to create a new session or join the existing one:

Figure 8.9: Joining a Persistent VNC Session

After you click the name of the existing session, you may be asked for login credentials, depending on the persistent session settings.

8.5 Encrypted VNC Communication

If the VNC server is set up properly, all communication between the VNC server and the client is encrypted. The authentication happens at the beginning of the session; the actual data transfer only begins afterward.

Whether for a one-time or a persistent VNC session, security options are configured via the -securitytypes parameter of the /usr/bin/Xvnc command located on the server_args line. The -securitytypes parameter selects both authentication method and encryption. It has the following options:

Authentications
None, TLSNone, X509None

No authentication.

VncAuth, TLSVnc, X509Vnc

Authentication using custom password.

Plain, TLSPlain, X509Plain

Authentication using PAM to verify user's password.

Encryptions
None, VncAuth, Plain

No encryption.

TLSNone, TLSVnc, TLSPlain

Anonymous TLS encryption. Everything is encrypted, but there is no verification of the remote host. So you are protected against passive attackers, but not against man-in-the-middle attackers.

X509None, X509Vnc, X509Plain

TLS encryption with certificate. If you use a self-signed certificate, you will be asked to verify it on the first connection. On subsequent connections you will be warned only if the certificate changed. So you are protected against everything except man-in-the-middle on the first connection (similar to typical SSH usage). If you use a certificate signed by a certificate authority matching the machine name, then you get full security (similar to typical HTTPS usage).

Tip
Tip: Path to Certificate and Key

With X509 based encryption, you need to specify the path to the X509 certificate and the key with -X509Cert and -X509Key options.

If you select multiple security types separated by commas, the first one supported and allowed by both client and server will be used. That way, you can configure opportunistic encryption on the server. This is useful if you need to support VNC clients that do not support encryption.
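
For example, the server_args line of the Xvnc invocation could list the X509-based types first and fall back to unencrypted password authentication (a sketch; the certificate and key paths are hypothetical):

server_args = -securitytypes X509Vnc,TLSVnc,VncAuth -X509Cert /etc/vnc/server.crt -X509Key /etc/vnc/server.key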

On the client, you can also specify the allowed security types to prevent a downgrade attack if you are connecting to a server which you know has encryption enabled (although our vncviewer will warn you with the "Connection not encrypted!" message in that case).

9 File Copying with RSync

Abstract

Today, a typical user has several computers: home and workplace machines, a laptop, a smartphone or a tablet. This makes the task of keeping files and documents in sync across multiple devices all the more important.

Warning
Warning: Risk of Data Loss

Before you start using a synchronization tool, you should familiarize yourself with its features and functionality. Make sure to back up your important files.

9.1 Conceptual Overview

For synchronizing a large amount of data over a slow network connection, Rsync offers a reliable method of transmitting only changes within files. This applies not only to text files but also to binary files. To detect the differences between files, Rsync subdivides the files into blocks and computes checksums over them.

Detecting changes requires some computing power, so make sure that the machines on both ends have enough resources, including RAM.

Rsync can be particularly useful when large amounts of data containing only minor changes need to be transmitted regularly. This is often the case when working with backups. Rsync can also be useful for mirroring staging servers that store complete directory trees of Web servers to a Web server in a DMZ.

Despite its name, Rsync is not a synchronization tool. Rsync copies data in one direction at a time only; it cannot merge changes back from the destination to the source. If you need a bidirectional tool which is able to synchronize both source and destination, use Csync.

9.2 Basic Syntax

Rsync is a command-line tool that has the following basic syntax:

rsync [OPTION] SOURCE [SOURCE]... DEST

You can use Rsync on any local or remote machine, provided you have access and write permissions. It is possible to have multiple SOURCE entries. The SOURCE and DEST placeholders can be paths, URLs, or both.

Below are the most common Rsync options:

-v

Outputs more verbose text

-a

Archive mode; copies files recursively and preserves timestamps, user/group ownership, file permissions, and symbolic links

-z

Compresses the transmitted data

Note
Note: Trailing Slashes Count

When working with Rsync, you should pay particular attention to trailing slashes. A trailing slash after the directory denotes the content of the directory. No trailing slash denotes the directory itself.
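
The following sketch illustrates the difference with a hypothetical directory tux:

tux > rsync -avz tux /var/backup/   # copies the directory itself to /var/backup/tux/
tux > rsync -avz tux/ /var/backup/  # copies only the content of tux/ into /var/backup/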

9.3 Copying Files and Directories Locally

The following description assumes that the current user has write permissions to the directory /var/backup. To copy a single file from one directory on your machine to another path, use the following command:

tux > rsync -avz backup.tar.xz /var/backup/

The file backup.tar.xz is copied to /var/backup/; the absolute path will be /var/backup/backup.tar.xz.

Do not forget the trailing slash after the /var/backup/ directory! If you omit the slash, the file backup.tar.xz is copied to a file named /var/backup, not into the directory /var/backup/.

Copying a directory is similar to copying a single file. The following example copies the directory tux/ and its content into the directory /var/backup/:

tux > rsync -avz tux /var/backup/

Find the copy in the absolute path /var/backup/tux/.

9.4 Copying Files and Directories Remotely

The Rsync tool is required on both machines. Copying files from or to a remote directory requires the IP address or domain name of the remote machine. A user name is optional if your user names on the local and remote machine are the same.

To copy the file file.tar.xz from your local host to the remote host 192.168.1.1 with identical user names on both machines, use the following command:

tux > rsync -avz file.tar.xz tux@192.168.1.1:

Depending on what you prefer, these commands are also possible and equivalent:

tux > rsync -avz file.tar.xz 192.168.1.1:~
tux > rsync -avz file.tar.xz 192.168.1.1:/home/tux

In all cases with standard configuration, you will be prompted to enter the passphrase of the remote user. This command copies file.tar.xz to the home directory of user tux (usually /home/tux).

Copying a directory remotely is similar to copying a directory locally. The following example copies the directory tux/ and its content into the remote directory /var/backup/ on the 192.168.1.1 host:

tux > rsync -avz tux 192.168.1.1:/var/backup/

Assuming you have write permissions on the host 192.168.1.1, you will find the copy in the absolute path /var/backup/tux.

9.5 Configuring and Using an Rsync Server

Rsync can run as a daemon (rsyncd) listening on default port 873 for incoming connections. The daemon can receive copying targets.

The following description explains how to create an Rsync server on jupiter with a backup target. This target can be used to store your backups. To create an Rsync server, do the following:

Procedure 9.1: Setting Up an Rsync Server
  1. On jupiter, create a directory to store all your backup files. In this example, we use /var/backup:

    root # mkdir /var/backup
  2. Specify ownership. In this case, the directory is owned by user tux in group users:

    root # chown tux.users /var/backup
  3. Configure the rsyncd daemon.

    We will separate the configuration file into a main file and some modules which hold your backup target. This makes it easier to add additional targets later. Global values can be stored in /etc/rsyncd.d/*.inc files, whereas your modules are placed in /etc/rsyncd.d/*.conf files:

    1. Create a directory /etc/rsyncd.d/:

      root # mkdir /etc/rsyncd.d/
    2. In the main configuration file /etc/rsyncd.conf, add the following lines:

      # rsyncd.conf main configuration file
      log file = /var/log/rsync.log
      pid file = /var/lock/rsync.lock
      
      &merge /etc/rsyncd.d 1
      &include /etc/rsyncd.d 2

      1

      Merges global values from /etc/rsyncd.d/*.inc files into the main configuration file.

      2

      Loads any modules (or targets) from /etc/rsyncd.d/*.conf files. These files should not contain any references to global values.

    3. Create your module (your backup target) in the file /etc/rsyncd.d/backup.conf with the following lines:

      # backup.conf: backup module
      [backup] 1
         uid = tux 2
         gid = users 2
         path = /var/backup 3
         auth users = tux  4
         secrets file = /etc/rsyncd.secrets 5
         comment = Our backup target

      1

      The backup target. You can use any name you like. However, it is a good idea to name a target according to its purpose and use the same name in your *.conf file.

      2

      Specifies the user name or group name that is used when the file transfer takes place.

      3

      Defines the path to store your backups (from Step 1).

      4

      Specifies a comma-separated list of allowed users. In its simplest form, it contains the user names that are allowed to connect to this module. In our case, only user tux is allowed.

      5

      Specifies the path of a file that contains lines with user names and plain passwords.

    4. Create the /etc/rsyncd.secrets file with the following content and replace PASSPHRASE:

      # user:passwd
      tux:PASSPHRASE
    5. Make sure the file is only readable by root:

      root # chmod 0600 /etc/rsyncd.secrets
  4. Start and enable the rsyncd daemon with:

    root # systemctl enable rsyncd
    root # systemctl start rsyncd
  5. Test the access to your Rsync server:

    tux > rsync jupiter::

    You should see a response that looks like this:

    backup          Our backup target

    Otherwise, check your configuration file, firewall and network settings.

The above steps create an Rsync server that can now be used to store backups. The example also creates a log file listing all connections. This file is stored in /var/log/rsync.log. This is useful if you want to debug your transfers.

To list the content of your backup target, use the following command:

tux > rsync -avz jupiter::backup

This command lists all files present in the directory /var/backup on the server. This request is also logged in the log file /var/log/rsync.log. To start an actual transfer, provide a source directory. Use . for the current directory. For example, the following command copies the current directory to your Rsync backup server:

tux > rsync -avz . jupiter::backup

By default, Rsync does not delete files and directories when it runs. To enable deletion, specify the additional option --delete. To ensure that no newer files are deleted, the option --update can be used instead. Any conflicts that arise must be resolved manually.
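
For example, to mirror the current directory to the backup target and delete files on the server that no longer exist locally (a sketch based on the commands above):

tux > rsync -avz --delete . jupiter::backup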

9.6 For More Information

CSync

Bidirectional file synchronizer, see https://www.csync.org/.

RSnapshot

Creates incremental backups, see http://rsnapshot.org.

Unison

A file synchronizer similar to CSync but with a graphical interface, see http://www.seas.upenn.edu/~bcpierce/unison/.

Rear

A disaster recovery framework, see the Administration Guide of the SUSE Linux Enterprise High Availability Extension https://www.suse.com/documentation/sle-ha-12/.

10 GNOME Configuration for Administrators

This chapter introduces GNOME configuration options which administrators can use to adjust system-wide settings, such as customizing menus, installing themes, configuring fonts, changing preferred applications, and locking down capabilities.

These configuration options are stored in the GConf system. Access the GConf system with tools such as the gconftool-2 command line interface or the gconf-editor GUI tool.

10.1 Starting Applications Automatically

To automatically start applications in GNOME, use one of the following methods:

  • To run applications for each user:  Put .desktop files in /usr/share/gnome/autostart.

  • To run applications for an individual user:  Put .desktop files in ~/.config/autostart.

To disable an application that starts automatically, add X-Autostart-enabled=false to the .desktop file.
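
A minimal sketch of such a .desktop file (the application name and path are hypothetical):

[Desktop Entry]
# Start a hypothetical application at login
Type=Application
Name=Example App
Exec=/usr/bin/example-app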

10.2 Automounting and Managing Media Devices

GNOME Files (nautilus) monitors volume-related events and responds with a user-specified policy. You can use GNOME Files to automatically mount hotplugged drives and inserted removable media, automatically run programs, and play audio CDs or video DVDs. GNOME Files can also automatically import photos from a digital camera.

System administrators can set system-wide defaults. For more information, see Section 10.3, “Changing Preferred Applications”.

10.3 Changing Preferred Applications

To change users' preferred applications, edit /etc/gnome_defaults.conf. Find further hints within this file.

For more information about MIME types, see http://www.freedesktop.org/Standards/shared-mime-info-spec.

10.4 Adding Document Templates

To add document templates for users, fill in the Templates directory in a user's home directory. You can do this manually for each user by copying the files into ~/Templates, or system-wide by adding a Templates directory with documents to /etc/skel before the user is created.
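
A minimal sketch of the system-wide approach (invoice.odt is a hypothetical template document):

root # mkdir -p /etc/skel/Templates
root # cp invoice.odt /etc/skel/Templates/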

A user creates a new document from a template by right-clicking the desktop and selecting Create Document.

10.5 For More Information

For more information, see http://help.gnome.org/admin/.

Part II Booting a Linux System

11 Introduction to the Boot Process

Booting a Linux system involves different components and tasks. After a firmware and hardware initialization process, which depends on the machine's architecture, the kernel is started by means of the boot loader GRUB 2. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of targets that boot the system into configurations for everyday use, maintenance, or emergencies.

12 UEFI (Unified Extensible Firmware Interface)

UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.

13 The Boot Loader GRUB 2

This chapter describes how to configure GRUB 2, the boot loader used in SUSE® Linux Enterprise Desktop. It is the successor to the traditional GRUB boot loader—now called GRUB Legacy. GRUB 2 has been the default boot loader in SUSE® Linux Enterprise Desktop since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 11, Introduction to the Boot Process. For details on Secure Boot support for UEFI machines, see Chapter 12, UEFI (Unified Extensible Firmware Interface).

14 The systemd Daemon

The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its chi…

11 Introduction to the Boot Process

Abstract

Booting a Linux system involves different components and tasks. After a firmware and hardware initialization process, which depends on the machine's architecture, the kernel is started by means of the boot loader GRUB 2. After this point, the boot process is completely controlled by the operating system and handled by systemd. systemd provides a set of targets that boot the system into configurations for everyday use, maintenance, or emergencies.

11.1 Terminology

This chapter uses terms that can be interpreted ambiguously. To understand how they are used here, read the definitions below:

init

Two different processes are commonly named init:

  • The initramfs process mounting the root file system

  • The operating system process that starts all other processes; it is executed from the real root file system

In both cases, the systemd program is taking care of this task. It is first executed from the initramfs to mount the root file system. Once that has succeeded, it is re-executed from the root file system as the initial process. To avoid confusing these two systemd processes, we refer to the first process as init on initramfs and to the second one as systemd.

initrd/initramfs

An initrd (initial RAM disk) is an image file containing a root file system image which is loaded by the kernel and mounted from /dev/ram as the temporary root file system. Mounting this file system requires a file system driver.

Beginning with kernel 2.6.13, the initrd has been replaced by the initramfs (initial RAM file system), which does not require a file system driver to be mounted. SUSE Linux Enterprise Desktop exclusively uses an initramfs. However, since the initramfs is stored as /boot/initrd, it is often called initrd. In this chapter we exclusively use the name initramfs.

11.2 The Linux Boot Process

The Linux boot process consists of several stages, each represented by a different component:

11.2.1 The Initialization and Boot Loader Phase

During the initialization phase the machine's hardware is set up and the devices are prepared. This process differs significantly between hardware architectures.

SUSE Linux Enterprise Desktop uses the boot loader GRUB 2 on all architectures. Depending on the architecture and firmware, starting the GRUB 2 boot loader can be a multi-step process. The purpose of the boot loader is to load the kernel and the initial, RAM-based file system (initramfs). For more information about GRUB 2, refer to Chapter 13, The Boot Loader GRUB 2.

11.2.1.1 Initialization and Boot Loader Phase on AArch64 and AMD64/Intel 64

After turning on the computer, the BIOS or the UEFI initializes the screen and keyboard, and tests the main memory. Up to this stage, the machine does not access any mass storage media. Subsequently, information about the current date, time, and the most important peripherals is loaded from the CMOS values. When the boot medium and its geometry are recognized, system control passes from the BIOS/UEFI to the boot loader.

On a machine equipped with a traditional BIOS, only code from the first physical 512-byte data sector (the Master Boot Record, MBR) of the boot disk can be loaded. Only a minimal GRUB 2 fits into the MBR. Its sole purpose is to load a GRUB 2 core image containing file system drivers from the gap between the MBR and the first partition (MBR partition table) or from the BIOS boot partition (GPT partition table). This image contains file system drivers and therefore is able to access /boot located on the root file system. /boot contains additional modules for GRUB 2 core as well as the kernel and the initramfs image. Once it has access to this partition, GRUB 2 loads the kernel and the initramfs image into memory and hands control over to the kernel.

When booting a BIOS system from an encrypted file system that includes an encrypted /boot partition, you need to enter the password for decryption twice. It is first needed by GRUB 2 to decrypt /boot and then for systemd to mount the encrypted volumes.

On machines with UEFI the boot process is much simpler than on machines with a traditional BIOS. The firmware is able to read from a FAT-formatted system partition of disks with a GPT partition table. This EFI system partition (mounted in the running system as /boot/efi) holds enough space to host a fully-fledged GRUB 2, which is directly loaded and executed by the firmware.

If the BIOS/UEFI supports network booting, it is also possible to configure a boot server that provides the boot loader. The system can then be booted via PXE. The BIOS/UEFI acts as the boot loader. It gets the boot image from the boot server and starts the system. This is completely independent of local hard disks.

11.2.1.2 Initialization and Boot Loader Phase on IBM z Systems

On IBM z Systems the boot process must be initialized by a boot loader called zipl (z initial program load). Although zipl supports reading from various file systems, it does not support the SLE default file system (Btrfs) or booting from snapshots. SUSE Linux Enterprise Desktop therefore uses a two-stage boot process that ensures full Btrfs support at boot time:

  1. zipl boots from the ext2-formatted partition /boot/zipl. This partition contains a minimal kernel and an initramfs that are loaded into memory. The initramfs contains a Btrfs driver (among others) and the boot loader GRUB 2. The kernel is started with a parameter initgrub, which tells it to start GRUB 2.

  2. The kernel mounts the root file system, so /boot becomes accessible. Now GRUB 2 is started from the initramfs. It reads its configuration from /boot/grub2/grub.cfg and loads the final kernel and initramfs from /boot. The new kernel now gets loaded via Kexec.

11.2.2 The Kernel Phase

When the boot loader has passed on system control, the boot process is the same on all architectures. The boot loader loads both the kernel and an initial RAM-based file system (initramfs) into memory and the kernel takes over.

After the kernel has set up memory management and has detected the CPU type and its features, it initializes the hardware and mounts the temporary root file system from the memory that was loaded with the initramfs.

11.2.2.1 The initramfs file

initramfs (initial RAM file system) is a small cpio archive that the kernel can load into a RAM disk. It is located at /boot/initrd. It can be created with a tool called dracut—refer to man 8 dracut for details.

The initramfs provides a minimal Linux environment that enables the execution of programs before the actual root file system is mounted. This minimal Linux environment is loaded into memory by BIOS or UEFI routines and does not have specific hardware requirements other than sufficient memory. The initramfs archive must always provide an executable named init that executes the systemd daemon on the root file system for the boot process to proceed.
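
To inspect which files and modules an initramfs contains, you can use the lsinitrd tool shipped with dracut (assuming the default image location /boot/initrd):

tux > lsinitrd /boot/initrd | less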

Before the root file system can be mounted and the operating system can be started, the kernel needs the corresponding drivers to access the device on which the root file system is located. These drivers may include special drivers for certain kinds of hard disks or even network drivers to access a network file system. The needed modules for the root file system are loaded by init on initramfs. After the modules are loaded, udev provides the initramfs with the needed devices. Later in the boot process, after changing the root file system, it is necessary to regenerate the devices. This is done by the systemd unit systemd-udev-trigger.service.

11.2.2.1.1 Regenerating the initramfs

Since the initramfs contains drivers, it needs to be updated whenever a new version of one of its drivers is available. This is done automatically when installing the package containing the driver update. YaST or zypper will inform you about this by showing the output of the command that generates the initramfs. However, there are some occasions when you need to regenerate an initramfs manually:

Adding Drivers Because of Hardware Changes

If you need to change hardware (for example, hard disks), and this hardware requires different drivers to be in the kernel at boot time, you must update the initramfs file.

Open or create /etc/dracut.conf.d/10-DRIVER.conf and add the following line (mind the leading whitespace):

force_drivers+=" DRIVER1"

Replace DRIVER1 with the module name of the driver. If you need to add more than one driver, list them space-separated:

force_drivers+=" DRIVER1 DRIVER2"

Proceed with Procedure 11.1, “Generate an initramfs”.

Moving System Directories to a RAID or LVM

Whenever you move swap files or system directories like /usr in a running system to a RAID or logical volume, you need to create an initramfs that contains support for software RAID or LVM drivers.

To do so, create the respective entries in /etc/fstab and mount the new entries (for example with mount -a and/or swapon -a).

Proceed with Procedure 11.1, “Generate an initramfs”.

Adding Disks to an LVM Group or Btrfs RAID Containing the Root File System

Whenever you add a disk to (or remove one from) a logical volume group or a Btrfs RAID containing the root file system, you need to create an initramfs that contains support for the enlarged volume.

Proceed with Procedure 11.1, “Generate an initramfs”.

Changing Kernel Variables

If you change the values of kernel variables via the sysctl interface by editing related files (/etc/sysctl.conf or /etc/sysctl.d/*.conf), the change will be lost on the next system reboot. Even if you load the values with sysctl --system at runtime, the changes are not saved into the initramfs file. You need to update it by proceeding as outlined in Procedure 11.1, “Generate an initramfs”.
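
A minimal sketch of the complete workflow, using a hypothetical kernel parameter and file name (dracut -f is explained in Procedure 11.1 below):

root # echo "vm.swappiness = 10" > /etc/sysctl.d/99-example.conf
root # sysctl --system
root # dracut -f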

Procedure 11.1: Generate an initramfs

Note that all commands in the following procedure need to be executed as user root.

  1. Generate a new initramfs file by running

    dracut MY_INITRAMFS

    Replace MY_INITRAMFS with a file name of your choice. The new initramfs will be created as /boot/MY_INITRAMFS.

    Alternatively, run dracut -f. This will overwrite the currently used, existing file.

  2. (Skip this step if you ran dracut -f in the previous step.) Create a link to the initramfs file you created in the previous step:

    (cd /boot && ln -sf MY_INITRAMFS initrd)
  3. On the IBM z Systems architecture, additionally run grub2-install.

11.2.3 The init on initramfs Phase

The temporary root file system mounted by the kernel from the initramfs contains the executable systemd (which is called init on initramfs in the following, also see Section 11.1, “Terminology”). This program performs all actions needed to mount the proper root file system. It provides kernel functionality for the needed file system and device drivers for mass storage controllers with udev.

The main purpose of init on initramfs is to prepare the mounting of and access to the real root file system. Depending on your system configuration, init on initramfs is responsible for the following tasks.

Loading Kernel Modules

Depending on your hardware configuration, special drivers may be needed to access the hardware components of your computer (the most important component being your hard disk). To access the final root file system, the kernel needs to load the proper file system drivers.

Providing Block Special Files

The kernel generates device events depending on loaded modules. udev handles these events and generates the required special block files on a RAM file system in /dev. Without those special files, the file system and other devices would not be accessible.

Managing RAID and LVM Setups

If you configured your system to hold the root file system under RAID or LVM, init on initramfs sets up LVM or RAID to enable access to the root file system later.

Managing the Network Configuration

If you configured your system to use a network-mounted root file system (mounted via NFS), init must make sure that the proper network drivers are loaded and that they are set up to allow access to the root file system.

If the file system resides on a network block device like iSCSI or SAN, the connection to the storage server is also set up by init on initramfs. SUSE Linux Enterprise Desktop supports booting from a secondary iSCSI target if the primary target is not available.

Note
Note: Handling of Mount Failures

If the root file system fails to mount from within the boot environment, it must be checked and repaired before the boot can continue. The file system checker will be automatically started for Ext3 and Ext4 file systems. The repair process is not automated for XFS and Btrfs file systems, and the user is presented with information describing the options available to repair the file system. When the file system has been successfully repaired, exiting the boot environment will cause the system to retry mounting the root file system. If successful, the boot will continue normally.

11.2.3.1 The init on initramfs Phase in the Installation Process

When init on initramfs is called during the initial boot as part of the installation process, its tasks differ from those mentioned above. Note that the installation system does not start systemd from initramfs—these tasks are performed by linuxrc.

Finding the Installation Medium

When starting the installation process, your machine loads an installation kernel and a special init containing the YaST installer. The YaST installer is running in a RAM file system and needs to have information about the location of the installation medium to access it for installing the operating system.

Initiating Hardware Recognition and Loading Appropriate Kernel Modules

As mentioned in Section 11.2.2.1, “The initramfs file”, the boot process starts with a minimum set of drivers that can be used with most hardware configurations. On AArch64, POWER, and AMD64/Intel 64 machines, linuxrc starts an initial hardware scanning process that determines the set of drivers suitable for your hardware configuration. On IBM z Systems, a list of drivers and their parameters needs to be provided, for example via linuxrc or a parmfile.

These drivers are used to generate a custom initramfs that is needed to boot the system. If the modules are not needed for boot but for coldplug, the modules can be loaded with systemd; for more information, see Section 14.6.4, “Loading Kernel Modules”.

Loading the Installation System

When the hardware is properly recognized, the appropriate drivers are loaded. The udev program creates the special device files and linuxrc starts the installation system with the YaST installer.

Starting YaST

Finally, linuxrc starts YaST, which starts the package installation and the system configuration.

11.2.4 The systemd Phase

After the real root file system has been found, it is checked for errors and mounted. If this is successful, the initramfs is cleaned and the systemd daemon on the root file system is executed. systemd is Linux's system and service manager. It is the parent process that is started as PID 1 and acts as an init system which brings up and maintains user space services. See Chapter 14, The systemd Daemon for details.

12 UEFI (Unified Extensible Firmware Interface)

UEFI (Unified Extensible Firmware Interface) is the interface between the firmware that comes with the system hardware, all the hardware components of the system, and the operating system.

UEFI is becoming more and more available on PC systems and thus is replacing the traditional PC-BIOS. UEFI, for example, properly supports 64-bit systems and offers secure booting (Secure Boot, firmware version 2.3.1c or better required), which is one of its most important features. Lastly, with UEFI a standard firmware will become available on all x86 platforms.

UEFI additionally offers the following advantages:

  • Booting from large disks (over 2 TiB) with a GUID Partition Table (GPT).

  • CPU-independent architecture and drivers.

  • Flexible pre-OS environment with network capabilities.

  • CSM (Compatibility Support Module) to support booting legacy operating systems via a PC-BIOS-like emulation.

For more information, see http://en.wikipedia.org/wiki/Unified_Extensible_Firmware_Interface. The following sections are not meant as a general UEFI overview; these are only hints about how some features are implemented in SUSE Linux Enterprise Desktop.

12.1 Secure Boot

In the world of UEFI, securing the bootstrapping process means establishing a chain of trust. The platform is the root of this chain of trust; in the context of SUSE Linux Enterprise Desktop, the mainboard and the on-board firmware could be considered the platform. In other words, it is the hardware vendor, and the chain of trust flows from that hardware vendor to the component manufacturers, the OS vendors, etc.

The trust is expressed via public key cryptography. The hardware vendor puts a so-called Platform Key (PK) into the firmware, representing the root of trust. The trust relationship with operating system vendors and others is documented by signing their keys with the Platform Key.

Finally, security is established by requiring that no code will be executed by the firmware unless it has been signed by one of these trusted keys—be it an OS boot loader, some driver located in the flash memory of some PCI Express card or on disk, or be it an update of the firmware itself.

To use Secure Boot, you need to have your OS loader signed with a key trusted by the firmware, and you need the OS loader to verify that the kernel it loads can be trusted.

Key Exchange Keys (KEK) can be added to the UEFI key database. This way, you can use other certificates, as long as they are signed with the private part of the PK.

12.1.1 Implementation on SUSE Linux Enterprise Desktop

Microsoft’s Key Exchange Key (KEK) is installed by default.

The Secure Boot feature is enabled by default on UEFI/x86_64 installations. You can find the Enable Secure Boot Support option in the Boot Code Options tab of the Boot Loader Settings dialog. It supports booting when Secure Boot is activated in the firmware, as well as booting when it is deactivated.

Figure 12.1: Secure Boot Support

Note
Note: GUID Partitioning Table (GPT) Required

The Secure Boot feature requires that a GUID Partitioning Table (GPT) replaces the old partitioning with a Master Boot Record (MBR). If YaST detects EFI mode during the installation, it will try to create a GPT partition. UEFI expects to find the EFI programs on a FAT-formatted EFI System Partition (ESP).

Supporting UEFI Secure Boot essentially requires having a boot loader with a digital signature that the firmware recognizes as a trusted key. That key is trusted by the firmware a priori, without requiring any manual intervention.

There are two ways of getting there. One is to work with hardware vendors to have them endorse a SUSE key, which SUSE then signs the boot loader with. The other way is to go through Microsoft’s Windows Logo Certification program to have the boot loader certified and have Microsoft recognize the SUSE signing key (that is, have it signed with their KEK). SUSE has meanwhile had its loader signed by the UEFI Signing Service (that is, Microsoft in this case).

Figure 12.2: UEFI: Secure Boot Process

At the implementation layer, SUSE uses the shim loader, which is installed by default. It is a smart solution that avoids legal issues and simplifies the certification and signing step considerably. The shim loader’s job is to load a boot loader such as GRUB 2 and verify it; this boot loader in turn will load only kernels signed by a SUSE key. SUSE has provided this functionality since SLE 11 SP3 on fresh installations with UEFI Secure Boot enabled.

There are two types of trusted users:

  • First, those who hold the keys. The Platform Key (PK) allows almost everything. The Key Exchange Key (KEK) allows everything a PK can do, except changing the PK.

  • Second, anyone with physical access to the machine. A user with physical access can reboot the machine, and configure UEFI.

UEFI offers two types of variables to fulfill the needs of those users:

  • The first is the so-called Authenticated Variables, which can be updated both from within the boot process (the so-called Boot Services Environment) and from the running OS. This can be done only when the new value of the variable is signed with the same key as the old value. Additionally, such variables can only be appended to, or changed to a value with a higher serial number.

  • The second is the so-called Boot Services Only Variables. These variables are accessible to any code that runs during the boot process. After the boot process ends and before the OS starts, the boot loader must call ExitBootServices. After that, these variables are no longer accessible, and the OS cannot touch them.

The various UEFI key lists are of the first type, as this allows online updating, adding, and blacklisting of keys, drivers, and firmware fingerprints. It is the second type of variable, the Boot Services Only Variable, that helps to implement Secure Boot in a secure and open source-friendly manner, and thus in a way that is compatible with GPLv3.

SUSE starts with shim—a small and simple EFI boot loader signed by SUSE and Microsoft.

This allows shim to load and execute.

shim then goes on to verify that the boot loader it wants to load is trusted. In a default situation shim will use an independent SUSE certificate embedded in its body. In addition, shim allows the user to enroll additional keys that override the default SUSE key. In the following, these are called Machine Owner Keys, or MOKs for short.

Next the boot loader will verify and then boot the kernel, and the kernel will do the same on the modules.

12.1.2 MOK (Machine Owner Key)

If the user (machine owner) wants to replace any components of the boot process, Machine Owner Keys (MOKs) are to be used. The mokutil tool will help with signing components and managing MOKs.

The enrollment process begins with rebooting the machine and interrupting the boot process (for example, pressing a key) when shim loads. shim will then go into enrollment mode, allowing the user to replace the default SUSE key with keys from a file on the boot partition. If the user chooses to do so, shim will then calculate a hash of that file and put the result in a Boot Services Only variable. This allows shim to detect any change of the file made outside of Boot Services and thus avoid tampering with the list of user-approved MOKs.

All of this happens during boot time—only verified code is executing now. Therefore, only a user present at the console can use the machine owner's set of keys. It cannot be malware or a hacker with remote access to the OS because hackers or malware can only change the file, but not the hash stored in the Boot Services Only variable.

The boot loader, after having been loaded and verified by shim, will call back to shim when it wants to verify the kernel—to avoid duplication of the verification code. shim will use the same list of MOKs for this and tell the boot loader whether it can load the kernel.

This way, you can install your own kernel or boot loader. It is only necessary to install a new set of keys and authorize them by being physically present during the first reboot. Because MOKs are a list rather than a single MOK, you can make shim trust keys from several vendors, allowing dual- and multi-boot from the boot loader.

12.1.3 Booting a Custom Kernel

The following is based on http://en.opensuse.org/openSUSE:UEFI#Booting_a_custom_kernel.

Secure Boot does not prevent you from using a self-compiled kernel. You must sign it with your own certificate and make that certificate known to the firmware or MOK.

  1. Create a custom X.509 key and certificate used for signing:

    openssl req -new -x509 -newkey rsa:2048 -keyout key.asc \
      -out cert.pem -nodes -days 666 -subj "/CN=$USER/"

    For more information about creating certificates, see http://en.opensuse.org/openSUSE:UEFI_Image_File_Sign_Tools#Create_Your_Own_Certificate.

  2. Package the key and the certificate as a PKCS#12 structure:

    openssl pkcs12 -export -inkey key.asc -in cert.pem \
      -name kernel_cert -out cert.p12
  3. Generate an NSS database for use with pesign:

    certutil -d . -N
  4. Import the key and the certificate contained in PKCS#12 into the NSS database:

    pk12util -d . -i cert.p12
  5. Bless the kernel with the new signature using pesign:

    pesign -n . -c kernel_cert -i arch/x86/boot/bzImage \
      -o vmlinuz.signed -s
  6. List the signatures on the kernel image:

    pesign -n . -S -i vmlinuz.signed

    At that point, you can install the kernel in /boot as usual. Because the kernel now has a custom signature, the certificate used for signing needs to be imported into the UEFI firmware or MOK.

  7. Convert the certificate to the DER format for import into the firmware or MOK:

    openssl x509 -in cert.pem -outform der -out cert.der
  8. Copy the certificate to the ESP for easier access:

    sudo cp cert.der /boot/efi/
  9. Use mokutil to launch MokManager automatically.

      1. Import the certificate to MOK:

        mokutil --root-pw --import cert.der

        The --root-pw option enables usage of the root user directly.

      2. Check the list of certificates that are prepared to be enrolled:

        mokutil --list-new
      3. Reboot the system; shim should launch MokManager. You need to enter the root password to confirm the import of the certificate to the MOK list.

      4. Check if the newly imported key was enrolled:

        mokutil --list-enrolled
      Alternatively, this is the procedure if you want to launch MokManager manually:

      1. Reboot the system.

      2. In the GRUB 2 menu press the 'c' key.

      3. Type:

        chainloader $efibootdir/MokManager.efi
        boot
      4. Select Enroll key from disk.

      5. Navigate to the cert.der file and press Enter.

      6. Follow the instructions to enroll the key. Normally this should be pressing '0' and then 'y' to confirm.

        Alternatively, the firmware menu may provide ways to add a new key to the Signature Database.

12.1.4 Using Non-Inbox Drivers

There is no support for adding non-inbox drivers (that is, drivers that do not come with SUSE Linux Enterprise Desktop) during installation with Secure Boot enabled. The signing key used for SolidDriver/PLDP is not trusted by default.

It is possible to install third-party drivers during installation with Secure Boot enabled in two different ways:

  • Add the needed keys to the firmware database via firmware/system management tools before the installation. This option depends on the specific hardware you are using. Consult your hardware vendor for more information.

  • Use a bootable driver ISO from https://drivers.suse.com/ or your hardware vendor to enroll the needed keys in the MOK list at first boot.

To use the bootable driver ISO to enroll the driver keys to the MOK list, follow these steps:

  1. Burn the ISO image above to an empty CD/DVD medium.

  2. Start the installation using the new CD/DVD medium, having the standard installation media at hand or a URL to a network installation server.

    If doing a network installation, enter the URL of the network installation source on the boot command line using the install= option.

    If doing installation from optical media, the installer will first boot from the driver kit and then ask to insert the first installation disk of the product.

  3. An initrd containing updated drivers will be used for installation.

For more information, see https://drivers.suse.com/doc/Usage/Secure_Boot_Certificate.html.

12.1.5 Features and Limitations

When booting in Secure Boot mode, the following features apply:

  • Installation to UEFI default boot loader location, a mechanism to keep or restore the EFI boot entry.

  • Reboot via UEFI.

  • Xen hypervisor will boot with UEFI when there is no legacy BIOS to fall back to.

  • UEFI IPv6 PXE boot support.

  • UEFI video mode support; the kernel can retrieve the video mode from UEFI to configure KMS mode with the same parameters.

  • UEFI booting from USB devices is supported.

When booting in Secure Boot mode, the following limitations apply:

  • To ensure that Secure Boot cannot be easily circumvented, some kernel features are disabled when running under Secure Boot.

  • Boot loader, kernel, and kernel modules must be signed.

  • Kexec and Kdump are disabled.

  • Hibernation (suspend on disk) is disabled.

  • Access to /dev/kmem and /dev/mem is not possible, not even as root user.

  • Access to the I/O port is not possible, not even as root user. All X11 graphical drivers must use a kernel driver.

  • PCI BAR access through sysfs is not possible.

  • custom_method in ACPI is not available.

  • debugfs for asus-wmi module is not available.

  • The acpi_rsdp parameter does not have any effect on the kernel.

12.2 For More Information

13 The Boot Loader GRUB 2

Abstract

This chapter describes how to configure GRUB 2, the boot loader used in SUSE® Linux Enterprise Desktop. It is the successor to the traditional GRUB boot loader—now called GRUB Legacy. GRUB 2 has been the default boot loader in SUSE® Linux Enterprise Desktop since version 12. A YaST module is available for configuring the most important settings. The boot procedure as a whole is outlined in Chapter 11, Introduction to the Boot Process. For details on Secure Boot support for UEFI machines, see Chapter 12, UEFI (Unified Extensible Firmware Interface).

13.1 Main Differences between GRUB Legacy and GRUB 2

  • The configuration is stored in different files.

  • More file systems are supported (for example, Btrfs).

  • Can directly read files stored on LVM or RAID devices.

  • The user interface can be translated and altered with themes.

  • Includes a mechanism for loading modules to support additional features, such as file systems, etc.

  • Automatically searches for and generates boot entries for other kernels and operating systems, such as Windows.

  • Includes a minimal Bash-like console.

13.2 Configuration File Structure

The configuration of GRUB 2 is based on the following files:

/boot/grub2/grub.cfg

This file contains the configuration of the GRUB 2 menu items. It replaces menu.lst used in GRUB Legacy. grub.cfg should not be edited—it is automatically generated by the command grub2-mkconfig -o /boot/grub2/grub.cfg.

/boot/grub2/custom.cfg

This optional file is directly sourced by grub.cfg at boot time and can be used to add custom items to the boot menu. Starting with SUSE Linux Enterprise Desktop these entries will also be parsed when using grub-once.
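
For illustration, a minimal sketch of an entry in /boot/grub2/custom.cfg; the menu title, disk, and partition are placeholders and must match your system (a BIOS-style example):

menuentry 'Chainload example' {
  set root=(hd0,1)   # placeholder disk and partition
  chainloader +1     # boot whatever boot sector is on that partition
}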

/etc/default/grub

This file controls the user settings of GRUB 2 and usually includes additional environmental settings such as backgrounds and themes.

Scripts under /etc/grub.d/

The scripts in this directory are read during execution of the command grub2-mkconfig -o /boot/grub2/grub.cfg. Their instructions are integrated into the main configuration file /boot/grub2/grub.cfg.

/etc/sysconfig/bootloader

This configuration file holds some basic settings like the boot loader type or whether to enable UEFI Secure Boot support.

/boot/grub2/x86_64-efi, /boot/grub2/power-ieee1275, /boot/grub2/s390x

These configuration files contain architecture-specific options.

GRUB 2 can be controlled in various ways. Boot entries from an existing configuration can be selected from the graphical menu (splash screen). The configuration is loaded from the file /boot/grub2/grub.cfg which is compiled from other configuration files (see below). All GRUB 2 configuration files are considered system files, and you need root privileges to edit them.

Note
Note: Activating Configuration Changes

After having manually edited GRUB 2 configuration files, you need to run grub2-mkconfig -o /boot/grub2/grub.cfg to activate the changes. However, this is not necessary when changing the configuration with YaST, because YaST will automatically run this command.

13.2.1 The File /boot/grub2/grub.cfg

The graphical splash screen with the boot menu is based on the GRUB 2 configuration file /boot/grub2/grub.cfg, which contains information about all partitions or operating systems that can be booted by the menu.

Every time the system is booted, GRUB 2 loads the menu file directly from the file system. For this reason, GRUB 2 does not need to be re-installed after changes to the configuration file. grub.cfg is automatically rebuilt with kernel installations or removals.

grub.cfg is compiled from the file /etc/default/grub and scripts found in the /etc/grub.d/ directory when running the command grub2-mkconfig -o /boot/grub2/grub.cfg. Therefore you should never edit the file manually. Instead, edit the related source files or use the YaST Boot Loader module to modify the configuration as described in Section 13.3, “Configuring the Boot Loader with YaST”.

13.2.2 The File /etc/default/grub

More general options of GRUB 2 belong here, such as the time the menu is displayed, or the default OS to boot. To list all available options, see the output of the following command:

grep "export GRUB_DEFAULT" -A50 /usr/sbin/grub2-mkconfig | grep GRUB_

In addition to already defined variables, the user may introduce their own variables, and use them later in the scripts found in the /etc/grub.d directory.

After having edited /etc/default/grub, update the main configuration file with grub2-mkconfig -o /boot/grub2/grub.cfg.
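
A minimal sketch of that edit-and-regenerate cycle, here changing the menu timeout (the value 10 is only an example):

sudo sed -i 's/^GRUB_TIMEOUT=.*/GRUB_TIMEOUT=10/' /etc/default/grub
sudo grub2-mkconfig -o /boot/grub2/grub.cfg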

Note
Note: Scope

All options set in this file are general options that affect all boot entries. Specific options for Xen kernels or the Xen hypervisor can be set via the GRUB_*_XEN_* configuration options. See below for details.

GRUB_DEFAULT

Sets the boot menu entry that is booted by default. Its value can be a numeric value, the complete name of a menu entry, or saved.

GRUB_DEFAULT=2 boots the third (counted from zero) boot menu entry.

GRUB_DEFAULT="2>0" boots the first submenu entry of the third top-level menu entry.

GRUB_DEFAULT="Example boot menu entry" boots the menu entry with the title Example boot menu entry.

GRUB_DEFAULT=saved boots the entry specified by the grub2-once or grub2-set-default commands. While grub2-reboot sets the default boot entry for the next reboot only, grub2-set-default sets the default boot entry until changed. grub2-editenv list lists the next boot entry.
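
A short example session, assuming GRUB_DEFAULT=saved is set; the entry title and number are placeholders:

sudo grub2-set-default 'Example boot menu entry'   # default until changed again
sudo grub2-once 2                                  # default for the next boot only
sudo grub2-editenv list                            # show the saved next boot entry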

GRUB_HIDDEN_TIMEOUT

Waits the specified number of seconds for the user to press a key. During this period, no menu is shown unless the user presses a key. If no key is pressed within the specified time, control is passed to GRUB_TIMEOUT. GRUB_HIDDEN_TIMEOUT=0 first checks whether Shift is pressed, showing the boot menu if so; otherwise it immediately boots the default menu entry. This is the default when only one bootable OS is identified by GRUB 2.

GRUB_HIDDEN_TIMEOUT_QUIET

If false is specified, a countdown timer is displayed on a blank screen when the GRUB_HIDDEN_TIMEOUT feature is active.

GRUB_TIMEOUT

Time period in seconds the boot menu is displayed before automatically booting the default boot entry. If you press a key, the timeout is cancelled and GRUB 2 waits for you to make the selection manually. GRUB_TIMEOUT=-1 will cause the menu to be displayed until you select the boot entry manually.

GRUB_CMDLINE_LINUX

Entries on this line are added at the end of the boot entries for normal and recovery mode. Use it to add kernel parameters to the boot entry.

GRUB_CMDLINE_LINUX_DEFAULT

Same as GRUB_CMDLINE_LINUX but the entries are appended in the normal mode only.

GRUB_CMDLINE_LINUX_RECOVERY

Same as GRUB_CMDLINE_LINUX but the entries are appended in the recovery mode only.

GRUB_CMDLINE_LINUX_XEN_REPLACE

This entry will completely replace the GRUB_CMDLINE_LINUX parameters for all Xen boot entries.

GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT

Same as GRUB_CMDLINE_LINUX_XEN_REPLACE but it will only replace parameters of GRUB_CMDLINE_LINUX_DEFAULT.

GRUB_CMDLINE_XEN

This entry specifies the kernel parameters for the Xen guest kernel only—the operation principle is the same as for GRUB_CMDLINE_LINUX.

GRUB_CMDLINE_XEN_DEFAULT

Same as GRUB_CMDLINE_XEN—the operation principle is the same as for GRUB_CMDLINE_LINUX_DEFAULT.

GRUB_TERMINAL

Enables and specifies an input/output terminal device. Can be console (PC BIOS and EFI consoles), serial (serial terminal), ofconsole (Open Firmware console), or the default gfxterm (graphics-mode output). It is also possible to enable more than one device by quoting the required options, for example GRUB_TERMINAL="console serial".

GRUB_GFXMODE

The resolution used for the gfxterm graphical terminal. Note that you can only use modes supported by your graphics card (VBE). The default is ‘auto’, which tries to select a preferred resolution. You can display the screen resolutions available to GRUB 2 by typing videoinfo in the GRUB 2 command line. The command line is accessed by typing C when the GRUB 2 boot menu screen is displayed.

You can also specify a color depth by appending it to the resolution setting, for example GRUB_GFXMODE=1280x1024x24.

GRUB_BACKGROUND

Set a background image for the gfxterm graphical terminal. The image must be a file readable by GRUB 2 at boot time, and it must end with the .png, .tga, .jpg, or .jpeg suffix. If necessary, the image will be scaled to fit the screen.

GRUB_DISABLE_OS_PROBER

If this option is set to true, automatic searching for other operating systems is disabled. Only the kernel images in /boot/ and the options from your own scripts in /etc/grub.d/ are detected.

SUSE_BTRFS_SNAPSHOT_BOOTING

If this option is set to true, GRUB 2 can boot directly into Snapper snapshots. For more information, see Section 7.3, “System Rollback by Booting from Snapshots”.

For a complete list of options, see the GNU GRUB manual. For a complete list of possible parameters, see http://en.opensuse.org/Linuxrc.

13.2.3 Scripts in /etc/grub.d

The scripts in this directory are read during execution of the command grub2-mkconfig -o /boot/grub2/grub.cfg. Their instructions are incorporated into /boot/grub2/grub.cfg. The order of menu items in grub.cfg is determined by the order in which the files in this directory are run. Files with a leading numeral are executed first, beginning with the lowest number. 00_header is run before 10_linux, which would run before 40_custom. If files with alphabetic names are present, they are executed after the numerically-named files. Only executable files generate output to grub.cfg during execution of grub2-mkconfig. By default all files in the /etc/grub.d directory are executable.
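
Because only executable files generate output, you can keep a script from contributing entries by clearing its executable bit; a sketch based on this rule:

sudo chmod -x /etc/grub.d/30_os-prober        # stop probing for foreign operating systems
sudo grub2-mkconfig -o /boot/grub2/grub.cfg   # regenerate the menu without those entries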

Tip
Tip: Persistent Custom Content in grub.cfg

Because /boot/grub2/grub.cfg is recompiled each time grub2-mkconfig is run, any custom content is lost. If you want to insert your lines directly into /boot/grub2/grub.cfg without losing them after grub2-mkconfig is run, insert them between the

### BEGIN /etc/grub.d/90_persistent ###

and

### END /etc/grub.d/90_persistent ###

lines. The 90_persistent script ensures that such content will be preserved.

A list of the most important scripts follows:

00_header

Sets environmental variables such as system file locations, display settings, themes, and previously saved entries. It also imports preferences stored in /etc/default/grub. Normally you do not need to make changes to this file.

10_linux

Identifies Linux kernels on the root device and creates relevant menu entries. This includes the associated recovery mode option if enabled. Only the latest kernel is displayed on the main menu page, with additional kernels included in a submenu.

30_os-prober

This script uses OS-prober to search for Linux and other operating systems and places the results in the GRUB 2 menu. There are sections to identify specific other operating systems, such as Windows or macOS.

40_custom

This file provides a simple way to include custom boot entries into grub.cfg. Make sure that you do not change the exec tail -n +3 $0 part at the beginning.
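
A minimal sketch of 40_custom with one added entry; everything after the exec tail line is copied verbatim into grub.cfg, and the title and paths are placeholders:

#!/bin/sh
exec tail -n +3 $0
# Lines below this one are copied into grub.cfg verbatim.
menuentry 'Example custom entry' {
  set root=(hd0,1)                  # placeholder disk and partition
  linux /boot/vmlinuz root=/dev/sda2
  initrd /boot/initrd
}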

The processing sequence is set by the preceding numbers, with the lowest number being executed first. If scripts are preceded by the same number, the alphabetical order of the complete name decides the order.

Tip
Tip: /boot/grub2/custom.cfg

If you create /boot/grub2/custom.cfg and fill it with content, it will be automatically included into /boot/grub2/grub.cfg just after 40_custom at boot time.

13.2.4 Mapping between BIOS Drives and Linux Devices

In GRUB Legacy, the device.map configuration file was used to derive Linux device names from BIOS drive numbers. The mapping between BIOS drives and Linux devices cannot always be guessed correctly. For example, GRUB Legacy would get a wrong order if the boot sequence of IDE and SCSI drives is exchanged in the BIOS configuration.

GRUB 2 avoids this problem by using device ID strings (UUIDs) or file system labels when generating grub.cfg. GRUB 2 utilities create a temporary device map on the fly, which is usually sufficient, particularly in the case of single-disk systems.

However, if you need to override GRUB 2's automatic device mapping mechanism, create your custom mapping file /boot/grub2/device.map. The following example changes the mapping to make DISK 3 the boot disk. Note that GRUB 2 partition numbers start with 1 and not with 0 as in GRUB Legacy.

(hd1)  /dev/disk/by-id/DISK3_ID
(hd2)  /dev/disk/by-id/DISK1_ID
(hd3)  /dev/disk/by-id/DISK2_ID

13.2.5 Editing Menu Entries during the Boot Procedure

Being able to directly edit menu entries is useful when the system does not boot anymore because of a faulty configuration. It can also be used to test new settings without altering the system configuration.

  1. In the graphical boot menu, select the entry you want to edit with the arrow keys.

  2. Press E to open the text-based editor.

  3. Use the arrow keys to move to the line you want to edit.

    Figure 13.1: GRUB 2 Boot Editor

    Now you have two options:

    1. Add space-separated parameters to the end of the line starting with linux or linuxefi to edit the kernel parameters. A complete list of parameters is available at http://en.opensuse.org/Linuxrc.

    2. Or edit the general options, for example to change the kernel version. The →| key suggests all possible completions.

  4. Press F10 to boot the system with the changes you made or press Esc to discard your edits and return to the GRUB 2 menu.

Changes made this way only apply to the current boot process and are not saved permanently.

Important
Important: Keyboard Layout During the Boot Procedure

The US keyboard layout is the only one available when booting. See Figure 35.2, “US Keyboard Layout”.

Note
Note: Boot Loader on the Installation Media

The Boot Loader of the installation media on systems with a traditional BIOS is still GRUB Legacy. To add boot options, select an entry and start typing. Additions you make to the installation boot entry will be permanently saved in the installed system.

Note
Note: Editing GRUB 2 Menu Entries on z Systems

Cursor movement and editing commands on IBM z Systems differ—see Section 13.4, “Differences in Terminal Usage on z Systems” for details.

13.2.6 Setting a Boot Password

Even before the operating system is booted, GRUB 2 enables access to file systems. Users without root permissions can access files in your Linux system to which they have no access after the system is booted. To block this kind of access or to prevent users from booting certain menu entries, set a boot password.

Important
Important: Booting Requires Password

If set, the boot password is required on every boot, which means the system does not boot automatically.

Proceed as follows to set a boot password. Alternatively use YaST (Protect Boot Loader with Password).

  1. Encrypt the password using grub2-mkpasswd-pbkdf2:

    tux > sudo grub2-mkpasswd-pbkdf2
    Password: ****
    Reenter password: ****
    PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
  2. Paste the resulting string into the file /etc/grub.d/40_custom together with the set superusers command.

    set superusers="root"
    password_pbkdf2 root grub.pbkdf2.sha512.10000.9CA4611006FE96BC77A...
  3. To import the changes into the main configuration file, run:

    tux > sudo grub2-mkconfig -o /boot/grub2/grub.cfg

After you reboot, you will be prompted for a user name and a password when trying to boot a menu entry. Enter root and the password you typed during the grub2-mkpasswd-pbkdf2 command. If the credentials are correct, the system will boot the selected boot entry.

For more information, see https://www.gnu.org/software/grub/manual/grub.html#Security.

13.3 Configuring the Boot Loader with YaST

The easiest way to configure general options of the boot loader in your SUSE Linux Enterprise Desktop system is to use the YaST module. In the YaST Control Center, select System › Boot Loader. The module shows the current boot loader configuration of your system and allows you to make changes.

Use the Boot Code Options tab to view and change settings related to type, location and advanced loader settings. You can choose whether to use GRUB 2 in standard or EFI mode.

Figure 13.2: Boot Code Options
Important
Important: EFI Systems require GRUB2-EFI

If you have an EFI system, you can only install GRUB2-EFI; otherwise your system will no longer be bootable.

Important
Important: Reinstalling the Boot Loader

To reinstall the boot loader, make sure to change a setting in YaST and then change it back. For example, to reinstall GRUB2-EFI, select GRUB2 first and then immediately switch back to GRUB2-EFI.

Otherwise, the boot loader may only be partially reinstalled.

Note
Note: Custom Boot Loader

To use a boot loader other than the ones listed, select Do Not Install Any Boot Loader. Read the documentation of your boot loader carefully before choosing this option.

13.3.1 Boot Loader Location and Boot Code Options

The default location of the boot loader depends on the partition setup and is either the Master Boot Record (MBR) or the boot sector of the / partition. To modify the location of the boot loader, follow these steps:

Procedure 13.1: Changing the Boot Loader Location
  1. Select the Boot Code Options tab and then choose one of the following options for Boot Loader Location:

    Boot from Master Boot Record

    This installs the boot loader in the MBR of the disk containing the directory /boot. Usually this will be the disk mounted to /, but if /boot is mounted to a separate partition on a different disk, the MBR of that disk will be used.

    Boot from Root Partition

    This installs the boot loader in the boot sector of the / partition.

    Custom Boot Partition

    Use this option to specify the location of the boot loader manually.

  2. Click OK to apply your changes.

Figure 13.3: Code Options

The Boot Code Options tab includes the following additional options:

Set Active Flag in Partition Table for Boot Partition

Activates the partition that contains the /boot directory (on POWER systems, the PReP partition). Use this option on systems with an old BIOS and/or legacy operating systems, because they may fail to boot from a non-active partition. It is safe to leave this option active.

Write Generic Boot Code to MBR

If the MBR contains custom non-GRUB code, this option replaces it with generic, operating-system-independent code. If you deactivate this option, the system may become unbootable.

Enable Trusted Boot Support

Starts TrustedGRUB2, which supports trusted computing functionality (Trusted Platform Module (TPM)). For more information refer to https://github.com/Sirrix-AG/TrustedGRUB2.

13.3.2 Adjusting the Disk Order

If your computer has more than one hard disk, you can specify the boot sequence of the disks. The first disk in the list is where GRUB 2 will be installed in the case of booting from MBR. It is the disk where SUSE Linux Enterprise Desktop is installed by default. The rest of the list is a hint for GRUB 2's device mapper (see Section 13.2.4, “Mapping between BIOS Drives and Linux Devices”).

Warning
Warning: Unbootable System

The default value is usually valid for almost all deployments. However, if you change the boot order of disks incorrectly, the system may become unbootable on the next reboot, for example if the first disk in the list is not part of the BIOS boot order and the other disks in the list have empty MBRs.

Procedure 13.2: Setting the Disk Order
  1. Open the Boot Code Options tab.

  2. Click Edit Disk Boot Order.

  3. If more than one disk is listed, select a disk and click Up or Down to reorder the displayed disks.

  4. Click OK twice to save the changes.

13.3.3 Configuring Advanced Options

Advanced boot options can be configured via the Boot Loader Options tab.

13.3.3.1 Boot Loader Options Tab

Figure 13.4: Boot Loader Options
Boot Loader Time-Out

Change the value of Time-Out in Seconds by typing in a new value or by clicking the arrow buttons with your mouse.

Probe Foreign OS

When selected, the boot loader searches for other systems like Windows or other Linux installations.

Hide Menu on Boot

Hides the boot menu and boots the default entry.

Adjusting the Default Boot Entry

Select the desired entry from the Default Boot Section list. Note that the > sign in the boot entry name delimits the boot section and its subsection.

Protect Boot Loader with Password

Protects the boot loader and the system with an additional password. For more information, see Section 13.2.6, “Setting a Boot Password”.

13.3.3.2 Kernel Parameters Tab

Figure 13.5: Kernel Parameters
Console resolution

The Console resolution option specifies the default screen resolution during the boot process.

Kernel Command Line Parameter

The optional kernel parameters are added at the end of the default parameters. For a list of all possible parameters, see http://en.opensuse.org/Linuxrc.

Use graphical console

When checked, the boot menu appears on a graphical splash screen rather than in text mode. The resolution of the boot screen can then be set from the Console resolution list, and a graphical theme definition file can be specified with the Console theme file chooser.

Use Serial Console

If your machine is controlled via a serial console, activate this option and specify which COM port to use at which speed. See info grub or http://www.gnu.org/software/grub/manual/grub.html#Serial-terminal.
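
For reference, a hedged sketch of the corresponding settings in /etc/default/grub; GRUB_SERIAL_COMMAND is an upstream GRUB variable, and the unit and speed values are placeholders:

GRUB_TERMINAL="serial"
GRUB_SERIAL_COMMAND="serial --unit=0 --speed=115200"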

13.4 Differences in Terminal Usage on z Systems

On 3215 and 3270 terminals there are some differences and limitations on how to move the cursor and how to issue editing commands within GRUB 2.

13.4.1 Limitations

Interactivity

Interactivity is strongly limited. Typing often does not result in visual feedback. To see where the cursor is, type an underscore (_).

Note
Note: 3270 Compared to 3215

The 3270 terminal is much better at displaying and refreshing screens than the 3215 terminal.

Cursor Movement

Traditional cursor movement is not possible. Alt, Meta, Ctrl and the cursor keys do not work. To move the cursor, use the key combinations listed in Section 13.4.2, “Key Combinations”.

Caret

The caret (^) is used as a control character. To type a literal ^ followed by a letter, type ^, ^, LETTER.

Enter

The Enter key does not work; use ^J instead.

13.4.2 Key Combinations

Common Substitutes:

^J

engage (Enter)

^L

abort, return to previous state

^I

tab completion (in edit and shell mode)

Keys Available in Menu Mode:

^A

first entry

^E

last entry

^P

previous entry

^N

next entry

^G

previous page

^C

next page

^F

boot selected entry or enter submenu (same as ^J)

E

edit selected entry

C

enter GRUB-Shell

Keys Available in Edit Mode:

^P

previous line

^N

next line

^B

backward char

^F

forward char

^A

beginning of line

^E

end of line

^H

backspace

^D

delete

^K

kill line

^Y

yank

^O

open line

^L

refresh screen

^X

boot entry

^C

enter GRUB-Shell

Keys Available in Command Line Mode:

^P

previous command

^N

next command from history

^A

beginning of line

^E

end of line

^B

backward char

^F

forward char

^H

backspace

^D

delete

^K

kill line

^U

discard line

^Y

yank

13.5 Helpful GRUB 2 Commands

grub2-mkconfig

Generates a new /boot/grub2/grub.cfg based on /etc/default/grub and the scripts from /etc/grub.d/.

Example 13.1: Usage of grub2-mkconfig
grub2-mkconfig -o /boot/grub2/grub.cfg
Tip
Tip: Syntax Check

Running grub2-mkconfig without any parameters prints the configuration to STDOUT where it can be reviewed. Use grub2-script-check after /boot/grub2/grub.cfg has been written to check its syntax.
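
For example, to review and check a configuration without writing it to its final destination:

grub2-mkconfig > /tmp/grub.cfg.review      # print the result to a file for review
grub2-script-check /tmp/grub.cfg.review    # verify the syntax before installing it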

Important
Important: grub2-mkconfig Cannot Repair UEFI Secure Boot Tables

If you are using UEFI Secure Boot and your system is not reaching GRUB 2 correctly anymore, you may need to additionally reinstall Shim and regenerate the UEFI boot table. To do so, use:

root # shim-install --config-file=/boot/grub2/grub.cfg
grub2-mkrescue

Creates a bootable rescue image of your installed GRUB 2 configuration.

Example 13.2: Usage of grub2-mkrescue
grub2-mkrescue -o save_path/name.iso iso
grub2-script-check

Checks the given file for syntax errors.

Example 13.3: Usage of grub2-script-check
grub2-script-check /boot/grub2/grub.cfg
grub2-once

Sets the default boot entry for the next boot only. To get the list of available boot entries, use the --list option.

Example 13.4: Usage of grub2-once
grub2-once number_of_the_boot_entry
Tip
Tip: grub2-once Help

Call the program without any option to get a full list of all possible options.

13.6 More Information

Extensive information about GRUB 2 is available at http://www.gnu.org/software/grub/. Also refer to the grub info page. You can also search for the keyword GRUB 2 in the Technical Information Search at http://www.suse.com/support to get information about special issues.

14 The systemd Daemon

The program systemd is the process with process ID 1. It is responsible for initializing the system in the required way. systemd is started directly by the kernel and resists signal 9, which normally terminates processes. All other programs are either started directly by systemd or by one of its child processes.

Starting with SUSE Linux Enterprise Desktop 12, systemd is a replacement for the popular System V init daemon. systemd is fully compatible with System V init (by supporting init scripts). One of the main advantages of systemd is that it considerably speeds up boot time by aggressively parallelizing service starts. Furthermore, systemd only starts a service when it is really needed. Daemons are not started unconditionally at boot time, but rather when being required for the first time. systemd also supports Kernel Control Groups (cgroups), snapshotting and restoring the system state, and more. See http://www.freedesktop.org/wiki/Software/systemd/ for details.

14.1 The systemd Concept

This section will go into detail about the concept behind systemd.

14.1.1 What Is systemd

systemd is a system and session manager for Linux, compatible with System V and LSB init scripts. The main features are:

  • provides aggressive parallelization capabilities

  • uses socket and D-Bus activation for starting services

  • offers on-demand starting of daemons

  • keeps track of processes using Linux cgroups

  • supports snapshotting and restoring of the system state

  • maintains mount and automount points

  • implements an elaborate transactional dependency-based service control logic

14.1.2 Unit File

A unit configuration file contains information about a service, a socket, a device, a mount point, an automount point, a swap file or partition, a start-up target, a watched file system path, a timer controlled and supervised by systemd, a temporary system state snapshot, a resource management slice or a group of externally created processes. Unit file is a generic term used by systemd for the following:

  • Service.  Information about a process (for example running a daemon); file ends with .service

  • Targets.  Used for grouping units and as synchronization points during start-up; file ends with .target

  • Sockets.  Information about an IPC or network socket or a file system FIFO, for socket-based activation (like inetd); file ends with .socket

  • Path.  Used to trigger other units (for example running a service when files change); file ends with .path

  • Timer.  Information about a timer controlled and supervised by systemd, for timer-based activation; file ends with .timer

  • Mount point.  Usually auto-generated by the fstab generator; file ends with .mount

  • Automount point.  Information about a file system automount point; file ends with .automount

  • Swap.  Information about a swap device or file for memory paging; file ends with .swap

  • Device.  Information about a device unit as exposed in the sysfs/udev(7) device tree; file ends with .device

  • Scope / Slice.  A concept for hierarchically managing resources of a group of processes; file ends with .scope/.slice

For more information about systemd.unit see http://www.freedesktop.org/software/systemd/man/systemd.unit.html

14.2 Basic Usage

The System V init system uses several commands to handle services—the init scripts, insserv, telinit and others. systemd makes it easier to manage services, since there is only one command to memorize for the majority of service-handling tasks: systemctl. It uses the command plus subcommand notation like git or zypper:

systemctl GENERAL OPTIONS SUBCOMMAND SUBCOMMAND OPTIONS

See man 1 systemctl for a complete manual.

Tip
Tip: Terminal Output and Bash Completion

If the output goes to a terminal (and not to a pipe or a file, for example) systemd commands send long output to a pager by default. Use the --no-pager option to turn off paging mode.

systemd also supports bash-completion, allowing you to enter the first letters of a subcommand and then press →| to automatically complete it. This feature is only available in the bash shell and requires the installation of the package bash-completion.

14.2.1 Managing Services in a Running System

Subcommands for managing services are the same as for managing a service with System V init (start, stop, ...). The general syntax for service management commands is as follows:

systemd
systemctl reload|restart|start|status|stop|... MY_SERVICE(S)
System V init
rcMY_SERVICE(S) reload|restart|start|status|stop|...

systemd allows you to manage several services in one go. Instead of executing init scripts one after the other as with System V init, execute a command like the following:

systemctl start MY_1ST_SERVICE MY_2ND_SERVICE

To list all services available on the system:

systemctl list-unit-files --type=service

The following table lists the most important service management commands for systemd and System V init:

Table 14.1: Service Management Commands

Task

systemd Command

System V init Command

Starting. 

start
start

Stopping. 

stop
stop

Restarting.  Shuts down services and starts them afterward. If a service is not yet running it will be started.

restart
restart

Restarting conditionally.  Restarts services if they are currently running. Does nothing for services that are not running.

try-restart
try-restart

Reloading.  Tells services to reload their configuration files without interrupting operation. Use case: Tell Apache to reload a modified httpd.conf configuration file. Note that not all services support reloading.

reload
reload

Reloading or restarting.  Reloads services if reloading is supported, otherwise restarts them. If a service is not yet running it will be started.

reload-or-restart
n/a

Reloading or restarting conditionally.  Reloads services if reloading is supported, otherwise restarts them if currently running. Does nothing for services that are not running.

reload-or-try-restart
n/a

Getting detailed status information.  Lists information about the status of services. The systemd command shows details such as description, executable, status, cgroup, and messages last issued by a service (see Section 14.6.8, “Debugging Services”). The level of details displayed with the System V init differs from service to service.

status
status

Getting short status information.  Shows whether services are active or not.

is-active
status
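
A short example session; nscd (from the service listing shown later in this chapter) serves as an example, and any service name works the same way:

sudo systemctl restart nscd.service   # stop and start the Name Service Cache Daemon
systemctl is-active nscd.service      # short status, for example "active"
sudo systemctl reload nscd.service    # re-read configuration, if the service supports it
systemctl status nscd.service         # detailed status including recent log messages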

14.2.2 Permanently Enabling/Disabling Services

The service management commands mentioned in the previous section let you manipulate services for the current session. systemd also lets you permanently enable or disable services, so they are automatically started when requested or are always unavailable. You can either do this by using YaST, or on the command line.

14.2.2.1 Enabling/Disabling Services on the Command Line

The following table lists enabling and disabling commands for systemd and System V init:

Important
Important: Service Start

When enabling a service on the command line, it is not started automatically. It is scheduled to be started with the next system start-up or runlevel/target change. To immediately start a service after having enabled it, explicitly run systemctl start MY_SERVICE or rc MY_SERVICE start.

Table 14.2: Commands for Enabling and Disabling Services

Task

systemd Command

System V init Command

Enabling. 

systemctl enable MY_SERVICE(S)

insserv MY_SERVICE(S), chkconfig -a MY_SERVICE(S)

Disabling. 

systemctl disable MY_SERVICE(S).service

insserv -r MY_SERVICE(S), chkconfig -d MY_SERVICE(S)

Checking.  Shows whether a service is enabled or not.

systemctl is-enabled MY_SERVICE

chkconfig MY_SERVICE

Re-enabling.  Similar to restarting a service, this command first disables and then enables a service. Useful to re-enable a service with its defaults.

systemctl reenable MY_SERVICE

n/a

Masking.  After disabling a service, it can still be started manually. To completely disable a service, you need to mask it. Use with care.

systemctl mask MY_SERVICE

n/a

Unmasking.  A service that has been masked can only be used again after it has been unmasked.

systemctl unmask MY_SERVICE

n/a
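
A sketch of a typical enable/disable session on the command line; nscd again serves as an example:

sudo systemctl enable nscd.service    # start automatically at boot
sudo systemctl start nscd.service     # also start it now (enabling alone does not)
systemctl is-enabled nscd.service     # prints "enabled"
sudo systemctl mask nscd.service      # block the service completely, even manual starts
sudo systemctl unmask nscd.service    # make the service usable again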

14.3 System Start and Target Management

The entire process of starting the system and shutting it down is maintained by systemd. From this point of view, the kernel can be considered a background process to maintain all other processes and adjust CPU time and hardware access according to requests from other programs.

14.3.1 Targets Compared to Runlevels

With System V init the system was booted into a so-called Runlevel. A runlevel defines how the system is started and what services are available in the running system. Runlevels are numbered; the most commonly known ones are 0 (shutting down the system), 3 (multiuser with network) and 5 (multiuser with network and display manager).

systemd introduces a new concept by using so-called target units. However, it remains fully compatible with the runlevel concept. Target units are named rather than numbered and serve specific purposes. For example, the targets local-fs.target and swap.target mount local file systems and swap spaces.

The target graphical.target provides a multiuser system with network and display manager capabilities and is equivalent to runlevel 5. Complex targets, such as graphical.target, act as meta targets by combining a subset of other targets. Since systemd makes it easy to create custom targets by combining existing targets, it offers great flexibility.

The following list shows the most important systemd target units. For a full list refer to man 7 systemd.special.

Selected systemd Target Units
default.target

The target that is booted by default. Not a real target, but rather a symbolic link to another target like graphical.target. Can be permanently changed via YaST (see Section 14.4, “Managing Services with YaST”). To change it for a session, use the kernel parameter systemd.unit=MY_TARGET.target at the boot prompt.

emergency.target

Starts an emergency shell on the console. Only use it at the boot prompt as systemd.unit=emergency.target.

graphical.target

Starts a system with network, multiuser support and a display manager.

halt.target

Shuts down the system.

mail-transfer-agent.target

Starts all services necessary for sending and receiving mails.

multi-user.target

Starts a multiuser system with network.

reboot.target

Reboots the system.

rescue.target

Starts a single-user system without network.

To remain compatible with the System V init runlevel system, systemd provides special targets named runlevelX.target that map to the corresponding runlevels numbered X.

If you want to know the default target, use the command systemctl get-default.

Table 14.3: System V Runlevels and systemd Target Units

System V runlevel

systemd target

Purpose

0

runlevel0.target, halt.target, poweroff.target

System shutdown

1, S

runlevel1.target, rescue.target

Single-user mode

2

runlevel2.target, multi-user.target

Local multiuser without remote network

3

runlevel3.target, multi-user.target

Full multiuser with network

4

runlevel4.target

Unused/User-defined

5

runlevel5.target, graphical.target

Full multiuser with network and display manager

6

runlevel6.target, reboot.target

System reboot

Important
Important: systemd Ignores /etc/inittab

The runlevels in a System V init system are configured in /etc/inittab. systemd does not use this configuration. Refer to Section 14.5.3, “Creating Custom Targets” for instructions on how to create your own bootable target.

14.3.1.1 Commands to Change Targets

Use the following commands to operate with target units:

Task

systemd Command

System V init Command

Change the current target/runlevel

systemctl isolate MY_TARGET.target

telinit X

Change to the default target/runlevel

systemctl default

n/a

Get the current target/runlevel

systemctl list-units --type=target

With systemd there is usually more than one active target. The command lists all currently active targets.

who -r

or

runlevel

Persistently change the default runlevel

Use the Services Manager or run the following command:

ln -sf /usr/lib/systemd/system/MY_TARGET.target /etc/systemd/system/default.target

Use the Services Manager or change the line

id:X:initdefault:

in /etc/inittab

Change the default runlevel for the current boot process

Enter the following option at the boot prompt

systemd.unit=MY_TARGET.target

Enter the desired runlevel number at the boot prompt.

Show a target's/runlevel's dependencies

systemctl show -p "Requires" MY_TARGET.target

systemctl show -p "Wants" MY_TARGET.target

Requires lists the hard dependencies (the ones that must be resolved), whereas Wants lists the soft dependencies (the ones that get resolved if possible).

n/a
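
A brief example session using the commands from this table:

systemctl get-default                          # show the default target
sudo systemctl isolate multi-user.target       # switch the running system (like telinit 3)
systemctl list-units --type=target             # list all currently active targets
systemctl show -p "Requires" graphical.target  # hard dependencies of a target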

14.3.2 Debugging System Start-Up

systemd offers the means to analyze the system start-up process. You can review the list of all services and their status (rather than having to parse /var/log/). systemd also allows you to scan the start-up procedure to find out how much time each service start-up consumes.

14.3.2.1 Review Start-Up of Services

To review the complete list of services that have been started since booting the system, enter the command systemctl. It lists all active services as shown below (shortened). To get more information on a specific service, use systemctl status MY_SERVICE.

Example 14.1: List Active Services
root # systemctl
UNIT                        LOAD   ACTIVE SUB       JOB DESCRIPTION
[...]
iscsi.service               loaded active exited    Login and scanning of iSC+
kmod-static-nodes.service   loaded active exited    Create list of required s+
libvirtd.service            loaded active running   Virtualization daemon
nscd.service                loaded active running   Name Service Cache Daemon
ntpd.service                loaded active running   NTP Server Daemon
polkit.service              loaded active running   Authorization Manager
postfix.service             loaded active running   Postfix Mail Transport Ag+
rc-local.service            loaded active exited    /etc/init.d/boot.local Co+
rsyslog.service             loaded active running   System Logging Service
[...]
LOAD   = Reflects whether the unit definition was properly loaded.
ACTIVE = The high-level unit activation state, i.e. generalization of SUB.
SUB    = The low-level unit activation state, values depend on unit type.

161 loaded units listed. Pass --all to see loaded but inactive units, too.
To show all installed unit files use 'systemctl list-unit-files'.

To restrict the output to services that failed to start, use the --failed option:

Example 14.2: List Failed Services
root # systemctl --failed
UNIT                   LOAD   ACTIVE SUB    JOB DESCRIPTION
apache2.service        loaded failed failed     apache
NetworkManager.service loaded failed failed     Network Manager
plymouth-start.service loaded failed failed     Show Plymouth Boot Screen

[...]

14.3.2.2 Debug Start-Up Time

To debug system start-up time, systemd offers the systemd-analyze command. It shows the total start-up time, a list of services ordered by start-up time and can also generate an SVG graphic showing the time services took to start in relation to the other services.

Listing the System Start-Up Time
root # systemd-analyze
Startup finished in 2666ms (kernel) + 21961ms (userspace) = 24628ms
Listing the Services Start-Up Time
root # systemd-analyze blame
  6472ms systemd-modules-load.service
  5833ms remount-rootfs.service
  4597ms network.service
  4254ms systemd-vconsole-setup.service
  4096ms postfix.service
  2998ms xdm.service
  2483ms localnet.service
  2470ms SuSEfirewall2_init.service
  2189ms avahi-daemon.service
  2120ms systemd-logind.service
  1210ms xinetd.service
  1080ms ntp.service
[...]
    75ms fbset.service
    72ms purge-kernels.service
    47ms dev-vda1.swap
    38ms bluez-coldplug.service
    35ms splash_early.service
Services Start-Up Time Graphics
root # systemd-analyze plot > jupiter.example.com-startup.svg

14.3.2.3 Review the Complete Start-Up Process

The above-mentioned commands let you review the services that started and the time it took to start them. If you need to know more details, you can tell systemd to verbosely log the complete start-up procedure by entering the following parameters at the boot prompt:

systemd.log_level=debug systemd.log_target=kmsg

Now systemd writes its log messages into the kernel ring buffer. View that buffer with dmesg:

dmesg -T | less

14.3.3 System V Compatibility

systemd is compatible with System V, allowing you to still use existing System V init scripts. However, there is at least one known issue where a System V init script does not work with systemd out of the box: starting a service as a different user via su or sudo in init scripts will result in a failure of the script, producing an Access denied error.

When changing the user with su or sudo, a PAM session is started. This session will be terminated after the init script is finished. As a consequence, the service that has been started by the init script will also be terminated. To work around this error, proceed as follows:

  1. Create a service file wrapper with the same name as the init script plus the file name extension .service:

    [Unit]
    Description=DESCRIPTION
    After=network.target

    [Service]
    User=USER
    Type=forking [1]
    PIDFile=PATH TO PID FILE [1]
    ExecStart=PATH TO INIT SCRIPT start
    ExecStop=PATH TO INIT SCRIPT stop
    ExecStopPost=/usr/bin/rm -f PATH TO PID FILE [1]

    [Install]
    WantedBy=multi-user.target [2]

    Replace all values written in UPPERCASE LETTERS with appropriate values.

    [1] Optional—only use if the init script starts a daemon.

    [2] multi-user.target also starts the init script when booting into graphical.target. If it should only be started when booting into the display manager, use graphical.target here.

  2. Start the daemon with systemctl start APPLICATION.

14.4 Managing Services with YaST

Basic service management can also be done with the YaST Services Manager module. It supports starting, stopping, enabling and disabling services. It also lets you show a service's status and change the default target. Start the YaST module with YaST › System › Services Manager.

Figure 14.1: Services Manager
Changing the Default System Target

To change the target the system boots into, choose a target from the Default System Target drop-down box. The most often used targets are Graphical Interface (starting a graphical login screen) and Multi-User (starting the system in command line mode).

Starting or Stopping a Service

Select a service from the table. The Active column shows whether it is currently running (Active) or not (Inactive). Toggle its status by choosing Start/Stop.

Starting or stopping a service changes its status for the currently running session. To change its status across reboots, you need to enable or disable it.

Enabling or Disabling a Service

Select a service from the table. The Enabled column shows whether it is currently Enabled or Disabled. Toggle its status by choosing Enable/Disable.

By enabling or disabling a service you configure whether it is started during booting (Enabled) or not (Disabled). This setting will not affect the current session. To change its status in the current session, you need to start or stop it.

Viewing a Service's Status Messages

To view the status message of a service, select it from the list and choose Show Details. The output you will see is identical to the one generated by the command systemctl -l status MY_SERVICE.

Warning: Faulty Runlevel Settings May Damage Your System

Faulty runlevel settings may make your system unusable. Before applying your changes, make absolutely sure that you know their consequences.

14.5 Customization of systemd

The following sections contain some examples for systemd customization.

Warning: Avoiding Overwritten Customization

Always do systemd customization in /etc/systemd/, never in /usr/lib/systemd/. Otherwise your changes will be overwritten by the next update of systemd.

14.5.1 Customizing Service Files

The systemd service files are located in /usr/lib/systemd/system. If you want to customize them, proceed as follows:

  1. Copy the files you want to modify from /usr/lib/systemd/system to /etc/systemd/system. Keep the file names identical to the original ones.

  2. Modify the copies in /etc/systemd/system according to your needs.

  3. For an overview of your configuration changes, use the systemd-delta command. It can compare and identify configuration files that override other configuration files. For details, refer to the systemd-delta man page.

The modified files in /etc/systemd will take precedence over the original files in /usr/lib/systemd/system, provided that their file name is the same.
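
For example, the complete procedure for a unit such as cron.service (chosen here purely for illustration) could look like this:

root # cp /usr/lib/systemd/system/cron.service /etc/systemd/system/
root # vi /etc/systemd/system/cron.service
root # systemctl daemon-reload
root # systemd-delta

systemd-delta should then list the copy in /etc/systemd/system as overriding the original unit.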

14.5.2 Creating Drop-in Files

If you only want to add a few lines to a configuration file or modify a small part of it, you can use so-called drop-in files. Drop-in files let you extend the configuration of unit files without having to edit or override the unit files themselves.

For example, to change one value for the FOOBAR service located in /usr/lib/systemd/system/FOOBAR.service, proceed as follows:

  1. Create a directory called /etc/systemd/system/FOOBAR.service.d/.

    Note the .d suffix. Apart from this suffix, the directory must have the same name as the service that you want to patch with the drop-in file.

  2. In that directory, create a file WHATEVERMODIFICATION.conf.

    Make sure it contains only the option that you want to modify, together with the section header ([Service], for example) that the option belongs to.

  3. Save your changes to the file. It will be used as an extension of the original file.
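
As a minimal sketch, a drop-in that only changes the Restart= value of the FOOBAR service could be created like this (the option and the file name restart.conf are arbitrary examples):

root # mkdir -p /etc/systemd/system/FOOBAR.service.d
root # cat > /etc/systemd/system/FOOBAR.service.d/restart.conf <<EOF
[Service]
Restart=on-failure
EOF
root # systemctl daemon-reload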

14.5.3 Creating Custom Targets

On System V init SUSE systems, runlevel 4 is unused to allow administrators to create their own runlevel configuration. systemd allows you to create any number of custom targets. It is suggested to start by adapting an existing target such as graphical.target.

  1. Copy the configuration file /usr/lib/systemd/system/graphical.target to /etc/systemd/system/MY_TARGET.target and adjust it according to your needs.

  2. The configuration file copied in the previous step already covers the required (hard) dependencies for the target. To also cover the wanted (soft) dependencies, create a directory /etc/systemd/system/MY_TARGET.target.wants.

  3. For each wanted service, create a symbolic link from /usr/lib/systemd/system into /etc/systemd/system/MY_TARGET.target.wants (see the example after this procedure).

  4. Once you have finished setting up the target, reload the systemd configuration to make the new target available:

    systemctl daemon-reload
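
Assuming a custom target named my.target that should additionally pull in postfix.service (both names are merely illustrative), the whole procedure could look like this:

root # cp /usr/lib/systemd/system/graphical.target /etc/systemd/system/my.target
root # mkdir /etc/systemd/system/my.target.wants
root # ln -s /usr/lib/systemd/system/postfix.service /etc/systemd/system/my.target.wants/
root # systemctl daemon-reload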

14.6 Advanced Usage

The following sections cover advanced topics for system administrators. For even more advanced systemd documentation, refer to Lennart Pöttering's series about systemd for administrators at http://0pointer.de/blog/projects.

14.6.1 Cleaning Temporary Directories

systemd supports cleaning temporary directories regularly. The configuration from the previous system version is automatically migrated and active. tmpfiles.d—which is responsible for managing temporary files—reads its configuration from /etc/tmpfiles.d/*.conf, /run/tmpfiles.d/*.conf, and /usr/lib/tmpfiles.d/*.conf files. Configuration placed in /etc/tmpfiles.d/*.conf overrides related configurations from the other two directories (/usr/lib/tmpfiles.d/*.conf is where packages store their configuration files).

The configuration format is one line per path containing action and path, and optionally mode, ownership, age and argument fields, depending on the action. The following example unlinks the X11 lock files:

Type Path               Mode UID  GID  Age Argument
r    /tmp/.X[0-9]*-lock
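
As another example, the following hypothetical line would create the directory /run/foo at boot with the given mode and ownership and remove files in it that are older than ten days:

Type Path      Mode UID  GID  Age Argument
d    /run/foo  0755 root root 10d -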

To get the status of the tmpfiles timer:

systemctl status systemd-tmpfiles-clean.timer
systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories
 Loaded: loaded (/usr/lib/systemd/system/systemd-tmpfiles-clean.timer; static)
 Active: active (waiting) since Tue 2014-09-09 15:30:36 CEST; 1 weeks 6 days ago
   Docs: man:tmpfiles.d(5)
         man:systemd-tmpfiles(8)

Sep 09 15:30:36 jupiter systemd[1]: Starting Daily Cleanup of Temporary Directories.
Sep 09 15:30:36 jupiter systemd[1]: Started Daily Cleanup of Temporary Directories.

For more information on temporary files handling, see man 5 tmpfiles.d.

14.6.2 System Log

Section 14.6.8, “Debugging Services” explains how to view log messages for a given service. However, displaying log messages is not restricted to service logs. You can also access and query the complete log messages written by systemd—the so-called Journal. Use the command journalctl to display the complete log messages starting with the oldest entries. Refer to man 1 journalctl for options such as applying filters or changing the output format.

14.6.3 Snapshots

You can save the current state of systemd to a named snapshot and later revert to it with the isolate subcommand. This is useful when testing services or custom targets, because it allows you to return to a defined state at any time. A snapshot is only available in the current session and will automatically be deleted on reboot. A snapshot name must end in .snapshot.

Create a Snapshot
systemctl snapshot MY_SNAPSHOT.snapshot
Delete a Snapshot
systemctl delete MY_SNAPSHOT.snapshot
View a Snapshot
systemctl show MY_SNAPSHOT.snapshot
Activate a Snapshot
systemctl isolate MY_SNAPSHOT.snapshot

14.6.4 Loading Kernel Modules

With systemd, kernel modules can automatically be loaded at boot time via a configuration file in /etc/modules-load.d. The file should be named MODULE.conf and have the following content:

# load module MODULE at boot time
MODULE
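
For example, to make sure the loop module (chosen here only as an illustration) is loaded at each boot, you could create /etc/modules-load.d/loop.conf with this content:

# load module loop at boot time
loop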

In case a package installs a configuration file for loading a kernel module, the file gets installed to /usr/lib/modules-load.d. If two configuration files with the same name exist, the one in /etc/modules-load.d takes precedence.

For more information, see the modules-load.d(5) man page.

14.6.5 Performing Actions before Loading a Service

With System V init, actions that needed to be performed before loading a service had to be specified in /etc/init.d/before.local. This procedure is no longer supported with systemd. If you need to perform actions before starting services, do the following:

Loading Kernel Modules

Create a drop-in file in the /etc/modules-load.d directory (see man modules-load.d for the syntax).

Creating Files or Directories, Cleaning-up Directories, Changing Ownership

Create a drop-in file in /etc/tmpfiles.d (see man tmpfiles.d for the syntax).

Other Tasks

Create a system service file, for example /etc/systemd/system/before.service, from the following template:

[Unit]
Before=NAME OF THE SERVICE YOU WANT THIS SERVICE TO BE STARTED BEFORE
[Service]
Type=oneshot
RemainAfterExit=true
ExecStart=YOUR_COMMAND
# beware, executable is run directly, not through a shell, check the man pages
# systemd.service and systemd.unit for full syntax
[Install]
# target in which to start the service
WantedBy=multi-user.target
#WantedBy=graphical.target

When the service file is created, you should run the following commands (as root):

systemctl daemon-reload
systemctl enable before

Every time you modify the service file, you need to run:

systemctl daemon-reload

14.6.6 Kernel Control Groups (cgroups)

On a traditional System V init system it is not always possible to clearly assign a process to the service that spawned it. Some services, such as Apache, spawn a lot of third-party processes (for example CGI or Java processes), which themselves spawn more processes. This makes a clear assignment difficult or even impossible. Additionally, a service may not terminate correctly, leaving some children alive.

systemd solves this problem by placing each service into its own cgroup. cgroups are a kernel feature that allows aggregating processes and all their children into hierarchically organized groups. systemd names each cgroup after its service. Since a non-privileged process is not allowed to leave its cgroup, this provides an effective way to label all processes spawned by a service with the name of the service.

To list all processes belonging to a service, use the command systemd-cgls. The result will look like the following (shortened) example:

Example 14.3: List all Processes Belonging to a Service
root # systemd-cgls --no-pager
├─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 20
├─user.slice
│ └─user-1000.slice
│   ├─session-102.scope
│   │ ├─12426 gdm-session-worker [pam/gdm-password]
│   │ ├─15831 gdm-session-worker [pam/gdm-password]
│   │ ├─15839 gdm-session-worker [pam/gdm-password]
│   │ ├─15858 /usr/lib/gnome-terminal-server

[...]

└─system.slice
  ├─systemd-hostnamed.service
  │ └─17616 /usr/lib/systemd/systemd-hostnamed
  ├─cron.service
  │ └─1689 /usr/sbin/cron -n
  ├─ntpd.service
  │ └─1328 /usr/sbin/ntpd -p /var/run/ntp/ntpd.pid -g -u ntp:ntp -c /etc/ntp.conf
  ├─postfix.service
  │ ├─ 1676 /usr/lib/postfix/master -w
  │ ├─ 1679 qmgr -l -t fifo -u
  │ └─15590 pickup -l -t fifo -u
  ├─sshd.service
  │ └─1436 /usr/sbin/sshd -D

[...]

See Book “System Analysis and Tuning Guide”, Chapter 9 “Kernel Control Groups” for more information about cgroups.

14.6.7 Terminating Services (Sending Signals)

As explained in Section 14.6.6, “Kernel Control Groups (cgroups)”, it is not always possible to assign a process to its parent service process in a System V init system. This makes it difficult to terminate a service and all of its children. Child processes that have not been terminated will remain as zombie processes.

systemd's concept of confining each service into a cgroup makes it possible to clearly identify all child processes of a service and therefore allows you to send a signal to each of these processes. Use systemctl kill to send signals to services. For a list of available signals refer to man 7 signals.

Sending SIGTERM to a Service

SIGTERM is the default signal that is sent.

systemctl kill MY_SERVICE
Sending SIGNAL to a Service

Use the -s option to specify the signal that should be sent.

systemctl kill -s SIGNAL MY_SERVICE
Selecting Processes

By default the kill command sends the signal to all processes of the specified cgroup. You can restrict it to the control or the main process. The latter is, for example, useful to force a service to reload its configuration by sending SIGHUP:

systemctl kill -s SIGHUP --kill-who=main MY_SERVICE
Warning: Terminating or Restarting the D-Bus Service Is Not Supported

The D-Bus service is the message bus for communication between systemd clients and the systemd manager that is running as pid 1. Even though dbus is a stand-alone daemon, it is an integral part of the init infrastructure.

Terminating dbus or restarting it in the running system is similar to an attempt to terminate or restart pid 1. It will break systemd client/server communication and make most systemd functions unusable.

Therefore, terminating or restarting dbus is neither recommended nor supported.

14.6.8 Debugging Services

By default, systemd is not overly verbose. If a service was started successfully, no output will be produced. In case of a failure, a short error message will be displayed. However, systemctl status provides means to debug start-up and operation of a service.

systemd comes with its own logging mechanism (the Journal) that logs system messages. This allows you to display the service messages together with status messages. The status command works similarly to tail and can also display the log messages in different formats, making it a powerful debugging tool.

Show Service Start-Up Failure

Whenever a service fails to start, use systemctl status MY_SERVICE to get a detailed error message:

root # systemctl start apache2
Job failed. See system journal and 'systemctl status' for details.
root # systemctl status apache2
   Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
   Active: failed (Result: exit-code) since Mon, 04 Jun 2012 16:52:26 +0200; 29s ago
   Process: 3088 ExecStart=/usr/sbin/start_apache2 -D SYSTEMD -k start (code=exited, status=1/FAILURE)
   CGroup: name=systemd:/system/apache2.service

Jun 04 16:52:26 g144 start_apache2[3088]: httpd2-prefork: Syntax error on line
205 of /etc/apache2/httpd.conf: Syntax error on li...alHost>
Show Last N Service Messages

The default behavior of the status subcommand is to display the last ten messages a service issued. To change the number of messages to show, use the --lines=N parameter:

systemctl status ntp
systemctl --lines=20 status ntp
Show Service Messages in Append Mode

To display a live stream of service messages, use the --follow option, which works like tail -f:

systemctl --follow status ntp
Messages Output Format

The --output=MODE parameter allows you to change the output format of service messages. The most important modes available are (see the example after this list):

short

The default format. Shows the log messages with a human readable time stamp.

verbose

Full output with all fields.

cat

Terse output without time stamps.
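
For example, to display the log messages of the ntp service used above with all fields:

systemctl --output=verbose status ntp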

14.7 More Information

For more information on systemd refer to the following online resources:

Homepage

http://www.freedesktop.org/wiki/Software/systemd

systemd for Administrators

Lennart Pöttering, one of the systemd authors, has written a series of blog entries (13 at the time of writing this chapter). Find them at http://0pointer.de/blog/projects.

Part III System

15 32-Bit and 64-Bit Applications in a 64-Bit System Environment

SUSE® Linux Enterprise Desktop is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. SUSE Linux Enterprise Desktop supports the use of 32-bit applications in a 64-bit system environment. This chapter offers …

16 journalctl: Query the systemd Journal

When systemd replaced traditional init scripts in SUSE Linux Enterprise 12 (see Chapter 14, The systemd Daemon), it introduced its own logging system called journal. There is no need to run a syslog based service anymore, as all system events are written in the journal.

17 Basic Networking

Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.

18 Printer Operation

SUSE® Linux Enterprise Desktop supports printing with many types of printers, including remote network printers. Printers can be configured manually or with YaST. For configuration instructions, refer to Book “Deployment Guide”, Chapter 8 “Setting Up Hardware Components with YaST”, Section 8.3 “Sett…

19 The X Window System

The X Window System (X11) is the de facto standard for graphical user interfaces in Unix. X is network-based, enabling applications started on one host to be displayed on another host connected over any kind of network (LAN or Internet). This chapter provides basic information on the X configuration…

20 Accessing File Systems with FUSE

FUSE is the acronym for file system in user space. This means you can configure and mount a file system as an unprivileged user. Normally, you need to be root for this task. FUSE alone is a kernel module. Combined with plug-ins, it allows you to extend FUSE to access almost all file systems like remote SSH connections, ISO images, and more.

21 Managing Kernel Modules

Although Linux is a monolithic kernel, it can be extended using kernel modules. These are special objects that can be inserted into the kernel and removed on demand. In practical terms, kernel modules make it possible to add and remove drivers and interfaces that are not included in the kernel itsel…

22 Dynamic Kernel Device Management with udev

The kernel can add or remove almost any device in a running system. Changes in the device state (whether a device is plugged in or removed) need to be propagated to user space. Devices need to be configured when they are plugged in and recognized. Users of a certain device need to be informed about …

23 Live Patching the Linux Kernel Using kGraft

This document describes the basic principles of the kGraft live patching technology and provides usage guidelines for the SLE Live Patching service.

kGraft is a live patching technology for runtime patching of the Linux kernel, without stopping the kernel. This maximizes system uptime, and thus system availability, which is important for mission-critical systems. By allowing dynamic patching of the kernel, the technology also encourages users to install critical security updates without deferring them to a scheduled downtime.

A kGraft patch is a kernel module, intended for replacing whole functions in the kernel. kGraft primarily offers in-kernel infrastructure for integration of the patched code with base kernel code at runtime.

SLE Live Patching is a service provided on top of regular SUSE Linux Enterprise Server maintenance. kGraft patches distributed through SLE Live Patching supplement regular SLES maintenance updates. Common update stack and procedures can be used for SLE Live Patching deployment.

The information provided in this document relates to the AMD64/Intel 64 and POWER architectures. In case you use a different architecture, the procedures may differ.

24 Special System Features

This chapter starts with information about various software packages, the virtual consoles and the keyboard layout. We talk about software components like bash, cron and logrotate, because they were changed or enhanced during the last release cycles. Even if they are small or considered of minor importance, users may want to change their default behavior, because these components are often closely coupled with the system. The chapter concludes with a section about language and country-specific settings (I18N and L10N).

25 Persistent Memory

This chapter contains additional information about using SUSE Linux Enterprise Desktop with non-volatile main memory, also known as Persistent Memory, comprising one or more NVDIMMs.

15 32-Bit and 64-Bit Applications in a 64-Bit System Environment

SUSE® Linux Enterprise Desktop is available for 64-bit platforms. This does not necessarily mean that all the applications included have already been ported to 64-bit platforms. SUSE Linux Enterprise Desktop supports the use of 32-bit applications in a 64-bit system environment. This chapter offers a brief overview of how this support is implemented on 64-bit SUSE Linux Enterprise Desktop platforms.

SUSE Linux Enterprise Desktop for the 64-bit platforms amd64 and Intel 64 is designed so that existing 32-bit applications run in the 64-bit environment out-of-the-box. This support means that you can continue to use your preferred 32-bit applications without waiting for a corresponding 64-bit port to become available.

Note: No Support for Building 32-bit Applications

SUSE Linux Enterprise Desktop does not support compiling 32-bit applications; it only offers runtime support for 32-bit binaries.

15.1 Runtime Support

Important: Conflicts Between Application Versions

If an application is available both for 32-bit and 64-bit environments, parallel installation of both versions is bound to lead to problems. In such cases, decide on one of the two versions and install and use it.

An exception to this rule is PAM (pluggable authentication modules). SUSE Linux Enterprise Desktop uses PAM in the authentication process as a layer that mediates between user and application. On a 64-bit operating system that also runs 32-bit applications it is necessary to always install both versions of a PAM module.

To be executed correctly, every application requires a range of libraries. Unfortunately, the names for the 32-bit and 64-bit versions of these libraries are identical. They must be differentiated from each other in another way.

To retain compatibility with the 32-bit version, the libraries are stored at the same place in the system as in the 32-bit environment. The 32-bit version of libc.so.6 is located under /lib/libc.so.6 in both the 32-bit and 64-bit environments.

All 64-bit libraries and object files are located in directories called lib64. The 64-bit object files that you would normally expect to find under /lib and /usr/lib are now found under /lib64 and /usr/lib64. This means that there is space for the 32-bit libraries under /lib and /usr/lib, so the file name for both versions can remain unchanged.
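
If in doubt which variant a given library or binary is, the file command reveals the word size of an ELF object (exact output wording may vary, and the 32-bit file is only present if the 32-bit runtime is installed):

file /lib/libc.so.6    # expected to report a 32-bit ELF shared object
file /lib64/libc.so.6  # expected to report a 64-bit ELF shared object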

Subdirectories of 32-bit /lib directories which contain data content that does not depend on the word size are not moved. This scheme conforms to LSB (Linux Standards Base) and FHS (File System Hierarchy Standard).

15.2 Kernel Specifications

The 64-bit kernels for AMD64/Intel 64 offer both a 64-bit and a 32-bit kernel ABI (application binary interface). The latter is identical with the ABI for the corresponding 32-bit kernel. This means that the 32-bit application can communicate with the 64-bit kernel in the same way as with the 32-bit kernel.

The 32-bit emulation of system calls for a 64-bit kernel does not support all the APIs used by system programs. This depends on the platform. For this reason, a few applications, like lspci, must be compiled as 64-bit programs to function properly.

A 64-bit kernel can only load 64-bit kernel modules that have been specially compiled for this kernel. It is not possible to use 32-bit kernel modules.

Tip: Kernel-loadable Modules

Some applications require separate kernel-loadable modules. If you intend to use such a 32-bit application in a 64-bit system environment, contact the provider of this application and SUSE to make sure that the 64-bit version of the kernel-loadable module and the 32-bit compiled version of the kernel API are available for this module.

16 journalctl: Query the systemd Journal

When systemd replaced traditional init scripts in SUSE Linux Enterprise 12 (see Chapter 14, The systemd Daemon), it introduced its own logging system called journal. There is no need to run a syslog based service anymore, as all system events are written in the journal.

The journal itself is a system service managed by systemd. Its full name is systemd-journald.service. It collects and stores logging data by maintaining structured indexed journals based on logging information received from the kernel, user processes, standard input, and system service errors. The systemd-journald service is on by default:

# systemctl status systemd-journald
systemd-journald.service - Journal Service
   Loaded: loaded (/usr/lib/systemd/system/systemd-journald.service; static)
   Active: active (running) since Mon 2014-05-26 08:36:59 EDT; 3 days ago
     Docs: man:systemd-journald.service(8)
           man:journald.conf(5)
 Main PID: 413 (systemd-journal)
   Status: "Processing requests..."
   CGroup: /system.slice/systemd-journald.service
           └─413 /usr/lib/systemd/systemd-journald
[...]

16.1 Making the Journal Persistent

The journal stores log data in /run/log/journal/ by default. Because the /run/ directory is volatile by nature, log data is lost at reboot. To make the log data persistent, the directory /var/log/journal/ with correct ownership and permissions must exist, where the systemd-journald service can store its data. systemd will create the directory for you—and switch to persistent logging—if you do the following:

  1. As root, open /etc/systemd/journald.conf for editing.

    # vi /etc/systemd/journald.conf
  2. Uncomment the line containing Storage= and change it to

    [...]
    [Journal]
    Storage=persistent
    #Compress=yes
    [...]
  3. Save the file and restart systemd-journald:

    systemctl restart systemd-journald

16.2 journalctl Useful Switches

This section introduces several common useful options to enhance the default journalctl behavior. All switches are described in the journalctl manual page, man 1 journalctl.

Tip: Messages Related to a Specific Executable

To show all journal messages related to a specific executable, specify the full path to the executable:

journalctl /usr/lib/systemd/systemd
-f

Shows only the most recent journal messages, and prints new log entries as they are added to the journal.

-e

Prints the messages and jumps to the end of the journal, so that the latest entries are visible within the pager.

-r

Prints the messages of the journal in reverse order, so that the latest entries are listed first.

-k

Shows only kernel messages. This is equivalent to the field match _TRANSPORT=kernel (see Section 16.3.3, “Filtering Based on Fields”).

-u

Shows only messages for the specified systemd unit. This is equivalent to the field match _SYSTEMD_UNIT=UNIT (see Section 16.3.3, “Filtering Based on Fields”).

# journalctl -u apache2
[...]
Jun 03 10:07:11 pinkiepie systemd[1]: Starting The Apache Webserver...
Jun 03 10:07:12 pinkiepie systemd[1]: Started The Apache Webserver.

16.3 Filtering the Journal Output

When called without switches, journalctl shows the full content of the journal, the oldest entries listed first. The output can be filtered by specific switches and fields.

16.3.1 Filtering Based on a Boot Number

journalctl can filter messages based on a specific system boot. To list all available boots, run

# journalctl --list-boots
-1 097ed2cd99124a2391d2cffab1b566f0 Mon 2014-05-26 08:36:56 EDT—Fri 2014-05-30 05:33:44 EDT
 0 156019a44a774a0bb0148a92df4af81b Fri 2014-05-30 05:34:09 EDT—Fri 2014-05-30 06:15:01 EDT

The first column lists the boot offset: 0 for the current boot, -1 for the previous one, -2 for the one prior to that, etc. The second column contains the boot ID followed by the limiting time stamps of the specific boot.

Show all messages from the current boot:

# journalctl -b

If you need to see journal messages from the previous boot, add an offset parameter. The following example outputs the previous boot messages:

# journalctl -b -1

Another way is to list boot messages based on the boot ID. For this purpose, use the _BOOT_ID field:

# journalctl _BOOT_ID=156019a44a774a0bb0148a92df4af81b

16.3.2 Filtering Based on Time Interval

You can filter the output of journalctl by specifying the starting and/or ending date. The date specification should be of the format "2014-06-30 9:17:16". If the time part is omitted, midnight is assumed. If seconds are omitted, ":00" is assumed. If the date part is omitted, the current day is assumed. Instead of numeric expression, you can specify the keywords "yesterday", "today", or "tomorrow". They refer to midnight of the day before the current day, of the current day, or of the day after the current day. If you specify "now", it refers to the current time. You can also specify relative times prefixed with - or +, referring to times before or after the current time.

Show only new messages since now, and update the output continuously:

# journalctl --since "now" -f

Show all messages since last midnight till 3:20am:

# journalctl --since "today" --until "3:20"
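
Relative values work as well. For example, show everything logged during the last two hours (the interval is an arbitrary illustration):

# journalctl --since "-2h"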

16.3.3 Filtering Based on Fields

You can filter the output of the journal by specific fields. The syntax of a field to be matched is FIELD_NAME=MATCHED_VALUE, such as _SYSTEMD_UNIT=httpd.service. You can specify multiple matches in a single query to filter the output messages even more. See man 7 systemd.journal-fields for a list of default fields.

Show messages produced by a specific process ID:

# journalctl _PID=1039

Show messages belonging to a specific user ID:

# journalctl _UID=1000

Show messages from the kernel ring buffer (the same as dmesg produces):

# journalctl _TRANSPORT=kernel

Show messages from the service's standard or error output:

# journalctl _TRANSPORT=stdout

Show messages produced by a specified service only:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service

If two different fields are specified, only entries that match both expressions at the same time are shown:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1488

If two matches refer to the same field, all entries matching either expression are shown:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _SYSTEMD_UNIT=dbus.service

You can use the '+' separator to combine two expressions in a logical 'OR'. The following example shows all messages from the Avahi service process with the process ID 1480 together with all messages from the D-Bus service:

# journalctl _SYSTEMD_UNIT=avahi-daemon.service _PID=1480 + _SYSTEMD_UNIT=dbus.service

16.4 Investigating systemd Errors

This section introduces a simple example to illustrate how to find and fix the error reported by systemd during apache2 start-up.

  1. Try to start the apache2 service:

    # systemctl start apache2
    Job for apache2.service failed. See 'systemctl status apache2' and 'journalctl -xn' for details.
  2. Let us see what the service's status says:

    # systemctl status apache2
    apache2.service - The Apache Webserver
       Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
       Active: failed (Result: exit-code) since Tue 2014-06-03 11:08:13 CEST; 7min ago
      Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND \
               -k graceful-stop (code=exited, status=1/FAILURE)

    The ID of the process causing the failure is 11026.

  3. Show the verbose version of messages related to process ID 11026:

    # journalctl -o verbose _PID=11026
    [...]
    MESSAGE=AH00526: Syntax error on line 6 of /etc/apache2/default-server.conf:
    [...]
    MESSAGE=Invalid command 'DocumenttRoot', perhaps misspelled or defined by a module
    [...]
  4. Fix the typo inside /etc/apache2/default-server.conf, start the apache2 service, and print its status:

    # systemctl start apache2 && systemctl status apache2
    apache2.service - The Apache Webserver
       Loaded: loaded (/usr/lib/systemd/system/apache2.service; disabled)
       Active: active (running) since Tue 2014-06-03 11:26:24 CEST; 4ms ago
      Process: 11026 ExecStop=/usr/sbin/start_apache2 -D SYSTEMD -DFOREGROUND
               -k graceful-stop (code=exited, status=1/FAILURE)
     Main PID: 11263 (httpd2-prefork)
       Status: "Processing requests..."
       CGroup: /system.slice/apache2.service
               ├─11263 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11280 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11281 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11282 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               ├─11283 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]
               └─11285 /usr/sbin/httpd2-prefork -f /etc/apache2/httpd.conf -D [...]

16.5 Journald Configuration

The behavior of the systemd-journald service can be adjusted by modifying /etc/systemd/journald.conf. This section introduces only basic option settings. For a complete file description, see man 5 journald.conf. Note that you need to restart the journal for the changes to take effect:

# systemctl restart systemd-journald

16.5.1 Changing the Journal Size Limit

If the journal log data is saved to a persistent location (see Section 16.1, “Making the Journal Persistent”), it uses up to 10% of the file system that /var/log/journal resides on. For example, if /var/log/journal is located on a 30 GB /var partition, the journal may use up to 3 GB of the disk space. To change this limit, change (and uncomment) the SystemMaxUse option:

SystemMaxUse=50M

16.5.2 Forwarding the Journal to /dev/ttyX

You can forward the journal to a terminal device to inform you about system messages on a preferred terminal screen, for example /dev/tty12. Change the following journald options:

ForwardToConsole=yes
TTYPath=/dev/tty12

16.5.3 Forwarding the Journal to Syslog Facility

Journald is backward compatible with traditional syslog implementations such as rsyslog. Make sure the following requirements are met:

  • rsyslog is installed.

    # rpm -q rsyslog
    rsyslog-7.4.8-2.16.x86_64
  • rsyslog service is enabled.

    # systemctl is-enabled rsyslog
    enabled
  • Forwarding to syslog is enabled in /etc/systemd/journald.conf.

    ForwardToSyslog=yes

16.6 Using YaST to Filter the systemd Journal

For an easy way of filtering the systemd journal (without having to deal with the journalctl syntax), you can use the YaST journal module. After installing it with sudo zypper in yast2-journal, start it from YaST by selecting System › Systemd Journal. Alternatively, start it from command line by entering sudo yast2 journal.

Figure 16.1: YaST systemd Journal

The module displays the log entries in a table. The search box on top allows you to search for entries that contain certain characters, similar to using grep. To filter the entries by date and time, unit, file, or priority, click Change filters and set the respective options.

17 Basic Networking

Abstract

Linux offers the necessary networking tools and features for integration into all types of network structures. Network access using a network card can be configured with YaST. Manual configuration is also possible. In this chapter only the fundamental mechanisms and the relevant network configuration files are covered.

Linux and other Unix operating systems use the TCP/IP protocol. It is not a single network protocol, but a family of network protocols that offer various services. The protocols listed in Several Protocols in the TCP/IP Protocol Family are provided for exchanging data between two machines via TCP/IP. Networks combined by TCP/IP, comprising a worldwide network, are also called the Internet.

RFC stands for Request for Comments. RFCs are documents that describe various Internet protocols and implementation procedures for the operating system and its applications. The RFC documents describe the setup of Internet protocols. For more information about RFCs, see http://www.ietf.org/rfc.html.

Several Protocols in the TCP/IP Protocol Family
TCP

Transmission Control Protocol: a connection-oriented secure protocol. The data to transmit is first sent by the application as a stream of data and converted into the appropriate format by the operating system. The data arrives at the respective application on the destination host in the original data stream format in which it was initially sent. TCP determines whether any data has been lost or jumbled during the transmission. TCP is implemented wherever the data sequence matters.

UDP

User Datagram Protocol: a connectionless, insecure protocol. The data to transmit is sent in the form of packets generated by the application. The order in which the data arrives at the recipient is not guaranteed and data loss is possible. UDP is suitable for record-oriented applications. It features a smaller latency period than TCP.

ICMP

Internet Control Message Protocol: This is not a protocol for the end user, but a special control protocol that issues error reports and can control the behavior of machines participating in TCP/IP data transfer. In addition, it provides a special echo mode that can be viewed using the program ping.

IGMP

Internet Group Management Protocol: This protocol controls machine behavior when implementing IP multicast.

As shown in Figure 17.1, “Simplified Layer Model for TCP/IP”, data exchange takes place in different layers. The actual network layer is the insecure data transfer via IP (Internet protocol). On top of IP, TCP (transmission control protocol) guarantees, to a certain extent, security of the data transfer. The IP layer is supported by the underlying hardware-dependent protocol, such as Ethernet.

Figure 17.1: Simplified Layer Model for TCP/IP

The diagram provides one or two examples for each layer. The layers are ordered according to abstraction levels. The lowest layer is very close to the hardware. The uppermost layer, however, is almost a complete abstraction from the hardware. Every layer has its own special function. The special functions of each layer are mostly implicit in their description. The data link and physical layers represent the physical network used, such as Ethernet.

Almost all hardware protocols work on a packet-oriented basis. The data to transmit is collected into packets (it cannot be sent all at once). The maximum size of a TCP/IP packet is approximately 64 KB. Packets are normally much smaller, as the network hardware can be a limiting factor. The maximum size of a data packet on an Ethernet is about 1500 bytes. The size of a TCP/IP packet is limited to this amount when the data is sent over an Ethernet. If more data is transferred, more data packets need to be sent by the operating system.

For the layers to serve their designated functions, additional information regarding each layer must be saved in the data packet. This takes place in the header of the packet. Every layer attaches a small block of data, called the protocol header, to the front of each emerging packet. A sample TCP/IP data packet traveling over an Ethernet cable is illustrated in Figure 17.2, “TCP/IP Ethernet Packet”. The checksum is located at the end of the packet, not at the beginning. This simplifies things for the network hardware.

Figure 17.2: TCP/IP Ethernet Packet

When an application sends data over the network, the data passes through each layer, all implemented in the Linux kernel except the physical layer. Each layer is responsible for preparing the data so it can be passed to the next layer. The lowest layer is ultimately responsible for sending the data. The entire procedure is reversed when data is received. Like the layers of an onion, in each layer the protocol headers are removed from the transported data. Finally, the transport layer is responsible for making the data available for use by the applications at the destination. In this manner, one layer only communicates with the layer directly above or below it. For applications, it is irrelevant whether data is transmitted via a 100 Mbit/s FDDI network or via a 56-Kbit/s modem line. Likewise, it is irrelevant for the data line which kind of data is transmitted, as long as packets are in the correct format.

17.1 IP Addresses and Routing

The discussion in this section is limited to IPv4 networks. For information about IPv6 protocol, the successor to IPv4, refer to Section 17.2, “IPv6—The Next Generation Internet”.

17.1.1 IP Addresses

Every computer on the Internet has a unique 32-bit address. These 32 bits (or 4 bytes) are normally written as illustrated in the second row in Example 17.1, “Writing IP Addresses”.

Example 17.1: Writing IP Addresses
IP Address (binary):  11000000 10101000 00000000 00010100
IP Address (decimal):      192.     168.       0.      20

In decimal form, the four bytes are written in the decimal number system, separated by periods. The IP address is assigned to a host or a network interface. It can be used only once throughout the world. There are exceptions to this rule, but these are not relevant to the following passages.

The points in IP addresses indicate the hierarchical system. Until the 1990s, IP addresses were strictly categorized in classes. However, this system proved too inflexible and was discontinued. Now, classless routing (CIDR, classless interdomain routing) is used.

17.1.2 Netmasks and Routing

Netmasks are used to define the address range of a subnet. If two hosts are in the same subnet, they can reach each other directly. If they are not in the same subnet, they need the address of a gateway that handles all the traffic for the subnet. To check if two IP addresses are in the same subnet, simply AND both addresses with the netmask. If the result is identical, both IP addresses are in the same local network. If there are differences, the remote IP address, and thus the remote interface, can only be reached over a gateway.

To understand how the netmask works, look at Example 17.2, “Linking IP Addresses to the Netmask”. The netmask consists of 32 bits that identify how much of an IP address belongs to the network. All those bits that are 1 mark the corresponding bit in the IP address as belonging to the network. All bits that are 0 mark bits inside the subnet. This means that the more bits are 1, the smaller the subnet is. Because the netmask always consists of several successive 1 bits, it is also possible to count the number of bits in the netmask. In Example 17.2, “Linking IP Addresses to the Netmask” the first net with 24 bits could also be written as 192.168.0.0/24.

Example 17.2: Linking IP Addresses to the Netmask
IP address (192.168.0.20):  11000000 10101000 00000000 00010100
Netmask   (255.255.255.0):  11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11000000 10101000 00000000 00000000
In the decimal system:           192.     168.       0.       0

IP address (213.95.15.200): 11010101 10111111 00001111 11001000
Netmask    (255.255.255.0): 11111111 11111111 11111111 00000000
---------------------------------------------------------------
Result of the link:         11010101 10111111 00001111 00000000
In the decimal system:           213.      95.      15.       0
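
The same AND operation can be reproduced in a small Bash sketch (the helper name ip2int is made up for this example):

#!/bin/bash
# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip2int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

NETMASK=255.255.255.0
# AND each address with the netmask; identical results mean the same subnet.
NET1=$(( $(ip2int 192.168.0.20) & $(ip2int "$NETMASK") ))
NET2=$(( $(ip2int 213.95.15.200) & $(ip2int "$NETMASK") ))
if [ "$NET1" -eq "$NET2" ]; then
  echo "same subnet"
else
  echo "different subnet"
fi

Run with the two addresses from Example 17.2, the script prints "different subnet", meaning the second host can only be reached over a gateway.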

To give another example: all machines connected with the same Ethernet cable are usually located in the same subnet and are directly accessible. Even when the subnet is physically divided by switches or bridges, these hosts can still be reached directly.

IP addresses outside the local subnet can only be reached if a gateway is configured for the target network. In the most common case, there is only one gateway that handles all external traffic. However, it is also possible to configure several gateways for different subnets.

If a gateway has been configured, all external IP packets are sent to the appropriate gateway. This gateway then attempts to forward the packets in the same manner—from host to host—until it reaches the destination host or the packet's TTL (time to live) expires.

Specific Addresses
Base Network Address

This is the netmask ANDed with any address in the network, as shown in Example 17.2, “Linking IP Addresses to the Netmask” under Result of the link. This address cannot be assigned to any hosts.

Broadcast Address

This could be paraphrased as: Access all hosts in this subnet. To generate this, the netmask is inverted in binary form and linked to the base network address with a logical OR (see the sketch after this list). The above example therefore results in 192.168.0.255. This address cannot be assigned to any hosts.

Local Host

The address 127.0.0.1 is assigned to the loopback device on each host. A connection can be set up to your own machine with this address and with all addresses from the complete 127.0.0.0/8 loopback network as defined with IPv4. With IPv6 there is only one loopback address (::1).
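
Continuing the Bash sketch from above (with the ip2int helper defined there), the broadcast address of the first example network can be derived by inverting the netmask and ORing the result with the base network address:

BASE=$(( $(ip2int 192.168.0.20) & $(ip2int 255.255.255.0) ))
BCAST=$(( BASE | ( ~$(ip2int 255.255.255.0) & 0xFFFFFFFF ) ))
printf '%d.%d.%d.%d\n' $(( BCAST >> 24 & 255 )) $(( BCAST >> 16 & 255 )) \
  $(( BCAST >> 8 & 255 )) $(( BCAST & 255 ))
# prints 192.168.0.255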

Because IP addresses must be unique all over the world, you cannot select random addresses. There are three address domains to use if you want to set up a private IP-based network. These cannot get any connection from the rest of the Internet, because they cannot be transmitted over the Internet. These address domains are specified in RFC 1597 and listed in Table 17.1, “Private IP Address Domains”.

Table 17.1: Private IP Address Domains

Network/Netmask            Domain
10.0.0.0/255.0.0.0         10.x.x.x
172.16.0.0/255.240.0.0     172.16.x.x to 172.31.x.x
192.168.0.0/255.255.0.0    192.168.x.x

17.2 IPv6—The Next Generation Internet

Due to the emergence of the World Wide Web (WWW), the Internet has experienced explosive growth, with an increasing number of computers communicating via TCP/IP in the past fifteen years. Since Tim Berners-Lee at CERN (http://public.web.cern.ch) invented the WWW in 1990, the number of Internet hosts has grown from a few thousand to about a hundred million.

As mentioned, an IPv4 address consists of only 32 bits. Also, quite a few IP addresses are lost—they cannot be used because of the way in which networks are organized. The number of addresses available in your subnet is two to the power of the number of bits, minus two. A subnet has, for example, 2, 6, or 14 addresses available. To connect 128 hosts to the Internet, for example, you need a subnet with 256 IP addresses, from which only 254 are usable, because two IP addresses are needed for the structure of the subnet itself: the broadcast and the base network address.

Under the current IPv4 protocol, DHCP or NAT (network address translation) are the typical mechanisms used to circumvent the potential address shortage. Combined with the convention to keep private and public address spaces separate, these methods can certainly mitigate the shortage. The problem with them lies in their configuration, which is a chore to set up and a burden to maintain. To set up a host in an IPv4 network, you need several address items, such as the host's own IP address, the netmask, the gateway address and maybe a name server address. All these items need to be known and cannot be derived from somewhere else.

With IPv6, both the address shortage and the complicated configuration should be a thing of the past. The following sections tell more about the improvements and benefits brought by IPv6 and about the transition from the old protocol to the new one.

17.2.1 Advantages

The most important and most visible improvement brought by the new protocol is the enormous expansion of the available address space. An IPv6 address is made up of 128 bits instead of the traditional 32 bits. This provides for as many as several quadrillion IP addresses.

However, IPv6 addresses are not only different from their predecessors with regard to their length. They also have a different internal structure that may contain more specific information about the systems and the networks to which they belong. More details about this are found in Section 17.2.2, “Address Types and Structure”.

The following is a list of other advantages of the new protocol:

Autoconfiguration

IPv6 makes the network plug and play capable, which means that a newly set up system integrates into the (local) network without any manual configuration. The new host uses its automatic configuration mechanism to derive its own address from the information made available by the neighboring routers, relying on a protocol called the neighbor discovery (ND) protocol. This method does not require any intervention on the administrator's part and there is no need to maintain a central server for address allocation—an additional advantage over IPv4, where automatic address allocation requires a DHCP server.

Nevertheless, if a router is connected to a switch, the router should send periodic advertisements with flags telling the hosts of a network how they should interact with each other. For more information, see RFC 2462 and the radvd.conf(5) man page, and RFC 3315.

Mobility

IPv6 makes it possible to assign several addresses to one network interface at the same time. This allows users to access several networks easily, something that could be compared with the international roaming services offered by mobile phone companies. When you take your mobile phone abroad, the phone automatically logs in to a foreign service when it enters the corresponding area, so you can be reached under the same number everywhere and can place an outgoing call, as you would in your home area.

Secure Communication

With IPv4, network security is an add-on function. IPv6 includes IPsec as one of its core features, allowing systems to communicate over a secure tunnel to avoid eavesdropping by outsiders on the Internet.

Backward Compatibility

Realistically, it would be impossible to switch the entire Internet from IPv4 to IPv6 at one time. Therefore, it is crucial that both protocols can coexist not only on the Internet, but also on one system. This is ensured by compatible addresses (IPv4 addresses can easily be translated into IPv6 addresses) and by using several tunnels. See Section 17.2.3, “Coexistence of IPv4 and IPv6”. Also, systems can rely on a dual stack IP technique to support both protocols at the same time, meaning that they have two network stacks that are completely separate, such that there is no interference between the two protocol versions.

Custom Tailored Services through Multicasting

With IPv4, some services, such as SMB, need to broadcast their packets to all hosts in the local network. IPv6 allows a much more fine-grained approach by enabling servers to address hosts through multicasting, that is by addressing several hosts as parts of a group. This is different from addressing all hosts through broadcasting or each host individually through unicasting. Which hosts are addressed as a group may depend on the concrete application. There are some predefined groups to address all name servers (the all name servers multicast group), for example, or all routers (the all routers multicast group).

17.2.2 Address Types and Structure

As mentioned, the current IP protocol has two major limitations: there is an increasing shortage of IP addresses, and configuring the network and maintaining the routing tables is becoming a more complex and burdensome task. IPv6 solves the first problem by expanding the address space to 128 bits. The second one is mitigated by introducing a hierarchical address structure combined with sophisticated techniques to allocate network addresses, and multihoming (the ability to assign several addresses to one device, giving access to several networks).

When dealing with IPv6, it is useful to know about three different types of addresses:

Unicast

Addresses of this type are associated with exactly one network interface. Packets with such an address are delivered to only one destination. Accordingly, unicast addresses are used to transfer packets to individual hosts on the local network or the Internet.

Multicast

Addresses of this type relate to a group of network interfaces. Packets with such an address are delivered to all destinations that belong to the group. Multicast addresses are mainly used by certain network services to communicate with certain groups of hosts in a well-directed manner.

Anycast

Addresses of this type are related to a group of interfaces. Packets with such an address are delivered to the member of the group that is closest to the sender, according to the principles of the underlying routing protocol. Anycast addresses are used to make it easier for hosts to find out about servers offering certain services in the given network area. All servers of the same type have the same anycast address. Whenever a host requests a service, it receives a reply from the server with the closest location, as determined by the routing protocol. If this server should fail for some reason, the protocol automatically selects the second closest server, then the third one, and so forth.

An IPv6 address is made up of eight four-digit fields, each representing 16 bits, written in hexadecimal notation. They are separated by colons (:). Any leading zero bytes within a given field may be dropped, but zeros within the field or at its end may not. Another convention is that more than four consecutive zero bytes may be collapsed into a double colon. However, only one such :: is allowed per address. This kind of shorthand notation is shown in Example 17.3, “Sample IPv6 Address”, where all three lines represent the same address.

Example 17.3: Sample IPv6 Address
fe80 : 0000 : 0000 : 0000 : 0000 : 10 : 1000 : 1a4
fe80 :    0 :    0 :    0 :    0 : 10 : 1000 : 1a4
fe80 :                           : 10 : 1000 : 1a4

Each part of an IPv6 address has a defined function. The first bytes form the prefix and specify the type of address. The center part is the network portion of the address, but it may be unused. The end of the address forms the host part. With IPv6, the netmask is defined by indicating the length of the prefix after a slash at the end of the address. An address, as shown in Example 17.4, “IPv6 Address Specifying the Prefix Length”, contains the information that the first 64 bits form the network part of the address and the last 64 form its host part. In other words, the 64 means that the netmask is filled with 64 1-bit values from the left. As with IPv4, the IP address is ANDed with the netmask to determine whether the host is located in the same subnet or in another one.

Example 17.4: IPv6 Address Specifying the Prefix Length
fe80::10:1000:1a4/64
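
On a running system, the assigned IPv6 addresses together with their prefix lengths can be displayed with the ip tool (eth0 is an assumed interface name):

ip -6 addr show dev eth0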

IPv6 knows about several predefined types of prefixes. Some are shown in Various IPv6 Prefixes.

Various IPv6 Prefixes
00

IPv4 addresses and IPv4 over IPv6 compatibility addresses. These are used to maintain compatibility with IPv4. Their use still requires a router able to translate IPv6 packets into IPv4 packets. Several special addresses, such as the one for the loopback device, have this prefix as well.

2 or 3 as the first digit

Aggregatable global unicast addresses. As is the case with IPv4, an interface can be assigned to form part of a certain subnet. Currently, there are the following address spaces: 2001::/16 (production quality address space) and 2002::/16 (6to4 address space).

fe80::/10

Link-local addresses. Addresses with this prefix should not be routed and should therefore only be reachable from within the same subnet.

fec0::/10

Site-local addresses. These may be routed, but only within the network of the organization to which they belong. In effect, they are the IPv6 equivalent of the current private network address space, such as 10.x.x.x.

ff

These are multicast addresses.

A unicast address consists of three basic components:

Public Topology

The first part (which also contains one of the prefixes mentioned above) is used to route packets through the public Internet. It includes information about the company or institution that provides the Internet access.

Site Topology

The second part contains routing information about the subnet to which to deliver the packet.

Interface ID

The third part identifies the interface to which to deliver the packet. This also allows for the MAC to form part of the address. Given that the MAC is a globally unique, fixed identifier coded into the device by the hardware maker, the configuration procedure is substantially simplified. In fact, the first 64 address bits are consolidated to form the EUI-64 token, with the last 48 bits taken from the MAC, and the remaining 24 bits containing special information about the token type. This also makes it possible to assign an EUI-64 token to interfaces that do not have a MAC, such as those based on PPP.

On top of this basic structure, IPv6 distinguishes between five different types of unicast addresses:

:: (unspecified)

This address is used by the host as its source address when the interface is initialized for the first time (at which point, the address cannot yet be determined by other means).

::1 (loopback)

The address of the loopback device.

IPv4 Compatible Addresses

The IPv6 address is formed by the IPv4 address and a prefix consisting of 96 zero bits. This type of compatibility address is used for tunneling (see Section 17.2.3, “Coexistence of IPv4 and IPv6”) to allow IPv4 and IPv6 hosts to communicate with others operating in a pure IPv4 environment.

IPv4 Addresses Mapped to IPv6

This type of address specifies a pure IPv4 address in IPv6 notation (for example, ::ffff:192.168.0.1).

Local Addresses

There are two address types for local use:

link-local

This type of address can only be used in the local subnet. Packets with a source or target address of this type should not be routed to the Internet or other subnets. These addresses contain a special prefix (fe80::/10) and the interface ID of the network card, with the middle part consisting of zero bytes. Addresses of this type are used during automatic configuration to communicate with other hosts belonging to the same subnet.

site-local

Packets with this type of address may be routed to other subnets, but not to the wider Internet—they must remain inside the organization's own network. Such addresses are used for intranets and are an equivalent of the private address space defined by IPv4. They contain a special prefix (fec0::/10), the interface ID, and a 16-bit field specifying the subnet ID. Again, the rest is filled with zero bytes.

A completely new feature introduced with IPv6 is that each network interface normally gets several IP addresses, with the advantage that several networks can be accessed through the same interface. One of these networks can be configured completely automatically, using the MAC and a known prefix, with the result that all hosts on the local network can be reached when IPv6 is enabled (using the link-local address). With the MAC forming part of it, any IP address used in the world is unique. The only variable parts of the address are those specifying the site topology and the public topology, depending on the actual network in which the host is currently operating.

For a host to go back and forth between different networks, it needs at least two addresses. One of them, the home address, not only contains the interface ID but also an identifier of the home network to which it normally belongs (and the corresponding prefix). The home address is a static address and, as such, it does not normally change. Still, all packets destined to the mobile host can be delivered to it, regardless of whether it operates in the home network or somewhere outside. This is made possible by the completely new features introduced with IPv6, such as stateless autoconfiguration and neighbor discovery. In addition to its home address, a mobile host gets one or more additional addresses that belong to the foreign networks where it is roaming. These are called care-of addresses. The home network has a facility that forwards any packets destined to the host when it is roaming outside. In an IPv6 environment, this task is performed by the home agent, which takes all packets destined to the home address and relays them through a tunnel. On the other hand, those packets destined to the care-of address are directly transferred to the mobile host without any special detours.

17.2.3 Coexistence of IPv4 and IPv6

The migration of all hosts connected to the Internet from IPv4 to IPv6 is a gradual process. Both protocols will coexist for some time to come. Coexistence on one system is ensured by a dual stack implementation of both protocols. That still leaves the question of how an IPv6-enabled host should communicate with an IPv4 host and how IPv6 packets should be transported by the current networks, which are predominantly IPv4-based. The best solutions are tunneling and compatibility addresses (see Section 17.2.2, “Address Types and Structure”).

IPv6 hosts that are more or less isolated in the (worldwide) IPv4 network can communicate through tunnels: IPv6 packets are encapsulated as IPv4 packets to move them across an IPv4 network. Such a connection between two IPv4 hosts is called a tunnel. To achieve this, packets must include the IPv6 destination address (or the corresponding prefix) and the IPv4 address of the remote host at the receiving end of the tunnel. A basic tunnel can be configured manually according to an agreement between the hosts' administrators. This is also called static tunneling.
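
For example, a static sit (IPv6-in-IPv4) tunnel can be set up manually with the ip tool. The tunnel name sit1 and the addresses below are only examples and must be replaced with the agreed tunnel endpoints:

ip tunnel add sit1 mode sit local 192.0.2.1 remote 192.0.2.2 ttl 64
ip link set sit1 up
ip -6 addr add 2001:db8:ffff::1/64 dev sit1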

However, the configuration and maintenance of static tunnels is often too labor-intensive to use them for daily communication needs. Therefore, IPv6 provides for three different methods of dynamic tunneling:

6over4

IPv6 packets are automatically encapsulated as IPv4 packets and sent over an IPv4 network capable of multicasting. IPv6 is tricked into seeing the whole network (Internet) as a huge local area network (LAN). This makes it possible to determine the receiving end of the IPv4 tunnel automatically. However, this method does not scale very well and is also hampered because IP multicasting is far from widespread on the Internet. Therefore, it only provides a solution for smaller corporate or institutional networks where multicasting can be enabled. The specifications for this method are laid down in RFC 2529.

6to4

With this method, IPv4 addresses are automatically generated from IPv6 addresses, enabling isolated IPv6 hosts to communicate over an IPv4 network. However, several problems have been reported regarding the communication between those isolated IPv6 hosts and the Internet. The method is described in RFC 3056.

IPv6 Tunnel Broker

This method relies on special servers that provide dedicated tunnels for IPv6 hosts. It is described in RFC 3053.

17.2.4 Configuring IPv6

To configure IPv6, you normally do not need to make any changes on the individual workstations. IPv6 is enabled by default. To disable or enable IPv6 on an installed system, use the YaST Network Settings module. On the Global Options tab, select or deselect the Enable IPv6 option as necessary. To enable it temporarily until the next reboot, enter modprobe -i ipv6 as root. It is impossible to unload the IPv6 module after it has been loaded.
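
Alternatively, IPv6 can be switched off at runtime without unloading the module by using sysctl. This is only a sketch; the setting is lost at reboot unless it is persisted, for example in /etc/sysctl.conf:

sysctl -w net.ipv6.conf.all.disable_ipv6=1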

Because of the autoconfiguration concept of IPv6, the network card is assigned an address in the link-local network. Normally, no routing table management takes place on a workstation. Using the router advertisement protocol, the workstation can query the network routers for the prefix and gateways it should use. The radvd program can be used to set up an IPv6 router. This program informs the workstations which prefix to use for the IPv6 addresses and which routers to use. Alternatively, use zebra/quagga for automatic configuration of both addresses and routing.
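
A minimal /etc/radvd.conf sketch that announces one prefix on eth0 could look as follows; the interface name and the documentation prefix are only examples:

interface eth0
{
    AdvSendAdvert on;
    prefix 2001:db8:0:1::/64
    {
    };
};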

For information about how to set up various types of tunnels using the /etc/sysconfig/network files, see the man page of ifcfg-tunnel (man ifcfg-tunnel).

17.2.5 For More Information

The above overview does not cover the topic of IPv6 comprehensively. For a more in-depth look at the new protocol, refer to the following online documentation and books:

http://www.ipv6.org/

The starting point for everything about IPv6.

http://www.ipv6day.org

All information needed to start your own IPv6 network.

http://www.ipv6-to-standard.org/

The list of IPv6-enabled products.

http://www.bieringer.de/linux/IPv6/

Here, find the Linux IPv6-HOWTO and many links related to the topic.

RFC 2460

The fundamental RFC about IPv6.

IPv6 Essentials

A book describing all the important aspects of the topic is IPv6 Essentials by Silvia Hagen (ISBN 0-596-00125-8).

17.3 Name Resolution

DNS assists in assigning an IP address to one or more names and assigning a name to an IP address. In Linux, this conversion is usually carried out by name server software such as BIND. The machine that takes care of this conversion is called a name server. The names make up a hierarchical system in which each name component is separated by a period. The name hierarchy is, however, independent of the IP address hierarchy described above.

Consider a complete name, such as jupiter.example.com, written in the format hostname.domain. A full name, called a fully qualified domain name (FQDN), consists of a host name and a domain name (example.com). The latter also includes the top level domain or TLD (com).

TLD assignment has become quite confusing for historical reasons. Traditionally, three-letter domain names are used in the USA. In the rest of the world, the two-letter ISO national codes are the standard. In addition to that, longer TLDs were introduced in 2000 that represent certain spheres of activity (for example, .info, .name, .museum).

In the early days of the Internet (before 1990), the file /etc/hosts was used to store the names of all the machines connected to the Internet. This quickly proved to be impractical in the face of the rapidly growing number of connected computers. For this reason, a decentralized database was developed to store the host names in a widely distributed manner. An individual name server does not have the data pertaining to all hosts on the Internet readily available, but can dispatch requests to other name servers.

The top of the hierarchy is occupied by root name servers. These root name servers manage the top level domains and are run by the Network Information Center (NIC). Each root name server knows about the name servers responsible for a given top level domain. Information about top level domain NICs is available at http://www.internic.net.

DNS can do more than resolve host names. The name server also knows which host is receiving e-mails for an entire domain—the mail exchanger (MX).

For your machine to resolve an IP address, it must know about at least one name server and its IP address. Such a name server can easily be specified using YaST.

The whois protocol is closely related to DNS. With the whois program, you can quickly find out who is responsible for a given domain.

Note
Note: MDNS and .local Domain Names

The .local top level domain is treated as a link-local domain by the resolver. DNS requests are sent as multicast DNS requests instead of normal DNS requests. If you already use the .local domain in your name server configuration, you must switch this option off in /etc/host.conf. For more information, see the host.conf manual page.

If you want to switch off MDNS during installation, use nomdns=1 as a boot parameter.

For more information on multicast DNS, see http://www.multicastdns.org.

17.4 Configuring a Network Connection with YaST

There are many supported networking types on Linux. Most of them use different device names and the configuration files are spread over several locations in the file system. For a detailed overview of the aspects of manual network configuration, see Section 17.6, “Configuring a Network Connection Manually”.

On SUSE Linux Enterprise Desktop, where NetworkManager is active by default, all network cards are configured. If NetworkManager is not active, only the first interface with link up (with a network cable connected) is automatically configured. Additional hardware can be configured any time on the installed system. The following sections describe the network configuration for all types of network connections supported by SUSE Linux Enterprise Desktop.

17.4.1 Configuring the Network Card with YaST

To configure your Ethernet or Wi-Fi/Bluetooth card in YaST, select System › Network Settings. After starting the module, YaST displays the Network Settings dialog with four tabs: Global Options, Overview, Hostname/DNS and Routing.

The Global Options tab allows you to set general networking options such as the network setup method, IPv6, and general DHCP options. For more information, see Section 17.4.1.1, “Configuring Global Networking Options”.

The Overview tab contains information about installed network interfaces and configurations. Any properly detected network card is listed with its name. You can manually configure new cards, remove or change their configuration in this dialog. To manually configure a card that was not automatically detected, see Section 17.4.1.3, “Configuring an Undetected Network Card”. If you want to change the configuration of an already configured card, see Section 17.4.1.2, “Changing the Configuration of a Network Card”.

The Hostname/DNS tab allows you to set the host name of the machine and the name servers to be used. For more information, see Section 17.4.1.4, “Configuring Host Name and DNS”.

The Routing tab is used for the configuration of routing. See Section 17.4.1.5, “Configuring Routing” for more information.

Configuring Network Settings
Figure 17.3: Configuring Network Settings

17.4.1.1 Configuring Global Networking Options

The Global Options tab of the YaST Network Settings module allows you to set important global networking options, such as the use of NetworkManager, IPv6 and DHCP client options. These settings are applicable for all network interfaces.

In the Network Setup Method choose the way network connections are managed. If you want a NetworkManager desktop applet to manage connections for all interfaces, choose NetworkManager Service. NetworkManager is well suited for switching between multiple wired and wireless networks. If you do not run a desktop environment, or if your computer is a Xen server, virtual system, or provides network services such as DHCP or DNS in your network, use the Wicked Service method. If NetworkManager is used, nm-applet should be used to configure network options and the Overview, Hostname/DNS and Routing tabs of the Network Settings module are disabled. For more information on NetworkManager, see Chapter 31, Using NetworkManager.

In the IPv6 Protocol Settings choose whether to use the IPv6 protocol. It is possible to use IPv6 together with IPv4. By default, IPv6 is enabled. However, in networks not using the IPv6 protocol, response times can be faster with the IPv6 protocol disabled. To disable IPv6, deactivate Enable IPv6. If IPv6 is disabled, the kernel no longer loads the IPv6 module automatically. This setting will be applied after reboot.

In the DHCP Client Options configure options for the DHCP client. The DHCP Client Identifier must be different for each DHCP client on a single network. If left empty, it defaults to the hardware address of the network interface. However, if you are running several virtual machines using the same network interface and, therefore, the same hardware address, specify a unique free-form identifier here.

The Hostname to Send specifies a string used for the host name option field when the DHCP client sends messages to the DHCP server. Some DHCP servers update name server zones (forward and reverse records) according to this host name (Dynamic DNS). Also, some DHCP servers require the Hostname to Send option field to contain a specific string in the DHCP messages from clients. Leave AUTO to send the current host name (that is, the one defined in /etc/HOSTNAME). Leave the option field empty to not send any host name.
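
In the sysconfig files, this setting corresponds to the DHCLIENT_HOSTNAME_OPTION variable in /etc/sysconfig/network/dhcp. The following line is only a sketch of the default behavior:

DHCLIENT_HOSTNAME_OPTION="AUTO"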

If you do not want to change the default route according to the information from DHCP, deactivate Change Default Route via DHCP.

17.4.1.2 Changing the Configuration of a Network Card

To change the configuration of a network card, select a card from the list of the detected cards in Network Settings › Overview in YaST and click Edit. The Network Card Setup dialog appears in which to adjust the card configuration using the General, Address and Hardware tabs.

17.4.1.2.1 Configuring IP Addresses

You can set the IP address of the network card or the way its IP address is determined in the Address tab of the Network Card Setup dialog. Both IPv4 and IPv6 addresses are supported. The network card can have No IP Address (which is useful for bonding devices), a Statically Assigned IP Address (IPv4 or IPv6) or a Dynamic Address assigned via DHCP or Zeroconf or both.

If using Dynamic Address, select whether to use DHCP Version 4 Only (for IPv4), DHCP Version 6 Only (for IPv6) or DHCP Both Version 4 and 6.

If possible, the first network card with link that is available during the installation is automatically configured to use automatic address setup via DHCP. On SUSE Linux Enterprise Desktop, where NetworkManager is active by default, all network cards are configured.

DHCP should also be used if you are using a DSL line with no static IP assigned by the ISP (Internet Service Provider). If you decide to use DHCP, configure the details in DHCP Client Options in the Global Options tab of the Network Settings dialog of the YaST network card configuration module. If you have a virtual host setup where different hosts communicate through the same interface, a DHCP Client Identifier is necessary to distinguish them.

DHCP is a good choice for client configuration but it is not ideal for server configuration. To set a static IP address, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST network card configuration module and click Edit.

  2. In the Address tab, choose Statically Assigned IP Address.

  3. Enter the IP Address. Both IPv4 and IPv6 addresses can be used. Enter the network mask in Subnet Mask. If an IPv6 address is used, enter the prefix length in Subnet Mask, in the format /64.

    Optionally, you can enter a fully qualified Hostname for this address, which will be written to the /etc/hosts configuration file.

  4. Click Next.

  5. To activate the configuration, click OK.

Note
Note: Interface Activation and Link Detection

During activation of a network interface, wicked checks for a carrier and only applies the IP configuration when a link has been detected. If you need to apply the configuration regardless of the link status (for example, when you want to test a service listening on a certain address), you can skip link detection by adding the variable LINK_REQUIRED=no to the configuration file of the interface in /etc/sysconfig/network/ifcfg-*.

Additionally, you can use the variable LINK_READY_WAIT=5 to specify the timeout for waiting for a link in seconds.

For more information about the ifcfg-* configuration files, refer to Section 17.6.2.5, “/etc/sysconfig/network/ifcfg-*” and man 5 ifcfg.
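
A minimal static ifcfg sketch with link detection skipped could look as follows; the file name, addresses, and values are only examples:

# /etc/sysconfig/network/ifcfg-eth0 (sketch)
STARTMODE='auto'
BOOTPROTO='static'
IPADDR='192.168.1.10/24'
LINK_REQUIRED='no'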

If you use the static address, the name servers and default gateway are not configured automatically. To configure name servers, proceed as described in Section 17.4.1.4, “Configuring Host Name and DNS”. To configure a gateway, proceed as described in Section 17.4.1.5, “Configuring Routing”.

17.4.1.2.2 Configuring Multiple Addresses

One network device can have multiple IP addresses.

Note
Note: Aliases Are a Compatibility Feature

These so-called aliases or labels work with IPv4 only; with IPv6 they are ignored. Using iproute2, network interfaces can have one or more addresses.

To set additional addresses for your network card using YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

  2. In the Address › Additional Addresses tab, click Add.

  3. Enter IPv4 Address Label, IP Address, and Netmask. Do not include the interface name in the alias name.

  4. To activate the configuration, confirm the settings.

17.4.1.2.3 Changing the Device Name and Udev Rules

It is possible to change the device name of the network card when it is used. It is also possible to determine whether the network card should be identified by udev via its hardware (MAC) address or via the bus ID. The latter option is preferable in large servers to simplify hotplugging of cards. To set these options with YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings dialog and click Edit.

  2. Go to the Hardware tab. The current device name is shown in Udev Rules. Click Change.

  3. Select whether udev should identify the card by its MAC Address or Bus ID. The current MAC address and bus ID of the card are shown in the dialog.

  4. To change the device name, check the Change Device Name option and edit the name.

  5. To activate the configuration, confirm the settings.

17.4.1.2.4 Changing Network Card Kernel Driver

For some network cards, several kernel drivers may be available. If the card is already configured, YaST allows you to select a kernel driver to be used from a list of available suitable drivers. It is also possible to specify options for the kernel driver. To set these options with YaST, proceed as follows:

  1. Select a card from the list of detected cards in the Overview tab of the YaST Network Settings module and click Edit.

  2. Go to the Hardware tab.

  3. Select the kernel driver to be used in Module Name. Enter any options for the selected driver in Options in the form OPTION=VALUE. If more options are used, they should be space-separated.

  4. To activate the configuration, confirm the settings.

17.4.1.2.5 Activating the Network Device

If you use the wicked method, you can configure your device to start during boot, on cable connection, on card detection, manually, or never. To change device start-up, proceed as follows:

  1. In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

  2. In the General tab, select the desired entry from Device Activation.

    Choose At Boot Time to start the device during the system boot. With On Cable Connection, the interface is watched for any existing physical connection. With On Hotplug, the interface is set up as soon as it is available. It is similar to the At Boot Time option, and only differs in that no error occurs if the interface is not present at boot time. Choose Manually to control the interface manually with ifup. Choose Never to not start the device. On NFSroot is similar to At Boot Time, but the interface does not shut down with the systemctl stop network command; the network service also takes care of the wicked service if wicked is active. Use this option if you use an NFS or iSCSI root file system. The corresponding STARTMODE values for the ifcfg-* files are shown after this list.

  3. To activate the configuration, confirm the settings.
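
In the ifcfg-* files, these activation choices correspond to the STARTMODE variable. The following excerpt is only a sketch; the file name is an example, and man 5 ifcfg documents the full list of values:

# /etc/sysconfig/network/ifcfg-eth0 (excerpt)
STARTMODE='auto'   # At Boot Time; other values include 'hotplug', 'manual', 'off', and 'nfsroot'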

Tip
Tip: NFS as a Root File System

On (diskless) systems where the root partition is mounted via network as an NFS share, you need to be careful when configuring the network device with which the NFS share is accessible.

When shutting down or rebooting the system, the default processing order is to turn off network connections, then unmount the root partition. With NFS root, this order causes problems: the root partition cannot be cleanly unmounted because the network connection to the NFS share has already been deactivated. To prevent the system from deactivating the relevant network device, open the network device configuration tab as described in Section 17.4.1.2.5, “Activating the Network Device” and choose On NFSroot in the Device Activation pane.

17.4.1.2.6 Setting Up Maximum Transfer Unit Size

You can set a maximum transmission unit (MTU) for the interface. MTU refers to the largest allowed packet size in bytes. A higher MTU brings higher bandwidth efficiency. However, large packets can block up a slow interface for some time, increasing the lag for further packets.

  1. In YaST select a card from the list of detected cards in System › Network Settings and click Edit.

  2. In the General tab, select the desired entry from the Set MTU list.

  3. To activate the configuration, confirm the settings.

17.4.1.2.7 PCIe Multifunction Devices

Multifunction devices that support LAN, iSCSI, and FCoE are supported. The YaST FCoE client (yast2 fcoe-client) shows the private flags in additional columns to allow the user to select the device meant for FCoE. The YaST network module (yast2 lan) excludes storage-only devices from network configuration.

17.4.1.2.8 Infiniband Configuration for IP-over-InfiniBand (IPoIB)

  1. In YaST select the InfiniBand device in System › Network Settings and click Edit.

  2. In the General tab, select one of the IP-over-InfiniBand (IPoIB) modes: connected (default) or datagram.

  3. To activate the configuration, confirm the settings.

For more information about InfiniBand, see /usr/src/linux/Documentation/infiniband/ipoib.txt.

17.4.1.2.9 Configuring the Firewall

Without having to enter the detailed firewall setup as described in Book “Security Guide”, Chapter 15 “Masquerading and Firewalls”, Section 15.4.1 “Configuring the Firewall with YaST”, you can determine the basic firewall configuration for your device as part of the device setup. Proceed as follows:

  1. Open the YaST System › Network Settings module. In the Overview tab, select a card from the list of detected cards and click Edit.

  2. Enter the General tab of the Network Settings dialog.

  3. Determine the Firewall Zone to which your interface should be assigned. The following options are available:

    Firewall Disabled

    This option is available only if the firewall is disabled and the firewall does not run. Only use this option if your machine is part of a greater network that is protected by an outer firewall.

    Automatically Assign Zone

    This option is available only if the firewall is enabled. The firewall is running and the interface is automatically assigned to a firewall zone. The zone which contains the keyword any or the external zone will be used for such an interface.

    Internal Zone (Unprotected)

    The firewall is running, but does not enforce any rules to protect this interface. Use this option if your machine is part of a greater network that is protected by an outer firewall. It is also useful for the interfaces connected to the internal network, when the machine has more network interfaces.

    Demilitarized Zone

    A demilitarized zone is an additional line of defense in front of an internal network and the (hostile) Internet. Hosts assigned to this zone can be reached from the internal network and from the Internet, but cannot access the internal network.

    External Zone

    The firewall is running on this interface and fully protects it against other—presumably hostile—network traffic. This is the default option.

  4. To activate the configuration, confirm the settings.

17.4.1.3 Configuring an Undetected Network Card

If a network card is not detected correctly, the card is not included in the list of detected cards. If you are sure that your system includes a driver for your card, you can configure it manually. You can also configure special network device types, such as bridge, bond, TUN or TAP. To configure an undetected network card (or a special device) proceed as follows:

  1. In the System › Network Settings › Overview dialog in YaST click Add.

  2. In the Hardware dialog, set the Device Type of the interface from the available options and Configuration Name. If the network card is a PCMCIA or USB device, activate the respective check box and exit this dialog with Next. Otherwise, you can define the kernel Module Name to be used for the card and its Options, if necessary.

    In Ethtool Options, you can set ethtool options used by ifup for the interface. For information about available options, see the ethtool manual page.

    If the option string starts with a - (for example, -K INTERFACE_NAME rx on), the second word in the string is replaced with the current interface name. Otherwise (for example, autoneg off speed 10) ifup adds -s INTERFACE_NAME to the beginning.

  3. Click Next.

  4. Configure any needed options, such as the IP address, device activation or firewall zone for the interface in the General, Address, and Hardware tabs. For more information about the configuration options, see Section 17.4.1.2, “Changing the Configuration of a Network Card”.

  5. If you selected Wireless as the device type of the interface, configure the wireless connection in the next dialog.

  6. To activate the new network configuration, confirm the settings.

17.4.1.4 Configuring Host Name and DNS

If you did not change the network configuration during installation and the Ethernet card was already available, a host name was automatically generated for your computer and DHCP was activated. The same applies to the name service information your host needs to integrate into a network environment. If DHCP is used for network address setup, the list of domain name servers is automatically filled with the appropriate data. If a static setup is preferred, set these values manually.

To change the name of your computer and adjust the name server search list, proceed as follows:

  1. Go to the Network Settings › Hostname/DNS tab in the System module in YaST.

  2. Enter the Hostname and, if needed, the Domain Name. The domain is especially important if the machine is a mail server. Note that the host name is global and applies to all set network interfaces.

    If you are using DHCP to get an IP address, the host name of your computer will be automatically set by DHCP. You should disable this behavior if you connect to different networks, because they may assign different host names and changing the host name at runtime may confuse the graphical desktop. To disable updating the host name via DHCP, deactivate Change Hostname via DHCP.

    Assign Hostname to Loopback IP associates your host name with the 127.0.0.2 (loopback) IP address in /etc/hosts. This is a useful option if you want the host name to be resolvable at all times, even without an active network.

  3. In Modify DNS Configuration, select the way the DNS configuration (name servers, search list, the content of the /etc/resolv.conf file) is modified.

    If the Use Default Policy option is selected, the configuration is handled by the netconfig script which merges the data defined statically (with YaST or in the configuration files) with data obtained dynamically (from the DHCP client or NetworkManager). This default policy is usually sufficient.

    If the Only Manually option is selected, netconfig is not allowed to modify the /etc/resolv.conf file. However, this file can be edited manually.

    If the Custom Policy option is selected, a Custom Policy Rule string defining the merge policy should be specified. The string consists of a comma-separated list of interface names to be considered a valid source of settings. Except for complete interface names, basic wild cards to match multiple interfaces are allowed, as well. For example, eth* ppp? will first target all eth and then all ppp0-ppp9 interfaces. There are two special policy values that indicate how to apply the static settings defined in the /etc/sysconfig/network/config file:

    STATIC

    The static settings need to be merged together with the dynamic settings.

    STATIC_FALLBACK

    The static settings are used only when no dynamic configuration is available.

    For more information, see the man page of netconfig(8) (man 8 netconfig). A sketch of the corresponding sysconfig variable is shown after this list.

  4. Enter the Name Servers and fill in the Domain Search list. Name servers must be specified by IP addresses, such as 192.168.1.116, not by host names. Names specified in the Domain Search tab are domain names used for resolving host names without a specified domain. If more than one Domain Search is used, separate domains with commas or white space.

  5. To activate the configuration, confirm the settings.
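
The policy selected in Modify DNS Configuration is stored in the NETCONFIG_DNS_POLICY variable in /etc/sysconfig/network/config. The following line is only a sketch of a custom policy; the interface patterns are examples:

NETCONFIG_DNS_POLICY="STATIC eth* ppp?"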

It is also possible to edit the host name using YaST from the command line. The changes made by YaST take effect immediately (which is not the case when editing the /etc/HOSTNAME file manually). To change the host name, use the following command:

yast dns edit hostname=HOSTNAME

To change the name servers, use the following commands:

yast dns edit nameserver1=192.168.1.116
yast dns edit nameserver2=192.168.1.117
yast dns edit nameserver3=192.168.1.118

17.4.1.5 Configuring Routing

To make your machine communicate with other machines and other networks, routing information must be given to make network traffic take the correct path. If DHCP is used, this information is automatically provided. If a static setup is used, this data must be added manually.

  1. In YaST go to Network Settings › Routing.

  2. Enter the IP address of the Default Gateway (IPv4 and IPv6 if necessary). The default gateway matches every possible destination, but if a routing table entry exists that matches the required address, this will be used instead of the default route via the Default Gateway.

  3. More entries can be entered in the Routing Table. Enter the Destination network IP address, Gateway IP address and the Netmask. Select the Device through which the traffic to the defined network will be routed (the minus sign stands for any device). To omit any of these values, use the minus sign -. To enter a default gateway into the table, use default in the Destination field.

    Note
    Note: Route Prioritization

    If more default routes are used, it is possible to specify the metric option to determine which route has a higher priority. To specify the metric option, enter - metric NUMBER in Options. The route with the lowest metric has the highest priority. If the network device is disconnected, its route will be removed and the next one will be used. However, the current kernel does not use metric in static routing, only routing daemons like multipathd do.

  4. If the system is a router, enable IPv4 Forwarding and IPv6 Forwarding in the Network Settings as needed.

  5. To activate the configuration, confirm the settings.

17.5 NetworkManager

NetworkManager is the ideal solution for laptops and other portable computers. With NetworkManager, you do not need to worry about configuring network interfaces and switching between networks when you are moving.

17.5.1 NetworkManager and wicked

However, NetworkManager is not a suitable solution for all cases, so you can still choose between the wicked controlled method for managing network connections and NetworkManager. If you want to manage your network connection with NetworkManager, enable NetworkManager in the YaST Network Settings module as described in Section 31.2, “Enabling or Disabling NetworkManager” and configure your network connections with NetworkManager. For a list of use cases and a detailed description of how to configure and use NetworkManager, refer to Chapter 31, Using NetworkManager.

Some differences between wicked and NetworkManager:

root Privileges

If you use NetworkManager for network setup, you can easily switch, stop or start your network connection at any time from within your desktop environment using an applet. NetworkManager also makes it possible to change and configure wireless card connections without requiring root privileges. For this reason, NetworkManager is the ideal solution for a mobile workstation.

wicked also provides some ways to switch, stop or start the connection with or without user intervention, like user-managed devices. However, this always requires root privileges to change or configure a network device. This is often a problem for mobile computing, where it is not possible to preconfigure all the connection possibilities.

Types of Network Connections

Both wicked and NetworkManager can handle network connections with a wireless network (with WEP, WPA-PSK, and WPA-Enterprise access) and wired networks using DHCP and static configuration. They also support connection through dial-up and VPN. With NetworkManager you can also connect a mobile broadband (3G) modem or set up a DSL connection, which is not possible with the traditional configuration.

NetworkManager tries to keep your computer connected at all times using the best connection available. If the network cable is accidentally disconnected, it tries to reconnect. It can find the network with the best signal strength from the list of your wireless connections and automatically use it to connect. To get the same functionality with wicked, more configuration effort is required.

17.5.2 NetworkManager Functionality and Configuration Files

The individual network connection settings created with NetworkManager are stored in configuration profiles. The system connections configured with either NetworkManager or YaST are saved in /etc/NetworkManager/system-connections/* or in /etc/sysconfig/network/ifcfg-*. For GNOME, all user-defined connections are stored in GConf.

In case no profile is configured, NetworkManager automatically creates one and names it Auto $INTERFACE-NAME. This is done in an attempt to work without any configuration for as many cases as (securely) possible. If the automatically created profiles do not suit your needs, use the network connection configuration dialogs provided by GNOME to modify them as desired. For more information, see Section 31.3, “Configuring Network Connections”.

17.5.3 Controlling and Locking Down NetworkManager Features

On centrally administered machines, certain NetworkManager features can be controlled or disabled with PolKit, for example, whether a user is allowed to modify administrator-defined connections or whether a user is allowed to define their own network configurations. To view or change the respective NetworkManager policies, start the graphical Authorizations tool for PolKit. In the tree on the left side, find them below the network-manager-settings entry. For an introduction to PolKit and details on how to use it, refer to Book “Security Guide”, Chapter 9 “Authorization with PolKit”.

17.6 Configuring a Network Connection Manually

Manual configuration of the network software should be the last alternative. Using YaST is recommended. However, this background information about the network configuration can also assist your work with YaST.

17.6.1 The wicked Network Configuration

The tool and library called wicked provides a new framework for network configuration.

One of the challenges with traditional network interface management is that different layers of network management get jumbled together into one single script, or at most two different scripts. These scripts interact with each other in a way that is not well-defined. This leads to unpredictable issues, obscure constraints and conventions, etc. Several layers of special hacks for a variety of different scenarios increase the maintenance burden. Address configuration protocols are being used that are implemented via daemons like dhcpcd, which interact rather poorly with the rest of the infrastructure. Funky interface naming schemes that require heavy udev support are introduced to achieve persistent identification of interfaces.

The idea of wicked is to decompose the problem in several ways. None of them is entirely novel, but trying to put ideas from different projects together is hopefully going to create a better solution overall.

One approach is to use a client/server model. This allows wicked to define standardized facilities for things like address configuration that are well integrated with the overall framework. For example, using a specific address configuration, the administrator may request that an interface should be configured via DHCP or IPv4 zeroconf. In this case, the address configuration service simply obtains the lease from its server and passes it on to the wicked server process that installs the requested addresses and routes.

The other approach to decomposing the problem is to enforce the layering aspect. For any type of network interface, it is possible to define a dbus service that configures the network interface's device layer—a VLAN, a bridge, a bonding, or a paravirtualized device. Common functionality, such as address configuration, is implemented by joint services that are layered on top of these device specific services without having to implement them specifically.

The wicked framework implements these two aspects by using a variety of dbus services, which get attached to a network interface depending on its type. Here is a rough overview of the current object hierarchy in wicked.

Each network interface is represented via a child object of /org/opensuse/Network/Interfaces. The name of the child object is given by its ifindex. For example, the loopback interface, which usually gets ifindex 1, is /org/opensuse/Network/Interfaces/1, the first Ethernet interface registered is /org/opensuse/Network/Interfaces/2.

Each network interface has a class associated with it, which is used to select the dbus interfaces it supports. By default, each network interface is of class netif, and wickedd will automatically attach all interfaces compatible with this class. In the current implementation, this includes the following interfaces:

org.opensuse.Network.Interface

Generic network interface functions, such as taking the link up or down, assigning an MTU, etc.

org.opensuse.Network.Addrconf.ipv4.dhcp, org.opensuse.Network.Addrconf.ipv6.dhcp, org.opensuse.Network.Addrconf.ipv4.auto

Address configuration services for DHCP, IPv4 zeroconf, etc.

Beyond this, network interfaces may require or offer special configuration mechanisms. For an Ethernet device, for example, you should be able to control the link speed, offloading of checksumming, etc. To achieve this, Ethernet devices have a class of their own, called netif-ethernet, which is a subclass of netif. As a consequence, the dbus interfaces assigned to an Ethernet interface include all the services listed above, plus the org.opensuse.Network.Ethernet service available only to objects belonging to the netif-ethernet class.

Similarly, there exist classes for interface types like bridges, VLANs, bonds, or infinibands.

How do you interact with an interface like VLAN (which is really a virtual network interface that sits on top of an Ethernet device) that needs to be created first? For this, wicked defines factory interfaces, such as org.opensuse.Network.VLAN.Factory. Such a factory interface offers a single function that lets you create an interface of the requested type. These factory interfaces are attached to the /org/opensuse/Network/Interfaces list node.

17.6.1.1 wicked Architecture and Features

The wicked service comprises several parts as depicted in Figure 17.4, “wicked architecture”.

wicked architecture
Figure 17.4: wicked architecture

wicked currently supports the following:

  • Configuration file back-ends to parse SUSE style /etc/sysconfig/network files.

  • An internal configuration back-end to represent network interface configuration in XML.

  • Bring up and shutdown of normal network interfaces such as Ethernet or InfiniBand, VLAN, bridge, bonds, tun, tap, dummy, macvlan, macvtap, hsi, qeth, iucv, and wireless (currently limited to one wpa-psk/eap network) devices.

  • A built-in DHCPv4 client and a built-in DHCPv6 client.

  • The nanny daemon (enabled by default) helps to automatically bring up configured interfaces when the device is available (interface hotplugging) and set up the IP configuration when a link (carrier) is detected. See Section 17.6.1.3, “Nanny” for more information.

  • wicked was implemented as a group of DBus services that are integrated with systemd. So the usual systemctl commands will apply to wicked.

17.6.1.2 Using wicked

On SUSE Linux Enterprise, wicked runs by default. If you want to check what is currently enabled and whether it is running, call:

systemctl status network

If wicked is enabled, you will see something along these lines:

wicked.service - wicked managed network interfaces
    Loaded: loaded (/usr/lib/systemd/system/wicked.service; enabled)
    ...

In case something different is running (for example, NetworkManager) and you want to switch to wicked, first stop what is running and then enable wicked:

systemctl is-active network && \
systemctl stop      network
systemctl enable --force wicked

This enables the wicked services, creates the network.service to wicked.service alias link, and starts the network at the next boot.

Starting the server process:

systemctl start wickedd

This starts wickedd (the main server) and associated supplicants:

/usr/lib/wicked/bin/wickedd-auto4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp4 --systemd --foreground
/usr/lib/wicked/bin/wickedd-dhcp6 --systemd --foreground
/usr/sbin/wickedd --systemd --foreground
/usr/sbin/wickedd-nanny --systemd --foreground

Then bringing up the network:

systemctl start wicked

Alternatively use the network.service alias:

systemctl start network

These commands use the default or system configuration sources as defined in /etc/wicked/client.xml.

To enable debugging, set WICKED_DEBUG in /etc/sysconfig/network/config, for example:

WICKED_DEBUG="all"

Or, to omit some:

WICKED_DEBUG="all,-dbus,-objectmodel,-xpath,-xml"

Use the client utility to display interface information for all interfaces or the interface specified with IFNAME:

wicked show all
wicked show IFNAME

In XML output:

wicked show-xml all
wicked show-xml IFNAME

Bringing up one interface:

wicked ifup eth0
wicked ifup wlan0
...

Because there is no configuration source specified, the wicked client checks its default sources of configuration defined in /etc/wicked/client.xml:

  1. firmware: iSCSI Boot Firmware Table (iBFT)

  2. compat: ifcfg files—implemented for compatibility

Whatever wicked gets from those sources for a given interface is applied. The intended order of importance is firmware, then compat—this may be changed in the future.

For more information, see the wicked man page.

17.6.1.3 Nanny

Nanny is an event and policy driven daemon that is responsible for asynchronous or unsolicited scenarios such as hotplugging devices. Thus the nanny daemon helps with starting or restarting delayed or temporarily gone devices. Nanny monitors device and link changes, and integrates new devices defined by the current policy set. Nanny continues to set up devices even if ifup has already exited because of specified timeout constraints.

By default, the nanny daemon is active on the system. It is enabled in the /etc/wicked/common.xml configuration file:

<config>
  ...
  <use-nanny>true</use-nanny>
</config>

This setting causes ifup and ifreload to apply a policy with the effective configuration to the nanny daemon; then, nanny configures wickedd and thus ensures hotplug support. It waits in the background for events or changes (such as new devices or carrier on).

17.6.1.4 Bringing Up Multiple Interfaces

For bonds and bridges, it may make sense to define the entire device topology in one file (ifcfg-bondX), and bring it up in one go. wicked then can bring up the whole configuration if you specify the top level interface names (of the bridge or bond):

wicked ifup br0

This command automatically sets up the bridge and its dependencies in the appropriate order without the need to list the dependencies (ports, etc.) separately.

To bring up multiple interfaces in one command:

wicked ifup bond0 br0 br1 br2

Or also all interfaces:

wicked ifup all

17.6.1.5 Using Tunnels with Wicked

When you need to use tunnels with wicked, the TUNNEL_DEVICE variable is used. It permits specifying an optional device name to bind the tunnel to. The tunneled packets will only be routed via this device.
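
A configuration sketch in /etc/sysconfig/network/ifcfg-tun1 could look as follows; the tunnel name, mode, device, and addresses are only examples (see man 5 ifcfg-tunnel for the exact variables):

STARTMODE='auto'
TUNNEL='sit'
TUNNEL_DEVICE='eth0'
TUNNEL_LOCAL_IPADDR='192.0.2.1'
TUNNEL_REMOTE_IPADDR='192.0.2.2'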

For more information, refer to man 5 ifcfg-tunnel.

17.6.1.6 Handling Incremental Changes

With wicked, there is no need to actually take down an interface to reconfigure it (unless it is required by the kernel). For example, to add another IP address or route to a statically configured network interface, add the IP address to the interface definition, and do another ifup operation. The server will try hard to update only those settings that have changed. This applies to link-level options such as the device MTU or the MAC address, and network-level settings, such as addresses, routes, or even the address configuration mode (for example, when moving from a static configuration to DHCP).
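
For example, a second address can be appended to an existing static configuration and applied without taking the interface down. The file name, variable suffix, and address are only examples; see man 5 ifcfg for the IPADDR_<suffix> syntax:

echo "IPADDR_1='192.168.100.5/24'" >> /etc/sysconfig/network/ifcfg-eth0
wicked ifup eth0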

Things get tricky of course with virtual interfaces combining several real devices such as bridges or bonds. For bonded devices, it is not possible to change certain parameters while the device is up. Doing that will result in an error.

However, what should still work is adding or removing the child devices of a bond or bridge, or choosing a bond's primary interface.

17.6.1.7 Wicked Extensions: Address Configuration

wicked is designed to be extensible with shell scripts. These extensions can be defined in the config.xml file.

Currently, several classes of extensions are supported:

  • link configuration: these are scripts responsible for setting up a device's link layer according to the configuration provided by the client, and for tearing it down again.

  • address configuration: these are scripts responsible for managing a device's address configuration. Usually address configuration and DHCP are managed by wicked itself, but can be implemented by means of extensions.

  • firewall extension: these scripts can apply firewall rules.

Typically, extensions have a start and a stop command, an optional pid file, and a set of environment variables that get passed to the script.

To illustrate how this is supposed to work, look at a firewall extension defined in /etc/wicked/server.xml:

<dbus-service interface="org.opensuse.Network.Firewall">
 <action name="firewallUp"   command="/etc/wicked/extensions/firewall up"/>
 <action name="firewallDown" command="/etc/wicked/extensions/firewall down"/>

 <!-- default environment for all calls to this extension script -->
 <putenv name="WICKED_OBJECT_PATH" value="$object-path"/>
 <putenv name="WICKED_INTERFACE_NAME" value="$property:name"/>
 <putenv name="WICKED_INTERFACE_INDEX" value="$property:index"/>
</dbus-service>

The extension is attached to the <dbus-service> tag and defines commands to execute for the actions of this interface. Further, the declaration can define and initialize environment variables passed to the actions.

17.6.1.8 Wicked Extensions: Configuration Files

You can extend the handling of configuration files with scripts as well. For example, DNS updates from leases are ultimately handled by the extensions/resolver script, with behavior configured in server.xml:

<system-updater name="resolver">
 <action name="backup" command="/etc/wicked/extensions/resolver backup"/>
 <action name="restore" command="/etc/wicked/extensions/resolver restore"/>
 <action name="install" command="/etc/wicked/extensions/resolver install"/>
 <action name="remove" command="/etc/wicked/extensions/resolver remove"/>
</system-updater>

When an update arrives in wickedd, the system updater routines parse the lease and call the appropriate commands (backup, install, etc.) in the resolver script. This in turn configures the DNS settings using /sbin/netconfig, or by manually writing /etc/resolv.conf as a fallback.

17.6.2 Configuration Files

This section provides an overview of the network configuration files and explains their purpose and the format used.

17.6.2.1 /etc/wicked/common.xml

The /etc/wicked/common.xml file contains common definitions that should be used by all applications. It is sourced/included by the other configuration files in this directory. Although you can use this file to enable debugging across all wicked components, we recommend using the file /etc/wicked/local.xml for this purpose. After applying maintenance updates, you might lose your changes because /etc/wicked/common.xml might be overwritten. The /etc/wicked/common.xml file includes /etc/wicked/local.xml in the default installation, so you typically do not need to modify /etc/wicked/common.xml.

If you want to disable nanny, set <use-nanny> to false, restart the wickedd.service, and then run the following command to apply all configurations and policies:

wicked ifup all

Note
Note: Configuration Files

The wickedd, wicked, or nanny programs try to read /etc/wicked/common.xml if their own configuration files do not exist.

17.6.2.2 /etc/wicked/server.xml

The file /etc/wicked/server.xml is read by the wickedd server process at start-up. The file stores extensions to /etc/wicked/common.xml. In addition, this file configures the handling of a resolver and the receiving of information from addrconf supplicants, for example DHCP.

We recommend adding any required changes to a separate file, /etc/wicked/server-local.xml, which is included by /etc/wicked/server.xml. By using a separate file, you avoid your changes being overwritten during maintenance updates.

17.6.2.3 /etc/wicked/client.xml

The /etc/wicked/client.xml file is used by the wicked command. The file specifies the location of a script used when discovering devices managed by iBFT and configures the locations of network interface configurations.

We recommend adding any required changes to a separate file, /etc/wicked/client-local.xml, which is included by /etc/wicked/client.xml. By using a separate file, you avoid your changes being overwritten during maintenance updates.

17.6.2.4 /etc/wicked/nanny.xml

The /etc/wicked/nanny.xml file configures types of link layers. We recommend adding specific configuration to a separate file, /etc/wicked/nanny-local.xml, to avoid losing your changes during maintenance updates.

17.6.2.5 /etc/sysconfig/network/ifcfg-*

These files contain the traditional configurations for network interfaces. In SUSE Linux Enterprise 11, this was the only supported format besides iBFT firmware.

Note
Note: wicked and the ifcfg-* Files

wicked reads these files if you specify the compat: prefix. According to the SUSE Linux Enterprise Desktop default configuration in /etc/wicked/client.xml, wicked tries these files before the XML configuration files in /etc/wicked/ifconfig.

The --ifconfig switch is provided mostly for testing only. If specified, default configuration sources defined in /etc/wicked/ifconfig are not applied.

The ifcfg-* files include information such as the start mode and the IP address. Possible parameters are described in the manual page of ifup. Additionally, most variables from the dhcp and wireless files can be used in the ifcfg-* files if a general setting should be used for only one interface. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-* files. For example, NETCONFIG_* variables are global.

For configuring macvlan and macvtap interfaces, see the ifcfg-macvlan and ifcfg-macvtap man pages. For example, for a macvlan interface, provide an ifcfg-macvlan0 file with settings as follows:

STARTMODE='auto'
MACVLAN_DEVICE='eth0'
#MACVLAN_MODE='vepa'
#LLADDR=02:03:04:05:06:aa

For ifcfg.template, see Section 17.6.2.6, “/etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless”.

17.6.2.6 /etc/sysconfig/network/config, /etc/sysconfig/network/dhcp, and /etc/sysconfig/network/wireless

The file config contains general settings for the behavior of ifup, ifdown and ifstatus. dhcp contains settings for DHCP and wireless for wireless LAN cards. The variables in all three configuration files are commented. Some variables from /etc/sysconfig/network/config can also be used in ifcfg-* files, where they are given a higher priority. The /etc/sysconfig/network/ifcfg.template file lists variables that can be specified in a per interface scope. However, most of the /etc/sysconfig/network/config variables are global and cannot be overridden in ifcfg-* files. For example, NETWORKMANAGER or NETCONFIG_* variables are global.

Note
Note: Using DHCPv6

In SUSE Linux Enterprise 11, DHCPv6 used to work even on networks where IPv6 Router Advertisements (RAs) were not configured properly. Starting with SUSE Linux Enterprise 12, DHCPv6 will correctly require that at least one of the routers on the network sends out RAs that indicate that this network is managed by DHCPv6.

For networks where the router cannot be configured correctly, the ifcfg option allows the user to override this behavior by specifying DHCLIENT6_MODE='managed' in the ifcfg file. You can also activate this workaround with a boot parameter in the installation system:

ifcfg=eth0=dhcp6,DHCLIENT6_MODE=managed

17.6.2.7 /etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-*

The static routing of TCP/IP packets is determined by the /etc/sysconfig/network/routes and /etc/sysconfig/network/ifroute-* files. All the static routes required by the various system tasks can be specified in /etc/sysconfig/network/routes: routes to a host, routes to a host via a gateway and routes to a network. For each interface that needs individual routing, define an additional configuration file: /etc/sysconfig/network/ifroute-*. Replace the wild card (*) with the name of the interface. The entries in the routing configuration files look like this:

# Destination     Gateway           Netmask            Interface  Options

The route's destination is in the first column. This column may contain the IP address of a network or host or, in the case of reachable name servers, the fully qualified network or host name. The network should be written in CIDR notation (address with the associated routing prefix-length) such as 10.10.0.0/16 for IPv4 or fc00::/7 for IPv6 routes. The keyword default indicates that the route is the default gateway in the same address family as the gateway. For devices without a gateway use explicit 0.0.0.0/0 or ::/0 destinations.

The second column contains the default gateway or a gateway through which a host or network can be accessed.

The third column is deprecated; it used to contain the IPv4 netmask of the destination. For IPv6 routes, the default route, or when using a prefix-length (CIDR notation) in the first column, enter a dash (-) here.

The fourth column contains the name of the interface. Leaving it empty or using a dash (-) can cause unintended behavior in /etc/sysconfig/network/routes. For more information, see the routes man page.

An (optional) fifth column can be used to specify special options. For details, see the routes man page.

Example 17.5: Common Network Interfaces and Some Static Routes
# --- IPv4 routes in CIDR prefix notation:
# Destination     [Gateway]         -                  Interface
127.0.0.0/8       -                 -                  lo
204.127.235.0/24  -                 -                  eth0
default           204.127.235.41    -                  eth0
207.68.156.51/32  207.68.145.45     -                  eth1
192.168.0.0/16    207.68.156.51     -                  eth1

# --- IPv4 routes in deprecated netmask notation:
# Destination     [Dummy/Gateway]   Netmask            Interface
#
127.0.0.0         0.0.0.0           255.0.0.0          lo
204.127.235.0     0.0.0.0           255.255.255.0      eth0
default           204.127.235.41    0.0.0.0            eth0
207.68.156.51     207.68.145.45     255.255.255.255    eth1
192.168.0.0       207.68.156.51     255.255.0.0        eth1

# --- IPv6 routes are always using CIDR notation:
# Destination     [Gateway]                -           Interface
2001:DB8:100::/64 -                        -           eth0
2001:DB8:100::/32 fe80::216:3eff:fe6d:c042 -           eth0
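A per-interface file such as ifroute-eth0 uses the same format. For example, the following sketch (with illustrative addresses) routes an additional network via a gateway reachable through eth0:

# Destination     Gateway           -                  Interface
192.168.100.0/24  192.168.2.254     -                  eth0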

17.6.2.8 /etc/resolv.conf

The domain to which the host belongs is specified in /etc/resolv.conf (keyword search). Up to six domains with a total of 256 characters can be specified with the search option. When resolving a name that is not fully qualified, an attempt is made to generate one by attaching the individual search entries. Up to three name servers can be specified with the nameserver option, each on a line of its own. Comments are preceded by a hash mark or semicolon (# or ;). As an example, see Example 17.6, “/etc/resolv.conf”.

However, /etc/resolv.conf should not be edited by hand; it is generated by the netconfig script. To define a static DNS configuration without using YaST, edit the appropriate variables manually in the /etc/sysconfig/network/config file:

NETCONFIG_DNS_STATIC_SEARCHLIST

list of DNS domain names used for host name lookup

NETCONFIG_DNS_STATIC_SERVERS

list of name server IP addresses to use for host name lookup

NETCONFIG_DNS_FORWARDER

the name of the DNS forwarder that needs to be configured, for example bind or resolver

NETCONFIG_DNS_RESOLVER_OPTIONS

arbitrary options that will be written to /etc/resolv.conf, for example:

debug attempts:1 timeout:10

For more information, see the resolv.conf man page.

NETCONFIG_DNS_RESOLVER_SORTLIST

list of up to 10 items, for example:

130.155.160.0/255.255.240.0 130.155.0.0

For more information, see the resolv.conf man page.

To disable DNS configuration using netconfig, set NETCONFIG_DNS_POLICY=''. For more information about netconfig, see the netconfig(8) man page (man 8 netconfig).
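For example, a static DNS setup in /etc/sysconfig/network/config matching Example 17.6 could look like the following sketch. After editing the file, apply the changes with netconfig update -m dns:

NETCONFIG_DNS_POLICY="STATIC"
NETCONFIG_DNS_STATIC_SEARCHLIST="example.com"
NETCONFIG_DNS_STATIC_SERVERS="192.168.1.116"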

Example 17.6: /etc/resolv.conf
# Our domain
search example.com
#
# We use dns.example.com (192.168.1.116) as nameserver
nameserver 192.168.1.116

17.6.2.9 /sbin/netconfig

netconfig is a modular tool to manage additional network configuration settings. It merges statically defined settings with settings provided by autoconfiguration mechanisms such as DHCP or PPP according to a predefined policy. The required changes are applied to the system by calling the netconfig modules that are responsible for modifying a configuration file and restarting a service or performing a similar action.

netconfig recognizes three main actions. The netconfig modify and netconfig remove commands are used by daemons such as DHCP or PPP to provide or remove settings to netconfig. Only the netconfig update command is available for the user:

modify

The netconfig modify command modifies the current interface- and service-specific dynamic settings and updates the network configuration. netconfig reads settings from standard input or from a file specified with the --lease-file FILENAME option and internally stores them until a system reboot (or the next modify or remove action). Already existing settings for the same interface and service combination are overwritten. The interface is specified by the -i INTERFACE_NAME parameter. The service is specified by the -s SERVICE_NAME parameter.

remove

The netconfig remove command removes the dynamic settings provided by a previous modify action for the specified interface and service combination and updates the network configuration. The interface is specified by the -i INTERFACE_NAME parameter. The service is specified by the -s SERVICE_NAME parameter.

update

The netconfig update command updates the network configuration using current settings. This is useful when the policy or the static configuration has changed. Use the -m MODULE_TYPE parameter, if you want to update a specified service only (dns, nis, or ntp).

The netconfig policy and the static configuration settings are defined either manually or using YaST in the /etc/sysconfig/network/config file. The dynamic configuration settings provided by autoconfiguration tools such as DHCP or PPP are delivered directly by these tools with the netconfig modify and netconfig remove actions. When NetworkManager is enabled, netconfig (in policy mode auto) uses only NetworkManager settings, ignoring settings from any other interfaces configured using the traditional ifup method. If NetworkManager does not provide any setting, static settings are used as a fallback. A mixed usage of NetworkManager and the wicked method is not supported.

For more information about netconfig, see man 8 netconfig.

17.6.2.10 /etc/hosts

In this file, shown in Example 17.7, “/etc/hosts”, IP addresses are assigned to host names. If no name server is implemented, all hosts to which an IP connection will be set up must be listed here. For each host, enter a line consisting of the IP address, the fully qualified host name, and the host name into the file. The IP address must be at the beginning of the line and the entries must be separated by blanks or tabs. Comments are always preceded by the # sign.

Example 17.7: /etc/hosts
127.0.0.1 localhost
192.168.2.100 jupiter.example.com jupiter
192.168.2.101 venus.example.com venus

17.6.2.11 /etc/networks

Here, network names are converted to network addresses. The format is similar to that of the hosts file, except the network names precede the addresses. See Example 17.8, “/etc/networks”.

Example 17.8: /etc/networks
loopback     127.0.0.0
localnet     192.168.0.0

17.6.2.12 /etc/host.conf

Name resolution—the translation of host and network names via the resolver library—is controlled by this file. This file is only used for programs linked to libc4 or libc5. For current glibc programs, refer to the settings in /etc/nsswitch.conf. Each parameter must always be entered on a separate line. Comments are preceded by a # sign. Table 17.2, “Parameters for /etc/host.conf” shows the parameters available. A sample /etc/host.conf is shown in Example 17.9, “/etc/host.conf”.

Table 17.2: Parameters for /etc/host.conf

order hosts, bind

Specifies in which order the services are accessed for name resolution. Available arguments are (separated by blank spaces or commas):

hosts: searches the /etc/hosts file

bind: accesses a name server

nis: uses NIS

multi on/off

Defines if a host entered in /etc/hosts can have multiple IP addresses.

nospoof on, spoofalert on/off

These parameters influence the name server spoofing but do not exert any influence on the network configuration.

trim domainname

The specified domain name is separated from the host name after host name resolution (as long as the host name includes the domain name). This option is useful only if names from the local domain are in the /etc/hosts file, but should still be recognized with the attached domain names.

Example 17.9: /etc/host.conf
# We have named running
order hosts bind
# Allow multiple addresses
multi on

17.6.2.13 /etc/nsswitch.conf

The introduction of the GNU C Library 2.0 was accompanied by the introduction of the Name Service Switch (NSS). Refer to the nsswitch.conf(5) man page and The GNU C Library Reference Manual for details.

The order for queries is defined in the file /etc/nsswitch.conf. A sample nsswitch.conf is shown in Example 17.10, “/etc/nsswitch.conf”. Comments are preceded by # signs. In this example, the entry under the hosts database means that a request is first sent to /etc/hosts (files) and then to the DNS (dns).

Example 17.10: /etc/nsswitch.conf
passwd:     compat
group:      compat

hosts:      files dns
networks:   files dns

services:   db files
protocols:  db files
rpc:        files
ethers:     files
netmasks:   files
netgroup:   files nis
publickey:  files

bootparams: files
automount:  files nis
aliases:    files nis
shadow:     compat

The databases available over NSS are listed in Table 17.3, “Databases Available via /etc/nsswitch.conf”. The configuration options for NSS databases are listed in Table 17.4, “Configuration Options for NSS Databases”.

Table 17.3: Databases Available via /etc/nsswitch.conf

aliases

Mail aliases implemented by sendmail; see man 5 aliases.

ethers

Ethernet addresses.

netmasks

List of networks and their subnet masks. Only needed if you use subnetting.

group

User groups used by getgrent. See also the man page for group.

hosts

Host names and IP addresses, used by gethostbyname and similar functions.

netgroup

Valid host and user lists in the network for controlling access permissions; see the netgroup(5) man page.

networks

Network names and addresses, used by getnetent.

publickey

Public and secret keys for Secure_RPC used by NFS and NIS+.

passwd

User passwords, used by getpwent; see the passwd(5) man page.

protocols

Network protocols, used by getprotoent; see the protocols(5) man page.

rpc

Remote procedure call names and addresses, used by getrpcbyname and similar functions.

services

Network services, used by getservent.

shadow

Shadow passwords of users, used by getspnam; see the shadow(5) man page.

Table 17.4: Configuration Options for NSS Databases

files

directly access files, for example, /etc/aliases

db

access via a database

nis, nisplus

NIS, see also Book “Security Guide”, Chapter 3 “Using NIS”

dns

can only be used as an extension for hosts and networks

compat

can only be used as an extension for passwd, shadow and group

17.6.2.14 /etc/nscd.conf

This file is used to configure nscd (name service cache daemon). See the nscd(8) and nscd.conf(5) man pages. By default, the system entries of passwd, groups and hosts are cached by nscd. This is important for the performance of directory services, like NIS and LDAP, because otherwise the network connection needs to be used for every access to names, groups or hosts.

If the caching for passwd is activated, it usually takes about fifteen seconds until a newly added local user is recognized. Reduce this waiting time by restarting nscd with:

systemctl restart nscd
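To verify that nscd is running and caching as expected, print its current configuration and statistics:

nscd -g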

17.6.2.15 /etc/HOSTNAME

/etc/HOSTNAME contains the fully qualified host name (FQHN). The fully qualified host name is the host name with the domain name attached. This file must contain only one line (in which the host name is set). It is read while the machine is booting.
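For example, matching the host jupiter from Example 17.7, the file would contain the single line:

jupiter.example.com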

17.6.3 Testing the Configuration

Before you write your configuration to the configuration files, you can test it. To set up a test configuration, use the ip command. To test the connection, use the ping command.

The command ip changes the network configuration directly without saving it in the configuration file. Unless you enter your configuration in the correct configuration files, the changed network configuration is lost on reboot.

Note: ifconfig and route Are Obsolete

The ifconfig and route tools are obsolete. Use ip instead. ifconfig, for example, limits interface names to 9 characters.

17.6.3.1 Configuring a Network Interface with ip

ip is a tool to show and configure network devices, routing, policy routing, and tunnels.

ip is a very complex tool. Its common syntax is ip OPTIONS OBJECT COMMAND. You can work with the following objects:

link

This object represents a network device.

address

This object represents the IP address of a device.

neighbor

This object represents an ARP or NDISC cache entry.

route

This object represents a routing table entry.

rule

This object represents a rule in the routing policy database.

maddress

This object represents a multicast address.

mroute

This object represents a multicast routing cache entry.

tunnel

This object represents a tunnel over IP.

If no command is given, the default command is used (usually list).

Change the state of a device with the command ip link set DEVICE_NAME up|down. For example, to deactivate device eth0, enter ip link set eth0 down. To activate it again, use ip link set eth0 up.

After activating a device, you can configure it. To set the IP address, use ip addr add IP_ADDRESS dev DEVICE_NAME. For example, to set the address of the interface eth0 to 192.168.12.154/30 with standard broadcast (option brd), enter ip addr add 192.168.12.154/30 brd + dev eth0.

To have a working connection, you must also configure the default gateway. To set a default gateway for your system, enter ip route add default via GATEWAY_IP_ADDRESS. To translate one IP address to another, use nat: ip route add nat IP_ADDRESS via OTHER_IP_ADDRESS.

To display all devices, use ip link ls. To display only the running interfaces, use ip link ls up. To print interface statistics for a device, enter ip -s link ls DEVICE_NAME. To view the addresses of your devices, enter ip addr; its output also includes the MAC addresses of your devices. To show all routes, use ip route show.

For more information about using ip, enter ip help or see the ip(8) man page. The help option is also available for all ip subcommands. If, for example, you need help for ip addr, enter ip addr help. Find the ip manual in /usr/share/doc/packages/iproute2/ip-cref.pdf.
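Putting these commands together, a short test session for eth0 could look like the following sketch (the addresses are illustrative, and the configuration is lost on reboot):

ip link set eth0 up
ip addr add 192.168.12.154/30 brd + dev eth0
ip route add default via 192.168.12.153
ip -s link ls eth0
ip route show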

17.6.3.2 Testing a Connection with ping

The ping command is the standard tool for testing whether a TCP/IP connection works. It uses the ICMP protocol to send a small data packet, the ECHO_REQUEST datagram, to the destination host, requesting an immediate reply. If this works, ping displays a message to that effect. This indicates that the network link is functioning.

ping does more than only test the function of the connection between two computers: it also provides some basic information about the quality of the connection. In Example 17.11, “Output of the Command ping”, you can see an example of the ping output. The second-to-last line contains information about the number of transmitted packets, packet loss, and the total running time of ping.

As the destination, you can use a host name or IP address, for example, ping example.com or ping 192.168.3.100. The program sends packets until you press Ctrl+C.

If you only need to check the functionality of the connection, you can limit the number of packets with the -c option. For example, to limit ping to three packets, enter ping -c 3 example.com.

Example 17.11: Output of the Command ping
ping -c 3 example.com
PING example.com (192.168.3.100) 56(84) bytes of data.
64 bytes from example.com (192.168.3.100): icmp_seq=1 ttl=49 time=188 ms
64 bytes from example.com (192.168.3.100): icmp_seq=2 ttl=49 time=184 ms
64 bytes from example.com (192.168.3.100): icmp_seq=3 ttl=49 time=183 ms
--- example.com ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2007ms
rtt min/avg/max/mdev = 183.417/185.447/188.259/2.052 ms

The default interval between two packets is one second. To change the interval, ping provides the option -i. For example, to increase the ping interval to ten seconds, enter ping -i 10 example.com.

In a system with multiple network devices, it is sometimes useful to send the ping through a specific interface address. To do so, use the -I option with the name of the selected device, for example, ping -I wlan1 example.com.

For more options and information about using ping, enter ping -h or see the ping(8) man page.

Tip: Pinging IPv6 Addresses

For IPv6 addresses, use the ping6 command. Note that to ping link-local addresses, you must specify the interface with -I. The following command works if the address is reachable via eth1:

ping6 -I eth1 fe80::117:21ff:feda:a425

17.6.4 Unit Files and Start-Up Scripts

Apart from the configuration files described above, there are also systemd unit files and various scripts that load the network services while the machine is booting. These are started when the system is switched to the multi-user.target target. Some of these unit files and scripts are described in Some Unit Files and Start-Up Scripts for Network Programs. For more information about systemd, see Chapter 14, The systemd Daemon and for more information about the systemd targets, see the man page of systemd.special (man systemd.special).

Some Unit Files and Start-Up Scripts for Network Programs
network.target

network.target is the systemd target for networking, but its meaning depends on the settings provided by the system administrator.

For more information, see http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/.

multi-user.target

multi-user.target is the systemd target for a multiuser system with all required network services.

xinetd

Starts xinetd. xinetd can be used to make server services available on the system. For example, it can start vsftpd whenever an FTP connection is initiated.

rpcbind

Starts the rpcbind utility that converts RPC program numbers to universal addresses. It is needed for RPC services, such as an NFS server.

ypserv

Starts the NIS server.

ypbind

Starts the NIS client.

/etc/init.d/nfsserver

Starts the NFS server.

/etc/init.d/postfix

Controls the postfix process.

17.7 Setting Up Bonding Devices

For some systems, there is a need to implement network connections that comply with more than the standard data security or availability requirements of a typical Ethernet device. In these cases, several Ethernet devices can be aggregated into a single bonding device.

The configuration of the bonding device is done by means of bonding module options. The behavior is mainly affected by the mode of the bonding device. By default, this is active-backup which means that a different slave device will become active if the active slave fails. The following bonding modes are available:

0 (balance-rr)

Packets are transmitted in round-robin fashion from the first to the last available interface. Provides fault tolerance and load balancing.

1 (active-backup)

Only one network interface is active. If it fails, a different interface becomes active. This setting is the default for SUSE Linux Enterprise Desktop. Provides fault tolerance.

2 (balance-xor)

Traffic is split between all available interfaces based on the following policy: [(source MAC address XOR'd with destination MAC address XOR packet type ID) modulo slave count] Requires support from the switch. Provides fault tolerance and load balancing.

3 (broadcast)

All traffic is broadcast on all interfaces. Requires support from the switch. Provides fault tolerance.

4 (802.3ad)

Aggregates interfaces into groups that share the same speed and duplex settings. Requires ethtool support in the interface drivers, and a switch that supports and is configured for IEEE 802.3ad Dynamic link aggregation. Provides fault tolerance and load balancing.

5 (balance-tlb)

Adaptive transmit load balancing. Requires ethtool support in the interface drivers but not switch support. Provides fault tolerance and load balancing.

6 (balance-alb)

Adaptive load balancing. Requires ethtool support in the interface drivers but not switch support. Provides fault tolerance and load balancing.

For a more detailed description of the modes, see https://www.kernel.org/doc/Documentation/networking/bonding.txt.

Tip: Bonding and Xen

Using bonding devices is only of interest for machines where you have multiple real network cards available. In most configurations, this means that you should use the bonding configuration only in Dom0. Only if you have multiple network cards assigned to a VM Guest system may it also be useful to set up the bond in a VM Guest.

Note: IBM POWER: Bonding modes 5 and 6 (balance-tlb / balance-alb) unsupported by ibmveth

There is a conflict between the tlb/alb bonding configuration and Power firmware. In short, the bonding driver in tlb/alb mode sends Ethernet Loopback packets with both the source and destination MAC addresses listed as the Virtual Ethernet MAC address. These packets are not supported by Power firmware. Therefore bonding modes 5 and 6 are unsupported by ibmveth.

To configure a bonding device, use the following procedure:

  1. Run YaST › System › Network Settings.

  2. Use Add and change the Device Type to Bond. Proceed with Next.

  3. Select how to assign the IP address to the bonding device. Three methods are at your disposal:

    • No IP Address

    • Dynamic Address (with DHCP or Zeroconf)

    • Statically assigned IP Address

    Use the method that is appropriate for your environment.

  4. In the Bond Slaves tab, select the Ethernet devices that should be included into the bond by activating the related check box.

  5. Edit the Bond Driver Options and choose a bonding mode.

  6. Make sure that the parameter miimon=100 is added to the Bond Driver Options. Without this parameter, the data integrity is not checked regularly.

  7. Click Next and leave YaST with OK to create the device.

17.7.1 Hotplugging of Bonding Slaves

In specific network environments (such as High Availability), there are cases when you need to replace a bonding slave interface with another one. The reason may be a constantly failing network device. The solution is to set up hotplugging of bonding slaves.

The bond is configured as usual (according to man 5 ifcfg-bonding), for example:

ifcfg-bond0
          STARTMODE='auto' # or 'onboot'
          BOOTPROTO='static'
          IPADDR='192.168.0.1/24'
          BONDING_MASTER='yes'
          BONDING_SLAVE_0='eth0'
          BONDING_SLAVE_1='eth1'
          BONDING_MODULE_OPTS='mode=active-backup miimon=100'

The slaves are specified with STARTMODE=hotplug and BOOTPROTO=none:

ifcfg-eth0
          STARTMODE='hotplug'
          BOOTPROTO='none'

ifcfg-eth1
          STARTMODE='hotplug'
          BOOTPROTO='none'

BOOTPROTO=none uses the ethtool options (when provided), but does not set the link up on ifup eth0. The reason is that the slave interface is controlled by the bond master.

STARTMODE=hotplug causes the slave interface to join the bond automatically when it is available.

The udev rules in /etc/udev/rules.d/70-persistent-net.rules need to be changed to match the device by bus ID (udev KERNELS keyword equal to "SysFS BusID" as visible in hwinfo --netcard) instead of by MAC address. This allows replacement of defective hardware (a network card in the same slot but with a different MAC) and prevents confusion when the bond changes the MAC address of all its slaves.

For example:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*",
KERNELS=="0000:00:19.0", ATTR{dev_id}=="0x0", ATTR{type}=="1",
KERNEL=="eth*", NAME="eth0"

At boot time, the systemd network.service does not wait for the hotplug slaves, but for the bond to become ready, which requires at least one available slave. When one of the slave interfaces gets removed (unbind from NIC driver, rmmod of the NIC driver or true PCI hotplug remove) from the system, the kernel removes it from the bond automatically. When a new card is added to the system (replacement of the hardware in the slot), udev renames it using the bus-based persistent name rule to the name of the slave, and calls ifup for it. The ifup call automatically joins it into the bond.
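To check which slave is currently active and whether the links are up, you can query the status file of the bonding driver, for example:

cat /proc/net/bonding/bond0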

17.8 Setting Up Team Devices for Network Teaming

The term link aggregation is the general term which describes combining (or aggregating) network connections to provide a logical layer. Sometimes you find the terms channel teaming, Ethernet bonding, port trunking, etc., which are synonyms and refer to the same concept.

This concept is widely known as bonding and was originally integrated into the Linux kernel (see Section 17.7, “Setting Up Bonding Devices” for the original implementation). The term Network Teaming is used to refer to the new implementation of this concept.

The main difference between bonding and Network Teaming is that teaming supplies a set of small kernel modules responsible for providing an interface for teamd instances. Everything else is handled in user space. This is different from the original bonding implementation which contains all of its functionality exclusively in the kernel. For a comparison refer to Table 17.5, “Feature Comparison between Bonding and Team”.

Table 17.5: Feature Comparison between Bonding and Team
Feature                                     Bonding   Team
broadcast, round-robin TX policy            yes       yes
active-backup TX policy                     yes       yes
LACP (802.3ad) support                      yes       yes
hash-based TX policy                        yes       yes
user can set hash function                  no        yes
TX load-balancing support (TLB)             yes       yes
TX load-balancing support for LACP          no        yes
Ethtool link monitoring                     yes       yes
ARP link monitoring                         yes       yes
NS/NA (IPV6) link monitoring                no        yes
RCU locking on TX/RX paths                  no        yes
port prio and stickiness                    no        yes
separate per-port link monitoring setup     no        yes
multiple link monitoring setup              limited   yes
VLAN support                                yes       yes
multiple device stacking                    yes       yes

Source: http://libteam.org/files/teamdev.pp.pdf

Both implementations, bonding and Network Teaming, can be used in parallel. Network Teaming is an alternative to the existing bonding implementation. It does not replace bonding.

Network Teaming can be used for different use cases. The two most important use cases are explained later and involve:

  • Load balancing between different network devices.

  • Failover from one network device to another in case one of the devices should fail.

Currently, there is no YaST module to support creating a teaming device. You need to configure Network Teaming manually. The general procedure, shown below, can be applied to all your Network Teaming configurations:

Procedure 17.1: General Procedure
  1. Make sure you have all the necessary packages installed. Install the packages libteam-tools, libteamdctl0, and python-libteam.

  2. Create a configuration file under /etc/sysconfig/network/. Usually it will be ifcfg-team0. If you need more than one Network Teaming device, give them ascending numbers.

    This configuration file contains several variables which are explained in the man pages (see man ifcfg and man ifcfg-team). An example configuration can be found in your system in the file /etc/sysconfig/network/ifcfg.template.

  3. Remove the configuration files of the interfaces which will be used for the teaming device (usually ifcfg-eth0 and ifcfg-eth1).

    It is recommended to make a backup and remove both files. Wicked will re-create the configuration files with the necessary parameters for teaming.

  4. Optionally, check if everything is included in Wicked's configuration file:

    wicked show-config
  5. Start the Network Teaming device team0:

    wicked ifup all team0

    In case you need additional debug information, use the option --debug all after the all subcommand.

  6. Check the status of the Network Teaming device. This can be done by the following commands:

    • Get the state of the teamd instance from Wicked:

      wicked ifstatus --verbose team0
    • Get the state of the entire instance:

      teamdctl team0 state
    • Get the systemd state of the teamd instance:

      systemctl status teamd@team0

    Each of them shows a slightly different view depending on your needs.

  7. In case you need to change something in the ifcfg-team0 file afterward, reload its configuration with:

    wicked ifreload team0

Do not use systemctl for starting or stopping the teaming device! Instead, use the wicked command as shown above.

To completely remove the team device, use this procedure:

Procedure 17.2: Removing a Team Device
  1. Stop the Network Teaming device team0:

    wicked ifdown team0
  2. Rename the file /etc/sysconfig/network/ifcfg-team0 to /etc/sysconfig/network/.ifcfg-team0. Inserting a dot in front of the file name makes it invisible for wicked. If you really do not need the configuration anymore, you can also remove the file.

  3. Reload the configuration:

    wicked ifreload all

17.8.1 Use Case: Loadbalancing with Network Teaming

Loadbalancing is used to improve bandwidth. Use the following configuration file to create a Network Teaming device with loadbalancing capabilities. Proceed with Procedure 17.1, “General Procedure” to set up the device. Check the output with teamdctl.

Example 17.12: Configuration for Loadbalancing with Network Teaming
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDR="192.168.1.1/24" 2
IPADDR6="fd00:deca:fbad:50::1/64" 2

TEAM_RUNNER="loadbalance" 3
TEAM_LB_TX_HASH="ipv4,ipv6,eth,vlan"
TEAM_LB_TX_BALANCER_NAME="basic"
TEAM_LB_TX_BALANCER_INTERVAL="100"

TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4

TEAM_LW_NAME="ethtool" 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6

1

Controls the start of the teaming device. The value auto means that the interface will be set up when the network service is available and will be started automatically on every reboot.

In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE to manual.

2

Sets a static IP address (here 192.168.1.1 for IPv4 and fd00:deca:fbad:50::1 for IPv6).

If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment) the lines with IPADDR and IPADDR6.

3

Sets TEAM_RUNNER to loadbalance to activate the loadbalancing mode.

4

Specifies one or more devices which should be aggregated to create the Network Teaming device.

5

Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only if the device is up and accessible. This makes this check fast enough. However, it does not check if the device can really send or receive packets.

If you need higher confidence in the connection, use the arp_ping option. This sends pings to an arbitrary host (configured in the TEAM_LW_ARP_PING_TARGET_HOST variable). The Network Teaming device is considered up only if the replies are received.

6

Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.

17.8.2 Use Case: Failover with Network Teaming

Failover is used to ensure high availability of a critical Network Teaming device by involving a parallel backup network device. The backup network device is running all the time and takes over if and when the main device fails.

Use the following configuration file to create a Network Teaming device with failover capabilities. Proceed with Procedure 17.1, “General Procedure” to set up the device. Check the output with teamdctl.

Example 17.13: Configuration for Failover with Network Teaming
STARTMODE=auto 1
BOOTPROTO=static 2
IPADDR="192.168.1.2/24" 2
IPADDR6="fd00:deca:fbad:50::2/64" 2

TEAM_RUNNER=activebackup 3
TEAM_PORT_DEVICE_0="eth0" 4
TEAM_PORT_DEVICE_1="eth1" 4

TEAM_LW_NAME=ethtool 5
TEAM_LW_ETHTOOL_DELAY_UP="10" 6
TEAM_LW_ETHTOOL_DELAY_DOWN="10" 6

1

Controls the start of the teaming device. The value auto means that the interface will be set up when the network service is available and will be started automatically on every reboot.

In case you need to control the device yourself (and prevent it from starting automatically), set STARTMODE to manual.

2

Sets a static IP address (here 192.168.1.2 for IPv4 and fd00:deca:fbad:50::2 for IPv6).

If the Network Teaming device should use a dynamic IP address, set BOOTPROTO="dhcp" and remove (or comment) the lines with IPADDR and IPADDR6.

3

Sets TEAM_RUNNER to activebackup to activate the failover mode.

4

Specifies one or more devices which should be aggregated to create the Network Teaming device.

5

Defines a link watcher to monitor the state of subordinate devices. The default value ethtool checks only if the device is up and accessible. This makes this check fast enough. However, it does not check if the device can really send or receive packets.

If you need higher confidence in the connection, use the arp_ping option. This sends pings to an arbitrary host (configured in the TEAM_LW_ARP_PING_TARGET_HOST variable). The Network Teaming device is considered up only if the replies are received.

6

Defines the delay in milliseconds between the link coming up (or down) and the runner being notified.

17.8.3 Use Case: VLAN over Team Device

VLAN is an abbreviation of Virtual Local Area Network. It allows the running of multiple logical (virtual) Ethernets over a single physical Ethernet. It logically splits the network into different broadcast domains so that packets are only switched between ports that are designated for the same VLAN.

The following use case creates two static VLANs on top of a team device:

  • vlan0, bound to the IP address 192.168.10.1

  • vlan1, bound to the IP address 192.168.20.1

Proceed as follows:

  1. Enable the VLAN tags on your switch. If you want to use loadbalancing for your team device, your switch needs to be capable of Link Aggregation Control Protocol (LACP) (802.3ad). Consult your hardware manual about the details.

  2. Decide if you want to use loadbalancing or failover for your team device. Set up your team device as described in Section 17.8.1, “Use Case: Loadbalancing with Network Teaming” or Section 17.8.2, “Use Case: Failover with Network Teaming”.

  3. In /etc/sysconfig/network create a file ifcfg-vlan0 with the following content:

    STARTMODE="auto"
    BOOTPROTO="static" 1
    IPADDR='192.168.10.1/24' 2
    ETHERDEVICE="team0" 3
    VLAN_ID="0" 4
    VLAN='yes'

    1

    Defines a fixed IP address, specified in IPADDR.

    2

    Defines the IP address, here with its netmask.

    3

    Contains the real interface to use for the VLAN interface, here our team device (team0).

    4

Specifies a unique ID for the VLAN. Preferably, the file name and the VLAN_ID correspond to the name ifcfg-vlanVLAN_ID. In our case, VLAN_ID is 0, which leads to the file name ifcfg-vlan0.

  4. Copy the file /etc/sysconfig/network/ifcfg-vlan0 to /etc/sysconfig/network/ifcfg-vlan1 and change the following values:

    • IPADDR from 192.168.10.1/24 to 192.168.20.1/24.

    • VLAN_ID from 0 to 1.

  5. Start the two VLANs:

    root # wicked ifup vlan0 vlan1
  6. Check the output of ifconfig:

    root # ifconfig -a
    [...]
    vlan0     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
              inet addr:192.168.10.1 Bcast:192.168.10.255 Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)
    
    vlan1     Link encap:Ethernet  HWaddr 08:00:27:DC:43:98
              inet addr:192.168.20.1 Bcast:192.168.20.255 Mask:255.255.255.0
              inet6 addr: fe80::a00:27ff:fedc:4398/64 Scope:Link
              UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 b)  TX bytes:816 (816.0 b)

18 Printer Operation

SUSE® Linux Enterprise Desktop supports printing with many types of printers, including remote network printers. Printers can be configured manually or with YaST. For configuration instructions, refer to Book “Deployment Guide”, Chapter 8 “Setting Up Hardware Components with YaST”, Section 8.3 “Setting Up a Printer”. Both graphical and command line utilities are available for starting and managing print jobs. If your printer does not work as expected, refer to Section 18.8, “Troubleshooting”.

CUPS (Common Unix Printing System) is the standard print system in SUSE Linux Enterprise Desktop.

Printers can be distinguished by interface, such as USB or network, and printer language. When buying a printer, make sure that the printer has an interface that is supported (USB, Ethernet, or Wi-Fi) and a suitable printer language. Printers can be categorized on the basis of the following three classes of printer languages:

PostScript Printers

PostScript is the printer language in which most print jobs in Linux and Unix are generated and processed by the internal print system. If PostScript documents can be processed directly by the printer and do not need to be converted in additional stages in the print system, the number of potential error sources is reduced.

Currently PostScript is being replaced by PDF as the standard print job format. PostScript+PDF printers that can directly print PDF (in addition to PostScript) already exist. For traditional PostScript printers PDF needs to be converted to PostScript in the printing workflow.

Standard Printers (Languages Like PCL and ESC/P)

In the case of known printer languages, the print system can convert PostScript jobs to the respective printer language with Ghostscript. This processing stage is called interpreting. The best-known languages are PCL (which is mostly used by HP printers and their clones) and ESC/P (which is used by Epson printers). These printer languages are usually supported by Linux and produce an adequate print result. Linux may not be able to address some special printer functions. Except for HP and Epson, there are currently no printer manufacturers who develop Linux drivers and make them available to Linux distributors under an open source license.

Proprietary Printers (Also Called GDI Printers)

These printers do not support any of the common printer languages. They use their own undocumented printer languages, which are subject to change when a new edition of a model is released. Usually only Windows drivers are available for these printers. See Section 18.8.1, “Printers without Standard Printer Language Support” for more information.

Before you buy a new printer, refer to the following sources to check how well the printer you intend to buy is supported:

http://www.openprinting.org/printers

The OpenPrinting home page with the printer database. The database shows the latest Linux support status. However, a Linux distribution can only integrate the drivers available at production time. Accordingly, a printer currently rated as perfectly supported may not have had this status when the latest SUSE Linux Enterprise Desktop version was released. Thus, the database may not necessarily indicate the correct status, but only provides an approximation.

http://pages.cs.wisc.edu/~ghost/

The Ghostscript Web page.

/usr/share/doc/packages/ghostscript/catalog.devices

List of built-in Ghostscript drivers.

18.1 The CUPS Workflow

The user creates a print job. The print job consists of the data to print plus information for the spooler. This includes the name of the printer or the name of the print queue, and optionally, information for the filter, such as printer-specific options.

At least one dedicated print queue exists for every printer. The spooler holds the print job in the queue until the desired printer is ready to receive data. When the printer is ready, the spooler sends the data through the filter and back-end to the printer.

The filter converts the data generated by the application that is printing (usually PostScript or PDF, but also ASCII, JPEG, etc.) into printer-specific data (PostScript, PCL, ESC/P, etc.). The features of the printer are described in the PPD files. A PPD file contains printer-specific options with the parameters needed to enable them on the printer. The filter system makes sure that options selected by the user are enabled.

If you use a PostScript printer, the filter system converts the data into printer-specific PostScript. This does not require a printer driver. If you use a non-PostScript printer, the filter system converts the data into printer-specific data. This requires a printer driver suitable for your printer. The back-end receives the printer-specific data from the filter then passes it to the printer.

18.2 Methods and Protocols for Connecting Printers

There are various possibilities for connecting a printer to the system. The configuration of CUPS does not distinguish between a local printer and a printer connected to the system over the network. For more information about the printer connection, read the article CUPS in a Nutshell at http://en.opensuse.org/SDB:CUPS_in_a_Nutshell.

Warning: Changing Cable Connections in a Running System

When connecting the printer to the machine, do not forget that only USB devices can be plugged in or unplugged during operation. To avoid damaging your system or printer, shut down the system before changing any connections that are not USB.

18.3 Installing the Software

PPD (PostScript Printer Description) is the computer language that describes the properties (like resolution) and options (such as the availability of a duplex unit) of printers. These descriptions are required for using various printer options in CUPS. Without a PPD file, the print data would be forwarded to the printer in a raw state, which is usually not desired.

To configure a PostScript printer, the best approach is to get a suitable PPD file. Many PPD files are available in the packages manufacturer-PPDs and OpenPrintingPPDs-postscript. See Section 18.7.3, “PPD Files in Various Packages” and Section 18.8.2, “No Suitable PPD File Available for a PostScript Printer”.
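For example, to install both PPD packages with Zypper:

zypper install manufacturer-PPDs OpenPrintingPPDs-postscript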

New PPD files can be stored in the directory /usr/share/cups/model/ or added to the print system with YaST as described in Book “Deployment Guide”, Chapter 8 “Setting Up Hardware Components with YaST”, Section 8.3.1.1 “Adding Drivers with YaST”. Subsequently, the PPD file can be selected during the printer setup.

Be careful if a printer manufacturer wants you to install entire software packages. This kind of installation may result in the loss of the support provided by SUSE Linux Enterprise Desktop. Also, print commands may work differently and the system may no longer be able to address devices of other manufacturers. For this reason, the installation of manufacturer software is not recommended.

18.4 Network Printers

A network printer can support various protocols, some even concurrently. Although most of the supported protocols are standardized, some manufacturers modify the standard. Manufacturers then provide drivers for only a few operating systems. Unfortunately, Linux drivers are rarely provided. The current situation is such that you cannot act on the assumption that every protocol works smoothly in Linux. Therefore, you may need to experiment with various options to achieve a functional configuration.

CUPS supports the socket, LPD, IPP and smb protocols.

socket

Socket refers to a connection in which the plain print data is sent directly to a TCP socket. Some socket port numbers that are commonly used are 9100 or 35. The device URI (uniform resource identifier) syntax is: socket://IP.OF.THE.PRINTER:PORT, for example: socket://192.168.2.202:9100/.

LPD (Line Printer Daemon)

The LPD protocol is described in RFC 1179. Under this protocol, some job-related data, such as the ID of the print queue, is sent before the actual print data is sent. Therefore, a print queue must be specified when configuring the LPD protocol. The implementations of diverse printer manufacturers are flexible enough to accept any name as the print queue. If necessary, the printer manual should indicate what name to use. LPT, LPT1, LP1 or similar names are often used. The port number for an LPD service is 515. An example device URI is lpd://192.168.2.202/LPT1.

IPP (Internet Printing Protocol)

IPP is a relatively new protocol (1999) based on the HTTP protocol. With IPP, more job-related data is transmitted than with the other protocols. CUPS uses IPP for internal data transmission. The name of the print queue is necessary to configure IPP correctly. The port number for IPP is 631. Example device URIs are ipp://192.168.2.202/ps and ipp://192.168.2.202/printers/ps.

SMB (Windows Share)

CUPS also supports printing on printers connected to Windows shares. The protocol used for this purpose is SMB. SMB uses the port numbers 137, 138 and 139. Example device URIs are smb://user:password@workgroup/smb.example.com/printer, smb://user:password@smb.example.com/printer, and smb://smb.example.com/printer.

The protocol supported by the printer must be determined before configuration. If the manufacturer does not provide the needed information, the command nmap (which comes with the nmap package) can be used to ascertain the protocol. nmap checks a host for open ports. For example:

nmap -p 35,137-139,515,631,9100-10000 IP.OF.THE.PRINTER

18.5 Configuring CUPS with Command Line Tools

CUPS can be configured with command line tools like lpinfo, lpadmin and lpoptions. You need a device URI consisting of a back-end, such as USB, and parameters. To determine valid device URIs on your system use the command lpinfo -v | grep ":/":

# lpinfo -v | grep ":/"
direct usb://ACME/FunPrinter%20XL
network socket://192.168.2.253

With lpadmin the CUPS server administrator can add, remove or manage print queues. To add a print queue, use the following syntax:

lpadmin -p QUEUE -v DEVICE-URI -P PPD-FILE -E

Then the device (-v) is available as QUEUE (-p), using the specified PPD file (-P). This means that you must know the PPD file and the device URI to configure the printer manually.

Do not use -E as the first option. For all CUPS commands, -E as the first argument sets use of an encrypted connection. To enable the printer, -E must be used as shown in the following example:

lpadmin -p ps -v usb://ACME/FunPrinter%20XL -P \
/usr/share/cups/model/Postscript.ppd.gz -E

The following example configures a network printer:

lpadmin -p ps -v socket://192.168.2.202:9100/ -P \
/usr/share/cups/model/Postscript-level1.ppd.gz -E

For more options of lpadmin, see the man page of lpadmin(8).

During printer setup, certain options are set as default. These options can be modified for every print job (depending on the print tool used). Changing these default options with YaST is also possible. Using command line tools, set default options as follows:

  1. First, list all options:

    lpoptions -p QUEUE -l

    Example:

    Resolution/Output Resolution: 150dpi *300dpi 600dpi

    The activated default option is identified by a preceding asterisk (*).

  2. Change the option with lpadmin:

    lpadmin -p QUEUE -o Resolution=600dpi
  3. Check the new setting:

    lpoptions -p QUEUE -l
    
    Resolution/Output Resolution: 150dpi 300dpi *600dpi

When a normal user runs lpoptions, the settings are written to ~/.cups/lpoptions. However, settings made by root are written to /etc/cups/lpoptions.

18.6 Printing from the Command Line

To print from the command line, enter lp -d QUEUENAME FILENAME, substituting the c