SUSE Linux Enterprise Desktop 15 SP6

Security and Hardening Guide

This guide introduces basic concepts of system security and describes the usage of security software included with the product, such as AppArmor, SELinux, or the auditing system. The guide also supports system administrators in hardening an installation.

Publication Date: December 05, 2024

Copyright © 2006–2024 SUSE LLC and contributors. All rights reserved.

Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or (at your option) version 1.3; with the Invariant Section being this copyright notice and license. A copy of the license version 1.2 is included in the section entitled GNU Free Documentation License.

For SUSE trademarks, see https://www.suse.com/company/legal/. All third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.

All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.

Preface

1 Available documentation

Online documentation

Our documentation is available online at https://documentation.suse.com. Browse or download the documentation in various formats.

Note: Latest updates

The latest updates are usually available in the English-language version of this documentation.

SUSE Knowledgebase

If you run into an issue, check out the Technical Information Documents (TIDs) that are available online at https://www.suse.com/support/kb/. Search the SUSE Knowledgebase for known solutions driven by customer need.

Release notes

For release notes, see https://www.suse.com/releasenotes/.

In your system

For offline use, the release notes are also available under /usr/share/doc/release-notes on your system. The documentation for individual packages is available at /usr/share/doc/packages.

Many commands are also described in their manual pages. To view them, run man, followed by a specific command name. If the man command is not installed on your system, install it with sudo zypper install man.
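For example, to install man and then read the manual page of the zypper package manager:

> sudo zypper install man
> man zypper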

2 Improving the documentation

Your feedback and contributions to this documentation are welcome. The following channels for giving feedback are available:

Service requests and support

For services and support options available for your product, see https://www.suse.com/support/.

To open a service request, you need a SUSE subscription registered at SUSE Customer Center. Go to https://scc.suse.com/support/requests, log in, and click Create New.

Bug reports

Report issues with the documentation at https://bugzilla.suse.com/.

To simplify this process, click the Report an issue icon next to a headline in the HTML version of this document. This preselects the right product and category in Bugzilla and adds a link to the current section. You can start typing your bug report right away.

A Bugzilla account is required.

Contributions

To contribute to this documentation, click the Edit source document icon next to a headline in the HTML version of this document. This will take you to the source code on GitHub, where you can open a pull request.

A GitHub account is required.

Note: Edit source document only available for English

The Edit source document icons are only available for the English version of each document. For all other languages, use the Report an issue icons instead.

For more information about the documentation environment used for this documentation, see the repository's README.

Mail

You can also report errors and send feedback concerning the documentation to <>. Include the document title, the product version, and the publication date of the document. Additionally, include the relevant section number and title (or provide the URL) and provide a concise description of the problem.

3 Documentation conventions

The following notices and typographic conventions are used in this document:

  • /etc/passwd: Directory names and file names

  • PLACEHOLDER: Replace PLACEHOLDER with the actual value

  • PATH: An environment variable

  • ls, --help: Commands, options, and parameters

  • user: The name of a user or group

  • package_name: The name of a software package

  • Alt, Alt–F1: A key to press or a key combination. Keys are shown in uppercase as on a keyboard.

  • File, File › Save As: Menu items, buttons

  • Chapter 1, Example chapter: A cross-reference to another chapter in this guide.

  • Commands that must be run with root privileges. You can also prefix these commands with the sudo command to run them as a non-privileged user:

    # command
    > sudo command
  • Commands that can be run by non-privileged users:

    > command
  • Commands can be split into two or more lines by a backslash character (\) at the end of a line. The backslash informs the shell that the command invocation will continue after the end of the line:

    > echo a b \
    c d
  • A code block that shows both the command (preceded by a prompt) and the respective output returned by the shell:

    > command
    output
  • Notices

    Warning: Warning notice

    Vital information you must be aware of before proceeding. Warns you about security issues, potential loss of data, damage to hardware, or physical hazards.

    Important: Important notice

    Important information you should be aware of before proceeding.

    Note: Note notice

    Additional information, for example about differences in software versions.

    Tip: Tip notice

    Helpful information, like a guideline or a piece of practical advice.

  • Compact Notices

    Note

    Additional information, for example about differences in software versions.

    Tip

    Helpful information, like a guideline or a piece of practical advice.

4 Support

Find the support statement for SUSE Linux Enterprise Desktop and general information about technology previews below. For details about the product lifecycle, see https://www.suse.com/lifecycle.

If you are entitled to support, find details on how to collect information for a support ticket at https://documentation.suse.com/sles-15/html/SLES-all/cha-adm-support.html.

4.1 Support statement for SUSE Linux Enterprise Desktop

To receive support, you need an appropriate subscription with SUSE. To view the specific support offers available to you, go to https://www.suse.com/support/ and select your product.

The support levels are defined as follows:

L1

Problem determination, which means technical support designed to provide compatibility information, usage support, ongoing maintenance, information gathering and basic troubleshooting using available documentation.

L2

Problem isolation, which means technical support designed to analyze data, reproduce customer problems, isolate a problem area and provide a resolution for problems not resolved by Level 1 or prepare for Level 3.

L3

Problem resolution, which means technical support designed to resolve problems by engaging engineering to resolve product defects which have been identified by Level 2 Support.

For contracted customers and partners, SUSE Linux Enterprise Desktop is delivered with L3 support for all packages, except for the following:

  • Technology previews.

  • Sound, graphics, fonts, and artwork.

  • Packages that require an additional customer contract.

  • Packages with names ending in -devel (containing header files and similar developer resources) will only be supported together with their main packages.

SUSE will only support the usage of original packages. That is, packages that are unchanged and not recompiled.

4.2 Technology previews

Technology previews are packages, stacks, or features delivered by SUSE to provide glimpses into upcoming innovations. Technology previews are included for your convenience to give you a chance to test new technologies within your environment. We would appreciate your feedback. If you test a technology preview, please contact your SUSE representative and let them know about your experience and use cases. Your input is helpful for future development.

Technology previews have the following limitations:

  • Technology previews are still in development. Therefore, they may be functionally incomplete, unstable, or otherwise not suitable for production use.

  • Technology previews are not supported.

  • Technology previews may only be available for specific hardware architectures.

  • Details and functionality of technology previews are subject to change. As a result, upgrading to subsequent releases of a technology preview may be impossible and require a fresh installation.

  • SUSE may discover that a preview does not meet customer or market needs, or does not comply with enterprise standards. Technology previews can be removed from a product at any time. SUSE does not commit to providing a supported version of such technologies in the future.

For an overview of technology previews shipped with your product, see the release notes at https://www.suse.com/releasenotes.

1 Security and confidentiality

This chapter introduces basic concepts of computer security. Threats and basic mitigation techniques are described. The chapter also provides references to other chapters, guides and Web sites with further information.

1.1 Overview

One main characteristic of Linux is its ability to handle multiple users at the same time (multiuser) and to allow these users to simultaneously perform tasks (multitasking) on the same computer. To users, there is no difference between working with data stored locally and data stored in the network.

Because of the multiuser capability, data from different users has to be stored separately to guarantee security and privacy. Also important is the ability to keep data available in spite of a lost or damaged data medium, for example a hard disk.

This chapter is primarily focused on confidentiality and privacy. But a comprehensive security concept includes a regularly updated, workable, and tested backup. Without a backup, restoring data after it has been tampered with or after a hardware failure is hard.

Use a defense-in-depth approach to security: Assume that no single threat mitigation can fully protect your systems and data, but multiple layers of defense make an attack much harder. Components of a defense-in-depth strategy can be the following:

  • Hashing passwords (for example with PBKDF2, bcrypt, or scrypt) and salting them

  • Encrypting data (for example with AES)

  • Logging, monitoring, and intrusion detection

  • Firewall

  • Antivirus scanner

  • Defined and documented emergency procedures

  • Backups

  • Physical security

  • Audits, security scans, and intrusion tests

SUSE Linux Enterprise Desktop includes software that addresses the requirements of the list above. The following sections provide starting points for securing your system.

1.2 Passwords

On a Linux system, only hashes of passwords are stored. Hashes are computed by one-way algorithms that scramble data into a digital fingerprint which is hard to reverse.

The hashes are stored in the file /etc/shadow, which cannot be read by normal users. Because recovering passwords from their hashes is possible with powerful computers, hashed passwords must not be visible to regular users.
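For example, the entry for a hypothetical user tux in /etc/shadow looks similar to the following; the second, colon-separated field contains the salted password hash ($6$ denotes SHA-512 hashing), and the value shown here is invented:

> sudo grep '^tux:' /etc/shadow
tux:$6$F3kbM2Fq$W0vG7qLr1xkPz...:19500:0:99999:7:::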

The National Institute of Standards and Technology (NIST) publishes a guideline for passwords, which is available at https://pages.nist.gov/800-63-3/sp800-63b.html#sec5

For details about how to set a password policy, see Section 17.3, “Password settings”. For general information about authentication on Linux, see Part I, “Authentication”.

1.3 Backups

If your system is compromised, backups can be used to restore a prior system state. When bugs or accidents occur, backups can also be used to compare the current system against an older version. For production systems, it is important to take some backups off-site in case of disasters, for example, by storing tapes or other recordable media at an off-site location.

For legal reasons, some firms and organizations must be careful about backing up too much information and holding it too long. If your environment has a policy regarding the destruction of old paper files, you might need to extend this policy to Linux backup tapes as well.

The rules about physical security of servers apply to backups as well. Additionally, it is advisable to encrypt backup data. This can be done either per individual backup archive or for the complete backup file system, if applicable. Should a backup medium ever be lost, for example, during transportation, the data is protected against unauthorized access. The same applies if a backup system itself is compromised. To some extent encryption also ensures the integrity of the backups. Keep in mind, however, that the appropriate people need to be able to decrypt backups in emergency situations. Also, the case that an encryption key itself is compromised and needs to be replaced should be considered.
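As a minimal sketch of per-archive encryption, a backup can be encrypted symmetrically with GnuPG before it is moved off-site (the file names are examples; the key management considerations above still apply):

> tar czf etc-backup.tar.gz /etc
> gpg --symmetric --cipher-algo AES256 --output etc-backup.tar.gz.gpg etc-backup.tar.gz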

If a system is known or suspected to be compromised, it is vital to determine the integrity status of its backups. If a compromise went undetected for a long period of time, backups may already include manipulated configuration files or malicious programs. Keeping a sufficiently long history of backups makes it possible to inspect them for unwarranted differences.

Even without any known security breach, a regular inspection of differences among important configuration files in backups can help with finding security issues (maybe even accidental misconfigurations). This approach is best suited for files and environments where the content does not change too frequently.

1.4 System integrity

If it is possible to physically access a computer, the firmware and boot process can be manipulated to gain access when an authorized person boots the machine. While not all computers can be locked into inaccessible rooms, your first step should be physically locking the server room.

Also remember that disposing of old equipment must be handled in a secure manner. Securing the boot loader and restricting removable media also provide useful physical security. See Chapter 9, Physical security for more information.

Consider taking the following additional measures:

  • Configure your system so it cannot be booted from a removable device.

  • Protect the boot process with a UEFI password, Secure Boot, and a GRUB2 password.

  • Linux systems are started by a boot loader that allows passing additional options to the booted kernel. You can prevent others from using such parameters during boot by setting an additional password for the boot loader. This is crucial to system security. Not only does the kernel itself run with root permissions, but it is also the first authority to grant root permissions at system start-up.

    For more information about setting a password in the boot loader, see Book “Administration Guide”, Chapter 18 “The boot loader GRUB 2”, Section 18.2.6 “Setting a boot password”.

  • Enable hard disk encryption. For more information, see Chapter 12, Encrypting partitions and files.

  • Use cryptctl to encrypt hosted storage. For more information, see Chapter 13, Storage encryption for hosted applications with cryptctl.

  • Use AIDE to detect any changes in your system configuration. For more information, see Chapter 20, Intrusion detection with AIDE.

1.5 File access

Because of the “everything is a file” approach in Linux, file permissions are important for controlling access to most resources. This means that by using file permissions, you can define access to regular files, directories and hardware devices. By default, most hardware devices are only accessible for root. However, certain devices, for example serial ports, can be accessible for normal users.

As a general rule, always work with the most restrictive privileges possible for a given task. For example, it is definitely not necessary to be root to read or write e-mail. If the mail program has a bug, this bug could be exploited for an attack that acts with exactly the permissions of the program at the time of the attack. By following the above rule, minimize the possible damage.
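For example, a mailbox file should be readable and writable only by its owner (the path and user name are illustrative):

> chmod 600 /home/tux/Mail/inbox
> ls -l /home/tux/Mail/inbox
-rw------- 1 tux users 5120 Mar  1 09:13 /home/tux/Mail/inbox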

For details, see Section 19.1, “Traditional file permissions” and Section 19.2, “Advantages of ACLs”.

AppArmor allows you to set constraints for applications and users. For details, see Part V, “Confining privileges with AppArmor”.

If there is a chance that hard disks could be accessed outside of the installed operating system, for example by booting a live system or removing the hardware, encrypt the data. SUSE Linux Enterprise Desktop allows you to encrypt partitions containing data and the operating system. For details, see Chapter 12, Encrypting partitions and files.

1.6 Networking

Securing network services is a crucial task. Aim to secure as many layers of the OSI model as possible.

All communication should be authenticated and encrypted with up-to-date cryptographic algorithms on the transport or application layer. Use a Virtual Private Network (VPN) as an additional secure layer on physical networks.

SUSE Linux Enterprise Desktop provides many options for securing your network:

  • Use openssl to create X.509 certificates. These certificates can be used for encryption and authentication of many services. You can set up your own certificate authority (CA) and use it as a source of trust in your network. For details, see man openssl and the example after this list.

  • At least parts of networks are exposed to the public Internet. Reduce attack surfaces by closing ports with firewall rules and by uninstalling or at least disabling services that are not required. For details, see Chapter 23, Masquerading and firewalls.

  • Use OpenVPN to secure communication channels over insecure physical networks. For details, see Chapter 24, Configuring a VPN server.

  • Use strong authentication for network services. For details, see Part I, “Authentication”.
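For example, the following creates a self-signed X.509 test certificate together with a new RSA key; the subject name is a placeholder:

> openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -subj "/CN=example.com" -keyout key.pem -out cert.pem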

1.7 Software vulnerabilities

Software vulnerabilities are issues in software that can be exploited to obtain unauthorized access or misuse systems. Vulnerabilities are especially critical if they affect remote services, such as HTTP servers. Computer systems are complex, therefore they always include certain vulnerabilities.

When such issues become known, they must be fixed in the software by software developers. The resulting update must then be installed by system administrators in a timely and safe manner on affected systems.

Vulnerabilities are announced on centralized databases, for example the National Vulnerability Database, which is maintained by the US government. You can subscribe to feeds to stay informed about newly discovered vulnerabilities. In some cases the problems induced by the bugs can be mitigated until a software update is provided. Vulnerabilities are assigned a Common Vulnerabilities and Exposures (CVE) number and a Common Vulnerability Scoring System (CVSS) score. The score helps identify the severity of vulnerabilities.

SUSE provides a feed of security advisories. It is available at https://www.suse.com/en-us/support/update/. There is also a list of security updates by CVE number available at https://www.suse.com/support/security/.

Note: Backports and version numbers

SUSE employs the practice of applying important source code fixes onto older stable versions of software (backporting). Therefore, even if the version number of a software package in SUSE Linux Enterprise Desktop is lower than the latest version number from the upstream project, the package may already contain the latest fixes for vulnerabilities.

For more information, see Book “Upgrade Guide”, Chapter 7 “Backports of source code”.

Administrators should be prepared for severe vulnerabilities in their systems. This includes hardening all computers as far as possible. Also, we recommend having predefined procedures in place for quickly installing updates for severe vulnerabilities.

To reduce the damage of possible attacks, use restrictive file permissions. See Section 19.1, “Traditional file permissions”.

1.8 Malware

Malware is software that is intended to interrupt the normal functioning of a computer or steal data. This includes viruses, worms, ransomware or rootkits. Sometimes malware uses software vulnerabilities to attack a computer. However, often it is accidentally executed by a user, especially when installing third-party software from unknown sources. SUSE Linux Enterprise Desktop provides an extensive list of programs (packages) in its download repositories. This reduces the need to download third-party software. All packages provided by SUSE are signed. The package manager of SUSE Linux Enterprise Desktop checks the signatures of packages after the download to verify their integrity.

The command rpm --checksig RPM_FILE shows whether the checksum and the signature of a package are correct. You can find the signing key on the first DVD of SUSE Linux Enterprise Desktop and on most key servers worldwide.
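For example (the package file name is a placeholder; the exact output wording varies between rpm versions):

> rpm --checksig vim-9.0.x86_64.rpm
vim-9.0.x86_64.rpm: digests signatures OK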

You can use the ClamAV antivirus software to detect malware on your system. ClamAV can be integrated into several services, for example mail servers and HTTP proxies. This can be used to filter malware before it reaches the user.

Restrictive user privileges can reduce the risk of accidental code execution.

1.9 Important security tips

The following tips are a quick summary of the sections above:

  • Stay informed about the latest security issues. Get and install the updated packages recommended by security announcements as quickly as possible.

  • Avoid using root privileges whenever possible. Set restrictive file permissions.

  • Only use encrypted protocols for network communication.

  • Disable any network services you do not absolutely require.

  • Conduct regular security audits. For example, scan your network for open ports.

  • Monitor the integrity of files on your systems with AIDE (Advanced Intrusion Detection Environment).

  • Take proper care when installing any third-party software.

  • Check all your backups regularly.

  • Check your log files, for example, with Logwatch.

  • Configure the firewall to block all ports that are not explicitly whitelisted.

  • Design your security measures to be redundant.

  • Use encryption where possible, for example, for hard disks of mobile computers.

1.10 Reporting security issues

If you discover a security-related problem, first check the available update packages. If no update is available, write an e-mail to <>. Include a detailed description of the problem and the version number of the package concerned. We encourage you to encrypt e-mails with GPG.

You can find a current version of the SUSE GPG key at https://www.suse.com/support/security/contact/.

Part I Authentication

  • 2 Authentication with PAM
  • Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a system-wide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.

  • 3 Using NIS
  • When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are using. This can be done by NIS and NFS services.

    NIS (Network Information Service) can be described as a database-like service that provides access to the contents of /etc/passwd, /etc/shadow, and /etc/group across networks. NIS can also be used for other purposes (making the contents of files like /etc/hosts or /etc/services available, for example), but this is beyond the scope of this introduction. People often refer to NIS as YP, because it works like the network's yellow pages.

  • 4 Setting up authentication clients using YaST
  • Whereas Kerberos is used for authentication, LDAP is used for authorization and identification. Both can work together. For more information about LDAP, see Chapter 5, LDAP with 389 Directory Server, and about Kerberos, see Chapter 6, Network authentication with Kerberos.

  • 5 LDAP with 389 Directory Server
  • The Lightweight Directory Access Protocol (LDAP) is a protocol designed to access and maintain information directories. LDAP can be used for tasks such as user and group management, system configuration management, and address management. In SUSE Linux Enterprise Desktop 15 SP6, the LDAP service is provided by the 389 Directory Server, replacing OpenLDAP.

  • 6 Network authentication with Kerberos
  • Kerberos is a network authentication protocol which also provides encryption. This chapter describes how to set up Kerberos and integrate services like LDAP and NFS.

  • 7 Active Directory support
  • Active Directory* (AD) is a directory service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces po…

  • 8 Setting up a freeRADIUS server
  • The RADIUS (Remote Authentication Dial-In User Service) protocol has long been a standard service for managing network access. It provides authentication, authorization and accounting (AAA) for large businesses such as Internet service providers and cellular network providers, and is also popular for …

2 Authentication with PAM

Linux uses PAM (pluggable authentication modules) in the authentication process as a layer that mediates between user and application. PAM modules are available on a system-wide basis, so they can be requested by any application. This chapter describes how the modular authentication mechanism works and how it is configured.

2.1 What is PAM?

System administrators and programmers often want to restrict access to certain parts of the system or to limit the use of certain functions of an application. Without PAM, applications must be adapted every time a new authentication mechanism, such as LDAP, Samba, or Kerberos, is introduced. This process is time-consuming and error-prone. One way to avoid these drawbacks is to separate applications from the authentication mechanism and delegate authentication to centrally managed modules. Whenever a new authentication scheme is required, it is sufficient to adapt or write a suitable PAM module for use by the program in question.

The PAM concept consists of:

  • PAM modules, which are a set of shared libraries for a specific authentication mechanism.

  • A module stack of one or more PAM modules.

  • A PAM-aware service which needs authentication by using a module stack or PAM modules. Usually, the service name is the familiar name of the corresponding application, like login or su. The service name other is a reserved word for default rules.

  • Module arguments, with which the execution of a single PAM module can be influenced.

  • A mechanism that evaluates the result of each PAM module execution. A positive value causes the next PAM module to be processed. The way a negative value is dealt with depends on the configuration: anything from ignoring the failure to terminating immediately is a valid option.

2.2 Structure of a PAM configuration file

PAM can be configured in two ways:

File based configuration (/etc/pam.conf)

The configuration of each service is stored in /etc/pam.conf. However, for maintenance and usability reasons, this configuration scheme is not used in SUSE Linux Enterprise Desktop.

Directory based configuration (/etc/pam.d/)

Every service (or program) that relies on the PAM mechanism has its own configuration file in the /etc/pam.d/ directory. For example, the service for sshd can be found in the /etc/pam.d/sshd file.

The files under /etc/pam.d/ define the PAM modules used for authentication. Each file consists of lines, which define a service, and each line consists of a maximum of four components:

TYPE  CONTROL  MODULE_PATH  MODULE_ARGS

The components have the following meaning:

TYPE

Declares the type of the service. PAM modules are processed as stacks. Different types of modules have different purposes. For example, one module checks the password, another verifies the location from which the system is accessed, and yet another reads user-specific settings. PAM knows about four different types of modules:

auth

Check the user's authenticity, traditionally by querying a password. However, this can also be achieved with a chip card or through biometrics (for example, fingerprints or iris scan).

account

Modules of this type check if the user has general permission to use the requested service. As an example, such a check should be performed to ensure that no one can log in with the user name of an expired account.

password

The purpose of this type of module is to enable the change of an authentication token. Usually this is a password.

session

Modules of this type are responsible for managing and configuring user sessions. They are started before and after authentication to log login attempts and configure the user's specific environment (mail accounts, home directory, system limits, etc.).

CONTROL

Indicates the behavior of a PAM module. Each module can have the following control flags:

required

A module with this flag must be successfully processed before the authentication may proceed. After the failure of a module with the required flag, all other modules with the same flag are processed before the user receives a message about the failure of the authentication attempt.

requisite

Modules having this flag must also be processed successfully, in much the same way as a module with the required flag. However, in case of failure a module with this flag gives immediate feedback to the user and no further modules are processed. In case of success, other modules are subsequently processed, like any modules with the required flag. The requisite flag can be used as a basic filter checking for the existence of certain conditions that are essential for a correct authentication.

sufficient

After a module with this flag has been successfully processed, the requesting application receives an immediate message about the success and no further modules are processed, provided there was no preceding failure of a module with the required flag. The failure of a module with the sufficient flag has no direct consequences, in the sense that any subsequent modules are processed in their respective order.

optional

The failure or success of a module with this flag does not have any direct consequences. This can be useful for modules that are only intended to display a message (for example, to tell the user that mail has arrived) without taking any further action.

include

If this flag is given, the file specified as argument is inserted at this place.

MODULE_PATH

Contains a full file name of a PAM module. It does not need to be specified explicitly if the module is located in the default directory /lib/security (for all 64-bit platforms supported by SUSE® Linux Enterprise Desktop, the directory is /lib64/security).

MODULE_ARGS

Contains a space-separated list of options to influence the behavior of a PAM module, such as debug (enables debugging) or nullok (allows the use of empty passwords).

In addition, there are global configuration files for PAM modules under /etc/security, which define the exact behavior of these modules (examples include pam_env.conf and time.conf). Every application that uses a PAM module calls a set of PAM functions, which then process the information in the configuration files and return the result to the requesting application.

To simplify the creation and maintenance of PAM modules, common default configuration files for the auth, account, password, and session module types have been introduced. These are retrieved from every application's PAM configuration. Updates to the global PAM configuration modules in common-* are thus propagated across all PAM configuration files without requiring the administrator to update every single PAM configuration file.

The global PAM configuration files are maintained using the pam-config tool. This tool automatically adds new modules to the configuration, changes the configuration of existing ones or deletes modules (or options) from the configurations. Manual intervention in maintaining PAM configurations is minimized or no longer required.

Note
Note: 64-bit and 32-bit mixed installations

When using a 64-bit operating system, it is possible to also include a runtime environment for 32-bit applications. In this case, make sure that you also install the 32-bit version of the PAM modules.

2.3 The PAM configuration of sshd

Consider the PAM configuration of sshd as an example:

Example 2.1: PAM configuration for sshd (/etc/pam.d/sshd)
#%PAM-1.0 1

auth     requisite      pam_nologin.so                              2
auth     include        common-auth                                 3
account  requisite      pam_nologin.so                              2
account  include        common-account                              3
password include        common-password                             3
session  required       pam_loginuid.so                             4
session  include        common-session                              3
session  optional       pam_lastlog.so   silent noupdate showfailed 5

1

Declares the version of this configuration file for PAM 1.0. This is merely a convention, but could be used in the future to check the version.

2

Checks if the file /etc/nologin exists. If it does, no user other than root may log in.

3

Refers to the configuration files of four module types: common-auth, common-account, common-password, and common-session. These four files hold the default configuration for each module type.

4

Sets the login UID process attribute for the process that was authenticated.

5

Displays information about the last login of a user.

By including the configuration files instead of adding each module separately to the respective PAM configuration, you automatically get an updated PAM configuration when an administrator changes the defaults. Formerly, you needed to adjust all configuration files manually for all applications when changes to PAM occurred or a new application was installed. Now the PAM configuration is made with central configuration files and all changes are automatically inherited by the PAM configuration of each service.

The first include file (common-auth) calls three modules of the auth type: pam_env.so, pam_gnome_keyring.so and pam_unix.so. See Example 2.2, “Default configuration for the auth section (common-auth)”.

Example 2.2: Default configuration for the auth section (common-auth)
auth  required  pam_env.so                   1
auth  optional  pam_gnome_keyring.so         2
auth  required  pam_unix.so  try_first_pass 3

1

pam_env.so loads /etc/security/pam_env.conf to set the environment variables as specified in this file. It can be used to set the DISPLAY variable to the correct value, because the pam_env module knows about the location from which the login is taking place.

2

pam_gnome_keyring.so checks the user's login and password against the GNOME keyring.

3

pam_unix checks the user's login and password against /etc/passwd and /etc/shadow.

The whole stack of auth modules is processed before sshd gets any feedback about whether the login has succeeded. All modules of the stack having the required control flag must be processed successfully before sshd receives a message about the positive result. If one of the modules is not successful, the entire module stack is still processed and only then is sshd notified about the negative result.

When all modules of the auth type have been successfully processed, another include statement is processed, in this case, that in Example 2.3, “Default configuration for the account section (common-account)”. common-account contains only one module, pam_unix. If pam_unix returns the result that the user exists, sshd receives a message announcing this success and the next stack of modules (password) is processed, shown in Example 2.4, “Default configuration for the password section (common-password)”.

Example 2.3: Default configuration for the account section (common-account)
account  required  pam_unix.so  try_first_pass
Example 2.4: Default configuration for the password section (common-password)
password  requisite  pam_cracklib.so
password  optional   pam_gnome_keyring.so  use_authtok
password  required   pam_unix.so  use_authtok nullok shadow try_first_pass

Again, the PAM configuration of sshd involves only an include statement referring to the default configuration for password modules located in common-password. These modules must be completed successfully (control flags requisite and required) whenever the application requests the change of an authentication token.

Changing a password or another authentication token requires a security check. This is achieved with the pam_cracklib module. The pam_unix module used afterward carries over any old and new passwords from pam_cracklib, so the user does not need to authenticate again after changing the password. This procedure makes it impossible to circumvent the checks carried out by pam_cracklib. Whenever the account or the auth type are configured to complain about expired passwords, the password modules should also be used.

Example 2.5: Default configuration for the session section (common-session)
session  required  pam_limits.so
session  required  pam_unix.so  try_first_pass
session  optional  pam_umask.so
session  optional  pam_systemd.so
session  optional  pam_gnome_keyring.so auto_start only_if=gdm,gdm-password,lxdm,lightdm
session  optional  pam_env.so

As the final step, the modules of the session type (bundled in the common-session file) are called to configure the session according to the settings for the user in question. The pam_limits module loads the file /etc/security/limits.conf, which may define limits on the use of certain system resources. The pam_unix module is processed again. The pam_umask module can be used to set the file mode creation mask. Since this module carries the optional flag, a failure of this module would not affect the successful completion of the entire session module stack. The session modules are called a second time when the user logs out.

2.4 Configuration of PAM modules

Some PAM modules are configurable. The configuration files are located in /etc/security. This section briefly describes the configuration files relevant to the sshd example—pam_env.conf and limits.conf.

2.4.1 pam_env.conf

pam_env.conf can be used to define a standardized environment for users that is set whenever the pam_env module is called. With it, you can preset environment variables using the following syntax:

VARIABLE  [DEFAULT=VALUE]  [OVERRIDE=VALUE]
VARIABLE

Name of the environment variable to set.

[DEFAULT=VALUE]

Default VALUE the administrator wants to set.

[OVERRIDE=VALUE]

Values that may be queried and set by pam_env, overriding the default value.

A typical example of how pam_env can be used is the adaptation of the DISPLAY variable, which is changed whenever a remote login takes place. This is shown in Example 2.6, “pam_env.conf”.

Example 2.6: pam_env.conf
REMOTEHOST  DEFAULT=localhost          OVERRIDE=@{PAM_RHOST}
DISPLAY     DEFAULT=${REMOTEHOST}:0.0  OVERRIDE=${DISPLAY}

The first line sets the value of the REMOTEHOST variable to localhost, which is used whenever pam_env cannot determine any other value. The DISPLAY variable in turn contains the value of REMOTEHOST. Find more information in the comments in /etc/security/pam_env.conf.

2.4.2 pam_mount.conf.xml

The purpose of pam_mount is to mount user home directories during the login process, and to unmount them during logout in an environment where a central file server keeps all the home directories of users. With this method, it is not necessary to mount a complete /home directory where all the user home directories would be accessible. Instead, only the home directory of the user who is about to log in is mounted.

After installing pam_mount, a template for pam_mount.conf.xml is available in /etc/security. The description of the elements can be found in the manual page man 5 pam_mount.conf.

A basic configuration of this feature can be done with YaST. Select Network Settings › Windows Domain Membership › Expert Settings to add the file server.

Note: LUKS2 support

LUKS2 support was added to cryptsetup 2.0, and SUSE Linux Enterprise Desktop has included support for LUKS2 in pam_mount since SUSE Linux Enterprise Desktop 12 SP3.

2.4.3 limits.conf

System limits can be set on a user or group basis in limits.conf, which is read by the pam_limits module. The file allows you to set hard limits, which may not be exceeded, and soft limits, which may be exceeded temporarily. For more information about the syntax and the options, see the comments in /etc/security/limits.conf.
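A hypothetical excerpt from /etc/security/limits.conf; the group and user names are examples:

# limit the number of concurrent processes for members of group "students"
@students  hard  nproc   50
# tux may temporarily exceed 4096 open files, but never 8192
tux        soft  nofile  4096
tux        hard  nofile  8192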

2.5 Configuring PAM using pam-config

The pam-config tool helps you configure the global PAM configuration files (/etc/pam.d/common-*) and several selected application configurations. For a list of supported modules, use the pam-config --list-modules command. Use the pam-config command to maintain your PAM configuration files. Add new modules to your PAM configurations, delete other modules or modify options to these modules. When changing global PAM configuration files, no manual tweaking of the PAM setup for individual applications is required.

A simple use case for pam-config involves the following:

  1. Auto-generate a fresh unix-style PAM configuration. Let pam-config create the simplest possible setup which you can extend later on. The pam-config --create command creates a simple Unix authentication configuration. Pre-existing configuration files not maintained by pam-config are overwritten, but backup copies are kept as *.pam-config-backup.

  2. Add a new authentication method. Adding a new authentication method (for example, SSSD) to your stack of PAM modules comes down to a simple pam-config --add --sss command. SSSD is added wherever appropriate across all common-*-pc PAM configuration files.

  3. Add debugging for test purposes. To make sure the new authentication procedure works as planned, turn on debugging for all PAM-related operations. The pam-config --add --sss-debug command turns on debugging for SSSD-related PAM operations. Find the debugging output in the systemd journal (see Book “Administration Guide”, Chapter 21 “journalctl: query the systemd journal”). To configure SSSD, refer to Section 5.8, “Using SSSD to manage LDAP authentication”.

  4. Query your setup. Before you finally apply your new PAM setup, check if it contains all the options you wanted to add. The pam-config --query --MODULE command lists both the type and the options for the queried PAM module.

  5. Remove the debug options. Finally, remove the debug option from your setup when you are entirely satisfied with its performance. The pam-config --delete --sss-debug command turns off debugging for SSSD-related PAM operations. In case you had debugging options added for other modules, use similar commands to turn these off. The complete sequence is collected below.
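Collected as one sequence, the SSSD example above comes down to the following commands:

> sudo pam-config --create
> sudo pam-config --add --sss
> sudo pam-config --add --sss-debug
> sudo pam-config --query --sss
> sudo pam-config --delete --sss-debug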

For more information on the pam-config command and the options available, refer to the manual page of pam-config(8).

2.6 Manually configuring PAM

If you prefer to manually create or maintain your PAM configuration files, make sure to disable pam-config for these files.

When you create your PAM configuration files from scratch using the pam-config --create command, it creates symbolic links from the common-* to the common-*-pc files. pam-config only modifies the common-*-pc configuration files. Removing these symbolic links effectively disables pam-config, because pam-config only operates on the common-*-pc files and these files are not put into effect without the symbolic links.
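As a minimal sketch, the following replaces the common-session symbolic link with an independent regular copy, so that pam-config no longer affects that file (repeat for the other common-* files as needed):

> sudo cp --remove-destination /etc/pam.d/common-session-pc /etc/pam.d/common-session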

Warning: Include pam_systemd.so in configuration

If you are creating your own PAM configuration, make sure to include pam_systemd.so configured as session optional. Not including pam_systemd.so can cause problems with systemd task limits. For details, refer to the man page of pam_systemd.so.

2.7 Configuring U2F keys for local login

To provide more security during local login, you can configure two-factor authentication using the pam-u2f framework and the U2F feature on YubiKeys and Security Keys.

To set up U2F on your system, you need to associate your key with your account. After that, configure your system to use the key. The procedure is described in the following sections.

2.7.1 Associating the U2F key with your account

To associate your U2F key with your account, proceed as follows:

  1. Log in to your machine.

  2. Insert your U2F key.

  3. Create a directory for the U2F key configuration:

    > mkdir -p ~/.config/Yubico
  4. Run the pamu2fcfg command that outputs configuration lines:

    > pamu2fcfg > ~/.config/Yubico/u2f_keys
  5. When your device begins flashing, touch the metal contact to confirm the association.

We recommend using a backup U2F device, which you can set up by running the following commands:

  1. Run:

    > pamu2fcfg -n >> ~/.config/Yubico/u2f_keys
  2. When your device begins flashing, touch the metal contact to confirm the association.

To increase security, you can move the output file from its default location to a directory that requires root privileges to modify, for example, the /etc directory. To do so, follow these steps:

  1. Create a directory in /etc:

    > sudo mkdir /etc/Yubico
  2. Move the created file:

    > sudo mv ~/.config/Yubico/u2f_keys /etc/Yubico/u2f_keys
Note: Placing the u2f_keys in a non-default location

If you move the output file to a directory other than the default ($HOME/.config/Yubico/u2f_keys), you need to add the path to the /etc/pam.d/login file as described in Section 2.7.2, “Updating the PAM configuration”.

2.7.2 Updating the PAM configuration

After you have created the U2F keys configuration, you need to adjust the PAM configuration on your system.

  1. Open the file /etc/pam.d/login.

  2. Add the line auth required pam_u2f.so to the file as follows:

    #%PAM-1.0
    auth      include    common-auth
    auth      required   pam_u2f.so
    account   include    common-account
    password  include    common-password
    session   optional   pam_keyinit.so revoke
    session   include    common-session
    #session  optional   pam_xauth.so
  3. If you placed the u2f_keys file in a different location than $HOME/.config/Yubico/u2f_keys, you need to use the authfile option in the /etc/pam.d/login PAM file as follows:

    #%PAM-1.0
    auth     requisite pam_nologin.so
    auth     include   common-auth
    auth     required  pam_u2f.so authfile=<PATH_TO_u2f_keys>
    ...

    where <PATH_TO_u2f_keys> is the absolute path to the u2f_keys file.

2.8 More information

In the /usr/share/doc/packages/pam directory after installing the pam-doc package, find the following additional documentation:

READMEs

In the top level of this directory, there is the modules subdirectory holding README files about the available PAM modules.

The Linux-PAM System Administrators' Guide

This document comprises everything that the system administrator should know about PAM. It discusses a range of topics, from the syntax of configuration files to the security aspects of PAM.

The Linux-PAM Module Writers' Manual

This document summarizes the topic from the developer's point of view, with information about how to write standard-compliant PAM modules.

The Linux-PAM Application Developers' Guide

This document comprises everything needed by an application developer who wants to use the PAM libraries.

The PAM manual pages

PAM and the individual modules come with manual pages that provide a good overview of the functionality of all the components.

3 Using NIS

When multiple Unix systems in a network access common resources, it becomes imperative that all user and group identities are the same for all machines in that network. The network should be transparent to users: their environments should not vary, regardless of which machine they are using. This can be done by NIS and NFS services.

NIS (Network Information Service) can be described as a database-like service that provides access to the contents of /etc/passwd, /etc/shadow, and /etc/group across networks. NIS can also be used for other purposes (making the contents of files like /etc/hosts or /etc/services available, for example), but this is beyond the scope of this introduction. People often refer to NIS as YP, because it works like the network's yellow pages.

3.1 Configuring NIS servers

For configuring NIS servers, see the SUSE Linux Enterprise Server Administration Guide.

3.2 Configuring NIS clients

To use NIS on a workstation, do the following:

  1. Start YaST › Network Services › NIS Client.

  2. Activate the Use NIS button.

  3. Enter the NIS domain. This is a domain name given by your administrator or a static external IP address received by DHCP.

    Figure 3.1: Setting domain and address of a NIS server
  4. Enter your NIS servers and separate their addresses by spaces. If you do not know your NIS server, click Find to let YaST search for any NIS servers in your domain. Depending on the size of your local network, this may be a time-consuming process. Broadcast asks for a NIS server in the local network after the specified servers fail to respond.

  5. Depending on your local installation, you may also want to activate the automounter. This option also installs additional software if required.

  6. If you do not want other hosts to be able to query which server your client is using, go to the Expert settings and disable Answer Remote Hosts. Checking Broken Server enables the client to receive replies from a server communicating through an unprivileged port. For further information, see man ypbind.

  7. Click Finish to save your settings and return to the YaST control center. Your client is now configured with NIS.

4 Setting up authentication clients using YaST

Whereas Kerberos is used for authentication, LDAP is used for authorization and identification. Both can work together. For more information about LDAP, see Chapter 5, LDAP with 389 Directory Server, and about Kerberos, see Chapter 6, Network authentication with Kerberos.

4.1 Configuring an authentication client with YaST

YaST allows setting up authentication on clients using different modules.

4.2 SSSD

Two of the YaST modules are based on SSSD: User Logon Management and LDAP and Kerberos Authentication.

SSSD stands for System Security Services Daemon. SSSD talks to remote directory services that provide user data and provides authentication methods, such as LDAP, Kerberos, or Active Directory (AD). It also provides an NSS (Name Service Switch) and PAM (Pluggable Authentication Module) interface.

SSSD can locally cache user data and then allow users to use the data, even if the real directory service is (temporarily) unreachable.

4.2.1 Checking the status

After running one of the YaST authentication modules, you can check whether SSSD is running with:

# systemctl status sssd
sssd.service - System Security Services Daemon
   Loaded: loaded (/usr/lib/systemd/system/sssd.service; enabled)
   Active: active (running) since Thu 2015-10-23 11:03:43 CEST; 5s ago
   [...]

4.2.2 Caching

To allow logging in when the authentication back-end is unavailable, SSSD uses its cache even if it was invalidated. This happens until the back-end is available again.

To invalidate the cache, run sss_cache -E (the command sss_cache is part of the package sssd-tools).

To remove the SSSD cache, run:

> sudo systemctl stop sssd
> sudo rm -f /var/lib/sss/db/*
> sudo systemctl start sssd

5 LDAP with 389 Directory Server

The Lightweight Directory Access Protocol (LDAP) is a protocol designed to access and maintain information directories. LDAP can be used for tasks such as user and group management, system configuration management, and address management. In SUSE Linux Enterprise Desktop 15 SP6, the LDAP service is provided by the 389 Directory Server, replacing OpenLDAP.

Ideally, a central server stores the data in a directory and distributes it to all clients using a well-defined protocol. The structured data allow a wide range of applications to access them. A central repository reduces the necessary administrative effort. The use of an open and standardized protocol such as LDAP ensures that as many client applications as possible can access such information.

A directory in this context is a type of database optimized for quick and effective reading and searching. The type of data stored in a directory tends to be long lived and changes infrequently. This allows the LDAP service to be optimized for high performance concurrent reads, whereas conventional databases are optimized for accepting many writes to data in a short time.

5.1 Structure of an LDAP directory tree

This section introduces the layout of an LDAP directory tree, and provides the basic terminology used with regard to LDAP.

An LDAP directory has a tree structure. All entries (called objects) of the directory have a defined position within this hierarchy. This hierarchy is called the directory information tree (DIT). The complete path to the desired entry, which unambiguously identifies it, is called the distinguished name or DN. An object in the tree is identified by its relative distinguished name (RDN). The distinguished name is built from the RDNs of all entries on the path to the entry.

The relations within an LDAP directory tree become more evident in the following example, shown in Figure 5.1, “Structure of an LDAP directory”.

Figure 5.1: Structure of an LDAP directory

The complete diagram is a fictional directory information tree. The entries on three levels are depicted. Each entry corresponds to one box in the image. The complete, valid distinguished name for the fictional employee Geeko Linux, in this case, is cn=Geeko Linux,ou=doc,dc=example,dc=com. It is composed by adding the RDN cn=Geeko Linux to the DN of the preceding entry ou=doc,dc=example,dc=com.
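In LDIF notation, this entry could be sketched as follows, using the nsPerson object class described later in this chapter (which requires the cn and displayName attributes):

dn: cn=Geeko Linux,ou=doc,dc=example,dc=com
objectClass: top
objectClass: nsPerson
cn: Geeko Linux
displayName: Geeko Linux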

The types of objects that can be stored in the DIT are globally determined following a Schema. The type of an object is determined by the object class. The object class determines what attributes the relevant object must or may be assigned. The Schema contains all object classes and attributes which can be used by the LDAP server. Attributes are a structured data type. Their syntax, ordering and other behavior is defined by the Schema. LDAP servers supply a core set of Schemas which can work in a broad variety of environments. If a custom Schema is required, you can upload it to an LDAP server.

Table 5.1, “Commonly used object classes and attributes” offers a small overview of the object classes from 00core.ldif and 06inetorgperson.ldif used in the example, including required attributes (Req. Attr.) and valid attribute values. After installing 389 Directory Server, these can be found in /usr/share/dirsrv/schema.

Table 5.1: Commonly used object classes and attributes

Object Class        Meaning                                            Example Entry      Req. Attr.
domain              name components of the domain                      example            displayName
organizationalUnit  organizational unit                                documentationdept  ou
nsPerson            person-related data for the intranet or Internet  Tux Linux          cn

Example 5.1, “Excerpt from CN=schema” shows an excerpt from a Schema directive with explanations.

Example 5.1: Excerpt from CN=schema
attributetype (1.2.840.113556.1.2.102 NAME 'memberOf' 1
       DESC 'Group that the entry belongs to' 2
       SYNTAX 1.3.6.1.4.1.1466.115.121.1.12 3
       X-ORIGIN 'Netscape Delegated Administrator') 4

objectclass (2.16.840.1.113730.3.2.333 NAME 'nsPerson' 5
       DESC 'A representation of a person in a directory server' 6
       SUP top STRUCTURAL 7
       MUST ( displayName $ cn ) 8
       MAY ( userPassword $ seeAlso $ description $ legalName $ mail \
             $ preferredLanguage ) 9
       X-ORIGIN '389 Directory Server Project'
  ...

1

The name of the attribute, its unique object identifier (OID, numerical), and the abbreviation of the attribute.

2

A brief description of the attribute with DESC. The corresponding RFC, on which the definition is based, may also be mentioned here.

3

The type of data that can be held in the attribute. In this case, the syntax OID denotes a distinguished name (DN), as memberOf holds the DNs of the groups an entry belongs to.

4

The source of the schema element (for example, the name of the project).

5

The definition of the object class nsPerson begins with an OID and the name of the object class (like the definition of the attribute).

6

A brief description of the object class.

7

The SUP top entry indicates that this object class is not derived from another object class; its only superior is top.

8

With MUST, list all attribute types that must be used with an object of the type nsPerson.

9

With MAY, list all attribute types that are optionally permitted with this object class.

5.2 Creating and managing a Docker container for 389 Directory Server

Note
Note

This section is optional: refer to it only if you run a 389 Directory Server instance as a Docker container. For regular usage of a 389 Directory Server instance, refer to the rest of this chapter.

To create and manage a 389 Directory Server instance as a Docker container, refer to the following examples:

Pull the latest 389 Directory Server image

To pull the latest 389 Directory Server image from the container registry, run the following command:

> docker pull 389ds/dirsrv:latest
Create a new volume

To create a new volume for the container, run the following example command:

> docker volume create VOLUME
Create a container with basic configuration

To create a container with basic configuration, run the following example command:

> docker create \
 -u USERNAME \
 -e SUFFIX_NAME="dc=example,dc=com" \
 -e DS_DM_PASSWORD=PASSWORD \
 -m 1024M \
 -p 3389:3389 -p 3636:3636 \
 -v VOLUME:/data \
 --name INSTANCE \
 389ds/dirsrv:latest
Start the Docker container for 389 Directory Server

To start the Docker container, run the following example command:

> docker start INSTANCE
Execute a command within a running 389 Directory Server container

Assuming that the primary process of the container (PID 1) is running, you can run a command within a running 389 Directory Server container by using the following syntax:

> sudo docker exec -u USERNAME -i -t INSTANCE COMMAND
Note
Note: The COMMAND must be executable

To run a chained command or a command enclosed in quotation marks, first run a shell session in the container. For example, you can run commands in the sh shell attached to the container:

> sudo docker exec -u USERNAME -i -t INSTANCE sh -c "COMMAND"
Stop the Docker container for 389 Directory Server

To stop the running Docker container, run the following example command:

> docker stop INSTANCE
Remove the Docker container for 389 Directory Server

To remove the Docker container, run the following example command:

> docker rm INSTANCE

5.3 Installing 389 Directory Server

Install 389 Directory Server with the following command:

> sudo zypper install 389-ds

After installation, set up the server as described in the following sections.

5.3.1 Setting up a new 389 Directory Server instance

You will use the dscreate command to create new 389 Directory Server instances, and the dsctl command to cleanly remove them.

There are two ways to configure and create a new instance: from a custom configuration file, and from an auto-generated template file. You can use the auto-generated template without changes for a test instance, though for a production system you must carefully review it and make any necessary changes.

Then you will set up administration credentials, manage users and groups, and configure identity services.

The 389 Directory Server is controlled by three primary commands:

dsctl

Manages a local instance and requires root permissions. Requires access to the host on which the directory server instance is running. Used for starting, stopping, backing up the database, and more.

dsconf

The primary tool used for administration and configuration of the server. Manages an instance's configuration via its external interfaces. This allows you to make configuration changes remotely on the instance.

dsidm

Used for identity management (managing users, groups, passwords, etc.). The permissions are granted by access controls, so, for example, users can reset their own password or change details of their own account.
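For orientation, one example invocation of each tool is shown below. This is a sketch, assuming a local instance named LDAP1, which is created in the following sections:

> sudo dsctl LDAP1 status
> sudo dsconf LDAP1 backend suffix list
> sudo dsidm LDAP1 user list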

Follow these steps to set up a simple instance for testing and development, populated with a small set of sample entries.

5.3.2 Creating a 389 Directory Server instance with a custom configuration file

You can create a new 389 Directory Server instance from a simple custom configuration file. This file must be in the INF format, and you can name it anything you like.

The default instance name is localhost. The instance name cannot be changed after it has been created. It is better to create your own instance name, rather than using the default, to avoid confusion and to enable a better understanding of how it all works. The following examples use the LDAP1 instance name, and a suffix of dc=LDAP1,dc=COM.

Example 5.2 shows an example configuration file that you can use to create a new 389 Directory Server instance. You can copy and use this file without changes.

  1. Copy the following example file, LDAP1.inf, to your home directory:

    Example 5.2: Minimal 389 Directory Server instance configuration file
    # LDAP1.inf
    
    [general]
    config_version = 2 1
    
    [slapd]
    root_password = PASSWORD 2
    self_sign_cert = True 3
    instance_name = LDAP1
    
    [backend-userroot]
    sample_entries = yes 4
    suffix = dc=LDAP1,dc=COM

    1

    This line is required, indicating that this is a version 2 setup INF file.

    2

    Create a strong root_password for the LDAP user cn=Directory Manager. This user is for connecting (binding) to the directory.

    3

    Create self-signed server certificates in /etc/dirsrv/slapd-LDAP1.

    4

    Populate the new instance with sample user and group entries.

  2. To create the 389 Directory Server instance from Example 5.2, run the following command:

    > sudo dscreate -v from-file LDAP1.inf | \
    tee LDAP1-OUTPUT.txt

    This shows all activity during the instance creation, stores all the messages in LDAP1-OUTPUT.txt, and creates a working LDAP server in about a minute. The verbose output contains a lot of useful information. If you do not want to save it, then delete the | tee LDAP1-OUTPUT.txt portion of the command.

  3. If the dscreate command fails, the messages tell you why. After correcting any issues, remove the instance (see Step 5) and create a new instance.

  4. A successful installation reports Completed installation for LDAP1. Check the status of your new server:

    > sudo dsctl LDAP1 status
    Instance "LDAP1" is running
  5. The following commands are for cleanly removing the instance. The first command performs a dry run and does not remove the instance. When you are sure you want to remove it, use the second command with the --do-it option:

    > sudo dsctl LDAP1 remove
    Not removing: if you are sure, add --do-it
    
    > sudo dsctl LDAP1 remove --do-it

    This command also removes partially installed or corrupted instances. You can reliably create and remove instances as often as you want.

If you forget the name of your instance, use dsctl to list all instances:

> sudo dsctl -l
slapd-LDAP1

5.3.3 Creating a 389 Directory Server instance from a template

You can auto-create a template for a new 389 Directory Server instance with the dscreate command. This creates a template that you can use without making any changes, for testing. For production systems, review and change it to suit your own requirements. The defaults are documented in the template file, and commented out. To make changes, uncomment the default and enter your own value. All options are well documented.

The following example prints the template to stdout:

> sudo dscreate create-template

This is good for a quick review of the template, but you must create a file to use in creating your new 389 Directory Server instance. You can name this file anything you want:

> sudo dscreate create-template TEMPLATE.txt

This is a snippet from the new file:

# full_machine_name (str)
# Description: Sets the fully qualified hostname (FQDN) of this system. When
# installing this instance with GSSAPI authentication behind a load balancer, set
# this parameter to the FQDN of the load balancer and, additionally, set
# "strict_host_checking" to "false".
# Default value: ldapserver1.test.net
;full_machine_name = ldapserver1.test.net

# selinux (bool)
# Description: Enables SELinux detection and integration during the installation
# of this instance. If set to "True", dscreate auto-detects whether SELinux is
# enabled. Set this parameter only to "False" in a development environment.
# Default value: True
;selinux = True

dscreate automatically configures certain options from your existing environment; for example, the system's fully qualified domain name, which is called full_machine_name in the template. Use this file with no changes to create a new instance:

> sudo dscreate from-file TEMPLATE.txt

This creates a new instance named localhost, and automatically starts it after creation:

> sudo dsctl localhost status
Instance "localhost" is running

The default values create a fully operational instance, but there are certain values you might want to change.

The instance name cannot be changed after it has been created. It is better to create your own instance name, rather than using the default, to avoid confusion and to enable a better understanding of how it all works. To do this, uncomment the ;instance_name = localhost line and change localhost to your chosen name. In the following examples, the instance name is LDAP1.

Another useful change is to populate your new instance with sample users and groups. Uncomment ;sample_entries = no and change no to yes. This creates the demo_user and demo_group.

Set your own password by uncommenting ;root_password, and replacing the default password with your own.

The template does not create a default suffix, so you should configure your own on the suffix line, like the following example:

suffix = dc=LDAP1,dc=COM
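Taken together, the uncommented and edited lines in TEMPLATE.txt might look like the following sketch (only the changed values described above are shown):

instance_name = LDAP1
sample_entries = yes
root_password = PASSWORD
suffix = dc=LDAP1,dc=COM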

You can cleanly remove any instance and start over with dsctl:

> sudo dsctl LDAP1 remove --do-it

5.3.4 Stopping and starting 389 Directory Server

The following examples use LDAP1 as the instance name. Use systemd to manage your 389 Directory Server instance. Get the status of your instance:

> systemctl status --no-pager --full dirsrv@LDAP1.service
   ● dirsrv@LDAP1.service - 389 Directory Server LDAP1.
     Loaded: loaded (/usr/lib/systemd/system/dirsrv@.service; enabled; vendor preset: disabled)
     Active: active (running) since Thu 2021-03-11 08:55:28 PST; 2h 7min ago
    Process: 4451 ExecStartPre=/usr/lib/dirsrv/ds_systemd_ask_password_acl
       /etc/dirsrv/slapd-LDAP1/dse.ldif (code=exited, status=0/SUCCESS)
   Main PID: 4456 (ns-slapd)
     Status: "slapd started: Ready to process requests"
      Tasks: 26
     CGroup: /system.slice/system-dirsrv.slice/dirsrv@LDAP1.service
             └─4456 /usr/sbin/ns-slapd -D /etc/dirsrv/slapd-LDAP1 -i /run/dirsrv/slapd-LDAP1.pid

Start, stop and restart your LDAP server:

> sudo systemctl start dirsrv@LDAP1.service
> sudo systemctl stop dirsrv@LDAP1.service
> sudo systemctl restart dirsrv@LDAP1.service

See Book “Administration Guide”, Chapter 19 “The systemd daemon” for more information on using systemctl.

The dsctl command also starts and stops your server:

> sudo dsctl LDAP1 status
> sudo dsctl LDAP1 stop
> sudo dsctl LDAP1 restart
> sudo dsctl LDAP1 start

5.3.5 Configuring administrator credentials for local administration

For local administration of the 389 Directory Server, you can create a .dsrc configuration file in the /root directory, allowing root and sudo users to administer the server without typing connection details with every command. Example 5.3 shows an example for local administration on the server, using the LDAP1 instance and the suffix dc=LDAP1,dc=COM.

After creating your /root/.dsrc file, try a few administration commands, such as creating new users (see Section 5.6, “Managing LDAP users and groups”).

Example 5.3: A .dsrc file for local administration
# /root/.dsrc file for administering the LDAP1 instance

[LDAP1] 1

uri = ldapi://%%2fvar%%2frun%%2fslapd-LDAP1.socket 2
basedn = dc=LDAP1,dc=COM
binddn = cn=Directory Manager

1

This must specify your exact instance name.

2

ldapi detects the UID and GID of the user attempting to log in to the server. If the UID/GID are 0/0 or dirsrv:dirsrv, ldapi binds the user as the directory server root dn, which is cn=Directory Manager.

In the URI, the slashes are replaced with %%2f, so in this example the path is /var/run/slapd-LDAP1.socket.
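With this file in place, connection details can be omitted from administration commands. The following is a sketch, assuming the LDAP1 instance from the previous sections:

> sudo dsconf LDAP1 backend suffix list
> sudo dsidm LDAP1 user list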

Important
Important: New negation feature in sudoers.ldap

In sudo versions older than 1.9.9, negation in sudoers.ldap does not work for the sudoUser, sudoRunAsUser, or sudoRunAsGroup attributes. For example:

# intended to match all except joe,
# but actually matches no one
sudoUser: !joe

# intended to match all except joe,
# but actually matches everyone, including joe
sudoUser: ALL
sudoUser: !joe

In sudo version 1.9.9 and higher, negation is enabled for the sudoUser attribute. See man 5 sudoers.ldap for more information.

5.4 Firewall configuration

The default TCP ports for 389 Directory Server are 389 and 636. TCP port 389 is for unencrypted connections and for STARTTLS. Port 636 is for connections encrypted with TLS (LDAPS).

firewalld is the default firewall manager for SUSE Linux Enterprise Desktop. The following rules activate the ldap and ldaps firewall services:

> sudo firewall-cmd --add-service=ldap --zone=internal
> sudo firewall-cmd --add-service=ldaps --zone=internal
> sudo firewall-cmd --runtime-to-permanent

Replace the zone with the appropriate zone for your server. See Section 5.10, “Importing TLS server certificates and keys” for information on securing your connections with TLS, and Section 23.3, “Firewalling basics” to learn about firewalld.
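To verify the runtime configuration, you can list the active services in the zone (assuming the internal zone used above):

> sudo firewall-cmd --zone=internal --list-services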

5.5 Backing up and restoring 389 Directory Server

389 Directory Server supports making offline and online backups. The dsctl command makes offline database backups, and the dsconf command makes online database backups. Back up the LDAP server configuration directory to enable complete restoration in case of a major failure.

5.5.1 Backing up the LDAP server configuration

Your LDAP server configuration is in the directory /etc/dirsrv/slapd-INSTANCE_NAME. This directory contains certificates, keys and the dse.ldif file. Make a compressed backup of this directory with the tar command:

> sudo tar caf \
config_slapd-INSTANCE_NAME-$(date +%Y-%m-%d_%H-%M-%S).tar.gz \
/etc/dirsrv/slapd-INSTANCE_NAME/
Note
Note: Harmless tar error message

When running tar, you may see the harmless informational message tar: Removing leading `/' from member names.

To restore a previous configuration, unpack it to the same directory:

  1. (Optional) To avoid overwriting an existing configuration, move it:

    > sudo old /etc/dirsrv/slapd-INSTANCE_NAME/
  2. Unpack the backup archive:

    > sudo tar -xvzf \
    config_slapd-INSTANCE_NAME-DATE.tar.gz
  3. Copy it to /etc/dirsrv/slapd-INSTANCE_NAME:

    > sudo cp -r etc/dirsrv/slapd-INSTANCE_NAME \
    /etc/dirsrv/slapd-INSTANCE_NAME

5.5.2 Creating an offline backup of the LDAP database and restoring from it

The dsctl command makes offline backups. Stop the server:

> sudo dsctl INSTANCE_NAME stop
Instance "INSTANCE_NAME" has been stopped

Then make the backup using your instance name. The following example creates a backup archive at /var/lib/dirsrv/slapd-INSTANCE_NAME/bak/INSTANCE_NAME-DATE:

> sudo dsctl INSTANCE_NAME db2bak
db2bak successful

For example, on a test instance named ldap1 it looks like this:

/var/lib/dirsrv/slapd-ldap1/bak/ldap1-2021_10_25_13_03_17

Restore from this backup, naming the directory containing the backup archive:

> sudo dsctl INSTANCE_NAME bak2db \
/var/lib/dirsrv/slapd-INSTANCE_NAME/bak/INSTANCE_NAME-DATE/
bak2db successful

Then start the server:

> sudo dsctl INSTANCE_NAME start
Instance "INSTANCE_NAME" has been started

You can also create LDIF backups:

> sudo dsctl INSTANCE_NAME db2ldif --replication userRoot
ldiffile: /var/lib/dirsrv/slapd-INSTANCE_NAME/ldif/INSTANCE_NAME-userRoot-DATE.ldif
db2ldif successful

Restore an LDIF backup with the name of the archive, then start the server:

> sudo dsctl INSTANCE_NAME ldif2db userRoot \
/var/lib/dirsrv/slapd-INSTANCE_NAME/ldif/INSTANCE_NAME-userRoot-DATE.ldif
> sudo dsctl INSTANCE_NAME start

5.5.3 Creating an online backup of the LDAP database and restoring from it

Use the dsconf command to make an online backup of your LDAP database:

> sudo dsconf INSTANCE_NAME backup create
The backup create task has finished successfully

This creates /var/lib/dirsrv/slapd-INSTANCE_NAME/bak/INSTANCE_NAME-DATE.

Restore it:

> sudo dsconf INSTANCE_NAME backup restore \
/var/lib/dirsrv/slapd-INSTANCE_NAME/bak/INSTANCE_NAME-DATE

5.6 Managing LDAP users and groups

Use the dsidm command to create, remove and manage users and groups.

5.6.1 Querying existing LDAP users and groups

The following examples show how to list your existing users and groups. The examples use the instance name LDAP1. Replace this with your instance name:

> sudo dsidm LDAP1 user list
> sudo dsidm LDAP1 group list

List all information on a single user:

> sudo dsidm LDAP1 user get USER

List all information on a single group:

> sudo dsidm LDAP1 group get GROUP

List members of a group:

> sudo dsidm LDAP1 group members GROUP

5.6.2 Creating users and managing passwords

In the following example, we create one user, wilber. The example server instance is named LDAP1, and the instance's suffix is dc=LDAP1,dc=COM.

Procedure 5.1: Creating LDAP users

The following example creates the user Wilber Fox on your 389 DS instance:

  1. > sudo dsidm LDAP1 user create --uid wilber \
      --cn wilber --displayName 'Wilber Fox' --uidNumber 1001 --gidNumber 101 \
      --homeDirectory /home/wilber
  2. Verify by looking up your new user's distinguished name (fully qualified name to the directory object, which is guaranteed unique):

    > sudo dsidm LDAP1 user get wilber
    dn: uid=wilber,ou=people,dc=LDAP1,dc=COM
    [...]

    You need the distinguished name for actions such as changing the password for a user.

  3. Create a password for new user wilber:

    1. > sudo dsidm LDAP1 account reset_password \
        uid=wilber,ou=people,dc=LDAP1,dc=COM
    2. Enter the new password for wilber twice.

      If the action was successful, you get the following message:

      reset password for uid=wilber,ou=people,dc=LDAP1,dc=COM

      Use the same command to change an existing password.

  4. Verify that the user's password works:

    > ldapwhoami -D uid=wilber,ou=people,dc=LDAP1,dc=COM -W
    Enter LDAP Password: PASSWORD
    dn: uid=wilber,ou=people,dc=LDAP1,dc=COM

5.6.3 Creating and managing groups

After creating users, you can create groups, and then assign users to them. In the following examples, we create a group, server_admins, and assign the user wilber to this group. The example server instance is named LDAP1, and the instance's suffix is dc=LDAP1,dc=COM.

Procedure 5.2: Creating LDAP groups and assigning users to them
  1. Create the group:

    > sudo dsidm LDAP1 group create

    You will be prompted for a group name. Enter your chosen group name, which in the following example is SERVER_ADMINS:

    Enter value for cn : SERVER_ADMINS
  2. Add the user wilber (created in Procedure 5.1, “Creating LDAP users”) to the group:

    > sudo dsidm LDAP1 group add_member SERVER_ADMINS \
    uid=wilber,ou=people,dc=LDAP1,dc=COM
    added member: uid=wilber,ou=people,dc=LDAP1,dc=COM

5.6.4 Deleting users and groups, and removing users from groups

Use the dsidm command to delete users, remove users from groups, and delete groups. The following example removes our example user wilber from the server_admins group:

> sudo dsidm LDAP1 group remove_member SERVER_ADMINS \
uid=wilber,ou=people,dc=LDAP1,dc=COM

Delete a user:

> sudo dsidm LDAP1 user delete \
uid=wilber,ou=people,dc=LDAP1,dc=COM

Delete a group:

> sudo dsidm LDAP1 group delete SERVER_ADMINS

5.7 Managing plug-ins

Use the following command to list all available plug-ins, enabled and disabled. Use your server's hostname rather than the instance name of your 389 Directory Server, like the following example hostname of LDAPSERVER1:

> sudo dsconf -D "cn=Directory Manager" ldap://LDAPSERVER1 plugin list
Enter password for cn=Directory Manager on ldap://LDAPSERVER1: PASSWORD

7-bit check
Account Policy Plugin
Account Usability Plugin
ACL Plugin
ACL preoperation
[...]

The following command enables the MemberOf plug-in referenced in Section 5.8, “Using SSSD to manage LDAP authentication”. MemberOf simplifies user searches, by returning the user and any groups the user belongs to, with a single command. Without MemberOf, a client must run multiple lookups to find a user's group memberships.

> sudo dsconf -D "cn=Directory Manager" ldap://LDAPSERVER1 plugin memberof enable

The plug-in names used in commands are lowercase, so they are different from how they appear when you list them. If you make a mistake with a plug-in name, you see a helpful error message:

dsconf instance plugin: error: invalid choice: 'MemberOf' (choose from
'memberof', 'automember', 'referential-integrity', 'root-dn', 'usn',
'account-policy', 'attr-uniq', 'dna', 'linked-attr', 'managed-entries',
'pass-through-auth', 'retro-changelog', 'posix-winsync', 'contentsync', 'list',
'show', 'set')

After enabling a plug-in, restart the server. Note that the systemd unit takes the instance name, not the hostname:

> sudo systemctl restart dirsrv@LDAP1.service

Next, configure the plug-in. The following example enables MemberOf to search all entries. Use your instance name rather than the server's hostname:

> sudo dsconf LDAP1 plugin memberof set --scope dc=example,dc=com
Successfully changed the cn=MemberOf Plugin,cn=plugins,cn=config

After the MemberOf plug-in is enabled and configured, all new groups and users are automatically MemberOf targets. However, any users and groups that exist before it is enabled are not. They must be marked manually:

> sudo dsidm LDAP1 user modify suzanne add:objectclass:nsmemberof
Successfully modified uid=suzanne,ou=people,dc=ldap1,dc=com

Now suzanne's information and group memberships are listed with a single command:

> sudo dsidm LDAP1 user get suzanne
dn: uid=suzanne,ou=people,dc=ldap1,dc=com
cn: suzanne
displayName: Suzanne Geeko
gidNumber: 102
homeDirectory: /home/suzanne
memberOf: cn=SERVER_ADMINS,ou=groups,dc=ldap1,dc=com
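For comparison, a plain LDAP search retrieves the same attribute. The following is a sketch using the OpenLDAP client tools, assuming a local server and the example suffix:

> ldapsearch -H ldap://localhost -D "cn=Directory Manager" -W \
-b ou=people,dc=ldap1,dc=com "(uid=suzanne)" memberOf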

Modifying a large number of users individually is a lot of work. The following example shows how to make all legacy users MemberOf targets with one fixup command:

> sudo dsconf LDAP1 plugin memberof fixup -f '(objectClass=*)' dc=LDAP1,dc=COM

5.7.1 Unsupported plug-ins on 389 Directory Server

The following plug-ins are not supported on 389 Directory Server:

  • Distributed Numeric Assignment (DNA) plug-in

  • Managed Entries Plug-in (MEP)

  • Posix Winsync plug-in

5.8 Using SSSD to manage LDAP authentication

The System Security Services Daemon (SSSD) manages authentication, identification, and access controls for remote users. This section describes how to use SSSD to manage authentication and identification for your 389 Directory Server.

SSSD mediates between your LDAP server and clients. It supports several provider back-ends, such as LDAP, Active Directory, and Kerberos. SSSD supports services, including SSH, PAM, NSS and sudo. SSSD provides performance benefits and resilience through caching user IDs and credentials. Caching reduces the number of requests to your 389 DS server, and provides authentication and identity services when the back-ends are unavailable.

If the Name Services Caching Daemon (nscd) is running on your network, you should disable or remove it. nscd caches only the common name service requests, such as passwd, group, hosts, service and netgroup, and will conflict with SSSD.
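For example, the following command stops and disables nscd in one step, assuming it is managed by systemd as nscd.service:

> sudo systemctl disable --now nscd.service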

Your LDAP server is the provider, and your SSSD instance is the client of the provider. You may install SSSD on your 389 DS server, but installing it on a separate machine provides some resilience in case the 389 DS server becomes unavailable. Use the following procedure to install and configure an SSSD client. The example 389 DS instance name is LDAP1:

  1. Install the sssd and sssd-ldap packages:

    > sudo zypper in sssd sssd-ldap
  2. Back up the /etc/sssd/sssd.conf file, if it exists:

    > sudo old /etc/sssd/sssd.conf
  3. Create your new SSSD configuration template. The allowed output file names are sssd.conf and ldap.conf; the display argument sends the output to stdout instead. The following example creates a client configuration in /etc/sssd/sssd.conf:

    > cd /etc/sssd
    > sudo dsidm LDAP1 client_config sssd.conf
  4. Review the output and make any necessary changes to suit your environment. The following /etc/sssd/sssd.conf file demonstrates a working example.

    Important
    Important: MemberOf

    The LDAP access filter relies on MemberOf being configured. For details, see Section 5.7, “Managing plug-ins”.

    [sssd]
    services = nss, pam, ssh, sudo
    config_file_version = 2
    domains = default
    
    [nss]
    homedir_substring = /home
    
    [domain/default]
    # If you have large groups (for example, 50+ members),
    # you should set this to True
    ignore_group_members = False
    debug_level=3
    cache_credentials = True
    id_provider = ldap
    auth_provider = ldap
    access_provider = ldap
    chpass_provider = ldap
    
    ldap_schema = rfc2307bis
    ldap_search_base = dc=example,dc=com
    # We strongly recommend ldaps
    ldap_uri = ldaps://ldap.example.com
    ldap_tls_reqcert = demand
    ldap_tls_cacert = /etc/openldap/ldap.crt
    ldap_access_filter = (|(memberof=cn=<login group>,ou=Groups,dc=example,dc=com))
    enumerate = false
    access_provider = ldap
    
    ldap_user_member_of = memberof
    ldap_user_gecos = cn
    ldap_user_uuid = nsUniqueId
    ldap_group_uuid = nsUniqueId
    ldap_account_expire_policy = rhds
    ldap_access_order = filter, expire
    # add these lines to /etc/ssh/sshd_config
    #  AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
    #  AuthorizedKeysCommandUser nobody
    ldap_user_ssh_public_key = nsSshPublicKey
  5. Set file ownership to root, and restrict read-write permissions to root:

    > sudo chown root:root /etc/sssd/sssd.conf
    > sudo chmod 600 /etc/sssd/sssd.conf
  6. Edit the /etc/nsswitch.conf configuration file on the SSSD server to include the following lines:

    passwd: compat sss
    group:  compat sss
    shadow: compat sss
  7. Edit the PAM configuration on the SSSD server, modifying common-account-pc, common-auth-pc, common-password-pc, and common-session-pc. SUSE Linux Enterprise Desktop provides a command to modify all of these files at once, pam-config:

    > sudo pam-config -a --sss
  8. Verify the modified configuration:

    > sudo pam-config -q --sss
    auth:
    account:
    password:
    session:
  9. Copy /etc/dirsrv/slapd-LDAP1/ca.crt from the 389 DS server to /etc/openldap/certs on your SSSD server, then rehash it:

    > sudo c_rehash /etc/openldap/certs
  10. Enable and start SSSD:

    > sudo systemctl enable --now sssd

See Chapter 4, Setting up authentication clients using YaST for information on managing the sssd.service with systemctl.

5.8.1 Unsupported password hashes and authentication schemes

The following are not supported as configuration values in dse.ldif for the settings nsslapd-rootpwstoragescheme or passwordStorageScheme, or as a value of passwordStorageScheme in the account policy objects:

  • SHA

  • SSHA

  • SHA256

  • SSHA256

  • SHA384

  • SSHA384

  • SHA512

  • SSHA512

  • NS-MTA-MD5

  • clear

  • MD5

  • SMD5

Note
Note

Database imports that contain these values are supported if nsslapd-enable-upgrade-hash is set to on (defaults to on).
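You can inspect or change this setting with dsconf; the following is a sketch, assuming the LDAP1 instance:

> sudo dsconf LDAP1 config get nsslapd-enable-upgrade-hash
> sudo dsconf LDAP1 config replace nsslapd-enable-upgrade-hash=on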

5.9 Migrating to 389 Directory Server from OpenLDAP

OpenLDAP is deprecated and no longer supported as of SUSE Linux Enterprise 15 SP3. It has been replaced by 389 Directory Server. SUSE provides the openldap_to_ds utility to assist with migration, included in the 389-ds package.

The openldap_to_ds utility automates as much of the migration as possible. However, every LDAP deployment is different, and it is impossible to develop a tool that satisfies all situations. When necessary, intervene and perform manual steps. Additionally, test your migration procedure thoroughly before attempting a production migration.

Important
Important: Read the help page

Before using the openldap_to_ds migration tool, we strongly recommend reading the output of openldap_to_ds --help. It helps you learn about the capabilities and limitations of the migration tool, and prevents you from making wrong assumptions.

5.9.1 Testing migration from OpenLDAP

Due to significant differences between OpenLDAP and 389 Directory Server, migration involves repeated testing and adjustments. It is helpful to do a quick migration test to get an idea of what steps are necessary for a successful migration.

Prerequisites:

  • A running 389 Directory Server instance

  • An OpenLDAP slapd configuration file or directory in dynamic ldif format

  • An ldif file backup of your OpenLDAP database

If your slapd configuration is not in dynamic ldif format, create a dynamic copy with slaptest. Create a slapd.d directory, for example /root/slapd.d/, then run the following command:

> sudo slaptest -f /etc/openldap/slapd.conf -F /root/slapd.d

This results in several files similar to the following example:

> sudo ls /root/slapd.d/*

/root/slapd.d/cn=config.ldif

/root/slapd.d/cn=config:
cn=module{0}.ldif  cn=schema.ldif                 olcDatabase={0}config.ldif
cn=schema          olcDatabase={-1}frontend.ldif  olcDatabase={1}mdb.ldif

Create one ldif file per suffix. In the following examples, the suffix is dc=LDAP1,dc=COM. If you are using the /etc/openldap/slapd.conf format, use the following command to create the ldif backup file:

> sudo slapcat -f /etc/openldap/slapd.conf -b dc=LDAP1,dc=COM \
-l /root/LDAP1-COM.ldif

Use openldap_to_ds to analyze the configuration and files, and show a migration plan without changing anything:

> sudo openldap_to_ds LDAP1 \
/root/slapd.d /root/LDAP1-COM.ldif

This performs a dry run and does not change anything. The output looks like this:

Examining OpenLDAP Configuration ...
Completed OpenLDAP Configuration Parsing.
Examining Ldifs ...
Completed Ldif Metadata Parsing.
The following migration steps will be performed:
 * Schema Skip Unsupported Attribute -> otherMailbox (0.9.2342.19200300.100.1.22)
 * Schema Skip Unsupported Attribute -> dSAQuality (0.9.2342.19200300.100.1.49)
 * Schema Skip Unsupported Attribute -> singleLevelQuality (0.9.2342.19200300.100.1.50)
 * Schema Skip Unsupported Attribute -> subtreeMinimumQuality (0.9.2342.19200300.100.1.51)
 * Schema Skip Unsupported Attribute -> subtreeMaximumQuality (0.9.2342.19200300.100.1.52)
 * Schema Create Attribute -> suseDefaultBase (SUSE.YaST.ModuleConfig.Attr:2)
 * Schema Create Attribute -> suseNextUniqueId (SUSE.YaST.ModuleConfig.Attr:3)
[...]
 * Schema Create ObjectClass -> suseDhcpConfiguration (SUSE.YaST.ModuleConfig.OC:10)
 * Schema Create ObjectClass -> suseMailConfiguration (SUSE.YaST.ModuleConfig.OC:11)
 * Database Reindex -> dc=example,dc=com
 * Database Import Ldif -> dc=example,dc=com from example.ldif -
excluding entry attributes = [{'structuralobjectclass', 'entrycsn'}]
No actions taken. To apply migration plan, use '--confirm'

The following example performs the migration, and the output looks different from the dry run:

> sudo openldap_to_ds LDAP1 /root/slapd.d /root/LDAP1-COM.ldif --confirm
Starting Migration ... This may take some time ...
migration: 1 / 40 complete ...
migration: 2 / 40 complete ...
migration: 3 / 40 complete ...
[...]
Index task index_all_05252021_120216 completed successfully
post: 39 / 40 complete ...
post: 40 / 40 complete ...
🎉 Migration complete!
----------------------
You should now review your instance configuration and data:
 * [ ] - Create/Migrate Database Access Controls (ACI)
 * [ ] - Enable and Verify TLS (LDAPS) Operation
 * [ ] - Schedule Automatic Backups
 * [ ] - Verify Accounts Can Bind Correctly
 * [ ] - Review Schema Inconistent ObjectClass -> pilotOrganization (0.9.2342.19200300.100.4.20)
 * [ ] - Review Database Imported Content is Correct -> dc=ldap1,dc=com

When the migration is complete, openldap_to_ds creates a checklist of post-migration tasks that must be completed. It is a best practice to document your post-migration steps, so that you can reproduce them when you migrate your production deployment. Then test clients and application integrations against the migrated 389 Directory Server instance.

Important
Important: Develop a rollback plan

Develop a rollback plan in case of any failures. This plan should define a successful migration, the tests to determine what worked and what needs to be fixed, which steps are critical, what can be deferred until later, how to decide when to undo any changes, how to undo them with minimal disruption, and which other teams need to be involved.

Due to the variability of deployments, it is difficult to provide a recipe for a successful production migration. After you have thoroughly tested the migration process and verified that you get good results, the following general steps help:

  • Lower all hostname/DNS TTLs to 5 minutes, 48 hours before the change, to allow a fast rollback to your existing OpenLDAP deployment.

  • Pause all data synchronization and incoming data processes, so that the data in the OpenLDAP environment does not change during the migration.

  • Have all 389 Directory Server hosts ready for deployment before the migration.

  • Have your test migration documentation available.

5.9.2 Testing migration of OpenLDAP servers that use saslauthd

In OpenLDAP deployments, it is common to use saslauthd for passthrough authentication of users. The authentication process involves the following components:

 ┌─────────────────┐
 │                 │
 │   LDAP client   │
 │                 │
 └─────────────────┘
          │
      binds to
          │
          ▼
 ┌─────────────────┐
 │    OpenLDAP     │
 │     server      │
 │                 │
 └─────────────────┘
          │
          │
          ▼
 ┌─────────────────┐
 │                 │
 │    saslauthd    │
 │                 │
 └─────────────────┘
          │
          │
          ▼
 ┌─────────────────┐
 │                 │
 │  External auth  │
 │                 │
 └─────────────────┘

For checking the correctness of configuration and subsequent troubleshooting, the following information is important:

  • OpenLDAP discovers that a user is allowed passthrough authentication if the userPassword attribute has a value with the userPassword: {SASL}USERNAME@REALM format. The prerequisite is to build the OpenLDAP server with the --enable-spasswd option to enable passthrough authentication.

  • OpenLDAP configures its connection to saslauthd from /usr/lib/sasl2/slapd.conf.

  • saslauthd discovers the relevant modules using its command-line parameters configured in /etc/sysconfig/saslauthd.

  • The back-end modules of saslauthd use their own configuration, as documented in man saslauthd.

Detailed information about passthrough authentication using OpenLDAP is available in the official OpenLDAP Admin Guide.
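As an illustration only, the two configuration files mentioned above might contain entries like the following sketch. The exact option values, variable names and socket path depend on your installation and are assumptions here:

# /usr/lib/sasl2/slapd.conf (sketch; values are assumptions)
pwcheck_method: saslauthd
saslauthd_path: /run/sasl2/mux

# /etc/sysconfig/saslauthd (sketch; values are assumptions)
SASLAUTHD_AUTHMECH=pam
SASLAUTHD_PARAMS=""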

5.9.2.1 Migrating SASL passthrough authentication from OpenLDAP to 389 Directory Server

As a best practice for correctly migrating SASL passthrough authentication from OpenLDAP to 389 Directory Server, refer to the following steps:

  1. Before migration, ensure that you can successfully run testsaslauthd on the OpenLDAP server.

    > sudo testsaslauthd -u USERNAME@REALM -p PASSWORD

    The realm routes the authentication to the correct back-end in saslauthd, and the user name is then used to check the identity.

  2. Install the pam_saslauthd package that enables 389 Directory Server to connect with saslauthd.

    > sudo zypper install -y pam_saslauthd
  3. Migrate from OpenLDAP to 389 Directory Server by running the openldap_to_ds command-line tool. For detailed information on the migration process, refer to Section 5.9.1, “Testing migration from OpenLDAP”.

    Note
    Note

    While the openldap_to_ds process is running, if a user is detected as having a value of the userPassword attribute in the userPassword: {SASL}USERNAME@REALM format, it is removed and placed as the value of the nsSaslauthId attribute in the nsSaslauthId: USERNAME@REALM format. Additionally, the attribute value objectClass: nsSaslauthAccount is automatically added to support the modification (see the sketch after this procedure).

  4. After completion of the migration, check whether the PAM passthrough authentication is configured correctly by running the following commands:

    > sudo dsconf INSTANCE plugin pam-pass-through-auth show
    > sudo dsconf INSTANCE plugin pam-pass-through-auth list
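The following LDIF sketch, using a fictional user, illustrates the transformation described in the note in Step 3:

# Before migration (OpenLDAP)
dn: uid=wilber,ou=people,dc=LDAP1,dc=COM
userPassword: {SASL}wilber@EXAMPLE.COM

# After migration (389 Directory Server)
dn: uid=wilber,ou=people,dc=LDAP1,dc=COM
objectClass: nsSaslauthAccount
nsSaslauthId: wilber@EXAMPLE.COM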

After successful migration, the passthrough authentication flow involves the following components:

  ┌─────────────────┐
  │                 │
  │   LDAP client   │
  │                 │
  └─────────────────┘
           │
       binds to
           │
           ▼
  ┌─────────────────┐
  │     389-DS      │
  │     server      │
  │                 │
  └─────────────────┘
           │
           ▼
  ┌─────────────────┐
  │                 │
  │  pam saslauthd  │
  │                 │
  └─────────────────┘
           │
           ▼
  ┌─────────────────┐
  │                 │
  │    saslauthd    │
  │                 │
  └─────────────────┘
           │
           │
           ▼
  ┌─────────────────┐
  │                 │
  │  External auth  │
  │                 │
  └─────────────────┘

5.9.2.2 Troubleshooting saslauthd passthrough authentication

To troubleshoot problems with saslauthd passthrough authentication before and after the migration from OpenLDAP to 389 Directory Server, refer to the following tips:

Ensure that testsaslauthd works with USERNAME@REALM.

Refer to the step for running testsaslauthd in Section 5.9.2, “Testing migration of OpenLDAP servers that use saslauthd”.

In case of problems, try the following:

  • Check /etc/sysconfig/saslauthd to ensure the saslauthd back-end modules are configured correctly. For detailed information on the back-end modules of saslauthd and their configurations, run man saslauthd.

  • Turn on debug logging by adding SASLAUTHD_PARAMS="-d" in /etc/sysconfig/saslauthd.

  • View the saslauthd logs that are available as part of the output of journalctl.

Ensure that PAM saslauthd works correctly.

To check if PAM saslauthd works correctly, you can use the pam_tester tool available at https://github.com/kanidm/pam_tester.

Note
Note

The pam_tester tool is NOT officially supported.

Ensure that the PAM Pass Through Auth plug-in is enabled.

Check the status of the PAM Pass Through Auth plug-in by running the following command:

> sudo dsconf INSTANCE plugin pam-pass-through-auth status

To enable the plug-in, run the following command:

> sudo dsconf INSTANCE plugin pam-pass-through-auth enable
Check the configuration of the PAM Pass Through Auth plug-in.

To check the configuration of the PAM Pass Through Auth plug-in for the server instance, run the following command:

> sudo dsconf INSTANCE plugin pam-pass-through-auth show
Check the error logs for the user of the server instance.

Check the logs available in /var/lib/SERVER_USER_NAME/INSTANCE/error.

5.9.3 Planning your migration

As OpenLDAP is a box of parts and highly customizable, it is not possible to prescribe a one-size-fits-all migration. It is necessary to assess your current environment and configuration with OpenLDAP and other integrations. This includes, but is not limited to:

  • Replication topology

  • High availability and load balancer configurations

  • External data flows (IGA, HR, AD, etc.)

  • Configured overlays (plug-ins in 389 Directory Server)

  • Client configuration and expected server features

  • Customized schema

  • TLS configuration

Plan what your 389 Directory Server deployment will look like in the end. This includes the same list, except replace overlays with plug-ins. Once you have assessed your current environment and planned what your 389 Directory Server environment will look like, you can form a migration plan. We recommend building the 389 Directory Server environment in parallel to your OpenLDAP environment to allow switching between them.

Migrating from OpenLDAP to 389 Directory Server is a one-way migration. There are enough differences between the two that they cannot interoperate, and there is not a migration path from 389 Directory Server to OpenLDAP. The following table highlights the major similarities and differences.

Feature                              | OpenLDAP          | 389 Directory Server   | Compatible
Two-way replication                  | SyncREPL          | 389 DS-specific system | No
MemberOf                             | Overlay           | Plug-in                | Yes, simple configurations only
External Auth                        | Proxy             | -                      | No
Active Directory Synchronization     | -                 | Winsync Plug-in        | No
Inbuilt Schema                       | OLDAP Schemas     | 389 Schemas            | Yes, supported by migration tool
Custom Schema                        | OLDAP Schemas     | 389 Schemas            | Yes, supported by migration tool
Database Import                      | LDIF              | LDIF                   | Yes, supported by migration tool
Password hashes                      | Varies            | Varies                 | Yes, all formats supported excluding Argon2
OpenLDAP to 389 DS replication       | -                 | -                      | No mechanism to replicate to 389 DS is possible
Time-based one-time password (TOTP)  | TOTP overlay      | -                      | No, currently not supported
entryUUID                            | Part of OpenLDAP  | Plug-in                | Yes

5.10 Importing TLS server certificates and keys

You can manage your CA certificates and keys for 389 Directory Server with the following command line tools: certutil, openssl, and pk12util.

For testing purposes, you can use the self-signed certificate that dscreate creates when you create a new 389 DS instance. Find the certificate at /etc/dirsrv/slapd-INSTANCE_NAME/ca.crt.

For production environments, it is a best practice to use a third-party certificate authority, such as Let's Encrypt, CAcert.org, SSL.com, or whatever CA you choose. Request a server certificate, a client certificate, and a root certificate.

Important
Important

The Mozilla NSS (Network Security Services) toolkit uses nicknames for certificates in the certificate store. The server certificate uses the nickname Server-Cert.

  1. Use the following commands to remove the Self-Signed-CA and Server-Cert from the instance:

    > sudo dsctl INSTANCE_NAME tls remove-cert Self-Signed-CA
    > sudo dsctl INSTANCE_NAME tls remove-cert Server-Cert
    

    Replace INSTANCE_NAME with the instance name of the directory server. This is LDAP1 in the previous sections.

  2. Import the CA that has signed your certificate.

    > sudo dsctl INSTANCE_NAME tls import-ca \
       /path/to/CA/in/PEM/format/CA.pem  NICKNAME_FOR_CA
    

    Replace INSTANCE_NAME with the instance name of the directory server. Replace /path/to/CA/in/PEM/format/CA.pem with the full path to the CA certificate file in PEM format. Replace NICKNAME_FOR_CA with a nickname for the CA.

  3. Import the server certificate and the key for the certificate.

    > sudo dsctl INSTANCE_NAME tls import-server-key-cert \
      /path/to/SERVER.pem /path/to/SERVER.key

    Replace INSTANCE_NAME with the instance name of the directory server. Replace /path/to/SERVER.pem with the full path to the server certificate in PEM format. Replace /path/to/SERVER.key with the full path to the server certificate key file in PEM format.

  4. Restart the instance so that the new certificates are used.

    > sudo systemctl restart dirsrv@INSTANCE_NAME.service
    

    Replace INSTANCE_NAME with the instance name of the directory server.

5.11 Setting up replication

389 Directory Server supports replicating its database content between multiple servers. Depending on the type of replication, this provides:

  • Faster performance and response times

  • Fault tolerance and failover

  • Load balancing

  • High availability

A database is the smallest unit of a directory that can be replicated. You can replicate an entire database, but not a subtree within a database. One database must correspond to one suffix. You cannot replicate a suffix that is distributed over two or more databases.

A replica that sends data to another replica is a supplier. A replica that receives data from a supplier is a consumer. Replication is always initiated by the supplier, and a single supplier can send data to multiple consumers. The supplier is a read-write replica, and the consumer is read-only, except in the case of multi-supplier replication. In multi-supplier replication the suppliers are both suppliers and consumers of the same data.

5.11.1 Asynchronous writes

389 DS manages replication differently than other databases. Replication is asynchronous, and eventually consistent. This means:

  • Any write or change to a single server is immediately accepted.

  • There is a delay between a write finishing on one server, and then replicating and being visible on other servers.

  • If that write conflicts with writes on other servers, it may be rolled back at some point in the future.

  • Not all servers may show identical content at the same time due to replication delay.

Because LDAP deployments are read-heavy with few writes, these factors mean that all servers stay at or near a common baseline of a known consistent state. Small changes occur on top of this baseline, so many of these aspects of delayed replication are not perceived in day-to-day usage.

5.11.2 Designing your topology

Consider the following factors when you are designing your replication topology.

  • The need for replication: high availability, geo-location, read scaling, or a combination of all.

  • How many replicas (nodes, servers) you plan to have in your topology.

  • Direction of data flows, both inside of the topology, and data flowing into the topology.

  • How clients balance across nodes of the topology for their requests (multiple ldap URIs, SRV records, load balancers).

These factors all affect how you may create your topology. (See Section 5.11.3, “Example replication topologies” for topology examples.)

5.11.3 Example replication topologies

The following sections provide examples of replication topologies, using two to six 389 Directory Server nodes. The maximum number of supported supplier replicas in a topology is 20. Operational experience shows the optimal number for replication efficiency is a maximum of eight.

5.11.3.1 Two replicas

Example 5.4: Two supplier replicas
┌────┐       ┌────┐
│ S1 │◀─────▶│ S2 │
└────┘       └────┘

In Example 5.4, “Two supplier replicas” there are two replicas, S1 and S2, which replicate bi-directionally between each other, so they are both suppliers and consumers. S1 and S2 could be in separate data centers, or in the same data center. Clients can balance across the servers using LDAP URIs, a load balancer, or DNS SRV records. This is the simplest topology for high availability. Each server needs to be able to provide 100% of client load, in case the other server is offline for any reason. A two-node replication is generally not adequate for horizontal read scaling, as a single node handles all read requests if the other node is offline.

Note
Note: Default topology

The two-node topology should be considered the default topology, because it is the simplest to manage. You can expand your topology over time, as necessary.

5.11.3.2 Four supplier replicas

Example 5.5: Four supplier replicas
┌────┐       ┌────┐
│ S1 │◀─────▶│ S2 │
└────┘       └────┘
   ▲            ▲
   │            │
   ▼            ▼
┌────┐       ┌────┐
│ S3 │◀─────▶│ S4 │
└────┘       └────┘

Example 5.5, “Four supplier replicas” has four supplier replicas, which all synchronize to each other. These could be in four datacenters, or two servers per datacenter. In the case of one node per data center, each node should be able to support 100% of client load. When there are two per datacenter, each one only needs to scale to 50% of the client load.

5.11.3.3 Six replicas

Example 5.6: Six replicas
                  ┌────┐       ┌────┐
                  │ S1 │◀─────▶│ S2 │
                  └────┘       └────┘
                     ▲            ▲
                     │            │
   ┌────────────┬────┴────────────┴─────┬────────────┐
   │            │                       │            │
   ▼            ▼                       ▼            ▼
┌────┐       ┌────┐                  ┌────┐       ┌────┐
│ S3 │◀─────▶│ S4 │                  │ S5 │◀─────▶│ S6 │
└────┘       └────┘                  └────┘       └────┘

In Example 5.6, “Six replicas”, each pair is in a separate location. S1 and S2 are the suppliers, and S3, S4, S5, and S6 are consumers of S1 and S2. Each pair of servers replicates to each other. S3, S4, S5 and S6 can accept writes, though most of the replication is done through S1 and S2. This setup provides geographic separation for high availability and scaling.

5.11.3.4 Six replicas with read-only consumers

Example 5.7: Six replicas with read-only consumers
             ┌────┐       ┌────┐
             │ S1 │◀─────▶│ S2 │
             └────┘       └────┘
                │            │
                │            │
   ┌────────────┼────────────┼────────────┐
   │            │            │            │
   ▼            ▼            ▼            ▼
┌────┐       ┌────┐       ┌────┐       ┌────┐
│ S3 │       │ S4 │       │ S5 │       │ S6 │
└────┘       └────┘       └────┘       └────┘

In Example 5.7, “Six replicas with read-only consumers”, S1 and S2 are the suppliers, and the other four servers are read-only consumers. All changes occur on S1 and S2, and are propagated to the four replicas. Read-only consumers can be configured to store only a subset of the database, or partial entries, to limit data exposure. You could have a fractional read-only server in a DMZ, for example, so that if data is exposed, changes cannot propagate back to the other replicas.

5.11.4 Terminology

In the example topologies we have seen that 389 DS can take on a number of roles in a topology. The following list clarifies the terminology.

Replica

An instance of 389 DS with an attached database.

Read-write replica

A replica with a full copy of a database, that accepts read and write operations.

Read-only replica

A replica with a full copy of a database, that only accepts read operations.

Fractional read-only replica

A replica with a partial copy of a database, that only accepts read-only operations.

Supplier

A replica that supplies data from its database to another replica.

Consumer

A replica that receives data from another replica to write into its database.

Replication agreement

The configuration of a server defining its supplier and consumer relation to another replica.

Topology

A set of replicas connected via replication agreements.

Replica ID

A unique identifier of the 389 Directory Server instance within the replication topology.

Replication manager

An account with replication rights in the directory.

5.11.5 Configuring replication

The first example sets up two-node bi-directional replication with a single read-only server, as a minimal starting example. In the following examples, the host names of the two read-write nodes are RW1 and RW2, and the read-only server is RO3. (Use your own host names.)

All servers should have a back-end with an identical suffix. Only one server, RW1, needs an initial copy of the database.
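If a node does not yet have a back-end for the suffix, you can create an empty one with dsconf. This is a sketch; the back-end name userRoot is an assumption:

> sudo dsconf INSTANCE-NAME backend create \
--suffix dc=example,dc=com --be-name userRoot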

5.11.5.1 Configuring two-node replication

The following commands configure the read-write replicas in a two-node setup (Example 5.4, “Two supplier replicas”), with the host names RW1 and RW2. (Use your own host names.)

Warning
Warning: Create a strong replication manager password

The replication manager should be considered equivalent to the directory manager in terms of security and access, and should have a strong password.

If you create different replication manager passwords for each server, be sure to keep track of which password belongs to which server. For example, when you configure the outbound connection in RW1's replication agreement, you need to set the replication manager password to the RW2 replication manager password.

First, configure RW1:

> sudo dsconf INSTANCE-NAME replication create-manager
> sudo dsconf INSTANCE-NAME replication enable \
--suffix dc=example,dc=com \
--role supplier --replica-id 1 --bind-dn "cn=replication manager,cn=config"

Configure RW2:

> sudo dsconf INSTANCE-NAME replication create-manager
> sudo dsconf INSTANCE-NAME replication enable \
--suffix dc=example,dc=com \
--role supplier --replica-id 2 --bind-dn "cn=replication manager,cn=config"

This will create the replication metadata required on RW1 and RW2. Note the difference in the replica-id between the two servers. This also creates the replication manager account, which is an account with replication rights for authenticating between the two nodes.

RW1 and RW2 are now both configured to have replication metadata. The next step is to create the first agreement for outbound data from RW1 to RW2.

> sudo dsconf INSTANCE-NAME repl-agmt create \
--suffix dc=example,dc=com \
--host=RW2 --port=636 --conn-protocol LDAPS --bind-dn "cn=replication manager,cn=config" \
--bind-passwd PASSWORD --bind-method SIMPLE RW1_to_RW2

Data does not flow from RW1 to RW2 until after a full synchronization of the database, which is called an initialization or reinit. This resets all database content on RW2 to match the content of RW1. Run the following command to trigger a reinit of the data:

> sudo dsconf INSTANCE-NAME repl-agmt init \
--suffix dc=example,dc=com RW1_to_RW2

Check the status by running this command on RW1:

> sudo dsconf INSTANCE-NAME repl-agmt init-status \
--suffix dc=example,dc=com RW1_to_RW2

When it is finished, you should see an Agreement successfully initialized message. If you get an error message, check the errors log. Otherwise, you should see the identical content from RW1 on RW2.

Finally, to make this bi-directional, configure a replication agreement from RW2 outbound to RW1:

> sudo dsconf INSTANCE-NAME repl-agmt create \
--suffix dc=example,dc=com \
--host=RW1 --port=636 --conn-protocol LDAPS \
--bind-dn "cn=replication manager,cn=config" --bind-passwd PASSWORD \
--bind-method SIMPLE RW2_to_RW1

Changes made on either RW1 or RW2 are now replicated to the other. Check replication status on either server with the following command:

> sudo dsconf INSTANCE-NAME repl-agmt status \
--suffix dc=example,dc=com \
--bind-dn "cn=replication manager,cn=config" \
--bind-passwd PASSWORD RW2_to_RW1

5.11.5.2 Configuring a read-only node

To create a read-only node, start by creating the replication manager account and metadata. The hostname of the example server is RO3:

Warning
Warning: Create a strong replication manager password

The replication manager should be considered equivalent to the directory manager in terms of security and access, and should have a strong password.

If you create different replication manager passwords for each server, be sure to keep track of which password belongs to which server. For example, when you configure the outbound connection in RW1's replication agreement, you need to set the replication manager password to the RW2 replication manager password.

> sudo dsconf INSTANCE_NAME replication create-manager
> sudo dsconf INSTANCE_NAME \
replication enable --suffix dc=EXAMPLE,dc=COM \
--role consumer --bind-dn "cn=replication manager,cn=config"

For a read-only replica, you do not provide a replica-id, and the role is set to consumer. This allocates a special read-only replica-id for all read-only replicas. After the read-only replica is created, add the replication agreements from RW1 and RW2 to the read-only instance. The following example is on RW1:

> sudo dsconf INSTANCE_NAME \
repl-agmt create --suffix dc=EXAMPLE,dc=COM \
--host=RO3 --port=636 --conn-protocol LDAPS \
--bind-dn "cn=replication manager,cn=config" --bind-passwd PASSWORD
--bind-method SIMPLE RW1_to_RO3

The following example, on RW2, configures the replication agreement between RW2 and RO3:

> sudo dsconf INSTANCE_NAME repl-agmt create \
--suffix dc=EXAMPLE,dc=COM \
--host=RO3 --port=636 --conn-protocol LDAPS \
--bind-dn "cn=replication manager,cn=config" --bind-passwd PASSWORD \
--bind-method SIMPLE RW2_to_RO3

After these steps are completed, you can use either RW1 or RW2 to perform the initialization of the database on RO3. The following example initializes RO3 from RW2:

> sudo dsconf INSTANCE_NAME repl-agmt init \
--suffix dc=EXAMPLE,dc=COM RW2_to_RO3

5.11.6 Monitoring and healthcheck

The dsconf command includes a monitoring option. You can check the status of each replica directly on the replicas, or from other hosts. The following example commands are run on RW1, checking the status on two remote replicas, and then on itself:

> sudo dsconf -D "cn=Directory Manager" ldap://RW2 replication monitor
> sudo dsconf -D "cn=Directory Manager" ldap://RO3 replication monitor
> sudo dsconf -D "cn=Directory Manager" ldap://RW1 replication monitor

The dsctl command has a healthcheck option. The following example runs a replication healthcheck on the local 389 DS instance:

> sudo dsctl INSTANCE_NAME healthcheck --check replication

Use the -v option for verbosity, to see what the healthcheck examines:

> sudo dsctl -v INSTANCE_NAME healthcheck --check replication

Run dsctl INSTANCE_NAME healthcheck with no options for a general health check.

Run the following command to see a list of the checks that healthcheck performs:

> sudo dsctl INSTANCE_NAME healthcheck --list-checks
config:hr_timestamp
config:passwordscheme
backends:userroot:cl_trimming
backends:userroot:mappingtree
backends:userroot:search
backends:userroot:virt_attrs
encryption:check_tls_version
fschecks:file_perms
[...]

You can run one or more of the individual checks:

> sudo dsctl INSTANCE_NAME healthcheck \
--check monitor-disk-space:disk_space tls:certificate_expiration

5.11.7 Making backups

When replication is enabled, you need to adjust your 389 Directory Server backup strategy (see Section 5.5, “Backing up and restoring 389 Directory Server” to learn about making backups). If you are using db2ldif, you must add the --replication flag to ensure that replication metadata is backed up. You should back up all servers in the topology. When restoring from backup, start by restoring a single node of the topology, then reinitialize all other nodes as new instances.
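As a minimal sketch, a replication-aware export of the default userroot back end could look like the following. The back-end name and output path are examples, and db2ldif via dsctl is typically run while the instance is stopped:

> sudo dsctl INSTANCE_NAME db2ldif --replication userroot /tmp/backup.ldif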

5.11.8 Pausing and resuming replication

You can pause replication during maintenance windows, or any time you need to stop it. A node of the topology can only be offline up to the maximum age of the changelog (see Section 5.11.9, “Changelog max-age”).

Use the repl-agmt command to pause replication. The following example is on RW2:

> sudo dsconf INSTANCE_NAME repl-agmt disable \
--suffix dc=EXAMPLE,dc=COM RW2_to_RW1

The following example re-enables replication:

> sudo dsconf INSTANCE_NAME repl-agmt enable \
--suffix dc=EXAMPLE,dc=COM RW2_to_RW1

5.11.9 Changelog max-age

A replica can be offline for up to the length of time defined by the changelog max-age option. max-age defines the maximum age of any entry in the changelog. Any items older than the max-age value are automatically removed.

After the replica comes back online, it synchronizes with the other replicas. If it is offline for longer than the max-age value, the replica needs to be re-initialized and refuses to accept or provide changes to other nodes, as they may be inconsistent. The following example sets the max-age to seven days:

> sudo dsconf INSTANCE_NAME \
replication set-changelog --max-age 7d \
--suffix dc=EXAMPLE,dc=COM

5.11.10 Removing a replica

To remove a replica, first fence the node to prevent any incoming changes or reads. Then find all servers that have incoming replication agreements with the node you are removing, and delete those agreements. The following example removes RW2. Start by deleting the outbound replication agreement on RW1:

> sudo dsconf INSTANCE_NAME repl-agmt delete \
--suffix dc=EXAMPLE,dc=COM RW1_to_RW2

On the replica you are removing, which in the following example is RW2, remove all outbound agreements:

> sudo dsconf INSTANCE_NAME repl-agmt delete \
--suffix dc=EXAMPLE,dc=COM RW2_to_RW1
> sudo dsconf INSTANCE_NAME repl-agmt delete \
--suffix dc=EXAMPLE,dc=COM RW2_to_RO3

Stop the instance on RW2:

> sudo systemctl stop dirsrv@INSTANCE_NAME.service

Then run the cleanallruv command to remove the replica ID of the removed node (2 in this example) from the topology. The following example is run on RW1:

> sudo dsconf INSTANCE_NAME repl-tasks cleanallruv \
--suffix dc=EXAMPLE,dc=COM --replica-id 2
> sudo dsconf INSTANCE_NAME repl-tasks list-cleanruv-tasks
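After the RUV cleanup has completed, the decommissioned instance on RW2 can also be deleted entirely if it is no longer needed. dsctl requires the --do-it flag as a safety measure:

> sudo dsctl INSTANCE_NAME remove --do-it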

5.11.11 Limitations on replication of 389 Directory Server

The use of 389 Directory Server is supported within the following replication limits:

  • A maximum of 8 read-write nodes

  • A maximum of 20 replication hubs

  • A maximum of 100 read-only servers

  • A maximum of 1 Winsync Active Directory consumer as a read-write node member

5.12 Synchronizing with Microsoft Active Directory

389 Directory Server supports synchronizing certain user and group content from Microsoft's Active Directory, so that Linux clients can use 389 DS for their identity information without the normally required domain join process. This also allows 389 DS to extend and use its other features with the data synchronized from Active Directory.

5.12.1 Planning your synchronization topology

Due to how the synchronization works, only a single 389 Directory Server and a single Active Directory server are involved. The Active Directory server must be a full Domain Controller, and not a Read Only Domain Controller (RODC). The Global Catalog is not required on the DC that is synchronized, as 389 DS only replicates the content of a single domain in a forest.

You must first choose the direction of your data flow. There are three options: from AD to 389 DS, from 389 DS to AD, or bidirectional.

Note
Note: No password synchronization

Passwords cannot be synchronized between 389 DS and Active Directory. This may change in the future, to support Active Directory to 389 DS password flow.

Your topology looks like the following diagram. The 389 Directory Server and Active Directory topologies may differ, but the most important factor is to have only a single connection between 389 DS and Active Directory. It is important to account for this in your disaster recovery and backup plans for both 389 DS and AD, to ensure that you correctly restore only a single replication connection between these topologies.

┌────────┐     ┌────────┐         ┌────────┐     ┌────────┐
│        │     │        │         │        │     │        │
│ 389-ds │◀───▶│ 389-ds │◀ ─ ─ ─ ▶│   AD   │◀───▶│   AD   │
│        │     │        │         │        │     │        │
└────────┘     └────────┘         └────────┘     └────────┘
    ▲               ▲                  ▲             ▲
    │               │                  │             │
    ▼               ▼                  ▼             ▼
┌────────┐     ┌────────┐         ┌────────┐     ┌────────┐
│        │     │        │         │        │     │        │
│ 389-ds │◀───▶│ 389-ds │         │   AD   │◀───▶│   AD   │
│        │     │        │         │        │     │        │
└────────┘     └────────┘         └────────┘     └────────┘

5.12.2 Prerequisites for Active Directory

A security group that is granted the Replicating Directory Changes permission is required. For example, you have created a group named Directory Server Sync. Follow the steps in How to grant the 'Replicating Directory Changes' permission for the Microsoft Metadirectory Services ADMA service account (https://docs.microsoft.com/en-US/troubleshoot/windows-server/windows-security/grant-replicating-directory-changes-permission-adma-service) to set this up.

Warning
Warning: Strong security needed

You should consider members of this group to be of equivalent security importance to Domain Administrators. Members of this group have the ability to read sensitive content from the Active Directory environment, so you should use strong, randomly generated service account passwords for these accounts, and carefully audit membership to this group.

You should also create a service account that is a member of this group.

Your Active Directory environment must have certificates configured for LDAPS to ensure that authentication between 389 DS and AD is secure. Authentication with Generic Security Services API/Kerberos (GSSAPI/KRB) cannot be used.

5.12.3 Prerequisites for 389 Directory Server

The 389 Directory Server must have a back-end database already configured with Organizational Units (OUs) for entries to be synchronized into.

The 389 Directory Server must have a replica ID configured as though the server is a read-write replica. (For details about setting up replication see Section 5.11, “Setting up replication”).

5.12.4 Creating an agreement from Active Directory to 389 Directory Server

The following example command, which is run on the 389 Directory Server, creates a replication agreement from Active Directory to 389 Directory Server:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt create --suffix dc=example,dc=com \
  --host AD-HOSTNAME --port 636 --conn-protocol LDAPS \
  --bind-dn "cn=SERVICE-ACCOUNT,cn=USERS,dc=AD,dc=EXAMPLE,dc=COM" \
  --bind-passwd "PASSWORD" --win-subtree "cn=USERS,dc=AD,dc=EXAMPLE,dc=COM" \
  --ds-subtree ou=AD,dc=EXAMPLE,dc=COM --one-way-sync fromWindows \
  --sync-users=on --sync-groups=on --move-action delete \
  --win-domain AD-DOMAIN adsync_agreement

Once the agreement has been created, you must perform an initial resynchronization:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt init --suffix dc=example,dc=com adsync_agreement

Use the following command to check the status of the initialization:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt init-status --suffix dc=example,dc=com adsync_agreement
Note
Note: Some entries are not synchronized

In some cases, an entry may not be synchronized, even if the init status reports success. Check your 389 DS log files in /var/log/dirsrv/slapd-INSTANCE-NAME/errors.

Check the status of the agreement with the following command:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt status --suffix dc=example,dc=com adsync_agreement

When you are performing maintenance on the Active Directory or 389 Directory Server topology, you can pause the agreement with the following command:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt disable --suffix dc=example,dc=com adsync_agreement

Resume the agreement with the following command:

> sudo dsconf INSTANCE-NAME repl-winsync-agmt enable --suffix dc=example,dc=com adsync_agreement

5.13 More information

For more information about 389 Directory Server, see the upstream documentation at https://www.port389.org.

6 Network authentication with Kerberos

Kerberos is a network authentication protocol which also provides encryption. This chapter describes how to set up Kerberos and integrate services like LDAP and NFS.

6.1 Conceptual overview

An open network provides no means of ensuring that a workstation can identify its users properly, except through the usual password mechanisms. In common installations, the user must enter the password each time a service inside the network is accessed. Kerberos provides an authentication method with which a user registers only once and is trusted in the complete network for the rest of the session. To have a secure network, the following requirements must be met:

  • Have all users prove their identity for each desired service and make sure that no one can take the identity of someone else.

  • Make sure that each network server also proves its identity. Otherwise an attacker might be able to impersonate the server and obtain sensitive information transmitted to the server. This concept is called mutual authentication, because the client authenticates to the server and vice versa.

Kerberos helps you meet these requirements by providing strongly encrypted authentication. Only the basic principles of Kerberos are discussed here. For detailed technical instruction, refer to the Kerberos documentation.

6.2 Kerberos terminology

The following glossary defines Kerberos terminology.

credential

Users or clients need to present credentials that authorize them to request services. Kerberos knows two kinds of credentials—tickets and authenticators.

ticket

A ticket is a per-server credential used by a client to authenticate at a server from which it is requesting a service. It contains the name of the server, the client's name, the client's Internet address, a time stamp, a lifetime, and a random session key. All this data is encrypted using the server's key.

authenticator

Combined with the ticket, an authenticator is used to prove that the client presenting a ticket is really the one it claims to be. An authenticator is built using the client's name, the workstation's IP address, and the current workstation's time, all encrypted with the session key known only to the client and the relevant server. An authenticator can only be used once, unlike a ticket. A client can build an authenticator itself.

principal

A Kerberos principal is a unique entity (a user or service) to which Kerberos can assign a ticket. A principal consists of the following components:

USER/INSTANCE@REALM
  • primary: The first part of the principal. For users, this is the same as the user name.

  • instance (optional): Additional information characterizing the primary. This string is separated from the primary by a /.

    tux@example.org and tux/admin@example.org can both exist on the same Kerberos system and are treated as different principals.

  • realm: Specifies the Kerberos realm. Normally, your realm is your domain name in uppercase letters.

mutual authentication

Kerberos ensures that both client and server can be sure of each other's identity. They share a session key, which they can use to communicate securely.

session key

Session keys are temporary private keys generated by Kerberos. They are known to the client and used to encrypt the communication between the client and the server for which it requested and received a ticket.

replay

Almost all messages sent in a network can be eavesdropped, stolen and resent. In the Kerberos context, this would be most dangerous if an attacker manages to obtain your request for a service containing your ticket and authenticator. The attacker could then try to resend it (replay) to impersonate you. However, Kerberos implements several mechanisms to deal with this problem.

server or service

Service is used to refer to a specific action to perform. The process behind this action is called a server.

6.3 How Kerberos works

Kerberos is often called a third-party trusted authentication service, which means all its clients trust Kerberos's judgment of another client's identity. Kerberos keeps a database of all its users and their private keys.

To ensure Kerberos is working correctly, run both the authentication and ticket-granting server on a dedicated machine. Make sure that only the administrator can access this machine physically and over the network. Reduce the (networking) services running on it to the absolute minimum—do not even run sshd.

6.3.1 First contact

Your first contact with Kerberos is similar to any login procedure at a normal networking system. Enter your user name. This piece of information and the name of the ticket-granting service are sent to the authentication server (Kerberos). If the authentication server knows you, it generates a random session key for further use between your client and the ticket-granting server. Now the authentication server prepares a ticket for the ticket-granting server. The ticket contains the following information—all encrypted with a session key only the authentication server and the ticket-granting server know:

  • The names of both the client and the ticket-granting server

  • The current time

  • A lifetime assigned to this ticket

  • The client's IP address

  • The newly generated session key

This ticket is then sent back to the client together with the session key, again in encrypted form, but this time the private key of the client is used. This private key is only known to Kerberos and the client, because it is derived from your user password. Now that the client has received this response, you are prompted for your password. This password is converted into the key that can decrypt the package sent by the authentication server. The package is unwrapped and password and key are erased from the workstation's memory. As long as the lifetime given to the ticket used to obtain other tickets does not expire, your workstation can prove your identity.

6.3.2 Requesting a service

To request a service from any server in the network, the client application needs to prove its identity to the server. Therefore, the application generates an authenticator. An authenticator consists of the following components:

  • The client's principal

  • The client's IP address

  • The current time

  • A checksum (chosen by the client)

All this information is encrypted using the session key that the client has already received for this special server. The authenticator and the ticket for the server are sent to the server. The server uses its copy of the session key to decrypt the authenticator, which gives it all the information needed about the client requesting its service, to compare it to that contained in the ticket. The server checks if the ticket and the authenticator originate from the same client.

Without any security measures implemented on the server side, this stage of the process would be an ideal target for replay attacks. Someone could try to resend a request stolen off the net some time before. To prevent this, the server does not accept any request with a time stamp and ticket received previously. A request with a time stamp differing too much from the time the request is received is ignored.

6.3.3 Mutual authentication

Kerberos authentication can be used in both directions. It is not only a question of the client being the one it claims to be. The server should also be able to authenticate itself to the client requesting its service. Therefore, it sends an authenticator itself. It adds one to the checksum it received in the client's authenticator and encrypts it with the session key, which is shared between it and the client. The client takes this response as a proof of the server's authenticity and they both start cooperating.

6.3.4 Ticket granting—contacting all servers

Tickets are designed to be used for one server at a time. Therefore, you need to get a new ticket each time you request another service. Kerberos implements a mechanism to obtain tickets for individual servers. This service is called the ticket-granting service. The ticket-granting service is a service (like any other service mentioned before) and uses the same access protocols that have already been outlined. Any time an application needs a ticket that has not already been requested, it contacts the ticket-granting server. This request consists of the following components:

  • The requested principal

  • The ticket-granting ticket

  • An authenticator

Like any other server, the ticket-granting server now checks the ticket-granting ticket and the authenticator. If they are considered valid, the ticket-granting server builds a new session key to be used between the original client and the new server. Then the ticket for the new server is built, containing the following information:

  • The client's principal

  • The server's principal

  • The current time

  • The client's IP address

  • The newly generated session key

The new ticket has a lifetime, which is either the remaining lifetime of the ticket-granting ticket or the default for the service. The lesser of both values is assigned. The client receives this ticket and the session key, which are sent by the ticket-granting service. But this time the answer is encrypted with the session key that came with the original ticket-granting ticket. The client can decrypt the response without requiring the user's password when a new service is contacted. Kerberos can thus acquire ticket after ticket for the client without bothering the user.

6.4 User view of Kerberos

Ideally, a user's only contact with Kerberos happens during login at the workstation. The login process includes obtaining a ticket-granting ticket. At logout, a user's Kerberos tickets are automatically destroyed, which makes it difficult for anyone else to impersonate this user.

The automatic expiration of tickets can lead to a situation where a user's login session lasts longer than the maximum lifespan given to the ticket-granting ticket (a reasonable setting is 10 hours). However, the user can get a new ticket-granting ticket by running kinit. Enter the password again, and Kerberos obtains access to the desired services without additional authentication. To get a list of all the tickets silently acquired for you by Kerberos, run klist.
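For example, to renew the ticket-granting ticket and then list the acquired tickets (the principal is an example):

> kinit tux@example.org
> klist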

Here is a short list of applications that use Kerberos authentication. These applications can be found under /usr/lib/mit/bin or /usr/lib/mit/sbin after installing the package krb5-apps-clients. They all have the full functionality of their common Unix and Linux counterparts, plus the additional bonus of transparent authentication managed by Kerberos:

  • telnet, telnetd

  • rlogin

  • rsh, rcp, rshd

  • ftp, ftpd

You no longer need to enter your password for using these applications because Kerberos has already proven your identity. ssh, if compiled with Kerberos support, can even forward all the tickets acquired for one workstation to another one. If you use ssh to log in to another workstation, ssh makes sure that the encrypted contents of the tickets are adjusted to the new situation. Simply copying tickets between workstations is not sufficient because the ticket contains workstation-specific information (the IP address). XDM and GDM offer Kerberos support, too. Read more about the Kerberos network applications in Kerberos V5 UNIX User's Guide at https://web.mit.edu/kerberos.
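With OpenSSH, Kerberos authentication and ticket forwarding are controlled by the GSSAPI client options, which can be enabled per connection or in ~/.ssh/config. The host name is an example:

> ssh -o GSSAPIAuthentication=yes -o GSSAPIDelegateCredentials=yes jupiter.example.org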

6.5 Kerberos and NFS

Most NFS servers can export file systems using any combination of the default trust-the-network form of security, known as sec=sys, and three different levels of Kerberos-based security: sec=krb5, sec=krb5i, and sec=krb5p. The sec option is set as a mount option on the client. Often the NFS service is configured first and used with sec=sys, and Kerberos is imposed afterwards. In this case, the server is typically configured to support both sec=sys and one of the Kerberos levels; after all clients have transitioned, the sec=sys support is removed. The transition to Kerberos should be transparent if done in an orderly manner. However, there is one subtle detail of NFS behavior that works differently when Kerberos is used, and its implications need to be understood and addressed. See Section 6.5.1, “Group membership”.

The three Kerberos levels indicate different levels of security. With more security comes a need for more processor power to encrypt and decrypt messages. Choosing the right balance is an important consideration when planning a roll-out of Kerberos for NFS.

krb5 provides only authentication. The server can know who sent a request, and the client can know that the server sent a reply. No security is provided for the content of the request or reply, so an attacker with physical network access could transform the request or reply, or both, in various ways to deceive either server or client. They cannot directly read or change any file that the authenticated user could not read or change, but almost anything is theoretically possible.

krb5i adds integrity checks to all messages. With krb5i, an attacker cannot modify any request or reply, but they can view all the data exchanged, and so could discover the content of any file that is read.

krb5p adds privacy to the protocol. As well as reliable authentication and integrity checking, messages are fully encrypted so an attacker can only know that messages were exchanged between client and server, and cannot extract other information directly from the message. Whether information can be extracted from message timing is a separate question that Kerberos does not address.
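As a sketch of such a transition, the server can export a file system with both security flavors while clients migrate, and each client selects the flavor as a mount option. The paths and host name are examples. On the server, an /etc/exports entry offering both flavors might look like this:

/export  *(rw,sec=sys:krb5i)

A client then mounts with the Kerberos flavor:

> sudo mount -t nfs -o sec=krb5i jupiter.example.org:/export /mnt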

6.5.1 Group membership

The one behavioral difference between sec=sys and the Kerberos security levels that might be visible is related to group membership. In Unix and Linux, each file system access comes from a process that is owned by a particular user and has a particular group owner and several supplemental groups. Access rights to files can vary based on the owner and the groups.

With sec=sys, the user-id, group-id, and a list of up to 16 supplementary groups are sent to the server in each request.

If a user is a member of more than 16 supplementary groups, the extra groups are lost and files may not be accessible over NFS that the user would normally expect to have access to. For this reason, most sites that use NFS find a way to limit all users to at most 16 supplementary groups.
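To check roughly how many groups a user belongs to, count the output of the id command. Note that the count includes the primary group; the user name is an example:

> id -G tux | wc -w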

If the user runs the newgrp command or runs a set-group-id program, either of which can change the list of groups they are a member of, these changes take effect immediately and grant different access over NFS.

With Kerberos, group information is not sent in requests. Only the user is identified (using a Kerberos principal), and the server performs a lookup to determine the user ID and group list for that principal. This means that if the user is a member of more than 16 groups, all of these group memberships are used in determining file access permissions. However, it also means that if the user changes a group-id on the client, the server does not notice the change and does not take it into account in determining access rights.

The improvement of having access to more groups brings a real benefit, and the loss of the ability to change groups mid-session is rarely noticed, as that functionality is not widely used. A site administrator considering the use of Kerberos should nonetheless be aware of the difference and ensure that it does not cause problems.

6.5.2 Performance and scalability

Using Kerberos for security requires extra CPU power for encrypting and decrypting messages. How much extra CPU power is required and whether the difference is noticeable varies with different hardware and different applications. If the server or client are already saturating the available CPU power, it is likely that a performance drop is measurable when switching from sec=sys to Kerberos. If there is spare CPU capacity available, it is possible that the transition does not result in any throughput change. The only way to be sure how much impact the use of Kerberos has is to test your load on your hardware.

The only configuration options that might reduce the load also reduce the quality of the protection offered. sec=krb5 should produce noticeably less load than sec=krb5p but, as discussed above, it does not produce strong security. Similarly it is possible to adjust the list of ciphers that Kerberos can choose from, and this might change the CPU requirement. However the defaults are carefully chosen and should not be changed without similar careful consideration.

The other possible performance issue when configuring NFS to use Kerberos involves availability of the Kerberos authentication servers, known as the KDC or Key Distribution Center.

The use of NFS adds load to such servers to the same degree that adding the use of Kerberos for any other service adds load. Every time a given user (Kerberos principal) establishes a session with a service, for example by accessing files exported by a particular NFS server, the client needs to negotiate with the KDC. Once a session key has been negotiated, the client and server can communicate without further help for many hours, depending on details of the Kerberos configuration, particularly the ticket_lifetime setting.

The concerns most likely to affect the provisioning of Kerberos KDC servers are availability and peak usage.

As with other core services such as DNS, LDAP or similar name-lookup services, having two servers that are reasonably close to every client provides good availability for modest resources. Kerberos allows for multiple KDC servers with flexible models for database propagation, so distributing servers as needed around campuses, buildings and even cabinets is straightforward. The best mechanism to ensure each client finds a nearby Kerberos server is to use split-horizon DNS with each building (or similar) getting different details from the DNS server. If this is not possible, then managing the /etc/krb5.conf file to be different at different locations is a suitable alternative.
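A minimal /etc/krb5.conf fragment pointing clients at two nearby KDCs might look like the following sketch (realm and host names are examples):

[realms]
    EXAMPLE.COM = {
        kdc = kdc1.example.com
        kdc = kdc2.example.com
    }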

As access to the Kerberos KDC is infrequent, load is only likely to be a problem at peak times. If thousands of people all log in between 9:00 and 9:05, then the servers receive many more requests-per-minute than they might in the middle of the night. The load on the Kerberos server is likely to be more than that on an LDAP server, but not orders of magnitude more. A sensible guideline is to provision Kerberos replicas in the same manner that you provision LDAP replicas, and then monitor performance to determine if demand ever exceeds capacity.

6.5.3 Master KDC, multiple domains, and trust relationships

One service of the Kerberos KDC that is not easily distributed is the handling of updates, such as password changes and new user creation. These must happen at a single master KDC.

These updates are not likely to happen with such frequency that any significant load is generated, but availability could be an issue. It can be annoying when you need to create a new user or change a password, and the master KDC on the other side of the world is temporarily unavailable.

When an organization is geographically distributed and has a policy of handling administration tasks locally at each site, it can be beneficial to create multiple Kerberos domains, one for each administrative center. Each domain would then have its own master KDC which would be geographically local. Users in one domain can still get access to resources in another domain by setting up trust relationships between domains.

The easiest arrangement for multiple domains is to have a global domain (for example, EXAMPLE.COM) and local domains (for example, ASIA.EXAMPLE.COM, EUROPE.EXAMPLE.COM). If the global domain is configured to trust each local domain, and each local domain is configured to trust the global domain, then fully transitive trust is available between any pair of domains, and any principal can establish a secure connection with any service. Ensuring appropriate access rights to resources, for example files provided by that service, depends on the user name lookup service used, and the functionality of the NFS file server, and is beyond the scope of this document.
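With MIT Kerberos, such trust paths can be made explicit in the [capaths] section of /etc/krb5.conf. The following sketch, using the example domains above, routes authentication between the two local domains through the global domain:

[capaths]
    ASIA.EXAMPLE.COM = {
        EUROPE.EXAMPLE.COM = EXAMPLE.COM
    }
    EUROPE.EXAMPLE.COM = {
        ASIA.EXAMPLE.COM = EXAMPLE.COM
    }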

6.6 More information

The official site of MIT Kerberos is https://web.mit.edu/kerberos. There, find links to any other relevant resource concerning Kerberos, including Kerberos installation, user, and administration guides.

The book Kerberos—A Network Authentication System by Brian Tung (ISBN 0-201-37924-4) offers extensive information.

7 Active Directory support

Active Directory* (AD) is a directory service based on LDAP, Kerberos, and other services. It is used by Microsoft* Windows* to manage resources, services, and people. In a Microsoft Windows network, Active Directory provides information about these objects, restricts access to them, and enforces policies. SUSE® Linux Enterprise Desktop lets you join existing Active Directory domains and integrate your Linux machine into a Windows environment.

7.1 Integrating Linux and Active Directory environments

With a Linux client (configured as an Active Directory client) that is joined to an existing Active Directory domain, you can benefit from various features not available on a pure SUSE Linux Enterprise Desktop Linux client:

Browsing shared files and directories with SMB

GNOME Files (previously called Nautilus) supports browsing shared resources through SMB.

Sharing files and directories with SMB

GNOME Files supports sharing directories and files as in Windows.

Accessing and manipulating user data on the Windows server

Through GNOME Files, users can access their Windows user data and can edit, create, and delete files and directories on the Windows server. Users can access their data without having to enter their password multiple times.

Offline authentication

Users can log in and access their local data on the Linux machine even if they are offline or the Active Directory server is unavailable for other reasons.

Windows password change

This port of Active Directory support in Linux enforces corporate password policies stored in Active Directory. The display managers and console support password change messages and accept your input. You can even use the Linux passwd command to set Windows passwords.

Single sign-on through kerberized applications

Many desktop applications are Kerberos-enabled (kerberized), which means they can transparently handle authentication for the user without the need for password reentry at Web servers, proxies, groupware applications, or other locations.

Note
Note: Managing Unix attributes from Windows Server* 2016 and later

In Windows Server 2016 and later, Microsoft has removed the role IDMU/NIS Server and along with it the Unix Attributes plug-in for the Active Directory Users and Computers MMC snap-in.

However, Unix attributes can still be managed manually when Advanced Options are enabled in the Active Directory Users and Computers MMC snap-in. For more information, see https://blogs.technet.microsoft.com/activedirectoryua/2016/02/09/identity-management-for-unix-idmu-is-deprecated-in-windows-server/.

Alternatively, use the method described in Procedure 7.1, “Joining an Active Directory domain using User logon management” to complete attributes on the client side (in particular, see Step 6.c).

The following section contains technical background for most of the previously named features. For more information about file and printer sharing using Active Directory, see Book “GNOME User Guide”.

7.2 Background information for Linux Active Directory support

Many system components need to interact flawlessly to integrate a Linux client into an existing Windows Active Directory domain. The following sections focus on the underlying processes of the key events in Active Directory server and client interaction.

To communicate with the directory service, the client needs to share at least two protocols with the server:

LDAP

LDAP is a protocol optimized for managing directory information. A Windows domain controller with Active Directory can use the LDAP protocol to exchange directory information with the clients. To learn more about LDAP, refer to Chapter 5, LDAP with 389 Directory Server.

Kerberos

Kerberos is a third-party trusted authentication service. All its clients trust Kerberos authorization of another client's identity, enabling kerberized single sign-on (SSO) solutions. Windows supports a Kerberos implementation, making Kerberos SSO possible even with Linux clients. To learn more about Kerberos in Linux, refer to Chapter 6, Network authentication with Kerberos.

Depending on which YaST module you use to set up Kerberos authentication, different client components process account and authentication data:

Solutions based on SSSD
  • The sssd daemon is the central part of this solution. It handles all communication with the Active Directory server.

  • To gather name service information, sssd_nss is used.

  • To authenticate users, the pam_sss module for PAM is used. The creation of user homes for the Active Directory users on the Linux client is handled by pam_mkhomedir.

    For more information about PAM, see Chapter 2, Authentication with PAM.

Solution based on Winbind (Samba)
  • The winbindd daemon is the central part of this solution. It handles all communication with the Active Directory server.

  • To gather name service information, nss_winbind is used.

  • To authenticate users, the pam_winbind module for PAM is used. The creation of user homes for the Active Directory users on the Linux client is handled by pam_mkhomedir.

    For more information about PAM, see Chapter 2, Authentication with PAM.

Figure 7.1, “Schema of Winbind-based Active Directory authentication” highlights the most prominent components of Winbind-based Active Directory authentication.

Figure 7.1: Schema of Winbind-based Active Directory authentication

Applications that are PAM-aware, like the login routines and the GNOME display manager, interact with the PAM and NSS layer to authenticate against the Windows server. Applications supporting Kerberos authentication (such as file managers, Web browsers, or e-mail clients) use the Kerberos credential cache to access the user's Kerberos tickets, making them part of the SSO framework.

7.2.1 Domain join

During domain join, the server and the client establish a secure relation. On the client, the following tasks need to be performed to join the existing LDAP and Kerberos SSO environment provided by the Windows domain controller. The entire join process is handled by the YaST Domain Membership module, which can be run during installation or in the installed system:

  1. The Windows domain controller providing both LDAP and KDC (Key Distribution Center) services is located.

  2. A machine account for the joining client is created in the directory service.

  3. An initial ticket granting ticket (TGT) is obtained for the client and stored in its local Kerberos credential cache. The client needs this TGT to get further tickets allowing it to contact other services, like contacting the directory server for LDAP queries.

  4. NSS and PAM configurations are adjusted to enable the client to authenticate against the domain controller.

During client boot, the winbind daemon is started and retrieves the initial Kerberos ticket for the machine account. winbindd automatically refreshes the machine's ticket to keep it valid. To keep track of the current account policies, winbindd periodically queries the domain controller.
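On a Winbind-joined client, you can verify that the machine account trust is working with the wbinfo utility from the Samba winbind packages:

> wbinfo -t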

7.2.2 Domain login and user homes

The login manager of GNOME (GDM) has been extended to allow the handling of Active Directory domain login. Users can choose to log in to the primary domain the machine has joined or to one of the trusted domains with which the domain controller of the primary domain has established a trust relationship.

User authentication is mediated by several PAM modules as described in Section 7.2, “Background information for Linux Active Directory support”. If there are errors, the error codes are translated into user-readable error messages that PAM gives at login through any of the supported methods (GDM, console and SSH):

Password has expired

The user sees a message stating that the password has expired and needs to be changed. The system prompts for a new password and informs the user if the new password does not comply with corporate password policies (for example the password is too short, too simple, or already in the history). If a user's password change fails, the reason is shown and a new password prompt is given.

Account disabled

The user sees an error message stating that the account has been disabled and to contact the system administrator.

Account locked out

The user sees an error message stating that the account has been locked and to contact the system administrator.

Password has to be changed

The user can log in but receives a warning that the password needs to be changed soon. This warning is sent three days before the password expires. After expiration, the user cannot log in.

Invalid workstation

When a user is restricted to specific workstations and the current SUSE Linux Enterprise Desktop machine is not among them, a message appears that this user cannot log in from this workstation.

Invalid logon hours

When a user is only allowed to log in during working hours and tries to log in outside working hours, a message informs the user that logging in is not possible at that time.

Account expired

An administrator can set an expiration time for a specific user account. If that user tries to log in after expiration, the user gets a message that the account has expired and cannot be used to log in.

During a successful authentication, the client acquires a ticket granting ticket (TGT) from the Kerberos server of Active Directory and stores it in the user's credential cache. It also renews the TGT in the background, requiring no user interaction.

SUSE Linux Enterprise Desktop supports local home directories for Active Directory users. If configured through YaST as described in Section 7.3, “Configuring a Linux client for Active Directory”, user home directories are created when a Windows/Active Directory user first logs in to the Linux client. These home directories look and feel identical to standard Linux user home directories and work independently of the Active Directory Domain Controller.

Using a local user home, it is possible to access a user's data on this machine (even when the Active Directory server is disconnected) if the Linux client has been configured to perform offline authentication.

7.2.3 Offline service and policy support

Users in a corporate environment must have the ability to become roaming users (for example, to switch networks or even work disconnected for some time). To enable users to log in to a disconnected machine, extensive caching was integrated into the winbind daemon. The winbind daemon enforces password policies even in the offline state. It tracks the number of failed login attempts and reacts according to the policies configured in Active Directory. Offline support is disabled by default and must be explicitly enabled in the YaST Domain Membership module.

When the domain controller has become unavailable, the user can still access network resources (other than the Active Directory server itself) with valid Kerberos tickets that have been acquired before losing the connection (as in Windows). Password changes cannot be processed unless the domain controller is online. While disconnected from the Active Directory server, a user cannot access any data stored on this server. When a workstation has become disconnected from the network entirely and connects to the corporate network again later, SUSE Linux Enterprise Desktop acquires a new Kerberos ticket when the user has locked and unlocked the desktop (for example, using a desktop screen saver).

7.3 Configuring a Linux client for Active Directory

Before your client can join an Active Directory domain, adjustments must be made to your network setup to ensure the flawless interaction of client and server.

DNS

Configure your client machine to use a DNS server that can forward DNS requests to the Active Directory DNS server. Alternatively, configure your machine to use the Active Directory DNS server as the name service data source.

NTP

To succeed with Kerberos authentication, the client must have its time set accurately. It is recommended to use a central NTP time server for this purpose (this can also be the NTP server running on your Active Directory domain controller). If the clock skew between your Linux host and the domain controller exceeds a certain limit, Kerberos authentication fails and the client is logged in using the weaker NTLM (NT LAN Manager) authentication. For more details about using Active Directory for time synchronization, see Procedure 7.2, “Joining an Active Directory domain using Windows domain membership”.

Firewall

To browse your network neighborhood, either disable the firewall entirely or mark the interface used for browsing as part of the internal zone.

To change the firewall settings on your client, log in as root and start the YaST firewall module. Select Interfaces. Select your network interface from the list of interfaces and click Change. Select Internal Zone and apply your settings with OK. Leave the firewall settings with Next › Finish. To disable the firewall, check the Disable Firewall Automatic Starting option, and leave the firewall module with Next › Finish.
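If you prefer the command line over YaST, a similar change can be made with firewalld directly. The interface name is an example:

> sudo firewall-cmd --permanent --zone=internal --change-interface=eth0
> sudo firewall-cmd --reload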

Active Directory account

You cannot log in to an Active Directory domain unless the Active Directory administrator has provided you with a valid user account for that domain. Use the Active Directory user name and password to log in to the Active Directory domain from your Linux client.

7.3.1 Choosing which YaST module to use for connecting to Active Directory

YaST contains multiple modules for connecting to an Active Directory. They are described in the following sections.

7.3.2 Joining Active Directory using User logon management

The YaST module User Logon Management supports authentication at an Active Directory. Additionally, it also supports the following related authentication and identification providers:

Identification providers
  • Delegate to third-party software library Support for legacy NSS providers via a proxy.

  • FreeIPA FreeIPA and Red Hat Enterprise Identity Management provider.

  • Generic directory service (LDAP) An LDAP provider. For more information about configuring LDAP, see man 5 sssd-ldap.

  • Local SSSD file database An SSSD-internal provider for local users.

Authentication providers
  • Delegate to third-party software library Relay authentication to another PAM target via a proxy.

  • FreeIPA FreeIPA and Red Hat Enterprise Identity Management provider.

  • Generic Kerberos service Kerberos authentication.

  • Generic directory service (LDAP) An LDAP provider.

  • Local SSSD file database An SSSD-internal provider for local users.

  • This domain does not provide authentication service Disables authentication explicitly.

To join an Active Directory domain using SSSD and the User Logon Management module of YaST, proceed as follows:

Procedure 7.1: Joining an Active Directory domain using User logon management
  1. Open YaST.

  2. To be able to use DNS auto-discovery later, set up the Active Directory Domain Controller (the Active Directory server) as the name server for your client.

    1. In YaST, click Network Settings.

    2. Select Hostname/DNS, then enter the IP address of the Active Directory Domain Controller into the text box Name Server 1.

      Save the setting with OK.

  3. From the YaST main window, start the module User Logon Management.

    The module opens with an overview showing different network properties of your computer and the authentication method currently in use.

    Figure 7.2: Main window of User logon management
  4. To start editing, click Change Settings.

  5. Now join the domain.

    1. Click Add Domain.

    2. In the dialog that appears, specify the correct Domain name. Then specify the services to use for identity data and authentication: select Microsoft Active Directory for both.

      Ensure that Enable the domain is activated.

      Click OK.

    3. (Optional) You can keep the default settings in the following dialog. However, there are reasons to make changes:

      • If the local host name does not match the host name set on the domain controller.  Find out if the host name of your computer matches the name under which your computer is known to the Active Directory Domain Controller. In a terminal, run the command hostname, then compare its output to the configuration of the Active Directory Domain Controller.

        If the values differ, specify the host name from the Active Directory configuration under AD hostname. Otherwise, leave the appropriate text box empty.

      • If you do not want to use DNS auto-discovery.  Specify the Host names of Active Directory servers that you want to use. If there are multiple Domain Controllers, separate their host names with commas.

    4. To continue, click OK.

      If not all software is installed already, the computer now installs missing software. It then checks whether the configured Active Directory Domain Controller is available.

    5. If everything is correct, the following dialog should now show that it has discovered an Active Directory Server but that you are Not yet enrolled.

      In the dialog, specify the Username and Password of the Active Directory administrator account (Administrator).

      To make sure that the current domain is enabled for Samba, activate Overwrite Samba configuration to work with this AD.

      To enroll, click OK.

      Figure 7.3: Enrolling into a domain
    6. You should now see a message confirming that you have enrolled successfully. Finish with OK.

  6. After enrolling, configure the client using the window Manage Domain User Logon.

    Figure 7.4: Configuration window of User logon management
    1. To allow logging in to the computer using login data provided by Active Directory, activate Allow Domain User Logon.

    2. (Optional) Under Enable domain data source, activate additional data sources such as information on which users are allowed to use sudo or which network drives are available.

    3. To allow Active Directory users to have home directories, activate Create Home Directories. The path for home directories can be set in multiple ways—on the client, on the server, or both ways:

      • To configure the home directory paths on the Domain Controller, set an appropriate value for the attribute UnixHomeDirectory for each user. Additionally, make sure that this attribute is replicated to the global catalog. For information on achieving that under Windows, see https://support.microsoft.com/en-us/kb/248717.

      • To configure home directory paths on the client in such a way that precedence is given to the path set on the domain controller, use the option fallback_homedir.

      • To configure home directory paths on the client in such a way that the client setting overrides the server setting, use override_homedir.

      As settings on the Domain Controller are outside of the scope of this documentation, only the configuration of the client-side options is described in the following.

      From the side bar, select Service Options › Name switch, then click Extended Options. From that window, select either fallback_homedir or override_homedir, then click Add.

      Specify a value. To have home directories follow the format /home/USER_NAME, use /home/%u. For more information about possible variables, see the man page sssd.conf (man 5 sssd.conf), section override_homedir. A minimal configuration sketch is shown after this procedure.

      Click OK.

  7. Save the changes by clicking OK. Then make sure that the values displayed now are correct. To leave the dialog, click Cancel.
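For reference, the client-side home directory options set in Step 6.c correspond to a setting like the following in /etc/sssd/sssd.conf. This is a minimal sketch; the domain name is an example, and YaST normally manages this file for you:

[domain/example.com]
fallback_homedir = /home/%u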

7.3.3 Joining Active Directory using Windows domain membership

To join an Active Directory domain using winbind and the Windows Domain Membership module of YaST, proceed as follows:

Procedure 7.2: Joining an Active Directory domain using Windows domain membership
  1. Log in as root and start YaST.

  2. Start Network Services › Windows Domain Membership.

  3. Enter the domain to join at Domain or Workgroup in the Windows Domain Membership screen (see Figure 7.5, “Determining Windows domain membership”). If the DNS settings on your host are properly integrated with the Windows DNS server, enter the Active Directory domain name in its DNS format (mydomain.mycompany.com). If you enter the short name of your domain (also known as the pre–Windows 2000 domain name), YaST must rely on NetBIOS name resolution instead of DNS to find the correct domain controller.

    Figure 7.5: Determining Windows domain membership
  4. To use the SMB source for Linux authentication, activate Also Use SMB Information for Linux Authentication.

  5. To automatically create a local home directory for Active Directory users on the Linux machine, activate Create Home Directory on Login.

  6. Check Offline Authentication to allow your domain users to log in even if the Active Directory server is temporarily unavailable, or if you do not have a network connection.

  7. To change the UID and GID ranges for the Samba users and groups, select Expert Settings. Let DHCP retrieve the WINS server only if you need it. This is the case when certain machines are resolved only by the WINS system.

  8. Configure NTP time synchronization for your Active Directory environment by selecting NTP Configuration and entering an appropriate server name or IP address. This step is obsolete if you have already entered the appropriate settings in the stand-alone YaST NTP configuration module.

  9. Click OK and confirm the domain join when prompted for it.

  10. Provide the password for the Windows administrator on the Active Directory server and click OK (see Figure 7.6, “Providing administrator credentials”).

    Figure 7.6: Providing administrator credentials

After you have joined the Active Directory domain, you can log in to it from your workstation using the display manager of your desktop or the console.

Important
Important: Domain name

Joining a domain may not succeed if the domain name ends with .local. Names ending in .local cause conflicts with Multicast DNS (MDNS) where .local is reserved for link-local host names.

Note
Note: Only administrators can enroll a computer

Only a domain administrator account, such as Administrator, can join SUSE Linux Enterprise Desktop into Active Directory.

7.3.4 Checking Active Directory connection status

To check whether you are successfully enrolled in an Active Directory domain, use the following commands:

  • klist shows whether the current user has a valid Kerberos ticket.

  • getent passwd shows published LDAP data for all users.
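You can also query a single domain account to confirm identity resolution. The domain and user name are examples, and the exact name format depends on your configuration (for example, whether winbind or SSSD is used):

> getent passwd 'EXAMPLE\tux'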

7.4 Logging in to an Active Directory domain

Provided your machine has been configured to authenticate against Active Directory and you have a valid Windows user identity, you can log in to your machine using the Active Directory credentials. Login is supported for GNOME, the console, SSH and any other PAM-aware application.

Important
Important: Offline authentication

SUSE Linux Enterprise Desktop supports offline authentication, allowing you to log in to your client machine even when it is offline. See Section 7.2.3, “Offline service and policy support” for details.

7.4.1 GDM

To authenticate a GNOME client machine against an Active Directory server, proceed as follows:

  1. Click Not listed.

  2. In the text box Username, enter the domain name and the Windows user name in this form: DOMAIN_NAME\USER_NAME.

  3. Enter your Windows password.

If configured to do so, SUSE Linux Enterprise Desktop creates a user home directory on the local machine on the first login of each user authenticated via Active Directory. This allows you to benefit from the Active Directory support of SUSE Linux Enterprise Desktop while still having a fully functional Linux machine at your disposal.

7.4.2 Console login

Besides logging in to the Active Directory client machine using a graphical front-end, you can log in using the text-based console or even remotely using SSH.

To log in to your Active Directory client from a console, enter DOMAIN_NAME\USER_NAME at the login: prompt and provide the password.

To remotely log in to your Active Directory client machine using SSH, proceed as follows:

  1. At the login prompt, enter:

    > ssh DOMAIN_NAME\\USER_NAME@HOST_NAME

    The \ delimiter between domain and login name is escaped with another \ sign.

  2. Provide the user's password.

7.5 Changing passwords

SUSE Linux Enterprise Desktop helps the user choose a suitable new password that meets the corporate security policy. The underlying PAM module retrieves the current password policy settings from the domain controller and informs the user at login about the specific password quality requirements a user account typically has. Like its Windows counterpart, SUSE Linux Enterprise Desktop presents a message describing:

  • Password history settings

  • Minimum password length requirements

  • Minimum password age

  • Password complexity

The password change process cannot succeed unless all requirements have been successfully met. Feedback about the password status is given both through the display managers and the console.

GDM provides feedback about password expiration and the prompt for new passwords in an interactive mode. To change passwords in the display managers, provide the password information when prompted.

You can use the standard Linux utility passwd to change your Windows password, instead of having to manipulate this data on the server. Proceed as follows:

  1. Log in at the console.

  2. Enter passwd.

  3. Enter your current password when prompted.

  4. Enter the new password.

  5. Reenter the new password for confirmation. If your new password does not comply with the policies on the Windows server, this feedback is given to you and you are prompted for another password.
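
A console session for such a password change may look similar to the following sketch; the exact prompts depend on the PAM configuration:

> passwd
Changing password for tux.
Current Password:
New Password:
Reenter New Password:
Password changed.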

To change your Windows password from the GNOME desktop, proceed as follows:

  1. Click the Computer icon on the left edge of the panel.

  2. Select Control Center.

  3. From the Personal section, select About Me › Change Password.

  4. Enter your old password.

  5. Enter and confirm the new password.

  6. Leave the dialog with Close to apply your settings.

7.6 Active Directory certificate auto-enrollment

Certificate auto-enrollment enables network devices, including SUSE Linux Enterprise Desktop devices, to automatically enroll certificates from Active Directory Certificate Services with no user intervention. This is managed by Active Directory's Group Policy, using Samba's samba-gpupdate command.

7.6.1 Configuring certificate auto-enrollment on the server

The Windows server roles Certification Authority, Certificate Enrollment Policy Web Service, Certificate Enrollment Web Service and Network Device Enrollment Service all must be installed and configured on the Active Directory server.

Configure Group Policy auto-enrollment as described in this Microsoft documentation: https://docs.microsoft.com/en-us/windows-server/networking/core-network-guide/cncg/server-certs/configure-server-certificate-autoenrollment#configure-server-certificate-auto-enrollment.

7.6.2 Enable certificate auto-enrollment on the client

Follow the steps in the following procedure to enable certificate auto-enrollment on your clients.

  1. Install the samba-gpupdate package. This automatically installs the certmonger, cepces and sscep dependencies. Samba uses sscep to download the Certificate Authority root chain, then uses certmonger, paired with cepces, to monitor the host certificate templates.

  2. Join to an Active Directory domain (one where the CA has been previously configured as explained in Section 7.6.1, “Configuring certificate auto-enrollment on the server”).

  3. On Winbind-joined machines, enable Group Policy processing by adding the line apply group policies = yes to the [global] section of smb.conf (see the smb.conf example at the end of this section).

  4. For SSSD-joined machines, install oddjob-gpupdate from https://github.com/openSUSE/oddjob-gpupdate.

  5. Then verify that certificate auto-enrollment is correctly configured by running the following command on the client:

    > /usr/sbin/samba-gpupdate --rsop

    If you see output like the following example, it is correctly configured:

    Resultant Set of Policy
    Computer Policy
    GPO: Default Domain Policy
    ==========================================================
    CSE: gp_cert_auto_enroll_ext
    -----------------------------------------------------------
    Policy Type: Auto Enrollment Policy
    -----------------------------------------------------------
    [ <CA NAME> ] =
    [ CA Certificate ] =
    ----BEGIN CERTIFICATE----
    <CERTIFICATE>
    ----END CERTIFICATE----
    [ Auto Enrollment Server ] = <DNS NAME>
  6. Use the following command to display installed certificates:

    > getcert list
     Number of certificates and requests being tracked: 1.
     Request ID 'Machine':
     status: MONITORING
     stuck: no
     key pair storage: type=FILE,location='/var/lib/samba/private/certs/Machine.key'
     certificate: type=FILE,location='/var/lib/samba/certs/Machine.crt'
     CA: <CA NAME>
     issuer: CN=<CA NAME>
     subject: CN=<HOSTNAME>
     expires: 2017-08-15 17:37:02 UTC
     dns: <hostname>
     key usage: digitalSignature,keyEncipherment
     eku: id-kp-clientAuth,id-kp-serverAuth
     certificate template/profile: Machine

Certificates are installed in /var/lib/samba/certs and private keys are installed in /var/lib/samba/private/certs.
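
For reference, a minimal smb.conf fragment with the Group Policy option enabled may look like the following sketch; the workgroup and realm values are placeholders for an existing Winbind join:

[global]
        workgroup = EXAMPLE
        realm = EXAMPLE.COM
        security = ads
        apply group policies = yes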

For more information, see man samba-gpupdate.

8 Setting up a FreeRADIUS server

The RADIUS (Remote Authentication Dial-In User Service) protocol has long been a standard service for managing network access. It provides authentication, authorization and accounting (AAA) for large businesses such as Internet service providers and cellular network providers, and is also popular for small networks. It authenticates users and devices, authorizes those users and devices for certain network services, and tracks use of services for billing and auditing. You do not have to use all three AAA functions, but only the ones you need. For example, you may not need accounting but only client authentication, or perhaps all you want is accounting, with client authorization managed by something else.

RADIUS is efficient and can manage thousands of requests on modest hardware. It works for all kinds of network access, not just dial-up, although the name remains the same.

RADIUS operates in a distributed architecture, sitting separately from the Network Access Server (NAS). User access data is stored on a central RADIUS server that is available to multiple NAS devices. The NAS provides the physical access to the network, for example via a managed Ethernet switch or a wireless access point.

FreeRADIUS is an open source RADIUS implementation and the most widely used RADIUS server. In this chapter you learn how to install and test a FreeRADIUS server. Because of the numerous use cases, after your initial setup is working correctly, your next stop is the official documentation, which is detailed and thorough (see https://freeradius.org/documentation/).

8.1 Installation and testing on SUSE Linux Enterprise Desktop

The following steps set up a simple test system. When you have verified that the server is operating correctly and you are ready to create a production configuration, you have several undo steps to perform before starting your production configuration.

First install the freeradius-server and freeradius-server-utils packages. Then enter /etc/raddb/certs and run the bootstrap script to create a set of test certificates:

# zypper in freeradius-server freeradius-server-utils
# cd /etc/raddb/certs
# ./bootstrap

The README in the certs directory contains a great deal of useful information. When the bootstrap script has completed, start the server in debugging mode:

# radiusd -X
[...]
Listening on auth address * port 1812 bound to server default
Listening on acct address * port 1813 bound to server default
Listening on auth address :: port 1812 bound to server default
Listening on acct address :: port 1813 bound to server default
Listening on auth address 127.0.0.1 port 18120 bound to server inner-tunnel
Listening on proxy address * port 54435
Listening on proxy address :: port 58415
Ready to process requests

When you see the Listening and Ready to process requests lines, your server has started correctly. If it does not start, read the output carefully because it tells you what went wrong. You may direct the output to a text file with tee:

# radiusd -X | tee radiusd.text

The next step is to test authentication with a test client and user. The client is a client of the RADIUS server, such as a wireless access point or switch. Clients are configured in /etc/raddb/clients.conf. Human users are configured in /etc/raddb/mods-config/files/authorize.

Open /etc/raddb/mods-config/files/authorize and uncomment the following lines:

bob   Cleartext-Password := "hello"
      Reply-Message := "Hello, %{User-Name}"

A test client, client localhost, is provided in /etc/raddb/clients.conf, with a secret of testing123. Open a second terminal, and as an unprivileged user use the radtest command to log in as bob:

> radtest bob hello 127.0.0.1 0 testing123
Sent Access-Request Id 241 from 0.0.0.0:35234 to 127.0.0.1:1812 length 73
        User-Name = "bob"
        User-Password = "hello"
        NAS-IP-Address = 127.0.0.1
        NAS-Port = 0
        Message-Authenticator = 0x00
        Cleartext-Password = "hello"
Received Access-Accept Id 241 from 127.0.0.1:1812 to 0.0.0.0:0 length 20

In your radiusd -X terminal, a successful login looks like this:

(3) pap: Login attempt with password
(3) pap: Comparing with "known good" Cleartext-Password
(3) pap: User authenticated successfully
(3)     [pap] = ok
[...]
(3) Sent Access-Accept Id 241 from 127.0.0.1:1812 to 127.0.0.1:35234 length 0
(3) Finished request
Waking up in 4.9 seconds.
(3) Cleaning up request packet ID 241 with timestamp +889

Now run one more login test from a different computer on your network. Create a client configuration on your server by uncommenting and modifying the following entry in clients.conf, using the IP address of your test machine:

client private-network-1 {
  ipaddr          = 192.0.2.0/24
  secret          = testing123-1
}

On the client machine, install freeradius-server-utils. Then try logging in from the client as bob, using the radtest command. Using the IP address of the RADIUS server rather than its host name avoids a DNS lookup:

> radtest bob hello 192.168.2.100 0 testing123-1

If your test logins fail, review all the output to learn what went wrong. There are several test users and test clients provided. The configuration files are full of useful information, and we recommend studying them. When you are satisfied with your testing and ready to create a production configuration, remove all the test certificates in /etc/raddb/certs and replace them with your own certificates, comment out all the test users and clients, and stop radiusd by pressing Ctrl–C. Manage radiusd.service with systemctl, just like any other service.
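
For example, to start the server as a regular service and enable it at boot:

# systemctl enable --now radiusd.service
# systemctl status radiusd.service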

To learn how to fit a FreeRADIUS server in your network, see https://freeradius.org/documentation/ and https://networkradius.com/freeradius-documentation/ for in-depth references and howtos.

Part II Local security

  • 9 Physical security
  • Physical security is important. Linux production servers should be in locked data centers accessible only to people who have passed security checks. Depending on the environment and circumstances, you can also consider boot loader passwords.

  • 10 Software management
  • An important step in securing a Linux system is to determine the primary functions or roles of the Linux server. Otherwise, it can be difficult to understand what needs to be secured and securing these Linux systems can prove ineffective. Therefore, it is critical to look at the default list of soft…

  • 11 File management
  • Servers should have separate file systems for at least /, /boot, /var, /tmp, and /home. This prevents, for example, logging space and temporary space under /var and /tmp from filling up the root partition. Third-party applications should be on separate file systems as well, for example under /opt.

  • 12 Encrypting partitions and files
  • Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.

  • 13 Storage encryption for hosted applications with cryptctl
  • Databases and similar applications are often hosted on external servers that are serviced by third-party staff. Certain data center maintenance tasks require third-party staff to directly access affected systems. In such cases, privacy requirements necessitate disk encryption.

  • 14 User management
  • It is important that all system and vendor accounts that are not used for logins are locked. To get a list of unlocked accounts on your system, you can check for accounts that do not have an encrypted password string starting with ! or * in the /etc/shadow file. If you lock an account using either p…

  • 15 Restricting cron and at
  • This chapter explains how to restrict access to the cron and at daemons to improve the security of a system.

  • 16 Spectre/Meltdown checker
  • spectre-meltdown-checker is a shell script to test if your system is vulnerable to the several speculative execution vulnerabilities that are in nearly all CPUs manufactured in the past 20 years. This is a hardware flaw that potentially allows an attacker to read all data on the system. On cloud computing services, where multiple virtual machines are on a single physical host, an attacker can gain access to all virtual machines. Fixing these vulnerabilities requires redesigning and replacing CPUs. Until this happens, there are several software patches that mitigate these vulnerabilities. If you have kept your SUSE systems updated, all these patches should already be installed.

    spectre-meltdown-checker generates a detailed report. It is impossible to guarantee that your system is secure, but it shows you which mitigations are in place, and potential vulnerabilities.

  • 17 Configuring security settings with YaST
  • The YaST module Security Center provides a central control panel for configuring security-related settings for SUSE Linux Enterprise Desktop. Use it to configure security aspects such as settings for the login procedure and for password creation, for boot permissions, user creation, or for default file permissions. Launch it from the YaST control center with Security and Users › Security Center. The Security Center dialog opens to the Security Overview, with additional configuration dialogs in the left and right panes.

  • 18 The Polkit authentication framework
  • Polkit is an authentication framework used in graphical Linux desktop environments, for fine-grained management of access rights on the system. Traditionally, there is a strong separation of privileges on Linux between the root user as the fully authorized administrator account, and all other accounts and groups on the system. These non-administrator accounts may have certain additional privileges, like accessing sound hardware through an audio group. However, this kind of privilege is fixed and cannot be granted in certain specific situations, or for a certain duration of time.

    Instead of fully switching to the root user (using programs such as sudo) for gaining higher privileges, Polkit grants specific privileges to a user or group on an as-needed basis. This is controlled by configuration files that describe individual actions that need to be authorized in a dynamic context.

  • 19 Access control lists in Linux
  • POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.

  • 20 Intrusion detection with AIDE
  • Securing your systems is a mandatory task for any mission-critical system administrator. Because it is impossible to always guarantee that the system is not compromised, it is important to do extra checks regularly (for example with cron) to ensure that the system is still under your control. This is where AIDE, the Advanced Intrusion Detection Environment, comes into play.

9 Physical security

Physical security is important. Linux production servers should be in locked data centers accessible only to people who have passed security checks. Depending on the environment and circumstances, you can also consider boot loader passwords.

Additionally, consider questions like:

  • Who has direct physical access to the host?

  • Of those that do, should they?

  • Can the host be protected from tampering and should it be?

The amount of physical security needed on a particular system depends on the situation, and can also vary widely depending on available funds.

9.1 System locks

Most server racks in data centers include a locking feature. This is a hasp/cylinder lock on the front of the rack that allows you to turn an included key to a locked or unlocked position—granting or denying entry. Cage locks can help prevent someone from tampering with or stealing devices/media from the servers, or opening the cases and directly manipulating/sabotaging the hardware. Preventing system reboots and booting from alternate devices (for example, CDs, DVDs, or flash disks) is also important.

Some servers also have case locks. These locks can do different things according to the designs of the system vendor and construction. Many systems are designed to self-disable if attempts are made to open the system without unlocking. Others have device covers that do not let you plug in or unplug keyboards or mice. While case locks can be a useful feature, they are often of low quality and easily defeated by determined attackers.

9.2 Locking down the BIOS

Tip
Tip: Secure boot

This section describes basic methods to secure the boot process. To find out about more advanced boot protection using UEFI and the secure boot feature, see Book “Administration Guide”, Chapter 17 “UEFI (Unified Extensible Firmware Interface)”, Section 17.1 “Secure boot”.

The BIOS (Basic Input/Output System) or its successor UEFI (Unified Extensible Firmware Interface) is the lowest level of software/firmware on PC class systems. Other hardware types (POWER, IBM Z) that run Linux have low-level firmware that performs similar functions as the PC BIOS. When this document references the BIOS, it means BIOS and/or UEFI. The BIOS dictates system configuration, puts the system into a well-defined state and provides routines for accessing low-level hardware. The BIOS executes the configured Linux boot loader (like GRUB 2) to boot the host.

Most BIOS implementations can be configured to prevent unauthorized users from manipulating system and boot settings. This is typically done by setting a BIOS administrator or boot password. The administrator password needs to be entered for changing the system configuration but the boot password is required during every normal boot. For most use cases, it is enough to set an administrator password and restrict booting to the built-in hard disk. This way an attacker is not able to simply boot a Linux live CD or flash drive, for example. Although this does not provide a high level of security (a BIOS can be reset, removed or modified—assuming case access), it can be another deterrent.

Many BIOS firmware implementations have other security-related settings. Check with the system vendor, the system documentation, or examine the BIOS during a system boot to find out more.

Important
Important: Booting when a BIOS boot password is set

If a system has been set up with a boot password, the host does not boot up unattended (for example, in case of a system reboot or power failure). This is a trade-off.

Important
Important: Losing the BIOS administrator password

Once a system is set up for the first time, the BIOS administrator password is not required often. Do not forget the password or you may need to clear the BIOS memory via hardware manipulation to get access again.

9.3 Security via the boot loaders

The Linux boot loader GRUB 2, which is used by default in SUSE Linux Enterprise Desktop, can have a boot password set. It also provides a password feature, so that only administrators can start interactive operations (for example, editing menu entries or entering the command-line interface). If a password is specified, GRUB 2 disallows any interactive control until you press the key C or E and enter the correct password.

You can refer to the GRUB 2 man page for examples.
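
As a sketch, you can generate a password hash with grub2-mkpasswd-pbkdf2 and add it to a custom configuration snippet; the file 40_custom and the superuser name root are conventional choices, not requirements:

# grub2-mkpasswd-pbkdf2
Enter password:
Reenter password:
PBKDF2 hash of your password is grub.pbkdf2.sha512.10000.<HASH>

Add the following lines to /etc/grub.d/40_custom, then regenerate the configuration with grub2-mkconfig -o /boot/grub2/grub.cfg:

set superusers="root"
password_pbkdf2 root grub.pbkdf2.sha512.10000.<HASH>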

It is important to keep in mind that when setting these passwords they need to be remembered. Also, enabling these passwords can merely slow an intrusion, not necessarily prevent it. Again, someone could boot from a removable device, and mount your root partition. If you are using BIOS-level security and a boot loader, it is a good practice to disable the ability to boot from removable devices in your computer's BIOS, and then password-protect the BIOS itself.

Also keep in mind that the boot loader configuration files need to be protected by changing their mode to 600 (read/write for root only), or others can read your passwords or hashes.
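
For example, assuming the default SUSE locations:

# chmod 600 /boot/grub2/grub.cfg /etc/grub.d/40_custom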

9.4 Retiring Linux servers with sensitive data

Security policies usually contain procedures for the treatment of storage media that are going to be retired or disposed of. Disk and media wipe procedures are frequently prescribed, as is complete destruction of the media. To retire servers with sensitive data, it is important to ensure that data cannot be recovered from the hard disks. To remove all traces of data, use a wipe utility, such as scrub. Many wipe utilities overwrite the data several times, which assures that even sophisticated methods cannot retrieve any parts of the wiped data. You can find several free tools on the Internet; a search for dod disk wipe utility yields several variants. Some tools can even be operated from a bootable removable device and remove data according to the U.S. Department of Defense (DoD) standards. Many government agencies specify their own standards for data security; some are stronger than others, but may require more time to implement.

Important
Important: Wiping wear leveling devices

Some devices, like SSDs, use wear leveling and do not necessarily write new data in the same physical locations. Such devices provide their own erasing functionality.

9.4.1 scrub: disk overwrite utility

scrub overwrites hard disks, files and other devices with repeating patterns intended to make recovering data from these devices more difficult. It operates in three basic modes: on a character or block device, on a file, or on a specified directory. For more information, see the manual page man 1 scrub.
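
A short sketch of the three modes; /dev/sdX, secret.txt and /tmp/scrubdir are placeholders, and overwriting a device irrevocably destroys its contents:

# scrub /dev/sdX
# scrub -r secret.txt
# scrub -X /tmp/scrubdir

The first command overwrites the whole device with the default (nnsa) pattern, the second overwrites a single file and removes it afterwards, and the third fills the free space below a new directory with scrubbed files.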

Supported scrub methods
nnsa

4-pass NNSA Policy Letter NAP-14.1-C (XVI-8) for sanitizing removable and non-removable hard disks, which requires overwriting all locations with a pseudo-random pattern twice and then with a known pattern: random (x2), 0x00, verify.

dod

4-pass DoD 5220.22-M section 8-306 procedure (d) for sanitizing removable and non-removable rigid disks. This requires overwriting all addressable locations with a character, its complement, a random character and then verifying. Note: scrub performs the random pass first to make verification easier: random, 0x00, 0xff, verify.

bsi

9-pass method recommended by the German Center of Security in Information Technologies (https://www.bsi.bund.de): 0xff, 0xfe, 0xfd, 0xfb, 0xf7, 0xef, 0xdf, 0xbf, 0x7f.

gutmann

The canonical 35-pass sequence described in Peter Gutmann's paper "Secure Deletion of Data from Magnetic and Solid-State Memory".

schneier

7-pass method described by Bruce Schneier in "Applied Cryptography" (1996): 0x00, 0xff, random (x5)

pfitzner7

Roy Pfitzner's 7-random-pass method: random (x7).

pfitzner33

Roy Pfitzner's 33-random-pass method: random (x33).

usarmy

US Army AR380-19 method: 0x00, 0xff, random. (Note: identical to DoD 5220.22-M section 8-306 procedure (e) for sanitizing magnetic core memory).

fillzero

1-pass pattern: 0x00.

fillff

1-pass pattern: 0xff.

random

1-pass pattern: random (x1).

random2

2-pass pattern: random (x2).

old

6-pass pre-version 1.7 scrub method: 0x00, 0xff, 0xaa, 0x00, 0x55, verify.

fastold

5-pass pattern: 0x00, 0xff, 0xaa, 0x55 and verify.

custom=string

1-pass custom pattern. String may contain C-style numerical escapes: \nnn (octal) or \xnn (hex).

9.5 Restricting access to removable media

In certain environments, it is required to restrict access to removable media such as USB storage or optical devices. The tools included with the udisks2 package help with such a configuration.

  1. Create a user group whose users are allowed to mount and eject removable devices, for example mmedia_all:

    > sudo groupadd mmedia_all
  2. Add a specific user tux to the new group:

    > sudo usermod -a -G mmedia_all tux
  3. Create the /etc/polkit-1/rules.d/10-mount.rules file with the following content:

    > cat /etc/polkit-1/rules.d/10-mount.rules
    polkit.addRule(function(action, subject) {
     if (action.id =="org.freedesktop.udisks2.eject-media"
      && subject.isInGroup("mmedia_all")) {
       return polkit.Result.YES;
      }
    });
    
    polkit.addRule(function(action, subject) {
     if (action.id =="org.freedesktop.udisks2.filesystem-mount"
      && subject.isInGroup("mmedia_all")) {
       return polkit.Result.YES;
      }
    });
    Important
    Important: Naming of the rules file

    The name of a rules file must start with a digit, otherwise it is ignored.

    Rules files are processed in alphabetical order. Functions are called in the order they were added until one of the functions returns a value. Therefore, to add an authorization rule that is processed before other rules, put it in a file in /etc/polkit-1/rules.d with a name that sorts before other rules files, for example /etc/polkit-1/rules.d/10-mount.rules. Each function should return a value from polkit.Result.

  4. Restart udisks2:

    # systemctl restart udisks2
  5. Restart polkit:

    # systemctl restart polkit

9.6 System protection with enforced USB device authorization via USBGuard

The USBGuard software framework helps to protect your system with enforced USB device authorization. It implements allowlist and blocklist capabilities based on the device attributes.

USBGuard provides the following features:

  • A command-line interface to interact with a running USBGuard daemon

  • The daemon component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement

  • The rule language for writing USB device authorization policies

  • The C++ API for interacting with the daemon component implemented in a shared library

9.6.1 Installing USBGuard

The USBGuard daemon decides which USB device to authorize based on a set of rules defined in the policy. To install and configure USBGuard, use the following commands:

  1. To install USBGuard:

    > sudo zypper install usbguard

    USBGuard and the required dependencies are installed. If you want to interact with the USBGuard service, you can install usbguard-tools.

  2. To generate a rule set based on currently connected USB devices, switch to root:

    # usbguard generate-policy > /etc/usbguard/rules.conf
    Note
    Note

    You can customize USBGuard by editing the /etc/usbguard/rules.conf file.

  3. To start the USBGuard daemon and enable it automatically at system start, run as root:

    # systemctl enable --now usbguard.service
  4. You can authorize or deauthorize individual devices. How devices that do not match any rule in the policy are treated depends on the value of the ImplicitPolicyTarget option in the usbguard-daemon.conf file.

    # usbguard allow-device 6
    # usbguard block-device 6

    Replace 6 with the device ID reported by usbguard list-devices. You can also use the reject-device option to deauthorize and remove a device from the system.

    Note
    Note

    Use the usbguard --help command to see all the options.

9.6.2 How to use USBGuard

You can configure a security policy to protect your system with enforced USB device authorization by implementing allow and block lists based on the device attributes.
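
For illustration, the following rules use the usbguard-rules.conf rule language; the ID 046d:c31c is a placeholder:

allow id 046d:c31c
block with-interface equals { 08:*:* }

The first rule allows a specific device by its USB vendor and product ID. The second blocks any device that exposes exactly one mass storage interface (interface class 08).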

9.6.2.1 The USBGuard configuration file

The USBGuard daemon loads the usbguard-daemon.conf file after the command-line options are parsed, and uses it to configure the runtime parameters of the daemon. The file is, by default, located at /etc/usbguard/usbguard-daemon.conf. Some options in the file include:

Options
RuleFile=PATH

The USBGuard daemon loads the policy rule set from this file and writes new rules received through the IPC (inter-process communication) interface to it. The default is %sysconfdir%/usbguard/rules.conf.

ImplicitPolicyTarget=TARGET

How to treat devices that do not match any rule in the policy. Possible values are:

  • allow - authorize every present device

  • block - deauthorize every present device

  • reject - logically remove the device node from the system

PresentDevicePolicy=POLICY

How to treat devices that are already connected when the daemon starts. Possible values are:

  • allow - authorize every present device

  • block - deauthorize every present device

  • reject - remove every present device

  • keep - sync the internal state

  • apply-policy - evaluate the rule set for all present devices

IPCAllowedUsers=USERNAME

A space-delimited list of user names that the daemon accepts IPC connections from.

IPCAllowedGroups=GROUPNAME

A space-delimited list of group names that the daemon accepts IPC connections from.

IPCAccessControlFiles=PATH

Path to files that are interpreted by the daemon as IPC access control definition files.

Example 9.1: Configuration
IPCAllowedUsers=root joe
IPCAllowedGroups=wheel

The example allows full IPC access to the users root and joe, and to the members of the group wheel.

9.6.3 More information

To know more about USBGuard, see:

  • The upstream documentation at https://usbguard.github.io/

  • man usbguard

  • man usbguard-rules.conf

  • man usbguard-daemon

  • man usbguard-daemon.conf

10 Software management

10.1 Removing unnecessary software packages (RPMs)

An important step in securing a Linux system is to determine the primary functions or roles of the Linux server. Otherwise, it can be difficult to understand what needs to be secured and securing these Linux systems can prove ineffective. Therefore, it is critical to look at the default list of software packages and remove any unnecessary packages or packages that do not comply with your defined security policies.

Generally, an RPM software package consists of the following:

  • The package's metadata that is written to the RPM database upon installation.

  • The package's files and directories.

  • Scripts that are being executed before and after installation and removal.

Packages generally do not impose any security risk to the system unless they contain:

  1. setuid or setgid bits on any of the installed files

  2. group- or world-writable files or directories

  3. a service that is activated upon installation, or by default

Assuming that none of the three conditions above apply, a package is merely a collection of files. Neither installation nor uninstallation of such packages has any influence on the security value of the system.

Nevertheless, it is useful to restrict the installed packages in your system to a minimum. Doing this results in fewer packages that require updates and simplifies maintenance efforts when security alerts and patches are released. It is a best practice not to install, among others, development packages or desktop software packages (for example, an X Server) on production servers. If you do not need them, you should also not install, for example, the Apache Web server or Samba file sharing server.

Important
Important: Requirements of third-party installers

Many third-party vendors like Oracle and IBM require a desktop environment and development libraries to run installers. To prevent this from having an impact on the security of their production servers, many organizations work around this by creating a silent installation (response file) in a development lab.

Also, other packages like FTP and Telnet daemons should not be installed unless there is a justified business reason for it. ssh, scp or sftp should be used as replacements.

One of the first action items should be to create a Linux image that only contains RPMs needed by the system and applications, and those needed for maintenance and troubleshooting purposes. A good approach is to start with a minimum list of RPMs and then add packages as needed.

Tip
Tip: SLES Minimal VM

The SUSE Linux Enterprise Server download page offers preconfigured and ready-to-run SLES Minimal VM virtual machine images. SLES Minimal VM has a small footprint and can be customized to fit specific needs of a system developer. SLES Minimal VM is designed for use in virtual machines and for virtual software appliance development. The key benefits of SLES Minimal VM are efficiency and simplified management. More information about SLES Minimal VM is available in the dedicated guide. If SLES Minimal VM does not meet your requirements, consider the minimal installation pattern.

To generate a list of all installed packages, use the following command:

# zypper packages -i

To retrieve details about a particular package, run:

# zypper info PACKAGE_NAME

To check for and report potential conflicts and dependencies when deleting a package, run:

# zypper rm -D PACKAGE_NAME

This can be useful, as running the removal command without a test can often yield a lot of complaints and require manual recursive dependency hunting.

Important
Important: Removal of essential system packages

When removing packages, be careful not to remove any essential system packages. This could put your system into a broken state in which it can no longer be booted or repaired. If you are uncertain about this, then it is best to do a complete backup of your system before you start to remove any packages.

For the final removal of one or more packages, use the following zypper command with the added -u switch, which also removes any unused dependencies:

# zypper rm -u PACKAGE_NAME

10.2 Patching Linux systems

Building an infrastructure for patch management is another important part of a proactive and secure Linux production environment.

It is recommended to have a written security policy and procedure to handle Linux security updates and issues. For example, a security policy should detail the time frame for assessment, testing and roll out of patches. Network related security vulnerabilities should get the highest priority and should be addressed immediately within a short time frame. The assessment phase should occur within a testing lab, and initial rollout should occur on development systems first.

A separate security log file should contain details on which Linux security announcements have been received, which patches have been researched and assessed, when patches were applied, and so on.

SUSE releases patches in three categories: security, recommended and optional. There are a few options that can be used to keep systems patched, up to date, and secure. Each system can register and then retrieve updates via the SUSE Update Web site using the included YaST tool—YaST Online Update. SUSE has also created the Repository Mirroring Tool (RMT), an efficient way to maintain a local repository of available/released patches/updates/fixes that systems can then pull from (reducing Internet traffic). SUSE also offers SUSE Manager for the maintenance, patching, reporting and centralized management of Linux systems, not only SUSE, but other distributions as well.

10.2.1 YaST Online Update

On a per-server basis, installation of important updates and improvements is possible using the YaST Online Update tool. Current updates for SUSE Linux Enterprise Desktop are available from the product-specific update catalogs containing patches. Installation of updates and improvements is accomplished using YaST and selecting Online Update in the Software group. All new patches (except the optional ones) that are currently available for your system are marked for installation. Clicking Accept automatically installs these patches.

10.2.2 Automatic Online Update

YaST also offers the possibility to set up an automatic update. Select Software ›  Automatic Online Update. Configure a Daily or a Weekly update. Some patches, such as kernel updates, require user interaction, which would cause the automatic update procedure to stop. Check Skip Interactive Patches for the update procedure to proceed automatically.

In this case, run a manual Online Update from time to time to install patches that require interaction.

When Only Download Patches is checked, the patches are downloaded at the specified time but not installed. They must be installed manually using rpm or zypper.
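
For example, to install all needed patches from the command line:

# zypper patch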

10.2.3 Repository Mirroring Tool—RMT

The Repository Mirroring Tool for SUSE Linux Enterprise goes one step further than the Online Update process by establishing a proxy system with repository and registration targets. This helps customers centrally manage software updates within the firewall on a per-system basis, while maintaining their corporate security policies and regulatory compliance.

The RMT (https://documentation.suse.com/sles/15-SP4/html/SLES-all/book-rmt.html) is integrated with SUSE Customer Center (https://scc.suse.com/) and provides a repository and registration target that is synchronized with it. This can be helpful in tracking entitlements in large deployments. The RMT maintains all the capabilities of SUSE Customer Center, while allowing a more secure centralized deployment. It is included with every SUSE Linux Enterprise subscription and is therefore fully supported.

The RMT provides an alternative to the default configuration, which requires opening the firewall to outbound connections for each device to receive updates. That requirement often violates corporate security policies and can be seen as a threat to regulatory compliance by certain organizations. Through its integration with SUSE Customer Center, the RMT ensures that each device can receive its appropriate updates without the need to open the firewall, and without any redundant bandwidth requirements.

The RMT also enables customers to locally track their SUSE Linux Enterprise devices (that is, servers, desktops or Point of Service terminals) throughout their enterprise. Now they can easily determine how many entitlements are in need of renewal at the end of a billing cycle without having to physically walk through the data center to manually update spreadsheets.

The RMT informs the SUSE Linux Enterprise devices of any available software updates. Each device then obtains the required software updates from the RMT. The introduction of the RMT improves the interaction among SUSE Linux Enterprise devices within the network and simplifies how they receive their system updates. The RMT enables an infrastructure for several hundred SUSE Linux Enterprise devices per instance of each installation (depending on the specific usage profile). This offers more accurate and efficient server tracking.

In a nutshell, the Repository Mirroring Tool for SUSE Linux Enterprise provides customers with:

  • Assurance of firewall and regulatory compliance

  • Reduced bandwidth usage during software updates

  • Full support under active subscription from SUSE

  • Maintenance of existing customer interface with SUSE Customer Center

  • Accurate server entitlement tracking and effective measurement of subscription usage

  • Automated process to easily tally entitlement totals (no more spreadsheets!)

  • Simple installation process that automatically synchronizes server entitlement with SUSE Customer Center

10.2.4 SUSE Manager

SUSE Manager automates Linux server management, allowing you to provision and maintain your servers faster and more accurately. It monitors the health of each Linux server from a single console so you can identify server performance issues before they impact your business. And it lets you comprehensively manage your Linux servers across physical, virtual and cloud environments while improving data center efficiency. SUSE Manager delivers complete lifecycle management for Linux:

  • Asset management

  • Provisioning

  • Package management

  • Patch management

  • Configuration management

  • Redeployment

For more information on SUSE Manager, refer to https://www.suse.com/products/suse-manager/.

11 File management

11.1 Disk partitions

Servers should have separate file systems for at least /, /boot, /var, /tmp, and /home. This prevents, for example, logging space and temporary space under /var and /tmp from filling up the root partition. Third-party applications should be on separate file systems as well, for example under /opt.

Another advantage of separate file systems is the possibility of choosing special mount options that are suitable for certain regions in the file system hierarchy. The mount options are:

  • noexec: prevents execution of files.

  • nodev: prevents character or block special devices from being usable.

  • nosuid: prevents the set-user-ID or set-group-ID bits from being effective.

  • ro: mounts the file system read-only.

Each of these options needs to be carefully considered before applying it to a partition mount. Applications may stop working, or the support status may be violated. When applied correctly, mount options can help against certain types of security attacks or misconfigurations. For example, there should be no need for set-user-ID binaries to be placed in /tmp.

You are advised to review Chapter 27, Common Criteria. It is important to understand the need to separate the partitions that could impact a running system (for example, log files filling up /var/log are a good reason to separate /var from the / partition). Another thing to keep in mind is that you need to use LVM or another volume manager, or at least the extended partition type, to work around the limit of four primary partitions on PC class systems.

Another capability in SUSE Linux Enterprise Desktop is encrypting a partition or even a single directory or file as a container. Refer to Chapter 12, Encrypting partitions and files for details.

11.2 Modifying permissions of certain system files

Many files—especially in the /etc directory—are world-readable, which means that unprivileged users can read their contents. Normally this is not a problem, but to take extra care, you can remove the world-readable or group-readable bits for sensitive files.

SUSE Linux Enterprise Desktop provides the permissions package to easily apply file permissions. The package comes with three pre-defined system profiles:

easy

Profile for systems that require user-friendly graphical user interaction. This is the default profile.

secure

Profile for server systems without fully fledged graphical user interfaces.

paranoid

Profile for maximum security. In addition to the secure profile, it removes all special permissions like setuid/setgid and capability bits.

Warning
Warning: Unusable system for non-privileged users

Except for simple tasks like changing passwords, a system without special permissions may be unusable for non-privileged users.

Do not use the paranoid profile as-is, but as a template for custom permissions. More information can be found in the permissions.paranoid file.

To define custom file permissions, edit /etc/permissions.local or create a drop-in file in the /etc/permissions.d/ directory.

# Additional custom hardening
/etc/security/access.conf       root:root       0400
/etc/sysctl.conf                root:root       0400
/root/                          root:root       0700

The first column specifies the file name. The directory names must end with a slash. The second column specifies the owner and group, and the third column specifies the mode. For more information about the configuration file format, refer to man permissions.

Select the profile in /etc/sysconfig/security. To use the easy profile and custom permissions from /etc/permissions.local, set:

PERMISSION_SECURITY="easy local"

To apply the setting, run chkstat --system --set.

The permissions are also applied during package updates via zypper. You could also call chkstat regularly via cron or a systemd timer.
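
A minimal sketch of such a systemd timer; the unit names chkstat.service and chkstat.timer are hypothetical:

# /etc/systemd/system/chkstat.service
[Unit]
Description=Re-apply the configured permissions profile

[Service]
Type=oneshot
ExecStart=/usr/bin/chkstat --system --set

# /etc/systemd/system/chkstat.timer
[Unit]
Description=Run chkstat daily

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Enable the timer with systemctl enable --now chkstat.timer.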

Important
Important: Custom file permissions

While the system profiles are well tested, custom permissions can break standard applications. SUSE cannot provide support for such scenarios.

Always test custom file permissions before applying them with chkstat to make sure everything works as desired.

11.3 Changing home directory permissions from 755 to 700

By default, home directories of users are accessible (read, execute) by all users on the system. As this is a potential information leak, home directories should be accessible only by their owners.

The following commands set the permissions to 700 (directory accessible only by the owner) for all existing home directories in /home:

> sudo chmod 755 /home
> sudo bash -c 'for dir in /home/*; do \
echo "Changing permissions of directory $dir"; chmod 700 "$dir"; done'

To ensure newly created home directories are created with secure permissions, edit /etc/login.defs and set HOME_MODE to 700.

# HOME_MODE is used by useradd(8) and newusers(8) to set the mode for new
# home directories.
# If HOME_MODE is not set, the value of UMASK is used to create the mode.
HOME_MODE      0700

If you do not set HOME_MODE, permissions are calculated from the default umask. HOME_MODE specifies the permissions used, not a mask used to remove access like umask. For more information about umask, refer to Section 11.4, “Default umask”.

You can verify the configuration change by creating a new user with useradd -m testuser. Check the permissions of the directories with ls -l /home. Afterwards, remove the user created for this test.
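
For example (the exact ls output depends on your system):

> sudo useradd -m testuser
> ls -ld /home/testuser
drwx------ 2 testuser users 4096 Dec  5 10:00 /home/testuser
> sudo userdel -r testuser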

Important
Important: Test permission changes

Users are no longer allowed to access other users' home directories. This may be unexpected for users and software.

Test this change before using it in production and notify users affected by the change.

11.4 Default umask

The umask (user file-creation mode mask) command is a shell built-in that determines the default permissions for newly created files and directories. Programs can override the umask via system calls, but many programs and utilities respect it.

By default, the umask is set to 022. The umask bits are removed from the maximum access mode, which is 777 for new directories and 666 for new regular files.

To determine the active umask, use the umask command:

> umask
022

With the default umask, you see the behavior most users expect to see on a Linux system.

> touch a
> mkdir b
> ls -on
total 16
-rw-r--r--. 1 17086    0 Nov 29 15:05 a
drwxr-xr-x. 2 17086 4096 Nov 29 15:05 b

You can specify arbitrary umask values, depending on your needs.

> umask 111
> touch c
> mkdir d
> ls -on
total 16
-rw-rw-rw-. 1 17086    0 Nov 29 15:05 c
drw-rw-rw-. 2 17086 4096 Nov 29 15:05 d

Based on your threat model, you can use a stricter umask such as 037 to prevent accidental data leakage.

> umask 037
> touch e
> mkdir f
> ls -on
total 16
-rw-r-----. 1 17086    0 Nov 29 15:06 e
drwxr-----. 2 17086 4096 Nov 29 15:06 f
Tip
Tip: Maximum security

For maximum security, use a umask of 077. This forces newly created files and directories to be created with no permissions for the group and other users.

This can be unexpected for users and software and may cause additional load for your support team.

11.4.1 Adjusting the default umask

You can modify the umask globally for all users by changing the UMASK value in /etc/login.defs.

# Default initial "umask" value used by login(1) on non-PAM enabled systems.
# Default "umask" value for pam_umask(8) on PAM enabled systems.
# UMASK is also used by useradd(8) and newusers(8) to set the mode for new
# home directories.
# 022 is the default value, but 027, or even 077, could be considered
# for increased privacy. There is no One True Answer here: each sysadmin
# must make up their mind.
UMASK           022

For individual users, add the umask to the GECOS field in /etc/passwd like this:

tux:x:1000:100:Tux Linux,UMASK=022:/home/tux:/bin/bash

You can do the same with the YaST users module by adding UMASK=022 to a user's Details › Additional User Information.

The settings made in /etc/login.defs and /etc/passwd are applied by the PAM module pam_umask.so. For additional configuration options, refer to man pam_umask.

For the changes to take effect, users need to log out and back in again. Afterwards, use the umask command to verify the umask is set correctly.

11.5 SUID/SGID files

When the SUID (set user ID) or SGID (set group ID) bits are set on an executable, it executes with the UID or GID of the owner of the executable rather than that of the person executing it. This means that, for example, all executables that have the SUID bit set and are owned by root are executed with the UID of root. A good example is the passwd command that allows ordinary users to update the password field in the /etc/shadow file, which is owned by root.
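
The SUID bit is shown as an s in the owner permissions of such a binary (output shortened):

> ls -l /usr/bin/passwd
-rwsr-xr-x 1 root root [...] /usr/bin/passwd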

SUID/SGID bits can be misused when the executable has a security hole. Therefore, you should search the entire system for SUID/SGID executables and document them. To search the entire system for SUID or SGID files, you can run the following command:

# find /bin /boot /etc /home /lib /lib64 /opt /root /sbin \
  /srv /tmp /usr /var -type f -perm '/6000' -ls

You may need to extend the list of directories that are searched if you have a different file system structure.

SUSE sets the SUID/SGID bit on a binary only if it is really necessary. Ensure that code developers do not set SUID/SGID bits on their programs unless it is an absolute requirement. You can often use workarounds like removing the executable bit for world/others. However, a better approach is to change the design of the software or use capabilities.

SUSE Linux Enterprise Desktop supports file capabilities to allow more fine-grained privileges to be given to programs rather than the full power of root:

# getcap -v /usr/bin/ping
/usr/bin/ping = cap_net_raw+eip

The output shows that ping carries the CAP_NET_RAW capability, which is granted to whoever executes it. In case of vulnerabilities inside ping, an attacker can gain, at most, this capability, in contrast to full root privileges. Whenever possible, file capabilities should be chosen in favor of the SUID bit. But this only applies when the binary is SUID to root, not to other users such as news, lp and similar.
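
For example, a hypothetical locally built tool that needs raw sockets can be granted just this capability instead of the SUID bit:

# setcap cap_net_raw+ep /usr/local/bin/mytool
# getcap -v /usr/local/bin/mytool
/usr/local/bin/mytool = cap_net_raw+ep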

11.6 World-writable files

World-writable files are a security risk since they can be modified by any user on the system. Additionally, world-writable directories allow anyone to add or delete files. To locate world-writable files and directories, you can use the following command:

# find /bin /boot /etc /home /lib /lib64 /opt /root /sbin \
  /srv /tmp /usr /var -type f -perm -2 ! -type l -ls

You may need to extend the list of directories that are searched if you have a different file system structure.

The ! -type l parameter skips all symbolic links since symbolic links are always world-writable. However, this is not a problem if the target of the link is not world-writable, which is checked by the above find command.

In world-writable directories with the sticky bit set, such as /tmp, no one except the owner of a file and root can delete or rename it. The sticky bit binds files to the user who created them and prevents other users from deleting or renaming them. Therefore, depending on the purpose of the directory, world-writable directories with the sticky bit are not an issue. An example is the /tmp directory:

> ls -ld /tmp
drwxrwxrwt 18 root root 16384 Dec 23 22:20 /tmp

The t mode bit in the output denotes the sticky bit.
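
If you create a shared world-writable directory yourself, set the sticky bit explicitly; /srv/shared is a placeholder path:

# mkdir /srv/shared
# chmod 1777 /srv/shared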

11.7 Orphaned or unowned files

Files not owned by any user or group are not necessarily a security problem in themselves. However, unowned files could pose a security problem in the future. For example, if a new user is created and the new user happens to get the same UID as the unowned files have, then this new user automatically becomes the owner of these files.

To locate files not owned by any user or group, use the following command:

# find /bin /boot /etc /home /lib /lib64 /opt /root /sbin /srv /tmp /usr /var -nouser -o -nogroup

You may need to extend the list of directories that are searched if you have a different file system structure.

A different problem is files that were not installed via the packaging system and therefore do not receive updates. You can check for such files with the following command:

> find /bin /lib /lib64 /usr -path /usr/local -prune -o -type f -a -exec /bin/sh -c "rpm -qf {} &> /dev/null || echo {}" \;

Run this command as an untrusted user (for example, nobody) since crafted file names may lead to command execution. This should not be a problem since these directories should be writable only by root, but it is still a good security precaution.

This shows you all files under /bin, /lib, /lib64 and /usr (with the exception of files in /usr/local) that are not tracked by the package manager. These files may not represent a security issue, but you should be aware of what is not tracked and take the necessary precautions to keep these files up to date.

12 Encrypting partitions and files

Encrypting files, partitions, and entire disks prevents unauthorized access to your data and protects your confidential files and documents.

You can choose between the following encryption options:

Encrypting a hard disk partition

It is possible to create an encrypted partition with YaST during installation or in an already installed system. For further info, see Section 12.1.1, “Creating an encrypted partition during installation” and Section 12.1.2, “Creating an encrypted partition on a running system”. This option can also be used for removable media, such as external hard disks, as described in Section 12.1.3, “Encrypting the content of removable media”.

Encrypting single files with GPG

To quickly encrypt one or more files, you can use the GPG tool. See Section 12.2, “Encrypting files with GPG” for more information.

Encrypting single files with Rage

You can use the Rage encryption tool to encrypt one or more files. See Section 12.3, “Encrypting files with Rage” for more information.

Warning
Warning: Encryption offers limited protection

Encryption methods described in this chapter cannot protect your running system from being compromised. After the encrypted volume is successfully mounted, everybody with appropriate permissions can access it. However, encrypted media are useful in case of loss or theft of your computer, or to prevent unauthorized individuals from reading your confidential data.

12.1 Setting up an encrypted file system with YaST

Use YaST to encrypt partitions or parts of your file system during installation or in an already installed system. However, encrypting a partition in an already-installed system is more difficult, because you need to resize and change existing partitions. In such cases, it may be more convenient to create an encrypted file of a defined size, in which to store other files or parts of your file system. To encrypt an entire partition, dedicate a partition for encryption in the partition layout. The standard partitioning proposal, as suggested by YaST, does not include an encrypted partition by default. Add an encrypted partition manually in the partitioning dialog.

12.1.1 Creating an encrypted partition during installation

Warning
Warning: Password input

Make sure to memorize the password for your encrypted partitions well. Without that password, you cannot access or restore the encrypted data.

The YaST expert dialog for partitioning offers the options needed for creating an encrypted partition. To create a new encrypted partition proceed as follows:

  1. Run the YaST Expert Partitioner with System › Partitioner.

  2. Select a hard disk, click Add, and select a primary or an extended partition.

  3. Select the partition size or the region to use on the disk.

  4. Select the file system, and mount point of this partition.

  5. Activate the Encrypt device check box.

    Note
    Note: Additional software required

    After checking Encrypt device, a pop-up window asking for installing additional software may appear. Confirm to install all the required packages to ensure that the encrypted partition works well.

  6. If the encrypted file system should be mounted only when needed, enable Do not mount partition in the Fstab Options. Otherwise, enable Mount partition and enter the mount point.

  7. Click Next and enter a password which is used to encrypt this partition. This password is not displayed. To prevent typing errors, you need to enter the password twice.

  8. Complete the process by clicking Finish. The newly encrypted partition is now created.

During the boot process, the operating system asks for the password before mounting any encrypted partition which is set to be auto-mounted in /etc/fstab. Such a partition is then available to all users when it has been mounted.

To skip mounting the encrypted partition during start-up, press Enter when prompted for the password. Then decline the offer to enter the password again. In this case, the encrypted file system is not mounted and the operating system continues booting, blocking access to your data.

To mount an encrypted partition which is not mounted during the boot process, open a file manager and click the partition entry in the pane listing common places on your file system. You are prompted for a password and the partition is mounted.

When you are installing your system on a machine where partitions already exist, you can also decide to encrypt an existing partition during installation. In this case follow the description in Section 12.1.2, “Creating an encrypted partition on a running system” and be aware that this action destroys all data on the existing partition.

12.1.2 Creating an encrypted partition on a running system

Warning
Warning: Activating encryption on a running system

It is also possible to create encrypted partitions on a running system. However, encrypting an existing partition destroys all data on it, and requires re-sizing and restructuring of existing partitions.

On a running system, select System › Partitioner in the YaST control center. Click Yes to proceed. In the Expert Partitioner, select the partition to encrypt and click Edit. The rest of the procedure is the same as described in Section 12.1.1, “Creating an encrypted partition during installation”.

12.1.3 Encrypting the content of removable media

YaST treats removable media (like external hard disks or flash disks) the same as any other storage device. Virtual disks or partitions on external media can be encrypted as described above. However, you should disable mounting at boot time, because removable media are only connected while the system is already running.

If you encrypted your removable device with YaST, the GNOME desktop automatically recognizes the encrypted partition and prompts for the password when the device is detected. If you plug in a FAT-formatted removable device when running GNOME, the desktop user entering the password automatically becomes the owner of the device. For devices with a file system other than FAT, change the ownership explicitly for users other than root to give them read-write access to the device.

12.2 Encrypting files with GPG

GNU Privacy Guard (GPG) encryption software can be used to encrypt individual files and documents.

To encrypt a file with GPG, you need to generate a key pair first. To do this, run the gpg --gen-key command and follow the on-screen instructions. When generating the key pair, GPG creates a user ID (UID) to identify the key, based on your real name, comments, and email address. You need this UID (or a part of it, such as your first name or email address) to specify the key you want to use to encrypt a file. To find the UID of an existing key, use the gpg --list-keys command. To encrypt a file, use the following command:

> gpg -e -a --cipher-algo AES256 -r UID FILE

Replace UID with part of the UID (for example, your first name) and FILE with the file you want to encrypt. For example:

> gpg -e -a --cipher-algo AES256 -r Tux secret.txt

This command creates an encrypted version of the specified file recognizable by the .asc file extension (in this example, it is secret.txt.asc).

The -a option formats the file as ASCII text, making the contents easy to copy. Omit -a to create a binary file, which in the above example would be secret.txt.gpg.

To decrypt an encrypted file, use the following command:

> gpg -d -o DECRYPTED_FILE ENCRYPTED_FILE

Replace DECRYPTED_FILE with the desired name for the decrypted file and ENCRYPTED_FILE with the encrypted file you want to decrypt.
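
For example, to decrypt the file created in the example above back to its original name:

> gpg -d -o secret.txt secret.txt.asc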

Keep in mind that the encrypted file can be decrypted using the same key that was used for encryption. To share an encrypted file with another person, you have to use that person's public key to encrypt the file.
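
For example, assuming the recipient has sent you their public key as a file (here the hypothetical wilber.asc), you could import it and then encrypt the file to that recipient:

> gpg --import wilber.asc
> gpg -e -a -r wilber@example.com secret.txt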

12.3 Encrypting files with Rage

Rage is a file encryption tool with secure defaults that prevent accidental misuse or leaks of sensitive data, and its keys are easy to exchange with other people. We recommend Rage for encrypting files.

You can install Rage with:

> sudo zypper install rage-encryption

To encrypt a file with Rage, the recipient must first generate a key pair:

> rage-keygen -o ~/rage.key 2> ~/rage.pub

Two files are created: rage.key (the private key) and rage.pub (the public key).

rage.pub example
> cat ~/rage.pub
    Public key: age17e4g67cs07jk3lmylyq6gduv26uf7tz7nm9jrsaxn8xxx9uc9amsdg4a5e
rage.key example
> cat ~/rage.key
    # created: 2023-05-30T16:29:20+05:30
    # public key: age17e4g67cs07jk3lmylyq6gduv26uf7tz7nm9jrsaxn8xxx9uc9amsdg4a5e
Important
Important

rage.key is a private key and must be kept confidential.

Encrypt

To encrypt a file, you need the generated public key:

> rage -e -r PUBLIC_KEY -o ENCRYPTED_FILE FILE

For example:

> rage -e -r age17e4g67cs07jk3lmylyq6gduv26uf7tz7nm9jrsaxn8xxx9uc9amsdg4a5e -o test.txt.age test.txt
Decrypt

The encrypted file can be decrypted by the recipient who has the corresponding private key. To share an encrypted file with another person, you have to use that person's public key to encrypt the file.

> rage -d -i ~/rage.key -o DECRYPTED_FILE ENCRYPTED_FILE

For example:

> rage -d -i ~/rage.key -o test.txt.decrypted test.txt.age
Passphrases

You can encrypt files with a passphrase using the -p or --passphrase argument. Rage prompts for a passphrase; if you leave the prompt empty, Rage automatically generates a secure passphrase for you.

> rage -e -p -o ENCRYPTED_FILE FILE

For example:

> rage -e -p -o test.txt.age test.txt
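
To decrypt a passphrase-encrypted file, omit the identity file; Rage then prompts for the passphrase. For example:

> rage -d -o test.txt.decrypted test.txt.age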
SSH

You can encrypt files with SSH (Secure Shell) keys instead of Rage keys. Rage supports ssh-rsa and ssh-ed25519 public keys, and decrypting with the respective private key file. ssh-agent and ssh-sk (FIDO) keys are not supported.

First, generate an SSH key pair if you do not already have one:

> ssh-keygen -t ed25519

To encrypt:

> rage -e -a -R PUBLIC_KEY_FILE -o ENCRYPTED_FILE FILE

For example:

> rage -e -a -R id_ed25519.pub -o test.txt.age test.txt

To decrypt:

> rage -d -i SSH_PRIVATE_KEY_FILE -o DECRYPTED_FILE ENCRYPTED_FILE

For example:

> rage -d -i id_ed25519 -o test.txt.decrypted test.txt.age
Important
Important

You must specify the full paths to the key file and to the input and output files.

Multiple identities

Rage can encrypt to multiple identities at the same time. Any of the recipient's private keys can be used to decrypt the file.

> rage -e -a -R FIRST_SSH_PUBLIC_KEY_FILE -r FIRST_RAGE_PUBLIC_KEY ... -o ENCRYPTED_FILE FILE

For example:

> rage -e -a -R id_ed25519.pub -r age1h8equ4vs5pyp8ykw0z8m9n8m3psy6swme52ztth0v66frgu65ussm8gq0t -r age1y2lc7x59jcqvrpf3ppmnj3f93ytaegfkdnl5vrdyv83l8ekcae4sexgwkg -o test.txt.age test.txt
Tip
Tip

You can use the -h or --help argument to list all the Rage command arguments.

12.3.1 Additional resources

For more information about Rage, see the upstream project page at https://github.com/str4d/rage.

13 Storage encryption for hosted applications with cryptctl

Databases and similar applications are often hosted on external servers that are serviced by third-party staff. Certain data center maintenance tasks require third-party staff to directly access affected systems. In such cases, privacy requirements necessitate disk encryption.

cryptctl allows encrypting sensitive directories using LUKS and offers the following additional features:

  • Encryption keys are located on a central server, which can be located on customer premises.

  • Encrypted partitions are automatically remounted after an unplanned reboot.

cryptctl consists of two components:

  • A client is a machine that has one or more encrypted partitions but does not permanently store the necessary key to decrypt those partitions. For example, clients can be cloud or otherwise hosted machines.

  • The server holds encryption keys that can be requested by clients to unlock encrypted partitions.

    You can also set up the cryptctl server to store encryption keys on a KMIP 1.3-compatible (Key Management Interoperability Protocol) server. In that case, the cryptctl server does not store the encryption keys of clients itself but depends on the KMIP-compatible server to provide them.

Warning
Warning: cryptctl server maintenance

Since the cryptctl server manages timeouts for the encrypted disks and, depending on the configuration, can also hold encryption keys, it should be under your direct control and managed by trusted personnel.

Additionally, it should be backed up regularly. Losing the server's data means losing access to encrypted partitions on the clients.

To handle encryption, cryptctl uses LUKS with aes-xts-256 encryption and 512-bit keys. Encryption keys are transferred using TLS with certificate verification.

The client asks the server for the disk decryption key, the server responds
Figure 13.1: Key retrieval with cryptctl (model without connection to KMIP server)
Note
Note: Install cryptctl

Before continuing, make sure the package cryptctl is installed on all machines you intend to set up as servers or clients.

13.1 Setting up a cryptctl server

Before you can define a machine as a cryptctl client, you need to set up a machine as a cryptctl server.

Before beginning, choose whether to use a self-signed certificate to secure communication between the server and clients. If not, generate a TLS certificate for the server and have it signed by a certificate authority.

Additionally, you can have clients authenticate to the server using certificates signed by a certificate authority. To use this extra security measure, make sure to have a CA certificate at hand before starting this procedure.

  1. As root, run:

    # cryptctl init-server
  2. Answer each of the following prompts and press Enter after every answer. If there is a default answer, it is shown in square brackets at the end of the prompt.

    1. Create a strong password, and protect it well. This password unlocks all partitions that are registered on the server.

    2. Specify the path to a PEM-encoded TLS certificate or certificate chain file or leave the field empty to create a self-signed certificate. If you specify a path, use an absolute path.

    3. If you want the server to be identified by a host name other than the default shown, specify a host name. cryptctl generates certificates which include the host name.

    4. Specify the IP address that belongs to the network interface that you want to listen on for decryption requests from the clients, then set a port number (the default is port 3737).

      The default IP address setting, 0.0.0.0, means that cryptctl listens on all network interfaces for client requests using IPv4.

    5. Specify a directory on the server that holds the decryption keys for clients.

    6. Specify whether clients need to authenticate to the server using a TLS certificate. If you choose No, this means that clients authenticate using disk UUIDs. (However, communication is encrypted using the server certificate in any case.)

      If you choose Yes, pick a PEM-encoded certificate authority to use for signing client certificates.

    7. Specify whether to use a KMIP 1.3-compatible server (or multiple such servers) to store encryption keys of clients. If you choose this option, provide the host names and ports for one or multiple KMIP-compatible servers.

      Additionally, provide a user name, password, a CA certificate for the KMIP server, and a client identity certificate for the cryptctl server.

      Important
      Important: No easy reconfiguration of KMIP setting

      The setting to use a KMIP server cannot easily be changed later. To change this setting, both the cryptctl server and its clients need to be configured afresh.

    8. Finally, configure an SMTP server for e-mail notifications for encryption and decryption requests or leave the prompt empty to skip setting up e-mail notifications.

      Note
      Note: Password-protected servers

      cryptctl currently cannot send e-mail using authentication-protected SMTP servers. If that is necessary, set up a local SMTP proxy.

    9. When asked whether to start the cryptctl server, enter y.

  3. To check the status of the service cryptctl-server, use:

    # systemctl status cryptctl-server

To reconfigure the server later, do either of the following:

  • Run the command cryptctl init-server again. cryptctl proposes the existing settings as the defaults, so that you only need to specify the values that you want to change.

  • Make changes directly in the configuration file /etc/sysconfig/cryptctl-server.

    However, to avoid issues, do not change the settings AUTH_PASSWORD_HASH and AUTH_PASSWORD_SALT manually. The values of these options need to be calculated correctly.

13.2 Setting up a cryptctl client

The following interactive setup of cryptctl is currently the only setup method.

Make sure the following preconditions are fulfilled:

  • A cryptctl server is available over the network.

  • There is a directory to encrypt.

  • The client machine has an empty partition available that is large enough to fit the directory to encrypt.

  • When using a self-signed certificate, the certificate (*.crt file) generated on the server is available locally on the client. Otherwise, the certificate authority of the server certificate must be trusted by the client.

  • If you set up the server to require clients to authenticate using a client certificate, prepare a TLS certificate for the client which is signed by the CA certificate you chose for the server.

  1. As root, run:

    # cryptctl encrypt
  2. Answer each of the following prompts and press Enter after every answer. If there is a default answer, it is shown in square brackets at the end of the prompt.

    1. Specify the host name and port to connect to on the cryptctl server.

    2. If you configured the server to have clients authenticate to it using a TLS certificate, specify a certificate and a key file for the client. The client certificate must be signed by the certificate authority chosen when setting up the server.

    3. Specify the absolute path to the server certificate (the *.crt file).

    4. Enter the encryption password that you specified when setting up the server.

    5. Specify the path to the directory to encrypt. Then specify the path to the empty partition that will contain the encrypted content of the directory.

    6. Specify the number of machines that are allowed to decrypt the partition simultaneously.

      Then specify the timeout in seconds before additional machines are allowed to decrypt the partition after the last vital sign was received from the client or clients.

      When a machine unexpectedly stops working and then reboots, it needs to be able to unlock its partitions again. That means this timeout should be set to a time slightly shorter than the reboot time of the client.

      Important
      Important: Timeout length

      If the time is set too long, the machine cannot decrypt encrypted partitions on the first try. cryptctl then continues to periodically check whether the encryption key has become available. However, this introduces a delay.

      If the timeout is set too short, machines with a copy of the encrypted partition have an increased chance of unlocking the partition first.

  3. To start encryption, enter yes.

    cryptctl encrypts the specified directory to the previously empty partition and then mounts the newly encrypted partition. The file system type is the same type as the original unencrypted file system.

    Before creating the encrypted partition, cryptctl moves the unencrypted content of the original directory to a location prefixed with cryptctl-moved-.

  4. To check that the directory is indeed mounted correctly, use:

    > lsblk -o NAME,MOUNTPOINT,UUID
    NAME                        MOUNTPOINT          UUID
    [...]
    sdc
    └─sdc1                                          PARTITION_UUID
      └─cryptctl-unlocked-sdc1  /secret-partition   UNLOCKED_UUID

    cryptctl identifies the encrypted partition by its UUID. For the previous example, that is the UUID displayed next to sdc1.

    On the server, you can check whether the directory was decrypted using cryptctl.

    # cryptctl list-keys

    For a successfully decrypted partition, you see output like:

    2019/06/06 15:50:00 ReloadDB: successfully loaded database of 1 records
    Total: 1 records (date and time are in zone EDT)
    Used By     When                 UUID  Max.Users  Num.Users  Mount Point
    IP_ADDRESS  2019-06-06 15:00:50  UUID  1          1          /secret-partition

    For a partition not decrypted successfully, you see output like:

    2019/06/06 15:50:00 ReloadDB: successfully loaded database of 1 records
    Total: 1 records (date and time are in zone EDT)
    Used By      When                 UUID  Max.Users  Num.Users  Mount Point
                 2019-06-06 15:00:50  UUID  1          1          /secret-partition

    See the difference in the empty Used By column.

    Verify that the UUID shown is that of the previously encrypted partition.

  5. After verifying that the encrypted partition works, delete the unencrypted content from the client. For example, use rm. For more safety, overwrite the content of the files before deleting them, for example, using shred -u.

    Important
    Important: shred does not guarantee that data is erased

    Depending on the type of storage media, using shred is not a guarantee that all data is removed. In particular, SSDs employ wear leveling strategies that render shred ineffective.

The configuration for the connection from client to server is stored in /etc/sysconfig/cryptctl-client and can be edited manually.

The server stores an encryption key for the client partition in /var/lib/cryptctl/keydb/PARTITION_UUID.

13.3 Configuring /etc/fstab for LUKS volumes

When configuring the mount point for a new file system encrypted with LUKS, YaST uses, by default, the name of the encrypted device in /etc/fstab. (For example, /dev/mapper/cr_sda1.) Using the device name, rather than the UUID or volume label, results in a more robust operation of systemd generators and other related tools.

You have the option to adjust that default behavior for each device, either with the Expert Partitioner in the installer, or via AutoYaST.

This change does not affect upgrades or any other scenario in which the mount points are already defined in /etc/fstab. Only newly created mount points are affected, for example, during the installation of a new system or when creating new partitions on a running system.

13.4 Checking partition unlock status using server-side commands

When a cryptctl client is active, it sends a heartbeat to the cryptctl server every 10 seconds. If the server does not receive a heartbeat from the client for the length of the timeout configured during the client setup, the server assumes that the client is offline. It then allows another client to connect (or allows the same client to reconnect after a reboot).

To see the usage status of all keys, use:

# cryptctl list-keys

The information under Num. Users shows whether the key is currently in use. To see more detail on a single key, use:

# cryptctl show-key UUID

This command shows information about the mount point, mount options, usage options, the last retrieval of the key, and the last three heartbeats from clients.

Additionally, you can use journalctl to find logs of when keys were retrieved.
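
For example, to display the log messages of the cryptctl-server service:

# journalctl -u cryptctl-server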

13.5 Unlocking encrypted partitions manually

There are two ways of unlocking a partition manually, both of which are run on a client:

  • Online unlocking.  Online unlocking allows circumventing timeout or user limitations. This method can be used when there is a network connection between client and server but the client could not (yet) unlock the partition automatically. This method unlocks all encrypted partitions on a machine.

    To use it, run cryptctl online-unlock. Be prepared to enter the password specified when setting up the server.

  • Offline unlocking.  This method can be used when a client cannot or must not be brought online to communicate with its server. The encryption key from the server must still be available. This method is meant as a last resort and can only unlock a single partition at a time.

    To use it, run cryptctl offline-unlock. The server's key file for the requisite partition (/var/lib/cryptctl/keydb/PARTITION_UUID) needs to be available on the client.

13.6 Maintenance downtime procedure

To ensure that partitions cannot be decrypted during a maintenance downtime, turn off the client and disable the cryptctl server. You can do so by either:

  • Stopping the service cryptctl-server:

    # systemctl stop cryptctl-server
  • Unplugging the cryptctl server from the network.

13.7 Setting up an HA environment for cryptctl-server service

To avoid downtime when the cryptctl-server needs to be stopped for maintenance or suffers damage, it is strongly recommended to set up the cryptctl-server in an HA environment. You need at least a two-node High Availability cluster for this. The following setup shows how to create a two-node HA cluster for cryptctl-server using self-signed certificates.

Make sure the following preconditions are fulfilled:

  • At least two servers which have SUSE Linux Enterprise Server and the High Availability extension installed are available. All servers must also have the cryptctl package installed. All servers can reach each other via SSH.

  • If you set up a new cluster, you need an additional IP address for the HA Web Console of the cluster (AdminIP).

  • A separate IP address (CrypServerIP) is reserved for the cryptctl-server.

  • A separate DNS name (CrypServerHostName) is reserved for the cryptctl-server and resolves to the above IP address.

  • An HA-enabled block device or NFS share is available to store the keys.

    In our example, we use an NFS share: nfs-server.example.org/data/cryptctl-keys. It is mounted to the standard location /var/lib/cryptctl/keydb.

  • It is strongly recommended to use an SBD device.

Procedure 13.1: Setting up a cryptctl two-node HA cluster
  1. Log in to Node1 as root.

  2. Set up a cryptctl-server as described in Section 13.1, “Setting up a cryptctl server”. Use the following parameters:

    1. To create the certificate, use the dedicated host name CrypServerHostName of the cryptctl server. Do not use the host name of the machine itself.

    2. Use the dedicated IP address CrypServerIP of the cryptctl server. Do not use the default IP address setting.

    3. Do not configure a KMIP server.

    4. When asked whether to start the cryptctl server, enter n.

  3. Set up a two-node HA cluster:

    1. Important
      Important

      Node1 must be the server where you have configured the cryptctl server.

      On the machine where you have configured the cryptctl server, set up the first node as follows:

      # crm cluster init -i NetDev -A AdminIP -n ClusterName
    2. Log in via SSH to Node2 and join the cluster from there:

      # ssh Node2
      # crm cluster join -y Node1
    3. For more information, also see the Installation and Setup Quick Start.

  4. Set up the resource group for the cryptctl server:

    1. You can set up all needed resource agents and copy all files to all nodes with the cryptctl crm shell script in one step. We strongly recommend that you verify the setup first:

      # crm script verify cryptctl \
      cert-path=/etc/cryptctl/servertls/CertificateFileName \
      cert-key-path=/etc/cryptctl/servertls/CertificateKeyFileName \
      virtual-ip:ip=CrypServerIP \
      filesystem:device=DevicePath \
      filesystem:fstype=FileSystemType
    2. If the check was successful, set up the cluster group by running the script as follows:

      # crm script run cryptctl \
      cert-path=/etc/cryptctl/servertls/CertificateFileName \
      cert-key-path=/etc/cryptctl/servertls/CertificateKeyFileName \
      virtual-ip:ip=CrypServerIP \
      filesystem:device=DevicePath \
      filesystem:fstype=FileSystemType
Table 13.1: Parameters for defining the resource group with the cryptctl crm script

id (mandatory: no, default: cryptctl)
  Name of the resource group.

cert-path (mandatory: yes)
  The full path to the created certificate.

cert-key-path (mandatory: yes)
  The full path to the created certificate key.

virtual-ip:id (mandatory: no, default: cryptctl-vip)
  The ID of the virtual IP resource for the cryptctl server.

virtual-ip:ip (mandatory: yes)
  The IP address of the cryptctl server.

virtual-ip:nic (mandatory: no, default: detected by the virtual-ip resource agent)
  The network device the cryptctl server should be listening on. Only required if the device cannot be detected from the IP address.

virtual-ip:cidr_netmask (mandatory: no, default: detected by the virtual-ip resource agent)
  The numeric netmask of the IP address of the cryptctl server. Only required if the netmask cannot be detected from the IP address.

virtual-ip:broadcast (mandatory: no, default: detected by the virtual-ip resource agent)
  The broadcast address of the IP address of the cryptctl server. Only required if it cannot be detected from the IP address.

filesystem:id (mandatory: no, default: cryptctl-filesystem)
  The ID of the file system resource containing the disk encryption keys and records.

filesystem:device (mandatory: yes)
  The device containing the file system. This can be a block device like /dev/sda... or an NFS share path server:/path.

filesystem:directory (mandatory: no, default: /var/lib/cryptctl/keydb)
  The directory where the file system is mounted.

filesystem:fstype (mandatory: yes)
  The file system type (for example, NFS, XFS, EXT4).

filesystem:options (mandatory: no, default: the default options of the selected file system)
  Mount options for the file system.

13.8 More information

For more information, also see the project home page https://github.com/SUSE/cryptctl/.

14 User management

14.1 Various account checks

14.1.1 Unlocked accounts

It is important that all system and vendor accounts that are not used for logins are locked. To get a list of unlocked accounts on your system, check for accounts that do not have an encrypted password string starting with ! or * in the /etc/shadow file. If you lock an account using either passwd -l or usermod -L, a ! is put in front of the encrypted password, effectively disabling the password. Many system and shared accounts are locked by default by having a *, !! or !* in the password field, which renders the encrypted password into an invalid string. Hence, to get a list of all unlocked (encryptable) accounts, run the following command:

# egrep -v ':\*|:\!' /etc/shadow | awk -F: '{print $1}'

Also make sure all accounts have an x in the password field in /etc/passwd. The following command lists all accounts that do not have an x in the password field:

# grep -v ':x:' /etc/passwd

An x in the password field means that the password has been shadowed, that is, the encrypted password must be looked up in the /etc/shadow file. If the password field in /etc/passwd is empty, then the system does not look up the shadow file and does not prompt the user for a password at the login prompt.

14.1.2 Unused accounts

All system or vendor accounts that are not used by users, applications, the system, or daemons should be removed from the system. You can use the following command to find out if there are any files owned by a specific account:

# find / -path /proc -prune -o -user ACCOUNT -ls

The -prune option in this example is used to skip the /proc file system. If you are sure that an account can be deleted, you can remove the account using the following command:

# userdel -r ACCOUNT

Without the -r option, userdel does not delete the user's home directory and mail spool (/var/spool/mail/USER). Many system accounts do not have a home directory.

14.2 Enabling password aging

Password expiration is a general best practice, but might need to be excluded for certain system and shared accounts (for example, Oracle). Expiring the passwords of such accounts could lead to system outages if the application account's password expires.

Typically a corporate policy should be developed that dictates rules/procedures regarding password changes for system and shared accounts. However, normal user account passwords should expire automatically. The following example shows how password expiration can be set up for individual user accounts.

The following files and parameters in the table can be used when a new account is created with the useradd command. Settings such as these are stored for each user account in the /etc/shadow file. If using the YaST tool (User and Group Management) to add users, the settings are available on a per-user basis. Here are the various settings, some of which can also be system-wide (for example, modification of /etc/login.defs and /etc/default/useradd):

/etc/login.defs

PASS_MAX_DAYS

Maximum number of days a password is valid.

/etc/login.defs

PASS_MIN_DAYS

Minimum number of days before a user can change the password since the last change.

/etc/login.defs

PASS_WARN_AGE

Number of days between the last password change and the next password change reminder.

/etc/default/useradd

INACTIVE

Number of days after password expiration until the account is disabled.

/etc/default/useradd

EXPIRE

Account expiration date in the format YYYY-MM-DD.

Note
Note: Existing users not affected

Users created before these modifications are not affected.

Ensure that the above parameters are changed in the /etc/login.defs and /etc/default/useradd files. A review of the /etc/shadow file shows how these settings are stored after adding a user.
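
For example, the following settings are consistent with the /etc/shadow entry shown below (a sketch; adapt the values to your policy). In /etc/login.defs:

PASS_MAX_DAYS   60
PASS_MIN_DAYS   7
PASS_WARN_AGE   7

In /etc/default/useradd:

INACTIVE=14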

To create a new user account, execute the following command:

# useradd -c "TEST_USER" -g USERS TEST

The -g option specifies the primary group for this account:

# id TEST
uid=509(test) gid=100(users) groups=100(users)

The settings in /etc/login.defs and /etc/default/useradd are recorded for the test user in the /etc/shadow file as follows:

# grep TEST /etc/shadow
test:!!:12742:7:60:7:14::

Password aging can be modified at any time by use of the chage command. To disable password aging for system and shared accounts, you can run the following chage command:

# chage -M -1 SYSTEM_ACCOUNT_NAME
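
To explicitly set the aging values from the example above on an existing account, you could run (a sketch; adapt the values to your policy):

# chage -m 7 -M 60 -W 7 -I 14 TEST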

To get password expiration information:

# chage -l SYSTEM_ACCOUNT_NAME

For example:

# chage -l TEST
Minimum: 7
Maximum: 60
Warning: 7
Inactive: 14
Last Change: Jan 11, 2015
Password Expires: Mar 12, 2015
Password Inactive: Mar 26, 2015
Account Expires: Never

14.3 Stronger password enforcement

On an audited system, it is important to restrict people from using simple passwords that can be cracked too easily. Writing down complex passwords is acceptable if they are stored securely. Some argue that strong passwords protect you against dictionary attacks, and that those types of attacks can be defeated by locking accounts after a few failed attempts. However, this is not always an option. If set up like this, locking system accounts could bring down your applications and systems, which would be nothing short of a denial-of-service attack itself.

At any rate, it is important to practice effective password management security. Most companies require that passwords have at least a number, one lowercase letter, and one uppercase letter. Policies vary, but maintaining a balance between password strength/complexity and management can be difficult.

14.4 Password and login management with PAM

Linux-PAM (Pluggable Authentication Modules for Linux) is a suite of shared libraries that enable the local system administrator to choose how applications authenticate users.

It is strongly recommended to familiarize oneself with the capabilities of PAM and how this architecture can be leveraged to provide the best authentication setup for an environment. This configuration can be done once, and implemented across all systems (a standard), or can be enhanced for individual hosts (enhanced security—by host/service/application). The key is to realize how flexible the architecture is.

To learn more about the PAM architecture, find PAM documentation in the /usr/share/doc/packages/pam directory (in a variety of formats).

The following discussions are examples of how to modify the default PAM stacks—specifically around password policies—for example password strength, password re-use, and account locking. While these are only a few of the possibilities, they serve as a good start and demonstrate PAM's flexibility.

Important
Important: pam-config limitations

The pam-config tool can be used to configure the common-{account,auth,password,session} PAM configuration files, which contain global options. These files include the following comment:

# This file is autogenerated by pam-config. All changes
# will be overwritten.

Individual service files, such as login, password, sshd, and su, must be edited directly. You can elect to edit all files directly, and not use pam-config, though pam-config includes useful features such as converting an older configuration, updating your current configuration, and sanity checks. For more information, see man 8 pam-config.

14.4.1 Password strength

SUSE Linux Enterprise Desktop can leverage the pam_cracklib library to test for weak passwords—and to suggest using a stronger one if it determines obvious weakness. The following parameters represent an example that could be part of a corporate password policy or something required because of audit constraints.

The PAM libraries follow a defined flow. The best way to design the perfect stack usually is to consider all of the requirements and policies and draw out a flow chart.

Table 14.1: Sample rules/constraints for password enforcement

pam_cracklib.so   minlen=8     Minimum length of password is 8 characters
pam_cracklib.so   lcredit=-1   Minimum number of lowercase letters is 1
pam_cracklib.so   ucredit=-1   Minimum number of uppercase letters is 1
pam_cracklib.so   dcredit=-1   Minimum number of digits is 1
pam_cracklib.so   ocredit=-1   Minimum number of other characters is 1

To set up these password restrictions, use the pam-config tool to specify the parameters you want to configure. For example, the minimum length parameter could be modified like this:

> sudo pam-config -a --cracklib-minlen=8 --cracklib-retry=3 \
--cracklib-lcredit=-1 --cracklib-ucredit=-1 --cracklib-dcredit=-1 \
--cracklib-ocredit=-1 --cracklib

Now verify that the new password restrictions work for new passwords. Log in to a non-root account and change the password using the passwd command. Note that the above requirements are not enforced if you run the passwd command under root.
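
For example, an attempt to set a password that violates the rules is rejected with output similar to the following (the exact message depends on which check fails):

> passwd
Changing password for tux.
Current password:
New password:
BAD PASSWORD: it is too short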

14.4.2 Restricting use of previous passwords

The pam_pwhistory module can be used to configure the number of previous passwords that cannot be reused. The following command implements password restrictions on a system so that a password cannot be reused for at least six months:

> sudo pam-config -a --pwhistory --pwhistory-remember=26

Recall that in Section 14.2, “Enabling password aging” we set PASS_MIN_DAYS to 7, which specifies the minimum number of days allowed between password changes. Therefore, if pam_pwhistory is configured to remember 26 passwords, the previously used passwords cannot be reused for at least six months (26*7 days).

The PAM configuration (/etc/pam.d/common-password) resulting from the pam-config command looks like the following:

auth      required   pam_env.so
auth      required   pam_unix.so     try_first_pass
account   required   pam_unix.so     try_first_pass
password  requisite   pam_cracklib.so
password  required   pam_pwhistory.so        remember=26
password  optional   pam_gnome_keyring.so    use_authtok
password  required   pam_unix.so     use_authtok nullok shadow try_first_pass
session   required   pam_limits.so
session   required   pam_unix.so     try_first_pass
session   optional   pam_umask.so

14.4.3 Locking user accounts after too many login failures

Locking accounts after a defined number of failed ssh, login, su, or sudo attempts is a common security practice. However, this could lead to outages if an application, admin, or root user is locked out.

Important
Important: Denial-of-service attacks

Password failure counts can easily be abused to cause denial-of-service attacks by deliberately creating login failures.

Only use password failure counts if you have to. Restrict locking to the necessary minimum, and do not lock critical accounts. Keep in mind that locking not only applies to human users but also to system accounts used to provide services.

SUSE Linux Enterprise Desktop does not lock accounts by default, but provides the PAM module pam_tally2 to easily implement password failure counts. Add the following line to the top of /etc/pam.d/login to lock out all users (except for root) after six failed logins, and to automatically unlock the accounts after ten minutes:

auth required pam_tally2.so deny=6 unlock_time=600

This is an example of a complete /etc/pam.d/login file:

#%PAM-1.0
auth     requisite      pam_nologin.so
auth     include        common-auth
auth     required       pam_tally2.so deny=6 unlock_time=600
account  include        common-account
account  required       pam_tally2.so
password include        common-password
session  required       pam_loginuid.so
session  include        common-session
#session  optional       pam_lastlog.so nowtmp showfailed
session  optional       pam_mail.so standard

You can also lock out root, though obviously you must be very certain you want to do this:

auth required pam_tally2.so deny=6 even_deny_root unlock_time=600

You can define a different lockout time for root:

auth required pam_tally2.so deny=6 even_deny_root root_unlock_time=120 unlock_time=600

If you want to require the administrator to unlock accounts, leave out the unlock_time option. The next two example commands display the number of failed login attempts and how to unlock a user account:

> sudo pam_tally2 -u username
Login           Failures Latest failure     From
username            6    12/17/19 13:49:43  pts/1

> sudo pam_tally2 -r -u username

By default, failed login attempts are recorded in /var/log/tallylog.

If the user succeeds in logging in after the login timeout expires, or after the administrator resets their account, the counter resets to 0.

Configure other login services to use pam_tally2 in their individual configuration files in /etc/pam.d/: sshd, su, sudo, sudo-i, and su-l.
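
For example, to protect SSH logins as well, you could add the same lines to /etc/pam.d/sshd (a sketch reusing the values from above):

auth     required       pam_tally2.so deny=6 unlock_time=600
account  required       pam_tally2.so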

14.5 Restricting root logins

By default, the root user is assigned a password and can log in using various methods—for example, on a local terminal, in a graphical session, or remotely via SSH. These methods should be restricted as far as possible. Shared usage of the root account should be avoided. Instead, individual administrators should use tools such as su or sudo (for more information, type man 1 su or man 8 sudo) to obtain elevated privileges. This allows associating root logins with particular users. This also adds another layer of security: to gain full root access, both the root password and the password of an administrator's regular account would need to be compromised. This section explains how to limit direct root logins on the different levels of the system.

14.5.1 Restricting local text console logins

TTY devices provide text-mode system access via the console. For desktop systems, these are accessed via the local keyboard or, in case of server systems, via input devices connected to a KVM switch or a remote management card (for example, ILO and DRAC). By default, Linux offers six different consoles, which can be switched to via the key combinations Alt+F1 to Alt+F6 when running in text mode, or Ctrl+Alt+F1 to Ctrl+Alt+F6 when running in a graphical session. The associated terminal devices are named tty1 to tty6.

The following steps restrict root access to the first TTY. Even this access method is only meant for emergency access to the system and should never be used for everyday system administration tasks.

Note
Note

The steps shown here are tailored towards PC architectures (x86 and AMD64/Intel 64). On architectures such as POWER, different terminal device names than tty1 can be used. Be careful not to lock yourself out by specifying wrong terminal device names. You can determine the device name of the terminal you are currently logged in to by running the tty command. Be careful not to do this in a virtual terminal, such as via SSH or in a graphical session (device names /dev/pts/N), but only from an actual login terminal reachable via Alt+FN.

Procedure 14.1: Restricting root logins on local TTYs
  1. Ensure that the PAM stack configuration file /etc/pam.d/login contains the pam_securetty module in the auth block:

    auth     requisite      pam_nologin.so
    auth     [user_unknown=ignore success=ok ignore=ignore auth_err=die default=bad] pam_securetty.so noconsole
    auth     include        common-auth

    This includes the pam_securetty module during the authentication process on local consoles, which restricts root to logging in only on TTY devices that are listed in the file /etc/securetty.

  2. Remove all entries from /etc/securetty except one. This limits the access to TTY devices for root.

    #
    # This file contains the device names of tty lines (one per line,
    # without leading /dev/) on which root is allowed to login.
    #
    tty1
  3. Check whether logins to other terminals are rejected for root. A login on tty2, for example, should be rejected immediately, without even querying the account password. Also make sure that you can still successfully log in to tty1 and thus that root is not locked out of the system.

Important
Important: Do not add pam_securetty module

Do not add the pam_securetty module to the /etc/pam.d/common-auth file. This would break the su and sudo commands, because these tools would then also reject root authentications.

Important
Important

These configuration changes also cause root logins on serial consoles such as /dev/ttyS0 to be denied. In case you require such use cases, you need to list the respective TTY devices additionally in the /etc/securetty file.

14.5.2 Restricting graphical session logins

To improve security on your server, avoid using graphical environments at all. Graphical programs are often not designed to be run as root and are more likely to contain security issues than console programs. If you require a graphical login, use a non-root login. Configure your system to disallow root from logging in to graphical sessions.

To prevent root from logging in to graphical sessions, you can apply the same basic steps as outlined in Section 14.5.1, “Restricting local text console logins”. Just add the pam_securetty module to the PAM stack file belonging to the display manager—for example, /etc/pam.d/gdm for GDM. The graphical session also runs on a TTY device: by default, tty7. Therefore, if you restrict root logins to tty1, then root is denied login in the graphical session.

14.5.3 Restricting SSH logins

By default, the root user is also allowed to log in to a machine remotely via the SSH network protocol (if the SSH port is not blocked by the firewall). To restrict this, make the following change to the OpenSSH configuration:

  1. Edit /etc/ssh/sshd_config and adjust the following parameter:

    PermitRootLogin no
  2. Restart the sshd service to make the changes effective:

    # systemctl restart sshd.service
Note
Note

Using the PAM pam_securetty module is not suitable in case of OpenSSH, because not all SSH logins go through the PAM stack during authorization (for example, when using SSH public-key authentication). Additionally, an attacker could differentiate between a wrong password and a successful login that was only rejected later on by policy.

14.6 Restricting sudo users

The sudo command allows users to execute commands in the context of another user, typically the root user. The sudo configuration consists of a rule-set that defines the mappings between commands to execute and their allowed source and target users and groups. The configuration is stored in the file /etc/sudoers. For more information about sudo, refer to Book “Administration Guide”, Chapter 2 “sudo basics”.

By default, sudo asks for the root password on SUSE systems. Unlike su, however, sudo remembers the password and allows further commands to be executed as root without asking for the password again for five minutes. Therefore, sudo should be enabled for selected administrator users only.

Procedure 14.2: Restricting sudo for normal users
  1. Edit file /etc/sudoers, for example by executing visudo.

  2. Comment out the line that allows every user to run every command as long as they know the password of the user they want to use. Afterwards, it should look like this:

    #ALL ALL=(ALL) ALL # WARNING! Only use this together with 'Defaults targetpw'!
  3. Uncomment the following line:

    %wheel ALL=(ALL) ALL

    This limits the functionality described above to members of the group wheel. You can use a different group, as wheel might have other implications that are not suitable for your setup.

  4. Add users that should be allowed to use sudo to the chosen group. To add the user tux to the group wheel, use:

    usermod -aG wheel tux

    To get the new group membership, users have to log out and back in again.

  5. Verify the change by running a command with a user not in the group you have chosen for access control. You should see the error message:

    wilber is not in the sudoers file.  This incident will be reported.

    Next, try the same with a member of the group. They should still be able to execute commands via sudo.

This configuration only limits the sudo functionality. The su command is still available to all users. If there are other ways to access the system, users with knowledge of the root password can easily execute commands via this vector.

14.7 Setting an inactivity timeout for interactive shell sessions

It can be a good idea to terminate an interactive shell session after a certain period of inactivity, for example, to prevent open, unguarded sessions or to avoid wasting system resources.

By default, there is no inactivity timeout for shells. Nothing happens if a shell stays open and unused for days or even years. However, it is possible to configure most shells so that idle sessions terminate automatically after a certain amount of time. The following example shows how to set an inactivity timeout for several common types of shells.

The inactivity timeout can be configured for login shells only or for all interactive shells. In the latter case, the inactivity timeout runs individually for each shell instance. This means that timeouts accumulate: when a sub- or child shell is started, a new timeout begins for it, and only afterwards does the timeout of the parent continue running.

The following table contains configuration details for a selection of common shells shipped with SUSE Linux Enterprise Desktop:

bash (shell personalities: bash, sh)
  Shell variable: TMOUT
  Time unit: seconds
  Read-only setting: readonly TMOUT=
  Config path (login shells only): /etc/profile.local, /etc/profile.d/
  Config path (all shells): /etc/bash.bashrc

mksh (shell personalities: ksh, lksh, mksh, pdksh)
  Shell variable: TMOUT
  Time unit: seconds
  Read-only setting: readonly TMOUT=
  Config path (login shells only): /etc/profile.local, /etc/profile.d/
  Config path (all shells): /etc/ksh.kshrc.local

tcsh (shell personalities: csh, tcsh)
  Shell variable: autologout
  Time unit: minutes
  Read-only setting: set -r autologout=
  Config path (login shells only): /etc/csh.login.local
  Config path (all shells): /etc/csh.cshrc.local

zsh (shell personalities: zsh)
  Shell variable: TMOUT
  Time unit: seconds
  Read-only setting: readonly TMOUT=
  Config path (login shells only): /etc/profile.local, /etc/profile.d/
  Config path (all shells): /etc/zsh.zshrc.local

Every listed shell supports an internal timeout shell variable that can be set to a specific time value to cause the inactivity timeout. If you want to prevent users from overriding the timeout setting, you can mark the corresponding shell timeout variable as read-only. The corresponding variable declaration syntax is also found in the table above.

Note
Note: No protection from hostile users

This feature is only helpful for avoiding risks if a user is forgetful or follows unsafe practices. It does not protect against hostile users. The timeout only applies to interactive wait states of a shell. A malicious user can always find ways to circumvent the timeout and keep their session open regardless.

To configure the inactivity timeout, you need to add the matching timeout variable declaration to each shell's start-up script. Use either the path for login shells only, or the one for all shells, as listed in the table. The following example uses paths and settings that are suitable for bash and ksh to set up a read-only login shell timeout that cannot be overridden by users. Create the file /etc/profile.d/timeout.sh with the following content:

# /etc/profile.d/timeout.sh for SUSE Linux
#
# Timeout in seconds until the bash/ksh session is terminated
# in case of inactivity.
# 24h = 86400 sec
readonly TMOUT=86400
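
For csh and tcsh, which use the autologout variable with minutes as the time unit, an equivalent read-only setting could go into /etc/csh.login.local (a sketch following the table above):

# /etc/csh.login.local for SUSE Linux
#
# Timeout in minutes until the csh/tcsh session is terminated
# in case of inactivity.
# 24h = 1440 min
set -r autologout=1440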
Tip
Tip

We recommend using the screen tool to detach sessions before logging out. screen sessions are not terminated and can be re-attached whenever required. An active session can be locked without logging out (read about Ctrl+A X / lockscreen in man screen for details).

14.8 Preventing accidental denial of service

Linux allows you to set limits on the amount of system resources that users and groups can consume. This is also useful if bugs in programs cause them to use up too many resources (for example, memory leaks), slow down the machine, or even render the system unusable. Incorrect settings can allow programs to use too many resources, which might make the server unresponsive to new connections or even local logins (for example, if a program uses up all available file handles on the host). This can also be a security concern if someone is allowed to consume all system resources and therefore cause a denial-of-service attack—either unplanned, or worse, planned. Setting resource limits for users and groups might be an effective way to protect systems, depending on the environment.

14.8.1 Example for restricting system resources

The following example demonstrates the practical usage of setting or restricting system resource consumption for an Oracle user account. For a list of system resource settings, see /etc/security/limits.conf or man limits.conf.

Most shells, such as Bash, provide control over various resources (for example, the maximum allowable number of open file descriptors or the maximum number of processes) that are available on a per-user basis. To examine all current limits in the shell, execute:

# ulimit -a

For more information on ulimit for the Bash shell, examine the Bash man pages.

Important
Important: Setting limits for SSH sessions

Setting hard and soft limits might not have the expected results when using an SSH session. To see valid behavior, it might be necessary to log in as root, and then su to the ID with limits (for example, Oracle in these examples). Resource limits should also work assuming the application was started automatically during the boot process. It might be necessary to set UsePrivilegeSeparation in /etc/ssh/sshd_config to no and restart the SSH daemon (systemctl restart sshd) if it seems that the changes to resource limits are not working (via SSH). However, this is not generally recommended, as it weakens a system's security.

Tip
Tip: Disabling password logins via ssh

You can add extra security to your server by disabling password authentication for SSH. Remember that you need to have SSH keys configured, otherwise you cannot access the server. To disable password login, add the following lines to /etc/ssh/sshd_config:

UseLogin no
UsePAM no
PasswordAuthentication no
PubkeyAuthentication yes
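
Afterwards, restart the SSH daemon to make the changes effective:

# systemctl restart sshd.service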

In this example, a change to the number of file handles or open files that the user oracle can use is made by editing /etc/security/limits.conf as root and making the following changes:

oracle           soft    nofile          4096
oracle           hard    nofile          63536

The soft limit in the first line defines the limit on the number of file handles (open files) that the oracle user has after login. If the user sees error messages about running out of file handles, the user can increase the number of file handles up to the hard limit (63536 in this example) by executing:

# ulimit -n 63536

You can set the soft and hard limits higher if necessary.

Note
Note: Caution with using ulimits

It is important to be judicious with the usage of ulimits. Allowing a hard limit for nofile for a user that is equal to the kernel limit (/proc/sys/fs/file-max) is bad. If the user consumes all the available file handles, the system cannot initiate new logins, since it is not possible to access the PAM modules required to perform a login.

You also need to ensure that pam_limits is either configured globally in /etc/pam.d/common-auth, or for individual services like SSH, su, login and telnet in:

/etc/pam.d/sshd (for SSH)
/etc/pam.d/su (for su)
/etc/pam.d/login (local logins and telnet)

If you do not want to enable it for all logins, there is a specific PAM module that reads the /etc/security/limits.conf file. The PAM configuration files contain entries like:

session     required      /lib/security/pam_limits.so
session     required      /lib/security/pam_unix.so

Changes are not immediate and require a new login session:

# su - oracle
> ulimit -n
4096

These examples are specific to the Bash shell; ulimit options are different for other shells. The default limit for the user oracle is 4096. To increase the number of file handles the user oracle can use to 63536, execute:

# su - oracle
> ulimit -n
4096
> ulimit -n 63536
> ulimit -n
63536

Making this permanent requires adding the setting ulimit -n 63536 (again, for Bash) to the user's profile (~/.bashrc or ~/.profile), which is the user start-up file for the Bash shell on SUSE Linux Enterprise Desktop (to verify your shell, run echo $SHELL). To do this, you could run the following commands for the Bash shell of the user oracle:

# su - oracle
> cat >> ~oracle/.bash_profile << EOF
ulimit -n 63536
EOF

14.9 Displaying login banners

It is often necessary to place a banner on login screens on all servers for legal/audit policy reasons or to give security instructions to users.

To print a login banner after a user logs in on a text-based terminal, for example, using SSH or on a local console, you can use the file /etc/motd (motd = message of the day). The file exists by default on SUSE Linux Enterprise Desktop, but it is empty. Simply add content to the file that is applicable and required by the organization.

Note
Note: Banner length

Try to keep the login banner content to a single terminal page (or less), as it scrolls the screen if it does not fit, making it more difficult to read.

You can also have a login banner printed before a user logs in on a text-based terminal. For local console logins, you can edit the /etc/issue file, which causes the banner to be displayed before the login prompt. For logins via SSH, you can edit the Banner parameter in the /etc/ssh/sshd_config file, which then displays the banner text before the SSH login prompt.
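
For example, to display the content of a banner file (here the hypothetical /etc/ssh/banner.txt) before the SSH login prompt, set the following in /etc/ssh/sshd_config and restart sshd:

Banner /etc/ssh/banner.txt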

For graphical logins via GDM, you can follow the GNOME admin guide to set up a login banner. Furthermore, you can make the following changes to require a user to acknowledge the legal banner by selecting Yes or No. Edit the /etc/gdm/Xsession file and add the following lines at the beginning of the script:

if ! /usr/bin/gdialog --yesno '\nThis system is classified...\n' 10 10; then
    /usr/bin/gdialog --infobox 'Aborting login'
    exit 1;
fi

The text This system is classified... needs to be replaced with the desired banner text. This dialog does not prevent a login from progressing. For more information about GDM scripting, refer to the GDM Admin Manual.

14.10 Connection accounting utilities

Here is a list of commands you can use to get data about user logins:

who: Lists currently logged-in users.

w: Shows who is logged in and what they are doing.

last: Shows a list of the most recently logged-in users, including login time, logout time, login IP address, etc.

lastb: Same as last, except that by default it reads /var/log/btmp, which records all failed login attempts.

lastlog: Reports data maintained in /var/log/lastlog, which is a record of the last time each user logged in.

ac: Available after installing the acct package. Prints the connect time in hours on a per-user or daily basis, etc. This command reads /var/log/wtmp.

dump-utmp: Converts raw data from /var/run/utmp or /var/log/wtmp into ASCII-parseable format.

Also check the /var/log/messages file, or the output of journalctl if no logging facility is running. See Book “Administration Guide”, Chapter 21 “journalctl: query the systemd journal” for more information on the systemd journal.

15 Restricting cron and at

This chapter explains how to restrict access to the cron and at daemons to improve the security of a system.

15.1 Restricting the cron daemon

The cron system is used to automatically run commands in the background at predefined times. For more information about cron, refer to the Book “Administration Guide”, Chapter 30 “Special system features”, Section 30.1.2 “The cron package”.

The cron.allow file specifies a list of users that are allowed to execute jobs via cron. The file does not exist by default, so all users can create cron jobs—except for those listed in cron.deny.

To prevent users except for root from creating cron jobs, perform the following steps.

  1. Create an empty file /etc/cron.allow:

    tux > sudo touch /etc/cron.allow
  2. Allow users to create cron jobs by adding their user names to the file:

    tux > echo "tux" | sudo tee -a /etc/cron.allow
  3. To verify, try creating a cron job as a non-root user listed in cron.allow. You should see the message:

    tux > crontab -e
    no crontab for tux - using an empty one

    Quit the crontab editor and try the same with a user not listed in the file (or before adding them in step 2 of this procedure):

    wilber > crontab -e
    You (wilber) are not allowed to use this program (crontab)
    See crontab(1) for more information
Important
Important: Existing cron jobs

Implementing cron.allow prevents users from creating new cron jobs. Existing jobs continue to run, even for users listed in cron.deny. To prevent this, create the file as described and remove existing user crontabs from the directory /var/spool/cron/tabs to ensure they are not run anymore.
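
For example, the crontab of the hypothetical user wilber could be removed as follows:

# crontab -r -u wilber

Alternatively, delete the file /var/spool/cron/tabs/wilber directly.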

Note
Note: Switching to systemd timer units

You should also consider switching to systemd timer units, as they allow for more powerful and reliable task execution. By default, users cannot use them to run code when they are not logged in. This limits the way users can interact with the system while not being connected to it.
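
As a sketch, a timer unit replacing a nightly cron job could look like the following, assuming a matching (hypothetical) backup.service exists:

# /etc/systemd/system/backup.timer
[Unit]
Description=Run the backup service once a day

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=timers.target

Activate the timer with systemctl enable --now backup.timer.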

For more information about systemd timer units, refer to Book “Administration Guide”, Chapter 19 “The systemd daemon”, Section 19.7 “systemd timer units”.

15.2 Restricting the at scheduler

The at job execution system allows users to schedule one-time jobs. The at.allow file specifies a list of users that are allowed to schedule jobs via at. The file does not exist by default, so all users can schedule at jobs—except for those listed in at.deny.

To prevent users except for root from scheduling jobs with at, perform the following steps.

  1. Create an empty file /etc/at.allow:

    tux > sudo touch /etc/at.allow
  2. Allow users to schedule jobs with at by adding their user names to the file:

    tux > echo "tux" | sudo tee -a /etc/at.allow
  3. To verify, try scheduling a job as a non-root user listed in at.allow:

    tux > at 00:00
    at>

    Quit the at prompt with Ctrl+C and try the same with a user not listed in the file (or before adding them in step 2 of this procedure):

    wilber > at 00:00
    You do not have permission to use at.
Note
Note: Uninstalling at

at is not widely used anymore. If you do not have valid use cases, consider uninstalling the daemon instead of just restricting its access.
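
For example, to remove the package together with dependencies that are no longer required:

# zypper remove --clean-deps at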

16 Spectre/Meltdown checker

spectre-meltdown-checker is a shell script that tests whether your system is vulnerable to the speculative execution vulnerabilities found in nearly all CPUs manufactured in the past 20 years. These are hardware flaws that potentially allow an attacker to read all data on the system. On cloud computing services, where multiple virtual machines share a single physical host, an attacker may gain access to all virtual machines. Fixing these vulnerabilities requires redesigning and replacing CPUs. Until this happens, several software patches mitigate these vulnerabilities. If you have kept your SUSE systems updated, all these patches should already be installed.

spectre-meltdown-checker generates a detailed report. The script cannot guarantee that your system is secure, but it shows which mitigations are in place and where potential vulnerabilities remain.

16.1 Using spectre-meltdown-checker

Install the script, and then run it as root without any options:

# zypper in spectre-meltdown-checker
# spectre-meltdown-checker.sh

You see colorful output like Figure 16.1, “Output from spectre-meltdown-checker”:

Figure 16.1: Output from spectre-meltdown-checker

spectre-meltdown-checker.sh --help lists all options. It is useful to pipe plain text output, with no colors, to a file:

# spectre-meltdown-checker.sh --no-color | tee filename.txt

The previous examples are on a running system, which is the default. You may also run spectre-meltdown-checker offline by specifying the paths to the kernel, config and System.map files:

# cd /boot
# spectre-meltdown-checker.sh \
--no-color \
--kernel vmlinuz-6.4.0-150600.9-default \
--config config-6.4.0-150600.9-default \
--map System.map-6.4.0-150600.9-default | tee filename.txt

Other useful options are:

--verbose, -v

Increase verbosity; repeat for more verbosity, for example -v -v -v

--explain

Print human-readable explanations

--batch [short] [json] [nrpe] [prometheus]

Format output in various machine-readable formats
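
For example, to collect the results in JSON format for further processing by monitoring tools (a sketch):

# spectre-meltdown-checker.sh --batch json --no-color > results.json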

Important
Important: --disclaimer option

spectre-meltdown-checker.sh --disclaimer provides important information about what the script does, and does not do.

16.2 More information

For more information, see the upstream project page at https://github.com/speed47/spectre-meltdown-checker.

17 Configuring security settings with YaST

The YaST module Security Center provides a central control panel for configuring security-related settings for SUSE Linux Enterprise Desktop. Use it to configure security aspects such as settings for the login procedure and for password creation, for boot permissions, user creation, or for default file permissions. Launch it from the YaST control center with Security and Users › Security Center. The Security Center dialog opens to the Security Overview, with additional configuration dialogs in the left and right panes.

17.1 Security overview

The Security Overview displays a comprehensive list of the most important security settings for your system. The security status of each entry in the list is visible. A green check mark indicates a secure setting, while a red cross marks an entry as insecure. Click Help to open an overview of the setting and information on how to make it secure. To change a setting, click the corresponding link in the Status column. Depending on the setting, the following entries are available:

Enabled/Disabled

Click this entry to toggle the status of the setting to either enabled or disabled.

Configure

Click this entry to launch another YaST module for configuration. You are returned to the Security Overview when leaving the module.

Unknown

A setting's status is set to unknown when the associated service is not installed. Such a setting does not represent a potential security risk.

Figure 17.1: YaST security center and hardening: security overview

17.2 Predefined security configurations

SUSE Linux Enterprise Desktop includes three Predefined Security Configurations. These configurations affect all the settings available in the Security Center module. Click Predefined Security Configurations in the left pane to see the predefined configurations. Click the configuration you want to apply; the module then closes. To modify the predefined settings, re-open the Security Center module, click Predefined Security Configurations, then click Custom Settings in the right pane. Any changes you make are applied to your selected predefined configuration.

Workstation

A configuration for a workstation with any kind of network connection (including a connection to the Internet).

Roaming device

This setting is designed for a laptop or tablet that connects to different networks.

Network server

Security settings designed for a machine providing network services such as a Web server, file server, name server, etc. This set provides the most secure configuration of the predefined settings.

Custom settings

Select Custom Settings to modify any of the three predefined configurations after they have been applied.

17.3 Password settings

Passwords that are easy to guess are a major security issue. The Password Settings dialog provides the means to ensure that only secure passwords can be used.

Check new passwords

By activating this option, a warning is issued if new passwords appear in a dictionary, or if they are proper names (proper nouns).

Minimum acceptable password length

If the user chooses a password with a length shorter than specified here, a warning is issued.

Number of passwords to remember

When password expiration is activated (via Password Age), this setting stores the given number of a user's previous passwords, preventing their reuse.

Password encryption method

Choose a password encryption algorithm. Normally there is no need to change the default (Blowfish).

Password age

Activate password expiration by specifying a minimum and a maximum time limit (in days). By setting the minimum age to a value greater than 0 days, you can prevent users from immediately changing their passwords again (and in doing so circumventing the password expiration). Use the values 0 and 99999 to deactivate password expiration.

Days before password expires warning

When a password expires, the user receives a warning in advance. Specify the number of days before the expiration date that the warning should be issued.

17.4 Boot settings

Configure which users can shut down the machine via the graphical login manager in this dialog. You can also specify how Ctrl+Alt+Del is interpreted and who can hibernate the system.

17.5 Login settings

This dialog lets you configure security-related login settings:

Delay after incorrect login attempt

To make it difficult to guess a user's password by repeatedly logging in, it is recommended to delay the display of the login prompt that follows an incorrect login. Specify the value in seconds. Make sure that users who have mistyped their passwords do not need to wait too long.

Allow remote graphical login

When checked, the graphical login manager (GDM) can be accessed from the network. This is a potential security risk.

17.6 User addition

Set minimum and maximum values for user and group IDs. These default settings rarely need to be changed.

17.7 Miscellaneous settings

Other security settings that do not fit the above-mentioned categories are listed here:

File permissions

SUSE Linux Enterprise Desktop comes with three predefined sets of file permissions for system files. These permission sets define whether a regular user can read log files or start certain programs. Easy file permissions are suitable for stand-alone machines. These settings allow regular users to, for example, read most system files. See the file /etc/permissions.easy for the complete configuration. The Secure file permissions are designed for multiuser machines with network access. A thorough explanation of these settings can be found in /etc/permissions.secure. The Paranoid settings are the most restrictive ones and should be used with care. See /etc/permissions.paranoid for more information.

User launching updatedb

The program updatedb scans the system and creates a database of all files, which can be queried with the command locate. When updatedb is run as user nobody, only world-readable files are added to the database. When run as user root, almost all files (except the ones root is not allowed to read) are added.

Enable magic SysRq keys

The magic SysRq key is a key combination that enables you to have certain control over the system even when it has crashed. The complete documentation can be found at https://www.kernel.org/doc/html/latest/admin-guide/sysrq.html.

18 The Polkit authentication framework

Polkit is an authentication framework used in graphical Linux desktop environments, for fine-grained management of access rights on the system. Traditionally, there is a strong separation of privileges on Linux between the root user as the fully authorized administrator account, and all other accounts and groups on the system. These non-administrator accounts may have certain additional privileges, like accessing sound hardware through an audio group. However, such privileges are fixed: they cannot be granted only in specific situations, or only for a certain period of time.

Instead of fully switching to the root user (using programs such as sudo) for gaining higher privileges, Polkit grants specific privileges to a user or group on an as-needed basis. This is controlled by configuration files that describe individual actions that need to be authorized in a dynamic context.

18.1 Conceptual overview

Polkit consists of multiple components. polkitd is a privileged central background service that performs authentication checks based on the existing Polkit configuration. Polkit-enabled applications forward specific authentication requests to the polkitd daemon. A Polkit authentication agent running in the unprivileged user context is responsible for displaying authentication requests on behalf of the polkitd daemon, and providing the credentials that have been entered interactively by the user.

A Polkit action represents a single activity that is subject to Polkit's authorization rules. For example, the intent to reboot the computer can be modeled as a single action in Polkit. Each action has a unique identifier, for the reboot example the action is called org.freedesktop.login1.reboot.

18.1.1 The authentication agent

When a user starts a graphical session in a fully featured desktop environment, an authentication agent is typically started automatically, running in the background. You notice it when an authentication prompt appears in response to an application requesting authorization for a certain action. Using Polkit in text mode or via SSH is not easily possible; therefore, this document focuses on its use in a graphical session context.

18.1.2 Configuration of Polkit

Polkit's configuration consists of actions and authorization rules:

Actions (file extension *.policy)

Actions are defined in XML files that are located in /usr/share/polkit-1/actions. Each file defines one or more actions for a certain application domain, and each action contains human-readable descriptions and its default authorization settings. Although a system administrator can write their own rules, these default policy files must not be edited directly.

Authorization rules (file extension *.rules)

Rules are written in the JavaScript programming language, and are located in two places: /usr/share/polkit-1/rules.d is used by system packages, and /etc/polkit-1/rules.d is for locally administered configurations. The rule files contain more complex logic on top of the default action authorization settings. For example, a rule file could overrule a restrictive action and allow certain users to use it even without authorization.

18.1.3 Polkit utilities

Polkit provides utilities for specific tasks (see also their respective man pages for further details):

pkaction

Get details about a defined action. See Section 18.3, “Querying privileges” for more information.

pkcheck

Checks whether a process is authorized for a specific Polkit action.

pkexec

Allows programs to be executed as a different user based on Polkit authorization settings. This is similar to su or sudo; see the example after this list.

pkttyagent

Starts a textual authentication agent. This agent is used if a desktop environment does not have its own authentication agent.
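
For example, pkexec can start a program covered by a Polkit policy with elevated privileges, such as the GParted action shown in Section 18.4.1, “Overriding Polkit policy files”:

> pkexec /usr/sbin/gparted

Depending on the configured authorization settings, an authentication prompt is displayed before the program starts.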

18.2 Authorization types

Every time a Polkit-enabled application carries out a privileged operation, Polkit is asked whether the user is entitled to do so. The answer can be yes, no or authentication needed. In the latter case, an authentication dialog is displayed for the user to enter the necessary credentials.

18.2.1 Implicit authorizations

When no dedicated Polkit JavaScript rules exist for a given action, the outcome depends on the implicit authorizations settings that are defined for each action in a Polkit policy file. There are three authorization categories: allow_active, allow_inactive and allow_any. allow_active is applied to users in an active session. An active session is a local login on the text mode console or in a graphical user environment. The session becomes inactive when you switch to another console, for example, in which case the category allow_inactive is relevant. allow_any is used for all other contexts, for example for remote users logged in via SSH or VNC. Each of these categories has one of the following authorization settings assigned:

no

The user is never granted authorization of the desired action.

yes

The user is always granted authorization without the need to enter any credentials.

auth_self

The user needs to enter their own password for the action to be authorized.

auth_self_keep

Like auth_self, but the authorization is cached for a certain duration, for example, if the same action is executed by the same application again, then it is not necessary to re-enter the password.

auth_admin

The user needs to enter the administrator (root) password for the action to be authorized.

auth_admin_keep

Similar to auth_self_keep, requiring the administrator (root) password.

18.2.2 SUSE default privileges

The implicit authorization settings found in Polkit policy files described so far are from the upstream developers of the respective applications. We call these settings the upstream defaults. These upstream defaults are not necessarily the same defaults that are used on SUSE systems. SUSE Linux Enterprise Desktop comes with a predefined set of privileges that override the upstream defaults. These settings come in three different flavors (profiles) of which only one can be active at any time:

/etc/polkit-default-privs.easy

Authorization settings tailored towards single-user desktop systems where the administrator is also the active interactive user. It offers reduced security in favor of improved user experience.

/etc/polkit-default-privs.standard

Balanced settings suitable for most systems.

/etc/polkit-default-privs.restrictive

More conservative authorization settings that reduce possible attack surface at the expense of user experience in certain areas.

To switch the active polkit profile, edit /etc/sysconfig/security and adjust the value of POLKIT_DEFAULT_PRIVS to one of easy, standard or restrictive. Then run the command set_polkit_default_privs as root.
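
For example, after editing /etc/sysconfig/security, the relevant line could look like this, followed by the activation command:

POLKIT_DEFAULT_PRIVS="restrictive"

# set_polkit_default_privs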

Do not modify the profile settings in the files listed above. To define your own custom Polkit settings, use /etc/polkit-default-privs.local. For details, refer to Section 18.4.3, “Modifying the SUSE default privileges”.

18.3 Querying privileges

To query privileges, use the command pkaction included in Polkit.

Polkit comes with command-line tools for changing privileges and executing commands as another user (see Section 18.1.3, “Polkit utilities” for a short overview). Each existing policy has a unique name with which it can be identified. List all available policies with the command pkaction. See man pkaction for more information.

To display the needed authorization for a given policy (for example, org.freedesktop.login1.reboot), use pkaction as follows:

> pkaction -v --action-id=org.freedesktop.login1.reboot
org.freedesktop.login1.reboot:
  description:       Reboot the system
  message:           Authentication is required to allow rebooting the system
  vendor:            The systemd Project
  vendor_url:        http://www.freedesktop.org/wiki/Software/systemd
  icon:
  implicit any:      auth_admin_keep
  implicit inactive: auth_admin_keep
  implicit active:   yes
Note
Note: Restrictions of pkaction on SUSE Linux Enterprise Desktop

pkaction only takes the upstream defaults into account. It is not aware of the SUSE default privileges that are overriding the upstream defaults. Therefore, be careful about interpreting this output.

18.4 Modifying Polkit configuration

Adjusting Polkit settings is useful when you want to deploy the same set of policies to different machines, for example, to the computers of a specific team. Customization of Polkit authorization settings can also be used to harden security for specific actions, or to improve the user experience by reducing the amount of password prompts for frequently used actions. However, granting certain Polkit actions without authentication can be a security hazard that may grant a regular user full root privileges. Reduce Polkit authentication requirements when you are certain that this does not violate the system security in your specific environment.

18.4.1 Overriding Polkit policy files

The list of available Polkit actions depends on the packages that you have installed on your system. For a quick overview, use pkaction to list all actions Polkit knows about.

For the purposes of this example, we show how the command gparted (GNOME Partition Editor) is integrated into Polkit.

The file /usr/share/polkit-1/actions/org.opensuse.policykit.gparted.policy has the following content:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE policyconfig PUBLIC
 "-//freedesktop//DTD PolicyKit Policy Configuration 1.0//EN"
 "http://www.freedesktop.org/standards/PolicyKit/1.0/policyconfig.dtd">
<policyconfig> 1

  <action id="org-opensuse-polkit-gparted"> 2
    <message>Authentication is required to run the GParted Partition Editor</message>
    <icon_name>gparted</icon_name>
    <defaults> 3
      <allow_any>auth_admin</allow_any>
      <allow_inactive>auth_admin</allow_inactive>
      <allow_active>auth_admin</allow_active>
    </defaults>
    <annotate 4
      key="org.freedesktop.policykit.exec.path">/usr/sbin/gparted</annotate>
    <annotate 4
      key="org.freedesktop.policykit.exec.allow_gui">true</annotate>
  </action>

</policyconfig>

1

Root XML element of the policy file.

2

Start of the definition of the only action in this policy.

3

Here the implicit authorization settings as described above are found.

4

The annotate element contains additional information regarding how Polkit performs an action. In this case, it contains the path to the gparted executable and a setting that this program is allowed to access the graphical display. These annotations are necessary for the use of an action in conjunction with the Polkit tool pkexec.

To add your own policy, create a .policy file with the structure above, add the appropriate action name into the id attribute, and define the desired override implicit authorization settings.

Note
Note: Deprecated name PolicyKit

The Polkit authorization framework was formerly named PolicyKit. In certain places, like the XML document preamble above, this old name still appears.

18.4.2 Adding JavaScript authorization rules

Authorization rules overrule the implicit authorization settings. To add your own rules, store your files under /etc/polkit-1/rules.d/.

The files in this directory start with a two-digit number, a dash, a descriptive name, and end with .rules. Functions inside these files are executed in the lexicographical order of the file names in the directory. For example, 00-foo.rules is ordered (and hence executed) before 60-bar.rules or even 90-default-privs.rules.

Inside the rule file, the script typically checks for the action ID to be authorized. For example, to allow the command gparted to be executed by any member of the admin group, check for the action ID org.opensuse.policykit.gparted:

/* Allow users in admin group to run GParted without authentication */
polkit.addRule(function(action, subject) {
    if (action.id == "org.opensuse.policykit.gparted" &&
        subject.isInGroup("admin")) {
        return polkit.Result.YES;
    }
});

Find the description of all classes and methods of the functions in the Polkit API at https://www.freedesktop.org/software/polkit/docs/latest/ref-api.html.

18.4.3 Modifying the SUSE default privileges

As described in Section 18.2.2, “SUSE default privileges”, SUSE ships different override profiles for the Polkit implicit authorization settings defined by the upstream developers. Custom privileges can be defined in /etc/polkit-default-privs.local. Privileges defined here always take precedence over the predefined profile settings. To add a custom privilege setting, do the following:

Procedure 18.1: Modifying default privileges
  1. Edit /etc/polkit-default-privs.local. To define a privilege, add a line for each action in the following format:

    <action-id>     <auth_any>:<auth_inactive>:<auth_active>

    Alternatively, if all three categories receive the same value, you can also specify a single value:

    <action-id>     <auth_all>

    For example:

    org.freedesktop.color-manager.modify-profile     auth_admin_keep
  2. Run this tool as root for the changes to take effect:

    # /sbin/set_polkit_default_privs

Refer to man polkit-default-privs for the full documentation of the SUSE Polkit default privileges.

18.5 Restoring the SUSE default privileges

To restore the SUSE default authorization settings follow these steps:

Procedure 18.2: Restoring the SUSE Linux Enterprise Desktop defaults
  1. Choose the desired profile as described in Section 18.2.2, “SUSE default privileges”.

  2. Remove any overrides from /etc/polkit-default-privs.local.

  3. Run set_polkit_default_privs to regenerate the default rules.

19 Access control lists in Linux

POSIX ACLs (access control lists) can be used as an expansion of the traditional permission concept for file system objects. With ACLs, permissions can be defined more flexibly than with the traditional permission concept.

The term POSIX ACL suggests that this is a true POSIX (portable operating system interface) standard. The respective draft standards POSIX 1003.1e and POSIX 1003.2c have been withdrawn for several reasons. Nevertheless, ACLs (as found on many systems belonging to the Unix family) are based on these drafts and the implementation of file system ACLs (as described in this chapter) follows these two standards.

19.1 Traditional file permissions

The permissions of all files included in SUSE Linux Enterprise Desktop are carefully chosen. When installing additional software or files, take great care when setting the permissions. Always use the -l option with the command ls to detect any incorrect file permissions immediately. Incorrect file permissions do not only mean that files could be changed or deleted. Modified files could be executed by root, or services could be hijacked by modifying configuration files. This increases the danger of an attack.

A SUSE® Linux Enterprise Desktop system includes the files permissions, permissions.easy, permissions.secure, and permissions.paranoid, all in the directory /etc. The purpose of these files is to define special permissions, such as world-writable directories or, for files, the setuser ID bit. Programs with the setuser ID bit set do not run with the permissions of the user that launched them, but with the permissions of the file owner, usually root. An administrator can use the file /etc/permissions.local to add their own settings.

To define one of the available profiles, select Local Security in the Security and Users section of YaST. To learn more about the topic, read the comments in /etc/permissions or consult man chmod.

Find detailed information about the traditional file permissions in the GNU Coreutils Info page, Node File permissions (info coreutils "File permissions"). More advanced features are the setuid, setgid and sticky bit.

19.1.1 The setuid bit

In certain situations, the access permissions may be too restrictive. Therefore, Linux has additional settings that enable the temporary change of the current user and group identity for a specific action. For example, the passwd program normally requires root permissions to access /etc/passwd. This file contains important information, like the home directories of users and user and group IDs. Thus, a normal user would not be able to change /etc/passwd, because it would be too dangerous to grant all users direct access to this file. A possible solution to this problem is the setuid mechanism. setuid (set user ID) is a special file attribute that instructs the system to execute programs marked accordingly under a specific user ID. Consider the passwd command:

-rwsr-xr-x  1 root shadow 80036 2004-10-02 11:08 /usr/bin/passwd

You can see the s that denotes that the setuid bit is set for the user permission. Through the setuid bit, all users starting the passwd command execute it as root.
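
Because setuid programs run with the privileges of the file owner, it is good practice to audit them regularly. The following sketch lists all setuid files on the root file system:

# find / -xdev -type f -perm -4000 -ls 2>/dev/null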

19.1.2 The setgid bit

The setuid bit applies to users. However, there is also an equivalent property for groups: the setgid bit. A program for which this bit was set runs under the group ID under which it was saved, no matter which user starts it. Therefore, in a directory with the setgid bit, all newly created files and subdirectories are assigned to the group to which the directory belongs. Consider the following example directory:

drwxrws--- 2 tux archive 48 Nov 19 17:12  backup

You can see the s that denotes that the setgid bit is set for the group permission. The owner of the directory and members of the group archive can access this directory; users that are not members of this group have no access. The effective group ID of all written files is archive. For example, a backup program that runs with the group ID archive can access this directory even without root privileges.
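
A directory like the one in this example can be created as follows (a sketch; the group archive must already exist):

# mkdir backup
# chgrp archive backup
# chmod 2770 backup

The leading 2 in the octal mode sets the setgid bit, resulting in the permissions drwxrws---.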

19.1.3 The sticky bit

There is also the sticky bit. It makes a difference whether it belongs to an executable program or a directory. If it belongs to a program, a file marked in this way is loaded to RAM to avoid needing to get it from the hard disk each time it is used. This attribute is used rarely, because modern hard disks are fast enough. If this bit is assigned to a directory, it prevents users from deleting each other's files. Typical examples include the /tmp and /var/tmp directories:

drwxrwxrwt 2 root root 1160 2002-11-19 17:15 /tmp
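
To create a world-writable shared directory with the sticky bit set (a sketch; the path is hypothetical):

# mkdir /srv/shared
# chmod 1777 /srv/shared

The leading 1 in the octal mode sets the sticky bit, resulting in the permissions drwxrwxrwt.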

19.2 Advantages of ACLs

Traditionally, three permission sets are defined for each file object on a Linux system. These sets include the read (r), write (w), and execute (x) permissions for each of three types of users—the file owner, the group, and other users. Additionally, it is possible to set the set user id, the set group id, and the sticky bit. This lean concept is fully adequate for most practical cases. However, for more complex scenarios or advanced applications, system administrators formerly needed to use several workarounds to circumvent the limitations of the traditional permission concept.

ACLs can be used as an extension of the traditional file permission concept. They allow the assignment of permissions to individual users or groups even if these do not correspond to the original owner or the owning group. Access control lists are a feature of the Linux kernel and are currently supported by Ext2, Ext3, Ext4, JFS and XFS. Using ACLs, complex scenarios can be realized without implementing complex permission models on the application level.

The advantages of ACLs are evident if you want to replace a Windows server with a Linux server. Some connected workstations may continue to run under Windows even after the migration. The Linux system offers file and print services to the Windows clients with Samba. With Samba supporting access control lists, user permissions can be configured both on the Linux server and in Windows with a graphical user interface (only Windows NT and later). With winbindd, part of the Samba suite, it is even possible to assign permissions to users only existing in the Windows domain without any account on the Linux server.

19.3 Definitions

User class

The conventional POSIX permission concept uses three classes of users for assigning permissions in the file system: the owner, the owning group, and other users. Three permission bits can be set for each user class, giving permission to read (r), write (w), and execute (x).

ACL

The user and group access permissions for all kinds of file system objects (files and directories) are determined through ACLs.

Default ACL

Default ACLs can only be applied to directories. They determine the permissions a file system object inherits from its parent directory when it is created.

ACL entry

Each ACL consists of a set of ACL entries. An ACL entry contains a type, a qualifier for the user or group to which the entry refers, and a set of permissions. For certain entry types, the qualifier for the group or users is undefined.

19.4 Handling ACLs

Table 19.1, “ACL entry types” summarizes the six possible types of ACL entries, each defining permissions for a user or a group of users. The owner entry defines the permissions of the user owning the file or directory. The owning group entry defines the permissions of the file's owning group. The superuser can change the owner or owning group with chown or chgrp, in which case the owner and owning group entries refer to the new owner and owning group. Each named user entry defines the permissions of the user specified in the entry's qualifier field. Each named group entry defines the permissions of the group specified in the entry's qualifier field. Only the named user and named group entries have a qualifier field that is not empty. The other entry defines the permissions of all other users.

The mask entry further limits the permissions granted by named user, named group, and owning group entries by defining which of the permissions in those entries are effective and which are masked. If permissions exist in one of the mentioned entries and in the mask, they are effective. Permissions contained only in the mask or only in the actual entry are not effective—meaning the permissions are not granted. All permissions defined in the owner and owning group entries are always effective. The example in Table 19.2, “Masking access permissions” demonstrates this mechanism.

There are two basic classes of ACLs: A minimum ACL contains only the entries for the types owner, owning group, and other, which correspond to the conventional permission bits for files and directories. An extended ACL goes beyond this. It must contain a mask entry and may contain several entries of the named user and named group types.

Table 19.1: ACL entry types

Type            Text Form
owner           user::rwx
named user      user:name:rwx
owning group    group::rwx
named group     group:name:rwx
mask            mask::rwx
other           other::rwx

Table 19.2: Masking access permissions

Entry Type                Text Form           Permissions
named user                user:geeko:r-x      r-x
mask                      mask::rw-           rw-
effective permissions:                        r--

19.4.1 ACL entries and file mode permission bits

Figure 19.1, “Minimum ACL: ACL entries compared to permission bits” and Figure 19.2, “Extended ACL: ACL entries compared to permission bits” illustrate the two cases of a minimum ACL and an extended ACL. The figures are structured in three blocks—the left block shows the type specifications of the ACL entries, the center block displays an example ACL, and the right block shows the respective permission bits according to the conventional permission concept (for example, as displayed by ls -l). In both cases, the owner class permissions are mapped to the ACL entry owner. Other class permissions are mapped to the respective ACL entry. However, the mapping of the group class permissions is different in the two cases.

Figure 19.1: Minimum ACL: ACL entries compared to permission bits

For a minimum ACL—without mask—the group class permissions are mapped to the ACL entry owning group. This is shown in Figure 19.1, “Minimum ACL: ACL entries compared to permission bits”. For an extended ACL—with mask—the group class permissions are mapped to the mask entry. This is shown in Figure 19.2, “Extended ACL: ACL entries compared to permission bits”.

Figure 19.2: Extended ACL: ACL entries compared to permission bits

This mapping approach ensures the smooth interaction of applications, regardless of whether they have ACL support. The access permissions that were assigned through the permission bits represent the upper limit for all other fine adjustments made with an ACL. Changes made to the permission bits are reflected by the ACL and vice versa.

19.4.2 A directory with an ACL

With getfacl and setfacl on the command line, you can access ACLs. The usage of these commands is demonstrated in the following example.

Before creating the directory, use the umask command to define which access permissions should be masked each time a file object is created. The command umask 027 sets the default permissions by giving the owner the full range of permissions (0), denying the group write access (2), and giving other users no permissions (7). umask masks the corresponding permission bits or turns them off. For details, refer to Section 11.4, “Default umask” or the umask man page.

mkdir mydir creates the mydir directory with the default permissions as set by umask. Use ls -dl mydir to check whether all permissions were assigned correctly. The output for this example is:

drwxr-x--- ... tux project3 ... mydir

With getfacl mydir, check the initial state of the ACL. This gives information like:

# file: mydir
# owner: tux
# group: project3
user::rwx
group::r-x
other::---

The first three output lines display the name, owner and owning group of the directory. The next three lines contain the three ACL entries owner, owning group, and other. Specifically for the minimum ACL, the getfacl command does not produce any information you could not have obtained with ls.

Modify the ACL to assign read, write and execute permissions to an additional user geeko and an additional group mascots with:

# setfacl -m user:geeko:rwx,group:mascots:rwx mydir

The option -m prompts setfacl to modify the existing ACL. The following argument indicates the ACL entries to modify (multiple entries are separated by commas). The final part specifies the name of the directory to which these modifications should be applied. Use the getfacl command to take a look at the resulting ACL.

# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx
group::r-x
group:mascots:rwx
mask::rwx
other::---

Besides the entries initiated for the user geeko and the group mascots, a mask entry has been generated. This mask entry is set automatically so that all permissions are effective. setfacl automatically adapts existing mask entries to the settings modified, unless you deactivate this feature with -n. The mask entry defines the maximum effective access permissions for all entries in the group class. This includes named user, named group, and owning group. The group class permission bits displayed by ls -dl mydir now correspond to the mask entry.

drwxrwx---+ ... tux project3 ... mydir

The first column of the output contains an additional + to indicate that there is an extended ACL for this item.

According to the output of the ls command, the permissions for the mask entry include write access. Traditionally, such permission bits would mean that the owning group (here project3) also has write access to the directory mydir.

However, the effective access permissions for the owning group correspond to the overlapping portion of the permissions defined for the owning group and for the mask—which is r-x in our example (see Table 19.2, “Masking access permissions”). As far as the effective permissions of the owning group in this example are concerned, nothing has changed even after the addition of the ACL entries.

Edit the mask entry with setfacl or chmod. For example, use chmod g-w mydir. ls -dl mydir then shows:

drwxr-x---+ ... tux project3 ... mydir

getfacl mydir provides the following output:

# file: mydir
# owner: tux
# group: project3
user::rwx
user:geeko:rwx          # effective: r-x
group::r-x
group:mascots:rwx       # effective: r-x
mask::r-x
other::---

After executing chmod to remove the write permission from the group class bits, the output of ls is sufficient to see that the mask bits must have changed accordingly: write permission is again limited to the owner of mydir. The output of getfacl confirms this. This output includes a comment for all those entries in which the effective permission bits do not correspond to the original permissions, because they are filtered according to the mask entry. The original permissions can be restored at any time with chmod g+w mydir.

19.4.3 A directory with a default ACL

Directories can have a default ACL, which is a special kind of ACL defining the access permissions that objects in the directory inherit when they are created. A default ACL affects both subdirectories and files.

19.4.3.1 Effects of a default ACL

There are two ways in which the permissions of a directory's default ACL are passed to the files and subdirectories:

  • A subdirectory inherits the default ACL of the parent directory both as its default ACL and as an ACL.

  • A file inherits the default ACL as its ACL.

All system calls that create file system objects use a mode parameter that defines the access permissions for the newly created file system object. If the parent directory does not have a default ACL, the permission bits as defined by the umask are subtracted from the permissions as passed by the mode parameter, with the result being assigned to the new object. If a default ACL exists for the parent directory, the permission bits assigned to the new object correspond to the overlapping portion of the permissions of the mode parameter and those that are defined in the default ACL. The umask is disregarded in this case.

19.4.3.2 Application of default ACLs

The following three examples show the main operations for directories and default ACLs:

  1. Add a default ACL to the existing directory mydir with:

    > setfacl -d -m group:mascots:r-x mydir

    The option -d of the setfacl command prompts setfacl to perform the following modifications (option -m) in the default ACL.

    Take a closer look at the result of this command:

    > getfacl mydir
    
    # file: mydir
    # owner: tux
    # group: project3
    user::rwx
    user:geeko:rwx
    group::r-x
    group:mascots:rwx
    mask::rwx
    other::---
    default:user::rwx
    default:group::r-x
    default:group:mascots:r-x
    default:mask::r-x
    default:other::---

    getfacl returns both the ACL and the default ACL. The default ACL is formed by all lines that start with default. Although you merely executed the setfacl command with an entry for the mascots group for the default ACL, setfacl automatically copied all other entries from the ACL to create a valid default ACL. Default ACLs do not have an immediate effect on access permissions. They only come into play when file system objects are created. These new objects inherit permissions only from the default ACL of their parent directory.

  2. In the next example, use mkdir to create a subdirectory in mydir, which inherits the default ACL.

    > mkdir mydir/mysubdir
    
    getfacl mydir/mysubdir
    
    # file: mydir/mysubdir
    # owner: tux
    # group: project3
    user::rwx
    group::r-x
    group:mascots:r-x
    mask::r-x
    other::---
    default:user::rwx
    default:group::r-x
    default:group:mascots:r-x
    default:mask::r-x
    default:other::---

    As expected, the newly created subdirectory mysubdir has the permissions from the default ACL of the parent directory. The ACL of mysubdir is an exact reflection of the default ACL of mydir. The default ACL that this directory hands down to its subordinate objects is also the same.

  3. Use touch to create a file in the mydir directory, for example, touch mydir/myfile. ls -l mydir/myfile then shows:

    -rw-r-----+ ... tux project3 ... mydir/myfile

    The output of getfacl mydir/myfile is:

    # file: mydir/myfile
    # owner: tux
    # group: project3
    user::rw-
    group::r-x          # effective:r--
    group:mascots:r-x   # effective:r--
    mask::r--
    other::---

    touch uses a mode with the value 0666 when creating new files, which means that the files are created with read and write permissions for all user classes, provided no other restrictions exist in umask or in the default ACL (see Section 19.4.3.1, “Effects of a default ACL”). In effect, this means that all access permissions not contained in the mode value are removed from the respective ACL entries. Although no permissions were removed from the ACL entry of the group class, the mask entry was modified to mask permissions not set in mode.

    This approach ensures the smooth interaction of applications (such as compilers) with ACLs. You can create files with restricted access permissions and subsequently mark them as executable. The mask mechanism guarantees that the right users and groups can execute them as desired.

19.4.4 The ACL check algorithm

A check algorithm is applied before any process or application is granted access to an ACL-protected file system object. As a basic rule, the ACL entries are examined in the following sequence: owner, named user, owning group or named group, and other. The access is handled in accordance with the entry that best suits the process. Permissions do not accumulate.

Things are more complicated if a process belongs to more than one group and would potentially suit several group entries. An entry is randomly selected from the suitable entries with the required permissions. It is irrelevant which of the entries triggers the final result access granted. Likewise, if none of the suitable group entries contain the required permissions, a randomly selected entry triggers the final result access denied.

19.5 ACL support in applications

ACLs can be used to implement complex permission scenarios that meet the requirements of modern applications. The traditional permission concept and ACLs can be combined in a smart manner. The basic file commands (cp, mv, ls, etc.) support ACLs, as do Samba and Nautilus.

Vi/Vim and emacs both fully support ACLs by preserving the permissions on writing files including backups. Many editors and file managers still lack ACL support. When modifying files with an editor, the ACLs of files are sometimes preserved and sometimes not, depending on the backup mode of the editor used. If the editor writes the changes to the original file, the ACL is preserved. If the editor saves the updated contents to a new file that is subsequently renamed to the old file name, the ACLs may be lost, unless the editor supports ACLs. Except for the star archiver, there are currently no backup applications that preserve ACLs.

19.6 More information

For more information about ACLs, see the man pages for getfacl(1), acl(5), and setfacl(1).

20 Intrusion detection with AIDE

Securing your systems is a mandatory task for any mission-critical system administrator. Because it is impossible to always guarantee that the system is not compromised, it is important to do extra checks regularly (for example with cron) to ensure that the system is still under your control. This is where AIDE, the Advanced Intrusion Detection Environment, comes into play.

20.1 Why use AIDE?

An easy check that can often reveal unwanted changes is provided by RPM. The package manager has a built-in verify function that checks all the managed files in the system for changes. To verify all files, run the command rpm -Va. However, this command also displays changes in configuration files, and you need to do some filtering to detect important changes.

An additional problem with the RPM method is that an intelligent attacker can modify rpm itself to hide any changes made by a root-kit, allowing the attacker to mask the intrusion and retain root privileges. To solve this, implement a secondary check that can also be run independently of the installed system.

20.2 Setting up an AIDE database

Important
Important: Initialize AIDE database after installation

Before you install your system, verify the checksum of your medium (see Book “Deployment Guide”, Chapter 9 “Troubleshooting”, Section 9.1 “Checking media”) to make sure you do not use a compromised source. After you have installed the system, initialize the AIDE database. To make sure that all went well during and after the installation, do an installation directly on the console, without any network attached to the computer. Do not leave the computer unattended or connected to any network before AIDE creates its database.

AIDE is not installed by default on SUSE Linux Enterprise Desktop. To install it, either use Computer › Install Software, or enter zypper install aide on the command line as root.

To tell AIDE which attributes of which files should be checked, use the /etc/aide.conf configuration file. It must be modified to become the actual configuration. The first section handles general parameters like the location of the AIDE database file. More relevant for local configurations are the Custom Rules and the Directories and Files sections. A typical rule looks like the following:

Binlib     = p+i+n+u+g+s+b+m+c+md5+sha1

After defining the variable Binlib, the respective check boxes are used in the files section. Important options include the following:

Table 20.1: Important AIDE check boxes

Option  Description
p       Check the file permissions of the selected files or directories.
i       Check the inode number. Every file name has a unique inode number that should not change.
n       Check the number of links pointing to the relevant file.
u       Check if the owner of the file has changed.
g       Check if the group of the file has changed.
s       Check if the file size has changed.
b       Check if the block count used by the file has changed.
m       Check if the modification time of the file has changed.
c       Check if the file's access time has changed.
S       Check for a changed file size.
I       Ignore changes of the file name.
md5     Check if the md5 checksum of the file has changed. We recommend using sha256 or sha512 instead.
sha1    Check if the sha1 (160-bit) checksum of the file has changed. We recommend using sha256 or sha512 instead.
sha256  Check if the sha256 checksum of the file has changed.
sha512  Check if the sha512 checksum of the file has changed.

This is a configuration that checks for all files in /sbin with the options defined in Binlib but omits the /sbin/conf.d/ directory:

/sbin  Binlib
!/sbin/conf.d

To create the AIDE database, proceed as follows:

  1. Open /etc/aide.conf.

  2. Define which files should be checked with which check boxes. For a complete list of available check boxes, see /usr/share/doc/packages/aide/manual.html. The definition of the file selection needs certain knowledge about regular expressions. Save your modifications.

  3. To check whether the configuration file is valid, run:

    # aide --config-check

    Any output of this command is a hint that the configuration is not valid. For example, if you get the following output:

    # aide --config-check
    35:syntax error:!
    35:Error while reading configuration:!
    Configuration error

    The error is to be expected in line 36 of /etc/aide.conf. The error message contains the last successfully read line of the configuration file.

  4. Initialize the AIDE database. Run the command:

    # aide -i
  5. Copy the generated database to a safe location like a CD-R or DVD-R, a remote server or a flash disk for later use.

    Important
    Important

    This step is essential as it avoids compromising your database. It is recommended to use a medium which can be written once to prevent the database being modified. Never leave the database on the computer which you want to monitor.

20.3 Local AIDE checks

To perform a file system check, proceed as follows:

  1. Rename the database:

    # mv /var/lib/aide/aide.db.new /var/lib/aide/aide.db
  2. After any configuration change, you always need to re-initialize the AIDE database and subsequently move the newly generated database. It is also a good idea to make a backup of this database. See Section 20.2, “Setting up an AIDE database” for more information.

  3. Perform the check with the following command:

    # aide --check

If the output is empty, everything is fine. If AIDE found changes, it displays a summary of changes, for example:

# aide --check
AIDE found differences between database and file system!!

Summary:
  Total number of files:        1992
  Added files:                  0
  Removed files:                0
  Changed files:                1

To learn about the actual changes, increase the verbose level of the check with the parameter -V. For the previous example, this could look like the following:

# aide --check -V
AIDE found differences between database and file system!!
Start timestamp: 2009-02-18 15:14:10

Summary:
  Total number of files:        1992
  Added files:                  0
  Removed files:                0
  Changed files:                1


---------------------------------------------------
Changed files:
---------------------------------------------------

changed: /etc/passwd

--------------------------------------------------
Detailed information about changes:
---------------------------------------------------


File: /etc/passwd
  Mtime    : 2009-02-18 15:11:02              , 2009-02-18 15:11:47
  Ctime    : 2009-02-18 15:11:02              , 2009-02-18 15:11:47

In this example, the file /etc/passwd was touched to demonstrate the effect.

20.4 System independent checking

To avoid risk, it is advisable to also run the AIDE binary from a trusted source. This excludes the risk that attackers have modified the aide binary to hide their traces.

To accomplish this task, AIDE must be run from a rescue system that is independent of the installed system. With SUSE Linux Enterprise Desktop it is easy to extend the rescue system with arbitrary programs, and thus add the needed functionality.

Before you can start using the rescue system, you need to provide two packages to the system. These are included with the same syntax as you would add a driver update disk to the system. For a detailed description about the possibilities of linuxrc that are used for this purpose, see https://en.opensuse.org/SDB:Linuxrc. In the following, one possible way to accomplish this task is discussed.

Procedure 20.1: Starting a rescue system with AIDE
  1. Provide an FTP server as a second machine.

  2. Copy the packages aide and mhash to the FTP server directory, in our case /srv/ftp/. Replace the placeholders ARCH and VERSION with the corresponding values:

    # cp DVD1/suse/ARCH/aideVERSION.ARCH.rpm /srv/ftp
    # cp DVD1/suse/ARCH/mhashVERSION.ARCH.rpm /srv/ftp
  3. Create an info file /srv/ftp/info.txt that provides the needed boot parameters for the rescue system:

    dud:ftp://ftp.example.com/aideVERSION.ARCH.rpm
    dud:ftp://ftp.example.com/mhashVERSION.ARCH.rpm

    Replace your FTP domain name, VERSION and ARCH with the values used on your system.

  4. Restart the server that needs to go through an AIDE check with the Rescue system from your DVD. Add the following string to the boot parameters:

    info=ftp://ftp.example.com/info.txt

    This parameter tells linuxrc to also read in all information from the info.txt file.

After the rescue system has booted, the AIDE program is ready for use.

20.5 More information

Information about AIDE is available at the following places:

Part III Network security

  • 21 X Window System and X authentication
  • Network transparency is one of the central characteristics of a Unix system. X, the windowing system of Unix operating systems, can use this feature in an impressive way. With X, it is no problem to log in to a remote host and start a graphical program that is then sent over the network to be displayed on your computer.

  • 22 Securing network operations with OpenSSH
  • OpenSSH is the SSH (secure shell) implementation that ships with SUSE Linux Enterprise Desktop, for securing network operations such as remote administration, file transfers, and tunneling insecure protocols. SSH encrypts all traffic between two hosts, including authentication, to protect against eavesdropping and connection hijacking. This chapter covers basic operations, plus host key rotation and certificate authentication, which are useful for managing larger SSH deployments.

  • 23 Masquerading and firewalls
  • Whenever Linux is used in a network environment, you can use the kernel functions that allow the manipulation of network packets to maintain a separation between internal and external network areas. The Linux netfilter framework provides the means to establish an effective firewall that keeps differ…

  • 24 Configuring a VPN server
  • Internet connections are easily available and affordable. However, not all connections are secure. Using a Virtual Private Network (VPN), you can create a secure network within an insecure network such as the Internet or Wi-Fi. It can be implemented in different ways and serves several purposes. In this chapter, we focus on the OpenVPN implementation to link branch offices via secure wide area networks (WANs).

  • 25 Managing a PKI with XCA, X certificate and key manager
  • Managing your own public key infrastructure (PKI) is traditionally done with the openssl utility. For admins who prefer a graphical tool, SUSE Linux Enterprise Desktop 15 SP6 includes XCA, the X Certificate and Key management tool (https://hohnstaedt.de/xca).

    XCA creates and manages X.509 certificates, certificate requests, RSA, DSA and EC private keys, Smartcards and certificate revocation lists (CRLs). XCA supports everything you need to create and manage your own certificate authority (CA). XCA includes customizable templates that can be used for certificate or request generation. This chapter describes a basic setup.

  • 26 Improving network security with sysctl variables
  • Sysctl (system control) variables control certain kernel parameters that influence the behavior of different parts of the operating system, for example the Linux network stack. These parameters can be looked up in the proc file system, in /proc/sys. Many kernel parameters can be changed directly by …

21 X Window System and X authentication

Network transparency is one of the central characteristics of a Unix system. X, the windowing system of Unix operating systems, can use this feature in an impressive way. With X, it is no problem to log in to a remote host and start a graphical program that is then sent over the network to be displayed on your computer.

When an X client needs to be displayed remotely using an X server, the latter should protect the resource managed by it (the display) from unauthorized access. In more concrete terms, certain permissions must be given to the client program. With the X Window System, there are two ways to do this, called host-based access control and cookie-based access control. The former relies on the IP address of the host where the client should run. The program to control this is xhost. xhost enters the IP address of a legitimate client into a database belonging to the X server. However, relying on IP addresses for authentication is not secure. For example, if there were a second user working on the host sending the client program, that user would have access to the X server as well—like someone spoofing the IP address. Because of these shortcomings, this authentication method is not described in more detail here, but you can learn about it with man xhost.

For cookie-based access control, a character string is generated that is only known to the X server and to the legitimate user, like an ID card. This cookie is stored on login in the file .Xauthority in the user's home directory and is available to any X client wanting to use the X server to display a window. The file .Xauthority can be examined by the user with the tool xauth. If you rename .Xauthority, or if you delete the file from your home directory by accident, you cannot open any new windows or X clients.
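For example, you can list the cookies stored for your display with xauth (the cookie value shown here is illustrative):

> xauth list "$DISPLAY"
host1/unix:0  MIT-MAGIC-COOKIE-1  2f68d0a3b8c441d5ae0f7fd1558ef39a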

SSH (secure shell) can be used to encrypt a network connection and forward it to an X server transparently. This is also called X forwarding. X forwarding is achieved by simulating an X server on the server side and setting a DISPLAY variable for the shell on the remote host. Further details about SSH can be found in Chapter 22, Securing network operations with OpenSSH.

Warning
Warning: X forwarding can be insecure

If you do not consider the computer where you log in to be a secure host, do not use X forwarding. If X forwarding is enabled, an attacker could authenticate via your SSH connection. The attacker could then intrude on your X server and, for example, read your keyboard input.

22 Securing network operations with OpenSSH

OpenSSH is the SSH (secure shell) implementation that ships with SUSE Linux Enterprise Desktop, for securing network operations such as remote administration, file transfers, and tunneling insecure protocols. SSH encrypts all traffic between two hosts, including authentication, to protect against eavesdropping and connection hijacking. This chapter covers basic operations, plus host key rotation and certificate authentication, which are useful for managing larger SSH deployments.

22.1 OpenSSH overview

SSH is a network protocol that provides end-to-end protection for communications between the computers on your network, or between computers on your network and systems outside your network. You may open an SSH session to any other computer, if you have a login and the correct authentication methods for the remote computer.

SSH is a client-server protocol. Any host running the sshd daemon can accept SSH connections from any other host. Every host running sshd can have its own custom configuration, such as limiting who can have access and which authentication methods are allowed.

Authentication and encryption are provided by encryption key pairs. Each key pair has a public key and a private key. Public keys encrypt, and private keys decrypt. Public keys are meant to be freely shared, while private keys must be protected and not shared. When a private key is compromised, anyone who has possession of it can masquerade as the original key owner.

SSH provides strong protection, because the server and client must both authenticate to each other. When a client first attempts to open an SSH session, the server presents its public host key. If the client already possesses a copy of this key (stored in ~/.ssh/known_hosts on the client machine), the client knows to trust the server. If the client does not have the appropriate host key, it is asked whether it should trust the server:

The authenticity of host '192.168.22.219 (192.168.22.219)' can't be established.
ECDSA key fingerprint is SHA256:yXf6pjV26N0fegvEYIt3HgG95s3Q1X6WYRhtHLF99pUo.
Are you sure you want to continue connecting (yes/no/[fingerprint])?

The user can type yes or no, or paste their copy of the host key fingerprint for comparison.

Note
Note: Matching host key fingerprints

Distributing copies of your host key fingerprints to your users enables them to verify that they are receiving the correct host keys. When they paste their copy of a host key fingerprint, ssh compares the fingerprints, and accepts the offered host key when the fingerprints match. This ensures a more accurate match than a visual comparison.
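One way to generate a host key fingerprint for distribution is to run ssh-keygen on the server against the public host key. This example assumes the default ECDSA host key path:

> ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub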

You cannot rely on users to use correct verification methods. If the fingerprints do not match, the user can still type yes, or copy the fingerprint in the message, and complete the connection. A stronger alternative is to use certificate authentication, which provides a global authentication mechanism, and does not require perfect behavior from users (see Section 22.8, “OpenSSH certificate authentication”).

If the public keys of a host have changed, the connection is denied with an alarming warning:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:keNu/rJFWmpQu9B0SjIuo8NLjbeDY/x3Tktpl7oDJqo.
Please contact your system administrator.
Add correct host key in /home/geeko/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in /home/geeko/.ssh/known_hosts:210
You can use following command to remove the offending key:
ssh-keygen -R 192.168.121.219 -f /home/geeko/.ssh/known_hosts
ECDSA host key for 192.168.121.219 has changed and you have requested strict
checking.
Host key verification failed.

The remedy is to delete the offending key from ~/.ssh/known_hosts with the command given in the warning, then reconnect and accept the new host key.

The openssh package installs the server, client, file transfer commands, and certain utilities.

OpenSSH supports several different types of authentication:

Password authentication

Uses any system login and password on the remote machine. This is the simplest and most flexible authentication because you can open an SSH session from anywhere, on any machine. It is also the least secure, because it is vulnerable to password-cracking and keystroke logging.

Public key authentication

Authenticates with your personal SSH keys, rather than a login and password. This is less flexible than password authentication, because you can open SSH sessions only from a machine that holds your private identity key. It is much stronger because it is not vulnerable to password cracking or keystroke logging; an attacker must possess your private key and know its passphrase.

See Section 22.9, “Automated public key logins with gnome-keyring” to learn how to use gnome-keyring for automated public key authentication in GNOME sessions.

See Section 22.10, “Automated public key logins with ssh-agent” to learn how to use ssh-agent for automated public key authentication in console sessions.

Passphrase-less public key authentication

Public key authentication, paired with private identity keys that do not have passphrases. This is useful for automated services, like scripts and cron jobs. You must protect private keys, because anyone who gains access to them can easily masquerade as the key owner.

Certificate authentication

OpenSSH supports certificate authentication, for easier key management, stronger authentication, and large-scale SSH deployments.

SUSE Linux Enterprise Desktop installs the OpenSSH package by default, providing the following commands:

ssh

The client command for initiating an SSH connection to a remote host.

scp

Secure file copy from or to a remote host.

sftp

Secure file transfer between a client and an SFTP server. (The SFTP protocol (SSH FTP) is not related to FTPS or FTPES (FTP over SSL/TLS), but was written independently.)

ssh-add

Add private key identities to the authentication agent, ssh-agent.

ssh-agent

Manages a user's private identity keys and their passphrases, for public key authentication. ssh-agent holds the passphrases in memory and applies them as needed, so that users do not have to re-type their passphrases to authenticate.

ssh-copy-id

Securely transfer a public key to a remote host, to set up public key authentication.

22.2 Server hardening

OpenSSH ships with a usable default server configuration, but there are additional steps you can take to secure your server.

Important
Important: Maintaining access to a remote SSH server

When you make changes to any SSH server, either have physical access to the machine, or keep an active root SSH session open until you have tested your changes, and everything works correctly. Then you can revert or correct your changes if something goes wrong.

The default server configuration file, /etc/ssh/sshd_config, contains the default configuration, and all the defaults are commented out. Override any default item by entering your own configuration item, uncommented, like the following example that sets a different listening port, and specifies the listening IPv4 address on a multi-homed host:

#Port 22
Port 2022

#ListenAddress 0.0.0.0
ListenAddress 192.168.10.100
Important
Important: Update /etc/services

When you use non-standard listening ports, first check the /etc/services file for unused ports. Select any unused port above 1024. Then document the ports you are using in /etc/services.
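For example, to check whether port 2022 is already assigned in /etc/services (no output means the port is not listed):

> grep -w 2022 /etc/services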

It is a best practice to disallow root logins. Instead, log into the remote machine as an unprivileged user, then use sudo to run commands as root. If you really want to allow root logins, the following server configuration example shows how to configure the server to accept only public-key authentication (Section 22.6, “Public key authentication”) for the root user with the PermitRootLogin prohibit-password and PasswordAuthentication options.

The following settings for /etc/ssh/sshd_config strengthen access controls:

Example 22.1: Example sshd_config
# Check if the file modes and ownership of the user’s files and
# home directory are correct before allowing them to login
StrictModes yes

# If your machine has more than one IP address, define which address or
# addresses it listens on
ListenAddress 192.168.10.100

# Allow only members of the listed groups to log in
AllowGroups ldapadmins backupadmins

# Or, deny certain groups. If you use both, DenyGroups is read first
DenyGroups users

# Allow or deny certain users. If you use both, DenyUsers is read first
AllowUsers user1 user2@example.com user3
DenyUsers user4 user5@192.168.10.10

# Allow root logins only with public key authentication
PermitRootLogin prohibit-password

# Disable password authentication and allow only public key authentication
# for all users
PasswordAuthentication no

# Length of time the server waits for a user to log in and complete the
# connection. The default is 120 seconds:
LoginGraceTime 60

# Limit the number of failed connection attempts. The default is 6
MaxAuthTries 4

After changing /etc/ssh/sshd_config, run the syntax checker:

> sudo sshd -t

The syntax checker only checks for correct syntax, and does not find configuration errors. When you are finished, reload the configuration:

> sudo systemctl reload sshd.service

Check the server's key directories for correct permissions.

  • /etc/ssh should be mode 0755/drwxr-xr-x, owned by root:root.

  • Private keys should be mode 0600/-rw-------, owned by root:root.

  • Public keys should be mode 0644/-rw-r--r--, owned by root:root.
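The following sketch shows one way to verify and correct the ownership and permissions; adjust the file patterns if you use custom key names:

> ls -ld /etc/ssh
> sudo chown root:root /etc/ssh/ssh_host_*
> sudo chmod 0600 /etc/ssh/ssh_host_*_key
> sudo chmod 0644 /etc/ssh/ssh_host_*_key.pub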

22.3 Password authentication

With password authentication, all you need is the login and password of a user on the remote machine, and sshd set up and running on the remote machine. You do not need any personal SSH keys. In the following example, user suzanne opens an SSH session to the host sun:

> ssh suzanne@sun

suzanne is prompted to enter the remote password. Type exit and press Enter to close an SSH session.

If the user name is the same on both machines, you can omit it, and then using ssh HOST_NAME is sufficient. After a successful authentication, you can work on the command line or use interactive applications, such as YaST in text mode.

You can also run non-interactive commands on remote systems (log in, run the command, and close the session, all in one step) using the ssh USER_NAME HOST COMMAND syntax. COMMAND must be properly quoted. Multiple commands can be concatenated as on a local shell:

> ssh suzanne@sun "df -h && du -sh /home"
> ssh suzanne@sun "sudo nano /etc/ssh/sshd_config"

When you run sudo on the remote machine, you are prompted for the sudo password.

22.4 Managing user and host encryption keys

There are several key types to choose from: DSA, RSA, ECDSA, ECDSA-SK, Ed25519, and Ed25519-SK. DSA was deprecated several years ago, was disabled in OpenSSH 7.0, and should not be used. RSA is the most universal, as it is older and more widely supported. (As of OpenSSH 8.2, the SHA-1-based ssh-rsa signature algorithm is deprecated. Use ECDSA or Ed25519 for host keys.)

Ed25519 and ECDSA are stronger and faster. Ed25519 is considered to be the strongest. If you must support older clients that do not support Ed25519 or ECDSA, create host keys in all three formats.

Note
Note: Older clients are unsafe

Some older SSH clients do not support ECDSA and Ed25519. Ed25519 was released with OpenSSH 6.5 in 2014, and ECDSA with OpenSSH 5.7 in 2011. It is important to keep security services updated and, if possible, to not allow unsafe old clients.

SSH keys serve two purposes: authenticating servers to clients, and authenticating clients to servers (see Section 22.6, “Public key authentication”). Server host keys are stored in /etc/ssh. Users' personal keys are stored in /home/user/.ssh.

/home/user/.ssh is created when the user creates a new SSH key.

Host keys must not have passphrases.

In most cases, private user keys should have strong passphrases.

22.4.1 Creating user SSH key pairs

The following procedure shows how to create user OpenSSH encryption keys.

Procedure 22.1: Creating default and customized keys
  1. To generate a user key pair with the default parameters (RSA, 3072 bits), use the ssh-keygen command with no options. Protect your private key with a strong passphrase:

    > ssh-keygen
    Generating public/private rsa key pair.
    Enter file in which to save the key (/home/tux/.ssh/id_rsa):
    Enter passphrase (empty for no passphrase):
    Enter same passphrase again:
    Your identification has been saved in id_rsa
    Your public key has been saved in id_rsa.pub
    The key fingerprint is:
    SHA256:z0uJIuc7Doy07bFTe1ppZHLVrkD/bWWlBAF/PcHjblU user@host2
    The key's randomart image is:
    +---[RSA 3072]----+
    |          ..o... |
    |           o . +E|
    |        . . o +.=|
    |       . o . o o+|
    |  .   . S . . o +|
    | . =  .= * + . = |
    |  o *.o.= * . +  |
    |   ..Bo+.. . .   |
    |    oo==  .      |
    +----[SHA256]-----+
  2. Create an RSA key pair with a longer bit length:

    > ssh-keygen -b 4096

    OpenSSH RSA keys can be a maximum of 16,384 bits. However, longer bit lengths are not necessarily more desirable. See the GnuPG FAQ for more information, https://www.gnupg.org/faq/gnupg-faq.html#no_default_of_rsa4096.

  3. You may create as many user keys as you want, for accessing different servers. Each key pair must have a unique name, and optionally, a comment. These help you remember what each key pair is for. Create an RSA key pair with a custom name and a comment:

    > ssh-keygen -f backup-server-key -C "infrastructure backup server"
  4. Create an Ed25519 key pair with a custom name and a comment:

    > ssh-keygen -t ed25519 -f ldap-server-key -C "Internal LDAP server"

    Ed25519 keys are fixed at 256 bits, which is equivalent in cryptographic strength to RSA 4096.

22.4.2 Creating SSH server host keys

Host keys are managed a little differently. A host key must not have a passphrase, and the key pairs are stored in /etc/ssh. OpenSSH automatically generates a set of host keys when it is installed, like the following example:

> ls -l /etc/ssh
total 608
-rw------- 1 root root 577834 2021-05-06 04:48 moduli
-rw-r--r-- 1 root root   2403 2021-05-06 04:48 ssh_config
-rw-r----- 1 root root   3420 2021-05-06 04:48 sshd_config
-rw------- 1 root root   1381 2022-02-10 06:55 ssh_host_dsa_key
-rw-r--r-- 1 root root    604 2022-02-10 06:55 ssh_host_dsa_key.pub
-rw------- 1 root root    505 2022-02-10 06:55 ssh_host_ecdsa_key
-rw-r--r-- 1 root root    176 2022-02-10 06:55 ssh_host_ecdsa_key.pub
-rw------- 1 root root    411 2022-02-10 06:55 ssh_host_ed25519_key
-rw-r--r-- 1 root root     96 2022-02-10 06:55 ssh_host_ed25519_key.pub
-rw------- 1 root root   2602 2022-02-10 06:55 ssh_host_rsa_key
-rw-r--r-- 1 root root    568 2022-02-10 06:55 ssh_host_rsa_key.pub

ssh-keygen has a special option, -A, for creating new host keys. This creates new keys for each of the key types for which host keys do not exist, with the default key file path, an empty passphrase, default bit size for the key type, and an empty comment. The following example creates a complete new set of host keys by first deleting the existing keys, then creating a new set:

> sudo rm /etc/ssh/ssh_host*
> sudo ssh-keygen -A

You can replace selected key pairs by first deleting only the keys you want to replace, because ssh-keygen -A does not replace existing keys.
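For example, to replace only the ECDSA host key pair, leaving the other key types untouched (a sketch):

> sudo rm /etc/ssh/ssh_host_ecdsa_key /etc/ssh/ssh_host_ecdsa_key.pub
> sudo ssh-keygen -A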

Important
Important: Do not use DSA keys

ssh-keygen -A creates DSA keys, even though they have been deprecated as unsafe for several years. In OpenSSH 7.0 they are still created, but disabled by not being listed in sshd_config. You may safely delete DSA keys.

When you want to rotate host keys (see Section 22.5, “Rotating host keys”), you must create the new keys individually, because they must exist at the same time as your old host keys. Your users authenticate with the old keys, and then receive the list of new keys. They need unique names, to not conflict with the old keys. The following example creates new RSA and Ed25519 host keys, labeled with the year and month they were created. Remember, the new host keys must not have passphrases:

> cd /etc/ssh
> sudo ssh-keygen -b 4096 -f "SSH_HOST_RSA_2022_02"
> sudo ssh-keygen -t ed25519 -f "SSH_HOST_ED25519_2022_02"

You may name your new keys whatever you want.

22.5 Rotating host keys

As of version 6.8, OpenSSH includes a protocol extension that supports host key rotation. SSH server admins must periodically retire old host keys and create new keys, for example if a key has been compromised, or it is time to upgrade to stronger keys. Before OpenSSH 6.8, if StrictHostKeyChecking was set to yes in ssh_config on user machines, users would see a warning that the host key had changed, and were not allowed to connect. Then the users would have to manually delete the server's public key from their known_hosts file, reconnect and manually accept the new key. Any automated SSH connections, such as scheduled backups, would fail.

The new host key rotation scheme provides a method to distribute new keys without service interruptions. When clients connect, the server sends them a list of new keys. Then the next time they log in they are asked if they wish to accept the changes. Give users a few days to connect and receive the new keys, and then you can remove the old keys. The users' known_hosts files are automatically updated, with new keys added and the old keys removed.

Setting up host key rotations requires creating new keys on the server, certain changes to /etc/ssh/sshd_config on the server, and to /etc/ssh/ssh_config on the clients.

First, create your new key or keys. The following example creates a new RSA key and a new Ed25519 key, with unique names and comments. A useful convention is to name them with the creation date. Remember, a host key must not have a passphrase:

# ssh-keygen -t rsa -f ssh_host_rsa_2022-01 -C "main server"
Generating public/private rsa key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ssh_host_rsa_2022-01
Your public key has been saved in ssh_host_rsa_2022-01.pub
The key fingerprint is:
SHA256:F1FIF2aqOz7D3mGdsjzHpH/kjUWZehBN3uG7FM4taAQ main server
The key's randomart image is:
+---[RSA 3072]----+
|         .Eo*.oo |
|          .B .o.o|
|          o . .++|
|         . o ooo=|
|        S . o +*.|
|         o o.oooo|
|       .o ++oo.= |
|       .+=o+o + .|
|       .oo++..   |
+----[SHA256]-----+

# ssh-keygen -t ed25519 -f ssh_host_ed25519_2022-01 -C "main server"
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ssh_host_ed25519_2022-01
Your public key has been saved in ssh_host_ed25519_2022-01.pub
The key fingerprint is:
SHA256:2p9K0giXv7WsRnLjwjs4hJ8EFcoX1FWR4nQz6fxnjxg main server
The key's randomart image is:
+--[ED25519 256]--+
|   .+o ...o+     |
| . .... o *      |
|  o..  o = o     |
|  ..   .. o      |
|   o. o S  .     |
|  . oo.*+   E o  |
|   + ++==..  = o |
|    = +oo= o. . .|
|     ..=+o=      |
+----[SHA256]-----+

Record the fingerprints, for users to verify the new keys.
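For example, print the fingerprints of the new public keys so you can publish them, using the key names created above:

> ssh-keygen -lf ssh_host_rsa_2022-01.pub
> ssh-keygen -lf ssh_host_ed25519_2022-01.pub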

Add the new key names to /etc/ssh/sshd_config, and uncomment any existing keys that are in use:

## Old keys
HostKey /etc/ssh/ssh_host_rsa_key
HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_ecdsa_key

## New replacement keys
HostKey /etc/ssh/ssh_host_rsa_2022-01
HostKey /etc/ssh/ssh_host_ed25519_2022-01

Save your changes, then restart sshd:

# systemctl restart sshd.service

The /etc/ssh/ssh_config file on user machines must include the following settings:

UpdateHostKeys ask
StrictHostKeyChecking yes

Test connecting from a client by opening an SSH session to the server to receive the new keys list. Log out, then log back in. When you log back in you should see something like the following message:

The server has updated its host keys.
These changes were verified by the server's existing trusted key.
Deprecating obsolete hostkey: ED25519
SHA256:V28d3VpHgjsCoV04RBCZpLo5c0kEslCZDVdIUnCvqPI
Deprecating obsolete hostkey:
RSA SHA256:+NR4DVdbsUNsqJPIhISzx+eqD4x/awCCwijZ4a9eP8I
Accept updated hostkeys? (yes/no):yes

You may set UpdateHostKeys ask to UpdateHostKeys yes to apply the changes automatically, and avoid asking users to approve the changes.

For more information:

22.6 Public key authentication

Public key authentication uses your own personal identity key to authenticate, rather than a user account password.

The following example shows how to create a new personal RSA key pair with a comment, so you know what it is for. First change to your ~/.ssh directory (or create it if it does not exist), then create the new key pair. Give it a strong passphrase, and write the passphrase in a safe place:

> cd ~/.ssh
> ssh-keygen -C "web server1" -f id-web1 -t rsa -b 4096

Next, copy your new public key to the machine you want access to. You must already have a user account on this machine, and SSH access to copy it over the network:

> ssh-copy-id -i id-web1 user@web1

Then try logging in with your new key:

> ssh -i id-web1 user@web1
Enter passphrase for key 'id-web1':
Last login: Sat Jul 11 11:09:53 2022 from 192.168.10.122
Have a lot of fun...

You should be asked for your private key passphrase, and not the password for your user account.

To be effective, public key authentication should be enforced on the remote machine, and password authentication not allowed (see Example 22.1, “Example sshd_config”). If you do not have public key authentication access on the remote machine already, you cannot copy your new public key with ssh-copy-id, and must use other means, such as manually copying it from a USB stick to the ~/.ssh/authorized_keys file of the remote user account.
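A minimal sketch of such a manual setup, run on the remote machine with the public key file copied from removable media (the mount path is illustrative):

> mkdir -p ~/.ssh && chmod 700 ~/.ssh
> cat /run/media/user/usb/id-web1.pub >> ~/.ssh/authorized_keys
> chmod 600 ~/.ssh/authorized_keys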

22.7 Passphrase-less public key authentication

This is public key authentication without a passphrase. Create your new private identity keys without a passphrase, and then use them the same way as passphrase-protected keys. This is useful for automated services, such as scripts and cron jobs. However, anyone who succeeds in stealing the private key can easily masquerade as you, so you need to be protective of a passphrase-less private key.
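For example, create a passphrase-less Ed25519 key by passing an empty passphrase with -N; the file name and comment here are illustrative:

> ssh-keygen -t ed25519 -N "" -f ~/.ssh/backup-job -C "nightly backup job"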

An alternative to using keys without passphrases is gnome-keyring, which remembers and applies your private keys and passphrases for you. gnome-keyring is for GNOME desktop sessions (Section 22.9, “Automated public key logins with gnome-keyring”).

For console sessions, use ssh-agent (Section 22.10, “Automated public key logins with ssh-agent”).

22.8 OpenSSH certificate authentication

OpenSSH introduced certificate authentication in OpenSSH 5.4. Certificate authentication is similar to public key authentication, except hosts and users authenticate to each other with digitally signed encryption certificates instead of encryption keys. Certificate authentication provides central management for server and user certificates, eliminating the need to manually copy user public keys to multiple hosts. It increases security by giving more control to administrators, and less to users.

Certificates consist of a public encryption key, a user-defined identity string, zero or more user names or host names, and other options. User and host public keys are signed by a Certificate Authority (CA) private signing key to create an encryption certificate. Users and hosts trust the public CA key, rather than trusting individual user and host public encryption keys.

Traditional OpenSSH public key authentication requires copying user public keys to every SSH server they need access to (to the appropriate ~/.ssh/authorized_keys files), and relying on users to verify new SSH server host keys before accepting them (stored in ~/.ssh/known_hosts). This is error-prone and complicated to manage. Another disadvantage is that OpenSSH keys never expire. When you need to revoke a particular public key, you have to find and remove all its copies on your network.

Automating the whole process (for example, with Ansible) is virtually a necessity. Large organizations, such as Meta (see https://engineering.fb.com/2016/09/12/security/scalable-and-secure-access-with-ssh/), automate the process completely, so they can revoke and replace certificates, and even certificate authorities, as often as they want without disrupting operations.

A prerequisite is the ability to open SSH sessions to all hosts on your network, and perform tasks like editing configuration files and restarting sshd.

Setting up an OpenSSH certificate authority involves the following steps:

  • Set up a secure trusted server to host your certificate authority, for signing host and user keys. Create a new key pair for signing keys. The private key signs user and host keys, and the public key is copied to all users who are allowed access to the server.

  • Receive and sign host public keys, then distribute the new host certificates to their respective hosts. Host certificates go in /etc/ssh, just like host keys.

  • Receive and sign user public keys, then distribute the new user certificates to their owners. User certificates go in ~/.ssh, just like user keys.

  • Edit configuration files on servers and users' machines, and stop and start sshd on hosts as necessary.

  • Revoke certificates as needed, for example when you suspect a certificate has been compromised, a user has left your organization, or a server has been retired. Revoking a certificate is considerably simpler than finding and removing all relevant public key copies.

Users and server admins create and protect their own OpenSSH keys. It is safe to freely share public keys. It is also safe to transfer the new certificates by insecure methods, such as email, because a certificate is useless without its corresponding private key.

Note that SSH certificates do not follow the SSL/TLS model: the certificate format is OpenSSH's own simple format, not X.509.

22.8.1 Setting up a new certificate authority

This section describes how to set up a new certificate authority (CA). Give careful consideration to organizing your CA, to keep it manageable and efficient.

Important
Important: Protect your certificate authority

It is important to protect the machine that hosts your certificate authority. Your CA is literally the key to your entire network. Anyone who gains access to your CA can create their own certificates and freely access your network resources, or even compromise your servers and the CA itself. A common practice is to use a dedicated machine that is started only when you need to sign keys.

It is a best practice to create one signing key for servers, and another signing key for clients. If you have a large number of certificates to manage, it can be helpful to create your CAs for hosts and clients on separate machines. If you prefer using a single machine, give each CA its own directory. The examples in this section use /ca-ssh-hosts and /ca-ssh-users. The example machine is ca.example.com.

If your security policy requires keeping copies of users' and host's public keys, store them in their own subdirectories, for easier tracking and avoiding key name collisions.

Important
Important: RSA signing keys deprecated

OpenSSH 8.2, released February 2020, deprecates RSA signing keys. Use Ed25519 or ECDSA.

The following examples create two signing keys, one for signing host keys, and one for user keys. Give them strong passphrases:

> sudo ssh-keygen -t ed25519 -f /ca-ssh-hosts/ca-host-sign-key -C "signing key for host certificates"
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ca-host-sign-key
Your public key has been saved in ca-host-sign-key.pub
The key fingerprint is:
SHA256:STuQ7HgDrPcEa7ybNIW0n6kPbj28X5HN8GgwllBbAt0
 signing key for host certificates
The key's randomart image is:
+--[ED25519 256]--+
|      o+o..      |
|   . . o.=E      |
|    = + B .      |
|   + O + = B     |
|  . O * S = +    |
|   o B + o .     |
|    =o=   .      |
|   o.*+  .       |
|   .=.o+.        |
+----[SHA256]-----+
> sudo ssh-keygen -t ed25519 -f /ca-ssh-users/ca-user-sign-key -C "signing key for user certificates"
Generating public/private ed25519 key pair.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in ca-user-sign-key
Your public key has been saved in ca-user-sign-key.pub
The key fingerprint is:
SHA256:taYj8tTnjkzgfHRvQ6HTj8a37PY6rwv96V1x+GHRjIk signing key for user certificates
The key's randomart image is:
+--[ED25519 256]--+
|                 |
|             . +.|
|          . E o.o|
|         . + . ..|
|      . S * o .+.|
|     o + + = +..+|
|    . = * . O + o|
|     + = = o =oo+|
|      . o.o  oOX=|
+----[SHA256]-----+

Copy your public user signing key (be sure you are copying the PUBLIC key) to /etc/ssh on all hosts running SSH servers. Then add the full path of the public user signing key to /etc/ssh/sshd_config on the hosts:

TrustedUserCAKeys /etc/ssh/ca-user-sign-key.pub

Then restart sshd.
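A sketch of the copy-and-restart step, using the example host venus; adjust host names and access details to your environment:

> scp /ca-ssh-users/ca-user-sign-key.pub root@venus:/etc/ssh/
> ssh root@venus "systemctl restart sshd.service"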

22.8.2 Creating host certificates

The following example signs a host public key to create a host certificate for a database server:

> sudo ssh-keygen -s /ca-ssh-hosts/ca-host-sign-key \
   -n venus,venus.example.com -I "db-server host cert" \
   -h -V +4w /etc/ssh/ssh_host_ed25519_key.pub
Enter passphrase:
Signed host key /etc/ssh/ssh_host_ed25519_key-cert.pub: id
"db-server host cert" serial 0 for venus,venus.example.com
valid from 2022-08-08T14:20:00 to 2022-09-05T15:21:19

If there is more than one host key on a server, sign them all; a sketch for doing this in one loop follows the option list below.

  • -s is your private signing key.

  • -n is your list of principals. For host certificates, the principals are the machine's host name and fully qualified domain name.

  • -I is the identity string. This is any comment or description you want. The string is logged, to help you quickly find relevant log entries.

  • -h creates a host certificate.

  • -V sets the expiration date for the certificate. In the example, the certificate expires in four weeks. (See the "-V validity_interval" section of man 1 ssh-keygen for allowed time formats.)
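As referenced above, the following sketch signs every public host key on the server in one loop, reusing the signing key and options from the example:

> for key in /etc/ssh/ssh_host_*_key.pub; do
    sudo ssh-keygen -s /ca-ssh-hosts/ca-host-sign-key \
      -n venus,venus.example.com -I "db-server host cert" \
      -h -V +4w "$key"
  done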

Verify that your new certificate is built the way you want:

> ssh-keygen -Lf /etc/ssh/ssh_host_ed25519_key-cert.pub
 /etc/ssh/ssh_host_ed25519_key-cert.pub:
        Type: ssh-ed25519-cert-v01@openssh.com host certificate
        Public key: ED25519-CERT SHA256:/
         U7C+qABXYyuvueUuhFKzzVINq3d7IULRLwBstvVC+Q
        Signing CA: ED25519 SHA256:
         STuQ7HgDrPcEa7ybNIW0n6kPbj28X5HN8GgwllBbAt0 (using ssh-ed25519)
        Key ID: "db-server host cert"
        Serial: 0
        Valid: from 2022-08-08T14:20:00 to 2022-09-05T15:21:19
        Principals:
                venus
                venus.example.com
        Critical Options: (none)
        Extensions: (none)

Add the full path of the new host certificate to /etc/ssh/sshd_config, to make it available to clients:

HostCertificate /etc/ssh/ssh_host_ed25519_key-cert.pub

Restart sshd to load your changes:

> sudo systemctl restart sshd.service

See Section 22.8.3, “CA configuration for users” to learn how to configure clients to accept host certificates.

22.8.3 CA configuration for users

The following example shows how to configure clients to trust your CA rather than individual keys. The example grants access to a single server. This entry must be on a single unbroken line in your users' ~/.ssh/known_hosts files. Move the original ~/.ssh/known_hosts file, and create a new file that contains only the CA configuration. Or, create a global configuration in /etc/ssh/ssh_known_hosts, which has the advantage of being un-editable by unprivileged users:

@cert-authority db,db.example.com ssh-ed25519
 AAAAC3NzaC1lZDI1NTE5AAAAIH1pF6DN4BdsfUKWuyiGt/leCvuZ/fPu
 YxY7+4V68Fz0 signing key for user certificates

List each server the user is allowed to access in a comma-delimited list, for example venus,venus.example.com,saturn,saturn.example.com. You may also grant access to all servers in domains with wildcards, for example *.example.com,*.example2.com.

Try connecting to the server. You should be prompted for the password of the remote account, without being prompted to verify the host certificate.

22.8.4 Creating user certificates

Sign the user's public key:

> sudo ssh-keygen -s /ca-ssh-users/ca-user-sign-key -I "suzanne's cert" -n suzanne -V +52w user-key.pub
 Signed user key .ssh/ed25519key-cert.pub: id "suzanne's cert" serial 0
 for suzanne valid from 2022-09-14T12:57:00 to 2023-09-13T12:58:21

The principal on user certificates is always the username. Store the user's certificate in ~/.ssh on the user's machine.

User certificates replace the ~/.ssh/authorized_keys files. Remove this file from a user account on the remote machine, then try opening an SSH session to that account. You should be able to log in without being prompted for a password. (Remember, the server should have a TrustedUserCAKeys /etc/ssh/ca-user-sign-key.pub line in its /etc/ssh/sshd_config file, so that the server knows to trust your certificate authority.)

Additionally, look for Accepted publickey for suzanne messages in your log files.
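For example, with journalctl (a sketch; the exact log message format can vary between OpenSSH versions):

> sudo journalctl -u sshd | grep "Accepted publickey for suzanne"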

22.8.5 Revoking host keys

When you need to revoke a certificate because a server has been compromised or retired, add the certificate's corresponding public key to a file on every client, for example /etc/ssh/revoked_keys:

ssh-ed25519-cert-v01@openssh.com
    AAAAIHNzaC1lZDI1NTE5LWNlcnQtdjAxQG9wZW5zc2guY29tAAAAIK6hyvFAhFI+0hkKehF/
    506fD1VdcW29ykfFJn1CPK9lAAAAIAawaXbbEFiQOAe5LGclrCHSLWbEeUauK5+CAuhTJyz0
    AAAAAAAAAAAAAAACAAAAE2RiLXNlcnZlciBob3N0IGNlcnQAAAAeAAAABXZlbnVzAAAAEXZl
    bnVzLmV4YW1wbGUuY29tAAAAAGMabhQAAAAAYz9YgQAAAAAAAAAAAAAAAAAAADMAAAALc3No
    LWVkMjU1MTkAAAAgfWkXoM3gF2x9Qpa7KIa3+V4K+5n98+5jFjv7hXrwXPQAAABTAAAAC3Nz
    aC1lZDI1NTE5AAAAQI+mbJsQjt/9bLiURse8DF3yTa6Yk3HpoE2uf9FW/
    KeLsw2wPeDv0d6jv49Wgr5T3xHYPf+VPJQW35ntFiHTlQg= root@db

This file must be named in /etc/ssh/sshd_config:

RevokedKeys /etc/ssh/revoked_keys

22.9 Automated public key logins with gnome-keyring

The gnome-keyring package is installed and enabled by default when the GNOME desktop environment is installed. gnome-keyring is integrated with your system login, automatically unlocking your secrets storage at login. When you change your login password, gnome-keyring automatically updates itself with your new password.

gnome-keyring automatically loads all key pairs in ~/.ssh, for each pair that has a *.pub file. You may manually load other keys with the ssh-add command, for example:

> ssh-add ~/.otherkeys/my_key

List all loaded keys:

> ssh-add -L

When you start up your system and then open an SSH session, you are prompted for your private key passphrase.

gnome-keyring remembers the passphrase for the rest of your session. You do not have to re-enter the passphrase until after a system restart.

22.10 Automated public key logins with ssh-agent

The openssh package provides the ssh-agent utility, which retains your private keys and passphrases, and automatically applies your passphrases for you during the current session.

Configure ssh-agent to start automatically and load your keys by entering the following lines in your ~/.profile file:

eval "$(ssh-agent)"
ssh-add

The first line starts ssh-agent, and the second line loads all the keys in your ~/.ssh folder. When you open an SSH session that requires public key authentication, you are prompted for the passphrase. After the passphrase has been provided once, you do not have to enter it again, until after you restart your system.

You may configure ~/.profile to load only specific keys, like the following example that loads id_rsa and id_ed25519:

ssh-add ~/.ssh/id_rsa ~/.ssh/id_ed25519

22.10.1 Using ssh-agent in an X session

On SUSE Linux Enterprise Desktop, ssh-agent is automatically started by the GNOME display manager. To also invoke ssh-add to add your keys to the agent at the beginning of an X session, do the following:

  1. Log in as the desired user and check whether the file ~/.xinitrc exists.

  2. If it does not exist, use an existing template or copy it from /etc/skel:

    if [ -f ~/.xinitrc.template ]; then mv ~/.xinitrc.template ~/.xinitrc; \
    else cp /etc/skel/.xinitrc.template ~/.xinitrc; fi
  3. If you have copied the template, search for the following lines and uncomment them. If ~/.xinitrc already existed, add the following lines (without comment signs).

    # if test -S "$SSH_AUTH_SOCK" -a -x "$SSH_ASKPASS"; then
    #       ssh-add < /dev/null
    # fi
  4. When starting a new X session, you are prompted for your SSH passphrase.

22.11 Changing an SSH private key passphrase

You may change or remove the passphrase from a private key with ssh-keygen:

> ssh-keygen -pf ~/.ssh/server1
Enter old passphrase:
Key has comment 'shared videos server1'
Enter new passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved with the new passphrase.

22.12 Retrieving a key fingerprint

Use the ssh-keygen command to display the public key fingerprint. The following example prints the SHA256 hash for an Ed25519 key:

> ssh-keygen -lf ldap-server
256 SHA256:W45lbmj24ZoASbrqW0q9+NhF04muvfKZ+FkRa2cCiqo comment (ED25519)

Add the -v flag to display the ASCII art representation of the key:

> ssh-keygen -lvf ldap-server
256 SHA256:W45lbmj24ZoASbrqW0q9+NhF04muvfKZ+FkRa2cCiqo comment (ED25519)
+--[ED25519 256]--+
|                 |
|                 |
|    .. .         |
|  .o..+ +        |
| ...o+ BSo+      |
|. ..o.o =X       |
|...o o..* =      |
|o.*.* =+ = .     |
|E*o*+O. o.o      |
+----[SHA256]-----+

22.13 Starting X11 applications on a remote host

You can run graphical applications that are installed on a remote machine on your local computer. X11Forwarding yes must be set in the /etc/ssh/sshd_config file on the remote machine. Then, when you run ssh with the -X option, the DISPLAY variable is automatically set on the remote machine, and all X output is exported to the local machine over the SSH connection. At the same time, X applications started remotely cannot be intercepted by unauthorized users.

A quick test is to run a simple game from the remote machine, such as GNOME Mines:

> ssh wilber@sun
Password:
Last login: Tue May 10 11:29:06 2022 from 192.168.163.13
Have a lot of fun...

wilber@sun>  gnome-mines

The remote application should appear on your local machine just as though it were installed locally. (Network lag affects performance.) Close the remote application in the usual way, such as clicking the close button. This closes only the application, and your SSH session remains open.

Important
Important: X11 forwarding does not work on Wayland

X11 forwarding requires the X Window System running on the remote host. The X Window System has built-in networking, while Wayland does not. Therefore Wayland does not support X11 forwarding.

Use the following command to learn if your system runs X or Wayland:

> echo $XDG_SESSION_TYPE
x11

If Wayland is in use, it looks like the following example:

> echo $XDG_SESSION_TYPE
wayland

The systemd way is to query with loginctl:

> loginctl show-session "$XDG_SESSION_ID" -p Type
Type=x11

> loginctl show-session "$XDG_SESSION_ID" -p Type
Type=wayland

22.14 Agent forwarding

By adding the -A option, the ssh-agent authentication mechanism is carried over to the next machine. This way, you can work from different machines without having to enter a password, but only if you have distributed your public key to the destination hosts and properly saved it there. (Refer to Section 22.6, “Public key authentication” to learn how to copy your public keys to other hosts.)
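A sketch using the example hosts from this chapter: connect to sun with agent forwarding, then hop from sun to jupiter without re-entering a passphrase:

> ssh -A tux@sun
tux@sun> ssh tux@jupiter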

AllowAgentForwarding yes is the default in /etc/ssh/sshd_config. Change it to no to disable it.

22.15 scp—secure copy

scp copies files to or from a remote machine. If the user name on jupiter is different from the user name on sun, specify the latter using the USER_NAME@HOST format. If the file should be copied into a directory other than the remote user's home directory, specify it as sun:DIRECTORY. The following examples show how to copy a file from a local to a remote machine and vice versa.

> scp ~/MyLetter.tex tux@sun:/tmp    # local to remote
> scp tux@sun:/tmp/MyLetter.tex ~    # remote to local

Tip
Tip: The -l option

With the ssh command, the option -l can be used to specify a remote user (as an alternative to the USER_NAME@HOST format). With scp, the option -l is used to limit the bandwidth consumed by scp.

After the correct password is entered, scp starts the data transfer. It displays a progress bar and the time remaining for each file that is copied. Suppress all output with the -q option.

scp also provides a recursive copying feature for entire directories. The command

> scp -r src/ sun:backup/

copies the entire contents of the directory src including all subdirectories to the ~/backup directory on the host sun. If this subdirectory does not exist, it is created automatically.

The -p option tells scp to leave the time stamp of files unchanged. -C compresses the data transfer. This minimizes the data volume to transfer, but creates a heavier burden on the processors of both machines.
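The options can be combined. For example, the following sketch recursively copies src/ while preserving time stamps and compressing the transfer:

> scp -rpC src/ sun:backup/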

22.16 sftp—secure file transfer

22.16.1 Using sftp

To copy several files from or to different locations, sftp is a convenient alternative to scp. It opens a shell with a set of commands similar to a regular FTP shell. Type help at the sftp-prompt to get a list of available commands. More details are available from the sftp man page.

> sftp sun
Enter passphrase for key '/home/tux/.ssh/id_rsa':
Connected to sun.
sftp> help
Available commands:
bye                                Quit sftp
cd path                            Change remote directory to 'path'
[...]

22.16.2 Setting permissions for file uploads

As with a regular FTP server, a user can download and upload files to a remote machine running an SFTP server by using the put command. By default the files are uploaded to the remote host with the same permissions as on the local host. There are two options to automatically alter these permissions:

Setting a umask

A umask works as a filter against the permissions of the original file on the local host. It can only withdraw permissions:

permissions original    umask    permissions uploaded
0666                    0002     0664
0600                    0002     0600
0775                    0025     0750

To apply a umask on an SFTP server, edit the file /etc/ssh/sshd_config. Search for the line beginning with Subsystem sftp and add the -u parameter with the desired setting, for example:

Subsystem sftp /usr/lib/ssh/sftp-server -u 0002

Explicitly setting the permissions

Explicitly setting the permissions sets the same permissions for all files uploaded via SFTP. Specify a three-digit pattern such as 600, 644, or 755 with -m. When both -m and -u are specified, -u is ignored.

To apply explicit permissions for uploaded files on an SFTP server, edit the file /etc/ssh/sshd_config. Search for the line beginning with Subsystem sftp and add the -m parameter with the desired setting, for example:

Subsystem sftp /usr/lib/ssh/sftp-server -m 600

Tip
Tip: Viewing the SSH daemon log file

To watch the log entries from sshd, use the following command:

> sudo journalctl -u sshd

22.17 Port forwarding (SSH tunneling)

ssh can also be used to redirect TCP/IP connections. This feature, also called SSH tunneling, redirects TCP connections to a certain port to another machine via an encrypted channel.

With the following command, any connection directed to jupiter port 25 (SMTP) is redirected to the SMTP port on sun. This is especially useful for those using SMTP servers without SMTP-AUTH or POP-before-SMTP features. From any arbitrary location connected to a network, e-mail can be transferred to the home mail server for delivery.

# ssh -L 25:sun:25 jupiter

Similarly, all POP3 requests (port 110) on jupiter can be forwarded to the POP3 port of sun with this command:

# ssh -L 110:sun:110 jupiter

Both commands must be executed as root, because the connection is made to privileged local ports. E-mail is sent and retrieved by normal users in an existing SSH connection. The SMTP and POP3 host must be set to localhost for this to work. Additional information can be found in the manual pages for each of the programs described above and in the OpenSSH package documentation under /usr/share/doc/packages/openssh.

22.18 More information

https://www.openssh.com

The home page of OpenSSH

https://en.wikibooks.org/wiki/OpenSSH

The OpenSSH Wikibook

man sshd

The man page of the OpenSSH daemon

man ssh_config

The man page of the OpenSSH SSH client configuration files

man scp , man sftp , man ssh , man ssh-add , man ssh-copy-id , man ssh-keygen

Man pages of several binary files to securely copy files (scp, sftp), to log in (slogin, ssh), and to manage keys.

/usr/share/doc/packages/openssh-common/README.SUSE , /usr/share/doc/packages/openssh-common/README.FIPS

SUSE package specific documentation; changes in defaults with respect to upstream, notes on FIPS mode etc.

22.19 Stopping SSH brute force attacks with Fail2Ban

An SSH brute force attack involves repeated trials of user name and password combinations until the attacker gains access to the remote server. The attacker uses automated tools that test various user name and password combinations to compromise a server.

You can use Fail2Ban software to limit intrusion attempts. Fail2Ban scans the system logs to detect attacks and trigger an action, such as blocking the IPs via the firewall. Fail2Ban is used only to protect services that require a user name and password authentication.

22.19.1 What is Fail2Ban?

Fail2Ban scans log files, such as /var/log/apache/error_log, and bans IPs that show malicious signs, for example too many failed password attempts. You can then use Fail2Ban to update firewall rules to reject the IP addresses for a specified amount of time.

Fail2Ban comes with filters for various services, such as Apache, SSH, Courier, etc. You can use Fail2Ban to minimize the rate of incorrect authentication attempts.

22.19.2 Using Fail2Ban to stop an SSH brute force attack

You can install and configure Fail2Ban to protect the server against SSH brute force attacks.

Procedure 22.2: Using Fail2Ban to stop an SSH brute force attack
  1. To install Fail2Ban, execute:

    > sudo zypper -n in fail2ban firewalld
  2. When you install Fail2Ban, a default configuration file jail.conf is also installed. This file gets overwritten when you upgrade Fail2Ban. To retain any customizations you make to the file, you can add the modifications to a file called jail.local. Fail2Ban automatically reads both files.

    # cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
  3. Open the file in your editor:

    # vi /etc/fail2ban/jail.local
  4. Take a look at the four settings that you need to know about.

    Figure 22.1: jail.local file settings
    ignoreip

    A list of whitelisted IPs that are never banned. The local host IP address 127.0.0.1 is in the list by default, along with the IPv6 equivalent ::1. You can use this setting to add a list of IP addresses that you know should not be banned.

    bantime

    The duration for which an IP address is banned. Values without a unit suffix such as m or h are treated as seconds. If you enter a value of -1, an IP address is banned permanently.

    findtime

    The period of time within which too many failed connection attempts result in an IP being banned.

    maxretry

    The limit for the number of failed attempts.

    Note
    Note

    If the same IP address makes maxretry failed connection attempts within the findtime period, it is banned for the duration of the bantime. The only exceptions are the IP addresses in the ignoreip list.

    Fail2Ban supports different types of jails, and each jail has different settings for the connection types.
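    For example, a minimal SSH jail in jail.local could look like the following sketch; the values shown are illustrative, not defaults:

    [sshd]
    enabled  = true
    maxretry = 4
    findtime = 10m
    bantime  = 1h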

  5. You can manage the Fail2Ban service with the following commands:

    To enable:

    #  systemctl enable fail2ban

    To start:

    #  systemctl start fail2ban

    To check the service status:

    #  systemctl status fail2ban.service
    Important
    Important

    You must restart Fail2Ban each time you make a configuration change.

23 Masquerading and firewalls

Whenever Linux is used in a network environment, you can use the kernel functions that allow the manipulation of network packets to maintain a separation between internal and external network areas. The Linux netfilter framework provides the means to establish an effective firewall that keeps different networks apart. Using iptables—a generic table structure for the definition of rule sets—precisely controls the packets allowed to pass a network interface. Such a packet filter can be set up using firewalld and its graphical interface firewall-config.

SUSE Linux Enterprise Desktop 15 GA introduces firewalld as the new default software firewall, replacing SuSEfirewall2. This chapter provides guidance for configuring firewalld, and migrating from SuSEfirewall2 for users who have upgraded from older SUSE Linux Enterprise Desktop releases.

23.1 Packet filtering with iptables

This section discusses the low-level details of packet filtering. The components netfilter and iptables are responsible for the filtering and manipulation of network packets and for network address translation (NAT). The filtering criteria and any actions associated with them are stored in chains, which must be matched one after another by individual network packets as they arrive. The chains to match are stored in tables. The iptables command allows you to alter these tables and rule sets.

The Linux kernel maintains three tables, each for a particular category of functions of the packet filter:

filter

This table holds the bulk of the filter rules, because it implements the packet filtering mechanism in the stricter sense, which determines whether packets are let through (ACCEPT) or discarded (DROP), for example.

nat

This table defines any changes to the source and target addresses of packets. Using these functions also allows you to implement masquerading, which is a special case of NAT used to link a private network with the Internet.

mangle

The rules held in this table make it possible to manipulate values stored in IP headers (such as the type of service).

These tables contain several predefined chains to match packets:

PREROUTING

This chain is applied to all incoming packets.

INPUT

This chain is applied to packets destined for the system's internal processes.

FORWARD

This chain is applied to packets that are routed through the system.

OUTPUT

This chain is applied to packets originating from the system itself.

POSTROUTING

This chain is applied to all outgoing packets.

Figure 23.1, “iptables: a packet's possible paths” illustrates the paths along which a network packet may travel on a given system. For the sake of simplicity, the figure lists tables as parts of chains, but in reality these chains are held within the tables themselves.

In the simplest case, an incoming packet destined for the system itself arrives at the eth0 interface. The packet is first referred to the PREROUTING chain of the mangle table then to the PREROUTING chain of the nat table. The following step, concerning the routing of the packet, determines that the actual target of the packet is a process of the system itself. After passing the INPUT chains of the mangle and the filter table, the packet finally reaches its target, provided that the rules of the filter table allow this.

Figure 23.1: iptables: a packet's possible paths
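While firewalld (see Section 23.4) is the recommended way to manage the firewall, you can inspect and manipulate the tables directly with iptables. The following sketch appends an ACCEPT rule for SSH to the INPUT chain of the filter table, then lists the chain:

> sudo iptables -t filter -A INPUT -p tcp --dport 22 -j ACCEPT
> sudo iptables -t filter -L INPUT --line-numbers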

23.2 Masquerading basics

Masquerading is the Linux-specific form of NAT (network address translation) and can be used to connect a small LAN with the Internet. LAN hosts use IP addresses from the private range (see Book “Administration Guide”, Chapter 23 “Basic networking”, Section 23.1.2 “Netmasks and routing”) and on the Internet official IP addresses are used. To be able to connect to the Internet, a LAN host's private address is translated to an official one. This is done on the router, which acts as the gateway between the LAN and the Internet. The underlying principle is a simple one: The router has more than one network interface, typically a network card and a separate interface connecting with the Internet. While the latter links the router with the outside world, one or several others link it with the LAN hosts. With these hosts in the local network connected to the network card (such as eth0) of the router, they can send any packets not destined for the local network to their default gateway or router.

Important
Important: Using the correct network mask

When configuring your network, make sure both the broadcast address and the netmask are the same for all local hosts. Failing to do so prevents packets from being routed properly.

As mentioned, whenever one of the LAN hosts sends a packet destined for an Internet address, it goes to the default router. However, the router must be configured before it can forward such packets. For security reasons, this is not enabled in a default installation. To enable it, add the line net.ipv4.ip_forward = 1 to the file /etc/sysctl.conf. Alternatively, do this via YaST, for example by calling yast routing ip-forwarding on.
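
For example, assuming the standard procps tools are installed, the following commands enable forwarding immediately for the running kernel and verify the setting (the entry in /etc/sysctl.conf only takes effect at boot time or when the file is explicitly reloaded):

# sysctl -w net.ipv4.ip_forward=1
net.ipv4.ip_forward = 1
# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1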

The target host of the connection can see your router, but knows nothing about the host in your internal network where the packets originated. This is why the technique is called masquerading. Because of the address translation, the router is the first destination of any reply packets. The router must identify these incoming packets and translate their target addresses, so packets can be forwarded to the correct host in the local network.

With the routing of inbound traffic depending on the masquerading table, there is no way to open a connection to an internal host from the outside. For such a connection, there would be no entry in the table. In addition, any connection already established has a status entry assigned to it in the table, so the entry cannot be used by another connection.

As a consequence of all this, you might experience some problems with several application protocols, such as ICQ, cucme, IRC (DCC, CTCP), and FTP (in PORT mode). Web browsers, the standard FTP program, and many other programs use the PASV mode. This passive mode is much less problematic as far as packet filtering and masquerading are concerned.

23.3 Firewalling basics

Firewall is probably the term most widely used to describe a mechanism that controls the data flow between networks. Strictly speaking, the mechanism described in this section is called a packet filter. A packet filter regulates the data flow according to certain criteria, such as protocols, ports, and IP addresses. This allows you to block packets that, according to their addresses, are not supposed to reach your network. To allow public access to your Web server, for example, explicitly open the corresponding port. However, a packet filter does not scan the contents of packets with legitimate addresses, such as those directed to your Web server. For example, if incoming packets were intended to compromise a CGI program on your Web server, the packet filter would still let them through.

A more effective but more complex mechanism is the combination of several types of systems, such as a packet filter interacting with an application gateway or proxy. In this case, the packet filter rejects any packets destined for disabled ports. Packets that are directed to the application gateway are accepted. This gateway or proxy pretends to be the actual client of the server. In a sense, such a proxy could be considered a masquerading host on the protocol level used by the application. One example of such a proxy is Squid, an HTTP and FTP proxy server. To use Squid, the browser must be configured to communicate via the proxy. Any HTTP pages or FTP files requested are served from the proxy cache, and objects not found in the cache are fetched from the Internet by the proxy.

The following section focuses on the packet filter that comes with SUSE Linux Enterprise Desktop. For further information about packet filtering and firewalling, read the Firewall HOWTO.

23.4 firewalld

Note
Note: firewalld replaces SuSEfirewall2

SUSE Linux Enterprise Desktop 15 GA introduces firewalld as the new default software firewall, replacing SuSEfirewall2. If you are upgrading from a release older than SUSE Linux Enterprise Desktop 15 GA, SuSEfirewall2 will be unchanged and you must manually upgrade to firewalld (see Section 23.5, “Migrating from SuSEfirewall2”).

firewalld is a daemon that maintains the system's iptables rules and offers a D-Bus interface for operating on them. It comes with a command-line utility, firewall-cmd, and a graphical user interface, firewall-config, for interacting with it. Because firewalld runs in the background and provides a well-defined interface, other applications can request changes to the iptables rules, for example to set up virtual machine networking.

firewalld implements different security zones. Several predefined zones like internal and public exist. The administrator can define additional custom zones if desired. Each zone contains its own set of iptables rules. Each network interface is a member of exactly one zone. Individual connections can also be assigned to a zone based on the source addresses.

Each zone represents a certain level of trust. For example, the public zone is not trusted, because other computers in this network are not under your control (suitable for Internet or wireless hotspot connections). On the other hand, the internal zone is used for networks that are under your control, like a home or company network. By utilizing zones this way, a host can offer different kinds of services to trusted networks and untrusted networks in a defined way.
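
For example, to list the available zones and inspect the full configuration of a particular zone, you can use the following commands (the zone list shown here is the typical predefined set and may differ on your system):

# firewall-cmd --get-zones
block dmz drop external home internal public trusted work
# firewall-cmd --zone=public --list-all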

For more information about the predefined zones and their meaning in firewalld, refer to its manual at https://www.firewalld.org/documentation/zone/predefined-zones.html.

Note
Note: No zone assigned behavior

The initial state for network interfaces is to be assigned to no zone at all. In this case, the network interface is implicitly handled in the default zone, which can be determined by calling firewall-cmd --get-default-zone. If not configured otherwise, the default zone is the public zone.

The firewalld packet filtering model allows any outgoing connections to pass. Outgoing connections are connections that are actively established by the local host. Incoming connections that are established by remote hosts are blocked if the respective service is not allowed in the zone in question. Therefore, each of the interfaces with incoming traffic must be placed in a suitable zone to allow for the desired services to be accessible. For each of the zones, define the services or protocols you need.

An important concept of firewalld is the distinction between two separate configurations: the runtime and the permanent configuration. The runtime configuration represents the currently active rules, while the permanent configuration represents the saved rules that will be applied when restarting firewalld. This allows you to add temporary rules that are discarded after restarting firewalld, or to experiment with new rules while being able to revert to the original state. When you are changing the configuration, you need to be aware of which configuration you are editing. How this is done is discussed in Section 23.4.3.2, “Runtime versus permanent configuration”.

To configure firewalld using the graphical user interface firewall-config, refer to its documentation. The following sections show how to perform typical firewalld configuration tasks using firewall-cmd on the command line.

23.4.1 Configuring the firewall with NetworkManager

NetworkManager supports a basic configuration of firewalld by selecting zones.

When editing a wired or wireless connection, go to the Identity tab in the configuration window and use the Firewall Zone drop-down box.

23.4.2 Configuring the firewall with YaST

The yast firewall module supports a basic configuration of firewalld. It provides a zone selector, services selector, and ports selector. It does not support creating custom iptables rules, and limits zone creation and customization to selecting services and ports.

Note
Note: Changing settings in running mode

YaST respects the settings in /etc/firewalld/firewalld.conf, where the default value for FlushAllOnReload is set to no. Therefore, YaST does not change settings in running mode. For example, if you have assigned an interface to a different zone with YaST, restart the firewalld daemon for the change to take effect.

23.4.3 Configuring the firewall on the command line

23.4.3.1 Firewall start-up

firewalld will be installed and enabled by default. It is a regular systemd service that can be configured via systemctl or the YaST Services Manager.
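
For example, the usual systemctl commands apply. The following checks whether the daemon is running, and stops and disables it in case the system is protected by other means:

# systemctl status firewalld
# systemctl stop firewalld
# systemctl disable firewalld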

Important
Important: Automatic firewall configuration

After the installation, YaST automatically starts firewalld and leaves all interfaces in the default public zone. If a server application is configured and activated on the system, YaST can adjust the firewall rules via the options Open Ports on Selected Interface in Firewall or Open Ports on Firewall in the server configuration modules. Some server module dialogs include a Firewall Details button for activating additional services and ports.

23.4.3.2 Runtime versus permanent configuration

By default, all firewall-cmd commands operate on the runtime configuration. You can apply most operations to the permanent configuration only by adding the --permanent parameter. When doing so, the change affects the permanent configuration and will not be effective immediately in the runtime configuration. There is currently no way to add a rule to both runtime and permanent configurations in a single invocation. To achieve this you can apply all necessary changes to the runtime configuration and when all is working as expected issue the following command:

# firewall-cmd --runtime-to-permanent

This writes all current runtime rules into the permanent configuration. Any temporary modifications you or other programs may have made to the firewall in other contexts are made permanent this way. If you are unsure about this, you can also take the opposite approach to be on the safe side: Add new rules to the permanent configuration and reload firewalld to make them active.

Note
Note

Some configuration items, like the default zone, are shared by both the runtime and permanent configurations. Changing them is reflected in both configurations at once.

To revert the runtime configuration to the permanent configuration and thereby discard any temporary changes, two possibilities exist, either via the firewalld command line interface or via systemd:

# firewall-cmd --reload
# systemctl reload firewalld

For brevity the examples in the following sections will always operate on the runtime configuration, if applicable. Adjust them accordingly to make them permanent.
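
For example, to add the predefined ssh service to the permanent configuration of the public zone and then activate the change by reloading:

# firewall-cmd --permanent --zone=public --add-service=ssh
# firewall-cmd --reload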

23.4.3.3 Assignment of interfaces to zones

You can list all network interfaces currently assigned to a zone like this:

# firewall-cmd --zone=public --list-interfaces
eth0

Similarly you can query which zone a specific interface is assigned to:

# firewall-cmd --get-zone-of-interface=eth0
public

The following command lines assign an interface to a zone. The variant using --add-interface works if eth0 is not already assigned to another zone. The variant using --change-interface always works, removing eth0 from its current zone if necessary:

# firewall-cmd --zone=internal --add-interface=eth0
# firewall-cmd --zone=internal --change-interface=eth0

Any operations without an explicit --zone argument will implicitly operate on the default zone. This pair of commands can be used for getting and setting the default zone assignment:

# firewall-cmd --get-default-zone
dmz
# firewall-cmd --set-default-zone=public
Important
Important

Any network interfaces not explicitly assigned to a zone are automatically part of the default zone. Changing the default zone reassigns all those network interfaces immediately for the permanent and runtime configurations. You should never use a trusted zone like internal as the default zone, to avoid unexpected exposure to threats. For example, hotplugged network interfaces like USB Ethernet interfaces would automatically become part of the trusted zone in such cases.

Interfaces that are not explicitly part of any zone do not appear in the zone interface list. There is currently no command to list unassigned interfaces. Because of this, it is best to avoid unassigned network interfaces during regular operation.
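
You can, however, list all zones that currently have interfaces or source addresses assigned. Interfaces that do not show up in this example output are unassigned:

# firewall-cmd --get-active-zones
public
  interfaces: eth0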

23.4.3.4 Making network services accessible

firewalld has a concept of services. A service consists of definitions of ports and protocols. These definitions logically belong together in the context of a given network service like a Web or mail server protocol. The following commands can be used to get information about predefined services and their details:

# firewall-cmd --get-services
[...] dhcp dhcpv6 dhcpv6-client dns docker-registry [...]
# firewall-cmd --info-service dhcp
dhcp
  ports: 67/udp
  protocols:
  source-ports:
  modules:
  destination:

These service definitions can be used for easily making the associated network functionality accessible in a zone. This command line opens the HTTP Web server port in the internal zone, for example:

# firewall-cmd --add-service=http --zone=internal

The removal of a service from a zone is performed using the counterpart command --remove-service. You can also define custom services using the --new-service subcommand. Refer to https://www.firewalld.org/documentation/howto/add-a-service.html for more details on how to do this.

If you just want to open a single port by number, you can use the following approach. This will open TCP port 8000 in the internal zone:

# firewall-cmd --add-port=8000/tcp --zone=internal

For removal, use the counterpart command --remove-port, as shown below.
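
For example, to verify which ports are currently open in the zone and to close the port again:

# firewall-cmd --zone=internal --list-ports
8000/tcp
# firewall-cmd --zone=internal --remove-port=8000/tcp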

Tip
Tip: Temporarily opening a service or port

firewalld supports a --timeout parameter that allows you to open a service or port for a limited duration. This can be helpful for quick testing and for making sure that closing the service or port is not forgotten. To allow the imap service in the internal zone for 5 minutes, you would call:

# firewall-cmd --add-service=imap --zone=internal --timeout=5m

23.4.3.5 Lockdown mode

firewalld offers a lockdown mode that prevents changes to the firewall rules while it is active. Since applications can automatically change the firewall rules via the D-Bus interface, and depending on the PolicyKit rules regular users may be able to do the same, it can be helpful to prevent changes in some situations. You can find more information about this at https://fedoraproject.org/wiki/Features/FirewalldLockdown.

It is important to understand that the lockdown mode feature provides no real security, but merely protection against accidental or benign attempts to change the firewall. The way the lockdown mode is currently implemented in firewalld provides no security against malicious intent, as is pointed out at https://seclists.org/oss-sec/2017/q3/139.
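
If you decide to use it anyway, lockdown mode is enabled, queried, and disabled with the following commands:

# firewall-cmd --lockdown-on
# firewall-cmd --query-lockdown
yes
# firewall-cmd --lockdown-off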

23.4.3.6 Adding custom iptables rules

firewalld claims exclusive control over the host's netfilter rules. You should never modify firewall rules using other tools like iptables. Doing so could confuse firewalld and break security or functionality.

If you need to add custom firewall rules that are not covered by firewalld features, there are two ways to do so. To directly pass raw iptables syntax, you can use the --direct option. It expects the table, chain, and priority as initial arguments, and the rest of the command line is passed as is to iptables. The following example adds a connection tracking rule to the FORWARD chain of the filter table:

# firewall-cmd  --direct --add-rule ipv4 filter FORWARD 0 -i eth0 -o eth1 \
    -p tcp --dport 80 -m state --state NEW,RELATED,ESTABLISHED -j ACCEPT

Additionally, firewalld implements so called rich rules, an extended syntax for specifying iptables rules in an easier way. You can find the syntax specification at https://www.firewalld.org/documentation/man-pages/firewalld.richlanguage.html. The following example drops all IPv4 packets originating from a certain source address:

# firewall-cmd --zone=public --add-rich-rule='rule family="ipv4" \
    source address="192.168.2.4" drop'

23.4.3.7 Routing, forwarding, and masquerading

firewalld is not designed to run as a fully fledged router. The basic functionality for typical home router setups is available. For a corporate production router, however, do not use firewalld; use dedicated router and firewall devices instead. The following provides a few pointers on what to look for when using routing with firewalld:

  • First of all IP forwarding needs to be enabled as outlined in Section 23.2, “Masquerading basics”.

  • To enable IPv4 masquerading, for example in the internal zone, issue the following command.

    # firewall-cmd --zone=internal --add-masquerade
  • firewalld can also enable port forwarding. The following command forwards local TCP connections on port 80 to another host:

    # firewall-cmd --zone=public \
        --add-forward-port=port=80:proto=tcp:toport=80:toaddr=192.168.1.10

23.4.4 Accessing services listening on dynamic ports

Some network services do not listen on predefined port numbers. Instead, they operate based on the portmapper or rpcbind protocol. We will use the term rpcbind from here on. When one of these services starts, it chooses a random local port and talks to rpcbind to make the port number known. rpcbind itself listens on a well-known port. Remote systems can then query rpcbind about the network services it knows about and about the ports on which they are listening. Few programs still use this approach today. Popular examples are the Network Information Service (NIS; ypserv and ypbind) and the Network File System (NFS) version 3.

Note
Note: About NFSv4

The newer NFSv4 requires the single well-known TCP port 2049. For protocol version 4.0, the kernel parameter fs.nfs.nfs_callback_tcpport may need to be set to a static port (see Example 23.1, “Callback port configuration for the nfs kernel module in /etc/modprobe.d/60-nfs.conf”). Starting with protocol version 4.1, this setting is no longer necessary.

The dynamic nature of the rpcbind protocol makes it difficult to make the affected services behind the firewall accessible. firewalld does not support these services by itself. For manual configuration, see Section 23.4.4.1, “Configuring static ports”. Alternatively, SUSE Linux Enterprise Desktop provides a helper script. For details, see Section 23.4.4.2, “Using firewall-rpcbind-helper for configuring static ports”.

23.4.4.1 Configuring static ports

One possibility is to configure all involved network services to use fixed port numbers. Once this is done, the fixed ports can be opened in firewalld and everything should work. The actual port numbers used are at your discretion but should not clash with any well-known port numbers assigned to other services. See Table 23.1, “Important sysconfig variables for static port configuration” for a list of the available configuration items for NIS and NFSv3 services. Depending on your actual NIS or NFS configuration, not all of these ports may be required for your setup.

Table 23.1: Important sysconfig variables for static port configuration

File Path               Variable Name     Example Value
/etc/sysconfig/nfs      MOUNTD_PORT       21001
                        STATD_PORT        21002
                        LOCKD_TCPPORT     21003
                        LOCKD_UDPPORT     21003
                        RQUOTAD_PORT      21004
/etc/sysconfig/ypbind   YPBIND_OPTIONS    -p 24500
/etc/sysconfig/ypserv   YPXFRD_ARGS       -p 24501
                        YPSERV_ARGS       -p 24502
                        YPPASSWDD_ARGS    --port 24503

You need to restart any related services that are affected by these static port configurations for the changes to take effect. You can see the currently assigned rpcbind ports by using the command rpcinfo -p. On success, the statically configured ports should show up there.
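
Assuming the example values from Table 23.1, an abbreviated rpcinfo -p output could look like the following (the exact program numbers and versions depend on your setup):

# rpcinfo -p
   program vers proto   port  service
    100000    4   tcp    111  portmapper
    100005    3   tcp  21001  mountd
    100024    1   udp  21002  status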

Apart from the port configuration for network services running in user space, there are also ports that are used by the Linux kernel directly when it comes to NFS. One of these ports is nfs_callback_tcpport. It is only required for NFS protocol versions older than 4.1. There is a sysctl named fs.nfs.nfs_callback_tcpport to configure this port. This sysctl node only appears dynamically when NFS mounts are active. Therefore, it is best to configure the port via kernel module parameters. This can be achieved by creating a file as shown in Example 23.1, “Callback port configuration for the nfs kernel module in /etc/modprobe.d/60-nfs.conf”.

Example 23.1: Callback port configuration for the nfs kernel module in /etc/modprobe.d/60-nfs.conf
options nfs callback_tcpport=21005

To make this change effective it is easiest to reboot the machine. Otherwise all NFS services need to be stopped and the nfs kernel module needs to be reloaded. To verify the active NFS callback port, check the output of cat /sys/module/nfs/parameters/callback_tcpport.

For easy handling of the now statically configured RPC ports, it is useful to create a new firewalld service definition. This service definition groups all related ports and, for example, makes it easy to make them accessible in a specific zone. In Example 23.2, “Commands to define a new firewalld RPC service for NFS” this is done for the NFS ports as they have been configured in the accompanying examples.

Example 23.2: Commands to define a new firewalld RPC service for NFS
# firewall-cmd --permanent --new-service=nfs-rpc
# firewall-cmd --permanent --service=nfs-rpc --set-description="NFS related, statically configured RPC ports"
# add UDP and TCP ports for the given sequence
# for port in 21001 21002 21003 21004; do
    firewall-cmd --permanent --service=nfs-rpc --add-port ${port}/udp --add-port ${port}/tcp
done
# the callback port is TCP only
# firewall-cmd --permanent --service=nfs-rpc --add-port 21005/tcp

# show the complete definition of the new custom service
# firewall-cmd --info-service=nfs-rpc --permanent -v
nfs-rpc
  summary:
  description: NFS related, statically configured RPC ports
  ports: 21001/udp 21001/tcp 21002/udp 21002/tcp 21003/udp 21003/tcp 21004/udp 21004/tcp 21005/tcp
  protocols:
  source-ports:
  modules:
  destination:

# reload firewalld to make the new service definition available
# firewall-cmd --reload

# the new service definition can now be used to open the ports for example in the internal zone
# firewall-cmd --add-service=nfs-rpc --zone=internal

23.4.4.2 Using firewall-rpcbind-helper for configuring static ports

The steps to configure static ports as shown in the previous section can be simplified by using the SUSE helper tool firewall-rpc-helper.py. Install it with zypper in firewalld-rpcbind-helper.

The tool allows interactive configuration of the service patterns discussed in the previous section. It can also display current port assignments and can be used for scripting. For details, see firewall-rpc-helper.py --help.

23.4.4.3 nftables as the default back-end

The default back-end for firewalld is nftables. nftables is a framework by the Linux netfilter project that provides packet filtering, network address translation (NAT) and other similar functionalities. nftables reuses the existing Netfilter subsystems such as the connection tracking system, user space queueing, logging and the hook infrastructure. nftables is a replacement for iptables, ip6tables, arptables, ebtables, and ipset.

The advantages of using nftables include:

  • One framework for both the IPv4 and IPv6 protocols.

  • Rules are applied atomically instead of fetching, updating and storing a complete rule set.

  • Trace events can be monitored in the nft tool, and the rule set can be debugged and traced via nftrace.

  • The nft command line tool compiles rules into VM bytecode in netlink format. When the rule set is retrieved, the bytecode is translated back into the original rule set representation.

A basic nftables configuration file:

>  cat /etc/nftables.conf
    #!/usr/sbin/nft -f

    flush ruleset

    # This matches IPv4 and IPv6
    table inet filter {
        # Chain names are up to you.
        # Which part of the traffic a chain covers
        # depends on the hook in its type statement.
        chain input {
            type filter hook input priority 0; policy accept;
        }
        chain forward {
            type filter hook forward priority 0; policy accept;
        }
        chain output {
            type filter hook output priority 0; policy accept;
        }
    }

In the above example, all traffic is allowed because all chains have the policy accept. Note that, in reality, this is a bad practice, as firewalls should deny traffic by default.

A basic nftables configuration file with masquerade:

>  cat /etc/nftables.conf
    #!/usr/sbin/nft -f

    flush ruleset

    table ip nat {
      chain prerouting {
        type nat hook prerouting priority 0; policy accept;
      }

      # for all packets to WAN, after routing, replace source address with primary IP of WAN interface
      chain postrouting {
        type nat hook postrouting priority 100; policy accept;
        oifname "wan0" masquerade
      }
    }

If you have a static IP address, it would be slightly faster to use source NAT (SNAT) instead of masquerade: the router then replaces the source address with a predefined IP address instead of looking up the outgoing IP address for every packet.
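
As a minimal sketch of that variant, assuming wan0 is the external interface and 192.0.2.1 (a placeholder documentation address) is its static public address, the masquerade line in the file above would be replaced by an equivalent snat statement. Added at runtime with the nft tool, the rule looks like this:

# nft add rule ip nat postrouting oifname "wan0" snat to 192.0.2.1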

An nftables configuration file with some rules applied:

>  cat /etc/nftables.conf
    #!/usr/sbin/nft -f

flush ruleset

table inet filter {
    chain base_checks {
        ## another set, this time for connection tracking states.
        # allow established/related connections
        ct state {established, related} accept;

        # early drop of invalid connections
        ct state invalid drop;
    }

    chain input {
        type filter hook input priority 0; policy drop;

        # allow from loopback
        iif "lo" accept;

        jump base_checks;

        # allow icmp and igmp
        ip6 nexthdr icmpv6 icmpv6 type { echo-request, echo-reply, packet-too-big, time-exceeded, parameter-problem, destination-unreachable, mld-listener-query, mld-listener-report, mld-listener-reduction, nd-router-solicit, nd-router-advert, nd-neighbor-solicit, nd-neighbor-advert, ind-neighbor-solicit, ind-neighbor-advert, mld2-listener-report } accept;
        ip protocol icmp icmp type { echo-request, echo-reply, destination-unreachable, router-solicitation, router-advertisement, time-exceeded, parameter-problem } accept;
        ip protocol igmp accept;

        # for testing reject with logging
        counter log prefix "[nftables] input reject " reject;
    }
    chain forward {
        type filter hook forward priority 0; policy accept;
    }
    chain output {
        type filter hook output priority 0; policy accept;
    }
}

In the above example:

  • The input chain's policy is drop, with a reject rule added at the end, which means all incoming traffic that is not explicitly allowed is blocked.

  • All traffic on the localhost interface is allowed.

  • The base_checks chain handles all packets that are related to established connections. This ensures that incoming packets for outgoing connections are not blocked.

  • ICMP and IGMP packets are allowed by utilizing a set and specific type names.

  • A counter keeps a count of both the total number of packets and bytes it has seen since it was last reset. With nftables, you must specify a counter for each rule you want to count; see the command after this list for how to inspect these counters.
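
For example, the counter values of the rules in the input chain can be inspected at runtime with:

# nft list chain inet filter input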

23.4.4.3.1 More information

For more information about nftables, see:

  • Upstream documentation

  • nftables man page

23.5 Migrating from SuSEfirewall2

Note
Note: Creating a firewalld configuration for AutoYaST

See the Firewall Configuration section of the AutoYaST Guide to learn how to create a firewalld configuration for AutoYaST.

When upgrading from any service pack of SUSE Linux Enterprise Desktop 12 to SUSE Linux Enterprise Desktop 15 SP6, SuSEfirewall2 is not changed and remains active. There is no automatic migration, so you must migrate to firewalld manually. firewalld includes a helper migration script, susefirewall2-to-firewalld. Depending on the complexity of your SuSEfirewall2 configuration, the script may perform a perfect migration, or it may fail. Most likely, it will partially succeed, and you will have to review your new firewalld configuration and make adjustments.

The resulting configuration makes firewalld behave somewhat like SuSEfirewall2. To take full advantage of firewalld's features, you may elect to create a new configuration rather than trying to migrate your old one. It is safe to run the susefirewall2-to-firewalld script with no options, as it makes no permanent changes to your system. However, if you are administering the system remotely, you could get locked out.

Install and run susefirewall2-to-firewalld:

# zypper in susefirewall2-to-firewalld
# susefirewall2-to-firewalld
INFO: Reading the /etc/sysconfig/SuSEfirewall2 file
INFO: Ensuring all firewall services are in a well-known state.
INFO: This will start/stop/restart firewall services and it's likely
INFO: to cause network disruption.
INFO: If you do not wish for this to happen, please stop the script now!
5...4...3...2...1...Lets do it!
INFO: Stopping firewalld
INFO: Restarting SuSEfirewall2_init
INFO: Restarting SuSEfirewall2
INFO: DIRECT: Adding direct rule="ipv4 -t filter -A INPUT -p udp -m udp --dport 5353 -m pkttype
  --pkt-type multicast -j ACCEPT"
[...]
INFO: Enabling direct rule=ipv6 -t filter -A INPUT -p udp -m udp --dport 546 -j ACCEPT
INFO: Enabling direct rule=ipv6 -t filter -A INPUT -p udp -m udp --dport 5353 -m pkttype
  --pkt-type multicast -j ACCEPT
INFO: Enable logging for denied packets
INFO: ##########################################################
INFO:
INFO: The dry-run has been completed. Please check the above output to ensure
INFO: that everything looks good.
INFO:
INFO: ###########################################################
INFO: Stopping firewalld
INFO: Restarting SuSEfirewall2_init
INFO: Restarting SuSEfirewall2

This results in a lot of output, which you may wish to direct to a file for easier review:

# susefirewall2-to-firewalld | tee newfirewallrules.txt

The script supports these options:

-c

Commit changes. The script makes changes to the system, so make sure you only use this option if you are really happy with the proposed changes. This will reset your current firewalld configuration, so make sure you make backups!

-d

Super noisy. Use it to file bug reports but be careful to mask sensitive information.

-h

This message.

-q

No output. Errors will not be printed either!

-v

Verbose mode. It will print warnings and other informative messages.

23.6 More information

The most up-to-date information and other documentation about the firewalld package is found in /usr/share/doc/packages/firewalld. The home page of the netfilter and iptables project, https://www.netfilter.org, provides a large collection of documents about iptables in general in many languages.

24 Configuring a VPN server

Internet connections are easily available and affordable. However, not all connections are secure. Using a Virtual Private Network (VPN), you can create a secure network within an insecure network such as the Internet or Wi-Fi. It can be implemented in different ways and serves several purposes. In this chapter, we focus on the OpenVPN implementation to link branch offices via secure wide area networks (WANs).

24.1 Conceptual overview

This section defines some terms regarding VPN and gives a brief overview of some scenarios.

24.1.1 Terminology

Endpoint

The two ends of a tunnel, the source or destination client.

Tap device

A tap device simulates an Ethernet device (layer 2 of the OSI model) and works with Ethernet frames. A tap device is used for creating a network bridge.

Tun device

A tun device simulates a point-to-point network (layer 3 of the OSI model) and works with IP packets. A tun device is used with routing.

Tunnel

Linking two locations through a primarily public network. From a more technical viewpoint, it is a connection between the client's device and the server's device. A tunnel is typically encrypted, but it does not need to be by definition.

24.1.2 VPN scenarios

Whenever you set up a VPN connection, your IP packets are transferred over a secured tunnel. A tunnel can use either a tun or tap device. They are virtual network kernel drivers which implement the transmission of Ethernet frames or IP frames/packets.

Any user space program, such as OpenVPN, can attach itself to a tun or tap device to receive packets sent by your operating system. The program is also able to write packets to the device.

There are many solutions to set up and build a VPN connection. This section focuses on the OpenVPN package. Compared to other VPN software, OpenVPN can be operated in two modes:

Routed VPN

Routing is an easy solution to set up. It is more efficient and scales better than a bridged VPN. Furthermore, it allows the user to tune the MTU (Maximum Transmission Unit) to raise efficiency. However, in a heterogeneous environment, if you do not have a Samba server on the gateway, NetBIOS broadcasts do not work. If you need IPv6, the drivers for the tun devices on both ends must support this protocol explicitly. This scenario is depicted in Figure 24.1, “Routed VPN”.

Figure 24.1: Routed VPN
Bridged VPN

Bridging is a more complex solution. It is recommended when you need to browse Windows file shares across the VPN without setting up a Samba or WINS server. Bridged VPN is also needed to use non-IP protocols (such as IPX) or applications relying on network broadcasts. However, it is less efficient than routed VPN. Another disadvantage is that it does not scale well. This scenario is depicted in the following figures.

Figure 24.2: Bridged VPN - scenario 1
Figure 24.3: Bridged VPN - scenario 2
Figure 24.4: Bridged VPN - scenario 3

The major difference between bridging and routing is that a routed VPN cannot IP-broadcast while a bridged VPN can.

24.2 Setting up a simple test scenario

In the following example, we create a point-to-point VPN tunnel. The example demonstrates how to create a VPN tunnel between one client and a server. It is assumed that your VPN server uses private IP addresses like IP_OF_SERVER and your client uses the IP address IP_OF_CLIENT. Make sure you select addresses which do not conflict with other IP addresses.

Warning
Warning: Use only for testing

The following scenario is provided as an example for familiarizing yourself with VPN technology. Do not use this as a real-world scenario, as it can compromise the security and safety of your IT infrastructure.

Tip
Tip: Names for configuration file

To simplify working with OpenVPN configuration files, we recommend the following:

  • Place your OpenVPN configuration files in the directory /etc/openvpn.

  • Name your configuration files MY_CONFIGURATION.conf.

  • If there are multiple files that belong to the same configuration, place them in a subdirectory like /etc/openvpn/MY_CONFIGURATION.

24.2.1 Configuring the VPN server

To configure a VPN server, proceed as follows:

Procedure 24.1: VPN server configuration
  1. Install the package openvpn on the machine that later becomes your VPN server.

  2. Open a shell, become root and create the VPN secret key: