Applies to SUSE Linux Enterprise Server 11 SP4

2 What’s New for Storage in SLES 11

The features and behavior changes noted in this section were made for SUSE Linux Enterprise Server 11.

2.1 What’s New in SLES 11 SP4

With regard to storage, SUSE Linux Enterprise Server 11 SP4 is a bug fix release; no new features were added.

2.2 What’s New in SLES 11 SP3

In addition to bug fixes, the features and behavior changes in this section were made for the SUSE Linux Enterprise Server 11 SP3 release:

2.2.1 Btrfs Quotas

Btrfs quota support for subvolumes on the root file system has been added to the btrfs(8) command.
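
The following is a minimal sketch of how subvolume quotas are typically enabled and applied with btrfs(8); the mount point and subvolume path are examples only:

btrfs quota enable /                         # turn on quota tracking for the file system mounted at /
btrfs qgroup limit 10G /opt/example-subvol   # limit the example subvolume to 10 GB
btrfs qgroup show /                          # display the quota groups and their usage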

2.2.2 iSCSI LIO Target Server

YaST supports the iSCSI LIO Target Server software. For information, see Chapter 15, Mass Storage over IP Networks: iSCSI LIO Target Server.

2.2.3 Linux Software RAIDs

The following enhancements were added for Linux software RAIDs:

2.2.3.1 Support for Intel RSTe+

Software RAID provides improved support for the Intel RSTe+ (Rapid Storage Technology Enterprise) platform, with support for RAID levels 0, 1, 4, 5, 6, and 10.

2.2.3.2 LEDMON Utility

The LEDMON utility supports PCIe-SSD enclosure LEDs for MD software RAIDs. For information, see Chapter 12, Storage Enclosure LED Utilities for MD Software RAIDs.

2.2.3.3 Device Order in the Software RAID

In the Add RAID wizard in the YaST Partitioner, the Classify option allows you to specify the order in which the selected devices in a Linux software RAID are used, so that one half of the array resides on one disk subsystem and the other half on a different disk subsystem. For example, if one disk subsystem fails, the system keeps running from the second disk subsystem. For information, see Step 4.d in Section 10.3.3, “Creating a Complex RAID10 with the YaST Partitioner”.

2.2.4 LVM2

The following enhancements were added for LVM2:

2.2.4.1 Thin Pool and Thin Volumes

LVM logical volumes can be thinly provisioned. For information, see Section 4.5, “Configuring Logical Volumes”.

  • Thin pool:  The logical volume is a pool of space that is reserved for use with thin volumes. The thin volumes can allocate their needed space from it on demand.

  • Thin volume:  The volume is created as a sparse volume. The volume allocates needed space on demand from a thin pool.
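
As an illustration only (the volume group, pool, and size values below are placeholders), a thin pool and a thin volume can be created with lvcreate along these lines:

lvcreate --size 100G --thinpool pool0 vg1                    # reserve 100 GB of vg1 as a thin pool
lvcreate --virtualsize 500G --thin --name thin1 vg1/pool0    # sparse volume that draws space from pool0 on demand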

2.2.4.2 Thin Snapshots

LVM logical volume snapshots can be thinly provisioned. Thin provisioning is assumed if you create a snapshot without a specified size. For information, see Section 17.2, “Creating Linux Snapshots with LVM”.
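
For example (the names below are placeholders), omitting the size when snapshotting a thin volume creates a thin snapshot:

lvcreate --snapshot --name thin1-snap vg1/thin1    # no --size given, so the snapshot is thinly provisioned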

2.2.5 Multipath I/O

The following changes and enhancements were made for multipath I/O:

2.2.5.1 mpathpersist(8)

The mpathpersist(8) utility is new. It can be used to manage SCSI persistent reservations on Device Mapper Multipath devices. For information, see Section 7.3.5, “Linux mpathpersist(8) Utility”.
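
A brief sketch of typical usage, assuming a hypothetical multipath device /dev/mapper/mpatha and an example key:

mpathpersist --out --register --param-sark=0x123abc /dev/mapper/mpatha    # register a reservation key
mpathpersist --in --read-keys /dev/mapper/mpatha                          # list the registered keys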

2.2.5.2 multipath(8)

The following enhancement was added to the multipath(8) command:

  • The -r option allows you to force a device map reload.
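
For example, after changing /etc/multipath.conf you might reload the maps and then verify the result; this is only an illustrative sequence:

multipath -r     # force a reload of the multipath device maps
multipath -ll    # list the current multipath topology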

2.2.5.3 /etc/multipath.conf

The Device Mapper - Multipath tool added the following enhancements for the /etc/multipath.conf file:

  • udev_dir. 

    The udev_dir attribute is deprecated. After you upgrade to SLES 11 SP3 or a later version, you can remove the following line from the defaults section of your /etc/multipath.conf file:

    udev_dir /dev
  • getuid_callout. 

    In the defaults section of the /etc/multipath.conf file, the getuid_callout attribute is deprecated and replaced by the uid_attribute parameter. This parameter is a udev attribute that provides a unique path identifier. The default value is ID_SERIAL.

    After you upgrade to SLES 11 SP3 or a later version, you can modify the attributes in the defaults section of your /etc/multipath.conf file:

    • Remove the following line from the defaults section:

        getuid_callout "/lib/udev/scsi_id --whitelisted --device=/dev/%n"
    • Add the following line to the defaults section:

        uid_attribute "ID_SERIAL"
  • path_selector. 

    In the defaults section of the /etc/multipath.conf file, the default value for the path_selector attribute was changed from "round-robin 0" to "service-time 0". The service-time option chooses the path for the next group of I/O operations based on the amount of outstanding I/O to the path and its relative throughput.

    After you upgrade to SLES 11 SP3 or a later version, you can modify the attribute value in the defaults section of your /etc/multipath.conf file to use the recommended default:

      path_selector "service-time 0"
  • user_friendly_names. 

    The user_friendly_names attribute can be configured in the devices section and in the multipaths section.

  • max_fds. 

    The default setting for the max_fds attribute was changed to max. This allows the multipath daemon to open as many file descriptors as the system allows when it is monitoring paths.

    After you upgrade to SLES 11 SP3 or a later version, you can modify the attribute value in your /etc/multipath.conf file:

    max_fds "max"
  • reservation_key. 

    In the defaults section or multipaths section of the /etc/multipath.conf file, the reservation_key attribute can be used to assign a Service Action Reservation Key that is used with the mpathpersist(8) utility to manage persistent reservations for Device Mapper Multipath devices. The attribute is not used by default. If it is not set, the multipathd daemon does not check for persistent reservation for newly discovered paths or reinstated paths.

    reservation_key <reservation key>

    For example:

    multipaths {
            multipath {
                    wwid             XXXXXXXXXXXXXXXX
                    alias            yellow
                    reservation_key  0x123abc
            }
    }

    For information about setting persistent reservations, see Section 7.3.5, “Linux mpathpersist(8) Utility”.

  • hardware_handler. 

    Four SCSI hardware handlers were added in the SCSI layer that can be used with DM-Multipath:

    scsi_dh_alua
    scsi_dh_rdac
    scsi_dh_hp_sw
    scsi_dh_emc

    These handlers are modules created under the SCSI directory in the Linux kernel. Previously, the hardware handler in the Device Mapper layer was used.

    Add the modules to the initrd image, then specify them in the /etc/multipath.conf file as hardware handler types alua, rdac, hp_sw, and emc. For information about adding the device drivers to the initrd image, see Section 7.4.3, “Configuring the Device Drivers in initrd for Multipathing”.
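
The following /etc/multipath.conf fragment is a sketch that collects the SP3 settings discussed in the list above; the vendor and product values are placeholders, and the hardware_handler value must match your storage array (alua, rdac, hp_sw, or emc):

defaults {
        uid_attribute         "ID_SERIAL"
        path_selector         "service-time 0"
        max_fds               "max"
}
devices {
        device {
                vendor                "EXAMPLE"         # placeholder vendor string
                product               "EXAMPLE-ARRAY"   # placeholder product string
                hardware_handler      "1 alua"          # use the SCSI layer ALUA handler
                user_friendly_names   no                # may also be set per device or per multipath
        }
}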

2.3 What’s New in SLES 11 SP2

In addition to bug fixes, the features and behavior changes in this section were made for the SUSE Linux Enterprise Server 11 SP2 release.

2.4 What’s New in SLES 11 SP1

In addition to bug fixes, the features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 SP1 release.

2.4.1 Saving iSCSI Target Information

In the YaST > Network Services > iSCSI Target function (Section 14.2.2, “Creating iSCSI Targets with YaST”), a Save option was added that allows you to export the iSCSI target information. This makes it easier to provide information to consumers of the resources.

2.4.2 Modifying Authentication Parameters in the iSCSI Initiator

In the YaST > Network Services > iSCSI Initiator function (Section 14.2.2, “Creating iSCSI Targets with YaST”), you can modify the authentication parameters for connecting to target devices. Previously, you needed to delete the entry and re-create it in order to change the authentication information.

2.4.3 Allowing Persistent Reservations for MPIO Devices

A SCSI initiator can issue SCSI reservations for a shared storage device, which locks out SCSI initiators on other servers from accessing the device. These reservations persist across SCSI resets that might happen as part of the SCSI exception handling process.

The following are possible scenarios where SCSI reservations would be useful:

  • In a simple SAN environment, persistent SCSI reservations help protect against administrator errors, such as attempting to add a LUN to one server while it is already in use by another server, which might result in data corruption. SAN zoning is typically used to prevent this type of error.

  • In a high-availability environment with failover set up, persistent SCSI reservations help protect against errant servers connecting to SCSI devices that are reserved by other servers.

2.4.4 MDADM 3.0.2

Use the latest version of the Multiple Devices Administration (MDADM, mdadm) utility to take advantage of bug fixes and improvements.

2.4.5 Boot Loader Support for MDRAID External Metadata

Support was added to use the external metadata capabilities of the MDADM utility version 3.0 to install and run the operating system from RAID volumes defined by the Intel Matrix Storage Technology metadata format. This moves the functionality from the Device Mapper RAID (DMRAID) infrastructure to the Multiple Devices RAID (MDRAID) infrastructure, which offers the more mature RAID 5 implementation and the wider feature set of the MD kernel infrastructure. It allows a common RAID driver to be used across all metadata formats, including Intel, DDF (common RAID disk data format), and native MD metadata.

2.4.6 YaST Install and Boot Support for MDRAID External Metadata

The YaST installer tool added support for MDRAID External Metadata for RAID 0, 1, 10, 5, and 6. The installer can detect RAID arrays and whether the platform RAID capabilities are enabled. If multipath RAID is enabled in the platform BIOS for Intel Matrix Storage Manager, it offers options for DMRAID, MDRAID (recommended), or none. The initrd was also modified to support assembling BIOS-based RAID arrays.

2.4.7 Improved Shutdown for MDRAID Arrays that Contain the Root File System

Shutdown scripts were modified to wait until all of the MDRAID arrays are marked clean. The operating system shutdown process now waits until all MDRAID volumes have finished write operations and their dirty bit is cleared.

Changes were made to the startup script, shutdown script, and the initrd to consider whether the root (/) file system (the system volume that contains the operating system and application files) resides on a software RAID array. The metadata handler for the array is started early in the shutdown process to monitor the final root file system environment during the shutdown. The handler is excluded from the general killall events. The process also allows for writes to be quiesced and for the array’s metadata dirty-bit (which indicates whether an array needs to be resynchronized) to be cleared at the end of the shutdown.

2.4.8 MD over iSCSI Devices

The YaST installer now allows MD to be configured over iSCSI devices.

If RAID arrays are needed on boot, the iSCSI initiator software is loaded before boot.md so that the iSCSI targets are available to be auto-configured for the RAID.

For a new install, Libstorage creates an /etc/mdadm.conf file and adds the line AUTO -all. During an update, the line is not added. If /etc/mdadm.conf contains the line

AUTO -all

then no RAID arrays are auto-assembled unless they are explicitly listed in /etc/mdadm.conf.
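
A sketch of an /etc/mdadm.conf with auto-assembly disabled and one explicitly listed array; the device name and UUID below are placeholders:

# Only arrays listed explicitly below are assembled automatically
AUTO -all
ARRAY /dev/md0 UUID=aaaaaaaa:bbbbbbbb:cccccccc:dddddddd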

2.4.9 MD-SGPIO

The MD-SGPIO utility is a standalone application that monitors RAID arrays via sysfs(2). Events trigger an LED change request that controls blinking for LED lights that are associated with each slot in an enclosure or a drive bay of a storage subsystem. It supports two types of LED systems:

  • 2-LED systems (Activity LED, Status LED)

  • 3-LED systems (Activity LED, Locate LED, Fail LED)

2.4.10 Resizing LVM 2 Mirrors

The lvresize, lvextend, and lvreduce commands that are used to resize logical volumes were modified to allow the resizing of LVM 2 mirrors. Previously, these commands reported errors if the logical volume was a mirror.

2.4.11 Updating Storage Drivers for Adapters on IBM Servers

Update the following storage drivers to use the latest available versions to support storage adapters on IBM servers:

  • Adaptec: aacraid, aic94xx

  • Emulex: lpfc

  • LSI: mptsas, megaraid_sas

    The mptsas driver now supports native EEH (Enhanced Error Handler) recovery, which is a key feature for all I/O devices on the Power platform.

  • QLogic: qla2xxx, qla3xxx, qla4xxx

2.5 What’s New in SLES 11

The features and behavior changes noted in this section were made for the SUSE Linux Enterprise Server 11 release.

2.5.1 EVMS2 Is Deprecated

The Enterprise Volume Management Systems (EVMS2) storage management solution is deprecated. All EVMS management modules have been removed from the SUSE Linux Enterprise Server 11 packages. Your non-system EVMS-managed devices should be automatically recognized and managed by Logical Volume Manager 2 (LVM2) when you upgrade your system. For more information, see Evolution of Storage and Volume Management in SUSE Linux Enterprise.

If you have EVMS managing the system device (any device that contains the root (/) file system, /boot, or swap), do the following to prepare the SLES 10 server before you reboot it to upgrade (a sketch of the /etc/fstab change appears after this procedure):

  1. In the /etc/fstab file, modify the boot and swap disks to the default /dev/system/sys_lx directory:

    1. Remove /evms/lvm2 from the path for the swap and root (/) partitions.

    2. Remove /evms from the path for /boot partition.

  2. In the /boot/grub/menu.lst file, remove /evms/lvm2 from the path.

  3. In the /etc/sysconfig/bootloader file, verify that the path for the boot device is the /dev directory.

  4. Ensure that boot.lvm and boot.md are enabled:

    1. In YaST, click System > Runlevel Editor > Expert Mode.

    2. Select boot.lvm.

    3. Click Set/Reset > Enable the Service.

    4. Select boot.md.

    5. Click Set/Reset > Enable the Service.

    6. Click Finish, then click Yes.

  5. Reboot and start the upgrade.
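
As a sketch only (volume names, file system types, and options vary by system), step 1 turns an EVMS-style /etc/fstab entry such as the first line below into the second:

/dev/evms/lvm2/system/root   /   ext3   defaults   1 1
/dev/system/root             /   ext3   defaults   1 1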

For information about managing storage with EVMS2 on SUSE Linux Enterprise Server 10, see the SUSE Linux Enterprise Server 10 SP3: Storage Administration Guide.

2.5.2 Ext3 as the Default File System

The Ext3 file system has replaced ReiserFS as the default file system recommended by the YaST tools at installation time and when you create file systems. ReiserFS is still supported. For more information, see File System Support on the SUSE Linux Enterprise 11 Tech Specs Web page.

2.5.3 Default Inode Size Increased for Ext3

To allow space for extended attributes and ACLs for a file on Ext3 file systems, the default inode size for Ext3 was increased from 128 bytes on SLES 10 to 256 bytes on SLES 11. For information, see Section 1.2.3.4, “Ext3 File System Inode Size and Number of Inodes”.
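
If you need a specific inode size when creating an Ext3 file system, it can be set at mkfs time; the device name below is a placeholder:

mkfs.ext3 -I 128 /dev/sdX1    # create the file system with 128-byte inodes instead of the 256-byte default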

2.5.4 JFS File System Is Deprecated

The JFS file system is no longer supported. The JFS utilities were removed from the distribution.

2.5.5 OCFS2 File System Is in the High Availability Release

The OCFS2 file system is fully supported as part of the SUSE Linux Enterprise High Availability Extension.

2.5.6 /dev/disk/by-name Is Deprecated

The /dev/disk/by-name path is deprecated in SUSE Linux Enterprise Server 11 packages.

2.5.7 Device Name Persistence in the /dev/disk/by-id Directory

In SUSE Linux Enterprise Server 11, the default multipath setup relies on udev to overwrite the existing symbolic links in the /dev/disk/by-id directory when multipathing is started. Before you start multipathing, the link points to the SCSI device by using its scsi-xxx name. When multipathing is running, the symbolic link points to the device by using its dm-uuid-xxx name. This ensures that the symbolic links in the /dev/disk/by-id path persistently point to the same device regardless of whether multipathing is started or not. The configuration files (such as lvm.conf and md.conf) do not need to be modified because they automatically point to the correct device.

See the following sections for more information about how this behavior change affects other features:

2.5.8 Filters for Multipathed Devices

The deprecation of the /dev/disk/by-name directory (as described in Section 2.5.6, “/dev/disk/by-name Is Deprecated”) affects how you set up filters for multipathed devices in the configuration files. If you used the /dev/disk/by-name device name path for the multipath device filters in the /etc/lvm/lvm.conf file, you need to modify the file to use the /dev/disk/by-id path. Consider the following when setting up filters that use the by-id path:

  • The /dev/disk/by-id/scsi-* device names are persistent and created for exactly this purpose.

  • Do not use the /dev/disk/by-id/dm-* names in the filters. These are symbolic links to the Device Mapper devices, and using them results in duplicate PVs being reported in response to a pvscan command. The names appear to change from LVM-pvuuid to dm-uuid and back to LVM-pvuuid.

For information about setting up filters, see Section 7.2.4, “Using LVM2 on Multipath Devices”.
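
The following is a sketch of such a by-id based filter in the devices section of /etc/lvm/lvm.conf, accepting only the persistent scsi-* links and rejecting everything else; adjust the pattern to your environment:

devices {
    filter = [ "a|/dev/disk/by-id/scsi-.*|", "r|.*|" ]
}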

2.5.9 User-Friendly Names for Multipathed Devices

A change in how multipathed device names are handled in the /dev/disk/by-id directory (as described in Section 2.5.7, “Device Name Persistence in the /dev/disk/by-id Directory”) affects your setup for user-friendly names because the two names for the device differ. You must modify the configuration files to scan only the device mapper names after multipathing is configured.

For example, you need to modify the lvm.conf file to scan using the multipathed device names by specifying the /dev/disk/by-id/dm-uuid-.*-mpath-.* path instead of /dev/disk/by-id.
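
For example, a sketch of such a filter entry in /etc/lvm/lvm.conf that scans only the multipathed device mapper names:

devices {
    filter = [ "a|/dev/disk/by-id/dm-uuid-.*-mpath-.*|", "r|.*|" ]
}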

2.5.10 Advanced I/O Load-Balancing Options for Multipath

The following advanced I/O load-balancing options are available for Device Mapper Multipath, in addition to round-robin:

  • Least-pending

  • Length-load-balancing

  • Service-time

For information, see path_selector in Section 7.11.2.1, “Understanding Priority Groups and Attributes”.

2.5.11 Location Change for Multipath Tool Callouts

The mpath_* prio_callouts for the Device Mapper Multipath tool have been moved to shared libraries in /lib/libmultipath/lib*. By using shared libraries, the callouts are loaded into memory on daemon startup. This helps avoid a system deadlock in an all-paths-down scenario, where the programs would need to be loaded from the disk, which might not be available at that point.

2.5.12 Change from mpath to multipath for the mkinitrd -f Option

The option for adding Device Mapper Multipath services to the initrd has changed from -f mpath to -f multipath.

To make a new initrd, the command is now:

mkinitrd -f multipath

2.5.13 Change from Multibus to Failover as the Default Setting for the MPIO Path Grouping Policy

The default setting for the path_grouping_policy in the /etc/multipath.conf file has changed from multibus to failover.

For information about configuring the path_grouping_policy, see Section 7.11, “Configuring Path Failover Policies and Priorities”.
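
If you still need the previous multibus behavior, the policy can be set explicitly in the defaults section of /etc/multipath.conf; this is only a minimal sketch:

defaults {
        path_grouping_policy  multibus
}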
