Software RAID on SUSE Linux Enterprise Micro

Publication Date: 05 Dec 2024
WHAT?

Basic information about software RAIDs.

WHY?

You need information on RAID levels, or you want to manage or monitor a RAID.

EFFORT

15 minutes of reading time.

GOAL

You will be able to manage a software RAID using mdadm.

REQUIREMENTS
  • A sufficient number of disks or partitions to form the required RAID.

1 Software RAID on SUSE Linux Enterprise Micro

The purpose of RAID (redundant array of independent disks) is to combine several hard disk partitions into one large virtual hard disk to optimize performance, data security, or both. Most RAID controllers use the SCSI protocol, because it can address a larger number of hard disks more effectively than the IDE protocol and is more suitable for parallel processing of commands. There are some RAID controllers that support IDE or SATA hard disks. Software RAID provides the advantages of RAID systems without the additional cost of hardware RAID controllers. However, it requires some CPU time and memory, which makes it unsuitable for computers with high performance requirements.

2 RAID levels

RAID comprises several strategies for combining hard disks in a RAID system, each with different goals, advantages, and characteristics. These variations are commonly known as RAID levels.

The RAID levels can be split into the following categories:

Standard levels

These levels and their associated data formats are standardized by the Storage Networking Industry Association (SNIA) in the Common RAID Disk Drive Format (DDF) standard. The standard RAID levels are RAID 0, RAID 1, RAID 2, RAID 3, RAID 4, RAID 5 and RAID 6. For details, refer to Section 2.1, “Standard RAID levels”.

Nested levels

Combine already existing arrays into a new array. For example, RAID 0+1 or RAID 1+0.

Non-standard levels

Usually, these are proprietary RAID configurations designed to meet specific needs, for example, Linux MD RAID 10.

2.1 Standard RAID levels

Originally, there were only five standard levels of RAID, but other levels have evolved as described in the following sections.

2.1.1 RAID 0

RAID 0 improves the performance of your data operations by spreading out blocks of each file across multiple disks. This data distribution is called striping. The overall capacity is the sum of the capacities of the disks in the RAID. The benefit of RAID 0 is that the throughput of read and write operations to any file is multiplied by the number of drives, because reads and writes are performed concurrently.

The disadvantage of RAID 0 is that it does not provide any redundancy, so if a single disk fails, the whole RAID is destroyed and the data is lost.
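For illustration, a striped array of this kind can be created with mdadm. The array name /dev/md0 and the member partitions /dev/sda1 and /dev/sdb1 below are examples only; adapt them to your system:

> sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1

The --raid-devices value must match the number of partitions listed.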

2.1.2 RAID 1

RAID 1 provides adequate security for your data, because the data is copied to another hard disk 1:1. This is known as hard disk mirroring. This level does not provide striping, so it does not provide a higher read or write throughput. However, the array continues to operate as long as at least one drive is functioning.

RAID 1 requires at least two devices.
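A two-way mirror can be created in a similar way. Again, the array and partition names are placeholders:

> sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1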

2.1.3 RAID 2

In RAID 2, the striping is performed on a bit level. This RAID level is currently not used in practice.

2.1.4 RAID 3

In RAID 3, the striping is performed on a byte level with a dedicated parity drive. RAID 3 is not commonly used in practice.

2.1.5 RAID 4

RAID 4 provides block-level striping with a dedicated parity drive. If a data disk fails, the parity data is used to create a replacement disk. However, the parity disk might create a bottleneck for write access. This level requires at least three devices.

RAID 4 is not commonly used in practice.

2.1.6 RAID 5

RAID 5 is an optimized compromise between Level 0 and Level 1 in terms of performance and redundancy. The usable disk space equals the number of disks used minus one; for example, five disks of equal size provide the capacity of four. As with RAID 0, the data is distributed over the hard disks, but parity blocks are stored as well. The parity is computed with XOR over the corresponding data blocks, so if one disk fails, its contents can be reconstructed from the remaining data and parity blocks.

With RAID 5, no more than one hard disk can fail at the same time. If one hard disk fails, it must be replaced as soon as possible to avoid the risk of losing data.

RAID 5 requires at least three disks.
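As a sketch, the following commands create a three-disk RAID 5 array and later replace a failed member. All device names are examples; adapt them to your setup:

> sudo mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
> sudo mdadm --manage /dev/md0 --remove /dev/sdb1
> sudo mdadm --manage /dev/md0 --add /dev/sdd1

The second and third commands remove a member that the kernel has already marked as failed and add a replacement, after which the array is rebuilt from the remaining data and parity.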

2.1.7 RAID 6

RAID 6 consists of block-level striping with double distributed parity. RAID 6 provides extremely high data fault tolerance by sustaining the failure of up to two drives at the same time. Even if two of the hard disks fail, the system continues to be operational, with no data loss.

The performance for RAID 6 is slightly lower but comparable to RAID 5 in normal mode and single disk failure mode. It is very slow in dual disk failure mode. A RAID 6 configuration needs a considerable amount of CPU time and memory for write operations.

RAID 6 requires at least four disks. In general, it requires N+2 disks, where N is the number of disks required to store data and 2 is for the dual parity.
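A minimal RAID 6 array therefore uses four devices. As before, the device names in the following sketch are placeholders:

> sudo mdadm --create /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1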

2.2 Nested RAID

2.2.1 RAID 0+1

RAID 0+1, also called RAID 01, mirrors striped disks: the data is first striped across a set of disks and that set is then mirrored. The minimum number of disks is four.
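With mdadm, such a nested setup can be sketched by creating two striped arrays and then mirroring them. The array and partition names are examples only:

> sudo mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda1 /dev/sdb1
> sudo mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc1 /dev/sdd1
> sudo mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1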

2.2.2 RAID 1+0

RAID 1+0, also called RAID 10, is a combination of striping and mirroring. Data is distributed across several disks, and each of these disks is mirrored to another disk.
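Linux MD provides its own RAID 10 implementation (listed under the non-standard levels above). With its default layout on four disks it behaves like a stripe of mirrors and can be created with a single command; the device names are placeholders:

> sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1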

3 Managing software RAID

After you set up a RAID, you can perform additional administration tasks, such as naming the RAID device, configuring the stripe size, or monitoring the array, as described in the following sections.

3.1 Naming software RAID

3.1.1 Default names

By default, software RAID devices have names following the pattern mdN, where N is a number. For example, they can be accessed as /dev/md127 and are listed as md127 in /proc/mdstat and /proc/partitions.
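To see which md names are currently in use on your system, you can inspect /proc/mdstat directly:

> cat /proc/mdstat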

3.1.2 Providing non-default names

As working with the default names might be inconvenient, there are two ways to work around this:

Providing a named link to the device

You can optionally specify a name for the RAID device when creating it with YaST or on the command line with mdadm --create /dev/md/NAME. The device name will still be mdN, but the link /dev/md/NAME will be created:

> ls -og /dev/md
total 0
lrwxrwxrwx 1 8 Dec  9 15:11 myRAID -> ../md127

The device will still be listed as md127 under /proc.
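For example, a command of the following form creates an array together with the /dev/md/myRAID link shown above. The RAID level and the member partitions are placeholders; use the values that match your setup:

> sudo mdadm --create /dev/md/myRAID --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1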

Providing a named device

If a named link to the device is not sufficient for your setup, add the line CREATE names=yes to /etc/mdadm.conf by running the following command:

> echo "CREATE names=yes" | sudo tee -a  /etc/mdadm.conf

This will cause names like myRAID to be used as a real device name. The device will not only be accessible at /dev/myRAID, but also listed as myRAID under /proc. Note that this will only apply to RAIDs configured after the change to the configuration file. Active RAIDs will continue to use the mdN names until they get stopped and reassembled.
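For example, assuming the array from above is currently active as /dev/md127 and is not in use, it could be stopped and reassembled as follows so that the new naming takes effect (the array must be listed in /etc/mdadm.conf or be auto-detectable):

> sudo mdadm --stop /dev/md127
> sudo mdadm --assemble --scan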

Warning
Warning: Incompatible tools

Not all tools may support named RAID devices. If a tool expects a RAID device to be named mdN, it will fail to identify the devices.

3.2 Configuring stripe size on RAID 5 on AArch64

By default, the stripe size is set to 4 kB. If you need to change the default stripe size, for example, to match the typical page size of 64 kB on AArch64, you can configure the stripe size manually on the command line:

> echo 16384 | sudo tee /sys/block/md1/md/stripe_size

The above command sets the stripe size to 16 kB. You can set other values, such as 4096 or 8192, but the value must be a power of 2.
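To verify the change, you can read the same attribute back (md1 here refers to the example array above):

> cat /sys/block/md1/md/stripe_size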

3.3 Monitoring software RAIDs

You can run mdadm as a daemon in monitor mode to monitor your software RAID. In monitor mode, mdadm performs regular checks on the array for disk failures. If there is a failure, mdadm sends an e-mail to the administrator. To define the time interval of the checks, run the following command:

> sudo mdadm --monitor --mail=root@localhost --delay=1800 /dev/md2

The command above turns on monitoring of the /dev/md2 array in intervals of 1800 seconds. In the event of a failure, an e-mail will be sent to root@localhost.
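One way to check that alerting works is to run the monitor once with a test alert for each array, and to inspect the current state of the array directly. Both commands assume the /dev/md2 array from the example above:

> sudo mdadm --monitor --test --oneshot --mail=root@localhost /dev/md2
> sudo mdadm --detail /dev/md2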

Note
Note: RAID checks are enabled by default

RAID checks are enabled by default. If the interval between checks is not long enough, you may encounter warnings. In that case, increase the interval by setting a higher value with the --delay option.