RAID (Redundant Array of Independent Disks) is a way of storing the same data in different places on multiple hard disks to protect data in the case of a drive failure. However, not all RAID levels provide redundancy. The main purposes of RAID are:

  • To expand drive capacity: e.g. RAID 0
  • To prevent loss of data in case of drive failure: e.g. RAID 1, RAID 5, RAID 6, and RAID 10

Creating RAID arrays using mdadm

To create and manage storage arrays with RAID capabilities, the mdadm utility can be used. The mdadm command can perform all the functions needed to manage multiple storage devices as an array.

RAID 0 Array

The RAID 0 array works by breaking data into chunks and striping it across the available disks. RAID 0, or striping, distributes data among the drives in the array: the data is split into parts, and each disk in the array holds a portion of the striped data. All of these disks are referenced when the data is retrieved.
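As a rough illustration, the sketch below is a toy Python model of striping (conceptual only, not how the md driver actually works): data is split into fixed-size chunks that are dealt out round-robin across the disks, and a read gathers them back in order.

```python
# Toy model of RAID 0 striping: chunks are distributed
# round-robin across the disks, and reads reassemble them.

CHUNK = 4  # chunk size in bytes (real arrays use much larger chunks, e.g. 512K)

def stripe(data: bytes, disks: int) -> list[list[bytes]]:
    """Split data into chunks and deal them out round-robin."""
    layout = [[] for _ in range(disks)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for i, chunk in enumerate(chunks):
        layout[i % disks].append(chunk)
    return layout

def read_back(layout: list[list[bytes]]) -> bytes:
    """Reassemble the original data by reading the disks in turn."""
    out = []
    for i in range(max(len(disk) for disk in layout)):
        for disk in layout:
            if i < len(disk):
                out.append(disk[i])
    return b"".join(out)

disks = stripe(b"ABCDEFGHIJKLMNOP", 2)
print(disks)             # [[b'ABCD', b'IJKL'], [b'EFGH', b'MNOP']]
print(read_back(disks))  # b'ABCDEFGHIJKLMNOP'
```

Note how losing either disk would leave only half of the chunks, which is exactly why RAID 0 has no fault tolerance.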

Requirements

RAID 0 requires a minimum of two drives, and because of the way data is distributed, the total capacity of the drives in the array is combined into a single volume. Two 100GB drives paired together in a striped RAID 0 configuration will be recognized as a single 200GB volume.

Primary Benefit: Performance
Cons: RAID 0 does not mirror or store any parity data, so the loss of a single drive will take down the entire array.

To create a RAID 0 array with the component devices (e.g. /dev/sda and /dev/sdb), use the following mdadm --create command.
sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
/dev/md0 – The device name you wish to create
--level – RAID level
--raid-devices – Number of devices
To verify that the RAID array was created successfully, check the /proc/mdstat file.
cat /proc/mdstat
The output will show that the /dev/md0 device has been created in the RAID 0 configuration using the /dev/sda and /dev/sdb devices.

Create and Mount the Filesystem

We need to create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
Mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available:
df -h
Verify that the new filesystem is mounted and accessible.

Save the Array Layout

Adjust the /etc/mdadm/mdadm.conf file to make sure that the array is reassembled automatically at boot.
To automatically scan the active array and append the result to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
(A plain >> redirection after sudo would be performed by the unprivileged shell, not by sudo, so tee -a is used instead.)
For the array to be available during the early boot process, update the initramfs or initial RAM file system.
sudo update-initramfs -u
Append the following line to the /etc/fstab file for automatic mounting of the new filesystem at boot.
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The RAID 0 array should now be assembled and mounted automatically during each boot.

RAID 1 Array

The RAID 1 array is implemented by mirroring data across all the disks in the array: every disk holds a complete, duplicate copy of the data.
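The idea can be sketched in a few lines of toy Python (a conceptual model, not the md implementation): every write is duplicated onto all disks, and a read can be served by any surviving disk.

```python
# Toy model of RAID 1 mirroring: writes go to every disk,
# reads can be served by any disk that still holds the data.

def mirror_write(disks: list[dict], key: str, value: bytes) -> None:
    """Duplicate the write onto every disk in the mirror."""
    for disk in disks:
        disk[key] = value

def mirror_read(disks: list[dict], key: str) -> bytes:
    """Read from the first disk that still holds the data."""
    for disk in disks:
        if key in disk:
            return disk[key]
    raise IOError("all mirrors failed")

disks = [{}, {}]
mirror_write(disks, "block0", b"important data")
disks[0].clear()                     # simulate a failed drive
print(mirror_read(disks, "block0"))  # b'important data' - still readable
```

The data survives any single-disk failure, at the cost of storing it twice.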

Requirements

RAID 1, or mirroring, also requires a minimum of two storage devices.

Primary Benefit

RAID 1 offers data redundancy, and the array can be rebuilt after a disk failure without any loss of data. Because of the redundancy, the total capacity of a RAID 1 volume equals the capacity of a single disk. If two 500GB disks are used, the total capacity of the RAID 1 volume will still be 500GB.

Cons:

Since two copies of the data are maintained, only half of the total disk capacity is usable.
To create a RAID 1 array with the component devices (e.g. /dev/sda and /dev/sdb), use the mdadm --create command.
sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
/dev/md0 is the device name
--level – RAID level
--raid-devices – number of devices
If the boot flag is not enabled for the partitions used as component devices, you will be given a warning. You can continue creating the array by typing y.
The mdadm tool will start to mirror the drives. This process will take some time to complete, but the array can be used during this time. The progress of the mirroring can be monitored by checking the /proc/mdstat file.
cat /proc/mdstat
/dev/md0 has been created in the RAID 1 configuration using the devices /dev/sda and /dev/sdb.
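The resync progress can also be extracted programmatically. The sketch below parses a sample /proc/mdstat snippet; note that the sample text is illustrative and the exact format can vary between kernel versions.

```python
import re

# Illustrative /proc/mdstat contents while a RAID 1 mirror is resyncing
# (sample text, not captured from a real system).
SAMPLE_MDSTAT = """\
Personalities : [raid1]
md0 : active raid1 sdb[1] sda[0]
      104792064 blocks super 1.2 [2/2] [UU]
      [=>...................]  resync =  8.3% (8716800/104792064) finish=9.6min speed=166955K/sec
"""

def resync_percent(mdstat: str):
    """Return the resync progress percentage, or None if no resync is running."""
    match = re.search(r"resync\s*=\s*([\d.]+)%", mdstat)
    return float(match.group(1)) if match else None

print(resync_percent(SAMPLE_MDSTAT))  # 8.3
```

A monitoring script could poll open("/proc/mdstat").read() with this helper until the resync finishes.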

Create and Mount the Filesystem

Create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
Mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
df -h
Verify that the new filesystem is mounted and accessible.

Save the Array Layout

Adjust the /etc/mdadm/mdadm.conf file to make sure that the array is reassembled automatically at boot.
To automatically scan the active array and append the result to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
For the array to be available during the early boot process, update the initramfs or initial RAM file system.
sudo update-initramfs -u
Append the following line to the /etc/fstab file for automatic mounting of the new filesystem at boot.
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The RAID 1 array should now be assembled and mounted automatically during each boot.

RAID 5 Array

A RAID 5 array is implemented by striping data across multiple devices with distributed parity. RAID 5 stripes both the data and the parity information over multiple devices, which gives good data redundancy. One component of each stripe is a calculated parity block. If a device fails, the parity blocks and the remaining data blocks can be used to calculate the missing data. The device that receives the parity block is rotated, so that each device holds a balanced amount of parity information.

Requirements

To create a RAID 5 array you need a minimum of three hard drives, but more disks can be added.

What is Parity?

Parity is one of the simplest methods of detecting errors in data storage. The parity information is distributed across all the disks, consuming the equivalent of one disk's capacity in total. If any one of the disks fails, the data can still be recovered by rebuilding it from the parity information after replacing the failed disk.
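RAID 5 parity is computed with bitwise XOR: the parity block is the XOR of the data blocks in a stripe, so any single missing block can be recomputed by XOR-ing the survivors. A toy Python sketch (conceptual only):

```python
# Toy model of RAID 5 parity: parity = XOR of the data blocks,
# so any single lost block can be rebuilt from the rest.

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def parity(blocks: list[bytes]) -> bytes:
    """XOR all blocks together."""
    result = blocks[0]
    for block in blocks[1:]:
        result = xor_blocks(result, block)
    return result

d0, d1 = b"DATA", b"MORE"
p = parity([d0, d1])      # parity block, stored on a third disk

# The disk holding d1 fails: rebuild it from the survivors.
rebuilt = parity([d0, p])
print(rebuilt)            # b'MORE'
```

Because XOR is its own inverse, rebuilding is the same operation as computing the parity in the first place.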

Pros:
  • Gives better performance.
  • Supports redundancy and fault tolerance.
  • Supports hot spare options.
  • No data loss if a single disk fails; the array can be rebuilt from parity after replacing the failed disk.
  • Suited to transaction-oriented environments, since reads are fast.
Cons:
  • One disk's worth of capacity is lost to parity information.
  • Writes are slower due to the parity overhead.
  • Rebuilds take a long time.

To create a RAID 5 array with the component devices (e.g. /dev/sda, /dev/sdb and /dev/sdc), use the mdadm --create command.
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb /dev/sdc
/dev/md0 is the device name
--level – RAID level
--raid-devices – number of devices
The mdadm tool will start to configure the array. This process will take some time to complete, but the array can be accessed during this time. The progress of the initial sync can be monitored by checking the /proc/mdstat file.
cat /proc/mdstat
/dev/md0 has been created in the RAID 5 configuration using the devices /dev/sda, /dev/sdb and /dev/sdc.

Create and Mount the Filesystem

Create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
df -h
Verify that the new filesystem is mounted and accessible.

Save the Array Layout

Adjust the /etc/mdadm/mdadm.conf file to make sure that the array is reassembled automatically at boot. Make sure the array has finished assembling before adjusting the configuration.
To automatically scan the active array and append the result to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
For the array to be available during the early boot process, update the initramfs or initial RAM file system.
sudo update-initramfs -u
Append the following line to the /etc/fstab file for automatic mounting of the new filesystem at boot.
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The RAID 5 array should now be assembled and mounted automatically during each boot.

RAID 6 Array

RAID 6 is an upgraded version of RAID 5. It stores two distributed parity blocks per stripe, which provides fault tolerance even if two drives fail: a mission-critical system remains operational through two concurrent disk failures. RAID 6 is similar to RAID 5 but more robust, since it dedicates the equivalent of one additional disk to parity. Even if two disks are lost, the data can be recovered by replacing the failed drives and rebuilding from parity.
A RAID 6 array requires a minimum of four disks. RAID 6 is chosen where high fault tolerance is needed: it is common in high-availability database environments, where the data is critical and must be kept safe at any cost, and it can also be useful for video streaming environments.
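The capacity cost of the parity can be expressed as usable = (n − 2) × disk size for RAID 6, and analogously for the other levels covered here. A small illustrative Python helper (the function name is my own, not part of mdadm), assuming n identical disks:

```python
# Usable capacity for the RAID levels covered in this article,
# assuming n identical disks of disk_gb gigabytes each.

def usable_gb(level: int, n: int, disk_gb: int) -> int:
    if level == 0:     # striping: all capacity usable
        return n * disk_gb
    if level == 1:     # mirroring: one disk's worth usable
        return disk_gb
    if level == 5:     # one disk's worth lost to parity
        return (n - 1) * disk_gb
    if level == 6:     # two disks' worth lost to parity
        return (n - 2) * disk_gb
    if level == 10:    # striped mirror pairs: half usable
        return n * disk_gb // 2
    raise ValueError("unsupported RAID level")

print(usable_gb(6, 4, 500))  # 1000 - four 500GB disks yield 1TB usable
```

For example, four 500GB disks give 2TB raw capacity but only 1TB usable in RAID 6.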

Requirements

To create a RAID 6 array, a minimum of four storage devices is required.

Pros:
  • Good performance.
  • No data loss, even after two disks fail; the array can be rebuilt from parity after replacing the failed disks.
  • Read performance is better than RAID 5 because data is read from multiple disks, but write performance will be very poor without a dedicated RAID controller.
Cons:
  • RAID 6 is expensive, as it requires the equivalent of two independent drives for the parity functions.
  • Two disks' worth of capacity is lost to storing parity information.

To create a RAID 6 array with the component devices (e.g. /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd), use the mdadm --create command.
sudo mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
/dev/md0 is the device name
--level – RAID level
--raid-devices – number of devices
The mdadm tool will start to configure the array. This process will take some time to complete, but the array can be accessed during this time. The progress of the initial sync can be monitored by checking the /proc/mdstat file.
cat /proc/mdstat
/dev/md0 has been created in the RAID 6 configuration using the devices /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd.

Create and Mount the Filesystem

Create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
df -h
Verify that the new filesystem is mounted and accessible.

Save the Array Layout

Adjust the /etc/mdadm/mdadm.conf file to make sure that the array is reassembled automatically at boot. Make sure the array has finished assembling before adjusting the configuration.
To automatically scan the active array and append the result to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
For the array to be available during the early boot process, update the initramfs or initial RAM file system.
sudo update-initramfs -u
Append the following line to the /etc/fstab file for automatic mounting of the new filesystem at boot.
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The RAID 6 array should now be assembled and mounted automatically during each boot.

RAID 10 Array

RAID 10 is a combination of RAID 0 and RAID 1, providing both redundancy and high performance. At least four storage devices are needed to set up RAID 10.

Requirements

We need a minimum of four disks for RAID 10. The disks are grouped into mirrored pairs (RAID 1), and data is then striped (RAID 0) across those pairs.
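Conceptually, RAID 10 stripes chunks across mirrored pairs, so each chunk exists on two disks. A toy Python sketch of the layout (illustrative only, not the md implementation):

```python
# Toy model of RAID 10: data is split into chunks, chunks are striped
# across mirrored pairs, and each pair stores two identical copies.

CHUNK = 4

def raid10_write(data: bytes, pairs: int):
    """Stripe chunks across mirrored pairs of disks."""
    layout = [([], []) for _ in range(pairs)]
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    for i, chunk in enumerate(chunks):
        primary, mirror = layout[i % pairs]
        primary.append(chunk)  # copy on the first disk of the pair
        mirror.append(chunk)   # identical copy on its mirror
    return layout

def raid10_read(layout) -> bytes:
    """Reassemble, reading from whichever disk of each pair survives."""
    out = []
    depth = max(len(primary) or len(mirror) for primary, mirror in layout)
    for i in range(depth):
        for primary, mirror in layout:
            source = primary if i < len(primary) else mirror
            if i < len(source):
                out.append(source[i])
    return b"".join(out)

disks = raid10_write(b"ABCDEFGHIJKLMNOP", 2)
disks[0][0].clear()        # one disk fails in the first pair
print(raid10_read(disks))  # b'ABCDEFGHIJKLMNOP' - data survives
```

The array tolerates one failed disk per mirrored pair, while reads and writes are still spread across the pairs as in RAID 0.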

Pros:
  • Gives better performance.
  • Reads and writes are very fast, because I/O is spread across all four disks at the same time.
  • Well suited to database solutions that need high disk-write I/O.
Cons:
  • Half of the total capacity (two of the four disks) is lost to mirroring.

To create a RAID 10 array with the component devices (e.g. /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd), use the mdadm --create command.
sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
/dev/md0 is the device name
--level – RAID level
--raid-devices – number of devices
The mdadm tool will start to configure the array. This process will take some time to complete, but the array can be accessed during this time. The progress of the mirroring can be monitored by checking the /proc/mdstat file.
cat /proc/mdstat
/dev/md0 has been created in the RAID 10 configuration using the devices /dev/sda, /dev/sdb, /dev/sdc and /dev/sdd.

Create and Mount the Filesystem

Create a filesystem on the array:
sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
df -h
Verify that the new filesystem is mounted and accessible.

Save the Array Layout

Adjust the /etc/mdadm/mdadm.conf file to make sure that the array is reassembled automatically at boot. Make sure the array has finished assembling before adjusting the configuration.
To automatically scan the active array and append the result to the /etc/mdadm/mdadm.conf file:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
For the array to be available during the early boot process, update the initramfs or initial RAM file system.
sudo update-initramfs -u
Append the following line to the /etc/fstab file for automatic mounting of the new filesystem at boot.
/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0
The RAID 10 array should now be assembled and mounted automatically during each boot.
