Configuring Software RAID on Linux Using MDADM

mdadm is a tool for creating and managing software RAID arrays on Linux. In this article, we'll show how to use mdadm (multiple device admin) to create a RAID array, add and manage disks, add a hot-spare drive, and more.

Contents:
  • mdadm: How to Install a Software RAID Management Tool?
  • Creating RAID 1 (Mirror) Using 2 Disks on Linux
  • How to View State or Check the Integrity of a RAID Array?
  • Recovering from a Disk Failure in RAID, Disk Replacement
  • How to Add or Remove Disks to Software RAID on Linux?
  • How to Add a Hot-Spare Drive to an MDADM Array?
  • How to Remove an MDADM RAID Array?
  • Mdmonitor: RAID State Monitoring & Email Notifications
  • Inactive MDADM RAID

mdadm: How to Install a Software RAID Management Tool?

To install mdadm, run the installation command:

  • For CentOS/Red Hat (yum/dnf is used): yum install mdadm
  • For Ubuntu/Debian: apt-get install mdadm

mdadm and its dependencies will be installed:

Running transaction
Installing : libreport-filesystem-2.1.11-43.el7.centos.x86_64 1/2
Installing : mdadm-4.1-1.el7.x86_64 2/2
Verifying : mdadm-4.1-1.el7.x86_64 1/2
Verifying : libreport-filesystem-2.1.11-43.el7.centos.x86_64 2/2
Installed:
mdadm.x86_64 0:4.1-1.el7
Dependency Installed:
libreport-filesystem.x86_64 0:2.1.11-43.el7.centos
Complete!
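
You can quickly verify that the tool is available by checking its version (the exact version string depends on your distribution):

# mdadm --version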

Creating RAID 1 (Mirror) Using 2 Disks on Linux

I have two extra disks installed on my Linux server, and I want to create a software mirror (RAID1) on them. The drives are empty. First, zero the superblocks on the disks you are going to add to the RAID:

# mdadm --zero-superblock --force /dev/vd{b,c}

I have two clean disks: vdb and vdc.

mdadm: Unrecognised md component device - /dev/vdb
mdadm: Unrecognised md component device - /dev/vdc

This output means that neither of the disks has ever been added to an array.

To create a software RAID1 array named /dev/md0 from the two disks, use this command:

# mdadm --create --verbose /dev/md0 -l 1 -n 2 /dev/vd{b,c}

Here, ‘-l 1’ sets the array type (RAID1 in our case), and ‘-n 2’ is the number of disks added to the array.

If you want to create a RAID0 (stripe) array to improve read/write speed by spreading I/O across several physical disks, use this command:

# mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc

For a RAID5 array of three or more drives:

# mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd

After you enter the command, confirm the action and the software RAID will be created.

If you list your block devices, you will see the new md0 RAID device:

# lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vda 253:0 0 20G 0 disk
├─vda1 253:1 0 512M 0 part /boot
└─vda2 253:2 0 19.5G 0 part /
vdb 253:16 0 20G 0 disk
└─md0 9:0 0 20G 0 raid1
vdc 253:32 0 20G 0 disk
└─md0 9:0 0 20G 0 raid1

To create an ext4 file system on the RAID1 device, run this command:

# mkfs.ext4 /dev/md0

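If you prefer XFS, you can format the array with mkfs.xfs instead (this assumes the xfsprogs package is installed):

# mkfs.xfs /dev/md0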

Create a backup directory and mount the RAID device to it:

# mkdir /backup
# mount /dev/md0 /backup/
# df -h

Filesystem Size Used Avail Use% Mounted on
.............
.............
/dev/md0 20G 45M 19G 1% /backup

The array has been mounted without any errors. To avoid having to mount the device manually each time, add the following line to fstab:

# nano /etc/fstab

/dev/md0 /backup ext4 defaults 1 2

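Keep in mind that md device numbering is not guaranteed to stay the same across reboots (an array may come up as /dev/md127, for example) unless it is defined in /etc/mdadm.conf. A more reliable option is to mount the file system by its UUID. Get the UUID with blkid and use it in /etc/fstab (the UUID below is a placeholder, substitute your own value):

# blkid /dev/md0

UUID=<your-filesystem-uuid> /backup ext4 defaults 1 2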

How to View State or Check the Integrity of a RAID Array?

To check data integrity in the array, use the following command:

# echo 'check' > /sys/block/md0/md/sync_action

Then check the contents of this file:

# cat /sys/block/md0/md/mismatch_cnt

If the counter is 0, your array is OK.

To stop the check, run the following:

# echo 'idle' > /sys/block/md0/md/sync_action

To check the state of all RAIDs available on the server, use this command:

# cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 vdc[1] vdb[0]
20954112 blocks super 1.2 [2/2] [UU]

You can view more detailed information about the specific RAID using this command:

# mdadm -D /dev/md0

Let’s go over the main fields in the command output:

  • Version – the metadata version
  • Creation Time – the date and time when the RAID was created
  • Raid Level – the RAID array level
  • Array Size – the usable size of the RAID storage
  • Used Dev Size – the space used on each member device
  • Raid Devices – the number of disks in the RAID
  • Total Devices – the number of disks added to the RAID
  • State – the current state (clean means the array is OK)
  • Active Devices – the number of active disks in the RAID
  • Working Devices – the number of working disks in the RAID
  • Failed Devices – the number of failed devices in the RAID
  • Spare Devices – the number of spare disks in the RAID
  • Consistency Policy – the type of synchronization performed after a failure; resync means a full resynchronization after RAID array recovery (bitmap, journal and ppl modes are also available)
  • UUID – the RAID array identifier

You can view brief information using fdisk:

# fdisk -l /dev/md0

Disk /dev/md0: 21.5 GB, 21457010688 bytes, 41908224 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Recovering from a Disk Failure in RAID, Disk Replacement

If one of the disks in a RAID array has failed or is damaged, you can replace it. First, determine whether the disk is damaged and needs to be replaced.

# cat /proc/mdstat

Personalities : [raid1]
md0 : active raid1 vdb[0]
20954112 blocks super 1.2 [2/1] [U_]

The output shows that only one disk is active. [U_] also indicates that there is a problem; when both disks are healthy, it shows [UU].

The detailed information about the RAID also shows that there are some problems:

# mdadm -D /dev/md0

/dev/md0:
Version : 1.2
Creation Time : Tue Dec 31 12:39:22 2020
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Dec 31 14:41:13 2020
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 1

The Failed Devices line shows that one disk in the RAID is damaged.

In our case, /dev/vdc must be replaced. To restore the array, you must remove the damaged disk and add a new one.

Remove the failed drive:

# mdadm /dev/md0 --remove /dev/vdc

Add a new disk to the array:

# mdadm /dev/md0 --add /dev/vdd

Disk recovery will start automatically after you add a new disk:

# mdadm -D /dev/md0

/dev/md0:
Version : 1.2
Creation Time : Tue Dec 31 12:39:22 2020
Raid Level : raid1
Array Size : 20954112 (19.98 GiB 21.46 GB)
Used Dev Size : 20954112 (19.98 GiB 21.46 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Tue Dec 31 14:50:20 2020
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1
Consistency Policy : resync
Rebuild Status : 48% complete
Name : host1:0 (local to host host1)
UUID : 9d59b1fb:7b0a7b6d:15a75459:8b1637a2
Events : 42
Number Major Minor RaidDevice State
0 253 16 0 active sync /dev/vdb
2 253 48 1 spare rebuilding /dev/vdd
Rebuild Status : 48% complete shows the current progress of the array recovery, and spare rebuilding /dev/vdd shows which disk is being rebuilt into the array.

After the rebuild completes, check the array state again:
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
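
To follow the rebuild progress in real time, you can simply watch /proc/mdstat (the 5-second refresh interval here is just an example):

# watch -n 5 cat /proc/mdstat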

How to Add or Remove Disks to Software RAID on Linux?

If you need to stop the mdadm RAID device you created earlier, unmount it first:

# umount /backup

Then run this command:

# mdadm -S /dev/md0

mdadm: stopped /dev/md0

After the RAID array has been stopped, it is no longer detected as a separate disk device:

# mdadm -S /dev/md0

mdadm: error opening /dev/md0: No such file or directory

You can scan all connected drives and reassemble a previously stopped (or failed) RAID device from the metadata stored on the physical drives. Run the following command:

# mdadm --assemble --scan

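If the automatic scan does not find the array, you can also assemble it by listing the member disks explicitly (the device names here match the example above):

# mdadm --assemble /dev/md0 /dev/vdb /dev/vdc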

If you want to remove a working drive from an array and replace it, first mark the drive as failed:

# mdadm /dev/md0 --fail /dev/vdc

Then you can remove it using this command:

# mdadm /dev/md0 --remove /dev/vdc

Then add a new disk, just as you would when replacing a failed drive:

# mdadm /dev/md0 --add /dev/vdd

How to Add a Hot-Spare Drive to an MDADM Array?

You can add an extra hot-spare drive so that the RAID array rebuilds quickly if one of the active disks fails. Add a free disk to the md device:

# mdadm /dev/md0 --add /dev/vdc

When you check the RAID status, you will see the disk listed as a spare.

To make sure the hot-spare works, mark one of the active drives as failed and check the RAID status:

# mdadm /dev/md0 --fail /dev/vdb

If you check the status now, you will see that the array rebuild has started.

The /dev/vdb disk is marked as failed, and the hot-spare disk has become one of the active RAID disks, so the rebuild process has started.

To add an additional active disk to the RAID, follow these two steps.

Add an empty drive to the array:

# mdadm /dev/md0 --add /dev/vdb

At this point the new disk is shown as a hot-spare. To make it active, grow the md RAID device:

# mdadm -G /dev/md0 --raid-devices=3

The array will then be rebuilt.

After the rebuild, all the disks become active:

Number Major Minor RaidDevice State
3 253 32 0 active sync /dev/vdc
2 253 48 1 active sync /dev/vdd
4 253 16 2 active sync /dev/vdb
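
Note that growing a RAID1 array in this way only adds another mirror copy, so the usable capacity does not change. If you grow an array level where the capacity does increase (RAID0 or RAID5, for example), you also need to extend the file system afterwards, e.g. for ext4:

# resize2fs /dev/md0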

How to Remove an MDADM RAID Array?

If you want to permanently remove a software RAID device, follow this procedure:

Unmount the array and stop the RAID device:

# umount /backup
# mdadm -S /dev/md0

Then clear the superblocks on the disks it was built from:

# mdadm --zero-superblock /dev/vdb
# mdadm --zero-superblock /dev/vdc
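
It also makes sense to remove the array's entry from /etc/fstab and /etc/mdadm.conf so that nothing tries to assemble or mount it at boot. In addition, you can wipe any remaining file system signatures from the member disks with the standard wipefs tool:

# wipefs -a /dev/vdb /dev/vdc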

Mdmonitor: RAID State Monitoring & Email Notifications

The mdmonitor daemon can be used to monitor the status of the RAID. First, you must create the /etc/mdadm.conf file containing the current array configuration:

# mdadm --detail --scan > /etc/mdadm.conf

The mdadm.conf file is not created automatically. You must create and update it manually.
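
The file will contain an ARRAY line for each detected array, similar to the following (the name and UUID here are taken from the example array shown earlier; yours will differ):

ARRAY /dev/md0 metadata=1.2 name=host1:0 UUID=9d59b1fb:7b0a7b6d:15a75459:8b1637a2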

Add the administrator email address to the end of /etc/mdadm.conf so that notifications are sent there if any RAID problems occur:

MAILADDR raidadmin@woshub.com

Then restart the mdmonitor service using systemctl:

# systemctl restart mdmonitor

Then the system will notify you by e-mail if there are any mdadm errors or faulty disks.
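
To check that notifications are actually delivered, you can ask mdadm to generate a test alert for every array it finds (this requires a working local mail setup):

# mdadm --monitor --scan --oneshot --test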

Inactive MDADM RAID

After a hardware failure or an emergency shutdown, the software RAID array may become inactive. All drives are marked as inactive, but show no errors.

# cat /proc/mdstat

Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : inactive vdc[1] vdb[0]
20954112 blocks super
unused devices: <none>

In this case, you need to stop the array using this command:

# mdadm --stop /dev/md0

And reassemble it:

# mdadm --assemble --scan --force

If the md device is registered in /etc/fstab, remount it using this command:

# mount -a

There are also ways to create a software RAID on a system where the OS is already installed. In that case, you have to manually copy the partition table to the new disk, move the contents of the system disk to a RAID that initially consists of a single disk, then clean up the first disk, add it to the RAID, and update initramfs and the GRUB bootloader. So it is easier to select the software RAID installation mode for CentOS during server deployment.
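
As an illustration of just the first step of that migration, copying the partition table from the system disk to the new disk might look like this (device names are examples, and the command overwrites the partition table on the target disk):

# sfdisk -d /dev/vda | sfdisk /dev/vdb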

mdadm makes it easy to manage software RAID on Linux. In this article, I have covered the basics of working with the tool and the typical questions that come up when managing RAID arrays with mdadm.
