USB Device RAID 1

Jarret B

Redundant Array of Inexpensive Disks (RAID) is a storage technology that can improve the read and write performance of a set of disks, provide data redundancy, or both. RAID is organized into various levels. This article covers RAID Level 1 and how to implement it on a Linux system.

RAID 1 Overview

RAID 1 is sometimes referred to as Disk Mirroring. A RAID 1 Array consists of two drives, and data is written to both drives simultaneously. Since the drives hold identical copies, the data is redundant. If one drive should fail, the data is still intact and usable. Once the failed disk is replaced, the mirror can be re-established to provide redundancy again. One drawback of Mirroring is that half of the total storage is lost to redundancy. For example, if two 1 TB drives are used, only 1 TB of storage is usable to hold data; the other 1 TB holds the redundant copy.

Hardware

The two USB drives I use are called ORANGE and GREEN, named for the color of each thumb drive. Both are SanDisk Cruzer Switch drives which are USB 2.0 compliant and have a capacity of 4 GB (3.7 GB usable).

NOTE: When dealing with RAID arrays, all disks should be the same size. If they are not, they must be partitioned to be the same size. The smallest drive in the array sets the usable size of all of the disks.

I placed both USB sticks in the same hub and tested the write speed. A 100 MB file was written to each drive and timed; the average write time was 11.5 seconds, making the average write speed 8.70 MB/sec. I performed a read test as well, with an average read time of 3.5 seconds, making the average read speed 28.6 MB/sec.
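
If you want to run a similar test yourself, one possible approach (not necessarily how the numbers above were measured) is to time 'dd'; the mount point and file name below are only placeholders:

dd if=/dev/zero of=/media/jarret/ORANGE/test.img bs=1M count=100 oflag=direct   # write test, bypassing the cache
dd if=/media/jarret/ORANGE/test.img of=/dev/null bs=1M iflag=direct             # read test, straight from the drive

When 'dd' finishes, it prints the elapsed time and the transfer rate.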

To set up the RAID Array, you use the command 'mdadm'. If the program is not installed on your system, you will receive an error in the terminal when you enter the command 'mdadm'.

To install it, use Synaptic or the like for your Linux distro.
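
On a Debian-based distro, for example, it can be installed straight from a terminal:

sudo apt-get install mdadm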

Once installed, you are ready to make a RAID 1 Array.

Creating the RAID Array

Open a terminal and type 'lsblk' to get a list of your available drives. Make a note of the drives you intend to use so you do not mistype a drive name and add the wrong drive to the Array.

NOTE: Entering the wrong drive can cause a loss of data.
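
If the default listing is cluttered, 'lsblk' can be told which columns to print, for example:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT

This shows only each device's name, size, type, and mount point, making the USB sticks easier to pick out.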

From the listing given by the command above, I am using sdc1 and sdd1. The command is as follows:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1 --verbose

The command creates (--create) a RAID Array called md0. The RAID Level is 1 (--level=1), and two devices (--raid-devices=2) are used to create the RAID Array: sdc1 and sdd1.

The following should occur:

jarret@Symple-PC ~ $ sudo mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1 --verbose
mdadm: /dev/sdc1 appears to contain an ext2fs file system
    size=3909632K  mtime=Wed Oct 26 21:03:30 2016
mdadm: Note: this array has metadata at the start and may not be suitable as a boot device. If you plan to store '/boot' on this device please ensure that your boot-loader understands md/v1.x metadata, or use --metadata=0.90
mdadm: /dev/sdd1 appears to contain an ext2fs file system
    size=3909632K  mtime=Wed Oct 26 21:03:34 2016
mdadm: size set to 3907520K
Continue creating array?

NOTE: If you get an error that a device is busy, remove 'dmraid'. On a Debian system, use the command 'sudo apt-get remove dmraid' and, when it completes, reboot the system. After the system restarts, try the 'mdadm' command again. If the drives are mounted, you also need to unmount them with 'umount' before creating the Array.
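
Putting that together, and assuming the two USB sticks are still sdc1 and sdd1, the cleanup would look like:

sudo apt-get remove dmraid        # only if you received the 'device is busy' error
sudo umount /dev/sdc1 /dev/sdd1   # unmount both drives before creating the Array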

Answer 'y' to the question 'Continue creating array?' and the following should appear:

mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

The RAID Array is created and running, but not yet ready for use.

Prepare md0 for use

If you look around in your file manager, the drive md0 is not to be found. Open the GParted application and you will see it there, ready to be prepared for use.

By selecting /dev/md0 you will get an error that no Partition Table exists on the RAID Array. Select Device from the top menu and then 'Create Partition Table…'. Specify your partition table type (msdos, for instance) and click APPLY.

Now, create the Partition and select the file system to be used; either ext3 or ext4 is suggested for formatting the Array. You may also want to set the RAID Flag on the Partition. I gave it a Label of “Mirror” and then clicked APPLY to make all the selected changes. The drive should be formatted as selected, and the RAID Array is then ready to be mounted for use.
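
If you prefer the terminal over GParted, the same preparation can be sketched with 'parted' and 'mkfs.ext4'; this assumes the Array device is still /dev/md0, an msdos Partition Table, and an ext4 format:

sudo parted /dev/md0 mklabel msdos                            # create the Partition Table
sudo parted -a optimal /dev/md0 mkpart primary ext4 0% 100%   # one partition covering the whole Array
sudo mkfs.ext4 -L Mirror /dev/md0p1                           # format it with the Label 'Mirror'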

Mount RAID Array

Before closing GParted, look at the Partition name, as shown in Figure 1. My Partition name is '/dev/md127p1'. (The Array was created as md0, but without an entry in mdadm's configuration file, the system may re-assemble it under a different name, such as md127.) The partition name is important for mounting.

FIGURE 01

You may be able to simply mount 'Mirror' as I was able to do.

If the mount does not work, then try the following. Go to your '/media' folder and, as ROOT, create a folder, such as RAID, to be used as a mount point. In a terminal, use the command 'sudo mount /dev/md127p1 /media/RAID' to mount the RAID Array at the mount point named RAID.
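
Put together, the two steps are (using the partition name /dev/md127p1 from GParted and RAID as the mount point):

sudo mkdir /media/RAID                # create the mount point as ROOT
sudo mount /dev/md127p1 /media/RAID   # mount the RAID Array on it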

Now you must take ownership of the RAID Array with the command:

sudo chown -R jarret:jarret /media/RAID

The command uses my username (jarret) and group name (jarret) to take ownership of the mounted RAID Array. Use your own username and mount point.
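
To verify that the ownership change took, list the mount point:

ls -ld /media/RAID

The owner and group columns should now show your username rather than root.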

Now, when I write to the RAID Array, the time to write a 100 MB file averages 15 seconds (the same file must be written to both drives), making the write speed 6.67 MB/sec. Reading a 100 MB file from the RAID Array takes an average of 3 seconds, a speed of 33.33 MB/sec.

As you can see, the speeds have changed noticeably (write: 8.70 MB/s down to 6.67 MB/s; read: 28.6 MB/s up to 33.3 MB/s). Do remember, if one drive of the Array is removed or fails, the redundancy is lost, but the data is still available.

NOTE: The speed may be increased by placing each drive on a separate USB Root Hub. To see the number of Root Hubs you have and where each device is located, use the command 'lsusb'.
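
The tree view makes the hub layout easier to see:

lsusb -t

It prints each Root Hub with the devices attached beneath it, so you can confirm the two drives are not sharing a single hub.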

Auto Mount the RAID Array

Having the RAID Array auto mount after each reboot is a simple task. Run the command 'blkid' to get the needed information about the RAID Array. For example, running it after mounting my RAID mount point, I get the following:

/dev/sda2: UUID="73d91c92-9a38-4bc6-a913-048971d2cedd" TYPE="ext4"
/dev/sda3: UUID="9a621be5-750b-4ccd-a5c7-c0f38e60fed6" TYPE="ext4"
/dev/sda4: UUID="78f175aa-e777-4d22-b7b0-430272423c4c" TYPE="ext4"
/dev/sda5: UUID="d5991d2f-225a-4790-bbb9-b9a48e691061" TYPE="swap"
/dev/sdc1: LABEL="My Book" UUID="54D8D96AD8D94ABE" TYPE="ntfs"
/dev/sdb1: UUID="bd9095fd-1125-0345-0697-19c4bad0d684" UUID_SUB="a7681b0a-8573-be11-e629-8893d7c73c16" LABEL="Symple-PC:0" TYPE="linux_raid_member"
/dev/sde1: UUID="bd9095fd-1125-0345-0697-19c4bad0d684" UUID_SUB="2a94a813-cab2-c1eb-6e6b-e5f9dc80659e" LABEL="Symple-PC:0" TYPE="linux_raid_member"
/dev/md127p1: LABEL="Mirror" UUID="77d6d037-0a2f-4a78-824f-a54d38e01407" TYPE="ext4"

The needed information is the line with the partition '/dev/md127p1'. The Label is Mirror, the UUID is '77d6d037-0a2f-4a78-824f-a54d38e01407', and the type is ext4.

Edit the file '/etc/fstab' as ROOT using an editor you prefer and add a line similar to 'UUID=77d6d037-0a2f-4a78-824f-a54d38e01407 /media/RAID ext4 defaults 0 0'. The UUID comes from the blkid command, '/media/RAID' is the mount point, and ext4 is the drive format, followed by the word 'defaults' and then '0 0'. Be sure to use a TAB between each field.
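
The finished line in '/etc/fstab' would therefore read:

UUID=77d6d037-0a2f-4a78-824f-a54d38e01407	/media/RAID	ext4	defaults	0	0

To test the entry without rebooting, unmount the Array and run 'sudo mount -a', which mounts everything listed in /etc/fstab and reports any mistakes in the line.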

Your RAID 1 drive Array should now be completely operational.

NOTE: Looking at the two lines before /dev/md127p1, you can see that both share the same UUID, each has its own UUID_SUB, and the TYPE is "linux_raid_member". The listing lets you see the two original devices being used in the RAID 1 Array.

Removing the RAID Array

To stop the RAID Array, you need to unmount the RAID mount point and then stop the Array device 'md127' (you stop the Array itself, not the partition 'md127p1' on it) as follows:

sudo umount -l /media/RAID
sudo mdadm --stop /dev/md127

Once done, you need to reformat the drives and also remove the line from /etc/fstab which enabled it to be automounted.
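
Reformatting alone can leave the RAID metadata on the drives, so the system may still detect them as Array members. A cleanup sketch, assuming the member partitions are sdb1 and sde1 as in the blkid listing above:

sudo mdadm --zero-superblock /dev/sdb1 /dev/sde1   # wipe the RAID metadata from each member

After this, the drives can be repartitioned and reformatted as ordinary USB sticks.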

Fixing a broken RAID Array


If one of the two drives should fail, you can easily replace the drive with a new one and restore the data to it.

Now, let's say from the above that drive sde1 fails. If I enter the 'lsblk' command, the drive sdb1 is shown and still listed under 'md127p1'. The RAID device is still accessible and usable, but the Fault Tolerance is unavailable since only the one drive remains.

To determine the faulty drive, use the command: 'cat /proc/mdstat'.

$ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sde1[1](F) sdb1[0]
3907520 blocks super 1.2 [2/1] [U_]

unused devices: <none>

The line beginning 'md127' shows that sde1 is the failed device, marked with (F). The '[U_]' on the next line confirms the break in the Array: the 'U' is the working member and the underscore is the missing one. So, you know to remove the sde1 drive and replace it.
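
Another way to check the Array's health, with more detail than /proc/mdstat, is:

sudo mdadm --detail /dev/md127

Among other things, it reports the Array state, the counts of active, working, and failed devices, and which member holds which role.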

To fix a broken RAID Array, replace the failed drive with a new drive at least as large as the previous one. After adding the new drive, run 'lsblk' to find its address. In my case, the new drive is 'sdf1'. Since the system mounted it under its existing label, 'Label', I must first unmount it with the command 'umount /media/jarret/Label'.

To join the new drive to the existing broken RAID 1 Array, the command is:

sudo mdadm --manage /dev/md127 --add /dev/sdf1

The RAID Array device is 'md127'; mdadm manages the Array itself rather than the partition 'md127p1' shown in GParted. The device to add is 'sdf1'.

To see the progress of the rebuild, use the command 'cat /proc/mdstat'. The following shows the output of the rebuilding process:

jarret@Symple-PC ~ $ cat /proc/mdstat
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdf1[2] sde1[1](F) sdb1[0]
3907520 blocks super 1.2 [2/1] [U_]
[===>.................] recovery = 18.0% (705216/3907520) finish=20.7min speed=2572K/sec

unused devices: <none>

Line three shows some useful information (active raid1 sdf1[2] sde1[1](F) sdb1[0]). The Array is RAID 1 and now consists of devices sdb1 and sdf1. The device sde1 was the original member, which failed (F).

The fourth line shows '[U_]'; the underscore shows the Array is still broken while the rebuild runs. The command can be re-entered to watch the progress. The fifth line shows the estimated time to finish the rebuild (20.7 minutes) and the speed (2572K/sec) at which the data copy is occurring.

At any time, the command 'cat /proc/mdstat' can be used to see the state of any existing RAID Array.
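
Rather than re-typing it, you can have the command refreshed automatically:

watch -n 5 cat /proc/mdstat

This re-runs 'cat /proc/mdstat' every 5 seconds until you press CTRL+C, which is handy for following a long rebuild.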

Once the RAID Array is rebuilt, the command 'cat /proc/mdstat' would show the results of:

Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sdf1[2] sde1[1](F) sdb1[0]
3907520 blocks super 1.2 [2/2] [UU]

unused devices: <none>

If you must remove a drive, you can first tell the system that the device has failed. For instance, if I wanted to remove drive sde1 because it was making strange noises and I was afraid it would fail soon, the command to mark it as failed would be:

sudo mdadm --manage /dev/md127 --fail /dev/sde1

The command 'cat /proc/mdstat' should now show the device marked as failed. Before you just unplug the device, you need to tell the system to remove it from the Array. The command would be:

sudo mdadm --manage /dev/md127 --remove /dev/sde1

You can now remove the drive, add a new one and rebuild the Array as described above.
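
In short, swapping out a suspect drive is a three-step sequence (assuming the Array device is md127, the suspect member is sde1, and sdf1 is the replacement):

sudo mdadm --manage /dev/md127 --fail /dev/sde1     # mark the member as failed
sudo mdadm --manage /dev/md127 --remove /dev/sde1   # detach it from the Array
sudo mdadm --manage /dev/md127 --add /dev/sdf1      # add the replacement and begin the rebuild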

Hope this helps you understand a RAID 1 Array. Enjoy your RAID Array!
 