RAID Performance on Ubuntu

Posted on August 30, 2011

The numbers are in.

RAID can make disk volumes more reliable and faster.

A few months ago I posted an article explaining how Redundant Arrays of Inexpensive Disks (RAID) can provide a means for making your disk accesses faster and more reliable.

In this post I report on numbers from one of our servers running Ubuntu Linux. The real performance numbers closely match the theoretical performance I described earlier.

Besides the different RAID levels (which I cover in the other post) there are two main categories of RAID to consider: hardware and software. In hardware RAID the algorithms are implemented in a controller card; in software RAID your CPU does the work. At first blush it seems that hardware RAID would be the way to go. On the other hand, software RAID is still fast, it's less expensive, and it isn't susceptible to a single point of failure.

The most compelling argument against hardware RAID is that if your RAID card fails you must replace it with an identical card, otherwise all your data is lost.  You lose the data even if you had selected an ultra redundant RAID configuration that could sustain multiple disk failures. This susceptibility to a single point of failure runs against the idea of reliable storage that brought us to RAID in the first place.

The main argument against software RAID concerns how much of your CPU it may consume. I was surprised to see how little CPU RAID actually uses: in my tests I never saw the RAID daemon consume more than 1% of one of the 4 processors. So the processor impact of software RAID looks to be trivial.
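If you want to sanity-check this on your own machine, here's a minimal sketch (assuming an array at /dev/md0; the name of the md kernel thread varies with the RAID level):

```
# Show the state of all software RAID arrays
cat /proc/mdstat

# Detailed status of one array (assumes it was created as /dev/md0)
sudo mdadm --detail /dev/md0

# While the array is under load, check the CPU usage of the md kernel
# threads (named after the array and level, e.g. md0_raid5)
ps -eo pid,pcpu,comm | grep -E 'md[0-9]'
```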

[Figure: performance numbers for several configurations of storage devices.]

I evaluated the performance of a single, traditional 250 GB 3.5″ disk, a single 80 GB Solid State Disk (SSD), and RAID arrays composed of 4 x 250 GB 3.5″ disks (see figure).

Let’s consider each configuration:

  • 250GB Disk: This is our baseline for reference. Just a plain ol' 7200 RPM Western Digital enterprise hard drive with a 64MB cache and a SATA 3 Gb/s interface. Available for about $70.
  • 80GB SSD: A Corsair SSD, available for about $135. Fast and reliable; we include it as another benchmark.
  • RAID1+0: The four 250GB disks are first mirrored (two pairs of mirrored drives) and then striped. As the numbers show, we get about twice the read and write performance of a standalone drive. Pros: in this configuration, two of the four drives can fail and the volume will still operate (the failures can't both be in the same mirrored pair, of course). Cons: only 500GB of space, only 2x performance. Commands for building this and the other arrays with mdadm appear after the list.
  • RAID3: Unfortunately not supported by Linux's md software RAID driver.
  • RAID5: I understand all the other RAID levels, but descriptions of RAID5 are somewhat mysterious; roughly, data is striped across all the disks with parity blocks distributed among them. In any case, what matters is the performance and the robustness to failure. Pros: in this configuration we get 750GB of effective space and we can lose one disk and still operate. Read speeds are about 3x that of a standalone drive. Cons: write speed is slower than that of a standalone drive.
  • RAID0: I have a need… A need for speed. Simple striping across 4 drives. Pros: in this configuration we get 1000GB of effective space, 4x read speeds, and 4x write speeds. Cons: it sounds great, but if any single disk dies, the whole volume is lost, and with four disks the chance that at least one fails is roughly four times that of a single drive.
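For reference, here is a minimal sketch of how arrays like these could be created with mdadm on Ubuntu. The device names /dev/sdb through /dev/sde and the mount point /mnt/raid are assumptions; substitute your own disks, and beware that creating an array destroys whatever is on them.

```
# Four disks assumed at /dev/sdb..sde -- verify with lsblk before running!

# RAID1+0: Linux md "raid10" with its default layout on four disks acts
# like two mirrored pairs striped together
sudo mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde

# RAID5 (750GB usable) or RAID0 (1000GB usable) use the same form:
#   sudo mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]
#   sudo mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sd[b-e]

# Put a filesystem on the array and mount it
sudo mkfs.ext4 /dev/md0
sudo mkdir -p /mnt/raid
sudo mount /dev/md0 /mnt/raid
```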

Our worst performer, as expected, is the lone 250 GB disk. The best performer is our RAID0 volume. In RAID0 all 4 disks are written to (or read from) simultaneously, providing almost 4 times the performance for reading and writing. Why not always use RAID0? Because if any one of the 4 disks fails, the entire volume fails, and there's a good chance one of them will.

RAID5 provides a nice balance of redundancy (one disk can fail) with performance (reading is 3x as fast as for a standalone disk). I was disappointed to see that writing is significantly slower than for a standalone disk; every write must also update parity, which turns writes into read-modify-write operations.
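To reproduce this kind of measurement, a rough sketch with dd (assuming the array is mounted at /mnt/raid, as in the sketch above) looks like this; direct I/O keeps the page cache from inflating the numbers:

```
# Sequential write test: 4GB, bypassing the page cache
dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=4096 oflag=direct

# Flush and drop caches so the read test hits the disks, not RAM
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches

# Sequential read test; dd reports throughput when it finishes
dd if=/mnt/raid/testfile of=/dev/null bs=1M iflag=direct
```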

The single SSD is very fast, and it would be exciting to package 4 of these together as a RAID volume.  I’ll bet the performance would be stunning. I wasn’t able to test that configuration due to cost constraints.

In the end, for me, it’s a split decision between RAID1+0 and RAID5. Both can survive a disk failure (RAID1+0 can survive 2). I think I’ll go with RAID1+0 for the slightly better reliability.

Posted in: research, technology