Maximizing drive performance: RAID and single drives (+ hardware vs. software RAID)

Linux101

I have a RAID array and some other drives. I'm trying to figure out what exactly is limiting their performance and whether there is a way to maximize it.

I have a six-disk array of old RE4 drives. In RAID 10 they average around 120 MB/s no matter what. Individual drive performance is around 70-90 MB/s on average for random/sequential work.

I've run into some odd behavior. If you start a copy job, stop it, and then restart it, it can hit a burst mode that goes as fast as 650 MB/s with real files (game folders on a home computer). My questions: what is causing all of this, what should the speed normally be (is ~120 MB/s a hard limit?), and can anything be done about it?
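If it helps narrow this down, here's a rough way to see what the array sustains once the page cache can't absorb the write any more (the mount point and size are just placeholders):

Code:
# Write 4 GiB and include the final flush in the timing, so the page cache
# can't hide the real disk speed
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=4096 conv=fdatasync status=progress

# Same write, but bypassing the page cache entirely
dd if=/dev/zero of=/mnt/raid/ddtest bs=1M count=4096 oflag=direct status=progress

rm /mnt/raid/ddtest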

I've run into several things I wonder about as the cause.

1. The Linux kernel only buffers about 1 GB of writes before it throttles. This fits the burst-mode behavior: it starts fast and decays continuously down to the normal limit of around 120 MB/s. (See the sysctl sketch after this list.)
2. File system or other organization in RAM. I've noticed it never uses very much. Can filesystem or scheduler logic outside the kernel be used to maximize performance so everything behaves like a sequential write all the time?
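If cause 1 is what's happening, I'd guess it shows up in the dirty-page writeback sysctls. A quick sketch for looking at them and temporarily raising them (the numbers are just something to experiment with, not recommendations):

Code:
# Show the current dirty-page writeback thresholds (percent of RAM, or bytes if set)
sysctl vm.dirty_background_ratio vm.dirty_ratio vm.dirty_background_bytes vm.dirty_bytes

# Temporarily let more of a copy sit in RAM before the kernel throttles the writer
# (example values only; resets on reboot)
sudo sysctl -w vm.dirty_background_ratio=10
sudo sysctl -w vm.dirty_ratio=40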

I have noatime and lazytime on all my drives at the moment. It helps with some things but isn't improving the RAID performance. Is there a way to fix this? Are there settings for the HDDs that can be adjusted, like queue depth or the dirty-page thresholds, that would make it utilize the drives fully, or even get near the 210-270 MB/s you'd expect from the drives' rated speeds? Do we have filesystems or other tools in Linux that automatically organize files ahead of time to maximize drive performance?
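For the per-drive knobs, I assume these are the sysfs files involved (sdX stands in for each member disk, and the numbers are only test values):

Code:
# Readahead and I/O scheduler for one member disk
cat /sys/block/sdX/queue/read_ahead_kb
cat /sys/block/sdX/queue/scheduler

# Example: raise readahead on the md array itself (md127 is my array) for sequential work
echo 4096 | sudo tee /sys/block/md127/queue/read_ahead_kb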

I also have a cheap SMR backup drive with a 256 MB cache. If that could be utilized fully, SMR would be very useful; while bursting it can get close to 500 MB/s.

My question: if it can do this when you force it by canceling and restarting a copy job, why can't it be made to do this normally, without any intervention?

BTW, it was bursting for about 10 GB at those speeds before dropping heavily. With certain RAID settings it even went from 850-650 MB/s down to only 400 MB/s at one point, but it wasn't stable or constant.

If this could be done all the time it would be fantastic.

What about a filesystem that organizes the small files in a folder into a manageable chunk before trying to copy, so it's ready for all read/write functions to maximize disk use? Each folder could have something like inodes, where each distinct file gets a tag with a number reference plus size or order values (which order to place or find them in RAM, and to write them to disk, for faster retrieval), so files are organized in RAM and written out once a big enough chunk is built up. Even if it used some HDD space it would be worth it. It could amend the existing inode structure by storing this info with or alongside the specific inode data.

I would sacrifice some HDD space for pure performance, especially if it could be added to an already-installed filesystem and kept up to date. It should be able to, since it has to keep updating itself for things like file-size data anyway (assuming it can't just use existing data... derp). It could even have an update function that reads to RAM and then writes using its own queueing logic for maximum speed. In fact, can this be done without adding extra inode-like data, by simply using existing information to organize reads/writes?
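The closest userspace approximation I can think of in the meantime is streaming a folder of small files through tar, so thousands of tiny reads and writes become one mostly-sequential stream (both paths are placeholders):

Code:
# Pack on the read side, unpack on the write side; the pipe keeps the
# destination writing one continuous stream instead of many small files
tar -C /path/to/games -cf - . | tar -C /mnt/raid/games -xf -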

The only other issue would be if it needs to scale with available RAM, reading and then writing in chunks to keep up with the job, particularly if it runs out of RAM.

Oh, and BTW, can hardware RAID controllers reach those speeds without software tricks, or would they also only do 120 MB/s on average? Would the hardware RAID on my mobo do any better? I just saw this hardware RAID controller: https://www.amazon.com/LSI-Logic-SAS9260-8I-8PORT-512MB/dp/B002IT4YG2 Not sure if it's any good.

I have one of these: https://www.gigabyte.com/Motherboard/GA-970A-D3-rev-10-11#ov

BTW, when I used a program to test the RAID, it showed utilization, and it didn't seem to use more than one disk most of the time. Is this normal behavior? Is this the reason for the low performance? Is there a way to make it keep using all the disks constantly? I'm surprised that, with how long RAID has been around, it hasn't been designed to fully utilize every disk all the time; with more computing power you would think it would be refined to do better.
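The tool I used may not have been the best; I assume iostat (from the sysstat package) would show whether all six members are actually busy during a copy:

Code:
# Extended stats in MB, refreshed every 2 seconds; %util per member disk
# shows whether the copy really hits all six drives or mostly one
iostat -xm 2 /dev/sd[c-h] /dev/md127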


Do hardware RAID cards help saturate the disks more evenly, or is there a way to get better performance with software RAID? I think my array is mostly using just one disk. This is a spinning-rust array.
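Before buying a card I should probably check how the array is laid out, since the md RAID 10 layout (near/far/offset copies) and chunk size change how reads spread across the disks. Something like:

Code:
# Chunk size, layout and member state of the array
sudo mdadm --detail /dev/md127

# Quick overall view of all md arrays
cat /proc/mdstat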

In particular, does hardware RAID speed up writes for lots of small files?

I'm almost assuming it would need SRAM rather than SDRAM to accomplish this.

This is a fio test from my RAID:
Code:
$ sudo fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75
test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64
fio-3.21
Starting 1 process
test: Laying out IO file (1 file / 4096MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=4667KiB/s,w=1602KiB/s][r=1166,w=400 IOPS][eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=3368: Sun Sep 13 01:06:41 2020
  read: IOPS=1003, BW=4014KiB/s (4110kB/s)(3070MiB/783209msec)
   bw (  KiB/s): min= 2064, max= 4872, per=100.00%, avg=4017.75, stdev=277.30, samples=1563
   iops        : min=  516, max= 1218, avg=1004.30, stdev=69.24, samples=1563
  write: IOPS=335, BW=1341KiB/s (1374kB/s)(1026MiB/783209msec); 0 zone resets
   bw (  KiB/s): min=  688, max= 1856, per=100.00%, avg=1342.80, stdev=136.19, samples=1563
   iops        : min=  172, max=  464, avg=335.62, stdev=34.07, samples=1563
  cpu          : usr=1.95%, sys=8.70%, ctx=679177, majf=0, minf=6
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued rwts: total=785920,262656,0,0 short=0,0,0,0 dropped=0,0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: bw=4014KiB/s (4110kB/s), 4014KiB/s-4014KiB/s (4110kB/s-4110kB/s), io=3070MiB (3219MB), run=783209-783209msec
  WRITE: bw=1341KiB/s (1374kB/s), 1341KiB/s-1341KiB/s (1374kB/s-1374kB/s), io=1026MiB (1076MB), run=783209-783209msec

Disk stats (read/write):
    dm-4: ios=785921/262671, merge=0/0, ticks=46718058/3285546, in_queue=50003604, util=100.00%, aggrios=785921/262671, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
    md127: ios=785921/262671, merge=0/0, ticks=0/0, in_queue=0, util=0.00%, aggrios=130988/87552, aggrmerge=7/7, aggrticks=7762990/728758, aggrin_queue=8492235, aggrutil=74.07%
  sdf: ios=111871/87523, merge=0/8, ticks=2749745/44767, in_queue=2794654, util=66.44%
  sdd: ios=109400/87661, merge=0/6, ticks=2834184/40096, in_queue=2874525, util=65.60%
  sdg: ios=133583/87480, merge=10/3, ticks=6624765/1811839, in_queue=8436823, util=32.05%
  sde: ios=150177/87520, merge=23/11, ticks=18331649/511518, in_queue=18843758, util=72.23%
  sdc: ios=152528/87656, merge=4/11, ticks=10170350/152622, in_queue=10324514, util=74.07%
  sdh: ios=128374/87477, merge=10/6, ticks=5867252/1811706, in_queue=7679138, util=32.60%

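That run is 4k random I/O, which is pretty much worst case for spinning disks. A sequential job like this (parameters are just what I'd try) should be closer to what a big file copy does:

Code:
$ sudo fio --name=seqtest --filename=seqtest --ioengine=libaio --direct=1 --bs=1M --iodepth=16 --size=8G --readwrite=write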
The queue depth for my drives is 32 across the board. Is it a good idea to increase it to 64 or 128 for a RAID array, or for my disks in general?
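If it's worth experimenting, I believe the block-layer queue can be raised per device like this (takes effect immediately, reverts on reboot; 128 is just a test value):

Code:
# Raise the request queue on one RAID member (repeat for sdc..sdh)
echo 128 | sudo tee /sys/block/sdc/queue/nr_requests

# The drive-side NCQ depth is separate and typically capped at 31/32 for SATA
cat /sys/block/sdc/device/queue_depth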