Adding a cache drive to mdadm RAID 5

jeremyy44

Member
Joined
Jan 20, 2021
Messages
45
Reaction score
8
Credits
425
Hi, so I just finished configuring my RAID 5, which has four 4TB drives, and I was wondering what the best way to implement a cache drive would be. If there is a good way, what size would be enough? I thought of using a 128GB SSD to start with, since what I usually transfer is media like movies, series, etc. for a Plex server. I mostly want to improve my write speeds to make them more consistent over my 2.5Gb/s NICs.

Thank you.
 


dcbrown73

Well-Known Member
Joined
Jul 14, 2021
Messages
365
Reaction score
338
Credits
3,224
Honestly, I haven't used mdadm in a long time, but if memory serves (and it hasn't changed), I don't think mdadm supports adding cache drives, though there may be alternatives for adding a caching drive at another layer.
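For what it's worth, one such alternative is LVM's dm-cache (lvmcache): the md array can be used as an LVM physical volume with the SSD attached as a cache volume. A rough sketch, with placeholder device names (/dev/md0 for the array, /dev/sde for the SSD); note that pvcreate is destructive, so this only works on a fresh array, not one that already holds a filesystem:

```shell
# Hypothetical device names: /dev/md0 = the RAID5 array, /dev/sde = the SSD.
# All of these commands are destructive and require root.
pvcreate /dev/md0 /dev/sde
vgcreate vg0 /dev/md0 /dev/sde

# Main logical volume, placed on the array only
lvcreate -n data -l 100%PVS vg0 /dev/md0

# Cache volume on the SSD, then attach it in writeback mode
# (writeback is what helps write speed, at some risk if the SSD dies)
lvcreate -n cache0 -L 100G vg0 /dev/sde
lvconvert --type cache --cachevol vg0/cache0 --cachemode writeback vg0/data
```

bcache is another alternative, but it likewise requires formatting the backing device as a bcache device up front; neither approach converts an existing filesystem in place.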

I just want to point out that if your drives are SATA or better (which I suspect they are, given they are 4TB drives, likely on a motherboard with a 6Gb/s SATA bus), I'm pretty sure those SATA drives will smoke (saturate) your 2.5Gb/s Ethernet. You're more likely to be limited by your processor's ability to encode the video stream, depending on what is powering it.
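The back-of-the-envelope math (decimal units; the 2.5GbE figure is before the few percent of TCP/IP framing overhead):

```shell
# 2.5 Gb/s line rate divided by 8 bits per byte
echo "2.5GbE ceiling: $((2500 / 8)) MB/s"
# SATA 6 Gb/s uses 8b/10b encoding, so usable bandwidth is 6000 * 8/10 / 8
echo "SATA 6Gb/s ceiling: $((6000 * 8 / 10 / 8)) MB/s"
```

So even a single modern SATA HDD (~180-250 MB/s sequential) is close to the network's ~312 MB/s ceiling, and a 4-disk array will sit above it.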

My buddy streams as many as 8 streams at once on his dedicated desktop PC running Plex off a single 8TB SATA drive. His limiting factors were his processor's ability to encode the streams on the fly and his upload bandwidth.

My QNAP runs my Plex server, but I may have 2 users tops and that is once in a blue moon.
 
OP
jeremyy44

jeremyy44

Member
Joined
Jan 20, 2021
Messages
45
Reaction score
8
Credits
425
Well, it's sad to hear that mdadm might not support caching; to be honest, I never thought to ask myself that in the first place. And yeah, I'm not too worried about streaming performance, since at most 2 people (including me) are using it at the same time. It was mostly to improve performance when transferring movies from my main PC to my server.

And yes, they are SATA drives, so I guess it could just be some config problem, because at some point I was getting a constant 280 MB/s, but after completely reinstalling Windows 10, and doing the same with my server's Ubuntu Server OS, I'm getting ~150 to 240, so it's really not as stable.

But if caching isn't supported, I guess it's not that bad then. I think it's mostly that I want to see as big a number as possible when transferring data.
 

dcbrown73

Well-Known Member
Joined
Jul 14, 2021
Messages
365
Reaction score
338
Credits
3,224
What filesystem are you using? Some filesystems are better suited to handle large files than small files.

Another thing to consider is that you're using software RAID. That is, by default, going to be slower than a single disk in Windows. Software RAID requires CPU overhead, and of course RAID 5 must also calculate and write parity whenever you write to the array (i.e., to Linux).

There are also tweaks you can make to speed up the RAID. A quick Google search returns the following page with tweaks to improve performance.
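For reference, the usual md RAID5 write tweaks such pages suggest look like this. A sketch assuming the array is /dev/md0 (a placeholder); the values are common starting points rather than guarantees, and everything needs root:

```shell
# Enlarge the RAID5 stripe cache. Memory cost is entries * 4KB * member disks,
# so 8192 * 4KB * 4 disks = 128MB of RAM dedicated to the cache.
echo 8192 > /sys/block/md0/md/stripe_cache_size

# Raise readahead on the array device (value is in 512-byte sectors)
blockdev --setra 65536 /dev/md0

# Let resyncs/rebuilds use more bandwidth (KB/s)
echo 500000 > /proc/sys/dev/raid/speed_limit_max
```

Note stripe_cache_size resets on reboot, so persistent setups usually put it in a udev rule or startup script.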

One other option, if you have a managed switch, is to raise your MTU to 9K frames (jumbo frames). Though, since you're transferring from your desktop and normal usage doesn't require such large frames, I would recommend against enabling jumbo frames. They are better suited for iSCSI, or for VMs that are stored on network filesystems and the like.
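If you do want to experiment with it anyway, the change itself is just the following (interface name enp3s0 and the ping target are placeholders; run as root, and every device on the path, switch included, must accept 9000-byte frames):

```shell
ip link show enp3s0 | grep -o 'mtu [0-9]*'   # check the current MTU
ip link set dev enp3s0 mtu 9000              # enable jumbo frames
ping -M do -s 8972 192.168.1.10              # verify: 8972 payload + 28 bytes of headers = 9000
ip link set dev enp3s0 mtu 1500              # revert to the default
```

The `-M do` flag forbids fragmentation, so the ping only succeeds if the whole path really passes jumbo frames.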

Good luck,
Dave
 
OP
jeremyy44

jeremyy44

Member
Joined
Jan 20, 2021
Messages
45
Reaction score
8
Credits
425
I'm using ext4 and chose a 128K chunk size.
And I have a 2600 non-X, so I think I should be okay, and I did use the recommendations in this guide; they helped when growing my RAID from 3 to 4 disks.
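As an aside, with a 128K chunk on a 4-disk RAID 5 (3 data disks), the matching ext4 alignment hints work out as below; tune2fs can apply them to an existing filesystem without reformatting (device name is a placeholder):

```shell
chunk_kb=128; block_kb=4; data_disks=3        # 4-disk RAID5 = 3 data disks
stride=$((chunk_kb / block_kb))               # 128K chunk / 4K block = 32
stripe_width=$((stride * data_disks))         # 32 * 3 = 96
echo "stride=$stride stripe-width=$stripe_width"

# Apply to an existing ext4 filesystem (non-destructive), e.g.:
# tune2fs -E stride=32,stripe-width=96 /dev/md0
```

These hints let ext4 align allocations to full stripes, which reduces the read-modify-write penalty on RAID 5 writes.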

I haven't tried enabling jumbo frames; I'll try that, hoping my NICs support it, though the manual doesn't say (or I couldn't find it). Good idea, but like you said, I should keep it off afterwards, which I'll follow.

Thank you for all the tips btw.
 