I tried to find a more applicable community to post this to but didn’t find anything.
I recently set up a NAS/server on a Raspberry Pi 5 running Raspberry Pi OS (see my last post), and since then I’ve gotten everything installed into a 3D-printed enclosure and have RAID set up (ZFS RAIDz1). Prior to setting up RAID, I could transfer files to/from the NAS at around 200 MB/s, but now that RAID is seemingly working, things are transferring at around 28-30 MB/s. I did a couple of searches and found someone suggesting disabling sync ($ sudo zfs set sync=disabled zfspool). I tried that and it doesn’t seem to have had any effect. Any suggestions are welcome, but keep in mind that I barely know what I’m doing.
Edit: When I look at the SATA hat, the LEDs indicate that the drives are being written to for less than half a second and then there’s a break of about 4 seconds where there’s no writing going on.
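A couple of quick checks that might narrow this down (a sketch, assuming the pool is named zfspool as in the command above):

```shell
# Confirm the sync property change actually took effect
zfs get sync zfspool

# ZFS flushes transaction groups every zfs_txg_timeout seconds (default 5),
# which would line up with the ~4-second pauses between bursts of LED activity
cat /sys/module/zfs/parameters/zfs_txg_timeout

# Pool health and layout, in case a drive is degraded or resilvering
zpool status zfspool
```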
When people talk about CPU limitations on the rPi, they aren’t talking about just the actual processing portion of the machine. There are also a lot of other corners cut on basically all SBCs, including bus width and throughput.
The problem is that when you use a software RAID like ZFS, or its predecessors, you are using far more than the CPU. You’re also using the data bus between the CPU and the IO controller.
“CPU usage” indicators don’t really tell you how active your data buses are; they tell you how busy your CPU is processing information.
Basically, it’s the difference between IO wait states, and CPU usage.
The Pi is absolutely a poor choice for input/output, period. Regardless of what your “metrics” tell you, its data bus simply does not have the bandwidth necessary to control several hard drives at once with any sort of real-world usability.
You’ve wasted your money on an entire ecosystem by trying to make it do something it wasn’t designed, nor has the capability, to do.
sync=disabled makes ZFS batch writes and flush them to disk roughly every 5 seconds (the transaction-group timeout) instead of whenever software demands it, which maybe explains your LED behavior.
Jeff Geerling found that writes with Z1 were 74 MB/s using the Radxa Penta SATA HAT with SSDs. Any HDD should be that fast; the SATA hat is likely the bottleneck.
Are you performing writes locally, or over smb?
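If writes are going over SMB, it can help to rule the network in or out with a rough local sequential-write test (a sketch, assuming the pool is mounted at /zfspool; with ZFS compression enabled, zeros compress away, so /dev/urandom gives a more honest number):

```shell
# Write 1 GiB directly on the NAS; conv=fdatasync forces the data to disk
# before dd reports a throughput figure
dd if=/dev/zero of=/zfspool/ddtest.bin bs=1M count=1024 conv=fdatasync
rm /zfspool/ddtest.bin
```

If the local number is well above ~30 MB/s, the bottleneck is more likely Samba or the network than the pool.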
Can try
iostat
or
zpool iostat
to monitor drive writes and latencies, might give a clue.

How much RAM does the Pi 5 have?
After some googling:
Some Linux distributions (at least Debian, Ubuntu) enable init_on_alloc option as security precaution by default. This option can help to prevent possible information leaks and make control-flow bugs that depend on uninitialized values more deterministic.
Unfortunately, it can lower ARC throughput considerably (see bug).
If you’re ready to cope with these security risks, you may disable it by setting init_on_alloc=0 in the GRUB kernel boot parameters.
I think it’s set to 1 on Raspberry Pi OS, you set it in
/boot/cmdline.txt
I think.

I haven’t seen more than maybe 32 MB/s. The transfers I’ve done are all on my local network from my desktop to my NAS, which is plugged into my router. I have Samba installed, my NAS shows up as a network drive just fine, and I’ve just been dragging/dropping in my desktop GUI. Before setting up RAID, when I did this I would get around 200 MB/s. The Pi has 16GB of RAM and I’m using less than 1.5GB while making a large transfer from my desktop to the NAS.
Ah kay, definitely not a RAM size problem then.
iostat -x 5
Will print out per-drive stats every 5 seconds. The first output is an average since boot. Check that all of the drives have similar values while performing a write. Might be one drive is having problems and slowing everything down; hopefully unlikely if they are brand new drives.

zpool iostat -w

Will print out a latency histogram. Check if any have a lot above 1s and whether it’s in the disk or sync queues. Here’s mine with 4 HDDs in z1 working fairly happily for comparison:

The
init_on_alloc=0
kernel flag I mentioned below might still be worth trying.
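For anyone who wants to try that flag, a sketch of checking and setting it (note the path varies: newer Raspberry Pi OS releases keep the boot arguments in /boot/firmware/cmdline.txt rather than /boot/cmdline.txt):

```shell
# See whether the running kernel was booted with the flag
grep -o 'init_on_alloc=[01]' /proc/cmdline

# cmdline.txt must remain a single line, so append the flag to the end
sudo sed -i '1s/$/ init_on_alloc=0/' /boot/firmware/cmdline.txt
sudo reboot
```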
What type of disk (HDD or SSD) and how many disks in the pool?
A RAIDZ1 configuration will bring your write speed down some, since data and parity have to be written to multiple disks at a time. This is true for most any RAID. Once written, your read speeds should remain the same or improve a bit though.
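As a back-of-envelope illustration of the RAIDZ1 overhead (just the (N-1)/N rule of thumb, not exact usable figures, since ZFS metadata and padding eat a bit more):

```shell
# With N disks in RAIDZ1, roughly one disk's worth of capacity goes to
# parity, so usable space is about (N-1)/N of raw capacity.
N=5; SIZE_TB=8
echo "raw:    $(( N * SIZE_TB )) TB"
echo "usable: $(( (N - 1) * SIZE_TB )) TB"
```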
I’ve got 5x 8TB HDD at 5400rpm
Added an edit to my post.
It’s a Pi, what are you expecting? You just wasted a ton of money on inferior hardware with extra software issues. You could’ve just gotten a mini PC with 2 NVMe slots for half the price and added a $20 6-port SATA board to it. Much cheaper, way more reliable, upgradable, and ZFS actually would’ve worked as you expect.
Just getting my feet wet. I have plenty of other uses for a Pi, so if the NAS doesn’t work out that’s fine, I’ll get something else. Worst case I’m out $70-ish for the Radxa SATA hat. Something is seemingly not right with my config; I haven’t gotten the Pi to break a sweat yet. Thanks for your input though.
Other people have already talked about why you are having performance issues with the Pi. As for a better NAS solution, you will probably be better off with a used desktop PC from the last 10 years. If the computer doesn’t have enough SATA ports you can get a SATA add-on card or an HBA (host bus adapter) flashed to IT mode. You should be able to find a lot of options on eBay. Maybe people can chime in with specific models to look at.
What are you actually using for storage, and how is it connected?
5x 8TB HDD at 5400rpm all hooked up to a Radxa Penta SATA hat.
What controller?
I guess the Radxa Penta SATA hat would be considered the controller.