
Zoneminder Linux / mdraid Optimizations

I was looking over the new server I had just set up today and noticed that the SSD was being written to at a continuous 35MB/s. This was a result of ZoneMinder's /var/tmp/zm being located on the primary disk.

The solution was to add a line to /etc/fstab dedicating an 8GB RAM disk to that directory. The running occupied size for my setup beforehand was roughly 5GB, so 8GB leaves some headroom. Here is the /etc/fstab line:

tmpfs /var/tmp/zm tmpfs rw,nodev,nosuid,size=8G 0 0
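To pick up the new entry without rebooting, something like the following should work; the service name `zoneminder` is an assumption and varies by distro (it may be `zm` or `zoneminder.service`):

```shell
# Stop ZoneMinder so nothing is writing to the directory,
# mount the new tmpfs from fstab, verify, then restart.
systemctl stop zoneminder
mount /var/tmp/zm          # reads the new /etc/fstab entry
df -h /var/tmp/zm          # should show an 8.0G tmpfs
systemctl start zoneminder
```

Note that tmpfs contents are lost on reboot, which is fine here since /var/tmp/zm only holds transient capture buffers.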


Whilst researching this solution I stumbled across a bit of mdraid optimization I hadn't done or tested yet: checking the stripe cache size in

/sys/block/md127/md/stripe_cache_size

The default was 256. I tried several different sizes, each time running hdparm -Tt /dev/md127 as a quick and dirty benchmark to see which worked best. 32768 yielded the best results.
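A sketch of the sweep, assuming root and the md127 array from the article; the particular list of candidate sizes is my own choice:

```shell
# stripe_cache_size is in pages per device; try each value and benchmark.
for size in 256 1024 4096 8192 16384 32768; do
    echo "$size" > /sys/block/md127/md/stripe_cache_size
    echo "=== stripe_cache_size=$size ==="
    hdparm -Tt /dev/md127
done
# RAM cost is page_size * nr_disks * stripe_cache_size:
# 4KiB * 16 disks * 32768 = 2GiB for this array, so don't go overboard.
```

The sysfs setting does not survive a reboot, so the winning value needs to be reapplied at boot, e.g. from rc.local or a udev rule.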

With the 16 disks in a RAID6 behind two LSI Logic / Symbios Logic 9207-8i SAS2.1 8-port HBAs, I was able to achieve 2GB/s read and nearly 950MB/s sustained write.

time cp /ramdisk/10g.bin /raid/

real 0m13.068s
user 0m0.036s
sys 0m12.480s

time cp /raid/10g.bin /ramdisk/

real 0m5.806s
user 0m0.016s
sys 0m5.789s
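For anyone reproducing the copy test above, the 10g.bin file can be generated on the RAM disk first; how the original file was created isn't stated in the article, so the dd invocation here is an assumption:

```shell
# Create a 10GiB test file on the RAM disk, then time copies in each
# direction. Dropping the page cache (root required) keeps the read
# test honest; otherwise the second cp may be served from RAM.
dd if=/dev/zero of=/ramdisk/10g.bin bs=1M count=10240
sync; echo 3 > /proc/sys/vm/drop_caches
time cp /ramdisk/10g.bin /raid/      # sustained write to the array
time cp /raid/10g.bin /ramdisk/      # sustained read from the array
```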

hdparm -Tt /dev/md127

/dev/md127:
Timing cached reads: 10886 MB in 2.00 seconds = 5445.73 MB/sec
Timing buffered disk reads: 5864 MB in 3.00 seconds = 1954.45 MB/sec


This isn't the best in the world really, but it is not bad either! This kind of throughput, coupled with a large number of disks, allows me to record a lot of CCTV footage. Though, I suppose I haven't explained how all of this works.


Stock blockdev readahead on my system was 28672 sectors.
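blockdev reports readahead in 512-byte sectors, so the stock value works out to 14MiB; the --setra value below is just an illustration, not a tested recommendation:

```shell
# Convert the stock readahead of 28672 sectors (512 bytes each) to MiB.
echo $(( 28672 * 512 / 1024 / 1024 ))MiB   # prints 14MiB
# To inspect or change it on the array (requires root):
#   blockdev --getra /dev/md127
#   blockdev --setra 65536 /dev/md127
```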
