Repurposing Old Hardware as a NAS

I’ve been helping out a charity recently that has a lot of old kit and no money to replace it. Open source software can really help out organisations like this, and let’s face it, everyone wants to save money where they can. These days there are loads of off-the-shelf distributions designed to turn commodity PC hardware into dedicated appliances, and you don’t need a degree in the Linux kernel to administer them. If you pick a well-supported, mature package, the learning curve can be shallow and the gains huge. You’ll know if you’ve read this blog that I’m a big fan of using pfSense to turn old PC hardware into feature-rich firewall appliances. There must be something similar for storage, right?

We were talking about the charity’s storage strategy and established that a NAS would be beneficial to complement their aging server hardware. As they’ve got no money, they can’t just go out and buy a Synology or QNAP box. “Oh, maybe you can do something with that old box over there, we don’t use it for anything.” The box in question is an “Equiinet ServerPilot”. It was an appliance they got as part of a contract with a local ISP, functioning as a Squid HTTP proxy, SMTP gateway etc. That ISP has since gone bust, and Equiinet no longer support the device. When I opened it up it was clear to see why!

  • Intel Pentium Dual-Core E5200 @ 2.5GHz
  • Gigabyte GA-G31M-ES2L – 2 RAM slots, 1 PCIe x16, 1 PCIe x1, 2 PCI, 4 onboard SATA ports
  • 2 x 512MB sticks of RAM
  • 1 x 500GB SATA drive in a removable bay
  • Antec Aria Micro-ATX case

It was obvious that to turn this into a NAS, the base hardware would need supplementing. There’s no point in setting up a NAS with 500GB of storage and no fault tolerance. Still, it seemed like a good starting point: there are plenty of onboard SATA ports and the CPU is not too old. It was definitely going to need more RAM and more drives, though. I was keen to leverage some of the newer functionality available in open source filesystems these days, like SSD caching – I have an HP Gen8 server which does this natively and it makes a massive difference.

There’s also a conundrum around where to put the OS. A commercial NAS box would store the OS in flash, and commercial servers tend to have an internal USB port or SD card slot for operating system storage in these scenarios. You don’t really want the OS sat on the redundant disks, as it reduces the flexibility of the drive array. You don’t want to waste one of your 4 onboard SATA ports on a dedicated OS drive either, and someone can easily knock out or snap off a protruding USB stick with the OS on it (and this board only has USB2). I decided I’d need to get hold of the following:

  • 4GB of RAM. 2x2GB sticks of DDR2 is the most that this motherboard can support
  • 4x 1TB drives. 2TB of RAID10 will be sufficient to address current and future storage requirements, resiliency and performance.
  • ~100GB SSD for caching purposes and a method of connecting it
  • Dual-port PCIe server NIC
  • Some OS storage device

This hardware spec was tweaked a little bit once I decided what software solution I was going to use. On that note…

It turns out there are loads of options. The obvious leader in this area is FreeNAS. It’s based on FreeBSD, like pfSense, and really all it’s doing is providing a friendly user interface to FreeBSD’s ZFS filesystem. ZFS has all the features you’d expect from a modern filesystem: resilience, online expansion, snapshotting and advanced caching to increase performance. It’s a complex beast, but FreeNAS does a great job of making it user friendly. FreeNAS is a good, quite mature product, and it has other features too, like hypervisor management, so that you can turn your NAS into an all-in-one network appliance providing a multitude of services to your users. There is one massive caveat to all of this though – ZFS loves RAM. The more RAM you can give it, the better it performs; a good rule of thumb is 1GB of RAM per 1TB of storage, and the RAM ZFS needs is dedicated to ZFS – it can’t be shared with other processes. So to run a 4TB NAS using ZFS you really need at least 6GB of RAM. That completely rules it out of our scenario of repurposing old hardware: a lot of the older devices that I support are either 32-bit, or their motherboards only support maybe 4GB or 8GB of RAM.

A competitor to FreeNAS is Rockstor. You can consider Rockstor to be a Linux-based equivalent of FreeNAS, although it’s not quite as mature in its development and doesn’t benefit from the same size of user base. It’s Linux under the hood rather than FreeBSD (in fact it’s CentOS 7), which is advantageous to me as I’ve been administering CentOS servers for 10+ years, and it has a plugin architecture based around Docker. It uses BTRFS (developed by Oracle) rather than FreeNAS’s ZFS (developed by Sun). BTRFS in turn is not quite as mature as ZFS – it will only reliably work at certain RAID levels, for example – although it has been integrated into the Linux kernel since 2009. One really notable difference between the two is that ZFS natively supports tiered levels of caching using system RAM, SSD etc. BTRFS does not: it relies on the Linux kernel’s ability to use free RAM as an IO cache, but that’s about it. Fortunately, since version 3.10 the Linux kernel has included some functionality called bcache which does just this. Rockstor takes the standard CentOS kernel and chucks it away in favour of kernel-ml (from the ELRepo repository) to add support for these newer features.
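As an aside, if you want to check whether a given box’s kernel can do this, it’s a quick test from a shell (run as root):

    # bcache needs kernel 3.10 or newer
    uname -r

    # Load the bcache module and confirm it's available
    modprobe bcache
    lsmod | grep bcache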

While I was browsing the Rockstor website I had a quick flick through their shop and found this PCIe card.

I had no idea these existed, but it seemed to resolve my issues around OS storage and provide an additional SATA port for my caching SSD! It’s even bootable.

  • PCIe mSATA & SATA card – £9.80
  • Samsung mSATA 32GB – £22.99

So now I have OS storage and a 5th SATA port for £32.79. Not bad! Next up I needed some hard drives. I have a few contacts in the business of stripping servers and liquidating old stock, and managed to find someone selling 1TB NetApp SATA drives. These turned out to be standard Hitachi drives with a plastic caddy around them. They’d never been used and were £15 each with a 1yr warranty!

  • 4 x NetApp X302A 1TB SATA drives – £60

That puts me at a running total of £92.79. I still needed an SSD, some way of mounting it (the Antec Aria case only has 4 hard drive bays, which would be taken up by the four 1TB drives) and some RAM. The case has quite compromised airflow, but Equiinet had managed to squeeze a fan into the front by removing a PCB with USB and audio jacks on it. There was definitely space for another one! Back to eBay we go…

  • Samsung 840 EVO SSD for caching – £34.99
  • PCI Backplane Adapter for Mounting 2.5″ SSDs/HDDs Into Vacant PCI Slot (yes, that is exactly what it sounds like) – this allows me to mount the SSD above an unused PCI slot – £4.65
  • 80mm fan – £5.56
  • Fan power splitter – £1.95
  • Short SATA cable to go between mSATA card and SSD mounted over PCI slot – £2.99
  • Screws to mount the fan – £1.80

I ordered some cheap Chinese RAM too, but the board didn’t like it and would only ever see 50% of it at a time, so I went to Crucial instead to make sure I’d get RAM guaranteed to work:

  • Crucial 2 x 2GB DDR2-800 DIMMS – £49.19

My overall total for the 2TB NAS build from the old ServerPilot appliance is £193.92. You can’t buy a QNAP or Synology (or even Netgear) 4-drive NAS box for that, even without drives, and the ones that do SSD caching are even more expensive! I’m cheating a little there, as I donated an old PCIe Intel Pro/1000 PT dual-port adaptor I had lying around – they go for about £15 on eBay. Either way you’re looking at around £200 to repurpose the box, which I’m pretty pleased with. Next up, installation.

I’m not going to talk about how to install Rockstor as there are loads of great guides online. It’s essentially a case of writing the ISO to USB using Rufus or dd and booting it. I did this with the SATA drives disconnected, to ensure the OS and bootloader/MBR ended up on the right device. One thing to note is that the filesystem for the OS HAS to be BTRFS, but the installer will let you choose other formats. If you choose LVM, ext3 etc. the product will install and then tell you to reinstall as soon as you try to log in to it.
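For the dd route, it’s something like the following – the ISO filename and /dev/sdX are placeholders for your own download and USB stick, so check lsblk carefully first:

    # Identify the USB stick; writing to the wrong device is destructive!
    lsblk

    # Write the installer image raw to the stick (filename and device are examples)
    dd if=Rockstor.iso of=/dev/sdX bs=4M
    sync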

Installation instructions: http://rockstor.com/docs/quickstart.html

I then connected up all my drives and rebooted the device. I followed these steps to enable bcache to use the SSD as a cache for the mechanical drives: https://forum.rockstor.com/t/bcache-developers-notes/2762
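For reference, the gist of those notes boils down to something like the sketch below, run as root. Device names are examples from my build (sda–sdd being the 1TB drives, sde the SSD) – the forum post is the authority here:

    # Format each mechanical drive as a bcache backing device
    make-bcache -B /dev/sda /dev/sdb /dev/sdc /dev/sdd

    # Format the SSD as the shared cache device
    make-bcache -C /dev/sde

    # Read the cache set UUID off the SSD, then attach it to each backing device
    CSET=$(bcache-super-show /dev/sde | awk '/cset.uuid/ {print $2}')
    for dev in /sys/block/bcache[0-3]; do
        echo $CSET > $dev/bcache/attach
    done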

The OS will see the mechanical drives, the SSD, and the virtual bcache devices created for each mechanical drive. The GUI is clever enough to figure out which devices are bcache backing devices and it won’t let you do anything with them directly. I created a RAID10 BTRFS pool using lzo compression – I don’t have much CPU to spare, so lightweight compression is the order of the day. I also had to specify the nossd mount option: BTRFS sees the bcache devices as SSDs when they really aren’t, and I found that manually telling it they were not true SSD devices increased performance.
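For the record, the command-line equivalent of what the GUI set up is roughly this (the mount point is an example – Rockstor manages its own mounts):

    # RAID10 for both data and metadata across the four cached devices
    mkfs.btrfs -d raid10 -m raid10 /dev/bcache0 /dev/bcache1 /dev/bcache2 /dev/bcache3

    # lzo compression is cheap on CPU; nossd stops BTRFS treating the
    # bcache devices as SSDs, which they aren't underneath
    mount -o compress=lzo,nossd /dev/bcache0 /mnt/pool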

I also created some scheduled tasks to snapshot the share twice a day (I have found through trial and error that 1200 and 1900 are good times to snapshot) and to scrub the RAID set once a week. Samba understands BTRFS snapshots and can pass them through to the Windows Previous Versions/Shadow Copy interface, so your users can retrieve old versions of their files from the OS-native snapshots right from within Windows! The charity uses Active Directory, so Samba will be integrated into that when I get the box on site.
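For anyone wiring this up outside Rockstor’s GUI, the moving parts are a read-only snapshot per scheduled run and Samba’s shadow_copy2 VFS module. A minimal sketch, assuming a share at /mnt/pool/share with snapshots kept in a .snapshots directory (the names and paths are mine, not Rockstor’s):

    # Read-only snapshot, named in the @GMT format that shadow_copy2 expects
    btrfs subvolume snapshot -r /mnt/pool/share \
        /mnt/pool/share/.snapshots/@GMT-$(date -u +%Y.%m.%d-%H.%M.%S)

And the matching share definition in smb.conf:

    [share]
        path = /mnt/pool/share
        vfs objects = shadow_copy2
        shadow:snapdir = .snapshots
        shadow:format = @GMT-%Y.%m.%d-%H.%M.%S
        shadow:sort = desc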

The dual-port NIC was configured as a bonded interface using LACP – the device will be connected to two switches in a stack that supports LACP bonds.
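On the CentOS 7 base this boils down to something like the following nmcli commands (interface names are examples):

    # Create the bond in 802.3ad (LACP) mode
    nmcli con add type bond con-name bond0 ifname bond0 mode 802.3ad

    # Enslave both ports of the dual-port NIC
    nmcli con add type bond-slave ifname enp1s0f0 master bond0
    nmcli con add type bond-slave ifname enp1s0f1 master bond0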

And that’s it! Obviously RAID and snapshotting are no substitute for backups; another old server will be repurposed to fulfil the backup requirement once this NAS box is in use. I’m able to saturate a gigabit Ethernet interface copying either to or from the NAS box, retrieve mistakenly deleted files from the snapshots with a few clicks in Windows, and I have terabytes of fault-tolerant storage on the charity’s network for under £200. The device is monitored over SNMP by an open source NMS which will notify of any impending errors, Rockstor will email alerts about drive health based on SMART status, and storage can be expanded on the fly by swapping the drives out for larger ones. I’m not yet sure how to calculate usage on the bcache SSD to determine the perfect ratio between SSD and HDD, but I’m sure there will be one!
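If there is, the numbers to base it on are probably bcache’s own counters, which the kernel exposes in sysfs per cached device – for example:

    # Cumulative cache statistics for the first bcache device
    cat /sys/block/bcache0/bcache/stats_total/cache_hits
    cat /sys/block/bcache0/bcache/stats_total/cache_misses
    cat /sys/block/bcache0/bcache/stats_total/cache_hit_ratio

A sustained high hit ratio would suggest the SSD is big enough for the working set; a consistently low one would point to needing more SSD relative to HDD.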

Plus the dashboard is COOL