Month: June 2015

Setting up the Plex container in unRAID 6

Now that I have unRAID up and running, in this post I am going to discuss how I went about adding the Plex container in unRAID 6. It’s a fairly straightforward process and thankfully didn’t cause me many headaches. I was originally running the server from my desktop but that wasn’t an ideal solution because I don’t leave my desktop on all the time.

Plex logo

Enabling Docker

The first step involved here was enabling Docker. If you’re not familiar with what Docker is, then take a look at my post where I explained how the application server works in unRAID.

To do this you simply navigate to the Settings tab, click Docker, and then use the drop-down menu to enable Docker. PlexMediaServer is one of the default docker templates that comes packaged by Lime Technology in unRAID 6. Navigate to the new Docker tab and you will see a section named ‘Add container’. From there you want to choose ‘PlexMediaServer’ under the limetech heading.

Plex docker template

There is only one required folder named ‘config’ for this docker but you’re going to want to add more. The config folder does exactly what it says on the tin – stores the configuration settings for this particular docker. I originally made the silly mistake of pointing this folder to a directory on the flash drive running unRAID. As soon as I rebooted the server I lost all the configuration I had just spent time setting up – bummer. So for this directory you’re going to want to specify a directory on the array. I created a folder called ‘DockerConfig’.

In order to add any media you will need to specify those directories too. I added one named /movies pointing to /mnt/user/Movies and another named /series pointing to /mnt/user/Series.
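
For anyone curious what the template is actually doing under the hood, those mappings boil down to `-v host:container` flags on an ordinary `docker run`. Here’s a rough sketch – the `limetech/plexmediaserver` image name and the exact paths are from my setup, so treat them as assumptions and adjust to yours (the runnable part uses a /tmp path purely so it works anywhere):

```shell
# Host-side directories that back the container paths – on the array, not the flash drive!
# (Using /tmp/demo here purely for illustration; on unRAID these live under /mnt/user.)
mkdir -p /tmp/demo/mnt/user/DockerConfig/plex \
         /tmp/demo/mnt/user/Movies \
         /tmp/demo/mnt/user/Series

# The unRAID template is roughly equivalent to this docker run (sketch, not executed here):
# docker run -d --net=host \
#   -v /mnt/user/DockerConfig/plex:/config \
#   -v /mnt/user/Movies:/movies \
#   -v /mnt/user/Series:/series \
#   limetech/plexmediaserver

ls /tmp/demo/mnt/user
```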

Plex container volumes

All that is left now is to allow unRAID to create the docker container. Simple as.

Configuring Plex

There wasn’t a whole lot of configuration required with Plex assuming you only want to use it locally within your home. If you plan on streaming media externally you will need to set up remote access. There are three steps involved. The first is to sign into your Plex account, assuming you created one – if not you will need to register an account. The second is port forwarding – by default Plex will use port 32400 but you can specify another port if you prefer. You will need to forward this port to the IP of your server. Lastly, in your Plex server’s remote access settings, tick the box to manually specify a port and enter whichever port you chose.

unRAID 6 benchmarks

Now that I’ve got unRAID up and running I thought it would be interesting to run some benchmarks to determine what kind of speeds to expect. Bear in mind this is without a cache device installed, as I am waiting on my SATA controller to arrive before installing the SSD. Why? The optical drive’s SATA port only runs at SATA I speeds, hence the additional SATA controller. This way I can have 4 drives in the bay slots and the cache drive hooked into the second SATA controller.

The program I used to run these benchmarks is called ‘CrystalDiskMark’. CrystalDiskMark is designed to quickly test the performance of your hard drives, allowing you to measure sequential and random read/write speeds for a specific drive. In order to test unRAID I needed to mount one of the unRAID NAS shares as a volume within Windows (N:).

This is 5 x 4GB passes with two WD Red 3TB drives, one of which is acting as a parity disk:

unRAID benchmark speeds

And here are the results with 5 x 1GB passes:

unRAID benchmark speeds

I don’t think that’s too bad considering I don’t have a cache device set up yet. With a cache device I should be hitting much higher numbers because I won’t need to read/write from the disk array directly. It’s still slower than my desktop mechanical drive for both sequential reads and writes, but I doubt I will notice the difference in real world usage – hopefully anyway. Parity certainly had more of an impact on the speeds than I would have liked. My internal 1TB WD Blue mechanical drive clocked the speeds below: much faster for sequential reads and writes but a hell of a lot slower for the 4K metrics.

Desktop drive benchmark speeds
I might revisit this in the future when the SATA controller arrives so I can set up my cache device. Watch this space!

SSD benchmark – My desktop SSD

Time for an SSD benchmark test! My desktop currently has a 120GB Crucial M500 SSD installed for booting my OS and related applications. It’s not exactly a top-of-the-line drive by any means but it’s more than enough for everyday use. The spec sheet for this model claims I should be getting up to 500MB/s reads and 400MB/s writes. So when I saw the results of my CrystalDiskMark SSD benchmark test I was underwhelmed to say the least. It’s worth noting that this drive was 75% full at the time of the test, which I understand has the potential to affect speeds – but not this much!

desktop ssd benchmark

56MB/s sequential writes?? Time to figure out what’s going on here..

Troubleshooting steps

First off I needed to ensure that AHCI was enabled. AHCI stands for Advanced Host Controller Interface – a hardware mechanism that allows software to communicate with Serial ATA (SATA) devices such as SSDs. Windows supports AHCI out of the box, but just to be sure I went into Device Manager to confirm the AHCI controller was enabled and running.

ssd device manager

So it’s enabled – great! Now I just need to confirm that the SSD is being managed by this controller. Right click the controller, choose Properties, then navigate to the ‘Details’ tab. In this section you are greeted with a drop-down menu; I chose ‘Children’ and could see my SSD listed, so AHCI is definitely in use.

ssd controller properties

Then I needed to confirm that ‘TRIM’ was enabled. TRIM support is essential for an SSD to avoid degraded write performance over time. The TRIM command allows an operating system to inform a solid-state drive (SSD) which blocks of data are no longer in use and can be wiped internally. To test if this is enabled you can run the command “fsutil behavior query DisableDeleteNotify”. If this returns 0 then TRIM is enabled.

The next step was to make sure I was at the latest revision of the firmware. The firmware download for my SSD comes as an ISO package so I stuck it onto a USB using unetbootin and made a backup of my current SSD before proceeding – I’ve had enough bad experiences to warrant backups! Thankfully the installation completed successfully without any issues and I was upgraded from MU03 to the MU05 revision. Rebooted and went through another SSD benchmark with CrystalDiskMark. No improvement whatsoever.

ssd desktop benchmark

So considering there was little to no change in the speeds I figured I would confirm the results with another benchmark tool: ATTO Disk Benchmark. Now I’m seeing something more along the lines of the expected speeds! At maximum I reached over 500MB/s reads and maxed out at about 140MB/s writes. I’m still a little disappointed with the write speeds, but this is a start at least.

atto disk benchmark
As this was an SSD I was recommended to test out ‘AS SSD Benchmark’, which also confirmed that AHCI was enabled and my alignment was okay. The reported speeds were pretty much in line with what I’ve seen so far, with write speeds between 113MB/s and 134MB/s. Still disappointing.

ssd benchmark

I went ahead and asked someone else to run the same test on their Crucial M4, which is an older model than mine. He was clocking double my write speeds with similar read speeds using the same test config as me. The only other difference between his test and mine was that his drive had a higher total capacity with about 50% free, whereas I only had 25% free. I went about reducing the space consumed on my drive and then re-ran the test, only to receive roughly the same numbers again..

The conclusion

Sorry to get your hopes up folks! After a bit more digging I realised this was down to misleading advertising rather than anything being wrong with my drive or configuration. The M500 series drives are capable of up to 400MB/s write speeds only at the higher-capacity end of the range. I found that my specific model – CT120M500SSD1 – is only rated for about 130MB/s writes, which more or less matches the speeds I was recording in some of the benchmarks. Here is a screenshot from the product page on Newegg:

newegg specifications ssd

If you found this interesting, take a look at my unRAID benchmark tests also!


Installing unRAID 6 on my HP Proliant Gen8 Microserver G1610T

As discussed in my previous posts I decided on unRAID for my HP Microserver OS. For now I simply have a trial version but I can already tell I will end up buying a license. The installation process is super easy as you will see below:

Downloading the unRAID image

Before going about installing unRAID I was under the impression that it would come as an ISO image like every other OS. However, unRAID just comes as a package of files with a ‘make-bootable’ executable inside. Preparing the USB is easy! First of all your USB needs to be formatted as FAT32 and the volume label must be set to UNRAID in capital letters.

Microserver unRAID USB format

Then simply transfer all the files to the root directory of the USB (i.e. not in a folder) and run the ‘make-bootable.bat’ file as administrator (right click -> Run as administrator). A CMD prompt will appear asking you to confirm you want to proceed – press any key to continue and hey presto, job done.

Microserver unRAID make bootable

Now you just need to eject the USB from your computer, connect it to the internal USB slot of the microserver and boot it up. Mine booted from the USB straight away without editing any BIOS options, but your mileage may vary. After successfully booting up I was able to navigate to http://tower from my desktop and was greeted with the web GUI. It really was that easy!

Microserver unRAID GUI

unRAID Licensing

Before you’re able to do anything you will need a license key. Upon first installation you’re entitled to a 30-day evaluation period to test out the software. To activate your license navigate to ‘Tools -> Registration’. Enter your email address and the license URL will be sent to you, which you then paste into the box provided.

Microserver unRAID registration

After that you’re pretty much good to go! Stay tuned for more unRAID and microserver posts!

New components for my HP G1610T Gen8 Microserver – Upgrades!

I decided it would be best to invest some more money into my G1610T Gen8 Microserver rather than trying to struggle through with the limited resources available in the server itself. I also needed some hard drives to fill up the drive bays for my NAS.

What did I buy?

2 x 3TB WD Red NAS drives
1 x 16GB (2 x 8GB) ECC RAM kit
1 x 2 Port SATA Controller
3 x Cat6a ethernet cables

G1610T Gen8 Microserver components

The NAS drives will obviously be going into the bays at the front of the server so I can set up my NAS. As I only have two drives at the moment, the array will only have 3TB of usable space due to the parity disk. Realistically that is all I need to start with, as my media collection is only about that large at the moment. I bought the SATA controller card because the internal SATA connector only runs at SATA I speeds, while this PCI card runs at SATA III. This will be used to connect up my cache devices if I ever get around to implementing that.

G1610T Gen8 Microserver sata controller

The requirements for running unRAID are pretty minimal compared to FreeNAS, with unRAID only requiring about 1GB of RAM if you intend to run it as a pure NAS system. However, once you start playing around with containers and virtualising machines you will understandably get tight on resources. With that in mind I decided it would be best to upgrade to 16GB of RAM, which is the most this machine will accept. This didn’t come cheap at about €160 for the two-stick set – ECC RAM is bloody expensive and is unfortunately a necessity for this particular model.

G1610T Gen8 Microserver ram

Lastly I decided to invest in some decent Cat6a cables for connecting my server and desktop to the network along with a 1Gb switch. I’ve been running on Cat5e cables for quite a long time because in all honesty I had no requirement for the additional benefits of Cat6 cables. Now that I will be regularly transferring files to and from the NAS I felt the additional bandwidth might be beneficial.


How the application server works in unRAID – What is Docker?

So what is Docker?

Before installing unRAID I had not heard of Docker, but I wish I had. Before explaining what Docker is I’m going to take you back to one of my first posts. In that post I explained the purpose of a hypervisor and the differences between a level 1 and a level 2 hypervisor. It would be useful to understand that before proceeding with this post.
The idea behind a hypervisor is that it can emulate hardware in such a way that an operating system running on it has no idea that it is not on a physical machine. By creating multiple VMs (Virtual Machines) you have the ability to isolate applications from each other. For example you might have one VM for torrents and automated media management, and another for web development. This is all great in theory, but the issue is that virtualisation is resource intensive! The alternative to deploying virtual machines is using ‘containers’. Containers are similar to virtual machines in that they also allow for a level of isolation between applications, but there are some significant differences.

What’s a container? How does it differ from a Virtual Machine?

Virtual machines helped us get past the “one server for one application” paradigm that was formerly common in data centres and the enterprise. By introducing the hypervisor layer, multiple different operating systems can run on the same hardware so that many different applications can be used without wasting resources. While this was a huge improvement it is still limited, because each application you want to run requires a guest operating system, its own CPU allocation, dedicated memory and virtualised hardware. This led to the idea of containers. Docker has been around for only a few short years, but container technology has been with us for decades – it just wasn’t all that popular until recently.

docker logo
unRAID makes use of Docker containers. Similar to the idea of a virtual machine, Docker containers bundle up a complete filesystem with all of the required dependencies for an application to run. This guarantees that it will run the same way on every computer it is installed on. If you have ever done any development work you may understand the frustration of developing an application on one machine, then deploying it on another only to find out that it won’t launch. Docker helps do away with this by bundling all the required libraries etc. into one lightweight package, with the ‘Docker Engine’ being the one and only requirement. The Docker Engine is the program that builds and runs containers.

Superficially a container looks similar to a virtual machine. A container uses the kernel of the host operating system to run multiple guest instances of applications. Each instance is called a container and has its own root file system, processes, network stack, etc. The fundamental difference is that it does not require a full guest OS. Docker can control the resources (e.g. CPU, memory, disk, and network) that containers are allocated and isolate them from conflicting with other applications on the same system. This provides many of the benefits of traditional virtual machines, but with none of the overhead associated with emulating hardware.

The Docker Hub

One of the biggest advantages Docker provides is its application repository: the Docker Hub. This is a huge repository of ‘Dockerised’ applications that I can download and run on my microserver. With unRAID and Docker, if a Linux application has been containerised I can run it, regardless of which distribution it was built for. Really cool stuff.

How does NAS work in unRAID?

With this post I hope to shed some light on the unRAID implementation of NAS. This isn’t your standard NAS setup due to the fact that it doesn’t use a normal RAID implementation. I’m going to start this section off by explaining a little bit about what RAID is. If you know what RAID is and how it works feel free to skip the next section.

RAID originally stood for ‘redundant array of inexpensive disks’ but is now commonly expanded as ‘redundant array of independent disks’. It is a storage virtualisation technology that combines multiple disk drives into a single logical unit for data redundancy, improved performance, or both. Most RAID implementations perform an action called striping. Striping is when individual files are split up and spread across more than one disk. By performing read and write operations on all the disks in the array simultaneously, RAID works around the performance limitations of mechanical drives, resulting in much higher read and write speeds. In layman’s terms: say you have an array of 4 separate disks. A file would be split into four pieces with one piece written to each drive at the same time, theoretically gaining 4 times the speed of one drive. That’s not quite how it works in reality though..

Striping can be done at a byte level or a block level. Byte-level striping means that each file is split up into pieces of one byte in size (8 bits) and each byte is written to a separate drive, i.e. the first byte gets written to the first drive, the second to the second, and so on. Block-level striping on the other hand splits the file into logical blocks of data, with a typical default block size of 512 bytes. Each block is then written to an individual disk. Striping is used to improve performance but that comes with a caveat – on its own it provides no fault tolerance or redundancy. This is known as a RAID0 setup.

NAS unRAID raid0
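
To make the block-level idea concrete, here’s a tiny Python sketch of RAID0-style striping – purely illustrative, using the 512-byte block size and four-drive array from the example above:

```python
# Toy RAID0 block-level striping: 512-byte blocks are dealt round-robin
# across the drives, so one file's data ends up spread over all of them.
BLOCK_SIZE = 512
NUM_DRIVES = 4

def stripe(data, num_drives=NUM_DRIVES):
    """Split data into BLOCK_SIZE chunks and distribute them round-robin."""
    drives = [[] for _ in range(num_drives)]
    for i in range(0, len(data), BLOCK_SIZE):
        drives[(i // BLOCK_SIZE) % num_drives].append(data[i:i + BLOCK_SIZE])
    return drives

file_data = bytes(2048)  # a 2048-byte "file"
drives = stripe(file_data)
print([len(d) for d in drives])  # each of the 4 drives holds one 512-byte block
```

Note there is no redundancy here: lose one of those drives and a quarter of every large file is gone.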

If all the files are split up among the drives, what happens when one dies? This is where parity and mirroring come into RAID. Mirroring is the simplest method: fully redundant storage. When data is written to one disk it is simultaneously written to another, so the NAS array has two drives that are always an exact copy of each other. If one of the drives fails, the other still contains all the data (assuming that doesn’t die too!). This is obviously not an efficient use of storage space when half of the capacity can’t be utilised. This is where parity comes in – parity can be used alongside striping as a way to offer redundancy without losing half of the total capacity. With parity, one disk can be used (depending on the RAID implementation) to store enough parity data to recover the entire NAS array in the event of a drive failure. It does this through XOR operations, which I’m not going into in this post. There is one glaring problem with this setup – what if two drives fail? You’re more than likely screwed.. That, along with the dedicated parity disk becoming a write bottleneck, is part of the reason nobody uses RAID4 in practice.

NAS unraid raid4
There are implementations such as RAID6 which use double parity, meaning the NAS array can have two drives fail without data loss – a better option if you plan on storing many terabytes of data across a large number of drives. Logically, the more drives you have, the higher the chance that one will fail.
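
The single-parity XOR trick is simple enough to demonstrate in a few lines of Python – a toy sketch, not how a real RAID controller is implemented:

```python
from functools import reduce

# Toy dedicated-parity setup (RAID4-style): the parity block is the XOR of
# the data blocks, so any single lost block can be rebuilt from the rest.
def xor_blocks(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_drives = [b"AAAA", b"BBBB", b"CCCC"]  # three data disks
parity = reduce(xor_blocks, data_drives)   # the dedicated parity disk

# Simulate drive 1 failing, then rebuild it from the survivors plus parity:
survivors = [data_drives[0], data_drives[2], parity]
rebuilt = reduce(xor_blocks, survivors)
print(rebuilt)  # b'BBBB' – the "lost" data is back
```

Lose two drives, though, and the XOR no longer has enough information to solve for both missing blocks – exactly the failure case described above.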

Why is unRAID not considered a standard NAS?

unRAID’s storage capabilities are broken down into three components: the array, the cache, and the user share file system. Let’s start with the NAS array. unRAID uses a single dedicated parity disk without striping. Because unRAID does not utilise striping, you have the ability to use multiple hard drives of differing sizes, types, etc. This also has the benefit of making the NAS array more resistant to data loss, because your data isn’t striped across multiple disks.

The reason striping isn’t used (or can’t be used) is because unRAID treats each drive as an individual file system. In a traditional RAID setup all the drives spin simultaneously, while in unRAID spin-down can be controlled per drive – so a drive with rarely accessed files may stay off (theoretically increasing its lifespan!). If the NAS array fails, the individual drives are still accessible, unlike traditional RAID arrays where you might suffer total data loss.

Because each drive is treated as an individual file system, this allows for the user share file system that I mentioned earlier. Let’s just take an example to explain this. I could create a user share and call it media. For this media folder I can specify:

  • How the data is allocated across the disks i.e. I can include / exclude some disks
  • How the data is exposed on the network using what protocols (NFS, SMB, AFP)
  • Who can access the data by creating user accounts with permissions
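
Conceptually this is the same kind of control you’d get from a hand-written Samba share definition. A user share called ‘media’ restricted to one user might correspond to something like the stanza below – illustrative only, since unRAID generates its SMB configuration for you and the user name here is made up:

```ini
[media]
    path = /mnt/user/media
    valid users = mediauser
    read only = no
    browseable = yes
```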

The following image taken from the unRAID website explains the NAS setup better than words can:

NAS unraid permissions


Finally, we need to address performance, given that striping is not used. To get around this limitation unRAID introduced the ability to use an SSD as a cache disk for faster writes. Data written to the NAS array is initially written directly to the dedicated cache device and then moved to the mechanical drives at a later time. Because this device is not part of the NAS array, its write speed is unaffected by parity calculations. However, with a single cache device the data sitting there is at risk, as it is not protected by parity. To minimise this risk you can build a cache pool with multiple devices, both to increase your cache capacity and to add protection for that data.