Category: Home server

Blog posts related to my unRAID home server and how I came to choose unRAID as my server operating system. Tips, tricks, and mistakes made along the way.

How does NAS work in unRAID?

With this post I hope to shed some light on unRAID’s implementation of NAS. This isn’t your standard NAS setup, because it doesn’t use a normal RAID implementation. I’m going to start by explaining a little about what RAID is, so if you already know how RAID works feel free to skip the next section.

RAID originally stood for ‘redundant array of inexpensive disks’ but is now commonly read as ‘redundant array of independent disks’. It is a storage virtualisation technology that combines multiple disk drives into a single logical unit for the purposes of data redundancy or performance improvement. Most RAID implementations perform an action called striping, where individual files are split up and spread across more than one disk. By performing read and write operations on all the disks in the array simultaneously, RAID works around the performance limitations of mechanical drives, resulting in much higher read and write speeds. In layman’s terms: say you have an array of 4 separate disks. A file would be split into four pieces with one piece written to each drive at the same time, theoretically gaining 4 times the speed of one drive. That’s not quite how it works in reality though.

Striping can be done at the byte level or the block level. Byte-level striping means each file is split into pieces of one byte (8 binary digits) and each byte is written to a separate drive, i.e. the first byte gets written to the first drive, the second to the second, and so on. Block-level striping instead splits the file into logical blocks of data, a common default block size being 512 bytes, and each block is written to an individual disk. Striping is obviously used to improve performance, but that comes with a caveat – it provides no fault tolerance or redundancy whatsoever. This is known as a RAID0 setup.
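
To make the round-robin behaviour concrete, here is a minimal sketch of block-level striping in Python. The function names and the in-memory ‘disks’ are purely illustrative – real RAID striping happens at the controller or driver level, not in application code:

```python
BLOCK_SIZE = 512  # the common default block size mentioned above, in bytes

def stripe(data: bytes, num_disks: int, block_size: int = BLOCK_SIZE):
    """Split data into blocks and deal them round-robin across 'disks'."""
    disks = [bytearray() for _ in range(num_disks)]
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        disks[(i // block_size) % num_disks].extend(block)
    return disks

def unstripe(disks, block_size: int = BLOCK_SIZE) -> bytes:
    """Reassemble the original data by reading blocks back round-robin."""
    out = bytearray()
    offsets = [0] * len(disks)
    disk = 0
    # because blocks were dealt in order, the first exhausted disk means done
    while offsets[disk] < len(disks[disk]):
        out.extend(disks[disk][offsets[disk]:offsets[disk] + block_size])
        offsets[disk] += block_size
        disk = (disk + 1) % len(disks)
    return bytes(out)
```

Note that the last block may be shorter than 512 bytes, and any disk the round-robin hasn’t reached simply stays empty.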

[Diagram: RAID 0 striping across multiple disks]

If all the files are split up among the drives, what happens when one dies? This is where mirroring and parity come into RAID. Mirroring is the simplest method of redundancy: when data is written to one disk it is simultaneously written to a second, so the array holds two drives that are always exact copies of each other. If one drive fails, the other still contains all the data (assuming it doesn’t die too!). This is obviously not an efficient use of storage when half of the space can’t be utilised. This is where parity comes in – parity can be used alongside striping to offer redundancy without losing half of the total capacity. With parity, a single disk (depending on the RAID implementation) can store enough parity data to recover the entire array after a drive failure. It does this through XOR arithmetic, which I’m not going into in depth in this post. There is one glaring problem with this setup – what if two drives fail? You’re more than likely screwed. This is part of the reason nobody uses RAID4 in practice.
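
For the curious, the XOR maths is less scary than it sounds: XOR every data disk together and you get the parity disk; XOR the parity with all the surviving disks and you get back whatever the failed disk held. Here is a minimal illustrative sketch, with in-memory byte strings standing in for equal-sized disks and hypothetical function names:

```python
def compute_parity(disks):
    """XOR the corresponding bytes of every data disk into one parity 'disk'.
    Assumes all disks are the same size."""
    parity = bytearray(len(disks[0]))
    for disk in disks:
        for i, byte in enumerate(disk):
            parity[i] ^= byte
    return bytes(parity)

def rebuild(surviving_disks, parity):
    """Recover a single failed disk by XOR-ing parity with the survivors."""
    lost = bytearray(parity)
    for disk in surviving_disks:
        for i, byte in enumerate(disk):
            lost[i] ^= byte
    return bytes(lost)
```

This also shows why a double failure is fatal with single parity: with two disks unknown, one XOR equation per byte is no longer enough to solve for both.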

[Diagram: RAID 4 with a dedicated parity disk]
There are implementations such as RAID6 which use double parity, meaning the array can lose two drives without data loss – a better choice if you plan on storing many terabytes of data across a large number of drives. Logically, the more drives you have, the higher the chance that one of them will fail.

Why is unRAID not considered a standard NAS?

unRAID’s storage capabilities are broken down into three components: the array, the cache, and the user share file system. Let’s start with the array. unRAID uses a single dedicated parity disk and does not stripe data, which means you can mix hard drives of differing sizes, types, and speeds. It also makes the array more resistant to data loss, precisely because your data isn’t striped across multiple disks.

The reason striping isn’t used (or can’t be used) is that unRAID treats each drive as an individual file system. In a traditional RAID setup all the drives spin simultaneously, while in unRAID spin-down can be controlled per drive – so a drive holding rarely accessed files may stay spun down (theoretically increasing its lifespan!). And if the array fails, the individual drives are still readable, unlike traditional RAID arrays where you might suffer total data loss.

Because each drive is treated as an individual file system, unRAID can offer the user share file system I mentioned earlier. Let’s take an example: I could create a user share called media. For this media share I can specify:

  • How the data is allocated across the disks i.e. I can include / exclude some disks
  • How the data is exposed on the network using what protocols (NFS, SMB, AFP)
  • Who can access the data by creating user accounts with permissions
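
To illustrate the first bullet point: unRAID supports a few allocation methods for deciding which included disk receives a new file, ‘most-free’ being the easiest to describe. The sketch below is hypothetical code, not unRAID’s actual implementation:

```python
def pick_disk(free_space, include):
    """'Most-free' allocation: of the disks included in the share,
    return the one with the most free bytes.

    free_space maps disk names to free bytes; include is the set of
    disks the share is allowed to use. Both names are illustrative."""
    candidates = {disk: free for disk, free in free_space.items()
                  if disk in include}
    return max(candidates, key=candidates.get)
```

Excluding a disk from a share is then just a matter of leaving it out of the include set.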

The following image taken from the unRAID website explains the NAS setup better than words can:

[Image: unRAID user share configuration]

Finally, we need to address the performance cost of not striping. To get around this limitation unRAID introduced the ability to use an SSD as a cache disk for faster writes. Data destined for the array is initially written to the dedicated cache device and moved to the mechanical drives at a later time. Because this device is not part of the array, its write speed is unaffected by parity calculations. However, with a single cache device the data sitting on it is at risk, as the parity disk doesn’t protect it. To minimise this risk you can build a cache pool from multiple devices, both to increase your cache capacity and to add protection for that data.
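
Conceptually, the process that later relocates cached data to the array (unRAID calls it the ‘mover’) is just a scheduled copy-and-delete. The sketch below is purely illustrative – the paths and function are hypothetical, and the real mover is built into unRAID:

```python
import shutil
from pathlib import Path

def run_mover(cache_root: Path, array_root: Path) -> int:
    """Move every file from the cache to the array, preserving the
    directory layout. Returns the number of files moved."""
    moved = 0
    # sorted() materialises the listing before we start moving files
    for src in sorted(cache_root.rglob("*")):
        if src.is_file():
            dest = array_root / src.relative_to(cache_root)
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(src), str(dest))
            moved += 1
    return moved
```

In unRAID the real mover runs on a schedule (nightly by default, as far as I can tell), so recently written files live on the SSD until the next run.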

Forget ESXi, unRAID 6 looks perfect for what I need

Forget about ESXi and vSphere – Looks like unRAID is the way to go

Up until this point I had been completely focused on installing ESXi and trying to wedge it into my plans for this server. Today I discovered unRAID – NAS and a hypervisor in one bare-metal solution. It sounds absolutely perfect for my needs, as it will allow me to do everything I want in terms of virtual machines while also offering a NAS solution without the need to mess about passing through SATA controllers.

Why unRAID over ESXi?

When I was looking up FreeNAS, the main information I was getting was that it is resource intensive, doesn’t like to be virtualised, and that some of the plugins aren’t very stable at the best of times. The thing to bear in mind is that people only post online when things go wrong, not when they’re working as expected. Obviously FreeNAS is popular for a reason, but it just doesn’t seem to fit what I want to do. This more or less left me with a choice of a NAS or an ESXi box, unless I wanted to go passing SATA controllers through to a virtualised FreeNAS OS – which I can imagine might cause issues down the line.

Then I stumbled across unRAID in a post on a forum I frequent. A member mentioned he was running unRAID from a USB stick on his HP Gen8 MicroServer without any issues. A few minutes later I was sold after watching their promotional video. NAS, apps, and virtualisation all in one lightweight package.

I’m really looking forward to this now!

What happens next?

The one thing FreeNAS has over unRAID is that it’s free (the name probably gave that away). If I do go with unRAID I will need to fork out for a license, a one-time cost of $59, which certainly isn’t going to break the bank. I’ll make a decision after trying the 30-day trial, but I really think this is the way to go. Either way it’s still far cheaper than buying a vSphere / ESXi license from VMware.

Server requirements and my plans for the future

What is the plan for my home server?

In my previous post I was pretty happy with my purchase of a HP ProLiant Gen8 G1610T MicroServer as my home server, and I was thinking I might finally be able to get working on installing ESXi. However, I’m starting to suspect I may have underestimated what I’m looking for in a home server.

Doubts about the Gen8 G1610T home server

[Photo: HP Gen8 MicroServer]
Did I make the right choice?

By all means this is a great piece of kit, and pretty good value too. The problem is that I want both a NAS and an ESXi home server, and an all-in-one solution seems a little difficult to implement without issues. My original plan had been to install ESXi as my hypervisor and virtualise FreeNAS (or some other alternative), but it seems FreeNAS does not play nice when virtualised, as it requires block-level access to the drives to function properly. You can kind of force this by passing the disks through to the VM (I previously mentioned GPU pass-through), but I’ve heard mixed reviews about attempting it. It seems some people got it working without issues while others experienced crashes, downtime, etc.

If I did manage to virtualise FreeNAS without any problems, that still leaves me with a resource issue on my home server. The recommendation for FreeNAS is 1GB of RAM for every 1TB of storage space. I was considering a 4 x 4TB array of WD Red drives, but that would leave no memory available for the virtual machines. Even dropping to 3TB drives still only leaves me with about 4GB of usable RAM to work with. Certainly not ideal, and it wouldn’t allow for much wiggle room.

Then there is the financial cost to consider. Obviously the drives are going to be expensive, but I had planned on buying those anyway, so I consider them more of an investment. I think I would like to go all the way to 4 x 4TB, which I believe is the maximum this server supports, but honestly 4 x 3TB would be more than enough. That isn’t the main concern here though – the cost of the additional RAM is. Normal DDR3 RAM is pretty expensive nowadays, although it is finally dropping in price. The problem is that this server requires ECC (error-correcting code) memory. Unlike normal everyday RAM, ECC memory can detect and correct internal data corruption, so it is mainly used in systems where a fault could be catastrophic, such as financial services. Unfortunately that technology demands a hefty price tag at almost twice the price of normal RAM – over €150 for one of the cheaper ValueRAM kits of 16GB (2 x 8GB).

So what is really needed for my home server?

This is the question I am asking myself. My original intention was to set up a media server VM in ESXi, including Plex and torrenting, as well as additional media library applications such as SickBeard. I thought this would be a good starting point as it isn’t exactly outside my comfort zone. It would work great in combination with a NAS file server, so I need to find out how best to implement that – virtualised or otherwise. I believe FreeNAS has plugins that can support these features, but I’ve heard far from happy reviews about them in terms of uptime and general reliability.


I definitely do want some sort of functional vSphere implementation, as I want to gain my VCP certification this year if I can. At this point I’m thinking it might be best to build an ESXi whitebox to accomplish everything I want in terms of VMs and NAS, as it doesn’t look like the Gen8 is going to be enough for my needs. This could well be the beginning of a home lab… Stay tuned!

Screw it I’ll just buy a HP Gen8 microserver instead

So I bought a HP Gen8 Microserver

Considering the level of difficulty I’ve been having getting any hypervisor installed, I decided to give up on my original plan and just buy a server. I came across an offer on HP Gen8 MicroServer G1610T models where HP were offering €110 cashback. After the cashback the total cost will be about €160 for the bare-bones server. How could I say no to that? Ordered on Wednesday night and it arrived this morning – happy days!

Introducing the Gen8 Microserver G1610T

When I collected the box I was concerned because it looked like it may have been dropped. The image below should explain what I mean: there is a handle of sorts on the box itself, and it was ripped when I collected it. If it ripped while someone was carrying the box, they more than likely dropped it. I have to admit I felt slightly panicky powering the server up for the first time…

[Photo: the MicroServer box with its ripped handle]

When I finally removed the server from its box I was pleasantly surprised at just how small this thing is. I was expecting something a fair bit bigger but this fits cleanly on my desk. It’s not a whole lot bigger than my hand! My whole setup looks pretty sweet now after some desk re-organisation. I might go into this in a separate blog post – Watch this space!

[Photo: the Gen8 MicroServer set up on my desk]

When it finally came time to power on the Gen8 MicroServer I was nervous… but for once things went my way and it booted up without issues! Setup was a breeze with HP’s Intelligent Provisioning wizard – a few clicks of my mouse and I had it up and running.

The only issue I encountered was accessing the iLO console. For those of you who don’t know what iLO is, it allows me to manage and interact with the server from my computer via a web browser – almost as if I had a monitor, mouse, and keyboard connected to the server. Really handy feature. I tried to give iLO a static IP address so I wouldn’t have to go looking for it every time it changed with DHCP. However, that IP address did not allow me to access the server, and I could not even ping it from the command line. A reboot of the server seemed to resolve this – DHCP had been re-enabled after the reboot and the new IP worked. I changed that to static and all was well with the world! 🙂

What did not occur to me when buying the server was that no HDD was included, so disappointingly there wasn’t a whole lot I could do. Some time in the next few days I will get around to installing ESXi using the internal USB slot and hooking up the second SSD from my desktop for VM storage. I might finally get around to setting up some VMs! But let’s not jinx it…

Update: I did not install ESXi but instead decided on unRAID

Giving up on Proxmox, working with Hyper-V

I have abandoned Proxmox!

The installation of Proxmox should have been straightforward, but unfortunately it just was not to be. I attempted the installation process from scratch this time and came out the other end with the same result: Debian refusing to boot with the pve kernel. This left me with a few options:

  • Install Proxmox on a different flavour of Linux
  • Install the bare metal ISO of Proxmox
  • Go with a different hypervisor


After weighing up my options and doing a bit more research, I decided to ditch Proxmox altogether. The first attempt at installation didn’t leave me with much confidence, and I was tired of messing about with bootable USBs. So I decided to go completely against my original decision and try out something different…


Client Hyper-V over Proxmox

Client Hyper-V is the name for the virtualisation technology introduced in Windows 8. It is not enabled by default, so many people probably don’t even realise they have it; it needs to be switched on via the Control Panel. Client Hyper-V is more or less a slightly limited version of the server implementation of Hyper-V. From reading the TechNet article on these limitations I don’t think they are going to affect me, though I did read somewhere that the free version can only run a small number of VMs. I haven’t looked into this enough to be sure, but I don’t see it being a major issue yet. The only requirements for enabling Hyper-V are:

  • Your desktop must have at least 4GB of RAM. Mine has 16GB so I have more than enough for multiple VMs
  • Your CPU must support SLAT (Second Level Address Translation). My AMD 8350 supports SLAT so this is also not an issue.

Ok, good to go!

Does my CPU support SLAT?

Microsoft has a handy little utility called coreinfo that allows you to check for this. Once you have downloaded it, extract it to a directory of your choice. Then open an admin command prompt by pressing Win + X and choosing “Command Prompt (Admin)”, and navigate to the directory where you extracted coreinfo. Now run the command ‘coreinfo -v’. On an AMD processor that supports SLAT there will be an asterisk in the NP row, as below:

[Screenshot: coreinfo -v output with an asterisk in the NP row]

Enabling Hyper-V

Because Hyper-V is an optional feature, it needs to be enabled via the Control Panel. Open the Control Panel, click Programs, and then click Programs and Features. Click Turn Windows features on or off, find Hyper-V in the list and enable it, then click OK and Windows will ask to reboot your machine.

[Screenshot: Turn Windows features on or off dialog with Hyper-V selected]

Easy right?

The fun begins..

Guess what!? More issues! Yay! For some reason I am just not allowed to implement any sort of virtualisation technology outside of VirtualBox. Damn you Oracle, what have you done! During my first attempt at enabling Hyper-V it reached about 90% progress after restarting, so I figured I had missed a prerequisite or something along those lines. I tried again and got the same result. The message appearing on my screen was “We couldn’t complete the features”. Interesting, but at least I didn’t turn my PC into a paperweight this time. Some furious google-fu brought me all kinds of results, but the primary answer seemed to be related to antivirus software. I don’t have any installed other than Windows Defender (or Microsoft Security Essentials – whatever the built-in option is called in Windows 8), so I am working under the assumption that this is not responsible.

A second common answer was a backlog of Windows updates preventing the Hyper-V installation from completing. Apparently my computer just hates me, because even something as simple as getting Windows updates to install was proving difficult. Windows continuously refused to connect to the update server, returning error 0x80243003 every time I checked, so I needed to run a Microsoft utility for repairing Windows Update. This worked… after running it countless times, as each run turned up a new problem! Eventually, after getting all the updates installed, I attempted to enable Hyper-V once more and this time reached about 93%! On top of that, the error changed to “We couldn’t complete the updates”, which I guess can be considered progress…?

[Screenshot: “We couldn’t complete the features” error]

I tried reviewing the event logs for my installation attempt, but they did not shed any light on what happened. Moving on, the final relatively common resolution was to enable BitLocker on the drive where Hyper-V is installed. It was worth a try… I mean, what harm could it do?

Bitlocker – Windows encryption

BitLocker lets you encrypt the hard drive(s) on your Windows system. It’s basically there to protect your data on the off chance that someone steals your physical computer: without the password they will not be able to boot the machine or access your data. So I went through the process of enabling BitLocker…

[Screenshot: enabling BitLocker]
To cut a long story short, it worked, but my computer didn’t play nice with it (surprise surprise). Following a reboot after enabling BitLocker I was greeted with an orange and white striped screen like the one below:

[Photo: orange and white striped screen after enabling BitLocker]
It wasn’t quite paperweight material this time, as I quickly realised I could still type in the password I had set despite having nothing to look at. I just typed and hoped, and thankfully it worked out. I found this rather annoying and potentially a dodgy situation, since I couldn’t see what or where I was typing. I did not test whether Hyper-V worked, as I did not have the patience to let the process completely encrypt the SSD before disabling it…

So.. What now?

I never did get Hyper-V enabled, and I do not think I will any time soon. So for now I am going to decide how to proceed from here. Surely I’ll get something working eventually…


Proxmox – Tried installing it and it didn’t go so well..

Proxmox refused to install

When I finally got my PC back working as expected and was able to boot into both Linux and Windows, I figured it was about time to actually install my hypervisor. As I mentioned in my previous post, I had decided to install Proxmox as a type 2 hypervisor on top of Debian. The instructions seemed pretty straightforward, but alas, as per my luck so far, it was not to be…


I used UNetbootin to create the bootable USB for Proxmox. If you have not heard of UNetbootin, I highly recommend it.

Getting started with Proxmox

The installation steps listed for Debian on the Proxmox site seemed relatively straightforward. The first step involved checking my hosts file to ensure that my hostname is resolvable:

luke@debian:~$ cat /etc/hosts
127.0.0.1 localhost

127.0.1.1 debian.mydomain.com debian
Next I needed to edit my sources list to add the Proxmox VE repository. If you have ever used Linux you have more than likely come across the apt-get command. Apt uses a file, /etc/apt/sources.list, that lists the ‘sources’ from which packages can be obtained. The following three entries needed to be added (note that editing this file and the commands that follow require root privileges):

luke@debian:~$ sudo nano /etc/apt/sources.list
deb http://ftp.at.debian.org/debian wheezy main contrib
deb http://download.proxmox.com/debian wheezy pve
deb http://security.debian.org/ wheezy/updates main contrib

Add the Proxmox VE repository key:
luke@debian:~$ wget -O- "http://download.proxmox.com/debian/key.asc" | sudo apt-key add -

Update your repository and system by running
luke@debian:~$ sudo apt-get update && sudo apt-get dist-upgrade

Install the kernel and kernel headers
luke@debian:~$ sudo apt-get install pve-firmware pve-kernel-2.6.32-26-pve
luke@debian:~$ sudo apt-get install pve-headers-2.6.32-26-pve

At this point the next step is to restart and select the pve kernel from the boot menu. It’s a surprisingly straightforward process – I honestly thought it would be more complicated. However, as has been the pattern so far, it’s just one step forward, two steps back…

Problems begin..

Once again I had issues actually booting the system. After rebooting into GRUB and choosing the pve kernel, this appeared on my screen:

Loading, please wait...
usb 5-1: device descriptor read/64, error -62
fsck from util-linux 2.25.2
/dev/mapper/debian--vg-root: clean, 166811/7012352 files, 1550982/28045312 blocks

I left it at that for well over an hour before just turning it off, and I have not gone back to it since. Proxmox has flat-out refused to work with me so far, and at this point I’m just about ready to give up and continue using VirtualBox… We’ll see.

Choosing a hypervisor

The confusion of choosing a hypervisor

As I discussed in my previous post, there is a wealth of information around the web on virtualisation, and when I started researching which hypervisor to choose it was no different. In fact, the more I looked the more indecisive I became. However, I recalled my first post on this topic and the article I mentioned reading about setting up gaming machines using what is known as PCI pass-through. I quickly learned that this doesn’t work well with Nvidia graphics cards, and that turned out to be the deciding factor for me in the end.

Hypervisor PCI pass-through; What is it and why does it matter?

If you have ever tried playing a game in a virtual machine, you will have quickly realised it is not a viable solution for a gaming desktop. This is because the GPU is normally emulated by the hypervisor, and a resource manager carves up its resources and passes them to the individual machines. This is where PCI pass-through comes into play: rather than sharing the GPU’s resources among multiple machines, the hypervisor can assign the whole card to one machine so it has independent ownership of it. With no virtualisation layer between the GPU and the VM managing resources, this allows for near-native performance. In theory you should not even know you are using a VM!

Many months ago I decided to get two GTX 970s for my gaming desktop rather than opting for an AMD alternative. I am living to regret that decision somewhat, as I am now learning that Nvidia does not allow their consumer-grade GPUs to be used with this pass-through technology. For that privilege you need to upgrade to their Quadro series, which from what I can tell offers no benefit other than allowing pass-through. Did I mention they’re also much more expensive and far inferior compared to their GTX counterparts? Nice one, Nvidia! Since I don’t plan on replacing my GPUs any time soon, this has more or less ruled out ESXi for me – but I learned that it is possible (with a lot of effort) to implement pass-through on a Linux-based hypervisor such as KVM / Proxmox.

And the winner is..


I narrowed my choices down to KVM and Proxmox (which is based on KVM) as the only two viable options. In the end I decided to proceed with Proxmox as my hypervisor, for the simple reason that it has a built-in web GUI for management and offers the option of either a type 1 or type 2 installation. That leaves me with plenty of flexibility and simple management.