Category: Home server

Blog posts related to my unRAID home server and how I came to choose unRAID as my server operating system. Tips, tricks, and mistakes made along the way.

Telegram notifications with unRAID and Docker applications


Telegram has been a dream come true. I recently discovered how simple it is to set up a Telegram bot for getting notifications from my server. One of the grievances I’ve had since starting my server journey was needing to constantly monitor things like available storage, cache disk usage, and new downloads. I often found out too late when something was going wrong and had to spend time restoring functionality to the server. I’ve finally found a super simple solution to this problem: a Telegram bot that sends automated notifications.

I’m not going to talk much about creating or configuring a Telegram bot, because I really didn’t spend any time playing with that functionality. What I am going to cover in this post is using the Telegram bot API to receive notifications from unRAID and the various applications I have running, like Sonarr, Radarr, and Ombi, as well as some general system-level stuff like available storage space.


Telegram Setup

There are a few things you need to do with Telegram before you get started with setting up notifications:

  1. Create a Telegram bot
  2. Optional: Create a channel where you’d like these notifications to be sent. I say this is optional because you can have the messages sent directly to you instead.
  3. Get your channel ID

Create a Telegram bot

To create a bot you’ll need to start a conversation with @BotFather on Telegram. When you start a conversation with this bot you’ll be given a number of options, one of which is /newbot. You just send this command to the bot, follow the prompts on screen, and you’re done! It’s literally that easy to get started with your first Telegram bot. As proof here’s a screenshot of my conversation:

The end of the last message (which I’ve hidden) will provide you with your private bot API token. You will need to record this long string of characters for use later on.
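For reference, every bot API call is just an HTTPS request to api.telegram.org with the token embedded in the URL. Here’s a quick sketch of how the sendMessage endpoint is formed (the token below is a made-up placeholder, not a real one):

```shell
# Build the sendMessage endpoint from the bot token (placeholder token shown)
BOT_TOKEN="123456:ABC-placeholder"
URL="https://api.telegram.org/bot${BOT_TOKEN}/sendMessage"
echo "$URL"
# A real call would then look like:
#   curl -s -X POST "$URL" -d chat_id=<your-chat-id> -d text="hello"
```

This same URL pattern is what we’ll use later in the storage-monitoring script.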

Create your Telegram channel

As mentioned above this step is optional, but I would recommend it if you want to add additional users to the notifications, or additional bots for other purposes. In my scenario I created a private channel, created a separate bot for each application, then added them all to the one channel. The reason I use separate bots is that you can enable the “Sign Messages” option in Telegram, which makes each bot sign its messages. That way I know which bot sent which message, and therefore which application it came from.

Get your Telegram channel chat ID

This part was less than obvious at first, but I discovered a super simple solution after some brief research. You can add the bot @get_id_bot to your channel. When the bot receives a message in this channel it will output the channel chat ID. If you want to be directly messaged instead of creating a new channel, you can just message the bot directly to get your personal chat ID. If you have added @get_id_bot to your channel you should remove it once you have your ID as it will continuously respond to every new message.

That’s it!

You’re now ready to get started! Below I’ll run through some simple notifications I have from Sonarr, Radarr and Ombi.


Sonarr & Radarr

I’ve used Sonarr as the example here, but this also applies to Radarr as they are basically the same application. You can find Telegram as one of the options under Settings > Connect. Sonarr will allow you to send notifications whenever a new episode is grabbed, downloaded, upgraded, or renamed. You just need to provide the Bot Token and Chat ID that we recorded in the previous steps:

Options when setting up Telegram notifications in Sonarr

Unfortunately there doesn’t seem to be any (obvious) way of customising the notifications that are sent. I don’t have any problems with these notifications, but it would be nice to have the ability to customise the notification content.

Ombi

Ombi, on the other hand, does allow plenty of notification customisation using a list of defined variables and various categories that can be enabled or disabled. You also have the option of using Telegram markdown or raw HTML formatting for each notification’s content:

There are links within this page in Ombi that explain the Telegram markdown and the variables that can be used. As an example, I customised the issue notification to provide more detail:

{Alias} has reported a new issue for the title {Title}
*Category:* {IssueCategory}

*Subject:* {IssueSubject}

*Description:* {IssueDescription}


This gives me a nice overview of the problem without needing to log into Ombi to view the details.


System level unRAID alerts

The last area that I wanted to monitor was available storage space. In my last post I addressed hard links in Sonarr not working for me and storage being filled up with duplicate files. While that issue was ongoing my server kept crashing because the storage kept filling with hundreds of GBs of useless duplicates. I want to avoid anything like that happening again, and Telegram is an obvious choice here: it’s instantaneous, so I can be proactive rather than reactive. However, unRAID doesn’t have any built-in capability for defining notifications like this... until I discovered the “CA User Scripts” application. This incredibly flexible application allows you to write your own bash scripts and schedule them for execution.

I wrote a bash script that will calculate the amount of free space on the array and cache disk, and send a notification to my Telegram channel using the Telegram API:


#!/bin/bash
# Telegram credentials - replace these placeholders with your own bot token and chat ID
BOT_TOKEN="<your-bot-token>"
CHAT_ID="<your-chat-id>"
URL="https://api.telegram.org/bot${BOT_TOKEN}/sendMessage"

# Percentage of space consumed on the array (second line of df output, trailing % stripped)
ARRAY_USED_PERCENT=$(df -h /mnt/user | awk 'NR==2 {print $5}' | cut -d'%' -f1)
# Available array space in human-readable units
ARRAY_AVAIL_SPACE=$(df -h /mnt/user | awk 'NR==2 {print $4}')
# Percentage of space consumed on the cache drive
CACHE_USED=$(df -h /mnt/cache | awk 'NR==2 {print $5}' | cut -d'%' -f1)
# Available cache space in human-readable units
CACHE_AVAIL_SPACE=$(df -h /mnt/cache | awk 'NR==2 {print $4}')

MESSAGE="WARNING: Free array space is at $((100 - ARRAY_USED_PERCENT))%. There is $ARRAY_AVAIL_SPACE available."
curl -s -X POST "$URL" -d chat_id="$CHAT_ID" -d text="$MESSAGE"

MESSAGE="WARNING: Free cache space is at $((100 - CACHE_USED))%. There is $CACHE_AVAIL_SPACE available."
curl -s -X POST "$URL" -d chat_id="$CHAT_ID" -d text="$MESSAGE"

This script is scheduled to run every hour using the “CA User Scripts” app.
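One refinement worth considering: as written, the script sends a message every hour regardless of how full the disks are. A small sketch of a threshold check (the threshold and the example usage value here are my own, not part of the original script) that only alerts when free space drops below a chosen percentage:

```shell
# Only alert when free space falls below a chosen threshold (values are illustrative)
THRESHOLD=15
ARRAY_USED_PERCENT=90   # example value; in the real script this comes from df as shown above
FREE=$((100 - ARRAY_USED_PERCENT))
if [ "$FREE" -lt "$THRESHOLD" ]; then
    MESSAGE="WARNING: Free array space is at ${FREE}%."
    echo "$MESSAGE"   # the real script would POST this to the Telegram API instead
fi
```

With that guard in place the hourly schedule stays quiet until there is actually something to act on.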

I definitely plan on making more use of these notifications, but this has already been such an awesome ease-of-life improvement for me.

Sonarr and the “Use Hardlinks instead of Copy” conundrum with Unraid


Sooo, I haven’t updated my blog in quite a while... in fact there hasn’t been a new post in about 18 months. What can I say? I’ve been slacking! Today I decided to tackle a problem that has been causing me heartache with my server for quite some time, and I thought it was the ideal topic to write about: Sonarr gives me the option of using a “hard link” when moving files rather than copying the file and duplicating the space used. But why isn’t it working?


Wait, what’s a hard link?

Let’s start with the basics. A hard link essentially allows two (or more) file names to point to the same location on disk. In other words, I can create a “link” between ‘File1‘ and ‘File2‘ such that opening either file opens the same source data from disk. Even though both File1 and File2 exist on the filesystem, there is actually only one real copy of the data on disk. The benefit of hard links is that a file can ‘exist’ in two or more locations without taking up any additional disk space.

When File1 and File2 have a hard link designated between them it doesn’t matter if I go to open File1 or File2, I am actually opening the same data.
root@Tower:/mnt/disk3/Documents# touch File1
root@Tower:/mnt/disk3/Documents# echo "Adding some text to File1" > File1
root@Tower:/mnt/disk3/Documents# ln File1 File2
root@Tower:/mnt/disk3/Documents# cat File2
Adding some text to File1

I could go ahead and delete File1 from the Filesystem, but it wouldn’t matter because I still have File2. The fact that File1 is the ‘original’ file is irrelevant once a hard link has been established. They are now both considered equal.
root@Tower:/mnt/disk3/Documents# rm File1
root@Tower:/mnt/disk3/Documents# cat File2
Adding some text to File1

In order for me to explain the problem I’m having with Sonarr, it’s important that we first understand how this works. To do that we need to take a step back and talk about how files are stored in Unix-like file systems.


Files and inodes

When we look at a file in Linux it encapsulates two distinct but linked concepts:

  1. File Content and structure
  2. Metadata and properties

The file content is written out to disk as raw blocks of data. The file acts as a reference point to the location of these blocks so we know where the file begins and ends. This information is stored using the concept of an “index node”, or ‘inode’.

The inode is a data structure in Unix-like systems that describes the file; it points to the location of the data on disk and holds the file’s associated metadata and permissions. Every file on Linux is associated with an inode number, which is stored in an inode table. From the inode number the kernel’s file system driver can access the inode contents, including the location of the file on disk.
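This bookkeeping is easy to poke at from the shell. As a small illustration (the file names here are arbitrary), stat can print a file’s inode number and hard-link count directly:

```shell
# Print a file's inode number and hard-link count with stat (GNU coreutils)
cd "$(mktemp -d)"
touch Example
ln Example ExampleLink       # second name, same inode
stat -c 'inode=%i links=%h' Example   # links=2 after the ln
```

The inode number itself will vary, but the link count goes to 2 as soon as the second name is created.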

Finding the inode number

You can get the inode number for a file using the ‘ls -i‘ or ‘ls --inode‘ command:
root@Tower:/mnt/disk3/Documents# ls -i
414 File1

Above we can see that File1 has an inode number of 414. This means that whenever a user attempts to read from File1, the kernel will open the data at the disk location specified by this inode. To demonstrate the concept of a hard link more clearly, let’s create a link again and confirm that the inode number for the new file matches:
root@Tower:/mnt/disk3/Documents# ln File1 File2
root@Tower:/mnt/disk3/Documents# ls -i
414 File1
414 File2

So now we can see that regardless of whether we try to open or edit File1 or File2 it does not matter as it refers to the same inode, and therefore the same disk location.

You can also find how many reference links a file has (or, more simply put, how many hard links) by running “ls -l”, or “ls -li” to include the inode number. You can see below that each file has 2 links:
root@Tower:/mnt/disk3/Documents# ls -l
-rw-rw-rw- 2 root root 0 Jun 29 17:36 File1
-rw-rw-rw- 2 root root 0 Jun 29 17:36 File2

We can check where these references are using the inode number:
root@Tower:/mnt/disk3/Documents# find . -inum 414
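If you don’t want to look up the inode number first, find also has a -samefile option that lists every name sharing an inode with a given file. A quick sketch in a throwaway directory:

```shell
# List all hard links to a file without knowing its inode number
cd "$(mktemp -d)"
touch File1
ln File1 File2
find . -samefile File1 | sort    # lists ./File1 and ./File2
```

This becomes relevant later, because -samefile behaves differently on Unraid user shares.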


So what’s the problem?

I have the Sonarr configurable option “Use Hardlinks instead of Copy” set to enabled. This option can be revealed by toggling “Advanced Settings” to “Shown” in the top right:

Advanced Options

You can find the “Use Hardlinks instead of Copy” option under “Importing”:

Use hardlinks instead of copy option

Theoretically this should allow any new media file to exist simultaneously in the downloads folder and in my Plex library. In an ideal world it would mean everything is fully automated for my downloads; in practice, however, the hardlink option did not seem to work. The files I downloaded were being copied rather than linked, so each file consumed twice as much space. Unfortunately this meant frequent manual intervention, deleting the contents of the download directory to free up space.

After performing some initial checks I was immediately able to confirm that there was no hard link for new downloads. Using an example file that had just been downloaded, I could see there was only 1 reference:
root@Tower:/mnt/user/Media/downloads# ls -li
 71244 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

The inode number also only returns one item:
root@Tower:/mnt/user/Media/downloads# find /mnt/user/Media -inum 71244

After some further research, this initial “proof” that hardlinks weren’t working turned out to be a quirk of the Unraid system...


Unraid and the FUSE filesystem

My research led me to a post on the Unraid forums where a user stated that hard links do not work in Unraid. This comment from limetech himself set me straight:

“Your ‘ls -li’ command is listing “pseudo” inode numbers in the fuse file system.  The two names indeed point to same file.”

If we refer to the Unraid User guide we’ll find this little tidbit which expands on the above statement:

“User Shares are implemented using proprietary code which builds a composite directory hierarchy of all the data disks. This is created on a tmpfs file system mounted on /mnt/tmp. User Shares are exported using a proprietary FUSE pseudo-file system called ‘shfs’ which is mounted on /mnt/users.”

There are a number of things we can discern from the above which will help our investigation:

#1: If I make a hardlink on a user share it will also make a hard link on the actual disk. Proof:
root@Tower:/mnt/user/Documents# touch UserShareFile
root@Tower:/mnt/user/Documents# ln UserShareFile UserShareHardLink
root@Tower:/mnt/user/Documents# ls -li /mnt/disk3/Documents
413 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareFile
413 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareHardLink 

#2: The shfs FUSE file system generates “pseudo” inode values, so there is no way to determine from a user share whether a file has a hard link. The “ls -l” output indicates that there are two links, but we have no way of determining where they are. Proof:
root@Tower:/mnt/user/Documents# ls -li
321717 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareFile
321719 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareHardLink
root@Tower:/mnt/user/Documents# find . -samefile UserShareFile
root@Tower:/mnt/user/Documents# find /mnt/user -samefile UserShareFile


Re-starting the investigation

Now let’s go back and check that file I downloaded in the previous section, armed with the knowledge we learned from the forum gurus. As a reminder, this is the file we’re dealing with:
root@Tower:/mnt/user/Media/downloads# ls -li
 71244 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

This file actually exists on Disk3:
root@Tower:/mnt/disk3/Media/downloads# ls -li
7156367 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

From the command output we can see that there are no additional links to this file, but let’s double check with the inode value:
root@Tower:/mnt/disk3/Media/downloads# find /mnt -inum 7156367

So we’re back to square one – The hardlink functionality is definitely not working, and this time we have proof!

After enabling debug level logging in Sonarr I finally found the culprit:
19-6-29 15:38:44.2|Debug|EpisodeFileMovingService|Hardlinking episode file: /downloads/FileName to /series/Series Name/Series Season X/FileName
19-6-29 15:38:44.2|Debug|DiskTransferService|HardLinkOrCopy [/downloads/FileName] > [/series/Series Name/Series Season X/FileName]
19-6-29 15:38:44.2|Debug|DiskProvider|Hardlink '/downloads/FileName' to '/series/Series Name/Series Season X/FileName' failed.
[v2.0.0.5322] Mono.Unix.UnixIOException: Invalid cross-device link [EXDEV]


The Resolution

You cannot have a hard link that spans different mount points. All shares in Unraid are attached to the system via the /mnt/user mount point, but each Docker application can also define its own internal mappings for mount points. In this scenario I had provided Sonarr with two different mounts at the root of the container:
/downloads => /mnt/user/Media/downloads
/series => /mnt/user/Media/series
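That EXDEV error isn’t Docker-specific; any attempt to hard link across two different mounts fails the same way. A tiny demonstration, assuming /tmp and /dev/shm are separate filesystems (as on most Linux systems):

```shell
# Hard links cannot span filesystems: /tmp and /dev/shm are normally separate mounts
touch /tmp/exdev-source
if ! ln /tmp/exdev-source /dev/shm/exdev-target 2>/dev/null; then
    RESULT="cross-device link refused"
fi
echo "$RESULT"
rm -f /tmp/exdev-source /dev/shm/exdev-target
```

Inside a container, two volume mappings are two distinct mounts, so the same refusal applies even when both map back to the same host share.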

Even though these map to the same system-level mount point, the container sees them as two different mount points. To resolve this I simply needed to map them to the same mount point within the container. I updated the config for Sonarr to put the two directories under a single mount named /media:
/media/ => /mnt/user/Media

I then had to apply the same mapping to Transmission. This unfortunately had the negative consequence of pausing all of my torrents, as the source data could no longer be found under the old paths. After painstakingly updating each torrent one by one, setting the new location and verifying the source data, I was back in action. Sonarr thankfully allows you to edit the root folder for series in bulk, so that was a much simpler fix.

After fixing the container to only have a single mount point, the hard links are now being created successfully:
19-6-29 22:44:20.7|Debug|EpisodeFileMovingService|Hardlinking episode file: /media/downloads/FileName to /media/series/SeriesName/SeasonName/FileName
19-6-29 22:44:20.7|Debug|DiskTransferService|HardLinkOrCopy [/media/downloads/FileName] > [/media/series/SeriesName/SeasonName/FileName]
19-6-29 22:44:20.7|Debug|DiskProvider|Setting permissions: 0644 on /media/series/SeriesName/SeasonName/FileName

I am also able to confirm this using the inum of the file:
root@Tower:/mnt/disk1/Media/downloads# find /mnt -inum 7280027077

Setting up the Plex container in unRAID 6


unRAID: How to set up the Plex container

Now that I have unRAID up and running, in this post I am going to discuss how I went about adding the Plex container in unRAID 6. It’s a fairly straightforward process and thankfully didn’t cause me many headaches. I was originally running the server from my desktop, but that wasn’t an ideal solution because I don’t leave my desktop on all the time.

Plex logo

Enabling Docker

The first step involved here was enabling Docker. If you’re not familiar with what Docker is, then take a look at my post where I explained how the application server works in unRAID.

To do this you simply navigate to the Settings tab, click Docker, and then use the drop-down menu to enable Docker. PlexMediaServer is one of the default Docker templates that comes packaged by Lime Technology in unRAID 6. Navigate to the new Docker tab and you will see a section named ‘Add container’. From there you want to choose ‘PlexMediaServer’ under the limetech heading.

Plex docker template

There is only one required folder, named ‘config’, for this container, but you’re going to want to add more. The config folder does exactly what it says on the tin – it stores the configuration settings for this particular container. I originally made the silly mistake of pointing this folder to a directory on the flash drive running unRAID; as soon as I rebooted the server I lost all the configuration I had previously spent time setting up – bummer. So for this directory you’re going to want to specify a directory on the array. I created a folder called ‘DockerConfig’.

In order to add any media you will also need to specify those directories. I added one named /movies pointing to /mnt/user/Movies and another named /series pointing to /mnt/user/Series.
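For comparison, outside of the unRAID template UI those same mappings would look roughly like the following docker run flags. This is a sketch only – the image name and the config host path are my assumptions, not part of the unRAID template:

```shell
# Rough docker-run equivalent of the template's volume mappings (image and paths illustrative)
docker run -d --name plex \
  -v /mnt/user/DockerConfig/plex:/config \
  -v /mnt/user/Movies:/movies \
  -v /mnt/user/Series:/series \
  plexinc/pms-docker
```

The template UI is generating something along these lines under the hood; each “container path” field becomes the right-hand side of a -v mapping.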

Plex container volumes

All that is left now is to allow unRAID to create the Docker container. Simple as.

Configuring Plex

There wasn’t a whole lot of configuration required with Plex, assuming you only want to use it locally within your home. If you plan on streaming media externally you will need to set up remote access. There are three steps involved:

  1. Sign into your Plex account, assuming you created one. If not, you will need to register an account.
  2. Port forwarding – by default Plex will use port 32400, but you can specify another port if you prefer. You will need to forward this port to the IP of your server.
  3. Edit the settings of your Plex server for remote access, ticking the box to manually specify a port and entering whichever port you chose.

unRAID 6 benchmarks


Now that I’ve got unRAID up and running I thought it would be interesting to run some benchmarks to determine what kind of speeds to expect. Bear in mind this is without a cache device installed, as I am waiting on my SATA controller to arrive before installing the SSD. Why? The optical drive SATA port runs at SATA I speeds, hence buying the additional SATA controller. This way I can have four drives in the bay slots and the cache drive hooked into the second SATA controller.

The program I used to run these benchmarks is called ‘CrystalDiskMark‘. CrystalDiskMark is designed to quickly test the performance of your hard drives. The program allows you to measure sequential and random read/write speeds for a specific drive. In order to test unRAID I needed to mount one of the unRAID NAS shares as a volume within Windows (N:).

This is 5 x 4GB passes with two WD Red 3TB drives, one of which is acting as a parity disk:

unRAID benchmark speeds

And here are the results with 5 x 1 GB passes:

unRaid benchmark speeds

I don’t think that’s too bad considering I don’t have a cache device set up yet. With a cache device I should hit much higher numbers because I won’t need to read/write from the disk array directly. It’s still slower than my desktop’s mechanical drive for both sequential reads and writes, but I doubt I will notice the difference in real-world usage – hopefully, anyway. Parity certainly had more of an impact on the speeds than I would have liked. My internal 1TB WD Blue mechanical drive clocked the speeds below: much faster for sequential reads and writes, but a hell of a lot slower for the 4K metrics.

Desktop drive benchmark speeds
I might revisit this in the future when the SATA controller arrives so I can set up my cache device. Watch this space!

Installing unRAID 6 on my HP Proliant Gen8 Microserver G1610T


As discussed in my previous posts I decided on unRAID for my HP Microserver OS. For now I simply have a trial version but I can already tell I will end up buying a license. The installation process is super easy as you will see below:

Downloading the unRAID image

Before installing unRAID I was under the impression that it would come as an ISO like every other OS. Instead, unRAID comes as a package of files with a ‘make-bootable’ executable inside. Preparing the USB is easy! First of all, your USB drive needs to be formatted as FAT32 and the volume label must be set to UNRAID in capital letters.

Microserver unRAID USB format

Then simply transfer all the files to the root directory of the USB (i.e. not in a folder) and run the ‘make-bootable.bat’ file as administrator (right click -> Run as administrator). A CMD prompt will appear asking you to confirm you want to proceed – press any key to continue and hey presto, job done.

Microserver unRAID make bootable

Now you just need to eject the USB from your computer, connect it to the internal USB slot of the microserver, and boot it up. Mine booted from the USB straight away without editing any BIOS options, but your mileage may vary. After successfully booting up I was able to navigate to http://tower from my desktop and was greeted with the web GUI. It really was that easy!

Microserver unRAID GUI

unRAID Licensing

Before you’re able to do anything you will need a license key. Upon first installation you’re entitled to a 30-day evaluation period to test out the software. To activate your license navigate to ‘Tools -> Registration’. You will need to enter your email address so the license URL can be sent to you, which you then paste into the box provided.

Microserver unRAID registration

After that you’re pretty much good to go! Stay tuned for more unRAID and microserver posts!

New components for my HP G1610T Gen8 Microserver – Upgrades!


G1610T Gen8 Microserver upgrades!

I decided it would be best to invest some more money into my G1610T Gen8 Microserver rather than trying to struggle through with the limited resources available in the server itself. I also needed some hard drives to fill up the drive bays for my NAS.

What did I buy?

2 x 3TB WD Red NAS drives
1 x 2 Port SATA Controller
3 x Cat6a ethernet cables

G1610T Gen8 Microserver components

The NAS drives will obviously be going into the bays available at the front of the server so I can set up my NAS. As I only have two drives at the moment, the array will only have 3TB of usable space due to the parity disk. Realistically that is all I need to start with, as my media collection is only about that large at the moment. I bought the SATA controller card because the internal SATA connector only runs at SATA I speeds, while this PCI card runs at SATA III. It will be used to connect my cache devices if I ever get around to implementing that.

G1610T Gen8 Microserver sata controller

The requirements for running unRAID are pretty minimal compared to FreeNAS, with unRAID only requiring about 1GB of RAM if you intend to run it as a pure NAS system. However, once you start playing around with containers and virtual machines you will understandably get tight on resources. With that in mind I decided it would be best to upgrade to 16GB of RAM, which is the most this machine will accept. This didn’t come cheap at about €160 for the two-stick set – ECC RAM is bloody expensive and is unfortunately a necessity for this particular model.

G1610T Gen8 Microserver ram

Lastly, I decided to invest in some decent Cat6a cables for connecting my server and desktop to the network, along with a 1Gb switch. I’ve been running on Cat5e cables for quite a long time because, in all honesty, I had no requirement for the additional benefits of Cat6 cables. Now that I will be regularly transferring files to and from the NAS, I felt the additional bandwidth might be beneficial.


How the application server works in unRAID – What is Docker?


So what is Docker?

Before installing unRAID I had not heard of Docker, but I wish I had. Before explaining what Docker is, I’m going to take you back to one of my first posts, where I explained the purpose of a hypervisor and the differences between a type 1 and a type 2 hypervisor. I think this is useful to understand before proceeding with this post.

The idea behind a hypervisor is that it can emulate hardware in such a way that an operating system running on it has no idea it is not a physical machine. By creating multiple VMs (virtual machines) you can isolate applications from each other. For example, you might have one VM for torrents and automated media management, and another for web development. This is all great in theory, but the issue is that virtualisation is resource intensive! The alternative to deploying virtual machines is using ‘containers’. Containers are similar to virtual machines in that they also allow a level of isolation between applications, but there are some significant differences.

What’s a container? How does it differ from a Virtual Machine?

Virtual machines got us past the “one server for one application” paradigm that was formerly common in data centers and the enterprise. Introducing the hypervisor layer allowed multiple operating systems to run on the same hardware, so many different applications could be used without wasting resources. While this was a huge improvement, it is still limited: each application you want to run requires a guest operating system with its own CPU allocation, dedicated memory, and virtualised hardware. This led to the idea of containers. Docker has only been around for a few short years, but container technology has been with us for decades – it just wasn’t all that popular until recently.

docker logo
unRAID makes use of Docker containers. Similar to a virtual machine, a Docker container bundles up a complete filesystem with all of the dependencies required for an application to run. This guarantees it will run the same way on every computer it is installed on. If you have ever done any development work you may understand the frustration of developing an application on one machine and then deploying it on another, only to find that it won’t launch. Docker does away with this by bundling all the required libraries and so on into one lightweight package, with the ‘Docker Engine’ being the one and only requirement. The Docker Engine is the program that builds and runs containers.

Fundamentally a container looks similar to a virtual machine. A container uses the kernel of the host operating system to run multiple guest instances of applications. Each instance is called a container and has its own root file system, processes, network stack, etc. The fundamental difference is that it does not require a full guest OS. Docker can control the resources (e.g. CPU, memory, disk, and network) that containers are allocated and isolate them from conflicting with other applications on the same system. This provides many of the benefits of traditional virtual machines, but with none of the overhead associated with emulating hardware.

The Docker Hub

One of the biggest advantages Docker provides is its application repository: the Docker Hub. This is a huge repository of ‘Dockerised’ applications that I can download and run on my microserver. With unRAID and Docker it doesn’t matter which Linux distribution an application was built for; I can run it in a Docker container. Really cool stuff.