
Telegram notifications with unRAID and Docker applications


Telegram has been a dream come true. I recently discovered how simple it is to set up a Telegram bot for getting notifications from my server. One of the grievances I’ve had since starting my server journey was needing to constantly monitor things like available storage, cache disk usage, and new downloads. I often found out too late when something was going wrong and had to spend time restoring functionality to the server. I’ve finally found a super simple solution to this problem: a Telegram bot that sends automated notifications.

I’m not going to talk too much about creating or setting up a Telegram bot, because I really didn’t spend any time playing with that functionality. What I am going to cover in this post is using the Telegram Bot API to receive notifications from unRAID and the various applications I have running, like Sonarr, Radarr, and Ombi, as well as some general system-level information like available storage space.


Telegram Setup

There are a few things you need to do with Telegram before you get started with setting up notifications:

  1. Create a Telegram bot
  2. Optional: Create a channel where you’d like these notifications sent. I say this is optional because you can have the messages sent directly to you instead.
  3. Get your channel ID

Create a Telegram bot

To create a bot you’ll need to start a conversation with @BotFather on Telegram. When you start a conversation with this bot you’ll be given a number of options, one of which is /newbot. You just send this command to the bot, follow the prompts on screen, and you’re done! It’s literally that easy to get started with your first Telegram bot. As proof here’s a screenshot of my conversation:

The end of the last message (which I’ve hidden) will provide you with your private bot API token. You will need to record this long string of characters for use later on.

Create your Telegram channel

As mentioned above this step is optional, but I would recommend it if you want to add additional users for notifications, or additional bots for other purposes. In my scenario I created a private channel, created a separate bot for each application, and added them all to the one channel. The reason for separate bots is Telegram’s “Sign Messages” option: when enabled, the bot signs each message, so I can tell at a glance which bot — and therefore which application — a message came from.

Get your Telegram channel chat ID

This part was less than obvious at first, but after some brief research I discovered a super simple solution. You can add the bot @get_id_bot to your channel; when it sees a message in the channel it will reply with the channel chat ID. If you’d rather be messaged directly instead of creating a new channel, just message the bot directly to get your personal chat ID. If you have added @get_id_bot to your channel, remove it once you have your ID, as it will continuously respond to every new message.
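If you’d rather not add a third-party bot to your channel, the Bot API itself can tell you the same thing via its getUpdates method. A minimal sketch, assuming your bot has already been added to the channel; replace <TOKEN> with the token BotFather gave you:

# Send any message in the channel (or directly to the bot) first, then:
curl -s "https://api.telegram.org/bot<TOKEN>/getUpdates"
# Look for the "chat":{"id":...} field in the JSON response.
# Channel IDs are negative numbers, typically starting with -100.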

That’s it!

You’re now ready to get started! Below I’ll run through some simple notifications I have from Sonarr, Radarr and Ombi.
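Before wiring up any applications, you can sanity-check the token and chat ID with a single curl call against the Bot API’s sendMessage method. A minimal sketch with placeholder values for both:

TOKEN=XXX
CHAT_ID=XXX
# Sends a test message to the channel/chat identified by CHAT_ID:
curl -s -X POST "https://api.telegram.org/bot$TOKEN/sendMessage" \
     -d chat_id="$CHAT_ID" -d text="Hello from my unRAID server!"

If everything is set up correctly the message appears in your channel almost instantly.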


Sonarr & Radarr

I’ve used Sonarr as the example here, but the same applies to Radarr as they are basically the same application. You can find Telegram as one of the options under Settings > Connect. Sonarr will allow you to send notifications whenever a new episode is grabbed, downloaded, upgraded, or renamed. You just need to provide the Bot Token and Chat ID that we recorded in the previous steps:

Options when setting up Telegram notifications in Sonarr

Unfortunately there doesn’t seem to be any (obvious) way of customising the notifications that are sent. I don’t have any problems with these notifications, but it would be nice to have the ability to customise the notification content.


Ombi

Ombi, on the other hand, does allow plenty of notification customisation using its list of defined variables, and various categories that can be enabled or disabled. You also have the option of using full Telegram markdown or raw HTML formatting for each notification’s content:

There are links within this page in Ombi that explain the Telegram markdown and the variables that can be used. As an example, I customised the issue notification to provide more detail:

{Alias} has reported a new issue for the title {Title}
*Category:* {IssueCategory}

*Subject*: {IssueSubject}

*Description:* {IssueDescription}


This gives me a nice overview of the problem without needing to log into Ombi to view the details.


System level unRAID alerts

The last area I wanted to monitor was available storage space. In my last post I addressed hard links in Sonarr not working for me, and storage being filled up by duplicate files. While that issue was ongoing, my server kept crashing because hundreds of GBs of useless duplicates kept filling the storage. I want to avoid anything like that happening again, and Telegram is an obvious choice here: it’s instantaneous, so I can be proactive rather than reactive. However, unRAID doesn’t have any built-in capability for defining notifications like this... or so I thought, until I discovered the “CA User Scripts” application. This incredibly flexible application allows you to write your own bash scripts and schedule them for execution.

I wrote a bash script that will calculate the amount of free space on the array and cache disk, and send a notification to my Telegram channel using the Telegram API:

#!/bin/bash
TOKEN=XXX
CHAT_ID=XXX
URL="https://api.telegram.org/bot$TOKEN/sendMessage"
LIMIT=95
CACHE_LIMIT=60

# Percentage of space consumed on the array (second line of df output)
ARRAY_USED_PERCENT=$(df -h /mnt/user | awk 'NR==2 {print $5}' | cut -d'%' -f1)
# Human-readable space remaining on the array
ARRAY_AVAIL_SPACE=$(df -h /mnt/user | awk 'NR==2 {print $4}')
# Percentage of space consumed on the cache drive
CACHE_USED=$(df -h /mnt/cache | awk 'NR==2 {print $5}' | cut -d'%' -f1)
# Human-readable space remaining on the cache drive
CACHE_AVAIL_SPACE=$(df -h /mnt/cache | awk 'NR==2 {print $4}')

if [ "$ARRAY_USED_PERCENT" -ge "$LIMIT" ]
then
    MESSAGE="WARNING: Free Array Space is at $((100 - ARRAY_USED_PERCENT))%. There is $ARRAY_AVAIL_SPACE available."
    curl -s -X POST "$URL" -d chat_id="$CHAT_ID" -d text="$MESSAGE"
fi

if [ "$CACHE_USED" -ge "$CACHE_LIMIT" ]
then
    MESSAGE="WARNING: Free Cache space has fallen below $((100 - CACHE_USED))%. There is $CACHE_AVAIL_SPACE available."
    curl -s -X POST "$URL" -d chat_id="$CHAT_ID" -d text="$MESSAGE"
fi

This script is scheduled to run every hour using the “CA User Scripts” app.
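If you use the plugin’s “Custom” schedule option instead of the hourly preset, it accepts a standard cron expression (an assumption based on my own setup). Hourly would be:

# Run at minute 0 of every hour:
0 * * * *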

I definitely plan on making more use of these notifications, but this has already been such an awesome quality-of-life improvement for me.

Sonarr and the “Use Hardlinks instead of Copy” conundrum with Unraid


Sooo, I haven’t updated my blog in quite a while... in fact, there hasn’t been a new post in about 18 months. What can I say? I’ve been slacking! Today I decided to tackle a problem that has been causing me heartache with my server for quite some time, and I thought it was the ideal topic to write about. Sonarr gives me the option of using a “hard link” when moving files rather than copying the file and duplicating the space used... but why isn’t it working?


Wait, what’s a hard link?

Let’s start with the basics. A hard link essentially allows two (or more) file names to point to the same data on disk. In other words, I can create a “link” between ‘File1‘ and ‘File2‘ such that opening either file opens the same source data from disk. Even though both File1 and File2 exist in the filesystem, there is actually only one real copy of the data on disk. The benefit of hard links is that a file can ‘exist’ in two or more locations without taking up any additional disk space.

When File1 and File2 have a hard link between them, it doesn’t matter whether I open File1 or File2: I am actually opening the same data.
root@Tower:/mnt/disk3/Documents# touch File1
root@Tower:/mnt/disk3/Documents# echo "Adding some text to File1" > File1
root@Tower:/mnt/disk3/Documents# ln File1 File2
root@Tower:/mnt/disk3/Documents# cat File2
Adding some text to File1

I could go ahead and delete File1 from the Filesystem, but it wouldn’t matter because I still have File2. The fact that File1 is the ‘original’ file is irrelevant once a hard link has been established. They are now both considered equal.
root@Tower:/mnt/disk3/Documents# rm File1
root@Tower:/mnt/disk3/Documents# cat File2
Adding some text to File1

To explain the problem I’m having with Sonarr, it’s important that we first understand how this works. To do that we need to take a step back and talk about how files are stored in Unix-like file systems.


Files and inodes

When we look at a file in Linux it encapsulates two distinct but linked concepts:

  1. File Content and structure
  2. Metadata and properties

The file content is written out to disk as raw blocks of data. The file acts as a reference point to the location of these blocks, so we know where the file begins and ends. This information is stored using the concept of an “index node”, or inode.

The inode is a data structure in Linux systems that describes the file; it points to the location on disk and provides all of the file’s associated metadata and permissions. Every file on Linux is associated with an inode number, which is stored in an inode table. From the inode number, the kernel’s file system driver can access the inode contents, including the location of the file on disk.
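As an aside, the stat command can print an inode’s contents directly. A quick sketch, assuming GNU coreutils (%i, %h, and %s are the format sequences for inode number, hard link count, and file size):

# Print the inode number, hard link count, and size for File1:
stat -c 'inode: %i  links: %h  size: %s bytes' File1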

Finding the inode number

You can get the inode number for a file using the ‘ls -i‘ or ‘ls --inode‘ command:
root@Tower:/mnt/disk3/Documents# ls -i
414 File1

Above we can see that File1 has an inode number of 414. This means that whenever a user attempts to read from File1, the kernel will open the data at the disk location specified by this inode. To demonstrate the concept of a hard link more clearly, let’s create a link again and confirm that the inode number for the new file matches:
root@Tower:/mnt/disk3/Documents# ln File1 File2
root@Tower:/mnt/disk3/Documents# ls -i
414 File1
414 File2

So whether we open or edit File1 or File2 makes no difference: both refer to the same inode, and therefore the same disk location.

You can also find how many reference links (or, more simply, how many hard links) a file has by running “ls -l”, or “ls -li” to include the inode number. You can see below that each file has 2 links:
root@Tower:/mnt/disk3/Documents# ls -l
-rw-rw-rw- 2 root root 0 Jun 29 17:36 File1
-rw-rw-rw- 2 root root 0 Jun 29 17:36 File2

We can check where these references are using the inode number:
root@Tower:/mnt/disk3/Documents# find . -inum 414
./File1
./File2


So what’s the problem?

I have Sonarr’s “Use Hardlinks instead of Copy” option enabled. This option is revealed by toggling “Advanced Settings” to “Shown” in the top right:

Advanced Options

You can find the “Use Hardlinks instead of Copy” option under “Importing”:

Use hardlinks instead of copy option

In theory this should allow any new media file to exist simultaneously in the downloads folder and my Plex library. In an ideal world it would mean my downloads are fully automated; in practice, however, the hardlink option does not seem to work. The files I download are being copied rather than linked, so each file consumes twice as much space. Unfortunately this means frequent manual intervention, deleting the contents of the download directory to free up space.

After performing some initial checks I was immediately able to confirm that there was no hard link for new downloads. Using an example file that had just been downloaded, I could see there was only 1 reference:
root@Tower:/mnt/user/Media/downloads# ls -li
 71244 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

The inode number also only returns one item:
root@Tower:/mnt/user/Media/downloads# find /mnt/user/Media -inum 71244
/mnt/user/Media/downloads/FileName

After some further research, this initial “proof” that hardlinks weren’t working turned out to be a quirk of the Unraid system...


Unraid and the FUSE filesystem

My research led me to a post on the Unraid forums where a user stated that hard links do not work in Unraid. This comment from limetech himself set me straight:

“Your ‘ls -li’ command is listing “pseudo” inode numbers in the fuse file system.  The two names indeed point to same file.”

If we refer to the Unraid User guide we’ll find this little tidbit which expands on the above statement:

“User Shares are implemented using proprietary code which builds a composite directory hierarchy of all the data disks. This is created on a tmpfs file system mounted on /mnt/tmp. User Shares are exported using a proprietary FUSE pseudo-file system called ‘shfs’ which is mounted on /mnt/users.”

There are a number of things we can discern from the above which will help our investigation:

#1: If I make a hard link on a user share it will also make a hard link on the actual disk. Proof:
root@Tower:/mnt/user/Documents# touch UserShareFile
root@Tower:/mnt/user/Documents# ln UserShareFile UserShareHardLink
root@Tower:/mnt/user/Documents# ls -li /mnt/disk3/Documents
413 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareFile
413 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareHardLink 

#2: The shfs FUSE file system generates “pseudo” inode values, so from a user share there is no way to determine whether a file has a hard link. The “ls -l” output will indicate that there are two links, but we have no way of determining where they are. Proof:
root@Tower:/mnt/user/Documents# ls -li
321717 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareFile
321719 -rw-rw-rw- 2 root root 0 Jun 29 19:00 UserShareHardLink
root@Tower:/mnt/user/Documents# find . -samefile UserShareFile
./UserShareFile
root@Tower:/mnt/user/Documents# find /mnt/user -samefile UserShareFile
/mnt/user/Documents/UserShareFile


Re-starting the investigation

Now let’s go back and check out the file I downloaded in the previous section, armed with the knowledge we learned from the forum gurus. As a reminder, this is the file we’re dealing with:
root@Tower:/mnt/user/Media/downloads# ls -li
 71244 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

This file actually exists on Disk3:
root@Tower:/mnt/disk3/Media/downloads# ls -li
7156367 -rw-rw-r-- 1 nobody users 1346128584 Jun 29 15:58 FileName

From the command output we can see that there are no additional links to this file, but let’s double-check with the inode value:
root@Tower:/mnt/disk3/Media/downloads# find /mnt -inum 7156367
/mnt/disk3/Media/downloads/FileName

So we’re back to square one – The hardlink functionality is definitely not working, and this time we have proof!

After enabling debug level logging in Sonarr I finally found the culprit:
19-6-29 15:38:44.2|Debug|EpisodeFileMovingService|Hardlinking episode file: /downloads/FileName to /series/Series Name/Series Season X/FileName
19-6-29 15:38:44.2|Debug|DiskTransferService|HardLinkOrCopy [/downloads/FileName] > [/series/Series Name/Series Season X/FileName]
19-6-29 15:38:44.2|Debug|DiskProvider|Hardlink '/downloads/FileName' to '/series/Series Name/Series Season X/FileName' failed.
[v2.0.0.5322] Mono.Unix.UnixIOException: Invalid cross-device link [EXDEV]
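EXDEV is the standard error a hard link attempt returns when the source and target live on different filesystems. You can reproduce it from any shell; an illustrative sketch with assumed paths:

# disk1 and disk2 are separate filesystems, so ln fails with EXDEV:
ln /mnt/disk1/somefile /mnt/disk2/somefile-link
# ln: failed to create hard link '/mnt/disk2/somefile-link' => '/mnt/disk1/somefile': Invalid cross-device link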


The Resolution

You cannot have a hard link that spans different mount points. All shares in Unraid are attached to the system via the /mnt/user mount point, but each Docker application can also define its own internal mappings for mount points. In this scenario I had provided Sonarr with a few different mounts at the root of the container:
/downloads => /mnt/user/Media/downloads
/series => /mnt/user/Media/series

Even though these map to the same system-level mount point, the container sees them as two different mount points. To resolve this I simply needed to map them to the same mount point within the container. I updated the config for Sonarr to put the two directories under a single mount point named /media:
/media/ => /mnt/user/Media
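For anyone configuring this outside the unRAID Docker template GUI, here is a sketch of the equivalent volume mappings (the image name and flags are assumptions for illustration, not my exact template):

# Before – two separate container mounts, so hard links fail with EXDEV:
docker run -d --name=sonarr \
  -v /mnt/user/Media/downloads:/downloads \
  -v /mnt/user/Media/series:/series \
  linuxserver/sonarr

# After – a single shared mount; /media/downloads and /media/series now
# sit on the same mount point inside the container, so hard links work:
docker run -d --name=sonarr \
  -v /mnt/user/Media:/media \
  linuxserver/sonarr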

I then had to apply the same mapping to Transmission. This unfortunately had the negative consequence of pausing all of my torrents, as the source data could no longer be found under the old mount point. After painstakingly updating each torrent one by one, setting the new location and verifying the source data, I was back in action. Sonarr thankfully allows you to edit the root folder for series in bulk, so that was a much simpler fix.

After fixing the container to use a single mount point, the hard links are now being created successfully:
19-6-29 22:44:20.7|Debug|EpisodeFileMovingService|Hardlinking episode file: /media/downloads/FileName to /media/series/SeriesName/SeasonName/FileName
19-6-29 22:44:20.7|Debug|DiskTransferService|HardLinkOrCopy [/media/downloads/FileName] > [/media/series/SeriesName/SeasonName/FileName]
19-6-29 22:44:20.7|Debug|DiskProvider|Setting permissions: 0644 on /media/series/SeriesName/SeasonName/FileName

I am also able to confirm this using the inum of the file:
root@Tower:/mnt/disk1/Media/downloads# find /mnt -inum 7280027077
/mnt/disk1/Media/series/SeriesName/SeasonName/FileName
/mnt/disk1/Media/downloads/FileName

What’s new in Nuix 7.2?


Nuix 7.2. Powering today. Shaping tomorrow.

The latest release of Nuix 7.2 eDiscovery Workstation is packed with exciting new features. Below I will offer brief details on some of the more interesting features, which will help you conduct a more comprehensive investigation or discovery workflow.

For a full list of changes please refer to the Nuix 7.2 changelog documentation available here.


Cloud Storage

Cloud storage provider support has been improved with the addition of accounts from Google Drive, Microsoft OneDrive, Apple iCloud, and Box.com. In the Add/Edit evidence dialogue you will see these options under ‘Add Network Location’. While Nuix already has support for Dropbox accounts, we now also offer support for extracting deleted files from Dropbox!

Adding a cloud storage provider in Nuix

Microsoft EDB files

We have added support for extracting data from the Extensible Storage Engine (ESE) Database File (EDB) format. The ESE database format has been used by several different functions within the Windows operating system for a while now such as Content Indexing / Windows Desktop Search and Active Directory, but in recent times it has become the standard database for storing Internet Explorer browser artefacts. It also stores information from Cortana, the Windows 10 virtual assistant.

Password bank

This is a cool new feature that allows for ingestion-time decryption of certain file types. It can be accessed when adding/processing new evidence by selecting “Decryption keys” in the evidence processing settings dialogue. To use this feature you will need to select an existing word list in your case, as the password bank is otherwise off by default. After successfully decrypting a file, the new unencrypted version appears as a child item of the original encrypted item. We provide both files to support multiple workflows.

nuix password bank

Create new child items from selected binary regions

This is one the forensic folks will be happy to hear about. You can now select a specific region of binary in the binary viewer and create a new child item from that region. A typical scenario where this could be used is creating new child items from text-stripped regions of unallocated space.


"Create new child item" from binary region

Offline maps

In previous versions of Nuix you could not use the Maps view without an internet connection, yet many investigators work in air-gapped offline environments, so they could not make use of this very powerful feature. With 7.2 you now have the option to switch your maps view to “OpenStreetMaps” in the top left corner of the maps view.

Nuix maps view

For now the built-in web browser is not capable of rendering the OpenStreetMaps data directly, so there is a requirement to run a “tile server”: a node.js app that serves the rendered files to the built-in browser. The IP address/URL of this server needs to be specified under “Global Options > Results”. While I have not personally tested this, I have heard that the performance of this view is much better than Bing maps, as it uses a GPU-accelerated HTML canvas to render vector data, whereas Bing Maps fetches heavy pre-rendered JPEG tiles from a server.

Pivoting

You can take any item and pivot around it by either time or location, showing all items or events that happened within a given time window or within a specified distance (geo-location). Select any item(s) in the results pane, right-click, and navigate down to “Pivot”, which has sub-menus for time and location. The pivot feature has been implemented in both Workbench and Context.

Pivot in Workbench
Pivot in Context

Imaging and production profiles

Production sets now make heavy use of Imaging and Production profiles to help control exports and provide repeatable results; these can be specified under the “Imaging and Production” tab when creating a production set. You now have fine-grained control over how each type of document is imaged. One example where this might be useful is creating custom slipsheet templates for a specific imaging profile, based on defined rules.

Imaging and Production tab

Closing thoughts

I think Eddie Sheehy summed up the new release of Nuix 7.2 aptly, and I strongly agree with his sentiment:

“In response to requests from our customers in advisory firms, litigation service providers, law enforcement agencies, and businesses around the world, we’ve added features to help them conduct comprehensive eDiscovery and investigation workflows within a single application.”

Although some of my descriptions are brief, I do intend to elaborate on these in future posts. If anyone wants to add to the discussion on anything mentioned above, feel free!

What is SMI-S? What is the EMC SMI-S Provider? What is ECOM?


What is SMI-S? What is the EMC SMI-S Provider?

The first step in understanding the functionality of ECOM and the EMC SMI-S Provider is to answer the question: what is SMI-S? Essentially, it is an attempt to standardise storage management and its related technologies to increase interoperability. The standardisation comes from the Storage Networking Industry Association (SNIA), whose mission statement envisions “leading the storage industry in developing and promoting vendor-neutral architectures and standards”, and the work is led by the SNIA’s Storage Management Initiative (SMI). The Storage Management Initiative Specification (SMI-S) is the standard the SNIA has developed.

How many more acronyms are there?

A few more, but I’ll keep it short. These are more for reference than anything else.

  • The SMI Architecture is based on Web-Based Enterprise Management (WBEM) from the Distributed Management Task Force (DMTF).
  • The architecture is a client-server model that uses CIM-XML as the protocol. The client interface is the combination of the operations defined in CIM-XML and the model defined in SMI-S. The model is defined using the Common Information Model (CIM) and based on the CIM Schema. The EMC implementation of CIM is named ECIM.
  • The CIM is an object oriented model based on the Unified Modeling Language (UML). Managed elements are represented as CIM Classes that include properties and methods to represent management data and functions.

What is ECOM?

The EMC Common Object Manager (ECOM) enables communications and common services for applications. ECOM supports the ECIM which is used to represent the wide variety of components found in data centers using the CIM schema. The CIM schema provides a common methodology for representing systems, networks, applications, and services as a set of object-oriented models that can be bound to real-world functionality. Management applications based on CIM can interact with resources such as data storage hardware from multiple vendors, without direct knowledge of the underlying systems.

CIM classes identify types of resources. A class can represent a broad category of resources or can be subclassed to represent a specific type. For example, the class CIM_NetworkPort represents a broad category of heterogeneous network communications hardware, while EMC_NetworkPort is a subclass that represents an EMC specific subset. While classes define types of things found in a managed IT environment, instances represent individual implementations of a class. A specific port at a specific network address is an example of an instance of class EMC_NetworkPort.

The ability to exchange information, retrieve data, execute commands, and discover available resources is also required to link the elements in a network. Before resources can be managed, they must first be discovered by management applications. The Service Location Protocol (SLP) defines a mechanism for locating services in a network. Applications looking for a service are called user agents (UA), and applications providing a service are called service agents (SA). In SLP terms, ECOM acts as a service agent that advertises its address and capabilities. When a UA has found the ECOM-exposed services it needs, it begins communicating directly with each ECOM instance via CIM-XML messages.
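To make the SLP discovery step concrete, here is a sketch assuming a host with the OpenSLP slptool utility installed. An SMI-S/ECOM endpoint conventionally registers under the service:wbem service type, and 5989 is the standard port for CIM-XML over HTTPS (the address below is a hypothetical example):

# Query SLP for all advertised WBEM service agents on the network:
slptool findsrvs service:wbem
# Example form of a result: service:wbem:https://192.168.1.50:5989,65535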

A successful SMI implementation

ECOM is vital to the successful implementation of SMI. To implement SMI successfully there are two requirements: a WBEM server and an SMI provider. A WBEM server is responsible for routing requests and managing SMI providers. An SMI provider uses an Application Programming Interface (API) to communicate with devices and retrieve information in CIM format. In this scenario the ECOM service and the EMC SMI-S Provider are paired together to implement an SMI-compliant interface.

EMC SMI-S Provider
Source: The EMC SMI-S Provider Programmers Guide

So what makes this the EMC SMI-S Provider as opposed to a generic SMI-S provider? Because of the exceedingly large number of classes available in CIM, and because of their broad nature, profiles were created. Profiles make it practical to use CIM for specific domains: a profile defines the set of classes and properties that must be used to model the managed resource. The EMC SMI-S Provider is made up of the array providers, and the profiles from these providers allow an SMI-S client to retrieve information from, or make changes to, specific storage systems.

First steps with the WordPress administration area


WordPress 101 – First steps with the WordPress administration area

Now that you have finished installing WordPress on your desktop or server, it is time to get familiar with the WordPress administration area. I will take you through some of the basics of using the web interface and begin customising your website.

Logging in to the WordPress administration area

The first step to using WordPress is accessing the WordPress administration area, or the backend of your website. You can access the WordPress login page by adding “wp-admin” to the end of your URL (e.g. http://mywebsite.com/wp-admin). You will be presented with a form prompting you for a username and password: enter the username and password we created during the installation process. Please note this is not the MySQL username and password.

wordpress administration username password

The Dashboard

After logging into the administration page you will be greeted with the Dashboard. In WordPress a Dashboard is the main administration screen for a site.  It summarizes information about the site in one or more Widgets that you can add and remove. The Dashboard is also where you will plant the seeds of your new website – Creating pages, writing posts, designing the layout and making the website your own.

Starting at the top of this page we can see the toolbar.  The toolbar contains links to information about WordPress, as well as quick-links to create new posts, pages and links, add new plugins and users, review comments, and alerts to available updates to plugins and themes on your site. It also has a handy link to directly view your new website by clicking on your name.

wordpress administration toolbar

On the left side of the WordPress administration page is the main navigation menu, which is where you will perform most of your functions. As you move your cursor down this list you will see a number of sub lists pop out detailing further actions. You should get familiar with this menu – Poke around at the various options and sub menus available.

Viewing your posts

A post is a single article within a blog. What you are reading right now is a post. Assuming you are using the default WordPress theme then you will only have one post to work with in the beginning. This post will be visible on the front page, or the homepage of your website. Go take a look for yourself by navigating to your website using the WordPress toolbar. If you click on the title of the post it will bring you to the page of the post. Alternatively you can also view the post by clicking the ‘View’ button in the posts page:

wordpress administration posts

Posts are usually stored in Categories and/or Tags so you can keep related topics together. Every post in WordPress is filed under one or more categories, which allow the classification of your posts into groups and subgroups. Tags are keywords you might assign to each post; unlike categories, tags have no relationship to each other and can be completely random for each post. Tags provide another means of helping your readers find information on your blog. Looking at the screenshot above you can see that the post is in the WordPress category and does not have any tags.

Viewing your pages

Pages are not to be confused with posts. Pages are for content such as “About”, “Contact me”, etc. For example, I have an about me page. Pages live outside of your blog’s home page. To look at your pages, click the ‘Pages’ option in the navigation menu. One thing to note is that normal web pages can be either static or dynamic. Static pages are created once and do not have to be regenerated every time a person visits the website. If you take a look at my about me page you might think that this is static – Nothing changes on this page whenever you visit it. However, almost everything in WordPress, including pages, is generated dynamically.

As I discussed in my previous post on WordPress installation, everything published within WordPress is stored in the MySQL database. When a page is accessed, the template from your theme queries the database and the web page is generated. Technically this would be considered a “pseudo-static” page, as static information is generated dynamically by the template. I will discuss this further in future posts... Stay tuned.

By default you will not have any pages available to look at, so try creating one and see what happens. How does your page look?

Looking for more?

For now I would recommend taking a look at the WordPress Codex, but in future posts I aim to expand on the details behind pages, posts, templates, themes, plugins, and anything else related to WordPress.

Thanks for reading!

Recovering the partition table of a corrupted USB stick using TestDisk


Recovering the partition table of a corrupted USB

Yesterday I came across an extremely useful utility called TestDisk, and managed to rescue my Dad’s micro-SD card when all hope looked lost. Somehow the micro-SD card’s partition table became corrupted after his phone got wet. I had never heard of corruption-by-water before, but I guess that’s a thing!

First signs of partition table corruption

The first problem we noticed after drying the phone out was that most of the apps on his Android device were missing. After removing the memory card and re-inserting it, a message popped up saying the card needed to be formatted before it could be used. Obviously this didn’t sound good, and my first thought jumped straight to water damage. Despite this I pressed on to see if I could avoid formatting the device. After attaching it to my desktop a similar message appeared: You need to format the disk in drive G: before you can use it.

windows partition table format

In disk management I could see the device as a RAW filesystem, so the first thing I did was open up DISKPART to see if it could manipulate the partition. With DISKPART I encountered a weird problem: there was only one partition (Partition 1) and it was marked as active (denoted by a *), yet every command failed with an error stating that a partition must be selected. This also meant that CHKDSK would not work. Still refusing to give up, I moved on to the tried-and-trusted GParted ISO VM, but this only gave me the option of creating a new partition table, which in turn would require formatting the device.

TestDisk – My saviour

It was at this point I happened to stumble across a utility called TestDisk while Googling for a solution to my corruption issue. Their website states that TestDisk is free and open source data recovery software designed to recover lost partitions and unerase deleted files. Recovering a lost partition sounded like exactly what I needed, so I figured I would give it a go... and it worked! TestDisk managed to rescue the partition table and restore partition 1 as the active partition. We plugged the memory card back into my Dad’s phone and voila, all his apps were back to normal.

How to use TestDisk to recover a partition table

While I’m here I figure I may as well show you how to use TestDisk and the procedure I followed. The first prompt you receive after launching the TestDisk executable is whether or not you wish to create a log file of the completed actions. If you choose to create the text file, testdisk.log, it will contain TestDisk options, technical information and various outputs, including any folder/file names TestDisk was used to find and list onscreen. I went with ‘No Log’:

TestDisk log partition table

Now we are good to go with finding the partition..

  1. First, choose your media device. Mine was listed as ‘Disk /dev/sdd’ at the time. You can select the appropriate device with the arrow keys on your keyboard (if you’re unsure which device is the card, see the sketch after this list).
  2. Next, choose your partition table type from the list. For this step I went with ‘EFI GPT’.
  3. Choose the ‘Analyse’ option. At this point I was given a message stating that no partitions were found, but after another scan it found the primary partition.
  4. Use the arrow keys to navigate to the partition and press the return key (Enter).
  5. You will now be given the option to ‘Write’ the partition table.
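If you run TestDisk on Linux and aren’t sure which device node belongs to the card, a quick check before step 1 helps. A minimal sketch; the device name is an assumption and will differ on your system:

# List block devices with sizes to identify the memory card:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT
# TestDisk can also be launched directly against a specific device:
sudo testdisk /dev/sdd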

This managed to recover the partition table without formatting the SD card, therefore retaining all of the data. I am so happy I stumbled across TestDisk, and I know I’m going to end up needing it again in the future.

How to edit a WordPress site offline on your Windows desktop using WAMP


WordPress: How to edit your site offline on your Windows Desktop using WAMP

In my previous post I covered the installation of WordPress with WAMP – you might want to read that before continuing here! Now that I have wampserver and WordPress installed and running on my desktop, it’s time to import my production site so I can make changes offline. Once again I have a gripe with the WordPress documentation: there isn’t enough detail on this topic. Moving your WordPress site so you can edit it offline should be discussed in the “Moving WordPress” section of the Codex. I tried reading that section, but it’s so generic it’s mostly unhelpful. With this post I aim to help anyone in a similar situation to mine.

Backing up your online WordPress site

Your WordPress database contains every post, every comment and every link you have on your blog. If your database gets erased or corrupted, you stand to lose everything you have written. While I could go through the process of doing everything manually, I decided to make use of the various plugins available for WordPress. I started with “UpdraftPlus – Backup/Restore”, but to use its migrate and export options I needed to buy another plugin for that plugin – yeah, not happening. Next I took a look at a plugin named “Backup Guard”, which has worked great so far.
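For reference, the manual route these plugins automate looks roughly like the following. This is a sketch only, assuming shell access, a database named wordpress and a default install path – substitute your own values:

# Dump the WordPress database to a file:
mysqldump -u root -p wordpress > wordpress.sql
# Archive the WordPress files (themes, plugins, uploads):
tar -czf wordpress-files.tar.gz /var/www/wordpress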

First off I installed Backup Guard on the production site. After installation there is a new entry in the sidebar for “Backup Guard”. Clicking on it brings you to the following GUI, where you can complete a backup or import a previous one. I performed a manual backup and you can see it completed successfully:

Backup Guard screenshot

I connected to the production site via SFTP and transferred the backup to my local desktop. Swapping over to the offline instance of WordPress, I tried to import the backup; however, it told me the file was too large (67MB) and offered an alternative:

Backup Guard screenshot

If your file is larger than 2MB you can copy it inside the following folder and it will be automatically detected: C:\WAMP\www\wordpress\wp-content\uploads\backup-guard

Please note your directory may be different depending on where you installed WordPress. So I did just that: copied my sgbp file to the above folder, and it appeared once I returned to the Backup Guard section again. Now I just needed to click the restore button and hope everything went to plan:

Backup Guard screenshot

This process took approximately a minute, and then I was brought back to the login prompt. At first I felt a little panicky because the credentials I set up in the previous installation were not being accepted. Then I realised my mistake: the credentials being requested were those of my live website rather than the offline instance – Silly me! After logging in, all of the pages and posts from the production website were visible on my offline instance. The import was a success!

Much easier than I expected – Highly recommend Backup Guard!