Month: May 2015

Installing Debian, write-protected USB, and Windows MBR..

I tried installing Debian and somehow broke Windows

My attempt to install Proxmox didn’t go quite as smoothly as I wanted – quite the opposite in fact. I ended up spending hours fixing the utter mess I made of my computer. My Windows PC had more or less become an expensive paperweight for a brief period of time. All of this happened when I tried to install Debian.. Read on for more..

Creating a bootable USB for Debian

When building my PC I made the decision not to purchase a DVD drive as I figured I’d never use it. This means when installing an operating system I need to create what is known as a bootable USB – basically the equivalent of inserting your Windows installation DVD when installing Windows. I have done this numerous times in the past and usually it’s a straightforward process. Not this time though. For some reason my USB stick had become write-protected, meaning that I could not write any new files to it. This also meant I couldn’t format it either, so it was pretty much useless. Great start!

[Screenshot: Windows write-protected USB error dialog]

The disk is write-protected

Remove the write-protection or use another disk

I brought up DISKPART in the hope that I could just remove the attribute and then clean the disk. Nothing is straightforward though, and this too just spat an error back at me: “DiskPart has encountered an error: The media is write protected”.
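For reference, the sort of DISKPART session I was attempting looked roughly like this (the disk number is whatever your USB stick shows up as in ‘list disk’, so treat it as a sketch rather than something to copy blindly):

diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> attributes disk clear readonly
DISKPART> clean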

So at this point I can’t write to it. I can’t format it. I can’t clean it using DiskPart. I was about to give up and just get another USB but I figured trusty Ubuntu might be able to help. I have an Ubuntu VM that I use as a development environment with Oracle VirtualBox but I don’t consider that an ideal solution, hence me looking for a decent hypervisor.. Anyway back on topic! I booted up my Ubuntu VM and for some reason managed to resolve the issue using fdisk! Here’s what I did:
The following command will list all detected hard disks:

# fdisk -l | grep '^Disk'

Disk /dev/sda: 251.0 GB, 251000193024 bytes

Disk /dev/sdb: 251.0 GB, 251000193024 bytes

Find the appropriate entry for your disk and then run fdisk against that disk:
# fdisk /dev/sdb
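I didn’t note down the exact keystrokes, but the interactive steps inside fdisk were along these lines: ‘o’ creates a fresh DOS partition table, ‘n’ adds a new primary partition (accepting the defaults), and ‘w’ writes the changes to the disk.

Command (m for help): o
Command (m for help): n
Command (m for help): w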

After completing the steps in fdisk I created a new ext3 filesystem on the USB:
# mkfs.ext3 /dev/sdb1

And it worked! The read-only attribute was removed, but obviously the USB was not visible in Windows due to the unsupported filesystem. So back into DISKPART I went, ensured the attribute had been removed and then ran the clean command. I then brought up the Windows Disk Management GUI and formatted the USB with NTFS. Success!

All that effort just to make the USB work? What now?

Remember that I haven’t even gotten around to getting the ISO onto the USB yet! In my previous post I mentioned that Proxmox could be used as either a type 1 or type 2 hypervisor, so I felt that my life might be easier if I installed it as a type 2 on Debian. So I loaded the Debian ISO onto the USB using a piece of software called unetbootin. This is probably the one and only thing that went smoothly throughout this whole process. I rebooted my computer, chose the USB from the boot menu and completed the installation onto my spare SSD. No issues so far and everything appeared to be working properly until I restarted and realised that Windows wasn’t appearing in the GRUB boot menu. Everything on the SSD was still accessible and I could mount the drive in Debian no problem. A quick Google search led me to believe this was an easy fix:
# os-prober
# update-grub
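(For reference, when os-prober doesn’t pick up the Windows install, another common fix is a manual entry in /etc/grub.d/40_custom followed by another update-grub. This is only a sketch, assuming a BIOS/MBR Windows install on the first partition of the first disk, and not something I tried at the time.)

menuentry "Windows" {
    insmod ntfs
    set root=(hd0,1)
    chainloader +1
}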

Alas, Windows failed to appear in the menu after running os-prober and update-grub. Some research pointed me towards a BIOS option called ‘Secure Boot’ which might not be allowing the Windows UEFI boot loader to interact with GRUB. So in my attempt to resolve this I set my BIOS to Secure Boot UEFI only and then disabled an option called ‘Compatibility Support Module’. I am not a smart man. After saving these settings and restarting the computer I was now facing a black screen. No BIOS splash screen, no GRUB, nothing. I went from a fully functional Windows desktop, to a semi-functional Debian desktop, right down to an expensive paperweight, all in the course of about one hour. Impressive stuff.

My first step was to consult my motherboard documentation, in which I learned I have a button on my motherboard that allows me to boot straight into the BIOS, which is quite a handy feature to have. Unfortunately this button was no use to me and I still couldn’t see anything other than a black screen. Some further digging made me realise that at this point my only option was to short the motherboard’s CMOS in an attempt to reset it. Thankfully this allowed me back into the BIOS and from there I was able to get back into Debian. Phew!

Windows Startup Repair isn’t very useful

To rule out any Debian / GRUB involvement in the Windows issue I disconnected the Debian SSD to try and force a Windows boot. “Reboot and Select proper Boot device or Insert Boot Media in Selected Boot device and press a key” appeared on my screen. To cut a long story short, it turns out I somehow managed to destroy my Windows Master Boot Record (MBR) while installing Debian. I still have not figured out how and I’m not sure I ever will. So I installed GParted on Debian, reformatted the USB and put the Windows installer onto it using unetbootin. Booting from this, I attempted a Startup Repair, but it just told me the repair had failed. I brought up the command prompt, ran ‘bootrec /fixmbr’ against the Windows SSD and rebooted. This time I was no longer getting the ‘Reboot and Select proper Boot device’ message. Instead I was getting a blinking cursor, so I didn’t really know if I’d made progress or gone backwards..

At this point I’m really thinking anything that can go wrong will go wrong, so I’m just waiting for the whole thing to burst into flames. Fixing the MBR was quite a painful process, in which I learned that if you are ever running Startup Repair from the Windows installation disk, run it 3 times. Don’t ask why, just do it. So back into Startup Repair I went and ran it 3 times, despite it telling me it failed each time. Then:
bootrec /fixmbr
bootrec /fixboot
bootrec /scanos
bootrec /rebuildbcd

At this point rebuildbcd failed to add the Windows entry to the boot configuration data (BCD). I went back into DISKPART, set the partition as active again, then restarted and got a new error about failing to read the boot configuration data. Went through another 3 Startup Repairs for good measure and voilà, Windows booted. Easy as that. Only took about 6 hours of work..
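For anyone wondering, the ‘set the partition as active’ step mentioned above is just a handful of DISKPART commands from the recovery command prompt; the disk and partition numbers should be whatever your Windows system partition actually is:

diskpart
DISKPART> select disk 0
DISKPART> list partition
DISKPART> select partition 1
DISKPART> active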

Now to actually try and install Proxmox..

Choosing a hypervisor

The confusion of choosing a hypervisor

As I discussed in my previous post there is a wealth of information around the web on virtualisation, and when I started researching which hypervisor to choose it was no different. In fact the more I looked the more indecisive I found myself being. However I recalled my first post on this topic and the article I had mentioned reading about setting up gaming machines utilising what is known as PCI pass-through. I quickly learned that this doesn’t work well with Nvidia graphics cards, and this turned out to be the deciding factor for me in the end.

Hypervisor PCI pass-through: What is it and why does it matter?

If you have ever tried playing a game on a virtual machine you will have quickly realised that this is not a viable solution at all for a gaming desktop. This is because normally the GPU is emulated by the hypervisor, and a resource manager carves up the resources and passes them to the individual machines. This is where PCI pass-through comes into play – rather than sharing the resources of the GPU among multiple machines, you can have the hypervisor assign the whole card to one machine so it has independent ownership of it. There is no virtualisation layer in between the GPU and the VM managing resources, so this allows for near-native performance. In theory you should not even know that you are using a VM!

Many months ago I decided to get two GTX 970s for my gaming desktop rather than opting for an AMD alternative. I am living to regret that decision somewhat, as I am now learning that Nvidia does not allow their consumer-grade GPUs to utilise this pass-through technology. For this privilege you need to upgrade to their Quadro series, which from what I can tell offer no other benefit than allowing pass-through. Did I mention they’re also much more expensive and far inferior when compared to their GTX counterparts? Nice one Nvidia! Since I don’t plan on replacing my GPUs any time soon, this has more or less ruled out ESXi for me, but I learned that it is possible (with a lot of effort) to implement this on a Linux based hypervisor such as KVM / Proxmox.
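To give an idea of what that ‘lot of effort’ involves on a KVM-based hypervisor, the rough shape of it (very much a sketch; the exact steps vary by distribution, kernel and hardware) is: enable the IOMMU by adding intel_iommu=on to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub and running update-grub, find the GPU’s vendor:device IDs with ‘lspci -nn’, and then bind those IDs to the vfio-pci stub driver so the host never claims the card:

# echo "options vfio-pci ids=10de:xxxx,10de:yyyy" > /etc/modprobe.d/vfio.conf
# update-initramfs -u

The IDs above are placeholders for the GPU and its HDMI audio function; after a reboot the card should be bound to vfio-pci and can then be assigned to a guest VM.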

And the winner is..


I narrowed my choices down to KVM and Proxmox (which is based on KVM) as the only two viable options. In the end I decided I would proceed with Proxmox as my hypervisor for the simple reason that it has a built-in web GUI for management and it can be run as either a type 1 or type 2 hypervisor. This leaves me with plenty of flexibility and simple management.

The hypervisor

I never realised until today just how many hypervisor options are out there. Not only how many, but that there are different types as well. Obviously I had heard of the industry standards, ESXi and the Microsoft alternative Hyper-V, but little did I realise that is only scratching the surface. You’ve also got XenServer, Proxmox and KVM, just to name some of the more popular hypervisor options. In the last few days I have managed to go from having a good idea of what I wanted to implement, to landing myself in a vast sea of information that just seems limitless. Each hypervisor has its benefits and limitations when compared to its competitors, so I have a lot of research to do before making my final decision and sticking with it. So let’s start with the basics:

What is a hypervisor?

A hypervisor is a piece of software that can create and run virtual machines. The computer or server that this software runs on is known as the host, and the virtual machines are known as guests. A defined portion of the resources from the host machine, such as memory and disk space, is allocated to a guest machine to use. These guest machines have no idea that they do not own these resources – as far as the VM (Virtual Machine) is concerned it is a single independent entity. The hypervisor is actually controlling the resources of the host and distributing them as required.
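As a rough illustration (using QEMU/KVM purely as an example, with placeholder file names), this is what handing a defined slice of the host to a guest can look like; the guest simply sees two CPUs, 4 GB of RAM and a 40 GB disk as if they were its own:

# qemu-img create -f qcow2 guest-disk.img 40G
# qemu-system-x86_64 -enable-kvm -m 4096 -smp 2 -drive file=guest-disk.img,format=qcow2 -cdrom debian-install.iso -boot d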

There are in fact two types of hypervisor – Type 1 and Type 2. Which type you choose really doesn’t matter as they serve the same purpose, but for education’s sake the distinction is there. A type 1 hypervisor is commonly known as a ‘bare metal hypervisor’ because it runs directly on the hardware of the host. In other words there is no operating system or any other software in between the hardware layer and the hypervisor.

[Diagram: Type 1 (bare metal) hypervisor. Img Source: https://www.flexiant.com]

The other type is a Type 2 hypervisor, commonly referred to as a hosted hypervisor. This is software that runs on top of an operating system. Common examples of this would be Oracle VirtualBox or VMware Workstation. I guess the main disadvantage of a hosted hypervisor is that you are adding an unnecessary extra layer to the environment. However this has the benefit of making management somewhat more intuitive in my opinion.

[Diagram: Type 2 (hosted) hypervisor. Img Source: https://www.flexiant.com]

Which is the best hypervisor option?

If you find out please let me know! Generally people are quick to recommend ESXi due to the fact it has more or less become the industry standard, and having experience in such a widely used product would be beneficial. This was part of my original reasoning behind ESXi, because I do encounter it frequently at work but I don’t have a whole lot of experience working with it. That makes ESXi very hard to ignore, but on the flip side the possibilities are somewhat limited without purchasing a license. I’m not going into the differences between the free and licensed options in this post, but it is definitely something I am going to do in the future.

Then of course you have Hyper-V, another very commonly implemented option from Microsoft. This has the massive advantage of being free and comes bundled with Windows Server as an add-on role. If you are running Windows 8 (Pro or Enterprise) at home you can also enable Client Hyper-V, which isn’t as feature-filled as the Server alternative but serves the same purpose. Finally you have your Linux alternatives, which are also mostly free if you are not interested in receiving support. These options are much less widely used but are certainly growing in popularity.
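As a hedged example rather than a how-to: on Windows 8 Pro, enabling Client Hyper-V amounts to a single command from an elevated PowerShell prompt, followed by a reboot:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V -All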

I’ve got a lot of work to do….

Adventures in virtualisation

My adventures in virtualisation

I had this crazy idea to set up a home server – not that I really need one. My end goal would be to familiarise myself with virtualisation and maintain multiple virtual machines each with their own individual responsibilities running on a hypervisor such as ESXi. I thought it might be beneficial to start a blog so I could keep track of what I’ve learned along the way and perhaps inspire someone else to follow the same path. You might be questioning my reasoning behind such a project in a home environment.. Well, you’re probably right. My motivation for building a server came after reading about setting up a multi-headed ESXi server. Afterwards my inner-geek couldn’t relax thinking of all the possibilities this could be used for. That, and it’s just a very cool thing to do! Right!?

This is the article I’m referring to:

https://www.pugetsystems.com/labs/articles/Multi-headed-VMWare-Gaming-Setup-564/

How cool does that look!? Four independent machines, each running an instance of Battlefield, running off the one physical server. Now realistically I don’t have a need for this kind of setup but perhaps on a small scale this might be useful. Imagine being able to deploy a new computer at the touch of a button to any room in the house. On top of that you have the advantage of keeping everything segregated which is not only beneficial from an organisational point of view, but I imagine it helps to improve security as well.

The one stumbling block I’m going to encounter is resource limitations. Virtualisation is not cheap in terms of resources if you intend on running multiple virtual machines! The current desktop I have is probably powerful enough to host two high-power virtual machines easily enough but nothing more than that. I will have to think about how I want to proceed here. Do I build a standalone server or do I convert my desktop to a server? Decisions..

Maybe I have deluded myself into thinking this would be useful.. I guess we’ll find out.. Watch this space!