Author Archives: David Candy

Building My Home Lab Part 4 – Installing and Configuring Storage

This will be a series of posts in which I describe how I have put together my Home Lab ESXi server.

Previous Parts:
Part 3 – Installing ESXi
Part 2 – Networking
Part 1 – Storage

A Storage Virtual Machine – NexentaStor Community Edition

NexentaStor is a software-defined storage solution. Instead of using a separate storage server, I will be installing NexentaStor as a virtual machine on the ESXi host. The Dell H200 SAS HBA is passed through directly to the VM, giving it direct access to the attached drives. From there, Nexenta can create a ZFS file system, providing the benefits of a hardware RAID solution but with more flexibility. On top of this, an NFS share is created which the ESXi host will connect to, creating a Datastore. The community edition is free; however, it does have some limitations, such as a maximum storage capacity of 18TB. I’m not sure anyone would ever need 18TB for a home lab, so this isn’t a problem.

NexentaStor documentation and downloads are available from the Nexenta website.

Enable Passthrough

Note: Be aware that for this to work, the CPU must support Intel VT-d or AMD IOMMU.
In the vSphere client, go to Configuration – Advanced Settings, then click edit. Next, select the SAS HBA card and click OK. You will most likely need to reboot.

After restarting, the SAS HBA should be displayed in the list as shown below:

[Screenshot: Nexenta_SAS HBA]
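If SSH or local shell access to the host is enabled, the PCI devices can also be checked from the ESXi command line. A quick verification sketch; the grep pattern assumes an LSI-based controller like the H200:

```shell
# List PCI devices and look for the SAS HBA (the H200 is LSI-based).
# Run from the ESXi shell; requires SSH or local shell access.
esxcli hardware pci list | grep -i -A 2 "LSI"
```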

Create the VM

Now create a VM for NexentaStor. Choose Oracle Solaris 11 (64-bit) as the operating system. The VM will require a minimum of 8GB of RAM.
Once the VM is created, open the hardware properties and click Add. Select PCI Device and choose the SAS HBA. The card is now attached directly to this VM. Two NICs are added: one connected to the physical (management) network, the other to the internal storage network.

[Screenshot: Nexenta_VM Config]

Configure Networking

The storage network is a vSwitch with no physical adapter assigned. A VM Network port group and a VMkernel port are added to this vSwitch.
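The same vSwitch, port group, and VMkernel setup can be scripted from the ESXi shell. A rough sketch; the names vSwitchStorage and "Storage Network" and the addresses are example values, not the ones from my setup:

```shell
# Internal-only vSwitch: no uplink (physical adapter) is attached.
esxcli network vswitch standard add --vswitch-name=vSwitchStorage

# Port group for the Nexenta VM's storage NIC.
esxcli network vswitch standard portgroup add \
  --portgroup-name="Storage Network" --vswitch-name=vSwitchStorage

# VMkernel port so the host itself can reach the NFS share.
esxcli network ip interface add --interface-name=vmk1 \
  --portgroup-name="Storage Network"
esxcli network ip interface ipv4 set --interface-name=vmk1 \
  --ipv4=10.10.10.1 --netmask=255.255.255.0 --type=static
```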


The console screen in Nexenta will display the IP address to connect to the web interface. If you don’t have DHCP, the IP address can be configured here. The first login will present a wizard where you configure the IP address for both the management and storage network adapters.

Configure NexentaStor ZFS volume and NFS share

Log in to the web interface, then go to Data Management – Data Sets. Here we create the ZFS volume; I created the equivalent of a RAID 10 setup (striped mirrors).
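Under the hood, a RAID 10-style layout in ZFS is a pool of striped mirrors. For reference, the equivalent from a shell would look something like the sketch below; the pool name and Solaris-style device names are placeholders, not the ones from my setup:

```shell
# Two 2-way mirrors, striped together: the ZFS equivalent of RAID 10.
zpool create tank \
  mirror c0t1d0 c0t2d0 \
  mirror c0t3d0 c0t4d0

# Verify the layout.
zpool status tank
```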


Next, go to Data Management – Shares. Under NFS Server, click Configure. Enable the service and set the client and server versions to 3, as ESXi 5.5 does not work with version 4.


Now we can create the share. I have modified some of the default settings: the block size is set to 8K and sync is disabled. Disabling sync will improve speed, but be warned: data corruption can occur if power is lost! Finally, enable NFS for the share by ticking the NFS box.
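The same share settings map to standard ZFS properties, which can also be set from a shell. A sketch with a placeholder dataset name (the sync warning above applies here too):

```shell
# 8K record size to match the share's block size setting.
zfs set recordsize=8K tank/nfs-share

# Disabling sync trades data safety for speed -- corruption is
# possible if power is lost while writes are in flight.
zfs set sync=disabled tank/nfs-share

# Export the dataset over NFS.
zfs set sharenfs=on tank/nfs-share
```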


Create the Datastore

We can now create the Datastore in ESXi. Select Network File System as the storage type, then enter the server IP address and folder path.
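The datastore can also be mounted from the ESXi shell with esxcli. The address, export path, and datastore name below are placeholders:

```shell
# Mount the Nexenta NFS export as a datastore (NFSv3 on ESXi 5.5).
esxcli storage nfs add --host=10.10.10.2 \
  --share=/volumes/tank/nfs-share --volume-name=nexenta-ds

# Confirm it mounted.
esxcli storage nfs list
```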

You should also configure the VM to start up and shut down automatically with the host. Other VMs won’t be available until this VM starts, and all VMs should be shut down before this one.

We’re now ready to create Virtual Machines!

Note: Installing the VMware Tools inside Nexenta is covered in a separate post.


Error 25325 when creating Hyper-V cluster on VMM 2012 R2

I’m currently studying for the 70-414 exam and have been following videos on setting up a Hyper-V cluster using System Center Virtual Machine Manager 2012 R2.

When I got to the step of creating the cluster it was failing in my demo lab. I kept getting this error:

Error (25325) The cluster creation failed because of the following error: Network has already been specified.

The network was dedicated to iSCSI connections: I had configured two NICs on the same subnet, connected to a VM running a Windows Server 2012 R2 iSCSI target with MPIO. It turns out VMM does not like having multiple adapters on the same subnet. Once one of the two adapters was disabled on both hosts, the cluster creation succeeded. The adapters can be re-enabled after the cluster is created.

Building My Home Lab Part 3 – Installing ESXi

This will be a series of posts in which I describe how I have put together my Home Lab ESXi server.

Previous Parts:
Part 2 – Networking
Part 1 – Storage

Installing ESXi

Since the motherboard in my server supports Intel vPro, I can use the remote console feature to perform the installation and mount the VMware ESXi installer ISO remotely. The remote console feature requires RealVNC Plus, which will work in trial mode for 30 days, after which you need to purchase a licence.

vPro Setup

The first step in setting up vPro is configuring the network address. This is done from within the system BIOS. Further configuration can then be done from a browser by going to the URL http://<IP Address>:16992. You will need to enter the username (admin) and the password (set in the BIOS) to log in.

Further vPro configuration can be done with the Intel Open Manageability Toolkit. The Manageability Commander Tool can be used to configure remote control as well as initiate a remote control session. Be aware that only basic Serial over LAN is provided; remote control still requires an AMT-compatible VNC client.

Connecting to the remote console

Power on the server. I found it is best NOT to power it on from the console, as this prevents the NICs from switching to gigabit speed; they remain at 10Mbit to keep the console connection up.

Start RealVNC Plus. Enter the vPro interface IP address and select Intel AMT from the drop-down list. You will most likely see the “no boot media” BIOS error. You can now mount the ESXi ISO.

ESXi Installation

Installing ESXi is pretty straightforward. Just follow the prompts. The installer will automatically set up the destination disk and reboot when complete.

Initial ESXi Configuration

You will be presented with a yellow and black screen with the ESXi version information. This is the Direct Console User Interface (DCUI). From here you can log in as root (the password is set during installation) and configure basic settings such as the management network adapters. Select the adapter you want to use for the management interface; its IP address is what you will use to connect the vSphere client to the host.

In Part 4 I will go over how I configured storage for the server, using a VM with a SAS HBA directly connected.

OpenMediaVault setup fails to install bootloader

When installing OpenMediaVault via a USB stick you may run into an error. During the installation you are prompted to select the correct drive to install to (in my case /dev/sdb). Towards the end of the install, setup attempts to install the bootloader to the installation media (the USB stick) instead of the target drive, which fails.

To fix this issue we need to chroot to the new installation, install the bootloader, then unmount. This is done at the end of the installation, just before rebooting.

  1. After the error, continue the installation so it can complete.
  2. When the message appears to restart the computer, press Alt+F2 to switch to a new console. Press enter to activate it.
  3. Type the following commands to mount the new OpenMediaVault installation and chroot to it:
    mount /dev/sda1 /mnt/
    mount -t proc none /mnt/proc
    mount -o bind /dev /mnt/dev
    mount -t sysfs sys /mnt/sys
    chroot /mnt/ /bin/bash
  4. Enter the following command to install the GRUB bootloader. Be sure to select the correct drive when prompted (in my case it was /dev/sda):
    dpkg-reconfigure grub-pc
  5. Exit the chroot and unmount:
    umount /mnt/sys
    umount /mnt/dev
    umount /mnt/proc
    umount /mnt
  6. Switch back to the install screen by pressing Alt+F1. Press enter to reboot. Remove the USB stick.
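As an alternative to dpkg-reconfigure in step 4, GRUB can be installed directly while still inside the chroot. A sketch, assuming /dev/sda is the target drive as above:

```shell
# Install GRUB to the target drive's MBR and regenerate its config.
grub-install /dev/sda
update-grub
```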

OpenMediaVault should now boot!


Building My Home Lab Part 2 – Networking

This will be a series of posts in which I describe how I have put together my Home Lab ESXi server.

Previous Parts:
Part 1 – Storage


4x NICs in the server allow the creation of multiple, isolated networks via VLANs. One of the two integrated NICs provides vPro access. This NIC will not be available for use with ESXi as no driver is provided. See my earlier post on how it can be enabled.

Network Setup Overview:

  • The Cisco switch is set to L3 mode, enabling VLAN routing. An IP address is assigned to the switch on the physical network.
  • A VLAN is created and set as tagged on the port connected to the server.
  • An IP interface is created on the switch for the VLAN.
  • A static route and rules are configured on the physical router to allow the VM network to connect to the internet.
    UPDATE 20/11/14: I have changed this by setting up the third network interface on the router with the same VLAN created on the switch. The port this interface is connected to acts as a trunk port, allowing me to choose which VLANs have access to the internet. This also allows the router to provide a DHCP server for the VLAN.
  • A static route is created on my workstation to the VM network. This allows my PC to access the VM network via the Cisco switch rather than the router. This is preferred as the network ports on my router are only 100Mbit and this traffic doesn’t need to go via the router anyway.

Cisco SG300 Configuration

Login to the web interface of the switch

Enable L3 Mode:

  1. Go to Administration – System Settings and set the System Mode to L3. This reboots the switch and resets it to factory defaults, so don’t bother setting anything else up first!

Create a VLAN and assign it to a port:

  1. Go to VLAN Management – VLAN Settings. Click Add. Enter a VLAN ID and a name, then click Apply.
  2. Click on Port to VLAN. Select the VLAN ID and click Go.
  3. Select Tagged for the port which is connected to the server and click Apply.

Create an IP interface for the VLAN:

  1. Go to IP Configuration – IPv4 Interface
  2. Click Add.
  3. Select the VLAN ID and Static IP Address.
  4. Enter the IP address and subnet mask for the interface, then click Apply.
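For reference, the same switch setup can be done over the SG300’s command line (SSH or console). A rough equivalent of the steps above, assuming VLAN 10, port gi3 facing the server, and a placeholder interface address:

```text
configure
vlan database
vlan 10
exit
interface gi3
switchport mode trunk
switchport trunk allowed vlan add 10
exit
interface vlan 10
ip address 192.168.10.1 255.255.255.0
exit
```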

Workstation Configuration (Windows)

Add a static route to the VM Network:

  1. Open command prompt as administrator.
  2. Enter: route -p add <VM Network> mask <VM Network Mask> <Cisco Switch IP Address>
    The -p switch makes the route permanent, otherwise it is removed on reboot.
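To double-check the network address used as the route destination, the bitwise AND of address and mask can be computed with a small POSIX shell helper. The addresses here are examples, not the ones from my network:

```shell
#!/bin/sh
# Compute the network address (IP AND mask) for a route destination.
network_for() {
  # Split both dotted quads into positional parameters:
  # $1-$4 = IP octets, $5-$8 = mask octets.
  set -- $(echo "$1" | tr '.' ' ') $(echo "$2" | tr '.' ' ')
  echo "$(($1 & $5)).$(($2 & $6)).$(($3 & $7)).$(($4 & $8))"
}

# Example: a host on the VM network with a /24 mask.
network_for 192.168.10.55 255.255.255.0   # prints 192.168.10.0
```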

ESXi network setup will be covered in a later post.

Up next – Installing ESXi.

Building My Home Lab Part 1 – Storage

This will be a series of posts in which I describe how I have put together my Home Lab ESXi server.


My requirements for storage:

  • Hosted within the ESXi server as a virtual storage solution.
  • Fast performance.

Hardware added to the server since the last post:

  • ICY DOCK MB994SP-4S – This can hold 4 2.5in drives inside a 5.25in enclosure.
  • 1x 120GB Corsair SSD drive – This will hold the ESXi OS as well as the NexentaStor VM
  • Dell PERC H200 HBA – Reflashed to IT mode (explained here). The HBA is connected to the dock via a Mini-SAS SFF 8087 to 4x SATA cable.
  • 4x 250GB Intel SSD drives – The storage drives, inserted into the dock.
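As a sanity check on capacity, a RAID 10-style layout of striped mirrors yields half the raw space. Quick shell arithmetic for the four 250GB drives:

```shell
# Usable capacity of striped mirrors: raw capacity divided by two.
drives=4
size_gb=250
raw_gb=$((drives * size_gb))
usable_gb=$((raw_gb / 2))
echo "raw=${raw_gb}GB usable=${usable_gb}GB"   # raw=1000GB usable=500GB
```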

As the motherboard supports Intel VT-d, the HBA can be passed through to the Nexenta VM, giving it direct hardware access to the SSD drives.

The setup of this will be covered when we get to setting up ESXi and the Nexenta VM.

Up next – Networking.

Building my Home Lab

I’m putting together a home lab which I will use to learn/test/play with new technologies. A virtual environment allows you to create test environments which help with studying for MCP, VCP or other IT exams.

Here is what I have so far:

Cisco SG300-10 Managed Switch

Alix 2-3 running pfSense. I got this router from Yawarra Tiny Computers a few years ago and it’s as solid as a rock.

I have yet to put the new server together. I was previously using a Dell PowerEdge T110, but found that server too big and too noisy, and it maxed out at 16GB of RAM. Here are the parts I got for the new server:

RAM: Kingston Hyper X Fury HX316C10FBK2/16 (x2 for 32GB RAM).

Motherboard: Gigabyte GA-Q87M-MK – This board has two NICs and supports Intel vPro.

CPU: Intel Core i5 4690S – This CPU supports all virtualisation requirements, plus it has vPro, which allows remote KVM.

PSU: Corsair VS350

Case: Silverstone SG02B-F Black Micro ATX

I will be installing VMware ESXi 5.5 on this machine. ESXi 5.5 does not include the driver for the network interfaces used on this board, so I used the instructions here to create a custom install ISO, which I hope can be mounted remotely using the vPro feature.

My next post will have the results of this setup!