Friday, February 26, 2016

vSphere 6 - Shared Storage with NFS - with OpenFiler

We can store our virtual machines on the storage of the ESXi hosts themselves, but most often the virtual machines (guests) are stored on what is known as "shared storage", either NAS or SAN, using one of the following protocols: NFS, iSCSI, Fibre Channel or FCoE. I will not explain these various systems here. If these abbreviations are unfamiliar, please consult other sources for definitions and descriptions.

Shared storage is essential in most vSphere implementations because some vSphere components will not function without it (or function better with it). In the most simple configuration (that I will use in this post), we have two ESXi hosts with a virtual machine stored on a shared storage device. That could be a QNAP or Synology NAS device or a larger EMC or NetApp SAN system. In my case, I will use a virtual machine running Openfiler. If you want to explore this option yourself, you can visit the Openfiler site here:


In this post, I'll install and configure Openfiler in the first section and then demonstrate how we create a new datastore on the Openfiler using what is known as "NFS", a file-based protocol for remote storage, as opposed to "iSCSI", which is "block-based" storage.

Installation and basic configuration of Openfiler

In my test environment, I will install Openfiler as a guest virtual machine in VMware workstation with the following settings:

There is no "Openfiler" option for the operating system but Openfiler 2.99 will work with "RHEL 5 x64". For the CD/DVD setting, I am pointing the drive at the Openfiler .iso downloaded from the vendor website. I will not show the step-by-step configuration of VMware for Openfiler. You should be able to imitate my configuration above by clicking "Next" as needed and selecting the values that match those shown above.

To install Openfiler, we click on "Power on this virtual machine" (the green arrow in the screenshot above) and proceed with the configuration choices. On the first screen (the Introduction - not shown here), we click Next, and on the following page (not shown) we select the keyboard type.

Note: I will not waste space and time posting screenshots of the "Welcome" screen and very basic configuration (language, region/time zone). Most likely, you can determine the appropriate choice for your environment without my assistance (and click "Next" without my prompting).

Openfiler needs to initialize the /dev/sda drive (or simply sda) to create partitions. This will cause data loss, so if you attempt to install Openfiler on a drive that previously contained data, ensure that the data is no longer necessary or has been copied elsewhere. My drive is brand new, so I can proceed without further verification. This is the warning:

I only have one drive (/dev/sda) so I select it:

We will see one more warning:

Next, we need to configure the network settings. It is preferable to assign a static IP address and a host name referenced in DNS. We click on Edit for the IP configuration...

I will assign the following address (and subnet mask)...

And then enter a hostname for the filer along with the DNS server(s) parameters:

After a screen where we select our region/time zone (not shown), we are prompted to enter a root password for the filer:

Lastly, we click "Next" to start the installation and select "Reboot" when the installation has completed (these two screenshots not shown).

Configuration of NFS storage

Once the Openfiler reboots we will connect to the web interface at the following URL:

Note: replace my IP address with the IP address you assigned to your Openfiler.

Use the following credentials to login (you can change them later):

username: openfiler
password: password

Once inside, we see a number of tabs for the configuration of the Openfiler.

The Status tab displays system information. Here is a partial screenshot (click to enlarge):

Under the System tab, we see the network settings that we configured at installation:

In the section "Network Access Configuration", we must create what is essentially an ACE (Access Control Entry) that allows specific hosts or hosts on a particular subnet to access the Openfiler. This affords a certain level of security to the extent that we can limit access to certain hosts or subnets only. In my test network, I happen to be using a vast /8 subnet. In other circumstances, we may want to impose more narrow limits. I will name the "rule" Allow-MgtNet and I will take this opportunity to clarify one important point.

We would almost always have separate networks for vSphere management, client access to virtual machines and traffic between ESXi hosts for vMotion, DRS, HA, etc. The names of our rules could reflect the different subnetworks to which we may need to grant access. In fact, this would also be reflected in the Network Interface Configuration section, where we would most likely have more than a single NIC. Please take note of this if you are planning to implement vSphere for real. My test network imposes certain limitations, hence the network simplifications.

Note: besides the network settings, we can configure other parameters by clicking on the choices in the menu on the right:

Next, I will configure storage for our virtual machines. This is the hierarchy of the disk structure and organization...

  1. Partition
  2. Volume Group
  3. Volume
  4. (and LUN if iSCSI is used, not applicable for NFS)
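Under the hood, Openfiler builds this hierarchy with standard Linux LVM2 tools. The web UI performs roughly the equivalent of the following (the volume group and volume names match the ones used later in this post; the partition device, size and flags are illustrative):

```shell
# 1. Create a partition on the second disk (interactive; type 8e = Linux LVM)
fdisk /dev/sdb

# Register the new partition as an LVM physical volume
pvcreate /dev/sdb1

# 2. Create the volume group on top of the physical volume
vgcreate VG1 /dev/sdb1

# 3. Create the logical volume inside the volume group
lvcreate -n VOL-01 -L 50G VG1

# Format the volume with XFS (the file system choice that backs NFS here)
mkfs.xfs /dev/VG1/VOL-01
```

These commands require root on the filer and a real second disk, so they are a sketch of what the GUI does rather than something to run alongside it.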

Under the Volumes tab, we click on "Block Devices":

We select /dev/sdb and click on the link (see above).

We do not have any partitions on the disk so we go to the lower section called "Create partition in /dev/sdb":

We should then see a partition like this:

After the partition, we create a new Volume Group (click on Volume Groups in the menu on the right):

Click on "Add volume group" to complete the operation.

Next, click on Add Volume in the menu (Volumes section):

We select the volume group for the volume (in our case, we only have one: VG1):

We name the volume (VOL-01), describe the volume, adjust the amount of space and select a file system. This last part can be confusing. We want to use NFS in this scenario but... there is no choice for NFS. In fact, XFS is the correct choice here and will work for NFS:

We click on create and we will see the new volume with configuration details (screenshot not shown).

At this point, I will enable the NFS Server service on the Openfiler (without this service enabled, NFS will not function; we can also start and stop the service directly from this page):
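For those who prefer the command line, enabling the service over SSH corresponds roughly to the following on a RHEL 5-style init system (Openfiler 2.99 is rPath-based, so the exact service name may differ slightly on your build):

```shell
# Start the NFS server now
service nfs start

# Make it start automatically at boot
chkconfig nfs on

# Confirm that the NFS daemons are running
service nfs status
```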

Under the Shares tab, we click on the volume to create a shared folder and make it accessible from remote hosts:

We share the folder by clicking on it...
And clicking on the option "Make Share":

We then have a shared folder to which we need to regulate access. In my practice lab, I will allow Public (guest) access:

Scrolling down, in the "Host Access Configuration" section, we select RW for NFS (so hosts can make changes to virtual machines stored in the folder or its subfolders) and click "Edit" so we can change the UID/GID Mapping to "no_root_squash":
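For reference, the export that Openfiler writes for this configuration looks roughly like the /etc/exports entry below. The path and subnet are illustrative (Openfiler mounts volumes under /mnt/<volume group>/<volume>/), so treat this as a config sketch rather than something to copy verbatim:

```shell
# Illustrative /etc/exports entry generated by Openfiler; the actual path
# depends on your volume group, volume and share names, and the network
# matches the ACE created under "Network Access Configuration":
/mnt/vg1/vol-01/share  10.0.0.0/255.0.0.0(rw,no_root_squash,sync)
```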

So now we have NFS storage that we can use to store VMware guests. In the next section, I'll create an NFS Datastore using that shared folder.
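Before moving on to vCenter, we can verify the export from any Linux machine on the allowed subnet (replace the IP address with your Openfiler's):

```shell
# showmount queries the NFS server's export list; if the shared folder and
# the allowed network appear in the output, the filer side is configured
# correctly and the ESXi hosts should be able to mount it.
showmount -e 10.0.0.50
```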

Creation of an NFS Datastore

In vCenter, at the location indicated below, we select the option to create a new Datastore:

We select the location for the new Datastore (in my case, there is not much of a choice):

For an NFS datastore, we logically select "NFS":

Next, we select the NFS version (for my scenario, NFS 3.0 will suffice):

We enter a name for the Datastore, indicate the folder and the IP address (or host name) of the Openfiler:

Note: to designate the folder, we can copy the path from the Openfiler.
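As an alternative to the vCenter wizard, the same NFS 3 datastore can be mounted from the ESXi command line. The IP address and share path below are examples from my lab; adjust them to match your Openfiler:

```shell
# Mount the Openfiler export on this ESXi host as a datastore named OFNAS-1
esxcli storage nfs add --host=10.0.0.50 \
  --share=/mnt/vg1/vol-01/share --volume-name=OFNAS-1

# Verify that the datastore is mounted and accessible
esxcli storage nfs list
```

Note that this mounts the datastore on one host only; the vCenter wizard is more convenient when several hosts need access.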

We then designate the hosts that will need to access the new Datastore. For the time being, I will add ESX1 only:

And here is a summary of our configuration choices:

Once we close the configuration assistant, we can see the new Datastore, OFNAS-1:


In my next blog post, I will demonstrate how to move a virtual machine to the new Datastore. Of course, we could also simply create a brand new virtual machine directly on the new Datastore.

Wednesday, February 10, 2016

vSphere 6 - Create datastore and prepare guest OS installation

Now that we have added an ESXi host to vCenter (see my previous blog post), we are almost ready to install our first guest operating system (OS). But first, we need to configure a datastore to house the virtual machines and other files, either on the ESXi host itself or on shared storage.

Datastore configuration

In fact, there is, by default, a datastore on each ESXi host named "DataStore". However, if we will have multiple ESXi hosts, centrally managed by vCenter, they cannot all be named "DataStore". I have renamed the datastore on host ESX1 to "ESX1-Datastore" (right-click the datastore and select "Rename" in the menu).

Note: the datastore resides under the Datacenter.

If we look at the "Related Objects" tab, we can see the host that holds the datastore as well as the guests and templates located in this datastore (once we create them).

And if we look at the "Manage" tab (Files section) we can see the file structure of the datastore. I will create my virtual machines using a .iso file and want to store the .iso files for OS installation in a separate folder. So I click on the folder with the green plus sign and name the new folder:

Next, I highlight the new ISO_files folder and select the "Upload a file to the datastore" icon. I will be prompted to install the "File Integration Plug-in" that will allow me to upload files to the ESXi host from the vCenter server...

So I download the plug-in...

execute the file...

and accept any security prompts related to the plug-in:

I can now browse to the location of my .iso file and upload it to the ESXi datastore.
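If the File Integration Plug-in gives you trouble, the .iso can also be copied over SSH, assuming SSH is enabled on the ESXi host (the host name, datastore and folder names here match my lab and are examples):

```shell
# Create the destination folder on the datastore (if it does not exist yet)
ssh root@esx1 mkdir -p /vmfs/volumes/ESX1-Datastore/ISO_files

# Copy the installer image to the datastore folder
scp lubuntu.iso root@esx1:/vmfs/volumes/ESX1-Datastore/ISO_files/
```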

I can rename, move, copy or delete the file as needed:

Installation of the guest OS

Note: the installation of the guest OS will vary greatly based on the OS selected. I will concentrate on the configuration of the guest OS settings in vCenter rather than on step-by-step instructions for a particular OS (that would be quite useless for a different OS).

We highlight the Datacenter, right-click and select "New Virtual Machine".

Note: click Next as necessary.

Select the option "Create a new virtual machine":

Note: you can click to enlarge the images.

I will name my virtual machine "Lubuntu-1" and place it directly in the "HQ" Datacenter. For better organization, we can create folders under the Datacenter object, sorting the VMs by operating system or by function (web servers, database servers, etc.):

Note: Lubuntu is a Linux distribution with a reputation for low resource use, which I believe makes it appropriate for exploring the features of vSphere in a lab where vCenter and ESXi are already using 8 GB and 4 GB respectively, for a total of 12 GB.

I will select ESX1 for the host (or the "compute resource"):

I will use my ESX1-Datastore to house the new virtual machine:

The next setting is important if we have legacy ESXi hosts running older versions such as 4.x, 5.1 and 5.5, and onto which we might have to move the virtual machine. In my current test environment, I have a single ESXi 6 host, so I can choose the following setting:

Earlier, I stated that I would use a Linux distribution for the guest OS. Lubuntu is a variant of the Ubuntu "distro", so I'll select Linux and then Ubuntu Linux (32 bit):

We have almost finished. If we prefer, we can customize the hardware by increasing the number of CPUs, the amount of memory or the size of the (virtual) hard disk:

Our choices are summarized on the last page:

There is one setting that we must change if we intend to install the OS with a .iso file. We right-click on the virtual machine and select "Edit Settings":

For the CD/DVD drive, we select "Datastore ISO file" instead of the default "Client Device":

We then browse to the location of the .iso file in the Datastore:

To power on the guest, we highlight it and then click on the green arrow as shown below:

In this case, clicking on the green arrow would start the installation process.


I will not present the installation of Lubuntu here since that is not particularly useful, in itself, for understanding how to manage vCenter. Obviously, the installation steps and choices would be completely different for other operating systems (and slightly different for other Ubuntu variants).