Upgrading the home lab to vSphere 6.0 with Gigabyte BRIX

I finally decided that this Easter weekend was going to be the time to upgrade the lab to vSphere 6.0, which meant some messing around would be required because the Gigabyte BRIX uses unsupported network adapters.

Hardware overview (2x):

Gigabyte BRIX with an i5-3337U CPU @ 1.80 GHz (turbo up to 2.7 GHz), 2 cores with hyper-threading.

16 GB RAM and a 256 GB SSD.

A WD NFS datastore (4 TB) for large, slow storage.

HP OneView – Part 2: Server Profiles

Apologies for the delay, I was busy.

What is a Server Profile

The “Server Profile” is the defining phrase that comes to mind when thinking about the SDDC (Software Defined Data Centre). It allows a server administrator to define a hardware identity or configuration (MAC addresses, WWNN, BIOS, Boot, RAID config etc.) in software and then apply this to a “blank” server.

This brings a number of key advantages:

  • Pre-designed hardware identities allow configuration tasks to be pre-provisioned before hardware deployment (SAN zoning, firewall ACLs etc.)
  • Designated addresses allow easier identification, e.g. aa:bb:cc:dd:01:01 = Management / aa:bb:cc:dd:02:01 = Production
  • Server failure/replacement doesn’t require upstream changes: the software identity (server profile) is re-applied and all previous configuration remains valid.

Design

Following on from the previous HP OneView post, this is a continuation of the same simple VMware vSphere deployment. As before, a good design should exist before implementation, so again I’ve embedded a diagram detailing where and how these networks will be mapped to the virtual interfaces on a blade.

[Diagram: VMware OneView Service Profiles]

Quite simply:

  • Two virtual interfaces defined for all of the Service networks.
  • Two virtual interfaces defined for the Production networks.
  • Two HBAs on each fabric, providing resilience for Fibre Channel traffic.

As mentioned, this is a simple design for a vSphere host but allows expansion in the future with the ability to define a further virtual interface on each physical interface inside the blade.

 

HP OneView – Part 1: Enclosures and Uplinks (logical or otherwise)

The initial configuration of HP OneView is a pretty simple and intuitive process; it just isn’t documented as well as it could be. I’ve decided to put together a few posts detailing some of the areas of configuration that could do with slightly more detailed procedures. I’d expect that anyone who wishes to use anything documented here is already acquainted with Virtual Connect, Onboard Administrator, VMware vSphere, iLOs etc.

Design

Before any implementation work is carried out there has to be a design in place, otherwise what, and how, are you going to configure anything? The design for these posts will be a simple VMware environment consisting of a number of networks to handle the standard traffic one would expect (Production, vMotion etc.). At this point we have defined our networks, and these have been trunked by our network admin on the switches out to the C7000s we will be connecting to.

[Diagram: c7000 VMware connectivity]

 

From this diagram, you can see that the coloured VLANs represent management/service traffic and the grey VLANs represent production traffic. All of these VLANs are trunked from the access switches down to the Virtual Connect modules located in the back of the C7000 enclosures.

 

Note: I’ve omitted SAN switches from this and just presented what would appear as zoned storage presented directly to the Virtual Connect modules. I may cover storage and Flat SAN at a later date, if there is any request to do so.

This represents the external connectivity being presented to our Enclosure, it’s now time for the logical configuration with HP OneView…

 

OpenStack on CentOS 7.0 (manual install)

This is a very basic overview of all of the steps (and there are a lot of them) needed to deploy OpenStack controllers on a single node, purely for testing purposes.

Update: it turns out that a lot of this can be automated… but I’m leaving this up as it took so long 🙁

Hopefully you’ll end up with something looking like this:

lsvm – list virtual machines in vSphere from CLI

Whilst I’m aware that it’s simple enough to type vim-cmd vmsvc/getallvms from the CLI in vSphere/ESXi, having one program spawn another is a bit of a pain, especially when you then have to parse the results (vim-cmd output is a bit of a mess). Handling this programmatically requires a bit of messing around with XML, and then parsing a number of files that are listed in those XML files to get your results. However, the code is quite easy to modify, so you could do things such as list allocated vCPUs or total allocated memory.
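For a rough idea of what the tool automates, the equivalent manual approach looks something like the snippet below. This is just a sketch, not the lsvm code itself, and it assumes the default hostd inventory location on ESXi 5.x:

# Pull the .vmx paths out of the hostd inventory, then read each VM's
# display name from its .vmx file.
grep -o '<vmxCfgPath>[^<]*</vmxCfgPath>' /etc/vmware/hostd/vmInventory.xml \
  | sed -e 's/<[^>]*>//g' \
  | while read vmx; do
      grep -i '^displayName' "$vmx"
    done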

I’ve uploaded the code to github here.

Example output:

[Screenshot: lsvm example output]

HP Oneview API with cURL

As part of a PoC I’m spending quite a bit of time pulling apart the APIs of HP’s OneView application and pulling this functionality into an Objective-C application.

[Screenshot: HP OneView dashboard]

HP OneView is built around a REST API and is interacted with directly over HTTPS; this is the same API that the HTML5 web interface uses (more than likely in a loopback fashion). Connections require the following initial steps:

  1. An initial REST call passing JSON containing the userName and password variables.
  2. The response is some JSON containing an id, which is comparable to the cookie used by UCS Manager.
  3. This id is then passed as an Auth header when interacting with the various areas of the API.

 

These are a few one liners that can be used to communicate with the HP OneView interface.

 

  •  Login

curl -v -H "Content-Type: application/json" -d '{"userName":"Administrator","password":" <PASSWORD> "}' -X POST -k https:// <IP> /rest/login-sessions

  • List server profiles

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id> " \
-H "X-API-Version: 101" \
-X GET https:// <IP> /rest/server-profiles

Creating a server profile requires a little more work, even for a blank profile that isn’t attached to a physical server. This goes against the examples in the API guide, so I think there is a mistake somewhere. However, two pieces of information are required to create an unattached server profile: a server hardware type and an enclosure group, and these can be found through the following API calls:

  • List Server Hardware Types

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id> " \
-H "X-API-Version: 101" \
-X GET https:// <IP> /rest/server-hardware-types

  • List Enclosure Groups

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id> " \
-H "X-API-Version: 101" \
-X GET https:// <IP> /rest/enclosure-groups

These will result in a large amount of data being dumped; what you are looking for is the URI of the enclosure group and of the particular server hardware type that your profile will be attached to. With these URIs you can construct a simple one-liner to create a blank server profile, as shown below:

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id> " \
-H "X-API-Version: 101" \
-d '{"type":"ServerProfileV4","name":"blankProfile01","serverHardwareTypeUri":"/rest/server-hardware-types/XXXXX","enclosureGroupUri":"/rest/enclosure-groups/XXXXX"}' -X POST -k https:// <IP> /rest/server-profiles

VM creation from CLI

For testing purposes, whilst developing a zeroconf daemon, I needed to be able to quickly create some VMs and register them on vSphere. The process of opening the vSphere Client and running through the wizard (or any of the other usual methods of creating virtual machines) was just too slow. After a bit of searching around and looking at kb.vmware.com, it became apparent that VMware don’t support creating virtual machines from their CLI toolset.

I found a script from 2006 that uses commands that don’t even exist any more, but with a bit of tweaking it will happily create test virtual machines.
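The rough shape of the tweaked approach is sketched below. This isn’t the original script, and the datastore path, VM name and settings are made up for illustration: write a bare-bones .vmx, create a thin disk with vmkfstools, then register the VM with vim-cmd.

# Create a directory on the datastore, a minimal .vmx and a thin-provisioned
# disk, then register the VM so it shows up in the inventory.
VMDIR=/vmfs/volumes/datastore1/testvm01
mkdir -p "$VMDIR"
cat > "$VMDIR/testvm01.vmx" <<EOF
config.version = "8"
virtualHW.version = "8"
displayName = "testvm01"
guestOS = "otherlinux-64"
memsize = "512"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "testvm01.vmdk"
EOF
vmkfstools -c 1g -d thin "$VMDIR/testvm01.vmdk"
vim-cmd solo/registervm "$VMDIR/testvm01.vmx"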

 

EVO:RAIL – LoudMouth aka Zeroconf

What is Zeroconf?

Zeroconf was first proposed in November 1999 and finalised in 2003, and has found its largest adoption in Mac OS products, nearly all networked printers and other network device vendors. The most obvious and recognisable implementation of zeroconf is Bonjour, which has shipped with Mac OS X since 10.2 and is used to provide a number of shared network services. The basics of Zeroconf are explained quite simply on zeroconf.org with the following (abbreviated) statement: “making it possible to take two laptop computers, and connect them … without needing a man in a white lab coat to set it all up for you”.

Basically, zeroconf allows a server, appliance or client device to discover one another without any network configuration. It is comparable to DHCP in some regards, in that a computer with no network configuration can send out a DHCP request (essentially asking to be configured by the DHCP server) and the response is an assigned address plus further configuration allowing communication on the network. Where it differs is that zeroconf also allows for the advertisement of services (Time Capsule, printer services, iTunes shared libraries etc.); it can also advertise small amounts of data to identify itself as a type of device.
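On an ordinary Linux host running Avahi, advertising a service together with a little identifying TXT data looks roughly like this (the service name, type and port here are made up for illustration):

# Advertise a hypothetical management service plus some TXT data;
# other hosts on the segment can then discover it with avahi-browse.
avahi-publish -s "lab-appliance01" _mgmt._tcp 8443 "model=example" "role=controller"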

A Time Machine advertisement over zeroconf (MAC address removed):

[dan@mgmt ~]$ avahi-browse -r -a -p -t | grep TimeMachine
+;eth0;IPv4;WDMyCloud;Apple TimeMachine;local
=;eth0;IPv4;WDMyCloud;Apple TimeMachine;local;WDMyCloud.local;192.168.0.249;9;"dk0=adVN=TimeMachineBackup,adVF=0x83" "sys=waMA=00:xx:xx:xx:xx:xx,adVF=0x100"

How the EVO:RAIL team are using Zeroconf

This is from my recollection of the deep-dive sessions, so I may have mistaken the point (corrections welcome).

Zeroconf has found its largest adoption in networked printers and Apple’s Bonjour services; in the server deployment area, however, a combination of DHCP and MAC address matching is more commonly used (Auto Deploy or kickstart from PXE boot).

The EVO:RAIL team have implemented a Zeroconf daemon that lives inside every vSphere instance and inside the VCSA instance. The daemon inside the VCSA wasn’t really explained; however, the vSphere daemon instances allow the EVO:RAIL engine to discover them and take the necessary steps to automate their configuration.

Implementing Zeroconf inside vSphere (ESXi)

The EVO:RAIL team had to develop their own zeroconf daemon, named loudmouth, which is written entirely in Python. The reason behind this was explained in one of the technical deep dives: the majority of pre-existing zeroconf implementations have dependencies on various Linux shared libraries.

/lib # ls *so | wc -l
86
/lib # uname -a
VMkernel esxi02.fnnrn.me 5.5.0 #1 SMP Release build-1331820 Sep 18 2013 23:08:31 x86_64 GNU/Linux
....
[dan@mgmt lib]$ ls *so | wc -l
541
[dan@mgmt lib]$ uname -a
Linux mgmt.fnnrn.me 3.8.7-1-ARCH #1 SMP PREEMPT Sat Apr 13 09:01:47 CEST 2013 x86_64 GNU/Linux

As the quick example above shows (32-bit libs), a vSphere instance contains only a few ELF libraries providing a limited subset of shared functionality. This means that whilst ELF binaries can be moved from a Linux distribution over to a vSphere instance, the chances are that a requirement on a shared library won’t be met. Furthermore, building a static binary possibly won’t help either, as the VMkernel (VMware’s kernel implementation) doesn’t implement the full set of Linux syscalls; this makes sense, as it isn’t a full OS implementation and the userland area of vSphere is purely for management of the hypervisor. The biggest issue for a zeroconf implementation, which relies on UDP datagrams, is the lack of an implementation of IP_PKTINFO.

This rules out Avahi, Zero Conf IP (zcip) and the Linux implementations of mDNSResponder.

What about loudmouth?

Unfortunately it has yet to be said whether any components of EVO:RAIL will be open-sourced or back-ported to vSphere, so whilst VMware now have a zeroconf implementation for vSphere, it is likely to remain proprietary.

What next…

I’ve improved on where I was with my daemon, and I’m hoping to upload it to GitHub sooner rather than later. Unfortunately work has occupied most of the weekend and most evenings so far; that, tied with catching up on episodes of Elementary and dealing with endless segfaults as I add any simple functionality, has slowed progress more than I was expecting.

Also I decided to finish writing up this post, which took most of this evening 😐

Debugging on vSphere

A summary of what to expect inside vSphere can be read here (http://www.v-front.de/2013/08/a-myth-busted-and-faq-esxi-is-not-based.html), so there is no point duplicating existing information. More importantly, when dealing with the vSphere userland libraries (or, more accurately, the lack of them), the use of strace is hugely valuable. More details on strace can be found here (http://dansco.de/doku.php?id=technical_documentation:system_debugging).
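As a quick illustration (assuming you’ve copied a statically linked strace onto the host and are tracing a hypothetical test daemon), tracing just the network syscalls makes it easy to spot where a port of an existing zeroconf implementation falls over:

# Follow forks, trace only network-related syscalls and write the log to a
# file for inspection afterwards.
strace -f -e trace=network -o /tmp/daemon.trace ./mydaemon
grep -i pktinfo /tmp/daemon.trace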