HP OneView – Part 1: Enclosures and Uplinks (logical or otherwise)

The initial configuration of HP OneView is a pretty simple and intuitive process; it just isn't documented as well as it could be. I've decided to put together a few posts detailing some of the areas of configuration that could do with more detailed procedures. I'd expect that anyone wishing to use anything documented here is already acquainted with Virtual Connect, Onboard Administrator, VMware vSphere, iLOs etc.

Design

Before any implementation work is carried out there has to be a design in place, otherwise what, and how, are you going to configure anything? The design for these posts will be a simple VMware environment consisting of a number of networks to handle the standard traffic one would expect (Production, vMotion etc.). At this point we have defined our networks, and these have been trunked by our network admin on the switches out to the C7000s we will be connecting to.

[Diagram: C7000 VMware enclosure networking]

 

From this diagram you can see that the coloured VLANs represent traffic designated as management/service traffic and the grey VLANs represent production traffic. All of these VLANs are trunked from the access switches down to the Virtual Connect modules located in the back of the C7000 enclosures.

 

Note: I've omitted SAN switches from this and just presented what would appear as zoned storage direct to the Virtual Connect. I may cover storage and Flat SAN at a later date, if there is any request to do so.


This represents the external connectivity being presented to our enclosure; it's now time for the logical configuration with HP OneView…

 

OpenStack on CentOS 7.0 (manual install)

This is a very basic overview of all of the steps (and there are a lot of them) to deploy OpenStack controllers on a single node, purely for testing purposes.

Update: It turns out that a lot of this can be automated… but I'm leaving this up as it took so long 🙁

Hopefully you’ll end up with something looking like this:

lsvm – list virtual machines in vSphere from CLI

Whilst I'm aware that it's simple enough to type vim-cmd vmsvc/getallvms from the CLI in vSphere/ESXi, having a program spawning another program is a bit of a pain, especially having to parse the results (vim-cmd output is a bit of a mess). To handle this programmatically requires a bit of messing around with XML, and then parsing a number of files that are listed in those XML files to get your results. However, the code is quite easy to modify so that you could do things such as list allocated vCPUs, or total allocated memory.

I've uploaded the code to GitHub here.

Example output:

[Screenshot: example lsvm terminal output]
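
If you just want to see the general approach without pulling the repo, here's a rough Python sketch of the same idea: read hostd's inventory file to find the registered .vmx paths, then pull a few keys out of each. The file location and .vmx keys are as found on typical ESXi 5.x hosts; this is an illustration rather than the actual lsvm code.

# Rough sketch only (not the actual lsvm code): list VMs by parsing hostd's
# inventory file and each .vmx it references.
import xml.etree.ElementTree as ET

INVENTORY = "/etc/vmware/hostd/vmInventory.xml"

def vmx_paths(inventory=INVENTORY):
    # Each registered VM has a <vmxCfgPath> entry in the inventory file
    root = ET.parse(inventory).getroot()
    return [e.text for e in root.iter("vmxCfgPath") if e.text]

def parse_vmx(path):
    # A .vmx is just key = "value" pairs, one per line
    settings = {}
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition("=")
            if sep:
                settings[key.strip()] = value.strip().strip('"')
    return settings

if __name__ == "__main__":
    for vmx in vmx_paths():
        vm = parse_vmx(vmx)
        # numvcpus is usually absent when a VM only has a single vCPU
        print("%-30s vCPUs=%-3s memMB=%s" % (vm.get("displayName", "?"),
                                             vm.get("numvcpus", "1"),
                                             vm.get("memSize", "?")))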

HP OneView API with cURL

As part of a PoC I'm spending quite a bit of time pulling apart the APIs of HP's OneView application, and pulling this functionality into an Objective-C application.

[Screenshot: HP OneView dashboard]

HP OneView is based upon a REST API and is interacted with directly via the HTTPS interface; this is the same API that the HTML5 web interface interacts with (more than likely in a loopback fashion). Connections require the following initial steps:


  1. An initial REST call passing JSON containing the userName and password variables.
  2. The response is some JSON containing an id, which is comparable to a cookie when using UCS Manager.
  3. This id is then passed as an Auth: header when interacting with the various areas of the API.

 

These are a few one-liners that can be used to communicate with the HP OneView interface.

 

  •  Login

curl -v -H "Content-Type: application/json" -d '{"userName":"Administrator","password":"<PASSWORD>"}' -X POST -k https://<IP>/rest/login-sessions

  • List server profiles

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id>" \
-H "X-API-Version: 101" \
-X GET https://<IP>/rest/server-profiles

Creating a server profile requires a little bit more work, even for a blank profile that isn't attached to a physical server. This goes against the examples in the API guide, so I think there is a mistake somewhere. However, two pieces of information are required to create an unattached server profile: a server hardware type and an enclosure group. These can be found through the following API calls:

  • List Server Hardware Types

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id>" \
-H "X-API-Version: 101" \
-X GET https://<IP>/rest/server-hardware-types

  • List Enclosure Groups

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id>" \
-H "X-API-Version: 101" \
-X GET https://<IP>/rest/enclosure-groups

These will result in a large amount of data being dumped; however, what you are looking for is the URI of the enclosure group and the particular server hardware type that your profile is being attached to. With these URIs you can construct a simple one-liner to create a blank server profile as shown below:

curl -k -v -H "Content-Type: application/json" \
-H "Auth: <id>" \
-H "X-API-Version: 101" \
-d '{"type":"ServerProfileV4","name":"blankProfile01","serverHardwareTypeUri":"/rest/server-hardware-types/XXXXX","enclosureGroupUri":"/rest/enclosure-groups/XXXXX"}' \
-X POST https://<IP>/rest/server-profiles
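
For anything beyond a couple of calls, the same workflow is easier to read as a small script. Below is a minimal Python sketch using the third-party requests module that chains the steps above together. It assumes the login response field is named sessionID and that collection responses carry a members list (check the raw JSON from your own appliance), and it simply picks the first hardware type and enclosure group it finds; <IP> and <PASSWORD> are placeholders as in the curl examples.

# Sketch only: login, look up URIs and create a blank profile in one go.
import requests

ONEVIEW = "https://<IP>"
HEADERS = {"Content-Type": "application/json", "X-API-Version": "101"}

s = requests.Session()
s.verify = False            # equivalent of curl -k for a self-signed cert

# 1. Login and keep the session id for the Auth: header
login = s.post(ONEVIEW + "/rest/login-sessions", headers=HEADERS,
               json={"userName": "Administrator", "password": "<PASSWORD>"})
auth = dict(HEADERS, Auth=login.json()["sessionID"])   # field name assumed

# 2. Grab the first server hardware type and enclosure group URIs
sht = s.get(ONEVIEW + "/rest/server-hardware-types",
            headers=auth).json()["members"][0]["uri"]
eg = s.get(ONEVIEW + "/rest/enclosure-groups",
           headers=auth).json()["members"][0]["uri"]

# 3. Create the unattached (blank) server profile
profile = {"type": "ServerProfileV4", "name": "blankProfile01",
           "serverHardwareTypeUri": sht, "enclosureGroupUri": eg}
r = s.post(ONEVIEW + "/rest/server-profiles", headers=auth, json=profile)
print(r.status_code, r.text)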

VM creation from CLI

For testing purposes, whilst developing a zeroconf daemon, I needed to be able to quickly create some VMs and register them on vSphere. The process of opening the infrastructure client and running through the wizard etc., or other methods of creating virtual machines, was just too slow. After a bit of searching around and looking at kb.vmware.com, it became apparent that VMware don't support creating virtual machines from their CLI toolset.

I found a script from 2006 that uses commands that don't even exist anymore, but with a bit of tweaking it will happily create test virtual machines.
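
The gist of the tweaked approach is simple enough to sketch directly: write a bare-bones .vmx, create a thin disk with vmkfstools, then register the VM with vim-cmd solo/registervm. The Python sketch below does exactly that; the datastore name, guest OS and sizes are made-up assumptions for a throwaway test VM, and it is not the original 2006 script.

# Sketch only: create and register a minimal test VM on an ESXi host.
import os
import subprocess

NAME = "testvm01"
VMDIR = os.path.join("/vmfs/volumes/datastore1", NAME)   # adjust datastore

VMX = """config.version = "8"
virtualHW.version = "8"
displayName = "%(name)s"
memSize = "512"
numvcpus = "1"
guestOS = "otherlinux-64"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "%(name)s.vmdk"
ethernet0.present = "TRUE"
ethernet0.virtualDev = "e1000"
ethernet0.networkName = "VM Network"
""" % {"name": NAME}

os.makedirs(VMDIR)
with open(os.path.join(VMDIR, NAME + ".vmx"), "w") as f:
    f.write(VMX)

# Create a 4GB thin-provisioned disk for the VM
subprocess.check_call(["vmkfstools", "-c", "4g", "-d", "thin",
                       os.path.join(VMDIR, NAME + ".vmdk")])

# Register the new VM with hostd so it appears in the client and in vim-cmd
subprocess.check_call(["vim-cmd", "solo/registervm",
                       os.path.join(VMDIR, NAME + ".vmx")])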

 

EVO:RAIL – LoudMouth aka Zeroconf

What is Zeroconf?

Zeroconf was first proposed in November 1999 and finalised in 2003, and has found its largest adoption in Mac OS products, nearly all networked printers, and other network device vendors. The most obvious and recognisable implementation of zeroconf is Bonjour, which has been bundled with Mac OS X since 10.2 (originally under the name Rendezvous) and is used to provide a number of shared network services. The basics of Zeroconf are explained quite simply on zeroconf.org with the following (abbreviated) statement: “making it possible to take two laptop computers, and connect them … without needing a man in a white lab coat to set it all up for you”.

Basically, zeroconf allows a server/appliance or client device to discover others without any networking configuration. It is comparable to DHCP in some regards, in that a computer with no network configuration can send out a DHCP request (essentially asking to be configured by the DHCP server), and the response will be an assigned address and further configuration allowing communication on the network. Where it differs is that zeroconf also allows for advertisement of services (Time Capsule, printer services, iTunes shared libraries etc.); it can also advertise small amounts of data to identify itself as a type of device.

A Time Machine advertisement over zeroconf (MAC address removed):

[dan@mgmt ~]$ avahi-browse -r -a -p -t | grep TimeMachine
+;eth0;IPv4;WDMyCloud;Apple TimeMachine;local
=;eth0;IPv4;WDMyCloud;Apple TimeMachine;local;WDMyCloud.local;192.168.0.249;9;"dk0=adVN=TimeMachineBackup,adVF=0x83" "sys=waMA=00:xx:xx:xx:xx:xx,adVF=0x100"
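
To make the advertisement side a little more concrete, here is a minimal sketch using recent versions of the third-party python-zeroconf package (pip install zeroconf) on an ordinary Linux or Mac host. This has nothing to do with loudmouth itself; the service type, instance name, address and TXT records are invented for the example.

# Sketch only: advertise a made-up service over mDNS/DNS-SD and keep it
# resolvable for a minute.
import socket
import time
from zeroconf import ServiceInfo, Zeroconf

info = ServiceInfo(
    "_demo._tcp.local.",                          # made-up service type
    "esxi01._demo._tcp.local.",                   # made-up instance name
    addresses=[socket.inet_aton("192.168.0.10")],
    port=8080,
    properties={"node": "esxi01", "role": "hypervisor"},
)

zc = Zeroconf()
zc.register_service(info)                         # sends the mDNS announcements
try:
    time.sleep(60)
finally:
    zc.unregister_service(info)
    zc.close()

Browsing with avahi-browse -r _demo._tcp on another machine should then show the entry in much the same form as the Time Machine record above.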

How the EVO:RAIL team are using Zeroconf

This is from recollection of the deep-dive sessions, so I may have mistaken some points (corrections welcome).

Zeroconf has found its largest adoption in networked printers and Apple's Bonjour services; however, in the server deployment area a combination of DHCP and MAC address matching is more commonly used (Auto Deploy or kickstart from PXE boot).

The EVO:RAIL team have implemented a Zeroconf daemon that lives inside every vSphere instance and inside the VCSA instance. The daemon inside the VCSA wasn't really explained; however, the vSphere daemon instances allow the EVO:RAIL engine to discover them and take the necessary steps to automate their configuration.

Implementing Zeroconf inside vSphere (ESXi)

The EVO:RAIL team had to develop their own zeroconf daemon, named loudmouth, which is coded entirely in Python. The reason behind this was explained in one of the technical deep dives: the majority of pre-existing zeroconf implementations have dependencies on various Linux shared libraries.

/lib # ls *so | wc -l
86
/lib # uname -a
VMkernel esxi02.fnnrn.me 5.5.0 #1 SMP Release build-1331820 Sep 18 2013 23:08:31 x86_64 GNU/Linux
....
[dan@mgmt lib]$ ls *so | wc -l
541
[dan@mgmt lib]$ uname -a
Linux mgmt.fnnrn.me 3.8.7-1-ARCH #1 SMP PREEMPT Sat Apr 13 09:01:47 CEST 2013 x86_64 GNU/Linux

As the quick example above shows (32-bit libs), a vSphere instance contains only a few ELF libraries providing a limited subset of shared functionality. This means that whilst ELF binaries can be moved from a Linux distribution over to a vSphere instance, the chances are that a requirement on a shared library won't be met. Furthermore, building a static binary possibly won't help either, as the VMkernel (VMware's kernel implementation) doesn't implement the full set of Linux syscalls; this makes sense, as it isn't a general-purpose OS and the userland area of vSphere is purely for management of the hypervisor. The biggest issue for an implementation of zeroconf, which relies on UDP datagrams, is the lack of an implementation of IP_PKTINFO.

This rules out Avahi, Zero Conf IP (zcif), and Linux implementations of mDNSResponder.
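
If you want to see the IP_PKTINFO problem for yourself, the Python snippet below simply asks the kernel for that socket option (mDNS responders use it to learn which interface a multicast datagram arrived on). On a stock Linux box the call succeeds; the missing support described above is what it would flag under the VMkernel, assuming the host's bundled Python exposes the socket module.

# Sketch only: probe for the IP_PKTINFO socket option.
import socket

IP_PKTINFO = getattr(socket, "IP_PKTINFO", 8)   # 8 on Linux; not always exposed

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
try:
    s.setsockopt(socket.IPPROTO_IP, IP_PKTINFO, 1)
    print("IP_PKTINFO supported")
except socket.error as err:
    print("IP_PKTINFO unavailable: %s" % err)
finally:
    s.close()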

What about loudmouth?

Unfortunately it has yet to be said whether any components of EVO:RAIL will be open sourced or back-ported to vSphere, so whilst VMware have a zeroconf implementation for vSphere, it is likely to remain proprietary.

What next…

I've improved on where I was with my daemon; however, I'm hoping to upload it to GitHub sooner rather than later. Unfortunately work has occupied most of the weekend and most evenings so far… that, tied with catching up on episodes of Elementary and dealing with endless segfaults as I add any simple functionality, has slowed progress more than I was expecting.

Also I decided to finish writing up this post, which took most of this evening 😐

Debugging on vSphere

A summary of what to expect inside vSphere can be read here, and there is no point duplicating existing information (http://www.v-front.de/2013/08/a-myth-busted-and-faq-esxi-is-not-based.html). More importantly, when dealing with the vSphere userland libraries, or more accurately the lack of them, the use of strace is hugely valuable. More details on strace can be found here (http://dansco.de/doku.php?id=technical_documentation:system_debugging).

EVO:RAIL – “Imitation is the sincerest form of flattery”

EVO:RAIL Overview

At VMworld ’14 I managed to catch a few excellent sessions on EVO:RAIL, including the deep-dive session + Q&A that really explained the architecture that makes up the EVO:RAIL appliance. The EVO:RAIL appliance is only built by authorised vendors, or as VMware call them QEPs (Qualified EVO:RAIL Partners), from a hardware specification that is determined by VMware.

Also, the EVO:RAIL software/engine is provided to the QEPs for them to install and requires a built-in hardware ID (also provided by the QEPs) in order for the engine to work correctly. This means that currently the EVO:RAIL appliance is a sealed environment that has only been designed to exist on pre-determined hardware, and that anyone wanting to use any of this functionality on their pre-existing infrastructure will be unable to do so.

So what are the components of EVO:RAIL:
  • QEP hardware (a 2U appliance that has four separate compute nodes)
  • Loudmouth, a zeroconf daemon written in Python that also detects the hardware ID
  • The EVO:RAIL engine, which consists of Python/bash scripts, a comprehensive UI and automation capability for deployment
  • The VCSA appliance (containing loudmouth and the EVO:RAIL engine); this is pre-installed on one node in every appliance.

How is it built and how is it configured?

[Screenshot: EVO:RAIL UI]

The idea is that a customer will speak to their account manager at a QEP, place an order against a single SKU and provide some simple configuration details. The vendor will then pre-provision the four nodes with the EVO:RAIL version of vSphere, and one of these nodes will also be provisioned with the VCSA appliance (also the EVO:RAIL version). The VCSA node will be configured with some IP addresses provided by the customer so that they can complete the configuration once the appliance has been racked. The EVO:RAIL engine, combined with the loudmouth daemon, will detect the remaining nodes in the appliance and allow them to be configured; the same goes for additional appliances (maximum of four) that are added.

The simplified UI was crafted by Jehad Affoneh and provides an HTML5 + WebSockets interface that gives real-time information to the end user as they complete the EVO:RAIL configuration. Once the initial configuration (networking/passwords etc.) is complete, the EVO engine will then handle the automation of the following tasks:

  1. vSphere instance configuration (hostnames, passwords, network configuration)
  2. Deploy and configure the VCSA (once complete add in the vSphere instances, and configure VSAN)
  3. Tidy up
  4. Re-direct user to the EVO:RAIL simplified interface for VM deployment.

[Screenshot: EVO:RAIL configuration screen]

 

The final screen that the user is presented with is the EVO:RAIL simplified interface. This is a “reduced functionality” user interface that allows people to complete simple tasks such as deploying a simple VM (from simplified, pre-determined parameters such as sizing) or investigating the health of a vSphere host. The “real” management interface, i.e. vCenter, is still there in the background, and the EVO:RAIL interface still has to interact with it through the awful vCenter SOAP SDK (which hopefully will change in the next releases, thus requiring a re-write of the EVO engine). This vCenter can still be accessed through the URL on port 9443, directly with the infrastructure client, or alternatively via a small link in the top right-hand corner of the EVO interface.

What next?

EVO:RAIL has been described by JP Morgan as follows: "EVO products could be an iPhone moment for enterprise use of cloud computing". I see this in two ways:

Incredible simplification and ease of use: deployment of four nodes is meant to take no more than fifteen minutes, and the simplified interface for VM deployments takes 3-4 clicks before your virtual machine is ready to use. The use of the LoudMouth service truly makes deployment plug and play as more capacity is added.

The walled garden: the comparison to the iPhone covers this point perfectly, as this is a closed-off product only available to authorised partners. There are some really clever design ideas here that could be expanded on and back-ported to existing vSphere to provide some really great functionality.

  • Large scale deployment with the use of the LoudMouth daemon for discovery
  • Disaster recovery would be simplified again via the LoudMouth daemon advertising where virtual machines are located (in the event that vCenter doesn’t auto re-start after failure).

Imitation?

After speaking with the designer of the UI and sitting through the deep-dive sessions, it was clear there were a few design decisions or “short cuts” that had to be taken in order to get functionality to work, so I decided to see what I could improve, or at least imitate, in my vSphere lab. To begin with I started with the zeroconf agent and how it could be implemented or improved upon; in EVO:RAIL this had to be written from scratch in Python due to the development team not managing to get any pre-existing solution working (which is understandable; Avahi is hideous and has dependencies on everything).

So I introduce “boaster”, a tiny daemon written in C that handles simple mDNS service announcement. It's only a PoC; however, I intend to add more functionality in the next few days. At the moment a vSphere hypervisor will advertise itself and its DHCP address to a Bonjour browser or avahi-browse.

[Screenshot: mDNS advertisement from the ESXi host]

 

.. More to follow.

Layer 2 over Layer 3 with vSwitch and Mikrotik virtual routers

I've trialled a number of different ideas for running several vSwitches with virtual machines attached on a vSphere host that has only a single interface. The problem lies in the fact that only one of your vSwitches has a physical interface (uplink) present, which obviously means that traffic can go between the virtual machines on that vSwitch but can't go northbound to other devices on the network. I decided to give the Mikrotik virtual router a go, as its requirements are so tiny it doesn't have a noticeable footprint on my small infrastructure (the virtual routers require 64MB of RAM).

Using the two software routers it is possible to bridge interfaces on numerous vSwitches and then use EoIP to create another layer 2 bridge northbound over layer 3. In this simple example we will use two vSphere hosts (01 / 02); in real life both are Gigabyte Brix hosts that, whilst good for small lab environments, only have a single Gigabit interface. This is limiting with regards to what network-based lab environments you can put together, as any vSwitch that doesn't have a physical interface can't send traffic anywhere other than inside that vSwitch, and having different configurations on each host means that vMotion will break the hosts' network connectivity.

Below is the configuration I currently have:

[Diagram: layer 2 over layer 3 topology]

 

Although not explicitly mentioned in the diagram, the interface on vSwitch0 is ether1; this interface will be on the same vSwitch that has a physical interface and thus will allow outbound traffic from the ESXi host. This interface will need configuring to enable connectivity to the switch and also to route out to the internet (if required).

 Configuring router01

Configure ether1

Enable the interface and assign a reachable address (192.168.0.2)

/interface enable ether1
/ip address add address=192.168.0.2/24 interface=ether1 comment="External Interface"

Also add another interface that will be used as an EoIP end point.

/ip address add address=10.0.0.1/24 interface=ether1 comment="EoIP endPoint"

Add a default gateway (192.168.0.1), which is most people's router.

/ip route add dst-address=0.0.0.0/0 gateway=192.168.0.1

Create an Ethernet over IP interface

This EoIP interface is required to encapsulate layer 2 frames into layer 3 packets that can be routed.

/interface eoip add comment="eoip interface" name="eoip01" remote-address=10.0.0.2 tunnel-id=1

Create a bridge and add interfaces

The bridge is required to allow layer 2 traffic between the interfaces that sit on the different vSwitches.

/interface bridge add comment="Bridge between vmnics" name=esx-bridge protocol-mode=rstp
/interface bridge port add bridge=esx-bridge interface=eoip01
/interface bridge port add bridge=esx-bridge interface=ether2

 

 Configuring router02

Configure ether1

Enable the interface and assign a reachable address (192.168.0.3)

/interface enable ether1
/ip address add address=192.168.0.3/24 interface=ether1 comment="External Interface"

Also add another interface that will be used as an EoIP end point.

/ip address add address=10.0.0.2/24 interface=ether1 comment="EoIP endPoint"

Add a default gateway (192.168.0.1), which is most people's router.

/ip route add dst-address=0.0.0.0/0 gateway=192.168.0.1

Create an Ethernet over IP interface

This EoIP interface is required to encapsulate layer 2 frames into layer 3 packets that can be routed.

/interface eoip add comment="eoip interface" name="eoip01" remote-address=10.0.0.1 tunnel-id=1

Create a bridge and add interfaces

The bridge is required to allow layer 2 traffic between the interfaces that sit on the different vSwitches.

/interface bridge add comment="Bridge between vmnics" name=esx-bridge protocol-mode=rstp
/interface bridge port add bridge=esx-bridge interface=eoip01
/interface bridge port add bridge=esx-bridge interface=ether2

 Testing and DHCP on vSwitch1

Connectivity between the two sides can be tested by pinging the opposite EoIP endpoint address from either router.

e.g. router01 pinging 10.0.0.2 and vice-versa

The final test is placing a DHCP server on your vSwitch1 interface and ensuring that clients on both sides of the network receive DHCP leases.

Creating the DHCP pool

/ip pool add name=vswitch1_pool ranges=172.16.0.2-172.16.0.254

Creating the DHCP server

/ip dhcp-server add address-pool=vswitch1_pool disabled=no interface=ether2 name=vswitch1_dhcp

Then