Automate HPE OneView/Synergy with Chef and Docker

As mentioned in a previous post, I've been working quite a lot with HPE OneView (which powers HPE Synergy through its Composer), and I thought I'd put a post together that summarises automating deployments through Chef. There is already plenty of information (some of it somewhat scattered) around the internet on using the HPE OneView Chef driver:

  • Build a Chef Environment and install HPE OneView Provisioning driver -> Here
  • Overview into the code architecture -> Here
  • Technical white paper -> Here

To simplify the process of using Chef with HPE OneView, I've put together a couple of Dockerfiles that build Docker images containing everything required, so that meeting the prerequisites for the Chef/OneView provisioning driver only takes a few commands. A side effect of containerising Chef and the provisioning driver is that it becomes incredibly simple to maintain a group of configurations and recipes that interact with both Synergy Composers and HPE OneView instances managing DL and BL servers.
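As a rough sketch, building one of those images locally would look something like the following (the image tag oneview-chef is a placeholder of my own rather than a published image):

# Build the image from the Dockerfile in the current directory (tag name assumed)
docker build -t oneview-chef .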

 

The image below depicts how having multiple configurations would work:

oneview-chef-docker

Essentially, using -v /local/path:/container/path allows us to have three folders, each containing its own knife.rb (the configuration for that OneView instance) and a recipe.rb applicable to that particular set of infrastructure. The mapping described above always maps a path on the host machine to /root/chef inside the container, allowing simple provisioning through Chef. It also makes it incredibly simple to manage a number of sets of infrastructure from a single location.
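A minimal sketch of running the container against one of those folders (the image name oneview-chef and the host path are assumptions; the mounted folder holds the knife.rb and recipe.rb for that OneView instance):

# Mount one infrastructure folder into /root/chef and converge its recipe in local mode
docker run -it --rm \
  -v /home/dan/oneview-prod:/root/chef \
  oneview-chef chef-client -z -c /root/chef/knife.rb /root/chef/recipe.rb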

Also be aware that a Chef Server isn't required; however, to be able to clean up (:destroy) machines created through Chef in a Docker container without a Chef Server, make sure that Chef records what has been deployed locally.

Example for your recipe.rb:

with_chef_local_server :chef_repo_path => '/root/chef/',
  :client_name => Chef::Config[:node_name]

The Dockerfile is located here.

raspberry-pi-logo

 

For the more adventurous, it is also possible to run all of this from a Docker container on a Raspberry Pi (the usage is the same). The Dockerfile to build an image that will run on a Raspberry Pi is located here.

Chef and HPE OneView

HPETSS

We're currently about halfway through 2016, and I've been very privileged to spend a lot of the year working with Chef: not just their software, but also presenting with them throughout Europe. That brings us up to the current point, where last week I was presenting at HPE TSS (Technology Solutions Summit) on HPE OneView and Chef (picture above :-)). Over the last six months I've worked a lot with the HPE OneView Chef Provisioning driver and have recently been contributing a number of changes that have brought the driver version up to 1.20 (as of June 2016). I've struggled a little with the documentation around Chef Provisioning, so I thought it best to write up something on Chef Provisioning and how it works with HPE OneView.

Chef Provisioning

So, quite simply, Chef Provisioning is a library specifically designed to allow Chef to automate the provisioning of server infrastructure, whether that be physical infrastructure (i.e. servers) or virtual infrastructure (from vSphere VMs to AWS compute). This library provides machine resources that describe the logical make-up of a provisioned resource, e.g. operating system, infrastructure/VM template and server configuration.
The provisioning library can then make use of drivers that extend its functionality by allowing Chef to interact with specific endpoints such as vCenter or AWS. These drivers provide specific driver options that allow fine-grained configuration of a Chef machine.

To recap:

  • A machine resource defines a machine inside Chef and can also carry additional recipes that will be run on that machine.
  • Provisioning drivers extend a machine resource so that Chef can interact with various infrastructure providers. With HPE OneView, the driver provides the capability to log into OneView, create Server Profiles from Templates and apply them to server hardware.

Example Recipe:

machine 'web01' do
  action :allocate # Action to be performed on this server

  machine_options :driver_options => { # Specific HPE OneView driver options
    :server_template => 'ChefWebServer', # Name of OneView Template
    :host_name => 'chef-http-01', # Name to be applied to Server Profile
    :server_location => 'Encl1, bay 11' # Location of Server Hardware
  }
end

More information on Chef Provisioning drivers, along with examples of using them with AWS, Vagrant, Azure, VMware and so on, can be found on the Chef GitHub site.

API Interactions

Some vendors have taken the approach of hosting automation agents (Chef clients) inside various infrastructure components such as network switches; I can only assume this was the only method available that would allow infrastructure automation tools to configure their devices. The HPE OneView Unified API, by contrast, provides a stable, versioned API that Chef and its associated provisioning driver can interact with (typically over HTTPS), without either side needing ongoing maintenance purely for compatibility reasons.
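As a rough illustration of that interaction, authenticating against the OneView REST API looks something like the following (the appliance address and credentials are placeholders, and the X-Api-Version value depends on the appliance release); the session ID returned is then passed on subsequent requests:

# Request a session token from the appliance (self-signed certificate, hence -k)
curl -k -X POST \
     -H "Content-Type: application/json" \
     -H "X-Api-Version: 200" \
     -d '{"userName":"administrator","password":"password"}' \
     https://192.168.0.91/rest/login-sessions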

The diagram below depicts recipes that make use of Chef Provisioning. These have to be run locally (using the -z flag) so that they can make use of the provisioning libraries and associated drivers installed on the Chef workstation. All of the machines provisioned are then added to the Chef Server for auditing purposes.
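For example, converging a provisioning recipe from the workstation might look like this (the recipe filename is purely illustrative):

# Run in local mode (-z); the OneView driver options and credentials come from the local configuration
chef-client -z web_servers.rb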

HPEOneView

HP OneView State-Change Message Bus (through API)

Continuing the theme of discovering and tinkering with HP OneView and its APIs…



The recent announcement of interoperability between HPE and Arista led me to investigate one of the more hidden aspects of HP OneView, albeit its most critical component: without the MessageBus, OneView simply wouldn't work. The MessageBus exposes two different types of message (under the covers, two queues) which carry different kinds of information:

State

This message queue handles information such as configuration changes (new hardware plugged in), network configuration changes and failures etc.

Metric

This message queue contains information around things such as power draw or thermal/CPU metrics and details.

In this post I'll be focussing on the state-change message bus and how Arista's EOS switch operating system deals with changes within HP OneView. For most end users it is this MessageBus that is going to be the most heavily used component, as it sees the most interaction both from end users (either GUI or API) and from hardware changes (additions, replacements and so on).

State-change_MessageBus_Chalk

Extending HP OneView through its API

I've spent around nine months getting to grips with the "nuances" of both HP OneView's design and its API (the two are intrinsically linked). During this time I've had a couple of attempts at wrapping the OneView API, with varying levels of success. So here are some quick takeaways, and ways these can be extended through the API (with OVCLI):

URI

Everything is built around the use of URIs (Uniform Resource Identifiers), which for the most part act as the unique identifier for an element inside HP OneView.

  • Every new element has a URI: server profiles, networks and so on. This also applies to "static" elements, by which I mean a server hardware type, e.g. a DL360. When a server or enclosure is added and the hardware is inventoried, a unique identity is created for it, even though this would otherwise be identical between HP OneView instances; a DL360 is a DL360 regardless of where it is located.

Same hardware added to two HP OneView instances

dan$ OVCLI 192.168.0.91 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/BF2E08CD-D213-422B-A19D-3297A7A5581E  BL460c Gen8 1
/rest/server-hardware-types/A4AB76D5-B4E3-4272-A18A-ECD24A500F2A  BL460c Gen9 1
/rest/server-hardware-types/D53C5B86-C826-4434-97C1-68DDBE4D4F49  BL660c Gen9 1
dan$ OVCLI 192.168.0.92 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/A2B0009F-83FC-42EC-A952-1B8DF0D0B46A  BL460c Gen9 1
/rest/server-hardware-types/CD79A904-483A-4BA3-8D8F-69DED515A0FE  BL460c Gen8 1
/rest/server-hardware-types/BDC49ED0-FEC2-4864-A0B8-99A99E808230  BL660c Gen9 1

  • It isn't possible to assign a URI, so creating a network (e.g. vMotion with VLAN 90) will return a randomly generated URI (3543346435, for example). It isn't then possible to create that same network elsewhere with that URI, as a _new_ URI will be generated during creation.

After using OVCLI’s copy network function (whilst trying to persist the URI)

dan$ OVCLI 192.168.0.91 SHOW NETWORKS URI
/rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29  Management
dan$ OVCLI 192.168.0.91 COPY NETWORKS /rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29 192.168.0.92
dan$ OVCLI 192.168.0.92 SHOW NETWORKS URI
/rest/ethernet-networks/7d9b8279-31ce-4da6-9ce7-260ee9c48982  Management

Federation

A look around the internet for "HP OneView Federation" will turn up a number of results with a few sentences about using the message queues to handle federated OneView appliances; beyond that, there isn't currently a master HP One"View" to rule them all. HP OneView scales quite large and doesn't require dedicated management devices (such as a Fabric Interconnect or Cisco UCS Manager); the only requirement is simple IP connectivity to the C7000 OA/VC, HP rack-mount iLOs, SAN switches, network switches or Intelligent PDU devices for monitoring and management, meaning that for most deployments federating a number of HP OneView instances won't be a requirement.

There will be the odd business or security requirement for separate instances, such as ensuring physical and logical separation between Test/Dev and Production, or a multi-tenant data centre with separate PODs. So currently your only options are to build something cool with the OneView API or to open multiple tabs in a web browser; the latter will look something like this from a memory-usage perspective (although I've seen it hover around 200MB per instance):

HPOneViewMem

 

The web UI provides an excellently detailed interface that easily puts all of the relevant information at your fingertips, but that’s only for a single OneView instance.

A one-liner to list all server profiles from two OneView instances (.91 = Test/Dev, .92 = Production):

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES FIELDS name description serialNumber status; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES FIELDS name description serialNumber status
TEST Test Machines VCG0U8N000 OK
DEV Development VCG0U8N001 OK
PROD Production VCGO9MK000 OK

Another one to pull all of the names and URIs:

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES URI
/rest/server-profiles/5fdaf0cb-b7a8-40b1-b576-8a91e5d5acbf  TEST
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0  DEV
/rest/server-profiles/d75a1d9e-8bc4-4ee3-9fa8-3246ba71f5db  PROD

With the UI, there isn't a method to move or copy elements such as networks or server profiles between multiple OneView instances. With the API this is a simple task; however, as noted in the networking example above, it is impossible to keep the identifiers (URIs) common between OneView instances. This makes it quite a challenge to move an entire server profile from one instance to the next, as moving or determining connectivity information that is unique to one OneView instance is a complicated task. It is possible, as shown in the video (here), but the connectivity information proved too much of a challenge to keep in the current version of OVCLI.

Operational Tasks

The web UI again simplifies a lot of tasks, including some impressive automation and workflows, such as automatic storage provisioning and zoning when applying a server profile to a server. It can also handle some bulk tasks through group selection in the UI. However, the current limitations around server profiles and profile templates (changes in 2.0 might fix this) make it quite an arduous task to deploy large numbers of server profiles through the UI; it's easy to do, but it's a case of a click or two per server profile. Using the API makes this very simple:

Let's find the Development server profile and create 50 copies of it:
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI | grep DEV
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 DEV


dan$ date; OVCLI 192.168.0.91 CLONE SERVER-PROFILES /rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 50; date
Tue 14 Jul 2015 17:06:40 BST
DEV_0
DEV_1


...


DEV_48
DEV_49
Tue 14 Jul 2015 17:06:52 BST

Twelve seconds and 50 development profiles are ready to go.

HPOneViewProf

HP OneView Automation through the API

I've had the opportunity to head to some exciting places over the last few weeks and months, and especially in the more recent weeks I've been heading up and down the country on a regular basis. This has given me time whilst sat on the train ("yay!") to really spend some time playing around with HP OneView. I've already had a go at wrapping some of the API in Objective-C, and decided to make something a little more useful.

I probably should have done my development work in a more recent language, something such as Python; however, I stuck with a 43-year-old (at the time of writing) programming language: C. This does give me the option of porting it to anything with a C compiler and libcurl, so the option is there 🙂 I've also made use of the Jansson library, which is fantastic for manipulating and reading JSON (http://www.digip.org/jansson/).

So, what I've ended up with is a simple tool that can plug into automation tools pretty easily (Chef, Puppet, Ansible, I'm looking at you), interact with HP OneView and do some simple reporting in JSON or tab-delimited output. It can also do some things that currently aren't available elsewhere, such as interacting with multiple HP OneView instances!

I'd full-screen this before clicking play… 🙂

This is a quick example of pulling some details from one instance (enclosure groups, server hardware type) and using that to move a server profile from one HP OneView instance to another.

HP OneView – Part 2: Server Profiles

Apologies for the delay, I was busy…

What is a Server Profile?

The “Server Profile” is the defining phrase that comes to mind when thinking about the SDDC (Software Defined Data Centre). It allows a server administrator to define a hardware identity or configuration (MAC addresses, WWNN, BIOS, Boot, RAID config etc.) in software and then apply this to a “blank” server.

This brings a number of key advantages:

  • Pre-designed hardware identities allow configuration tasks to be pre-provisioned before hardware deployment (SAN zoning, firewall ACLs etc.).
  • Designated addresses allow easier identification, e.g. aa:bb:cc:dd:01:01 = Management / aa:bb:cc:dd:02:01 = Production.
  • Server failure or replacement doesn't require upstream changes: the software identity (server profile) is re-applied and all previous configuration is still relevant.

Design

Following on from the previous HP OneView post, this is a continuation of the same simple VMware vSphere deployment. As before, a good design should exist before implementation, so again I've embedded a diagram detailing where and how these networks are going to be defined on the virtual interfaces of a blade.

VMware OneView Service Profiles

Quite simply:

  • Two virtual interfaces defined for all of the Service networks.
  • Two virtual interfaces defined for the Production networks.
  • Two HBAs on each fabric, providing resilience for Fibre Channel traffic.

As mentioned, this is a simple design for a vSphere host but allows expansion in the future with the ability to define a further virtual interface on each physical interface inside the blade.

 

HP OneView – Part 1: Enclosures and Uplinks (logical or otherwise)

The initial configuration of HP OneView is a pretty simple and intuitive process; it just isn't documented as well as it could be. I've decided to put together a few posts detailing some of the areas of configuration that could do with more detailed procedures. I'd expect that anyone who wishes to use anything documented here is already acquainted with Virtual Connect, Onboard Administrator, VMware vSphere, iLOs and so on.

Design

Before any implementation work is carried out there has to be a design in place, otherwise what, and how, are you going to configure anything? The design for these posts will be a simple VMware environment consisting of a number of networks to handle the standard traffic one would expect (Production, vMotion etc.). At this point we have defined our networks, and they have been trunked by our network admin from the switches out to the C7000s we will be connecting to.

c7000 VMware

 

From this diagram you can see that the coloured VLANs represent traffic designated as management/service traffic and the grey VLANs represent production traffic. All of these VLANs are trunked from the access switches down to the Virtual Connect modules located in the back of the C7000 enclosures.

 

Note: I've omitted SAN switches from this and just presented what would appear as zoned storage direct to the Virtual Connect. I may cover storage and Flat SAN at a later date, if there is any request to do so.


This represents the external connectivity being presented to our enclosure; it's now time for the logical configuration with HP OneView…