Automate HPE OneView/Synergy with Chef and Docker

As per a previous post, I’ve been working quite a lot with HPE OneView (which also powers HPE Synergy through its Composer) and thought I’d put a post together that summarises automating deployments through Chef. There is already plenty of information (some of it somewhat scattered) around the internet on using the HPE OneView Chef driver:

  • Build a Chef Environment and install HPE OneView Provisioning driver -> Here
  • Overview into the code architecture -> Here
  • Technical white paper -> Here

To simplify the process of using Chef with HPE OneView, I’ve put together a couple of Dockerfiles that build Docker images which reduce the setup to just a few commands: everything needed to use Chef and the OneView provisioning driver lives inside the image. A useful side effect of containerising Chef and the provisioning driver is that it becomes incredibly simple to keep a group of configurations and recipes that interact with both Synergy Composers and HPE OneView instances managing DL and BL servers.

 

The image below depicts how multiple configurations would work:

oneview-chef-docker

Essentially, using -v /local/path:/container/path allows us to have three folders, each containing its own knife.rb (the configuration for each OneView instance/Composer) and a recipe.rb that is applicable to that particular set of infrastructure. The mapping described above will always map a path on the host machine to /root/chef inside the container, allowing simple provisioning through Chef. It also makes it incredibly simple to manage a number of sets of infrastructure from a single location.
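
As a rough sketch of how that looks in practice (the image name, host paths and recipe name here are placeholders rather than anything taken from the actual Dockerfile), each set of infrastructure simply gets its own host directory mapped over /root/chef:

$ ls /opt/oneview
composer-a    composer-b    oneview-dl
$ docker run -it --rm \
    -v /opt/oneview/composer-a:/root/chef \
    oneview-chef chef-client -z /root/chef/recipe.rb

Swapping the host path for composer-b or oneview-dl points the same container at a different knife.rb and recipe.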

Also be aware that a Chef Server isn’t required; however, to be able to clean up (:destroy) machines created through Chef in a Docker container without a Chef Server, make sure that Chef records what is deployed locally.

Example for your recipe.rb:

with_chef_local_server :chef_repo_path => '/root/chef/',
  :client_name => Chef::Config[:node_name]

The Dockerfile is located here

raspberry-pi-logo

 

For the more adventurous, it is also possible to run all of this from a Docker container on a Raspberry Pi (usage is identical). The Dockerfile to build an image that will run on a Raspberry Pi is located here

Read More

Compiling packages in Docker

After my previous post yesterday, I was given a few tips (thanks to http://twitter.com/yankcrime) about much better ways of compiling software and then turning it into a package that can be used with docker build in the correct fashion (I’m sure there are still some steps that could be improved).

First attempt:

The first Dockerfile I put together consisted of installing half the compile toolchain and a handful of development libraries, then downloading the source for the application I was trying to build. It would configure the build with the required settings, spend an hour (Raspberry Pis aren’t the fastest at compiling 🙁 ) building everything and install all of the files in their expected locations. Finally it would remove all of the source code, uninstall all of the development packages and do the remaining prep/configuration work specified in the Dockerfile.

Docker Image Result:

bind9_10      latest      d1ceef183d9f     3 hours ago         930.6 MB

Second attempt:

To try and shrink the size of the Docker image, I had to abandon my desire to fully automate the build and switch to a two-container model. The first container would partially automate the application configuration and the actual compilation, then copy the built application to shared storage. The second container would start from a simple base image and install the minimum requirements (the shared/required libraries plus make). This image would have a volume mapped to the application source directory, where it would simply run $ make install; however, the need for mapped volumes means this has to be a hand-crafted image (with a commit once complete).
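
A minimal sketch of that second, hand-crafted step (the base image, paths and names below are illustrative only):

$ docker run -it --name bind-install \
    -v /mnt/nfs/bind-9.10:/usr/src/bind \
    some-base-image /bin/bash
root@container:/# apt-get update && apt-get install -y make   # plus any shared libs the binaries need
root@container:/# cd /usr/src/bind && make install
root@container:/# exit
$ docker commit bind-install bind910json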

Docker Image Result:

bind910json      latest      6eeae5a74642     20 hours ago          320 MB

Third attempt (thanks to @yankcrime and FPM):

This morning (02/08/2016) I updated my Dockerfiles and tried last night’s suggestions. The build is almost automated (I believe I can completely automate it with nested containers, but I’ve yet to try). The current environment consists of three things:

  • A Docker container that fully automates the building of bind 9.10, with a make install into /tmp, where the binaries are stripped and fpm creates a package (Dockerfile); a sketch of a typical fpm invocation follows after this list.
  • A docker run of the new image, which copies the new package to the location of my second Dockerfile (commands below).

$ export DEB_LOCATION=/mnt/docker/Dockerfiles/bind9_10_deb/
$ docker run -i --rm -v $DEB_LOCATION:/files bind9_10 /bin/cp bind9_10_0.9.10-dan_armhf.deb /files

  • A docker build of my other Dockerfile and away we go (Dockerfile).
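
For reference, the fpm step from the first bullet looks roughly like this (package name, version and paths are illustrative; fpm works out the .deb metadata from the flags):

$ fpm -s dir -t deb \
    -n bind9_10 -v 0.9.10-dan \
    -C /tmp/bind-install usr/local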

Docker Image Result:

bind9_10_deb      latest      64d7df855866     21 seconds ago      220.2 MB

 

Notes:

I did attempt to make changes to the Debian source package for bind 9.10, but it was just a minefield of random dependencies and over-the-top scripting; even adding the correct configure options resulted in something breaking the config.h for the build (HAVE_JSON was always missing).

There are plenty of further steps that can be taken to produce more efficient Docker images, including running multiple commands in a single RUN instruction to reduce the number of layers and the amount of space written per layer.
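
For example, chaining the install and cleanup into a single RUN instruction stops the deleted files from being baked into an earlier layer (a generic illustration, not taken from the Dockerfiles above):

RUN apt-get update && \
    apt-get install -y --no-install-recommends bind9 && \
    rm -rf /var/lib/apt/lists/*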

 

Docker Image sizing information:

http://developers.redhat.com/blog/2016/03/09/more-about-docker-images-size/

Docker Image Reduction Techniques and Tools

 

 

Read More

Raspberry Pi with Docker

I’ve put off purchasing Raspberry Pis for a few years as I was pretty convinced that the novelty would wear off very quickly and they would be consigned to the drawer of random cables and bizarre IT equipment I’ve collected over the years (parallel cables and Zip drives o_O).

The latest iteration of the Pi is the v3, which comes with 1GB of RAM and four ARM cores, and it turns out that whilst it’s not exactly a computing powerhouse, it can still handle enough load to do a few useful things.

Raspberry Pis

I’ve currently got a pair of them in a Docker swarm cluster (Docker 1.12-rc3 for armv7l, available here), which has given me another opportunity to really play with Docker and try to replace some of my Linux virtual machines with “lightweight” Docker containers.
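
For reference, swarm mode in 1.12 only needs a couple of commands to get a two-node cluster going (the addresses and join token here are placeholders):

$ docker swarm init --advertise-addr 192.168.0.50              # on the first Pi
$ docker swarm join --token SWMTKN-1-xxxx 192.168.0.50:2377    # on the second Pi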

First up: to ease working between the two Pis I created an NFS share for all my Docker files etc. I then decided that having my own Docker registry to share images between hosts would be useful, so on my first node I did a docker pull for the Docker registry container and attempted to start it. This is where I realised that the container would just continuously restart; a quick peer into the container itself revealed that it has binaries compiled for x86_64, not armv7l, so that clearly wasn’t going to work here. That chalks up failure number one for a pure Raspberry Pi Docker cluster, as my registry had to be run from a CoreOS virtual machine.
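
On the CoreOS VM the registry itself is just the stock container (a minimal sketch; storage volumes and any TLS configuration are omitted):

$ docker run -d -p 5000:5000 --restart=always --name registry registry:2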

Once that was up and running, my first attempt to push an image from the Pis resulted in the following error message:

https://X.X.X.X:5000/v1/_ping: http: server gave HTTP response to HTTPS client

After some googling, it appears that this issue is related to the registry being insecure; it can be resolved properly by going down the certificate-generation route. Alternatively, to accept the insecurity, the Docker daemon needs starting with parameters that allow the use of an insecure registry.

Note: The documentation online tells you to update the Docker configuration files and restart the Docker daemon; however, there appears to be a bug in the Raspbian/Debian implementation. For me, editing /etc/default/docker and adding DOCKER_OPTS='--insecure-registry X.X.X.X:5000' had no effect. This can be verified by looking at the output of $ docker info, as the insecure registries are listed there.

To fix this I had to edit the systemd unit file so that dockerd would start with the correct parameters.

$ sudo vim /lib/systemd/system/docker.service

...

ExecStart=/usr/bin/dockerd --insecure-registry X.X.X.X:5000 -H fd://
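
Then reload systemd and restart the service so that the new flag is picked up; the registry should now show up under the insecure registries section of docker info:

$ sudo systemctl daemon-reload
$ sudo systemctl restart docker
$ docker info | grep -A1 "Insecure Registries"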

After restarting the daemon, I can successfully push/pull from the registry.

Finally: the first service I attempted to move to a lightweight container resulted in a day and a half of fiddling (clearly a lot of learning needed), although to clarify, this was because I wanted some capabilities that weren’t compiled into the existing packages.

Moving bind into a container “in theory” is relatively simple:

  • Pull a base container
  • Pull the bind package and install it (apt-get install -y bind9)
  • Map a volume containing the configuration and zone files
  • Start the bind process and point it at the mapped configuration files (a sketch of the resulting run command is below)
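
In command form that works out as something like the following (the image name, host path and options are illustrative only):

$ docker run -d --name bind \
    -v /mnt/docker/bind:/etc/bind \
    -p 53:53/udp -p 53:53/tcp \
    bind9-image named -g -c /etc/bind/named.conf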

All of this can be automated through the use of a Dockerfile like this one. However, after further investigation and a desire to monitor everything as much as possible, it became apparent that the statistics channel in bind 9.9 wouldn’t be sufficient and I’d need 9.10 (mainly for JSON output). After creating a new Dockerfile that adds the unstable Debian repos and pulls the bind 9.10 packages, it turned out that Debian compiles bind without libjson support 🙁 meaning that JSON output was disabled. This was the point where Docker and I started to fall out, as a combination of Docker’s layered filesystem and docker build’s inability to use --privileged or the -v (volume) parameter got in the way. This resulted in me automating a Docker container build that did the following:

  • Pull a base container
  • Pull a myriad of dev libraries, compilers, make toolchains etc.
  • Download the bind 9.10 tarball and untar it
  • Change the WORKDIR and run ./configure with all of the correct flags (sketched below)
  • make install
  • Delete the source directory and tarball
  • Remove all of the development packages and compilers
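
The middle of that sequence boils down to something like the following (version number trimmed and the configure flags reduced to the one that matters here; as far as I recall, --with-libjson is what enables the JSON statistics output):

$ tar xzf bind-9.10.x.tar.gz && cd bind-9.10.x
$ ./configure --prefix=/usr --with-libjson
$ make && make install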

This resulted in a MASSIVE 800MB Docker image just to run bind 🙁 In order to shrink the image I attempted a number of alternative methods, such as using an NFS mount inside the container where all of the source code would reside for compiling, as it wouldn’t be needed once make install had run. However, as mentioned, NFS mounts (which require --privileged to mount) aren’t allowed with docker build, and neither is passing one through with a -v flag, as that doesn’t exist for builds. This left me with the only option of manually creating a Docker container for bind: start a base container with a volume passed through containing the already “made” binaries along with the required shared libraries, and run a make install. That container could then be exited and committed as an image for future use, and was significantly smaller than the previous image.

dockerimages

There are a number of issues on the Docker GitHub page around docker build not supporting volume mapping or privileged features. However, the general response is that the feature requests which would have assisted my deployment are edge cases and won’t be part of Docker any time soon.

Still, I got there in the end, and with a third Pi in the post I’m looking forward to moving some more systems onto my Pi cluster and coding some cool monitoring solutions 😀

Read More

Chef and HPE OneView

HPETSS

We’re currently about 50% of the way through 2016 and I’ve been very privileged to spend a lot of the year working with Chef, not just their software but also presenting with them throughout Europe. All of that brings us up to the current point: last week I was presenting at HPE TSS (Technology Solutions Summit) on HPE OneView and Chef (picture above :-)). In the last six months I’ve worked a lot with the HPE OneView Chef Provisioning driver and recently been contributing a number of changes that have brought the driver version up to 1.20 (as of June 2016). I’ve struggled a little with the documentation around Chef Provisioning, so I thought it best to write something up about Chef Provisioning and how it works with HPE OneView.

Chef Provisioning

Quite simply, Chef Provisioning is a library specifically designed to allow Chef to automate the provisioning of server infrastructure, whether that be physical infrastructure (i.e. servers) or virtual infrastructure (from vSphere VMs to AWS compute). The library provides machine resources that describe the logical make-up of a provisioned resource, e.g. operating system, infrastructure/VM template, server configuration.
The Provisioning library can then make use of drivers that extend this functionality by allowing Chef to interact with specific endpoints such as vCenter or AWS. These drivers provide specific driver options that allow fine-grained configuration of a Chef machine.

To recap:
A machine resource defines a provisioned machine inside Chef and can also carry additional recipes that will be run on that machine.
Provisioning drivers extend a machine resource so that Chef can interact with various infrastructure providers. With HPE OneView, the driver provides the capability to log into OneView, create Server Profiles from Templates and apply them to server hardware.

Example Recipe:

machine 'web01' do
  action :allocate                        # Action to be performed on this server

  machine_options :driver_options => {    # Specific HPE OneView driver options
    :server_template => 'ChefWebServer',  # Name of OneView Template
    :host_name => 'chef-http-01',         # Name to be applied to Server Profile
    :server_location => 'Encl1, bay 11'   # Location of Server Hardware
  }
end

More information around the Chef Provisioning driver along with examples of using it with things like AWS, Vagrant, Azure, VMware etc. can be found on their GitHub site.

API Interactions

Some vendors have taken the approach of hosting automation agents (Chef clients) inside various infrastructure components such as network switches. I can only assume that this was the only method available that would allow infrastructure automation tools to configure their devices. The HPE OneView Unified API, by contrast, provides a stable, versioned API that Chef and its associated provisioning driver can interact with (typically over HTTPS) without either side having to chase compatibility.
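
To give a feel for what that interaction looks like underneath, authenticating against OneView is a single HTTPS call that returns a session token, which is then passed on subsequent requests (the hostname and credentials below are placeholders; the X-API-Version value of 120 corresponds to OneView 1.20):

$ curl -k -H "Content-Type: application/json" -H "X-API-Version: 120" \
    -X POST https://oneview.example.com/rest/login-sessions \
    -d '{"userName":"Administrator","password":"secret"}'
{"sessionID":"LTIxNj..."}
$ curl -k -H "X-API-Version: 120" -H "Auth: LTIxNj..." \
    https://oneview.example.com/rest/server-profiles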

The diagram below depicts recipes that make use of Chef Provisioning. These have to be run locally (using the -z flag) so that they can make use of the provisioning libraries and associated drivers installed on a Chef workstation. All of the machines provisioned are then added to the Chef Server for auditing purposes etc.

HPEOneView

Read More

Monitoring HP OneView with InfluxDB and Grafana

My previous post was a theoretical piece on how Arista may or may not (I’ve had no confirmation) be interacting with HP OneView in order to automate infrastructure provisioning in the data centre. That article dealt with one of the message buses that form HP OneView, in particular the State-Change Message Bus (SCMB), which handles tasks and changes to hardware.

This post will look at the other message bus that exists inside HP OneView: the Metric Streaming Message Bus (MSMB), which handles all of the metrics (hardware info). As of HP OneView 1.20 the following information is available:

  • Enclosure (RatedCapacity / DeratedCapacity / Temp / AvgPower / PowerCap / Peak Power)
  • Power Device (AvgPower / PeakPower)
  • Server Hardware (CpuUtilisation / CpuAvgFreq / Temp / AvgPower / PowerCap / Peak Power)

These statistics can be captured at a sample rate (every 5 minutes or more) and then posted to the message bus at a given frequency (every 5 minutes or more).

One thing that surprised me was that by default the Metric streaming bus isn’t configured to monitor anything, which means that the web based UI must be getting its statistics by polling or internal SNMP.

 

So, after some quick changes to my PoC tool, it can now monitor the Metric Streaming Bus with a simple line:
OVCLI 192.168.0.91 MESSAGEBUS LISTEN METRIC msmb.#

The plan

The idea I had was to take the raw output from HP OneView and find a way of visualising it in the large interactive dashboards that operations teams are currently hugely fond of. My plan was to make use of Docker for ease of deployment, with InfluxDB and Grafana to keep the configuration simple.
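
Deployment-wise that really is just a couple of stock containers on their default ports (a minimal sketch; data volumes and the configuration of either tool are left out):

$ docker run -d --name influxdb -p 8086:8086 influxdb
$ docker run -d --name grafana -p 3000:3000 grafana/grafana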

This is a sample message captured (click to make larger): OVCLI_MSMB

 

This is where I wanted to get to (click to make larger): GRAFANA_MSMB

 

(more…)

Read More

HP OneView State-Change Message Bus (through API)

Continuing the theme of discovering and tinkering with HP OneView and its APIs…


The recent announcement of interoperability between HPE and Arista led me to investigate one of the more hidden aspects of HP OneView, albeit its most critical component: without the MessageBus, OneView simply wouldn’t work. The MessageBus handles two different types of message (two queues under the covers), which carry different types of information:

State

This message queue handles information such as configuration changes (new hardware plugged in), network configuration changes and failures etc.

Metric

This message queue contains information around things such as power draw or thermal/CPU metrics and details.

In this post I’ll be focussing on the State-Change Message Bus and how Arista’s EOS operating system on switches deals with changes within HP OneView. For most end users it is this MessageBus that will be the most heavily used component, seeing the most interaction both from end users (either GUI or API) and from the hardware changing (new/replaced etc.).

State-change_MessageBus_Chalk

(more…)

Read More

Extending HP OneView through its API

I’ve spent around nine months getting to grips with the “nuances” of both HP OneView’s design and its API (the two are intrinsically linked). During this time I’ve had a couple of attempts at wrapping the OneView API, with a few different levels of success. So here are some quick takeaways, and ways these can be extended through the API (with OVCLI):

URI

Everything is built around the use of URIs (Uniform Resource Identifiers), which for the most part act as unique identifiers for elements inside HP OneView.

  • Every new element has a URI, such as server profiles or networks. This also applies to “static elements”, by which I mean a server hardware identity, e.g. a DL360. When a server or enclosure is added and an inventory of the hardware is taken, a unique identity is created for it, even though it would be identical between HP OneView instances; a DL360 is a DL360 regardless of where it is located.

Same hardware added to two HP OneView instances

dan$ OVCLI 192.168.0.91 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/BF2E08CD-D213-422B-A19D-3297A7A5581E  BL460c Gen8 1
/rest/server-hardware-types/A4AB76D5-B4E3-4272-A18A-ECD24A500F2A  BL460c Gen9 1
/rest/server-hardware-types/D53C5B86-C826-4434-97C1-68DDBE4D4F49  BL660c Gen9 1
dan$ OVCLI 192.168.0.92 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/A2B0009F-83FC-42EC-A952-1B8DF0D0B46A  BL460c Gen9 1
/rest/server-hardware-types/CD79A904-483A-4BA3-8D8F-69DED515A0FE  BL460c Gen8 1
/rest/server-hardware-types/BDC49ED0-FEC2-4864-A0B8-99A99E808230  BL660c Gen9 1

  • It isn’t possible to assign a URI, so creating a network (e.g. vMotion with VLAN 90) will return a random URI of 3543346435 (for example). It isn’t then possible to create that same network elsewhere with that URI, as a _new_ URI will be generated during creation.

After using OVCLI’s copy network function (whilst trying to persist the URI)

dan$ OVCLI 192.168.0.91 SHOW NETWORKS URI
/rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29  Management
dan$ OVCLI 192.168.0.91 COPY NETWORKS /rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29 192.168.0.92
dan$ OVCLI 192.168.0.92 SHOW NETWORKS URI
/rest/ethernet-networks/7d9b8279-31ce-4da6-9ce7-260ee9c48982  Management

Federation

A look around the internet for “HP OneView Federation” will turn up a number of results containing a few sentences about using the message queues to handle federated OneView appliances; other than that, there isn’t currently a master HP One”View” to rule them all. HP OneView scales quite large and doesn’t require dedicated management devices (such as a Fabric Interconnect or Cisco UCS Manager); the only requirement is simple IP connectivity to the C7000 OA/VC, HP rack-mount iLOs, SAN switches, network switches or Intelligent PDU devices for monitoring and management, meaning that for most deployments federating a number of HP OneView instances won’t be a requirement.

There will be the odd business or security requirement for separate instances, such as a need to ensure physical and logical separation between test/dev and production, or a multi-tenant data centre with separate PODs. So currently your only options are to build something cool with the OneView API or to open multiple tabs in a web browser; the latter will look something like this from a memory-usage perspective (although I’ve seen it hover around 200MB per instance):

HPOneViewMem

 

The web UI provides an excellently detailed interface that easily puts all of the relevant information at your fingertips, but that’s only for a single OneView instance.

A one-liner to list all server profiles from two OneView instances (.91 = test/dev, .92 = production)

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES FIELDS name description serialNumber status; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES FIELDS name description serialNumber status
TEST Test Machines VCG0U8N000 OK
DEV Development VCG0U8N001 OK
PROD Production VCGO9MK000 OK

Another one to pull all of the names and URIs

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES URI
/rest/server-profiles/5fdaf0cb-b7a8-40b1-b576-8a91e5d5acbf  TEST
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0  DEV
/rest/server-profiles/d75a1d9e-8bc4-4ee3-9fa8-3246ba71f5db  PROD

With the UI, there isn’t a method to move or copy elements such as networks or server profiles between multiple OneView instances. With the API this is a simple task; however, as noted in the networking example above, it is impossible to keep the identifiers (URIs) common between OneView instances. This makes it quite a challenge to move an entire server profile from one instance to the next, as moving or determining connectivity information that is unique to one OneView instance is a complicated task. It is possible, as shown in the video (here), but the connectivity information proved too much of a challenge to keep in the current version of OVCLI.

Operational Tasks

The web UI again simplifies a lot of tasks, including providing some incredible automation/workflows such as automating storage provisioning and zoning when applying a server profile to a server. It can also handle some bulk tasks through group selection in the UI. However, the current limitations of server profiles and profile templates (changes in 2.0 might fix this) make it quite an arduous task to deploy large numbers of server profiles through the UI; it’s easy to do, but it’s a case of a click or two per server profile. Using the API makes this very simple:

Let’s find the Development Server Profile and create 50 of them.
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI | grep DEV
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 DEV


dan$ date; OVCLI 192.168.0.91 CLONE SERVER-PROFILES /rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 50; date
Tue 14 Jul 2015 17:06:40 BST
DEV_0
DEV_1


...


DEV_48
DEV_49
Tue 14 Jul 2015 17:06:52 BST

Twelve seconds and 50 development profiles are ready to go.

HPOneViewProf

Read More

HP OneView Automation through the API

I’ve had the opportunity to head to some exciting places over the last few weeks and months, and especially recently I’ve been heading up and down the country on a regular basis. This has given me time whilst sat on the train (“yay!”) to really play around with HP OneView. I’ve already had a go at wrapping some of the API in Objective-C, and decided to make something a little bit more useful.

I probably should have done my development work in a slightly more recent language, something such as Python; however, I stuck with a 43-year-old (at the time of writing) programming language: C. This does give me the option of porting it to anything with a C compiler and libcurl, so the option is there 🙂 I’ve also made use of the jansson library, which is fantastic for manipulating and reading JSON (http://www.digip.org/jansson/).

So, what I’ve ended up with is a simple tool that can plug into automation tools pretty easily (Chef, Puppet, Ansible, I’m looking at you), interact with HP OneView and do some simple reporting in JSON or tab-delimited output. It can also do some things that currently aren’t available elsewhere, such as interacting with multiple HP OneView instances!

I’d full screen this before clicking play.. 🙂

This is a quick example of pulling some details from one instance (Enclosure-Groups, Server Hardware Type) and using that to move a server profile from one HP OneView to another..

(more…)

Read More

Developing on Linux (Arch) .. through OSX

As part of some development work on auto-discovery within a VMware environment (talked about here and here), I got quite fed up with having to move between Xcode for some code and vim/gcc in a Linux VM or vSphere console. To try and streamline the work I decided to have a look at the IDEs available for Linux, excluding Eclipse as it’s just too massive for the simple tasks I had in mind.

223_linus_torvalds

So I decided to have a look at Visual Studio Code and CodeBlocks as possible IDE solutions.

At which point the predicted vim abuse started on irc..

Names obscured and some text shortened.. (you get the idea)

 


09:13 <@A> a nice IDE for linux dev?
09:13 <@A> what's wrong with vim?
09:13 < B> lol
09:14 < dan> beard and sandals has arrived
09:14 <@A> newblets
09:14 < B> i can barely edit and save a text file in vim
[... first attempt with Visual Studio Code ...]
09:20 < dan> [dan@development ~]$ ./Code
09:20 < dan> ./Code: /usr/lib64/libstdc++.so.6: version `GLIBCXX_3.4.14' not found (required by ./Code)
09:20 < dan> f-ing linux
09:21 < dan> clearly ready for the desktop..
09:34 < C> :]
09:42 <@A> atom is an awful editor
09:42 <@A> and by awful i mean horrendously slow and bloated
09:44 < dan> :-)
09:45 <@A> https://discuss.atom.io/t/why-is-atom-so-slow/11376/36
09:48 <@A> spoiler alert: it's not gotten any better

After a frustrating further 20 minutes I managed to get things working.. so here are my notes:

(more…)

Read More

HP OneView – Part 3: VMware vCenter Integration

This has been a learning experience for me, as I’ve not had the opportunity to work with this tool before; however, I’ve been very curious about how it brings together the single-pane-of-glass mentality for HP kit and VMware. To build this quick proof of concept I’ve used HP OneView 1.20 (configured as previously described in Parts 1 & 2) and have also deployed a new VCSA 5.5 just for this particular test.

OneView120_Dashboard

I was under the impression that a Windows box would only be required for the installation of the plugin, due to the installer being a Windows executable; however, it appears that HP OneView for VMware vCenter actually consists of a number of components and services that require a Windows box to run on. The installer and some further information can be found here: http://www8.hp.com/us/en/products/server-software/product-detail.html?oid=4152978

If the Windows machine doesn’t have enough memory, the installer will fail at the end as it attempts to bring up the VMware vCenter services.

 

(more…)

Read More