The three-year mission aboard the Enterprise is over

As with all good things… they must come to an end, and sadly my time at HP Enterprise (originally HP) finishes in July 2017.

It has been an amazing adventure, and one that has both shaped my career and had a lasting effect on it. Pre-HPE I typically had desk-based roles and would be issued work requests, such as “configure this VM” or “design and build this datacenter”, and meetings would typically feel like time I could have spent doing more important things.

Fast forward to starting at HPE: my first time on the opposite side of the table as a ‘vendor’ was a rather nerve-wracking experience (along with being stuck in a suit). Having to learn to deliver customer presentations along with the corporate messaging felt bizarre, but once I got my head around the admittedly massive portfolio it became enjoyable, especially when the solution came together between vendor and customer.

Occasionally you meet people who, for their own amusement, will try to take you to task by querying every inane detail of a server or network switch, but you learn pretty quickly how to cope (or your skin toughens up).

I also had my first real exposure to presenting on stage, which was (and still is) an utterly nerve-wracking experience. The first time, at HP ETSS (Enterprise Technology Solutions Summit), was awful: I can still remember standing on stage, mumbling and shaking away, hoping it would all end ASAP. Since then I’ve presented all over the place, including an exhausting solid week of full-day presentations in Johannesburg, which again was an amazing experience. I still have a lot to learn but can just about put together a presentation that I’d rate as “acceptable” 🙂

Also whilst at HPE I was exposed to Open Source and started developing again, which is quite amusing given that I originally did a degree in Software Engineering and then promptly moved into infrastructure and hardware management/configuration. I’ve been very fortunate to get involved with and meet some amazing developers in both Docker and Chef over the last couple of years, along with getting to contribute to various projects.

I’ve nothing but great memories from the last three years (apart from people trying, and failing, to force me to use Salesforce). Also, here’s to the DCA team, who helped and taught me so much during my time at HPE (gone but not forgotten).

So here’s to what’s next!

Deploying Docker DataCenter

This is a simple guide to deploying some SLES hosts, configuring them to run Docker Engines, and then deploying Docker Datacenter on the platform. It’s also possible to deploy the components using Docker for Mac/Windows as detailed here.

Disclaimer: This is in “no way” an official or supported procedure for deploying Docker CaaS.

Overview

I’ve had Docker Datacenter in about 75% of my emails over the last few months, and getting a private (lab) deployment done has certainly been on my to-do list. Given the HPE and SuSE announcement in September, I decided to see how easy it would be to deploy it on a few SLES hosts; it turns out it’s surprisingly simple (although I was half expecting something like the OpenStack deployments of a few years ago 😕).

Also, if you’re looking to *just* deploy Docker Datacenter, then ignore the host configuration steps.

[Image: hpe_docker_suse]

Requirements

  1. 1-2 large cups of tea (depends on any typing mistakes)
  2. 2 or more SLES hosts (virtual or physical makes no difference; 1 vCPU, 2GB RAM, 16GB disk); mine were all built from SLE-12-SP1-Server-DVD-x86_64-GM-DVD1
  3. A SuSE product registration; the 60-day free one is fine (it can take 24 hours for the email to arrive) *OPTIONAL*
  4. A Docker Datacenter license; the 60-day trial is the minimum *REQUIRED*
  5. An internet connection? (it’s 2016 … )

Configuring your hosts

SuSE Linux Enterprise Server (SLES) is a pretty locked-down beast and will require a few modifications before it can run as a Docker host.

SLES 12 SP1 Installation

The installation was done from the CD images, although if you want to automate the procedure it’s a case of configuring AutoYast to deploy the correct SuSE patterns. As you step through the installer, there are a couple of screens to be aware of:

  • Product Registration: If you have the codes then add them in here; it simplifies installing Docker later. ALSO, this is where the Network Settings are hidden 😈 So either set your static networking here, or alternatively it can be done through yast on the CLI (details here). Ensure on the routing page that IPv4 Forwarding is enabled for Docker networking.
  • Installation Settings: The defaults can leave you with a system you can’t connect to.

[Image: sles12_install]

Under the Software headline, deselect the patterns GNOME Desktop Environment and X Window System, as we won’t be needing a GUI or full desktop environment. Also, under the Firewall and SSH headline, the SSH port is blocked by default, which means you won’t be able to SSH into your server once the Operating System has been installed, so click (open).

So after my installation I ended up with two hosts (that can happily ping and resolve one another etc.):

ddc01  192.168.0.140 /  255.255.255.0

ddc02  192.168.0.141 /  255.255.255.0

The next step is to open the myriad ports required for UCP and DTR. This is quite simple and consists of opening the file /etc/sysconfig/SuSEfirewall2 and modifying it to look like the following:

FW_SERVICES_EXT_TCP="443 2376 2377 12376 12379:12386"

Once this change has been completed, reload the firewall rules by running the command SuSEfirewall2.
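If you’d rather script the change than edit the file by hand, something like the following achieves the same result (a minimal sketch, assuming the FW_SERVICES_EXT_TCP line is present and uncommented, as it is on a default SLES 12 SP1 install):

# Rewrite the external TCP services list with the UCP/DTR ports, then re-read the rules (run as root)
sed -i 's/^FW_SERVICES_EXT_TCP=.*/FW_SERVICES_EXT_TCP="443 2376 2377 12376 12379:12386"/' /etc/sysconfig/SuSEfirewall2
SuSEfirewall2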

Installing Docker with a Product registration

Follow the instructions here; there’s no point copying them twice.

Installing Docker without a Product registration

I’m still waiting for my 60-day registration to arrive from SuSE, so in the meantime I decided to start adding other repositories to deploy applications. NOTE: As this isn’t coming from an Enterprise repository, it certainly won’t be supported.

The quickest way of getting the latest Docker onto a SLES system is to add the latest openSUSE Tumbleweed repository; the following two lines add the repository and install Docker:

zypper ar -f http://download.opensuse.org/tumbleweed/repo/oss/ oss
zypper in docker
...
docker -v
...
Docker version 1.12.1, build 8eab29e


To recap: we have a number of hosts configured with network connectivity and the firewall ports open, and Docker is installed and ready to deploy containers.

Deploying Docker Datacenter

Deploying the Universal Control Plane (UCP)

On our first node, ddc01, we run the UCP installer, which automates pulling the additional containers that make up UCP.

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp install -i --host-address 192.168.0.140

Errors to watch for:

FATA[0033] The following required ports are blocked on your host: 12376.  Check your firewall settings. 

Make sure that you’ve edited the firewall configuration and reloaded the rules.

WARNING: IPv4 forwarding is disabled. Networking will not work.

Enable IPv4 forwarding in the yast routing configuration.
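If you’d rather not go back into yast, IPv4 forwarding can also be enabled from the shell; the sketch below uses the standard Linux sysctl key (on SLES the yast routing setting remains the supported route):

# Enable IPv4 forwarding now and persist it across reboots
sysctl -w net.ipv4.ip_forward=1
echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf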

Once the installation starts it will ask you for a password for the admin user; for this example the password I set was ‘password’, however I highly recommend that you choose something a little more secure. The installer will also give you the option to set additional SANs on the TLS certificates for any additional domain names.

The installation will complete, and in my environment I’ll be able to connect to my UCP by entering the address of ddc01 in a web browser.

[Image: ucp]

Adding nodes to the UCP

After logging into the UCP for the first time, the dashboard will display everything that the Docker cluster is currently managing. A number of containers will be displayed, as they make up the UCP (UCP web server, etcd, swarm manager, swarm client etc.). Adding additional nodes is as simple as adding Docker workers to a swarm cluster, possibly simpler, as the UCP provides you with a command that can be copied and pasted onto all further nodes to add them to the cluster.

Note: The UCP needs a license adding; otherwise additional nodes will fail during the add process.
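For reference, the join on my second node looked roughly like the sketch below. Treat it as illustrative: the UCP UI generates the exact command for your cluster, and run interactively it will prompt for the admin credentials and certificate fingerprint.

docker run --rm -it --name ucp \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker/ucp join -i --host-address 192.168.0.141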

[Image: add_nodes]

Deploying the Docker Trusted Registry (DTR)

On ddc02, install the Docker Trusted Registry, as it’s neither supported nor recommended to run the UCP and the DTR on the same nodes.

From ddc02 we download the UCP CA certificate.

curl -k https://192.168.0.140/ca > ucp-ca.pem

To install the DTR, run the following docker command; it will pull down the containers and add the registry to the control plane.

docker run -it --rm docker/dtr install --ucp-url http://192.168.0.140 \
  --ucp-node ddc02 \
  --dtr-external-url 192.168.0.141 \
  --ucp-username admin \
  --ucp-password password \
  --ucp-ca "$(cat ucp-ca.pem)"

[Image: docker_trusted_reg]

Conclusion

With all this completed, we have the following:

  • A number of configured hosts with the correct firewall rules.
  • Docker Engine, which starts and stops the containers.
  • Docker Swarm, which clusters together the Docker Engines (it’s worth noting that this is not the built-in swarm mode from 1.12; it still uses the swarm container to link the engines together).
  • Docker DTR, the platform for hosting the Docker images to be deployed on the engines.
  • Docker UCP, as the front end to the entire platform.

[Image: docker_platform]

I was pleasantly surprised by the simplicity of deploying the components that make up Docker Datacenter, although it does look a little as if the various components are running behind the new functionality that has been added to the Docker Engine: UCP doesn’t use the swarm mode that is part of 1.12, and it wastes a little resource deploying additional containers to provide the swarm clustering.

It would be nice in the future to have a more feature-rich UI that provides workflow capabilities for composing applications; currently it’s based upon hand-crafting compose files in YAML, which you can copy and paste into the UCP, or uploading your existing compose files. However, the UCP provides an excellent overview of your deployed applications and the current status of containers (logs and statistics).

Automate HPE OneView/Synergy with Chef and Docker

As per a previous post, I’ve been working quite a lot with HPE OneView (which powers HPE Synergy through its Composer) and thought I’d put a post together that summarises automating deployments through Chef. There is already a ton of information (some of it somewhat scattered) around the internet for using the HPE OneView Chef driver:

  • Build a Chef Environment and install HPE OneView Provisioning driver -> Here
  • Overview into the code architecture -> Here
  • Technical white paper -> Here

To simplify the process of using Chef with HPE OneView, I’ve put together a couple of Dockerfiles that build Docker images, so that getting everything required to use the Chef/OneView provisioning driver in place only takes a few commands. A side effect of containerising Chef and the provisioning driver is that it becomes incredibly simple to keep a group of configurations and recipes that interact with both Synergy Composers and HPE OneView instances managing DL and BL servers.


The image below depicts how having multiple configurations would work:

[Image: oneview-chef-docker]

Essentially, using -v /local/path:/container/path allows us to have three folders, each containing its own knife.rb (the configuration for each OneView instance) and a recipe.rb applicable to that particular set of infrastructure. The mapping described above will always map a path on the host machine to /root/chef inside the container, allowing simple provisioning through Chef. It also makes it incredibly simple to manage a number of sets of infrastructure from a single location.
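As a rough sketch of that usage (the image tag oneview-chef and the folder name datacenter-a are placeholders of my own, not names taken from the Dockerfile):

# Map one infrastructure folder (its knife.rb and recipe.rb) into the container
# and converge the recipe with chef-client in local mode
docker run -it --rm \
  -v "$(pwd)/datacenter-a":/root/chef \
  oneview-chef \
  chef-client -z /root/chef/recipe.rb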

Also be aware that a Chef Server isn’t required, but in order to be able to clean up (:destroy) machines created through Chef in a Docker container WITHOUT a Chef Server, ensure that Chef records what is deployed locally.

Example for your recipe.rb:

with_chef_local_server :chef_repo_path => '/root/chef/',
  :client_name => Chef::Config[:node_name]

The Dockerfile is located here.

[Image: raspberry-pi-logo]


For the more adventurous, it is also possible to run all of this from a Docker container on a Raspberry Pi (the same usage applies). The Dockerfile to build a Docker image that will run on a Raspberry Pi is located here.

Chef and HPE OneView

[Image: HPETSS]

We’re currently about 50% of the way through 2016, and I’ve been very privileged to spend a lot of the year working with Chef: not just their software, but also presenting with them throughout Europe. All of that brings us up to the current point, where last week I was presenting at HPE TSS (Technology Solutions Summit) around HPE OneView and Chef (picture above :-)). In the last six months I’ve worked a lot with the HPE OneView Chef Provisioning driver and have recently been contributing a number of changes that have brought the driver version up to 1.20 (as of June 2016). I’ve struggled a little with the documentation around Chef Provisioning, so I thought it best to write up something about Chef Provisioning and how it works with HPE OneView.

Chef Provisioning

Quite simply, Chef Provisioning is a library specifically designed to allow Chef to automate the provisioning of server infrastructure (whether that be physical infrastructure, i.e. servers, or virtual infrastructure, from vSphere VMs to AWS compute). The library provides machine resources that describe the logical make-up of a provisioned resource, e.g. Operating System, Infrastructure/VM Template, Server Configuration.
The Provisioning library can then make use of drivers that extend this functionality by allowing Chef to interact with specific endpoints such as vCenter or AWS. These drivers provide specific driver options that allow fine-grained configuration of a Chef machine.

To recap:

  • Machine resources define a machine inside Chef and can also carry additional recipes that will be run on those machines.
  • Provisioning drivers extend a machine resource so that Chef can interact with various infrastructure providers. With HPE OneView, the driver provides the capability to log into OneView, create Server Profiles from Templates and apply them to server hardware.

Example Recipe:

machine 'web01' do
  action :allocate                       # Action to be performed on this server

  machine_options :driver_options => {   # Specific HPE OneView driver options
    :server_template => 'ChefWebServer', # Name of OneView Template
    :host_name => 'chef-http-01',        # Name to be applied to Server Profile
    :server_location => 'Encl1, bay 11'  # Location of Server Hardware
  }
end

More information around the Chef Provisioning driver along with examples of using it with things like AWS, Vagrant, Azure, VMware etc. can be found on their GitHub site.

API Interactions

Some vendors have taken the approach of hosting automation agents (Chef clients) inside various infrastructure components such as network switches. I can only assume this was the only method available that would allow infrastructure automation tools to configure their devices. The HPE OneView Unified API, on the other hand, provides a stable, versioned API that Chef and its associated provisioning driver can interact with (typically over HTTPS), without embedded agents that either side would need to maintain for compatibility reasons.
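As a quick illustration of what interacting with that API over HTTPS looks like, the sketch below requests a session from the /rest/login-sessions endpoint; the appliance address, credentials and X-API-Version value are placeholders from my lab rather than anything mandated by the driver:

# Log in to the OneView REST API; the JSON response contains a sessionID
# which is then passed in the Auth header on subsequent requests
curl -k -X POST https://192.168.0.91/rest/login-sessions \
  -H "Content-Type: application/json" \
  -H "X-API-Version: 120" \
  -d '{"userName":"Administrator","password":"password"}'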

The diagram below depicts recipes that make use of Chef Provisioning. These have to be run locally (using the -z flag) so that they can make use of the provisioning libraries and associated drivers installed on a Chef workstation. All of the machines provisioned will then be added to the Chef Server for auditing purposes etc.
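In practice that just means converging the provisioning recipe in local mode from the Chef workstation, for example (the recipe filename here is illustrative):

# Run the provisioning recipe in chef-client local mode from the workstation
chef-client -z oneview_machines.rb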

[Image: HPEOneView]

HP OneView State-Change Message Bus (through API)

Continuing the theme of discovering and tinkering with HP OneView and its APIs…


The recent announcement of interoperability between HPE and Arista led me to investigate one of the more hidden aspects of HP OneView, albeit its most critical component: without the MessageBus, OneView simply wouldn’t work. The MessageBus handles two different types of message (under the covers, two queues), which carry different kinds of information:

State

This message queue handles information such as configuration changes (new hardware plugged in), network configuration changes and failures etc.

Metric

This message queue contains information around things such as power draw or thermal/CPU metrics and details.

In this post I’ll be focussing on the state-change message bus and how Arista’s EOS operating system on switches deals with changes within HP OneView. For most end-users it is this MessageBus that will be the most heavily used component, as it sees the most interaction both from end-users (either GUI or API) and from the hardware changing (new hardware, replacements etc.).

[Image: State-change_MessageBus_Chalk]

Extending HP OneView through its API

I’ve spent around nine months getting to grips with the “nuances” of both HP OneView’s design and its API (the two are intrinsically linked). During this time I’ve had a couple of attempts at wrapping the OneView API, with varying levels of success. So, some quick takeaways, and ways these can be extended through the API (with OVCLI):

URI

Everything is built around the use of URIs (Uniform Resource Identifiers), which for the most part act as the unique identifier for an element inside HP OneView.

  • Every new element, such as a server profile or a network, has a URI. This also applies to “static” elements, by which I mean a server hardware identity, e.g. a DL360. When a server or enclosure is added and the hardware is inventoried, a unique identity is created for it, even though this would logically be identical between HP OneView instances: a DL360 is a DL360 regardless of where it is located.

Same hardware added to two HP OneView instances

dan$ OVCLI 192.168.0.91 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/BF2E08CD-D213-422B-A19D-3297A7A5581E  BL460c Gen8 1
/rest/server-hardware-types/A4AB76D5-B4E3-4272-A18A-ECD24A500F2A  BL460c Gen9 1
/rest/server-hardware-types/D53C5B86-C826-4434-97C1-68DDBE4D4F49  BL660c Gen9 1
dan$ OVCLI 192.168.0.92 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/A2B0009F-83FC-42EC-A952-1B8DF0D0B46A  BL460c Gen9 1
/rest/server-hardware-types/CD79A904-483A-4BA3-8D8F-69DED515A0FE  BL460c Gen8 1
/rest/server-hardware-types/BDC49ED0-FEC2-4864-A0B8-99A99E808230  BL660c Gen9 1

  • It isn’t possible to assign a URI, so creating a network (e.g. vMotion with VLAN 90) will return a randomly generated URI of, say, 3543346435; it isn’t then possible to create that same network elsewhere with that URI, as a _new_ URI will be generated during creation.

After using OVCLI’s copy network function (whilst trying to persist the URI)

dan$ OVCLI 192.168.0.91 SHOW NETWORKS URI
/rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29  Management
dan$ OVCLI 192.168.0.91 COPY NETWORKS /rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29 192.168.0.92
dan$ OVCLI 192.168.0.92 SHOW NETWORKS URI
/rest/ethernet-networks/7d9b8279-31ce-4da6-9ce7-260ee9c48982  Management

Federation

A look around the internet for “HP OneView Federation” will return a number of results mentioning a few sentences about using the message queues etc. to handle federated OneView appliances; other than that, there isn’t currently a master HP One”View” to rule them all. HP OneView scales quite large and doesn’t require dedicated management devices (such as a Fabric Interconnect or Cisco UCS Manager); the only requirement is simple IP connectivity to the C7000 OA/VC, HP rack-mount iLOs, SAN switches, network switches or Intelligent PDU devices for monitoring and management, meaning that for most deployments federating a number of HP OneView instances won’t be a requirement.

There will be the odd business or security requirement for separate instances, such as a need to ensure physical and logical separation between Test/Dev and Production, or a multi-tenant data centre with separate PODs. So currently your only options are to build something cool with the OneView API or open multiple tabs in a web browser; the latter will look something like this from a memory-usage perspective (although I’ve seen it hover around 200MB per instance):

[Image: HPOneViewMem]


The web UI provides an excellent, detailed interface that puts all of the relevant information at your fingertips, but only for a single OneView instance.

A one-liner to list all server profiles from two OneView instances (.91 = Test/Dev, .92 = Production):

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES FIELDS name description serialNumber status; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES FIELDS name description serialNumber status
TEST Test Machines VCG0U8N000 OK
DEV Development VCG0U8N001 OK
PROD Production VCGO9MK000 OK

Another to pull all of the names and URIs:

dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES URI
/rest/server-profiles/5fdaf0cb-b7a8-40b1-b576-8a91e5d5acbf  TEST
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0  DEV
/rest/server-profiles/d75a1d9e-8bc4-4ee3-9fa8-3246ba71f5db  PROD

With the UI, there isn’t a method to move or copy elements such as networks or server profiles between numerous OneView instances; with the API this is a simple task. However, as noted in the networking example above, it is impossible to keep the identifiers (URIs) common between OneView instances. This makes it quite a challenge to move an entire server profile from one instance to the next, as moving or determining connectivity information that is unique to one OneView instance is a complicated task. It is possible, as shown in the video (here), but the connectivity information proved too much of a challenge to keep in the current version of OVCLI.

Operational Tasks

The web UI again simplifies a lot of the tasks, including providing some impressive automation/workflows, such as automating storage provisioning and zoning when applying a server profile to a server. It can also handle some bulk tasks through the ability to do group selection in the UI. However, the current limitations of server profiles and profile templates (changes might fix this in 2.0) make it quite an arduous task to deploy large numbers of server profiles through the UI: it’s easy to do, but it’s a case of a click or two per server profile. Using the API makes this very simple:

Let’s find the Development Server Profile and create 50 of them.
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI | grep DEV
/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 DEV


dan$ date; OVCLI 192.168.0.91 CLONE SERVER-PROFILES /rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 50; date
Tue 14 Jul 2015 17:06:40 BST
DEV_0
DEV_1


...


DEV_48
DEV_49
Tue 14 Jul 2015 17:06:52 BST

Twelve seconds and 50 development profiles are ready to go.

[Image: HPOneViewProf]

HP OneView Automation through the API

I’ve had the opportunity to head to some exciting places over the last few weeks and months, and especially recently I’ve been heading up and down the country on a regular basis. This has given me time whilst sat on the train (“yay!”) to really spend some time playing around with HP OneView. I’ve already had a go at wrapping some of the API in Objective-C, and decided to make something a little more useful.

I probably should have done my development work in a language that is a little more recent, such as Python; however, I stuck with a 43-year-old (at the time of writing) programming language… C. This does give me the option of porting it to anything with a C compiler and libcurl, so the option is there 🙂 I’ve also made use of the Jansson library, which is fantastic for manipulating and reading JSON (http://www.digip.org/jansson/).

So, what I’ve ended up with is a simple tool that can plug into automation tools pretty easily (Chef, Puppet, Ansible, I’m looking at you), interact with HP OneView and do some simple reporting in JSON or tab-delimited output. It can also do some things that currently aren’t available elsewhere, such as interacting with multiple HP OneView instances!

I’d full-screen this before clicking play… 🙂

This is a quick example of pulling some details from one instance (Enclosure Groups, Server Hardware Type) and using them to move a server profile from one HP OneView to another…

HP OneView – Part 2: Server Profiles

Apologies for the delay; I was busy…

What is a Server Profile?

The “Server Profile” is the defining phrase that comes to mind when thinking about the SDDC (Software Defined Data Centre). It allows a server administrator to define a hardware identity or configuration (MAC addresses, WWNNs, BIOS, boot order, RAID config etc.) in software and then apply it to a “blank” server.

This brings a number of key advantages:

  • Pre-designed hardware identities allow configuration tasks to be pre-provisioned before hardware deployment (SAN zoning, firewall ACLs etc..)
  • Designated addresses allow easier identification, e.g. aa:bb:cc:dd:01:01 = Management / aa:bb:cc:dd:02:01 = Production
  • Server failure/replacement doesn’t require upstream changes; the software identity (server profile) is applied and all previous configuration is still relevant.

Design

Following on from the previous HP OneView post, this is a continuation of the same simple VMware vSphere deployment. As before, a good design should exist before implementation, so again I’ve embedded a diagram detailing where and how these networks are going to be defined on the virtual interfaces of a blade.

[Image: VMware OneView Service Profiles]

Quite Simply:

  • Two virtual interfaces defined for all of the Service networks.
  • Two virtual interfaces defined for the Production networks.
  • Two HBAs on each fabric, providing resilience for Fibre Channel traffic.

As mentioned, this is a simple design for a vSphere host but allows expansion in the future with the ability to define a further virtual interface on each physical interface inside the blade.


HP OneView – Part 1: Enclosures and Uplinks (logical or otherwise)

The initial configuration of HP OneView is a pretty simple and intuitive process; it just isn’t documented as well as it could be. I’ve decided to put together a few posts detailing some of the areas of configuration that could do with slightly more detailed procedures. I’d expect that anyone wishing to use anything documented here is already acquainted with Virtual Connect, Onboard Administrator, VMware vSphere, iLOs etc.

Design

Before any implementation work is carried out there has to be a design in place, otherwise what, and how, are you going to configure anything? The design for these posts will be a simple VMware environment consisting of a number of networks to handle the standard traffic one would expect (Production, vMotion etc.). At this point we have defined our networks, and these have been trunked by our network admin on the switches out to the C7000s we will be connecting to.

[Image: c7000 VMware]


From this diagram you can see that the coloured VLANs represent the traffic designated as management/service traffic, and the grey VLANs represent production traffic. All of these VLANs are trunked from the access switches down to the Virtual Connect modules located in the back of the C7000 enclosures.


Note: I’ve omitted SAN switches from this and just presented what would appear as zoned storage presented directly to the Virtual Connect. I may cover storage and Flat SAN at a later date, if there is any request to do so.


This represents the external connectivity being presented to our enclosure; it’s now time for the logical configuration with HP OneView…