My previous post was a theoretical piece about how Arista may or may not (I had no confirmation) be interacting with HP OneView in order to automate infrastructure provisioning in the Data Centre. That particular article dealt with one of the Message Buses that form part of HP OneView, in particular the State Change Message Bus (SCMB), which handles tasks and changes to hardware.
This post will look at the other Message Bus that exists inside HP OneView, the Metric Streaming Message Bus (MSMB), which handles all of the hardware metrics. As of HP OneView 1.20 the following information is available:
Enclosure (RatedCapacity / DeratedCapacity / Temp / AvgPower / PowerCap / PeakPower)
Power Device (AvgPower / PeakPower)
Server Hardware (CpuUtilisation / CpuAvgFreq / Temp / AvgPower / PowerCap / PeakPower)
These statistics can be captured at a sample rate (every 5 mins or more) and then posted to the message bus at a given frequency (every 5 mins or more).
One thing that surprised me was that by default the Metric Streaming Bus isn’t configured to monitor anything, which means that the web-based UI must be getting its statistics by polling or internal SNMP.
After some quick changes to my PoC tool, it can now monitor the Metric Streaming Bus with a single line:
OVCLI 192.168.0.91 MESSAGEBUS LISTEN METRIC msmb.#
The idea I had was to take the raw output from HP OneView and find a way of visualising it in the large interactive dashboards that Operations teams are currently so fond of. My idea was to make use of Docker for ease of deployment, with InfluxDB and Grafana to keep the configuration simple.
To make this work I needed to make some changes to the PoC tool so that it could transform the raw JSON from HP OneView into something that could be ingested by InfluxDB. The code to do that is located on GitHub and is pretty raw at the moment, however it does what is needed for today’s example.
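To give a feel for that transformation, here is a minimal Python sketch of turning a metric sample into InfluxDB’s line protocol. The JSON shape and field names below are assumptions for illustration, not the exact MSMB message schema or what my tool does internally.

```python
# Minimal sketch: turn a (hypothetical) OneView metric sample into
# InfluxDB line-protocol strings. Field names are assumptions, not
# the exact MSMB message schema.
def metric_to_lines(sample):
    # Use the last part of the resource type as the measurement name,
    # e.g. "/rest/server-hardware" -> "server-hardware"
    measurement = sample["resourceType"].strip("/").split("/")[-1]
    lines = []
    for name, value in sample["values"].items():
        lines.append("%s,uri=%s %s=%s %d"
                     % (measurement, sample["uri"], name, value, sample["timestamp"]))
    return lines

# Hypothetical sample, loosely modelled on an MSMB message body
sample = {
    "resourceType": "/rest/server-hardware",
    "uri": "/rest/server-hardware/1",
    "timestamp": 1444000000000000000,  # nanoseconds since the epoch
    "values": {"CpuUtilisation": 12, "AveragePower": 180},
}

for line in metric_to_lines(sample):
    print(line)
```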
PLEASE NOTE: this is all a proof of concept and isn’t designed for anything in production; it’s more a learning experience for myself, and hopefully it inspires others to create something that perhaps in the future would be production ready… Same goes for my use of Docker containers 😛
Also, to avoid any confusion, the IP addresses I’m using are local to my lab environment and are as follows:
192.168.0.25 = CoreOS i.e. Docker Host
192.168.0.91 = HP OneView Instance
This will pull the InfluxDB image and start it, exposing ports 8083/8086 and creating a database called oneview. Once the container has started it will be accessible from the CoreOS host on the exposed ports, e.g. http://192.168.0.25:8083
docker run -d -p 8083:8083 -p 8086:8086 -e PRE_CREATE_DB="oneview" --expose 8090 --expose 8099 --name influxdb tutum/influxdb
A lot of these steps only need doing once; they configure the actual Metric Bus so that it reports the correct data. Once the configuration work is done and the certs are downloaded, the only remaining step is to connect the message bus to the InfluxDB instance.
These steps will log into HP OneView, generate the certificates required for RabbitMQ and then download them locally.
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 LOGIN Administrator password
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS GENERATE
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS CERT
These steps will interrogate the configuration of the Message Bus and set the reporting frequency.
Configure the Metric MessageBus
Check the current configuration
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS METRIC GETCONFIG
Check what capabilities are available (I’m assuming this is for future releases)
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS METRIC CAPABILITIES
Enable server-hardware reporting on a 5-minute basis
docker run -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS SETCONFIG /rest/server-hardware 300 300
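For the curious, my understanding is that SETCONFIG issues a PUT against OneView’s metrics configuration endpoint, with the two 300s being the sample interval and the relay frequency in seconds. The sketch below shows the body shape as I read the 1.20 REST API; the endpoint and field names are my assumptions and may differ.

```python
import json

# Assumed shape of the body that a SETCONFIG-style call would PUT to
# OneView's metrics configuration endpoint (/rest/metrics/configuration).
# Field names are my reading of the 1.20 REST API and may differ.
def metric_config(source_type, sample_secs, relay_secs):
    return {
        "sourceTypeList": [{
            "sourceType": source_type,                     # e.g. /rest/server-hardware
            "sampleIntervalInSeconds": str(sample_secs),   # how often to sample
            "frequencyOfRelayInSeconds": str(relay_secs),  # how often to post to the bus
        }]
    }

print(json.dumps(metric_config("/rest/server-hardware", 300, 300), indent=2))
```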
This command will start listening to HP OneView’s Metric Message Bus and pass the data to the InfluxDB server using the oneview database
docker run -d -v ~:/root ovcli:latest OVCLI 192.168.0.91 MESSAGEBUS LISTEN METRIC msmb.# INFLUXDB 192.168.0.25 oneview
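Under the hood, each forwarded metric ends up as an HTTP write against InfluxDB. A small sketch of building that write URL (note that InfluxDB 0.9’s HTTP API normally listens on 8086; the host and database match the example above):

```python
from urllib.parse import urlencode

# Build the InfluxDB 0.9 HTTP write endpoint for a given host/database.
# 8086 is the default HTTP API port exposed by the container started earlier.
def influx_write_url(host, db, port=8086):
    return "http://%s:%d/write?%s" % (host, port, urlencode({"db": db}))

print(influx_write_url("192.168.0.25", "oneview"))
# Each line-protocol string is then POSTed to this URL
```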
This will pull the Grafana image, link it to the InfluxDB container and start it. Once the container has started it will be accessible from the CoreOS host on the exposed port, e.g. http://192.168.0.25:3000
docker run -d -p 3000:3000 --link influxdb:influxdb --name grafana grafana/grafana
This part is probably the easiest: as we’ve started our three Docker containers detached and left them running in the background, it’s time to use a nice simple web-based UI for the remainder.
Log into the Grafana instance on port 3000 (look at the URL in step four) and use the credentials mentioned on the Grafana Docker Hub page. From there you’ll be the administrator of the Grafana instance, and it’s a straightforward task to add the InfluxDB instance by selecting Data Sources from the menu on the left.
In the Data Sources page, select InfluxDB 0.9.x (current) as the Type and add in the settings of the InfluxDB instance, such as the HTTP URL (something along the lines of the URL mentioned in step one), along with the database name and the credentials, which are available from the InfluxDB Docker Hub page. Once it looks something like the example image, save and/or test the connection and you’re ready to build your first graph from the data in InfluxDB.
To save on duplication, follow the steps on the official Grafana website until you get to the section titled “Adding & Editing Graphs and Panels”, as that is where we will add our custom metrics to populate the graphs with information from HP OneView. In this quick example I will create a new dashboard, add a new graph panel to it, change the data source from Grafana to InfluxDB, and then create a query to get CPU usage and build a graph from it.
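For reference, the query is plain InfluxQL, so it can be sanity-checked outside Grafana against InfluxDB’s HTTP query endpoint. The measurement and field names below are assumptions; they depend on how the listener named things when writing the MSMB data into InfluxDB.

```python
from urllib.parse import urlencode

# An illustrative InfluxQL query for CPU usage; the measurement and field
# names are assumptions and depend on how the MSMB data was written.
query = ('SELECT mean("CpuUtilisation") FROM "server-hardware" '
         'WHERE time > now() - 1h GROUP BY time(5m)')

# The same query can be tested directly against InfluxDB's HTTP API:
url = "http://192.168.0.25:8086/query?" + urlencode({"db": "oneview", "q": query})
print(url)
```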
That should be all that’s required to build your first graph for a single server. Time to play around, add in additional servers or other data sources, and expand the dashboard as you see fit!