I’ve spent around nine months getting to grips with the “nuances” of both HP OneView’s design and its API (the two are intrinsically linked). During this time I’ve had a couple of attempts at wrapping the OneView API, with varying levels of success. So here are some quick takeaways, and ways these can be extended through the API (with OVCLI):
Everything is built around the use of URIs (Uniform Resource Identifiers), which for the most part act as the unique identifier for an element inside HP OneView.
Same hardware added to two HP OneView instances
dan$ OVCLI 192.168.0.91 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/BF2E08CD-D213-422B-A19D-3297A7A5581E BL460c Gen8 1
/rest/server-hardware-types/A4AB76D5-B4E3-4272-A18A-ECD24A500F2A BL460c Gen9 1
/rest/server-hardware-types/D53C5B86-C826-4434-97C1-68DDBE4D4F49 BL660c Gen9 1
dan$ OVCLI 192.168.0.92 SHOW SERVER-HARDWARE-TYPES URI
/rest/server-hardware-types/A2B0009F-83FC-42EC-A952-1B8DF0D0B46A BL460c Gen9 1
/rest/server-hardware-types/CD79A904-483A-4BA3-8D8F-69DED515A0FE BL460c Gen8 1
/rest/server-hardware-types/BDC49ED0-FEC2-4864-A0B8-99A99E808230 BL660c Gen9 1
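Because each appliance mints its own URIs, any tooling that spans two OneView instances has to resolve elements by a stable attribute such as the model name, never by URI. A minimal sketch of that idea (the parsing is illustrative, not OVCLI’s actual code):

```python
def hardware_type_index(listing):
    """Map model name -> URI from one appliance's SHOW SERVER-HARDWARE-TYPES output."""
    index = {}
    for line in listing.strip().splitlines():
        uri, model = line.split(maxsplit=1)
        index[model] = uri
    return index

# Output captured from the two appliances above
appliance_a = """\
/rest/server-hardware-types/BF2E08CD-D213-422B-A19D-3297A7A5581E BL460c Gen8 1
/rest/server-hardware-types/A4AB76D5-B4E3-4272-A18A-ECD24A500F2A BL460c Gen9 1"""

appliance_b = """\
/rest/server-hardware-types/CD79A904-483A-4BA3-8D8F-69DED515A0FE BL460c Gen8 1
/rest/server-hardware-types/A2B0009F-83FC-42EC-A952-1B8DF0D0B46A BL460c Gen9 1"""

a = hardware_type_index(appliance_a)
b = hardware_type_index(appliance_b)
# The same physical model resolves to a different URI on each appliance.
print(a["BL460c Gen8 1"] == b["BL460c Gen8 1"])  # → False
```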
After using OVCLI’s copy network function (whilst trying to persist the URI)
dan$ OVCLI 192.168.0.91 SHOW NETWORKS URI
dan$ OVCLI 192.168.0.91 COPY NETWORKS /rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29 192.168.0.92
dan$ OVCLI 192.168.0.92 SHOW NETWORKS URI
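The reason the URI can’t persist: the destination appliance always assigns a fresh one on creation, so a copy has to strip out the appliance-generated metadata before re-posting the resource. A sketch of that step (the field list is illustrative of typical OneView resource metadata, not a definitive set):

```python
# Fields the appliance generates per instance; these cannot be carried across.
# (Illustrative list — check the resource schema for the full set.)
INSTANCE_FIELDS = {"uri", "eTag", "created", "modified", "status", "state"}

def portable_body(resource):
    """Return a copy of a resource dict that is safe to POST to another appliance."""
    return {k: v for k, v in resource.items() if k not in INSTANCE_FIELDS}

source_net = {
    "name": "VLAN100",           # hypothetical network for illustration
    "vlanId": 100,
    "ethernetNetworkType": "Tagged",
    "uri": "/rest/ethernet-networks/c5657d2e-121d-48d4-9b57-1ff1aa62ce29",
    "eTag": "1436888000000",
}
print(portable_body(source_net))
# → {'name': 'VLAN100', 'vlanId': 100, 'ethernetNetworkType': 'Tagged'}
```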
A search of the internet for “HP OneView Federation” will turn up a handful of results that briefly mention using the message queues to handle federated OneView appliances, but beyond that there currently isn’t a master HP One”View” to rule them all. HP OneView scales quite large and doesn’t require dedicated management devices (such as a Fabric Interconnect or Cisco UCS Manager); the only requirement is simple IP connectivity to the C7000 OA/VC, HP rack-mount iLO, SAN switches, network switches or Intelligent PDU devices for monitoring and management, meaning that for most deployments federating a number of HP OneView instances won’t be a requirement.
There will be the odd business or security requirement for separate instances, such as ensuring physical and logical separation between Test/Dev and production, or a multi-tenant data centre with separate PODs. So currently your only options are to build something cool with the OneView API or open multiple tabs in a web browser; the latter will look something like this from a memory-usage perspective (although I’ve seen it hover around 200MB per instance):
The web UI provides an excellent, detailed interface that puts all of the relevant information at your fingertips, but only for a single OneView instance.
A one-liner to list all server profiles from two OneView instances (.91 = Test/Dev, .92 = production):
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES FIELDS name description serialNumber status; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES FIELDS name description serialNumber status
TEST Test Machines VCG0U8N000 OK
DEV Development VCG0U8N001 OK
PROD Production VCGO9MK000 OK
Another one to pull all of the names and URIs
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI; \
> OVCLI 192.168.0.92 SHOW SERVER-PROFILES URI
With the web UI there isn’t a method to move or copy elements such as networks or server profiles between multiple OneView instances. With the API this is a simple task; however, as the networking example above shows, it’s impossible to keep the identifiers (URIs) common between OneView instances. This makes it quite a challenge to move an entire server profile from one instance to the next, as re-mapping connectivity information that is unique to one OneView instance is a complicated task. It is possible, as shown in the video (here), but the connectivity information proved too much of a challenge to keep in the current version of OVCLI.
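To make the connectivity problem concrete: a server profile’s connections reference networks by URI, and those URIs only exist on the source appliance. One way to re-map them is to resolve each source URI to a network name, then look up the same-named network on the destination. A sketch of that approach, using hypothetical URIs and names (this is not OVCLI’s implementation):

```python
def remap_connections(profile, src_name_for_uri, dst_uri_for_name):
    """Rewrite each connection's networkUri to the URI of the same-named
    network on the destination appliance (names assumed to match)."""
    for conn in profile.get("connections", []):
        name = src_name_for_uri[conn["networkUri"]]
        conn["networkUri"] = dst_uri_for_name[name]
    return profile

# Hypothetical lookup tables, built from each appliance's network listing
src_name_for_uri = {"/rest/ethernet-networks/src-100": "VLAN100"}
dst_uri_for_name = {"VLAN100": "/rest/ethernet-networks/dst-100"}

profile = {"name": "TEST",
           "connections": [{"id": 1, "networkUri": "/rest/ethernet-networks/src-100"}]}
remap_connections(profile, src_name_for_uri, dst_uri_for_name)
print(profile["connections"][0]["networkUri"])  # → /rest/ethernet-networks/dst-100
```

This only works when every network referenced by the profile exists under the same name on both appliances, which hints at why the feature was dropped from the current OVCLI version.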
The web UI again simplifies a lot of tasks, including some impressive automation/workflows such as automating storage provisioning and zoning when applying a server profile to a server. It can also handle some bulk tasks through group selection in the UI. However, the current limitations around server profiles and profile templates (changes in 2.0 might fix this) make it quite an arduous task to deploy a large number of server profiles through the UI; it’s easy to do, but it’s a click or two per server profile. Using the API makes this very simple:
Let’s find the Development Server Profile and create 50 of them.
dan$ OVCLI 192.168.0.91 SHOW SERVER-PROFILES URI | grep DEV
dan$ date; OVCLI 192.168.0.91 CLONE SERVER-PROFILES /rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0 50; date
Tue 14 Jul 2015 17:06:40 BST
Tue 14 Jul 2015 17:06:52 BST
Twelve seconds and 50 development profiles are ready to go.
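The clone operation above can be sketched as a loop that stamps out profile bodies from a template, each with a unique name and with the appliance-assigned fields cleared so OneView allocates fresh ones. The naming scheme and field handling here are illustrative assumptions, not OVCLI’s actual logic:

```python
def clone_bodies(template, count):
    """Generate <count> profile bodies from a template profile,
    each with a unique name (hypothetical naming scheme)."""
    clones = []
    for i in range(1, count + 1):
        body = dict(template)
        body["name"] = f"{template['name']}_{i}"
        body.pop("uri", None)           # destination assigns a fresh URI
        body.pop("serialNumber", None)  # let OneView assign virtual serials
        clones.append(body)
    return clones

template = {"name": "DEV",
            "uri": "/rest/server-profiles/dd87433e-c564-4381-8542-7e9cf521b8c0"}
bodies = clone_bodies(template, 50)
print(len(bodies), bodies[0]["name"], bodies[-1]["name"])  # → 50 DEV_1 DEV_50
```

Each body would then be POSTed to /rest/server-profiles; the appliance processes the creations asynchronously, which is why 50 profiles can be queued in around twelve seconds.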