There is an ongoing argument about the technological underpinnings of the cloud: the hypervisors which are actually delivering all these resources. Michael Ducy helpfully listed many of the main arguments for adopting multiple hypervisors in his post. The main drivers he identifies are managing license costs and reducing lock-in.
However, I still see very few companies deliberately choosing multiple hypervisors as a strategy. There are a few, and in fact I have even seen MBOs based on the introduction of a second hypervisor platform. The idea here is that sure, today Hypervisor A dominates the market, and therefore drives the availability of integrated software, skilled administrators, and support from third-party vendors. However, if its commercial or licensing policies were to change, migrating away would be difficult or impossible if it were the only platform adopted. Therefore, a small island is deliberately created using an alternative technology, either lower-cost or perhaps even open-source, purely to nurture the skill sets and experience in case a hypervisor migration is ever needed.
That said, I think most companies find themselves with multiple hypervisors, rather than planning for that situation deliberately. Perhaps two virtualization projects start separately, one in the Windows team and one in the Unix team (more probably, the Unix guys have been doing partitioning for a while, and are surprised to hear that they were actually doing virtualization all along). Perhaps the applications team uses an open-source hypervisor for rapid prototyping, which is different from the choice the operations team made. Finally, IT may discover that other departments have been using public clouds outside of their view (“shadow cloud”).
In all of these situations, trying to persuade everyone to move to one platform is going to take a lot of time, compromises, and frayed tempers. However, continuing with a Balkanized approach is not ideal either: best practices will not be implemented in a uniform manner, migrating content and configurations between the different silos will be an ongoing challenge, management and monitoring visibility is limited and fragmented, and there is significant risk of deployment failure due to unexpected differences between the environments. This is where a management platform that can address all of today’s hypervisors comes in, as it allows higher-level functions to be unified across the different hypervisors, while leaving each team to manage the technical details of its own infrastructure to suit.
This becomes especially important with full cloud approaches, as opposed to simpler virtualization management projects. For the cloud, abstraction from the internal details of the virtualization platform is the name of the game. Business users cannot and should not be expected to understand the complex technical details required to implement their request. If an IT person is required to be in the loop to translate the business request into technical language, the cloud project has already failed, as turnaround time will increase, error rates will go up, and user satisfaction will go down due to the bottleneck in the process.
Once all those details are abstracted away, though, it does not matter whether there is one hypervisor or ten, or whether they are on-premises or elsewhere. At some level the differences are important, so the cloud platform needs to be able to discriminate between the different resources for delivery, but the business user doesn’t need to know.
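The principle can be sketched in a few lines of code. This is a minimal, hypothetical illustration, not any vendor's actual API: the class and driver names (VMRequest, HypervisorDriver, CloudPlatform) and the placement rule are all invented for the example. The point is only that the business user supplies a capacity request, while a policy inside the platform decides which hypervisor fulfills it.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class VMRequest:
    """A business-level request: no hypervisor details exposed."""
    name: str
    cpus: int
    memory_gb: int


class HypervisorDriver(ABC):
    """Per-platform driver; each team keeps managing its own low-level details."""

    @abstractmethod
    def provision(self, request: VMRequest) -> str:
        """Create a VM and return its identifier."""


class VMwareDriver(HypervisorDriver):
    def provision(self, request: VMRequest) -> str:
        # A real driver would call the platform's own management API here.
        return f"vmware:{request.name}"


class KVMDriver(HypervisorDriver):
    def provision(self, request: VMRequest) -> str:
        return f"kvm:{request.name}"


class CloudPlatform:
    """Routes requests to a hypervisor based on policy, not user choice."""

    def __init__(self, drivers: dict):
        self.drivers = drivers

    def provision(self, request: VMRequest) -> str:
        # Placement policy (invented for the example): small workloads go to
        # the lower-cost platform. The business user never sees this decision.
        target = "kvm" if request.memory_gb <= 8 else "vmware"
        return self.drivers[target].provision(request)


platform = CloudPlatform({"vmware": VMwareDriver(), "kvm": KVMDriver()})
print(platform.provision(VMRequest("web01", cpus=2, memory_gb=4)))   # kvm:web01
print(platform.provision(VMRequest("db01", cpus=8, memory_gb=64)))   # vmware:db01
```

Adding a tenth hypervisor, or a public cloud, means registering another driver; nothing the business user sees has to change, which is exactly the abstraction argument above.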
Some say that this abstraction is itself a form of lock-in, as the cloud portal exposed to the end-users is difficult to change once adopted, and the business finds itself relying on the portal’s developer to support its hypervisor choices. My answer is that BMC is committed to a strategy of heterogeneity. We do not have a hypervisor, an operating system, or a hardware platform in our portfolio; all we do is management. Therefore, we go where our customers go. We support the main hypervisor platforms today, and we have a constantly evolving roadmap based on customer adoption of each platform. However, we do not expect to be the only players in the datacenter; low-level technical administration of the individual platforms will always require specific knowledge and tools, and these will continue to work exactly as users expect them to. This may even extend to alternative portals for specific use cases.
This is a complex topic and one which will no doubt continue to be debated for some time to come. Please join the conversation, either in the comments below or on Twitter. We would love to hear from you.