Falling clouds

Cloud computing is still a young discipline, young enough that we can still argue about what is and is not a cloud. That said, the argument about whether the Dino is a real Ferrari or not doesn’t show any sign of dying down yet, so we may have a while to go on this one; the Dino was introduced in 1968…

Regardless, cloud computing has been around long enough to have delivered a few disappointments already. In fact, Gartner places it on their Hype Cycle half-way down the slope from the Peak of Inflated Expectations to the Trough of Disillusionment. This is a natural and healthy development, as some of the more heated claims that were made about the cloud in its infancy are tested in practice. Once we all become more comfortable with this new paradigm, what it can do for us, and which tasks it is best suited to, we will be able to achieve the full promised benefit of cloud computing.

This is the thrust of a recent interview with Josh McKenty, CTO of Piston Cloud Computing, which appeared in Network World. Piston Cloud is an OpenStack-based provider based in San Francisco. In common with many cloud providers, Piston Cloud has experienced significant drop-off between promising proof-of-concept and pilot implementations at one end, and actual production cloud implementations at the other. The interesting part is Mr. McKenty's prescriptions for how to avoid these unsuccessful projects, which I think apply far beyond just the OpenStack market. Here are his recommendations, with my comments.

Get out of the data center and talk to all business influencers

This one is crucial, and it's the first piece of advice I give to BMC colleagues and partners too. IT teams work in the data center, and that is the way they look at the world. This is fine and healthy, up to a point. But cloud projects do not live or die purely by what happens in the data center; they need to satisfy users' requirements, and by definition, those users will not be IT people.

IT people often look at the cloud as just the latest iteration of virtualisation, and calculate its business case on the basis of how quickly it can provision VMs or which SDN features it supports. The problem is that business users do not particularly care about those metrics, except in so far as they impact their own work. The business case for the cloud and the use cases that it will support need to be driven by those business requirements, with IT giving input about feasibility and impacts, not driving the process.

Set user expectations

The flip side of the previous point is that sometimes those business users have unrealistic expectations: that they will be able to sit in an airport coffee shop and provision whatever they want with a couple of swipes on their iPad. That may be a fine goal, but a lot needs to be in place for it to work. So it's important to discuss not just what users want, but what they need: what they need now versus in a few months or a year, what is urgent and what would merely be nice to have, and so on. This could include what will be offered in the service catalog, what infrastructure options will be supported, or what approvals processes may apply. All of these factors may evolve over time, of course, and that is also part of the discussion: what do we need on day one, and what can we plan to add down the road?

Motivate users to take advantage of the private cloud

This one seems obvious, but it’s where projects can founder easily. If adoption rates are low – users don’t use the cloud platform – it doesn’t matter how much of a technical success and how buzzword-compliant it is, it’s still a failure. Andrew Hillier, CTO of CiRBA, is quoted in the Network World article as saying that “Mandated use is not out of the question”. This is of course an option, but one of the interesting experiences I had at BMC as we built the very first cloud projects, a few years ago now, was that users can also be guided pretty easily.

Our very first European customer was trying to get users off expensive physical Unix gear and onto commodity Linux virtual infrastructure. However, users were resisting and kept finding ways to provision the Unix hardware. IT then decided to constrain the choice, mandating Linux VMs for anything below 4 GB of RAM – and suddenly everyone needed 4.1 GB. Finally, as part of the implementation of an early ancestor of BMC Cloud Lifecycle Management, we exposed some pricing information, with the intention of hooking it up to cross-charges between departments later in the project – what today we call chargeback.

What actually happened was that just showing that price, even though nothing was ever done with the number afterwards, was enough to guide users to the cheaper option when there was no actual requirement for the more powerful physical server. This is what we call showback today, and I find it a fascinating way to motivate users and guide their choices.
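The mechanism is simple enough to sketch. Here is a minimal, hypothetical illustration of showback: an estimated price is computed and displayed at provisioning time, but never actually billed. All offerings and rates below are invented for illustration; a real implementation would pull them from the service catalog.

```python
# Hypothetical showback sketch: display an estimated monthly cost for each
# provisioning option so users can compare, without any actual billing.
# The offerings and rates below are purely illustrative.

RATES = {
    "unix_physical": {"base": 900.0, "per_gb_ram": 25.0},
    "linux_vm": {"base": 60.0, "per_gb_ram": 8.0},
}

def showback_estimate(offering: str, ram_gb: float) -> float:
    """Return an estimated monthly cost for a provisioning request."""
    rate = RATES[offering]
    return rate["base"] + rate["per_gb_ram"] * ram_gb

# Shown to the user at request time, nothing more:
for offering in RATES:
    cost = showback_estimate(offering, ram_gb=4)
    print(f"{offering}: approx. {cost:.2f}/month")
```

Even though the number feeds into no invoice, simply surfacing the difference between the two options is often enough to steer users toward the cheaper one when they have no real need for the more powerful server.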

Don’t stay wedded to old data center gear

I have mixed feelings about this one. It's true that there comes a point when you have to standardise your hardware, if only to rationalise maintenance contracts and the number and type of spare parts you need to keep on hand, but there's no need for complete uniformity. The customer I spoke about earlier didn't get rid of their physical Unix hardware; they now run it as part of a mix that includes Linux and Windows on x86 hardware from different vendors, with a couple of different hypervisors on top, and Amazon AWS to round out the options. As long as your management layer supports the different platforms properly, you can continue to take advantage of the unique features of each. Horses for courses, as the saying goes.

Make sure existing apps are moved into the private cloud

This is an area that is overlooked all too often. Some apps can be deployed as easily to a cloud environment as to physical infrastructure, but others cannot deal with the different approach, and may need to be upgraded or even replaced to take advantage of that environment's capabilities. There is no point in trying to use public IaaS like in-house physical servers – treating livestock like pets. On the other hand, rewriting, migrating or upgrading business applications can be very disruptive, so it's best not to get into that sort of exercise without some careful planning. Once again, it's best to have a management layer that can deal with both worlds, so that you are not rushed into a change that neither IT nor the business might be ready for.

At BMC, we have developed a methodology for cloud computing projects to help ensure user adoption and successful projects. This methodology is divided into Plan, Build and Run phases. The Plan phase is a consulting engagement where experts from our Cloud Center of Excellence will go through the sorts of exercises that I mentioned briefly above, with the aim of producing a cloud roadmap that is realistic and adapted to each IT organisation’s specific situation. You can find out more about this offering here, and of course for any other questions the whole bmc.com/cloud site is full of useful information, constantly updated to ensure it remains relevant in this fast-paced market.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.



Dominic Wellington


Dominic Wellington is BMC's Cloud Marketing Manager for Europe, the Middle East and Africa. He has worked on the largest cloud projects in EMEA, and now he calls on that experience to support new cloud initiatives across the region. Previously Dominic supported BMC's automation sales with direct assistance and enablement throughout EMEA. Dominic joined BMC Software with the acquisition of BladeLogic, where he started up Southern Europe pre-sales operations. Before BladeLogic, he worked in pre-sales and system administration for Mercury and HP. Dominic has studied and worked between Italy, England and Germany.