Get ready for your next-generation cloud: lessons learned from first-generation private clouds

By now, a majority of medium and large organizations have developed an initial cloud strategy. Virtually all of these companies are consuming external cloud services (whether infrastructure, platform, or application services), and many have built an internal IT infrastructure model that they term a “private cloud.”

However, many first-generation private cloud efforts have fallen short of their goals or reached the limit of the IT services they can offer. These initial projects were still worthwhile: they enabled IT organizations to ramp up their cloud skill sets and learn useful lessons for next-generation cloud development. Many are now ready to move on to a more sophisticated cloud (private and/or hybrid) that meets a broader range of their business needs. Here are a few lessons learned from the first-generation cloud experience, along with tips for planning a next-generation cloud.

Focus on the cloud consumer, not the cloud plumbing. Many initial clouds were built from the “bottom up,” with the focus on which infrastructure to use, which hypervisor to choose, which scripting engine to use for orchestration, and so on. The actual cloud services being offered received far less attention, as most of these early clouds simply provisioned basic infrastructure services based on a virtual machine/server template model. Next-generation clouds will be driven “top down” by a business discussion focused on the types of services the IT consumer wants from the cloud platform. This will drive greater IT/business alignment and, ultimately, greater cloud usage.

Offer complete self-service. Many of the early clouds offered a request front end suited to savvy system administrators but not to the larger population of IT users. As a result, these early clouds were kept within the IT shop for limited use, with the traditional external request mechanisms (e.g., request tickets) still in place. The internal IT administration teams would use their “private cloud” to accelerate the building of servers or IT environments and then hand them back to the requester.

However, with a complete cloud management platform today, it is possible to offer everything from simple to complex cloud-based services to all IT users. As a result, IT consumers are demanding a personal cloud experience and the empowerment that comes from self-service (coupled with the back-end industrialization of a fully automated configuration and deployment model to ensure cloud velocity and agility).
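
To make that concrete, here is a minimal sketch of what “complete self-service” implies behind the scenes: a catalog request flows straight into an automated configure-and-deploy pipeline rather than into a ticket queue. The class and function names below are hypothetical placeholders, not any particular product’s API:

    from dataclasses import dataclass

    @dataclass
    class ServiceRequest:
        requester: str
        offering: str    # catalog entry, e.g. "small-linux-vm"
        options: dict    # request-time choices (size, region, software, etc.)

    def fulfill(request: ServiceRequest) -> str:
        """Fully automated path from self-service request to running service."""
        resource_id = provision_infrastructure(request.offering, request.options)
        apply_configuration(resource_id, request.options)
        register_with_monitoring(resource_id)
        notify(request.requester, resource_id)
        return resource_id

    # Stubs standing in for whatever orchestration back end is in place.
    def provision_infrastructure(offering, options): return "vm-0001"
    def apply_configuration(resource_id, options): pass
    def register_with_monitoring(resource_id): pass
    def notify(user, resource_id): pass

    fulfill(ServiceRequest("jsmith", "small-linux-vm", {"cpu": 2, "mem_gb": 8}))

The point is not these particular steps but that the requester never files a ticket and no administrator touches the request.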

Offer advanced cloud services. Many early-adopter organizations built their clouds out of available component products. In those early days, end-to-end cloud management platforms were often unavailable or immature. Many of these early private clouds are merely virtualized servers overlaid with a thin layer of orchestration, often augmented by custom scripting.

For most organizations, early private cloud services consisted of provisioning/deprovisioning of servers, server templates, and storage. IT groups today want to offer advanced services (e.g., fully configured software stacks, applications, databases, development environments, user-specified options) to meet the needs of their customers, but the early cloud efforts typically require extensive customization to support such services. Today, these types of advanced services are possible with state-of-the-art cloud management platforms.
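
As an illustration (not any specific vendor’s format), an “advanced service” in the catalog might be described as a whole environment rather than a bare server. The blueprint below, with hypothetical component names, sketches the idea:

    # A development environment described as one deployable unit.
    DEV_ENVIRONMENT_BLUEPRINT = {
        "name": "java-dev-env",
        "components": [
            {"role": "app-server", "image": "linux-base", "software": ["jdk", "tomcat"]},
            {"role": "database",   "image": "linux-base", "software": ["postgres"],
             "storage_gb": 100},
            {"role": "ci-runner",  "image": "linux-base", "software": ["git", "build-agent"]},
        ],
    }

    def deploy(blueprint: dict) -> list:
        """Walk the blueprint and deploy each component (provisioning is stubbed out)."""
        deployed = []
        for component in blueprint["components"]:
            # A real platform would provision the VM, install the software list,
            # and wire up networking and monitoring for each tier here.
            deployed.append(f'{blueprint["name"]}/{component["role"]}')
        return deployed

    print(deploy(DEV_ENVIRONMENT_BLUEPRINT))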

Involve the network. Most of the early adopters’ clouds do almost nothing with the network tier of the datacenter. These clouds depend entirely on the virtual network components provided by the compute hypervisor and ignore the power available in datacenter networks. IT organizations should consider cloud platforms that can exploit the datacenter network, both to leverage that investment and to support complex services that require multi-tier models, workload segregation for compliance and performance, and optimization of cloud workloads. Just as compute services are delivered “just in time” in a cloud model, network services should be procured and configured to meet the needs of the cloud service at the time of request.
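
By way of illustration, “just in time” network configuration might look like the following sketch, where an isolated segment is carved out of an address pool only when a requested service needs it; the pool sizes and helper names are made up for the example:

    import ipaddress

    class NetworkPool:
        """Hands out isolated subnets, e.g. one per tenant or per workload tier."""
        def __init__(self, supernet: str, prefix: int):
            self._available = list(ipaddress.ip_network(supernet).subnets(new_prefix=prefix))

        def allocate_segment(self, purpose: str):
            subnet = self._available.pop(0)
            # In a real platform this is where VLANs, ACLs, firewall rules, and
            # load-balancer pools would be pushed out to the datacenter network.
            print(f"Configured {subnet} for {purpose}")
            return subnet

    pool = NetworkPool("10.20.0.0/16", prefix=24)
    web_net = pool.allocate_segment("web tier")
    db_net = pool.allocate_segment("database tier (segregated for compliance)")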

Embrace service flexibility. Many of the early cloud services are based on template or image models. As demand for variations in standard services expands, IT generates more and more new images. Each image often ends up as another item in the cloud service catalog that can be requested.

For example, one organization has more than 1,000 different catalog items based on this model. Its first-generation cloud cannot provision a base image and then configure or customize it before handing it over to the resource requester. To mitigate this sort of image/template sprawl, the next-generation cloud should be able to support custom options at request time (e.g., infrastructure specifications, middleware parameters) so the resulting cloud service is configured perfectly time after time for the cloud consumer.
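
One way to picture the alternative: rather than a separate catalog image for every permutation, a single base template is combined with user-specified options at request time. The sketch below uses invented template names and options purely to illustrate the pattern:

    BASE_TEMPLATES = {"linux-base": {"os": "linux", "disk_gb": 40}}

    def build_service(template, *, cpu, mem_gb, middleware=(), database=None):
        """Compose one service specification from a base image plus request options."""
        spec = dict(BASE_TEMPLATES[template])
        spec.update(cpu=cpu, mem_gb=mem_gb,
                    middleware=list(middleware), database=database)
        return spec

    # Two very different services from the same single catalog entry:
    dev_box = build_service("linux-base", cpu=2, mem_gb=4)
    app_tier = build_service("linux-base", cpu=8, mem_gb=32,
                             middleware=["tomcat"], database="postgres")
    print(dev_box)
    print(app_tier)

The catalog stays at a handful of entries while the number of possible configurations grows with the options, which is the opposite of the image-sprawl model.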

Ensure hybrid from Day One. While first-generation private clouds were focused on internal datacenters, next-generation cloud platforms support internal and external cloud models — enabling a hybrid approach to managing cloud resources, no matter where those resources are sourced. Even IT organizations with no current external cloud needs must plan for this and think in terms of hybrid models for optimized workload deployment and cloud service brokering.

Integrate with broader IT. Many organizations now realize their clouds are stand-alone and not integrated into existing IT processes, such as change management, asset tracking, capacity optimization, availability management, software license compliance, audit capabilities, reporting capabilities, and chargeback. These organizations want to add those integrations now to improve service delivery. Next-generation clouds will be the driving force behind remaking the datacenter and must embrace (and evolve) the broad range of service management and IT operational processes.

What the next generation will look like

Many organizations are now trying to see what their next-generation cloud is going to look like. They’re open to looking at new options for extending their cloud investment or replacing it with a whole new model.

The cloud computing model for provisioning, configuration, deployment, and the overall operational lifecycle is the blueprint for building and operating the next-generation datacenters. The cloud computing principles and characteristics will essentially supplant the way people are building and operating their IT today.

A checklist for moving to your next-generation cloud

Based on the lessons learned from building first-generation clouds, here are five platform requirements for any next-generation approach:

  1. A management platform that engenders a high degree of service flexibility — You don’t know what your cloud consumers will want one, two, or three years from now. Making the right technology choice today enables flexibility in how you define and deliver cloud services now and in the future.
  2. A platform that can support multiple constituencies — Make sure it supports multi-tenancy from top to bottom. Then you can successfully segregate users, infrastructure, and workloads as your cloud evolves.
  3. A platform that is not tied to a single infrastructure — The market keeps changing. For example, today the servers from Vendor A look cost-effective. But two years from now, the servers from Vendor B may look even better. The underlying infrastructure technology you use to build the cloud typically has a refresh cycle of three to seven years. The refresh cycle is longer for the network, shorter for servers, and in between for storage. You need neutrality — a cloud platform that can support any kind of internal infrastructure that’s going to be in your datacenter and also connect to any external public cloud provider you may choose.
  4. An intelligent platform — Make sure your platform can be informed by external events, external situations, and policy when deciding where workloads will be deployed. The platform must not only dynamically build and deliver IT environments but also have enough smarts to deploy workloads where they need to run and optimize them over time (see the sketch after this list). Some cloud workloads won’t run well together, and some workloads, for compliance reasons, aren’t even permitted to run close to one another. You don’t want individual system administrators making deployment decisions and performing manual deployment steps. You will need to fully automate the delivery of IT.
  5. A platform that is integrated with your existing enterprise management technology and processes — You don’t want cloud to always be something separate. In the 1990s, when Web-based technologies came to enterprise IT, people would have a separate Web team, Web gurus, and a Web server expert. Now it’s all mainstreamed within IT. And that’s what will happen with the cloud. A few years from now, people won’t be talking about cloud; it’ll just be the way IT gets done.
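
As a rough illustration of the placement logic in item 4 (with made-up location names, compliance tags, and capacity figures), a policy-informed platform filters candidate locations before choosing where a workload runs:

    def place_workload(workload: dict, locations: list) -> str:
        """Pick a target that satisfies compliance and affinity policy, then prefer spare capacity."""
        candidates = [
            loc for loc in locations
            if workload["compliance"] <= set(loc["certifications"])       # e.g. PCI, HIPAA
            and workload["keep_away_from"] not in loc["resident_workloads"]
            and loc["free_cpu"] >= workload["cpu"]
        ]
        if not candidates:
            raise RuntimeError("No compliant location has capacity for this workload")
        return max(candidates, key=lambda loc: loc["free_cpu"])["name"]

    locations = [
        {"name": "dc-east", "certifications": ["PCI", "HIPAA"],
         "resident_workloads": ["billing"], "free_cpu": 16},
        {"name": "dc-west", "certifications": ["PCI"],
         "resident_workloads": [], "free_cpu": 64},
        {"name": "public-cloud-a", "certifications": [],
         "resident_workloads": [], "free_cpu": 512},
    ]
    workload = {"cpu": 8, "compliance": {"PCI"}, "keep_away_from": "billing"}
    print(place_workload(workload, locations))   # -> "dc-west"

The same structure extends naturally to hybrid placement: external cloud regions are simply additional entries in the list of candidate locations, evaluated under the same policies.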

This next-generation approach will deliver value to the business faster by automating everything from request to deployment and configuration — and do so up and down the stack and across the entire infrastructure.

Smoothing the path to the next generation

As you move to your next-generation cloud, keep three points uppermost in your mind:

  • If you’re going to choose a technology to build your cloud, choose one that takes you into the future.
  • A cloud strategy is not an event but a continuous activity. Most organizations will be working on this for years as their needs emerge, as their IT consumers become more sophisticated, and as IT learns how to deliver fully automated IT services.
  • In the future, cloud computing principles and characteristics will just be the way that we design, build, and operate datacenters.

The first-generation cloud was typically a modest, limited success. If IT can learn from the first-generation experience and lay the necessary foundation, the second-generation cloud will be an evolving success beyond the current expectations of many organizations. For more information, visit www.bmc.com/cloud.

These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.


Herb Van Hook

Herb VanHook is the Deputy Chief Technology Officer (CTO) for BMC Software, and manages the CTO Office for BMC. Herb spends much of his time with BMC customers, and focuses specifically on the technology, process and organization impacts and opportunities presented by Cloud Computing and Data Center automation models. He leads BMC’s overall Cloud strategy for the CTO office. Herb has worked in strategic, corporate development and business planning functions since joining BMC in 2005. Previous to BMC, he held several executive positions at industry analyst firm META Group (now Gartner, Inc.). While at META, Herb was executive vice president and research director, leading all infrastructure and operations research, and last serving as META’s interim president and chief operating officer prior to its sale to Gartner. Herb has more than 30 years of experience in information technology – across operations, development, support and management – including positions at IBM, CA Technologies, and Legent Corporation.