Data Center Operations Guide – BMC Software | Blogs

Securing Data Centers & Server Rooms: Goals & Best Practices

Information security is a critical topic in the enterprise IT industry, especially when mission-critical data is stored off-premises in cloud data centers. Indeed, cybercrime is the most pressing concern: according to a recent survey, 79% of organizations using cloud computing services have experienced a cloud-related data breach.

However, there's more to cloud data center security than defending against the prevalent cybercrime attack vectors. Physical security of hardware assets is equally critical to the secure and dependable operation of a cloud data center.

In this blog, we will discuss what the physical security of a cloud data center entails, the applicable industry standards, and industry-proven best practices to secure your cloud data center resources.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)


Security controls for data centers & server rooms

Cloud data center and server room security controls encompass four key aspects of the data center:

  • Physical security. The security of the static systems that make up a data center: the building structure, hardware resources, utility services infrastructure, and the portable and mobile objects within the facility. The characteristics of these systems determine the physical threats facing a data center, including fire, unauthorized access, working conditions for the workforce, and overall dependability.
  • Geographic characteristics. The location of a data center determines the natural threats it faces, such as earthquakes, flooding, and volcanic eruptions. Human-caused threats such as burglary, civil disorder, service interruptions, damage, and interception also depend heavily on the location of the facility.
  • Supporting facilities. Refers to the services necessary for smooth data center operations. These facilities include the infrastructure of utility services such as energy, water, cooling, communication, and air conditioning. Emergency services including firefighting, policing, and emergency healthcare also impact the risk mitigation capacity of a data center.
  • Future prospects. Economic, political, geographic, and demographic factors affect how well a location is suitable for data center operations over the long term. The services and facilities available to your server room may suffice for now but does the location offer sufficient capacity, services, and facilities to scale in the future?

Securing your server room

The first step to securing a server room is to design one that is fully compliant with the leading industry standards. Organizations such as the National Institute of Standards and Technology (NIST), as well as government regulatory authorities, provide guidelines, standards, and frameworks that encompass all aspects of server room security: physical, environmental, and information security.

Some of the common server room security standards and framework guidelines include:

Server room best practices

Server room security is an ongoing process. The security frameworks provide guidelines to maintain server room security in context of changing external circumstances and the scale of IT operations. Once a data center room is designed in compliance with the applicable standards, the next steps involve a range of controls that can help mitigate threat vectors ranging from human risks to threats from natural disasters.

The following best practices and security controls can help you get started with data center security:

Restricted access & multi-layer authentication

Only authorized personnel should be allowed to enter (and exit) the premises. Multiple layers of security—passwords, RFID tags, and biometrics—can be combined to enforce this policy.

Server systems should be segmented so that the principle of least-privilege access can be applied; if one section of the data center is compromised, the damage stays contained within it.

(Read about zero trust network access.)

Fire safety & HVAC

Fire incidents, explosions, and inadequate HVAC affect the dependability of a server room. These incidents can cause irreversible damage, especially when the stored data is not adequately duplicated. Consequently, it's important to evaluate the building's capacity to withstand these risks.

Adopt fire detection and control systems, automate emergency service routing, and limit building occupancy. Data center efficiency is highly dependent on the HVAC systems. An effective server room design considers all aspects of ventilation, including damage limitation in the event of a fire.

Building structure & utility infrastructure capacity

The hardware racks and building structure should be able to support heavy hardware. Access to these devices should be convenient and systematic: troubleshooting, repairs, and upgrades should take minimal time and effort. The utility infrastructure that powers HVAC systems should be designed for:

  • High capacity
  • Structural integrity
  • Long life

Information security

Physical security of a server room also impacts the ability to secure information stored within the server systems. If the data is encrypted, it will remain secure even when the storage devices in the server room are compromised.
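
As a minimal sketch of what encryption at rest looks like in practice, the example below uses the Python cryptography library's Fernet recipe to encrypt records before they touch disk. The file name and key handling are placeholders; production systems typically rely on full-disk or storage-array encryption backed by a proper key management service.

```python
# Minimal sketch: encrypting data before it reaches disk (assumes `pip install cryptography`).
# Key handling here is illustrative only; real deployments should use a KMS or HSM.
from cryptography.fernet import Fernet

key = Fernet.generate_key()           # in practice, fetch this from a key management service
cipher = Fernet(key)

plaintext = b"customer-records: ..."
ciphertext = cipher.encrypt(plaintext)

with open("records.enc", "wb") as f:  # hypothetical file name
    f.write(ciphertext)

# Even if the storage device is stolen, the data is unreadable without the key.
with open("records.enc", "rb") as f:
    restored = cipher.decrypt(f.read())
assert restored == plaintext
```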

Similarly, the server systems should be designed for redundancy. If one device is no longer operational or is compromised, the stored data should be accessible through alternative and redundant storage devices.

(Learn more about data center redundancies.)

Emergency services

In the event of a security breach or emergency incident, access to emergency services—police, healthcare, and firefighting services—should be automated and highly available. Deploy automated technology systems to inform the appropriate emergency services when an incident occurs, and engage with private security services to enhance building security.

Securing server rooms is critical business

Securing server rooms is an absolute necessity. It is not a cheap endeavor (would you want cheap security?), so you'll have to find a balance of security, accessibility, and cost that fits your organization. Leadership may be hesitant to invest in server security, but something will go wrong eventually; it's only a matter of when. Knowing that, you can choose to play offense instead of defense.

A good tenet of server room security: the more you control, the more secure your servers will be.

Related reading

How Data Center Colocation Works

Data Center Colocation (aka “colo”) is a rental service for enterprise customers to store their servers and other hardware necessary for daily operations. The service offers shared, secure spaces in cool, monitored environments ideal for servers, while ensuring bandwidth needs are met. The data center will offer tiers of services that guarantee a certain amount of uptime.

The decision to move, expand, or consolidate your data center is one that must be weighed in the context of cost, operational reliability and of course, security. With these considerations in mind, more companies are finding that colocation offers the solution they need without the hassle of managing their own data center.

Data center colocation works like renting from a landlord: Customers rent space in the center to store their hardware.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Benefits of data center colocation

Data center colocation could be the right choice for any business of any size, in any industry. Let’s look at the benefits.

Uptime

Server uptime is a big advantage of data center colocation for enterprise businesses. By buying into a specific tier, each client is guaranteed a certain percentage of uptime without the payroll or ongoing maintenance costs of achieving it in-house.
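
To make "a certain percentage of uptime" concrete, the short sketch below converts commonly cited tier availability figures into maximum annual downtime. The percentages are illustrative, not any particular provider's SLA.

```python
# Convert an availability percentage into maximum allowed downtime per year.
# The tier percentages below are commonly cited figures, not a specific provider's guarantee.
MINUTES_PER_YEAR = 365 * 24 * 60

def max_downtime_minutes(availability_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for label, pct in [("Tier I (99.671%)", 99.671),
                   ("Tier II (99.741%)", 99.741),
                   ("Tier III (99.982%)", 99.982),
                   ("Tier IV (99.995%)", 99.995)]:
    print(f"{label}: ~{max_downtime_minutes(pct):.0f} minutes of downtime per year")
```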

Risk management

Utilizing a colocation facility ensures business continuity in the event of natural disasters or an outage. This means that if your business location loses power, your network traffic will not be affected.

The key to this is redundancy. The layers of redundancy offered at a data center colocation are far more complex than many companies can afford in-house.

Some enterprise companies will consider the off-site location as their primary data storage location while maintaining onsite copies of data as backup.

(Read about enterprise risk management.)

Security

Data centers are equipped with the latest security technology, including cameras, biometric readers, and check-in desks that screen inbound visitors; security badge checks are commonplace.

These facilities are monitored 24/7/365, both in the physical world and on the cloud to ensure that unauthorized access does not occur.

Cost

One of the main advantages of colocation is that it results in significant cost savings especially when measured against managing a data center in-house. This means that for many companies, renting the space they need from a data center offers a practical solution to ever-shrinking IT budgets. With colocation, there is no need to worry about planning for capital expenditures such as:

  • UPS (uninterruptible power supplies)
  • Multiple backup generators
  • Power grids
  • HVAC units (and the ongoing cost of cooling)

Apart from these capital expenditures, there are also ongoing costs associated with maintaining and managing in-house servers.

Bandwidth

Colos provide the bandwidth that enterprise client servers need to function properly. With large pipes of bandwidth to power multiple companies, data center colocations are primed to support businesses in a way their office location likely cannot—something that's increasingly important to remote work.

Support & certifications

Data center colocation offers the benefit of peace of mind.

When you partner with a data center colocation, your enterprise business may be able to reduce potential payroll costs by relying on the data center's experts to manage and troubleshoot major pieces of equipment. Enterprise businesses can count on support from staff who are certified to help.

Scalability

As your business grows, you can easily expand your IT infrastructure needs through colocation.

Different industries will have different requirements in terms of the functionalities they need from their data center as it relates to space, power, support and security. Regardless, your service provider will work with you to determine your needs and make adjustments quickly.

In-house data center vs data center colocation

While data center outsourcing offers many benefits, some enterprise organizations may still prefer to manage their own data centers for a few reasons.

Control over data

Whenever you put important equipment in someone else’s charge, you run the risk of damage to your equipment and even accidental data loss. Fortunately, data centers are set up with redundancy and other protocols to reduce the likelihood of this occurring, as discussed above.

But some enterprise businesses with the knowledge and resources to handle data in-house feel more comfortable being liable for their own servers.

They also benefit from being able to fix server issues immediately when they occur. Enterprise businesses who seek to outsource instead must work closely with their service providers to ensure issues are resolved in a timely manner.

Contractual constraints

Enterprise business owners may find that they are unpleasantly surprised by the limitations of the contract between their company and a colo facility. Watch for clauses covering:

  • Vendor lock-in
  • Contract termination or nonrenewal
  • Equipment ownership

Choosing a data center

Here are eight considerations enterprise IT Directors should think about before moving their data to a co-located data facility.

  1. Is the agreement flexible to meet my needs?
  2. Does the facility support my power needs, current and future?
  3. Is the facility network carrier neutral? Or does it offer a variety of network carriers?
  4. Is it the best location for my data? Accessible? Out of the way of disaster areas?
  5. Is the security up to my standards?
  6. Is the data center certified with the Uptime Institute?
  7. Does my enterprise business have a plan for handling transitional costs?
  8. Is this data center scalable for future growth?

If an enterprise business leader can answer ‘yes’ to the above questions, it may be the right time to make the change.

Cloud services vs colocation

The cloud is another option besides data center colocation:

  • A cloud services provider will manage all elements of the data: servers, storage, and network elements.
  • An enterprise's only responsibility is to consume those services and put them to use.

Cloud services are great for allowing a business to focus more on their business requirements and less on the technical requirements for warehousing their data. In this case, cloud services can be cheaper, and enable new businesses to get off the ground quicker.

More established businesses are often considered better suited to handle their own data center needs through colo or in-house means, and the cost of establishing and maintaining a colo can be lower in the long run than cloud service options.

Cloud services also offer quick start-up times, a lower technical barrier to getting going, easy scaling of server capacity (both up and down), and integration with the other services a cloud provider might offer, such as:

  • Integrated monitoring
  • Data storage and querying tools
  • Networking tools
  • Machine learning tools

(Accurately estimate the cost of your cloud migration.)

What’s next for data center colocation?

The biggest push in the industry comes from cloud service providers who use colo as a way to meet their hefty equipment storage needs. At the same time, the industry has been and will continue to remain fluid as laws change with regard to cloud storage requirements.

While soaring demand from cloud service providers has increased the need for data center colocation, new higher-density rack technology lets colo facilities meet that demand with less physical space.

Related reading

Introduction To Data Center Operations

Data Center Operations refer to the systems, processes, and workflows used to operate a data center facility. These operations include several areas:

  • The construction, maintenance, and procurement of data center infrastructure
  • The IT systems architecture design and security
  • Ongoing data center management, including compliance, audits, and accounting of the data center organization

In this article, let’s look at data center operations, including the core components of running and supporting a data center.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

How data centers work

Large cloud vendors including AWS, Google, and Microsoft operate a global footprint of data center facilities that serve cloud-based computing services to millions of business organizations and Internet consumers.

Global data center IT spending reached $196 billion in 2021, and over 700 hyperscale data centers are operational around the world.

Global Internet traffic grew by roughly 40% in 2020 alone; over the longer term, the number of Internet users has doubled and traffic has been increasing at about 30% per year. The prevalence of work-from-home business practices and video streaming services has significantly increased the demand for highly available data center operations. Other contributors to high data center energy consumption include:

  • Machine learning training and inference
  • Bitcoin and other cryptocurrency mining

These services are delivered to end users at the performance and dependability levels specified in Service Level Agreements (SLAs). Additionally, these data center facilities operate in compliance with stringent global regulations and standards such as GDPR, HIPAA, ISO/IEC 27001, and SOC 2, among others.

Components of data center operations

To meet these objectives, modern data center operations rest on the following key pillars:

  • Physical components
  • IoT, connected systems & data-driven control
  • Standards and process workflows

Let’s take a look at each pillar.

Physical data center components

The physical design aspects are critical to managing highly dependable data center operations. Some of the most efficient data centers are located in low-temperature geographic regions, safe and secure from natural and man-made disasters, with ready access to utility and emergency services.

The common physical elements of a data center include:

  • The facility. The building space with efficient access to utility and emergency services. Since data centers are some of the most energy-consuming building facilities, the architecture is optimized for space and environmental control. Locations with favorable humidity and low temperatures are chosen so that natural cooling can offset the energy otherwise needed to cool data center components. Data centers account for around 1% of global electricity demand, which comes to about 250 TWh.
  • Core components. This includes the standard IT equipment and software necessary to deliver computing services to a large customer base. These include servers, networking devices, infrastructure such as racks, HVAC and electrical systems, and other computing infrastructure resources.
  • Support infrastructure. This includes the physical security of the space, HVAC cooling, uninterruptible power supplies (UPS) along with generators and battery banks, utility services infrastructure, and access to emergency services, all of which are critical to maintaining data center operations.
  • Operational staff. The workforce that supports the data center, which can include employees on-premises as well as off-site teams that manage and maintain data center operations to meet the defined performance, security, and compliance standards.

(Learn how the cloud is changing data center jobs.)

IoT, connected systems & data-driven control

The modern data center is highly dependent on a network of connected devices that relay information on several key attributes of the data center operations. These are not limited solely to computing performance and network security, but also include the overall performance of the facility in terms of:

  • Cooling
  • Energy consumption
  • Airflows
  • Reliability
  • Costs

A Data Center Infrastructure Management (DCIM) solution integrates the network of IoT sensors to capture relevant information logs from across the facility and data center components. These technologies use sophisticated algorithms and analytics capabilities to:

  • Report on data center performance
  • Guide on decisions to optimize various aspects of data center operations
  • Manage workflow changes at the physical layer of the IT network with respect to the network traffic and software applications running on the servers

Therefore, the supply of computing resources is optimized against changing demands and network traffic flows.

To achieve these goals, the DCIM also physically tracks every component of the IT environment, each tagged with an RFID chip. As a result, the DCIM presents a holistic dashboard view of the current status of all components and helps engineers manage process workflows accordingly.
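
As a simplified illustration of this kind of data-driven control, the sketch below aggregates hypothetical per-rack sensor readings and flags racks whose temperature or power draw exceeds a threshold. The rack names, readings, and limits are invented; a real DCIM suite does this continuously across thousands of tagged assets.

```python
# Illustrative only: aggregate per-rack sensor readings and flag threshold breaches.
from collections import defaultdict
from statistics import mean

readings = [
    {"rack": "R01", "temp_c": 24.1, "power_kw": 4.2},
    {"rack": "R01", "temp_c": 26.8, "power_kw": 4.5},
    {"rack": "R02", "temp_c": 31.4, "power_kw": 6.1},
]

TEMP_LIMIT_C = 30.0     # hypothetical operating limits
POWER_LIMIT_KW = 6.0

by_rack = defaultdict(list)
for r in readings:
    by_rack[r["rack"]].append(r)

for rack, samples in by_rack.items():
    avg_temp = mean(s["temp_c"] for s in samples)
    peak_power = max(s["power_kw"] for s in samples)
    if avg_temp > TEMP_LIMIT_C or peak_power > POWER_LIMIT_KW:
        print(f"ALERT {rack}: avg temp {avg_temp:.1f} C, peak power {peak_power:.1f} kW")
```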

(Read all about DCIMs & data center management.)


Standards & process workflows

A significant proportion of data center optimization takes place at the logical level. The operational workflows that govern information flow, system design, engineering and business practices, and end-to-end data center lifecycle procedures determine the effectiveness of the facility.

Industry standards and organizations—including Lawrence Berkeley National Laboratory, The Green Grid, Open Compute Project, ITI and the TBM Council—provide guidelines on managing data center operations. These guidelines encompass the end-to-end lifecycle of data center operations, including:

  • Design and deployment
  • Management and troubleshooting
  • Decommissioning of data center components

Organizations such as the National Institute of Standards and Technology (NIST) provide guidelines on information systems and design architecture of the IT environment.

Optimizing data center operations for customer value

The final element of cloud-based data center operations corresponds to the IT services delivered to end-users. Data center organizations can adopt tools such as ITIL 4 to integrate multiple service management operating models that can help organizations optimize IT operations for maximum business value.

Related reading

Data Center Infrastructure Management (DCIM) Solutions Explained

Data Center Infrastructure Management (DCIM) refers to the discipline of managing and optimizing the operations and performance of your data center infrastructure.

A vast subject, DCIM emerged in response to the growing complexity of IT infrastructure systems.

Let’s take a look at the basics of this topic.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Data center infrastructure is complex

Ever since business organizations adopted networked computing technologies, demands for computing resources have evolved haphazardly.

In recent decades, requirements around scalability, performance, security, and operations have forced IT to adopt rapid changes to the underlying infrastructure all too often. As technology evolves to address the growing demands of the modern enterprise, new hardware gradually replaces the old while IT struggles to manage a hybrid mix of the latest and legacy data center systems.

The lack of a comprehensive integrated control system for hardware assets gave rise to the need for a purpose-built enterprise-class data center management suite. Enter DCIM.

(Smoothly migrate your data center with this checklist.)

What is data center infrastructure management?

In simple terms, DCIM is a software solution that delivers the processes and tools necessary to manage data center environments in a structured way. A DCIM tool could be the center point of your data center management.

One key value proposition of DCIM is its ability to manage change workflows at the physical layer in connection with the software applications that run on top of them.

DCIM also helps IT optimize the supply of computing resources against the changing demand.

For example, IT is encouraged to minimize the per-unit cost of operating computing resources while still satisfying peak demand when necessary. Few enterprises can justify investing in enough expensive data center capacity to match peak demand; instead, they rely on cloud infrastructure resources that integrate seamlessly into their IT environment.

DCIM technology uses a critical set of metrics that consider the performance of virtual instances, autonomously matching operations between resource demand and supply.

Components of DCIM

DCIM solutions are made of several components. These support a variety of enterprise IT functions at the infrastructure layer.

Physical architecture

The floor space of a data center is planned according to:

  • The dimensions of the equipment
  • Airflow and cooling
  • Human access
  • Other geometric and physical factors

Here, DCIM technology helps you visualize and simulate the representation of server racks deployed in the data center, so you can determine if the physical space is satisfactory.

Rack design

Typically, you'll use standardized cabinets to install server and networking technologies in your data center. Understanding the specifics of rack design helps data center organizations plan for capacity, space, cooling, and access for maintenance and troubleshooting.

DCIM can help optimize the selection and placement of server racks based on these factors.

(Learn how to secure your server room.)

Materials catalog

DCIM technologies contain vast libraries of equipment material. The information ranges from basic parameter specifications to high-resolution renders. With new technologies introduced rapidly in the industry, these libraries are updated and maintained regularly in coordination with the vendors.

Change management

Data center hardware must be replaced periodically, due to a few reasons:

  • The inherently limited lifecycle of hardware
  • A malfunction
  • The need to upgrade to a better product

Such changes, however, can affect the performance of other integrated infrastructure technologies. DCIM provides a structured approach to managing hardware changes, letting IT change or replace hardware by:

  • Following predefined process workflows
  • Reducing the risks associated with the change

Capacity planning

The data center should be designed to scale in response to changing business needs. That means your capacity planning must account for:

  • Space limitations
  • Weight of equipment and racks
  • Power supply
  • Cooling performance
  • A range of other physical limitations of the data center

The DCIM tool can model a variety of future/potential scenarios, planning future capacity based on these limitations.
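
A toy version of that kind of scenario check might look like the sketch below, which tests whether a planned set of devices fits within a single rack's space, power, and weight limits. The limits and device figures are invented for illustration; a DCIM tool evaluates far more constraints across the whole facility.

```python
# Hypothetical capacity check for a single rack: space (rack units), power, and weight.
RACK_LIMITS = {"rack_units": 42, "power_kw": 8.0, "weight_kg": 900}

planned_devices = [
    {"name": "1U server", "rack_units": 1, "power_kw": 0.35, "weight_kg": 15},
] * 30 + [
    {"name": "ToR switch", "rack_units": 1, "power_kw": 0.20, "weight_kg": 10},
]

totals = {key: sum(d[key] for d in planned_devices)
          for key in ("rack_units", "power_kw", "weight_kg")}

for key, limit in RACK_LIMITS.items():
    status = "OK" if totals[key] <= limit else "OVER LIMIT"
    print(f"{key}: planned {totals[key]:.2f} / limit {limit} -> {status}")
```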

(Read more about capacity planning for the mainframe.)

Software integration

DCIM solutions integrate with existing management solutions that are designed to track and coordinate data center assets and workflows. Integrations can include:

  • Protocols such as SNMP and Modbus
  • Complex web integrations
  • CMDBs

Data analysis

Real-time data collection and analysis is a critical feature of DCIM technologies. With a DCIM tool, you can:

  • Track a variety of asset metrics
  • Transfer data between DCIM solutions using web-based APIs
  • Analyze data using advanced AI solutions

Watching the real-time behavior of these metrics can help you anticipate and mitigate incidents such as power failures, security infringements, and network outages before they occur.

(See how data analysis can support DCIM.)

Reporting & dashboard

A good DCIM tool transforms vast volumes of metrics log data into intuitive dashboards and comprehensive reports. Automated actions can be triggered using the reporting information and studied for further analysis.

DCIM capabilities & vendors

These capabilities are delivered through multiple software modules and solutions, potentially from several different vendors, which can be integrated into a comprehensive DCIM suite.

Some of the popular DCIM vendors include:

  • Nlyte Software
  • Sunbird Software
  • Vertiv
  • Schneider Electric
  • openDCIM

Getting started with DCIM

Moving from traditional spreadsheet planning to a full-scale DCIM suite may require organizations to reevaluate how they manage their data center assets.

A good starting point is to adopt DCIM solution modules in phases: start with the bare minimum and upgrade the functionality in small steps.

Related reading

Data Center Migration: Creating a DC Inventory

There are many reasons an organization may need to migrate an in-house data center (DC) to another location, whether it’s a new physical location or in the cloud. All sorts of scenarios can trigger the data center move:

  • Mergers and expansions
  • Cloud initiatives
  • End of lease situations
  • Regulatory requirements

Before you can start moving, though, you need to know what you have. That’s why all DC migrations have one strategic best practice in common:

You need to discover, map, and inventory all your existing data center assets in order to move, replace, or retire them.

Let’s take a look at what goes into your DC inventory, so you can build the best foundation for your DC migration. We’ll also point out common surprises you might find along the way. The earlier in planning you know about them, the less likely they’ll cause problems during the overall migration.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

What’s a data center inventory?

The data center inventory provides all the planning information that you’ll need to successfully plan a data center move to either another physical DC or the cloud.

Your inventory should document two key categories:

  • The physical and network infrastructure that powers your company
  • The application connections that rely on that infrastructure

The DC inventory underpins your approach and strategy for your migration. It also helps you identify and plan for any risks your organization faces with a DC move.

A DC inventory should be one of the first items you create after the decision to move a data center has been made.

Elements of a data center inventory

A data center inventory provides a road map for what needs to be done to close and relocate a data center. You'll want to create this list as the first step of your planned migration. Once the migration is confirmed, use this checklist to begin your inventory.

You can create a data center inventory in two ways: manually, by documenting assets by hand, or automatically, using a discovery and dependency mapping tool.

The following image illustrates the various DC item categories you will need to discover and map while planning a data center move.

data center inventory

Now, let’s take a look at each category in more detail.

Current data center contractual obligations

Review any terms and conditions associated with the data center you are leaving, including termination clauses and penalties. This tells you what obligations you have in leaving an existing data center.

Hardware inventory

The hardware inventory discovers the physical servers and infrastructure equipment you need to move or replace. This equipment should include all DC network servers, PCs, printers, routers, switches, firewalls, Web filters, Web server farms, load balancing devices, modems, edge or DMZ servers, uninterruptible power supplies (UPSes), power distribution units (PDUs), backup devices, etc.

For each piece of equipment, you’ll need to gather and inventory these details:

  • Machine manufacturer, model, and date placed in service or approximate age
  • Operating system and version
  • IP addresses, ethernet adaptors, subnet masks, default gateways, and DNS servers each piece of hardware uses
  • Relevant equipment-specific information, including CPU, memory, storage, and database types (e.g., MS SQL Server, Oracle, IBM DB2)
  • Power requirements, including wattage, input/output voltage, amps, types of electrical connectors, and whether the equipment uses single or redundant power supplies

This list may provide surprises, including critical equipment you did not realize you had, or ancient servers that still service critical functions. Be thorough.
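
One lightweight way to keep these details consistent across the inventory is to capture each piece of equipment as a structured record that can be exported or fed into a discovery tool. The sketch below uses assumed field names and invented values; adapt the schema to whatever your migration team actually tracks.

```python
# Illustrative inventory record; field names mirror the list above but are not a standard schema.
from dataclasses import dataclass, asdict, field
import json

@dataclass
class HardwareAsset:
    manufacturer: str
    model: str
    in_service_since: str            # or approximate age
    operating_system: str
    ip_addresses: list = field(default_factory=list)
    cpu: str = ""
    memory_gb: int = 0
    storage_tb: float = 0.0
    power_watts: int = 0
    redundant_power: bool = False

asset = HardwareAsset(
    manufacturer="ExampleCorp",      # hypothetical values
    model="X100",
    in_service_since="2019-06",
    operating_system="Linux",
    ip_addresses=["10.0.4.15"],
    cpu="2x 16-core",
    memory_gb=256,
    storage_tb=8.0,
    power_watts=750,
    redundant_power=True,
)
print(json.dumps(asdict(asset), indent=2))
```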

Communications inventory

The communications inventory includes all non-tangible network resources and configurations that you’ll need to either move, replace, or retire when migrating the DC.

Items needed here include:

  • Internet class A, B, or C networks used in the DC and the organization you obtained them from, and the IP subnets used from these networks
  • Internal (non-routable) IP address subnets used in the data center (10.x.x.x, 192.168.x.x, or 172.16.x.x through 172.31.x.x)
  • Any Classless Inter-Domain Routing (CIDR) blocks or subnet masks for your IP subnets (see the subnet-check sketch after this list)
  • IP information gathered from the hardware inventory
  • Telecommunications lines (Telco), IP address classes, and subnet masks associated with each Telco line
  • Domain names and registrars for domains that are tied to data center IP addresses
  • DHCP IP address reservations for specific DC and subnet equipment
  • Internal and external DNS entries that reference IP addresses in the data center
  • Firewall access control lists (ACLs) containing outside IP addresses and ports that the firewall allows traffic into and out of the data center
  • Wide area network (WAN) mapping showing any outside locations that communicate with the existing Data Center through the WAN, along with any mesh network connections the DC uses in case of network failure
  • Contract information associated with any leased resource, including the date the contract expires, line speeds (if applicable), and termination procedures
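
When validating which inventoried host addresses belong to which documented subnets, Python's standard ipaddress module can do the CIDR math for you, as the sketch below shows with invented subnets and hosts.

```python
# Check which inventoried host addresses fall inside documented subnets (stdlib only).
import ipaddress

documented_subnets = [
    ipaddress.ip_network("10.20.0.0/16"),      # hypothetical internal DC subnet
    ipaddress.ip_network("192.168.50.0/24"),   # hypothetical management subnet
]

hosts = ["10.20.4.15", "192.168.50.10", "172.16.9.1"]  # invented examples

for host in hosts:
    ip = ipaddress.ip_address(host)
    owners = [str(net) for net in documented_subnets if ip in net]
    print(f"{host}: {'in ' + ', '.join(owners) if owners else 'NOT in any documented subnet'}")
```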

Like the hardware inventory, the communications inventory can contain surprises. Surprises might include:

  • Domain IP addresses that are owned by the data center you are moving away from
  • Telco lines that have multiple years left on their contracts
  • Severe penalties for terminating a contract
  • Other unexpected items

Application inventory map

After you have completed your hardware and communications inventories, identify all applications running on DC hardware and the physical or logical machines they run on, as well as any outside servers that communicate with DC hardware or communication resources. This includes:

  • Core network applications such as domain controllers and file and print servers
  • Support services, such as:
    • Windows Server Update Services (WSUS) servers that provide Windows patches to client devices,
    • Third-party servers that update client software such as anti-virus and malware management consoles, email servers, database servers, Web servers, FTP servers, time-clock servers, backup servers
    • Remote access servers that reside on data center servers
  • Production applications that run your business such as ERP software, business intelligence, big data servers, data warehouses, and CRM software residing on DC servers
  • Servers or applications in other organization-owned data centers that communicate with the applications in the DC being moved
  • PC applications that communicate with applications in the data center
  • Customers and business partners who access your network or applications through firewalls in your Data Center, for applications such as EDI, FTP exchange, and remote access to vendor-owned equipment
  • External Web sites that exchange data with DC servers
  • Email providers and email filtering services
  • IP addresses or DNS entries each of the above entities use to contact applications on your network

The application inventory provides a map of how interconnected your data center is, both inside and outside your organization. It shows you all the connections that need to be accounted for when you move.
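
A crude version of that connection map can be modeled as an adjacency list, so you can ask which systems are affected when a given server or service moves. The application names below are invented; discovery tools build this graph for you at much larger scale.

```python
# Hypothetical application dependency map: app -> things it talks to.
from collections import deque

depends_on = {
    "erp": ["db-cluster", "file-server"],
    "crm": ["db-cluster", "email-gateway"],
    "reporting": ["erp", "data-warehouse"],
    "data-warehouse": ["db-cluster"],
}

def affected_by(target: str) -> set:
    """Return every application that directly or indirectly depends on `target`."""
    affected, queue = set(), deque([target])
    while queue:
        current = queue.popleft()
        for app, deps in depends_on.items():
            if current in deps and app not in affected:
                affected.add(app)
                queue.append(app)
    return affected

print(affected_by("db-cluster"))  # e.g. {'erp', 'crm', 'data-warehouse', 'reporting'}
```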

SLA requirements

Review and inventory all customer Service Level Agreements (SLAs) that are currently in place and are related to Data Center performance. SLAs will include items such as:

  • Network reliability
  • Availability
  • Support

Any SLAs that were required for the old DC will probably need to be enacted and enforced for the new data center.

BMC supports data centers

BMC Helix Discovery is a cloud-native discovery and dependency mapping solution. It gives you the exact visibility into hardware, software, and service dependencies you need for any data center move. Have assets across the multi-cloud? BMC Helix Discovery can handle those, too.

BMC Helix Discovery can help your company improve:

  • Service awareness and delivery
  • Security
  • Cost transparency
  • Digital transformation

Plus, a recent report indicates that the average organization deploying BMC Helix Discovery sees a payback period of only five months.

Related reading

What Is a Software-Defined Data Center? SDDCs Explained

Are you looking to improve your IT agility? Is scaling up your physical data center too slow and cumbersome?

A software defined data center (SDDC) might be what your company needs to accelerate your IT service delivery. SDDCs resemble traditional data centers, with some notable differences, particularly through their use of virtualization, abstraction, resource pooling, and automation.

In this article, we’ll cover these topics to help you determine whether you’re ready to transition to an SDDC:

  • SDDCs
  • Key components
  • Benefits of SDDCs
  • SDDC vendors
  • How to switch to an SDDC
  • Additional resources

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

What is an SDDC?

A traditional data center is a facility where organizational data, applications, networks, and infrastructure are centrally housed and accessed. It is the hub for IT operations and physical infrastructure equipment, including servers, storage devices, network equipment, and security devices. Traditional data centers can be hosted:

  • On-premise
  • With a managed service provider (MSP)
  • In the cloud

In contrast, a software-defined data center is an IT-as-a-Service (ITaaS) platform that services an organization’s software, infrastructure, or platform needs. An SDDC can be housed on-premise, at an MSP, and in private, public, or hosted clouds. (For our purposes, we will discuss the benefits of hosting an SDDC in the cloud.) Like traditional data centers, SDDCs also host servers, storage devices, network equipment, and security devices.

Here’s where the differences come in.

Unlike traditional data centers, an SDDC uses a virtualized environment to deliver a programmatic approach to the functions of a traditional data center. Like server virtualization concepts used for years, SDDCs abstract, pool, and virtualize all data center services and resources in order to:

  • Reduce costs
  • Increase scalability
  • Improve business agility

(Learn more about IT virtualization.)

You can manage SDDCs from any location, using remote APIs and Web browser interfaces. SDDCs also make extensive use of automation capabilities to:

  • Reduce IT resource usage
  • Provide automated deployment and management for many core functions

Key components of SDDCs

With the advent of hyperconverged infrastructure—where all IT infrastructure elements are software-defined and deployed—network technology has the power to support cloud SDDC and ITaaS initiatives.

SDDC host servers reside in the cloud, where you can create and configure your data centers to your needs, without having to physically configure or host network equipment. SDDCs rely heavily on virtualization technologies to abstract, pool, manage, and deploy data center functions.


Key SDDC architectural components include:

  • Compute virtualization, where virtual machines (VMs)—including their operating systems, CPUs, memory, and software—reside on cloud servers. Compute virtualization allows users to create software implementations of computers that can be spun up or spun down as needed, decreasing provisioning time.
  • Network virtualization, where the network infrastructure servicing your VMs can be provisioned without worrying about the underlying hardware. Network infrastructure needs—telecommunications, firewalls, subnets, routing, administration, DNS, etc.—are configured inside your cloud SDDC on the vendor’s abstracted hardware. No network hardware assembly is required.
  • Storage virtualization, where disk storage is provisioned from the SDDC vendor’s storage pool. You get to choose your storage types, based on your needs and costs. You can quickly add storage to a VM when needed.
  • Management and automation software. SDDCs use management and automation software to keep business critical functions working around the clock, reducing the need for IT manpower. Remote management and automation is delivered via a software platform accessible from any suitable location, via APIs or Web browser access.

You can also connect additional critical software to extend and customize your SDDC platform. But for companies just moving to an SDDC, the first goal is to get your basic operations software infrastructure ready for the transition. Customizing can come later.
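
To illustrate the programmatic, API-driven nature of these components, the sketch below assembles a declarative specification for a small workload and prepares it for submission to a hypothetical SDDC management endpoint. The URL, field names, and token are assumptions for illustration, not any particular vendor's API.

```python
# Illustrative only: a declarative workload spec posted to a made-up SDDC management API.
import json
import urllib.request

spec = {
    "compute": {"name": "web-vm-01", "vcpus": 4, "memory_gb": 16, "image": "ubuntu-22.04"},
    "network": {"segment": "web-tier", "cidr": "10.30.1.0/24", "firewall": ["allow tcp/443"]},
    "storage": {"volume_gb": 200, "tier": "ssd"},
}

request = urllib.request.Request(
    "https://sddc.example.internal/api/v1/workloads",   # hypothetical endpoint
    data=json.dumps(spec).encode(),
    headers={"Content-Type": "application/json", "Authorization": "Bearer <token>"},
    method="POST",
)
# urllib.request.urlopen(request)  # would submit the spec; left commented since the endpoint is fictional
print(json.dumps(spec, indent=2))
```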

Benefits of SDDCs

Many experts believe that the switch from traditional data centers to SDDCs is inevitable, as platforms like IT-as-a-Service emerge as the new normal.

As your organization transitions to an SDDC, you can expect to benefit in several major ways, including:

Business agility

An SDDC offers several benefits that improve business agility with a focus on three key areas:

  • Balance
  • Flexibility
  • Adaptability

SDDCs increase business productivity by consolidating duplicate functions. This means that IT resources are freed up to spend their time solving other problems, resulting in greater agility.

Lastly, SDDCs help businesses increase their ROI so they have more funds to spend on long-term strategy and innovation.

Reduced cost

In general, it costs less to operate an SDDC than to house data in brick-and-mortar data centers. Traditional data centers must charge more to cover the cost of:

  • Round-the-clock employees
  • Security
  • Operational needs like building leases and hardware

Organizations that house their data in-house require additional IT manpower, expensive equipment, time, and maintenance. Those that have not put much thought into data storage may also face the cost of a potential data breach, and an expensive hardware malfunction is yet another possible cause of data loss.

Cloud SDDCs operate similarly to SaaS platforms that charge a recurring monthly cost. This is usually an affordable rate, making an SDDC accessible to all types of businesses, even those who may not have a big budget for technology spending.

Increased scalability

By design, cloud SDDCs can easily expand along with your business. Increasing your storage space or adding functions is usually as easy as contacting the data facility to get a revised monthly service quote.

This is a significant advantage over brick-and-mortar data centers, which scale only by making more room for additional servers, purchasing hardware and software, and bringing in manpower to make the transition. This clearly costs more and takes more time.

The appeal of outsourced data centers has always been that they take the burden off your company's shoulders, leaving your in-house IT team to focus on strategy. But SDDCs take this benefit a step further, offering potentially unlimited scalability.

SDDC vendors

SDDC hardware and services are sold by many different vendors, including:

  • Dell/VMware
  • Hewlett Packard Enterprise
  • Microsoft
  • Amazon
  • Oracle
  • Citrix
  • IBM
  • Business partners who help create and configure custom SDDCs

How to move to an SDDC

While SDDCs offer many benefits, not every organization is ready to make the transition. Here are some key considerations to determine whether your organization is ready.

Assess company culture

Collaboration is critical to the adoption of new technology. Is your company’s culture one of collaboration? Be honest.

It can be difficult for legacy businesses to transition to an SDDC without carefully considering long-term strategy and employee skillsets.

Pay attention to timing

For your organization, a complete traditional DC-to-SDDC migration could be years away. That does not mean that key stakeholders should not be considering it today. Leadership should evaluate solutions and start by transitioning software one piece at a time.

  • Many organizations start utilizing SDDCs by migrating their Microsoft Exchange servers to Microsoft Azure servers, retaining their ability to control their Exchange setup while Microsoft’s SDDC handles the infrastructure setup.
  • Other SDDC migration candidates might include file and print servers and other special purpose servers that do not need to run in an organizational data center.

Ultimately, CIOs and CTOs should implement a basic infrastructure of software that can link with an SDDC.

Understand the capabilities & limitations of your DevOps team

Adding a platform like SDDC requires a commitment from DevOps. Cloud SDDC migration requires fewer traditional IT operations skills focused on maintaining the infrastructure, and more DevOps skills focused on application delivery speed, product quality, and automation.

Cloud SDDCs reduce IT infrastructure and operations silos and increase the need for DevOps skills, including container and microservices management needed to run production workloads at scale. Before making this leap, assess whether:

  • You have the right DevOps team in place.
  • You will need to add additional employees—and determine which skills you need the most.
  • You could outsource professional help for the implementation.

Plan ahead

If your organization has not aligned the above requisites, now may not be the best time to make the switch. Instead, start with these actions:

  • Prepare your basic software infrastructure
  • Bolster your DevOps team
  • Pay attention to timing

CTOs should keep their focus on a long-term goal where they ultimately convert to an SDDC by the mid-2020s.

The outlook for SDDCs

SDDCs are not yet commonplace in today's digital economy, but technology trends suggest that they will be.

Until then, as more businesses virtualize automated IT functions, demand for both products like SDDCs and DevOps professionals who can code them will continue. SDDCs offer an innovative way to store data suitable for enterprise organizations interested in successfully using DevOps to advance digital transformation.

Related reading

Rise of Data Centers and Private Clouds in Response to Amazon's Hegemony

Amazon has long dominated the cloud platform scene for eCommerce in an era of big data. Since the company's inception, it has continuously used data about customer wants and desires to shape its investments in technology. Today, Amazon is exploring innovative ways to get products to customers faster and expand into new markets like fresh food and grocery delivery.

Such a confluence of innovation has given Amazon a hegemony in eCommerce that has remained unchallenged. Until now, that is.

Big box retailers and companies with significant enterprise capabilities are beginning to respond to Amazon’s dominant stance. Here’s what we can expect from challengers throwing their hats in the ring to compete with Amazon in 2018 and beyond.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Competitive Innovation in Retail

While data centers aren't totally wiping out shopping malls, paired with a private cloud they are becoming a force to be reckoned with in retail, thanks to their security and flexibility. What's clear today is that eCommerce brands need to think about cloud solutions if they want to rival players in a highly competitive market. This could lead those with the wherewithal to compete with Amazon to purchase data centers that allow them to build proprietary clouds in a hybrid-cloud environment.

Sizing Up the Competition

The ability to compete with Amazon is no small undertaking. There are only a handful of sizable retailers who stand poised to put a dent in Amazon’s market share. These are:

  • Walmart: retail giant in the brick and mortar big box sector
  • Apple: trailblazer of stores as a showcase in addition to being a major media competitor with iTunes (the largest online store for music and media)
  • Target: big box retailer with high standards for customer satisfaction
  • Best Buy: big box store specializing in technology and media
  • Alibaba Group: Sixth largest internet retailer
  • eBay, Zulily, Wayfair: other popular eCommerce sites

Within this space, companies like Walmart are beginning to leverage the advantages of private cloud computing to better compete with Amazon.

Benefits of Private Cloud for eCommerce

A private cloud refers to a cloud solution dedicated for use by a single organization. The data center resources may be located on-premise or operated off-site by a third-party vendor. A switch to a private cloud is an ideal choice for large enterprises that require advanced data center technologies to operate efficiently and cost-effectively, as well as for organizations with the financial resources to invest in high-performance, high-availability technologies.

Establishes Trust Among Customers

Implementing a private cloud is one way enterprise retailers convey security and trust, because their customers know that their personal information is being stored in a dedicated and secure environment that cannot be accessed by other organizations. Said another way, private clouds presumably offer a more secure architecture with fewer backdoors for hackers.

Highly Available

A major benefit of private clouds is high availability for sensitive, mission-critical IT workloads. This is especially important in the retail space when it comes to page load times and server availability, because proprietary private clouds are typically more reliable and faster.

Overall, access to data is typically much quicker in the case of on-premise private cloud environments because the information doesn’t have as far to travel compared with a public cloud.

Greater Flexibility

Private clouds have limitless potential for flexibility and scalability. While these proprietary server environments require considerably more management at the data center level compared to public cloud services, a well managed private cloud can offer enterprises more customizable solutions than any public cloud.

Data Storage

Data storage is a priority for retailers who store important customer information. Although private cloud storage is similar to public cloud storage in terms of scalability, usability, and flexibility, the major difference arises when we add security to the mix. Private cloud storage, also known as internal cloud storage, requires the implementation of a data center, which keeps all of a company's data centrally housed. Think of a data center as the hub for IT operations and equipment.

Compliance Standards

A private cloud can also help large retailers reduce their risk by adhering to compliance standards of organizations that govern a number of industries. No organization wants to experience a data breach of the magnitude of Target or Home Depot’s from a few years ago. Although these incidents were non-cloud, if retailers learned anything from the media backlash that occurred at the time, it’s that it pays to keep your data safe.

Walmart Response to Amazon Hegemony

We know that private clouds establish trust while offering accessibility, security and flexibility. Therefore, it’s no surprise that Walmart, like Amazon, has now embarked on the proprietary cloud journey.

In recent years, Walmart acquired six data centers in what is considered their “best chance at taking on Amazon”. In fact, some in the industry predict that Walmart hopes to create the world’s largest private cloud with these heavy investments.

The facilities took around five years to build, but since going online have boosted Walmart’s competitiveness. Indeed, Reuters reports that Walmart’s online sales growth has been outpacing industry standards.

In addition, Tim Kimmet, head of cloud operations at Walmart, suggests the shift to a private cloud is helping the company to improve services both online and in-store. The increase in service levels powered by cloud technology could be the beginning of a race to close the gap between Walmart and Amazon, while breathing new life into in-store shopping.

But the truth is, Walmart has a long way to go. As reported by Reuters, Walmart’s share of the US e-commerce market stands at 3.6%, while Amazon controls a whopping 43.5%.

Another question that has popped up is whether Walmart will enter the cloud services market to compete with Amazon’s AWS and add another revenue stream to its model. Kimmet indicated they had not ruled this idea out for the future.

Trends in Retail

Retailers big and small are becoming more reliant on cloud technology to power their business. Some trends, like showrooming, allow retailers to save on in-store operational costs while gently shifting most of their business to online shopping. This gives retailers the flexibility and scalability to support more customers, and also to collect important customer data that may one day allow them to compete with retail giants like Walmart and Amazon.

In addition, there are a number of startups hoping to create their own competitive advantages:

  • CommonSense Robotics: a robotics logistics company automating fulfillment centers-as-a-service
  • Flytrex: fully operational drone delivery service
  • Brinng: innovative last mile delivery service

These startups are bringing innovation to fulfillment and delivery for some retailers. But what's going to propel large enterprise eCommerce platforms closer to Amazon status sooner rather than later is a switch to data centers with a private cloud.

While not every retailer can afford a private cloud, as-a-service innovations have allowed all businesses to benefit from things like payment processing in the cloud, without the hassle of a lot of expensive equipment and operational processes.

Whether through public, hybrid, or private cloud solutions, the future of retail lies in the cloud.

The Rise of Data Centers & Private Clouds

While more and more retailers deploy cloud solutions that increase their efficiency and deliver more value to customers, the acquisition of data centers and implementation of private clouds will be limited to larger enterprise retailers who have the funds to compete in the current marketplace.

According to Gartner, the cloud industry is proving that on-premise data center deployments don’t always translate into strong security and that cloud computing is a secure alternative.

Hybrid cloud is another much-discussed solution, offering a mix of public and private cloud deployments optimized for cost, security, and performance based on organizational needs. Gartner also anticipates that the hybrid cloud market will continue to experience robust growth, with 90 percent of organizations investing in the technology by 2020.

While the private cloud infrastructure market is more likely to expand at a slower rate than the public cloud services market, IDC forecasts that it will continue to gain importance in IT investment decisions as organizations pursue secure and reliable alternatives for managing their data.

Final Thoughts

Amazon has led the pack in cloud computing for eCommerce for quite some time, but today the company's hegemony is being challenged by enterprises like Walmart that have the means and the wherewithal to leverage the benefits of the private cloud. Whether these efforts will prove successful remains to be seen.

The ability to manage cloud solutions effectively will be a major determining factor in the outcome of these initiatives. BMC Multi-Cloud Solutions are positioned to help IT achieve the full benefits of multi-cloud ecosystems through industry-leading, vendor-agnostic capabilities for:

  • Customizing cloud services to meet business needs
  • Managing a variety of platforms from a single console
  • Enabling high-precision analysis across different applications
  • Uniting cloud and enterprise management tools and processes

If you’re a retailer looking for help navigating private or hybrid cloud solutions to forge a lasting competitive advantage, then click here to learn more.

Power Outages at Public Cloud Data Centers: How To Mitigate Risks https://www.bmc.com/blogs/power-outages-cloud-data-centers/ Fri, 06 Jul 2018

Public cloud solutions allow businesses to access IT services without having to manage and operate the underlying infrastructure on-site. In effect, this also means that public cloud customers have no control over infrastructure operations in the way they would with on-site deployments. The vendor is entirely responsible for delivering a smooth operating environment as per the agreed Service Level Agreement (SLA), while customers can do little to ensure that downtime doesn't strike during periods of peak usage.

For instance, a massive cyclone that hit Loudoun County, Virginia, caused power outages at Equinix data center facilities and led to connectivity issues that impacted several AWS customers earlier this year.

At the same time, businesses operating on-premises IT infrastructure that lost power could have done little to bring it back either. Customers of public cloud infrastructure, however, have several options to mitigate the risks associated with power outages at public cloud data centers:

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

1. Understand the Risks of Public Cloud Outages

A few years ago, Gartner analysts suggested that IT outages pose a greater risk than security breaches in the cloud. Public cloud data centers are protected with several layers of sophisticated security mechanisms to prevent security infringements and data leaks. Power outages, by contrast, occur more frequently and render customer data inaccessible for the duration of the downtime. In some cases, the data is lost and irrecoverable.

Business organizations investing in cloud solutions should therefore understand the inherent risks of power outages in public cloud data centers and determine appropriate risk mitigation strategies. These risks include geographic and zonal threats from natural events such as cyclones and hurricanes. Organizations must also consider the true cost of the losses they may incur due to IT service outages. These may include the direct business loss due to service disruption, the impact on brand loyalty and reputation, lost revenue and business opportunities, and lost workforce productivity.
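
To make the cost argument concrete, here is a minimal back-of-the-envelope sketch of how such a loss estimate might be scripted. Every figure in it (revenue per hour, staff counts, loaded labor cost, recovery overhead) is a hypothetical placeholder, not a benchmark.

```python
# Rough cost-of-downtime estimate; every input figure below is a
# hypothetical placeholder and should be replaced with your own numbers.

def downtime_cost(outage_hours: float,
                  revenue_per_hour: float,
                  employees_idled: int,
                  cost_per_employee_hour: float,
                  recovery_overhead: float = 0.0) -> float:
    """Approximate total cost of an IT service outage."""
    lost_revenue = outage_hours * revenue_per_hour
    lost_productivity = outage_hours * employees_idled * cost_per_employee_hour
    return lost_revenue + lost_productivity + recovery_overhead

if __name__ == "__main__":
    # Example: a 3-hour outage for a business earning $50,000/hour online,
    # idling 200 staff at a loaded cost of $60/hour, plus $10,000 in recovery work.
    print(f"Estimated outage cost: ${downtime_cost(3, 50_000, 200, 60, 10_000):,.0f}")
```

The point is not precision; even rough numbers make it easier to weigh the cost of an outage against the cost of the mitigation options discussed below.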

2. Identify SLA Requirements and Know Your SLAs

Based on the potential cost of service downtime, organizations need to evaluate their SLA requirements for different workloads. Public cloud may not be the best option for security- and availability-sensitive mission-critical IT workloads, and there may be specific SLA requirements to meet regulatory compliance standards. Yet proving compliance may not suffice if end users and customers are dissatisfied with the availability of data and services running in outage-prone public cloud environments. The SLA figure should therefore be meaningful in terms of its impact on business operations, goals, revenue and profits, opportunities, and other key business indicators.

Organizations also need to understand how the agreed SLA translates into real-world service availability. The impact of an IT outage is greatest during hours of peak service usage, and for most organizations it is not possible to predict when a power outage will hit a public cloud data center.

It is important to identify critical processes and goals based on business impact, and to monitor the metrics that are most relevant to them. If the SLA is designed to maintain the desired performance standards for those metrics, downtime will have minimal impact on customers and end users.
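
To make the idea of a "meaningful SLA figure" concrete, here is a minimal sketch that converts an availability target into a monthly downtime budget and checks observed downtime against it. The 99.9 percent target and the downtime figure are assumptions for illustration, not recommendations.

```python
# Convert an availability target into a monthly downtime budget and check
# observed downtime against it. All figures here are illustrative only.

MINUTES_PER_30_DAY_MONTH = 30 * 24 * 60  # 43,200 minutes

def downtime_budget_minutes(availability_target: float) -> float:
    """Allowed downtime per 30-day month for a target such as 0.999 (99.9%)."""
    return (1.0 - availability_target) * MINUTES_PER_30_DAY_MONTH

def sla_met(observed_downtime_minutes: float, availability_target: float) -> bool:
    return observed_downtime_minutes <= downtime_budget_minutes(availability_target)

if __name__ == "__main__":
    target = 0.999        # assumed monthly availability target of 99.9%
    downtime = 52.0       # assumed minutes of downtime observed this month
    budget = downtime_budget_minutes(target)
    print(f"Budget: {budget:.1f} min, observed: {downtime:.1f} min, "
          f"SLA met: {sla_met(downtime, target)}")
```

Running the numbers this way shows why a seemingly small difference in the SLA figure can translate into a large difference in tolerable downtime.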

3. Institute Redundancy and Multi-cloud Strategies

Cloud computing allows organizations to limit the impact of power outages and downtime by introducing redundancy into their IT infrastructure strategies. Redundancy in cloud computing follows a simple approach: if one server instance fails or loses power, the workload shifts to another server instance. If an entire data center is impacted by a power outage, data replicated to data centers in distant geographic zones can take over to deliver the necessary IT service. This redundancy is what overcomes the risk of power outages in public cloud data centers.

Redundancy is further complemented by a multi-cloud strategy, which pairs services from multiple cloud providers. When a power outage impacts the primary cloud provider, the service from a secondary vendor can serve as a failover solution to ensure business continuity. A multi-cloud strategy also reduces the risk of vendor lock-in and allows organizations to optimize their public cloud investments by leveraging the best options available in the market in terms of features, support, reliability, price, and other key factors that drive business decisions.
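
As a simple illustration of the failover idea above, the sketch below probes a primary endpoint and falls back to a secondary provider's endpoint when the primary is unreachable. The URLs are hypothetical, and a production deployment would more likely rely on DNS failover, a global load balancer, or a traffic manager rather than client-side logic like this.

```python
# Client-side failover between a primary and a secondary cloud endpoint.
# The URLs are hypothetical; real deployments usually handle this with DNS
# failover, a global load balancer, or a traffic manager.

import urllib.request

ENDPOINTS = [
    "https://primary.example-cloud-a.com/health",   # primary provider (hypothetical)
    "https://standby.example-cloud-b.com/health",   # secondary provider (hypothetical)
]

def first_healthy_endpoint(endpoints, timeout_seconds: float = 2.0):
    """Return the first endpoint that answers its health check, or None."""
    for url in endpoints:
        try:
            with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
                if response.status == 200:
                    return url
        except OSError:
            continue  # unreachable, timed out, or returned an HTTP error; try the next one
    return None

if __name__ == "__main__":
    active = first_healthy_endpoint(ENDPOINTS)
    print(f"Routing traffic to: {active}" if active else "All endpoints down; invoke the DR plan")
```

The same pattern scales up: the health check becomes a synthetic transaction, and the "switch" becomes a DNS or load-balancer change driven by the monitoring system.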

4. Test for Business Continuity and Disaster Recovery

Business continuity and disaster recovery plans can play out very differently during a real disaster if the organization is not prepared to execute them when the incident occurs. An effective business continuity and disaster recovery plan should be designed to identify the circumstances, and address the limitations, that may arise during a disaster. Executing these risk mitigation plans then becomes a matter of following the documented checklist items and best practices. Without mock exercises, simulations, and testing, organizations cannot understand the true circumstances and limitations they may face during a disaster.

Some organizations may also be obliged by compliance regulations to conduct regular business continuity drills for IT service outages. Regular drills also ensure that the business continuity plan keeps pace with the organization's evolving service availability needs as business and IT requirements change.

5. Communicate with Stakeholders

Once a power outage has occurred and the organization is executing its business continuity and disaster recovery plans, it is important to notify end users about the scope of the impact and the best practices for limiting damage. Proactive communication with affected users will not eliminate the threat of a power outage, but it helps limit the impact on customer trust and brand loyalty among users waiting for service to return. Communication with business stakeholders, internal experts, and external vendors may also be required, as set out in the documented disaster recovery plan for mitigating power outage risks.

Power outages in public cloud data centers tend to occur without warning. It's important not only to understand the risks but also to take the necessary steps to limit the damage. Organizations should follow a strategic approach when employing public cloud solutions and be well prepared for the threats that cause unannounced and unpredictable service downtime.

How Data Center Jobs Are Changing in the Age of the Cloud https://www.bmc.com/blogs/how-data-center-jobs-are-changing-in-the-age-of-the-cloud/ Mon, 14 May 2018

There’s been a lot of changes in the Data Center (DC) since the 1990s. We’ve seen the migration of in-house Data Centers to managed Data Centers run by managed service providers (MSPs). Managed DCs and in-house DCs are also migrating to cloud hosting environments where your applications reside solely on vendor machines running in the cloud, and you may not even know where the underlying hardware servicing your network resides.

Today, let’s look at how DC migration affects IT Operations professionals and examine some of the ways Data Center jobs are changing as more organizations move to the cloud. I’ll examine these changes in two ways, by looking at 1) IT Ops functions and skills that will be less needed when organizations migrate to the cloud; and 2) IT Ops functions and skills that will be more needed as organizations migrate to the cloud. Here’s my list of how DC functions and skills change when moving to the cloud.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Data Center functions and skills that are less needed after migrating to the cloud, paired with the functions and skills that are more needed:

Less needed: Jobs as full-time IT Ops employees at end-user organizations will be in lower demand. There will be less need for network technicians, administrators, system operators (what few are left), and Help & Service Desk personnel. As services and equipment migrate out of the corporate Data Center, corporate IT Ops jobs will follow.
More needed: Jobs working for Infrastructure-as-a-Service (IaaS) providers, cloud providers, or technology companies will increase, performing many of the same activities in-house IT Ops personnel used to provide. IT Ops jobs will follow hardware, services, and applications and migrate to the cloud.

Less needed: Focus on infrastructure issues such as operating system management, hardware management, and network management. In-house IT Ops managers typically manage physical servers and associated equipment, along with the applications that run on those servers. As servers migrate to the cloud, server management moves with them and is performed by the cloud provider's staff, not the organization's. Hardware, operating system, and some network functions will be assumed by cloud providers.
More needed: Focus on application performance and security rather than hardware and network management. Instead of an infrastructure-centric focus, IT Ops personnel will have to assume an application-centric focus, working with technologies built around services and apps rather than servers.

Less needed: A Capital Expenditure (CapEx) view of the network, where IT Ops owns or leases all application and network equipment.
More needed: An Operational Expenditure (OpEx) view of the network, where organizations host their applications on a cloud hosting provider's equipment and the provider charges them a monthly fee.

Less needed: Lower-level IT Ops tasks such as tape management, simple backup procedures, checking server and application status and availability, and detecting and correcting hardware errors.
More needed: Responsibility for higher-level DevOps functions, including application error correction, data integrity, code validation, and response time. Management will increasingly look to IT Ops to handle performance management issues that were previously handled by application developers and owners; app developers will be too busy with constant application upgrades to handle these issues, and IT Ops will be asked to step up and fill the gap. At the very least, IT Ops professionals will need a working familiarity with APIs and other application technologies (what the technologies do, how to test apps, how apps address each other in the cloud, how to troubleshoot issues, and so on); a simple smoke-test sketch follows this table. This is fast becoming a requirement for managing IT Ops in the cloud. For more information on learning these technologies for IT Ops personnel, see my post on the new skill sets needed for AIOps; most of those same skills will be needed for IT Ops cloud management.

Less needed: Hub-and-spoke architecture may become less important as more applications are hosted in a cloud environment, reducing the need for different connected locations to communicate with each other and with hub-based servers. The need for redundant pathways between closed point-to-point networks may also decrease, as redundancy can be handled in the cloud.
More needed: Applications will run on cloud-based servers rather than on-site physical servers. Apps that formerly resided in the hub of the hub-and-spoke network will now be reachable by everyone through the cloud, and IT will have to authorize and control that access. VMs for different locations that were previously hosted on their own in-house file servers can now be hosted in the same provider environment inside the cloud.

Less needed: "Leisurely" operating system updates, where companies stay on the same operating system version for years before upgrading, will start to go away. The cloud provider will now be in charge of your OS upgrade schedule.
More needed: Automatic OS upgrades will occur at fairly regular intervals, forcing IT Ops personnel to ensure that their applications and software continue running on the latest OS. IT Operations will go from planning and executing OS upgrades for Windows, Linux, AIX, and IBM i to testing and ensuring that OS migrations will work correctly when the next scheduled upgrade occurs.
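
As promised in the table above, here is a minimal sketch of the kind of application-level smoke test IT Ops teams increasingly own: call a cloud-hosted JSON API and verify that it reports itself healthy. The endpoint URL and the expected "status" field are hypothetical placeholders, not a real API contract.

```python
# Smoke test for a cloud-hosted JSON API. The URL and the expected
# response schema ("status": "ok") are hypothetical placeholders.

import json
import urllib.request

def smoke_test(url: str, timeout_seconds: float = 5.0) -> bool:
    """Return True if the API responds with HTTP 200 and reports itself healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout_seconds) as response:
            if response.status != 200:
                return False
            payload = json.loads(response.read().decode("utf-8"))
            return payload.get("status") == "ok"   # assumed response schema
    except (OSError, ValueError):                   # network failure, HTTP error, or invalid JSON
        return False

if __name__ == "__main__":
    healthy = smoke_test("https://app.example.com/api/health")  # hypothetical endpoint
    print("Application healthy" if healthy else "Application check failed; investigate")
```

Checks like this are the application-centric counterpart of the old hardware status checks, and they slot naturally into the monitoring and AIOps tooling mentioned above.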

These are some of the changes that will occur when your applications and network functions move to the cloud. The cloud will force IT Ops skills and responsibilities to shift from machine and network management to server and application configuration, application performance, and security.

Data Center Tiers: What Are They and Why Are They Important? https://www.bmc.com/blogs/data-center-tiers-important/ Thu, 12 Jan 2017

When dealing with Data Centers (DCs), it’s helpful to understand what Data Center tiers are and how they affect IT organizations. Here’s a brief overview of what a data center tier is, what data center tiers tell people about your data center, and why they’re valuable to have.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

What is a Data Center tier?

Data center tiers are a standard methodology for ranking data centers by their potential infrastructure performance (uptime). Tiers run from 1 to 4, and a higher-ranked data center offers more potential uptime than a lower-ranked one.

Here are the four currently accepted data center tier rankings from the Uptime Institute and what each ranking represents in terms of uptime and availability.

Tier 1 (Basic Capacity): Tier 1 data centers go beyond staging your servers in a spare office or large closet inside a larger facility. A tier 1 DC needs a dedicated space for all your IT systems (a server room, which may or may not have a locked door); uninterruptible power supplies (UPSes) to condition incoming power and prevent spikes from damaging your equipment; a controlled cooling environment that runs 24x7x365; and a generator to keep your equipment running during an extended power outage.
Tier 2 (Redundant Capacity): A tier 2 data center incorporates all the characteristics of a tier 1 DC. It also contains some partial redundancy in power and cooling components (the power and cooling systems are not totally redundant). A tier 2 DC exceeds tier 1 requirements, providing additional insurance that power or cooling issues won't shut down processing.
Tier 3 (Concurrently Maintainable DC): A tier 3 DC incorporates all the characteristics of tier 1 and tier 2 data centers. A tier 3 data center also requires that any power and cooling equipment servicing the DC can be shut down for maintenance without affecting IT processing. All IT equipment must have dual power supplies attached to different UPS units, so that a UPS unit can be taken offline without crashing servers or cutting off network connectivity. Redundant cooling systems must also be in place so that if one cooling unit fails, another kicks in and continues to cool the room. Tier 3 DCs are not fault tolerant, as they may still share components such as utility company feeds and external cooling system components that reside outside the data center.
Tier 4 (Fault Tolerance): A tier 4 DC incorporates all the capabilities found in tier 1, 2, and 3 DCs. In addition, all tier 4 power and cooling components are 2N fully redundant, meaning that all IT components are serviced by two different utility power suppliers, two generators, two UPS systems, two power distribution units (PDUs), and two different cooling systems powered (again) by different utility power services. Each power and cooling path is independent of the other (fully redundant). If any single power or cooling infrastructure component fails in a tier 4 DC, processing continues without issue. IT processing can only be affected if components from two different electrical or cooling paths fail.

Scoring data center tiers on uptime

Data center uptime is expressed as the percentage of time each year that your data center is available, with each higher data center tier having a higher uptime percentage.

Here are the standard uptime percentages along with the maximum downtime you can expect to see with DCs in each data center tier.

  • Tier 1 DCs have a 99.671% uptime percentage per year. Maximum total yearly downtime = 1729.2 minutes or 28.817 hours each year
  • Tier 2 DCs have a 99.741% uptime percentage per year. Maximum total yearly downtime = 1361.3 minutes or 22.688 hours
  • Tier 3 DCs have a 99.982% uptime percentage per year. Maximum total yearly downtime = 94.6 minutes or 1.5768 hours
  • Tier 4 DCs have a 99.995% uptime percentage per year. Maximum total yearly downtime = 26.3 minutes or 0.4 hours
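
These downtime figures follow directly from the uptime percentages: maximum downtime is simply the unavailable fraction of a 525,600-minute year. A quick sketch of the arithmetic:

```python
# Maximum yearly downtime for each tier, derived from its uptime percentage.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

TIER_UPTIME_PERCENT = {
    "Tier 1": 99.671,
    "Tier 2": 99.741,
    "Tier 3": 99.982,
    "Tier 4": 99.995,
}

for tier, uptime in TIER_UPTIME_PERCENT.items():
    downtime_minutes = (100.0 - uptime) / 100.0 * MINUTES_PER_YEAR
    print(f"{tier}: {downtime_minutes:,.1f} minutes (~{downtime_minutes / 60:.2f} hours) per year")
```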

Note that your mileage may vary (YMMV) when using any one of these DC tier models. Uptime percentages for tier 3 and tier 4 are more accurate and consistent because of their high degree of redundancy, while tier 1 and tier 2 DCs could experience longer processing outages depending on what causes their downtime.

What do you do with a data center tier ranking?

It's important to understand your business needs when choosing a tier 1, 2, 3, or 4 data center provider. A tier 1 or 2 data center may work well for a smaller company that doesn't have full 24×7 requirements and can tolerate being down after hours or on weekends for maintenance. In that case, it may not be worth the extra investment to run in a tier 3 or 4 environment.

However, if you’re a large multi-national organization that does business around the clock and you have several critical applications that can never be down, you may want to opt for hosting your apps in a tier 3 or 4 data center or make your in-house data center tier 3 or 4 compliant.

DC tier rankings are also important in several different situations, including the following:

  • When planning a data center move to an external provider or to a cloud provider, data center rankings help you understand the risks involved in using these providers
  • When building or redesigning your own data center, to provide a blueprint for its setup and configuration that meets your needs
  • When you’re hosting a critical application for a customer, they will want to know what your data center ranking is, who certified your DC, and what certification standard was used
  • In risk evaluation scenarios when you have to justify network availability to management

Who certifies a data center?

Data centers are generally rated and certified using either the Uptime Institute's standard tier classification system or the TIA-942 standard. Data centers are assessed against these standards and issued a rating.

When hiring someone to certify your data center, make sure you know their reputation in the industry and which standard they are using.
