Stephen Watts – BMC Software | Blogs

How Data Center Colocation Works
https://s7280.pcdn.co/data-center-colocation/ | Fri, 04 Feb 2022

Data Center Colocation (aka “colo”) is a rental service for enterprise customers to store their servers and other hardware necessary for daily operations. The service offers shared, secure spaces in cool, monitored environments ideal for servers, while ensuring bandwidth needs are met. The data center will offer tiers of services that guarantee a certain amount of uptime.

The decision to move, expand, or consolidate your data center is one that must be weighed in the context of cost, operational reliability and of course, security. With these considerations in mind, more companies are finding that colocation offers the solution they need without the hassle of managing their own data center.

Data center colocation works like renting from a landlord: Customers rent space in the center to store their hardware.

(This article is part of our Data Center Operations Guide. Use the right-hand menu to navigate.)

Benefits of data center colocation

Data center colocation can be the right choice for businesses of any size, in any industry. Let’s look at the benefits.

Uptime

Guaranteed server uptime is a major advantage of data center colocation. By buying into a specific service tier, each client is guaranteed a certain percentage of uptime without the payroll or other maintenance costs of keeping servers running in-house.
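Those tier guarantees translate directly into allowed downtime per year. A quick back-of-the-envelope sketch (the percentages are the commonly cited Uptime Institute tier availability targets; always verify the figures in your provider's actual SLA):

```python
def downtime_minutes_per_year(uptime_pct: float) -> float:
    """Minutes of allowed downtime per year for a given uptime percentage."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes in a non-leap year
    return (1 - uptime_pct / 100) * minutes_per_year

# Commonly cited Uptime Institute tier availability targets:
for tier, pct in [("Tier I", 99.671), ("Tier II", 99.741),
                  ("Tier III", 99.982), ("Tier IV", 99.995)]:
    print(f"{tier}: {pct}% uptime allows ~{downtime_minutes_per_year(pct):.0f} min/year of downtime")
```

Even the jump from Tier III (roughly 95 minutes of downtime a year) to Tier IV (roughly 26 minutes) can justify the higher-tier price for businesses where outages are costly.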

Risk management

Utilizing a colocation facility ensures business continuity in the event of natural disasters or an outage. This means that if your business location loses power, your network traffic will not be affected.

The key to this continuity is redundancy. The layers of redundancy offered at a colocation facility are far more complex than most companies can afford in-house.

Some enterprise companies will consider the off-site location as their primary data storage location while maintaining onsite copies of data as backup.

(Read about enterprise risk management.)

Security

Data centers are equipped with the latest security technology: cameras, biometric readers, staffed check-in desks for inbound visitors, and routine security-badge checks.

These facilities are monitored 24/7/365, both physically and digitally, to ensure that unauthorized access does not occur.

Cost

One of the main advantages of colocation is significant cost savings, especially when measured against managing a data center in-house. For many companies, renting the space they need from a data center offers a practical solution to ever-shrinking IT budgets. With colocation, there is no need to plan for capital expenditures such as:

  • UPS (uninterruptible power supply) systems
  • Multiple backup generators
  • Power grids
  • HVAC units (and the ongoing cost of cooling)

Apart from these capital expenditures, there are also ongoing costs associated with maintaining and managing in-house servers.

Bandwidth

Colos provide the bandwidth that enterprise client servers need to function properly. With large pipes of bandwidth powering multiple companies, colocation facilities are primed to support businesses in a way their office locations likely cannot—something that’s increasingly important for remote work.

Support & certifications

Data center colocation offers the benefit of peace of mind.

When you partner with a colocation provider, your enterprise business may be able to reduce payroll costs by relying on the data center’s certified experts to manage and troubleshoot major pieces of equipment.

Scalability

As your business grows, you can easily expand your IT infrastructure needs through colocation.

Different industries will have different requirements in terms of the functionalities they need from their data center as it relates to space, power, support and security. Regardless, your service provider will work with you to determine your needs and make adjustments quickly.

In-house data center vs data center colocation

While data center outsourcing offers many benefits, some enterprise organizations may still prefer to manage their own data centers for a few reasons.

Control over data

Whenever you put important equipment in someone else’s charge, you run the risk of damage to your equipment and even accidental data loss. Fortunately, data centers are set up with redundancy and other protocols to reduce the likelihood of this occurring, as discussed above.

But some enterprise businesses with the knowledge and resources to handle data in-house feel more comfortable being liable for their own servers.

They also benefit from being able to fix server issues immediately when they occur. Enterprise businesses that outsource instead must work closely with their service providers to ensure issues are resolved in a timely manner.

Contractual constraints

Enterprise business owners may find themselves unpleasantly surprised by the limitations of the contract between their company and a colo facility. Watch for clauses covering:

  • Vendor lock-in
  • Contract termination or nonrenewal
  • Equipment ownership

Choosing a data center

Here are eight considerations enterprise IT directors should weigh before moving their data to a colocation facility.

  1. Is the agreement flexible to meet my needs?
  2. Does the facility support my power needs, current and future?
  3. Is the facility network carrier neutral? Or does it offer a variety of network carriers?
  4. Is it the best location for my data? Accessible? Out of the way of disaster areas?
  5. Is the security up to my standards?
  6. Is the data center certified with the Uptime Institute?
  7. Does my enterprise business have a plan for handling transitional costs?
  8. Is this data center scalable for future growth?

If an enterprise business leader can answer ‘yes’ to the above questions, it may be the right time to make the change.

Cloud services vs colocation

The cloud is another alternative to data center colocation:

  • A cloud services provider manages all elements of the data: servers, storage, and network components.
  • An enterprise’s only responsibility is to work with and use those services.

Cloud services are great for allowing a business to focus more on its business requirements and less on the technical requirements for warehousing its data. In this case, cloud services can be cheaper and enable new businesses to get off the ground more quickly.

More established businesses are often better suited to handle their own data center needs through colo or in-house means; for them, the costs to establish and maintain a colo can be cheaper in the long run than cloud services.

Cloud services also offer quick start-up times, a lower technical barrier to entry, easily scalable server capacity (both up and down), and integration with the other services a cloud provider might offer, such as:

  • Integrated monitoring
  • Data storage and querying tools
  • Networking tools
  • Machine learning tools

(Accurately estimate the cost of your cloud migration.)

What’s next for data center colocation?

The biggest push in the industry comes from cloud service providers who use colo as a way to meet their hefty equipment storage needs. At the same time, the industry has been and will continue to remain fluid as laws change with regard to cloud storage requirements.

While soaring demand from cloud service providers has increased the need for data center colocation, new technology offers denser rack storage options that allow colo facilities to mitigate the demand for hardware space.


Cloud Governance vs Cloud Management: What’s the Difference?
https://www.bmc.com/blogs/cloud-governance-vs-cloud-management-whats-the-difference/ | Tue, 04 Jan 2022

It is undeniable that the cloud has changed business, and life, as we know it. From collaborating on a document with team members across the globe, to ensuring applications are always up-to-date, the cloud has allowed organizations to instantly share and deploy whenever needed.

This instant access doesn’t come without risks, however, and it’s crucial that these risks be managed. For both security and efficiency, businesses need both cloud governance and cloud management—but what is the difference between the two? And how can organizations ensure they are using them correctly?

(Understand IT governance & management basics.)

Cloud Governance and Management

What is cloud governance?

A first step in establishing a successful multi-cloud strategy is simply making clear the difference between governance and management. Organizations need to define how to control, operate, optimize, and secure their cloud infrastructures and the applications running in multiple clouds.

As with anything, and especially concerning the cloud, there must be protocols in place that minimize risk. Cloud governance is a set of rules that ensures an organization’s cloud capabilities support and enable its business strategies.

Governance is essentially the activity of defining, continuously monitoring, and auditing the rules, guidelines, policies, and processes that allocate, coordinate, and control a given operation’s resources and actions. Some governance rules could include:

  • Roles and responsibilities definitions
  • Compliance with industry regulations
  • Disaster recovery policies
  • Alert escalation procedures
  • Enforcement of network policies
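Rules like these are increasingly written as policy-as-code so they can be checked automatically rather than only by periodic audit. Here is a minimal, hypothetical sketch in Python (the required tags and allowed regions are invented for illustration, not drawn from any particular provider):

```python
# Hypothetical governance policy: every resource must carry these tags
# and live in an approved region (e.g., for data-residency compliance).
REQUIRED_TAGS = {"owner", "cost-center", "environment"}
ALLOWED_REGIONS = {"eu-west-1", "us-east-1"}

def violations(resource: dict) -> list:
    """Return a list of governance violations for one cloud resource record."""
    problems = []
    missing = REQUIRED_TAGS - set(resource.get("tags", {}))
    if missing:
        problems.append(f"missing required tags: {sorted(missing)}")
    if resource.get("region") not in ALLOWED_REGIONS:
        problems.append(f"region {resource.get('region')!r} is not permitted")
    return problems

# A resource missing tags and deployed outside the approved regions:
print(violations({"tags": {"owner": "ops"}, "region": "ap-south-1"}))
```

In practice, checks like this run continuously against a cloud inventory, so the governance rules are enforced by management tooling instead of manual review.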

(Explore these cloud governance best practices.)

Why is cloud governance important?

As an organization’s cloud environments become more complex, this list of rules only continues to grow. For companies with hybrid clouds or highly sensitive data that is being sent across the cloud, governance only becomes more critical—especially if you’re operating in the increasing number of countries that have stringent data privacy and data migration policies.

At the end of the day, however, cloud governance is not intended to execute the rules—it’s simply a system that structures all of them.

What is cloud management?

All of these rules and regulations are a great idea in theory, but they are useless unless there is an efficient way to put them into practice.

Cloud management, then, is the process of maintaining administrative control and oversight of all aspects of cloud computing. This includes all cloud services and products, whether they are deployed in private, public, or hybrid cloud environments.

This complementary activity of organizing, coordinating, and steering resources in full compliance with the defined governance ensures the strategic and operational objectives of the business are met while all assets operate under the established rules.

Cloud management is supplied through cloud management tools, which provide businesses the ability to manage resources across the multi-cloud and multi-vendor landscape. Some common responsibilities that might be included in cloud management are:

  • Organizing and steering corporate resources
  • Ensuring compliance is being followed
  • Maintaining data security

Why is cloud management important?

As organizations migrate more toward the infrastructure-as-a-service (IaaS) business model, they find that the growing number of applications being deployed into the cloud requires more structure to monitor them all.

With cloud management tools, IT departments can be confident that their cloud-based applications meet applicable compliance requirements and are monitored for security concerns. This visibility and control over the ecosystem of applications gives enterprises full transparency into their cloud infrastructure, optimizing applications, managing compliance, and reducing risks.

Benefits of proper cloud governance & management

  • Automation: Well-established processes and workflows can be automated, significantly raising efficiency.
  • Innovation: The evolution of cloud offerings is driven by the provider, which in turn creates effective opportunities to evolve one’s IT infrastructure at a low cost.
  • Optimization: A large integration capacity lets organizations leverage alternative, more capable infrastructure that can be installed and integrated within minutes, hours, or a few days.
  • Change: Proper processes in place over a highly dynamic and responsive IT landscape facilitate change management, quality assurance, and compliance.
  • CAPEX/OPEX: Use the most appropriate IT assets at a fraction of the traditional cost.
  • Profitability: Organizations with above-average IT governance have been shown to have more than 20 percent higher profits than those with poor governance following the same strategy.

Real-world challenges addressed by cloud governance & management

Let’s play governance and management out across key business functions.

Costs

A contract has been established with the cloud services provider (CSP) where costs per cloud resource are defined. These roles must be involved:

  • The controller’s office audits the observance of such an established cost table.
  • The Chief Information Officer (CIO) establishes a continuous improvement workflow, leveraging existing frameworks and methodologies such as Kaizen, Six Sigma, and Lean to continually analyze more cost-effective evolution paths for the existing cloud-based IT infrastructure.

Budgets

Currently, almost every company area or department’s budget includes a direct or indirect share of IT costs. One of the main advantages of cloud-based services is precisely that they allow dynamic allocation of assets, which implies dynamic costs. The capacity to easily (just a mouse click away) acquire additional resources leads to a natural temptation to trigger them.

  • Team Leaders and Area Managers, with the steering support of the CIO, manage dedicated existing IT resources to the best capacity while promoting synergies that delay the need for IT infrastructure escalation.
  • The CIO acts as an area manager towards the IT department within this topic.

(Understand the difference in capital vs operating expenses.)

Operations

IT operations obey corporate guidelines, which must be adapted, configured, and monitored within a cloud IT landscape context. This ensures compliance with operational standards, which fosters operational efficiency and security.

  • The CIO audits and monitors the observance of existing rules, including IT guidelines, internal adherence to existing service contracts, and inherent SLAs with cloud service providers.
  • Area Managers confirm that area users have proper awareness of corporate operations rules through coaching sessions and training opportunities.

Security

IT security has gained all-new relevancy with cloud-based services due to the higher exposure of hybrid IT landscapes.

  • The Chief Information Security Officer (CISO) and CIO audit and monitor the observance of existing rules, not only internally but also with regard to the cloud service providers.
  • Team Leaders and Area Managers lead by example, in this case coaching team members and identifying their training needs, which ensures broad corporate IT security awareness.

Risks

Risk management is yet another component of corporate IT operations that has gained increased relevancy with the arrival of cloud-based services. Risks range from properly load-balancing IT infrastructure among providers and geographies to prevent service disruption, to shadow IT.

  • The CIO defines and locally adapts, fine-tunes, audits, and monitors the observance of existing corporate policies towards risk mitigation.
  • The Controller’s Office audits and reports/blocks the attempted acquisition of unauthorized IT assets or resources by the areas that constitute potential shadow IT.
  • Area Managers lead teams towards compliance by educating and reducing shadow IT and other practices that bear risks.

Getting started with cloud governance & management

There are three phases towards adopting and effectively running cloud governance and management:

  1. Design. The hardest part: assessing where you are and what can be leveraged over a specified period within a cloud environment, including expected savings (time and money) and gains in effectiveness. Add to that defining and designing the project itself: metrics, SLAs, goals, milestones, risk mitigation actions, etc.
  2. Implement. Move to the cloud with proper governance and management in place.
  3. Continuously improve. Undergo a continuous cycle of assessment to make things more efficient and, at the same time, more cost-effective.

(Learn about continual improvement.)

Forward-thinking with cloud governance and management

The cloud will continue to push forward strategies and business in innovative ways. It is up to organizations to ensure that their cloud ecosystems have the structures set in place, and the tools necessary to manage them, to ensure the infrastructure is steady for years to come.


What Is TOGAF®? A Complete Introduction
https://www.bmc.com/blogs/togaf-open-group-architecture-framework/ | Thu, 30 Dec 2021

In a long line of enterprise architecture frameworks, TOGAF® is not the first and it’s unlikely to be the last. But it is one that’s endured for nearly two decades, with worldwide usage—an impressive feat in today’s technology landscape.

TOGAF is the acronym for The Open Group Architecture Framework. It was developed by The Open Group, a not-for-profit technology industry consortium that continues to update and iterate on TOGAF.

This article will focus on familiarizing beginners with TOGAF.

Understanding enterprise architecture

In a previous article, we took a deep dive into enterprise architecture frameworks. Enterprise architecture refers to the holistic view and approach to software and other technology across an entire company or enterprise.

Typically, enterprise architecture isn’t just a structure for organizing all sorts of internal infrastructures. Instead, the goal is to provide real solutions to business needs through analyzing, designing, planning, and implementing the right technology in the right ways.

More and more, enterprise architecture also encompasses additional business needs.

The goal of an organized enterprise architecture, then, is to successfully execute business strategy with efficiency, efficacy, agility, and security.

If all this sounds like it can be complicated – designing and implementing a clear, long-term solution to all enterprise software in a way that solves business needs – it’s because it is. That’s why enterprise architecture frameworks (EAFs) started emerging, informally and formally, as long ago as five decades.

(Read our explainer on enterprise application software.)

TOGAF history and facts

As a subset of computer architecture, enterprise architecture as a field dates back to the mid-1960s. IBM, among other companies and universities, spearheaded explicit approaches to building enterprise architecture, recognizing the complexity of all the pieces that must work together on a network.

Over the next few decades, technology only became more complicated: today, most companies, regardless of size or product, utilize the internet to make their business processes easier, quicker, and sometimes more transparent. Today, enterprise architecture is a necessary process to make sense of various hardware and software options, on premise and in the cloud, and to ensure security when sharing data across multiple platforms.

(Understand how enterprise networking works.)

TOGAF was initially developed in 1995. As was common in the field of enterprise architecture by then, newer versions or models offered improved iterations and theories. Likewise, TOGAF took a lot of inspiration from the U.S. Department of Defense’s own EAF, the Technical Architecture Framework for Information Management (TAFIM). Interestingly, the DoD stopped using TAFIM within a couple of years of the emergence of TOGAF. Still, TOGAF implementation and success continue worldwide today, more than 20 years later.

The Open Group has updated TOGAF to the current 9.2 version. The Open Group further certifies tools and courses that meet TOGAF standards. Today various organizations have developed 8 tools and 71 courses which are officially certified by the Open Group.

The TOGAF approach to EAFs

The Open Group defines TOGAF as the “de facto global standard for enterprise architecture.” The framework is intended to help enterprises organize and address all critical business needs through four goals:

  • Ensuring all users, from key stakeholders to team members, speak the same language. This helps everyone understand the framework, content, and goals in the same way and gets the entire enterprise on the same page, breaking down any communication barriers.
  • Avoiding being “locked in” to proprietary solutions for enterprise architecture. As long as the company is using the TOGAF internally and not towards commercial purposes, the framework is free.
  • Saving time and money and utilizing resources more effectively.
  • Achieving demonstrable return on investment (ROI).

3 pillars of TOGAF

If the four goals are the theoretical outcome of using TOGAF, then the three pillars are the way to achieve them. These pillars create a systematic process for organizing and putting software technology to use in a structured way that aligns with governance and business objectives. Because software development relies on collaboration across various business departments inside and outside of IT, TOGAF’s goal of speaking the same language encourages and assists the various stakeholders in getting on the same page, something that may not otherwise happen in business environments.

The TOGAF is divided into three main pillars:

Enterprise architecture domains

Enterprise architecture domains divide the architecture into four key areas (sometimes shortened to ‘BDAT areas’):

  • Business architecture, which defines business strategy and organization, key business processes, and governance and standards.
  • Data architecture, which documents the structure of logical and physical data assets and any related data management resources.
  • Applications architecture, which provides a blueprint for deploying individual systems, including the interactions among application systems as well as their relationships to essential business processes.
  • Technical architecture (also known as technology architecture), which describes the hardware, software, and network infrastructure necessary to support the deployment of mission-critical applications.

Architecture Development Model (ADM)

This iterative cycle uses performance engineering to develop an actual enterprise architecture. Importantly, it can be customized to the enterprise’s needs, so it’s not a one-size-fits-all approach. Once an architecture is developed, the enterprise can roll it out to all teams or departments in iterative cycles, ensuring minimal errors and further helping the company communicate cohesively.

Enterprise Continuum

This classification system tracks architecture solutions on a range, starting at generic, industry-standard options and including customized enterprise-specific solutions.

The heart of TOGAF

Proponents say that ADM is the heart of TOGAF: it’s this pillar that makes TOGAF both very effective and a standout from other frameworks. The Architecture Development Method offers eight steps as guidance to figure out where the enterprise currently is and determine where the enterprise wants and needs to be in each of the four enterprise architecture domains.

Once business processes are established through the entire lifecycle, the ADM helps the enterprise to:

  1. Identify the gaps between current status and long-term goals.
  2. Collate these gaps into smaller actionable and understandable packages that the team can then implement.
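As a toy illustration of these two steps, a gap analysis can be as simple as diffing current capabilities against target capabilities in each architecture domain (the domain contents below are invented for illustration):

```python
# Invented example capabilities per architecture domain.
current = {
    "data": {"nightly batch ETL"},
    "application": {"monolithic ERP"},
}
target = {
    "data": {"nightly batch ETL", "streaming ingestion"},
    "application": {"monolithic ERP", "self-service portal"},
}

def gap_analysis(current: dict, target: dict) -> dict:
    """Capabilities in the target architecture that don't exist today, per domain."""
    return {domain: sorted(target[domain] - current.get(domain, set()))
            for domain in target}

print(gap_analysis(current, target))
```

Each resulting gap (“streaming ingestion”, “self-service portal”) becomes a smaller, actionable package a team can scope and implement.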

Two other areas are sometimes included in TOGAF’s main pillars:

  • TOGAF certified tools
  • Qualifications

The Open Group offers two certifications for individuals:

  • The first level is known as the Foundation, teaching basic tenets of enterprise architecture and rolling out TOGAF.
  • Level 2 Certified involves business analysis and application.

The Open Group also certifies tools that align with TOGAF. For the most recent version, eight tools from eight organizations are available.

Benefits of using TOGAF

The benefit of ADM is that it is customizable to organizational needs—there’s no need to create a structure that doesn’t serve your business. Its smaller packages are also scalable: if one team rolls them out, they can be rolled out to other teams without much tweaking. This helps the enterprise establish a process with multiple checkpoints, so that there are fewer errors the wider the architecture is implemented.

There can also be benefits to individuals who certify in TOGAF. A study of industry employees indicates that enterprise architects, software architects, and IT directors, among others, who choose to earn a certification in TOGAF often see an average yearly pay bump of $10,000 to $20,000 over similarly placed colleagues who aren’t certified.

Some experts in enterprise architecture point out that while TOGAF may appear very logical, it’s actually quite a shake-up for traditionally educated technology consultants today – but perhaps this will change as TOGAF adoption continues along steadily.

TOGAF vs ITIL: A brief comparison

TOGAF and ITIL® are two of the most popular management frameworks, each describing common interests in managing IT services and operational activities in an IT-driven organization. Yet, both provide a different perspective:

  • ITIL is focused on service management.
  • TOGAF is focused on developing and managing enterprise architecture.

Both have emerged as integral components of managing enterprise IT, allowing organizations to anticipate and prepare for change in a fast-evolving enterprise IT landscape. Two main changes facing enterprise IT have encouraged IT professionals and decision makers to view TOGAF and ITIL not as separate and different frameworks, but as ones that compete for relative interest in particular application domains:

  • IT Service Management has matured such that it is no longer an isolated operational function but a critical segment that drives business value.
  • Enterprise Architecture is not only a technical discipline but requires advice at a senior executive level on financial, operational, and business aspects: hybrid and multi-cloud environments are as much a business question as a technical question.

As the professionals and decision makers working with different frameworks are now compelled to collaborate and work across business functions, the comparison between TOGAF and ITIL frameworks has emerged as a relevant debate. In this context, organizations should note the following considerations:

  • Both ITIL and TOGAF may describe certain aspects of enterprise IT from different perspectives. These perspectives tend to be conflicting at times.
  • Professionals already adopting one of ITIL or TOGAF frameworks may need to completely understand the requirements as well as instructions when adopting the other framework. In reality, both frameworks only provide generic guidelines instead of specific actionable practice items.
  • It may require detailed discussions and in-depth experimentations based on trial-and-error to evaluate the effectiveness of ITIL vs TOGAF on specific management matters.

You can develop these discussions by considering some of the key differences between TOGAF and ITIL:

  • TOGAF helps develop enterprise architecture. The scope of ITIL in this context is limited only to developing an efficient IT department within that business architecture.
  • Managing IT operations and services are well within the scope of ITIL. TOGAF on the other hand does not cover the run-time (IT Service) operations of the business.
  • ITIL is focused on policies that help deliver value to end-users through service quality. TOGAF takes a similar approach for the larger enterprise architecture. This is one of the key areas where ITIL and TOGAF converge.
  • ITIL Service Design guidelines largely overlap the TOGAF framework. Specifically, collecting and managing requirements in designing business architecture or managing IT services is similar.
  • TOGAF guidelines on service transition largely overlap with the ITIL framework. Specifically, developing, testing and migrating of desired systems and activities is extensively covered in ITIL.
  • Architecture change management and ITIL continual service improvement greatly resemble each other: ITIL provides detailed practical guidelines, whereas TOGAF provides a big-picture overview of the topic.
  • TOGAF encourages the use of other frameworks for change management decisions and the 7-step guideline provided by ITIL is applicable to enterprise architecture, albeit to the IT services domain only.

Success & criticism of TOGAF

According to the Open Group, TOGAF is employed in more than 80% of Global 50 companies and more than 60% of Fortune 500 companies. Though criticism of the framework is often that it is too complicated or theoretical to be applicable, it seems that plenty of companies are using the structure.

Companies that have successfully implemented the framework admit that failings do happen, because TOGAF cannot be a cure-all for enterprise issues. While the issue may lie with TOGAF principles or the enterprise architecture itself, others argue that key stakeholders and C-level management don’t always take the time to set up important factors, such as key performance indicators (KPIs), that make the architecture team successful.

This lack of complete buy-in may sometimes be due to the complicated nature of TOGAF, when looked at in its entirety. Indeed, even when the framework feels overwhelming, the best advice may be to pick what works best for your company. Some technology experts suggest precisely that: skip what seems overdone or unnecessary and implement the pieces that seem most necessary. After all, the key stakeholders are the ones who need to find use in this structure, and they know the company the best.

Many understand that TOGAF is a work in progress—marked by new releases every few years. Even skeptics of TOGAF and enterprise architecture frameworks in general find that the applied use of TOGAF is often successful simply because it is better than doing nothing.

When companies want to jump onboard a new technology, it often requires building out the right tech team from scratch and then tracking down all sorts of data. It gets messy, and given the rapid pace at which technology shifts and improves, these requests occur far more often than they used to. This can explain, in part, the bustling IT and architecture teams that are always busy yet somehow always seem behind.

TOGAF is no miracle tool, but it does provide structure that keeps these teams—and upper-level management—from having to reinvent the wheel each time the company wants to incorporate new technology. Technology expert Jason Bloomberg underscores why TOGAF is such a conflicting topic in the enterprise architecture industry. When organizations employ TOGAF, they typically “fall into four buckets”:

  • Those who apply it incorrectly, and therefore it shows no value.
  • Those who achieve some baseline success in handling legacy problems.
  • Those who achieve explicit business goals.
  • Those who want to handle change better overall.

This final group sees enterprise architecture as a way to become more agile.

As its widespread use indicates, TOGAF can help enterprises of any size and any industry—but those who employ it are probably best served by understanding its pros and cons first, and then applying the parts that make particular sense for their own company.

Related reading

]]>
Application and Software Modernization: Concepts, Pros/Cons & Best Practices https://www.bmc.com/blogs/application-software-modernization/ Tue, 10 Aug 2021 14:25:29 +0000 https://www.bmc.com/blogs/?p=50413 Digital transformation requires being proactive. And modernizing software and applications is a critical part of any successful digital transformation. After all, legacy software applications and technologies limit your organization’s ability to enable a digital-first user experience and business operations. This sentiment rings true across the enterprise IT industry. According to a research survey conducted across […]]]>

Digital transformation requires being proactive. And modernizing software and applications is a critical part of any successful digital transformation. After all, legacy software applications and technologies limit your organization’s ability to enable a digital-first user experience and business operations.

This sentiment rings true across the enterprise IT industry. According to a research survey conducted across 800 senior IT decision makers in global enterprises, 80% of the respondents believe that failing to modernize applications and IT infrastructure will negatively affect the long-term growth of their business.

And organizations that successfully transition from legacy to modernized infrastructure technologies should expect a 14% annual increase in revenue. These organizations are also better poised to take advantage of the next phase of technology evolution. Frontier technologies—everything from artificial intelligence and blockchain to gene editing and nanotechnologies—are expected to reach a market size of $3.2 trillion by the year 2025, according to a recent UN research report.

In this article, we look at what application and software modernization means. We’ll explore the opportunities and the risks, and then we discuss actionable approaches and best practices to follow.

Let’s get started!

What is application & software modernization?

Application and software modernization is just that—modernizing apps and software.

In practice, it means transitioning existing and/or traditional software functionality to a form that takes advantage of the up-to-date IT landscape. The most obvious example is moving an enterprise IT environment from a traditional on-premises data center model powered by mainframes to a cloud environment rich with microservices and containers.

Cloud vs mainframe

Of course, this isn’t to say that every company needs to move everything to the cloud.

The mainframe, for example, continues to play a significant role for many companies. Though many companies in every industry are moving workloads and data storage to the cloud, not every single workload or data set must go to the cloud.

Today, thousands of companies around the globe have critical core business processes based on corporate software that dates back 30 or even 60 years. Most of these companies will continue to keep critical processes on the mainframe. In fact, the 2020 Mainframe Survey, which polled more than 1,000 IT professionals and directors, underscores this:

  • 90% of respondents see the mainframe as a long-term platform for growth
  • 67% of extra-large shops have more than 50% of their data on the mainframe

(Read about mainframe modernization.)

Three routes to software modernization

When considering whether to modernize software and apps, you can choose from three options:

  • Rewrite. Migrate the coding to a more recent development language. This option has significant potential for underestimating both inherent costs and required time frame—so it often turns into a never-ending story.
  • Replace. Throw away the old system, replacing it with a new one. This option usually involves having to build an entirely new application ecosystem with semi off-the-shelf components that mimic or mirror some functionalities of your existing systems. This usually increases complexity without achieving 100% functionality.
  • Reuse. Find a way to turn a legacy system into something more portable, allowing remote sites and users to access it. If applicable, this option allows a step-by-step approach where change is tested and fine-tuned.

Prior to deciding the best route for your modernization project, you first need to understand the pros and cons, with a clear picture about current status and future goals.

Path to Modernizing Software Applications

Benefits of modernizing software

Having old software supporting business operations often means increasing your risk, increasing your costs, and severely decreasing your business agility.

So, let’s look at the many benefits of choosing to modernize your apps and software.

Minimizing obsolescence

  • Losing control. Having a part of one’s core business dependent on some piece of code developed in some “old school” programming language isn’t good for business—you need people today who can service that code even if it has been in use for decades.
  • No support. Running any type of critical business process on hardware and/or operating systems that are no longer supported by their manufacturer (or whose manufacturer no longer exists) is hardly the position any business manager is eager to be in.
  • The missing component. The support infrastructure no longer exists. Some older software was developed with its backup processes and policies dependent on either the backup hardware or the logic inherent to a specific backup product or solution. Most likely, that backup product has already been discontinued, which potentially renders restore activities unfeasible.

Reducing cost

  • Choosing the lesser problem. If a legacy system is still around, it’s probably playing a critical role in corporate core business. In most cases, replacing it comes with significant cost, both from an investment perspective and an operational perspective. (Consider the harm that stoppages could do if that system is offline.) Still, doing nothing, which may save money in the short term, could increase your risk of being unable to operate at all in the foreseeable future, particularly if the hardware or operating system is no longer supported by a manufacturer.
  • OpEx vs CapEx. Keeping old systems running may imply investing in stock of discontinued components, such as backup tapes, hard disk drives, and RAM, or resorting to high-cost contractors who leverage their niche knowledge.

Increasing agility

  • Responsiveness. As more burden is placed on digital systems to support customers and business-critical processes, some legacy systems may be causing a bottleneck and slowing overall application responsiveness. It’s important to examine whether an input/output issue or a processing issue is the cause.
  • Business cycle. Being competitive in today’s market requires the ability to promptly address unforeseen demand or release new features quickly. Some legacy and traditional systems may not be able to cope with the flexibility and speed required to deploy new code quickly and efficiently.

Integration

  • IT has been growing in terms of platforms served (mobile and social networks) and complexity in terms of IT landscape components—cloud, hybrid, and on-premises data centers. Agile “plugin” integrability is becoming a competitive edge in business terms, one that legacy systems often hold back.

Minimizing non-alignment

  • The law. In some cases, legislation or market regulation may require migrating existing software so that it complies with newly imposed requirements. GDPR and related privacy legislation, for example, could force you to a newer platform that delivers the privacy you need.
  • The Market. Competition is another common compelling reason for modernizing software. Upstart competitors are typically not weighed down by technical debt, older systems, and processes, making them nimbler by nature. When competition gets ahead in the market by having more effective business tools, the time has come to move forward quickly or perish.

Risks of modernizing

Legacy modernization may seem like an easy and logical decision when dealing with systems that are multiple decades old—modernize and become more productive. But, we all know the adage “If it ain’t broke, don’t fix it.” Gartner Research warns that moving away from legacy systems such as the IBM mainframe could end up costing more and pose a risk to quality.

Before leaping into a modernization effort that touches every single company process, move with caution. Gartner advises organizations considering a modernization project to start with these steps:

  • Focus on business needs and capabilities, not perception. Phrases like “fragile” or “legacy”—or even Gartner’s preferred term “traditional”—can carry connotations and bias decision makers towards modernization when it isn’t necessary or even beneficial.
  • Audit existing platforms and processes with a focus on misalignments or gaps between business requirements and what the platforms deliver. Start with what you’ve got, not with the assumption that change is necessary.
  • Consider total cost of ownership, including cost of transitioning and dependencies over a multiple year period. By focusing on business needs and cost of ownership, organizations can correctly prioritize projects that will drive the most business value while minimizing risk.

Considerations for the cost of such a move need to go beyond current CapEx for the traditional system versus expected OpEx of a new system. Businesses need to also look at things like:

  • Change order costs
  • The cost of running multiple systems simultaneously during any transition
  • Training costs
  • Any new security exposure

The emphasis is to look at the issue from a business perspective and ask questions that relate to problems such as:

  • Competitive ability
  • Current backlog of change requests
  • Friction in the business workflow
  • Where failure occurs

As Gartner summarizes:

“The objective is to determine whether the traditional platform is helping the business or hindering it from meeting its goals.”

Modernizing Software Apps

Gartner model for application modernization

The first step to transition from legacy to modern applications is to identify, evaluate, and mitigate the risk-to-reward ratio of your IT modernization initiatives. Legacy technologies are typically characterized by two factors:

  • They’re in use to serve some critical purpose.
  • They usually exist because the modernization process faces steep obstacles.

In this context, research firm Gartner presents a guideline for evaluating legacy applications while reducing the risks associated with your modernization project.

Evaluating legacy technologies

Successful digital transformation projects are driven by the business case. For example, legacy technologies may:

  • Prevent organizations from competing in a digital-first business market.
  • Expose undue security risks that can be easily mitigated by using modern technology infrastructure.

It’s important to weigh the opportunities and risks presented by the new technologies and operating models of the modern era. So, when debating whether to modernize, consider these key drivers:

  • Business fit. Consider alignment with organizational vision, market needs, and the competitive strength delivered with your current IT environment. See how application modernization fits with your goals.
  • Business value. By investing in new technologies, what added value are you delivering, in the short and the long term?
  • Business agility. In the age of software, business agility should be a key driving factor for evaluating existing technologies. Embracing business agility can be challenging, but it’s increasingly a business imperative.
  • Cost. Consider the OpEx, CapEx and the total cost of ownership (TCO) of your digital transformation and application modernization initiatives.
  • Complexity. Migrating to new technologies can become exponentially difficult, adding to the cost of transition.
  • Risk. Compare the risks and opportunities associated with new technology investments. These risks can include technical challenges, cost, and soft factors such as cultural change issues and end-user acceptance of new technologies.

Evaluating modernization

Once you’ve identified any legacy technologies as possible candidates for modernization and digital transformation, you’ll want to follow a stringent process that maximizes innovation and future-proofing. Use these steps:

  1. Encapsulate. Capture the functions and data of the legacy application and deliver the encapsulated service as an external API connection.
  2. Rehost. Migrate the application from present mainframe data center servers to cloud infrastructure.
  3. Replatform. Migrate the software code to a new operating platform. This will require changes at the code level (interface and functionality) as well as the software architecture.
  4. Refactor. Optimize existing code for improved functionality on new platforms and cloud infrastructure.
  5. Rearchitect. Thoroughly change the software architecture to introduce new features and functionality.
  6. Rebuild. Restart the project from scratch, replicating the functions and features.
  7. Replace. Eliminate the legacy technology and invest in an alternate solution.
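To make the “Encapsulate” step above concrete, here is a minimal Python sketch of wrapping an untouched legacy routine behind a thin facade that modern callers use. The names (`legacy_price_quote`, `QuoteService`) and the pricing logic are hypothetical, for illustration only.

```python
# Hypothetical legacy routine: assume it works but cannot be modified.
def legacy_price_quote(sku, qty):
    base = {"A100": 25.0, "B200": 40.0}.get(sku, 0.0)
    return base * qty * (0.9 if qty >= 10 else 1.0)  # volume discount

class QuoteService:
    """Facade: validates input and returns a structured result,
    so the legacy code itself never has to change."""

    def get_quote(self, sku, qty):
        if qty <= 0:
            raise ValueError("qty must be positive")
        total = legacy_price_quote(sku, qty)
        return {"sku": sku, "qty": qty, "total": round(total, 2)}
```

In a real project the facade would typically be exposed over HTTP or a message bus as the external API connection; the point is that the legacy code is captured, not rewritten.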

Gartner’s TIME framework

Once you’ve identified and collected information on a variety of candidate legacy technologies, it’s time to prioritize.

Gartner offers the Tolerate-Invest-Migrate-Eliminate (TIME) framework that helps organizations develop a clear change strategy and observe options from a larger organizational perspective:

  • Tolerate. Re-engineer.
  • Invest. Innovate and evolve.
  • Migrate. Modernize.
  • Eliminate. Replace and consolidate.

By using this simple 2×2 matrix, organizations can evaluate how well existing and new technologies can perform in terms of business fit, functionality, architecture, and adoption.

  • Business Value represents the importance of a technology to achieve business goals.
  • Quality refers to the technical integrity of the technology, with high quality indicating the ease of change of the legacy technology.
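As a rough illustration, the 2×2 matrix above can be expressed as a simple lookup over the two axes. The 0-to-1 scores and the 0.5 threshold are assumptions made for this sketch, not part of Gartner's framework.

```python
def time_quadrant(business_value, quality, threshold=0.5):
    """Map an application's scores (0..1 on each axis) to a TIME category."""
    high_value = business_value >= threshold
    high_quality = quality >= threshold
    if high_value and high_quality:
        return "Invest"      # innovate and evolve
    if high_value:
        return "Migrate"     # high value, low quality: modernize
    if high_quality:
        return "Tolerate"    # low value, high quality: re-engineer
    return "Eliminate"       # low on both axes: replace and consolidate
```

In practice the scores would come from the business-fit and technical-integrity assessments described above, not from a single number.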

Popular approaches to software modernization

Of course, there are other approaches you can take when it comes to modernizing. In fact, several approaches and methodologies have been developed:

  • Architecture Driven Modernization (ADM) resorts to support infrastructure to mitigate the complexity of modernization. Virtualization is one great example.
  • The SABA Framework entails planning ahead for both the organizational and the technical impacts. This is a best practice that can minimize some serious headaches for enterprises.
  • Reverse Engineering Model represents high costs and a very long project that may be undermined by the pace of technology.
  • Visaggio’s Decision Model (VDM) is a decision model that aims to identify the most suitable software renewal process at the component level for each case, combining both technological and economic perspectives.
  • Economic Model to Software Rewriting and Replacement Times (SRRT). Similar to the above-mentioned VDM model.
  • DevOps contribution. The DevOps focus is to allow swift deployment of new software releases with an absolute minimum of bugs or errors, in full compliance with the target operational IT environment. This, by itself, is a major enabling factor in speeding up legacy modernization processes.

Best practices for modernizing apps/software

Whether you follow a prescribed approach or are DIYing it, these best practices ring true for any modernization effort.

  • Identify and map all IT systems and inherent software applications within your corporate landscape, noting each system’s role, interactions (both among apps and with human users), support platforms, resources, and ageing status relative to current standards.
  • Consider the business needs, not just IT’s needs. You must look at the business as well as IT. If your company still considers IT “something annoying that we must have,” you might have a hard time getting the unequivocal sponsorship of the board of management—which you need for any major modernization effort.
  • Proceed with a dual perspective. On one hand, compare your existing IT systems and software applications age and technological status versus current standards. On the other hand, align these with the business’ roadmap.
  • Cross-check process knowledge. Often, key users know how processes work and they likely understand the various steps involved in a process. But remember what they might not know: the logic behind the system, or how it does or doesn’t affect other business processes.

The key takeaway: modernizing software and applications is a necessary activity for most businesses today. This does not mean that every single business process or tool is replaced with a shiny, cloud-first option. Instead, it means choosing the practices that will benefit the most from this concerted effort.

Related reading

]]>
Data Quality Explained: Measuring, Enforcing & Improving Data Quality https://www.bmc.com/blogs/data-quality/ Mon, 12 Apr 2021 13:56:35 +0000 https://www.bmc.com/blogs/?p=49292 Data drives business decisions that determine how well business organizations perform in the real world. Vast volumes of data are generated every day, but not all data is reliable in its raw form to drive a mission-critical business decision. Today, data has a credibility problem. Business leaders and decision makers need to understand the impact […]]]>

Data drives business decisions that determine how well business organizations perform in the real world. Vast volumes of data are generated every day, but not all data is reliable in its raw form to drive a mission-critical business decision.

Today, data has a credibility problem. Business leaders and decision makers need to understand the impact of data quality. In this article, we will discuss:

Let’s get started!

What is data quality?

Data Quality refers to the characteristics that determine the reliability of information to serve an intended purpose (often, in business these include planning, decision making, and operations).

Data quality refers to the utility of data as a function of attributes that determine its fitness and reliability to satisfy the intended use. These attributes—in the form of metrics, KPIs, and any other qualitative or quantitative requirements—may be subjective and justifiable for a unique set of use cases and context.

If that feels unclear, that’s because data is perceived differently depending on the perspective. After all, the way you define a quality dinner, for instance, may differ from how a Michelin-starred chef defines one. Consider data quality from these perspectives:

  • Consumer
  • Business
  • Scientific
  • Standards
  • Other perspectives

In order to understand the quality of a dataset, a good place to start is to understand the degree to which it compares to a desired state. For example, a dataset free of errors, consistent in its format, and complete in its features may meet all requirements or expectations that determine data quality.

(Understand how data quality compares to data integrity.)

Data quality in the enterprise

Now let’s discuss data quality from a standards perspective, as it is widely used particularly in the domains of:

Let’s first look at the definition of ‘quality’ according to the ISO 9000:2015 standard:

Quality is the degree to which inherent characteristics of an object meet requirements.

We can apply this definition to data and the way it is used in the IT industry. In the domain of database management, the term ‘dimensions’ describes the characteristics or measurable features of a dataset.

The quality of data is also subject to external and extrinsic factors, such as availability and compliance. So, here’s a holistic, standards-based definition of quality data in big data applications:

Data quality is the degree to which dimensions of data meet requirements.

It’s important to note that the term dimensions does not refer to the categories used in datasets. Instead, it’s talking about the measurable features that describe particular characteristics of the dataset. When compared to the desired state of data, you can use these characteristics to understand and quantify data quality in measurable terms.

Data Quality Dimensions

For instance, some of the common dimensions of data quality are:

  • Accuracy. The degree of closeness to real data.
  • Availability. The degree to which the data can be accessed by users or systems.
  • Completeness. The degree to which all data attributes, records, files, values, and metadata are present and described.
  • Compliance. The degree to which data complies with applicable laws.
  • Consistency. The degree to which data across multiple datasets or ranges complies with defined rules.
  • Integrity. The degree of absence of corruption, manipulation, loss, leakage, or unauthorized access to the dataset.
  • Latency. The delay in production and availability of data.
  • Objectivity. The degree to which data is created and can be evaluated without bias.
  • Plausibility. The degree to which the dataset is relevant for real-world scenarios.
  • Redundancy. The presence of logically identical information in the data.
  • Traceability. The ability to verify the lineage of data.
  • Validity. The degree to which data complies with existing rules.
  • Volatility. The degree to which dataset values change over time.

DAMA-NL provides a detailed list of 60 Data Quality Dimensions, available in PDF.

Why quality data is so critical

OK, so we get what data quality is – now, let’s look at why you need it:

  • Cost optimization. Poor data quality is bad for business and has a significant cost in time and effort. In fact, Gartner estimates that the average financial impact of poor data quality on organizations is around $15 million per year. Another study by Ovum indicates that poor data quality costs businesses at least 30% of revenues.
  • Effective, more innovative marketing. Accurate, high-velocity data is critical to making choices about who to market to—and how. This leads to better targeting and more effective marketing campaigns that reach the right demographics.
  • Better decision-making. A company is only as good as its ability to make accurate decisions in a timely manner—which is driven by the inputs you have. The better the data quality, the more confident enterprise business leaders will be in mitigating risk in the outcomes and driving efficient decision-making.
  • Productivity. According to Forrester, “Nearly one-third of analysts spend more than 40 percent of their time vetting and validating their analytics data before it can be used for strategic decision-making.” Thus, when a data management process produces consistent, high-quality data, more automation can occur.
  • Compliance. Collecting, storing, and using data comes with compliance regulations and responsibilities, often resulting in ongoing, routine processes. Dashboard-type analytics stemming from good data have become an important way for organizations to understand, at a glance, their compliance posture.

How to measure data quality

Now that you know what you expect from your data—and why—you’re ready to get started with measuring data quality.

Data profiling

Data profiling is a good starting point for measuring your data. It’s a straightforward assessment that involves looking at each data object in your system and determining whether it’s complete and accurate.

This is often a preliminary measure for companies who use existing data but want to have a data quality management approach.
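A profiling pass like the one described can be sketched in a few lines of plain Python: walk each field of a record set and report its completeness. The customer records and field names here are made up for illustration.

```python
def profile(records):
    """Return per-field completeness: the share of records in which the
    field is present and non-empty."""
    fields = {f for r in records for f in r}
    total = len(records)
    return {
        f: sum(1 for r in records if r.get(f) not in (None, "")) / total
        for f in fields
    }

# Toy record set with deliberate gaps.
customers = [
    {"id": 1, "email": "a@example.com", "phone": ""},
    {"id": 2, "email": None, "phone": "555-0100"},
]
```

Running `profile(customers)` would show `id` fully populated while `email` and `phone` are each half empty, flagging where cleanup effort should go first.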

Data Quality Assessment Framework

A more intricate way to assess data is to do it with a Data Quality Assessment Framework (DQAF). The DQAF process flow starts out like data profiling, but the data is measured against certain specific qualities of good data. These are:

  • Integrity. How does the data stack up against pre-established data quality standards?
  • Completeness. How much of the data has been acquired?
  • Validity. Does the data conform to the values of a given data set?
  • Uniqueness. How often does a piece of data appear in a set?
  • Accuracy. How accurate is the data?
  • Consistency. In different datasets, does the same data hold the same value?

Using these core principles about good data as a baseline, data engineers and data scientists can analyze data against their own real standards for each. For instance, a unit of data being evaluated for timeliness can be looked at in terms of the range of best to average delivery times within the organization.

Data quality metrics

There are a few standardized ways to analyze data, as described above. But it’s also important for organizations to come up with their own metrics with which to judge data quality. Here are some examples of data quality metrics:

  • Data-to-errors ratio analyzes the number of errors in a data set taking into account its size.
  • Empty values assess how much of the data set contains empty values.
  • Percentage of “dark data”, or unusable data, shows how much data in a given set is unusable.
  • The time-to-value ratio represents how long it takes you to use and access important data after input into the system. It can tell you if data being entered is useful.

(Learn more about dark data.)
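The example metrics above are straightforward to compute once you have counts from your own validation rules. This is a hedged sketch with toy figures; how a record is judged erroneous or unusable is up to your organization’s standards.

```python
def data_to_errors_ratio(total_records, error_count):
    """Records per detected error; higher is better."""
    return total_records / error_count if error_count else float("inf")

def empty_value_share(values):
    """Fraction of values that are missing or empty."""
    return sum(1 for v in values if v in (None, "")) / len(values)

def dark_data_percentage(total_records, usable_records):
    """Share of records that are collected but unusable."""
    return 100 * (total_records - usable_records) / total_records
```

Tracking these numbers over time, rather than as one-off snapshots, is what turns them into useful quality indicators.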

Data Quality and Integrity Key factors

How to enforce data quality

Data quality management (DQM) is a principle in which all of a business’ critical resources—people, processes, and technology—work harmoniously to create good data. More specifically, data quality management is a set of processes designed to improve data quality with the goal of actionably achieving pre-defined business outcomes.

Data quality requires a foundation to be in place for optimal success. These core pillars include the following:

  • The right organizational structure
  • A defined standard for data quality
  • Routine data profiling audits to ensure quality
  • Data reporting and monitoring
  • Processes for correcting errors in bad and incomplete data

Getting started

If you are like many organizations, it’s likely that you are just getting settled in with big data. Here are our recommendations for implementing a strategy that focuses on data quality:

  • Assess current data efforts. An honest look at your current state of data management capabilities is necessary before moving forward.
  • Set benchmarks for data. This will be the foundation of your new DQM practices. To set the right benchmarks, organizations must assess what’s important to them. Is data being used to super-serve customers or to create a better user experience on the company website? First, determine business purposes for data and work backward from there.
  • Ensure organizational infrastructure. Having the proper data management system means having the right minds in place who are up for the challenge of ensuring data quality. For many organizations, that means promoting employees or even adding new employees.

DQM roles & responsibilities

An organization committed to ensuring its data is high quality should consider making the following roles a part of its data team:

  • The DQM Program Manager sets the tone with regard to data quality and helps to establish data quality requirements. This person is also responsible for keeping a handle on day-to-day data quality management tasks, ensuring the team is on schedule, within budget, and meeting predetermined data quality standards.
  • The Organization Change Manager is instrumental in the change management shift that occurs when data is used effectively, and this person makes decisions about data infrastructure and processes.
  • Data Analyst/Business Analyst interprets and reports on data.
  • The Data Steward is charged with managing data as a corporate asset.

Leverage technology

Data quality solutions can make the process easier. Leveraging the right technology for an enterprise organization will increase efficiency and data quality for employees and end users.

Improving data quality: best practices

Data quality can be improved in many ways. Data quality depends on how you’ve selected, defined, and measured the quality attributes and dimensions.

In a business setting, there are many ways to measure and enforce data quality. IT organizations can take the following steps to ensure that data quality is objectively high and is used to train models that produce profitable business impact:

  • Find the most appropriate data quality dimensions from a business, operational, and user perspective. Not all 60 data quality dimensions are necessary for every use case. Likely, even the 13 included above are too many for one use case.
  • Relate each data quality dimension to a greater objective and goal. This goal can be intangible, like user satisfaction and brand loyalty. The dimensions can be highly correlated to several objectives—IT should determine how to optimize each dimension in order to maximize the larger set of objectives.
  • Establish the right KPIs, metrics, and indicators to accurately measure against each data quality dimension. Choose the right metrics, and understand how to benchmark them properly.
  • Improve data quality at the source. Enforce data cleanup practices at the edge of the network where data is generated (if possible).
  • Eliminate the root causes that introduce errors and lapses in data quality. You might take a shortcut when you find a bad data point, correcting it manually, but that means you haven’t prevented what caused the issue in the first place. Root cause analysis is a necessary and worthwhile practice for data.
  • Communicate with the stakeholders and partners involved in supplying data. Data cleanup may require a shift in responsibility at the source that may be external to the organization. By getting the right messages across to data creators, organizations can find ways to source high quality data that favors everyone in the data supply pipeline.
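As one reading of “improve data quality at the source,” incoming records can be validated at ingest, before they reach downstream stores. The schema and rules below are illustrative assumptions, not a prescribed standard.

```python
import re

# Illustrative rule: a loose email shape, not a full RFC 5322 check.
EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

def validate_at_source(record):
    """Return (is_valid, problems) for one incoming record, so bad data
    can be rejected or flagged before it propagates downstream."""
    problems = []
    if not record.get("id"):
        problems.append("missing id")
    email = record.get("email", "")
    if email and not EMAIL_RE.match(email):
        problems.append("malformed email")
    return (not problems, problems)
```

Collecting the `problems` lists over time also supports the root-cause analysis mentioned above: recurring failure reasons point at the upstream process that needs fixing.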

Finally, identify and understand the patterns, insights, and abstraction hidden within the data instead of deploying models that churn raw data into predefined features with limited relevance to the real-world business objectives.

Related reading

]]>
How & Why To Become a Software Factory https://www.bmc.com/blogs/software-factory/ Fri, 04 Dec 2020 07:32:58 +0000 https://www.bmc.com/blogs/?p=19582 When it comes to software development, you’ve likely already evolved from Waterfall approaches to more modern, DevOps-based approaches. But an even more mature approach to software development is the software factory. Initially a lofty goal, becoming a software factory is something that companies across all industries are considering as a means to getting quality software […]]]>

When it comes to software development, you’ve likely already evolved from Waterfall approaches to more modern, DevOps-based approaches.

But an even more mature approach to software development is the software factory. Initially a lofty goal, becoming a software factory is something that companies across all industries are considering as a means to getting quality software to market sooner.

Is your company pushing streamlined, clean software on a regular basis? If not, becoming a software factory—and embracing the factory’s core concepts of AI and machine learning—is something to consider.

So, let’s take a look at the software factory concept.

What is a software factory?

Companies like Google and Netflix have set the gold standard in software development, with many updates and releases pushed daily in order to fix bugs, strengthen code, introduce new features, and handle unpredictable scaling.

This factory approach to software—churning out new software quickly, easily, and frequently—is called a software factory. Software factories roll out high-quality products and features, using lean code, that quickly enhance your business. (Software factories can closely relate to SAFe environments, too.)

A software factory relies on reducing the amount of interaction from developers so they can focus on higher-level technical challenges within the organization, such as:

  • Monitoring and maintaining the automated framework
  • Ensuring that enterprise data is secured

Today, companies across all industries are trying to become more like these leading tech companies. Essential to this approach are:

  • A true DevOps environment, where software development and IT operations collaborate harmoniously
  • Right mindset and company culture
  • Skills
  • Creativity

With these components in place, you’ll next look to automation. Automation is essential to creating a software development process that works much like an assembly line of software creation—hence, a software factory. Automation can apply to a number of dev practices, like continuous integration/delivery (CI/CD) and automated testing.

Typically, a software factory consists of proprietary tools, processes, and components packaged together. This package offers templates and code that you can easily arrange and process to create a program quickly—with little original code required. Of course, software engineers still must interact with the product to ensure it does what it is supposed to do and doesn't have bugs or other issues, but when you can create or update an app more quickly, you have time to shift your testing left.
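As a toy illustration of the template idea, here is a sketch using Python's standard-library string templates; the service fields and values are invented for the example.

```python
# Sketch of the "templates with placeholders" idea from a software factory
# package, using Python's stdlib. The service fields are hypothetical.
from string import Template

service_template = Template(
    "service: $name\n"
    "port: $port\n"
    "healthcheck: /$name/health\n"
)

# Filling the placeholders yields a ready-to-use config with no original code.
config = service_template.substitute(name="billing", port=8080)
print(config)
```

The same template can be stamped out for every new service, which is exactly the "arrange and process" workflow the factory model relies on.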

Why businesses encourage “software factory” mentality

A well-functioning software factory implies a well-functioning internal development team that works hard on shared goals with operations team members, creating features that impact the entire business unit. This harmonious environment is conducive to:

  • Higher levels of satisfaction and success
  • Better technology utilization
  • Fast communication of information, resulting in fast decision making

All in all, implementing a software factory with machine learning and artificial intelligence helps enterprise businesses achieve this goal.

Deepak Seth’s article in CIO—How’s the ‘software factory’ going?—calls software factory the “holy grail” of enterprise software development performance. Seth suggests that, with QA teams “squeezed to the breaking point,” smart automation is the future because it reduces the burden on teams responsible for testing applications.

In addition to creating a better work environment for dev teams overwhelmed by high software production goals, creating a software factory is the most efficient use of enterprise resources because it ensures that:

  • Learning occurs before the next round of software.
  • Any learned methods are applied in future builds.

Continuous improvement is a foundational pillar of DevOps. When you deploy smart automation to accomplish software factory goals, teams deliver on a commitment to always improve and provide faster, more comprehensive services and upgrades to customers when and where they need them.

Components of software factories

These components comprise a software factory:

components of software factory

  • Recipes. Automated processes to perform routine tasks with little or no regular interaction from the developer.
  • Oriented architecture patterns. These patterns define how the design will be executed in the application and why those choices were made.
  • Templates. Prefabricated application elements, code, and development features, including placeholders for arguments. Used early in a project, templates establish a consistent framework and configurations.
  • Reusable code. Reusable components that implement common mechanisms or routine functions. These are used for creating elements throughout the project that would otherwise be wholly manual.
  • How-to elements. These informational elements are a development resource for those getting started with the software factory.
  • Reference implementation. AKA an example of realistic product completion.
  • Designers. Tools that aid developers in more complex design hierarchies.
  • Factory scheme. This map defines hierarchies and serves as a basis for project development.
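To make the "recipes" component concrete, here is a small hypothetical sketch: routine tasks registered once and then run with no per-task developer interaction. The task names and checks are invented.

```python
# Sketch of the "recipe" concept: routine tasks registered once and run
# automatically. The task names and file checks are hypothetical.
RECIPES = {}

def recipe(name):
    """Register a function as an automated routine task."""
    def wrap(fn):
        RECIPES[name] = fn
        return fn
    return wrap

@recipe("format-check")
def format_check(files):
    # Flag files that fall outside the expected source layout.
    return [f for f in files if not f.endswith(".py")]

@recipe("todo-scan")
def todo_scan(files):
    # Flag leftover work-in-progress files.
    return [f for f in files if "wip" in f]

def run_all(files):
    # A scheduler would invoke every recipe automatically on each build.
    return {name: fn(files) for name, fn in RECIPES.items()}

results = run_all(["app.py", "wip_notes.txt"])
print(results)
```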

Software factory development lifecycle

Below is the product development lifecycle:

software factory development lifecycle

1. Problem Analysis

First, determine whether the product scope makes sense for a software factory and can be implemented successfully. Understand two key components:

  • Which parts of the product can be successfully implemented using automation
  • Where manual development is required

2. Product Specifications

Next, define the scope. Using machine learning, the software factory will compare product specifications against previous products to draw inferences about the most efficient development strategies that can be automated.
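As a toy stand-in for that comparison step, the sketch below ranks earlier product specs by simple word overlap (Jaccard similarity); a real software factory would use machine learning models, and the spec texts here are invented.

```python
# Toy stand-in for the spec-comparison step: Jaccard word overlap between
# a new product spec and previous ones. Real systems would use ML models;
# the spec texts below are hypothetical.
def jaccard(a: str, b: str) -> float:
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

previous_specs = {
    "invoice-service": "rest api for creating and emailing invoices",
    "report-service": "batch job for nightly sales reports",
}
new_spec = "rest api for creating and emailing receipts"

# Rank earlier products by similarity to decide what can be reused.
ranked = sorted(previous_specs,
                key=lambda k: jaccard(new_spec, previous_specs[k]),
                reverse=True)
print(ranked[0])  # → invoice-service
```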

3. Product Design

Then, automated programs can map differences between two designs and update based on changes in scope.

4. Product Implementation

The mechanisms used to develop the implementation depend on the extent of the differences between existing products.

5. Product Deployment

Deploy or reuse existing constraints for default deployment and configuration of the resources required to install and execute the product.

6. Product Testing

Finally, smart automation can create and reuse testing components (such as test cases, data sets, and scripts) and implement instrumentation and measurement tools that offer important data output.

Software Factory best practices

Follow these tips for more success, sooner, in your software factory.

  • Avoid tool redundancy. With templates and tools available across agencies and departments, ensure that a sprawl of the same tool doesn’t exist within the organization.
  • Ensure vendor management protocols are in place. Using services that allow you to create and automate the creation of DevOps software often means working with more than one service provider. Comprehensive vendor management is required to ensure user access, security, billing considerations, service considerations, and other elements of the relationship meet your enterprise standards.
  • Diversify your dev skills. With automation in place, developers can spend more time on the product itself, including exploring new languages or approaches. With time on their hands, they can look at the best way to get a feature or improvement done—instead of using only what’s worked in the past. As such, look for devs who know a variety of languages. In a software factory, there’s no need to commit to one programming language or framework only.

Related reading

]]>
The 6 Best Mainframe Podcasts for Mainframe Pros https://www.bmc.com/blogs/mainframe-podcasts/ Wed, 02 Dec 2020 08:00:06 +0000 https://www.bmc.com/blogs/?p=19524 Tech professionals are constantly learning and following what’s ahead. This is especially true for mainframers, as old tech skills are constantly being combined with recent developments. But keeping up with one more demand is something that pro mainframers probably don’t have much time for—mainframe pros are in high demand and short supply. This is why […]]]>

Tech professionals are constantly learning and following what’s ahead. This is especially true for mainframers, as old tech skills are constantly being combined with recent developments.

But keeping up with one more demand is something that pro mainframers probably don’t have much time for—mainframe pros are in high demand and short supply.

This is why podcasts are a great option. There are tons of substantive and entertaining podcasts by industry leaders. Podcasts outshine reading or browsing online forums simply for their on-demand, take-it-with-you accessibility. No longer do you need to dedicate a few hours a week to reading. The podcast format makes it easy breezy to listen while working out, cooking, commuting, or over lunch. Anytime, anywhere, podcasts make it easy to stay up to date with industry news.

With the staggering number of podcasts available, there are bound to be a few dedicated to every niche imaginable. In fact, there are currently over 800,000 podcasts available with over 54 million episodes to access. Not surprisingly, tech is one of the more popular podcast categories.

This sheer volume of information—though useful—can sometimes feel overwhelming and difficult to navigate. To help with that, we’ve compiled a list of the best podcasts for mainframers.

(This article is part of our Tech Books & Talks Guide. Use the right-hand menu to navigate.)

Best mainframe podcasts

In no particular order, these six podcasts are uniquely focused on the mainframe, providing a vast amount of information and knowledge about working in the mainframe ecosystem and with mainframe modernization. For each recommendation, we include:

  • Podcast debut
  • Episode release
  • General episode length
  • Hosts
  • The platforms that carry the podcast

Subscribe and listen to as many as you can.

The Modern Mainframe

Debut: February 2019
Released: Twice per month
Length: Ranges from 12-45 minutes, but most are quick listens under 25 minutes
Hosts: Varies
Where to Access: Apple Podcasts, Stitcher, Overcast, SoundCloud

BMC’s mainframe podcast should be added to all mainframers’ listening queues. (Don’t worry, we’re only a little biased.) The Modern Mainframe shares thought leadership, how-to advice, customer stories, personal experiences, and more on topics ranging from security and operations to shifting workforce demographics and DevOps implementation.

Along with new episodes, you’ll find all past episodes of the BMC AMI Z Talk podcast plus past Modern Mainframe episodes, including “Building a Better Software Delivery Platform,” the 2020 winner of DevOps.com’s DevOps Dozen award for best DevOps-related podcast series.

Following the addition of BMC AMI DevX application development solutions to BMC’s mainframe portfolio, The Modern Mainframe featured a must-listen three-part discussion of mainframe innovation and DevOps with John McKenny, BMC SVP of Intelligent Z Optimization and Transformation, and April Hickel, BMC VP of Intelligent Z Strategy. Listen to Part 1 here:

I Am A Mainframer

Debut: January 2017
Released: Monthly
Length: Around 30 minutes, but episodes range from 18-40 minutes
Host: Steven Dickens of IBM
Where to access: Apple Podcasts, Stitcher, Spotify, Overcast, iHeartRADIO, SoundCloud

The I Am A Mainframer podcast is produced by the Open Mainframe Project. It's currently hosted by Steven Dickens of IBM, who was part of launching the Open Mainframe Project in 2015. The purpose of the podcast is to look at the careers of people in the mainframe ecosystem, and each episode interviews a different mainframe professional.

The podcast covers key topics like the modern mainframe, offers insights into the mainframe industry, and generally offers advice for those working in the mainframe ecosystem.

Mainframe, Performance, Topics

Debut: March 2020
Released: Frequently
Length: Most clock in around 30 minutes
Hosts: Martin Packer and Marna Walle of IBM
Where to Access: Apple Podcasts, Spotify, Overcast, iHeartRADIO

Mainframe, Performance, Topics is hosted by two IBM professionals:

  • Martin Packer, a principal Z system investigator
  • Marna Walle, from z/OS development

In their podcast, the two casually talk about whatever z/OS topics they're interested in that week. While there's lots of flexibility from episode to episode, there is one important structural piece: each episode involves a mainframe item, a performance item, and a few "topics." It's a topical podcast that shares helpful insights from leaders in the industry.

Terminal Talk

Debut: June 2017
Released: Every two weeks
Length: Most are around 30 minutes; others range from 20-50 minutes
Hosts: Jeff Bisti and Frank de Gilio of IBM
Where to Access: Apple Podcasts, Stitcher, Spotify, Overcast, iHeartRADIO, SoundCloud

We’d be remiss not to include Terminal Talk, one of the longer-running mainframe podcasts with over 100 episodes available. Hosted by Jeff Bisti and Frank de Gilio, Terminal Talk aims to “take a look at the people, technology, and culture behind one of the world’s most powerful and important computing platforms, the mainframe.”

The episodes, which come out every two weeks, include interviews, discussions, and interesting facts and analysis. While some podcasts include mainframe discussion encompassed in other topics, this one is solely focused on mainframes. In fact, in the initial episode, Bisti said that he anticipated listener complaints of “all they talk about is mainframes.”

Obviously, for those working with mainframes, this is a must-listen.

Z DevOps Talk

Debut: December 2019
Released: Monthly
Length: Most are around 45 minutes
Hosts: Chris Hoina and Chris Sayles
Where to Access: Spotify, Anchor, Radio Public, Overcast, Google Podcasts, Breaker, Apple Podcasts

This IBM developer podcast is hosted by Chris Hoina and Chris Sayles. Obviously IBM-focused, the podcast looks at ways the company is working with open source technology to make mainframes more accessible.

Through interviews with industry experts, the podcast explores some key topics, ranging from Z software architecture to the next frontier for mainframe modernization.

The RE: Frame Podcast

Debut: November 2019
Released: Infrequently
Length: Around 30 minutes
Hosts: Lenn Thompson & David Cook of Broadcom
Where to Access: Stitcher, Overcast, SoundCloud, Anchor, Apple Podcasts

The RE: Frame Podcast’s tagline exclaims:

“What’s NOW and NEXT for the mainframe”

Hosted by Lenn Thompson and David Cook, both of Broadcom, this podcast is helpful for mainframe professionals because it’s focused on where mainframe is and, more importantly, where it’s going. The hosts are committed to making the podcast stand out by being a forward-looking mainframe resource, and so far it’s lived up to that goal.

The podcast first launched in October of 2019, and the first three episodes of a proposed 10-episode season have been released. So far, the topics include:

  • The People Factor, which looks at the type of professionals that are choosing mainframe careers
  • Machine Learning, which looks at ML on the mainframe
  • Mainframe DevOps, which looks at the tools, techniques, and experiences of mainframe developers

This new podcast, with few available episodes, is still worth the one click to subscribe.

Related reading

At BMC Blogs, we're always listening to, reading, and creating the best tech content to share knowledge and lead innovation. Get more of our recommendations on what to read, where to listen, and who to follow in our Tech Books & Talks Guide and explore these resources:

]]>
Wardley Value Chain Mapping: What Is It & How To Create Yours https://www.bmc.com/blogs/wardley-value-chain-mapping/ Wed, 25 Nov 2020 08:13:09 +0000 https://www.bmc.com/blogs/?p=19415 Strategic plans that support intelligence are vital. Planning ahead and having advanced warning of possible shortcuts or obstacles helps individuals and organizations navigate uncertainty with steady footing. Although some plans may look good from the outside, not all plans are as equipped as others to support through to the end. Take, for example, hikers climbing […]]]>

Strategic plans grounded in intelligence are vital. Planning ahead and having advance warning of possible shortcuts or obstacles helps individuals and organizations navigate uncertainty with steady footing. Although some plans may look good from the outside, not all are equipped to support you through to the end.

Take, for example, hikers climbing over a mountain. They are given two options to navigate the trails: a list of directions or a map. The directions detail all the obstacles on the mountain, like how many rocks and trees are in an area, but they do not show exact locations.

The map, on the other hand, shows all the major mountain locations visually with distance but does not tell the hiker how many trees they will pass. Using a map, the hiker is able to:

  • Avoid sheer drops
  • Clearly know their location
  • Pinpoint resources like running water
  • Define which route will be shorter

Trail map, Monarch Lake, Colorado (Source)

Sure, a tree may fall on a path or a rockslide may occur, but if that hiker has planned their course using a map, they can now tackle the obstacles with more ease. Using mere directions, the hiker might have to try multiple times to reroute or find resources. The choice is clear: hikers use maps for a reason.

When we apply the simple use of strategic landscape mapping to business (or any operation), it is essentially the same concept. And that concept is called Wardley Value Chain Mapping.

In this article, we’ll:

Who is Simon Wardley?

A thought leader in business with decades of experience, Simon Wardley created the mapping technique in 2005 to:

  • Predict market trends
  • Foresee outcomes within his organization

Today, with a focus on corporate-level IT strategies, he helps business leaders build intuitive and shareable maps that deliver several advanced operational benefits.

At the Leading Edge Forum (LEF), Wardley "uses mapping in his research for the LEF covering areas from Serverless to Nation-State competition whilst also advising/teaching LEF clients on mapping, strategy, organization, and leadership."

What is Wardley Value Chain Mapping?

Wardley value chain mapping is a way for you to:

  • Visualize and examine your environment
  • Identify upcoming changes
  • Properly choose actions and activities

“By examining what is needed, what components will be in use, what are their dependencies and characteristics, you can build a visual representation of your world, play what-if games, and pick your direction and best actions to support it.” –GitHub

Understanding that all organizations operate within a landscape is the core of Wardley Value Chain Mapping. In business, the landscape is expressed by the value chain or independent activities needed to reach user goals. These are then set into place based on evolution and demand. The end result is a visual graphic depicting risk predictions and objectives.

Value Chain Maps, used to avoid project risk, are continual: there is no finished state. As you identify each new insight, you'll also reveal new decision paths. Start small and don't worry about getting the first map right. Improvements are part of the process.

Here’s Wardley in 2014:

Benefits of mapping

Benefits of Wardley Value Chain Mapping include:

  • Advanced company-wide communication
  • Enhanced risk and opportunity identification
  • Advanced utility of products
  • An easier ability to cut costs
  • Advanced collaboration of teams at all levels

Strategic plans that support intelligence are essential to success. From hikers to government leaders, CEOs, and project managers, strategy guides decision-making and keeps you from getting lost. With that in mind, according to Simon in an article from Medium,

“Strategy is all about observing the landscape, understanding how it is changing, and using what resources you have to maximize your chances of success.”

It is like playing a game of chess, where the board is a landscape map.

Wardley map example

(Source)

There are two layers to the map. Define the User, Need, and Activity to determine the X and Y axes. For example:

  • Person → Need: Hunger → Activity: Acquire smoothie
  • Smoothie company → Need: Ingredients → Need: Recipes → Activity: Blending smoothie
  • Smoothie company → Need: Standardized recipes throughout stores → Activity: Printed menus
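The smoothie chains above can be sketched as a tiny data structure, with each component carrying the two map coordinates: visibility to the user (the Y axis) and evolution stage (the X axis). The stage and visibility values below are illustrative, not canonical.

```python
# The smoothie value chain as a tiny graph. Each node stores the two Wardley
# coordinates: visibility to the user (y) and evolution stage (x).
# Stage and visibility values here are illustrative, not canonical.
STAGES = ["genesis", "custom-built", "product", "commodity"]

components = {
    "customer":     {"y": 1.00, "stage": "commodity",    "needs": ["smoothie"]},
    "smoothie":     {"y": 0.80, "stage": "product",      "needs": ["recipe", "ingredients"]},
    "recipe":       {"y": 0.50, "stage": "custom-built", "needs": ["printed menu"]},
    "ingredients":  {"y": 0.45, "stage": "commodity",    "needs": []},
    "printed menu": {"y": 0.20, "stage": "product",      "needs": []},
}

def coordinates(name):
    """Map a component to its (x, y) position on the Wardley canvas."""
    c = components[name]
    x = STAGES.index(c["stage"]) / (len(STAGES) - 1)
    return (x, c["y"])

for name in components:
    x, y = coordinates(name)
    print(f"{name:13s} x={x:.2f} y={y:.2f}")
```

Keeping the chain as data like this makes the later steps (plotting along the evolution axis, comparing maps) mechanical rather than ad hoc.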

How to create a Wardley map

Follow these steps to understand, and then create, your own Wardley Value Chain map.

Step 1: Define your purpose & user needs

To begin a Wardley Map, identify a purpose or driving force. Determine:

  • Why you’re doing the work
  • What you hope to achieve
  • What others will gain from the goal

Ultimately, the purpose should center around the user—the customer.

Step 2: Use value chains to define the scope

Next, focus on critical elements. As Wardley writes for CIO, “Once you’ve determined the high-level needs, the next step is to flesh this out with the components required to meet those needs.”

In this step, answer questions like:

  • What does the user need?
  • How do you serve this need?
  • What adds to each component when fulfilling the need?
  • Are there any dependencies or links?

The chains will go from a user (most valuable) to need, then to activity (least valuable).

Step 3: Link the value chains

In this step, you’ll create the value chain map. As an example of a chain, if a user wants to watch a batman show, they may subscribe to a cable provider, that cable provider hires a studio to make a superhero series, the studio hires writers to script the series with batman, and so on. This chain, as-is, isn’t too useful or productive.

Instead, to provide context to these value chains, you’ll want to apply “change over time”. Simon offers a tip for overcoming change plotting:

“Take your value chain and plot the components along an evolution axis covering genesis, custom-built, product (+rental), and commodity (+utility).”

Moving left to right in terms of supply and demand, you'll build a map. The activities on the map move into the genesis, custom, product, or commodity columns.

Step 4: Gap & similarity analysis

After completing your map, you can use it on multiple levels to:

  • Curb bias
  • Enhance communication
  • Eliminate duplicates

Comparing internal maps as well as competitor maps will reveal opportunities often overlooked or left unchallenged. As Simon describes:

“One of the beautiful things about maps is that you can start to build up a portfolio of maps from different parts of the organization and start to challenge this duplication and bias by sharing. Maps give you the communication mechanism to do this.”

Three pillars of Wardley mapping

  1. Visualize systems and how they change.
  2. Know the basic patterns of capitalism.
  3. Exploit patterns with strategic intent.

A tool for those who want to create and understand a clear strategy plan, Wardley Value Chain Mapping opens up awareness of risk and helps maximize the ability to use opportunities. Within business, staying ahead of the curve with clearly mapped strategy changes the game.

Before setting out to create your own Wardley Maps, study previously proven maps.

Related reading

]]>
What Is XaaS? Everything as a Service Explained https://www.bmc.com/blogs/xaas-everything-as-a-service/ Tue, 24 Nov 2020 08:17:19 +0000 https://www.bmc.com/blogs/?p=19357 The “as-a-Service” model of cloud computing, providing services over the internet, is a trend that continues to gain traction across the globe. Software-as-a-Service (SaaS) offerings are becoming the de facto method for users to access services and products like Adobe Creative Suite and Microsoft Office. Other kinds of offerings are being made available in the […]]]>

The “as-a-Service” model of cloud computing, providing services over the internet, is a trend that continues to gain traction across the globe. Software-as-a-Service (SaaS) offerings are becoming the de facto method for users to access services and products like Adobe Creative Suite and Microsoft Office. Other kinds of offerings are being made available in the same pay-as-you-go business model.

You’ve seen the terms SaaS, PaaS (Platform-as-a-Service), and IaaS (Infrastructure-as-a-Service) plenty of times. Now, a newer concept encompasses these ideas—and more: everything as a service, or XaaS.

In this article, we’ll:

What is XaaS?

XaaS is short for Everything-as-a-Service and sometimes Anything-as-a-Service. XaaS reflects how organizations across the globe are adopting the as-a-Service method for delivering just about, well, everything. (That’s why we’re seeing services like FaaS, BPMaaS, ITaaS, and even ransomware as a service!)

Initially a digital term, XaaS can now apply to the real, non-digital world, too.

Many B2B organizations provide as-a-Service offerings. These offerings are neatly sliced up and portioned out to create customized services that meet the specific needs of each client at a price that makes sense for them. In this way, XaaS could be simply thought of as a combination of SaaS, PaaS, and IaaS offerings.

On-prem vs. IaaS, PaaS, and SaaS

You wouldn’t be wrong to think that. But you also wouldn’t be getting the full picture of what XaaS means today.

The primary goal of XaaS offerings is to increase the value for the customer. In a XaaS model, you want to convert one-time buyers into service subscribers who receive ongoing benefits from the product. In a XaaS offering, customers should feel that their money is being put to good use—otherwise, as we’ll see below, customers are unlikely to adopt XaaS.

This is where the concept of servitization comes in.

What is servitization?

With the massive success of subscription-based business models, more organizations are looking to get in on the action by leveraging “servitization”—the combination of products and services into a single package.

To succeed, the goal of servitization must be more than just milking more money from customers. Combining services and products together allows organizations to provide customers with greater value than the products or services would provide as standalone offerings.

In many ways, XaaS and the Internet of Things (IoT) are connected. Many consumer-facing organizations are finding ways to integrate data tools into their existing products to provide users with increased value. Rolls-Royce is one such company.

A real XaaS example

Rolls-Royce seeks to provide its customers with a jet engine rental service that helps “customers to maximise the flying potential of their engines.” Through its TotalCare program, Rolls-Royce offers customers a way to off-load the burden of engine maintenance while “reducing waste, increasing efficiency, and enhancing the robustness of our supply chain.”

This XaaS offering provides long-term jet engine rental contracts wherein the customer "is charged on a fixed $ per flying hour basis…" This arrangement:

  • Incentivizes the jet engine maker to maintain the reliability of their products.
  • Enables the maker to recover valuable materials from the engines at the end of their life.
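The fixed $-per-flying-hour arrangement can be contrasted with outright ownership in a few lines; every figure below is invented purely for illustration.

```python
# Contrast of ownership vs. power-by-the-hour billing.
# Every number below is invented purely for illustration.
def ownership_cost(purchase_price, annual_maintenance, years):
    # The buyer carries the asset and all maintenance risk.
    return purchase_price + annual_maintenance * years

def per_hour_cost(rate_per_flying_hour, hours_per_year, years):
    # Maintenance risk stays with the manufacturer; the airline pays only
    # for hours actually flown.
    return rate_per_flying_hour * hours_per_year * years

own = ownership_cost(purchase_price=20_000_000, annual_maintenance=1_500_000, years=10)
rent = per_hour_cost(rate_per_flying_hour=900, hours_per_year=3_000, years=10)
print(f"ownership: ${own:,}  per-hour: ${rent:,}")
```

The point isn't the specific numbers but the structure: the customer's cost now scales with usage, and the incentive to keep engines reliable sits with the maker.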

Equipped with IoT sensors, their service utilizes advanced analytics that track the performance of the engine throughout its lifetime. This means Rolls-Royce can best maintain operational efficiency for airlines through data-driven, proactive maintenance and optimization. The goal is to cut costs for their customers while also increasing their own profitability and reducing waste.

According to Rolls-Royce, “Up to 95% of a used aero engine can be recovered and recycled. Around half of the materials recovered are of such high quality they can be safely remanufactured for use as new aerospace components, reducing our need to procure raw materials.”

This XaaS option provides obvious benefits to the company:

  • Allows the company to retain ownership of the product and thus make use of its valuable materials once it is decommissioned.
  • Cuts down on waste—a boon for the planet and great PR for the company at the same time.

And it’s big benefit to the customer? No responsibility for maintenance.

Switching to XaaS

While the possibilities of servitization and increasing your organization's value proposition through the XaaS model may be tempting, adopting this approach is no simple task, even as subscription-based business models gain momentum.

Still, customers might be resistant to adopting XaaS. The biggest reason is usually perceived value. You might have a hard time convincing buyers to rent a toaster from you and subscribe to an analytics platform that uses machine learning to toast different types of bread to the perfect doneness for each person in the house.

Products like the WHOOP Strap seek to servitize fitness devices (think FitBit) by selling customers on a subscription to their analytics platform and giving away the tracking bands for “free”. A move like this does two things:

  • Offloads the upfront cost of an exercise tracking band
  • Promises to provide more long-term value to the consumer over the product’s life, through the user platform

As you can see, comparing this service to the theoretical XaaS toaster, not all XaaS ideas are created equal.

If the value proposition doesn’t meet the demands of the consumer, maintaining a base of subscribers becomes all but impossible.

XaaS must maximize value

The best way to establish and maintain a XaaS product is to:

  • Find ways, online and offline, to maximize the value proposition of your offerings.
  • Provide a service that offers unique benefits to your customers.

Utilizing IoT devices and analytics to create a better value proposition is something we’ll be seeing a lot more in the years to come.

Related reading

]]>
Introduction To Web Scale IT https://www.bmc.com/blogs/web-scale/ Tue, 24 Nov 2020 08:08:01 +0000 https://www.bmc.com/blogs/?p=19363 Web scale IT is a relative newcomer to IT infrastructure and architecture, but it changes everything we know about scale, agility, and flexibility for companies. In this article, I’ll tackle web scale IT, including: Concepts & components Web scale vs hyper-converged How & why to switch to web scale IT What is web scale? Web […]]]>

Web scale IT is a relative newcomer to IT infrastructure and architecture, but it changes everything we know about scale, agility, and flexibility for companies.

In this article, I’ll tackle web scale IT, including:

What is web scale?

Web scale IT, or web scale infrastructure, is the technology of converged architectures with very flexible, scalable, fault-tolerant software that can run on standard x86 hardware. This combination of characteristics allows you to integrate various infrastructure components—computation, storage, virtualization, and networking—into one platform or appliance. By aggregating resources and centralizing management, you will:

  • Increase efficiency and flexibility
  • Minimize maintenance

Global enterprises working across large scales have implemented (essentially created) web scale IT to change the way their IT teams work and maintain speed, agility, and scalability.

The best examples are web giants like Google, Facebook, and Amazon, but smaller players are starting to get in on the action, too. Gartner first introduced the term in 2013 with high hopes for its rapid growth within five years. Although the uptake has been slower than anticipated, its steady growth will likely continue into the future.

Web scale characteristics

Web scale doesn’t refer to one single technology; rather, it represents an infrastructure formed by a set of technologies and capabilities that large companies have successfully implemented at global scales.

The methodology and approach of web scale infrastructure are unique in that they work primarily in large environments, fostering different ways of thinking than more traditional approaches. The methodology is driven by the properties and components described below.

Methodology & approach

The web scale architectural approach focuses on designing, building, and managing data center and software infrastructure in a way that tailors systems to global computing and large environments.

The nature of the internet and global computing means modern architectures tend to grow at very fast rates. This, in turn, creates bottlenecking. To avoid scale limitations, web scale methods prioritize the efficient scalability of your infrastructure for these key components:

  • Speed and agility
  • Consistency
  • Versioning
  • Tolerance

Tolerance, in particular, is key to helping operators identify and fix issues more quickly, preventing bottlenecks and enabling faster deployment. Web scale’s open environment fosters tolerance by facilitating:

  • Standardized protocols
  • Rapid issue identification
  • A unified stack for efficient communication

Making improvements in these areas often requires some customization to suit specific business requirements.

Properties & components

Web scale infrastructures are software-based: everything runs in software on standard x86 hardware, with no specialized hardware dedicated to single tasks.

This pairs well with the need to expand while functioning as a cohesive unit, rather than relying on multiple deployments of units that are not individually scalable.

That said, web scale cannot upgrade all components in one go due to the immense size of the system. That means that certain features are crucial for web scale systems:

  • Self-defining and versioned objects
  • Self-describing and version-aware services

This allows the encoding and serialization of structured data as well as communication between the various parts of the distributed system, all without the expectation of upgrading all components at once.
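As a minimal sketch of this idea, the decoder below tolerates both older and newer senders: missing fields (from an older sender) fall back to defaults, and unknown fields (from a newer sender) are preserved rather than rejected. The field names and the message format are illustrative, not a real protocol; they simply mimic the forward/backward compatibility rules used by schemes like Protocol Buffers.

```python
import json

# Known fields and their defaults for this (hypothetical) node record.
KNOWN_FIELDS = {"host": "unknown", "cpu_cores": 1, "region": "default"}

def decode_node_record(payload: str) -> dict:
    """Decode a node record, tolerating both older and newer senders."""
    raw = json.loads(payload)
    record = {"version": raw.get("version", 1)}
    for field, default in KNOWN_FIELDS.items():
        # Missing fields (older sender) fall back to defaults.
        record[field] = raw.get(field, default)
    # Unknown fields (newer sender) are kept rather than rejected, so
    # components can be upgraded independently.
    record["extra"] = {k: v for k, v in raw.items()
                       if k not in KNOWN_FIELDS and k != "version"}
    return record

# An old v1 sender omits "region"; a new v3 sender adds "gpu_count".
old = decode_node_record('{"version": 1, "host": "n1", "cpu_cores": 8}')
new = decode_node_record('{"version": 3, "host": "n2", "cpu_cores": 16, '
                         '"region": "eu-west", "gpu_count": 4}')
```

Because neither side assumes the other has been upgraded, the system can roll out new message versions component by component.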

The huge size of web scale architecture also leads to the human-to-machine ratio heavily skewing in the robots’ favor. Such disparities mean you should implement analytics and automation software to reduce human responsibilities and interactions.

To aid this automation, web scale architecture should include programmatic interfaces built on HTTP-based services. These interfaces should use latency- and loss-tolerant protocols and asynchronous request/response patterns to grant complete control and automation.
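One way to picture such an asynchronous interface is a submit-then-poll pattern: the request returns a job id immediately, and the caller polls for the result later instead of blocking on a slow device. This is an illustrative sketch only; the names (`submit_job`, `job_status`, `reconfigure_switch`) are hypothetical, and a real service would expose these as HTTP endpoints rather than in-process functions.

```python
import time
import uuid
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)
_jobs = {}

def submit_job(task, *args):
    """Accept the request and return immediately with a job id."""
    job_id = str(uuid.uuid4())
    _jobs[job_id] = _executor.submit(task, *args)
    return job_id

def job_status(job_id):
    """Poll a job: 'pending' until the work completes."""
    future = _jobs[job_id]
    if future.done():
        return {"state": "done", "result": future.result()}
    return {"state": "pending"}

def reconfigure_switch(port: int):
    time.sleep(0.1)  # stand-in for slow device work
    return f"port {port} reconfigured"

jid = submit_job(reconfigure_switch, 7)
while job_status(jid)["state"] == "pending":
    time.sleep(0.02)  # the caller is never blocked waiting on the device
result = job_status(jid)["result"]
```

Because the interface never blocks on the underlying work, automation tooling can issue thousands of such requests concurrently and reconcile results as they arrive.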

And finally, to protect against single points of failure and bottlenecks, the architecture should include failure tolerance considerations that can address problems as quickly as possible. Some techniques for accomplishing this goal include:

  • Consensus algorithms
  • Rate limiting
  • Multiple replicas
  • Two-phase commit

Hyper-converged vs web scale

One word you’re likely to hear to describe web scale IT is hyper-converged. Although web scale infrastructures are converged, they still have significant differences from hyper-converged systems.

The first difference is that hyper-converged architectures replicate between machines across systems to provide reliable hardware abstractions. Because web scale architectures are software-based, they instead build reliability through custom software abstractions provided by applications. These software abstractions are better suited to scalability than hyper-converged hardware abstractions, which carry strong consistency requirements.

The hardware that web scale does use is also frequently customized—the opposite of mass-produced commodity hardware that hyper-converged systems tend to use. The companies employing web scale are so large that they can afford to pay the higher upfront costs of these customizations in order to reap the rewards of performance gains (and eventual cost savings in the form of lower maintenance) in the future.

The relationship between the software and any existing hardware is also quite different in the two models:

  • Hyper-converged architectures typically separate the two.
  • Web scale fits them together in a tightly coupled data center design.

This fits into the customization theme, where such coupling helps the infrastructure, hardware, and applications to work optimally in a specific environment.

Within the software, hyper-converged architectures also co-locate storage and compute services, whereas web scale typically separates them into different services. Even when web scale places compute and storage together, it still divides applications into microservices separated by the network.

Harnessing web scale IT

So why has the uptake of web scale infrastructure been slower than expected? Part of the problem is that it’s intimidating!

Making this change requires that IT teams alter their entire way of working. If an IT group hasn’t fully embraced the idea of making the switch, they might not make the necessary effort when it comes to web scale implementation. While incremental changes can be helpful in some situations, web scale implementation is better suited to big leaps, leaving all old ways behind.

Some of the biggest changes will include a heavier focus on architecture and design rather than maintenance. IT teams will need to constantly think about and work on improving:

  • Compute
  • Storage
  • Growth horizons
  • Failure recovery
  • Automation

Automation in particular is a crucial aspect of web scale, so IT teams should be comfortable implementing artificial intelligence and machine learning in appropriate contexts. You’ll end up spending more time on automation—and less time on network architecture or operations.

This situation might be the opposite of what most network operators are used to, so the change could feel particularly drastic for them. But with automation in place, the team can then:

  • Focus more on managing network devices
  • Reduce time spent on debugging
  • Minimize or eliminate human error in the ever-growing system

Getting started with web scale

Think you’re ready for web scale?

Ease your IT team into web scale by building the infrastructure on the side. Then, progressively push new applications onto it.

To encourage full implementation, communicate thoroughly, reaching the entire organization and emphasizing the importance of the switch. Highlight the benefits the change will bring in the long term—and that it’s likely all or nothing. In the long run, benefits of switching to web scale will include improved performance, efficiency, resiliency, flexibility, and cost savings.

The actual switch is daunting and will require concerted effort all around, but it’s the way of the future. Keeping up with the big players—and getting ahead of them—hinges on embracing web scale architecture.

Related reading
