Docker CMD vs. ENTRYPOINT: What’s the Difference and How to Choose

CMD and ENTRYPOINT are two Dockerfile instructions that together define the command that runs when your container starts. You must use these instructions in your Dockerfiles so that users can easily interact with your images. Because CMD and ENTRYPOINT work in tandem, they can often be confusing. This article clears up the differences and helps you choose between them.

In a cloud-native setup, Docker containers are essential elements that ensure an application runs effectively across different computing environments. These containers carry out specific tasks and processes of an application workflow and are created from Docker images.

The images, in turn, are built by executing the instructions in a Dockerfile. Three instruction types (commands) are central to how you build and run Dockerfiles:

  • RUN. Mainly used to build images and install applications and packages, RUN creates a new layer on top of an existing image by committing the results.
  • CMD. Sets default commands and/or parameters that can be overridden from the Docker command-line interface (CLI) when you run a container.
  • ENTRYPOINT. Sets the command that always runs when the container starts; parameters passed to docker run do not replace it.

Any Docker image must have an ENTRYPOINT or CMD declaration for a container to start. Though the ENTRYPOINT and CMD instructions may seem similar at first glance, there are fundamental differences in how they build container images.
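
To make these three instruction types concrete, here is a minimal Dockerfile sketch; the base image and package are placeholders chosen for illustration, not part of the examples that follow:

FROM alpine:3.19
# RUN executes at build time and commits its result as a new image layer
RUN apk add --no-cache curl
# ENTRYPOINT sets the executable that always runs when the container starts
ENTRYPOINT ["echo"]
# CMD supplies default arguments that arguments to docker run can override
CMD ["Hello from the container"]

Running an image built from this file with no arguments prints Hello from the container; passing arguments to docker run replaces only the CMD portion.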


Shell form vs. executable form

First, we need to understand how the Docker daemon processes instructions once they are passed to it.

All Docker instruction types (commands) can be specified in either shell or exec form. Let’s build a sample Dockerfile to understand these two forms.

(Explore more Docker commands.)

Shell command form

As the name suggests, the shell form of an instruction starts its process inside a shell: Docker executes it by invoking /bin/sh -c <command>.

Because the command passes through a shell, environment variables are substituted and other shell processing is applied before the results are returned.

Shell form instructions are specified in the form:
<instruction> <command>
Examples of shell form commands include:

RUN         yum -y update
RUN         yum -y install httpd
COPY        ./index.html /var/www/index.html
CMD         echo "Hello World"

A Dockerfile named Darwin that uses the shell form will have the following specifications:

ENV name Darwin
ENTRYPOINT /bin/echo "Welcome, $name"

(The command specifications used above are for reference. You can include any other shell command based on your own requirements.)

Based on the specification above, the output of the docker run -it darwin command will be:

Welcome, Darwin

This command form routes execution through an extra shell process, which adds processing overhead. As a result, the shell form is usually not preferred unless you specifically need shell features such as environment variable substitution.

Executable command form

Unlike the shell form, an instruction written in executable (exec) form directly runs the executable binary, without invoking a shell.

Executable command syntaxes are specified in the form:

<instruction> ["executable", "parameter 1", "parameter 2", …]

Examples of executable commands include:

RUN ["yum", "-y", "update"]
CMD ["yum", "-y" "install" "httpd"]
COPY ["./index.html/var/www/index.html"]

To build a Dockerfile named Darwin in exec form:

ENV name Darwin
ENTRYPOINT ["/bin/echo", "Welcome, $name"]

Because this avoids shell processing, the output of the docker run -it darwin command will be returned as the literal string: Welcome, $name.

This is because the environment variable is not substituted without a shell. To get shell processing in exec form, specify a shell such as /bin/bash as the executable, i.e.:

ENV name Darwin
ENTRYPOINT ["/bin/bash", "-c", "echo Welcome, $name"]

This prompts shell processing, so the output of the Dockerfile will be: Welcome, Darwin.

In a containerized setup, commands are the instructions passed to the operating environment to produce a desired output. Choosing the right command form for passing instructions matters in order to:

  • Return the desired result
  • Avoid pushing the environment into unnecessary processing, thereby impacting operational efficiency


CMD vs. ENTRYPOINT: Fundamental differences

CMD and ENTRYPOINT instructions have fundamental differences in how they function, making each one suitable for different applications, environments, and scenarios.

They both specify programs that execute when the container starts running, but:

  • CMD instructions are ignored by the Docker daemon when parameters are stated within the docker run command.
  • ENTRYPOINT instructions are not ignored; instead, the docker run parameters are appended as arguments to the ENTRYPOINT command (see the sketch below).
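
A minimal sketch of that difference, assuming two hypothetical images, cmd-demo and entry-demo, whose Dockerfiles end with the lines shown:

# Image cmd-demo ends with:
CMD ["echo", "default message"]
# docker run cmd-demo            -> prints: default message
# docker run cmd-demo hostname   -> CMD is replaced; the container runs hostname instead

# Image entry-demo ends with:
ENTRYPOINT ["echo", "default message"]
# docker run entry-demo          -> prints: default message
# docker run entry-demo hostname -> prints: default message hostname (the argument is appended)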

Next, let’s take a closer look. We’ll use both command forms to go through the different stages of running a Docker container.

Docker CMD

Docker CMD commands are passed through a Dockerfile that consists of:

  • Instructions on building a Docker image
  • The default command (binaries) for running a container from the image

With a CMD instruction type, a default command/program executes even if no command is specified in the CLI.

Ideally, there should be a single CMD instruction within a Dockerfile. If a Dockerfile contains multiple CMD instructions, all except the last one are ignored.
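
For example, in this hedged sketch (the image name cmd-last is hypothetical), only the second CMD takes effect:

FROM alpine:3.19
CMD ["echo", "first CMD"]
# The CMD below silently replaces the one above
CMD ["echo", "second CMD"]

# docker run cmd-last   -> prints: second CMD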

An essential feature of a CMD command is its ability to be overridden. This allows users to execute commands through the CLI to override CMD instructions within a Dockerfile.

A Docker CMD instruction can be written in both Shell and Exec forms as:

  • Exec form: CMD ["executable", "parameter1", "parameter2"]
  • Shell form: CMD command parameter1 parameter2

Stage 1. Creating a Dockerfile

When building a Dockerfile, the CMD instruction specifies the default program that will execute once the container runs. A quick point to note: CMD commands will only be utilized when command-line arguments are missing.

We’ll look at a Dockerfile named Darwin with CMD instructions and analyze its behavior.

The Dockerfile specifications for Darwin are:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
CMD ["echo", "Hello, Darwin"]

The CMD instruction in the file above echoes the message Hello, Darwin when the container is started without a CLI argument.

Stage 2. Building an image

Docker images are built from Dockerfiles using the command:

$ docker build -t darwin .

The above command does two things:

  • Tells the Docker daemon to build an image
  • Tags the resulting image as darwin, using the Dockerfile in the current directory

Stage 3. Running a Docker container

To run a Docker container, use the docker run command:

$ docker run darwin

Since no command-line argument is supplied, the container runs the default CMD instruction and displays Hello, Darwin as output.

If we add an argument with the run command, it overrides the default instruction, i.e.:

$ docker run darwin hostname

Because the default CMD is overridden, the above command runs the container and executes hostname instead of the echo instruction, producing the following output:

6e14beead430

which is the hostname of the Darwin container.

When to use CMD

The best way to use a CMD instruction is by specifying default programs that should run when users do not input arguments in the command line.

This instruction ensures the container has a default application to start as soon as the image is run. In other words, CMD supplies the container’s default process at startup.

Additionally, in specific use cases, a docker run command can be executed through a CLI to override instructions specified within the Dockerfile.
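
A common pattern, then, is to let CMD start the main application by default while leaving room for ad hoc overrides. Here is a hedged sketch; the image name, base tag, and app.py are placeholders rather than part of the examples above:

FROM python:3.12-slim
# Copy a hypothetical application into the image
COPY app.py /app/app.py
# Default: start the application when the container runs
CMD ["python", "/app/app.py"]

# docker run my-app                   -> runs the default application
# docker run my-app python --version  -> overrides CMD for a one-off check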



Docker ENTRYPOINT

In Dockerfiles, an ENTRYPOINT instruction is used to set executables that will always run when the container is initiated.

Unlike CMD instructions, ENTRYPOINT instructions are not ignored or overridden by the command-line arguments passed to docker run; those arguments are appended instead. (The only way to replace an ENTRYPOINT at runtime is the explicit docker run --entrypoint flag.)

A Docker ENTRYPOINT instruction can be written in both shell and exec forms:

  • Exec form: ENTRYPOINT ["executable", "parameter1", "parameter2"]
  • Shell form: ENTRYPOINT command parameter1 parameter2

Stage 1. Creating a Dockerfile

ENTRYPOINT instructions are used to build Dockerfiles meant to run specific commands.

These are reference Dockerfile specifications with an Entrypoint command:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello, Darwin"]

The above Dockerfile uses an ENTRYPOINT instruction that echoes Hello, Darwin when the container is running.

Stage 2. Building an Image

The next step is to build a Docker image. Use the command:

$ docker build -t darwin .

When building this image, the daemon looks for the ENTRYPOINT instruction and specifies it as a default program that will run with or without a command-line input.

Stage 3. Running a Docker container

When running a Docker container using the Darwin image without command-line arguments, the default ENTRYPOINT instructions are executed, echoing Hello, Darwin.

In case additional command-line arguments are introduced through the CLI, the ENTRYPOINT is not ignored. Instead, the command-line parameters are appended as arguments for the ENTRYPOINT command, i.e.:

$ docker run darwin hostname

will execute the ENTRYPOINT with hostname appended as an additional argument to echo, returning the following output:

Hello, Darwin hostname

Note that here hostname is passed to echo as a literal string rather than executed as a command, because the ENTRYPOINT is not replaced.
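
If you genuinely need to bypass an ENTRYPOINT, for example while debugging an image, the docker run --entrypoint flag replaces it explicitly. A hedged example against the Darwin image built above:

$ docker run --entrypoint /bin/ls darwin -l /

This runs /bin/ls -l / inside the container instead of the echo ENTRYPOINT; any arguments after the image name are passed to the replacement executable.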

When to use ENTRYPOINT

ENTRYPOINT instructions are suitable for both single-purpose and multi-mode images where there is a need for a specific command to always run when the container starts.

One of its popular use cases is building wrapper container images that encapsulate legacy programs for containerization, which leverages an ENTRYPOINT instruction to ensure the program will always run.

Using CMD and ENTRYPOINT instructions together

While there are fundamental differences in their operations, CMD and ENTRYPOINT instructions are not mutually exclusive. Several scenarios may call for the use of their combined instructions in a Dockerfile.

A very popular use case for blending them is to automate container startup tasks. In such a case, the ENTRYPOINT instruction can be used to define the executable while using CMD to define parameters.

Let’s walk through this with the Darwin Dockerfile, with its specifications as:

FROM centos:7
RUN yum -y update
RUN yum -y install python
COPY ./opt/source code
ENTRYPOINT ["echo", "Hello"]
CMD ["Darwin"]

The image is then built with the command:

$ docker build -t darwin .

If we run the container without CLI parameters, it will echo the message Hello Darwin (the ENTRYPOINT executable followed by the default CMD argument).

Appending a parameter to the command, such as a username, overrides the CMD instruction and executes the ENTRYPOINT with the CLI parameter as its argument. For example, the command:

$ docker run darwin User_JDArwin

will return the output:

Hello User_JDArwin

This is because the ENTRYPOINT instruction cannot be ignored, while the CMD default is overridden by the command-line arguments.



Using ENTRYPOINT or CMD

Both ENTRYPOINT and CMD are essential for building and running Dockerfiles—it simply depends on your use case. As a general rule of thumb:

  • Use ENTRYPOINT instructions when building an executable Docker image using commands that always need to be executed.
  • Use CMD instructions to supply default arguments that apply until they are explicitly overridden on the command line when a Docker container runs.

A container image requires different elements, including runtime instructions, system tools, and libraries, to run an application. To get the most out of a Docker setup, make sure your administrators understand the functions, structures, and applications of these instructions, as they are critical to building images and running containers efficiently.
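
As a closing illustration of this rule of thumb, a common pattern is to pin the executable with ENTRYPOINT and expose its tunable arguments through CMD. This is a hedged sketch; the image name net-check and the choice of ping are placeholders:

FROM alpine:3.19
# The executable and its fixed flags always run
ENTRYPOINT ["ping", "-c", "3"]
# The target is only a default and can be overridden at docker run time
CMD ["localhost"]

# docker run net-check               -> ping -c 3 localhost
# docker run net-check example.com   -> ping -c 3 example.com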

Accelerating Mainframe Development Processes to Meet the SEC’s T+1 Regulation

A new regulation from the Securities and Exchange Commission (SEC) aimed at reducing risks and increasing efficiency in the financial markets will require a move to a T+1 securities settlement cycle by May 28, 2024. T+1 means that trades must be settled within one business day of execution, instead of the current T+2 to T+5 settlement cycles, which allow settlements to occur two to five days after a customer purchases a security. That new speed requirement will have a significant impact on banks and financial institutions.

The new regulation will reduce counterparty risk, as trades will be settled faster, and will also provide investors with faster access to their funds. However, the move to T+1 will also require banks and financial institutions to make significant changes to their application code, systems, and processes to meet the SEC’s new trading requirements.

To comply with the new regulation, banks and financial institutions must accelerate their mainframe development processes. They will also need to streamline their software analysis, development, testing, and application delivery processes to reduce the time it takes them to settle securities transactions.

This is where the BMC AMI DevX suite comes in. This set of solutions can help banks and financial institutions modernize and accelerate their mainframe development and take advantage of modern, agile DevOps processes to meet the requirements of the new SEC regulation.

BMC AMI DevX provides a number of features that make it an ideal platform for banks and financial institutions, such as a visual interface for managing the development process, which makes it easy for developers to track changes and collaborate with each other. As the mainframe code that processes these transactions ages, the ability to analyze the code, its components, sub-components, and the data stores that feed it becomes more important than ever. BMC AMI DevX uniquely presents this information in graphical format, giving mainframe developers the information they need to make necessary changes to code, without fear of “breaking” it. The suite also provides build and deploy functionality that can automatically deliver changes to production code quickly and efficiently, even when integrated with modern Git tooling.

BMC AMI DevX testing tools empower developers to shift left their code testing, so they can develop faster while improving code quality. Finally, by leveraging the suite’s out-of-the-box integrations with the other independent software vendors (ISVs) in the modern DevOps toolchain, banks, brokerages, and other financial institutions dependent on the mainframe for securities transactions can take full advantage of the DevOps automation that open systems have enjoyed for years.

To learn more about how BMC AMI DevX solutions can help banks and financial institutions modernize and mainstream mainframe code creation and deployment and adopt modern DevOps processes ahead of the new T+1 settlement cycle, visit our product pages for BMC AMI DevX Topaz Total Test, BMC AMI DevX Code Pipeline, and BMC AMI DevX Workbench for Eclipse.

How BMC Helix for ServiceOps Advances Agile DevOps for Enterprises

Legacy technology has often been seen as an inevitable barrier to DevOps, but in most enterprises, it will never disappear entirely, and in fact, mainframe technology is growing and remains projected to do so through the rest of this decade. A successful modern enterprise DevOps strategy needs a pragmatic approach to leveraging legacy technology and enhancing it with state-of-the-art ServiceOps technology.

Many BMC customers are large, established enterprises and organizations that have been investing in technology for decades. Nearly all of them have embarked on a DevOps journey, but for many, the impact of these initiatives, while usually very positive, has been limited.

The notion of a fully modernized technology infrastructure might be desirable, but it is an expensive and complex change. It may not be the best short- or long-term business decision to replace every legacy component, particularly if the opportunity cost of that effort reduces the capacity to deliver new, transformative innovations.

This reality has created significant challenges for DevOps, which grew in the first half of the decade in smaller organizations such as start-ups and open-source vendors that tended not to have much legacy technology. In these environments, services are easier to understand and it’s easier to enable rapid, light-touch deployment, compared to a company like a major bank where a simple customer interaction (for example, moving money from one bank to another) may cross many different technology platforms of different ages.

Enterprises are more complicated, so achieving the appropriate balance between the intentionally simplified microservice perspective of a DevOps team and the broader picture of the complex enterprise is more complicated, as well. As a result, organizations frequently lack the confidence and ability to let DevOps break out of small, contained pockets and become a mainstream innovation channel for large, complex, critical services.

This challenge of agile, enterprise-scale DevOps needs to be addressed to achieve higher DevOps maturity. BMC Helix for ServiceOps enables agile, enterprise-scale DevOps in several ways:

  • BMC Helix Intelligent Integrations align ServiceOps seamlessly and in real time directly with the tooling used by the DevOps teams, reducing handoffs and creating shared understanding through collaboration and automation.
  • Dynamic service modeling with BMC Helix CMDB enables the organization to discover and understand its services, even as they rapidly evolve across various underlying technologies, old and new.
  • BMC Helix Operations Management with AIOps applies machine learning and predictive capabilities to deliver an unparalleled understanding of the risk and impact of changes.
  • BMC Helix IT Service Management provides rapid, AI-driven insight into change risks, giving the organization greater confidence to enable DevOps while reducing impacts that arise as DevOps-managed services interact with the broader technology environment.

Legacy is not a barrier to innovation; you just need the right toolset to make it part of your modern development efforts.

New BMC Helix Dashboard Brings DORA Metrics to Support DevOps

In the most recent release of BMC Helix Dashboards, BMC introduced a new DevOps-focused dashboard, the BMC Helix ITSM DevOps Metrics Dashboard, which uses industry-standard DORA metrics to visualize how organizational software development performance impacts a service or application. In this blog post, we will introduce this new dashboard and discuss how these metrics are a powerful tool for performance optimization, not only for DevOps-driven software delivery, but also for a wider range of agile, incremental, and collaborative IT work.

What are DORA metrics?

DORA metrics were introduced by DevOps Research and Assessment, Google Cloud’s research program, to measure the state of an organization’s software delivery. They focus on some of the key characteristics identified by DORA as being critical to the performance of an organization in delivering successful outcomes based on DevOps practices.

The four key DORA metrics are:

  • Deployment frequency: For the service or application being worked on, how often does the organization deploy code to production or release it to end users?
  • Lead time for changes: How much time elapses between code being committed and that code successfully running in production?
  • Time to restore service: How long does it generally take to restore service to users after a defect or an unplanned incident impacts them?
  • Change failure rate: What percentage of changes made to a service or application results in impairment or failure of the service and requires remediation?

DORA subsequently added an additional category, operational performance, which reflects the reliability and health of the service.

DORA metrics in BMC Helix

The new BMC Helix ITSM DevOps Metrics Dashboard brings these metrics and more to life, enabling you to visualize current performance, as well as ongoing performance trends, for change activity against a service. In addition to the four key DORA metrics, the dashboard harnesses the best-in-class ServiceOps and AIOps capabilities of BMC Helix to provide an ongoing view of the health of the service.

The new dashboard also provides the viewer with valuable information about upcoming change activity, as well as additional actionable insights to help drive improvements.

Figure 1. Sample BMC Helix ITSM DevOps Metrics Dashboard.

DORA, of course, has its roots firmly in the DevOps world; key members of the group include Gene Kim and Dr. Nicole Forsgren. For organizations practicing DevOps, this dashboard provides the insights specified by DORA for those activities.

However, the dashboard should not be considered only for code deployment. As explained in the 2021 State of DevOps Report, “These four metrics don’t encompass all of DevOps, but they illustrate the measurable, concrete benefits of pairing engineering expertise with a focus on minimizing friction across the entire software lifecycle.”

This pairing of expert-driven optimization with reduced friction draws comparisons with ITIL® 4, which shares many of the same guiding principles that have underpinned DevOps throughout its short life: iterative progression with continuous feedback; collaboration and visibility of work; optimization; and automation.

Indeed, High Velocity IT is one of the four key practitioner books in the ITIL® 4 library, and specifically adapts the learnings of DevOps to the broader IT environment. As the book’s introduction states, “High velocity does not come at the expense of the utility or warranty of the solution, and high velocity equates with high performance in general.”

As such, we anticipate this dashboard will be of great value to organizations that are active adopters and practitioners of DevOps, as well as any technical organization seeking to implement more changes more quickly and iteratively with greater resilience and automation. The benefits described by DORA in its 2022 State of DevOps Report apply to much more than just DevOps: “The faster your teams can make change, the sooner you can deliver value to your customers, run experiments, and receive valuable feedback.”

Gain Innovation Capabilities Faster with DevOps

Recently, BMC was very honored to host a webinar with renowned DevOps and customer-centricity expert, Gene Kim. As we’ve mentioned in previous entries in this series (you can read about Gene’s take on the scope of DevOps, its mainframe-specific attributes, and its role in the Jobs-as-Code approach for developers, engineers, and site reliability engineers, or SREs), Gene has clarified and expanded the definition of DevOps for thousands of IT and business professionals, and his deeply insightful books, The DevOps Handbook, The Phoenix Project, and The Unicorn Project, have helped ground DevOps principles in concrete and relatable teamwork scenarios for readers around the world.

Gene’s definition of DevOps, which can be applied to any business, is the architectural practices, the technical practices, and the cultural norms that enable us to increase our ability to deliver applications and services quickly and safely, thereby allowing us to deliver customer value without sacrificing security, reliability, and stability.

As Gene explained, “We create organizations, systems, [and] cultures, that either fully unleash the creative problem-solving potential of everyone in the organization, or we create systems that constrain, or even extinguish entirely the creativity and problem-solving potential of each organization. And so the hallmark of the second one, where people can’t do what they need to do…I think it actually led to DevOps.” Gene further summarized the business goal of DevOps in the words of one of his chief collaborators, Jon Smart, “Get better value, sooner, safer, and happier.”

Enabling technologies

In a nutshell, DevOps allows people to do what they need to do—and in today’s business, that freedom is afforded by robust and capable IT service management (ITSM) and IT operations management (ITOM) capabilities. Adding artificial intelligence and machine learning (AI/ML)-enabled ServiceOps capabilities to the mix unifies operational metrics, service request and change management information, and third-party data with dynamic service models, allowing faster problem resolution and better support for DevOps teams. There are key features and outcomes to keep in mind when considering these technologies.

To start, especially for those working in HR, IT, and Facilities job roles but extending to service delivery owners in any function, the ability to manage, automate, and scale service delivery for peak efficiency is crucial to that mission. A few examples of key differentiators for service delivery solutions include:

  • Intelligent self-service that gives employees the ability to be self-sufficient and productive
  • Shared services that help ensure faster, more accurate problem resolution based on persona and business rules
  • Automated artificial intelligence and machine learning (AI/ML) and robotic process automation (RPA) capabilities that extend use cases and task bundles organization-wide for reduced resolution time and low-touch/no-touch solutions
  • Advanced service reporting and service level agreement (SLA) monitoring that inform integrated, customizable dashboards so insights and performance are easily evaluated and reported
  • Agents and end users receiving fast, accurate responses with real-time translation in their channel of choice

BMC’s own DevOps guru, Solutions Director Tony Anter, hosted the webinar, and discussed with Gene the importance of surrounding yourself with knowledge, shared information, and respect for expertise. The concept couldn’t be more fundamental to DevOps, and to a successful knowledge management and digital workplace strategy, where expanding self-service access to ready, verifiable resources contributes to an agile, innovative workforce. In fact, a recent study by Forrester indicates that businesses that are taking a more comprehensive enterprise service management (ESM) approach to expanding service thinking and the service catalog into domains outside of IT are experiencing noticeable gains in speed, productivity, and efficiency. Some key components of a successful approach include:

  • Accurate, consistent knowledge with AI-powered analytics and search, built-in translation capabilities, and cross-channel support that gives easy access to proven answers
  • A unified service catalog that eliminates catalog confusion and streamlines requests
  • A consumer-like experience in a one-stop shop that helps employees get what they need, quickly
  • Intelligent chatbots with cognitive search capabilities that can help anticipate users’ needs and save human time and effort

Gene’s conversation with Tony also took a fascinating look at the expansion of DevOps principles and patterns in large, complex organizations, and how to make in-roads for a culture shift in accepting DevOps as a game-changing structure for innovation. In its essence, approaching DevOps from a mind-set of operational problem-solving and efficiency can help shine a light on its benefits—and drill down on those as collaborative exercises not specific to any working group. Similarly, identifying use cases that appeal to all facets of operational improvements, including security, can be very powerful. In particular:

  • Built-in artificial intelligence for IT operations (AIOps) and service management (AISM) capabilities help reduce risk, manage governance, and automate change
  • Service-centric monitoring, advanced event management, and AI/ML-based root cause isolation reduce mean time to repair (MTTR) and improve agility
  • Dynamic service modeling and mapping for all application and infrastructure dependencies can enable more effective change management, an optimal customer experience, and regulatory compliance
  • A jobs-as-code approach to the continuous integration and continuous delivery (CI/CD) toolchain makes it easier to version, test, and maintain workflows so teams can deliver better apps, faster and with less rework

Expanding the potential of DevOps

Enterprise DevOps is a tech tenet of the Autonomous Digital Enterprise framework, a vision for successful organizations that adapt to ongoing change by increasing their investment in digital transformation and evolving to new business models that equip them for a future of growth through actionable insights, business agility, and customer centricity. That forward-looking, optimistic view is counter-balanced by its opposite. As Gene says, “Technical debt and legacy systems slow down valuable development work.”

At BMC, we can help customers deliver on the innovation capabilities of the DevOps discipline and the promise of consistency and speed afforded by Enterprise DevOps. BMC Helix, our award-winning ITSM/ITOM portfolio, leverages ServiceOps and AIOps to enable faster, more accurate, and more efficient ways of delivering service innovations. It also identifies patterns in monitoring and capacity, data across IT operations (ITOps), and DevOps environments for real-time, enterprise-wide insights.

And with the SaaS-based application workflow orchestration capabilities of BMC Helix Control-M, developers, ITOps, and business users get a simplified, end-to-end view of critical services in production, further streamlining processes and ensuring more reliable deliverables.

Are you unleashing the full potential of your organization and its people? Learn more about Gene Kim’s point of view on DevOps and its role as an agent of change for all businesses today, as well as his plans for the future and his advice for practitioners of any age or experience level.

Gene Kim Talks DevOps and the Mainframe

If you’re going to do a webinar about DevOps, you go straight to the source, and BMC recently hosted a chat with DevOps superstar and Tripwire founder and former chief technology officer (CTO) Gene Kim to talk about all things DevOps and more.

Including the Mainframe in Your DevOps Journey

Early in the webinar, BMC DevOps architect and evangelist Tony Anter and Gene discussed the perception by many people that DevOps is only for startups, and that traditional companies that have been around for 50 years or more won’t gain any value from it. Gene states, “In fact, my area [of] passion since 2014 has been studying not so much the tech giants, but really large, complex organizations that have been around for decades or even centuries, who are using those same DevOps principles and patterns to win in the marketplace.”

When asked about the importance of DevOps to large complex organizations that have a mainframe, Gene responded, “DevOps really transcends the platforms you’re running on.” He provided proof with an anecdote about a publicly traded billion-dollar customer care company that used DevOps practices to evolve its half-century-old flagship application, moving from twice-a-year releases to quarterly, monthly, and finally, a daily deployment model.

“They were able to decrease the cost per transaction by 20X, and if you look at transaction count, it also went up by nearly 20X. [They now have] some of the highest Net Promoter Scores in the cities where they operate. If you can do it for a 50-year-old mainframe application, you know you can do it for anything,” he pointed out.

Enabling DevOps Across the Enterprise

We tend to agree with Gene’s analysis that traditional, complex, and large companies that are running mainframes can easily adopt DevOps practices and break down the silos between developers and operations to create a DevOps ecosystem.

Building a mainframe-inclusive DevOps toolchain enables agile development and testing of critical applications for faster delivery of innovations. Innovators and early adopters of mainframe-inclusive Agile and DevOps are reaping tremendous benefits, including greater agility, faster delivery cadence, and higher application quality.

Integrated mainframe tools work across an array of cross-platform tools, empowering developers on every platform to perform and improve the processes necessary for each phase of the DevOps lifecycle by:

  • Adopting shift-left automated testing
  • Speeding IBM® Db2® database changes
  • Addressing security earlier in the development process

And we aren’t alone in our opinion. The recent IDC Market Glance for Mainframe DevOps found that embracing a mainframe-inclusive DevOps toolchain enables faster, more frequent delivery of code.

According to the study, “We have observed forward motion in the mainframe DevOps market as of late, making available the tools and technology needed to make mainframe agility realizable for the organizations that depend on it.” The IDC research also indicates that 73 percent of DevOps influencers believe that mainframe DevOps is critical to digital business success.

Mainstreaming the Mainframe

It’s no secret that many mainframe developers are retiring out of the workforce, especially in larger organizations, so making the mainframe more accessible to non-mainframe developers is imperative. Gene gave an example here, too, referring to the same publicly traded billion-dollar customer care company. The organization was tasked with migrating assembler code running on the mainframe into Java, while continuing to leave the workload on the mainframe.

Addressing the skills gap of mainframe programmers, he explained that with this migration into Java, a mainframe programmer isn’t needed to change a report, a database administrator (DBA) or a Java developer can do that. As Gene said, “I have this picture of a mid-fifties, [maybe] late-fifties mainframe developer who’s paired with a lead Java engineer [who’s] probably thirty something wearing shorts.” Gene also referred to a quote he got from that mainframe developer. “He said, ‘I had my share of baggage. I knew this couldn’t be done. We spent hundreds of millions of dollars, multiple times trying to do this. But it was amazing to see the best of open systems and .net and Java that could make it work on the mainframe. It was exhilarating watching thousands of lines of assembler code disappear.’”

Gene added, “Not only was the code more maintainable and didn’t [require] the mainframe team, any Java developer could do that, but it ran on a different zIIP engine on the mainframe and the cost of operating that and running that went down by 95 percent.”

Workloads are expected to grow on the mainframe, and BMC DevOps tools are empowering developers to make the mainframe as adaptive as any other platform, regardless of their experience developing on the mainframe. We call this “mainstreaming the mainframe,” an approach that not only brings agility to the mainframe, but also helps unify mainframe and non-mainframe code into a single, highly-manageable repository of digital business logic. Ultimately, it makes the mainframe a more accessible platform for anyone who has little or no mainframe expertise.

Accelerating Innovation with BMC AMI DevX

BMC is proud to offer multiple solutions to mainstream the mainframe and encourage your enterprise DevOps journey, starting with a portfolio of BMC AMI DevX DevOps products. The products in the portfolio integrate with each other and with an expanding array of best-in-class, cross-platform partner tools, enabling developers of every kind to:

  • Accelerate application development and delivery processes
  • Integrate the mainframe with developers’ favorite DevOps tools that are already used on the distributed side of the organization
  • Ramp productivity and attract a new generation to the mainframe
  • Leverage best-in-class tools and integrations to support the full software development lifecycle

“We were keen to enable better, slicker processes that dovetail technology with the needs of the business, so we can be agile and responsive to our customers. DevOps is pivotal to this. It gives us an opportunity to identify new capabilities that our customers are calling for and bring them to market quickly, providing a competitive edge.”
—DevOps Transformation Manager, Large UK Bank

Driving the Value of DevOps on the Mainframe with BMC AMI

BMC AMI DevOps for Db2® is a solution that integrates with application development orchestration tools to automatically capture database changes and communicate them to the DBA while also enforcing DevOps best practices. The solution speeds up application changes by automatically integrating mainframe database changes into agile application development processes. BMC AMI Ops supports agile DevOps processes with artificial intelligence for IT operations (AIOps) capabilities to predict and proactively address potential deployment and operational issues in complex environments.

The agility provided by BMC AMI solutions gives a new wave of Dev and Ops teams the tools they need to collaborate and create with the most timely and accurate data available, driving the value of DevOps on the mainframe like any other platform by:

  • Accelerating application deployment
  • Providing self-service for application developers
  • Improving the quality and efficiency of Db2 schema changes
  • Streamlining communication between app dev and DBA teams
  • Mitigating risk through fully audited and transparent automation

“Being integrated into the development process with BMC AMI DevOps for Db2 enables us to all share in the responsibility of moving changes toward production. We are working as a unified team now.”
—Steven Goedertier, Database Administrator, Colruyt

Conclusion

With unparalleled agile application development, testing, and delivery, BMC AMI and BMC AMI DevX provide a mainframe-inclusive DevOps toolchain that accelerates innovation and resiliency.

You can read more about Gene’s opinion on traditional and government agencies using DevOps in a separate blog, and be sure to check out Tony’s blog here referencing Gene’s book The Unicorn Project and how it relates to the mainframe. And to see more from Tony’s conversation with Gene about achieving success through DevOps, watch the full webinar.

Gene Kim Shares DevOps Successes from Government Agencies to Billion-Dollar Enterprises

If you’re going to do a webinar about DevOps, you go straight to the source, and BMC recently hosted a chat with DevOps superstar and Tripwire founder and former chief technology officer (CTO) Gene Kim to talk about all things DevOps and more.

Gene is a researcher, speaker, and author of six books, including two Wall Street Journal bestselling novels set at Parts Unlimited, a car parts manufacturer and retailer setting out on an IT transformation initiative called The Phoenix Project.

In the first book, “The Phoenix Project: A Novel about IT, DevOps, and Helping Your Business Win,” IT manager Bill is tasked with bringing that initiative in on time and under budget against the threat of losing his whole department—a daunting proposition until he learns about “The Three Ways” (aka DevOps principles).

In the sequel, “The Unicorn Project: A Novel About Developers, Digital Disruption, and Thriving in the Age of Data,” Parts Unlimited employee Maxine is reassigned to the Phoenix Project. When she runs into an issue with a build, she meets Kurt and the Rebellion, a group of IT engineers trying to transform the company and the way they work by focusing on the “Five Ideals” (of DevOps). Armed with a discovery-oriented approach focused on customer success, the team is able to overcome apathy and frustration to deliver successful results.

Defining DevOps in Today’s Business

Gene recently spent an hour with BMC’s own DevOps guru, Solutions Director Tony Anter, to reflect on his history in the field and what he’s learned along the way, and to discuss real-world examples of how DevOps plays for organizations with complex, traditional infrastructures and bureaucratic systems, much like those depicted in the books.

To level set, Gene recapped his baseline DevOps definition from another of his books, “The DevOps Handbook,” telling the audience that it’s the architectural practices, the technical practices, and cultural norms that improve our ability to deliver applications and services quickly and safely.

He added that those capabilities also enable rapid experimentation and innovation so companies can deliver value as quickly as possible to customers—without sacrificing security, reliability, and stability—while winning in the marketplace.

In response to Tony’s question about how DevOps fits into verticals (industries like insurance, FinTech, and utilities) and large systems that run on older, mid-range platforms, mainframes, and code bases, Gene discussed Geoffrey Moore’s concept of core technology versus context. Core competencies of an organization are those that offer the great, lasting, durable business advantage that customers pay extra for. Context is everything else—even mission-critical systems like HR, marketing, and development—because they don’t drive the competitive advantage offered by core.

It’s important to remember that context is essential and should be funded adequately. To that end, for DevOps initiatives to succeed, they must be balanced appropriately across both the core and contextual structures and environments—including their budgets.

Moving DevOps Beyond the FAANGs

Gene expanded on the idea, saying DevOps isn’t just for the FAANG—Facebook (now Meta), Amazon, Apple, Netflix, and Google (now Alphabet)—companies. He pointed to his longstanding passion for studying the large, complex organizations that have been around for decades or even centuries, that are now using DevOps principles and patterns to win in the marketplace.

“The hero stories [have] been the highlight of my career. The oldest organization [I’ve worked with is] Her Majesty’s Revenue Collection Service [in the U.K.] and that was an organization founded in the year 1200. I don’t think there’s any code that goes that far back, but there are values and traditions that certainly go back centuries,” he shared.

“They are one of the most complex IT estates on the planet and they described how, in four weeks, they were able to put hundreds of billions of pounds into citizen pockets, into small businesses to, as they say, ‘avert government ruin,’ and especially among the most vulnerable parts of the population. So it’s not just commercial industry, it’s government agencies, military services, and so forth.”

Evolving Traditional Infrastructure for the Future

Gene also referenced a publicly-traded billion-dollar customer care company that used DevOps practices to evolve its half-century-old flagship application that generated 50 to 70 percent of the company’s revenue, moving from twice-a-year releases to quarterly and eventually, a monthly cadence. He says that through engineering excellence, the company achieved a daily deployment model and decreased the cost per transaction by 20x while the actual transaction count also went up by nearly 20x.

“[They now have] some of the highest Net Promoter Scores in the cities where they operate. If you can do it for a 50-year-old mainframe application, you know you can do it for anything and [that] shows that [DevOps] really transcends [the] platforms you’re running on,” he pointed out. “We’ve had technology leaders from almost every industry vertical, [and] there’s a certain universality to the problems they are facing and how they’re tackling [them].”

Bringing DevOps Teams Back In-House

Gene was particularly impressed with the success of Fernando Cornago, VP of digital tech at Adidas, and his ability to create a pocket of greatness inside an organization that was previously 80 percent outsourced. “Now, in their strategic plan published by the board of directors, 50 percent of revenue is supposed to come through direct consumer channels. So there [are] thousands of developers now working to essentially make sure that they can compete and win in a very competitive marketplace,” he explained. “The best are getting better and they’re getting ever-larger missions.”

To see more from Tony’s conversation with Gene about achieving success through DevOps, hear his responses to engaging questions from the audience, and find out what he’s up to next, watch the full webinar.

There are Unicorns on the Mainframe

I just finished reading the “The Unicorn Project: A Novel About Developers, Digital Disruption, and Thriving in the Age of Data” by Gene Kim. If you are not familiar with the book, the story follows Maxine and Kurt, employees of a car parts retailer and manufacturer named Parts Unlimited. After a production issue, Maxine is reassigned to the Phoenix Project, a massive transformative project that spans almost every platform in the company. When Maxine tries to do a build on the Phoenix Project and is unable to get it working, she is introduced to Kurt and the Rebellion. The Rebellion is a group of engineers within IT trying to transform the company and the way they do work. In working with the Rebellion, Maxine and Kurt lead a total transformation of Parts Unlimited by focusing on the “Five Ideals.”

  • The First Ideal: Locality and Simplicity
  • The Second Ideal: Focus, Flow, and Joy
  • The Third Ideal: Improvement of Daily Work
  • The Fourth Ideal: Psychological Safety
  • The Fifth Ideal: Customer Focus

At first glance, you might think that the book would advocate removing older legacy platforms like the mainframe, but that is not at all the case. The book is about modernizing, improving the developer experience, and eliminating technical debt. All of these goals are possible and part of the modern mainframe.

First let’s tackle the idea of “modernizing the mainframe.” I do not like this term, since it implies that as a platform, the mainframe is not modern. I am here to say that is not the case. The mainframe is a modern and vibrant platform. With the release of IBM z16®, as well as Db2® v13, the platform is as modern as any cloud, web, or mobile platform. Enhanced capabilities around artificial intelligence and machine learning (AI/ML), quantum computing, and hybrid cloud are just a few of the advancements built into the latest release of the mainframe, bringing it in line with any modern platform today.

Instead, the discussion should really be focused on modernizing application development and architecture on the mainframe, and modernizing the development environment. This may seem like a subtle distinction, but it is a distinction nonetheless. Modernizing the application development space is absolutely necessary and in line with the message in “The Unicorn Project.” The most business-critical applications run on the mainframe, so updating and modernizing them is essential to becoming a high-performing organization.

Let’s look at modernizing application development through the lens of “The Unicorn Project” and the Five Ideals.

The First Ideal: Locality and Simplicity

Simply stated, this ideal relates to the degree to which a development team can make code changes in a single location without impacting various teams. Teams should be able to make simple changes in a single place and test them without affecting or involving other application teams.

I think of all the Five Ideals, this one could potentially be the most difficult to achieve for mainframe application teams. Over the years, many mainframe applications have become very interwoven with other applications, and it’s become increasingly difficult to make simple changes given their size and complexity. Refactoring and simplifying these applications will be challenging. Many of the subject matter experts who wrote these applications are either retired or close to retirement, so the expertise on these systems is dwindling.

The immediate task of refactoring and simplifying applications can seem very daunting, but the payoff for that work will set application teams up to reap the benefits of DevOps, and more importantly, make things simpler to maintain and change for the most critical applications in an organization. And by doing so, organizations will also modernize and simplify their most critical applications and make working on them easier and more fulfilling.

The Second Ideal: Focus, Flow, and Joy

This ideal is all about how daily work feels. To boil it down into a couple of words: developer experience. How does it feel to come into work each day and try to do your job? Is it a joy? Are you working with top-of-the-line systems and processes that aid in getting work done, or is it a challenge each day? Does it take days or weeks waiting on processes or systems to be available?

Focus, flow, and joy make your work a delight and help organizations attract and retain top talent. Like the first ideal, this one has fallen by the wayside for a lot of mainframe application development. As an example, many teams are still using green-screen technology and following older waterfall processes that make it more difficult to work. This need not be the case.

Mainframe developers can have the same focus, flow, and joy as their distributed counterparts. Modern integrated development environments (IDEs) like BMC AMI DevX Workbench for Eclipse or Visual Studio Code are available for doing day-to-day work. Mainframe source code management and deploy systems like BMC AMI DevX Code Pipeline are built with agility and the developer experience in mind. For those application teams who can and want to make the transition to Git, ISPW fully integrates with Git systems. Achieving focus, flow, and joy will bring new life to existing developers and attract an entirely new generation onto the mainframe.

The Third Ideal: Improvement of Daily Work

While this ideal sounds a lot like things we touched on in the first and second ideal, this is a different concept. This ideal is about paying down technical debt that has built up over the years and re-architecting your applications to be more efficient. The first ideal is about decoupling applications so they’re easier and simpler to change. The second ideal is about making it easier for developers to do their work.

For the third ideal, technical debt needs to be treated as a priority and paid down, and your architecture needs to be modernized so your development teams can flow and push out changes faster and easier. This can be a difficult proposition for mainframe teams. For years, technical debt on the mainframe has been ignored and allowed to build up, so it will take a lot of time and effort to dig back out of that hole, but it is well worth it. By improving your daily work on the mainframe, you make it approachable, easy, and smooth to deliver value. This is a necessary step for the advancement of development on the mainframe.

The Fourth Ideal: Psychological Safety

This is the idea that people should feel safe to speak up about issues, concerns, and problems. If teams do not feel they can bring up issues without fear of repercussions, then no problems or concerns will be discussed and issues will continue to arise and never get fixed. An open dialogue is one of the top indicators of team performance and is essential to building a high-performance team. Regardless of the platform, organizations need to create a culture of open communication at all levels to be able to confront hard problems and resolve issues. This is true on the mainframe or any other platform.

The Fifth Ideal: Customer Focus

This ideal highlights the idea of “context and core,” as described by Geoffrey Moore. Core is what customers can and are willing to pay for—the center of your business. Context is the systems that customers do not care about—backend systems like marketing or HR. They are very important to the business and should be treated as such, but they are not essential to the customer or the customer experience.

Context should never kill core. Your focus should always be on the systems that face and are important to the customer. In almost all of the organizations we talk with, mainframe systems are core. They are the center of the business and essential to the customer experience. By focusing on mainframe applications and modernizing the development experience, organizations are keeping the fifth ideal.

The Five Ideals in “The Unicorn Project” are not exclusive to the four technology stock leaders—Meta, Amazon, Netflix, and Alphabet. They are for any organization, on any platform, with any application, that wants to make a change and become a high-performing organization. The mainframe is just another box in the data center, and the applications on it are like any others that need to be modernized and reconstructed to stay relevant in today’s ever-changing market.

 

]]>
Release Management in DevOps https://www.bmc.com/blogs/devops-release-management/ Wed, 30 Mar 2022 00:00:34 +0000 https://www.bmc.com/blogs/?p=13642 The rise in popularity of DevOps practices and tools comes as no surprise to those who have already utilize the new techniques centered around maximizing the efficiency of software enterprises. Similar to the way Agile quickly proved its capabilities, DevOps has taken its cue from that and built off of Agile to create tools and […]]]>

The rise in popularity of DevOps practices and tools comes as no surprise to those who already utilize these techniques centered on maximizing the efficiency of software enterprises. Much as Agile quickly proved its capabilities, DevOps has taken its cue from Agile and built on it to create tools and techniques that help organizations adapt to the rapid pace of development today’s customers have come to expect.

As DevOps is an extension of Agile methodology, DevOps itself calls for extension beyond its basic form as well.

Collaboration between development and operations team members in an Agile work environment is a core DevOps concept, but there is an assortment of tools that fall under the purview of DevOps that empower your teams to:

  • Maximize their efficiency
  • Increase the speed of development
  • Improve the quality of your products

DevOps is both a set of tools and practices as well as a mentality of collaboration and communication. Tools built for DevOps teams are tools meant to enhance communication capabilities and create improved information visibility throughout the organization.

DevOps specifically looks to increase the frequency of updates by reducing the scope of changes being made. Focusing on smaller tasks at a time allows for teams to dedicate their attention to truly fixing an issue or adding robust functionality without stretching themselves thin across multiple tasks.

This means DevOps practices provide faster updates that also tend to be much more successful. Not only does the increased rate of change please customers as they can consistently see the product getting better over time, but it also trains DevOps teams to get better at making, testing, and deploying those changes. Over time, as teams adapt to the new formula, the rate of change becomes:

  • Faster
  • More efficient
  • More reliable

In addition to new tools and techniques being created, older roles and systems are also finding themselves in need of revamping to fit into these new structures. Release management is one of those roles that has found the need to change in response to the new world DevOps has heralded.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is Release Management?

Release management is the process of overseeing the planning, scheduling, and controlling of software builds throughout each stage of development and across various environments. Release management typically includes the testing and deployment of software releases as well.

Release management has had an important role in the software development lifecycle since before it was known as release management. Deciding when and how to release updates was its own unique problem even when software saw physical disc releases with updates occurring as seldom as every few years.

Now that most software has moved from hard and fast release dates to the software as a service (SaaS) business model, release management has become a constant process that works alongside development. This is especially true for businesses that have converted to utilizing continuous delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large role in many of the duties that were originally considered to be under the purview of release management roles; however, DevOps has not resulted in the obsolescence of release management.

Advantages of Release Management for DevOps

With the transition to DevOps practices, deployment duties have shifted onto the shoulders of the DevOps teams. This doesn’t remove the need for release management; instead, it modifies the data points that matter most to the new role release management performs.

Release management acts as a method for filling the data gap in DevOps. The planning of implementation and rollback safety nets is part of the DevOps world, but release management still needs to keep tabs on applications, their components, and the promotion schedule as part of change orders. The key to managing software releases in a way that keeps pace with DevOps deployment schedules is automated management tools.

Aligning business & IT goals

The modern business is under more pressure than ever to continuously deliver new features and boost their value to customers. Buyers have come to expect that their software evolves and continues to develop innovative ways to meet their needs. Businesses take an outside-in perspective to glean insights into customer needs, while IT works from an inside perspective to develop those features.

Release management provides a critical bridge between these two perspectives. It coordinates IT work with business goals to maximize the success of each release, balancing customer desires with development work to deliver the greatest value to users.

(Learn more about IT/business alignment.)

Minimizes organizational risk

Software products contain millions of interconnected parts that create an enormous risk of failure. Users are often affected differently by bugs depending on their other software, applications, and tools. Plus, faster deployments to production increase the overall risk that faulty code and bugs slip through the cracks.

Release management minimizes the risk of failure by employing various strategies. Testing and governance can catch critical faulty sections of code before they reach the customer. Deployment plans ensure there are enough team members and resources to address any potential issues before they affect users. It also ensures that the dependencies between those millions of interconnected parts are recognized and understood.

Directs accelerating change

Release management is foundational to the discipline and skill of continuously producing enterprise-quality software. The rate of software delivery continues to accelerate and is unlikely to slow down anytime soon. The speed of changes makes release management more necessary than ever.

The move towards CI/CD and increases in automation ensure that the acceleration will only increase. However, it also means increased risk, unmet governance requirements, and potential disorder. Release management helps promote a culture of excellence to scale DevOps to an organizational level.

Release management best practices

As DevOps adoption grows and changes accelerate, it is critical to have best practices in place to ensure that releases move as quickly as possible. Well-refined processes enable DevOps teams to work more effectively and efficiently. Some best practices to improve your processes include:

Define clear criteria for success

Well-defined requirements in releases and testing will create more dependable releases. Everyone should clearly understand when things are actually ready to ship.

Well-defined means that the criteria cannot be subjective. Any subjective criteria will keep you from learning from mistakes and refining your release management process to identify what works best. The criteria also need to be clear to every team member. Release managers, quality supervisors, product vendors, and product owners must all agree on a set of criteria before starting a project.

Minimize downtime

DevOps is about creating an ideal customer experience. Likewise, the goal of release management is to minimize the amount of disruption that customers feel with updates.

Strive to consistently reduce customer impact and downtime with active monitoring, proactive testing, and real-time collaborative alerts that quickly notify you of issues during a release. A good release manager will be able to identify any problems before the customer does.

The team can resolve incidents quickly and experience a successful release when proactive efforts are combined with a collaborative response plan.

Optimize your staging environment

The staging environment requires constant upkeep. Maintaining an environment that is as close as possible to your production one ensures smoother and more successful releases. From QA to product owners, the whole team must maintain the staging environment by running tests and combing through staging to find potential issues with deployment. Identifying problems in staging before deploying to production is only possible with the right staging environment.

Maintaining a staging environment that is as close as possible to production will enable DevOps teams to confirm that all releases will meet acceptance criteria more quickly.

Strive for immutable

Whenever possible, aim to create new updates as opposed to modifying existing ones. Immutable programming drives teams to build entirely new configurations instead of changing existing structures. These new updates reduce the risk of bugs and errors that typically occur when modifying current configurations.

The inherently reliable releases will result in more satisfied customers and employees.

Keep detailed records

Good records management for all release and deployment artifacts is critical. From release notes to binaries to a compilation of known errors, records are vital for reproducing entire sets of assets. Without them, reproducing a release depends on tacit knowledge that may not be available when it is needed.

Focus on the team

Well-defined and implemented DevOps procedures will usually create a more effective release management structure. They enable best practices for testing and cooperation during the complete delivery lifecycle.

Although automation is a critical aspect of DevOps and release management, its aim is to enhance team productivity. The more that release management and DevOps focus on decreasing human error and improving operational efficiency, the more quickly they will be able to release dependable services.

Automation & release management tools

Release managers working with continuous delivery pipeline systems can quickly become overwhelmed by the volume of work necessary to keep up with deployment schedules. This leaves enterprises with two options: hire more release management staff or employ automated release management tools. Not only is more staff the more expensive option in most cases, but adding more cooks to the kitchen is not always the fastest way to get dinner ready. More hands in the process create more opportunities for miscommunication and over-complication.

Automated release management tools provide end-to-end visibility for tracking application development, quality assurance, and production from a central hub. Release managers can monitor how everything within the system fits together, which provides deeper insight into the changes made and the reasons behind them. This empowers collaboration by giving everyone detailed updates on the software’s position in the current lifecycle, which allows for continuous improvement of processes. The strength of automated release management tools is in their visibility and usability—many can be accessed through web-based portals.

Powerful release management tools use smart automation to ensure continuous integration, which enhances the efficiency of continuous delivery pipelines and allows for the steady deployment of stable, complex applications. Intuitive web-based interfaces give enterprises centralized management and troubleshooting tools that help them plan and coordinate deployments across multiple teams and environments. The ability to create a single application package and deploy it across multiple environments from one location speeds up continuous delivery pipelines and greatly simplifies their management.

Related reading

]]>
What Is Terraform? Terraform & Its IaC Role Explained https://www.bmc.com/blogs/terraform/ Tue, 29 Mar 2022 13:36:59 +0000 https://www.bmc.com/blogs/?p=51908 Managing infrastructure is a core requirement for most modern applications. Even in PaaS or serverless environments, there will still be components that require user intervention for customization and management. With the ever-increasing complexity of software applications, more and more infrastructure modifications are required to facilitate the functionality of the software. It is unable to keep […]]]>

Managing infrastructure is a core requirement for most modern applications. Even in PaaS or serverless environments, there will still be components that require user intervention for customization and management. With the ever-increasing complexity of software applications, more and more infrastructure modifications are required to facilitate the functionality of the software.

Manual infrastructure management cannot keep up with these rapid development cycles. It creates bottlenecks that lead to delays in the delivery process.

Infrastructure as Code (IaC) has become the solution to this issue—allowing users to align infrastructure changes with development. It also facilitates faster automated repeatable changes by codifying all the infrastructure and configuration and managing them through the delivery pipeline.

Terraform is one of the leading platform-agnostic IaC tools, allowing users to define and manage infrastructure as code. In this article, let’s dig into what Terraform is and how we can use it to manage infrastructure at scale.

What is Infrastructure as Code?

Before moving into Terraform, we need to understand Infrastructure as Code. To put it simply, IaC enables users to codify their infrastructure. It allows users to:

  • Create repeatable version-controlled configurations
  • Integrate them as a part of the CI/CD pipeline
  • Automate the infrastructure management

If an infrastructure change is needed in a more traditional delivery pipeline, the infrastructure team has to be informed, and the delivery pipeline cannot proceed until the change is made to the environment. Such an inflexible manual process hinders the overall efficiency of the SDLC, while practices like DevOps call for fast yet flexible delivery pipelines.

IaC allows infrastructure changes to be managed through a source control mechanism like Git and integrated as an automated part of the CI/CD pipeline. It not only automates infrastructure changes but also facilitates auditable changes and easy rollbacks of changes if needed.

What is Terraform?

Terraform is an open-source infrastructure as code (IaC) tool from HashiCorp. It allows users to define both on-premises and cloud resources in human-readable configuration files that can be easily versioned, reused, and shared. Terraform can be used to manage both low-level components (like compute, storage, and networking resources) and high-level resources (DNS, PaaS, and SaaS components).

Terraform is a declarative tool, which further simplifies the user experience: users specify the expected end state of their resources without having to spell out the exact steps to reach it, and Terraform works out how the infrastructure needs to be modified to achieve the desired result.
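As a minimal sketch of what that looks like in practice (the resource and bucket name below are hypothetical placeholders, not taken from this article), a configuration only declares the desired end state:

resource "aws_s3_bucket" "artifacts" {
  # Hypothetical bucket; we declare what should exist, not how to create it
  bucket = "example-app-artifacts"

  tags = {
    Environment = "demo"
  }
}

Applying this repeatedly is safe: if the bucket already matches the declared state, Terraform makes no changes.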

Terraform is a platform-agnostic tool, meaning that it can be used with any supported provider. Terraform accomplishes this by interacting with the APIs of cloud providers: when a configuration is applied, Terraform communicates with the relevant platform via its API and ensures the defined changes are carried out on the targeted platform. With more than 1,700 providers from HashiCorp and the Terraform community available in the Terraform Registry, users can configure resources from leading cloud providers like Azure, AWS, GCP, and Oracle Cloud to more domain-specific platforms like Cloudflare, Dynatrace, Elastic Stack, Datadog, and Kubernetes.
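As a hedged illustration of that platform-agnostic approach (the provider versions here are arbitrary examples), a single configuration can pull several providers from the Terraform Registry side by side:

terraform {
  required_providers {
    # Cloud infrastructure provider
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
    # DNS/CDN provider, managed from the same codebase
    cloudflare = {
      source  = "cloudflare/cloudflare"
      version = "~> 3.0"
    }
  }
}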

The Terraform workflow

The Terraform workflow is one of the simplest in the IaC space, consisting of just three steps to manage any type of infrastructure, and it gives users the flexibility to adapt the workflow to their exact implementation needs.


1. Write

The first stage of the workflow is where users create the configurations that define or modify the underlying resources. This can be as simple as provisioning a single compute instance in a cloud provider or as complex as deploying a multi-cloud Kubernetes cluster. The writing can be done either in HashiCorp Configuration Language (HCL), the default language for defining resources, or with the Cloud Development Kit for Terraform (CDKTF), which allows users to define resources using supported general-purpose programming languages like Python, C#, Go, and TypeScript.

2. Plan

This is the second stage of the workflow, where Terraform looks at the configuration files and creates an execution plan. It lets users see the exact changes that will be made to the underlying infrastructure: which resources will be created, modified, replaced, or destroyed.

3. Apply

This is the final stage of the workflow and takes place once the user has reviewed the plan and confirmed the changes. Terraform carries out the changes needed to reach the desired state in the correct order, respecting all resource dependencies. This happens regardless of whether you have defined dependencies in the configuration; Terraform automatically identifies the dependencies between resources and executes the changes without causing issues.

Terraform uses state to keep track of all changes to the infrastructure and to detect configuration drift. It creates a state file at the initial execution and updates that file with each subsequent change. The state file can be stored locally or in a remote backend such as an S3 bucket. Terraform always references this state file to identify the resources it manages and to keep track of changes to the infrastructure.
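For example, a remote backend can be declared directly in the configuration; the bucket, key, and table names below are placeholders, and the sketch assumes the S3 bucket and DynamoDB table already exist:

terraform {
  backend "s3" {
    bucket         = "example-terraform-state"      # placeholder state bucket
    key            = "web-server/terraform.tfstate" # path to the state file
    region         = "eu-central-1"
    dynamodb_table = "example-terraform-locks"      # optional: state locking
  }
}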

Benefits of Terraform

Let’s look at why so many people appreciate Terraform:

  • Declarative nature. A declarative tool allows users to specify the desired end state, and the IaC tool automatically carries out the steps needed to achieve that configuration. This is in contrast to imperative IaC tools, where users need to define the exact steps required to reach the desired state.
  • Platform agnostic. Most IaC tools, like AWS CloudFormation and Azure Resource Manager templates, are platform-specific. Terraform allows users to manage infrastructure across platforms with a single tool, which is increasingly valuable as applications span many tools, platforms, and multi-cloud architectures.
  • Reusable configurations. Terraform encourages reusable configurations, letting users provision multiple environments from the same configuration. Additionally, Terraform supports reusable components within configuration files through modules (see the sketch after this list).
  • Managed state. With state files keeping track of every change in the environment, all modifications are recorded, and no changes occur unless explicitly specified by the user. This can be further automated to detect configuration drift and correct it so the desired state is maintained at all times.
  • Easy rollbacks. Because all configurations are version controlled and the state is managed, users can easily and safely roll back most infrastructure changes without complicated reconfigurations.
  • Integration with CI/CD. While IaC can be integrated into any pipeline, Terraform’s simple three-step workflow fits easily into any CI/CD pipeline, helping to fully automate infrastructure management.
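As a rough sketch of that module reuse (the module path, inputs, and values below are hypothetical), the same module can be instantiated once per environment:

# Hypothetical local module reused for two environments
module "web_server_staging" {
  source        = "./modules/web-server"
  environment   = "staging"
  instance_type = "t3a.small"
}

module "web_server_production" {
  source        = "./modules/web-server"
  environment   = "production"
  instance_type = "t3a.large"
}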

(Learn how to set up a CI/CD pipeline.)

How to use Terraform

You can start using Terraform by simply installing it in your local environment. Terraform supports Windows, Linux, and macOS. You can install it manually from a pre-compiled binary or use a package manager such as Homebrew on macOS, Chocolatey on Windows, or APT/YUM on Linux. This flexibility lets you install Terraform in your own environment and integrate it into your existing workflows.

HashiCorp also provides a managed solution called Terraform Cloud. It gives users a platform to manage infrastructure on all supported providers without the hassle of installing or managing Terraform itself. Terraform Cloud includes features such as:

  • Remote encrypted state storage
  • Direct CI/CD integrations
  • A fully remote, SOC 2 compliant collaborative environment
  • Version control integration
  • A private registry for modules, plus policy as code support for configuring security and compliance policies
  • A fully auditable environment
  • Cost estimates before applying infrastructure changes in supported providers

Additionally, Terraform Cloud integrates deeply with other HashiCorp Cloud Platform services like Vault, Consul, and Packer to manage secrets, provide a service mesh, and build machine images. Together, these allow users to manage their entire infrastructure through the HashiCorp platform.
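As a hedged sketch of how a configuration connects to the managed platform (the organization and workspace names are placeholders), Terraform 1.1 and later can use a cloud block so runs and state are handled by Terraform Cloud:

terraform {
  cloud {
    organization = "example-org"       # placeholder organization

    workspaces {
      name = "web-server-production"   # placeholder workspace
    }
  }
}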

Using Terraform to provision resources

Finally, let’s look at a simple Terraform configuration. Assume you want to deploy a web server instance in your AWS environment. It can be done by creating an HCL configuration similar to the following.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.74"
    }
  }
}

# Specify the provider
provider "aws" {
  region = var.region

  # AWS credentials (placeholders; in practice, prefer environment
  # variables or a shared credentials file over hardcoded keys)
  access_key = "xxxxxxxxxxxxx"
  secret_key = "yyyyyyyyyyyyy"

  default_tags {
    tags = {
      Env            = "web-server"
      Resource_Group = "ec2-instances"
    }
  }
}

# Configure the security group
resource "aws_security_group" "web_server_access" {
  name        = "server-access-control-sg"
  description = "Allow Access to the Server"
  vpc_id      = local.ftp_vpc_id

  # Allow SSH
  ingress {
    from_port        = 22
    to_port          = 22
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # Allow HTTPS
  ingress {
    from_port        = 443
    to_port          = 443
    protocol         = "tcp"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  # Allow all outbound traffic
  egress {
    from_port        = 0
    to_port          = 0
    protocol         = "-1"
    cidr_blocks      = ["0.0.0.0/0"]
    ipv6_cidr_blocks = ["::/0"]
  }

  tags = {
    Name = "server-access-control-sg"
  }
}

# Get the latest Ubuntu AMI
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }

  filter {
    name   = "virtualization-type"
    values = ["hvm"]
  }
}

# Elastic IP
resource "aws_eip" "web_server_eip" {
  instance = aws_instance.web_server.id
  vpc      = true

  tags = {
    Name     = "web-server-eip"
    Imported = false
  }
}

# Web server instance
resource "aws_instance" "web_server" {
  ami                         = data.aws_ami.ubuntu.id
  instance_type               = "t3a.small"
  availability_zone           = "eu-central-1a"
  subnet_id                   = "subnet-yyyyyy"
  associate_public_ip_address = false
  # Must be a list; the placeholder could also reference
  # aws_security_group.web_server_access.id defined above
  vpc_security_group_ids      = ["sg-xxxxxxx"]
  key_name                    = "frankfurt-test-servers-common"
  disable_api_termination     = true
  monitoring                  = true

  credit_specification {
    cpu_credits = "standard"
  }

  root_block_device {
    volume_size = 30
  }

  tags = {
    Name = "web-server"
  }
}

In the HCL file, we are pointing to the AWS provider and providing the AWS credentials (Access Key and Secret Key) which will be used to communicate with AWS and provision resources.

We have created a security group, an Elastic IP, and an EC2 instance with the configuration options needed to reach the desired state. The AMI for the EC2 instance is also resolved within the configuration itself by querying for the latest Ubuntu image. HCL’s easily understandable syntax lets users define their desired configurations and execute them via Terraform. You can take an in-depth look at all the available options for the AWS provider in the Terraform documentation.
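Note that the configuration references var.region and local.ftp_vpc_id, which are assumed to be defined elsewhere in the project; a minimal sketch of those definitions (the values are placeholders) might look like this:

variable "region" {
  description = "AWS region to deploy into"
  type        = string
  default     = "eu-central-1"
}

locals {
  ftp_vpc_id = "vpc-zzzzzzz" # placeholder ID of an existing VPC
}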

Terraform summary

Terraform is a powerful IaC tool that aims to strike the best balance between user-friendliness and features. Its declarative and platform-agnostic nature allows it to be used in any supported environment without vendor lock-in or the need to learn new platform-specific tools. Terraform’s flexible workflow and configuration options let it run in local environments.

Furthermore, users who prefer a managed solution can opt for Terraform Cloud, choosing the implementation that best suits their needs. All of this has helped Terraform become one of the leading IaC tools.

Related reading

]]>