Release Management in DevOps
https://s7280.pcdn.co/devops-release-management/ | Wed, 30 Mar 2022

The rise in popularity of DevOps practices and tools comes as no surprise to those who already use techniques centered on maximizing the efficiency of software enterprises. Much as Agile quickly proved its capabilities, DevOps has built on Agile's foundations to create tools and techniques that help organizations adapt to the rapid pace of development today's customers have come to expect.

As DevOps is an extension of Agile methodology, DevOps itself calls for extension beyond its basic form as well.

Collaboration between development and operations team members in an Agile work environment is a core DevOps concept, but there is an assortment of tools that fall under the purview of DevOps that empower your teams to:

  • Maximize their efficiency
  • Increase the speed of development
  • Improve the quality of your products

DevOps is both a set of tools and practices as well as a mentality of collaboration and communication. Tools built for DevOps teams are tools meant to enhance communication capabilities and create improved information visibility throughout the organization.

DevOps specifically looks to increase the frequency of updates by reducing the scope of changes being made. Focusing on smaller tasks at a time allows for teams to dedicate their attention to truly fixing an issue or adding robust functionality without stretching themselves thin across multiple tasks.

This means DevOps practices provide faster updates that also tend to be much more successful. Not only does the increased rate of change please customers as they can consistently see the product getting better over time, but it also trains DevOps teams to get better at making, testing, and deploying those changes. Over time, as teams adapt to the new formula, the rate of change becomes:

  • Faster
  • More efficient
  • More reliable

In addition to new tools and techniques being created, older roles and systems are also finding themselves in need of revamping to fit into these new structures. Release management is one of those roles that has found the need to change in response to the new world DevOps has heralded.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is Release Management?

Release management is the process of overseeing the planning, scheduling, and controlling of software builds throughout each stage of development and across various environments. Release management typically includes the testing and deployment of software releases as well.

Release management has had an important role in the software development lifecycle since before it was known as release management. Deciding when and how to release updates was its own unique problem even when software saw physical disc releases with updates occurring as seldom as every few years.

Now that most software has moved from hard and fast release dates to the software as a service (SaaS) business model, release management has become a constant process that works alongside development. This is especially true for businesses that have converted to utilizing continuous delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large role in many of the duties that were originally considered to be under the purview of release management roles; however, DevOps has not resulted in the obsolescence of release management.

Advantages of Release Management for DevOps

With the transition to DevOps practices, deployment duties have shifted onto the shoulders of the DevOps teams. This doesn’t remove the need for release management; instead, it modifies the data points that matter most to the new role release management performs.

Release management acts as a method for filling the data gap in DevOps. The planning of implementation and rollback safety nets is part of the DevOps world, but release management still needs to keep tabs on each application, its components, and the promotion schedule as part of change orders. The key to managing software releases in a way that keeps pace with DevOps deployment schedules is automated management tools.

Aligning business & IT goals

The modern business is under more pressure than ever to continuously deliver new features and boost their value to customers. Buyers have come to expect that their software evolves and continues to develop innovative ways to meet their needs. Businesses take an outside perspective to glean insights into their customers' needs, while IT works from an inside perspective to develop the features that meet them.

Release management provides a critical bridge between these two perspectives. It coordinates IT work with business goals to maximize the success of each release, balancing customer desires with development work to deliver the greatest value to users.

(Learn more about IT/business alignment.)

Minimizes organizational risk

Software products contain millions of interconnected parts that create an enormous risk of failure. Users are often affected differently by bugs depending on their other software, applications, and tools. Plus, faster deployments to production increase the overall risk that faulty code and bugs slip through the cracks.

Release management minimizes the risk of failure by employing several strategies: testing and governance catch critical faults in the code before they reach the customer; deployment plans ensure there are enough team members and resources to address any potential issues before they affect users; and the dependencies between those millions of interconnected parts are mapped and understood.

Direct accelerating change

Release management is foundational to the discipline and skill of continuously producing enterprise-quality software. The rate of software delivery continues to accelerate and is unlikely to slow down anytime soon. The speed of changes makes release management more necessary than ever.

The move towards CI/CD and increases in automation ensure that the acceleration will only increase. However, it also means increased risk, unmet governance requirements, and potential disorder. Release management helps promote a culture of excellence to scale DevOps to an organizational level.

Release management best practices

As DevOps adoption increases and change accelerates, it is critical to have best practices in place to keep releases moving as quickly and safely as possible. Well-refined processes enable DevOps teams to work more effectively and efficiently. Some best practices to improve your processes include:

Define clear criteria for success

Well-defined requirements in releases and testing will create more dependable releases. Everyone should clearly understand when things are actually ready to ship.

Well-defined means the criteria cannot be subjective: subjective criteria keep you from learning from mistakes and refining your release management process to identify what works best. The criteria must also be defined for every team member. Release managers, quality supervisors, product vendors, and product owners must all have an agreed-upon set of criteria before starting a project.
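
As a purely illustrative sketch, objective criteria can be encoded as data and checked automatically; the metric names and thresholds below are hypothetical, not a standard.

    # Hypothetical, machine-checkable release criteria (illustrative thresholds).
    RELEASE_CRITERIA = {
        "test_pass_rate_min": 0.98,   # fraction of test cases that must pass
        "open_blocker_bugs_max": 0,   # no unresolved blocker-severity bugs
        "code_coverage_min": 0.80,    # fraction of code covered by tests
    }

    def ready_to_ship(metrics: dict) -> bool:
        """Return True only when every agreed-upon criterion is met."""
        return (
            metrics["test_pass_rate"] >= RELEASE_CRITERIA["test_pass_rate_min"]
            and metrics["open_blocker_bugs"] <= RELEASE_CRITERIA["open_blocker_bugs_max"]
            and metrics["code_coverage"] >= RELEASE_CRITERIA["code_coverage_min"]
        )

    print(ready_to_ship({"test_pass_rate": 0.99, "open_blocker_bugs": 0, "code_coverage": 0.85}))

Because every check is a number or a count, two people can never disagree about whether a release met the bar.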

Minimize downtime

DevOps is about creating an ideal customer experience. Likewise, the goal of release management is to minimize the amount of disruption that customers feel with updates.

Strive to consistently reduce customer impact and downtime with active monitoring, proactive testing, and real-time collaborative alerts that quickly notify you of issues during a release. A good release manager will be able to identify any problems before the customer does.

The team can resolve incidents quickly and experience a successful release when proactive efforts are combined with a collaborative response plan.

Optimize your staging environment

The staging environment requires constant upkeep. Maintaining an environment that is as close as possible to your production one ensures smoother and more successful releases. From QA to product owners, the whole team must maintain the staging environment by running tests and combing through staging to find potential issues with deployment. Identifying problems in staging before deploying to production is only possible with the right staging environment.

Maintaining a staging environment that is as close as possible to production will enable DevOps teams to confirm that all releases will meet acceptance criteria more quickly.

Strive for immutability

Whenever possible, aim to create new updates as opposed to modifying existing ones. Immutable programming drives teams to build entirely new configurations instead of changing existing structures. These new updates reduce the risk of the bugs and errors that typically appear when modifying current configurations.

The inherently reliable releases will result in more satisfied customers and employees.
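
As a loose Python analogy (a sketch of the principle, not a specific tool), an immutable configuration object forces every change to produce a new configuration rather than mutate the current one:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)  # frozen=True makes instances immutable
    class ReleaseConfig:
        version: str
        replicas: int

    current = ReleaseConfig(version="1.4.0", replicas=3)
    # current.replicas = 5  # would raise FrozenInstanceError
    # Every change builds a brand-new configuration instead:
    next_release = replace(current, version="1.5.0")
    print(current, next_release)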

Keep detailed records

Good records management for all release and deployment artifacts is critical. From release notes to binaries to a compilation of known errors, records are vital for reproducing entire sets of assets. Without such records, teams must fall back on tacit knowledge, which is difficult to transfer and reproduce.

Focus on the team

Well-defined and implemented DevOps procedures will usually create a more effective release management structure. They enable best practices for testing and cooperation during the complete delivery lifecycle.

Although automation is a critical aspect of DevOps and release management, its aim is to enhance team productivity. The more that release management and DevOps focus on decreasing human error and improving operational efficiency, the more quickly they'll be able to release dependable services.

Automation & release management tools

Release managers working with continuous delivery pipeline systems can quickly become overwhelmed by the volume of work necessary to keep up with deployment schedules. This leaves enterprises with two options: hire more release management staff or employ automated release management tools. Not only is staff the more expensive option in most cases, but adding more chefs to the kitchen is not always the best way to get dinner ready faster: more hands in the process create more opportunities for miscommunication and over-complication.

Automated release management tools provide end-to-end visibility for tracking application development, quality assurance, and production from a central hub. Release managers can monitor how everything within the system fits together, which provides deeper insight into the changes made and the reasons behind them. This empowers collaboration by giving everyone detailed updates on the software's position in the current lifecycle, which allows for the constant improvement of processes. The strength of automated release management tools is in their visibility and usability—many can be accessed through web-based portals.

Powerful release management tools use smart automation to ensure continuous integration, which enhances the efficiency of continuous delivery pipelines and allows for the steady deployment of stable, complex applications. Intuitive web-based interfaces give enterprises centralized management and troubleshooting tools that help them plan and coordinate deployments across multiple teams and environments. The ability to create a single application package and deploy it across multiple environments from one location expedites continuous delivery pipelines and makes managing them much simpler.


DevOps Metrics for Optimizing CI/CD Pipelines
https://www.bmc.com/blogs/devops-ci-cd-metrics/ | Fri, 18 Feb 2022

DevOps organizations monitor their CI/CD pipeline across three groups of metrics:

  • Automation performance
  • Speed
  • Quality

With continuous delivery of high-quality software releases, organizations are able to respond to changing market needs faster than their competition and maintain improved end-user experiences. How can you achieve this goal?

Let’s discuss some of the critical aspects of a healthy CI/CD pipeline and highlight the key metrics that must be monitored and improved to optimize CI/CD performance.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)


CI/CD brief recap

But first, what is CI/CD and why is it important?

Continuous Integration (CI) refers to the process of merging software builds on a continuous basis. Development teams divide a large-scale project into small coding tasks and deliver the code updates iteratively, on an ongoing basis. The builds are pushed to a centralized repository where further automation, QA, and analysis take place.

Continuous Delivery (CD) takes the continuously integrated software builds and extends the process with automated release. All approved code changes and software builds are automatically released to production where the test results are further evaluated and the software is available for deployment in the real world.

Deployment often requires DevOps teams to follow a manual governance process. However, an automation solution may also be used to continuously approve software builds at the end of the software development (SDLC) pipeline, making it a Continuous Deployment process.

(Read more about CI/CD or set up your own CI/CD pipeline.)

Metrics for optimizing the DevOps CI/CD pipeline

Now, let’s turn to actual metrics that can help you determine how mature your DevOps pipeline is. We’ll look at three areas.

Agile CI/CD Pipeline

To deliver high-quality software that infuses performance and security into the code from the ground up, developers should be able to write code that is QA-ready.

DevOps organizations should introduce test procedures early in the SDLC—a practice known as shifting left—and developers should respond with quality improvements well before the build reaches production environments.

DevOps organizations can measure and optimize the performance of their CI/CD pipeline by using the following key metrics (two of them are computed in the sketch after this list):

  • Test pass rate. The ratio of passed test cases to the total number of test cases.
  • Number of bugs. The number of known issues that will cause performance problems at a later stage.
  • Defect escape rate. The number of issues identified in production compared to the number identified in pre-production.
  • Number of code branches. The number of feature components introduced into the development project.
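
A minimal Python sketch of the first and third metrics, using invented counts rather than real project data:

    def test_pass_rate(passed: int, total: int) -> float:
        """Passed test cases as a fraction of all executed test cases."""
        return passed / total

    def defect_escape_rate(found_in_prod: int, found_pre_prod: int) -> float:
        """Share of all known defects that escaped into production."""
        return found_in_prod / (found_in_prod + found_pre_prod)

    print(f"pass rate: {test_pass_rate(470, 500):.1%}")      # 94.0%
    print(f"escape rate: {defect_escape_rate(3, 47):.1%}")   # 6.0%

Tracked per build, a falling pass rate or a rising escape rate is an early signal that the pipeline's quality gates need attention.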

Automation of CI/CD & QA

Automation is the heart of DevOps and a critical component of a healthy CI/CD pipeline. However, DevOps is not solely about automation. In fact, DevOps thrives on automation adopted strategically—to replace repetitive and predictable tasks with automation solutions and scripts.

Considering the shortage of skilled workers and the scale of development tasks in a CI/CD pipeline, DevOps organizations should maximize the scope of their automation capabilities while also closely evaluating automation performance. They can do so by monitoring the following automation metrics:

  • Deployment frequency. Measure the throughput of your DevOps pipeline. How frequently can your organization deploy by automating the QA and CI/CD processes?
  • Deployment size. Does automation help improve your code deployment capacity?
  • Deployment success. Do frequent deployments cause downtime and outages, or other performance and security issues?

Infrastructure Dependability

DevOps organizations are expected to improve performance without disrupting the business. Considering the increased dependence on automation technologies and a cultural change focused on rapid and continuous delivery cycles, DevOps organizations need consistency of performance across the SDLC pipeline.

The dependability of the infrastructure underlying a high-performance CI/CD pipeline, one responsible for hundreds (at times, thousands) of delivery cycles a day, is therefore critical to the success of DevOps. How do you measure the dependability of your IT infrastructure?

Here are a few metrics to get you started:

  • MTTF, MTTR, MTTD: Mean Time to Failure/Repair/Diagnose. These metrics quantify the risk associated with potential failures and the time it takes to recover to optimal performance. (A toy calculation follows this list.) Learn more about reliability calculations and metrics for infrastructure or service performance.
  • Time to value. Another key metric is the speed of the Continuous Delivery release cycle. It refers to the time taken for a completed software build to be released into production. Delays may be caused by a number of factors, including the infrastructure resources and automation capabilities available to test and process the build, as well as the governance process necessary for final release.
  • Infrastructure utilization. Evaluate the performance of every service node, server, hardware, and virtualized IT components. This information not only describes the computational performance available for CI/CD teams but also creates vast volumes of data that can be studied for security and performance issues facing the network infrastructure.
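
As a toy illustration (the incident timestamps below are invented), MTTR and MTTF can be derived from a simple incident log:

    from datetime import timedelta

    # Hypothetical incident log: (detected, resolved) as offsets from some start time.
    incidents = [
        (timedelta(hours=2),  timedelta(hours=3)),
        (timedelta(hours=30), timedelta(hours=30, minutes=45)),
        (timedelta(hours=72), timedelta(hours=74)),
    ]

    repair_times = [resolved - detected for detected, resolved in incidents]
    mttr = sum(repair_times, timedelta()) / len(repair_times)

    # Working time between recovering from one failure and hitting the next:
    uptimes = [incidents[i + 1][0] - incidents[i][1] for i in range(len(incidents) - 1)]
    mttf = sum(uptimes, timedelta()) / len(uptimes)

    print(f"MTTR: {mttr}, MTTF: {mttf}")  # MTTR: 1:15:00, MTTF: 1 day, 10:07:30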

With these metrics reliably in place, you’ll be ready to understand how close to optimal you really are.


What Is CI/CD? Continuous Integration & Continuous Delivery Explained
https://www.bmc.com/blogs/what-is-ci-cd/ | Thu, 30 Dec 2021

Flexibility, speed, and quality are the core pillars of modern software development. Increased customer demand and the evolving technological landscape have made software development more complex than ever, leaving traditional software development lifecycle (SDLC) methods unable to cope with the rapidly changing nature of development.

Practices like Agile and DevOps have gained popularity in facilitating these changing requirements by bringing flexibility and speed to the development process without sacrificing the overall quality of the end product.

Together, Continuous Integration (CI) and Continuous Delivery (CD) are key practices that help in this regard. They allow users to build integrated development pipelines that stretch from development to production deployment across the software development process. So, what exactly are Continuous Integration and Continuous Delivery? Let's take a look.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is CI/CD?

CI/CD refers to Continuous Integration and Continuous Delivery. In its simplest form, CI/CD introduces automation and monitoring to the complete SDLC.

  • Continuous Integration can be considered the first part of a software delivery pipeline where application code is integrated, built, and tested.
  • Continuous Delivery is the second stage of a delivery pipeline where the application is deployed to its production environment to be utilized by the end-users.

Let’s deep dive into CI and CD in the following sections.

What is Continuous Integration?

Modern software development is a team effort, with multiple developers working on different areas, features, or bug fixes of a product. All these code changes need to be combined to release a single end product. However, manually integrating all these changes can be a near-impossible task, and conflicting code changes are inevitable with developers working on multiple changes in parallel.

Continuous Integration offers the ideal solution to this issue by allowing developers to continuously push their code to the version control system (VCS). These changes are validated, and new builds are created from the new code to undergo automated testing.

This testing will typically include unit and integration tests to ensure that the changes do not cause any issues in the application. It also ensures that all code changes are properly validated and tested, and that immediate feedback is provided to the developer from the pipeline in the event of an issue, enabling them to fix it quickly.

This not only increases the quality of the code but also provides a platform to quickly identify code errors through a shorter automated feedback cycle. Another benefit of Continuous Integration is that it ensures all developers have the latest codebase to work on, as code changes are quickly merged, further mitigating merge conflicts.

The end goal of the continuous integration process is to create a deployable artifact.
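
As a rough sketch only (real pipelines are declared in a CI platform's own configuration format, and the commands below assume the pytest and build packages are installed), the CI sequence of validate, test, and produce an artifact can be pictured as:

    import subprocess
    import sys

    def run_stage(name: str, command: list[str]) -> None:
        """Run one pipeline stage; any failure stops the pipeline immediately."""
        print(f"--- {name} ---")
        subprocess.run(command, check=True)  # raises CalledProcessError on failure

    try:
        run_stage("unit tests", [sys.executable, "-m", "pytest", "-q"])
        run_stage("build artifact", [sys.executable, "-m", "build"])
    except subprocess.CalledProcessError as exc:
        sys.exit(f"CI failed at: {exc.cmd}")  # fast feedback to the developer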

What is Continuous Delivery?

Once a deployable artifact is created, the next stage of the software development process is to deploy this artifact to the production environment. Continuous delivery comes into play to address this need by automating the entire delivery process.

Continuous Delivery is responsible for the application deployment as well as infrastructure and configuration changes, and for monitoring and maintaining the application. CD can extend its functionality to include operational responsibilities such as infrastructure management using automation tools such as:

  • Terraform
  • Ansible
  • Chef
  • Puppet

Continuous Delivery also supports multi-stage deployments where artifacts are moved through stages like staging, pre-production, and finally production, with additional testing and verification at each stage. This additional testing and verification further increases the reliability and robustness of the application.
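
A minimal sketch of that promotion flow (the stage names and the placeholder verification are illustrative, not a real deployment API):

    # Promote one build artifact through successive stages, verifying at each.
    STAGES = ["staging", "pre-production", "production"]

    def verify(artifact: str, stage: str) -> bool:
        """Placeholder for the stage's automated tests and checks."""
        print(f"verifying {artifact} in {stage}")
        return True

    def promote(artifact: str) -> None:
        for stage in STAGES:
            print(f"deploying {artifact} to {stage}")
            if not verify(artifact, stage):
                raise RuntimeError(f"{artifact} failed verification in {stage}")

    promote("app-1.5.0.tar.gz")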

Why we need CI/CD

CI/CD is the backbone of all modern software development, allowing organizations to develop and deploy software quickly and efficiently. It offers a unified platform to integrate all aspects of the SDLC, from source control systems and testing tools to infrastructure modification and monitoring tools.

A properly configured CI/CD pipeline allows organizations to adapt to changing consumer needs and technological innovations easily. In a traditional development strategy, fulfilling changes requested by clients or adopting new technology would be a long-winded process. Moreover, the consumer need may have shifted by the time the organization adapts to the change. Approaches like DevOps with CI/CD solve this issue, as CI/CD pipelines are much more flexible.

For example, suppose there is a consumer requirement that is not currently addressed. With a DevOps approach, it can be quickly identified, analyzed, developed, and deployed to the software product in a relatively short amount of time without disrupting the normal development flow of the application.

Another aspect is that CI/CD enables quick deployment of even small changes to the end product, quickly addressing user needs. It not only resolves user needs but also provides visibility of the development process to the end-user. End-users can see that the product grows with frequent deployments related to bug fixes or new features.

This is in stark contrast with traditional approaches like the waterfall model, where the end-users only see the final product after the complete development is done.

CI/CD today

CI/CD has come a long way since its inception, where it began only as a platform to support application delivery. Now it has evolved to support other aspects, such as:

  • Database DevOps, where database changes are continuously delivered.
  • GitOps, where infrastructure is defined in a declarative version-controlled manner to be managed via CI/CD pipelines.

Thus, users can integrate almost all aspects of the software delivery into Continuous Integration and Continuous Delivery. Furthermore, CI/CD can also extend itself to DevSecOps, where security testing such as vulnerability scans, configuration policy enforcements, network monitoring, etc., can be directly integrated into CI/CD pipelines.

CI/CD pipeline & workflows

CI/CD pipeline is a software delivery process created through Continuous Integration and Continuous Delivery platforms. The complexity and the stages of the CI/CD pipeline vary depending on the development requirements.

Properly setting up a CI/CD pipeline is the key to benefitting from all the advantages offered by CI/CD. One pipeline might have a multi-stage deployment strategy that delivers software as containers to a multi-cloud Kubernetes cluster, and another may be a simple pipeline that builds, tests, and deploys the application as a serverless function.

A typical CI/CD pipeline can be broken down into the following stages:

  1. Development. This stage is where the development happens, and the code is merged to a version control repository and validated.
  2. Build. The application is built using the validated code, and this artifact is used for testing.
  3. Testing. Usually, the built artifact is deployed to a test environment, and extensive tests are carried out to ensure the functionality of the application.
  4. Deploy. This is the final stage of the pipeline, where the tested application is deployed to the production environment.

All of the above stages are continuously monitored for errors, and the relevant parties are quickly notified of any failures.

Advantages of Continuous Integration & Delivery

CI/CD undoubtedly increases the speed and efficiency of the software development process while providing a top-down view of all the tasks involved in the delivery process. On top of that, CI/CD provides the following benefits, reaching all aspects of the organization:

  • Improve developer and QA productivity by introducing automated validations, builds, and testing
  • Save time and resources by automating mundane and repeatable tasks
  • Improve overall code quality
  • Improve feedback cycles, with each stage and process in the pipeline being continuously monitored
  • Reduce the bugs or defects in the system
  • Provide the ability to support other areas of application delivery, such as database and infrastructure changes directly through the pipeline
  • Support varying architectures and platforms from traditional server-based deployment to container and serverless architectures
  • Ensure the application’s reliability, thanks to the ability to monitor the application in the production environment with continuous monitoring

CI/CD tools & platforms

When it comes to CI/CD tools and platforms, there are many choices ranging from simple CI/CD platforms to specialized tools that support a specific architecture. There are even tools and services directly available through source control systems. Let’s look at some of the popular CI/CD tools and platforms.

Continuous Integration tools & platforms

  • Jenkins
  • TeamCity
  • Travis CI
  • Bamboo
  • CircleCI

Continuous Delivery tools & platforms

  • ArgoCD
  • JenkinsX
  • FluxCD
  • GoCD
  • Spinnaker
  • Octopus Deploy

Cloud-Based CI/CD

  • Azure DevOps
  • Google Cloud Build
  • AWS CodeBuild/CodeCommit/CodeDeploy
  • GitHub Actions
  • GitLab Pipelines
  • Bitbucket Pipelines

Summing up CI/CD

Continuous Integration and Continuous Delivery have become an integral part of most software development lifecycles. With continuous development, testing, and deployment, CI/CD has enabled faster, more flexible development without increasing the workload of development, quality assurance, or the operations teams.

Today, CI/CD has evolved to support all aspects of the delivery pipelines, thus also facilitating new paradigms such as GitOps, Database DevOps, DevSecOps, etc.—and we can expect more to come.

BMC supports Enterprise DevOps

From legacy systems to cloud software, BMC supports DevOps across the entire enterprise. Learn more about Enterprise DevOps.


Test Automation Frameworks: The Ultimate Guide
https://www.bmc.com/blogs/test-automation-frameworks/ | Fri, 10 Dec 2021

Quality assurance (QA) is a major part of any software development. Software testing is the path to a bug-free, performance-oriented software application—one that also satisfies (or exceeds!) end-user requirements.

Of course, manual testing quickly becomes unscalable due to the rapid pace of development and ever-increasing requirements. Thus, a faster yet accurate testing solution was required, and automated testing became the ideal answer. Automated testing does not mean replacing the entire manual testing process. Instead, automated testing means:

  1. Allowing users to automate most routine and repetitive test cases.
  2. Freeing up valuable time and resources to focus on more intricate or complex test scenarios.

Introducing automated testing to a delivery pipeline can be a daunting process. Several factors—the programming language, user preferences, test cases, and the overall testing scope—directly decide what can and cannot be automated. However, if set up correctly, automated testing can be the backbone of the QA team to ensure a smooth and scalable testing experience.

Different types of automation frameworks came into prominence to aid in this endeavor. An automation framework allows users to easily set up an automated test environment that ultimately helps in providing a better ROI for both development and QA teams. In this article, we will have a look at different types of test automation frameworks available and their advantages and disadvantages.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

What is a test automation framework?

Before diving into different types of test automation frameworks, we need to understand what an automation framework is. Test automation is the process of automating repetitive and predictable testing scenarios.

A test automation framework is a set of guidelines or rules that can be used to define test cases. These test cases can then be configured and implemented using test automation tools, such as Selenium or Puppeteer, and integrated into the delivery process via a CI/CD pipeline.

A test automation framework will consist of practices and tools that are designed to create efficient test cases. These practices range from coding standards and test-data handling methods to object repository management and access control for test environments and external tools. However, testers have more freedom than this. Testers are:

  • Not confined to these rules or guidelines
  • Free to create test cases in their preferred way

Still, a framework provides standardization across the testing process, leading to a more efficient, secure, and compliant testing process.
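
For a concrete taste of what the tooling side looks like, here is a minimal Selenium test case in Python; it assumes Selenium is installed (pip install selenium) with a matching ChromeDriver available, and the URL and assertion are placeholders:

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get("https://example.com")            # placeholder URL
        heading = driver.find_element(By.TAG_NAME, "h1")
        assert "Example" in heading.text             # the check under test
    finally:
        driver.quit()                                # always release the browser

A framework's job is to dictate where scripts like this live, how their test data is supplied, and how their results are reported, so every tester's work fits the same mold.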

Advantages of a test automation framework

There are some key advantages of adhering to the rules and guidelines offered by a test automation framework. These advantages include:

  • Increased speed and efficiency of the overall testing process
  • Improved accuracy and repeatability of the test cases
  • Lower maintenance requirements with standardized practices and processes
  • Reduced manual intervention and human error
  • Maximized test coverage across all areas of the application, from the GUI to internal application logic


Popular test automation frameworks

When it comes to test automation frameworks, there are six leading frameworks available these days. In this section, we will look at each of these six frameworks with regard to their architecture, advantages, and disadvantages:

  • Linear automation framework
  • Modular-driven framework
  • Library architecture framework
  • Data-driven framework
  • Keyword-driven framework
  • Hybrid testing framework

Linear Automation Framework

The linear framework, or record-and-playback framework, is best suited for basic, introductory-level testing.

In a linear automation framework, users target a specific piece of program functionality, create test scripts in sequential order, and run them individually. This process includes capturing all the test steps, such as navigation and inputs, and playing them back repeatedly to conduct the test.

Advantages of Linear Framework

  • Does not require specific automation knowledge or custom code
  • It is easier to understand test cases due to sequential order
  • Faster approach to testing
  • Simpler to integrate into existing workflows, and most automation tools provide built-in record-and-playback functionality

Disadvantages of Linear Framework

  • Test cases are not reusable as they are targeted towards specific use cases or functions
  • With static data, there is no option to run tests with different data sets as test data is hardcoded
  • Maintenance can be complex as any change will require rebuilding test cases

Modular Driven Framework

This framework takes a modular approach to testing, breaking tests down into separate units, functions, or modules that are tested in isolation. These separate test scripts can then be combined to build larger tests covering the complete application or specific functionality.

(Learn about unit testing, function testing, and more.)

Advantages of Modular Framework

  • Increased flexibility of test cases. Individual sections can be quickly edited and modified as tests are separated
  • Increased reusability, as individual test cases can be combined into different overarching modules to suit different needs
  • The ability to scale up testing quickly and efficiently to include any new functionality

Disadvantages of Modular Framework

  • Can be complex to implement and require proper programming knowledge to build and set up test cases
  • Cannot be used with different test data sets in a single test case

Library Architecture Framework

This framework is derived from the modular framework and aims to provide a greater level of modularity by breaking down tests by units, functions, etc.

The library architecture framework identifies similar tasks within test scripts and groups them by objective rather than strictly by function. These grouped functions are stored in a library, sorted by their objectives, and test scripts call upon this library to obtain different functionality when testing.

Advantages of Library Architecture Framework

  • A high level of modularity leads to increased scalability of test cases
  • Increased reusability as libraries can be used across different test scripts
  • Can be a cost-effective solution due to its reusability, especially in larger projects

Disadvantages of Library Architecture Framework

  • Can be complex to set up and integrate into delivery pipelines
  • Technical expertise is required to identify and modularize the common tasks
  • Test data is static, as it is hardcoded in the scripts, with any changes requiring direct changes to the scripts

Data-Driven Framework

The main feature of the data-driven framework is that it decouples data from the script logic. It is the ideal framework when users need to test a function or scenario with different data sets but still use the same internal logic.

In data-driven frameworks, values such as inputs and outputs are passed as parameters to test scripts from external data sources such as variable files, databases, etc.
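
A minimal sketch of this idea using pytest's parametrization (the function under test and the data rows are invented; in practice the rows could equally be loaded from a CSV file or database):

    import pytest

    def apply_discount(price: float, percent: float) -> float:
        return round(price * (1 - percent / 100), 2)

    # Same test logic, many data sets: only the table changes, never the test.
    @pytest.mark.parametrize(
        "price, percent, expected",
        [
            (100.0, 10, 90.0),
            (59.99, 0, 59.99),
            (20.0, 50, 10.0),
        ],
    )
    def test_apply_discount(price, percent, expected):
        assert apply_discount(price, percent) == expected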

Advantages of Data-Driven Framework

  • Decoupled approach to data and logic leads to increased reusability of test cases while providing the ability to test under different data sets without modifying the test case
  • Handle multiple scenarios using the same test scripts with varying sets of data, which leads to faster test cycles
  • Since there is no need to hardcode data, scripts can be changed without affecting the overall functionality
  • Easily adaptable to suit any testing need

Disadvantages of Data-Driven Framework

  • One of the most complex frameworks to implement, as decoupling data and logic requires expert knowledge of both automation and the application itself
  • Can be time-consuming and a resource-intensive process to implement in the delivery pipeline

Keyword-Driven Framework

The keyword-driven framework takes the decoupling of data and logic introduced in the data-driven framework a step further. In addition to the data being stored externally, specific keywords associated with different actions and used to test the GUI are also stored externally, to be referenced at test execution.

It makes keywords independent entities that reference specific functions or actions that are associated with specific objects. Users write code to prompt the necessary keyword-based action, and the appropriate script is executed within the test when the keyword is referenced.
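
A toy sketch of the dispatch mechanism (the keywords and actions here are invented; real keyword-driven tools typically store the table in external spreadsheets or files):

    # Each keyword maps to an implementation function, so non-programmers
    # can compose tests by editing the table rather than the code.
    def open_page(url):   print(f"opening {url}")
    def type_text(text):  print(f"typing '{text}'")
    def click(target):    print(f"clicking {target}")

    KEYWORDS = {"open": open_page, "type": type_text, "click": click}

    test_case = [
        ("open", "https://example.com/login"),
        ("type", "demo-user"),
        ("click", "login-button"),
    ]

    for keyword, argument in test_case:
        KEYWORDS[keyword](argument)  # dispatch each step to its action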

Advantages of Keyword-Driven Framework

  • Test scripts can be built independently of the application
  • Increased reusability and flexibility while providing a detailed approach to categorize test functionality
  • Reduced maintenance requirements compared to non-decoupled frameworks

Disadvantages of Keyword-Driven Framework

  • One of the most complex frameworks to configure and implement, requiring a considerable investment of resources
  • Keywords need to be scaled according to the application testing needs, which can lead to increased complexity with each test scope or requirement change

Hybrid Testing Framework

A hybrid testing framework is not a predefined framework with its own architecture or rules but a combination of the previously mentioned frameworks.

Depending on a single framework is rarely feasible given the ever-increasing need to cater to different test scenarios. Therefore, different types of frameworks are combined in most development environments to best suit the application's testing needs while leveraging the strengths of each framework and mitigating its disadvantages.

With the popularity of DevOps and agile practices, more flexible frameworks are needed to cope with the changing environments. Therefore, a hybrid approach provides the best solution by allowing users to mix and match frameworks to obtain the best results for their specific testing requirements.

Customizing your frameworks

Selecting a test automation framework is the first step towards creating an automated testing environment. However, relying on a single framework has become a near-impossible task due to the ever-evolving nature of the technological landscape and rapid development cycles. That’s why the hybrid testing framework has gained popularity—for enabling users to combine different test automation frameworks to build an ideal automation framework for their needs.

Even if you are new to the automation world, you can start with a framework with many built-in solutions, build on top of it and customize it to create the ideal framework.


Containers & DevOps: Containers Fit in DevOps Delivery Pipelines
https://www.bmc.com/blogs/devops-containers/ | Mon, 29 Nov 2021

DevOps came to prominence to meet the ever-increasing market and consumer demand for tech applications. It aims to create a faster development environment without sacrificing the quality of software, improving overall quality within a rapid development lifecycle. It relies on a combination of multiple technologies, platforms, and tools to achieve all these goals.

Containerization is one technology that revolutionized how we develop, deploy, and manage applications. In this post, we will look at how containers fit into the DevOps world and the advantages or disadvantages offered by a container-based DevOps delivery pipeline.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is a containerized application?

Virtualization helped users to create virtual environments that share hardware resources. Containerization takes this abstraction a step further by sharing the operating system kernel.

This leads to lightweight and inherently portable objects (containers) that bundle the software code and all the required dependencies together. These containers can then be deployed on any supported infrastructure with minimal or no external configurations.


One of the most complex parts of a traditional deployment is configuring the deployment environment with all the dependencies and configurations. Containerized applications eliminate these configuration requirements as the container packages everything that the application requires within the container.

On top of that, containers require fewer resources than virtual machines and are easier to manage. In this way, containerization leads to greatly simplified deployment strategies that can be easily automated and integrated into DevOps delivery pipelines. When this is combined with an orchestration platform like Kubernetes or Rancher, users can:

  • Leverage the strengths of those platforms to manage the application throughout its lifecycle
  • Provide greater availability, scalability, performance, and security

What is a continuous delivery pipeline?

DevOps relies on Continuous Delivery (CD) as the core process to manage software delivery. It enables software development teams to deploy software more frequently while maintaining the stability and reliability of systems.

Continuous Delivery utilizes a stack of tools, such as CI/CD platforms and testing tools, combined with automation to facilitate frequent software delivery. Automation plays a major role in these continuous delivery pipelines, automating all possible tasks from tests to infrastructure provisioning and even deployments.

In most cases, Continuous Delivery is combined with Continuous Integration to create more robust delivery pipelines called CI/CD pipelines. They enable organizations to integrate the complete software development process into a DevOps pipeline:

  • Continuous Integration ensures that all code changes are integrated into the delivery pipeline.
  • Continuous Delivery ensures that new changes are properly tested and ultimately deployed in production.

Both are crucial for a successful DevOps delivery pipeline.

(Learn how to set up a CI/CD pipeline.)

How does it all come together?

Now that we understand a containerized application and a delivery pipeline, let’s see how these two relate to each other to deliver software more efficiently.

Traditional DevOps pipeline

First, let’s look at a more traditional DevOps pipeline. In general, a traditional delivery pipeline will consist of the following steps.

  1. Develop software and integrate new changes to a centralized repository. (Version control tools come into play here.)
  2. Verify and validate code and merge changes.
  3. Build the application with the new code changes.
  4. Provision the test environment with all the configurations and dependencies and deploy the application.
  5. Carry out testing. (This can be both automated and manual testing depending on the requirement)
  6. After all tests are completed, deploy the application in production. (This again requires provisioning resources and configuring the dependencies with any additional configurations required to run the application.)

Most of the above tasks can be automated: infrastructure can be provisioned with IaC tools such as Terraform or CloudFormation, and deployment can be simplified using platforms such as AWS Elastic Beanstalk or Azure App Service. However, all these automated tasks still require careful configuration and management, and using provider-specific tools can lead to vendor lock-in.

Containerized delivery pipeline

Containerized application deployments allow us to simplify the delivery pipeline with less management overhead. A typical containerized pipeline can be summed up in the following steps.

  1. Develop and integrate the changes using a version control system.
  2. Verify and validate and merge the code changes.
  3. Build the container. (At this stage, the code repository contains the application code and all the necessary configuration files and dependencies that are used to build the container.)
  4. Deploy the container to the staging environment.
  5. Carry out application testing.
  6. Deploy the same container to the production environment.

(Figure: container-based DevOps delivery pipeline)

As you can see in the above diagram, containerized application pipelines effectively eliminate most regular infrastructure and environment configuration requirements. However, the main thing to remember is that the container deployment environment must be configured beforehand. In most instances, this environment relates to either:

  • A container orchestration platform like Kubernetes or Rancher
  • A platform-specific orchestration service like Amazon Elastic Container Service (ECS), AWS Fargate, Azure Container services, etc.

The key difference

The main difference between the two pipelines is the application build versus the container build. A normal delivery pipeline builds only the application, while a containerized pipeline builds the complete container, which can then be deployed in any supported environment.

The container includes all the application dependencies and configurations. This reduces errors relating to configuration issues and allows delivery teams to quickly move containers between different environments such as staging and production. Besides, containerization greatly reduces the scope of troubleshooting, as developers only need to drill down into the application within the container, with little to no effect from external configurations or services.
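
The build-once, promote-everywhere idea can be sketched as a small Python driver around the Docker CLI; the image name and registry are hypothetical, and the deployment step is deliberately left abstract since it varies by platform:

    import subprocess

    IMAGE = "registry.example.com/myapp:1.5.0"  # hypothetical registry and tag

    def sh(*args: str) -> None:
        subprocess.run(args, check=True)  # stop the pipeline on any failure

    # Build and publish the image once; the same artifact then moves
    # unchanged between environments.
    sh("docker", "build", "-t", IMAGE, ".")
    sh("docker", "push", IMAGE)

    for env in ("staging", "production"):
        # With Kubernetes, this step might update the deployment's image tag
        # against the cluster for each environment.
        print(f"deploy {IMAGE} to {env}")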

Modern application architectures such as microservices-based architectures are well suited for containerization as they decouple application functionality to different services. Containerization allows users to manage these services as separate individual entities without relying on any external configurations.

There will be infrastructure management requirements even with containers, though containers do indeed simplify these requirements. The most prominent requirement is managing both:

  • The container orchestration platform itself
  • The underlying infrastructure it runs on

However, using a managed container orchestration platform like Amazon Elastic Kubernetes Service (EKS) or Azure Kubernetes Service (AKS) eliminates any need to manage infrastructure for the container orchestration platform. These platforms further simplify the delivery pipeline and, because they are based on Kubernetes, allow Kubernetes users to adopt them without being vendor-locked.

(Determine when to use ECS vs AKS vs EKS.)

Container orchestration in DevOps delivery pipeline

Container orchestration goes hand in hand with containerized applications, as containerization is only one part of the overall container revolution. Container orchestration is the process of managing a container throughout its lifecycle, from deploying the container to managing its availability and scaling.

While there are many orchestration platforms, Kubernetes is one of the most popular options with industry-wide support. It can power virtually any environment, from single-node clusters to multi-cloud clusters. The ability of orchestration platforms to manage the container throughout its lifecycle while ensuring availability eliminates the need for manual intervention to manage containers.

As mentioned earlier, using a platform-agnostic orchestration platform prevents vendor-lock-in while allowing users to utilize managed solutions and power multi-cloud architectures with a single platform.

(Explore our multi-part Kubernetes Guide, including hands-on tutorials.)

Are containers right for your DevOps delivery pipeline?

The simple answer is yes. Containerization can benefit practically all application development, with the only exceptions being overly simple applications or legacy monolithic developments.

  • DevOps streamlines rapid development and delivery while increasing team collaboration and improving the overall application quality.
  • Containers help simplify the DevOps delivery process further by allowing users to leverage all the advantages of containers within the DevOps delivery pipelines without hindering the core DevOps practices.

Containers can support any environment regardless of the programming language, framework, deployment strategy, etc., while providing more flexibility for delivery teams to customize their environments without affecting the delivery process.


ITOps vs DevOps vs NoOps: The IT Operations Evolution
https://www.bmc.com/blogs/itops-devops-and-noops-oh-my/ | Mon, 29 Nov 2021

The modern technology landscape is an uber-competitive, constantly evolving ecosystem. With technology integrated into all aspects of modern life, all companies must continuously evolve to:

  • Meet increased consumer demand and market conditions
  • Continue to provide quality products as quickly as possible

Historically, IT departments acted as a single team, but they have been increasingly divided into specialized departments or teams with specific goals and responsibilities. This increased specialization is vital for quickly adapting to the evolving technological landscape. However, this division has also created some disconnect between teams when it comes to software development and deployment.

DevOps, ITOps, and NoOps are some concepts that help companies to become as agile and secure as possible. Understanding these concepts is the key to structuring the delivery pipeline at an organizational level. So, in this article, let’s take a look at the evolution of ITOps, DevOps, and NoOps.


(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

What is ITOps?

ITOps (or TechOps) is shorthand for IT Operations. IT Operations is the most traditional concept of the three we'll discuss, and it's also the basis for these more modern practices.

Any IT task can come under the ITOps umbrella regardless of the business domain, as almost every business domain relies on IT for day-to-day operations. ITOps can apply to virtually any field.

(Understand how operations management can vary from service management.)

ITOps basics

In its most fundamental form, ITOps is the process of delivering and maintaining all the services, applications, technologies, and infrastructure that are required to run an information technology-based product or service. Therefore, ITOps views software development and IT infrastructure management as a unified entity that is a part of the same process. The main difference of ITOps is how it handles delivery and maintenance.

ITOps typically covers the job roles responsible for delivering IT changes and providing long-term support for the overall IT services and infrastructure.

ITOps goals

ITOps is geared more towards stability and long-term reliability, with limited support for agile and speedy workflows. Generally, agility and speed are not the primary concerns of ITOps at all; thus, ITOps can seem inflexible, with rigid workflows. The approach is also geared towards managing physical infrastructure and release-based, highly tested software products where reliability and stability are key factors.

This inflexible nature is also the major downside of ITOps. It may be an excellent choice for monolithic and slow-moving software developments, such as in the financial services industry, yet ITOps becomes obsolete in rapidly evolving software developments. As most modern software developments fall into this category, ITOps is not a suitable candidate for such environments.

What is DevOps?

DevOps provides a set of practices to bring software development and IT operations together to create rapid software development pipelines. These development pipelines feature greater agility without sacrificing the overall quality of the end product. We can understand DevOps as a major evolution of traditional ITOps that is an outcome of the Cloud era.

(Explore our comprehensive DevOps Guide.)

DevOps basics

DevOps combines cultural philosophies, different practices, and tools with a greater emphasis on team collaboration. Moreover, DevOps will bring fundamental changes to how an organization handles its overall development strategy. As mentioned previously, a modern software delivery team consists of multiple specialized teams such as:

  • Development
  • Quality assurance (QA)
  • Infrastructure
  • Security
  • Support

DevOps aims to bring all these teams together without impacting their specialty while fostering a more collaborative environment. This environment provides greater visibility of the roles and responsibilities of each team and team member.

Automation also plays a key role in DevOps, facilitating an agile and rapid SDLC. It enables offloading most manual and tedious tasks, such as testing and infrastructure provisioning, into automated workflows. Tools like those discussed earlier in this guide (CI/CD platforms such as Jenkins, and infrastructure-as-code tools such as Terraform and Ansible) facilitate this automation.

DevOps goals

The gist of adopting DevOps in your organization is that it can power previously disconnected tasks, such as infrastructure provisioning and application deployments, through a single unified delivery pipeline.

For example, in a more traditional development process, developers will need to inform the operations team separately if they need to provision or reconfigure infrastructure to meet the application changes. This process can lead to significant delays and bottlenecks in the overall delivery process.

However, DevOps streamlines this process by allowing separate teams to understand the requirements of each other. It enables them to foresee these requirements and address them promptly. This process can be automated in some situations, eliminating the need for manual interaction to manage the infrastructure.

DevOps is well suited for modern, cloud-based, or cloud-native application developments and can be easily adapted to meet ever-changing market and user requirements. There is a common misconception that DevOps is unsuitable for traditional developments, yet DevOps practices can be adapted to suit any type of development—including DevOps for service management.

What is NoOps?

NoOps is a further evolution of the DevOps method that eliminates the need for a separate operations team by fully automating the IT infrastructure. In this approach, all provisioning, maintenance, and similar tasks are automated to a level where no manual intervention is required.

NoOps and DevOps are similar in the sense that both rely on automation to streamline software development and deployment. However, DevOps aims to foster a more collaborative environment while using automation to simplify the development process.

On the other hand, NoOps aims to remove any operational concerns from the development process. In a fully automated environment, developers can use these tools and processes directly even without knowing their underlying mechanisms.

NoOps basics

NoOps is targeted solely at cloud-based architectures, where infrastructure is less of a burden or is entirely the responsibility of the service provider.

Serverless architectures are the perfect example of NoOps software delivery: developers only need to create their applications and deploy them to the serverless environment, eliminating any infrastructure or operational considerations.
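
To make that concrete, here is a minimal serverless function in Python. The handler signature matches the AWS Lambda Python runtime; the greeting logic is invented purely for illustration. The developer writes and deploys only this function, while provisioning, scaling, and patching the servers underneath is entirely the provider's concern:

```python
"""Minimal AWS Lambda-style handler: the entire deployable unit."""
import json

def lambda_handler(event, context):
    # The platform invokes this function on demand; there is no server to manage.
    name = (event or {}).get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```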

NoOps may seem like the perfect operational strategy. Unfortunately, it has no proper process or team management practices baked into the method. As a result, it can hinder collaboration across a delivery pipeline and put more burden on developers, who must manage the application lifecycle without any operational assistance.

In most cases, NoOps is best used to complement DevOps practices, introducing further automation to a delivery pipeline while preserving the collaborative, multi-team environment.

Choosing ITOps vs DevOps vs NoOps

In the sections above, we discussed the impact of each of these methods on the software development lifecycle. But which is the ideal solution for your organization? Let's summarize the primary characteristics of each method to find out.

ITOps

  • Stability and long-term support over speed and agility
  • Strict, inflexible, yet tried-and-tested workflows
  • Primary focus on the IT operations side, streamlining the overall IT infrastructure to ensure business continuity
  • Geared towards managing physical infrastructure across multiple business domains
  • Well suited to legacy, release-based enterprise software development

DevOps

  • Brings fundamental changes at an organizational level, with a focus on streamlining the overall delivery process
  • Increases collaboration and introduces automation throughout the application lifecycle
  • Aims to create more flexible and rapid delivery pipelines while increasing overall product quality
  • Can be adapted to any application type, architecture, or platform, from cloud-native to legacy enterprise development
  • Offers greater flexibility to select tools and platforms depending on user requirements
  • Because DevOps is based on CI/CD principles, software constantly evolves to stay current with the ever-changing technological landscape
  • Provides faster feedback cycles to quickly fix and improve the product

NoOps

  • Automates everything
  • Eliminates the need for separate operations teams while providing all the necessary automated tools and platforms for developers to manage the software delivery
  • Relies heavily on cloud services such as serverless computing and containers to provide an environment with no infrastructure concerns
  • Focuses on speed and simplicity at the cost of flexibility and granular control
  • Ideal for cloud-focused workloads

As you can see, ITOps and NoOps excel at their domains, whereas DevOps can be considered a more universal approach.

A continuing evolution

ITOps is becoming obsolete due to its slow adaptation to the current technological landscape. (In fact, AIOps is rapidly moving in.)

NoOps is an idealistic approach in which everything can be automated. However, it is still a way off, as some critical aspects, such as testing and advanced infrastructure and networking configuration, still require manual intervention.

Finally, we come back to DevOps. DevOps has gained popularity thanks to its adaptability to almost any development environment while improving the agility, speed, and efficiency of the software delivery process. Approaches like NoOps can even be integrated into the overall DevOps process to enhance it further.

Introduction To Database DevOps https://www.bmc.com/blogs/database-devops/ Tue, 09 Nov 2021 11:14:30 +0000

Database DevOps is an emerging area of the DevOps methodology. Let’s take a look at database management and what happens when you apply DevOps concepts.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

DevOps for databases?

DevOps has (officially) become the preferred software development lifecycle (SDLC) framework for application development in recent years. Continuous operations and automation have emerged as key characteristics of the DevOps philosophy as practiced in successful DevOps environments.

Yet the principles of continuity and automation have not yet reached all aspects of software applications.

The ongoing management, change, update, and development of database code has emerged as a bottleneck for many DevOps organizations, forcing engineers to invest countless hours in database development rework to support the continuous release cycles expected of a streamlined DevOps SDLC pipeline.

Since database change and development is considered one of the riskiest and slowest processes in the SDLC, applying the DevOps principles of continuous operations and automation specifically to database code development is seen as a potential solution to the database problem. According to a recent research report:

  • Over 90% of application stakeholders work toward accelerating database deployment and management procedures.
  • More than half of all application changes further require modifications to the corresponding database code.

Database challenges in DevOps

Before we discuss how database DevOps can make the DevOps SDLC pipeline efficient, let’s discuss the database-specific challenges facing DevOps organizations:

  • Manual changes. Traditional database management follows manual processes, such as code reviews and approvals, all of which hold up the release cycle.
  • Data provisioning. Due to security and regulatory limitations, production data is often unavailable for testing early application builds; it must first be processed and encrypted to meet the relevant regulatory requirements.
  • CI/CD for databases. Data persistence cannot be maintained the same way code persistence is managed in a Continuous Integration/Continuous Deployment (CI/CD) pipeline. Continuous integration and deployment of new database versions must respect the existing structure and schema of the databases, which is precisely why manual intervention becomes necessary.
  • Integration challenges. The sheer variety of tooling and architectural variations can make it difficult for database systems to work coherently. This lack of standardization means that a DevOps team cannot entirely follow continuous, automated infrastructure operations for database system provisioning and management.

And then there’s a bigger, harder-to-tackle challenge: insufficient DevOps attention.

Many real-world DevOps implementations have failed to integrate database and application development processes into a unified, holistic SDLC framework policy. Database management has continued down the traditional route, and the increasing scale of database changes has made it difficult for engineers to standardize and coordinate database development efforts with the rest of application development.

(Watch these challenges grow as data storage increases.)

What’s Database DevOps? A process

Now, let’s look at the main tasks involved in Database DevOps, which in fact closely mirror adopting the DevOps framework for application code:

1. Centralize source control

Use a centralized version control system where all database code is stored, merged, and modified. Storing static data, scripts, and configuration files within a unified source control system makes it easy to roll back changes and to synchronize database code changes with application code development following a CI/CD approach.

(Learn more about CI/CD.)

2. Synchronize with CI/CD

Automate build operations that run alongside application releases. This makes it easy to coordinate the application and database code deployment process: the database code is tested and updated at the same time a new software build integration takes place, according to the underlying database dependencies.
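
As a minimal sketch of what this looks like in practice, the script below applies versioned migration scripts to a database as part of a build. It assumes migrations live in version control as numbered .sql files under a hypothetical migrations/ directory, and it uses SQLite from the Python standard library purely for illustration; real pipelines typically reach for dedicated tools such as Flyway, Liquibase, or Alembic:

```python
"""Sketch of a versioned database migration runner for a CI build step."""
import sqlite3
from pathlib import Path

MIGRATIONS_DIR = Path("migrations")  # hypothetical dir, versioned with the app code

def apply_pending_migrations(db_path: str = "app.db") -> None:
    conn = sqlite3.connect(db_path)
    try:
        # Track which migrations have already been applied.
        conn.execute(
            "CREATE TABLE IF NOT EXISTS schema_migrations (version TEXT PRIMARY KEY)"
        )
        applied = {row[0] for row in conn.execute("SELECT version FROM schema_migrations")}

        # Apply any new migration files in version (lexicographic) order.
        for script in sorted(MIGRATIONS_DIR.glob("*.sql")):
            if script.name in applied:
                continue
            conn.executescript(script.read_text())
            conn.execute("INSERT INTO schema_migrations VALUES (?)", (script.name,))
            conn.commit()
            print(f"applied {script.name}")
    finally:
        conn.close()

if __name__ == "__main__":
    apply_pending_migrations()
```

Because applied versions are recorded in the database itself, re-running the script on every build is safe: only the migrations that shipped with the new build get applied.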

3. Test & monitor

The CI/CD approach, with a centralized version control system for both database and application code, makes it easier to identify database problems every time a new build is checked into the repository and compiled.

Database DevOps best practices

Additional best practices consistent with the DevOps framework:

  • Adopt the DevOps and Agile principles of writing small, incremental, and frequent changes to the database code. Small changes are easier to revert and make it easier to identify potential bugs early in the SDLC.
  • At every incremental change, monitor and manage dependencies. Follow a microservices-based approach for database development.
  • Adopt a fast feedback loop, similar to the application code development process. Be aware, however, that critical feedback may be hidden within the deluge of log metrics and alerts generated at every node of the network.
  • Track every change made to the database code. Test early and often, prioritizing metrics based on business impact and user experience.
  • Set up the testing environment to replicate real-world use cases, and establish a production-like staging environment for tests that ensure the dependability of the database.
  • Automate as much as possible. Identify repetitive and predictable database management tasks and write scripts that update the database code when a new build is compiled at the Continuous Integration server.

Finally, note that every DevOps implementation is unique to the organization adopting the framework. Conceptually, Database DevOps can take many of its guidelines from the application code DevOps playbook, integrating database code development and management alongside the application code for similar SDLC performance and efficiency gains.

Explained: Monitoring & Telemetry in DevOps https://www.bmc.com/blogs/devops-monitoring-telemetry/ Thu, 14 Oct 2021 15:28:00 +0000

DevOps is a data-driven software development lifecycle (SDLC) framework. DevOps engineers analyze logs and metrics data generated across all software components and the underlying hardware infrastructure. This helps them understand a variety of areas:

  • Application and system performance
  • Usage patterns
  • Bugs
  • Security and regulatory issues
  • Opportunities for improvement

Extensive application monitoring and telemetry are required before an application achieves the coveted service level agreement (SLA) uptime of five 9’s or more: available at least 99.999% of the time. But what exactly are monitoring and telemetry, and how do they fit into a DevOps environment? Let’s discuss.

(This article is part of our DevOps Guide. Use the right-hand menu to go deeper into individual practices and concepts.)

What is monitoring?

Monitoring is a common IT practice. In the context of DevOps, monitoring is the process of collecting log and metrics data to observe and detect performance and compliance issues at every stage of the SDLC pipeline. Monitoring involves tooling that can be programmed to do the following (a toy sketch of the alerting capability appears after this list):

  • Procure specific log data streams
  • Produce an intuitive visual representation of the metrics performance
  • Create alerts based on specified criteria
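
As a toy illustration of that third capability, the script below polls a hypothetical web-server access log and raises an alert when the 5xx error rate crosses a threshold. The log file name, field layout, and threshold are all invented for the example; a real monitoring stack (Prometheus, Nagios, and the like) does far more:

```python
"""Toy log monitor: alert when the recent 5xx error rate crosses a threshold."""
import time
from pathlib import Path

LOG_FILE = Path("access.log")   # hypothetical log; status code is the second-to-last field
ERROR_THRESHOLD = 0.05          # alert if more than 5% of recent requests are 5xx

def error_rate(lines):
    """Share of requests whose HTTP status code starts with 5."""
    statuses = [f[-2] for f in (line.split() for line in lines) if len(f) >= 2]
    return sum(s.startswith("5") for s in statuses) / len(statuses) if statuses else 0.0

def check_once():
    if not LOG_FILE.exists():
        return
    lines = LOG_FILE.read_text().splitlines()[-1000:]  # look at the last 1,000 requests
    rate = error_rate(lines)
    if rate > ERROR_THRESHOLD:
        # In practice this would page an on-call engineer or post to a chat channel.
        print(f"ALERT: 5xx error rate {rate:.1%} exceeds {ERROR_THRESHOLD:.0%}")

if __name__ == "__main__":
    while True:
        check_once()
        time.sleep(30)  # poll every 30 seconds
```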

The goals of monitoring in DevOps include:

  • Improve visibility and control of app components and IT infrastructure operations. Applications range from cybersecurity to resource optimization; for instance, monitoring tools can alert on network breaches or excessive network traffic at a specific node.
  • Monitor application performance issues, identify bugs, and understand how specific app components behave in production and test environments. Once the application is deployed, monitoring tools alert on several metrics to track resource utilization and workload distribution. With this information, engineers can allocate resources to meet dynamic traffic and workload demands.
  • Understand user and market behavior. This information helps engineers make technical decisions, such as adding a specific feature, removing a button, or investing in cloud resources to further improve SLA performance. Proactive decision-making in this regard helps organizations maintain and expand their market share in a competitive business landscape.

(Explore continuous delivery metrics, including monitoring.)

What is telemetry?

Telemetry is a subset of monitoring: it refers to the mechanism of representing the measurement data provided by a monitoring tool. Telemetry can be seen as agents that can be programmed to extract specific monitoring data (a toy agent is sketched after this list), such as:

  • High-volume time-series information on resource utilization
  • Real-time alerting for specific incidents
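
Here is a toy telemetry agent illustrating the first item: it emits a time series of resource-utilization samples. The output file and sampling interval are invented for the example, and os.getloadavg() is Unix-only; production agents ship samples to a collector such as StatsD, Prometheus, or OpenTelemetry rather than to a local file:

```python
"""Toy telemetry agent: emit a time series of system-load samples."""
import json
import os
import time

def sample():
    """One time-series data point of system load (Unix-only)."""
    load1, load5, load15 = os.getloadavg()
    return {"ts": time.time(), "load1": load1, "load5": load5, "load15": load15}

def run(interval_s=1.0):
    with open("telemetry.jsonl", "a") as out:
        while True:
            out.write(json.dumps(sample()) + "\n")
            out.flush()  # keep the series current for downstream readers
            time.sleep(interval_s)

if __name__ == "__main__":
    run()
```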

DevOps monitoring vs telemetry

Consider motor racing, where fans see metrics such as top speed, G-forces, lap times, race position, and other information displayed on their TV screens. Those measurement displays are the telemetry.

Conversely, the entire process of installing the sensors, extracting the data, and presenting a limited set of metrics on TV is called monitoring.

In the context of DevOps, the most commonly measured metrics relate to the health and performance of an application, with the various corresponding metrics always visible on the dashboard.

Monitoring challenges

Before turning to the DevOps use cases of telemetry, let’s look at the most common monitoring challenges facing DevOps organizations:

  • Operations personnel invest significant time and resources to find performance issues on the infrastructure and apps.
  • Devs frequently interrupt their development work to address new bugs and issues identified at the production stage.
  • The rapid release cycle approach makes apps prone to performance issues—thorough testing takes time and resources that may not be justified from a business perspective.
  • The deployment procedure is complex: engineers need to synchronize and coordinate multiple development workstreams across microservices, multi-cloud, and hybrid IT environments.
  • Anomalies are a sign of potentially emerging issues. It’s important to identify them and contain the damage before the impact spreads across the global user base.
  • Security and regulatory restrictions require organizations to exercise deep control and maintain visibility into the hardware resources operating on sensitive user data and applications. This is challenging, especially when the underlying infrastructure is a cloud network operated off-premises by a third-party vendor that can offer only limited log data, metrics, and insight into the hardware components.

Monitoring & telemetry use cases

To address these challenges, DevOps teams use a variety of monitoring tools to identify and understand patterns that could predict the future performance of an app, a service, or the underlying infrastructure.

Telemetry in DevOps serves a wide range of use cases, from tracking infrastructure utilization to measuring user-facing application performance.

Data analysis is necessary

Analysis follows monitoring. Telemetry doesn’t necessarily include analyzed and processed logs or metrics. Decision-making based on telemetry requires extensive analysis of a variety of KPIs, and that analysis can be integrated with the monitoring systems to trigger automated actions when necessary.
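
A minimal sketch of that last step, turning analyzed telemetry into an automated action: the loop below flags a sample as anomalous when it sits more than three standard deviations from the rolling mean, then calls a remediation hook. The trigger_scale_up() function is a hypothetical placeholder for whatever action a pipeline wires in:

```python
"""Toy anomaly detector: rolling mean/stdev over telemetry samples."""
from collections import deque
from statistics import mean, stdev

WINDOW = 60  # rolling window of the last 60 samples

def trigger_scale_up():
    print("automated action: scaling up")  # placeholder remediation hook

def watch(samples):
    window = deque(maxlen=WINDOW)
    for value in samples:
        if len(window) >= 10:  # need enough history for a stable baseline
            mu, sigma = mean(window), stdev(window)
            if sigma > 0 and abs(value - mu) > 3 * sigma:
                trigger_scale_up()
        window.append(value)

if __name__ == "__main__":
    # Example: steady load around 0.5, then a spike that triggers the action.
    watch([0.5, 0.52, 0.48, 0.51, 0.49, 0.5, 0.53, 0.47, 0.5, 0.51, 0.5, 5.0])
```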

Top DevOps Trends in 2022 https://www.bmc.com/blogs/devops-trends/ Thu, 14 Oct 2021 00:00:06 +0000

The impact that the COVID-19 pandemic has had on businesses and people is still being understood, and its effects will keep rippling out for years to come. From more remote workers to supply chain challenges no one saw coming, this past year has forced more innovation and creativity than ever.

One huge takeaway—for IT leaders and everyone else—is how technology has played a crucial role in people’s ability to continue to work, learn, receive services, and socialize.

For a majority of companies, IT has shifted from something that helps get business done to a mission-critical role. Developers and DevOps professionals suddenly find themselves in a world of opportunity, with a renewed focus on frequent improvements and new innovations alike. With this eye on the future, we have put together some of the top DevOps trends for 2022.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

DevOps predictions for 2022

Based on current trends and future predictions, here’s what we can expect to see in the world of DevOps in 2022.

Continued cloud adoption

Even before the changes that came about as a result of the pandemic, most enterprises were already making moves to adopt a more cloud-centric infrastructure to support cloud-based workflows and applications. Given the pressing need for the industry to adapt and adjust, this shift has had to happen even faster than originally planned.

Simply using the cloud will not make a company highly evolved, however. According to the recently released Puppet 2021 State of DevOps report, a majority of DevOps teams use the cloud, but most of them use it poorly. Results show that:

  • 65% of what are considered mid-evolution organizations report using the public cloud.
  • Yet only 20% of them are using it to its full potential.

For those looking to improve their cloud adoptions, considering different types of clouds could be beneficial. Results from the 2021 Accelerate State of DevOps survey found that teams who used hybrid or multi-cloud software deployments were 1.6 times more likely to meet their organizational performance targets than those who used more traditional cloud strategies.

(Read more about the State of DevOps.)

Automation

Automation is nothing new to the DevOps community, but being good at automation does not mean that an organization is good at DevOps.

According to the Puppet 2021 State of DevOps report, highly evolved firms are far more likely to have implemented extensive automation, with 90% of respondents with highly evolved DevOps practices reporting that their teams have automated most of their repetitive tasks.

For organizations that are not considered highly evolved, these initiatives will only take on more urgency. To make this happen, teams must not only work to automate the entire pipeline; they must also be willing to integrate AI and ML.

Applying ML to the delivery lifecycle allows organizations to understand where blockages or capacity issues occur. Armed with this knowledge, teams can mitigate problems more effectively when they arise. AI-based predictive analytics can make the DevOps pipeline smarter in two key ways (a toy illustration of the first follows the list):

  • Anticipating problems
  • Providing potential solutions
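
As a toy illustration of anticipating problems, the sketch below fits a linear trend to recent build durations and warns if the trend is on course to breach a CI timeout within the next ten builds. The timeout, horizon, and durations are invented for the example, and real AIOps platforms use far richer models than a straight line:

```python
"""Toy predictive check: will build durations breach the CI timeout soon?"""
from statistics import linear_regression  # Python 3.10+

TIMEOUT_S = 600   # hypothetical CI timeout, in seconds
HORIZON = 10      # look this many builds ahead

def forecast_breach(durations):
    """True if the linear trend projects past the timeout within HORIZON builds."""
    xs = list(range(len(durations)))
    slope, intercept = linear_regression(xs, durations)
    projected = slope * (len(durations) + HORIZON) + intercept
    return projected > TIMEOUT_S

if __name__ == "__main__":
    recent = [300, 320, 345, 360, 390, 410, 440, 470, 500, 530]  # creeping upward
    if forecast_breach(recent):
        print("warning: build durations are trending toward the CI timeout")
```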

Prioritizing security

With a majority of employees working from home this past year, and many potentially into the future, organizations are beginning to realize that a secure software supply chain is no longer optional: it’s a necessity. And this security cannot simply be added on as an afterthought. Rather, security must be injected into every layer as secure code, ensuring that any vulnerabilities are quickly detected and mitigated.

DevOps engineers must adapt and change the way they write software, ensuring that it is secure not only as it is written but also as it is deployed. Some ways to begin prioritizing security for DevOps include the following (a toy example of one such check appears after this list):

  • Understanding security goals
  • Having proper cloud vulnerability scanners
  • Securing code with standard tests
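
As a toy example of a standard check, the script below greps files for obvious hard-coded credentials and fails the build if any are found. The patterns are illustrative and far from exhaustive (the first matches the well-known shape of an AWS access key ID); real teams layer dedicated scanners such as gitleaks or truffleHog on top of dependency and cloud vulnerability scanning:

```python
"""Toy CI security check: fail the build on obvious hard-coded secrets."""
import re
import sys
from pathlib import Path

SUSPICIOUS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(password|secret|api_key)\s*=\s*['\"][^'\"]+['\"]"),
]

def scan(paths):
    hits = 0
    for path in paths:
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if any(p.search(line) for p in SUSPICIOUS):
                print(f"{path}:{lineno}: possible hard-coded secret")
                hits += 1
    return hits

if __name__ == "__main__":
    files = [Path(p) for p in sys.argv[1:]]
    sys.exit(1 if scan(files) else 0)  # a nonzero exit code fails the CI step
```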

(Explore DevSecOps, an emerging practice that embraces security.)

SRE & DevOps

According to findings from Google Cloud’s 2021 Accelerate State of DevOps Report, site reliability engineering (SRE) and DevOps will continue to grow more complementary in the years to come, with certain SRE techniques, like service-level indicators, providing practices that can enhance the objectives of the DevOps team.

“Teams that prioritize both delivery and operational excellence report the highest organizational performance,” said Dustin Smith, research lead for Google Cloud’s DORA team.

Evidence from the survey indicates that teams who excel at modern operational practices are 1.4 times more likely to report greater software delivery and operational performance compared to those who are less mature with operational practices. These teams are also 1.8 times more likely to report better business outcomes.

(Learn more about SRE.) 

Hybrid by design

While some organizations have started migrating employees back into the office this past year, many have continued with a hybrid model, with workers either fully remote or at least having the option to continue working from home. And one thing that’s becoming apparent is that many employees want the choice. In fact, IDC predicts that by 2023, 75% of G2000 companies will have created some type of hybrid structure.

“There was always this perception about, ‘Okay, the remote work, it’s a temporary thing. When the crisis is over, people will go back to the office’,” said Rick Villars, IDC’s Group Vice President for Worldwide Research.

“What we saw was that really the companies who were accelerating and doing well were ones who are inclined to completely change their model and recognize that giving their employees [the ability] to work anywhere and in any environment, and work together anywhere as equals, as opposed to second-class citizens and first-class citizens, was going to be a key part of succeeding in the next phase.”

DevOps innovation continues

No matter what the future brings for workers and organizations, DevOps will continue to evolve and pivot, as it always does. Businesses will have an opportunity to use the current challenges as ways to push their limits, adopting innovative technologies and trusting their skilled workers. By embracing these top DevOps trends, professionals can ensure that DevOps remains in the spotlight for years to come.

BMC supports DevOps processes

At BMC, we encourage DevOps across the enterprise. See how our DevOps tools and solutions help your organization succeed.

Change risk management

In IT environments, broad organizational risk is a given. Making the most of DevOps requires reducing that risk.

Built-in artificial intelligence for IT operations (AIOps) and service management (AISM) capabilities help to manage risk without adding work to DevOps teams, so it’s easier to:

  • Handle governance and hardening
  • Automate change
  • Quickly assess risk

Mainframe DevOps

Break the silo and include the mainframe environment in your DevOps ecosystem. Building a mainframe-inclusive DevOps toolchain enables agile development and testing of critical applications so you can deliver innovations faster.

Integrated modern mainframe tools work across an array of cross-platform tools, empowering developers of every stripe to perform and improve the processes necessary to fulfill each phase of the DevOps lifecycle:

  • Shift-left automated testing
  • Speed IBM® Db2® database changes
  • Address security earlier in the dev process

Dependency mapping

A change to one application or system can ripple through the ecosystem and impact customer experience.

As agility increases and changes are made quickly, it’s critical to know all your assets—and how they’re used. Enlist dynamic service modeling to automatically map all application and infrastructure dependencies so you can:

  • Assess the impact of change
  • Ensure an optimal customer experience
  • Support regulatory compliance

Workflow & production automation

The handoff of code from development to production can be a major stumbling block in the DevOps workflow. Application workflow orchestration can help by augmenting the traditional CI/CD toolchain.

A Jobs-as-Code approach with the CI/CD toolchain makes it easier to version, test, and maintain workflows so your teams can deliver better apps, faster (a generic sketch follows the list below).

  • Enable shift-left best practices
  • Give developers a familiar work environment
  • Reduce rework
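
To illustrate the general idea (and not any vendor’s actual job format), here is a generic sketch of jobs defined as code: the job definitions live in the repository as data, so they can be versioned, reviewed, and unit-tested in CI like any other code. Every name and field in the schema below is invented for the example:

```python
"""Generic Jobs-as-Code sketch: workflow jobs as versionable, testable data."""
JOBS = [
    {"name": "extract_orders", "command": "python extract.py", "depends_on": []},
    {"name": "load_warehouse", "command": "python load.py", "depends_on": ["extract_orders"]},
]

def validate(jobs):
    """A test the CI pipeline can run before any job definition is deployed."""
    names = {job["name"] for job in jobs}
    assert len(names) == len(jobs), "duplicate job names"
    for job in jobs:
        for dep in job["depends_on"]:
            assert dep in names, f"{job['name']} depends on unknown job {dep!r}"

if __name__ == "__main__":
    validate(JOBS)
    print("job definitions are valid")
```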

DevOps Values & Principles https://www.bmc.com/blogs/devops-values-principles/ Wed, 13 Oct 2021 00:00:23 +0000

The IT industry is undergoing a major paradigm shift in the software development process. Traditional software development methods have become obsolete thanks to:

  • Increasing customer demands
  • Rapidly evolving technologies
  • Growing complexity
  • Increasing security requirements

The shift towards the software as a service (SaaS) model and heavy dependence on cloud-based technologies have also contributed to the need for more agile software development processes. DevOps has become the solution for powering fast-paced software development while providing the necessary flexibility.

(This article is part of our DevOps Guide. Use the right-hand menu to navigate.)

What is DevOps?

DevOps is the practice of combining people, processes, and technologies to build higher-quality software rapidly. The DevOps model combines the developers (Dev) and the operations (Ops) team rather than having them work as separate entities, helping the two act as a single unit that manages the entire application lifecycle.

DevOps has further evolved to include the security team, creating DevSecOps, which integrates security into all aspects of the software development lifecycle. All of this leads to a more collaborative culture across the organization.

Additionally, DevOps has paved the way for introducing automation to all aspects of the SDLC, from application builds to testing to deployment. With automation as a core tenet, DevOps aims to provide the necessary technology stack to create quality software efficiently while reducing errors.

Continuous integration and continuous delivery (CI/CD) tools are also a core part of the DevOps tool stack: they act as the backbone of any DevOps process by providing a platform to integrate all the necessary tools and build automated processes.

Adopting DevOps helps organizations respond well to evolving market conditions and customer requirements while achieving their business goals.

Core values of DevOps

How DevOps is used can differ from one implementation to another; there is no concrete value set that applies to every DevOps implementation. However, some core values apply to DevOps regardless of how it is implemented.

Collaboration & communication

When considering the universal core values of DevOps, the primary value is collaboration.

DevOps focuses on bridging the gap between different teams and creating a collaborative environment where all teams work together to benefit the product. This leads to more communication between team members, allowing them to clearly understand each other’s roles and responsibilities and how their work impacts other team members.

Transparency, innovation, freedom

Open communication introduces more transparency within the organization, including into stakeholder decisions, team activities, and other matters that directly affect the end product. This transparency eliminates ambiguity and allows organizations to move forward with clear goals in mind.

The freedom and flexibility offered by DevOps let team members experiment freely, leading to more innovation. Encouraging innovation plays a major role in the success of DevOps: with the freedom to innovate and experiment, team members can try out new technologies, methods, and tools, which can then be implemented to improve the SDLC or the product itself.

CALMS framework

All of the values mentioned above should be applied as a fundamental cultural shift within the organization to get the most out of DevOps and implement proper DevOps practices. The CALMS framework is one way to evaluate the progress an organization is making in adopting DevOps.

The framework evaluates the impact of DevOps on the business process across the following aspects:

  • Culture. Embrace a culture of shared responsibility that includes all the stakeholders of the organization.
  • Automation. Adopt automation in every possible aspect of the SDLC.
  • Lean. Reduce software development waste through efficient resource usage and best practices that create efficient processes.
  • Measurement. Monitor every aspect of the SDLC, gather data, and evaluate it to constantly improve the DevOps process.
  • Sharing. Communicate constantly between teams to create a more collaborative environment.

By evaluating these aspects, organizations can better understand their DevOps process and remedy any deficiencies so it aligns with their business goals and core DevOps values.

Principles of DevOps

Adopting DevOps can be a complex process, especially if you are moving from a more traditional software development model. Organizations can properly adopt or migrate to DevOps by focusing on the following DevOps principles.

Customer-centric actions

Development should target customer requirements, with the primary goal of creating quality software faster. Customer feedback is easy to obtain through the shorter feedback loops that the continuous, incremental development process facilitates.

That feedback can then be used to make the necessary improvements to the software or to introduce new features that meet customer requirements. By following a customer-centric approach, development can be planned to meet exact customer demands.

End-to-end workflows

DevOps is a continuous process that encompasses all aspects of the SDLC and does not stop after the software is delivered; software is continuously developed with regular releases.

For this reason, it is important to consider the overall development process when creating workflows rather than any individual component. Workflows that account for all aspects of the SDLC leave fewer chances for unexpected incidents that could affect the development process.

Shared responsibility

In traditional software development, responsibility for the software was divided among different teams.

Developers were responsible for development, while the operations team was responsible for deployments and maintenance. There was no shared responsibility treating the product as a single entity owned by the whole organization.

In contrast, DevOps environments bring all these teams together and share responsibility for the software among all stakeholders. This allows delivery teams to work under a single directive, developing and maintaining the software collaboratively.

Continuous improvement

DevOps is constantly evolving. Therefore, organizations should continuously evaluate both the DevOps process and the software product even after creating a solid DevOps pipeline, improving them to reduce waste, manage costs, and optimize performance.

This continuous improvement enables organizations to adapt easily to changing market conditions without undergoing major cultural or technical shifts. Coupled with the freedom DevOps offers to experiment and innovate, it lets delivery teams focus on improving products organically without making improvement another mundane task.

Automate all possibilities

Automation is an integral part of any successful DevOps process. It saves time and resources while organically increasing the efficiency of the SDLC. Automation should not be limited to repetitive manual tasks; it should cover any task in the SDLC that can be automated, from development to monitoring and support.

Even so, do not automate for automation’s sake: any task that offers no tangible benefit, or that would create additional work, should be left out of the automation process.

(Compare automation & orchestration.)

Embrace failure

There will be issues along the way in any SDLC. To adopt DevOps properly, however, failure should be treated as a normal part of the development process. This requires an attitude change: failures become learning opportunities that mitigate future issues and improve the product. One of the major barriers to innovation is the fear of failure.

By removing that stigma, team members will be more willing to take risks and experiment, which ultimately leads to new innovations and improvements in the end product.

What DevOps is—and isn’t

DevOps is not simply adding CI/CD and automation to a software development lifecycle; it is a fundamental shift in how software is developed and managed as a whole.

By adhering to the values and principles above, any organization can create a robust DevOps process that delivers software faster. Doing so will also improve overall software quality, with fewer errors, making the software flexible enough to meet ever-changing customer and market demands.
