Spencer Hallman – BMC Software | Blogs

Top Continuous Improvement Metrics DevOps Teams Should Care About
https://s7280.pcdn.co/mainframe-devops-metrics-continuous-improvement/ | Wed, 22 Nov 2023

Aligning software delivery teams around a single source of truth not only enables them to do their jobs better, it also helps them visualize what needs improving. DevOps leaders need such insights to quickly unlock value now and understand the tipping point where technical debt begins to create diminishing returns. Continuous improvement is based on the idea that what doesn’t get measured doesn’t get improved, but some metrics matter more than others, and too many can lead to analysis paralysis. So, what is the right mix to move the needle toward value creation?

One compelling metric from the 2023 BMC Mainframe Survey is that mainframe workloads have grown 24 percent over the past five years. This comes from mainframe developers armed with modern tools and capabilities working towards breaking down silos and focusing on continuous improvement. And that’s helping to maximize mainframe workloads, optimize costs, reduce risks, and generate high-impact value streams. Enterprise software delivery teams that analyze core DevOps key performance indicators (KPIs) can quantify the payoffs of their digital transformation investments, and get better, faster.

So, what are the core areas that DevOps leaders can focus on to drive quick value today and stay the course to achieve modernization outcomes tomorrow?

The DevOps DORA 4 metrics: the building blocks of quality

Introduced by the DevOps Research and Assessment (DORA) team, the DevOps DORA 4 metrics help DevOps leaders accelerate software delivery and increase digital agility through these four areas:

  • Deployment frequency—how often an organization successfully releases to production
  • Lead time for changes—the amount of time it takes a commit to get into production
  • Change failure rate—the percentage of deployments causing a failure in production
  • Time to restore service—how long it takes an organization to recover from a failure in production

These data points are extracted from across DevOps toolchains to expose insights around each phase of the DevOps delivery cycle. Smaller companies running workloads on both mainframe and cloud may have an easier time visualizing the DevOps DORA 4. But enterprises with a vast portfolio of back- and front-office software applications to manage are often more complex and have greater efficiencies to gain by optimizing around these KPIs.
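As a sketch of how these four data points might be derived, here is a minimal Python example that computes the DORA 4 from deployment and incident records. The record shapes and all figures here are illustrative assumptions, not the output of any specific tool:

```python
from datetime import datetime
from statistics import mean

# Hypothetical deployment records: (commit_time, deploy_time, caused_failure)
deploys = [
    (datetime(2023, 1, 2, 9), datetime(2023, 1, 3, 17), False),
    (datetime(2023, 1, 9, 11), datetime(2023, 1, 11, 10), True),
    (datetime(2023, 1, 16, 8), datetime(2023, 1, 17, 9), False),
    (datetime(2023, 1, 23, 14), datetime(2023, 1, 25, 16), False),
]
# Hypothetical production incidents: (start, restored)
incidents = [(datetime(2023, 1, 11, 10), datetime(2023, 1, 11, 14))]

period_days = 28

# Deployment frequency: successful releases to production per week
deployment_frequency = len(deploys) / (period_days / 7)

# Lead time for changes: mean commit-to-production time, in hours
lead_time_hours = mean(
    (deploy - commit).total_seconds() / 3600 for commit, deploy, _ in deploys
)

# Change failure rate: share of deployments causing a production failure
change_failure_rate = sum(failed for _, _, failed in deploys) / len(deploys)

# Time to restore service: mean incident duration, in hours
mttr_hours = mean(
    (restored - start).total_seconds() / 3600 for start, restored in incidents
)

print(deployment_frequency, round(lead_time_hours, 1),
      change_failure_rate, mttr_hours)
```

In practice these inputs would be parsed from SCM, CI/CD, and ITSM tooling rather than hard-coded, but the arithmetic is the same.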

The trifecta of continuous improvement

Velocity

Velocity is the right balance between speed and accuracy. It exists in harmony with, not at the expense of, quality and efficiency. Velocity data is mined from code commits, remediations, deployments, downtime, and burndown rates derived from source code management (SCM), testing, integrated development environments (IDEs), IT service management (ITSM), and development tools like the BMC AMI DevX tool suite.

Continuously benchmarking progress is vital to achieve results and make informed decisions. Velocity KPIs include development lifecycle time and change failure rate, as well as:

  • Deployment frequency—how often is code deployed to production
  • Mean time from checkout to production—how much time the entire lifecycle takes, from when code is checked out by the developer to when it is deployed to production

Quality

Swift delivery means nothing if the product does not meet quality standards. By comparing trends in hot fixes and rollbacks, delivery teams can rationalize change failure rates. Whether failure rates are rising or declining, defect density shows how many bugs are escaping into production environments in near-real time. A decreasing defect density indicates improved code quality and fewer post-production issues.

Quality KPIs:

  • Escaped bug ratio—the ratio of bugs that occur in production versus all environments
  • Change failure rate—the percentage of production deployments that result in failure
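Both quality KPIs reduce to simple ratios. The sketch below uses made-up abend and deployment counts; the environment names and figures are assumptions for illustration:

```python
# Hypothetical abend counts per environment; "prod" abends escaped testing
abends_by_env = {"dev": 40, "test": 25, "prod": 5}

# Escaped bug ratio: bugs occurring in production vs. all environments
escaped_bug_ratio = abends_by_env["prod"] / sum(abends_by_env.values())

# Change failure rate: failed production deployments vs. all deployments
deployments, failed_deployments = 50, 4
change_failure_rate = failed_deployments / deployments

print(f"escaped bug ratio: {escaped_bug_ratio:.1%}")      # ~7.1%
print(f"change failure rate: {change_failure_rate:.1%}")  # 8.0%
```

Tracking both ratios over time, rather than as point-in-time values, is what reveals whether quality is actually trending in the right direction.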

Efficiency

Shift-left development approaches test sooner and more often, leaving developers with more data to analyze more frequently. Efficiency metrics examine defect ratios, developer productivity, uptime, product usage, and cost optimization factors over time. This helps chart progress toward shipping faster, with greater frequency and fewer bugs. Automated testing can greatly improve efficiency metrics, and for organizations still doing manual testing, efficiency measures can substantially reduce costs and risks to help increase competitive posture.

Efficiency KPIs:

  • Lead time for change—the time required to deploy new releases after the developer has implemented changes to the code
  • Innovation percentage—the amount of time that developers spend on activity that results in new functionality being deployed to production versus the amount of time they spend on bug fixes
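The innovation percentage in particular is a straightforward split of developer time. A minimal sketch, with purely illustrative hours:

```python
# Hypothetical hours a team logged in one sprint (illustrative figures)
new_function_hours = 120  # work that ships new functionality to production
bug_fix_hours = 40        # work spent on bug fixes

# Innovation percentage: share of developer time producing new functionality
innovation_percentage = new_function_hours / (new_function_hours + bug_fix_hours)
print(f"innovation: {innovation_percentage:.0%}")  # 75%
```

A rising innovation percentage suggests that less developer capacity is being consumed by rework and escaped defects.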

Aligning around a common data-driven vision

Mainframe teams and data can’t exist in silos. Democratizing insights around a common source of truth empowers the entire software delivery team—from leadership to developers—with the insights to make the right decisions. While platforms are fundamentally different, DevOps teams should be aligned around common KPIs, tools, and workflows. This prioritizes the mainframe’s role in unlocking digital transformation goals, even in cautionary times.

Measuring testing itself

Some organizations may still be using old tests to monitor performance on refactored applications. Unfortunately, it’s not sufficient to simply track incremental improvements or declines whenever testing takes too long or delivers inconclusive results. Long response times have critical impacts on the customer experience, so it is vital to conduct performance testing at the system level to evaluate response times, CPU and memory utilization, and I/O rates. When a single minute of delay can result in the loss of a customer, it is mandatory to track trends and spot problems as they arise, before they escalate into bigger ones.

The future of mainframe monitoring is in AI/ML

If developers are manually reviewing log files, code commits, and test results, then they are not writing code. Developers are happiest when they’re writing code. Artificial intelligence and machine learning (AI/ML) offer new and easier ways to enhance and simplify continuous improvement. In a recent podcast, BMC Vice President of Research and Development Dave Jeffries commented that natural language processing (NLP) helps developers formulate the right questions, and get to the right answers faster. By analyzing patterns to determine what normal looks like, predictive analytics tools like BMC AMI zAdviser help teams align around the right corrective actions and understand what good looks like. While “trust but verify” may still be the golden rule of AI today, developers can achieve success guided by AI/ML-led continuous software delivery insights.

Learn more about continuous improvement

If you want to dig deeper into continuous improvement on the mainframe, watch the webinar, “Driving DevOps and AIOps Continuous Improvement on Mainframe,” to learn more about how AI/ML is enabling faster DevOps through cross-team analytics.

Continuously Improve with DORA Metrics for Mainframe DevOps
https://www.bmc.com/blogs/dora-metrics-for-mainframe-devops/ | Wed, 29 Jun 2022

Today’s demanding business environment requires that mainframe houses adopt the same DevOps practices used for distributed platforms. Measuring the key performance indicators (KPIs) of the mainframe helps ensure that the platform is active and reliable at all times, which is vital to the organizations that rely on it to process millions of transactions a day. This has also created a need for KPIs in DevOps—regardless of the platform—grouped into three categories:

  • Velocity: understanding how long it takes to go from development to production and to each stage along the way.
  • Quality: ensuring consistently good quality down the pipeline.
  • Efficiency: comparing the effort expended against the work being done.

To measure these KPIs, the DevOps Research and Assessment (DORA) team came up with four key DevOps metrics, now called the “DORA metrics.”

What are DORA metrics and why are they so important?

These four metrics are essential indicators of progress and improvement in:

  • Lead time for changes: how long it takes for your pipeline to deliver code once a developer has checked it back in. Not tracking this delays getting innovations to market.
  • Deployment frequency: how often you deploy to production, delivering the innovation the business demands. The goal is to deliver small batches of work that improve quality and put innovation more quickly into the hands of your customers.
  • Change failure rate: the percentage of code changes pushed into production that fail and/or must be rolled back. This metric can be used in conjunction with the prior metrics to understand where the sweet spot is for delivering code. Delivering code too fast or in large batches may cause this metric to rise.
  • Mean time to recover (MTTR): tracks how long it takes to fix problems that occur in production. This is tied to the previous metric; a higher change failure rate may cause this one to rise, as well.

As a company improves, it can deploy more frequently while keeping the change failure rate the same (see Fast, Good, or Cheap—Get All Three with Continuous Improvement by Mark Schettenhelm).

Maturity and elite status

Based on deployment frequency, organizations are also rated at low, medium, high, and elite levels of maturity.

  • Low deploys between once a month and once every six months, typical in mainframe.
  • Medium deploys between once a week and once a month.
  • High deploys between once a day and once a week.
  • Elite organizations like Google, Facebook, and Netflix deploy multiple times a day. They expect their teams to push code into production on day one, and in terms of MTTR, can fix a problem in less than an hour.
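These bands can be expressed as a simple threshold function. The cutoffs below translate the cadences above into deployments per year; the exact boundary handling is an assumption, since the bands are described as ranges rather than hard cutoffs:

```python
def dora_maturity(deploys_per_year: float) -> str:
    """Map annual deployment frequency to a DORA maturity band."""
    if deploys_per_year > 365:   # multiple deploys per day
        return "elite"
    if deploys_per_year >= 52:   # between once a day and once a week
        return "high"
    if deploys_per_year >= 12:   # between once a week and once a month
        return "medium"
    return "low"                 # once a month to once every six months

print(dora_maturity(4))     # quarterly, common in mainframe shops -> "low"
print(dora_maturity(1500))  # several times a day -> "elite"
```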

DORA metrics for the mainframe

Mainframe shops must stay up to date, and using DORA metrics is an efficient, standardized way to measure that modernization and get to the next level of maturity. Without DORA metrics, it is difficult to know whether you are as efficient as you can be, or to benchmark against industry peers.

The four metrics are interconnected like cylinders in an engine. Organizations must remember that despite the attention given to cloud and distributed technology, the mainframe is probably the most important server they have.

How do you capture DORA metrics for the mainframe?

Mainframe shops can capture DORA metrics the same way distributed environments can: through DevOps tooling and/or system logs, which are then parsed into the four measurements. This should start as early as possible. We’ve made this easier for teams by offering BMC AMI zAdviser dashboards, which eliminate the need to plot and consolidate data and provide immediate perspective on your industry positioning.

How BMC AMI DevX tools blend with DORA metrics

In addition to zAdviser, additional BMC AMI DevX tools can help provide a complete picture of your metrics.

BMC AMI DevX Code Pipeline (formerly ISPW) lays the groundwork, providing the core source code management (SCM) function and gathering metrics as code is edited, tested, and prepared for deployment. ISPW is at the core of developer activities, allowing people to check code out, edit it, and check it back in. ISPW then compiles the code when it is ready. If, during testing, an error is found that forces a re-edit and another compile, the code passes through ISPW again until it is promoted and approved for deployment. Throughout this cycle, metrics are captured from checkout to deployment to production.

zAdviser is a software-as-a-service (SaaS) solution that simplifies the process of gathering metrics by making data easier to review—which also helps bring organizations closer to elite level. zAdviser captures data on how people are using BMC AMI DevX mainframe tools and applications such as ISPW and BMC AMI DevX Abend-AID. This parallels the practice common in the distributed world, largely through the use of Git. Customers thinking of migrating to Git and away from ISPW should be aware that ISPW allows a company to have some teams using Git with ISPW while others continue to use ISPW alone.

Abend-AID automatically detects and diagnoses problems across multiple environments, addressing issues the first time they occur. In the context of DORA metrics, Abend-AID provides supporting data and helps seek out root causes. It is possible to obtain DORA metrics without Abend-AID input, but you can’t improve them without it.

BMC AMI DevX is the only solution that can take data in from SCMs and allow customers to see how they are developing their mainframe applications today, which tools they are using, the pieces of code their developers are working on, and how that code is going through the lifecycle.

DORA metrics and the four KPIs help management measure and understand time to delivery and the performance of their development teams, while the BMC AMI zAdviser KPI Dashboard for DORA Metrics allows them to leverage that data to continuously improve their DevOps efforts.

To learn more about tracking DORA metrics on the mainframe, download our eBook, DORA Metrics for Mainframe DevOps.

New DevOps KPI Dashboards in zAdviser Drive Continuous Improvement
https://www.bmc.com/blogs/mainframe-devops-kpis-continuous-improvement/ | Tue, 04 Jan 2022

The concept of continuous improvement is central to DevOps, challenging organizations to constantly strive to utilize the best practices and tools available to satisfy and delight customers who demand high-quality, innovative solutions delivered at the speed of “now.” To maintain a continuous pace of improvement, though, organizations must know where they stand at each step of the DevOps journey, continuously measuring themselves and making data-based decisions on where to go next.

BMC AMI zAdviser​ provides this insight, continuously collecting data and applying machine learning to generate key performance indicators (KPIs) that give development organizations the data-driven insights required to measure their own progress and improve the quality, velocity, and efficiency of their software development and delivery.

Our January 2022 release includes six new KPI dashboards within BMC AMI zAdviser​ that:

  • Provide visibility into the progress of an organization’s DevOps transformation
  • Pinpoint trends causing software delivery bottlenecks and quality issues
  • Offer actionable insights that fuel the organization’s continuous improvement

The new dashboards provide insight in the following areas:

Development Productivity

This BMC AMI DevX Code Pipeline​ dashboard measures how long it takes to move code through the software delivery lifecycle (SDLC) from the developer checking out the code to promotion to production.


Automated Testing

Automated testing solutions not only help bring new products and services to market faster, they also improve overall code quality—but only if they are being utilized. The BMC AMI DevX Total Test dashboard promotes the adoption of agile mainframe DevOps, giving managers insight into how many developers are utilizing automated testing, how many tests are executed on a daily basis, and whether those tests are passing or failing.

Quality

This dashboard promotes application quality by tracking the percentage of bugs escaping into production through the Escaped Abend Ratio.

DevOps Adoption

Another way to track DevOps adoption, this dashboard shows how many developers have adopted BMC AMI DevX Workbench for Eclipse’s modern integrated development environment (IDE) and how many are still developing with the ISPF “green screen.”

System Automation Visibility

The BMC AMI Ops Automation dashboard provides IT Operations managers with visibility into the performance and efficiency of their automated processes. It displays the total number of mainframe events and how the events were handled by BMC AMI Ops automation. Some of the key metrics include:

  • Visibility into the number of rules vs. EXECs that were run
  • How many alerts were generated

Events are also broken out by event type to provide a better understanding of what is driving automation. The dashboard can be filtered by time as well as by LPAR and BMC AMI Ops instance to get a more detailed understanding of automation behavior.


Message Queue Visibility

The BMC AMI Message Advisor for IMS™ dashboard makes it easy to identify trends in IMS message queues and avoid queue error conditions and overflows. The dashboard shows overall BMC AMI Message Advisor for IMS events with the ability to drill in on Queue Protection Facility (QPF) actions where the product automatically responded to prevent an IMS outage.

Users also get visibility into message requeue actions with the ability to know how many happened, how many messages were successfully requeued, and how many were skipped.

New zAdviser dashboards advance DevOps

With enhanced visibility into the productivity of development teams, the quality of mainframe code, IMS message queues trends, and the overall adoption rates and performance of modern tooling and automation, these new zAdviser dashboards help organizations advance their DevOps initiatives with actionable insights. We can’t think of a better way to ensure continuous improvement than by continuously improving DevOps tools and processes.

Improving Quality and Shift-Left Testing with BMC AMI zAdviser
https://www.bmc.com/blogs/zadviser-shift-left-testing/ | Wed, 13 Oct 2021

Customer satisfaction is a top concern for all levels of an organization, from the service desk to developers, and from management to the executive suite. An organization must provide services that not only fulfill its customer needs, but that are also reliable and as free from bugs as possible. Even the smallest bug can create dissatisfaction and negatively impact a brand’s reputation.

Development teams and managers can gain insight into mainframe software quality with BMC AMI zAdviser. Offered free to customers with current maintenance, this software-as-a-service (SaaS) solution captures data from BMC AMI DevX products and uses machine learning (ML) to develop key performance indicators (KPIs), giving users advanced metrics to help improve their organization’s mainframe DevOps processes and outcomes. These metrics can then be compared to benchmarks determined by blending anonymized data from all zAdviser customers to show how development teams are performing relative to the BMC AMI DevX ecosystem as a whole. In short, zAdviser gives development managers a view of what “good” looks like.

Two new KPIs have been introduced as part of BMC’s October release. Part of zAdviser’s Quality Dashboard, these KPIs help determine where and when abends are occurring, giving valuable insight into development quality and aiding shift-left testing.

Escaped Abend Ratio

Abends that escape to production could be customer-facing, creating dissatisfaction and damaging an organization’s reputation. The Escaped Abend Ratio KPI can be thought of as an escaped bug ratio. It shows the percentage of unique abends occurring in production compared to abends in all of an organization’s logical partitions (LPARs).

Ideally, all abends would be caught in the development testing environment, but that is never the case. This metric enables developers and management to see how many abends escaped into production and then adjust test cases accordingly to catch them in the future. The goal is to see the percentage of abends in production decrease. The same or higher number of abends in development indicates that testing is catching what it should, leading to a higher quality of product in production.

Median Time to Detect Since Compile

When a program abends, zAdviser collects a variety of data, including the amount of time since the program’s last compile. The Median Time to Detect Since Compile calculates the median (50th percentile) of this time for all abends. In a development environment, a low number is favorable—as the old adage goes, “You want to fail fast.” For production environments, the opposite is desirable; longer timeframes between compiles and abends might indicate that the program abended due to a unique confluence of events, or a unique use case that wasn’t taken into consideration during testing.
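The calculation itself is just a median over the observed compile-to-abend intervals. A minimal sketch, with invented hours for illustration:

```python
from statistics import median

# Hypothetical hours between each abending program's last compile and its abend
hours_since_compile = [2, 6, 30, 95, 400]

# Median (50th percentile) compile-to-abend interval: low values in development
# suggest fast failure; high values in production suggest rare edge cases
median_time_to_detect = median(hours_since_compile)
print(median_time_to_detect)  # 30
```

Using the median rather than the mean keeps one long-lived outlier (like the 400-hour abend above) from skewing the KPI.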

Early Warning

With these new KPIs, the Quality Dashboard becomes a sort of early warning system for developers and development management. Teams can gauge the quality of code as it is developed and deployed rather than waiting for a complete degradation of quality or a complaint from a customer. When development teams see abends increasing in production, they can refer to zAdviser’s BMC AMI DevX Total Test dashboard to see what tests were executed, when, and how they failed, then adjust test cases to make sure bugs are caught before they leak into production.

In addition to using the Quality Dashboard of zAdviser to see what “good” looks like, managers can see what “quality” looks like, as well, with new KPIs that help organizations improve the quality and coverage of shift-left automated testing and the overall quality of software. Quality products, after all, lead to happy customers.

Speed Mainframe Software Delivery Without Compromising Quality
https://www.bmc.com/blogs/speed-mainframe-delivery-quality-shift-left/ | Tue, 05 Oct 2021

In today’s digital economy, there is more pressure than ever to deliver new services and applications as quickly as possible. To keep customers satisfied and keep pace with their competitors, organizations are quickly accelerating their software delivery lifecycles (SDLCs) with practices like agile development and DevOps. But this need for speed also introduces new risk. Releasing software faster isn’t good enough—the software must work as planned before being delivered to customers. Speed is good, but speed with quality is what satisfies customers.

Shift Left with Automation

“Shift left” testing simply means that testing is done more frequently and earlier in the SDLC. Testing code in smaller chunks and earlier in the development process makes it easier to find where bugs entered the code and fix them before they cause any further problems.

But even shift-left testing practices, if they’re done manually, can slow your time to market. Automated testing supercharges the shift left with more frequent testing (in some cases, code can be tested as it is written), saved test cases that reduce the time developers spend testing, and higher code quality. According to a Forrester Total Economic Impact™ study commissioned by BMC, organizations switching from manual to automated testing with BMC AMI DevX Total Test saw a return on investment (ROI) of up to 205 percent, along with 20 percent fewer bugs in production and a 90 percent time savings for developers.

While automated testing improves quality and saves developers time, it is important to use the right tools. According to a recent BMC-commissioned Forrester report on mainframe development tools, 62 percent of those surveyed found automated testing of mainframe applications to be a challenge, and a Dimensional Research survey of mainframe professionals found that only 29 percent of companies have the proper DevOps tools for debugging. Proper tooling is significant—nearly two-thirds of respondents to the Forrester survey estimated that modern development tools would increase the quality of their development by 23 percent.

Implementing a mainframe-inclusive DevOps toolchain leads to increases in development speed and quality. These benefits can be made even greater, though, with the adoption of shift-left processes. Our October 2021 release has several new enhancements to BMC AMI DevX solutions that will help your organization shift left and meet the speed and quality demands of your customers.

Create PL/I Non-Virtualized Test Cases

BMC AMI DevX Total Test now supports non-virtualized testing of code written in PL/I. This allows test cases to be shared between teams, enabling the creation of a common test repository that can be used by both quality assurance (QA) and development teams. By automating PL/I testing, these applications can be included in continuous integration/continuous delivery (CI/CD) pipelines, shortening the time required to develop and deliver them, and reducing the effort needed to maintain them. This new PL/I feature makes it easier to create more reusable test cases, helping developers shift left in the SDLC.

The Quality Dashboard

Two new key performance indicators (KPIs) in the BMC AMI zAdviser Quality Dashboard help shift-left efforts and increase code quality. Escaped Abend Ratio shows the percentage of unique abends occurring in production versus those occurring in development. This enables testing teams to see how many abends escape into production and adjust test cases to catch them while they’re still in development. Median Time to Detect Since Compile calculates the median time since the last compile for abending programs, giving teams a sense of whether abends are occurring because of common bugs (shorter median time) which better testing will help fix or because of more unique use cases (longer median time) which a developer may not have contemplated when developing the test cases.

These new KPIs give development managers high-level visibility into the quality of code being produced. By correlating abend percentages and test data from zAdviser’s Topaz for Total Test dashboard, they can determine how testing can be adjusted to reduce the number of abends in production.

More Shift-Left Improvements

By shifting left with performance testing, teams can identify potential problems before they have an impact on production environments. Performance architectural issues can be caught earlier, in development, where they are more easily resolved. Additionally, transitioning to 64-bit applications adds another factor that could affect performance. BMC AMI Strobe now supports performance testing of 64-bit applications, ensuring greater test coverage and better performing applications in production. Developers can use this process to test potential performance improvements before transitioning programs to 64-bit and confirm that expected improvements are being realized.

We continued to enhance the integration between Git and BMC AMI DevX Code Pipeline, which facilitates code reviews while keeping all source code in one location that can be accessed by ISPW for build and deploy and by Topaz for Total Test for automated testing.

The Need for Speed

Mainframe organizations across all industries feel the need for speed. Shift-left testing gives them an opportunity to increase development velocity by decreasing the time spent on testing while also providing greater assurances of quality code. With the enhancements included in our October 2021 release, BMC AMI DevX helps your organization optimize its DevOps toolchain with shift-left testing while meeting its need for speed without sacrificing quality.

FinServ Company Improves Financial Close with Control-M and ThruPut Manager
https://www.bmc.com/blogs/control-m-and-thruput-manager/ | Wed, 10 Mar 2021

When I joined IBM at the beginning of my career, I saw firsthand how reliable, available, scalable, and secure the mainframe was for customers. Over the years, I have watched as concepts like virtualization were pioneered on the mainframe and how mainframe technology has consistently evolved to enhance its importance to the enterprise. According to IBM, mainframes now manage up to 19 billion encrypted transactions a day. That’s why I’m not surprised when SHARE says that mainframes handle 90 percent of all credit card transactions, or when IDC says most large enterprises have mainframes that run mission-critical workloads.

As organizations work to be more agile, data-driven, and customer-centric in their journey to become an Autonomous Digital Enterprise, continued mainframe modernization will be critical, and companies must integrate their mainframes with the wider IT ecosystem.

BMC’s acquisition of Compuware last year is an example of how we are helping customers speed their modernization journey. Forrester said of the acquisition: “CIOs have realized their digital transformations get stuck if they don’t modernize their core systems, many of which run on mainframes… The bet of BMC and Compuware is to scale DevOps on the mainframe like any other platform. We like this move by BMC.” The acquisition provides mainframe developers a fully integrated DevOps toolchain that supports agile, high-quality mainframe application development.

We are already connecting innovative mainframe solutions that will help customers thrive now and in the future. A few weeks ago we announced an integration between Control-M, BMC’s leading application workflow orchestration solution, and ThruPut Manager, Compuware’s best-in-breed mainframe batch processing optimization solution. Here’s a look at how one customer is already using the integration to improve the management of its month-end close processes.

Automating the complexity out of financial close processes

Every company, regardless of industry, completes financial close processes to verify and adjust account balances and produce summary financial statements. These financial statements are critical to help executives make strategic data-driven decisions.

A large financial services company that offers banking, insurance, and investment services decided to use application workflow orchestration and mainframe batch management to improve its monthly financial close processes. To ensure timely delivery, its financial close processes required the flow of accounting and summarization jobs, triggered from different lines of business, to enter the system in the middle of the last business day of the month and to be executed at the optimal point in time.

This meant that financial close workflows had to be given preference over daily cycle jobs, which were accommodated once the monthly jobs completed. But the company also had to ensure that daily jobs were not delayed too long, or business service SLAs could be missed.

In the past, customers running Control-M for z/OS and ThruPut Manager needed to write substantial amounts of code to be able to align the prioritization logic of the two solutions. However, the recently announced integration changes all that.

Control-M for z/OS and ThruPut Manager

Control-M for z/OS simplifies the orchestration of mainframe application workflows. It helps customers define, schedule, manage, and monitor mainframe application workflows so business services are delivered on time, every time. It is part of a wider platform that enables end-to-end application workflow orchestration, mainframe to cloud.

Control-M’s orchestration planning is based on a rich set of data, including predefined date and time schedules, job durations, dependencies, priorities, SLA deadlines, and other logical requirements. It also includes historical run-time statistics, which are used to refine plans and improve SLAs over time.

With this built-in intelligence, Control-M for z/OS produces an accurate job prioritization that determines the job submission order. It submits jobs to the system in that order for execution, but it does not control their actual execution. That’s where ThruPut Manager enters the equation.

ThruPut Manager automates the processing of batch queues and determines the execution order of jobs, based on its own service levels, queue waiting times, resources, and CPU utilization. It constantly reprioritizes jobs in the queue and adjusts the load based on workload performance. As a result, ThruPut Manager delivers intelligent batch processing, optimal system loading and balance, higher service levels, maximized throughput and speed, and reduced MLC charges.
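ThruPut Manager’s real prioritization logic is proprietary, but the constant-reprioritization idea described above can be illustrated with a toy dispatcher. Everything below — the `Job` fields, the weights, and the `score` function — is invented for illustration and is not how the product is implemented:

```python
import time

# Conceptual sketch only: ThruPut Manager's real algorithms are proprietary.
# The fields, weights, and scoring below are invented for illustration.

class Job:
    def __init__(self, name, service_class, submitted_at):
        self.name = name
        self.service_class = service_class   # smaller number = more important
        self.submitted_at = submitted_at     # epoch seconds

def score(job, now, cpu_utilization):
    """Lower score runs sooner. Jobs gain priority as they wait (aging),
    and a loaded system penalizes low-importance work more strongly."""
    wait_seconds = now - job.submitted_at
    aging_credit = 0.1 * wait_seconds
    load_penalty = job.service_class * (1.0 + cpu_utilization)
    return load_penalty - aging_credit

def next_job(queue, cpu_utilization):
    """Re-score every queued job each dispatch cycle and pick the best,
    mimicking constant reprioritization rather than a fixed FIFO order."""
    now = time.time()
    best = min(queue, key=lambda j: score(j, now, cpu_utilization))
    queue.remove(best)
    return best
```

Because the whole queue is re-scored on every dispatch cycle, a long-waiting low-priority job can eventually overtake a newly submitted high-priority one — the kind of balancing act the product performs against real workload data.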

In summary, Control-M for z/OS manages the job submission order, based on scheduling insights, and ThruPut Manager controls the job execution order, based on real-time environment insights.

However, the customer still faced a critical challenge. Their month-end close jobs, submitted by Control-M for z/OS in high-priority order, were not being selected for execution with the same priority. ThruPut Manager can give precedence to other Control-M daily jobs, or even ad hoc or online workloads, based on its own prioritization criteria. But it has no visibility into the scheduling view and information such as when jobs need to start and complete to meet SLAs, or their average duration.

Better together

Instead of having to build a lot of cumbersome (and difficult to scale) code logic to check both ThruPut Manager and Control-M for z/OS for priorities, the products’ integration does this for the customer automatically. ThruPut Manager leverages the scheduling logic built into Control-M for z/OS to drive intelligent, real-time execution of jobs.

In addition to real-time environment load levels, ThruPut Manager now has visibility into SLA and business requirements, so it can prioritize workload execution most effectively, respecting business priorities and infrastructure constraints. It defers or advances the execution of workloads depending on real-time system load, resource availability, and CPU consumption, as well as scheduling needs and SLA impacts.

This helps the customer optimize workload performance based on business service levels and real-time resource utilization, ultimately:

  • Ensuring critical business services are delivered on time, every time
  • Providing executives accurate, timely monthly financial close data
  • Reducing costs through optimized mainframe performance
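The defer-or-run decision that SLA visibility enables can be sketched in Python. This is a minimal illustration, not the integration’s actual logic; the function name, thresholds, and return values are all invented:

```python
from datetime import datetime, timedelta

# Illustrative only: the real Control-M/ThruPut Manager integration exchanges
# scheduling data inside the products. The thresholds and return values here
# are invented to show the kind of decision SLA visibility enables.

def dispatch_decision(sla_deadline, avg_duration_min, cpu_utilization,
                      now=None, busy_threshold=0.85, defer_slack_min=30):
    """Defer, run, or escalate a job based on SLA slack and system load.

    Slack is how long the job can still wait and meet its SLA, given its
    historical average duration.
    """
    now = now or datetime.now()
    latest_start = sla_deadline - timedelta(minutes=avg_duration_min)
    slack_min = (latest_start - now).total_seconds() / 60
    if slack_min <= 0:
        return "run-now"    # SLA at risk: jump ahead of lower-priority work
    if cpu_utilization >= busy_threshold and slack_min > defer_slack_min:
        return "defer"      # system is loaded and the deadline is far away
    return "run"
```

The key design point is that neither input alone is enough: without the deadline and average duration (the scheduler’s data), a loaded system would defer everything; without the load data (the batch manager’s data), urgent and non-urgent jobs would compete blindly.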

What Control-M for z/OS and ThruPut Manager can do for you

The new integration between Control-M for z/OS and ThruPut Manager helps companies optimize mainframe performance by syncing business requirements with real-time resource utilization and system load data. The result? More efficient, cost-optimized batch workloads, better SLA performance, and improved resource utilization.

Want to learn more? Check out these great resources:

]]>
A Study in Mainframe Development: Working from Home in 2020 https://www.bmc.com/blogs/mainframe-development-from-home-2020/ Thu, 25 Feb 2021 07:38:53 +0000 https://www.bmc.com/blogs/?p=20250 The year 2020 will be remembered for many things and one of those is the largest exercise in employees working from home. With few exceptions across the world, most employers told their employees to start working from home in early March. Business Continuity plans were dusted off and implemented. Some plans may have been better than […]]]>

The year 2020 will be remembered for many things, and one of them is the largest exercise ever in employees working from home. With few exceptions across the world, most employers told their employees to start working from home in early March. Business continuity plans were dusted off and implemented. Some plans were better than others; some companies played it by ear, while others had very detailed plans.

Regardless, this was very disruptive, especially for companies that considered the colocation of a development team a strategic advantage prior to 2020. Teams that sit together in a physical office communicate in ways that are harder to replicate when everyone is sitting at home and trying to collaborate over the company’s chat application (e.g., Microsoft Teams, Skype, Slack). In the office, developers struggling with a particular problem or tool sometimes show body language that a more senior developer or manager can read, offering help without being asked. Hallway conversations, over-the-cube conversations, or just walking over to someone’s desk for advice do not happen as easily on Teams or Slack. This is not a problem unique to development; it affects all aspects of a company.

We’re one year into the crisis, and from the anecdotal evidence it looks like most companies have survived, and will survive, this period with little loss of productivity. At Compuware, a BMC Company, customers participate in a free service called zAdviser, which collects telemetry data from the development process when Compuware tools are used. A developer debugs a program, an abend occurs, code is promoted, code is checked out—all of these activities create data that makes its way to zAdviser, where customers can view it and learn how to become better at mainframe development.

This analysis was originally an exercise to see whether any fatigue was evident as employees spent more time at home. The zAdviser data from 2020 showed three distinct anomalies when compared to the same periods in 2019.

Below is the analysis of these three periods and how they compared to when developers started working from home.

Period 1: Compiles per Program

From April 2020 to June 2020, there were significantly more compiles per program than we saw in 2019. In the visualization below, the red line represents weekly compilations per program (defined as the number of compilations divided by the number of unique programs over a one-week period) and the blue area represents activity from 2020. For most of 2020, the metric matches 2019, with 2019 being slightly higher in the July to December timeframe. From May to mid-June, there was a significant increase in the number of compiles per program.
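zAdviser’s schema is not public, but the metric as defined above is straightforward to reproduce from any telemetry extract with one row per compile event. This sketch assumes a hypothetical table with `timestamp` and `program` columns:

```python
import pandas as pd

# Hypothetical telemetry extract: zAdviser's actual schema is not public.
# Assume one row per compile event, with columns `timestamp` and `program`.

def compiles_per_program_weekly(events: pd.DataFrame) -> pd.Series:
    """Number of compilations divided by number of unique programs,
    per calendar week (the metric described above)."""
    weekly = events.groupby(pd.Grouper(key="timestamp", freq="W"))
    return weekly["program"].count() / weekly["program"].nunique()
```

The batch-abends-per-program metric in Period 2 has exactly the same shape, with one row per abend event in place of compile events.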

What is going on here? Are developers adding display statements to the code, compiling, executing it, and checking the output to see what happened? There are times when you make one simple change to the code, compile, and promote to testing. But developers should be using Xpediter (a source code debugger) and Topaz Program Analysis to make sure they understand the application and the ramifications of a change. Do the developers feel confident using the tools, and if not, are they comfortable asking questions of those who do? Communication in an office where developers are colocated happens very easily: “Hey Joe, how do you set a watch on a variable again?” Picking up the phone or texting someone through Microsoft Teams may be more intimidating for some.

Period 2: Batch Abends per Program

This metric looks at how often a program is abending on a weekly basis. It is defined as the sum of batch abends divided by the number of unique programs.

For those not familiar with the mainframe term, an “abend” is an “abnormal end”: the program terminating due to some condition or exception. It may have tried to read a file that was not available, read a string into a numerically defined variable, or simply tried to divide by zero. Whatever happened is generally not good and often requires a person to investigate and get the program running again. That can mean regressing the code to a previous version, making sure the file being read is available, removing the offending records that are causing the problem, or something else. This elevation in batch abends per program continues from mid-July until October 1, when the metric returns to a curve similar to 2019’s.

Is this the result of what happened in Period 1, and of not using Xpediter and Topaz Program Analysis? Comparing periods 1 and 2, there appears to be a cause-and-effect relationship: ineffective development and testing processes may very well lead to a higher-than-normal number of abends. This is significant, especially compared to what 2019 looked like.

Period 3: Debug Sessions per User

This metric looks at how often a user is debugging a program. It is defined as the number of times programs are debugged divided by the number of unique users (sum of programs debugged / sum of unique users).

This is the metric and visualization that pull the story together. Starting in March 2020, the metric shows the amount of debugging falling compared to what was seen in 2019. This corresponds to the time when companies were forced to close their physical offices. Recall the first metric, discussed in Period 1, where the hypothesis was that Xpediter and Topaz Program Analysis were being used less. There is some recovery towards mid-March before the metric dives sharply starting in the third week.

Contrast that with the 2019 data, when the number of debug sessions per user increased. Starting in May, the metric begins to recover before dropping again in June, and it lags the 2019 data until mid-August. From mid-August through November, the metric significantly exceeds the 2019 activity. Most companies have system freezes for new code starting in the fall; is this the rush to beat that cutoff? Have companies also become better at communication between remote workers? These are questions that are difficult to answer without more analysis and interviews with those who supplied the data.
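The per-user normalization and year-over-year alignment described above can be sketched as follows, again assuming a hypothetical extract with one row per debug session:

```python
import pandas as pd

# Hypothetical extract, one row per debug session, columns `timestamp`
# and `user` (zAdviser's actual schema is not public).

def debug_sessions_per_user(events: pd.DataFrame) -> pd.DataFrame:
    """Weekly sessions per unique user, pivoted so each year is a column
    and rows line up on ISO week number for year-over-year comparison."""
    df = events.copy()
    iso = df["timestamp"].dt.isocalendar()
    df["year"] = iso["year"].astype(int)
    df["week"] = iso["week"].astype(int)
    grouped = df.groupby(["year", "week"])
    metric = grouped.size() / grouped["user"].nunique()
    return metric.unstack("year")   # rows: ISO week, columns: year
```

Aligning on ISO week number rather than calendar date is what makes a 2019-vs-2020 overlay like the one described here meaningful, since the same business rhythms (month-end, fall code freezes) recur on roughly the same weeks each year.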

Summary

While each of the individual periods is interesting on its own when compared to 2019, patterns emerge when they are looked at together. After developers started working from home in March 2020, they debugged less and started adding code to applications, compiling, and re-running the program to see if what they did worked. This is not the ideal approach to developing code. That code was ultimately deployed to production, and because of the lack of sufficient testing and debugging, there were significantly more abends during the mid-July to October 2020 timeframe (Period 2).

At this point, since plan A of compiling and running the program (Period 1) to see whether a new feature worked or a bug was fixed hadn’t succeeded, developers fell back to the tools they should have been using to test and fix the bugs. After six months of working from home, communication improved, asking for help became easier, managers became better at spotting developers who needed help, and the bugs introduced when the code was deployed were fixed prior to the end-of-year system freezes.

About the Data and Analysis

This data represents two years of data collected from the Compuware customers who participate in zAdviser. zAdviser is a free service for all customers that collects telemetry data from Compuware products; in addition, customers supply data from their SCM and ITSM toolsets. This data is used to form KPIs so customers can understand how well they are developing mainframe code. Machine learning algorithms analyze the data and produce prescriptive analytics that help developers improve through small nudges (for example, suggesting a function in one of the Compuware tools that a developer may not be using) so they can deliver code faster.

Different permutations of the data were studied, and the dataset was normalized by user and by program where appropriate. A great deal of time went into examining different metrics and data sets, looking for patterns and discerning what happened in 2020.

This represents over 1 million data points across all customers. The dataset excludes Compuware’s own data, which is also captured and stored in zAdviser; many of those datapoints come from internal testing of the tools, and adding them would skew the analysis.

This post originally appeared on LinkedIn.

]]>
Are You Optimizing Your Development? https://www.bmc.com/blogs/are-you-optimizing-your-development/ Thu, 16 Apr 2020 14:01:48 +0000 http://www.compuware.com/?p=47540 Overview: A recent McKinsey & Company research article examined modern software development. AMI DevX has encouraged many of these practices for years, including shift-left testing, Agile development, and mainframe DevOps. These efforts have been helped by measurement and analysis from AMI zAdviser​. In February 2020 McKinsey & Company published the research article, “A New Management […]]]>

Overview: A recent McKinsey & Company research article examined modern software development. AMI DevX has encouraged many of these practices for years, including shift-left testing, Agile development, and mainframe DevOps. These efforts have been helped by measurement and analysis from AMI zAdviser.

In February 2020, McKinsey & Company published the research article “A New Management Science for Technology Product Delivery.” This timely article looked at how companies have managed software delivery in the past and what the right practices might be today. I see parallels between what they espouse and what AMI DevX has been doing and discussing for the last several years.

There are several findings in this article worth repeating:

1. Adding more developers to a team can increase speed, but there is a point of diminishing returns. My boss likes to call it the “two-pizza rule”: if you have more developers than you can feed with two pizzas, you probably have too many on the team. McKinsey stated that the point of diminishing returns is around 15 developers.

2. It’s more expensive to fix software bugs later in the SDLC. I’ve heard different takes on this for most of my career. If you wait until later testing phases, or let a bug escape to production, it will be more time-consuming and costlier to fix than if you catch it during the initial phases of development. This is something I’ve been aware of since the early 1990s. You can pay the bill today, or you can wait and pay 10x the bill later.

3. Teams that utilize the Agile framework excel at delivery predictability. This, to me, is where the majority of mainframe development fails. They are stuck in waterfall—projects that are forecasted for 6-12 months take much longer or get canceled. Our mainframe customers who have adopted Agile see that their Agile mainframe teams can keep up with their distributed brethren, deliver quickly and on time, and in some cases exceed them. The idea of “two-speed IT,” where the mainframe practices waterfall and the distributed teams practice Agile, was a failure the minute someone put the idea on paper.

4. The last finding is about the co-location of a team. At BMC AMI DevX we feel it’s a competitive advantage that our Development and Product Management teams are co-located in our Detroit HQ. McKinsey found that co-location led to fewer bugs. They also noted that it might lead to longer project times but qualified their finding by stating it needs further study and their sample size may be small.

Navigating New Circumstances

Co-location of our development teams has worked exceptionally well for us—we just delivered our 22nd consecutive quarter of new enhancements and updates to classic offerings, which is unheard of in our industry. But like many others, we must work from home temporarily while maintaining our same level of productivity and throughput. To that end, our development managers have been relying heavily on AMI zAdviser.

AMI zAdviser captures the telemetry data from our products and, when combined with data from our Atlassian JIRA instance, helps us understand the velocity, efficiency, and quality of our development processes. Free to all maintenance-current AMI DevX customers, AMI zAdviser allows a user to track development KPIs and correlate the data with product usage. The insights gained from zAdviser help development managers identify what might be impeding DevOps processes so they can make small adjustments while thoughtfully nudging developers to continuously improve. We have found this data-driven approach to be incredibly helpful in enabling our development teams to increase their software delivery velocity, efficiency, and quality.

Continuous Measurement and Improvement Essential

This leads us to our changing world. AMI DevX continues to develop code and deliver on a regular quarterly cadence. After 22 consecutive quarters, we are much better at estimating what can be delivered in a quarter. Instead of promising to deliver every proposed enhancement in a quarter, we deliver what’s important for our customers. Using AMI zAdviser, we can see how long code spends in development, how long it spends in a testing phase, and what unit and system tests are being executed. This is an unfair advantage we have over our competitors, and since our maintenance-paying customers can participate for free, they in turn have an advantage over their competitors.

When we study the data from the time our developers started working out of their home offices to the present, we haven’t seen any drops in productivity. Maybe this is due to the “Hawthorne effect,” but I think it’s because we have a gritty group of developers who will always get the job done. Either way, AMI zAdviser is integral to our developers’ success as well as AMI DevX’s.

This post originally appeared on LinkedIn.

]]>