Mainframe Blog – BMC Software | Blogs

BMC AMI Support of IBM® z17™
https://s7280.pcdn.co/bmc-ami-support-ibm-z17/ | Wed, 18 Jun 2025

Future mainframe transformation will be built on artificial intelligence (AI). BMC has been preparing for this AI revolution for some time now: In July 2024, BMC mainframe Senior Vice President and General Manager John McKenny set forth a statement of direction for the BMC AMI Platform, bringing together AI, generative AI (GenAI), and cloud-based tooling to create a unified access point for all BMC AMI solutions. In October, we introduced BMC AMI Assistant, which leverages GenAI within BMC AMI solutions to bridge workforce knowledge gaps and facilitate data-driven decision making.

For example, within BMC AMI Code Insights, BMC AMI Assistant provides developers with quick, clear explanations of complex code. With a simple right-click, developers can see explanations of unfamiliar COBOL, PL/I, JCL, and Assembler code, then copy these explanations as code comments. This code explanation accelerates onboarding, reduces time spent on code analysis, and helps less-experienced team members confidently understand and update critical applications.

BMC AMI Assistant also enhances root cause analysis in BMC AMI Ops Insight with natural-language explanations and recommended next steps, guiding systems programmers and IT operations teams through issue resolution. This shortens incident response times and reduces downtime, directly improving service quality and operational efficiency.

In April 2025, we introduced a curated LLM library and bring-your-own-LLM (BYOLLM), empowering organizations to seamlessly integrate multiple AI models to tailor BMC AMI Assistant output to their specific use cases, policies, and security requirements. This gives organizations flexibility and control over how they deploy AI, ensuring that outputs are aligned with internal standards, regulatory requirements, and unique business priorities.

Hardware designed for AI

Given our dedication to simplifying mainframe management with AI, we are excited by the possibilities the new IBM® z17 brings. Announced in April and generally available on June 18, the IBM z17 was designed with AI in mind: it features the new IBM Telum® II processor with an on-chip AI accelerator, and the coming IBM Spyre Accelerator, expected later this year, will extend GenAI capabilities on the mainframe.

As with every new release on the IBM Z® platform, BMC is committed to ensuring full support of the IBM z17 throughout the BMC AMI portfolio. Whether you’re currently utilizing AI on the platform or planning to do so in the future, you can confidently transition to and take advantage of this new hardware with minimal disruption.

Faster, more powerful mainframe AI

One especially exciting aspect of the new IBM z17 is the upcoming Spyre Accelerator, a PCIe-attached card designed to provide increased compute power and efficiency to handle large-scale AI workloads and enable on-platform large language model (LLM) support. This complements BMC’s current and future GenAI capabilities in a number of ways.

This increased power will enable BMC AMI Ops Insight to gain greater insight into system workloads and potential issues while empowering BMC AMI Assistant to provide even more robust, accurate, and specific GenAI guidance. This power and scalability will also help accelerate our expansion of BMC AMI Ops Insight and the integration of AI/ML guidance into the rest of the BMC AMI portfolio.

The Spyre Accelerator also gives even more power to BMC’s hybrid AI and BYOLLM design approach. The accelerator’s increased compute power enables GenAI workloads directly on the mainframe, giving organizations the ability to leverage the LLMs or small language models (SLMs) of their choosing. With this approach, GenAI applications like BMC AMI Assistant can provide honed, adaptable answers specific to the organization’s systems and policies.

You’ve never mainframed like this

With enhancements to processing power and efficiency specifically geared toward AI on the mainframe, the IBM z17 complements BMC’s ongoing strategy of serving as a strategic AI partner for mainframe transformation. While the IBM z17 is architected for the AI revolution, our BMC AMI solutions are already architected to exploit its AI capabilities, with full support that lets you take advantage of these advancements immediately.

As new mainframe AI workloads emerge and are brought to scale, BMC is ready. With the BMC AMI portfolio, so are you.

Forrester Study Explores Total Economic Impact™ of BMC AMI DevX
https://www.bmc.com/blogs/forrester-total-economic-impact-bmc-ami-devx/ | Mon, 09 Jun 2025

Mainframes continue to power the world’s most critical systems—from financial transactions to healthcare operations—thanks to their unmatched reliability, security, and processing power. These platforms have advanced significantly to meet modern business demands, processing billions of transactions daily to support essential enterprise operations with speed and precision.

With the integration of cloud technologies, automation, and AI, working with mainframes is becoming more intuitive and efficient. Development teams now benefit from streamlined workflows, improved visibility, and tools that align the mainframe experience with that of other modern platforms—enabling faster innovation and greater agility across the enterprise.

Forward-thinking organizations recognize that providing mainframe teams with advanced tools and practices accelerates innovation, enhances developer satisfaction, and delivers exceptional business value. Adopting advanced development methods allows enterprises to harness the full potential of their mainframe applications and drive meaningful business results.

The Forrester TEI study: $26.2M in benefits from BMC AMI DevX

To evaluate the business impact of improving mainframe development, BMC commissioned Forrester Consulting to conduct a Total Economic Impact™ (TEI) study, drawing insights from six BMC AMI DevX customers across financial services, insurance, and healthcare. The study found that the composite organization transformed its mainframe application development environment using BMC AMI DevX and saw a 217 percent return on investment (ROI), with payback in under six months.

For a composite organization with 300 mainframe developers, the study revealed substantial returns:

  • $26.2 million in total benefits over three years
  • $18.0 million in net present value (NPV)
  • Payback in under six months
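As a sanity check, these headline figures are mutually consistent under Forrester’s standard TEI definitions (NPV equals the present value of benefits minus the present value of costs; ROI equals NPV divided by the present value of costs). The short Python sketch below derives the implied cost figure; the small gap versus the published 217 percent ROI comes from rounding in the headline numbers.

```python
# Sanity check of the headline TEI figures, assuming Forrester's standard
# definitions: NPV = PV(benefits) - PV(costs) and ROI = NPV / PV(costs).
# Dollar amounts are in millions, as published in the study summary.
pv_benefits = 26.2  # total benefits over three years (present value)
npv = 18.0          # net present value

pv_costs = pv_benefits - npv  # implied present value of costs
roi = npv / pv_costs          # implied return on investment

print(f"Implied PV of costs: ${pv_costs:.1f}M")  # about $8.2M
print(f"Implied ROI: {roi:.0%}")                 # roughly the published 217 percent
```

In other words, the composite organization spent roughly $8.2 million (present value) to realize $26.2 million in benefits.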

These results show how advanced development practices can drive measurable efficiency, agility, and value across the enterprise, making a compelling case for transformation at scale.

Mainframe deployment at scale: from hours to minutes with BMC AMI DevX

The Forrester TEI study showed that organizations using BMC AMI DevX significantly accelerated development workflows, cutting release time by 96 percent. By reducing manual coordination, teams reclaimed time for innovation and higher-value work.

A mainframe DevOps lead in financial services highlighted these dramatic efficiency gains:

“There was a lot of admin work that the developers had to do, and we effectively freed up their time so that they could focus on the real value-adding [work] of developing code. … We have as fast a release schedule as other platforms. … We actually deploy our mainframe changes even quicker than our program increment cycles.”

Key outcomes include:

  • Increased deployment frequency by 50% without compromising quality
  • Reduced mean time to restore service by 98%
  • Decreased change failure rate by 33%
  • Minimized application downtime by 99%

Improving mainframe visibility and code analysis with BMC AMI DevX

These benefits aren’t just theoretical. They’re already transforming how teams work. Organizations using BMC AMI DevX reported significant improvements in visibility, efficiency, and scale across their mainframe environments.

A lead product engineer in financial services shared a striking example of how much faster code analysis has become:

“I’ll give you an example: [take] something like the program analysis function. One developer said it [used to] take days to map out a program and to follow through the processes in a program. But now, [with the BMC AMI DevX tools] she does it in minutes…”

These kinds of gains are especially valuable in mainframe environments, where applications often represent decades of business logic and institutional knowledge. BMC AMI DevX tools help teams uncover hidden structures and relationships through advanced visualizations and analytics, making it easier to understand, maintain, and evolve complex codebases.

Reported outcomes include:

  • Increased active coding time by 33% by reducing admin overhead and sharpening developer focus
  • Delivered productivity gains equivalent to adding 25 full-time team members
  • Accelerated code analysis—from days to just minutes

Building agile, multi-generational mainframe teams

Mainframe development is evolving—not only how code is written and deployed, but also how teams are structured and supported.

According to Forrester’s TEI study, organizations using BMC AMI DevX are building dynamic, multi-generational teams where both new and experienced developers thrive. By aligning with current software practices, they’re making the platform more accessible, collaborative, and attractive to today’s talent.

Key workforce outcomes include:

  • Expanded junior developer headcount by 240%
  • Increased overall mainframe workforce by 6%
  • Accelerated onboarding by 50%—cutting ramp-up time from 9 to 4.5 months

A mainframe systems engineer in financial services shared how intuitive tools helped attract and engage new talent:

“We switched about one to two years ago. … We never forced someone to use [Workbench], [but] all the youngest people jumped in immediately. … Twenty to 25 percent of the developers switched by themselves.”

With BMC AMI DevX, the composite organization realized $7.9 million in workforce development benefits—driven by intuitive interfaces and streamlined workflows that energize teams and reduce barriers to productivity.

Driving innovation and agility through next-generation mainframe development

Mainframe development is no longer just about maintenance. It’s driving innovation and agility. With BMC AMI DevX, organizations gain the speed, visibility, and flexibility to adapt and improve continuously. Forrester’s TEI study confirms that advanced tooling accelerates development, increases transparency, and fosters innovation.

A financial services platform lead emphasized the value of real-time insight:

“These capabilities have created visibility into what changes are going into the system under the emergency banner. It’s not just the faster timelines [when we’re] deploying on a regular basis. If it’s an emergency change, [we have] visibility.”

BMC AMI DevX supports full-lifecycle visibility with impact analysis, automated testing, streamlined code reviews, and actionable metrics, helping teams innovate faster and more precisely.

A platform lead for mainframe DevOps noted the cultural shift sparked by advanced tooling:

“We have some early adopters who are very interested in the new features and functionalities of the BMC toolsets. [The tools] are creating a good spark of interest. I think they’re [changing developer conversations] from ‘This application is horribly built,’ to ‘Hey, this is cool. I can use a lot of new tools and capabilities.’”

A financial services development lead praised the platform’s adaptability and alignment with evolving business needs:

“In terms of going where I wanted to go and being flexible enough to meet my needs, [BMC has been] phenomenal. Not even a question—easy decision. [BMC AMI DevX] has done everything we wanted it to do.”

Transforming the mainframe into an advanced engine for innovation and growth

The Forrester TEI study shows that advanced tools like BMC AMI DevX helped organizations maximize mainframe value by boosting efficiency, enabling smarter development, and attracting top talent through intuitive, automated, and integrated workflows.

This transformation is not just technical. It’s cultural. A mainframe systems engineer at a financial services organization observed that “With these tools, the mainframe is a platform just like any other.”

By combining intuitive tooling with a developer-centric culture—alongside automation, AI, and cloud-native practices—organizations are not only accelerating technical performance but also reshaping how teams collaborate, innovate, and view the mainframe as a simplified, modern platform for growth.

For organizations working to elevate developer productivity, accelerate time to value, and strengthen mainframe talent strategies, BMC AMI DevX delivers a proven solution aligned with enterprise transformation goals.

Download the full Forrester TEI study to explore what this could mean for your organization.

How to Supercharge Mainframe DevOps with Git + BMC AMI DevX Code Pipeline
https://www.bmc.com/blogs/how-does-git-work-with-ispw/ | Mon, 09 Jun 2025

More and more mainframe organizations are either moving to Git or talking about moving to Git. Traditionally, the mainframe has existed as a separate platform, with its own set of tooling and processes, walled off from other platforms in the company like cloud, mobile, or web. As more mainframe teams are introduced to DevOps, though, these walls are starting to come down.

Many organizations use distributed tooling like Jenkins or GitHub Actions to orchestrate their continuous integration/continuous delivery (CI/CD) pipelines. Many also use tools like SonarQube to scan their source code and Veracode to enforce security. It is only natural that organizations would look to Git, the de facto standard for version control and source code management (SCM), to house their source. But Git alone will not bring results; you need something to handle the rest of the DevOps process, such as build and deploy. To take your DevOps to the next level, pair Git with a world-class build and deployment system like BMC AMI DevX Code Pipeline.

Why Git, why now?

The choice of Git is an easy one. It is the dominant SCM system in the industry and there are many reasons to adopt it.

  • It was designed with the developer experience in mind. Whether you are using native Git or one of the popular hosted platforms like GitLab, GitHub, or Bitbucket, Git was made to provide the best possible user experience and make day-to-day development easy. Git provides many built-in features that enhance the developer experience, such as diffing, merging, and streamlined change approval.
  • Git supports multiple languages and multiple ways of working. Just as it supports distributed languages like Java, C, Node.js, or Python, it can support COBOL, JCL, REXX, PL/I, Assembler, or any other source language necessary for mainframe applications.
  • It offers branching support to enable parallel development amongst teams and individuals. Git allows a user to create an isolated environment for development or R&D and then merge that code back into the main branch or trunk.
  • Git allows organizations to consolidate their software delivery lifecycle (SDLC) processes across the organization, regardless of platforms. The mainframe will no longer be a siloed platform if it uses the same tool and process as the rest of the organization.
  • It enables easier on-boarding of new developers and empowers less-experienced engineers on the mainframe platform. College grads and younger engineers already have experience, having been trained on Git systems from day one. They come out of school already having a GitHub account or equivalent and have been using Git in their studies.
  • Git platforms are open and easy to integrate into almost any system. Because of its prolific use, almost any DevOps tool or environment already supports Git, and Git systems support most platforms and tools.

How does Git work with BMC AMI DevX Code Pipeline?

So, now that we understand Git, how does it work with BMC AMI DevX Code Pipeline to create a next-level DevOps system? You may be thinking, “Doesn’t BMC AMI DevX Code Pipeline handle source code itself?” Yes, it does, in addition to providing speed, agility, and quality in the way it handles that source. But, as detailed above, there are many reasons an organization would want to bring Git into the picture to complement BMC AMI DevX Code Pipeline, allowing Git to do what it does best on top of BMC AMI DevX Code Pipeline’s world-class build and deploy capabilities on the mainframe.

Using Git on the mainframe is no different from using it on any other platform or with any other codebase. A developer would branch code into a local environment and start making changes. As part of that process, they would call BMC AMI DevX Code Pipeline to handle building and executing any unit tests in their local environment. Once satisfied with their changes, the developer would issue a pull request to merge the changes into the main branch. At that point, the changes would be reviewed and, if complete, merged back into the main branch. This is no different from a distributed developer working in Java.

This is not the only interaction BMC AMI DevX Code Pipeline has with Git, though. Once changes are complete and merged into the main branch, they must be deployed. Thanks to BMC AMI DevX Code Pipeline’s open-borders approach, all the functions necessary to promote and deploy your changes are exposed via API, so BMC AMI DevX Code Pipeline can be used to deploy your changes as well. To make this even easier, a Jenkins plugin has been developed to handle this task, which can easily be triggered from your Git system. If you do not use Jenkins, the same integrations have been built for other popular orchestration tools, such as GitHub Actions and Azure DevOps. Along with an API for everything, our expert services teams can help you create a deployment flow for your pipelines.
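To illustrate what wiring such a deploy API into a pipeline might look like, here is a minimal Python sketch. The endpoint path, JSON fields, and token handling are assumptions invented for the example, not BMC’s actual Code Pipeline REST API; consult the product documentation for the real interface.

```python
# Hypothetical sketch: triggering a Code Pipeline deploy from a CI job.
# The URL path, JSON fields, and auth scheme below are illustrative
# assumptions, NOT the actual BMC AMI DevX Code Pipeline API.
import json
from urllib import request

def build_deploy_request(host: str, token: str, assignment_id: str, level: str):
    """Build the HTTP request a CI job would send after a merge to main."""
    url = f"https://{host}/ispw/api/assignments/{assignment_id}/deploy"
    body = json.dumps({"level": level}).encode("utf-8")
    return request.Request(
        url,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",  # placeholder credential
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_deploy_request("ispw.example.com", "TOKEN", "PLAY000123", "PROD")
print(req.get_method(), req.full_url)
# request.urlopen(req)  # a real pipeline step would submit the request here
```

A Jenkins stage or GitHub Actions step would run a script along these lines, or use the prebuilt plugins mentioned above, once the pull request merges.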

With your mainframe plugged into the same process and CI/CD tooling you use for the rest of the platforms in your organization, you can coordinate deployment across cross-functional teams, so changes to the mainframe can be deployed at the same time as changes to cloud or other distributed systems.

[Diagram: Git working with BMC AMI DevX Code Pipeline]

Does everyone HAVE to move to Git?

The answer is a simple “no.” Like anything, this is not a one-size-fits-all solution. Git has many benefits and can be used in any scenario, but it might not fit every team or application. With BMC AMI DevX Code Pipeline, you can take a hybrid approach to development within the organization, allowing teams to decide whether moving to Git is right for them. Let’s look at an example with two teams, A and B. Team A supports an application that is in maintenance mode, with no real new development, and deploys patches and updates once or twice a year. Team B supports a very active app that deploys monthly, weekly, or maybe even daily. For team A, transitioning to Git does not make much sense, and they would get limited benefit from it; the traditional mainframe development method would be fine.

Team B, however, wants to move to Git and would reap many benefits from the capabilities Git brings to the table. Typically, either team A or team B would have to conform to the other team’s wishes, but not with BMC AMI DevX Code Pipeline. It allows a hybrid approach where one team can use Git and the other can use BMC AMI DevX Code Pipeline for all of their development needs. This provides flexibility within the organization and gives each team the choice to make the best decision for itself.

[Diagram: hybrid development approach]

Where should I start?

After reading all of this, you might be asking yourself, “Where do I get started? Do I just put some code in Git and start working?” You can, but that might not be the best place to start. Like anything, a phased approach is preferred to reduce the risks inherent in a major transformation. With a proven track record of success, BMC recommends the following steps to help you migrate to Git and supercharge your mainframe DevOps.

  • Step 1: Migrate off your legacy mainframe SCM system and onto BMC AMI DevX Code Pipeline. Whether you have another SCM solution or a home-grown tool, moving to a modern tool built for agility, like BMC AMI DevX Code Pipeline, is a key first step to setting yourself up for success. If you already have BMC AMI DevX Code Pipeline, you are a third of the way there. If not, BMC professional services can help you migrate from your old system so you can immediately start reaping the benefits.
  • Step 2: Automate all your processes and get your pipelines built out. Integrate your existing DevOps tooling with BMC AMI DevX Code Pipeline and start automating all you can. Get your teams used to automation and DevOps and start making your process as efficient as you can.
  • Step 3: Assess all your teams and begin migrating them to Git as needed. With your pipelines set up and your processes automated, it is time to assess your teams and move those that want to migrate onto your Git offering, plugging into the pipelines defined for BMC AMI DevX Code Pipeline. Once again, BMC professional services has scripting that can help you move your source code, history, and metadata to Git.

Using Git and BMC AMI DevX Code Pipeline together enables organizations to manage mainframe workloads with the speed and efficiency that only DevOps can bring. Doing so ensures that code is maintained in a place that encourages parallel development and is immediately comprehensible, while also allowing seamless and accurate building, testing and deployment of code.

To explore these concepts in greater detail, download our eBook, Git for the Mainframe.

Mainframe Transformation with AIOps: Smarter Operations, Greater ROI
https://www.bmc.com/blogs/mainframe-transformation-aiops-operations-roi/ | Thu, 22 May 2025

Organizations that rely on legacy mainframe monitoring tools often face costly inefficiencies, including SLA violations, regulatory compliance risks, and application slowdowns. These hidden costs can increase capital expenditure and operational inefficiencies, and even impact overall business resilience. Luckily, there are alternatives, based on AIOps practices, that can reduce, if not eliminate, these costs. As artificial intelligence (AI) and generative AI (GenAI) mature, organizations can integrate AI with monitoring and observability tools for a new level of system visibility.

Addressing the role GenAI can play in optimizing mainframe operations, John McKenny, BMC Senior Vice President and General Manager of Intelligent Z Optimization and Transformation, says, “GenAI is revolutionizing mainframe AIOps by transforming reactive operations into proactive, data-driven systems. Organizations embracing these technologies gain enhanced insights, minimized risks, and optimized costs.”

In a recent webinar, “7 Hidden Costs of Ignoring AIOps,” BMC’s Mark Banwell, Alan Warhurst, and Jeremy Hamilton explored these challenges, sharing insights on how AIOps with GenAI capabilities and advanced automation can transform mainframe operations.

One of the topics covered in this webinar focused on the significant hurdles organizations face with legacy mainframe monitoring tools. These include visibility gaps, which make it difficult for teams to detect issues before they escalate. Legacy monitoring also relies on manual processes and siloed tools that create bottlenecks in operational workflows. As a result, IT teams are forced to put out fires rather than proactively manage performance. Without actionable insights, the consequences can be severe, leading to unexpected downtime and higher costs.

The solution: Leveraging GenAI and AIOps for smarter operations

BMC AMI Ops solutions leverage advanced AI capabilities to improve monitoring, reduce inefficiencies, and eliminate legacy constraints. This empowers organizations to detect issues proactively, improving mean time to detect (MTTD), and to kick off resolution processes, helping IT teams significantly cut mean time to repair (MTTR). The end result: teams ensure consistent system performance and minimize downtime. The ability to integrate predictive analytics with automated compliance monitoring also enhances regulatory adherence and reduces the risk of costly SLA breaches.

During a recent podcast, “The Game-Changing Benefits of AIOps for Modern Mainframe Operations,” Hamilton likened this AI-powered automation to the five senses in the human body. “The machine learning portion is seeing, hearing, and sensing what’s going on in the environment, while the GenAI component takes it further—analyzing, interpreting, and enabling seamless communication between users and systems. That’s where you get to the hybrid aspect—bringing all of these capabilities together for a more intelligent, automated, and predictive mainframe operation.”

Just as the five senses act together, combining AI capabilities for a more holistic view of system operations, and automating responses to issues before they impact availability, can help reduce costs and enable organizations to deliver services, and a user experience, that meet or even exceed customer expectations.

The real-world impact: ROI and operational benefits

Also during the webinar, Warhurst shared his perspective on how AIOps is reshaping mainframe operations: “Organizations often struggle with fragmented monitoring solutions that provide data but not insights. AIOps changes this by offering a holistic approach that not only identifies issues but also predicts and prevents them before they escalate, reducing unplanned downtime and improving overall performance.”

One important note: the financial impact of adopting BMC AMI Ops solutions can be substantial. A recent Forrester Total Economic Impact™ study, commissioned by BMC, showed that the composite organization created for the study (based on interviews with four BMC AMI Ops Monitoring customers) experienced significant benefits after implementing AIOps-powered monitoring and automation. These include a 50 percent reduction in downtime, leading to improved service availability and enhanced customer satisfaction. By replacing outdated legacy monitoring tools, the organization also reduced operational expenses by up to 80 percent, resulting in millions in savings over three years. Factoring in labor cost savings as IT staff were reallocated to higher-value tasks added an estimated $493,000 in savings.

Overall, the financial gains resulted in a 130 percent ROI, with a net present value of $2.94 million.

This is why organizations are increasingly looking to BMC AMI Ops Monitoring to minimize performance bottlenecks, increase operational efficiency, and ensure service continuity.

Industry-leading root cause analysis with GenAI-guided issue resolution

BMC AMI Ops Insight provides best-in-class automated root cause analysis enhanced by the GenAI-driven BMC AMI Assistant, delivering real-time diagnostics, plain-language explanations, and recommended next steps. By automatically detecting, diagnosing, and suggesting resolutions, it reduces mean time to detect (MTTD) and mean time to resolve (MTTR), cutting downtime and improving service reliability. Unlike traditional tools, BMC AMI Ops Insight doesn’t just surface alerts; it guides systems programmers and IT operations teams through issue resolution, making expertise more accessible at all skill levels.

With BMC AMI Ops, teams can combine built-in intelligence with AI-powered analytics to optimize performance, enhance system reliability, and proactively manage with actionable insights. Organizations leveraging BMC AMI Ops see a dramatic improvement in overall operational efficiency, with a reduced need for manual intervention and better risk management. When moving from a reactive posture to proactive management with BMC AMI Ops Insight, anomalies can be found and addressed before they escalate into SLA-impacting issues.

What’s next? Applying AIOps to drive real-world results

As discussed in the webinar, transforming mainframe operations with AIOps and GenAI isn’t just about reducing costs; it’s about creating a more efficient, resilient, and future-ready mainframe. We invite you to explore how AIOps can address your greatest pain points, replacing them with intelligent, AI-powered, GenAI-capable automation and streamlined operations, and how BMC AMI Ops solutions can help. Let’s continue the conversation and drive innovation together.

For more information on how AIOps can help your organization, watch the on-demand webinar, “7 Hidden Costs of Ignoring AIOps.” We also invite you to listen to our in-depth podcast, “The Game-Changing Benefits of AIOps for Modern Mainframe Operations,” which expands on these themes and provides additional real-world perspectives.

Taking a Ride on the Modern Mainframe Trip With Your AI Best Friend
https://www.bmc.com/blogs/taking-a-ride-on-modern-mainframe-trip-with-ai/ | Mon, 12 May 2025

If you’ve been working in enterprise software most of your life like me, then artificial intelligence (AI) might seem like the next inevitable disruptive wave that is going to wash over our industry whether we want it or not, leaving good people’s careers in its wake.

But just like factory automation hasn’t replaced the need for skilled auto workers, and self-driving trucks still can’t replace skilled drivers from first mile to last, advanced systems will only increase the scale and complexity of IT work. Humans will need to learn new skills and stay in the loop to keep applications as performant as possible ahead of increasing demand.

No aspect of enterprise software reflects this trend more than the mainframe modernization journey we’re on right now. The mainframe is still the beating heart of the enterprise, responsible for our most critical business logic and transactional capabilities.

At the same time, we are experiencing a talent attrition crisis, as skilled mainframers—with their decades of experience and understanding—are retiring and moving on. Enterprises need to bring forward a new generation of mainframe talent and continue to innovate on the mainframe to meet ever-expanding business challenges.

In this arena, we’re not looking for another disruption. As it turns out, AI might just be our best friend on this ride, so we don’t have to go it alone.

Fortunately, BMC has been steadily and pragmatically working on AI-driven functionality across its Automated Mainframe Intelligence (AMI) portfolio to help ease the mainframe transformation journey, culminating in the release of BMC AMI Assistant. Here are several new ways generative AI (GenAI) with specialized AI models could become our perfect traveling guide and companion.

Bringing SME knowledge forward with a GenAI knowledge expert

Let’s start by addressing the skills gap before we embark. As long-tenured subject matter experts (SMEs) leave the traveling party, we need to do everything we can to impart their institutional knowledge to newer engineers. And if GenAI systems are ideally suited to do one thing well, it’s documentation and knowledge transfer.

However, a large language model (LLM) is only as good as the data that feeds it. To avoid irrelevance, we’ll need much more than another chatbot that provides canned answers or information scraped from the internet.

While much of the existing codebase may be poorly commented and not clearly mapped out, BMC AMI Assistant can trace work on mainframe modules over time, documenting the accumulated knowledge gained through changes made by SMEs over the preceding years and decades, to help newer team members understand how to safely unlock procedural dependencies and connect new business services to the mainframe.

Navigating difficult passages with AI-guided issue resolution

Monitoring and observability tools have their place within any enterprise software estate. To move forward with confidence, we need systems that can flag problems and alert teams to take action. Unfortunately, IT Ops teams and site reliability engineers (SREs) who are used to dealing with web-centric architectures will throw issues “over the wall” to the mainframe team for resolution if a problem appears to originate in a back-end system.

Mainframes can be opaque to traditional observability and remediation tools, so it’s best to start with best-of-breed solutions that can glean telemetry signals from IBM Z® architectures, IBM® CICS® regions, IBM® Db2® databases and so on, such as BMC AMI Ops Monitoring. But simply detecting a system issue isn’t enough if there’s not enough context in cryptic failure codes and abends for mainframers to reach remediation.

Here’s where AI agents really shine. BMC AMI Assistant agents act like private investigators, marrying mainframe telemetry data with root cause analysis workflows and providing mainframers with GenAI-guided issue resolution help—explaining complex interactions between services and zeroing in on the exact locations and root causes of errors and failures by interacting with BMC AMI Ops Insight, which supplies telemetry and rules-based logic to its machine learning (ML).

This “Hybrid AI” approach is core to BMC’s strategy for simplifying mainframe transformation:
The GenAI of BMC AMI Assistant is infused into BMC AMI Ops Insight, which provides rules-based logic and machine learning as it observes telemetry data with a deep understanding of mainframe operations and incident root causes. Then, the LLM in BMC AMI Assistant communicates findings to teams, parsing that telemetry into natural language explanations of root causes and step-by-step recommendations of the next best actions to take to achieve resolutions.

Empowered with expert AI guidance and next-step instructions, mainframe teams can reduce mean time to detect (MTTD) and mean time to resolve (MTTR) for both minor and severe incidents, and kick back issues that originated in front ends, API services, and networks that have nothing to do with the mainframe—i.e., “reframing the mainframe blame game.”

Pick the right LLM for the trip, or bring your own

As AI makes its way from the top of the hype cycle to genuine productivity, there will never be “one model to rule them all.” That is why I always advocate for true composite AI approaches that use the right AI model for the job, rather than expecting an off-the-shelf LLM trained on generalized data to fulfill enterprise needs.

In this sense, BMC AMI Assistant acts like a concierge for selecting from multiple flavors of GenAI LLMs and small language models (SLMs), and orchestrating AI workloads across them, based on the particular use case or mainframe work to be done.

End customers can choose from a pre-curated set of open source LLMs and SLMs, each vetted as fit-for-purpose for specific work. For instance, you might use Mixtral to explain PL/I and COBOL code, Granite for Assembler and JCL, and a Llama 3 model running on GPUs for operational insights.
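A minimal sketch of what such task-based routing could look like. The routing table, task names, and function are hypothetical illustrations, not BMC AMI Assistant's actual mechanism; only the model names follow the examples above.

```python
# Hypothetical routing table: task type -> language model, loosely
# following the examples above. Not an actual BMC AMI Assistant API.
MODEL_ROUTES = {
    "explain_cobol":     "mixtral",
    "explain_pli":       "mixtral",
    "explain_assembler": "granite",
    "explain_jcl":       "granite",
    "ops_insight":       "llama-3",
}

def route_model(task: str, default: str = "llama-3") -> str:
    """Pick a model for a task, falling back to a general-purpose default."""
    return MODEL_ROUTES.get(task, default)

print(route_model("explain_cobol"))  # mixtral
print(route_model("unknown_task"))   # llama-3
```

The point of a concierge layer is exactly this indirection: callers ask for a task, not a model, so the mapping can evolve without touching the workflows that depend on it.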

Or better yet, bring your own LLM that is tuned to your own business needs and policies—an especially valuable capability for secure or proprietary mainframe environments where internal data security and sovereignty are important. There’s no longer any reason to compromise or lock in one particular GenAI approach.

The Intellyx Take

There’s no sense in going it alone on the mainframe journey anymore when AI can be your guide.

The modern mainframe really is one of the bright spots where we can see Hybrid AI improving team collaboration and productivity, while future-proofing the mainframe’s critical functionality against some of the AI-generated risk we see emerging in other areas of enterprise software.

To get there, we need the flexibility of GenAI assistants that can understand the context of your own internal business workflows and knowledge, paired with a “never fail” approach that we have always expected from our core transactional system investments.

©2025 Intellyx B.V. Intellyx is editorially responsible for this document. At the time of writing, BMC is an Intellyx subscriber. None of the other organizations mentioned here are Intellyx customers. No AI bots were used to write this content.

Cracking the Code for Java on the Mainframe https://www.bmc.com/blogs/java-on-mainframe-cracking-the-code/ Tue, 06 May 2025 09:50:07 +0000

COBOL remains the dominant programming language on the mainframe, but Java® is making substantial inroads on COBOL’s popularity.

According to the 2024 BMC Mainframe Survey, developers are writing 64 percent of new mainframe applications in Java – and they are rewriting a remarkable 55 percent of existing applications in the language as well.

Clearly, mainframe operators must treat Java as a first-class mainframe participant by leveraging appropriate tooling.

While tooling that supports Java development in the distributed world is familiar and commonplace, the mainframe requires specialized tooling so organizations can optimize their use of Java on the mainframe.

As a result, the mainframe context for Java requires extra care from the organizations implementing it.

Java tooling requirements on the mainframe

BMC is a pioneer in DevOps on the mainframe and retains its leadership role with a comprehensive suite of mainframe management, DevOps, and automation tools under its Automated Mainframe Intelligence (AMI) brand.

It’s no surprise, therefore, that BMC offers tooling that supports and optimizes Java on the mainframe. In fact, BMC offers a complete Java toolset, extending the value of established mainframe tools to Java.

Optimizing Java on the mainframe requires platform-specific tooling; analyzing and tuning Java performance there is a different task than in distributed environments.

The performance of Java applications, for example, depends upon the infrastructure supporting those applications, including all the dependencies among various infrastructure elements that provide Java with a runtime context.

Such modernization tools must take into account the specific requirements of the mainframe, including data structures and integration with mainframe assets and other dependencies.

Troubleshooting Java on the mainframe

How the mainframe handles Java exceptions is also different from Java in other environments.

Java provides its own exception handling, of course – but developers don’t always implement it properly. As a result, there is always a chance that a programming failure will impact more than the failed program itself.

The mainframe handles exceptions in its own way, as any COBOL developer will attest. For this reason, Java on the mainframe requires its own approach to exception handling.

Addressing this need is BMC AMI DevX Abend-AID, which brings automated exception handling to Java applications on the mainframe, supporting the troubleshooting of Java-based applications.

BMC AMI DevX Abend-AID automatically detects, analyzes and diagnoses problems across multiple mainframe environments, including Java. By extending the power of BMC AMI DevX Abend-AID to Java workloads on the mainframe, developers become more productive across the full lifecycle of Java development.

Managing the performance of Java workloads

The second tool that BMC has extended to Java on the mainframe is BMC AMI Strobe, which enables operators to capture and analyze Java performance data, empowering developers to locate and eliminate resource bottlenecks for Java applications on the mainframe.

BMC AMI Strobe helps operators identify application inefficiencies that lead to excessive CPU consumption and prolonged execution times. BMC AMI Strobe for Java® combines its powerful measurement capabilities with the BMC AMI Ops Monitor for Java Environments so that operators can measure and analyze the performance of Java workloads on the mainframe.

By leveraging BMC AMI Strobe, mainframe teams can improve the performance of their Java applications and develop more efficient and responsive applications moving forward.

The Intellyx take

The combination of BMC AMI DevX Abend-AID and BMC AMI Strobe empowers mainframe Java development teams to develop and troubleshoot their applications while maintaining the reliability that organizations have come to expect from their mission-critical mainframe systems.

These benefits extend across the entire mainframe landscape by enhancing support for modernization and AI initiatives, building cross-platform expertise, and reducing mean time to resolution for mission-critical systems that include Java applications.

For the organizations depending upon Java on the mainframe to support their mission-critical application development and modernization efforts, BMC’s industry leadership provides the support they require to drive innovation on the mainframe.

Copyright© Intellyx BV. BMC is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used to write this article.

Do Your Developers Have the Droids They Are Looking For? https://www.bmc.com/blogs/tools-help-developers-deliver-quality-code/ Wed, 30 Apr 2025 22:36:26 +0000

In the Star Wars movies, starfighter pilots are commonly assisted by astromechs, a type of repair droid that serves as an automated mechanic. Astromechs have many appendages — tools that can do almost anything. Pilots rely on their astromech copilots to control flight and power distribution systems while also calculating hyperspace jumps and performing simple repairs. The best example of this is the loyal R2-D2, always there for Luke Skywalker.

When problems arise, and they always do, the astromech droid is there to fix the problem. Couldn’t developers benefit from similar automated assistants as they work on code? Luckily, such tools do exist.

Here are some examples of BMC AMI technology that assists developers in delivering quality code – fast.

  • When confronted with code they do not understand, a developer working in BMC AMI DevX Code Insights merely highlights a section of the code, right-clicks, and selects “Explain.” BMC AMI Assistant returns a short, artificial intelligence (AI) generated summary of the business logic and a detailed description of the code’s logic flow. Developers now have an easily available way to work with confidence on the code using the BMC AMI DevX Workbench Editor in Eclipse or VS Code. They can also see charting to understand the structure of the program and the flow of the logic, and trace data from its arrival to its departure—right from the editor they use every day.
  • Another assistant is the Runtime Visualizer in BMC AMI DevX Code Insights, which enables developers to visualize their applications in real time. With it, developers can quickly see exactly how the application works.
  • When entering code in the BMC AMI DevX Workbench Editor, a type-ahead feature anticipates and automatically completes reserved words, allowing the developer to select one, saving time and avoiding typos.
  • When it comes time to debug a batch program using BMC AMI DevX Code Debug, developers can right-click the JCL member, select ‘Debug as’ and they’re good to go, with the configuration already filled out. Also, the configuration settings are all visible in one dialog, and if important information is missing, the dialog will point it out.
  • Creating test data can sometimes be difficult, but in BMC AMI DevX Workbench, developers can use the ‘Copy To’ function of the Host Explorer. Being able to copy multiple files and rename them, and even copy them to another LPAR with no shared DASD is a big help.
  • When there is a compile error, developers can use ‘Show Compile Diagnostics’ in BMC AMI DevX Workbench Host Explorer or BMC AMI DevX Code Pipeline. It takes them straight to the line(s) in their program that caused the compile to fail, saving them from paging through the compiler output and then opening the program to locate the offending line(s).

Whether you’re a starfighter in a galaxy far, far away or a developer working on mainframe applications, it’s best not to go it alone. Thanks to these tools, developers have their own faithful assistants to help them reach their goals.

To learn more about these features and how to use them, turn to the BMC Education Courses for AMI DevX.

Choose the Right LLM: Why AI Flexibility Matters for Mainframe Transformation https://www.bmc.com/blogs/mainframe-ai-llm-flexibility/ Tue, 15 Apr 2025 18:08:33 +0000

As generative artificial intelligence (GenAI) takes center stage in enterprise innovation, many leaders are asking how to bring its power into their mainframe environments. Some assume it’s as easy as plugging in a general-purpose large language model (LLM) like ChatGPT—but this oversimplified approach often misses the mark. The mainframe isn’t the problem; in fact, modern platforms like the recently announced IBM® z17™ are fully capable of supporting AI transformation. What matters is how you integrate AI—and more importantly, which AI model you choose.

Building or fine-tuning LLMs from scratch is expensive, time-consuming, and highly specialized. According to a Forrester report, “The State of GenAI in Financial Services,” financial services organizations overwhelmingly depend on technology and services partners to deliver GenAI solutions. To succeed, enterprises need more than off-the-shelf chatbots—they need flexibility to select the right LLM or small language model (SLM) for each task, and the ability to adapt as requirements evolve. That’s why a curated LLM library and bring-your-own LLM (BYOLLM) strategy is quickly becoming essential for mainframe transformation.

Yet, as with any transformative technology, there’s a critical question to ask: What’s the right way to implement AI in mainframe environments?

It’s easy to assume that selecting an AI model is a one-and-done decision—just pick one, integrate it, and let it work its magic. But AI isn’t one-size-fits-all, and the wrong selection can lead to inefficiencies, compliance issues, vendor lock-in, and even security risks. Instead, organizations need flexibility—the ability to choose, adapt, and control AI in a way that fits their unique business needs.

This is where BMC AMI Assistant is helping enterprises move beyond rigid AI adoption. By providing a curated LLM model library alongside a BYOLLM option, it allows organizations to tailor their AI strategy to their specific workloads, security requirements, and governance policies. But to understand why this level of AI flexibility is so important, we first need to rethink what AI flexibility really means in the enterprise.

What AI flexibility really means—and why it matters

AI flexibility isn’t just about having access to different models—it’s about ensuring that AI decisions align with business priorities. It means being able to adjust AI strategies as business needs and requirements change, as well as how technology, security policies, and compliance requirements evolve.

Imagine an enterprise that integrates an AI model into its mainframe operations, only to find out months later that it doesn’t meet new regulatory requirements. Or consider a company that needs AI to process mission-critical workloads, but the latency is too high, causing inefficiencies that impact the bottom line.

These aren’t hypothetical risks; they are real challenges enterprises face when AI strategies lack flexibility. The key to avoiding these pitfalls is having choice, adaptability, and control:

  • Choice: The ability to select, from a variety of AI models, the one that best fits each task—whether for code explanation, debugging, root cause analysis of system issues, or automated responses.
  • Adaptability: The ability to switch AI models as business needs, compliance laws, and operational constraints shift.
  • Control: The ability to determine where AI models are hosted, how they interact with data, and how outputs align with enterprise governance policies.

With these factors in mind, enterprises must carefully decide which AI model to use for each task. And that starts with understanding the differences between LLMs and SLMs.

LLM vs. SLM: Choosing the right AI model for the right task

There’s a reason why AI leaders don’t rely on just one model—different tasks require different capabilities. The choice between LLMs and SLMs comes down to the trade-off between powerful contextual understanding and lightweight efficiency.

Imagine a global bank that needs AI-powered assistance for its mainframe applications. If it wants broad contextual reasoning, such as explaining legacy COBOL code to a new developer, it would likely turn to an LLM—a model trained on vast datasets and capable of understanding relationships across a variety of sources.

But if that same bank needs a highly specific, tightly controlled AI model that generates responses based solely on proprietary company data, it may instead use an SLM—a small, purpose-built AI model designed for speed, security, and precision.

When to Use an LLM

LLMs are powerful tools for scenarios that demand broad, generalized intelligence. They shine when AI needs to:

  • Interpret and explain complex relationships across vast datasets.
  • Analyze large amounts of unstructured information to identify trends and patterns.
  • Support multiple use cases across different business functions, such as customer service automation or IT troubleshooting.

For example, in mainframe environments, an LLM can analyze codebases, detect inefficiencies, and provide AI-driven recommendations to remediate mainframe system issues. But while LLMs are incredibly versatile, they are not always the best fit for high-security, compliance-driven workloads.

When to use an SLM

SLMs, on the other hand, excel when organizations require highly specific AI models tailored to precise use cases. Unlike LLMs, which are designed for broad generalization, SLMs focus on narrow domains with strict control over data and outcomes. They are best suited for scenarios where:

  • Organizations require AI models that are narrowly focused on highly specific use cases, ensuring precise outputs aligned with their unique business requirements.
  • Low-latency processing is required, such as in high-speed transaction environments.
  • AI outputs need to be tightly governed and based strictly on internal, proprietary data sources.
  • Efficiency and cost management are top priorities. SLMs require a lower hardware footprint and reduced computational resources, making them ideal for such environments.

For example, a healthcare organization handling sensitive patient data would likely prefer an SLM for AI-driven documentation rather than exposing confidential information to a broad LLM.

Recent findings from the BMC Mainframe Survey indicate that 64 percent of enterprises identified compliance and security as their top mainframe priority. This highlights the need for SLMs when deploying AI in highly regulated environments.
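The rules of thumb above can be sketched as a simple decision helper. The criteria, their encoding, and the function itself are illustrative only; real model selection involves many more factors (cost, hosting, governance) than this sketch captures.

```python
def choose_model_class(needs_broad_context: bool,
                       latency_sensitive: bool,
                       data_must_stay_internal: bool) -> str:
    """Illustrative LLM-vs-SLM choice, encoding only the rules of
    thumb from the text. Not a BMC tool or a complete methodology."""
    # Tight data governance or low-latency work points toward a small,
    # purpose-built model trained strictly on internal data.
    if data_must_stay_internal or latency_sensitive:
        return "SLM"
    # Broad contextual reasoning across varied sources favors an LLM.
    if needs_broad_context:
        return "LLM"
    return "SLM"

# The bank's COBOL-explanation case vs. the healthcare documentation case:
print(choose_model_class(True, False, False))  # LLM
print(choose_model_class(False, True, True))   # SLM
```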

Understanding when to deploy an LLM versus an SLM is just the first step—enterprises also need the flexibility to choose where their AI models come from. That’s where BMC AMI Assistant’s curated LLM Library and BYOLLM approach comes in.

AI Flexibility with LLM Library + BYOLLM: The Power of Choice

Selecting the right AI model is just one part of the equation. The next question is: What corpus of text was used for training the AI model and what bias could that training contain?

Some enterprises benefit from fine-tuned AI models that are ready to use out of the box. Others need to train AI models on their own proprietary data to maintain control over security, compliance, and governance.

BMC AMI Assistant offers both options:

  • Curated LLM Library: A collection of AI models that are tested and evaluated to work best with BMC AMI Assistant, allowing teams to deploy AI without the burden of building models from scratch. A core capability is the ease of deployment of LLMs from the AI management console of BMC AMI Platform, ensuring seamless integration and operational efficiency.
  • Bring-Your-Own LLM (BYOLLM): The flexibility to integrate any AI model best suited to an organization’s needs, policies, regulations, and use cases, ensuring full control over security, data privacy, and AI training methods.

For many organizations, the best approach is a hybrid one—leveraging curated LLMs for quick AI adoption while integrating BYOLLM to align with their corporate AI policies. This hybrid approach ensures organizations can adapt AI strategies to their specific use cases, enterprise policies, and security requirements. With the freedom to choose the right model for the right task, teams gain greater control and the ability to optimize AI for mainframe transformation.

Adaptability: Keeping pace with change

Adaptability means more than switching between models—it’s about aligning AI strategies with the constant evolution of business needs, security demands, and compliance standards. In mainframe environments where workloads are mission-critical, adaptability ensures AI can keep up without introducing risk. As environments change, so must the AI models that support them.

That’s why flexibility must include the ability to swap models, retrain where needed, and adopt newer or more specialized LLMs and SLMs over time. With an adaptable architecture, organizations can adjust their AI strategies without rebuilding their systems or compromising performance. BMC AMI Assistant supports this model agility—ensuring enterprises stay resilient no matter how their policies, requirements, or use cases shift.

Is an LLM future-proof?

The AI model chosen today may not meet the business, compliance, and security challenges of tomorrow. This is where a curated LLM Library becomes invaluable. With BMC AMI Assistant, organizations can rapidly take advantage of the latest breakthroughs in LLM technology, ensuring they are always leveraging the most advanced and capable models available. At the same time, they retain the flexibility to pivot—adopting new LLMs when their business needs change, rather than being locked into a single, static AI model.

This ability to dynamically adjust AI strategies ensures that enterprises remain agile, compliant, and ahead of the curve in an era where business needs change and technology shifts at an unprecedented pace.

Final Thoughts: The future of AI for mainframe transformation

AI is actively shaping how organizations manage, transform, and optimize their mainframe environments. However, success in AI adoption isn’t just about implementation; it’s about ensuring AI remains flexible enough to evolve with the business.

BMC AMI Assistant provides the adaptability and flexibility needed to navigate an ever-changing landscape. With the ability to choose between curated LLMs and BYOLLM, organizations gain the strategic advantage of selecting the right AI model for their needs—today and in the future.

As AI continues to advance, the enterprises that embrace flexibility will be the ones best positioned for long-term success. The question is no longer whether to use AI in mainframe transformation—but whether the AI strategy in place is built to last.

To learn more about the new capabilities of BMC AMI Assistant—including the curated LLM Library and BYOLLM—read “Transforming the Mainframe’s Future with AI-Powered Intelligence,” by BMC Vice President of Product Management and Design Matt Whitbourne. You can also discover how the entire BMC AMI portfolio can accelerate your mainframe transformation by visiting the BMC AMI webpage.

Transforming the Mainframe’s Future with AI-Powered Intelligence https://www.bmc.com/blogs/ai-transforming-mainframe-future/ Tue, 15 Apr 2025 11:25:12 +0000

In today’s digital economy, customers expect a wide range of services, accessible anywhere, always available, and instantly responsive. As a result, successful organizations are constantly looking for ways to quickly adapt and innovate, building solutions that connect systems across the enterprise seamlessly and securely, regardless of platform. To do so, they must navigate the challenges presented by complex infrastructures, a changing mainframe workforce, ever-increasing data volume, and constant cybersecurity threats.

The April 2025 release of enhancements to the BMC AMI portfolio empowers mainframe organizations to conquer these challenges by harnessing the power of artificial intelligence (AI), simplifying Java development on the platform, ensuring system and data resilience, and strengthening security.

The recent announcement of IBM® z17 shows that AI is poised to play a key role in fueling innovation and growth on the platform by accelerating application modernization, delivering key insights that streamline operations and enhance the value of data, and improving productivity while enhancing security. Read on to see how this quarter’s enhancements can help your organization not only optimize the mainframe of the present but also be ready to maximize the potential of the future.

GenAI-powered intelligence

As organizations increase their utilization of generative AI (GenAI), selecting large language models (LLMs) that fit their policies, requirements, and use cases can be a challenge. BMC’s pioneering mainframe GenAI solution, BMC AMI Assistant, supports the use of multiple AI models, including bring-your-own LLMs (BYOLLMs), providing the flexibility to adapt AI strategies to specific use cases, policies and security requirements. Mainframe teams can select LLMs from a curated language model library or use their own LLMs, giving them greater control over and confidence in GenAI output.

The new BMC AMI Assistant knowledge expert, now in beta, helps enterprises preserve institutional knowledge and eliminate reliance on manual knowledge transfer by delivering precise, context-aware responses tailored to mainframe challenges. Leveraging AI agents, our new knowledge agent fuses LLM intent, user persona and skill level, the BMC AMI knowledge base, and customer enterprise knowledge into every prompt, ensuring highly relevant responses.

Full development lifecycle Java support

Results of the 2024 BMC Mainframe Survey show the use of Java on the rise, with 64 percent of organizations developing new mainframe applications and 55 percent rewriting existing ones in the language. This quarter, we’ve extended support for mainframe Java workloads with the addition of automated exception handling for Java applications in BMC AMI DevX Abend-AID.

Paired with the Java performance monitoring capabilities recently introduced in BMC AMI Strobe, this enhancement empowers mainframe teams to efficiently develop, monitor and troubleshoot Java applications while maintaining system reliability and optimizing operational costs.

Operations: AI-guided issue remediation and intelligent automation

When mainframe system issues occur, operations teams expend precious time deciphering root causes and developing remediation plans. Leveraging AI agents, new GenAI-powered capabilities in BMC AMI Assistant translate root cause analysis from BMC AMI Ops Insight into plain-language guidance and actionable next steps, making expertise accessible at all skill levels and helping teams to resolve issues faster, reduce downtime, and maximize system performance. Our solution doesn’t just identify the problem—it explains it, and more importantly, recommends the next best actions to fix it. This dramatically reduces MTTR and empowers teams to act with confidence—even when tribal knowledge is no longer available.

System programmers and operations teams can now monitor, analyze, and act on network insights faster, regardless of their experience level, thanks to an integration between BMC AMI Ops Monitor for IP and BMC AMI Datastream for Ops. This integration enables real-time streaming of z/OS network activity into Splunk, Elastic, and other enterprise analytics tools.

Improving data recovery, DevOps integration, and DORA compliance

A new self-correcting recovery feature in BMC AMI Recovery for Db2 automatically adjusts recovery execution options by proactively identifying and addressing potential recovery issues. By adapting recovery processes to changing conditions, this reduces manual intervention, minimizes downtime, and enhances database resilience.

GitLab support has been extended to BMC AMI DevOps for Db2, building on existing integrations with Azure DevOps, Jenkins, and GitHub Actions to enable seamless database pipeline integration.

To help ensure compliance with the European Union’s Digital Operational Resilience Act (DORA), BMC AMI Application Restart Control for Db2 introduces foundational capabilities in April, with full support planned for general availability in Summer 2025. These capabilities include secure configuration, function, and process controls while logging unauthorized access and transmitting ICH408I messages for anomaly tracking during auditing.

Simplified hybrid cloud storage

New Cloud Data Sets (CDS) concatenation empowers BMC AMI Cloud Data users to streamline operations, reduce errors, and improve efficiency by combining multiple data sets into a single logical workflow. Meanwhile, enhanced visibility enables storage teams to easily index and visualize CDS originator information with automated metadata capture and advanced filtering.

Automated certificate management, enhanced security visibility

BMC AMI Enterprise Connector for Venafi now supports subject alternative name (SAN) parameters, enabling seamless failover and multi-DNS support to ensure compliance with modern security standards while simplifying secure connections in complex environments and supporting hybrid IT integration.

BMC AMI Command Center for Security offers new easy-to-read BMC AMI Security Policy Manager dashboards that provide quick insights with the ability to drill deeper for comprehensive analysis.

Forging the future with continuous innovation

The enhancements discussed above are just a few of those included in this quarter’s release (for full details on the release, visit our What’s New in Mainframe Solutions webpage). By offering GenAI LLM flexibility, the preservation of institutional knowledge, full lifecycle Java support, increased data efficiency and compliance, streamlined hybrid cloud data storage, and automated security options, the BMC AMI suite of solutions continues to leverage new technologies to help your organization transform the mainframe to meet present demands while preparing for the future.

To learn more about this quarter’s enhancements to BMC’s mainframe and workflow orchestration portfolios, read “Delivering Business Value Through Innovation,” by BMC Chief Technology Officer Ram Chakravarti.

The Rise of AI Agents: A New Era for Mainframe Transformation https://www.bmc.com/blogs/ai-agents-mainframe-transformation/ Wed, 09 Apr 2025

There is an old adage that says, “When the mainframe sneezes, the rest of the organization gets sick.” This remains true today, as most business-critical applications depend on the mainframe. When problems do occur, finding and fixing them is often a process of elimination – is it a hardware or software problem, a batch or online issue, a database or storage problem? – with the owner of each component working to prove their part isn’t the one that caused it.

To make matters worse, many monitoring solutions are siloed, reporting only on the part of the mainframe they monitor. In the worst-case scenario, the issue stems from something buried deep in a database query written twenty years ago by someone who retired a decade ago. There are tools that can help, of course, but often the most powerful tool is a senior expert with institutional knowledge not found in any manual.

But what if that wasn’t the only way? What if you could transform your mainframe with the help of a workforce of artificial intelligence (AI) agents that don’t just assist, but collaborate, learn, reason, and act? What if you could simplify the complexity of mainframe systems by surrounding them with intelligent agents that work together like a team, operate autonomously, and understand your systems in plain language?

That’s the future of mainframe transformation—and for some, it’s already starting to take shape.

Fear of change

Let’s back up for a moment. For decades, we’ve been caught in the same loop: legacy tools built for a world that no longer exists, supported by a shrinking pool of experts, all trying to keep mission-critical systems running without missing a beat. The result? Fear of change. Developers hesitate to modernize code they don’t understand. Operators worry that tweaking one thing might break ten others. And system programmers spend hours chasing root causes through layers of logs, dumps, and outdated documentation.

That’s the gap. It’s not just a knowledge gap—it’s a skills gap, a tooling gap, and a confidence gap.

But here’s what could be: a world where intelligent AI agents simplify and modernize mainframe systems from the inside out.

You’ve probably heard terms like “agentic AI agents,” “autonomous agents,” or “large language model (LLM)-based agents.” Let’s demystify these concepts and, more importantly, put them into the context of how they could transform the way you develop, operate, and manage mainframe systems.

The coordinated workforce: Agentic AI agents

Think of a highly skilled workforce spread across different teams—each person brings a unique skill set to the table. Some specialize in troubleshooting, others in analysis, and a few in orchestrating the big picture. Individually, they contribute valuable insights. But when they work in sync, they become a powerful network of collaborators solving challenges together. Agentic AI agents function the same way. These are multiple AI agents that interact, cooperate, and sometimes even compete to solve problems. They can be homogenous (all similar) or heterogeneous (each one designed for a different task).

That vision of a coordinated AI workforce isn’t just theoretical—it’s already being explored in the industry. On a recent episode of The Modern Mainframe podcast, Jason English, an analyst at Intellyx, described it this way:

“It’s like a team of mainframe specialists who can meet, collaborate, and decide amongst themselves how to solve a problem—without needing constant supervision. That’s the power of agentic AI: coordinated intelligence that turns complexity into clear, actionable insight. It’s not one agent that will be suitable for all aspects of AIOps, for instance, when dealing with system management and operations. We will see a collection of AI agents that will work in collaboration to proactively solve system issues.”

In a mainframe environment, that might mean one agent continuously scans SMF data for anomalies while another watches CPU usage patterns and a third looks for changes in application behavior. A fourth agent acts as the coordinator, pulling together insights from the others to provide a clear picture of what’s happening. Like bees sharing nectar, these agents share data and context in real time.
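The coordinator pattern described above can be sketched in a few lines of code. This is purely an illustrative sketch, not how BMC AMI products are implemented: the agent classes, thresholds, and telemetry fields are all hypothetical, standing in for real SMF and CPU data sources.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    agent: str       # which specialist raised it
    detail: str
    severity: int    # 1 (info) .. 3 (critical)

class SpecialistAgent:
    """Base class for a narrowly focused monitoring agent (hypothetical)."""
    name = "specialist"
    def scan(self, telemetry: dict) -> list:
        raise NotImplementedError

class SmfAnomalyAgent(SpecialistAgent):
    """Watches job records (stand-in for SMF data) for slow jobs."""
    name = "smf-anomaly"
    def scan(self, telemetry):
        return [Finding(self.name, f"slow job {r['job']}", 2)
                for r in telemetry.get("smf", [])
                if r["elapsed_ms"] > 5000]   # illustrative threshold

class CpuUsageAgent(SpecialistAgent):
    """Watches overall CPU utilization."""
    name = "cpu-usage"
    def scan(self, telemetry):
        cpu = telemetry.get("cpu_pct", 0)
        return [Finding(self.name, f"CPU at {cpu}%", 3)] if cpu > 90 else []

class CoordinatorAgent:
    """Fuses specialist findings into one picture, ranked by severity."""
    def __init__(self, specialists):
        self.specialists = specialists
    def assess(self, telemetry):
        findings = [f for s in self.specialists for f in s.scan(telemetry)]
        return sorted(findings, key=lambda f: f.severity, reverse=True)
```

The key design point is that specialists never talk to each other directly; the coordinator owns the fusion step, which is what turns independent alerts into a single prioritized picture.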

This isn’t just monitoring—this is distributed intelligence. These agents don’t just raise an alert, they work together to give you the why, the where, and sometimes even the how to fix it.

Self-driving mainframes: Autonomous agents

Now let’s imagine a self-driving car. It perceives its surroundings, makes a plan, takes action, learns from what happens, and adjusts the next time it’s used. That’s how autonomous agents work.

In a mainframe transformation context, you could have an autonomous agent that notices an unusual spike in workload at 2 a.m. Instead of just logging it, the agent investigates. Was it a job rerun? A missed SLA? A rogue script? The agent then suggests a fix or even reschedules the job and adjusts thresholds based on learned patterns.

Or picture a developer saying, “Check all modules in this COBOL app for security vulnerabilities.” The autonomous agent gets to work—it scans the code, applies pattern recognition, checks against known vulnerabilities, and generates a report with recommendations. Over time, it learns which types of vulnerabilities are common in your shop and becomes faster and more precise.
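The scanning step of such an agent can be sketched as a simple pattern-matching pass over source text. This is a minimal sketch under stated assumptions: the two signatures are hypothetical examples, and a real scanner would use far richer analysis than regular expressions.

```python
import re

# Hypothetical vulnerability signatures for illustration only.
VULN_PATTERNS = {
    "hard-coded credential": re.compile(r"PASSWORD\s+VALUE\s+'", re.IGNORECASE),
    "unvalidated ACCEPT": re.compile(r"^\s*ACCEPT\s+\S+\s*$",
                                     re.IGNORECASE | re.MULTILINE),
}

def scan_module(name: str, source: str) -> list:
    """Return one report entry per signature match in a COBOL module."""
    report = []
    for label, pattern in VULN_PATTERNS.items():
        for match in pattern.finditer(source):
            # Derive a 1-based line number from the match offset.
            line_no = source.count("\n", 0, match.start()) + 1
            report.append({"module": name, "issue": label, "line": line_no})
    return report
```

The learning behavior the paragraph describes would sit on top of this: feeding confirmed findings back into the signature set so frequently seen issue types are matched earlier and more precisely.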

Autonomous agents don’t just respond; they anticipate. And in doing so, they make your mainframe environment more adaptive and resilient.

The copilot you always wanted: LLM-based agents

Here’s where things get even more exciting. LLM-based agents are powered by large language models and act like copilots. They reason, plan, use tools, and respond in natural language. Imagine the smartest assistant you’ve ever worked with—but instead of needing years of onboarding, it already understands your mainframe code.

That’s exactly what tools like BMC AMI Assistant are designed to do. A developer opens a file and asks, “What does this code do?” The assistant reads the code, understands the logic, and responds with a clear explanation in natural language. An operator asks, “Why did this job fail last night?” and gets a contextual answer that pulls from logs, dumps, and performance data.

This is knowledge transfer at scale. The institutional knowledge that used to live in one system programmer’s head is now codified, accessible, and explainable—instantly.

When they work together, you’ve never mainframed like this

Now picture all three of these agent types working together:

  • Agentic agents identify a problem: a pattern of job slowdowns happening across three applications.
  • An autonomous agent investigates and discovers a Db2 performance issue due to an inefficient access path.
  • An LLM-based agent explains the root cause, suggests a fix, and even offers to generate documentation or a test plan.
  • A development feedback loop agent captures insights from the incident, integrates the fix into the CI/CD pipeline, and triggers a new build with updated code—closing the loop between operations and development.
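The hand-offs in the list above can be sketched as a pipeline of stubbed agent functions. Everything here is hypothetical: each stage stands in for an agent that would, in practice, do real correlation, investigation, or LLM-backed explanation.

```python
def detect_pattern(events):
    """Agentic layer: correlate slowdowns across applications (stubbed)."""
    apps = {e["app"] for e in events if e["slowdown"]}
    return {"apps": sorted(apps)} if len(apps) >= 3 else None

def investigate(problem):
    """Autonomous layer: drill into a likely root cause (stubbed)."""
    return {**problem, "root_cause": "inefficient Db2 access path"}

def explain(diagnosis):
    """LLM layer: in production this would prompt a model; stubbed as a template."""
    return (f"Slowdowns in {', '.join(diagnosis['apps'])} trace to an "
            f"{diagnosis['root_cause']}; consider rebinding the affected package.")

def run_pipeline(events):
    """Chain the three agent roles into one detect-investigate-explain flow."""
    problem = detect_pattern(events)
    return explain(investigate(problem)) if problem else "no cross-application pattern"
```

The feedback-loop agent in the last bullet would consume the same diagnosis object and push it into a CI/CD trigger, which is what closes the loop between operations and development.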

Suddenly, your mainframe isn’t a black box anymore—it’s a transparent, collaborative environment where human and machine intelligence work side by side.

This is what we mean by “mainframe, simplified.” This is what we mean when we say “you’ve never mainframed like this.”

Why the rise of AI agents matters now

The rise of AI agents isn’t just a trend—it’s a shift in how we think about mainframe transformation.

Mainframe transformation isn’t just about rewriting code or lifting and shifting workloads. It’s about removing the barriers that have historically held us back—barriers like institutional knowledge locked away in a retiring workforce, brittle systems no one wants to touch, and tools that were never designed for agility.

When AI agents become part of your daily workflow, something remarkable happens. Developers stop fearing the code they don’t understand and start building with confidence. Operators no longer spend all night piecing together clues—they resolve issues with clarity and speed. And system programmers finally get to share their expertise in a way that’s scalable and accessible to the next generation.

These aren’t just tools—they’re trusted teammates. They learn. They adapt. And they speak your language.

That fear of change that’s been lingering in the background, hindering modernization efforts? It begins to fade. In its place, there’s a new mindset: one that sees the mainframe not as a system you’re stuck with, but as a system with which you can evolve.

It’s not about replacing people—it’s about giving people the superpowers they need to move faster, solve smarter, and transform with confidence.

And in that shift, the mainframe becomes more than a foundation for critical business systems. It becomes a platform for what’s next.

That’s what could be. And for more and more organizations—it’s already beginning.

So, the next time someone asks you how you plan to modernize your mainframe, you might just say, “We’re letting the agents handle it.” Not because you’re stepping away, but because you’re stepping forward into a new era where AI doesn’t replace expertise—it scales it. And with that, the mainframe becomes not just a system of record, but a platform of innovation.

This is the rise of AI agents—a new chapter in how we simplify the complex, bridge generations of expertise, and make the mainframe more intuitive, intelligent, and indispensable than ever before.

That’s what could be. And honestly, it’s already starting to be.

Learn more about AI Agents, LLMs, generative AI (GenAI), hybrid AI and more in the Modern Mainframe podcast, “The Future is Hybrid: 2025 Predictions for GenAI,” featuring Intellyx analyst Jason English.
