Mainframe Blog – BMC Software | Blogs
What You Need To Know About Following Cybersecurity Frameworks https://s7280.pcdn.co/follow-mainframe-cybersecurity-frameworks/ Fri, 19 Apr 2024 13:28:51 +0000

Do you trust what you’re eating? I can rest easy; in the U.S., I have the Food and Drug Administration regulating my food supply. Do you trust the product you just purchased? If I’m in the EU, I have confidence because of the CE mark. Do you trust that your bank/supermarket/airline is secure? I shouldn’t need to, because they follow established security frameworks. This is a long-winded way to say that throughout our lives, we trust regulations and standards to ensure our safety, and cybersecurity is no different. In this blog, we are going to look at what security frameworks are, why we need them, how to choose a framework, and, finally, balance things out by looking at the potential downsides of security frameworks.

Why do companies need cybersecurity frameworks?

Short answer: To improve the organization’s security posture in the following ways.

  • Standardization: A common language and set of measures ensure that requirements are well understood and consistent. Frameworks provide platform-agnostic best practices and guidelines for implementing cybersecurity measures.
  • Risk management: A framework provides a structured approach to risk management. It will usually begin with a discovery phase followed by an assessment of the findings. This in turn cascades down into resource allocation (be that time or money) to ensure the most significant threats are mitigated.
  • Compliance requirements: While many industries have specific regulatory requirements for cybersecurity, non-industry-specific frameworks will align with standards of the regulatory bodies to ensure compliance.
  • Resources/Skills gaps: Implementing effective cybersecurity controls is no small task. When you add skills shortages (or a complete lack of skills) to the mix, you may wonder where to start. Frameworks offer guidance based on risk, helping to ensure your implementation plan is as effective as possible.
  • Continuous improvements: The threat landscape is always changing—the next exploit is currently in development somewhere. Frameworks provide the structure to help you stay secure, and they are themselves continuously updated to keep pace with the latest threats for maximum effectiveness.
  • Interoperability: Similar to standardization but slightly different: cybersecurity measures span the wider digital ecosystem, so a shared framework facilitates collaboration and communication between internal stakeholders as well as with vendors and auditors.
  • Awareness/Visibility: The frameworks themselves raise awareness about cybersecurity and promote best practices.

How do you choose a framework?

Depending on your business sector or organizational status, the framework you follow may already be mandated. For example, a company that is publicly traded will need to comply with the Sarbanes-Oxley Act (SOX), so it may use the Control Objectives for Information and Related Technologies (COBIT) framework to achieve this. US government agencies must follow National Institute of Standards and Technology (NIST) standards and guidelines. What if you are not mandated to follow a certain framework? Then you focus on a framework that can help you address one or more of the following:

  • Risk:
    • Encompasses risk identification, analysis, evaluation, treatment, and monitoring.
    • Facilitates ongoing monitoring and reporting of compliance efforts.
    • Helps the organization prioritize effort based on risk.
  • Control:
    • Provides a high-level strategy for the cybersecurity team.
    • Platform-agnostic set of security controls.
    • Easy-to-digest current state of the organization.
  • Program:
    •  Covers the whole organization.
    • Assesses the organization’s current state in a single place.
    • Measurable.
    • Brings the whole organization into a common language from technical team to executives.

Are there downsides to using a framework?

If a piece of work improves the security posture, ultimately there is no downside to it; however, that doesn’t mean it can’t be critiqued. There is a common misconception that if you have followed a security framework, then you are completely secure. While you are better protected than you were before, your security posture must still be validated with assessments and penetration tests (pentests).

Organizations are complex places, and a one-size-fits-all approach can lead to both over-investment and under-investment in certain areas if those areas have not been properly risk assessed. It is expensive to implement a framework, and it’s also costly to maintain it going forward, which can leave an organization vulnerable if it does not invest in both implementation and continuous maintenance.

Examples of popular frameworks

Summary

Security frameworks provide organizations with structured guidelines and methodologies that align with industry standards, best practices, and regulatory requirements. Choosing which framework to implement is no small task and requires a significant financial investment and time commitment. Ultimately, adhering to a framework will improve the security posture of the organization, but do not become a victim of a false sense of security. Following the framework alone does not make you secure—you must also conduct security assessments and penetration testing to ensure agility in the face of continuously evolving threats.

Leveraging Generative AI in Mainframe DevOps https://www.bmc.com/blogs/leveraging-generative-ai-mainframe-devops/ Mon, 15 Apr 2024 17:46:39 +0000

In the dynamic landscape of software development, where mainframe systems continue to play a critical role, the adoption of artificial intelligence (AI) is revolutionizing traditional practices. Specifically, the emergence of generative AI presents a paradigm shift in how developers interact with mainframe applications within the DevOps ecosystem. This article delves into the fundamental differences between using generative AI and conventional AI methodologies in mainframe DevOps and explores how the former augments the developer experience.

The overarching goal of employing generative AI in mainframe DevOps is to enhance the application developer journey. Generative AI operates on the premise of explaining, guiding, and testing mainframe application changes and enhancements. Unlike traditional AI and machine learning (ML) techniques, which predominantly focus on optimizing system performance or detecting anomalies, generative AI takes a proactive stance in empowering developers throughout the software development lifecycle.

Understanding the scope of AI capabilities

Before delving into the nuances of generative AI, it’s important to delineate the capabilities of AI in the mainframe DevOps domain. AI encompasses a spectrum of techniques, ranging from anomaly detection and predictive maintenance to optimization and automation. These methodologies excel in augmenting system monitoring, maintenance, and performance. However, traditional AI falls short in addressing the intricate nuances of the developer journey, such as code comprehension, adherence to coding standards, and efficient testing strategies.

Why generative AI is the optimal solution

Generative AI emerges as the quintessential solution for augmenting the application developer journey in mainframe DevOps due to its innate capacity to comprehend, guide, and enhance code-related tasks. Unlike traditional AI methodologies, which primarily focus on system optimization and anomaly detection, generative AI transcends these boundaries by actively participating in the software development lifecycle. For instance, in code explanation tasks, generative AI not only provides comprehensive insights into code snippets but also automatically integrates explanatory comments, fostering better understanding and collaboration among developers.

Similarly, in code-review scenarios, generative AI offers real-time guidance on adhering to enterprise coding standards and industry best practices, thereby ensuring consistency and compliance across applications. These examples underscore the multifaceted capabilities of generative AI in enhancing developer productivity and facilitating seamless collaboration within the DevOps ecosystem.

Let’s explore five key use cases that demonstrate the efficacy of generative AI in this realm:

1. Explanation: Understand what the code does

Generative AI offers in-depth explanations of code snippets and programs. Leveraging sophisticated natural language processing (NLP) methods, it seamlessly integrates comments into code, improving readability and fostering collaboration among developers. This automated process enhances understanding and streamlines the development workflow, ensuring that developers can grasp the intricacies of the codebase more efficiently, which helps accelerate the development cycle.
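
As a rough, tool-agnostic illustration of this pattern (not a depiction of any specific BMC AMI capability), the sketch below sends a small COBOL paragraph to a general-purpose large language model and asks for the same code back with explanatory comments added. The OpenAI-style client, model name, and prompt wording are assumptions chosen for brevity.

    # Hypothetical sketch: ask a general-purpose LLM to add explanatory
    # comments to a COBOL paragraph. Requires the "openai" package and an
    # OPENAI_API_KEY environment variable; the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    cobol_snippet = """\
    COMPUTE-INTEREST.
        MULTIPLY WS-BALANCE BY WS-RATE GIVING WS-INTEREST ROUNDED.
        ADD WS-INTEREST TO WS-BALANCE.
    """

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a mainframe COBOL expert. Return the code "
                        "unchanged except for added explanatory comments."},
            {"role": "user", "content": cobol_snippet},
        ],
    )

    # The commented code comes back as plain text for the developer to review.
    print(response.choices[0].message.content)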

2. Review: Receive immediate feedback on code standards

Generative AI plays a pivotal role in code reviews, assisting developers with real-time guidance and remediation suggestions that ensure adherence to predefined standards to promote consistency and compliance across applications. Developers benefit by receiving immediate feedback on their code, which helps them identify and rectify potential issues early in the development process. Consequently, it leads to higher-quality codebases, improved software reliability, and increased efficiency in development workflows.

3. Improve the delivery of your changes

In an era characterized by increasingly complex toolchains, generative AI can significantly streamline the code delivery process. By facilitating quicker root cause isolation and enhancing the resiliency and performance of continuous integration and continuous delivery (CI/CD) toolchains, it reduces mean time to resolution (MTTR). Reduced downtime and faster issue resolution lead to increased productivity and smoother workflows for developers, while businesses experience improved software delivery efficiency, enhanced competitiveness, and greater customer satisfaction.

4. Give every developer a personal assistant

Generative AI functions as an indispensable virtual coding assistant, equipped with extensive knowledge of best practices, design patterns, and syntax. By seamlessly integrating into developer workflows, it provides contextual recommendations and promotes adherence to industry and organizational standards, enhancing developer productivity and code quality. Developers gain real-time guidance, enabling them to make informed decisions and produce high-quality code efficiently. As a result, businesses get accelerated development cycles, reduced error rates, and enhanced software reliability, ultimately leading to improved customer satisfaction and competitive advantage in the market.

5. Efficient testing with minimal viable data

Generative AI revolutionizes the testing paradigm by intelligently analyzing source code and devising optimal testing strategies. By leveraging sophisticated algorithms, it generates test case scripts and tailors datasets to the specific requirements of the application, ensuring thorough coverage while maintaining efficiency. Automation of the testing process saves developers time and effort previously spent creating comprehensive test suites so they can focus more on developing new features and addressing critical issues. For businesses, the adoption of generative AI for test case generation results in improved software quality and reliability. With thorough testing coverage, organizations can mitigate the risk of bugs and performance issues, leading to higher customer satisfaction and reduced maintenance costs in the long run. Additionally, it facilitates faster release cycles, enabling businesses to stay competitive in the market.
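
The article describes generative AI doing this analysis, but the underlying "minimal viable data" idea (covering boundary conditions with a small, purpose-built dataset instead of a full production copy) can be illustrated with a simple hand-rolled generator. The field names and ranges below are hypothetical.

    # Illustrative only: a hand-rolled boundary-value generator showing the
    # "minimal viable data" idea. In practice the field definitions would be
    # derived from the copybook by the AI tooling; these are hypothetical.
    from itertools import product

    # (field name, minimum value, maximum value)
    FIELDS = [
        ("WS-BALANCE", 0, 9_999_999),
        ("WS-RATE", 0, 100),
        ("WS-TERM-MONTHS", 1, 360),
    ]

    def boundary_values(lo, hi):
        """Return the classic boundary cases for a numeric range."""
        return sorted({lo, lo + 1, (lo + hi) // 2, hi - 1, hi})

    # Cross the per-field boundary cases: a small dataset with high coverage.
    records = [
        dict(zip((name for name, _, _ in FIELDS), combo))
        for combo in product(*(boundary_values(lo, hi) for _, lo, hi in FIELDS))
    ]

    print(f"{len(records)} candidate test records generated")
    print(records[0])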

Considerations for adoption

When considering the adoption of generative AI in mainframe DevOps, organizations should carefully evaluate several key factors to ensure successful implementation:

  • Scalability and compatibility with existing mainframe infrastructure and development processes
  • Potential benefits, such as improved code quality, faster development cycles, and enhanced developer collaboration
  • Cost-effectiveness and return on investment, encompassing initial investments in training and infrastructure and the long-term benefits of increased efficiency and productivity
  • Alignment with strategic objectives and overall digital transformation initiatives

By carefully weighing these considerations, organizations can make informed decisions about whether and how to incorporate generative AI into their mainframe DevOps workflows.

Conclusion: Generative AI fosters a culture of continuous improvement

In essence, the integration and adoption of generative AI into mainframe DevOps signifies a transformative leap in the developer journey. With enhanced code comprehension and improved testing and adherence to standards, developers are empowered to innovate with unparalleled confidence and agility. This not only accelerates development cycles but also fosters a culture of continuous improvement within organizations.

Embracing generative AI paves the way for enhanced efficiency, reliability, and scalability in mainframe application development. Businesses can expect streamlined processes, reduced time to market, and better software quality. Moreover, the adoption of generative AI sets the stage for future advancements in technology and reinforces the organization’s position at the forefront of innovation in the competitive landscape. Overall, it signifies a strategic investment in harnessing the full potential of AI to drive sustainable growth and success in the digital era.

The Continuous Improvement of Mainframe Speed and Efficiency https://www.bmc.com/blogs/mainframe-speed-efficiency-continuous-improvement/ Thu, 04 Apr 2024 11:38:02 +0000

In today’s always-on digital economy, customers expect 24×7 system availability, instant response times, and products and services that constantly evolve to meet their needs. Whether your mainframe teams are developing new applications and features, resolving database issues, or recovering from a security incident, speed and efficiency are vital to gaining and maintaining a competitive advantage. This quarter’s release of enhancements to the BMC AMI portfolio helps your mainframe teams work quickly and cost-effectively without sacrificing quality, regardless of the task at hand. Read on to learn more.

BMC AMI Cloud Data users can now reduce storage capacity usage and cost by archiving BMC AMI Storage IAM datasets to the cloud, while new automations enable more efficient data management processing. We’ve also introduced streamlined filtering and grouping capabilities that give users better data insights.

An enhancement to BMC AMI DevX Code Insights builds on our recently released code refactoring support by automatically analyzing all fields in a specific copybook and giving developers the ability to quickly comment and remove unused copybooks. Meanwhile, BMC AMI DevX Code Pipeline now offers detailed deploy controls, enabling developers to specify actions for different environments, set specific deployment times, and specify how to handle failures.

This quarter’s enhancements also boost the efficiency and performance of IBM® Db2® and IBM® IMS. A new BMC AMI zAdviser dashboard provides actionable insights from BMC AMI Recovery for Db2® with new visibility into data migration and audit task oversight. BMC AMI Online Reorg for IMS streamlines database change management capabilities, while new features in BMC AMI Database Administration for Db2® and BMC AMI Utilities for Db2® reduce downtime and accelerate mean time to diagnose (MTTD) and resolve issues.

New BMC AMI Ops quick-view dashboards for IBM® z/OS®, IBM® CICS®, Db2®, and Java help users easily visualize key performance metrics for mainframe systems and subsystems, providing at-a-glance insights to help identify areas that could affect uptime or performance.

Finally, BMC now offers clients mainframe penetration testing (pentesting) through NetSPI to determine system vulnerabilities, validate security posture, and easily fulfill compliance requirements.

This quarter’s release coincides with the 60th anniversary of the release of the IBM® System/360 machine on April 7, 1964. Over the past 60 years, the mainframe has undergone incredible change, not only in hardware capacity but also in its role as the backbone of the modern digital economy. As we enter the next 60 years of mainframe history, BMC is committed to continually offering new products and features that improve the speed, efficiency, and quality of your mainframe data management, development, operations, and security teams. We look forward to being your partner in 2024 and beyond as you optimize and transform the mainframe, taking the platform to new heights.

Learn more about our April release of BMC AMI enhancements on the What’s New in Mainframe Solutions page.

The Next, Next Generation of Mainframers Is Here https://www.bmc.com/blogs/mainframe-next-generation-is-here/ Wed, 27 Mar 2024 15:03:52 +0000

On April 7, 2024, we will be celebrating the 60th anniversary of the mainframe. It would take some pretty thick rose-colored glasses to not remember the many times the future of the mainframe was in doubt. We heard Stewart Alsop’s 1991 announcement about the impending demise of the mainframe, but March 15, 1996 came and went, followed by further predictions of doom. The mainframe also survived Y2K and many other events, such as downsizing, rightsizing, and outsourcing.

For the first decade or so of my career, when I said I was a mainframe programmer, I would get one of two questions: “What is a mainframe?” or “Why on earth would you enter a dying field?”

Way back then, people, even those in the mainframe world, couldn’t understand why a new generation would choose the mainframe. I did. It was a good choice. I’m still here to talk about it. And it’s still a good choice. I’m excited about the future potential still left untapped.

From a viewpoint inside the mainframe world, it is easy to overlook the massive innovation and growth that has happened over 60 years. Some of that has been on the IBM® Z® platform, or “Z” as we call it now, and some in outside areas. Gone are the days of heading to the machine room to find the manual needed (on the rack of manuals) or chasing tapes escaping across the floor. Good riddance.

Thanks to Google and now, artificial intelligence (AI), we can get answers at our fingertips in a few seconds. Tapes are virtual. Who knows where the mainframe is actually located? While we may look back with nostalgia on the days of lifting floor tiles to find a loose wire, we can probably all agree our time is better spent these days.

The challenges we face today are nothing that would have been foreseen 20 years ago. Everyone has a device in their hand that can send a transaction to the mainframe. Data is growing astronomically. Gasps were heard as we saw the first million-row table, then a billion rows, and then the first terabyte table. One thing is for certain, the data is continuing to grow, and the rate of growth is still unbelievable but true.

When I became a manager, I was given a book by management consultant Peter Drucker. My key takeaways were that the customer must be the focal point of the business and, to provide what the customer needs, you have to take care of your team. As Agile development arrived and everyone talked about all the changes needed in management styles, I started finding blogs devoted to linking Drucker’s principles to managing Agile teams effectively. The more things change, the more they stay the same.

People have changed, no doubt. No one would mistake an intern in 2024 for someone new in their career in 1964, for many reasons. But with all the change in society, some things remain the same. Early career mainframers are brave, inquisitive, and persistent. The great news is that we do have a next, next generation of mainframers who are innovative and driven to make the most of the mainframe. In the same ways—and maybe different ways—that we have tapped into the potential to exploit the speed, security, and “wow” of the mainframe, they will surprise us with progress we can’t even imagine today.

Here’s to the next 60 years with a bright new generation of mainframers to pave the way!

IBM System/360 Laid Groundwork for Mainframe Innovation https://www.bmc.com/blogs/mainframe-innovation-groundwork-ibm-s360/ Thu, 14 Mar 2024 15:05:38 +0000

With the IBM® System/360 celebrating its 60th birthday this year, the BMC mainframe group was asked if anyone remembered working on this hardware. In a moment of weakness, I admitted that I had, and was asked to blog about what I remembered.

Although I took my first programming course at the age of 15 while still in high school, it was not until I was taking my first programming course as a freshman at the University of Michigan (U of M) that I used a System/360.

In fact, I interacted with two System/360 machines. The computer center used a System/360 Model 20 to run the card readers and printers that we used to submit all our programs for testing, and we’d get the results back on printed output. Our programs were executed on a System/360 Model 67, a machine that was unique at the time. The 360/67 at the University of Michigan was the first IBM computer to have virtual memory.

The Model 67 was built to specifications derived from a paper written in 1966 by four authors: Bruce Arden, Bernard Galler, and Frank Westervelt, who were associate directors at U of M’s academic computing center, and Tom O’Brian. The paper was called “Program and Addressing Structure in a Time-Sharing Environment.” Dr. Galler and Dr. Westervelt were professors who I got to know during my time at U of M. I took my first advanced programming course, as well as the last course for my master’s degree, from Dr. Galler. Fun fact: Dr. Galler’s son, Glenn, works in the mainframe group here at BMC.

I got my first IT job the summer before finishing my master’s degree at Project Management Associates (PMA), a subsidiary of a construction company, Townsend and Bottum, that specialized in building power plants. PMA hired me to do COBOL development on a scheduling system they were developing for the construction industry.

I had one minor challenge with this first job. Although I had learned Basic Assembler Language, FORTRAN, SNOBOL, PIL (Pittsburgh Interpretive Language), and LISP, written an operating system, written a compiler for a language called GLORY, and used other programming languages at U of M, the university did not offer a course in COBOL. I spent the first week of my new job reading the manual and learning COBOL.

The COBOL programs I developed ran on the Townsend and Bottum data processing center’s System/360 Model 30, which had 8 kilobytes (yes, 8K) of physical core memory—no virtual memory. I designed an overlay structure into the program so that it reused the physical memory as the program executed. For example, once I did the initialization processing, I overlaid those in-memory instructions with the instructions to read the data and create the report. If the program produced multiple reports, I needed to design an overlay structure to reuse the application instruction storage from the first report for each subsequent one.

After graduating from U of M, I moved to the Townsend and Bottum parent company and continued to write COBOL programs. Those programs also ran on a Model 30 that used the Disk Operating System (DOS), a predecessor of Virtual Storage Extended (VSE). I remember when they upgraded the Model 30 from 8K to 32K of memory. This greatly reduced the overlay processing requirements in the programs.

A year or so later as Townsend and Bottum was expanding, the company decided to upgrade to a System/360 Model 50. At that time, it also decided it needed its own systems programmer, and I was willing to take the position, so the company sent me to a number of IBM courses to get the knowledge I would need for the job.

My first activity as the new systems programmer was to calculate the electrical power requirements for the new Model 50, the associated bank of eight 2314 disk drives (where each disk pack held 29 MB of data), the tape drives, printer, card reader/punch, and other peripherals so that the computer room could be designed with enough power to run the system.

Initially, the same DOS operating system that was on the Model 30 was used for the new Model 50, but shortly thereafter I installed the Operating System/360 (OS/360) that was designed for the new hardware. The first OS/360 system I generated and installed was Multiprogramming with a Fixed number of Tasks (OS/MFT). Later, I upgraded the operating system to run OS/360 Multiprogramming with a Variable number of Tasks (OS/MVT). Both of these were non-virtual memory systems since the Model 50 did not come with a virtual memory capability.

After a few more years of business growth, the System/360 was too small and Townsend and Bottum moved to a System/370 machine, so I moved off the System/360 platform and on to this larger and faster machine that had virtual memory.

Thanks for indulging me on my trip down memory lane. It’s amazing to think how far mainframe systems have come while maintaining their role as the system of record for the global economy. The original System/360 and the innovation of subsequent versions provided the foundation for modern mainframes and their utilization of cutting-edge technology, including artificial intelligence (AI). While it’s exciting to see what is on the horizon for future versions of the mainframe, I would not have the background and knowledge that I possess today if I had not started my career on the IBM System/360.

What DORA Means for Mainframe Teams in and Around EMEA https://www.bmc.com/blogs/what-dora-means-for-mainframe-teams-emea/ Thu, 14 Mar 2024 13:27:39 +0000

Over the past month, I have had the opportunity to discuss the European Union’s Digital Operational Resilience Act (DORA) with the mainframe teams of 14 of the largest financial institutions in EMEA and the UK. Here are my key takeaways from those conversations:

There is general agreement that for mainframe teams, the DORA requirements are different than previous regulatory guidelines:

  • Penalties that include one percent of annual revenues and criminal liability are getting the attention of executives and board members
  • As DORA calls out “all critical infrastructure,” the spotlight is shining on mainframe infrastructure like never before
  • DORA requires an independent penetration test/security assessment of all critical infrastructure. Only some mainframe teams are heeding that requirement.
  • The biggest change in requirements when comparing DORA to other regulations is the ability to prove that your financial institution can recover from a cyberattack—which is much different than a disaster recovery.
  • At least half of the financial institutions have already been engaged in European Central Bank (ECB) stress tests to evaluate their organizational ability to recover from a cyberattack.
  • There is considerable concern over the “interpretation” of the technical/business/resilience requirements for DORA, even after the January final report was published.
  • Most financial institutions are already in the process of implementing immutable backup solutions for their mainframe environments—a key step toward cyberattack resilience.
  • For those organizations implementing immutable backups, nearly all recognize the challenge of determining which immutable backup is appropriate to use for their recovery.
  • Many financial institutions recognize that recovering from an immutable backup poses a critical issue around data loss, potentially losing hours of financial transactions.
  • Most financial institutions have created DORA-specific working groups to guide their IT teams on appropriate measures to take, but even those teams have difficulties translating regulation requirements into IT guidelines.

Bottom line: DORA presents new challenges for mainframe teams, not only because the cyberattack scenario is new, but because the ECB is actively engaging with financial institutions that do business in Europe to prove that they comply with the new objectives.

Learn more about how DORA guidelines help achieve operational resilience in the podcast, “Mainframe Operational Resilience: DORA and Beyond.”

Navigating DORA Regulations: A Guide for Mainframe Operational Resilience https://www.bmc.com/blogs/dora-regulations-mainframe-operational-resilience/ Tue, 12 Mar 2024 11:55:31 +0000

In the bustling realm of finance, mainframe systems stand as silent sentinels, processing transactions and safeguarding sensitive data. Yet, in the face of escalating workloads and looming cyberthreats, traditional operational resilience measures may falter, exposing financial institutions and their data to risk. Enter the European Union’s Digital Operational Resilience Act (DORA), a transformative force reshaping the landscape of operational resilience in finance.

This act, with its comprehensive standards and framework, extends beyond distributed systems to include the mainframe as well, offering a lifeline of regulation and guidance to fortify critical infrastructures against the tides of uncertainty. Strengthening the core of mainframe systems not only ensures regulatory compliance but also bolsters their ability to withstand the dynamic pressures of the modern financial landscape. This article serves as a guide, exploring the essential components and technology considerations that empower financial institutions on their journey towards DORA compliance and, ultimately, resilience.

Reevaluating mainframe operational resilience in the digital age

Operational resilience has resurged as a top priority, reflecting the acknowledgment of its indispensable role in navigating the digital age. This resurgence is particularly pronounced when considering the mainframe systems that serve as the backbone of financial operations, managing vast amounts of sensitive data and transactions. In an era marked by escalating workloads and demands, as well as cyberthreats and potential disruptions, traditional operational approaches fall short, underscoring the necessity for a renewed focus on mainframe operational resilience.

The importance of operational resilience for mainframe systems is not merely theoretical—it’s a strategic imperative for financial institutions. Every transaction, data point, and critical operation relies on the mainframe, making any disruption a significant risk. The repercussions of not embracing new technologies to enhance resilience are multifold—financial organizations risk not only regulatory non-compliance but also jeopardize the integrity of their operations.

Increased mainframe workloads demand a paradigm shift, and without a robust operational resilience framework powered by innovative technologies, institutions risk compromising the very core of their mainframe operations, putting data security and operational stability at stake. Embracing new technologies isn’t just a choice; it’s a necessity for financial organizations aspiring to thrive and remain resilient in the face of the evolving digital landscape.

Embracing DORA: Beyond compliance to mainframe operational resilience

DORA introduces a pivotal shift in the financial sector, expanding beyond traditional compliance into a comprehensive framework that reimagines service awareness, risk management, business continuity, and governance. This evolution in regulation serves as a call to action for financial institutions, urging them to proactively enhance their mainframe infrastructure.

As DORA harmonizes risk management practices and raises the standard for resilience in mainframe systems, it emphasizes not just compliance but a transformation of mainframe operations to meet the challenges of a dynamic digital landscape. This necessitates embracing advanced mainframe technology solutions, crucial for maintaining robustness and agility in response to these evolving demands.

DORA sets a regulatory focus on five key topics impacting mainframe operational resilience

DORA emphasizes the importance of holistic operational resilience principles, urging financial institutions to gain a thorough comprehension of their entire IT infrastructure, discern potential vulnerabilities and risks, and establish resilient automated strategies to safeguard their systems, data, and clientele from cyberthreats and other potential disruptions. Key areas of DORA focus include information and communication technology (ICT) risk management, incident reporting, resilience testing, ICT third-party risk management, and information sharing. Within that scope, companies utilizing mainframe systems should consider the following:

1. Service awareness and availability

Effective technology for service awareness includes regular health checks, automated maintenance, and predictive alarms based on workload patterns. Log mechanisms aligned with DORA’s transparency requirements offer real-time insights into mainframe activities.
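
As a minimal, tool-agnostic sketch of the "predictive alarms based on workload patterns" idea (the sample values, data source, and threshold below are invented purely for illustration):

    # Minimal sketch of a workload-pattern alarm: flag the latest interval
    # when it deviates sharply from the established baseline. The sample
    # values and the 3-sigma threshold are illustrative assumptions.
    from statistics import mean, stdev

    # Transactions per interval, e.g. collected from SMF records or a monitor API.
    workload_history = [1180, 1210, 1195, 1240, 1225, 1230, 1250, 1980]

    baseline, current = workload_history[:-1], workload_history[-1]
    mu, sigma = mean(baseline), stdev(baseline)

    # Raise a predictive alarm when the latest interval sits far outside the
    # usual pattern, before it turns into an availability incident.
    if sigma and abs(current - mu) / sigma > 3:
        print(f"ALERT: workload {current} deviates from baseline {mu:.0f}")
    else:
        print("Workload within expected pattern")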

2. Risk management

Beyond standard vulnerability assessments, technology solutions for risk management involve real-time monitoring tools, security patch updates, and dynamic risk mitigation. This approach addresses exposures and vulnerabilities, aligning seamlessly with DORA standards.

3. Business continuity management

Technological considerations for business continuity management include comprehensive recovery plans, failover mechanisms, and automated backup solutions. Integration of cloud storage ensures scalability, meeting DORA expectations for enhanced recovery objectives.

4. Incident management

An effective incident management approach involves the seamless integration of monitoring alerts into an enterprise service console. Automated response playbooks and collaborative incident resolution align with DORA guidelines for efficient incident management.

5. Governance and compliance

Technology for governance and compliance encompasses vulnerability scanning tools specific to mainframe environments. Automated compliance checks, regular audits, and the evolution of governance processes ensure adherence to DORA components.
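
To make "automated compliance checks" slightly more concrete, here is a deliberately simplified sketch that compares a reported system configuration against a small policy baseline; the setting names and required values are hypothetical, not DORA text or any product's rule set.

    # Simplified illustration of an automated compliance check: compare the
    # reported configuration of a system against a policy baseline. Setting
    # names and required values are hypothetical.
    POLICY_BASELINE = {
        "multifactor_auth_enabled": True,
        "inactive_userids_revoked": True,
        "audit_logging_enabled": True,
        "min_password_length": 15,
    }

    reported_config = {
        "multifactor_auth_enabled": True,
        "inactive_userids_revoked": False,
        "audit_logging_enabled": True,
        "min_password_length": 8,
    }

    def compliance_findings(baseline, config):
        """Return the settings whose reported values fall short of policy."""
        findings = []
        for setting, required in baseline.items():
            actual = config.get(setting)
            if isinstance(required, bool):
                compliant = actual == required
            else:
                compliant = actual is not None and actual >= required
            if not compliant:
                findings.append((setting, required, actual))
        return findings

    for setting, required, actual in compliance_findings(POLICY_BASELINE, reported_config):
        print(f"NON-COMPLIANT: {setting} is {actual}, policy requires {required}")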

Operational resilience toolchain: a holistic approach

In navigating the intricacies of DORA compliance, the focus should extend beyond specific solutions to a holistic toolchain approach. Technologies that empower financial institutions share common attributes:

1. Identify

Early detection mechanisms and robust data analysis capabilities are integral. Technologies that offer insights into potential issues and risks provide a proactive foundation for resilience.

2. Protect

Implementation of security measures and safeguards for mainframe systems is crucial. Technologies that fortify defenses, ensuring the integrity of critical data, contribute to DORA-aligned protection.

3. Detect

Real-time monitoring tools equipped with anomaly detection capabilities are essential. Technologies that vigilantly spot threats in vast data landscapes align with DORA’s emphasis on understanding potential impacts.

4. Respond

Incident response protocols and collaborative incident resolution mechanisms are vital. Technologies that facilitate well-defined action plans and coordinated efforts to limit the impact of cybersecurity events meet DORA guidelines effectively.

5. Recover

Swift recovery strategies and post-incident analysis capabilities are key components. Technologies that streamline recovery processes and offer insights for continuous improvements contribute to a resilient mainframe environment.

Summary: A technological compass for mainframe resilience

As financial institutions embark on the journey towards DORA compliance and the intricacies of mainframe operational resilience, this exploration serves as a technological compass, guiding financial institutions towards a fortified future. We’ve underscored the imperative of adopting innovative technologies that align with the key components of DORA. From service awareness to governance and compliance, the compass points towards solutions that offer early detection, robust safeguards, real-time monitoring, efficient incident response, and swift recovery strategies. The essence lies not just in compliance but also in leveraging technology to proactively fortify mainframe systems, ensuring they both meet regulatory standards and stand resilient against the ever-evolving challenges of the digital landscape.

Want more resources to learn about DORA and its impact on mainframe operational resilience? Go to BMC’s DORA Survival Guide and learn how to fortify your mainframe.

These Are Mainframe Development’s Good New Days https://www.bmc.com/blogs/mainframe-application-development-good-new-days/ Thu, 07 Mar 2024 07:47:26 +0000

Sometimes, old-time mainframers get together and talk about the good old days. I was there; those days weren’t so good. I started out in college learning to be very proficient with a key punch machine; I was ecstatic when I was able to get access to an IBM® 3270 and enter my code and have it stored on disk. It was the first example I had of not accepting the status quo and being open to new technologies that would make my job easier. Over time I have been open to the many improvements in the developer experience.

With that, I can say that I feel that today is the golden age for the mainframe. This is the best time to be a developer working on mainframe applications. Here are some of the areas in which today’s mainframe outshines “the good old days.”

  • Communications—The ability to access systems and work remotely is a huge improvement over the way we used to have to drive into an office to fix errors, then wait while they were confirmed. We may think that our always-on culture is disruptive, and it can be, but contrast that with having to stop what you are enjoying and drive to the office for an unknown amount of time. Working on the mainframe means you are working on a critical system, so there will always be disruptions; the difference is that now there are far fewer. It has also enabled “work from home,” which used to be impossible, or at least very difficult. We can now work from anywhere.
  • A choice of interfaces—In addition to the “classic” Interactive System Productivity Facility (ISPF), there are also now Eclipse and Visual Studio Code (VS Code) solutions, among others. The ability to have multiple displays on multiple monitors and not have to remember keywords makes it so much easier to work. Sure, you could, over time, become proficient in ISPF, but that’s just it—it took time, and developers should not have to go through an initiation process to be productive. It’s much better to start out day one with a modern, familiar interface that is easily upgraded and configured.
  • New perspectives—I learned from those who came before me, and many of them insisted things were “better than before,” and, in fact, “just perfect.” There was a tendency to not change because change was risky. Now, with new developers coming to mainframe development, we see that things have changed, there are new ideas. I think some of these ideas were there before, but because of limitations in technology, they couldn’t be implemented. Now, with a fresh look, we are seeing that the ideas were sound, and the technology has caught up so we can implement them. Where these ideas and practices were previously discarded because we looked at them and they wouldn’t work, we are now benefiting from them. It is great to have this new perspective to spur us on to greater improvements.
  • Automation, and automation of automation—Automation is a good concept, but it has been the victim of what I mentioned above. We either tried it and it never worked, or we implemented it with a complex process and don’t really want to touch it because it is fragile. Now is the time to reevaluate and benefit from new technologies. I know that when I look at what is available with webhooks and REST APIs, I am amazed at the possibilities. It is so much easier to set up automation now than it was in the past. Where I might have given up previously, I am now eagerly looking to new automations, putting something into place and then a few months later, adding to it (see the webhook sketch after this list).

This also applies to automating your automation. By this, I mean that you may have already automated test scripts. If so, great! But if you don’t also automate the alerts and, really, the whole process around them, then you are missing out on some great benefits. In short, look at everything—including code reviews—and see if there is some way that manual tasks can be automated and supplemented.

  • Graphical displays—Going from a card deck to seeing the code on a 24×80 display was a huge advancement. I worked in that box for, well, a long time, before graphical displays became possible. Having my program charted out—automatically—was a game-changer. I am a visual learner, and to see my program—and the connections of my program with others—right there on my monitor was amazing. My days of “playing computer” or interactively debugging for analysis were over. I could see the structure of the program, drill down into a paragraph, and see the flow of data: where a field entered, how it was processed, and where it went when it left to another program, file, or database. This graphical display meant I could take on any new program, understand it quickly, and have confidence in my changes. It was an instant improvement in productivity.
  • Agile scrum—I was taught to write new programs in a modularized way—stub out sections, test, then fill in the logic. This was a good practice I followed and was a way to catch errors early. However, this was done for multi-month waterfall projects. We didn’t take the best practice and move it to the next level. The game changer came when we were able to implement a source code management (SCM) solution, BMC Compuware ISPW (now BMC AMI Code Pipeline), which enabled multiple developers to work on code at the same time. That helped us keep track of versions and enabled an Agile scrum framework for mainframe development, which delivered huge increases in productivity and quality. I’ve found that taking an open approach with our SCM and build and deploy, through the use of APIs and webhooks, ensures that development teams can continuously improve their agility.
  • Code coverage—I remember when testing was guesswork and sometimes bluffing. You ran your new code through as much data as you could find with the assumption that it should hit your changes; then you could verify they were right and that you didn’t break anything. But that was just it: you had to assume, and that is where the bluffing came in for a code review. The testing took a lot of time to run and was not very effective. Code coverage provided proof that the lines I changed were tested. No more guesswork, no more bluffing. It also meant that I could test with much less data than I had been using, so I could test sooner and more often. I could save the massive datasets for final testing.
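
Picking up the webhooks and REST APIs mentioned in the automation bullet above, here is a minimal sketch of chaining one automation into another: a listener that reacts to a "build complete" event by calling a downstream REST API. The endpoint path, payload fields, and downstream URL are placeholders, not a real product API.

    # Minimal sketch: a webhook listener that chains automations together by
    # calling a REST API when an event arrives. Endpoint names, payload
    # fields, and the downstream URL are placeholders.
    from flask import Flask, request
    import requests

    app = Flask(__name__)
    TEST_RUNNER_URL = "https://example.internal/api/test-runs"  # placeholder

    @app.route("/webhook/build-complete", methods=["POST"])
    def on_build_complete():
        event = request.get_json(force=True)
        # When a build finishes, automatically kick off the test suite and let
        # that run raise its own alerts: "automating the automation."
        resp = requests.post(
            TEST_RUNNER_URL,
            json={"build": event.get("buildId")},
            timeout=30,
        )
        return {"triggered": resp.ok}, 202

    if __name__ == "__main__":
        app.run(port=8080)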

I am more excited about mainframe development than I have ever been—I envy developers today with everything they have available. The applications they will be able to envision, the connections, and the things they will be able to accomplish to keep the mainframe going in the decades to come will be amazing. All of this will be done faster and with much greater quality than we could have ever imagined back in the slow old days. These are truly the good new days for the mainframe, and I am glad I had a small part in building it.

Women in Mainframe: The Legacy of Innovation Continues https://www.bmc.com/blogs/mainframe-women-innovation-legacy-continues/ Wed, 06 Mar 2024 00:00:50 +0000

The mainframe has a long history of female trailblazers, with a legacy that continues through to today. Several women played a crucial role in the development of COBOL. Rear Admiral Grace Hopper’s championing of a computer language based on English words led to the creation of the FLOW-MATIC language and to her service as a technical consultant to the committee that defined COBOL in 1959. The subcommittee that created most of the new language’s specifications included both Gertrude Tierney of IBM and Jean E. Sammet of Sylvania.

At BMC, we’re proud of our history of continuous innovation. Women’s History Month presents a great opportunity to reflect on the women who have carried on the legacy of Grace Hopper, Gertrude Tierney, Jean Sammet, and others by receiving patents for the innovations they contributed to BMC’s solutions.

Being recognized for unique and groundbreaking contributions is quite an honor and a source of pride for the rest of one’s career. Senior Product Manager Irene Ford received patents in 2007 and 2010 for solutions that enable customers to mask sensitive data without writing custom programs. Reflecting on the experience, Irene says, “As I look back on this today, I am proud to have been able to work so closely with our amazing development team not only on these patents but also on the work we do every day to make our tools better for our customers.”

Fifteen women have been granted patents for their mainframe innovations at BMC:

Linda S. Ball (2001, 2006 & 2010)
Carla C. Birk (1997 & 2001)
Donna M. Di Carlo (2012)
Catherine Drummond (2022, 2023 & 2024)
Linda C. Elliott (2000)
Irene Ford (2007 & 2010)
Carol Harper (1988, 1989 & 1992)
Roxanne Kallman (2020, 2022 & 2023)
Karen Nelson-Katt (1998)
Lisa S. Keeler (2000)
Annette B. McCall (1995)
Pradnya Shah (2002)
Melody Vos (2005, 2006, 2007, 2012, 2015)
Lori Walbeck (2020 & 2021)
Wenjie Zhu (2022, 2023 & 2024)

I’d also like to take this opportunity to recognize our colleagues from other business units who have received patents:

Tamar Admon (2013)
Kalpa Ashhar (2016 & 2018)
Maribeth Carpenter (2023)
Jiani Chen (2015, 2017, 2019 & 2020)
Gwendolyn Curlee (2023)
Kanika Dhyani (2015, 2016, 2020 & 2023)
Priyanka Jain (2020)
Nitsan Daniel Lavie (2019, 2020 & 2024)
Donna S. Lowe-Cleveland (2005)
Pallavi Phadke (2018 & 2019)
Soumee Phatak (2020)
Carol Rathrock (1998)
Komal K. Shah (2014)
Annie Shum (2008)
Jeyashree Sivasubramanian (2016, 2018, 2019, 2020, 2021 & 2023)
Cynthia L. Sturgeon (2006, 2014, 2017 & 2018)
Priya Saurabh Talwalkar (2023)
Elaine Tang (2022)

Patent filings for Michal Barak, Komal Padmawar, Jennifer Glenski, and Priya Talwalkar are currently pending.

At BMC, we’re proud of our history of innovation and of the role that our female employees have played in making that innovation possible. We look forward to continuing to create solutions that serve our customers and move the industry forward with contributions from both current employees and future generations.

The Power of Air-Gapped Object Storage as Part of a Mainframe Data Resilience Strategy https://www.bmc.com/blogs/mainframe-data-resilience-air-gapped-cloud-object-storage/ Mon, 05 Feb 2024 08:50:39 +0000

Safeguarding sensitive data has become a paramount concern for organizations. As cyberthreats evolve and new regulations to safeguard data emerge, the need for robust data resilience solutions has never been more pressing. Air-gapped object storage is a technology that provides security and protection for mainframe data against cyberthreats.

The need for mainframe data resilience

It is crucial to understand the importance of data resilience in mainframe systems. Organizations rely heavily on their mainframe data to make informed decisions, conduct day-to-day operations, and maintain a competitive edge. Any disruption due to cyberattacks, natural disasters, or human error can have significant consequences.

A mainframe data resilience strategy ensures the continuous availability, integrity, and accessibility of critical information. Traditional storage solutions may provide some level of protection, but they often fall short in the face of sophisticated cyberthreats. This is where air-gapped object storage should be considered as an additional level of security.

New regulations are being issued globally to ensure companies are protecting their data adequately and can recover from a cyberattack or other logical data corruption. For example, the Digital Operational Resilience Act (DORA) regulation will come into effect in the European Union in January 2025.

The role of BMC AMI Cloud Vault in data resilience strategy

BMC AMI Cloud Vault protects mainframe data from cyberattacks such as ransomware by creating an additional, third copy of the data on immutable, air-gapped, cloud-based storage. This enables quick recovery and reduces the risk of data loss while maintaining compliance with regulatory requirements for enterprise data retention.

The power of BMC AMI Cloud Vault and air-gapped object storage

Unplugging from cyberthreats

Air-gapped storage involves physically isolating the storage infrastructure from the network, creating an “air gap” that serves as a powerful barrier against cyberthreats. Without a direct connection to the internet or any external network, the chances of unauthorized access or data breaches are significantly reduced. BMC AMI Cloud Vault provides the ability to write data directly from the mainframe to air-gapped storage and, after isolating the storage, create a “golden copy” of the data.
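
BMC AMI Cloud Vault’s internals aren’t shown here, but the generic cloud building block it relies on, immutable (write-once) object storage, can be sketched with an S3 Object Lock upload. The bucket name, key, local file, and retention period are placeholders, and the bucket must have been created with Object Lock enabled.

    # Generic illustration of immutable ("WORM") object storage using S3
    # Object Lock; this is not BMC AMI Cloud Vault code. The bucket must be
    # created with Object Lock enabled; names and retention are placeholders.
    from datetime import datetime, timedelta, timezone
    import boto3

    s3 = boto3.client("s3")
    retain_until = datetime.now(timezone.utc) + timedelta(days=365)

    with open("backup.dump", "rb") as golden_copy:  # placeholder local file
        s3.put_object(
            Bucket="mainframe-vault-example",          # placeholder bucket
            Key="golden-copy/2024-02-05/backup.dump",  # placeholder key
            Body=golden_copy,
            ObjectLockMode="COMPLIANCE",               # cannot be shortened or bypassed
            ObjectLockRetainUntilDate=retain_until,
        )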

Immunity to online attacks

Common cyberthreats, such as ransomware and malware, rely on network connectivity to propagate and infect systems. With BMC AMI Cloud Vault, mainframe data is kept in an air-gapped environment, allowing organizations to create a fortress that remains impervious to online attacks. The air-gapped storage remains untouched and secure even if the network is compromised.

Protection against insider threats

While external threats are a significant concern, insider threats pose an equally formidable risk. Air-gapped storage limits access to authorized personnel who are physically present at the storage location. This minimizes the risk of internal breaches and ensures that only individuals with explicit permissions can interact with the stored data. BMC AMI Cloud Vault, leveraging mainframe security control, helps create end-to-end protection against threats.

The cloud is a different technological environment from the mainframe and relies on a separate set of authorizations and security controls than the mainframe does. A mainframe user with admin privileges, such as a storage administrator, would typically not have admin privileges in the cloud environment. This provides an additional layer of protection in case a mainframe user ID has been compromised.

Guarding against data corruption

Air-gapped object storage enhances data integrity by protecting against accidental or intentional corruption. Since the storage system is isolated and can keep track of any changes to identify attacks, the likelihood of malware altering or deleting critical data is virtually eliminated. BMC AMI Cloud Vault’s ability to recover a specific version of the data ensures organizations can quickly recover their data in its original, unaltered state.

Resilience in the face of disasters

Beyond cybersecurity concerns, air-gapped storage adds an extra layer of resilience against physical disasters. Whether it’s a natural calamity, a fire, or another catastrophic event, data stored in an air-gapped environment remains sheltered from external factors that could compromise its integrity. With cloud hyperscalers, data is spread over three availability zones by default to ensure maximum availability.

Conclusion

In an age where data is the lifeblood of organizations, ensuring its resilience is non-negotiable. BMC AMI Cloud Vault, with air-gapped object storage as part of a mainframe resilience strategy, offers unparalleled protection against cyberthreats and provides a robust solution for data resilience needs. By adopting this innovative approach, organizations can fortify their data infrastructure, safeguard critical information, and confidently navigate the digital landscape.

Finally, considering new regulations such as DORA, organizations running mainframes can no longer afford to rely on existing solutions alone for data resilience and recovery and must act as soon as possible to ensure compliance.

To learn more about BMC AMI Cloud and how to modernize your data management with hybrid cloud agility, check out our hybrid cloud solutions webpage.
