Control-M and SAP RISE: Ready for the Future of SAP S/4HANA Integration
https://s7280.pcdn.co/ctm-sap-rise-integration/

As enterprises accelerate their digital transformation journeys, many are turning to SAP RISE with SAP S/4HANA to simplify their path to the cloud while preserving business continuity. SAP RISE is SAP’s strategic offering that bundles cloud infrastructure, managed services, and SAP S/4HANA into a single subscription model.

But as SAP landscapes grow more complex—with a mix of on-premises, cloud, and hybrid environments—the need for seamless orchestration and intelligent automation has never been greater. Control-M is a proven, SAP-certified application and data workflow orchestration platform that is now fully compatible with SAP RISE and integration-ready with SAP S/4HANA.

Control-M: Purpose-Built for Modern SAP Workflows

Control-M empowers enterprises to orchestrate and monitor business-critical workflows across SAP and non-SAP systems with a single, unified platform. As organizations transition from SAP ECC to SAP S/4HANA—either on-premises or through SAP RISE—Control-M ensures that scheduling, automation, and monitoring capabilities remain robust, flexible, and aligned with modern best practices.

Whether it’s traditional ABAP-based jobs, cloud-native extensions, or third-party integrations, Control-M manages them all—without relying on custom scripts or siloed tools.

SAP RISE Compatibility: Simplifying the Move to Cloud ERP

While SAP RISE streamlines procurement and lowers TCO, it also introduces a shared responsibility model, making automation and visibility into background jobs even more essential.

Control-M is designed to integrate directly with SAP S/4HANA under the SAP RISE model, ensuring that organizations retain full control over their scheduled jobs, dependencies, and business workflows, even as SAP infrastructure and services are managed by SAP or hyperscalers.

Seamless Integration with SAP S/4HANA Features

Control-M supports the full range of SAP S/4HANA features and architecture elements, including:

  • ABAP jobs
  • SAP Business Warehouse (BW) processes
  • Data archiving
  • SAP HANA and Fiori-based applications
  • SAP BTP extensions and API-based workflows
  • Hybrid and multi-cloud environments (including OCI, AWS, Azure)

With Control-M, users can define and manage dependencies between SAP S/4HANA jobs and external workflows—whether they’re running in a data lake, cloud integration platform, or third-party ERP module.
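
To make that concrete, here is a minimal, hypothetical sketch of the dependency mechanism in Control-M's Jobs-as-Code JSON format, written as a Python dictionary that could be deployed through the Control-M Automation API. The folder, server, host, and event names are placeholders, and the SAP step is shown as a generic command job; in practice it would be defined with the Control-M for SAP plug-in's job type.

# Minimal sketch: an SAP S/4HANA step followed by a dependent external step.
# All names are illustrative; the first job would normally use the
# Control-M for SAP job type rather than Job:Command.
sap_workflow = {
    "S4HANA_DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctmserver",              # placeholder Control-M/Server
        "Run_S4HANA_Step": {
            "Type": "Job:Command",                  # stand-in for the SAP job type
            "Command": "echo run SAP S/4HANA background job",
            "RunAs": "ctmagent",
            "Host": "agenthost",
            "eventsToAdd": {
                "Type": "AddEvents",
                "Events": [{"Event": "S4HANA_Step_Done"}]
            }
        },
        "Load_Data_Lake": {
            "Type": "Job:Command",                  # external, non-SAP workflow step
            "Command": "echo load results into the data lake",
            "RunAs": "ctmagent",
            "Host": "agenthost",
            "eventsToWaitFor": {
                "Type": "WaitForEvents",
                "Events": [{"Event": "S4HANA_Step_Done"}]
            }
        }
    }
}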

Future-Ready: Clean Core and SAP BTP Integration

As enterprises adopt clean core strategies—keeping custom logic outside the core S/4HANA system—Control-M’s support for SAP BTP and API-based orchestration becomes critical. Businesses can now automate workflows across SAP BTP extensions and custom applications, maintain upgrade readiness, and drive agility across their IT operations.

This makes Control-M an ideal partner for organizations embracing side-by-side innovation with SAP BTP, as well as cloud-native integrations.

Key Benefits for SAP-Centric Enterprises

  • Unified orchestration of SAP and non-SAP workflows across cloud, hybrid, and on-prem environments
  • Out-of-the-box support for SAP ECC, SAP S/4HANA, and SAP RISE
  • End-to-end visibility and SLA management from a single control plane
  • Faster troubleshooting and reduced downtime through proactive monitoring and alerts
  • Support for clean core principles via SAP BTP and API integrations

The move to SAP S/4HANA and SAP RISE is a strategic imperative for many organizations—but the transition requires careful orchestration, especially as business processes become more distributed and data-driven.

With Control-M, enterprises can confidently modernize their SAP environments, maintain full control over their critical workloads, and unlock the full value of SAP’s intelligent ERP—now and in the future.

To learn more about how Control-M for SAP can help your business, visit our website.

Empower Digital Innovation with Control-M and SAP Business Technology Platform
https://www.bmc.com/blogs/empower-innovation-ctm-btp/

SAP Business Technology Platform (SAP BTP) is a comprehensive, multi-cloud platform that enables organizations to develop, extend, and integrate business applications. It offers a broad suite of services across data and analytics, artificial intelligence, application development, process automation, and enterprise integration—empowering digital innovation and agile business transformation. By leveraging SAP BTP, organizations can achieve centralized visibility, enhanced operational reliability, and real-time coordination of background processes, data flows, and application services. This seamless integration via SAP Integration Suite leverages APIs that streamline operations, minimize manual intervention, and ensure uninterrupted business execution—particularly critical during digital transformation initiatives such as migrating to SAP RISE with SAP S/4HANA.

Designed to support clean core principles, SAP BTP enables customers to decouple custom logic from the digital core (e.g., SAP S/4HANA), using APIs and side-by-side extensions that promote agility, upgradeability, and innovation.

Control-M integrates with SAP BTP through robust API-based connectivity (via Application Integrator), enabling enterprises to seamlessly orchestrate, schedule, and monitor workflows that span both SAP and non-SAP systems. By leveraging SAP BTP’s extensibility and integration capabilities, Control-M can automate and manage end-to-end business processes involving applications built or extended on BTP, such as SAP S/4HANA extensions, custom applications, or third-party service integrations.

This integration allows for real-time execution and monitoring of background jobs, data pipelines, and event-driven processes across hybrid environments. Control-M simplifies job scheduling and workflow orchestration on SAP BTP by offering a centralized platform to define dependencies, manage workloads, and ensure SLA compliance across diverse systems.

Control-M’s capabilities are further enhanced by the introduction of a new SAP BTP job type, designed specifically to streamline scheduling, orchestration, and monitoring of workflows running on SAP BTP. This new job type enables users to natively connect with SAP BTP’s API-driven environment, allowing seamless automation of jobs across SAP extensions, custom applications, and integrations built on the platform.

With this innovation, Control-M users can define, schedule, and monitor SAP BTP-based tasks alongside traditional SAP jobs and non-SAP workflows—all within a unified interface. The integration provides end-to-end visibility and control over complex, hybrid workflows, reducing manual effort and accelerating response times to job failures or exceptions.

With its cloud integration services and API-first architecture, SAP BTP allows seamless connectivity across hybrid environments and supports integration with non-ABAP systems. These capabilities align perfectly with Control-M’s application and data workflow orchestration, delivering powerful automation across complex enterprise landscapes.

This capability is particularly valuable for organizations migrating to SAP S/4HANA or adopting SAP RISE, as it supports automation and governance across modern SAP landscapes. By leveraging Control-M’s new SAP BTP job type, businesses can enhance operational efficiency, improve SLA adherence, and drive smoother digital transformation journeys.

To learn more about Control-M for SAP, visit our website.

Smarter orchestration, real results: How Control-M connects your data ecosystem
https://www.bmc.com/blogs/control-m-dataops/

Disconnected tools. Missed SLAs. Broken pipelines. For many data teams, the challenge isn’t building—it’s making everything work together reliably.

Control-M bridges that gap. It connects the technologies you already use—like Snowflake, SageMaker, and Tableau—into unified and resilient workflows.

Here’s how Control-M helps you orchestrate across your entire data stack, delivering operational efficiency without adding complexity.

Orchestrate full ML pipelines with Control-M and Amazon SageMaker

Machine learning delivers real value only when it reaches production. Control-M helps you get there faster by transforming SageMaker workflows into orchestrated, resilient pipelines.

Instead of hand-cranked scripts and brittle logic, Control-M offers:

  • Direct integration to launch and monitor SageMaker training and inference jobs
  • Built-in SLA tracking, parallel job execution (up to 50 per agent), and predictive alerts
  • Seamless coordination with upstream data prep and downstream deployment steps

Real-world example: A data science team used Control-M to automate everything from data prep to monitoring. The result: a repeatable ML lifecycle that integrates directly with CI/CD pipelines.

Run smarter ELT workflows with Snowflake

Snowflake is built for scale—but it takes orchestration to turn SQL jobs, UDFs, and transformations into stable, production-ready workflows.

Control-M integrates directly with Snowflake so you can:

  • Automate, orchestrate, and monitor SQL and UDF executions with clear visibility
  • Automatically trigger downstream steps based on completion status
  • Ensure consistency and data integrity across platforms and tools

Use case: Orchestrate an end-to-end pipeline from Kafka ingestion to Snowflake transformation to Tableau visualization—all within a single, governed Control-M workflow.
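
As a rough sketch of what such a governed workflow could look like in Control-M's Jobs-as-Code format (written here as a Python dictionary), the example below chains an ingestion step, a Snowflake transformation defined as a database SQL-script job, and a Tableau refresh step, linked by events. Connection profiles, hosts, script paths, and commands are placeholders, not the names of actual Control-M integrations.

# Hypothetical three-step pipeline: Kafka ingestion -> Snowflake transform -> Tableau refresh.
# Job types, connection profiles, commands, and paths are illustrative only.
pipeline = {
    "Kafka_To_Tableau_Pipeline": {
        "Type": "Folder",
        "ControlmServer": "ctmserver",
        "Ingest_From_Kafka": {
            "Type": "Job:Command",
            "Command": "python3 /opt/pipelines/kafka_to_stage.py",        # placeholder ingestion script
            "RunAs": "ctmagent",
            "Host": "agenthost",
            "eventsToAdd": {"Type": "AddEvents",
                            "Events": [{"Event": "Staged_For_Snowflake"}]}
        },
        "Transform_In_Snowflake": {
            "Type": "Job:Database:SQLScript",                             # generic database SQL-script job (assumption)
            "ConnectionProfile": "SNOWFLAKE-CP",                          # placeholder connection profile
            "SQLScript": "/home/ctmagent/sql/transform_orders.sql",
            "RunAs": "SNOWFLAKE-CP",
            "Host": "agenthost",
            "eventsToWaitFor": {"Type": "WaitForEvents",
                                "Events": [{"Event": "Staged_For_Snowflake"}]},
            "eventsToAdd": {"Type": "AddEvents",
                            "Events": [{"Event": "Snowflake_Transform_Done"}]}
        },
        "Refresh_Tableau": {
            "Type": "Job:Command",
            "Command": "tabcmd refreshextracts --datasource SalesOrders", # placeholder refresh command
            "RunAs": "ctmagent",
            "Host": "agenthost",
            "eventsToWaitFor": {"Type": "WaitForEvents",
                                "Events": [{"Event": "Snowflake_Transform_Done"}]}
        }
    }
}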

Scale Airflow with centralized orchestration

Apache Airflow is great for DAG orchestration—but complexity can grow quickly. That’s where Control-M comes in.

Control-M works with Airflow to:

  • Centralize DAG execution and integrate with other toolchains
  • Track job status visually with SLA and error-handling logic
  • Trigger Airflow jobs from upstream events or use them within larger workflows

Best of both worlds: Use Airflow where it excels but orchestrate across your stack with Control-M to ensure continuity and control.
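
Control-M provides a dedicated Airflow integration for this; purely as an illustration of the kind of call involved, the sketch below triggers a DAG run through Airflow's stable REST API. The Airflow URL, credentials, and DAG ID are placeholders, and this is not the Control-M integration itself.

import requests

AIRFLOW_URL = "https://airflow.example.com/api/v1"   # placeholder Airflow webserver
DAG_ID = "daily_sales_pipeline"                      # placeholder DAG

# Trigger a DAG run via Airflow's stable REST API (basic auth assumed for simplicity).
resp = requests.post(
    f"{AIRFLOW_URL}/dags/{DAG_ID}/dagRuns",
    auth=("airflow_user", "airflow_password"),
    json={"conf": {"triggered_by": "controlm"}},     # optional run configuration
)
resp.raise_for_status()
run = resp.json()
print("Triggered DAG run:", run["dag_run_id"], "state:", run["state"])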

Deliver always-fresh analytics with Power BI, Looker, and Tableau

Dashboards only add value when they reflect current, accurate data. Control-M connects BI platforms to the rest of your pipeline, ensuring updates happen on time, every time.

Through native and API-based integrations, you can:

  • Trigger dashboard refreshes after upstream ELT or ML jobs complete
  • Link model outputs directly to reporting tools and decision-making
  • Monitor and resolve issues proactively before end users are affected

Example: Push SageMaker inference results into Power BI or Tableau dashboards immediately upon model run completion—no manual intervention required.
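
For instance, a downstream step orchestrated by Control-M could call the Power BI REST API to refresh the dataset behind a dashboard once the inference job completes. The sketch below assumes an Azure AD access token and known workspace and dataset IDs; all identifiers are placeholders.

import requests

ACCESS_TOKEN = "<azure-ad-access-token>"                 # placeholder; obtained from Azure AD in practice
GROUP_ID = "00000000-0000-0000-0000-000000000000"        # placeholder workspace (group) ID
DATASET_ID = "11111111-1111-1111-1111-111111111111"      # placeholder dataset ID

# Ask Power BI to refresh the dataset that feeds the dashboard.
resp = requests.post(
    f"https://api.powerbi.com/v1.0/myorg/groups/{GROUP_ID}/datasets/{DATASET_ID}/refreshes",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    json={"notifyOption": "MailOnFailure"},              # optional refresh settings
)
resp.raise_for_status()
print("Refresh request accepted, HTTP status:", resp.status_code)   # 202 on success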

Why Control-M?

If you’re running modern data platforms across multi-cloud environments, Control-M helps everything run better—without reinventing your stack. It delivers:

  • Unified governance: Centralized visibility, management, monitoring, and alerting
  • Resilience: Auto-recovery, retries, and SLA intelligence built-in
  • Seamless integration: Up to three integrations added monthly, including AWS, Azure, GCP, and many more

Control-M is the orchestration engine that makes your stack smarter—by making it work together.

See it in action

Whether you’re deploying models, syncing dashboards, or scheduling ELT jobs, Control-M helps you build smarter workflows that don’t break under pressure.

Explore real-world demos and integration guides—no forms, no sales pitch.
Take the Control-M product tour to see how it fits your stack.

BMC on BMC: Unlocking Data to Elevate Customer Experiences
https://www.bmc.com/blogs/customer-zero-data-cx/

Supporting a diverse range of business users who make hundreds of thousands of database inquiries daily, and access data from hundreds of interconnected applications, could make life miserable for the teams responsible for maintaining data integrity and making sure core business systems are always available and running on time. Not at BMC. It has made us a better company. Thousands of our employees now use self-service data access and analytics to some degree, and 71% say it has made them more productive. This blog describes how we’ve achieved large-scale self-service analytics and how it helps our teams. Our people provided the vision, and Control-M gave us the means to put the vision into practice.

The tremendous growth in data access and consumption by business users is occurring as our systems are becoming more complex. Yet accessing and processing data in new ways to support individual needs is becoming easier. We did it by providing a flexible, user-friendly framework that encourages citizen development.

BMC has an ongoing data democratization program to give business teams the ability to access data and work with it in new ways to support their individual needs. The teams responsible for supporting our IT systems were the first to take advantage of expanded access and self-service. Since then, citizen development has spread to line-of-business users. In previous blogs, we’ve explained how we leveraged Control-M (self-hosted and SaaS) to introduce data democratization and showed how some early adopters took advantage to bring improvements to finance, customer support, marketing, sales operations, and information systems support. Self-service analytics and other forms of data democratization now touch two out of every three BMC employees worldwide.

Our team of business users is a good example of how this arrangement benefits all. We are data specialists, but not IT specialists, and our work depends on access to the data inside BMC’s comprehensive Snowflake data platform. That data is the product of hundreds of BMC’s other software applications, database formats, cloud environments, ETL operations, and data streams. Because of Control-M we don’t need to know all these products and their complementary tools. It gives us the interface to work with data from multiple systems without having to ask IT to provide access to each source. Before self-service, we had to ask the IT department for help, and wait in the proverbial line along with our colleagues in customer service, HR, finance, R&D, and every other function that needed help with data, software, and integrations.

Not anymore. Control-M gives our team (and other business users throughout BMC) role-based access to all the company’s data streams and an intuitive interface to build workflows that turn that data into new business intelligence. Control-M automatically enforces policies and access controls and orchestrates business-critical processes securely on the back end. Business users are free to create their products and processes, but in doing so they do not create their own versions of the core data. Through Control-M we’ve expanded access to data without increasing risk to uptime or data security. It’s been great for IT because they are freed up to focus on innovative projects too.

Control-M is well known for its ability to connect with multiple applications and environments, but its out-of-the-box Snowflake integration is still notable. Our team is doing more with the data and powerful features available through Snowflake because Control-M manages the complex dependencies within the platform and the others it connects with. We can connect to any Snowflake endpoint, create tables in a specified database and schema and populate them with a query, start or pause Snowpipes, and introduce all of Control-M’s scheduling and dependency features into Snowflake, all while monitoring these complex operations like any other job.

That’s not to say everything works perfectly the first time. Debugging is still required but the process is faster and completely different now that we use Control-M. Before, workflows that ran fine during testing didn’t always work right in production and it took a lot of phone calls, emails, and support tickets to find out why. That doesn’t happen now because Control-M lets us take a Jobs-as-Code approach so proper scheduling and execution are built directly into the workflow. Potential problems are discovered and flagged before jobs go into production. Then our business users simply click to drill down into the workflow and identify any issues with it or its dependent jobs. We see exactly where to debug the workflow and can usually resolve the issue without raising a support ticket. This functionality has saved us (and IT Ops) a lot of time, which means BMC is delivering innovations faster.

Delivering a 360-degree customer view

Customers ultimately benefit from our ability to scale innovation because we’re more proactive and responsive in addressing their challenges and needs. Our Customer360 dashboard is a great example. It provides a comprehensive view of a customer in a single pane of glass by organizing input from Salesforce, Jira, Qualtrics, Eloqua, Gainsight, Adobe, and over 40 different sources in all. Inputs include the customer’s open support cases, activity predictions generated by AI and machine learning, account and subscription status, downloads, marketing engagements, telemetry data on product usage, CRM metrics, and even intent data from second- and third-party sources. Many of the data and metrics presented come from sources that had never been combined before and were developed by business users who had new ideas.

“It is great to be able to use Control-M to match the customer outcome from a support request to the internal details of how we operate at BMC to ensure a customer is getting the most out of its investment in our products,” says Pam Dickerman, a BMC program manager who uses the portal. Within Customer360, she found details of how a customer’s support request led to an innovation by BMC that saved the customer more than $250,000. BMC then shared the learnings throughout the company to help other customers. “It is a full circle, because Control-M helps make Customer360 so useful for us at BMC, and that leads to such amazing results for our customers.”

Customer360 has enabled us to go from being reactive to proactive in meeting customers’ individual needs; 76% of the more than 2,000 people who use the dashboard say it has improved their understanding of customers. That’s had a powerful effect on BMC because the dashboard is available to all customer-facing teams. Notably, no one is required to use Customer360 to do their jobs – the fact that more than 2,000 people use it by choice is a great testimony to its value. We consistently measure user satisfaction with the tool, and it’s earned a world-class Net Promoter Score (NPS) of 50. Users credit the dashboard for providing recurring time savings that our calculations show are significant.

User satisfaction with Customer360 and the improved customer understanding and responsiveness it produces show the real-world benefits of making citizen development available across an organization. The way the dashboard was built and how it functions show the power of Control-M.

As noted, the single-screen dashboard shows information that was created by accessing and blending input from over 40 sources, including our enterprise data warehouse, departmental databases, in-house servers, cloud-hosted applications, and more. Bringing these and other sources into a single environment has been seamless because Control-M has hundreds of out-of-the-box integrations – after years of developing BI and analytics solutions and creating thousands of workflows, we haven’t found an environment that we couldn’t connect to yet. When BMC invests in new software, these integrations shorten the time to value.

Control-M also brings our entire data team together. Data architects, data engineers, BI analysts, MLOps engineers, and data scientists all work with Control-M while continuing to use their favorite and job-specific tools. Control-M provides a common platform, enforces role-based access, orchestrates activity, and prevents workflow conflicts so users can focus on creating, not integrating and managing.

Time savings have been a clear and documented benefit of using Control-M as our single platform to support workflow development and orchestration. An even greater benefit, which we can’t measure, is the trust Control-M has created in our data and processes. Without this platform, there is simply no way IT would be able to give users the keys to enterprise data and say, ‘Have at it!’ Having Control-M as a platform is a key enabler for BMC’s data science and engineering teams to do what we do because it lets us focus on innovation.

In the very near future, Control-M will be helping BMC business users take advantage of self-service to innovate with AI. BMC recently introduced Jett, our first generative AI (GenAI) advisor for Control-M SaaS. Jett lets users interact with Control-M SaaS simply by speaking in their natural language. That will make it easier for us to continually optimize and troubleshoot our workflows.

What excites us most is what’s coming next. GenAI is changing the game. From conversational data experiences powered by NLP to intelligent agents that push insights where they matter most, the data and analytics landscape is evolving rapidly. Orchestration will be more important than ever in this next chapter, not just to keep up, but to lead. With a strong foundation in Control-M SaaS in our data ecosystem, we’re ready to take on what’s next.

Introducing Control-M 22
https://www.bmc.com/blogs/introducing-controlm-22/

The technology required to successfully run a business has grown in both the number of disparate applications and data, and the complexity of connecting them. This trend will absolutely continue as new technologies are introduced. Application and data workflow orchestration is essential to dynamically connect people, applications, and data to the business outcomes that matter most. Leveraging the capabilities we continuously build for your needs unlocks the extraordinary potential of your workflows and gives you a competitive edge.

That’s why I’m excited to announce the newest version of BMC’s industry-leading application and data workflow orchestration platform, Control-M 22!

Control-M 22 focuses on updates that align with four main themes from our product roadmap: Scale Extraordinary Results, Simplify and Speed Operations, Integrate Data Everywhere, and Collaborate with Agility.

Let’s take a closer look at each theme and its corresponding updates.

Scale Extraordinary Results – Control-M 22 reflects our continuous focus on making sure Control-M scales production workflow orchestration, regardless of workflow complexity or your chosen deployment method – whether Control-M is self-hosted in a datacenter, deployed in the cloud, consumed as a service, or operated in a hybrid model combining self-hosted and SaaS.

As part of our effort to support customers in their deployment choices, Control-M 22 introduces multiple enhancements to assist those who choose to transition from self-hosted to SaaS. These enhancements expand SaaS capabilities and simplify the transition process. Among them, Managed File Transfer Enterprise (MFT/E) – which enables users to share MFT capabilities with external partners – is now available on SaaS. In addition, Control-M 22 allows a single agent to connect to multiple servers, making it possible to reuse self-hosted agents in a SaaS environment and facilitating the move to SaaS.

We also continue to support customers who choose to deploy Control-M in the cloud. For example, Control-M 22 enhances the Control-M containerized agent by adding file transfer capability.

Additionally, Control-M 22 strengthens integrations with specialized tools in your ecosystem, enhancing its capabilities by leveraging their specific functionalities.

The CyberArk integration has been enhanced with a new CyberArk REST API interface, enabling secure storage of Control-M secrets for cloud deployments. Application Performance Monitoring integration is now complete in Control-M 22, enabling comprehensive observability of all Java processes and services and giving customers deeper insights into system performance.

Simplify and Speed Operations – This theme reflects our ongoing commitment to efficiency and covers the capabilities that help you and your teams accelerate results with less effort.

It includes a truly revolutionary capability: Jett.

Jett, the GenAI-powered advisor for Control-M SaaS, puts instant expertise at your fingertips. Users from across the business – from experts to beginners – can ask workflow-related questions in their own words and in their own language. Jett provides text-based summaries that highlight key insights as well as easy-to-read charts and visuals.

The ability to immediately understand all the details of your workflows creates an infinite range of possibilities and accelerates key scenarios, such as problem determination, compliance verification, and workflow optimization.

Integrate Data Everywhere – Orchestration is at the heart of successful DataOps projects – and that’s where we continue to invest. Our goal remains to connect all your diverse data systems to help you achieve your DataOps objectives.

Control-M 22 enhances Managed File Transfer, an essential component of data pipelines. These enhancements include strengthened security capabilities with more granular access controls, providing greater protection for your data pipeline workflows.

In addition to the core release, we’re continuously expanding our ecosystem with new Control-M integrations delivered on a monthly basis. These updates extend functionality across a variety of platforms and services, helping you stay connected in a rapidly evolving data landscape.

Collaborate with Agility – Control-M 22 puts a strong focus on the web interface, which we believe is the most effective way to ensure broad accessibility and collaboration. We continue to enhance the web interface with new capabilities tailored to different user roles, empowering teams across the business to work together efficiently from a single, unified platform.

Finally, Control-M 22 brings important enhancements to the Unified View, which was recently introduced to provide a single point of control for customers managing both self-hosted and SaaS environments—whether permanently or temporarily during a transition period. With this release, Unified View expands its reach by adding support for new platforms and introducing high availability (HA) for self-hosted servers, further improving resilience and scalability.

Application and data workflow orchestration is vital to business success. With Control-M 22, you can build and run the most important platform there is: your own.

For more information about Control-M 22, join our webinar on May 20 (or watch on demand), check out the release notes and the “what’s new” section of our website to see a complete list of features.

Unlocking Efficiency with Control-M Automation API
https://www.bmc.com/blogs/unlocking-efficiency-with-controlm-automation-api/

Introduction

In the rapidly evolving digital world, businesses are always looking for ways to optimize processes, minimize manual tasks, and boost overall efficiency. For those that depend on job scheduling and workload automation, Control-M from BMC Software has been a reliable tool for years. Now, with the arrival of the Control-M Automation API, organizations can elevate their automation strategies even further. In this blog post, we’ll delve into what the Control-M Automation API offers, the advantages it brings, and how it can help revolutionize IT operations.

What is the Control-M Automation API?

The Control-M Automation API from BMC Software lets developers automate workload scheduling and management in Control-M programmatically. Built on a RESTful architecture, the API enables an API-first, decentralized method for building, testing, and deploying jobs and workflows. It offers services for managing job definitions, deploying packages, provisioning agents, and setting up host groups, facilitating seamless integration with various tools and workflows.

With the API, you can:

  • Streamline job submission, tracking, and control through automation.
  • Connect Control-M seamlessly with DevOps tools such as Jenkins, GitLab, and Ansible.
  • Develop customized workflows and applications to meet specific business requirements.
  • Access real-time insights and analytics to support informed decision-making.

Key Benefits of Using Control-M Automation API

  • Infrastructure-as-Code (IaC) for Workload Automation
    Enables users to define jobs as code using JSON, allowing for version control and better collaboration. It also supports automation through GitOps workflows, making workload automation an integral part of CI/CD pipelines.
  • RESTful API for Programmatic Job Management
    Provides a RESTful API to create, update, delete, and monitor jobs from any programming language (Python, Java, PowerShell, etc.). It allows teams to automate workflows without relying on a graphical interface, enabling CI/CD integration and process automation.
  • Enhanced Automation Capabilities
    By leveraging the Control-M Automation API, organizations can automate routine tasks, decreasing reliance on manual processes and mitigating the potential for human error. This capability is particularly valuable for managing intricate, high-volume workflows.
  • Seamless Integration
    By serving as a bridge between Control-M and external tools, the Control-M Automation API enables effortless integration with CI/CD pipelines, cloud services, and third-party applications—streamlining workflows into a unified automation environment.
  • Improved Agility
    Through Control-M Automation API integration, organizations gain the flexibility to accelerate application deployments and dynamically scale operations, ensuring responsive adaptation to market changes.
  • Real-Time Monitoring and Reporting
    The Control-M Automation API provides real-time access to job statuses, logs, and performance metrics. This enables proactive monitoring and troubleshooting, ensuring smoother operations.
  • Customization and Extensibility
    The API provides the building blocks to develop purpose-built solutions matching your exact specifications, including custom visualization interfaces and integration with specialized third-party applications.

Use Cases for Control-M Automation API

  • DevOps Integration
    Integrate Control-M with your DevOps pipeline to automate the deployment of applications and infrastructure. For example, you can trigger jobs in Control-M from Jenkins or GitLab, ensuring a seamless flow from development to production.
  • Cloud Automation
    Leverage the Control-M API to handle workloads across hybrid and multi-cloud setups. Streamline resource provisioning through automation, track cloud-based tasks, and maintain adherence to organizational policies.
  • Data Pipeline Automation
    Automate data ingestion, transformation, and loading processes. The API can be used to trigger ETL jobs, monitor their progress, and ensure data is delivered on time.
  • Custom Reporting and Analytics
    Extract job data and generate custom reports for stakeholders. The API can be used to build dashboards that provide insights into job performance, SLA adherence, and resource utilization.
  • Event-Driven Automation
    Set up event-driven workflows where jobs are triggered based on specific conditions or events. For example, you can automate the restart of failed jobs or trigger notifications when a job exceeds its runtime.

Example 1: Scheduling a Job with Python

Here is an example of using the Control-M Automation API to define and deploy a job in Control-M. For this, we’ll use a Python script (you’ll need a Control-M environment with API access set up).
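
The original post shows this script as a screenshot, so the code below is a minimal sketch of what it might look like, assuming an Automation API endpoint at https://controlm.example.com:8443/automation-api and a user with API access. Folder, job, server, and host names are illustrative, and the endpoint paths and deploy form-field name follow the documented Automation API pattern (verify them against your version).

import json
import requests

# Placeholder endpoint and credentials; adjust for your environment.
ENDPOINT = "https://controlm.example.com:8443/automation-api"
USERNAME = "apiuser"
PASSWORD = "apipassword"

# 1. Log in to obtain a session token.
login = requests.post(f"{ENDPOINT}/session/login",
                      json={"username": USERNAME, "password": PASSWORD},
                      verify=False)  # use a trusted certificate in real environments
login.raise_for_status()
headers = {"Authorization": "Bearer " + login.json()["token"]}

# 2. Define a folder containing one job, using the Jobs-as-Code JSON format.
definitions = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctmserver",     # placeholder Control-M/Server name
        "DemoJob": {
            "Type": "Job:Command",
            "Command": "echo Hello from the Automation API",
            "RunAs": "ctmagent",
            "Host": "agenthost"
        }
    }
}

# 3. Deploy the definitions; the response lists the deployed folder and jobs.
files = {"definitionsFile": ("demo_folder.json", json.dumps(definitions), "application/json")}
deploy = requests.post(f"{ENDPOINT}/deploy", headers=headers, files=files, verify=False)
deploy.raise_for_status()
print(deploy.json())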

The output of the script confirms the successful deployment of the folder and its jobs in Control-M, and the deployed folder can then be verified in the Control-M GUI.

Example 2: Automating Job Submission with Python

Here’s a simple example of how you can use the Control-M Automation API to submit a job using Python:

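As with the first example, the original code appears as a screenshot; the sketch below shows how the previously deployed folder could be ordered through the run service, again with placeholder endpoint, credentials, server, and folder names.

import requests

# Same placeholder endpoint and credentials as in Example 1.
ENDPOINT = "https://controlm.example.com:8443/automation-api"

login = requests.post(f"{ENDPOINT}/session/login",
                      json={"username": "apiuser", "password": "apipassword"},
                      verify=False)
login.raise_for_status()
headers = {"Authorization": "Bearer " + login.json()["token"]}

# Order (run) the previously deployed folder; the response contains the run ID.
order = requests.post(f"{ENDPOINT}/run/order",
                      headers=headers,
                      json={"ctm": "ctmserver", "folder": "DemoFolder"},
                      verify=False)
order.raise_for_status()
run = order.json()
print("Run ID:", run.get("runId"))

# Optionally check the status of the run.
status = requests.get(f"{ENDPOINT}/run/status/{run['runId']}",
                      headers=headers, verify=False)
print(status.json())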

Execute the above Python code and it will return the “RUN ID”.

The folder is then ordered and executed, which can be verified in the Control-M GUI.

Getting Started with Control-M Automation API

  • Access the API Documentation
    BMC provides comprehensive documentation for the Control-M Automation API, including endpoints, parameters, and examples. Familiarize yourself with the documentation to understand the capabilities and limitations of the API.
  • Set Up Authentication
    The API uses token-based authentication. Generate an API token from the Control-M GUI and use it to authenticate your requests.
  • Explore Sample Scripts
    BMC offers sample scripts and code snippets in various programming languages (Python, PowerShell, etc.) to help you get started. Use these as a reference to build your own integrations.
  • Start with Simple Use Cases
    Begin by automating simple tasks, such as job submission or status monitoring. Once you’re comfortable, move on to more complex workflows.
  • Leverage Community and Support
    Join the BMC community forums to connect with other users, share ideas, and troubleshoot issues. BMC also offers professional support services to assist with implementation.

Conclusion

The Control-M Automation API is a game-changer for organizations looking to enhance their automation capabilities. By enabling seamless integration, real-time monitoring, and custom workflows, the API empowers businesses to achieve greater efficiency and agility. Whether you’re a developer, IT professional, or business leader, now is the time to explore the potential of the Control-M Automation API and unlock new levels of productivity.

Model Training & Evaluation for Financial Fraud Detection with Amazon SageMaker & Control-M
https://www.bmc.com/blogs/fraud-detection-controlm-sagemaker/


Model training and evaluation are fundamental in payment fraud detection because the effectiveness of a machine learning (ML)-based fraud detection system depends on its ability to accurately identify fraudulent transactions while minimizing false positives. Given the high volume, speed, and evolving nature of financial fraud, properly trained and continuously evaluated models are essential for maintaining accuracy and efficiency. Fraud detection requires a scalable, automated, and efficient approach to analyzing vast transaction datasets and identifying fraudulent activities.

This blog presents an ML-powered fraud detection pipeline built on Amazon Web Services (AWS) solutions—Amazon SageMaker, Amazon Redshift, Amazon EKS, and Amazon Athena—and orchestrated using Control-M to ensure seamless automation, scheduling, and workflow management in a production-ready environment. The goal is to train three models—logistic regression, decision tree, and multi-layer perceptron (MLP) classifier—and evaluate each across three metrics: precision, recall, and accuracy. The results will help decide which model can be promoted into production.

While model training and evaluation are the end goal, they are only one part of a larger pipeline. The pipeline presented in this blog integrates automation at every stage, from data extraction and preprocessing to model training, evaluation, and result visualization. By leveraging Control-M’s orchestration capabilities, the workflow ensures minimal manual intervention, optimized resource utilization, and efficient execution of interdependent jobs.

Process Flow:

Figure 1. The end-to-end pipeline.

Key architectural highlights include:

  • Automated data extraction and movement from Amazon Redshift to Amazon S3 using Control-M Managed File Transfer (MFT)
  • Orchestrated data validation and preprocessing with AWS Lambda and Kubernetes (Amazon EKS)
  • Automated model training and evaluation in Amazon SageMaker with scalable compute resources
  • Scheduled performance monitoring and visualization using Amazon Athena and QuickSight
  • End-to-end workflow orchestration with Control-M, enabling fault tolerance, dependency management, and optimized scheduling

In production environments, manual execution of ML pipelines is not feasible due to the complexity of handling large-scale data, model retraining cycles, and continuous monitoring. By integrating Control-M for workflow orchestration, this solution ensures scalability, efficiency, and real-time fraud detection while reducing operational overhead. The blog also discusses best practices, security considerations, and lessons learned to help organizations build and optimize their fraud detection systems with robust automation and orchestration strategies.

Amazon SageMaker:

The core service in this workflow is Amazon SageMaker, AWS’s fully managed ML service, which enables rapid development and deployment of ML models at scale. We’ve automated our ML workflow using Amazon SageMaker Pipelines, which provides a powerful framework for orchestrating complex ML workflows. The result is a fraud detection solution that demonstrates the power of combining AWS’s ML capabilities with its data processing and storage services. This approach not only accelerates development but also ensures scalability and reliability in production environments.

Dataset Overview

The dataset used for this exercise is sourced from Kaggle, offering an excellent foundation for evaluating model performance on real-world-like data.

The Kaggle dataset used for this analysis provides a synthetic representation of financial transactions, designed to replicate real-world complexities while integrating fraudulent behaviors. Derived from the PaySim simulator, which uses aggregated data from financial logs of a mobile money service in an African country, the dataset is an invaluable resource for fraud detection and financial analysis research.

The dataset includes the following features:

  • step: Time unit in hours over a simulated period of 30 days.
  • type: Transaction types such as CASH-IN, CASH-OUT, DEBIT, PAYMENT, and TRANSFER.
  • amount: Transaction value in local currency
  • nameOrig: Customer initiating the transaction.
  • oldbalanceOrg/newbalanceOrig: Balance before and after the transaction for the initiating customer.
  • nameDest: Recipient of the transaction.
  • oldbalanceDest/newbalanceDest: Balance before and after the transaction for the recipient.
  • isFraud: Identifies transactions involving fraudulent activities.
  • isFlaggedFraud: Flags unauthorized large-scale transfers.

Architecture

The pipeline has the following architecture and will be orchestrated using Control-M.

Figure 2. Pipeline architecture.

Note: All of the code artifacts used are available at this link.

Control-M Integration Plug-ins Used in This Architecture

To orchestrate this analysis pipeline, we leverage Control-M integration plug-ins that seamlessly connect with various platforms and services, including:

    1. Control-M for SageMaker:
      • Executes ML model training jobs on Amazon SageMaker.
      • Enables integration with SageMaker to trigger training jobs, monitor progress, and retrieve outputs.
    2. Control-M for Kubernetes:
      • Executes Python scripts for data processing and normalization within an Amazon EKS environment.
      • Ideal for running containerized jobs as part of the data preparation process.
    3. Control-M Managed File Transfer:
      • Facilitates file movement between Amazon S3 buckets and other storage services.
      • Ensures secure and automated data transfers to prepare for analysis.
    4. Control-M Databases:
      • Enables streamlined job scheduling and execution of SQL scripts, stored procedures, and database management tasks across multiple database platforms, ensuring automation and consistency in database operations.
    5. Control-M for AWS Lambda:
      • Enables seamless scheduling and execution of AWS Lambda functions, allowing users to automate serverless workflows, trigger event-driven processes, and manage cloud-based tasks efficiently.
      • Ensures orchestration, monitoring, and automation of Lambda functions within broader enterprise workflows, improving operational efficiency and reducing manual intervention

AWS Services used in this Architecture

  1. Amazon S3: Amazon S3 is a completely managed Object Storage service.
  2. Amazon SageMaker: Amazon SageMaker is a fully managed ML service.
  3. Amazon EKS: Amazon Elastic Kubernetes Service (Amazon EKS) is a fully managed Kubernetes service that enables you to run Kubernetes seamlessly in both AWS Cloud and on-premises data centers.
  4. Amazon Redshift: Amazon Redshift is a popular Cloud Data Warehouse provided by AWS.

Setting up the Kubernetes environment

The Amazon EKS Kubernetes environment is central to the pipeline’s data preprocessing stage. It runs Python scripts to clean, normalize, and structure the data before it is passed to the ML models. Setting up the Kubernetes environment involves the following steps:

  • Amazon EKS Cluster Setup:
    • Use Terraform to create an Amazon EKS cluster or set up your Kubernetes environment in any cloud provider.
    • Ensure the Kubernetes nodes can communicate with the cluster and vice versa.
  • Containerized Python Script: Package the preprocessing script (main.py) into a container image and push it to Amazon ECR so the Kubernetes job can pull and run it.

For a detailed guide on setting up a Kubernetes environment for similar workflows, refer to this blog, where we described the Kubernetes setup process step by step.

For a comprehensive walkthrough on setting up Snowflake in similar pipelines, please refer to this blog.

Workflow summary

  1. Redshift_Unload Job
  • Action: Executes a SQL script (copy_into_s3.sql) to extract structured data from Amazon Redshift and store it in Amazon S3.
  • Purpose: Moves raw data into an accessible S3 bucket for subsequent transformations.
  • Next Step: Triggers S3_to_S3_Transfer to move data from the warehouse bucket to the processing bucket.
  2. S3_to_S3_Transfer Job
  • Action: Uses Control-M’s Managed File Transfer (MFT) to move the dataset from the sam-sagemaker-warehouse-bucket to the bf-sagemaker bucket.
  • Purpose: Ensures the data is available in the right location for preprocessing and renames it to Synthetic_Financial_datasets_log.csv.
  • Next Step: Triggers the Data_Quality_Check job.
  3. Data_Quality_Check Job
  • Action: Runs an AWS Lambda function (SM_ML_DQ_Test) to validate the dataset.
  • Purpose: Ensures the CSV file contains at least 5 columns and more than 1000 rows, preventing corrupt or incomplete data from entering the ML pipeline.
  • Next Step: Triggers EKS-Preprocessing-job for data transformation.
  4. EKS-Preprocessing-Job
  • Action: Executes a Kubernetes job to clean and transform the dataset stored in Amazon S3.
  • Purpose:
    • Runs a Python script (main.py) inside a container to process Synthetic_Financial_datasets_log.csv
    • Generates a cleaned and structured dataset (processed-data/output.csv).
  • Configuration Details:
    • Image: new-fd-repo stored in Amazon ECR
    • Environmental variables: Defines S3 input/output file locations
    • Resource allocation: Uses 2Gi memory, 1 CPU (scales up to 4Gi memory, 2 CPUs)
    • IAM permissions: Uses a Kubernetes service account for S3 access
    • Logging & cleanup: Retrieves logs and deletes the job upon completion
  • Next step: Triggers the Amazon SageMaker training job.
  5. Amazon SageMaker_TE_Pipeline
  • Action: Runs the TrainingAndEvaluationPipeline in Amazon SageMaker.
  • Purpose:
    • Trains and evaluates multiple ML models on the preprocessed dataset (processed-data/output.csv).
    • Stores trained model artifacts and evaluation metrics in an S3 bucket.
    • Ensures automatic resource scaling for efficient processing.
  • Next step: Triggers Load_Amazon_Athena_Table to store results in Athena for visualization.
  6. Load_Amazon_Athena_Table Job
  • Action: Runs an AWS Lambda function (athena-query-lambda) to load the evaluation metrics into Amazon Athena.
  • Purpose:
    • Executes a SQL query to create/update an Athena table (evaluation_metrics).
    • Allows QuickSight to query and visualize the model performance results. (A sketch of what such a Lambda function might look like follows this list.)
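
The athena-query-lambda function itself is not listed in the post. A minimal sketch of what it might do is shown below, assuming the evaluation metrics are written as CSV under a bucket prefix and that an Athena query-results location is configured; all locations, the database name, and the column names are assumptions.

import boto3

athena = boto3.client("athena")

# Placeholder locations; the actual buckets and prefixes are not given in the post.
METRICS_LOCATION = "s3://bf-sagemaker/evaluation-metrics/"
RESULTS_LOCATION = "s3://bf-sagemaker/athena-query-results/"

CREATE_TABLE_SQL = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS evaluation_metrics (
    model_name string,
    precision_score double,
    recall_score double,
    accuracy_score double
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '{METRICS_LOCATION}'
TBLPROPERTIES ('skip.header.line.count'='1')
"""

def lambda_handler(event, context):
    # Create or refresh the Athena table that QuickSight queries.
    response = athena.start_query_execution(
        QueryString=CREATE_TABLE_SQL,
        QueryExecutionContext={"Database": "default"},           # placeholder database
        ResultConfiguration={"OutputLocation": RESULTS_LOCATION},
    )
    return {"queryExecutionId": response["QueryExecutionId"]}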

How the steps are connected

  1. Redshift → S3 Transfer: Data is extracted from Amazon Redshift and moved to Amazon S3.
  2. Data validation and preprocessing: The data quality check ensures clean input before transformation using Kubernetes.
  3. ML Training: Amazon SageMaker trains and evaluates multiple ML models.
  4. Athena and QuickSight integration: The model evaluation results are queried through Athena, enabling real-time visualization in Amazon QuickSight.
  5. Final outcome: A streamlined, automated ML workflow that delivers a trained model and performance insights for further decision-making.

This detailed workflow summary ties each step together while emphasizing the critical roles played by the Kubernetes preprocessing job and the Amazon SageMaker training pipeline.

Control-M workflow definition

Figure 3. Control-M workflow definition.

In the next section we will go through defining each of these jobs. The jobs can be defined using a drag-and-drop, no-code approach in the Planning domain of Control-M, or they can be defined as code in JSON. For the purposes of this blog, we will use the as-code approach.

Amazon Redshift and file transfer workflows

Redshift_Unload Job

Type: Job:Database:SQLScript

Action: Executes a SQL script in Amazon Redshift to unload data from Redshift tables into an S3 bucket.

Description: This job runs a predefined SQL script (copy_into_s3.sql) stored on the Control-M agent to export structured data from Redshift into Amazon S3. The unloaded data is prepared for subsequent processing in the ML pipeline.

Dependencies: The job runs independently but triggers the Copy_into_bucket-TO-S3_to_S3_MFT-262 event upon successful completion.

Key configuration details: Redshift SQL script execution

SQL script:

UNLOAD ('SELECT * FROM SageTable')
TO 's3://sam-sagemaker-warehouse-bucket/Receiving Folder/Payments_RS.csv'
IAM_ROLE 'arn:aws:iam::xyz:role/jogoldbeRedshiftReadS3'
FORMAT AS csv
HEADER
ALLOWOVERWRITE
PARALLEL OFF
DELIMITER ','
MAXFILESIZE 6GB; -- 6GB max per file

Event handling

  • Events to trigger:
    • Copy_into_bucket-TO-S3_to_S3_MFT-262 → Signals that the data has been successfully unloaded to S3 and is ready for further processing or transfers.

See an example below:

"Redshift_Unload" : {
"Type" : "Job:Database:SQLScript",
"ConnectionProfile" : "ZZZ-REDSHIFT",
"SQLScript" : "/home/ctmagent/redshift_sql/copy_into_s3.sql",
"Host" : "<<host details>>",
"CreatedBy" : "<<creator’s email>>",
"RunAs" : "ZZZ-REDSHIFT",
"Application" : "SM_ML_RS",
"When" : {
"WeekDays" : [ "NONE" ],
"MonthDays" : [ "ALL" ],
"DaysRelation" : "OR"
},
"eventsToAdd" : {
"Type" : "AddEvents",
"Events" : [ {
"Event" : "Copy_into_bucket-TO-S3_to_S3_MFT-262"
} ]
}
}

S3_to_S3_Transfer job

Type: Job:FileTransfer

Action: Transfers a file from one S3 bucket to another using Control-M Managed File Transfer (MFT).

Description: This job moves a dataset (Payments_RS.csv000) from sam-sagemaker-warehouse-bucket to bf-sagemaker, renaming it as Synthetic_Financial_datasets_log.csv in the process. This prepares the data for further processing and validation.

Dependencies: The job waits for Copy_into_bucket-TO-S3_to_S3_MFT-262 to ensure that data has been successfully exported from Redshift and stored in S3 before initiating the transfer.

Key configuration details:

  • Source bucket: sam-sagemaker-warehouse-bucket
    • Source path: /Receiving Folder/Payments_RS.csv000
  • Destination bucket: bf-sagemaker
    • Destination path: /temp/
    • Renamed file: Synthetic_Financial_datasets_log.csv at the destination.
  • Connection profiles: Uses the MFTS3 profile for both the source and destination S3 buckets.
  • File watcher: Monitors the source file for readiness with a minimum detected size of 200 MB.

Event handling

  • Events to wait for:
    • Copy_into_bucket-TO-S3_to_S3_MFT-262 → Ensures data has been exported from Redshift to S3 before transferring it to another S3 bucket.
  • Events to trigger:
    • S3_to_S3_MFT-TO-Data_Quality_Check → Notifies the next step that the dataset is ready for validation.
    • SM_ML_Snowflake_copy-TO-SM_Model_Train_copy → Signals the beginning of the model training process using the processed data.

See an example below:

"S3_to_S3_Transfer" : {
"Type" : "Job:FileTransfer",
"ConnectionProfileSrc" : "MFTS3",
"ConnectionProfileDest" : "MFTS3",
"S3BucketNameSrc" : "sam-sagemaker-warehouse-bucket",
"S3BucketNameDest" : "bf-sagemaker",
"Host" : : "<<host details>>",
"CreatedBy" : : "<<creator’s email>>",
"RunAs" : "MFTS3+MFTS3",
"Application" : "SM_ML_RS",
"Variables" : [ {
"FTP-LOSTYPE" : "Unix"
}, {
"FTP-CONNTYPE1" : "S3"
}, {
"FTP-ROSTYPE" : "Unix"
}, {
"FTP-CONNTYPE2" : "S3"
}, {
"FTP-CM_VER" : "9.0.00"
}, {
"FTP-OVERRIDE_WATCH_INTERVAL1" : "0"
}, {
"FTP-DEST_NEWNAME1" : "Synthetic_Financial_datasets_log.csv"
} ],
"FileTransfers" : [ {
"TransferType" : "Binary",
"TransferOption" : "SrcToDestFileWatcher",
"Src" : "/Receiving Folder/Payments_RS.csv000",
"Dest" : "/temp/",
"ABSTIME" : "0",
"TIMELIMIT" : "0",
"UNIQUE" : "0",
"SRCOPT" : "0",
"IF_EXIST" : "0",
"DSTOPT" : "1",
"FailJobOnSourceActionFailure" : false,
"RECURSIVE" : "0",
"EXCLUDE_WILDCARD" : "0",
"TRIM" : "1",
"NULLFLDS" : "0",
"VERNUM" : "0",
"CASEIFS" : "0",
"FileWatcherOptions" : {
"VariableType" : "Global",
"MinDetectedSizeInBytes" : "200000000",
"UnitsOfTimeLimit" : "Minutes"
},
"IncrementalTransfer" : {
"IncrementalTransferEnabled" : false,
"MaxModificationAgeForFirstRunEnabled" : false,
"MaxModificationAgeForFirstRunInHours" : "1"
},
"DestinationFilename" : {
"ModifyCase" : "No"
}
} ],
"When" : {
"WeekDays" : [ "NONE" ],
"MonthDays" : [ "ALL" ],
"DaysRelation" : "OR"
},
"eventsToWaitFor" : {
"Type" : "WaitForEvents",
"Events" : [ {
"Event" : "Copy_into_bucket-TO-S3_to_S3_MFT-262"
} ]
},
"eventsToAdd" : {
"Type" : "AddEvents",
"Events" : [ {
"Event" : "S3_to_S3_MFT-TO-Data_Quality_Check"
} ]
},
"eventsToDelete" : {
"Type" : "DeleteEvents",
"Events" : [ {
"Event" : "Copy_into_bucket-TO-S3_to_S3_MFT-262"
} ]
}
},
"eventsToAdd" : {
"Type" : "AddEvents",
"Events" : [ {
"Event" : "SM_ML_Snowflake_copy-TO-SM_Model_Train_copy"
} ]
}
}

Data_Quality_Check job

Type: Job:AWS Lambda

Action: Executes an AWS Lambda function to perform a data quality check on a CSV file.

Description: This job invokes the Lambda function SM_ML_DQ_Test to validate the structure and integrity of the dataset. It ensures that the CSV file has at least 5 columns and contains more than 1,000 rows before proceeding with downstream processing. The job logs execution details for review.

Dependencies: The job waits for the event S3_to_S3_MFT-TO-Data_Quality_Check, ensuring that the file transfer between S3 buckets is complete before running data validation.

Key configuration details:

  • Lambda function name: SM_ML_DQ_Test
  • Execution environment:
    • Host: Runs on ip-172-31-18-169.us-west-2.compute.internal
    • Connection profile: JOG-AWS-LAMBDA
    • RunAs: JOG-AWS-LAMBDA
  • Validation criteria:
    • ✅ The CSV file must have at least 5 columns.
    • ✅ The CSV file must contain more than 1,000 rows.
  • Logging: Enabled (Append Log to Output: checked) for debugging and validation tracking.

Event handling:

  • Events to wait for:
    • The job waits for S3_to_S3_MFT-TO-Data_Quality_Check to confirm that the dataset has been successfully transferred and is available for validation.
  • Events to delete:
    • The event S3_to_S3_MFT-TO-Data_Quality_Check is deleted after processing to ensure workflow continuity and prevent reprocessing.

See an example below:

"Data_Quality_Check" : {
      "Type" : "Job:AWS Lambda",
      "ConnectionProfile" : "JOG-AWS-LAMBDA",
      "Append Log to Output" : "checked",
      "Function Name" : "SM_ML_DQ_Test",
      "Parameters" : "{}",
      "Host" : : "<<host details>>",
      "CreatedBy" : : "<<creator’s email>>",
      "Description" : "This job performs a data quality check on CSV file to make sure it has at least 5 columns and more than 1000 rows",
      "RunAs" : "JOG-AWS-LAMBDA",
      "Application" : "SM_ML_RS",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "eventsToWaitFor" : {
        "Type" : "WaitForEvents",
        "Events" : [ {
          "Event" : "S3_to_S3_MFT-TO-Data_Quality_Check"
        } ]
      },
      "eventsToDelete" : {
        "Type" : "DeleteEvents",
        "Events" : [ {
          "Event" : "S3_to_S3_MFT-TO-Data_Quality_Check"
        } ]
      }
    }
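
The Lambda function itself (SM_ML_DQ_Test) is not shown in the post. A minimal sketch that enforces the stated checks (at least 5 columns and more than 1,000 rows) might look like the following; the event fields and default bucket/key are assumptions that mirror the transfer step earlier in the pipeline.

import csv
import io

import boto3

s3 = boto3.client("s3")

MIN_COLUMNS = 5
MIN_ROWS = 1000

def lambda_handler(event, context):
    # Default bucket/key are assumptions; the real function may receive them differently.
    bucket = event.get("bucket", "bf-sagemaker")
    key = event.get("key", "temp/Synthetic_Financial_datasets_log.csv")

    # For very large files a streaming or sampled read would be preferable.
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
    reader = csv.reader(io.StringIO(body))

    header = next(reader)
    row_count = sum(1 for _ in reader)

    if len(header) < MIN_COLUMNS:
        raise ValueError(f"Data quality check failed: only {len(header)} columns found")
    if row_count <= MIN_ROWS:
        raise ValueError(f"Data quality check failed: only {row_count} data rows found")

    return {"columns": len(header), "rows": row_count, "status": "PASSED"}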

Amazon SageMaker: Model training and evaluation workflows

EKS-Preprocessing-Job

Type: Job:Kubernetes

Action: Executes a Kubernetes job on an Amazon EKS cluster to preprocess financial data stored in an S3 bucket.

Description: This job runs a containerized Python script that processes raw financial datasets stored in bf-sagemaker. It retrieves the input file Synthetic_Financial_datasets_log.csv, applies necessary transformations, and outputs the cleaned dataset as processed-data/output.csv. The Kubernetes job ensures appropriate resource allocation, security permissions, and logging for monitoring.

Dependencies: The job runs independently but triggers the sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262 event upon completion, signaling that the processed data is ready for model training in SageMaker.

Key configuration details:

Kubernetes job specification

  • Image: 623469066856.dkr.ecr.us-west-2.amazonaws.com/new-fd-repo
  • Command execution: Runs the following script inside the container:

python3 /app/main.py -b bf-sagemaker -i Synthetic_Financial_datasets_log.csv -o processed-data/output.csv

  • Environment variables:
    • S3_BUCKET: bf-sagemaker
    • S3_INPUT_FILE: Synthetic_Financial_datasets_log.csv
    • S3_OUTPUT_FILE: processed-data/output.csv

Resource allocation

  • Requested resources: 2Gi memory, 1 CPU
  • Limits: 4Gi memory, 2 CPUs
  • Volume mounts: Temporary storage mounted at /tmp

Execution environment

  • Host: Runs on mol-agent-installation-sts-0
  • Connection profile: MOL-K8S-CONNECTION-PROFILE for EKS cluster access
  • Pod logs: Configured to retrieve logs upon completion (Get Pod Logs: Get Logs)
  • Job cleanup: Deletes the Kubernetes job after execution (Job Cleanup: Delete Job)

Event handling:

  • Events to trigger:
    • sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262 → Signals that the preprocessed data is ready for SageMaker model training.

See an example below:

"EKS-Preprocessing-Job" : {
      "Type" : "Job:Kubernetes",
      "Job Spec Yaml" : "apiVersion: batch/v1\r\nkind: Job\r\nmetadata:\r\n  name: s3-data-processing-job\r\nspec:\r\n  template:\r\n    spec:\r\n      serviceAccountName: default  # Ensure this has S3 access via IAM\r\n      containers:\r\n      - name: data-processing-container\r\n        image: 623469066856.dkr.ecr.us-west-2.amazonaws.com/new-fd-repo\r\n        command: [\"/bin/sh\", \"-c\", \"python3 /app/main.py -b bf-sagemaker -i Synthetic_Financial_datasets_log.csv -o processed-data/output.csv\"]\r\n        env:\r\n        - name: S3_BUCKET\r\n          value: \"bf-sagemaker\"\r\n        - name: S3_INPUT_FILE\r\n          value: \"Synthetic_Financial_datasets_log.csv\"\r\n        - name: S3_OUTPUT_FILE\r\n          value: \"processed-data/output.csv\"\r\n        resources:\r\n          requests:\r\n            memory: \"2Gi\"\r\n            cpu: \"1\"\r\n          limits:\r\n            memory: \"4Gi\"\r\n            cpu: \"2\"\r\n        volumeMounts:\r\n        - name: tmp-storage\r\n          mountPath: /tmp\r\n      restartPolicy: Never\r\n      volumes:\r\n      - name: tmp-storage\r\n        emptyDir: {}\r\n\r\n",
      "ConnectionProfile" : "MOL-K8S-CONNECTION-PROFILE",
      "Get Pod Logs" : "Get Logs",
      "Job Cleanup" : "Delete Job",
      "Host" : : "<<host details>>",
      "CreatedBy" : : "<<creator’s email>>",
      "RunAs" : "MOL-K8S-CONNECTION-PROFILE",
      "Application" : "SM_ML_RS",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "eventsToAdd" : {

        "Type" : "AddEvents",
        "Events" : [ {
          "Event" : "sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262"
        } ]
      }
    }

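For context, here is a simplified sketch of what the containerized /app/main.py script might look like. The -b/-i/-o arguments mirror the command in the job spec; pandas and boto3 are assumed to be available in the new-fd-repo image, and the cleaning steps shown are placeholders for the actual transformations.

# Hypothetical sketch of /app/main.py used by the EKS preprocessing job.
# Reads a raw CSV from S3, applies basic cleaning, and writes the result back.
import argparse

import boto3
import pandas as pd

def main():
    parser = argparse.ArgumentParser(description="Preprocess financial data in S3")
    parser.add_argument("-b", "--bucket", required=True)
    parser.add_argument("-i", "--input-key", required=True)
    parser.add_argument("-o", "--output-key", required=True)
    args = parser.parse_args()

    s3 = boto3.client("s3")
    obj = s3.get_object(Bucket=args.bucket, Key=args.input_key)
    df = pd.read_csv(obj["Body"])

    # Placeholder transformations: drop rows with missing values and trim
    # stray whitespace from column names. The real cleaning logic may differ.
    df = df.dropna()
    df.columns = [c.strip() for c in df.columns]

    # Write the cleaned dataset back to S3 (e.g., processed-data/output.csv).
    s3.put_object(Bucket=args.bucket, Key=args.output_key,
                  Body=df.to_csv(index=False).encode("utf-8"))

if __name__ == "__main__":
    main()
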
Amazon SageMaker_TE_Pipeline job

Type: Job:AWS SageMaker

Action: Executes an Amazon SageMaker training and evaluation pipeline to train ML models using preprocessed financial data.

Description: This job runs the TrainingAndEvaluationPipeline, which trains and evaluates ML models based on the preprocessed dataset stored in bf-sagemaker. The pipeline automates model training, hyperparameter tuning, and evaluation, ensuring optimal performance before deployment.

Dependencies: The job waits for the event sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262, ensuring that the preprocessing job has completed and the cleaned dataset is available before training begins.

Key configuration details:

  • SageMaker pipeline name: TrainingAndEvaluationPipeline
  • Execution environment:
    • Host: Runs on prodagents
    • Connection profile: MOL-SAGEMAKER-CP for SageMaker job execution
    • RunAs: MOL-SAGEMAKER-CP
  • Pipeline parameters:
    • Add parameters: unchecked (defaults used)
    • Retry pipeline execution: unchecked (will not automatically retry failed executions)

Event handling:

  • Events to wait for:
    • sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262 → Ensures that the preprocessed dataset is available before initiating training.
  • Events to delete:
    • sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262 → Removes dependency once training begins.
    • SM_ML_Snowflake-TO-AWS_SageMaker_Job_1 → Cleans up previous event dependencies.

See an example below:

"Amazon SageMaker_TE_Pipeline" : {
      "Type" : "Job:AWS SageMaker",
      "ConnectionProfile" : "MOL-SAGEMAKER-CP",
      "Add Parameters" : "unchecked",
      "Retry Pipeline Execution" : "unchecked",
      "Pipeline Name" : "TrainingAndEvaluationPipeline",
      "Host" : "<<host details>>",
      "CreatedBy" : "<<creator’s email>>",
      "RunAs" : "MOL-SAGEMAKER-CP",
      "Application" : "SM_ML_RS",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "eventsToWaitFor" : {
        "Type" : "WaitForEvents",
        "Events" : [ {
          "Event" : "sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262"
        } ]
      },
      "eventsToDelete" : {
        "Type" : "DeleteEvents",
        "Events" : [ {
          "Event" : "sagemaker-preprocessing-job-TO-AWS_SageMaker_Job_1-751-262"
        }, {
          "Event" : "SM_ML_Snowflake-TO-AWS_SageMaker_Job_1"
        } ]
      }
    }

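Behind this job type, starting the pipeline comes down to a SageMaker StartPipelineExecution call. The boto3 sketch below is not the Control-M integration itself, just an illustration of the underlying API; the region and display name are assumptions.

# Sketch of the underlying SageMaker API call that this job type drives.
# Starts TrainingAndEvaluationPipeline with its default parameters.
import boto3

sm = boto3.client("sagemaker", region_name="us-west-2")  # region assumed

response = sm.start_pipeline_execution(
    PipelineName="TrainingAndEvaluationPipeline",
    PipelineExecutionDisplayName="controlm-triggered-run",  # illustrative name
)
execution_arn = response["PipelineExecutionArn"]
print("Started pipeline execution:", execution_arn)

# Optionally check the execution status afterwards.
status = sm.describe_pipeline_execution(
    PipelineExecutionArn=execution_arn
)["PipelineExecutionStatus"]
print("Current status:", status)
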
Load_Amazon_Athena_Table job

Type: Job:AWS Lambda

Action: Executes an AWS Lambda function to load evaluation results into an Amazon Athena table for further querying and visualization.

Description: This job triggers the Lambda function athena-query-lambda, which runs an Athena SQL query to create or update a table containing ML evaluation metrics. The table enables seamless integration with Amazon QuickSight for data visualization and reporting.

Dependencies: The job waits for the event SM_Model_Train_copy-TO-Athena_and_Quicksight_copy, ensuring that the SageMaker training and evaluation process has completed before loading results into Athena.

Key configuration details:

  • Lambda function name: athena-query-lambda
  • Execution environment:
    • Host: Runs on airflowagents
    • Connection Profile: JOG-AWS-LAMBDA
    • RunAs: JOG-AWS-LAMBDA
  • Athena table purpose:
    • Stores ML model evaluation results, including accuracy, precision, and recall scores.
    • Enables easy querying of performance metrics through SQL-based analysis.
    • Prepares data for visualization in Amazon QuickSight.

Event handling:

  • Events to wait for:
    • SM_Model_Train_copy-TO-Athena_and_Quicksight_copy → Ensures that the SageMaker training and evaluation process has completed before updating Athena.
  • Events to delete:
    • SM_Model_Train_copy-TO-Athena_and_Quicksight_copy → Cleans up the event dependency after successfully loading data.

See an example below:

"Load_Amazon_Athena_Table" : {
      "Type" : "Job:AWS Lambda",
      "ConnectionProfile" : "JOG-AWS-LAMBDA",
      "Function Name" : "athena-query-lambda",
      "Parameters" : "{}",
      "Append Log to Output" : "unchecked",
      "Host" : "airflowagents",
      "CreatedBy" : "[email protected]",
      "RunAs" : "JOG-AWS-LAMBDA",
      "Application" : "SM_ML_RS",
      "When" : {
        "WeekDays" : [ "NONE" ],
        "MonthDays" : [ "ALL" ],
        "DaysRelation" : "OR"
      },
      "eventsToWaitFor" : {
        "Type" : "WaitForEvents",
        "Events" : [ {
          "Event" : "SM_Model_Train_copy-TO-Athena_and_Quicksight_copy"
        } ]
      },
      "eventsToDelete" : {
        "Type" : "DeleteEvents",
        "Events" : [ {
          "Event" : "SM_Model_Train_copy-TO-Athena_and_Quicksight_copy"
        } ]
      }
    }

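The code for athena-query-lambda is not reproduced in this post, but a function along these lines would do the job. The database name, table definition, and S3 locations below are illustrative placeholders rather than the actual query used in the workflow.

# Hypothetical sketch of athena-query-lambda: (re)creates an external Athena
# table over the evaluation results so they can be queried and visualized.
import boto3

athena = boto3.client("athena")

CREATE_TABLE_SQL = """
CREATE EXTERNAL TABLE IF NOT EXISTS ml_results.model_evaluation (
    model_name      string,
    accuracy        double,
    precision_score double,
    recall_score    double
)
ROW FORMAT SERDE 'org.openx.data.jsonserde.JsonSerDe'
LOCATION 's3://bf-sagemaker/evaluation-results/'
"""

def lambda_handler(event, context):
    # Database, table, and S3 paths are placeholders for illustration.
    response = athena.start_query_execution(
        QueryString=CREATE_TABLE_SQL,
        QueryExecutionContext={"Database": "ml_results"},
        ResultConfiguration={"OutputLocation": "s3://bf-sagemaker/athena-output/"},
    )
    return {"QueryExecutionId": response["QueryExecutionId"]}
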
WORKFLOW EXECUTION:

Training and evaluation steps in Amazon SageMaker

Figure 4. Amazon SageMaker training and evaluation steps.

Pipeline execution logs in CloudWatch:

Figure 5. CloudWatch execution logs.

Workflow execution in Control-M

Figure 6. Control-M workflow execution.

The role of Amazon SageMaker:

To analyze the dataset and identify patterns of fraud, we will run the data through three ML models that are available in Amazon SageMaker: logistic regression, decision tree classifier, and multi-layer perceptron (MLP). Each of these models offers unique strengths, allowing us to evaluate their performance and choose the best approach for fraud detection.

  1. Logistic regression: Logistic regression is a linear model that predicts the probability of an event (e.g., fraud) based on input features. It is simple, interpretable, and effective for binary classification tasks.
  2. Decision tree classifier: A decision tree is a rule-based model that splits the dataset into branches based on feature values. Each branch represents a decision rule, making the model easy to interpret and well-suited for identifying patterns in structured data.
  3. Multi-layer perceptron: An MLP is a type of neural network designed to capture complex, non-linear relationships in the data. It consists of multiple layers of neurons and is ideal for detecting subtle patterns that may not be obvious in simpler models.

By running the dataset through these models, we aim to compare their performance and determine which one is most effective at detecting fraudulent activity in the dataset. Metrics such as accuracy, precision, and recall will guide our evaluation.

Trainmodels.py:

This script processes data to train ML models for fraud detection. It begins by validating and loading the input dataset, ensuring data integrity by handling missing or invalid values and verifying the target column isFraud. The data is then split into training and testing sets, which are saved for future use. The logistic regression, decision tree classifier, and MLP are trained on the dataset, with the trained models saved as .pkl files for deployment or further evaluation. The pipeline ensures robust execution with comprehensive error handling and modularity, making it an efficient solution for detecting fraudulent transactions.

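A condensed sketch of the core of such a script is shown below. File paths, feature encoding, and hyperparameters are illustrative; the real script adds the validation and error handling described above.

# Condensed sketch of Trainmodels.py: train three models on the fraud dataset
# and persist them as .pkl files. Paths and hyperparameters are illustrative.
import pickle

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Load the preprocessed dataset and drop rows with missing values.
df = pd.read_csv("processed-data/output.csv").dropna()

# One-hot encode non-numeric features; isFraud is the target column.
X = pd.get_dummies(df.drop(columns=["isFraud"]))
y = df["isFraud"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
# Persist the held-out split so the evaluation script can reuse it.
X_test.assign(isFraud=y_test).to_csv("test-data.csv", index=False)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "decision_tree": DecisionTreeClassifier(max_depth=10),
    "mlp": MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    with open(f"{name}.pkl", "wb") as f:
        pickle.dump(model, f)
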
Evaluatemodels.py:

This script evaluates ML models for fraud detection using a test dataset. It loads test data and the three pre-trained models to assess their performance. For each model, it calculates metrics such as accuracy, precision, recall, classification report, and confusion matrix. The results are stored in a JSON file for further analysis. The script ensures modularity by iterating over available models and robustly handles missing files or errors, making it a comprehensive evaluation pipeline for model performance.

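A condensed sketch of the evaluation logic is shown below. As with the training sketch, file names are illustrative, and in practice the same feature encoding used at training time must be applied to the test set.

# Condensed sketch of Evaluatemodels.py: score each pre-trained model on the
# held-out test set and write the metrics to a JSON file for later analysis.
import json
import pickle

import pandas as pd
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, precision_score, recall_score)

# Load the held-out test set written by the training script.
test_df = pd.read_csv("test-data.csv")
X_test = test_df.drop(columns=["isFraud"])
y_test = test_df["isFraud"]

results = {}
for name in ("logistic_regression", "decision_tree", "mlp"):
    with open(f"{name}.pkl", "rb") as f:
        model = pickle.load(f)
    y_pred = model.predict(X_test)
    results[name] = {
        "accuracy": accuracy_score(y_test, y_pred),
        "precision": precision_score(y_test, y_pred),
        "recall": recall_score(y_test, y_pred),
        "classification_report": classification_report(y_test, y_pred, output_dict=True),
        "confusion_matrix": confusion_matrix(y_test, y_pred).tolist(),
    }

# default=float keeps NumPy scalar types JSON-serializable.
with open("evaluation-results.json", "w") as f:
    json.dump(results, f, indent=2, default=float)
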
Results and outcomes

Model evaluation results in Amazon QuickSight.

The decision tree classifier model shows the most balanced performance with respect to precision and recall, followed by the MLP. Logistic regression performs poorly in correctly identifying positive instances despite its high accuracy.

Summary

Building an automated, scalable, and efficient ML pipeline is essential for combating fraud in today’s fast-evolving financial landscape. By leveraging AWS services like Amazon SageMaker, Redshift, EKS, and Athena, combined with Control-M for orchestration, this fraud detection solution ensures seamless data processing, real-time model training, and continuous monitoring.

A key pillar of this workflow is Amazon SageMaker, which enables automated model training, hyperparameter tuning, and scalable inference. It simplifies the deployment of ML models, allowing organizations to train and evaluate multiple models—logistic regression, decision tree classifier, and MLP—to determine the most effective fraud detection strategy. Its built-in automation for training, evaluation, and model monitoring ensures that fraud detection models remain up-to-date, adaptive, and optimized for real-world transactions.

The importance of automation and orchestration cannot be overstated—without it, maintaining a production-grade ML pipeline for fraud detection would be cumbersome, inefficient, and prone to delays. Control-M enables end-to-end automation, ensuring smooth execution of complex workflows, from data ingestion to model training in Amazon SageMaker and evaluation in Athena. This reduces manual intervention, optimizes resource allocation, and improves overall fraud detection efficiency.

Moreover, model training and evaluation remain at the heart of fraud detection success. By continuously training on fresh transaction data within Amazon SageMaker, adapting to evolving fraud patterns, and rigorously evaluating performance using key metrics, organizations can maintain high fraud detection accuracy while minimizing false positives.

As fraudsters continue to develop new attack strategies, financial institutions and payment processors must stay ahead with adaptive, AI-driven fraud detection systems. By implementing a scalable and automated ML pipeline with Amazon SageMaker, organizations can not only enhance security and reduce financial losses but also improve customer trust and transaction approval rates.

 

]]>
Introducing Control-M SaaS’s New GenAI Advisor, Jett https://www.bmc.com/blogs/introducing-jett/ Fri, 14 Mar 2025 09:42:37 +0000 https://www.bmc.com/blogs/?p=54755 We live in a time where technology advancement occurs at a breakneck pace. With each new technology added to the tech stack, complexity increases and environments evolve. Additionally, IT teams are expected to deliver business services in production faster, with quick and effortless problem remediation or, ideally, proactive problem identification. All this can make it […]]]>

We live in a time where technology advancement occurs at a breakneck pace. With each new technology added to the tech stack, complexity increases and environments evolve. Additionally, IT teams are expected to deliver business services in production faster, with quick and effortless problem remediation or, ideally, proactive problem identification. All this can make it extremely challenging for IT to keep up with the demands of the business while maintaining forward progress. That, in turn, can make it increasingly critical for IT executives to find, train, and retain highly qualified IT staff.

Jett, the newest Control-M SaaS capability, is a generative artificial intelligence (GenAI)-powered advisor that revolutionizes the way users interact with the Control-M SaaS orchestration framework. Control-M SaaS users from across the business can ask a wide range of workflow-related questions in their own language and in their own words and quickly receive easy-to-understand graphical and tabular results with a concise text summary. Jett provides the knowledge required to keep business running smoothly. It is a game changer for IT operations (ITOps) teams, allowing them to accelerate troubleshooting, problem resolution, and compliance verification, proactively optimize their workflows, and much more.

ITOps professionals, data teams, application owners, and business users can easily get answers relevant to their individual roles and use cases. With Jett, users don’t need to have in-depth Control-M SaaS knowledge or special training. There’s no additional cost, and you can ask up to 50 questions per day.

The tech behind Jett

Jett leverages cutting-edge GenAI technology to power advanced natural language understanding and generate highly accurate, context-aware responses. Amazon Bedrock provides seamless access to Anthropic’s Claude Sonnet, a general-purpose AI pretrained on a vast dataset that serves as the foundation model (FM): it understands user questions, transforms them into SQL queries, and then converts the query results into meaningful responses, including visual insights and concise summaries of the relevant information.

When a user enters a question, Jett uses Claude Sonnet to generate SQL queries based on it and to present the results in an intelligent format (a simplified sketch of this general pattern appears after the list below). The model is guided by well-structured prompts that instruct Claude Sonnet to:

  • Classify questions based on the type of Control-M objects and whether the query requires aggregation or a list.
  • Interpret the Control-M SaaS database schema and generate optimized SQL queries.
  • Apply guardrails to restrict out-of-scope questions.
  • Summarize and present query results in a clear and structured format.

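Jett’s actual prompts, schema, and guardrails are proprietary, but the general Bedrock text-to-SQL pattern described above can be sketched as follows. The model ID, prompt, and toy schema are assumptions for illustration only and are not Jett’s implementation.

# Illustrative text-to-SQL pattern on Amazon Bedrock with a Claude model.
# This is NOT Jett's implementation; prompt, schema, and model ID are assumptions.
import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region assumed

SYSTEM_PROMPT = (
    "You translate questions about job executions into a single read-only SQL "
    "query against this schema: jobs(name, application, status, start_time, "
    "end_time, run_count). Refuse questions outside this scope."
)

def question_to_sql(question: str) -> str:
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "system": SYSTEM_PROMPT,
        "messages": [{"role": "user", "content": question}],
    }
    response = bedrock.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # illustrative model ID
        body=json.dumps(body),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

print(question_to_sql("List all jobs that failed yesterday, sorted by failure count."))
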
Jett in action

Jett can assist Control-M SaaS users across the organization in finding answers to a multitude of Control-M SaaS workflow questions that speed problem resolution, audit compliance verification, workflow optimization, and anomaly discovery and analysis. While all the information related to these use cases was available before, users would often have to seek it out and compile it manually. With Jett, questions are answered quickly and presented in a usable format.

Here are examples of questions that can be answered by Jett:

  • Resolving problems quickly:
    • List all jobs that failed yesterday, and sort them by failure count.
    • Has job_1 failed prior to yesterday?
    • Analyze the past 10 runs for job_1.
  • Faster audit compliance:
    • List all updates made to job_1 this month and include who made the changes.
    • Which users made changes to job_1 and application_1, and when were the changes made?
  • Optimize workflow performance:
    • What were the most recurring user actions last week, and which jobs were impacted?
    • Provide all jobs that ran longer than average in the last month.
  • Find and analyze anomalies:
    • List all jobs that completed faster than expected in the last week.
    • Were there any anomalies in job length over the past month?

Find out how Jett can help you turn valuable time spent on research and internal data collection into time spent on innovation. Contact your Sales or Support rep today!

]]>
The Future of Workload Automation: Embracing Cloud and AI-Driven Orchestration https://www.bmc.com/blogs/ema-wla-orchestration-ai/ Mon, 10 Mar 2025 17:21:26 +0000 https://www.bmc.com/blogs/?p=54752 Workload automation is at a turning point. Once confined to traditional batch processing and job scheduling, it has now become a central driver of digital transformation. The results of the latest Enterprise Management Associates (EMA) Research Report, The Future of Workload Automation and Orchestration, highlight a crucial shift: enterprises are increasingly relying on cloud-driven automation […]]]>

Workload automation is at a turning point. Once confined to traditional batch processing and job scheduling, it has now become a central driver of digital transformation. The results of the latest Enterprise Management Associates (EMA) Research Report, The Future of Workload Automation and Orchestration, highlight a crucial shift: enterprises are increasingly relying on cloud-driven automation and artificial intelligence (AI)-powered orchestration to navigate modern IT environments.

Cloud adoption is reshaping automation strategies at an unprecedented pace. More organizations are moving their workload automation to cloud-native and hybrid environments, breaking away from rigid, on-premises infrastructures. According to survey results, approximately 30 percent of workload automation (WLA) jobs are run in public clouds and 14 percent are run in hybrid cloud environments. As businesses accelerate cloud migration, the need for seamless application and data workflow orchestration across multiple platforms like Amazon Web Services (AWS), Azure, and Google Cloud, while also ensuring consistency, security, and compliance, has never been greater. Solutions must evolve to not only keep up with this shift but also to proactively streamline cloud operations, offering deep integration and visibility across hybrid ecosystems.

At the same time, AI is redefining the future of orchestration. In fact, 91 percent of survey respondents identify AI-enhanced orchestration as extremely or very important, with 70 percent planning to implement AI-driven capabilities within the next 12 months. The ability to go beyond automation and enable intelligent decision-making is becoming a necessity rather than a luxury. AI-driven orchestration is not just about optimizing job scheduling; it’s also about predicting failures before they occur, dynamically reallocating resources, and enabling self-healing workflows. As organizations integrate AI and machine learning (ML) into their IT and business processes, automation must evolve to support complex data pipelines, MLOps workflows, and real-time data orchestration.

This transformation is not without its challenges. The complexity of managing automation across multi-cloud environments, the growing need for real-time observability, and the increasing role of AI in automation demand a new level of sophistication. Enterprises need solutions that do more than execute tasks—they need platforms that provide visibility, intelligence, and adaptability. The role of workflow orchestration is no longer about keeping the lights on; it is about enabling innovation, agility, and resilience in an era of digital acceleration.

Platform requirements

Clearly, application and data workflow orchestration will continue to be a critical driver, and choosing the right orchestration platform is one of the most important decisions a business can make. With that in mind, I’d like to share eight key capabilities a platform must have to orchestrate business-critical workflows in production, at scale.

Heterogeneous workflow support:

Large enterprises are rapidly adopting cloud and there is general agreement in the industry that the future state will be highly hybrid, spanning mainframe to distributed systems in the data center to multiple clouds—private and public. If an application and data workflow orchestration platform cannot handle diverse applications and their underlying infrastructure, then companies will be stuck with many silos of automation that require custom integrations to handle cross platform workflow dependencies.

SLA management:

Business workflows such as financial close and payment settlement all have completion service level agreements (SLAs) governed by regulatory agencies. The orchestration platform must not only detect and flag failures and delays in the corresponding tasks, but also link them to business impact.

Error handling and notification:

When running in production, even the best-designed workflows will have failures and delays. The orchestrator must enable notifications to the right team at the right time to avoid lengthy war room discussions about assigning a response.

Self-healing and remediation:

When teams respond to job failures within business workflows, they take corrective action, such as restarting something, deleting a file, or flushing a cache or temp table. The orchestrator should allow engineers to configure such actions to happen automatically the next time the same problem occurs, instead of stopping a critical workflow while several teams respond to the failure.

End-to-end visibility:

Workflows execute interconnected business processes across hybrid tech stacks. The orchestration platform should be able to clearly show the lineage of the workflows for a better understanding of the relationships between applications and the business processes they support. This is also important for change management, to see what happens upstream and downstream from a process.

Appropriate UX for multiple personas:

Workflow orchestration is a team sport with many stakeholders, such as developers, operations teams, and business process owners. Each team has a different use case for how they want to interact with the orchestrator, so it must offer the right user interface (UI) and user experience (UX) for each so they can be effective users of the technology.

Standards in production:

Running in production always requires adherence to standards, which in the case of workflows, means correct naming conventions, error handling patterns, etc. The orchestration platform should be able to provide a very simple way to define such standards and guide users to them when they are building workflows.

Support DevOps practices:

As companies adopt DevOps practices like continuous integration and continuous deployment (CI/CD) pipelines, the development, modification, and even infrastructure deployment of the workflow orchestrator should fit into modern release practices.

EMA’s report underscores a critical reality: the future belongs to organizations that embrace orchestration as a strategic imperative. By integrating AI, cloud automation, and observability into their application and data workflow orchestration strategies, businesses can drive efficiency, optimize performance, and stay ahead of the competition.

To understand the full scope of how workflow orchestration is evolving and what it means for your enterprise, explore the insights from EMA’s latest research.

]]>
The Digital Operational Resilience Act (DORA) and Control-M https://www.bmc.com/blogs/controlm-and-dora/ Thu, 16 Jan 2025 10:21:54 +0000 https://www.bmc.com/blogs/?p=54526 The Digital Operational Resilience Act (DORA) is a European Union (EU) regulation designed to enhance the operational resilience of the digital systems, information and communication technology (ICT), and third-party providers that support the financial institutions operating in European markets. Its focus is to manage risk and ensure prompt incident response and responsible governance. Prior to […]]]>

The Digital Operational Resilience Act (DORA) is a European Union (EU) regulation designed to enhance the operational resilience of the digital systems, information and communication technology (ICT), and third-party providers that support the financial institutions operating in European markets. Its focus is to manage risk and ensure prompt incident response and responsible governance. Prior to the adoption of DORA, there was no all-encompassing framework to manage and mitigate ICT risk. Now, financial institutions are held to the same high risk management standards across the EU.

DORA regulations center around five pillars:

Digital operational resilience testing: Entities must regularly test their ICT systems to assess protections and identify vulnerabilities. Results are reported to competent authorities, with basic tests conducted annually and threat-led penetration testing (TLPT) done every three years.

ICT risk management and governance: This requirement involves strategizing, assessing, and implementing controls. Accountability spans all levels, with entities expected to prepare for disruptions. Plans include data recovery, communication strategies, and measures for various cyber risk scenarios.

ICT incident reporting: Entities must establish systems for monitoring, managing, and reporting ICT incidents. Depending on severity, reports to regulators and affected parties may be necessary, including initial, progress, and root cause analyses.

Information sharing: Financial entities are urged by DORA regulations to develop incident learning processes, including participation in voluntary threat intelligence sharing. Shared information must comply with relevant guidelines, safeguarding personally identifiable information (PII) under the EU’s General Data Protection Regulation (GDPR).

Third-party ICT risk management: Financial firms must actively manage ICT third-party risk, negotiating exit strategies, audits, and performance targets. Compliance is enforced by competent authorities, with proposals for standardized contractual clauses still under exploration.

Introducing Control-M

Financial institutions often rely on a complex network of interconnected application and data workflows that support critical business services. The recent introduction of DORA-regulated requirements has created an urgent need for these institutions to deploy additional tools, including vulnerability scanners, data recovery tools, incident learning systems, and vendor management platforms.

As regulatory requirements continue to evolve, the complexity of managing ICT workflows grows, making the need for a robust workflow orchestration platform even more critical.

Control-M empowers organizations to integrate, automate, and orchestrate complex application and data workflows across hybrid and cloud environments. It provides an end-to-end view of workflow progress, ensuring the timely delivery of business services. This accelerates production deployment and enables the operationalization of results, at scale.

Why Control-M

Through numerous discussions with customers and analysts, we’ve gained valuable insights that reinforce that Control-M embodies the essential principles of orchestrating and managing enterprise business-critical workflows in production at scale.

They are represented in the following picture. Let’s go through them in a bottom-up manner.

Enterprise Production at Scale

Support heterogeneous workflows

Control-M supports a diverse range of applications, data, and infrastructures, enabling workflows to run across and between various combinations of these technologies. These are inherently hybrid workflows, spanning from mainframes to distributed systems to multiple clouds, both private and public, and containers. The wider the diversity of supported technologies, the more cohesive and efficient the automation strategy, lowering the risk of a fragmented landscape with silos and custom integrations.

End-to-end visibility

This hybrid tech stack can only become more complex in the modern business enterprise. Workflows execute interconnected business processes across it. Without the ability to visualize, monitor, and manage your workflows end-to-end, scaling to production is nearly impossible. Control-M provides clear visibility into application and data workflow lineage, helping you understand the relationships between technologies and the business processes they support.

While the six capabilities at the top of the picture above aren’t everything, they’re essential for managing complex enterprises at scale.

SLA management for workflows

Business services, from financial close to machine learning (ML)-driven fraud detection, all have service level agreements (SLAs), often influenced by regulatory requirements. Control-M not only predicts possible SLA breaches and alerts teams to take actions, but also links them to business impact. If a delay affects your financial close, you need to know it right away.

Error handling and notification

Even the best workflows may encounter delays or failures. The key is promptly notifying the right team and equipping them with immediate troubleshooting information. Control-M delivers on this.

Appropriate UX for multiple personas

Integrating and orchestrating business workflows involves operations, developers, data and cloud teams, and business owners, each needing a personalized and unique way to interact with the platform. Control-M delivers tailored interfaces and superior user experiences for every role.

Self-healing and remediation

Control-M allows workflows to self-heal automatically, preventing errors by enabling teams to automate the corrective actions they initially took manually to resolve the issue.

Support DevOps practices

With the rise of DevOps and continuous integration and continuous delivery (CI/CD) pipelines, workflow creation, modification, and deployment must integrate smoothly into release practices. Control-M allows developers to code workflows using programmatic interfaces like JSON or Python and embed jobs-as-code in their CI/CD pipelines.

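As a simple illustration of jobs-as-code in a pipeline, a CI/CD step could push versioned JSON definitions through the Control-M Automation API, as sketched below. The host, credentials, and file name are placeholders, and endpoint details, authentication, and TLS handling will vary by environment.

# Sketch of a jobs-as-code deploy step for a CI/CD pipeline, calling the
# Control-M Automation API over REST. Host, credentials, and file name are
# placeholders; adapt authentication and certificate handling to your setup.
import requests

ENDPOINT = "https://controlm.example.com:8443/automation-api"  # placeholder host

# 1. Authenticate and obtain a session token.
login = requests.post(
    f"{ENDPOINT}/session/login",
    json={"username": "ci_user", "password": "********"},  # placeholders
)
login.raise_for_status()
headers = {"Authorization": f"Bearer {login.json()['token']}"}

# 2. Deploy the job definitions kept in source control.
with open("fraud_detection_workflow.json", "rb") as f:
    deploy = requests.post(
        f"{ENDPOINT}/deploy",
        headers=headers,
        files={"definitionsFile": ("fraud_detection_workflow.json", f, "application/json")},
    )
deploy.raise_for_status()
print(deploy.json())
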
Standards in production

Finally, Control-M enforces production standards, which is a key element since running in production requires adherence to precise standards. Control-M fulfills this need by providing a simple way to guide users to the appropriate standards, such as correct naming conventions and error-handling patterns, when building workflows.

Conclusion

DORA takes effect January 17, 2025. As financial institutions prepare to comply with DORA regulations, Control-M can play an integral role in assisting them in orchestrating and automating their complex workflows. By doing so, they can continue to manage risk, ensure prompt incident response, and maintain responsible governance.

To learn more about how Control-M can help your business, visit www.bmc.com/control-m.

]]>