Workload Automation Blog – BMC Software | Blogs
https://s7280.pcdn.co

Lessons Learned and Shared for Enabling Enterprise Self-Service Workflow Orchestration
https://s7280.pcdn.co/bmc-on-bmc-helixcontrolm-self-service/

Many of BMC’s day-to-day operations run on our own solutions, which keeps us operating efficiently and gives us essential insights into our customers’ challenges and how we can improve our offerings. We call this approach BMC on BMC, and BMC Helix Control-M is a big part of it. As we’ve highlighted in previous blogs, Control-M orchestrates the thousands of workflows that keep BMC running daily and has produced many benefits for various business units, including finance, the enterprise data warehouse team, customer support, sales operations, marketing and more. Some of their reported results include $27 million in recurring cost avoidance savings through application and data workflow orchestration; 40–50 days saved annually in sales operations; automated generation of key executive reports, which eliminated the need for one manager to do it weekly; a significantly streamlined quarterly close process; and more.

The automation and operational dashboards that Control-M provides have been great for our business. As word of these successes spread, more business users came forward with ideas for new use cases. Beyond running Control-M on our own projects, BMC had something else in common with many of our customers: our workflow development was centralized, with the Information Systems and Technology (IS&T) department acting as the gatekeeper. It became very challenging for our IS&T operations team to balance keeping our systems running with meeting the demand for new services.

This situation is common and is a leading driver of the citizen development movement, in which companies give their non-IS&T business users the autonomy to create their own services. Flipping the development model from centralized to decentralized can break development logjams, but it carries risks. Workflows today are more complex and have more dependencies than ever, making security and governance harder to maintain. Citizen developers can't be expected to account for every variable that could make the new services they envision crash other enterprise workflows or open new vulnerabilities.

We in IS&T operations understood these risks. We also understood that we needed to embrace citizen development to keep the company agile. That understanding formed the foundation of our automation-as-a-service (AaaS) program. With BMC Helix Control-M at the center, democratizing data and giving users the tools they needed to orchestrate workflows and business services became easy. It also helped us mitigate new risks and handle governance blind spots.

Before implementing decentralized workflow development and orchestration, our business users dedicated significant time submitting requests, while IS&T invested substantial effort in follow-up and development. However, with the introduction of AaaS, the landscape transformed. Business users are now efficiently deploying more services into production, automation processes have become streamlined, and IS&T resources have been liberated to concentrate on innovation. Let’s delve into a comparison of the processes before and after this transformation.

Before AaaS

It typically took one to three days for a business user to complete a request for a new service. Because requests involved data from multiple systems, the requesting business user had to find and contact numerous system administrators to request access to various applications and their data, which meant opening multiple tickets to support a single job request.

The IS&T operations team found themselves inundated with an escalating volume of automation requests from different departments across the business. Each request required careful evaluation, leading to a decision of approval or rejection, followed by the development of workflows for those that were accepted. Upon acceptance, our team undertook the entire development process, encompassing integration creation, extensive testing, conflict resolution with existing workflows, identification of security vulnerabilities, and the subsequent deployment of services into production. Development timelines fluctuated significantly depending on the complexity of each task. Leveraging Control-M proved instrumental, as it automated numerous development and execution tasks while offering a plethora of pre-built integrations.

Introducing AaaS via BMC Helix Control-M has revolutionized our workflow request and development procedures. By leveraging BMC Helix Control-M, we’ve significantly streamlined the formerly time-consuming, labor-intensive, and costly processes associated with requesting and developing workflows. We’ve automated the entire workflow lifecycle, from initial request submission (ticketing) through follow-ups for missing information to decision-making regarding approval or rejection of new business services. Moreover, we’ve optimized environment provisioning and provided users whose projects were approved with tailored training via learning paths. What previously took users several days to request services now takes hours. Furthermore, our learning paths actively encourage users to explore and utilize the modern features of BMC Helix Control-M, thus accelerating the automation process even further.

Providing AaaS with BMC Helix Control-M

We’re achieving even greater efficiencies in workflow development, primarily because users are taking the lead, requiring minimal intervention from IS&T for each workflow. Thanks to BMC Helix Control-M, users can access the tools and integrations to construct workflows seamlessly, facilitated by an intuitive interface. Within this framework, business users enjoy remarkable flexibility in designing automation and other workflows that streamline their tasks. BMC Helix Control-M automatically implements guardrails, preventing user-generated workflows from disrupting others. Our framework logically isolates jobs, mitigating interference between them, with much of this functionality operating seamlessly behind the scenes. The solution also ensures data security with the built-in data protection features that safeguard sensitive information.

Most business users have crafted their workflows using a graphical user interface (GUI) and the Jobs-as-Code methodology. They leverage BMC Helix Control-M's user-friendly interface and extensive library of pre-built integrations. Additionally, we offer users a token for accessing pre-approved code on GitHub. To further support users, we've developed a learning path that enables them to explore BMC Helix Control-M's capabilities.

With this solution, IS&T no longer creates automation and workflows; instead, the business users are responsible. Our task is to provide them with the necessary resources and ensure the smooth operation of the overall system, with BMC Helix Control-M handling most of the automation processes.

Figure 1. When it comes to automation, a lot of work goes on behind the scenes.

AaaS architecture

The image above represents the architecture behind AaaS. There are many moving parts, but at a high level, the architecture represents the following process:

  • Citizen developers use the current request mechanism within BMC Helix Digital Workplace to initiate requests.
  • The automated workflows commence once approval is obtained, and the security team has integrated the required group into OKTA. We aim to automate the OKTA process soon as part of our ongoing enhancements.

Through its Automation API, we’ve seamlessly integrated BMC Helix Control-M into our DevOps toolchain, comprising Bitbucket and HashiCorp Vault. This integration facilitates the provisioning of a secure swim lane within BMC Helix Control-M for the citizen developer, aligning with our enterprise standards for building workflows.
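
For readers curious what a citizen developer's jobs-as-code definition might look like, here is a minimal sketch in the JSON format the Automation API accepts. The folder, job, host, and site standard names are hypothetical, and a real definition would follow the standards our security team enforces:

    {
      "CitizenDev_SalesOps_Folder": {
        "Type": "Folder",
        "ControlmServer": "IN01",
        "SiteStandard": "CitizenDevStandard",
        "RefreshSalesReport": {
          "Type": "Job:Command",
          "Command": "python refresh_sales_report.py",
          "Host": "salesops-agent-01",
          "RunAs": "salesops"
        }
      }
    }

A definition like this can be stored in the Bitbucket repository and validated and deployed through the Automation API's build and deploy services, which is one way the guardrails described above can be enforced before anything reaches production.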

Figure 2. BMC Helix Control-M’s Automation API allowed us to create a secure swim lane for citizen developers within our DevOps toolchain.

While self-service and citizen development aim to empower business users, basing the program on BMC Helix Control-M has also greatly benefited our IS&T and operations teams. As we'll elaborate in an upcoming blog post, our use of the application and data workflow orchestration platform has substantially reduced the time burden on the IS&T department by automating much of the work of provisioning and securing environments, enabling business users to create their workflows more efficiently.

One of our notable successes, which we encourage other organizations to adopt as a best practice, involves the creation of a dashboard designed to monitor citizen development projects organization-wide. A recent snapshot from this dashboard is depicted in the image below.

The dashboard calculates the business value derived from developed workflows and automations, providing an ongoing tally. For instance, one business unit has identified $27 million in cost avoidance through its self-developed workflows, with the value increasing with each execution. Such metrics aid us in decision-making around request approvals and prioritization. It is crucial for any initiative to track adoption and usage metrics; to address this need, we established a dashboard that aggregates data from BMC Helix Control-M and other systems for comprehensive tracking. The screenshot below offers a preview of this dashboard.

Figure 3. Company-wide citizen development metrics dashboard.

“BMC on BMC” isn’t a temporary pilot or project with a predetermined end date or a specific quota of workflows to be developed. It’s an ongoing endeavor that continues to expand its influence across our entire international organization. Over half of our employees have used self-service to access our enterprise data warehouse (EDW), and the impact of user-developed workflows touches every employee in some capacity.

From this experience, we’ve learned some crucial insights:

  1. Enterprises must decentralize workflow development to effectively manage service requests and drive innovation.
  2. Decentralization should not entail compromises in workflow security, reliability, or governance.
  3. Scaling decentralized development and its governance mandates automation as the essential pathway forward.

While BMC Helix Control-M played a crucial enabling role, the collective efforts of many individuals were instrumental in achieving these process improvements. We advise customers aspiring for citizen development to recognize the significance of integrating a robust change management component into their programs. Self-service, citizen development, and advanced automation signify novel approaches to work. Both business users and IS&T professionals must be prepared for these changes. Embracing structured change management is one of our most valuable lessons learned.

While we've made significant strides and achieved numerous milestones since the program's inception, those of us deeply engaged in it know that we've merely begun to tap into the potential of automation and self-service at BMC. We view it as a continuous journey and aspire to democratize self-service workflow automation for all. In my upcoming blog posts, I'll delve into the architecture and processes at the heart of our approach to achieving this vision.

To learn more about BMC Helix Control-M, click here.

Simplify CSP Data Initiatives with Control-M
https://www.bmc.com/blogs/simplify-csp-data-ctm/

In today’s hyper-connected digital-first world, having reliable phone, internet, and television services is non-negotiable. That means communications service providers (CSPs) must remain on the cutting edge of technology and maintain stellar customer relationships to stay competitive. They do this by leveraging massive amounts of data generated from sources like subscriber information, call detail records, and sales.

The need to operationalize this data puts CSP data and analytics teams on a critical mission: To find ways to use insight-based analytics to support business transformation and create competitive advantages. The executive pressure behind it is strong. CSP data architects and their teams often struggle with deciding which data is needed and how it can be acquired, ingested, aggregated, processed, and analyzed so they can deliver the insights the business demands. Data isn’t a project—it’s a journey, and one that often comes without a roadmap.

Delivering data and analytics capabilities with the scope and scale CSPs need requires the flexibility to accommodate disparate data sources and technologies across varying infrastructure, both on-premises and in the cloud. To meet the demands of executives and business conditions, companies need a robust application and data workflow orchestration platform and strategy. This helps CSP organizations orchestrate essential tasks across the complete data lifecycle, so they can coordinate, accelerate, and operationalize their business modernization initiatives.

One of the biggest challenges on the data journey is not letting all the details and decisions about architecture, tools, processes, and integration distract from discovering how to deliver valuable insights and services across the organization.

All too commonly, organizations get bogged down by foundational data questions like:

  • Do we have the right framework to manage data pipelines?
  • What are the best options for feeding new data streams into our systems?
  • How can we integrate disparate technologies?
  • How can we leverage our existing systems of record?
  • Where should our data systems run?

The list goes on and on. As they try to find answers, companies can lose sight of the overall goal of creating systems that will provide better insight and improve decision-making. The details are essential, but so is staying focused on the big picture. The less time planners need to spend on the details of how data will be managed, the more they can focus on finding value and insight in their data.

To deal with the complexity, CSPs need industrial-strength application and data workflow orchestration capabilities. Many tools can orchestrate data workflows. Some of them—such as Apache Airflow—are open source. However, most of those tools are platform-specific, targeting specific personas to perform specific tasks. So, multiple tools must be cobbled together to orchestrate complex workflows across multi-cloud and hybrid environments.

End-to-end orchestration is essential for running data pipelines in production, and an organization's chosen platform must be able to support disparate applications and data on diverse infrastructures. Control-M (self-hosted and SaaS) does that by providing flexible application and data workflow orchestration for every stage of the data and analytics journey, operationalizing the business modernization initiatives every organization is striving to achieve. It offers interfaces tailored to the many personas involved in facilitating complex workflows, including IT operations (ITOps), developers, cloud teams, data engineers, and business users. Having everyone collaborate on a single platform, operating freely within the boundaries implemented by ITOps, speeds innovation and reduces time to value.

Control-M expedites the implementation of data pipelines by replacing manual processes with application and data integration, automation, and orchestration. This gives every project speed, scalability, reliability, and repeatability. Control-M provides visibility into workflows and service level agreements (SLAs) with an end-to-end picture of data pipelines at every stage, enabling quick resolution of potential issues through notification and troubleshooting before deadlines are missed. Control-M can also detect potential SLA breaches through forecasting and predictive analytics that prompt focused human intervention on specific remedial actions to prevent SLA violations from occurring.
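
As a rough illustration of how an SLA can be expressed alongside the workflow it protects, the jobs-as-code sketch below adds an SLA management job at the end of a simple two-step pipeline. The folder, job, host, and service names are invented, and the exact attribute names for the SLA job should be confirmed against the Control-M documentation:

    {
      "NightlyCDRPipeline": {
        "Type": "Folder",
        "IngestCDRs": {
          "Type": "Job:Command",
          "Command": "python ingest_cdrs.py",
          "Host": "data-agent-01",
          "RunAs": "dataops"
        },
        "ScoreChurnRisk": {
          "Type": "Job:Command",
          "Command": "python score_churn_risk.py",
          "Host": "data-agent-01",
          "RunAs": "dataops"
        },
        "ChurnScoresSLA": {
          "Type": "Job:SLAManagement",
          "ServiceName": "Nightly churn scores",
          "ServicePriority": "1",
          "Host": "data-agent-01",
          "RunAs": "dataops",
          "CompleteBy": {
            "Time": "06:00",
            "Days": "0"
          }
        },
        "flow": {
          "Type": "Flow",
          "Sequence": ["IngestCDRs", "ScoreChurnRisk", "ChurnScoresSLA"]
        }
      }
    }

If the pipeline trends toward missing the 06:00 target, the forecasting described above can surface the at-risk service early enough for someone to intervene.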

Data pipeline orchestration offers CSPs unique opportunities to improve their business by operationalizing data. For example, CSPs can reduce customer churn by leveraging data to identify signals and patterns that indicate potential issues. With that analysis, they can proactively target at-risk customers with retention campaigns and personalized offers. Additionally, CSPs can utilize customer data to optimize pricing, provide targeted promotions to customers, and deliver excellent customer experiences.

Case study

A major European CSP and media conglomerate utilizes Control-M throughout its business to harness the power of data. With more than 12 million customers, the company collects a staggering six petabytes of customer data per night, including viewing habits from television cable boxes, mobile network usage information, and website traffic. Using this information, it creates a 360-degree view of each customer. That means its customer information is never more than 15 minutes out of date, allowing it to provide the best customer service possible. In addition, this information is used to deliver targeted advertising so that each customer sees what is most relevant to their interests.

Control-M manages and orchestrates the entire data science modeling workflow end to end, both on-premises and in the cloud, through technologies including Google Cloud Platform (GCP), BigQuery, Databricks, and many more. With Control-M, the CSP can use this massive amount of data to understand its customers, provide an optimized customer experience, slash cancellations, and help create new revenue streams.

Conclusion

Turning data and analytics into insights and actions can feel impossible—especially with the massive amount of data generated by a CSP. Control-M orchestrates and automates data pipelines to deliver the insights your organization needs.

Control-M helps CSPs orchestrate every step of a data and analytics project, including ingesting data to your systems, processing it, and delivering insights to business users and other teams that need to better utilize the refined data. It also brings needed consistency and integration between modern and legacy environments. The benefit to this integration and automation is that you can operationalize data to modernize your business, innovate faster, and deliver data initiatives successfully.

To learn more about how Control-M can help you improve your business outcomes, visit our website.

Simplifying Complex Mainframe Migration Projects with Micro Focus and Control-M
https://www.bmc.com/blogs/mainframe-migration-micro-focus-controlm/

In today's complex and rapidly evolving digital landscape, organizations recognize business modernization as a key priority to drive growth and innovation. They are embracing a culture of agility and responsiveness that leverages emerging technologies and tools, along with agile application and data pipeline development methods, to foster customer-centric solutions.

One area of enterprise digital modernization currently getting a lot of attention is the mainframe. As part of their modernization efforts, mainframe organizations are evaluating migrating applications to newer platforms like cloud and containers.

However, while the lure of more agility and flexibility for mainframe applications is a strong incentive, companies face significant risks because these changes can potentially impact critical business services. They’re also seeking to capitalize on these new technology investments while still preserving the knowledge and expertise embedded within existing applications, processes, and workflows that deliver these business-critical outcomes.

Consequently, most mainframe organizations pursue a strategic migration approach that balances the pace of transition, retaining the existing components that continue to deliver value while selectively migrating others that are better served by modern environments.

To manage this balancing act, mitigate operational risks, and execute a smooth journey to the desired state, companies rely on two core technology platforms—Control-M, BMC’s application and data workflow orchestration platform, and Micro Focus Enterprise Server, Open Text’s solution for mainframe application replatforming.

Migrating application and data workflows

While mainframe systems are well-known for powering real-time transactions, the majority of their workloads actually run as batch. These batch jobs are integrated into workflows to deliver essential business outcomes such as supply chain execution, customer billing, payments, and end-of-period closing. Over time, the workflows have evolved, often becoming hybrid and complex in nature as they incorporate modern infrastructure and data technologies. While evolving, they have also been adapted to company processes and standards, accumulating best practices, insights, and institutional knowledge.

Migrating these critical workflows of tightly interconnected applications and data sources, and maintaining the associated institutional knowledge, is a key challenge in almost every mainframe application migration. Control-M is an ideal platform in this context. It ensures the integration of mainframe and migrated applications alongside other technologies on distributed systems, cloud, and container platforms. It also preserves built-in knowledge, processes, and standards, and enables a smooth, no-risk migration at the speed the organization desires.

In addition, Micro Focus Enterprise Server is a high-performance, scalable deployment environment that allows applications traditionally run on IBM® mainframes to be moved to other platforms (replatformed), including distributed systems, cloud, and containers, with only minor adjustments.

Managed transformation

Control-M has recently delivered a Micro Focus Enterprise Server integration that enables the centralized orchestration of Micro Focus jobs alongside other application and data workflows. This integration supports managing Micro Focus jobs through Control-M’s interfaces, leveraging the same advanced orchestration capabilities used across mainframe jobs, file transfers, enterprise resource planning (ERP) solutions, data sources, and cloud and container services.

Control-M’s integration with Micro Focus Enterprise Server, coupled with BMC’s migration tools and support team expertise, positions the combined solution as the go-to migration resource and capability for mainframe modernization projects.

Control-M can easily replace mainframe applications with replatformed Micro Focus jobs, maintaining dependencies, workflow structure, built-in knowledge, and adherence to processes and standards. The change is operationally transparent, as Control-M continues to provide holistic visibility and standardized management of application and data workflows across source and destination environments, delivering consistent business outcomes and value as applications and the landscape evolve.

For existing Control-M customers, replatforming from mainframe to distributed or on-premises servers and/or cloud environments is guided, simple, and secure. The BMC Services organization, including its global partner network, is available to assist customers in migrating mainframe workflows through complete or selective replacement with Micro Focus jobs, providing continuously updated migration tools and sharing their expertise in converting workflows between platforms.

The BMC Services team follows a proven methodology with four key phases:

  • Planning: Includes creating a roadmap by assessing applications, dependencies, and environment constraints.
  • Development: Migrates applications using migration tools to minimize errors.
  • Verification: Compares original and migrated workflow outputs.
  • Execution: Deploys workloads once validation is complete.

Proven success

One BMC customer achieving success with such a migration is AG Insurance. To maintain its market leadership, the company is focused on customer and competitive differentiation, adding products and experiences to make it the best choice for customers, distributors, and brokers. The organization has embarked on an ambitious and complex replatforming modernization project to migrate from its mainframe to Windows servers.

Control-M has been integral to this transformation. As part of the replatforming project, AG Insurance migrated more than 80 million lines of code through Micro Focus application modernization solutions. To minimize risk and facilitate planning and implementation, the migration was accomplished through several sequential iterations, each with its own testing and validation cycles. During the iterative application migration process, Control-M was essential to the testing of parallel workflows, including migrated and non-migrated applications across the mainframe and the new distributed platform, and verifying that the business results they produced were identical.

Control-M continues to be the strategic orchestration framework driving all of AG Insurance’s applications and data workflows and enabling new possibilities.

Contemplating mainframe migration?

If you are one of the many mainframe-driven enterprises contemplating or actively pursuing a modernization program, Control‑M and Micro Focus Enterprise Server offer a compelling, integrated capability to manage your journey. Together, the solutions can help you adapt workflows and services with operational transparency so your organization can achieve a seamless transition from the mainframe while preserving your vital institutional knowledge and experience.

The migration can be approached through a managed methodology that leverages both Control-M’s expertise in migrating mainframe application workflows and Micro Focus’ expertise in migrating mainframe applications to mitigate risks and allow customers to migrate at their own pace.

For more information on how Control-M and Micro Focus Enterprise Server can advance your mainframe modernization initiative, download this whitepaper.

Streamlining Machine Learning Workflows with Control-M and Amazon SageMaker
https://www.bmc.com/blogs/ml-workflows-controlm-sagemaker/

In today’s fast-paced digital landscape, the ability to harness the power of artificial intelligence (AI) and machine learning (ML) is crucial for businesses aiming to gain a competitive edge. Amazon SageMaker is a game-changing ML platform that empowers businesses and data scientists to seamlessly navigate the development of complex AI models. One of its standout features is its end-to-end ML pipeline, which streamlines the entire process from data preparation to model deployment. Amazon SageMaker’s integrated Jupyter Notebook platform enables collaborative and interactive model development, while its data labeling service simplifies the often-labor-intensive task of data annotation.

It also boasts an extensive library of pre-built algorithms and deep learning frameworks, making it accessible to both newcomers and experienced ML practitioners. Amazon SageMaker's managed training and inference capabilities provide the scalability and elasticity needed for real-world AI deployments. Moreover, its automatic model tuning and robust monitoring tools enhance the efficiency and reliability of AI models, ensuring they remain accurate and up-to-date over time. Overall, Amazon SageMaker offers a comprehensive, scalable, and user-friendly ML environment, making it a top choice for organizations looking to leverage the potential of AI.

Bringing Amazon SageMaker and Control-M together

Amazon SageMaker simplifies the entire ML workflow, making it accessible to a broader range of users, including data scientists and developers. It provides a unified platform for building, training, and deploying ML models. However, to truly harness the power of Amazon SageMaker, businesses often require the ability to orchestrate and automate ML workflows and integrate them seamlessly with other business processes. This is where Control-M from BMC comes into play.

Control-M is a versatile application and data workflow orchestration platform that allows organizations to automate, monitor, and manage their data and AI-related processes efficiently. It can seamlessly integrate with SageMaker to create a bridge between AI modeling and deployment and business operations.

In this blog, we’ll explore the seamless integration between Amazon SageMaker and Control-M and the transformative impact it can have on businesses.

Amazon SageMaker empowers data scientists and developers to create, train, and deploy ML models across various environments—on-premises, in the cloud, or on edge devices. An end-to-end data pipeline includes more than just Amazon SageMaker's AI and ML functionality: data is ingested from multiple sources, transformed, and aggregated before a model is trained and AI/ML pipelines are executed with Amazon SageMaker. Control-M is often used for automating and orchestrating such end-to-end data pipelines. A good example of end-to-end orchestration is covered in the blog, "Orchestrating a Predictive Maintenance Data Pipeline," co-authored by Amazon Web Services (AWS) and BMC.

Here, we will specifically focus on integrating Amazon SageMaker with Control-M. When you have Amazon SageMaker jobs embedded in a data pipeline or complex workflow orchestrated by Control-M, you can harness the capabilities of Control-M for Amazon SageMaker to efficiently execute an end-to-end data pipeline that also includes Amazon SageMaker pipelines.

Key capabilities

Control-M for Amazon SageMaker provides:

  • Secure connectivity: Connect to any Amazon SageMaker endpoint securely, eliminating the need to provide authentication details explicitly
  • Unified scheduling: Integrate Amazon SageMaker jobs seamlessly with other Control-M jobs within a single scheduling environment, streamlining your workflow management
  • Pipeline execution: Execute Amazon SageMaker pipelines effortlessly, ensuring that your ML workflows run smoothly
  • Monitoring and SLA management: Keep a close eye on the status, results, and output of Amazon SageMaker jobs within the Control-M Monitoring domain and attach service level agreement (SLA) jobs to your Amazon SageMaker jobs for precise control
  • Advanced capabilities: Leverage all Control-M capabilities, including advanced scheduling criteria, complex dependencies, resource pools, lock resources, and variables to orchestrate your ML workflows effectively
  • Parallel execution: Run up to 50 Amazon SageMaker jobs simultaneously per agent, allowing for efficient job execution at scale

Control-M for Amazon SageMaker compatibility

Before diving into how to set up Control-M for Amazon SageMaker, it’s essential to ensure that your environment meets the compatibility requirements:

  • Control-M/EM: version 9.0.20.200 or higher
  • Control-M/Agent: version 9.0.20.200 or higher
  • Control-M Application Integrator: version 9.0.20.200 or higher
  • Control-M Web: version 9.0.20.200 or higher
  • Control-M Automation API: version 9.0.20.250 or higher

Please ensure you have the required installation files for each prerequisite available.

A real-world example

The Abalone Dataset, sourced from the UCI Machine Learning Repository, has been frequently used in ML examples and tutorials to predict the age of abalones based on various attributes such as size, weight, and gender. The age of abalones is usually determined through a physical examination of their shells, which can be both tedious and intrusive. However, with ML, we can predict the age with considerable accuracy without resorting to physical examinations.

For this exercise, we used the Abalone tutorial provided by AWS. This tutorial efficiently walks users through the stages of data preprocessing, training, and model evaluation using Amazon SageMaker.

After understanding the tutorial’s nuances, we trained the Amazon SageMaker model with the Abalone Dataset, achieving satisfactory accuracy. Further, we created a comprehensive continuous integration and continuous delivery (CI/CD) pipeline that automates model retraining and endpoint updates. This not only streamlined the model deployment process but also ensured that the Amazon SageMaker endpoint for inference was always up-to-date with the latest trained model.

Setting up Control-M for Amazon SageMaker

Now, let’s walk through how to set up Control-M for Amazon SageMaker, which has three main steps:

  1. Creating a connection profile that Control-M will use to connect to the Amazon SageMaker environment
  2. Defining an Amazon SageMaker job in Control-M that will define what we want to run and monitor within Amazon SageMaker
  3. Executing an Amazon SageMaker pipeline with Control-M

Step 1: Create a connection profile

To begin, you need to define a connection profile for Amazon SageMaker, which contains the necessary parameters for authentication and communication with SageMaker. Two authentication methods are commonly used, depending on your setup.

Example 1: Authentication with AWS access key and secret

Figure 1. Authentication with AWS access key and secret.

Example 2: Authentication with AWS IAM role from EC2 instance

Figure 2. Authentication with AWS IAM role.

Choose the authentication method that aligns with your environment. It is important to specify the Amazon SageMaker job type exactly as shown in the examples above. Please note that Amazon SageMaker is case-sensitive, so make sure to use the correct capitalization.
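
Since the screenshots are not reproduced here, the sketch below is an approximate jobs-as-code rendering of an access-key-based connection profile. The attribute names and values are illustrative, so treat them as placeholders and rely on the Control-M for Amazon SageMaker documentation for the authoritative schema:

    {
      "AWS-SAGEMAKER": {
        "Type": "ConnectionProfile:AWS SageMaker",
        "Centralized": true,
        "AWS Region": "us-east-1",
        "AWS Access Key": "<access-key-id>",
        "AWS Secret": "<secret-access-key>"
      }
    }

For the IAM-role method shown in Figure 2, the access key and secret would be omitted and the role attached to the EC2 instance hosting the Control-M/Agent would be used instead.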

Step 2: Define an Amazon SageMaker job

Once you've set up the connection profile, you can define an Amazon SageMaker job within Control-M, which enables you to execute Amazon SageMaker pipelines effectively.

Figure 3. Example AWS SageMaker job definition.

In this example, we’ve defined an Amazon SageMaker job, specifying the connection profile to be used (“AWS-SAGEMAKER”). You can configure additional parameters such as the pipeline name, idempotency token, parameters to pass to the job, retry settings, and more. For a detailed understanding and code snippets, please refer to the BMC official documentation for Amazon SageMaker.
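
Because Figure 3 appears here only as a caption, the following jobs-as-code sketch gives a rough idea of how such a job might be expressed. The folder, job, and host names are hypothetical, the pipeline name follows the Abalone example, and the attribute names are approximations of the plug-in's fields:

    {
      "SageMakerDemoFolder": {
        "Type": "Folder",
        "RunAbalonePipeline": {
          "Type": "Job:AWS SageMaker",
          "ConnectionProfile": "AWS-SAGEMAKER",
          "Pipeline Name": "AbalonePipeline",
          "Idempotency Token": "Token_ControlM_%%ORDERID",
          "Retry Pipeline Execution": "unchecked",
          "Host": "sagemaker-agent-01"
        }
      }
    }

%%ORDERID is a built-in Control-M variable, which makes the idempotency token unique per job run.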

Step 3: Executing the Amazon SageMaker pipeline with Control-M

It's essential to note that the pipeline name and endpoint are mandatory JSON objects within the pipeline configuration. Executing the "ctm run" command on the pipeline.json file activates the pipeline's execution within AWS.

First, we run “ctm build sagemakerjob.json” to validate our JSON configuration and then the “ctm run sagemakerjob.json” command to execute the pipeline.

Figure 4. Launching Amazon SageMaker job.

As seen in the screenshot above, the "ctm run" command has launched the Amazon SageMaker job. The next screenshot shows the pipeline running from the Amazon SageMaker console.

Figure 5. View of data pipeline running in Amazon SageMaker console.

In the Control-M monitoring domain, users have the ability to view job outputs. This allows for easy tracking of pipeline statuses and provides insights for troubleshooting any job failures.

Figure 6. View of Amazon SageMaker job output from Control-M Monitoring domain.

Summary

In this blog, we demonstrated how to integrate Control-M with Amazon SageMaker to unlock the full potential of AWS ML services, orchestrating them effortlessly into your existing application and data workflows. This fusion not only eases the management of ML jobs but also optimizes your overall automation processes.

Stay tuned for more blogs on Control-M and BMC Helix Control-M integrations! To learn more about Control-M integrations, visit our website.

Unlock the power of SAP® Financial Close with Control-M
https://www.bmc.com/blogs/unlock-sap-financial-close-controlm/

Executive summary

SAP® is a complex system with many integrations and modules for thousands of time-sensitive financial closing activities that must sync with each other so the final general ledger can be balanced. All modules and sub-modules in finance need to interact with each other in a time-dependent fashion to successfully close any outstanding and open items. Collecting closing documents from various stakeholders across the organization can create major challenges for the business to successfully close its financial books and enter a new fiscal month/year. As a result, accounting is often behind in closing previous months, and the books are rarely up to date and balanced. Both issues create financial uncertainty for the organization.

Organizations need a successful month-end, quarter-end, and year-end close in which all carry-forwards are moved into the next fiscal year, the general ledger (GL) and sub-ledgers can be closed, and the trial balance balanced. This allows companies to maintain strong cash flow and liquidity and reduce total cost of ownership (TCO).

Financial closing is a jumble of task types, closing types (monthly, yearly, quarterly), cost centers/profit centers, time-dependent variables, custom factory calendars, and custom programs and transactions. Efficient financial closing processes are crucial for decision-making, financial transparency, and maintaining the trust of stakeholders, including investors, regulators, and the public.

Common challenges with the financial close process

1. Lack of enterprise visibility

If all tasks are not completed on time by the end of a given period, or some GL postings remain outstanding, it is very likely that the GL will not be balanced on time. Bad data passed to another group leads to worse data. Access to real-time insights and visibility is critical, but not always common.

2. Data accuracy and reconciliation

Reconciling accounts, validating transactions, and resolving discrepancies can be time-consuming and complex, particularly in large organizations with numerous transactions and accounts. As business units balance their sub-ledgers while waiting on dependencies within their own group, other teams may be waiting on them. Just a single error can lead to inaccurate data and an unbalanced GL. The manual effort to resolve this can be overwhelming.

3. Time sensitivity

Financial closes often have strict deadlines, especially for quarterly and annual reporting. Meeting these deadlines can be challenging, especially if there are delays in data gathering, reconciliation, or approval processes.

Missing a financial close, especially a critical one like a quarterly or annual close, can have significant consequences for enterprise organizations, such as being out of compliance with international accounting standards, tax laws, and industry-specific standards, or risking a possible audit. The most immediate consequence is a delay in financial reporting. This can erode trust among stakeholders, including investors, creditors, and regulators, who rely on timely and accurate financial statements for decision-making. Many organizations are legally obligated to file financial reports within specific deadlines. Failure to meet these deadlines can lead to fines, penalties, or legal actions by regulatory authorities. Another consequence is the potential negative impact to the stock price if investors become concerned that the company is in trouble. The list goes on and on.

Benefits of integrating BMC Helix Control-M and Control-M into your SAP finance system

Control-M for SAP® creates and manages SAP ECC, SAP S/4HANA®, SAP Business Warehouse (BW), and data archiving jobs, and supports any applications in the SAP ecosystem, eliminating time, complexity, and any specialized knowledge requirements, while also securely managing the dependencies and silos between SAP and non-SAP systems.

Control-M can speed up even the most complex closing cycles while meeting regulatory requirements and financial reporting standards, allowing you to track closing processes at every stage, including manual steps, transactions, programs, jobs, workflows, and remote tasks.

Figure 1. The step-by-step activities of a financial closing.

Plan all your tasks with Control-M job planning and scheduling for better visibility

The jobs and tasks that affect all SAP modules relevant to a financial close can be grouped together in the planning feature of Control-M. This provides full enterprise visibility into the jobs and tasks that will be executed and helps reduce silos.

Control-M provides answers to your most common questions, including:

  • Where is my job running?
  • In which system?
  • In which cost center?

Further, Control-M can also provide:

  • Intelligent, predictive service level agreement (SLA) management for all business processes and jobs.
  • Resolution and better visibility for cross-application and cross-platform workload challenges.

Pre-carry forward and post-carry forward activities with the Control-M job dependency feature

Using Control-M, all pre- and post-carry forward activities can be put in their respective buckets, and all time dependencies can be defined. Once all pre-carry forward tasks have completed and all steps in those jobs have finished successfully, you can move on to the next step. Conversely, if jobs have failed, the alert and notification feature of Control-M can notify the job owners. When designing jobs, you can define a workflow for what to do if a job fails, which determines whether subsequent jobs should continue or the process should be stopped.
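
A simplified jobs-as-code sketch of this pattern is shown below: the post-carry forward job waits for an event added by the pre-carry forward job, and a failure in the first job triggers an email notification. The job names, commands, host, and mail address are invented for illustration, and in a real close the generic command jobs would be replaced by the appropriate SAP job types:

    {
      "MonthEndCloseDemo": {
        "Type": "Folder",
        "PreCarryForward": {
          "Type": "Job:Command",
          "Command": "run_pre_carry_forward.sh",
          "Host": "sap-agent-01",
          "RunAs": "finops",
          "IfFailure": {
            "Type": "If",
            "CompletionStatus": "NOTOK",
            "NotifyOwners": {
              "Type": "Mail",
              "Subject": "Pre-carry forward failed",
              "Message": "Month-end close is on hold pending investigation.",
              "To": "finance-ops@example.com"
            }
          },
          "AddClosedEvent": {
            "Type": "AddEvents",
            "Events": [{"Event": "PreCarryForward-Completed"}]
          }
        },
        "PostCarryForward": {
          "Type": "Job:Command",
          "Command": "run_post_carry_forward.sh",
          "Host": "sap-agent-01",
          "RunAs": "finops",
          "WaitForPreCarryForward": {
            "Type": "WaitForEvents",
            "Events": [{"Event": "PreCarryForward-Completed"}]
          }
        }
      }
    }

The waiting job does not start until the event is present, so a failed pre-carry forward run can be investigated and rerun without incorrect postings flowing downstream.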

Effective controls and measures to put a temporary hold on certain financial posting processes

There are many scenarios during financial closing where manual adjustments are required. If the need arises to stop a certain financial closing job or put a temporary hold on a job for a manual adjustment, Control-M offers dynamic workload management to stop or start a process or job, pause subsequent jobs, and flexibly restart from the point of failure to prevent incorrect month-end postings.

Non-SAP postings with Control-M Managed File Transfer

Control-M can orchestrate all SAP jobs, as well as fully automate, schedule, and monitor all jobs coming from non-SAP systems. For example, a lot of data comes from investments, bank reconciliation files, and other open balances from sources that are not in SAP. All financial postings coming from non-SAP systems need to be consolidated within SAP Single Responsibility Principle (SRP). Control-M Managed File Transfer can be utilized to bring all manual postings from non-SAP systems into the SAP enterprise resource planning (ERP) system of record for all final closing and postings via file transfer protocol.

The solution also helps you reduce risk and deliver business services faster by automating internal and external file transfers in a single view with related application workflows in hybrid environments. With Control-M Managed File Transfer, you can schedule and manage your file transfers securely and efficiently with Federal Information Processing Standards (FIPS) compliance and policy-driven processing rules. Additionally, Control-M reduces file transfer point product risks and provides a 360-degree view, customizable dashboards, and advanced analytics.
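
As a rough jobs-as-code illustration (the connection profile names, paths, and host are made up, and the attribute names should be checked against the Managed File Transfer documentation), a transfer that pulls a bank reconciliation file from an external SFTP server onto the SAP landing host might be sketched as:

    {
      "BankReconTransfer": {
        "Type": "Job:FileTransfer",
        "Host": "mft-agent-01",
        "ConnectionProfileSrc": "BANK_SFTP",
        "ConnectionProfileDest": "SAP_LANDING",
        "FileTransfers": [
          {
            "Src": "/outbound/bank_reconciliation.csv",
            "Dest": "/sap/inbound/bank_reconciliation.csv",
            "TransferType": "Binary",
            "TransferOption": "SrcToDest"
          }
        ]
      }
    }

The transfer job can then be wired into the same folder as the SAP posting jobs so the consolidation only starts once the external files have landed.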

Reduce audit risk with Control-M

Control-M provides complete visibility into the financial close cycle and effectively monitors all tasks and financial postings from inside and outside SAP, providing audit trails that enable timely remediation and ultimately reduce the likelihood of an external financial audit. All jobs and tasks are transparent and contain logs. Only users with the correct roles and authorizations can execute jobs. If auditors want to audit certain postings, they can see the logs and job output.

Workflow insights with Control-M

Control-M Workflow Insights provides valuable dashboards that give users in-depth observability to continuously monitor and improve the performance of the application and data workflows that power critical business services. Users get easy-to-understand dashboards with insights into the trends that emerge from continuous workflow changes to help reduce the risk of adversely impacting business services. With Control-M Workflow Insights, one can see the trends and any bottleneck in previous financial closings and plan better for the next fiscal close.

Control-M Workflow Insights also helps organizations:

  • Manage financial closing KPI tracking and performance, ensuring continuous improvement to financial closing workflow health and capabilities.
  • Improve forecasting of future infrastructure and capacity needs.
  • Understand critical SLA service duration and effects on the business during the financial close period.
  • Find workflow anomalies that could impact Control-M performance and workflow efficiency.

Conclusion

There are many benefits to integrating Control-M into your SAP finance close process. All jobs are effectively monitored within sub-modules and the process flow, and any hindrance can be escalated to the appropriate personnel. Manual steps, such as checking logs and monitoring jobs in SM37, can be effectively automated, bringing better visibility and control to the entire year-end process.

With Control-M for SAP, you’ll get the following benefits:

  • Better visibility at all times
  • The ability to restart failed processes
  • The ability to wait on dependent processes and trigger them manually, or via an event
  • Faster closing cycles that meet regulatory requirements and financial reporting standards
  • Increased user efficiency through centralized monitoring and control and enhanced automation

Control-M simplifies workflows across hybrid and multi-cloud environments and is available as self-hosted or SaaS. Get the most out of your SAP finance close process by modernizing your orchestration platform with Control-M, an SAP Certified Partner for Integration with RISE with SAP S/4HANA Cloud.

To learn more about Control-M for SAP, visit our website.

SAP, ABAP, SAP S/4HANA are the trademark(s) or registered trademark(s) of SAP SE or its affiliates in Germany and in several other countries.

Control-M Earns Top Spot in EMA's 2023 Workload Automation and Orchestration Radar Report
https://www.bmc.com/blogs/ema-radar-report-for-workload-automation/

Leading industry analyst and consulting firm Enterprise Management Associates just released its EMA Radar™ Report for Workload Automation and Orchestration 2023, and BMC is proud to share that Control-M was named a Value Leader and achieved the overall highest score among the vendors evaluated. Since 2010, EMA has published seven Radar Reports for Workload Automation, and BMC has been recognized as the overall leader every time. In addition to BMC's top ranking, Control-M (self-hosted) and BMC Helix Control-M (SaaS) were recognized for offering "… a range of innovative features that streamline the creation, management, and monitoring of application and data workflows… across diverse hybrid and multi-cloud environments."

The report highlights four key evaluation categories where Control-M significantly outpaced competitors:

  • Functionality
  • Deployment and administration
  • Architecture and integration
  • Vendor strength

In the report, EMA also notes that “BMC brings an innovative and efficient approach to data pipeline orchestration. Navigating on-premises and cloud technologies, the platforms facilitate data pipeline creation, integration, and automation across platforms like Airflow and cloud services…”.

Highlighted features include:

  • Seamless integration, automation, and orchestration "… of workflows across diverse hybrid and multi-cloud environments."
  • Control-M and Helix Control-M’s “… vast (and rapidly expanding) catalog of out-of-the-box integrations”
  • Self-service interfaces for “… developers, data and cloud engineers, business users, and IT operations teams.”

During their research, EMA also interviewed BMC customers who shared the following insights:

  • “I am the Control-M evangelist because of the flexibility and integration.”
  • “Control-M is one of the most stable things I run. I don’t lose sleep over Control-M, and that’s my favorite thing.”
  • “[BMC’s] commitment to innovation and R&D is impressive.”

Source: EMA Radar™ Report for Workload Automation and Orchestration 2023

Click here to download the report to get more details on EMA’s analysis and to learn more about how Control-M and Helix Control-M can simplify the orchestration of complex application and data workflows for your organization.

Data Orchestration: the Core Pillar for DataOps
https://www.bmc.com/blogs/data-orchestration-core-pillar-dataops/

Introduction

Organizations are drowning in data and thirsty for knowledge, wisdom, and insights. Data engineering teams in aspiring data-driven organizations are overwhelmed with fast-changing technology and organizational complexities as they look to move from proof of concept (POC) to proof of value (POV) and establish a sustainable operating model with continuous improvement.

For most organizations today, data unification and data integration challenges are growing overwhelmingly complex as they gravitate toward best-of-breed tools in a disaggregated data ecosystem. Data engineering that leverages DataOps and data orchestration is the foundational pillar on which organizations should build their next-generation data platforms in an ever-evolving data ecosystem to scale data teams with all the inherent process variability.

Why do organizations need DataOps?

All organizations want to be data-driven, but there's a huge disconnect between wanting to be data-driven and getting it done. Bleeding-edge and cutting-edge technologies are immature and not battle-tested, and they alone will not get organizations there. It is the process of operationalizing technologies that is key to organizations becoming data-driven.

Most data teams do not think about "Day 2," which begins when product teams have completed development and successfully deployed to production. Do they have an end-to-end process to deploy artifacts? Have they tested what they are about to deploy with functional, performance, load, and stress tests? Are they ready to roll back production changes if problems happen in production while keeping the lights on?

There is a disconnect between doing POCs and POVs with emergent technologies and leveraging them to build and successfully deploy real-life use cases to production. There are a few reasons for this disconnect, and most of them can be addressed by the missing component in the data economy: DataOps. Many organizations do DataOps, but it is ad hoc, fragmented, and built without guidelines, specifications, and a formalized process.

Data infrastructures today, spanning ingestion, storage, and processing, are deployed on distributed systems that include on-premises, public and private cloud, hybrid, and edge environments. These systems are a complex mix of servers, virtual machines, networking, memory, and CPU where failures are inevitable. Organizations need tools and processes in place that can quickly do root cause analysis and reduce the mean time to recovery (MTTR) from failures.

DataOps eliminates gaps, inefficiencies, and misalignments across the different set of steps from data production to consumption. It coordinates and orchestrates the development and operationalization processes in a collaborative, structured, and agile manner, enabling organizations to streamline data delivery and improve productivity through multiple process integrations and automations, delivering the velocity to build and deploy data-driven applications with trust and governance.

What is DataOps?

DataOps streamlines and automates data processes and operations to inform and speed the building of products and solutions and help organizations become data-driven. The goal of DataOps is to move from ad hoc data practices to a formalized data engineering approach with a controlled framework for managing processes and tasks.

To become data-driven, organizations need tools and processes that automate and manage end-to-end data operations and services. DataOps allows organizations to deliver these data products and services with velocity, reliability, and efficiency, along with automation, data quality, and trust.

DataOps is accomplished through a formal set of processes and tools that detect, prevent, and mitigate issues that arise when developing data pipelines and deploying them to production. This improves operational efficiency for building data products and data-driven services. DataOps applies the ideas of continuous integration and continuous delivery (CI/CD) to develop agile data pipelines and promotes reusability with data versioning to collaboratively manage data integration from data producers to consumers. It also reduces the end-to-end time and cost of building, deploying, and troubleshooting data platforms and services.

For organizations investing in digital transformation with analytics, artificial intelligence (AI), and machine learning (ML), DataOps is an essential practice for managing and unlocking data assets to yield better insights and improve decision-making and operational efficiency. DataOps enables innovation through decentralization while harmonizing domain activities into a coherent end-to-end pipeline of workflows, handling global orchestration on shared infrastructure with inter-domain dependencies and enabling policy enforcement.

Why organizations need data orchestration

The data landscape today offers a rich ecosystem of tools, frameworks, and libraries, with no “one tool to rule them all.” Some tools are programmatically accessible through well-defined APIs, while others are invoked from the command line to integrate with other processes in the ecosystem. With the disaggregated data stack, enterprises must stitch together a plethora of different tools and services to build end-to-end data-driven systems.

Figure 1. A cross-sectional view of a simple data pipeline.

Most organizations do not have a handful of data pipelines; they have tens or hundreds of them, with complex deployment models that span from the edge to on-premises systems and across multiple cloud data centers. Components are deployed in different data centers, and each component is itself built as a distributed architecture, with sub-components running across different servers and virtual machines. There are multiple handshaking points, each a potential failure point, from network to server outages.

When architecting and building these pipelines, data engineers need to guarantee that the pipeline code executes as intended and that it is built with data engineering best practices and operational rigor. Handling the happy path as well as the cases where things go wrong is critical. The goal is to coordinate all the pieces so that the entire pipeline is resilient, can fail over, and can recover from the point of failure.

Production data pipelines are complex, with multiple forks, dependencies, and a different mix of trigger logic for each task, all managed by the orchestrator’s scheduling and workflow engine. Data orchestration sits at the center of increasingly complex data operations. It controls and coordinates data operations in workflows that involve multiple stakeholders across the technical spectrum, keeping track of what has happened, when it happened, and what still needs to happen for a successful pipeline execution. The orchestration engine triggers jobs and coordinates the tasks within each job to enforce dependencies. It logs actions and maintains audit traces to provide a comprehensive view of pipeline status for troubleshooting.
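
To make the coordination role concrete, here is a deliberately minimal, orchestrator-agnostic sketch of how a workflow engine can enforce task dependencies and log what has run. Real orchestrators add scheduling, retries, parallel and distributed execution, and persistent state; the task names here are hypothetical.

    import logging
    from graphlib import TopologicalSorter  # standard library, Python 3.9+

    logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

    # Each task maps to the set of tasks it depends on (hypothetical pipeline steps).
    dependencies = {
        "ingest": set(),
        "validate": {"ingest"},
        "transform": {"validate"},
        "load": {"transform"},
        "publish_report": {"load"},
    }

    def run_task(name):
        logging.info("starting %s", name)
        # ... real work (API call, SQL job, script) would happen here ...
        logging.info("finished %s", name)

    # The "engine": run tasks only after all of their dependencies have completed.
    for task in TopologicalSorter(dependencies).static_order():
        run_task(task)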

Data and ML engineers leverage data orchestration for use cases like data ingestion and integration, data processing, and data delivery. The data orchestration capabilities most requested by data engineers and data scientists are ease of use, monitoring, easy debugging, and observability of their data pipelines. The end goal is to enhance productivity and to architect, develop, and deploy well-engineered data pipelines with monitoring, testing, and quick feedback loops.

What is data orchestration?

Modernizing an organization’s data infrastructure can become increasingly difficult and error-prone without a data orchestrator in a data engineering team’s toolkit. A long list of data engineering tasks needs to be accomplished before one can start working on the real business problem and build valuable data products. These tasks include provisioning infrastructure for data ingestion, storage, processing, consumption, testing, and validation, handling failures, and troubleshooting.

Data orchestration is the glue between these tasks across the data stack: it manages dependencies, coordinates the tasks in the defined workflow, schedules them, manages their outputs, and handles failure scenarios. It is the solution responsible for managing the execution and automation of steps in the flow of data across different jobs. The orchestration process governs data flow according to the orchestration rules and business logic, scheduling and executing the workflow associated with that data flow.

This flow structure is modeled as a dependency graph, typically a directed acyclic graph (DAG). Data orchestration is the process of knitting and connecting these tasks into a chain of logical steps to build a well-coordinated, end-to-end data pipeline.

Data orchestrators come in two types: task-driven and data-driven. A task-driven orchestrator does not care what the input or output of a pipeline step is; its only focus is orchestrating the workflow. A data-driven orchestrator not only orchestrates workflows but is also aware of the data that flows between tasks and of each task’s output artifacts, which can be version-controlled and have tests associated with them.
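
The distinction can be shown with a toy example: a task-driven run only sequences opaque steps, while a data-driven run also captures each step’s output as a content-addressed artifact that can be versioned and checked by a test before the next step runs. All names and checks below are illustrative.

    import hashlib
    import json

    # Task-driven style: the orchestrator only sequences opaque steps;
    # it never sees their inputs or outputs.
    def run_task_driven(steps):
        for step in steps:
            step()

    # Data-driven style: each step's output becomes a content-addressed,
    # versionable artifact, and a test runs against it before the next step.
    def run_data_driven(steps):
        artifact, store = None, {}
        for step, check in steps:
            artifact = step(artifact)
            version = hashlib.sha256(json.dumps(artifact, sort_keys=True).encode()).hexdigest()[:12]
            store[version] = artifact
            if not check(artifact):
                raise ValueError(f"data check failed after {step.__name__} (artifact {version})")
        return store

    def extract(_):
        return [1, 2, 3]          # stand-in for reading from a source

    def double(data):
        return [x * 2 for x in data]

    run_task_driven([lambda: print("step 1 ran"), lambda: print("step 2 ran")])
    store = run_data_driven([
        (extract, lambda d: len(d) == 3),
        (double, lambda d: all(x % 2 == 0 for x in d)),
    ])
    print(f"{len(store)} versioned artifacts produced")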

A good orchestrator lets data teams quickly isolate errors and failures. Data orchestrators can be triggered on a time-based schedule or by custom-defined logic. The infrastructure required by data domains can then be unified into a self-service, infrastructure-as-a-platform offering managed through a DataOps framework.

Best practices with data orchestrators include decoupling DAGs, breaking them down into the simplest possible dependencies with sub-DAGs, and making each DAG fault-tolerant so that if one sub-DAG breaks, it can easily be re-executed.

Data orchestration should be configuration-driven to ensure portability across environments and to provide repeatability and reproducibility. Other best practices include making the orchestration process single-click, adding checkpoints so that broken data pipelines can recover from the point of failure, and ensuring the ability to retry failed tasks with a configurable backoff process.
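
The retry-with-backoff and checkpoint practices above can be sketched in a few lines of Python; the checkpoint file, task names, and simulated failure are hypothetical stand-ins for what a production orchestrator would persist and manage far more robustly.

    import json
    import os
    import random
    import time

    CHECKPOINT = "pipeline_checkpoint.json"   # hypothetical checkpoint location

    def load_checkpoint():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return set(json.load(f))
        return set()

    def save_checkpoint(done):
        with open(CHECKPOINT, "w") as f:
            json.dump(sorted(done), f)

    def run_with_retries(name, fn, max_attempts=4, base_delay=1.0):
        for attempt in range(1, max_attempts + 1):
            try:
                return fn()
            except Exception as exc:
                if attempt == max_attempts:
                    raise   # out of retries; the task stays unfinished in the checkpoint
                delay = base_delay * (2 ** (attempt - 1))   # configurable exponential backoff
                print(f"{name} failed ({exc}); retrying in {delay:.1f}s")
                time.sleep(delay)

    def flaky_load():
        if random.random() < 0.4:                 # simulate a transient failure
            raise RuntimeError("transient failure")
        return "loaded"

    tasks = [("ingest", lambda: "ok"), ("transform", lambda: "ok"), ("load", flaky_load)]
    done = load_checkpoint()                      # resume from the point of failure
    for name, fn in tasks:
        if name in done:
            continue
        run_with_retries(name, fn)
        done.add(name)
        save_checkpoint(done)
    print("pipeline complete")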

Conclusion

Enterprises are cautioned against jumping into building data platforms for data-driven decision-making without incorporating the principles of data engineering and DataOps. These synergistic capabilities give organizations the necessary process formality and velocity and reduce data and technical debt in the long run.

]]>
How Control-M Can Help You Minimize SAP® Business Warehouse Log Monitoring Issues https://www.bmc.com/blogs/controlm-minimize-sap-log-monitoring-issues/ Fri, 29 Sep 2023 16:19:32 +0000 https://www.bmc.com/blogs/?p=53204 SAP® Business Warehouse (BW) is a complex enterprise data warehouse (EDW) system connected and providing data to multiple upstream and downstream systems. Job scheduling within an EDW is a continuous process and monitoring these jobs presents its own unique set of challenges. One of the mechanisms for measuring job success and failure is to use […]]]>

SAP® Business Warehouse (BW) is a complex enterprise data warehouse (EDW) system that is connected to, and provides data to, multiple upstream and downstream systems. Job scheduling within an EDW is a continuous process, and monitoring these jobs presents its own unique set of challenges.

One of the mechanisms for measuring job success and failure is to use SAP background work processes to monitor and analyze logs, performing root cause analysis when failures occur or processes slow down and providing that data to the network operations center (NOC) operator or other process administrator for follow-up. Issues arise when there are too many logs to analyze and they are not all in the same place. Examples of log types in SAP BW include SM21 system logs, SLG1 application logs, and ST22 short dumps.

Log monitoring tools help the IT operations (ITOps) department review jobs and processes experiencing longer-than-normal run times due to external or internal dependencies such as excessive heap memory utilization, excessive CPU use, bad code, ABAP® dumps, or configuration errors. Longer-running background processes are an indicator that data is bad, the configuration is corrupted, there are potential system hardware (CPU/memory) issues, or there is simply an error in ABAP code that is causing the short dumps.

This degrades performance because batch and dialog work processes are consumed and the system exhausts the resources required for any available backend process to run new jobs. That can crash the system and force ITOps or the process administrator into the uncomfortable position of having to explain why the system was unavailable. Monitoring logs within the system helps reduce how often this happens in your organization.

Your organization’s reputation depends on its ability to deliver accurate results every time, in real time, without any downtime or errors. Log monitoring is critical for finding bottlenecks in the system and making sure your SAP enterprise resource planning (ERP) systems are up and running and that jobs and processes are running on time, all the time.

Log monitoring challenges

Common challenges in log monitoring within SAP BW systems include:

  • SAP system-specific
    • If multiple systems are connected to the SAP BW system, each source system must be selected individually before its logs can be visualized, so the process administrator has to toggle back and forth between source systems, which can impact enterprise visibility.
  • Transaction code (T-code)
    • There are multiple T-codes to see and visualize the logs, like SM21 for system logs, SLG1 for application logs, ST22 for code-related error logs, and SM37 for job logs. This process is time-consuming and requires extra skills to become proficient.
  • Multiple steps involved to see and monitor jobs.
  • Individual clients and logins required for multiple systems and to view all job logs.
  • SAP coding knowledge required.
    • Hiring and training of the skilled resources required to monitor and analyze job logs can be costly for companies.
  • No complete visibility into dependencies.
  • SAP logs from previous years are archived and unavailable for viewing.

Control-M benefits for SAP BW log monitoring

BMC Helix Control-M and the self-hosted version of Control-M simplify application and data workflow orchestration as a service or on-premises. As a part of your SAP BW toolset, Control-M makes it easy to build, define, schedule, manage, and monitor all of your production workflows, ensuring visibility and reliability and improving service level agreements (SLAs). With Control-M, you can review job logs, job dependencies, and historical job executions to determine why a particular job failed or experienced issues.
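
As a rough sketch of what programmatic job-status review can look like, the snippet below polls a workflow API for jobs in a folder and flags failed or still-executing ones. The host, endpoint path, field names, and status strings are placeholders rather than the documented Control-M Automation API; consult the product documentation for the actual interface and authentication.

    # Illustrative only: the endpoint, fields, and statuses below are placeholders,
    # not the documented Control-M Automation API.
    import requests   # third-party HTTP client

    API = "https://automation-api.example.com"      # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}    # hypothetical auth token

    def jobs_needing_attention(folder):
        resp = requests.get(f"{API}/run/jobs/status",
                            params={"folder": folder}, headers=HEADERS, timeout=30)
        resp.raise_for_status()
        for job in resp.json().get("statuses", []):
            if job.get("status") in ("Ended Not OK", "Executing"):
                yield job

    for job in jobs_needing_attention("SAP_BW_DAILY_LOADS"):
        print(job.get("name"), job.get("status"), job.get("startTime"))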

Control-M log monitoring aids in identifying long-running processes, not only in jobs that have failed, but also those jobs that are in a yellow hanging state due to other factors. With this proactive approach to log monitoring, Control-M can help you avoid unnecessary system downtime and maintain trust and reliability in the system.

All log monitoring can be streamlined and automated with Control-M, saving companies time and money when critical jobs fail. Control-M can also manage transformation logic and derivation rule jobs. Error handling and message logging capabilities are implemented in the extract, transform, load (ETL) process and controlled by Control-M. Finally, failed processes can be restarted with Control-M after the data is corrected at the source.

With Control-M’s robust log monitoring capabilities, companies can avoid unnecessary system and process disruptions while also improving system performance by eliminating problem components through proper root cause analysis.

Other key functional benefits of using Control-M in conjunction with your SAP BW tools include:

  • Full statistics on job average run times
  • Easy interface to monitor all system logs
  • User-friendly, easily navigable graphical user interface (GUI)
  • Enterprise visibility and a single dashboard for monitoring, analyzing, and managing multiple job logs from multiple systems

Conclusion

The reliability of your system is important because it ensures that your users trust and have confidence in the system you are maintaining. Your organization’s reputation depends on its ability to deliver accurate results every time without any downtime or errors. Control-M empowers companies to process and analyze logs in real time to quickly identify and address issues as they occur. Reliable alerting mechanisms promptly notify relevant parties when critical events or errors are detected in the logs. Control-M’s intuitive user interface makes it easy for operators and administrators to interact with the log monitoring features, set up alerts, and analyze log data, reducing the need for specialized resources and training. Finally, Control-M provides full statistics on job run times so companies can continue to find and correct inefficiencies in their key jobs and processes across hybrid and multi-cloud environments.

By leveraging Control-M’s SAP integration and job types, companies can enhance the reliability of their log monitoring system, leading to improved system stability, quicker issue resolution, and better overall operational performance.

Visit our website to learn more about how Control-M for SAP can help you with all your SAP needs.

SAP, ABAP, SAP S/4HANA are the trademark(s) or registered trademark(s) of SAP SE or its affiliates in Germany and in several other countries.

]]>
Optimize SAP® Business Warehouse Job Monitoring with Control-M https://www.bmc.com/blogs/optimize-sap-bw-with-control-m/ Mon, 25 Sep 2023 10:38:27 +0000 https://www.bmc.com/blogs/?p=53189 Business processes, jobs, and workflows are all interconnected, and the volume of data they handle and the complexity of those processes continue to increase. In this ever-expanding ecosystem, SAP® enterprise resource planning (ERP) and other SAP solutions are constantly bombarded with incoming and outgoing traffic, Application Link Enabling (ALE), Business Application Programming Interface (BAPI), or […]]]>

Business processes, jobs, and workflows are all interconnected, and the volume of data they handle and the complexity of those processes continue to increase. In this ever-expanding ecosystem, SAP® enterprise resource planning (ERP) and other SAP solutions are constantly bombarded with incoming and outgoing traffic: Application Link Enabling (ALE) messages, Business Application Programming Interface (BAPI) calls, incoming Intermediate Documents (IDocs), and workflows with numerous items. Amid all of this, effective and proper monitoring of SAP Business Warehouse (SAP BW) systems and their integrations becomes essential for managing business operations, ensuring business continuity, and avoiding costly disruptions and failures that can have far-reaching consequences.

Job monitoring challenges

SAP BW comes with its own built-in job monitoring layer, controlled by transaction code (T-code) RSMO and connected to various source systems via source system configuration. Overall visibility can be impacted when SAP BW is interfaced with multiple non-SAP source systems. Job interception, job scheduling, and job monitoring can be problematic due to the complex SAP landscape. Without visibility across the enterprise, failures can go unnoticed, and IT teams often need to invest significant manual effort and time to identify and untangle the dependencies and understand the underlying problem. BMC Helix Control-M and Control-M can be coupled with SAP BW tools to provide the most holistic view of all SAP BW processes and manage all job interdependencies.

The most common challenge occurs when jobs are not effectively monitored and subsequently fail. Fixing these jobs costs organizations resources, disrupts business processes, and likely leaves certain data and reports unavailable, since the fix happens during business hours rather than as a nightly refresh. In this case, delivering on service level agreements (SLAs) becomes risky at best. Control-M, alongside SAP BW tools, provides peace of mind by delivering total visibility around the clock.

Data monitoring challenges across multiple steps

  1. SAP BW time selection challenges
    1. SAP BW GUI does not provide graphical time selection.
    2. Multiple systems involved.
    3. System-specific: only provides monitoring of a specific BW system, requiring users to log in to multiple BW systems.
    4. Does not provide a 360-degree view.
  2. SAP BW data monitoring selections challenges
    1. There may be a lengthy learning curve for users to master the GUI, making it challenging to perform their tasks efficiently and effectively.
    2. Requires knowledge of SAP metadata.
    3. Requires SAP BW skills.
    4. Requires high SAP BW job monitoring skills.
  3. SAP BW execute challenges
    1. Each time “Execute” is run, it only returns data for a specific source system.
    2. Lack of total enterprise visibility.
    3. System-specific logs—every time the RSMON T-code is called, a user must specify the source system to check its monitor logs.
    4. Logs for each system need to be monitored.
    5. System-specific silos—users must go to each source system to see the logs, creating a silo scenario, since all logs are not available. Users are also required to drill down to get to the desired outcome.
    6. Managing multiple systems without visibility into job statuses makes checking the status of jobs a time-consuming task.
    7. Expert RSMON T-code knowledge is required to monitor incoming and outgoing data packets.

Control-M benefits for SAP BW job monitoring

SAP BW is a system connected to multiple source systems, with inbound and outbound data and processes flowing constantly, so robust monitoring is needed. This is especially true in situations where heavy upstream or downstream activity is taking place.

Control-M monitoring and orchestration tools work in concert with SAP BW tools across the SAP BW landscape to provide a complete, 360-degree job, process, and monitoring layer that can help companies run and monitor their business processes effectively with fewer failures and bottlenecks. Control-M for SAP® creates and manages SAP ECC, SAP S/4HANA®, SAP BW, and data archiving jobs, and supports any application in the SAP ecosystem, reducing time, complexity, and specialized knowledge requirements while securely managing the dependencies and silos between SAP and non-SAP systems.

Control-M provides deep integration with and broad visibility for SAP BW monitoring through a single, powerful monitoring solution, resolving all cross-application, cross-platform data process monitoring challenges. Control-M enables total visibility of the data pipeline at every stage, while also providing timely SLAs for all processes and systems. All job monitoring can be streamlined and automated with Control-M, saving time and money for companies when those critical jobs fail. Error handling and message logging capabilities are implemented in the extract, transform, and load (ETL) process and controlled by Control-M. Finally, failed processes can be restarted with Control-M after data is corrected at the source.

Conclusion

Control-M is a robust application and data workflow orchestration platform that, when paired with SAP BW tools, offers various capabilities for monitoring and orchestrating jobs and processes across platforms. It provides a single dashboard from which all jobs and processes can be effectively monitored, delivering complete visibility across the data pipeline at every stage.

Control-M’s advanced job scheduling features enable you to schedule tasks across various platforms and applications, and hybrid and multi-cloud environments. The solution also provides real-time monitoring of job execution, giving you insights into the health and performance of critical processes.

Overall, Control-M’s comprehensive features and capabilities can help you optimize processes, enhance efficiency, and ensure reliable data pipelines. Learn how Control-M optimizes your SAP BW ecosystem, ensures business continuity, and mitigates the risk of disruptions that could impact your operations and budget at bmc.com/sap.

SAP, SAP S/4HANA are the trademark(s) or registered trademark(s) of SAP SE or its affiliates in Germany and in several other countries.

]]>
Avoid Common SAP® Business Warehouse Job Scheduling Challenges With Control-M https://www.bmc.com/blogs/avoid-common-sap-business-warehouse-job-scheduling-challenges/ Mon, 04 Sep 2023 05:19:36 +0000 https://www.bmc.com/blogs/?p=53123 SAP® Business Warehouse (BW) is a powerful data warehousing and analytics solution provided by SAP. It offers various tools and functions to help companies integrate, transform, and consolidate business information from different sources, including both SAP applications and external data sources. One essential component of SAP BW is the extract, transform, load (ETL) process, which […]]]>

SAP® Business Warehouse (BW) is a powerful data warehousing and analytics solution provided by SAP. It offers various tools and functions to help companies integrate, transform, and consolidate business information from different sources, including both SAP applications and external data sources.

One essential component of SAP BW is the extract, transform, load (ETL) process, which refers to the collection of objects and tools that facilitate the extraction, transformation, and loading of data from diverse data formats into the SAP BW system to ensure that it is harmonized, standardized, and made available for reporting and analysis within SAP BW.

With SAP BW ETL, users can import data from a wide range of sources, including Microsoft Excel files, text files, databases, SAP ECC (ERP Central Component), and other SAP systems. The data can be extracted using standard extractors or custom-built extractors based on specific business requirements. Once the data is extracted, it can be transformed using various techniques such as data cleansing, aggregation, derivation, and consolidation to ensure data quality and consistency.

After the transformation phase, the data is loaded into the SAP BW system, where it is organized into data structures like InfoCubes, DataStore Objects (DSOs), and InfoObjects. These structures enable efficient data storage, retrieval, and analysis within the SAP BW environment.

By leveraging SAP BW ETL capabilities, companies can create a centralized data repository that integrates data from multiple sources, allowing for comprehensive reporting, analytics, and decision-making. The ETL process plays a vital role in ensuring the accuracy, consistency, and availability of data for business intelligence purposes in SAP BW.

The SAP BW process chain is an SAP BW construct that executes InfoPackages and data transfer processes (DTPs). A process chain is a sequence of interconnected processes that can be scheduled to run in the background based on specific events or time triggers. These processes can be designed to trigger subsequent events, enabling the automation of complex data flows and transformations within SAP BW. Requests are formulated by the InfoSource and source system and are differentiated between master data and transactional data. The Scheduler allows you to determine when data is requested and from where, updating it into the appropriate data targets.

It allows you to define scheduling parameters, such as time triggers or event-based triggers, for the extraction and loading processes. By configuring the Scheduler, you can automate the data flow and ensure that the required data is retrieved from the appropriate sources at the designated intervals.
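
Conceptually, a scheduler of this kind combines time-based triggers with event-based ones. The generic Python sketch below (not SAP code) shows both trigger styles feeding the same load step; the source system names and event name are made up.

    import sched
    import time

    scheduler = sched.scheduler(time.time, time.sleep)

    def start_load(source_system):
        print(f"requesting data from {source_system}")

    # Time-based trigger: a real scheduler would use a recurring nightly window;
    # here the trigger simply fires after a short delay.
    scheduler.enter(delay=2, priority=1, action=start_load, argument=("SOURCE_A",))

    # Event-based trigger: react to an upstream event such as "extract finished".
    def on_event(event_name, source_system):
        print(f"event received: {event_name}")
        start_load(source_system)

    on_event("upstream_extract_finished", "SOURCE_B")   # simulate the event arriving
    scheduler.run()                                      # fires the time-based trigger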

Managing the ETL of data can become challenging when multiple source systems are connected to a single SAP BW system. The complexity arises from the need to schedule and coordinate these processes while considering the dependencies and relationships between different data sources and targets. It requires careful planning, design of process chains, and setup of dependencies and triggers to ensure data consistency and timely updates. Failed data management processes, especially those not detected quickly, can have severe consequences for projects, departments, and the business, and can lead to the generation of incorrect or incomplete data, causing disruptions and setbacks across an organization.

Here are some potential impacts of such situations:

  • Disrupted projects: Data plays a crucial role in project execution, decision-making, and analysis. If data processes fail, they can delay project timelines, trigger inaccurate project insights, and hinder progress, resulting in project disruptions, increased costs, and dissatisfied stakeholders.
  • Derailed departments: Data serves as the foundation for many departmental activities and processes. If data quality issues occur due to hanging jobs or failed processes, they can negatively impact departmental operations, reporting, and analysis and lead to inefficient business processes. Departments may then struggle to make informed decisions, leading to suboptimal outcomes.
  • Damaged reputations: Inaccurate or incomplete data can undermine an organization’s reputation, particularly if it leads to errors in customer interactions, financial reporting, compliance, or other critical areas. Reputational damage can be long-lasting, affecting customer trust, partner relationships, and overall brand perception.

Job scheduling challenges

Scheduling jobs in SAP systems is a routine operational task for many SAP customers. The transaction SM37 is commonly used within SAP systems to manage and execute batch jobs. However, in cases where complex job scheduling and dependencies exist across multiple SAP systems, customers often opt for third-party batch scheduling tools.

When SAP systems experience downtime or become unavailable, customers relying on third-party job scheduling tools may find that scheduled jobs do not run as intended, leading to potential disruptions and delays in business processes, with business impact and potential remediation costs escalating according to how many systems are impacted.

To mitigate the risks associated with system unavailability, it is crucial for SAP customers to have contingency plans that include redundancy and failover options for SAP systems, backup and recovery strategies, and system monitoring to promptly identify and address any issues that may arise. Additionally, regular testing and maintenance of the job scheduling processes can help identify potential problems early on and minimize the impact of job failures.

Situation/Current state in job scheduling

  • SAP BW integrates data and schedules jobs from different sources, transforms and consolidates the data, performs data cleansing, and stores data.
  • SAP BW is becoming the norm for most enterprise businesses.
  • Since there are multiple sources of both SAP and non-SAP enterprise data, customers require cross-application and cross-platform visibility, generally in real time.
  • Businesses depend on IT to provide constant job scheduling and dependencies within those jobs.
  • SAP BW has its own scheduling and monitoring tools, which connect easily to other SAP applications; however, mapping to non-SAP applications requires either having an internal resource write the code, delaying a solution, or simply ignoring the problem.

Pain/Challenge

  • Job scheduling becomes a challenge when multiple source systems are connected and job dependencies must be managed across them.
  • Overall visibility is very poor when SAP is interfaced with multiple non-SAP source systems. Job interception, scheduling, and monitoring are all problematic due to the vast, complex SAP BW landscape, and critical SAP BW jobs often fail or are not scheduled in a timely manner. In other instances, less critical jobs are ignored due to the lack of robust scheduling. If jobs fail, updates to other dependent systems and processes are affected.
  • SAP BW integrates with multiple SAP and non-SAP systems. If jobs and processes are not scheduled effectively, all upstream and downstream systems will be impacted, increasing load times for business reports and delaying key metrics or key performance indicators (KPIs).

Control-M for SAP® BW job scheduling benefits

One solution is to automate your SAP BW landscape with Control-M’s job monitoring and orchestration tools. Control-M for SAP® creates and manages SAP ECC, SAP S/4HANA®, SAP BW, and data archiving jobs, and supports any application in the SAP ecosystem, reducing time, complexity, and specialized knowledge requirements while securely managing the dependencies and silos between SAP and non-SAP systems. With Control-M for SAP®, you can (a simplified, hypothetical cross-system flow is sketched after the list below):

  • Easily create, manage, and orchestrate complex SAP services, jobs, processes, and workflows across on-premises, cloud, and hybrid environments
  • Automate all SAP BW process chains and get interdependency with Control-M
  • Orchestrate jobs between different SAP systems such as S/4HANA, ECC, SCM, and BW
  • Schedule all jobs going from SAP BW using Open Hub and then maintain Open Hub jobs with Control-M
  • Securely manage dependencies and silos between SAP and non-SAP tasks (e.g., file transfers, database access) and centralize management of scheduling activities
  • Automate complex process flows in BW using Control-M event-controlled processing
  • Centrally control and schedule jobs within the Control-M scheduling dashboard
  • Reschedule failed and dependent processes
  • Schedule broadcasting jobs on an as-needed basis
  • Schedule housekeeping jobs, monitor the growth of the database, purge all unwanted objects and data, and keep the system clean
  • Halt and restart jobs to make system copies and perform system upgrades and refreshes
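
To make the cross-system dependency management above more tangible, here is a hypothetical, Python-shaped definition of a small flow in which an SAP BW process chain waits on a non-SAP file transfer and then triggers a reporting refresh. The structure, keys, and job types are placeholders invented for this sketch, not Control-M’s actual job definition schema; the Automation API documentation defines the real format.

    # Hypothetical, simplified flow definition -- NOT Control-M's actual schema.
    # It only illustrates declaring cross-system dependencies in one place.
    flow = {
        "name": "daily_bw_refresh",
        "jobs": {
            "receive_sales_file": {"type": "file_transfer",
                                   "source": "sftp://partner/out/sales.csv"},
            "run_bw_process_chain": {"type": "sap_bw_process_chain",
                                     "chain_id": "ZSALES_DAILY",
                                     "depends_on": ["receive_sales_file"]},
            "refresh_reports": {"type": "script",
                                "command": "refresh_dashboards.sh",
                                "depends_on": ["run_bw_process_chain"]},
        },
    }

    # A planner can derive a safe execution order from the declared dependencies.
    from graphlib import TopologicalSorter

    order = TopologicalSorter(
        {name: set(job.get("depends_on", [])) for name, job in flow["jobs"].items()}
    ).static_order()
    print(" -> ".join(order))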

Conclusion

Enjoy 360-degree visibility of the data pipeline at every stage. Control-M for SAP® is the industry-leading application and data workflow orchestration software for SAP environments, with capabilities that include workflow and data pipeline orchestration, SLA management, managed file transfer, and job scheduling and monitoring. SAP S/4HANA-certified, the solution can help you smoothly transition from ECC to S/4HANA cloud-based enterprise resource planning (ERP) with simultaneous support for both platforms and modules. Control-M for SAP® and BMC Helix Control-M can also help reduce operational costs, improve efficiency, and increase performance by proactively monitoring jobs before they fail, reducing unnecessary system downtime and improving system performance and availability.

]]>