Jeff Gheen – BMC Software | Blogs

How Business Pros Use More Data with Less Work Through Self-Service Automation
https://s7280.pcdn.co/how-business-pros-use-more-data-with-less-work/ | Thu, 29 Jul 2021

In recent blogs we detailed BMC’s data democratization initiative to create a self-service enterprise data warehouse that would be easily usable by business users across the company, and took a deep dive into how our finance operations are benefiting from having new ways to access and work with data. In this installment, we provide an overview of how non-technical users in the customer support, marketing, and sales operations groups are doing their own job-specific analytics (with a big, behind-the-scenes assist from Control-M), and we’ll show how improving enterprise access to data has helped our customers and employees.

By Jeff Gheen, IT Director – Business Intelligence, BMC

BMC, like many other companies, is dealing with a growing number of data sources and more people who want to use data from those sources in new and expanding ways. In the past, handling that growth was painful. Exploding data volumes, tooling options, and new user requests can create heavy workloads and backlogs for the IT and analytics teams responsible for providing data and tools, and they can frustrate business users eager to develop new insights. Yet the rapidly expanding set of data sources, and of business users who want to work with them, isn’t causing headaches at BMC. That’s because of the success of our data democratization program, which puts data in the hands of those who need it to create new business insights, while keeping access policies, security, and other controls with us in IT.

What is Control-M?
Control-M simplifies application and data workflow orchestration on premises or as a service. It makes it easy to build, define, schedule, manage, and monitor production workflows, ensuring visibility and reliability, and improving SLAs.

We couldn’t do it without automating many of the tasks and orchestrating the automated processes that must coordinate across systems. Automation and orchestration are what allow self-service programs to scale. To gain those abilities, we turned to BMC’s own application workflow orchestration platform, Control-M.

Control-M plays a crucial role in orchestrating all the workflows, integrations, and other handoffs needed to deliver data and insights, while providing an easy-to-use front-end interface for our business professionals. In this post, I’ll share a few examples of how it all comes together in customer service, marketing, and sales operations.

It’s easy to see the value of putting data from multiple sources into the hands of business users. However, it wasn’t easy to do because it involved multiple processes that are complex and interdependent. This complexity is what slows data democratization at many organizations. Automation is the bridge that gives non-technical users new analytics capabilities.

Our Customer Support organization uses data to drive key business decisions

In Customer Support, drawing insights from data is a critical component of the decision-making process. Given that, having accurate information available at all times from a single source of truth is vital. This ensures that everyone in the organization is working from the same set of information when making business decisions. The Support Analytics team has been working closely with the IT team to architect a process to drive high performance, high efficiency, and maximum automation throughout the reporting and analytics processes.

“It is imperative that the data be available and current at all times,” says Tricia Blank, senior manager of the Support Analytics team. “The question you have to think about to fully realize how impactful this is, is: ‘How much time is lost and what is the impact when data is not available or if the data is stale?’ For example, if someone comes in and views their dashboard, they may not realize that the data is old and thus, might draw some incorrect conclusions.”

To make sure the most up-to-date accurate data is available, Blank and her team took advantage of our self-service enterprise data warehouse architecture to create a single source for all the various types of data. All reporting and analytics tools, including MicroStrategy, Tableau, Alteryx, and Metrics Insights, utilize this single source of data. This ensures that regardless of which tool is being used to view the data, the resulting metrics and information will all be the same.

Before we created a modern, self-service enterprise data warehouse (EDW) and orchestrated it through Control-M, delays were common for collecting and integrating the data from different sources and producing the dashboards. It wasn’t unusual for database updates to be delayed four to seven hours, which caused significant problems downstream.

“If the dominoes don’t fall in the correct order, it all crashes down,” says Blank. For deadline-critical reports, the team would have to manually monitor the data jobs to make sure everything was running on time and in sequence. This was a time-consuming task and was a high-stress one at the end of each quarter.

Blank and her team used Control-M to automate all the data transfers, workflows, notifications, and many other jobs needed to produce daily dashboards and reports.
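To make the “dominoes” idea concrete, here is a minimal Python sketch of dependency-ordered execution, the core of what an orchestrator like Control-M guarantees. This is purely illustrative, not Control-M’s API; the job names and dependency graph are invented examples.

```python
# Illustrative sketch: run each job only after everything it depends on has
# succeeded, and halt the downstream chain on the first failure.
from graphlib import TopologicalSorter

# job -> set of jobs that must complete first (hypothetical pipeline)
dependencies = {
    "extract_support_tickets": set(),
    "load_warehouse": {"extract_support_tickets"},
    "refresh_metrics": {"load_warehouse"},
    "publish_dashboard": {"refresh_metrics"},
}

def run_in_order(deps, run_job):
    """Execute jobs in dependency order; return (completed, failed_job)."""
    completed = []
    for job in TopologicalSorter(deps).static_order():
        if not run_job(job):      # a failed job stops everything downstream
            return completed, job
        completed.append(job)
    return completed, None

# With every job succeeding, all four run in a valid dependency order.
done, failed = run_in_order(dependencies, run_job=lambda job: True)
print(done)
```

In practice the `run_job` callback would submit real data-transfer or ETL work; the point is that ordering and failure propagation are handled by the scheduler, not by a person watching the jobs.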

“Our goal is to ensure that the Customer Support organization has the information it needs to make key business decisions, both now and as the needs of the business change,” says Blank. “Utilizing the self-service enterprise data warehouse as well as Control-M has allowed us to do that in a very efficient way.”

Marketing gets more creative with data

Marketing produces and consumes a lot of data, for performance monitoring, email campaigns, lead and pipeline tracking, and more. Its essential data sources include Eloqua, Salesforce, and Adobe files. Marketing produces a lot of dashboards, and also outputs data and reports using MicroStrategy, Alteryx, and Tableau. It is another department that makes extensive use of Control-M without necessarily knowing it because of all that goes on behind the scenes.

Historically it was difficult for marketing to integrate its various data sources to produce the insights and reports it wanted. Getting a complete view of a customer or a campaign involved going to multiple sources and bringing the various data points together in an ad hoc way. A frequent problem was that a needed component, such as an Adobe file with web analytics data, wasn’t ready when it was needed to produce a dashboard. That can quickly lead to a lot of processes being delayed. The situation illustrates the value of process orchestration.

We were able to prevent such problems for our marketing operations. Now, all the back-end data loads, integrations, and jobs run in the background through automation built with Control-M.

When you integrate data for a dashboard or other use cases, you need to make sure the data sources you’re integrating are available. Strategically, we don’t want business units involved in ingestion. They shouldn’t be up at night wondering “Did the data load?” We’ve set it up for them so Control-M does the data loads automatically, in addition to running the jobs in sequence to produce the needed output. Our marketing staff doesn’t have to do any heavy lifting with the data, so they’ve been able to concentrate on using it in new ways. For example, we’ve gained visibility into the customer journey so we can better see connections between sales and marketing activities. In turn, that helps our salespeople know what questions to ask customers.

“The automation and self-service have really shortened our data-to-impact time. Now we focus on the right things sooner,” says Carlos Umana, a BMC senior manager of marketing business operations.

So far, we’ve been able to meet all the new dashboard and visualization requests because Control-M can orchestrate multiple processes; marketing professionals now create their own custom reports and dashboards.

“While access to data is powerful, the ability to leverage that data in a self-service manner and provide insights to our stakeholders is what defines success for our team and for BMC,” says Doug Piper, senior manager of marketing programs. “The ease of use and immediate access drives increased productivity, allowing our team to spend more time thinking strategically and less time working through extensive, manual data manipulation. Currently, we’re using the data in our dashboards to focus on ensuring quality and progression of marketing-sourced leads, working closely with Sales to ensure we’re aligned on maximizing results.”

Automation & alerts take Sunday out of the work week

Sales Operations is another department that was freed from the effort of loading data and monitoring the progress, which is something Richard Gilbert did every Sunday. His team produces a closely watched dashboard that must be ready as soon as BMC’s top executives begin work each Monday morning.

“We’re providing insights into sales operations to the sales leaders,” says Gilbert. “Our output is used as an executive management tool and as a sales productivity tool.”

Producing the data for the dashboard requires 24 to 32 workflows to process between 5 and 6 terabytes of data housed in siloed systems. That volume and complexity are why Gilbert used to have a lot of stressful Sundays, and Control-M is why he doesn’t anymore. He and his international team used Control-M’s self-service tools to automate extract, transform, and load (ETL) functions and job runs, and to create alerts that are sent automatically if delays or other problems develop.
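The alerting described here boils down to comparing each job’s actual finish time against its deadline. A minimal Python sketch of that idea (job names and SLA times are invented; the real system would push these alerts out via email or chat):

```python
# Illustrative sketch: collect an alert for any job that is late or still
# running past its SLA deadline.
from datetime import datetime

def sla_alerts(jobs, now):
    """jobs: dicts with 'name', 'sla' (deadline), 'finished' (datetime or None)."""
    alerts = []
    for job in jobs:
        if job["finished"] is None and now > job["sla"]:
            alerts.append(f"{job['name']} missed its {job['sla']:%H:%M} SLA and is still running")
        elif job["finished"] is not None and job["finished"] > job["sla"]:
            alerts.append(f"{job['name']} finished late at {job['finished']:%H:%M}")
    return alerts

jobs = [
    {"name": "etl_pipeline_sales", "sla": datetime(2021, 7, 25, 4, 0),
     "finished": datetime(2021, 7, 25, 3, 40)},   # on time: no alert
    {"name": "exec_dashboard_build", "sla": datetime(2021, 7, 25, 6, 0),
     "finished": None},                           # past deadline, still running
]
print(sla_alerts(jobs, now=datetime(2021, 7, 25, 6, 30)))
```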

“Control-M stripped the middleman out of the process and eliminated having to wait for results,” says Gilbert. “Automation has had a huge impact on our work. I went from having a 12-hour workday on Sundays to working maybe 10 minutes to have the databases up and ready by Monday morning.”

Success breeds success

BMC’s Sales Operations, Marketing, and Customer Support organizations have very different functions, but each has benefited from automation that gives them controlled access to enterprise data and analytics capabilities. Another common bond among these groups is the significant time savings each has realized since adopting Control-M.

Their success has led other teams across the company to look for ways Control-M’s automation capabilities can help them streamline their analytics activities. New ideas and use cases are launching all the time. More and more of the company’s day-to-day work is better informed by data, which is helping us be more responsive and effective. Our business users, our company, and our customers are all better for it.

Tricia Blank sums it up best: “Control-M was a huge win for our customers, for the BMC executives that use our dashboards, and for my team. With the time we are saving, we can focus on more strategic projects.”

Data Access and Automation Help Finance Unlock New Business Value
https://www.bmc.com/blogs/data-access-and-automation-help-finance/ | Thu, 08 Apr 2021

In our previous blog on BMC’s data democratization strategy, we described our efforts to make enterprise data and analytics capabilities available to business users throughout the company as part of our journey to become an Autonomous Digital Enterprise. In this installment, we’ll explore how our work thus far has allowed dozens of finance professionals to access new enterprise data and insights and perform self-service analytics. The finance organization is a great example of how data democratization enables business users to apply and advance high-level enterprise strategy in their day-to-day work.

As part of BMC’s focus on driving innovation to become an Autonomous Digital Enterprise, our finance operation underwent a transformation. Finance’s role was becoming more strategic, and we were called on to provide information and perspective to guide decisions in all areas of the business. We needed to create new insights, produce more reports, and be more creative in how we analyzed and presented data. Our needs aligned with the company’s mission to open up the enterprise data warehouse (EDW) and make self-service analytics available to business users. Finance became an early adopter, and the entire company is benefiting, as we’ll describe in this blog.

Before we get started though, you’ll need some relevant background on our systems and operations. Oracle Financials is the core for our day-to-day finance operations. It is just one of more than 35 source systems (Salesforce and CallidusCloud are among the others) that feed our EDW. The foundation is Netezza, which is now being supplemented and replaced by Snowflake. Similarly, OneStream is supplanting Hyperion as the analysis engine. The EDW includes many departmental data lakes and pools, and employees in the finance organization were among the first to create user-managed databases (UMDBs) in it.

For financial reporting analysis, data flows from Oracle Financials to our corporate performance management toolset, which includes Hyperion and OneStream. Depending on the report or user, Power BI, Tableau, and Alteryx are all used for output.

These aren’t even all the tools and foundational pieces, and you can already get a sense for the integrations and handoffs that need to occur. No single ETL solution is used across the environment (there are six, plus numerous scripts), which adds to the complexity. Scheduling jobs across this environment and making sure they executed in the needed sequence used to be very challenging, especially during the closing cycle when it seemed everyone in the C-suite was watching what we were doing and asking for updates.

Life before Control-M

Here’s what some of our processes were like before we were able to modernize, with the help of Control-M, which brought automation, orchestration and self-service capabilities into the process.

When a customer signs a contract with BMC, our salesperson reports the deal and a service representative books the order in Oracle, where it waits for review by the revenue recognition team. One of those specialists reviews the order to determine the right way to classify it in our accounting system (e.g. service vs. product revenue, immediate vs. deferred, etc.).

Once the order is booked, its various details are parsed and used in a variety of reports, which are exported via MicroStrategy, Excel, and other formats. Producing those reports required a lot of time-consuming manual data joins and refreshes across various systems. The join tables were often a mess, and we could spend days going back and forth over whose data was right.

Pre-automation, closing out a quarter was even more time consuming and stressful (and had to be done while we were still booking revenue, which is an ongoing activity). As with revenue booking, the quarterly close process required coordinating outputs from multiple systems in a specific sequence to produce reports. A breakdown in any source data or processing job would hold up the entire process. Quarterly results are eagerly anticipated and the process is closely watched, from the top down. Preparing, executing, and monitoring the quarterly close took hours, during which the Global Financial Information Services (GFIS) team was almost constantly responding to update requests.

Our team was monitoring eight or nine systems to make sure it was using the most up-to-date data, and had to manage file transfers and workload executions across all the systems. Scheduling the workloads alone was a significant hassle and the other manual steps were very time consuming. Doing the basic work – and answering emails and phone calls about its status – took up most of the GFIS team’s time. That prevented us from pursuing the type of data analysis and machine learning (ML) development we needed to do to see overall company operations more strategically. Later, our finance professionals were surprised at just how much of the process we could automate.

We’ve been told that BMC tracks more metrics and does more reporting than most software companies. The finance team’s mission to innovate data analysis to produce new insights meant that we’d be doing even more reporting. It became clear that our methods would be unsustainable. Our team had an open mind and some experience with Control-M for managing our workflows, so the finance team explored how it might help us with our booking and closing processes.

Life with Control-M

At this point, finance’s efforts to streamline processes dovetailed with IT’s data democratization initiative. Our IT leadership was creating its strategy and resources to open up the enterprise data warehouse for self service by business users. We became a test case for putting these ideas into practice.

Our approach was to try to automate as many of the data collection and processing tasks as we could, then to orchestrate the reporting processes as much as possible. Eventually, with Control-M, we were able to deploy automation end to end.

Control-M is a workflow automation and orchestration platform. It is application, data, infrastructure, and tool agnostic, which enables it to work across our dozens of systems of record, eliminating the need for multiple proprietary schedulers and ETL tools. Using the GUI in Control-M and the newly available access to the EDW, the finance team was able to build workflows to automatically receive, validate, and process data to complete bookings and closings. Finance professionals also created event-driven update notifications that are sent to business users automatically, which has saved them a lot of time they used to spend responding to requests. The development was self-service, but Control-M has native controls to block unauthorized data access and to prevent user-defined jobs from interfering with the execution of others. This ability to orchestrate execution and prevent resource conflicts was immediately valuable, and became more so as reporting, workflows, and complexity all increased.
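Event-driven notifications follow a simple publish/subscribe pattern: users register interest in a job event once, and every status change fans out to them automatically. A hedged Python sketch of that pattern (this illustrates the concept, not Control-M’s internals; the event name and message are invented):

```python
# Illustrative sketch: subscribers register callbacks for named job events,
# and publishing an event delivers the message to every subscriber.
from collections import defaultdict

class JobEventBus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # event name -> list of callbacks

    def subscribe(self, event, callback):
        self.subscribers[event].append(callback)

    def publish(self, event, message):
        for callback in self.subscribers[event]:
            callback(message)

bus = JobEventBus()
inbox = []
# In practice the callback would send an email or chat message.
bus.subscribe("close_cycle.completed", inbox.append)
bus.publish("close_cycle.completed", "Quarterly close data load finished at 02:15 UTC")
print(inbox)
```

The payoff is the one described above: the team stops answering status requests by hand, because interested colleagues are notified the moment an event fires.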

Going into the project, we thought one of the most time-consuming aspects would be to identify where all the data we wanted to use was located, then map it into workflows. We were pleasantly surprised to learn Control-M could do most of the mapping for us, which saved a lot of time.

Booking an order or closing a sale today still involves everything that had to happen before we used Control-M; the big difference is that our staff doesn’t need to do it all. For example, when a needed file or other data input arrives, Control-M automatically receives it and begins processing it. Control-M monitors all the upstream and downstream processes related to a job, and can automatically send status updates and notifications to anyone who wants to be kept informed. Previously, getting an update often involved sending a request to our systems team in India and waiting up to 12 hours for a response.

Some of our team took advantage of the self-service capabilities in Control-M and the EDW to create real-time dashboards that show the status of the overall closing process, including key sub-processes and their dependencies. The graphic below depicts the process and includes a screenshot from one of the dashboards.

[Image: Control-M process flow and communication]

Now, business users no longer need to wait hours for an update, and our finance team doesn’t have to spend valuable time during the closing process responding to update requests. Our team can do its work, instead of updating colleagues on the status of our work. That is appreciated by staff and executives alike. Control-M has produced similar benefits for booking and other day-to-day processes. Quite simply, it has become the glue that holds our financial reporting together.

The innovation dividend

Since marrying self-service EDW access with automation from Control-M, finance has delivered several valuable innovations thanks to the time and effort we’ve saved. The benefits extend far beyond the finance department and are appreciated by BMC’s executive leadership team.

The time savings have also enabled the finance organization to advance in our efforts to become more strategic. The Global Financial Information Services staff used the time it saved to explore and apply machine learning. Today, an ML solution the team developed is helping solve a longstanding challenge for the company: How to improve forecasting for contract renewals, and improve renewal rates. Doing that required finding answers to difficult questions, such as: When is the best time to contact customers about renewal? Which customers with expiring contracts would benefit most from proactive outreach? Would outreach improve the likelihood the customer would renew? What is the best action to take for each individual customer?

Analytics and ML can help answer those questions. So, GFIS used Control-M to give business users access to the tools they needed to apply analytics and ML. To predict and then proactively mitigate customer churn, we first needed to get a better understanding of customer satisfaction. We already used Salesforce for our CRM, so improving our visibility into customer satisfaction would require us to get more insight into the customer relationship than Salesforce was giving us. With Control-M we could automatically take the data from a lot of sources and push it out to the data sets we need for our ML models.

Taking advantage of our access to the EDW and the self-service analytics enabled by Control-M, we set out to create a dashboard that would reflect the relationship health BMC has with each customer. That visibility would help us throughout the lifetime of our customer engagement by identifying any potential customer dissatisfaction issues long before we talked with the customer about renewing. Control-M manages all the data transfers from disparate sources, and executes all the workloads to produce and distribute the dashboards.

The dashboard helps with day-to-day customer care. For example, if we detect an issue, the system automatically notifies a customer care or client account representative. This new deliverable ensures we proactively give customers the attention and support they need when they need it, and helps BMC deploy customer support and other resources more effectively. Customers that need attention or would benefit from a check-in are contacted promptly.

We then built ML models that analyze the dashboard data to predict each customer’s likelihood to renew. Results are output via Tableau, and Control-M manages all the file transfers and job sequencing needed to make it happen. The output has really helped our account teams focus their efforts.

We’ve been able to do more with ML now that much of the necessary data wrangling is automated through Control-M and our BI tools. We have a lot of use cases where we need to predict things, some internally focused and others at a customer level. On the finance side, we use those insights for financial and strategic planning, building business cases, and lots more. Some of the use cases we’ve already developed include:

  • A duplicate payment exception report generator that identifies duplicate invoices to avoid multiple payments for the same invoice. In this case, we combined Control-M and SQL for flagging exceptions, reducing processing times, and increasing accuracy.
  • An employee shift allowance calculator for calculating payments for employees working in different shifts. By automating this activity with Control-M and SQL, we reduced the time needed to complete this process from two days to just a few minutes. We’ve extended automation to cover additional data loads for weekend and public holiday payouts.
  • For cost accounting, we created a new process for consolidating cost adjustment files. The process helps remove manual errors (like data duplication or elimination) and runs in a few minutes; the process took about five hours when it was done manually.
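The duplicate-payment check in the first bullet is, at its core, a grouping query. Here is a simplified, self-contained version using SQLite; the real job runs SQL against production tables under Control-M, and the table layout and sample rows here are invented for illustration.

```python
# Illustrative sketch: flag invoices that appear more than once for the same
# vendor and amount, which would otherwise trigger a double payment.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (vendor TEXT, invoice_no TEXT, amount REAL)")
conn.executemany("INSERT INTO invoices VALUES (?, ?, ?)", [
    ("Acme", "INV-100", 500.0),
    ("Acme", "INV-100", 500.0),   # duplicate entry
    ("Globex", "INV-200", 750.0),
])

duplicates = conn.execute("""
    SELECT vendor, invoice_no, amount, COUNT(*) AS n
    FROM invoices
    GROUP BY vendor, invoice_no, amount
    HAVING COUNT(*) > 1
""").fetchall()
print(duplicates)   # [('Acme', 'INV-100', 500.0, 2)]
```

Scheduling a query like this to run automatically, and routing its output into an exception report, is the part the orchestrator contributes.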

We think there are many more significant opportunities for Control-M to play a lead role in ML use cases moving forward. Before Control-M, we were manually developing Python scripts and batch jobs to pull and process the data we wanted in our ML models. That took a lot of time, which limited the amount of actual analysis and experimentation we could do. Introducing Control-M into ModelOps has already helped and provides a gateway to more innovation. Market research found that many organizations are moving in that direction, as we described in this blog.

Meanwhile, self-service analytics and reporting continues to expand within BMC. The various automated updates Control-M now produces have been extremely popular, and the number of business users that request notifications is growing every quarter. The notifications and visibility into processes have given business users a glimpse into what is possible, and now many are beginning to explore their own use cases for self-service EDW access. They are creating their own dashboards, custom reports, and notifications. The global financial services team alone has eight separate UMDBs in the EDW, and other groups are creating their own.

Overall, these efforts will produce more innovation and efficiency benefits. We’ve already learned a lot about how analytics and ML processes can be automated. The bigger learning is that with the right tools, organizations can put high-level strategy into practice at the business user level. We’ll be sharing more use case examples in the future, so watch this space.

To learn more about Control-M, visit bmc.com/control-m.

How Control-M Helps BMC’s IT Department Deliver Data Democratization
https://www.bmc.com/blogs/data-democratization/ | Thu, 04 Mar 2021

How do you give line-of-business users access to the enterprise data warehouse, and put powerful, advanced analytics and data warehouse tools in their hands so they can create their own visualizations, reports, and dashboards, without interfering with the thousands of workflows that keep the business running day to day? See how we safely opened up BMC’s enterprise data warehouse (featuring Snowflake, Netezza, MicroStrategy, Oracle, Business Objects, Tableau, Alteryx, and other technologies, including AI and machine learning) as part of a data democratization program that now enables business users across the company to use self-service to create new data tools that are making the company more agile and responsive.

To be a modern, competitive business, you have to be able to garner insights from data and quickly make decisions on those insights. BMC knows this well, and we realized that for us to become more flexible and responsive, our enterprise data warehouse (EDW) – an asset critical to decision making and the delivery of essential business services – needed to do the same. Specifically, we needed to make our enterprise data warehouse easy to use and accessible to business users (the ultimate stakeholders) throughout our company. Despite prior investments in our EDW, business users found it hard to access and use data. To keep up with growing business demands, we knew it was crucial to develop a strategy based on self-service principles, like those behind mobile banking and ATMs. The more we could democratize data, the more value we could gain.

Self-service helped turn our enterprise data warehouse from what was considered an inflexible vault into more of an ATM that many employees now use to conveniently access data, analytics, and new services that are making BMC more responsive. By automating the processes for accessing, merging, and analyzing various data sets, applying analytics, and enabling output to MicroStrategy, Tableau, Alteryx, or other solutions, we’ve made our data more available and more valuable, because employees can use it in new ways. Now, each week an average of 1,003 employees use the EDW and its more than three dozen source systems to run their own analytics and create reports, visualizations, and dashboards. Overall, 54 percent of our employees have taken advantage of the self-service capabilities we created to access the EDW. Helped by the new information and analytics, they’ve reported 40 to 50 days saved per year in sales operations, more proactive customer service, and countless hours saved during the time-sensitive quarterly close process. Automation has also freed up weekend time, because employees no longer have to prep data for Monday reports or check that reports and other jobs are running.

In this blog I’ll share details of how and why we developed a data democratization strategy, some of the changes we made to our EDW ecosystem to make it happen, and how BMC’s own Control-M application workflow orchestration platform became the key enabler for delivering powerful new self-service analytics and workflow management capabilities to thousands of business users around the world.

Setting the strategy

Like many companies, BMC is on the journey to becoming an Autonomous Digital Enterprise. One of the core pillars of this strategy is to become a data-driven business. To maximize the value of the vast data at our fingertips, we knew we needed to make more data sources and analytics capabilities available to our business users in real time. Our line-of-business users are eager to find new ways to use enterprise data to help the company. Many of them have the technical skills to do innovative, sophisticated work with our data, but had to rely on IT for access to it. Without significant changes to our IT systems, role, and culture, we simply couldn’t keep up with the ever-growing volume of requests from business users for new dashboards, visualizations, reports, and access to new data sources. Business users were frustrated, and some viewed IT as an obstacle to innovation rather than an enabler. In response, we undertook a major program of data democratization: making centralized data directly available to distributed business users and giving them business intelligence tools to get the information and insights they wanted anytime, anywhere.

In our case, that required major changes to the architecture and access points of our enterprise data warehouse, and to how it interacted with our ERP and the other source systems that feed it. Along the way we identified the need for an overarching strategy and IT cultural change at the highest level. To achieve the promise of democratized data access, our strategy had to be supported by a comprehensive automation and orchestration strategy across our information systems. If we could automate operations and build in controls, we could offer new self-service capabilities at the user level, so employees in various roles could find new ways to take advantage of real-time access to enterprise data and analytics. When it became clear that automation, orchestration, and self-service were the keys to becoming an Autonomous Digital Enterprise, it also became clear that we should make Control-M, already used extensively within our IT infrastructure (as well as at thousands of companies globally), the conduit for carrying out enterprise strategy at the end-user level.

Navigating the environment

Our EDW isn’t really a singular item; it’s a collection of assets. Netezza has been the foundation, but we’ve introduced Snowflake to supplement and potentially replace it. We’ve identified more than 35 source systems, including our Oracle ERP, Salesforce, CallidusCloud, Hyperion, and OneStream, plus many function-specific applications and numerous scripts. We use six different ETL tools to bring data into the EDW environment. Once it’s there, it is reorganized into multiple data lakes, other pools, folders, and now user-managed databases (UMDBs) that support specific needs for finance, marketing, sales operations, customer support, and other business functions.

When business users request a new dashboard or other service, the workflow for collecting, updating, and processing the data often cuts across several sources, including the hundreds of applications used throughout the company. As volume grew, it became harder to orchestrate these processes so that workflows did not interfere with one another while still using the most up-to-date data. Giving thousands of business users self-service access to the EDW and its dozens of source systems would make it harder than ever to coordinate data refreshes, ETL operations, dashboard updates, and other workload scheduling and execution. It also became even more important to prevent errors and hang-ups, because a single failed workflow execution now had the potential to affect many others.

We had learned in the past that when analytics and other data operations scaled, data quality problems often surfaced. For example, at higher scale the ETL process takes longer to complete. Without planning, that could cause a dependent job (say, a dashboard update) to run before the ETL process finished refreshing the data. The result would be either a job failure or execution with out-of-date data – itself a data quality problem. We were about to scale our operations in a big way through self-service, so we had to address data quality proactively.

We knew that improving the quality of data entering our data warehouse, and improving the orchestration of warehouse-dependent jobs with systems throughout the enterprise, would relieve our volume and speed pressures. When we dug deeper, we determined it was critical to have an abstraction layer that sits above applications to manage ETL, workflows, and custom scripts.

Control-M gave us the flexible guardrails we needed. It gives us a single interface to orchestrate all the file transfers, ETL tasks, and other handoffs between the enterprise data warehouse and our source systems, and provides proactive alerts about potential workflow failures that would affect others. It helps us ensure self-service and other jobs run on the most up-to-date data, and even has features that automate data quality checks.
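
To make the dependency idea concrete, here is a minimal jobs-as-code-style sketch in the spirit of Control-M's Automation API JSON, expressed as a Python dict: a dashboard refresh that waits on an event raised by the ETL job, so it can never start against stale data. The folder, host, script, and event names are hypothetical placeholders, and this is an illustration of the pattern rather than one of our production definitions.

```python
# Sketch of an event-based dependency in jobs-as-code style. All names
# (folder, hosts, scripts, events) are hypothetical placeholders.

def build_workflow():
    return {
        "EDW_DailyLoad": {
            "Type": "Folder",
            "RunETL": {
                "Type": "Job:Command",
                "CommandLine": "/opt/edw/bin/run_etl.sh",           # placeholder script
                "Host": "edw-agent-01",
                # Signal downstream jobs only once the refresh is done.
                "eventsToAdd": {
                    "Type": "AddEvents",
                    "Events": [{"Event": "ETL_Complete"}],
                },
            },
            "RefreshDashboard": {
                "Type": "Job:Command",
                "CommandLine": "/opt/bi/bin/refresh_dashboard.sh",  # placeholder script
                "Host": "bi-agent-01",
                # Blocks until the ETL job raises its completion event,
                # preventing a refresh against partially loaded data.
                "eventsToWaitFor": {
                    "Type": "WaitForEvents",
                    "Events": [{"Event": "ETL_Complete"}],
                },
            },
        }
    }

workflow = build_workflow()
downstream = workflow["EDW_DailyLoad"]["RefreshDashboard"]
print(downstream["eventsToWaitFor"]["Events"])  # [{'Event': 'ETL_Complete'}]
```

The same wait/add event pairing extends naturally to file transfers, multiple ETL tools, and fan-out to many dependent dashboards.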

New users, new approach

The new ecosystem needed to accommodate everyone from executives and end users to analysts and data scientists. It had to allow users to blend trusted enterprise data with their own internal and external data sources.

But that’s not all that had to change. We recognized that we as an IT department needed to change. An ecosystem that focuses more on enabling citizen developers and analysts than on delivering IT-oriented solutions requires a shift in mindset. To become innovation enablers, we had to move our engagement model to “guide and recommend,” rather than “guard and control.”

While we needed to share control, we were still responsible for information security and system reliability. That’s an uncomfortable position that required more system and cultural changes to navigate. We knew there was a lot of great data in the enterprise data warehouse that users could benefit from. We were confident in the data warehouse but were concerned about the security implications of opening it to users. Control-M helped give us the confidence we needed because it puts controls in place and automatically enforces data access policies and workload execution hierarchy while giving users self-service tools.

The new environment

To open the EDW and analytics resources to a wide user base, we created a multi-layer data ecosystem that is both centralized and distributed at the same time. Control-M manages workflow execution across the EDW and all the systems it interacts with, provides the user interface, and applies task automation and controls. The new ecosystem includes a trusted enterprise data warehouse and a metadata layer, enhanced with departmental self-service data preparation, visualization, and storage. Four frameworks make up the ecosystem:

Enterprise Data Warehouse – This IT-developed warehouse layer serves as the foundation for most analyses. It includes a corporate information factory (CIF) that stores historical fact and dimension data in third normal form (3NF), and a reporting-friendly dimensional view layer. Most of the data loads run once or twice per day, with some exceptions, and are developed following industry best practices for ETL and data architecture. The EDW is the single source of truth of data from all major BMC source systems.

Enterprise Data Hub – The MicroStrategy schema layer serves as the enterprise metadata for BMC, and it provides a user-friendly abstraction layer that points to warehouse tables and columns. Without coding, users can drag and drop pre-defined attributes, metrics, filters, and prompts into their reports. Because the schema, joins, and logic are already defined by IT, answers are consistent and accurate. Users that have already invested time and money into their own preferred visualization tools are welcome to keep using them, but they need to use the Enterprise Data Hub to source data when it exists. For these populations we support REST API-based connectors and data mart writeback capabilities that enable moving certified MicroStrategy data into departmental databases and BI tools, like SQL Server, Tableau, and Power BI.
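
As a simplified illustration of the writeback pattern, the sketch below lands rows (as if pulled from a certified REST endpoint) in a departmental database. The fetch is stubbed with canned rows, sqlite3 stands in for SQL Server purely to keep the example self-contained, and the table and column names are hypothetical.

```python
import sqlite3

# Sketch of a data mart writeback: certified rows are landed in a
# departmental database for downstream BI tools. sqlite3 stands in for
# SQL Server; table and column names are hypothetical.

def fetch_certified_rows():
    # In practice this would be a REST call against the Enterprise Data
    # Hub; canned rows keep the sketch runnable anywhere.
    return [
        ("ACME Corp", "2021-Q2", 87.5),
        ("Globex", "2021-Q2", 72.0),
    ]

def write_back(conn, rows):
    conn.execute(
        """CREATE TABLE IF NOT EXISTS customer_health (
               customer TEXT, period TEXT, score REAL)"""
    )
    conn.executemany("INSERT INTO customer_health VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
write_back(conn, fetch_certified_rows())
count = conn.execute("SELECT COUNT(*) FROM customer_health").fetchone()[0]
print(count)  # 2
```

Because the departmental copy is loaded from certified hub data rather than re-derived, the numbers in Tableau or Power BI stay consistent with the enterprise schema.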

Self-Service Analytics (SSA) and Data Visualization – Using MicroStrategy, all BMC employees can create their own reports and dashboards, which can be sourced from either the enterprise schema, bring-your-own-data sources, or a blended combination of the two. Each enterprise subject area is stewarded by a handful of business SMEs that serve as content owners. These analysts create pre-defined analytics content for the end-users in their areas, organize and secure the content of shared folders, and partner with IT to support their constituents’ analytics needs. Several business teams have created data-driven mobile applications for their end-user populations to consume on-the-go.

Self-Service Data Preparation and User-Managed Databases (UMDBs) – Many of BMC’s business users have advanced skillsets. The needs of such teams usually lean toward added control over data, whether for ad-hoc analysis or downstream reporting. Using Alteryx, these users typically query data from the EDW that they have been granted access to see, perform joins, cleansing, grouping and derived attribution, and capture data at different time intervals, including snapshots. This data can be consumed by a visualization tool of choice (primarily MicroStrategy and Tableau, and sometimes both). Most of the same teams that participate in the self-service data prep offering need a place to store the data they have prepared. For this, we provide a user-managed database framework. Co-located on the high-speed Netezza platform alongside the EDW, UMDBs typically include custom views directly on EDW entities, and serve as a pre-viz landing zone for data.
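
A typical prep flow of this kind – join, cleanse, group, derive, and stamp a snapshot date before landing the result in a UMDB – can be sketched in plain Python (our users do this in Alteryx). All field names below are hypothetical.

```python
from collections import defaultdict
from datetime import date

# Sketch of a self-service prep flow: join account and ticket extracts,
# cleanse a text field, derive a grouped metric, and capture a snapshot
# date. Field names and values are invented for illustration.

accounts = [
    {"account_id": 1, "region": " AMER "},
    {"account_id": 2, "region": "EMEA"},
]
tickets = [
    {"account_id": 1, "severity": 1},
    {"account_id": 1, "severity": 3},
    {"account_id": 2, "severity": 2},
]

def prepare_snapshot(accounts, tickets, as_of):
    counts = defaultdict(int)
    for t in tickets:
        counts[t["account_id"]] += 1          # group tickets per account
    rows = []
    for a in accounts:
        rows.append({
            "account_id": a["account_id"],
            "region": a["region"].strip(),    # cleanse stray whitespace
            "open_tickets": counts[a["account_id"]],  # derived attribute
            "snapshot_date": as_of.isoformat(),       # time-interval capture
        })
    return rows

snapshot = prepare_snapshot(accounts, tickets, date(2021, 7, 1))
print(snapshot[0]["open_tickets"])  # 2
```

Snapshots like this are what the UMDB layer stores, so analysts can trend the same metric across capture dates without re-querying the EDW.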

These user-focused frameworks get IT out of the way as much as possible, breaking down the usual barriers to accessing, onboarding, creating, deploying, and maintaining citizen-developed BI applications. Control-M is the key enabler through its easy-to-use self-service interface for creating workflows, checking their status, and automating many of the handoffs and tasks that need to happen for jobs to execute. For example, Control-M automates ETL and file transfer operations through the six different third-party tools BMC uses for moving data in and out of the EDW, and supports event-based as well as scheduled workflow execution. Control-M frees IT operations from having to handle all these tasks while also managing SOX compliance and internal data access policies.

Turning strategy into action, with automation

Here’s an example of how data democratization helps us navigate the environment and makes BMC a more responsive business. Our finance department wanted to improve forecasting – specifically, to do a better job predicting what customers would do when their software licensing contracts expired. Its idea was to create a customer health dashboard that presented relevant customer satisfaction scores and other metrics for each customer, then apply machine learning to predict whether the customer would renew.

The value of having this insight was clear, but the path to getting it was not. The dashboard and ML analysis would require access to the latest customer service and support data, historical contract information, and more. Once finance identified the metrics it wanted, Control-M automated the file transfers and other data extraction from multiple source systems, and managed the export to Tableau, which creates the dashboards. Analytics calculate a customer health score, and a machine learning model predicts the likelihood that the customer will renew its contract.
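
As a simplified illustration, a health score can be computed as a weighted blend of normalized metrics on a 0-100 scale. The metrics, weights, and threshold below are invented for the example; the real renewal prediction comes from a trained machine learning model, not fixed weights.

```python
# Toy customer health score: a weighted blend of normalized metrics.
# Metric names, weights, and the renewal threshold are all hypothetical.

WEIGHTS = {"csat": 0.4, "support_trend": 0.3, "product_usage": 0.3}

def health_score(metrics):
    # Weighted sum over the agreed metric set (each metric on 0-100).
    return sum(WEIGHTS[k] * metrics[k] for k in WEIGHTS)

def likely_to_renew(metrics, threshold=60.0):
    # Stand-in for the ML model: flag accounts above a score threshold.
    return health_score(metrics) >= threshold

customer = {"csat": 80.0, "support_trend": 55.0, "product_usage": 70.0}
score = health_score(customer)
print(round(score, 1))  # 69.5
```

In the real pipeline, Control-M keeps the inputs fresh so the score and prediction always reflect the latest support and contract data.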

Control-M also automatically sends updates and alerts to account managers when customer contracts are coming up for renewal. The finance team did much of this on their own, with Control-M enabling the self-service aspects.

“Before Control-M this was a nightmare,” says Robert Hanley, BMC’s director of finance. “We had to have an analyst dedicated to updating our machine learning models. Now the update process is zero percent of the analyst’s job. The process is completely automated, and we can easily get an up-to-date customer health score in real-time.”

In future blog posts we plan to cover that program in depth and share others, too. Our business users have proven to be very creative, developing many resources, including data marts, mobile apps, automated reporting, and more. In one case, all the data needed to run customer support operations, including the executive dashboards that guide decision making, is now refreshed and ready by 4 a.m. instead of 8 a.m. That change has improved our customer responsiveness while freeing hundreds of employees in the organization to focus on additional high-value projects. In another, our sales operations team used MicroStrategy, Tableau, Alteryx, multiple databases, and other systems to create a workflow that combines previously hard-to-access data to improve customer support, simplify tasks for hundreds of business users, and give executives more strategic insight.

Our data democratization efforts never end but we are proud of our success so far, which was validated when we won TDWI’s Best Practices Award for Emerging Technologies and Methods – an award that recognizes tech users, not tech vendors.

While the business users who benefit from data democratization span a broad spectrum of technical skill and diverse use cases, there’s one thing they keep telling us: they are getting their weekends back thanks to Control-M. We will share some detailed use case examples in future posts. Until then, click here to learn more about Control-M capabilities and how other customers are using it to transform.

Control-M Provides Quality Control for BMC’s Customer Zero Data Warehouse https://www.bmc.com/blogs/control-m-provides-quality-control-for-bmcs-customer-zero-data-warehouse/ Tue, 11 Dec 2018 00:00:29 +0000

At BMC we learned the hard way that our enterprise data warehouse was much better at managing data volume than it was at managing data quality. Our data volumes are growing insanely fast, and the load process may take hours. We can’t afford not to know where we are in the load process, especially if it fails and we have to recover. In today’s analytical world, you can’t tell business users their jobs will now be ready in 12 hours instead of six because something broke earlier and nobody knew about it. Unfortunately, that’s what used to happen. We also got burned many times when reports were wrong because the data was only partially refreshed.

It became clear we needed to build more quality control and automation into our workflows, especially for extract, transform, and load (ETL) operations involving our enterprise data warehouse. Fortunately, we also learned that Control-M could help us with workflow management and quality control in several ways.

Life Without Automation

Let me set the scene. BMC’s enterprise data landscape requires the ingestion of many different data sources: SaaS applications like Salesforce.com and Eloqua that use web services, on-premises applications and databases like Oracle® CRM, flat files, unstructured data, and other external sources that require custom parsing. Getting these sources into the warehouse for loading, processing, and integration requires many different tools and processes. Managing the numerous tools and orchestrating the complex load process is where we felt the most pain.

In the early days, we leveraged custom scripting across the different toolsets to attempt to manage the hybrid data landscape. This became impossible to manage as the environment grew and new technologies were added, which greatly increased the complexity. Custom scripting is not scalable or manageable across many different tools.

Our data volumes began to scale significantly as we got more involved in analytics, business intelligence, big data, cloud and hybrid infrastructure, and we began having more data quality problems. We were also using more sources of data, and had to find ways to validate the data before it went into our enterprise data warehouse.

Two problems surfaced at this stage of our digital development. First, we hit the scalability limitations of using multiple toolsets and scripting. Second, the resulting consequences went from being an IT/Operations problem to becoming a business problem. Data quality issues could cause people not to be paid on time. Bad data might cause jobs to fail, or might not be apparent until jobs and reports were complete, which would necessitate the data being updated and the entire job having to be rerun. Situations like that threatened our ability to complete quarterly closing on time.

Many of the problems we experienced were preventable. For example, we could validate data by making sure all the required rows and columns were in place, and that values were within normal ranges. But that takes time and effort – time we didn’t have.
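
That kind of validation can be sketched as a simple pre-load check: confirm the expected columns are present and that numeric values fall within normal ranges. The column names and ranges below are hypothetical examples, not our actual rules.

```python
# Sketch of a pre-load validation pass over incoming records.
# REQUIRED_COLUMNS and AMOUNT_RANGE are hypothetical illustrations.

REQUIRED_COLUMNS = {"invoice_id", "amount", "currency"}
AMOUNT_RANGE = (0.0, 1_000_000.0)

def validate_rows(rows):
    errors = []
    for i, row in enumerate(rows):
        missing = REQUIRED_COLUMNS - row.keys()
        if missing:
            errors.append(f"row {i}: missing columns {sorted(missing)}")
            continue
        lo, hi = AMOUNT_RANGE
        if not (lo <= row["amount"] <= hi):
            errors.append(f"row {i}: amount {row['amount']} out of range")
    return errors

good = {"invoice_id": "INV-1", "amount": 1200.0, "currency": "USD"}
bad = {"invoice_id": "INV-2", "amount": -50.0, "currency": "USD"}
errors = validate_rows([good, bad])
print(errors)  # one "out of range" error for row 1
```

Checks like these are cheap individually; the cost we couldn't afford was wiring them manually into every workflow, which is exactly what automation later solved.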

Integrate. Automate. Orchestrate. – Life After Control-M

We knew that improving the quality of data going into our data warehouse, and improving the orchestration of data warehouse-dependent jobs with systems throughout the enterprise, were what we needed to relieve our volume and speed pressures. When we dug deeper, we determined it was critical to have an abstraction layer that sits above applications to manage ETL, workflows, and custom scripts.

Control-M lets us do that. We can develop workflows that manage the load process across the disparate tools just as we do for any other job – we don’t need separate tools or scripts. And because everything is integrated, we get complete visibility into job status. If an Informatica job is delayed because it can’t start until another workflow completes (for example, a file transfer or ETL transaction), we’ll be proactively alerted. We no longer have to manually check five or six systems to get different status reports so we can figure out when a job will complete. We no longer have to tell anyone that the job that was supposed to be ready today isn’t even going to start for another 12 hours. Plus, with Control-M Self Service, business users can monitor their own jobs, even on their cellphones.

We can also do some data validation. Before, we might have been able to do a rudimentary quality check, for example to check whether a needed file was there or not. Now we can go further and know if there is anything in the file, if the file is in the right format, if it loaded correctly and more.
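
Those file-level checks – the file exists, has content, matches the expected format, and yields the expected row count – might look like this in outline. The CSV layout and file names are hypothetical examples.

```python
import csv
import os
import tempfile

# Sketch of progressive file checks on an incoming feed: presence,
# non-emptiness, expected header, and a row count for load verification.
# The CSV layout here is a hypothetical example.

def check_feed(path, expected_header):
    if not os.path.exists(path):
        return "missing"
    if os.path.getsize(path) == 0:
        return "empty"
    with open(path, newline="") as f:
        reader = csv.reader(f)
        header = next(reader, None)
        if header != expected_header:
            return "bad format"
        row_count = sum(1 for _ in reader)
    # The caller can compare row_count to what the loader reported.
    return f"ok: {row_count} rows"

# Write a small sample feed to exercise the checks.
with tempfile.NamedTemporaryFile("w", suffix=".csv", delete=False, newline="") as f:
    csv.writer(f).writerows([["id", "amount"], ["1", "10.0"], ["2", "20.0"]])
    sample = f.name

result = check_feed(sample, ["id", "amount"])
print(result)  # ok: 2 rows
os.remove(sample)
```

Running a check like this as its own upstream job means a bad file fails fast and visibly, instead of surfacing hours later as a half-refreshed report.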

Sometimes bad data still happens, but now it’s OK. Instead of 12 hours to learn about the problem and recover, now we can identify the problem, get an answer and move on in five minutes.

Click here to learn more about how Control-M integrates with Informatica and other business applications to manage ETL operations and much more.

Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.