Basil Faruqui – BMC Software | Blogs

What are Service Orchestration and Automation Platforms (SOAPs)?
Published July 3, 2023 | https://www.bmc.com/blogs/soaps-service-orchestration-automation-platforms/

SOAPs – Service Orchestration and Automation Platforms – represent the emergence of an enterprise orchestration engine that spans diverse applications and infrastructures.

As noted in the recently published Gartner® report, “SOAPs enable I&O leaders to design and implement business services. These platforms combine workflow orchestration, workload automation and resource provisioning across an organization’s hybrid digital infrastructure.”*

Our takeaway is that Gartner expects SOAPs to function as the single orchestration point for managing and executing automation tasks across the enterprise and recommends that Infrastructure and Operations (I&O) leaders invest in SOAPs to drive digital innovation and business agility.

What is driving the need for SOAPs?

Organizations continually seek to improve cost and efficiency as they scale. SOAPs hold the promise of delivering on those improvements with predictability and reliability.

Traditional job scheduling and workload automation tools have failed to keep pace with the speed and complexity of digital business. Gartner predicts that “By year-end 2025, 80% of organizations currently delivering workload automation will be using SOAPs to orchestrate workloads across IT and business domains.”*

While tactical IT automation met the near-term needs for reducing manual effort and its associated errors and costs, SOAPs go beyond this minimum to orchestrate much more complicated event-driven workflows both inside and outside of IT.

What are the elements of a SOAP?

Figure: The elements of a SOAP (Gartner)

As per our understanding, Gartner identifies six key capabilities of a SOAP:

  1. Application workflow orchestration to create and manage workflows across multiple applications both on-premises and in the cloud.
  2. Event-driven automation to simplify IT processes involving manual steps or scripting.
  3. Scheduling, monitoring, visibility, and alerting to enable real-time capabilities and improve SLAs.
  4. Self-service automation to empower business users, developers, and others to orchestrate their own jobs.
  5. Resource provisioning of both on-premises and cloud-based compute, network, and storage resources.
  6. Managing data pipelines from automating file transfers to orchestrating the ingestion and processing of multiple data streams.

Whether available from the cloud or on-prem, SOAPs are likely to include a central administrative console, scheduling engine, workflow designer, agents for executing automation tasks, and a self-service mobile app for users. Additional capabilities may include support for machine learning algorithms and REST APIs that invoke orchestration programmatically.
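
To make the “REST APIs that invoke orchestration programmatically” point concrete, here is a minimal Python sketch of a client submitting a workflow run to a SOAP over REST. This is illustrative only: the base URL, endpoint path, token handling, and response fields are hypothetical placeholders, not the API of any specific product.

```python
import requests

# Hypothetical endpoint and credentials, for illustration only.
SOAP_API = "https://soap.example.com/api/v1"
TOKEN = "REPLACE_WITH_A_REAL_TOKEN"

def trigger_workflow(workflow_name: str, parameters: dict) -> str:
    """Ask the orchestration engine to run a named workflow; return its run ID."""
    response = requests.post(
        f"{SOAP_API}/workflows/{workflow_name}/run",
        json={"parameters": parameters},
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["runId"]  # assumed response field

if __name__ == "__main__":
    run_id = trigger_workflow("nightly-billing", {"region": "EMEA"})
    print(f"Submitted workflow run: {run_id}")
```

In practice, this kind of API surface is what lets DevOps pipelines, self-service portals, and event-driven triggers all drive the same central orchestration engine.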

What SOAPs are not

The number of automation tools is growing rapidly, but they are not all designed to solve the same problem. The SOAPs Market Guide emphasizes this for two areas of automation: DevOps tools and robotic process automation (RPA). Included below are some quotes from the Market Guide explaining how Gartner differentiates between these automation categories.

“SOAPs expand the role of traditional workload automation by adapting to use cases that deliver and extend into data pipelines, cloud-native infrastructure and application architectures. These tools complement and integrate with DevOps toolchains to provide customer-focused agility and to cost savings, operational efficiency and process standardization.” (Page 2)

“SOAPs provide a unified administration console and an orchestration engine to manage workloads and data pipelines and to enable event-driven application workflows. Most tools expose APIs enabling scheduling batch processes, monitoring task statuses and alerting users when new events are triggered and can be integrated into DevOps pipelines to increase delivery velocity.”

“SOAPs will not replace or replicate automation functionality in other domains, such as infrastructure automation, SaaS management, DevOps toolchains or Robotic Process Automation. Rather, they aim to be a single orchestration point to manage the development, execution, routing and delegation of automation tasks as needed, both from and to these other domain automation platforms.”

“These platforms are complementary to automation platforms such as digital platform conductors for orchestrating workload placement across a hybrid delivery topology or RPA platforms for both interaction and API-enablement of legacy systems. The interaction of SOAPs with hyperautomation approaches is similarly complementary, extending SOAP value in the increasingly complex automation use cases.”

What is the future for SOAPs?

The orchestration and automation of IT processes and services, and application and data workflows, are evolving and coalescing into platforms that extend the boundaries of traditional scheduling, monitoring, and service delivery tools.

As organizations modernize their I&O practices, we see that I&O leaders should consider the following when evaluating SOAP vendors:

  • “Prioritize support for orchestrating cloud-native applications and infrastructure during SOAP selection to prepare for cloud migration or integration with IaaS or SaaS workloads.”
  • The depth and breadth of native integrations
  • Customer support and the long-term viability of the provider

SOAPs remain an evolving market, representing the transformation of a mature market for workload automation tools to meet modern infrastructure, application, data and business process requirements.

* Gartner, Market Guide for Service Orchestration and Automation Platforms, by Analysts Chris Saunderson, Daniel Betts, Hassan Ennaciri, published 23 January 2023 – ID

This graphic was published by Gartner, Inc. as part of a larger research document and should be evaluated in the context of the entire document. The Gartner document is available upon request from BMC.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

Operationalization and Orchestration: the Keys to Data Project Success
Published August 12, 2022 | https://www.bmc.com/blogs/operationalization-orchestration-keys-to-data-project-success/

Data is vital to the success of every company. The amount of data available is staggering (and growing exponentially). But simply having the data isn’t enough; companies must also utilize it correctly. Unfortunately, businesses struggle to get into production the data projects that turn all this data into insights. In fact, in 2018, Gartner® predicted in their report entitled “Predicts 2019: Artificial Intelligence Core Technologies” that through 2022 only 15 percent of cutting-edge data projects would make it into production. Looking at this from the other side, 85 percent of data projects would fail to produce results. Pretty staggering, right? In its Top Trends in Data and Analytics, 2022 report, Gartner points out that by 2024, organizations that lack a sustainable data and analytics operationalization framework will have their initiatives set back by up to two years.

As companies start to recognize that they need to build operationalization into their plans, the industry has begun to put a renewed focus on IT operations (ITOps). That has resulted in a plethora of variations around data (DataOps), machine learning (MLOps), artificial intelligence (AIOps), and analytics modeling (ModelOps). This boom has even spawned the term XOps, which some people in the industry are interpreting on a lighter note as, “we don’t know what’s coming next but it will involve Ops somehow, so we’ll fill in the blank later.” Ultimately, businesses know that they can have a project that works well in prototype in one location, but if it can’t be scaled nationally or globally, the project has essentially failed.

Another reason data projects are so difficult to move to production is the sheer number of moving parts involved. Every data project has the same four basic stages, which are the building blocks of data pipelines: data ingestion from multiple sources, data storage, data processing, and insight delivery. Each of these stages involves a significant amount of technology and moving parts.

Four Stages—Building Blocks of Data Pipelines

  1. Data ingestion
  2. Data storage
  3. Data processing
  4. Insight delivery
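
To make the four stages concrete, here is a deliberately simplified Python sketch that chains ingestion, storage, processing, and insight delivery into one orchestrated run. Every function is a placeholder; in a real pipeline each stage would call out to a different tool or service, which is exactly why orchestration across them matters.

```python
from datetime import date

def ingest() -> list[dict]:
    # Stage 1: pull records from source systems (ERP, CRM, IoT feeds, ...).
    return [{"sku": "A-100", "units": 12}, {"sku": "B-200", "units": 7}]

def store(records: list[dict]) -> str:
    # Stage 2: persist the raw records (object store, data lake, warehouse, ...).
    path = f"/tmp/raw_{date.today().isoformat()}.jsonl"
    with open(path, "w") as f:
        for record in records:
            f.write(f"{record}\n")
    return path

def process(records: list[dict]) -> dict:
    # Stage 3: transform and aggregate the data.
    return {"total_units": sum(r["units"] for r in records)}

def deliver(insight: dict) -> None:
    # Stage 4: hand the result to the analytics / reporting layer.
    print(f"Daily insight: {insight}")

def run_pipeline() -> None:
    records = ingest()
    store(records)
    deliver(process(records))

if __name__ == "__main__":
    run_pipeline()
```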

Looking at each stage, it quickly becomes apparent there are a lot of components across many application, data, and infrastructure technologies. Ingestion involves orchestrating data from traditional sources like enterprise resource planning (ERP) and customer relationship management (CRM) solutions, financial systems, and many other systems of record. This data is often combined with additional data from social media, weblogs, and Internet of Things (IoT) sensors and devices.

Storage and processing are also extremely complex. Where and how you store data depends significantly on persistence, the relative value of the data sets, the rate of refresh for your analytics models, and the speed at which you can move the data to processing. Processing has many of the same challenges: How much pure processing is needed? Is it constant or variable? Is it scheduled, event-driven or ad hoc? How do you minimize costs?

The last mile of the journey involves moving the data output to systems that provide the analytics. The insights layer is also complex and continues to shift. When the market adopts a new technology or capability, companies regularly adopt that shiny new thing. This constant innovation of new data technologies creates pressure and churn that can bring even the best operations team to its knees.

It’s important to be nimble. You must be able to easily adopt new technologies. Remember—if a new data analytics service is not in production at scale, you are not getting any actionable insights, and as a consequence, the organization is not getting any value from it, whether it is generating revenue or driving efficiencies and optimization.

An obvious goal at the operational level is to run data pipelines in a highly automated fashion with little to no human intervention, and most importantly, have visibility into all aspects of the pipeline. However, almost every technology in the data pipeline comes with its own built-in automation, utilities, and tools that are often not designed to work with each other, which makes them difficult to stitch together for end-to-end automation and orchestration. This has led to a rise in application and data workflow orchestration platforms that can operate with speed and scale in production and abstract underlying automation utilities.

Figure 1. Gartner Data and Analytics Essentials: DataOps by Robert Thanaraj

Control-M from BMC is an application and data workflow orchestration and automation platform that serves as the abstraction layer to simplify the complex data pipeline. It enables end-to-end visibility and predictive service level agreements (SLAs) across any data technology or infrastructure. Control-M delivers data-driven insights in production at scale and integrates new technology innovations into the most complex data pipelines with ease.

The Control-M platform has a range of capabilities to help you automate and orchestrate your application and data workflows, such as:

  • The Control-M Automation API, which promotes collaboration between Dev and Ops by allowing developers to embed production-ready workflow automation while applications are being developed (see the jobs-as-code sketch after this list).
  • Out-of-the-box support for cloud resources including Amazon Web Services (AWS) Lambda and Azure Logic Apps, Functions, and Batch to help you leverage the flexibility and scalability of your cloud ecosystems.
  • Integrated file transfers with all your applications that allow you to move internal and external file transfers to a central interface to improve visibility and control.
  • Self-Service features that allow employees across the business to access the jobs data relevant to them.
  • Application Integrator, which supports the creation of custom job types so they can be deployed in your Control-M environment quickly and easily.
  • Conversion tools that simplify conversion from third-party schedulers.
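
As promised above, here is a minimal jobs-as-code sketch built around the Control-M Automation API: a folder containing one command job is defined as JSON and submitted to the API’s build service, which validates definitions without deploying them. The folder/job structure and the session-login and build endpoints follow the general shape of the publicly documented Automation API, but every value (host names, credentials, server and user names) is a placeholder, and you should verify endpoint details against the documentation for your version.

```python
import json
import requests

# Jobs-as-code: a folder containing one command job, expressed as JSON.
# Field values (server, host, run-as user) are placeholders for illustration.
job_definitions = {
    "DemoFolder": {
        "Type": "Folder",
        "ControlmServer": "ctm-server-name",
        "DailyExtract": {
            "Type": "Job:Command",
            "Command": "python /opt/jobs/extract.py",
            "RunAs": "batchuser",
            "Host": "agent-host-01",
        },
    }
}

# Assumed endpoint layout; confirm against your Automation API documentation.
ENDPOINT = "https://controlm.example.com:8443/automation-api"

def validate_definitions() -> None:
    # Log in to obtain a session token.
    login = requests.post(
        f"{ENDPOINT}/session/login",
        json={"username": "apiuser", "password": "apipassword"},
        timeout=30,
    )
    login.raise_for_status()
    token = login.json()["token"]

    # The build service checks job definitions for errors without deploying them.
    build = requests.post(
        f"{ENDPOINT}/build",
        headers={"Authorization": f"Bearer {token}"},
        files={"definitionsFile": ("jobs.json", json.dumps(job_definitions))},
        timeout=60,
    )
    build.raise_for_status()
    print("Job definitions are valid.")

if __name__ == "__main__":
    validate_definitions()
```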

Data projects will continue to grow in importance. Finding the best way to successfully operationalize data workflows as a key part of your overall project plan and execution is vital to the success of your business. An application and data workflow orchestration platform should be a foundational step in your DataOps journey.

To learn more about how Control-M can help you find DataOps success, visit our website.

GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.

If You Want to Be Data Driven, Pave the Way With DataOps
Published June 13, 2022 | https://www.bmc.com/blogs/if-you-want-to-be-data-driven-pave-the-way-with-dataops/

With enterprises across the world making concerted efforts to become data driven, several important disconnects have developed along the way. For example, more than 60 percent of enterprises today expect that their employees are using data to make decisions, but only a third of employees strongly believe their actions are data driven, and even fewer trust the data, according to Improving Business Outcomes with DataOps Orchestration, a recent IDC Analyst Connection, sponsored by BMC. That’s consistent with the 2022 results of a long-running research series from New Vantage Partners, which found that 97 percent of organizations are investing in data initiatives, but only 27 percent feel they have been successful at becoming data-driven organizations.

The disappointments organizations are experiencing, particularly in artificial intelligence and machine learning (AI/ML), are occurring despite the fact they have access to more data sources and software resources than ever before. So, what’s stopping companies from being more successful in their efforts to become data driven? It comes down to complexity and culture.

One of the complexities has to do with operationalizing data initiatives at scale. If data initiatives can’t be operationalized, then they won’t produce the expected value. To respond to this challenge, the industry is adopting DataOps as a set of practices that will industrialize the operational aspects of data initiatives. An influential and widely used approach to DataOps, which you can learn about in The DataOps Manifesto, addresses the leadership, cultural, and management principles that organizations should embrace to make analytics and related efforts successful. It speaks about the importance of data orchestration, but focuses much more on cultural than technical steps to success.

Wherever you are in your efforts to make better use of data, developing a solid DataOps program is a tangible step you can take today to make the journey easier. Organizations that are not getting the value they expected from their data initiatives should look at their processes for managing data before making any major new investments. Because of the data sources and tooling available, enterprises have incredible freedom in what they can develop to become data driven. But as IDC Research Director Stewart Bond notes in the study, “Freedom without a framework is chaos.”

DataOps provides the framework that enterprises need to control that chaos—if the DataOps program can orchestrate across all data sources, internal and external data users, and every infrastructure component, software asset, and process in between.

Orchestration as an important element of DataOps

The desire to be data driven and the need for DataOps are not new, but the complexity that organizations now face is unprecedented. Here’s an example that illustrates that point. Soon after the COVID-19 pandemic hit, Tampa General Hospital was sharing data about case counts, available ICU beds and ventilators, and other information with dozens of hospitals and other providers across its region to support a coordinated response.

The information was summarized in daily dashboards that were produced with data that relied on file transfers and other extract, transform, and load (ETL) operations from multiple health systems across the state of Florida. Control-M orchestrated it all, so hospital and public health officials could make decisions based on comprehensive, up-to-date information. DataOps is the key to keeping complex environments like this functioning reliably, and orchestration is the key requirement for DataOps today.

Complexity and the need for orchestration are common to businesses, even if they don’t operate at enterprise scale or have inter-enterprise complexity. According to the IDC Analyst Connection, two-thirds of organizations are already using at least ten different data engineering and intelligence tools. Figure 1 below shows some of the leading components of a typical data pipeline. These tools, and the applications that depend on them, are rarely centralized and instead are often spread across multicloud and on-premises infrastructure.

Figure 1. Fundamental Components of a Data Pipeline, BMC Software

While these tools can be highly functional, they will have limited value in the real world if they can’t effectively work together. The inability to automate across processes demonstrates how complexity can limit the value of data programs. That is why the ability to orchestrate—in addition to automate—is so important now. DataOps addresses this orchestration, along with the human elements related to sharing and collaborating across enterprise functions.

Some organizations have achieved success by executing completely in the cloud, using Control-M to orchestrate and automate all their data ingestion, analysis, and automated remediation processes. Control-M has also evolved to meet today’s DataOps workflow orchestration needs and can help organizations control potential chaos by injecting automation and orchestration into DataOps, too. As IDC points out, “Data logistics is experiencing a renaissance. Many of the capabilities provided by legacy data management and automation solutions that provided control and governance are being refactored, reimagined, and modernized to accelerate work in the modern data environment and to help rein in the chaos.”

Get more of IDC’s research-backed perspective on how DataOps and orchestration add value to data initiatives in the full IDC Analyst Connection, Improving Business Outcomes with DataOps Orchestration (doc #US49015622, April 2022), and click here to learn more about how BMC is helping companies become a Data-Driven Business.

 

Introducing Control-M Workflow Insights
Published October 27, 2021 | https://www.bmc.com/blogs/introducing-control-m-workflow-insights/

“Automate everything that can be automated” is a phrase that we hear often in digital transformation agendas. The importance and focus on automation is not new, but for many organizations, this is now a business priority versus an IT-led initiative. And orchestration and automation of application and data workflows has become a critical need as companies look to operationalize—at scale—the delivery of digital business services from anomaly detection to personalized marketing offers to back-office processes like invoicing/billing, financial close, etc.

Companies that have invested in orchestrating and automating application and data workflows quickly realize that these workflows can become very complicated once they span many different operating systems and data platforms and integrate with applications across on-premises and cloud environments. The other issue is that these workflows change constantly as business needs evolve.

This combination of expanding and changing workflows can degrade the performance of application and data workflows, reduce visibility, and significantly impact the overall business, not to mention lead to missed SLAs, both internally and for your customers. IT and engineering teams need a way to optimize workflows and ensure that any changes to them don’t impact business service delivery.

Consider this example: a critical—and now automated—service has been updated with new jobs in the workflow, and what used to take two hours has slowly crept up to three hours or more. These changes are often incremental and it can be difficult to understand their impact on business outcomes if you can’t trend and measure performance against key performance indicators (KPIs).
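
Catching that kind of slow creep is fundamentally a trending problem: compare recent run durations against the agreed KPI and against an earlier baseline. The short Python sketch below, using made-up numbers, flags a workflow whose recent average duration has breached its target or drifted well past its own history.

```python
from statistics import mean

# Hypothetical run durations in minutes for one workflow, oldest to newest.
durations = [118, 121, 125, 124, 131, 138, 142, 149, 155, 163, 171, 178]

KPI_MINUTES = 120        # agreed service target for this workflow
WINDOW = 5               # rolling window used for the trend
DRIFT_THRESHOLD = 1.10   # alert if recent runs are 10% slower than the baseline

baseline = mean(durations[:WINDOW])   # earliest runs as the baseline
recent = mean(durations[-WINDOW:])    # latest runs

if recent > KPI_MINUTES:
    print(f"KPI breach: recent average {recent:.0f} min exceeds the {KPI_MINUTES} min target")
if recent > baseline * DRIFT_THRESHOLD:
    drift_pct = (recent / baseline - 1) * 100
    print(f"Duration drift: recent runs are {drift_pct:.0f}% slower than the baseline")
```

Tracking these trends per workflow, rather than eyeballing individual runs, is what turns incremental changes into something you can act on before an SLA is missed.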

That’s why I’m pleased to announce today that we are launching a new offering for Control-M from BMC called Control-M Workflow Insights.

Control-M Workflow Insights

Control-M Workflow Insights provides valuable dashboards that give users in-depth observability to continuously monitor and improve performance of the application and data workflows that power critical business services. Users get easy-to-understand dashboards with insights into the trends that emerge from continuous workflow changes to help reduce the risk of adversely impacting business services. By exposing performance drifts, irregularities, and errors early, the time required to resolve issues is significantly reduced, and operations and application development teams can work more autonomously.

Figure: Control-M Workflow Insights dashboards

Each dashboard uses telemetry gathered from real-time application and data workflow behavior, giving users insights into how they can continually optimize both workflows and resource consumption, prioritize maintenance, and improve SLA performance.

Figure: Control-M Workflow Insights SLA dashboards

Control-M Workflow Insights also helps organizations:

  • Manage KPI tracking and performance, ensuring continuous improvement to business application workflow health and capabilities
  • Optimize capacity utilization by exposing potential impacts from onboarding new applications
  • Improve forecasting of future infrastructure and capacity needs
  • Understand critical SLA service duration and effects on the business
  • Find workflow anomalies that could impact Control-M performance and workflow efficiency

To support critical business services, you must move rapidly and implement changes at scale. When deciding what to integrate into your tech stack, it’s an easy choice to pick a solution that efficiently connects your applications and data and their workflows, as well as your Dev and Ops teams.

If you’re ready to reduce risk; get real-time insights into the health of your application and data workflows; increase visibility across your Control-M environment; and improve critical business service delivery, Control-M Workflow Insights can help.

To learn more about Control-M Workflow Insights, check out these resources:

How Control-M Sets Your Foundation for Hyperautomation
Published March 23, 2021 | https://www.bmc.com/blogs/control-m-hyperautomation/

Gartner is demonstrating forward-looking thought leadership again through its recent research and commentary on hyperautomation. The organization lays out a compelling vision of how various technologies, including IoT, robots, software bots, AI, machine learning, 3D printing, remote management and control software and others, will work together to automate many enterprise tasks so processes and facilities can run automatically and be managed remotely. Gartner describes its vision for hyperautomation and the three steps enterprises need to take to achieve it. In this blog, we’ll explore how hyperautomation relates to the way enterprises are managing their workflows in production today, and how Control-M is supporting the transition.

In its recent Three Steps to Hyperautomation report, Gartner presents a compelling case about why hyperautomation is valuable and how it can be achieved. Organizations that are pursuing increased automation would be well served to follow Gartner’s advice by implementing the three steps to achieve it:

  1. Standardization and Interoperability
  2. Remote Management and Control
  3. Full or Semiautonomous Operations

Many elements of hyperautomation are being done today, but haven’t been labeled as such or risen to the predicted scale. The methods and lessons for managing these processes from today’s real-world leaders are highly relevant. We see that every day in our work with BMC customers. You can learn from their success and avoid some of the obstacles to gaining business value from automation.

Gartner states in its report that the automation deficit in companies leaves billions of dollars on the table every year because organizations don’t have a clear strategy and execution plan for the areas that comprise the three steps of hyperautomation. We find this to be particularly true in operations of business services that companies deliver every day. Business services such as financial close, supply chain, billing and invoicing, predictive analytics, marketing recommendation offers, etc. are all powered by a complex web of applications and data systems running on disparate infrastructure. A lack of a standardized approach to automating the complex application and data workflows running across disparate technologies leads to a large operations staff that is constantly busy fighting fires as it relies on silos of automation to deliver business services. Business service delivery suffers without standardization and remote management capabilities for orchestrating across systems, with both reputational and financial impact.

The need I’ve just described is exactly the kind of functionality and capability that Control-M and the SaaS version, BMC Helix Control-M, are providing for thousands of customers, including innovators that have extensively automated operations featuring real-time big data processing, IoT, and other elements of the hyperautomation ecosystem.

Control-M provides a foundation platform for hyperautomation and directly supports each of Gartner’s suggested three steps – achieving standardization and interoperability across enterprise systems, orchestrating activity across remote facilities using IoT, MES, SCADA and other data sources, and achieving full or semiautonomous operations. That isn’t just our vision for the future; it is what is happening today, as the following customer references show.

Tampa General Hospital – Applying data standardization and interoperability to fight COVID-19

Tampa General Hospital (TGH) led a coalition of more than 50 hospitals (including competitors) to coordinate regional response to COVID-19. It had been using Control-M for about a year before the pandemic hit to produce its internal daily performance dashboard. TGH thought it could use that experience to give healthcare providers, public health officials, and government agencies a comprehensive view of COVID-19’s impact on the entire region by combining its data with that of other providers. Tampa General Hospital used its experience with Control-M and its ability to integrate disparate data sources to create a region-wide dashboard that shows up-to-date hospitalization rates, ICU capacity, respirator availability and other information (see the case study here).

Control-M provides this information by integrating with the electronic medical record (EMR) and other systems at dozens of locations. The data it receives is not standardized, but can be processed, protected, and shared nonetheless. Having accurate, real-time data helped healthcare providers and the public sector successfully meet the surge in COVID-19 cases.

“As we navigate day by day through this public health crisis, the dashboard is helping us save lives,” said Dr. Peter Chang, Tampa General vice president of care transitions. “It gives us situational awareness of resources that are available across the region to take care of COVID-19 patients. It breaks down silos between competing hospitals for this collaborative community effort.”

Colruyt – Next-level warehouse automation brings fill rates to new highs

Gartner presents CEMEX, a cement producer in Mexico, as a leading example of what enterprises can achieve through hyperautomation. CEMEX has achieved the second step of hyperautomation (remote management and control) by coordinating production across 125 facilities in three countries to optimize hauling by its cement trucks. Control-M drives a very similar process for Colruyt, a Belgian-headquartered retailer with 600+ of its own stores and 580+ affiliated stores in Europe.

Colruyt’s distribution center operations include multiple batch and real-time software applications and processes. Control-M works across these systems to orchestrate the operational analysis that Colruyt needs to plan its shipping operations for just-in-time store fulfillment. Logistics and warehouse operations were highly automated to satisfy JIT requirements, and Colruyt later enhanced operations by introducing voice technology to support order picking. Control-M had no problems accommodating the new input technology. “It’s very easy to integrate a new technology into the same batch environment,” said Frank Waegeman, IT Manager, Colruyt.

By using voice picking and Control-M to enable order weight and volume information to flow reliably between supplier trucks and the order pickers, Colruyt raised its truck fill rate to 95% without affecting JIT delivery. “For me the Control-M environment is like the heart of the human body. It delivers oxygen to all the vital organs,” said Waegeman. His colleague, Peter Vanbellingen, Colruyt’s director of business processes and systems said, “It really brings added value to our operational excellence.”

Aspiag: Synchronized, timely data cuts product waste

Aspiag also used Control-M to bring together non-standard data sources from different systems to automate replenishment activities. In this case, the Italy-headquartered grocery retailer used Control-M to collect, orchestrate, and synch inventory data from POS systems and in-store mobile computers to create a more timely and accurate view of inventory. The visibility and resulting insights help executives optimize daily replenishment. With a clear, accurate view of up-to-date inventory data, Aspiag was able to reduce food waste across all categories by 5% and perishable waste by 8% (an impressive result because it is a challenging category to manage).

Aspiag reduced out-of-stocks at the same time it cut waste. That is often an either/or proposition for retailers since the simplest way to prevent out-of-stocks is to increase orders, which also increases the chance that perishables will go unsold and spoil.

There are other examples of how Control-M is already supporting hyperautomation elements and stages for customers including Navistar, Itaú Unibanco, CARFAX, Railinc, Up Sí Vale, UNUM, and others. This broad base of proven, real-world use cases demonstrates Control-M’s value for providing an excellent foundation for advanced, scalable automation initiatives. Control-M can support current and future needs because it can conduct event-driven orchestration for real-time processes, and work with a wide range of data sources across different systems.

Because of how much Control-M can automate, it shortens the leap to hyperautomation. What can Control-M do to help you get your legacy processes under control so hyperautomation can flourish? Please contact us if you’d like to discuss some ideas.

Next steps:

Check out Gartner’s complete Three Steps to Hyperautomation report here.

Identifying What’s Missing in the People-Process-Technology Trifecta
Published September 16, 2020 | https://www.bmc.com/blogs/identifying-missing-people-process-technology-trifecta/

Even in this age of digital transformation, many organizations lack the workflow automation they need to deliver the high-performing applications and superior services their end users and customers have come to expect.

Disparate systems, islands of automation and siloed teams lead to technological disconnects that require additional manual labor, scripting and troubleshooting to correct.

A new Forrester study brings this issue to light. Titled “Face the Workflow Automation Gap Head On,” the report reflects input from 355 global IT leaders, each representing large enterprises. The research shows that automation capabilities are critical to the operational success of organizations, both internally and externally.

By and large IT leaders recognize this, with over 65 percent of survey respondents calling their organization’s automation capabilities “‘very’ or ‘extremely’ important to meeting their most pressing priorities in the coming year.”

Of note, workflow automation is viewed by those surveyed as essential to achieving success in three pivotal areas:

  • Responding to business and market changes
  • Improving customer experience
  • Realizing efficiencies

Yet, those surveyed also report that only “a third or less of workflows for various categories under study (e.g., computational engines, file transfers, extract, transform, and load (ETL)) are fully automated today” within their organizations.

Plus, when automation is in place, it is typically limited in scope. According to the study, “26 percent report using a different technology for each workflow type and 44 percent use the automation (if any) that came with a package for any domain-specific workflow type. Meanwhile, seven percent don’t have any formal tool to manage workflows and are instead relying on homegrown methods or tools.”

Why Islands of Automation Fail

The repercussions are far-reaching, with more than two-thirds of respondents frequently experiencing an inability to deliver services in time for the business, excessive manual work to create or manage workflows between different applications or environments, and an inability to get ahead of service failures before they occur.

Addressing these challenges requires organizations to add staff, processes, and layers of technology to their IT operations. Consider also that every new service and application introduced into the computing environment further complicates the workflows.

“Many organizations have sought to modernize their IT systems by embracing the power of cloud computing, internet of things (IoT), mobile, and artificial intelligence (AI),” the report states, adding that “now, for a digital offering to be delivered, information must pass through multiple heterogenous systems and/or teams.”

So how can organizations bring their workflow automation capabilities up to date – and to scale – filling in the gaps between people, processes, and technology throughout the environment in service to customers?

Adopting a Holistic Approach

For today’s agile development environments, the right approach to workflow automation is a holistic one. Forrester advises companies to reevaluate – and even “reinvent” – their application development and delivery approach, bringing their “people, process, and technology resources together in harmony to drive improvement in their software delivery capability. This includes having processes that are continuous, tools that are connected, and teams that are able to draw from diverse skills and expertise.”

According to the study, here’s what this approach should look like:

People: Forrester advises organizations to “cultivate an environment of iteration and collaboration and remove barriers that keep teams from reaching their maximum productivity,” noting that high-maturity firms are bringing high-demand skills into these collaborative teams, such as data science, algorithm development and AI/ML expertise. “By building more well-rounded teams,” the report notes, firms “can reduce time wasted from role handoffs and hierarchies that slow service delivery.”

Processes: Organizations must break down their siloed processes and eliminate islands of automation across the enterprise as well. They should adopt an outcome-driven approach to DevOps that focuses on achieving quality results.

Technology: To deliver quality services and applications at speed, companies should employ automation across the software development lifecycle (SDLC). They should strive for continuous integration and continuous delivery (CI/CD), automating processes “for building and testing software and standardizing delivery practices so they’re easier to monitor and enforce.”

Forrester adds that automation should span the entire technological ecosystem as well, allowing firms to “remove errors from manual processes by standardizing and automating the movement of applications between environments.”

Automating Workflows at Speed and Scale

To truly optimize people, processes and technology within a modern DevOps environment takes an end-to-end workflow automation solution, one capable of orchestrating complex and overlapping initiatives at speed and scale.

“Done right,” the report notes, “application workflow automation moves the burden of workflow execution from people to software, freeing up IT staff to work on strategic initiatives rather than babysitting technology.”

The ideal solution should apply automation to capturing and managing workload output and logs, making it easier for interested stakeholders to diagnose, repair, and learn from service failures. It should also weave automation into the SDLC without complex scripting, allowing companies to bring new features and services to customers much faster.

Role-based access and dashboard views are important to implement as well, as they help “right-size” control to the automation tool while promoting greater collaboration between IT and business users.

More than any other feature, though, a single point of control is essential as it eliminates the need to juggle multiple tools across different workflows, expediting productivity while ensuring quality results.

Control-M: Orchestrating Workflows Across the Continuum

Fortunately, this type of technology exists in Control-M, BMC’s application workflow orchestration tool. Control-M provides advanced operational capabilities easily consumed by Dev, Ops and lines of business alike, including end-to-end workflow connectivity – any application, any data source, and all critical systems of record – mainframe to cloud.

With Control-M implemented, firms realize these and other benefits:

  • Streamlined orchestration of business applications – which helps teams deliver better applications faster by embedding application workflow orchestration into the CI/CD pipeline
  • Extended Dev and Ops collaboration – which allows workflows to be versioned, tested and maintained, helping developers, engineers and SREs define, schedule, manage and monitor application workflows in the production environment
  • Simplified workflows across hybrid and multi-cloud environments – complete with AWS, Azure and Google Cloud Platform integrations
  • Data-driven outcomes delivered faster – making it easy for teams to manage big data workflows at scale
  • Control of file transfer operations – with intelligent internal and external file movement and enhanced visibility

With Control-M, SLAs are managed with intelligent predictive analytics, compliance auditing and governance are automated, and logs and output are easily captured and managed as well.

Measurable Results

Ultimately, Control-M provides the high levels of automation needed to deploy new applications and features swiftly and reliably in complex DevOps environments. Time-tested in the marketplace, Control-M has proven its stability, with thousands of companies scaling from tens to millions of jobs with zero downtime.

As noted by Forrester, firms that achieve comprehensive automation across the environment “not only have a desired qualitative outcome (such as improved customer satisfaction), but also quantitative targets that teams can use to measure progress. This tactic elevates your evaluation of a capability from ‘we have three systems of automation to manage customer success’ to ‘we have improved customer success by 20 percent and are targeting 35 percent.’”

To see what end-to-end workflow automation can do for your people-process-technology challenges, download “Face the Workflow Automation Gap Head On” now.

Click here to learn more about Control-M

XebiaLabs and Control-M: What’s the difference?
Published June 25, 2020 | https://www.bmc.com/blogs/xebialabs-and-control-m-whats-the-difference/

I recently came across an interesting article on cio.com titled What is a chief automation officer? This title has been mentioned at a few conferences I attended a couple of years ago, but this was the first article I saw that attempted to describe the role. According to the article, a key deliverable for a Chief Automation Officer (CAO) is to oversee enterprise process automation as a whole and to focus on underpinning all automation attempts with the right technology across the company. It remains to be seen how many companies will adopt a CAO role, but I do think that one reason for the emergence of this role is the need for automation everywhere and the subsequent exponential growth of automation tools used across the enterprise.

I spend time with a lot of Control-M customers and many of them are seeking clarification on the use cases for many of the available automation tools. The number of automation tools available today seems to be about the same as the number of stars in the observable universe, making it impossible to categorize and compare even a fraction of the tools in one blog. I will focus on one question that has come up recently from our customers. What’s the difference between XebiaLabs and Control-M? Where and when should both products be used? A point to note is that XebiaLabs is now part of Digital.ai after merging with Collabnet VersionOne and Arxan.

Customer View

A good place to start any categorization is to see how the customers are using the tools.

Rabobank, a featured customer on the XebiaLabs website, describes their use case as follows:

“XebiaLabs enables us to deploy our applications more often, more reliably, and more predictably. It helps us to significantly improve our time to market in multiple critical areas.”

Another success story on the XebiaLabs website is Air France-KLM.

“Air France-KLM chose XL Deploy, an Application Release Automation solution from XebiaLabs. XL Deploy automates and accelerates Java and .NET deployments in cloud and middleware environments, such as IBM WebSphere, Oracle WebLogic, and JBoss.

The ability for developers to support around 200 Java deployments a week in a “self service” model has clear benefits for Air France-KLM, according to Bosch. We must be able to deploy the EAR files for a project as well as configure other middleware systems such as web servers, security proxies, and XML firewalls. For us, the ability to do this across different products in an integrated manner is a huge advantage.”

Reading these and many other success stories on the XebiaLabs website, it is no surprise that customers are using its solutions to simplify the challenge of configuring and deploying applications across diverse and complex infrastructure, because that is exactly how XebiaLabs describes its products. Its main focus is on three areas: Release Orchestration, Deployment Automation, and DevOps Intelligence to measure and optimize DevOps performance.

Let’s look at how Control-M customers describe their use of the product.

Todd Lightner from the Hershey Company describes in this blog how Hershey’s is using Control-M to help keep inventory stocked at stores. Included below is his description of their use case.

“The data center operations group runs thousands of jobs each day. These jobs manage the digital interactions that are necessary to run our business—not just manufacturing, supply planning, supply chain, warehousing, and distribution but also finance, payroll, costing, human resources, marketing, and sales. We handle many of these functions within our complex SAP® environment. BMC’s Control-M solution automates most of these jobs and processes. It kick-starts them, monitors progress, and sends out alerts if issues arise. So, when anyone asks me what Control-M does at The Hershey Company, I tell them that it literally runs our business. It’s one of our five most critical applications.”

A case study of Raymond James Financial on BMC’s website describes their use as follows:

“Control-M manages jobs across complex interdependencies among hundreds of applications that access the company’s data warehouse and consolidated data store. Nightly processing ensures that senior management and financial advisors have the data they need to help clients with investment decisions.

Audit report preparation, which previously took two to three weeks, now only takes a few hours”

Again, it is no surprise that Control-M customers describe use cases focused on the automation and orchestration of workflows for business applications, because that is what Control-M has been designed for. Here is the description of Control-M from BMC’s website:

“Control-M simplifies application workflow orchestration. It makes it easy to define, schedule, manage and monitor workflows, ensuring visibility and reliability, and improving SLAs.”

Analyst View

Gartner covers XebiaLabs under the Application Release Orchestration category. The latest Magic Quadrant for this category is available on the XebiaLabs website.

Control-M has historically been covered under the Workload Automation category by Gartner and EMA. The most recent report on Workload Automation is by EMA and can be found on BMC’s website.

Summary

Products offered by XebiaLabs focus on automating the DevOps toolchain, covering the various steps in the software development lifecycle such as code, build, test, and release: all stages that occur before an application reaches production.

Once applications are in production, one of the critical functions is the automation and orchestration of application workflows that are directly involved in the delivery of business services such as billing, payroll, business intelligence reporting, and prediction and recommendation applications driven by machine learning and artificial intelligence algorithms. Control-M’s focus over the last 20 years has been to provide a single point of control for automating workflows of applications that deliver these business services.

The number of automation tools is only expected to grow as companies accelerate digital transformation efforts, so it will be more important than ever to have clarity on the problem that needs to be solved and then to choose the right solution.

How PayPal Supercharges App Development with DevOps and Control-M
Published February 27, 2018 | https://www.bmc.com/blogs/how-paypal-supercharges-app-development-with-devops-and-control-m/

PayPal Highlights:

  • $13,500+ in payments processed each second
  • 2,600+ applications
  • 4,500+ developers
  • 42,000 batch executions per day

With 210+ million active users (in the highly competitive global financial services market), PayPal must constantly find new and innovative ways to help their customers connect and transact. Yet, the highest levels of quality, security and compliance must be maintained. How do they do it? They’ve embraced DevOps and created a self-service culture that empowers developers to make better applications faster.

Last November, Rama Kolli, a product manager for PayPal (@PayPal), spoke to attendees at the 2017 DevOps Enterprise Summit in San Francisco (@DOES_USA) about how the company has dramatically sped up application development. PayPal’s story is filled with valuable takeaways for any company facing the same challenges. Here’s some of what I learned.

A few years ago, a PayPal developer’s life was all about logging support tickets.

  • If a developer needed an estimate for production capacity, they logged a ticket.
  • If they needed certifications for a new app, they logged a ticket.
  • If they needed a release window, they logged a ticket.

You see where I’m going. It took days to create a new application, weeks to deploy a test server, and months to move the app into production. Developers were drowning in tickets and struggling to keep up with the innovation the business required. The process was clearly broken.

So, PayPal joined the DevOps revolution and built a custom self-service platform designed to give developers full control over every phase of the Software Development Lifecycle (SDLC) while still adhering to corporate standards. This approach completely shifts the developer’s day to day activities from the chaos of support tickets to a one-stop shop for all their requests.

One area of this self-service lifecycle that I am going to focus on is scheduling.

PayPal leverages BMC’s Digital Business Automation Platform, Control-M, to enable developers to orchestrate their batch workflows. Developers already have self-service access to build their batch applications. It’s been so successful that PayPal is building a self-service experience for scheduling batch jobs using the Control-M Automation API.

Because batch applications are integrated between Control-M and their self-service platform, agents are provisioned along with the applications and get propagated to all the different environments that batch apps are deployed to. And by using Control-M’s feature-rich editor, developers can define their batch schedules, execute dry runs of the schedules, and publish the schedule for continuous execution. This means that batch definitions are completed much earlier in the SDLC, reducing the amount of rework needed in the release and deploy phase while increasing the quality of the app in production.
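
As a sketch of what completing batch definitions earlier in the SDLC can look like, the tests below could run in a developer’s CI pipeline: they validate the workflow JSON with the Automation API’s build service and then exercise it with the run service in a test environment. The ctm command-line verbs (build, run) come from the publicly documented Automation API; the repository path, file name, and test framing are hypothetical and not a description of PayPal’s actual setup.

```python
import subprocess
from pathlib import Path

# Hypothetical location of the workflow definition inside the application repo.
JOBS_FILE = Path("workflows/payments_batch.json")

def ctm(*args: str) -> subprocess.CompletedProcess:
    """Run the Automation API command-line client and capture its output."""
    return subprocess.run(["ctm", *args], capture_output=True, text=True)

def test_job_definitions_are_valid():
    # 'ctm build' validates job definitions without deploying them.
    result = ctm("build", str(JOBS_FILE))
    assert result.returncode == 0, f"Workflow validation failed:\n{result.stderr}"

def test_workflow_runs_in_test_environment():
    # 'ctm run' executes the definitions so a dry run can be inspected before release.
    result = ctm("run", str(JOBS_FILE))
    assert result.returncode == 0, f"Test run failed:\n{result.stderr}"
```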

So, what’s the end result?

PayPal is building and deploying applications faster than ever – in some cases in less than two weeks. To date, over 1,000 applications have been built in their self-service platform. Developers feel empowered and can actually focus on development, not logging and following up on tickets. The rate at which change is deployed to production is no longer bound by platform limitations. Now it’s all about the speed at which developers can write their own code, certify it and deploy it.

Rama also shared a few lessons that PayPal learned along the way.

  1. Site Operations is a key partner in the DevOps journey. Increasing development agility with self-service capabilities can never come at the expense of site stability. It’s important to work directly with your site operations partners to make sure their concerns are addressed and that their tools are built into the platform.
  2. DevOps raises the bar for development. With self-service, developers get used to a much faster work pace and no longer tolerate delays. This makes it easy to identify bottlenecks and opportunities to invest in automation.
  3. Freedom isn’t free. PayPal’s 4,500 developers have built over 12,000 test playgrounds. If these test instances are not being used by developers efficiently, there’s a significant cost to the business. It’s important that DevOps platforms give developers visibility into the cost of the resources they have provisioned.
  4. Invest in the user experience (UX) to lower support costs. The more intuitive you can make your platform, the more developers are able to be self-sufficient. Pain points in the UX will quickly show up in the form of added service costs. Strive to create a unified, user-friendly experience and you’ll keep service and training costs down.
  5. Productivity is key. PayPal is investing in tracking capabilities to ensure that the DevOps platform is available to developers whenever they need it and that they can complete tasks as fast as possible. Availability and usability will drive productivity and efficiency.

DevOps has revitalized the way PayPal approaches development, and BMC is proud to be a partner on that journey. I hope these insights are helpful as you navigate your DevOps journey. Are you currently implementing or planning to implement DevOps strategies and tools at your company? I’d love to hear your feedback and questions!

Click here to watch Rama Kolli’s entire presentation, ‘Supercharging Application Development at PayPal to Democratize Financial Services.’

10 Troubling Excuses for Using Oozie with Hadoop Workflows
Published February 7, 2018 | https://www.bmc.com/blogs/10-troubling-excuses-for-using-oozie-with-hadoop-workflows/

Now is as good a time as any to think about what you're going to do differently in 2018 to make it easier to keep up with the demands of big data. A great place to start is to look closely at the tools you're using for working with Hadoop workflows. Let's face it — if you're using Oozie, you're relying on older technology with limitations and inconsistencies that can slow you down. Plus, there's a much more effective alternative that enables you to automate big data workflows faster and easier — Control-M for Hadoop.

To validate our confidence in this product, we put Control-M to the test. We asked an independent company that specializes in big data to explore the functional differences between Oozie and Control-M for Hadoop. Spoiler alert: Control-M took the lead across the board, as described in this summary of their analysis.

“Oozier” isn’t easier

If you’ve been struggling with using Oozie with Hadoop workflows, or if you’re just starting a big data project, you’ll discover why these experts determined that Control-M provided a better, faster, and easier way for creating, testing, deploying and managing Hadoop-based workflows. Keep in mind that these testers came to this conclusion even though they had never used Control-M before but had extensive experience with Oozie.

  • After building a user-monitoring mobile application with both Oozie and Control-M, the testers developed workflows 40 percent faster with Control-M.
  • They also quickly discovered that Control-M provided stronger security and more out-of-the-box features for avoiding disruption to business activity and increasing business value.

Read this summary for a side-by-side comparison. Specific testing included building workflows; scheduling, managing and updating jobs; conducting imports and file transfers; and evaluating security.

What excuses are holding you back from making a switch?

While some enterprises are familiar with Oozie, their teams may not realize all of the benefits of Control-M. For example, if you agree with the statements below, then go ahead and stick with Oozie. But if you don’t, then consider Control-M.

  1. The push to innovate is overrated – speed couldn’t possibly matter that much in developing workflows.
  2. Why even bother being concerned about scalability?
  3. Oozie may not meet most of my operational needs, but at least it's free.
  4. My team likes the extreme challenge of managing Hadoop workflows from multiple interfaces.
  5. Dealing with never-ending software bugs makes work exciting, even though it’s time-consuming and frustrating.
  6. Why leverage automation when I can spend countless hours writing scripts?
  7. Let someone else worry about file transfer security – it's not my job.
  8. If the results of scheduling big data workflows are not accurate the first time, well, maybe they’ll be accurate later on.
  9. I’ve been using Oozie for many years. I don’t have the time to even think about switching to something that’s so much better.
  10. Digital business activity can wait. I’m too busy.

Get the facts

Want to learn more about the differences between Control-M and Oozie for managing Hadoop workflows?
Read the summary and see for yourself.
Access the white paper that compares the tools.

]]>
Keep Machine Learning Teams Focused on Data Science, Not Data Processing https://www.bmc.com/blogs/keep-machine-learning-teams-focused-on-data-science-not-data-processing/ Tue, 14 Nov 2017 02:42:55 +0000 http://www.bmc.com/blogs/?p=11462 Machine learning capabilities, use cases, and enterprise demand are exploding. The talent base to fulfill this demand is not. Talent shortages are a real barrier to machine learning initiatives. Organizations need to create an environment where their machine learning teams can focus on data science, not data entry and management. Automation is the key, but […]]]>

Machine learning capabilities, use cases, and enterprise demand are exploding. The talent base to fulfill this demand is not. Talent shortages are a real barrier to machine learning initiatives. Organizations need to create an environment where their machine learning teams can focus on data science, not data entry and management. Automation is the key, but it isn’t a skeleton key – no single set of automation tools unlocks successful machine learning development. This article presents guidance on selecting the right automation approach so organizations can keep their machine learning talent working on data science, not data processing.

Automation can be applied at every stage of machine learning development and execution. That's good news, but it also highlights a problem with machine learning automation tools – they tend to be stage-specific and don't integrate well with one another. The more stages of the development-test-deploy-execute lifecycle a toolset can address, the faster your teams can deliver production-ready innovations. Here's a look at some of the challenges and tools for different stages of machine learning development and execution.

There are several common challenges to developing machine learning models. Organizations need to work with large volumes of disparate data sources, both structured and unstructured. This requires frequent data transfers, which are often time sensitive. Without automation, basic extract, transform and load (ETL) operations often account for about 70 percent of the time and effort needed to establish a data warehouse – those tasks are not a good use of specialist machine learning resources. Scripting provides a semi-automated way to manage data, but scripts take time to develop and maintain. Again, it is best for specialists to spend their time on data science, not data processing.
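
As a simple illustration of the scripting burden described above, here is a hand-rolled nightly transfer-and-load script in Python – the kind of thing that works on day one but has to be maintained by hand as sources, formats, and failure modes change. The paths, table name, and two-column CSV layout are hypothetical.

    import csv
    import shutil
    import sqlite3

    def extract(source_path, staging_path):
        # transfer step: copy the nightly export into a staging area
        shutil.copy(source_path, staging_path)

    def transform(staging_path):
        # minimal cleanup: keep only rows where every field is populated
        with open(staging_path, newline="") as f:
            return [row for row in csv.reader(f) if row and all(row)]

    def load(rows, db_path="warehouse.db"):
        # assumes a simple two-column (timestamp, value) layout
        con = sqlite3.connect(db_path)
        con.execute("CREATE TABLE IF NOT EXISTS events (ts TEXT, value TEXT)")
        con.executemany("INSERT INTO events VALUES (?, ?)", rows)
        con.commit()
        con.close()

    def nightly_job():
        extract("/exports/events.csv", "/staging/events.csv")  # hypothetical paths
        load(transform("/staging/events.csv"))

Multiply that by every feed, every environment, and every failure scenario, and it's easy to see why ETL scripting consumes so much specialist time.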

The heart of machine learning development is creating test modules. This requires developing the modules themselves plus the workflows that will allow them to execute. Organizations have traditionally used different toolsets, but now developers can create workflows using the same development tools used to create the modules.
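
As a rough, toolset-agnostic sketch of that idea, the workflow below is just ordinary Python that lives alongside the module code, so it can be versioned, reviewed, and tested with the same development tools. The function names and steps are hypothetical stand-ins.

    def prepare_training_data(path):
        # load and lightly clean the raw data
        with open(path) as f:
            return [line.strip() for line in f if line.strip()]

    def train_module(samples):
        # stand-in for real model training
        return {"trained_on": len(samples)}

    def evaluate_module(model):
        # stand-in for a test/evaluation step
        return model["trained_on"] > 0

    def workflow(path):
        # the workflow itself is ordinary code in the same codebase
        samples = prepare_training_data(path)
        model = train_module(samples)
        if not evaluate_module(model):
            raise RuntimeError("module failed evaluation")
        return model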

Testing follows, and has typically required a lot of manual configuration for the test environment, then debugging and rework of the module itself. These tasks can also be automated. Once testing and debugging are complete it’s time to promote the module to the production environment, which usually requires a fair amount of configuration time using another set of tools.

The process above is typical and may need to be repeated several times if the tools used are specific to data types or production environments (e.g., cloud, mainframe).

There are some tools to help, like Azure Machine Learning, Facebook's FBLearner Flow, and Amazon Machine Learning. These tools make it easier to get started with machine learning because they simplify the processes for creating models. However, they do not seriously address the underlying data pipelines that feed machine learning modules. Those pipelines remain complex and often involve coordinating activity among mainframe, distributed systems, and other systems of record. Unless tools can provide automation that addresses the entire data pipeline for machine learning – ingestion, preparation, and processing – scalability will be limited.
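
To illustrate what addressing the entire pipeline means in practice, here is a deliberately simplified, product-agnostic sketch in which ingestion, preparation, and processing are expressed as one pipeline definition even though the steps target different environments. The step names and environment tags are hypothetical.

    # One definition covers ingestion, preparation, and processing, even though
    # the steps target different environments.
    pipeline = [
        {"name": "ingest_mainframe_extract", "environment": "mainframe"},
        {"name": "prepare_features",         "environment": "hadoop"},
        {"name": "score_model",              "environment": "cloud"},
    ]

    def run_step(step):
        # a real orchestrator would dispatch this to the right agent;
        # here the step is simply simulated as succeeding
        print(f"running {step['name']} on {step['environment']}")
        return True

    def run_pipeline(steps):
        for step in steps:
            if not run_step(step):
                raise RuntimeError(f"{step['name']} failed; downstream steps held")
        print("pipeline complete")

    run_pipeline(pipeline)

When each environment has its own tool instead, that single definition splinters into several, and the silo problem described below begins.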

There are automation tools available for most steps in the process, yet the risk of creating silos arises when different tools are used for different environments. Silos can lead to duplication of effort and limited scalability. For example, how well will a tool developed for a public cloud service work for an enterprise mainframe, where key workflows and much of the data that power machine learning reside? Silos are a growing concern as hybrid IT architectures become more popular.

It clearly takes a lot of plumbing to keep machine learning modules and insights flowing. If machine learning teams are doing the plumbing, they’re not working on fine-tuning the modules and delivering insights.

The way forward

Some development tools and environments are much easier to integrate than others, and the options also vary considerably in how much of the lifecycle they address. When selecting tools for machine learning, place a high value on their ability to integrate and automate. Integration and automation save time, which means you can deploy more machine learning modules in less time. Here are some questions to ask when assessing tool options:

  • How are data transfers and file transfers managed? Is a separate solution required?
  • What happens if data flow to the machine learning module is interrupted? Once resolved, does it have to restart from the beginning, or can it resume where it left off? (See the checkpointing sketch after this list.)
  • Does the component/solution require staff to learn a new environment, or can they work in what they’re already familiar with?
  • Does it support the technologies and environments you’re using, such as Spark, Hive, Scala, R, Docker, microservices, etc.?
  • Can the machine learning workloads be scheduled and managed with other enterprise workloads, or is a separate scheduler required?
  • What is the process for promoting machine learning modules to production?
  • Can execution be monitored through your current workload automation environment, or is a separate system required? If so, who will monitor it – the data science team or operations?
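
On the second question in the list above, the difference between restarting from the beginning and resuming where the flow left off comes down to checkpointing. Here is a minimal, product-agnostic sketch of that behavior in Python; the checkpoint file and batch layout are hypothetical.

    import json
    import os

    CHECKPOINT_FILE = "ingest_checkpoint.json"   # hypothetical location

    def load_checkpoint():
        if os.path.exists(CHECKPOINT_FILE):
            with open(CHECKPOINT_FILE) as f:
                return json.load(f).get("last_batch", -1)
        return -1

    def save_checkpoint(batch_id):
        with open(CHECKPOINT_FILE, "w") as f:
            json.dump({"last_batch": batch_id}, f)

    def feed_to_model(records):
        # placeholder for handing a batch to the machine learning pipeline
        print(f"processing {len(records)} records")

    def ingest(batches):
        last_done = load_checkpoint()
        for batch_id, records in enumerate(batches):
            if batch_id <= last_done:
                continue                  # already ingested before the interruption
            feed_to_model(records)
            save_checkpoint(batch_id)     # durable progress marker enables resume

A tool that handles this for you saves data scientists from writing and maintaining recovery logic like this themselves.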

Carefully consider these questions so you can prevent your data scientists from becoming data processors.

Data scientist has been ranked the top job in the U.S. for two years running [1]. It's a tough job to do, and a tough job to fill. If you're fortunate enough to have the talent, you need to give them the right tools to do the job.

[1] Glassdoor, "Glassdoor Reveals the 50 Best Jobs in America for 2017," January 24, 2017. http://www.prnewswire.com/news-releases/glassdoor-reveals-the-50-best-jobs-in-america-for-2017-300395188.html

]]>