Workload Automation Blog

Workload automation and job scheduling have become critical as application and IT processing requirements have expanded. Automation for workloads across a broad spectrum of operating systems, applications, databases, and dependencies is now fundamental to on-time service delivery. Learn about Workload Automation at BMC or explore BMC's Introduction to Hadoop.

Why Control-M Customers Love Jobs-as-Code

As DevOps becomes the standard for how enterprises deliver high-quality applications faster, our traditional customer is evolving right along with it. Operations teams are downright excited to get their development teams involved in coding jobs further upstream in the development lifecycle. These DevOps-enabled teams know that in order to … [Read more...]
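The jobs-as-code idea is that job definitions live in version control as plain data alongside application code. A minimal sketch in Python: the JSON shape loosely follows the style of Control-M's Automation API, but the folder, job, command, and host names here are all hypothetical.

```python
import json

# A minimal jobs-as-code sketch: a scheduler job definition kept as plain
# JSON so it can be versioned, reviewed, and tested like application code.
# Folder, job, and host names are hypothetical.
job_definition = {
    "DemoFolder": {
        "Type": "Folder",
        "NightlyBuild": {
            "Type": "Job:Command",
            "Command": "make release",
            "RunAs": "builduser",
            "Host": "build-server-01",
        },
    }
}

def to_payload(definition: dict) -> str:
    """Serialize a job definition for submission through an automation API."""
    return json.dumps(definition, indent=2)

print(to_payload(job_definition))
```

Because the definition is ordinary JSON, the same file can be linted, diffed in pull requests, and promoted through environments like any other source artifact.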

Using ElasticSearch with Apache Spark

ElasticSearch is a JSON document store and search engine popular in log processing systems. For example, organizations often use ElasticSearch with Logstash or Filebeat to ship web server logs, Windows events, Linux syslogs, and other data into it. Then they use the Kibana web interface to query log events. All of this is important for cybersecurity, operations, … [Read more...]
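Querying log events in ElasticSearch means sending a JSON query-DSL body, which is what Kibana builds behind the scenes. A small sketch of constructing such a body in Python; the field names (`level`, `@timestamp`) are assumptions about the log schema, not part of the original post.

```python
import json

def build_log_query(level: str, start: str, end: str) -> dict:
    """Build an Elasticsearch query-DSL body that filters log events by
    severity level within a time range. Field names ("level",
    "@timestamp") are assumed; real mappings depend on your ingest setup."""
    return {
        "query": {
            "bool": {
                "filter": [
                    {"term": {"level": level}},
                    {"range": {"@timestamp": {"gte": start, "lte": end}}},
                ]
            }
        }
    }

body = build_log_query("error", "2019-01-01", "2019-01-02")
print(json.dumps(body, indent=2))
```

The same dictionary could be POSTed to an index's `_search` endpoint with any HTTP client once a cluster is available.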

What is Batch Processing? Batch Processing Explained

Simply put, Batch Processing is the method by which a computer completes groups (batches) of jobs without manual intervention, running them in a continuous, sequential order. It also allows large jobs to be broken into smaller parts, which improves efficiency and simplifies debugging. This practice goes by many names, including Workload Automation (WLA) and Job … [Read more...]
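The core idea above can be sketched in a few lines of Python: jobs queue up and run one after another with no user interaction. The job names and tasks here are hypothetical stand-ins for real workloads.

```python
from collections import deque

def run_batch(jobs):
    """Run queued jobs one after another with no user interaction,
    collecting each job's result -- the essence of batch processing."""
    queue = deque(jobs)
    results = []
    while queue:
        name, task = queue.popleft()
        results.append((name, task()))
    return results

# Hypothetical jobs: each is a (name, no-argument callable) pair.
batch = [
    ("payroll", lambda: "processed 120 records"),
    ("backup", lambda: "archived 3 tables"),
]
for name, outcome in run_batch(batch):
    print(f"{name}: {outcome}")
```

A real workload automation tool adds what this sketch omits: dependencies between jobs, scheduling calendars, retries, and alerting on failure.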

Flip the Switch: Making the Case for Continuous Delivery

Making the case for DevOps and Continuous Delivery (CD) can be tricky for some and downright frustrating for others. While the efficiency and cost-savings realized by implementing Continuous Delivery are monumental, the cultural enterprise shift should not be taken lightly. More than anything else, making the case requires gaining buy-in at … [Read more...]

Learn How to Automate Data Pipelines Across a Hybrid Environment

Walk through a company’s journey to automate data pipelines across a hybrid environment with Basil Faruqui and Jon Ouimet of BMC Software at the Strata Data Conference in New York City. During their technical session, Basil and Jon will discuss the realities of building, running, and managing complex data pipelines across a dynamic … [Read more...]

How to Stop Worrying and Learn to Love Docker Containers

Developers are discovering the power of using containers to package applications quickly and easily. This trend is driven by the need for speed and agility: containers enable developers to rapidly release new applications and features by taking a DevOps approach. While most enterprise organizations are using traditional environments for development, … [Read more...]

Using Spark with Hive

Here we explain how to use Apache Spark with Hive. That means instead of Hive running its queries through Hadoop MapReduce, Spark executes them. The reason people use Spark instead of MapReduce is that Spark processes data in memory, so Hive jobs run much faster on it. Plus it moves programmers toward a common platform if your company runs predominantly Spark. It … [Read more...]