Workload Automation Blog – BMC Blogs
BMC Software (Fri, 20 Oct 2017 15:42:24 +0000)

Why Control-M Customers Love Jobs-as-Code (Wed, 18 Oct 2017 09:04:50 +0000)
As DevOps becomes the standard for how enterprises deliver high-quality applications faster, our traditional customer is evolving right along with it. Operations teams are downright excited to get their development teams involved to code jobs further upstream in the development lifecycle. These DevOps-enabled teams know that in order to reduce production outages […]

Using Apache Pig and Hadoop with ElasticSearch with the Elasticsearch-Hadoop Connector (Wed, 11 Oct 2017 09:10:20 +0000)
Here we show how to retrieve data from ElasticSearch using Apache Pig. The reason for doing so is that Pig is much easier to use than Java, Scala, and other tools for doing data extraction and transformation with ElasticSearch. (You can read our introduction to Apache Pig here.) You can also construct complex queries and sets using […]

Using ElasticSearch with Apache Spark (Wed, 11 Oct 2017 08:30:53 +0000)
ElasticSearch is a JSON database popular with log processing systems. For example, organizations often use ElasticSearch with Logstash or Filebeat to send web server logs, Windows events, Linux syslogs, and other data there. Then they use the Kibana web interface to query log events. All of this is important for cybersecurity, operations, etc. Now, since […]

Three Things to Consider When Considering a Digital Business Automation Solution (Mon, 09 Oct 2017 14:00:04 +0000)
"Why Control-M?" is a question I often hear people ask when researching digital business automation solutions. I'm tempted to answer with a long list of features and facts that demonstrate why Control-M is superior in features, quality, price, customer support, and durability. However, features change quickly. To quote Kano, "What's exciting today will be asked […]

What is Batch Processing?
Batch Processing Explained (Wed, 20 Sep 2017 11:50:37 +0000)
Simply put, batch processing is the process by which a computer completes batches of jobs, often simultaneously, in non-stop, sequential order. It's also a command that ensures large jobs are computed in small parts for efficiency during the debugging process. This command goes by many names, including Workload Automation (WLA) and Job Scheduling. Like most […]

Flip the Switch: Making the Case for Continuous Delivery (Wed, 20 Sep 2017 09:15:06 +0000)
Making the case for DevOps and Continuous Delivery (CD) can be tricky for some and downright frustrating for others. While the efficiency and cost savings realized by implementing Continuous Delivery are monumental, the cultural shift it asks of the enterprise should not be taken lightly. More than anything else, making the case requires gaining buy-in at every level of your […]

Learn How to Automate Data Pipelines Across a Hybrid Environment (Tue, 19 Sep 2017 13:40:07 +0000)
Walk through a company's journey to automate data pipelines across a hybrid environment with Basil Faruqui and Jon Ouimet of BMC Software at the Strata Data Conference in New York City. During their technical session, Basil and Jon will discuss the realities of building, running, and managing complex data pipelines across a dynamic infrastructure. Today, […]

How to Stop Worrying and Learn to Love Docker Containers (Mon, 18 Sep 2017 09:00:20 +0000)
Developers are discovering the power of using containers to package applications quickly and easily. This trend is driven by the need for speed and agility, as containers enable developers to rapidly release new applications and functions by taking a DevOps approach. While most enterprise organizations are using traditional environments for development, they are also gradually […]

Using Spark with Hive (Fri, 15 Sep 2017 11:00:12 +0000)
Here we explain how to use Apache Spark with Hive. That means that instead of Hive storing data in Hadoop, it stores it in Spark.
The reason people use Spark instead of Hadoop MapReduce is that Spark does its processing in memory, so Hive jobs will run much faster there. Plus it moves programmers toward using a common database […]

Using Hive Advanced User Defined Functions with Generic and Complex Data Types (Thu, 07 Sep 2017 14:00:15 +0000)
Previously we wrote about how to write user-defined functions that can be called from Hive. You can write these in Java or Scala. (Python does not work for UDFs per se; instead, you can use Python scripts with the Hive TRANSFORM operation.) Classes that extend org.apache.hadoop.hive.ql.exec.UDF handle primitive data types, i.e., int, string, etc. If […]
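Since the excerpt above notes that Python scripts plug into Hive through the TRANSFORM operation rather than by extending the UDF class, here is a minimal sketch of what such a TRANSFORM script might look like. This is an illustration, not code from the post; the table and column names (`some_table`, `id`, `city`) and the script name are made up for the example. Hive streams each input row to the script's stdin as a tab-separated line and reads tab-separated result rows back from stdout.

```python
#!/usr/bin/env python3
# Hypothetical Hive TRANSFORM script (upper_city.py). A sketch only --
# the table and column names below are invented for illustration:
#
#   ADD FILE upper_city.py;
#   SELECT TRANSFORM (id, city)
#          USING 'python3 upper_city.py'
#          AS (id, city_upper)
#   FROM some_table;
import sys


def transform_row(line):
    # Hive sends one row per line, fields separated by tabs.
    # Upper-case the second field and emit a tab-separated output row.
    fields = line.rstrip("\n").split("\t")
    row_id, city = fields[0], fields[1]
    return "\t".join([row_id, city.upper()])


if __name__ == "__main__":
    # When run under TRANSFORM, rows arrive on stdin until EOF.
    for line in sys.stdin:
        print(transform_row(line))
```

Run outside Hive, the script behaves like any Unix filter, so it can be tested from the shell with `echo -e '1\tparis' | python3 upper_city.py`.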