Ingram Micro traces its roots back to technology distributor Micro D, Inc., which was founded in 1979. Since that time, our company has become the world's largest wholesale distributor of technology products, ranking #62 in the Fortune 500.
Ingram Micro offers solutions from 1,700 technology suppliers including major players such as Acer Inc., Apple, Cisco, HP, IBM, Lenovo, Microsoft, and Samsung through 122 distribution centers in 160 countries. It takes a world-class IT infrastructure to support a vast, global organization like ours. Our application portfolio encompasses systems for virtually every aspect of the business, from manufacturing and supply chain to human resources, accounting, and customer and partner support.
BMC Control-M is empowering us to optimize our batch processing in support of Ingram Micro’s commitment to helping businesses fully realize the promise of technology™. We started using Control-M 10 years ago in our Dallas, Texas data center. Over the years, BMC Software has continually enhanced the solution, keeping it at the leading edge of workload automation and giving us new opportunities for automation and growth.
Centralizing Processes that Keep the Business Running
Our IT environment includes a wide array of systems running on mainframe, Windows, and Unix platforms. We have more than 30,000 Control-M jobs totaling 900,000 executions each week to keep the business running smoothly. Some processes aren’t time critical, so completing them within a strict timeframe isn’t an issue. However, they still deliver important benefits. Reporting is a good example. Batch jobs generate reports that give managers visibility into what’s happening and provide sound insights that improve decision making.
Jobs related to such functions as supply chain, manufacturing, inventory, orders, and invoicing are mission critical, and we have to ensure that they are completed within the batch window. Control-M monitors those jobs for us and notifies us when issues arise. We’ve set up job names in Control-M in a way that speeds identification of critical applications, so we can set priorities with respect to response and resolution. We also have a list of the top ten most critical jobs for each application.
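To illustrate how a naming convention like this can drive prioritization, here is a small sketch. The scheme below (application code, then a priority token, then a function name) and both helper functions are hypothetical, invented for illustration; the article does not describe Ingram Micro's actual convention.

```python
# Hypothetical naming scheme: APP_Pn_FUNCTION, e.g. "ORD_P1_INVOICE"
# means application "ORD" with priority 1. Purely illustrative.

def parse_job_name(job_name: str) -> dict:
    """Split a job name into application, priority, and function."""
    app, priority, function = job_name.split("_", 2)
    return {
        "application": app,
        "priority": int(priority.lstrip("P")),
        "function": function,
    }

def top_critical(jobs: list, n: int = 10) -> list:
    """Return the n most critical jobs (lowest priority number first)."""
    return sorted(jobs, key=lambda j: parse_job_name(j)["priority"])[:n]

jobs = ["INV_P2_RESTOCK", "ORD_P1_INVOICE", "SCM_P3_REPORT"]
print(top_critical(jobs, n=2))  # ['ORD_P1_INVOICE', 'INV_P2_RESTOCK']
```

Encoding priority directly in the job name means an operator seeing an alert can judge urgency at a glance, without looking the job up elsewhere.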
The automatic notification capability is attracting a lot of attention and encouraging more and more application teams to automate their critical workloads using Control-M. In the past, these teams would use the scheduling tools native to the operating environments for their applications—mainframe, Windows, and Unix. Backups in particular were usually done with native tools. Moving those jobs into Control-M eliminates the need for people to stare at a console babysitting backup and other jobs. It also eliminates the scrambling to restart a job that failed while somebody was on a break, with the team under pressure to finish it before the batch window closes. Control-M lets these teams keep tabs on their jobs while freeing them up to focus on other responsibilities.
Control-M is highly scalable, so it can take on additional workloads and run them consistently and reliably. That’s enabling us to gradually centralize workload automation in a single tool and standardize the way we automate workloads across the environment, which is improving visibility and efficiency.
Optimizing Processes to Drive Efficiencies
Optimization is a key objective for the operations and scheduling team because we want to be sure that critical processes are running as efficiently as possible and that we minimize the risk that a critical job won’t get done on time. Control-M gives us visibility into what’s happening with batch processes so we can identify jobs that are creating problems. For example, we might see that one out of every 10 runs for a particular job doesn’t execute properly. Control-M captures useful information that the application team can use to compare successful runs with the failed runs, so they can get to root cause and remediation faster.
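The comparison step described above can be sketched as a simple field-by-field diff between run records. The record fields here (host, start time, return code) are assumptions for illustration, not actual Control-M output.

```python
# Illustrative sketch: find what differs between a successful run and a
# failed run of the same job. Field names are hypothetical.

def diff_runs(success: dict, failure: dict) -> dict:
    """Return the fields whose values differ between two run records,
    mapped to a (successful_value, failed_value) pair."""
    return {
        key: (success.get(key), failure.get(key))
        for key in set(success) | set(failure)
        if success.get(key) != failure.get(key)
    }

ok = {"host": "dal-app01", "start": "02:00", "rc": 0}
bad = {"host": "dal-app02", "start": "02:47", "rc": 8}
print(diff_runs(ok, bad))
```

Surfacing only the attributes that changed between a good run and a bad run narrows the search space for root cause, which is exactly the shortcut the captured run data enables.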
Our optimization efforts never end. We’re always pushing to make things better. Last year, for example, we coded approximately 24,000 changes. Next year we may set the target higher. Fortunately, regular enhancements from BMC make it easier for us to aim higher.
Workload Automation in Action
Control-M is doing a great job not only with our standard batch processes but also with special projects such as the migration of 3,000 jobs from the data center in Dallas to the one in Chicago. The process involved moving jobs to newer platforms and new technologies and the eventual decommissioning of older hardware in Dallas.
The migration required changes to code written into each job. Control-M provided a seamless way to mass update key attributes such as job names, server names, and other metadata. Without Control-M, it would have been a manual, job-by-job effort that would probably have taken an average of 10 minutes per job. That adds up to a lot of time when you’re modifying 3,000 jobs. Control-M gave us a quick way to make the changes and get the project completed. In addition, reporting provided insight into workloads and thresholds, so as we moved jobs from Dallas to Chicago we were able to ensure we didn’t exceed our batch windows.
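The kind of mass update described above can be sketched as a bulk rewrite over exported job definitions. The JSON layout and function below are a simplified stand-in for illustration only, not the actual Control-M export format or tooling.

```python
# Illustrative sketch of a bulk attribute update across job definitions,
# e.g. repointing jobs from a Dallas server to a Chicago one.
# The definition structure here is hypothetical.

def mass_update(definitions: dict, attr: str, old: str, new: str) -> int:
    """Replace old -> new for the given attribute on every job.
    Returns the number of jobs changed."""
    changed = 0
    for job in definitions.get("jobs", []):
        if job.get(attr) == old:
            job[attr] = new
            changed += 1
    return changed

defs = {
    "jobs": [
        {"name": "ORD_P1_INVOICE", "host": "dal-srv01"},
        {"name": "INV_P2_RESTOCK", "host": "dal-srv01"},
    ]
}
print(mass_update(defs, "host", "dal-srv01", "chi-srv01"))  # 2
```

Done programmatically, a change like this takes seconds per attribute across the whole estate, versus the roughly 10 minutes per job a manual edit would have cost.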
The Numbers Speak for Themselves
Control-M is continuing to deliver value by enabling us to optimize and automate activities to support the company’s steady growth in customers, suppliers, and revenues. We’ve grown from a small instance of Control-M running a few hundred jobs in Dallas to two data centers running more than 30,000 jobs a day with an amazing level of reliability and stability. In 2015 we didn’t experience a single critical outage and had zero downtime within our Control-M environment.
Could we get along without Control-M? We’d have to say no. When we look at how much work we push through our environment, we estimate that we’d need 30 production control people—or full-time equivalents, as you say—working seven days a week to do the amount of work that our team of five does with Control-M. It’s definitely earning its keep at Ingram Micro.