Mainframe MLC, Jobs/STCs, and Neapolitan Ice Cream – Part 3: Scoop and Serve

BY

Jeremy Hamilton

This is a special 3-part blog series focused on helping you better understand the best practices for identifying key drivers of your mainframe costs, using the most efficient technology available today.

In part 1, I used the analogy of “Neapolitan ice cream” to describe how to look at the workloads running in the peak 4-hour rolling average. In part 2, I examined how to determine what jobs and STCs were running in the peak 4-hour rolling average and how to classify them into a “Neapolitan” flavor. In this final blog in the series, I will discuss how to “scoop out” workloads to save on your MLC charges.

Classifying your Jobs/STCs into one of the flavors helps to frame the discussion about what can be “scooped out”, i.e. where to look for cost savings. With the cost information provided by BMC Cost Analyzer, you can look at the Jobs/STCs from a business standpoint, which helps you make the business case to the technical staff.

Cost Analyzer also provides “what-if” analysis to put a value estimate on a Job/STC rescheduling effort before a decision is made, saving time and resources. With the Cost Analyzer Planning tool you can model deleting, scaling, rescheduling, and moving Jobs/STCs to a different LPAR.

[Image: Cost Analyzer planning view]

Cost Analyzer will then calculate what the cost savings would be for your environment if the action(s) were to be enacted:

[Image: Cost Analyzer projected cost savings]

Since Cost Analyzer uses your own SMF records, the cost savings model gives an accurate estimate of the effects of the change.
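To make the “what-if” idea concrete, here is a minimal Python sketch of the underlying arithmetic: recompute the peak rolling 4-hour average (R4HA) after a hypothetical job is taken out of the mix. The job names, MSU figures, and the calculation itself are illustrative assumptions, not Cost Analyzer’s actual model; real input would come from your SMF data.

```python
# Conceptual sketch (not Cost Analyzer itself): estimate how removing or
# rescheduling a job changes the peak rolling 4-hour average (R4HA).
# Hourly MSU figures per job are assumed inputs here.

def r4ha_peak(hourly_msu):
    """Return the peak 4-hour rolling average over a list of hourly MSU totals."""
    return max(sum(hourly_msu[i:i + 4]) / 4 for i in range(len(hourly_msu) - 3))

def total_by_hour(jobs, exclude=()):
    """Sum MSU per hour across all jobs, optionally excluding some jobs."""
    hours = len(next(iter(jobs.values())))
    return [sum(msu[h] for name, msu in jobs.items() if name not in exclude)
            for h in range(hours)]

# Hypothetical 12-hour MSU profile for three jobs (illustrative numbers only)
jobs = {
    "CICSPROD": [50, 55, 60, 65, 70, 72, 71, 68, 60, 55, 50, 48],
    "BATCH01":  [ 0,  0,  0,  0, 30, 35, 40, 38,  0,  0,  0,  0],
    "DB2PROD":  [40, 42, 45, 48, 52, 55, 54, 50, 45, 42, 40, 39],
}

baseline = r4ha_peak(total_by_hour(jobs))
what_if  = r4ha_peak(total_by_hour(jobs, exclude={"BATCH01"}))
print(f"Baseline peak R4HA: {baseline:.1f} MSU")
print(f"Peak R4HA with BATCH01 moved off-peak: {what_if:.1f} MSU")
```

The difference between the two numbers is the kind of figure a planning exercise puts in front of the business before anyone commits to a rescheduling effort.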

Food for Thought

There is nothing worse than thinking you have a generous scoop of ice cream only to discover your bowl is empty. The various R4HA peaks across the month are extremely important to pay attention to, and are perhaps one of the most important elements of your cost savings efforts. At any point in the month you will have a variety of workloads running, which gives you a variety of options to choose from. Too often only the first R4HA peak is studied; it may contain Jobs/STCs that are not found in the second and third peaks, and that starts a game of “whack-a-mole” in which one peak is brought down only for another to emerge.
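The sketch below, again with made-up job names and MSU numbers, shows one way to sanity-check for whack-a-mole: rank the top R4HA windows for the period and compare which jobs drive each of them. A job that only shows up in one window is a riskier target than one that contributes to all of them.

```python
# Sketch with hypothetical data: which jobs drive each of the top R4HA windows?
# Real input would be interval-level consumption from SMF.

jobs = {
    "CICSPROD": [60, 65, 70, 72, 68, 60, 55, 58, 62, 66, 71, 69],
    "BATCH01":  [ 0, 30, 40, 38,  0,  0,  0,  0,  0, 25, 35, 30],
    "DB2PROD":  [45, 48, 52, 55, 50, 45, 44, 46, 49, 53, 56, 52],
}
WINDOW = 4          # hours in the rolling average
DRIVER_MSU = 10.0   # hypothetical cutoff for calling a job a "driver"

hours = len(next(iter(jobs.values())))
totals = [sum(msu[h] for msu in jobs.values()) for h in range(hours)]

# Rank every 4-hour window by its rolling average, highest first.
windows = sorted(
    ((sum(totals[i:i + WINDOW]) / WINDOW, i) for i in range(hours - WINDOW + 1)),
    reverse=True,
)

for rank, (avg, start) in enumerate(windows[:3], 1):
    drivers = sorted(name for name, msu in jobs.items()
                     if sum(msu[start:start + WINDOW]) / WINDOW >= DRIVER_MSU)
    print(f"Peak #{rank}: hours {start}-{start + WINDOW - 1}, "
          f"avg {avg:.1f} MSU, drivers: {drivers}")
```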

Portion size is always important when it comes to ice cream, and to what you pay for MLC. WLM capping provides a way to limit the number of MSUs consumed during a given R4HA interval. A cap is the limit on the number of MSUs an LPAR can use; when the cap is reached, low-importance workloads are “deferred” in favor of higher-importance work. When more than one LPAR is involved, BMC’s Intelligent Capping for zEnterprise (iCap) will intelligently and automatically adjust the respective LPARs’ caps so that you get the most out of every MSU you are billed for.
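Here is a deliberately simplified Python illustration of the soft-capping idea: when running everything this hour would push the projected R4HA over the cap, the discretionary work is deferred. The cap value, workload figures, and logic are assumptions for illustration only, not how WLM or iCap is actually implemented.

```python
# Conceptual model of soft capping: defer low-importance work whenever the
# projected rolling 4-hour average would exceed the defined cap.

CAP_MSU = 150   # hypothetical defined-capacity limit for the LPAR
WINDOW = 4      # R4HA window, in hours

high_demand = [90, 95, 100, 110, 120, 115, 100, 95]   # importance 1-2 work
low_demand  = [40, 45,  50,  55,  60,  55,  45, 40]   # discretionary work

consumed = []
for hour, (high, low) in enumerate(zip(high_demand, low_demand)):
    window = consumed[-(WINDOW - 1):]
    # Would running all work this hour push the R4HA over the cap?
    if (sum(window) + high + low) / (len(window) + 1) > CAP_MSU:
        this_hour, deferred = high, low          # defer the low-importance work
    else:
        this_hour, deferred = high + low, 0
    consumed.append(this_hour)
    print(f"Hour {hour}: used {this_hour} MSU, deferred {deferred} MSU")
```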

When you are ready for a serious diet, Subsystem Optimizer for zEnterprise (Subzero) is the answer to drive down your MLC spend. Subzero provides the ability to decouple subsystems that previously had to run on the same LPAR. For example, in the past, if you had an application in which CICS made data requests to DB2, both subsystems had to be up and running on the same LPAR. With Subzero, CICS can be isolated from DB2 on separate LPARs. This capability brings your MLC bill for the specific subsystems closer to a utilization-based cost.
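A rough back-of-envelope sketch of why separation matters, under the simplifying assumption that a product’s charge follows the peak R4HA of every LPAR it runs on (the numbers are hypothetical and the model is intentionally coarse):

```python
# Back-of-envelope sketch of why subsystem isolation can lower MLC exposure.
# Simplifying assumption: a product's charge follows the peak R4HA of every
# LPAR where that product runs.

def r4ha_peak(hourly):
    """Peak 4-hour rolling average of an hourly MSU series."""
    return max(sum(hourly[i:i + 4]) / 4 for i in range(len(hourly) - 3))

cics_msu = [80, 85, 90, 95, 100, 95, 85, 80]   # CICS-driven consumption
db2_msu  = [60, 62, 65, 70,  75, 72, 65, 60]   # DB2-driven consumption

# Co-located: both products are exposed to the combined LPAR peak.
combined_peak = r4ha_peak([c + d for c, d in zip(cics_msu, db2_msu)])
colocated = {"CICS": combined_peak, "DB2": combined_peak}

# Isolated on separate LPARs: each product sees only its own peak.
isolated = {"CICS": r4ha_peak(cics_msu), "DB2": r4ha_peak(db2_msu)}

print("Co-located MSU exposure:", colocated)
print("Isolated MSU exposure:  ", isolated)
```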

Just like everyone has a favorite flavor, every environment has a best fit cost savings solution. The goal is to have a wide array of options when it is time to make the final decision and ample choices to find the best (and most realistic) fit of cost savings and technical viability. Now scoop up and indulge in MLC savings!


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.



Jeremy Hamilton

Jeremy Hamilton is a technical software consultant for BMC’s Mainframe Cost Optimization Suite (R4). He has over 9 years’ experience in the software industry, primarily in technical sales roles. He started his career as a physicist for IBM Research before moving into the mainframe software world. At IBM he worked with many large corporations and government agencies, and wrote three IBM Redbooks on the IBM Application Development Tools. In his current role, he works with customers to educate, consult, and implement the R4 solutions, as well as drive enhancements for the products. Jeremy has a Master’s in Information Systems from Santa Clara University and is an American Indian Science and Engineering Society (AISES) Sequoyah Fellow.