Mainframe Blog

Mainframe MLC, Jobs/STCs, and Neapolitan Ice Cream – Part 2: What Drives Peak and Monthly Cost?

Jeremy Hamilton
4 minute read

This is a special three-part blog series focused on helping you understand best practices for identifying the key drivers of your mainframe costs, using the most efficient technology available today.


What is Running in the Peak R4HA?

Every mainframe environment contains a wide array of applications, dependencies, and SLAs. Most mainframe shops have some type of job scheduler, which enables them to schedule, and in some cases monitor, batch jobs to accomplish their technical goals. This capability tells shops when a job should be executed, but it gives very little insight into how the combined workload profiles add up over the course of an MLC billing cycle. Additionally, throughout a normal day, operators submit ad hoc workloads that can add to the MSU count.

Reminder from Part 1 of this blog series:

  • Strawberry – least likely to be moved, very little value
  • Vanilla – possible but not the first choice to move, somewhat valuable
  • Chocolate – easiest to move, most valuable

How Much are the Jobs/STCs Contributing to the Bottom Line?

Every job and started task executed on the mainframe creates SMF records, which can be used for accounting, timestamps, configuration analysis, system resource utilization, and other practices. The statistics provided by these records produce a workload profile for Jobs/STCs at each one-hour interval in a month, which cumulatively gives an MSU count. IBM uses the Sub-Capacity Reporting Tool (SCRT) to collect the highest R4HA on each LPAR and the subsystems running on each LPAR, and then calculates the MLC charges based on the combined peak R4HA for each subsystem. The SCRT report is simply a billing tool for IBM and provides very little insight into how much individual workloads, or Jobs/STCs, contribute to the MLC cost. Most customers simply receive a bill at the end of each month with no true insight into what their cost drivers were.
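To illustrate the mechanics behind the peak number, here is a minimal sketch of how a peak rolling four-hour average could be computed from hourly MSU samples. The function and sample data are hypothetical; the real SCRT works from SMF records at finer granularity, but the rolling-average idea is the same.

```python
def peak_r4ha(hourly_msus):
    """Return the peak rolling four-hour average (R4HA) from hourly MSU samples.

    A simplified sketch, not the actual SCRT algorithm.
    """
    window = 4
    if len(hourly_msus) < window:
        raise ValueError("need at least four hourly samples")
    # Average every consecutive four-hour window, then take the maximum.
    averages = [
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    ]
    return max(averages)

# A quiet morning followed by a mid-day batch spike (hypothetical MSUs):
samples = [100, 120, 110, 300, 450, 420, 380, 150]
print(peak_r4ha(samples))  # → 387.5, driven by the window covering the spike
```

Notice that the peak is set by the busiest four-hour window as a whole, which is why moving even one heavy job out of that window can lower the bill.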

Cost Analyzer for zEnterprise (CAzE)

Cost Analyzer for zEnterprise (CAzE) answers both of those questions, and it also evaluates the potential cost savings from MLC reduction efforts. CAzE uses a similar mixture of SMF records to the SCRT to generate a graphical representation of the workloads contributing to the R4HA. Starting with the Monthly Summary report, which shows the monthly breakdown of MLC cost by subsystem and CPC, you can then “drill down” to determine what each LPAR, each workload, and finally each Job/STC contributes to the R4HA at any given time.

From the workload level you can see how many MSUs each workload contributes to the R4HA. There are various workload views to choose from; in this example I am showing workloads by importance. From this view you can modify the graph to show combinations of individual workload levels and LPARs.


You can select all or individual workloads to view the Jobs/STCs which make up the workload(s) at a given time:


The Job/STC view clarifies what is running in the R4HA peak:

  • Job/STC name, location, workload classification
  • The MSUs contributed by the Job/STC to the R4HA (hourly and total)
  • When the Job/STC started

Perhaps most importantly, this view attaches a cost value showing what the Job/STC contributes to the overall MLC bill.
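As a rough illustration of how a cost value can be attached to a single Job/STC, one back-of-envelope approach is to pro-rate the monthly MLC bill by the job's MSU share of the peak R4HA. The function and figures below are hypothetical, not CAzE's actual costing model:

```python
def job_cost_share(job_msus_at_peak, peak_r4ha_msus, monthly_mlc_cost):
    """Pro-rate the monthly MLC bill to one Job/STC by its MSU share
    of the peak R4HA. A back-of-envelope sketch with made-up numbers."""
    share = job_msus_at_peak / peak_r4ha_msus
    return monthly_mlc_cost * share

# A job contributing 50 MSUs to a 500-MSU peak on a $120,000/month bill:
print(job_cost_share(50, 500, 120_000))  # → 12000.0 (10% of the bill)
```

Even a crude attribution like this makes it obvious which Jobs/STCs are worth the effort of rescheduling.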

Now that we know what is running in the peak, and how much it is costing, let’s serve some ice cream!

CHOCOLATE = Highest reward, non-essential, easy to reschedule

Chocolate, to some, is the tastiest flavor; in our analogy, chocolate represents the Jobs/STCs that provide minimal value by running during the peak R4HA: they come from a low-importance workload, are easily rescheduled out of the peak, and/or carry a high cost. Place Jobs/STCs in this category that should be examined for a better scheduling time:

  • Expensive relative to the other workloads, low importance
  • Have an SLA that is not critical in the timeframe given
  • Easy to reschedule outside of the peak R4HA

VANILLA = Valuable, some dependencies, flexible SLA

The vanilla flavor was usually only half scooped out in my household, so this represents the Jobs/STCs that could be moved, but would not be the first choice. Perhaps they have minor dependencies that would need to be thought through, an SLA with a flexible window, or they are not contributing much to the R4HA. The cost savings from moving them would need to be clear and justifiable:

  • From a mid importance workload classification
  • Have an SLA that is not critical, but has a specific timeframe
  • Dependencies which would require additional workload moves

STRAWBERRY = High importance, strict SLA, complex to reschedule

Strawberry represents the Jobs/STCs least likely to be moved out of the peak R4HA timeframe: they have a specific SLA, dependencies that would be too difficult to move, or another business reason that prevents rescheduling. The business cost associated with these Jobs/STCs outweighs the cost savings from moving them:

  • From a high importance workload classification
  • Have an SLA that is critical to a specific timeframe
  • Dependencies which would require complex workload moves
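The three-flavor triage above can be sketched as a simple rule set. The field names and thresholds here are hypothetical, purely to show the shape of the decision, not how any real tool classifies work:

```python
def flavor(job):
    """Classify a Job/STC running in the peak R4HA into the ice-cream triage.

    `job` is a dict with hypothetical fields: 'importance' (1 = highest),
    'sla_critical' (bool), and 'easy_to_move' (bool).
    """
    if job["sla_critical"] or job["importance"] <= 1:
        return "strawberry"  # strict SLA / high importance: leave in place
    if job["easy_to_move"] and job["importance"] >= 4:
        return "chocolate"   # low importance, easy reschedule: move first
    return "vanilla"         # movable, but savings must justify the effort

batch_report = {"importance": 5, "sla_critical": False, "easy_to_move": True}
print(flavor(batch_report))  # → chocolate
```

In practice the triage also weighs the dollar figure each Job/STC contributes to the peak, so a rule set like this would be one input among several.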

Read more in Part 3: Scoop and Serve.


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.



About the author

Jeremy Hamilton

Jeremy has over 12 years of mainframe technical experience, with 5 of those focusing specifically on Mainframe Cost Optimization. He has served as a technical resource, Product Manager, and now as a Product Account Executive at BMC Software. He regularly speaks on MLC topics for customers, at SHARE and in IBM Systems Magazine webinars.