
Mainframe MLC, Jobs/STCs, and Neapolitan Ice Cream – Part 2: What Drives Peak and Monthly Cost?

by Jeremy Hamilton

This is a special 3-part blog series focused on helping you better understand the best practices for identifying key drivers of your mainframe costs, using the most efficient technology available today.

[Image: mainframe and ice cream]

What is Running in the Peak R4HA?

There is a wide array of applications, dependencies, and SLAs in every mainframe environment. Most mainframe shops have some type of job scheduler which enables them to schedule, and in some cases monitor, batch jobs to accomplish their technical goals. This capability gives shops an idea of when a job should be executed, but very little insight into how the combined workload profiles add up over the course of an MLC billing cycle. Additionally, ad hoc workloads submitted by operators throughout a normal day can add to the MSU count.
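To make that adding-up concrete, here is a minimal sketch (in Python, with hypothetical hourly figures) of how a rolling four-hour average, or R4HA, can be derived from hourly MSU measurements:

    # A minimal sketch: derive the rolling four-hour average (R4HA) from
    # hourly MSU samples. The figures here are hypothetical.
    hourly_msus = [310, 295, 420, 515, 480, 390, 350, 610]  # one value per hour

    def r4ha(samples):
        """Rolling four-hour average of MSU consumption at each hour."""
        averages = []
        for i in range(len(samples)):
            window = samples[max(0, i - 3):i + 1]  # this hour plus up to 3 prior
            averages.append(sum(window) / len(window))
        return averages

    rolling = r4ha(hourly_msus)
    peak = max(rolling)
    print(f"Peak R4HA: {peak:.0f} MSUs at hour {rolling.index(peak)}")

Notice that a burst of work can raise the rolling average for the next four hours, which is why a single poorly timed job can influence the billed peak.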

Reminder from Part 1 of this blog series:

  • Strawberry – least likely to be moved, very little value
  • Vanilla – possible but not the first choice to move, somewhat valuable
  • Chocolate – easiest to move, most valuable

How Much are the Jobs/STCs Contributing to the Bottom Line?

Every job and started task executed on the mainframe creates SMF records which can be used for accounting, timestamps, configuration analysis, system resource utilization, and other practices. The statistics provided by these records produce a workload profile for Jobs/STCs at each one-hour interval in a month, which cumulatively gives an MSU count. IBM uses the Sub-Capacity Reporting Tool (SCRT) to collect the highest R4HA on each LPAR and the subsystems that were running on each LPAR, and then calculates the MLC charges based on the combined peak R4HA for each subsystem respectively. The SCRT report is just a billing tool for IBM and provides very little insight into how much individual workloads, or Jobs/STCs, contribute to the MLC cost. Most customers just receive a bill at the end of each month with no true insight into what their cost drivers were.
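For illustration, here is a minimal sketch of the SCRT-style calculation described above: each subsystem is billed at the peak of the combined R4HA across the LPARs where it runs. The LPAR names, subsystem placement, and MSU figures are hypothetical:

    # Hourly R4HA per LPAR (MSUs); real data spans a whole billing month.
    lpar_r4ha = {
        "LPAR1": [200, 240, 310, 280],
        "LPAR2": [150, 180, 170, 220],
    }
    # Which LPARs each subsystem runs on (hypothetical placement).
    subsystem_lpars = {"Db2": ["LPAR1", "LPAR2"], "CICS": ["LPAR1"]}

    for subsystem, lpars in subsystem_lpars.items():
        hours = len(next(iter(lpar_r4ha.values())))
        # Combine the R4HA of every LPAR where the subsystem runs, hour by hour.
        combined = [sum(lpar_r4ha[l][h] for l in lpars) for h in range(hours)]
        print(f"{subsystem}: billed at peak combined R4HA of {max(combined)} MSUs")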

Cost Analyzer for zEnterprise (CAzE)

Cost Analyzer for zEnterprise (CAzE) answers both of those questions, and will also evaluate the potential cost savings from MLC reduction efforts. CAzE uses a similar mixture of SMF records as the SCRT to generate a graphical representation of the workloads that are contributing to the R4HA. Starting with the Monthly Summary report, which shows the monthly breakdown of MLC cost by subsystem and CPC, you can then "drill down" to determine what each LPAR, workload, and finally each Job/STC is contributing to the R4HA at any given time.
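The drill-down is essentially an aggregation of MSU contributions at progressively finer levels. Here is a minimal sketch of the idea; the record layout and figures are hypothetical, not CAzE's actual data model:

    from collections import defaultdict

    # Hypothetical (lpar, workload, job, msus) records for one peak interval.
    records = [
        ("LPAR1", "BATCH_HI", "PAYROLL1", 40),
        ("LPAR1", "BATCH_LO", "RPTJOB2", 25),
        ("LPAR2", "STC", "DBM1", 60),
    ]

    by_lpar = defaultdict(int)
    by_workload = defaultdict(int)
    for lpar, workload, job, msus in records:
        by_lpar[lpar] += msus                  # coarsest level: per LPAR
        by_workload[(lpar, workload)] += msus  # next level: per workload

    print(dict(by_lpar))      # {'LPAR1': 65, 'LPAR2': 60}
    print(dict(by_workload))  # the raw records are the final Job/STC level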

From the workload level you can see how many MSUs each workload contributes to the R4HA. There are various workload views to choose from; in this example I am showing workloads by importance. From this view you can modify the graph to show combinations of individual workload levels and LPARs.

[Screenshot: LPAR view]

You can select all or individual workloads to view the Jobs/STCs which make up the workload(s) at a given time:

[Screenshot: Job/STC view]

The Job/STC view provides clarity into what is running in the R4HA peak:

  • Job/STC name, location, workload classification
  • The MSUs contributed by the Job/STC to the R4HA (hourly and total)
  • When the Job/STC started

Perhaps most importantly, this view attaches a cost value showing what each Job/STC contributes to the overall MLC bill.
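One simple way to attach such a cost value, shown here purely as an assumption and not necessarily CAzE's exact method, is to prorate the monthly MLC bill by each job's share of the peak R4HA:

    # Hypothetical figures; proration by MSU share is an assumption, not
    # necessarily CAzE's exact costing method.
    monthly_mlc_cost = 250_000.00  # dollars billed at the subsystem's peak
    peak_r4ha_msus = 900           # combined peak R4HA in MSUs

    def job_cost(job_msus_in_peak):
        """Prorated share of the MLC bill for a job's MSUs in the peak."""
        return monthly_mlc_cost * job_msus_in_peak / peak_r4ha_msus

    print(f"PAYROLL1 (40 MSUs): ${job_cost(40):,.2f} of the monthly bill")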

Now that we know what is running in the peak, and how much it is costing, let’s serve some ice cream!

CHOCOLATE = Highest reward, non-essential, easy to reschedule

To some, chocolate is the tastiest flavor; in our analogy it represents the Jobs/STCs that provide minimal value by running during the peak R4HA: they come from a low-importance workload, are easily rescheduled out of the peak, and/or carry a high cost. Place Jobs/STCs in this category that should be examined for a better scheduling time:

  • Expensive relative to the other workloads, with low importance
  • Have an SLA that is not critical in the given timeframe
  • Easy to reschedule outside of the peak R4HA

VANILLA = Valuable, some dependencies, flexible SLA

The vanilla flavor was usually half scooped out in my household, so this would be the Jobs/STCs that could be moved, but would not be the first choice. Perhaps they have minor dependencies which would need to be thought through, an SLA with a flexible window, or are not contributing much to the R4HA. The cost savings from moving them would need to be clear and justifiable:

  • From a mid-importance workload classification
  • Have an SLA that is not critical, but has a specific timeframe
  • Have dependencies which would require additional workload moves

STRAWBERRY = High importance, strict SLA, complex to reschedule

Strawberry represents the Jobs/STCs least likely to be moved out of the peak R4HA timeframe: they have a strict SLA, dependencies which would be too difficult to move, or another business reason that may prevent rescheduling. The business cost associated with these Jobs/STCs outweighs the cost savings from moving them:

  • From a high-importance workload classification
  • Have an SLA that is critical to a specific timeframe
  • Have dependencies which would require complex workload moves
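
Putting the three categories together, here is a minimal triage sketch; the field names and thresholds are hypothetical illustrations of the criteria above:

    # Hypothetical field names and rules illustrating the triage criteria.
    def flavor(job):
        if job["importance"] == "high" or job["sla_critical"]:
            return "strawberry"  # strict SLA or high importance: leave it alone
        if job["importance"] == "mid" or job["dependencies"] > 0:
            return "vanilla"     # movable, but savings must justify the effort
        return "chocolate"       # low importance, easy win: reschedule it

    jobs = [
        {"name": "RPTJOB2", "importance": "low", "sla_critical": False, "dependencies": 0},
        {"name": "PAYROLL1", "importance": "high", "sla_critical": True, "dependencies": 3},
    ]
    for job in jobs:
        print(job["name"], "->", flavor(job))  # RPTJOB2 -> chocolate, etc.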

Read more in Part 3: Scoop and Serve.


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.

About the author

Jeremy Hamilton

Jeremy Hamilton is the Senior Product Manager for BMC's Mainframe Cost Optimization Suite (R4). Joining BMC in 2013 as an IMS-focused Software Consultant, he then transitioned to the R4 team. In his current role, he sets the strategy and direction for the R4 solution portfolio. His passion is to deliver solutions that solve real-world problems and provide quantifiable value to customers. Jeremy has a Master's in Information Systems from Santa Clara University and over 10 years of experience in the mainframe world. He is an American Indian Science and Engineering Society (AISES) Sequoyah Fellow and has written three IBM Redbooks on IBM Mainframe Application Development Tools.