Mainframe MLC, Jobs/STCs, and Neapolitan Ice Cream – Part 2: What Drives Peak and Monthly Cost?

This is a special 3-part blog series focused on helping you better understand the best practices for identifying key drivers of your mainframe costs, using the most efficient technology available today.


What is Running in the Peak R4HA?

There is a wide array of applications, dependencies, and SLAs in every mainframe environment. Most mainframe shops have some type of job scheduler that lets them schedule, and in some cases monitor, batch jobs to accomplish their technical goals. A scheduler tells you when a job should execute, but gives very little insight into how the combined workload profiles add up over the course of an MLC billing cycle. On top of that, operators submit ad hoc workloads throughout a normal day, which can add to the MSU count.

Reminder from Part 1 of this blog series:

  • Strawberry – least likely to be moved, very little value
  • Vanilla – possible but not the first choice to move, somewhat valuable
  • Chocolate – easiest to move, most valuable

How Much Are the Jobs/STCs Contributing to the Bottom Line?

Every job and started task executed on the mainframe creates SMF records, which can be used for accounting, timestamps, configuration analysis, system resource utilization, and other purposes. The statistics in these records produce a workload profile for Jobs/STCs at each one-hour interval in the month, which cumulatively yields an MSU count. The IBM Sub-Capacity Reporting Tool (SCRT) collects the highest R4HA on each LPAR, records which subsystems were running on each LPAR, and then calculates the MLC charges based on the combined peak R4HA for each subsystem. The SCRT report is simply a billing tool for IBM and provides very little insight into how much individual workloads or Jobs/STCs contribute to the MLC cost. Most customers just receive a bill at the end of each month with no real insight into what their cost drivers were.
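To make the R4HA mechanics concrete, here is a minimal Python sketch of a rolling four-hour average over hourly MSU samples, with the monthly peak picked out. The sample values are made up for illustration; SCRT derives its figures from SMF records, not from a simple list like this.

```python
# Minimal sketch: compute a rolling four-hour average (R4HA) from
# hourly MSU samples and report the peak. Hypothetical data; SCRT
# works from SMF records, not a hand-entered list.

def rolling_4h_average(hourly_msus):
    """Return the 4-hour rolling average at each hour.

    For the first few hours, average over however many samples
    are available so far (1, 2, or 3)."""
    r4ha = []
    for i in range(len(hourly_msus)):
        window = hourly_msus[max(0, i - 3):i + 1]
        r4ha.append(sum(window) / len(window))
    return r4ha

hourly_msus = [300, 320, 650, 700, 710, 400, 350, 330]  # made-up LPAR samples
r4ha = rolling_4h_average(hourly_msus)
peak = max(r4ha)
peak_hour = r4ha.index(peak)
print(f"Peak R4HA: {peak:.1f} MSUs at hour {peak_hour}")
```

Note how the peak of the rolling average lands after the raw spike: a burst of MSUs keeps inflating the R4HA for the following three hours, which is why rescheduling work out of the peak window pays off.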

Cost Analyzer for zEnterprise (CAzE)

Cost Analyzer for zEnterprise (CAzE) answers both of those questions, and will also evaluate the potential cost savings from MLC reduction efforts. CAzE uses a mixture of SMF records similar to SCRT's to generate a graphical representation of the workloads contributing to the R4HA. Starting with the Monthly Summary report, which shows the monthly breakdown of MLC cost by subsystem and CPC, you can then drill down to see what each LPAR, each workload, and finally each Job/STC contributes to the R4HA at any given time.

From the workload level you can see how many MSUs each workload contributes to the R4HA. There are various workload views to choose from; in this example I am showing workloads by importance. From this view you can modify the graph to show combinations of individual workload levels and LPARs.

[Image: LPAR view]

You can select all or individual workloads to view the Jobs/STCs which make up the workload(s) at a given time:

[Image: Job/STC view]

The Job/STC view provides clarity to what is running in the R4HA peak:

  • Job/STC name, location, workload classification
  • The MSUs contributed by the Job/STC to the R4HA (hourly and total)
  • When the Job/STC started

Perhaps most importantly, this view attaches a cost value that the Job/STC is contributing to the overall MLC bill.
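As a back-of-the-envelope illustration of that cost attribution, you can prorate a monthly MLC bill across Jobs/STCs by their MSU share of the peak R4HA. The numbers and the simple proration method below are assumptions for the sketch; CAzE's actual pricing model is more detailed.

```python
# Back-of-the-envelope cost attribution: prorate the monthly MLC bill
# across Jobs/STCs by their MSU share of the peak R4HA. All figures
# are hypothetical; CAzE's cost model is more sophisticated.

monthly_mlc_bill = 100_000.0   # assumed monthly MLC charge (USD)
peak_r4ha_msus = 615.0         # assumed combined peak R4HA (MSUs)

# MSUs each Job/STC contributed during the peak interval (made up)
jobs_at_peak = {"BATCH01": 120.0, "CICSPRD": 310.0, "ADHOC07": 45.0}

cost_per_msu = monthly_mlc_bill / peak_r4ha_msus
for name, msus in jobs_at_peak.items():
    print(f"{name}: {msus * cost_per_msu:,.2f} USD of the monthly bill")
```

Even this crude proration makes the point: a 45-MSU ad hoc job sitting in the peak carries a real price tag, which is what makes it a rescheduling candidate.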

Now that we know what is running in the peak, and how much it is costing, let’s serve some ice cream!

CHOCOLATE = Highest reward, non-essential, easy to reschedule

Chocolate is, to some, the tastiest flavor; in our analogy it represents the Jobs/STCs that provide minimal value by running during the peak R4HA: they come from a low-importance workload, are easily rescheduled out of the peak, and/or carry a high cost. Place Jobs/STCs in this category when they should be examined for a better scheduling time:

  • Expensive relative to the other workloads, low importance
  • Have an SLA that is not critical in the timeframe given
  • Easy to reschedule outside of the peak R4HA

VANILLA = Valuable, some dependencies, flexible SLA

The vanilla was usually only half scooped out in my household, so this bucket holds the Jobs/STCs that could be moved but would not be the first choice. Perhaps they have minor dependencies that would need to be thought through, an SLA with a flexible window, or they are simply not contributing much to the R4HA. The cost savings from moving them would need to be clear and justifiable:

  • From a mid-importance workload classification
  • Have an SLA that is not critical, but has a specific timeframe
  • Have dependencies that would require additional workload moves

STRAWBERRY = High importance, strict SLA, complex to reschedule

Strawberry represents the Jobs/STCs running during the peak R4HA that are least likely to be moved: a strict SLA, dependencies that would be too difficult to untangle, or another business reason that prevents rescheduling. The business cost associated with these Jobs/STCs outweighs the cost savings from moving them:

  • From a high-importance workload classification
  • Have an SLA that is critical to a specific timeframe
  • Have dependencies that would require complex workload moves
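The three flavor buckets above can be sketched as a simple triage rule. This is purely illustrative, not CAzE logic; the field names, the 1-to-5 importance scale, and the thresholds are all assumptions made for the example.

```python
# Illustrative triage of peak-hour Jobs/STCs into the three ice-cream
# buckets. Field names and thresholds are hypothetical, not CAzE output.

def flavor(job):
    """Classify a job dict with 'importance' (1 = highest, 5 = lowest),
    'sla_critical' (bool), and 'hard_dependencies' (bool)."""
    if job["importance"] <= 2 or job["sla_critical"]:
        return "strawberry"  # high importance / strict SLA: leave in place
    if job["hard_dependencies"] or job["importance"] == 3:
        return "vanilla"     # movable, but savings must justify the effort
    return "chocolate"       # low importance, easy win: reschedule first

jobs = [
    {"name": "PAYROLL", "importance": 1, "sla_critical": True,  "hard_dependencies": True},
    {"name": "ADHOC01", "importance": 5, "sla_critical": False, "hard_dependencies": False},
    {"name": "RPTBAT1", "importance": 3, "sla_critical": False, "hard_dependencies": True},
]
for job in jobs:
    print(job["name"], "->", flavor(job))
```

In practice the real decision is a conversation with application owners, not a three-line rule, but encoding even a rough rule like this makes the triage criteria explicit and repeatable.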

Read more in Part 3: Scoop and Serve.


These postings are my own and do not necessarily represent BMC's position, strategies, or opinion.

Jeremy Hamilton

Jeremy Hamilton is a technical Software Consultant for BMC’s Mainframe Cost Optimization Suite (R4). He has over 9 years’ experience in the software industry, primarily in technical sales roles. He started his career as a physicist for IBM Research before moving into the mainframe software world. At IBM he worked with many large corporations and government agencies, and wrote three IBM Redbooks on the IBM Application Development Tools. In his current role, he works with customers to educate, consult, and implement the R4 solutions, as well as drive enhancements for the products. Jeremy has a Master’s in Information Systems from Santa Clara University, and is an American Indian Science and Engineering Society (AISES) Sequoyah Fellow.