Mainframe MLC pricing is complex, largely because the pricing models have evolved over many years. Different customers have different needs, and those needs keep producing new ways of charging for software. As a result, MLC pricing models are numerous, and new models are introduced quite often. One of the most prominent recent models, Mobile Workload Pricing (MWP), is a direct result of the growth in digital business.
Digital business is creating much more work for the mainframe, much of it in the form of data inquiries arriving from mobile devices. That means more data, more transactions, higher peaks, and more volatility, all of which drive up costs. Managing this growth is critical. Fortunately, the MWP model allows for significant discounting. But how do you know when and how to take advantage of this pricing? BMC Cost Analyzer, with its new MWP support, is one way. It helps you to: 1) identify the amount of mobile workload on the mainframe, 2) model the cost savings from running mobile work on the platform, and 3) predict the cost impact of mobile workload growth.
If you’re reading this, your company probably runs a mainframe, and you’re keenly aware of the IBM monthly license charge (MLC) software costs you pay each month. Or perhaps someone in another department manages the bill. Either way, you should understand how your company is being charged for this big-ticket item (upwards of 35% of total mainframe costs), which covers MLC software products such as z/OS, DB2, IMS, CICS, and MQ.
My intent here is to help you better understand how your bill might be calculated. A bit of education on this topic could save your company significant money, because there is low-hanging fruit to be harvested. One caveat: this is a very complex structure, and simplifying it without distorting it is hard. I’ll try to strike that balance.
Let’s begin with “full capacity pricing” versus “sub-capacity pricing”. Originally, IBM mainframe software charges were based on the full capacity of the machine, regardless of how many MSUs (millions of service units) of resource were actually consumed each month. You can still choose this billing method. Then along came sub-capacity pricing, which customers liked because the basis for MLC charges became their peak usage on the machine, which is commonly less than the machine’s full capacity.
Here’s where things get complicated: charges are based on the peak of a 4-hour rolling average of your MSU usage across the roughly 720 hours in a 30-day month. Each hour’s value is the average of that hour’s MSU usage combined with the previous three hours, and the peak interval becomes the basis for your monthly charge. If your peak is, say, 500 MSUs, with DB2 contributing 350 and IMS contributing 150, you would expect simple math to apply (for example, $50 times 350 for the DB2 cost, and $110 times 150 for the IMS cost). Simple usage-based pricing, right? Wrong.
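The 4-hour rolling average calculation itself is straightforward to sketch. The helper below is an illustration of the mechanic described above, not IBM's actual SCRT implementation, and the hourly figures are invented:

```python
import random

def peak_4hr_rolling_average(hourly_msus):
    """Return (peak_hour_index, peak_value), where each hour's value is the
    average of that hour's MSU usage and the three preceding hours."""
    best_hour, best_avg = None, 0.0
    for hour in range(3, len(hourly_msus)):
        window = hourly_msus[hour - 3:hour + 1]  # this hour + previous 3
        avg = sum(window) / 4
        if avg > best_avg:
            best_hour, best_avg = hour, avg
    return best_hour, best_avg

# Illustrative data only: 720 hourly MSU readings for a 30-day month.
random.seed(42)
usage = [random.uniform(300, 600) for _ in range(720)]
hour, peak = peak_4hr_rolling_average(usage)
print(f"Peak 4-hour rolling average: {peak:.0f} MSUs in hour {hour}")
```

Note that a single one-hour spike gets diluted across four hours of averaging, which is why short bursts hurt less than sustained load.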
Each MLC software product is charged based on the aggregate peak MSU usage: the combined usage of all products, across all logical partitions (LPARs) on which that product runs, during the peak interval. In other words, each product (DB2, for example) is billed at the peak MSU level that is the sum of all products’ contributions to that peak (500 in our previous example), not at the MSU level that the product itself used.
See Figure 1. Let’s assume the peak falls in hour 713, where all MLC software together totaled 767 MSUs. Each software product is then charged for 767 MSUs (not for its own slice of the bar) times its cost per MSU. So instead of paying perhaps $60,000 for this bill, you’re paying well over $300,000. That’s still far less than being charged for the full capacity of the machine, but far more than a true usage-based cost.
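To make the gap concrete, here is a sketch of the billing rule just described. The per-MSU rates and the usage split at the peak are invented for illustration; real MLC rates come from IBM's tiered price lists:

```python
# Aggregate 4-hour rolling average peak across all MLC products (hour 713).
peak_msus = 767

# Hypothetical MSU usage of each product during the peak hour (sums to 767).
usage_at_peak = {"z/OS": 350, "DB2": 200, "IMS": 37, "CICS": 120, "MQ": 60}

# Hypothetical $/MSU rates, loosely echoing the $50 DB2 / $110 IMS figures above.
rate_per_msu = {"z/OS": 100, "DB2": 50, "IMS": 110, "CICS": 80, "MQ": 60}

# What intuition suggests: each product billed on its own usage at the peak.
usage_based = sum(usage_at_peak[p] * rate_per_msu[p] for p in usage_at_peak)

# What actually happens: every product billed at the full aggregate peak.
aggregate_based = sum(peak_msus * rate_per_msu[p] for p in rate_per_msu)

print(f"Usage-based bill:    ${usage_based:,}")     # $62,270
print(f"Aggregate-peak bill: ${aggregate_based:,}") # $306,800
```

With these made-up numbers, the intuitive usage-based bill lands near $60,000 while the aggregate-peak rule pushes it past $300,000, matching the rough proportions in the example above.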
Figure 1. Identifying the peak 4-hour rolling average
The confusion really shows when you compare a product’s usage contribution to the peak with its contribution to the total cost. If IMS contributed just 5% of the MSUs in the peak, but is charged for all 767 MSUs at its (typically higher) unit price, its share of the bill might be as high as 35%. This is not intuitive unless you have a cost analysis tool helping you.
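The arithmetic behind that 5%-versus-35% gap is simple once you see it: because every product is billed at the full aggregate peak, a product's share of the bill is set by its share of the per-MSU rates, not by its share of the peak MSUs. The figures below are invented to reproduce the proportions in the text:

```python
peak_msus = 767
ims_usage = 38  # roughly 5% of the 767-MSU peak (hypothetical)

# Hypothetical $/MSU rates: IMS vs. the combined rate of all other products.
rates = {"IMS": 140, "all_others": 260}

msu_share = ims_usage / peak_msus
bill_share = rates["IMS"] / (rates["IMS"] + rates["all_others"])

print(f"IMS share of peak MSUs: {msu_share:.0%}")   # 5%
print(f"IMS share of the bill:  {bill_share:.0%}")  # 35%
```

The MSU usage cancels out entirely: once a product is in the peak at all, its cost contribution depends only on its rate.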
Here’s the kicker: most companies run more than one LPAR. In fact, many of the largest finance, insurance, government, education, transportation, and retail companies run hundreds of LPARs. If you run DB2, say, on more than one LPAR, then its MSU usage is aggregated, or summed, across all the LPARs on which it runs. This means your bill just increased by a large margin.
Furthermore, this cross-LPAR aggregation makes analyzing and managing the peak, the cost drivers, and your next bill very difficult. Companies commonly turn to spreadsheets to pinpoint the key peak cost drivers, yet the workload moves or changes they decide on can have little or no impact on the ultimate bill. Here’s why: running multiple MLC software products produces a cost structure with multiple aggregated MSU peaks, one per product. If, for example, IMS runs only on LPAR1 and LPAR3, its aggregate peak might fall in hour 346 at, say, 990 MSUs. DB2, running on six LPARs, might instead peak in hour 720 at 1,003 MSUs.
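The per-product peaks can be sketched the same way: sum each product's hourly usage across only the LPARs it runs on, then find that combined series' own rolling-average peak. The LPAR names, product placements, and MSU figures here are all illustrative:

```python
import random

random.seed(7)
HOURS = 720

# Hypothetical hourly MSU series for six LPARs.
lpars = {f"LPAR{i}": [random.uniform(50, 200) for _ in range(HOURS)]
         for i in range(1, 7)}

# Which LPARs each product runs on (invented placements).
runs_on = {"IMS": ["LPAR1", "LPAR3"],
           "DB2": [f"LPAR{i}" for i in range(1, 7)]}

def peak_4hra(series):
    """Peak 4-hour rolling average of an hourly MSU series, as (hour, value)."""
    candidates = ((h, sum(series[h - 3:h + 1]) / 4)
                  for h in range(3, len(series)))
    return max(candidates, key=lambda t: t[1])

for product, where in runs_on.items():
    combined = [sum(lpars[l][h] for l in where) for h in range(HOURS)]
    hour, peak = peak_4hra(combined)
    print(f"{product}: peak {peak:.0f} MSUs in hour {hour} "
          f"across {len(where)} LPARs")
```

Because each product's combined series is different, the peaks land in different hours, which is exactly why reducing one product's peak can leave another product's peak, and the bill, unchanged.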
This is all obviously very complex, and you’re not alone in your quest to diagnose and manage down these peaks. In fact, according to David Schipper’s blog, “Probably around 20% of those trying to manage MLC costs do not have a good understanding of how the charges are calculated. Another 60% think they do, but have it wrong. I was in this 60% group before I really studied the calculation process.”
We’ve heard from many customers who spend hours each week analyzing their SCRT reports and modeling different scenarios to find ways to reduce their MLC bill, only to find that the peaks are complex and shift out from under them constantly. The only practical way to tackle this is with a visual, intelligent tool like BMC Cost Analyzer. The best approach is to treat MLC cost management as a continuous process: gaining cost driver transparency, analyzing the drivers of the peak, modeling proposed changes to see whether they will really save money, and then managing the peaks with technologies like intelligent capping and subsystem optimization. For more information, see the BMC MLC Cost Management web page.