Db2 11 – End of Service
https://s7280.pcdn.co/db2-11-end-of-service/ | May 30, 2018

When IBM announced their continuous delivery plans for IBM® Db2®, one question kept nagging at the back of my mind. Historically, IBM would support two versions of Db2 at the same time (colloquially known as “current” and “current-1”). But if there were to be no Db2 13 (just incremental functional additions to Db2 12), how would IBM remove support for Db2 11? On the one hand I could see Db2 11 existing far into the future, but at the same time I knew that IBM would not want to support Db2 11 indefinitely.

In February, IBM resolved my quandary by announcing September 30, 2020 as the end of service date for Db2 11. If you are still running Db2 11, this date should now be a significant part of your planning for the migration to Db2 12. If you are using Db2 10, be aware that Db2 11 will be withdrawn from marketing on July 2, 2018. This means you will be unable to order Db2 11 from IBM after this date.

Leading (Bleeding) Edge?

Thinking about upgrading to a new version of Db2 brings us to the thorny question of “how current should you be?” There are two extremes. There will always be some who want to be on the latest and greatest versions of software. To be fair, there are situations where this makes perfect sense—especially if you are waiting desperately for some new feature.

The other extreme is for those for whom the migration plans are driven directly by the spectre of running software unsupported. For these customers, who perhaps do not need the new functionality, waiting until almost the bitter end also makes perfect sense.

In the middle lie the majority—those who want to make use of new functionality (whilst minimizing the risks of being an early adopter) and keep well clear of the risks of being unsupported.

Db2 12 was announced as generally available (GA) on October 21, 2016, so it has been available for over 18 months. In the days when IBM released a new version of Db2 every three years, this would put us halfway through the life of Db2 12 already!

This, I think, is what is making 2018 the “Year of Db2 12.” When I talk to our customers or speak at industry events, the majority response confirms that migration to Db2 12 will be completed this year. I hope they all know that there are only seven months left of 2018.

ISV Support

When making plans to migrate to a new version of Db2, it is important to include updating third-party software in that plan. Independent software vendors (ISVs) will document which version of their tools is needed to support a new version of Db2. At BMC, we try to make it easy to remember: our v12.1 release is needed to begin exploiting new features of Db2 12. Because this version of our software also works happily with Db2 11, it makes sense to plan a migration to the latest version of BMC tools before embarking on an upgrade of Db2 itself. Most other ISVs will make similar suggestions. It is also worth noting that as IBM adds more and more functionality to Db2 12 through continuous delivery, it becomes ever more important to keep up to date with ISV releases as well.

Communication

As you build your plans to migrate to Db2 12, I strongly suggest involving any ISVs that you are working with. They will be able to confirm which versions of their software you should be using. They may also be able to share advice (anonymously, of course) based on the successes and concerns of other customers who have been through the same upgrade process. There is nothing worse than feeling that you are alone and that, somehow, your circumstances are unique.

Here’s wishing you all a quick and trouble-free migration to Db2 12. Remember, thanks to continuous delivery, this will be the last version upgrade of Db2 you will need to perform for a while. Now you just need to formulate a plan to keep relatively current with IBM’s Db2 function levels.

 

When Is zIIP Offload Not What It Seems?
https://www.bmc.com/blogs/ziip-offload-not-seems/ | July 12, 2017

Minimizing the operating expense of your mainframe has never had a higher profile, nor been more important. Through the introduction of specialty processors (such as zIIPs), IBM has provided significantly lower-cost hardware and the promise of dramatic savings in operational costs. The primary reason this is so attractive and cost-effective is not just the cheaper hardware but also much cheaper software: processing capacity associated with zIIPs is typically not counted in overall z/OS MIPS/MSU capacity by IBM or ISVs, and is therefore exempt from many related charges such as license, upgrade, and maintenance fees.

IBM authorizes customers to use specialty engines to process certain specific types of workloads as designated by IBM. For example, certain workloads that were written to run in non-task enclave SRB mode may be made eligible for redirection to a zIIP.

The choice for IT would seem obvious: maximize the use of specialty engines to reduce cost. However, the obstacle to achieving these savings is the prerequisite that only certain types of processing are eligible to execute on zIIPs. To actually realize such savings, companies must identify workloads currently running on the mainframe’s traditional general purpose (GP) processors and migrate them to one of the less expensive zIIPs. One way to do this might be to rewrite historical COBOL code in Java – Java workloads are always zIIP-eligible (now that zAAP work can run on a zIIP!).

BMC Software has a way for IT to take advantage of these cost benefits with no migration, no effort, and no risk. Most of BMC’s tools running in the z/OS environment are now coded to take advantage of zIIPs wherever they are encountered.

Because of the way IBM licenses access to these zIIP engines, it is not possible to explicitly direct work to a zIIP – instead, workloads are made zIIP-eligible. At execution time, IBM Workload Manager (WLM) makes real-time decisions about what can be executed on a zIIP and what must remain executing on a general processor. Some of the reasons why zIIP-eligible work may not run on a zIIP include:

  • No zIIPs are installed and online
  • The zIIPs that are installed and online are busy doing other work
  • The application which has scheduled the work has specified that only a portion of the work should be made eligible for redirection to zIIP

Because of the importance of offload eligibility, it has become a habit to look at zIIP offload percentages (as measured by various benchmarks) to determine who is running the most efficient software in any particular situation. So a tool that boasts “80 percent zIIP offload” is deemed to be better than one that only offers “50 percent offload eligibility.”

Unfortunately, this crude comparison overlooks the reasons for bothering with zIIP offloading in the first place. The point is not to maximize the work that is running on the zIIPs but to minimize the work that remains executing on the GPs. I know this sounds like two ways of saying the same thing, but let me explain:

  • A tool that consumes 100 MSUs and has 80 percent zIIP offload may, at best, leave 20 (chargeable) MSUs running on the GP.
  • An alternative tool that consumes 25 MSUs but only boasts 30 percent zIIP offload only leaves 17.5 chargeable MSUs on the GP.

You can easily see now that the tool with the lower zIIP offload actually costs LESS to execute than the one with the higher offload. The question that always needs to be asked is “What is the zIIP-eligible offload a percentage of?” – not all zIIP offload figures are created equal.
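
To make the comparison concrete, here is a minimal sketch in Python (purely illustrative – the figures are those from the bullets above, and “at best” still applies, since WLM may not redirect everything that is eligible):

    def chargeable_msus(total_msus, ziip_offload_pct):
        """MSUs left running on general processors after zIIP redirection."""
        return total_msus * (1 - ziip_offload_pct / 100)

    # Tool A: bigger footprint, higher offload percentage
    print(chargeable_msus(100, 80))  # 20.0 chargeable MSUs
    # Tool B: smaller footprint, lower offload percentage
    print(chargeable_msus(25, 30))   # 17.5 chargeable MSUs - cheaper despite the "worse" offload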

BMC takes very seriously the need to continually review execution costs, and zIIP offload is only one of the tools we bring to bear on the problem. We usually go through these steps in determining what we can do to minimize CPU consumption:

  1. Is there a way of avoiding CPU usage completely?
    An example of this is the way Next Generation Technology (NGT) Reorg removes the need to decompress and recompress data during a reorg, and avoids having to call a SORT utility to reorganize Db2 data (among a long list of things that NGT Reorg no longer has to do compared with a traditional reorg utility).
  2. Are there more efficient ways of performing tasks?
    In our recovery tools, for example, we have looked at the instructions we use to move data around to make sure we are doing things in the most efficient way possible. In this context, not all instructions are created equal.
    We also continually look for ways to improve our algorithms – removing the need to do repetitive tasks multiple times for example.
  3. Can we make any of the remaining CPU consumption zIIP eligible?
    Using a zIIP should be the last resort after all other options have been tried. After all, the cheapest CPU second is the one you don’t consume.

Other vendors hope you will be impressed with their zIIP offload benchmarks and that you won’t notice the sleight of hand where they are trying to hide the real execution costs behind a mythical “best offload percentage” number. There is far more to reducing chargeable CPU than just enabling some of it to run on a zIIP.

What I haven’t mentioned so far is that zIIP capacity is not infinite – IBM defines a ratio of how many zIIPs you can have based on the number of GPs you have purchased. This started as one zIIP per general processor, but since July 2013 it has been possible to buy two zIIPs for each GP on zEC12 and/or zBC12 (or later) machines.
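
As a trivial sketch of that ratio rule (illustrative only – check the rule for your own machine generation):

    def max_ziips(general_processors, ratio=2):
        """Maximum purchasable zIIPs for a given number of GPs (1:1 pre-zEC12, 2:1 after)."""
        return general_processors * ratio

    print(max_ziips(4))           # 8 zIIPs on zEC12/zBC12 or later
    print(max_ziips(4, ratio=1))  # 4 under the original one-per-GP rule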

For more recent hardware, it is also possible to run zAAP (z Systems Application Assist Processor) eligible workloads on a zIIP engine – this supports running Java and XML parsing workloads on a specialty engine.

Remember above where I mentioned that one reason zIIP-eligible work may not run on a zIIP is unavailability of zIIP capacity? This is a very good reason to ensure that your tools use zIIP offload in the most efficient way possible and are not just dumping unwanted GP cycles onto a zIIP. If you are running your zIIPs at, or close to, capacity, performance can suffer. Making code eligible to run on a zIIP and then running it on a GP anyway is actually more expensive (and slower) than if you hadn’t bothered with zIIP eligibility in the first place!

At the end of the day, zIIP offloading is only one way of minimizing chargeable CPU usage and people need to look at the whole picture to determine who is delivering the best value in terms of chargeable CPU usage.

Oh, and one last point. If you are aiming to reduce CPU consumption to cut your IBM Monthly License Charge (MLC), remember that only MSUs consumed during your rolling four-hour peak contribute to that calculation. Savings elsewhere may make you feel better, but they won’t reduce your software bills. In the context of zIIP redirection, only those GP cycles offloaded to a zIIP during your four-hour peak will save you money.
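
As a simplified sketch of that rolling four-hour average idea – real reporting works from much finer-grained SMF data, so the hourly samples here are purely an assumption to keep the example short:

    def rolling_four_hour_peak(hourly_msus):
        """Peak of the rolling four-hour average over hourly MSU samples."""
        windows = [hourly_msus[i:i + 4] for i in range(len(hourly_msus) - 3)]
        return max(sum(w) / 4 for w in windows)

    samples = [300, 320, 700, 720, 710, 690, 310, 300]  # hypothetical hourly MSUs
    print(rolling_four_hour_peak(samples))  # 705.0 - only savings inside this window cut the bill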

Reorging Db2 – Mundane Housekeeping or a High Performance Move?
https://www.bmc.com/blogs/reorging-db2-housekeeping-high-performance-move/ | January 24, 2017

Every Db2 DBA knows that there are regular “housekeeping” tasks that you need to do to keep your Db2 databases running well – and one of those tasks is running the Reorg utility. Today, there are three major goals relating to Db2 housekeeping which are addressed by running a reorganisation:

  • Tidying up data that’s out of place – rebuilding index structures and generally organizing data in tablespaces
  • Consolidating extents and reclaiming wasted space
  • Instantiating those on-line schema changes that allow a change to be specified with an ALTER statement, but where the change is actually made by a reorg at a later date

The second goal noted above is really a special case of the first, and the third should be scheduled as part of a change delivery process, as there could be other actions that need to be performed (such as running RUNSTATS, rebinding affected packages, etc.).
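
As a concrete illustration of that third goal – a hedged sketch assuming a universal tablespace, the ibm_db Python driver on a distributed client, and hypothetical object names – the ALTER below is recorded as a pending definition change, and the tablespace sits in advisory REORG-pending status until a reorg materializes it:

    import ibm_db

    # Placeholder connection string - substitute your own subsystem's DDF location details
    conn = ibm_db.connect("DATABASE=DSNB;HOSTNAME=zhost;PORT=5021;UID=dba;PWD=secret;", "", "")

    # For a universal tablespace, this DSSIZE change is not applied immediately:
    # it is stored as a pending definition change and only takes effect when a
    # subsequent REORG TABLESPACE materializes it.
    ibm_db.exec_immediate(conn, "ALTER TABLESPACE MYDB.MYTS DSSIZE 64 G")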

Which leads us to the realization that we are actually reorging our Db2 data, tables, and indexes to maintain a high level of performance for our APPLICATIONS. This really should be the primary focus of running any reorgs – to ensure that the data is maintained in an ideal state so that application SQL executes as efficiently as possible. The other reasons for running a reorg tend to be more ad hoc in nature, whereas reorging to maintain high levels of application performance should be performed as often as necessary.

Modern Day Challenges to Performing Reorgs

If we cast our minds back to our early experiences with Db2 (personally, I have to cast my mind WAY back), we probably scheduled reorgs in a quiet part of the week. For example, I was fortunate enough to have the entire weekend to myself, when I could effectively reorganise everything in sight. In today’s world, we are hit by two conflicting situations:

  • Firstly, we have far less down time than we used to, so these reorgs have to fit into ever-shrinking periods of time.
  • Secondly, even though our down-time window has diminished, the amount of data and the number of tables to process seem to have grown exponentially. Most people today do not have the time or the luxury of being able to reorganize everything every week. So we started to make compromises.

The first compromise most people look at is whether they need to reorganise every table or not – and make use of Db2 catalog statistics to determine which objects to reorg. In reality, what’s happening is that the selection is determining which tables/indexes do NOT look like they need reorganising right now. The process usually generates JCL for submission at a later time.

This is a good start, but the reorg-or-don’t-reorg decisions are based on statistics that were collected some time earlier, and they may no longer be up to date. Of course, ISV customers could be using tools provided by their vendor to look at statistics outside the catalog. This was typically a solution to an earlier problem with IBM’s RUNSTATS utility: it was not possible to collect data disorganization statistics without also providing that information to the Db2 optimizer, which risked catastrophic degradations in performance if plans/packages were rebound while those statistics indicated severe levels of data disorganization. This challenge has been mitigated by changes to RUNSTATS and is solved completely with the more widespread use of real-time statistics.
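
For illustration, here is a hedged Python sketch of reading real-time statistics with the ibm_db driver. The columns are standard SYSIBM.SYSTABLESPACESTATS columns, but the connection string, the names, and the 10 percent threshold are assumptions for the example, not recommendations:

    import ibm_db

    # Placeholder connection string - substitute your own subsystem's DDF location details
    conn = ibm_db.connect("DATABASE=DSNB;HOSTNAME=zhost;PORT=5021;UID=dba;PWD=secret;", "", "")

    sql = """
    SELECT DBNAME, NAME, TOTALROWS, REORGUNCLUSTINS
    FROM SYSIBM.SYSTABLESPACESTATS
    WHERE TOTALROWS > 0
    """

    stmt = ibm_db.exec_immediate(conn, sql)
    row = ibm_db.fetch_assoc(stmt)
    while row:
        # Flag a tablespace when more than 10% of the rows inserted since the
        # last reorg landed out of clustering sequence (illustrative threshold)
        if row["REORGUNCLUSTINS"] > 0.10 * row["TOTALROWS"]:
            print("Candidate for reorg: {}.{}".format(row["DBNAME"], row["NAME"]))
        row = ibm_db.fetch_assoc(stmt)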

The second compromise is to decide whether really big partitioned tablespaces need reorging in their entirety. A lot of time can be saved by reorging these at the partition level, perhaps only processing a subset of the total number of partitions at a time, and creating a schedule that ensures that all partitions are reorganized – eventually. The problem with this approach is that the non-partitioning indexes (NPIs) will tend to get more and more disorganised, because they are not themselves reorged or rebuilt as part of this process; they are just updated as rows move around in the partitions being reorged.

This can cause a gradual decrease in performance of applications using these indexes. Running a reorg of these NPIs can yield surprising results – I’ve seen up to 90% elapsed and CPU improvements in the worst cases. The challenge here though is that these NPIs can be of a significant size and it may not be possible to schedule a reorg in a usual housekeeping window.

Solution Options

So what can we do to improve our Db2 reorganisation strategy yet stay within the bounds of the conflicting demands and constraints forced upon us by digital business in the 21st century?

Let’s look at the options:

  • We should be looking at real-time statistics to determine which objects to reorg and which to skip. We should also be making the decision in real time, rather than generating JCL at one point in time to be scheduled later when time permits. In the gap between generation and execution, some other table or index may come to need reorganising – and perhaps at a higher priority. (A sketch of this kind of just-in-time decision follows this list.)
  • We should be looking at table and index criteria separately. In many cases, it is actually a reorg of the index(es) that has the biggest benefit to application performance.
  • Be careful with partition level reorgs. Sometimes these may be a necessary evil, but where you do use them, don’t forget the implications for the NPIs. Make sure these are considered for reorgs regularly.
  • Don’t assume once a month (or even once a week) is often enough to reorganise your Db2 data. After a reorg, monitor the real-time statistics and see how quickly the data becomes disorganised. You will probably find that you are reorging some tables or indexes too often. Equally, you may find that for some highly active tables, reorging once a week is not often enough!
  • Look at the tooling you are using – is it fit for purpose? At the very least it should be scalable to the limits you need. It should also be minimally intrusive to your applications when it runs. This does not just mean making sure the switch phase doesn’t cause problems; it includes considerations of CPU consumption and workfile usage. It’s no good if your reorg costs more to execute than it will save, but I have seen examples where using modern reorg tools can even reverse the year-on-year growth in application CPU consumption – and, of course, these savings are immediate. People have demonstrated benefits by reorging MORE often rather than less often.
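
Here is a minimal sketch of the just-in-time decision mentioned in the first bullet: take real-time statistics counters for an object and emit a REORG control statement only if a policy threshold is tripped. The threshold and object names are hypothetical, and a real policy would weigh several counters, not just one:

    def reorg_statement(dbname, tsname, rts, uncluster_pct=10.0):
        """Return a REORG TABLESPACE control card if the RTS counters warrant one."""
        total = rts.get("TOTALROWS") or 0
        if total == 0:
            return None
        if 100.0 * rts.get("REORGUNCLUSTINS", 0) / total >= uncluster_pct:
            return "REORG TABLESPACE {}.{} SHRLEVEL CHANGE".format(dbname, tsname)
        return None  # skip - the object does not look disorganised enough yet

    # Counters as they might come back from SYSIBM.SYSTABLESPACESTATS
    stats = {"TOTALROWS": 1000000, "REORGUNCLUSTINS": 150000}
    print(reorg_statement("PAYDB", "PAYTS01", stats))  # REORG TABLESPACE PAYDB.PAYTS01 SHRLEVEL CHANGE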

Lastly, design yourself a strategy that you can put on auto pilot and leave alone. In my experience, DBAs have far too much work to do to be manually managing Db2 housekeeping. Set your reorg thresholds or policies and create JCL that will run regardless of any changes in the application. If the data grows, or if the rate of disorganization changes, or even if new tables are created, you can relax knowing that all your Db2 data is being kept optimally organized and that your applications are operating at their peak efficiency. In other words, you have completed your housekeeping and automated it for the best possible Db2 performance.

For more information on how BMC can help you to improve the performance of your Db2 Databases, see https://www.bmc.com/it-solutions/performance-Db2-databases.html

Db2 10 end of support and what it means to you
https://www.bmc.com/blogs/db2-10-end-support-means/ | August 8, 2016

While most of us are looking forward to the impending announcement from IBM as to when Db2 12 will be generally available, it’s worth taking a moment to look back over our shoulder. Not everyone sees Db2 12 on their immediate horizon and not everyone is yet safely migrated to Db2 11. If you have not migrated to Db2 11, read on.

IBM have announced that Db2 10 will cease to be supported from September 30, 2017 – a little over a year from now. So this defines the date by which EVERYONE should be migrated to Db2 11. At that point, the only Db2 versions fully supported by IBM will be Db2 11 and Db2 12 – unless you have made specific arrangements with IBM, that is.

It can take a typical Db2 installation (whatever one of those is) 6-12 months to migrate all of their subsystems from one Db2 version to another, so it is time to be making plans for how to get to Db2 11 from where you are. Do not let the September 30 date become your deadline; plan to be safely on Db2 11 with a comfortable window before IBM ceases support. Bear in mind that you will have to progress through the three-phase migration process (CM, ENFM and NFM in turn), which by itself can extend the time it takes to migrate – especially if you are also going to verify that it is possible to fall BACK from Db2 11 to Db2 10, or between any of the migration modes. On the bright side though, this will be the last time a Db2 migration is handled this way. IBM have already told us there will be a single-phase migration for the step from Db2 11 to Db2 12.

If you have third-party tools, you will probably need to upgrade those when you migrate. For those of you who are BMC customers, you will need to be running v11.1 (or later) of our tools, although we recommend that you install the most recent release. Version 10 and earlier BMC tools do not support running in a Db2 11 environment. Other tools vendors probably have a similar minimum version statement for their tools to run with Db2 11. Remember, you will likely have to migrate all your tools to a Db2 11-compatible version before you start the actual Db2 version migration itself. Make sure you factor this into your Db2 upgrade plans.

If you are running Db2 9 or earlier, you have a whole different set of problems. Firstly, IBM withdrew Db2 10 from marketing on July 6, 2015, so it is already too late to order Db2 10 media. Secondly, you must migrate to Db2 10 to get to Db2 11 and beyond. There is no skip-migration available from Db2 9 to Db2 11 (nor from Db2 10 to Db2 12, for that matter); the only recognized migration path is one version upgrade at a time until you reach Db2 12. Finally, don’t forget: your journey to Db2 11 still has to be completed before September 30, 2017.

There is one ray of sunshine on the horizon. At IDUG North America in Austin, IBM talked about the possibility of shipping new Db2 function continuously in the future. This implies that they would use Db2 12 as a base and incrementally add new capabilities to it over time. So it may well become much easier (and much faster) to get access to new Db2 functionality without going through the effort and complexity of migrating to a new release. IBM will be talking about this new delivery model on a webcast scheduled for Tuesday, September 27th. Here is the title and link: Db2 for z/OS – Delivering New Capabilities Faster for Increased Productivity.

Finally, this makes me wonder whether anyone in our worldwide community has migrated Db2 subsystems through all 14 Db2 version changes there have been over the last 30+ years (1 -> 1.2 -> 1.3 -> 2.1 -> 2.2 -> 2.3 -> 3 -> 4 -> 5 -> 6 -> 7 -> 8 -> 9 -> 10 -> 11). When you look back over all those different versions, it’s been an amazing ride so far – and it makes you wonder what IBM have in their plans for the future of Db2.
