Bob Yee – BMC Software | Blogs
https://s7280.pcdn.co (updated Thu, 06 Apr 2023)

What You Need to Know About COBOL V6 Continuous Delivery
https://s7280.pcdn.co/cobol-v6-1-continuous-delivery/ (Tue, 27 Jun 2017)
COBOL V6.1 Continuous Delivery will instill more business value to the z Systems platform for those millions of COBOL programs that “run the world.”

You’ll hear plenty of people dismissing COBOL, in one way or another, as old, inefficient and unappealing, especially to new programmers. But current mainframe programmers writing COBOL, many of whom are now millennials, know that couldn’t be further from the truth.

A question for COBOL critics: Where have you been the last five years? Prior to 2013, I would have said that COBOL was stale, wasn’t growing, wasn’t exciting and had questionable return as a language for new computer professionals to learn. I think the exact opposite today.

There have been innovations and enhancements added to COBOL that would make Jean Sammet and Grace Hopper smile with glee—and now those modernizations are coming down a Continuous Delivery pipeline for COBOL V6.1.

COBOL V6 Continuous Delivery

To deliver new features customers demand, IBM has developed new and not-so-small compiler enhancements in an Agile environment that supports COBOL V6 Continuous Delivery. Many IBM laboratories are adopting Continuous Delivery, including but not limited to those for PL/I, CICS, DB2, IMS and MQSeries. Additionally, bug fixes for COBOL have gone from a trickle, as in IBM Enterprise COBOL V4.2 and earlier, to almost monthly fix lists for COBOL V5/6.

IBM’s COBOL optimization efforts to leverage the new hardware features in the z Systems mainframe have spurred customers to ask for new features and functions in COBOL that will make programming in it easier. And not just features to assist migration or use on the z Systems mainframe, but more. For example:

  • UNICODE capability for some intrinsic functions, XML functions, UNBOUNDED tables and groups, and SMF89 record generation to track usage
  • VSAM extended addressability, COBOL 2002 standard conformance and IBM extensions to COBOL
  • JSON support
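COBOL V6’s JSON support lets a program emit a data record as JSON text. As a rough, purely illustrative analogue (the record layout and field names here are invented, and Python stands in for the COBOL statement), the effect resembles serializing a record:

```python
import json

# A COBOL group item such as:
#   01 CUSTOMER.
#      05 CUST-ID    PIC 9(5).
#      05 CUST-NAME  PIC X(20).
# can be emitted as JSON text. A hand-rolled Python analogue:
customer = {"CUST-ID": 12345, "CUST-NAME": "ACME CORP"}
text = json.dumps(customer)
print(text)
```

The point of the feature is the same either way: structured mainframe data becomes directly consumable by web and mobile front ends without a separate conversion layer.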

How Recent Compiler Options Help You

IBM COBOL V6.1 Continuous Delivery compiler options and features that were recently delivered are described in some detail below. If you want a more extensive description and usage information, refer to the hyperlink for each APAR number:

  • INITCHECK: Locates uses of uninitialized data items (September 2016 APAR PI68226)
  • NUMCHECK: Validates PACK-DECIMAL or BINARY data (February 2017 APAR PI71625)
  • PARMCHECK: Helps detect mismatches between the size of arguments passed by CALLing programs and the parameters received by CALLED programs (April 2017 APAR PI78089)
  • INLINE: >> INLINE [ON | OFF] compiler directives and the INLINE compile option allow manual control of the compiler optimization process within COBOL source code, useful in paragraphs and sections referenced by PERFORM statements (April 2017 APAR PI77981)
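NUMCHECK, per the description above, validates packed-decimal (COMP-3) and binary data at run time before it is used. A minimal Python sketch of the kind of check involved for a packed field (the accepted sign codes here are a simplified subset of what the hardware allows):

```python
def packed_decimal_valid(data: bytes) -> bool:
    """Check a COMP-3 (packed-decimal) field the way a NUMCHECK-style
    runtime test would: every digit nibble must be 0-9 and the final
    nibble must be a valid sign code (simplified subset: C, D, F)."""
    nibbles = []
    for b in data:
        nibbles.append(b >> 4)      # high nibble
        nibbles.append(b & 0x0F)    # low nibble
    *digits, sign = nibbles
    return all(d <= 9 for d in digits) and sign in (0x0C, 0x0D, 0x0F)

print(packed_decimal_valid(bytes([0x12, 0x3C])))  # +123 packs as X'123C' -> True
print(packed_decimal_valid(bytes([0x1F, 0x3C])))  # digit nibble X'F' is invalid -> False
```

Fields that were never properly initialized, or that were overlaid with character data, fail exactly this kind of test, which is why these options matter during migration.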

IBM COBOL V6.2 Continuous Delivery introduces even more changes to assist migration:

  • A new compiler option, COPYLOC, allows you to specify an additional location to be searched for copy members during the library phase.
  • New RULES suboptions, OMITODOMIN|NOOMITODOMIN and UNREF|NOUNREFALL|NOUNREFSOURCE, provide you with additional compiler controls.
  • ZONEDATA(NOPFD) and ZONEDATA(MIG) effects are expanded to MOVE statements, comparisons, and computations for USAGE DISPLAY or PACKED-DECIMAL data items that could contain invalid data (digits, sign code, or zone bits), with actions more compatible with earlier compilers.
  • An additional warning message indicates when a CICS DFHCOMMAREA is larger than 32 KB.
  • The CICS Reserved Word Table supplied by IBM is updated.

Reasoning for COBOL V6 Continuous Delivery Model

So, what’s the point of COBOL V6 Continuous Delivery? Effectively, IBM is producing features that, when tested and ready, are delivered in the normal service stream. Note that certain customers have requested these features and have a specific reason and need, usually as an aid to assist migration from older COBOL compilers to new optimized COBOL releases.

Significantly, there is no “new compiler” release, but instead PTFs only to this latest COBOL V6 release. With COBOL V6 Continuous Delivery, IBM can effectively make the latest version more attractive and instill more business value to the z Systems platform for those millions of COBOL programs that “run the world.”

Read the V6.1 announcement letter and the V6.2 announcement letter for further details.

What AMI DevX Is Doing

AMI DevX is treating each IBM COBOL V6 Continuous Delivery PTF as “in-service maintenance” and will support these new features as a maintenance item as we get them. Some may take a while to support, others may be quick to deliver.

AMI DevX continues to collaborate with IBM development before and after general availability of the Continuous Delivery features to ensure that our solutions fully address customer needs. We applaud IBM’s efforts to deliver new features and functions that leverage the power and the breadth of the recent z Systems hardware technologies. Fast always beats slow, and IBM COBOL is moving fast, so we need to keep pace to ensure our products are enabled sooner rather than later.

AMI DevX intends to support new COBOL V6 Continuous Delivery features when available through the maintenance process that is already set up with customers for our products. That means PTFs when necessary and monthly or quarterly rollups of support for these features (cumulative maintenance). Customers that have immediate needs for these new features or Continuous Delivery PTFs should make that known by opening an AMI DevX support call.

What Customers Must Do

But there’s a problem: Most customers don’t have an existing process to apply monthly maintenance, or pre-configured environments to do all the testing required to validate functional operation with new COBOL V6 Continuous Delivery features.

IBM’s moving COBOL into the new Agile age, and customers need to move with it to allow these new COBOL features to be quickly adopted, tested and implemented. We’ve found that numerous problems reported in the field are due to missing maintenance for both IBM COBOL and ISV software.

Customers that are implementing the most recent IBM Continuous Delivery enhancements, say using the March 2018 FIX LIST for COBOL, must obtain and install the matching AMI DevX fixes. This will require opening a support call to ensure the very latest fixes are installed.

The need for quick and accurate testing is evident. This is the perfect opportunity to consider implementing more Agile solutions in a DevOps environment. Tools that improve COBOL code quality will reduce the time and resources spent. Solutions such as Topaz Workbench and Topaz for Total Test can automate your testing and reduce the time spent in test validation.

Optimization Is King

Mainframe compiler optimization isn’t new. IBM OS/VS COBOL V2.4, the 1983 version of the venerable language, had this feature, as does every compiler up to and including IBM Enterprise COBOL V4.2. Unfortunately, for more than a few reasons, optimization wasn’t widely used, subsequently languished and hence was given little of the care, attention and feeding it deserved.

In 2013, IBM Enterprise COBOL V5 marked the revival of IBM’s mainframe initiative to instill new life into compiler optimization and finally take advantage of customer investments in the new hardware features on the z Systems processors. Finally, customers could get more ROI for their large mainframe hardware purchases.

The new COBOL V5 effort to revive optimization didn’t happen overnight, but over years. It has been five years since COBOL V5 hit the streets in June 2013 and COBOL optimization is still evolving. We’ve gone through four compiler versions, from V5.1 to V5.2 to V6.1 to the most current V6.2. Continuous Delivery will now accelerate that process.

How to Complete a Successful COBOL Version 5.2 or 6.1 Rollout
https://www.bmc.com/blogs/successful-cobol-migration-rollout/ (Thu, 13 Apr 2017)
After you’ve worked through our suggested steps for migrating your COBOL programs to COBOL Version 5.2 or 6.1, it’s time to begin an initial rollout. Here's how.

After you’ve migrated your COBOL programs to COBOL Version 5.2 or 6.1, it’s time to begin an initial rollout. As always, be sure to refer to the IBM Migration Guide. Following those guidelines will generally make the migration go more smoothly.

Before you roll out your COBOL Version 5.2 or 6.1 programs, set up some comparative testing. Run the data from the previous COBOL release you were using. Later, when you run your new programs, you can compare them to the old programs. This will help you quantify the benefits of migrating to the new release to business leaders. Every machine cycle saved allows other applications to use those saved cycles for better performance.
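The comparative testing suggested above can be as simple as capturing a baseline output from the old compiler’s programs and diffing it against the post-migration run. A minimal sketch of that harness (the sample data is purely hypothetical):

```python
def compare_runs(old_output: str, new_output: str) -> list[str]:
    """Return line-by-line differences between a baseline run and a
    post-migration run; an empty list means the outputs match."""
    diffs = []
    for i, (old, new) in enumerate(zip(old_output.splitlines(),
                                       new_output.splitlines()), start=1):
        if old != new:
            diffs.append(f"line {i}: {old!r} != {new!r}")
    return diffs

baseline = "TOTAL 100\nCOUNT 7"   # captured before migration
migrated = "TOTAL 100\nCOUNT 7"   # captured from the recompiled program
print(compare_runs(baseline, migrated))  # [] when results match
```

Pair the output comparison with CPU-time figures from your performance monitor and you have both correctness evidence and the savings numbers business leaders will ask for.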

Rolling Out Your Programs

Begin your initial COBOL Version 5.2 or 6.1 rollout with one business application and document the migration process. This will serve as a template for subsequent rollouts. During and after the initial rollout, seek feedback that will help you inform other business units about the successes and issues encountered during the rollout. Discovering issues early on will make it easier to correct them and adjust your template for later rollouts.

After the initial COBOL Version 5.2 or 6.1 rollout, go back to that comparative testing we talked about for an early idea of potential savings. See what worked and prepare to move forward, but don’t begin rolling out everything.

Only convert those applications that can benefit from the new versions of COBOL. Start with your highly arithmetic, floating-point or complex mathematical applications. These application types will take advantage of the z architecture of COBOL Versions 5.2 and 6.1 and will benefit from higher levels of optimization. If applications do not take advantage of the z architecture extensions, or would not benefit from higher levels of optimization, you should leave them in their current versions of COBOL.

As a side note, for sites with 24/7 CICS regions, replacing PDS with PDSE won’t be easy. What you can do is dynamically insert program objects ahead of your load libraries in the CICS RPL. This will provide a way for you to take advantage of new COBOL Version 5.2 and 6.1 programs for CICS.

Continue this COBOL Version 5.2 or 6.1 rollout path, releasing programs that will benefit from the new COBOL version you’ve decided to go with and testing those against their previous versions to calculate savings.

Reviewing Your COBOL Migration Strategy

Before you roll out, it may be useful to go back and review the steps that you’ve taken. For your convenience, here is a list of things to note, compiled from past blog posts on the subject:

  1. Don’t convert your COBOL programs to Java. This requires sacrificing efficiency for cost saving, and the cost saving you generate will be short-lived and evolve into a series of long-term pitfalls costing you more money.
  2. The first step in a COBOL Version 5.2 or 6.1 migration strategy is roadmap planning. You need to understand your current programs, consider some potentially problematic areas you could happen upon when migrating them, understand your hardware machines, and determine which version of COBOL you’re going to migrate to (there’s an explanation for why you have a choice).
  3. If you have a choice between COBOL versions, which is better? Starting at COBOL Version 6.1 eliminates a duplicate compiler upgrade (once to 5.2 and again to 6.1). However, by migrating to COBOL Version 5.2 first, you can save costs by leveraging the IBM Enterprise COBOL trials for both 5.2 and 6.1 to gain maximum experience and application comfort before making a formal decision to upgrade.
  4. A large percentage of these migration problems are data related, so thorough testing is crucial to discovering them. The testing you do now will save you from countless hours of debugging, potentially losing revenue and disappointing your customers.
  5. There are several types of performance optimization available in COBOL Versions 5.2 and 6.1, but you may need some guidance on how to identify and use these new COBOL optimization features.

Digital business is driving more mainframe activity. Mainframe shops should optimize their programs so they perform well against the modern demands they face from new and growing digital engagement. The best way to optimize your COBOL programs is to migrate them to COBOL Version 5.2 or 6.1. Good luck along your journey.

How to Identify and Use New COBOL Optimization Features
https://www.bmc.com/blogs/new-cobol-optimization-features/ (Tue, 14 Mar 2017)
Here is an overview of the types of performance optimization available in COBOL Versions 5 and 6 and how to identify and use these new COBOL optimization features.

You’ve tested and debugged your COBOL programs; now it’s time to optimize them. In this post, we’ll provide an overview of the types of performance optimization available in COBOL Versions 5 and 6, as well as show you how to identify and use these new COBOL optimization features.

IBM COBOL Versions 5 and 6 have provided more than a few types of optimization to date, and more are likely to arrive in future releases and through the normal COBOL maintenance stream. It’s important, however, to provide a measurement of your gains using a performance profiler like Compuware Strobe.

It’s imperative to provide application performance baselines before and after optimization so capacity and performance specialists can get the data they need. This data can provide future guidance for your applications and be significant to reducing your 4HRA costs.

Implementation Hints and COBOL Optimization Opportunities

Before you start looking for big optimization opportunities, start small. Run with OPT(0) to flush any data format problems, then move to OPT(1) or OPT(2). Make sure you test for accurate results at each optimization level, as some bugs have appeared only with higher OPT levels.

When you’re ready to search out optimization opportunities, here’s what IBM has been focused on:

  • PERFORM statements that can be eliminated, made more efficient by “block reordering” or replaced with more efficient inline code. These optimization processes are especially helpful if the PERFORMs are frequently called because it removes subroutine calls, which add overhead, especially if they’re called repetitively.
  • COMPUTE statements, especially anywhere significant arithmetic is involved. This moves arithmetic to higher-performing, newer instructions, such as SIMD. Decimal arithmetic can be exceptionally fast using the newer DFP features on newer z Systems hardware, z9 and higher. Conversions to and from DECIMAL were especially slow on COBOL Version 4 and below. The new DFP conversion instructions (CONVERT FROM/TO PACKED) on the z13 accelerate a COBOL “DIVIDE A BY B GIVING C.”
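What in-lining a frequently PERFORMed paragraph buys can be sketched by hand. The following Python analogue is only illustrative of the transformation, not of anything the COBOL optimizer literally emits; the names are invented:

```python
# The "PERFORM" version pays a call on every iteration:
def add_tax(amount):                     # analogue of a PERFORMed paragraph
    return amount * 1.07

def total_with_calls(amounts):
    return sum(add_tax(a) for a in amounts)

# The optimizer can instead expand the paragraph body at the call site,
# removing the per-item call overhead while producing identical results:
def total_inlined(amounts):
    return sum(a * 1.07 for a in amounts)

vals = [100, 200, 300]
print(total_with_calls(vals) == total_inlined(vals))  # True
```

The payoff scales with call frequency, which is exactly why the compiler (and the >> INLINE directive) targets hot PERFORM paths first.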

There are also compile options available during testing that help programmers identify issues that would affect application optimization:

  • RULES(NOLAXPERF) will issue warnings for inefficient coding practices, as well as compiler options that can impact performance.
  • NUMPROC(PFD) can significantly improve decimal and zone-decimal processing; however, the data must adhere to specific IBM system standards (refer to the IBM programming guide for more information).

Areas of Greatest Return

There are significant areas of improvement in COBOL Versions 5 and 6 and in hardware systems that could accelerate your COBOL programs.

Binary Floating Point (BFP)

COBOL Versions 5 and 6 have significant performance improvements when optimizing PERFORM statements and anywhere there’s significant arithmetic. Version 5 will use all 16 BFP registers. Older COBOL versions used only the original four BFP registers. Using register arithmetic is faster than using memory and swapping to/from registers. Having 12 more BFP registers increases register use, keeps memory use at a minimum and helps optimize hardware cache use.

Vector Registers

The z13 Systems introduced vector register instructions, enabling COBOL to use the z13 single instruction, multiple data (SIMD) features to help arithmetic performance. There are 32 vector registers on the z13 and there is a movement to use more SIMD instructions to further performance gains, according to IBM Systems Magazine. In his SHARE presentation, Tom Ross described how the z13 instructions accelerate COBOL arithmetic calculations, including integer and floating point computations, and string manipulations like string search and string comparisons.

Decimal Floating Point (DFP)

Another significant area of improvement involves decimal arithmetic acceleration using the DFP feature on the newer machines. DFP was implemented starting with the z9 Systems and enhanced for each successive machine (five total) through the z13. PACKED-DECIMAL data in your COBOL programs could easily benefit and be especially quick for complex calculations.

Conversions to and from decimal and other data formats could be accelerated with DFP conversion instructions, too. IBM has shown that replacing decimal conversion instructions generated by the COBOL Version 4.2 compiler could be many times faster by just using the newer DFP conversion instructions. Elimination of conversions that usually involve the use of common subroutine calls with in-line DFP instructions can reduce CPU usage and increase performance.

Single Instruction, Multiple Data (SIMD)

A final area of great return, the SIMD feature on the z13 Systems can be exploited by COBOL Version 5.2 and 6.1. Functions like INSPECT TALLYING or the REPLACING statement can benefit. We’ll probably see future COBOL versions exploit more SIMD features.

Keeping Hardware and Software Up-to-Date

COBOL has just started with optimization and use of new features on the newer machines. You’ll definitely see those advantages appear in newer releases of COBOL. This fact makes it imperative to keep your software and hardware up to date. This list is not in any way complete, but types of optimization that we’ve encountered include:

  • Statement deletion: Some COBOL statements are just gone, and if your debugging tool isn’t enabled, you’ll have problems.
  • In-lining: Instead of calling a subroutine/function as the COBOL code implies, optimization could instead insert the code in the COBOL stream. You’ll see no subroutine/function call.
  • Arithmetic type substitution: This is an equivalent way to perform the calculation. Use of DFP on all z Systems machines and use of SIMD on z13 are common COBOL Version 5 and 6 techniques to accelerate arithmetic computations.
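Statement deletion is the easiest of these to see in a hand-worked example. The Python sketch below is only an analogue of the transformation (names invented); the dead first assignment is the kind of statement an optimizer removes, which is why a non-enabled debugger finds nothing to step onto:

```python
# In the original source, the first assignment is never used:
def report_original(price, qty):
    subtotal = price * qty            # dead store: overwritten before use
    subtotal = price * qty * 1.07
    return round(subtotal, 2)

# The optimizer deletes the dead statement entirely; results are
# unchanged, but that source line no longer exists in the object code:
def report_optimized(price, qty):
    subtotal = price * qty * 1.07
    return round(subtotal, 2)

print(report_original(10, 3) == report_optimized(10, 3))  # True
```

In-lining and arithmetic type substitution have the same character: behavior is preserved while the generated instructions stop corresponding line-for-line to the source, so your debugging and fault-analysis tools must understand the optimized mapping.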

Hardware provides the features but you can’t take advantage unless you keep your software up to date. Plan a regimen to update both hardware and software. Updating the hardware but not the software will likely lead to discussions around ROI as your optimization benefits decrease. Keep statistics and measure with the proper tools. You’ll need to show progress and that will keep the skeptical ROI discussions to a minimum.

How to Properly Test and Debug COBOL Programs You’re Migrating
https://www.bmc.com/blogs/test-and-debug-cobol-programs/ (Tue, 21 Feb 2017)
A large percentage of COBOL migration problems are data related, so thorough testing is crucial to discovering them. The testing you do now will save you from countless hours of debugging, potentially losing revenue and disappointing your customers.

It’s inevitable you will encounter challenges when migrating to COBOL Version 5.2 or 6.1. They don’t preclude the benefits of migrating, but they exist. According to IBM, a large percentage of these migration problems are data related, so thorough testing is crucial to discovering them. The testing you do now will save you from countless hours of debugging, potentially losing revenue and disappointing your customers.

Keep in mind there are changes to COBOL Versions 5.2 and 6.1 that affect how debuggers operate, so your toolset must support those changes. There are some small external changes to the 5.2 and 6.1 languages that COBOL programmers deal with every day; however, the internal changes are more extensive.

The “back-end” of COBOL Versions 5.2 and 6.1 has radically changed and keeps changing, even daily. It’s imperative to use a toolset that is enabled for these versions and can continuously deliver support for the frequent changes made to them.

Here are some best practices to follow to properly test and debug COBOL programs you’re migrating:

1. Prepare Your Testing Environment

Brush off your testing suites and introduce a strict regimen for testing. As you move through your applications, you’ll become more experienced and will encounter problems that are familiar and repeatable. Categorize the problems you find and build a test suite that finds those common problems related to both data and poor coding practices. Likewise, optimize your testing by keeping your IBM compiler and Language Environment maintenance up to date; refer to the IBM COBOL fix list for this information. A systematic plan must be adopted to keep the compiler and Language Environment up to date with monthly fixes. Testing with outdated maintenance would be wasted time and effort.

2. Fix Your COBOL Programs

Before you start nonchalantly migrating COBOL programs, you should analyze each one and determine where fixing is required. Our colleague Glenn Everitt wrote about the importance of this a while back. His conclusion: if you neglect fixing your COBOL programs early on, your migration efforts will be more difficult than you can imagine. We agree. When testing each application lined up for migration, verify that each provides the correct results, performs according to specifications and scales to expected norms. Remove all exceptions and abend conditions too.

3. Start Unit Testing

Unit testing of mainframe applications is also important. This process allows developers to test the small parts of an application to find and fix low-level bugs before moving into testing processes that involve larger parts.

However, unit testing is one of the weakest spots in the mainframe development life cycle. Because it’s a tedious manual process that sucks time and energy away from other development tasks that must be accomplished for code to move forward, unit testing is often disregarded or left incomplete.

Fortunately, the dearth of unit testing in mainframe environments is gradually changing with the advent of automated unit testing tools that allow developers to:

  • Validate code changes immediately
  • Eliminate dependency on specialized mainframe knowledge
  • Automatically collect and keep test data with unit tests using Compuware tools
  • Automatically generate unit tests and test assets

Automated unit testing makes it easier to deliver quality with velocity.

The idea of automation and testing applies to migrating your programs to COBOL Versions 5.2 or 6.1 too. Unit testing would ensure program quality early on, preventing future migration issues, and automation would accelerate that testing in the already time-consuming migration process.

4. Invest in Source Debuggers

As mentioned earlier, changes to COBOL Versions 5.2 and 6.1 affect how debuggers operate, so you should invest in source debuggers that support these newer versions. Modern debuggers provide the best experience and direct access to data from the COBOL programmer perspective. Avoid using a source tool unintended for 5.2 and 6.1. That’s like stepping through machine code and will severely hinder debugging. We’ve worked with many customers to successfully migrate their applications to COBOL Versions 5.2 and 6.1 using a variety of Compuware’s application debugging, fault and failure management, and performance analysis tools, including:

  • Xpediter for visual, interactive debugging, even if programs are compiled OPT(2)
  • Abend-AID for abend analysis of 5.2 and 6.1 programs, including optimized programs
  • Strobe for profiling of 5.2 and 6.1 programs, including optimized programs

5. Utilize Compile Options

Utilizing compile options will help you identify testing issues prior to moving programs into production. There are several compile options that may be beneficial to identifying issues in tests, but these are the core options worth noting:

  • SSRANGE identifies out-of-range table issues, generating an IGZ006S message when an index or subscript points to storage beyond the bounds of a table. The use of SSRANGE should be restricted to testing, as IBM has indicated it can increase CPU time by an average of 18 percent.
  • ZONECHECK either generates warnings or causes a program to abend if the zone bits are not numeric. It will insert IF NUMERIC class tests in front of any zoned decimal variables acting as sending fields.
  • DIAGTRUNC generates warning messages during compilation to alert the programmer that, during a MOVE statement, the receiving numeric variable contains fewer integer digits than the sending field. This can help ensure truncation will not create an issue.
  • RULES(NOEVENPACK) generates a warning message when packed data contains an even number of digits. According to the Enterprise COBOL for z/OS Programmer Guide V5.2 (or V6.1), packed data should always have an odd number of digits to prevent truncated data. However, systems programmers can change the warning message to an error, forcing a coding change.
  • RULES(NOLAXPERF) will issue warnings for inefficient coding practices, as well as compiler options that can impact performance.
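Most of these options revolve around catching invalid data before production. As a rough illustration of what an inserted IF NUMERIC class test has to verify for a zoned decimal (USAGE DISPLAY) field, here is a Python sketch; the rules shown are a simplification of the actual EBCDIC encoding:

```python
def zoned_decimal_numeric(data: bytes, signed: bool = False) -> bool:
    """Roughly what an IF NUMERIC test verifies for an EBCDIC zoned
    decimal field: each byte's zone nibble must be F (the last byte may
    carry sign zone C or D when the field is signed) and every digit
    nibble must be 0-9."""
    for i, b in enumerate(data):
        zone, digit = b >> 4, b & 0x0F
        last = i == len(data) - 1
        zone_ok = zone == 0xF or (signed and last and zone in (0xC, 0xD))
        if not zone_ok or digit > 9:
            return False
    return True

print(zoned_decimal_numeric(bytes([0xF1, 0xF2, 0xF3])))  # "123" -> True
print(zoned_decimal_numeric(bytes([0xF1, 0x40, 0xF3])))  # embedded EBCDIC blank -> False
```

Fields overlaid with spaces or binary data, a classic source of migration failures, fail exactly this check even though older compilers may have processed them without complaint.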

Ensuring you properly test and debug COBOL programs early in the process will prevent bigger issues from inhibiting your progress through the ensuing steps of a migration. The benefits will be noticeable in the next phase of your plan, application optimization.

Choosing Your COBOL Migration Path: Leading with Version 5.2 or 6.1
https://www.bmc.com/blogs/choosing-cobol-migration-path/ (Thu, 09 Feb 2017)
Deciding which COBOL release you want to migrate to first and then following through with a stable migration requires a well-researched and carefully considered plan. Here we present the similarities and differences between the two latest versions of COBOL.

Converting COBOL programs to Java can create a multitude of problems, including performance issues. Migrating to a newer version of COBOL can yield desired reductions in CPU costs without the headaches caused by migration to Java. In this post we’ll discuss the difference between COBOL Version 5.2 and 6.1 to help you determine which release you should migrate to first.

To start, let’s look at some fundamental differences between COBOL Version 5.2 and 6.1. Per IBM’s announcement of COBOL Version 5.2, the upgrade included new and improved features such as:

  • Support for the new IBM z13 hardware
  • New, restored and enhanced compiler options for ease of migration and programmer productivity
  • New features added from the ISO 2002 COBOL Standard
  • New IBM extensions to COBOL
  • Product-related enhancements

However, more improvements and features were added in the release of COBOL Version 6.1, including:

  • Increased compiler capacity
    This provides a new “back end” that converts the intermediate form of a program to machine code. It resolves problems in previous COBOL versions that failed to compile very large COBOL programs. Large programs are typically those that used “code generators.” These code generators spew out a lot of code and use a lot of CPU and memory resources. The IBM “back end” for COBOL Version 6.1 uses 64-bit storage to overcome this problem. You may use more CPU to compile a program with 6.1 than with 4.2, but if you run the program several times a day, you will end up using far less CPU during run-time.
  • New COBOL language features
    More new language features were added from the ISO 2002 COBOL Standard, introducing the use of ALLOCATE, FREE and INITIALIZE statements.
  • New and enhanced compiler options
    These new options ease migration and improve programmer productivity, but to implement some of them, you may have to consider how they’re to be used and when to use them properly. For example:

    • VSAMOPENFS allows compatibility with “file status 97” in pre-COBOL Version 6 programs
    • SUPPRESS/NOSUPPRESS allow debuggers to use the copybook information in the listing
    • The SSRANGE suboption enhances range checking for subscripts, reference modification, group ranges and indexes at run time
    • Diagnostic message for ZONECHECK(MSG) compiler option assists the detection of invalid zoned decimal data items
    • LVLOPTION was replaced by a seven-character build level identifier in the format PYYMMDD to the compiler listing header
  • Runtime and product-related enhancements
    For RENT compiled programs, WORKING-STORAGE will be acquired from HEAP storage and will be initialized at the COBOL program call and not when the program object was loaded. This can help applications by:

    • Freeing storage below the 16MB line and using storage in the HEAP
    • Reducing storage usage and performance tuning in Table SORT
    • Improving performance for INSPECT, UNSTRING, SEARCH ALL (exploiting z13 features such as SIMD)

Which Version Is Better?

To reiterate the final thought from our last post, migration to both COBOL Version 5.2 and 6.1 is a smart idea. It’s true that starting at COBOL Version 6.1 eliminates a duplicate compiler upgrade (once to 5.2 and again to 6.1). However, by migrating to COBOL Version 5.2 first, you can save costs by leveraging the IBM Enterprise COBOL trials for both 5.2 and 6.1 to gain maximum experience and application comfort before making a formal decision to upgrade. There will not be any Single Version Charging (SVC) changes during these trial periods. Consult the IBM announcement letter Enterprise COBOL Developer Trial for z/OS, V6.1 for more details.

Keep in mind COBOL Version 5 can be ordered until September 2017, so you have a limited time to take advantage of the benefits of migrating to this version. Also, consider that COBOL Version 5.1 doesn’t support the z13; 5.2 is the recommended choice for the first migration phase because it provides XML compatibility with the Enterprise COBOL Version 3 parser.

Be sure to use ZONECHECK and ZONEDATA to identify “improperly formatted data fields.” It has been said that many migration problems are data related. Data that was accepted as valid (or not detected as invalid) by Enterprise COBOL Version 4.2 and earlier could now be invalid with COBOL Version 5.2 and 6.1.

And, of course, spend some time reading through the migration guides for each version to get a better sense of where you’re going:

Performing a Migration

The following steps and, admittedly very technical, information should help guide you on your migration journey.

    1. Prepare to Migrate
      To prepare for a migration to the latest version of COBOL, consider whether you have programs that were last compiled under OS/VS COBOL, or COBOL V2. You need to upgrade the code to Enterprise COBOL 4.2 before migrating to anything higher. Add FLAGMIG4 and FLAG(I,I) to identify necessary coding changes. Next, determine what programs you want to upgrade. Focus on applications that consume CPU resources as candidates to convert first. Note that migration to PDSEs is required. Try to do that when you are at Enterprise COBOL V4.2. Also, note there will be more work datasets required during compile time.

 

    2. Brace for Compile Time “Shocks”
      There are a few “shocks” to brace for. Compile time CPU usage will rise five to 15 times its current level, depending on the optimization level, and compile time memory usage will rise as much as 20 times. IBM requires a minimum region size of 200MB for the compile step, and many shops default to 500MB or 0M because of the amount of storage optimization requires.
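In JCL terms, the change can be as small as the REGION parameter on the compile step (a sketch; the step name is a placeholder and your procedures will differ):

```jcl
//* V4.2 era:  a few MB of region was often enough to compile
//* V5/V6 era: allow 500M, or 0M for the installation maximum
//COBCOMP  EXEC PGM=IGYCRCTL,REGION=500M
```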

 

    3. Find Invalid COBOL Data
      Invalid COBOL data must be found through repetitive testing during the migration process. This data generally cannot be found with a single compiler feature or option, but it can be found programmatically with COBOL statements such as IF NUMERIC. You can also use compiler options such as SSRANGE to flush out problems during testing.
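A minimal sketch of that programmatic check follows; the field and paragraph names here are hypothetical:

```cobol
       01  WS-AMOUNT          PIC S9(7)V99 COMP-3.
      *
      * Defensive test: catch invalid digits or sign nibbles in
      * data that V4.2 and earlier compilers silently tolerated
           IF WS-AMOUNT IS NOT NUMERIC
              DISPLAY 'INVALID DATA IN WS-AMOUNT'
              PERFORM 9000-HANDLE-BAD-DATA
           END-IF
```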

 

    4. Stay Up to Date
      Keep the service updates for the compiler and Language Environment (LE) current. New PTFs are released nearly every month; refer to the IBM COBOL Fix list web page for this information.

 

    5. Learn and Watch for Pitfalls
      As usual, there are a few pitfalls to watch for, and it’s important to learn from what others have gone through. Refer to the literature and bug lists on the above-mentioned fix list web page, and resolve problems consistently across each of your applications. Later migrations will get easier as you build a comprehensive list of common errors in your codebase.

 

  6. Understand the Impact of New Compile Options
    Adding new compile time options will help quantify the impact. IBM recommends compiling with SSRANGE, ZONECHECK and OPT(0) to flush out all table misuse and invalid data, then recompiling with NOSSRANGE, NOZONECHECK and OPT(2) for QA and production.

    • SSRANGE can add as much as 18 percent more CPU on average.
    • ZONECHECK adds an IF NUMERIC test before numeric statements, which also increases run-time CPU. However, SSRANGE, ZONECHECK and related options will definitely speed up the cleanup of bad zone bits.
    • Compile with NUMPROC(PFD), which is more efficient.
    • ZONEDATA confirms that the zoned portion of decimal data is valid and provides compatibility with compiler behavior prior to COBOL Version 6.1.
    • RULES causes the compiler to issue warning messages when it detects non-standard or poor coding practices that were tolerated prior to COBOL Version 6.1. Systems personnel can promote the warning to an error to force changes.
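Expressed as CBL (PROCESS) statements, the two option sets IBM recommends might look like this. This is a sketch: suboptions such as ZONECHECK’s message level vary by compiler release and site standards.

```cobol
      * Test/debug compile: surface table misuse and bad data
       CBL SSRANGE,ZONECHECK(MSG),OPT(0)
      * QA/production compile: checks off, full optimization
       CBL NOSSRANGE,NOZONECHECK,OPT(2),NUMPROC(PFD)
```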

Deciding which COBOL release you want to migrate to first and then following through with a stable migration requires a well-researched and carefully considered plan. As you begin migrating your COBOL applications, testing and properly debugging early in the process are also vital to avoiding future problems.

]]>
First Steps when Migrating to the Latest Version of COBOL https://www.bmc.com/blogs/migrating-latest-version-of-cobol/ Tue, 24 Jan 2017 14:00:47 +0000 http://insidetechtalk.com/?p=16907 The best way to optimize your COBOL programs is to migrate them to the latest version of COBOL. The first part of that strategy involves roadmap planning.]]>

Today, digital business is driving more mainframe activity, according to a survey from BMC. Instead of sacrificing application performance to save a little cash, you should optimize your mainframe programs so they perform well against the modern demands they face from new and growing digital engagement.

The best way to optimize your COBOL programs is to migrate them to the latest version of COBOL, spanning COBOL Version 5 and 6. The first part of that strategy involves roadmap planning.

First Steps Migrating to the Latest Version of COBOL

There are a few first steps you need to take before beginning a migration to the latest version of COBOL:

1. Read the Migration Guide for the COBOL release you will be upgrading to.

The correct guide can be found via the IBM Enterprise COBOL for z/OS documentation library. It will provide the cautionary points pertinent to your migration, but up front, there are a few to consider, including:

  • Source language incompatibilities exist when migrating from OS/VS COBOL and VS COBOL II to any newer COBOL version
  • Data problems undetected by the compiler must be found by repetitive testing (some helpful compiler options: SSRANGE, RULES and DIAGTRUNC)
  • JCL changes, a larger REGION size and more work datasets are required
  • Some compiler options have been removed, such as NUMPROC(MIG) (refer to your Migration Guide for a complete list)
  • Executables must be in PDSE libraries
  • OS/VS COBOL or VS COBOL II programs should be upgraded to COBOL V4.2 before recompiling with COBOL Version 5 or 6

2. Determine a business application to serve as the template for upgrading others.

A suggested approach is to use a business application that has encountered elongated run times to see the potential impact of recompiling problematic code under COBOL Version 5 or 6. You don’t have to recompile every program, just those you think might benefit from the new compiler and advanced optimization. Use mainframe profiling tools to set a performance baseline before you start. For example, Compuware Strobe, our application performance monitoring and analysis tool, can capture the information for later use.

3. Analyze the load library modules to determine the release of COBOL the business application was last compiled under.

When going about this, refer to your Migration Guide to learn about changes that may be necessary to make within the code for those modules compiled in earlier versions of COBOL. There may also be compile options that have been deleted, have changed in function or offer new options. Portfolio Analyzer—accessed via the Library Utility (3.1) of Compuware File-AID file and data management tool—allows you to view and export COBOL and PL/I compiler options for all releases to a .CSV file for later analysis.

In addition to these steps, there are other preliminaries that will ease the migration execution, including learning from your own and others’ mistakes and making sure you understand your hardware machines.

Learning from Mistakes

Although migrating older COBOL programs to the latest version of COBOL presents fewer pitfalls than converting them to Java, you should still consider potential hazards that may crop up along the journey.

For one, there’s a chance you could take on too much at one time. It’s best to stick to a small application or set of applications to start, as mentioned in point two above. You can learn from that experience, and apply your new understanding to other application migrations to prevent recurring missteps.

Also, avoid making assumptions about programs before migrating them. Do some analysis up front. You need facts about the programs that help you understand their composition and behavior to make good determinations that help you prevent problems and avoid creating them.

Know Your Hardware Machines: ARCH and OPT

It’s important to understand some background information on COBOL and how various versions affect performance. The COBOL version an application was last compiled under has a significant effect on how you migrate it.

From the language perspective, COBOL has been upgraded many times: OS/VS COBOL added features of the ANSI 74 standard, VS COBOL II added features of the COBOL 85 standard, COBOL Version 5 added elements of the ISO 2002 standard, and a future COBOL version may add elements of a newer standard.

But from the performance perspective, one of the past knocks against COBOL was its failure to take advantage of the newest processor hardware. IBM mainframe customers bought huge new machines, and COBOL didn’t use any of their newest features.

This meant all COBOL versions, from VS COBOL II (1985) through Enterprise COBOL Version 4.2 (2009), used the same back end and generated the same code. Hence, over the course of 25 years, the same older and less efficient code continued to be generated for COBOL and didn’t use any of the new features introduced on the z Systems machines, starting with the z900/800 (CY2000).

Fortunately, this changed with COBOL Version 5.1. All levels of COBOL starting with Version 5.1 are designed to take advantage of the newest machine hardware. Newer hardware machines have features that, if used, tend to offload work to the hardware, reducing CPU consumption and enabling them to run more efficiently.

The COBOL ARCH compile time option controls the use of these features. ARCH(6) targets the z990/890 systems, and levels run all the way up to ARCH(11) for the z13. Likewise, the COBOL OPT compile time option controls how aggressive, and potentially more efficient, an optimization the compiler generates. OPT(0) is the minimum optimization level and OPT(2) the maximum (OPT(0) does not mean “none,” but rather “some minimal amount”). Higher ARCH levels tend to generate more use of the newest machine features, while higher OPT levels tend to create more efficient code.

All COBOL versions default to OPT(0). COBOL Version 5.1 defaults to ARCH(6), a z990/z890 code base. (Note: COBOL Version 5.1 doesn’t support ARCH(11), a z13, and programs compiled with ARCH(10) will run on a z13, but will not generate code that exists only on a z13.) COBOL Version 5.2 and 6.1 default to ARCH(7), a z9 code base. However, all COBOL versions allow customers to change the defaults.

Still, a critical concern is the need to set a standard: the highest ARCH level used for any COBOL compile should be the lowest ARCH level of any machine in your enterprise. For example, if your enterprise consists of a z10, a z11 and a z12, setting the default to ARCH(8) will allow a program to run on all three machines without problems, whereas setting it to ARCH(10) will cause problems if a program is subsequently run on the z10.

Disaster site hardware is also of special concern, as disaster sites may be at a different ARCH machine level than your production machines. In the previous example, if the disaster site has a z9, you will have problems at the site, as there is the potential for hardware instructions to be generated for an ARCH type higher than your disaster recovery machine.
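Expressed as a compiler default, the rule of thumb from the two examples above might look like this (a sketch; substitute the ARCH level of your own oldest machine, disaster recovery site included):

```cobol
      * Production fleet: z10, z11, z12 -> ARCH(8) would be safe,
      * but a z9 at the disaster recovery site caps the level:
       CBL ARCH(7),OPT(2)
```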

Determine the Latest Version of COBOL You’re Migrating to

After these initial preparatory steps, you should begin thinking about where you want to go: COBOL Version 5.2 or 6.1. In our opinion, you should first try migrating to 5.2 to maximize experience and eliminate bugs through a couple of test sets.

Although there is no technical reason to start a migration at COBOL Version 5.2 and then jump to 6.1, customers need to determine what’s right for their environment. Consider this:

  • Starting at COBOL Version 6.1 means you upgrade the compiler only once, rather than once to 5.2 and again to 6.1. However…
  • You can save costs by leveraging the IBM Enterprise COBOL trials for both COBOL Version 5.2 and 6.1 to gain maximum experience and application comfort before making a formal decision to upgrade. There will not be any Single Version Charging (SVC) changes during these trial periods. Consult the IBM announcement letter Enterprise COBOL Developer Trial for z/OS, V6.1 for more details.
]]>
Seven Reasons Converting COBOL to Java to Save Cash Is a Red Herring https://www.bmc.com/blogs/converting-cobol-to-java/ Tue, 10 Jan 2017 14:00:36 +0000 http://insidetechtalk.com/?p=16813 Before executing a COBOL to Java on z/OS conversion, there are several unrecognized pitfalls companies should be aware of that render possible savings to be of no consequence.]]>

Companies are having discussions about reducing CPU costs through the wholesale conversion of COBOL applications to Java. Inviting as it seems at first glance, this option of reducing costs may be short lived and is a red herring because of unrealized issues and problems that occur when converting entire applications to Java.

Seven Issues of Converting COBOL to Java

A while back, a company told us they were considering converting their huge base of COBOL Version 3 and 4 programs to Java. They admitted the code would run less efficiently, but thought the conversion would dramatically reduce costs because the code would run on a specialty processor like a zIIP.

Before executing a COBOL to Java on z/OS conversion, there are several unrecognized pitfalls companies should be aware of that render possible savings to be of no consequence.

1. Hardware Resource Problems

The initial premise to run as much as possible on zIIP processors to save in 4HRA costs will instead become a hardware resource problem with the allocation and resource sharing of zIIP processors. Not just Java applications, but other uses of zIIP engines may be impacted.

For example, batch workloads written in Java could end up using a large part of the zIIP capacity. DB2 is a large user of zIIP engines and is invoked indirectly by batch and transactional workloads.

A DB2 workload redirected to a zIIP is managed by DB2, not the user, and the amount varies with each different DB2 release. This means you would need to keep up to date on DB2 releases and maintenance because there would be a higher percentage offloaded to the zIIP for more recent DB2 releases.

Additionally, DB2’s use of the zIIP would need to be balanced against other workloads like Java. Transactional workloads driving increased web usage on z/OS would also compete for zIIP capacity, including:

  • CICS – Java and web applications
  • IMS – remote web workloads, through IMS connectors and servers
  • WAS – Java web and server applications

There is an excellent discussion about how IBM Db2 for z/OS uses the zIIP processors. William Favero’s “It’s time to ask ‘Do I have an adequate zIIP capacity?’” is a must read.

2. Spillover to a GCP and Incurring MIPS Charges

Upfront planning is necessary to minimize spillover to a GCP and avoid extra MIPS charges. You may need to isolate workloads on different LPARs to prevent it.

As the zIIP workload grows, zIIP engine shortages will occur, requiring constant monitoring and corrective action. But there are hardware limitations to consider: the zIIP:CP ratio is 2:1 for the zEC12 mainframe and up, as of 2013. For example, five GCP engines allow up to 10 zIIP engines. The z13 and z14 maintain this 2:1 zIIP:CP ratio.

As the zIIP workload grows, you’ll need to plan for spikes that force the allocation of more zIIP capacity than you initially planned for. You’ll also need an algorithm for determining workload capacity. Some shops continually evaluate and add zIIP engines when their overall usage hits 70 percent, but workloads are usually underestimated and will spill over to a GCP. Analysis of SMF Type 30 and Type 70 records can give you an estimate.

3. “zIIP-able” Only Means Eligible

Work that is zIIP-able is considered zIIP eligible, but just because code is zIIP eligible does not mean it’s guaranteed to run on a zIIP, even if you have “sufficient capacity.”

In his PowerPoint “zIIP Experiences,” IBM’s Adrian Burke provides plenty of information and examples of what work is considered zIIP-able, zIIP-ed (work that executed on a zIIP) and Un-zIIP-ed (eligible work that executed on a general CP). IBM states that its workloads are limited in how much can be offloaded to a zIIP processor, and this is true for most workloads, IBM or not. Burke suggests the following percentages for these IBM workload types:

  • XML Parsing – 98%
  • CTG/CICS/ERWW – 40%
  • IMS Connect ERRW – 40%

It’s possible to force specific workloads to run only on zIIP processors by setting IIPHONORPRIORITY=NO in PARMLIB member IEAOPTxx. However, don’t do this if the applications have critical response time requirements.
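The PARMLIB setting in question is a one-line change (an illustrative fragment; the member suffix and any surrounding parameters are site-specific):

```
/* IEAOPTxx: zIIP-eligible work waits for a zIIP rather than */
/* spilling over to a general CP                             */
IIPHONORPRIORITY=NO
```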

4. Intricacies of Converting COBOL to Java

Intricacies include things you should consider that could end up being problematic, costly or inefficient in a COBOL to Java migration, including:

  • Loss of undocumented application knowledge of COBOL
  • Retraining or hiring new developers that know Java
  • New software and hardware resources may be needed that you don’t have today (a development and maintenance platform for Java)
  • Software conversion issues
    • Won’t be exact
    • Little consideration for performance
    • No accommodation for mainframe specific problem areas like decimal arithmetic and BCD conversions
    • Conversions tend to change the logic of COBOL programs due to the widespread use of many non-standard COBOL language tricks or undocumented features.
  • How to tackle z/OS specific features like VSAM with Java

5. Performance Problems

The company considering converting COBOL to Java was correct in its assumption the code would run less efficiently. Ironically, while the company’s aim was to reduce costs, the consequence of this migration would be dealing with CPU efficiency issues in the new Java code. Additionally, real-time response may suffer due to inadequate zIIP capacity.

6. Bad SCM for Java

Your current source control management system may not be a good choice for Java, which means you may need to consider implementing a new tool. Things you may need to consider are:

  • How to integrate the new code into your current test/production promotion processes
  • What backup and fallback capabilities are available
  • Logging and audit facilities to meet corporate standards

7. Maintenance Methodologies

Do your programmers know Java? If so, how many? They likely have zero experience debugging Java on z/OS, which will require new toolsets and education. Additionally, your current program change methodologies for COBOL won’t apply to Java, so making changes or expediting fixes will be chaotic and lengthy.

Migrate to COBOL Version 5 and 6 Instead

Of course, the issues involved in converting COBOL to Java are rooted much deeper than these seven summaries, but these alone are enough to start giving you a headache. It’s best to avoid sacrificing performance for the red herring of saving cash.

Alternatively, why not just migrate your inefficient COBOL 3 and 4 programs to COBOL 5 and 6?

]]>