David Schipper – BMC Software | Blogs

IBM System/360 Laid Groundwork for Mainframe Innovation
https://s7280.pcdn.co/mainframe-innovation-groundwork-ibm-s360/ | Thu, 14 Mar 2024

With the IBM® System/360 celebrating its 60th birthday this year, the BMC mainframe group was asked if anyone remembered working on this hardware. In a moment of weakness, I admitted that I had, and was asked to blog about what I remembered.

Although I took my first programming course at the age of 15, while still in high school, it was not until my first college programming course as a freshman at the University of Michigan (U of M) that I used a System/360.

In fact, I interacted with two System/360 machines. The computer center used a System/360 Model 20 to run the card readers and printers we used to submit all our programs for testing, and we got the results back on printed output. Our programs were executed on a System/360 Model 67, a unique machine at the time: the 360/67 at the University of Michigan was the first IBM computer to have virtual memory.

The Model 67 was built to specifications derived from a 1966 paper, “Program and Addressing Structure in a Time-Sharing Environment,” written by four authors: Bruce Arden, Bernard Galler, and Frank Westervelt (who were associate directors at U of M’s academic computing center), and Tom O’Brian. Dr. Galler and Dr. Westervelt were professors whom I got to know during my time at U of M. I took my first advanced programming course, as well as the last course for my master’s degree, from Dr. Galler. Fun fact: Dr. Galler’s son, Glenn, works in the mainframe group here at BMC.

I got my first IT job, the summer before finishing my master’s degree, at Project Management Associates (PMA), a subsidiary of Townsend and Bottum, a construction company that specialized in building power plants. PMA hired me to do COBOL development on a scheduling system it was developing for the construction industry.

I had one minor challenge with this first job. Although I had learned Basic Assembler Language, FORTRAN, SNOBOL, PIL (Pittsburgh Interpretive Language), and LISP, written an operating system, written a compiler for a language called GLORY, and used other programming languages at U of M, the university did not offer a course in COBOL. I spent the first week on my new job reading the manual and learning COBOL.

The COBOL programs I developed ran on the Townsend and Bottum data processing center’s System/360 Model 30, which had 8 kilobytes (yes, 8K) of physical core memory—no virtual memory. I designed an overlay structure into the program so that it reused physical memory as the program executed. For example, once the initialization processing was done, I overlaid those in-memory instructions with the instructions to read the data and create the report. If the program produced multiple reports, I needed to design an overlay structure that reused the application instruction storage from the first report for each subsequent one.

After graduating from U of M, I moved to the Townsend and Bottum parent company and continued to write COBOL programs. Those programs also ran on a Model 30 that used the Disk Operating System (DOS), a predecessor of Virtual Storage Extended (VSE). I remember when they upgraded the Model 30 from 8K to 32K of memory. This greatly reduced the overlay processing requirements in the programs.

A year or so later, as Townsend and Bottum was expanding, the company decided to upgrade to a System/360 Model 50. At that time, it also decided it needed its own systems programmer, and I was willing to take the position, so the company sent me to a number of IBM courses to get the knowledge I would need for the job.

My first activity as the new systems programmer was to calculate the electrical power requirements for the new Model 50, the associated bank of eight 2314 disk drives (where each disk pack held 29 MB of data), the tape drives, printer, card reader/punch, and other peripherals so that the computer room could be designed with enough power to run the system.

Initially, the same DOS operating system that was on the Model 30 was used for the new Model 50, but shortly thereafter I installed the Operating System/360 (OS/360) that was designed for the new hardware. The first OS/360 system I generated and installed was Multiprogramming with a Fixed number of Tasks (OS/MFT). Later, I upgraded the operating system to run OS/360 Multiprogramming with a Variable number of Tasks (OS/MVT). Both of these were non-virtual memory systems since the Model 50 did not come with a virtual memory capability.

After a few more years of business growth, the System/360 was too small, and Townsend and Bottum moved to a System/370, so I moved off the System/360 platform and onto this larger and faster machine, which had virtual memory.

Thanks for indulging me on my trip down memory lane. It’s amazing to think how far mainframe systems have come while maintaining their role as the system of record for the global economy. The original System/360 and the innovation of subsequent versions provided the foundation for modern mainframes and their utilization of cutting-edge technology, including artificial intelligence (AI). While it’s exciting to see what is on the horizon for future versions of the mainframe, I would not have the background and knowledge that I possess today if I had not started my career on the IBM System/360.

BMC AMI Data for IMS: Enabling World-Class Data and Transaction Management
https://www.bmc.com/blogs/introducing-bmc-ami-data-for-ims/ | Tue, 05 Jan 2021

The mainframe isn’t just the workhorse of modern business, used by about 75 percent of Fortune 1000 companies and thousands more businesses around the world for efficient, reliable data access and transaction processing. With IBM IMS®, the only database environment proven capable of running over 117,000 database-updating transactions per second, it’s also a racehorse. In an era defined by data, IMS provides availability, resiliency, and agility for the data and insights enterprises depend on.

But today’s mainframe faces greater challenges than ever. Data volumes and transactions are growing fast, and your competition is accelerating. To mine insights and fuel innovation, business users and developers need better performance, less downtime—or none at all—and anywhere, anytime access to data. To meet those demands, mainframe teams need to keep pace with rising complexity, analyze vast amounts of system data, anticipate and prevent problems, and plan effectively for future growth.

Managing IMS is hard enough to wear out the most experienced IMS DBAs and systems programmers—but as it happens, there aren’t many of those mainframe vets around anymore anyway. According to Forrester Research, 23 percent of mainframe developers retired between 2013 and 2018, and 63 percent of those positions are still vacant. The 2020 BMC Mainframe Survey found that 43 percent of mainframe professionals have less than five years of experience.

The good news is that an eager new generation of professionals is entering the data center; 60 percent of the youngest mainframe professionals see it as a growing platform. But the complexities of IMS will put their enthusiasm to the test. To help them come up to speed quickly, drive immediate value, and gain satisfaction in their jobs, they need simpler, smarter ways of working.

Enter BMC AMI Data for IMS

In recent months, you’ve seen a series of new BMC AMI products bringing the power of automated intelligence to mainframe operations. Now, BMC AMI Data for IMS continues and culminates our digital mainframe vision.

BMC AMI Data for IMS builds automation and machine learning into data and transaction management to help you ensure 24/7/365 availability, resiliency, and agility for a transcendent customer experience. It’s like having a modern mainframe data scientist at hand to keep your data accurate, organized, and backed up so it’s always available to the right people at the right time. For newer DBAs, it’s the ultimate in mentorship and professional development, helping them add value like seasoned pros right from the start.

Here’s how BMC AMI Data for IMS gets it done:

  • Modernizing data management with intelligent analytics and automation
  • Enabling seamless DevOps collaboration by letting developers use existing tools to make mainframe database changes the same way as any other platform
  • Managing data across the DevOps pipeline from ideation to testing, deployment, and production
  • Improving backup and recovery performance to decrease RTO and meet compliance requirements—with simulation, estimation and recovery automation capabilities for added peace of mind
  • Automatically optimizing IMS for peak database performance, reliable availability, and more efficient resource consumption
  • Making it possible to view and move data without negative impact to application performance
  • Allowing dynamic changes to IMS Transaction Manager (IMS TM) definitions and ensuring IMS message queue stability and protection

It’s also worth mentioning that BMC AMI Data for IMS is a great complement to BMC AMI Data for Db2, giving you a wide variety of ways to manage your IMS transaction and IMS and Db2 data environments.

We’re excited to bring the BMC AMI transformation to IMS, as we continue to deliver more value for our customers who need solutions to help them meet rising digital demands. Even in the fast-paced, continually reinvented world of IT, some things really can get better with time.

BMC AMI Change Manager for IMS® Makes Database Management Easier for Everyone
https://www.bmc.com/blogs/bmc-ami-change-manager-for-ims/ | Tue, 31 Mar 2020

The breakneck speed of technological change means it’s far more compelling to look forward than backward, which is why people often mistakenly assume that old technologies will disappear into the rear view as new ones emerge. Before the turn of the millennium, for example, IT experts were already predicting that new technologies such as cloud computing would spell doom for the venerable mainframe. And yet, decades later, the mainframe remains the most efficient and dependable computing platform in the world. Indeed, the problem is that it has in large part outlasted the careers of the very people who made it what it is today.

As mainframers retire in droves, they take valuable knowledge with them that may prove impossible to replace. Forrester Research reported that 23 percent of mainframe developers retired between 2013 and 2018, and some 63 percent of those positions remain vacant. The problem is only going to get worse: BMC’s 2019 Mainframe Survey indicated that 37 percent of mainframe developers were between the ages of 50 and 64.

The Rise of the Mainframe Generalist

Even as their subject matter experts retire, enterprises continue to rely on the same tried and true technology that NASA relied on in its bid to put astronauts on the moon. IBM’s Information Management System (IBM® IMS®) was one of the first database systems to be made commercially available, and it remains in use today by around 75 percent of Fortune 1000 companies and thousands more businesses around the world. The system’s exceptional reliability has made it a fixture, particularly in large financial institutions, but IMS database administrators (DBAs) are becoming increasingly rare.

In order to keep IMS systems operating as needed, enterprises are turning any DBAs they can hire into aptly named “universal DBAs.” One of the best tools to ease this transition to DBA commoditization is BMC AMI Change Manager for IMS. BMC AMI Change Manager streamlines changes to the complex, hierarchical IMS database environment and offers universal DBAs a modern interface and a series of automated, agile processes that help simplify IMS maintenance. By turning what used to be a highly specialized role into one that even less experienced DBAs can perform, BMC AMI Change Manager increases IMS availability and productivity and helps mitigate the skills gap left by retiring mainframers.

Empowered DBAs for a Modern Mainframe

The mainframe is still around because it’s been modernized over the years, and the cumulative improvements have turned it into an invaluable computing platform that is critical to digital business. But we are still living in an IT world defined by “do more with less” – more work with fewer resources and less budget. BMC AMI Change Manager for IMS can help you continue to make the most of your IMS investment by enabling DBAs of all skill levels to maintain the agility and efficiency needed from this cornerstone system.

For more information on how BMC AMI Change Manager for IMS is empowering this transition, reach out to our product team today.

Understanding Mainframe MLC Software Pricing is No Easy Task
https://www.bmc.com/blogs/understanding-mainframe-mlc-software-pricing-no-easy-task/ | Mon, 22 Aug 2016


As a mainframe professional who is responsible for optimizing the systems, data, and costs of your company’s mainframe, you know how complex things can be. Sorting out how to reduce costs can be cumbersome, exceptionally so if your team is unfamiliar with the inner workings of how the cost of monthly license charge (MLC) software is calculated. But don’t feel bad – you’re not alone.

Probably around 20% of those trying to manage MLC costs do not have a good understanding of how the charges are calculated. Another 60% think they do, but have it wrong. I was in this 60% group before I really studied the calculation process. The final 20% understand how the combined software stack, running across multiple LPARs and measured with a four-hour rolling average (4HRA), impacts your IBM MLC bill. Congratulations to you!

In his White Paper, 10 Steps to Reducing Mainframe MLC Costs, David Wilson of SZS Consulting states, “Very few people really understand how costs are derived in the mainframe environment. MLC is the only component with a realistic list price but with over 50 different licensing metrics, most users delegate calculating the price to IBM. With little understanding of the underlying structure (other than more MIPS = more cost), what hope is there to manage the cost down?”

As with any such complicated calculation, there are “it depends” situations; but basically, you take the 4-hour rolling average of all the MSUs consumed on an LPAR, summing up everything that is running there: MLC products, batch work, other vendor tools, and so on. The highest one-hour amount for the month then determines your MLC bill. If you have multiple LPARs running the same MLC products, you add up the 4HRA amounts for hour 1 on each LPAR, hour 2 on each LPAR, and so on for every hour in the month, and the highest of these hourly sums determines the MLC bill. (You really need a tool to do this properly.)
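To make the arithmetic concrete, here is a minimal Python sketch of that calculation under the simplifying assumptions above (hourly MSU totals per LPAR, every LPAR running the same products, no sub-capacity or product-scope nuances). The function names and the sample LPAR data are hypothetical illustrations, not the logic of any IBM or BMC billing tool.

  from statistics import mean

  def rolling_4hra(hourly_msus):
      # Four-hour rolling average of one LPAR's hourly MSU totals,
      # covering everything running on that LPAR: MLC products,
      # batch work, other vendor tools, and so on.
      return [mean(hourly_msus[max(0, h - 3):h + 1])
              for h in range(len(hourly_msus))]

  def monthly_mlc_peak(lpars):
      # Sum the per-LPAR rolling averages hour by hour, then take the
      # highest hourly value for the month; that peak drives the bill.
      rolled = [rolling_4hra(series) for series in lpars.values()]
      return max(sum(values) for values in zip(*rolled))

  # Hypothetical data: two LPARs, a handful of hours from the month.
  lpars = {
      "LPARA": [120, 150, 180, 200, 170, 140],
      "LPARB": [80, 90, 110, 130, 120, 100],
  }
  print(monthly_mlc_peak(lpars))   # current peak 4HRA

  # Model a proposed action, e.g. capping LPARA at 160 MSUs.
  capped = dict(lpars, LPARA=[min(v, 160) for v in lpars["LPARA"]])
  print(monthly_mlc_peak(capped))  # peak if the cap were applied

The same kind of model is useful for the “what if” work recommended below: apply a hypothetical cap or shift workload between hours, recompute the peak, and see whether the change would actually lower the bill.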

To make understanding these costs even tougher, yearly price increases are typical for MLC products, and a new pricing model, Country Multiplex, is now being rolled out by IBM. (Did I mention that using a tool to do these calculations can greatly simplify this complexity? See Cost Analyzer.)

Again, David Wilson states, “If you want to manage something, then you need to understand what influences it. However, most organizations regard MLC charges as an art, not a science, and they lack the ability to dynamically calculate MLC cost themselves.”

Here’s what you should do right now: study and understand the MLC calculation process for your shop. Identify what is driving the peak 4HRA and model proposed actions so you can see if they will make a difference. Then attack the things that will drive down your MLC cost with dynamic capping, workload tuning, subsystem placement, and workload placement tactics.
