At first glance, mainframes and green IT seem like complete opposites. Green IT is the practice of building sustainability and renewable resources into technology, while mainframes are massive, resource-heavy machines.
Yet industry trends indicate that mainframes are not only far from dead, they're actually becoming more important to large businesses. Though the reasons aren't inherently related to green IT, mainframes make a good case for improving sustainability in any company's technology.
What is green IT?
Green IT, sometimes known as green computing or environmentally sustainable computing, is the intersection of technology and environmental sustainability. Though computers and the internet power much of today's productivity, they generally aren't as efficient as they could be.
Computers, smart devices, and other hardware rely on raw materials for manufacturing. Then, the hardware requires natural resources to power it. When a computer or phone is replaced by a newer model, disposal becomes a significant problem, with few good answers or best practices. Larger pieces of machinery, like servers and mainframes, have a wider footprint still: the rooms that store them often require ample power supplies and constant cooling – another strain on natural resources.
As such, goals for green IT include reducing the reliance on hazardous materials (often used to build IT hardware), maximizing energy efficiency (over the entire lifecycle), and improving the biodegradability or recyclability once a product reaches the end of its lifecycle. Strategies towards green IT include improving the longevity or lifecycle of a product, designing data centers more efficiently, optimizing software and deployment, managing power resources, and recycling products and materials.
The rise (and fall?) of mainframes
Mainframes have been around since the advent of computing. Initially, nearly all computers were mainframes, but, thanks to subsequent innovations, the mainframe has become more of a "master computer", used by large organizations for critical functions and applications like bulk data and transaction processing, enterprise resource planning, and more. A single mainframe often supports numerous devices, like servers, workstations, and other peripherals. (By contrast, casual computers like your home laptop or desktop don't require mainframes.)
Rising costs and significant maintenance meant mainframes were harder to justify. Plus, companies churned through mainframes, replacing them with cheaper and faster versions. Among many threats over the decades, cloud computing seems most likely to render mainframes obsolete.
Developing technologies often follow the same rise and fall: a new technology is developed thanks to the technologies before it, and it takes some time for the new product to be adopted. Once it is adopted by the public at large, it may become ubiquitous and seemingly irreplaceable – until something comes along and disrupts the entire structure.
Cloud computing seems to be doing just this. Countless predictions hail cloud computing as the next big technology, one that will make on-premises machines unnecessary. Instead, we'll rely on much lighter devices like laptops, smartphones, tablets, and more. Interestingly, this total switch has yet to occur. Despite companies and individuals migrating their work and products to the cloud, mainframes are actually on the rise among certain customers.
In 2017, IBM, a long-time builder of mainframes, reported its first growth in revenue in five years, with only a quarter of that increase coming from cloud revenue. Surprisingly, more than two-thirds of IBM’s revenue growth came from their Z Systems mainframes – meaning more people are buying mainframes now than in the last several years.
No longer are mainframes the computing machines of the mid-1900s: room-sized, bulky, and slow. Today's mainframes are much larger than a normal computer but generally don't take up much floor space, barely exceeding the size of a single server. These sleek versions are significantly faster at batch processing and transactions.
So, what's driving the increased use of mainframes? Modern mainframes – essential to the data centers that fuel big data analysis and cloud computing – often run Linux. This open-source operating system makes using a mainframe no different from using Linux on any other platform. Because IT departments no longer need specialists in a single mainframe environment, companies can better understand these master machines and better integrate them into their business needs.
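As a small illustration of that portability, the sketch below uses only the Python standard library. The same call works unchanged on a commodity x86 server or on Linux running on an IBM Z machine; the only visible difference is the architecture string it reports (e.g., `x86_64` versus `s390x`).

```python
import platform

# platform.machine() reports the hardware architecture the OS is running on.
# On most Linux servers this is "x86_64" or "aarch64"; on Linux for IBM Z
# mainframes it is "s390x". The application code itself is identical.
arch = platform.machine()
print(f"Running Linux tooling on architecture: {arch or 'unknown'}")
```

Nothing in the snippet is mainframe-specific, which is precisely the point: Linux abstracts the hardware away.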
Mainframes and efficiency
Interestingly, Linux isn't the only reason for increased mainframe use. IBM's Z Systems, for instance, have a flexible architecture with multiple chips built in, allowing for 140+ configurable processing units. This firmware means that the mainframe itself – not the operating system – can move a process to the right configuration so that it is handled in the most efficient manner. (A Linux environment, for example, doesn't even need to know that the firmware is relocating processes for efficiency.)
Such functionality is key to efficiency. If an efficient technology is one that handles more data and processing with fewer resources, then mainframes can't be beat. In fact, mainframes generally have the highest resource utilization rates of any hardware, frequently exceeding 95% utilization. That means practically everything is in use, so you need fewer additional hardware products, which in turn reduces your carbon footprint and energy demands. (By contrast, other IT systems are typically designed to run at no more than 70% capacity, reserving the rest of their resources for self-generated tasks like cleanup and maintenance.)
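The utilization argument can be made concrete with a back-of-envelope sketch. All the numbers below (workload size, per-machine capacity) are hypothetical assumptions, not vendor figures; the point is simply that a higher utilization ceiling means fewer boxes for the same amount of work.

```python
import math

def machines_needed(workload_units: float, capacity_per_machine: float,
                    max_utilization: float) -> int:
    """Machines required when each can only be loaded to max_utilization."""
    usable = capacity_per_machine * max_utilization
    return math.ceil(workload_units / usable)

workload = 100.0  # arbitrary units of processing demand

# Distributed servers capped near 70% utilization (illustrative capacity)
servers = machines_needed(workload, capacity_per_machine=10.0,
                          max_utilization=0.70)

# A single mainframe-class system driven to 95% utilization
mainframes = machines_needed(workload, capacity_per_machine=110.0,
                             max_utilization=0.95)

print(servers)     # 15 servers
print(mainframes)  # 1 mainframe
```

Fewer machines running hotter translates directly into less embodied manufacturing cost and less idle power draw.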
Today’s mainframes are much smaller, which can significantly reduce the financial and natural resources spent on cooling and power. Reduced sizes also offer major cost savings on real estate, especially in tech-heavy cities like San Francisco, Los Angeles, New York City, and Seattle.
In the past, weather-related disasters were often a death sentence for mainframes, as a loss or surge of power could knock out the mainframe and any connected devices. But current models are sturdy in disaster recovery situations. Mainframes can topple over and continue working and, in cases of power loss, they can be restored from remote backups in under 24 hours.
Lastly, the lifecycle of mainframes is longer than ever, often running 10 or more years – nearly twice as long as most other business-critical devices. Sure, building and running mainframes requires plenty of resources, but compared to a cluster of servers or devices that each require power and resources of their own, mainframes still come out on top.
Determining when a mainframe is right for you
If you’re a small company or a start-up, a mainframe may be overkill. Many organizations that run data centers or heavy, frequent processing rely on them, like banks and financial institutions and companies like Google, Amazon, and Facebook.
Still, if you need the reliability and processing capabilities of such a machine, go beyond a purely financial standpoint to determine whether a mainframe is right. Consider the business and technical perspectives, too – a common rule of thumb is that once you're migrating at least 20 or so servers to a mainframe, the mainframe becomes a financial advantage.
Of course, remember the impact your decisions have on the environment, too: a mainframe that offers more computing efficiency over a longer lifecycle means you’re getting the most out of natural resources without simply churning and burning them.