Gil Peleg – BMC Software | Blogs

Why Migrate Mainframe Data to the Hybrid Cloud—and Why Now?
https://s7280.pcdn.co/why-migrate-mainframe-data-hybrid-cloud-now/

In the rapidly evolving digital landscape, CIOs face a pivotal imperative to migrate mainframe data to the hybrid cloud for data management. Delay is no longer an option. In this blog, I’ll present seven compelling reasons why this transformation must happen now, not later. From cost optimization and heightened security to harnessing artificial intelligence (AI) and analytics, each driver emphasizes the urgency for CIOs and IT leaders to embrace the hybrid cloud for mainframe data protection. It’s not just a technology shift; it will also strategically propel organizations into a data-driven future, ensuring competitiveness, efficiency, and resilience in today’s dynamic business environment.

1. Cost optimization

Cost optimization is a compelling driver due to the expenses associated with traditional tape-based infrastructures and virtual tape libraries (VTLs). Maintaining these legacy systems demands significant capital for hardware, maintenance, and storage space. By migrating to the hybrid cloud, CIOs can slash these costs, eliminate upfront capital investments, minimize operational expenditures, and pay only for the resources they need. This strategic shift also liberates organizations from the constraints of costly and complex tape-based systems, providing a more budget-friendly and agile solution for mainframe data management.

2. Reduced complexity

Mainframes and legacy storage systems can create complex and siloed data environments that require significant time and resource commitments to manage. By transitioning to hybrid cloud mainframe data management, CIOs can overcome the intricate challenges posed by tape-based infrastructures and VTLs to streamline data operations and administration, centralize management, and improve accessibility. This transformative shift alleviates the intricacies inherent in tape-based systems, significantly reducing administrative burden and enhancing overall operational efficiency. It’s a strategic move that empowers organizations to optimize mainframe data management while simplifying their IT landscapes.

3. Enhanced security compliance

Cybersecurity threats, including ransomware attacks, are growing in sophistication and frequency, necessitating a fortified security posture. Hybrid cloud mainframe data management provides that posture by ensuring that data is more secure and easier to protect and offering robust defense mechanisms that include encryption, multi-factor authentication, and continuous monitoring. This strategic transition bolsters data protection, shields against cyberthreats, and ensures unwavering compliance with stringent regulations like the European Union’s Digital Operational Resilience Act (DORA), US regulations such as Sheltered Harbor and those of the Securities and Exchange Commission (SEC), and the Hong Kong banking system standard, STDB, all of which help mitigate legal risks.

4. Cyber resilience

In this era of increasing cyberthreats, organizations must prioritize data resilience. Hybrid cloud solutions often include robust disaster recovery capabilities that ensure that mainframe data can be quickly recovered in the event of a cyberattack or other disaster, minimizing downtime and data loss. This enhanced resilience fortifies the organization’s ability to withstand unforeseen challenges and provides a crucial safety net for data protection and business continuity.

5. Harnessing AI and analytics

The hybrid cloud isn’t just about data storage; it’s a conduit to a smarter, more agile, and analytics-driven future—and a gateway to advanced AI and machine learning (ML) technologies that CIOs can leverage to decode intricate data patterns, predict trends, and derive actionable insights from mainframe data. This empowers organizations to make data-driven decisions, drive innovation, and gain a competitive edge in today’s data-centric landscape.

6. Addressing workforce challenges

The expertise required to manage tape-based and VTL systems is dwindling, with experienced professionals retiring amid a shortage of new talent to replace them. Migrating to the hybrid cloud alleviates the risk of losing critical mainframe skills while ensuring efficient, continuous data management, regardless of skill sets, with tools newer IT professionals are familiar with and comfortable using. This strategic shift assures CIOs that their mainframe data is in capable hands, both now and in the future.

7. Scalability and flexibility

Traditional mainframe environments often struggle to adapt to changing business demands. Hybrid cloud solutions offer unparalleled scalability, allowing CIOs to effortlessly adjust and allocate resources to meet evolving organizational needs and fluctuating data requirements. This agility ensures that mainframe data can grow with the business, without the constraints of legacy systems, offering adaptability that is essential for a modern IT ecosystem.

Putting it all together: hybrid cloud for gaining a competitive edge

There is much to gain from hybrid cloud data management—especially if you are planning for long-term mainframe investment. In today’s data-driven business landscape, the ability to swiftly access, analyze, and act upon data is a strategic imperative. The hybrid cloud facilitates this by leveraging AI/ML tools to transform data from a passive asset into a dynamic resource that gives organizations a formidable competitive edge. It also provides an agile platform where mainframe data is readily available for real-time processing, analysis, and informed decision-making.

Those data-driven insights then allow organizations to uncover hidden patterns, respond rapidly to market shifts, identify emerging trends, optimize operations, improve customer experiences and innovation, and unlock opportunities that drive growth and competitiveness. Migrating mainframe data to the hybrid cloud also allows organizations to enact robust security measures and fortify their defenses against cyberattacks by harnessing the transformative potential of AI and analytics. Cloud-based mainframe data management is not merely about managing data; it’s about embracing a future where data is the driving force of success. The time to act is now.

Learn more about hybrid cloud mainframe data management, and hear about one organization’s experience modernizing with hybrid cloud solutions, in this on-demand webinar.

Whether Baby Steps or Giant Steps, Cloud Is the Path to Modernize the Mainframe
https://www.bmc.com/blogs/cloud-is-the-path-to-modernize-the-mainframe/

Everyone is under pressure to modernize their mainframe environment—keeping all the mission-critical benefits without being tied to a crushing cost structure and a style of computing that often discourages the agility and creativity enterprises badly need.

Several general traits of cloud can deliver attributes to a mainframe environment that are increasingly demanded and very difficult to achieve in any other way. These are:

Elasticity

Leading cloud providers have data processing assets that dwarf anything available to any other kind of organization. So, as a service, they can provide capacity and/or specific functionality that is effectively unlimited in scale but for which, roughly speaking, customers pay on an as-needed basis. For a mainframe organization this can be extremely helpful for dealing with periodic demand spikes such as the annual holiday sales period. They can also support sudden and substantial shifts in a business model, such as some of those that have emerged during the COVID pandemic.

Resilience

The same enormous scale of the cloud providers that delivers elasticity also delivers resilience. Enormous compute and storage resources in multiple locations, linked by vast data pipes, guarantee data survivability. Cloud outages can happen, but the massive redundancy makes data loss or a complete outage highly unlikely.

OpEx model

The “pay only for what you need” approach of cloud means that cloud expenses are generally tracked as operating expenses rather than capital expenditures and, in that sense, are usually much easier to fund. If properly managed, cloud services are usually as cost-effective as on-premises infrastructure, and sometimes much more so, although complex questions of how costs are logged factor into this.

Unlike the mainframe model, there is no single monthly peak 4-hour interval that sets the pricing for the whole month. There is also no need to order storage boxes, compute chassis, and other infrastructure components, track shipments, match the bill of materials, or rack and stack servers, because huge infrastructure is available at the click of a button.
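To make the peak-interval point concrete, here is a minimal sketch of how that monthly peak, commonly called the rolling four-hour average (R4HA), could be computed from hourly MSU readings. The readings and numbers are illustrative, not drawn from any real system.

```python
# Illustrative only: find the monthly peak rolling four-hour average (R4HA) of
# MSU consumption, the figure that traditionally drives mainframe software
# charges. The hourly readings below are made-up sample data.

def peak_r4ha(hourly_msus, window=4):
    """Return the highest average over any `window` consecutive hourly readings."""
    if len(hourly_msus) < window:
        raise ValueError("need at least one full window of readings")
    return max(
        sum(hourly_msus[i:i + window]) / window
        for i in range(len(hourly_msus) - window + 1)
    )

# A quiet day with one busy afternoon: the single spike sets the peak,
# even though average utilization is far lower.
sample_day = [120] * 12 + [480, 510, 530, 495] + [140] * 8
print(f"Peak R4HA:     {peak_r4ha(sample_day):.0f} MSUs")              # ~504 MSUs
print(f"Daily average: {sum(sample_day) / len(sample_day):.0f} MSUs")  # ~191 MSUs
```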

Finally, cloud represents a cornucopia of potential solutions to problems you may be facing, with low compute and storage costs, a wide range of infrastructure-as-a-service (IaaS), platform-as-a-service (PaaS), and software-as-a-service (SaaS) options – including powerful analytic capabilities.

Experiment First

Fortunately, for those interested in exploring cloud options for mainframe environments, there are many paths forward and no need to make “bet the business” investments. On the contrary, cloud options are typically modular and granular, meaning you can choose many routes to the functionality you want while starting small and expanding when it makes sense.

Areas most often targeted for cloud experimentation include:

  • Analytics – Mainframe environments have an abundance of data but can’t readily provide many of the most-demanded business intelligence (BI) and analytics services. Meanwhile, across the business, adoption of cloud-based analytics has been growing, but without direct access to mainframe data, it has not reached its full potential. Data locked in the mainframe has simply not been accessible.

Making mainframe data cloud-accessible is a risk-free first step for modernization that can quickly and easily multiply the options for leveraging key data, delivering rapid and meaningful rewards in the form of scalable state-of-the-art analytics.

  • Backup – Mainframe environments know how to do backup, but they often face difficult tradeoffs when resources are needed for so many critical tasks. Backup often gets relegated to narrow windows of time. Factors such as reliance on tape, or even virtual tape, can also make it even more difficult to achieve needed results.

In contrast, a cloud-based backup, whether for particular applications or data or even for all applications and data, is one of the easiest use cases to get started with. Cloud-based backups can eliminate slow and bulky tape-type architecture. As a backup medium, cloud is fast and cost-effective, and comparatively easy to implement.

  • Disaster recovery (DR) – The tools and techniques for disaster recovery vary depending on the needs of an enterprise and the scale of its budget but often include a secondary site. Of course, setting up a dedicated duplicate mainframe disaster recovery site comes with a high total cost of ownership (TCO).

A second, slightly more affordable option, is a business continuity colocation facility, which may be shared among multiple companies and made available to one of them at a time of need. Emerging as a viable third option is a cloud-based business continuity and disaster recovery (BCDR) capability that provides essentially the same capabilities as a secondary site at a much lower cost. Predefined service level agreements for a cloud “facility” guarantee a quick recovery, saving your company both time and money.

  • Archive – Again, existing mainframe operations often rely on tape to store infrequently accessed data, typically outside the purview of regular backup activities. Sometimes this is just a matter of retaining longitudinal corporate data, but heavily regulated sectors such as the financial and healthcare industries are required to retain data for long durations of 10 years or more. As these collections of static data continue to grow, keeping them in “prime real estate” in the data center becomes less and less appealing.

At the same time, few alternatives are appealing because they often involve transporting physical media. The cloud option, of course, is a classic “low-hanging fruit” choice that can eliminate space and equipment requirements on-premises and readily move any amount of data to low-cost and easy-to-access cloud storage.

A painless path for mainframe administrators

If an administrator of a cloud-based data center were suddenly told they needed to migrate to a mainframe environment, their first reaction would probably be panic! And with good reason. The mainframe is a complex world that requires layers of expertise.

On the other hand, if a mainframe administrator chooses to experiment in the cloud or even begin to move data or functions into the cloud, the transition is likely to be smoother. That is not to say that learning isn’t required for the cloud but, in general, cloud practices are oriented toward a more modern, self-service world. Indeed, cloud growth has been driven in part by ease of use.

Odds are good that someone in your organization has had exposure to cloud, but courses and self-study options abound. Above all, cloud is typically oriented toward learn-by-doing, with free or affordable on-ramps that let individuals and organizations gain experience and skills at low cost.

In other words, in short order, a mainframe shop can also develop cloud competency. And, for the 2020s, that’s likely to be a very good investment of time and energy.

5 Reasons ETL is the Wrong Approach for Mainframe Data Migration
https://www.bmc.com/blogs/5-reasons-etl-is-the-wrong-approach-for-mainframe-data-migration/

Change is good – a familiar mantra, but one not always easy to practice. When it comes to moving toward a new way of handling data, mainframe organizations, which have earned their keep by delivering the IT equivalent of corporate-wide insurance policies (rugged, reliable, and risk-averse), naturally look with caution on new concepts like extract, load, and transform (ELT).

Positioned as a lighter and faster alternative to more traditional data handling procedures such as extract, transform, and load (ETL), ELT definitely invites scrutiny. And that scrutiny can be worthwhile.

SearchDataManagement.com defines ELT as “a data integration process for transferring raw data from a source server to a data system (such as a data warehouse or data lake) on a target server and then preparing the information for downstream uses.” In contrast, another source defines ETL as “three database functions that are combined into one tool to pull data out of one database and place it into another database.”

The crucial functional difference in these definitions is the exclusive focus on database-to-database transfer with ETL, while ELT is open-ended and flexible. To be sure, there are variations in ETL and ELT that might not fit those definitions, but the point is that in the mainframe world ETL is a tool with a more limited focus, while ELT is focused on jump-starting the future.
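The ordering difference can be seen in a small, self-contained sketch. Everything here runs in memory on toy records; the cleansing rules are illustrative and simply stand in for the field-by-field transformations ETL tools typically perform.

```python
# Illustrative sketch of the ordering difference between ETL and ELT.
# All objects here are in-memory stand-ins; no real database, data lake,
# or cloud service is involved, and the cleansing rules are made up.

raw_records = [
    {"cust": "0001", "state": "N.Y.", "balance": "1250.00"},
    {"cust": "0002", "state": "Calif.", "balance": "88.10"},
]

def clean(record):
    """Field-by-field cleansing of the kind ETL tooling was built for."""
    abbreviations = {"N.Y.": "NY", "Calif.": "CA"}
    return {
        "cust": record["cust"],
        "state": abbreviations.get(record["state"], record["state"]),
        "balance": float(record["balance"]),
    }

# ETL: transform first, then load only the reshaped result into the target.
etl_target = [clean(r) for r in raw_records]

# ELT: land the raw records in the target (e.g., a data lake) untouched,
# and transform later, inside the target, when and how the analytics need it.
data_lake = list(raw_records)                # loaded as-is, structure optional
elt_view = [clean(r) for r in data_lake]     # transformation deferred downstream

assert etl_target == elt_view                # same result, different ordering
```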

While each approach has its advantages and disadvantages, let’s take a look as to why we think ETL is all-wrong for mainframe data migration.

ETL is too complex

ETL was not originally designed to handle all the tasks it is now being asked to do. In the early days, it was often applied to pull data from one relational structure and get it to fit into a different relational structure. This often included cleansing the data, too.

For example, a traditional relational database management system (RDBMS) can get befuddled by numeric data where it is expecting alpha data or by the presence of obsolete address abbreviations. So, ETL is optimized for that kind of painstaking, field-by-field data checking, “cleaning,” and data movement, but not so much for feeding a hungry Hadoop database or modern data lake. In short, ETL wasn’t invented to take advantage of all the ways data originates and all the ways it can be used in the 21st century.

ETL is labor-intensive

All that RDBMS-to-RDBMS movement takes supervision and even scripting. Skilled database administrators (DBAs) are in demand and may not stay at your organization for long. So, keeping the human part of the equation going can be tricky. In many cases, someone will have to come along and recreate their hand-coding or replace it whenever something new is needed.

ETL is a bottleneck

Because the ETL process is built around transformation, everything is dependent on the timely completion of that transformation. However, with larger amounts of data in play (think Big Data), this can make the needed transformation times inconvenient or impractical, turning ETL into a potential functional and computational bottleneck.

ETL demands structure

ETL is not really designed for unstructured data and can add complexity rather than value when asked to deal with such data. It is best for traditional databases but does not help much with the huge waves of unstructured data that companies need to process today.

ETL has high processing costs

ETL can be especially challenging on the mainframe because ETL workloads generally incur MSU processing charges and can burden systems at the very times they need to be handling real-time work. This stands in contrast to ELT, which can be accomplished using mostly the capabilities of built-in zIIP engines, which cuts MSU costs, with additional processing conducted in a chosen cloud destination. In response to those high costs, some customers have taken the transformation stage into the cloud to handle all kinds of data transformations, integrations, and preparations to support analytics and the creation of data lakes.

Moving forward

It obviously would be wrong to oversimplify a decision regarding the implementation of ETL or ELT—there are too many moving parts and too many decision points to weigh. However, what is crucial is understanding that rather than being focused on legacy practices and limitations, ELT speaks to most of the evolving IT paradigms.

ELT is ideal for moving massive amounts of data. Typically, the desired destination is the cloud and often a data lake, built to ingest just about any and all available data so that modern analytics can get to work. That is why ELT today is growing and why it is making inroads specifically in the mainframe environment. In particular, it represents perhaps the best way to accelerate the movement of data to the cloud and to do so at scale. That’s why ELT is emerging as a key tool for IT organizations aiming at modernization and at maximizing the value of their existing investments.

Mainframe Best Practices for Affordable Backup and Efficient Recovery
https://www.bmc.com/blogs/mainframe-best-practices-for-affordable-backup-and-efficient-recovery/

Mainframe teams these days are expected to contain backup and archiving costs while ensuring minimum downtime, especially in disaster recovery situations. While full-blown disasters may be rare, costly outages and interruptions are not, and a 2022 ITIC survey reveals just how expensive they are: 91 percent of mid-sized and large enterprises said that a single hour of downtime costs over $300,000, with 44 percent reporting that the cost was $1-5 million.

When designing a data management solution, it is important to explore cost-effective backup options that allow efficient recovery to cope with the enormous amounts of generated data. At the same time, it is also important to look into how to improve recovery efficiency, even if it might increase the direct backup costs.

Reducing backup costs

The total cost of ownership (TCO) of mainframe data management consists of several direct and indirect costs. Using the following methods, an organization can reduce backup costs while still meeting demanding recovery requirements (a short sketch combining several of them follows the list):

  • Incremental backup: Instead of backing up all data sets, implement solutions that support incremental backups and only back up data sets that have changed since the previous backup process.
  • Deduplication: Significant storage space can be saved by eliminating duplicate copies of repeating data. It is therefore recommended to enable deduplication if your target storage system supports it.
  • Compression: Another way to contain data management costs is to ensure that backup data is compressed before it is sent over the network to the storage system.
  • Leveraging commodity storage: Maintaining tape-related hardware and software imposes substantial costs. Instead, a cost-efficient data management solution like BMC AMI Cloud Data securely delivers mainframe data to any cloud or on-prem storage system. This makes it possible to benefit from pay-as-you-go cloud storage instead of stocking up on tapes and VTLs.
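A minimal sketch, assuming an S3-compatible object storage target reached through boto3, of how incremental selection and compression might be combined before data leaves the environment. The staging path, bucket name, and catalog file are hypothetical; this is not the BMC AMI Cloud implementation, only an illustration of the practices above.

```python
# A minimal, illustrative sketch (not the BMC AMI Cloud implementation):
# select only data sets changed since the last cycle, compress them, and send
# them to S3-compatible object storage. The staging path, bucket name, and
# catalog file are hypothetical.
import gzip
import json
from datetime import datetime, timezone
from pathlib import Path

import boto3

s3 = boto3.client("s3")
BUCKET = "example-mainframe-backups"        # hypothetical bucket
CATALOG = Path("last_backup_times.json")    # hypothetical local catalog of prior runs

last_run = json.loads(CATALOG.read_text()) if CATALOG.exists() else {}

for path in Path("/staging/datasets").glob("*.dat"):      # extracted data sets
    mtime = path.stat().st_mtime
    if mtime <= last_run.get(path.name, 0):
        continue                                           # unchanged: skip (incremental)
    compressed = gzip.compress(path.read_bytes())          # compress before the network hop
    key = f"backups/{datetime.now(timezone.utc):%Y%m%d}/{path.name}.gz"
    s3.put_object(Bucket=BUCKET, Key=key, Body=compressed)
    last_run[path.name] = mtime

CATALOG.write_text(json.dumps(last_run))
```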

On top of the above-mentioned practices to reduce the TCO of the data management continuum, one should also factor in the costs of archiving data for longer periods of time to meet regulatory requirements. For example, banks have to keep masses of archived data for many years to comply with regulations, most of which will never be accessed. As explained in this blog post, selecting the right kind of storage for this type of data can significantly affect backup costs.

Improving recovery efficiency

A more efficient recovery often requires additional measures in the backup stage, which might actually increase backup costs. However, the staggering costs of unplanned downtime alone can justify the investment, not to mention the heavy non-compliance fees. The following methods can be used for a more efficient recovery:

  • Write Once Read Many (WORM) storage: Keeping backups on WORM storage in the cloud or on-premises prevents accidental or malicious erasure and tampering that would make recovery difficult, more expensive, or subject to ransom (see the sketch after this list). In the case of an event, immutable backup data in the cloud is available as soon as the system is up and running, without needing to wait for archived data.
  • Multiple snapshots: Taking snapshots, also known as flash copies, of volumes and data sets at regular intervals helps to maintain data set versioning, which is important for automated recovery processes. Snapshots also make it possible to recover a data set in case of logical failure.
  • Stand-alone restore: Stand-alone restore allows bare-metal recovery from tape or cloud in cases of cyberattacks, disasters, and errors. Cloud-based backup platforms like BMC AMI Cloud enable initial program load (IPL) from a cloud server for a quick recovery that significantly reduces unplanned downtime.
  • End-to-end encryption: End-to-end encryption reduces the risk of malicious data corruption that could cause logical failures and other problems making recovery scenarios more complex and more expensive. Encryption is also critical for meeting regulatory requirements regarding data security and privacy.
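As a rough illustration of the WORM point above, here is what writing an immutable backup copy might look like on S3-compatible storage using Object Lock. The bucket, key, local file, and retention period are hypothetical, and a production workflow would differ.

```python
# Illustrative sketch of a WORM-style backup copy using S3 Object Lock.
# The bucket (which must have been created with Object Lock enabled), key,
# local file, and retention period are all hypothetical.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
retain_until = datetime.now(timezone.utc) + timedelta(days=365)  # e.g., one year

with open("payroll_backup.dat.gz", "rb") as body:
    s3.put_object(
        Bucket="example-immutable-backups",
        Key="backups/20230920/payroll_backup.dat.gz",
        Body=body,
        ObjectLockMode="COMPLIANCE",             # cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,  # immutable until this date
    )
```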
Long-Term Cloud Storage: What It Is and Why You Need It
https://www.bmc.com/blogs/long-term-cloud-storage-what-it-is-and-why-you-need-it/

Enterprises are generating huge volumes of data every year, with an average annual data growth of 40-50 percent. This growth has to be handled using IT budgets that are only growing at an annual average of 7 percent. Such disproportion creates a challenge for mainframe professionals. How can they store all this data cost-effectively?

Particularly challenging is deciding on the right strategy for long-term storage, also known as cold storage, for archived data that is rarely or never accessed. There can be different causes for keeping such data for the long term, which often lasts years or even decades:

  • Financial data is stored for compliance and might be required in case of an audit.
  • Legal information must be kept in case of legal action.
  • Medical archives are stored in vast quantities and their availability is highly regulated.
  • Government data has to be stored for legal reasons, sometimes even indefinitely.
  • Raw data is stored by many enterprises for future data mining and analysis.

Desired attributes of a cold storage solution

Cold storage, also referred to as “Tier 3 storage,” has different needs than Tier 0 (high-performance), Tier 1 (primary), and Tier 2 (secondary) storage. These are some of the considerations to keep in mind when designing your cold storage solution:

  • Scalability – As the amount of generated data on average doubles in less than two years, your cold storage technology accordingly needs to be infinitely scalable.
  • Cost – Cold storage must be as inexpensive as possible, especially because you will need a lot of it. Luckily, as it is rarely accessed it allows compromising on accessibility and performance, which can be leveraged to reduce cost.
  • Durability and reliability – Reliability is the ability of a storage medium not to fail within its durability time frame. Both are important to check, and you will find that some cold storage options are durable but not necessarily as reliable as others, and vice versa.
  • Accessibility – Cold storage is meant only for data that does not need to be accessed very often or very rapidly, yet the ability to access it is still important. As mentioned above, compromising on this aspect enables a lower cost.
  • Security – The security of cold data is vital. If it is stored onsite you need to take the same security precautions as with your active data. If it is in the cloud, you must ensure the vendor has proper security mechanisms in place.

Cold storage technology options for mainframe

Mainframe professionals have three general technology options when it comes to cold storage: tape, virtual tape, and cloud. While tapes are still the dominant cold storage media for mainframes, cloud is gaining momentum with its virtually limitless storage and pay-as-you-go model.

Here is a summary of these technologies, and their relative advantages and disadvantages:

Tape

Tape drives store data on magnetic tapes and are typically used for offline, archival data. Despite many end-of-life forecasts, the tape market is still growing at a compound annual growth rate (CAGR) of 7.6% and is expected to reach $6.5 billion by 2022. Tapes are considered the most reliable low-cost storage medium and, if maintained properly, can last for years.

However, they are also the most difficult to access and it can be quite an ordeal to recover from tapes in case of disaster.

Pros of Tape:

  • Often cheaper than other options, depending on the use case.
  • Full control over where data is stored.
  • Secure and not susceptible to malware or viruses as they are offline.
  • Portable and can be carried or sent anywhere.
  • Easy to add capacity.

Cons of Tape:

  • Capital investment required for large tape libraries.
  • Difficult to access (slow and with bottlenecks).
  • High recovery time objective (RTO).
  • Requires physical access and manual handling (problematic in lockdown, for example).
  • Requires careful maintenance.

Virtual tape libraries (VTL)

A VTL is a storage system made up of hard disk drives (HDDs) that appears to the backup software as a traditional tape library. While not as cheap as tape, HDDs are relatively inexpensive per gigabyte. They are easier to access than tape and their disks are significantly faster than magnetic tapes (although data is still written sequentially).

Pros of VTL:

  • Scalability – HDDs added to a VTL are perceived by the mainframe as tape storage.
  • Performance – data access is faster than tape or cloud.
  • Compatibility – works with tape software features like deduplication.
  • Familiarity – behaves like traditional tape libraries.

Cons of VTL:

  • Cost varies. Infrastructure, maintenance, and skilled admins should also be considered.
  • Capital investment required.
  • Usually less reliable than other options.
  • Less secure than offline tapes and lacks the latest security features of cloud platforms.

Cloud Storage

Cold storage in the cloud is maintained by third-party service providers in a pay-as-you-go model. Rather than selling products, they charge for usage of storage space, bandwidth, data access, and the like.

Cloud is becoming extremely popular for cold storage, mainly because it is considerably cheaper than on-premises storage. Pay-as-you-go means that it can start at affordable prices without needing to stock up on tapes and VTLs. Also, there is no longer a need to maintain infrastructure or recruit personnel to manage data archives, as these are all handled by the cloud vendor.

The cloud provides superior agility and scalability, and although offline magnetic tapes have an inherent security advantage, the cloud provides higher levels of security and compliance than many businesses can achieve on their own. When it comes to durability, the cloud really excels by storing data redundantly across many different storage systems.

On the downside, administrators need to consider network bandwidth and the cost of uploads and restores, as using cloud is often more expensive than it appears at first glance. The leading vendors of long-term cloud storage are Amazon (Glacier and Glacier Deep Archive), Google (Cloud Storage Nearline and Cloud Storage Coldline), Microsoft (Azure Archive Blob Storage), and Oracle (Archive Storage). These vendors charge low rates for storage space but extra fees for bringing data back on-premises, which might prove costly if too much data is retrieved.
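As an illustration of how those archive tiers are typically used, the following sketch sets a lifecycle rule that ages data into progressively colder storage classes. It uses Amazon’s S3 lifecycle API via boto3; the bucket name, prefix, and day counts are hypothetical, and the other vendors listed above offer equivalent mechanisms.

```python
# Illustrative lifecycle rule that ages archived data through progressively
# colder (and cheaper) storage classes. Bucket name, prefix, and day counts
# are hypothetical; other cloud vendors offer equivalent mechanisms.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="example-mainframe-archive",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "age-out-archive-copies",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [
                    {"Days": 30, "StorageClass": "GLACIER"},        # rarely accessed
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # long-term compliance copies
                ],
                "Expiration": {"Days": 3650},  # drop after a ten-year retention period
            }
        ]
    },
)
```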

Pros of Cloud:

  • Can be cheaper, especially when you are aware of the hidden costs.
  • Can improve cash flow thanks to an operating expenses (OpEx) financial model rather than capital expenditure (CapEx).
  • Infinitely scalable.
  • Accessible from anywhere.
  • Advanced data management.
  • High data redundancy and easy replication.
  • Leading-edge security.
  • Easy to integrate with mainframes.

Cons of Cloud:

  • Hidden costs (depends on use).
  • Data retrieval, backup, and RTO times depend on network bandwidth.

Cloud is rising as a mainframe cold storage choice

Cold storage in the cloud offers a unique combination of scalability, reliability, durability, security, and cost-effectiveness that on-premises options are challenged to meet.

So, in which cases is cloud preferable to tape and VTL for cold storage?

  • When data access frequency changes: The cloud offers different cold storage tiers, based on the data access requirements, that balance data storage cost against data access frequency. Cold storage tiers can be cost-effective; however, with high data access frequency, you need to be mindful of choosing a service that addresses those access needs.
  • When the data grows quickly or unpredictably: Cloud platforms can scale to infinity with very little effort, unlike on-prem options.
  • When improving cash flow is a priority: Predictable OpEx monthly fees can improve cash flow compared to large upfront investment in on-premises storage and infrastructure.
  • In case of mainframe skills shortage: Attracting and retaining mainframe experts is a challenge to many enterprises. With cloud cold storage, this problem completely goes away.
Top 5 Reasons Why Mainframe-to-Cloud Migration Initiatives Fail
https://www.bmc.com/blogs/top-5-reasons-why-mainframe-to-cloud-migration-initiatives-fail/

Mainframe modernization is a broad topic and one that elicits symptoms of anxiety in many IT professionals. Whether the goals are relatively modest, like simply updating part of the technology stack or offloading a minor function to the cloud, or an ambitious goal like a change of platform with some or all functions heading to the cloud, surveys show it is a risky business and, indeed, there are at least five reasons to be wary. But in each case, the right strategy can help!

A focus on lift and shift of business logic

Lift and shift is easier said than done when it comes to mainframe workloads. Mainframe organizations that have good documentation and models can get some clarity regarding business logic and the actual supporting compute infrastructure. However, in practice, such information is usually inadequate. Even when the documentation and models are top notch, they can miss crucial dependencies or unrecognized processes.

As a consequence, efforts to recreate capabilities in the cloud can yield some very unpleasant surprises when the switch is flipped. That’s why many organizations take a phased and planful approach, testing the waters one function at a time and building confidence in the process and certainty in the result. Indeed, some argue that the lift and shift approach is actually obsolete.

One enabler for the more gradual approach is the ability to get mainframe data to the cloud when needed. This is a requirement for any ultimate switchover, but if it can be made easy and routine it also allows for parallel operations, where cloud function can be set up and tested with real data, at scale, to make sure nothing is left to chance and that a function equal to or better than on-premises has been achieved.

Ignoring the need for hybrid cloud infrastructure

Organizations can be forgiven for wanting to believe they can achieve a 100 percent cloud-based enterprise. Certainly, there are some valid examples of organizations that have managed this task. However, for a variety of good, practical reasons, analysts question whether completely eliminating on-premises computing is either achievable or wise.

A “Smarter with Gartner” article, Top 10 Cloud Myths, notes, “The cloud may not benefit all workloads equally. Don’t be afraid to propose non cloud solutions when appropriate.”

Sometimes there’s a resilience argument in favor of retaining on-premises capabilities. Or, of course, there may be data residency or other requirements tilting the balance. The point is that mainframe cloud migration that isn’t conceived in hybrid terms is nothing less than a rash burning of one’s bridges. And a hybrid future, particularly when enabled by smooth and reliable data movement from mainframe to cloud, can deliver the best of both worlds in terms of performance and cost-effective spending.

Addressing technology infrastructure without accounting for a holistic MDM strategy

Defined by IBM as “a comprehensive process to drive better business insights by providing a single, trusted, 360-degree view into customer and product data across the enterprise,” master data management (MDM) is an important perspective to consider in any migration plan. After taking initial steps to move data or functions to the cloud, it quickly becomes apparent that having a comprehensive grasp of data, no matter where it is located, is vital. Indeed, a TDWI webinar dealt with exactly this topic, suggesting that multi-domain MDM can help “deliver information-rich, digitally transformed applications and cloud-based services.” So, without adaptable, cloud-savvy MDM, migrations can run into problems.

Assuming tape is the only way to back up mainframe data

Migration efforts that neglect to account for the mountains of data in legacy tape and VTL storage can be blindsided by how time-consuming and difficult it can be to extract that data from the mainframe environment. This can throw a migration project off-schedule or lead to business problems if backup patterns are interrupted or key data suddenly becomes less accessible. However, new technology makes extraction and movement much more feasible and the benefits of cloud data storage over tape in terms of automation, access, and simplicity are impressive. 

Overlooking the value of historical data accumulated over decades

A cloud migration is, naturally, a very future-focused activity in which old infrastructure and old modes of working are put aside. In the process, organizations are sometimes tempted to leave some of their data archives out of the picture, either by shredding tapes no longer retained under a regulatory mandate or by simply warehousing them. This is particularly true for older and generally less accessible elements.

But for enterprises fighting to secure their future in a highly competitive world, gems of knowledge are waiting regarding every aspect of the business – from the performance and function of business units, the shop floor and workforce demographics, to insights into market sectors and even consumer behavior. With cloud storage options, there are better fates for old data than gathering dust or a date with the shredder. Smart organizations recognize this fact and make a data migration strategy the foundation of their infrastructure modernization efforts. The data hiding in the mainframe world is truly an untapped resource that can now be exploited by cloud-based services.

Failure is not an option       

Reviewing these five potential paths to failure in mainframe-cloud migration should not be misconstrued as an argument against cloud. Rather, it is intended to show the pitfalls to avoid. When the move is considered carefully and planfully – and approached with the right tools and the right expectations – most organizations can find an appropriate path to the cloud.

3 Ways to Decrease the Amount of Time It Takes to Back Up and Archive Mainframe Data
https://www.bmc.com/blogs/three-ways-decrease-time-to-back-up-archive-mainframe-data/

If you are still using a legacy VTL/Tape solution, you could be enjoying better performance by sending backup and archive copies of mainframe data directly to cloud object storage.

When you replace legacy technology with modern object storage, you can eliminate bottlenecks that throttle your performance. In other words, you can build a connection between your mainframe and your backup/archive target that can move data faster. You can think of this as “ingestion throughput.”

Here are the top three ways you can increase ingestion throughput:

  1. Write data in parallel, not serially

The legacy mainframe tapes used to make backup and archive copies required data to be written serially. This is because physical tape lived on reels, and you could only write to one place on the tape at a time. When VTL solutions virtualized tape, they carried over this sequential access limitation.

In contrast, object storage does not have this limitation and does not require data to be written serially. Instead, it is possible to use a new method to send multiple chunks of data simultaneously directly to object storage using TCP/IP.
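A minimal sketch of the chunked, parallel approach just described, assuming an S3-compatible target and boto3’s managed transfer settings. The part size, concurrency, bucket, and file names are all illustrative.

```python
# Illustrative sketch: send one large backup image to object storage as many
# chunks in parallel rather than as a single serial stream. Part size,
# concurrency, bucket, and file names are all illustrative.
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
parallel_cfg = TransferConfig(
    multipart_threshold=64 * 1024 * 1024,  # split anything larger than 64 MB
    multipart_chunksize=64 * 1024 * 1024,  # ...into 64 MB parts
    max_concurrency=16,                    # 16 parts in flight at once over TCP/IP
)

s3.upload_file(
    Filename="fullvol_backup.img",
    Bucket="example-mainframe-backups",
    Key="backups/fullvol_backup.img",
    Config=parallel_cfg,
)
```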

  2. Use zIIP engines instead of mainframe MIPS

Legacy mainframe backup and archive solutions use MSUs, taking away from the processing available to other tasks on the mainframe. In effect this means that your mainframe backups are tying up valuable mainframe computing power, reducing the overall performance you can achieve across all the tasks you perform there.

You do not need to use MSUs to perform backup and archive tasks. Instead, you can use the mainframe zIIP engines—reducing the CPU overhead and freeing up MSUs to be used for other jobs.

  3. Compress data before sending it

Legacy mainframe backup and archive solutions do not support compressing data before sending it to Tape/VTL. This means that the amount of data that needs to be sent is much larger than it could be using modern compression techniques.

Using modern techniques, it is possible to compress your data before sending it to object storage. Not only do you benefit from smaller data transfer sizes, but you can increase the effective capacity of your existing connection between the mainframe and the storage target. For example, compressing data at a 3:1 ratio would effectively turn a 1Gb line into a 3Gb line, allowing you to send the same amount of data in a third of the time while still using your existing infrastructure.
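The arithmetic behind that example, with illustrative numbers for line speed and backup size:

```python
# Illustrative arithmetic only: with a 3:1 compression ratio, the same physical
# line carries three times the logical data per unit time. Line speed and
# backup size below are made-up example values.
line_gbps = 1.0    # physical bandwidth of the link, in gigabits per second
backup_tb = 10.0   # logical size of tonight's backup, in terabytes
ratio = 3.0        # assumed compression ratio

bits_to_send = backup_tb * 8 * 1000**4          # terabytes -> bits (decimal units)
uncompressed_hours = bits_to_send / (line_gbps * 1000**3) / 3600
compressed_hours = uncompressed_hours / ratio

print(f"Without compression:  {uncompressed_hours:.1f} hours")  # ~22.2 hours
print(f"With 3:1 compression: {compressed_hours:.1f} hours")    # ~7.4 hours
```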

Faster than VTL: Increase mainframe data management performance

Replacing your legacy VTL/Tape solution with a modern solution that can compress and move data to cloud-based object storage can significantly decrease the amount of time it takes to back up and archive your mainframe data, without increasing resource consumption.

Writing in parallel, leveraging zIIP engines, and employing compression are low-risk, high-reward options that leverage well-known, well-understood, and well-proven technologies to address a chronic mainframe challenge. This can yield immediate, concrete benefits such as reducing the amount of time it takes for you to back up and archive your mainframe data and cutting costs while boosting capabilities.
