Search Results for “immutable” – BMC Software | Blogs

What DORA Means for Mainframe Teams in and Around EMEA
https://s7280.pcdn.co/what-dora-means-for-mainframe-teams-emea/ (Thu, 14 Mar 2024)

Over the past month, I have had the opportunity to discuss the European Union’s Digital Operational Resilience Act (DORA) with the mainframe teams of 14 of the largest financial institutions in EMEA and the UK. Here are my key takeaways from those conversations:

There is general agreement that for mainframe teams, the DORA requirements are different from previous regulatory guidelines:

  • Penalties that include one percent of annual revenues and criminal liability are getting the attention of executives and board members
  • As DORA calls out “all critical infrastructure,” the spotlight is shining on mainframe infrastructure like never before
  • DORA requires an independent penetration test/security assessment of all critical infrastructure. Only some mainframe teams are acting on that requirement.
  • The biggest change in requirements when comparing DORA to other regulations is the ability to prove that your financial institution can recover from a cyberattack—which is very different from disaster recovery.
  • At least half of the financial institutions have already been engaged in European Central Bank (ECB) stress tests to evaluate their organizational ability to recover from a cyberattack.
  • There is considerable concern over the “interpretation” of the technical/business/resilience requirements for DORA, even after the January final report was published.
  • Most financial institutions are already in the process of implementing immutable backup solutions for their mainframe environments—a key step toward cyberattack resilience.
  • For those organizations implementing immutable backups, nearly all recognize the challenge of determining which immutable backup is appropriate to use for their recovery.
  • Many financial institutions recognize that recovering from an immutable backup poses a critical issue around data loss, potentially losing hours of financial transactions.
  • Most financial institutions have created DORA-specific working groups to guide their IT teams on appropriate measures to take, but even those teams have difficulties translating regulation requirements into IT guidelines.

Bottom line: DORA presents new challenges for mainframe teams, not only because the cyberattack scenario is new, but because the ECB is actively engaging with financial institutions that do business in Europe, requiring them to prove that they comply with the new objectives.

Learn more about how DORA guidelines help achieve operational resilience in the podcast, “Mainframe Operational Resilience: DORA and Beyond.”

The Power of Air-Gapped Object Storage as Part of a Mainframe Data Resilience Strategy
https://www.bmc.com/blogs/mainframe-data-resilience-air-gapped-cloud-object-storage/ (Mon, 05 Feb 2024)

Safeguarding sensitive data has become a paramount concern for organizations. As cyberthreats evolve and new regulations to safeguard data emerge, the need for robust data resilience solutions has never been more pressing. Air-gapped object storage is a technology that provides security and protection for mainframe data against cyberthreats.

The need for mainframe data resilience

It is crucial to understand the importance of data resilience in mainframe systems. Organizations rely heavily on their mainframe data to make informed decisions, conduct day-to-day operations, and maintain a competitive edge. Any disruption due to cyberattacks, natural disasters, or human error can have significant consequences.

A mainframe data resilience strategy ensures the continuous availability, integrity, and accessibility of critical information. Traditional storage solutions may provide some level of protection, but they often fall short in the face of sophisticated cyberthreats. This is where air-gapped object storage should be considered as an additional level of security.

New regulations are being issued globally to ensure companies are protecting their data adequately and can recover from a cyberattack or other logical data corruption. For example, the Digital Operational Resilience Act (DORA) regulation will come into effect in the European Union in January 2025.

The role of BMC AMI Cloud Vault in data resilience strategy

BMC AMI Cloud Vault protects mainframe data from cyberattacks such as ransomware by writing an additional, third copy of the data to immutable, air-gapped, cloud-based storage. This enables quick recovery and reduces the risk of data loss while maintaining compliance with regulatory requirements for enterprise data retention.

The power of BMC AMI Cloud Vault and air-gapped object storage

Unplugging from cyberthreats

Air-gapped storage involves physically isolating the storage infrastructure from the network, creating an “air gap” that serves as a powerful barrier against cyberthreats. Without a direct connection to the internet or any external network, the chances of unauthorized access or data breaches are significantly reduced. BMC AMI Cloud Vault provides the ability to write data directly from the mainframe to air-gapped storage and, after isolating the storage, create a “golden copy” of the data.

Immunity to online attacks

Common cyberthreats, such as ransomware and malware, rely on network connectivity to propagate and infect systems. With BMC AMI Cloud Vault, mainframe data is kept in an air-gapped environment, allowing organizations to create a fortress that remains impervious to online attacks. The air-gapped storage remains untouched and secure even if the network is compromised.

Protection against insider threats

While external threats are a significant concern, insider threats pose an equally formidable risk. Air-gapped storage limits access to authorized personnel who are physically present at the storage location. This minimizes the risk of internal breaches and ensures that only individuals with explicit permissions can interact with the stored data. BMC AMI Cloud Vault, leveraging mainframe security controls, helps create end-to-end protection against threats.

The cloud is a different technological environment from the mainframe and relies on a separate set of authorizations and security controls than the mainframe does. A mainframe user with admin privileges, such as a storage administrator, would typically not have admin privileges in the cloud environment. This provides an additional layer of protection in case a mainframe user ID has been compromised.

Guarding against data corruption

Air-gapped object storage enhances data integrity by protecting against accidental or intentional corruption. Since the storage system is isolated and can keep track of any changes to identify attacks, the likelihood of malware altering or deleting critical data is virtually eliminated. BMC AMI Cloud Vault’s ability to recover a specific version of the data ensures organizations can quickly recover their data in its original, unaltered state.

Resilience in the face of disasters

Beyond cybersecurity concerns, air-gapped storage adds an extra layer of resilience against physical disasters. Whether the event is a natural calamity, a fire, or another catastrophe, data stored in an air-gapped environment remains sheltered from external factors that could compromise its integrity. With cloud hyperscalers, data is spread over three availability zones by default to ensure maximum availability.

Conclusion

In an age where data is the lifeblood of organizations, ensuring its resilience is non-negotiable. BMC AMI Cloud Vault, with air-gapped object storage as part of a mainframe resilience strategy, offers unparalleled protection against cyberthreats and provides a robust solution for data resilience needs. By adopting this innovative approach, organizations can fortify their data infrastructure, safeguard critical information, and confidently navigate the digital landscape.

Finally, considering new regulations such as DORA, mainframe organizations can no longer afford to rely on existing solutions for data resilience and recovery and must act as soon as possible to ensure compliance.

To learn more about BMC AMI Cloud and how to modernize your data management with hybrid cloud agility, check out our hybrid cloud solutions webpage.

Backing Up Data in the Cloud with BMC AMI Cloud
https://www.bmc.com/blogs/backing-up-data-in-the-cloud-with-bmc-ami-cloud/ (Tue, 07 Nov 2023)

When it comes to mainframe modernization, pundits may use different words, but they all agree that job number one is to decide which of the three paths to modernization to take.

  1. Modernize in place

With this method, mainframe shops see the value in the platform, it’s just the operations that need to be streamlined. They will do things like reengineer processes, integrate silos, hire next-generation mainframers, and move to modern toolsets made for the mainframe platform. And while they’re doing all of that, they’ll make sure the platform and everything on it is fit-for-purpose, so that the mainframe is given the best opportunity to perform for the business. In keeping with that approach, they’ll remove workloads from the mainframe that can be better delivered elsewhere … especially workloads like backup and recovery.

  2. Modernize by moving away from the mainframe platform altogether

Other shops will decide to modernize their mainframe by outsourcing applications and processes to a SaaS provider, or they’ll rewrite their applications for cloud and open systems computing. As they move their applications to the cloud, they’ll modernize their business by concentrating on what made them great, and leave IT operational excellence and security to the IT experts and cloud/SaaS providers. For these businesses, “right-sizing” the platform means moving away from it entirely and putting all their mainframe workload in the cloud. Of course, before decommissioning the mainframe, all historically mainframe-generated backups of data assets and any active applications need to be archived in the cloud, as well.

  3. Modernize in a hybrid way, by integrating the mainframe environment to cloud

Or said another way, “Render unto cloud what is the cloud’s, and to the mainframe what is the mainframe’s…”

This group will split the difference, modernizing the mainframe by keeping some applications on the platform (generally speaking, the ones that turn the most transactions or require the most processing power), while simultaneously moving some applications off.

In truth, this type of migration has been happening consistently over the last several years, and in an evolutionary (as opposed to revolutionary) way, as applications like payroll, customer relationship management (CRM), and financials moved to SaaS providers like ADP, SFDC, SAP®, and Oracle. Companies have been migrating workloads off the mainframe for years, with the goal of rightsizing it and making it more fit for purpose. Again, for this group, offloading backup workloads to low-cost specialty engines, like the IBM® z Integrated Information Processor (zIIP), and storing mainframe backups in the cloud, dovetails perfectly with their desire to better utilize the mainframe’s compute power for applications that are core to the business.

Looking across the three mainframe modernization approaches mentioned above, the one common denominator is, “Let’s make sure we’re assigning workload to the right platform.”

BMC Mainframe Survey proves the point

Interestingly, many respondents to the BMC 2023 Mainframe Survey agree that mainframe backup and storage doesn’t belong on the mainframe’s general business-class compute engines. In fact, the following relevant insights floated to the top when we asked 800 mainframe customers about the future of their mainframes:

  • Sixty-two percent said they perceive the mainframe as a platform upon which they can grow workloads (up from 52 percent of respondents who said the same in 2019), indicating that they are planning to either modernize in place, or via a hybrid cloud approach.
  • With all this workload growth, petabytes of data and the number of databases themselves are growing—and they all need to be backed up, secured, and made available for recovery.
    • Data: 61 percent of respondents say they’re seeing a significant increase in data volumes (up from 50 percent in 2019).
    • Databases: 56 percent cited increasing numbers of databases (up from 40 percent in 2019).
  • Thirty-five percent said connecting the mainframe to cloud technologies is one of their top four priorities in 2024.
  • In keeping with rightsizing their mainframes, making them fit for purpose, and making their data more secure, 41 percent said they would be implementing a cloud-based approach to backup and storage in the next year.

When we look at the economics of today’s cloud-based approach to backup and storage systems, we see even more evidence that every company with a mainframe should move their backup processing to zIIP engines and store their backups in the cloud. Below we’ll look at the financial benefits that three important BMC customers, Garanti BBVA, Nedbank, and America First Credit Union, have achieved with a cloud-based backup methodology:

  • Comparing before and after states, these three accounts achieved between 5x and 15x faster backups and restores with BMC AMI cloud-based solutions and reduced the person-hours associated with managing backups by 40 percent.
  • Garanti BBVA reduced backup rack space by 86 percent.
  • America First Credit Union reduced MIPS usage for backup and restore by 60 percent, and backup and virtual tape library software costs by 50 percent.

Other considerations

Then there are the “harder to benchmark” metrics that, when looked at from a business perspective, make just as much sense as, if not more than, all the technical reasons listed above. For instance, let’s consider the metrics for improved security and data mining for actionable insights.

Protection against ransomware attacks

BMC AMI Cloud backups are far more secure than traditional backups because they are immutable and cannot be accessed by anyone without proper credentials for the mainframe—and the backup software itself. Further, they can never be altered in any way, such as being encrypted by an entity demanding ransom. These backups are also protected from malicious or unintentional deletion using cloud storage features such as object lock and versioning. These extra layers of control make them highly resistant to ransomware attacks.

When we consider the damage that successful ransomware attacks inflict on the reputations of the companies attacked, and that the average ransom paid is $100K, a storage platform that provides immutable backups and object lock is an obvious choice. It’s no wonder that 70 percent of BMC Mainframe Survey respondents said security is their top mainframe priority, and that Nedbank was able to reduce its data protection costs by 50 percent with BMC AMI Cloud’s immutable backups.

Leveraging mainframe backup data for actionable insights

If we think about it, we see that most companies’ backup data is so large that it essentially represents its own data lake. Yet, because mainframe data is stored in different formats than most data mining tools expect to see, it’s often overlooked when it comes to gleaning actionable insights for the business. When using BMC AMI Cloud backup tools, data is converted in a way that allows it to be analyzed, revealing actionable insights for the business to leverage.

Take, as an example, a hypothetical insurance company whose mainframe backup data reveals that 25 percent of its family clients have children reaching driving age this year. Because that backup data isn’t in a readable format for its analysis tools, it’s difficult to proactively propose additional coverage for these new drivers.

Or consider a bank that keeps backup data, which, if it could be analyzed, would highlight the families that have over $100K in a low-yield savings account, are expecting a child, and are renting their current home. Analysis of that data could show that they’d be a perfect candidate for a new home loan. The business development possibilities are endless here, but many companies don’t see it because their mainframe data is not “mineable,” as it is with BMC AMI Cloud backup solutions.

So, to pull this all together, companies can’t afford to wait when it comes to mainframe modernization. Just like the platform itself, traditional mainframe backups must also transition, especially given the expense of virtual tape library hardware and the software necessary to control it, the extra time and processing power it takes to complete traditional backups, the vulnerability of the data stored, and the difficulty of accessing it for analytical purposes. The only decision left is whether to modernize in place, modernize by moving off the platform, or modernize in a hybrid way.

To learn more about BMC AMI Cloud, click here, and join us on November 14 for a BMC webinar, “Mainframes in the cloud: modernizing data management.”

Integrated Solutions for an Integrated World
https://www.bmc.com/blogs/integrated-mainframe-solutions-for-integrated-world/ (Tue, 11 Jul 2023)

Thanks to the interconnected hybrid cloud world and the convenience of our phones, the digital world is at our fingertips. Wherever we are, we can access information and services for entertainment, shopping, banking, and health, almost instantly. Our experiences aren’t limited by the type of device we’re using. We can be watching a show on a smart TV, continue watching on a smart phone as we leave the house, and then listen in our car. We can seamlessly access and edit the same document in the office, on the plane, and at home.

Retail, banking, and shipping companies strive to provide similar uninterrupted experiences. We no longer need to enter payment details and shipping information on each e-commerce site we visit. Thanks to payment platforms and website integrations from UPS, FedEx, and others, we can purchase items from multiple websites using a single login, then instantly see whether our package has been shipped, where it is now, and when it is expected to arrive.

We’re now accustomed to receiving the same user experience, with the same tools, media, and content, wherever we are and whatever device we’re using. So, why should our expectations of a work experience be any different?

The latest innovations announced for the BMC AMI portfolio are centered on hybrid cloud integration with an open borders approach to mainframe computing, with the aim of creating consistent, complementary experiences not only for mainframe professionals, but for the customers they serve, too.

Bringing the power of the cloud to the mainframe

The new BMC AMI Cloud suite of solutions empowers organizations to adopt a hybrid cloud strategy for mainframe data management. Integration of mainframe data with the hybrid cloud enables your organization to choose the on-premises, private cloud, or public cloud strategy that is best suited for your needs. This provides an efficient and high-performing alternative to replace or augment proprietary mainframe virtual tape library (VTL) systems.

Storage in the cloud with BMC AMI Cloud Data allows faster access to crucial data and offers improved disaster recovery preparation and response.

BMC AMI Cloud Vault enables the creation of secure off-platform backup copies of data and fast disaster recovery that doesn’t rely on mainframe systems. The creation of immutable copies of data stored in the cloud protects against cyberthreats like ransomware while also enabling standalone (bare metal) data recovery at any location.

BMC AMI Cloud Analytics enables the integration of your mainframe data with artificial intelligence and machine learning (AI/ML) platforms to gain valuable new business insights. By quickly and efficiently moving data to the cloud, then transforming it for use with AI/ML tools (without consuming costly MIPS), the solutions make your mainframe data actionable, opening the door to new possibilities of insight and innovation.

Increased quality, more efficient development

The BMC open-borders approach not only integrates the mainframe with the broader IT ecosystem, it also allows mainframe development, operations, data, and security applications to interact, breaking down siloes and providing full system visibility. New BMC AMI DevX integrations increase developer efficiency, improve application quality, and put the information that developers need at their fingertips.

New Visual Studio Code (VS Code) extensions for BMC AMI DevX File-AID enhance developers’ use of their preferred development environment by streamlining the data browsing and editing of IBM® Multiple Virtual Storage (MVS) data sets, reducing time spent on test data management.

An integration between BMC AMI DevX Abend-AID and BMC AMI DevX Code Pipeline makes it faster and easier for developers to find abending code, fix any issues, test, and move the code back into production.

The ability to reuse test case input stubs in BMC AMI DevX Total Test enables faster generation of new test cases for changed programs.

Stronger security, faster incident response

To ensure optimal enterprise system security, mainframe security can’t be siloed separately from enterprise security strategies. BMC AMI Enterprise Connector for Venafi, which integrates the mainframe with enterprise certificate management solutions, now supports automated bulk certificate management, empowering security teams to implement hundreds, or even thousands, of security certificates on the mainframe each month.

Integration of BMC AMI Security with ServiceNow ITSM solutions supports automated workflows, increasing efficiency and reducing time to response while providing centralized incident response that coordinates security incident management across the enterprise.

Optimizing database reorgs, identifying SQL bottlenecks earlier

An integration between BMC AMI Reorg for Db2® (part of BMC AMI Database Performance for Db2®) with the rules-based automation of BMC AMI Apptune for Db2® (part of BMC AMI SQL Performance for Db2®) enables right-on-time database reorgs, ensuring that reorgs aren’t repeated unnecessarily, reducing CPU usage, helping to minimize costs, and providing for peak response rates and improved application performance.

New enhancements improve the database administrator (DBA) and developer experiences. BMC AMI DevOps for Db2 now integrates with GitHub Actions, joining integrations with Jenkins and Azure DevOps to further developers’ ability to use their tools of choice, while a modern, developer-friendly BMC AMI Command Center for Db2® user interface enables the shift-left identification of SQL bottlenecks. Now, the developer and DBA (especially the next-gen DBA) can easily identify SQL bottlenecks. And an enhancement to BMC AMI Change Manager for IMS (part of BMC AMI Administration for IMS) enables systems programmers to route commands from a single screen across multiple IMS systems within an IMSPLEX.

Further enhancements to reporting and log records facilitate database performance optimization and debugging. Enhanced comparison of BMC AMI Fast Path Analyzer/EP history files makes it easier to spot usage trends within BMC AMI Database Advisor for IMS, while additional Fast Path log records in BMC AMI Log Analyzer for IMS abend reports provide increased visibility into the debugging process.

Integrated solutions for an integrated world

With our July 2023 quarterly release, BMC continues its commitment to support and advance your organization’s digital transformation. Just as integrations of entertainment, shopping, banking, and other digital experiences—and the convenience they provide—have become commonplace, we believe that the integration of BMC solutions, and of the mainframe with other technologies, improves performance and reliability, leading to optimized experiences for mainframe professionals and end users alike.

Learn more about the enhancements included in the July 2023 quarterly release on the BMC What’s New in Mainframe Solutions page.

Mainframe Best Practices for Affordable Backup and Efficient Recovery
https://www.bmc.com/blogs/mainframe-best-practices-for-affordable-backup-and-efficient-recovery/ (Tue, 20 Jun 2023)

Mainframe teams these days are expected to contain backup and archiving costs while ensuring minimum downtime, especially in disaster recovery situations. While full-blown disasters may be rare, costly outages and interruptions are not, and a 2022 ITIC survey reveals just how expensive they are: 91 percent of mid-sized and large enterprises said that a single hour of downtime costs over $300,000, with 44 percent reporting that the cost was $1 million to $5 million.

When designing a data management solution, it is important to explore cost-effective backup options that allow efficient recovery to cope with the enormous amounts of generated data. At the same time, it is also important to look into how to improve recovery efficiency, even if it might increase the direct backup costs.

Reducing backup costs

The total cost of ownership (TCO) of mainframe data management consists of several direct and indirect costs. Using the following methods, an organization can reduce backup costs while still meeting demanding recovery requirements:

  • Incremental backup: Instead of backing up all data sets, implement solutions that support incremental backups and only back up data sets that have changed since the previous backup process.
  • Deduplication: Significant storage space can be saved by eliminating duplicate copies of repeating data. It is therefore recommended to enable deduplication if your target storage system supports it.
  • Compression: Another way to contain data management costs is to ensure that backup data is compressed before it is sent over the network to the storage system (see the sketch after this list).
  • Leveraging commodity storage: Maintaining tape-related hardware and software imposes substantial costs. Instead, a cost-efficient data management solution like BMC AMI Cloud Data securely delivers mainframe data to any cloud or on-prem storage system. This makes it possible to benefit from pay-as-you-go cloud storage instead of stocking up on tapes and VTLs.

On top of the above-mentioned practices to reduce the TCO of the data management continuum, one should also factor in the costs of archiving data for longer periods of time to meet regulatory requirements. For example, banks have to keep masses of archived data for many years to comply with regulations, most of which will never be accessed. As explained in this blog post, selecting the right kind of storage for this type of data can significantly affect backup costs.

Improving recovery efficiency

A more efficient recovery often requires additional measures in the backup stage, which might actually increase backup costs. However, the staggering costs of unplanned downtime alone can justify the investment, not to mention the heavy non-compliance fees. The following methods can be used for a more efficient recovery:

  • Write Once Read Many (WORM) storage: Keeping backups on WORM storage in the cloud or on-premises prevents the accidental or malicious erasure and tampering that would make recovery difficult, more expensive, or subject to ransom. In the event of an incident, immutable backup data in the cloud is available as soon as the system is up and running, without needing to wait for archived data (see the sketch after this list).
  • Multiple snapshots: Taking snapshots, also known as flash copies, of volumes and data sets at regular intervals helps to maintain data set versioning, which is important for automated recovery processes. Snapshots also make it possible to recover a data set in case of logical failure.
  • Stand-alone restore: Stand-alone restore allows bare-metal recovery from tape or cloud in cases of cyberattacks, disasters, and errors. Cloud-based backup platforms like BMC AMI Cloud enable initial program load (IPL) from a cloud server for a quick recovery that significantly reduces unplanned downtime.
  • End-to-end encryption: End-to-end encryption reduces the risk of malicious data corruption that could cause logical failures and other problems making recovery scenarios more complex and more expensive. Encryption is also critical for meeting regulatory requirements regarding data security and privacy.
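One common way to implement the WORM point above in the cloud is S3 Object Lock. The following sketch uses the AWS SDK for Java v2 and is illustrative only: the bucket (which must have been created with Object Lock enabled), key, region, and retention period are placeholder assumptions, not part of any specific product’s workflow.

import java.time.Instant;
import java.time.temporal.ChronoUnit;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.ObjectLockMode;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class WormBackupUpload {
    public static void main(String[] args) {
        String bucket = "mainframe-backup-vault";                // hypothetical bucket with Object Lock enabled
        String key = "backups/2024-06-01/volser-100023.dump.gz"; // hypothetical object key
        byte[] backupBlock = new byte[0];                        // placeholder for compressed backup data

        try (S3Client s3 = S3Client.builder().region(Region.EU_WEST_1).build()) {
            PutObjectRequest request = PutObjectRequest.builder()
                    .bucket(bucket)
                    .key(key)
                    // COMPLIANCE mode: this object version cannot be overwritten or deleted
                    // by any user until the retain-until date passes.
                    .objectLockMode(ObjectLockMode.COMPLIANCE)
                    .objectLockRetainUntilDate(Instant.now().plus(365, ChronoUnit.DAYS))
                    .build();
            s3.putObject(request, RequestBody.fromBytes(backupBlock));
        }
    }
}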
How to Choose an Object Storage Target Repository
https://www.bmc.com/blogs/how-to-choose-object-storage-target-repository/ (Mon, 19 Jun 2023)

For many mainframers, the concept of writing to object storage from zSeries mainframes over TCP/IP is a new concept.

The ease of use and the added value of implementing this solution are clear, but there is another question: What to use as a target repository? How do customers decide on a vendor for object storage and whether a private cloud, hybrid cloud, or public cloud should be used? Target repositories can be either an on-premises object storage system, like Hitachi HCP or Cohesity, or a public cloud, such as AWS, Azure, or GCP.

The best option for you depends on your individual needs. There are pros and cons in each case. In this post, we break down the factors you need to consider as you choose a target cloud repository that will meet your needs.

Network bandwidth and external connections

Consider the bandwidth of the OSA cards and external bandwidth to remote cloud, if cloud is an option. Is the external connection shared with other platforms? Is a cloud connection already established for the corporation?

For on-premises storage, network connectivity is required, yet it is an internal network with no external access.

Amount of data recalled, restored, or read back from repository

There are added costs for reading data back from the public cloud, so an understanding of expected read throughput is important when comparing costs. If the read rate is high, then consider an on-premises solution.

DR testing and recovery plans

Cloud-based recovery allows recovery from anywhere, and public clouds can replicate data across multiple sites automatically. The disaster recovery or recovery site must have network connectivity to the cloud.

On-premises solutions require a defined disaster recovery setup, a second copy of the object storage off-site that is replicated from the primary site. Recovery at the DR site will access this replicated object storage.

Corporate strategies, such as “Mainframe Modernization” or “Cloud First”

You should be able to quickly move mainframe data to cloud platforms by modernizing backup and archive functions. Cloud also offers policy-driven and/or automatic tiering of data to lower the cost of cold storage.

If there is no cloud initiative, the on-premises solution may be preferred. Many object storage providers have options to push the data from on-premises to public cloud. So, hot data can be close and cold data can be placed on clouds.

Cloud acceptance or preferred cloud vendor already defined

Many corporations already have a defined cloud strategy and a cloud vendor of choice. You’ll want a vendor-agnostic solution.

The knowledge of defining the repository and maintaining it could be delegated to other groups within the organization familiar with and responsible for the corporate cloud.

Cyber resilience requirements

On-premises solutions can generate immutable snapshots to protect against cyberthreats. An air-gapped solution can be architected to place copies of data on a separate environment that can be detached from networks.

Cloud options also include features like versioning, multiple copies of data, and multi-authentication to protect data and allow recovery.

Floor or rack space availability

With an on-premises solution, floor space, rack space, power, etc. are required. With a cloud solution, no on-premises hardware is required.

Performance

There is no clear-cut performance benefit for either solution. It depends on the hardware and network resources, the amount of data to be moved, and contention from other activity in the shop using the same resources.

Cloud customers with performance concerns may choose to establish a direct connection to cloud providers in local regions to prevent latency issues. These concerns are less relevant when a corporate cloud strategy is already in place.

Costs

Cloud storage is priced by repository size and type. There are many add-on costs for features and costs for reading back. There are mechanisms to reduce costs, such as tiering data. Understanding these costs upfront is important.

On-premises object storage requires a minimum of two systems for redundancy, plus installation and ongoing maintenance.

How Focusing on Data Storage Challenges Helps IT Leaders Achieve Greater Cyber Resilience
https://www.bmc.com/blogs/how-a-focus-on-data-storage-challenges-will-help-it-leaders-achieve-greater-cyber-resiliency/ (Mon, 19 Jun 2023)

Establishing cyber resilience continues to grow in difficulty thanks to three main factors: 1) a rapid and continuous increase in data generation, 2) an increase in IT complexity thanks to factors like remote work, and 3) a continuously growing volume and frequency of cyberattacks. Trying to simplify IT and get a better understanding of your data and how to protect it is an uphill battle for IT leaders.

On top of this, additional pressure to achieve cyber resilience is being fueled by data privacy regulations that are starting to come out of grace periods—doling out hefty fines. There are many companies that have already been fined significant amounts of money. IT leaders are under a great deal of pressure and need to get a handle on the challenges of cyber resilience so they can start making changes that help them better prepare for cyberthreats while lowering risk of non-compliance.

Cyber resilience initiatives cover a broad range of areas from employee happiness to DevOps. It has become increasingly difficult for IT leaders to prioritize and find the best places to focus their limited resources.

When looking across this range, the area with the biggest potential for improvement is in data storage. This is where IT leaders can focus to achieve a real impact on cyber protection.

A 2020 ESG survey found that more than half of organizations have greater than 250TB of data, and 29 percent have more than 500TB of data. Data volume can be overwhelming, but the bigger struggle is the fact that this data resides across many different storage solutions from the mainframe to SaaS providers. And a different approach is needed for each storage type. Data storage is where IT leaders need to focus for the biggest impact.

This article will cover the challenges faced with each type of data storage as well as tips on how to overcome these challenges. Additionally, we will discuss the positive impact on the business in making changes in data storage.

Focusing on data storage will positively impact cyber resilience

Breaking down this initiative into phases will help IT leaders to manage continued progress and show results to executives. More importantly, after each project phase you will be able to deliver compelling results back to the business.

The four phases of building cyber resilience in data storage:

Phase 1

Project: Gain control over the security and backup issues with various data storage solutions and minimize where possible the storage types used.

Business outcomes: Improved recoverability with more control of backups from SaaS providers. Lower storage costs from consolidation of data storage solutions.

Phase 2

Project: Discover, identify, and classify structured and unstructured data in order to move it to the correct storage solution.

Business outcomes: Proof of lower risk of data privacy non-compliance. Proof of lower risk of data theft. Lower storage costs from eliminating or archiving unneeded data.

Phase 3

Project: Put the appropriate security in place to protect your data backups from cyber threats.

Business outcomes: Evidence of improved security with a list of multiple measures that can be explained to executives. Proof of compliance with security measures required for data privacy.

Phase 4

Project: Reevaluate and improve your disaster recovery capabilities, ensuring they meet business needs in all scenarios.

Business outcomes: Evidence of disaster recovery testing results for various scenarios meeting business objectives. Regular reporting showing continued improvement of recovery objectives.

Working through these four phases will yield positive results for cyber resilience as well as cost avoidance, ranging from storage costs to data privacy fines. Each phase provides clear evidence for business executives to prove the value of their efforts. In a time when cyber resilience initiatives are broad and the IT environment complex, a strong focus on data storage will pay real dividends for IT leaders.

Gaining visibility and understanding your data

The most common, fundamental challenge that IT leaders face in cyber resilience initiatives is gaining visibility and understanding of their unstructured data—whether stored on-premises or by cloud solution providers. Most companies have suffered from data sprawl combined with a lack of labeling and categorization standards for data stored on individual hard drives or cloud storage. Mergers and acquisitions, structural changes, and lift and shift migrations have only added to this problem.

Gaining visibility into your data in order to understand it is critical to lowering the risk of exposure to a breach or of violating data privacy regulations. Both scenarios are painful and costly. Additionally, understanding your data is a key step in helping you to eliminate unneeded data—thereby reducing the amount of data you need to back up.

A great step IT leaders can take here is to invest in an intelligent data management solution. Select a solution that utilizes artificial intelligence to identify unstructured content. Specifically, select one that is pre-trained to recognize things like resumes and invoices that may have personal information in them. These tools can also automate the classification and labeling of this data so you can make better decisions on where the data should be stored, who should have access to it, what level of protection is needed, and what backup strategy to implement.

Strategies vary by storage type

Most large organizations have a long list of data storage types including on-premises servers and mainframes, cloud storage, and various SaaS providers. This poses a challenge to IT leaders looking to ensure data protection and appropriate backup solutions across the various technologies and vendors.

Here are a few considerations that IT leaders need to be aware of for each storage type.

Cloud storage

Many companies have adopted public cloud storage for some of their data, resulting in large hybrid cloud infrastructures. As IT leaders work toward cyber resilience, this often requires increasing storage capacity to accommodate rigorous backup needs.

To avoid additional costs, IT leaders should consider adopting software-defined storage solutions that can help them better manage their hybrid environment while maximizing their storage scale. This results in lowered storage costs as well as better performance in recovery. In fact, the case for software-defined storage solutions is so compelling that Gartner predicts that by 2024, 50 percent of global storage capacity will be deployed as software-defined storage.

SaaS

With so much data being generated and stored by third-party SaaS providers, IT leaders need to ensure they have a handle on SaaS backups. Unfortunately, there are no industry standards and backup scenarios for SaaS applications are rare. Yet there are many disaster scenarios where data loss can happen.

IT leaders need to assess the data protection and recovery processes before deploying new SaaS applications. Contractually, it should be clear how data is backed up and accessed—both in case of a disaster and in ending the subscription. Ensuring alignment on recovery objectives is just as important as SLAs for uptime. Consider also deploying a backup solution that can support multiple SaaS applications, so you improve your recoverability while keeping complexity at a minimum.

On-premises & mainframe

While working on cyber resilience, IT leaders are finding that new advances in cloud storage and data protection management are providing opportunities to lower costs while maintaining, or even improving, cyber resilience. On-premises, tape-related backups are often complex, slow, and costly to maintain.

Moving mainframe backups to secure cloud data management and storage can significantly reduce storage costs, simplify operations, reduce backup times, and improve recoverability.

Data Recovery is a critical piece of the process

Once you’ve got your data backups protected, you still need to think about data recovery. IT leaders working on cyber resilience will likely find value in reevaluating their disaster recovery posture.

Key areas of disaster recovery evaluation are:

  • Getting a clear understanding of business expectations on recovery and ensuring that recovery point and recovery time objectives can be met for each data storage solution.
  • Considering the differences between various disasters such as data theft versus ransomware versus an outage and putting unique recovery plans in place for each. Be sure to consider recovery from different locations in this analysis.
  • Planning for recovery performance by looking at ways to orchestrate the recovery process and make it faster. Figuring out the restore order for applications and databases is a necessary aspect of setting and meeting business objectives. Adding appliances to backup clusters can also increase compute without increasing capacity.
  • Making maintenance and testing of disaster recovery a resourced part of operations where the team is not just testing for pass/fail, but looking for ways to continue to improve. Additionally, processes are needed to ensure backup and recovery systems are up to date on configurations and patching.

Backups alone are NOT a foolproof strategy

Once you understand your data and have made appropriate decisions in how to store it, you also have to determine how to appropriately back up critical data. Careful attention also needs to be paid to protecting the backups and ensuring they are accessible should recovery be needed.

Cyberattackers have turned the security backup trend into a new opportunity to exploit. More sophisticated attacks are now targeting backup data—to steal this data, wipe it, and/or to use it as a “roadmap” for the critical data in your system that they need to lock down for a ransomware attack. Combined with the steady increase in volume and frequency of attacks, this is a critical area for IT leaders to address.

This means cyber resilience requires increased focus on the protection of where data backups are stored and having more copies of those backups made and stored in different systems.

To meet these backup challenges IT leaders should focus on:

  • Eliminating network sharing protocols when implementing storage, as this is an area of weakness that attackers use to gain entry. Instead, use secure object storage protocols or secure data movement APIs that utilize encryption in transit.
  • Improving administrative permissioning by using multifactor authentication, separating administrative roles and creating multiperson authorization workflows.
  • Implementing immutable file storage so that backup data can only be deleted in special circumstances, but not by a malicious actor.
  • Ensuring multiple copies of backup data are made and stored at a disaster recovery site or within a cloud provider’s infrastructure. Combine your copy strategy with immutable data storage and restrictive admin controls for best results.
Redis®* Cache on Production: An Overview and Best Practices
https://www.bmc.com/blogs/redis-cache-on-production/ (Thu, 06 Oct 2022)

In the modern landscape of complex applications, cloud-native technologies empower organizations to build and run scalable applications in public, private, and hybrid clouds.

An in-memory cache has become an essential component for developing loosely coupled systems that are resilient, manageable, and observable with microservices, containers, and immutable infra-services.

Unlike traditional databases, in-memory data stores don’t require a trip to disk, reducing engine latency to microseconds, so they can support an order of magnitude more operations and faster response times. The result is blazing-fast performance with average read and write operations taking less than a millisecond and support for millions of operations per second.

Building on earlier experience using caching solutions like Infinispan and Hazelcast, we evaluated various cloud-based and on-premises cache solutions with the following requirements:

  • Ability to scale out seamlessly, from a few thousand events per second to multimillion events
  • Support for various data types and languages
  • Performance metrics for monitoring
  • Cache/entry-level time to live (TTL) support

Based on our findings, our BMC Helix SaaS solutions leverage Redis and Redisson Client for faster, more accurate, and more efficient ways of delivering innovations for the modern enterprise. Redis, which is short for REmote DIctionary Server, is an in-memory cache data structure that enables low latency and high throughput data access.

If you’re interested in deploying Redis at your organization, keep reading for some tips and best practices based on what we’ve learned from our deployment.

In-Memory Caching Service

First, you will need an in-memory caching service that supports Redis, such as a managed cloud service (for example, Amazon ElastiCache, Amazon MemoryDB for Redis, Azure Redis Cache, or Google Cloud Memorystore) or a self-managed Redis deployment on-premises.

Deployment Types

You should choose your deployment type based on your application use cases, scale, and best practices, while also considering factors such as number of caches, cache size, Pub/Sub workloads, and throughput.

Non-Cluster Mode

Figure 1. Non-cluster deployment with a single shard that contains one primary and two replica nodes.

Cluster Mode

Figure 2. Cluster deployment with three shards, each containing one primary and two replica nodes.

Sharding

A shard is a hierarchical arrangement of one to six nodes, wrapped in a cluster, that supports replication. Within a shard, one node functions as the read-write primary node, and all the other nodes function as read-only replicas. Below are a few key points about individual shards:

  • Up to five replicas per shard (one master plus up to five replica nodes)
  • Nodes should be deployed in a shard on multiple availability zones or data centers for fault tolerance
  • In case of a master node failure, one of the replicas will become the master

A production deployment might use three shards with three nodes per shard (one master and two replicas), each residing in a different availability zone or data center. The cluster node type (CPU/memory) and scale-out and scale-in decisions are based on the cache types, size, and number of operations per second.

Every shard in a Redis cluster is responsible for a subset of the 16,384 hash slots; so, for example, you may have a cluster with three replication groups (shards), as follows (a sketch of how keys map to slots appears after this list):

  • Shard 1 contains hash slots from 0 to 5500
  • Shard 2 contains hash slots from 5501 to 11000
  • Shard 3 contains hash slots from 11001 to 16383
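Keys are mapped to those slots deterministically: per the Redis cluster specification, the slot is CRC16 of the key modulo 16384 (real clients also honor {hash tag} syntax, which is omitted here for brevity). The following standalone Java sketch of the calculation is for illustration only:

public class RedisSlot {

    // CRC16-CCITT (XMODEM): polynomial 0x1021, initial value 0x0000, no reflection.
    static int crc16(byte[] data) {
        int crc = 0x0000;
        for (byte b : data) {
            crc ^= (b & 0xFF) << 8;
            for (int i = 0; i < 8; i++) {
                crc = ((crc & 0x8000) != 0) ? ((crc << 1) ^ 0x1021) : (crc << 1);
                crc &= 0xFFFF;
            }
        }
        return crc;
    }

    static int slot(String key) {
        return crc16(key.getBytes(java.nio.charset.StandardCharsets.UTF_8)) % 16384;
    }

    public static void main(String[] args) {
        for (String key : new String[]{"user:1001", "metric:cpu", "event:42"}) {
            System.out.println(key + " -> slot " + slot(key));
        }
    }
}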

Redis Client

We zeroed in on Redisson after evaluating the available APIs based on the use cases and data structure requirements. It provides distributed Java data structures on top of Redis for objects, collections, locks, and message brokers and is compatible with Amazon ElastiCache, Amazon MemoryDB for Redis, Azure Redis Cache, and Google Cloud Memorystore.
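As a rough sketch of how an application might bootstrap a Redisson client against a Redis cluster (the node addresses below are placeholders; real deployments would also tune pool sizes, timeouts, and credentials):

import org.redisson.Redisson;
import org.redisson.api.RedissonClient;
import org.redisson.config.Config;

public class RedissonBootstrap {
    public static void main(String[] args) {
        Config config = new Config();
        // Cluster mode: list one or more seed nodes; Redisson discovers the remaining topology.
        config.useClusterServers()
              .addNodeAddress("redis://10.0.0.1:6379", "redis://10.0.0.2:6379");

        RedissonClient redisson = Redisson.create(config);
        try {
            System.out.println("Connected, client id: " + redisson.getId());
        } finally {
            redisson.shutdown();
        }
    }
}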

Redis Client Key Usages

A streaming application that processes millions of metric, event, and log messages per second has various use cases that require low-latency cache operations, which informed our choice of cache type.

RMap is a Redis-based distributed map object for the Java ConcurrentMap interface that’s appropriate for:

  • Use cases where short-lived caches are required
  • Use cases where eviction at the cache level, rather than at the key/entry level, is sufficient
  • Clarity exists on the probable cache size and max insert/retrieve operations

RLocalCacheMap is a near-cache implementation to speed up read operations and avoid network roundtrips. It caches map entries on the Redisson side and executes read operations up to 45 times faster compared to common implementations. The current Redis implementation doesn’t have a map entry eviction functionality, so expired entries are cleaned incrementally by org.redisson.eviction.EvictionScheduler. RLocalCacheMap is appropriate for:

  • Use cases where the number of cache keys is certain and won’t grow beyond a certain limit
  • The number of cache hits is high
  • The workflow can tolerate occasional cache misses

RMapCache is a cache object that supports eviction at key level and is appropriate for use cases that require that functionality and situations where ephemeral cache keys must be cleaned periodically.

Redis-based Multimap for Java allows you to bind multiple values per key.

Redis-based RLock is a distributed reentrant lock object for Java.
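A minimal usage sketch of the structures described above, assuming a RedissonClient like the one created earlier (names, sizes, and TTL values are illustrative only):

import java.util.concurrent.TimeUnit;
import org.redisson.api.LocalCachedMapOptions;
import org.redisson.api.RLocalCachedMap;
import org.redisson.api.RLock;
import org.redisson.api.RMap;
import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;

public class CacheTypeExamples {
    static void demo(RedissonClient redisson) throws InterruptedException {
        // RMap: plain distributed map, no per-key eviction.
        RMap<String, String> sessions = redisson.getMap("sessions");
        sessions.put("user:1001", "token-abc");

        // RMapCache: supports a TTL per entry (eviction at key level).
        RMapCache<String, String> ephemeral = redisson.getMapCache("ephemeral");
        ephemeral.put("event:42", "payload", 10, TimeUnit.MINUTES);

        // RLocalCachedMap: near cache on the client side to avoid network round trips.
        RLocalCachedMap<String, String> lookup =
                redisson.getLocalCachedMap("lookup", LocalCachedMapOptions.<String, String>defaults());
        lookup.put("country:US", "United States");

        // RLock: distributed reentrant lock.
        RLock lock = redisson.getLock("reindex-lock");
        if (lock.tryLock(5, 30, TimeUnit.SECONDS)) { // wait up to 5s, auto-release after 30s
            try {
                // ... critical section ...
            } finally {
                lock.unlock();
            }
        }
    }
}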

Monitoring Key Performance Indicators (KPIs)

The following KPIs should be monitored to ensure that the cluster is stable:

  • EngineCPUUtilization: CPU utilization of the Redis engine thread
  • BytesUsedForCache: Total number of bytes used by memory for cache
  • DatabaseMemoryUsagePercentage: Percentage of the available cluster memory in use
  • NetworkBytesIn: Number of bytes read from the network; monitor at the host, shard, and overall cluster level
  • NetworkBytesOut: Number of bytes sent out from a host, shard, and cluster level
  • CurrConnections: Number of active client connections
  • NewConnections: Total accepted connections during a given period

Redis Is Single-Threaded

Redis uses a mostly single-threaded design, which means that a single process serves all the client requests with a technique called multiplexing. Multiplexing allows for a form of implicit pipelining, which, in the Redis sense, means sending commands to the server without waiting for the responses to previously issued commands.
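Because a single engine thread handles all requests, grouping commands so they share a round trip matters more than raw client-side parallelism. In Redisson, explicit pipelining is available through RBatch; a rough sketch, assuming the client from the earlier example (map and counter names are made up):

import org.redisson.api.BatchResult;
import org.redisson.api.RBatch;
import org.redisson.api.RedissonClient;

public class PipelineSketch {
    static void flushMetrics(RedissonClient redisson) {
        // Commands are queued locally and flushed together, cutting per-command round trips.
        RBatch batch = redisson.createBatch();
        batch.getMap("metrics").fastPutAsync("cpu", "87");
        batch.getMap("metrics").fastPutAsync("mem", "64");
        batch.getAtomicLong("events:processed").incrementAndGetAsync();

        BatchResult<?> result = batch.execute();
        System.out.println("responses received: " + result.getResponses().size());
    }
}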

Production Issues

As we expanded from a few caches to many, we performed vertical and horizontal scaling based on the above key metrics, cost, and sizing recommendations. One critical issue we faced was a warning about high engine CPU utilization, even though the application read-write flow was unchanged. That made the whole cluster unresponsive. Scaling out and vertical scaling didn’t help, and the issue repeated.

Engine CPU Utilization

Figure 3. Engine CPU utilization of one of the shards that breached a critical threshold.

Troubleshooting Steps

Key Findings

  • Publish and subscribe (pub/sub) operations were high on the problematic shard
  • One of the hash slots had a large number of keys
  • RMapCache seems to be the culprit

Issues with RMapCache

RMapCache uses a custom scheduler to handle the key-level TTLs, which triggers a large number of cache entry cleanups, resulting in heavy pub/sub traffic and keeping the cluster bus busy.

After a client publishes one message on a single node, this node will propagate the same message to other nodes in the cluster through the cluster bus. Currently, the pub/sub feature does not scale well with large clusters. Enhanced input and output (IO) is not able to flush the large buffer efficiently on the cluster bus connection due to high pub/sub traffic. In Redis 7, a new feature called sharded pub/sub has been implemented to solve this problem.

Lessons Learned

  1. Choose cache types based on usage patterns:
    • Cache without key-level TTL
    • Cache with key-level TTL
    • Local or near cache

    For a cache with key-level TTL, ensure that the cache is partitioned into multiple logical cache units as much as possible so that entries are distributed among shards (a partitioning sketch follows this list). The number of caches can grow to a few thousand without an issue. Short-lived caches with cache-level TTL are an option.

  2. While leveraging the Redisson or other client implementations on top of Redis, be careful with the configuration and impact on the cluster.
    Ensure that the value part is not a collection (if a collection is unavoidable, limit its size). Updating an entry on the collection value type has a large impact on the replication.
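A rough illustration of the first lesson: rather than one huge RMapCache with key-level TTLs, spread keys across a fixed number of smaller logical caches so that entries (and the eviction traffic they generate) are distributed across shards. The names and partition count below are arbitrary assumptions, not a prescribed configuration.

import java.util.concurrent.TimeUnit;
import org.redisson.api.RMapCache;
import org.redisson.api.RedissonClient;

public class PartitionedTtlCache {
    private static final int PARTITIONS = 16;
    private final RedissonClient redisson;

    PartitionedTtlCache(RedissonClient redisson) {
        this.redisson = redisson;
    }

    // Route each key to one of several smaller RMapCache instances.
    private RMapCache<String, String> partitionFor(String key) {
        int idx = Math.floorMod(key.hashCode(), PARTITIONS);
        return redisson.getMapCache("events:part:" + idx);
    }

    void put(String key, String value) {
        partitionFor(key).put(key, value, 15, TimeUnit.MINUTES);
    }

    String get(String key) {
        return partitionFor(key).get(key);
    }
}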

Conclusion

Looking to provide a real-time enterprise application experience at scale? Based on our usage and experience, we recommend that you check out Redis along with the Redisson Client.

Experience it for yourself with a free, self-guided trial of BMC Helix Operations Management with AIOps, a fully integrated, cloud-native, observability and AIOps solution designed to tackle challenging hybrid-cloud environments.

*Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by BMC is for referential purposes only and does not indicate any sponsorship, endorsement or affiliation between Redis and BMC.

Release Management in DevOps https://www.bmc.com/blogs/devops-release-management/ Wed, 30 Mar 2022 00:00:34 +0000

The rise in popularity of DevOps practices and tools comes as no surprise to those who already use techniques centered on maximizing the efficiency of software enterprises. Just as Agile quickly proved its capabilities, DevOps has taken its cue from Agile and built on it to create tools and techniques that help organizations adapt to the rapid pace of development today’s customers have come to expect.

As DevOps is an extension of Agile methodology, DevOps itself calls for extension beyond its basic form as well.

Collaboration between development and operations team members in an Agile work environment is a core DevOps concept, but there is an assortment of tools that fall under the purview of DevOps that empower your teams to:

  • Maximize their efficiency
  • Increase the speed of development
  • Improve the quality of your products

DevOps is both a set of tools and practices as well as a mentality of collaboration and communication. Tools built for DevOps teams are tools meant to enhance communication capabilities and create improved information visibility throughout the organization.

DevOps specifically looks to increase the frequency of updates by reducing the scope of changes being made. Focusing on smaller tasks allows teams to dedicate their attention to truly fixing an issue or adding robust functionality without stretching themselves thin across multiple tasks.

This means DevOps practices provide faster updates that also tend to be much more successful. Not only does the increased rate of change please customers as they can consistently see the product getting better over time, but it also trains DevOps teams to get better at making, testing, and deploying those changes. Over time, as teams adapt to the new formula, the rate of change becomes:

  • Faster
  • More efficient
  • More reliable

In addition to new tools and techniques being created, older roles and systems are also finding themselves in need of revamping to fit into these new structures. Release management is one of those roles that has found the need to change in response to the new world DevOps has heralded.

What is Release Management?

Release management is the process of overseeing the planning, scheduling, and controlling of software builds throughout each stage of development and across various environments. It typically includes the testing and deployment of software releases as well.

Release management has had an important role in the software development lifecycle since before it was known as release management. Deciding when and how to release updates was its own unique problem even when software saw physical disc releases with updates occurring as seldom as every few years.

Now that most software has moved from hard and fast release dates to the software as a service (SaaS) business model, release management has become a constant process that works alongside development. This is especially true for businesses that have converted to utilizing continuous delivery pipelines that see new releases occurring at blistering rates. DevOps now plays a large role in many of the duties that were originally considered to be under the purview of release management roles; however, DevOps has not resulted in the obsolescence of release management.

Advantages of Release Management for DevOps

With the transition to DevOps practices, deployment duties have shifted onto the shoulders of the DevOps teams. This doesn’t remove the need for release management; instead, it modifies the data points that matter most to the new role release management performs.

Release management acts as a method for filling the data gap in DevOps. Planning implementation and rollback safety nets is part of the DevOps world, but release management still needs to keep tabs on each application, its components, and the promotion schedule as part of change orders. The key to managing software releases in a way that keeps pace with DevOps deployment schedules is automated management tools.

Aligning business & IT goals

The modern business is under more pressure than ever to continuously deliver new features and boost the value it offers customers. Buyers have come to expect that their software evolves and continues to develop innovative ways to meet their needs. The business side takes an outside-in perspective to glean insights into customer needs, while IT needs an inside view of the product to develop those features.

Release management provides a critical bridge between these two perspectives. It coordinates IT work with business goals to maximize the success of each release, balancing customer desires with development work to deliver the greatest value to users.

(Learn more about IT/business alignment.)

Minimizing organizational risk

Software products contain millions of interconnected parts that create an enormous risk of failure. Users are often affected differently by bugs depending on their other software, applications, and tools. Plus, faster deployments to production increase the overall risk that faulty code and bugs slip through the cracks.

Release management minimizes the risk of failure by employing several strategies. Testing and governance catch critical faulty sections of code before they reach the customer. Deployment plans ensure there are enough team members and resources to address any potential issues before they affect users. And all dependencies between those millions of interconnected parts are recognized and understood.

Directing accelerating change

Release management is foundational to the discipline and skill of continuously producing enterprise-quality software. The rate of software delivery continues to accelerate and is unlikely to slow down anytime soon. The speed of changes makes release management more necessary than ever.

The move towards CI/CD and increased automation ensures that delivery will only accelerate further. However, it also brings increased risk, unmet governance requirements, and potential disorder. Release management helps promote a culture of excellence and scale DevOps to an organizational level.

Release management best practices

As DevOps adoption increases and changes accelerate, it is critical to have best practices in place to ensure that every release moves as quickly as possible. Well-refined processes enable DevOps teams to work more effectively and efficiently. Some best practices to improve your processes include:

Define clear criteria for success

Well-defined requirements in releases and testing will create more dependable releases. Everyone should clearly understand when things are actually ready to ship.

Well-defined means that the criteria cannot be subjective. Subjective criteria will keep you from learning from mistakes and refining your release management process to identify what works best. The criteria also need to be agreed upon by every team member: release managers, quality supervisors, product vendors, and product owners must all share the same set of criteria before starting a project.

Minimize downtime

DevOps is about creating an ideal customer experience. Likewise, the goal of release management is to minimize the amount of disruption that customers feel with updates.

Strive to consistently reduce customer impact and downtime with active monitoring, proactive testing, and real-time collaborative alerts that quickly notify you of issues during a release. A good release manager will be able to identify any problems before the customer does.

The team can resolve incidents quickly and experience a successful release when proactive efforts are combined with a collaborative response plan.

Optimize your staging environment

The staging environment requires constant upkeep. Maintaining an environment that is as close as possible to your production one ensures smoother and more successful releases. From QA to product owners, the whole team must maintain the staging environment by running tests and combing through staging to find potential issues with deployment. Identifying problems in staging before deploying to production is only possible with the right staging environment.

Maintaining a staging environment that is as close as possible to production will enable DevOps teams to confirm that all releases will meet acceptance criteria more quickly.

Strive for immutability

Whenever possible, aim to create new updates as opposed to modifying existing ones. Immutable programming drives teams to build entirely new configurations instead of changing existing structures. These new updates reduce the risk of the bugs and errors that typically appear when modifying current configurations.

These inherently more reliable releases result in more satisfied customers and employees.

Keep detailed records

Good records management for all release and deployment artifacts is critical. From release notes to binaries to a compilation of known errors, records are vital for reproducing entire sets of assets; without them, teams are forced to rely on tacit knowledge.

Focus on the team

Well-defined and implemented DevOps procedures will usually create a more effective release management structure. They enable best practices for testing and cooperation during the complete delivery lifecycle.

Although automation is a critical aspect of DevOps and release management, its aim is to enhance team productivity. The more that release management and DevOps focus on decreasing human error and improving operational efficiency, the more quickly they’ll be able to release dependable services.

Automation & release management tools

Release managers working with continuous delivery pipelines can quickly become overwhelmed by the volume of work necessary to keep up with deployment schedules. Enterprises are left with two options: hire more release management staff or employ automated release management tools. Not only is more staff the more expensive option in most cases, but adding more chefs to the kitchen is not always the best way to get dinner ready faster. More hands in the process create more opportunities for miscommunication and over-complication.

Automated release management tools provide end-to-end visibility for tracking application development, quality assurance, and production from a central hub. Release managers can monitor how everything within the system fits together, which provides deeper insight into the changes made and the reasons behind them. This empowers collaboration by giving everyone detailed updates on the software’s position in the current lifecycle, allowing for the constant improvement of processes. The strength of automated release management tools is in their visibility and usability; many can be accessed through web-based portals.

Powerful release management tools use smart automation to ensure continuous integration, which enhances the efficiency of continuous delivery pipelines and allows for the steady deployment of stable, complex applications. Intuitive web-based interfaces give enterprises centralized management and troubleshooting tools that help them plan and coordinate deployments across multiple teams and environments. The ability to create a single application package and deploy it across multiple environments from one location expedites continuous delivery pipelines and greatly simplifies their management.

What’s CNAB? The Cloud Native Application Bundle Explained https://www.bmc.com/blogs/cnab-cloud-native-application-bundle/ Tue, 09 Nov 2021 12:46:43 +0000

Cloud-native applications promise performance on the cloud by taking full advantage of cloud-native features, integrated through application programming interfaces (APIs) with your containerized applications.

However, working with the cloud is inherently complicated—and running containerized applications has its limitations: high deployment complexity and vendor lock-in due to limited API support. Despite the proliferation of open-source container orchestration solutions such as Kubernetes, the process of managing multiple containers supporting a large application in a multi-cloud environment is challenging.

To address these challenges, the Cloud Native Application Bundle (CNAB) debuted in 2018 as a cloud-agnostic package format specification for multi-component containerized applications. How does a CNAB achieve that? What are the main elements of a CNAB system? Who should use CNAB?

Let’s discuss these questions in detail.

The Cloud Native Application Bundle (CNAB) solution

Consider the case of distributed container-based applications that use several API integrations.

Figure: Cloud Native Application Bundle

The containers, operating as the executable units of the software application, package the code, libraries, and dependencies in a standardized way so they can run in traditional data centers or in cloud environments. Additional services, including virtual machines (VMs) and Function as a Service (FaaS), are also packaged, along with load balancers and other REST API-connected services.

Before a package is deployed for a specific cloud environment, it must include all components necessary to enable full integration and support with the host infrastructure.

The problem arises from the limited integration support and the manual configuration effort required to run the app in dynamic cloud environments or to migrate between cloud vendors. Additionally, there is a range of manual tasks associated with configuration management and container package installation, including:

  • Managing and verifying package content credentials
  • Managing offline and versioned installations
  • Testing, signing, and verifying installer details
  • Managing audit trails and regular reporting for regulatory and organizational policy compliance
  • Uninstalling and deprovisioning of resources

Components of CNAB

The Cloud Native Application Bundle format provides a standardized description and guidelines for packaging apps to run in multi-cloud environments, at the edge, or in other IoT applications and services. A CNAB contains the following bundle items:

  • The bundle description, such as version, tags, keywords, and other asset descriptions. This includes the schema version, top-level package information, the list of credentials, and schema definitions.
  • Details of the installer program, called the ‘invocation image’, which locates and executes the referenced items. A map of component images and information about the invocation images is included.
  • List of parameters required by the bundle, including standardized and user-defined configuration items. Custom actions may be referenced as required.
  • The outputs, path, and environment variables necessary to execute the bundle.

A CNAB packages the bundle and runtime filesystems so that they can be processed to retrieve the referenced content from existing repositories. Cryptographic verification ensures that the source repositories are trusted and referenced correctly.

The CNAB document also contains details on how the invoked file images are installed, including the system layout and schema.
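
For orientation, the sketch below shows what an abbreviated bundle.json along these lines can look like. It is an illustrative sketch loosely following the CNAB 1.0 specification; the names, images, parameter, and credential entries are made up, and the authoritative field list and schema live in the CNAB specification itself.

    {
      "schemaVersion": "v1.0.0",
      "name": "example-app",
      "version": "0.1.0",
      "description": "Illustrative multi-component bundle",
      "invocationImages": [
        { "imageType": "docker", "image": "example/app-installer:0.1.0" }
      ],
      "images": {
        "backend": { "imageType": "docker", "image": "example/backend:0.1.0", "description": "application backend" }
      },
      "definitions": {
        "port": { "type": "integer", "default": 8080 }
      },
      "parameters": {
        "port": { "definition": "port", "destination": { "env": "APP_PORT" } }
      },
      "credentials": {
        "kubeconfig": { "path": "/home/nonroot/.kube/config" }
      }
    }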

Who is CNAB for?

CNAB is an infrastructure tool that can be used by developers, Ops teams, and even B2B users in a bundle marketplace, or directly by customers. Think of a CNAB as a unified, immutable installer for an app container system that can be reused for every installation in a few simple clicks.

CNAB has many use cases, particularly in a DevOps environment:

Installation automation

A developer building an application may also write a detailed installation guide as a readme.txt file. With the readme file available, the Ops personnel handling infrastructure provisioning will want to automate the installation using a command-line tool. With CNAB bundles, users don’t have to rewrite bash scripts; instead, they can automate the installation as part of the CI/CD pipeline.

Deeper automation

Developers can further extend the standardized automation process by transferring the snapshot of a complex but immutable setup to automation systems, using it for digital signing and verification, and feeding it into continuous deployment. The next set of users, whether Ops personnel or customers, then don’t have to learn and walk through the entire app installation process, as long as they have the appropriate credentials.

(Explore the role of automation in DevOps.)

It’s important to understand that CNAB bundles, installation containers, and executable units are, by default, Open Container Initiative (OCI) images executed using tools such as Porter. These images can be published to distribution registries, including private local registries. The result is that the CNAB contains not just a single-service component Docker image but the entire application and deployment stack.

This makes immutability and the distribution of containerized stacks convenient for everyone on DevOps teams. In many cases, the process takes only a few clicks, as long as the standardized specifications are adopted as part of CNAB best practices.
