Glenn Everitt – BMC Software | Blogs

You Don’t Need the Force to Keep Your Mainframe Going
https://s7280.pcdn.co/mainframe-ops-dont-need-force/ | Wed, 04 May 2022

You don’t have to be a Star Wars fan to know about “The Force.” In the movies, it’s a special ability that gives you many skills, including the ability to sense things—but it is limited to a few. You can be born with it, but it also requires years of training with a Jedi master.

Keeping your mainframe applications running can seem like a similar endeavor. Only a few have the sense and skill to monitor and react. BMC AMI Ops gives this ability to everyone. It isn’t special hidden knowledge and there isn’t an apprenticeship. You can use BMC AMI Ops Insight to sense problems within “The Force” of your mainframe environments.

Here’s how BMC AMI Ops Insight does it.

  • Intelligent, proactive automation for rapid problem detection and resolution using anomaly detection is like a faithful droid watching hundreds of metrics simultaneously. It’s ready to detect and identify problems, and once it does, it assesses the many pathways that could have caused the problem and backtracks through the causal pathway to help you identify the root cause.
  • Operational resiliency comes from actionable intelligence, embedded expertise built on the industry’s best experience and knowledge, and simplified management, all of which help identify a problem’s root cause. Like decoy space probes, false positive alerts can distract you from your mission of solving the true problem. BMC AMI Ops Insight uses multiple techniques to take you from knowing you have a problem to knowing what is causing it. It prevents the subterfuge of misguided alerts and alert storms by grouping related abnormalities together as a single event to help you focus on your mission (a sketch of this grouping idea follows this list).
  • Minimal system and management overhead across all solutions, optimized to take advantage of technologies like IBM® Z Integrated Information Processor (zIIP), lowers your total cost of ownership. Just like a droid without power, an advanced monitoring system is useless if you cannot afford to run it on all the systems that need monitoring. We have chosen efficient metrics, fast yet low-overhead machine learning algorithms, and composite metrics that reduce overhead while quickly identifying problems. It’s like a super-efficient star drive that can take you anywhere you need to monitor.
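
To make the alert-grouping idea concrete, here is a minimal Python sketch of how anomalies detected on related metrics within a short time window can be collapsed into a single event instead of an alert storm. It illustrates the concept only, not how BMC AMI Ops Insight is implemented; the metric names and the two-minute window are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Anomaly:
    metric: str       # illustrative metric names, e.g. "CICS_response_time"
    timestamp: float  # seconds since epoch

@dataclass
class Event:
    anomalies: list = field(default_factory=list)

def group_anomalies(anomalies, window_seconds=120):
    """Collapse anomalies that occur within window_seconds of the previous one
    into a single event, so operators see one incident rather than an alert storm."""
    events = []
    current = None
    for a in sorted(anomalies, key=lambda x: x.timestamp):
        if current and a.timestamp - current.anomalies[-1].timestamp <= window_seconds:
            current.anomalies.append(a)
        else:
            current = Event([a])
            events.append(current)
    return events

# Three related abnormalities within two minutes become one event;
# the unrelated one hours later becomes its own event.
alerts = [Anomaly("zIIP_utilization", 100), Anomaly("DB2_lock_waits", 130),
          Anomaly("CICS_response_time", 170), Anomaly("paging_rate", 14000)]
for event in group_anomalies(alerts):
    print([a.metric for a in event.anomalies])
```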

The ability to sense a disturbance on the mainframe is no longer limited to a few heroes. Check out BMC AMI Ops and BMC AMI Ops Insight to learn how to empower anyone on your IT operations staff with the mainframe skills of a Jedi master.

Reducing the Danger of Shadow IT with DevOps
https://www.bmc.com/blogs/devops-reducing-danger-of-shadow-it/ | Thu, 13 May 2021

Businesses are built on processes, and the good ones help you win. Others, like shadow IT, can hurt you if they aren’t tamed. Implementing DevOps best practices to improve your “true” IT organization, and other business units, can help you do that.

If you’re unfamiliar with the term, “shadow IT” describes circumstances where businesspeople contract an outside company to implement needed business applications or services that would normally be handled by your IT department. They usually do this because they have an immediate need, but IT has told them their project can’t be started for several months.

One of the key concerns is that the outside company may ignore your company’s security and coding standards. Usually, these outside companies turn something over to your internal IT department that may not meet the normal or expected standards for code, documentation, or robustness in production environments.

The Dangers of Shadow IT

Many have claimed shadow IT projects can be catalysts for innovation, and maybe that’s true in specific cases. There’s an exciting aspect to the mutinous process of going rogue: successfully getting around bottlenecks with a secret cloud solution and getting things done you otherwise wouldn’t have. But shadow IT carries far more danger than good.

Shadow IT ignores transparency and damages trust. The secrecy of the process means no one, or only a few people, knows what you’re doing. If it’s something good, you’re hurting your organization by keeping your ideas or innovation in a vacuum. On the other hand, it could be something that, unbeknownst to you, conflicts with another initiative or simply doesn’t align with your organization’s goals. When the lack of transparency comes to light, it creates distrust between IT and the individual(s) trying to work around it.

Shadow IT creates risk for your business and customers. It’s a quick and easy route to sensitive corporate or customer data living outside your organization’s secure environments in a public cloud. It might be as simple as collaborating on new product requirements in a Google Doc versus your organization’s official collaboration space. It could be customer data in a public Dropbox folder. Data available outside normal IT controls creates potential risk to the business.

What’s interesting is most employees don’t realize they’re exposing sensitive data when they choose to use an unapproved tool for sharing. Oftentimes, organizations may not be communicating policies and the importance of following certain procedures well enough.

Shadow IT projects are difficult to audit. If someone does something negligent with a shadow IT project that hurts your business or customers, you may not have any recourse to discover what happened without logs in place. It takes time and money to identify an audit trail to uncover what happened or who was involved. And, of course, avoiding security measures that prevent these issues but sometimes slow down projects is often exactly why people resort to shadow IT.

Shadow IT is a workaround that creates more drag. People will use shadow IT to work around bottlenecks. But while shadow IT may make things go faster, because that speed is occurring outside normal IT processes, it can actually slow things down. For example, if IT can’t integrate the workaround into their existing infrastructure, it will take extra time to make a solution work when IT absorbs it.

How DevOps Can Help

The reality is, you can either declare your organization will never do shadow IT and lose the battle, or you can recognize its proliferation and start putting some governance around it. Here’s how DevOps can help.

Communication and Collaboration

DevOps is all about fostering better communication and collaboration between teams and across platforms, and about including people from different areas of the business, even outside IT, from the beginning development stages of new software to the end. If you’re going to “okay” certain projects under shadow IT, there should be communication and transparency about what’s occurring.

To do this, keep sanctioned shadow IT projects within the bounds of IT processes already in place. If you use Agile methodologies, such as two-week sprints, make sure shadow IT projects follow them and, for instance, include a security person and someone knowledgeable with coding standards in sprint reviews. This addition will help keep projects safe and in line.

Quality, Velocity and Efficiency

DevOps is also all about developing and delivering high-quality innovation as efficiently and quickly as possible. If your organization is already doing this, it will reduce the number of shadow IT projects because people won’t have to feel pressured to work around bottlenecks as often—you’ll be reducing them with DevOps instead!

And by prioritizing the most important requirements to be delivered iteratively along a road map, people will develop trust that what they need will come at the right time. If your internal IT department can implement key requirements for the business from their prioritized list, even if the complete list can’t be achieved, it may relieve enough pressure on business units to forego starting shadow IT projects.

Good Tools

Another important aspect of DevOps is enabling people with the tools they need. If people are sharing data on Google Docs, why isn’t your organization providing them with an excellent collaboration tool like Atlassian Confluence?

With DevOps, you prevent shadow IT groups from scouting and using unapproved open-source tools that may not integrate well with your environment or that aren’t properly secured or licensed. Instead, you provide good tools that help people be productive and successful.

Fighting shadow IT is a battle no organization will ever completely win, but letting it run amok is financially dangerous and insecure. The best tactic for organizations is to control their shadow IT activity by improving their own development, operations, and security organizations through DevOps best practices and by being willing to address small but high-priority business initiatives quickly.

Moving From Noise to Alerts with AI/ML Monitoring Products: 10 Key Questions
https://www.bmc.com/blogs/ai-ml-monitor-ten-questions/ | Fri, 23 Apr 2021

Some monitoring products provide a way to feed historical system data into their AI/ML models. Having this historical data allows you to train your AI/ML model immediately. Once this data is analyzed, some products are ready to go and can start identifying anomalies in your systems.

However, other AI/ML monitoring products have no way to import (ingest) historical data. These products must monitor your systems for weeks before they have enough data to detect anomalies. But let’s back up even more. Some products are just toolkits: you have to select the individual metrics, figure out how to gather them, and get them into the monitoring tool. You may also have to adjust the sensitivity for the metrics you have chosen. Not all metrics are created equal; some are much better indicators of problems than others. If you select the wrong metrics or the wrong sensitivity, your monitor may not provide any notifications, or it may provide many false positive notifications.

The metrics you choose often have complex relationships with other metrics, and those relationships are useful when training your anomaly detection model. Multivariate analysis (many related metrics) requires an understanding of how the related metrics move together; univariate analysis (a single metric) looks at one metric at a time and is much more likely to cause false positive alerts. Those relationships are why multivariate analysis is more valuable: when a problem occurs, related metrics tend to move together according to their relationship. Finding these relationships yourself is time-consuming and difficult, and multivariate analysis requires a large amount of situational data to develop a good model. Ideally, you want your product vendor to have already done this complex analysis and to provide a model with multivariate analysis baked into the product.
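
As a concrete illustration of that difference, here is a small Python sketch that trains on made-up historical data for two related metrics and then checks a new sample two ways: a univariate z-score per metric, and a multivariate Mahalanobis distance that accounts for how the metrics normally move together. The metric names, values, and sensitivity thresholds are assumptions for the example, not recommendations.

```python
import numpy as np

# Historical "normal" data: two related metrics (illustrative), where CPU usage
# normally tracks the transaction rate.
rng = np.random.default_rng(0)
tx_rate = rng.normal(1000, 100, 500)
cpu = 0.05 * tx_rate + rng.normal(0, 2, 500)
history = np.column_stack([tx_rate, cpu])

mean = history.mean(axis=0)
std = history.std(axis=0)
cov_inv = np.linalg.inv(np.cov(history, rowvar=False))

def univariate_alerts(sample, sensitivity=3.0):
    """Flag each metric independently when its z-score exceeds the sensitivity threshold."""
    return np.abs((sample - mean) / std) > sensitivity

def multivariate_alert(sample, sensitivity=3.5):
    """Flag the sample when its Mahalanobis distance from normal is too large,
    i.e. when the relationship between the metrics is broken."""
    diff = sample - mean
    return np.sqrt(diff @ cov_inv @ diff) > sensitivity

# A high transaction rate paired with low CPU: each metric alone looks plausible,
# so the univariate checks stay silent, but the broken relationship between the
# two metrics makes the multivariate check fire.
suspicious = np.array([1200.0, 45.0])
print(univariate_alerts(suspicious))   # e.g. [False False]
print(multivariate_alert(suspicious))  # True
```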

Once you have done all of that, you might expect monitoring nirvana, but not so. Because you chose your own specific metrics, you now must build your dashboards around those metrics. Again, it is much easier to work with a product where the vendor has already done the metric selection and the multivariate analysis and has designed a user interface that intelligently presents this information. It is very time-consuming to build dashboards and charts and organize them in a way that is easy to navigate.

You also must test to verify that the model detects the expected anomalies and that it is sensitive enough to find problems early enough to fix them before they cause an outage. So, when a product claims to have embedded intelligence, ask questions to determine whether it really has the embedded intelligence you need or whether it is a science experiment that needs a lot of work and testing to get configured correctly.
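
One lightweight way to run that kind of verification, sketched below with invented numbers, is to replay historical data with a synthetic problem injected at a known point and confirm the detector fires with enough lead time before the simulated outage. The simple baseline-plus-threshold detector here is only a stand-in for whatever model you are evaluating.

```python
import numpy as np

rng = np.random.default_rng(1)
metric = rng.normal(50, 5, 200)   # replayed "normal" history for one metric
outage_at = 180
metric[150:outage_at] += 40       # inject a sustained problem leading up to a simulated outage

def first_alert(series, train=100, sensitivity=4.0):
    """Train a simple baseline on the first `train` samples, then return the
    index of the first later sample exceeding mean + sensitivity * std."""
    baseline = series[:train]
    threshold = baseline.mean() + sensitivity * baseline.std()
    above = np.where(series[train:] > threshold)[0]
    return train + int(above[0]) if above.size else None

fired_at = first_alert(metric)
assert fired_at is not None and fired_at < outage_at, "model missed the injected problem"
print(f"alerted at sample {fired_at}, {outage_at - fired_at} samples before the simulated outage")
```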

Before you decide to buy an AI/ML Product based upon a demo, keep the following questions in mind:

  • Does it support import and processing of historical data for training so it is ready to monitor in days not weeks?
  • Do you have to figure out which metrics you need to monitor out of hundreds of choices?
  • Is the analysis true multivariate analysis, or is it univariate analysis doing a simple weighted summary?
  • Are the relationships between all of the metrics already known and defined in the model and ready to do the analysis?
  • Do you need to start collecting new metric data you do not currently collect and feed it into the tool?
  • Are the metrics’ sensitivities already known and adjusted for?
  • Are the dashboards already complete and can you easily drill down into the more detailed data in charts when needed?
  • Does the user interface provide an obvious starting point for doing diagnostics when an anomaly is detected?
  • Does the user interface guide problem identification or does it just show a lot of numbers and graphs?
  • Are the monitoring intervals frequent enough not to miss early warnings of a potential outage?

It is important to ask these questions before buying an AI/ML monitoring product that requires more work and more expertise to configure than your team can provide. When evaluating a product’s success, you cannot overlook how long it takes before the product provides useful alerts rather than more noise.

Don’t Break Down Silos – Bring Them Together
https://www.bmc.com/blogs/dont-break-down-silos-bring-them-together/ | Thu, 29 Oct 2020

Overview: Breaking down silos within an organization may seem like an act of destruction, but at its core, it involves encouraging dialog between teams in an effort to bring them together. An approach using understanding, communication, and measurement can help make this process a smooth one.

 

Breaking down silos! My mental image was always of someone placing a stick of dynamite at the foundation of a giant grain silo: the silo explodes, dramatically toppling and falling apart in a burst of grain, with a giant cloud of dust enveloping the entire county.

The truth about breaking down silos in business is that it’s pretty mundane. The fear of dramatic change can stop people from working outside of their team, and eliminating these silos only requires communicating with people outside of your silo. The best team for breaking down silos and communicating across the many parts of your company is the DevOps Automation Team. Many companies have landed on a DevOps Center of Excellence (COE) model, where a central team of automation experts helps other parts of the company adopt a DevOps process and start implementing more agile practices.

Telling people you are going to break down their silo isn’t going to help you achieve that goal; it sounds scary and invasive. But if you tell them you want to help them automate some of the work drudgery they face every day, you are more likely to get people interested. Then you can introduce them to your DevOps Automation Team.

This Automation Team starts the conversation with how they can help:
• The Automation Team listens to the existing problems in the targeted silo.
• They start to understand the processes that are in place and the business reasons for the processes.
• They start to understand the bottlenecks in the process and propose automation solutions.
• They talk about how other parts of the company solved similar problems.

As the siloed team works with the Automation Team, an understanding develops:
• The siloed team starts to see that their unique problems are not really unique and that, while their jargon may be different, their problems are the same. Other teams also experience slow hand-offs, manual testing and verification, manual data gathering for reporting, and manual deployment of software and data.
• Then the Automation team starts automating some time-consuming manual processes and shows the siloed team how to change from manual to automated processes.
• The siloed team realizes that they can improve their automated processes by working with other company teams.
• The siloed team starts talking with people outside of their silo.

There are three aspects to breaking down silos: Understanding, Communication, and Measurement.

Understanding: We see automation workshops where people spend a day identifying work processes within their silo and surface work that only one or two people even knew needed to be done. The result is new respect from people who “had no idea you had to do all that work.” The ability to understand the process at a higher level is invaluable for teams outside of the silo. Now other teams can understand why information is needed and why lead times are required. The workshop results provide a framework for understanding the timelines of the process and where automation can play a role.

Communication: Automation facilitates communication; it integrates the different corporate factions into larger business processes that serve customers. You should think of the automation process as a way to bring teams together. Automation smooths the flow of processes and information across boundaries and expands understanding of the connections between teams. Let your DevOps Automation Team be the catalyst to transition your silos into integrated business teams.

Measurement: I recommend measuring progress for two reasons. 1. It helps you understand the next most important process improvement to make. 2. It’s sometimes hard to see the silo is disappearing. How do you know the silo has disappeared? The automated processes move through the siloed team like any other part of the company. When the process fails or needs to be changed, the people outside of the silo easily talk with people inside of the silo to revamp or restart the process.

Slowly, as automation improves and more discussions take place, the silo disappears without anyone realizing it. You generally only see localized improvements in the first weeks or months, but these add up to great improvements over a couple of years. It’s not the dramatic explosion I always envisioned, but it does have a dramatic effect on productivity. The silos are gone, but the benefits of automation remain.

How Are You Insuring Against Data Breaches?
https://www.bmc.com/blogs/how-are-you-insuring-against-data-breaches/ | Thu, 30 Jul 2020

Overview: Data breaches are inevitable, making protection against outside threats essential. But what if the breach happens internally? Taking the correct measures now can help mitigate the impact of internal breaches and protect for the future. The ability to record network traffic, view how records were accessed and by whom, and produce compliance reports are just a few of the benefits of implementing auditing software.

We buy insurance for protection against the cost of things we hope will never happen. But we know they will happen to someone, at some time, so we purchase protection that is less costly than enduring the loss. Data breaches do happen, and they are expensive and damaging to all involved. Sadly, they happen frequently, but we do have procedures to mitigate the loss. We continue our war against the outside threat of hackers, putting in firewalls and instituting physical security. All of this does a great job of protecting us and, more importantly, those who have put their trust in us. We should be, and we believe we are, always looking for ways to do better. As threats evolve, so must our methods.

An area that is not always given enough attention is the insider threat. As mentioned above, we provide physical security and training to make sure those dealing with the public don’t fall prey to pretexting, but things happen. What can we do to limit our loss when all else has failed and a breach occurs? How can we respond quickly and definitively when confronted with someone inside our perimeter defense who has access and has breached our data and our trust? Any way we can limit our exposure matters. Any way we can produce a list of only the customer data accessed by this person limits the exposure and increases the public’s trust in the organization.

What is needed is a way to simply and efficiently record the activity on the system of record, the mainframe: a means, if necessary, to start with a breached record and tie it back to anyone who viewed it, or to take the opposite approach and produce the actual screens of activity carried out by a suspected person. With this means we can quickly isolate the damage and begin the repair, allowing us to meet reporting deadlines and have confidence in our investigation. It would also provide evidence to help in prosecution if necessary. These abilities are the difference between announcing that we know we were breached, sometime, by someone, so we have to assume it could be all records, and stating that we know who did it, during which timeframe, and which records were accessed. Instead of notifying an entire customer base of the breach, we can notify only those who will have a concern.
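
To make that concrete, here is a minimal Python sketch, with invented field names and log entries, of the two queries described above: start from a breached record and list everyone who viewed it in a timeframe, or start from a suspected user and list every record they touched. A real product such as BMC AMI DevX Application Audit works from recorded network traffic and application screens rather than a simple in-memory list like this.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessEvent:
    user: str
    record_id: str
    timestamp: datetime

# Invented audit-trail entries for illustration.
audit_log = [
    AccessEvent("jsmith", "CUST-1001", datetime(2020, 7, 1, 9, 15)),
    AccessEvent("adoe",   "CUST-1001", datetime(2020, 7, 2, 14, 3)),
    AccessEvent("adoe",   "CUST-2042", datetime(2020, 7, 2, 14, 9)),
    AccessEvent("mlee",   "CUST-3100", datetime(2020, 7, 3, 8, 47)),
]

def who_viewed(record_id, start, end):
    """Given a breached record, list every user who accessed it in the window."""
    return sorted({e.user for e in audit_log
                   if e.record_id == record_id and start <= e.timestamp <= end})

def records_viewed_by(user):
    """Given a suspected user, list every record they accessed (the notification scope)."""
    return sorted({e.record_id for e in audit_log if e.user == user})

print(who_viewed("CUST-1001", datetime(2020, 7, 1), datetime(2020, 7, 31)))  # ['adoe', 'jsmith']
print(records_viewed_by("adoe"))  # ['CUST-1001', 'CUST-2042']
```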

Fortunately, this solution exists and is being used by many for the assurance they need. BMC AMI DevX Application Audit can:

  • Efficiently record network traffic and application screens, archiving them for investigation.
  • Provide insight into user behavior, such as which data a particular user viewed and how it was accessed.
  • Leverage SIEM integrations like BMC AMI Datastream for z/OS to provide application-level insight for identification and reduction of cybersecurity threats.
  • Provide the intelligence and reporting required for HIPAA, GDPR, and Australian NDB scheme compliance.
  • Eliminate dependency on specialized mainframe knowledge.
  • Maintain Separation of Duties between system administrator and auditor.

Beyond this, there is a unique benefit from this form of “insurance.” Once you have implemented the recording and periodic searching of activity, you can also put a warning on the screen informing those who log in that their activities will be recorded and searched. This warning serves as a valuable deterrent to the malicious breach of data by a trusted user: someone with criminal intent will realize it is not worth the risk of exploiting the data.

Unfortunately, data breaches are bound to happen and can be especially disruptive if they originate internally. The key to minimizing their impact and deterring future malicious activity lies in the ability to identify their origins and scope. With the right tools, your organization can respond to these breaches and implement measures to mitigate future risk, giving your customers, as well as your employees, peace of mind that their data is safe.

No Mock Objects for Your Mainframe? Create COBOL Program Stubs
https://www.bmc.com/blogs/cobol-program-stubs/ | Thu, 23 Mar 2017

Mainframe developers blazed such a trail in computer software that the dearth of mock libraries or COBOL program stub libraries today is surprising. In contrast, the open systems world provides many mock object libraries you can use to help isolate the systems required by a unit test.

Program stubs allow you to simplify testing efforts through program isolation. By calling program stubs rather than calling the actual programs, you can minimize the amount of code that must be tested at one time. It may require more tests, but each test will be more focused, and when a failure occurs it will be easier to discern the problem.

However, mainframe teams have long been forced to create their own libraries of COBOL program stubs for testing applications. Maintaining these stub libraries often carries a significant cost, and the result is that the libraries aren’t kept up to date with changes to the real programs, so they can no longer be used for testing. In other words, program stubs go stale in environments where the real programs change frequently.

Mainframe teams need an easy way to create and execute COBOL program stubs, which would allow them to ensure this testing process becomes frequent and normalized, thereby increasing application quality. The solution is automated COBOL program stubs creation.

Automated Creation of COBOL Program Stubs

If a program already exists, then you should be able to automatically create a program stub that simulates its behavior. This simulation works by returning data as though the program ran. Some program stubs return static data when called and ignore the input parameters; more sophisticated program stubs use the input parameters to determine which data to return. Regardless, program stubs do not include complex implementation code and generally just return data.

It’s relatively easy to change the data returned by a program stub. This capability is useful because getting a real program to return specific data can be complicated. When a program requires a specific error condition to trigger the needed data, setting that up can be time consuming, so a program stub can really speed up testing of error conditions. Using a COBOL program stub, you simply set up the stub to return data as though the error condition was encountered. This approach is often the easiest way to test error handling.
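
The stubs this post is about are COBOL programs created by the tooling described below, but the underlying idea is the same one open-systems mock libraries rely on, and a short Python sketch makes it concrete: a stub has no real implementation, just canned responses keyed on the input, including an error response that would be awkward to provoke from the real program. The function names, return codes, and account numbers are invented for the illustration.

```python
# A stand-in for a real "get account balance" program that would normally
# call out to a database or network service.
def get_balance(account_id: str) -> dict:
    raise NotImplementedError("imagine a slow, hard-to-configure real program here")

# A stub: no real implementation, just canned responses keyed on the input.
CANNED_RESPONSES = {
    "12345": {"return_code": 0, "balance": 250.75},
    "99999": {"return_code": 8, "balance": None, "message": "NETWORK ERROR"},
}

def get_balance_stub(account_id: str) -> dict:
    """Return data as though the real program ran, including an error case
    that would be awkward to trigger against the real system."""
    return CANNED_RESPONSES.get(account_id, {"return_code": 4, "balance": None,
                                              "message": "ACCOUNT NOT FOUND"})

# The code under test can now be exercised against the error path directly.
def format_balance_screen(account_id: str, lookup=get_balance_stub) -> str:
    result = lookup(account_id)
    if result["return_code"] != 0:
        return f"BALANCE UNAVAILABLE: {result['message']}"   # not a misleading $0.00
    return f"BALANCE: ${result['balance']:.2f}"

print(format_balance_screen("12345"))   # BALANCE: $250.75
print(format_balance_screen("99999"))   # BALANCE UNAVAILABLE: NETWORK ERROR
```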

Testing error conditions is an area which is often neglected. If an error condition is handled poorly, it can have scary effects on the customer experience. One example could be a network error preventing a mainframe program from accessing a customer bank balance, so it just returns a balance of zero that appears on customers’ mobile banking apps. The bank suddenly has a multitude of customers who are concerned about their bank accounts having a zero balance.

Program stubs can reduce the amount of work to set up a test environment. If you can stub out all the calls to another program or subsystem, you don’t need to have it installed and working in your test environment. (Note that when you are doing integration testing you must have all the real programs installed and running; however, until you get to that phase of testing it can be much easier not to set everything up.)

Tools for Creating COBOL Program Stubs

BMC provides a solution for automating COBOL program stub creation with its BMC AMI DevX Total Test and Xpediter products.

Topaz for Total Test, an automated unit test creation and execution tool, can automatically create a COBOL program stub using data collected while Xpediter, the AMI DevX interactive debugger, runs your real program. As long as you use the latest version of your real program, you get a completely up-to-date program stub to use in your testing environments. A program stub can be created automatically in minutes for use with Topaz for Total Test.

Program stubs are also useful in situations where code is not yet complete or will be created by another party. Even if you don’t have an existing program but you have an interface definition and know the data that will be returned, Topaz for Total Test provides a way to manually create a program stub so that you can debug, execute and test your program using a COBOL program stub in place of the unavailable code.

Even in the case where the code isn’t written yet, Topaz for Total Test can simplify the process by allowing you to import a COBOL copybook definition of the interface for the program stub. Once this is done, it’s a matter of entering the data to return from the program stub.

Today, your customers expect quality and velocity. Quality demands frequent testing, and velocity demands automation. Combining the two makes it possible to focus on other areas of development that lead to innovation but require more energy from your team. That’s why a tool like Topaz for Total Test that automatically creates and executes COBOL program stubs, allowing you to simplify testing efforts through program isolation, is essential.
