Robin Reddick – BMC Software | Blogs

Supporting the surge of remote workers – 4 steps to ensuring network performance
https://s7280.pcdn.co/remote-worker-surge/ | Thu, 26 Mar 2020

Companies and organizations around the globe have had to shift from an in-office work environment to a remote work environment almost overnight. This has significantly changed the way people perform their work and engage with co-workers and others. Shifting face-to-face business, work, and personal interactions online has added significant workload to digital services and to IT systems and infrastructure.

One of the most immediate challenges is making sure that your network performance supports the increase in people using your virtual private network (VPN) and the variety of workloads they bring. How do you determine what you need when the number of users on your VPN doubles, triples, or more?

Are employees holding audio-only meetings? Are they having conference calls that require sharing presentations? Are they doing video conferencing? Are they sharing large files?

BMC faces the same challenges as other companies around the world: we nearly doubled our number of remote workers in a matter of days. This blog shares our best practices using TrueSight Capacity Optimization.

There are 4 basic scenarios to consider to ensure you have the network bandwidth needed to support your remote workers.

    1. Model your current network bandwidth
    2. Analyze your bandwidth use to determine whether expansion is needed
    3. Model continuity scenarios – if a network goes down, how can the load be distributed, and what does that do to the remaining networks?
    4. Correlate end-user response for applications – if a slowdown occurs, is it a network issue or a compute issue?

We used the following metrics in performing these 4 scenarios; a minimal sketch of one way to organize them for analysis follows the list.

  • Number of VPN Active Sessions
  • Internet Utilization
  • Bandwidth of Network Interface
  • Link usage per connection
  • Input Bit Rate by Network Interface
  • Output Bit Rate by Network Interface
  • Response Times
  • CPU Utilization
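
Assuming these metrics are exported at a regular interval from your monitoring tool, here is a minimal sketch of how they might be organized for the analyses in the sections below. The column names and CSV source are illustrative, not TrueSight Capacity Optimization's actual export format.

```python
import pandas as pd

# Illustrative column names; your capacity tool's export will differ.
COLUMNS = [
    "timestamp", "location", "vpn_active_sessions", "internet_utilization_pct",
    "interface_bandwidth_mbps", "link_usage_mbps",
    "input_bit_rate_mbps", "output_bit_rate_mbps",
    "response_time_ms", "cpu_utilization_pct",
]

def load_metrics(path: str) -> pd.DataFrame:
    """Load interval samples and index them by time for trending."""
    df = pd.read_csv(path, parse_dates=["timestamp"])
    return df[COLUMNS].set_index("timestamp").sort_index()

# Example: hourly average of active VPN sessions per location.
# metrics = load_metrics("vpn_metrics.csv")
# hourly = metrics.groupby("location")["vpn_active_sessions"].resample("1h").mean()
```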

1) Model your current bandwidth

The first step to supporting your new remote workers is understanding the current use of your network.

The chart below allows us to gain insight into the number of VPN sessions per location.

You can continue to analyze and visualize active VPN sessions over time to look for trends or potential anomalies.
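
A minimal sketch of that kind of trend and anomaly check, assuming a metrics DataFrame laid out like the sketch above: compute a rolling baseline of active sessions per location and flag intervals that deviate sharply from it. The seven-day window and three-sigma threshold are arbitrary starting points, not product defaults.

```python
import pandas as pd

def flag_session_anomalies(metrics: pd.DataFrame, window: str = "7D", sigma: float = 3.0) -> pd.DataFrame:
    """Flag intervals where VPN sessions deviate sharply from a rolling baseline."""
    frames = []
    for location, grp in metrics.groupby("location"):
        sessions = grp["vpn_active_sessions"]
        baseline = sessions.rolling(window).mean()
        spread = sessions.rolling(window).std()
        anomalies = sessions[(sessions - baseline).abs() > sigma * spread]
        frames.append(anomalies.to_frame("sessions").assign(location=location))
    if not frames:
        return pd.DataFrame(columns=["sessions", "location"])
    return pd.concat(frames)
```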

2) Analyze impact on network and infrastructure

You want to make sure the devices and network servers do not become the bottleneck to good performance.

In the example charts below you can see a significant workload increase on 2 different servers. Workload has increased by 100%, but server utilization is still only between 20% and 40%. This is OK for now, but we continue to analyze these devices to understand how capacity needs grow as more users are added.

Below is a chart showing the normal usage. It shows the increase in usage due to adding remote workers and what the new usage trend will be with this added workload.

Each device is different, so it is important to look at the bit rate utilization on those devices. Different workloads mean different consumption rates. For example, an audio-only conference will use fewer bits than a video conference, and a dependency on AWS cloud services will cause higher network usage. Remote workers may also be backing up their laptops, consuming capacity on network servers along with network bandwidth. Monitor the capacity of network ports and network servers to understand the resources needed to maintain expected end-user performance.
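
Because consumption rates differ so much by workload, a rough per-site bandwidth estimate can be built from the expected workload mix. The per-user bit rates and concurrency factor below are illustrative assumptions; replace them with rates measured on your own network.

```python
# Illustrative per-user bit rates in Mbps; replace with measured values.
WORKLOAD_MBPS = {
    "audio_conference": 0.1,
    "video_conference": 2.5,
    "screen_share": 1.0,
    "cloud_services": 0.5,
    "laptop_backup": 4.0,
}

def estimate_site_bandwidth_mbps(user_counts: dict, concurrency: float = 0.6) -> float:
    """Rough peak bandwidth for a site, given the number of users per workload type.

    concurrency is the assumed fraction of users active at the same time.
    """
    worst_case = sum(WORKLOAD_MBPS[workload] * users for workload, users in user_counts.items())
    return worst_case * concurrency

# Example: 500 audio users, 300 video users, 50 laptops backing up.
# estimate_site_bandwidth_mbps({"audio_conference": 500, "video_conference": 300, "laptop_backup": 50})
```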


You also want to look at the correlation between active sessions and the output bit rate by network interface. This allows you to determine when the circuit will saturate, based on the trend in the number of sessions and the current bandwidth.
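
A minimal sketch of that correlation, again assuming the metrics layout sketched earlier: fit output bit rate against active sessions to get an average per-session rate, then project the daily-peak session trend forward to estimate when the circuit saturates. NumPy's polyfit stands in here for whatever trending your capacity tool provides.

```python
import numpy as np
import pandas as pd

def sessions_at_saturation(metrics: pd.DataFrame, circuit_capacity_mbps: float) -> float:
    """Estimate how many concurrent sessions the circuit can carry before saturating."""
    sessions = metrics["vpn_active_sessions"].to_numpy(dtype=float)
    bit_rate = metrics["output_bit_rate_mbps"].to_numpy(dtype=float)
    per_session_mbps, baseline_mbps = np.polyfit(sessions, bit_rate, 1)  # linear fit: bit_rate = a*sessions + b
    return (circuit_capacity_mbps - baseline_mbps) / per_session_mbps

def days_until_saturation(metrics: pd.DataFrame, circuit_capacity_mbps: float) -> float:
    """Project the daily-peak session trend forward to the estimated saturation point."""
    daily_peaks = metrics["vpn_active_sessions"].resample("1D").max().dropna()
    day_index = np.arange(len(daily_peaks), dtype=float)
    growth_per_day, start_level = np.polyfit(day_index, daily_peaks.to_numpy(dtype=float), 1)
    limit = sessions_at_saturation(metrics, circuit_capacity_mbps)
    return (limit - start_level) / growth_per_day - day_index[-1]
```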

3) Business Continuity

If you have office locations accessing multiple network servers, it is important to understand the workload at each location. This matters for understanding the capacity requirements of the network servers and the bandwidth requirements at each location, and it is also essential for developing a continuity plan. The modeling you did in step 1 should provide you with this information. You can then perform “what-if” analysis and model continuity scenarios. For example, if the network connectivity at one location failed, how would you distribute the workloads over the remaining networks?
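
A minimal sketch of that kind of what-if, using made-up site names, loads, and capacities: take the failed site's load, spread it across the remaining sites in proportion to their free headroom, and report the projected utilization of each survivor.

```python
def redistribute_on_failure(sites: dict, failed: str) -> dict:
    """What-if: spread a failed site's load across the remaining sites by free headroom.

    sites maps name -> {"load_mbps": ..., "capacity_mbps": ...} (illustrative units).
    Returns projected utilization (load / capacity) per surviving site.
    """
    orphaned_load = sites[failed]["load_mbps"]
    survivors = {name: s for name, s in sites.items() if name != failed}
    headroom = {name: s["capacity_mbps"] - s["load_mbps"] for name, s in survivors.items()}
    total_headroom = sum(headroom.values())
    projected = {}
    for name, site in survivors.items():
        share = orphaned_load * headroom[name] / total_headroom if total_headroom > 0 else float("inf")
        projected[name] = (site["load_mbps"] + share) / site["capacity_mbps"]
    return projected

# Hypothetical sites: if "dublin" fails, where does its 300 Mbps of load go?
# redistribute_on_failure(
#     {"dallas": {"load_mbps": 400, "capacity_mbps": 1000},
#      "pune":   {"load_mbps": 600, "capacity_mbps": 1000},
#      "dublin": {"load_mbps": 300, "capacity_mbps": 500}},
#     failed="dublin")
```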


4) Monitoring end user response time for an application

Your users have an expectation when it comes to response time. You can model the end-user response time across the enterprise and ensure that networks are performing as expected.

End user response times for your internal and customer facing applications can be collected from the application performance monitor you are using.  This data can be correlated to network response and server response to identify if or when you will have a performance problem.


In the chart below we show that you can add up to 400 users and still maintain current response time.  It is important to rerun this model periodically to see if this correlation holds true over time.
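
The 400-user figure is specific to the environment we modeled; yours will differ. Below is a minimal sketch of how such a check could be rerun, assuming a simple linear relationship between concurrent users and response time (which only holds until some resource saturates), with purely illustrative data points and SLA threshold.

```python
import numpy as np

def max_users_within_sla(user_counts, response_times_ms, sla_ms: float) -> int:
    """Fit response time against concurrent users and return the largest user count
    whose predicted response time stays under the SLA threshold."""
    slope, intercept = np.polyfit(np.asarray(user_counts, dtype=float),
                                  np.asarray(response_times_ms, dtype=float), 1)
    if slope <= 0:
        return int(max(user_counts))  # no degradation observed in this range
    return int((sla_ms - intercept) / slope)

# Illustrative numbers only: observed response times at four user levels, 250 ms SLA.
# max_users_within_sla([100, 200, 300, 350], [180, 190, 205, 215], sla_ms=250)
```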

A best practice is to perform network performance analysis and modeling on a regular basis, such as weekly. You should also develop a “golden model” so you can compare each weekly analysis with the standard you have set for your company.

Want to learn more about network performance analysis? Join the TrueSight Capacity Optimization Community and ask questions or share your expertise.

Moving from Cloud First to Cloud Smart
https://www.bmc.com/blogs/moving-from-cloud-first-to-cloud-smart/ | Fri, 10 May 2019

For any large government agency, managing cloud migrations can be fraught with risk, complexity, and hidden costs. However, the U.S. Department of Agriculture (USDA) was able to overcome these obstacles and establish winning implementations in the public, or commercial, cloud. How? In this first of our two-part blog, we offer background on the USDA, the IT operations team’s challenges and objectives, and the new services that have been established. In our second post, we’ll detail some of the key strategies that enabled the USDA to effectively manage its cloud migration.

Introduction to the USDA

With nearly 100,000 employees, the USDA is a large agency with a broad and important charter. Composed of 29 agencies, the USDA provides leadership in such areas as food, agriculture, natural resources, rural development, and nutrition.

Within the USDA, the IT team plays a significant role in supporting an extensive array of services and initiatives. In many respects, the IT operations group has been set up to operate as an independent service provider rather than an internal department. Led by the CIO’s office, the organization delivers services to a range of “customers,” including both internal departments and other federal agencies.

Making the Move to the Public Cloud

Scott O’Hare is director of IT Operations for the USDA. He and his team have been offering hosting and private cloud services to customers for more than 10 years.

In recent months, O’Hare and his team have embarked on a strategic initiative to extend their service offerings into the public cloud provider segment. Now, in addition to its traditional offerings, the team offers services based on Azure and AWS, for example. Through its new services, the team will offer customers a range of options in terms of where they can host applications and platforms. The organization can now support a number of hybrid approaches, such as cloud bursting, where customers may have a private data center and, when usage peaks arise, extend into the public cloud. Plus, customers can now go through a self-service portal that provides easy access to private and public cloud services.

For a federal agency, making this move to the public cloud represented a strategic, complex, and large-scale effort. We recently had the chance to interview O’Hare and learn about his organization, how he and his team successfully incorporated public cloud offerings into their service mix, and some of the key lessons they’ve learned along the way.

For organizations in the midst of making the move into public cloud environments, particularly those within government agencies, the experience of the USDA can provide invaluable insights into pragmatically and successfully adopting new cloud approaches and services.

In the sections below, we start by outlining some of the organization’s key challenges and objectives. We also offer an overview of their new offerings and services. In the second part of this two-part post, we’ll feature some of the top strategies O’Hare and his team employed and the benefits they realized as a result.

Organization Challenges

For some time, the IT operations team had been offering customers cloud services through a private data center model. In recent years, as the cloud market, and the technology landscape more generally, continued to see ever more rapid change, it grew increasingly challenging to keep pace with evolving customer requirements and expectations.

“Especially as a large data center provider, adapting and scaling to meet customer demand was difficult,” O’Hare explained. “When working with a new customer, our team struggled to quickly and efficiently do the necessary procurements, get the required equipment set up, and so on.”

Objectives: Harnessing the Cost Models of the Public Cloud

Given the evolving requirements of the IT operations team and its customers, the advantages of the public cloud began to look increasingly enticing. The team sought to move away from having to make the large capital expenditures that were required to maintain and expand their private cloud. By leveraging public cloud offerings, they could capitalize on cost models that offered several key advantages:

  • They could move to an operating expense payment model, one that would offer them the ability to pay only for the services being consumed.
  • They’d be able to pass their costs directly to their customers as services are used, rather than having to accrue significant expenses up front, well before customers would be paying for the services.
  • They could pass savings on to customers because the IT team wouldn’t have to pay for the entire data center footprint, but rather only for whatever services are actually used.

At the same time, they’d be able to tap into public cloud providers that have practically infinite capacity, so they could effectively scale to support customers’ rapidly expanding workloads and environments.

Overcoming Obstacles on the Public Cloud Journey

O’Hare and his team have been on a journey to the cloud for a number of years. In the past year in particular, the team has made substantial progress.

“The process of completely reinventing approaches and models in the cloud can be daunting,” O’Hare revealed. “Often people can find new technologies and approaches complex and overwhelming, but the reality is that technology doesn’t have to be an obstacle at all.”

Through this effort, O’Hare came to realize that culture tends to represent a bigger obstacle to cloud migration than just about any other area. Introducing cloud and automation initiatives can immediately leave many staff members concerned about their jobs going away, or potentially being changed fundamentally. This can make it difficult to gain the required buy-in and participation. Further, many team members had been with the agency for decades, and had largely been employing the same workflows and tools for much of that time. In these environments, changing the way people work can be a big challenge.

New Service Offerings

By harnessing public cloud services, O’Hare and his team have been able to substantially expand the array of services they offer customers. The team has implemented a consolidated cloud portal. Now, customers can access the portal and browse and choose from a dramatically expanded set of offerings, including on-premises data center services and a range of services available in public clouds, including AWS and Azure. These services are fully managed.

“Our team helps with migration and implementation, effectively handling all the efforts required to get customers’ services up and running,” O’Hare stated. “In addition, the team helps with ongoing maintenance, including patching, compliance reporting, and more.”

Conclusion

For any organization, the promise of the cloud can be massive; however, so can some of the risks. For government agencies, many of the hurdles that hinder successful cloud migrations are magnified. By taking a cloud smart approach, O’Hare and his team at the USDA are poised to maximize the advantages of the cloud while minimizing risks.

Be sure to keep an eye out for our next post, which will reveal a lot of the key lessons O’Hare and his team learned through this successful cloud migration initiative. In addition, you can hear our entire interview with O’Hare here.

Why Capacity Management Is Essential to Controlling Cloud Costs
https://www.bmc.com/blogs/why-capacity-management-is-essential-to-controlling-cloud-costs/ | Mon, 08 Apr 2019

Amazon Web Services. Google Cloud. Microsoft Azure. Private clouds. There’s certainly a dizzying array of options for where to develop, run, store and manage your most critical applications and data.

So, it’s no surprise that most organizations primarily focus on which clouds to move to and what the total cost is for their public and/or private clouds. But to stay competitive in today’s digital business landscape, organizations must shift their focus to how they’ll effectively control their costs once they’ve moved to the cloud.

The Pressing Need for Capacity Management That Works

It’s clear that most IT organizations don’t have the right resources or insight to comprehensively identify what infrastructure resources are needed or how to forecast their respective costs. Additionally, with multiple buyers of cloud services throughout the organization, IT and business owners often overspend on their infrastructure resources or choose resources that inadequately support their needs.

For example, without proper insight into infrastructure resource usage, IT operations staff, cloud operations staff, and other IT resource buyers may ensure they have what they need to support a new mission-critical app by buying twice as much as they think they need, just to be safe. This kind of waste and inefficiency is due to the lack of real information, and it leaves the business at risk of unanticipated shortfalls and performance problems.

Lacking a comprehensive and accurate understanding of ongoing resource usage, changing workloads, trends, and potential bottlenecks, IT must base project requirements on best guesses. When these estimates are too aggressive, under-provisioning leaves the business vulnerable to service degradation or disruption—hurting business productivity and performance, and frustrating customers and end users. When estimates are too conservative, overprovisioning wastes valuable budget and resources, increases administrative overhead, and diverts funds from more beneficial areas.

With this lack of visibility and stability, along with ever-increasing and often unknown security and compliance risks, the quality and consistency of end-user experiences ultimately suffer. To avoid unexpected downtime for your users and to take full control of your IT infrastructure costs in an ever-changing IT and business environment, you’ll want capacity management that is both automated and effective.


Features of Effective Capacity Management Solutions

Effective capacity management should help IT meet the dynamic requirements of the business while controlling and reducing costs. To do this, your capacity management solution should cover three critical needs:

  1. Automatically ensure the right resources are allocated to each application at the right time, so those applications are deployed precisely when they’re needed
  2. Adjust IT resources proactively to address growth as well as periodic and cyclic changes in demand, so business and digital services are consistently delivered at a speed that meets customer expectations
  3. Optimize on-premises and cloud infrastructure investments while reducing software and service costs (a minimal right-sizing sketch follows this list)
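
As a concrete illustration of the third point, one common right-sizing tactic is to recommend allocations from observed usage percentiles plus a headroom factor rather than from worst-case guesses. A minimal sketch follows; the percentile and headroom values are arbitrary starting points, not how any particular BMC product computes its recommendations.

```python
import numpy as np

def recommend_allocation(usage_samples, percentile: float = 95.0, headroom: float = 1.2) -> float:
    """Recommend a resource allocation (vCPU, GB of memory, etc.) from observed usage.

    Sizes to the chosen usage percentile times a headroom factor instead of
    doubling a guess "just to be safe"; tune both parameters to your risk tolerance.
    """
    return float(np.percentile(np.asarray(usage_samples, dtype=float), percentile) * headroom)

# Example: vCPU usage samples for one workload over a month.
# recommend_allocation([2.1, 2.4, 3.0, 2.8, 5.5, 2.2], percentile=95, headroom=1.2)
```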

Align IT Infrastructure Resources with Service Demands

IT infrastructure resources are essential foundational elements to run your digital enterprise. So, how do you properly align your IT resources with service demands to optimize resource usage and reduce costs?

Our industry-leading solutions can help you gain full visibility, lower costs, and reduce risks for your entire IT infrastructure—both on-premises and in public clouds.

TrueSight Capacity Optimization gives you unprecedented visibility into your IT environments so you can easily add, remove, or adjust compute, storage, network, and other IT infrastructure resources to meet changing application and service demands. Service views, forecasting, modeling, and reservation capabilities provide the insight you need for future resource allocations and the ability to control the timing and cost of new capital and operating expenses.

BMC Helix Discovery automates asset discovery and application dependency mapping to build a holistic view of all your data center assets, multi-cloud services, and their relationships. Each scan captures the information and dependencies for all software, hardware, network, storage, and cloud services, providing IT with the proper context needed to create an application map and reducing risk for IT in the process.

Learn More

Find out how TrueSight can help you plan for future infrastructure needs and automatically predict and control your cloud costs, and learn how BMC Helix Discovery delivers fast, accurate, and secure cloud and on-premises asset visibility.

3 Essential Steps for Migrating to AWS or Azure Public Cloud
https://www.bmc.com/blogs/3-essential-steps-for-migrating-to-aws-or-azure-public-cloud/ | Thu, 14 Dec 2017

Organizations have begun moving applications and workloads to the public cloud at an increasing rate. According to Gartner, the worldwide infrastructure as a service (IaaS) public cloud market grew 31% in 2016 to total $22.1 billion.¹ In 2016, Amazon AWS held the No. 1 market share at 44.2%, followed by Microsoft Azure at 7.1% and Alibaba at 3.0%.

Organizations are relying on the public cloud for a successful digital transformation. Many organizations have established a cloud strategy and business plan for moving workloads and applications to public cloud infrastructure, but are still working to determine the migration plan to support that strategy. But not all workloads and applications may be suitable for public cloud infrastructure services.

According to ESG Research, 91% of organizations expect to have substantial on-premises infrastructure deployments for the next five years.² Most, but not all, applications and workloads can be migrated to the public cloud. But just because you can migrate workloads and applications to the public cloud doesn’t mean you should. As indicated, most organizations will have hybrid IT environments for the foreseeable future.

(This tutorial is part of our AWS Guide. Use the right-hand menu to navigate.)

Step 1: Take inventory

The first step in a migration to public cloud is understanding the applications you have, the on-premises infrastructure resources supporting them, and the workload interdependencies. The best way to accomplish this is by using a discovery tool that automates the discovery and dependency mapping process. A discovery tool should provide an accurate inventory of infrastructure that includes:

  • Servers (physical and virtual, hypervisor, OS, CPU, RAM, disk)
  • Software (+EOL), databases, web sites
  • Network devices (switches, load balancers, etc.)
  • Storage (devices, but also their logical partitioning)

Discovery tools will collect server specification information, server-to-storage relationships, performance data, and details about running processes and network connections. Use the learnings from the discovery tool to help establish a migration sequence, minimize downtime, and build test plans.
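
One way to put that dependency map to work for sequencing, sketched below: treat each application as a node, each "depends on" relationship as an edge, and group applications into waves so that nothing moves before the things it depends on. The dependency data here is hypothetical; in practice it would come from your discovery tool's output, and tightly coupled applications are often moved together regardless of wave.

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

def migration_waves(depends_on: dict) -> list:
    """Group applications into migration waves from a dependency map.

    depends_on maps app -> set of apps it depends on. Each wave contains only
    applications whose dependencies were migrated in earlier waves.
    """
    sorter = TopologicalSorter(depends_on)
    sorter.prepare()
    waves = []
    while sorter.is_active():
        ready = list(sorter.get_ready())
        waves.append(sorted(ready))
        sorter.done(*ready)
    return waves

# Hypothetical dependency map assembled from a discovery scan:
# migration_waves({"web_frontend": {"order_api"},
#                  "order_api": {"orders_db"},
#                  "orders_db": set(),
#                  "reporting": {"orders_db"}})
# -> [["orders_db"], ["order_api", "reporting"], ["web_frontend"]]
```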

Step 2: Evaluate applications

Not all applications and workloads are well suited for public cloud. When developing a migration plan, it is essential to evaluate the workloads and applications that are candidates for moving to the public cloud. The considerations must be measured against the goals of your cloud strategy and the potential business risk. The key is to examine all applications and workloads under a consistent framework – not on an ad hoc basis.

When assessing the readiness for running applications in the public cloud, all the options need to be considered – from retaining as is, to replacing.

Rehost

Redeploy an application to a cloud-based platform without modifying the application’s code. The reason for doing a lift and shift can be to quickly scale in order to meet a business need. Established applications that were not designed for efficient use of infrastructure will most likely cost more to run in a public cloud. Because of this, a simulated migration is recommended for rehosting applications to prevent cost surprises.

Refactor/Rearchitect

Refactoring/rearchitecting involves making modifications to the application, application framework or runtime environment. This can include making application code or configuration changes to attain some tangible benefit from moving to public cloud platform without making major changes to the core architecture of the applications.

For example, you may swap out a database server for a cloud service equivalent to reduce the amount of time you spend managing database instances. The goal is to get some optimization benefit from public cloud service with very little code change to the application.

Rebuild

For applications that you wrote, redesigning and rebuilding a cloud-native application on a provider’s PaaS may be worth the investment. This typically provides better performance and lower costs (if done correctly). This may be the right choice for applications that are business-critical, but not designed to take advantage of the services offered on a cloud platform. In addition, non-x86-based applications, like mainframe and midrange applications that rely on operating systems other than Linux and Windows, will need to be rewritten. This is the most expensive option, but the investment to rewrite an application may be worth it if you are looking to boost agility, reduce costs, and improve business continuity.

Replace/Repurchase

For commercial, on-premises applications, replacement with a SaaS offering may be the best solution. Many vendors offer both SaaS and on-premises solutions now. And even if the preference is to run on-premises, many ISVs have upgraded their applications to better run on cloud platforms and it could be a matter of upgrading the application to a more current version.

Retain

Running applications on-premises is always an option. It may make good business sense to keep some applications on-premises. Lower costs and better security and compliance are strong considerations for doing so. Not every application benefits from a cloud platform. Applications that have static workloads with no agile demand, and that are running on stable systems, are good candidates to be retained on-premises.

The application evaluation phase also provides an opportunity to identify applications and workloads that are no longer needed or lack the business justification to warrant the ongoing cost to support them. This is a great time to rationalize your portfolio.
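
A minimal sketch of what a consistent evaluation framework could look like in code: profile each application with a few attributes and map the profile to one of the dispositions above. The attributes and the ordering of the rules are entirely illustrative; the point is that every application is measured the same way rather than ad hoc.

```python
def recommend_disposition(app: dict) -> str:
    """Map an application profile to a migration disposition (illustrative rules only).

    app is a dict of attributes such as:
      {"saas_alternative": False, "x86": True, "custom_code": True,
       "cloud_ready": False, "business_critical": True, "workload": "variable"}
    """
    if app.get("saas_alternative"):
        return "replace"      # commercial app with a SaaS equivalent
    if not app.get("x86", True):
        return "rebuild"      # mainframe/midrange code must be rewritten
    if app.get("workload") == "static" and not app.get("business_critical"):
        return "retain"       # stable, no agile demand: leave on-premises
    if app.get("cloud_ready"):
        return "rehost"       # lift and shift with low cost risk
    if app.get("custom_code") and app.get("business_critical"):
        return "rebuild"      # worth the investment to go cloud-native
    return "refactor"         # modest changes to gain some cloud benefit

# plan = {name: recommend_disposition(profile) for name, profile in applications.items()}
```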

Step 3: Analyze cost

Migrating applications to public cloud platforms is a business decision that requires both financial and technical assessment. This is particularly important if the migration includes rehosted or refactored applications. If a preliminary assessment of the cost difference for running these applications on-premises vs. in a public cloud is not done, you will most likely get a big cost surprise later.

In a recent BMC survey, 40% of respondents stated they are unclear about their costs associated with the cloud, even though lower cost was the primary driver (45%) for moving to the public cloud.

Every IT organization should have a history of resource usage and workload patterns for their applications, along with the unit cost of infrastructure resources. Many organizations gather this information routinely and use it for chargeback or showback for IT infrastructure costs. If you do not have this information, you can still gain cost insights – they just won’t be as accurate. At a minimum, you need a unit cost for on-premises infrastructure resources and compute, storage and network resources for the workloads and applications you want to migrate. Otherwise you have no costs to compare.

The first step in comparing on-premises vs. public cloud costs is to identify the resources needed in the public cloud. You can start by identifying the closest type of VM instance and configuration needed; for AWS, this will be an EC2 instance, and for Azure, a Virtual Machine. AWS has over 90 different VM instance types to choose from, each with multiple configurations and available in multiple locations or regions. Azure also has similar choices.

Once you have identified the compute resources, you must determine whether you want to pay the on-demand price or the 40-60% discounted price you can get with one- or three-year service commitments. AWS refers to these discounted VMs as Reserved Instances. If you have not yet selected a public cloud provider, you will also want to compare costs across providers. Without an automated tool to do this analysis for you, this effort can take weeks, depending on the number of applications and workloads you are analyzing.
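
A back-of-the-envelope version of that comparison, with hypothetical prices: real figures would come from your own unit costs and the providers' current price lists. It compares a monthly on-premises unit cost against on-demand pricing and a committed-use price at an assumed discount in the 40-60% range mentioned above.

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_costs(onprem_monthly_unit_cost: float,
                  ondemand_hourly_rate: float,
                  commit_discount: float = 0.45) -> dict:
    """Compare the monthly cost of one workload across three options.

    commit_discount is an assumed reserved/committed-use discount; check
    current provider pricing for real figures.
    """
    ondemand = ondemand_hourly_rate * HOURS_PER_MONTH
    return {
        "on_premises": onprem_monthly_unit_cost,
        "cloud_on_demand": ondemand,
        "cloud_committed": ondemand * (1 - commit_discount),
    }

# Hypothetical figures: $260/month internal unit cost vs. a $0.40/hour instance.
# monthly_costs(onprem_monthly_unit_cost=260, ondemand_hourly_rate=0.40)
```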

There may be special storage, database or network resources that you need to identify and factor in as well. Once the analysis is completed, you should view it as a snapshot in time. IT environments are naturally dynamic, so costs will change over time. An ongoing cost management practice is essential to regulating operating and capital expense for a hybrid, multi-cloud environment.

Moving forward

Developing a migration plan takes time and a dedicated team. To streamline the effort and reduce time, you can use a solution like BMC Helix Discovery to automate the discovery and dependency mapping work. Once you are ready to begin migration, both AWS and Azure offer tools to help you move workloads and applications to their platforms.

Migration is just the first step of incorporating public cloud into your IT environment. To keep costs under control, you will need to establish a capacity management practice to ensure you are optimizing the use of public cloud and on-premises resources. And because of the change from capital expense to operating expense associated with this move, you will need to have an ongoing cost management practice to ensure that you adhere to both capital and operating budgets and maximize your IT infrastructure investment.

Done right, you can achieve your goals of greater agility and lower costs by extending your IT environment to include the public cloud. It just may not be as simple as you had thought.

¹ Gartner Press Release, Gartner Says Worldwide IaaS Public Cloud Services Market Grew 31 Percent in 2016, September 2017, https://www.gartner.com/newsroom/id/3808563

² ESG Survey, 2017
