Jon Stevens-Hall – BMC Software | Blogs

How BMC Helix for ServiceOps Advances Agile DevOps for Enterprises
https://s7280.pcdn.co/how-bmc-helix-for-serviceops-advances-agile-devops-for-enterprises/ (Mon, 13 Mar 2023)

Legacy technology has often been seen as an inevitable barrier to DevOps, but in most enterprises, it will never disappear entirely, and in fact, mainframe technology is growing and remains projected to do so through the rest of this decade. A successful modern enterprise DevOps strategy needs a pragmatic approach to leveraging legacy technology and enhancing it with state-of-the-art ServiceOps technology.

Many BMC customers are large, established enterprises and organizations that have been investing in technology for decades. Nearly all of them have embarked on a DevOps journey, but for many, the impact of these initiatives, while usually very positive, has been limited.

The notion of a fully modernized technology infrastructure might be desirable, but it is an expensive and complex change. It may not be the best short- or long-term business decision to replace every legacy component, particularly if the opportunity cost of that effort reduces the capacity to deliver new, transformative innovations.

This reality has created significant challenges for DevOps, which grew during the first half of the 2010s in smaller organizations, such as start-ups and open-source vendors, that tended not to have much legacy technology. In these environments, services are easier to understand and rapid, light-touch deployment is easier to enable, compared to a company like a major bank, where a simple customer interaction (for example, moving money from one bank to another) may cross many different technology platforms of different ages.

Enterprises are more complicated, so achieving the appropriate balance between the intentionally simplified microservice perspective of a DevOps team and the broader picture of the complex enterprise is more complicated, as well. As a result, organizations frequently lack the confidence and ability to let DevOps break out of small, contained pockets and become a mainstream innovation channel for large, complex, critical services.

This challenge of agile, enterprise-scale DevOps needs to be addressed to achieve higher DevOps maturity. BMC Helix for ServiceOps enables agile, enterprise-scale DevOps in several ways:

  • BMC Helix Intelligent Integrations align ServiceOps seamlessly and in real time with the tooling used by DevOps teams, reducing handoffs and creating shared understanding through collaboration and automation.
  • Dynamic service modeling with BMC Helix CMDB enables the organization to discover and understand its services, even as they rapidly evolve across various underlying technologies, old and new.
  • BMC Helix Operations Management with AIOps applies machine learning and predictive capabilities to deliver an unparalleled understanding of the risk and impact of changes.
  • BMC Helix IT Service Management provides rapid, AI-driven insight into change risks, giving the organization greater confidence to enable DevOps while reducing impacts that arise as DevOps-managed services interact with the broader technology environment.

Legacy is not a barrier to innovation; you just need the right toolset to make it part of your modern development efforts.

New BMC Helix Dashboard Brings DORA Metrics to Support DevOps
https://www.bmc.com/blogs/bmc-helix-dashboard-dora-devops/ (Wed, 01 Feb 2023)

In the most recent release of BMC Helix Dashboards, BMC introduced a new DevOps-focused dashboard, the BMC Helix ITSM DevOps Metrics Dashboard, which uses industry-standard DORA metrics to visualize how organizational software development performance impacts a service or application. In this blog post, we will introduce this new dashboard and discuss how these metrics are a powerful tool for performance optimization, not only for DevOps-driven software delivery, but also for a wider range of agile, incremental, and collaborative IT work.

What are DORA metrics?

DORA metrics were introduced by DevOps Research and Assessment, Google Cloud’s research program, to measure the state of an organization’s software delivery. They focus on some of the key characteristics identified by DORA as being critical to the performance of an organization in delivering successful outcomes based on DevOps practices.

The four key DORA metrics are:

  • Deployment frequency: For the service or application being worked on, how often does the organization deploy code to production or release it to end users?
  • Lead time for changes: How much time elapses between a code commit entering the production deployment process and that code successfully running in production?
  • Time to restore service: How long does it generally take to restore service to users after a defect or an unplanned incident impacts them?
  • Change failure rate: What percentage of changes made to a service or application results in impairment or failure of the service and requires remediation?

DORA subsequently added an additional category, operational performance, which reflects the reliability and health of the service.
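As a rough illustration of how these four metrics are computed (a standalone sketch with invented sample data, not the BMC Helix implementation), they can be derived from simple deployment and incident records:

```python
from datetime import datetime, timedelta

# Hypothetical records: each deployment notes when its first commit entered
# the pipeline, when it reached production, and whether it caused a failure.
deployments = [
    {"committed": datetime(2023, 1, 2), "deployed": datetime(2023, 1, 3), "failed": False},
    {"committed": datetime(2023, 1, 5), "deployed": datetime(2023, 1, 7), "failed": True},
    {"committed": datetime(2023, 1, 9), "deployed": datetime(2023, 1, 10), "failed": False},
    {"committed": datetime(2023, 1, 12), "deployed": datetime(2023, 1, 13), "failed": False},
]
# Hypothetical incidents: when service was impacted and when it was restored.
incidents = [
    {"impacted": datetime(2023, 1, 7, 9), "restored": datetime(2023, 1, 7, 13)},
]

period_days = 14

# Deployment frequency: deployments per day over the measurement period.
deployment_frequency = len(deployments) / period_days

# Lead time for changes: mean time from commit to running in production.
lead_time = sum(
    (d["deployed"] - d["committed"] for d in deployments), timedelta()
) / len(deployments)

# Time to restore service: mean time from impact to restoration.
time_to_restore = sum(
    (i["restored"] - i["impacted"] for i in incidents), timedelta()
) / len(incidents)

# Change failure rate: share of deployments that required remediation.
change_failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f}/day")
print(f"Lead time for changes: {lead_time}")
print(f"Time to restore: {time_to_restore}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```

In practice these figures would come from the change and incident data already held in the service management platform, aggregated over a rolling window.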

DORA metrics in BMC Helix

The new BMC Helix ITSM DevOps Metrics Dashboard brings these metrics and more to life, enabling you to visualize current performance, as well as ongoing performance trends, for change activity against a service. In addition to the four key DORA metrics, the dashboard harnesses the best-in-class ServiceOps and AIOps capabilities of BMC Helix to provide an ongoing view of the health of the service.

The new dashboard also provides the viewer with valuable information about upcoming change activity, as well as additional actionable insights to help drive improvements.


Figure 1. Sample BMC Helix ITSM DevOps Metrics Dashboard.

DORA, of course, has its roots firmly in the DevOps world; key members of the group include Gene Kim and Dr. Nicole Forsgren. For organizations practicing DevOps, this dashboard provides the insights specified by DORA for those activities.

However, the dashboard should not be considered only for code deployment. As explained in the 2021 State of DevOps Report, “These four metrics don’t encompass all of DevOps, but they illustrate the measurable, concrete benefits of pairing engineering expertise with a focus on minimizing friction across the entire software lifecycle.”

This pairing of expert-driven optimization with reduced friction draws comparisons with ITIL® 4, which shares many of the same guiding principles that have underpinned DevOps throughout its short life: iterative progression with continuous feedback; collaboration and visibility of work; optimization; and automation.

Indeed, High Velocity IT is one of the four key practitioner books in the ITIL® 4 library, and specifically adapts the learnings of DevOps to the broader IT environment. As the book’s introduction states, “High velocity does not come at the expense of the utility or warranty of the solution, and high velocity equates with high performance in general.”

As such, we anticipate this dashboard will be of great value to organizations that are active adopters and practitioners of DevOps, as well as any technical organization seeking to implement more changes more quickly and iteratively with greater resilience and automation. The benefits described by DORA in its 2022 State of DevOps Report apply to much more than just DevOps: “The faster your teams can make change, the sooner you can deliver value to your customers, run experiments, and receive valuable feedback.”

Swarming vs Tiered Support Models Explained
https://www.bmc.com/blogs/swarming-support-tiered-support-differences/ (Thu, 17 May 2018)

What is Swarming Support? It’s a reaction to the perceived shortcomings of a ubiquitous ITSM practice: the tiered support model.

Perhaps the most well known organisational structure for IT Service Management is the three-tier support hierarchy. In a typical enterprise, we might find a structure which looks something like this:

  • Level 1: A frontline Service Desk, directly fielding incoming customer communication (typically by answering phone calls), with a level of generalised skill intended to enable resolution of a high volume of simpler issues.
  • Level 2: A second tier of support, often closely associated with the Service Desk, but with deeper general or specialist skills.
  • Level 3: Specialist support teams focused on specific technologies and applications.

This structure has become entrenched in the corporate IT support world for a number of positive reasons:

  • Customers are presented with a single communication channel to the IT support organisation, regardless of the nature of their issue.
  • The general technical support skills needed to work in Tier 1 and Tier 2 support are easily found in the workforce. This also makes outsourcing of one or both of these layers straightforward, and as a result this is commonly seen.
  • Specialist technical resources can be insulated from direct contact, ensuring that only properly triaged issues reach them.

The journey of a customer’s case through this structure may start and end at the first line (in fact, in many organizations, customers have the opportunity to resolve their issue through automated self-service — often described as “Level zero”).

There are inevitably many issues, however, which are not resolvable by Level 1 support. These progress to Levels 2 and 3 through a process of escalation:

Typical tiered support model

Level 2 support agents typically handle fewer cases than their Level 1 counterparts, but these tend to be more complex, with a longer average effort on the part of the agent.

Tickets which make their way to Level 3 typically account for a small volume of the overall incoming caseload, but they are also the most complex issues, requiring the most specialist skills, and generally taking the most time to resolve.

Swarming attempts to replace this support structure with something rather different. Advocates of swarming contend that there are fundamental problems with the multi-tiered support model:

  • Tiered Support can lead to cases “bouncing” from one team to another, often multiple times, as the organization attempts to find a single team which can drive the issue to resolution.
  • The model is fundamentally siloed. The use of single-discipline teams reduces opportunities for knowledge dissemination.
  • It leads to queues forming. Often a single issue may wait in a number of teams’ ticket queues, each adding a delay to the issue’s progress to resolution. The answer may be at level 2 or 3, but it takes time to get there, as it waits in a number of teams’ queues along the way.

“Swarming” appeared late in the last decade as a proposal for a new framework for technical support organisation. It explicitly rejects the three-tier orthodoxy, in favour of a model of networked collaboration:

SOURCE: Consortium for Service Innovation — http://www.serviceinnovation.org/intelligent-swarming/

A key pioneer for IT support was Cisco, who set out their new “Model for Distributed Collaboration and Decision Making” in a 2008 white paper, “Digital Swarming”. The concept was subsequently adopted by the Consortium for Service Innovation, and developed into a vision entitled “Intelligent Swarming℠”. Some of its core principles, in direct opposition to the orthodoxy, are that:

  • There should be no tiered support groups.
  • There should be no escalations from one group to another.
  • The case should move directly to the person most likely to be able to resolve it.
  • The person who takes the case is the one who sees it through to resolution.

The intelligent part of Intelligent Swarming℠ refers to the use of individual or team “reputation” to help select the right people to bring into the Swarm.

Here at BMC, many of our customer support teams use Swarming as an alternative to tiered support. Our model consists of three different types of swarm:

  1. Severity 1 Swarm
  2. Local Dispatch Swarm
  3. Backlog Swarm

Swarming starts as soon as any issue is not immediately resolvable at the point of customer contact. A rapid initial triage results in the distribution of case tickets to one of two “Swarms”:

Initial triage in some Swarm models

Each “Swarm” is actually a small team, focused in near-real-time on the incoming flow of customer cases:

“Severity 1” Swarm

  • Three agents working on a scheduled weekly rotation.
  • Primary focus: Provide immediate response, and resolve as soon as possible.

A Severity 1 swarm is focused on a very small percentage of issues which happen to be the most critical. Appropriate people are brought into the swarm to resolve severe cases as quickly as possible. This is not something unique to Swarming, being indistinguishable from a typical “major incident war room”.

Local Dispatch Swarm

  • Meet every 60–90 minutes.
  • Regional, product-line focused.
  • Primary focus: “Cherry pickers”. What new tickets can be resolved immediately?
  • Secondary: Validation of tickets before assignment to product-line support teams.

Dispatch Swarming addresses a key shortcoming of tiered support: many cases can be solved extremely quickly by the right expert, but there is a delay in getting to them.

The Dispatch Swarm is encouraged to “cherry pick”, disregarding anything that cannot be resolved very quickly. In doing so, they are able to dramatically shorten the time spent achieving resolution for a significant subset of escalated cases.
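The triage and cherry-picking flow described above can be sketched as a simple router. This is purely illustrative; the function names and routing rules are assumptions for the sketch, not BMC's actual tooling.

```python
# Illustrative sketch of the initial triage flow described above, not a
# real implementation. Severity-1 cases go straight to the Severity 1
# Swarm; everything else lands with the Local Dispatch Swarm, which
# "cherry picks" quick wins and validates the rest before assignment
# to a product-line support team.

def triage(case):
    """Route a newly escalated case to the appropriate swarm."""
    if case["severity"] == 1:
        return "Severity 1 Swarm"
    return "Local Dispatch Swarm"

def dispatch_swarm(case, quick_fix_available):
    """Cherry-pick: resolve immediately if an expert in the swarm can,
    otherwise validate the ticket and pass it to the product-line team."""
    if quick_fix_available:
        return "resolved"
    return f"assigned: {case['product_line']} support"

case = {"id": 101, "severity": 3, "product_line": "ITSM"}
print(triage(case))                                    # Local Dispatch Swarm
print(dispatch_swarm(case, quick_fix_available=False)) # assigned: ITSM support
```

The point of the model is visible even in this toy version: no case waits in a chain of team queues; it either resolves immediately or moves once, directly to the team most likely to close it.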

There are significant secondary benefits, too. The inclusion of inexperienced frontline support staff in these Swarms gives exposure to knowledge that would otherwise only start to be gained after eventual promotion to more specialist teams. Meanwhile, conversely, third-tier support agents are brought closer to the customer.

Backlog Swarm

Swarming that isn’t the result of initial triage is often called the Backlog Swarm.

Example of a backlog swarm model

  • Meet regularly, typically daily.
  • Primary focus: Address challenging tickets brought to them by product-line support teams.
  • Secondary: Replace the role of individual subject matter experts.

Issues which are still open after initial triage and Dispatch Swarming may be brought to the Backlog Swarm. This brings together groups of skilled and experienced technical people, crossing boundaries such as geography and department, to focus on the most difficult cases. Cases are referred to them by local engineering and support teams, who are no longer permitted to engage individual subject matter experts directly. They must, instead, always refer those cases to the appropriate Backlog Swarm.

The introduction of Swarming in BMC’s customer support organisation has led to a number of measured improvements across key indicators such as customer satisfaction, mean time to resolve, and backlog size. Importantly, too, we have seen tremendous improvements in skills development and knowledge sharing. As one experienced support analyst put it, “I have probably doubled my knowledge of the products in a year because of Swarming, and I have been here a long time”.

You can learn more about Swarming at the Consortium for Service Innovation’s website.

The Importance of Knowledge Management in the Digital Enterprise
https://www.bmc.com/blogs/importance-knowledge-management-digital-enterprise/ (Mon, 26 Sep 2016)

Why is Knowledge Management so important to BMC and to our customers, and why has BMC Remedy Service Management Suite’s Knowledge Management earned the highest ratings in 4 out of 5 Use Cases in Gartner’s Critical Capabilities for IT Service Support Management Tools?

Digital transformation is rapidly broadening the range of technologies in use in the workplace.  The enterprise is filling with new devices, new platforms, and new types of digitally-enabled services. In countless conversations with customers, we are hearing that Knowledge Management has never been more important to the success of the IT support organization.

In the past, the information technology base underpinning an enterprise was comparatively straightforward. Corporate applications were delivered from standardized datacenters, and accessed using desktop PCs.  IT organizations selected their tools and platforms, and built their skill sets and support processes accordingly.

The digital enterprise of the modern era looks very different. With the rise of the Internet of Things, many business services are now underpinned by entirely new types of devices, frequently in areas of the business that were not previously digitally enabled. Big data systems drive new business opportunities. The datacenter has moved beyond virtualization, to a new hybrid of public and private cloud technologies. Mobile devices are ubiquitous.

Supporting this technological diversity presents a huge challenge for the IT Support organization. Unless the Service Desk is able to expand its scope of support, more issues escalate to second- and third-line teams, which is expensive and slows the resolution process. Knowledge Management is the best way to address this challenge. Done effectively, it can simultaneously enhance the organization’s efficiency in supporting core technologies, while providing a growing “long tail” of resources in support of the wider range of devices appearing across the enterprise. When they have access to the right knowledge resources, support agents can achieve more first-time resolutions even for unfamiliar devices, and organically develop a wider set of skills, simply by doing their job.

These benefits are not achievable if the knowledge tool is inferior or poorly integrated. Standalone tools frequently force the support agent to jump from one interface to another, often re-keying information between the two. This “swivel chair” effect reduces the efficiency of knowledge consumption and creation, and may even make the knowledge search process so slow and uncertain that agents’ performance against their time targets gets worse instead of better. Poor integration also slows down the process of creating content, acting as a disincentive to do so, and hence reducing effective knowledge gathering.

BMC Remedy Service Management delivers knowledge automatically, contextually, at the point it is needed, adapting in real time to the information recorded by the agent. The agent does not have to switch between screens, or even instigate the search manually. Because knowledge is not just about articles, BMC Remedy Service Management also positions other relevant resources, such as related tickets and outages, in the context of the agent’s work.

For knowledge authors, articles can be created directly from support tickets, removing the need to copy and paste information.  The intuitive and powerful editor makes it easy to create elegant and appealing content, and assistive features help the author to avoid creating duplicate content.

Furthermore, full mobile support for both consumption and creation of knowledge ensures that field teams have full access to all of this functionality on supported Android and iOS devices.

The Service Desk Institute, in a report predicting the shape of the support organization in 2017, emphasized that Service Desks should transform into “innovation centres, rekindling curiosity with the latest technology and teaching the business how to harness cutting-edge equipment.” To deliver this vision, the knowledge tool needs to be collaborative; today’s digital services are not built using single technologies, so effective sharing of knowledge is vital to enable valuable content to be built and shared. BMC Remedy Service Management harnesses social collaboration to ensure that great teamwork can create great knowledge.  It also provides the option to use the practices defined by the Knowledge Centered Support (KCS) framework, building collaborative knowledge management into the “day job” of the organization’s support professionals.

It is clear that pervasive knowledge creation and usage is critical to the success of a digital service organization. This is why we have substantially increased our investment into the Knowledge Management capabilities of BMC Remedy Service Management over the past several years.  World-class Knowledge Management is a core, foundational component of our ITSSM suite. We believe this is reflected in the 2016 Gartner Critical Capabilities for IT Service Support Management tools, in which BMC Remedy Service Management Suite received the highest scores in Intermediate-Maturity, High-Maturity, Basic Digital Workplace ITSSM, and Digital Advanced Workplace ITSSM.

Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.

Read the rest of our series on Knowledge Management:
Knowledge Management Strategies

Knowledge Centered Support

Knowledge Centered Support: The Framework For Service Desk Transformation
https://www.bmc.com/blogs/how-to-transform-your-service-desk-with-knowledge-centered-support/ (Tue, 09 Feb 2016)

The Digital Revolution is creating huge opportunities for innovation and productivity in the workplace.  However, this brings significant challenges to enterprise support functions. The scope of technical support is widening as technology moves from the back-office to the front-line. As a result, support organizations are expected to provide support for an ever-wider range of technology, and in an increasingly agile fashion.

Knowledge Management is fast emerging as a way to mitigate this challenge. Effective use of knowledge significantly improves self-service experiences for end customers, and generates faster and more successful outcomes for those issues which still reach the service center.

However, effective implementation of Knowledge Management is still proving to be a significant challenge in the industry. Organizations often struggle with two significant challenges: building a knowledge base, and using it successfully once it exists.

It is often difficult for support teams to find the time to capture good knowledge, especially if it is not formally entrenched as a priority for them. Some enterprises have addressed this with a dedicated knowledge team, but a team like this is unlikely to have sufficient subject matter expertise in all areas.  As a result, they still rely on input from those over-worked specialists.

Even where knowledge has been captured, making good use of it can be a challenge. As Katrina Pugh and Nancy M. Dixon put it in their HBR article on Knowledge Management, Don’t Just Capture Knowledge—Put It to Work: “what’s the point of capturing organizational knowledge if it’s going to be tossed into some file and forgotten?” This effect can be exacerbated in organizations that have addressed the content creation challenge by embracing wide participation or even crowdsourcing: noise can overwhelm the signal, making knowledge difficult to find.

Each of these issues creates a negative cycle: support teams are unwilling to spend time searching a knowledge base if they don’t expect to find answers.

Knowledge Centered Support, or KCS, is one solution to this challenge. It is becoming increasingly important to many of our major customers and appears prominently in the Gartner 2015 “Hype Cycle”.  KCS is so important, in fact, that we have embedded it into BMC Remedy Service Management, specifically the December 2015 releases of Remedy 9.1 and Smart IT 1.3.01.

So, what is all the fuss about, and why should KCS be something you consider in your support organization?

Knowledge Centered Support was developed by the Consortium for Service Innovation to address the knowledge challenge head-on, by moving knowledge creation and enhancement away from the periphery, and putting it at the heart of every support interaction.

KCS is a framework of two halves. The first is continual engagement with knowledge on an issue-by-issue basis, defined in what KCS calls the “Solve Loop.” The key expectations of this are:

  • When dealing with an issue, each support person is expected to look for knowledge that corresponds to the issue and correctly defines a solution. This gets linked to the issue.
  • If an article exists, but it needs some revisions, then the support person should flag it with appropriate suggestions.
  • If there is no content already available, they are encouraged to create a new article, describing the context of the customer’s issue, and the steps taken to resolve it.

This leads, of course, to much more rapid creation of content. It also requires a certain commitment from the organization, because it can add additional effort to each support case.
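The Solve Loop's per-case logic can be sketched as follows. Every function name here is a hypothetical illustration for the sketch, not the Remedy or Smart IT API.

```python
# Hypothetical sketch of the KCS Solve Loop for a single support case.
# The search and revision checks are deliberately naive stand-ins for
# real knowledge search and for the agent's own judgement.

def solve_loop(issue, knowledge_base):
    """Search, link, flag, or create knowledge while resolving a case."""
    article = search(knowledge_base, issue["description"])
    if article is None:
        # No existing content: capture the customer's context and the
        # resolution steps as a new draft article.
        article = {
            "context": issue["description"],
            "resolution": issue.get("resolution_steps", []),
            "state": "draft",
            "flagged": False,
        }
        knowledge_base.append(article)
    elif needs_revision(article, issue):
        # Content exists but needs work: flag it with suggestions
        # rather than leaving it stale.
        article["flagged"] = True
    issue["linked_article"] = article  # link the article to the case
    return issue

def search(knowledge_base, text):
    """Naive keyword match standing in for a real knowledge search."""
    for article in knowledge_base:
        if any(word in article["context"] for word in text.split()):
            return article
    return None

def needs_revision(article, issue):
    """Placeholder check; in practice this is the agent's call."""
    return article.get("state") == "draft"
```

Run against a stream of cases, every resolution either reuses, improves, or creates an article, which is exactly the mechanism behind the rapid content growth described above.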

If the Solve Loop is working effectively, then a lot of new content is created (which is no bad thing, says KCS, because it fosters a long tail effect). The other half of KCS, known as the “Evolve Loop,” provides a framework of more proactive processes, designed to ensure participation and quality.

  • Important articles get special attention. These can be identified by usage stats collected from the actions of people participating in the Solve Loop.
  • Missing knowledge is identified by tracking user searches which have not resulted in successful linkage to knowledge. Where necessary, articles can be created to fill the gaps.
  • Knowledge authors are guided by coaches, using a set of automated performance indicators, manual reviews of articles (known as Article Quality assessments), and general mentoring. Users are progressed through a set of competency and trust levels, moving from “Candidate,” through “Contributor,” to “Publisher.”

KCS Practices

image source: KCS Version 5.3

If KCS is working well, the Solve Loop and the Evolve Loop combine to form a virtuous cycle. A significantly increased amount of content is created as a matter of course during day-to-day activities, and proactive effort gets focused on high priority items to ensure that the most important content is of the highest quality.  Users’ participation is encouraged and their skills are enhanced.

KCS requires organizational commitment and management support. However, its advocates point to the significant benefits that can be realized for the support organization, its staff, and (importantly) its customers.  That commitment, KCS argues, is paid back handsomely.

By embedding KCS best practices in Remedy we are enabling our customers to adopt and implement knowledge management as an integral part of the support process. This new functionality will empower our customers to significantly improve self-service and create a more efficient service desk capable of supporting the modern digital enterprise.

You can learn more about KCS at the Consortium for Service Innovation’s website.

Read the rest of our series on Knowledge Management:
Knowledge Management Strategies

Importance of Knowledge Management for the Digital Enterprise

IT Asset Management Strategies in the Digital Enterprise
https://www.bmc.com/blogs/it-asset-management-strategies-in-the-digital-enterprise/ (Mon, 21 Sep 2015)

IT assets are rapidly evolving, and so must our practices.

It is an interesting time for IT asset management (ITAM). The digital revolution in enterprises has had several profound effects on the corporate IT landscape. Rapid changes in technology have brought an increasingly diverse range of new technologies into ITAM’s scope. The increasing role of IT as a frontline business enabler, rather than as a backroom function, means that there is more focus on IT’s direct impact on the financials of the business. These changes make IT asset management both more important and more complex.

To address the complexity challenge, IT asset management strategies and methods need to evolve. To do that, we need to understand the differences between asset classes, and what they mean for ITAM.

Asset Classes

  • Low Speed vs. High Speed
    Some assets move slowly. A physical server rack mounted in a datacenter is likely to remain in a single location for years. Its procurement and provisioning cycle is measured in days or even weeks. A cloud server instance is, by comparison, extremely fast-moving. It can be instantiated in minutes. It may shift rapidly across physical instances. In some cases, it might exist only for a short time (the default lifespan of a BMC pre-sales demo instance, for example, is three hours).
  • Individual Context vs. Service Context
    It is also important to think about each asset’s function and stakeholders. Client devices such as laptops and phones are typically assigned to a single user or to a small, defined group. They will usually have no single-service context because each user is likely to be using the devices to access a broad range of services. In contrast, a server will tend to be used—indirectly—by a large group of consumers, but will typically underpin a single service.

Building a Comprehensive Asset Strategy

An effective IT asset management strategy needs to account for both static and rapid assets, and for either individual or service contexts.

We can plot different types of IT assets on a chart with two axes: One for the rapidity of the asset and one for its context. Since every organization uses assets in slightly different ways, the precise position of each asset class will vary. However, an illustration of common asset class positioning might resemble this:

[Chart: common asset classes plotted on two axes, rapidity and context]

In the digital enterprise, a comprehensive IT asset management strategy will require appropriate methodologies and tools for each class of asset.

To maintain accurate data on static assets, we can use traditional discovery tools because a periodic data update is probably enough.

For assets such as Docker containers and public cloud servers, however, traditional periodic discovery will not be enough; the server may have appeared and disappeared before the discovery tool ever has the chance to see it. If that matters to you (or, perhaps, to your vendors’ software license auditors), then a more agile data collection method will be needed.

The service context of those server assets is important too, hence our methodology must focus on recording and managing that. End-user devices such as PCs, conversely, require us to monitor assignment to individuals, as this is the context we care about.

The quadrant of each asset in our chart determines the way we need to manage them:

[Chart: management approach for each quadrant of the asset model]
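The quadrant logic above can be sketched as a simple lookup. This is purely illustrative; the attribute values and strategy descriptions are assumptions for the sketch, not features of any particular tool:

```python
# Hypothetical sketch of the two-axis asset model described above.
# Attribute values and strategy names are illustrative assumptions.

def itam_strategy(speed: str, context: str) -> dict:
    """Map an asset class's quadrant to a management approach.

    speed:   "static" (e.g. a rack-mounted server) or "rapid" (e.g. a cloud instance)
    context: "individual" (e.g. a laptop) or "service" (e.g. a database server)
    """
    discovery = ("periodic discovery scan" if speed == "static"
                 else "real-time/agile data collection")
    tracking = ("assignment to users/groups" if context == "individual"
                else "mapping to the services it underpins")
    return {"discovery": discovery, "tracking": tracking}

# A Docker container or public cloud server sits in the rapid/service quadrant:
print(itam_strategy("rapid", "service"))
```

A laptop would instead resolve to periodic discovery plus user-assignment tracking, matching the end-user device handling described above.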

Almost every enterprise runs on a diverse set of IT assets, old and new, “legacy” and modern. By applying this model, and determining the appropriate strategy for each asset class, the IT asset manager will be able to realize the full benefits of effective ITAM for any asset type—now and into the future.

]]>
Knowledge Management Strategies: Exploring the Long Tail https://www.bmc.com/blogs/knowledge-management-strategies-exploring-the-long-tail/ Tue, 23 Jun 2015 14:41:10 +0000 http://www.bmc.com/blogs/?p=8483 In 2004, Chris Anderson, then Editor-in-Chief of Wired, wrote a seminal article entitled The Long Tail, in which he described a unique advantage held by big online retailers, such as Amazon, over traditional physical media outlets. For bricks-and-mortar stores, it has never been economical to stock anything beyond a relatively thin slice of titles. Even large […]]]>

In 2004, Chris Anderson, then Editor-in-Chief of Wired, wrote a seminal article entitled The Long Tail, in which he described a unique advantage held by big online retailers, such as Amazon, over traditional physical media outlets.

For bricks-and-mortar stores, it has never been economical to stock anything beyond a relatively thin slice of titles. Even large bookshops carry only a tiny fraction of the books in print. An average record store needs to sell two copies of a CD each year to pay the rent on the shelf space the CD occupies (and, as a result, even large retailers like Walmart might only carry a range of music corresponding to the top 40,000 tracks on the market).

In contrast, Anderson cited the usage statistics of Rhapsody, a streaming service which at the time carried 750,000 tracks. Whether you studied the retailers' numbers or the streaming service's, there was huge demand for a limited number of the most popular tracks, and very low demand for most of the others.

For Rhapsody, however, shelf space was not a consideration. A digital track’s only footprint is the storage required to hold it, and storage is cheap. Interestingly, Rhapsody discovered there was a monthly audience not just for the top 40,000 tracks, but the top 400,000. Each month, more than half the tracks in its inventory found somebody wanting to listen to them.

Anderson named this effect the “Long Tail,” after the distinctive shape of the power curve obtained by plotting interactions for each item.

In the digital workplace, one area where the Long Tail increasingly manifests itself is Knowledge Management. The growth of SaaS has pushed up the average number of cloud services consumed by enterprises to over 900. The Internet of Things will bring tens of billions of new smart devices, in many previously unconnected areas of the business. That’s going to be a lot of new stuff to support and maintain, and a lot of demand on a broad range of knowledge.

If usage of Knowledge Articles in a large and mature content library is charted, we typically see a power curve similar to the one Anderson described. A small number of articles sit on the left, accounting for the vast majority of views, citations, and re-uses. The vast bulk of the articles find themselves on the Long Tail of the power curve, each hardly touched. So is it worth bothering with them at all?

To answer that, we can look at the three rules Anderson's article defined for extracting value from a Long Tail. Applying these three principles consistently ensures that a solid body of long-tail knowledge is built up over time and, importantly, that it provides value. As the digital workplace evolves, this broad content may be vital in helping your support organization meet the challenge.

1) Make everything available

The Long Tail’s key differentiator is its near-unlimited breadth of coverage. Anderson’s article contrasted bricks-and-mortar DVD retailers with the then-embryonic Netflix. Having “broken the tyranny of physical space,” it did not matter to Netflix how infrequent or widely-spread the viewers of a particular TV episode were, “only that some number of them exist, anywhere.” With incidental costs cut to negligible levels, almost any level of demand, however low, makes it worthwhile hosting an obscure film or episode.

Likewise, for our Knowledge Repository to produce the benefits of a Long Tail effect, we need to ensure that content is available for the large breadth of obscure, rare, and quirky issues which we might need to support. It's important that it is there, however rarely each instance might be visited. It might be straightforward to manually identify the top 10 issues reported by users, but the next 1000 are an unscalable challenge. You can't be selective. As Anderson puts it: “In a Long Tail economy, it's more expensive to evaluate than to release. Just do it!” However, this requires a sharp focus on costs, which is exactly what Anderson's second rule zeroes in on.

2) Cut costs

Anderson argued for differential charging at the consumer level — to pull customers “down the tail” with cheaper prices. For Knowledge Management, however, consumption costs are not as relevant as production costs, so this is where we must apply the rule.

A typical benchmark might put the cost of a first-line resolution at $25, versus $75 for the second line. If a popular knowledge article is used at the front line a thousand times, and prevents an escalation each time, it generates a theoretical saving, by itself, of $50,000. It's worth putting significant effort into these articles. Embellish them with video. Promote them on the front page of the service portal.

Meanwhile, one thousand knowledge articles on the Long Tail, each used once, might deliver the same savings, but if we spend more than $50 producing each of those articles, the saving is wiped out (and that's before we account for the cost of any articles which are never re-used at all).
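As a quick sanity check of the arithmetic above, here is the same break-even model in a few lines of Python, using the benchmark figures from the text:

```python
# Break-even model for knowledge article production costs.
FIRST_LINE_COST = 25   # $ per first-line resolution (benchmark from the text)
SECOND_LINE_COST = 75  # $ per second-line resolution

def deflection_saving(uses, production_cost):
    """Net saving from an article that prevents one escalation per use."""
    return uses * (SECOND_LINE_COST - FIRST_LINE_COST) - production_cost

# One popular article, used 1,000 times (production cost ignored):
print(deflection_saving(1000, 0))       # 50000
# 1,000 long-tail articles, each used once, each costing $50 to produce:
print(1000 * deflection_saving(1, 50))  # 0 -- the saving is wiped out
```

The second result is the point of the rule: at $50 per deflection, every dollar of production cost above $50 per article turns the long tail into a net loss.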

An effective Long Tail strategy for Knowledge Management, therefore, requires very lean costs in knowledge production. It’s a high-volume, low-margin process. This is where frameworks like Knowledge Centered Support (KCS) are attractive: KCS looks to derive knowledge content from day-to-day support processes, rather than from a separate, unaligned task of knowledge creation.

3) Help me find it

Anderson’s article highlighted the story of Touching the Void, a book about a disastrous mountain climbing expedition in South America, written in 1988 by Joe Simpson. After modest success, it was nearly forgotten and almost out of print when, in 1997, Jon Krakauer published Into Thin Air, an account of a tragic series of events on Mount Everest. Krakauer’s book was a bestseller, but something strange started to happen. Online sales of Touching the Void began to soar. The publisher rushed out a new edition to meet the sudden demand.

The reason for this was, primarily, Amazon recommendations. The online retailer’s algorithms were linking purchasers of Krakauer’s book to Simpson’s. Touching the Void, having lurked for years as just another title on Amazon’s Long Tail, became hugely relevant due to an algorithm.

Users of Knowledge Management are typically under time pressure. One service desk agent, in 2014, described to me the “gamble” that he faced on every customer interaction. Knowledge can help drive better first-time fix performance, but that was only one of his key targets. The other was average handling time. If agents feel it will take them a long time to find knowledge, or even worse, that they might search in vain, they will worry more about the time target.

To benefit from that Long Tail, therefore, we need fast, preferably automated identification of the right Knowledge Article from the wider bulk of content. If we can’t find the right content, the Knowledge Base risks becoming a WORN (Write Once, Read Never) technology.

This rapid identification has been a big focus of our innovation with Remedy: the Smart IT UX finds knowledge automatically, in context. The agent no longer needs to gamble. Learn about the new Remedy with Smart IT.

Read the rest of our series on Knowledge Management:
Knowledge Centered Support

Importance of Knowledge Management for the Digital Enterprise

]]>
First Call Resolution: #1 Service Desk & ITSM Metric Explained https://www.bmc.com/blogs/service-desk-software-the-key-to-first-call-resolution/ Tue, 16 Jun 2015 21:54:36 +0000 http://www.bmc.com/blogs/?p=8476 In an ideal world, it might seem like there would be no need for service desk calls at all. Realistically, we know that there will always be good reasons for customers to call. We still want to provide the best possible service. We want to do it effectively and efficiently. The best possible customer experience […]]]>

In an ideal world, it might seem like there would be no need for service desk calls at all.

Realistically, we know that there will always be good reasons for customers to call. We still want to provide the best possible service. We want to do it effectively and efficiently.

The best possible customer experience ends with the issue resolved before the end of the call. It means a happy customer, and no further action required. That, of course, is why first-time resolution should be a priority measure of success within any service desk organization.

But there’s also a compelling business reason. Benchmark studies typically put the cost of a first-line service desk resolution, even in the best-performing organizations, at between US$20 and $30. If that ticket escalates to the second line, it’s usually several multiples more expensive.

The cost of a third-line team’s time is much higher still. After all, they are probably being diverted from innovation and development activities, which add business value.

In fact, it’s easy to model the business case for a high resolution rate, as this example shows, for an organization processing 10,000 tickets per month:

[Chart: monthly support costs at varying first-line resolution rates, for an organization processing 10,000 tickets per month]
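A minimal sketch of such a model, assuming an illustrative $100 cost per escalated ticket (the text says only that escalation costs "several multiples" of the first-line figure, so that number is an assumption):

```python
# Monthly cost of support at different first-call resolution (FCR) rates.
TICKETS_PER_MONTH = 10_000
FIRST_LINE_COST = 25   # $, midpoint of the $20-30 benchmark quoted above
ESCALATED_COST = 100   # $, assumed "several multiples" of the first-line cost

def monthly_cost(fcr_rate):
    """Total monthly support cost for a given first-call resolution rate."""
    resolved = TICKETS_PER_MONTH * fcr_rate
    escalated = TICKETS_PER_MONTH - resolved
    return resolved * FIRST_LINE_COST + escalated * ESCALATED_COST

for rate in (0.5, 0.7, 0.9):
    print(f"{rate:.0%} FCR: ${monthly_cost(rate):,.0f} per month")
```

Under these assumptions, moving from a 50% to a 90% first-call resolution rate cuts the monthly support bill from $625,000 to $325,000.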

A smarter service desk, therefore, invests time and commitment to achieving the best possible fix rate.

A key tool for a smarter service desk in driving better resolution rates is the use of pre-existing knowledge. But experience with service desks all over the world has demonstrated a recurrent challenge: Service desk agents tend to be measured by more than just their fix-rate. Calls need to be answered, so the time taken to handle each customer’s issue is frequently a pressure area, too.

When a service desk agent is under pressure on both metrics, they face what one agent described to me as “a gamble:” Do they search for knowledge or is their time target more pressing?

When making improvements to the service desk experience by upgrading your software, there are three best practices to target:

  1. Provide a single field for call-logging. Eliminate the long form and choose intelligent software to manage the hard work of sorting customer information and problem statements.
  2. Be sure your software package will also scan the system for current outages and similar tickets in parallel with each customer call.
  3. Bring knowledge management to the service desk agent in real time, so they do not have to decide whether to search for it.

Ensuring that your service desk software includes these capabilities will aid in faster call resolution times and lower costs for your organization. Best of all, a smarter service desk leads to happier customers.

]]>
ITAM 2015: The evolving role of the IT Asset Manager https://www.bmc.com/blogs/itam-2015-the-evolving-role-of-the-it-asset-manager/ Mon, 21 Jan 2013 00:00:00 +0000 http://bmcsoftware.wpengine.com/itam-2015-the-evolving-role-of-the-it-asset-manager/ In a previous post, we discussed the fact that IT Asset Management is underappreciated by the organizations which depend on it. That article discussed a framework through which we can measure our performance within ITAM, and build a structured and well-argued case for more investment into the function.  I’ve been lucky enough to meet some […]]]>

In a previous post, we discussed the fact that IT Asset Management is underappreciated by the organizations which depend on it.

That article discussed a framework through which we can measure our performance within ITAM, and build a structured and well-argued case for more investment into the function.  I’ve been lucky enough to meet some of the best IT Asset Management professionals in the business, and have always been inspired by their stories of opportunities found, disasters averted, and millions saved.  ITAM, done properly, is never just a cataloging exercise.

As the evolution of corporate IT continues at a rapid pace, there is a huge opportunity (and a need) for Asset Management to become a critical part of the management of that change.  The role of IT is changing fundamentally: Historically, most IT departments were the primary (or sole) provider of IT to their organizations. Recent years have seen a seismic shift, leaving IT as the broker of a range of services underpinned both by internal resources and external suppliers. As the role of the public cloud expands, this trend will only accelerate.

Here are four ways in which the IT Asset Manager can ensure that their function is right at the heart of the next few years’ evolution and transition in IT:

1: Ensure that ITIL v3’s “Service Asset and Configuration Management” concept becomes a reality

IT Asset Management and IT Service Management have often, if not always, existed with a degree of separation. In  Martin Thompson’s survey for the ITAM Review, in late 2011, over half of the respondents reported that ITSM and ITAM existed as completely separate entities.

Despite its huge adoption in IT, previous incarnations of the IT Infrastructure Library (ITIL) framework did not significantly detail IT Asset Management as many practitioners understand it. Indeed, the ITIL version 2 definition of an Asset was somewhat unhelpful:

“Asset”, according to ITIL v2:
“Literally a valuable person or thing that is ‘owned’, assets will often appear on a balance sheet as items to be set against an organization’s liabilities. In IT Service Continuity and in Security Audit and Management, an asset is thought of as an item against which threats and vulnerabilities are identified and calculated in order to carry out a risk assessment. In this sense, it is the asset’s importance in underpinning services that matters rather than its cost”

This narrow definition needs to be read in the context of ITIL v2’s wider focus on the CMDB and Configuration Items, of course, but it still arguably didn’t capture what Asset Managers all over the world were doing for their employers: managing the IT infrastructure supply chain and lifecycle, and understanding the costs, liabilities and risks associated with its ownership.

ITIL version 3 completely rewrites this definition, and goes broad. Very broad:

“Service Asset” (ITIL v3): Any Capability or Resource of a Service Provider.

“Resource” (ITIL v3, Service Strategy): A generic term that includes IT Infrastructure, people, money or anything else that might help to deliver an IT Service. Resources are considered to be Assets of an Organization.

“Capability” (ITIL v3, Service Strategy): The ability of an Organization, person, Process, Application, Configuration Item or IT Service to carry out an Activity. Capabilities are intangible Assets of an Organization.

This is really important. IT Asset Management has a huge role to play in enabling the organization to understand the key components of the services it is providing. The building blocks of those services will not just be traditional physical infrastructure, but will be a combination of physical, logical and virtual nodes, some owned internally, some leased, some supplied by external providers, and so forth.

In many cases, it will be possible to choose from a range of such options, and a range of suppliers, to fulfill any given task. Each option will still bear costs, whether up-front, ongoing, or both. There may be a financial contract management context, and potentially software licenses to manage. Support and maintenance details, both internal and external, need to be captured.

In short, it’s all still Asset management, but the IT Asset Manager needs to show the organization that the concept of IT Assets wraps up much more than just pieces of tin.


2: Learn about the core technologies in use in the organization, and the way they are evolving

A good IT Asset Manager needs to have a working understanding of the IT infrastructure on which their organization depends, and, importantly, the key trends changing it. It is useful to monitor information sources such as Gartner’s Top 10 Strategic Technology Trends, and to consider how each major technology shift will impact the IT resources being managed by the Asset Manager.  For example:

Big Data will change the nature of storage hardware and services.  Estimates of the annual growth rate of stored data in the corporate datacenter typically range from 40% to over 70%. With this level of rapid data expansion, technologies will evolve rapidly to cope.  Large monolithic data warehouses are likely to be replaced by multiple systems, linked together with smart control systems and metadata.
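To put those growth rates in perspective, here is a quick compounding calculation. The 40% and 70% figures are the range of estimates quoted above; the five-year horizon is an arbitrary illustration:

```python
# Compounding effect of annual data growth over a five-year horizon.
YEARS = 5
for annual_growth in (0.40, 0.70):
    factor = (1 + annual_growth) ** YEARS
    print(f"{annual_growth:.0%} per year -> {factor:.1f}x the data in {YEARS} years")
```

Even at the low end of the range, the datacenter is holding over five times as much data within five years, which is why the storage technologies underneath it cannot stand still.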

Servers are evolving rapidly in a number of different ways. Dedicated appliance servers, often installed in a complete unit by application service providers, are simple to deploy but may bring new operating systems, software and hardware types into the corporate environment for the first time. With an increasing focus on energy costs, many tasks will be fulfilled by much smaller server technology, using lower powered processors such as ARM cores to deliver perhaps hundreds of servers on a single blade.


An example of a new-generation server device: Boston’s Viridis U2 packs 192 server cores into a single, low-power unit

Software Controlled Networks will do for network infrastructure changes what virtualization has done for servers: they’ll be faster, simpler, and propagated across multiple tiers of infrastructure in single operations. Simply: the network assets underpinning your key services might not be doing the same thing in an hour’s time.

“The Internet of Things” refers to the rapid growth in IP-enabled smart devices. Gartner now states that over 50% of internet connections are “things” rather than traditional computers. Their analysis continues by predicting that in more than 70% of organizations, a single executive will have management oversight over all internet connected devices. That executive, of course, will usually be the CIO. Those devices? They could be almost anything. From an Asset Management point of view, this could mean anything from managing the support contracts on IP-enabled parking meters to monitoring the Oracle licensing implications of forklift trucks (this is a real example, found in their increasingly labyrinthine Software Investment Guide). IT Asset Management’s scope will go well beyond what many might consider to be IT.


A “thing” on the streets of San Francisco, and on the internet.

3: Be highly cross functional to find opportunities where others haven’t spotted them

The Asset Manager can’t expect to be an expert in every developing line of data center technology, and every new cloud storage offering. However, by working with each expert team to understand their objectives, strategies, and roadmaps, they can be at the center of an internal network that enables them to find great opportunities.

A real life example is a British medical research charity, working at the frontline of research into disease prevention. The core scientific work they do is right at the cutting edge of big data, and their particular requirements in this regard lead them to some of the largest, fastest and most innovative on-premise data storage and retrieval technologies (Cloud storage is not currently viable for this: “The problem we’d have for active data is the access speed – a single genome is 100Gb – imagine downloading that from Google”).

These core systems are scalable to a point, but they still inevitably reach an end-of-life state. In the case of this research organization, periodic renewals are a standard part of the supply agreement. As their data centre manager told me:

“What they do is sell you a bit of kit that’ll fit your needs, front-loaded with three years support costs. After the three years, they re-look at your data needs and suggest a bigger system. Three years on, you’re invariably needing bigger, better, faster.”

With the last major refresh of the equipment, a clever decision was made: instead of simply disposing of, or selling, the now redundant storage equipment, the charity has been able to re-use it internally:

“We use the old one for second tier data: desktop backups, old data, etc. We got third-party hardware-only support for our old equipment.”

This is a great example of joined-up IT Asset Management. The equipment has already largely been depreciated. The expensive three year, up-front (and hence capital-cost) support has expired, but the equipment can be stood up for less critical applications using much cheaper third party support. It’s more than enough of a solution for the next few years’ requirements for another team in the organization, so an additional purchase for backup storage has been avoided.


4: Become the trusted advisor to IT’s Financial Controller

The IT Asset Manager is uniquely positioned to be able to observe, oversee, manage and influence the make up of the next generation, hybrid IT service environment. This should place them right at the heart of the decision support process. The story above is just one example of the way this cross-functional, educated view of the IT environment enables the Asset Manager to help the organization to optimize its assets and reduce unnecessary spend.

This unique oversight is a huge potential asset to the CFO. The Asset Manager should get closely acquainted with the organization’s financial objectives and strategy. Is there an increased drive away from capital spend, and towards subscription based services? How much is it costing to buy, lease, support, and dispose of IT equipment? What is the organization’s spend on software licensing, and how much would it cost to use the same licensing paradigms if the base infrastructure changes to a newer technology, or to a cloud solution?

A key role for the Asset Manager in this shifting environment is that of decision support. A broad and informed oversight of the structure of IT services and the financial frameworks in place around them, together with proactive analysis of the impact of planned, anticipated or proposed changes, should enable the Asset Manager to become one of the key sources of information to executive management as they steer the IT organization forwards.

]]>
Building the case for IT Asset Management with Benchmarking https://www.bmc.com/blogs/building-the-case-for-it-asset-management-with-benchmarking/ Sat, 28 Jan 2012 00:00:00 +0000 http://bmcsoftware.wpengine.com/building-the-case-for-it-asset-management-with-benchmarking/ This is a long article, but I hope it is an important one. I think the IT Asset Management sector has an image problem, and it’s one that we ought to address. I want to start with a quick story: Representing BMC software, I recently spoke at the Annual Conference and Exhibition of the International […]]]>

This is a long article, but I hope it is an important one. I think the IT Asset Management sector has an image problem, and it’s one that we ought to address.

I want to start with a quick story:

Representing BMC Software, I recently spoke at the Annual Conference and Exhibition of the International Association of IT Asset Managers (IAITAM). I was curious about how well attended my presentation would be. It was up against seven other simultaneous tracks, and the presentation wasn’t about the latest new-fangled technology or hot industry trend. In fact, I was concerned that it might seem a bit dry, even though I felt pretty passionate that it was a message worth presenting.

It turned out that my worries were completely unfounded.  “Benchmarking ITAM; Understand and grow your organization’s Asset Management maturity” filled the room on day 1, and earned a repeat show on day 2. That was nice after such a long flight. It proved to be as important to the audience as I hoped it would be.

I was even more confident that I’d picked the right topic when, having finished my introduction and my obligatory joke about the weather (I’m British, it was hot, it’s the rules), I asked the first few questions of my audience:

“How many of you are involved in hands-on IT Asset Management?”

Of the fifty or so people present, about 48 hands went up.

“And how many of you feel that if your companies invested more in your function, you could really repay that strongly?”

There were still at least 46 hands in the air.

IT Asset Management is in an interesting position right now.  Gartner’s 2012 Hype Cycle for IT Operations Management placed it at the bottom of the “Trough of Disillusionment”… that deep low point where the hype and expectations have faded.  Looking on the bright side, the only way is up from here.

It’s all a bit strange, because there is a massive role for ITAM right now. Software auditors keep on auditing. Departments keep buying on their own credit cards. Even as we move to a more virtualized, cloud-driven world, there are still flashing boxes to maintain and patch, as well as a host of virtual IT assets which still cost us money to support and license. We need to address BYOD and mobile device management. Cloud doesn’t remove the role of ITAM, it intensifies it.

There are probably many reasons for this image problem, but I want to present an idea that I hope will help us to fix it.

One of the massive drivers of the ITSM market as a whole has been the development of a recognized framework of processes, objectives, and, to an extent, standards: the IT Infrastructure Library, or ITIL, a huge success story for the UK’s Office of Government Commerce since its creation in the 1980s.

ITIL gave ITSM a means to define and shape itself, perfectly judging the tipping point between not-enough-substance and too-much-detail.

Many people, however, contend that ITIL never quite got Asset Management. As a discipline, ITAM evolved in different markets at different times, often driven by local policies such as taxation on IT equipment. Some vendors such as France’s Staff&Line go right back to the 1980s. ITIL’s focus on the Configuration Management Database (CMDB) worked for some organizations, but was irrelevant to many people focused solely on the business of managing IT assets in their own right. ITIL v3’s Service Asset Management is arguably something of an end-around.

However, ITIL came with a whole set of tools, practices and service providers that helped organizations to understand where they currently sat on an ITSM maturity curve, and where they could be. ITIL has an ecosystem – and it’s a really big one.

Time for another story…

In my first role as an IT professional, back in 1997, I worked for a company whose IT department boldly drove a multi-year transformation around ITIL.

Each year auditors spoke with ITIL process owners, prodded and poked around the toolsets (this was my part of the story), and rated our progress in each of the ITIL disciplines.

Each year we could demonstrate our progress in Change Management, or Capacity Management, or Configuration Management, or any of the other ITIL disciplines. It told us where we were succeeding and where we needed to pick up. And because this was based on a commonly understood framework, we could also benchmark against other companies and organizations. As the transformation progressed, we started setting highest benchmark scores in the business. That felt good, and it showed our company what they were getting for their investment.

But at the same time, there was a successful little team, also working with our custom Remedy apps, who were automating the process of asset request, approval and fulfillment.  Sadly, they didn’t really figure in the ITIL assessments, because, well, there was no “Asset Management” discipline defined in ITIL version 2. We all knew how good they were, but the wider audience didn’t hear about them.

Even today, we don’t have a benchmarking structure for IT Asset Management that is widely shared across the industry. There are examples of proprietary frameworks like Microsoft’s SAM Optimization Model, but it seems to me that there is no specific open, widely accepted “ITIL for ITAM”.

This is a real shame, because Benchmarking could be a really strong tool for the IT Asset Manager to win backing from their business. There are many reasons why:

  • Benchmarking helps us to understand where we are today.
  • More importantly, it helps us to show where we could get, how difficult and expensive that might be, and what we’re missing by not being there.

Those two points alone start to show us what a good tool it is for building a case for investment. Furthermore:

  • Asset Management is a very broad topic. If we benchmark each aspect of it in our organizations, we can get a better idea of where our key strengths and weaknesses are, and where we should focus our efforts.
  • Importantly, we can also show what we have achieved. If Asset Management has an image problem, then we need a way to show off our successes.

And then, provided we work to a common framework…

  • Benchmarking gives us an effective way of comparing with our peers, and with the best (and worst!) in the industry.

At the IAITAM conference, and every time I’ve raised this topic with customers since, there has been a really positive response. There seems to be a real hunger for a straightforward and consistent way of ranking ITAM maturity, and using it to reinforce our business cases.

For our presentation at IAITAM, we wanted to have a starting point, so we built one, using some simple benchmarking principles.

First, we came up with a simple scoring system. “1 to 4” or “1 to 5”, it doesn’t really matter, but we went for the former.  Next, we identified what an organization might look like, at a broad ITAM level, at each score. That’s pretty straightforward too:

Asset Maturity – General Scoring Guidelines

  • Level 1: Little or no effective management, process or automation.
  • Level 2: Evidence of established processes, automation and management. Partial coverage and value realization.
  • Level 3: Fully established and comprehensive processes. Centralized data repository. Significant automation.
  • Level 4: Best-in-class processes, tools and results. Integral part of wider business decision support and strategy. Extensive automation.

In other words, Level 4 would be off-the-chart, industry-leading good. Level 1 would be head-in-the-sand, barely started. Next, we need to tackle that breadth. Asset, as we’ve said, is a broad subject: software and hardware, datacenter and desktop, and so on.

We did this by specifying two broad areas of measurement scope:

  • Structural:  How we do things.  Tools, processes, people, coverage.
  • Value: What we achieve with those things.  Financial effectiveness, compliance, environmental.

Each of these areas can now be divided into sub-categories. For example, on “Coverage” we can now describe in a bit more detail how we’d expect an organization at each level to look:

“Asset Coverage” Scoring Levels

  • Level 1: None, or a negligible amount, of the organization’s IT Assets under management.
  • Level 2: Key parts of the IT Asset estate under management, but some significant gaps remaining.
  • Level 3: Majority of the IT Asset estate under management, with few gaps.
  • Level 4: Entire IT Asset estate under full management by the ITAM function.

This process repeats for each measurement area. Once each is defined, the method of application is up to the user (for example, separate assessments might be appropriate for datacenter assets and laptops/desktops, perhaps with different ranking/weighting for each).
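To make the application method concrete, here is a minimal sketch of how per-category scores might be combined into an overall maturity figure, with separate weightings per assessment. The category names, weights, and scores are purely illustrative assumptions, not part of any published framework:

```python
def weighted_maturity(scores, weights):
    """Combine per-category maturity scores (1-4) into a weighted overall score.

    Both arguments are dicts keyed by category name; weights let different
    assessments (e.g. datacenter vs. desktop) emphasize different categories.
    """
    if set(scores) != set(weights):
        raise ValueError("scores and weights must cover the same categories")
    total_weight = sum(weights.values())
    if total_weight <= 0:
        raise ValueError("weights must sum to a positive value")
    return sum(scores[c] * weights[c] for c in scores) / total_weight


# Hypothetical datacenter assessment: coverage and compliance weighted highest.
scores = {"coverage": 3, "process": 2, "automation": 2, "compliance": 3}
weights = {"coverage": 3, "process": 2, "automation": 1, "compliance": 3}

print(round(weighted_maturity(scores, weights), 2))  # prints 2.67
```

A desktop/laptop assessment could reuse the same function with its own weight set, which is one simple way to realize the "different ranking/weighting for each" idea above.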

You can see our initial, work-in-progress take on this here: https://communities.bmc.com/communities/people/JonHall/blog/2012/10/17/asset-management-benchmarking-worksheet. We feel this is strongest as a community resource: if it helps IT Asset Managers to build a strong case for investment, then it helps the whole ITAM sector.

Does this look like something that would be useful to you as an IT Asset Manager, and if so, would you like to be part of the community that builds it out?
