BMC Helix AIOps

What’s New in BMC Helix AIOps

Learn all about our latest release

The 25.2 BMC Helix ITOM release introduces new AI agents and new observability and discovery capabilities that help diagnose and prevent major incidents while optimizing application performance.

BMC Helix AIOps Release

In the new release, we’ve added BMC HelixGPT Post Mortem Analyzer, BMC HelixGPT Insight Finder, LLM Observability, Application Observability with OpenTelemetry Logs, and Deep Container Discovery.

Get deep insights into the cause of incidents to prevent them from recurring

After an incident occurs, it’s time-consuming and difficult to understand all the factors that caused it and which measures should be taken to prevent it from happening again.

  • After an incident happens, get a detailed explanation that covers the underlying root cause, the impact to operations, and the resolution.

  • Learn from system failures so that you can take preventive steps to keep them from recurring.

Automatically create dashboard reports on issue status and service health with natural language chat

It’s difficult to track and visualize issue status and understand how those issues are impacting service health.

  • Chat with an AI agent to automatically create dashboards that show whether an issue is being worked on, the timeline for fixing it, and the factors that caused it.

  • Easily create, save, and share dashboards based on the criteria and incident timelines that you specify.

Get extended visibility into application performance by correlating OpenTelemetry span logs with traces

To ensure users have a good application experience, you need deep visibility into application performance.

  • Correlate span log data with traces to capture latency, response time, duration, and error rate.

  • Get situational insights from OpenTelemetry data that help you debug and troubleshoot application performance issues more efficiently (see the sketch below).
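
BMC Helix AIOps ingests and correlates this telemetry for you; purely as an illustration of the underlying mechanism, here is a minimal sketch using the open-source OpenTelemetry Python SDK that shows how timing attributes and log-style span events share a trace ID with their span. The service name, attribute keys, and values are illustrative assumptions, and a real deployment would export OTLP to a collector rather than printing to the console.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Minimal SDK setup; a production service would export OTLP to a collector
# instead of printing spans to the console.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name

with tracer.start_as_current_span("process_order") as span:
    # Timing and error attributes that a backend can roll up into
    # latency, duration, and error-rate views.
    span.set_attribute("order.id", "ORD-1001")       # illustrative value
    span.set_attribute("response_time_ms", 118)      # illustrative value
    # A log-style span event: it is recorded while the span is active,
    # so it carries the same trace and span IDs.
    span.add_event("payment.authorized", {"duration_ms": 42, "error": False})
```

Because the event is recorded inside the active span, it shares that span's trace ID, so a backend that ingests both traces and span logs can join them without any extra correlation key.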

Improve LLM application performance and optimize training costs while maximizing quality and accuracy

Large language model (LLM) applications consume more resources than traditional applications, and it’s important to measure their quality and accuracy to ensure they are performing well for users.

  • Use observability data to create dashboards with LLM quality and evaluation metrics.

  • Optimize LLM training costs with dashboards that track token usage and compute consumption, with GPU utilization metrics such as power usage, memory, and temperature (see the sketch below).
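
Again purely as an illustration, and not the product's own instrumentation, the following sketch uses the open-source OpenTelemetry Python SDK to emit the kind of token-usage and GPU measurements such dashboards are built from. The metric names, units, and values shown are assumptions, and in practice the GPU figures would come from a sampler such as nvidia-smi or DCGM.

```python
from opentelemetry import metrics
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import (
    ConsoleMetricExporter,
    PeriodicExportingMetricReader,
)

# Minimal SDK setup; a production service would export OTLP to a collector.
reader = PeriodicExportingMetricReader(ConsoleMetricExporter())
metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

meter = metrics.get_meter("llm-service")  # hypothetical service name

# Counter for tokens consumed per request (metric name is illustrative).
token_counter = meter.create_counter("llm.tokens.used", unit="token")
# Histograms for sampled GPU figures (power draw and temperature);
# the values would normally be read from nvidia-smi, DCGM, or similar.
gpu_power = meter.create_histogram("gpu.power.usage", unit="W")
gpu_temp = meter.create_histogram("gpu.temperature", unit="Cel")

# Hypothetical readings recorded after a single LLM call.
token_counter.add(312, {"model": "demo-model", "token.type": "completion"})
gpu_power.record(187.5, {"gpu.index": "0"})
gpu_temp.record(63.0, {"gpu.index": "0"})
```

Once exported, measurements like these can be charted alongside quality and evaluation metrics on an observability dashboard.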

Simplify license, security, and compliance management for containerized software

While containers make application development more efficient, it can be difficult to inventory and track containerized software.

  • Get automated discovery of containerized software by extending SSH-based worker node scans into containers.

  • With visibility into containerized software, more easily manage software licenses and apply security patches in a timely manner.

Other AIOps enhancements

Automate capacity reporting with BMC Helix Dashboards

You can now automate the creation, distribution, and archival of capacity reports with BMC Helix Continuous Optimization.

  • Build a customized capacity reporting template in BMC Helix Dashboards.

  • Automate the creation and email distribution of capacity reports on a cadence you decide.

  • Manage different report types in one place and create user access controls for retrieving archived reports.

Try BMC Helix AIOps

Explore the BMC Helix Platform
