Observability Unveiled: Key Insights from IBM’s SRE Expert

During the Grafana and Friends Meetup in Chennai, I had the opportunity to attend an insightful session by Manojkumar, an SRE professional from IBM. His talk centered around observability and how IBM tackles real-world challenges using Grafana and AI.

Out of all the sessions that day, this one stood out as my personal favorite, and I couldn’t wait to share some key takeaways here!

The talk covered four critical components of modern observability systems: logs, metrics, traces, and profiling.

These make up the foundation for any robust observability setup, and he explained how each one plays a role in monitoring and troubleshooting large-scale infrastructures.

The Observability Stack - Logs, Metrics, Traces, and Profiling

  1. Logs:

    • Logs are often the first step in diagnosing issues.

    • They provide a granular record of everything happening within the system, from user activities to errors.

    • At IBM, logs are used to trace the precise sequence of events that can lead to potential failures or performance degradation.

  2. Metrics:

    • Metrics come into play when you need to track the overall health of your system.

    • By monitoring things like CPU usage, memory consumption, and response times, metrics give a top-level view of how different components are performing.

    • While logs help you understand the "what" and "when," metrics help you catch patterns before they escalate into critical issues.

  3. Traces:

    • Traces become vital in distributed systems where a single request might travel through multiple services.

    • IBM uses traces to monitor each step of the request path, allowing them to pinpoint bottlenecks and understand complex interactions between microservices.

  4. Profiling:

    • Profiling takes observability to the next level by digging into the code execution itself.

    • It’s useful for spotting inefficiencies in resource usage (like CPU or memory) at a granular level, making it easier to optimize and fine-tune system performance.

    • Profiling provides the precision needed to identify which parts of the code need optimization, especially in performance-critical applications.
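To make the profiling pillar concrete, here is a minimal sketch using Python's standard-library cProfile (this is my own illustration, not the tooling Manojkumar described): it profiles a deliberately inefficient function and prints which calls dominate the runtime.

```python
import cProfile
import io
import pstats

def slow_sum(n):
    # Deliberately inefficient: materializes a full list before summing.
    return sum([i * i for i in range(n)])

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum(100_000)
profiler.disable()

# Render the top entries sorted by cumulative time.
stream = io.StringIO()
stats = pstats.Stats(profiler, stream=stream).sort_stats("cumulative")
stats.print_stats(5)
print(stream.getvalue())
```

The report pinpoints `slow_sum` (and its inner list comprehension) as the hot spot, which is exactly the kind of code-level visibility that logs and metrics alone cannot give you.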


Real-World Challenges & Solutions in Observability

Manojkumar didn’t just talk theory; he also shared practical challenges he faced and the solutions his team implemented using Grafana and AI. Three problems, in particular, stood out:

Problem 1: Missing Logs in the Centralized Logging System

One of the biggest challenges they encountered was missing logs in their centralized logging system. Relying on CloudWatch metrics alone led to gaps in visibility, which made it hard to troubleshoot incidents.

To close the gaps, they decided to incorporate ElasticSearch metrics alongside CloudWatch data. This approach gave them a more comprehensive view and reduced the chance that critical log entries slipped through unnoticed.
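The underlying idea of cross-checking two backends can be sketched in a few lines of Python. This is a simplified illustration of my own (the record IDs and source names are hypothetical, not from the talk): compare the IDs each source holds and surface anything the centralized store is missing.

```python
def find_missing_entries(primary_ids, secondary_ids):
    """Return IDs present in the secondary source but absent from the primary.

    If the centralized system (primary) is missing entries that another
    source (secondary) still has, those gaps surface here.
    """
    return sorted(set(secondary_ids) - set(primary_ids))

# Hypothetical log record IDs reported by each backend.
cloudwatch_ids = ["req-001", "req-002", "req-004"]
elasticsearch_ids = ["req-001", "req-002", "req-003", "req-004"]

gaps = find_missing_entries(cloudwatch_ids, elasticsearch_ids)
print(gaps)  # req-003 never reached the centralized store
```

In practice the comparison would run on counts or checksums per time window rather than individual IDs, but the principle is the same: a second source of truth turns silent data loss into a visible signal.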

Problem 2: Where to Start Diagnostics?

With data pouring in from multiple sources—Prometheus, MySQL, Oracle, AWS, Azure—it can be overwhelming to know where to begin diagnosing a system issue.

The team built a collective dashboard that aggregates data from all these different sources. This unified view streamlined their diagnostics process, allowing them to get a clearer picture faster. Instead of hunting for data in different places, everything was available in one interface, which reduced the mean time to recovery (MTTR).
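A unified dashboard is, at its core, an aggregation step: pull a health snapshot from each datasource and rank the problems first. Here is a small hedged sketch of that idea in plain Python (the sources and statuses are invented for illustration; a real setup would do this inside Grafana with mixed datasources):

```python
from dataclasses import dataclass

@dataclass
class SourceStatus:
    name: str
    healthy: bool
    detail: str

def build_overview(statuses):
    """Collapse many sources into one ranked list: unhealthy sources first."""
    return sorted(statuses, key=lambda s: s.healthy)

# Hypothetical snapshots pulled from each datasource's health endpoint.
statuses = [
    SourceStatus("Prometheus", True, "all scrape targets up"),
    SourceStatus("MySQL", False, "replication lag 45s"),
    SourceStatus("AWS", True, "no active alarms"),
]

for s in build_overview(statuses):
    flag = "OK  " if s.healthy else "FAIL"
    print(f"[{flag}] {s.name}: {s.detail}")
```

The payoff is the same one the talk described: a responder starts from a single ranked view instead of hunting across five consoles, which directly shortens MTTR.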

Problem 3: Multiple Alerts for a Single Issue

Receiving multiple alerts for the same underlying issue was a common problem, leading to alert fatigue. This made it difficult to focus on the real issue amidst the flood of notifications.

By utilizing LLMs (Large Language Models) and KNN (K-Nearest Neighbors) algorithms, they were able to intelligently group alerts. The system consolidated related alerts into one primary notification using AI-driven operations through ClickHouse, drastically cutting down on unnecessary noise. This way, the team could focus on solving the root cause without getting overwhelmed by redundant alerts.
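To give a feel for how nearest-neighbor alert grouping works, here is a toy stand-in I wrote for this post (it is not IBM's pipeline): each alert is turned into a token set, and a greedy pass attaches an alert to an existing group when its similarity to that group's first alert clears a threshold, otherwise it starts a new group. A production system would use embeddings from an LLM instead of raw tokens.

```python
def tokens(text):
    return set(text.lower().split())

def jaccard(a, b):
    # Similarity between two token sets: shared tokens over total tokens.
    return len(a & b) / len(a | b)

def group_alerts(alerts, threshold=0.5):
    """Greedy nearest-neighbor grouping of alert messages.

    Each alert joins the first group whose representative alert is
    similar enough; otherwise it founds a new group.
    """
    groups = []
    for alert in alerts:
        t = tokens(alert)
        for group in groups:
            if jaccard(t, tokens(group[0])) >= threshold:
                group.append(alert)
                break
        else:
            groups.append([alert])
    return groups

alerts = [
    "high cpu usage on node-1",
    "high cpu usage on node-2",
    "disk nearly full on node-3",
]
groups = group_alerts(alerts)
print(len(groups))  # the two CPU alerts collapse into one group
```

Even this crude version shows the effect described in the talk: three raw notifications become two actionable groups, and the on-call engineer sees one page per root cause instead of one per symptom.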


Why This Talk Stood Out for Me

As someone deeply interested in observability and system health, Manojkumar’s talk felt incredibly relevant and timely. I’ve worked with observability tools like Grafana and Cribl, but seeing how IBM integrates AI to enhance monitoring was eye-opening. Their ability to handle large-scale infrastructure challenges using observability and AI offered a glimpse into the future of system monitoring.

The solutions they’ve implemented—whether it's creating multi-source dashboards or using AI for alert grouping—demonstrate how powerful modern observability tools have become. It also reinforced the idea that observability is not just about collecting data; it’s about making sense of it efficiently to keep systems running smoothly.


His talk has inspired me to dive even deeper into observability. In the coming weeks, I’ll be exploring more advanced Grafana features and tools like the LGTM Stack (Loki, Grafana, Tempo, Mimir) and Cribl for smarter log management.

Stay tuned as I continue this journey into understanding how we can use observability to improve system reliability and performance.