[DevoxxGR2025] Why OpenTelemetry is the Future
Steve Flanders, an observability veteran, delivered a 13-minute talk at Devoxx Greece 2025 outlining five reasons why OpenTelemetry (OTel) is poised to dominate the observability space.
Unified Data Collection
Flanders began by addressing a common pain point: managing multiple libraries for traces, metrics, and logs. OpenTelemetry, a CNCF project second only to Kubernetes in activity, offers a single, open-standard library for all telemetry signals, including profiling and real user monitoring. Supporting standards like W3C Trace Context, Zipkin, and Prometheus, OTel allows developers to instrument applications once, regardless of backend. This eliminates the need for proprietary libraries, simplifying integration and reducing rework when switching vendors.
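To make the instrument-once idea concrete, here is a minimal sketch using the OpenTelemetry Python SDK (the service name, span name, and OTLP endpoint are assumptions): the application only ever calls the vendor-neutral API, and the backend is chosen purely through exporter configuration.

```python
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

# Configure the SDK once; the backend is just an exporter endpoint (assumed here).
provider = TracerProvider()
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317"))
)
trace.set_tracer_provider(provider)

# Application code touches only the vendor-neutral API.
tracer = trace.get_tracer("checkout-service")  # hypothetical service name
with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.items", 3)  # illustrative attribute
```

Swapping backends means pointing the exporter at a different OTLP endpoint; the instrumentation above stays untouched.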
Flexible Data Control
The OpenTelemetry Collector, deployable as an agent or a gateway, provides robust data processing. Flanders highlighted its ability to filter sensitive data, such as personally identifiable information, before export. Developers can send full datasets to internal data lakes while sharing only subsets with vendors, offering considerable flexibility; a configuration sketch follows below. OTel is also modular: teams can adopt its instrumentation libraries, its Collector, or both independently, integrating each with existing systems. This vendor-agnostic approach keeps data portable, since switching backends requires only configuration changes, not re-instrumentation.
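A minimal Collector configuration sketch of that split (the endpoint addresses and the scrubbed attribute name are assumptions): one pipeline forwards everything to an internal data lake, while a second pipeline deletes a PII attribute before exporting to a vendor.

```yaml
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
  attributes/scrub:
    actions:
      - key: user.email              # hypothetical PII attribute to drop
        action: delete

exporters:
  otlp/datalake:
    endpoint: datalake.internal:4317       # hypothetical internal endpoint
  otlp/vendor:
    endpoint: ingest.vendor.example:4317   # hypothetical vendor endpoint

service:
  pipelines:
    traces/internal:      # full data to the internal data lake
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/datalake]
    traces/vendor:        # scrubbed subset shared with the vendor
      receivers: [otlp]
      processors: [attributes/scrub, batch]
      exporters: [otlp/vendor]
```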
Enhanced Problem Resolution
OTel’s context and correlation features link traces, metrics, and logs, accelerating issue resolution. Flanders showcased a service map visualizing errors and latency, enriched with resource metadata such as the Kubernetes pod and cloud provider. This lets teams pinpoint issues, such as a faulty pod causing errors in the currency service, and reduce mean time to resolution. With broad adoption by vendors, users, and projects, and stable support for the core signals, OTel is a production-ready standard reshaping observability.
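To show how that resource metadata gets attached, a small sketch using the Python SDK's Resource API (the attribute values are assumptions); in real deployments the Kubernetes attributes are typically added automatically, for example by the Collector's k8sattributes processor, rather than hard-coded.

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Resource attributes describe where telemetry comes from; values here are illustrative.
resource = Resource.create({
    "service.name": "currency-service",
    "k8s.pod.name": "currency-service-7d9f-abcde",  # hypothetical pod name
    "cloud.provider": "aws",                        # hypothetical cloud provider
})

# Every span emitted by this provider carries the resource metadata,
# which backends use to group errors and latency by pod, node, or cloud.
provider = TracerProvider(resource=resource)
trace.set_tracer_provider(provider)
```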