GigaOm Market Insight Brief: The observability imperative


A new GigaOm observability report reveals that enterprises adopting open standards and implementing top-down cost governance will gain significant competitive advantages in operational efficiency and incident response velocity.

Amanda Mitchell, Chief Editor, Chronosphere

Amanda Mitchell, Chief Editor at Chronosphere, brings years of expertise creating B2B content and driving content strategy. She oversees all content creation at Chronosphere from the blog to Demand Gen assets to video.


Containerization and complexity

Organizations have reached an inflection point in observability. Today, 97% of enterprises run significant containerized workloads in production, and enterprise observability has evolved from a “nice-to-have” capability into a business-critical requirement.

The next wave is already here. Within 24 months:

  • Highly containerized environments (75%+ of workloads) will surge from 8% to 23% of enterprises, a 188% increase
  • Majority containerization (51%+ of workloads) is expected to increase from 40% to 55%
  • Overall container adoption will increase by 25% across the board

As containerization accelerates, so does complexity and the observability challenges that come with it.

These findings come from GigaOm’s recent Market Insight Brief: The Observability Imperative. Read on for more highlights.

The cost crisis, gaps in visibility and scale, and the risks of lock-in

Cost management has become the primary operational challenge in observability. Containerized environments generate unprecedented telemetry volumes that make cost-aware observability tooling mandatory—not optional.

The numbers tell the story:

  • 53% of organizations identify “high telemetry volume or data cost” as their top operational challenge
  • 47% fail to achieve accurate budget forecasting, even among those with mature observability programs
  • 39% cite a lack of visibility across teams and systems
  • 39% report limited scalability as their primary platform constraint
  • 63% identify vendor lock-in as problematic for long-term strategy

“The most pressing urgency drivers are not necessarily technical limitations, but rather imposed through cost constraints.”

Cost pressure is the driver behind every gap

These challenges aren’t separate problems, but rather symptoms of the same cost crisis:

  • Visibility gaps emerge because data ingestion, retention, and cross-team coverage are curtailed to control spend.
  • Limited scalability reflects the same constraint: organizations want to ingest more telemetry but are blocked by rising costs.
  • Vendor lock-in becomes more painful as bills climb and enterprises face high switching costs once deeply entrenched in proprietary platforms.

GigaOm’s research reveals the greatest risks in observability are not about tool selection, but about how costs and complexity scale in cloud native environments. Without intentional governance, organizations quickly encounter runaway expenses, fragmented policy enforcement, and long-term lock-in.

  • Risks of cost escalation without controls:
    • Telemetry ingestion, storage, and query expenses often exceed forecasts, creating budgeting volatility.
    • Teams face an impossible choice: ingest all telemetry for full visibility, or limit data to control costs.
    • Multiple consoles and conflicting rule sets slow incident response.

The era of “collect everything and figure it out later” is over. Modern observability requires intentional data governance, cost controls, and architectural flexibility from day one.

Organizations without open standards policies find themselves locked into vendor-specific instrumentation that becomes increasingly expensive to modify or replace, creating switching friction that compounds over time.

How organizations respond: ROI, maturity, automation, and GenAI

Despite these challenges, organizations investing in observability platforms report compelling returns. Average ROI sentiment scores of 4.17 out of 5 indicate strong satisfaction with observability investments. More concretely:

  • 24% of enterprises report greater than 50% MTTR reduction
  • 53% achieve 25–50% improvement in incident resolution times

Most enterprises (69%) have established Observability Centers of Excellence, with 64% placing ownership within IT operations organizations. This reflects growing operational maturity, but also highlights the need for cross-functional coordination as these programs scale.

The shift toward automation

Organizations are moving beyond reactive monitoring toward predictive and automated operations:

  • 65% prefer either “automated triage with manual approval” or “fully automated” incident response

This represents a fundamental shift in operational philosophy that requires both technical and cultural transformation. Containerized workloads accelerate this need, as dynamic incident patterns simply outpace manual monitoring approaches.

GenAI adoption is accelerating

Enterprises are embracing artificial intelligence to enhance observability capabilities:

  • 86% support GenAI for automated incident response
  • 83% want AI-driven cost optimization recommendations
  • 76% are open to AI-generated post-incident summaries
  • 78% are comfortable training GenAI on internal data with appropriate security controls (Q47–Q51)

GenAI adoption will accelerate competitive separation. Early adopters implementing AI-driven incident response and cost optimization are already achieving measurable advantages in MTTR and operational efficiency.

Three practices that separate high performers

Based on GigaOm’s analysis of leading enterprises, three practices consistently separate high performers from peers. These practices are not aspirational “nice-to-haves,” but proven approaches that reduce cost risk, avoid lock-in, and accelerate operational maturity:

Cost Governance Engineering

  1. Tiered retention policies based on data criticality
  2. Dynamic sampling to manage volume without losing visibility
  3. Real-time cost monitoring with team-level accountability

High performers reduce data volumes by reshaping and optimizing telemetry rather than discarding it, preserving visibility while controlling cost.
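To make the dynamic-sampling practice concrete, here is a minimal, stdlib-only Python sketch of one common approach: a head-sampling decision that always keeps error traces and samples the rest deterministically by trace ID, so every span of a trace gets the same verdict. The function name and rates are illustrative assumptions, not an implementation described in the report.

```python
import hashlib

def keep_trace(trace_id: str, sample_rate: float, is_error: bool = False) -> bool:
    """Decide whether to keep a trace (illustrative sketch).

    - Error traces are always kept, since they carry the most
      diagnostic value per byte.
    - Healthy traces are sampled deterministically: hashing the
      trace ID means all collectors agree on the same decision
      without coordination.
    """
    if is_error:
        return True
    # Map the trace ID to a stable bucket in [0, 10000).
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 10_000
    return bucket < sample_rate * 10_000
```

In practice the sample rate would be tuned per service or signal tier, which is how tiered retention and cost accountability connect back to a single governance policy.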

Open Standards Foundation

  1. Platform consolidation built on open standards frameworks
  2. Explicit policies requiring OpenTelemetry compatibility and data portability rights

This approach enables vendor optionality while reducing operational complexity.
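As one illustration of how such a policy looks in practice, an OpenTelemetry Collector pipeline can enforce sampling and export telemetry in a vendor-neutral way. This is a minimal sketch, not a recommendation from the report; the endpoint and sampling percentage are placeholder assumptions.

```yaml
receivers:
  otlp:
    protocols:
      grpc: {}

processors:
  # Keep a cost-controlled fraction of traces (illustrative rate).
  probabilistic_sampler:
    sampling_percentage: 15
  batch: {}

exporters:
  otlp:
    # Any OTLP-compatible backend; switching vendors means changing
    # this endpoint, not re-instrumenting applications.
    endpoint: backend.example.com:4317

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp]
```

Because instrumentation and transport follow the open OTLP standard, the switching friction described above is confined to the collector configuration rather than spread across every application.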

Predictive Operations Implementation

  1. Moving beyond reactive monitoring to predictive capabilities powered by automation and AI
  2. Proactive issue detection and resolution before user impact

The new competitive battleground

Cost management has become a defining factor in observability success. With telemetry costs emerging as the primary operational challenge, organizations that engineer cost governance into their observability architecture gain sustainable advantages in scaling and operational efficiency.

The choice is clear: consolidate strategically around open standards and cost-aware platforms, or accept the compounding disadvantages of fragmented, expensive, and increasingly obsolete observability architectures.

GigaOm observability report

Learn why enterprises implementing open standards and cost governance will gain competitive advantage.
