Gartner: Explainable AI Will Push LLM Observability to 50% of GenAI Deployments by 2028

Gartner today predicted that, by 2028, the growing importance of explainable AI (XAI) will drive LLM observability investments to cover 50% of all GenAI deployments, up from just 15% today. The forecast underscores a critical gap in how organizations are deploying generative AI: most are flying blind on what their models are actually doing and why.

LLM observability goes beyond traditional IT monitoring to focus on AI-specific metrics like hallucination rates, bias detection, and token utilization. These solutions provide actionable insights into model behavior for development teams, MLOps engineers, and SREs responsible for keeping AI systems reliable in production. Gartner argues that XAI — which clarifies why a model produced a particular output — and observability — which validates how that output was generated — are complementary capabilities that must mature together.
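Metrics like these are typically captured by lightweight instrumentation around each model call. A minimal sketch in Python of what such a tracker might look like; the names (`LLMCallRecord`, `ObservabilityTracker`) are hypothetical, and the boolean `grounded` flag is a naive stand-in for the real hallucination-detection pipelines that commercial tools ship:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class LLMCallRecord:
    """One logged LLM invocation (hypothetical schema for illustration)."""
    prompt_tokens: int       # tokens sent to the model
    completion_tokens: int   # tokens the model generated
    latency_ms: float        # wall-clock time for the call
    grounded: bool           # True if a downstream check found the output
                             # supported by source material (naive proxy
                             # for "not a hallucination")


class ObservabilityTracker:
    """Aggregates per-call records into the AI-specific metrics
    observability tools report: hallucination rate, token utilization."""

    def __init__(self) -> None:
        self.records: List[LLMCallRecord] = []

    def log(self, record: LLMCallRecord) -> None:
        self.records.append(record)

    def hallucination_rate(self) -> float:
        # Fraction of calls whose output failed the groundedness check.
        if not self.records:
            return 0.0
        ungrounded = sum(1 for r in self.records if not r.grounded)
        return ungrounded / len(self.records)

    def avg_tokens_per_call(self) -> float:
        # Mean total (prompt + completion) tokens, a basic cost signal.
        if not self.records:
            return 0.0
        total = sum(r.prompt_tokens + r.completion_tokens for r in self.records)
        return total / len(self.records)
```

Production tools (Arize, Langfuse, WhyLabs, and others mentioned below) layer far more on top of this — trace trees, evaluation models, drift detection — but the core pattern is the same: record every call, then aggregate AI-specific signals rather than only generic latency and error rates.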

The financial stakes are significant. Gartner forecasts the global GenAI models market will exceed $25 billion in 2026 and reach $75 billion by 2029. Without proper XAI and observability foundations, the analyst firm warns that GenAI initiatives risk being confined to low-risk, internal tasks where output verification is manually manageable — limiting the return on these massive investments.

Commentary

The 15%-to-50% jump that Gartner is projecting tells you something important: most enterprises deploying GenAI today have no real visibility into what their models are doing. They are shipping AI-powered products and workflows while essentially trusting the black box. That is fine for internal prototypes and low-stakes automation, but it is completely untenable for regulated industries, customer-facing applications, or anything where a hallucination could cause real harm.

The observability tooling market is going to explode. Companies like Arize, Langfuse, WhyLabs, and dozens of startups are positioning for exactly this wave. The real question is whether observability will become a built-in expectation — like APM for web applications — or remain an afterthought that only the most mature AI teams implement. Given the regulatory pressure building around AI transparency in the EU and elsewhere, my bet is on the former.