The following is a guest post from Stephen Elliot, Program Vice President, Management Software and DevOps, at the global market intelligence firm, IDC.

Over the past 15 years, enterprise IT organizations have been through massive transformation, from application performance management to the newer practice of observability. The adoption of multiple cloud architectures, Agile development processes, DevOps cultural practices, software-defined infrastructure, and modern application technologies has increased complexity and forced development, DevOps, platform engineering, and I&O teams to move faster and adopt observability capabilities. The siloed, fragmented tools, processes, and workstreams of traditional application performance management are no longer cost-efficient or fast enough to move an IT organization from a reactive, cost-center posture to a proactive, strategy-driving organization. C-suite business executives increasingly understand that their customer engagement and product innovation capabilities are built on their technology architecture and processes. An IT investment in observability can help deliver sustainable competitive differentiation by enabling great customer experiences, agility, and the potential for profitable growth.

To make the transition to observability, IT executives should focus on several core areas that enable optimized business outcomes and measurable key performance indicators (KPIs) that matter to business stakeholders. These areas include:

  • Telemetry data: The ability of products to collect data into a single, unified platform, in real time or near real time, from across a wide range of applications, infrastructures, and architectures. The better the data quality, the better the analytic outcomes.
  • SaaS delivery: The opportunity for IT executives to receive observability capabilities as a SaaS-delivered service, in a multi-tenant fashion with high levels of built-in security.
  • Pricing predictability and transparency: The need for customers to clearly understand how pricing models differ. Transparent, predictable models make it easier to broaden adoption across users and teams while widening the variety of data collected and the number of applications observed.
  • Full-stack observability: The use of observability for understanding system state across all components and data via a unified data platform.
  • Analytics: The application of advanced analytic models to detect and explain anomalies, correlate incidents and alerts, and reduce alert fatigue. Analytics also enables transparency into why incidents are correlated and integrates results with existing workflows.
  • Modern, easy-to-use interfaces: The ability to provide contextual value across IT teams and stakeholders, including application support, DevOps, SREs, developers, architects, and I&O professionals.
  • Culture: The enablement of a data-driven, analytics-intensive culture with a focus on collaboration and preventing problems—versus reacting to them—while measuring success with business KPIs.
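
The analytics bullet above can be made concrete with a small sketch. The following is a minimal, hypothetical example (not any vendor's actual implementation) of two of the techniques named there: flagging a statistical anomaly in a latency series with a z-score baseline, and grouping near-simultaneous alerts into a single incident to reduce alert fatigue. All function names and thresholds are invented for illustration:

```python
from statistics import mean, stdev

def zscore_anomalies(samples, threshold=2.5):
    """Flag points that deviate more than `threshold` standard deviations
    from the mean of the series (a basic statistical baseline)."""
    mu, sigma = mean(samples), stdev(samples)
    if sigma == 0:
        return []
    return [i for i, x in enumerate(samples) if abs(x - mu) / sigma > threshold]

def group_alerts(alerts, window_seconds=60):
    """Collapse alerts firing within the same time window into one
    incident, a simple way to cut alert fatigue."""
    incidents = []
    for alert in sorted(alerts, key=lambda a: a["ts"]):
        if incidents and alert["ts"] - incidents[-1][-1]["ts"] <= window_seconds:
            incidents[-1].append(alert)  # same incident: within the window
        else:
            incidents.append([alert])    # new incident
    return incidents

latency_ms = [102, 99, 101, 98, 100, 103, 97, 350, 101, 100]
print(zscore_anomalies(latency_ms))  # → [7] (the 350 ms spike)
```

A production observability platform applies far richer models (seasonality, multivariate correlation, topology awareness), but the shape of the workflow, baseline, detect, then correlate, is the same.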

Observability value spans business and IT

Today’s business environment demands that IT executives move quickly to identify potential service problems that can impact revenue, the customer experience, and company reputation. Delivering observability that drives business outcomes requires that teams use the solution with a set of objectives and clear, measurable results. These results are often measured with KPIs that observability supports, focusing on reliability and velocity themes to map technology metrics to business KPIs. Technology metrics are important, but business KPIs drive the business and, with it, observability's value creation. Business KPIs include revenue, Net Promoter Score (NPS), profit, churn rate, and retention, as well as vertical-specific KPIs that have unique value to the organization. IT executives must use business KPIs to communicate the business value of observability to business leadership teams.
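
As a hypothetical illustration of mapping reliability and velocity metrics to business KPIs (every metric name, SLO target, and KPI association below is invented for the sketch, not drawn from any real scorecard):

```python
# Hypothetical mapping: which business KPIs each technology metric supports.
METRIC_TO_KPIS = {
    "availability_pct": ["revenue", "churn_rate"],
    "p95_latency_ms":   ["nps", "retention"],
    "error_rate_pct":   ["nps", "churn_rate"],
    "deploy_frequency": ["revenue"],
}

def kpis_at_risk(metric_readings, slo_targets):
    """Return the business KPIs tied to any metric missing its SLO target."""
    at_risk = set()
    for metric, value in metric_readings.items():
        low_ok, high_ok = slo_targets[metric]
        if not (low_ok <= value <= high_ok):
            at_risk.update(METRIC_TO_KPIS.get(metric, []))
    return sorted(at_risk)

readings = {"availability_pct": 99.2, "p95_latency_ms": 820,
            "error_rate_pct": 0.4, "deploy_frequency": 12}
targets  = {"availability_pct": (99.9, 100.0), "p95_latency_ms": (0, 500),
            "error_rate_pct": (0.0, 1.0), "deploy_frequency": (4, 1000)}
print(kpis_at_risk(readings, targets))  # → ['churn_rate', 'nps', 'retention', 'revenue']
```

Even a toy mapping like this lets an IT executive answer the question business leaders actually ask: not "is latency high?" but "which KPIs does this put at risk?"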

One of the goals for observability should be to help IT stakeholders and teams make their increasingly complex environments (i.e., systems, applications, clouds, etc.) observable. Simply put, you can't measure performance and react properly in an environment that you can't manage or see. Creating an observable environment involves delivering fast (and deep) analysis and visualizations to various IT teams in context, as well as providing guidance to DevOps and development teams on observability designs and continuous improvement strategies. The visualizations provide different teams with critical data from a single platform, using terms and context they understand, with drill-down capabilities into high-cardinality data. This is important because developers often instrument code with high-cardinality metadata that provides the critical business context. In addition to Cloud Centers of Excellence (CoEs) and DevOps teams, some organizations have established new roles such as observability engineers, who are tasked with providing observability services and practices to various IT groups to drive operational and performance efficiencies. For IT executives, observability provides an opportunity to deliver business benefits spanning capacity planning, forecasting, performance, and observability-driven backlogs. It defines value for business customers across parameters such as cost, quality, and speed/throughput while enabling an improved focus on delivering quality software products faster and more reliably.
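
To make the high-cardinality point concrete, here is a minimal sketch (a toy in-memory store, not a real observability SDK or any vendor's API) of recording metric points with per-customer metadata and then drilling down by any attribute:

```python
# Toy event store: each point carries arbitrary high-cardinality
# attributes (e.g., customer_id), so teams can slice a metric down
# to a single customer, region, or plan.
events = []

def record(metric, value, **attributes):
    """Record a metric point with key/value metadata attached."""
    events.append({"metric": metric, "value": value, **attributes})

def drill_down(metric, **filters):
    """Filter recorded points by any attribute -- the 'drill-down'."""
    return [e for e in events
            if e["metric"] == metric
            and all(e.get(k) == v for k, v in filters.items())]

record("checkout.latency_ms", 120, customer_id="c-1001", region="eu-west", plan="pro")
record("checkout.latency_ms", 480, customer_id="c-2002", region="us-east", plan="free")
record("checkout.latency_ms", 95,  customer_id="c-1001", region="eu-west", plan="pro")

eu_points = drill_down("checkout.latency_ms", region="eu-west")
print([e["value"] for e in eu_points])  # → [120, 95]
```

This is the business-context payoff of high cardinality: because `customer_id` rides along with every point, a slow checkout can be traced to the specific customers and plans it affects, not just an aggregate average.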

Stephen Elliot is Program Vice President, Management Software and DevOps, at IDC. He advises senior IT, business, and investment executives globally on strategy and operational tactics that drive the execution of digital transformation and business growth.
