OpenTelemetry provides community-contributed instrumentation that standardizes telemetry data collection from your applications and services, without vendor lock-in. With OpenTelemetry tracing already stable and metrics stability fast approaching, New Relic now provides an enhanced OpenTelemetry offering that includes support for the OpenTelemetry Protocol (OTLP) over both gRPC and HTTP/1.1 and Infinite Tracing, along with curated experiences for the ingested data. This offering enables faster troubleshooting and provides a seamless transition to OpenTelemetry.

Get started with OpenTelemetry and New Relic

The best way to get started with OpenTelemetry is to try it out with your own applications and services. This way you’ll get value from the instrumentation right away and you’ll be instrumenting services with which you’re already familiar. 

If that’s not the case for you, you can use our fork of the Online Boutique cloud-native microservices demo as your sample application, or one of the New Relic OpenTelemetry examples.

Ingest data

After you have instrumented your services with the OpenTelemetry SDK/API, you can ingest your data into New Relic One using our native OTLP endpoint. Ingesting data with the OTLP endpoint requires only a few steps:

  1. Point your OTLP/gRPC exporter (recommended) or OTLP/HTTP exporter at the corresponding New Relic OTLP endpoint: otlp.nr-data.net:4317 (gRPC) or https://otlp.nr-data.net:4318 (HTTP).

  2. Add an api-key header whose value is the license key for the New Relic account you want to send your data to.

Here’s what that might look like in the otel-config.yaml file if you’re using an OpenTelemetry collector:

exporters:
  otlp:
    endpoint: otlp.nr-data.net:4317
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}

Alternatively, you might use the OTLP/HTTP exporter:

exporters:
  otlphttp:
    endpoint: https://otlp.nr-data.net:4318
    headers:
      api-key: ${NEW_RELIC_LICENSE_KEY}
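
If you’d rather export directly from your application instead of routing data through a collector, the language SDK exporters take the same endpoint and header. Here’s a minimal sketch using the OpenTelemetry Python SDK and its OTLP/HTTP exporter (the service name is illustrative, and the license key is read from an environment variable):

import os

from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Send spans over OTLP/HTTP to the New Relic endpoint, authenticating with the
# same api-key header used in the collector configuration above.
provider = TracerProvider(resource=Resource.create({"service.name": "checkoutservice"}))
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://otlp.nr-data.net:4318/v1/traces",
            headers={"api-key": os.environ["NEW_RELIC_LICENSE_KEY"]},
        )
    )
)
trace.set_tracer_provider(provider)

The OTLP/gRPC exporter works the same way against otlp.nr-data.net:4317.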

For more information, check out the OpenTelemetry quick start documentation.

View OpenTelemetry data through New Relic

Now that your application is sending OpenTelemetry data to New Relic, you’ll instantly see it in a dynamic UI that lets you group, facet, and filter metrics such as response time, error rate, and throughput by any of the OpenTelemetry attributes (service.version, http.status_code, thread.name, and so on). You also get curated views of databases, transactions, errors, and distributed tracing that are useful for troubleshooting. Here’s a glimpse of a couple of scenarios, illustrated with the microservices demo app, where these curated experiences for OpenTelemetry come in handy for any engineer.
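
Before diving into those scenarios, it’s worth noting that every dimension you can facet on comes straight from your instrumentation: resource attributes such as service.version ride along with every span and metric the SDK emits, and any custom attribute you set becomes another dimension to group by. Here’s a minimal sketch with the OpenTelemetry Python SDK (the attribute names and values are illustrative):

from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Resource attributes (service.version, deployment.environment, ...) are attached
# to everything this service emits, so they show up as facetable dimensions.
trace.set_tracer_provider(TracerProvider(
    resource=Resource.create({
        "service.name": "adservice",              # demo service name
        "service.version": "1.4.2",               # illustrative value
        "deployment.environment": "production",   # illustrative value
    })
))

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("serve-ad") as span:
    # Span attributes become additional dimensions you can filter and group by.
    span.set_attribute("ad.campaign", "spring-sale")  # hypothetical attribute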

Daily monitoring of your environment

If you’re an Ops/DevOps engineer, you need to understand on a regular basis what’s happening across your entire environment. New Relic Explorer brings all your OpenTelemetry data together in one place without any configuration on your part. You can discover emerging issues in real time, without relying on static, pre-configured thresholds or dashboards, because crucial changes in your estate are highlighted across all your accounts. You get quick insight into changes and health across your entire environment so you can analyze, understand, and resolve issues faster. New Relic Explorer includes several approaches to visualizing, exploring, and understanding your entire estate:

  • New Relic Lookout provides a real-time view across all your accounts, highlighting changes in all your telemetry in an easy-to-understand, accessible user experience that requires no configuration. An intuitive circle visualization, with color indicating severity of recent changes and size conveying scale, draws your attention where it’s needed most.
  • New Relic Navigator helps you quickly understand the health of your OpenTelemetry services across all accounts, focus on specific groups of entities based on tags, and quickly drill into services exhibiting issues. In the following screenshot, several OpenTelemetry services are alerting.

Based on this visualization, however, it’s still unclear how these alerting services are connected. Although New Relic Explorer gives you a bird’s eye view of your system, what if you want to visualize the services and their dependencies in your distributed architecture? 

Automaps do exactly that: they help you quickly identify performance problems. You can use automaps and the related entities widget to identify health issues in other services and understand how services connect to your infrastructure components.

The automap for the FrontEnd service in the following screenshot shows its dependence on five other services, including AdService. Automaps provide a path that you can trace down to identify the root cause.

Troubleshooting an error

You can also create alert conditions based on thresholds you’ve configured for your OpenTelemetry service and then troubleshoot the violations they raise. The status color of these OpenTelemetry services appears in New Relic Explorer and in the activity stream, where you can also see the details of the critical or warning violations.

In the previous screenshot, the activity stream shows that AdService has a critical violation and that its error rate has crossed the threshold. Let’s take a look at the traces that have errors.

When you click one of the error traces, you see a trace map at the top, such as the one pictured above. Trace maps help you visualize all the services involved in the execution of a specific request (a trace) alongside the individual spans. Each entity shows rich context on hover, and you can see more entity-specific details without leaving the trace. You also get the error details you can use to root out the cause of the error, including any stack trace generated by the OpenTelemetry instrumentation. These error details are surfaced from span events ingested in accordance with the OpenTelemetry specification. In the following screenshot, note the additional error details in the red box.
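
Those error details originate from the exception span events recorded by the instrumentation. Auto-instrumentation typically records them for you; if you’re instrumenting by hand, here’s a minimal sketch with the OpenTelemetry Python API (the span name and failing call are hypothetical):

from opentelemetry import trace
from opentelemetry.trace import Status, StatusCode

tracer = trace.get_tracer(__name__)

def fetch_ads():
    # Hypothetical application call that fails.
    raise RuntimeError("ad catalog unavailable")

with tracer.start_as_current_span("get-ads") as span:
    try:
        fetch_ads()
    except RuntimeError as exc:
        # record_exception attaches an exception span event (type, message, stack
        # trace), and the error status marks the span as failed in the trace UI.
        span.record_exception(exc)
        span.set_status(Status(StatusCode.ERROR, str(exc)))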

Besides trace errors, you can also ingest logs as span events, or send your application logs to New Relic One using any of the supported log forwarders and correlate the performance of your services with the corresponding logs. With logs in context, you can troubleshoot problems faster by jumping directly to the specific log lines related to the trace you’re investigating, as shown below.
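
For that correlation to work, each log line has to carry the trace and span IDs of the active span; many log forwarders and instrumentation agents inject them automatically. If you’re wiring it up by hand, here’s a minimal sketch that reads them from the current span context (the logger name and attribute keys are assumptions about how your log pipeline is configured):

import logging

from opentelemetry import trace

logger = logging.getLogger("checkoutservice")  # illustrative logger name

def log_with_trace_context(message: str) -> None:
    # Attach the active span's IDs so the backend can link this log line
    # to the corresponding distributed trace.
    ctx = trace.get_current_span().get_span_context()
    logger.info(
        message,
        extra={
            "trace.id": format(ctx.trace_id, "032x"),  # 32-char hex trace ID
            "span.id": format(ctx.span_id, "016x"),    # 16-char hex span ID
        },
    )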

New Relic OpenTelemetry is ever-evolving

As the OpenTelemetry project continues to mature, we’re continuously evolving our support for OpenTelemetry in New Relic. Our goal is to empower engineers to harness the complete power of New Relic One regardless of the instrumentation source so you can quickly discover the data you need to determine the root cause of issues and optimize your applications’ and services’ performance. 

If you have any feedback about using New Relic with OpenTelemetry, select the feedback button in the top navigation in New Relic, and send us your valuable suggestions.

If you haven’t already, sign up for our free tier today and start sending your OpenTelemetry data to your New Relic account now—you’ll get 100 GB of data free every month.