Over 15 years ago, NGINX started its life as an open source web server designed to be fast, stable, and reliable.

Today it’s one of the world’s fastest web servers and is widely used as a content cache and media streaming server. Available in both open source and commercial editions, it’s incredibly efficient and works well on both low-cost and high-end servers, as it was built to handle requests asynchronously with a small memory footprint.

In addition, NGINX is extremely popular as a reverse proxy to other services. This means that NGINX acts as a frontend that passes incoming web requests to a backend server (or services on the same server) running Ruby, Python, Java, .NET Core, or PHP-FPM. NGINX can also act as a generic TCP, UDP, and mail proxy.

If NGINX is struggling to serve requests quickly, it’s usually a symptom of problems in your application or architecture, since NGINX often acts as an early indicator of application performance issues. Close monitoring of NGINX is critical to ensure the health of your web application and server environment.

Key NGINX metrics

Let’s take a look at some key performance metrics for NGINX that you should consider monitoring.

Tip: Some monitoring functionality depends on whether you’re using the open source or commercial version of NGINX. To view all available metrics for both, check out the NGINX on-host integration docs (you can learn how to set up the integration below).

net.connectionsAcceptedPerSecond: The number of accepted client connections per second. When checking this metric, consider what’s typical for the hour of the day or the day of the week, so you aren’t distracted by ordinary variations. A significant drop could be an early warning sign of DNS issues; a moderate rise might reflect an advertising campaign, while a large rise could indicate a brute-force or attempted DoS attack.

net.connectionsActive: Active connections are especially important if you have many long-lived connections, such as WebSockets or other web services. If there are too many active connections, such as keep-alive connections, your application might deny some users access or waste resources. Conversely, allowing too few active connections might degrade performance as users are forced to reconnect more often.

net.connectionsDroppedPerSecond: A high number of dropped connections should prompt you to look at network connectivity or at other servers connected to the one reporting issues. Also check for missing database indexes, thread blocking, or inefficient long-running transactions.

net.connectionsReading and net.connectionsWriting: These connection rates, where NGINX is reading a request header or writing a response back to the client, ordinarily keep steady with each other. If you find one rising out of sync with the other, benchmark your server and then tune its performance.

net.connectionsWaiting: The current number of idle client connections waiting for a request. This should ideally be 0 (or a low value) and should only spike occasionally. If you’re getting consistently high wait counts, look at your database connections and query speeds.

net.requestsPerSecond: The total number of client requests per second is an important metric of how well your server performs. If you’re getting a high number of connections dropped or connections waiting, try these tuning tips from NGINX—covering worker processes, keep-alive connections, logging, and resource limits—to help benchmark and tune your server’s performance.
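All of these counters ultimately come from NGINX’s stub_status page, which the integration polls. You can inspect the raw values yourself; here’s a minimal shell sketch that parses a captured sample of stub_status output (the /status URL in the comment is an assumption about where you’ve exposed the endpoint):

```shell
# Sample stub_status output, captured here so the snippet is self-contained.
sample='Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106'

# In a live setup you would fetch it instead (path is an assumption):
#   sample=$(curl -s http://127.0.0.1/status)

# Pull individual counters out with awk.
active=$(echo "$sample" | awk '/^Active connections/ {print $3}')
waiting=$(echo "$sample" | awk '/Waiting/ {print $6}')

echo "active=$active waiting=$waiting"
```

The third line of the output holds the cumulative accepts, handled, and requests counters, from which per-second rates like the metrics above are derived.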

Monitoring NGINX with New Relic

The NGINX integration uses the New Relic Infrastructure agent to collect and send performance metrics from your NGINX server to the New Relic platform.

You can use the integration to monitor either the open source or commercial edition of NGINX. The integration can monitor all of the metrics listed above, but for commercial versions of NGINX, it also provides useful aggregates of client connection counts and detailed counts of 2xx, 3xx, 4xx, and 5xx HTTP responses. These are especially useful as early warnings of missing pages (4xx) or code errors and exceptions (5xx).

Let’s walk through the steps needed to set up monitoring for open source NGINX running on an Ubuntu server. You can also monitor NGINX running as a service in Kubernetes or on Amazon ECS.

Note: This process assumes you’ve already set up an Ubuntu server and enabled the stub_status module (ngx_http_stub_status_module) for open source NGINX.
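For reference, here’s a minimal sketch of exposing the stub_status endpoint in your NGINX config (the /status path and the allow rule are assumptions; adjust them to your environment):

```nginx
server {
    listen 127.0.0.1:80;

    location /status {
        # Expose basic connection and request counters for the integration to poll.
        stub_status;
        # Restrict access to local requests only.
        allow 127.0.0.1;
        deny all;
    }
}
```

After reloading NGINX, a request to the /status path should return the plain-text counters.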

Install the agent and integration

  1. From New Relic One, navigate to your account drop-down (in the top-right corner) and select Add more data.
  2. Select your operating system (in this case Ubuntu), and follow the prompts to get your license key and select your Ubuntu version.
  3. To deploy the Infrastructure agent and the NGINX integration, run the following commands on your server:
    • Import the New Relic Infrastructure agent GPG key.
      curl -s https://download.newrelic.com/infrastructure_agent/gpg/newrelic-infra.gpg | sudo apt-key add -
    • Add the New Relic repository (view all distributions here).
      printf "deb [arch=amd64] https://download.newrelic.com/infrastructure_agent/linux/apt bionic main" | sudo tee -a /etc/apt/sources.list.d/newrelic-infra.list
    • Install the infrastructure agent (newrelic-infra) and NGINX integration (nri-nginx).
      sudo apt-get update && sudo apt-get install -y newrelic-infra nri-nginx

Configure the NGINX integration

To configure the integration, navigate to the integrations configuration folder, make a copy of the sample configuration file, and then edit the config file in your preferred editor:

cd /etc/newrelic-infra/integrations.d
sudo cp nginx-config.yml.sample nginx-config.yml

The NGINX integration config defaults set the environment as production and set the role as load balancer. You can change these configurations as needed.
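For reference, here’s roughly what the resulting nginx-config.yml looks like (the status_url and label values are examples; check the sample file shipped with your version of the integration for the exact fields):

```yaml
integration_name: com.newrelic.nginx

instances:
  - name: nginx-server-metrics
    command: metrics
    arguments:
      # URL where NGINX exposes its stub_status page (an assumption; match your config)
      status_url: http://127.0.0.1/status
    labels:
      env: production
      role: load-balancer
```

The labels flow through to New Relic as attributes on the collected samples, so they’re worth setting accurately for filtering later.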

When you’re done, exit and save any changes, and restart the Infrastructure agent:

sudo systemctl restart newrelic-infra

View NGINX data in New Relic

To start monitoring NGINX performance, navigate to Infrastructure > Third-party Services > NGINX dashboard.

Depending on your server’s traffic, the dashboard might be a little flat and boring (boring is good though), or you might be in the middle of something interesting.

For example, here’s a typical instance of Connections Accepted per second for a test server under load:

On the Requests per second chart, requests are running a little higher than expected:

Considering the high load, you might want to check for any dropped connections.

You can use the time picker to view long-term trends or fine-grained detail. In this case, check out the 10 minutes around the peak to see what happened with dropped connections.

Now you can see a brief drop, which picked up again not long afterward. You can keep an eye on this in case you need to increase the resources available to your server.

When you’re ready for more, you can query data and understand integration data in more detail. Just remember that the metrics you need for the NGINX integration are attached to the NginxSample event type.
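For example, here’s a simple NRQL query sketch charting request throughput from those samples (it uses the NginxSample event type and the net.requestsPerSecond metric described above):

```sql
SELECT average(net.requestsPerSecond) FROM NginxSample TIMESERIES
```

You can run queries like this in the New Relic query builder, or use them to drive custom dashboards and alerts.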

Next steps

The NGINX integration helps you keep your NGINX servers healthy. By providing early warnings on critical metrics, you can help prevent failures that might result in a poor user experience in your web applications.

By the way, the integration is open source software. That means you can browse its source code and send improvements, or create your own fork and build it.

If you are ready to take control of your NGINX servers, sign up for 100GB of ingest per month and one Full-Stack Observability user license—free forever!

Robert works as a full-stack developer in the financial services industry. He's passionate about sustainable software development, building solid software, and helping to grow teams. He's had 20+ years of development experience across many areas that include scheduling, logistics, telecommunications, manufacturing, health, insurance, and government regulatory and licensing sectors. Outside work, he enjoys electronics, IoT, and helping a number of nonprofit groups.

Interested in writing for New Relic Blog? Send us a pitch!