New Relic’s Instances Tab

We’ve recently revamped the ‘Instances’ tab, a special feature we provide to customers using Heroku’s New Relic add-on. The instances tab can be found in the navigation under Monitoring > Instances.

Heroku caps the amount of memory application processes can use. Heroku’s docs say that if the app instances running within a dyno use more than 512 MB of memory, they will begin to swap, and eventually the virtual server (i.e. the dyno) will be restarted. When this happens Heroku writes R14 and R15 errors to the application’s log.

New Relic has put several graphs together on the instances tab which can help you tell if your app is using too much memory and being restarted by Heroku.

To test this out I deployed a Rails application to Heroku that intentionally leaks memory on each request. I added this code to my ApplicationController:

[sourcecode language="ruby"]
# in app/controllers/application_controller.rb
class ApplicationController < ActionController::Base
  before_filter do
    $leak ||= ''
    $leak << ("a" * 1.megabytes)
  end
end
[/sourcecode]

Now every time this app serves a request it will add another 1048576 characters to a string held in memory. Since this string is stored in a global variable it will never be GC’d and the memory will never be reclaimed.
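As a back-of-envelope check (my own arithmetic, not from the original post), a 1 MB-per-request leak means the app should reach the 512 MB limit mentioned above after roughly 512 requests:

```ruby
leak_per_request_bytes = 1_048_576           # "a" * 1.megabytes
limit_bytes = 512 * 1_048_576                # Heroku's 512 MB memory cap
requests_to_limit = limit_bytes / leak_per_request_bytes
puts requests_to_limit                       # => 512
```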

I enabled New Relic’s uptime pinger, which hits the site twice a minute to see if it’s responding. You can see the average memory per process increasing slowly.

Average Memory Usage per Instance

Then I started hammering the site with requests. It’s easy to do this with a little Taco Bell programming:

[sourcecode language="bash"]
while [ true ]; do
  # placeholder URL -- substitute your own app's address
  echo "https://your-app.herokuapp.com/"
done | xargs -P 5 -n 1 curl
[/sourcecode]

This will send a steady load of five concurrent requests to the site (you can control the concurrency with xargs’ -P flag).

You can see resident memory usage spiking to ~450 MB, which is close to the Heroku dyno’s 512 MB limit. The server then starts swapping.

Average Memory Usage Per Instance 2

At this point the heroku logs output is filling up with messages that look like this:

[sourcecode language="text"]
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=2260M(441.4%)
heroku[web.1]: Error R14 (Memory quota exceeded)
heroku[web.1]: Process running mem=2567M(501.4%)
heroku[web.1]: Error R15 (Memory quota vastly exceeded)
heroku[web.1]: Stopping process with SIGKILL
heroku[web.1]: State changed from up to crashed
heroku[web.1]: Process exited with status 137
heroku[web.1]: State changed from crashed to starting
heroku[web.1]: Starting process with command `bundle exec unicorn -p 37621 -c ./config/unicorn.conf.rb`
heroku[web.1]: State changed from starting to up
app[web.1]: ** [NewRelic] INFO : Starting the New Relic Agent.
[/sourcecode]

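The exit status 137 in that log follows the standard Unix convention of 128 plus the terminating signal’s number; SIGKILL is signal 9, which you can confirm from Ruby:

```ruby
# A process killed by a signal conventionally exits with 128 + signal number,
# so Heroku's SIGKILL shows up as exit status 137.
puts 128 + Signal.list["KILL"]   # => 137
```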
Usually Heroku is nice enough to restart the dyno for you (which restarts all processes freeing the leaked memory). I did find some cases where the app crashed and wasn’t cleanly restarted, so it’s good to enable Availability Monitoring so you’ll know if your dynos get into a bad state.
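A minimal sketch of the kind of check an availability monitor performs, using Ruby’s standard Net::HTTP (the helper name and URL here are illustrative, not part of New Relic’s product):

```ruby
require "net/http"
require "uri"

# Returns true if the app responds with a 2xx status, false on
# error responses or connection failures.
def up?(url)
  response = Net::HTTP.get_response(URI(url))
  response.is_a?(Net::HTTPSuccess)
rescue StandardError
  false
end

# Example (placeholder URL):
# up?("https://your-app.herokuapp.com/")
```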

We hope the instances tab helps you keep misbehaving dynos under control.

Sam Goldstein is engineering manager, agents, for New Relic. He manages the Browser Application Monitoring team. He's been writing Ruby for almost a decade and is the author of several semi-popular gems, including diffy and timetrap.
