Performance Tuning with Flood IO and New Relic (Part 2): Slow transaction and database traces

This is the second in a series of three guest posts demonstrating the basic principles of performance tuning from Flood IO co-founder and CTO Tim Koopmans. 

In the first post of this series, we introduced the basic concepts of performance tuning and demonstrated how you can simulate load using Flood IO and analyze performance using New Relic.

This post shows how to identify and tune problems in our application under test using slow transaction and database traces. (The third post will address finding additional functional errors.)

Slow transaction traces

Tracking the top 5 Web transactions by percent of wall clock time in New Relic helps prioritize points of interest (POI). AsyncController and CachingController have the highest percentage of clock time, so we focused on those first.


The same transactions are consistently slow (10+ seconds) in Flood IO.


New Relic automatically captured slow transaction traces for these transactions:


The AsyncController trace gave us call count and duration, which helped further isolate a POI.


Trace details show that 99.82% of the time is spent in application code within the ERB template of the index view.

Ouch! Looking at the application code revealed a blocking 10-second call to the send_email_to_user method. Moving this long-running task to an asynchronous background process should improve response times. The queue_classic gem provided a simple interface to a message queue: since we were already running PostgreSQL, it required no additional infrastructure. We added the queue and repeated the baseline, and our prediction was sound. You can see that async no longer sits in the top 5 slowest transactions in Flood IO, and New Relic confirms the result. Problem solved!
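The shape of the fix can be sketched in plain Ruby. The real change used queue_classic (roughly `QC.enqueue("Emailer.send_email_to_user", user_id)` plus a worker process); the controller, method, and worker names below are illustrative stand-ins, and an in-process `Queue` stands in for the PostgreSQL-backed one:

```ruby
# In-process, thread-safe FIFO standing in for the queue_classic table.
JOB_QUEUE = Queue.new

# In the real app this was the slow, blocking ~10s call.
def send_email_to_user(user_id)
  "sent-#{user_id}"
end

# Before: the request waits on the slow call before rendering.
def index_sync(user_id)
  send_email_to_user(user_id)
  :rendered
end

# After: enqueue and render immediately; the slow work leaves the request path.
def index_async(user_id)
  JOB_QUEUE << user_id
  :rendered
end

# Background worker: drains one job from the queue, out of band.
def work_one
  send_email_to_user(JOB_QUEUE.pop)
end

index_async(42)   # returns :rendered without doing the slow work
work_one          # => "sent-42"
```

The response time of the action now no longer depends on how long the email takes to send; the worker absorbs that cost asynchronously.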

Database traces


big_list has become the slowest transaction in Flood IO, and slow transaction traces in New Relic show the same result.


Trace details show that 91% of the time is spent in the find_by_sql method of the WebSiteController. Drilling down into the SQL details, the query plan showed 100K rows returned:


The business logic only requires a record count and the first record in the collection, so we can quickly test the hypothesis with a leaner solution.
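A leaner version would ask the database for exactly those two facts, as `Model.count` (a SELECT COUNT(*)) and `Model.first` (a SELECT with LIMIT 1) do in ActiveRecord, instead of pulling back and instantiating all 100K rows. The plain-Ruby sketch below (table, method, and counter names are hypothetical) counts how many row objects the application materializes under each approach:

```ruby
# $materialized counts row objects built by the app, standing in for
# ActiveRecord model instantiation.
$materialized = 0

TABLE = (1..100_000)  # simulated 100K-row table

def build_row(i)
  $materialized += 1
  { id: i, name: "record #{i}" }
end

# Before: find_by_sql-style, materialize every row to use just two facts.
def big_list_before
  rows = TABLE.map { |i| build_row(i) }
  [rows.size, rows.first]
end

# After: compute the count without building rows, and build only one row.
def big_list_after
  [TABLE.size, build_row(TABLE.first)]
end

big_list_before
$materialized   # => 100_000

$materialized = 0
big_list_after
$materialized   # => 1
```

Both versions return the same count and first record; only the amount of data dragged through the application changes.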

Problem solved! We modified the query and repeated the baseline. Our prediction was sound: big_list is no longer in the top 5 slowest transactions in Flood IO, and New Relic confirms the same.

We haven’t finished with the database yet, though. The next step was to sort by highest throughput and look at the top 5 database operations by wall clock time.


The find method of the Icon model was called frequently and was the slowest in terms of wall clock time. Trace details also showed the index method of the LoopController makes this call.


We hypothesized that eager loading the associations (loading the associated records of the objects returned by Model.find using as few queries as possible) would avoid an N + 1 query problem. We modified the query and repeated the baseline.
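The N + 1 pattern and its eager-loaded counterpart can be sketched in plain Ruby. In ActiveRecord the fix is typically an `includes` on the association (something like `Loop.includes(:icons)`); the classes and SQL below are stand-ins that just record how many queries each rendering strategy issues:

```ruby
# Log of every SQL statement "issued" by the sketch.
QUERIES = []

def run_query(sql)
  QUERIES << sql
end

# Hypothetical Loop-has-many-Icons data, keyed by loop id.
ICONS_BY_LOOP = { 1 => %w[a b], 2 => %w[c], 3 => %w[d e f] }

# N + 1: one query for the loops, then one Icon lookup per loop.
def render_index_n_plus_one
  run_query("SELECT * FROM loops")
  ICONS_BY_LOOP.keys.map do |loop_id|
    run_query("SELECT * FROM icons WHERE loop_id = #{loop_id}")
    ICONS_BY_LOOP[loop_id]
  end
end

# Eager loading: fetch all icons for the page in a single extra query.
def render_index_eager
  run_query("SELECT * FROM loops")
  run_query("SELECT * FROM icons WHERE loop_id IN (#{ICONS_BY_LOOP.keys.join(', ')})")
  ICONS_BY_LOOP.values
end

render_index_n_plus_one
QUERIES.size   # => 4 (1 + N, growing with the number of loops)
QUERIES.clear

render_index_eager
QUERIES.size   # => 2, regardless of N
```

The query count for the eager version stays constant as the collection grows, which is exactly the drop in database throughput we expected to see.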

Our prediction was sound, and New Relic shows much saner throughput in the database:


Next steps

The first post in this series introduced the basic concepts of performance tuning and how you can simulate load using Flood IO and analyze performance using New Relic. This second post revealed how we tuned additional performance problems present in the test. Finally, the last post in this series will show you how to find and tune the remaining performance problems present in our application under test.

Tim Koopmans is introducing more and more people to the art of performance testing and debunking myths along the way. If you're already monitoring your application's performance, Tim will show you practical ways to correlate the myriad metrics available into meaningful information. Tim is also the author of a popular Ruby-based DSL for JMeter and CTO of Flood IO.
