The main benefit of using RPM when testing application performance is clear: you get instant insight into the throughput and response time of your application while under load, and you have all the tools at your disposal for analyzing and optimizing your application based on real-time data.
What you may not know is that RPM includes a few extra tools that can greatly help with performance testing.
You may be familiar with Deployments in RPM if you’ve already integrated the recipe into your Capfile release scripts. The Deployments feature lets you see on graphs when a new version of an application is deployed to a server.
Deployments in RPM are convenient time markers that you can use to point out significant events in your application, such as the start and end of every test. For example, you could insert Deployments in places to identify the peak workload during the test.
Here’s an example showing a deployment with a description of the test parameters being run:
In the charts, the deployment shows up as a vertical marker with a short annotation:
Deployment markers provide you with a good frame of reference to correlate the RPM data with test runs. They also give you a quick reference for the performance profile before and after the marker.
You can create deployments in two ways: with Capistrano or with a simple command-line script.
Using the deployments script
The script usage varies slightly between Java and Ruby, but the Ruby version would look like this:
newrelic_cmd -a "Staging" -r "Build #43223" -c "Increased cache size by 10mb"
Staging in this case is the app_name associated with the performance testing environment.
Refer to the documentation for the available options and Java usage.
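To make test runs easy to spot on the charts, you can bracket each run with a marker at the start and the end. Here’s a minimal sketch of that idea; it only echoes the `newrelic_cmd` invocations (using the `-a`, `-r`, and `-c` options shown above) so you can see the shape of the calls, and `run_marker` is a hypothetical helper name, not part of RPM:

```shell
#!/bin/sh
# Sketch: bracket a load-test run with deployment markers.
# "Staging" is the app_name of the performance-testing environment.
APP="Staging"

run_marker() {
  # Dry run: print the command instead of executing it.
  # Once the newrelic_cmd script is installed, drop the `echo`.
  echo newrelic_cmd -a "$APP" -r "$1" -c "$2"
}

run_marker "test-start" "Begin load test: 25x baseline"
# ... invoke your workload generator here ...
run_marker "test-end" "End load test"
```

With the `echo` removed, the two markers appear as vertical lines on either side of the test window in the charts.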
If you’ve integrated deployment support into your Capistrano Capfile then you can do the same thing like this:
cap newrelic:notice_deployment \
  "-Snewrelic_desc=Start the test with load=25x" \
  "-Snewrelic_revision=Alpha 0" \
  "-Snewrelic_changelog=Threads: 100, Caching enabled, Joe's patches included"
We highly recommend using Notes, both during testing and in production monitoring. Notes let you take a snapshot of a graph and annotate it, so you can easily refer back to it later or share it with other users. While some users capture screenshots and e-mail them to colleagues, RPM Notes are much more effective: you capture the live data in a time window, and you can combine multiple graphs into a single report with links back to their original context. You can refer to the notes when doing analysis after the fact, and team members can collaborate by adding their thoughts right into the note itself.
Not familiar with Notes? It’s easy to get started. Just hit the “Add Note” link on any of the graphs in RPM.
Some of the most useful tools for performance testing are the Scalability Analysis charts. Unlike the usual time series charts in RPM and other tools, these charts plot response times against throughput, not time. This is especially useful in the context of performance testing and capacity planning since one of the primary goals is to identify at what workload the system becomes saturated.
Often, when developing test reports, the analyst has to correlate throughput and response time indirectly, after the fact: the two series are generated side by side in a spreadsheet, correlated against a time series, and then plotted with a scatter plot. The result is a chart that shows how CPU and response times vary with load.
If you use RPM, you can run a test by generating a slow, steadily increasing workload. Commercial load testing tools generally have this feature built in; otherwise you’ll need to re-run your workload generation tool in steps. But don’t worry about timing it perfectly or about gaps in the workload—the scalability charts will factor them out. What you get at the end of the run is a chart with a nice spectrum of data points showing your DB request latency and front-end request times as a function of load.
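If your tool doesn’t ramp load on its own, stepping it yourself is straightforward. Here’s a minimal sketch of that loop; `run_load` is a hypothetical placeholder that just prints each step, standing in for whatever workload generator you actually use:

```shell
#!/bin/sh
# Sketch: step a load generator through increasing concurrency
# levels so the scalability charts get a full spectrum of points.
run_load() {
  # Placeholder: print the step instead of generating real load.
  echo "running 5-minute step at concurrency $1"
}

for users in 10 25 50 100 200; do
  run_load "$users"
  # Gaps between steps are fine; the charts factor them out.
done
```

Each step contributes a cluster of points at a different throughput level, which is exactly what the scalability charts need.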
You can see patterns such as a linear rise in response time, indicating your system is approaching saturation, or a cluster of points on the right showing where you’ve reached your maximum throughput.
Here’s a chart showing the performance profile the day before and the day after an optimization. The cluster on the right shows improved response times at a given load. The shift shows you the magnitude of the performance improvement and its increasing effect as the load grows.
- Deployment markers are a great way to highlight events during testing. They make it easier to correlate the RPM data with test runs and they also give you a quick reference for the performance profile before and after the marker.
- The Notes feature is an easy and effective way of collaborating with your colleagues. In both testing and live production environments, Notes provide much more information than screenshots and deeper context for every team member.
- Scalability graphs present a more elegant alternative to using spreadsheets for correlating throughput and response time. At the end of the day you get more informative charts that do a much better job of plotting key data points such as DB latency against front-end request times as a function of load.
These are just a few of the ways you can extend your use of RPM during testing. We’d love to hear about any additional techniques that you may be using or any feedback about the features discussed here.