Testing the New Relic Browser Agent

A little less than a year ago, we embarked on a bold plan to extend the foothold we had with Real User Monitoring (RUM) to monitor even more things! We weren’t quite sure yet what those things would be, but one thing we did know was that we needed a better testing and release strategy.

At the time, we had no continuous integration testing, and our releases were tied to releases of our website. It was obvious that our first goals were to decouple our releases from the website and to move to a build, test, release process. Decoupling was the easy part: we moved the scripts from being served directly by our website to being hosted on AWS S3 behind a CDN under the js-agent.newrelic.com hostname. The release process was a larger challenge that is still an ongoing development focus for us.

We wanted to rearchitect much of the existing scripts so they could be more extensible for future features. But last year, when we were starting the project, we were already instrumenting over two million page loads each minute! We had to be sure that any new release would not break existing functionality. This meant our first task was inventorying all the features and APIs we supported. We created a wiki page with all the details, and each time we thought we were done, we found new permutations of how the existing browser agent worked.

After figuring out the functional surface of our browser agent, we needed a mechanism to test each function. The first question to answer was: how can we easily test JavaScript code that runs in a browser? The simplest answer is to use a controllable, headless browser. We settled on PhantomJS, though SlimerJS was another option, as was CasperJS layered on top of either. Next we needed a way to control the browser from our tests. Being JavaScript developers, we settled on Node.js for writing the tests and WD.js for controlling the browser over the WebDriver interface.
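To give a feel for this setup, here is a minimal sketch of driving PhantomJS from Node with WD.js. It assumes `npm install wd`, PhantomJS already running as a WebDriver server (`phantomjs --webdriver=4444`), and a locally served test page; the URL and the `window.NREUM` check are illustrative, not our actual test code.

```javascript
// Minimal WD.js sketch: connect to a PhantomJS WebDriver endpoint,
// load an instrumented page, and inspect a global the agent sets.
var wd = require('wd');
var browser = wd.remote('localhost', 4444);

browser.init({ browserName: 'phantomjs' }, function (err) {
  if (err) throw err;
  browser.get('http://localhost:3000/instrumented-page.html', function (err) {
    if (err) throw err;
    // Evaluate an expression in the page to assert on in the test.
    browser.eval('!!window.NREUM', function (err, loaded) {
      console.log('agent loaded:', loaded);
      browser.quit();
    });
  });
});
```

Because WD.js speaks plain WebDriver over HTTP, the same script can later point at any other WebDriver-compatible browser without changes to the test logic.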

Next on our plate was choosing a test runner. None of us on the team are fans of BDD-style testing; we wanted a lean test runner and library with simple assertions and standards-compliant, machine-readable output. This led us to tape as our test framework, and we have been quite pleased with it. Our tests run in Jenkins, and the TAP Jenkins plugin reads our test output perfectly.
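Part of what makes this pipeline work is that TAP (the Test Anything Protocol) is deliberately simple plain text, so tools like the Jenkins plugin can parse it line by line. As a rough sketch (this toy reporter is not tape itself, which does far more, and the test names here are made up), the output stream looks like this:

```javascript
// Toy TAP reporter: each assertion becomes one machine-readable
// "ok" / "not ok" line, followed by a "1..N" plan line.
function tapReport(results) {
  var lines = ['TAP version 13'];
  results.forEach(function (r, i) {
    lines.push((r.pass ? 'ok ' : 'not ok ') + (i + 1) + ' ' + r.name);
  });
  lines.push('1..' + results.length);
  return lines.join('\n');
}

var output = tapReport([
  { name: 'agent reports page load timing', pass: true },
  { name: 'agent wraps XHR without breaking it', pass: true }
]);
console.log(output);
```

Any runner that emits this format, tape included, plugs into any TAP consumer without custom glue.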

This setup worked great for us in the beginning, but we also needed to verify that all our functionality worked as expected in real browsers. For that, we turned to Sauce Labs, which lets us run browsers in its hosted cloud platform using the same WebDriver interface we were already using with PhantomJS. We currently test against 21 different combinations of browser, OS, and platform.
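Switching a test from local PhantomJS to a real browser in the Sauce Labs cloud is mostly a matter of changing the endpoint and the desired capabilities. A hedged sketch, assuming `SAUCE_USERNAME` and `SAUCE_ACCESS_KEY` environment variables and one hypothetical browser/OS combination:

```javascript
// Same WD.js client, different WebDriver endpoint: Sauce Labs'
// hosted grid instead of a local PhantomJS process.
var wd = require('wd');
var browser = wd.remote(
  'ondemand.saucelabs.com', 80,
  process.env.SAUCE_USERNAME,
  process.env.SAUCE_ACCESS_KEY
);

// One browser/OS/platform combination; the capability values here
// are illustrative, not our actual test matrix.
browser.init({
  browserName: 'internet explorer',
  version: '8',
  platform: 'Windows XP',
  name: 'browser agent end-to-end tests'
}, function (err) {
  if (err) throw err;
  // ...drive the browser exactly as in the PhantomJS example.
});
```

Because the rest of the test code never changes, the browser matrix can grow just by adding capability entries.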

Tying everything together, we use Browserify to modularize our code, Grunt to build, minify, and run tests, and Jenkins to do all of this each time we merge changes to master. Each test run currently includes up to 99 end-to-end tests per browser (depending on each browser’s features) and per agent configuration, of which we have three, for a total of 2,855 individual tests. That total doesn’t include our unit tests, which also run per browser and per agent configuration and consist of up to 177 individual tests each. Once all the tests pass, a separate Jenkins job deploys our changes to js-agent.newrelic.com. With this system, we have confidence that each release will be better than the last. Each time we come across a corner case (hello, Prototype and IE!), our test suite grows and we become more assured of the stability of each release.
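The Browserify-plus-Grunt half of that pipeline can be sketched as a small Gruntfile. The task names, plugin choices (`grunt-browserify`, `grunt-contrib-uglify`), and file paths below are assumptions for illustration, not our actual build configuration:

```javascript
// Hypothetical Gruntfile.js: bundle the modular source with
// Browserify, then minify the bundle for release.
module.exports = function (grunt) {
  grunt.initConfig({
    browserify: {
      agent: { src: 'index.js', dest: 'build/nr-agent.js' }
    },
    uglify: {
      agent: { src: 'build/nr-agent.js', dest: 'build/nr-agent.min.js' }
    }
  });

  grunt.loadNpmTasks('grunt-browserify');
  grunt.loadNpmTasks('grunt-contrib-uglify');

  // Jenkins runs the default task on every merge to master.
  grunt.registerTask('default', ['browserify', 'uglify']);
};
```

Keeping the build definition in the repo means Jenkins runs exactly the same steps a developer runs locally with `grunt`.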

