A while back, New Relic switched our Page Views and AJAX features so you can categorize your data by URL. Part of the motivation for the switch was to support New Relic Browser applications without requiring an APM agent to automatically inject instrumentation into your code. We also wanted to focus Browser on the frontend, where the URL is prominent and server-named transactions might be less familiar.

Continuing this shift, New Relic Browser recently began partitioning AJAX data by HTTP method (GET, POST, etc.) in addition to URL. As you are probably aware, two requests to the same URL don’t necessarily run the same code.

Consider, for example, an application that implements a RESTful interface to manage users. You don’t want the data about AJAX calls that create a user to be lumped in with the AJAX calls that delete a user. Instead, you’d like to monitor the performance of “POST /user/123” and “DELETE /user/123” separately.

Now you can!

[Screenshot showing AJAX data]

How we got there

The instrumentation was already sending the HTTP method for each AJAX call, so the only code change required was to include the HTTP method in the metric name, a simple string change, after which all of the associated timeslice and error data would be grouped in the right place.
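As a rough illustration, the change amounts to something like the sketch below. The "Ajax/..." name format, the AjaxEvent type, and the metricName helper are all hypothetical stand-ins for the agent’s real internals, which this post doesn’t show.

    // Hypothetical sketch: build the timeslice metric name for an AJAX call.
    // The "Ajax/..." format and these names are illustrative, not New Relic's
    // actual agent code or metric naming scheme.
    interface AjaxEvent {
      method: string; // e.g. "POST"
      path: string;   // e.g. "/user/123"
    }

    function metricName(event: AjaxEvent): string {
      // Before: keyed by URL only,        e.g. "Ajax/user/123"
      // After:  HTTP method appended too, e.g. "Ajax/user/123/POST"
      return `Ajax${event.path}/${event.method.toUpperCase()}`;
    }

    console.log(metricName({ method: "delete", path: "/user/123" })); // "Ajax/user/123/DELETE"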

More than just coding

So, if updating this code was easy, you might ask, “Why the blog post?” Well, it turns out that this project shed some light on a common misconception in the software industry.

Writing code can be complicated, and only a limited number of in-demand people can do it, so when a software change is required, the scale of the problem is often equated with the complexity of the code change. That’s not always correct. Sometimes the programming really does dominate the total amount of work involved, but other times a small code change takes a large amount of effort before it delivers the business value you are seeking. That was especially true here, given the vast amounts of AJAX data generated by our customers and our customers’ customers.

As mentioned above, Browser categorizes this data into different groups we call metrics. Partitioning these metrics by HTTP method multiplied the amount of AJAX data flowing through our infrastructure. And to smooth the transition for our users, we also needed to keep writing the legacy, unpartitioned metrics to our databases alongside the new ones.
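In effect, each AJAX call gets recorded under both names during the transition. Here is a minimal sketch of that dual-write, with a hypothetical recordMetric helper and feature flag standing in for our real pipeline and rollout machinery:

    // Hypothetical sketch of dual-writing metrics during the transition.
    // recordMetric() and the feature flag are illustrative stand-ins,
    // not New Relic APIs.
    function recordMetric(name: string, durationMs: number): void {
      console.log(`metric=${name} duration=${durationMs}ms`);
    }

    function recordAjaxMetrics(
      method: string,
      path: string,
      durationMs: number,
      partitionByMethod: boolean // feature flag controlling the rollout
    ): void {
      const legacyName = `Ajax${path}`;     // unpartitioned name keeps existing charts working
      recordMetric(legacyName, durationMs);

      if (partitionByMethod) {
        // The partitioned name multiplies metric volume, which is why the
        // data intake and database teams needed to be in the loop.
        recordMetric(`${legacyName}/${method.toUpperCase()}`, durationMs);
      }
    }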

Multiple teams involved

While New Relic’s data pipeline is very robust, we didn’t want to surprise the teams who run it with a sudden jolt of extra AJAX traffic, so we coordinated the change with our data intake pipeline and database management teams. Once they were in the loop, we also worked with our support staff and marketing to make sure they were up to speed on the new feature. All of this came on top of feature flagging, UI design and implementation, updating documentation, quality assurance testing, and writing this blog post.

So the next time you are estimating the amount of work that goes into creating a feature, whether you are the engineer coding it or a user making a seemingly trivial feature request, keep in mind that there’s a lot more to shipping the change than just the coding. Sometimes the coding is one of the easiest parts.

Dan Rufener is the lead for New Relic’s intelligence platform. He’s based in Portland, Oregon.
