Monitoring a Refactor with Care

Any engineer who’s worked on a codebase for any length of time knows where there are areas ripe for refactoring. That feature that was built as fast as possible and littered with TODOs and FIXMEs. Obese models or controllers. That 400-line method. While it always feels nice to dream of making that code better, large changes can be fraught with the fear that you’ll kill your site in the process. So, how can you be sure you’ve done it right?

The user experience field has taught us that A/B testing can be a great tool for driving product development. The same concept can be applied to your code refactor. Using New Relic’s error tracking, you can test two implementations of code against each other in production and know when your new refactored code doesn’t match the original logic. I recently used this technique to slim down one of our God models and prep the area for extraction into a service. Here’s the how and why.

Trimming the Fat

Over time, Rails models and controllers can quickly become the dumping ground for everything mildly related to their original purpose. It’s as if all your distant relatives moved into your apartment with all their belongings and varied schedules. Good luck keeping things organized and avoiding misunderstandings!

Not unlike many legacy Rails code bases out there, our main app has a few fat models and controllers. I recently located a chunk of logic that I needed to tease out so I could make extensive changes to it. Inspired by Code Climate’s article about refactoring fat models and this great Railscast, I chose to apply the Service Object design pattern to refactor the code and gain a better understanding of what it was doing.

I first gathered all the involved methods into one place (a Ruby module works well for this interim step). Once I could see everything isolated, I extracted the methods into a new Ruby class and began writing extensive tests to document the cases I could find. In this instance I was lucky that the code consisted of one major method that returned a key value, supported by a number of smaller helper methods. This gave me an easy transition into a service class with one public method that returned the value I needed, and thus, one method to test. I also broke each supporting private method into the smallest bites possible.
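
To make the shape concrete, here’s a minimal sketch of what that extraction might look like. The names (ServiceObject, #value, and the helper methods) are illustrative stand-ins of my own, not the actual production code:

# A minimal, hypothetical sketch of the extracted service class
class ServiceObject
  def initialize(record)
    @record = record
  end

  # The one public method: returns the same key value the original
  # model method produced
  def value
    base_amount + adjustment
  end

  private

  # Each former helper becomes a small, focused private method
  def base_amount
    @record.quantity * @record.unit_price
  end

  def adjustment
    @record.discounted? ? -5 : 0
  end
end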

Refactoring confidently requires either complete trust in your test suite or super-thorough integration tests. Classic examples of refactoring rely on a solid test suite to ensure you’ve done a good job. But what if you don’t have that? What if you don’t have the information you need about how your users are using your site in order to write thorough tests?

Dealing with Inadequate Test Coverage

Unfortunately, I wasn’t completely sure our tests covered all possible use cases. I also wanted to be as conservative as possible in my changes, since the model I was working on is central to our business and the logic I extracted was deeply entangled and complex.

Using QA is another way to catch holes in your test suite, but even that has its limitations, since it’s hard to mimic the way users access your site. Rather than rack my brain trying to guess at missing use cases, I used New Relic’s error tracking to report mismatches between the refactored and original code.

First, I wrote an audit method to check the values against each other and report mismatches to New Relic using the agent’s #notice_error method. I wrapped the call in a begin..rescue block to handle any major problems that might arise with my refactor.


def audit_refactored_method(original_method_value)
  # Wrap the service object up in a begin..rescue to catch any unintended errors
  begin
    service_object_instance = ServiceObject.new

    unless service_object_instance.value == original_method_value
      custom_params = {
        # ...any params you might need outside of the request to clue you in to
        # any test case conditions you might have missed
      }

      # notice_error takes custom attributes under the :custom_params option
      NewRelic::Agent.notice_error("Refactored ServiceObject mismatch",
                                   custom_params: custom_params)
    end
  rescue StandardError => e
    # Rescue StandardError (not Exception) so signals and the like still
    # propagate; report the unexpected error itself instead of swallowing it
    NewRelic::Agent.notice_error(e)
  end
end

I then called the audit method at the bottom of the original method, just before returning its value.


def old_ugly_method
  my_value = crazytown_logic # stand-in for the original tangled logic
  audit_refactored_method(my_value)
  my_value
end

Now, I was ready to see if it had all worked.

Let the Monitoring Begin

Once the code was deployed to production, I regularly checked New Relic’s error page and searched for my error. (Protip: creating a custom error class makes it much easier to search for.) When an error appeared, I had all the info I needed to improve the test suite’s coverage. Custom parameters passed to the agent can provide the guts for any missing test cases.
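
For example, the protip might look like this (the class name is a hypothetical choice of mine); searching the error page for RefactorMismatchError then turns up only these reports:

# Hypothetical custom error class that makes mismatch reports easy to find
class RefactorMismatchError < StandardError; end

NewRelic::Agent.notice_error(RefactorMismatchError.new("ServiceObject mismatch"),
                             custom_params: custom_params)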

What’s great about this approach is that I only had to care when an error showed up. Once I was confident the code had been tested for an adequate amount of time, I was able to remove the old code (as well as the audit method) and revel in how much more understandable my refactored code was. Success also meant that I’d created an excellent set of tests.
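
To illustrate how those custom parameters feed back into the suite, a single mismatch report can translate directly into a regression test. Here’s a sketch assuming RSpec and the hypothetical ServiceObject from earlier:

# Hypothetical regression test reconstructed from a mismatch report's
# custom parameters (record attributes plus the legacy value they produced)
RSpec.describe ServiceObject do
  it "matches the legacy value for discounted records" do
    record = Record.new(quantity: 3, unit_price: 10, discounted: true)
    expect(ServiceObject.new(record).value).to eq(25)
  end
end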

But how long should you wait before declaring your refactor a success? That depends on a number of things, and unfortunately there’s no standard rule of thumb, so ultimately you’ll need to use your best judgment. A few questions can help with that gut check:

  • Are there a lot of code paths the user could take?
  • How confident are you in your refactor?
  • What is the level of risk involved if you don’t catch an edge case?
  • What is the application’s pattern of traffic for the area of code you are changing? You can look at your throughput for the transactions involved to get an idea of how heavily it will get exercised in production.

If any of the above questions make you a bit less certain about your refactor, you should lean towards monitoring it over a longer period of time. You’ll thank yourself if any issues crop up later.

Final Thoughts

As they grow, code bases can always benefit from becoming clearer, more testable, modular, and performant. Fear shouldn’t stop you from making improvements and rearranging. Testing, QA, and now A/B testing your code with New Relic can help strengthen your faith that you’re making things better for everyone.

Word of caution: this method of testing your code should be used wisely. Since you’re running both the old and the new implementations for the audited area, you’ll need to be thoughtful about the performance implications. Again, use that judgment part of your brain.
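
If running both implementations on every call is too costly, one option (my own suggestion, not part of the original workflow) is to sample the audit so only a fraction of production traffic pays the double cost:

AUDIT_SAMPLE_RATE = 0.10 # hypothetical knob: audit roughly 10% of calls

def old_ugly_method
  my_value = crazytown_logic # stand-in for the original tangled logic
  audit_refactored_method(my_value) if rand < AUDIT_SAMPLE_RATE
  my_value
end

Keep in mind that sampling also means rare edge cases take longer to surface, so stretch your monitoring window accordingly.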

Katie is a jill of all trades with a passion for making people’s lives better through design and craftsmanship. Pursuing software was the next, seemingly not-so-logical step after working in fields ranging from architecture to heavy construction. She’s currently part of the APM team at New Relic, writing Ruby and JavaScript. When not building something, she’s exploring the forests of the PNW or on the hunt for good food and drink.
