Mikey Butler, New Relic’s senior vice president of engineering, says there are two parts to his job: the products we create and the way in which we create them. We sit down with him to talk about how our engineering processes are evolving. (Part 1 of this series examined Mikey’s journey to New Relic and the importance of engineering management. Part 2 looked at the challenges of scalability.)

New Relic: Let’s talk about how we create our products.

Mikey: We are actually changing how we do engineering. What we did in the past was what I call agile creation but waterfall deployment: we would develop code very quickly, but we would store it up for months at a time and then release it all at once.

We’re pivoting to a much simpler model of software development, in which a significant subset of the engineering team focuses on one project, gets it done, and then cycles to the next project, rather than spreading people across several projects at once.

What we’re calling our Next Generation Engineering Process is designed to increase the team’s productivity and the probability that things get done properly and on time. First, it keeps people from being distracted by jumping in and out of a number of different projects. It also avoids the situation in which, a third of the way in, you have three projects that are each one-third done instead of one project completely done and serving customers.

New Relic: How does DevOps play a role in this?

Mikey: The next-generation, service-oriented, highly distributed architectures that help define modern Web offerings are much more difficult to fit into the traditional data center ops model. They’re much more congruent with next-generation DevOps models. And we were young enough as a company that we could start with DevOps.

Our structure is DevOps all the way.

New Relic: Why is that so important going forward?

Mikey: The more agile your operations are, the better they fit with the way we see the future of computing.

The traditional IT ops model tends to be based on shrink-wrapped or periodically released software that typically resides in a data center and is low-touch and highly static, with tightly controlled variables so that nothing changes. With the fluidity of the Web and Web apps, however, you don’t really have that option. You can’t chase change away.

I think that the software of tomorrow is going to be updated frequently, it’s going to be service-based and connected by messaging mechanisms, and it’ll be highly distributed, both in the cloud and in the data center.

That means the ability to control it the way you can with traditional on-premises data center models goes away. And the promise of stability that came from the traditional data center model breaks down if the control isn’t there.

With the distribution and the scaling requirements implied by distributed service-oriented architectures, there is no control. So you have to be able to achieve stability in spite of a lack of control. And that’s where DevOps becomes critical.



Fredric Paul (aka The Freditor) is Editor in Chief for New Relic. He's an award-winning writer, editor, and content strategist who has held senior editorial positions at ReadWrite, AllBusiness.com, InformationWeek, CNET, Electronic Entertainment, PC World, and PC|Computing. His writing has appeared in MIT Technology Review, Omni, Conde Nast Traveler, and Newsweek, among other places.
