In this post I’m going to dive into one of the most tumultuous topics in software development. Measuring developer productivity is often seen as a fool’s errand, but more and more development teams are beginning to see its value. The hard part, however, comes when moving beyond the concept into setting a particular metric for developer productivity.
To properly measure the productivity of a software development team and its progress on a given project, it’s imperative to move away from industrial-era management processes and lead a team with modern practices. Effective software development teams are inclusive, diverse, and open to change and learning. The ideal is a proactive team that can identify progress and knows where it stands with regard to shared expectations of project completion and effort. This is a joint cause for management, team leadership, and team members. In addition to allowing better measures of software development productivity, this approach actually increases that productivity, helping to create happy end users who get what they need, as well as useful things they didn’t even know they needed.
Before moving forward, it’s important to define some key concepts. When I talk about measuring something, the intent is to establish one or more known metrics that represent one or more known actions over time. This measurement can then show trends over time that can be used to make decisions. Note that measuring does not always coincide with a systemic view of events. Thus, more important than tracking a set of metrics is understanding who, what, when, where, and how success actually takes place.
Success, in this context, identifies the completion of work that directly leads to a specific result around a user or customer interaction. This interaction may be a decreased or increased usage of a particular feature, it may be a more fluid use of the data or interface provided to the user, or it may simply be increased satisfaction with the software leading to increased sales or adoption of the software. Success means creating a desired behavioral change in the customer or user of the software.
I have had the most success, both personally and in teams, when everyone agrees to and focuses on specific understandings of success. First, success must specifically be identified by the customer or end user. Second, the identifying metrics must not override the actual, systemic understanding of how a team works to build software. Metrics can and will be gamed, so they may tell a misleading story unless the people tracking them understand the systemic nature of a team building a product.
The systemic nature of a team
A team does not operate in isolation; it works within several kinds of systems:
Workflow: These include the systems that a team works with, such as source control, frameworks and libraries, ticketing, and other such systems. Workflow may also include physical working conditions, commute, and other aspects of someone’s standard of living, along with outside influences on work, such as staying healthy and focused so that one can work effectively.
Societal: These systems comprise the largely unspoken social contract among people, including the sense of professionalism and other abstract notions within a team. It covers such dynamics as whether a team is inclusive or exclusionary, diverse, and open to change based on social interactions.
Process: Process systems dictate how workflow is done. These aren’t so much software packages like ticketing systems as the meetings, work hours, and related mechanisms that businesses use to structure how employees go about their workday.
All of these systems and others are relationally intertwined under the idea of systems thinking. When understanding a team with the idea of systems thinking, there are several more ideas that help define the team as a system:
- The team is composed of parts. This includes individuals with different characteristics, competencies, and attitudes around problem solving, workflow, societal, and process practices. Other parts of the team can be teams themselves.
- All members of the team are related—directly or indirectly—in some way.
- A team has boundaries within which it operates.
- These boundaries, like the definition of success, are best defined by the end users or customers of the software the team is building.
- A team can be temporal in nature; it may add or lose parts of itself. This could be a change of process, the addition or loss of team members, or simply the beginning or end of a temporary software project.
- A team may be physically located in a shared central office, disparate locations, or a mix of both.
- A team is an autonomous system that must be able to fulfill its mission via internal decision making.
- The team’s processes consist of inputs, transformed to emit outputs.
Only when all these elements are properly understood can an observer start to determine how to measure developer productivity over time. And even then the measurements are not static. Each measurement must be fluid based on the team’s mission, efforts, changing tasks, and new understandings. Metrics taken from an ineffective workflow system are likely to provide inaccurate trends over time and should be removed. Leadership must remain vigilant against bad metrics as they become gamed, inaccurate, or simply outdated.
Process measuring at the beginning
Now that we’ve established some context, let’s look at some real-world ideas to measure actual software development team productivity, specifically team product effort and product delivery.
Each week a team should hold some type of kickoff. It might be a meeting; it might be sitting down and starting on a punch list of work items. It might be an architecture or systems planning session. It might simply be a walk to the local coffee shop to get everyone’s brains working in a team setting. It just needs to be an inclusive event where the team can discuss and detail the coming week’s efforts.
This is also the point to derive productivity measurements from the intent for the week. If a team intends to build features A and B and partially research feature C, then that is the basic metric for the week. If management, leadership, and the development team itself are honest and realistic, the kickoff estimates of what can be accomplished for the week will become more and more accurate.
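One way to make this concrete is to track, week over week, how much of the planned work was actually delivered. The sketch below is a minimal illustration of that idea; the week labels, counts, and field names are hypothetical, and a real team would pull planned and completed items from its own ticketing system.

```python
# Hypothetical weekly records: items planned at kickoff vs. items
# actually completed by week's end. Real data would come from the
# team's ticketing system.
weeks = [
    {"week": "W01", "planned": 3, "completed": 2},
    {"week": "W02", "planned": 3, "completed": 3},
    {"week": "W03", "planned": 4, "completed": 3},
]

def estimate_accuracy(week):
    """Fraction of the week's planned items actually delivered."""
    return week["completed"] / week["planned"]

for week in weeks:
    print(f'{week["week"]}: {estimate_accuracy(week):.0%} of planned work delivered')
```

The point isn’t the number itself but the trend: if the ratio drifts or swings wildly, the kickoff conversations, not the developers, are usually what need attention.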
A key factor is how management behaves around the team’s weekly goals. Management has to create an environment of team accountability and inclusion or the accuracy of the weekly estimates will continually fluctuate and leadership won’t be able to get a clear picture. Don’t inappropriately pressure the team to take on more than it can actually accomplish, or members will become overstressed and actually deliver less in the long run.
The next opportunity to accrue measurements comes at the end of the week. A time should be set aside, similar to the weekly kickoff, to recollect and observe what the team actually accomplished by the end of the week. Were all of the intended goals reached? What roadblocks came up? Was it a recurring roadblock? Was there unnecessary stress or interruption during the week? Did any other unplanned outliers occur? Have those outliers appeared before? Should they be added to the list of things that should be regularly taken into account during the weekly kickoff meetings?
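Spotting recurring roadblocks is easier when each retrospective’s notes are kept in a comparable form. The sketch below shows one simple way to surface repeats across weeks; the roadblock labels are invented for illustration.

```python
# Hypothetical retrospective notes: one list of roadblock labels per week.
from collections import Counter

retrospectives = [
    ["flaky CI", "unclear requirements"],
    ["flaky CI", "staging outage"],
    ["flaky CI", "unclear requirements"],
]

# Count how often each roadblock appears across all weeks.
counts = Counter(r for week in retrospectives for r in week)

# Anything seen in more than one week is a candidate for the list of
# things to account for during the weekly kickoff.
recurring = [name for name, n in counts.items() if n > 1]
print(recurring)
```

Even this trivial tally can change a retrospective from anecdote-trading into a conversation about the one or two problems that keep coming back.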
There’s one more big issue that should be addressed in the weekly retrospective: What does done mean in this context? Specifically, how did the team define done versus how the end user or customer defined it? If they’re not the same, how should that gap be filled? Coming to a shared definition of done is just as important as a shared understanding of measurement and success.
Note: The work week makes sense as a measurement time frame, and having a weekend between a kickoff and a retrospective is disruptive to the flow of a team.
Finally, all of these approaches are based on a proactive vs. reactive product development process. That means the team controls its path toward success. Reactive processes—an unfortunate holdover from the industrial revolution era of management—treat the software team like factory workers responsible only for their individual pieces of work, with no connection to the big picture. Reactive processes leave productivity measurements vulnerable to gaming or other corruption, so management often gets an inaccurate picture of the team’s progress.