Impact: a better way to measure codebase change

December 01, 2016

In a story from the early days at Apple, Andy Hertzfeld highlights why managing by Lines of Code is sort of ridiculous:

Bill Atkinson … had completely rewritten the region engine using a simpler, more general algorithm which, after some tweaking, made region operations almost six times faster. As a by-product, the rewrite also saved around 2,000 lines of code.

He was just putting the finishing touches on the optimization when it was time to fill out the management form for the first time. When he got to the lines of code part, he thought about it for a second, and then wrote in the number: -2000.

Moving Beyond Counting Lines of Code

Contributions in software take on different forms, and distinguishing them requires greater care than weighing raw lines of code.

To break this down a little further, consider the following example:

code quality image

One engineer makes a contribution of 100 new lines of code to a single file.

Compare that to another engineer’s contribution, which touches three files, at multiple insertion points, where they add 16 lines, while removing 24 lines.

The significance of each contribution can’t be boiled down to just the amount of code being checked in. Even without knowing specifics, the second set of changes was likely more difficult to implement, given that it involved several spot-edits to old code.

The engineer’s path probably looked something like this:

  1. Read the old code
  2. Invest time in understanding the original intent of that code
  3. Check whether intended changes might create collateral damage
  4. Make the changes
  5. Remove and clean up any irrelevant code
  6. Sanity check the approach afterwards

Compared to greenfield development, which skips over half of these steps, the second contribution carries a much higher cognitive load. This also demonstrates why simplistic metrics fail to describe the work involved in software engineering.

“Measuring programming progress by lines of code is like measuring aircraft building progress by weight.” — Bill Gates

Introducing Impact

Instead, consider a way to measure the significance of code changes that respects the nuances of software engineering. It’s called Impact.

Impact attempts to answer the question: “Roughly how much cognitive load did the engineer carry when implementing these changes?”

Impact takes the following into account:

  1. The amount of code in the change
  2. What percentage of the work is edits to old code
  3. The surface area of the change (think ‘number of edit locations’)
  4. The number of files affected
  5. The severity of changes when old code is modified
  6. How this change compares to others from the project history

In the example from earlier, the second contribution is more impactful: the change modifies previous work, the edits happen in 4 different locations, and 3 different files are affected.

Even without knowing the severity of changes or comparing to historical changes, it’s probably safe to assume that the second contribution was more ‘expensive,’ and therefore carries a higher Impact score.
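The factors above could be combined in many ways, and the actual Impact formula isn’t published. As a minimal sketch, assuming hypothetical weights (editing old code treated as roughly 3x the cost of new code, plus a fixed cost per edit location and per file touched), the idea might look like:

```python
# Toy sketch of an Impact-style score. The real formula is not published;
# every weight below is an assumption chosen purely for illustration.

EDIT_WEIGHT = 3.0       # assumed: editing old code costs ~3x writing new code
LOCATION_WEIGHT = 10.0  # assumed: fixed cost per distinct edit location
FILE_WEIGHT = 10.0      # assumed: fixed cost per file touched

def impact_score(new_lines, edited_lines, edit_locations, files_touched,
                 severity=1.0, history_baseline=1.0):
    """Combine the six factors above into a single number.

    `severity` scales the cost of edits to old code, and `history_baseline`
    normalizes against typical changes in the project's history (both
    defaulted to 1.0 here, since the example specifies neither).
    """
    raw = (new_lines
           + EDIT_WEIGHT * severity * edited_lines
           + LOCATION_WEIGHT * edit_locations
           + FILE_WEIGHT * files_touched)
    return raw / history_baseline

# The two contributions from the earlier example:
greenfield = impact_score(new_lines=100, edited_lines=0,
                          edit_locations=1, files_touched=1)
spot_edits = impact_score(new_lines=16, edited_lines=24,
                          edit_locations=4, files_touched=3)
# Under these assumed weights, the smaller spot-edit change scores higher,
# matching the intuition described above.
```

The point of the sketch is not the particular weights, but the shape of the calculation: once edits to old code and surface area carry real cost, a small, surgical change can outscore a large block of brand-new code.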

Three ways to use Impact

Here are a few ways engineering teams use impact to gauge contributions and recognize engineering performance:

1. Did our staffing changes make a difference?

Throwing more bodies at software development does not guarantee moving faster. As Fred Brooks pointed out, “adding manpower to a late software project makes it later.” Impact allows teams to understand and communicate about how staffing changes affect progress.

Over the course of a year, one company scaled and substantially staffed up their engineering team. Using impact, this leader had concrete data for the board room:

hiring chart

“We quadrupled our staff this year, and were under pressure to show that this was a good move. By comparing impact before and after, we were able to demonstrate that staffing up has helped us ship features more than 4x faster. Not only was staffing up a good idea, it relieved the pressure on our existing team and helped them to be even more effective.” —Director of Engineering

2. Who are the silent heroes?

It’s not too difficult to recognize and congratulate highly visible work, like when someone ships a shiny new feature. But managers often have a tough time noticing less glamorous work, like paying down technical debt.

By paying attention to impact, unsung heroes get recognized:

“If we hadn’t been able to see it, we wouldn’t have realized how much Aaron had been contributing behind the scenes. Last month he did over half of all the work in one repo and really carried this re-implementation.” — CTO

3. Does yesterday’s work match today’s standup?

People don’t always like to raise their hand if they’re stuck. Sometimes it’s a matter of pride, and other times it feels like a solution to a tough problem is right around the corner.

When the only feedback loop is the daily standup, this can be particularly challenging. It forces team leads to rely on self-reporting to gauge whether their team is moving the ball forward.

With the addition of hard data, particularly data that’s aware enough to understand when someone is working on a hard problem, managers can know where to focus:

yesterday's impact chart

Does movement in the codebase match the narrative from today’s standup? If there’s a large delta between the two, the manager knows where to investigate.

“By keeping tabs on daily impact, I have a much better idea of who is on a roll, and who might need some extra help.” — Engineering Manager