A Data-Driven Approach to Leading Software Teams

July 14, 2017

This year’s GitHub Satellite in London focused on the way teams work and on how to improve workflows with the right tools for your team. GitPrime was asked to present along with speakers from Codacy, Rollbar, CircleCI, GitHub, and more to educate, inspire, and explore industry best practices.

GitPrime President John Witchel shared a data-driven approach to leading software teams, and how to effectively use data from git repositories to enable software developer productivity.


John walked the audience through a series of use cases to demonstrate how productivity can be measured in software engineering, and why KPIs matter to developers.

Here is a video of the talk:

Below is a lightly edited transcript:

My name is John Witchel. I'm the President and CEO of GitPrime. Thank you very much for coming.

I’d like to begin by asking a question: is it possible to get a 20% increase in productivity with an existing team? That's a big number.

I’m going to spend a little bit of time today showing you how that can happen.

GitPrime is in the business of productivity metrics for engineering teams, for the purpose of creating a more productive engineering team. Fundamentally, at GitPrime we’re dealing with “the black box of engineering”. The black box of engineering is an expensive and largely unsolved problem today. At the heart of that problem is the fact that it’s surprisingly difficult to track developers.

Ironic, right? Developers are the ones that help track virtually every other group in the enterprise today. We're the ones that wrote the software that became Salesforce. We did it for marketing, accounting, and inventory. Yet we're the last group to be using trackable metrics. That's a bad thing. I think that it's very, very difficult to be your best if you're not being measured in some way.

If you're trying to get the most out of your engineering team, the right way to do that is to measure. The following are a few things that GitPrime measures, which are also some of the questions that we ask our customers. Ask these of your own team:

  • Which teams in your group, in your enterprise, made the biggest impact last month? By what metric?
  • Was this week more productive than last week? According to whom and by what metric?
  • How much of last month's work went to paying down technical debt? As a percentage of your total burn? How do you know?
  • How do you show others that your team is working hard? What data supports this?

We've all been on teams where everybody is busting their ass. You can't seem to get credit for all the work that you're doing, but if you could display it on a board and show people outside of engineering, it might change the conversation.

Rather than continuing to talk conceptually about how GitPrime does that, I want to dive into a series of animations from our product. I'll talk through how we do what we do, and how to use this tool.

Understand your work behavior

productivity analytics for software teams

This is the entry point into our product: the commit workflow report. What you see here is a consolidated view of all of your repos. As a quick note for context: GitPrime takes all the repos in your enterprise, reads them, analyzes them, generates a huge set of metadata, and then produces reports. Git is the primary source of all of our information.
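To make the "reads them, analyzes them, generates metadata" step concrete, here is a minimal sketch of how commit sizes per author could be pulled out of the output of `git log --numstat --format=@%ae`. This is an illustrative reconstruction, not GitPrime's actual pipeline:

```python
from collections import defaultdict

def commit_sizes_by_author(numstat_log):
    """Parse `git log --numstat --format=@%ae` output into a rough cut of
    commit sizes (lines added + deleted) per author.

    Illustrative sketch only; GitPrime's real analysis is proprietary.
    """
    sizes = defaultdict(list)   # author email -> list of commit sizes
    author, size = None, 0
    for line in numstat_log.splitlines():
        if line.startswith("@"):            # new commit: flush the previous one
            if author is not None:
                sizes[author].append(size)
            author, size = line[1:], 0
        elif line.strip():                  # numstat line: "added\tdeleted\tpath"
            parts = line.split("\t")
            added, deleted = parts[0], parts[1]
            if added != "-":                # "-" marks binary files
                size += int(added) + int(deleted)
    if author is not None:
        sizes[author].append(size)
    return dict(sizes)
```

From there, plotting how big and how regular each author's commits are gives you the kind of view described below.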

What we’re looking at is essentially a rough cut of how big each individual commit is by these team members. Now, I think that everybody in this room would agree that counting lines of code is a terrible idea. Right?

For someone who knows how to read this report, seeing how big and how regular a developer’s commits are can be a good indicator of whether they are succeeding, struggling, in need of a little help, or (most commonly) should be left alone. If we look at Allen (at the top), we see a person who is lightly committing. Compared to his other two teammates, he looks like he's a problem. He's actually not a problem. Allen is working on a very difficult engineering problem - he's tuning some SQL. As his manager, I don't expect more than one or two commits over a couple of weeks, given the difficult nature of this problem. Nathan here is checking in tons of stuff. He's doing an early-stage part of the project. And Peter - tons and tons of merges. When I “turn off” merge commits, he's still in pretty good shape.

By using a simple search, I can check whether or not the developers who are supposed to be doing work on my ticket are actually doing that, without interrupting them. That's a very special thing; I think we all know that an interrupted engineer is an engineer who just dropped to almost zero productivity. How many times have we all been there? I've got 15 variables in my head. Then, somebody goes, "Hey, man. Can I just get a second of your time?" You're like, "Actually, you can have 20 minutes, because I've totally forgotten what I was thinking about.” If we can stop that interruptive behavior regarding statuses, we boost productivity. If we know to step in when someone's struggling, we boost productivity.

Compare this week to your typical

GitPrime has a whole set of metrics, some of which will feel familiar to you and some of which are new. In this report, we can rank-order people by various data points. Most importantly, we can mark where you are relative to your rolling average, or your “typical”.

productivity analytics for software teams

Churn. Let's take this column right here - third from the right. This is your churn rate. Now, everybody churns. Churning is when you write a line of code, and then you rewrite that line of code. We draw a line in the sand that basically says if you churn code (if you write a line of code and rewrite that same line of code within three weeks), that's fine. You just don't get double credit for it. If you churn too hard, it may be an indication that there's a problem.
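The three-week rule described above can be sketched in a few lines. The edit representation here (timestamped edits to individual lines) is a hypothetical simplification for illustration:

```python
from datetime import datetime, timedelta

CHURN_WINDOW = timedelta(days=21)   # "three weeks", per the talk

def churn_rate(edits):
    """edits: list of (when: datetime, path: str, lineno: int), in time order.

    An edit counts as churn when it rewrites a line that was already edited
    within the window. Illustrative sketch only, not GitPrime's formula.
    """
    last_edit = {}          # (path, lineno) -> datetime of most recent edit
    churned = 0
    for when, path, lineno in edits:
        key = (path, lineno)
        if key in last_edit and when - last_edit[key] <= CHURN_WINDOW:
            churned += 1
        last_edit[key] = when
    return churned / len(edits) if edits else 0.0
```

A rewrite two days after the original edit counts as churn; a rewrite a month later is treated as new work.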

Let’s take a look at Katherine, at the top of the churn pile. She rewrote 90% of her code in the last three weeks. That would be problematic, except for the fact that she's in prototyping, in which case that's totally normal behavior. I can see she's normally at 38%, give or take, but I also know she was put on that project just two weeks ago, so I'm actually going to ignore that data set. Luke here, however, actually has a problem. He's churned 60% of his code in the last seven days, whereas he's normally churning in the low single digits. He's working on a standard admin feature, just rewriting some pages. Why is he churning so much more?

I don't know, but as a manager, I need to sit up out of my chair. I'm going to leave Katherine alone. I'm going to leave Mark alone. I'm going to leave Alexander alone, and I'm just going to go check in with Luke. That kind of precision management increases the productivity of everybody else on this board and potentially is going to save Luke from a pretty serious problem. In this particular case, it was a serious problem, and because I'm a fantastic manager - problem solved.

Checking in. Days since last non-trivial check-in. It's okay not to check in every day, but you can't skip too often. If you go more than about a week, you need to have a reason why you didn't check in. Usually, there’s a good reason.

At the end of the day, GitPrime is a signaling tool. It's a signal to a manager of where to put your attention.

TT100 Productive. TT100 is the time in hours required to write a hundred lines of code, after churn. This isn't a very interesting statistic when taken at one point in time. However, when it’s taken over a long period of time, this metric ends up being an outstanding indicator of productivity, especially for engineers in certain classes of work. For developers who are doing new features, working in fast-moving parts of the code - front-end developers, HTMLers - this ends up being an excellent signal that someone's stuck.
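Read literally from the definition above, TT100 is a simple ratio. The formula below is a back-of-the-envelope reconstruction; GitPrime's exact computation is not public:

```python
def tt100(hours_worked, loc_written, loc_churned):
    """Time to 100: hours needed to produce 100 net (post-churn) lines of code.

    Hypothetical reconstruction of the metric as described in the talk.
    """
    net_loc = loc_written - loc_churned
    if net_loc <= 0:
        return float("inf")      # no net output yet -> metric undefined
    return hours_worked * 100 / net_loc
```

For example, 40 hours of work yielding 500 lines with 100 churned gives a TT100 of 10 hours; taken as a single data point that means little, but its trend over months is the signal.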

Again, I can ignore some of these guys because I know they're doing tuning, or some other area where I wouldn't expect a tremendous amount of output. However, if John C (towards the bottom of the list) is an HTMLer who only writes new features on prototyping, I'd check if he’s having a problem. Make sense? It's a whole lot better than walking around the office and going, "Hey, man. How's it going? Good? Good? Oh, okay. Cool." Nobody ever admits that it's not going well. If you ask any engineer in the middle of their day, "How's it going," they will always say good. Then, you have to probe. You have to push. That whole time that you're pushing and nudging and asking, trying to uncover problems, you're wasting your developer’s time. You're taking productivity away from your team. This aims to solve that.

Compare your position to the rest of the team

productivity analytics for software teams

When you get down in the weeds, it sometimes pays to take a look at where an engineer sits relative to the rest of the team. What we have here are two axes: Throughput (net lines of code, after churn) and Churn. Here’s a breakdown of who you’ll find in the four quadrants:

  • The upper-right quadrant: this is where you have your “prolific” programmers. These are people who output a tremendous amount of work with very little rewrite.
  • The lower-right quadrant: this is where you’ll often see managers and part-time team leads - people who aren’t necessarily supposed to have high throughput. Their work tends to be small and precise in nature, like touch-ups - hence the label “perfectionists”.
  • In the lower left, “could be stuck” quadrant: you have people who churn a lot and have a net output of very little.
  • And finally, you’ll find your “explorers” in the upper-left quadrant: These are people who have high throughput and high churn. You see this most commonly in developers who are writing HTML, CSS, and front-end JavaScript. These guys will write code, get feedback from an internal stakeholder, write some more code, and show it to the stakeholder again.
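The quadrant labels above can be captured with two thresholds. The cutoffs (team medians, say) and the axis orientation are assumptions for illustration, not GitPrime's actual chart logic:

```python
def quadrant(throughput, churn, throughput_cutoff, churn_cutoff):
    """Label a developer per the four quadrants described in the talk.

    throughput: net lines of code after churn; churn: fraction rewritten.
    The cutoffs (e.g. team medians) are hypothetical parameters.
    """
    high_output = throughput >= throughput_cutoff
    high_churn = churn >= churn_cutoff
    if high_output and not high_churn:
        return "prolific"          # lots of output, little rewrite
    if high_output and high_churn:
        return "explorer"          # write, get feedback, rewrite
    if high_churn:
        return "could be stuck"    # churning a lot, little net output
    return "perfectionist"         # small, precise work
```

Plotting each engineer's weekly position against these labels reproduces the chart described in the talk.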

Explore your team’s progress over time

productivity analytics for software teams

Put all of this data together, and you get trends. Let’s take a look at Impact. Impact essentially measures the “cognitive load” your team carried when implementing changes. It is a function of the amount of code you're writing, the number of edit locations, and the number of files affected (to name a few).

Let’s say I go in, fix a helper method, and tie it to some front-end piece. All my code would be fairly concentrated in one part of the codebase, right? Now, compare that to the guy next to me, who touched 15 files all over the map, has a couple of lines of code up at lines 1 - 20, and edits down around line 30. He carried a higher cognitive load, and made a greater impact on the codebase, than I did.
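A toy version of an Impact-style score can be written as a weighted blend of the factors named above. Both the weights and the linear form are hypothetical; GitPrime's actual Impact metric combines more factors and is not public:

```python
def impact(loc_changed, edit_locations, files_touched,
           w_loc=0.4, w_locations=0.4, w_files=0.2):
    """Toy 'impact' score: weighted blend of code volume, number of distinct
    edit locations, and files affected.

    Weights and linear form are illustrative assumptions only.
    """
    return (w_loc * loc_changed
            + w_locations * edit_locations
            + w_files * files_touched)
```

Under any such weighting, the scattered 15-file change scores higher than the concentrated helper-method fix, matching the intuition in the example.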

Commit volume is exactly what it sounds like. It’s a more sophisticated method of measuring the raw output of the team. Let’s look at trend lines. One thing that's very useful: because we can drop events onto our reports, we can see the things that are driving productivity.

Normally, that's a release, right? It could also be Burning Man. It could be the loss of a key player on the team - a resignation - or the joining of a key player. It becomes very easy to see what's making a difference on your team. We measure and look at commits. This is a report showing items out for review - an extremely helpful report for people that are trying to see that last mile get through the door.

Analyze your code review process, and identify stagnant business value

productivity analytics for software teams

Samantha opened up a PR five months ago - it's still out. I don't know what's going on there, but let's kill the PR, take it off, or close it out. It's not okay for a PR to be out for five months, period.

This is rank-ordered based on the PR’s age. You can scroll down and see the oldest ones.

You can also get a sense of how hard and fast this release is coming in. 81,000 lines of code out for review? This is a worst-case scenario. If you have 81,000 lines of code out for review, and you're a week out from lockdown, you can bet that that release is going to go badly. For reference, 32 open PRs is a lot, even on a big team. On a team of 50 people, you should be closing PRs a couple of days after they’re opened, at most.

Very quickly, the "How's it going," and "Does it feel good," conversations start to change. They start being about data. Of course, it can be intimidating for most engineers when they look at these reports for the first time - your manager is now taking a close look at your data. The most important thing for engineers to remember is that data is our friend. Data is where the truth is.

When we get into arguments in our organizations, if it's about the data, more often than not we win, because data depoliticizes conversations. It stops being about what we feel. Product managers and stakeholders are always very good at talking; if you can just get out of the talking part of it and get into the data part of it, the truth usually comes out.

productivity analytics for software teams

While it may at first be intimidating to know that a group of people are looking at your commits, your PRs, your workflows, and your impact, you’ll find that it’s a good thing for engineering. We can also look at commenting. One of the things I love about GitHub is that it's a great platform for social coding. It's simply a better tool for quickly communicating with other engineers, getting feedback, looping that feedback in, and getting peer review. That is, again, intrinsically good. It's a good thing, but it's important to understand the flow of commits. If one person on a large team carries an unnatural load of commit review and commenting, that's a sign of low productivity. Usually, that person is the alpha dog of the team. If we instead spread out that peer review and commenting among other engineers, those other engineers get better, and the alpha dog can do more of the heavy lifting that he or she was originally hired to do.

What we’re looking at here is who the PR was opened by, commented on by, and merged by. This shows you who's doing the social work around the release. Again, you see a number of high-level metrics - very good for early signaling, very good for indicating that there's a problem before that problem is fully manifest.


I would conclude that there's a lot of value there. It's a level up from interrupting every single engineer a couple of times a week. It's a level up from retrospective surveys. It's a level up from not knowing. It's a level up from having to argue and debate and negotiate the value of your work with somebody who is, by definition, a better arguer and negotiator than you are.

At the end of the day, as a developer, I just want to be recognized for the quality and strength of my work. The more clearly I can show and evidence that, the better and healthier I will be as an engineer, and the better my team will be as a whole. Hopefully, as GitPrime continues to grow and be successful, we'll all be better for it.

Thank you.