A traditional perspective is that code review allows development teams to find bugs before they hit production. While not entirely wrong, we believe that’s a narrow view and there is far more value to be realized within the review process.
Yet if there’s more value to capture, something must be inhibiting us from capturing it. In an ideal code review (we use the terms PR, pull request, and code review interchangeably), a team progresses through seamless interactions, feedback gets incorporated, and everything is terrific. No problem!
But as a practical matter, that’s not how it goes every time.
Because engineers are human beings, every code review has tension built into it. Engineers have their own ways of doing things, and the whole point of code reviews is that someone else is weighing in with their own (sometimes differing) opinions. At times that tension can benefit from a third-party assist.
The problem is that leaders and managers are typically not a part of the review process on any meaningful scale.
To become meaningfully involved, leaders need a set of signals in this nonstop stream of pull requests to help them identify which reviews to participate in and how best to participate.
In our mission to elevate engineering leadership with objective data, our data-science team studied over 100 signals in the data to make better PRs, better engineers, and ultimately better products. You can find more on the Pluralsight Flow Code Review and Collaboration reports here.
This multi-part post covers seven dynamics you can recognize in the data, and act on to improve your team’s effectiveness through code review. By emphasizing how teams of human beings work together, you can dig into fertile opportunities for tangible improvement.
We’ve organized the seven pull request dynamics into two groups: processes and people.
In this first post, we’ll examine how you can build effectiveness by focusing first on your team’s process, before moving on to your team’s people. We’ll dive into four signals that indicate you can improve the processes your team has in place.
1. Unreviewed code and pull requests
Unreviewed pull requests occur when engineers open a code review and it gets approved without any comments, often by the submitters themselves. As a general rule, engineers should never merge their own code, and in fact, most companies don’t permit them to. Unreviewed pull requests bypass any check on the code and skip the opportunity for improvement and learning.
If the code is worth putting into the main code branch, it is worth having somebody review it. Yet as a practical matter, unreviewed pull requests happen a lot, for any number of reasons.
With today's tools, unreviewed pull requests are typically tough to spot on a large team with hundreds of pull requests flowing through regularly and deadlines on the horizon. Flow data can help identify these instances for you.
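The detection logic itself is simple once you have the data. Here's a minimal sketch in Python, using hypothetical PR records (the field names are illustrative, not from any real API): a merged PR counts as unreviewed if it received no review comments, or if the submitter approved their own code.

```python
# Hypothetical PR records; field names are illustrative, not from any real tool's API.
pull_requests = [
    {"id": 101, "author": "sarah", "approved_by": "bodhan", "comments": 3, "merged": True},
    {"id": 102, "author": "chris", "approved_by": "chris", "comments": 0, "merged": True},
    {"id": 103, "author": "sarah", "approved_by": None, "comments": 0, "merged": False},
]

def unreviewed_and_merged(prs):
    """A merged PR counts as unreviewed if it drew no review comments,
    or if the submitter approved their own code."""
    return [
        pr for pr in prs
        if pr["merged"] and (pr["comments"] == 0 or pr["approved_by"] == pr["author"])
    ]

for pr in unreviewed_and_merged(pull_requests):
    print(f"PR #{pr['id']} by {pr['author']} was merged without review")
```

Note that still-open unreviewed PRs are deliberately excluded here; those are a different signal, covered in the next two sections.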
Unreviewed Pull Request, as seen in Flow.
In this view, you can see all of the pull requests that this team opened in a single sprint. Six of them, in yellow, are still unreviewed. Three are unreviewed but still open (instances which we’ll cover in the next two sections below), but three have already been merged into the main branch.
Those are the unreviewed pull requests you want to take a closer look at. Scan them every single time they occur.
You can evaluate them on a case-by-case basis. Even though you’re checking them out after they’ve been merged, you want to make sure that any problems aren’t going to run out of control. The sooner you check them, the easier it is to intervene. Sometimes, if it’s a trivial commit, you can just give QA a heads-up to take a close look there. Sometimes, those unreviewed pull requests are non-trivial. Walk those back if you can.
In any case, you want to reduce the quantity and frequency of unreviewed pull requests as a best practice. If certain engineers are repeat offenders, have an informal conversation with them about the importance of the code review process. You typically don’t need to get heavy-handed with them. Focus your conversation on better planning, more so than policies and rules. Most engineers will understand your reasoning and do their part to reduce unreviewed pull requests going forward.
2. Pull request hotspots
Part of the challenge of managing the pull request process is that the vast majority of pull requests are routine. They open, someone makes a few comments, maybe the engineer commits some changes, and then they're approved and merged.
The pull requests that don’t go that way can cause some pretty serious disruption. We call those disruptors pull request hotspots, and they’re pretty much the opposite of unreviewed PRs.
Hotspots happen when engineers generate far more comments than normal, with lots of back and forth, especially when it’s over a very small amount of code. You can also look for an unusual number of follow-on commits that drag the review process on for days. In either case, you’re looking for excessive activity that’s not reaching any conclusions. These hotspots often signal uncertainty or disagreement.
Of course, excessive and problematic commenting depends on your team’s baseline. The first thing you want to do is understand what your team’s normal behavior is.
Pull Request dynamics, as seen in Flow.
These charts indicate normal behavior for this example team. According to the data, this team typically resolves its PRs promptly, in just over an hour. But we can also see, by looking at the outliers, that despite a pretty good process, not everything goes as planned. These circles indicate some potential hotspots.
Outliers in the Pull Request process, as seen in Flow.
You can dig into those outliers to better understand the activity going on and why those particular pull requests are necessitating so many comments and follow-on commits without any resolution.
Drilling down into code review metrics in Flow.
Sorting by highest activity, and working from the top down, this code review has an abnormal amount of activity, as indicated by the bars and circles by the cursor. It was opened seven days ago, and it’s not large, but it already has two follow-on commits, eight comments, and no approval. Considering that this team typically approves a PR in about an hour, a seven-day PR may be out of control. It’s time for you to step in.
You can scan the conversation to get a sense of the review. In this case, the conversation is cyclical and isn’t resolving. That means it’s time for you to get these engineers in a room (or on the phone, if the team is distributed) to sort it out. In almost any case, spending thirty minutes at a whiteboard will likely prevent several more days of spin. Furthermore, it will pay dividends in knowledge sharing.
Whatever approach you take, you want to get engineers engaged and talking through the problems. What’s the context of the hotspot—disagreement, or healthy dialogue? Use that insight to form your response.
Also, encourage the engineers to resolve and close those PRs, because open pull requests can drive uncertainty in a team. Alternatively, excessive follow-on commits may indicate that the work isn’t ready for review, in which case it may be better to withdraw that pull request until the engineer can have it better prepared for the review process.
3. Long-running code reviews and PRs
Long-running pull requests may appear similar to hotspots, but you can really improve your process by drilling into them differently. Long-running code reviews may not be a problem in and of themselves, unlike hotspots, but they tend to signal other underlying problems—such as disagreement or uncertainty between engineers. Such uncertainty can result from missing requirements, last-minute changes to the design specs, missed components, or simply inertia.
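What separates a long-running PR from a hotspot is that it has gone quiet: it's been open a long time and nothing has happened recently. A minimal sketch of that distinction, using hypothetical PR records and arbitrary thresholds:

```python
from datetime import datetime

# Hypothetical open PRs with their open date and last recorded activity.
now = datetime(2024, 6, 30)
open_prs = [
    {"id": 201, "opened": datetime(2024, 5, 20), "last_activity": datetime(2024, 5, 21)},
    {"id": 202, "opened": datetime(2024, 6, 28), "last_activity": datetime(2024, 6, 29)},
]

def long_running(prs, max_age_days=14, max_idle_days=5):
    """A PR is long-running if it has been open too long AND has gone quiet.
    The idle check distinguishes neglect from an active (if heated) hotspot."""
    flagged = []
    for pr in prs:
        age = (now - pr["opened"]).days
        idle = (now - pr["last_activity"]).days
        if age > max_age_days and idle > max_idle_days:
            flagged.append(pr["id"])
    return flagged
```

The thresholds should come from your team's own baseline; for a team that normally closes PRs in about an hour, even a few idle days is worth a look.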
The data below shows a pull request that’s potentially being overlooked. It was opened more than a month ago. Three people commented about one line of code right away, and then that turned into radio silence.
Neglected or overlooked code review, as seen in Flow.
This code review is stuck. Maybe it just got dropped. Maybe a war is about to break out over a disagreement and everyone is trying to avoid setting off the powder keg. Whatever the case, you have a logjam and no one quite knows how to resolve it. Rather than jumping into the muck, everyone has backed off.
As a leader, you can take a look at the comments to see if you can get a read of the situation. Figure out if the engineers are grappling with disagreements or uncertainties. And then, having identified at least part of the problem behind the long-running PR, seize the moment to step in and bring resolution to the review.
We recommend you start by talking with the submitter. Ultimately, bringing a code review to a conclusion is the submitter’s responsibility, and you don’t want to take that accountability from them lightly. Check in to see how they understand the problem—and offer them help in navigating the waters with their teammates.
Perhaps everyone needs to move the written conversation to a verbal one. Maybe they need half an hour at the whiteboard. Maybe they just need your encouragement to go ahead and resolve the pull request themselves.
In any case, you’re bringing closure to a code review and to your team. A long-running pull request is like a rock in your team’s shoe. Identifying these irritants with the data can make a significant difference in helping engineers work well together and advance as a team.
4. Blocked pull requests
Blocked pull requests are expensive and frustrating. Your engineers want to be productive, but in the review process, they need feedback to be productive. And a blocked pull request happens when engineers are stuck waiting for that feedback.
When engineers submit PRs, they’re in the mindset for that code. They’re thinking about it, and it’s fresh in their minds. After a few hours, though, their concentration is by necessity somewhere else. And their recollection of that code gets fuzzier by the day. So if code reviewers are slow to comment, that delay can impair their teammates’ ability to be productive.
In other words, it’s critical that reviewers provide their comments promptly to ensure their full impact.
As a leader, sure, you can nag your team to get their reviews in quicker. But nagging doesn’t exactly boost morale. It’s helpful when you can bring data into the conversation.
Slow moving or blocked pull request, as seen in Flow.
Here, Sarah opened a pull request and waited six days until a reviewer made a comment. She had four follow-on commits to make, which she accomplished pretty quickly, and then her PR was finally approved seven days later.
That means it took nearly two weeks from the time she wrote the code to when it finally reached QA. That’s less than ideal in almost any environment. But the delay on Sarah’s code wasn’t due to Sarah’s performance. Her reviewer failed to respond to her PR in a timely and responsible fashion.
"Reaction Time" as seen in Flow.
Her reviewer, Bodhan, has his relevant stats highlighted on this screen. It appears that his time-to-response in this scenario is a demonstrable pattern. On average, he takes a day to respond, and he takes even longer with Chris than he does with Sarah.
This kind of pattern becomes toxic because it speaks to respect for an engineer’s teammates. Even if no disrespect is intended, keeping teammates blocked is inconsiderate of their time and productivity.
In this example, Bodhan bundles all his review work to the end of his days and then jams it all in. That’s efficient for him, because he’s busy with his own work as a senior engineer. But it comes at a cost to the rest of the team.
Fortunately, this pattern tends to be fairly easy to manage. It’s important to set clear expectations with the blocking engineers, and you can focus the discussion on the impact to others. Many times, blocking engineers just don’t realize the repercussions of their patterns. They’re concentrated on their own work and haven’t even considered that other people are waiting for them.
In the data, you can identify an engineer’s responsiveness and reaction time, so an easy place to start is by establishing some goals. We recommend establishing 4-8 hours max as a target reaction time. Usually, by four hours, engineers have a break in their work—lunch, or a meeting, or whatever gaps are normal in your workplace culture. Those breaks are a good time to encourage them to do their reviews. By focusing on that time to first comment, you get a pull request off to a strong start and set the tone for the whole process.
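Reaction time is straightforward to measure once you record when a PR was opened and when its reviewer first commented. Here's a sketch averaging it per reviewer and comparing against the upper end of the suggested 4-8 hour target. The names and timestamps are illustrative, echoing the example above.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical review events: PR open time and the reviewer's first comment.
events = [
    {"reviewer": "bodhan", "opened": datetime(2024, 6, 3, 9, 0),
     "first_comment": datetime(2024, 6, 9, 9, 0)},   # six days later
    {"reviewer": "sarah", "opened": datetime(2024, 6, 3, 10, 0),
     "first_comment": datetime(2024, 6, 3, 13, 0)},  # same afternoon
]

TARGET_HOURS = 8  # upper end of the suggested 4-8 hour reaction-time goal

def mean_reaction_hours(events):
    """Average hours from PR open to the reviewer's first comment, per reviewer."""
    per_reviewer = defaultdict(list)
    for e in events:
        hours = (e["first_comment"] - e["opened"]).total_seconds() / 3600
        per_reviewer[e["reviewer"]].append(hours)
    return {r: sum(h) / len(h) for r, h in per_reviewer.items()}

for reviewer, hours in mean_reaction_hours(events).items():
    status = "over target" if hours > TARGET_HOURS else "on target"
    print(f"{reviewer}: {hours:.1f}h average first response ({status})")
```

A per-reviewer average like this gives you something concrete to bring to the informal conversation, rather than a vague impression that someone is slow.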
Usually, an informal chat with a blocking engineer will make all the difference in the world. When people realize their teammates are depending on them, they almost always course correct. Every now and again, you might encounter a recurring problem, but we find that almost never happens with this pattern.
Up next: The people and collaboration side of code reviews
That covers the four process-based dynamics of code review best practices that you can identify with data and take action on. In the next post, we’ll continue our exploration into the people-based dynamics of code reviews.