19 software engineering metrics + how to track them effectively
Learn how software engineering metrics help teams measure the quality of their dev process and make positive, actionable changes.
Mar 29, 2023 • 3 Minute Read
To steer software engineering teams in the right direction, it’s imperative to have the right metrics that show you where you’ve been, where you are today, and where you’d like to be tomorrow.
Chances are, your executive team doesn’t speak in lines of code or story points. Instead, they want data that tells a story. Software engineering metrics can help you do just that. Measuring different aspects of your team’s process can help you tell the story of how your team is maturing and what can be improved.
Below, we dive into some essential software engineering metrics to track, and share a method for prioritizing the right metrics for your team.
What are software engineering metrics?
Software engineering metrics are a way to measure the quality and health of your software development process. These metrics should be objective, simple to understand, and consistently tracked over time.
While teams may try to “game” the metrics in their organization, this will end up hurting your team in the long run. Software engineering metrics should be used as objective signals to drive better outcomes for your organization.
It’s important to note the difference between software engineering metrics and application performance monitoring (APM) metrics. APM metrics (like website uptime) track the actual performance of a piece of software or app. Software engineering metrics, on the other hand, measure developer productivity, team health, and delivery predictability.
Why are the metrics we see in Jira not enough?
While Jira metrics provide a solid foundation for tracking fundamental Agile metrics, they don’t encompass all aspects of the software development lifecycle.
Jira metrics lack depth and highlight surface-level issues rather than showing leaders how to improve organizational performance. Tools like Pluralsight Flow merge various software engineering metrics to provide you with a deep dive into historical trends, team health, and collaboration to help you identify bottlenecks in your process.
Types of software engineering metrics
There are a number of software engineering metric types for teams to focus on. At their heart, these metrics look at how well your team is hitting goals and producing quality code. Some metrics are lagging indicators that reflect past performance, while others are leading indicators that help predict future outcomes.
Software engineering metrics fall within a few distinct buckets and map to certain aspects of the software development lifecycle:
- Predictability: Metrics under this category answer the questions “Are we moving fast enough?” and “Do we have a steady, efficient pace?” Understanding these metrics enables leaders to more predictably estimate outcomes, timelines, and ROI. These metrics are typically a mix of lagging and leading indicators that work together to help your team make data-driven forecasts.
- Activity: These metrics—when used properly—can show how organizational inefficiencies are getting in the way of work getting done; they are typically a mix of indicators that can help you understand past performance, as well as your current position.
- Culture: Culture metrics answer the question, “Are we instilling a collaborative culture in our software development process?” Pull request metrics are key to seeing how collaborative your team is during the code review process. These metrics are typically lagging metrics, reflecting on past collaboration.
- Efficiency: These metrics evaluate the engineering efficiency of your team, whether there are processes or roadblocks hurting your team, and whether developers are taking feedback from their pull requests and iterating on it. Efficiency metrics are a mix of leading and lagging metrics that help to build a complete picture of your process.
- Reliability: Reliability metrics work as a sort of control variable to help teams understand if they’re sacrificing quality for speed. In most cases, these metrics are lagging metrics that examine past performance for quality issues.
Coding metrics
Coding metrics allow developers to track and measure the quality of code they’re creating; these include popular DevOps-focused metrics based on the DORA model. Tracking this historical data provides the insights teams need to improve the reliability and maintainability of code.
1. Lead time for changes
Lead time for changes measures the time it takes from when code is committed to when it’s deployed. This metric gauges how long it takes to create and deliver value to the user.
A high lead time for changes can indicate there are bottlenecks within your software development process. A low lead time for changes shows that your team is efficient in reacting to changes—whether it’s responding to feedback, fixing a bug, developing a new feature, or maintaining your codebase.
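As a sketch of the calculation, lead time for changes is the average gap between when a change is committed and when it ships. The data shape below is an assumption; in practice you would join commit data from your version control system with deployment records from your pipeline:

```python
from datetime import datetime, timedelta

def lead_time_for_changes(commits):
    """Average time from commit to deployment.

    `commits` is a list of (committed_at, deployed_at) datetime pairs --
    a simplified stand-in for data pulled from your VCS and deploy logs.
    """
    deltas = [deployed - committed for committed, deployed in commits]
    return sum(deltas, timedelta()) / len(deltas)

commits = [
    (datetime(2023, 3, 1, 9, 0), datetime(2023, 3, 1, 17, 0)),   # 8 hours
    (datetime(2023, 3, 2, 10, 0), datetime(2023, 3, 3, 10, 0)),  # 24 hours
]
print(lead_time_for_changes(commits))  # average of 8h and 24h = 16h
```

Tracking this average over rolling windows (per sprint, per quarter) is what turns the raw number into the trend line leadership cares about.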
2. Deployment frequency
Deployment frequency measures how often code is successfully deployed into production. This metric measures a team’s speed and agility. Teams that deploy more often (such as several times a day or once a week) tend to be elite or strong performers.
If your team is deploying less frequently (such as once every few weeks), it may be a sign to reduce your deployment size so it’s easier to review, test, and deploy. Software deployment tools can also help make your process more efficient.
3. Rework
Rework is code that’s rewritten or deleted within three weeks of being created. While some rework is to be expected, spikes in rework can indicate that an engineer is struggling with a project or that a project’s requirements were unclear. For a senior engineer, a typical rework rate is between 20% and 30%.
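The three-week window lends itself to a simple classification sketch. The `classify_change` helper below is hypothetical; it assumes you can recover (e.g. via `git blame`) when each changed line was originally written:

```python
from datetime import datetime, timedelta

REWORK_WINDOW = timedelta(weeks=3)

def classify_change(line_created_at, change_made_at):
    """Label a change to an existing line by the age of the code it touches.

    Changes to code younger than three weeks count as rework; changes
    to older code count as legacy refactor.
    """
    age = change_made_at - line_created_at
    return "rework" if age <= REWORK_WINDOW else "legacy refactor"

def rework_rate(changes):
    """Fraction of changed lines classified as rework.

    `changes` is a list of (line_created_at, change_made_at) pairs.
    """
    labels = [classify_change(created, changed) for created, changed in changes]
    return labels.count("rework") / len(labels)

changes = [
    (datetime(2023, 3, 1), datetime(2023, 3, 10)),  # 9 days old: rework
    (datetime(2023, 1, 1), datetime(2023, 3, 10)),  # ~10 weeks old: legacy refactor
]
print(rework_rate(changes))  # 0.5
```

The same age split also yields the legacy refactor metric described later: the changes that fall outside the three-week window.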
4. Impact
A metric designed by Pluralsight, Impact measures the scale of changes to the codebase. This metric is an approximate measure of the cognitive load the engineer carried when implementing code changes. Impact is a good explainer metric: if efficiency or coding days drop, was it due to work with above-average complexity? Team leads can obtain a better indication of their team’s capacity using this within Pluralsight Flow.
Impact takes into account:
- The amount of code in a change
- What percentage of the work is edits to old code
- The number of edit locations
- The number of files affected
- The severity of changes when old code is modified
- How this change compares to others from the project history
5. Legacy refactor
Legacy refactor measures the updates and edits to code older than three weeks. Tracking this metric helps you better understand how much time is spent paying down technical debt.
6. New work
New work is brand-new code that doesn’t replace or fix other code. This metric shows the amount of code a team or individual writes for new products and features.
Your new work target will depend on the stage of business you’re in. For example, a new company might aim for more than half of its work to be new work, a sign that it’s focused on building a new product.
7. Commit complexity
Commit complexity measures how likely it is that a particular commit will cause problems. This metric calculation includes how large the commit is, the number of files it touches, and how concentrated the edits are. Commit complexity essentially looks at the riskiness associated with a commit.
Commit complexity is an important metric to track because it can help teams prioritize which commits to review first—and which commits will require extra time and attention.
8. Throughput
Throughput measures a team's overall work output across a specific timeframe, such as a few hours, days, or weeks. By examining your throughput, team leads can better understand the efficiency or productivity of a development workflow. A system with a low throughput may have bottlenecks or other inefficiencies that you may need to address.
A critical aspect of throughput is defining a completed task, like the number of features completed per sprint or the software deployments pushed to production. By further analyzing throughput, you can tweak your process for improvement and plan for potential future capacity needs.
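As a minimal sketch, throughput can be computed by bucketing completed tasks into time windows; here, ISO weeks. The input shape is an assumption, and your “completed” events might be merged features, closed tickets, or production deployments:

```python
from collections import Counter
from datetime import date

def throughput_by_week(completed_dates):
    """Count completed tasks per ISO (year, week).

    `completed_dates` is a list of completion dates. What counts as a
    completed task is a team decision made before tracking begins.
    """
    return dict(Counter(d.isocalendar()[:2] for d in completed_dates))

tasks = [date(2023, 3, 6), date(2023, 3, 8), date(2023, 3, 14)]
print(throughput_by_week(tasks))  # two tasks in ISO week 10, one in week 11
```

Comparing these weekly counts against planned capacity is one simple way to spot the bottlenecks the text describes.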
9. HALOC
A particularly focused metric, hunk-aware lines of code (HALOC) measures the changes made to code within a version control system. When a developer makes a change in a system such as Git, it is commonly displayed as a “diff,” which shows the differences between old and new versions of a segment of code. A contiguous block of changed lines within a diff is called a hunk.
With the HALOC metric, each hunk is examined, and its insertions and deletions are counted; the larger of the two counts is taken as the HALOC for that hunk. Using the HALOC metric, you can better understand the amount of code changed and modified by a particular developer.
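The per-hunk calculation can be sketched in a few lines. The input is assumed to be a list of (insertions, deletions) pairs, one per hunk, as you might parse from a unified diff:

```python
def haloc(hunks):
    """Hunk-aware lines of code.

    For each hunk, take the larger of its insertion and deletion counts,
    then sum across hunks. `hunks` is a list of (insertions, deletions)
    pairs parsed from a diff.
    """
    return sum(max(insertions, deletions) for insertions, deletions in hunks)

# A diff with two hunks: one replaces 5 old lines with 3 new ones,
# and one adds 10 brand-new lines.
print(haloc([(3, 5), (10, 0)]))  # max(3, 5) + max(10, 0) = 15
```

Taking the max per hunk means a 5-line rewrite counts as 5 changed lines, not 10, which is why HALOC avoids double-counting replacements the way raw added-plus-deleted totals do.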
Collaborative metrics
Team collaboration metrics give managers crucial insight into how teams are responding to the code review process. These metrics provide insight into the overall collaboration and team culture, as well as bottlenecks impacting deployment.
10. Responsiveness
Responsiveness is a measure of how long it takes a submitter to respond to a comment on their pull request with another comment or code revision. This metric gives teams insight into whether team members are responding to code review feedback in a timely manner.
Lowering this metric ensures that pull requests are being reviewed and merged in an appropriate time frame. For context, the industry norm is 1.5 hours for leading contributors and six hours for typical contributors.
11. Unreviewed PRs
Unreviewed PRs is the percentage of pull requests without comments or approvals. This metric shows you how many pull requests are merged without being reviewed.
Leading contributors typically have 5% unreviewed PRs, and typical contributors have 20% unreviewed PRs.
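As a sketch, the percentage can be computed from per-PR review counts. The dict shape below is an assumption standing in for what your Git host’s API would return:

```python
def unreviewed_pr_rate(prs):
    """Percentage of merged PRs with neither comments nor approvals.

    Each PR is a dict with 'comments' and 'approvals' counts -- a
    simplified stand-in for data from a Git hosting API.
    """
    unreviewed = [p for p in prs if p["comments"] == 0 and p["approvals"] == 0]
    return 100 * len(unreviewed) / len(prs)

prs = [
    {"comments": 2, "approvals": 1},
    {"comments": 0, "approvals": 0},  # merged without any review
    {"comments": 0, "approvals": 1},
    {"comments": 1, "approvals": 0},
]
print(unreviewed_pr_rate(prs))  # 25.0
```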
12. PR iteration time
PR iteration time is the average time in hours between the first and final comment on a pull request. This metric gives you insight into how long it takes to implement changes requested on PRs.
High iteration time can indicate the initial pull request had poor or misaligned requirements or that the reviewer’s requested changes were time-intensive and out of scope.
13. Iterated PRs
Iterated PRs is the percentage of pull requests with at least one follow-on commit. This simple metric allows you to see how many pull requests require additional work before being merged.
14. Reaction time
Reaction time is the time it takes for a reviewer to review a pull request and respond to a comment. This metric helps answer the question: Are reviewers responding to pull requests in a timely manner?
The goal is to drive this number down, as a lower reaction time indicates better collaboration between submitter and reviewer. A typical reaction time for a leading contributor is six hours, and 18 hours for a typical contributor.
15. Thoroughly reviewed PRs
Thoroughly reviewed PRs is the percentage of merged pull requests with at least one regular or robust comment. This metric aims to ensure pull requests are being thoroughly reviewed.
Too many pull requests without thorough reviews can be a sign of rubber-stamping during the code review process. On the flip side, a high thoroughly reviewed PRs percentage indicates strong code review quality and healthy team collaboration.
16. Time to merge
Time to merge is the average time in hours from when pull requests are created to when they’re merged. This metric tells you how long pull requests are in review. Long-running pull requests can be costly to your business and result in delayed releases.
17. Time to first comment
Time to first comment is the average time in hours from when pull requests are created to when they receive their first comment. Driving down this metric helps reduce waste, cycle time, and context switching.
18. Follow-on commits
Follow-on commits measure the number of code revisions added to a pull request after it is opened for review. Tracking this metric gives you insight into the quality of your code review process. If you see a spike in follow-on commits, that’s an indication you may need better planning and testing.
19. Sharing index
Sharing index measures how information is being shared across a team by looking at who’s reviewing whose pull requests. Tracking the sharing index can help you understand the number of people regularly participating in code reviews.
This metric is a great way to gauge how well senior members are sharing knowledge with more junior developers and identifying situations where knowledge silos are happening.
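Flow’s exact sharing index formula isn’t spelled out here, but as a rough, hypothetical proxy you can build a map of who reviews whose pull requests and look for authors whose work is only ever seen by a single reviewer:

```python
from collections import defaultdict

def review_graph(reviews):
    """Map each PR author to the set of teammates who reviewed their PRs.

    `reviews` is a list of (author, reviewer) pairs. This is a simple
    proxy for knowledge sharing, not Flow's proprietary calculation:
    authors with a one-reviewer set are candidates for a knowledge silo.
    """
    graph = defaultdict(set)
    for author, reviewer in reviews:
        graph[author].add(reviewer)
    return dict(graph)

reviews = [("dana", "alex"), ("dana", "sam"), ("kim", "alex"), ("kim", "alex")]
print(review_graph(reviews))
# dana's PRs reach two reviewers; kim's are only ever seen by alex
```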
Benefits of tracking software engineering metrics
The goal of tracking these metrics should be to improve processes, quality, and collaboration across your team. The benefits of tracking software engineering metrics include:
- Increasing understanding of how work is being done
- Identifying problems/bottlenecks
- Managing workloads and resources
- Drawing objective conclusions to aid in decision making and goal setting
Of course, it’s not enough to simply track software engineering metrics. You must also make it a habit to report on and review these metrics on a regular basis to track growth over time. Turning metrics into actionable insights is the key to a successful development workflow.
Turning software engineering metrics into actions
Gathering metrics is one part of the efficiency equation, but how those metrics enable effective management and leadership is the most critical aspect. As you analyze your metrics, set a goal to identify any potential trends or patterns. Is there a particular metric that has changed drastically? Has that metric been changing over a short or long period? Is the change positive or negative for your team?
Aim to understand the metrics' trajectory and the reasons behind their change. With this knowledge, you can take action using the recorded data. Remember that observing metrics is an ongoing process of improvement, driving your team toward greater efficiency.
Example: You may notice that your team's commits per day are decreasing; this could signal a productivity issue or indicate that larger code commits are being worked on, requiring more time to submit. You could examine this further by viewing your HALOC metric, giving you a better view of overall velocity based on the amount of code produced.
How to align software engineering metrics with your organizational goals
With so many metrics to track, it can be helpful to take a step back and consider your organizational goals as you’re determining what metrics you want to prioritize.
One way to do that is with the Goal/Question/Metric method. This method is broken down into three levels:
- Goals (conceptual level): Start by defining the goals you’re trying to achieve.
- Questions (operational level): Ask clarifying questions you’re trying to answer with the data you collect.
- Metrics (quantitative level): Assign a set of metrics to every question to answer it in a measurable way.
Goal/Question/Metric can be used for a variety of outcomes. You may want to use the method to improve technical quality, product quality, or delivery team health. Using Goal/Question/Metric will ultimately help you identify and clarify your business goals, and establish metrics that will help you track and measure progress toward these goals.
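As an illustration, a Goal/Question/Metric plan can be written down as plain data and then flattened into the list of metrics to instrument. The goal, questions, and metric choices below are examples, not prescriptions:

```python
# A Goal/Question/Metric plan as plain data. The specific goal, questions,
# and metric assignments here are illustrative only.
gqm_plan = {
    "goal": "Ship changes to users faster without hurting quality",
    "questions": {
        "Are we moving fast enough?": [
            "lead time for changes",
            "deployment frequency",
        ],
        "Is speed costing us quality?": [
            "rework",
            "thoroughly reviewed PRs",
        ],
    },
}

def metrics_for(plan):
    """Flatten a GQM plan into the deduplicated list of metrics to track."""
    return sorted({m for metrics in plan["questions"].values() for m in metrics})

print(metrics_for(gqm_plan))
```

Starting from the goal and working down keeps the metric list short and tied to a question someone actually wants answered, rather than tracking everything available.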
FAQ
Do you still have questions about software engineering metrics and how they can affect your team's development? Here are answers to the most frequently asked questions.
What is a KPI in software engineering?
KPI stands for key performance indicator, a set of metrics that help track performance and project health. Software development KPIs are quantifiable metrics that can be used as specific data points to showcase your team's progress. They can also spotlight potential areas for improvement, aiding decision-making. Common KPIs include:
- Cycle time
- Lead time
- Deployment frequency
- Mean time to resolution
- Defect rate
- Customer satisfaction
What is the difference between metrics and KPIs?
While the terms are often used interchangeably, metrics and KPIs are not the same. Metrics are data points that help track progress, while KPIs are a subset of metrics that identify progress toward a specific goal.
For example, lines of code is a metric that tells you the size of a codebase, while mean time to resolution is a KPI, as it relates to the goal of maintaining system stability.
What is the difference between a KPI and an OKR?
While similar in that they help measure overall progress, KPIs and objectives and key results (OKRs) have different focuses. KPIs are measurable metrics that monitor ongoing health and performance; they are tracked over time and can help with efficiency, effectiveness, and progress. OKRs pair a qualitative objective with measurable key results; they allow teams to set ambitious goals and focus on achieving a specific outcome.
How Pluralsight Flow can help you measure key software engineering metrics
You can measure almost anything, but you can't (and shouldn’t) pay attention to everything. Pluralsight Flow helps you track the right metrics all within one DevOps metric dashboard to give you the workflow insights you need. While other analysis tools focus on nothing more than code, Flow takes its job further, providing team-based metrics that give you a deeper understanding of your team’s progress.
To find out more about how Pluralsight Flow can help you track metrics beyond the standard Jira or DORA set, schedule a demo with our team today.