This week’s issue brings you the following:
DevEx: A New Method For Measuring Developer Productivity
Measuring Software Developer Productivity by McKinsey
Defining, Measuring, and Managing Technical Debt at Google
So, let’s dive in.
DevEx: A New Method For Measuring Developer Productivity
In a recent ACM Queue paper, "DevEx: What Actually Drives Productivity," Abi Noda, Margaret-Anne Storey, Nicole Forsgren, and Michaela Greiler present a framework for measuring and improving developer experience (DevEx).
They found that developers are happier when they feel productive, and that obstacles prevent them from delivering as much value as they could. Many things cause a poor developer experience, such as interruptions, poor tooling, unrealistic deadlines, working on low-value tasks, and more.
They identified three core dimensions that directly shape developer experience:
Feedback loops - the speed and quality of responses to actions performed.
Cognitive load - the amount of mental processing required for a developer to perform a task.
Flow state - a mental state in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment.
To create DevEx metrics, we can combine these three core dimensions with two measurement methods:
Perceptions - Gathered through surveys.
Workflows - Gathered from systems.
For example, to measure cognitive load through perceptions, we can ask how complex a codebase is or how easy the documentation is to understand. On the workflow side, we can check how long it takes to get answers to technical questions or how often the documentation is improved.
In DevEx, we should capture both developer perceptions and their workflows. Picking the right KPIs is essential for tracking overall success.
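To make this concrete, here is a minimal sketch, in Python, of the resulting three-by-two measurement matrix. The example metrics in each cell are illustrative picks based on the examples above, not a prescribed list from the paper:

```python
# A minimal sketch of the DevEx measurement matrix: three dimensions
# (feedback loops, cognitive load, flow state) crossed with two methods
# (perceptions from surveys, workflows from systems). The example KPIs
# below are illustrative, not the paper's official list.

DEVEX_METRICS = {
    "feedback_loops": {
        "perceptions": [
            "Satisfaction with time to run automated tests",
            "Satisfaction with code review turnaround",
        ],
        "workflows": [
            "Time from commit to CI result",
            "Time from PR opened to first review",
        ],
    },
    "cognitive_load": {
        "perceptions": [
            "Perceived codebase complexity",
            "Ease of understanding documentation",
        ],
        "workflows": [
            "Time to get answers to technical questions",
            "Frequency of documentation improvements",
        ],
    },
    "flow_state": {
        "perceptions": [
            "Perceived ability to do focused work",
            "Satisfaction with amount of uninterrupted time",
        ],
        "workflows": [
            "Blocks of 2+ meeting-free hours per week",
            "Frequency of unplanned interruptions",
        ],
    },
}

def list_metrics(dimension: str, method: str) -> list[str]:
    """Return candidate KPIs for one cell of the DevEx matrix."""
    return DEVEX_METRICS[dimension][method]

if __name__ == "__main__":
    for metric in list_metrics("cognitive_load", "workflows"):
        print(metric)
```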
To learn how to measure developer productivity using other methods, check my previous text on this topic.
Measuring Software Developer Productivity by McKinsey
McKinsey has developed an approach that leverages surveys and existing data from tools, such as backlog management systems, to measure software developer productivity. This method builds upon existing productivity metrics and aims to unveil opportunities for performance enhancements. Implementing this approach has led to significant improvements, including reductions in product defects, enhanced employee experiences, and boosted customer satisfaction.
A nuanced system is essential for measuring developer productivity. There are three pivotal types of metrics to consider:
System-level metrics: These are broad metrics, like deployment frequency, that give an overview of the system's performance.
Team-level metrics: Given the collaborative nature of software development, team metrics focus on collective achievements. For instance, while deployment frequency can be a good metric for systems or teams, it's unsuitable for individual performance tracking (see the sketch after this list).
Individual-level metrics: These zero in on the performance of individual developers.
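As an illustration, here is a hedged sketch of one such system- or team-level metric, deployment frequency, computed from a hypothetical list of deployment timestamps:

```python
# A sketch of deployment frequency, a system/team-level metric. The data
# here is hypothetical; as noted above, this metric suits systems or teams
# but should not be used to rank individual developers.

from datetime import date

def deployments_per_week(deploy_dates: list[date]) -> float:
    """Average number of deployments per week over the observed span."""
    if not deploy_dates:
        return 0.0
    span_days = (max(deploy_dates) - min(deploy_dates)).days
    weeks = max(span_days / 7, 1.0)  # avoid division by zero on short spans
    return len(deploy_dates) / weeks

if __name__ == "__main__":
    deploys = [date(2023, 9, d) for d in (1, 4, 6, 8, 11, 13, 15, 18, 20, 22)]
    print(f"{deployments_per_week(deploys):.1f} deployments/week")  # -> 3.3
```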
Two sets of industry metrics have been foundational in this space. The first is the DORA metrics, developed by Google's DevOps research team, which are outcome-focused. The second is the SPACE metrics, created by GitHub and Microsoft Research, which emphasize developer well-being and optimization. McKinsey's approach complements these by introducing opportunity-focused metrics, offering a comprehensive view of developer productivity.
These metrics include the following:
Inner/outer loop time spent: Software development activities are arranged in inner and outer loops. The inner loop includes activities such as coding, building, and testing, while the outer loop includes the tasks developers must do to push their code to production: integration, release, and deployment. We want to maximize the time developers spend in the inner loop; top tech companies aim for developers to spend up to 70% of their time there (see the sketch after this list).
Developer Velocity Index benchmark: This survey measures an enterprise’s technology, working practices, and organizational enablement and benchmarks them against peers.
Contribution analysis: Assesses individual contributions to a team's backlog. This kind of understanding helps team leaders set clear expectations for output, which enhances performance.
Talent capability score: Describes an organization's unique knowledge, skills, and abilities based on industry-standard capability maps. Ideally, organizations should aim for a "diamond" distribution of skill, with most developers in the middle range of competency.
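Here is a minimal sketch of the inner/outer loop split mentioned above, assuming you already have per-activity time data (for example, from time tracking or calendar exports). The data format is hypothetical; the activity names and the ~70% target come from the article:

```python
# A hedged sketch of an inner/outer loop time split. The weekly hours
# below are made-up example data.

INNER_LOOP = {"coding", "building", "testing"}
OUTER_LOOP = {"integration", "release", "deployment"}

def inner_loop_share(hours_by_activity: dict[str, float]) -> float:
    """Fraction of tracked loop time spent on inner-loop activities."""
    inner = sum(h for a, h in hours_by_activity.items() if a in INNER_LOOP)
    outer = sum(h for a, h in hours_by_activity.items() if a in OUTER_LOOP)
    total = inner + outer
    return inner / total if total else 0.0

if __name__ == "__main__":
    week = {
        "coding": 18.0, "building": 3.0, "testing": 5.0,        # inner loop
        "integration": 4.0, "release": 2.0, "deployment": 2.0,  # outer loop
    }
    print(f"Inner-loop share: {inner_loop_share(week):.0%} (target: ~70%)")
```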
It's crucial to measure developer productivity correctly. Simple metrics, like lines of code or number of commits, can be misleading and may lead to unintended consequences; focusing solely on a single metric can incentivize poor practices. It's essential to move beyond outdated notions and recognize that the point of measuring is to improve software development.
Does this way of measuring developer productivity make sense to you? Reply to this newsletter or share your opinion in the comments.
Here is Kent Beck's answer to McKinsey:
Defining, Measuring, and Managing Technical Debt at Google
In a recent paper, Google engineers researched how to define, measure, and manage technical debt, using quarterly engineering satisfaction surveys to analyze the results.
Definition of Technical Debt
Google took an empirical approach to defining technical debt. They asked engineers about the types of technical debt they encountered and what mitigations would be appropriate to fix that debt. This resulted in a mutually exclusive, collectively exhaustive list of 10 categories of technical debt:
Migration is needed or in progress: This may be motivated by the need for code or systems to be updated, migrated, or maintained.
Code degradation: The codebase has degraded or not kept up with changing standards over time. The code may be in maintenance mode, needing updates or migrations.
Documentation on project and application programming interfaces (APIs): Information on your project’s work is hard to find, missing, or incomplete.
Testing: Poor test quality or coverage, such as missing tests or poor test data, results in fragility and flaky tests.
Code quality: Product architecture or project code was not well designed. It may have been rushed or built as a prototype/demo.
Dead and abandoned code: Code, features, or projects were replaced or superseded but never removed.
Team lacks expertise: This may be due to staffing gaps, turnover, or inherited orphaned code/projects.
Dependencies: Dependencies are unstable, rapidly changing, or trigger rollbacks.
Migration was poorly executed or abandoned: This may have resulted in maintaining two versions of a system.
Release process: The rollout and monitoring of production need to be updated, migrated, or maintained.
Measuring Technical Debt
Google measures technical debt through a quarterly engineering survey, asking engineers which of these categories of technical debt have hindered their work. The responses help Google identify teams that struggle with managing different types of technical debt. For example, they found that engineers working on machine learning systems face different types of technical debt than engineers who build and maintain back-end services.
They focused on three categories: code degradation, teams lacking expertise, and migrations being needed or in progress. They then explored 117 metrics proposed as indicators of one of these forms of technical debt. The result: no single metric predicted engineers' reports of technical debt.
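To illustrate how such a survey could be aggregated, here is a minimal sketch that counts, per category, the share of respondents who reported being hindered by it. The response format and the snake_case category names are illustrative, not Google's actual survey schema:

```python
# A hedged sketch of aggregating a quarterly technical debt survey.
# Each (hypothetical) response lists the debt categories that hindered
# an engineer's work that quarter.

from collections import Counter

DEBT_CATEGORIES = [
    "migration_needed", "code_degradation", "documentation", "testing",
    "code_quality", "dead_code", "lack_of_expertise", "dependencies",
    "poorly_executed_migration", "release_process",
]

def hindrance_rates(responses: list[dict]) -> dict[str, float]:
    """Share of respondents reporting each debt category as a hindrance."""
    counts = Counter()
    for r in responses:
        counts.update(set(r["hindered_by"]))  # de-duplicate within a response
    n = len(responses)
    return {cat: counts[cat] / n for cat in DEBT_CATEGORIES}

if __name__ == "__main__":
    survey = [
        {"team": "ml-infra", "hindered_by": ["dependencies", "testing"]},
        {"team": "backend", "hindered_by": ["code_degradation"]},
        {"team": "backend", "hindered_by": ["code_degradation", "testing"]},
    ]
    rates = hindrance_rates(survey)
    for cat, rate in sorted(rates.items(), key=lambda kv: -kv[1]):
        if rate:
            print(f"{cat}: {rate:.0%}")
```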
Managing Technical Debt
Over the last four years, Google has made a concerted effort to better define, measure, and manage technical debt. Some of the steps taken include:
Creating a technical debt management framework to help teams establish good practices.
Creating a technical debt management maturity model and an accompanying maturity assessment that evaluates and characterizes an organization's technical debt management process.
Organizing classroom instruction and self-guided courses to evangelize best practices, plus community forums to drive continual engagement and sharing of resources.
Building tooling that supports the identification and management of technical debt (for example, indicators of poor test coverage, stale documentation, and deprecated dependencies).
It's important to note that zero technical debt is not the goal at Google. The presence of deliberate, prudent technical debt reflects the practicality of developing systems in the real world. The key is to manage it thoughtfully and responsibly.
Read more about managing Technical Debt:
🎁 This week’s issue is sponsored by Product for Engineers, PostHog’s newsletter dedicated to helping engineers improve their product skills.
Subscribe for free to get curated advice on building great products, lessons (and mistakes) from building PostHog, and research into the practices of top startups.