Survey: how do you use metrics to work better? 📊 (win $150!)
We need your help to create a big industry report! Let's connect the dots between developer experience, productivity, and profitable engineering.
TL;DR — we are working on a big survey 🗳️ to learn how engineering teams use data and metrics to improve the way they work.
We need your help to figure this out! 👇
The best insights will be quoted in the final report.
We are also giving away 20 paid memberships ($150 each!) 🎁 to people who answer the survey! We will draw the winners next month, as soon as the survey closes.
In recent years we have seen the rise of research and frameworks to help measure (and improve) engineering productivity.
For the most part, this is fantastic work: DORA and SPACE, among many others, have driven healthy conversations about what “productivity” means, what good developer experience looks like, and how teams should aspire to operate.
We have written about this many times on Refactoring, and have had countless conversations with people in the community.
But as much as this space has done wonders to create awareness about these topics, and is fighting the good fight of making engineering processes more data-driven, I have also found that it suffers from two issues:
🤹 Lack of cohesion — by now there are several frameworks out there, each proposing its own set of KPIs and each covering only a small slice of your team’s work. DORA metrics, for example, are great, but they only cover delivery (see the sketch after this list). You can’t measure the success of your engineering org with a single, simple set of metrics.
🔧 Lack of implementation details — for the most part, these works lack a real-world touch. They bring good theory but are light on implementation: how do you act on these numbers? Who should be involved? What are the ceremonies? What’s the cadence?
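To make the “delivery only” point concrete, here is a minimal sketch (with made-up deployment records and field names, purely for illustration) of how three of the four DORA metrics could be computed. Everything here is about shipping: nothing in these numbers speaks to planning, code quality, or developer experience.

```python
from datetime import datetime, timedelta

# Hypothetical deployment records (field names are assumptions for illustration):
# when the change was first committed, when it reached production, and whether
# the deployment caused a failure.
deployments = [
    {"committed_at": datetime(2024, 3, 1, 9, 0),  "deployed_at": datetime(2024, 3, 2, 15, 0),  "failed": False},
    {"committed_at": datetime(2024, 3, 3, 11, 0), "deployed_at": datetime(2024, 3, 3, 17, 30), "failed": True},
    {"committed_at": datetime(2024, 3, 5, 10, 0), "deployed_at": datetime(2024, 3, 8, 9, 0),   "failed": False},
]

# Deployment frequency: deployments per week over the observed period.
period_days = (max(d["deployed_at"] for d in deployments)
               - min(d["deployed_at"] for d in deployments)).days or 1
deploys_per_week = len(deployments) / (period_days / 7)

# Lead time for changes: average time from commit to production.
lead_times = [d["deployed_at"] - d["committed_at"] for d in deployments]
avg_lead_time = sum(lead_times, timedelta()) / len(lead_times)

# Change failure rate: share of deployments that caused a failure.
failure_rate = sum(d["failed"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deploys_per_week:.1f}/week")
print(f"Lead time for changes: {avg_lead_time}")
print(f"Change failure rate: {failure_rate:.0%}")
```

Useful numbers, but clearly only one slice of the picture.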
So, many tech leaders find there is a gap between the world of metrics and their teams’ reality, and don’t know how to bridge it.
To quote Mike Tyson: “Everyone has a plan until they get punched in the face.” And boy do engineers and managers get punched in the face sometimes (figuratively speaking — usually).
We want to help bridge this gap. To do this, I believe we need two things:
🖼️ The big picture — existing frameworks are awesome; we don’t need to create new ones. But I believe we need to put existing metrics on a bigger map and figure out how they relate to each other, and what the scope and boundaries of each are.
🏈 Playbooks — we need to understand how teams can use this data to improve. In real life. With details.
The first part is important to create a shared vocabulary, because unfortunately we are not there yet. Some concepts, even popular ones like cycle time, have different definitions depending on who you ask. Others are simply blurry, like the line between developer experience and developer productivity.
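As a concrete example of the vocabulary problem, here is a small sketch (with hypothetical timestamps and field names) that applies two common but different definitions of cycle time to the same piece of work and gets two different answers.

```python
from datetime import datetime

# Hypothetical timeline for a single change; the field names are made up for illustration.
work_item = {
    "in_progress_at": datetime(2024, 3, 1, 9, 0),    # ticket moved to "In Progress"
    "first_commit_at": datetime(2024, 3, 1, 14, 0),  # first commit pushed
    "merged_at": datetime(2024, 3, 4, 10, 0),        # pull request merged
    "deployed_at": datetime(2024, 3, 5, 16, 0),      # change live in production
}

# Definition A: cycle time = first commit -> production (often used by delivery-metric tools)
cycle_time_a = work_item["deployed_at"] - work_item["first_commit_at"]

# Definition B: cycle time = work started -> merged (often used in issue-tracker reports)
cycle_time_b = work_item["merged_at"] - work_item["in_progress_at"]

print(f"Definition A (first commit -> deploy): {cycle_time_a}")  # 4 days, 2:00:00
print(f"Definition B (in progress -> merged):  {cycle_time_b}")  # 3 days, 1:00:00
```

Same work item, two honest answers that differ by a day: without agreeing on definitions first, comparing numbers across teams or tools is meaningless.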
The second part is crucial to making this whole conversation valuable. How do the best teams embed data in their processes? What do small startups do? What about big tech? Is there a difference between remote and co-located teams?
To me, the only way to get past the “it depends” wall is to bring examples.
We want to create the best work we have ever done on this, so we are doing two things:
1) We are partnering with LinearB 📊
LinearB is an industry leader in data-driven engineering. They work with thousands of companies, I have known the founders and the team for many years, and we have worked together in the past.
They are the perfect partner for this because they bring expertise and a treasure trove of data from their customers, which we will combine with that of our community 👇
2) We are creating a big community survey 🗳️
Next week we will launch a broad survey in the newsletter and the community to collect your stories about how you improve your engineering team: what numbers you track, how you use them, who is involved, and more.
We want to get real-world stories and create playbooks for other tech leaders to use.
The goal is to develop a practical approach to improving your engineering maturity through data.
The results of the survey will be published on Refactoring. It will be a deep industry report, with hundreds of contributions, including hard numbers as well as quotes and stories from tech leaders all around the world.
Share your stories! 👇
UPDATE: we created the survey! You can participate below.
Looking forward to hearing your stories!
Luca
Oh boy... the #1 issue IMO isn’t the metrics or the technical implementation—it’s the human side. I always make sure to include this in the Risk section of my engineering metrics reports: 'Misinterpretation of the information conveyed by these metrics by stakeholders outside the team.'
Keeping metrics from being weaponized is a challenge. And I’d be cautious about introducing them in cultures where engineering is treated as a cost center—it can do more harm than good.
There is some interesting research in this space from my colleagues at Google:
Developer Productivity for Humans, a 7-part series:
Part 1: https://ieeexplore.ieee.org/document/9994260
Part 2: https://ieeexplore.ieee.org/document/10043615
Part 3: https://ieeexplore.ieee.org/document/10109339
Part 4: https://ieeexplore.ieee.org/document/10176199
Part 5: https://ieeexplore.ieee.org/document/10273824
Part 6: https://ieeexplore.ieee.org/document/10339107
Part 7: https://ieeexplore.ieee.org/document/10372494