The AI Productivity Paradox
Exploring why AI often falls short of expectations, and what good teams are doing to bridge the gap.
Hey, Luca here! This is a weekly essay from Refactoring!
Last week I finally caught up with the latest DORA research about AI. It is a massive, 140-page report exploring the state of AI-assisted development, and it gave me a lot to think about.
By now, there are a lot of reports out there that try to figure out how engineering teams are doing with respect to AI. Broadly speaking, there are three main areas that are important to measure:
🌱 Adoption — how much engineers are using AI tools.
🪴 Productivity — how much AI is making individuals and teams work faster.
🌳 Impact — how much all of this turns into business outcomes.
These work as increasing levels of maturity, and increasingly better proxies for true value: adoption by itself is not very useful, productivity is somewhat better, and impact is what we truly care about.
The problem is: these topics are also in ascending order of how hard they are to measure. Much of the available data is fuzzy, self-reported, or both, which makes it hard to trust.
So today I want to cover some of the ideas that keep surfacing in this type of research, common pitfalls to avoid, and mental models I have seen the best teams use to trend in the right direction: the one that goes from "adoption is high" to "impact is high".
Here is the agenda:
Productivity placebo — are you more productive, for real?
Bottlenecks — is the whole pipeline more productive?
Quality traps — are you more productive at the expense of something else?
Collaboration sucks — an unpleasant but useful mental model.
Making things simpler — how do you trend in the right direction?
Let's dive in!


