Hey, Luca here, welcome to a weekly edition of the 💡 Monday Ideas 💡 from Refactoring! To access all our articles, library, and community, subscribe to the full version!
🎟️ Scale AI coding without sacrificing quality
On February 23rd I will join a webinar with Adam Tornhill and our friends at CodeScene about how to use AI to improve your code quality — instead of making it worse!
I am a big fan of CodeScene and all of their recent work, including their MCP server and their code health whitepaper.
If you want to hang out and have a chat there, you can RSVP for free below! 👇
1) 💊 Beware AI productivity “placebo”
Research on productivity is tricky, because most data is just… self-reported.
The way we usually investigate AI’s impact is by asking developers how they feel about it — and engineers are generally satisfied. However, we also know that the only time we tried to measure productivity using proper control groups, developers thought they were more productive, while they actually were not.
In other words, it’s productivity placebo: AI triggers the feeling of progress without guaranteeing real output.
Have you ever fallen into a multi-hour bug-fixing rabbit hole, tried everything with nothing working, given up for the day, and then fixed it in 5 minutes the next morning?
This happens because relentless bug fixing is like a fake state of flow. You enter a workflow with quick feedback loops (change something → see if it works), and it feels cognitively easy to sustain, so you can go on for hours. But it feels easy precisely because you are not truly engaged — you are working on autopilot, and unsurprisingly, not making real progress.
There is a way to work with AI that feels exactly like this. You feel productive, but you are not.
So how do you escape it? To me it’s about good work hygiene:
Stay engaged — try to understand what the AI is doing, provide feedback, and steer the output actively.
Do not multi-task — pick one of two extremes. Either set things up so the agent runs autonomously for a long stretch, freeing you to meaningfully do something else (e.g. a meeting), or move in short iterations that keep you focused on the task without wandering off to email, browser, or Slack. Frequent context switching makes us dumber and less capable of catching AI mistakes.
Take frequent breaks — get up for 5 minutes every 30 (pomodoro-style), take a quick walk, and get back to work. Avoid your own brain rot by forcing yourself to reset frequently.
I covered this recently in this article 👇
2) 🏋️‍♂️ Hard work is a lagging indicator
There is a big feud in tech over hard work. It pits teams who work long hours / 996 / hustle all the time against those who favor a more measured approach, in the style of “slow is smooth, smooth is fast”.
It has become a culture war — people attach part of their identities to one camp, and often disdain the other.
I believe this whole conversation is based on a fallacy.
In my experience, motivated people naturally work hard — because they care, not because you told them so. When you hire talented people and give them the right amount of direction and agency, the result is... they work hard!
Working hard feels fun when people do so because they enjoy pushing themselves. It’s not fun when they do it out of fear, or to seek approval. That’s where teams crash and burn.
So the question is: have you created the conditions where people want to work hard? Or do they do it because they feel they need to?
Hard work works best as a lagging indicator — the natural output of a healthy, high-agency environment. You can’t force it upstream, but you can create the conditions for it to happen downstream.
I talked about this (and a lot more) with Greg Foster, co-founder of Graphite, in this recent piece 👇
3) 🎙️ Strategic quitting
In August I interviewed Annie Duke on the podcast. Annie is one of the world’s top experts in decision-making and we talked about it at length during the interview.
When making decisions under uncertainty, Annie argued that one of the most important skills is quitting — yet it’s severely undervalued due to cultural associations with failure.
The core logic is simple: since we’re neither omniscient nor have time machines, we’ll always learn new information after making decisions. Sometimes this information suggests we should change course.
“What more important skill could you have than the ability to quit what you’re doing? You make a decision under conditions of uncertainty, afterwards you find out new stuff... and then you get to stop.”
People don’t quit when they should because they:
Associate quitting with failure and lack of character
Fear judgment from others (though research shows people actually respect good quitting)
Fall victim to sunk cost fallacy
Annie uses the extreme example of Shavano Keith, who broke her leg at mile 8 of the London Marathon but continued running for 18 more miles despite medical advice to stop.
You can’t think your way out of cognitive biases, but you can create processes that help. Two of the most effective ideas are:
📋 Kill criteria — define in advance what signals would indicate it’s time to quit.
👥 Accountability partners — have others help enforce your kill criteria.
Annie explains that there’s a huge psychological difference between encountering a warning signal on the fly versus having pre-committed to act on that signal. The signal is identical, but your response will be dramatically different.
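Pre-commitment can even be made literal. Here is a minimal sketch of kill criteria applied to a bug-fixing session — all names and thresholds are mine for illustration, not from the episode:

```python
import time
from dataclasses import dataclass, field

@dataclass
class KillCriteria:
    """Pre-committed stop rules, decided before the session starts.
    Thresholds here are illustrative examples, not recommendations."""
    max_minutes: float = 60          # hard time box for the session
    max_failed_attempts: int = 10    # "change something → see if it works" loops
    started_at: float = field(default_factory=time.monotonic)
    failed_attempts: int = 0

    def record_failure(self) -> None:
        self.failed_attempts += 1

    def should_quit(self) -> bool:
        # The signal is mechanical, so there is nothing to rationalize away
        elapsed_min = (time.monotonic() - self.started_at) / 60
        return (elapsed_min >= self.max_minutes
                or self.failed_attempts >= self.max_failed_attempts)

# Usage: consult the criteria inside the loop instead of "one more try".
criteria = KillCriteria(max_minutes=60, max_failed_attempts=10)
for attempt in range(100):
    fixed = False  # stand-in for: apply a change, run the tests
    if fixed:
        break
    criteria.record_failure()
    if criteria.should_quit():
        print("Kill criteria hit — stop, write down state, ask for help.")
        break
```

The point is not the code itself but the ordering: the thresholds are set before you are emotionally invested, which is exactly the pre-commitment Annie describes.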
Here is the full interview with Annie:
You can also find it on 🎧 Spotify and 📬 Substack
And that’s it for today! If you are finding this newsletter valuable, subscribe to the full version!
1700+ engineers and managers have joined already, and they receive our flagship weekly long-form articles about how to ship faster and work better together! Learn more about the benefits of the paid plan here.
See you next week!
Luca