Hey, Luca here, welcome to a weekly edition of the 💡 Monday Ideas 💡 from Refactoring! To access all our articles, library, and community, subscribe to the full version:
Resources: 🏛️ Library • 💬 Community • 🎙️ Podcast • 📣 Advertise
1) 🌱 Review your notes weekly
Most people who know me are aware I’m a note-taking nerd. And as a full-time writer, I am afraid I’ve become even more insufferable about it.
I usually avoid giving specific advice—everyone should find the tools and workflows that work for their life. But there’s one thing I recommend to everyone: do a weekly review of your notes to 1) organize them, and 2) trash what doesn’t deserve long-term storage.
During the review, I go through everything I captured that week. The key is to separate reviewing from capturing, and to process everything in one batch. When you do this, interesting things happen:
Many things that seemed interesting days ago no longer do. I delete over 60% of what I capture.
When you’re in “organization mode”, you find more connections and work faster than when you organize hastily on the spot.
It becomes a genuinely pleasurable activity—like a mini time capsule where each idea feels like a rediscovery.
I also use this time to clear email, DMs, downloads, and tasks—everything that piles up during the week—so each Monday starts with a clean slate.
If you set personal goals, it’s also the perfect time to check progress and plan your next actions.
The weekly review is by far my most important routine. I talked more about it (and my whole note-taking nerdiness) in this recent article:
2) 🔨 Explore → Embrace → Empower
In October we released our biggest industry report ever, about the state of AI adoption in real-world engineering teams.
We connected the dots about how 400+ teams use AI, and we came up with a three-step process: Explore → Embrace → Empower
🌱 Explore
Adoption begins with personal exploration—engineers getting familiar with AI tools and their ergonomics. This is largely bottom-up. Managers can encourage it by providing tools without performance expectations, identifying champions, and creating knowledge-sharing ceremonies.
Early wins include small automations, AI-written features, minor refactoring, and more docs and tests.
🪴 Embrace
Once basic proficiency exists, graduate AI usage into team practices: AI doing first-pass code reviews, better testing and documentation standards, improved meeting summaries.
Critically, these must become actual standards — e.g. if AI makes testing easier, enforce tests in PRs. Some of this might feel risky, but if it doesn’t feel risky, is it real change?
Create feedback loops through retros and 1:1s to continuously tweak what’s working.
🌳 Empower
What do we do with the residual capacity AI creates?
The best teams use it to expand people’s scope—engineers going full-stack, PMs creating prototypes, designers trying frontend. Benefits include reduced coordination costs, higher velocity, and stronger growth.
AI keeps improving, so we keep adapting. Let’s get to work!
You can find the full report below 👇
3) 🧠 Create mode vs Review mode
Beyond productivity gains, I’ve been paying attention to how using AI makes me feel about doing work.
And the concerning part is that I often feel... dumber. Or more precisely, less engaged — my brain enters a lazier mode that’s hard to steer back from.
I call it create mode vs review mode:
🎨 Create mode
Create mode is when you produce output through some non-obvious process—writing an algorithm, or an essay, or an important email.
You make connections between ideas and turn them into something new. It feels draining but rewarding, like a workout. Every time you create these connections, you’re rewiring your brain, improving your mental models.
🔬 Review mode
Review mode is when your brain compares a draft against rules about what it should look like and improves it.
It feels 10x cheaper energy-wise, and I feel that, as humans, we’re genuinely good at it — spotting details and steering output tactically.
But review mode has limits. Most critically, it’s hard to radically change course once you have a draft. The draft becomes an anchor. Daniel Kahneman, in Thinking, Fast and Slow, called this the anchoring bias, and provided a hilarious example 👇
In an experiment, Kahneman asked participants to estimate the percentage of African countries in the United Nations. Before answering, they spun a wheel that landed on either 10 or 65. Those who saw 10 guessed an average of 25%, while those who saw 65 guessed 45%. The initial number, despite being random (and they knew it!), significantly influenced their estimates.
I feel this is the trap with AI assistance. Having a first version in front of you restricts what you can create yourself. There’s a mix of anchoring, sunk cost, and genuine laziness that kicks in.
So what’s the solution?! I explored some heuristics in this recent article 👇
And that’s it for today! If you are finding this newsletter valuable, subscribe to the full version!
1700+ engineers and managers have joined already, and they receive our flagship weekly long-form articles about how to ship faster and work better together! Learn more about the benefits of the paid plan here.
See you next week!
Luca