Shipping fast and often is the #1 trait shared by top-performing engineering teams. Elite teams release multiple times a day, and it takes them less than one hour to go from code committed to code successfully running in production.
This idea was made popular in 2018 by the Accelerate book — a thorough research study that convincingly linked good business results to good DevOps metrics.
I also wrote about it in the past:
Three years after Accelerate, this is even more true. As highlighted in the recent State of DevOps report by Google, the share of "elite teams" jumped from 7% in 2018 to 26% in 2021.
How is this related to code reviews? 👇
🔍 Code Reviews
Code review is the process of having one or more developers review another developer's code before release.
Code reviews have two main goals:
Improve quality — of the code before it is released
Share knowledge — across the team
These are both crucial goals. Better quality translates into fewer failures and less maintenance down the line. Sharing knowledge makes teams more resilient and helps engineers grow faster.
This has been proven time after time. Still, when it comes to the need for speed, reviews are one of the biggest offenders.
When you think about it, we may work hard to shave 5 to 10 minutes off our deploy pipeline, and then add hours of delay as the code is stuck in review.
For this reason, the whole process is still somewhat controversial, with many proposals to make it better.
These proposals are not necessarily at odds with the two main goals. I am a strong proponent of good code reviews, but I also believe the standard PR process is wildly inefficient.
This article is about how we can optimize reviews and PRs to be as fast as possible, without compromising on quality and knowledge sharing.
It includes:
🗺️ Tactics — practical advice on how to improve your code review process.
📋 Guidelines — how elite companies like Google, Yelp, and GitLab do code reviews.
🔨 Tools — to help you make faster and better reviews.
📚 Readings — the best articles I know about the topic.
Let’s go 👇
🗺️ Tactics
Here are seven actionable tactics you can use to do good reviews, faster. As you will see, they are more about the process than the review itself.
1) Keep them small
Doing small PRs is one of the best ways to have a short turnaround.
How small? In my experience, if you break down work properly, 90% of the time you can issue PRs that are less than 200 lines of code.
You can go even further: at Google, 90% of code changes are <24 lines of code.
Small PRs improve both speed and quality:
Speed — reviewers need to block out less time to review it, so they are more likely to get to it sooner.
Quality — fewer lines of code means the change is easier to understand, which leads to better reviews.
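If a change has already grown too large, you can often still carve small PRs out of it after the fact. Here is a minimal sketch with plain git (the branch name and commit message are hypothetical):

```bash
# Starting from a big set of uncommitted local changes,
# carve out one small, self-contained PR at a time.
git checkout -b extract-db-repository    # hypothetical branch name
git add -p                               # interactively stage only the related hunks
git commit -m "Extract DB access into a repository class"
git push -u origin extract-db-repository # open a small PR from this branch
# Repeat with the remaining hunks for the next PR.
```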
2) Do them as soon as possible
Even if you want code reviews to be asynchronous, you should be adamant about doing them as soon as possible.
When the code is stuck for hours (or days!), the author needs to switch to something else while they wait for it — and when the review is ready, they need to context switch again to make changes or fix bugs.
This leads to cascading delays that ultimately have a strong impact on your velocity.

Here are two ways you can help your team have a quick review turnaround:
🚨 Set notifications — simple notifications go a long way. Tools like LinearB or Botany can notify developers when a review is required on their side. We ran an experiment with Pull Panda (now retired 😢) years ago, and it halved our review time.
🎯 Set goals — analytics tools like Waydev and LinearB help you discover how long it takes for your team to review code. Use them to assess your current situation, set improvement goals, and discuss them in retros.
3) Automate parts of the review
Make the reviewer do less work by automating part of their job. A few ways:
Include tests — comprehensive tests make the change easier to understand and verify, and they vastly reduce the reviewer's work.
Apply linting — run static analysis automatically in your CI pipeline, so reviewers don't have to catch stylistic errors, suspicious constructs, and the like (see the sketch after this list).
Keep a list of common mistakes — create a space in your docs to write down common comments made during reviews, so the reviewer can point to them instead of giving detailed explanations every time. Here is one such document created by the Golang team: Code Review Comments.
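As an example of the linting point above, here is a minimal sketch of a CI lint step. It assumes GitHub Actions and a Python codebase checked with flake8; swap in the linter for your stack:

```yaml
# .github/workflows/lint.yml (minimal sketch, adapt to your stack)
name: lint
on: [pull_request]
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install flake8
      - run: flake8 .  # a failure blocks the PR, so style nits never reach the reviewer
```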
4) Use stacked diffs and trunk-based development
Many big companies like Facebook and Google have a code review workflow that is not based on PRs, but rather on stacked diffs.
Stacked diffs are a way to have your code reviewed without firing up a PR. You take a portion of your local changes and make it available for review. You can stack these changes into multiple requests for review, so you never get stuck waiting for reviews.

This workflow, other than being faster, allows you to (mostly) get rid of branches and work in a trunk-based development mode. There is convincing evidence that teams who adopt trunk-based workflows achieve higher velocity and shorter cycle time.
If this seems crazy to you, consider that engineers at some of the best companies in the world commit directly to master all the time (and often on a monorepo), unless they have a reason not to.
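Even before adopting dedicated tooling, you can approximate stacked diffs with plain git branches stacked on top of each other. A rough sketch (branch names are hypothetical):

```bash
# Build a stack: each branch holds one small, reviewable change.
git checkout -b part-1-db-schema main
# ...commit change 1, push, request review #1...
git checkout -b part-2-api-endpoint  # stacked on top of part-1
# ...commit change 2, push, request review #2 while #1 is still pending...

# When part-1 is approved and merged into main,
# replay part-2 on top of the updated main:
git checkout part-2-api-endpoint
git rebase --onto main part-1-db-schema
```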
The bad news is that to make this workflow happen you need some tooling, and unfortunately there isn’t much today. Here are the two main things you can try:
Phabricator — originally developed as an internal tool at Facebook, it then became an open source project, but today it is no longer maintained (you can still self-host it). There is a community attempt to maintain it at Phorge, but its future is unclear.
Graphite — a new, promising tool that implements the stacked diffs workflow. As of today, you can join the waitlist ¯\_(ツ)_/¯.
If you want to learn more about stacked diffs and trunk-based dev, you can check these articles:
📑 Stacked Diffs vs Pull Requests — by Jackson Gabbard
📑 DevOps tech: Trunk-based development — by Google
5) Consider the Ship / Show / Ask framework
If code reviews are about 1) improving quality, and 2) sharing knowledge, should they always be required? Do all changes have room for improvement, or bring new knowledge to share?
Rouan Wilsenach argues you are better off adopting the Ship / Show / Ask framework, which considers three cases:
🚢 Ship — You make the change directly on your mainline, without asking for a code review. This works when you fix unremarkable bugs, add features using established patterns, or make collateral changes like improving docs.
🔍 Show — You create a PR, run the full CI pipeline, merge it without anyone's approval, and then ask for feedback. This way you release quickly but still create space for discussion. It works when you want to share knowledge but don't necessarily need feedback, or when the feedback is valuable but shouldn't block the release.
❓ Ask — You make changes on a branch, open a PR, and wait for the review before merging. This is the standard process most companies adopt nowadays.
This framework is about choosing the best strategy each time, based on the type of change you are making. It encourages active reflection instead of always applying the same pattern.
In my professional life I have pretty much always worked in “Ask” mode. If you want my take, I wouldn’t be comfortable with the “Ship” mode unless we are talking about really small, negligible changes. However, I see how the “Show” pattern might be a good way to get the best of both worlds in many scenarios.
6) Do Pair Programming
Pair programming completely removes the need for code reviews. When you pair, your code is automatically reviewed by your partner, and it can be merged as soon as it’s done.
It also fixes the paradoxical problem that the more engineers are conscientious about code reviews, the more they end up despising them 👇
"Reviews, done right, have all the painful parts of a software change: understanding what the change is for, loading up the relevant code and tests into working memory, getting my local environment up to see the change, making the tests run. They have none of the fun parts: refactoring to clarity, changing code and seeing a difference. They take hours of time and all my concentration away from whatever it is that I’m personally trying to do" — Chelsea Troy
Pairing is my favorite way to improve the review process. You can also check out my previous article about it:
7) Write a good PR description
Including a good description goes a long way to helping the reviewer understand what’s there and what they should review.
But what does a good PR description look like?
We had a great discussion about this in the community over the last few days. Here are a few contributions 👇
My approach is very simple:
– A summary of what the CR is about.
– A testing summary on how and what was tested.
– It all should be linked to your project story/task. If the story/task is small enough, the CR should be proportionate to it. IMHO, if the CRs are getting too big, the problem is probably in the story/task breakdown.
— Roberto, Engineering Manager at Amazon
We write very small stories so the PRs are self-explanatory. In addition we try to apply pair programming to almost every story.
— Pasqualino, CTO at MioAssicuratore
We used custom templates with checklists for PRs (and tickets). The biggest benefit I saw, once devs got familiar with them, was robustness and code quality. By having to tick off a box, there was a sense of responsibility to fulfil the requirements.
— Corvin, Founder at Jaco
I tried different approaches; in the end we got good results by asking for a definition of done on every ticket. This helps us capture the context, the input, and the output of the activity. Every task should always be understandable.
— Gabriele, CTO at VoxLoud
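Putting these contributions together, here is a minimal sketch of a PR template along the lines Roberto and Corvin describe. On GitHub it would live at `.github/pull_request_template.md`; the sections and checklist items are just examples:

```markdown
## What
<!-- One or two sentences: what does this change do? -->

## Why
<!-- Link to the story/task this implements -->

## How was it tested?
<!-- Unit tests, manual steps, screenshots... -->

## Checklist
- [ ] Tests added or updated
- [ ] Docs updated where needed
- [ ] No unrelated changes bundled in
```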
📋 Guides
Here are a few public guidelines from great companies about how they do code reviews:
Google — Code Review Developer Guide. A thorough guide that includes links to further docs to deep dive.
Yelp — Code Review Guidelines. Great guide that includes both instructions on how to actually review code, and on how to optimize the whole process.
GitLab — Code Review Guidelines. This is a different take because GitLab is an open source project, which adds tons of complexity. It includes issues like balancing the load on maintainers, how to identify domain experts, and more.
🔨 Tools
Here are a few tools that can help you improve your review process.
🔨 LinearB — it improves your dev process by sending you updates and notifications about things you should do, including code reviews. Also, if you are a premium subscriber of Refactoring you get a nice deal! 🎁
🔨 Graphite — an up-and-coming tool that allows you to adopt the stacked diffs workflow for your reviews. Right now you can only join the waitlist, but the videos are very promising.
🔨 Waydev — gives you plenty of analytics about your dev process, including code reviews. It helps you identify bottlenecks and speed up the process. If you are a premium subscriber, check it out in the deals section! 🎁
📚 Readings
📑 Code Reviews (Synchronous and Asynchronous) — in this great article, that I referenced last week too, Eduardo Ferro describes different approaches to code reviews, with upsides and downsides.
📑 Stacked Diffs vs Pull Requests — the best article you can find about the famous but elusive “stacked diffs”, and why they mark an improvement over the regular PR workflow.
📑 Ship / Show / Ask — thought-provoking piece that encourages you to move past dogmatic approaches and reflect on the value of each single review.