7 Comments
Jun 19 · Liked by Luca Rossi

Only thing with the "after the fact review": who decides if, what, and when something gets reviewed?

author

My bet would be:

1) Set team rules (e.g. docs update == no review; DB migration == blocking review; new hires == blocking review, etc.) and general guidelines

2) Let engineers decide

Then, whenever something goes wrong, do a post-mortem and possibly update the rules.

Does this make sense? 🤔
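To make the rules above concrete, here is a minimal sketch of how such a rule table could be encoded as a small gating function; all rule names and input fields are hypothetical, not part of any real tool:

```python
# Hypothetical sketch: map a change's attributes to a review mode,
# following the team rules above. Field names are illustrative only.

def review_mode(change: dict) -> str:
    """Return 'none', 'async', or 'blocking' for a change."""
    # Blocking rules take priority (risky or learning-critical changes).
    if change.get("has_db_migration"):
        return "blocking"
    if change.get("author_is_new_hire"):
        return "blocking"
    # Docs-only updates can merge without review.
    if change.get("docs_only"):
        return "none"
    # Default: non-blocking, after-the-fact review.
    return "async"

print(review_mode({"docs_only": True}))         # none
print(review_mode({"has_db_migration": True}))  # blocking
print(review_mode({}))                          # async
```

The point of keeping it this simple is that the rule set stays easy to revisit after each post-mortem.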


Absolutely. Just wanted to be mindful that certain things fall through the cracks when it's decided to "do them later". :)

On a different note, I'm currently experimenting with Graphite, for stacked PRs. I'm liking it so far. Very smooth experience and it really encourages small PRs.

Jun 19 · Liked by Luca Rossi

Great article, Luca.

It's amazing how you produce such clarity from a complex, multifaceted problem.

I really like this line and it has definitely given me food for thought about my own team's process.

"Counterintuitively, having consistently helpful and valuable reviews is a smell that you can do better in other parts of the process."

author

Thank you so much, Aidan! It was a hard article to write; it took a lot of time to clear my thoughts on this.

Jun 23 · edited Jun 23

Loved the article, but I think it's missing a few important points on the non-blocking reviews:

1. If you make code reviews non-blocking, eventually incentives within the business can (and will) push teams to essentially skip almost all reviews. It's much more expensive to fix the "no reviews" or "bad reviews" problem later, even though it feels faster in the moment.

2. Code reviews are super useful as a mentoring system, not just as a method of assuring code quality. If you work on a team entirely composed of experienced people, you _might_ be able to rely on their judgment about what does or doesn't need a review. But even for myself (and I've done 10,000+ code reviews and was the maintainer of Google's code review guidelines) I would be cautious about that, for Reason #1 above: I know that even for me, there will be times when I think "this is probably fine" and it really did need a review. And if you work on a team with a lot of junior engineers, they likely don't have the experience necessary to make that judgment correctly in the first place.

3. Standards like SOC 2 require a human being to look at every change before it goes to production. And as I understand it, the auditors aren't very forgiving or willing to listen to "well, I think this one was okay to ship without review." (I'm not endorsing the auditors' viewpoint, just pointing out that we very often have to deal with it.)

All that said, I've often wondered myself whether it would be better to remove blocking reviews, because I would rather have a culture where people can fix things quickly and feel empowered to do so. I know that in my personal development, I will merge a bad change and then fix it immediately after I see a problem. But that depends on a very deep personal drive I have to clean up code. (Like, I will happily refactor code all day, and have done so for months at a time, quite happily.)


Thanks for the article, Luca.

I'm a Scrum Master with no technical background. I've been working in a new team for a couple of months now, and they have a code review process that I think could be improved. The team is not the real owner of the code base, so for every PR they need a review from a member of the team plus a review from one of the developers on the team that owns the code.

It takes ages, and I don't really know if all these reviews add real value (some devs complain that review requests are only about cosmetic changes, we still have a lot of basic defects caught by manual QA after the review, coding standards don't seem to spread to new hires, ...).

I would like to challenge the status quo with metrics. I have explained my thoughts to the team and asked them how we could measure the efficiency of our review process. They told me there is no real/easy way to do it, but I'm not convinced. I can understand that measuring review "efficiency" can scare both teams. My goal is to start the discussion and challenge the status quo, because I think we can make life easier for everybody.
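For context, here is the kind of rough metric I have in mind, sketched from PR timestamps; all field names and numbers are hypothetical, not from our real tooling:

```python
# Hypothetical sketch: two common review metrics computed from
# PR timestamps. Field names and sample data are illustrative only.
from datetime import datetime

prs = [
    {"opened": "2024-06-01T09:00", "first_review": "2024-06-02T15:00",
     "merged": "2024-06-03T10:00"},
    {"opened": "2024-06-02T11:00", "first_review": "2024-06-02T13:00",
     "merged": "2024-06-02T17:00"},
]

def hours(start: str, end: str) -> float:
    """Elapsed hours between two ISO-like timestamps."""
    fmt = "%Y-%m-%dT%H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 3600

# Pickup time: how long a PR waits for its first review.
pickup = [hours(p["opened"], p["first_review"]) for p in prs]
# Cycle time: open-to-merge, including the double-review round-trips.
cycle = [hours(p["opened"], p["merged"]) for p in prs]

print(f"avg pickup time: {sum(pickup) / len(pickup):.1f}h")  # avg pickup time: 16.0h
print(f"avg cycle time:  {sum(cycle) / len(cycle):.1f}h")    # avg cycle time:  27.5h
```

Even something this crude would let us compare the cost of the two-reviewer rule against the defects QA still finds.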

Could you help with that?

Thanks
