Where I work we perform code review on every pull request before it can be sent to production, unless we have been pairing on it (and even then a fresh re-read is often useful), and I’ve been thinking a lot recently about all the different aspects of code review. What follows is a kind of incomplete taxonomy, or layer model, of code review, as much for my own reference as anything else.

0: Comprehensibility

I’ve numbered this ‘0’ because if the code isn’t comprehensible I can’t really progress much further. I regard comprehensibility as the primary marker of good code. All the perfectly implemented patterns, all the idealised style, the total absence of anything that anyone might label a “code smell”: all of it is irrelevant if I can’t read the code and have it make sense.

When I hit a piece of code I can’t make sense of, I like to get the author to explain it, ideally in person. In the process of verbalising what the code is doing, the author will often figure out a nicer way of structuring it so that it makes more sense or is simpler, or will simply name things better (naming things well really is one of the hardest things we do). It’s a bit like rubber-duck debugging.
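A hypothetical before-and-after of the kind of change that often falls out of that conversation (the names and domain here are invented):

```ruby
# Before: the reader has to reverse-engineer the intent from the implementation.
def calc(d, n)
  d.select { |x| x > n }.size
end

# After: the same behaviour, but the names carry the meaning.
def count_orders_over_threshold(order_totals, threshold)
  order_totals.count { |total| total > threshold }
end

count_orders_over_threshold([5, 20, 50], 10) # => 2
```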

This is such an important element of code review because if the code is not readable now, it’s only going to be worse in 6 months or a year when someone needs to modify it and the original author is no longer around.

Comprehensibility applies to the PR itself too. If I can’t make sense of the commit history then I’m going to find it harder to make sense of the code. Git is your friend: rewrite the history until it is a narrative that makes sense to the reviewer, even if it is a tall tale. No-one needs to see your mistakes, patches, and fixes.

1: Testing

While there are times when testing might be overkill, in general we want to make sure that the tests are complete and, like the code itself, comprehensible. Ideally you should be able to read the specs and understand what the code is supposed to do.
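As a rough sketch of what I mean, using RSpec, with an invented class purely for illustration — the describe/context/it descriptions should read like documentation of the behaviour:

```ruby
require "rspec"

# A minimal, invented class just to give the spec something to describe.
class DiscountCalculator
  def discount_for(previous_orders:)
    previous_orders >= 10 ? 0.1 : 0
  end
end

RSpec.describe DiscountCalculator do
  context "when the customer has no previous orders" do
    it "applies no discount" do
      expect(described_class.new.discount_for(previous_orders: 0)).to eq(0)
    end
  end

  context "when the customer has placed ten or more orders" do
    it "applies the loyalty discount" do
      expect(described_class.new.discount_for(previous_orders: 10)).to eq(0.1)
    end
  end
end
```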

2: Completeness

This is where the tests come in handy. It’s often easier to see from the specs whether the code is covering all the eventualities.
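To make that concrete, here’s a hypothetical spec whose examples read as a checklist of eventualities; a reviewer can spot a missing case (negative numbers, say) from the descriptions alone, without tracing the implementation:

```ruby
require "rspec"

# An invented helper, purely to show how the spec reads as a checklist.
def parse_quantity(input)
  return 0 if input.nil? || input.strip.empty?
  Integer(input)
end

RSpec.describe "parse_quantity" do
  it("returns 0 for nil input")       { expect(parse_quantity(nil)).to eq(0) }
  it("returns 0 for a blank string")  { expect(parse_quantity("  ")).to eq(0) }
  it("parses a well-formed integer")  { expect(parse_quantity("42")).to eq(42) }
  it("raises for malformed input")    { expect { parse_quantity("4x") }.to raise_error(ArgumentError) }
end
```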

3: Correctness

This starts with the tests. If the tests are wrong then the code will be wrong (assuming they pass). But the tests and the code can both be wrong and the whole lot will pass. Most of us have had that moment when we realised the spec was testing the wrong thing, making the wrong comparison, or was tautologically true, no matter what the code did. Often the best way to confirm correctness is to tweak the code: break it until the tests fail, and confirm that they fail in the right way.
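A made-up illustration of both the tautology and the “break it and watch it fail” check:

```ruby
require "rspec"

def apply_discount(price, rate)
  price - (price * rate)
end

RSpec.describe "apply_discount" do
  # Tautological: this passes no matter what apply_discount does,
  # because both sides of the comparison come from the same call.
  it "applies the discount (useless version)" do
    result = apply_discount(100, 0.1)
    expect(result).to eq(result)
  end

  # The real check pins the expected value independently of the code.
  # Temporarily change `-` to `+` in apply_discount and this one should fail.
  it "applies the discount (useful version)" do
    expect(apply_discount(100, 0.1)).to eq(90.0)
  end
end
```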

4: Unintended Effects

This is often a great opportunity to transfer knowledge about complex systems from those more familiar with them to those just getting to grips with them. In any sufficiently complex system the law of unintended consequences will bite you eventually. This is especially true of any system with a data store. Databases, message buses, and object stores all act as ways for code to communicate in very indirect and hard-to-predict ways. They are like massive, multi-dimensional global variables, accessible from almost anywhere. This is true of any system: monolith, microservice, or event-sourced.

In a microservice architecture the individual components may be fine, but that message sent out across the network could easily have wide-ranging effects that are not obvious to a newcomer to the code. This is where the old hands get to explain another bit of the system, in a context that will probably make it stick in a way that lecturing someone in advance, or a thousand pages of documentation, never will.
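A toy sketch of the kind of indirect coupling I mean (the bus, service, and event names here are all invented): the author of OrderService sees a one-line publish, while unrelated behaviour hangs off that event elsewhere.

```ruby
# A toy in-memory message bus; in a real system this would be Kafka,
# RabbitMQ, a shared database table, etc.
class MessageBus
  def self.subscribers
    @subscribers ||= Hash.new { |hash, key| hash[key] = [] }
  end

  def self.subscribe(event, &handler)
    subscribers[event] << handler
  end

  def self.publish(event, payload)
    subscribers[event].each { |handler| handler.call(payload) }
  end
end

# The author of this class sees only a simple "place order then announce it"...
class OrderService
  def place_order(order_id)
    # ...persist the order somewhere, then:
    MessageBus.publish(:order_placed, order_id: order_id)
  end
end

# ...but elsewhere in the system (or in another service entirely), several
# unrelated pieces of behaviour hang off that one event.
MessageBus.subscribe(:order_placed) { |payload| puts "Billing invoices order #{payload[:order_id]}" }
MessageBus.subscribe(:order_placed) { |payload| puts "Warehouse reserves stock for order #{payload[:order_id]}" }

OrderService.new.place_order(42)
```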

5: Conformance

Whether it’s coding style, or common architectural approach, having a shared way of doing things helps to make a system easier to understand. You or I may prefer another way, but it is more important to stick to a shared way than to forge our own path to make some point or other.

6: Improvement

This is another point where learning happens: that moment when you get to say “I like this, but did you know you can do that…”.

Of course the learning can happen the other way too: I’ve frequently had moments when I thought “I didn’t know you could do that in Ruby…”, often when looking at code written by junior developers.
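A hypothetical example of the kind of exchange I mean, where the suggestion is nothing more profound than a standard-library method:

```ruby
# The version under review: correct, but doing by hand what Ruby already provides.
def total_price(items)
  total = 0
  items.each { |item| total += item[:price] }
  total
end

# The "did you know you can do that?" version: same behaviour, less ceremony.
def total_price(items)
  items.sum { |item| item[:price] }
end

total_price([{ price: 5 }, { price: 7 }]) # => 12
```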

When discussing better ways of doing things it’s important not to fall into the trap of thinking “this isn’t how I’d do it, therefore it’s wrong”. I like to get the author to talk me through why they did things the way they did, because there is often a good reason, or there’s an assumption that turns out to be wrong. In the first case it’s good to make the reasoning clear in the code for future developers, who may otherwise be unaware of it and make breaking changes as a result. In the second case there’s another opportunity to educate the author about the system, or to clear up any misunderstandings.
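For the first case, even a short comment recording the reasoning can stop a future developer from “fixing” it (an invented example):

```ruby
# Retries are deliberately capped at 3: the upstream payment provider treats
# more than 3 identical requests in a minute as fraud and locks the account,
# so "just bump the retry count" is a breaking change.
MAX_PAYMENT_RETRIES = 3
```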