On a project I've worked on, a lot of effort was put into verifying that the code in a merge request was bug-free.
This meant that code reviews covered not just code structure, class responsibilities, readability, etc., but also the correctness of the code's logic.
In my opinion, the code's logic and behavior should be described and documented in the automated tests, making the code review all about the code and not about the business.
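To make that concrete, here's a minimal sketch of what I mean (the discount rule, function, and test names are all hypothetical): the tests document the business rule, so the reviewer can focus on the code itself.

```python
# Minimal sketch, pytest-style; the discount rule and names are
# hypothetical, purely to show tests documenting business behavior.
from decimal import Decimal

def apply_discount(total: Decimal) -> Decimal:
    """Orders strictly over 100 get a 10% discount."""
    if total > Decimal("100"):
        return total * Decimal("0.9")
    return total

def test_orders_over_100_get_a_10_percent_discount():
    assert apply_discount(Decimal("200")) == Decimal("180")

def test_orders_of_exactly_100_pay_full_price():
    assert apply_discount(Decimal("100")) == Decimal("100")
```

With tests like these in place, the reviewer doesn't have to re-derive the business rule; they only have to judge whether the code and tests are well built.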
I would appreciate your thoughts on this.
Top comments (3)
I find it particularly hard to state that you "should" or "shouldn't" do that.
For instance, I've worked at a company that used PRs just like that, with code review working through the new code's logic. In some cases this approach led to massive overhead, with people spending hours reviewing code.
However, that approach also helped us avoid hundreds of bugs throughout the five years I was there, especially when the reviewer was one of the two developers who had been there for more than eight years.
Perhaps the best way to reach an agreement on this subject would be to gather some examples of bugs that slipped past review and had to be fixed weeks later, and count how many hours you spent on them. Compare that to how long you usually spend reviewing code, and you'll have a more solid answer to "is it worth it?" (You might also count how many bugs you have actually caught using this approach; maybe your team is experienced enough to make it unnecessary.)
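A rough back-of-the-envelope sketch of that comparison (every number here is invented for illustration):

```python
# Back-of-the-envelope comparison; all figures are made up.
escaped_bugs_per_year = 6          # bugs found weeks later, not in review
hours_to_fix_each = 8              # fixing is slow once context is lost
prs_per_year = 150
extra_review_hours_per_pr = 0.5    # added cost of reviewing the logic too

cost_of_escaped_bugs = escaped_bugs_per_year * hours_to_fix_each    # 48 h
cost_of_deeper_review = prs_per_year * extra_review_hours_per_pr    # 75 h

# With these made-up numbers, deep review costs more than it saves,
# but a few more escaped bugs per year would flip the conclusion.
print(cost_of_escaped_bugs, cost_of_deeper_review)
```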
In environments where pairing is rare and code reviews are the favoured means of getting a second pair of eyes across some code, I've typically made a point of at least running the code I'm reviewing.
Usually in the course of running the code, I'll experiment with a few dodgy cases just to make sure everything behaves as expected. Extra points if I merge the PR branch locally to be truly sure it plays nicely with the master branch.
It can (quite commonly) be difficult to infer the full scope of a piece of code's behaviour just by reading it (and its test cases). I'm a huge advocate for self-documenting code, but if that code was written by one person and I'm only the second set of eyes to examine it, it's pretty likely that something dodgy has slipped past the author.
Additionally, by interacting with it in my own idiosyncratic ways, I may surface a use case that the author (of the requirement and/or the code) hadn't considered.
Ultimately, I don't see the harm in putting on our QA hat while we inspect our colleagues' code. All it does is shorten the cycle time between bug creation and discovery, which is better for everyone.
What I'm not suggesting is that every developer run a full, manual regression pass for every PR they receive. That would hit diminishing returns very swiftly.
If it does the wrong thing, it doesn't matter how elegant it is. Of course reviewers should consider correctness a criterion for approval. That doesn't mean whiteboarding flowcharts or inspecting every conditional with a microscope. Tests help with this, but they don't achieve it on their own: a test documents the intent of the code, not that the intent matches the requirement.
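A hypothetical illustration of that last point (the discount rule and names are invented): the code and its test can agree with each other and still both contradict the requirement, so the suite stays green.

```python
# A passing test that encodes the wrong intent; rule and names invented.
# Requirement (hypothetical): discount only orders STRICTLY over 100.
from decimal import Decimal

def apply_discount(total: Decimal) -> Decimal:
    # Bug: `>=` where the requirement says `>`, so an order of
    # exactly 100 is wrongly discounted.
    if total >= Decimal("100"):
        return total * Decimal("0.9")
    return total

def test_discount_applies_at_exactly_100():
    # Green build: this test documents the author's (mistaken) intent,
    # not the requirement. Only a reviewer who knows the requirement
    # can spot that the code and the test are both wrong.
    assert apply_discount(Decimal("100")) == Decimal("90")
```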