Code reviews are one of the most underrated parts of building great software.
Reviewing might seem tough and confusing at first, but it's easier than you think.
This post briefly outlines 11 practical ways and strategies for doing code reviews as a developer, so you can get better at them.
🎯 What is a code review?
Before going deep into the points, let's take a moment to understand what code review means.
A code review is when developers check each other’s code to improve its quality before merging or shipping it.
The term code review can mean a lot of things, from simply reading your friend's code to a 20-person meeting where every line is analyzed in detail.
There are mainly two roles involved:

- Reviewer: reads the code and decides when it's ready to be merged into the team's codebase.
- Author: writes the code and sends it for review, mainly through pull requests.
The review ends when the reviewer approves the changes. You may have seen "LGTM", shorthand for "looks good to me", which means the same thing.
If you're interested in learning more, the guide What is a code review? by GitLab is a great place to start. It covers the benefits, drawbacks, a few approaches and some best practices.
Let's cover the tips and strategies you should consider while doing code reviews.
1. Use an AI tool for contextual feedback.
Using an AI tool can help you save time and make sure the code being merged is of high quality.
I looked through a lot of tools (mainly on Reddit) and the best one I could find is CodeRabbit.
According to the official website, 5M pull requests have been reviewed using CodeRabbit, which says a lot about its credibility. And that matters when you work on a private codebase as well.
It uses machine learning algorithms to analyze your codebase, identify potential issues and provide context-aware feedback on pull requests.
✅ You get line-by-line feedback and simple 1-click fixes.
✅ You can create issues with real-time chat on review comments.
✅ It can integrate with GitHub, GitLab, Azure DevOps, BitBucket cloud (beta).
All you need to do is use commands like @coderabbitai summary or @coderabbitai review to trigger specific operations. The AI also learns over time and identifies best practices specific to the repository.
You can watch this quick demo!
The free version gives you pull request summarization (which is the most important part), and they offer a free trial to help you figure out if it's a good fit.
You can find more in the docs, and CodeRabbit is open source if you want to check it out.
You can also read how the Linux Foundation used AI code reviews to reduce manual bottlenecks in open source software.
There are some other tools as well if you want to explore them:
- Codacy - identifies and fixes code quality issues automatically.
- Snyk - finds and fixes vulnerabilities in your code and dependencies.
- Bito - a code review assistant that helps you spot issues.
- Qodo - improves review quality while reducing back-and-forth delays.
- Code Review GPT GitHub Actions - automates code reviews using GPT directly within GitHub Actions.
- GitHub Copilot - the most trusted AI pair programmer.
- PullRequest - combines AI with expert human review.
I know the descriptions sound similar; explore other blogs if you want to find more tools.
2. Link your review to principles, not just opinions.
I'm a maintainer of an open source project, so I know how hard it really is to give feedback.
If a programmer sends you a change list that they think is awesome and you write them a big list of reasons why it’s not, that can send a completely wrong message.
Most of the time, authors take criticism of their code as an indirect suggestion that they are an incompetent programmer, which is definitely not the case.
When leaving feedback on code, explain both your suggested change and the reason for the change.
A simple change of tone can make a lot of difference:
❌ "We should split this function into two."
✅ "This function handles both authentication and logging, which violates the single responsibility principle. Let's separate them."
Writing this way turns opinion-based feedback into constructive feedback. Be objective and, where possible, back up your points with concrete evidence such as links.
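To make this concrete, here is a minimal sketch of the kind of change that feedback is asking for. The function names and the fake session logic are hypothetical, just for illustration:

```javascript
// Hypothetical stand-in for real authentication logic.
function createSession(email) {
  return { email, token: Math.random().toString(36).slice(2) };
}

// Before: one function mixes two responsibilities (authentication + audit logging).
function loginAndLog(email, password) {
  if (!password) throw new Error("missing password");
  const session = createSession(email);              // responsibility 1: authentication
  console.log(`[audit] login attempt for ${email}`); // responsibility 2: logging
  return session;
}

// After: each function has a single responsibility, so the logging can change
// (or be replaced by a real logger) without touching the auth path.
function login(email, password) {
  if (!password) throw new Error("missing password");
  return createSession(email);
}

function logLoginAttempt(email) {
  console.log(`[audit] login attempt for ${email}`);
}

// Usage
logLoginAttempt("ada@example.com");
const session = login("ada@example.com", "s3cret");
console.log(session.email);
```

Pointing at the before/after split like this keeps the conversation about the principle, not about personal taste.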
3. Use common techniques for different types of reviews.
There are many techniques that developers aren't aware of, such as:
⚡ The Show and Tell Review: the author shows their changes to the reviewer, explaining the reasoning behind those decisions.
⚡ Checklist-based Review: using a predefined checklist for consistency, to avoid overlooking critical areas.
⚡ Rubber Duck Review: the author explains their code to the reviewer as though they were explaining it to a "rubber duck".
⚡ Checklist Automation Review: combines automated tools with a manual review to catch routine issues like style violations.
⚡ Two Peas in a Pod: you comment on one line of code in the conversation while another contributor gives feedback on another line of code in the same pull request.
⚡ The Chameleon Review: you adapt your PR review based on the type of contribution your peer is making.
⚡ Teach Them Review: the reviewer points out issues but also explains why the changes are necessary.
⚡ Commit-by-Commit Review: each commit is reviewed separately, making it easier to track changes and understand the thinking process.
There are more techniques, like pattern recognition, change impact analysis and trace-based code reading, which you can explore yourself.
If you're interested in reading more, there are a couple of interesting blogs you should check out:
- How to Give Good Feedback for Effective Code Reviews by freeCodeCamp.
- 10 Best Code Review Techniques by awesome code reviews.
Once you know these techniques, combine them with your own experience and you'll get a better sense of direction.
4. Request changes, don't command them.
There is a high risk of normal code review conversations turning into exchanges of personal opinion.
Imagine saying to a team member, "Get me that report, and while you're at it, grab me a coffee too." It would come across as demanding and not what the other person would expect.
| Feedback framed as a command | Feedback framed as a request |
| --- | --- |
| Reviewer: "Refactor the User class into multiple smaller classes." | Reviewer: "Could we refactor the User class into multiple smaller classes?" |
| Author: "I don't think it's necessary. The class is fine as it is." | Author: "We could, but I believe splitting it might overcomplicate things. What do you think?" |
A command feels more direct, but it can easily lead to a defensive response.
People appreciate having control over their work. When you make a request, it gives them a sense of ownership.
Being a little gentler with your feedback doesn't require 10x the effort, but it makes things a lot smoother.
5. Using code review checklists makes work a lot easier.
A systematic approach to doing code reviews will lead to a faster and more accurate process.
That is where the concept of a review checklist comes in: a set of guidelines or items reviewers follow every time they review code.
You don't have to make one from scratch; just download a ready-made list and adjust it to your needs.
You can make it more focused on your tech stack and emphasize specific areas like accessibility or security.
It builds a shared understanding among the team members about which things are important and reduces conflicts, disagreements or unnecessary back-and-forth during code reviews.
For example, if you're working on a React app, you might want to include checks for hook usage, component reusability or efficient state management.
Everyone on the team will be on the same page about what to look for when reviewing code. Over time, this will speed up the review process without compromising on quality.
There is a free Gumroad product by Michaela where you can find a decent checklist.
You can also check out the code review checklist repository on GitHub with 900+ stars.
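To tie this back to the React example above, a checklist item like "hooks are used correctly" can often be backed by a lint rule, so reviewers only have to check what the tooling can't. Here is a minimal sketch of a legacy-style .eslintrc.js using eslint-plugin-react-hooks; treat the exact configuration as a starting point, not a complete setup:

```javascript
// .eslintrc.js -- minimal sketch; assumes eslint and eslint-plugin-react-hooks
// are installed as dev dependencies.
module.exports = {
  plugins: ["react-hooks"],
  rules: {
    // Hooks may only be called at the top level of components or custom hooks.
    "react-hooks/rules-of-hooks": "error",
    // Flag effects and callbacks with missing dependency array entries.
    "react-hooks/exhaustive-deps": "warn",
  },
};
```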
6. Avoid wasting time on tasks linters and formatters can easily handle.
Our attention spans are shrinking day by day under the huge overload of information.
Between meetings, emails and all the other distractions, finding time to focus on code is really hard. Reading someone else's code drains your mental stamina.
My advice is to not waste your time and energy on chores that can easily be done by computers.
For instance, instead of manually explaining indentation issues to the author, a decent formatting tool can take care of it in seconds.
| Effort required with a human reviewer | Effort required with a formatting tool |
| --- | --- |
| Reviewer looks for whitespace issues and finds mistakes. | Nothing! |
| Reviewer writes a note explaining the issue. | |
| Reviewer double-checks the note for clarity. | |
| Author reads the note and fixes the indentation. | |
| Reviewer verifies the fix. | |
You can apply this to other repetitive tasks in code reviews as well. Here are a few examples:
| Task | Automated solution |
| --- | --- |
| Verify the code follows style guidelines | Code linters like ESLint (for JavaScript) or Pylint (for Python) |
| Check for broken links in documentation | Link-checking tools like Markdownlint or HTML linting tools |
| Verify the code follows security best practices | Security linters like SonarQube or Brakeman (for Ruby on Rails) |
| Check spelling and grammar in comments | Spellcheck tools like Code Spell Checker or write-good for Markdown files |
| Ensure no secrets (API keys, passwords, ...) are hard-coded | Secret scanning tools like git-secrets or TruffleHog |
| Check the project has proper test coverage | Code coverage tools like Istanbul (JavaScript) or JaCoCo (Java) |
| Verify dependencies are up to date | Dependency management tools like Dependabot or Greenkeeper |
Instead of wasting time on basic mistakes, reviewers can focus on more complex issues.
Plus, no one likes hearing about mistakes from a human; it's much easier on the ego when it comes from a computer!
Tip: Work with your team to set up automated checks in your code review workflow, such as pre-commit hooks in Git or webhooks in GitHub.
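As a rough sketch of that tip, the hook below runs a linter and a formatter check before every commit, so style issues never reach the reviewer. The exact commands (ESLint and Prettier here) are assumptions about your tooling; swap in whatever your project actually uses:

```javascript
#!/usr/bin/env node
// Hypothetical Git pre-commit hook: save as .git/hooks/pre-commit and make it
// executable. It blocks the commit if any automated check fails.
const { execSync } = require("child_process");

const checks = [
  "npx eslint .",           // style rules and common bug patterns
  "npx prettier --check .", // formatting (indentation, whitespace, ...)
];

for (const cmd of checks) {
  try {
    execSync(cmd, { stdio: "inherit" });
  } catch {
    console.error(`Pre-commit check failed: ${cmd}`);
    process.exit(1); // non-zero exit aborts the commit
  }
}
```

Teams that prefer a managed setup can get the same effect with tools like husky or pre-commit, but the idea is identical: fail fast locally instead of in review.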
7. Approve once the remaining fixes are simple.
A lot of reviewers believe they should only approve the code once every single thing is addressed. This can lead to unnecessary back-and-forth delays for both the author and the reviewer.
If there are only small issues left, like a typo or a variable name, then make it clear they are optional so the author knows they're not a condition for approval.
Don’t hold up code just because a variable name isn’t perfect.
Spending a little extra time on the rare 2% of cases is better than causing unnecessary delays for the other 98%. Think about it.
8. Look for opportunities to split up large reviews.
I've seen Pull Requests with 30+ file changes and 1000+ lines of code being added.
It's very hard to review such large changes at once. Most of the time, we as reviewers end up missing something crucial.
One logical solution is to split up large reviews. Instead of just asking the author to do it, help find logical breakpoints. If changes affect files independently, group them by file.
For more complex cases, you can identify self-contained pieces of logic and move them to a separate change list.
If the code quality is poor, make it clear that a split is necessary.
Reviewing a couple of messy 300-line change lists is better than one massive 600-line code dump.
9. Restrict yourself to high-level feedback first.
I've always avoided giving too much feedback at once to keep the author from feeling overwhelmed.
Start with high-level feedback on the big issues, such as architecture problems and major bugs: the ones that carry the most impact.
Once those are fixed, move on to smaller, less important details like naming or minor changes.
This way, the author can focus on fixing the biggest issues first without getting bogged down by small things that may not even be urgent.
10. Don't forget there’s a real human at the other end of the conversation.
In the mess of tight deadlines, it’s easy to forget there's a real human at the other end of the conversation.
🎯 Fighting bias.
We all have subconscious biases. A recent study at Google showed that developers who identify as women get more pushback during code reviews than peers who identify as men, which is kind of shocking!
It's important to be aware of these biases and take the necessary actions (a kind of review of the reviews) to reduce bias or anything related to prejudice in code reviews.
🎯 Avoiding Stalemate.
The worst possible outcome of a code review is a stalemate: you refuse to sign off on the changelist without further changes, but the author refuses to make them.
The tone of the discussion can get very tense, so it's necessary to talk it out. Honest and simple communication will break the stalemate.
🎯 Simple tone.
Avoid using “you” in code reviews.
Do you notice a difference in these?
❌ Can you rename this variable to something more descriptive?
✅ Can we rename this variable to something more descriptive?
This small change in tone can make a difference.
11. Celebrating good code.
Code reviews don’t always have to be about pointing out what’s wrong. It’s also a great chance to highlight the good stuff!
Saying something like “I love how you broke that down, it's so much easier to follow” can really help. A little compliment goes a long way (even if you don't realize it).
It shows you're not just out to point out mistakes, but that you also notice when they do things the right way.
It's not just about making people feel good; it's about creating a positive vibe that pushes them to do even better.
I've drawn on my experience as an open source maintainer, so I hope you found something useful.
Let me know if you have any feedback or other things to note while doing code reviews.
Have a great day! Until next time :)
You can check my work at anmolbaranwal.com. Thank you for reading! 🥰
Top comments
I have a simple list that helps me set the tone when adding comments.
[🎉 Praise] Insert praise here
[💡 Suggestion] Insert suggestion here
[🦠 Bug] Report possible bug here
[🤔 Question] Ask clarification
[🤏 Nitpick] Minor inconvenience to reviewer that can be fixed, but doesn't block
This list can easily be adjusted to satisfy personal preference or team needs, and it really helps set the tone so you don't come off as a jerk.
That's really cool, simple and effective.
Awesome to know! Emojis are somehow tied to emotions, so they really help a lot in expressing inner feelings. Thanks for reading Ossi! 🙌
🌟 Awesome article! Love the human touch—it really makes a difference! 🙌
Honestly, when I first started on GitHub, I had no clue where to begin 😅, but over time, I figured things out. Articles and guides like this help a ton—seriously, kudos to you! 🎉
Your insights really show your experience & expertise—great job! 👏🔥
Thanks for reading, Madhurima! One suggestion would be to just write whatever you feel after reading the post, even if there are grammatical errors... AI just destroys that emotion.
appreciate your support :)
I agree! It's just that I tend to overthink and want it to be mistake-free, so it sometimes sounds too polished. 🤷♀️
Thanks for your suggestion though. I will make sure next time.
Re #6 - that would be brilliant. Unfortunately it's proven not to be that simple in practice. We tried to introduce clang-format on a 4yr old C++ code base, however there are so many options, and so many dependencies between options, that it was taking a long time to find an acceptable set to try to match up with our existing style and avoid wholesale, disruptive changes to the code layout. We got to the point of using it to reject commits but, day by day, we'd find new sections of code where we couldn't get an easily readable layout and ended up with numerous sections where it was switched off. Unfortunately, with clang-format, it's all or nothing; there's no granularity implemented to allow specific rules to be disabled. For example, if we wanted to define an aggregate in a tabular format, clang-format would remove the extra spaces we'd put in for alignment, so we had to switch formatting off round that whole section, which meant other rules weren't being checked.
Of course, I realise:
I'm thinking, at this point, that perhaps an AI tool, that can analyse our code and work out what we like, might help for that, but it needs to be self-hosted and work with git/bitbucket data center.
Any suggestions would be welcomed, as it seems that expecting a developer to follow layout rules on their own these days is asking a bit much! :-)
That's a super detailed breakdown, appreciate that! Finding the balance can indeed be complex (especially for C++). I believe it's much easier in other languages... I haven't worked much with legacy C++ codebases (only did it for CP), but you might find tools like AStyle, Tabby, Cody useful. I'm not sure which would be the perfect fit, so you should check on Reddit (there are always interesting discussions).
Anyway, thanks for reading John. 🙌
Thanks for your reply. Interestingly, AStyle was something I'd got some success with but, at the time, it was looking for a new maintainer or Andre had just taken over, so I was a bit hesitant to adopt it, having been stung by abandoned tools in the past (admittedly not OSS, but CodePro Analytix, when Google bought the company out)! I may take another look, and at Tabby and Cody, which I haven't investigated yet.
Thanks again.
Good tips Anmol ✅
Awesome! Thanks for reading, Kiran.
I wanted this for my ask-her-out website. Someone keeps on spamming! Huuh
Spamming the final boss huh. Found the guy lol (it's just 2 PRs). You haven't seen real spamming… people go crazy over points. 😂
I'm not a part of such an event. That guy really ignored my comments. It's so painful. Huuh..
LGTM = Let's Get This Merged, no?
I wish it was. But there's another round of review by some other dev/maintainer. Haha!
Yeah, it should be right. When I was reviewing something, there was a rule of 3 approval reviews, which was really bad (especially for small changes). Imagine asking 3 people to say LGTM... sometimes people just aren't that free, so it just gets delayed.
But in most cases, once a single maintainer approves it, the others will follow... kind of a trust thing (at least in a good organization).
Thanks for reading, James. 🙌
Great list of tools 🔥🔥
Thanks Om! Appreciate your support.
Thanks a lot
I have used CodeRabbit and worked on a few articles about it.
Soon I will be publishing an article about CodeRabbit.
Thanks for sharing this article, Anmol.
Awesome Bonnie! 🔥 I'm also trying CodeRabbit and really excited to see how useful it is.... usually, I’m not a fan of AI stuff, but some are definitely worth it.
Looking forward to your article :)
Awesome! Let's code future.
Hey Sachin, thanks for dropping by! 🔥 Feels great to publish under Let's Code Future, plus I really enjoy reading your articles on Medium :)