Raiders of the lost art of Software Reviews

Reminiscing back to the beginning of my software developer career, code reviews were a regular and, if I’m being honest, torturous experience. As a young, precocious software developer (in C) who thought he knew more than he did, I regularly treated reviews as if they were an episode of the 90s family gameshow Gladiators. Reviews in my mind started with “Gladiators, are you ready!”.

John Anderson & Wolf — UK Gladiators

My Team Leader, “Steve”, happened to be a local Quality Assurance Representative (QAR), and boy, he knew a thing or two about procedure and ceremony when it came to the company’s standards around reviews. The review process was classic Fagan and went something like this …

Step 1 — you would read the documentation describing the change and would work out what code was required to be created or updated (identifying any change control numbers that should be cross referenced),

Step 2 — you would book out a review number (PM5.7/XXX) for the work and schedule an initial meeting with your (hopefully friendly) reviewer — normally a Team Leader (the chair — in my case Steve) and the Technical Authority for the project,

Step 3 — then you would hold an initial review — called a “level 1” to discuss the changes you were proposing to undertake. This would be recorded on the review paperwork including any actions given during the meeting,

Step 4 — Once the “level 1” was complete, you could enjoy some well-earned software development time. This was the bit I enjoyed the most! The change control numbers would need to be cross-referenced in the file headers of anything you created or updated, for example:

#ifndef EXAMPLE_H
#define EXAMPLE_H
#ident "@(#)example.c 1.3 9/4/96 12:15:13"
/*
** Project Name   : XXXX
** Project Number : 1234
** File Name      : redacted.h
** Ref Number     : 1234/T6/PC/0001
** Description    : This file contains data types and prototypes
**
** Configuration Record
** Review           Version  Author      Date      Change No.
** 1970/PM5.7/198   1.1      A Another   10/04/96  Original
** 1970/PM5.7/666   1.2      S Brittain  04/07/96  1970/PM5.9/123
*/

/* ... data types and prototypes ... */

#endif /* EXAMPLE_H */

Step 5 — Once the code, unit tests and any other documentation associated with the change had been completed, you would schedule a detailed review of the work — this was the big review, called the “level 2”. This involved writing up the review form, printing out all your changes onto computer paper, and making a nice little review pack for each reviewer. Like Oliver Twist with his bowl, you would take this over to their desks, smile and hope for the best. Reviewing this took quite a bit of time and effort for each reviewer, so there was often a significant delay before you could hold the review meeting. During this period you would often keep peeking over to see if the reviewers were reviewing your work, and if so, how much red pen was being scribbled onto the review material,

Step 6 — You would hold the “level 2” review. The review itself would go through every part of the change, and any adjustments, including typos, would need to be hand-written by the reviewee on the review paperwork. If your work was rubbish, you would have to go around this process a couple of times (the dreaded “Revise and Reschedule” outcome); however, if the issues were minor you might get away with a more pride-preserving “Accepted subject to change”. Only if you were extremely lucky, and very rarely, would your work pass without comment — i.e. “Accepted” — this seemed to be the least favoured outcome for reviewers as it suggested the review added little or no value.

Step 7 — Only after the “level 2” was completed to your Team Leader’s satisfaction, and physically signed off, would you move onto the next stage — a “level 3” End Product Review (EPR). This was essentially a demonstration that the partially merged-back code worked and that the tests passed (most of which were manual). This needed to be witnessed prior to any changes being committed to source control. Again, your work might need to be repeated if faults were found; however, being slightly intimidated by my team leader and fiercely proud of my work, rarely did this happen to me!

If you think the above sounds like something from a bygone age when time to market and cost pressures were significantly less — you are correct. The following excerpt (circa 1995) from my staff manual at the time is a nice insight into a simpler age …

Internal Access Policy in Staff Manual circa 1995

But I digress; whilst the review process I described above created some extremely high-quality code and taught me a lot about the art of C programming and my place in the industry, my early experiences of reviews were not always pleasant.

With the move to newer development practices and techniques, software reviews have received very little attention from the developer community. With the exception of Pair/Mob Programming, which garners some dialogue in agile circles, and a few estimation techniques, not much is said about reviews. The lack of attention around reviews is a little odd, however. Modern development approaches often try to bake in frequent and regular reviews in a manner that legacy approaches often ignored, and many modern techniques attempt to mitigate the problems of conscious and unconscious bias in teams — something that reviews are particularly good at helping with. For example, many developers will be familiar with Scrum’s Sprint Reviews and Retrospectives, and Planning Poker is an estimation method with its own built-in review approach. De Luca’s Feature Driven Development, Beck’s XP and Cockburn’s Crystal addressed the art of reviews with techniques such as Design Inspection, Pair Programming and Side-by-Side Programming. Yet technical reviews continue to get little or no air time today.

So what is a Review?

Unsurprisingly, there is a standard definition for reviews in the software industry that still holds value today. IEEE Std 1028–2008 sets out five different kinds of software review:

  1. Management reviews — A broad systematic review of the management approaches and artefacts around the software product or development processes
  2. Technical Reviews — A review by engineers to examine the suitability of the product for its intended use
  3. Audits — A review by third party assessors to check compliance against processes and policies
  4. Walk-throughs — A review in which an engineer leads the team and/or other interested parties through the product, with Q&A as they go
  5. Inspections — A visual inspection of the software to detect anomalies (e.g. Fagan inspections, Pull requests)

In addition to the types of review, another key consideration is the review’s timeliness. Different immediacy levels provide different opportunities to inspect and adapt, each with their own pros and cons. The following is a useful taxonomy taken from Herbi Bodner (see ref 1):

  • Instant — review during development — suitable for complex tasks; ideally, the software developers should be at a similar experience level.
  • Synchronous — review immediately after a unit of work is complete — great when the reviewer lacks knowledge of the goal or lots of review comments are expected; suitable when context shifting is not a significant problem for the reviewers.
  • Asynchronous — review at some point after development — reduces the context-shifting issues associated with the synchronous approach.
  • Batched — review at some later point in time, with the change combined with other changes (e.g. prior to the end of a sprint) — great when a lengthy cycle time between development and comments is not so much of an issue; requires confidence that the code is developed to a high standard prior to review.

For those familiar with Systems Thinking, the lack of immediacy of certain techniques compared to others will be a concern; some may be alarmed at the impact of programmer context shifting, whilst others may be more concerned about the granularity of the review material (narrow vs broad scope). There are pros and cons with each kind of review approach and with the timeliness of the review, so my suggestion is that we need to embrace a diverse strategy to get the best of all worlds whilst mitigating some of the issues associated with any one approach.

Why should we care?

Reviews take time and effort; hence they cost real money which needs to be considered an investment. The types of reviews I went through early in my career must have added a huge double digit percentage to development — probably in the region of 33%-50%. So what benefits should we hope to get back as a return on our investment in software reviews?

Benefit 1: Spot logical errors early/increase quality

As far as “facts” go in the software industry, the notion that “spotting mistakes early is the most effective way of reducing software development rework” remains broadly uncontested. Smart Bear (ref 2) presents a case example of two similar projects, one using reviews and the other not, with a 90% reduction in customer-found defects in the former. On a more personal level, albeit thankfully rarely, dealing with disgruntled users or customers around the quality of a product is never pleasant. Any opportunity to reduce this outcome is effort well spent in my book.

As David F Rico noted when assessing software metrics; a well-run software inspection process is the most cost effective way to increase the quality of software. To put that in numbers, “Code inspection cost of quality (CoQ) is quite impressive. It takes an hour to repair defects by code inspections, 10 hours by testing, and 100 hours by debugging“ — “code inspections have an ROI of 300% during development and 900% over the total life cycle.“ (Ref 4). An enticing return on any investment!

However, many people mistake this for the only benefit. There are other compelling reasons too …

Benefit 2: Knowledge Sharing

Software products are complex, and this fact is not diminished even with the latest technologies. As we build upon more and more libraries/tools/technologies, the range of things a developer has to understand is increasing fast. Back in my day, I just had to learn C, but now you have to understand several languages, infrastructure as code, framework configuration, deployment approaches for CI, security etc etc. We can’t be experts in all of these.

I suspect we have all been in situations where one long-serving developer becomes indispensable. We worry about them taking time off, and have sleepless nights about the risk of them resigning. In some cases these “knowledge silos” result in bad organisational behaviours — perhaps you dare not challenge the individual, you work around “their issues”, or you don’t score them appropriately in performance review rounds for fear of losing them. Common sense tells us this situation is bad and that knowledge must be shared. For me, herein lies one of the main benefits of effective reviews — knowledge sharing. This may help educate new team members, or it may help developers become more rounded and T-shaped team members — but spreading knowledge should simply be regarded as best practice.

Benefit 3: Completeness

Your code may work, match all locally supported standards and pass all your automated code checkers — but is it complete? Have all the potential requirements or considerations been catered for? Is the code reusable, or is it inadvertently creating technical or support debt? Is it testable, long term? Talking personally, I know I often become blinkered. Cognitive bias (see ref 3) may make me overlook the limitations of my work. The additional pairs of eyes offered by embracing a culture of reviews are invaluable in overcoming this, and whilst we should not expect every comment to land with the whooping and applause more associated with an Apple product release, in the long run it is a good habit to get into. We should look critically at our review techniques and make sure that they are not box-ticking exercises (e.g. just hitting accept on pull requests). We should also utilise a broad range of techniques and reflection points to identify as many issues as we possibly can, as early as we can.

Benefit 4: Experience recognition

Why do you think experienced developers get paid more than inexperienced developers? The difference is not always fully accountable from a purely productivity perspective (e.g. lines of code per day). Organisations are prepared to pay more because of experience and wider competencies — like demonstrating an ability not to screw up a delivery. So purely from an economic standpoint, if we are prepared to pay for experience, we should maximise the return on this investment. The value proposition of an experienced hire should be higher than that of a newbie developer. Experienced team members should be used to review and help improve other team members, both within their teams and across the organisation as a whole. Experienced developers should expect and want to do this, and we should be expecting, mandating and rewarding this behaviour.

This plays into benefit 2 around knowledge sharing — namely accelerating the spread of wisdom. It also builds a wider learning and sharing culture with a drive towards technical excellence. A culture of technical excellence has a number of key advantages. Firstly, it is cheaper than expensive external courses that sometimes have a questionable return on investment. Secondly, and more importantly, it builds an environment that people want to be a part of. Over the many years I’ve been in the IT industry, the consistent trait I have seen in every successful developer I have come across is the desire to learn and improve. The ability to learn from fellow developers is the cornerstone of any great software-development-centric organisation, and it is probably the most cost-effective and valuable way to deliver L&D that exists today.

Common Pitfalls

I discussed my blog with a few devs before publishing including one of our top software gurus. They recognised the problem and offered some more specific issues:

  1. No reviews — say no more. A common view is that reviews are not “agile”. Given that one of the fundamental agile principles is “Continuous attention to technical excellence and good design enhances agility”, we can hopefully attribute that idea to pure ignorance.
  2. Reviews being too fine-grained — reviews are done only at commit and therefore only look at the problem in isolation. Instead they should be performed at the unit/story level, with the Definition of Done used to include an assessment of whether the code is of sufficiently high quality. That’s absolutely not to say that the reviewee may not ask for coaching from the reviewer during the journey to submission; rather, it should be a common expectation that submitted code is at a level of quality sufficient to pass a review.
  3. Reviews being insufficiently interactive — the reviewer does not know whether their comments will actually get reflected back into the code. When Linus Torvalds reviews submissions into the Linux kernel he, as reviewer and maintainer, makes any edits he sees fit and checks them in. The reviewer could, and in many cases should, do the same.
  4. Reviews are conducted only as peer reviews — whilst we would all like to run teams comprising solely of Linus Torvalds clones, the reality is that teams will have mixed ability and experience levels. This is a normal and healthy sign if we want to promote important things like personal development, career progression, stretch goals etc. However, teams should not be shy about using the talents and experience of their most capable developers to educate, provide exemplary leadership and develop others, even if they are self-organised, as is the agile norm. This is a clear illustration of maximising return on investment from the most experienced, and probably most expensive, team members.

Embedding Reviews

To embed change we need to rewire the organisational brain. Reviews must be seen in a positive light, aimed at delivering value, rather than as something mandated for the risk mitigation purposes of senior management. If the plan to reinvigorate the take-up of technical reviews hinges on simply mandating them, then we should probably give up now. At best the instruction will be ignored; at worst, people will go through the motions, become disillusioned and give up. The beneficiaries of a review culture need to be the software development teams themselves, and we should recognise the value for it to become a virtuous circle. We absolutely want people to actively request a review rather than have it imposed on them.

The art of reviewing itself needs skills, and these may need to be fostered and developed. Reviewers need to work hard to ensure comments and feedback do not offend. They need to feed back a range of positive as well as negative points to ensure reviews are fair and balanced. Reviewees need to focus on receiving feedback graciously and without taking offence, embracing the independent perspective as valid and valuable — even if they don’t personally agree with everything. The training needs around technical reviews should not be underestimated.

With a positive outcome in mind, reviews should be targeted where they offer most value; time and effort should not be wasted where it is not required. For example, this could be the top 10 scariest source files, a critical part of the design, or an area of the product or project system that few people understand or want to go near. The area therefore needn’t require active development nor change in order to benefit from a review. To maximise ROI, reviews need to be timely; delivering value quickly and regularly needs to be the aim.

Reviews should support a critical look from both a narrow and broad perspective. Management and Technical reviews should be used to provide broad coverage but less depth, walk-throughs should be used to illuminate complex parts of a system or workflow, and inspections employed to look at very narrow areas of concern. Unless you want a Birmingham Screwdriver (a.k.a. only a hammer), a broad tool chest of different review techniques and types should be developed by an organisation to cover the different types of reviews and the different levels of immediacy.

Undertaking reviews needs to become an opportunity and expectation for more senior developers to sprinkle their magic within the organisation; with the aim of explicitly developing others (i.e. a powerful form of training on the job). Reviewers should look to become role models for more junior developers, and the role of reviewers need to become a responsible and desirable one. Reviewers need to remain approachable and personable and absolutely must not become intellectually aggressive or demeaning of others. Undertaking a review must not be seen by teams as a witch hunt, rather a positive and rewarding experience. Above all else, reviews must add business value — it is explicitly an investment of time and effort and therefore everyone must be looking for value to be derived from them.

Finally, as we want reviews to be a cultural thing, the adoption of a review culture should be supported by leadership throughout the organisation. It should not be seen as a nice-to-have, or something that is only relevant for others, nor should the role of reviewer be seen as a bit part or something that is done in the margins.

In my opinion, the beneficial role of reviews in software development has been generally underplayed and undervalued, especially around the cultural advantages. For my part, I know I have not given the topic the full attention it deserves. However with initiatives such as Agile in Audit gaining momentum (a whole new topic to discuss — see ref 5), I believe now is the time to focus more time and energy to drive up organisational performance and culture through the adoption of software reviews.


Agile Enthusiast & Engineering Leader