
The review gap

By Jonathan Corbet
March 29, 2017
The free-software community is quite good at creating code. We are not always as good at reviewing code, despite the widely held belief that all code should be reviewed before being committed. Any project that actually cares about code review has long since found that getting that review done is a constant challenge. This is a problem that is unlikely to ever go completely away, but perhaps it is time to think a bit about how we as a community approach code review.

If a development project has any sort of outreach effort at all, it almost certainly has a page somewhere telling potential developers how to contribute to the project. The process for submitting patches will be described, the coding style rules laid down, design documents may actually exist, and so on; there is also often a list of relatively easy tasks for developers who are just getting started. More advanced projects also encourage contributions in other areas, such as artwork, bug triage, documentation, testing, or beer shipped directly to developers. But it is a rare project indeed that encourages patch review.

That is almost certainly a mistake. There is a big benefit to code review beyond addressing the shortage of review itself: there are few better ways to build an understanding of a complex body of code than reviewing changes, understanding what is being done, and asking the occasional question. Superficial reviewers might learn that few people care as much about white space as they do, but reviewers who put some genuine effort into understanding the patches they look at should gain a lot more. Reviewing code can be one of the best ways to become a capable developer for a given project.

It would, thus, behoove projects to do more to encourage review. Much of the time, efforts in that direction are of a punitive nature: developers are told to review code in order to get their own submissions reviewed. But there should be better ways. There is no replacing years of experience with a project's code, but some documentation on the things reviewers look for — the ways in which changes often go wrong — could go a long way. We often document how to write and contribute a patch, but we rarely have anything to say about how to review it. Aspiring developers, who will already be nervous about questioning code written by established figures in the community, are hard put to know how to participate in the review process without this documentation.

Code review is a tiring and often thankless job, with the result that reviewers often get irritable. Pointing out the same mistakes over and over again gets tiresome after a while; eventually some reviewer posts something intemperate and makes the entire community look uninviting. So finding ways to reduce that load would help the community as a whole. Documentation to train other reviewers on how to spot the more straightforward issues would be a good step in that direction.

Another good step, of course, is better tools to find the problems that are amenable to automatic detection. The growth in use of code-checking scripts, build-and-test systems, static analysis tools, and more has been a welcome improvement. But there is surely a lot more that could be done if the right tools can be brought to bear.

The other thing that we as a community can do is to try to address the "thankless" nature of code-review work. A project's core developers know who is getting the review work done, but code review tends to go unrecognized by the wider world. Anybody seeking fame is better advised to add some shiny new feature rather than review the features written by others. Finding ways to recognize code reviewers is hard, but a project that figures out a way may be well rewarded.

One place where code review needs more recognition is in the workplace. Developers are often rewarded for landing features, but employers often see review work (which may result in making a competitor's code better) in a different light. So, while companies will often pay developers to review internal code before posting it publicly, they are less enthusiastic about paying for public code review. If a company is dependent on a free-software project and wants to support that project, its management needs to realize that this support needs to go beyond code contributions. Developers need to be encouraged to — and rewarded for — participating in the development fully and not just contributing to the review load of others.

As a whole, the development community has often treated code review as something that just happens. But review is a hard job that tends to burn out those who devote time to it. As a result, we find ourselves in a situation where the growth in contributors and patch volume is faster than the growth in the number of people who will review those patches. That can only lead to a slowdown in the development process and the merging of poorly reviewed, buggy code. We can do better than that, and we need to do better if we want our community to remain strong in the long term.



debugger = live code browser

Posted Mar 30, 2017 4:23 UTC (Thu) by marcH (subscriber, #57642) [Link]

[slightly off-topic]

> there are few better ways to build an understanding of a complex body of code than reviewing changes, understanding what is being done, and asking the occasional question.

Another, consistently underrated way is to use a debugger. Not to fix any bug (yet) but to just place random breakpoints and then gaze at stack traces, browse values of variables... or to realize that some breakpoint doesn't get hit because that part of the code is actually a corner case not worth spending a lot of time analyzing... or to realize that some other breakpoint gets hit all the time which proves it's in the critical path... etc.
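
For instance, a first exploration session with gdb might look something like this (a minimal sketch; the binary and function names here are made up for illustration):

    $ gdb ./frobnicator          # hypothetical program under study
    (gdb) break parse_config     # a random guess at an interesting function
    (gdb) run
    (gdb) bt                     # gaze at the stack trace
    (gdb) info locals            # browse the values of the local variables
    (gdb) continue               # does the breakpoint get hit again? how often?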

Just like for any other "browsing" type of activity, a graphical interface is more efficient here.

Granted, a debugger is not enough to raise you to the "level _above_ the sources". But it's a fast track to get there eventually.

No gain no pain

Posted Mar 30, 2017 4:59 UTC (Thu) by marcH (subscriber, #57642) [Link] (3 responses)

> Anybody seeking fame is better advised to add some shiny new feature rather than review the features written by others.

Well said.

> So, while companies will often pay developers...

References? I've never seen any [project] manager explicitly allocate resources for code reviews. I've seen *time* allocated for code reviews when planning, but no resources. It's like reviews are expected to be performed by fairies. In the best cases, the time of more senior people / maintainers is not tracked as strictly, and code reviews are supposed to happen in that buffer, competing with other untracked tasks like meetings, writing status reports, etc. Among other things, this means more junior people are effectively not expected to do reviews, or are even discouraged from performing any.

Metrics are another issue. What can't be measured can hardly be rewarded. Measuring software production is certainly not an exact science, yet there are some meaningful metrics available "out of the box" and regularly looked at: number of lines, number of commits, number of features, number of tests written, number of tests passing,... It would be great for Patchwork/Gerrit/etc. to start computing some code review metrics and make them available.
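
A crude version of such a metric can already be pulled from a kernel tree today; a minimal sketch (the release range is only an example) might count Reviewed-by trailers per reviewer:

    $ git log v4.10..v4.11 | grep 'Reviewed-by:' | sort | uniq -c | sort -rn | head

The same pipeline with 'Tested-by:' gives a rough tester leaderboard; proper tool support would, of course, do much better.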

> ... to review internal code before posting it publicly, they are less enthusiastic about paying for public code review.

Rather than private versus public I think the more relevant distinction is: own code versus common or competitor's code.

No gain no pain

Posted Mar 30, 2017 13:32 UTC (Thu) by NAR (subscriber, #1313) [Link] (2 responses)

In the last few years, everywhere I worked, no code was allowed into the main branch without review. In some cases it was fully automated (reviewers had to give +2 in gerrit before the code was merged), in some cases it was half-automated (another developer had to merge the changes), in some cases it was "enforced" by gentleman's agreement (don't merge it until someone has reviewed it). So reviewing is part of the work, just like compiling and testing.

No gain no pain

Posted Mar 30, 2017 14:05 UTC (Thu) by marcH (subscriber, #57642) [Link] (1 responses)

> So reviewing is part of the work, just like compiling and testing.

Yes of course. Except for one small difference (the whole point of the article): someone *else* has to review. How is reviewers' work planned, measured, accounted for? What are the incentives? How are they rewarded? Etc.

No gain no pain

Posted Mar 30, 2017 14:59 UTC (Thu) by nedrichards (subscriber, #23295) [Link]

In most of those kinds of corporate contexts, the pool of people contributing reviews and the pool of people writing new features are the same pool. So you effectively 'swap reviews' with one or several other developers, and on average the scheduling 'works out'. The only time I've seen review as a formally allocated task is when integrating code from an external contractor or contributor.

It's a good point though.

Review credits?

Posted Mar 30, 2017 5:08 UTC (Thu) by marcH (subscriber, #57642) [Link] (2 responses)

> Much of the time, efforts in that direction are of a punitive nature: developers are told to review code in order to get their own submissions reviewed. But there should be better ways.

Assuming no better way that works is found, maybe a slightly more positive perspective on this would be to make somewhat official the concepts of a review "market" and review "credits" that authors trade with each other?

Review credits?

Posted Mar 30, 2017 11:36 UTC (Thu) by sheepgoesbaaa (guest, #98005) [Link] (1 responses)

Yes, you'd like that wouldn't you? Credits would be traded around like an unregulated currency, opening up an underground market of scum and villainy. Dealers would stand on the street corners, furtively advertising the purity of their stock: "100% authentic, no chatbots". Moguls would hire heavies to go from repo to repo, saying things like "you got some nice patches coming in. It'd be a shame if they never got reviewed". The light in a young man's eyes, there since his first Hello World, would snuff out as his pimp directs yet another nervous prospect to his room. As they would enter, laptop clutched in their clammy hands, the boy would tonelessly list the services available: $5 for a once over, $15 for comments, $25 for a full regression and (he'll lean in for this; it's illegal in most states and he's on his 3rd strike) $50 for end-to-end testing. As he remembers a time when he dreamed of writing software, beautiful software, a single tear will roll down his cheek.

You disgust me.

Review credits?

Posted Apr 6, 2017 0:40 UTC (Thu) by nix (subscriber, #2304) [Link]

Does LWN have a flash fiction award? Because this comment, evidently ported in from a wonderfully bizarre alternate world, deserves one.

Credits in pull requests

Posted Mar 30, 2017 13:54 UTC (Thu) by wsa (guest, #52415) [Link]

One approach I'd like to share is to give praise in pull requests to people who tested or reviewed patches. I posted a quick'n'dirty hack for git-pull-request to lkml and git here in January [1] to show what I mean. And while I anticipated that the AWK script would be replaced, it sadly turned out that the proper way to interface with git was too much of a side project for me. So, I still use my hackish script for pull requests (like [2] for 4.11). Maybe this gives someone an idea of how to do better...

[1] https://lkml.org/lkml/2017/1/15/164
[2] https://lkml.org/lkml/2017/2/24/695
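
The general shape of such a hack is simple enough; here is a minimal sketch (not the actual script from [1], and the branch names are made up) that collects the credit tags from a commit range:

    $ git log mainline..my-branch |
      awk '/Reviewed-by:|Tested-by:/ { $1 = ""; credits[$0]++ }
           END { for (p in credits)
                     printf "Thanks to%s (%d patches)\n", p, credits[p] }'

Plumbing that output into the pull-request text is the part that still wants proper git integration.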

The review gap

Posted Mar 30, 2017 14:28 UTC (Thu) by bferrell (subscriber, #624) [Link] (1 responses)

Unfortunately, the far more attractive concept of "release early, release often" is more often chosen (I *think* it's related to "publish or perish", in concept). The developers I've worked with certainly seem to prefer it.

"publish or perish"

Posted Mar 31, 2017 2:59 UTC (Fri) by spwhitton (subscriber, #71678) [Link]

At least among academics, "publish or perish" refers to the need to publish if you want to get a permanent academic job, so not the same thing.

The review gap

Posted Mar 30, 2017 14:51 UTC (Thu) by marcH (subscriber, #57642) [Link]

> Code review is a tiring and often thankless job, with the result that reviewers often get irritable.

Since people often identify with their work, authors can get irritable too. This again puts off reviewers, especially new ones.

> but perhaps it is time to think a bit about how we as a community approach code review.

From a whole-industry perspective, code reviews are still at the "alpha" stage. They can produce spectacular but isolated results. In very many cases, code reviews crash, randomly reboot, and are delivered late and incomplete while requiring overtime. That's when they're actually delivered at all; think rubber-stamping.

For instance, there are countless resources to help people write better code: books, FAQs, HOWTOs, classes, code examples, college courses,... Where and how do you learn to perform a good code review, including the soft skills?

Kernel code reviews are probably among the best in terms of quality. However, most have no deadlines and some never complete.

The review gap

Posted Mar 30, 2017 16:11 UTC (Thu) by jani (subscriber, #74547) [Link] (3 responses)

Talking of recognition, I think it would be a step in the right direction if the LWN kernel development statistics included Reviewed-by's and Tested-by's. Like it or not, people look at the stats and they get referenced within companies, and LWN can decide what information is presented. LWN could directly influence what is seen as a valuable contribution; currently it's just the number of commits and lines of code authored.

The obvious counterargument is that not all review is accounted for, but starting to publish the stats anyway would actually put pressure on being more diligent about giving credit where credit is due across the kernel.

Reviewed-by

Posted Mar 30, 2017 16:19 UTC (Thu) by corbet (editor, #1) [Link] (2 responses)

I do occasionally toss those in, but the real effect of doing so, as far as I can tell, is to highlight that almost no review activity is actually documented that way. So I don't do it all that often; maybe that should change.

Reviewed-by

Posted Mar 30, 2017 19:11 UTC (Thu) by blackwood (guest, #44174) [Link]

There are at least some subsystems which do this right; it might be good to highlight those.

As for the stats themselves: when I go and do review stats, I always count a maintainer's s-o-b (when they're not the author of the patch, too) as a review. It's not accurate either, but I think it's a much better reflection of reality; at least I don't bother adding my r-b tag when I also add my s-o-b to a patch, as it's IMO implied. It would at least highlight the disproportionate review burden that maintainers in some subsystems shoulder.
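
A rough approximation of that is possible with plain git, using a committer who differs from the author as a proxy for a maintainer's s-o-b (a sketch only; the release range is just an example):

    $ git log v4.10..v4.11 --format='%ae %ce' \
        | awk '$1 != $2 { print $2 }' \
        | sort | uniq -c | sort -rn | head

This counts, for each committer, the patches they applied but did not write.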

Reviewed-by

Posted Mar 31, 2017 12:34 UTC (Fri) by butcher (guest, #856) [Link]

Can't start a feedback loop if you don't have some sort of instigator... go for it, regularly.

The review gap

Posted Mar 30, 2017 17:52 UTC (Thu) by iabervon (subscriber, #722) [Link]

I wonder how often new people look at patches to try to figure out what's going on without reporting back. I know I've looked over a lot of patches to learn about the code without saying anything about it.

Making this a useful source of review seems like it would be mostly a matter of making documentation more discoverable; the result of a new person looking at a patch is generally "I can't see why this is okay", which may be because it's not okay, but is more likely because the reviewer doesn't know some important property of the system yet. While it seems reasonable to say that authors have to write patches that can be understood, it would go even beyond "If you're going to add 5-level page tables, you have to make the generic gup() suitable for x86" to say "and you have to write the introduction to Linux virtual memory". But if the documentation got useful entry points for reviewers, or there were suitable experts who'd respond to new reviewers' questions about other people's patches, there could be "so clear, even a drive-by reviewer understands it" reviews.

The review gap

Posted Mar 30, 2017 18:38 UTC (Thu) by riddochc (guest, #43) [Link]

I think this article highlights an important problem: the implication of "show me the code" is that writing code is considered the highest form of proof we use in the free software community -- and social proof, at that.

Suppose someone were to want to contribute to the kernel (or any other similarly large project) by starting to do code reviews on patches. I suspect that doing a *good* code review would be difficult without first having a solid understanding of at least the subsystem involved. My intuition may be quite wrong here, but wouldn't an unfamiliar email address piping up to offer polite reviews be rather less welcome than patches fixing things? I mean, outside of the "code review really matters" discussion, people do seem to treat addressing random people's critiques of their work as an immense burden. And it can be too easy for someone to see a review of their work and read it as a review of *them* instead.

These are just the sort of things someone who's a bit conflict-avoidant but otherwise interested in improving the quality of systems might worry about. Not that being overly conflict-avoidant leads to good results anyway, of course.

You know, hypothetically, and all that...

The review gap

Posted Mar 30, 2017 19:34 UTC (Thu) by pj (subscriber, #4506) [Link]

Anyone interested in encouraging code review could do worse than to run through https://smartbear.com/SmartBear/media/pdfs/best-kept-secr... ; it's a little dated, and you can skip the last chapter as it's mostly a shill for their code review software, but overall the points it makes are generally still valid and useful.

The review gap & the CII best practices badge

Posted Mar 31, 2017 17:46 UTC (Fri) by david.a.wheeler (guest, #72896) [Link]

The CII Best Practices Badge project is currently developing draft criteria for higher-level badges. They currently include two_person_review and code_review_standards, which try to get at this issue in a practical way. Comments/suggestions welcome! If you have suggestions, please submit them via our issue tracker.

