2021-06-18: Review reviewers: Thoughts Inspired by a Bad Review

I came across this review of one of my papers. My postdoctoral advisor always told us not to take reviews personally but to take them seriously. I do not want to disclose the venue and I do not know who reviewed my paper, but I would like to post the review as an example of a bad review.

This paper employ xxxxxxxx. There are some obvious and serious deficiencies as follows:


1. The engineering work is solid while the expression is more like some technical report.

2. Though the developed parsing tool is automated, its performace is not sufficient to be applied in real-world industrial scenarios.

3. The applicability and generality of the proposed tool is a little limited, mainly focusing on plots in engineering and environmental science journal papers.

4. The contributions stated in the paper is not focused and clear, leading to the core research innovation not highlighted.

First, there are several editorial errors in this review, which indicate that the reviewer did not proofread it before submission.

More importantly, the review helps neither the conference chairs make a decision nor the authors improve the work. The reviewer did not provide any justification for any of the points. Note that I only anonymized the first line, which is a summary of our work. All points are made at such a high level that the review could serve as a template for almost any paper. Such a review should be ignored by the conference chairs and by the authors.

This is just one typical example among the numerous reviews I have seen. In my opinion, a review should be written like a short paper: the summary is the background and the decision is the conclusion. For each criticism, the reviewer should provide justification convincing enough to support it. If possible, the reviewer should also suggest how to improve the work, for example by listing additional references, proposing new experiments, or giving concrete examples.

The computer science community has a long tradition and a fairly established system for evaluating authors. However, we do not have a well-established system for evaluating reviewers. OpenReview.net and several open-access journals (such as the Frontiers journals) are good starts, but a reviewer credit system requires a federation of publishers, conferences, workshops, and digital library search engines. Making this happen as a cross-domain system may not be feasible in the short term, but at least some effort can be made within certain domains or even subdomains.

The basic idea is that we should review reviewers. All reviewers should be linked to their ORCIDs. A mechanism should be established to calculate the gain or loss of credit for a reviewer based on review quality. This should involve both humans and computational models: computers are good at counting, linking, and calculating, while humans are good at analyzing the structure and contributions of a review. Reviewers' credits should be transparent, dynamic, and open.
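To make the idea concrete, here is a minimal sketch in Python of what a reviewer credit record keyed by ORCID might look like, with a toy scoring rule based on whether a review is justified, constructive, and proofread. The record fields, weights, and update rule are all illustrative assumptions on my part, not part of any existing system.

```python
# A minimal sketch of the reviewer-credit idea described above.
# All fields, weights, and rules here are hypothetical illustrations.
from dataclasses import dataclass, field


@dataclass
class ReviewerRecord:
    """A reviewer identified by ORCID, with a running, auditable credit score."""
    orcid: str                      # e.g. "0000-0002-1825-0097" (example ORCID)
    credit: float = 0.0
    history: list = field(default_factory=list)


def score_review(has_justifications: bool,
                 has_suggestions: bool,
                 proofread: bool) -> float:
    """Toy quality score: rewards justified, constructive, proofread reviews."""
    return ((2.0 if has_justifications else -2.0)
            + (1.0 if has_suggestions else 0.0)
            + (0.5 if proofread else -0.5))


def update_credit(reviewer: ReviewerRecord, review_id: str,
                  has_justifications: bool, has_suggestions: bool,
                  proofread: bool) -> None:
    """Apply the gain or loss from one review to the reviewer's open record."""
    delta = score_review(has_justifications, has_suggestions, proofread)
    reviewer.credit += delta
    reviewer.history.append((review_id, delta))  # transparent, auditable log


if __name__ == "__main__":
    r = ReviewerRecord(orcid="0000-0002-1825-0097")
    # A review like the one quoted above: unjustified, no suggestions, not proofread.
    update_credit(r, "review-001", has_justifications=False,
                  has_suggestions=False, proofread=False)
    print(r.credit, r.history)  # -2.5 [('review-001', -2.5)]
```

In practice, the judgment of whether a review is justified and constructive would come from humans (e.g., chairs and authors rating the review), while the bookkeeping, linking via ORCID, and aggregation would be automated.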

-- Jian Wu
