An Interview with Amitabh Chandra, Editor of the Review of Economics and Statistics

By David Slusky

What surprised you the most about being an editor of a major general interest economics journal?

I never thought that the single best predictor of getting a paper accepted would be clear and accessible writing, including an explanation of where the paper breaks down, instead of putting the onus of this discovery on the reader.

It’s my sense that a paper where the reviewer has to figure out what the author did will not get accepted. Reviewers are happy to suggest improvements, provided they understand what is happening, and that makes them appreciate clear writing and explanation. They become grumpy and unreasonable when they believe that the author is making them work extra to understand a paper, and most aren’t willing to help such an author. They may not say all this in their review, but they do share these frustrations in the letter to the editor. This is one reason that I encouraged a move towards 60-70% desk rejections at RESTAT—if an editor can spot obvious problems with clarity or identification within 15 minutes, then why send the paper out for review?

Of course, all of this results in the unfortunate view that “this accepted paper is so simple, but my substantially more complicated paper is much better,” when the reality is that simplicity and clarity are heavily rewarded. We don’t teach good writing in economics—and routinely confuse LaTeX equations with good writing—but as my little rant highlights, we actually value better writing. So this is something to work on.

Relatedly, most reviewers want short papers (as do editors). The world has also changed, and economics is more empirical, so adding a 3-5 page “theory section” that produces uncertain comparative statics is a waste of pages that also annoys and tires the reader. Theory is great if it can clear up clutter. But if it can’t, or worse, if it adds to clutter, then the author is not being empathetic about a reader’s needs.

Third, editors want to accept papers—at RESTAT we tried all the time to increase acceptances, but reviewers often hold papers to a very high, sometimes unreasonable, bar. I would sometimes get reviewers who were engaged in a triple aim: reviewing the paper, advertising their own training, and signaling their high standards. But the second and third activities are useless for an author. The review process is not a seminar. It’s not the job market. It’s not an examination. We should really teach more about how to review a paper in graduate school.

Are there any nonobvious ways that health economics papers are different from other applied micro papers?

I am not an unbiased commentator. But in general, I think that papers in health economics are better—much better on average—than papers in other areas of applied economics. Our field works on interesting questions, finds interesting answers, and these questions and answers interest people who are not in our field. My fellow editors at the Review of Economics and Statistics—none of whom is a health economist—also felt that the questions that health economists asked were more interesting and better motivated than those in a lot of other applied papers.

Where this breaks down is when health economists write health services research papers and submit them to an economics journal instead of to a medical journal or health services journal. This “two-audience production” is unique to our field—other fields don’t do this. I think that some of us health economists forget that health services research and health economics are correlated but different activities. Economist reviewers at a top general-interest journal will not take kindly to a health services research paper—so why submit it there?

What are the most common errors that reviewers make when reviewing health papers for you?

There are three errors that reviewers make. First, many junior reviewers write really long reviews to show that they were thorough. This doesn’t help—if the paper has eight problems, then the editor is often most interested in the top two.

Second, some reviewers can also have really high standards in a way that creates lots of Type II errors—never accepting a paper. At the Review of Economics and Statistics, we were trying to accept more papers, but reviewers made this hard by holding papers to an impossible standard for identification.

Finally, and this is rare, but it is a by-product of the “triple aim” described above: some reviewers write reports with innuendo and meanness—I never went back to them and still think very poorly of these individuals. To be mean, when protected by the veil of an anonymous review process, is a deep pathology.

My advice is: write short reviews—don’t over-referee or rewrite the paper—you are the reviewer, not the author. Be kind. Be kind. Be kind. Kindness is not the same as low standards; it means posing questions and raising challenges with curiosity and humility. Always remember that an editor is reading the review and sharing it with other editors, and that one’s nastiness is noted and remembered, especially when directed towards a new member of the profession.

What do you wish more authors did before submission?

What a great question: authors of empirical papers should do two related things. First, make sure that their abstracts are jargon-free and literature-free. Never include something like “Chandra and Slusky (2020)…” in an abstract, for it makes the paper seem narrow and unimportant, even when it’s broad and important.

Second, make sure that the introduction of the paper clearly summarizes the question, the intuition for the answer, the approach, and the findings in a way that a first-year graduate student in economics would understand. Don’t put a giant literature review into the introduction and put your reader to sleep during the first five minutes that you have their attention. Do not think that accessibility is a bad thing. Do not assume that math is a good thing.

Is the revise and resubmit process working well for you? If so, what is making it work so well? If not, how could it be improved?

At the Review of Economics and Statistics, we moved to more of a “conditional contract” approach with R&R decisions. In other words, if we gave you an R&R decision, we were basically saying, “do these things and we’ll take the paper.” This saves everyone’s time and speeds up the review process, but it does come at a cost: we give up the option to publish papers that might have improved as a result of the first-round comments, but where we (the editors) thought that the author’s setting or data did not permit this improvement. This is where subjectivity creeps in: an author who wrote a confusing paper may not be viewed as being up to the task of simplifying it. Was the initial submission confusing because the author was never taught how to write well, or does it reflect a muddled approach? Here’s where an editor’s knowledge of an author can come in. But this is also highly subjective and privileges networks.

We spent a lot of time analyzing the scope for this bias with data, opening up our databases on editor performance to researchers, and selecting editors who had wide networks.

Amitabh Chandra is the Ethel Zimmerman Wiener Professor of Public Policy at the Harvard Kennedy School, the Henry and Allison McCance Professor of Business Administration at Harvard Business School, and the Editor (and Co-Chair) of the Review of Economics and Statistics.

David Slusky is an Associate Professor of Economics at the University of Kansas and the Editor of ASHEcon’s newsletter.