Request for Replication Issue — Generic

June 2018

Request for Proposals
Replication and Extension of Influential Papers

Academic economics and finance has, by and large, not been a science. To be a science, it is not enough that research be replicable; research must routinely be replicated before it is trusted. This is not about distrusting authors. It is that everyone can make mistakes. I am in the same boat. Over the last decades, I have published many papers that have not been replicated by others. Do not trust their findings until someone else has confirmed them!

Historically, replication has not been our priority. Indeed, our standards have been so low that it is sometimes called a successful "replication" when, in the same data set with the same specification, the t-statistics (or even just the signs) look roughly similar. This is not a successful replication. A replication should have zero disagreement.

A replication is not an extension. Yes, our profession also needs to learn whether findings can be extended to other contexts (e.g., when updated, or with simple variable and robustness tests). Science needs both replication and extension. The former weeds out errors and keeps authors honest; the latter tells us whether findings were a fluke.

The Plan

The CFR is planning to regularly publish issues dedicated to replicating and extending the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues to disprove (or prove) the papers. The replications are meant to be as objective as possible. The contract between an invited replicating team (often headed by a senior researcher, with some junior researchers or Ph.D. students) and the CFR is that the journal will publish the replicating paper even (or especially) if all the findings of the original paper hold perfectly, then and now. The papers will be refereed, but the expectation is that they will be published if competently executed.

Papers to be replicated for these issues will be selected based on their impact in the area, and not based on an editorial prior about whether they hold up. It should be viewed as a professional recognition of importance when a paper is selected for one of these issues.

Mandatory Paper Outline

  1. Pure replication from the original underlying data.
  2. Out-of-sample tests—performance since publication. Best inference today.
  3. Plain specification robustness tests.
  4. Additional author-discretionary higher-level tests and discussions.

For more information about the replication paper requirements and the review and publication process, please see the Call for Papers at

Replication Issues

The issue on liquidity, covering Acharya and Pedersen (2005), Amihud (2002), and Pastor and Stambaugh (2003), is almost complete and expected to be published in 2019.

The next two issues in the series, for which we are soliciting proposals, follow below. If you are interested in authoring a replicating and extending paper, please contact the CFR Editor in charge and cc Ivo Welch. Include a description of the team members, the paper to be replicated, and any potential conflicts of interest. The lead author on a team must have an existing publication record.

Higher-Order Moments: Full RFP

The next issue will be on higher-order moments, edited by Juhani Linnainmaa. The complete call is at Suggested papers are

Regional Finance: Full RFP

The next issue thereafter will be on regional finance, edited by Zhi Da. The complete call will be posted soon. Suggested papers are

Future Issues

Authors with a strong publication record who want to edit a future issue on an important finance topic should contact Ivo Welch. It is important that editors not be conflicted themselves with respect to what the findings will be.

Earlier More Detailed Call Description: May 2017

Special Replication Issues

Following the success of its first such issue, the Critical Finance Review is planning to regularly publish issues dedicated to replicating the most influential empirical papers in financial economics. It is explicitly not the goal of these replication issues either to prove or to disprove the papers. The replications are meant to be as objective as possible. The CFR wants to reduce the incentives of authors to slant the results either favorably or unfavorably. The contract between an invited replicating team (often headed by a senior researcher, with some junior researchers or a Ph.D. student) and the CFR is that the journal will publish the replicating paper even (or especially) if all the findings of the original paper hold perfectly.

Papers for such issues are not selected because the editors have a prior on whether they are replicable. They are selected because they are influential flagship papers in the area. It is a professional recognition to have one's paper selected into one of these issues.

Mandatory Paper Outline

The format of replication studies should be in roughly equal parts (and in this order):

  1. Pure replication from the original underlying data. Exact replication is a sine qua non.

    Replication should be an attempt to use the same sample and methods employed in the original paper to obtain the same figures as those in the key tables reported in the original paper. This part of the paper exists not only to confirm that there were no coding errors in the original paper, but also to keep the starting point as close to the original paper as possible. The authors of the replicating paper are also required to publish their replication source code and some data (ideally the full data sets).

    We do not expect replication problems. However, a replication may fail through little or no fault of the original authors. For example:

    • The original data (even CRSP and Compustat) may have been corrected or updated over the years.
    • The original paper may not have spelled out all details. (Doing so is nearly impossible in an academic paper.) The replicating paper should also clarify methods and procedures that are not immediately clear from the original paper.

    If this is the case, we as a profession still want to find out. The point is not to blame original authors—the point is to learn.

    Important: The idea is not to replicate all the values in all the tables—just the key ones. Replicating authors who want to replicate more or all results (i.e., more than what is suitable for print) can do so, but such results will go into an online appendix. Again, the idea of part 1 is to confirm the exact numbers, and distribute the source that makes this possible.

  2. Out-of-sample tests—performance since publication. Because we usually publish replications of papers 10-20 years old, we now can learn (a) whether the effects have become weaker; (b) whether the results continued to hold out of sample; (c) if they did not, whether they were so opposite that the full sample inference by now has changed (e.g., from statistically significant to insignificant).
  3. Plain specification robustness tests. This could add new tests: winsorizing, alternative weighting schemes, alternative timing, common additional controls, different standard error assumptions, and/or a placebo. Ideally, the replicating paper should also show or at least discuss when such tests support (or challenge) the original paper's conclusions.
  4. Additional higher-level tests and discussions. This could include interpretations of issues such as (corrections of) endogeneity, even if this could arguably be considered an omitted-variables issue. It could concern time-series rather than cross-sectional association (or vice versa, e.g., fixed effects), which can differ in meaning and interpretation but is interesting from the perspective of the hypothesis. It could also contain interpretations of the findings through a different lens than that proposed in the original paper. This aspect of the paper could be a good venue for the replicating authors to publish original thoughts and discussion points that would otherwise not be easy to communicate to the broader profession.

Journal-Author Contract

Again, the CFR emphasizes that it is committed to publishing replication papers that conclude that the original paper was perfect.

Each paper is expected to be replicated by two teams working independently. If replication turns out to be difficult, teams can also help one another in the pure-replication part of the work. The first (replication) part is expected to be identical across both teams. If a team cannot replicate the original paper independently, it is then encouraged to communicate with the other replicating teams. If no team can replicate the original paper, then the teams are asked to coordinate with me for communication with the original author(s). We want to minimize the imposition on the original authors. In any case, we hope that replication will not be painful. After all, the original paper should have set out the recipe.

The CFR does reserve the right to ask teams to remove outright incorrect tests and execution, but it will give replicating authors extensive latitude in deciding on good tests. For example, the editor and referee may feel that value-weighting removes too many interesting observations, but if the replicating authors insist that it is important and interesting, it will likely survive to the published paper.

Regardless of outcome, the original authors will be invited to provide non-anonymous feedback on the first submission of the papers and to publish their own perspectives on the replicating papers. They get the last word. Disagreements are welcome—insinuations are not.

As to the key incentive that makes participation worth the replicating teams' while, please recall that the replication paper—unlike others—will be published. It is not the usual wild goose chase, the desperate search for astonishing findings. Moreover, we know from psychology's replication experience that such papers can be very influential. They are beginning to transform that discipline. We hope we can do the same for finance and accounting, too. Be part of it!


The second issue in this replication series will be dedicated to .... Tom and Jerry have graciously agreed to serve as the editors for this issue.

The empirical papers for this issue were selected based largely (but not exclusively) on objective citation counts. They are:

  1. Donald Duck
  2. Mickey Mouse
  3. Bugs Bunny
  4. ...
The CFR is hereby soliciting proposals for replication for each of these papers.


The intended timeline is:

  • Submission of interest: 3–6 months (application selection)
  • Confirmed replication: 6–12 months (first part: source code, data sets)
  • First submission: 12–18 months
  • Review process: 3–4 months
  • Final submission: 24 months
  • Original author responses: 27 months
  • Issue creation: 36 months
  • Publication: 42 months

Team Objectivity

Members of the replication teams should be and strive to remain objective—if a third party could perceive a personal conflict-of-interest, either positive or negative, please indicate this in the proposal to the editor. The CFR's preferred goal is to select not only teams that are objective, but also viewed as objective. In case of doubt, please ask.

  • It is not a conflict of interest or lack of objectivity if authors have an opinion or hunch about whether the paper to be replicated is likely to hold up. In fact, some submitters may already have worked on a replication earlier.
  • It is a conflict of interest if the replication and original authors have had a history of repeated disagreements.
  • It is a lack of objectivity if the replicating authors are intent on proving an outcome.


If interested, please contact the assigned issue editor yosemite sam and cc the CFR Editor Ivo Welch with a description of the team members, the paper to be replicated, and any potential conflicts of interest. The lead author on a team must have an existing publication record.


A Professional Appeal To Participate in Our Venture

The effort involved in authoring a replicating paper is much lower than for an ordinary paper. There is a clear road map of what is required and a direct route to publication. It should require less effort than even an invited paper. The work can be done together with coauthors and/or Ph.D. students.

This will be the first time our profession has ever tried to execute objective, systematic replication. We ask famous researchers to help lend some prestige to this first-time undertaking.

As for me, I would like every researcher to consider helping on this collective task at least once in their lifetime, and to view it as a necessary service to our academic profession, similar to refereeing, and regardless of whether it makes friends or enemies. I am worried about what our academic enterprise means if even famous people prefer to free-ride and not help build an objective replicated knowledge base. If the famous don't care enough to do it, how can we ask others?

Note that it is not necessary for the replicating team to be built around an expert in the subject. After all, this is a replication with an outside perspective. It requires good financial empiricists, not subject experts.

Lack of replication (not just replicability) is not my problem. It is our collective problem. If we cannot get this done as a collection of hundreds of academic researchers, what meaning does our professional endeavor really have?

What does our profession need most? More published papers? More referee reports? More of "everyone knows this is false" insinuations (which, as editor of the CFR, I have heard too many times without empirical support)? Or do we need unbiased replication and confirmation/rejection of our most important base findings? Where do you think you can contribute the most to our science?

Replication (and not just replicability) is vitally important for the profession. The CFR is not trying to debunk papers. It is trying to bring objectivity into (and remove politics from) the knowledge-building process.
