Re: Peer Review Reform Hypothesis-Testing

From: Stevan Harnad <harnad_at_ecs.soton.ac.uk>
Date: Mon, 14 Apr 2003 23:58:52 +0100

I have not read the Cochrane (http://www.cochraneconsumer.com/)
study on peer review (reported in the BMJ article quoted below), but all
the accounts of it I have read suggest that (as one would expect from a
study in which the critical variable is not subjected to controlled
manipulation) it neither tested nor showed anything substantive.

    "based.. on 21 studies... Almost half on the effects of concealing
    the identity of reviewers and/or authors... Few... assessed the
    impact of peer review on... importance, usefulness, relevance,
    or quality... Only one small study tested the validity of the peer
    review procedure itself." [BMJ 2003;326:241]

To test the effect of peer review on the quality and validity of
research results, we have to compare sufficiently large, representative,
and comparable samples, with and without peer review, in some objective
quantifiable way across a sufficiently long time-interval. On the face
of it, comparing the *same* sample of research results, before and after
peer review (preprints vs. postprints) would appear to go some way in
this direction: Is that what the one small Cochrane study did? What was
the objective quantitative measure of comparative quality and reliability
of the research results?
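
(By way of illustration only: here is a minimal sketch, in Python, of what
such a paired preprint-vs-postprint comparison might look like. Everything
in it -- the sample size, the 0-10 assessor scores, the assumed score
shift -- is an assumption invented for the sketch, not anything drawn from
the Cochrane study or from real data:

    # Illustrative only: paired comparison of (blinded) assessor quality
    # scores for the *same* papers before (preprint) and after (postprint)
    # peer review. All numbers below are invented assumptions.
    import numpy as np
    from scipy.stats import wilcoxon

    rng = np.random.default_rng(0)
    n_papers = 200                                  # assumed sample size

    # Assumed 0-10 quality scores assigned to the unrefereed preprints:
    preprint = rng.normal(6.0, 1.5, n_papers)
    # Suppose refereeing-plus-revision shifts scores by some unknown amount:
    postprint = preprint + rng.normal(0.4, 1.0, n_papers)

    # Paired, non-parametric test: do the same papers score higher after review?
    res = wilcoxon(preprint, postprint)
    print(f"median gain after refereeing: {np.median(postprint - preprint):.2f}")
    print(f"Wilcoxon signed-rank p-value: {res.pvalue:.4g}")

Even a design like this, of course, measures only the visible,
after-the-fact corrections of refereeing; it cannot measure the
anticipatory effect discussed next.)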

(And even here, there is the "Invisible Hand" effect that must be
taken into account too: *All* research today is written in anticipation
of having to answer to peer review; this affects the quality even of
pre-refereeing research results. Remove the invisible-hand constraint, and
who knows what effect that would have on the quality level of unrefereed
preprints: To discount this is to discount the deterrent effect of police
presence in the neighborhood, by noting that crime rates are not much
different while the police are and are not patrolling a given block. I
expect that similar null conclusions could be drawn about the value of
hand-washing, based on spot-checks for flu on days you have and haven't
forgotten to wash your hands...)

    Harnad, S. (1998) The invisible hand of peer review. Nature
    [online] (5 Nov. 1998)
    http://helix.nature.com/webmatters/invisible/invisible.html
    Longer version in Exploit Interactive 5 (2000):
    http://www.exploit-lib.org/issue5/peer-review/

Referee-anonymity and blind vs. nonblind peer review are the perennial,
trivial variables we keep badgering (indecisively) in connection with
peer review. But the substantive question -- whether we seriously believe
that expert work need *not* be assessed for quality, answerably, by
qualified experts, before being certified as ready for use -- has simply
never been put to the test (in any discipline, scientific or scholarly).
The latest (untested) proposal is to swap post-hoc ad-lib online
commentary for a-priori peer review. (Caveat pre-emptor...)

    Peer Review Reform Hypothesis-Testing
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/0479.html

    A Note of Caution About "Reforming the System"
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/1169.html

    Self-Selected Vetting vs. Peer Review: Supplement or Substitute?
    http://www.ecs.soton.ac.uk/~harnad/Hypermail/Amsci/2340.html

Last thought (a methodological one): What is the "null hypothesis"
here? That peer review *does* play a causal role in maintaining
the quality-level (such as it is) of our current refereed research
literature, until empirically shown otherwise? Or that it *doesn't*,
until empirically shown to do so? Where is the burden of proof, after
our 300 years of exclusive reliance on our expert-vetted research
literature (such as it is)? (One also wonders what expertise a health-care
consumer organization like http://www.cochraneconsumer.com/ can bring to
bear on the general question of the causal contribution of peer review
to research quality (as opposed to the quality of research applications):
the direct "consumers" of research, after all, are the peer community,
and it is their research time and effort that a-priori peer review is
intended to buffer from having to contend directly with raw results of
unknown quality and validity.)

Stevan Harnad

> I expect that you all know about the recent Cochrane report on Peer
> Review. The following is an article from the BMJ 2003;326:241 (1 February).
>
> Little evidence for effectiveness of scientific peer review
> Caroline White, London
>
> Despite its widespread use and costs, little hard evidence
> exists that peer review improves the quality of published
> biomedical research, concludes a systematic review from the
> international Cochrane Collaboration.
>
> Yet the system, which has been used for at least 200 years,
> has only recently come under scrutiny, with its assumptions
> about fairness and objectivity rarely tested, say the review
> authors. With few exceptions, journal editors and
> clinicians around the world continue to see it as the
> hallmark of serious scientific endeavour.
>
> Published last week, the review is the third in a series
> from the Cochrane Collaboration Methods Group. The other
> reviews look at the grant application process and technical
> editing.
>
> Only the latter escapes a drubbing, with the reviewers
> concluding that technical editing does improve the
> readability, accuracy, and overall quality of published
> research.
>
> The Cochrane reviewers based their findings on 21 studies of
> the peer review process from an original trawl of only 135.
> These were drawn from a comprehensive search of biomedical
> print and online databases, and information received from
> bodies such as the World Association of Medical Editors.
>
> Almost half of the available research focused on the effects
> of concealing the identity of reviewers and/or authors,
> which, the Cochrane authors conclude, has little impact on
> quality. Few studies assessed the impact of peer review on
> the importance, usefulness, relevance, or quality of
> research. Only one small study tested the validity of the
> peer review procedure itself.
>
> On the basis of the current evidence, "the practice of peer
> review is based on faith in its effects, rather than on
> facts," state the authors, who call for large, government
> funded research programmes to test the effectiveness of the
> system and investigate possible alternatives.
>
> "As the information revolution gathers pace, an empirically
> proven method of quality assurance is of paramount
> importance," they contend.
>
> Professor Tom Jefferson, who led the Cochrane review,
> suggested that further research might prove that peer
> review, or an evolved form of it, worked. At the very least,
> it needed to be more open and accountable.
>
> But he said that there had never even been any consensus on
> its aims and that it would be more appropriate to refer to
> it as "competitive review."
>
> Not only did peer review pander to egos and give researchers
> licence to knife each other in the back with impunity, he
> said, but it was also "completely useless at detecting
> research fraud" and let editors off the hook for publishing
> poor quality studies.
>
> In the latest report from the Committee on Publication
> Ethics, Professor Peter Lachmann, until recently president
> of the UK Academy of Medical Sciences, comments: "Peer
> review is to science what democracy is to politics. It's not
> the most efficient mechanism, but it's the least
> corruptible."
>
> The report can be accessed from the National Electronic
> Library for Health (http://www.nelh.nhs.uk)
>
> http://bmj.com/cgi/content/full/326/7383/241/a?2003
> BMJ Publishing Group Ltd
>
> News in Brain and Behavioural Sciences - Issue 84 - 25th January, 2003
> http://human-nature.com/nibbs/issue84.html