
Thread: Peer review

  1. #1
    Scobblelotcher Sistamatic's Avatar
    Type
    INTP
    Join Date
    Jan 2014
    Posts
    4,442
    INTPx Award Winner

    Peer review

    This thread is inspired by this very disturbing article that I hope deserves its own thread:

    http://www.sciencemag.org/content/342/6154/60.full

    What happens if the words "peer reviewed" cease to mean anything?
    Last edited by Sistamatic; 02-20-2015 at 03:39 AM.

  2. #2
    Amen P-O's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Posts
    665
    Quote Originally Posted by Sistamatic View Post
    What happens if the words "peer reviewed" cease to mean anything?
    I guess we'll be forced to think for ourselves...


    Edit:

    I don't think there are going to be serious problems with people trusting peer review in general. Specific journals will be targeted and discredited. -- For example: How often has this bogus paper been cited by any real scientists? Probably not very often, I would guess.


    On the other side of the argument: The issue of trusting peer-reviewed journals as authorities has always been a tightrope walk. Ultimately, only the journals with good quality control are going to be regarded as trustworthy. Every time somebody finds a bullshit paper in a journal, it affects that journal's credibility.
    Last edited by P-O; 02-20-2015 at 04:41 AM.
    Violence is never the right answer, unless used against heathens and monsters.

  3. #3
    Utisz's Avatar
    Type
    INxP
    Join Date
    Dec 2013
    Location
    Ayer
    Posts
    2,769
    I would say that "peer reviewed" ceased to mean anything quite a long time ago, if it ever meant anything in the first place. There is no central organisation regulating "peer review", hence it's not even clear what it means. In the strictest sense it just means that a paper was sent to some folks, typically three, hopefully in a related area, who eventually okayed it for publication.

    There have been a bunch of similar stories and similar experiments down through the years showing that "peer review" is fallible ... although fly-by-night open access journals are a relatively new phenomenon, shit papers getting through peer review and negligent peer review practices aren't new. My favourite in this line were the papers automatically generated by SCIgen and accepted by various comp. sci. venues (to no real surprise to anyone in comp. sci., even if one of the conferences was sponsored by IEEE).

    In general, peer review is not a binary thing ... it's not a case of a paper being peer reviewed or not. It's a question first and foremost of where it was published, what it contains, if it makes sense, and (if applicable) if it has been reproduced or if what it claims actually translates into working practice. Getting "published" is easy. Getting published in a good journal with a reputation and heritage at stake and a community interested in upholding a good reputation is hard(er).

    An experienced researcher will be very much aware of caveat emptor with respect to peer review ... and on reading such an article in such a journal (which they probably won't do anyway) will quickly know it's bullshit. And yep, even companies like Elsevier publish shit journals. From a scientific perspective though, it's not hard to filter out this sort of noise. (What is more difficult is cases where results are fabricated just so, or are subject to bias, but are published in good journals ... even peer review in good venues operates on a basic principle of good faith in the first instance, since a reviewer is not able to, e.g., go into the lab and make sure that everything was reported accurately ... they almost always have to assume good faith.)

    The real problem with peer review is when folks ... like journalists say ... take something that's "peer reviewed" as infallible or to justify an appeal to authority or whatever. Or when naive researchers looking for a quick journal publication get duped by some predatory publisher/journal who (I guess like a lot of well-esteemed academic publishers ironically enough) make their money from exploitative practices.



    Peer review is ... well, it is a flawed system, but the least of its concerns is some shitty venues accepting nonsense papers imo. I think the biggest real threat to peer review at the moment is that the rate of submissions has increased so much (esp. from developing countries) that credible reviewers are quickly getting swamped. That leads to lazy reviewing whereby shitty papers get through good venues, to non-credible or non-expert researchers being asked to review papers that might not be in their area, and to papers being judged on authority or bias (safe papers from researchers known to the community are more likely to get accepted than "risky" papers that might actually be excellent by virtue of their novelty or impact, but are harder to evaluate and riskier to accept because of that fact). Having reviewed upwards of 100 papers myself, and being on the editorial board of a journal, and having furniture that smells of rich mahogany and all that shit, I can say that about 1/3 of the third-party reviews I encounter seem to have given the paper a skim for plausibility and then thrown together a couple of paragraphs of review, even at the very top conferences of computer science.

    EDIT:

    Quote Originally Posted by P-O View Post
    I don't think there are going to be serious problems with people trusting peer review in general. Specific journals will be targeted and discredited. -- For example: How often has this bogus paper been cited by any real scientists? Probably not very often, I would guess.


    On the other side of the argument: The issue of trusting peer-reviewed journals as authorities has always been a tightrope walk. Ultimately, only the journals with good quality control are going to be regarded as trustworthy. Every time somebody finds a bullshit paper in a journal, it affects that journal's credibility.
    PLOS ONE (+1 ... it's a terrible pun)

  4. #4
    Now we know... Asteroids Champion ACow's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Location
    Melbourne, Australia
    Posts
    2,267
    Was I the only one who noticed that, in the original article about sending a joke paper full of methodological flaws to lots of shitty open access journals, he didn't set up a control group?

  5. #5
    Scobblelotcher Sistamatic's Avatar
    Type
    INTP
    Join Date
    Jan 2014
    Posts
    4,442
    INTPx Award Winner
    @ACow I did notice, especially since I really would like to know how much better/worse these bogus papers would have fared at "better" journals, but at the same time I understand he couldn't have submitted "real" papers, because generating just one of those takes so much work and using previously published ones would have raised red flags. Sometimes field science can't help but be a little bit sloppier than the lab variety.

    Quote Originally Posted by Utisz View Post

    The real problem with peer review is when folks ... like journalists say ... take something that's "peer reviewed" as infallible or to justify an appeal to authority or whatever. Or when naive researchers looking for a quick journal publication get duped by some predatory publisher/journal who (I guess like a lot of well-esteemed academic publishers ironically enough) make their money from exploitative practices.



    Peer review is ... well, it is a flawed system, but the least of its concerns is some shitty venues accepting nonsense papers imo. I think the biggest real threat to peer review at the moment is that the rate of submissions has increased so much (esp. from developing countries) that credible reviewers are quickly getting swamped. That leads to lazy reviewing whereby shitty papers get through good venues, to non-credible or non-expert researchers being asked to review papers that might not be in their area, and to papers being judged on authority or bias (safe papers from researchers known to the community are more likely to get accepted than "risky" papers that might actually be excellent by virtue of their novelty or impact, but are harder to evaluate and riskier to accept because of that fact). Having reviewed upwards of 100 papers myself, and being on the editorial board of a journal, and having furniture that smells of rich mahogany and all that shit, I can say that about 1/3 of the third-party reviews I encounter seem to have given the paper a skim for plausibility and then thrown together a couple of paragraphs of review, even at the very top conferences of computer science.
    Yes, my concern isn't a theoretical one but a practical one. I know most scientists aren't going to be fooled by bogus articles published in the Journal of Podunkadubioustanford's College of Medicine just because they have nice letterhead, but most people aren't scientists. Watching my parents and a number of others who have both internet connections and desperation for any kind of hope try to navigate this while battling cancer or anything else is painful, and there aren't enough hours in a day for me to answer every question and try to explain why the "scientific research" they are pinning their entire future on isn't necessarily reality. Finding "proof" that they are being conned is exhausting and damn near impossible. You need to be more than smart to tell the difference; you need to be educated in the field. And let's face it, most of the journalists presenting science to the public just aren't. Science is magic, Tesla is Merlin, and the way it is presented to the public turns it into a wince-worthy, poorly thought out magic system from a shitty self-published sci-fi debacle. While "peer review" has never been perfect, it didn't use to be a rubber stamp you could buy on the internet and use on toilet paper.

    http://www.economist.com/news/leader...nce-goes-wrong

    "Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 “landmark” studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2000-10 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties."

    Here's another interesting trend:

    "...failures to prove a hypothesis are rarely even offered for publication, let alone accepted. “Negative results” now account for only 14% of published papers, down from 30% in 1990."
    Last edited by Sistamatic; 02-20-2015 at 05:32 PM.

  6. #6
    Member Ruby_Bookrose's Avatar
    Type
    INTP
    Join Date
    Jan 2015
    Location
    weird place...
    Posts
    216
    Quote Originally Posted by Sistamatic View Post
    @ACow I know most scientists aren't going to be fooled by bogus articles
    I agree - I think a lot of these studies are propaganda from the publishing world. There's a giant turf war over dissemination. "OFFICIAL", big-name journals and publishers have an incentive to undermine perceptions of rigor and validity in open-access publishing. Lots of $$$ in play. Of course they want to make the claim that "rigorous" peer review only happens in licensed contexts but there are lazy reviewers working for all the journals (open or not).

    Publishers don't want researchers to make their findings available for free. As this catches on, universities will start dropping licensed journals and databases (e.g. EBSCO, Wiley, etc.) right and left. Our system just dropped Wiley, as one example.
    Last edited by Ruby_Bookrose; 02-20-2015 at 05:49 PM. Reason: fixed a word

  7. #7
    Scobblelotcher Sistamatic's Avatar
    Type
    INTP
    Join Date
    Jan 2014
    Posts
    4,442
    INTPx Award Winner
    Quote Originally Posted by Utisz View Post
    I think the biggest real threat to peer review at the moment is that the rate of submissions have increased so much (esp. from developing countries) that credible reviewers are quickly getting swamped
    I agree this is a big problem ... probably the lion's share of it. It's like an age structure pyramid, only instead of the energetic base providing resources for the aging top, the top must provide that most valuable resource, time, for the ever-widening base. Is there a solution for this, or is it just a problem we have to live with?

  8. #8
    Mens bona regnum possidet ferrus's Avatar
    Type
    INTP
    Join Date
    Dec 2013
    Location
    Barcelona, Catalonia
    Posts
    5,669
    Quote Originally Posted by Sistamatic View Post
    I agree this is a big problem ... probably the lion's share of it. It's like an age structure pyramid, only instead of the energetic base providing resources for the aging top, the top must provide that most valuable resource, time, for the ever-widening base. Is there a solution for this, or is it just a problem we have to live with?
    Heh, in my experience, what usually gets proposed as an alternative comes from Libertarian global warming deniers who wish to propagate some kind of 'online community' form of review - which to me sounds like the tyranny of the majority (or the majority mobilised by money) made manifest. Which is funny, because outside the scope of science, in the scope of politics and economics, that is exactly what they usually wet the bed over.
    Logic is not a body of doctrine, but a mirror-image of the world. Logic is transcendental. - Wittgenstein

  9. #9
    Scobblelotcher Sistamatic's Avatar
    Type
    INTP
    Join Date
    Jan 2014
    Posts
    4,442
    INTPx Award Winner
    Here's an interesting approach:

    Faculty 1000 calls "for science to more formally embrace post-publication peer review, and stop fetishizing the published paper."
    Their policy is stated as, "The journal will publish all submissions immediately, beyond an initial sanity check."

    Here's how it works: http://f1000research.com/about

    Essentially, they publish your paper as is. You then have to find referees who meet certain qualifications. Your paper is out there for everyone to read while your reviewers review it. After/if the peer review process is completed, your paper's status changes from "pending review" to "reviewed". If you make additional minor findings, you can update your paper.

    In 2013, they temporarily waived fees for all negative results papers in an attempt to address the issue of the slipping negative results ratio.

    Their approach does nothing to address the inverted pyramid issue though, does it? In fact, it might make it worse.

  10. #10
    Member Ruby_Bookrose's Avatar
    Type
    INTP
    Join Date
    Jan 2015
    Location
    weird place...
    Posts
    216
    Quote Originally Posted by ferrus View Post
    sounds like the tyranny of the majority (or the majority mobilised by money) made manifest.
    Actually, it's always insiders who decide what gets to count as knowledge (open access or not).
    Read this?

