The manipulation of objectivity: an excellent demonstration

In a fine article published today in the New York Times, Amy Qin explained to its readers how the struggle for scientific reputation drives scientists, this time in China, to publish fake research. Quantitative measures, specifically impact factors, play the main role in career promotions (which, I believe, is still much better than promotion based on political loyalty). “In June, Sichuan Agricultural University in Ya’an awarded a group of researchers about $2 million in funding after members got a paper published in the academic journal Cell.” Why not? Cell has an impact factor of 30, and I would like to believe that a journal with such a well-deserved, long-standing reputation still has a reliable peer-review system. (We editors of journals with much lower impact factors know very well how difficult it is to find reliable reviewers.)

I think the key paragraph of the article is this: “In America, if you purposely falsify data, then your career in academia is over,” Professor Zhang said. “But in China, the cost of cheating is very low. They won’t fire you. You might not get promoted immediately, but once people forget, then you might have a chance to move up.” The bad news is that fraud techniques are becoming more and more sophisticated, but banning offenders from the scientific game for 99 years might have some deterrent power.

I might be too optimistic…

10 thoughts on “The manipulation of objectivity: an excellent demonstration”

  1. I agree with Peter. I think the point here is not that science cannot distinguish fake from real research. Instead, the point is what journals are now willing to do to boost their impact factors, given the massive increase in the number of journals worldwide and the costs and challenges associated with journal life today. Here, for example, is an article on journal self-citation to boost impact factors; a small numerical sketch of the effect follows the excerpt below.

    http://blogs.nature.com/news/2013/06/new-record-66-journals-banned-for-boosting-impact-factor-with-self-citations.html

    —————
    New record: 66 journals banned for boosting impact factor with self-citations

    19 Jun 2013 | 19:32 BST | Posted by Richard Van Noorden | Category: Uncategorized

    Science publishers are sending out decidedly mixed messages about how seriously they take the impact factor — the much-maligned measure of how often the average research paper in a journal is cited.

    A record number of journals — 66 of them, including 37* new offenders — have been banned from this year’s impact-factor list (released today) because of excessive self-citation or because of ‘citation stacking’ (in which journals cite each other to excessive amounts). This year, the named-and-shamed titles include the International Journal of Crashworthiness and the Iranian Journal of Fuzzy Systems. Only 51 were banned last year (28 new offenders), and 34 the year before that. Along with the record numbers, Thomson Reuters has posted a new explanation of why it decides to ban journals — essentially because the self-citations distort the rankings.

    *Thomson Reuters updated the number of new offenders from 33 to 37 on 20 June.
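
    To make the distortion concrete, here is a minimal back-of-the-envelope sketch in Python of how excluding self-citations changes a two-year impact factor. All numbers are invented for illustration; this is not how Thomson Reuters actually computes the metric, only the basic arithmetic behind it.

        # Toy illustration: self-citations can substantially inflate a
        # two-year impact factor. All numbers below are invented.

        def impact_factor(citations, citable_items):
            """Citations received this year to items published in the
            previous two years, divided by the number of those items."""
            return citations / citable_items

        citable_items = 100        # papers published in the previous two years
        external_citations = 150   # citations arriving from other journals
        self_citations = 90        # citations from the journal to itself

        with_self = impact_factor(external_citations + self_citations, citable_items)
        without_self = impact_factor(external_citations, citable_items)

        print(f"With self-citations:    {with_self:.2f}")     # 2.40
        print(f"Without self-citations: {without_self:.2f}")  # 1.50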

  2. This is not the first time that I have read about misconduct by authors. Obviously, all these cases are well documented. However, like everything else, publication has another side: journals can also behave wrongly towards authors, and there are plenty of ways they do so. One of them is when Nature refused a paper on this very topic within one day, on the grounds that the paper would obviously be rejected anyway. In practice, there are no international standards that a reviewing process must satisfy. As far as I know, there is also no international forum where authors can submit complaints against journals.

    My suggestion is that leading universities should make a gentlemen’s agreement that there is no promotion to an associate professorship without five good-quality reviews, written in the last two years, for journals that have an impact factor. The corresponding number should be ten in the case of a full professorship. Universities may also demand good-quality reviews from full professors, for example five in every five years. If the top universities make this agreement, then all other universities will want to prove that they are at least as good and will follow the new rules.

  3. Let me share a few thoughts on rankings. In China, the impact factor of a candidate’s papers is really important for academic promotion, grant awards, etc., because it is taken to show whether the candidate deserves promotion, and it gives experts in related fields a standard of reference. While fraud scandals have happened in many countries, such as the USA and Japan, one may ask: is fraud really much more frequent in China, or is that a biased Western perspective? In my view, China’s Internet media have developed rapidly and the number of Internet users is very large, so once a fraud scandal appears, it spreads immediately. That does not mean fraud happens a lot in China; I think that is a one-sided view. One might also think that uncovered frauds, and the resulting scandals, have much milder consequences in China than in the US. To me it is clear that this is impossible in China: as mentioned above, China has a huge, well-developed Internet social media, and if a scandal is revealed, the consequences are very serious and irreparable.
    In a word, this book will greatly help readers understand the importance of ranking in everyday life.
    Looking forward to reading more.

  4. How much effort, if any, is being spent nowadays on spotting sham research? It seems like this could become an interesting intellectual challenge for some people and, possibly, a business. Demand might arise upon supply – or before it. The methods of such an enterprise may be discipline-specific, though – I wonder to what extent they could be “domain-general”. I imagine they could include anything from statistical analysis through theory to undercover agents in some cases; one such statistical screen is sketched below.
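
    One classic and largely domain-general statistical screen – a flag for scrutiny, not proof of fraud – is comparing the leading digits of reported values against Benford’s law. A minimal sketch in Python, with invented data:

        # Benford first-digit screen: many naturally occurring datasets have
        # first digits distributed as P(d) = log10(1 + 1/d); fabricated
        # numbers often deviate. A large statistic only flags data for review.
        import math
        from collections import Counter

        def benford_chi_square(values):
            """Chi-square statistic of observed first digits vs. Benford's law."""
            digits = [int(str(abs(v)).lstrip("0.")[0]) for v in values if v]
            n = len(digits)
            counts = Counter(digits)
            chi2 = 0.0
            for d in range(1, 10):
                expected = n * math.log10(1 + 1 / d)
                chi2 += (counts.get(d, 0) - expected) ** 2 / expected
            return chi2  # compare against a chi-square table with 8 d.o.f.

        suspicious = [4.1 + 0.01 * i for i in range(80)]  # every value starts with 4
        print(benford_chi_square(suspicious))             # very large -> worth a look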
