Journals Tackle Scientific Fraud

From Wikinews:

While carefully examining the data for every publication would be prohibitively expensive, several techniques to screen for falsified data have been developed. For example, images can be checked for after-the-fact manipulation. The scandals have led some readers to look more carefully at suspicious papers for evidence that the data were fabricated rather than measured. Jan Hendrik Schön’s fraud was partly revealed by just such reader analysis in 2002.

Some institutions are taking further steps to prevent fraud. In October, Emory University introduced a tip line where people can anonymously report suspicions of scientific misconduct. So far, five investigations have been launched based on reports to the tip line.

While many steps are being taken to deter future scandals, some scientists have pointed out that such fraud does not discredit all science. Dieter Imboden, the president of the Swiss National Science Foundation’s Research Council, has said: “Bear in mind that every experiment will be repeated at some stage and it is one of the important principles of science that only things which can be verified independently by different groups are considered to be safe scientific facts.”

One thought on “Journals Tackle Scientific Fraud”

  1. I’ve just finished attending a scholarly publishing meeting here in DC, and this was indeed one of the topics covered – in a talk titled “Still Safe at Any Speed?” by Brian Crawford of the American Chemical Society. He mentioned a number of technologies publishers may start looking at to try to catch more of these things automatically:

    • checking cited references (validity, relevance)
    • checking for deposited data (if that’s a requirement of the publication, automatically verifying that a provided URL contains what it should: the right sort and size of content, etc.)
    • plagiarism checks – looking for matching content in other articles
    • forensic image analysis, to catch “photoshopping” etc.
    • statistical analysis – looking for digit preferences (non-randomness) in the experimental numbers for instance
    • other tools to check experimental data validity.

    It’s sad we have to impose such things, but scientists are human too, and can be influenced by their own personal views, egos, and interests. One problem specifically mentioned at the meeting was authors of bio-medical clinical trials who don’t disclose conflicts of interest, for instance when a trial favors a drug they have a financial interest in. Journals have definitely become a lot more aware of these problems recently.
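The digit-preference check mentioned in the list above can be sketched as a simple chi-square test on the trailing digits of reported values (a minimal illustration with a hypothetical function name and made-up inputs; real screening tools are considerably more sophisticated):

```python
from collections import Counter
from math import fsum

def digit_preference_chi2(values, digit_index=-1):
    """Chi-square statistic for non-uniform digit frequencies.

    Trailing digits of genuine measurements tend to be roughly
    uniform, so a large statistic (compared with the chi-square
    critical value at 9 degrees of freedom) hints that the
    numbers may have been invented by hand.
    """
    # Pull out the digit of interest from each value (-1 = last digit).
    digits = [str(abs(v)).replace(".", "")[digit_index] for v in values]
    n = len(digits)
    counts = Counter(digits)
    expected = n / 10  # uniform expectation over digits 0-9
    return fsum((counts.get(str(d), 0) - expected) ** 2 / expected
                for d in range(10))
```

For example, five values that all end in 5 score 45.0, far above the 95% critical value of about 16.92 at 9 degrees of freedom, while ten values whose last digits cover 0 through 9 evenly score 0.0.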
