Recently, Japanese stem cell researcher Hisashi Moriguchi was found to have lied about a startling study that used induced pluripotent stem cells to treat six heart patients. In fact, no patients were treated, and the University of Tokyo, where Dr. Moriguchi worked, has dismissed him.
A few years ago, Korean scientist Hwang Woo-Suk falsely claimed to have cloned the first human embryos and obtained embryonic stem cells from them. Seoul National University, the institution Dr. Hwang was associated with at the time, confirmed the experiments were fabricated. Dr. Hwang's latest project is to clone a woolly mammoth in collaboration with Russian scientists, though it is unclear why anyone would trust his work.
Scientific misconduct has not been limited to Asia, of course. Peter Francis of Oregon Health and Science University (OHSU) admitted he faked results for experiments claiming to use stem cells to repair retinal degeneration in mice and monkeys. He used the falsified data in grant applications for government research funding. Earlier this year, OHSU dismissed him.
Two different psychology researchers at the Dutch Erasmus University were found to have fabricated data on consumer behavior over many years, and Dipak Das, a University of Connecticut researcher studying the health benefits of resveratrol in red wine, made up data for 145 research projects (sorry, wine enthusiasts).
Is There An Epidemic of Fraud?
With so many scientific misconduct cases over the last several years, are scientific results still trustworthy? Is there an epidemic of fraudulent science?
Scientific advancement relies on obtaining reliable data. Theories are developed, invalidated, corrected, or supported based on data. There are often competing models to explain the same data; in fact, this is usual. Successful models win out when new data no longer supports the alternatives. Without reliable data, there is no science. While the egregious cases certainly don't represent the majority of scientists, the question is whether they are a symptom of a larger problem. Are these just the tip of the iceberg?
Most Errors Are Not Honest Mistakes
For some years, researchers have noticed that a great many results reported in biomedical journals are simply not reproducible. The problem was documented in Why Most Published Research Findings Are False. Many of the explanations focus on the difficulty of publishing negative results, poor data analysis, and the way competitive investigators are susceptible to rationalizations that bias how they select data and design experiments. However, is there possibly a more direct reason? A recent study showed that most retractions are due to misconduct, not inadvertent error.
Chasing Red Herrings and Going Down Rabbit Trails
For drug development, this trend has created a serious problem. Most major drug companies actively sift the scientific literature to find genetic candidates against which they can develop drugs. The first step is to validate the findings of articles describing potentially promising new drug target candidates. For several years, it has been clear that many, if not most, of these findings do not hold up under scrutiny.
In 2011, frustrated Bayer scientists retrospectively analyzed four years of results from 67 published studies that identified genes, mostly cancer related, as potentially good candidates for drug development. For 43 of the studies, the in-house data Bayer produced showed major inconsistencies with the published results. 64% of the studies were not reproducible!
Several months after the Bayer report, C. Glenn Begley, former global head of cancer research at Amgen, reported that the results in 47 of the 53 publications his group had sought to reproduce over a 10-year period could not be replicated.
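The failure rates behind those two surveys work out as follows; this is just a quick check of the arithmetic on the numbers quoted above (43 of 67 for Bayer, 47 of 53 for Amgen):

```python
# Sanity-check the replication failure rates from the Bayer and Amgen surveys.

def failure_rate(failed, total):
    """Fraction of studies that could not be reproduced, as a percentage."""
    return 100 * failed / total

bayer = failure_rate(43, 67)  # Bayer: 43 of 67 studies inconsistent
amgen = failure_rate(47, 53)  # Amgen (Begley): 47 of 53 not replicated

print(f"Bayer: {bayer:.0f}% irreproducible")  # Bayer: 64% irreproducible
print(f"Amgen: {amgen:.0f}% irreproducible")  # Amgen: 89% irreproducible
```

The Amgen figure is even starker than Bayer's: nearly nine in ten of the landmark papers examined failed to replicate.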
In response to these results, pharmaceutical companies seem to be modifying their approach to finding new drug targets. There is some indication they are taking a more skeptical view of the literature, developing stronger in-house programs to find and evaluate new targets independently, and working more closely with biotechnology companies and academic groups that have a proven track record of good science. They certainly need to, given the lack of reliable new targets.
Why the Sudden Problem with Publication Reliability?
Is the reliability problem with published data really a new phenomenon, though? There are plenty of examples of egregious scientific fraud, from the Piltdown man to cold fusion. Scientific greats such as Isaac Newton and John Dalton faked their data. It's likely the dedicated monk Gregor Mendel, the founder of genetics, made a few data adjustments. So, it certainly doesn't seem to be a new problem. Why does it suddenly seem like a crisis then?
With the advent of the biotechnology industry and the digitization of journals, science has entered a new era. The number of new scientific publications has grown by about 4.7% per year since the middle of last century. With online publishing and advanced search engines, it is also now possible for investigators to keep track of more published information. Articles easily overlooked in the past now pop up in a search after a few keystrokes. Simply put, there are now more publications, and they all get better exposure.
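To put that growth rate in perspective, a short sketch of the compounding arithmetic (the 4.7% figure is the one cited above; the doubling-time formula is the standard one for compound growth):

```python
import math

# Annual growth rate of scientific publications cited in the text.
growth = 0.047

# Doubling time under compound growth: (1 + r)^t = 2  =>  t = ln 2 / ln(1 + r)
doubling_years = math.log(2) / math.log(1 + growth)

print(f"Publication output doubles roughly every {doubling_years:.0f} years")
# Publication output doubles roughly every 15 years
```

At that pace, the literature a reviewer or drug hunter must sift roughly doubles every 15 years, which helps explain why weak findings surface more often than they once did.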
Starting in the 1970s, the emerging biotechnology industry changed the landscape of drug development. Rather than blindly screening large libraries of chemicals for some biological activity, a more directed, rational approach to developing new drugs took hold. The new strategy exploited the recently developed tools of biotechnology to delve into the underlying genetic and biological mechanisms behind disease, and then to find compounds that interacted with these factors to affect the disease biology in predictable ways.
However, this rational drug discovery approach places much greater reliance on prior academic research to build models of how diseases such as cancer develop and progress, and to identify the aspects that might be susceptible to therapeutic intervention. Research findings are no longer merely interesting academic science. Mistakes have costly consequences. Companies make program decisions allocating money, resources, and time based on new findings in relevant publications. This intersection of science and business creates a dynamic that places far more scrutiny on published science.
Biotech Needs Better Reliability
The pharma and biotech industry is looking for a solution and, where there's a problem, there is opportunity. The most straightforward fix would be replication of the data by an independent lab. The recently launched Reproducibility Initiative is trying to encourage this approach by connecting publishing laboratories with groups that will run an independent validation of the study on a fee-for-service basis. While this would certainly help engender confidence in the results, it is also quite costly, and reproducing some studies can be very time consuming. Given this, it is not clear how successful the approach will be.
While it would probably not eliminate the irreproducibility problem, some basic procedural changes to publication and grant application requirements may be a practical way to improve the situation. For example, more emphasis on primary data disclosure, greater transparency in the peer review process, and increased requirements for statistical rigor would likely address a significant part of the problem. What is clear is that, until changes are made, it's a good idea to keep in mind Benjamin Franklin's saying, "Believe none of what you hear and half of what you see." In this case, though, make that half of what you read. The trick, of course, is figuring out which half to believe.