The Milken Institute Global Conference, which explores solutions to pressing challenges including healthcare, took place today in Los Angeles. FasterCures reported on the conference proceedings in the blog post below; two facts caught our attention:

  • Only 1 of every 10,000 academic discoveries makes its way into the hands of patients…
  • Industry has 0.1% of the scientists in the world. They are as good as any other scientists, but they’ll only have 0.1% of the ideas.

These facts bring home the message that academia and industry must work together, and advocacy groups such as RSRT can play a major role in catalyzing these interactions.

FasterCures Blog

“We don’t just want to chase cures, we want to catch them,” said National Institutes of Health (NIH) Director Francis Collins at the Milken Institute Global Conference today in Los Angeles.  His comment aptly captured the prevailing sentiment of a panel of life science experts who came together to discuss creative strategies for speeding and improving medical progress across diseases in the face of limited resources. 

As moderator Melissa Stevens (Deputy Executive Director, FasterCures) pointed out, our knowledge of disease has never been deeper, but our need for cures has also never been greater. Only 1 of every 10,000 academic discoveries makes its way into the hands of patients, and the cost of developing a new therapy can soar as high as $1 billion. But it’s not all bad news. With greater cross-sector collaboration and increasing levels of openness in research, we are poised to capitalize on the scientific opportunities before us. “But we have to stop playing like solo artists, and start playing like a band,” said Stevens.

When asked how that band could best jam together, Collins gave the example of NIH’s Accelerating Medicines Partnership. A new venture between the NIH, 10 biopharmaceutical companies – including Johnson & Johnson and GlaxoSmithKline, both represented on the panel – and several non-profit organizations, it seeks to transform the current model for developing new diagnostics and treatments by jointly identifying and validating promising biological targets of disease.

Melinda Richter, Head of Janssen Labs, talked about industry’s commitment to sustaining innovation through collaboration, citing Johnson & Johnson’s role as a key architect of TransCelerate and creator of the YODA Project as examples. “Janssen Labs enables scientists to think about their science in a commercial way, and creates a financial marketplace where people with money looking for technology and people with technology looking for money can find each other.” She went on to note that industry has a duty to make sure there is a strong investment profile for individuals looking to put money into the field.

“There is a new level of humility within industry,” said Moncef Slaoui, Chairman, Global R&D and Vaccines at GlaxoSmithKline, who noted that the private sector recognizes the need to embrace open innovation and collaboration to solve medical challenges. “Industry has 0.1% of the scientists in the world. They are as good as any other scientists, but they’ll only have 0.1% of the ideas. The other ideas are happening elsewhere, so we need to figure out where and how to combine forces.”

by Diana Gitig

Science, Nature, Cell, The New England Journal of Medicine, The Lancet – these most prestigious of scientific and medical journals are published weekly, each issue brimming with amazing new discoveries claiming to expand the state of knowledge in their respective fields or, better yet, to shatter current paradigms and push future research in a new direction. Yet not every published paper stands the test of time; few manage to actually shatter paradigms, and some results even fail to be replicated by other scientists. Peer review is the process most journals use to vet their papers, to try to ensure that the results they publish are correct more often than not.

It works like this: after years of toil by graduate students and postdocs, a lab head prepares a manuscript describing their hypothesis, the experimental methods used to test it, the results of those experiments, and their interpretation of those results. Sometimes the results support the hypothesis, and sometimes they refute it. Either way, the results often suggest avenues for future research. The researchers must then choose a journal and send their manuscript off to the editors.

If the paper is obviously terrible or fraudulent, the editors will reject it outright. And if it is obviously earth-shattering – with well-controlled experiments and an argument that flows logically from the results – they will accept it immediately, without reservation. Since in the real world neither of these things ever actually happens, editors usually send the paper out for peer review, asking two to four scientists familiar with the field for their opinions of the paper.

These peer reviewers must assess whether the experiments were the most appropriate ones available to test the hypothesis in question; whether the experiments were performed properly; whether the authors’ conclusions are consistent with the results obtained; and whether the findings are significant – i.e., new and sexy – enough to warrant publication. Often, the reviewers will suggest that the authors modify the wording or perform additional experiments before the paper is published. This back-and-forth can take up to a year. The reviewers are anonymous, so the authors don’t get to engage with them directly. And the reviewers don’t ultimately decide whether the paper gets published; the editors of the journal make that decision, based on the reviewers’ recommendations. If the paper is rejected, the authors are free to try the whole process again at a different journal.

Like most things in this world, peer review is not perfect. Reviewers must obviously be familiar with the topic at hand, so they are often colleagues – and can be competitors – of the researcher whose work they are reviewing. They can hold up publication, or use the ‘insider information’ they glean from the paper to advance their own research. On a less nefarious level, they are busy scientists who are not compensated for their time, so reviewing a new paper is often not their top priority. Nor have they had any training in how to review a paper, since such training is not built into science education. They also never get an assessment of their reviews, so they don’t know whether they were helpful or need to improve. And peer review is not designed to pick up fraud or plagiarism, so unless those are truly egregious it usually doesn’t.

Funding requests, like those submitted to RSRT, are subject to a very similar system. Just like journal editors, the people handing out research money rely on expert opinions to decide who gets how much. A grant is slightly trickier than a paper submitted for publication, though, because nobody knows a priori if the proposed experimental methods will work as hoped, or how significant the results might be.  As mentioned above, these things are difficult enough for reviewers to assess once the results are in – and in a grant application, the experiments haven’t even been done yet.

To minimize this risk, RSRT employs a fastidious peer review process. Reviewers are selected with painstaking attention to fields of expertise and to potential conflicts of interest, including philosophical or personality conflicts. Proposals are judged on relevance to RSRT’s mission, the scientific merit of the proposed experiments, and the strength of the investigator.

There are stirrings of change to deal with these problems. Many scientists think that established journals have a chokehold on research by deciding what gets published, and are experimenting with a more open system in which scientists publish their findings online – freely available for anyone to read, in contrast to traditional journals that charge readers for access – where they are then subject to a more transparent post-publication peer review. Some examples are PLoS ONE, BioMed Central, and F1000Research. Other researchers think pre-publication reviews should be signed, so the reviewer has some accountability.

Forums that allow for ongoing critique of papers after publication are also gaining momentum. Examples include PubMed Commons, PubPeer, and Open Review. RSRT is a fan of post-publication peer review and has long employed this approach to evaluate papers in the Rett field.

One way scientists assess the relative importance of an academic journal is by its impact factor, a measure of the journal’s prestige. It is the average number of times articles the journal published in the previous two years were cited during a given year. Journals with higher impact factors – like those that began this piece – are deemed more important than those with lower ones. Impact factors have been published annually since 1975 for journals that are indexed in Journal Citation Reports and have been tracked by Thomson Reuters (ISI) for at least three years.
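To make the arithmetic concrete, here is a worked example with made-up numbers. Suppose a journal published 250 citable articles across 2012 and 2013, and those articles were cited 1,000 times during 2014. Then:

  2014 impact factor = (2014 citations to 2012–2013 articles) ÷ (citable articles published in 2012–2013)
                     = 1,000 ÷ 250
                     = 4.0

In other words, the journal’s recent articles were cited four times each, on average, that year.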

No scientific paper is intended as the be-all and end-all of truth. That is how the scientific method works, and where its beauty lies: each discovery is “true” only until new experimental evidence comes along to refute it. Peer review cannot guarantee that a paper’s results will hold up over time. But it does act as a gatekeeper, or first responder, trying to ensure that the papers published in scientific journals are experimentally and logically sound.

References / Further reading

http://arstechnica.com/science/2010/11/the-vagaries-of-peer-review/

http://boingboing.net/2011/04/22/meet-science-what-is.html

http://www.wired.com/wiredscience/2012/02/is-the-open-science-revolution-for-real/

http://blogs.scientificamerican.com/the-curious-wavefunction/2013/01/29/peer-review-pitfalls-possibilities-perils-promises-scio13/

http://johnhawks.net/weblog/topics/metascience/journals/tracz-interview-f100research-2013.html

http://wokinfo.com/essays/impact-factor/