WASHINGTON — At SLAS 2017, the annual international conference and exhibition of the Society for Laboratory Automation and Screening, the topic of reproducibility took center stage. Led by moderator Richard Harris, an NPR science correspondent, panelists in a special session discussed the challenges both industry and academia face in reproducibility, and possible solutions.

"This really isn't an industry vs. academia issue. It's an issue for science and how we do science," said panelist Cathy Tralau-Stewart, associate director of the Catalyst program and associate professor of therapeutics at the University of California San Francisco.

In a survey conducted by the science journal Nature last year, 52 percent of researchers agreed that there is a significant reproducibility "crisis," while only 31 percent agreed that a failure to reproduce published results means the result is probably wrong.

The question of how exactly to define reproducibility was highlighted by a recent Amgen study, which found that researchers were unable to replicate the results of 47 of 53 papers that had been seminal to launching drug discovery programs.

"When you take a published paper from a high-profile journal and try to reproduce the results, sometimes you can't replicate it on the first attempt. Sometimes it requires another attempt or it turns out that the results were not robust and the effect disappears," said SLAS Special Session Co-Chair Lenny Teytelman, Ph.D., when discussing the purpose of this session prior to the start of the conference.

"There are lots of reasons for it, but the core problem depends on the study and how you define reproducibility versus robustness."

Steps have already been taken to increase awareness of reproducibility in peer review policies.

The National Institutes of Health (NIH) is just one example. In January 2016, the NIH's Office of Extramural Research (OER) revised its application instructions and review criteria to enhance the reproducibility of research findings through scientific rigor and transparency.

"This is not something that NIH can do alone," said panelist Tara Schwetz, senior advisor to the principal deputy director at NIH. Schwetz also added: "We are starting to evaluate the process and see kind of try to figure out what works and what doesn't."

Veronique Kiermer, executive director of the Public Library of Science (PLOS), echoed Schwetz's sentiment: "It's not one person's problem. It's not one sector's problem. I think it will take all of us to really address the issue of reproducibility."

So what can the industry do to tackle this issue?

"You say that it's a multifaceted problem, but I feel like there's a lot of things that a lot of us could do really fast," pointed out audience member Joanne Kamens, Ph.D., executive director at Addgene.

Panelist Elizabeth Iorns, co-founder and CEO of Science Exchange, responded that easier access to the data and materials produced by publicly funded research would be a start. As an example, she noted that Kamens' organization, Addgene, readily provided PLOS with the materials used in its experiments, while Iorns' team spent more than a year trying to get access to reagents used in other research.

Human nature is another big hurdle to reproducibility.

"Retractions are the nuclear option. That's a good thing. They should be reserved for that," said Ivan Oransky, co-founder of Retraction Watch, a blog that writes about retractions in the scientific field. Oransky described retractions as "the worst in human behavior on steroids."

Oransky noted that about two-thirds of the retractions covered by Retraction Watch are due to misconduct, and roughly another 20 percent are due to human error. He suggested that better incentives could make institutions more willing to share data and could prevent some retractions in the first place, because researchers would take more time to make sure their experiments were accurate rather than racing to be the first to publish.

For example, the Center for Open Science (COS), a nonprofit launched in March 2013, "aims to increase openness, integrity and reproducibility of scientific research," according to the mission statement on its website.

"I think certain guidelines and certain suggestions from journals, government agencies, from institutions themselves -- where there's best practices that should be enforced, there shouldn’t be any leeway at all," said panelist Richard Neve, a senior research scientist at Gilead Sciences. "You should be able to enforce these things but again not to the point where regulations stifle."

Neve suggested pointing to what the science field can do better rather than to what is wrong, which means looking at the issue of reproducibility early on.

"We need education," he said. "We need to define what is needed in science. And define what a good syllabus at a very young age to understand what good science is. Good project practice, well-run experimental design and trying to figure out what the end experience should be."

One thing is for certain: The topic of research reproducibility in the life science field remains an open discussion.

"I hope that the audience will leave with a deeper appreciation for the complexity of this issue," Teytelman said.