It’s become commonplace to hear reports on new research findings, especially those that may apply to large groups of people. Most study results across the sciences and social sciences that enter the media stream are produced by distinguished academics and published in prestigious scholarly journals. The resulting findings become part of the core knowledge base by which we as a society advance our understanding of the world.
Given this rigorous vetting process, we – the general public – have traditionally trusted these findings to be true. And generally, they are: researchers produce robust results that are replicated by others, the current standard of accuracy. But researchers have long suspected that this isn’t always the case. And because the discipline of psychology teaches us about ourselves, psychological study results often receive the greatest share of media attention.
When several high-profile cases of psychologists committing fraud and publishing faulty statistical analyses were exposed, researchers’ fears were confirmed, and many worried that the public’s trust would erode. In 2011, the University of Virginia psychologist Brian Nosek became so concerned that he embarked on a project to discover the true scope of the problem.
Thus was born the Reproducibility Project. Nosek and his team recruited more than 250 researchers to voluntarily reproduce 100 experiments in close collaboration with the original authors. The results, announced in 2015, stunned the research community. As reported in Nature, according to the replicators’ qualitative assessments, “only 39 of the 100 replication attempts were successful.”
Rather than merely lamenting those dismal statistics, Nosek came away with several approaches for achieving more accurate findings.
At noon on Wednesday, March 22, Nosek will present a talk on “Improving Openness and Reproducibility in Scholarly Communication” in Knight Hall Emerson Auditorium.
In it, he will discuss how the scholarly culture needs to shift toward open access, open data, and open workflow. This is partly an incentives problem, partly an infrastructure problem, and partly a coordination problem. The Center for Open Science (COS), a non-profit technology and culture-change organization co-founded and directed by Nosek, is working on all three. Central components of COS’s strategy include: fostering a commercial environment that monetizes service delivery rather than controlling access to content; providing a free and open public-goods infrastructure that scholarly communities can brand and operate according to their local norms; and coordinating across disciplinary and stakeholder silos to align scholarly practices with scholarly values.
In addition, the public is invited to Nosek’s psychology seminar discussion at 3 p.m. Tuesday, March 21, in the Psychology Building, Room 216. This program is intended primarily for those working in psychological science and related fields, but it is open to anyone who wants “the big picture.” The Assembly Series lecture, however, will stand on its own.
For the departmental talk, “Shifting Incentives from Getting It Published to Getting It Right,” Nosek will cover the following:
The currency of academic science is publishing. Producing novel, positive, and clean results maximizes the likelihood of publishing success because those are the kinds of results journals favor. There are multiple ways to produce such results: (1) be a genius, (2) be lucky, (3) be patient, or (4) employ flexible analytic and selective reporting practices to manufacture beauty. In a competitive marketplace with minimal accountability, it is hard to avoid (4). But there is a way. With results, beauty is contingent on what is known about their origin. With methodology, if it looks beautiful, it is beautiful. The only way to be rewarded for something other than the results is to make transparent how they were obtained. With openness, I won’t stop aiming for beautiful papers, but when I get them, it will be clear that I earned them.
For more information about the Reproducibility Project, see the following:
Center for Open Science, “Massive Collaboration Testing Reproducibility of Psychology Studies Publishes Findings”
Science Friday, “Putting Scientific Research to the Test”