When it comes to research, freedom of choice can sometimes get in the way of good science.
A new study found that when hundreds of researchers are given the same data to analyze and the same hypotheses to test, they often reach different conclusions. The differences arise not from the data itself, but from how the researchers chose to analyze it.
To investigate the effect of choice on a grand scale, an international team of researchers recruited nearly 200 other researchers on social media for an experiment. FIU physics professor Angela R. Laird and her graduate students Katie Bottenhorn and Taylor Salo were among those who signed on to the study.
“Collaboration and sharing can be a key part of developing new ideas or achieving progress, but this is often limited to designing studies or interpreting results,” Laird said. “This project was our first experience with large-scale, collaborative data analysis where the focus was on the variability of results.”
Neuroscience, psychology, statistics, and economics researchers from 70 teams across the globe investigated the issue of variability by independently analyzing the same brain imaging dataset and testing nine predefined hypotheses. According to the study, the choices made during analysis had a major effect on the reported results.
Laird is an authority in the field of human functional brain mapping, and Bottenhorn and Salo are pursuing doctoral degrees in cognitive neuroscience. The students were responsible for the data analysis and reporting. They had up to three months to complete the analysis and to document in detail how they analyzed the data.
“Their enthusiasm for team-based science and improving the rigor and reproducibility of data analyses in our field has transformed many aspects of work in our lab and allowed us to more fully integrate current best practices for open science,” Laird said.
Bottenhorn said now is a particularly good time to understand how large teams can work together remotely and continue to move science forward in important ways, even without collecting new data or meeting in person.
For Salo, participating in this collaboration was a way to investigate not just how variable research practices could be, but how variable they are in practice. When researchers can’t reproduce studies, they can’t refute or support claims, and they can’t build on previous findings. He said understanding that variability is key to improving reproducibility in neuroimaging.
Across the nine hypotheses, an average of 20 percent of teams reported a result that differed from the majority of teams, falling somewhere between complete consistency across teams and completely random results.
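As a hypothetical illustration of this kind of consistency measure (the team decisions below are invented for the example, not taken from the study), the disagreement rate for one hypothesis can be computed as the fraction of teams whose yes/no result differs from the majority:

```python
from collections import Counter

def disagreement_rate(decisions):
    """Fraction of teams whose yes/no decision differs from the majority decision."""
    # most_common(1) returns the majority decision and how many teams made it
    majority, count = Counter(decisions).most_common(1)[0]
    return (len(decisions) - count) / len(decisions)

# Invented example: 10 teams' yes/no results for a single hypothesis
decisions = ["yes"] * 8 + ["no"] * 2
print(disagreement_rate(decisions))  # 0.2, i.e. 20% of teams differ from the majority
```

A value of 0 would mean complete consistency across teams; values near 0.5 would mean the teams' binary results were close to evenly split.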
The authors recommend transparency in data and code sharing, making it easier for others to reproduce particular findings. They also emphasize that hypotheses and analysis plans should be made public before an experiment is performed, so that planned analyses can be distinguished from exploratory ones. Lastly, they recommend that data be analyzed through multiple pipelines and that those results be combined to obtain consensus findings.
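One simple way to picture the last recommendation is a majority vote across pipelines. This is a minimal sketch, assuming each pipeline yields a binary significant/not-significant call; it is an illustration of the idea of consensus, not the aggregation method the study itself used:

```python
def consensus(pipeline_results):
    """Majority vote across analysis pipelines: True if more than half of the
    pipelines report a significant result for the hypothesis."""
    significant = sum(pipeline_results)  # True counts as 1, False as 0
    return significant * 2 > len(pipeline_results)

# Invented example: five pipelines' significance calls for one hypothesis
print(consensus([True, True, False, True, False]))  # True
```

The point of such aggregation is that a conclusion supported by most pipelines is less likely to be an artifact of any single analytic choice.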
“Our findings highlight the fact that it is hard to estimate the reproducibility of single studies that are performed using a single analysis pipeline,” Laird said. “Our findings also emphasize the urgent need to develop new practices and tools to overcome the challenge of variability across analysis pipelines and its effect on analytic results.”
Laird’s work focuses on mining and exploring big data in neuroimaging to better understand brain function. She is the director of the Center for Imaging Science and the Neuroinformatics and Brain Connectivity Laboratory at FIU. She is also co-principal investigator of the Adolescent Brain Cognitive Development Study (ABCD Study), the largest long-term study of brain development and child health in the United States. She has ranked among the top one percent of the most-cited researchers in the field of neuroscience and behavior from 2002 to 2012 and again from 2014 to 2019. This means her published work in neuroscience has consistently been judged as significant and particularly useful by her peers in the research community.
The study was led by researchers from Tel Aviv University and Dartmouth College in collaboration with other institutions in the U.S., including FIU, and universities in Austria, Belgium, Canada, China, France, Germany, Italy, Spain, Sweden, Switzerland, Taiwan, the United Kingdom and others.
The findings were recently published in Nature.
Angela Nicoletti contributed to this story.