Stanford President Marc Tessier-Lavigne, who will step down from his position Aug. 31, was initially accused of scientific misconduct, but that’s not why he lost his job. He lost it because he failed to adequately lead his labs, and because of the repercussions that failure had for his leadership of a premier research institution. In his own words, Tessier-Lavigne resigned because Stanford “needs a president whose leadership is not hampered” by discussions of problems with his own research. As someone who studies and instructs graduate students on the responsible conduct of research, I am encouraged by what I see in this case as a step towards expecting more from researchers.
A prominent person’s fall from grace often signals a healthy environment able to identify and address threats. Marc Tessier-Lavigne’s resignation suggests that leaders may now be held more accountable for meeting standards of research integrity that go beyond merely not lying about their work. Ultimately, his resignation may signal — or establish — higher public expectations for research integrity and encourage us to build structures to support them.
By the usual metrics of funding, publications, and recognition, Tessier-Lavigne was clearly a leader in his field. But the panel investigating the accusations was tasked with assessing his “approach to correcting issues or errors in the scientific record” and his “management and oversight of his scientific laboratories.” They concluded that he “failed to decisively and forthrightly correct mistakes in the scientific record.” Moreover, they noted that given the “unusual frequency of manipulation and/or substandard scientific practices” in his labs across many years and different locations, “there may have been opportunities to improve laboratory oversight and management.”
To put it simply, he failed to foster a culture of research integrity and model it for his trainees and collaborators by confronting allegations quickly and openly.
Tessier-Lavigne’s resignation is an unusual consequence of accusations of research misconduct. The closest example of this kind of consequence for an academic leader may be Terry Magnuson, former vice chancellor for research at the University of North Carolina at Chapel Hill, who resigned in 2022 after admitting to plagiarism in federal grant applications.
However, Magnuson’s actions fit the standard federal policy definition of research misconduct, defined narrowly as encompassing only fabrication, falsification, and plagiarism. When someone is accused of and found to have committed misconduct, possible consequences include employment termination, debarment from grant funding, or even civil liability. When found not to have committed misconduct, they typically return to their previous life.
Thus, one might have expected Tessier-Lavigne to be in the clear with the report’s conclusion that there is no evidence he committed misconduct or clearly knew about misconduct in his labs. Instead, he lost his job for behavior that has up until this point not typically been subject to consequences.
For instance, it seems that there was pressure on researchers in Tessier-Lavigne’s lab to perform, though not unusually so. One of Tessier-Lavigne’s former postdocs told STAT, “I would say categorically that I think there was no more pressure in Marc’s lab than a lot of other labs.” Stories of toxic lab cultures, competitive researchers, and intense pressure for results that lead to grant funding and publications are widespread. This does not excuse his failure to address numerous questions about his research over the years, or what some reporting described as his preferential treatment of students who had results. As STAT reported previously, an anonymous former student observed, “When you didn’t please him, you didn’t get any attention.” But he has now faced consequences, making this the most conspicuous recent example of a high-profile researcher being held responsible for failing to prevent such a culture.
Exacerbating these issues of research culture is the challenge of assigning responsibility in multi-author publications. Modern research is more expensive, interdisciplinary, international, and collaborative than it has been historically, with the consequence that the number of authors on publications has proliferated. It is unrealistic to think that one person can adequately oversee all work in a project. But if no individual actually can be completely responsible, isn’t everyone off the hook?
In many research collaborations, not all authors see raw data. That happens for many good reasons: for example, they might lack the appropriate training to understand it, or the data may include identifying details that limit who may view them.
But for science to work, someone must accept that responsibility. In his resignation letter, Tessier-Lavigne endorsed this expectation: “Although I was unaware of these issues, I want to be clear that I take responsibility for the work of my lab members.”
Leaders are the only people in a research project who can create a microclimate that supports rigorous, honest research. This includes: cultivating a research culture in which expectations for scientific rigor and ethical action are clear and supported; being open, transparent, and responsive when problems arise; and otherwise modeling high standards in research. Tessier-Lavigne failed to do this, and if the panel had found otherwise, he might not have needed to resign.
But individuals alone can only do so much. Knowing that humans are fallible, imperfect, and prone to temptation, we should also create and support good practices with institutional, disciplinary, and national structures to foster research integrity.
In some ways, this is a story about how such structures, built in the past decade or so precisely to improve scientific rigor, helped to identify and draw attention to cases like this. For example, PubPeer, where the problems with Tessier-Lavigne’s research were initially identified, was created “to improve the quality of scientific research by enabling innovative approaches for community interaction.” Data sleuths have taken it upon themselves to support good science by calling out problematic practices, and the Open Science movement makes it easier to identify problematic data, methods, or conclusions.
But these grassroots efforts are not enough. Even the toppling of a high-profile researcher does little to support structural change, and in fact can misdirect our focus to only individual solutions. For years there have been calls for data auditing at the institutional level, less focus on the metrics that reduce a researcher’s success to dollars or citations, training in good practices of mentoring, and the creation of a federal research integrity agency. These would be excellent steps toward publicly emphasizing the importance of research integrity and assigning responsibility to institutions to do more to support it.
This case emphasizes the importance of both individual and institutional efforts to improve research rigor and reliability. When looking for leaders, we should seek and select not only those with the most research funding or highest citation counts, but also those who know how to foster an ethical research culture, including rapidly and transparently addressing anything that might affect research integrity. At the same time, because we can’t reasonably expect that all researchers will behave optimally, we must consider structural tools to foster research integrity.
Lisa M. Rasmussen is a professor of philosophy at the University of North Carolina at Charlotte. She studies and teaches research ethics, and is the editor-in-chief of Accountability in Research.