Data underpins the integrity of published research. How do journal editors detect integrity issues? How can publishers support integrity? A recent seminar for HKUST researchers prompted a good discussion of these questions.
The Research Data Management Symposium, organized by the Library, was held from 6 to 13 October. Professor Anirban Mukhopadhyay, Associate Provost of HKUST, and Dr Rebecca Grant, Head of Data and Software Publishing at F1000, gave the first talk of the Symposium. They discussed research integrity issues that arise in getting research published, sharing insights from the perspectives of a journal editor-in-chief and a publisher.
As a former editor-in-chief, Professor Mukhopadhyay walked us through the publication review process, which involves authors, peer reviewers, and editorial team members, any of whom may raise integrity concerns about works under review. He shared three anonymized stories to illustrate the importance of data integrity.
Three Stories
In story 1, a reviewer alerted the editor that the paper under review had previously been reviewed and rejected by another journal because of suspicious experiments and data analysis. This journal then rejected the submission as well. Yet the paper was subsequently published in a third journal, with the problematic experiment and statements removed.
In story 2, an associate editor spotted problems in the data of a submission because it was unrealistically “balanced”. The associate editor asked the authors to provide the raw data. After a few years of email correspondence and investigation, the editor concluded that the dataset was fraudulent, and the authors’ universities were eventually informed.
In story 3, an author, Y, asked the publisher to retract the papers he or she had co-authored with X. Y no longer trusted X, who had not been transparent in handling the data. After much legal correspondence and negotiation, the retraction and the wording of the retraction notice were finally agreed upon.
Lessons Learnt
Prof. Mukhopadhyay drew three lessons from these stories:
- Retraction is a quasi-legal process. Retractions are “black and white decisions in areas of gray” (Mukhopadhyay, Raghubir & Wheeler, JCP 2020); data integrity issues are seldom black-and-white. Sometimes editors must evaluate evidence outside their expertise, so their decisions are made under stress and attract criticism; moreover, not everything about the decision process can be revealed.
- Malfeasance cannot always be caught. Suspicious cases can often be detected, but it is hard to prove that malpractice took place. Nevertheless, the taint of suspicion remains; it cannot be washed away easily.
- The evidence is always in the data. Raw data and descriptive statistics can often reveal research integrity issues. Data that is well preserved and properly shared can help authors demonstrate integrity, clarify their practices, and support their innocence when questions arise, which may happen even years after the research was conducted.
A Final Thought: Trust
Prof. Mukhopadhyay ended with a final thought on trust. Research collaboration is essentially a division of labour, and trust among collaborators holds the research team together. Transparent data management can strengthen this mutual trust.
In the second half of the seminar, Dr Rebecca Grant explained how publishers use workflows and data-related requirements to support research integrity. Next week’s post will summarize Dr Grant’s talk.
– By Lester Chan, Library
Category: Research Data Management Tips
Tags: RDM Symposium, research data, research integrity
published October 28, 2021
last modified March 11, 2022