Monday, May 29, 2017

WCRI 2017, Day 2

Day 2 of the WCRI 2017 opened with a plenary session that was entitled "Transparency and Accountability." Boris Barbour (a neuroscientist with the École Normale Supérieure, Paris) introduced the PubPeer community and spoke about how they ensure academic quality. PubPeer has been online since 2012 and provides a sort of online journal club for discussing issues with published papers. Any publication with an identification number, such as a DOI, can be commented on. They have collected over 70 000 user comments about papers in 2 200 journals. Their main rules on comments:
  • Comments must be based on publicly verifiable information (personal communications do not count and will be removed)
  • There is a permanent right of reply for the authors
  • Show the original data
  • Community surveillance enforces the rules
  • Remember that publishing was the author's choice; stay polite.
Of course, he remarked, if you don't want your research to be discussed, perhaps you shouldn't publish it. He suggested three blog posts for further reading about PubPeer.
Then Stephan Lewandowsky, a psychologist currently at the University of Bristol, spoke on "Being open but not naked: Balancing transparency with resilience in science." He gave some examples of open data being, as he called it, "weaponized". It is, of course, clear that data can be twisted and misused, but I am not sure that this is a good reason to avoid open data. He ranted a bit about blogs and Twitter, and then noted: "Science should be open and transparent, but there is a distinction between science on the one hand and noise, commercial interests, or political propaganda on the other. Openness and transparency aid the dissemination of political propaganda." His solution to the perceived problem with open data is establishing symmetry:
  • People who request data must be competent and must operate in an institutional context of accountability.
  • People who request data must preregister their intentions (and conform to them)
  • Participants' consent must be considered.
  • Data availability and limits should be enshrined in peer-review record at the time of publication.
I personally find this too narrow; open data is very important. In particular, there are many good researchers outside of an institutional context, just as there are bad researchers within one. It is not just a question of the openness of the data.

Jet Bussemaker, the Dutch Minister of Education, Culture and Science, then spoke on "The importance of independent research in today's society." She gave the example of a publication by a Dutch researcher that turned out to be erroneous and was retracted by the first author. Honesty is central to academic integrity. She was adamant that government should not be in the business of regulating scientific conduct; that needs to be done by the scientists themselves.

The second plenary session was opened by the South African Minister of Science and Technology, Naledi Pandor. She pointed out a number of issues: African scientists tend to be junior partners in collaborative research, not principal investigators. Researchers from around the world are glad to visit African countries, but not so keen on researching together. Despite many African research departments being underfunded, they do all they can to keep up with the Western world. There is an online review platform for research ethics committees, Rhinno, that is being used by many countries in Africa. She noted that although 10 % of the world's population lives in Africa, only 1 % of clinical trials are held there, and thus the results may be skewed. She closed by noting that the empowerment of women is critical to development in Africa.

The plenary session was closed by a very brief talk by Robert-Jan Smits, Director-General for Research and Innovation at the European Commission, on research integrity as a responsibility for everyone. He spoke of the ETINED platform (Ethics, Transparency, and Integrity in Education) on academic corruption, but noted that the EU does not want to become the European science police department. Science must be built on trust.

After lunch there were five sessions in parallel, in three blocks. In the first block I really wanted to hear three talks in three different rooms, but I ended up listening to two talks in one room and two in another.

Clemens Festen from Rotterdam in the Netherlands spoke about their new regulation requiring all PhD theses to be scanned with a so-called plagiarism detection system, introduced after they had a severe case of plagiarism. It turned out to be too difficult, as the PhD theses were so large, even after removing all graphics and tables, which was a lot of work. As part of another investigation they ran 250 known duplicates through the system and were surprised to find only half of them flagged. So they have moved from focusing on finding plagiarism to letting the PhD students use the system on their own work to see whether the literature list is formatted properly, that is, whether someone else has formatted a reference in just the same way.

Sven Hendrix from Hasselt in Belgium spoke about whistleblowers and the scientists they accuse both deserving protection: even if a whistleblower is annoying, they may actually be right in their allegations, and the scientific record needs correcting. He himself was accused (and acquitted) of academic misconduct, so he is interested in writing about what to do when one is falsely accused of academic misconduct. He noted that national and international, trustworthy, independent institutions are needed where whistleblowers AND the accused scientists can get advice and counseling.

Ivan Oransky from Retraction Watch spoke about an investigation they did into finding people who had been charged with a criminal offense for academic misconduct and sentenced to some sanction. They found 39 cases and classified them as directly involving academic misconduct (for example, falsifying drug test results), indirectly involving it (grant issues, attempting to bribe a government inspector checking the lab for safety violations), and one borderline case in which a scientist ordered cyanide in order to kill his wife, obtained it because he was a scientist, and used it. He also noted the case in Italy in which scientists were charged for not warning about an earthquake, but this case has been dismissed by the Italian courts.

Anisa Rowhani-Farid (Kelvin Grove, Australia) looked at how open data is provided by authors at the British Medical Journal for her PhD thesis. She screened 160 articles that were data-based and had been published since the BMJ started its open data policy. She encountered many excuses: she was ignored, the published links no longer worked, or she was told to apply for permission and that it would take 6-8 months to obtain access. She was only able to access 24 % of the data that was supposed to be openly available.

After coffee I joined the seminar on predatory publishing. Ana Marušic (Split, Croatia) was moderating, there were three speakers and a good discussion at the end.
  • David Moher from Ottawa, Canada asked whether there are differences between presumed predatory journals and legitimate open access journals. They looked at 100 journals from the former Beall's list and 100 legitimate open access journals, comparing 56 data points. They found many differences, and have posted a list of criteria for identifying such journals.
  • Jocalyn Clark, Executive Editor of The Lancet, gave some insight into why such journals are so popular in developing countries: there is massively growing research output in these countries, increasing pressure to disseminate and publish, a feudal publish-or-perish system, easy access to and targeted marketing by predatory journals, and unfortunately rather limited knowledge of and training in publishing.
  • Jadranka Stojanovski (University of Zadar, Croatia) spoke of the many shades of journal publishing. Croatia spends fully 20 % of its research budget on subscriptions! She suggested a composite rating for journal quality based on efficiency, focus, impact, scope, and selectivity.
During the lively discussion the point was made that we should perhaps not be talking about subscription and predatory publishers, but about big-business publishers and newcomers. The Leiden Manifesto for research metrics, which comprises 10 principles to guide research evaluation, was mentioned. It was noted that there are many parallels between contract cheating and publishing in predatory journals.

The final session I attended was "Re-thinking retractions," led by Elizabeth Moylan from BioMed Central (SpringerNature) with Daniele Fanelli (Stanford University), Richard P. Mann (University of Leeds, UK), Ivan Oransky (Retraction Watch), and Virginia Barbour (past chair, COPE, UK). After each gave a short presentation (Daniele and Virginia on proposed changes and variations of retractions, with Daniele's under review and Virginia's on bioRxiv; Richard on having to retract a paper; and Ivan on their "Doing the right thing award", DiRT), a good discussion ensued. There was much discussion about how to link articles with retractions and the various versions, whether it was really necessary to name different types of retractions, and a bit of a spat over whether it is usually the junior author who is "at fault" (neither side had evidence to cite). A final discussion on intent was nicely closed by Ivan, who noted that if you require absolute proof of intent in order to speak of a fraudulent publication, then you will never, ever retract a paper unless you have emails stating that someone wants to commit fraud. And if such emails exist, they would love to have them.

Tomorrow is another day packed with talks; I will be chairing a session, so I will not be able to report in much detail on those talks. We are also having dinner together, so I may not get to blogging tomorrow.
