Friday, June 2, 2017

WCRI 2017, Day 4


I really wanted to hear John Ioannidis (Stanford University, Stanford, U.S.A.) speak in the morning about "Re-analysis and replication practices in reproducible research," but I was so tired that I didn't make it until later. I did have time, though, to speak with Skip Garner. I learned that eTBLAST, the text comparison and search tool that populated the Déjà vu database from MEDLINE, was turned off when he left his previous school. But there is a follow-on project, HelioBLAST. More on this later.

Ana Marušić led the session on retractions, which included a very curious case. There were three talks about this case; it was a shame they could not have been combined into one talk three times as long (and without two unrelated topics in between).

Alison Avenell (University of Aberdeen, UK) gave the first two talks. She spoke first about "Novel statistical investigation methods examining data integrity for 33 randomized controlled trials in 18 journals from one research group." While preparing a Cochrane study, she and her colleagues noted a rather odd set of studies by the same Japanese authors that managed to recruit and interview 500 women with Alzheimer's and 280 men and 374 women with stroke in just a few months, interviewing the participants every four weeks over a five-month period. And all the studies had the same results, although the patients were supposedly different.

Statistical analysis of the reported values showed that it was highly unlikely the data were genuine. They wrote to the authors and quickly received a reply that this was an error and that they would correct it. Instead of a retraction, however, only a correction was published.
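The sort of statistical red flag involved here can be illustrated with a toy calculation. This is my own sketch with invented numbers, not the group's actual method (which, like John Carlisle's well-known checks of anaesthesia trials, is far more sophisticated): under genuine randomization, p-values from between-arm comparisons of baseline variables should spread over [0, 1], whereas fabricated baselines are often suspiciously similar, piling the p-values up near 1.

```python
from statistics import NormalDist

def baseline_p_value(m1, sd1, n1, m2, sd2, n2):
    # Two-sample z-test from summary statistics: a normal
    # approximation to the t-test, adequate for these sample sizes.
    se = (sd1 ** 2 / n1 + sd2 ** 2 / n2) ** 0.5
    z = (m1 - m2) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Invented baseline rows: (mean, sd, n) for the two arms of each trial.
trials = [
    ((62.1, 8.0, 250), (62.1, 8.1, 250)),
    ((61.9, 7.9, 140), (62.0, 8.0, 187)),
    ((62.0, 8.2, 250), (62.1, 8.0, 250)),
]
pvals = [baseline_p_value(*a, *b) for a, b in trials]

# Under genuine randomization these should look roughly uniform;
# here every single one is > 0.8, i.e. the arms are "too similar".
share_near_one = sum(p > 0.8 for p in pvals) / len(pvals)
print([round(p, 3) for p in pvals], share_near_one)
```

Seeing one such p-value near 1 means nothing; seeing nearly all of them near 1 across dozens of trials is what raises the alarm.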

By now Alison's group was looking at the 33 RCTs from this group that they could find. They had been published in 16 journals (the most prominent being JAMA) over 15 years, with a total of 26 co-authors at 12 institutions. The group tried to gauge the impact of these papers, that is, in how many reviews this incorrect data was currently being used. They found 12 of the studies cited in secondary publications and one guideline, and 8 trials that used these results as the basis for their research rationale, involving over 5000 people! That means that even at a conservative cost of $500 / person / study, $2.5 million were spent in the belief that they were building on solid research. And since they were only looking at English-language publications, the impact was probably even wider.

In all, it took three years to get the JAMA paper retracted. Someone from the audience noted that it is difficult to get journals to retract papers anyway, mostly for legal reasons. Andrew Grey (University of Auckland, New Zealand) reported on the problems they had getting any of the papers retracted and getting their own paper about the case published (Neurology. 2016 Dec 6;87(23):2391-2402. Epub 2016 Nov 9). He used a timeline that grew more and more complicated as they kept writing back to unresponsive journals. He identified some interesting issues:
  • How should journals deal with meta-reviews that are based on retracted work?
  • Should journals be more forthcoming in the face of unresolved concerns? If it takes 3 years to retract an article, there will be many people who read the paper and perhaps acted on it.
  • Should published correspondence about retracted papers also be retracted?
  • They also emailed medical societies and institutions at which the authors worked - should they have done this?
One of the other talks was by Marion Schmidt (DZHW, German Centre for Higher Education Research and Science Studies, Berlin, Germany) about an analysis she did of how retractions are annotated in PubMed and the Web of Science. She first determined that the word "retraction" is defined differently by various organizations. She noted that many studies of retractions select their samples by the article type "Retracted Publication" in PubMed. She conducted a title-based search in PubMed and on the Web of Science using "withdrawal" in the title, together with article types marked as retracted, and then validated the results manually. Surprise! There are retractions in PubMed that are not listed in the WoS, and vice versa. And some withdrawals are not marked at all. Sometimes a withdrawal in one database is marked as a retraction in the other. She concluded that the formats used by publishers do not translate losslessly into the different databases and wondered how citing authors can even be aware of a retraction if PubMed and the WoS do not agree. Even if there were a database of retractions (the audience noted: Retraction Watch!), people would have to check all their references against it.
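Once the flags have been harvested from each database, the reconciliation step boils down to set arithmetic over record identifiers. A minimal sketch with made-up PMIDs (the real study involved title searches plus manual validation, not just identifier matching):

```python
# Made-up PMIDs standing in for records flagged as retracted/withdrawn
# in each database.
pubmed_flagged = {"11111", "22222", "33333", "44444"}
wos_flagged = {"22222", "44444", "55555"}

only_pubmed = pubmed_flagged - wos_flagged  # flagged in PubMed only
only_wos = wos_flagged - pubmed_flagged     # flagged in WoS only
both = pubmed_flagged & wos_flagged         # consistently flagged

print(sorted(only_pubmed), sorted(only_wos), sorted(both))
```

The two difference sets are exactly the discrepancies she reported: records a citing author would miss if they checked only one of the databases.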

The other talk in the session was by Noemi Aubert Bonn (Hasselt University, Diepenbeek, Belgium). For some reason, it was in the retractions session, although it was not about retractions but about research on research integrity: how is it performed, how is it published, and what are the consequences?

In a plenary session about the Harmonization of RI Initiatives, Maura Hiney from the Health Research Board Ireland (HRB) and the lead author of the ALLEA European Code of Conduct for Research Integrity (2017) charted the progress made across the WCRI conferences: at the first conference, people were still discussing whether or not there really was a research integrity problem. Later conferences grappled with defining it, finding methods to investigate it, and determining who is responsible for it; now that there are so many different definitions and methods and policies, the question is how they can be harmonized. Simon Godecharle had presented various maps at the 2013 WCRI showing the wide variations that exist in Europe alone, starting with language. At least by 2017 there are fewer countries with no policy at all.

Daniel Barr from Deakin University, Australia, spoke on the "Positive Impacts of Small Research Integrity Networks in Asia and Asia-Pacific," referring back to the Singapore Statement and noting that RIOs, research integrity officers, are quickly becoming the norm at universities.

Allison Lerner from the National Science Foundation, U.S.A. spoke about the NSF's role in "Promoting Research Integrity in the United States." She spoke of their processes for auditing and investigating cases of fraud, and noted that they have had some extensive plagiarism cases, some of which also involved fraud. Both PubPeer and Retraction Watch were given a shout-out as non-governmental bodies that work on monitoring integrity.

I then did some more session hopping, as the interesting talks were in different rooms.

Skip Garner talked about finding potential grant double-dippers; it is a similar process to finding duplicate abstracts in MEDLINE or duplicate abstracts for papers given at different conferences (or the same conference in different years). He spoke a bit about Déjà vu and how many of the duplicates he uncovered were eventually retracted. But the rate of retractions is lower than the rate at which new questionable manuscripts enter the scientific corpus, which is worrying. Even two years after a retraction, 20 % of retracted papers are not tagged as such, and so people continue to use them.

For fun (yes, computer people perhaps have different ideas of "fun" than other folks) he downloaded the abstracts from scientific meetings that had more than 5000 abstracts each and permitted a longitudinal investigation because the meeting recurs yearly or every other year. Each abstract was compared with every other abstract at the meeting itself, with all abstracts from previous meetings, and with his collection of MEDLINE abstracts.
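eTBLAST itself ranks candidates with a weighted keyword search followed by sentence alignment, so the following is only a crude stand-in for the idea: score each pair of abstracts by the overlap of their word 3-grams and flag pairs above a threshold. Both abstracts are invented.

```python
def shingles(text, k=3):
    # Overlapping word k-grams of a text, lowercased.
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(len(words) - k + 1)}

def similarity(a, b):
    # Jaccard overlap of the two shingle sets (1.0 = identical wording).
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa or sb else 0.0

abstract_a = ("We recruited 250 patients with stroke and measured "
              "vitamin D levels every four weeks for five months")
abstract_b = ("We recruited 250 patients with stroke and measured "
              "vitamin D levels every six weeks for five months")

score = similarity(abstract_a, abstract_b)
print(round(score, 3), score > 0.6)  # above a duplicate-flag threshold
```

Pairwise comparison of every abstract against every other one is quadratic, which is why real systems first narrow the candidate set with a keyword index before doing anything this expensive.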

He encountered multiple submissions, replicate abstracts with different presenting authors, replicate abstracts from previous years, and plagiarized abstracts. He assured the audience that he did not run this meeting :)

His double-dipping work has been published (Double dipping, same work, twice the money) and reported on (Funding agencies urged to check for duplicate grants) in Nature in 2013. [Drat, I should have downloaded the first one when I was in Amsterdam. The VU has full access to Nature, my school doesn't. Of course, I could buy it for $18....].

During Q&A he was asked if he had reported the cases he found. Indeed he did, and the journals didn't like it. Seems the US government also subpoenaed his database...

Miguel Roig (St. John’s University, NY, U.S.A.) spoke about Editorial expressions of concern (EoC). He and some of the Retraction Watch crew pulled EoCs out of PubMed and examined them, looking at the wording of the EoC and the eventual fate of the paper. Only 7 % resulted in a correction, 32 % resulted in a retraction, and in 4 % of the cases the matter was resolved. For the rest (almost 58 %!!!) there was no follow-up information to be found, even when the EoC had been published four years earlier. He referred to a very recent publication (Feb. 2017) on the same topic: Melissa Vaught, Diana C. Jourdan & Hilda Bastian, "Concern noted: a descriptive study of editorial expressions of concern in PubMed and PubMed Central", Research Integrity and Peer Review 2017 2:10. He closed by encouraging journals to be more specific about the reason for the concern and to use EoCs more often.

Mario Malicki (University of Split School of Medicine, Split, Croatia) spoke about his "hobby project" (i.e. no funding) looking at third-party inquiries into possible duplicate publications. He discovered that the National Library of Medicine will assign a tag of "duplicate publication" in the [pt] field if it finds a pair during manual indexing. But no action is taken, and since the mark is hard to find, people don't see it. He downloaded 555 potential duplicate publications and checked whether they had been retracted. He contacted 250 editors about the duplicates, although 16 editor emails could not be located at all. Not all editors bothered to answer his inquiry, although a few of the papers were eventually retracted. The correspondence with the editors was evaluated, as specific questions had been asked, such as: are you aware of the duplicate publication tagging in MEDLINE? Only 1 was aware of it, 15 said no, and 165 did not bother to answer the additional question!
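The "Duplicate Publication" publication type is searchable through NCBI's E-utilities. Here is a sketch that only builds the esearch URL; the actual fetch, error handling, and pagination are omitted, and the retmax value is an arbitrary choice of mine:

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def duplicate_publication_url(retmax=100):
    # Records that NLM indexers tagged with the "Duplicate Publication"
    # publication type (the [pt] field mentioned above).
    params = {
        "db": "pubmed",
        "term": '"duplicate publication"[pt]',
        "retmax": retmax,
        "retmode": "json",
    }
    return EUTILS + "?" + urlencode(params)

url = duplicate_publication_url()
print(url)
```

Pasting the resulting URL into a browser returns a JSON list of PMIDs carrying the tag, which is exactly the kind of starting set such a project would work from.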

Mario catalogued the answers and the reasons given for not taking action and, as far as he could obtain the information, the excuses of the authors and above all of the publishers for their errors. It seems common for an article to be published twice in different volumes (104 times), doubly published in a sister journal (64 times), or even published twice in the same volume (21 times). Over the span of 4 years, 9 % of the identified articles have been retracted. He did not determine the publishers or the precedence of the publications.

J.M. Wicherts (Tilburg University, Tilburg, The Netherlands) has a theory, namely that transparency and integrity of peer review are somehow linked. In order to show this, he set up QOAM: Quality of Open Access Market. Here readers rate a journal on various parameters on a scale of 1 to 5. Since it was not made clear which end is best (1 is the top grade in Germany), this has a cross-cultural issue. To date about 5000 ratings have come in, and there is one particularly active person. He saw this positively; I would check to make sure it is not someone hired by certain journals. As a quick test, I chose my favorite rabid anti-vaxxer paper, published in a journal that was on the now-defunct B-list. Sure enough, it was in there, with three reviews and a grade of 4.6. I don't really believe that this is a good idea.

At the closing session Nick Steneck presented the Amsterdam Agenda for assessing the effectiveness of what are seen as the most important ways to promote integrity in research, which had been worked on over the past days.

It was quite an experience, these two conferences in Brno and in Amsterdam. They were different in focus, but both offered much for me to learn. And it was fantastic to meet in person all these people I had corresponded with by email! The next WCRI will be in 2019 in Hong Kong, jointly organized by people in Hong Kong and Melbourne.

I have one other link that I picked up from a tweet I want to preserve here: The Authorship and Publication subway map from QUT Library and Office of Research Ethics & Integrity.

Over and out!

Thursday, June 1, 2017

WCRI 2017, Day 3

Day 3 of the World Conference on Research Integrity began with a plenary session on the role institutions play in research integrity. Bertil Andersson, the president of the Nanyang Technological University Singapore, spoke on "Research and Research integrity - a key priority for a young and fast rising university." He reported in a refreshingly open manner about quite a number of academic integrity cases his university had dealt with. In addition to much plagiarism, authorship disputes, and self-plagiarism, there was outright fraud. He asked how much a university can do to investigate a case when something happens that also hits the media. He presented four cases:
  1. NTU retracts NIE academic papers after malpractice investigations (The Straits Times), 2016
    A professor, hired in 2006, was contractually obligated to publish 10 papers in 3 years. An external whistleblower alerted the university to data fabrication that included an invented person and an invented company; eventually the police and the Ministry of Education were involved. The case led to 21 retractions. Such a clause is no longer in contracts.
  2. 3 Singapore-based scientists linked to research fraud (The Straits Times), 2016
    NTU professor fired for data falsification (New Press), 2016
    This case involved Western blot imagery manipulation and three institutions in Singapore, the USA and New Zealand. Two PhDs were revoked (and this has to be done by the president of the country, not the university president), nine papers were retracted and the professor dismissed for willful negligence. The national research organization is also seeking repayment. As a problematic side effect, students of this professor are now left without a supervisor, and are often not accepted into joint programmes elsewhere because of the tarnished reputation of the laboratory. They are innocent people who suffer.
  3. Adventures in Copyright Violation: The Curious Case of Utopian Constructions (Blog, Lincoln Cushing), 2012
    This was a case of images being mis-used. The owner of the copyright on the pictures stumbled over his pictures and wrote to the full professor at the Arts School who was using them. All he wanted was his name on the images and a link to his site. The professor refused, a lengthy investigation including external people ensued (that also uncovered additional problems) and ended with the professor being dismissed. 
  4. Fake Peer Reviews, the Latest Form of Scientific Fraud, Fool Journals (Chronicle of Higher Education, paywalled), 2012
    A scientist managed to hack into the Elsevier system for referee reports. He added sentences along the lines of "paper X needs to be referenced" in order to increase his own citation index. It turned out that there were 122 instances of such hacking. The scientist resigned, but NTU referred the case to the Singapore Police Force under the "Misuse of Computers" legislation, and the scientist has apparently left the country.
Andersson concluded by discussing the challenges in developing a culture of research integrity at a rapidly developing university in a competitive environment. There are aggravating factors such as hierarchy, heterogeneous faculty, and tolerance of misconduct, as well as the fact that investigations of research integrity cases need competencies outside the traditional university framework, such as lawyers. He emphasized that no university or institution can be immune to research fraud, and thus they need to have clear procedures defined.

The second speaker in the plenary session was Mai Har Sham, the associate vice president of research at The University of Hong Kong. She noted that just recently there was a meeting of the Asia-Pacific Research Integrity Network with 110 participants from 20 countries. She specified three areas in which the institution is called on to take action:
  1. Determination and commitment - provide policies, resources and infrastructure support
  2. RCR education, skills training, setting up a data management system and supporting platforms
  3. Taking the initiative for quality assurance and risk management
The third speaker was Jay Walsh, the vice president for research at Northwestern University, USA. He noted wryly that if you have never been quoted out of context, you haven't been quoted enough. How true. He then embarked on a short description of how we learn: We gather evidence and we form stories. We then develop a hypothesis about how the world in the stories works. We take data, distill it into information, coalesce that into knowledge, from which we develop wisdom.

But things can go wrong that warp the stories, the data, the information, the knowledge and/or the wisdom. Beyond inadequate methods, poor data, and poor practices, he referred to a paper that recently identified 235 forms of bias.

He feels that the path forward involves the training of researchers in the responsible conduct of research (RCR). Since it is hard to change the curriculum, the funders should make RCR training a requirement. Speaking of funders, he desires a single system for handling FFP (falsification, fabrication & plagiarism) cases, as each funder has a different process. There need to be robust RCR courses, and professors could be given credit for teaching such courses. It is also vital to have a system that allows students and post-docs to come forward with problems without retribution, although this is so easy to say and so hard to do. The root causes of research integrity issues are wrong incentives. These need to be solved, otherwise we are just treating the symptoms.

I then chaired a session on Authorship. This seemed to me to be such a trivial topic: the focus of publication is on communication from a group of authors to a collective of readers, so I find a ranking to be unnecessary. But there are many and various forms of author ordering and inclusion, and perceptions of what they all mean. And where there are differences of opinion, there are fights, sometimes quite intense and protracted. It was interesting to see people investigating this from all sides. I'll just list the talks here, as I was busy dealing with the time and the questions during the session:
  • Authors without borders: Investigating international authorship norms
    among scientists & engineers
    Dana Plemmons (University of California, Riverside, U.S.A.)
  • Experiences of the handling of authorship issues among recent doctors
    in medicine in Sweden
    Gert Helgesson (Karolinska Institutet, Stockholm, Sweden)
  • A philosophical framework for a morally legitimate definition of
    scientific authorship
    Mohammed Hosseini (Utrecht University, Utrecht, The Netherlands)
  • The perceptions of researchers working in multidisciplinary teams on
    authorship and publication ethics
    Zubin Master (Albany Medical College, Albany, NY, U.S.A.)
  • An investigation of researchers’ understanding and experience of
    scientific authorship in South Africa
    Lyn Horn (University of Cape Town, Cape Town, Republic of South Africa)
The final plenary session was about interventions that work. The session chair, Lex Bouter, remarked that the only intervention people have widely learned to use is text-matching software; investigations of all other interventions have shown either no effect or an effect in the wrong direction.

The first speaker was Klaas Sijtsma, who was vice-dean at the Tilburg University School of Social and Behavioral Sciences under the deanship of Diederik Stapel when Stapel confessed to the rector that he had, indeed, committed extensive academic misconduct. Sijtsma was first named interim dean, then continued on as dean and will be stepping down in the coming school year. In his talk "Never Waste a Good Crisis: Towards Responsible Data Management," he spoke about this scandal that also touched the University of Amsterdam and the University of Groningen and involved dozens of articles and book chapters, and affected several PhD theses that were granted on the basis of analysis of fraudulent data.

They were lucky to have had a confession, so that Stapel's contract could be terminated, although many committees were still needed to investigate all of the publications. A few factors were identified that permitted a culture in which the fraud was possible: Stapel was unusual in that he preferred to work alone, he would not allow his PhD students to collect their own data, and he presented unlikely results to journals.

Tilburg University is taking the following steps to foster a climate of integrity:
  • Each PhD student must have at least 2 supervisors.
  • Master theses and PhD theses are scanned for plagiarism (although they did this already).
  • An official formula is read aloud publicly when a doctorate is awarded. It refers to the young doctor's obligation to academia and society to act with integrity.
  • The university has a Code of Conduct.
  • Every staff member must sign an integrity code.
  • There is now an independent Integrity Officer and a Research Committee.
Additionally, the School of Social and Behavioral Sciences took two actions:
  • They intensified classes on research ethics and research integrity.
  • The dean instituted a "Science Committee" in the Spring of 2012.

This, it seems, was one of the best ideas they had. This committee is tasked with auditing a small sample (about 20 out of 500) of the articles published by members of the school each year. Their task is to assess the quality of the data storage and to look closely at how well the research methods are described. The committee thus learns where there are problems in preserving data, and advises the school's management team and the researchers about data storage, completeness of data sets, honoring subjects' privacy, access to data, and making the data available to others. They are not out to "witch-hunt" for fraudsters, but just to eyeball the data. That, however, keeps the various research groups on their toes and thinking about these aspects of their data before they publish. In turn, this creates a better research atmosphere.

Sijtsma has often been asked why he didn't design a universal data storage system and data management policy first. Well, it seems he understands that computer systems are often too complex, take a lot of time, are very expensive, and tend to encounter unpleasant technical surprises. It would have taken too long, grass would have grown over the scandal, and the sense of urgency would have disappeared. So he installed the committee first. It set up rules and regulations and announced annual random audits. Now the groups were motivated to come up with a data policy that suited their needs best.

This worked quite well! Some groups are better than others; people tend to arrange their storage only when they are audited, and when they leave the school, they lose commitment. There is still no consistent data storage system, but that was a deliberate choice, so there is much more to do.

Do these interventions work? He reports that they do. They won't prevent new affairs, but they do encourage RCR and reduce QRP (Questionable Research Practices). He also noted in the Q&A that the university decided not to rescind the doctorates of the people using Stapel's fraudulent data, as they did not know that the data was false.

For more information on the Stapel scandal, see the report: Flawed science: The fraudulent research practices of social psychologist Diederik Stapel.

Patricia Valdez, the extramural Research Integrity Officer (RIO) of the National Institutes of Health, spoke on the NIH Perspective on Research Integrity.

Her focus was on the reproducibility crisis, as the NIH invests $30 billion of taxpayer money annually. They don't want to waste money trying to reproduce something that is erroneous. They are focusing on evaluating the rigor of the methodology and the transparency of the research in the hope that this will have an effect on reproducibility.

She referred to a 2017 book by Richard Harris, Rigor Mortis: How sloppy science creates worthless cures, crushes hope and wastes billions (Basic Books). The take-home message from the book is: Teach students methods the first year, not facts!

Ian Freckelton closed the session speaking on Research Misconduct and the Law: Intervening to Name, Shame and Deter. He is a lawyer (Queen's Counsel at the Victorian Bar in Australia) and a professor for Law and Psychiatry at the University of Melbourne.
He published a book in 2016 called Scholarly Misconduct and the Law (Oxford University Press).

After reading to us the most important bit from Stapel's book about the fraud (so we don't have to read it), he raced us through criminal law, which is invoked to shame and deter, as it has been applied to research misconduct. Then he spoke of a number of cases.
He also spoke about another book, The Death of Expertise by Tom Nichols (2017), and noted that there are many other cases of fraud that have not reached the courts. He closed with the observation that the law is a very slow, blunt instrument and that criminal prosecution is not the answer, but also that research fraud is not victimless. A court decision would, however, vindicate whistleblowers and hopefully present a high deterrence factor.

We then were ferried by boat through the canals of Amsterdam and the Amstel River to our dinner. I'll try to get a short description of day 4 out by tomorrow!

Monday, May 29, 2017

WCRI 2017, Day 2

Day 2 of the WCRI 2017 opened with a plenary session that was entitled "Transparency and Accountability." Boris Barbour (a neuroscientist with the École Normale Supérieure, Paris) introduced the PubPeer community and spoke about how they ensure academic quality. PubPeer has been online since 2012 and provides a sort of online journal club for discussing issues with published papers. Any publication with an identification number, such as a DOI, can be commented on. They have collected over 70 000 user comments about papers in 2 200 journals. Their main rules on comments:
  • Comments must be based on publicly verifiable information (personal communications do not count and will be removed)
  • There is a permanent right of reply for the authors
  • Show the original data
  • Community surveillance enforces following the rules
  • Remember, the publication was the author's choice; stay polite.
Of course, he remarked, if you don't want your research to be discussed, perhaps you shouldn't publish it. He suggested three blog posts for more reading about PubPeer.
Then Stephan Lewandowsky, a psychologist currently at the University of Bristol, spoke on "Being open but not naked: Balancing transparency with resilience in science." He gave some examples of open data being, as he called it, "weaponized". It is, of course, clear that data can be twisted and misused, but I am not sure that is a good reason to avoid open data. He ranted a bit about blogs and Twitter, and then noted: "Science should be open and transparent, but there is a distinction between science on the one hand and noise, commercial interests, or political propaganda on the other. Openness and transparency aid the dissemination of political propaganda." His solution to the perceived problem with open data is establishing symmetry:
  • People who request data must be competent and must operate in an institutional context of accountability.
  • People who request data must preregister their intentions (and conform to them).
  • Participants' consent must be considered.
  • Data availability and limits should be enshrined in peer-review record at the time of publication.
I personally find this too narrow and open data very important. In particular, there are many good researchers outside of an institutional context, just as there are bad researchers within the institution. It's not just a question of the openness of the data.

Jet Bussemaker, the Dutch Minister of Education, Culture and Science, then spoke on the importance of independent research in today’s society. She gave an example of a publication by a Dutch researcher that turned out to be erroneous and was retracted by the first author. Honesty is so important to academic integrity. She was adamant that government should not be in the business of regulating scientific conduct; that needs to be done by the scientists themselves.

The second plenary session was opened by the South African Minister of Science and Technology, Naledi Pandor. She pointed out a number of issues: African scientists tend to be junior partners in collaborative research, not principal investigators. Researchers from around the world are glad to visit African countries, but not so keen on researching together. Despite many African research departments being underfunded, they do all they can to keep up with the Western world. There is an online review platform for research ethics committees, Rhinno, that is being used by many countries in Africa. She noted that although 10 % of the world's population lives in Africa, only 1 % of clinical trials are held there, and thus the results may be skewed. She closed by noting that the empowerment of women is critical to development in Africa.

The plenary session was closed by a very brief talk by Robert-Jan Smits, Director-General for Research and Innovation at the European Commission, on research integrity as a responsibility for everyone. He spoke of the Council of Europe platform on academic corruption, ETINED (Ethics, Transparency and Integrity in Education), but noted that the EU does not want to become the European science police department. Science must be built on trust.

After lunch there were five sessions in parallel in three blocks. In the first block I really wanted to hear 3 talks in 3 different rooms, but I ended up listening to 2 talks in one room, 2 in another.

Clemens Festen from Rotterdam in the Netherlands spoke about their new regulation, introduced after a severe case of plagiarism, requiring all PhD theses to be scanned with a so-called plagiarism detection system. It turned out to be too difficult, as the PhD theses were so large, even after removing all graphics and tables, which was a lot of work. As part of another investigation they ran 250 known duplicates through the system and were surprised to find only half of them flagged. So they have moved from focusing on finding plagiarism to letting the PhD students use the system on their own work, for example to see if the literature list is formatted properly, that is, whether someone else has formatted it in just the same way.

Sven Hendrix from Hasselt in Belgium argued that both whistleblowers and the scientists they accuse deserve protection: even if a whistleblower is annoying, they may actually be right with their allegations, and the scientific record needs correcting. He himself was accused (and acquitted) of academic misconduct, so he is interested in writing about what to do when one is falsely accused. He noted that trustworthy, independent national and international institutions are needed where whistleblowers AND the accused scientists can get advice and counseling.

Ivan Oransky from RetractionWatch spoke about an investigation they did into finding people who had been charged with a criminal offense for academic misconduct and sentenced to some sanction. They found 39 cases and classified them as directly involving academic misconduct (for example, falsifying drug test results), indirectly involving it (grant issues, attempting to bribe a government inspector checking the lab for safety violations), and one peripheral case in which a scientist ordered cyanide in order to kill his wife, obtained it because he was a scientist, and used it. He also noted the case in Italy in which scientists were charged for not warning about an earthquake, but this case has been dismissed by the Italian courts.

Anisa Rowhani-Farid (Kelvin Grove, Australia) looked at how open data is provided by authors at the British Medical Journal for her PhD thesis. She screened 160 articles that were data-based and had been published since the BMJ started its open data policy. She encountered many excuses, was ignored, found that the published links no longer worked, or was told to apply for permission and that it would take 6-8 months to obtain access. She was only able to access 24 % of the data that was supposed to be openly available.

After coffee I joined the seminar on predatory publishing. Ana Marušić (Split, Croatia) was moderating; there were three speakers and a good discussion at the end.
  • David Moher from Ottawa, Canada asked whether there are differences between open access journals and traditional subscription journals. His group compared 100 journals from the former Beall's list with 100 legitimate open access journals across 56 data points. They found many differences and have posted a list of criteria for identifying such journals.
  • Jocalyn Clark, Executive Editor of The Lancet, gave some insight as to why such journals are so popular in developing countries: massively growing research output, increasing pressure to disseminate and publish, a feudal publish-or-perish system, easy access to and targeted marketing by predatory journals, and unfortunately rather limited knowledge of and training in publishing.
  • Jadranka Stojanovski (University of Zadar, Croatia) spoke of the many shades of journal publishing. Croatia spends fully 20 % of its research budget on subscriptions! She suggested a composite rating for journal quality based on efficiency, focus, impact, scope, and selectivity.
During the lively discussion the point was made that we should perhaps not be talking about subscription and predatory publishers, but about big-business publishers and newcomers. The Leiden Manifesto for research metrics, with its 10 principles to guide research evaluation, was mentioned. It was noted that there are many parallels between contract cheating and publishing in predatory journals.

The final session I attended was "Re-thinking retractions" led by Elizabeth Moylan from BioMedical Central (SpringerNature) with Daniele Fanelli (Stanford University), Richard P. Mann (University of Leeds, UK), Ivan Oransky (RetractionWatch), and Virginia Barbour (past chair, COPE, UK). After each gave a short presentation, Daniele and Virginia on proposed changes and variations of retractions (Daniele's is under review, Virginia's on bioRxiv), Richard about having to retract a paper, and Ivan about their "Doing the right thing award" (DiRT), a good discussion ensued. There was much discussion about how to link articles with retractions and the various versions, whether it was really necessary to name different types of retractions, and a bit of a spat over whether it is usually the junior author who is "at fault" (neither side had evidence to cite). A final discussion on intent was nicely closed by Ivan, who noted that if you require absolute proof of intent in order to speak of a fraudulent publication, then you will never, ever retract a paper unless you have emails stating that someone wants to commit fraud. And if such emails exist, they would love to have them.

Tomorrow is another day packed with talks, I will be chairing a session so will not be able to report in too much detail on those talks. We are also having dinner together, so I may not get to blogging tomorrow. 

WCRI 2017, Day 1

Day 2 →
After the wonderful conference in Brno about plagiarism (days 1 - 2 - 3) I am now attending the 5th World Conference on Research Integrity 2017 at the Free University (VU) in Amsterdam. Today there were 9 pre-conference workshops and the opening session. I attended two half-day workshops, the opening session, and the reception. I will try to blog all of the sessions I attend, although so many interesting talks are in parallel - there are 5 parallel sessions, and they are necessary as there are over 800 people attending!

Workshop 6: How to investigate allegations of research misconduct
Session facilitators: Paul Taylor (RMIT, Melbourne) & Daniel Barr (Deakin University)

Since I am often the person at VroniPlag Wiki who informs institutions of cases of research misconduct, I was very curious to hear from the other side what processes they (should) follow.

The first important point was understanding that because research is done by humans, there will be errors. There are also pressures that can cause some humans to respond in ways that others do not find acceptable. There was some discussion about what exactly is meant by "research misconduct" and if one should perhaps speak of "breach of research integrity" in order to move away from personal accusations towards a focus on the scientific record. If there are errors there, they must be corrected, preferably in a timely manner.

I found the questions asked of the institutions about their environment to be excellent:

    •    Is there a clear and available policy or process?
    •    Are there independent sources of advice?
    •    Are the right people providing this advice?
    •    Is there one place that receives complaints?
    •    Does the process include reporting back or publicly announcing the results?

I have often struggled to find the processes of various institutions, in particular the place to address my concerns. I also find that many institutions do not report back to me what they have decided, and more problematically, don't necessarily do anything to correct the scientific record because of legal issues.

It was clear that it is not easy to come up with a policy and process that can cover every case - they are all so different. But splitting an investigation into two phases seems to be quite common. In the first phase, a preliminary assessment is made: Does the complaint appear to have merit? Is it in our jurisdiction? If so, then there is sometimes a determination made as to whether the complaint is made in good faith, or whether it appears to be vexatious (a new adjective I learned today that totally fits the situation of A, B's bitter rival, trying to point out errors in B's work, or of C raising a complaint for the 10th time with no new evidence). If an investigation is warranted, a report that includes all the evidence gathered so far should be prepared. There are not necessarily hearings held at this point.

Susan Zimmermann and Karen Wallace from the Canadian Secretariat on Responsible Conduct of Research (representing the three major funding agencies in Canada) gave a presentation and led the discussion on conducting the investigation. In a nutshell, the process is as follows:

    1.    Choose the right people to conduct the investigation
    2.    Gather relevant information
    3.    Make a finding
    4.    Prepare a report

One interesting point was that in Canada, in order to apply for funding from any of these three organizations, a researcher must agree that if they are found in such an investigation to have committed serious misconduct, their personal information (name, type of misconduct, etc.) may be made public. After all, the public pays for this research with their taxes. This makes it legal to publish names and findings.

Jillian Barr and Belinda Westmann from NHMRC (the Australian National Health and Medical Research Council) spoke about implementing the outcomes (a much better word than sanctions or punishments). In Australia there are funding agreements between NHMRC and the institutions receiving the funding as to how they must conduct investigations and how they implement outcomes and report back to the funding agency. In particular, if it had been determined that a publication is to be retracted, they want to see the retraction. If institutions do not cooperate, they lose the right to apply for funds.

There were many interesting topics touched on, and many interesting cases briefly mentioned.

After lunch I attended
Workshop 7: Teaching and training in RE/RI: The relevance of Moral Case Deliberation

Since I often write ethical case studies in computer science for a German-language computer science journal (the case studies are also published online at Gewissensbits), I wanted to hear more about this method of dealing with case studies.

The workshop was led by Guy Widdershoven, Fenneke Blom, & Giulia Inguaggiato from the Department of Medical Humanities at the VU Amsterdam. Guy and colleagues have developed a structured method of deliberating cases that involve dilemmata, in particular those encountered in clinical practice, especially in neonatology. There are a number of publications about this, for example Suzanne Metselaar, Bert Molewijk & Guy Widdershoven, Beyond Recommendation and Mediation: Moral Case Deliberation as Moral Learning in Dialogue in The American Journal of Bioethics.

This structured method of discussing a case with a group of people helps find a solution, as people tend to branch off onto other topics, or assume a know-it-all stance in suggesting solutions right away. The steps keep one focused on the dilemma at hand with its possible resolutions. It involves 7 steps:
  1. Case presentation
  2. Formulating the dilemma, the potential actions, and the harms that each action would incur
  3. Asking questions for elucidation
  4. Analysing from various perspectives the values and norms involved (for example, for the value "respecting older people" there is the norm "I give my seat in a crowded bus to an older person who enters the bus")
  5. Individual judgements by each of the participants
  6. Dialogue about the judgements and potential repair mechanisms for the harms
  7. Evaluation of discussion
First, Guy presented such a case to the group of 20 persons at the workshop. Then we were split into two groups, and each group worked on one real dilemma. We promised to keep the dilemmata confidential, but there were quite lively discussions in both of the groups - it was hard to quit and gather back for some time of reflection!

Opening session

Lex Bouter from the VU (with his co-chairs Tony Mayer and Nick Steneck) opened the conference, welcoming over 800 participants from 52 countries.

The rector of the VU, Vinod Subramaniam, welcomed us and touched on many issues a university has to deal with today. It was good to see someone from the leadership of a university with so much understanding of the issues, and of the fact that there are no easy answers to the problems. He noted that the Netherlands Code of Conduct for researchers is currently undergoing revision and should be published by 2018. The version from 2004 was last updated in 2012.

José van Dijck, the president of the Royal Netherlands Academy of Arts and Sciences, gave a short talk about monitoring the research process. She formulated the motto "In Researchers we Trust (that is why we welcome everyone to monitor)".

The session closed with a play by Het Acteurgenootschap/Pandemonia: The ConScience App, a play about scientific integrity. It was long, but it sure packed a punch. There were so many issues about scientific integrity compressed into these few scenes. I spoke with the actors afterwards, they have spent over 2 years touring with this piece, in Dutch and in English, and speaking to audiences about it afterwards. A great way to get a discussion on this subject going, I think!

Then we had earned our Dutch specialties: cheese and bitterballen and herring and Jenever. I didn't manage to find the stroopwafels, more's the pity. It was wonderful stumbling onto people I've corresponded with over the years, and seeing some people again whom I haven't seen for a while. I had a nice chat walking back to the hotel, and since it was such a warm evening, many of us stood outside the hotel talking some more. I'm really looking forward to days 2-4, I hope I can keep up the blogging!

Friday, May 26, 2017

Brno, Day 3

Day 2
The last day of the plagiarism conference in Brno - time has just flown by! It's been so wonderful to talk (and share some wine) with colleagues from around the world who are concerned with academic integrity. Here's a short overview of the talks I heard today:
  • Thomas Lancaster from Staffordshire University opened up the third day with his talk on "Rethinking Assessment by Examination in the Age of Contract Cheating." He first showed us some current newspaper articles about contract cheating, then ads from sites offering to have someone sit an exam for you, and all sorts of technology that can be used for cheating: special cheating watches, mini-earpieces, a pen with a camera, boxer shorts (!!) with communication devices built in, and a mobile phone cover that looks like an ancient calculator and actually works, so that it passes a quick check by a proctor. There is quite a market for such tools, apparently. He also showed ads from people wanting others to take, on their behalf, the qualifying exams that contract cheating sites insist potential "authors" pass. So we have cheaters cheating to be employed as cheating enablers.... He brought up an important question involving so-called "smart drugs" (nootropics): should the use of such drugs to enhance performance be considered cheating, as they are not available to everyone? It was noted that coffee and cigarettes can be considered nootropics as well.
  • Trudy Somers, from the online Northcentral University, suggested taking lessons from how businesses attempt to fight corruption and embezzlement. She noted that the Fraud Triangle (or Diamond) is used to explain situations in which this can occur: there is pressure or an incentive to act, the person has the opportunity, a rationalization is ready at hand, and they have the capability to do so.
  • Wendy Sutherland-Smith, from Deakin University in Australia, spoke about a system that is in place in 5 out of 6 Australian states: student advocates who ensure that students do not face academic integrity hearings alone and know their rights as well as the formal procedures and range of potential outcomes. She notes that the largest problem students with integrity issues face is pressure and a fear of failure. Many in such a situation think that everyone else is cheating, and when they see others getting away with contract cheating, they rationalize (see above) that they can do it, too. She suggests introducing academic integrity modules in core units, increasing legitimate support (also for online students!), pressuring governments for national legislation on contract cheating, and increasing contract cheating awareness campaigns (there will be one in October, I didn't note the date, will add it here when I find it). She also suggested using technology for identification of students; I am personally quite opposed to such surveillance technologies. She closed by encouraging us to focus on EDD: Education, Deterrence and Detection, and to involve students in the issue, as they are our allies in the fight against contract cheating.
  • Veronika Králíová, a master's student of Tomáš Foltýnek, conducted an analysis of the ghostwriter market in Czechia. She was able to identify more than 100 sites, although it was not possible to determine if the same person or company was behind multiple sites. She then looked at the log files for her university for three months and found tens of thousands of accesses to these sites. She also commissioned two papers (the ethics of this was questioned during the discussion) and then surveyed people online to ask if they had ever used such a service. 8 % stated that they had; 60 % of them had asked a friend or classmate, the rest used the services of a company. She suggests that, among other things, her university re-direct student attempts to access cheating sites to a page that informs the student about the legitimate help available at the university.
  • Patrick Juola, from Duquesne University in Pittsburgh, USA, spoke about using stylometrics to detect whether the authors of two papers are probably the same person. He introduced an interesting case he was involved with: determining that the author "Robert Galbraith" was most probably J.K. Rowling, the author of the Harry Potter books. After a newspaper picked this up, Rowling admitted that she was, indeed, the author. He emphasized that any seven-word string that you write will most probably be unique unless you are quoting someone or using a set term or saying. I've been saying this for years, but no one believes me, so now I can quote Juola on it 😀. He has a company that offers authorship comparison services, and notes that determining multiple authorship is still an open research question.
  • There was then a discussion panel on "Strategies for Addressing Contract Cheating" with Thomas Lancaster, Phil Newton, Shiva Sivasubramaniam, and Chloe Walker. I think we could have discussed with these four until sundown, at least, but there was only an hour available. An interesting discussion flared up over whether the outsourcing of writing work to services in disadvantaged countries is colonial exploitation. It was also noted that some students are getting overassessed, and the burden of grading all of these assessments increases the workload for the teachers. The topic of gift authorship was also briefly touched on. I think Chloe summed it up nicely when she said: Ethics gets subsumed to the practicalities of Real Life.
  • Teddi Fishman, the former director of the International Center of Academic Integrity had the job of summing up the conference. One of her important points is that we get bogged down in dealing with what we don't want: plagiarism, grade inflation, data manipulation, contract cheating. She suggested that we refocus our efforts on what we want: Skill acquisition, verifiable & trustworthy data, and learning. We have to require that the students participate actively in the learning, and we need to introduce more interactivity into the process, getting away from boring lectures. She strongly encouraged us to be brave and try out new formats of assessment, for example, students submitting videos of themselves doing what they need to be learning, or some such. And then to practice what she preached, she had someone prepare some slides she had not seen before, and she used them to sum up the conference, a sort of Powerpoint Karaoke. There were some really difficult pictures presented, but she always came up with something good! 
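Juola's observation that almost any seven-word string is unique is easy to try for yourself. The following is a toy sketch of my own (not Juola's software, and far simpler than real stylometry): it counts the seven-word sequences two texts share. Independently written texts typically share none, while a quoted passage shows up immediately.

```python
def word_ngrams(text, n=7):
    """Return the set of all n-word sequences occurring in a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

original = ("any seven word string that you write would most "
            "probably be unique unless you are quoting someone")
independent = ("the committee met on tuesday to discuss new "
               "guidelines for reporting suspected misconduct")
quoting = ("as juola said any seven word string that you write "
           "would most probably be unique")

# Independently written text shares no seven-word sequence...
print(len(word_ngrams(original) & word_ngrams(independent)))  # 0
# ...but a direct quotation is caught at once.
print(len(word_ngrams(original) & word_ngrams(quoting)))      # 6
```

Real stylometric tools compare distributions of function words, character n-grams, and similar features rather than exact matches, but even the exact-match version shows why text-matching systems work at all.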
I read some of the papers for talks that I was not able to hear because they were in parallel sessions. I'd like to comment briefly on two of these here.
  • Marco Cosentino, Franca Marino, Chandana Haldar and Georges J. M. Maestroni gave an account of their experience of being added as honorary authors to a (rather flawed) paper and having to expend much effort (and wait a long time) for a retraction to be published.
  • Julius Kravjar is looking to extend the thesis repository that he and colleagues run with their plagiarism detection system SVOP in Slovakia to a pan-European repository of theses and dissertations. He examines various issues that would have to be dealt with if there was to be such a repository. 
There were so many good discussions over the last three days, during sessions and during outings. For example, on the bus I discovered that the person sitting next to me, Erik Borg, is one of the chapter authors for a book that is in preparation! We've exchanged many emails but never met in person. There were also many representatives from various countries that are members of the Council of Europe who were there to learn. I find it quite heartening that they are planning on getting active about academic integrity! I didn't see any German officials, although there were participants from Germany, with talks and posters. I will try to spread the word about the European Network for Academic Integrity!

As a Swiss Army Knife-carrying person I was quite enchanted with these knives in chocolate:

Bizarrely, I had the following tweet in my timeline after tweeting up a storm the past three days on contract cheating:

I guess they didn't understand what I was tweeting about....

Brno, Day 2

Day 1 | Day 3 →
Just got back from a long afternoon and evening of a social program that included a lot of history of the region. Here is a short list of the talks that I heard today at the plagiarism conference in Brno:
  • The conference began today with a keynote speech from Calin Rus from the Intercultural Institute of Timisoara, Romania. He spoke about the "Competences for Democratic Culture" model that the Council of Europe is proposing for setting out the values, attitudes, skills, knowledge, and cultural understanding that should be the basis for education in countries that are members of the Council of Europe. In a way this boils down to showing respect to people you fundamentally disagree with. Rus linked intercultural competencies to a culture of academic integrity. This was widely acclaimed as a good description of the way forward towards a "whole school" approach to education.
  • Phil Newton from Swansea University then gave a rousing keynote on dealing with contract cheating. He referred us to the Australian site on the topic, cheatingandassessment. He walked us through the current business of ordering and selling bespoke essays that is part of the "gig economy." He noted that contract cheating is big, cheap, quick, versatile, and established, so we have to learn to deal with it. We can't eradicate it, but we can make it harder to get away with. Setting short turnaround times is not a solution. He proposes ALE, not the beer, but Assessment design, Law and Education. The educators must be included in the education: his group looked at 20 books on teacher education and found that only 12 of them even used the word "plagiarism," and none contained the words "academic integrity." We need to engage, educate, and understand our students, and design assessments to limit the influence of contract cheating. We need to focus on what students can do (i.e. demonstrate in a viva/oral exam), and use portfolios and personal & specific assessments. He notes that the ghostwriter market is legal at the moment, but we should consider how to perhaps make it illegal. During the discussion it was noted that many academics are living in precarious situations and may be driven to work as ghostwriters.
  • Ali Tahmazov & Cristina Costinius, from the text-matching software company Strike Plagiarism, spoke of the role of politics in academic plagiarism. They cautioned against countries trying to have their own software programmed just for their particular country; there is limited use for one-size-fits-all solutions.
  • Irene Glendinning, Dita Dlabolová, and Dana Linkeshová spoke about exploring issues challenging academic integrity in South East Europe. They obtained funding from the Council of Europe for the project SEEPPAI, in which they interviewed students and teachers in Albania, Bosnia & Herzegovina, Croatia, Montenegro, the Former Yugoslav Republic of Macedonia, and Serbia. They used the same instruments as the EU-Erasmus-funded IPPHEAE project, and thus could compare the results with those from the EU countries (there are 47 countries in the Council of Europe). The results are published on the SEEPPAI web page. When asked if there was any reaction from the respective governments, a member of the audience noted that Montenegro is currently purchasing a country-wide licence for text-matching software.
  • Gábor Király from the Budapest Business School in Hungary spoke about comparing lecturers’ and students’ understanding of student cheating. I was unable to watch the presentation, as the use of Prezi unfortunately caused me to suffer an acute case of vertigo. 
  • Salim Razı, from the Canakkale Onsekiz Mart University in Turkey (and who will be organizing the plagiarism conference in 2018) presented the Turkish situation with respect to policies for plagiarism and academic integrity. His goal is to set and enforce nationwide, consistent and standard sanctions for plagiarism. He notes that just because there are no investigations by an ethics board, this does not mean that there is no plagiarism. It is just being dealt with on another level. He called for plagiarism awareness training to begin as early as possible.
  • Jonathan Kasler, Tel Hai College, Israel, conducted an interesting survey using Don McCabe's self-reporting questionnaire on cheating. He found significant differences between the Hebrew- and the Arabic-speaking students at his school. A good debate ensued as to whether this was due to cultural differences or to the L2 problem, that is, that the Arabic-speaking students are not working in their mother tongue.
  • Emilia Sercan, an assistant professor of journalism at the University of Bucharest, Romania, and an investigative journalist, reported on plagiarism in PhD theses awarded at military universities in Romania alone. That alone was hair-raising enough. Working alone, she has documented 15 cases of blatant plagiarism in doctoral dissertations in the past two years. The prime minister, Victor Ponta, was involved in a long, drawn-out plagiarism scandal involving his PhD that eventually ended in him asking for his doctorate to be withdrawn. There were so many cases involving high-ranking politicians that she was only able to briefly present a few. The political reaction has been to restrict public access to doctoral dissertations.
  • The morning session ended with Chloe Walker from the University of Oxford, UK, presenting a working paper on the Nairobi Shadow Academy. In Kenya there is a very large group of so-called "academic writers" who write bespoke essays in English. She has managed to contact many such writers and has had 220 answer a written survey. She interviewed 29 in a semi-structured manner and had in-depth interviews with 4 writers. She also did some first-person ethnography, pretending for two weeks to be a potential writer for one of the agencies. She is looking at, among other things, the motivations of the writers. For most, it is the money, although some only get paid $3-4 per page, while others can earn $25-30 per page. It is a career that can be done from home and is very flexible. One writer defended his work with: "If I don't stock cigarettes in my shop, that won't keep people from smoking." She closed with the question of how it is possible for a second-year medical student in Kenya to write a graduate-level philosophy paper of passable quality for a German university, having had no prior training, never attended a lecture, and never read a philosophy text in his life. I noted that the reason is probably that the paper was not even read by the grader. I look forward to reading her dissertation when she has it published.
So, those were the talks I managed to visit today, more coming up on Day 3, the final day.

Wednesday, May 24, 2017

Brno, Day 1

Day 2
The Third International Conference Plagiarism across Europe and Beyond is currently taking place in Brno in the Czech Republic. I am attending just for the fun of it and so enjoying speaking with all these people who, like me, are interested in promoting academic integrity and dealing with the plagiarism problem. Here is a short review of the talks I heard today:
  • The opening keynote was by Tracey Bretag, from the University of South Australia, on "Evidence-based responses to contract cheating." She noted that many teachers have long had a feeling that there is a massive problem with contract cheating, but now that it has finally hit the media in the form of scandals such as MyMaster or Airtasker, attention is being focused on cheating behaviors. She and her group have been collecting data on contract cheating, surveying both students (14 086) and staff (1 147) on how wrong they find various cheating activities and whether they have observed such behavior or done it themselves. They also asked about the outcomes (a much, much better term than "penalty" or "sanction") of being discovered. Surprisingly, they were able to isolate a group of about 600 cheaters and could compare their attitudes with those of the non-cheaters. Cheaters thought that more than 60 % of other students cheated, so they perhaps think that it's okay for them to cheat, too. Staff were much more realistic, assuming that between 1 and 10 % of students cheat. They are still evaluating the data, but it is clear that just "fixing" assignment design is not the answer! Replacing written papers with exams is not the answer either, as there are more opportunities to cheat in exams, for example by sending someone else to take the exam or by using electronic devices. A problem was that only 3 % of detected cases resulted in suspension, although that is communicated as the outcome of being caught cheating. 23 % of staff who reported a cheating incident to official bodies noted that they were never informed of the results.
  • Tomáš Foltýnek from the Mendel University in Brno then introduced the European Network for Academic Integrity, founded yesterday in Brno with currently nine member institutions. Institutions can be members, individuals can join as supporters. They want to focus on collecting and communicating best practice about academic integrity and on organizing more conferences like the current one: in 2018 there will be a conference in Turkey, in 2019 one in Lithuania. They will also be offering workshops to members. The annual fee for institutions is to be 300 €, for supporters 50 €. They are also supported by the Erasmus+ program and the Council of Europe.
  • Jeffrey Beall from the University of Colorado, Denver, gave a keynote on "Detecting and Reporting Plagiarism in Predatory Journals and Other Publications." He noted dryly that it sometimes seems that plagiarism is only important when the person accused is your enemy. If your hero is caught plagiarizing, it's not a problem. There are very few incentives for people to report plagiarism, and there can even be more punishment for the informer than for the plagiarist. He gave some examples of publishers that exploit the gold open-access model and promise peer review, but don't actually do it. There was even one journal to which we could still have submitted an article today and had it published on May 31! Beall has charted some of the fake citation indices, which appear only ever to increase, never to decrease. He published an article, Advice for Plagiarism Whistleblowers, together with a colleague, Mark Fox, who ended up in court over a case of whistleblowing, but was able to win his case (1 - 2). Beall also discussed the plagiarism in Martin Luther King, Jr.'s thesis.
  • The "Gold Sponsors" of the conference were each given a short slot to present material, with the express purpose of NOT giving a sales presentation. Goa Borrek (Turnitin) spoke about his "Journey in Academic Integrity", and James Bennett (Urkund) about "Why Percentages are not your Friend". Borrek presented a model "From Plagiarism to Academic Excellence" that looked a lot like the Academic Integrity Maturity Model, but I was in the third row and could not make out the text on the slide, so I am not sure. Bennett was preaching to the converted with a contrived example demonstrating that percentages can be extremely misleading. I asked why, then, Urkund uses so many percentages in its reports, and got a rather roundabout answer saying that people expect them.
  • I chaired a session on "Internationalisation, student mobility and academic integrity" that had an interesting collection of talks about various countries and cultures. Eckhard Burkatzki (Germany) spoke on Cultural Differences regarding expected utilities and costs of plagiarism, investigating students from Germany, Denmark and Poland. They come from different trust cultures and turned out to have significantly different attitudes towards plagiarism. Amanda McKenzie and Jo Hinchliffe (Canada) had some interesting Integrity Insights from India that they collected during visits to nine different Indian universities. For example, some universities use biometric fingerprint scanners in the lecture halls, and the students must check in and out of the lectures and the exams. They noted that many second- and third-tier universities do not prepare students well for Master's level work in Canada. Stephen Gow (UK) used critical theory to look at academic integrity issues for students from China. Bob Ives (USA) gave two talks, one on a meta-study he is conducting on predictors of academic dishonesty, and one on patterns and predictors of academic dishonesty in Romania and Moldova. The session closed with M. Shahid Soroya (Pakistan) giving an overview of the Status of Academic Integrity in Pakistan. They, too, have had some plagiarism scandals that made the news (3), which has driven the Higher Education Council to issue standard operating procedures for dealing with plagiarism cases. Pakistan offers the use of Turnitin at all the universities, and offers training programs for teachers. Mystifyingly, they define a threshold of 18 % or less reported by Turnitin to be "original." It was not possible to learn the reasoning behind the choice of this number.
  • In the session on "Best practices and strategies for awareness, prevention, detection of academic misconduct" Andrzej Kurkiewicz spoke about the new procedures for dealing with academic integrity cases in Poland. This seemed to involve far too many official offices and also only dealt with cases that were discovered internally. I asked how they deal with cases that whistleblowers alert them to, but apparently those are not seen as being part of this process. Poland is also putting together a central repository of graduate theses. Andrei Rostovtsev is from the Dissernet group of academics in Russia that publicly documents plagiarism in Russian doctoral dissertations. They do a quick comparison of the long abstracts that are publicly available for Russian dissertations and have found thousands of highly similar dissertations. One rather amusing pair involves chocolate and beef: all the words dealing with chocolate were replaced by words about beef processing, while the rest of the thesis is identical. There is a film about the group, showing for example the bullet hole that appeared one morning in the window of Rostovtsev's apartment. One does not make many friends documenting plagiarism in Russia, it seems. They are currently using the metadata published with the abstracts to illustrate the networks of researchers. The clusters and networks so identified are very close to the clusters of plagiarism previously identified. Ines Friss de Kereki spoke on using MOSS and JPlag to detect collusion in computer science homework programs.
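For readers unfamiliar with tools like MOSS and JPlag: the underlying idea is to compare programs by the overlap of their normalized token sequences rather than their raw text, so that superficial edits such as renaming variables do not hide collusion. The following toy sketch is my own illustration of that idea only; the real tools (MOSS with its winnowing fingerprints, JPlag with language-aware token parsing) are far more sophisticated.

```python
# Toy sketch of token-based program comparison, the idea behind
# collusion detectors like MOSS and JPlag (NOT their actual algorithms):
# normalize identifiers, then compare sets of token n-grams.
import re

def ngrams(source: str, n: int = 3) -> set[tuple[str, ...]]:
    # Crude tokenizer: every identifier becomes the placeholder "ID",
    # so merely renaming variables does not change the fingerprint.
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", source)
    keywords = {"for", "if", "return", "def", "while", "in", "range"}
    normalized = [t if t in keywords or not t[0].isalpha() else "ID"
                  for t in tokens]
    return {tuple(normalized[i:i + n]) for i in range(len(normalized) - n + 1)}

def similarity(a: str, b: str) -> float:
    """Jaccard similarity of the two programs' n-gram sets (0.0 to 1.0)."""
    ga, gb = ngrams(a), ngrams(b)
    return len(ga & gb) / len(ga | gb) if ga | gb else 0.0

original = "def total(xs):\n    s = 0\n    for x in xs:\n        s = s + x\n    return s"
renamed  = "def summe(ys):\n    t = 0\n    for y in ys:\n        t = t + y\n    return t"
print(similarity(original, renamed))  # 1.0 -- renaming alone changes nothing
```

Students who only rename variables in a copied program are thus still caught; defeating such tools requires restructuring the program's logic, at which point one might as well have written it oneself.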
We had a nice dinner at the science and technology museum in Brno, and enjoyed playing with the exhibits. I did not ride the bicycle over the tightrope, but some brave souls did.

More talks tomorrow!

Saturday, May 20, 2017

Plagiarism in a CRISPR paper

A blog on stem cells, The Niche, has published an account of the publishing of an erratum on a CRISPR journal article. CRISPR is the technology used to copy slices out of DNA and paste them in other places, if I understand this terribly oversimplified description correctly. The author who was plagiarized kept exact notes on his extensive correspondence with the publisher (Springer). It's an interesting read, as it raises multiple issues.

Monday, February 27, 2017

Catching up on VroniPlag Wiki

I haven't written about the work at VroniPlag Wiki for a while, so here are some of the more interesting things that have happened in the past year or so:
  • There were three important verdicts handed down for VroniPlag Wiki cases, all of them affirming the university decisions to rescind the doctorates in question:
  • In another legal case (Ssk: Verwaltungsgericht Düsseldorf, 15 K 1920/15) as reported by the Legal Tribune Online, although the judge made it clear that the university would win its case, it still settled the case without judgement on rather strange terms: The thesis can be submitted to another university, but not Düsseldorf again.
  • In March 2016 the Medical University of Hanover determined that the current German Minister of Defense had plagiarized, but not enough to warrant rescinding the doctorate (discussed on this blog previously). A number of attempts have been made to obtain information on which documented fragments were considered plagiarism and which were not, but I keep hitting a brick wall, although it would be useful for the scientific community to know why specific fragments were considered not to be plagiarism. The university was informed of another five dissertations (Acb, Bca, Lcg, Wfe in medicine, Cak in dentistry) and a habilitation (Mjm) that also include extensive text parallels that could be construed as plagiarism, but there has been no public progress to date.
  • The University of Münster, with 23 cases in medicine alone, announced in February 2017 that it had completed an investigation that took three years and 12 meetings of the committee. The Westfälische Nachrichten reports that eight doctoral degrees have been withdrawn and 14 persons reprimanded, although the university won't say which degrees have been withdrawn. One author has died, and that investigation was thus discontinued. According to the paper, one doctoral advisor who supervised two of the withdrawn degrees has been stripped of additional funding and personnel and is prohibited from taking on doctoral students. The story was picked up by dpa and published in a number of online publications, for example Spiegel Online.
  • The often-heard argument that natural scientists don't plagiarize can be considered refuted with this doctoral thesis in chemistry that contains text overlap on over 90 % of the pages: Ry
  • A law dissertation from the University of Bremen, Mra, that was published in 2016, was documented with extensive plagiarism from, among other sources, the Wikipedia.
  • The documentations published for two habilitations, Chg (law, 2005) and Ank (dentistry, 1999), bring the total number of documented habilitations to eleven cases.
  • One dissertation, Gma, about TV game shows such as Who wants to be a Millionaire?, copied extensively from at least 13 Wikipedia lemmata.
  • One of the cases published in February 2017, Pak (Med. Diss., LMU München, pp. 18-19), includes not only text from five Wikipedia articles, but preserves the links from the articles as underlines in the text.

I will be speaking with a colleague in March at a conference about the Dr. Wikipedia phenomenon.

Friday, February 24, 2017

Understanding Citation

I've just graded a stack of papers handed in by computer science students. They were for the most part dreadful: written in chatty blog style, nothing referenced or the statement being referenced not actually to be found at the reference given, no evidence of any proofreading, even missing niceties such as page numbers or captions on figures. I'm not even going to start in on the missing structure of an academic paper.

People, proper citation is not rocket science! And it is not an instrument of torture that instructors force students to use. We cite to give our readers a chance to follow our reasoning, to check up on us, and to demonstrate the research that was done.

Kate Williams and Jude Carroll have a nice guide to referencing [1, pp. 26-27]:
You need to reference when you:
  • use facts, figures or specific details you pick from somewhere to support a point you’re making – you report
  • use a framework or model another author has devised – you acknowledge
  • use the exact words of your source – you quote
  • restate in your own words a specific point, finding or argument an author has made – you paraphrase
  • sum up in a phrase or a few sentences a whole article or chapter, a key finding/conclusion, or a section – you summarize.
You don’t need to reference if you:
  • believe that what you are writing is widely known and accepted by all as ‘fact’. This is usually called ‘common knowledge’
  • can honestly say, ‘I didn’t have to research anything to know that!’.
If finding it out did take effort, show the reader the research you did by referencing it! 
That rather puts in a nutshell when to reference. But I find that my students don't even know how to reference. I had one paper with 14 URLs listed under the heading "Sources", but none referenced from the text. Do you expect me to look through all of your URLs to find where you got the notion that Alan Turing used a Turing Machine to break the Enigma code?

Many papers had one reference (usually given as a footnote number!) for each paragraph, so presumably I am to assume that they took the entire paragraph from that source. One paper used "vgl." (German for "cf.") in a footnote for every single reference given. No, that means that there is more information about this topic to be found there, not that you took this snippet from that source.

As a computer scientist, the structure of referencing something appears so simple to me. I use this in the talks that I give, and a recent attendee asked me if I had published it anywhere. I actually haven't, because it seemed so obvious. But here it is, in case anyone wants to use it!

The secret of good referencing is to clearly mark where something from someone else begins and where it ends, and to tell your readers where you got it from. That's all!  As a computer scientist I use parentheses to mark the beginning and end, and an arrow as the notation for the exact reference:
Where does it start, where does it end, where did it come from?
If you are giving a direct quotation, the "parentheses" are your quotation marks, and the arrow is your reference. There are many different ways of doing the latter: author-year; a number (in parentheses or brackets); or the strange computer science label built from the initials of the authors' last names and the publication year ([Smi11] for Smith, 2011). But in general it looks like this:
Direct quotation
Instead of these quotation marks, fancier ones can be used, or the entire text can be indented. If you are giving an indirect quotation, you begin with the name of the author and use the reference as a combined closing and reference:

Indirect quotation
If you use something by Smith in the next sentence, you can use something like "Smith continues..." or "Additionally, she feels ..." or some such for making it clear that it is still Smith talking and not you.
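That computer science label, by the way, is built mechanically. The following is a toy sketch of my own, modeled loosely on the convention used by BibTeX's "alpha" style: the first three letters of a single author's surname, or the initials of several surnames, followed by the last two digits of the year.

```python
# Toy sketch (my own, loosely following the BibTeX "alpha" convention)
# of how the [Smi11]-style citation label is typically constructed.

def alpha_label(surnames: list[str], year: int) -> str:
    if len(surnames) == 1:
        stem = surnames[0][:3].capitalize()   # single author: first 3 letters
    else:
        stem = "".join(name[0].upper() for name in surnames[:4])  # initials
    return f"[{stem}{year % 100:02d}]"        # two-digit year

print(alpha_label(["Smith"], 2011))           # [Smi11]
print(alpha_label(["Knuth", "Plass"], 1981))  # [KP81]
```

The label itself is just a key, of course; what matters is that it points unambiguously to one entry in the reference list.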

Was that really so hard to understand?

It goes without saying that the reference given MUST be the source for the statement and not some random reference because you forgot to note down where you found it. The punishment for not taking proper notes is having to look it up again to verify that you have it right. It is sometimes very sobering to see that you have it exactly backwards....

One last word of advice: Don't quote the Wikipedia! It's a great place to start your research, and then you look up all those cool references at the bottom of the page and use them as your references. If the Wikipedia is wrong, please fix the article for the next people wanting to know about the topic. Only if you are doing research about the Wikipedia should you be quoting it. And if you must, please use the "Cite this page" link! It's on every page but the front page of every single Wikipedia. And it will give you a proper reference to copy in many popular styles.
It's almost always been there, but so few people have ever seen it

Now, what to do about my students who will be writing their theses next semester? I think we need Writing Boot Camp at German universities sooner rather than later. They are not learning this in school, and we are not yet teaching it at university. Since they don't read academic literature, they don't know what an academic paper is supposed to look like. And online they easily find blogs and Wikipedia, so they copy that style. We've got work to do...

[1] Williams, K. & Carroll, J. (2009). Referencing & Understanding Plagiarism. Basingstoke: Palgrave Macmillan.

Sunday, January 8, 2017

An Exercise in Plagiarism Prevention

As Diane Pecorari [1] never seems to tire of saying, the best way to deal with plagiarism is to prevent it from happening by educating students about why we reference other works and how to do it. I quite agree! Not only has Pecorari put together many good classroom activities to achieve this goal, but others such as Margaret Price [2] have also offered good ideas. Price speaks of a classroom lecture by Mike Mattison at the University of Massachusetts-Amherst that she observed in 1999 and describes an exercise that he did with his students:
Another area for possible focus in the classroom is the differences among, and possible intersections of, what we mean by paraphrasing, quoting, and our own words. In an inventive approach to this subject, Mike Mattison distributes colored pencils to his students and asks each of them to create a legend at the top of a peer's paper: one color for what they determine to be paraphrases, one for quotes, and one for the author's own words. Students go through each other's papers, underlining sections, lines, and words in appropriately alternating colors. They then retrieve their own papers and examine the alternation of colors for balance and flow. Although in the class I observed Mike did not ask his students to discuss the problem of distinguishing between "outside" words and the author's own words, his exercise would be an ideal lead-in to this conversation. Students could also try this with their own papers.
That sounds like a brilliant idea, so I adapted it for my Master's seminar in a computing program the other day. The students are currently writing their final theses, due in about 8 weeks. It is, of course, late for such instruction, but better late than never.

I instructed the students to bring two copies of 4-5 pages from their thesis in which they reference the work of others, and one copy of their literature list. They were also to bring highlighters and a red pen. I gave them the instruction two weeks in advance and repeated it 48 hours before class. As more than one student admitted, the 48-hour reminder induced enough panic to get them to finally quit programming and get some writing done.

We have a block of four 45-minute hours every other week for the course; in the first hour of the session we had other topics to attend to.

For the exercise I brought five pages from my book [3] and a sack full of highlighters and colored pens, as I know that my students often forget to bring writing implements.

In the second hour I sat down at our overhead camera projector with three markers and my text. I defined a legend for the colors and then did the highlighting on my own text: what is from me, what is quoted, what is an indirect quotation or a paraphrase. They peppered me with questions! I thought I would only need 15 minutes for this; instead, we had to break off after four pages and 45 minutes!

Then it was their turn. I paired off the 12 participants so that no one was together with someone with whom they were working closely, and had them get to work marking up one copy of their own writing and one copy of their partner's. The readers were given the literature lists and were asked to spot-check a few of the references to see if they were correctly recorded. A red pen was to be used to mark up any spelling or grammar errors encountered. They were made to sit apart so that they didn't get nervous about someone sneaking peeks at their work. Then they were to discuss the results with their partner.

It took about 25-30 minutes of very intensive work before they had worked through the exercise, then they began very spirited discussions of the differences between their own markup and the perception of their readers. I was called on by all the groups to "judge" differences of opinion. Two groups discussed the issues for the next 60 minutes!

I asked each group for some feedback on the exercise. They all appreciated the exercise, because it taught them how to see what they write from a reader's perspective. They know what they have written themselves and what is from other people, but were not making this clear. It was also hard having peers mark up spelling errors in red - two students sat correcting their spelling errors on the spot. The feedback that was most surprising for me was one student who noted that he felt quite relieved now. There has been an intensive discussion of plagiarism in Germany since 2011 and many students are scared that they are somehow not quoting properly and will get accused of plagiarism. Now he felt secure that he was doing it mostly right, and that he had learned about the points where he needed to make things a bit more clear in the exercise.

I highly recommend trying out this exercise, although I don't know how I would survive a larger group, as I had waiting lists for going around and explaining details to each group.

[1] Pecorari, D. (2013). Teaching To Avoid Plagiarism: How To Promote Good Source Use. Maidenhead, UK: Open University Press.
[2] Price, M. (2002). "Beyond 'Gotcha!': Situating Plagiarism in Policy and Pedagogy". College Composition and Communication, Vol. 54, No. 1, p. 109.
[3] Weber-Wulff, D. (2014). False Feathers: A Perspective on Academic Plagiarism. Berlin: Springer Verlag.