Plagiarism & False Data in Academic Papers

“There are countless credible accusations of (academic) misconduct that go uncorrected; I myself have published articles challenging the integrity of hundreds of papers. The majority of them have not been retracted, corrected or even remarked upon. I would wager that most reasonably large universities (my own included) have faculty members who are known to have plagiarized, fabricated, falsified, claimed undue credit, hidden financial conflicts of interest or misbehaved in numerous other ways and who have seemingly gone unpunished.”

New York University professor Charles Seife writing in the New York Times

12 Articles on Cheating with AI & AI Detectors

The Trouble With AI Writing Detection – Inside Higher Ed

College application season is here. So is the struggle to find out if AI wrote students’ essays – CalMatters

If using ChatGPT to write essays becomes widespread, those students who elect not to use it, who prefer to do the work themselves, may suffer a penalty for doing so. – Chronicle of Higher Ed

Results of a new survey flip the early narrative on ChatGPT—that students would rush to use it to cheat on assignments and that teachers would scramble to keep up—on its head. Half of students, ages 12-18, said they have never used ChatGPT. – Ed Week

OpenAI debates when to release its AI-generated image detector – Tech Crunch

Universities Rethink Using AI Writing Detectors to Vet Students’ Work – Bloomberg 

Identifying AI’s flaws motivates students and helps them build confidence, which can discourage cheating. Pointing out where it still really messes up is very powerful for empowering students to see their own strengths as human thinkers. – Chronicle of Higher Ed

Students cheat out of desperation, so one professor gives multi-level assignments that require students to submit papers at various stages so she can track their progress. – Yahoo News

The AI Detection Arms Race Is On, and college students are developing the weapons, quickly building tools that identify AI-generated text—and tools to evade detection. – Wired

Simply leaving it up to students to decide whether they’re going to do the work, without further comment or intervention or negative sanction from me, is a failure of pedagogy. – Chronicle of Higher Ed

AI detectors have low efficiency, and simple modifications can allow even the most robust detectors to be easily bypassed. – Science Direct 

Suspicion, Cheating & Bans: AI Hits America's Schools (podcast) – New York Times

8 good quotes about students cheating with AI   

Is it cheating to use AI to brainstorm, or should that distinction be reserved for writing that you pretend is yours? Should AI be banned from the classroom, or is that irresponsible, given how quickly it is seeping into everyday life? Should a student caught cheating with AI be punished because they passed work off as their own, or given a second chance, especially if different professors have different rules and students aren’t always sure what use is appropriate? Chronicle of Higher Ed 

What about students cheating by using ChatGPT instead of doing their own writing? The thing about technology is that it is interfering with the very weak proxies we have of measuring student learning, namely homework and tests. (Generative AI) is just another reminder that it’s actually really hard to know how much someone has learned something, and especially if we’re not talking to them directly but relying on some scaled up automated or nearly automated system to measure it for us. MathBabe Cathy O’Neil

Sometimes, though, professors who felt they had pretty strong evidence of AI usage were met with excuses, avoidance, or denial. Bridget Robinson-Riegler, a psychology professor at Augsburg University, in Minnesota, caught some obvious cheating (one student forgot to take out a reference ChatGPT had made to itself) and gave those students zeros. But she also found herself having to give passing grades to others even though she was pretty sure their work had been generated by AI (the writings were almost identical to each other). Chronicle of Higher Ed 

As professors of educational psychology and educational technology, we’ve found that the main reason students cheat is their academic motivation. The decision to cheat or not, therefore, often relates to how academic assignments and tests are constructed and assessed, not on the availability of technological shortcuts. When they have the opportunity to rewrite an essay or retake a test if they don’t do well initially, students are less likely to cheat. The Conversation

Lorie Paldino, an assistant professor of English and digital communications at the University of Saint Mary, in Leavenworth, Kan., described how she asked one student, who had submitted an argument-based research essay, to bring to her the printed and annotated articles they used for research, along with the bibliography, outline, and other supporting work. Paldino then explained to the student why the essay fell short: It was formulaic, inaccurate, and lacked necessary detail. The professor concluded with showing the student the Turnitin results and the student admitted to using AI. Chronicle of Higher Ed 

Our research demonstrates that students are more likely to cheat when assignments are designed in ways that encourage them to outperform their classmates. In contrast, students are less likely to cheat when teachers assign academic tasks that prompt them to work collaboratively and to focus on mastering content instead of getting a good grade. The Conversation

A common finding (from our survey): Professors realized they needed to get on top of the issue more quickly. It wasn’t enough to wait until problems arose, some wrote, or to simply add an AI policy to their syllabus. They had to talk through scenarios with their students. Chronicle of Higher Ed 

Matthew Swagler, an assistant professor of history at Connecticut College, had instituted a policy that students could use a large language model for assistance, but only if they cited its usage. But that wasn’t sufficient to prevent misuse, he realized, nor prevent confusion among students about what was acceptable. He initiated a class discussion, which was beneficial: “It became clear that the line between which AI is acceptable and which is not is very blurry, because AI is being integrated into so many apps and programs we use.”  Chronicle of Higher Ed

22 quotes about cheating with AI & plagiarism detection

Students should know that this technology is rapidly evolving: future detectors may be able to retroactively identify auto-generated prose from the past. No one should present auto-generated writing as their own on the expectation that this deception is undiscoverable. Inside Higher Ed

Alex Lawrence, professor at Weber State University, described it as “the greatest cheating tool ever invented.” Wall Street Journal

Some plagiarism detection and learning management systems have adapted surveillance techniques, but that leaves systems designed to ensure original work “locked in an arms race” with systems designed to cheat. Inside Higher Ed

Popular essay submission portal Turnitin is developing its own detector, and Hive claims that its service is more accurate than others on the market, including OpenAI’s very own, and some independent testers have agreed. Tech Radar 

While faculty members will likely spend some time trying to identify a boundary line between AI assistance and AI cheating with respect to student writing, that may not be the best use of their time. That path leads to trying to micromanage students’ use of these models. Inside Higher Ed

You can have tools like Quillbot (that can) paraphrase the essays ChatGPT gives you so it doesn't look too obvious. Mashable

“If I’m a very intelligent AI and I want to bypass your detection, I could insert typos into my writing on purpose,” said Diyi Yang, assistant professor of computer science at Stanford University. Inside Higher Ed

But what about the cheaters, the students who let a chatbot do their writing for them? I say, who cares? In my normal class of about 28 students, I encounter one every few semesters whom I suspect of plagiarism. Let’s now say that the temptation to use chatbots for nefarious ends increases the number of cheaters to an (unrealistic) 20 percent. It makes no sense to me that I should deprive 22 students who can richly benefit from having to write papers only to prevent the other six from cheating (some of whom might have cheated even without the help of a chatbot). Washington Post 

If a teacher’s concern is that students will “cheat” with ChatGPT, the answer is to give assignments that are personal and focused on thinking. We don’t have to teach students to follow a writing algorithm any more; there’s an app for that. Forbes

What’s to stop a student from getting ChatGPT to write their work, then tweak it slightly until it no longer gets flagged by a classifier? This does take some effort, but a student may still find this preferable to writing an entire assignment themselves. Tech Radar 

If the concern is that students could cheat, it’s worth remembering that they could cheat six months ago and 60 years ago. Students taking a brand-new exam could already get answers to test questions in minutes from services like Chegg. Students could already plagiarize — or pay someone else to write their entire paper. With the entrance of ChatGPT, “what’s changed is the ease and the scope.” Chronicle of Higher Ed

If ChatGPT makes it easy to cheat on an assignment, teachers should throw out the assignment rather than ban the chatbot. MIT Tech Review

Professors can create conditions in which cheating is difficult, giving closed-book, closed-note, closed-internet exams in a controlled environment. They can create assignments in which cheating is difficult, by asking students to draw on what was said in class and to reflect on their own learning. They can make cheating less relevant, by letting students collaborate and use any resource at their disposal. Or they can diminish the forces that make cheating appealing: They can reduce pressure by having more-frequent, lower-stakes assessments. Chronicle of Higher Ed

Unlike accusations of plagiarism, AI cheating has no source document to reference as proof. “This leaves the door open for teacher bias to creep in.” Washington Post

Despite their positive attitude towards AI, many students (in a survey say they) feel anxious and lack clear guidance on how to use AI in the learning environments they are in. It is simply difficult to know where the boundary for cheating lies. Neuroscience News

While the AI-detection feature could be helpful in the immediate term, it could also lead to a surge in academic-misconduct cases, Eaton said. Colleges will have to figure out what to do with those reports at a moment when professors have yet to find consensus on how ChatGPT should be dealt with in their classrooms. Chronicle of Higher Ed

“Do you want to go to war with your students over AI tools?” said Ian Linkletter, who serves as emerging technology and open-education librarian at the British Columbia Institute of Technology. “Or do you want to give them clear guidance on what is and isn’t okay, and teach them how to use the tools in an ethical manner?” Washington Post

Even if detection software gets better at detecting AI generated text, it still causes mental and emotional strain when a student is wrongly accused. “False positives carry real harm,” he said. “At the scale of a course, or at the scale of the university, even a one or 2% rate of false positives will negatively impact dozens or hundreds of innocent students.” Washington Post 

On many campuses, high-course-load contingent faculty and graduate students bear much of the responsibility for the kinds of large-enrollment, introductory-level, general-education courses where cheating is rampant. How can large or even mid-sized colleges withstand the flood of nonsense quasi-plagiarism when academic-integrity first responders are so overburdened and undercompensated? Chronicle of Higher Ed

Bruce Schneier, a public interest technologist and lecturer at Harvard University’s Kennedy School of Government, said any attempts to crackdown on the use of AI chatbots in classrooms is misguided, and history proves that educators must adapt to technology. Washington Post

Harsh punishments for cheating might preserve the status quo, but colleges generally give cheaters a slap on the wrist, and that won’t change. Unmonitored academic work will become optional, or a farce. The only thing that will really matter will be exams. And unless the exams are in-person, they’ll be a farce, too. Chronicle of Higher Ed

“I think we should just get used to the fact that we won’t be able to reliably tell if a document is either written by AI — or partially written by AI, or edited by AI — or by humans,” computer science professor Soheil Feizi said. “We should adapt our education system to not police the use of the AI models, but basically embrace it to help students to use it and learn from it.” Washington Post

Also:

21 quotes about cheating with AI & plagiarism detection                        

13 quotes worth reading about Generative AI policies & bans                   

20 quotes worth reading about students using AI                                    

27 quotes about AI & writing assignments            

22 examples of teaching with AI                                                           

27 thoughts on teaching with AI   

13 thoughts on the problems of teaching with AI                                               

A basic explanation of the new AI bot called ChatGPT

OpenAI, the San Francisco AI research company behind the text-to-image creation tool DALL-E, has created a chatbot that responds to user-submitted queries. The model was trained using reinforcement learning from human feedback. ChatGPT (GPT stands for “generative pre-trained transformer”) shows how far artificial intelligence—particularly AI text generators—has come. Because it remembers what you’ve written or said, the interaction has a dynamic, conversational feel that sets it apart from other, static chatbots. It could be the basis for a medical chatbot that answers patient questions about very specific symptoms, or serve as a personalized therapy bot.

Give the software a prompt and it creates articles, even poetry. It writes code, too, and can explain that code or correct errors in it. GPT-3 came before it. Both are generative models, meaning they are trained to predict the next word in a sentence, like a top-notch autocompletion tool. What separates ChatGPT from GPT-3 is that ChatGPT goes beyond predicting the next word to also follow the user’s instructions. Training on examples of human conversations has made the experience feel more familiar to users.

ChatGPT is being used to rewrite literary classics, compose a Bible song about ducks, write a sonnet about string cheese, explain scientific concepts, explain how to remove a peanut butter sandwich from a VCR in the style of the King James Bible, and write a story about a fictitious Ohio-Indiana war. The New York Times gushes, “ChatGPT is, quite simply, the best artificial intelligence chatbot ever released to the general public.” Some tech observers predict it could one day replace Google.

But the software has limitations. It lacks information about 2022 because it doesn’t “crawl” the web for new information the way Google does, and it can spit out “plausible-sounding but incorrect answers.” And while its creators have taken steps to keep it from producing the racist, sexist and otherwise offensive output that has popped out of other chatbots, there are likely to be hiccups in that process.
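The “trained to predict the next word” idea is easier to grasp with a toy model. The sketch below is only an illustration, not OpenAI’s actual method: it counts which word follows which in a tiny made-up corpus and greedily autocompletes, whereas GPT-class models learn those probabilities with a large neural network trained on vast amounts of text.

```python
from collections import Counter, defaultdict

# A tiny, hypothetical training corpus.
corpus = (
    "the cat sat on the mat . "
    "the cat sat on the rug . "
    "the dog chased the cat ."
).split()

# "Training": count how often each word follows each other word.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return successors[word].most_common(1)[0][0]

def complete(word, n=3):
    """Greedily extend a one-word prompt by n predicted words."""
    out = [word]
    for _ in range(n):
        out.append(predict_next(out[-1]))
    return " ".join(out)

print(predict_next("sat"))   # "on" — every "sat" in the corpus precedes "on"
print(complete("the"))       # "the cat sat on"
```

Real language models assign probabilities to every word in their vocabulary rather than memorizing bigram counts, and ChatGPT’s instruction-following behavior comes from the additional human-feedback training described above.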

Some warn about its potential abuse—blurring the lines between original writing and plagiarism.

Mike Sharples, a U.K. professor, says such technology “could become a gift for student cheats, or a powerful teaching assistant, or a tool for creativity.” 

Ars Technica reporter Benj Edwards writes:

"[I]t’s possible that OpenAI invented history’s most convincing, knowledgeable and dangerous liar — a superhuman fiction machine that could be used to influence masses or alter history." 

Decide for yourself whether we’re on the cusp of new creativity or massive fraud. Create a free account using your email here. Or try the Twitter bot if you’d prefer not to sign up.

Articles about ChatGPT: 

New AI chatbot is scary good – Axios

OpenAI’s new chatbot ChatGPT could be a game-changer for businesses – Tech Monitor  

Google is done. Here’s why OpenAI’s ChatGPT Will Be a Game Changer – Luca Petriconi

The College Essay Is Dead – The Atlantic

The Brilliance and Weirdness of ChatGPT – New York Times

ChatGPT Is Dumber Than You Think - The Atlantic

The Lovelace Effect – AI generated texts should lead us to re-value creativity in academic writing - London School of Economics

Hugging Face GPT-2 Output Detector

AI is finally good at stuff, and that’s a problem - Vox

ChatGPT: How Does It Work Internally? - Toward AI

Your Creativity Won’t Save Your Job From AI - The Atlantic

Could your public photos be used in an AI deepfake? - Ars Technica

API access is expected in early 2023, so companies can create products based on the software. Rumors say OpenAI will introduce an even better AI model, GPT-4, later next year.

Why are some people compelled to cheat?

The fear of losing something appears to be a greater motivator to cheat than the lure of a gain.

Kerry Ritchie, who researches how to improve teaching at the University of Guelph in Ontario, Canada, says the majority of academic cheating is committed by high-achieving students (60% of offenders earned grades of 80% or higher). While cheating in education is not the same as cheating during play, one similarity is that those at the top feel pressure to maintain their status. Players are also more likely to behave dishonestly if they can say that it benefits other people as well as themselves.

William Park writing in BBC Future

Few people can detect a liar

In daily life, without the particular pressures of politics, people find it hard to spot liars. Tim Levine of the University of Alabama, Birmingham, has spent decades running tests that allow participants (apparently unobserved) to cheat. He then asks them on camera if they have played fair. He asks others to look at the recordings and decide who is being forthright about cheating and who is covering it up. In 300 such tests people got it wrong about half of the time, no better than a random coin toss. Few people can detect a liar. Even those whose job is to conduct interviews to dig out hidden truths, such as police officers or intelligence agents, are no better than ordinary folk.

The Economist