So, you detected a high likelihood that a student submission was generated by AI. Now what?
This semester, the likelihood of students submitting AI-generated text has increased alongside growing awareness of ChatGPT and similar technologies.
Detecting the use of AI-generated text has been a hot topic of conversation, and there are plenty of tools popping up to detect it.
But what is missing is the answer to the question – “what do I do after I detect my student has submitted unedited AI-generated text?”
This guide exists to help you navigate what happens after AI text detection.
This guide was developed with the help of educators, educational administrators, and Packback employees, and it is representative of Packback’s opinion. Your institution may have strict policies and processes regarding academic integrity and dishonesty. We recommend consulting your institution’s policy on AI-generated text before taking any action communicated in this guide.
We also want to make it clear that we are purposefully not calling AI text detection “AI plagiarism”. Read more about why we are avoiding that term below.
What is AI-Generated Text Submission?
AI-generated text submission is a form of potential academic dishonesty in which students use a generative AI platform, like ChatGPT, to create a large body of text and then submit it as their own work.
Students may not know this is academic dishonesty, mainly because ChatGPT and similar generative AI platforms are very new, and your institution may not have set a precedent on the use of generative AI.
Use of AI-generated text – even with a detector – is currently very hard to prove. With traditional plagiarism, you can often “prove” academic dishonesty with a quick search: find the plagiarized source and compare the student’s submission against it. That ability to compare student text to a published source doesn’t exist with AI-generated text.
Beyond that, AI detection models are far from perfect. It’s possible your student simply wrote a very general, generic piece of text. And because it was such a statistically probable way to write about a concept, an AI detector falsely flagged the work as “written by AI”.
Modifying your Syllabus
If you do not want students utilizing generative AI programs in your class, we highly recommend adding a small string of text to your syllabus along the lines of:
“All written work submitted in this course must be originally produced by you, the student. If you utilize an outside source, you must properly cite the source in the assignment.
Utilizing a generative AI platform like ChatGPT and submitting the work as your own is a form of academic dishonesty. If it is suspected that you’ve utilized generative AI on an assignment, I will reach out to request an assignment review meeting.”
If you’d like your students to utilize generative AI in an academically honest way – similar to how they would utilize and cite outside sources – consider adding different syllabus language that coaches them on productive and wise use of generative AI. One instructor at Wharton, for example, has published syllabus language of this kind.
What to do when you detect or suspect AI-Generated Text
If you are utilizing an AI-text detector via Packback or another platform, and you are hit with a “high likelihood that this text contains AI-generated content” report, you can follow the steps below to proceed. Keep in mind that false flags are bound to happen if the student writes a highly generic piece of text with little specificity.
If you aren’t using a detector but your “spidey senses” are tingling, you can also follow the steps below to address your suspicions. Many educators have been there – a student who fails every assessment and hardly comes to class suddenly submits a spectacular paper. It smells fishy!
If you’re utilizing a Packback product – AI text detection and plagiarism checking are built into the automated submission workflows.
If you’re not utilizing Packback Questions or Deep Dives, Packback will launch a free & public AI-Detector tool for instructors on February 15th, 2023.
First: score the submission. If the student didn’t meet the goals of the assignment because they utilized AI or plagiarized, they will receive a low score and that itself should warrant ample feedback.
Second: Review the submission. If the submission received a passing or high score, you’ll want to review the submission carefully for red flags to verify the use of AI generated text.
Here’s a pathway for submission review that we’ve found works well for instructors:
- Check Sources: If the assignment requires cited sources, check the source credibility and quality. Generative AI is not effective at citing credible and/or relevant sources. If you notice that the titles of cited sources lift language verbatim from other parts of the student’s writing, it is possible that these are AI-generated “sources” that do not, in fact, exist. You can also look at how the student has used the sources in the writing. Are they simply summarizing the source? Or are they using the source to verify claims, justify their writing, and ultimately form coherent arguments? It’s much more likely that AI did the former, not the latter.
- Did the student cite non-existent sources? Huge red flag (for many reasons!)
- Did the student cite non-relevant sources and simply summarize them or include some quotes? Red flag
- Did the student cite relevant sources and simply summarize them or include a quote or two? Could be AI, could be human! Keep analyzing the submission.
- Did the student cite relevant sources and utilize the sources to verify claims in their writing? Sounds more human than AI
- Is the Work Specific? Take a look at the specificity of the writing. Writing that is general in nature is likely to have been generated by AI, but it is possible that your student simply wrote it themselves without the skill needed to display deeper levels of understanding and thought. In that case, the student needs additional support and coaching from you to understand that highly general writing is not effective writing, even if they wrote it themselves. If the student scored poorly on the assessment because of highly general writing, focus the coaching conversation on quality of writing rather than potential use of AI. If the student was able to achieve a high score with a highly general, non-specific submission, we recommend modifying your course assignment to increase “assignment authenticity”. For more information on how to safeguard your assignments against academic dishonesty by modifying your writing prompts, feel free to review one of our ChatGPT webinars.
- Read the student work for content accuracy: Because large language models rely on statistical probability of language rather than fact-checking, one of their telltale signs is asserting content inaccuracies with high confidence. Does this writing rely on ideas or contain multiple supporting points that are simply incorrect? If so, it’s likely AI generated the text.
- Compare the submission to other writing samples: If you have the ability and the capacity, compare the submission to previous writing samples. This is helpful data for you to validate or dismiss your concerns, and also helpful data for review during a potential student meeting.
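For instructors who like an explicit rubric, the review pathway above can be sketched as a simple red-flag tally. Everything in this sketch is an illustrative assumption – the flag names, the two-flag threshold, and the special handling of fabricated sources are not a Packback feature or a validated scoring model:

```python
from dataclasses import dataclass


@dataclass
class SubmissionReview:
    """Red flags from the manual review pathway above (illustrative only)."""
    cites_nonexistent_sources: bool = False   # fake/fabricated citations
    cites_irrelevant_sources: bool = False    # sources summarized, not used
    highly_generic_writing: bool = False      # no specificity or depth
    confident_factual_errors: bool = False    # wrong claims stated confidently
    inconsistent_with_past_work: bool = False # doesn't match prior samples


def needs_follow_up(review: SubmissionReview) -> bool:
    """Suggest scheduling a student conversation when red flags accumulate.

    Fabricated sources are treated as decisive on their own; otherwise
    this requires two or more flags before recommending a meeting.
    The threshold is an arbitrary assumption for illustration.
    """
    if review.cites_nonexistent_sources:
        return True
    flags = [
        review.cites_irrelevant_sources,
        review.highly_generic_writing,
        review.confident_factual_errors,
        review.inconsistent_with_past_work,
    ]
    return sum(flags) >= 2
```

Note that even when this kind of tally says “follow up”, the outcome is a conversation with the student, not an accusation – the flags only tell you the submission warrants a closer look.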
Lastly, engage in a discussion with your student. If after reviewing the submission you have good reason to believe the student didn’t author their submitted work, we recommend you do the following:
- Schedule time to speak live with the student: Reach out and ask the student to attend office hours or schedule a check-in (virtual or in person). Let them know you won’t be able to finish grading their submission until the meeting is scheduled and completed.
- Share your observations without sharing an accusation: During the meeting, explain to the student that their submission didn’t quite sound like them and explain any other issues with the submission (it had fake sources, it had non-relevant sources, it wasn’t very specific/didn’t meet the goals of the assignment, it wasn’t consistent with previous writing samples, or it had a number of factual inaccuracies).
- Ask them about the process: Ask the student how they researched, prepared to write, and wrote their paper. Dig into their process: How long did it take them? When did they start? How much effort did they dedicate to editing?
- Give the student a chance to re-submit: it’s possible the student didn’t use AI, and just needed thorough feedback. It’s also possible the student used AI, and this conversation properly “caught” their attention and is the future deterrent they needed. Regardless, because it is challenging to firmly prove the use of AI-generated-text, it’s best to offer the opportunity to edit and resubmit the assignment and see if the student was able to learn from their mistakes.
It’s possible the student will admit to using a generative AI program to write a draft or outline of their paper. If this happens, we highly encourage you to thank them for being candid and ask them a bit more about their prompt and editing process. Perhaps you can use this as an opportunity to share the flaws of generative AI programs and give them direct coaching on how to produce stronger work. If your class policy clearly states that AI programs aren’t allowed at any point in the writing process, communicate this again.
We hope this guide is a helpful resource as you navigate through a new landscape in content creation and potential misuse of impressive new technologies.
If you’re interested in learning more about Packback and how we use instructional AI to prevent plagiarism, reach out to our team to request a demonstration at Curious@packback.co