By Sydney Cole
Guest Writer
Delaware County Community College, like many schools nationwide, has adopted AI screening technology to keep pace with the surge in generative writing tools available to students. But false positives are raising new anxieties for instructors and students alike, calling academic integrity and trust into question.
“I felt extremely insulted and confused—and of course scared,” said current DCCC student Silver Smith. Reflecting on their experience of receiving a false-positive flag, they said the incident changed how they write. “I go out of my way to sound like me in essays,” they wrote in a text. “I’ve kinda shifted around my punctuation too. I use less em dashes (and I love em dashes).”
Former DCCC student Dustin Gamble, now studying at West Chester University, says AI screening tools have changed his writing even without personally experiencing a false flag. The tools make him less willing to take creative risks. “I see it as more of a transaction than a learning opportunity,” Gamble said. “[Screening tools] always give me a little bit of anxiety… and it puts up a barrier, because a lot of the time I like writing—but I don’t like being worried.”
Generative writing tools such as Grammarly, ChatGPT, and Claude are freely available to anyone, students included. Their rise has sparked an academic arms race between generation and detection technologies. These screening tools are far from perfect, and DCCC’s software of choice, Turnitin, is among those criticized for its false-positive rate.
DCCC Professor Dr. Susan Ray, who studies AI in higher education, is not fond of the software. “Turnitin’s AI plagiarism-detection is terrible,” Ray said. “That’s what [the school] has told [faculty] to use, but the error rate is one in fifty.”
Turnitin officially states that its false-positive rate is less than 1%, while a Washington Post analysis found error rates as high as 50% in small control groups. Research from the University of San Diego indicates that neurodivergent students, students with reading disabilities, and non-native English speakers are disproportionately flagged, adding to the hurdles disadvantaged students face.
In contrast, Dr. Ray said skilled writers may encounter suspicion for the opposite reason. “[Turnitin] tends to flag really good writers. It’s kind of punishing [them] for being an above-average student writer,” Ray said, adding, “For [these students], it makes education feel like, ‘why bother? Because I’m working so hard and you think it’s not me.’”
These inaccuracies weigh not only on students but also on the faculty left to interpret the results. According to Mark Keierleber of The 74, an online news organization, over half of educators nationwide say they’ve grown distrustful of their students’ work.
At DCCC, confusion extends beyond detection tools and into overall policy. The college has no formal policy on AI use, instead leaving decisions to individual instructors’ discretion. As a result, students come away with different understandings of how, or whether, they can use AI, potentially changing their writing and study habits from class to class. That inconsistency can cause stress for students and teachers alike.
DCCC Communications Professor Maria Boyd speaks on the mental toll of navigating AI suspicion. “I’m a teacher, not a cop…I have no interest in cultivating my lie-detector skills,” Boyd said. “If I wanted to do that, I’d go work for the FBI…It’s not good for my mental health.”
Professor Boyd tries to “assume good intent” with every case and has taken creative approaches to circumvent the issue entirely. She’s begun experimenting with oral assessments and creative assignments that focus on understanding, not written polish. Boyd argued, “AI can be a tool or a weapon—it’s how it’s used.”
Meanwhile, more reliable AI software is emerging on the market. Programs like Pangram boast a one-in-ten-thousand false-positive rate, according to Ray’s research. She has purchased a subscription for herself and a colleague with grant money, and they’re currently gathering data on its performance. Several faculty have made the switch, and DCCC has offered to reimburse the cost. Ray hopes that one day detection tools will no longer be necessary. “Ideally, we’ll get to a point where AI is ethically integrated into classrooms,” she said, “not feared [as] something to catch or punish.”