Brown DLD Faculty Guides

AI Detection and Limitations

AI detection tools use algorithms to analyze writing and estimate whether it was generated by AI. These tools examine text features such as sentence structure, vocabulary, and grammatical patterns to make predictions. However, the results should be interpreted with caution.
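To illustrate the kind of surface statistic such tools rely on, here is a toy score based on sentence-length variability. This is a hypothetical sketch for illustration only, not any vendor's actual method; the function name and threshold behavior are invented for this example.

```python
import statistics

def sentence_length_variability(text: str) -> float:
    """Toy score: spread of sentence lengths (in words).

    Detectors often use surface statistics like this; uniformly
    similar sentence lengths are sometimes (unreliably) read as a
    sign of AI-generated text. Illustrative only, not a real detector.
    """
    # Naive sentence split on terminal punctuation.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    if len(sentences) < 2:
        # Too little text to measure -- one documented limitation
        # of real detectors on short samples.
        return 0.0
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths)
```

Note that the sketch returns 0.0 for very short texts, mirroring the limitation below: these measures simply cannot say anything meaningful about a few sentences, and light paraphrasing shifts the score easily.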

Limitations

  • Accuracy Is Unreliable: AI detectors frequently produce false positives and false negatives. Results can vary significantly when the same text is tested multiple times.

  • Poor Performance on Nonstandard Formats: These tools can struggle with short texts or nontraditional formats like poetry, scripts, or bulleted lists.

  • Opaque Methods: Most detectors do not explain how they reach their conclusions, making results difficult to verify.

  • Easily Circumvented: With minor edits or paraphrasing, AI-generated text can often avoid detection. Human-AI hybrid writing is especially hard to flag.

  • Equity Concerns: Detection tools are more likely to flag writing by multilingual students or those with less conventional writing styles, potentially reinforcing bias.

Practical Recommendations

  • Use Scores as Conversation Starters: Detection results should not be treated as definitive evidence. If a concern arises, speak with the student. Ask how they approached the assignment and what tools, if any, they used.

  • Make It a Teaching Moment: Students may not understand your AI policy or expectations. Use these moments to offer guidance and clarify what responsible AI use looks like in your course.

  • Reach Out to Administration: If you are concerned that a student willfully used AI in violation of course policies, contact Love Wallace, Associate Dean of the College for the Academic Code, for guidance on how to proceed. For graduate students, contact [email protected].

Still need help? Contact [email protected]