Meant to solve the problem of AI cheating, machine-written text detectors seem to bring even more challenges to educators’ and editors’ lives. How do AI checkers determine whether the writing is robot-generated or human-made, and how reliable are their judgments? Let’s look into this subject.
What are AI detectors?
AI detectors are brand-new tools that emerged in response to the ground-breaking challenge of AI chatbots, which have drastically changed the educational and writing landscape. An AI-checking tool analyzes the text for specific parameters and decides whether the writing is likely machine-generated or authentic. Why “likely”?
AI algorithms and detectors are constantly evolving. However, it’s not only chatbots that are learning to write more like humans; human writing can also have AI-characteristic traits. So, while top-notch AI checkers claim up to 95-99% accuracy, false positives (human writing marked as AI) and false negatives (AI writing passed off as human) still happen.
Hence, modern AI checkers flag potentially problematic parts of the text, signaling that a passage has traits of machine-generated content, so the teacher or editor can pay closer attention to that extract.
How do AI detectors work?
AI checkers are trained on a significant amount of chatbot-generated and human-written content to learn to distinguish between the two. As a result, the tool “knows” the traits typical for both types of text and can decide whether the piece has any signs of robot writing.
One crucial aspect of AI text generation is predictability. The model tends to choose the most probable next word, as it is trained to pick the wording people use most often in their writing. Human writing, by contrast, is far more varied and unpredictable.
The AI detector analyzes the predictability/randomness ratio, the text structure, grammar, syntax, and other stylistic details to determine whether the given content is characteristic of chatbot writing.
Let’s look at an example.
Mark goes to work every day. – Very predictable; AI would tend to choose this wording.
Mark goes to work every third Tuesday of the month. – Unpredictable but still logical. Less likely to be AI-generated.
Mark goes to work every stupid time his annoying colleague calls him. – Surprising and creative; AI would be unlikely to choose this wording.
Mark goes to work every peony, rose, and sunflower. – Highly unpredictable and makes no sense. Most likely written by a human.
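The intuition above can be sketched in code. Real detectors use large language models to estimate how probable each token is (often summarized as “perplexity”); in this minimal sketch, a tiny hand-made bigram table stands in for such a model, and all the probabilities are illustrative assumptions, not values from any actual detector.

```python
import math

# Hypothetical probabilities P(next word | "every"), chosen only to
# illustrate the four example sentences; a real detector would get
# these from a trained language model.
BIGRAMS = {
    ("every", "day"): 0.30,
    ("every", "third"): 0.05,
    ("every", "stupid"): 0.001,
    ("every", "peony"): 0.00001,
}

FLOOR = 1e-6  # probability assigned to word pairs the model has never seen


def surprisal(prev: str, word: str) -> float:
    """Negative log-probability in bits: low = predictable, high = surprising."""
    p = BIGRAMS.get((prev, word), FLOOR)
    return -math.log2(p)


# The more surprising the continuation, the less "AI-like" it looks.
for word in ["day", "third", "stupid", "peony"]:
    print(f"every {word}: surprisal = {surprisal('every', word):.1f} bits")
```

Running this prints a rising surprisal score from “day” to “peony,” mirroring the ranking in the examples: a detector averaging such scores over a whole passage would flag consistently low-surprisal text as likely machine-generated.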
How to interpret AI detector results?
The best way to use an AI detector is to treat it as a compass. The checker can highlight the parts of the text that need attention, and then one can use other tools to make a final decision. What can help?
- Analyze the parts flagged as AI. There is probably nothing to worry about if these are random words, as one would hardly ask the chatbot to generate separate words. If the detector highlights sentences or paragraphs, the chances of them being AI output increase.
- Look into how the text was written. Open the Integrito report to analyze the part of the writing marked as AI. You will see how long the author spent composing the document and whether the problematic extract was typed out or pasted in from elsewhere.
- Talk to the author. Asking questions about the content of the text helps to determine whether the person composed it themselves or used an AI output.
Can AI detectors be trusted?
To conclude this controversial topic: AI detectors can be trusted as long as the final decisions involve human judgment. Modern checkers can catch traits characteristic of AI output with up to 95-99% accuracy. The best way to use this information is to look into the flagged parts of the content and verify them with other methods.