New AI Detector Provides Transparency in Identifying AI-Written Content

Copyleaks' AI Logic feature tested on various texts including Trump administration report
Copyleaks, an AI detection company, has introduced a new feature called AI Logic that provides greater transparency in identifying AI-generated content. The feature not only determines whether text was likely written by artificial intelligence but also explains the reasoning behind its assessment by highlighting specific phrases and passages. This approach resembles plagiarism detection software, showing when text matches known AI-generated content or contains phrasing statistically more common in AI writing than in human composition.
The company tested its technology on the Trump administration's Make America Healthy Again Commission report, which had previously faced scrutiny for allegedly containing references to nonexistent academic studies. Copyleaks' system identified 20.8% of the report as potentially AI-written, flagging sections about children's mental health that contained phrases appearing more frequently in AI-generated text. The Trump administration had defended the report, attributing issues to minor citation and formatting errors while maintaining the substance remained valid.
KEY POINTS
- AI detector tested on Trump report
- System found 20.8% potential AI content
- False positives remain a challenge
Copyleaks CEO Alon Yamin explained that the technology uses two approaches: AI Source Match, which compares text against a database of known AI-generated content, and AI Phrases, which identifies terms statistically more likely to appear in AI writing. The system was tested on various texts including classic literature, partially AI-written articles, and completely fabricated news stories, with varying degrees of accuracy in detection and explanation.
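The AI Phrases idea can be illustrated with a toy statistical sketch. The function below scores each n-gram in a text by how much more often it appears in an AI-written reference corpus than in a human-written one, using a smoothed log-likelihood ratio. This is only a minimal illustration of the general technique; the corpus counts, smoothing, and scoring are assumptions for demonstration, not Copyleaks' actual model.

```python
import math
from collections import Counter

def phrase_scores(text, ai_counts, human_counts, n=3, smoothing=1.0):
    """Score each n-gram in `text` by how much likelier it is in an
    AI-written corpus than a human-written one.

    ai_counts / human_counts: Counter of n-gram -> frequency in each corpus.
    Returns a dict mapping each n-gram to a log-likelihood ratio:
    positive means the phrase is more typical of the AI corpus.
    """
    words = text.lower().split()
    ai_total = sum(ai_counts.values()) or 1
    human_total = sum(human_counts.values()) or 1
    scores = {}
    for i in range(len(words) - n + 1):
        gram = " ".join(words[i:i + n])
        # Laplace smoothing avoids division by zero for unseen phrases.
        p_ai = (ai_counts.get(gram, 0) + smoothing) / (ai_total + smoothing)
        p_human = (human_counts.get(gram, 0) + smoothing) / (human_total + smoothing)
        scores[gram] = math.log(p_ai / p_human)
    return scores
```

A detector built on this idea would flag n-grams whose score exceeds some threshold; a real system would use far larger corpora and a calibrated model rather than raw counts.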
Despite improvements in transparency, the technology still presents challenges, including false positives in which human-written text is incorrectly flagged as AI-generated. Yamin acknowledged these limitations, stating that the goal is not to be the ultimate arbiter of truth but to provide tools that help humans make better assessments about content authenticity. The increasing volume and speed of content production make identifying trustworthy information ever more difficult, highlighting the need for both technological tools and human judgment.
When using AI detectors, users should examine what specific content is being flagged as potentially AI-written. An occasional suspicious phrase may be coincidental, while multiple flagged paragraphs might warrant closer scrutiny. Yamin advised human writers concerned about false positives to maintain their authentic voice, emphasizing the importance of preserving the human element in writing.