False positives happen
Studies put false positive rates between 3.8 and 17.1 percent. Formal writing, technical writing, and non-native English all get flagged more than they deserve. Never accuse anyone based on a single number.
Free tool
Paste any English text. Get a score in under a second. Works on output from GPT, Claude, Gemini, Llama, and Mistral.
How it works
Drop in 30 to 1,000 words. The text leaves your browser only to reach the detector.
Sapling reads the patterns: word predictability, sentence rhythm, token-level fingerprints. Every major model family is in scope.
A score lands on screen with a verdict. Every sentence gets its own number so you can see where the robot lives.
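The flow above boils down to one API call. A minimal sketch in Python, assuming Sapling's public AIDetect endpoint and response fields (`score`, `sentence_scores`); verify both against Sapling's own docs before relying on them:

```python
import json
import os
import urllib.request

# Endpoint path and response field names are assumptions based on
# Sapling's public documentation; check them before use.
API_URL = "https://api.sapling.ai/api/v1/aidetect"

def build_request(text: str, key: str) -> urllib.request.Request:
    """Build the POST request without sending it."""
    payload = json.dumps({"key": key, "text": text}).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=payload, headers={"Content-Type": "application/json"}
    )

key = os.environ.get("SAPLING_API_KEY", "")
req = build_request("Paste 30 to 1,000 words of English here.", key)

if key:  # only hit the network when a real key is configured
    with urllib.request.urlopen(req) as resp:
        result = json.loads(resp.read())
        # Assumed shape: an overall score plus per-sentence scores.
        print(result.get("score"), result.get("sentence_scores"))
```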
What we look for
Modern detectors do not chase typos or punctuation. They measure statistical fingerprints. The kind that survive a quick rewrite.
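As a toy illustration of one such fingerprint (not Sapling's actual method), sentence-length variance is a crude stand-in for the "sentence rhythm" signal: human prose tends to swing between short and long sentences, while raw model output stays flat.

```python
import statistics

# Toy proxy only: real detectors use model-based token probabilities,
# not this crude measure. "Burstiness" here is just the spread of
# sentence lengths in words.
def burstiness(text: str) -> float:
    sentences = [
        s for s in text.replace("!", ".").replace("?", ".").split(".")
        if s.strip()
    ]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # too little signal, same reason we ask for 30+ words
    return statistics.pstdev(lengths)

flat = "The cat sat here. The dog sat there. The bird sat up."
varied = ("Stop. The storm rolled in off the coast before anyone had "
          "time to tie the boats down. We ran.")
```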
Reading the score
Scores live on a sliding scale. We group them into three bands so you can act on the result without doing math.
| Band | Score | What it means |
|---|---|---|
| Likely human | 0 to 29% | Reads like a person wrote it. The math is on your side. |
| Mixed signals | 30 to 69% | Some bits look human, other bits look manufactured. This is what edited AI drafts usually score. |
| Likely AI | 70 to 100% | The fingerprints are loud. Treat it as a flag, not a verdict. |
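The bands in the table reduce to a tiny lookup. A sketch, assuming the score arrives as a 0 to 100 percentage:

```python
# Thresholds mirror the published bands above.
def band(score: float) -> str:
    if score < 30:
        return "Likely human"
    if score < 70:
        return "Mixed signals"
    return "Likely AI"
```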
Where it breaks
Honest beats hype. Here are the rough edges every workflow needs to plan around.
False positives are the big one: studies put rates between 3.8 and 17.1 percent, and formal writing, technical writing, and non-native English all get flagged more than they deserve. Never accuse anyone based on a single number.
Edit an AI draft hard enough and it walks past the detector. Adversarial tricks like swapping characters or breaking syntax can knock accuracy down by around 30 percent.
Short passages do not carry enough signal. Anything under 30 words is mostly noise. That is why we ask for more.
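The input gate implied here is simple to state. A sketch, using the 30 and 1,000 word limits from the tool's own copy (the function name is ours):

```python
# Reject passages too short to carry signal, cap at the advertised limit.
MIN_WORDS, MAX_WORDS = 30, 1000

def accepts(text: str) -> bool:
    n = len(text.split())
    return MIN_WORDS <= n <= MAX_WORDS
```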
Use the score next to your judgment, your sources, and your version history. The detector starts the conversation. Your team finishes it.
Who uses this
Spot check freelance and agency drops before they ship. Catch the ones that need a real rewrite.
Audit your blog and landing pages. Google does not love lazy AI spam, and neither does your audience.
Open a conversation with a student or co author. Never close one based on a single number.
Sanity check cover letters and writing samples. Pair the score with everything else you already know about the candidate.
Inside COMAS’
Sapling does not only power this free tool. It sits at the end of every COMAS’ article. If a draft scores too high, the agents rewrite it before you ever see it.
Hunts down real sources, statistics, and studies before a single sentence hits the page.
Drafts the article around your keyword, search intent, stance, and tone samples.
Tightens structure, checks facts, and drops AIO summaries in for AI Overview citation.
Rewrites the result in your brand voice. Voice learned from past articles you uploaded.
Scores the finished draft with the same model you just used. Above threshold, it loops back through the humanizer. Below, it ships to your dashboard.
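The final check-and-loop step can be sketched as a short control loop. `detect`, `humanize`, and the 0.70 threshold are hypothetical stand-ins for the real agents, not the actual pipeline:

```python
# Hypothetical sketch of the check-and-rewrite loop described above.
THRESHOLD = 0.70

def finalize(draft, detect, humanize, max_rounds=3):
    """Loop the draft through the humanizer until it scores under threshold."""
    for _ in range(max_rounds):
        if detect(draft) < THRESHOLD:
            return draft          # below threshold: ships to the dashboard
        draft = humanize(draft)   # above threshold: back through the humanizer
    return draft                  # stop looping after max_rounds either way
```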
Most tools generate. We generate, then audit. That is why a COMAS’ article rarely needs rescuing.
FAQ
Real questions. Honest answers. No marketing fluff.
How accurate is it?
Nowhere near 100 percent. Studies put modern detectors at roughly 79 to 94 percent accuracy. False positive rates sit between 3.8 and 17.1 percent. Treat the score as one input. We surface the raw model score too so you can judge for yourself.
Can it tell which model wrote it?
Sort of. Sapling knows the patterns from OpenAI GPT, Anthropic Claude, Google Gemini, Meta Llama, and Mistral. We do not name the model. We just tell you how AI-shaped the writing feels.
Does it catch AI text that a human edited?
Partly. The more a person rewrote it, the lower the score gets. That is the point. Heavily edited AI is half human work. A score in the 30 to 69 mixed band usually means an AI draft that someone polished.
Can it flag human writing as AI?
Sometimes. Detectors lean on uniform, formal patterns. Those are common in non-native English and academic writing. Never use one score as evidence of cheating.
How many checks do I get?
The tool is free and open. We cap each IP at one check per day so the lights stay on for everyone. It resets at 00:00 UTC. Need more? Sign up.
Do you store my text?
No. Your text travels to Sapling, gets scored, and stops there. We save a one-way hash of your IP plus the date so the daily cap works. Never the raw IP. Never the text.
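A sketch of that one-way daily-cap key, assuming SHA-256 over IP plus date; the exact scheme is not published, so treat this as an illustration of the idea, not the implementation:

```python
import hashlib
from datetime import datetime, timezone

# One-way key: same IP and same UTC day always map to the same digest,
# so the cap works without ever storing the raw IP. The "ip:day" layout
# is an assumption for illustration.
def daily_key(ip: str, day: str = "") -> str:
    day = day or datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return hashlib.sha256(f"{ip}:{day}".encode()).hexdigest()
```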
How do I get my score down?
Be honest: you should not be chasing this. Rewrite the draft so it actually says something you mean. That is what COMAS’ was built for. Long form articles in your voice, with your sources, ready to publish.
Why do some sentences score high when the whole document reads human?
Per-sentence scores use a different formula than the overall number. A long human article can still hold a few short stock sentences that score high on their own. The full document still reads as human.
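A toy model shows why this happens. If the overall number weights sentences by length (an assumption for illustration, not the real formula), one short high-scoring stock sentence barely moves it:

```python
# Toy illustration only: the length-weighted average below is an
# assumption, not Sapling's actual scoring formula.
def overall(sentence_scores):
    """sentence_scores: list of (word_count, ai_score) pairs, scores in 0-1."""
    total = sum(n for n, _ in sentence_scores)
    return sum(n * s for n, s in sentence_scores) / total

# Three long human sentences plus one short stock sentence scoring 0.95.
doc = [(40, 0.05), (35, 0.10), (4, 0.95), (30, 0.08)]
```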
Detection is step one
COMAS’ is a team of agents that writes long form articles in your brand voice. Trained on your past writing. Fact checked. Ready to publish.