Free tool

Free AI Detector.

Paste any English text. Get a score in under a second. Works on output from GPT, Claude, Gemini, Llama, and Mistral.

How it works

Three steps. Thirty seconds.

  1. Paste your text

    Drop in 30 to 1,000 words. The text leaves your browser only to reach the detector.

  2. Run the detector

    Sapling reads the patterns. Word predictability, sentence rhythm, token level fingerprints. Every major model family is in scope.

  3. Read the score

    A score lands on screen with a verdict. Every sentence gets its own number so you can see where the robot lives.

What we look for

What gives a robot away.

Modern detectors do not chase typos or punctuation. They measure statistical fingerprints. The kind that survive a quick rewrite.

Perplexity
How surprising your word choices are. Models grab the safest next word almost every time. People grab the weird one more often than they think.
Burstiness
How much your sentence length jumps around. AI tends to write everything at the same medium length. People drop a one word punch then a long meandering thought.
Token probability shape
The pattern of word by word odds across the whole passage. Even after a quick edit, the math still smells like a model.
Model fingerprints
Each model family has tics. The detector learned what GPT, Claude, Gemini, Llama, and Mistral all sound like at the token level.
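Burstiness is the easiest of these signals to picture in code. The sketch below is a toy proxy, not Sapling's actual formula: it treats burstiness as the spread of sentence lengths, so uniform machine rhythm scores near zero.

```python
import re
from statistics import pstdev

def burstiness(text: str) -> float:
    """Spread (population std. dev.) of sentence lengths, in words.
    Low = uniform, machine-like rhythm; high = human variation.
    A toy proxy, not the detector's real formula."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return pstdev(lengths) if len(lengths) > 1 else 0.0

flat = "The cat sat down. The dog ran off. The bird flew away."  # 4 / 4 / 4 words
varied = "Stop. The dog ran off across the field before anyone could call it back."  # 1 / 13
```

On the two samples above, the flat text scores 0.0 and the varied one 6.0. A one word punch next to a long meandering thought is exactly the variation the real signal rewards.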

Reading the score

What does a 42 percent score mean?

Scores live on a sliding scale. We group them into three bands so you can act on the result without doing math.

Band | Score | What it means
Likely human | 0 to 29% | Reads like a person wrote it. The math is on your side.
Mixed signals | 30 to 69% | Some bits look human, other bits look manufactured. This is what edited AI drafts usually score.
Likely AI | 70 to 100% | The fingerprints are loud. Treat it as a flag, not a verdict.
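The bands reduce to a few comparisons. A minimal sketch of the mapping; the thresholds come from the table, but the function name is ours, not part of any API:

```python
def score_band(score: float) -> str:
    """Map a 0-100 detector score to its band. Guidance, not a verdict."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score < 30:
        return "Likely human"
    if score < 70:
        return "Mixed signals"
    return "Likely AI"
```

So the 42 percent score from the heading above lands in mixed signals: some bits human, some bits manufactured.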

Where it breaks

No detector is 100 percent accurate. Including this one.

Honest beats hype. Here are the rough edges every workflow needs to plan around.

False positives happen

Studies put false positive rates between 3.8 and 17.1 percent. Formal writing, technical writing, and non native English all get flagged more than they deserve. Never accuse anyone based on a single number.

False negatives too

Edit an AI draft hard enough and it walks past the detector. Adversarial tricks like swapping characters or breaking syntax can knock accuracy down by around 30 percent.

Length matters

Short passages do not carry enough signal. Anything under 30 words is mostly noise. That is why we ask for more.

One signal of many

Use the score next to your judgement, your sources, and your version history. The detector starts the conversation. Your team finishes it.

Who uses this

Built for editors. Not for inquisitors.

  • Editors and content teams

    Spot check freelance and agency drops before they ship. Catch the ones that need a real rewrite.

  • SEO and marketing leads

    Audit your blog and landing pages. Google does not love lazy AI spam, and neither does your audience.

  • Educators and researchers

    Open a conversation with a student or co author. Never close one based on a single number.

  • Recruiters and hiring teams

    Sanity check cover letters and writing samples. Pair the score with everything else you already know about the candidate.

Inside COMAS

We run this same check inside our pipeline.

Sapling does not just power this free tool. It sits at the end of every COMAS article. If a draft scores too high, the agents rewrite it before you ever see it.

  1. Researcher

    Hunts down real sources, statistics, and studies before a single sentence hits the page.

  2. Writer

    Drafts the article around your keyword, search intent, stance, and tone samples.

  3. Editor

Tightens structure, checks facts, and drops in AIO summaries for AI Overview citations.

  4. Humanizer

Rewrites the result in your brand voice, learned from the past articles you uploaded.

  5. Sapling quality gate

    Scores the finished draft with the same model you just used. Above threshold, it loops back through the humanizer. Below, it ships to your dashboard.
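The gate is a loop, not a single check. Here is a hypothetical sketch of that control flow; `detector_score`, `humanize`, the threshold, and the pass limit are illustrative stand-ins, not COMAS internals:

```python
AI_THRESHOLD = 70  # assumed cutoff, matching the "Likely AI" band
MAX_PASSES = 3     # assumed safety limit so the loop always terminates

def quality_gate(draft, detector_score, humanize):
    """Rewrite a draft until it scores below threshold, then ship it."""
    for _ in range(MAX_PASSES):
        if detector_score(draft) < AI_THRESHOLD:
            return draft         # below threshold: ships to the dashboard
        draft = humanize(draft)  # above threshold: back through the humanizer
    return draft                 # pass limit reached: hand off as-is
```

A draft that starts hot gets rewritten until the score drops; a draft that starts clean passes through untouched.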

Most tools generate. We generate, then audit. That is why a COMAS article rarely needs rescuing.

FAQ

AI detector questions, answered.

Real questions. Honest answers. No marketing fluff.

How accurate is this AI detector?

Nowhere near 100 percent. Studies put modern detectors at roughly 79 to 94 percent accuracy. False positive rates sit between 3.8 and 17.1 percent. Treat the score as one input. We surface the raw model score too so you can judge for yourself.

Can it tell if text is from ChatGPT, Claude, or Gemini?

Sort of. Sapling knows the patterns from OpenAI GPT, Anthropic Claude, Google Gemini, Meta Llama, and Mistral. We do not name the model. We just tell you how AI-shaped the writing feels.

Does it work on edited or paraphrased AI text?

Partly. The more a person rewrote it, the lower the score gets. That is the point. Heavily edited AI is half human work. A score in the 30 to 69 mixed band usually means an AI draft that someone polished.

Will it flag non native English writing as AI?

Sometimes. Detectors lean on uniform, formal patterns. Those are common in non native English and academic writing. Never use one score as evidence of cheating.

Why is there a one check per day limit?

The tool is free and open. We cap each IP at one check per day so the lights stay on for everyone. It resets at 00:00 UTC. Need more? Sign up.

Do you store the text I paste?

No. Your text travels to Sapling, gets scored, and stops there. We save a one way hash of your IP plus the date so the daily cap works. Never the raw IP. Never the text.
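That hash-plus-date scheme is simple to sketch. Assuming SHA-256 (the actual hash function is not specified), the key rotates at 00:00 UTC, which is why the cap resets then:

```python
import hashlib
from datetime import datetime, timezone

def daily_cap_key(ip: str) -> str:
    """One-way key for the daily rate cap: hash of IP + UTC date.
    The raw IP is never stored, and the key changes at 00:00 UTC."""
    today = datetime.now(timezone.utc).strftime("%Y-%m-%d")
    return hashlib.sha256(f"{ip}:{today}".encode()).hexdigest()
```

The same IP maps to the same key all day, so the cap holds; the hash cannot be reversed, so the IP itself is gone.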

How can I make AI text undetectable?

Honest answer: you should not be chasing this. Rewrite the draft so it actually says something you mean. That is what COMAS was built for: long form articles in your voice, with your sources, ready to publish.

Why are some sentences flagged when the overall score is low?

Per sentence scores use a different formula than the overall number. A long human article can still hold a few short stock sentences that score high on their own. The full document still reads as human.

Detection is step one

Stop guessing. Start shipping.

COMAS is a team of agents that writes long form articles in your brand voice. Trained on your past writing. Fact checked. Ready to publish.

Try COMAS free