@deckrd/sdk
detects AI in any chat your product touches. Detection runs locally, in
your runtime. Message content never reaches our servers. Vibe coders get
a Tinder-grade trust layer in less time than it takes to read this paragraph.
Runs in Node, the browser, edge runtimes, or React Native. We never see your users' messages, and the privacy claim is auditable: grep our SDK source for `fetch(` and you'll find exactly three calls, none of which carries content.
Linguistic style, glyphs, response timing, mirroring, persona consistency, and more. When generators improve against one signal, the others still hold. The free tier ships with the npm package; paid tiers get fresh tunings via OTA every week.
`npm install @deckrd/sdk` plus a `score()` call. Drop-in for Next.js, Cloudflare Workers, Vercel Edge, Deno, React Native, Bun, or any modern Node.js runtime. No build pipeline, no external services, no model hosting.
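The call shape might look like the sketch below. Since the exact API surface isn't documented in this section, `score()` here is a local stub with a hypothetical signature (message array in, label plus confidence out), not the SDK's real export:

```typescript
// Hypothetical call shape; the real @deckrd/sdk signature may differ.
type Message = { role: "user" | "peer"; text: string; at: number };
type Verdict = { label: "human" | "ai"; confidence: number };

// Local stub standing in for the SDK's score(). The real SDK runs its
// heuristic detectors locally; this toy version just counts a few
// cliché AI phrasings as "markers".
function score(messages: Message[]): Verdict {
  const markers = messages.filter((m) =>
    /delve|tapestry|as an AI/i.test(m.text),
  ).length;
  const confidence = Math.min(1, markers / Math.max(1, messages.length));
  return { label: confidence >= 0.5 ? "ai" : "human", confidence };
}

const chat: Message[] = [
  { role: "peer", text: "hey, how was your weekend?", at: Date.now() },
  { role: "peer", text: "I would be delighted to delve into that topic.", at: Date.now() },
];

const verdict = score(chat);
console.log(verdict.label, verdict.confidence.toFixed(2)); // → ai 0.50
```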
Free tier works without a license key. Add one when you're ready to remove the attribution badge or upgrade volume.
Each `score()` call is one billable detection. You decide when to re-evaluate (every message, every fifth, or on-demand); there is no implicit cost for buffering.
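That granularity choice can be sketched as a thin wrapper around the scorer. This is hypothetical glue code, not part of the SDK: it buffers for free and only triggers a billable call on every Nth message.

```typescript
type Verdict = { label: string; confidence: number };

// Hypothetical wrapper: buffer every message, invoke the (billable)
// scorer only when the buffer length hits a multiple of n.
function scoreEveryNth<T>(
  n: number,
  scoreFn: (msgs: T[]) => Verdict,
): (msg: T) => Verdict | null {
  const buffer: T[] = [];
  return (msg: T) => {
    buffer.push(msg); // buffering alone costs nothing
    return buffer.length % n === 0 ? scoreFn(buffer) : null;
  };
}

// With n = 5, ten messages trigger exactly two billable detections.
let calls = 0;
const onMessage = scoreEveryNth<string>(5, () => {
  calls++;
  return { label: "human", confidence: 0.9 };
});
for (let i = 0; i < 10; i++) onMessage(`msg ${i}`);
console.log(calls); // → 2
```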
Don't take our word for it. Open the SDK source and grep for `fetch(`:
| Call | Payload | When | Message content? |
|---|---|---|---|
| License validation | SHA-256 of the key | Lazy, cached 24 h | never |
| Usage ping | Counts and label totals | Batched, periodic | never |
| Config OTA (paid) | One-way pull of lexicons | Weekly, plus manual refresh | never |
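The first row's hashing step can be sketched with Node's standard crypto module. `hashLicenseKey` and the key format are illustrative assumptions; the point is that only a one-way digest leaves the process, never the key itself and never any message content.

```typescript
import { createHash } from "node:crypto";

// Hypothetical helper: derive the SHA-256 digest that would be sent
// for license validation. SHA-256 is one-way, so the server can match
// the digest against known keys without ever receiving the key.
function hashLicenseKey(key: string): string {
  return createHash("sha256").update(key, "utf8").digest("hex");
}

const digest = hashLicenseKey("dk_live_example"); // invented key format
console.log(digest.length); // → 64 (hex characters, stable per key)
```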
Detection runs entirely in your runtime. We can't see your users' messages even if we wanted to. The 11 text detectors live in `@deckrd/shared`; they're heuristic functions over message arrays, with no remote inference and no model hosting.
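As an illustration of what a heuristic function over a message array can look like, here is a from-scratch response-timing heuristic in the spirit of the "response timing" signal mentioned above. It is not the SDK's actual detector; the field names and the 0.1 threshold are invented for the sketch.

```typescript
type Msg = { from: "peer" | "user"; text: string; at: number }; // at = ms epoch

// Suspiciously uniform reply latency is one weak AI signal: humans vary,
// bots often reply on a near-constant clock. Returns 1 (flag) or 0.
function timingSignal(msgs: Msg[]): number {
  const gaps: number[] = [];
  for (let i = 1; i < msgs.length; i++) {
    if (msgs[i].from === "peer" && msgs[i - 1].from === "user") {
      gaps.push(msgs[i].at - msgs[i - 1].at); // peer's reply latency
    }
  }
  if (gaps.length < 2) return 0; // not enough evidence to judge
  const mean = gaps.reduce((a, b) => a + b, 0) / gaps.length;
  const variance = gaps.reduce((a, b) => a + (b - mean) ** 2, 0) / gaps.length;
  const cv = Math.sqrt(variance) / mean; // coefficient of variation
  return cv < 0.1 ? 1 : 0; // near-constant latency → flag
}

// Three replies, each exactly 2 s after the user's message → flagged.
const uniform: Msg[] = [
  { from: "user", text: "hi", at: 0 },
  { from: "peer", text: "hello!", at: 2000 },
  { from: "user", text: "what's up", at: 3000 },
  { from: "peer", text: "not much", at: 5000 },
  { from: "user", text: "cool", at: 6000 },
  { from: "peer", text: "indeed", at: 8000 },
];
console.log(timingSignal(uniform)); // → 1
```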