CertNode
Provenance

When AI output gets challenged later,
have evidence that holds up.

Sign every AI output with a cryptographic receipt at the moment of generation. Independent timestamp, independent verification, multi-model. When a customer disputes, a regulator audits, or eDiscovery hits, you have proof, not log screenshots opposing counsel will challenge.

100 signings/month free. Then $0.01/signing. No subscription.

When this becomes a real problem

Four specific situations. If any of these is plausible for your business, you need cryptographic provenance for AI output. If none of them are, you probably don't.

Customer dispute

AI gives wrong advice and a customer sues

Your support bot tells a customer something inaccurate. They rely on it, lose money, and sue. Your logs say one thing, the customer claims another. Without a third-party witness to what was actually generated, this becomes a credibility contest you may lose.

eDiscovery

Litigation hits and AI content is evidence

Opposing counsel asks how you know your AI records weren't generated for the litigation itself. With FRE 902(13)/(14) self-authenticating digital evidence, foundation cost drops from days of expert testimony to a one-paragraph certification.

Enterprise procurement

"How do you audit AI?" is question 31 of 47

"We log to Datadog" doesn't close enterprise deals. "Each AI output is signed at generation with a receipt the buyer's team can verify cryptographically without our cooperation" does. Procurement gates close more deals than features open.

Regulatory inquiry

EU AI Act Article 50 + sector regulators

Article 50 enforcement begins in August 2026. The FTC, FDA, FINRA, and state AGs are already inquiring about AI usage. Internal logs are your evidence; regulators want independently verifiable evidence. Cryptographic signing produces the latter.

Four more scenarios + the common pattern →

AI detection is losing. Provenance is winning.

Every new model release breaks the detectors. Cryptographic signatures don't care how good the model gets.

AI detection

  • Infers from writing patterns. ~50% accuracy on modern models.
  • Flags human writing as AI. Flags AI writing as human.
  • Degrades with every model release.
  • Courts and regulators don't accept statistical inference as proof.

AI provenance

  • Cryptographic signature at creation. Not inference, proof.
  • Proves exactly what existed at the moment of signing. Doesn't degrade over time.
  • Three-layer timestamp: CertNode + RFC 3161 TSA + optional Bitcoin anchor.
  • Aligned with FRE 902(13)/(14) standards for self-authentication of digital records.
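Conceptually, a provenance receipt binds a hash of the content to a timestamp and a signature, so verification is a deterministic check rather than statistical inference. A minimal sketch using Node's built-in crypto (illustrative only; CertNode's actual receipt format, fields, and key handling are assumptions here, not its real API):

```typescript
import { createHash, generateKeyPairSync, sign, verify } from 'crypto'

// Illustrative receipt shape -- NOT CertNode's actual format
interface Receipt {
  contentHash: string // SHA-256 of the AI output
  signedAt: string    // timestamp recorded at generation
  signature: string   // Ed25519 signature over hash + timestamp
}

const { publicKey, privateKey } = generateKeyPairSync('ed25519')

function signOutput(output: string): Receipt {
  const contentHash = createHash('sha256').update(output).digest('hex')
  const signedAt = new Date().toISOString()
  const payload = Buffer.from(`${contentHash}|${signedAt}`)
  return {
    contentHash,
    signedAt,
    signature: sign(null, payload, privateKey).toString('base64'),
  }
}

function verifyReceipt(output: string, r: Receipt): boolean {
  const contentHash = createHash('sha256').update(output).digest('hex')
  if (contentHash !== r.contentHash) return false // content was altered
  const payload = Buffer.from(`${r.contentHash}|${r.signedAt}`)
  return verify(null, payload, publicKey, Buffer.from(r.signature, 'base64'))
}

const receipt = signOutput('Some AI-generated text')
console.log(verifyReceipt('Some AI-generated text', receipt)) // true
console.log(verifyReceipt('Tampered text', receipt))          // false
```

This is why provenance doesn't degrade with model quality: the check is a signature verification over a hash, and no property of the text's writing style enters into it.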

How it works

One SDK call. Every AI output signed with an independent cryptographic receipt.

sign-ai-output.ts
import { CertNode } from '@certnode/sdk'
import Anthropic from '@anthropic-ai/sdk'

const claude = new Anthropic()
const cert = new CertNode({ apiKey: process.env.CERTNODE_KEY })

const response = await claude.messages.create({
  model: 'claude-opus-4-7',
  messages: [{ role: 'user', content: 'Write marketing copy' }]
})

// One extra line signs the output with provenance
const signed = await cert.signAIOutput({
  output: response.content[0].text,
  model: 'claude-opus-4-7',
  provider: 'anthropic',
})

// signed.receiptId -> verifiable at certnode.io/verify/[id]

What you get

Three-layer timestamps

CertNode internal timestamp + RFC 3161 from an independent Time Stamping Authority + optional Bitcoin anchor via OpenTimestamps. Three independent verification paths on every signature.

Platform-agnostic

REST API works with Claude, OpenAI, Mistral, Llama, or any model. MCP server for Claude-native integration. npm SDK for TypeScript and Node. Works wherever you generate content.

Court-ready evidence

Designed for admissibility under FRE 902(13)/(14) self-authenticating digital evidence. Optional PDF evidence package with chain-of-custody framing for disputes or regulatory requests.

Who this is for

We are deliberately narrow. Compliance-grade, not creator-grade.

Built for

  • Compliance teams preparing for EU AI Act Article 50 (enforces August 2026)
  • AI startups facing enterprise procurement that asks "how do we audit AI output?"
  • Legal and eDiscovery teams that may need to prove AI authorship in litigation
  • Regulated industries (healthcare, finance, government) with audit-trail requirements
  • Multi-model stacks where you need a neutral third party signing across providers

Not for you if

  • You are a creator looking to badge your work as AI-made. Try Adobe Content Credentials for that audience.
  • You want to detect AI in someone else's content. This is provenance, not detection; they are not the same thing.
  • You only need to sign Anthropic Claude output and don't expect any other model. Wait for Anthropic-native signing if/when it ships.
  • You are looking for marketplace credentials, NFT-style provenance, or creator royalty rails.

Pay-as-you-go pricing

No subscription. No tiers to predict. 100 signings every month free. Then $0.01 per signing with automatic volume discounts as you grow.

Monthly volume                  Price per signing
0 to 100 signings               Free
100 to 10,000 signings          $0.010
10,000 to 100,000 signings      $0.007
100,000 to 1,000,000 signings   $0.004
1,000,000+ signings             $0.002
Enterprise (annual prepay)      Contact sales

Verifications are free

Anyone can verify a CertNode signature at no cost, with no account required.

Volume discounts auto-apply

Your rate drops automatically as your monthly volume grows. No tier migration or sales call required.

Pay only for what you use

No subscription. A training-data run that signs 500k outputs in one week, then nothing for a month, pays for exactly 500k signings.
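Assuming the volume discounts apply marginally, with each band billed separately (an assumption; the pricing table doesn't state whether rates are graduated or flat across the whole volume), a one-off 500k run would cost roughly:

```typescript
// Graduated pricing sketch -- assumes each band is billed separately
// (an assumption; flat-rate-per-tier billing would give different totals)
const tiers: Array<{ upTo: number; rate: number }> = [
  { upTo: 100, rate: 0 },        // free tier
  { upTo: 10_000, rate: 0.010 },
  { upTo: 100_000, rate: 0.007 },
  { upTo: 1_000_000, rate: 0.004 },
  { upTo: Infinity, rate: 0.002 },
]

function monthlyCost(signings: number): number {
  let cost = 0
  let prev = 0
  for (const { upTo, rate } of tiers) {
    if (signings <= prev) break
    cost += (Math.min(signings, upTo) - prev) * rate
    prev = upTo
  }
  return cost
}

// 100 free + 9,900 x $0.010 + 90,000 x $0.007 + 400,000 x $0.004 ≈ $2,329
console.log(monthlyCost(500_000))
```

A month with zero signings then costs exactly $0, which is the point of pay-as-you-go for bursty workloads.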

Sign your first AI output in under 5 minutes.

Sign up with email. No card required for the free tier. Get your API key and start signing.