# Integrating Anthropic Claude
This is the canonical integration. The Verdifax SDK ships a one-call helper that takes a Claude prompt and response and returns a sealed manifest hash you can store alongside the model output.
## Install

```bash
pip install verdifax anthropic
```
## Two-call pattern
The simplest pattern: call Claude, then attest the result.
```python
import anthropic
import verdifax

claude = anthropic.Anthropic()  # picks up ANTHROPIC_API_KEY

prompt = "Summarize this document..."
resp = claude.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": prompt}],
)
output = resp.content[0].text

receipt = verdifax.attest_claude_response(
    prompt=prompt,
    response=output,
    program_id="a" * 64,  # your registered program id
    route_id="claude-summarize-v1",
    registry_record_hash="b" * 64,  # your registry record hash
)

print(f"Claude said: {output}")
print(f"Verdifax seal: {receipt.manifest_hash}")
```
`receipt.manifest_hash` is the artifact you keep. Same prompt + same response = same hash, every time.
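You can check that determinism directly. A minimal sketch, reusing `prompt`, `output`, and `receipt` from the example above:

```python
# Attest the identical prompt/response pair a second time; the new
# manifest hash must match the one above byte for byte.
receipt2 = verdifax.attest_claude_response(
    prompt=prompt,
    response=output,
    program_id="a" * 64,
    route_id="claude-summarize-v1",
    registry_record_hash="b" * 64,
)
assert receipt2.manifest_hash == receipt.manifest_hash
```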
## Why Verdifax tags the payload as Claude-specific
Inside the helper, Verdifax canonicalizes the prompt + response with a provider-specific prefix:
```
verdifax.helper.claude.v1
verdifax.helper.prompt.v1
<prompt>
verdifax.helper.response.v1
<response>
```
This means an identical prompt + response pair sent through `attest_openai_response()` produces a different manifest hash than through `attest_claude_response()`. That's intentional: the same text under different providers is meaningfully different evidence.
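To see the provider tagging in action, run the same pair through both helpers. A sketch, assuming `attest_openai_response()` accepts the same keyword arguments as its Claude counterpart:

```python
common = dict(
    prompt=prompt,
    response=output,
    program_id="a" * 64,
    route_id="claude-summarize-v1",
    registry_record_hash="b" * 64,
)

# Identical text, different provider prefix: the hashes diverge.
assert (
    verdifax.attest_claude_response(**common).manifest_hash
    != verdifax.attest_openai_response(**common).manifest_hash
)
```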
## Reusing a connection pool

For high-throughput servers, share one `VerdifaxClient`:
```python
import anthropic
import verdifax
from verdifax import VerdifaxClient

claude = anthropic.Anthropic()
verdifax_client = VerdifaxClient()

def attest_one(prompt: str) -> str:
    resp = claude.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    output = resp.content[0].text
    receipt = verdifax.attest_claude_response(
        prompt=prompt,
        response=output,
        program_id="a" * 64,
        route_id="claude-summarize-v1",
        registry_record_hash="b" * 64,
        client=verdifax_client,  # reuse the shared connection pool
    )
    return receipt.manifest_hash
```
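With the shared clients in place, fan-out is an ordinary thread pool. A sketch, assuming `VerdifaxClient` and the Anthropic client are safe to share across threads:

```python
from concurrent.futures import ThreadPoolExecutor

prompts = [f"Summarize document {i}..." for i in range(20)]

# Each worker reuses the module-level claude and verdifax_client above.
with ThreadPoolExecutor(max_workers=8) as pool:
    manifest_hashes = list(pool.map(attest_one, prompts))
```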
## Async (FastAPI / async workers)
```python
from verdifax import AsyncVerdifaxClient
import anthropic

claude = anthropic.AsyncAnthropic()

async def attest_async(prompt: str) -> str:
    resp = await claude.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    output = resp.content[0].text

    async with AsyncVerdifaxClient() as v:
        receipt = await v.attest(
            # NOTE: this hand-built payload is not the helper's canonical
            # format, so its manifest hash will not match what
            # attest_claude_response() produces for the same pair.
            payload=f"prompt:{prompt}\nresponse:{output}",
            program_id="a" * 64,
            route_id="claude-summarize-v1",
            registry_record_hash="b" * 64,
        )
    return receipt.manifest_hash
```
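Wired into FastAPI, the helper above becomes an ordinary async endpoint. A minimal sketch; the route and request shape are illustrative, not part of the Verdifax SDK:

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class SummarizeRequest(BaseModel):
    prompt: str

@app.post("/summarize")
async def summarize(req: SummarizeRequest):
    # attest_async is the helper defined above
    return {"manifest_hash": await attest_async(req.prompt)}
```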
## What gets sealed

The helper sends Verdifax exactly two pieces of data: the prompt text and the response text. It does not send your Anthropic API key, conversation metadata, or system prompts. If you need those sealed too, build the payload yourself and call `verdifax.attest()` directly.
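For example, a custom payload that also seals the system prompt might look like this. A sketch only: the label and field layout are made up for illustration, and it assumes `verdifax.attest()` takes the same arguments as the client method shown earlier. Whatever layout you choose, keep it stable, since any change to the payload changes the manifest hash:

```python
system_prompt = "You are a careful summarizer."  # example value

# "myapp.claude.sealed.v1" is a hypothetical label, not a Verdifax
# canonical format. Changing any byte of this layout changes the hash.
payload = "\n".join([
    "myapp.claude.sealed.v1",
    f"system:{system_prompt}",
    f"prompt:{prompt}",
    f"response:{output}",
])

receipt = verdifax.attest(
    payload=payload,
    program_id="a" * 64,
    route_id="claude-summarize-v1",
    registry_record_hash="b" * 64,
)
```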
