Verdifax

What is Verdifax?

Verdifax is the cryptographic proof layer for AI. It converts every AI execution into a sealed manifest hash that any third party can independently verify — without trusting Verdifax, the model, or the operator.

In one sentence

Verdifax records every step of an AI decision, seals it into a cryptographic artifact, and produces a SHA-256 hash that anyone can re-derive from the same inputs to confirm the decision happened the way it was reported.

What problem does it actually solve

AI systems make decisions that affect money, health, and national security. When those decisions are questioned by a regulator, a court, or a counterparty, the operator typically reaches for logs. Logs have three fundamental problems:

  1. Logs can be edited. A log file is a story the operator tells about themselves. There is no cryptographic guarantee it reflects what actually ran.
  2. Models cannot explain themselves. "Show me the SHAP values" is not a compliance answer.
  3. Evidence breaks at the worst possible moment. The day a regulator asks "what AI made this decision?" is the day you discover your audit trail does not actually trace anything.

Verdifax replaces logs with a cryptographic seal that the operator cannot modify and the regulator can verify on their own machine.

What you get when you run Verdifax

Every execution produces an ExecutionManifest — a structured record of the run — which is collapsed into a single 64-character SHA-256 hash. That hash is the proof artifact. It can be:

  • Pasted into a compliance report
  • Sent to a regulator as evidence
  • Stored alongside the model output it certifies
  • Re-derived from the same inputs by anyone, at any time, on any machine

If the recomputed hash matches, the decision is provably the one Verdifax recorded. If it doesn't match, the record was tampered with.
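The seal-and-verify cycle above can be sketched in a few lines. This is an illustrative sketch only: the field names and the canonicalization scheme (sorted-key compact JSON) are assumptions for the example, not Verdifax's actual manifest schema. The point it demonstrates is the core property: any deterministic serialization hashed with SHA-256 yields a 64-character digest that anyone can recompute from the same inputs.

```python
import hashlib
import json

def seal_manifest(manifest: dict) -> str:
    """Collapse a manifest record into a 64-character SHA-256 hex digest.

    Canonical JSON (sorted keys, fixed separators) makes the digest
    reproducible on any machine, which is what lets a third party
    re-derive it independently.
    """
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# Hypothetical execution record -- a real manifest would carry more fields.
manifest = {
    "model": "example-model-v1",
    "input_hash": hashlib.sha256(b"the input payload").hexdigest(),
    "output_hash": hashlib.sha256(b"the model output").hexdigest(),
    "timestamp": "2024-01-01T00:00:00Z",
}

proof = seal_manifest(manifest)
print(len(proof))  # 64 hex characters
```

Because the serialization is deterministic, re-running `seal_manifest` on an identical record always yields the identical digest, while changing any field yields a different one.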

What Verdifax is not

Read this section. It matters.

Honesty

Verdifax does not certify that an AI decision is correct — it certifies that the decision was made the way you said it was made. Truthful operation is provable; truth of the underlying judgment is not. We say this loudly because the difference is important to regulators and to you.

Specifically, Verdifax does not:

  • Evaluate whether the model's output is accurate, fair, or safe.
  • Verify that training data was lawfully obtained.
  • Make claims about bias or alignment.
  • Replace red-teaming, evaluation, or human review.

What Verdifax provides is the substrate that makes those other claims defensible: when you say "this model produced this output for this input in this environment", Verdifax is the receipt.

Where Verdifax fits

Verdifax sits between your AI system and your auditor:

Inputs ──► AI system ──► Output
                │
                └──► Verdifax pipeline ──► Manifest Hash ──► Auditor

The auditor never has to trust your AI vendor. They re-derive the hash from the same inputs and confirm independently.
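The auditor's side of that check can be sketched as follows. Again, the record fields and canonicalization are illustrative assumptions, not Verdifax's real format; what matters is that the comparison needs nothing from the operator beyond the record and the reported hash.

```python
import hashlib
import hmac
import json

def rederive(manifest: dict) -> str:
    # Same canonicalization the operator used: sorted keys, compact separators.
    canonical = json.dumps(manifest, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify(manifest: dict, reported_hash: str) -> bool:
    # compare_digest does a constant-time comparison of the two digests.
    return hmac.compare_digest(rederive(manifest), reported_hash)

# Hypothetical record and the hash the operator reported for it.
record = {"model": "example-model-v1", "output": "approved"}
sealed = rederive(record)

assert verify(record, sealed)        # untouched record verifies
tampered = {**record, "output": "denied"}
assert not verify(tampered, sealed)  # any edit breaks the match
```

The auditor runs this on their own machine: a match confirms the record is the one that was sealed, and any post-hoc edit, however small, fails the check.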
