# Integrating LangChain
LangChain pipelines are composed of `Runnable` objects. The
Verdifax pattern: invoke the chain, then attest a canonical
representation of the input and output.
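The examples on this page use a simple `q:`/`a:` layout for that canonical representation. One way to keep it deterministic is a small helper that normalizes whitespace before attestation; this `canonical_payload` function is a hypothetical sketch, not part of Verdifax or LangChain:

```python
def canonical_payload(question: str, output: str) -> str:
    """Build a deterministic payload string from a chain's input and output.

    Normalizes line endings and strips surrounding whitespace so the same
    logical exchange always produces the same attestation payload.
    """
    q = question.replace("\r\n", "\n").strip()
    a = output.replace("\r\n", "\n").strip()
    return f"q:{q}\na:{a}"
```

Whatever form you choose, apply it identically at attest time and at verify time, or the hashes will not match.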
## Install

```bash
pip install verdifax langchain langchain-anthropic
```
## Wrap a chain

```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser
from langchain_anthropic import ChatAnthropic
import verdifax

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a compliance officer."),
    ("user", "{question}"),
])
model = ChatAnthropic(model="claude-sonnet-4-6")
chain = prompt | model | StrOutputParser()

question = "Does an AI-generated denial letter need to be human-reviewed under HIPAA?"
output = chain.invoke({"question": question})

receipt = verdifax.attest(
    payload=f"q:{question}\na:{output}",
    program_id="a" * 64,
    route_id="langchain-compliance-v1",
    registry_record_hash="b" * 64,
)
print(receipt.manifest_hash)
```
## A reusable Runnable

For reuse, build a small `Runnable` that attests as a side effect:
```python
from langchain_core.runnables import RunnableLambda
import verdifax

def attest_step(payload: dict) -> dict:
    """Attest the chain output and pass the receipt hash downstream."""
    receipt = verdifax.attest(
        payload=str(payload["output"]),
        program_id=payload["program_id"],
        route_id=payload["route_id"],
        registry_record_hash=payload["registry_record_hash"],
    )
    return {**payload, "manifest_hash": receipt.manifest_hash}

# The dict literal becomes an implicit RunnableParallel: every branch
# receives the same input, and results are merged under the branch keys.
attestable_chain = (
    {
        "output": chain,
        "program_id": lambda _: "a" * 64,
        "route_id": lambda _: "langchain-v1",
        "registry_record_hash": lambda _: "b" * 64,
    }
    | RunnableLambda(attest_step)
)

result = attestable_chain.invoke({"question": "..."})
print(result["output"])
print(result["manifest_hash"])
```
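The dict at the head of that chain is LangChain's implicit `RunnableParallel`: each branch runs against the same input and the results are merged into one dict before reaching `attest_step`. The data flow can be mimicked in plain Python, which is handy for unit tests that don't need LangChain installed (the `fan_out` helper below is illustrative, not how LangChain is implemented):

```python
def fan_out(input_, branches):
    """Run each branch callable against the same input and collect the
    results under the branch's key, mimicking RunnableParallel."""
    return {key: fn(input_) for key, fn in branches.items()}

merged = fan_out(
    {"question": "Is this compliant?"},
    {
        # Stand-in for the real chain invocation.
        "output": lambda x: "stub answer to " + x["question"],
        "program_id": lambda _: "a" * 64,
        "route_id": lambda _: "langchain-v1",
        "registry_record_hash": lambda _: "b" * 64,
    },
)
```

`merged` now has the exact shape `attest_step` expects, so the step can be tested in isolation.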
## With LangGraph

For multi-step agents, attest at the graph's terminal node so the entire trajectory is sealed in one hash. Concatenate the agent's transcript and pass it as the payload.
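The transcript payload can be as simple as a role-tagged concatenation of the messages accumulated in the graph state. A minimal sketch, assuming messages are `{"role": ..., "content": ...}` dicts (this `transcript_payload` helper and the message shape are assumptions, not a LangGraph or Verdifax API):

```python
def transcript_payload(messages: list) -> str:
    """Flatten an agent trajectory into one deterministic string so the
    whole run is sealed under a single manifest hash."""
    return "\n".join(f"{m['role']}:{m['content']}" for m in messages)

trajectory = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": "ok"},
]
payload = transcript_payload(trajectory)
```

Pass `payload` to `verdifax.attest(...)` in the terminal node, exactly as in the single-chain example above.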
