# Integrating OpenAI

This follows the same shape as the Claude integration: one helper, one call after the model response.
## Install

```bash
pip install verdifax openai
```
## Basic pattern

```python
from openai import OpenAI

import verdifax

client = OpenAI()  # picks up OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": "Classify this support ticket..."}
    ],
)
output = resp.choices[0].message.content

receipt = verdifax.attest_openai_response(
    prompt="Classify this support ticket...",
    response=output,
    program_id="a" * 64,
    route_id="openai-classify-v1",
    registry_record_hash="b" * 64,
)
print(receipt.manifest_hash)
```
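To see conceptually what a receipt's hash buys you — the same inputs always reproduce the same digest — here is a toy, stdlib-only sketch. `toy_manifest_hash` is hypothetical and unrelated to verdifax's actual manifest format:

```python
import hashlib
import json


def toy_manifest_hash(prompt: str, response: str, route_id: str) -> str:
    # Canonical JSON of the fields, then SHA-256. Illustration only:
    # verdifax's real manifest scheme is not documented here.
    canonical = json.dumps(
        {"prompt": prompt, "response": response, "route_id": route_id},
        sort_keys=True,
    )
    return hashlib.sha256(canonical.encode()).hexdigest()


h1 = toy_manifest_hash("Classify this support ticket...", "billing", "openai-classify-v1")
h2 = toy_manifest_hash("Classify this support ticket...", "billing", "openai-classify-v1")
assert h1 == h2        # same inputs, same hash
assert len(h1) == 64   # hex-encoded SHA-256
```

Anyone holding the same three inputs can recompute the digest and check it against the stored receipt.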
## Streaming

If you stream tokens, attest the assembled response after the stream completes:
```python
chunks = []
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
    stream=True,
)
for event in stream:
    # Guard against chunks with an empty choices list before reading the delta.
    if event.choices and event.choices[0].delta.content:
        chunks.append(event.choices[0].delta.content)
output = "".join(chunks)

receipt = verdifax.attest_openai_response(
    prompt=prompt,
    response=output,
    program_id="a" * 64,
    route_id="openai-stream-v1",
    registry_record_hash="b" * 64,
)
```
## Function calling / tool use

Tool calls produce structured output rather than free text. Attest the canonical JSON of the tool call:
```python
import json

resp = client.chat.completions.create(
    model="gpt-4o",
    tools=[...],
    messages=[...],
)
tool_call = resp.choices[0].message.tool_calls[0]

# Parse the arguments string, then re-serialize deterministically.
canonical = json.dumps({
    "name": tool_call.function.name,
    "arguments": json.loads(tool_call.function.arguments),
}, sort_keys=True)

receipt = verdifax.attest(
    payload=canonical,
    program_id="a" * 64,
    route_id="openai-tools-v1",
    registry_record_hash="b" * 64,
)
```
The `sort_keys=True` is important: it orders object keys deterministically, so identical tool calls always serialize to the same string and therefore seal to the same hash.
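To see why this matters, here is a standalone sketch (stdlib only, no verdifax) showing that two logically identical tool calls with different key order produce the same canonical string, and hence the same digest, once `sort_keys=True` is applied:

```python
import hashlib
import json

# Two logically identical tool calls, with keys in different order.
call_a = {"name": "lookup", "arguments": {"ticket_id": 42, "verbose": True}}
call_b = {"arguments": {"verbose": True, "ticket_id": 42}, "name": "lookup"}

canonical_a = json.dumps(call_a, sort_keys=True)
canonical_b = json.dumps(call_b, sort_keys=True)
assert canonical_a == canonical_b  # key order no longer matters

# Same canonical string, therefore the same SHA-256 digest.
digest_a = hashlib.sha256(canonical_a.encode()).hexdigest()
digest_b = hashlib.sha256(canonical_b.encode()).hexdigest()
assert digest_a == digest_b
```

Without `sort_keys=True`, the two dumps would differ byte-for-byte and seal to different hashes even though the calls are semantically identical.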
