The precipiq.integrations.langchain module ships a BaseCallbackHandler subclass that hooks into LangChain’s callback system and logs every LLM, tool, and chain invocation as a Precipiq decision record. Attach it once to a runnable and every call is automatically captured — inputs, outputs, model name, and run ancestry — with no changes to your pipeline logic.
The adapter is import-time optional. Nothing from the langchain package is imported unless you explicitly import precipiq.integrations.langchain, so the base precipiq SDK stays thin for users who don’t need it.
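The lazy-import pattern this describes can be sketched as follows. This is an assumed illustration of the technique, not the actual `precipiq` source; the `require` helper name is hypothetical.

```python
# Sketch of an import-time-optional dependency check (illustrative;
# the real adapter's internals may differ). `import precipiq` alone
# never touches langchain -- only the integration module runs this.
import importlib.util


def require(package: str) -> None:
    """Raise a helpful ImportError if an optional dependency is absent."""
    if importlib.util.find_spec(package) is None:
        raise ImportError(
            f"precipiq.integrations.langchain requires {package}: "
            f"pip install {package}"
        )
```

The integration module would call `require("langchain_core")` at the top, so the base SDK imports cleanly even when LangChain is not installed.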
Install
```bash
pip install precipiq langchain-core
```
Usage
```python
from precipiq import Precipiq
from precipiq.integrations.langchain import PrecipiqLangChainCallback

pq = Precipiq(api_key="pq_test_demo_key_REPLACE_ME")

callback = PrecipiqLangChainCallback(
    pq,
    agent_id="qa-bot",        # appears on every decision
    action_type="llm_call",   # default for LLM events
)

# Attach to any LangChain runnable:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(callbacks=[callback])
llm.invoke("What is the AI Consequences Ledger?")
```
What each callback produces
Every LangChain callback — on_llm_start, on_chain_start, on_tool_start, on_llm_end, and others — produces a decision record containing:
inputs — the prompt or tool input passed to the model.
outputs — the LLM completion, tool output, or chain result.
confidence — defaults to 0.5 because LangChain doesn’t expose token log-probabilities through the callback interface. Override this via the default_confidence constructor argument.
metadata — LangChain’s run_id, parent_run_id, and the model name, so you can reconstruct the full call graph in the Precipiq dashboard.
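The fields above can be pictured as a plain dict. This is an illustrative approximation of what the adapter assembles on `on_llm_end`, using only the field names listed; the SDK's actual record type and helper names are assumptions.

```python
# Illustrative only: a dict-shaped approximation of a decision record
# built from an on_llm_end callback. Not the SDK's real record class.
from uuid import uuid4


def build_decision_record(prompt, completion, run_id, parent_run_id=None,
                          model_name="gpt-4o-mini", confidence=0.5):
    return {
        "inputs": {"prompt": prompt},
        "outputs": {"completion": completion},
        # 0.5 default: no token logprobs reach the callback interface
        "confidence": confidence,
        "metadata": {
            "run_id": str(run_id),
            "parent_run_id": str(parent_run_id) if parent_run_id else None,
            "model_name": model_name,
        },
    }


record = build_decision_record("What is 2+2?", "4", uuid4())
```

The `run_id`/`parent_run_id` pair is what lets the dashboard reconstruct nested chain and tool calls as a tree.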
Streaming responses
Streaming LLM calls fire on_llm_new_token for every token in the completion. Sending one decision per token would flood the Precipiq API and fill your ledger with thousands of partial records.
The adapter avoids this by aggregating tokens into an internal buffer keyed by run_id. When on_llm_end fires, the buffer is flushed as a single consolidated decision containing the full completion. The ledger records the complete output exactly once, regardless of how many tokens it took to produce.
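The buffering behavior can be sketched in a few lines. This is a minimal illustration of the aggregate-then-flush pattern described above, assuming a per-`run_id` buffer; the adapter's real internals may differ.

```python
# Minimal sketch of per-run token aggregation (illustrative, not the
# adapter's actual implementation).
from collections import defaultdict


class TokenBuffer:
    def __init__(self):
        self._parts = defaultdict(list)

    def on_llm_new_token(self, token, run_id):
        # Accumulate silently -- no decision is emitted per token.
        self._parts[run_id].append(token)

    def on_llm_end(self, run_id):
        # Flush once: join the buffered tokens into the full completion
        # and clear the buffer for this run.
        return "".join(self._parts.pop(run_id, []))


buf = TokenBuffer()
for tok in ["Hel", "lo", ", ", "world"]:
    buf.on_llm_new_token(tok, run_id="run-1")
completion = buf.on_llm_end("run-1")  # one consolidated string
```

Keying by `run_id` means concurrent streaming calls cannot interleave their tokens into each other's records.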
Handler options
```python
callback = PrecipiqLangChainCallback(
    pq,
    agent_id="qa-bot",
    action_type="llm_call",
    default_confidence=0.5,           # used when logprobs unavailable
    human_in_loop=False,              # set True if a human gate is upstream
    include_prompts_in_inputs=True,   # False => metadata only (for PII)
)
```
| Option | Default | Description |
|---|---|---|
| `agent_id` | required | Labels every decision with this agent identifier. |
| `action_type` | `"llm_call"` | The action type stamped on LLM events. |
| `default_confidence` | `0.5` | Confidence score used when LangChain doesn’t expose log-probabilities. |
| `human_in_loop` | `False` | Set to `True` if a human review gate exists upstream of this LLM call. |
| `include_prompts_in_inputs` | `True` | Set to `False` to record only metadata IDs and token counts, not the prompt text. |
If your prompts contain personally identifiable information (PII), pass include_prompts_in_inputs=False. The hash chain still proves the decision occurred even without the prompt text stored in the ledger.
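A metadata-only record can still commit to the exact prompt via a digest. The helper below is a hedged sketch of that idea, not the SDK's API; the function name and field names are illustrative.

```python
# Illustrative sketch: what might be recorded in place of prompt text
# when include_prompts_in_inputs=False. Not the SDK's actual behavior.
import hashlib


def redact_prompt(prompt: str) -> dict:
    return {
        # A digest proves which prompt was used without storing it.
        "prompt_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
        "prompt_chars": len(prompt),
    }
```

The digest lets you later verify that a given prompt matches the ledger entry, while the raw PII never leaves your process.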