LangChain
XeroML provides a native LangChain integration via a CallbackHandler that automatically captures all LangChain events as structured observations.
Installation:

```
pip install xeroml langchain-openai
```

Usage:

```python
from xeroml import CallbackHandler
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage

handler = CallbackHandler()
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])

response = llm.invoke([HumanMessage(content="What is XeroML?")])
```

With chains:

```python
from langchain.chains import LLMChain
from langchain.prompts import PromptTemplate

template = PromptTemplate(
    input_variables=["product"],
    template="Write a tagline for {product}.",
)

chain = LLMChain(llm=llm, prompt=template)
result = chain.run("XeroML")
```

Adding user/session context:

```python
from xeroml import CallbackHandler, propagate_attributes

def handle_request(message: str, user_id: str, session_id: str) -> str:
    handler = CallbackHandler()
    llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])
    with propagate_attributes(user_id=user_id, session_id=session_id):
        return llm.invoke([HumanMessage(content=message)]).content
```
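Helpers like `propagate_attributes` are typically built on Python's `contextvars`, so attributes set in an enclosing block are visible to any handler running inside it, including across `async` boundaries. A minimal, dependency-free sketch of that pattern (names are hypothetical; this is not XeroML's implementation):

```python
# Sketch of context-propagated attributes using stdlib contextvars.
# Hypothetical names; illustrates the pattern, not XeroML's actual code.
from contextlib import contextmanager
from contextvars import ContextVar

_attributes: ContextVar[dict] = ContextVar("attributes", default={})

@contextmanager
def propagate_attributes(**attrs):
    """Merge attrs into the ambient context for the duration of the block."""
    token = _attributes.set({**_attributes.get(), **attrs})
    try:
        yield
    finally:
        _attributes.reset(token)  # restore the outer context on exit

def current_attributes() -> dict:
    """What a callback handler would read when recording an observation."""
    return dict(_attributes.get())
```

Nested blocks merge rather than replace, which is why setting `user_id` at the request boundary and `session_id` deeper in the call stack both end up on the same observation.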
The TypeScript integration ships as separate packages and exports spans through OpenTelemetry.

Installation:

```
npm install @xeroml/langchain @xeroml/otel @opentelemetry/sdk-node @langchain/openai
```

Setup:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { XeroMLSpanProcessor } from "@xeroml/otel";

const sdk = new NodeSDK({
  spanProcessors: [new XeroMLSpanProcessor()],
});
sdk.start();
```

Usage:

```typescript
import { XeroMLCallbackHandler } from "@xeroml/langchain";
import { ChatOpenAI } from "@langchain/openai";
import { HumanMessage } from "@langchain/core/messages";

const handler = new XeroMLCallbackHandler();
const llm = new ChatOpenAI({ model: "gpt-4o", callbacks: [handler] });

const response = await llm.invoke([new HumanMessage("What is XeroML?")]);
```

With propagation:

```typescript
import { propagateAttributes } from "@xeroml/tracing";

async function handleRequest(message: string, userId: string, sessionId: string) {
  return await propagateAttributes({ userId, sessionId }, async () => {
    const handler = new XeroMLCallbackHandler();
    const llm = new ChatOpenAI({ callbacks: [handler] });
    return await llm.invoke([new HumanMessage(message)]);
  });
}
```

What Gets Captured
The LangChain integration captures:
- Chain start/end with inputs and outputs
- LLM call inputs (prompt), outputs (completion), and token usage
- Tool calls and their results
- Agent reasoning steps
- Retriever queries and retrieved documents
- Errors at any step