# Get Started with Tracing
This guide walks you through sending your first trace to XeroML. You can use an AI coding agent to instrument your app automatically, or follow the manual steps for your framework.
## Prerequisites

- Create a XeroML account or self-host XeroML.
- Generate API keys from your project settings. You’ll need both a secret key (`sk-xm-...`) and a public key (`pk-xm-...`).
- Set environment variables in your project:

```bash
XEROML_SECRET_KEY="sk-xm-..."
XEROML_PUBLIC_KEY="pk-xm-..."
XEROML_BASE_URL="https://cloud.xeroml.com"
```
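To catch configuration mistakes before your first run, you can check these variables at startup. A minimal stdlib-only sketch (the variable names come from the snippet above; the helper name `missing_config` is illustrative, not part of the XeroML SDK):

```python
import os

# Names required by the XeroML SDK (from the snippet above)
REQUIRED_VARS = ("XEROML_SECRET_KEY", "XEROML_PUBLIC_KEY", "XEROML_BASE_URL")

def missing_config(env=os.environ):
    """Return the required variable names that are unset or empty."""
    return [name for name in REQUIRED_VARS if not env.get(name)]

missing = missing_config()
if missing:
    print("Missing XeroML config:", ", ".join(missing))
```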
## Instrument Your Application

### OpenAI (Python)

Install the XeroML SDK:

```bash
pip install xeroml
```

Replace your OpenAI import with the XeroML wrapper:

```python
from xeroml.openai import openai

# Use exactly like the standard openai client
response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello, world!"}],
)
```

All OpenAI calls are automatically traced; no other changes are needed.
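Under the hood, drop-in wrappers like this generally work by intercepting each client call, timing it, and recording the inputs and outputs as a span before returning the result. A framework-free sketch of that interception pattern (the names `traced`, `record_span`, and `SPANS` are illustrative, not XeroML's internals):

```python
import functools
import time

SPANS = []  # a real SDK would batch these and export them to the backend

def record_span(name, inputs, output, duration_s):
    """Store one completed span; stands in for the SDK's exporter."""
    SPANS.append({"name": name, "inputs": inputs,
                  "output": output, "duration_s": duration_s})

def traced(fn):
    """Wrap a callable so every invocation is recorded as a span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        record_span(fn.__name__, {"args": args, "kwargs": kwargs},
                    result, time.perf_counter() - start)
        return result
    return wrapper

@traced
def fake_completion(prompt):
    # Stand-in for a call to a model provider
    return "echo: " + prompt

fake_completion("Hello, world!")
```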
### OpenAI (JS/TS)

```bash
npm install @xeroml/openai @xeroml/otel @opentelemetry/sdk-node
```

Initialize OpenTelemetry with the XeroML span processor:

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { XeroMLSpanProcessor } from "@xeroml/otel";
import OpenAI from "@xeroml/openai";

const sdk = new NodeSDK({
  spanProcessors: [new XeroMLSpanProcessor()],
});
sdk.start();

const openai = new OpenAI();

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Hello, world!" }],
});
```

### Vercel AI SDK

```bash
npm install ai @ai-sdk/openai @xeroml/tracing @xeroml/otel @opentelemetry/sdk-node
```

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { XeroMLSpanProcessor } from "@xeroml/otel";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const sdk = new NodeSDK({
  spanProcessors: [new XeroMLSpanProcessor()],
});
sdk.start();

const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: "Hello, world!",
  experimental_telemetry: { isEnabled: true },
});
```

### LangChain (Python)

```bash
pip install xeroml langchain-openai
```

```python
from xeroml import CallbackHandler
from langchain_openai import ChatOpenAI

handler = CallbackHandler()

llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])
response = llm.invoke("Hello, world!")
```

### LangChain (JS/TS)

```bash
npm install @xeroml/core @xeroml/langchain @xeroml/otel @langchain/openai @opentelemetry/sdk-node
```

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { XeroMLSpanProcessor } from "@xeroml/otel";
import { XeroMLCallbackHandler } from "@xeroml/langchain";
import { ChatOpenAI } from "@langchain/openai";

const sdk = new NodeSDK({
  spanProcessors: [new XeroMLSpanProcessor()],
});
sdk.start();

const handler = new XeroMLCallbackHandler();
const llm = new ChatOpenAI({ model: "gpt-4o", callbacks: [handler] });
const response = await llm.invoke("Hello, world!");
```

### Python SDK

```bash
pip install xeroml
```

Use the `@observe` decorator for automatic instrumentation:
```python
from xeroml import observe, get_client

xeroml = get_client()

@observe()
def my_agent(user_input: str) -> str:
    # Your LLM logic here
    response = call_llm(user_input)
    return response

result = my_agent("What is XeroML?")
xeroml.flush()
```

Or use the context manager for explicit control:

```python
from xeroml import get_client

xeroml = get_client()

with xeroml.start_as_current_observation(name="my-span", type="span") as obs:
    result = call_llm("Hello")
    obs.update(output=result)

xeroml.flush()
```

### JS/TS SDK

```bash
npm install @xeroml/tracing @xeroml/otel @opentelemetry/sdk-node
```

```typescript
import { NodeSDK } from "@opentelemetry/sdk-node";
import { XeroMLSpanProcessor } from "@xeroml/otel";
import { startActiveObservation, flushXeroML } from "@xeroml/tracing";

const sdk = new NodeSDK({
  spanProcessors: [new XeroMLSpanProcessor()],
});
sdk.start();

await startActiveObservation({ name: "my-trace", type: "span" }, async (obs) => {
  const result = await callLLM("Hello");
  obs.update({ output: result });
});

await flushXeroML();
```

## Additional Integrations
XeroML also supports:
| Integration | Link |
|---|---|
| LlamaIndex | /integrations/frameworks/llamaindex |
| CrewAI | /integrations/frameworks/crewai |
| Ollama | /integrations/model-providers/ollama |
| LiteLLM | /integrations/gateways/litellm |
| AutoGen | /integrations/frameworks/autogen |
| Google ADK | /integrations/frameworks/google-adk |
## Verify Your Trace
After running your instrumented code, open the XeroML dashboard and navigate to Traces. Your trace should appear within a few seconds.
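If a trace never shows up, a common cause in short-lived scripts is the process exiting before buffered spans are exported, which is why the SDK snippets above end with a flush call. A toy buffered exporter (hypothetical, stdlib-only, not XeroML's implementation) illustrating that behavior:

```python
class BufferedExporter:
    """Toy stand-in for an SDK exporter that queues spans until flushed."""

    def __init__(self):
        self.buffer = []    # spans waiting to be sent
        self.exported = []  # spans the backend has received

    def enqueue(self, span):
        self.buffer.append(span)

    def flush(self):
        # Simulate sending everything queued so far to the backend
        self.exported.extend(self.buffer)
        self.buffer.clear()

exporter = BufferedExporter()
exporter.enqueue({"name": "my-span"})
# If the process exited here, the span would be lost: exporter.exported is still empty
exporter.flush()
# After flushing, the queued span has reached the backend
```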
## Next Steps
Now that you’re sending traces, explore these features to get more value:
- Group related traces into Sessions — for multi-turn chat applications
- Separate environments — keep dev and production data isolated
- Add tags and metadata — for filtering and analysis
- Track users — monitor per-user costs and quality