Troubleshooting & FAQ
Traces Not Appearing
Check environment variables
The most common cause is missing or incorrect environment variables:
```shell
# Verify these are set in the process where your code runs
echo $XEROML_SECRET_KEY
echo $XEROML_PUBLIC_KEY
echo $XEROML_BASE_URL
```

The base URL must match your deployment:

- XeroML Cloud (EU): https://cloud.xeroml.com
- XeroML Cloud (US): https://us.cloud.xeroml.com
- Self-hosted: your custom URL
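As a quick sanity check, you can also verify the variables from Python before initializing the SDK. This is a minimal sketch using only the standard library; the fail-fast behavior and the helper name are our illustration, not part of the XeroML SDK:

```python
import os

REQUIRED_VARS = ("XEROML_SECRET_KEY", "XEROML_PUBLIC_KEY", "XEROML_BASE_URL")

def missing_xeroml_vars():
    """Return the names of required XeroML variables that are unset or empty."""
    return [name for name in REQUIRED_VARS if not os.environ.get(name)]

# Run this early in startup, before the SDK is initialized:
missing = missing_xeroml_vars()
if missing:
    print("Missing XeroML environment variables:", ", ".join(missing))
```

An empty string counts as missing here on purpose, since an env var set to `""` is a common misconfiguration that `echo` alone will not reveal.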
Check that flush() was called
In scripts and serverless functions, traces are buffered and must be explicitly flushed:
```python
from xeroml import get_client

xeroml = get_client()
# ... your code ...
xeroml.flush()  # Required in short-lived processes
```

Verify SDK initialization order
OpenTelemetry must be initialized before any instrumented code runs. For TypeScript/JavaScript:
```typescript
// This must run before any imports that use the instrumented libraries
const sdk = new NodeSDK({ spanProcessors: [new XeroMLSpanProcessor()] });
sdk.start();

// Now safe to import and use instrumented libraries
import { ChatOpenAI } from "@langchain/openai";
```

Traces Appear but Observations Are Missing
Check integration version compatibility
Some integrations require minimum SDK versions:
- Python SDK v3 requires XeroML platform ≥ 3.125.0
- TypeScript SDK v4 requires XeroML platform ≥ 3.95.0
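If you want to check compatibility programmatically, compare dotted version strings numerically rather than lexically: as strings, "3.95.0" sorts after "3.125.0". A minimal sketch (the version numbers are the ones listed above; the helper names are ours):

```python
def parse_version(v):
    """Turn a dotted version string like '3.125.0' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def meets_minimum(platform_version, minimum):
    """True if platform_version is at least the required minimum."""
    return parse_version(platform_version) >= parse_version(minimum)

# Numeric tuples compare correctly; raw strings do not:
print(meets_minimum("3.125.0", "3.95.0"))  # True: 3.125.0 is newer
print("3.125.0" >= "3.95.0")               # False: lexical comparison misleads
```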
Verify the wrapper is imported correctly
For the OpenAI integration, import from xeroml.openai, not directly from openai:
```python
# Correct
from xeroml.openai import openai

# Incorrect: not instrumented
import openai
```

High Memory Usage
If the SDK buffers too much data before flushing, reduce the batch size or flush interval:
```python
from xeroml import get_client

xeroml = get_client(
    flush_interval_seconds=5,  # Flush every 5 seconds (default: 15)
    max_batch_size=50,         # Smaller batches (default: 100)
)
```

Duplicate Observations
If you see duplicate spans in your traces, you may have initialized the SDK twice, or both a native integration and manual instrumentation are running for the same call.
Check for multiple SDK initialization points in your application, and ensure you’re not wrapping already-instrumented calls with additional @observe() decorators.
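One defensive pattern against double initialization is a module-level guard that makes setup idempotent, so a second call reuses the first client instead of creating another. A minimal sketch in plain Python; the `init_tracing` helper is our illustration, not a XeroML API:

```python
_client = None

def init_tracing(factory):
    """Initialize tracing exactly once; later calls return the same client.

    `factory` is whatever creates your SDK client (e.g. xeroml.get_client).
    """
    global _client
    if _client is None:
        _client = factory()
    return _client

# Every call after the first returns the same object, so double
# initialization (and duplicate span processors) cannot occur.
first = init_tracing(object)
second = init_tracing(object)
print(first is second)  # True
```

Placing this guard in the one module that owns SDK setup turns "did we initialize twice?" from a debugging question into an impossibility.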
Cost Estimates Seem Incorrect
XeroML calculates costs based on the model name and token counts reported by the API. If you're using a custom model or proxy, costs may not be calculated correctly. You can provide explicit cost overrides:
```python
with xeroml.start_as_current_observation(name="my-model-call", type="generation") as gen:
    response = call_custom_model(prompt)
    gen.update(
        output=response.text,
        usage={"input": response.input_tokens, "output": response.output_tokens},
        cost=response.cost_usd,  # Explicit cost override
    )
```

Timezone Issues
All timestamps in XeroML are stored and displayed in UTC. If your traces show unexpected timing, verify your system clock is synchronized.
For self-hosted deployments, see the self-hosting FAQ for timezone configuration issues.
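If timestamps look shifted by a whole number of hours, the usual culprit is comparing UTC values against local wall-clock time rather than a clock problem. A quick standard-library check of the offset on the machine running your code:

```python
from datetime import datetime, timezone

now_utc = datetime.now(timezone.utc)
now_local = datetime.now().astimezone()

# The UTC offset of the local zone; traces stored in UTC will appear
# shifted by exactly this amount relative to local wall-clock time.
offset_hours = now_local.utcoffset().total_seconds() / 3600
print(f"Local time is UTC{offset_hours:+.1f}")

# Converting local timestamps to UTC before comparing removes the shift:
assert abs((now_local.astimezone(timezone.utc) - now_utc).total_seconds()) < 1
```

If the shift you see in the UI matches this offset exactly, the data is fine and only the comparison was local-time based.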
Getting Help
- GitHub Discussions — questions and community support
- GitHub Issues — bug reports
- Support — for enterprise customers