# Get Started with Prompt Management
This guide walks through creating a prompt in XeroML and fetching it in your application using the SDK.
## Prerequisites

- Create a XeroML account or self-host XeroML
- Set your API credentials as environment variables:

```bash
export XEROML_SECRET_KEY="sk-xm-..."
export XEROML_PUBLIC_KEY="pk-xm-..."
export XEROML_BASE_URL="https://cloud.xeroml.com"
```
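Before running the examples below, it can help to confirm the variables are actually visible to your process. This is a minimal sketch using only the standard library; the variable names come from this guide, and the check itself is not part of the XeroML SDK:

```python
import os

# Credentials the XeroML SDK is expected to read from the environment
# (names taken from the Prerequisites section above)
REQUIRED = ["XEROML_SECRET_KEY", "XEROML_PUBLIC_KEY", "XEROML_BASE_URL"]

def missing_credentials(env=None):
    """Return the names of required variables that are unset or empty."""
    env = os.environ if env is None else env
    return [name for name in REQUIRED if not env.get(name)]
```

An empty list means all three credentials are set; anything else names what is still missing.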
## Create a Prompt
### Via the UI

- Open your XeroML project and navigate to Prompts
- Click New Prompt and give it a name (e.g., `movie-critic`)
- Choose the type: Text or Chat
- Write your prompt using `{{variable_name}}` for dynamic values
- Click Save; this creates version 1
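The `{{variable_name}}` placeholders are filled in later, when you compile the prompt with concrete values. Conceptually, compilation is a plain text substitution, roughly like the sketch below. This is an illustration of the idea, not the SDK's actual implementation:

```python
import re

def compile_prompt(template: str, **variables: str) -> str:
    """Replace each {{name}} placeholder with the matching keyword argument.

    Placeholders with no matching variable are left untouched; that
    behavior is an assumption for this sketch, not XeroML-specific.
    """
    def replace(match: re.Match) -> str:
        name = match.group(1)
        return str(variables.get(name, match.group(0)))

    return re.sub(r"\{\{\s*(\w+)\s*\}\}", replace, template)
```

For example, `compile_prompt("Review {{movie}}", movie="Inception")` yields `"Review Inception"`.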
### Via the SDK
Text prompt (Python):

```python
from xeroml import get_client

xeroml = get_client()

xeroml.create_prompt(
    name="movie-critic",
    type="text",
    prompt="As a {{criticlevel}} movie critic, write a review of {{movie}} in {{num_words}} words.",
    labels=["production"],
)
```

Chat prompt (Python):

```python
xeroml.create_prompt(
    name="support-agent",
    type="chat",
    prompt=[
        {"role": "system", "content": "You are a helpful support agent for {{company}}."},
        {"role": "user", "content": "{{user_message}}"},
    ],
    labels=["production"],
)
```

TypeScript:

```typescript
import { XeroMLClient } from "@xeroml/client";

const xeroml = new XeroMLClient();

await xeroml.createPrompt({
  name: "movie-critic",
  type: "text",
  prompt: "As a {{criticlevel}} movie critic, write a review of {{movie}} in {{num_words}} words.",
  labels: ["production"],
});
```

cURL:

```bash
curl -X POST https://cloud.xeroml.com/api/public/v2/prompts \
  -u "pk-xm-...:sk-xm-..." \
  -H "Content-Type: application/json" \
  -d '{
    "name": "movie-critic",
    "type": "text",
    "prompt": "As a {{criticlevel}} movie critic, write a review of {{movie}} in {{num_words}} words.",
    "labels": ["production"]
  }'
```

## Fetch and Use a Prompt
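If the XeroML API is unreachable at fetch time, you may want your application to degrade gracefully rather than fail. The sketch below is a hypothetical wrapper, not part of the SDK: it assumes `get_prompt` raises an exception on failure (the exact exception type is not specified in this guide, so it catches broadly), and it falls back to a stand-in object exposing the same `compile()` call used in the examples below:

```python
import re

# Bundled fallback template, mirroring the prompt created earlier in this guide
FALLBACK_TEMPLATE = (
    "As a {{criticlevel}} movie critic, write a review of {{movie}} "
    "in {{num_words}} words."
)

class FallbackPrompt:
    """Minimal stand-in exposing the compile() interface used in this guide."""

    def __init__(self, template: str):
        self.template = template

    def compile(self, **variables: str) -> str:
        # Simple {{name}} substitution; unknown placeholders stay as-is
        return re.sub(
            r"\{\{\s*(\w+)\s*\}\}",
            lambda m: str(variables.get(m.group(1), m.group(0))),
            self.template,
        )

def get_prompt_or_fallback(client, name: str):
    try:
        return client.get_prompt(name)
    except Exception:
        # Network or auth failure: degrade gracefully to the bundled template
        return FallbackPrompt(FALLBACK_TEMPLATE)
```

Whether a stale local fallback is acceptable depends on your application; for prompts that change often, failing loudly may be the better choice.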
Python (OpenAI):

```python
from xeroml import get_client

xeroml = get_client()

# Fetch the production version
prompt = xeroml.get_prompt("movie-critic")

# Compile variables
compiled = prompt.compile(
    criticlevel="seasoned",
    movie="Inception",
    num_words="200",
)

# Use with OpenAI
from xeroml.openai import openai

response = openai.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": compiled}],
)
```

TypeScript (OpenAI):

```typescript
import { XeroMLClient } from "@xeroml/client";
import OpenAI from "@xeroml/openai";

const xeroml = new XeroMLClient();
const openai = new OpenAI();

const prompt = await xeroml.getPrompt("movie-critic");
const compiled = prompt.compile({
  criticlevel: "seasoned",
  movie: "Inception",
  num_words: "200",
});

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: compiled }],
});
```

Python (LangChain):

```python
from xeroml import get_client, CallbackHandler
from langchain_openai import ChatOpenAI

xeroml = get_client()
prompt = xeroml.get_prompt("movie-critic")
compiled = prompt.compile(criticlevel="seasoned", movie="Inception", num_words="200")

handler = CallbackHandler()
llm = ChatOpenAI(model="gpt-4o", callbacks=[handler])
response = llm.invoke(compiled)
```

TypeScript (Vercel AI SDK):

```typescript
import { XeroMLClient } from "@xeroml/client";
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

const xeroml = new XeroMLClient();
const prompt = await xeroml.getPrompt("movie-critic");
const compiled = prompt.compile({ criticlevel: "seasoned", movie: "Inception", num_words: "200" });

const { text } = await generateText({
  model: openai("gpt-4o"),
  prompt: compiled,
  experimental_telemetry: { isEnabled: true },
});
```

## Next Steps
- Data Model — understand types, labels, and versioning in depth
- Version Control & Labels — deploy prompts without code changes
- Link to Traces — correlate prompt versions with quality metrics
- Playground — test prompts interactively