The LumiqTrace Python SDK instruments your Python LLM application with a single init() call plus a per-provider monkey patch (patch_openai(), patch_anthropic(), and so on). A background daemon thread flushes events without blocking your application, and an atexit handler flushes any remaining events on process exit.
Installation
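Install the SDK from PyPI. The package name below is assumed to match the import name:

pip install lumiqtrace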
Initialization
Call lumiqtrace.init() once at startup before making any LLM calls. All parameters are keyword arguments.
import lumiqtrace
lumiqtrace.init(
    api_key="lqt_your_api_key_here",
    environment="production",
    store_prompts=False,
    sample_rate=1.0,
    debug=False,
)
Parameters:

api_key (string, required)
Your LumiqTrace API key. Must start with lqt_. Find this in your project settings.

environment (string, default "production")
Environment label attached to every event. Use "staging" or "development" to separate traces by environment.

store_prompts (boolean, default False)
When True, prompt text and completion text are stored alongside traces. Disabled by default for privacy.

sample_rate (float, default 1.0)
Fraction of events to send, between 0.0 and 1.0. Set to 0.1 to trace 10% of calls, as in the example below.

debug (boolean, default False)
When True, logs internal flush errors to stdout. Enable during integration testing.
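For example, a staging configuration that samples 10% of calls and logs flush errors during integration testing might look like this (the values are illustrative):

import lumiqtrace

lumiqtrace.init(
    api_key="lqt_your_api_key_here",
    environment="staging",   # keep staging traces separate from production
    sample_rate=0.1,         # trace 10% of calls
    debug=True,              # log internal flush errors to stdout
)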
Tracing OpenAI calls
Call lumiqtrace.patch_openai() after init() to automatically trace all OpenAI calls — both synchronous and asynchronous — without modifying your existing code.
import openai
import lumiqtrace
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()
client = openai.OpenAI()
# Sync
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
# Async
import asyncio

async def main():
    async_client = openai.AsyncOpenAI()
    response = await async_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Translate 'hello' to Spanish."}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())
Tracing other providers
The Python SDK includes patches for Anthropic, Google Generative AI, and OpenRouter. Call the appropriate patch function after init().
Anthropic
import anthropic
import lumiqtrace
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_anthropic()
client = anthropic.Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-6",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Explain monads in plain English."}],
)
print(message.content[0].text)
Google Generative AI
import google.generativeai as genai
import lumiqtrace
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_google()
genai.configure(api_key="your_google_api_key")
model = genai.GenerativeModel("gemini-2.5-flash")
response = model.generate_content("What is quantum entanglement?")
print(response.text)
OpenRouter
import openai
import lumiqtrace
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openrouter()
client = openai.OpenAI(
    base_url="https://openrouter.ai/api/v1",
    api_key="your_openrouter_api_key",
)
response = client.chat.completions.create(
    model="anthropic/claude-sonnet-4-6",
    messages=[{"role": "user", "content": "Hello from OpenRouter!"}],
)
print(response.choices[0].message.content)
Context enrichment
Use with_lumiqtrace_context to attach a user_id, session_id, or custom tags to all traces generated within a function scope.
import openai
import lumiqtrace
from lumiqtrace import with_lumiqtrace_context

lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()

client = openai.OpenAI()
def handle_request(user_id: str, session_id: str, message: str) -> str:
    with with_lumiqtrace_context(user_id=user_id, session_id=session_id, tags={"feature": "chat"}):
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": message}],
        )
        return response.choices[0].message.content
Set user_id and session_id on every request that involves a logged-in user. This enables per-user cost breakdown and session replay in the LumiqTrace dashboard.
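For background jobs with no logged-in user, tags alone work too. A minimal sketch; the tag keys and the job function are illustrative:

from lumiqtrace import with_lumiqtrace_context

with with_lumiqtrace_context(tags={"job": "nightly-digest", "trigger": "cron"}):
    # Any traced LLM calls made in here carry these tags
    run_nightly_digest()  # hypothetical job function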
Prompt management
The PromptClient fetches versioned prompts from the LumiqTrace prompt library at runtime. Results are cached locally for 5 minutes.
from lumiqtrace import PromptClient, get_client
import lumiqtrace
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()
prompts = PromptClient(get_client())
# Fetch by label
prompt = prompts.get("support-reply", label="production")
# Compile variables
text = prompts.compile(prompt, {
    "customer_name": "Alex",
    "order_id": "ORD-7821",
})
import openai
client = openai.OpenAI()
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": text},
        {"role": "user", "content": "I need help with my order."},
    ],
)
See the TypeScript prompt management docs for the full API reference — the Python PromptClient exposes the same methods: get, list, create, update_labels, compile, and clear_cache.
In Python, method and parameter names use snake_case: update_labels, clear_cache, prompt_type, commit_message.
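A sketch of two of the other calls; beyond the method names listed above, the exact signatures are assumptions:

# List available prompts in the library (signature assumed)
for p in prompts.list():
    print(p)

# Drop the 5-minute local cache so the next get() refetches
prompts.clear_cache()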
Tracing functions with @lumiqtrace.trace
Use the @lumiqtrace.trace decorator to trace any sync or async function as a custom span. The name parameter sets the operation name in the trace view.
import openai
import lumiqtrace

lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()

async_client = openai.AsyncOpenAI()
sync_client = openai.OpenAI()

@lumiqtrace.trace(name="rag-pipeline")
async def run_rag(query: str) -> str:
    # All OpenAI calls inside here inherit the same trace context
    docs = await retrieve_documents(query)  # your own retrieval step
    response = await async_client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": f"Use these docs: {docs}"},
            {"role": "user", "content": query},
        ],
    )
    return response.choices[0].message.content

@lumiqtrace.trace(name="answer-question")
def answer_sync(question: str) -> str:
    # Works with synchronous functions too
    response = sync_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content
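Decorated functions are called as usual; the decorator does not change their signatures:

import asyncio

print(answer_sync("What is the capital of France?"))
print(asyncio.run(run_rag("Summarize the refund policy.")))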
Agent tracing with with_agent
Use the with_agent context manager to trace a multi-step agent workflow. It creates an agent span and exposes methods to log plans, trace tool calls, and record handoffs to other agents.
import lumiqtrace
from lumiqtrace import with_agent
lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()
with with_agent(
    name="CustomerSupportAgent",
    role="specialist",
    framework="custom",
    tools=[{"name": "lookup_order", "description": "Lookup order by id"}],
) as agent:
    # Log the steps the agent plans to take
    agent.log_plan(["Lookup order", "Check refund policy", "Draft response"])

    # Trace a tool call — wraps the callable and records args + result
    order = agent.trace_tool(
        "lookup_order",
        {"order_id": "123"},
        lambda: {"id": "123", "status": "delivered", "total": 49.99},
    )

    # Record a handoff to another agent
    agent.delegate_to("RefundPolicyAgent", "refund requested")
with_agent parameters:

name (string, required)
Display name for this agent in traces and the agent registry.

role (string, default "specialist")
Role label such as "coordinator", "specialist", or "planner".

framework (string)
Framework name, e.g. "custom", "langchain", "google-adk".

tools (list)
List of tool definition objects with name and description fields. These appear in the tool discovery view.
AgentContext methods:
agent.log_plan(steps: list[str]) — records a planning span with the steps list
agent.trace_tool(name, args, callable) — wraps a callable, records args and return value as a tool span
agent.delegate_to(target_name, reason) — records a handoff span
All LLM calls made inside a with_agent block automatically inherit the agent’s trace context. You do not need to pass the agent object to nested functions.
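For example, a helper defined elsewhere needs no agent reference. Assuming client is the patched OpenAI client from the earlier examples, any call it makes inside the block is attributed to the agent:

def draft_reply(order: dict) -> str:
    # No agent object needed: the ambient trace context applies
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": f"Draft a short reply about order {order['id']}"}],
    )
    return response.choices[0].message.content

with with_agent(name="CustomerSupportAgent") as agent:
    reply = draft_reply({"id": "123"})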
Flushing in serverless environments
In short-lived processes such as AWS Lambda, Vercel Functions, or Cloud Run, the background flush thread may not have time to send events before the process exits. Call lumiqtrace.flush() explicitly before the handler returns to guarantee delivery.
import openai
import lumiqtrace

lumiqtrace.init(api_key="lqt_your_api_key_here")
lumiqtrace.patch_openai()

# Created once per cold start and reused across warm invocations
openai_client = openai.OpenAI()

def lambda_handler(event, context):
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": event["prompt"]}],
    )
    result = response.choices[0].message.content

    # Ensure all buffered events are sent before Lambda freezes
    lumiqtrace.flush()
    return {"statusCode": 200, "body": result}
Omitting lumiqtrace.flush() in serverless environments is the most common cause of missing traces. Always call it before your handler returns.
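To guarantee the flush even when the handler raises, move it into a finally block. A sketch of the same handler:

def lambda_handler(event, context):
    try:
        response = openai_client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": event["prompt"]}],
        )
        return {"statusCode": 200, "body": response.choices[0].message.content}
    finally:
        # Runs on success and on error, so buffered events are always sent
        lumiqtrace.flush()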