Traces not appearing in the dashboard

Symptom: You initialized the SDK and made LLM calls, but the dashboard shows no data after 15–30 seconds.
Verify the API key you passed to init() starts with lqt_ and matches the one shown in Settings → API Keys for your project. A typo or a key from a different project will cause all events to be silently dropped (the SDK logs a warning, but does not throw).

Enable debug: true to see SDK activity:
lumiqtrace.init({
  apiKey: process.env.LUMIQTRACE_API_KEY!,
  debug: true, // logs flush success/failure to console
});
The SDK batches events and flushes them every 2 seconds by default. If your script or Lambda handler exits before the flush fires, events are lost. Call flush() explicitly before the process ends:
// At the end of a script or serverless handler:
await lumiqtrace.getClient().flush();
See the Serverless guide for environment-specific patterns.
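A defensive variant of the same fix: put the flush in a finally block so events are sent even when your code throws. This is a minimal sketch; runPipeline is a placeholder for your own logic.
async function main() {
  try {
    await runPipeline(); // placeholder for your own logic
  } finally {
    // runs on success and on error, so batched events are never lost
    await lumiqtrace.getClient().flush();
  }
}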
If you self-host LumiqTrace, make sure you set baseURL to your API server’s address. The default is https://api.lumiqtrace.com.
lumiqtrace.init({
  apiKey: "lqt_...",
  baseURL: "https://api.yourdomain.com", // self-hosted
});
The SDK POSTs to https://api.lumiqtrace.com/v1/ingest over HTTPS (port 443). If your environment blocks outbound traffic, you will need to allowlist this endpoint. Use debug: true to see if flush requests are failing with network errors.
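To separate a network problem from an SDK problem, you can hit the ingest endpoint directly. Any HTTP status (even a 401, which is expected without a valid key) proves the endpoint is reachable; a thrown fetch error points to a firewall, proxy, or DNS issue. A minimal check:
try {
  const res = await fetch("https://api.lumiqtrace.com/v1/ingest", { method: "POST" });
  console.log("endpoint reachable, status:", res.status); // 401 is expected without a key
} catch (err) {
  console.error("network-level failure:", err); // firewall/proxy/DNS problem
}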
lumiqtrace.init() must be called before any wrapped LLM client is created. If you call wrapOpenAI() before init(), the wrapper has no client to send events to.
// ✓ Correct — init before wrap
lumiqtrace.init({ apiKey: "lqt_..." });
const openai = lumiqtrace.wrapOpenAI(new OpenAI());

// ✗ Wrong — wrap before init
const openai = lumiqtrace.wrapOpenAI(new OpenAI());
lumiqtrace.init({ apiKey: "lqt_..." });

Cost showing $0.00 for all events

Symptom: Traces appear in the dashboard but the cost_usd column is always zero.
The SDK computes cost from a built-in pricing table keyed on the exact model string. If you use a model ID the SDK doesn’t recognize, cost_usd is set to 0 rather than throwing an error.

Check your model strings against the supported models reference. Common mismatches:
You send              SDK expects
gpt4o                 gpt-4o
claude-3-5-sonnet     claude-3-5-sonnet-20241022
gemini-pro            gemini-1.5-pro
If you use a model not in the table, open an issue on GitHub — we add new models with each release.
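Until a model is added, one workaround is to normalize aliases to the exact IDs in the table before making the call. A sketch; MODEL_ALIASES and requestedModel are illustrative names, not SDK APIs:
// Map common aliases to the exact model IDs the pricing table expects
const MODEL_ALIASES: Record<string, string> = {
  "gpt4o": "gpt-4o",
  "claude-3-5-sonnet": "claude-3-5-sonnet-20241022",
  "gemini-pro": "gemini-1.5-pro",
};
const model = MODEL_ALIASES[requestedModel] ?? requestedModel;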
If input_tokens or output_tokens are 0, cost will also be 0. This can happen if the provider response doesn’t include usage data — for example, streaming calls where you don’t consume the full stream before closing it.

For streaming calls, make sure you iterate the full response before the function returns:
const stream = await openai.chat.completions.create({ stream: true, ... });
for await (const chunk of stream) {
  // consume every chunk
}
// Usage data is in the final chunk — the SDK captures it automatically

Parent-child spans not linking (broken trace tree)

Symptom: Multi-step agent workflows show as separate disconnected traces instead of a nested tree.
The SDK uses AsyncLocalStorage (Node.js) or ContextVar (Python) to propagate trace context. If you break the async chain — for example, by using setTimeout, setImmediate, or a non-async callback — context is lost.

Always use await for async operations inside withLumiqtraceContext or withAgent:
// ✓ Context propagates correctly
await withAgent({ name: "MyAgent" }, async (agent) => {
  await agent.traceTool("fetch-data", {}, () => fetchData());
});

// ✗ Context is lost inside setTimeout
await withAgent({ name: "MyAgent" }, async (agent) => {
  setTimeout(() => fetchData(), 0); // no await — context is dropped
});
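If you genuinely need a delay, wrap the timer in a promise and await it, so the agent span stays open until the inner work completes:
// ✓ Awaiting a promisified timer keeps the span open
await withAgent({ name: "MyAgent" }, async (agent) => {
  await new Promise((resolve) => setTimeout(resolve, 0));
  await agent.traceTool("fetch-data", {}, () => fetchData());
});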
If you call wrapped LLM clients outside a withLumiqtraceContext block, each call gets its own isolated trace context and cannot link to siblings.

Wrap your entire request handler:
export async function POST(req: Request) {
  const { userId, sessionId } = await getSession(req);
  const { text } = await req.json(); // read the input text used below

  return withLumiqtraceContext({ userId, sessionId }, async () => {
    // All LLM calls inside here share the same trace context
    const summary = await summarize(text);
    const response = await generate(summary);
    return Response.json({ response });
  });
}
If you don’t await withAgent(...), the agent span may close before inner spans complete, breaking the parent-child link.
// ✓ Correct
await withAgent({ name: "MyAgent" }, async (agent) => { ... });

// ✗ Wrong — span closes immediately
withAgent({ name: "MyAgent" }, async (agent) => { ... });

Guardrails adding too much latency

Symptom: Adding guardrails: true increases your p99 latency by 200–800ms.
AI-powered guardrail types (toxicity, topic-block, custom LLM judge) call an AI model for every check, adding 80–800ms per call. For latency-sensitive hot paths, use keyword or regex guardrails instead — these typically complete in under 5ms.

Reserve AI guardrail types for lower-frequency or non-time-critical operations.
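As a rough sketch of what a hot-path wrapper restricted to cheap checks might look like (the types option name here is an assumption, not a documented field; check the guardrails reference for the real configuration shape):
const openai = lumiqtrace.wrapOpenAI(new OpenAI(), {
  guardrails: { types: ["keyword", "regex"] }, // hypothetical option: fast, non-AI checks only
});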
If your threat model only requires input validation, disable post-checks:
const openai = lumiqtrace.wrapOpenAI(new OpenAI(), {
  guardrails: { pre: true, post: false }, // skip the output check
});
If failClosed: true is set and the guardrail service is slow or intermittently failing, every guardrail-service error will block your LLM call. Switch to failClosed: false unless your threat model requires blocking whenever a check cannot complete.
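For example (assuming failClosed sits at the same level as the guardrails options; check the wrapper reference for the exact placement):
const openai = lumiqtrace.wrapOpenAI(new OpenAI(), {
  guardrails: { pre: true, post: false },
  failClosed: false, // guardrail-service errors no longer block LLM calls
});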

Events missing in serverless (Lambda / Vercel / Cloud Run)

See the complete Serverless deployment guide — this is the most common cause of missing traces and deserves a full walkthrough. Quick fix: call flush() before your handler returns.
# Python Lambda
def handler(event, context):
    result = call_llm(event["prompt"])
    lumiqtrace.flush()  # must be last line before return
    return {"body": result}
// TypeScript Lambda / Vercel
export default async function handler(req, res) {
  const result = await callLLM(req.body.prompt);
  await lumiqtrace.getClient().flush(); // must await before responding
  res.json({ result });
}

401 Unauthorized from the ingest endpoint

Cause: One of the following:
  1. The API key was revoked or rotated and the old key is still in use
  2. The x-api-key header is missing from the request
  3. The key belongs to a different project
Go to Settings → API Keys, check that the key prefix matches the one your application is using, and rotate if necessary.
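To confirm the key itself is the problem rather than the SDK, you can replay the request by hand. The empty events body below is an assumption about the payload shape; the status code is what matters here:
const res = await fetch("https://api.lumiqtrace.com/v1/ingest", {
  method: "POST",
  headers: {
    "x-api-key": process.env.LUMIQTRACE_API_KEY!, // the key your app actually uses
    "content-type": "application/json",
  },
  body: JSON.stringify({ events: [] }), // assumed payload shape
});
console.log(res.status); // 401 here confirms the key, not the SDK, is at fault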

Rate limit 429 with “Monthly quota exceeded”

This is not a per-minute rate limit — it means your organization has exhausted its monthly event quota. Retrying later in the same month will not succeed. Options:
  • Upgrade your plan from Settings → Billing
  • Reduce your SDK sampleRate to send fewer events: lumiqtrace.init({ sampleRate: 0.5 })
  • Wait for your quota to reset at the start of next month

Prompt text not showing in trace detail

Cause: Prompt text storage is disabled by default for privacy. Enable it with storePrompts: true:
lumiqtrace.init({
  apiKey: "lqt_...",
  storePrompts: true, // send and store prompt + completion text
});
Review your data retention policy before enabling this. Once stored, prompt text is subject to your plan’s retention window.

Python SDK not capturing async OpenAI calls

lumiqtrace.patch_openai() patches both sync and async clients. If async calls are still not traced, verify:
  1. patch_openai() is called before the openai module is imported anywhere in your actual call path
  2. You are using openai.AsyncOpenAI() (not a pre-initialized client from before the patch)
import lumiqtrace
lumiqtrace.init(api_key="lqt_...")
lumiqtrace.patch_openai()

# Create client AFTER patching
import openai
client = openai.AsyncOpenAI()  # ✓ this is patched

Still stuck?

If none of the above resolves your issue, enable debug: true in your SDK init and share the console output. You can open an issue on GitHub or email [email protected].