## Traces not appearing in the dashboard

Symptom: You initialized the SDK and made LLM calls, but the dashboard shows no data after 15–30 seconds.

### Wrong or missing API key

Verify that the API key you pass to `init()` starts with `lqt_` and matches the one shown in Settings → API Keys for your project. A typo or a key from a different project will cause all events to be silently dropped (the SDK logs a warning, but does not throw). Enable `debug: true` to see SDK activity in the console.
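A hedged sketch of enabling debug logging, assuming `debug` is an `init()` option as the text indicates:

```typescript
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({
  apiKey: process.env.LUMIQTRACE_API_KEY,
  // Logs event batching and flush activity to the console.
  debug: true,
});
```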
### Process exited before flush
Short-lived scripts and CLIs can exit before queued events are sent. Call `flush()` explicitly before the process ends.
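A minimal TypeScript sketch of the pattern, assuming the SDK exposes an async `flush()` as described above (the `main` body is illustrative):

```typescript
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY });

async function main() {
  // ... make LLM calls ...
}

main().finally(async () => {
  // Ensure queued events are delivered before the process exits.
  await lumiqtrace.flush();
});
```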
### Wrong baseURL

If you are self-hosting or routing through a proxy, point `baseURL` at your API server's address. The default is https://api.lumiqtrace.com.
### Firewall or proxy blocking outbound requests

The SDK sends events to https://api.lumiqtrace.com/v1/ingest over HTTPS (port 443). If your environment blocks outbound traffic, you will need to allowlist this endpoint. Use `debug: true` to see whether flush requests are failing with network errors.
### SDK not initialized before LLM call

`lumiqtrace.init()` must be called before any wrapped LLM client is created. If you call `wrapOpenAI()` before `init()`, the wrapper has no client to send events to.
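A hedged sketch of the correct ordering, assuming the `wrapOpenAI` helper named above takes the raw client:

```typescript
import OpenAI from "openai";
import * as lumiqtrace from "lumiqtrace";

// 1. Initialize the SDK first.
lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY });

// 2. Only then create the wrapped client, so events have somewhere to go.
const openai = lumiqtrace.wrapOpenAI(new OpenAI());
```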
## Cost showing $0.00 for all events

Symptom: Traces appear in the dashboard but the `cost_usd` column is always zero.
### Model name not in the pricing table

If the SDK does not recognize the model string, `cost_usd` is set to 0 rather than throwing an error. Check your model strings against the supported models reference. Common mismatches:

| You send | SDK expects |
|---|---|
| `gpt4o` | `gpt-4o` |
| `claude-3-5-sonnet` | `claude-3-5-sonnet-20241022` |
| `gemini-pro` | `gemini-1.5-pro` |
### Token counts are zero

If `input_tokens` or `output_tokens` are 0, cost will also be 0. This can happen when the provider response doesn't include usage data, for example in streaming calls where you don't consume the full stream before closing it. For streaming calls, make sure you iterate the full response before the function returns.
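A sketch of full stream consumption; `stream_options: { include_usage: true }` follows OpenAI's chat API, and `openai` is assumed to be a client wrapped by the SDK as shown earlier:

```typescript
// Consume the entire stream so the final usage chunk is recorded.
async function streamedAnswer(prompt: string): Promise<string> {
  const stream = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [{ role: "user", content: prompt }],
    stream: true,
    // Ask OpenAI to append a final chunk containing token usage.
    stream_options: { include_usage: true },
  });

  let text = "";
  for await (const chunk of stream) {
    text += chunk.choices[0]?.delta?.content ?? "";
  }
  // Returning only after the loop completes means the usage chunk was seen.
  return text;
}
```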
## Parent-child spans not linking (broken trace tree)

Symptom: Multi-step agent workflows show as separate, disconnected traces instead of a nested tree.
### Context not propagated across async boundaries

The SDK uses `AsyncLocalStorage` (Node.js) or `ContextVar` (Python) to propagate trace context. If you break the async chain, for example by using `setTimeout`, `setImmediate`, or a non-async callback, context is lost. Always use `await` for async operations inside `withLumiqtraceContext` or `withAgent`.
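To see the mechanism itself, here is a self-contained Node.js illustration of `AsyncLocalStorage` (plain Node, not the Lumiqtrace SDK): code running inside `run()` can read the store, while code detached from that call chain cannot, which is exactly how a span loses its parent.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";

const als = new AsyncLocalStorage<{ traceId: string }>();

function currentTraceId(): string | undefined {
  // Any function in the call chain started by run() sees the store.
  return als.getStore()?.traceId;
}

// Inside run(): the context is visible.
const inside = als.run({ traceId: "t-123" }, () => currentTraceId());

// Outside run(): the context is gone, so a span created here
// would have no parent to link to.
const outside = currentTraceId();

console.log(inside, outside); // "t-123" undefined
```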
### withLumiqtraceContext missing in request handler

If LLM calls happen outside a `withLumiqtraceContext` block, each call gets its own isolated trace context and cannot link to siblings. Wrap your entire request handler.
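A hedged Express-style sketch, assuming `withLumiqtraceContext` accepts an async callback as the surrounding text suggests (the route and `handleChat` helper are illustrative):

```typescript
import express from "express";
import * as lumiqtrace from "lumiqtrace";

const app = express();
app.use(express.json());

app.post("/chat", async (req, res) => {
  // Everything inside this block shares one trace context,
  // so all LLM calls link into a single trace tree.
  await lumiqtrace.withLumiqtraceContext(async () => {
    const answer = await handleChat(req.body);
    res.json({ answer });
  });
});

// Illustrative stand-in for your agent logic.
async function handleChat(body: unknown): Promise<string> {
  return "…";
}
```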
### withAgent not awaited

If you do not `await withAgent(...)`, the agent span may close before inner spans complete, breaking the parent-child link.
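A hedged before/after sketch; the `withAgent(name, fn)` signature and the inner step functions are assumptions for illustration:

```typescript
import { withAgent } from "lumiqtrace"; // assumed export
declare function searchStep(): Promise<void>;    // hypothetical
declare function summarizeStep(): Promise<void>; // hypothetical

// Broken: the promise is dropped, so the agent span can close
// before the inner steps finish.
withAgent("research-agent", async () => {
  await searchStep();
  await summarizeStep();
});

// Fixed: awaiting keeps the parent span open until children complete.
await withAgent("research-agent", async () => {
  await searchStep();
  await summarizeStep();
});
```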
## Guardrails adding too much latency

Symptom: Adding `guardrails: true` increases your p99 latency by 200–800ms.
### Using AI-powered guardrails on the hot path
### Running both pre and post checks unnecessarily
### failClosed causing extra retries

When `failClosed: true` is set and the guardrail service is slow or intermittently erroring, every guardrail service error will block your LLM call. Switch to `failClosed: false` unless you specifically need fail-closed blocking behavior.
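A hedged init sketch; `guardrails` and `failClosed` are the options named in this section, and the overall option shape is an assumption:

```typescript
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({
  apiKey: process.env.LUMIQTRACE_API_KEY,
  guardrails: true,
  // Fail open: a guardrail-service error no longer blocks the LLM call.
  failClosed: false,
});
```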
## Events missing in serverless (Lambda / Vercel / Cloud Run)

See the complete Serverless deployment guide; this is the most common cause of missing traces and deserves a full walkthrough. Quick fix: call `flush()` before your handler returns.
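A hedged Lambda-style sketch of the quick fix, assuming the async `flush()` described above (the `runAgent` helper is illustrative):

```typescript
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY });

export async function handler(event: unknown) {
  try {
    const result = await runAgent(event);
    return { statusCode: 200, body: JSON.stringify(result) };
  } finally {
    // The runtime may freeze or exit right after return; flush first.
    await lumiqtrace.flush();
  }
}

// Illustrative stand-in for your agent logic.
async function runAgent(event: unknown): Promise<string> {
  return "done";
}
```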
## 401 Unauthorized from the ingest endpoint
Cause: One of the following:

- The API key was revoked or rotated and the old key is still in use
- The `x-api-key` header is missing from the request
- The key belongs to a different project
## Rate limit 429 with “Monthly quota exceeded”

This is not a per-minute rate limit: it means your organization has exhausted its monthly event quota. Retrying later in the same month will not succeed.
Options:
- Upgrade your plan from Settings → Billing
- Reduce your SDK `sampleRate` to send fewer events: `lumiqtrace.init({ sampleRate: 0.5 })`
- Wait for your quota to reset at the start of next month
## Prompt text not showing in trace detail

Cause: Prompt text storage is disabled by default for privacy. Enable it with `storePrompts: true`.
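A minimal init sketch using the `storePrompts` option named above (the option shape is an assumption):

```typescript
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({
  apiKey: process.env.LUMIQTRACE_API_KEY,
  // Opt in to storing prompt and completion text with each trace.
  storePrompts: true,
});
```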
Python SDK not capturing async OpenAI calls
lumiqtrace.patch_openai() patches both sync and async clients. If async calls are still not traced, verify:
patch_openai()is called before anyopenaimodule is imported in your actual call path- You are using
openai.AsyncOpenAI()(not a pre-initialized client from before the patch)
## Still stuck?

If none of the above resolves your issue, enable `debug: true` in your SDK init and share the console output. You can open an issue on GitHub or email [email protected].