Wrap your OpenAI client instance with `wrapOpenAI()`. Every call your existing code makes is traced automatically; no changes to call sites are required.
## Installation
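Assuming the SDK ships as an npm package named `lumiqtrace` (the package name is inferred from the docs URL, not confirmed here), install it alongside the official `openai` package:

```bash
npm install lumiqtrace openai
```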
## Setup
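A minimal initialization sketch. `lumiqtrace.init()` and the `storePrompts` option are referenced elsewhere on this page; the import style and the `apiKey` option are assumptions for illustration:

```ts
import * as lumiqtrace from "lumiqtrace";

lumiqtrace.init({
  apiKey: process.env.LUMIQTRACE_API_KEY, // hypothetical option name
  storePrompts: false, // set true to store prompt text with traces; off by default for privacy
});
```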
## Example
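A sketch of the wrapping pattern, assuming `wrapOpenAI` is exported from the `lumiqtrace` package (the export path is an assumption):

```ts
import OpenAI from "openai";
import { wrapOpenAI } from "lumiqtrace"; // export path is an assumption

// Wrap once; use the client exactly as before.
const openai = wrapOpenAI(new OpenAI());

// This call is traced automatically: model, tokens, cost, latency, status.
const response = await openai.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Say hello in one sentence." }],
});

console.log(response.choices[0].message.content);
```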
## What gets captured
| Field | Details |
|---|---|
| Model | `gpt-4o`, `gpt-4o-mini`, `o1`, etc. |
| Input tokens | From `usage.prompt_tokens` |
| Output tokens | From `usage.completion_tokens` |
| Cost | Calculated from token counts and OpenAI pricing |
| Latency | Total time from request to response |
| Status | `success`, or `error` with the error message |
| Finish reason | `stop`, `length`, `tool_calls`, `content_filter` |
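To make the fields concrete, here is a hypothetical trace record; the field names mirror the table above, but the actual storage schema is not documented here:

```ts
// Illustrative only: not the actual LumiqTrace schema.
const exampleTrace = {
  model: "gpt-4o-mini",
  inputTokens: 412,   // from usage.prompt_tokens
  outputTokens: 128,  // from usage.completion_tokens
  costUsd: 0.000139,  // derived from token counts and OpenAI pricing
  latencyMs: 843,     // total time from request to response
  status: "success",  // or "error" plus the error message
  finishReason: "stop",
};
```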
## Streaming
`wrapOpenAI()` supports streaming responses. Token counts are accumulated from the stream chunks.
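A streaming sketch using the wrapped client from the example above; the call itself is standard OpenAI SDK usage:

```ts
const stream = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "Write a haiku about tracing." }],
  stream: true,
});

for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}
// The wrapper accumulates token counts from the chunks as they arrive.
```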
## Embeddings
Embedding calls are traced with model, token count, and latency. Cost is calculated from the embedding model's pricing.
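Assuming the same wrapped client, a traced embeddings call looks like any other embeddings request:

```ts
const embedding = await openai.embeddings.create({
  model: "text-embedding-3-small",
  input: "LumiqTrace records model, token count, and latency for this call.",
});

console.log(embedding.data[0].embedding.length); // vector dimensionality, e.g. 1536
```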
Note: `storePrompts` must be `true` in `lumiqtrace.init()` to store prompt text alongside traces. It is disabled by default for privacy.

## Next steps

- Agent tracing – wrap multi-step agent logic
- Manual spans – add custom spans around non-LLM operations
- PII redaction – automatically redact sensitive data before storage