LiteLLM provides a unified interface for 100+ LLM providers. Wrap your LiteLLM client with wrapLiteLLM() to trace all calls regardless of provider.
## Installation

## Setup
## Example
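The SDK's import path and exact signature are not shown in this excerpt, so the sketch below stands in a minimal wrapLiteLLM of its own to illustrate the pattern: it times each completion call and records the model plus LiteLLM's normalized usage counts. The stub response replaces a real litellm dependency so the example is self-contained; the real wrapLiteLLM() may differ.

```python
import time
from types import SimpleNamespace

def wrapLiteLLM(completion_fn, on_trace):
    """Hypothetical stand-in for LumiqTrace's wrapLiteLLM(): times each
    call and reports the model and normalized usage to on_trace."""
    def traced(**kwargs):
        start = time.monotonic()
        response = completion_fn(**kwargs)
        latency = time.monotonic() - start
        usage = response.usage  # LiteLLM's normalized usage field
        on_trace({
            "model": kwargs.get("model"),
            "input_tokens": usage.prompt_tokens,
            "output_tokens": usage.completion_tokens,
            "latency_s": latency,
        })
        return response
    return traced

# Stub standing in for litellm.completion so the sketch runs offline.
def fake_completion(**kwargs):
    return SimpleNamespace(
        usage=SimpleNamespace(prompt_tokens=12, completion_tokens=34)
    )

traces = []
completion = wrapLiteLLM(fake_completion, traces.append)
completion(model="gpt-4o", messages=[{"role": "user", "content": "hi"}])
print(traces[0]["model"])         # gpt-4o
print(traces[0]["input_tokens"])  # 12
```

With a real client, every call through the wrapped function is traced the same way, whichever provider the model string routes to.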
## What gets captured
| Field | Details |
|---|---|
| Model | The model string passed to LiteLLM (e.g. gpt-4o, claude-sonnet-4-6, gemini/gemini-2.5-flash) |
| Input tokens | From LiteLLM’s normalized usage field |
| Output tokens | From LiteLLM’s normalized usage field |
| Cost | From LiteLLM’s _hidden_params.response_cost if available, otherwise token-based |
| Latency | Total call duration |
LiteLLM normalizes provider responses. If a provider returns cost data, LiteLLM exposes it in
_hidden_params.response_cost — LumiqTrace uses this when available for more accurate cost attribution.
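The fallback described above can be sketched as follows. This is an illustration of the logic, not LumiqTrace's actual implementation; the helper name and per-token prices are made up for the example.

```python
from types import SimpleNamespace

def resolve_cost(response, input_tokens, output_tokens,
                 price_per_input_token, price_per_output_token):
    """Illustrative cost resolution: prefer LiteLLM's provider-reported
    cost, otherwise fall back to a token-based estimate."""
    hidden = getattr(response, "_hidden_params", {}) or {}
    cost = hidden.get("response_cost")
    if cost is not None:
        return cost  # provider-reported, most accurate
    # No reported cost: estimate from token counts and assumed prices.
    return (input_tokens * price_per_input_token
            + output_tokens * price_per_output_token)

with_cost = SimpleNamespace(_hidden_params={"response_cost": 0.0042})
without_cost = SimpleNamespace()  # no _hidden_params at all

print(resolve_cost(with_cost, 100, 50, 2.5e-6, 1e-5))     # 0.0042
print(resolve_cost(without_cost, 100, 50, 2.5e-6, 1e-5))  # token-based estimate
```

Token-based estimates depend on having correct per-token prices for the model, which is why the provider-reported figure is preferred whenever LiteLLM surfaces one.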