

LiteLLM provides a unified interface for 100+ LLM providers. Wrap your LiteLLM client with wrapLiteLLM() to trace all calls regardless of provider.

Installation

npm install @lumiqtrace/sdk litellm

Setup

import { LiteLLM } from "litellm";
import { lumiqtrace } from "@lumiqtrace/sdk";

lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY! });

const litellm = lumiqtrace.wrapLiteLLM(new LiteLLM());
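Conceptually, a wrapper like this intercepts each method call on the client, forwards it unchanged, and records timing around it. The sketch below is a hypothetical illustration of that pattern using a `Proxy`, not the actual LumiqTrace implementation; `TraceRecord`, `wrapWithTracing`, and the fake client are all invented for demonstration.

```typescript
// Illustrative only: a minimal tracing wrapper built on Proxy.
// None of these names come from the LumiqTrace SDK.
interface TraceRecord {
  method: string;
  durationMs: number;
}

const traces: TraceRecord[] = [];

function wrapWithTracing<T extends object>(client: T): T {
  return new Proxy(client, {
    get(target, prop, receiver) {
      const value = Reflect.get(target, prop, receiver);
      if (typeof value !== "function") return value;
      // Wrap each method so every call is timed and recorded.
      return async (...args: unknown[]) => {
        const start = Date.now();
        try {
          return await value.apply(target, args);
        } finally {
          traces.push({ method: String(prop), durationMs: Date.now() - start });
        }
      };
    },
  });
}

// Fake client standing in for a LiteLLM instance.
const fakeClient = {
  async completion(req: { model: string }) {
    return { model: req.model, choices: [{ message: { content: "ok" } }] };
  },
};

const traced = wrapWithTracing(fakeClient);
const res = await traced.completion({ model: "gpt-4o" });
console.log(res.choices[0].message.content); // "ok"
```

Because the wrapper is transparent, code written against the plain client works unchanged against the traced one.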

Example

const response = await litellm.completion({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What is observability?" }],
});
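Whichever provider serves the request, LiteLLM normalizes the response to an OpenAI-style shape, so token counts can always be read the same way. The helper and sample object below are illustrative (hand-built for this sketch, not part of either library):

```typescript
// Illustrative: reading normalized usage from an OpenAI-style response.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
  total_tokens: number;
}

function extractTokens(response: { usage?: Usage }): { input: number; output: number } {
  // Fall back to zero when a provider omits usage entirely.
  return {
    input: response.usage?.prompt_tokens ?? 0,
    output: response.usage?.completion_tokens ?? 0,
  };
}

// Hand-built sample standing in for a LiteLLM completion response.
const sample = {
  usage: { prompt_tokens: 12, completion_tokens: 34, total_tokens: 46 },
};

const tokens = extractTokens(sample);
console.log(tokens); // { input: 12, output: 34 }
```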

What gets captured

| Field | Details |
| --- | --- |
| Model | The model string passed to LiteLLM (e.g. `gpt-4o`, `claude-sonnet-4-6`, `gemini/gemini-2.5-flash`) |
| Input tokens | From LiteLLM's normalized `usage` field |
| Output tokens | From LiteLLM's normalized `usage` field |
| Cost | From LiteLLM's `_hidden_params.response_cost` if available, otherwise token-based |
| Latency | Total call duration |
LiteLLM normalizes provider responses. When a provider returns cost data, LiteLLM exposes it in `_hidden_params.response_cost`; LumiqTrace uses that value when available for more accurate cost attribution, and otherwise estimates cost from token counts.
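The fallback logic can be sketched as: prefer the reported cost, else multiply token counts by per-token prices. Everything below is a hypothetical illustration, not the SDK's actual code; in particular the price table is invented, and real provider pricing varies and changes over time.

```typescript
// Illustrative cost-attribution fallback. The rates here are assumptions
// for demonstration only, not real pricing.
const PRICE_PER_TOKEN: Record<string, { input: number; output: number }> = {
  "gpt-4o": { input: 2.5 / 1_000_000, output: 10 / 1_000_000 },
};

interface LiteLLMResponse {
  model: string;
  usage: { prompt_tokens: number; completion_tokens: number };
  _hidden_params?: { response_cost?: number };
}

function attributeCost(response: LiteLLMResponse): number {
  // Prefer the provider-reported cost when LiteLLM surfaces it.
  const reported = response._hidden_params?.response_cost;
  if (typeof reported === "number") return reported;

  // Otherwise fall back to a token-based estimate.
  const prices = PRICE_PER_TOKEN[response.model];
  if (!prices) return 0;
  return (
    response.usage.prompt_tokens * prices.input +
    response.usage.completion_tokens * prices.output
  );
}

// Response carrying a provider-reported cost: used directly.
const reportedCost = attributeCost({
  model: "gpt-4o",
  usage: { prompt_tokens: 1000, completion_tokens: 500 },
  _hidden_params: { response_cost: 0.01 },
}); // 0.01

// Response without reported cost: estimated from token counts.
const estimatedCost = attributeCost({
  model: "gpt-4o",
  usage: { prompt_tokens: 1000, completion_tokens: 500 },
});
console.log(reportedCost, estimatedCost);
```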