

Before diving into the dashboard, it helps to understand how LumiqTrace structures the data it collects from your agents. Everything starts with a single operation — an LLM call, a tool invocation, an agent turn — and builds upward through a clear hierarchy.

The data hierarchy

Organization
  └── Project  (one API key per project)
        └── Trace  (one end-to-end agent run)
              └── Span  (one step: LLM call, tool call, agent turn…)
Every piece of data in LumiqTrace belongs to a project, and every project belongs to an organization. Within a project, individual operations are captured as spans, and related spans are grouped into traces. A trace represents one complete agent run from start to finish.

Event

An event is the most fundamental unit of data in LumiqTrace. Every time your instrumented code executes an operation — an LLM call, a tool invocation, an agent turn — the SDK captures a single event and sends it to the ingest endpoint. Events are immutable once stored.

Identity fields

  • event_id — unique UUIDv4 for this event
  • trace_id — links this event to others in the same agent run
  • span_id — unique ID for this specific operation
  • parent_span_id — set when this step is nested inside another span

Operation fields

  • provider — "openai", "anthropic", "google", or "custom"
  • model — the exact model string, e.g. "gpt-4o"
  • span_kind — what type of operation this is (see below)
  • operation — "chat", "embed", "tool", "agent", or "custom"

Performance fields

  • latency_ms — total time from start to finish
  • ttft_ms — time to first token (streaming LLM calls only)
  • input_tokens — tokens consumed in the prompt
  • output_tokens — tokens generated in the completion
  • cached_tokens — prompt tokens served from provider cache

Cost and status

  • cost_usd — computed by the SDK from per-model token pricing
  • status — "success", "error", "timeout", "rate_limited", or "cancelled"
  • error_code — provider error code when status is not "success"
  • error_message — up to 500 characters, sanitized before storage
Events also carry context fields for filtering: environment, user_id, session_id, and a tags map of arbitrary string key-value pairs.
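Put together, a single captured event might look like the sketch below. The field names come from this page; the values and the exact wire format are illustrative assumptions, not a real ingest payload.

```typescript
// A sketch of one event, using only fields documented above.
// Values are illustrative, not taken from a real ingest payload.
const event = {
  // Identity fields
  event_id: "a1b2c3d4-0000-4000-8000-000000000001",
  trace_id: "trace-42",
  span_id: "span-7",
  parent_span_id: "span-1",
  // Operation fields
  provider: "openai",
  model: "gpt-4o",
  span_kind: "llm",
  operation: "chat",
  // Performance fields
  latency_ms: 1830,
  ttft_ms: 240,
  input_tokens: 1200,
  output_tokens: 350,
  cached_tokens: 800,
  // Cost and status
  cost_usd: 0.0071,
  status: "success",
  // Context fields for filtering
  environment: "production",
  user_id: "user-123",
  session_id: "sess-456",
  tags: { feature: "support-bot" },
};
```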
Prompt text is not stored by default. The SDK always computes a SHA-256 hash of the prompt (prompt_hash) for deduplication, but the raw text is only sent if you explicitly set storePrompts: true when calling lumiqtrace.init().
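The prompt hashing described above can be sketched with Node's built-in crypto module. Treating the prompt as raw UTF-8 before hashing is an assumption here; the SDK's exact canonicalization is not documented on this page.

```typescript
import { createHash } from "node:crypto";

// Sketch of the prompt_hash computation: a SHA-256 digest of the
// prompt text. Hashing the raw UTF-8 string is an assumption.
function promptHash(prompt: string): string {
  return createHash("sha256").update(prompt, "utf8").digest("hex");
}

// Identical prompts always produce the same hash, which is what
// makes deduplication possible without storing the raw text.
```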

Span

A span is an event that carries trace context. Every event is technically a span — the term emphasizes that the event participates in a parent-child relationship with other events in the same agent run. Two fields link spans together:
  • span_id — a unique identifier for this specific operation
  • parent_span_id — the span_id of the operation that triggered this one
When you look at a trace in the dashboard, the flame graph renders the span tree: each bar represents one span, its width is proportional to its duration, and indentation shows nesting depth.
In multi-agent systems where one agent delegates to another, or a retriever is called inside an LLM prompt construction step, the SDK automatically propagates trace context using AsyncLocalStorage (Node.js) or ContextVar (Python). Spans link correctly without any manual effort.
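A minimal sketch of this propagation pattern in Node.js follows. The helper names (`startChildSpan`, `withSpan`, `traceContext`) are assumptions for illustration, not the SDK's real internals.

```typescript
import { AsyncLocalStorage } from "node:async_hooks";
import { randomUUID } from "node:crypto";

// Hypothetical trace context carried across async boundaries.
interface TraceContext {
  trace_id: string;
  span_id: string;
}

const traceContext = new AsyncLocalStorage<TraceContext>();

// Create a span linked to whatever context is currently active.
function startChildSpan(name: string) {
  const parent = traceContext.getStore();
  return {
    name,
    trace_id: parent?.trace_id ?? randomUUID(), // new trace at the root
    span_id: randomUUID(),
    parent_span_id: parent?.span_id, // undefined for root spans
  };
}

// Run a function with a span's context active, so nested spans
// started inside it pick up the right parent automatically.
function withSpan<T>(name: string, fn: () => T): T {
  const span = startChildSpan(name);
  return traceContext.run(
    { trace_id: span.trace_id, span_id: span.span_id },
    fn,
  );
}
```

Because the context travels with the async execution, a tool call deep inside an agent turn links to its parent without any IDs being passed by hand.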

Span kinds

Every span has a span_kind field that describes the type of operation it represents. This controls how the span is rendered in the flame graph and how it appears in the agent registry.
Span kind    Emitted by                                      What it represents
llm          Provider wrappers                               A direct call to an LLM API
agent        withAgent(), ADK/LangChain integrations         The execution scope of a single agent
tool         agent.traceTool(), LangChain/ADK integrations   A tool or function call by an agent
planning     agent.logPlan()                                 Steps the agent planned before executing
handoff      agent.delegateTo(), multi-agent frameworks      An agent delegating to another agent
retriever    Manual spans, LangChain retriever handler       A document retrieval operation (RAG)
guardrail    SDK guardrail client                            A pre- or post-LLM content check
general      Manual startSpan()                              Any other custom operation
The SDK sets span_kind automatically for all wrapper-generated spans. For manual spans:
// Module path assumed from the package name used in these docs
import { startSpan } from "lumiqtrace";

const { span } = startSpan({
  name: "vector-search",
  span_kind: "retriever",
  provider: "custom",
});
See the full span kinds reference for flame graph rendering details.

Trace

A trace is a group of spans sharing the same trace_id. It represents one complete end-to-end agent run — from the first operation to the last, across all agent turns, tool calls, and LLM interactions. For a simple chatbot making one API call per message, a trace contains one span. For a multi-agent workflow that routes, retrieves context, delegates to specialists, and synthesizes a response, a trace may contain dozens of nested spans across multiple agents. In the dashboard, the Traces view shows the flame graph for a full trace:
  • The x-axis is wall-clock time from start to end of the root span
  • Each bar is a span coloured by status (green = success, red = error, yellow = slow)
  • Hovering shows span details; clicking opens the full span detail panel
  • The total trace cost is the sum of cost_usd across all spans
A trace is not an object you create explicitly. It emerges automatically from spans that share a trace_id. The SDK generates a new trace_id for each top-level agent run and propagates it to all nested operations through context propagation.
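Because a trace is just the set of spans sharing a trace_id, the dashboard's total-cost figure can be sketched as a simple aggregation. The helper below is hypothetical and uses a minimal subset of the event fields documented above.

```typescript
// Minimal span shape for this sketch; real events carry many more fields.
interface Span {
  trace_id: string;
  span_id: string;
  parent_span_id?: string;
  cost_usd: number;
}

// Total trace cost: the sum of cost_usd across all spans that
// share the given trace_id, as described for the Traces view.
function traceCost(spans: Span[], traceId: string): number {
  return spans
    .filter((s) => s.trace_id === traceId)
    .reduce((sum, s) => sum + s.cost_usd, 0);
}
```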

Project

A project is an isolated container for agent events. Every project has exactly one API key and all events ingested with that key are scoped to that project. Projects are the primary unit of data isolation. Use separate projects for:
  • Different applications — one project per service or agent system
  • Different environments — or use the environment field to keep production and staging in one project with easy filtering
Each project has its own retention period, dashboard metrics, alert rules, guardrails, prompt library, and AI analysis results.
Treat your project API key like a password — never commit it to source control. Rotate it from Settings → API Keys if compromised. The old key stays valid for 24 hours to allow smooth rollover.

Organization

An organization is the top-level workspace. It holds your projects, team members, and billing subscription. All usage — events ingested, data retained, members invited — counts against the organization’s plan.
Role      Access
Owner     Full access including billing, org deletion, and all admin actions
Admin     Manage projects, API keys, alerts, AI features; read billing
Member    Read-only dashboard access
Billing is per organization, not per project or user. Your plan’s event quota is shared across all projects and resets monthly.

Cost calculation

LumiqTrace computes cost_usd client-side in the SDK before events are sent. Costs appear immediately in the dashboard without any server-side enrichment step.
cost_usd = (input_tokens × input_price
           + output_tokens × output_price
           + cached_tokens × cached_price) ÷ 1,000,000
Cached tokens — prompt tokens served from the provider’s prompt cache — are billed at a reduced rate. The SDK extracts cached token counts from provider responses automatically. If the SDK encounters a model it does not recognise, cost_usd is set to 0 rather than throwing an error. See the supported models reference for the full pricing table.
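The formula above can be sketched as a small function. The per-million prices in the example are placeholders, not values from the real pricing table.

```typescript
// Sketch of the documented client-side cost formula.
// Prices are expressed per million tokens.
function costUsd(
  inputTokens: number,
  outputTokens: number,
  cachedTokens: number,
  pricing?: { input: number; output: number; cached: number },
): number {
  if (!pricing) return 0; // unknown model: cost_usd is 0, never an error
  return (
    (inputTokens * pricing.input +
      outputTokens * pricing.output +
      cachedTokens * pricing.cached) /
    1_000_000
  );
}

// Example with placeholder prices of $2.50 / $10.00 / $1.25 per million:
// 1000*2.5 + 500*10 + 400*1.25 = 8000, divided by 1,000,000 = 0.008
const cost = costUsd(1000, 500, 400, { input: 2.5, output: 10, cached: 1.25 });
```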
See your cached token savings on the Costs page under the Cache Hit Ratio card. Increasing your cache hit rate — by keeping static system prompts at the start of your context — is often the easiest way to reduce agent operating costs.
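As a rough sketch, a cache hit ratio over a set of events could be computed as below. Defining it as cached tokens over total input tokens is an assumption about how the Cache Hit Ratio card works; the docs do not spell out the exact formula.

```typescript
// Hypothetical cache-hit-ratio computation: cached prompt tokens as a
// fraction of all prompt tokens across a set of events (assumed definition).
function cacheHitRatio(
  events: { input_tokens: number; cached_tokens: number }[],
): number {
  const input = events.reduce((sum, e) => sum + e.input_tokens, 0);
  const cached = events.reduce((sum, e) => sum + e.cached_tokens, 0);
  return input === 0 ? 0 : cached / input;
}
```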