# LumiqTrace

## Docs

- [AGENTS](https://docs.lumiqtrace.com/AGENTS.md)
- [API keys: authenticate your SDK integrations](https://docs.lumiqtrace.com/account/api-keys.md): Understand how LumiqTrace API keys work, how to create and rotate them safely, and best practices for keeping your keys secure in production.
- [Billing: plans, usage, and payment management](https://docs.lumiqtrace.com/account/billing.md): Compare LumiqTrace pricing plans, understand overage billing, view your month-to-date usage, and manage your subscription and payment method through Stripe.
- [Organizations: workspaces, members, and settings](https://docs.lumiqtrace.com/account/organizations.md): Learn how to create and manage organizations in LumiqTrace, invite team members, assign roles, and control access to your projects and billing.
- [Authentication — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/authentication.md): Authenticate LumiqTrace API requests using an x-api-key header for ingest or a Bearer token for data and AI endpoints. Covers error codes and examples.
- [Errors and Rate Limits — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/errors.md): HTTP error codes, rate limits, and retry guidance for all LumiqTrace API endpoints, including ingest, dashboard, and AI features.
- [List Events — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/events.md): Retrieve a cursor-paginated list of raw LLM events for a project, with filtering by model, status, time range, environment, and user.
- [Ingest Events — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/ingest.md): POST /v1/ingest — send LLM events to LumiqTrace as gzip-compressed NDJSON. Accepts up to 100 events per batch. Returns 202 Accepted with event count.
- [Metrics Endpoints — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/metrics.md): Query project KPIs, cost breakdowns, latency percentiles, and error analytics via four dedicated LumiqTrace metrics endpoints.
- [OpenTelemetry ingest](https://docs.lumiqtrace.com/api-reference/otel.md): Send OpenTelemetry-formatted traces to LumiqTrace using the OTLP HTTP protocol. Use this endpoint when integrating with OpenTelemetry-instrumented codebases.
- [Get Trace — LumiqTrace API](https://docs.lumiqtrace.com/api-reference/traces.md): Retrieve a complete nested span tree for a trace ID, including all child spans and their metadata, for debugging LLM call chains.
- [Core concepts: the LumiqTrace data model](https://docs.lumiqtrace.com/concepts.md): Learn how LumiqTrace structures agent observability data — events, spans, span kinds, traces, projects, and organizations — and how cost is calculated from token counts.
- [Agent Registry](https://docs.lumiqtrace.com/dashboard/agent-registry.md): A live map of every agent in your system, the tools they use, and how they connect to each other.
- [Agents](https://docs.lumiqtrace.com/dashboard/agents.md): Per-agent performance dashboards — cost, latency, error rate, and tool usage broken down by agent.
- [AI features — intelligent insights for your agent operations](https://docs.lumiqtrace.com/dashboard/ai-features.md): The AI Hub gives you four AI-powered tools: cost optimization recommendations, anomaly detection with plain-English explanations, natural language queries, and root cause analysis on agent traces.
- [Alerts — get notified when your agent metrics cross a threshold](https://docs.lumiqtrace.com/dashboard/alerts.md): Create alert rules that fire when agent error rates, costs, or latency exceed your thresholds. Get notified by email or webhook within minutes of a breach.
- [Costs — understand and control your LLM spend](https://docs.lumiqtrace.com/dashboard/costs.md): Break down your LLM spend by model and over time, track cache savings, see your month-to-date total, and get a 30-day forecast to avoid billing surprises.
- [Datasets](https://docs.lumiqtrace.com/dashboard/datasets.md): Manage evaluation datasets — create from traces, upload CSV, and run batch evaluations.
- [Errors](https://docs.lumiqtrace.com/dashboard/errors.md): Track, group, and triage errors across your agents — model errors, tool failures, timeouts, and rate limits.
- [Evaluations — measure and trend custom LLM metrics](https://docs.lumiqtrace.com/dashboard/evaluations.md): Define custom evaluation metrics, run them against your traces, and track results over time to detect quality regressions and measure the impact of prompt changes.
- [Guardrails — configure content safety policies](https://docs.lumiqtrace.com/dashboard/guardrails.md): Create and manage guardrail policies in the LumiqTrace dashboard to block, redact, or flag unsafe LLM inputs and outputs without redeploying your application.
- [Incidents — detect, correlate, and resolve LLM issues](https://docs.lumiqtrace.com/dashboard/incidents.md): The Incidents page surfaces active and resolved issues detected across your LLM traffic, correlates related anomalies and errors, and tracks resolution state.
- [LumiqPilot — AI copilot for your LLM operations](https://docs.lumiqtrace.com/dashboard/lumiqpilot.md): LumiqPilot is an action-first AI copilot that manages alerts, guardrails, prompts, and SDK configuration through natural language conversation, with full audit logging.
- [Dashboard overview — your agent health at a glance](https://docs.lumiqtrace.com/dashboard/overview.md): The overview page surfaces your four key agent metrics, a cost and request timeline, top models by spend, and recent errors — all in one place.
- [Performance — latency metrics and throughput analysis](https://docs.lumiqtrace.com/dashboard/performance.md): Track P50, P90, and P99 latency, time-to-first-token for streaming calls, and request throughput across models, environments, and time ranges.
- [Prompts — version control and manage your prompt library](https://docs.lumiqtrace.com/dashboard/prompts.md): Store, version, label, and roll back prompt templates in the LumiqTrace prompt library. Fetch prompts at runtime via the SDK so you can update them without redeploying.
- [Sessions — track multi-turn user conversations](https://docs.lumiqtrace.com/dashboard/sessions.md): Use session IDs to group multiple LLM calls from a single user conversation and analyze session-level cost, latency, and turn count in the Sessions dashboard.
- [Simulations — test LLM behavior before shipping](https://docs.lumiqtrace.com/dashboard/simulations.md): Run your LLM application against a dataset of test cases to catch regressions before they reach production. Compare outputs, scores, and latency across prompt versions.
- [Tools](https://docs.lumiqtrace.com/dashboard/tools.md): Track every tool your agents call — usage counts, failure rates, latency, and argument/return value inspection.
- [Traces — inspect every agent run and its steps](https://docs.lumiqtrace.com/dashboard/traces.md): Browse your full agent run log, filter by model or status, and drill into a flame graph showing every agent turn, tool call, and LLM interaction with token, cost, and latency details.
- [Environments — separate production, staging, and development data](https://docs.lumiqtrace.com/guides/environments.md): Use the environment tag and project structure to cleanly separate LLM traces from different deployment environments in LumiqTrace.
- [LumiqTrace MCP server](https://docs.lumiqtrace.com/guides/lumiqtrace-mcp.md): Connect Claude Desktop, Cursor, or any MCP client to LumiqTrace to query your traces, metrics, and errors in natural language.
- [Production checklist](https://docs.lumiqtrace.com/guides/production-checklist.md): Everything to verify before shipping LumiqTrace to production: API key security, sampling, flush handling, alert rules, and data privacy settings.
- [Serverless deployment guide](https://docs.lumiqtrace.com/guides/serverless.md): Ensure LumiqTrace events are reliably sent from AWS Lambda, Vercel Functions, Netlify Functions, and Google Cloud Run by handling flush correctly in each environment.
- [Testing and mocking](https://docs.lumiqtrace.com/guides/testing-and-mocking.md): Disable LumiqTrace in tests, mock the SDK for unit testing, and configure CI environments.
- [Webhooks — receive real-time alert notifications](https://docs.lumiqtrace.com/guides/webhooks.md): Configure LumiqTrace to POST a signed JSON payload to your endpoint when an alert fires. Covers payload schema, HMAC signature verification, delivery guarantees, and integration examples.
- [LumiqTrace: Agent Observability for Production AI](https://docs.lumiqtrace.com/index.md): LumiqTrace gives you complete visibility into every agent run, tool call, and LLM interaction. Trace multi-agent systems, monitor costs, debug failures, and optimize your AI operations in one dashboard.
- [LumiqTrace: Agent Observability for AI Teams](https://docs.lumiqtrace.com/introduction.md): LumiqTrace is an agent observability platform. Trace every agent run, tool call, LLM interaction, and delegation across your entire AI system — in two lines of code.
- [Audit log](https://docs.lumiqtrace.com/platform/audit.md): A tamper-evident log of every action taken in your LumiqTrace organization — who did what, and when.
- [Environment variables reference](https://docs.lumiqtrace.com/platform/environment-variables.md): Complete reference for all environment variables used by the LumiqTrace API server, including database connections, auth configuration, AI providers, and integrations.
- [Security — authentication, isolation, and data handling](https://docs.lumiqtrace.com/platform/security.md): LumiqTrace's security architecture: API key design, session authentication, multi-tenant isolation at the database layer, PII handling, and audit logging.
- [Self-hosting LumiqTrace](https://docs.lumiqtrace.com/platform/self-hosting.md): Deploy LumiqTrace on your own infrastructure with an Enterprise license. Get a Docker image, license key, and full data sovereignty for your agent observability stack.
- [Quickstart: instrument your first agent in 5 minutes](https://docs.lumiqtrace.com/quickstart.md): Send your first agent trace to LumiqTrace by wrapping your LLM client or agent framework with the SDK. No manual instrumentation required — two lines of setup.
- [Changelog](https://docs.lumiqtrace.com/reference/changelog.md): Release history for the LumiqTrace platform and SDKs.
- [Supported models and pricing](https://docs.lumiqtrace.com/reference/models.md): All LLM models supported by LumiqTrace's built-in pricing table, with per-token costs for input, output, and cached tokens used in cost_usd calculation.
- [Span kinds reference](https://docs.lumiqtrace.com/reference/span-kinds.md): Complete reference for all LumiqTrace span kinds — what each represents, when it is emitted, and how it appears in the trace flame graph and agent registry.
- [Agent tracing — withAgent and multi-agent systems](https://docs.lumiqtrace.com/sdk/agent-tracing.md): Use withAgent to trace multi-agent workflows, capture planning steps, tool calls, and agent-to-agent handoffs as linked spans in the LumiqTrace dashboard.
- [SDK config propagation — update live applications without redeployment](https://docs.lumiqtrace.com/sdk/config-propagation.md): Understand how LumiqPilot and the dashboard push SDK configuration changes to running applications in real time via the ingest response config version field.
- [FastAPI middleware — automatic HTTP request tracing](https://docs.lumiqtrace.com/sdk/fastapi-middleware.md): Add LumiqFastAPIMiddleware to your FastAPI app to automatically trace every HTTP request and instrument individual LLM calls with @observe_llm and @observe_span.
- [Flask middleware — automatic HTTP request tracing](https://docs.lumiqtrace.com/sdk/flask-middleware.md): Add LumiqFlaskMiddleware to your Flask app to automatically trace every HTTP request and instrument LLM calls with @observe_llm and @observe_span decorators.
- [Guardrails — content checking for LLM calls](https://docs.lumiqtrace.com/sdk/guardrails.md): Enable pre- and post-LLM content checks on any wrapped client to block, redact, or flag unsafe inputs and outputs using your configured guardrail policies.
- [Anthropic](https://docs.lumiqtrace.com/sdk/integrations/anthropic.md): Trace Anthropic Claude API calls automatically — messages, tool use, and streaming — with wrapAnthropic().
- [AWS Bedrock](https://docs.lumiqtrace.com/sdk/integrations/bedrock.md): Trace Amazon Bedrock model invocations with wrapBedrock().
- [Google ADK](https://docs.lumiqtrace.com/sdk/integrations/google-adk.md): Trace Google Agent Development Kit (ADK) agents and multi-agent systems using instrumentADK, wrapADKRunner, and wrapADKAgent in TypeScript and Python.
- [Google Generative AI](https://docs.lumiqtrace.com/sdk/integrations/google-genai.md): Trace Gemini API calls — generateContent, chat, and streaming — with wrapGoogle().
- [Groq](https://docs.lumiqtrace.com/sdk/integrations/groq.md): Trace Groq inference calls with wrapGroq() — optimized for high-throughput, low-latency workloads.
- [LangChain](https://docs.lumiqtrace.com/sdk/integrations/langchain.md): Trace LangChain chains, agents, tools, and retrievers automatically using LumiqtraceCallbackHandler — covering TypeScript and Python with full span-level detail.
- [LiteLLM](https://docs.lumiqtrace.com/sdk/integrations/litellm.md): Trace LiteLLM calls across any provider with wrapLiteLLM().
- [Mistral](https://docs.lumiqtrace.com/sdk/integrations/mistral.md): Trace Mistral AI API calls with wrapMistral().
- [OpenAI](https://docs.lumiqtrace.com/sdk/integrations/openai.md): Trace OpenAI chat completions, embeddings, and responses automatically with two lines of code.
- [OpenRouter](https://docs.lumiqtrace.com/sdk/integrations/openrouter.md): Trace calls made through OpenRouter's unified API with wrapOpenRouter().
- [Manual spans — tracing custom operations](https://docs.lumiqtrace.com/sdk/manual-spans.md): Use startSpan to instrument RAG pipelines, custom model wrappers, evaluation harnesses, and any non-LLM operation you want to appear in your LumiqTrace traces.
- [PII redaction — protect sensitive data in traces](https://docs.lumiqtrace.com/sdk/pii-redaction.md): Configure LumiqTrace's built-in PII redaction to automatically remove sensitive values from tags and metadata before they are sent to the ingest endpoint.
- [Prompt management — versioned prompts with the SDK](https://docs.lumiqtrace.com/sdk/prompt-management.md): Fetch, compile, create, and manage versioned prompts from the LumiqTrace prompt library using PromptClient, with built-in caching and variable substitution.
- [Python SDK — lumiqtrace](https://docs.lumiqtrace.com/sdk/python.md): Install and configure the LumiqTrace Python SDK to trace OpenAI calls, decorate functions, manage agent workflows, and flush events in serverless environments.
- [TypeScript SDK — @lumiqtrace/sdk](https://docs.lumiqtrace.com/sdk/typescript.md): Install and configure the LumiqTrace TypeScript SDK to automatically trace OpenAI, Anthropic, Google, and OpenRouter LLM calls with zero code changes.
- [Troubleshooting](https://docs.lumiqtrace.com/troubleshooting.md): Solutions to the most common problems when integrating LumiqTrace with your agents — from missing traces to cost calculation issues and broken span trees.

## Optional

- [Status](https://status.lumiqtrace.com)