

Wrap your Groq client with wrapGroq() to trace every inference call, including latency breakdowns that help you verify Groq's fast inference speeds.

Installation

npm install @lumiqtrace/sdk groq-sdk

Setup

import Groq from "groq-sdk";
import { lumiqtrace } from "@lumiqtrace/sdk";

lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY! });

const groq = lumiqtrace.wrapGroq(new Groq({ apiKey: process.env.GROQ_API_KEY! }));

Example

const completion = await groq.chat.completions.create({
  model: "llama-3.3-70b-versatile",
  messages: [{ role: "user", content: "Summarize this in one sentence." }],
});

What gets captured

| Field | Details |
| --- | --- |
| Model | llama-3.3-70b-versatile, mixtral-8x7b-32768, gemma2-9b-it, etc. |
| Input tokens | From usage.prompt_tokens |
| Output tokens | From usage.completion_tokens |
| Cost | Calculated from token counts and Groq pricing |
| Latency | Total request duration (Groq latency is typically under 1s) |