

Wrap your GoogleGenerativeAI client with wrapGoogle(). All generateContent and chat calls are traced automatically.

Installation

npm install @lumiqtrace/sdk @google/generative-ai

Setup

import { GoogleGenerativeAI } from "@google/generative-ai";
import { lumiqtrace } from "@lumiqtrace/sdk";

lumiqtrace.init({ apiKey: process.env.LUMIQTRACE_API_KEY! });

const genAI = lumiqtrace.wrapGoogle(new GoogleGenerativeAI(process.env.GOOGLE_API_KEY!));

Example

const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });

const result = await model.generateContent("Explain token pricing in two sentences.");
console.log(result.response.text());
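Traced calls keep the standard @google/generative-ai call signatures, so generateContent can also take a full request object instead of a plain string. A sketch continuing from the wrapped client above (the config values are illustrative, not recommendations):

```typescript
// generateContent accepts a GenerateContentRequest as well as a string;
// wrapping the client does not change this signature.
const detailed = await model.generateContent({
  contents: [
    { role: "user", parts: [{ text: "Explain token pricing in two sentences." }] },
  ],
  generationConfig: {
    temperature: 0.2,      // illustrative values
    maxOutputTokens: 256,
  },
});
console.log(detailed.response.text());
```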

What gets captured

| Field | Details |
| --- | --- |
| Model | gemini-2.5-flash, gemini-2.5-pro, etc. |
| Input tokens | From usageMetadata.promptTokenCount |
| Output tokens | From usageMetadata.candidatesTokenCount |
| Cost | Calculated from token counts and Google pricing |
| Latency | Total request duration |
| Finish reason | STOP, MAX_TOKENS, SAFETY, etc. |
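The cost field is simple arithmetic over the two token counts. A minimal sketch of that calculation, using made-up per-million-token rates (real Google pricing varies by model and changes over time, and the pricing table here is hypothetical, not LumiqTrace's actual one):

```typescript
// Hypothetical per-million-token rates; real Google pricing differs
// and should be looked up per model.
const RATES: Record<string, { inputPerM: number; outputPerM: number }> = {
  "gemini-2.5-flash": { inputPerM: 0.3, outputPerM: 2.5 },
};

// Cost = input tokens * input rate + output tokens * output rate,
// with both rates expressed per million tokens.
function estimateCost(
  model: string,
  promptTokens: number,
  candidatesTokens: number,
): number {
  const rate = RATES[model];
  if (!rate) throw new Error(`No pricing entry for ${model}`);
  return (
    (promptTokens / 1e6) * rate.inputPerM +
    (candidatesTokens / 1e6) * rate.outputPerM
  );
}

console.log(estimateCost("gemini-2.5-flash", 1000, 500));
```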

Chat sessions

const model = genAI.getGenerativeModel({ model: "gemini-2.5-flash" });
const chat = model.startChat();

const result = await chat.sendMessage("What is LumiqTrace?");
console.log(result.response.text());

Each sendMessage call is traced as a separate span in the session.
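A session can also be seeded with earlier turns via startChat's standard history option. A sketch continuing from the wrapped client above (the history text is made up for illustration):

```typescript
// startChat accepts prior turns via the standard `history` option;
// each subsequent sendMessage is still traced as its own span.
const seeded = model.startChat({
  history: [
    { role: "user", parts: [{ text: "What is LumiqTrace?" }] },
    // Illustrative model turn, not a real response.
    { role: "model", parts: [{ text: "LumiqTrace is an LLM observability SDK." }] },
  ],
});

const followUp = await seeded.sendMessage("How does it capture token counts?");
console.log(followUp.response.text());
```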

Next steps