Run through this checklist before deploying LumiqTrace to a production environment. Each item addresses a common failure mode found in production integrations.

Documentation Index
Fetch the complete documentation index at: https://docs.lumiqtrace.com/llms.txt
Use this file to discover all available pages before exploring further.
SDK configuration
API key is in a secret manager, not hardcoded

Your lqt_ API key should never appear in source code or be committed to version control. Set it as an environment variable and read it at runtime.

Verify with: grep -r "lqt_" src/ — this should return no matches.

environment is set to 'production'
Explicitly set the environment tag so your data is filtered correctly in the dashboard and anomaly detection runs on the right baseline:
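A minimal sketch of production initialization, assuming a hypothetical `init` entry point (check your SDK version for the actual import) — the key point is that the API key comes from the environment, never from source:

```typescript
// Hypothetical import — verify the actual entry point for your SDK version:
// import { init } from "lumiqtrace";

// Read a required value from the environment so the key never lands in
// source control; fail fast at startup if it is missing.
function requireEnv(
  name: string,
  env: Record<string, string | undefined> = process.env
): string {
  const value = env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

// Hypothetical init call using the documented options:
// init({
//   apiKey: requireEnv("LUMIQTRACE_API_KEY"), // injected by your secret manager
//   environment: "production",                // tags data for the right baseline
// });
```

Failing fast on a missing key surfaces misconfiguration at deploy time instead of silently sending no telemetry.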
storePrompts policy is decided
storePrompts defaults to false — prompt text is never stored. If you enable it, you are storing user inputs and model outputs. Review your:

- Privacy policy (does it cover LLM input storage?)
- Data retention window (matches your plan’s setting)
- Any regulatory requirements (GDPR, HIPAA, SOC 2)
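With prompt storage disabled, you can still attach searchable context as structured metadata. A sketch of building non-sensitive tags — the `buildTags` helper and the commented `addTags` call are illustrative, not part of the documented SDK:

```typescript
// Build structured, non-sensitive metadata instead of storing raw prompt text.
function buildTags(userId: string, feature: string, promptText: string) {
  return {
    feature,                            // e.g. "support-chat"
    userIdHash: hashId(userId),         // pseudonymous reference, not raw PII
    promptLength: promptText.length,    // a useful signal without the content
  };
}

// Tiny non-cryptographic hash for illustration only — use a real keyed
// hash (e.g. HMAC-SHA-256) if you need pseudonymization in production.
function hashId(id: string): string {
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.charCodeAt(0)) | 0;
  return (h >>> 0).toString(16);
}

// Hypothetical usage alongside a trace:
// trace.addTags(buildTags("user-123", "support-chat", prompt));
```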
If in doubt, leave storePrompts set to false and use tags for structured metadata instead.

sampleRate is tuned for your traffic volume
At high traffic, 100% tracing generates a lot of events and cost. Consider sampling:
| Monthly LLM calls | Recommended sampleRate |
|---|---|
| < 50K | 1.0 (trace everything) |
| 50K – 500K | 0.5 – 1.0 |
| 500K – 5M | 0.1 – 0.25 |
| > 5M | 0.05 – 0.1 |
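Under the hood, rate-based sampling amounts to a per-call coin flip. The SDK applies sampleRate for you, so this sketch is purely illustrative of what the setting means:

```typescript
// Decide whether a single call should be traced at the given sample rate.
// An injectable random source makes the decision testable.
function shouldSample(
  sampleRate: number,
  rand: () => number = Math.random
): boolean {
  return rand() < sampleRate;
}

// Expected traced events per month at a given rate, e.g. for cost estimates.
function expectedTracedEvents(monthlyCalls: number, sampleRate: number): number {
  return Math.round(monthlyCalls * sampleRate);
}
```

For example, 2M monthly calls at a sampleRate of 0.1 yields roughly 200K traced events — usually enough for representative latency and cost baselines.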
Flush is called before process exit (serverless only)
If you deploy to Lambda, Vercel Functions, Netlify, or similar short-lived environments, verify that flush() is called at the end of every handler. See the Serverless guide.

redactKeys covers all PII fields in your tags
Review every tags key your application sends. Add any fields that contain sensitive data to redactKeys.

Dashboard setup
At least one alert rule is configured
Before going live, create a baseline error-rate alert so you get notified if something breaks immediately after deployment:
- Metric: error_rate
- Condition: > 0.05 (5%)
- Window: 15 minutes
- Notify: your on-call email or Slack webhook
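If you keep alert definitions in version control, the rule above might be captured as a config object like the following. The field names here are assumptions for illustration — the dashboard UI is the documented way to create rules:

```typescript
// Baseline error-rate alert expressed as a config object.
// Field names are hypothetical; create the rule in the dashboard
// unless your plan supports alerts-as-code.
const errorRateAlert = {
  metric: "error_rate",
  condition: { operator: ">", threshold: 0.05 }, // 5% error rate
  windowMinutes: 15,
  notify: ["oncall@example.com"],                // or a Slack webhook URL
};
```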
Cost alert is configured
Unexpected LLM code paths can cost hundreds of dollars in minutes. Create a cost alert:
- Metric: cost_usd
- Condition: > your daily budget
- Window: 60 minutes
Run the Cost Optimizer once before launch
Navigate to AI Hub → Cost Optimizer and review the recommendations before launch. Model-switching opportunities are easiest to implement before traffic hits the feature.
Security
One API key per service
Avoid sharing a single API key across multiple services or environments. Use separate keys so you can rotate or revoke one without affecting others. See API Keys.
API key rotation schedule is documented
Add API key rotation to your team’s security calendar — every 90 days is a reasonable cadence for most teams. Document the rotation process so the next person doesn’t have to figure it out from scratch.
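A 90-day cadence is easy to automate or surface in tooling. A small sketch that computes the next rotation date from the last one (the helper name is ours, not part of the SDK):

```typescript
// Compute the next rotation date from the last rotation and a cadence in days.
function nextRotation(lastRotated: Date, cadenceDays = 90): Date {
  const next = new Date(lastRotated);
  next.setDate(next.getDate() + cadenceDays); // Date normalizes month rollover
  return next;
}
```

A scheduled job could compare `nextRotation(lastRotated)` against today's date and post a reminder to your team channel when rotation is due.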
Quick verification after deployment
After deploying, run this end-to-end check:

- Make one LLM call through your application
- Wait 10–15 seconds
- Open the LumiqTrace dashboard and navigate to Traces
- Confirm the trace appears with environment: production, the correct model, non-zero token counts, and a non-zero cost
If the trace does not appear, set debug: true in your SDK configuration and redeploy — the console output will show whether events are being created and whether flushes are succeeding.
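Enabling debug is typically a one-line change to the same options used at setup. A sketch — the debug flag is documented in this guide, but verify the exact option shape against your SDK version:

```typescript
// Temporary troubleshooting configuration — remove debug: true once
// traces are confirmed to arrive, as it is verbose in production logs.
const lumiqOptions = {
  apiKey: process.env.LUMIQTRACE_API_KEY, // from your secret manager
  environment: "production",
  debug: true, // logs event creation and flush results to the console
};
```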
All items checked? Your LumiqTrace integration is production-ready.