Alerts let you set thresholds on your agent metrics and receive a notification the moment one is crossed. Instead of checking the dashboard manually, you define the conditions that matter, such as a cost spike, a surge in agent error rate, or a latency increase, and LumiqTrace notifies you automatically. Alert rules are evaluated every 5 minutes against your live data.
Plan limits
| Plan | Alert rules | Webhooks |
|---|---|---|
| Free | None | — |
| Pro | Up to 5 rules | Not available |
| Team | Unlimited | Not available |
| Scale | Unlimited | Included |
Creating an alert rule
Click New rule to open the rule creation dialog.
Choose a metric
Select the metric you want to monitor:
- error_rate — percentage of agent operations that fail, time out, or are rate-limited
- cost_usd — total agent spend in USD for the window
- latency_ms — average response time in milliseconds
- token_count — total tokens consumed
- request_count — total operations
Use error_rate to catch tool failures and model errors, and cost_usd to guard against runaway agent loops.
Set the condition and threshold
Choose whether the rule fires when the metric is greater than (>) or less than (<) your threshold value.
Choose a time window
Select the window length over which the metric is aggregated before it is compared to your threshold.
| Window | Best for |
|---|---|
| 5 min | Catching sudden spikes from agent errors |
| 15 min | Sustained error rate increases |
| 30 min | Cost trend alerts |
| 60 min | Latency degradation patterns |
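The three choices above (metric, condition, and window) can be sketched as a simple evaluation function. This is an illustrative model only; the field names and rule shape are assumptions, not LumiqTrace's actual API.

```python
from dataclasses import dataclass


@dataclass
class AlertRule:
    """Illustrative rule shape: metric, condition, threshold, window."""
    metric: str          # e.g. "error_rate", "cost_usd", "latency_ms"
    condition: str       # ">" or "<"
    threshold: float
    window_minutes: int  # 5, 15, 30, or 60


def rule_fires(rule: AlertRule, window_value: float) -> bool:
    """Compare the value aggregated over the window to the threshold."""
    if rule.condition == ">":
        return window_value > rule.threshold
    return window_value < rule.threshold


# Example: fire when error_rate over a 15-minute window exceeds 5%.
rule = AlertRule(metric="error_rate", condition=">", threshold=5.0, window_minutes=15)
print(rule_fires(rule, 7.2))  # True: 7.2% > 5.0%
print(rule_fires(rule, 3.1))  # False
```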
Add a webhook URL (Scale plan)
Enter a URL to receive a POST when the alert fires. See the Webhooks guide for the payload schema, signature verification, and integration examples with Slack and PagerDuty.
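If the delivery is signed with a shared secret, your receiver can verify it by recomputing an HMAC over the raw request body. The sketch below assumes an HMAC-SHA256 hex digest carried in a header; the actual header name, payload schema, and signature scheme are defined in the Webhooks guide, not here.

```python
import hashlib
import hmac


def verify_signature(secret: str, body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw body and compare in constant
    time. The scheme here is an assumption; check the Webhooks guide
    for the real header name and digest format."""
    expected = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)


# Example with a made-up secret and payload:
secret = "whsec_example"
body = b'{"rule":"High error rate","metric":"error_rate","value":12.5,"threshold":5.0}'
sig = hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
print(verify_signature(secret, body, sig))  # True
```

Always verify against the raw bytes of the request body; re-serializing parsed JSON can change whitespace or key order and invalidate the signature.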
Alert history
The alert history table shows the last 30 times any alert fired. Each entry shows the time, rule name, actual metric value, and configured threshold. If a rule fires very frequently, consider raising the threshold or increasing the time window to reduce noise. High-frequency alerts on error_rate often indicate an agent tool that fails intermittently — investigate the failing tool rather than silencing the alert.
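To see why a wider window reduces noise from an intermittently failing tool, consider this small simulation with synthetic per-minute error rates (not real LumiqTrace data): the same one-minute failure spikes cross a 10% threshold three times under a 5-minute window, but never under a 30-minute one.

```python
THRESHOLD = 10.0  # percent

# Synthetic per-minute error rates: a tool that fails hard for one
# minute roughly every ten minutes and is healthy otherwise.
rates = [0.0] * 30
for minute in (2, 12, 22):
    rates[minute] = 60.0


def windows_fired(rates, window):
    """Average over non-overlapping windows; count threshold crossings."""
    return sum(
        1
        for start in range(0, len(rates), window)
        if sum(rates[start:start + window]) / window > THRESHOLD
    )


print(windows_fired(rates, 5))   # 3 alerts in 30 minutes -> noisy
print(windows_fired(rates, 30))  # 0 alerts: the 30-minute average is 6%
```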