# Set Up Real-Time Alerts
Configure intelligent alert rules to catch security events, anomalies, and threshold violations. Integrate with Slack, PagerDuty, email, and custom webhooks.
## Overview
Alerts notify your team when critical events occur or patterns emerge in your audit logs. Trailbase's alert engine evaluates rules in real-time as events are ingested, ensuring immediate notification of security incidents and policy violations.
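To make the evaluation model concrete, here is a simplified, hypothetical sketch of how a threshold rule can be checked against a sliding window of events as they arrive. The types and function names are illustrative assumptions, not Trailbase's internal engine:

```typescript
// Simplified sketch of sliding-window threshold evaluation.
// These types are hypothetical; Trailbase's internal engine is not public.
interface AuditEvent {
  action: string;
  outcome: string;
  timestamp: number; // Unix seconds
}

interface ThresholdRule {
  action: string;
  outcome: string;
  count: number;
  windowSeconds: number;
}

// Returns true if the rule should fire when evaluated at time `now`:
// i.e. at least `count` matching events fall inside the trailing window.
function shouldFire(events: AuditEvent[], rule: ThresholdRule, now: number): boolean {
  const matching = events.filter(
    (e) =>
      e.action === rule.action &&
      e.outcome === rule.outcome &&
      now - e.timestamp <= rule.windowSeconds,
  );
  return matching.length >= rule.count;
}
```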
## Alert Types

### 1. Threshold Alerts

Trigger when a specific event occurs N or more times within a time window:
```ts
import { TrailbaseClient } from '@trailbase/sdk';

const trailbase = new TrailbaseClient({
  apiKey: process.env.TRAILBASE_API_KEY!,
  tenantId: process.env.TRAILBASE_TENANT_ID!,
});

// Alert on 5+ failed login attempts in 5 minutes
await trailbase.createAlertRule({
  name: 'Excessive Failed Logins',
  type: 'threshold',
  condition: {
    action: 'user.login',
    outcome: 'failure',
    count: 5,
    window_seconds: 300,
  },
  severity: 'high',
  channels: ['email', 'slack'],
  webhookIds: ['webhook_security_slack'],
  cooldownMin: 30, // Don't re-alert for 30 minutes
});
```

### 2. Pattern Alerts
Match specific event patterns or metadata values:
```ts
// Alert on any data export action
await trailbase.createAlertRule({
  name: 'Data Export Detected',
  type: 'pattern',
  condition: {
    action: { matches: '*.export' }, // Wildcard match
  },
  severity: 'medium',
  channels: ['email'],
  webhookIds: ['webhook_compliance_team'],
});

// Alert on privileged actions by non-admins
await trailbase.createAlertRule({
  name: 'Privileged Action by Non-Admin',
  type: 'pattern',
  condition: {
    action: { in: ['user.delete', 'role.grant', 'config.update'] },
    metadata: {
      actor_role: { not: 'admin' },
    },
  },
  severity: 'critical',
  channels: ['pagerduty', 'slack'],
});
```

### 3. Anomaly Alerts
Detect unusual behavior using statistical analysis:
```ts
// Alert on unusual activity from a specific user
await trailbase.createAlertRule({
  name: 'Unusual User Activity',
  type: 'anomaly',
  condition: {
    actor_id: 'user_123',
    baseline_window_days: 30,
    threshold_stddev: 3, // 3 standard deviations from normal
  },
  severity: 'medium',
  channels: ['slack'],
});
```

### 4. Compliance Alerts
Trigger when compliance checks fail:
```ts
// Alert on any compliance framework failure
await trailbase.createAlertRule({
  name: 'Compliance Check Failed',
  type: 'compliance',
  condition: {
    framework: 'any', // Or specific: 'SOC2', 'HIPAA', 'GDPR'
    passed: false,
  },
  severity: 'critical',
  channels: ['email', 'pagerduty'],
  webhookIds: ['webhook_security_team'],
});
```

## Channel Configuration
### Email Alerts
```ts
// Configure email notification
await trailbase.updateAlertChannels({
  email: {
    enabled: true,
    recipients: [
      'security-team@example.com',
      'compliance@example.com',
    ],
    template: 'default', // Or 'minimal', 'detailed'
  },
});
```

### Slack Integration
```ts
// Create Slack webhook
await trailbase.createWebhook({
  name: 'Security Alerts Slack',
  url: 'https://hooks.slack.com/services/YOUR/WEBHOOK/URL',
  events: ['alert.triggered'],
  headers: {
    'Content-Type': 'application/json',
  },
});
```
Slack message format (automatically formatted by Trailbase):

```json
{
  "text": "🚨 Alert Triggered: Excessive Failed Logins",
  "blocks": [
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Alert:* Excessive Failed Logins\n*Severity:* High\n*Triggered:* 2026-02-10 14:32:15 UTC"
      }
    },
    {
      "type": "section",
      "text": {
        "type": "mrkdwn",
        "text": "*Details:*\n• Action: user.login\n• Outcome: failure\n• Count: 5 in 5 minutes\n• Actor: user_alice"
      }
    }
  ]
}
```

### PagerDuty Integration
```ts
// Create PagerDuty webhook
await trailbase.createWebhook({
  name: 'PagerDuty Critical Alerts',
  url: 'https://events.pagerduty.com/v2/enqueue',
  events: ['alert.triggered'],
  headers: {
    'Content-Type': 'application/json',
    'Authorization': `Token token=${process.env.PAGERDUTY_API_KEY}`,
  },
});
```
PagerDuty payload (automatically formatted):

```json
{
  "routing_key": "YOUR_INTEGRATION_KEY",
  "event_action": "trigger",
  "payload": {
    "summary": "Trailbase Alert: Excessive Failed Logins",
    "severity": "error",
    "source": "Trailbase",
    "custom_details": {
      "alert_name": "Excessive Failed Logins",
      "condition": "5 failed logins in 5 minutes",
      "actor": "user_alice"
    }
  }
}
```

### Custom Webhooks
Send alerts to any HTTP endpoint:
```ts
// Create custom webhook
await trailbase.createWebhook({
  name: 'Custom Security System',
  url: 'https://yourdomain.com/api/trailbase-alerts',
  events: ['alert.triggered', 'compliance.check.failed'],
  headers: {
    'Content-Type': 'application/json',
    'X-API-Key': process.env.CUSTOM_WEBHOOK_KEY!,
  },
  retryConfig: {
    maxAttempts: 5,
    backoffMultiplier: 2,
  },
});
```
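Before wiring up the handler below, it can help to see the signature scheme in isolation. This sketch assumes Trailbase signs the raw request body with a hex-encoded HMAC-SHA256, which is the scheme the handler checks; `signBody` and `verifySignature` are illustrative names, not SDK functions:

```typescript
import crypto from 'node:crypto';

// Compute the signature the sender would attach (assumed scheme:
// hex-encoded HMAC-SHA256 over the raw body, keyed by your webhook secret).
function signBody(secret: string, body: string): string {
  return crypto.createHmac('sha256', secret).update(body).digest('hex');
}

// Constant-time comparison of the received header value against the expected one.
function verifySignature(secret: string, body: string, received: string): boolean {
  const expected = signBody(secret, body);
  if (received.length !== expected.length) return false;
  return crypto.timingSafeEqual(Buffer.from(received), Buffer.from(expected));
}
```

Using `timingSafeEqual` instead of `!==` avoids leaking information about the expected signature through comparison timing.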
```ts
import crypto from 'node:crypto';

// Your webhook handler
export async function POST(req: Request) {
  // Verify the HMAC signature before trusting the payload
  const signature = req.headers.get('X-Trailbase-Signature');
  const body = await req.text();

  const expectedSignature = crypto
    .createHmac('sha256', process.env.TRAILBASE_WEBHOOK_SECRET!)
    .update(body)
    .digest('hex');

  // Constant-time comparison to avoid leaking timing information
  if (
    !signature ||
    signature.length !== expectedSignature.length ||
    !crypto.timingSafeEqual(Buffer.from(signature), Buffer.from(expectedSignature))
  ) {
    return new Response('Invalid signature', { status: 401 });
  }

  const payload = JSON.parse(body);

  // Process alert
  if (payload.event_type === 'alert.triggered') {
    await handleAlert(payload.data);
  }

  return new Response(JSON.stringify({ received: true }), { status: 200 });
}
```

## Alert Rule Examples
### Security Use Cases
#### Brute Force Detection
```ts
await trailbase.createAlertRule({
  name: 'Brute Force Attack',
  type: 'threshold',
  condition: {
    action: 'user.login',
    outcome: 'failure',
    count: 10,
    window_seconds: 60,
    groupBy: 'actor.ip', // Track per IP address
  },
  severity: 'critical',
  channels: ['pagerduty', 'slack'],
});
```

#### Privilege Escalation
```ts
await trailbase.createAlertRule({
  name: 'Privilege Escalation Attempt',
  type: 'pattern',
  condition: {
    action: 'role.grant',
    metadata: {
      granted_role: { in: ['admin', 'superuser'] },
    },
  },
  severity: 'critical',
  channels: ['email', 'pagerduty'],
  metadata: {
    description: 'Alert when admin or superuser role is granted to any user',
  },
});
```

#### Suspicious After-Hours Activity
```ts
await trailbase.createAlertRule({
  name: 'After-Hours Data Access',
  type: 'pattern',
  condition: {
    action: { matches: 'data.*' },
    event_time: {
      hour: { between: [22, 6] }, // 10 PM to 6 AM (range wraps past midnight)
    },
  },
  severity: 'medium',
  channels: ['slack'],
});
```

### Compliance Use Cases
#### Unauthorized Access Attempts
```ts
await trailbase.createAlertRule({
  name: 'Access Denied Spike',
  type: 'threshold',
  condition: {
    outcome: 'denied',
    count: 20,
    window_seconds: 600, // 10 minutes
  },
  severity: 'high',
  channels: ['email'],
  metadata: {
    compliance_control: 'SOC2-CC6.3',
  },
});
```

#### Bulk Data Operations
```ts
await trailbase.createAlertRule({
  name: 'Bulk Delete Operation',
  type: 'pattern',
  condition: {
    action: { matches: '*.delete' },
    metadata: {
      bulk_operation: true,
      affected_count: { gte: 100 },
    },
  },
  severity: 'high',
  channels: ['email', 'slack'],
});
```

## Managing Alerts
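As the calls in this section show, alerts move through a fixed lifecycle: FIRING, then ACKNOWLEDGED, then RESOLVED. A small sketch of that state machine (illustrative only; the transitions are presumably enforced server-side):

```typescript
// Alert lifecycle as described in this section:
// FIRING -> ACKNOWLEDGED -> RESOLVED
type AlertStatus = 'FIRING' | 'ACKNOWLEDGED' | 'RESOLVED';

const transitions: Record<AlertStatus, AlertStatus[]> = {
  FIRING: ['ACKNOWLEDGED'],
  ACKNOWLEDGED: ['RESOLVED'],
  RESOLVED: [], // Terminal; the rule itself keeps monitoring for new triggers
};

function canTransition(from: AlertStatus, to: AlertStatus): boolean {
  return transitions[from].includes(to);
}
```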
### List Active Alerts
```ts
// Get all firing alerts
const alerts = await trailbase.listAlerts({
  status: 'FIRING',
  severity: 'high',
});

alerts.forEach((alert) => {
  console.log(`Alert: ${alert.name}`);
  console.log(`  Triggered: ${alert.triggeredAt}`);
  console.log(`  Count: ${alert.triggerCount}`);
  console.log(`  Details: ${JSON.stringify(alert.context)}`);
});
```

### Acknowledge Alert
```ts
// Mark alert as acknowledged
await trailbase.acknowledgeAlert('alert_abc123');

// Alert status changes: FIRING -> ACKNOWLEDGED
// Further triggers are suppressed during the cooldown period
```

### Resolve Alert
```ts
// Mark alert as resolved
await trailbase.resolveAlert('alert_abc123');

// Alert status changes: ACKNOWLEDGED -> RESOLVED
// The alert rule continues to monitor for new triggers
```

### Update Alert Rule
```ts
// Modify an existing alert rule
await trailbase.updateAlertRule('rule_xyz789', {
  severity: 'critical', // Escalate severity
  condition: {
    count: 3, // Lower the threshold
    window_seconds: 300,
  },
  cooldownMin: 60, // Increase cooldown
});
```

### Disable Alert Rule
```ts
// Temporarily disable alert rule
await trailbase.updateAlertRule('rule_xyz789', {
  enabled: false,
});

// Re-enable later
await trailbase.updateAlertRule('rule_xyz789', {
  enabled: true,
});
```

## Alert Dashboard
Build a real-time alert dashboard for your security team:
```tsx
// components/AlertDashboard.tsx
'use client';

import { useEffect, useState } from 'react';

// Assumed helper routes: API endpoints in your app that proxy the
// corresponding Trailbase SDK calls (acknowledgeAlert / resolveAlert).
async function acknowledgeAlert(id: string) {
  await fetch(`/api/alerts/${id}/acknowledge`, { method: 'POST' });
}

async function resolveAlert(id: string) {
  await fetch(`/api/alerts/${id}/resolve`, { method: 'POST' });
}

export function AlertDashboard() {
  const [alerts, setAlerts] = useState<any[]>([]);

  useEffect(() => {
    async function fetchAlerts() {
      const res = await fetch('/api/alerts?status=FIRING');
      const data = await res.json();
      setAlerts(data.alerts);
    }

    fetchAlerts();
    const interval = setInterval(fetchAlerts, 10000); // Refresh every 10s
    return () => clearInterval(interval);
  }, []);

  return (
    <div>
      <h2>Active Alerts ({alerts.length})</h2>
      {alerts.map((alert) => (
        <div
          key={alert.id}
          style={{
            background: getSeverityColor(alert.severity),
            padding: '1rem',
            borderRadius: '8px',
            marginBottom: '1rem',
          }}
        >
          <div style={{ display: 'flex', justifyContent: 'space-between' }}>
            <div>
              <h3>{alert.name}</h3>
              <p>{alert.message}</p>
              <p style={{ fontSize: '0.875rem', opacity: 0.8 }}>
                Triggered: {new Date(alert.triggeredAt).toLocaleString()}
              </p>
            </div>
            <div style={{ display: 'flex', gap: '0.5rem' }}>
              <button onClick={() => acknowledgeAlert(alert.id)}>
                Acknowledge
              </button>
              <button onClick={() => resolveAlert(alert.id)}>
                Resolve
              </button>
            </div>
          </div>
        </div>
      ))}
    </div>
  );
}

function getSeverityColor(severity: string) {
  const colors = {
    low: 'rgba(34, 197, 94, 0.2)',
    medium: 'rgba(251, 191, 36, 0.2)',
    high: 'rgba(249, 115, 22, 0.2)',
    critical: 'rgba(239, 68, 68, 0.2)',
  };
  return colors[severity as keyof typeof colors] || colors.medium;
}
```

## Testing Alerts
### Test Alert Rule
Trigger a test alert to verify your configuration:
```ts
// Manually trigger a test alert
await trailbase.testAlertRule('rule_xyz789');

// This sends a test notification through all configured channels
// without actually evaluating the condition
```

### Simulate Alert Conditions
```ts
// Create events that should trigger the alert
const testActorId = 'test_user_123';

// Send 5 failed login events (should trigger "Excessive Failed Logins")
for (let i = 0; i < 5; i++) {
  await trailbase.log({
    action: 'user.login',
    actor: {
      id: testActorId,
      email: 'test@example.com',
    },
    resource: {
      type: 'session',
      id: `session_${i}`,
    },
    outcome: 'failure',
  });
}

// Wait for the alert to fire
await new Promise((r) => setTimeout(r, 2000));

// Check whether the alert was triggered
const alerts = await trailbase.listAlerts({
  status: 'FIRING',
  ruleId: 'rule_xyz789',
});

console.log(`Alert triggered: ${alerts.length > 0}`);
```

## Best Practices
- **Use cooldown periods:** Prevent alert fatigue by spacing notifications.
- **Set appropriate severity:** Reserve `critical` for incidents that require immediate action.
- **Test before deploying:** Always test alert rules in a staging environment.
- **Document alert runbooks:** Include remediation steps in alert descriptions.
- **Review alerts regularly:** Tune thresholds based on actual usage patterns.
- **Avoid alert spam:** Too many alerts lead to important ones being ignored.
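The cooldown guidance above can be made concrete. This sketch shows one plausible model of cooldown suppression; the exact semantics of `cooldownMin` in Trailbase may differ:

```typescript
// Decide whether to send a notification, given when this rule last alerted.
// Assumed semantics: suppress re-alerts until cooldownMin minutes have elapsed.
function shouldNotify(lastAlertAtMs: number | null, nowMs: number, cooldownMin: number): boolean {
  if (lastAlertAtMs === null) return true; // Never alerted before
  return nowMs - lastAlertAtMs >= cooldownMin * 60_000;
}
```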
### Alert Fatigue

Monitor your alert acknowledgment rate. If fewer than 70% of alerts are being acknowledged, you may have too many low-value alerts. Adjust thresholds or disable noisy rules.
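One way to track this is to compute the acknowledgment rate over recently listed alerts. A sketch, assuming each alert exposes a `status` field as in the examples above:

```typescript
// Minimal shape assumed for a listed alert.
interface AlertSummary {
  status: 'FIRING' | 'ACKNOWLEDGED' | 'RESOLVED';
}

// Fraction of alerts that were acted on (acknowledged or resolved).
function acknowledgmentRate(alerts: AlertSummary[]): number {
  if (alerts.length === 0) return 1; // Nothing outstanding to acknowledge
  const acted = alerts.filter((a) => a.status !== 'FIRING').length;
  return acted / alerts.length;
}
```

Comparing this value against the 70% guideline over a rolling window gives a simple signal for when to prune noisy rules.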
## Next Steps
Set up automated audit exports to provide your customers with their own audit data.