Production-ready templates for incident response runbooks covering detection, triage, mitigation, resolution, and communication. Use these templates when:

- Creating incident response procedures
- Building service-specific runbooks
Every runbook should cover:

1. Overview & Impact
2. Detection & Alerts
3. Initial Triage
4. Mitigation Steps
5. Root Cause Investigation
6. Resolution Procedures
7. Verification & Rollback
8. Communication Templates
9. Escalation Matrix
### Template 1: Service Outage Runbook

# [Service Name] Outage Runbook

## Overview

**Service**: Payment Processing Service
**Owner**: Platform Team
**Slack**: #payments-incidents
**PagerDuty**: payments-oncall

## Impact Assessment

- [ ] Which customers are affected?
- [ ] What percentage of traffic is impacted?
- [ ] Are there financial implications?
- [ ] What's the blast radius?

## Detection

### Alerts

- `payment_error_rate > 5%` (PagerDuty)
- `payment_latency_p99 > 2s` (Slack)
- `payment_success_rate < 95%` (PagerDuty)

### Dashboards

- [Payment Service Dashboard](https://grafana/d/payments)
- [Error Tracking](https://sentry.io/payments)
- [Dependency Status](https://status.stripe.com)

## Initial Triage (First 5 Minutes)

### 1. Assess Scope

```bash
# Check service health
kubectl get pods -n payments -l app=payment-service

# Check recent deployments
kubectl rollout history deployment/payment-service -n payments

# Check error rates
curl -s "http://prometheus:9090/api/v1/query?query=sum(rate(http_requests_total{status=~'5..'}[5m]))"
```
### 2. Quick Health Checks

- Can you reach the service? `curl -I https://api.company.com/payments/health`
- Database connectivity? Check connection pool metrics
- External dependencies? Check Stripe, bank API status
- Recent changes? Check deploy history

### 3. Initial Classification

| Symptom | Likely Cause | Go To Section |
|---------|--------------|---------------|
| All requests failing | Service down | Section 4.1 |
| High latency | Database/dependency | Section 4.2 |
| Partial failures | Code bug | Section 4.3 |
| Spike in errors | Traffic surge | Section 4.4 |

## Mitigation Procedures

### 4.1 Service Completely Down
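A minimal first-response sketch, assuming the `payment-service` deployment and `payments` namespace used in the triage commands above:

```bash
# Look for crashlooping pods and recent container restarts
kubectl get pods -n payments -l app=payment-service
kubectl describe pods -n payments -l app=payment-service | grep -A5 "Last State"

# If a recent deploy is the likely cause, roll back to the previous revision
kubectl rollout undo deployment/payment-service -n payments
kubectl rollout status deployment/payment-service -n payments

# If pods are wedged rather than failing, force a fresh rollout
kubectl rollout restart deployment/payment-service -n payments
```

If rollback restores service, continue with the Verification Steps below and investigate root cause offline.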
### 4.2 High Latency
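A sketch for narrowing down the latency source; the histogram metric name is an assumption, and `payments-db` is a placeholder host:

```bash
# p99 latency from Prometheus (http_request_duration_seconds_bucket is an
# assumed metric name; substitute your service's histogram)
curl -sG "http://prometheus:9090/api/v1/query" \
  --data-urlencode 'query=histogram_quantile(0.99, sum(rate(http_request_duration_seconds_bucket[5m])) by (le))'

# Long-running database queries (payments-db is a placeholder host)
psql -h payments-db -c "SELECT pid, now() - query_start AS runtime, left(query, 60) AS query FROM pg_stat_activity WHERE state != 'idle' ORDER BY runtime DESC LIMIT 10;"

# Buy headroom while investigating: scale out the service
kubectl scale deployment/payment-service -n payments --replicas=10
```

If the dependency dashboards (Stripe, bank APIs) show elevated latency instead, the bottleneck is upstream; use any fallback or queueing path the service supports.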
### 4.3 Partial Failures (Specific Errors)
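A sketch for isolating a specific failing code path, assuming logs are reachable via `kubectl logs`:

```bash
# Sample recent errors to find the dominant failure mode
kubectl logs -n payments -l app=payment-service --since=15m --prefix | grep -i error | tail -50

# If failures cluster on a single pod, replace it and let the
# ReplicaSet recreate it
kubectl get pods -n payments -l app=payment-service -o wide
kubectl delete pod <failing-pod-name> -n payments

# If a recent deploy introduced the bug, roll back (see Rollback Procedures)
kubectl rollout undo deployment/payment-service -n payments
```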
### 4.4 Traffic Surge
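A sketch for absorbing a surge; the replica count is illustrative, and rate limiting is assumed to live at your ingress or API gateway:

```bash
# Confirm the surge and compare against baseline traffic
curl -sG "http://prometheus:9090/api/v1/query" \
  --data-urlencode 'query=sum(rate(http_requests_total[5m]))'

# Scale out to absorb legitimate traffic
kubectl scale deployment/payment-service -n payments --replicas=20

# If an HPA is in place, check whether it is already at max
kubectl get hpa -n payments
```

For abusive traffic, prefer rate limiting or blocking at the edge over scaling.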
## Verification Steps

Confirm the alert conditions from the Detection section have cleared before standing down:

- [ ] `payment_error_rate` back below 5%
- [ ] `payment_latency_p99` back below 2s
- [ ] `payment_success_rate` above 95%
- [ ] [Payment Service Dashboard](https://grafana/d/payments) looks normal
- [ ] No new issues in [Error Tracking](https://sentry.io/payments)

## Rollback Procedures

If a deploy is the confirmed or suspected cause, roll back rather than fixing forward:

- List revisions: `kubectl rollout history deployment/payment-service -n payments`
- Roll back: `kubectl rollout undo deployment/payment-service -n payments`
- Watch it complete: `kubectl rollout status deployment/payment-service -n payments`
- Re-run the Verification Steps above

## Escalation Matrix

| Condition | Escalate To | Contact |
|-----------|-------------|---------|
| > 15 min unresolved SEV1 | Engineering Manager | @manager (Slack) |
| Data breach suspected | Security Team | #security-incidents |
| Financial impact > $10k | Finance + Legal | @finance-oncall |
| Customer communication needed | Support Lead | @support-lead |

## Communication Templates

### Initial Notification (Internal)

Post in #payments-incidents as soon as severity is assigned. Include the severity, a one-line impact summary (who is affected and what percentage of traffic), the incident commander, and a link to this runbook.

### Status Update

```
📊 UPDATE: Payment Service Incident
```

### Resolution Notification

```
✅ RESOLVED: Payment Service Incident
```

### Template 2: Database Incident Runbook

# Database Incident Runbook

## Quick Reference

| Issue | Command |
|-------|---------|
| Check connections | `SELECT count(*) FROM pg_stat_activity;` |
| Kill query | `SELECT pg_terminate_backend(pid);` |
| Check replication lag | `SELECT extract(epoch from (now() - pg_last_xact_replay_timestamp()));` |
| Check locks | `SELECT * FROM pg_locks WHERE NOT granted;` |

## Connection Pool Exhaustion

```sql
-- Check current connections
SELECT datname, usename, state, count(*)
FROM pg_stat_activity
GROUP BY datname, usename, state
ORDER BY count(*) DESC;

-- Identify long-running connections
SELECT pid, usename, datname, state, query_start, query
FROM pg_stat_activity
WHERE state != 'idle'
ORDER BY query_start;

-- Terminate idle connections
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE state = 'idle'
  AND query_start < now() - interval '10 minutes';
```

## Replication Lag

```sql
-- Check lag on replica
SELECT CASE
         WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn() THEN 0
         ELSE extract(epoch from now() - pg_last_xact_replay_timestamp())
       END AS lag_seconds;

-- If lag > 60s, consider:
-- 1. Check network between primary/replica
-- 2. Check replica disk I/O
-- 3. Consider failover if unrecoverable
```

## Disk Space Critical

```bash
# Check disk usage
df -h /var/lib/postgresql/data

# Find large tables
psql -c "SELECT relname, pg_size_pretty(pg_total_relation_size(relid)) FROM pg_catalog.pg_statio_user_tables ORDER BY pg_total_relation_size(relid) DESC LIMIT 10;"

# VACUUM FULL to reclaim space (takes an exclusive lock; use with care)
psql -c "VACUUM FULL large_table;"

# If emergency, delete old data or expand disk
```
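If the database runs on Kubernetes with a StorageClass that allows volume expansion (an assumption; the PVC name and namespace are placeholders), the disk can be grown in place:

```bash
# Grow the data volume in place; requires allowVolumeExpansion: true
# on the StorageClass. PVC name and namespace are placeholders.
kubectl patch pvc postgres-data -n databases \
  -p '{"spec":{"resources":{"requests":{"storage":"500Gi"}}}}'

# Watch the resize complete
kubectl get pvc postgres-data -n databases -w
```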
## Best Practices

### Do's

- **Keep runbooks updated** - Review after every incident
- **Test runbooks regularly** - Game days, chaos engineering
- **Include rollback steps** - Always have an escape hatch
- **Document assumptions** - What must be true for steps to work
- **Link to dashboards** - Quick access during stress

### Don'ts

- **Don't assume knowledge** - Write for 3 AM brain
- **Don't skip verification** - Confirm each step worked
- **Don't forget communication** - Keep stakeholders informed
- **Don't work alone** - Escalate early
- **Don't skip postmortems** - Learn from every incident

## Resources

- [Google SRE Book - Incident Management](https://sre.google/sre-book/managing-incidents/)
- [PagerDuty Incident Response](https://response.pagerduty.com/)
- [Atlassian Incident Management](https://www.atlassian.com/incident-management)