Build real-time API monitoring with metrics collection (request rate, latency percentiles, error rates), health check endpoints, and alerting rules. Instrument API middleware to emit Prometheus metrics or StatsD counters, configure Grafana dashboards with SLO tracking, and implement synthetic monitoring probes for uptime verification.
Use prom-client (Node.js), prometheus_client (Python), or Micrometer (Java) to expose:

- `http_request_duration_seconds` histogram (with method, path, status labels)
- `http_requests_total` counter
- `http_requests_in_flight` gauge

Provide a `/health` endpoint returning structured health status including dependency checks (database connectivity, cache availability, external service reachability) with response time for each, and a separate `/ready` endpoint that returns 503 during startup initialization and graceful shutdown, for load balancer integration.

See ${CLAUDE_SKILL_DIR}/references/implementation.md for the full implementation guide.
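A minimal sketch of the `/health` aggregation logic described above, in plain Node.js. The probe names (`checkDatabase`-style functions passed in by the caller) and the 2-second timeout are assumptions, not part of the skill's files:

```javascript
// Run one dependency probe with a timeout, returning structured status
// plus its measured response time. A probe is any async function that
// resolves when the dependency is reachable and throws otherwise.
async function timedCheck(name, probe, timeoutMs = 2000) {
  const start = Date.now();
  try {
    await Promise.race([
      probe(),
      new Promise((_, reject) =>
        setTimeout(() => reject(new Error('timeout')), timeoutMs)),
    ]);
    return { name, status: 'up', responseTimeMs: Date.now() - start };
  } catch (err) {
    return { name, status: 'down', responseTimeMs: Date.now() - start, error: err.message };
  }
}

// Aggregate all probes into the structured body a /health handler would
// return. Reporting 'degraded' (rather than failing outright) lets a
// load balancer keep serving while alerts fire on the broken dependency.
async function healthReport(checks) {
  const results = await Promise.all(
    Object.entries(checks).map(([name, probe]) => timedCheck(name, probe)));
  const down = results.filter((r) => r.status === 'down');
  return {
    status: down.length === 0 ? 'healthy' : 'degraded',
    checks: results,
  };
}
```

A `/ready` handler would differ only in returning HTTP 503 whenever the process is still initializing or draining, regardless of dependency status.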
- ${CLAUDE_SKILL_DIR}/src/middleware/metrics.js - Prometheus metrics collection middleware
- ${CLAUDE_SKILL_DIR}/src/routes/health.js - Health check and readiness endpoints
- ${CLAUDE_SKILL_DIR}/monitoring/dashboards/ - Grafana dashboard JSON definitions
- ${CLAUDE_SKILL_DIR}/monitoring/alerts/ - Alerting rule definitions (Prometheus Alertmanager or Grafana)
- ${CLAUDE_SKILL_DIR}/monitoring/synthetic/ - Synthetic monitoring probe scripts
- ${CLAUDE_SKILL_DIR}/monitoring/slo.yaml - SLO definitions and error budget configuration

| Error | Cause | Solution |
|---|---|---|
| Metrics cardinality explosion | High-cardinality labels (user ID, request ID) on metrics | Use bounded label values only (method, status code, endpoint group); aggregate user-level data in logs |
| Health check false positive | Health endpoint returns 200 but dependent service is degraded | Include dependency checks with individual status; use structured response with degraded state |
| Alert fatigue | Too many low-severity alerts firing during normal operations | Tune alert thresholds using historical baselines; implement alert grouping and deduplication |
| Dashboard data gap | Metrics not collected during deployment rollout window | Configure Prometheus scrape interval < deployment duration; use push-based metrics during deploys |
| SLO miscalculation | Error budget calculation uses wrong time window or includes planned maintenance | Exclude maintenance windows from SLO calculation; align window with business reporting period |
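The cardinality fix in the first row above amounts to normalizing paths into a bounded endpoint group before using them as label values. A sketch under assumed route conventions (numeric IDs and UUIDs as path segments; adjust the patterns to your own scheme):

```javascript
// Collapse high-cardinality path segments into a bounded endpoint group,
// so a metric label never takes one value per user or request.
const UUID = /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function endpointGroup(path) {
  return path
    .split('/')
    .map((seg) => {
      if (/^\d+$/.test(seg)) return ':id';   // numeric IDs -> one bucket
      if (UUID.test(seg)) return ':uuid';    // UUIDs -> one bucket
      return seg;                            // static segments stay as-is
    })
    .join('/');
}

// endpointGroup('/users/123/orders') -> '/users/:id/orders'
```

Per-user or per-request detail then goes into structured logs, where cardinality is not a cost multiplier the way it is in a time-series database.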
Refer to ${CLAUDE_SKILL_DIR}/references/errors.md for comprehensive error patterns.
RED method dashboard: Request rate, Error rate, and Duration panels per endpoint, with drill-down from overview to individual endpoint detail, including top-10 slowest endpoints by p99.
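The "top-10 slowest by p99" ranking can be sketched over raw duration samples with a nearest-rank percentile; in production, Prometheus derives the same figure from histogram buckets via `histogram_quantile`, so this is an illustration of the arithmetic, not the dashboard query:

```javascript
// Nearest-rank percentile: the smallest sample such that at least p% of
// samples are <= it.
function percentile(samples, p) {
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, rank)];
}

// Rank endpoints by p99 duration, keeping the n slowest for the panel.
function slowestEndpoints(durationsByEndpoint, n = 10) {
  return Object.entries(durationsByEndpoint)
    .map(([endpoint, samples]) => ({ endpoint, p99: percentile(samples, 99) }))
    .sort((a, b) => b.p99 - a.p99)
    .slice(0, n);
}
```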
SLO-based alerting: Define a 99.9% availability SLO with a 30-day rolling window, and alert when the 1-hour burn rate exceeds 14.4x (consuming 2% of the 30-day error budget per hour, which would exhaust the budget in about two days), with PagerDuty escalation.
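The burn-rate arithmetic behind that alert, as a sketch: with a 99.9% SLO the error budget is 0.1% of requests, and the burn rate is the observed error ratio divided by that budget. A 1-hour burn rate of 14.4 consumes 14.4/720 = 2% of a 30-day budget per hour. The function names here are illustrative, not from the skill's files:

```javascript
// Burn rate = observed error ratio / allowed error ratio (the budget).
// A rate of 1.0 means the budget is being spent exactly on schedule.
function burnRate(errorCount, totalCount, sloTarget = 0.999) {
  const budget = 1 - sloTarget;  // 0.001 for a 99.9% SLO
  return errorCount / totalCount / budget;
}

// Page when the 1-hour burn rate crosses the fast-burn threshold.
function shouldPage(errorCount, totalCount, threshold = 14.4) {
  return burnRate(errorCount, totalCount) > threshold;
}

// Example: 15 errors in 1000 requests is a 1.5% error rate, i.e. a burn
// rate of 15 against a 0.1% budget -- above the 14.4 paging threshold.
```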
Dependency health matrix: Dashboard showing real-time health status of all downstream dependencies (database, cache, external APIs) with latency sparklines and circuit breaker state indicators.
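The per-dependency state feeding such a matrix can be sketched as a rolling latency window (the sparkline data) plus a naive consecutive-failure circuit breaker. The window size and failure threshold are assumptions; a real breaker would also include a half-open recovery state:

```javascript
// Tracks one downstream dependency: recent latency samples for the
// sparkline, and a breaker that opens after N consecutive failures.
class DependencyStatus {
  constructor(name, windowSize = 60, failureThreshold = 5) {
    this.name = name;
    this.windowSize = windowSize;
    this.failureThreshold = failureThreshold;
    this.latencies = [];           // most recent windowSize samples
    this.consecutiveFailures = 0;
  }

  record(latencyMs, ok) {
    this.latencies.push(latencyMs);
    if (this.latencies.length > this.windowSize) this.latencies.shift();
    this.consecutiveFailures = ok ? 0 : this.consecutiveFailures + 1;
  }

  get circuitState() {
    return this.consecutiveFailures >= this.failureThreshold ? 'open' : 'closed';
  }
}
```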
See ${CLAUDE_SKILL_DIR}/references/examples.md for additional examples.