Implement API throttling policies that protect backend services from overload by controlling request concurrency, queue depth, and processing rates. Apply backpressure mechanisms including concurrent request limits, priority queues, circuit breakers, and adaptive throttling that adjusts limits based on real-time backend health metrics.
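The concurrency-limit and queue-depth mechanics described above can be sketched as a small limiter. This is a minimal illustration, not the actual API of `${CLAUDE_SKILL_DIR}/src/middleware/throttle.js`; the class and option names (`ConcurrencyLimiter`, `maxConcurrent`, `maxQueueDepth`) are assumptions.

```javascript
// Minimal sketch: a concurrency limiter with a bounded wait queue.
// Requests past maxConcurrent wait in a queue; past maxQueueDepth they
// are rejected immediately (backpressure, mapped to 503 by the caller).
class ConcurrencyLimiter {
  constructor({ maxConcurrent, maxQueueDepth }) {
    this.maxConcurrent = maxConcurrent;
    this.maxQueueDepth = maxQueueDepth;
    this.active = 0;
    this.queue = []; // resolvers for queued requests, FIFO
  }

  // Resolves when a slot is free; rejects when the queue is full.
  acquire() {
    if (this.active < this.maxConcurrent) {
      this.active++;
      return Promise.resolve();
    }
    if (this.queue.length >= this.maxQueueDepth) {
      return Promise.reject(new Error('queue overflow'));
    }
    return new Promise((resolve) => this.queue.push(resolve));
  }

  // Hands the freed slot to the next queued request, if any.
  release() {
    const next = this.queue.shift();
    if (next) {
      next(); // slot transfers directly; active count is unchanged
    } else {
      this.active--;
    }
  }
}
```

In middleware, `acquire()` would wrap the request handler and `release()` would run in a `finally` block, with the rejection path returning a 503 plus throttle headers.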
When the circuit breaker is in the open state, return a Retry-After header so clients know when to retry. Expose X-Throttle-Limit, X-Throttle-Remaining, and X-Throttle-Reset headers for client-side awareness. Load tests should validate throttle engagement, Retry-After responses, and recovery behavior when load subsides. See ${CLAUDE_SKILL_DIR}/references/implementation.md for the full implementation guide.
- ${CLAUDE_SKILL_DIR}/src/middleware/throttle.js - Concurrency and request rate throttling middleware
- ${CLAUDE_SKILL_DIR}/src/middleware/circuit-breaker.js - Circuit breaker for downstream service protection
- ${CLAUDE_SKILL_DIR}/src/middleware/priority-queue.js - Tier-based request prioritization
- ${CLAUDE_SKILL_DIR}/src/config/throttle-config.js - Per-endpoint throttle policy definitions
- ${CLAUDE_SKILL_DIR}/tests/throttle/ - Load tests validating throttle engagement and recovery

| Error | Cause | Solution |
|---|---|---|
| 503 Service Unavailable | Concurrency limit reached for the endpoint | Return Retry-After header with estimated wait time; include throttle state headers |
| 503 Circuit Open | Circuit breaker tripped due to downstream failures | Return cached response if available; provide circuit reset time in response body |
| Queue overflow | Request buffer exceeded maximum depth | Reject with 503; alert operations team; consider scaling backend capacity |
| Stale throttle state | Redis connection lost; throttle counters become inaccurate | Fall back to in-process counters; reconnect with backoff; log state inconsistency |
| Priority starvation | Low-tier requests never served under sustained high-tier load | Reserve minimum throughput percentage for each tier to prevent complete starvation |
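The 503 rows above share a common response shape. A hedged sketch of assembling it, using the throttle headers named earlier; the helper name and object shape are illustrative, not the skill's actual API:

```javascript
// Builds a 503 throttle response carrying Retry-After plus the
// X-Throttle-* state headers for client-side awareness.
function throttleResponse({ limit, remaining, resetEpochSeconds, retryAfterSeconds }) {
  return {
    status: 503,
    headers: {
      'Retry-After': String(retryAfterSeconds),
      'X-Throttle-Limit': String(limit),
      'X-Throttle-Remaining': String(remaining),
      'X-Throttle-Reset': String(resetEpochSeconds),
    },
    body: { error: 'throttled', retryAfterSeconds },
  };
}
```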
Refer to ${CLAUDE_SKILL_DIR}/references/errors.md for comprehensive error patterns.
Database-heavy endpoint protection: Apply concurrency limit of 10 to a report generation endpoint that runs expensive aggregation queries, queueing additional requests with estimated wait times.
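A sketch of this scenario's policy plus a wait-time estimate. The config shape is an assumption, not the actual schema in `${CLAUDE_SKILL_DIR}/src/config/throttle-config.js`, and `avgProcessingMs` is a hypothetical observed average:

```javascript
// Illustrative per-endpoint policy for the report-generation endpoint.
const reportPolicy = {
  route: '/reports/generate',
  maxConcurrent: 10,     // cap on simultaneous aggregation queries
  maxQueueDepth: 50,
  avgProcessingMs: 4000, // assumed average time per report
};

// With maxConcurrent slots draining in parallel, a slot frees up roughly
// every avgProcessingMs / maxConcurrent; a request at queue `position`
// waits for position + 1 such slots.
function estimatedWaitMs(policy, position) {
  return Math.ceil(((position + 1) * policy.avgProcessingMs) / policy.maxConcurrent);
}
```

The estimate can be surfaced to queued clients as a Retry-After value (rounded up to whole seconds).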
Multi-tier SaaS throttling: Enterprise tier gets 100 concurrent requests, Pro tier gets 25, Free tier gets 5, with priority queue ensuring enterprise requests are served first during contention.
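The tier limits come from the scenario above; the queue implementation is an illustrative sketch, not the code in `${CLAUDE_SKILL_DIR}/src/middleware/priority-queue.js`:

```javascript
// Per-tier concurrency limits from the scenario above.
const TIER_LIMITS = { enterprise: 100, pro: 25, free: 5 };
// Serve order during contention: highest tier first.
const TIER_ORDER = ['enterprise', 'pro', 'free'];

class TierQueue {
  constructor() {
    this.queues = { enterprise: [], pro: [], free: [] };
  }
  enqueue(tier, request) {
    this.queues[tier].push(request);
  }
  // Strict priority dequeue. A production version would also reserve a
  // minimum throughput share per tier to avoid starving free-tier
  // requests (see the priority-starvation row in the error table).
  dequeue() {
    for (const tier of TIER_ORDER) {
      if (this.queues[tier].length > 0) return this.queues[tier].shift();
    }
    return null;
  }
}
```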
Adaptive autoscaling trigger: Throttle middleware emits metrics that trigger horizontal pod autoscaling when throttle engagement rate exceeds 20% sustained over 5 minutes.
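The 20%-over-5-minutes trigger can be sketched as a sliding-window engagement counter. The class and sample shape are assumptions; in practice the middleware would emit these counts to a metrics backend and the autoscaler would evaluate the rate there:

```javascript
// Tracks whether each request was throttled over a sliding window and
// reports when the throttle engagement rate exceeds the threshold.
class EngagementWindow {
  constructor({ windowMs = 5 * 60 * 1000, threshold = 0.2 } = {}) {
    this.windowMs = windowMs;
    this.threshold = threshold;
    this.samples = []; // { at: timestampMs, throttled: boolean }
  }
  record(throttled, now = Date.now()) {
    this.samples.push({ at: now, throttled });
    const cutoff = now - this.windowMs;
    while (this.samples.length && this.samples[0].at < cutoff) this.samples.shift();
  }
  // True when more than `threshold` of requests in the window were throttled.
  shouldScaleOut() {
    if (this.samples.length === 0) return false;
    const throttled = this.samples.filter((s) => s.throttled).length;
    return throttled / this.samples.length > this.threshold;
  }
}
```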
See ${CLAUDE_SKILL_DIR}/references/examples.md for additional examples.