Deploy Deepgram integrations to various cloud platforms and container environments including Docker, Kubernetes, AWS Lambda, Google Cloud Run, and Vercel.
Build a multi-stage Dockerfile with non-root user, health checks, and production optimizations.
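A minimal sketch of such a Dockerfile, assuming a Node.js service that listens on port 3000 and exposes `/health` (the base image, build commands, and file paths are illustrative):

```dockerfile
# --- build stage: install dependencies and compile ---
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build && npm prune --omit=dev

# --- runtime stage: minimal image, non-root user ---
FROM node:20-alpine
WORKDIR /app
COPY --from=build /app/node_modules ./node_modules
COPY --from=build /app/dist ./dist
USER node                        # non-root user shipped with the base image
EXPOSE 3000
HEALTHCHECK --interval=30s --timeout=3s \
  CMD wget -qO- http://localhost:3000/health || exit 1
CMD ["node", "dist/index.js"]
```

The multi-stage split keeps build tooling out of the runtime image, and the `HEALTHCHECK` gives Docker (and Compose) the same signal Kubernetes gets from its probes.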
Choose deployment target: Docker Compose (simple), Kubernetes (scale), or serverless (event-driven).
Use platform-native secret management (K8s Secrets, AWS Secrets Manager, GCP Secret Manager).
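On Kubernetes, for example, the API key can live in a Secret and be injected as an environment variable. The names below (`deepgram-credentials`, the key value) are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: deepgram-credentials   # illustrative name
stringData:
  DEEPGRAM_API_KEY: "dg_..."   # never commit real keys; prefer kubectl create secret
```

The Deployment then references it via `env[].valueFrom.secretKeyRef` rather than baking the key into the image or manifest.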
Configure liveness and readiness probes against the /health endpoint.
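A probe configuration along these lines, assuming the service serves `/health` on containerPort 3000 (the delays are starting points, not tuned values):

```yaml
livenessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 10   # raise this if the service needs longer to boot
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /health
    port: 3000
  initialDelaySeconds: 5
  periodSeconds: 5
```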
Set up HPA (Kubernetes) or managed scaling with CPU-based thresholds at 70% utilization.
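On Kubernetes, the 70% CPU threshold maps to an `autoscaling/v2` HorizontalPodAutoscaler; the Deployment name and replica bounds below are illustrative:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: deepgram-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: deepgram-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # the 70% threshold noted above
```

Note the HPA compares utilization against the pod's CPU *request*, so requests must be set on the Deployment for this to work.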
Build environment-aware deploy script that runs build, tests, deploys, and smoke tests.
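One way to sketch that script (`scripts/deploy.sh`). The image name, test command, manifest paths, and smoke-test URL are placeholders, and a `DRY_RUN` flag lets the sequence be rehearsed without touching infrastructure:

```shell
#!/usr/bin/env bash
set -euo pipefail

# run: echo the command under DRY_RUN=true, execute it otherwise
run() {
  if [ "${DRY_RUN:-false}" = "true" ]; then echo "+ $*"; else "$@"; fi
}

deploy() {
  local env="${1:?usage: deploy.sh <staging|production>}"
  case "$env" in
    staging|production) ;;
    *) echo "unknown environment: $env" >&2; return 1 ;;
  esac
  run docker build -t "deepgram-service:$env" .       # build
  run npm test                                        # tests
  run kubectl --context "$env" apply -f "k8s/$env/"   # deploy
  run curl -fsS "https://$env.example.com/health"     # smoke test
}

# Rehearse the staging sequence without executing anything:
DRY_RUN=true deploy staging
```

Gating every side effect through `run` keeps the build/test/deploy/smoke-test order in one place while allowing a safe dry run.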
| Issue | Cause | Resolution |
|---|---|---|
| Container fails to start | Missing secret or env var | Check secret mounts and env config |
| Health check failing | Service not ready | Increase initialDelaySeconds |
| OOM kills | Memory limit too low | Increase memory limit (512Mi minimum) |
| Connection refused | Wrong port mapping | Verify containerPort matches app |
set -euo pipefail
docker build -t deepgram-service .
docker run -p 3000:3000 -e DEEPGRAM_API_KEY="$DEEPGRAM_API_KEY" deepgram-service # map host port 3000 to containerPort 3000
./scripts/deploy.sh staging # Deploy to staging
./scripts/deploy.sh production # Deploy to production
See detailed implementation for advanced patterns.