Aurora DSQL is a serverless, PostgreSQL-compatible distributed SQL database. This skill provides direct database interaction via MCP tools, schema management, migration support, and multi-tenant patterns.
Key capabilities:
Load these files as needed for detailed guidance:
When: ALWAYS load before implementing schema changes or database operations. Contains: Best practices, DDL rules, connection patterns, transaction limits, data type serialization patterns, application-layer referential integrity instructions, security best practices
When: ALWAYS load for guidance on using or updating the DSQL MCP server. Contains: Instructions for setting up the DSQL MCP server with two configuration options, as sampled in mcp/.mcp.json
When: Load when you need detailed MCP tool syntax and examples. PREFER MCP tools for ad-hoc queries — execute directly rather than writing scripts. Contains: Tool parameters, detailed examples, usage patterns, input validation
When: MUST load when making language-specific implementation choices. ALWAYS prefer DSQL Connector when available. Contains: Driver selection, framework patterns, connection code for Python/JS/Go/Java/Rust
When: Load when looking for specific implementation examples. Contains: Code examples, repository patterns, multi-tenant implementations
When: Load when debugging errors or unexpected behavior. SHOULD always consult for OCC errors, connection failures, or unexpected query results. Contains: Common pitfalls, error messages, solutions
When: User explicitly requests to "Get started with DSQL" or a similar phrase. Contains: Interactive step-by-step guide for new users
When: MUST load when creating database roles, granting permissions, setting up schemas for applications, or handling sensitive data. ALWAYS use scoped roles for applications — create database roles with dsql:DbConnect.
Contains: Scoped role setup, IAM-to-database role mapping, schema separation for sensitive data, role design patterns
When: MUST load when performing DROP COLUMN, RENAME COLUMN, ALTER COLUMN TYPE, or DROP CONSTRAINT. Contains: Table recreation pattern overview, transaction rules, common verify & swap pattern
When: Load for DROP COLUMN, ALTER COLUMN TYPE, SET/DROP NOT NULL, SET/DROP DEFAULT migrations. Contains: Step-by-step migration patterns for column-level changes
When: Load for ADD/DROP CONSTRAINT, MODIFY PRIMARY KEY, column split/merge migrations. Contains: Step-by-step migration patterns for constraint and structural changes
When: Load when migrating tables exceeding 3,000 rows. Contains: OFFSET-based and cursor-based batching patterns, progress tracking, error handling
When: MUST load when migrating MySQL schemas to DSQL. Contains: MySQL data type mappings, feature alternatives, DDL operation mapping
When: Load when translating MySQL DDL operations to DSQL equivalents. Contains: ALTER COLUMN, DROP COLUMN, AUTO_INCREMENT, ENUM, SET, FOREIGN KEY migration patterns
When: Load when migrating a complete MySQL table to DSQL. Contains: End-to-end MySQL CREATE TABLE migration example with decision summary
When: MUST load all four at Workflow 8 Phase 0 — query-plan/plan-interpretation.md, query-plan/catalog-queries.md, query-plan/guc-experiments.md, query-plan/report-format.md. Contains: DSQL node types + Node Duration math + estimation-error bands; pg_class/pg_stats/pg_indexes SQL + correlated-predicate verification; GUC experiment procedures + the 30-second skip protocol; required report structure + element checklist + support request template
The aurora-dsql MCP server provides these tools:
Database Operations:
Documentation & Knowledge:
Note: There is no list_tables tool. Use readonly_query with information_schema.
See mcp-setup.md for detailed setup instructions. See mcp-tools.md for detailed usage and examples.
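Since there is no list_tables tool, a table listing goes through readonly_query with standard PostgreSQL information_schema SQL. A minimal sketch (schema filter is a common convention, adjust as needed):

```sql
-- List user tables, excluding system schemas
SELECT table_schema, table_name
FROM information_schema.tables
WHERE table_schema NOT IN ('pg_catalog', 'information_schema')
ORDER BY table_schema, table_name;
```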
awsknowledge: Consult to verify DSQL service limits before advising users. The numeric limits below are defaults and may change — when a user's decision depends on an exact limit, verify it first:
| Limit | Default | awsknowledge search phrase |
|---|---|---|
| Max rows per transaction | 3,000 | aurora dsql transaction limits |
| Max data size per transaction | 10 MiB | aurora dsql transaction limits |
| Max transaction duration | 5 minutes | aurora dsql transaction limits |
| Max connections per cluster | 10,000 | aurora dsql connection limits |
| Auth token expiry | 15 minutes | aurora dsql authentication token |
| Max connection duration | 60 minutes | aurora dsql connection limits |
| Max indexes per table | 24 | aurora dsql index limits |
| Max columns per index | 8 | aurora dsql index limits |
| IDENTITY/SEQUENCE CACHE values | 1 or >= 65536 | aurora dsql sequence cache |
When to verify: Before recommending batch sizes, connection pool settings, or schema designs where hitting a limit would cause failures. No need to verify for general guidance or when the exact number doesn't affect the user's decision.
Fallback: If awsknowledge is unavailable, use the defaults above and note to the user that limits should be verified against the DSQL documentation.
Bash scripts in scripts/ for cluster management (create, delete, list, cluster info), psql connection, and bulk data loading from local/S3 CSV/TSV/Parquet files. See scripts/README.md for usage.
Use readonly_query with information_schema to list tables
Use get_schema to understand table structure
Use readonly_query for SELECT queries
Always include tenant_id in WHERE clause for multi-tenant apps
MUST build SQL with safe_query.build() — see mcp/tools/input-validation.md
Use transact tool with list of SQL statements
Follow one-DDL-per-transaction rule
Always use CREATE INDEX ASYNC in a separate transaction
ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT → Table Recreation Pattern (Workflow 6)
Use CREATE INDEX ASYNC exclusively.
transact(["CREATE TABLE ..."])
transact(["ALTER TABLE ... ADD COLUMN ..."])
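The one-DDL-per-transaction rule can be sketched as follows. This is a stand-in for the MCP transact tool (the real tool is invoked through MCP, not as a Python function); the table and index names are illustrative:

```python
# Stand-in transact(): rejects batches containing more than one DDL
# statement, mirroring the DSQL one-DDL-per-transaction rule.
DDL_KEYWORDS = ("CREATE", "ALTER", "DROP")

def transact(statements):
    ddl = [s for s in statements if s.lstrip().upper().startswith(DDL_KEYWORDS)]
    if len(ddl) > 1:
        raise ValueError("DSQL allows only one DDL statement per transaction")
    return f"committed {len(statements)} statement(s)"

# Correct usage: each DDL runs in its own transaction, and index creation
# is ASYNC and in a separate transaction from the CREATE TABLE.
transact(["CREATE TABLE orders (id UUID PRIMARY KEY, customer_id UUID)"])
transact(["CREATE INDEX ASYNC orders_customer_idx ON orders (customer_id)"])
```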
Recovery — batch fails midway: Rows already updated keep their new value (each batch committed
in its own transaction). Resume by filtering on the unset state — e.g. add
WHERE new_column IS NULL (or the sentinel value) to the next UPDATE — and continue from there.
Re-running the entire migration is safe because the filter naturally excludes completed rows.
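The resumable-batch pattern above can be sketched in Python. SQLite stands in for DSQL here, and the batch size is kept tiny for illustration (in DSQL, size batches under the 3,000-row / 10 MiB transaction limits):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE items (id INTEGER PRIMARY KEY, old_col TEXT, new_col TEXT)")
conn.executemany("INSERT INTO items (id, old_col) VALUES (?, ?)",
                 [(i, f"v{i}") for i in range(10)])
conn.commit()

BATCH = 3
while True:
    # Filtering on the unset state (new_col IS NULL) makes re-runs safe:
    # rows already migrated by a previous run are naturally excluded.
    cur = conn.execute(
        "UPDATE items SET new_col = upper(old_col) "
        "WHERE id IN (SELECT id FROM items WHERE new_col IS NULL LIMIT ?)",
        (BATCH,))
    conn.commit()  # each batch commits in its own transaction, as in DSQL
    if cur.rowcount == 0:
        break  # nothing left in the unset state: backfill complete

remaining = conn.execute(
    "SELECT count(*) FROM items WHERE new_col IS NULL").fetchone()[0]
print(remaining)  # 0
```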
INSERT: MUST validate parent exists with readonly_query → throw error if not found → insert child with transact.
DELETE: MUST check dependents with readonly_query COUNT → return error if dependents exist → delete with transact if safe.
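The two checks above can be sketched as application-layer code. SQLite stands in for DSQL (which does not enforce FOREIGN KEY constraints), and the readonly_query/transact steps are shown as plain SQL calls; table names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY)")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER)")
conn.execute("INSERT INTO customers (id) VALUES (1)")

def insert_order(order_id, customer_id):
    # INSERT: validate the parent exists (the readonly_query step),
    # then insert the child (the transact step).
    row = conn.execute("SELECT 1 FROM customers WHERE id = ?",
                       (customer_id,)).fetchone()
    if row is None:
        raise LookupError(f"customer {customer_id} does not exist")
    conn.execute("INSERT INTO orders (id, customer_id) VALUES (?, ?)",
                 (order_id, customer_id))

def delete_customer(customer_id):
    # DELETE: COUNT dependents first; refuse the delete if any exist.
    (n,) = conn.execute("SELECT count(*) FROM orders WHERE customer_id = ?",
                        (customer_id,)).fetchone()
    if n:
        raise RuntimeError(f"customer {customer_id} has {n} dependent order(s)")
    conn.execute("DELETE FROM customers WHERE id = ?", (customer_id,))

insert_order(100, 1)  # parent exists, so the insert succeeds
```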
safe_query.build() — use allow()/regex() for values (emits 'v') and ident() for table/column names (emits "v"). See input-validation.md.
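To make the quoting contract concrete, here is an illustrative re-implementation of that behavior. This is NOT the real safe_query API (see input-validation.md for that); it only shows the contract described above: allow()/regex() emit single-quoted values, ident() emits a double-quoted identifier, and anything failing validation is rejected:

```python
import re

def allow(value, allowed):
    # Value must come from an explicit allow-list; emitted single-quoted.
    if value not in allowed:
        raise ValueError(f"value {value!r} not in allow-list")
    return "'" + value.replace("'", "''") + "'"

def regex(value, pattern):
    # Value must fully match the pattern; emitted single-quoted.
    if not re.fullmatch(pattern, value):
        raise ValueError(f"value {value!r} fails pattern {pattern}")
    return "'" + value.replace("'", "''") + "'"

def ident(name):
    # Identifiers restricted to a safe character set; emitted double-quoted.
    if not re.fullmatch(r"[A-Za-z_][A-Za-z0-9_]*", name):
        raise ValueError(f"unsafe identifier {name!r}")
    return '"' + name + '"'

STATUSES = {"open", "closed"}
sql = f"SELECT * FROM {ident('orders')} WHERE status = {allow('open', STATUSES)}"
print(sql)  # SELECT * FROM "orders" WHERE status = 'open'
```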
Always include tenant_id in the WHERE clause; reject cross-tenant access at the application layer. MUST load access-control.md for role setup, IAM mapping, and schema permissions.
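A minimal sketch of the tenant-scoping rule: every read goes through a helper that binds the caller's tenant_id into the WHERE clause, so cross-tenant rows are unreachable. SQLite stands in for DSQL, and the table and column names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE docs (id INTEGER, tenant_id TEXT, body TEXT)")
conn.executemany("INSERT INTO docs VALUES (?, ?, ?)",
                 [(1, "t1", "a"), (2, "t2", "b")])

def fetch_docs(tenant_id):
    # tenant_id is always a bound parameter in the WHERE clause:
    # never string-interpolated, never optional.
    return conn.execute(
        "SELECT id, body FROM docs WHERE tenant_id = ?",
        (tenant_id,)).fetchall()

print(fetch_docs("t1"))  # [(1, 'a')]
```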
DSQL does NOT support direct ALTER COLUMN TYPE, DROP COLUMN, DROP CONSTRAINT, or MODIFY PRIMARY KEY. These operations require the Table Recreation Pattern — creating a new table, copying data, dropping the original, and renaming. This is a destructive workflow that requires user confirmation at each step.
MUST load ddl-migrations/overview.md before attempting any of these operations.
MUST load mysql-migrations/type-mapping.md for type mappings, feature alternatives, and migration steps.
Explains why the DSQL optimizer chose a particular plan. Triggered by slow queries, high DPU, unexpected Full Scans, or plans the user doesn't understand. The deliverable is a structured Markdown diagnostic report, not just a conversational answer — run the workflow end-to-end before answering. Use the aurora-dsql MCP when connected; otherwise fall back to raw psql with a generated IAM token (see the fallback block below).
Phase 0 — Load reference material. Read all four before starting — each has content later phases need verbatim (node-type math, exact catalog SQL, the >30s skip protocol, required report elements).
Phase 1 — Capture the plan. ALWAYS run readonly_query("EXPLAIN ANALYZE VERBOSE …") on the user's query verbatim (SELECT form) — ALWAYS capture a fresh plan from the cluster, even when the user describes the plan or reports an anomaly. MAY leverage get_schema or information_schema for schema sanity checks. When EXPLAIN errors (relation does not exist, column does not exist), MUST report the error verbatim — MUST NOT invent DSQL-specific semantics (e.g., case sensitivity, identifier quoting) as the root cause. Extract Query ID, Planning Time, Execution Time, DPU Estimate. SELECT runs as-is. UPDATE/DELETE: rewrite to the equivalent SELECT (same join chain + WHERE) — the optimizer picks the same plan shape. INSERT, PL/pgSQL, DO blocks, and functions MUST be rejected. MUST NOT use transact --allow-writes for plan capture; it bypasses MCP safety.
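The UPDATE/DELETE rewrite keeps the join chain and WHERE clause intact while switching to a SELECT. A sketch with illustrative table and column names:

```sql
-- Original DML (cannot run through readonly_query):
--   UPDATE orders o SET status = 'shipped'
--   FROM customers c
--   WHERE o.customer_id = c.id AND c.region = 'EU';

-- Plan-capture form: same join chain and WHERE, as a SELECT
EXPLAIN ANALYZE VERBOSE
SELECT o.id
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE c.region = 'EU';
```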
Phase 2 — Gather evidence. Using SQL from catalog-queries.md, query pg_class, pg_stats, pg_indexes, COUNT(*), COUNT(DISTINCT). Classify estimation errors per plan-interpretation.md (2x–5x minor, 5x–50x significant, 50x+ severe). Detect correlated predicates and data skew.
Phase 3 — Experiment (conditional). If the query runs in ≤30s: run GUC experiments per guc-experiments.md (default + merge-join-only) plus an optional redundant-predicate test. If >30s: skip experiments, include the manual GUC testing SQL verbatim in the report, and do not re-run for redundant-predicate testing. Anomalous values (impossible row counts): confirm query results are correct despite the anomalous EXPLAIN, flag as a potential DSQL bug, and produce the Support Request Template from report-format.md.
Phase 4 — Produce the report, invite reassessment. Produce the full diagnostic report per the "Required Elements Checklist" in query-plan/report-format.md — structure is non-negotiable. End with the "Next Steps" block from that reference so the user can ask for a reassessment after applying a recommendation. When the user says "reassess" (or equivalent), re-run Phase 1–2 and append an "Addendum: After-Change Performance" to the original report (before/after table, match against expected impact) rather than producing a new report.
psql fallback (MCP unavailable). Pipe statements into psql via heredoc and check $?; report failures without proceeding on partial evidence:
TOKEN=$(aws dsql generate-db-connect-admin-auth-token --hostname "$HOST" --region "$REGION")
PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" <<<"EXPLAIN ANALYZE VERBOSE <sql>;"
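The "check $? and stop" rule can be wrapped in a small helper. A sketch: run_checked is a hypothetical name, and the commented psql invocation is the one shown above:

```shell
# Hypothetical helper: run a command and stop the workflow on failure
# instead of proceeding on partial evidence.
run_checked() {
  "$@"
  local rc=$?
  if [ "$rc" -ne 0 ]; then
    echo "command failed (exit $rc): $*" >&2
    return "$rc"
  fi
}

# Usage (same psql invocation as above):
# run_checked bash -c 'PGPASSWORD="$TOKEN" psql "host=$HOST port=5432 user=admin dbname=postgres sslmode=require" <<<"EXPLAIN ANALYZE VERBOSE <sql>;"'
```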
Safety. Plan capture uses readonly_query exclusively — it rejects INSERT/UPDATE/DELETE/DDL at the MCP layer. Rewrite DML to SELECT (Phase 1) rather than asking transact --allow-writes to run it; write-mode transact bypasses all MCP safety checks. MUST NOT run arbitrary DDL/DML or pl/pgsql.
awsknowledge returns no results: Use the default limits in the table above and note that limits should be verified against DSQL documentation.