ultrathink
**Tier:** STANDARD · **Category:** Engineering / Quality · **Dependencies:** None (prompt-only, no external tools required)
Self-eval is a Claude Code skill that produces honest, calibrated work evaluations. It replaces the default AI tendency to rate everything 4/5 with a structured two-axis scoring system, mandatory devil's advocate reasoning, and cross-session anti-inflation detection.
The core insight: AI self-assessment converges to "everything is a 4" because a single-axis score conflates task difficulty with execution quality. Self-eval separates these axes, then combines them via a fixed matrix that the model cannot override.
Scores persist to `.self-eval-scores.jsonl` in the working directory, building history across sessions.

After completing work in a Claude Code session:
`/self-eval`
With context about what to evaluate:
`/self-eval the authentication refactor we just completed`
The skill reviews the conversation history, identifies what was accomplished, scores it on both axes, runs the devil's advocate check, and appends the result to the score history file.
`/self-eval added pagination to the user list API`
Output:
## Self-Evaluation
**Task:** Added cursor-based pagination to the user list API endpoint
**Ambition:** Medium — New feature with established patterns, but required careful backwards compatibility
**Execution:** Strong — All tests pass, handles edge cases, existing clients unaffected
**Devil's Advocate:**
- Lower: Pagination is well-documented. The pattern was copied from another endpoint with minor changes. Could argue Low ambition.
- Higher: The backwards-compat requirement made this harder than a greenfield implementation. Had to support both offset and cursor modes.
- Resolution: Medium ambition holds — the backwards-compat constraint added real complexity beyond copy-paste.
**Score: 4** — Solid execution on meaningful work with real constraints.
`/self-eval`
Output:
## Self-Evaluation
**Task:** Fixed race condition in the checkout flow causing duplicate orders
**Ambition:** Medium — Required debugging across 3 services to find the root cause
**Execution:** Adequate — Fix works but the test only covers the exact scenario reported, not the general class of race conditions
**Devil's Advocate:**
- Lower: The fix was a single line (adding a mutex). Once found, it was trivial.
- Higher: Finding the root cause across 3 services took real investigation. The mutex placement required understanding the full flow.
- Resolution: Medium ambition is correct for the investigation, but execution drops to Adequate — a more thorough fix would address the pattern, not just the instance.
**Score: 3** — Good debugging work but the fix is narrow.
$ARGUMENTS
If no arguments provided, review the full conversation history to identify what was accomplished this session. Summarize the work in one sentence before scoring.
Score on two independent axes, then combine using the matrix. Do NOT pick a number first and rationalize it — rate each axis separately, then read the matrix.
**Ambition:** Rate the difficulty and risk of what was worked on, NOT how well it was done.
Self-check: If you were confident of success before starting, ambition is Low or Medium, not High.
**Execution:** Rate the quality of the actual output, independent of how ambitious the task was.
| | Poor Exec (1) | Adequate Exec (2) | Strong Exec (3) |
|---|---|---|---|
| Low Ambition (1) | 1 | 2 | 2 |
| Medium Ambition (2) | 2 | 3 | 4 |
| High Ambition (3) | 2 | 4 | 5 |
Read the matrix, don't override it. The composite is your score. The devil's advocate below can cause you to re-rate an axis — but you cannot directly override the matrix result.
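The fixed composite can be sketched as a simple lookup table. This is an illustrative sketch, not part of the skill itself; the `MATRIX` and `composite` names are hypothetical:

```python
# Composite score matrix from the table above:
# (ambition, execution) -> final score, each axis rated 1-3.
MATRIX = {
    (1, 1): 1, (1, 2): 2, (1, 3): 2,  # Low ambition caps at 2
    (2, 1): 2, (2, 2): 3, (2, 3): 4,
    (3, 1): 2, (3, 2): 4, (3, 3): 5,  # 5 needs High ambition AND Strong execution
}

def composite(ambition: int, execution: int) -> int:
    """Combine the two independent axis ratings via the fixed matrix."""
    return MATRIX[(ambition, execution)]
```

For example, Medium ambition (2) with Strong execution (3) reads out a 4, matching the pagination example above. The point of the lookup is that the final number is never chosen directly; it falls out of the two axis ratings.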
Key properties:
- A 5 requires both High ambition and Strong execution; neither axis alone earns it.
- Low-ambition work caps at 2, no matter how polished the execution.
- Poor execution caps at 2, no matter how ambitious the attempt.
Before writing your final score, you MUST write all three of these:
- **Lower:** The strongest honest case that one of your axis ratings should be lower.
- **Higher:** The strongest honest case that one should be higher.
- **Resolution:** Which case wins, and whether either axis rating changes as a result.
If your devil's advocate is less than 3 sentences total, you're not engaging with it — try harder.
Check for a score history file at .self-eval-scores.jsonl in the current working directory.
If the file exists, read it and check the last 5 scores. If 4+ of the last 5 are the same number, flag it:
> Warning: Score clustering detected. Last 5 scores: [list]. Consider whether you're anchoring to a default.
If the file doesn't exist, ask yourself: "Would an outside observer rate this the same way I am?"
After presenting your evaluation, append one line to .self-eval-scores.jsonl in the current working directory:
`{"date":"YYYY-MM-DD","score":N,"ambition":"Low|Medium|High","execution":"Poor|Adequate|Strong","task":"1-sentence summary"}`
This enables the anti-inflation check to work across sessions. If the file doesn't exist, create it.
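The append step maps to a few lines of Python. Again a sketch for illustration, with a hypothetical `append_score` helper, assuming the JSONL record shape shown above:

```python
import datetime
import json

def append_score(score: int, ambition: str, execution: str, task: str,
                 path: str = ".self-eval-scores.jsonl") -> None:
    """Append one evaluation record as a JSON line; creates the file on first use."""
    record = {
        "date": datetime.date.today().isoformat(),
        "score": score,
        "ambition": ambition,
        "execution": execution,
        "task": task,
    }
    # Open in append mode so history accumulates across sessions.
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
```

One record per line is what makes the anti-inflation check cheap: reading the last five scores is just reading the last five lines.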
Present your evaluation as:
**Task:** [1-sentence summary of what was attempted]
**Ambition:** [Low/Medium/High] — [1-sentence justification]
**Execution:** [Poor/Adequate/Strong] — [1-sentence justification]
**Devil's Advocate:**
- Lower: [case for rating lower]
- Higher: [case for rating higher]
- Resolution: [which case wins, and why]
**Score:** [1-5] — [1-sentence final justification]