Turn engineering work into evidence-backed impact statements for performance reviews, self-reviews, promotion packets, and weekly updates. Uniquely mines Copilot CLI session logs, git history, and PRs to reconstruct forgotten work.
USE FOR: "brag", "log work", "what did I do", "backfill", "performance review", "self-review", "promo packet", "weekly update", "status report", "write impact statement", "what did I ship", "I forgot to log my work", "review prep", "accomplishments" DO NOT USE FOR: project management, sprint planning, time tracking, ticket creation
| User wants... | Mode | Output |
|---|---|---|
| Log one accomplishment | Capture | 1 impact-first entry |
| "What did I do last week?" | Backfill | Entries grouped by week, mined from git/PRs/sessions |
| Prep for review or promo | Review Pack | Entries grouped by impact theme + STAR narratives |
Check which tools are available (save_to_brag_sheet, git, gh) before choosing a workflow. If evidence is missing, mark the entry (evidence needed) — never silently omit. Every entry uses impact-first framing with three required parts:
Did [action] → [result/impact] → [evidence]
Do not output an entry unless it includes all three parts. If evidence is missing, ask for it or mark as "(evidence needed)".
| ❌ Don't | ✅ Do instead |
|---|---|
| "Fixed a bug in auth" | "Fixed token refresh race condition → eliminated 401s affecting 12% of API calls → PR #247" |
| "Worked on dashboards" | "Built latency dashboard in Grafana → on-call detects P95 spikes in <2min → deployed to prod" |
| Invent a metric: "saved 40% of eng time" | Ask: "Do you have a rough estimate, or should I keep this qualitative?" |
| One entry per commit | Group related commits into one entry with highest-impact framing |
| Passive voice: "The pipeline was improved" | Active voice: "Built CI matrix → caught Windows-only bug before release" |
| List technologies used | State the outcome: "Migrated 4 services to IaC → deploy time 45min → 8min" |
| Silently drop weak entries | Mark (evidence needed) and present for user to fill in |
Not every entry needs a metric. Use the strongest evidence available:
| Strength | Evidence type | Example |
|---|---|---|
| 🥇 Best | Quantified metric | "Reduced P95 latency from 800ms to 120ms" |
| 🥈 Strong | PR, commit, or doc link | "PR #312, design doc in wiki" |
| 🥉 Good | Observable outcome | "Unblocked Team X", "Resolved Sev2 incident Y" |
| ✅ Acceptable | Qualitative + context | "Reduced toil for on-call rotation — see updated runbook" |
| ⚠️ Weak | Activity only | "Worked on auth" — reframe or mark (evidence needed) |
Never invent a metric to fill the gap. Qualitative evidence with context beats fabricated numbers.
| ID | Emoji | Use for |
|---|---|---|
| pr | 🚀 | Merged PRs, shipped features |
| bugfix | 🐛 | Bug fixes, incident patches |
| infrastructure | 🏗️ | Infra, deployments, migrations |
| investigation | 🔍 | Root cause analysis, debugging |
| collaboration | 🤝 | Reviews, mentoring, design discussions |
| tooling | 🔧 | Dev tools, scripts, automation |
| oncall | 🚨 | Incident response, on-call wins |
| design | 📐 | Design docs, architecture decisions |
| documentation | 📝 | Docs, runbooks, guides |
Follow this decision tree:

1. If the save_to_brag_sheet tool is available → use the extension tools directly (save_to_brag_sheet, review_brag_sheet, generate_work_log). Do not reference or attempt to call these tools unless they are confirmed available.
2. Else if the git or gh CLI is available → backfill from commits and PRs (see the Backfill section below).
3. Otherwise → run a guided interview: "What did you work on?", "Who benefited?", "What's the evidence?"
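The git/gh branches of this tree can be probed from the shell. A minimal sketch (`choose_workflow` is a hypothetical helper name; the extension tools are not shell binaries, so that branch is decided from the agent's own tool list, not `command -v`):

```shell
# Sketch of the fallback order above. Only the git/gh branches can be
# probed with command -v; the save_to_brag_sheet extension case is
# decided by checking the agent's available tools instead.
choose_workflow() {
  if command -v git >/dev/null 2>&1 || command -v gh >/dev/null 2>&1; then
    echo "backfill"   # mine commits and PRs
  else
    echo "interview"  # guided questions as a last resort
  fi
}
```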
For each entry, walk through: What (the deliverable) → Why (who benefits) → Evidence (PR, metric, link). Output formatted markdown the user can paste into a review doc.
When the user asks "what did I do last week" or "backfill my history":
Follow these steps in order. Do not draft entries until scanning is complete.
Check what's available, then mine each source:
```shell
git --version 2>/dev/null                 # for commit mining
gh --version 2>/dev/null                  # for PR mining
ls ~/.copilot/session-state/ 2>/dev/null  # Copilot session logs
```
Git commits — recent commits by the user in the current repo:
```shell
git log --author="$(git config user.email)" --since="2 weeks ago" \
  --pretty=format:'%h|%ad|%s' --date=short --no-merges
```
PR history — merged PRs across repos:
```shell
gh pr list --author @me --state merged --limit 20 \
  --json number,title,repository,mergedAt
```
Copilot session history (unique to this skill) — scan each session's workspace file:

```
~/.copilot/session-state/<session-id>/workspace.yaml
```

Each file records `summary`, `cwd`, `repository`, and `branch`; draft entries from the `summary` field. If none of these sources are available, fall back to the guided interview.
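A minimal sketch of mining those summaries, assuming each `workspace.yaml` stores `summary` as a single-line top-level key (`list_session_summaries` is a hypothetical helper; a real YAML parser is safer for multi-line values):

```shell
# Sketch: collect one-line summaries from Copilot session logs.
# Assumes workspace.yaml has a single-line "summary:" key.
list_session_summaries() {
  root="${1:-$HOME/.copilot/session-state}"
  if [ ! -d "$root" ]; then
    echo "No Copilot session history found; scanning git and PRs only." >&2
    return 0
  fi
  for f in "$root"/*/workspace.yaml; do
    [ -f "$f" ] || continue
    grep -m1 '^summary:' "$f" | sed 's/^summary:[[:space:]]*//'
  done
}
```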
Cluster related signals into one entry: several commits, their PR, and the matching session log for the same feature become a single entry.
Write impact-first entries for each group. Assign categories.
Show all drafted entries to the user. Adjust based on feedback.
Format as markdown grouped by week:

```markdown
## Week of 2025-04-14

### 🚀 PRs & Features
- **Migrated auth service to managed identity** → eliminated 3 secret rotation incidents/quarter → PR #312

### 🏗️ Infrastructure
- **Built CI pipeline for copilot-brag-sheet** → 107 tests across 3 OSes × 3 Node versions → shipped v1.0.0
```
When the user is preparing for a performance review (Connect, annual review, etc.):
| ✅ Strong | ❌ Weak |
|---|---|
| Outcome-first, quantified | Activity list ("worked on X") |
| Tied to customer/team impact | No beneficiary mentioned |
| Includes evidence (PR, metric) | No measurable result |
| Shows ownership or leadership | Pure task completion |
For longer narrative sections, use STAR: Situation → Task → Action → Result.
For Microsoft employees using the Connect preset, frame entries around Core Priorities: delivering results, customer obsession, teamwork, and growth mindset.
Before finishing, ensure every entry has all three parts — action → result → evidence — with (evidence needed) marking any gaps.

The user may work across multiple repos. Before concluding there's nothing to backfill, run `gh pr list --author @me --state merged` for cross-repo PRs.

Performance reviews often cover 6–12 months. Explicitly set the date range:
```shell
git log --author="$(git config user.name)" --since="2024-07-01" --until="2025-01-01" --oneline
```
PR history (`gh pr list --state merged`) is more reliable for long time ranges than commit logs.
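The pipe-delimited `--pretty` format used above can be turned into draft bullets with a small text transform. A sketch (`format_log_entries` is a hypothetical helper; the impact slot is deliberately left as (evidence needed) for the user to fill in):

```shell
# Sketch: convert "hash|date|subject" lines (the --pretty=format:'%h|%ad|%s'
# output used above) into markdown bullets. Pure text transform — no repo needed.
format_log_entries() {
  while IFS='|' read -r hash date subject; do
    [ -n "$hash" ] || continue
    printf -- '- **%s** → (evidence needed) → commit %s (%s)\n' \
      "$subject" "$hash" "$date"
  done
}
```

Usage: `git log --pretty=format:'%h|%ad|%s' --date=short | format_log_entries`, then refine each bullet's impact by hand.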
Not every entry needs a number. See the Evidence Ladder above. Acceptable evidence includes PR links, "unblocked Team X", or qualitative outcomes with context. Never invent a metric to fill the gap.
`~/.copilot/session-state/` only exists if the user has run Copilot CLI sessions. Don't error — skip it and note: "No Copilot session history found; scanning git and PRs only."
The user might say "brag about this feature to my team" (a launch announcement, not a work entry). Confirm intent if ambiguous.
If multiple authors appear on the same commits, ask: "Should I credit this as your work, shared work, or skip it?"
For automatic background tracking of every Copilot CLI session (files edited, PRs created, git actions), install the copilot-brag-sheet extension. It adds save_to_brag_sheet, review_brag_sheet, and generate_work_log tools to every session.