Originally contributed by maximcoding — enhanced and integrated by the claude-skills team.
Transform meeting transcripts into concrete, evidence-backed feedback on communication patterns, leadership behaviors, and interpersonal dynamics.
Scan the target directory for transcript files (.txt, .md, .vtt, .srt, .docx, .json).
For each file:
- Detect the meeting date (`YYYY-MM-DD` filename prefix or embedded timestamps).
- Detect the speaker-label style (`Speaker 1:`, `[John]:`, `John Smith 00:14:32`, VTT/SRT cue formatting).

Print a brief inventory table so the user confirms scope before heavy analysis begins.
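The scan-and-inventory step could be sketched like this (a minimal sketch; the `inventory` helper name and the row fields it returns are illustrative, not part of the skill):

```python
from pathlib import Path

# Extensions the skill treats as transcript files.
TRANSCRIPT_EXTS = {".txt", ".md", ".vtt", ".srt", ".docx", ".json"}

def inventory(directory: str) -> list[dict]:
    """Scan a directory and return one row per transcript file,
    suitable for printing as the scope-confirmation table."""
    rows = []
    for path in sorted(Path(directory).iterdir()):
        if path.suffix.lower() not in TRANSCRIPT_EXTS:
            continue
        rows.append({
            "file": path.name,
            "format": path.suffix.lstrip(".").upper(),
            "size_kb": round(path.stat().st_size / 1024, 1),
        })
    return rows
```

Sorting the listing first means a `YYYY-MM-DD` filename prefix yields chronological order for free.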
Different tools produce wildly different formats. Normalize everything into a common internal structure before analysis:
```
{ speaker: string, timestamp_sec: number | null, text: string }[]
```
Handling per format:
- **VTT/SRT**: parse cue timestamps; speakers come from voice tags (`<v Speaker>`) or prefixed names.
- **TXT/MD**: `Name:` or `[Name]` prefixes per line. If no speaker labels exist, warn the user that per-speaker analysis is limited.
- **JSON**: `speaker`/`text` fields (common Otter/Fireflies export).

If timestamps are missing, degrade gracefully — skip timing-dependent metrics (speaking pace, pause analysis) but still run text-based analysis.
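For the plain-text case, normalization into the common structure might look like the sketch below (assumes simple `Name:` / `[Name]` labels only; it does not handle VTT cues, inline timestamps, or multi-word names containing colons, so `timestamp_sec` stays `None`):

```python
import re

# "[Ana]: text" style labels.
BRACKET = re.compile(r"^\[(?P<speaker>[^\]]+)\]:?\s*(?P<text>.+)$")
# "John: text" style labels (capitalized name before a colon).
PREFIX = re.compile(r"^(?P<speaker>[A-Z][\w .'-]{0,40}):\s*(?P<text>.+)$")

def normalize_txt(raw: str) -> list[dict]:
    """Normalize labelled plain text into the common internal structure."""
    turns = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        m = BRACKET.match(line) or PREFIX.match(line)
        if m:
            turns.append({"speaker": m["speaker"],
                          "timestamp_sec": None,
                          "text": m["text"]})
        elif turns:
            # Unlabelled line: treat as a continuation of the previous turn.
            turns[-1]["text"] += " " + line
    return turns
```

Unlabelled leading lines are dropped, which matches the degrade-gracefully rule: with no speaker attribution, they cannot feed per-speaker metrics.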
Run all applicable analysis modules below. Each module is independent — skip any that don't apply (e.g., skip speaking ratios if there are no speaker labels).
Calculate per-speaker:
- Speaking share (% of total words) and turn count.
- Average turn length.
- Speaking pace, when timestamps are available.
Produce a per-meeting summary and a cross-meeting average if multiple transcripts exist.
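The per-speaker calculation over the normalized structure could be sketched as follows (the `speaker_stats` name and output keys are illustrative assumptions):

```python
from collections import Counter

def speaker_stats(turns: list[dict]) -> dict:
    """Per-speaker word count, turn count, and share of total words."""
    words, turn_count = Counter(), Counter()
    for t in turns:
        words[t["speaker"]] += len(t["text"].split())
        turn_count[t["speaker"]] += 1
    total = sum(words.values()) or 1  # avoid division by zero on empty input
    return {
        s: {
            "turns": turn_count[s],
            "words": words[s],
            "share_pct": round(100 * words[s] / total, 1),
        }
        for s in words
    }
```

A cross-meeting average then reduces to averaging each speaker's `share_pct` across the per-meeting dicts.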
Red flags to surface:
Scan the user's speech for hedging and avoidance markers:
Hedging language (score per-instance, aggregate per meeting):
Conflict avoidance patterns (requires more context, flag with confidence level):
For each flagged instance, extract the direct quote, its timestamp, and a confidence level: low (single hedge word), medium (pattern of hedging in one exchange), high (clearly avoided a necessary conversation).

Count occurrences of: "um", "uh", "like" (non-comparative), "you know", "actually", "basically", "literally", "right?" (tag question), "so yeah", "I mean".
Report:
Only flag this as an issue if the rate exceeds ~3 per 100 words. Below that, it's normal speech.
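The counting and thresholding above might be sketched like this (a naive version: it counts every "like" rather than only non-comparative uses, which would need part-of-speech context):

```python
import re

FILLERS = ["um", "uh", "like", "you know", "actually", "basically",
           "literally", "right?", "so yeah", "i mean"]

def filler_rate(text: str) -> tuple[int, float]:
    """Return (filler count, rate per 100 words) for one speaker's text."""
    lowered = text.lower()
    count = 0
    for f in FILLERS:
        # Lookarounds keep "um" from matching inside "umbrella", etc.
        count += len(re.findall(r"(?<!\w)" + re.escape(f) + r"(?!\w)", lowered))
    total_words = len(text.split()) or 1
    return count, round(100 * count / total_words, 2)
```

Per the rule above, only report a finding when the returned rate exceeds ~3 per 100 words.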
Classify the user's questions:
Good listening indicators:
Poor listening indicators:
Report the ratio of open/clarifying/building vs. closed/leading questions.
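A rough first pass at the question-type split could use a first-word heuristic (a sketch only: clarifying, building, and leading questions are judgment calls that need surrounding context, so this collapses to open/closed/other):

```python
OPEN_STARTS = {"what", "how", "why"}
CLOSED_STARTS = {"is", "are", "do", "does", "did", "can", "will",
                 "should", "would", "could", "have", "has"}

def classify_question(q: str) -> str:
    """Heuristic open/closed classification from the first word."""
    words = q.strip().lower().split()
    if not words:
        return "other"
    if words[0] in OPEN_STARTS:
        return "open"
    if words[0] in CLOSED_STARTS:
        return "closed"
    return "other"
```

The open-vs-closed ratio then falls out of counting labels over every question the user asked.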
Only apply when the user is the meeting organizer or facilitator.
Evaluate:
Track the emotional arc of the user's language across the meeting:
Flag energy drops — moments where the user's engagement visibly decreases (shorter turns, less substantive responses). These often correlate with discomfort, boredom, or avoidance.
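The shorter-turns signal could be detected mechanically like this (a sketch; the window size and drop ratio are arbitrary assumptions to tune, and turn length is only a proxy for engagement):

```python
def flag_energy_drops(turns: list[dict], user: str,
                      window: int = 5, ratio: float = 0.4) -> list[int]:
    """Return indices of the user's turns whose word count falls well
    below their rolling average over the previous `window` turns."""
    lengths = [(i, len(t["text"].split()))
               for i, t in enumerate(turns) if t["speaker"] == user]
    flagged = []
    for k, (i, n) in enumerate(lengths):
        prior = [m for _, m in lengths[max(0, k - window):k]]
        if prior and n < ratio * (sum(prior) / len(prior)):
            flagged.append(i)
    return flagged
```

Flagged indices can then be cross-referenced against timestamps and quotes for the report.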
Structure the final output as a single cohesive report. Use this skeleton — omit any section where data was insufficient:
# Meeting Insights Report
**Period**: [earliest date] – [latest date]
**Meetings analyzed**: [count]
**Total transcript words**: [count]
**Your speaking share (avg)**: [X%]
---
## Top 3 Findings
[Rank by impact. Each finding gets 2-3 sentences + one concrete example with a direct quote and timestamp.]
## Detailed Analysis
### Speaking Dynamics
[Stats table + narrative interpretation + flagged red flags]
### Directness & Conflict Patterns
[Flagged instances grouped by pattern type, with quotes and rewrites]
### Verbal Habits
[Filler word stats, contextual spikes, only if rate > 3/100 words]
### Listening & Questions
[Question type breakdown, listening indicators, specific examples]
### Facilitation
[Only if applicable — agenda, decisions, action items]
### Energy & Sentiment
[Arc summary, flagged drops]
## Strengths
[3 specific things the user does well, with evidence]
## Growth Opportunities
[3 ranked by impact, each with: what to change, why it matters, a concrete "try this next time" action]
## Comparison to Previous Period
[Only if prior analysis exists — delta on key metrics]
After delivering the report, offer:
Include this section in output only if the user seems unsure about how to get transcripts:
- `.vtt` from cloud recordings.
- `.txt` or `.json` from the web dashboard.
- `.docx` or `.json` — both work.
- `.vtt`.

Recommend a `YYYY-MM-DD - Meeting Name.ext` naming convention for easy chronological analysis.
| Anti-Pattern | Why It Fails | Better Approach |
|---|---|---|
| Analyzing without speaker labels | Per-person metrics impossible — results are generic word clouds | Ask user to re-export with speaker identification enabled |
| Running all modules on a 5-minute standup | Overkill — filler word and conflict analysis need 20+ min meetings | Auto-detect meeting length and skip irrelevant modules |
| Presenting raw metrics without context | "You said 'um' 47 times" is demoralizing without benchmarks | Always compare to norms and show trajectory over time |
| Analyzing a single meeting in isolation | One meeting is a snapshot, not a pattern — conclusions are unreliable | Require 3+ meetings minimum for trend-based coaching |
| Treating speaking time equality as the goal | A facilitator SHOULD talk less; a presenter SHOULD talk more | Weight speaking ratios by meeting type and role |
| Flagging every hedge word as negative | "I think" and "maybe" are appropriate in brainstorming | Distinguish between decision meetings (hedges are bad) and ideation (hedges are fine) |
| Skill | Relationship |
|---|---|
| project-management/senior-pm | Broader PM scope — use for project planning, risk, stakeholders |
| project-management/scrum-master | Agile ceremonies — pairs with meeting-analyzer for retro quality |
| project-management/confluence-expert | Store meeting analysis outputs as Confluence pages |
| c-level-advisor/executive-mentor | Executive communication coaching — complementary perspective |