Guide product managers through creating an Opportunity Solution Tree (OST) by extracting target outcomes from stakeholder requests, generating opportunity options (problems to solve), mapping potential solutions, and selecting the best proof-of-concept (POC) based on feasibility, impact, and market fit. Use this to move from vague product requests to structured discovery, ensuring teams solve the right problems before jumping to solutions—avoiding "feature factory" syndrome and premature convergence on ideas.
This is not a roadmap generator—it's a structured discovery process that outputs validated opportunities with testable solution hypotheses.
An OST is a visual framework from Teresa Torres's *Continuous Discovery Habits* that connects one desired outcome to the opportunities (customer problems or needs) that could drive it, and each opportunity to the candidate solutions that could address it.
Structure:

```
            Desired Outcome (1)
                    |
        +-----------+-----------+
        |           |           |
  Opportunity  Opportunity  Opportunity   (3)
        |           |           |
      +-+-+       +-+-+       +-+-+
      | | |       | | |       | | |
     S1 S2 S3   S1 S2 S3   S1 S2 S3      (9 total solutions)
```
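If it helps to reason about the tree programmatically, the 1 → 3 → 9 shape maps onto a small nested data structure. This is an illustrative sketch only; the class and field names are hypothetical, not part of the skill:

```python
from dataclasses import dataclass, field

@dataclass
class Solution:
    name: str
    description: str = ""

@dataclass
class Opportunity:
    problem: str
    evidence: str = ""
    solutions: list[Solution] = field(default_factory=list)

@dataclass
class OST:
    desired_outcome: str  # exactly one measurable outcome per tree
    opportunities: list[Opportunity] = field(default_factory=list)

# Build the skeleton above: 1 outcome -> 3 opportunities -> 3 solutions each.
tree = OST(
    desired_outcome="Increase trial-to-paid conversion from 15% to 25%",
    opportunities=[
        Opportunity(
            problem=f"Opportunity {i + 1}",
            solutions=[Solution(name=f"S{j + 1}") for j in range(3)],
        )
        for i in range(3)
    ],
)

total_solutions = sum(len(o.solutions) for o in tree.opportunities)
print(total_solutions)  # 3 opportunities x 3 solutions = 9
```

The constraint that matters is structural: one outcome at the root, a bounded set of opportunities beneath it, and a bounded set of solutions per opportunity.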
Use workshop-facilitation as the default interaction protocol for this skill.
It defines the shared question-and-answer mechanics, including an "Other (specify)" option when useful. This file defines the domain-specific assessment content; if the two conflict, follow this file's domain logic.
Use template.md for the full fill-in structure.
This interactive skill follows a two-phase process:
Phase 1: Generate OST (extract outcome, identify opportunities, map solutions)
Phase 2: Select POC (evaluate solutions, recommend best starting point)
Agent suggests:
Before we create your Opportunity Solution Tree, let's gather context:
Stakeholder Request or Product Initiative:
Product Context (if available):
You can paste this content directly, or describe the request briefly.
Agent asks: "What's the desired outcome for this initiative? (What business or product metric are you trying to move?)"
Offer 4 enumerated options:
Or describe your specific desired outcome. Make it measurable, e.g., "Increase trial-to-paid conversion from 15% to 25%."
User response: [Selection or custom]
Agent extracts and confirms:
Agent generates 3 opportunities based on the desired outcome and context provided.
Agent says: "Based on your desired outcome ([from Q1]) and the context you provided, here are 3 opportunities (customer problems or needs) that could drive this outcome:"
Example (if Outcome = Increase trial-to-paid conversion):
Opportunity 1: Users don't experience value during trial — "New users sign up but don't complete onboarding, never reach 'aha moment,' abandon before seeing core value"
Opportunity 2: Pricing is unclear or misaligned — "Users unsure if paid plan is worth it; don't understand what they get for the price; pricing page confusing"
Opportunity 3: Free plan is 'good enough' — "Users stay on free tier indefinitely because it meets their needs; no compelling reason to upgrade"
Agent asks: "Which opportunity feels most critical to explore first, or would you like to modify/add opportunities?"
User response: [Selection or custom]
Agent generates 3 solution ideas for the selected opportunity.
Agent says: "For Opportunity [X]: [Name], here are 3 potential solutions to test:"
Example (if Opportunity 1 selected: Users don't experience value during trial):
Solution 1: Guided onboarding checklist — "Interactive checklist that walks users through core workflows step-by-step (e.g., 'Create your first project,' 'Invite a teammate,' 'Complete a task')"
Solution 2: Time-to-value triggers — "Automatically prompt users when they're stuck (e.g., 'You haven't created a project yet—here's a template to get started')"
Solution 3: Human-assisted onboarding — "Offer 15-min onboarding call with CSM for high-value trial users (enterprise, team plans)"
Agent asks: "Would you like to explore solutions for another opportunity, or move to POC selection?"
User response: [Selection or custom]
After generating solutions, the agent helps select which solution to test first.
Agent says: "Let's evaluate your solutions to pick the best starting point. I'll score each solution on Feasibility (how hard to build), Impact (how much it could move the outcome), and Market Fit (how well it aligns with customer needs)."
Agent generates evaluation table:
| Solution | Feasibility (1-5) | Impact (1-5) | Market Fit (1-5) | Total Score | Rationale |
|---|---|---|---|---|---|
| Solution 1: Guided onboarding checklist | 4 | 4 | 5 | 13 | High feasibility (UI pattern), proven impact (industry standard), strong market fit (users expect onboarding) |
| Solution 2: Time-to-value triggers | 3 | 3 | 4 | 10 | Medium feasibility (requires analytics integration), moderate impact (depends on trigger quality), good market fit |
| Solution 3: Human-assisted onboarding | 5 | 5 | 3 | 13 | High feasibility (no dev work), high impact (personal touch), lower market fit (doesn't scale, high-touch only) |
Scoring criteria:
Agent recommends:
Recommended POC: Solution 1 (Guided onboarding checklist)
Why this POC:
Alternative POC: Solution 3 (Human-assisted onboarding)
Not recommended for POC: Solution 2 (Time-to-value triggers)
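The evaluation above is just a sum-and-rank over three 1-5 criteria, which can be sketched in a few lines. The scores below mirror the example table; in practice the agent fills them in per initiative, and ties (like the 13/13 here) are broken by the qualitative rationale:

```python
# Score each candidate solution on three 1-5 criteria and rank by total score.
solutions = {
    "Guided onboarding checklist": {"feasibility": 4, "impact": 4, "market_fit": 5},
    "Time-to-value triggers":      {"feasibility": 3, "impact": 3, "market_fit": 4},
    "Human-assisted onboarding":   {"feasibility": 5, "impact": 5, "market_fit": 3},
}

# sorted() is stable, so tied totals keep their original (table) order.
ranked = sorted(
    ((name, sum(scores.values())) for name, scores in solutions.items()),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, total in ranked:
    print(f"{total:>2}  {name}")
```

A flat sum weights the three criteria equally; if, say, feasibility matters more for a first POC, a weighted sum is a one-line change.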
Agent asks: "How will you test this solution? What's the experiment?"
Offer 3 enumerated options:
Or describe your experiment approach.
User response: [Selection or custom]
After completing the flow, the agent outputs:
# Opportunity Solution Tree + POC Plan
## Desired Outcome
**Outcome:** [From Q1]
**Target Metric:** [Specific, measurable goal]
**Why it matters:** [Rationale]
---
## Opportunity Map
### Opportunity 1: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
### Opportunity 2: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
### Opportunity 3: [Name]
**Problem:** [Description]
**Evidence:** [From context]
**Solutions:**
1. [Solution A]
2. [Solution B]
3. [Solution C]
---
## Selected POC
**Opportunity:** [Selected opportunity]
**Solution:** [Selected solution]
**Hypothesis:**
- "If we [implement solution], then [outcome metric] will [increase/decrease] from [X] to [Y] because [rationale]."
**Experiment:**
- **Type:** [A/B test / Prototype test / Concierge test]
- **Participants:** [Number of users, segment]
- **Duration:** [Timeline]
- **Success criteria:** [What validates the hypothesis]
**Feasibility Score:** [1-5]
**Impact Score:** [1-5]
**Market Fit Score:** [1-5]
**Total:** [Sum]
**Why this POC:**
- [Rationale 1]
- [Rationale 2]
- [Rationale 3]
---
## Next Steps
1. **Build experiment:** [Specific action, e.g., "Create onboarding checklist wireframes"]
2. **Run experiment:** [Specific action, e.g., "Deploy to 50% of trial users for 2 weeks"]
3. **Measure results:** [Specific metric, e.g., "Compare activation rate: checklist vs. control"]
4. **Decide:** [If successful → scale; if failed → try next solution]
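For step 3, an A/B experiment like "deploy to 50% of trial users" reduces to comparing two conversion rates. A minimal sketch using a two-proportion z-test follows; the counts are illustrative, not real results:

```python
from math import sqrt
from statistics import NormalDist

# Hypothetical A/B results: trial users who converted, with and without the checklist.
control_conv, control_n = 75, 500       # 15% baseline conversion
treatment_conv, treatment_n = 110, 500  # 22% with the onboarding checklist

p1 = control_conv / control_n
p2 = treatment_conv / treatment_n

# Pooled proportion and standard error under the null (no difference).
pooled = (control_conv + treatment_conv) / (control_n + treatment_n)
se = sqrt(pooled * (1 - pooled) * (1 / control_n + 1 / treatment_n))

z = (p2 - p1) / se
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided

print(f"lift: {p2 - p1:.1%}, z = {z:.2f}, p = {p_value:.4f}")
```

Defining the success criterion (e.g., "statistically significant lift at p < 0.05") before running the experiment keeps the decide step in the template honest.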
---
**Ready to build the experiment? Let me know if you'd like to refine the hypothesis or explore alternative solutions.**
See examples/sample.md for full OST examples.
Mini example excerpt:
**Desired Outcome:** Increase trial-to-paid conversion from 15% to 25%
**Opportunity:** Users don’t reach "aha" moment during trial
**Solution:** Guided onboarding checklist
Symptom: "Opportunity: We need a mobile app"
Consequence: You've already converged on a solution without exploring the problem.
Fix: Reframe opportunities as customer problems: "Mobile-first users can't access product on the go."
Symptom: "We know the solution is [X], just need to build it"
Consequence: Miss better alternatives, no learning.
Fix: Generate at least 3 solutions per opportunity. Force divergence before convergence.
Symptom: "Desired Outcome: Improve user experience"
Consequence: Can't measure success, can't prioritize opportunities.
Fix: Make outcomes measurable: "Increase NPS from 30 to 50" or "Reduce onboarding drop-off from 60% to 40%."
Symptom: Picking a solution and moving straight to roadmap
Consequence: No validation, high risk of building wrong thing.
Fix: Every solution must map to an experiment. No experiments = no OST.
Symptom: Generating 20 opportunities, 50 solutions, never picking one
Consequence: Team stuck in discovery, no progress.
Fix: Limit to 3 opportunities, 3 solutions each (9 total). Pick POC, run experiment, learn, iterate.
- skills/problem-statement/SKILL.md — Frames opportunities as customer problems
- skills/jobs-to-be-done/SKILL.md — Helps identify opportunities from JTBD research
- skills/epic-hypothesis/SKILL.md — Turns validated solutions into testable epics
- skills/user-story/SKILL.md — Breaks experiments into deliverable stories
- skills/discovery-interview-prep/SKILL.md — Validates opportunities through customer interviews

Skill type: Interactive
Suggested filename: opportunity-solution-tree.md
Suggested placement: /skills/interactive/
Dependencies: Uses skills/problem-statement/SKILL.md, skills/jobs-to-be-done/SKILL.md, skills/epic-hypothesis/SKILL.md, skills/user-story/SKILL.md