Scientific, customer-centric approach to conversion rate optimization based on the CRE Methodology™. Extraordinary improvements come from understanding WHY visitors don't convert, not from copying competitors or applying generic tips.
Don't guess -- discover. The methodology rejects "best practices" and "magic buttons" in favor of evidence-based optimization. Most websites underperform not because of bad design, but because no one has systematically researched why visitors leave without converting.
The foundation: Every visitor who doesn't convert has a reason. Your job is to discover those reasons through research, then systematically eliminate them with evidence and proof. This customer-centric approach consistently outperforms intuition, competitor copying, and "expert" opinions.
Goal: 10/10. When reviewing or creating landing pages, funnels, or conversion flows, rate them 0-10 based on adherence to the principles below. A 10/10 means full alignment with all guidelines; lower scores indicate gaps to address. Always provide the current score and specific improvements needed to reach 10/10.
Core concept: A systematic 9-step process for optimizing conversion rates, moving from defining success metrics through research, experimentation, and scaling wins across the business.
Why it works: Random optimization efforts fail because they skip the critical research steps. The CRE process forces you to understand visitors before changing anything, ensuring changes are based on evidence rather than opinion.
Key insights:
Product applications:
| Context | CRO Process Step | Example |
|---|---|---|
| Landing page audit | Steps 1-3: Define goals, map funnel, research visitors | Identify that 70% of traffic bounces because value prop is unclear |
| Checkout optimization | Step 2: Map funnel for blocked arteries | Discover shipping cost shock causes 40% cart abandonment |
| New feature launch | Steps 6-8: Strategize, design, experiment | A/B test two positioning approaches before full rollout |
| Email sequence | Step 9: Scale wins | Apply winning objection-handling copy from landing page to drip emails |
| Competitor response | Step 4: Market intelligence | Transfer proven strategies from adjacent industries |
Copy patterns:
Ethical boundary: Never manipulate test results or cherry-pick data. Report all tests, including failures, and wait for genuine statistical significance.
See: testing-methodology.md for detailed ICE scoring, A/B vs. multivariate guidance, and statistical rigor.
Core concept: Visitors don't convert for specific, discoverable reasons. Research methods -- exit surveys, chat logs, support tickets, sales calls, reviews -- reveal the "voice of the customer" and their real objections.
Why it works: Companies guess why visitors leave, but guesses are almost always wrong. Direct research consistently uncovers objections that teams never anticipated, and the language customers use is more persuasive than any copywriter's invention.
Key insights:
Product applications:
| Context | Research Method | Example |
|---|---|---|
| Exit intent | On-site survey (Hotjar, Qualaroo) | "What's preventing you from signing up today?" |
| Post-purchase | Email survey within 7 days | "What almost stopped you from buying?" |
| Objection mining | Support ticket analysis | Search for "but", "however", "worried about" patterns |
| Voice of customer | Sales call recordings | Capture exact language customers use to describe problems |
| Competitive gaps | Review mining (yours and competitors') | Negative reviews = unaddressed objections |
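The objection-mining row above can be sketched as a small script that scans support tickets for hedge words. The marker list and sample tickets are illustrative assumptions, not from the source; tune the list against your own data.

```python
import re
from collections import Counter

# Hedge words that often introduce objections in support tickets.
# Illustrative list -- expand it as you read real tickets.
OBJECTION_MARKERS = ["but", "however", "worried about", "concerned", "not sure"]

def mine_objections(tickets):
    """Return objection-bearing sentences and per-marker frequencies."""
    hits, counts = [], Counter()
    for ticket in tickets:
        # Naive sentence split on terminal punctuation followed by whitespace.
        for sentence in re.split(r"(?<=[.!?])\s+", ticket):
            matched = [m for m in OBJECTION_MARKERS if m in sentence.lower()]
            if matched:
                hits.append(sentence.strip())
                counts.update(matched)
    return hits, counts

tickets = [
    "I love the product. However, the pricing page confused me.",
    "Setup was easy but I'm worried about data security.",
]
hits, counts = mine_objections(tickets)
```

The frequency counts tell you which objections to research further; the raw sentences give you the voice-of-customer language to reuse in copy.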
Copy patterns:
Ethical boundary: Respect customer privacy in research. Anonymize data, get consent for recordings, and don't survey so aggressively that you degrade the user experience.
See: RESEARCH.md for tools, survey questions, and data analysis methods.
Core concept: Every company has overlooked proof elements -- testimonials not displayed, awards not mentioned, statistics not highlighted, guarantees not prominent, team credentials hidden. These are "persuasion assets" that must be inventoried, acquired, and displayed.
Why it works: Visitors make decisions based on evidence and proof, not claims. A bold claim without proof is just noise. A modest claim with overwhelming proof is irresistible. Most companies sit on goldmines of proof they never use.
Key insights:
Product applications:
| Context | Persuasion Asset | Example |
|---|---|---|
| Landing page header | Logo bar + rating | "Trusted by 10,000+ companies" with 5 recognizable logos |
| Pricing page | Risk reversal | "30-day money-back guarantee, no questions asked" |
| Product page | Specific testimonial | Photo + name + company + "Increased conversion by 47% in 3 weeks" |
| Checkout flow | Trust badges near forms | Security certification, payment logos, guarantee seal |
| About page | Team credentials | Years of experience, certifications, publications, patents |
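The inventory step above can be made concrete with a minimal audit sketch: list the proof elements the company owns, compare against what the pages actually display, and surface the gap. Asset names and the sample data are illustrative assumptions.

```python
# Persuasion-asset categories from the table above (illustrative naming).
ASSET_TYPES = ["testimonials", "awards", "statistics", "guarantee", "team_credentials"]

def audit_assets(owned, displayed):
    """Return assets the company has but never shows -- the overlooked goldmine."""
    return [a for a in ASSET_TYPES if a in owned and a not in displayed]

# Example: the company has three proof elements but displays only one.
owned = {"testimonials", "awards", "guarantee"}
displayed = {"testimonials"}
missing = audit_assets(owned, displayed)
```

Each item in `missing` is a zero-cost win: the proof already exists and only needs to be placed where objections arise.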
Copy patterns:
Ethical boundary: Never fabricate testimonials, inflate statistics, or display fake trust badges. All proof must be genuine and verifiable.
See: PERSUASION.md for the full persuasion assets checklist and psychological triggers.
Core concept: The Objection/Counter-Objection (O/CO) table is the core CRE technique. Create a two-column table mapping every visitor objection to specific, evidence-backed counter-objections.
Why it works: Visitors arrive with objections. If the page doesn't address them, visitors leave. The O/CO framework ensures no objection goes unanswered, and counter-objections are placed exactly where objections naturally arise during the reading flow.
Key insights:
Product applications:
| Context | Objection Type | O/CO Example |
|---|---|---|
| Trust | "Why should I believe you?" | Specific testimonials, media logos, awards, money-back guarantee |
| Price | "Is it worth the money?" | ROI calculator, cost comparison vs. alternatives, payment plans |
| Fit | "Will it work for MY situation?" | Case studies from similar customers, segmented landing pages, free trial |
| Timing | "Why should I act now?" | Cost of delay calculation, genuine limited-time offers, seasonal relevance |
| Effort | "How hard will this be?" | "Done for you" framing, "Set up in 5 minutes", step-by-step breakdown |
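The O/CO table lends itself to a simple data structure: every objection maps to a counter-objection paired with the evidence that backs it, and a quick check flags researched objections that the page does not yet answer. Entries below are illustrative, not from the source.

```python
from dataclasses import dataclass

@dataclass
class CounterObjection:
    response: str   # the claim shown to the visitor
    evidence: str   # the proof backing it (testimonial, stat, guarantee)

# O/CO table as data: objection -> evidence-backed counter (illustrative entries).
oco_table = {
    "Why should I believe you?": CounterObjection(
        "Trusted by 10,000+ companies", "logo bar + third-party rating"),
    "Is it worth the money?": CounterObjection(
        "Pays for itself in 3 months", "ROI calculator with customer data"),
    "Why should I act now?": CounterObjection(
        "Offer ends Friday", "genuine limited-time discount"),
}

def unanswered(objections, table):
    """Objections found in research that have no counter on the page yet."""
    return [o for o in objections if o not in table]

gaps = unanswered(["Is it worth the money?", "How hard will this be?"], oco_table)
```

Anything returned by `unanswered` goes back into the research queue: either acquire the proof needed to counter it, or rework the offer so the objection disappears.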
Copy patterns:
Ethical boundary: Address real objections with honest counter-objections. Never dismiss legitimate concerns or use deception to overcome valid hesitations.
See: OBJECTIONS.md for the full O/CO framework, research methods, and counter-objection techniques.
Core concept: Every experiment needs a documented hypothesis linking a specific change to an expected outcome with a reason grounded in research. Prioritize using ICE scoring (Impact, Confidence, Ease).
Why it works: Without a hypothesis, you're just changing things randomly. The hypothesis forces you to articulate WHY a change should work, which means it must be grounded in customer research. ICE scoring prevents teams from wasting time on low-impact "meek tweaks."
Key insights:
Product applications:
| Context | Hypothesis Example | ICE Score |
|---|---|---|
| Headline rewrite | "If we use customer language from surveys, conversion will increase because visitors see their own words reflected" | I:8, C:9, E:10 = 9.0 |
| Video testimonial | "If we add video testimonial addressing price objection, signups will increase because visitors need trust proof" | I:7, C:7, E:6 = 6.7 |
| Checkout redesign | "If we simplify checkout to one page, completion will increase because analytics show 40% drop at step 2" | I:9, C:6, E:3 = 6.0 |
| Button color | "If we change button from blue to green, clicks will increase because green means go" | I:2, C:2, E:10 = 4.7 |
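The ICE scores in the table are simple averages of the three 1-10 ratings, which makes backlog prioritization a one-line sort. A minimal sketch using the four hypotheses above (wording abridged):

```python
def ice_score(impact, confidence, ease):
    """ICE priority: mean of the three 1-10 scores, rounded to one decimal."""
    return round((impact + confidence + ease) / 3, 1)

# Hypothesis backlog from the table above: (name, impact, confidence, ease).
backlog = [
    ("Headline rewrite",  8, 9, 10),
    ("Video testimonial", 7, 7, 6),
    ("Checkout redesign", 9, 6, 3),
    ("Button color",      2, 2, 10),
]

# Highest ICE score first: test these hypotheses in this order.
ranked = sorted(backlog, key=lambda h: ice_score(*h[1:]), reverse=True)
```

Note how the button-color test ranks last despite being the easiest: high ease cannot rescue low impact and low confidence, which is exactly the "meek tweak" trap the scoring is designed to avoid.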
Copy patterns:
Ethical boundary: Report all test results honestly, including failures. Never cherry-pick data or run tests until you get the result you want.
See: testing-methodology.md for ICE scoring tables and detailed prioritization.
Core concept: Run controlled experiments comparing page versions to determine which performs better, using proper statistical rigor to ensure results are real, not random noise.
Why it works: Without controlled experiments, it is impossible to distinguish real improvements from random variation. Proper A/B testing methodology prevents the most common errors: peeking and stopping early, insufficient sample size, ignoring practical significance, and the multiple comparison problem.
Key insights:
Product applications:
| Context | Test Type | Example |
|---|---|---|
| Concept validation | A/B test (2-4 variants) | Test two fundamentally different page layouts based on different customer insights |
| Element optimization | Multivariate (100k+ visitors) | Test 3 headlines x 3 images x 2 CTAs on proven winning page |
| Low traffic | Bold A/B test | Make dramatic changes detectable with smaller samples (~4,000 visitors for 50% lift) |
| High traffic | Rapid iteration | Run parallel tests on non-overlapping pages, 10-20 tests/month |
| Post-test | Scale wins | Apply winning insights across landing pages, ad copy, email sequences |
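The statistical-rigor point can be illustrated with a pooled two-proportion z-test, a standard way to check whether a conversion-rate difference is real or noise. This is a stdlib-only sketch with illustrative traffic numbers, not the source's prescribed test.

```python
import math

def two_proportion_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF built from erf; two-sided p-value.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Illustrative test: control converts 5.0% (500/10,000),
# variant converts 5.8% (580/10,000).
p = two_proportion_p(500, 10_000, 580, 10_000)
significant = p < 0.05
```

Decide the sample size and significance threshold before the test starts, then run to completion; checking `p` repeatedly and stopping the moment it dips below 0.05 is exactly the "peeking" error the section warns against.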
Copy patterns:
Ethical boundary: Never manipulate statistical methods to manufacture significance. Report confidence intervals honestly and acknowledge when results are inconclusive.
See: testing-methodology.md for statistical significance, sample size calculations, and platform comparison.
| Mistake | Why It Fails | Fix |
|---|---|---|
| Copying competitors blindly | You don't know if their approach works for them, let alone for you | Research YOUR visitors' objections and build YOUR evidence |
| Testing button colors before understanding objections | Addresses surface symptoms, not root causes; tiny effects waste sample size | Do customer research first, then test big changes based on findings |
| Assuming you know why visitors leave | Teams are almost always wrong about visitor motivations | Use exit surveys, chat logs, and support analysis to discover real reasons |
| Using "best practices" without validation | What works elsewhere may not work for your audience, product, or context | Treat best practices as hypotheses to test, not rules to follow |
| Making decisions based on HiPPO | Highest Paid Person's Opinion is not data; authority bias kills optimization | Let research and test results determine changes, not seniority |
| Optimizing pages without funnel context | Improving one step may shift problems to another; miss biggest opportunities | Map entire funnel first, identify blocked arteries, prioritize by impact |
| Making "meek tweaks" instead of bold changes | Small changes rarely reach statistical significance; wastes time and traffic | Test changes that could double conversion, not nudge it 2% |
| Giving up after one failed test | The opportunity still exists; you just haven't found the solution yet | Investigate why, go back to research, try a bolder change |
Audit any landing page or conversion flow:
| Question | If No | Action |
|---|---|---|
| Do we know the ONE action visitors should take on this page? | Page lacks focus, visitors are confused | Define single primary conversion goal and remove competing CTAs |
| Have we researched why visitors aren't converting (not guessed)? | Optimization is based on assumptions, not evidence | Run exit surveys, analyze chat logs, review support tickets |
| Do we have an O/CO table mapping objections to counter-objections? | Visitor objections go unanswered on the page | Build O/CO table from research, place counter-objections at friction points |
| Is the value proposition crystal clear within 5 seconds? | Visitors bounce before understanding the offer | Run 5-second test, rewrite headline using customer language |
| Are persuasion assets visible (testimonials, awards, guarantees)? | Page makes claims without proof, visitors don't believe | Audit persuasion assets, acquire missing ones, display prominently |
| Have we mapped the full funnel and identified blocked arteries? | Optimizing wrong page or missing biggest opportunity | Map traffic volume at each stage, compare to benchmarks, prioritize by impact |
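The funnel-mapping question above reduces to comparing relative drop-off between adjacent stages and attacking the worst one first. A minimal sketch, with illustrative stage names and visitor counts:

```python
# Funnel "blocked artery" check: find the stage with the worst relative loss.
# Stage names and visitor counts are illustrative.
funnel = [
    ("Landing page", 10_000),
    ("Product page", 4_000),
    ("Checkout",     1_200),
    ("Purchase",       900),
]

def worst_dropoff(stages):
    """Return (from_stage, to_stage, drop_rate) for the biggest relative loss."""
    drops = [
        (name_a, name_b, 1 - n_b / n_a)
        for (name_a, n_a), (name_b, n_b) in zip(stages, stages[1:])
    ]
    return max(drops, key=lambda d: d[2])

blocked = worst_dropoff(funnel)
```

Here the landing page loses the most visitors in absolute terms, but the product-page-to-checkout step loses 70% of those who reach it, so that is the artery to unblock first.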
When optimizing any page:
This skill is based on the CRE Methodology™ developed by Conversion Rate Experts. For the complete methodology, detailed case studies, and advanced techniques, read the original book, Making Websites Win by Dr. Karl Blanks and Ben Jesson.
Dr. Karl Blanks and Ben Jesson are the cofounders of Conversion Rate Experts (CRE), the world's leading agency specializing in conversion rate optimization. Their clients have included Google, Apple, Amazon, Facebook, Dropbox, and many other technology leaders. CRE's methodology has been recognized with a Queen's Award for Enterprise (Innovation), the UK's highest business honor. Blanks holds a PhD in user experience and previously managed teams of usability researchers at Hewlett-Packard. Jesson's background is in direct-response marketing and web development. Together they developed the CRE Methodology, which has been applied across hundreds of websites and consistently delivered significant conversion improvements. Their book Making Websites Win distills this methodology into a systematic, repeatable process for evidence-based website optimization.
Audit websites and landing pages for conversion issues and design evidence-based A/B tests.
See: testing-methodology.md for output format specifications.
Basic usage: Apply the CRO methodology to a standard project setup with default configuration options.
Advanced scenario: Customize the CRO methodology for production environments with multiple constraints and team-specific requirements.