At a previous company, we had a saying: "A roadmap isn't a list of features. It's a sequence of bets about what will matter."
The challenge, of course, is that you never know for certain which bets will pay off. You can analyze usage data, conduct user research, study competitors—and still end up shipping features that nobody wants.
I've shipped those features. More than I'd like to admit. And the lesson I've learned is that the problem usually isn't execution—it's prioritization. We built the wrong things, brilliantly.
This article is about how synthetic users have changed my approach to prioritization. Not by eliminating uncertainty—nothing can do that—but by helping me make better bets with more confidence.
The Prioritization Problem
Most teams prioritize features using some combination of:
- Customer feedback (often biased toward vocal customers)
- Sales requests (often biased toward closing specific deals)
- Competitive pressure (often reactive rather than strategic)
- Stakeholder opinions (often based on intuition rather than evidence)
- Frameworks like RICE or ICE (often with made-up "impact" scores)
None of these are wrong, exactly. But they all share a common limitation: they're backward-looking. They tell you what customers say they want, not necessarily what will create value when you ship it.
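For readers unfamiliar with RICE, it scores a feature as Reach × Impact × Confidence ÷ Effort. Here's a minimal sketch; the feature names and numbers are illustrative, not real data:

```python
# Minimal RICE scoring sketch: score = (Reach x Impact x Confidence) / Effort.
# All inputs below are made-up examples, not actual product data.

def rice_score(reach, impact, confidence, effort):
    """Reach: users affected per quarter, Impact: 0.25-3 scale,
    Confidence: 0-1, Effort: person-months."""
    return (reach * impact * confidence) / effort

backlog = {
    "export":        rice_score(reach=500, impact=2.0, confidence=0.8, effort=2),
    "customization": rice_score(reach=300, impact=1.0, confidence=0.5, effort=3),
}

# Rank features by score, highest first.
ranked = sorted(backlog, key=backlog.get, reverse=True)
print(ranked)
```

The "made-up impact scores" problem shows up in the `impact` and `confidence` arguments: the formula is only as good as those guesses, which is exactly where synthetic research can help.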
Adding Simulated Buyers to the Mix
Here's how I use synthetic users in my prioritization process:
Step 1: Generate the Idea List
I start with traditional sources: customer feedback, sales requests, competitive intelligence, team brainstorming. This gives me a list of 20-50 potential features or improvements.
Step 2: Create Persona Scenarios
For each feature, I describe the specific scenario where a user would encounter and benefit from it. This forces precision—you can't evaluate a feature in the abstract.
Step 3: Simulated Buyer "Pitch Sessions"
I "pitch" each feature to synthetic personas representing different customer segments. I describe the problem it solves and how it would work, then ask:
- "How important is this problem to you?"
- "How does this compare to your current workaround?"
- "Would this make you more likely to recommend the product?"
- "What concerns would you have?"
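If you run these sessions programmatically, the loop is simple to sketch. `ask_persona` below is a hypothetical stand-in for whatever synthetic-user tool or LLM call you use; it's stubbed with canned responses so the structure is runnable:

```python
# Sketch of a pitch-session loop over personas and a fixed question set.
# ask_persona() is a hypothetical placeholder, NOT a real API -- swap in
# your own synthetic-user tool or LLM client.

QUESTIONS = [
    "How important is this problem to you?",
    "How does this compare to your current workaround?",
    "Would this make you more likely to recommend the product?",
    "What concerns would you have?",
]

def ask_persona(persona, feature_pitch, question):
    # Stubbed response; a real implementation would query the persona.
    return f"[{persona}] answering: {question}"

def run_pitch_session(personas, feature_pitch):
    """Collect each persona's answers to the standard question set."""
    return {
        persona: {q: ask_persona(persona, feature_pitch, q) for q in QUESTIONS}
        for persona in personas
    }

session = run_pitch_session(
    personas=["product manager", "UX researcher", "marketing strategist"],
    feature_pitch="Export personas to CSV for compliance review.",
)
```

The point of fixing the question set in code is consistency: every persona gets the same prompt, so differences in the answers reflect the segment, not the phrasing.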
Step 4: Pattern Analysis
After pitching to multiple personas, patterns emerge. Some features generate enthusiasm across segments. Some only resonate with specific personas. Some reveal concerns we hadn't considered.
Step 5: Refine the Prioritization
I use synthetic user feedback to adjust my confidence in each feature's impact score. Features that generated broad enthusiasm get boosted. Features that raised unexpected concerns get flagged for more investigation.
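Step 5 can be made concrete as a small adjustment rule. The thresholds and magnitudes below are my own assumptions for illustration, not a fixed methodology:

```python
# Sketch of Step 5: nudging a feature's confidence score based on
# synthetic-session patterns. Thresholds (0.7, 0.3) and adjustment
# sizes (0.1, 0.05) are illustrative assumptions.

def adjusted_confidence(base_confidence, enthusiasm_ratio, concern_flagged):
    """enthusiasm_ratio: fraction of personas who responded enthusiastically.
    concern_flagged: whether the sessions surfaced an unexpected concern."""
    conf = base_confidence
    if enthusiasm_ratio >= 0.7:      # broad enthusiasm -> boost
        conf = min(1.0, conf + 0.1)
    elif enthusiasm_ratio <= 0.3:    # lukewarm across segments -> dampen
        conf = max(0.0, conf - 0.1)
    if concern_flagged:              # unexpected red flags -> investigate
        conf = max(0.0, conf - 0.05)
    return round(conf, 2)
```

Note that the adjustment is deliberately small: synthetic feedback shifts confidence, it doesn't replace the underlying estimate.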
A Real Example
At SocioLogic, we were debating between two features: advanced persona customization and persona export capabilities. Both seemed valuable, but we couldn't build both in the same quarter.
Traditional signals were mixed. Some users had requested each feature, but not enough to be definitive. Stakeholders had opinions but no clear consensus.
So I ran synthetic user research. I pitched both features to synthetic personas representing our core segments: product managers, UX researchers, and marketing strategists.
The result was surprising. Advanced customization got lukewarm responses: "That sounds cool, I guess I might use it." Export capabilities generated immediate enthusiasm: "Oh, I would definitely need that. We have compliance requirements."
We prioritized export. Post-launch data confirmed the decision—export was used by 3x more users than we projected, while our later customization feature matched projections almost exactly.
What I've Learned
- Features that solve "cool" problems often underperform features that solve "necessary" problems. Synthetic users are good at surfacing this distinction.
- Enthusiasm levels vary by persona. A feature that excites one segment might bore another. Synthetic research helps you understand the distribution.
- Concerns matter as much as enthusiasm. A feature that raises red flags with synthetic users will likely face adoption challenges with real users.
- This is input, not output. Synthetic user feedback is one signal among many. But it's a signal I didn't have before, and it's made my bets better.
A Framework for Integration
Here's how I recommend integrating synthetic user research into prioritization:
| Traditional Input | Synthetic Research Adds |
|---|---|
| Customer feedback | Reactions from non-vocal segments |
| Sales requests | Validation that demand isn't deal-specific |
| Competitive analysis | User perspective on competitive features |
| Stakeholder opinions | Evidence to evaluate intuitions |
The goal isn't to replace traditional inputs—it's to add another perspective that helps you make better decisions.
Because in the end, the best roadmap isn't the one with the most features. It's the one with the right features, shipped in the right order, for the right people.