Why JTBD surveys go wrong in otherwise mature CX programs
Leaders trust surveys to scale insight, yet Jobs to be Done research often under-delivers because teams treat jobs like preferences rather than progress customers seek in a specific situation. Jobs describe the functional and emotional progress a person hires a product or service to achieve, independent of any current solution.¹ When teams design surveys without this frame, items drift into feature opinions, the data fragments, and decisions revert to guesswork. Executives then question the method, not the setup. This article names the most common JTBD survey mistakes, explains why they happen, and shows practical corrections that preserve rigor while keeping velocity.
What is a “job,” a “desired outcome,” and a “job map” in plain terms
Teams confuse terms, so clarity must lead. A job states the core progress a customer is trying to make, such as “restore service after an outage” or “feel confident that a claim will be approved.” A desired outcome expresses how success is measured, such as “minimize time to first response” or “reduce uncertainty about required documents.”² A job map sequences how customers define, locate, prepare, confirm, execute, monitor, modify, and conclude the job across the journey.³ When a survey operationalizes these definitions, questions become testable criteria rather than wish lists. When definitions blur, scales become inconsistent and responses collapse into noise. Maintaining this triad anchors language and reduces rework across discovery, prioritization, and design.
Mistake 1: Teams survey solutions, not jobs
Teams anchor on current features and ask customers to rate them. This measures satisfaction with the present, not progress sought across contexts. Customers cannot reliably imagine ideal solutions in a vacuum, which leads to status quo bias and false negatives on unmet outcomes.¹ The fix is to write every item as a solution-independent outcome that is observable in the customer’s world and measurable on importance and satisfaction scales, or on likelihood of fulfillment in a real situation.² This format lets analytics surface underserved outcomes, which pair high importance with low satisfaction and signal the opportunity areas worth pursuing.
Mistake 2: Surveys skip the situational context
JTBD lives in context, and context governs choice. Without screening for situation, constraints, and triggers, responses mix unlike use cases and hide the signal.⁴ Add items that capture the hiring situation, the frequency and recency of the job, and the constraints that shape tradeoffs, such as time pressure, regulatory steps, or channel availability.⁵ Segment analysis should then compare outcomes across contexts, not across demographics. The practical move is to recruit by situation and to write intros that frame the job before any scale appears. This keeps cognition focused and reduces interpretation error.
Mistake 3: Outcome wording drifts from observable to vague
Vague outcomes like “make it seamless” create interpretive variance and lower reliability. Cognitive survey research shows that specific, concrete, behavior-anchored wording improves comprehension and reduces satisficing.⁶ Replace vague adjectives with measurable attributes of time, variability, likelihood, quantity, or risk.² Test wording with five to eight cognitive interviews to confirm that participants interpret items as intended.⁶ Lock a glossary so that similar outcomes reuse common terms, which stabilizes embeddings and helps both humans and machines compare like with like in analysis.
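To make the glossary discipline concrete, here is a minimal sketch of a pre-test wording check in Python. The vague-term and dimension lists are illustrative placeholders, not a validated lexicon; a real check should draw its terms from your own glossary and cognitive debriefs.

```python
# Illustrative lists; replace them with terms from your own glossary and debriefs.
VAGUE_TERMS = {"seamless", "easy", "intuitive", "frictionless", "delightful"}
MEASURABLE_DIMENSIONS = {"time", "likelihood", "variability", "number", "risk"}

def check_outcome(statement: str) -> list[str]:
    """Return wording issues for a draft outcome statement."""
    words = {w.strip(".,").lower() for w in statement.split()}
    issues = []
    if words & VAGUE_TERMS:
        issues.append(f"vague terms: {sorted(words & VAGUE_TERMS)}")
    if not words & MEASURABLE_DIMENSIONS:
        issues.append("no measurable dimension (time, likelihood, variability, number, risk)")
    return issues

print(check_outcome("Make onboarding seamless"))
print(check_outcome("Minimize the time to confirm which documents a claim requires"))
```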
Mistake 4: Scales get sloppy and anchor effects creep in
Many teams deploy five-point scales with inconsistent anchors, which distorts measurement. Consistency in numeric direction, label semantics, and midpoints matters because respondents read scales as signals about the desired answer.⁶ Importance can use a seven-point scale anchored from “not important at all” to “extremely important.” Satisfaction can mirror it, from “not at all satisfied” to “extremely satisfied.” Keep numeric direction aligned so that higher numbers always indicate more of the construct. Randomize item order within sections to reduce order effects, and include instructed-response checks to flag satisficing.⁶ This discipline increases reliability and supports defensible comparisons.
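As a sketch of what consistent anchors plus per-respondent randomization can look like in an instrument pipeline, the Python below uses hypothetical item IDs and wording; the anchor labels and the instructed-response item are illustrative, not a prescribed standard.

```python
import random

# Hypothetical outcome items; IDs and wording are illustrative only.
ITEMS = [
    {"id": "out_01", "text": "Minimize time to first response after reporting an outage"},
    {"id": "out_02", "text": "Minimize uncertainty about which documents a claim requires"},
    {"id": "out_03", "text": "Minimize the likelihood of repeating information across channels"},
]

# One consistent 7-point anchor pair per construct; higher always means more.
IMPORTANCE_ANCHORS = {1: "Not important at all", 7: "Extremely important"}
SATISFACTION_ANCHORS = {1: "Not at all satisfied", 7: "Extremely satisfied"}

def build_section(items, seed):
    """Randomize item order per respondent to dampen order effects,
    then append one instructed-response check to flag satisficing."""
    rng = random.Random(seed)              # seed per respondent for reproducibility
    ordered = rng.sample(items, len(items))  # shuffled copy, original list untouched
    ordered.append({"id": "irc_01", "text": "Select 'Extremely important' for this item"})
    return ordered

# Example: build the importance block for respondent 42.
print([item["id"] for item in build_section(ITEMS, seed=42)])
```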
Mistake 5: Teams average away underserved outcomes
Averages hide opportunity. If 30 percent of the market reports very high importance and very low satisfaction for a specific outcome, the mean can still look moderate. ODI-style analyses compute an opportunity score that combines importance and satisfaction to identify underserved outcomes that warrant innovation or service redesign.² Teams should cluster respondents by outcome patterns and situations, not by features owned or channels used.² This segmentation reveals practical vectors for redesign, such as “fast resolution under regulatory deadlines,” and links directly to service blueprints and staffing models.
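Here is a minimal sketch of the scoring step, assuming importance and satisfaction are summarized per outcome as top-two-box percentages rescaled to a 0 to 10 range, in the spirit of Ulwick’s published formula; the outcomes, numbers, and reading of the thresholds are illustrative.

```python
import pandas as pd

# Hypothetical outcome-level results: importance and satisfaction are
# top-two-box percentages of respondents, rescaled to a 0-10 range.
df = pd.DataFrame({
    "outcome": [
        "Minimize time to first response",
        "Minimize uncertainty about required documents",
        "Minimize repeat contacts for the same issue",
    ],
    "importance": [9.1, 8.4, 7.2],
    "satisfaction": [3.2, 6.8, 7.0],
})

# Ulwick-style score: opportunity = importance + max(importance - satisfaction, 0).
df["opportunity"] = df["importance"] + (df["importance"] - df["satisfaction"]).clip(lower=0)

# Scores above roughly 10 are commonly read as underserved; well above 12 as severely so.
print(df.sort_values("opportunity", ascending=False))
```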
Mistake 6: Sampling frames ignore who actually hires the service
Surveys often target account owners rather than the people who drive moments of truth. That practice yields polite but unhelpful answers. Design a sampling frame that recruits recent job executors within the right lookback period and across key contexts.⁵ Use stratified sampling to ensure coverage of high-value segments and rare but critical scenarios, such as outages or exceptions. Power the sample for comparisons at the segment level. Good frames reduce nonresponse bias and replicate better.⁷ When privacy or identity systems limit targeting, use event-triggered intercepts and link operational IDs to survey invitations under a clear consent regime.
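For the powering step, a rough per-stratum sample-size check for comparing two proportions might look like the following; the strata, baseline rates, and detectable gap are illustrative assumptions that your own analysis plan should set.

```python
from math import ceil
from scipy.stats import norm

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate respondents needed per group to detect the difference
    between two proportions with a two-sided z-test."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Illustrative strata: recent job executors in routine vs. exception situations.
# Detecting a 45% vs. 60% gap in top-two-box satisfaction needs roughly this many
# respondents per stratum:
print(n_per_group(0.45, 0.60))
```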
Mistake 7: Teams skip the qualitative backbone
Surveys without prior qualitative discovery lift language from internal decks, not customers. A short run of qualitative interviews produces the raw material for outcome statements and job maps and prevents category jargon from leaking into items.¹ Pair the qualitative phase with a pilot survey and a cognitive debrief to validate comprehension and burden.⁶ Document the final codebook before full launch. This backbone keeps the survey faithful to customer language and provides an audit trail for leadership and compliance reviews.
Mistake 8: Analyses stop at dashboards and miss decisions
Teams celebrate heatmaps and stop short of a decision. Executives need ranked outcome opportunities, quantified gaps, and costed interventions. Build an analysis plan that computes opportunity scores, flags statistically meaningful differences, and converts outcome gaps into design hypotheses with expected impact.² Combine survey results with operational data, such as handle time, first contact resolution, and churn markers, to estimate value at stake.⁸ Close with a decision brief that states which underserved outcomes the service will address, where to adjust policy, and how to measure post-change impact.
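As one way to express value at stake, the sketch below joins outcome gaps to operational cost under a consented key; every column name, account ID, and cost figure is hypothetical, and the cutoff for a “high gap” account is an assumption to replace with your own analysis plan.

```python
import pandas as pd

# Hypothetical survey output: respondent-level gap on one underserved outcome.
survey = pd.DataFrame({
    "account_id": ["A1", "A2", "A3"],
    "gap_fast_resolution": [5.9, 1.2, 4.4],   # importance minus satisfaction
})

# Hypothetical operational data keyed to the same accounts under consent.
ops = pd.DataFrame({
    "account_id": ["A1", "A2", "A3"],
    "repeat_contacts_90d": [6, 1, 4],
    "cost_per_contact": [11.0, 11.0, 11.0],
})

linked = survey.merge(ops, on="account_id")

# Rough value at stake: avoidable contact cost concentrated in high-gap accounts.
at_risk = linked[linked["gap_fast_resolution"] > 3]
print((at_risk["repeat_contacts_90d"] * at_risk["cost_per_contact"]).sum())
```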
How should you phrase JTBD survey items to reduce bias
Writers should use simple sentences, active verbs, and single ideas per item. Cognitive testing research shows that double-barreled questions and negations degrade data quality, and that plain language improves recall and judgment.⁶ Outcome statements should avoid solutions and specify the dimension being minimized or maximized, such as time, variability, likelihood, frequency, or risk.² Introductions should remind respondents of the defined situation to focus memory on a specific episode. Add a brief example only if it is generic and does not point toward your product. These practices reduce cognitive load and support valid responses across devices and attention states.
Which metrics reveal underserved jobs without guesswork
Leaders should track three metrics. The first is an outcome opportunity score that combines importance and satisfaction to rank gaps.² The second is a situational penetration rate that quantifies how often a segment faces the job in the defined context.⁵ The third is intent to switch under the same context, which serves as a proxy for competitive pressure.¹ Together, these metrics identify where progress is both vital and under-delivered. Tie these metrics to service level targets and workforce planning so that improvement is visible in operational dashboards, not only in insight reports.
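A compact sketch of the second and third metrics, computed per segment from respondent-level data; the segments, lookback flag, and the top-two-box cutoff for switching intent are illustrative assumptions, and the opportunity score follows the earlier sketch.

```python
import pandas as pd

# Hypothetical respondent-level records for one job in one defined context.
resp = pd.DataFrame({
    "segment": ["claims", "claims", "claims", "billing", "billing"],
    "faced_job_90d": [1, 1, 0, 1, 0],   # faced the job within the lookback window
    "switch_intent": [6, 7, 2, 3, 1],   # 1-7 likelihood of switching in this context
})

by_segment = resp.groupby("segment").agg(
    penetration=("faced_job_90d", "mean"),                          # share facing the job
    switch_pressure=("switch_intent", lambda s: (s >= 6).mean()),   # top-two-box intent
)
print(by_segment)
```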
What are the governance moves that keep JTBD surveys credible
Strong governance keeps the method repeatable. Publish a definitions sheet that fixes the job, outcome, and job map for the study. Version control the item list and the scale anchors. Pre-register the analysis plan and segmentation rules so that teams do not backfit thresholds after seeing results.⁷ Run a short pilot to test burden, reliability, and technical performance across devices. Establish a review ritual with legal and privacy so that consent, identity linkage, and retention align with policy. These moves create a defensible insight asset that compliance teams will support and that product and service leaders will actually use.
How to put it all together in your next instrument
You should start with a tight job statement and a draft job map. Harvest outcomes from five to ten qualitative interviews, refine wording, and group them by job step. Write importance and satisfaction items with consistent anchors. Add situational screens, identity keys, and quality checks. Pilot with a small sample, iterate wording, and lock the codebook. Field to a powered sample and analyze by situation and outcome clusters. Convert findings into a decision brief with specific design changes, expected gains, and a measurement plan. This cadence turns JTBD theory into a reliable insight pipeline that improves service quality and customer experience at speed.¹²
FAQ
What is Jobs to be Done and why does it matter for surveys?
Jobs to be Done defines the progress a customer seeks in a specific situation, independent of any current solution. Using JTBD in surveys focuses measurement on desired outcomes rather than features, which improves decision quality for service and product change.¹²
How should I write JTBD survey items to avoid bias?
Write each item as a solution-independent desired outcome that is concrete and observable, specify the dimension being minimized or maximized, and use consistent importance and satisfaction anchors. Pilot with cognitive interviews to validate comprehension.²⁶
Which segments should I compare in JTBD analysis?
Compare respondents by situation and outcome patterns, not by demographics or feature ownership. Segmenting by hiring context reveals underserved outcomes and directs redesign toward moments that matter.²⁵
Why do averages hide opportunity in JTBD surveys?
Averages can mask pockets of high importance and low satisfaction. Use opportunity scores and cluster analysis to identify underserved outcomes for specific contexts where improvement will create outsized impact.²
Who should be recruited for a JTBD survey sample?
Recruit people who recently executed the job in the defined context. Use stratified sampling and event-triggered intercepts to reach the true decision makers for moments of truth, and power the sample for comparisons.⁵⁷
Which metrics best signal where to invest?
Track outcome opportunity scores, situational penetration of the job, and intent to switch under context. These metrics highlight high value, under-delivered progress that aligns to service level and workforce planning.¹²⁵
What governance practices keep JTBD surveys credible in the enterprise?
Publish definitions for job, outcomes, and job maps, pre-register the analysis plan, standardize scale anchors, run a pilot, and align consent and identity linkage with privacy policy. These steps make results auditable and repeatable.⁷
Sources
Clayton M. Christensen, Taddy Hall, Karen Dillon, David S. Duncan. 2016. “Know Your Customers’ Jobs to Be Done.” Harvard Business Review. https://hbr.org/2016/09/know-your-customers-jobs-to-be-done
Anthony W. Ulwick. 2005. What Customers Want: Using Outcome-Driven Innovation to Create Breakthrough Products and Services. McGraw Hill. https://www.strategyn.com/books/what-customers-want/
Anthony W. Ulwick. 2009. “The Job Map: The Eight Stages of Job Execution.” Strategyn. https://strategyn.com/jobs-to-be-done/job-map/
Bob Moesta and Greg Engle. 2020. Demand-Side Sales 101. Lioncrest Publishing. https://therewiredgroup.com/books/demand-side-sales-101/
Jim Kalbach. 2020. The Jobs To Be Done Playbook. Rosenfeld Media. https://rosenfeldmedia.com/books/jobs-to-be-done-playbook/
Roger Tourangeau, Lance J. Rips, Kenneth Rasinski. 2000. The Psychology of Survey Response. Cambridge University Press. https://www.cambridge.org/core/books/psychology-of-survey-response/1E648DF47E6A2A6C7A92C55E44DC8833
Don A. Dillman, Jolene D. Smyth, Leah Melani Christian. 2014. Internet, Phone, Mail, and Mixed-Mode Surveys: The Tailored Design Method. Wiley. https://onlinelibrary.wiley.com/doi/book/10.1002/9781118645572
Sunil Gupta. 2018. Driving Digital Strategy. Harvard Business Review Press. https://store.hbr.org/product/driving-digital-strategy-transform-your-business-through-disruption/S0041