Co-Creation Session Runbook

Why co-creation matters in enterprise CX

Executives engage customers to reduce risk and accelerate value. Co-creation is a structured method where customers, employees, and partners jointly define problems, generate options, and select solutions that create shared value. Practitioners frame co-creation as a shift from designing for users to designing with them, which raises relevance and adoption for complex services.¹ Co-creation also aligns with Service-Dominant Logic, which states that value emerges through use, not at the point of production.² These principles help Customer Experience leaders move beyond surveys toward participatory evidence that informs strategy, service design, and operational decisions. Organizations win when co-creation integrates discovery, design, and delivery in one experience pipeline. The most effective runbooks keep roles clear, artifacts lightweight, and cadence predictable. Teams that formalize this pipeline report faster alignment, clearer requirements, and fewer late-stage rework cycles.³ ⁴

What is a co-creation session, precisely?

A co-creation session is a time-boxed workshop that blends discovery and design to produce validated service options. Teams invite real customers, frontline employees, and cross-functional stakeholders to explore needs, map journeys, ideate, and prioritize changes. Facilitators steer the group through divergent and convergent modes to balance creativity with decisions.³ The session uses human-centered design practices to define outcomes that reflect user goals and business constraints.⁵ Clear criteria govern what counts as success, what evidence supports decisions, and what commitments follow. The session is not a focus group. It is a working session that yields artifacts ready for engineering, policy, or process design. The output forms a backlog of testable service concepts, a risk register, and an agreed scope for the next sprint or pilot.⁶

How the Double Diamond guides the agenda

Leaders structure the day around the Double Diamond: Discover, Define, Develop, and Deliver. The first diamond diverges to expand understanding, then converges to commit to a problem statement. The second diamond diverges to explore options, then converges to select a solution for prototyping or piloting.³ The model is useful because it is visual, simple, and compatible with any method set.⁷ ⁸ Teams anchor each segment with specific activities and time boxes. A morning discovery block maps moments of friction and desire. A midday definition block synthesizes insights into clear opportunity statements. An afternoon development block explores solution patterns. A final delivery block creates a decision record that captures the chosen path, rationale, evidence, and next steps.³

Who participates and what roles keep momentum?

Sponsors set direction and unlock resources. Facilitators design the flow, manage energy, and safeguard inclusion. Designers and researchers translate insights into artifacts. Product and operations owners define feasibility and constraints. Engineers and data specialists ground options in technical realities. Customers and frontline staff contribute lived experience and context that teams cannot infer from analytics alone.¹ ⁵ Decision clarity prevents churn, so leaders assign DACI or RACI roles before the session. DACI names a Driver, Approver, Contributors, and Informed parties for key decisions. RACI charts work ownership during and after the session. Clear roles compress cycle time because contributors know when to ideate, when to advise, and when to decide.⁹ ¹⁰
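The DACI assignment can be written down as a simple lookup before the session starts, so the approver for each decision is never in doubt on the day. A minimal sketch in Python; the decision title and role names below are illustrative placeholders, not part of the runbook:

```python
# Minimal DACI register for session decisions. Each decision names exactly
# one approver; decision titles and role names here are hypothetical examples.
daci_register = {
    "select_focus_opportunity": {
        "driver": "facilitator",
        "approver": "product_owner",
        "contributors": ["customers", "frontline_staff", "engineers"],
        "informed": ["sponsor", "operations_lead"],
    },
}

def approver_for(decision: str) -> str:
    """Return the single accountable approver for a named decision."""
    return daci_register[decision]["approver"]
```

A facilitator can surface `approver_for("select_focus_opportunity")` at the decision point instead of re-opening the question of who decides.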

What agenda reliably delivers outcomes in one day?

Teams run a nine-step agenda that fits within a single business day.

1) Frame: Confirm objectives, scope, and success criteria.
2) Context: Share existing research, service metrics, and constraints.
3) Discover: Run interviews, lightning talks, and journey fragments to surface needs.
4) Define: Cluster signals, name opportunity statements, and select one to three focus areas.
5) Develop: Ideate with structured prompts and remix patterns from proven services.
6) Rank: Score options against desirability, feasibility, and viability.
7) Decide: Ask the DACI approver to confirm the choice and record trade-offs.
8) Plan: Draft a one-page pilot plan with metrics and owners.
9) Close: Confirm commitments, dates, and risk mitigations.

The sequence mirrors Double Diamond mechanics and standard workshop facilitation guidance.³ ⁶ ⁸
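The Rank step can be made mechanical with a weighted score across the three lenses, so the room debates weights once rather than re-arguing each option. A hedged sketch; the weights, option names, and 1-to-5 scores below are illustrative, not values the runbook prescribes:

```python
# Score each option on desirability, feasibility, and viability (1-5),
# then rank by weighted total. All weights and scores are illustrative.
WEIGHTS = {"desirability": 0.4, "feasibility": 0.3, "viability": 0.3}

options = {
    "self_service_status_page": {"desirability": 4, "feasibility": 5, "viability": 3},
    "proactive_outage_alerts":  {"desirability": 5, "feasibility": 3, "viability": 4},
}

def weighted_score(scores: dict) -> float:
    """Weighted sum of an option's lens scores."""
    return sum(WEIGHTS[lens] * value for lens, value in scores.items())

ranked = sorted(options, key=lambda name: weighted_score(options[name]), reverse=True)
```

Publishing the scored matrix alongside the decision record makes the trade-offs reusable in later sessions.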

Which activities create evidence fast, not just ideas?

Facilitators select activities that turn discussion into artifacts. Journey slices visualize moments worth fixing. Service blueprints connect frontstage interactions to backstage processes. “How might we” prompts convert complaints into opportunities. Concept posters articulate the value proposition, users, experience flow, and operational impacts. Prioritization matrices make trade-offs explicit and reusable. Decision records capture what the group chose and why, which protects momentum when teams rotate.³ ⁶ Leaders also include a rapid assumptions test where participants list riskiest assumptions and propose the earliest feasible tests. This habit aligns with human-centered design standards that require validation with users throughout the lifecycle.⁵ Using a small set of repeatable activities keeps the runbook teachable across teams.⁶
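One way to order the rapid assumptions test is to score each assumption and sort by a simple risk product; this scoring scheme is an added convention, not something the runbook specifies, and the assumption texts and 1-to-5 scores below are invented for illustration:

```python
# Rank assumptions by risk = impact x uncertainty (each scored 1-5).
# The assumptions and scores below are illustrative placeholders.
assumptions = [
    {"text": "Customers will opt in to proactive alerts", "impact": 5, "uncertainty": 4},
    {"text": "Agents can resolve cases within the SLA",   "impact": 4, "uncertainty": 2},
    {"text": "Legal approves cross-team data sharing",    "impact": 5, "uncertainty": 5},
]

riskiest_first = sorted(
    assumptions, key=lambda a: a["impact"] * a["uncertainty"], reverse=True
)
# The top item becomes the earliest candidate for a cheap, fast test.
```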

How to measure impact without slowing delivery

Executives need measures that show movement from insight to value. Teams define three layers of metrics. The learning layer tracks hypotheses tested, insights confirmed, and assumptions retired. The experience layer tracks task success, effort scores, resolution time, and sentiment at key journey checkpoints. The business layer tracks adoption, conversion, cost to serve, and revenue at the service level. Teams assign leading indicators to the pilot plan and set decision thresholds for go, hold, or pivot. Decision thresholds tie directly to the DACI approver to reduce re-litigation. Leaders align these measures with human-centered design principles so that evidence reflects real user outcomes rather than vanity metrics.⁵ Repeated use of this structure builds a comparable evidence base across programs.⁶
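The go, hold, or pivot thresholds can be recorded as explicit rules before the pilot starts, so the DACI approver applies a pre-agreed test rather than an after-the-fact judgment. A minimal sketch, assuming a single leading indicator such as task success rate; the threshold values are illustrative:

```python
# Go / hold / pivot rule tied to one leading indicator.
# Threshold values are illustrative, not prescribed by the runbook.
GO_THRESHOLD = 0.70     # at or above: scale or extend the pilot
PIVOT_THRESHOLD = 0.40  # below: rework the concept

def pilot_decision(task_success_rate: float) -> str:
    """Map a measured task success rate to a pre-agreed decision."""
    if task_success_rate >= GO_THRESHOLD:
        return "go"
    if task_success_rate < PIVOT_THRESHOLD:
        return "pivot"
    return "hold"  # in between: gather more evidence before deciding
```

Real pilots usually combine several indicators, but fixing even one rule in advance reduces re-litigation at the review.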

How to run inclusively and avoid common failure modes

Facilitators design for inclusion so the loudest voice does not set service direction. Teams balance customer voices with operational and technical realities. Practitioners use time-boxed rounds, structured turn-taking, and anonymous input boards to reduce bias and groupthink.⁶ Leaders avoid failure modes by preventing scope creep, capturing decisions in the moment, and protecting the pilot plan from premature scaling. Decision roles stop “zombie” debates that outlive the agenda.⁹ ¹⁰ Standards-aligned practices mandate early and continuous user involvement, iterative design, and multidisciplinary collaboration.⁵ The combination of facilitation discipline and standards-based checkpoints creates repeatable quality. That repeatability is the heart of a runbook that scales across business units and vendors.³ ⁵ ⁶

What tangible outputs should every session deliver?

A strong session produces seven concrete outputs that travel well.

1) A signed decision record with DACI roles and rationale.
2) One to three prioritized opportunity statements.
3) A concept poster for the selected solution.
4) A one-page pilot plan with scope, time box, and owners.
5) A lightweight service blueprint linking frontstage and backstage changes.
6) A risk and dependency list with mitigation owners.
7) A metrics table that names leading indicators and decision thresholds.

These outputs give stakeholders the traceability they need and give teams the actions they need. They also align with Double Diamond checkpoints, workshop best practices, and human-centered design standards.³ ⁵ ⁶ ⁸
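The decision record travels best when its fields are fixed before the session. A sketch of one possible shape using a Python dataclass; the field names are assumptions for illustration, not a standard schema:

```python
from dataclasses import dataclass, field

# One possible shape for a signed decision record. Field names are
# illustrative; adapt them to your own template.
@dataclass
class DecisionRecord:
    decision: str                 # what the group chose
    rationale: str                # why, in one or two sentences
    driver: str                   # DACI driver
    approver: str                 # DACI approver (single, accountable)
    contributors: list[str] = field(default_factory=list)
    evidence: list[str] = field(default_factory=list)   # artifacts that support the choice
    next_steps: list[str] = field(default_factory=list) # owners and dates live here
```

Because the shape is fixed, records from different sessions can be reviewed side by side in the monthly sponsor review.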

How to sustain the practice after day one

Organizations keep momentum by treating co-creation as a product, not a meeting. Leaders create a shared playbook, a kit of parts, and a coaching model so new facilitators can deliver the same quality. Functions embed the runbook into intake and portfolio rituals so every significant change touches the practice. Sponsors review decision records monthly, remove roadblocks, and celebrate shipped pilots. Product teams harvest proven patterns back into design systems and knowledge bases. This sustained loop preserves learning and reduces duplication. It also signals that co-creation is the way work happens here, which builds cultural muscle for customer involvement at scale.¹ ² ⁶ ⁸


FAQs 

What is a co-creation session in Customer Science practice?
A co-creation session is a time-boxed, human-centered workshop where customers, employees, and stakeholders jointly define problems, generate options, and choose solutions that move directly into pilots with clear decision records and metrics.¹ ³ ⁵ ⁶

How does the Double Diamond structure a co-creation runbook?
The Double Diamond guides four phases that alternate divergence and convergence: Discover, Define, Develop, Deliver. Each phase maps to specific activities and decision gates that produce a usable output.³ ⁷ ⁸

Which roles are essential for enterprise co-creation?
Sponsors provide direction. Facilitators manage flow. Designers and researchers create artifacts. Product and operations owners apply constraints. Engineers ground feasibility. Customers and frontline staff supply lived experience. DACI or RACI clarifies decisions and ownership.¹ ⁵ ⁹ ¹⁰

Why use DACI or RACI in Customer Experience programs?
DACI clarifies decision rights, while RACI clarifies task ownership. Using both reduces churn, establishes a single approver, and accelerates movement from insights to pilots.⁹ ¹⁰

Which workshop activities create actionable evidence?
Journey slices, service blueprints, “how might we” prompts, concept posters, prioritization matrices, and assumptions tests turn discussion into artifacts that support fast decisions and pilots.³ ⁶

How should leaders measure co-creation outcomes?
Leaders track a learning layer, an experience layer, and a business layer, then tie thresholds to a DACI approver for go, hold, or pivot decisions. This aligns with human-centered design principles for iterative validation.⁵

What outputs should every co-creation session produce?
Required outputs include a decision record, prioritized opportunity statements, a concept poster, a pilot plan, a service blueprint, a risk list with owners, and a metrics table with leading indicators and thresholds.³ ⁵ ⁶ ⁸


Sources

  1. Prahalad, C. K., & Ramaswamy, V. (2004). Co-creating Unique Value with Customers. Strategy & Leadership. [PDF, Carnegie Mellon University]. (CMU School of Computer Science)

  2. Vargo, S. L., & Lusch, R. F. (2004). Evolving to a New Dominant Logic for Marketing. Journal of Marketing. [Open PDF, NTNU]. (iot.ntnu.no)

  3. Design Council (UK) (n.d.). The Double Diamond. Design Council Resources. (designcouncil.org.uk)

  4. Design Council (UK) (n.d.). Framework for Innovation: The Double Diamond Process. Design Council Resources. (designcouncil.org.uk)

  5. ISO (2019). ISO 9241-210:2019 Ergonomics of human-system interaction – Human-centred design for interactive systems. International Organization for Standardization. (ISO)

  6. Nielsen Norman Group (2019–2023). UX Workshops: Activities, Facilitation, and When to Use Them. NN/g Articles. (Nielsen Norman Group)

  7. Design Council (UK) (n.d.). History of the Double Diamond. Design Council Resources. (designcouncil.org.uk)

  8. Sanders, E. B.-N., & Stappers, P. J. (2008). Co-creation and the New Landscapes of Design. CoDesign. [Article overview and open PDF]. (Taylor & Francis Online)

  9. Atlassian (n.d.). DACI: A Decision-Making Framework. Team Playbook. (Atlassian)

  10. Atlassian (n.d.). RACI Chart: What is it and How to Use It. The Workstream. (Atlassian)
