Participation Metrics: Engagement, Diversity, Depth

What do “participation metrics” actually measure?

Leaders measure participation to see who shows up, how they contribute, and what value co-creation generates. Participation metrics track engagement, diversity, and depth to give an evidence base for service innovation. Engagement counts the volume and frequency of interactions across channels and moments in a journey. Diversity records the representativeness and mix of voices that shape decisions. Depth reflects the level of influence participants exert on problem framing, design choices, and governance. Human-centred design defines participation as early, continuous involvement of users to achieve outcomes that are usable, useful, and accessible, which makes engagement quality a core success criterion.¹ Service-dominant logic strengthens this view by defining value as co-created through interactions, not delivered as a finished product, which elevates participation from a courtesy to a mechanism of value creation.²

Why should executives prioritise engagement, diversity, and depth together?

Executives prioritise the trio because the components reinforce each other and reduce risk. High engagement without diversity amplifies bias and produces fragile services. Broad diversity without depth tokenises participants and erodes trust. Depth without sufficient engagement narrows the idea pool and stalls scale. Research on leadership diversity shows a clear correlation with innovation revenue and EBIT: firms with above-average management diversity report innovation revenue 19 percentage points higher than peers, with EBIT margins 9 points higher, which signals how inclusive participation translates into commercial outcomes.³ Co-creation research shows customers increasingly shape value through their experiences, so programs that widen and deepen participation tend to accelerate learning cycles and improve fit to need.⁴

How do we define participation “depth” in a practical, defensible way?

Teams define depth by the degree of decision influence granted to participants across discovery, design, delivery, and governance. Arnstein’s classic ladder describes rungs from manipulation to citizen control and remains a useful lens for classifying power-sharing.⁵ Contemporary practice often maps depth against the IAP2 spectrum from inform to empower, which helps leaders operationalise thresholds for co-design and co-decision.⁶ Human-centred design standards reinforce that meaningful depth begins when teams involve users early, iterate with them often, and evaluate with them against real tasks and contexts.¹ A defensible definition links each depth level to specific decision rights, artifacts, and checkpoints. This structure creates auditability and reduces performative engagement.
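A minimal sketch of how a team might encode depth as explicit decision rights, using IAP2 level names as tags. The specific rights, artifacts, and checkpoints shown are illustrative assumptions, not prescribed by either framework.

    from dataclasses import dataclass
    from enum import IntEnum

    class Depth(IntEnum):
        # Ordered tags following the IAP2 spectrum, from least to most influence.
        INFORM = 1
        CONSULT = 2
        INVOLVE = 3
        COLLABORATE = 4
        EMPOWER = 5

    @dataclass
    class DepthCharter:
        level: Depth
        decision_rights: list   # decisions participants can make or veto (illustrative)
        artifacts: list         # co-authored outputs expected at this level
        checkpoint: str         # gate at which the charter is audited

    CHARTERS = {
        Depth.CONSULT: DepthCharter(Depth.CONSULT, ["comment on problem framing"], ["feedback log"], "discovery review"),
        Depth.COLLABORATE: DepthCharter(Depth.COLLABORATE, ["approve acceptance criteria"], ["co-authored design brief"], "design gate"),
        Depth.EMPOWER: DepthCharter(Depth.EMPOWER, ["decide policy trade-offs"], ["governance minutes"], "release gate"),
    }

    def meets_threshold(activity: Depth, required: Depth) -> bool:
        # True when an activity grants at least the required depth of influence.
        return activity >= required

Linking each level to named decision rights and a checkpoint is what creates the audit trail: a reviewer can verify at each gate that the charter was honoured, not merely that sessions took place.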

What signals prove real engagement rather than activity noise?

Teams prove engagement by triangulating behavioural, attitudinal, and outcome signals. Behavioural signals track active contributions such as ideas submitted, prototypes tested, and cycles completed per participant. Attitudinal signals track perceived influence, psychological safety, and clarity of purpose through structured surveys and debriefs. Outcome signals connect participation to measurable service improvements like task success, time to resolution, and complaint reduction. Human-centred design guidelines recommend continuous involvement with traceable outcomes, which ties signals back to design decisions.¹ Community-based co-design research highlights that clear roles, accessible sessions, and feedback loops increase sustained participation and reduce attrition, which are leading indicators of real engagement.⁷
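As an illustration, a minimal sketch of triangulation as a weighted blend of the three signal families. The field names, the normalisation of every signal to a 0..1 scale, and the weights are illustrative assumptions a team would calibrate locally.

    def engagement_score(behavioural: dict, attitudinal: dict, outcome: dict,
                         weights=(0.4, 0.3, 0.3)) -> float:
        # Average each signal family (values pre-normalised to 0..1), then blend.
        families = [behavioural, attitudinal, outcome]
        means = [sum(f.values()) / len(f) for f in families]
        return sum(w * m for w, m in zip(weights, means))

    score = engagement_score(
        behavioural={"ideas_submitted": 0.7, "prototypes_tested": 0.5, "cycles_completed": 0.6},
        attitudinal={"perceived_influence": 0.8, "psychological_safety": 0.9},
        outcome={"task_success": 0.75, "complaint_reduction": 0.4},
    )
    print(f"composite engagement: {score:.2f}")  # 0.67

The point of the blend is that no single family can carry the score: heavy attendance with weak perceived influence and flat outcomes reads as activity noise, not engagement.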

How does diversity in participation drive better service innovation?

Diverse participation broadens problem framing and increases the chance of discovering useful edge cases. Empirical studies report a strong correlation between leadership diversity and innovation outcomes, which aligns with the mechanism that heterogeneous teams generate more and better ideas and convert them into market results.³ Reviews of participatory design show growth in techniques that intentionally recruit varied stakeholders and sustain involvement across multiple stages, which increases the quality of requirements and evaluation.⁸ Co-creation theory positions customers as active contributors whose varied contexts and capabilities shape value-in-use, so diversity in lived experience becomes a design input rather than a demographic checkbox.²

How do we operationalise participation metrics for enterprise programs?

Leaders operationalise participation by defining metric families, measurement cadences, and thresholds tied to gates. Engagement includes reach, participation rate, session completion, contribution count, and retention across sprints. Diversity includes representation mix against priority segments, inclusion scores, and participation equity ratios that compare talk time, task time, and decision time across groups. Depth includes decision-rights index, co-authored artifacts count, and governance share at key milestones. ISO 9241-210 encourages integrating these checks into normal life-cycle stages, which keeps measurement close to work rather than as a separate audit.¹ Service programs then connect metrics to incentives and risk controls so that teams cannot progress without meeting minimum participation thresholds. This structure improves repeatability and credibility.
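A minimal sketch of two of these controls, a participation equity ratio and a gate check, assuming per-group shares of talk or decision time are captured at source. The threshold values are illustrative.

    def equity_ratio(shares: dict) -> float:
        # Min/max share across groups: 1.0 is perfectly even, near 0 means one group dominates.
        return min(shares.values()) / max(shares.values())

    def passes_gate(engagement_rate: float, equity: float, depth_index: float,
                    minima=(0.30, 0.60, 0.50)) -> bool:
        # Teams cannot progress unless every metric family clears its minimum.
        actuals = (engagement_rate, equity, depth_index)
        return all(actual >= floor for actual, floor in zip(actuals, minima))

    talk_time = {"customers": 0.38, "frontline": 0.27, "policy": 0.35}
    print(equity_ratio(talk_time))                            # 0.27 / 0.38 ≈ 0.71
    print(passes_gate(0.42, equity_ratio(talk_time), 0.55))   # True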

What mechanisms raise participation depth without slowing delivery?

Product teams raise depth by shifting left and compressing decision loops. Teams convene small, representative panels with clear decision mandates for problem statements, acceptance criteria, and policy trade-offs. Lightweight participatory techniques such as rapid co-sketch, moderated concept testing, and timeboxed governance reviews maintain speed while expanding influence. Participatory design literature suggests that multi-stage involvement with a defined set of techniques improves collaboration quality and clarity of outcomes.⁸ Co-design in community services shows that pairing lived-experience advisors with domain experts shortens rework and improves adoption, which offsets the time invested upfront.⁷ Leaders codify these patterns as standard operating procedures with role charters, templates, and data capture.
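A minimal sketch of such an SOP as a machine-readable session template. The durations, panel mix, mandate wording, and capture fields are illustrative assumptions.

    SESSION_TEMPLATE = {
        "name": "rapid co-sketch",
        "timebox_minutes": 90,
        "panel": {"lived_experience": 3, "domain_expert": 2, "facilitator": 1},
        "mandate": ["agree problem statement", "rank top three concepts"],
        "outputs": ["co-authored sketch", "decision log entry"],
        "data_capture": ["talk time per role", "decisions taken vs. deferred"],
    }

    def within_timebox(elapsed_minutes: int, template: dict = SESSION_TEMPLATE) -> bool:
        # Stop-rule that keeps depth-raising sessions from slowing delivery.
        return elapsed_minutes <= template["timebox_minutes"]

Encoding the template once makes the mandate and the data capture repeatable across teams, which is what turns a facilitation habit into a standard operating procedure.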

Which comparisons help executives choose the right participation model?

Executives compare participation models along decision rights, time-to-impact, and risk profile. Consultation models optimise speed but offer shallow depth and higher risk of misfit. Co-design models trade some upfront time for higher certainty and downstream savings. Co-decision models maximise legitimacy for sensitive services but require strong facilitation and governance. Arnstein’s ladder provides a simple vocabulary for these trade-offs and helps align expectations with citizens and customers.⁵ The IAP2 spectrum further clarifies when to inform, consult, involve, collaborate, or empower, which supports program-level portfolio choices.⁶ Co-creation research suggests that moving up the depth curve is most beneficial when problems are novel, ambiguous, or high-stakes, because user expertise is essential to define value.⁴

How do we manage the risks of wider participation?

Leaders manage risks by formalising ethics, privacy, and equity controls. Programs set participation eligibility, consent processes, and data governance. Teams monitor participation equity so that no single group dominates airtime or decisions. Reviews document how insights translated into requirements to prevent tokenism. Human-centred design standards advise aligning responsibilities, documenting context-of-use, and validating with representative users, which reduces safety and compliance risks.¹ Stakeholder engagement frameworks recommend being transparent about trade-offs and closing the loop on decisions to maintain trust.⁶ Co-design in health and disability contexts adds guidance on accessibility, remuneration, and trauma-informed facilitation, which protects participants while improving data quality.⁷

How do we quantify impact so finance teams trust the signal?

Finance teams trust metrics that link participation to cost and revenue levers. Programs quantify rework reduction, time-to-value improvement, call containment, and adoption rates. Leaders triangulate with innovation revenue and margin signals observed in diversity research to frame expectations for scale programs.³ Service-dominant logic encourages shifting impact evaluation from outputs to value-in-use outcomes, which makes adoption and utilisation the primary health checks.² Participatory design reviews show that involving stakeholders across multiple stages improves suitability and satisfaction, which correlates with lower failure rates in implementation.⁸ Executives can stage-gate investments using minimum participation thresholds and validated outcome measures to protect capital allocation.
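A worked example of the arithmetic finance teams can audit, with every figure an illustrative assumption rather than a benchmark from the cited research:

    # Link participation to two cost levers: rework avoided and calls contained.
    baseline_rework_cost = 800_000   # annual cost of post-release rework
    rework_reduction = 0.25          # observed after co-design, vs. baseline
    contained_calls = 12_000         # calls resolved in self-service per year
    cost_per_call = 6.50

    annual_benefit = baseline_rework_cost * rework_reduction + contained_calls * cost_per_call
    program_cost = 150_000           # facilitation, participant remuneration, tooling
    print(f"net benefit: {annual_benefit - program_cost:,.0f}")  # net benefit: 128,000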

What is the field-tested measurement framework we can adopt now?

Teams can adopt a simple, field-tested framework that fits governance and reporting:

  • Engagement. Track reach, active participation rate, contribution velocity, session completion, and retention per sprint, with benchmarks by channel and segment. Align cadences to sprint reviews and release trains. Human-centred design standards support continuous involvement and iterative evaluation to keep these metrics tied to real tasks.¹

  • Diversity. Track representation across priority segments, inclusion scores, and participation equity ratios that capture speaking time and decision influence. Use leadership diversity evidence as a north star for the business case and set portfolio-level targets that reflect local context.³

  • Depth. Track decision-rights index by phase, count of co-authored artifacts, and governance share. Classify activities with Arnstein ladder or IAP2 spectrum tags for comparability across initiatives.⁵ ⁶

Co-creation and service-dominant logic provide the conceptual backbone, while participatory design reviews and sector-specific co-design studies supply practical technique sets for delivery teams.² ⁴ ⁷ ⁸
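A minimal sketch of the metric kit as a single reporting record, assuming one row per initiative per sprint. Field names mirror the three families above; all values are illustrative.

    from dataclasses import dataclass

    @dataclass
    class ParticipationRecord:
        initiative: str
        sprint: int
        # Engagement
        reach: int
        active_rate: float           # active participants / invited
        retention: float             # returning participants / prior sprint
        # Diversity
        representation: dict         # share of participation by priority segment
        equity_ratio: float          # min/max share of decision time across groups
        # Depth
        depth_tag: str               # IAP2 tag: inform|consult|involve|collaborate|empower
        decision_rights_index: float
        coauthored_artifacts: int

    row = ParticipationRecord(
        "claims-redesign", 4, reach=120, active_rate=0.45, retention=0.8,
        representation={"carers": 0.3, "claimants": 0.5, "staff": 0.2},
        equity_ratio=0.66, depth_tag="collaborate",
        decision_rights_index=0.6, coauthored_artifacts=3,
    )

One record per initiative per sprint keeps the kit close to delivery cadence and makes portfolio roll-ups a simple aggregation rather than a separate audit exercise.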

What are the next steps for C-level and CX leaders?

Executives move first on governance, then on practice, then on scale. Leaders set minimum thresholds for engagement, diversity, and depth as gating criteria for investments. Teams adopt a standard metric kit and instrument toolchains to capture signals at source. Programs publish participation dashboards alongside delivery status to make inclusivity visible and comparable. Diversity research offers credible external benchmarks for portfolio targets, which helps secure board sponsorship.³ Human-centred design and stakeholder frameworks provide ready checklists and templates to embed into existing lifecycle controls, which reduces adoption friction.¹ ⁶ The outcome is a service innovation engine that is faster, fairer, and more reliable because the right people help shape the right decisions at the right time.


FAQ

What are participation metrics in Customer Science?
Participation metrics are a structured set of measures that track who engages, how representative participation is, and how much influence participants have on decisions across discovery, design, delivery, and governance. They operationalise human-centred design and service co-creation into measurable program controls.¹ ²

Why do diversity measures belong inside participation metrics?
Diversity measures indicate whether a wide mix of lived experiences informs decisions. Evidence links management diversity with stronger innovation revenue and EBIT performance, which supports the business case for diverse participation in service innovation.³

How is participation depth different from engagement volume?
Depth captures the degree of decision influence participants hold, not just attendance. Frameworks like Arnstein’s ladder (manipulation to citizen control) and the IAP2 spectrum (inform to empower) classify these levels, which makes decision rights explicit and auditable.⁵ ⁶

Which standards and frameworks underpin participation measurement?
ISO 9241-210 defines principles for human-centred design and continuous user involvement.¹ Models like Arnstein’s ladder and stakeholder engagement frameworks such as the IAP2 spectrum offer practical depth classifications.⁵ ⁶

Which research supports co-creation as a value mechanism?
Service-dominant logic defines value as co-created in use through interactions, not delivered at handover.² Co-creation research shows customers actively shape value through experiences, which makes participation essential to innovation.⁴

How can health and community services run safer co-design?
Studies of community-based co-design recommend clear roles, accessible formats, remuneration, and feedback loops. These controls protect participants and improve data quality while sustaining engagement.⁷

Which practices keep speed while increasing participation depth?
Teams use small, representative panels with explicit mandates, timeboxed co-sketch and test cycles, and structured governance reviews. Participatory design reviews show multi-stage involvement improves requirements quality without excessive delay when techniques are right-sized.⁸


Sources

  1. ISO 9241-210:2019 — Ergonomics of human–system interaction, Part 210: Human-centred design for interactive systems. International Organization for Standardization, 2019. https://www.iso.org/standard/77520.html

  2. On value and value co-creation: A service systems and service logic perspective. S. L. Vargo, R. F. Lusch. 2008. European Management Journal (Elsevier). https://www.sciencedirect.com/science/article/pii/S026323730800042X

  3. How Diverse Leadership Teams Boost Innovation. R. Lorenzo, N. Voigt, M. Tsusaka, M. Krentz, K. Abouzahr. 2018. Boston Consulting Group. https://www.bcg.com/publications/2018/how-diverse-leadership-teams-boost-innovation

  4. Co-creation experiences: The next practice in value creation. C. K. Prahalad, V. Ramaswamy. 2004. Journal of Interactive Marketing (Elsevier). https://www.sciencedirect.com/science/article/pii/S1094996804701073

  5. A Ladder of Citizen Participation. S. R. Arnstein. 1969. Journal of the American Institute of Planners. PDF reprint. https://www.lithgow-schmidt.dk/sherry-arnstein/ladder-of-citizen-participation_en.pdf

  6. Ladder of Citizen Participation and IAP2 Spectrum: Learning from classic frameworks. G. Bammer. 2022. Integration and Implementation Insights. https://i2insights.org/2022/08/30/learning-from-arnsteins-ladder-and-iap2-spectrum/

  7. Community-based participatory research through co-design: supporting inclusive research practice. E. Russell et al. 2024. Research Involvement and Engagement (BMC). https://researchinvolvement.biomedcentral.com/articles/10.1186/s40900-024-00573-3

  8. Participatory design: a systematic review and insights for future practice. P. Wacnik, S. Daly, A. Verma. 2025. Design Science (Cambridge University Press). https://www.cambridge.org/core/journals/design-science/article/participatory-design-a-systematic-review-and-insights-for-future-practice/C310A25B481980BE14AD4B38C0EE46D1
