What are vanity metrics and why do smart teams still use them?
Leaders chase numbers that look good because numbers that look good feel safe. Vanity metrics are measures that rise without proving causal value creation. Pageviews, raw app installs, social followers, open rates, and aggregate satisfaction scores often inflate with spend or season but fail to explain customer or financial outcomes. A useful metric changes decisions, predicts results, and links to a controllable mechanism. Eric Ries popularised the distinction between vanity and actionable metrics to push teams toward learning that drives growth.¹ The strongest test is simple: if the metric moves, do you know which lever you will pull next and why it will work again? If the answer is no, you have a vanity metric problem. Customer leaders should recast reporting around a testable hypothesis, a measurable behaviour, and a clear financial outcome. This keeps metrics honest and keeps teams grounded in cause and effect.²
Where do teams slip up with vanity metrics most often?
Executives often mistake correlation for causation and then invest behind coincidence. Correlation reports show two lines moving together and invite false certainty. Causation requires a mechanism, a counterfactual, and a repeatable treatment. Treat correlation as a clue, not a conclusion.³ Teams also over-index on lagging indicators like revenue, churn, or NPS because these feel strategic and carry brand weight. Lagging indicators validate outcomes after the fact and cannot guide fast operational choices. Leading indicators forecast change and tell operators where to act today.⁴ Finally, many dashboards grow by accretion. New metrics never retire, and the signal-to-noise ratio collapses. Leaders then anchor on the few lines that still rise, even if those lines prove empty of meaning. This is how well-intentioned teams drift from learning to theatre. Strong governance stops metric creep and forces a clear owner, a clear decision, and a clear review cadence.
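To make the correlation trap concrete, here is a minimal sketch with invented data: two unrelated weekly series that both drift upward with seasonal spend correlate almost perfectly in their raw levels, and the apparent relationship disappears once the shared trend is differenced away.

```python
import numpy as np

rng = np.random.default_rng(seed=7)
weeks = np.arange(52)

# Two unrelated metrics that both drift upward over the year.
pageviews = 1_000 + 15 * weeks + rng.normal(0, 60, 52)
revenue = 50_000 + 400 * weeks + rng.normal(0, 2_000, 52)

# Raw levels correlate strongly only because both trend with time.
print(np.corrcoef(pageviews, revenue)[0, 1])  # close to 1

# Differencing removes the shared trend; the "relationship" vanishes.
print(np.corrcoef(np.diff(pageviews), np.diff(revenue))[0, 1])  # near zero
```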
What is Goodhart’s Law and why does it matter in CX?
Goodhart’s Law warns that when a measure becomes a target, it stops being a good measure. The act of targeting distorts the behaviour the metric was meant to proxy.⁵ Campbell’s Law adds that the more a measure is used for decisions, the more it invites gaming and corruption.⁶ CX leaders experience both laws when agents are bonused on average handle time and then rush calls, or when NPS drives survey begging that inflates scores while masking unresolved effort. Bain’s Net Promoter System introduced a simple, comparable advocacy measure; it is helpful when used as a learning tool, not as a blunt weapon.⁷ Healthy programs set a small set of outcome metrics that never become the day-to-day target. Teams then manage operations with behavioural and journey metrics that predict those outcomes. This preserves the integrity of outcome measures while giving operators safe, controllable levers.
How do you replace vanity metrics with decision-ready measures?
Leaders start by naming a North Star Metric that reflects value delivered to customers and captured by the business. A North Star Metric is a single measure that aggregates core value creation, such as weekly active accounts completing a key job to be done. The metric works when it maps cleanly to retention and revenue growth across cohorts.⁸ Teams then define a handful of input metrics that operators can move in days or weeks. These input metrics should be behavioural, unambiguous, and measured at the level where the work happens. Replace open rates with first-session task completion, replace aggregate CSAT with successful self-service deflection without recontact, and replace raw installs with activated users who accomplish a defined outcome. Cohort analysis and journey analytics keep the focus on who improved, when they improved, and which treatment caused the improvement.² The test-and-learn loop then compounds.
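As an illustration of the cohort lens, the sketch below builds a simple retention matrix from a hypothetical event log of accounts completing the key job to be done; the column names and dates are invented.

```python
import pandas as pd

# Hypothetical event log: one row per account per completed key task.
events = pd.DataFrame({
    "account_id": [1, 1, 2, 2, 3, 3, 3],
    "week": pd.to_datetime(["2024-01-01", "2024-01-08", "2024-01-01",
                            "2024-01-15", "2024-01-08", "2024-01-15",
                            "2024-01-22"]),
})

# Cohort = the week an account first completed the key job to be done.
first_week = events.groupby("account_id")["week"].min().rename("cohort")
events = events.join(first_week, on="account_id")
events["weeks_since"] = (events["week"] - events["cohort"]).dt.days // 7

# Retention matrix: share of each cohort still active N weeks later.
active = events.groupby(["cohort", "weeks_since"])["account_id"].nunique()
cohort_size = events.groupby("cohort")["account_id"].nunique()
retention = active.div(cohort_size, level="cohort").unstack(fill_value=0)
print(retention.round(2))
```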
How should CX leaders design experiments that retire vanity?
Executives should fund experimentation that distinguishes signal from noise. Randomised controlled experiments or quasi-experimental designs provide counterfactuals that correlation charts never deliver.³ Define the treatment, define the unit of randomisation, and pre-register the success criteria. Keep outcomes practical: measure conversion to a resolved outcome, repeat contact within seven days, or downgrade risk within thirty days. Power tests for detectability, not perfection. Use sequential testing or Bayesian methods to stop early when evidence is strong. Hold back a clean control where ethics and regulation allow. Summarise results in plain language with the mechanism, the effect size, and the decision. Leaders should not confuse statistical significance with business significance. A tiny uplift may be real and still be irrelevant. A practical uplift is big enough to matter at scale within cost, risk, and time constraints.³ This discipline keeps incentives aligned and vanity out.
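As a worked example of powering for detectability, the sketch below uses the standard two-proportion sample size approximation; the metric, baseline, and target rates are hypothetical.

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_control, p_treatment, alpha=0.05, power=0.80):
    """Approximate n per arm for a two-sided two-proportion z-test."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for the test
    z_beta = z.inv_cdf(power)            # critical value for the power target
    variance = p_control * (1 - p_control) + p_treatment * (1 - p_treatment)
    effect = p_treatment - p_control
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# Hypothetical: detect a lift in 7-day non-recontact rate from 62% to 65%.
print(sample_size_per_arm(0.62, 0.65))  # roughly 4,000 contacts per arm
```

A three-point lift that sounds small still needs about 4,000 contacts per arm to detect reliably, which is exactly the kind of constraint that separates detectable experiments from wishful dashboards.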
Which guardrails prevent metric gaming and survey theatre?
Managers prevent gaming by separating the scoreboard from the steering wheel. Keep outcome metrics like NPS, retention, and revenue on the scoreboard and do not use them for individual performance pay.⁷ Use steering metrics that tie to process quality, customer effort, and task success at the team level. Publish metric definitions, owners, and valid ranges in a living catalogue. Enforce data lineage so users can trace each number to its source table and business logic. Rotate audits to check for survey begging, cherry-picked cohorts, and quota traps. Teach Goodhart’s and Campbell’s Laws during onboarding to set norms that value learning and integrity.⁵ ⁶ Promote a culture that celebrates metric retirements as much as metric additions. A metric that no longer informs decisions should graduate with thanks. Dashboards should get lighter and smarter as programs mature, and the catalogue should show fewer, more predictive lines.
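A living catalogue can be as simple as one typed record per metric. The schema below is a hypothetical sketch of the fields such an entry might carry: role, owner, lineage, valid range, and the decision the metric changes.

```python
from dataclasses import dataclass

@dataclass
class MetricDefinition:
    """One entry in a living metric catalogue (hypothetical schema)."""
    name: str
    role: str             # "outcome" (scoreboard) or "input" (steering)
    owner: str
    definition: str
    source_table: str     # data lineage: where the number comes from
    valid_range: tuple    # audits flag values outside this band
    decision: str         # the decision this metric changes
    review_cadence: str

fcr = MetricDefinition(
    name="first_contact_resolution",
    role="input",
    owner="Head of Service Operations",
    definition="Share of contacts with no recontact within 7 days",
    source_table="warehouse.contacts_resolved_v3",
    valid_range=(0.55, 0.90),
    decision="Reprioritise agent coaching and knowledge-base fixes",
    review_cadence="monthly",
)
```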
How do you link CX metrics to financial value the board trusts?
Boards approve investments when teams connect operational change to economic outcomes with credible evidence. Start with a driver tree that links behaviours to unit economics. Define the bridge from first contact resolution, digital containment, and agent quality to retention, cross-sell, and cost to serve. Validate each link with experiments, cohort patterns, or natural experiments created by policy changes.³ Translate operational improvements into annualised financial impact with scenarios. Show the base case, upside, and downside with clear assumptions. Use customer lifetime value and customer acquisition cost to keep the maths consistent across channels and segments. Investopedia provides clear definitions that help standardise language across finance and CX.⁴ Publish a quarterly learning report that lists retired metrics, new insights, and changes to the driver tree. This builds trust in the mechanism, not just in a number that happens to point up and to the right.
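To show the bridge in miniature, here is a hedged sketch using one common discounted CLV formula and invented figures; a real driver tree would decompose margin, retention, and cost to serve by segment.

```python
def clv(avg_revenue_per_year, gross_margin, annual_retention, discount_rate=0.10):
    """Simple CLV: discounted annual margin over the expected customer lifetime."""
    margin = avg_revenue_per_year * gross_margin
    return margin * annual_retention / (1 + discount_rate - annual_retention)

# Hypothetical: $180k acquisition spend landing 1,500 new customers.
cac = 180_000 / 1_500  # = $120 per customer

# Scenario bridge: base case vs. a two-point retention swing from better FCR.
for label, retention in [("base", 0.80), ("upside", 0.82), ("downside", 0.78)]:
    value = clv(avg_revenue_per_year=600, gross_margin=0.55,
                annual_retention=retention)
    print(f"{label}: CLV ${value:,.0f}, CLV/CAC {value / cac:.1f}x")
```

The point of the scenario loop is not the specific numbers but the shape of the argument: a two-point retention change, validated by experiment, moves CLV/CAC by a ratio the board can sanity-check line by line.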
What should leaders do next to purge vanity and scale learning?
Leaders should run a four-week reset. In week one, catalogue every metric, owner, decision, and audience. In week two, tag metrics as outcome, input, or vanity based on the decision test. In week three, define a North Star Metric, three to five input metrics, and the first experiment to validate the driver tree. In week four, sunset the metrics that inform no decision, publish the glossary, and start the learning report. Keep governance light and relentless. Set a monthly review that asks the same two questions: which decision did this metric change, and which experiment did we stop, start, or scale? The simplest discipline wins. Teams that align on one North Star Metric and a few input metrics move faster and learn more.⁸ The executive team then sees cleaner decks, clearer trade-offs, and credible economics. The brand earns trust because the operation earns truth.
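The week-two tagging pass reduces to one question per metric. A minimal sketch with invented catalogue entries:

```python
# Week-two tagging pass over the catalogue (hypothetical entries).
catalogue = [
    {"name": "pageviews", "role": None, "decision": None},
    {"name": "nps", "role": "outcome", "decision": "reallocate journey investment"},
    {"name": "first_contact_resolution", "role": "input",
     "decision": "retune coaching and knowledge-base fixes"},
]

for metric in catalogue:
    # Decision test: a metric that changes no decision is vanity.
    tag = metric["role"] if metric["decision"] else "vanity"
    print(f'{metric["name"]}: {tag}')
```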
FAQ
What is a vanity metric in customer experience and how do I spot one?
A vanity metric rises without proving causal value creation. If a metric moves and you cannot name the lever you will pull next or the mechanism that explains the change, treat it as vanity.¹ ³
Why do Goodhart’s Law and Campbell’s Law matter for CX governance?
When a measure becomes a target it stops being a good measure, and heavy reliance on a measure invites gaming. Use outcome metrics for the scoreboard and behavioural input metrics for day-to-day steering.⁵ ⁶ ⁷
Which North Star Metric should Customer Science clients choose?
Pick a single measure that captures value delivered to customers and value captured by the business, such as weekly active accounts completing a core job to be done, validated by cohort retention.⁸ ²
How do I replace NPS as a performance target without losing customer insight?
Keep NPS as an outcome scoreboard metric and manage operations with steering metrics like first contact resolution, digital containment without recontact, and task completion. Audit for survey bias and gaming.⁷ ⁵
What is the fastest way to retire vanity metrics in a contact centre?
Run a four-week reset. Catalogue metrics, tag them by role, define a North Star Metric and three to five input metrics, then launch one experiment that validates the driver tree from behaviour to economics.⁸ ³
Which methods prove causation for CX initiatives without slowing delivery?
Use randomised controlled experiments where feasible, or quasi-experimental designs. Pre-define success criteria, power tests for detectability, and summarise the mechanism, the effect size, and the decision.³
Which definitions should our executive team standardise across finance and CX?
Standardise leading versus lagging indicators, customer lifetime value, and customer acquisition cost. Use common references to anchor language and avoid confusion between correlation and causation.⁴ ³
Sources
The Lean Startup — Eric Ries — 2011 — Crown Business. https://theleanstartup.com/
Lean Analytics: Use Data to Build a Better Startup Faster — Alistair Croll, Benjamin Yoskovitz — 2013 — O’Reilly Media. https://leananalyticsbook.com/
Correlation and Causation — Wikipedia Editors — 2025 — Wikipedia. https://en.wikipedia.org/wiki/Correlation_does_not_imply_causation
Leading Indicator vs. Lagging Indicator — Investopedia Editorial Team — 2024 — Investopedia. https://www.investopedia.com/terms/l/leadingindicator.asp
Goodhart’s Law — Wikipedia Editors — 2025 — Wikipedia. https://en.wikipedia.org/wiki/Goodhart%27s_law
Campbell’s Law — Wikipedia Editors — 2025 — Wikipedia. https://en.wikipedia.org/wiki/Campbell%27s_law
Net Promoter System: How It Works — Bain & Company — 2025 — Bain Insights. https://www.netpromotersystem.com/about/
North Star Framework: A Practical Guide — Amplitude — 2025 — Amplitude Guide. https://amplitude.com/north-star-framework