Case Study: Telco Lifts Retention with Touchpoint Orchestration (2025)

Why retention demanded a new operating model

A Tier-1 APAC telecom faced rising switching intent and stagnant loyalty programs. Leadership asked for a measurable reduction in churn without blanket discounts. The team reframed the work: coordinate every retention-relevant moment in real time, not push more campaigns. Industry context supported the bet. GSMA Intelligence reported that roughly one in seven mobile users switched providers over the prior 12 months across major markets, with value for money as the top churn driver.¹ Personalization at scale remains a proven growth lever when executed with discipline.² The question became how to operationalize precise, stateful interactions across care, billing, app, and retail at telco scale.

What “touchpoint orchestration” meant for this telco

The program defined touchpoint orchestration as a closed loop: sense a customer signal, decide the next best action, act across the right channel, and learn from the outcome. Forrester frames real-time interaction management as enterprise tech that delivers contextually relevant experiences across the life cycle, which fit the telco’s aim to move from batch marketing to responsive service.³ Engineering translated that ambition into state machines: each customer occupied a journey state such as Payment Pending, Usage Spike, Coverage Concern, or Save Offer Considered; guarded rules advanced states only when data supported the move. This state discipline mirrored workflow patterns documented in Step Functions, where transitions, retries, and fail states reduce ambiguity and risk.⁴
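The guarded state-machine idea above can be sketched in a few lines of Python. The state names come from the article; the transition table, guard signatures, and event shapes are illustrative assumptions, not the telco's actual implementation:

```python
from dataclasses import dataclass, field
from typing import Callable

# Journey states named in the case study; the guards themselves are hypothetical.
TRANSITIONS: dict[tuple[str, str], Callable[[dict], bool]] = {
    ("Active", "Payment Pending"): lambda e: e.get("type") == "payment.failed",
    ("Payment Pending", "Active"): lambda e: e.get("type") == "payment.succeeded",
    ("Active", "Coverage Concern"): lambda e: e.get("type") == "ticket.created",
    ("Coverage Concern", "Save Offer Considered"): lambda e: e.get("type") == "ticket.resolved",
}

@dataclass
class CustomerJourney:
    state: str = "Active"
    history: list[str] = field(default_factory=list)

    def advance(self, event: dict) -> bool:
        """Advance only when a guarded transition allows it; otherwise stay put."""
        for (src, dst), guard in TRANSITIONS.items():
            if src == self.state and guard(event):
                self.history.append(self.state)
                self.state = dst
                return True
        return False  # no legal transition: state discipline prevents drift
```

The point of the sketch is the `return False` branch: an event that matches no guarded transition leaves the customer's state untouched, which is what removes ambiguity from downstream actions.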

Where the team started and why

The team chose three churn-prone scenarios: involuntary churn from failed payments, voluntary churn following poor experience signals, and price-sensitive churn at contract end. Each scenario earned a thin-slice journey with explicit states, legal transitions, and a single North Star outcome. Payment journeys targeted “Paid within 7 days.” Coverage journeys targeted “Complaint resolved on first contact.” Contract-end journeys targeted “Renewed on value plan.” The stack relied on reusable events and versioned schemas so signals could be safely reused across journeys without brittle mappings, consistent with Adobe’s event governance guidance.⁵ Experimentation relied on no-code randomized splits to test offers and sequencing before scale.⁶
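A thin-slice journey of this kind can be captured declaratively. The scenario names, entry events, and North Star outcomes follow the article; the schema itself and the intermediate state names are assumptions for illustration:

```python
# Illustrative journey definitions; only entry events and North Star
# outcomes are taken from the case study, the rest is assumed.
JOURNEYS = {
    "involuntary_churn": {
        "entry_event": "payment.failed",
        "states": ["Payment Pending", "Reminder Sent", "Paid", "Escalated"],
        "north_star": "Paid within 7 days",
    },
    "voluntary_churn": {
        "entry_event": "ticket.created",
        "states": ["Coverage Concern", "Callback Scheduled", "Resolved"],
        "north_star": "Complaint resolved on first contact",
    },
    "price_sensitive_churn": {
        "entry_event": "plan.end_window",
        "states": ["Contract Ending", "Save Offer Considered", "Renewed"],
        "north_star": "Renewed on value plan",
    },
}
```

Keeping journeys as versioned data rather than scattered code is what lets the same events be reused across journeys without brittle mappings.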

Step 1 — Instrument the signals that matter

Teams cataloged a minimum set of high-fidelity events: payment.failed, payment.succeeded, speedtest.run, ticket.created, ticket.resolved, plan.end_window, and app.session. Each event carried identity keys, consent, and provenance. Schema validation happened at ingest; malformed payloads never entered flows. Events mapped to states, not directly to messages, which prevented channel-first thinking. This approach aligned with vendor guidance on event configuration and lifecycle controls.⁵

Step 2 — Design decisions for control and learning

Decisions combined deterministic rules and controlled experiments. Guarded rules enforced consent, frequency caps, and service priorities. Randomized splits created holdouts and A/Bs for message tone, channel order, and save-offer timing without custom code.⁶ A single change-approval routine required an owner to define the hypothesis, success metric, minimum sample, and rollback. When experiments involved service fixes, the “message vs. non-message” bias was checked: if a coverage issue was open, the next best action prioritized case escalation before any offer.
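The decision order described above (guards first, then randomized split with a holdout) can be sketched as follows; the split percentages, field names, and frequency-cap threshold are assumptions:

```python
import random

def next_best_action(customer: dict, rng: random.Random) -> str:
    """Guarded rules evaluated before any randomized split; thresholds illustrative."""
    if not customer.get("consent"):
        return "no_action"               # consent gate comes first
    if customer.get("open_coverage_ticket"):
        return "escalate_case"           # service fix before any offer
    if customer.get("messages_this_week", 0) >= 3:
        return "no_action"               # frequency cap
    r = rng.random()
    if r < 0.10:
        return "holdout"                 # assumed 10% holdout for measurement
    return "offer_variant_a" if r < 0.55 else "offer_variant_b"
```

Passing the random source in explicitly keeps splits reproducible in tests while production uses a fresh generator.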

Step 3 — Orchestrate actions across care, app, and retail

Actions spanned messaging and non-messaging steps. Messaging covered SMS, push, email, and in-app. Non-messaging covered case creation, network trouble-ticket prompts, entitlement updates, and retail appointment nudges. The program used conditional holds instead of fixed delays; for example, post-install nudges paused until first app login or expired after three days with a fallback branch. This prevented redundant prompts and kept interactions "in the moment," mirroring journey tools that treat holds, delays, and splits as first-class objects.⁶
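The post-install example above (hold until first app login, expire after three days to a fallback) can be sketched as a pure decision function; the action names and event shape are assumptions:

```python
from datetime import datetime, timedelta

def hold_outcome(entered_at: datetime, events: list[dict], now: datetime) -> str:
    """Conditional hold: release on first app login, expire to a fallback after 3 days."""
    for e in events:
        if e["type"] == "app.session" and entered_at <= e["at"] <= now:
            return "send_post_install_nudge"   # condition met: act in the moment
    if now - entered_at > timedelta(days=3):
        return "fallback_branch"               # expiry path, never a silent stall
    return "keep_waiting"
```

Keeping the hold logic free of side effects makes the three outcomes easy to unit-test against a clock.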

Step 4 — Build failure handling like an SRE

Dependencies fail. The team treated failure as a designed path: retries with backoff for external calls, explicit fail states that alerted operations, and safe exits that paused messaging during outages. This state-machine discipline, familiar from Step Functions, cut silent stalls and simplified incident response.⁴
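The retry-with-backoff and explicit-fail-state pattern can be sketched generically; the retry count, base delay, and return shape are assumptions:

```python
import time

def call_with_backoff(fn, retries: int = 3, base_delay: float = 0.5):
    """Retry an external call with exponential backoff; end in an explicit fail state."""
    for attempt in range(retries):
        try:
            return ("ok", fn())
        except Exception:
            if attempt == retries - 1:
                break
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1s, 2s, ...
    return ("fail_state", None)  # explicit fail state alerts operations, no silent stall
```

The explicit `("fail_state", None)` result is the point: the caller always gets a terminal outcome it can route on, which is what cuts silent stalls.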

Step 5 — Embed service signals that move retention

Care data became central to the save loop. First contact resolution (FCR) was measured and exposed to journeys; unresolved tickets suppressed sales offers and prioritized fixes. FCR remains a core satisfaction driver in contact centers and a practical leading indicator for churn risk, so elevating it inside orchestration aligned service and retention.⁷ The team also integrated network experience signals such as repeated speed-test failures to route proactive support rather than generic promotions.
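Exposing FCR to journeys implies computing it from ticket records; a minimal illustrative metric, assuming each ticket tracks a contact count and resolution flag:

```python
def first_contact_resolution(tickets: list[dict]) -> float:
    """Share of tickets resolved without a repeat contact; field names are assumed."""
    if not tickets:
        return 0.0
    resolved_first = sum(1 for t in tickets if t["contacts"] == 1 and t["resolved"])
    return resolved_first / len(tickets)
```

A per-customer version of the same calculation is what lets a journey suppress sales offers while an unresolved ticket is open.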

Measurement the executive team could trust

The scorecard paired leading indicators that steer with lagging indicators that prove value. Leading indicators included time-to-action from trigger, time-in-state, progression rate by branch, duplicate-prevention saves, and FCR. Lagging indicators included 30-day save rate post-intervention, 90-day retention, and complaint rate. The “sense-decide-act-learn” map assigned data engineering to event latency and schema pass rate; product and analytics to rule hit rate and experiment lift; channel owners to send success and delivery; service to FCR; and executives to retention and revenue. Evidence from industry showed why the mix mattered: firms that execute personalization well see outsized revenue contribution relative to peers, which anchored the business case.²

What changed in the customer experience

Customers saw fewer redundant messages and more timely help. Payment failures triggered a single clear path: a proactive SMS with secure payment link, an in-app reminder upon next session, and an agent assist trigger only after two failed self-serve attempts. Coverage complaints triggered a callback with a skilled agent, not a promotion. Contract-end journeys prioritized plan right-sizing with transparent comparisons, then an offer calibrated by tenure and complaint history. External news reinforced the logic: operators applying AI and real-time triage in service report meaningful churn prevention at scale, underlining the value of routing customers to the right help fast.⁸

Results the COO cared about

Across a 12-week rollout, the program delivered statistically robust uplifts in early cohorts and held them through scale-up. Payment journeys shortened time-to-pay; coverage journeys increased first-contact resolution; renewal journeys raised value-plan adoption among at-risk segments. Lagging indicators moved in the expected direction alongside declining complaint rates. The CFO recognized savings from reduced repeat contacts and lower goodwill credits. The board saw a coherent mechanism rather than a one-off campaign.

*(Quantified results are withheld; methodology and confidence intervals were reviewed in the PMO’s experimentation register and align to platform best practice for randomized splits.)*⁶

What made the operating model stick

Three habits kept the gains: a weekly design authority, a monthly governance review, and a single change log for events, rules, and offers. The design authority approved journey edits against a checklist: event versioning present, consent enforced at entry and send, re-entry windows set, dedupe enabled for shared emails, branch counts within product limits, and fail states instrumented. This checklist reflected documented vendor guardrails rather than bespoke rules, which made it durable.⁵ ⁶

Lessons other telcos can reuse next quarter

Start thin, finish strong. Pick one churn driver per quarter and ship a stateful loop before adding breadth.
Measure what moves. Pair FCR and time-in-state with retention; do not mistake sends for progress.⁷
Prefer fixes to offers. Resolve network or billing friction before a save offer; customers reward the order.
Design for failure. Retries, catches, and safe exits are as important as messages.⁴
Institutionalize learning. Run a standing backlog of experiments and retire branches that fail to move progression.⁶


FAQ

What is touchpoint orchestration in telecom, in practical terms?
It is the continuous coordination of retention-relevant moments by listening to real-time signals, updating a customer’s state, evaluating guarded rules, and triggering the next best action across service and marketing channels.³ ⁴ ⁵

Which signals should a telco connect first to reduce churn?
Start with billing outcomes (payment.failed/payment.succeeded), service events (ticket.created/ticket.resolved), contract window signals (plan.end_window), and app engagement (app.session). Ensure each event is versioned and validated per vendor guidance.⁵

How do we test save-offer timing without heavy engineering?
Use journey randomized splits to compare channels, tones, and wait logic with holdouts. Modern platforms ship this as a no-code step, making experimentation routine.⁶

Why put first contact resolution inside the orchestration loop?
FCR predicts satisfaction and repeat volume. Exposing FCR to decisions prevents tone-deaf offers during unresolved issues and aligns service with retention goals.⁷

What proof exists that real-time, personalized interactions drive commercial results?
Research shows companies that excel at personalization see materially higher revenue contribution than peers. Telco-specific commentary and news show operators using AI-assisted routing and context to prevent churn at scale.² ⁸

How do we avoid breaking journeys at scale?
Adopt state machines with clear transitions, retries, and fail states; version events; enforce consent at entry and send; set re-entry windows and dedupe; and keep branch counts within documented platform limits.⁴ ⁵ ⁶


Sources

  1. The mobile churn challenge: where loyalty is lowest and recommendations for operators — GSMA Intelligence, 2025, GSMA. https://www.gsmaintelligence.com/research/the-mobile-churn-challenge-where-loyalty-is-lowest-and-four-recommendations-for-operators

  2. Unlocking the next frontier of personalized marketing — McKinsey & Company, 2025, McKinsey Quarterly. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/unlocking-the-next-frontier-of-personalized-marketing

  3. Invisible experiences: Anticipate customer needs with Real-Time Interaction Management — Forrester Research, 2024, Forrester Blog. https://www.forrester.com/blogs/invisible-experiences-anticipate-customer-needs-with-real-time-interaction-management/

  4. Learn about state machines in Step Functions — AWS, 2024, AWS Documentation. https://docs.aws.amazon.com/step-functions/latest/dg/concepts-statemachines.html

  5. About events | Adobe Journey Orchestration — Adobe, 2025, Adobe Experience League. https://experienceleague.adobe.com/en/docs/journeys/using/events-journeys/about-events/about-events

  6. Journeys Step Types: Randomized splits — Twilio Segment, 2024, Twilio Docs. https://www.twilio.com/docs/segment/engage/journeys/v1/step-types

  7. The link between customer satisfaction and first contact resolution — ICMI, 2018, ICMI Resource. https://www.icmi.com/resources/2018/the-link-between-customer-satisfaction-and-first-contact-resolution

  8. Verizon uses GenAI to improve customer loyalty — Reuters, 2024, Technology. https://www.reuters.com/technology/artificial-intelligence/verizon-uses-genai-improve-customer-loyalty-2024-06-18/
