Case Study: bank lifts cross-sell with real-time decisioning (2025)

What is real-time decisioning and why does it matter in banking?

Executives use real-time decisioning to select the next best action for an individual during a live interaction. Real-time decisioning blends identity resolution, streaming data, analytical models, and business rules to decide what to show, offer, or say in milliseconds. Analysts describe real-time interaction management as the discipline that operationalizes this capability across channels to drive relevance and outcomes.¹ Leading banks that personalize at scale report measurable revenue gains and lower churn because customers receive timely, context-aware propositions rather than generic offers.² This case study shows how one bank redesigned its cross-sell process with a decisioning fabric, then used identity and data foundations to activate offers in contact centers, mobile, and web. The bank focused on speed, safety, and explainability so that frontline teams trusted the system and customers experienced value at the precise moment of need.¹ ²

Where did the bank start and what problem did leaders frame?

Leaders framed the problem as low cross-sell conversion and inconsistent offer governance. Channel teams ran isolated campaigns, identity was fragmented across core, cards, and digital, and models took weeks to deploy. The bank defined a simple objective: present one relevant, compliant offer per conversation while suppressing offers that conflicted with customer intent or regulatory constraints. The executive sponsor set three constraints that shaped the design: decisions must complete in under 150 milliseconds at the edge; decisions must be explainable to customers and auditors; and decisions must align with product design and distribution obligations to ensure fair treatment.³ ⁴ This framing turned a generic “sell more” ambition into a system-level mandate that integrated identity hygiene, decision quality, and frontline experience. The bank made the call to treat decisioning as a shared service rather than a marketing campaign tool, which created a single source of truth for offers.¹

How did identity and data foundations unlock speed and trust?

Teams built an identity graph to reconcile customer, account, and device identifiers into a persistent profile. The profile stored stable attributes and referenced time-varying events through a streaming layer. The bank implemented a governed feature store so data scientists and engineers could publish, reuse, and version features across batch and real-time inference. Feature stores reduce duplication, accelerate model deployment, and create parity between training and serving, which improves model reliability in production.⁵ The team paired the feature store with strict data minimization and consent management aligned to national privacy principles.⁶ By resolving identity and standardizing features first, the bank avoided the common trap of pushing incomplete models into production. The identity graph improved eligibility checks. The feature store simplified model rollouts. The consent layer ensured offers respected preferences. Together, these foundations turned raw signals into stable decision inputs that frontline staff could trust.⁵ ⁶
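The training-serving parity the feature store provides can be sketched in a few lines. This is an illustrative toy, not the bank's implementation: the `FeatureStore` class, the feature names, and the profile fields are all hypothetical, and a production system would back this with an online store rather than in-process lambdas. The key idea it demonstrates is that one versioned feature definition serves both batch training and real-time inference.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List

@dataclass(frozen=True)
class FeatureDef:
    name: str
    version: int
    compute: Callable[[Dict[str, Any]], Any]  # one definition for batch and online

class FeatureStore:
    """Illustrative registry: the same versioned feature logic is used for
    training pipelines and real-time serving, preventing training/serving skew."""

    def __init__(self) -> None:
        self._defs: Dict[str, FeatureDef] = {}

    def register(self, fd: FeatureDef) -> None:
        # Versioned key lets old models keep serving against pinned features.
        self._defs[f"{fd.name}:v{fd.version}"] = fd

    def get_features(self, keys: List[str], profile: Dict[str, Any]) -> Dict[str, Any]:
        # Identical retrieval path offline and online.
        return {self._defs[k].name: self._defs[k].compute(profile) for k in keys}

# Hypothetical features computed over a resolved identity-graph profile.
store = FeatureStore()
store.register(FeatureDef("txn_count_30d", 1, lambda p: len(p.get("txns_30d", []))))
store.register(FeatureDef("has_mortgage", 1, lambda p: "mortgage" in p.get("products", [])))

profile = {"txns_30d": [120.0, 45.5, 9.9], "products": ["savings"]}
feats = store.get_features(["txn_count_30d:v1", "has_mortgage:v1"], profile)
print(feats)  # {'txn_count_30d': 3, 'has_mortgage': False}
```

Because models at training time and the decision service at serving time both call `get_features` with the same versioned keys, a feature's definition cannot silently drift between the two paths.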

What mechanism turned models and rules into live next best actions?

Architects introduced a decision service that evaluated eligibility, propensity, and value under business and risk constraints. The decision service pulled real-time features, scored models, and applied policies, then returned a single ranked action with reasons. Analysts define this pattern as real-time interaction management because it connects insights to orchestration and delivery, not just analytics.¹ The bank used standard APIs to integrate the service into the IVR, the agent desktop, the mobile app, and the public site. A champion-challenger testing method rotated algorithms and rules behind stable experiences, which accelerated learning without disrupting customers. Leaders insisted on “explain like a human” rationales so agents could articulate why the system suggested a product and why another product was suppressed under policy. This mechanism created a closed loop. Interactions generated outcomes and feedback. The decision service learned from results and updated strategies with light governance.¹
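The decision-service pattern described above — filter by eligibility, score propensity and value, apply policies, return one ranked action with reasons — can be sketched as follows. This is a simplified illustration under assumed names (`Offer`, `decide`, the hardship policy); the source does not describe the bank's actual code, and real propensity scores would come from served models rather than lambdas.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Optional

@dataclass
class Offer:
    id: str
    value: float  # expected margin if accepted
    eligible: Callable[[Dict], bool]
    propensity: Callable[[Dict], float]  # stand-in for a real-time model score

@dataclass
class Decision:
    offer_id: Optional[str]
    reasons: List[str]  # "explain like a human" rationale trail

# A policy returns a suppression reason, or None if the offer may proceed.
Policy = Callable[[Dict, Offer], Optional[str]]

def decide(customer: Dict, offers: List[Offer], policies: List[Policy]) -> Decision:
    reasons: List[str] = []
    candidates = []
    for o in offers:
        if not o.eligible(customer):
            reasons.append(f"{o.id}: suppressed, not eligible")
            continue
        block = next((msg for p in policies if (msg := p(customer, o))), None)
        if block:
            reasons.append(f"{o.id}: suppressed, {block}")
            continue
        candidates.append((o.propensity(customer) * o.value, o))
    if not candidates:
        return Decision(None, reasons + ["no compliant action; default to service"])
    score, best = max(candidates, key=lambda t: t[0])
    reasons.append(f"{best.id}: selected, expected value {score:.2f}")
    return Decision(best.id, reasons)

# Hypothetical policy: never pitch credit products to customers in hardship.
policies: List[Policy] = [
    lambda c, o: "customer in hardship program"
    if c.get("hardship") and o.id.startswith("credit") else None
]
offers = [
    Offer("credit_card", 120.0, lambda c: c["age"] >= 18, lambda c: 0.2),
    Offer("savings_boost", 40.0, lambda c: True, lambda c: 0.6),
]
decision = decide({"age": 34, "hardship": True}, offers, policies)
print(decision.offer_id)  # savings_boost
```

The `reasons` list is what makes the pattern auditable: every suppressed offer carries a plain-language cause an agent can read back to the customer, and the selected action carries its score.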

Which controls kept cross-sell safe, fair, and compliant?

Risk, legal, and product teams codified obligations into machine-readable policies. The team mapped product design and distribution obligations to eligibility and suitability checks so that ineligible customers never saw restricted products.³ The bank logged every decision with inputs, policy versions, model versions, and rationales in an evidentiary store. The store supported audit queries that reconstructed a specific decision at a specific time. Privacy controls enforced consent, purpose limitation, and access rights under national privacy law.⁶ The contact center received agent-facing guidance that explained offer suitability in plain language. Leaders anchored incentives to quality metrics, not only sales. Regulators encourage firms to prove fair treatment with traceable controls and outcomes.³ These controls protected customers, reduced regulatory risk, and raised internal confidence. Controls also shortened review cycles because auditors could inspect facts rather than debate assumptions. The controls made growth and governance reinforce each other.³ ⁶
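The evidentiary store's core property — that a specific decision at a specific time can be reconstructed from logged inputs, policy versions, model versions, and rationales — can be illustrated with a minimal append-only log. The hash chaining shown here is one possible tamper-evidence technique, an assumption of this sketch rather than something the case study specifies; `log_decision` and its field names are likewise hypothetical.

```python
import datetime
import hashlib
import json
from typing import Dict, List

def log_decision(store: List[Dict], *, customer_id: str, inputs: Dict,
                 policy_version: str, model_version: str,
                 action: str, rationale: str) -> Dict:
    """Append one immutable, hash-chained decision record.

    Chaining each record to the previous one makes after-the-fact edits
    detectable, so auditors can trust reconstructed decisions."""
    prev_hash = store[-1]["hash"] if store else "genesis"
    record = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "customer_id": customer_id,
        "inputs": inputs,
        "policy_version": policy_version,
        "model_version": model_version,
        "action": action,
        "rationale": rationale,
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    store.append(record)
    return record

audit_log: List[Dict] = []
log_decision(audit_log, customer_id="c-1001",
             inputs={"txn_count_30d": 3}, policy_version="pol-2025.03",
             model_version="xsell-v7", action="savings_boost",
             rationale="eligible; credit suppressed under hardship policy")
log_decision(audit_log, customer_id="c-1002",
             inputs={"txn_count_30d": 11}, policy_version="pol-2025.03",
             model_version="xsell-v7", action="none",
             rationale="no compliant action; default to service")
```

An audit query then becomes a filter over the log by customer and timestamp, which is why reviewers can inspect facts rather than debate assumptions.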

How did the operating model scale decisions across channels?

Executives formed a cross-functional decision council that owned the catalog of actions, eligibility rules, and success metrics. The council balanced sales, service, and risk objectives while setting a standard for experiment design. MIT research shows that firms that pair technical platforms with product operating models realize value faster because teams align on decisions, not artifacts.⁷ The bank adopted product team rituals. Each quarter, teams refreshed a prioritized backlog of actions, features, and policies. Each sprint, the teams shipped one new action and one control enhancement per channel. Enablers included a shared taxonomy, reusable components, and continuous delivery of models behind APIs. The decision council managed a scorecard that combined conversion, consented reach, fairness checks, and agent trust. Leaders published the scorecard to make performance visible. The operating model treated decisioning like a product that serves all channels, which stopped channel silos from drifting apart.⁷


How did leaders measure lift, quality, and risk with evidence?

The bank treated measurement as a first-class feature. The team defined North Star metrics that balanced commercial and customer outcomes. Industry research shows that personalization at scale improves revenue and customer satisfaction when firms measure both acceptance and experience quality rather than only click-through.² The bank implemented randomized control tests at the decision level to isolate the causal impact of specific actions. The evidentiary store recorded exposures, decisions, and outcomes with identifiers that supported protected-attribute analysis and adverse impact monitoring. Regulators and analysts encourage demonstrable fairness with testable hypotheses and documented outcomes.³ The team reported lagging results and leading indicators together. Campaign response showed immediate movement. Product uptake and retention confirmed durable impact. Agent trust surveys tracked explainability and usability. This combination closed the loop between data, model, policy, and experience so that the system learned responsibly and improved predictably.¹ ³
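Decision-level randomized tests require an assignment that is random across customers but stable for any one customer, so the same person never flips between arms mid-experiment. A common way to achieve this is deterministic hash bucketing; the sketch below is illustrative (the function name and experiment key are assumptions, not the bank's implementation).

```python
import hashlib
from typing import List

def assignment(customer_id: str, experiment: str,
               treatment_share: float = 0.5) -> str:
    """Deterministic hash bucketing: the same (experiment, customer) pair
    always maps to the same arm, yet buckets are uniform across customers."""
    digest = hashlib.sha256(f"{experiment}:{customer_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform in [0, 1]
    return "treatment" if bucket < treatment_share else "control"

# Across many customers the split approaches the configured share,
# while each individual customer's arm is fixed for the experiment.
arms: List[str] = [assignment(f"cust-{i}", "cross_sell_test") for i in range(1000)]
treatment_share_observed = arms.count("treatment") / len(arms)
```

Keying the hash on the experiment name means a customer's arm in one test is independent of their arm in another, which keeps concurrent experiments from confounding each other.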

What results did the bank see after go-live and what lessons stick?

Teams saw faster time to value because identity and features were production-ready. Leaders saw higher offer acceptance where eligibility and intent aligned, and fewer complaints where suppression rules removed unsuitable products. Industry studies report similar patterns when organizations implement real-time interaction management with consent-aware identity and a governed feature store.¹ ² ⁵ Frontline staff reported higher confidence when rationales matched the customer’s context, which strengthened service quality in the contact center. Customers experienced fewer irrelevant prompts in the mobile app, which reduced fatigue. The evidentiary store simplified internal review and gave risk teams line-of-sight to every decision. The central lesson is simple. Treat decisioning as a product, invest in identity and feature governance, codify obligations as policies, and measure outcomes with causal tests and fairness checks. This creates a repeatable path to cross-sell lift without trading trust for speed.¹ ³ ⁵

What should executives do next to replicate this outcome?

Executives should begin with a crisp use case and a defensible constraint. Start with one action that matters to customers and the business, then design the identity, feature, and decision layers to serve that action with speed and clarity. Build a decision service that can score models, apply policies, and explain choices. Integrate through APIs to all live channels. Establish a decision council that owns the catalog, the rules, and the scorecard. Use randomized tests and fairness monitoring to create evidence that stands up to scrutiny. Research and regulatory guidance provide clear signals. Real-time interaction management works best when powered by strong identity, governed features, and transparent policy.¹ ² ³ ⁶ Banks that act now will convert intent into outcomes while strengthening trust. Banks that delay will continue to spend on campaigns that miss the moment and erode goodwill with every irrelevant prompt.¹ ²


FAQ

What is real-time decisioning in banking?
Real-time decisioning is the capability to choose a next best action during a live interaction by combining identity resolution, streaming features, predictive models, and business policies, usually in milliseconds.¹

Why should banks prioritize identity and a feature store before modeling?
Identity graphs and governed feature stores create consistent inputs for models, reduce duplication, and align training with serving, which improves reliability and speed to production.⁵

How does real-time interaction management differ from personalization engines?
Real-time interaction management connects analytics to orchestration and delivery across channels, returning a ranked action with reasons, not just segments or recommendations.¹

Which regulations shape compliant cross-sell in Australia?
The Privacy Act and the Australian Privacy Principles set consent and purpose rules, while product design and distribution obligations require firms to target products to appropriate customers and to demonstrate fair treatment.³ ⁶

How do leaders measure success without creating risk?
Leaders use randomized control tests, balanced scorecards that include conversion and fairness checks, and an evidentiary store that logs decisions, inputs, and rationales for audit.¹ ³

Who should own the decision catalog and policies?
A cross-functional decision council should own the action catalog, eligibility rules, and success metrics so that sales, service, and risk objectives stay aligned.⁷

Which first use case fits this approach?
A focused cross-sell or save offer with clear eligibility, strong customer value, and measurable outcomes is ideal for establishing the pattern before scaling to other journeys.²


Sources

  1. Forrester Research. “Now Tech: Real-Time Interaction Management, Q2 2020.” 2020. Forrester. https://www.forrester.com/report/Now+Tech+RealTime+Interaction+Management+Q2+2020/-/E-RES161860

  2. McKinsey & Company. “The value of getting personalization right—or wrong—is multiplying.” 2021. McKinsey. https://www.mckinsey.com/capabilities/growth-marketing-and-sales/our-insights/the-value-of-getting-personalization-right-or-wrong-is-multiplying

  3. Australian Securities and Investments Commission (ASIC). “Design and distribution obligations.” 2021. ASIC. https://asic.gov.au/regulatory-resources/product-distribution/design-and-distribution-obligations/

  4. Australian Competition and Consumer Commission (ACCC). “Consumer Data Right.” 2024. ACCC. https://www.accc.gov.au/focus-areas/consumer-data-right-cdr

  5. Delgado, W. et al. “Feast: Feature Store for Machine Learning.” 2019. Google & Gojek. https://feast.dev/

  6. Office of the Australian Information Commissioner (OAIC). “Australian Privacy Principles.” 2022. OAIC. https://www.oaic.gov.au/privacy/australian-privacy-principles

  7. Brynjolfsson, E., Hitt, L., Kim, H. “Building the AI-Powered Organization.” 2019. MIT Sloan Management Review. https://sloanreview.mit.edu/article/building-the-ai-powered-organization/
