
The CS Input Model – Turning Insight Into Motion (part 1)

  • glenclodore
  • May 25
  • 6 min read

Updated: Jun 2



Author's Note: This is part of a larger effort to build a Customer Success AI agent, but before we can automate, we must define. This article is part of that process: mapping what CS actually does to create value. If you missed the first part, you can read it here: Article 1 – Stop Managing Stages, Start Managing Motion.


Rethinking Inputs in CS

Customer Success work is often described in outcomes: adoption improved, risk mitigated, customer retained. But what actually drives those outcomes? What does CS do to make them happen?


That’s where the CS Input Model comes in. It reframes CS activities as inputs: deliberate motions that can be pulled from a library and sequenced based on context. It asks a core question: "What should we do next, given what we know?" And it doesn’t just ask; it answers, with a curated set of actions that move the account forward.


Not every input is needed every time. The Input Model isn’t a checklist. It’s a library of motion, and what matters is choosing and sequencing the right actions, based on where the customer is and where they need to go.


The Role of Intelligence in Sequencing

CS isn’t just about activity; it’s about direction. Given limited time and headcount, the goal is not to do everything, but to do the right thing, at the right time, in the right way.

This is where intelligence, human or machine, becomes critical. The Input Model assumes there is a reasoned system behind the selection of each action. But this system doesn't sit outside the framework; it lives within it.

Each component contributes to it:

  • The Enhance x Increase Matrix defines the direction of progress: where the customer is and where they need to go.

  • The Input Library provides the building blocks, the full range of CS actions that might help create motion. It sits within the Input Model.

  • The Input Model itself acts as a strategist: choosing a subset of those actions and sequencing them in a way that is both effective and efficient, given the customer’s state.


The sequence matters. These inputs aren’t standalone gestures; they build on one another, and one unlocks the next. A coaching session might only work after stakeholder alignment. A usage review is more powerful after impact metrics have been defined. What looks like execution is often choreography. This strategic orchestration is what makes the Input Model more than a list: it's a dynamic engine that adapts based on context. Whether the system is powered by a human CSM or by intelligent automation, the outcome is the same: a curated, timely, and relevant motion plan that accelerates progress from one customer state to the next.


The Input Library

The Input Library is the full set of possible actions a CSM might take to influence progress. But it’s more than a list; it’s a tagged system of motion, built for orchestration and designed for adaptability.


 You can explore the full Input Library I put together here: View the Library.


Each action is mapped across five key dimensions. These aren’t just labels; they serve a functional purpose: to prioritize, sequence, and (eventually) automate what CS does to move accounts forward.


🏷️ Tag 1: Push vs Pull

This dimension captures the direction of motion:

  • Pull: Actions that extract information, insight, or alignment from the customer. These create clarity and context (e.g., gathering goals, analyzing usage, asking for feedback).

  • Push: Actions that apply pressure or forward motion, enabling, informing, or influencing the customer to move (e.g., training, proposing paths, aligning stakeholders).


To ensure consistency and support automation decisions, we use a simple rule-of-thumb when tagging:

  • Pull: Requires input or context from the customer before progress can be made

    • Example: interview stakeholders, collect feedback

  • Push: Proactively delivered by the CSM to influence, enable, or apply momentum

    • Example: draft success plans, deliver training


🏷️ Tag 2: People vs Data

This tag defines the interaction mode:

  • People: The motion involves communication, facilitation, or persuasion, often interpersonal and high-context.

  • Data: The motion involves analysis, reporting, measurement, or digital signals, often suited for automation or scalable systems.


 It helps us evaluate the nature of the work and how suitable it is for human delivery vs automation or AI.


🏷️ Tag 3: Effort Level

How much energy or time does the activity require?

  • Low: Can be handled quickly or asynchronously with minimal coordination.

  • Medium: Requires preparation, thought, or moderate coordination with others.

  • High: Involves multiple stakeholders, systems, or significant preparation, often reserved for strategic accounts or high-value milestones.


 Effort tagging helps prioritize work and model where automation could reduce load.


🏷️ Tag 4: Motion Phase

Where in the flow of activity the motion occurs:

  • Pre - Preparation: work done before customer engagement (e.g., research, analysis, internal syncs).

  • During - Live interaction: the motion happens in real-time with the customer (e.g., a workshop, demo, review).

  • Post - Reinforcement: follow-ups or reflection that lock in value (e.g., recaps, action tracking, adjustments).


 Motion Phases help sequence inputs effectively. A well-executed review (During) often depends on thorough preparation (Pre) and drives value only if followed up correctly (Post).


🏷️ Tag 5: Automation & AI Potential

To help prioritize system support, each input is also tagged for its automation potential:

  • Automatable: Can this be completed end-to-end by a traditional system (rules-based, pre-configured logic)?

  • AI-Drivable: Can AI meaningfully contribute to or lead this action by interpreting data, drafting suggestions, generating content, or proposing next steps?

  • AI-Orchestratable: Can AI help determine when and how to trigger this input, as part of a dynamic sequence?


 These distinctions matter. Not all automatable actions require AI. And many non-automatable tasks can still be guided or initiated by AI.
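To make the five dimensions concrete, here is a minimal sketch of how a tagged Input Library entry could be modeled. All names and the example action are illustrative assumptions, not the article's actual schema:

```python
from dataclasses import dataclass
from enum import Enum

class Direction(Enum):   # Tag 1: Push vs Pull
    PUSH = "push"
    PULL = "pull"

class Mode(Enum):        # Tag 2: People vs Data
    PEOPLE = "people"
    DATA = "data"

class Effort(Enum):      # Tag 3: Effort Level
    LOW = 1
    MEDIUM = 2
    HIGH = 3

class Phase(Enum):       # Tag 4: Motion Phase
    PRE = "pre"
    DURING = "during"
    POST = "post"

@dataclass
class InputAction:
    """One entry in the Input Library, tagged across the five dimensions."""
    name: str
    direction: Direction
    mode: Mode
    effort: Effort
    phase: Phase
    automatable: bool = False        # Tag 5: rules-based, end-to-end
    ai_drivable: bool = False        # Tag 5: AI can lead or contribute
    ai_orchestratable: bool = True   # Tag 5: AI can decide when/how to trigger

# Example: a Pull action that gathers context before a live engagement
interview = InputAction(
    name="Interview stakeholders",
    direction=Direction.PULL,
    mode=Mode.PEOPLE,
    effort=Effort.MEDIUM,
    phase=Phase.PRE,
    ai_drivable=True,  # e.g., AI drafts questions and summarizes answers
)
```

Structuring entries this way is what makes the later steps (filtering, sequencing, aggregating) mechanical rather than ad hoc.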


Why This Tagging System Matters

This isn’t classification for its own sake. These tags make the Input Model operational:

  • Orchestration: Inputs can be sequenced to fit a customer’s context and evolve as new signals come in.

  • Prioritization: CSMs (or systems) can focus on high-leverage inputs for a given moment.

  • System Enablement: Automatable inputs become candidates for workflows. AI-drivable ones shape copilots and assistants. AI-orchestratable inputs fuel dynamic, intelligent engagement models.

Orchestration is not a one-time act of mapping a plan. It’s a continuous, strategic process that re-engineers the input sequence based on new signals, friction, or emerging opportunity. While not always instant, the orchestration layer is designed to adapt as fast as context and data allow.
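The selection-and-sequencing step can be sketched in a few lines: pick the subset of library actions relevant to the customer's current state, then order them Pre → During → Post so each motion sets up the next. The records and the selection rule here are illustrative assumptions, not the article's actual logic:

```python
# Order motions by phase first, then run lighter actions earlier within a phase.
PHASE_ORDER = {"pre": 0, "during": 1, "post": 2}

def build_motion_plan(library, relevant_tags, max_effort=3):
    """Select inputs matching the customer context and sequence them."""
    selected = [
        a for a in library
        if a["tag"] in relevant_tags and a["effort"] <= max_effort
    ]
    return sorted(selected, key=lambda a: (PHASE_ORDER[a["phase"]], a["effort"]))

library = [
    {"name": "Analyze usage data",       "tag": "adoption", "phase": "pre",    "effort": 1},
    {"name": "Run adoption workshop",    "tag": "adoption", "phase": "during", "effort": 3},
    {"name": "Send recap and actions",   "tag": "adoption", "phase": "post",   "effort": 1},
    {"name": "Executive business review","tag": "renewal",  "phase": "during", "effort": 3},
]

# For an adoption-focused moment, the plan flows Pre -> During -> Post
plan = build_motion_plan(library, relevant_tags={"adoption"})
```

Re-running `build_motion_plan` whenever new signals arrive is the "continuous re-engineering" described above: the plan is recomputed, not carved in stone.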

How AI Shapes Motion: Early Findings from the Input Library

From our full library of 222 inventoried input actions, early tagging reveals powerful implications for how AI can participate in CS execution (fig. 2):

  • 59% of all activities are AI-drivable: meaning AI can suggest, trigger, or complete them based on context.

  • 99% are AI-orchestratable: nearly all inputs can be sequenced, adapted, or prioritized by an intelligent system.

  • 41% of all CS actions fall in the Pre phase, yet 73% of those are AI-drivable, showing a major opportunity to reduce prep load.
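Figures like these fall out of simple aggregation over the tagged inventory. A minimal sketch of that computation, with made-up sample records (the resulting percentages are illustrative, not the article's actual data):

```python
def share(actions, predicate):
    """Percentage of actions satisfying a predicate."""
    if not actions:
        return 0.0
    return 100 * sum(1 for a in actions if predicate(a)) / len(actions)

# Toy inventory; the real library has 222 tagged actions.
inventory = [
    {"phase": "pre",    "ai_drivable": True,  "automatable": True},
    {"phase": "pre",    "ai_drivable": True,  "automatable": False},
    {"phase": "during", "ai_drivable": False, "automatable": False},
    {"phase": "post",   "ai_drivable": True,  "automatable": True},
]

pre = [a for a in inventory if a["phase"] == "pre"]
print(f"AI-drivable overall: {share(inventory, lambda a: a['ai_drivable']):.0f}%")
print(f"AI-drivable in Pre phase: {share(pre, lambda a: a['ai_drivable']):.0f}%")
```

The same `share` helper, pointed at different tag combinations, yields every statistic quoted in this section.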


These figures represent orchestration potential based on the nature of the actions themselves. Actual execution depends on factors like data accessibility, system integration, and available RAG content. In environments where these are mature, the opportunity is significant. Where they are not, orchestration begins as a co-pilot, not an autopilot.


Drilling deeper:

  • Among Push activities, 61% are AI-drivable, and 41% are automatable.

  • Among Pull activities, 58% are AI-drivable, but only 26% are automatable.

  • AI is particularly impactful in “human-heavy” actions: 40% of People-focused activities are not automatable but are AI-drivable, the kind of support only intelligence, not logic, can provide.

  • Looking at effort, AI has leverage where it matters most: 62% of medium-to-high effort activities are AI-drivable.


This tells us something critical: automation handles the obvious; AI handles the complex. In Customer Success, where effort is high and context is variable, AI’s real value isn’t just execution; it’s orchestration.


This strategic tagging lets us design intelligent systems that collaborate with humans, not just replacing effort, but enhancing what CSMs can deliver. It’s not about automating CS. It’s about augmenting motion, sequencing with intelligence, and freeing up time for humans to lead where it matters.


From Framework to Field

The Input Model transforms how we think about CS work: not as a collection of tasks, but as a strategic system of motion. With structure comes clarity and the potential for shared orchestration between humans and machines.


In the second half of this article, we’ll expand on how the Input Model adapts to different CS delivery models and drives motion through the CS Momentum Loop before bringing it all to life in a realistic scenario.


Continue to Part 2: The Input Model in Action



Something sparked? Let’s exchange notes.

© 2025 by Thoughts & Losses - Written by Glen Clodore
