There is a particular kind of meeting that happens in Q4 at a lot of enterprise companies. Someone from finance pulls a list of SaaS subscriptions and slides it across the table. The number at the bottom is larger than anyone expected. A few line items nobody recognizes. Several tools that do, on closer inspection, what two or three other tools already do. At least one platform that was bought during a growth phase that has since ended, championed by a VP who left eight months ago.
This is not a failure of procurement. It is the entirely predictable outcome of how enterprise revenue stacks get built — incrementally, reactively, across multiple budget cycles, by multiple stakeholders with different priorities and different vendors pitching them. The result is a stack that reflects the org chart of three years ago and the priorities of five different leadership teams, not the actual go-to-market motion the company is trying to run today.
I have done stack rationalization work across organizations of different sizes and industries. The patterns are remarkably consistent. And the path forward is clearer than most teams expect — once you are willing to be honest about what you are actually looking at.
Why stacks bloat in the first place
Understanding the problem requires understanding how it was created. Revenue technology stacks do not bloat because people make bad decisions. They bloat because each individual decision was made in isolation, with incomplete visibility into what else existed, what it cost, and what it was actually being used for.
A VP of Marketing buys an intent data platform because a competitor is using it and the board is asking about pipeline generation. Six months later, the VP of Sales buys a different intent data platform because the SDR team wanted something with better direct dial coverage. Both platforms now live in the stack. Neither team knows the other has one. The data from each flows into Salesforce through different field mappings. Nobody has ever compared the overlap in the underlying data sets.
This scenario is not an edge case. It is Tuesday.
Add to this the natural lifecycle dynamics of SaaS — annual contracts that auto-renew, point solutions that get purchased for a single use case and then expand in scope without a corresponding expansion in governance, tools that get embedded in workflows in ways that make them politically difficult to remove even when their functional case has weakened — and you have the architecture of a bloated stack.
"The stack reflects the org chart of three years ago and the priorities of five different leadership teams — not the go-to-market motion you are trying to run today."
The four categories of stack waste
Before you can rationalize a stack, you need a framework for categorizing what is in it. In my experience, revenue tech waste falls into four distinct buckets — and they require different remedies.
Functional overlap. Two or more tools doing materially the same job. Common in intent data, sales engagement, conversation intelligence, and enrichment. The fix is consolidation — but only after a fair capability comparison, not just a cost comparison.
Orphaned tools. Platforms with no active owner, low or zero usage, and no clear connection to current priorities. Often the easiest wins in rationalization — but watch for hidden dependencies before you pull the plug.
Misaligned tools. Tools that were bought for a go-to-market motion the company no longer runs. A PLG-era engagement platform in a company that has shifted to enterprise sales. The tool works fine. It just works for a problem you no longer have.
Under-deployed tools. Tools with genuine capability that the team is using at 20% of their potential. Not a candidate for removal — a candidate for investment in enablement and integration. Often the highest-ROI opportunity in the entire audit.
The reason this categorization matters is that each type of waste has a different financial profile and a different change management implication. Cutting an orphaned tool is straightforward. Consolidating two overlapping platforms that each have active user bases and embedded workflows is a six-month project with political dimensions that need to be managed carefully. Turning an under-deployed tool into a fully utilized one often delivers more value than any cut you could make.
The audit: where to start and what to map
A stack audit that is worth anything starts with data, not opinions. Before anyone argues about which tool to keep or cut, you need a clear picture of what exists, what it costs, who owns it, and how it is actually being used.
The discovery phase typically involves four data sources: finance or AP records for actual spend, IT or security records for provisioned applications, CRM and integration logs for active data flows, and direct conversations with the stakeholders who use each platform day to day. The combination of these four sources almost always surfaces surprises — tools that finance knows about but IT does not, integrations running in the background that nobody actively manages, user counts in contracts that bear no relationship to actual active users.
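As a rough sketch, the most useful discovery-phase cross-check — spend records against actual usage — looks like this. All tool names, contract values, and user counts below are hypothetical, standing in for real AP records and SSO logs:

```python
# Hypothetical AP records: annual contract value per tool.
finance_spend = {
    "IntentPlatformA": 80_000,
    "IntentPlatformB": 80_000,
    "LegacyEngagement": 45_000,
}

# Hypothetical IT/SSO logs: monthly active users per provisioned tool.
active_users = {
    "IntentPlatformA": 42,
    "LegacyEngagement": 0,  # provisioned, paid for, unused
}

# Tools finance pays for that show no active usage are orphan candidates.
# (Tools with usage but no spend record would point the other way:
# shadow IT or mis-tagged AP lines.)
orphan_candidates = [
    tool for tool, cost in finance_spend.items()
    if active_users.get(tool, 0) == 0
]
print(orphan_candidates)  # ['IntentPlatformB', 'LegacyEngagement']
```

The point is not the code — it is that the orphan list falls out mechanically once the two data sources are joined, before anyone has argued an opinion.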
Once you have that inventory, the mapping work begins. For each tool in the stack, I want to understand five things:
| Question | Why it matters |
|---|---|
| What business outcome is this tool supposed to drive? | Establishes the original intent and whether it is still relevant to current priorities. |
| What does actual usage look like — logins, records processed, workflows triggered? | Separates tools people rely on from tools people tolerate or ignore. |
| What other tools in the stack does this connect to, and how? | Maps dependency risk before any removal decisions are made. |
| Who actively champions this tool internally today? | Identifies the political landscape and the stakeholders who will need to be part of any consolidation conversation. |
| What would break if this tool disappeared tomorrow? | The single most important question for understanding true switching cost vs. perceived switching cost. |
That last question is the most revealing. Teams consistently overestimate the cost of removing tools that are lightly embedded in their workflows and underestimate the cost of removing tools that have quietly become load-bearing infrastructure. The only way to know which is which is to ask it directly — and then pressure-test the answer.
Building the business case for rationalization
Stack rationalization lives or dies on its business case. And the business case has to speak two languages simultaneously: the language of cost savings that resonates with finance and the CFO, and the language of capability improvement that resonates with the GTM leaders whose tools are on the chopping block.
The cost side is usually more straightforward than people expect. Annual contract values are knowable. Renewal dates are on record. The math of eliminating two overlapping $80,000-a-year platforms and replacing them with one $100,000 platform is not complicated. What requires more work is modeling the transition cost — the integration rework, the data migration, the retraining time, the productivity dip during cutover — so the net savings number is credible rather than optimistic.
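The arithmetic from the example above is worth laying out explicitly, because the transition-cost line items are what make the net number credible. The individual transition figures here are illustrative assumptions, not benchmarks:

```python
# Consolidation example from the text: two $80k platforms -> one $100k platform.
current_annual = 80_000 + 80_000   # two overlapping contracts
replacement_annual = 100_000       # consolidated platform

gross_annual_savings = current_annual - replacement_annual  # 60,000/yr

# One-time transition costs (illustrative assumptions).
transition_costs = {
    "integration_rework": 15_000,
    "data_migration": 10_000,
    "retraining": 8_000,
    "productivity_dip_during_cutover": 12_000,
}
one_time = sum(transition_costs.values())  # 45,000

first_year_net = gross_annual_savings - one_time  # 15,000 in year one
steady_state_savings = gross_annual_savings       # 60,000/yr thereafter

print(first_year_net, steady_state_savings)  # 15000 60000
```

Presenting the first-year net alongside the steady-state number is the difference between a credible model and an optimistic one: the savings are real, but most of them arrive in year two.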
The capability side requires a different framing. The argument is not just "we can do the same thing for less money." The better argument — and the one that tends to actually move leadership — is "we can do more of what matters with a stack that is simpler to manage, better integrated, and more aligned to how we actually go to market." A rationalized stack is not just cheaper. It is faster, cleaner, and easier to build on top of, particularly as AI-native workflows become a competitive requirement.
"A rationalized stack is not just cheaper. It is faster, cleaner, and easier to build on — particularly as AI-native workflows become a competitive requirement."
The sequencing question: what to cut first
Assuming the business case is approved, sequencing the rationalization correctly is what separates a successful consolidation from a painful one. The instinct is often to start with the biggest cost savings. I would argue for starting with the lowest disruption — building organizational confidence in the process before tackling the politically complex consolidations.
Orphaned tools with no active users and no live integrations are the right starting point. They deliver savings immediately and generate zero resistance. They also create a useful signal: if removing a tool that "nobody uses" surfaces unexpected objections, that tells you something important about the political dynamics you will face in later phases.
Functional overlaps should be sequenced by contract renewal timing wherever possible. The least disruptive moment to eliminate a tool is at renewal — the decision is framed as "we are not renewing" rather than "we are canceling," which changes the internal conversation significantly. Building a 12-month renewal calendar as part of the audit process gives you a rationalization roadmap that is aligned to natural decision points rather than imposed against them.
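The renewal calendar itself can be as simple as the audit inventory sorted by renewal date, with the intended decision attached to each line. Every entry below is hypothetical:

```python
from datetime import date

# Hypothetical audit inventory with renewal dates and intended decisions.
inventory = [
    {"tool": "IntentPlatformB", "renews": date(2025, 3, 1),  "action": "do not renew"},
    {"tool": "LegacyEngagement", "renews": date(2025, 9, 15), "action": "do not renew"},
    {"tool": "IntentPlatformA", "renews": date(2025, 6, 30), "action": "keep; consolidate usage here"},
]

# Sorting by renewal date turns the inventory into a rationalization roadmap
# aligned to natural decision points rather than imposed against them.
roadmap = sorted(inventory, key=lambda t: t["renews"])
for item in roadmap:
    print(item["renews"], item["tool"], "->", item["action"])
```

A spreadsheet does the same job; what matters is that every contract in the stack appears on the calendar with an owner and a decision before its renewal date arrives.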
The AI readiness dimension that most rationalization projects miss
There is a dimension to stack rationalization that was not particularly relevant three years ago but is increasingly central to how I think about it today: AI readiness. A bloated, poorly integrated stack is not just expensive. It is a barrier to deploying AI-native revenue workflows effectively.
Agentic AI systems — the kind that can operate autonomously across your pipeline, enrich records, personalize outreach, and recommend next best actions — require clean, well-structured, well-governed data flowing through a coherent architecture. They cannot function effectively when the same data point exists in three different systems with three different field names and three different update frequencies. They cannot take reliable action when the tools they need to interact with are connected through brittle, undocumented point integrations.
A stack rationalization done well does not just reduce cost. It creates the architectural conditions for AI to actually work. That framing — rationalization as AI readiness, not just expense management — tends to elevate the conversation from a finance exercise to a strategic priority. And it is an honest framing, because it is true.
What success actually looks like
The best stack rationalization I have been involved in did not end with the smallest number of tools. It ended with the right tools — a stack where every platform had a clear owner, a clear purpose, a live integration with the systems around it, and a renewal decision that was made intentionally rather than by default. Where the data flowing through the architecture was trustworthy enough to act on. Where the GTM team spent less time reconciling conflicting signals from overlapping platforms and more time having the conversations that move pipeline.
That is the outcome worth building toward. Not the smallest stack. The most coherent one.
Key takeaways
- Revenue stacks bloat incrementally and predictably — understanding how they got there shapes how you fix them.
- Categorize waste before cutting: functional overlap, orphaned tools, misaligned tools, and under-deployed tools each require a different response.
- A stack audit needs four data sources: finance records, IT provisioning, integration logs, and stakeholder conversations.
- The most revealing audit question: "What would break if this tool disappeared tomorrow?" Pressure-test every answer.
- Sequence rationalization by disruption level first, renewal timing second — not by cost savings alone.
- Frame rationalization as AI readiness, not just expense management. A coherent stack is the infrastructure that makes agentic workflows possible.