From AI Usage to AI Earnings Power
Agentic AI’s first investable cycle is forming around measurable work, workflow control, and budget formation
Research Series Note: We are devoting the next several reports to the AI monetization question, approached from three angles. Today’s note focuses on what customer evidence is telling us about AI ROI. Thursday’s report will look at how we see the AI platform war evolving as value capture moves from models to workflows, data, and control points. Next Tuesday, we will publish findings from a CIO/CTO survey we collaborated on, focused on AI ROI, budget formation, and how enterprise buyers decide which deployments deserve more funding.
Over the better part of the last two years, we have tracked enterprise AI adoption as it moved from experimentation to early production, with the ROI discussion becoming more central as deployments edged closer to operating budgets. The process is still early, and we would be careful not to overread any single customer story or survey result, but we are at the point where the central question has clearly shifted from capability to economic proof. Most companies no longer need to be convinced that AI can generate useful output, improve workflows, or create new product experiences. The harder issue is whether those improvements are large enough, repeatable enough, and measurable enough to justify the next layer of spending across software, models, services, and infrastructure. That is where the market’s AI narratives start to miss the economic question. Usage shows that a product has distribution. ROI shows whether the customer has a reason to keep funding it. We outlined the software monetization model needed for AI in the report below.
That is why we went through the exercise of collecting tangible AI ROI use cases rather than relying on broad adoption surveys (we have that data as well) or vendor-level attach commentary. We are sensitive to the broader question of AI cycle durability, and one of the key variables in that discussion is whether customers can point to enough measurable value to justify the spending now being built into software roadmaps, frontier-lab revenue expectations, and infrastructure deployment plans. The durability of this cycle will not be determined by capability alone. It will depend on whether AI becomes economically useful enough inside customer workflows to support continued budget formation.
That distinction is becoming more important as AI moves deeper into enterprise budgets. Software vendors need customer ROI to defend premium SKUs, higher attach rates, and usage-based pricing. Frontier labs need enterprise ROI to support consumption growth and the revenue expectations now embedded in the category. Infrastructure vendors need the same proof because the broader AI capex cycle ultimately depends on customers being able to convert compute into business value. CIOs and CFOs are also changing how they evaluate AI projects. The experimentation phase is not over, but the next wave of spending will face a more practical test: which workflows are being repriced, restaffed, accelerated, or made cheaper to run?
Our latest research note looks at agentic AI through that lens. Our current read is that the first investable ROI cycle is forming in a more specific set of workflows than the broad enterprise AI narrative implies. The strongest evidence appears where AI is tied to repeated work with a measurable baseline and a budget owner already attached. Contact centers, regulated servicing operations, IT access management, developer workflows, and enterprise search or context layers all prove out well because they have visible denominators: calls, tickets, access requests, summaries, code cycles, compliance reviews, or knowledge retrieval tasks. When AI changes the cost, speed, quality, or capacity of those units of work, the monetization claim becomes easier to evaluate.
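To make the denominator point concrete, consider a purely hypothetical contact-center sketch. The figures here are illustrative assumptions, not drawn from any case study in this report. Suppose a center handles 1,000,000 calls a year at a fully loaded cost of roughly $6 per call, and an AI deployment deflects or shortens 15% of that volume:

Baseline cost ≈ 1,000,000 calls × $6 per call = $6.0M per year
Gross savings ≈ 15% × $6.0M = $900K per year
The ROI test: does $900K in measurable savings clear the all-in cost of the software, model consumption, and integration work?

Because every input has an observable denominator (call volume, cost per call, deflection rate), both the customer and the vendor can audit the claim. Workflows without that denominator push the ROI conversation back toward softer metrics like time saved.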
Outside that group, the evidence is more mixed. Sales, RevOps, HR, and finance back-office workflows have large budgets and repeated tasks, but attribution, exceptions, permissions, and liability make automation harder to underwrite. Broad knowledge-worker assistants may show usage and time saved, yet those metrics often stop short of proving a funded operating change. Fully autonomous cross-enterprise agents remain even earlier, with reliability, identity, data integration, and liability still limiting the move from interface to execution. The picture is incomplete, but the direction is worth taking seriously.
For stakeholders, we believe the economic issue is budget formation (as we will show in our CIO/CTO survey). The customer ROI test starts with the spend pool the agent is permitted to influence. Labor in a contact center, after-call work in a servicing operation, IT ticket queues, access management, developer capacity, onboarding, enterprise search, and compliance documentation are all different budget conversations. The more measurable the work, the easier it is for the customer to defend spend and for the vendor to price against value created.
That also changes how we think about value capture. The agent interface may not always be the economic control point (more on this Thursday). In some workflows, the application vendor controls the system of action. In others, the data and context layer gets funded first because enterprises need governed knowledge before agents can act. Identity and security vendors may become more central as agents behave like non-human actors inside enterprise systems. Model providers can see consumption grow while the workflow economics accrue to applications, orchestration layers, or internal routing systems. Services firms may benefit from data cleanup and integration near term while facing pressure later if AI automates repetitive support, QA, migration, or maintenance work. For more on this, see our report on who has competitive moats in SaaS.
The practical implication is that AI ROI should be evaluated at the workflow level before it is extrapolated to the enterprise software stack. The cycle that earns the most analytical weight first will be the one where the productivity claim and the monetization claim become the same claim. That is the core focus of this report.
Paid subscribers get the full report, including:
A ranked evidence ladder separating stronger operating-economics cases from projected, anecdotal, infrastructure, and risk evidence.
Five primary customer case studies across healthcare call centers, regulated mortgage servicing, IT access management, developer workflows, and enterprise search/context.
A workflow-density heat map showing which enterprise AI use cases have the clearest near-term ROI visibility and which remain harder to underwrite.
A customer-story-to-budget-formation table mapping each use case to the affected budget line, likely control point, public-company read-through, and durability test.
A value-capture layer map covering workflow owners, data/context platforms, developer platforms, identity and security, model providers, SIs, and vertical AI vendors.
An earnings-call tracking dashboard focused on production conversion, paid attach versus bundling, workflow-level economics, data-readiness pull-through, agent governance, developer-tool durability, services mix, and pricing-model evolution.
A risk framework for where the market may be overgeneralizing, including broad copilot adoption, AI attach, seat-based SaaS pricing, SI exposure, model-layer value capture, and data-readiness bottlenecks.
The broader AI monetization read-through tying customer ROI evidence to software pricing, frontier-lab consumption, and the durability of infrastructure spend.




