All the market wants for Christmas is AI ROI
By Paddy Flood, Co-Head of Global Equity Research and Global Sector Specialist, Technology at Schroders
AI investment has been extraordinary, but the market needs to make sure it is looking in the right places for returns.
As the year draws to a close, the technology sector enters the festive season with one clear wish: evidence that the huge investment in artificial intelligence (AI) is paying off. Capital spending on AI has reached unprecedented levels, from chip purchases to data centre construction to multi-year cloud commitments. Yet investors still see limited disclosure of AI-driven revenues, profits or cash flows. The result is a growing concern that AI may not be delivering returns commensurate with the enthusiasm.
The market is right to ask where the payoffs are, but it also needs to be careful about where it looks before drawing conclusions. AI revenues are emerging, yet not always in the places investors instinctively expect. Understanding AI’s monetisation means examining the full technology stack and recognising that the economics surface in direct, indirect and sometimes hidden ways.
Starting with the basics: how people actually use AI
AI is used in several different ways, and each has very different revenue implications. Some people use AI for free through consumer chatbots such as ChatGPT or Gemini. Others pay for AI-enhanced products from software vendors, ranging from AI-enabled productivity tools to developer services. And in many cases, consumers and businesses use AI without explicitly paying for the AI component at all. A news website that offers article summarisation, or a retailer that embeds an AI shopping assistant, is providing AI capability without charging the user for it. The AI processing still has a cost, but it is absorbed into the broader product experience.
This mixture of free and paid usage creates real ambiguity when investors look for revenue. If one focuses only on the narrow category of explicitly paid consumer or enterprise AI products, the numbers appear small. But that view misses a significant part of the activity. AI is increasingly becoming a cost of doing business, similar to search, content delivery or cybersecurity. Companies may not monetise it directly, but they increasingly have to offer or use it, or risk losing users to competitors who do.
A company like Booking.com illustrates the point well. It is unlikely to sell AI features as standalone products, yet it will almost certainly use AI to power its recommendations, search results and customer service. The hope is that the economic benefit shows up through better conversion rates and higher profitability, not as a dedicated line of AI revenue. Crucially, every time someone uses AI, free or not, it triggers costs somewhere in the tech stack. Those costs are revenues for the companies providing the underlying model access and compute capacity.
Why the tech stack matters
Each AI interaction involves several layers of technology. At the top sits the application the user sees, such as a chatbot, an AI-powered search tool or a shopping assistant. Beneath that sits the large language model that produces the output. And below that sits the compute layer, where cloud platforms and specialist hardware deliver the processing required.
This structure matters because every time AI is used, value flows down the stack. Even if the end user pays nothing, the companies offering the service must pay the model provider, the cloud platform or both. The revenue does not always appear in the user-facing application, but it appears somewhere in the underlying infrastructure.
This is why focusing only on paid AI products risks underestimating adoption. The strongest indicators of real usage sit deeper in the stack, among the large model providers and the hyperscale compute platforms that record every AI request.
Where revenue is easiest to see: LLMs and hyperscalers
The clearest signal of usage and monetisation comes from the companies behind the large language models (LLMs).
- Model providers such as OpenAI and Anthropic generate revenue through consumer subscriptions, enterprise licences and developer usage. Every action that uses their models, whether free or paid at the application layer, consumes capacity and therefore contributes to revenue. The numbers are striking: combined revenue for these two companies is expected to reach tens of billions of dollars within a few years. If OpenAI and Anthropic meet their reported 2026 targets, they will be similar in scale to long-established software companies such as Intuit and Adobe, despite being far younger and operating in markets that barely existed a few years ago.
- Hyperscale cloud providers offer a second visible example. For platforms such as Azure, AWS and Google Cloud, AI is now a major contributor to incremental growth. Management teams have repeatedly highlighted generative AI workloads as a key driver of accelerating revenue and have noted that demand is frequently running ahead of available capacity.
These parts of the ecosystem offer the most transparent view of AI monetisation because they reflect usage directly and they are off to an impressive start. However, more will still be needed for the industry to justify the scale of the current investment.
Where revenue becomes less obvious: substantial but hidden monetisation
Beyond the companies selling AI directly and the platforms providing the compute behind it, there is a substantial layer of AI monetisation that is far less visible. Many large platforms deploy AI not as a product to sell but as an enhancement to improve their core businesses.
This is particularly true for the large digital advertising platforms. Meta and Google are investing heavily in AI to improve targeting, optimise creative output and refine campaign performance. Advertisers do not pay an AI fee, but they benefit from improved returns on spend, and the platforms benefit from stronger engagement and pricing power. The revenue uplift is not disclosed separately; it is embedded within advertising revenue rather than explicitly labelled as AI.
This pattern is increasingly common across the digital economy. Companies add AI because it strengthens their economics, enhances user experience or helps defend market share. The monetisation is real, but it is hidden within broader revenue lines that do not isolate AI contribution. For those looking only for standalone AI revenue, this layer is easy to overlook, even though in aggregate it is likely to be very large. For active investors, this hidden layer is a significant opportunity, precisely because the market tends to overlook what it cannot easily measure.
Putting it all together: the revenue is there, but not always where expected
Substantial AI revenues already exist, but they are distributed across the stack and often sit behind the scenes.
At the top of the stack, explicitly paid AI products are still developing. But lower down, the signals are more robust. The large model providers are already generating revenues that place them alongside some of the most successful software companies of the last two decades. Cloud platforms are seeing AI-driven acceleration. And many digital businesses, while not reporting AI revenue separately, are clearly benefiting from AI-enhanced monetisation.
None of these signals alone tells the full story. It is only when the entire stack is considered together, from user-facing applications to LLMs to cloud infrastructure, that the scale of AI adoption and monetisation becomes clear.