AI, like all too many tech terms, is a tent that’s admitted way too many into its shelter. Could this be why IBM’s CEO, Arvind Krishna, recently said there is no AI bubble while also saying, “It’s my view that there’s no way you’re going to get a return on that because $8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest”? The latter view sure looks like it contradicts the former, so I think we need to analyze the whole interview (which I’ve linked) in light of what I’ve heard from enterprises and found through my modeling.
It’s highly unlikely that anyone reading my blogs doesn’t know my own belief: AI in the form of huge clusters of GPUs running LLMs and offering AI-as-a-service is far distant from any realistic AI business case. Enterprises have told me all along that they don’t see this sort of thing doing much for their bottom lines, and thus can’t make a business case for much spending. They do, however, see value in a much different sort of AI, one made up of smaller models deployed on a few GPUs, distributed more widely within their companies, and undertaking missions not unlike those that current applications support. For them, AI is software. Is this what IBM’s CEO means? Maybe, and sure, I’d like to think so, even think he picked up on what I’ve been saying, but what I’d like to think isn’t the point here; we have to figure out what is true.
So let’s start with one truth: IBM isn’t parroting me; I’m parroting the people IBM is listening to, influencing, and helping—the enterprises. Fact is, IBM is driving the vast majority of the dollar benefits of successful enterprise AI business cases. It’s not even close. They are clearly in a better position than anyone else to assess the value of AI; they know what’s worked for buyers. It’s also true that if we assume $8 trillion in AI capex, there is zero chance that AI benefits can justify it. The whole of the global market for cloud computing services falls short of the required $800 billion by a hundred billion dollars per year, and how long have we been pushing cloud computing? Why would AI success, surely a lot more complex, be realized in a year or two?
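Krishna’s arithmetic is easy to check. Here’s a back-of-envelope sketch of it; the 10% cost-of-capital rate is what his two numbers imply (not something he stated), and the rough $700 billion cloud market size follows from the “short by a hundred billion” comparison above:

```python
# Back-of-envelope check of Krishna's capex arithmetic.
ai_capex = 8_000_000_000_000       # $8 trillion in projected AI capex
needed_profit = 800_000_000_000    # profit needed "just to pay for the interest"

# The implied annual cost of capital (an inference, not Krishna's stated figure):
implied_rate = needed_profit / ai_capex
print(f"Implied cost of capital: {implied_rate:.0%}")

# The global cloud services market falls roughly $100B/yr short of that
# $800B target, putting it near $700B/yr -- and that's gross revenue, not profit.
cloud_market = needed_profit - 100_000_000_000
print(f"Approximate global cloud market: ${cloud_market / 1e9:.0f}B/yr")
```

The point of the sketch: even if every dollar of global cloud revenue were pure profit and redirected to AI debt service, it still wouldn’t cover the carrying cost of an $8 trillion build-out.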
So, enterprises see real AI value, but not value that would require, or even justify, the massive AI data center investment. In the interview, The Verge asks “Have you told Sam [Altman]? Because he seems to think he can get both the CapEx and the return.” Krishna says “But that’s a belief. It’s a belief that one company is going to be the only company that gets the entire market. I got it, that’s a belief. That’s what some people like to chase.” A belief, a dream, a self-serving, hype-driving fable? Sounds like it. Maybe for a startup, but for Microsoft, Google, Amazon, Meta? I think it would have to be way more than that.
So what are we missing? Krishna is sizing AI capex based on commitments, not on present numbers, and I agree with him that the committed number is unrealistic. But we’re not there yet; my data says global hyperscaler AI data center spending for 2024 and 2025 totaled $500 billion, and my model says that the top-end annual revenue this could support would be roughly $120 billion, which is a respectable return. In short, the current level of investment in AI data centers seems (if my numbers are correct) justified. The problem isn’t that what’s been deployed can’t pay back, but rather that what’s been promised (hoped for, or hyped up?) cannot be.
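To put those two numbers side by side, here’s a minimal sketch. The revenue-to-capex ratio is simply what my figures imply for 2024-2025; applying that same ratio to the $8 trillion commitment is my extrapolation, not Krishna’s, and it assumes the return ratio wouldn’t change with scale (a big assumption):

```python
# Deployed-versus-committed comparison, using my model's 2024-2025 figures.
deployed_capex = 500e9       # hyperscaler AI data center spend, 2024-2025
supportable_revenue = 120e9  # top-end annual revenue my model says this supports

ratio = supportable_revenue / deployed_capex  # supportable revenue per capex dollar
print(f"Revenue per capex dollar: {ratio:.2f}")

# Hypothetical extrapolation to the $8T commitment Krishna cites, assuming
# the same ratio held at that scale:
committed_capex = 8e12
revenue_ceiling = ratio * committed_capex
print(f"Implied revenue ceiling on $8T: ${revenue_ceiling / 1e12:.2f}T/yr")
```

Even that extrapolated ceiling is gross revenue, not profit, so it falls well short of generating the $800 billion in profit the interest alone would demand.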
The enterprise (and IBM) view of AI agents can’t be supported by cloud-AI, any more than cloud computing can replace enterprise computing. Agents as enterprises see them are specialized application components. Popular AI hype says that they’re autonomous AI applications hosted in that vast data center community in the clouds. The spenders themselves will prevail here. But this topic illustrates part of the “AI big tent” impact. Everything gets lumped into a single technology label, which is a bit like saying “chip” and generalizing to everything that’s done with integrated circuits. Some of that stuff is incredibly valuable, some useless, and most specialized in terms of what it was designed to do. But call everything a chip and you lose all those distinctions.
That may be at the heart of Krishna’s comments. AI is not a bubble if you accept that there is an enormous amount of value in AI that has yet to be realized. There’s a lot of headroom in the “up” direction. But on the other hand, not everything we apply the label to will realize that value, or even any piece of it.
Why, then, would giants like Amazon, Google, Meta, and Microsoft throw so much money at AI? One possibility, the one that worries the Street these days, is the “tiger by the tail” problem. They’re in too deep financially, not only with regard to AI but also with regard to cloud computing. If cloud-hosted AI services aren’t viable, then there is no way the cloud can grow significantly. Thus, you spend to stave off a major problem. But what if there really is no future in cloud-hosted AI?
Another possibility is that the giants realize that the “agent craze” is real, and that there is in fact an opportunity for them to seize the AI agent market, but to do that, they need to align it with what cloud-hosted AI can do at present. This means, using my terms, focusing on the “interactive” agent model even though it’s what enterprises see as the least valuable overall. Even if that’s true, there’s little doubt that cloud-hosted AI is faster to get into, since it doesn’t require as much infrastructure development and deployment. If the giants can get their foot in the interactive-agent door, maybe they can slide into the workflow and embedded models of agent deployment, too.
I think IBM knows that agents are the key to AI, and that self-hosted agents are almost surely essential in getting the most value from AI. I think that’s at the core of Krishna’s comments, and also why he seems particularly skeptical about the OpenAI model, even if he’s careful not to say that explicitly. Of all the players, OpenAI is the most tied to the cloud-hosted giant LLM approach, and it’s also the biggest generator of AI hype (around, for example, AGI). Yahoo Finance named OpenAI their company of the year, which in terms of impact on the financial world is logical, but they remain the most vulnerable of all the big AI names because they don’t have their own direct path to monetization of a cloud AI model. Could that give them an advantage in pivoting to enterprise self-hosting? Not so far.
Wall Street is obviously antsy about AI, and also obviously not distinguishing between the mass cloud version that gets all the attention, and the self-hosted agent version that’s likely to be the money-maker in the long run. Still, money always talks, and the Street will eventually vote on whatever is going to get financial credibility. That vote is likely coming up in early 2026. We’ll see how it shakes out.
