We need big ideas in tech. That’s not only the conclusion of almost 500 enterprises who commented last year, it’s also what I heard from my vendor and telecom contacts. The tech booms of the past came about because we had a successful “big idea”, and everyone would really like to see another tech boom. AI? A big idea, but not yet successful (I’ll blog later on why this broad group thinks that’s the case). What might a new candidate that could win look like?
Enterprises still think that AI agents could be the big idea; over 80% say agents represent the biggest near-term opportunity to raise IT and network spending. They also say they’re still working through just how to make the most of agents, and in that process their view of AI agents is evolving.
From the first, enterprises saw AI as an augmenting element in current workflows, something like an application component in a traditional distributed computing framework. Early on, it was the component focus that caught their attention, largely because it was a contrast to the big-hosted-generative-brain model that popular media and AI-model hype were pushing. They couldn’t see that central model fitting in, but they could see AI as a piece of a workflow.
Now, they’re focusing more on the “distributed” side. They see agents pushed closer to workers, which means they see smaller models that can not only be run in a single rack in a data center, but can be run on a distributed set of resources, likely down to the PC on people’s desks and the real-time controller in the office, store, or factory. A big part of this shift is due to the realization that, as I’ve noted, a full 40% of the workforce has yet to be really empowered, and what’s holding that back is that those workers aren’t “information workers” but “material” workers who move more than their eyes and fingers. It’s interesting (and yes, perhaps a bit disappointing) to me that they don’t see “real time” as the big idea, but I think that’s because enterprises are practical about technology, and they’re creeping up on it rather than taking a big and perceived-as-risky leap.
To a surprising degree, telcos take the same path. Of 88 telcos who commented, 74 said that the big idea for telecom was “differentiated services”, a leap away from the notion that broadband Internet was the universal dialtone of the future. The Internet, as a consumeristic tool, has to be dirt cheap, because consumers won’t pay for pipes. To get incremental revenue, then, telcos have to sell something else too. Simple, if you know what that is, but so far what they know is that it has to be different.
Different, to a telco, means different in service attributes, and telcos cite three of these: latency, availability, and security. If you could support applications whose needs in one or more of those areas were more stringent than the Internet provides, you could sell differentiated services. This thinking, some of my thoughtful contacts suggest, was behind network slicing, but it was never really fleshed out.
Latency means real-time applications. Availability means applications whose operation is life-or-death. Security can’t be sold as a service attribute for things that can already be secured end-to-end, so it means applications that rely on very simple and cheap elements, which points to something like IoT.
It’s hard not to see this as another example of creeping up on real-time applications, which meet all three of those criteria. The problem is that telcos see the applications that use a service as the responsibility of others; they just provide the service. Progress here thus demands a telco cultural remake, which has not happened and likely will not happen. But the telcos do agree that the next step they have to take is what AT&T called “facilitating services”. The question is what those are; AT&T has gone quiet on the concept.
Does this sound, so far, like the big idea is very much software-centric? Enterprises and telcos both think that’s the case, so it follows that if you look at the vendor side, they’d think the same, right? Well, network vendors do in fact think that’s the case, but IT vendors, meaning the computer and software companies, do not. This group still thinks, as enterprises do, that AI is the big idea.
Why is that a problem? Enterprise buyers and IT sellers agree AI is it. The problem is that every vendor in the IT space has a different vision of AI. Right now, most of the AI focus is created by the easily accessed AI services offered by giants like OpenAI, Google, and Microsoft, and by GPU behemoth Nvidia. That focus isn’t what enterprises generally find can make a business case, as I’ve noted many times in the past. Very few software companies, according to enterprises, have a small-model, self-hosted vision for AI that conforms to how enterprises have to use AI to meet cost/benefit and governance policies. Thus, there’s no real meeting of the minds.
This is changing, though. All the current AI giants are trying hard to get into the AI agent game, though the hosted model of AI isn’t suitable for applications that require access to key corporate data subject to governance, and the latency isn’t suitable as applications evolve toward real time. Google has successfully presented AI tools that enterprises say can be justified for roughly 15% of their workers, and more and more financial analysts are dissing the cloud-AI model, which forces even Nvidia to take self-hosting more seriously. Still, most of the AI oxygen is sucked up by the public-model play, and that’s making it hard to build PR, and even Wall Street, momentum for the self-hosted approach.
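To make the small-model, self-hosted idea concrete, here’s a minimal sketch of what an agent-style query to a locally hosted model could look like. It assumes a local runtime such as Ollama serving a small open model on a desktop or a single rack; the model name and the prompt are purely illustrative, not a recommendation. The point is that the question and the answer never leave the premises, which is what governance and latency requirements demand.

```python
# A minimal sketch of the self-hosted, small-model pattern: an agent-style
# query sent to a model running locally (here via Ollama's HTTP API on its
# default port, 11434), so no corporate data crosses a governance boundary.
# The model name and the prompt are illustrative assumptions.
import requests

def ask_local_model(prompt: str, model: str = "llama3.2") -> str:
    """Send a prompt to a locally hosted small model and return its reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()["response"]

if __name__ == "__main__":
    # A shop-floor-style question answered entirely on local resources.
    print(ask_local_model("List three likely causes of a conveyor jam."))
```

Whether a call like that runs on a worker’s PC or in a single data-center rack, it’s the shape of the “distributed” agent model enterprises describe, and nothing about it depends on a hyperscaler.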
So there we have it. There are big ideas out there, and there are even some trends and evolutions that, by converging the goals of buyers and sellers, offer a chance at actually getting something going. All the big ideas would converge on real-time/real-world applications, for example, and that’s good news. The less-good news is that converging a lot of elements from a lot of players will take time, and so we may not get another of those happy IT investment waves with exciting and real innovation behind them any time soon.
