“No responsible CEO is going to turn a company over to AI, period.” That’s emphatic, to be sure, and it’s also the comment an enterprise CEO sent me this past weekend. Others, lower down in the same enterprise and in others, were commenting on my blogs on “agent AI” and on why enterprises had been advocating something of that nature even before the technology was being discussed. Now, with the concept of agents out in the open, more and more enterprise IT planners are seeing the potential.
While the concept of agent AI has been discussed for months, enterprise tech planners typically focus on things that are productized, meaning that they have firm specifications to review, interfaces to validate, and a price that can be plugged into financial justifications. We don’t have much of that so far, so in the last ten days I’ve gotten comments on my recent blog points from only 76 enterprises. Others, presumably, are focused on items they can actually act on.
Before I talk about what the 76 had to say, let me say something about “stimulation bias”. If you start singing a song about flowers, the people who can hear you are more likely to think and talk about flowers than they would be without your song. Thus, people responding to my blogs are most likely to comment on what I’ve said, either for or against. Keep this in mind, please.
The 76 agreed on several important points. First, any deployment of AI they’re likely to do will be integrated with their current processes, in a process-specific way. That means that they believe they will deploy multiple AI models, each specializing in something, rather than any super-model. This view is totally consistent with what enterprises have been telling me about AI since 2023. Second, while only a third of them would call the individual models “agents” spontaneously, the attributes they see for the models fit the notion of agent AI far better than they fit the notion of either super-models that do many things, or “horizontal” models that address workers’ productivity across multiple jobs, the “copilot” approach we often see. Third, there is no requirement that these “agents” (which is what I’ll call these small models hereafter) be autonomous. In fact, enterprises think that some would be integrated with human processes and others with current software, and the small-model nature of the agents means that their context is likely set by, and their actions mediated by, other business processes around them.
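To make that mediation point concrete, here’s a minimal Python sketch of the shape the 76 seem to be describing. The inventory mission, the class names, and the confidence threshold are all my hypothetical choices, not anything an enterprise specified; what matters is the structure: the agent proposes, and the surrounding business process sets the context and disposes of the action.

```python
# A sketch of a small, specialized agent whose context is set, and whose
# actions are mediated, by the business process around it. All names
# here are hypothetical illustrations.

from dataclasses import dataclass

@dataclass
class AgentSuggestion:
    action: str        # what the agent proposes, e.g. "reorder_stock"
    confidence: float  # the agent's own confidence score, 0.0 to 1.0

class InventoryAgent:
    """A hypothetical single-purpose model: it only knows inventory."""
    def suggest(self, context: dict) -> AgentSuggestion:
        # A real implementation would invoke a specialized model here;
        # this stub just illustrates the interface.
        if context["on_hand"] < context["reorder_point"]:
            return AgentSuggestion("reorder_stock", 0.9)
        return AgentSuggestion("no_action", 0.99)

def run_inventory_step(agent: InventoryAgent, item: dict) -> str:
    # The business process, not the agent, assembles the context...
    context = {"on_hand": item["on_hand"],
               "reorder_point": item["reorder_point"]}
    suggestion = agent.suggest(context)
    # ...and mediates the action: low-confidence suggestions are routed
    # to the existing process logic rather than executed.
    if suggestion.confidence < 0.8:
        return "escalate_to_existing_process"
    return suggestion.action

print(run_inventory_step(InventoryAgent(),
                         {"on_hand": 3, "reorder_point": 10}))
```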
The reasons for these views are also interesting. First and foremost, enterprises say that AI adoption has to work like adoption of any technology, meaning it has to be phased through their business to minimize disruption of overall business processes, control costs, and limit displacement of gear that’s not fully depreciated. Of the 76, 55 said that they’d want their earliest AI agents to be ones where the early-adopter risk was limited, and that they’d tackle more significant and impactful areas down the line when their skills and confidence levels were higher.
Enterprises also suggested that their early applications might be the opposite of the popular view that agentic AI has to operate autonomously. Autonomy isn’t widely trusted at this point, so having AI suggest things rather than do them automatically is the preferred approach. I’ve seen this already in netops and ITops applications; having AI “tell a professional” what’s happening and how to deal with it is preferred.
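Here’s a hedged sketch of that “tell a professional” pattern; the event type, device name, and functions are invented for illustration. The point to notice is what’s absent: no code path executes the fix.

```python
# A sketch of the recommend-only pattern enterprises prefer in
# netops/ITops: the agent diagnoses and recommends, but a human
# applies (or rejects) the fix. Names are hypothetical.

def diagnose(event: dict) -> dict:
    """Stand-in for a specialized ops model: maps an event to a
    diagnosis and a suggested remediation, but never acts on it."""
    if event["type"] == "link_down":
        return {"diagnosis": "probable fiber cut on " + event["device"],
                "suggested_fix": "reroute traffic via backup path"}
    return {"diagnosis": "unclassified", "suggested_fix": "investigate"}

def notify_professional(ticket: dict) -> None:
    # In production this might open a trouble ticket or page the
    # on-call engineer; printing keeps the sketch self-contained.
    print(f"[TICKET] {ticket['diagnosis']} -- "
          f"recommended: {ticket['suggested_fix']}")

event = {"type": "link_down", "device": "core-router-7"}
notify_professional(diagnose(event))
# Autonomy, if it ever comes, is a separate decision layered on later;
# it isn't a property of the agent itself.
```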
The final point made, the one my CEO comment reflects, is that surrendering control of a business, or even an entire business process, to AI is way out of current CxO comfort zones, and even among AI/IT advocates, it’s something fewer than a third are willing to promote at this point. AI is a junior type, or some sort of specialist-geek type, that needs supervision and oversight. Compliance types who commented (11 of them) all said that fully autonomous AI in most missions could not pass an internal audit.
Where my stimulation bias point rears its ugly head is in how AI would look. Right now, 59 of the 76 see AI as an application component, a piece of software not unlike a microservice. Of the remaining 17, 8 say they have no specific way they’d think of an AI agent, and the remaining 9 like the notion of the “digital twin element” I’ve blogged about. That, of course, could be because of my blogs; only 6 said they had heard of, considered, or deployed digital twins. Of the 59 who saw AI agents as software components, 13 said they’d deployed or were evaluating digital twins, so not all who are familiar with the concept see agent AI playing any role.
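If you want to picture that majority view, here’s a rough sketch of an agent packaged as just another application component behind a service endpoint. Flask, the route, and the expense-classification mission are all my illustrative choices, not a stack or use case any of the 76 prescribed.

```python
# A sketch of the "agent as application component" view held by 59 of
# the 76: the agent sits behind a service boundary like any other
# microservice, and callers in the workflow decide what to do with
# its answers. All names and the endpoint are hypothetical.

from flask import Flask, jsonify, request

app = Flask(__name__)

def classify_expense(description: str) -> str:
    """Stub for a small, specialized model; a real deployment would
    invoke the model behind this same component boundary."""
    return "travel" if "flight" in description.lower() else "other"

@app.route("/agents/expense-classifier", methods=["POST"])
def expense_classifier():
    payload = request.get_json()
    category = classify_expense(payload.get("description", ""))
    # The agent component just answers; the calling service owns
    # whatever action follows.
    return jsonify({"category": category})

if __name__ == "__main__":
    app.run(port=8080)
```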
The lack of a consistent/universal view of agent AI is probably the main reason why it’s been slow to evolve. Enterprises like tech concepts that are widely and consistently articulated by their strategic vendors. That’s not the case now, and in fact of the 76 enterprises who responded, only 4 said that they had a vendor telling them an AI story consistent with their own requirements, and none said that multiple vendors were telling the same story.
Even IBM, the vendor with the highest level of enterprise strategic influence and the one most say has the best handle on AI, still confuses enterprises. A recent IBM “Think” piece contrasts “agentic” with “generative” AI, saying that “Agentic AI is focused on decisions as opposed to creating the actual new content, and doesn’t solely rely on human prompts nor require human oversight.” The first part of that is fine in the view of those 76 enterprises, but the second part is problematic. I think the problem here is more one of terminology than anything else: AI types think and speak differently than business types.
IBM’s piece says that generative AI is what generates content of some sort, where agentic AI generates decisions and takes actions. That distinction doesn’t map to how the enterprises who’ve chatted with me about AI see things. They think that generative AI is AI built on general-intelligence training of the sort done on Internet content, and agent AI (I think the term “agentic” is leading us astray) is AI trained in a limited, specialized area. For example, enterprises would classify most copilot applications as generative, where IBM thinks of them as “agentic”. Enterprises would also say that something like a tax preparation agent was just that, an AI agent that cooperated with a human, where IBM’s definition would likely put it in the generative category, since it generates a tax return.
I’m concerned here, frankly. Yes, I’d hoped that my AI-and-digital-twin symbiosis would get more recognition, but more than that I’d hoped that what enterprises were hearing from IBM was aligning better with what they tell me they need to hear. Is IBM going down an “agentic” rat-hole here? I hope not, but it may be. If it is, then other players may have a shot at leading AI in the future.
Speaking of the future, of the 9 enterprises who mentioned an AI connection with digital twins, 4 had comments about the three-way relationship between digital twins, AI, and the metaverse, or more specifically what I’ve called the “metaverse of things” or MoT. I’ll cover that in my blog tomorrow.