The biggest problem with AI agents, enterprises say, is that all you can easily find about them is trivial and wrong. Yes, it’s possible to infer some useful truths from basic agentic comments online, but the details that a savvy planner would need to make agent decisions and deployments are missing. One enterprise AI type told me, “There are agentic AI myths, and agentic AI misses.” Let’s take a look at both to try to set the record straight.
The biggest myth, enterprises say, is that artificial general intelligence is important to AI and AI agents. As I’ve related before, enterprises know that an actual human would surely be a lot less expensive than a cluster of GPU servers big enough to offer AGI, if we could do it. What enterprises want from an AI agent is what’s often known as a savant. In college, I worked a bit on a research project that required that we interview people who were sometimes average and sometimes a little less, but who had a special skill, like a photographic memory or the ability to do complex math in their heads. That’s an agent, to enterprises. One thing, one special thing, to paraphrase a song.
The first of our misses is related to this myth. If you visualize the ideal agent application, you quickly realize that it’s almost certain that a real-world business process is going to require the cooperative action of a bunch of agents. OK, we have A2A APIs, but as my enterprise AI type says, “You know what agents linked with hard-coded APIs is? A monolith.” If you need cooperative agents, you need agent orchestration. That’s something you hear next to nothing about in the overall AI hypespace, and in fact enterprises who look at the topic say that the only vendor who brings up the subject is IBM. For the record, I do no business with IBM.
Agent orchestration is to AI what the enterprise service bus is to Service-Oriented Architecture (SOA) in software deployment. Both link application components and entire applications to create the workflow needed to automate an entire business process, or even an entire business. You can’t do efficient business software without a service bus, and you can’t use AI agents efficiently without agent orchestration.
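To make the orchestration point concrete, here is a minimal sketch of the idea in Python. The agent names, the workflow steps, and the stand-in logic are all hypothetical illustrations, not a real product’s API; the point is only that the orchestrator owns the workflow, so the agents never call one another directly.

```python
# Minimal sketch of agent orchestration: a workflow definition drives
# narrow, specialized agents, rather than hard-coding agent-to-agent calls.
# All agent names and step logic here are hypothetical illustrations.

from typing import Callable

# Each "agent" is a savant: one state dict in, one state dict out.
def extract_order(state: dict) -> dict:
    state["order"] = {"sku": "A-100", "qty": 3}   # stand-in for real extraction
    return state

def check_inventory(state: dict) -> dict:
    state["in_stock"] = state["order"]["qty"] <= 10
    return state

def schedule_shipment(state: dict) -> dict:
    state["shipment"] = "scheduled" if state["in_stock"] else "backordered"
    return state

# The orchestrator owns the workflow; agents never call each other,
# so a step can be replaced or reordered without touching the others.
WORKFLOW: list[Callable[[dict], dict]] = [
    extract_order,
    check_inventory,
    schedule_shipment,
]

def run(workflow: list, state: dict) -> dict:
    for step in workflow:
        state = step(state)
    return state

result = run(WORKFLOW, {})
print(result["shipment"])  # scheduled
```

Wire the same three agents together with direct calls instead, and you get exactly the monolith the quote warns about: change one agent and you touch them all.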
The second AI myth enterprises mention is that AI autonomy, and in fact AI overall, necessarily leads us to a future where our AI fights for dominance and wins, destroying a business or the whole human race. “Intelligence” and even “autonomous intelligence” aren’t the same as “self-awareness” or “ego”. It’s far from clear that AGI, even if achieved, could lead to self-aware AI, to consciousness in AI. But there is a real risk we miss because of our AI-apocalypse focus: AI autonomy, even AI agent autonomy, does pose risks.
AI, like human intelligence and like traditional software, can make mistakes. With generative AI, this is called “hallucinating”, and it’s a particular problem because, unlike software bugs, hallucinations are hard to find and fix; AI reasoning is fairly opaque. However, AI hallucinations tend to be proportional to model complexity, which in turn tends to be proportional to the breadth of topic training. AI agents, with their naturally specialized scope of function, are far less subject to the problem, but enterprises are still concerned enough about the issue to want some set of checks and balances on agents. This can take the form of having a worker make the final decision to accept agent recommendations, or an audit function that assesses the output of the agent and flags anything risky-looking.
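That checks-and-balances idea can be sketched in a few lines. This is only an illustration of the pattern the enterprises describe; the risk score, the threshold, and the gate function are all invented placeholders, and a real deployment would score risk with its own domain logic.

```python
# Sketch of a "checks and balances" gate on agent output: low-risk
# recommendations are applied automatically, anything else is held
# for a human decision. Threshold and risk scoring are placeholders.

def audit_gate(recommendation: dict, risk_threshold: float = 0.5) -> tuple:
    """Return ("auto", rec) for low-risk output, ("review", rec) otherwise.

    A recommendation with no risk score at all is treated as maximum
    risk, so unassessed agent output always goes to a human.
    """
    if recommendation.get("risk_score", 1.0) <= risk_threshold:
        return ("auto", recommendation)
    return ("review", recommendation)  # queue for a worker's final decision

print(audit_gate({"action": "reorder parts", "risk_score": 0.2})[0])   # auto
print(audit_gate({"action": "cancel contract", "risk_score": 0.9})[0]) # review
```

The design choice worth noting is the default: output the gate can’t assess is routed to a person, which is exactly the “worker makes the final decision” form of the check.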
The downside risk with AI agents is that this is the sort of AI that could reasonably be used to run some real-world activity. You wouldn’t turn your business over to AI, but many already effectively let AI drive for them. Envision an AI agent controlling a human-form robot that starts to hallucinate, or even to “learn” destructive behavior. That’s a lot more of a problem than having a huge AGI cluster decide to make humanity extinct, but nobody talks about it.
The third myth is that AI always improves productivity, and thus is always justified. AI does not always improve productivity, enterprises tell me, and even when it does, the gain may not meet an ROI target, or may not realize an actual benefit at all. This has led many enterprises to AI decisions that have been all cost and risk with no justification. The biggest problem here, enterprises say, is that for non-technical departments, generative online AI services are accessible with limited or no approvals or ROI assessments. “People play with AI more than they work with it,” one CIO told me.
Most AI use, according to enterprises, does things that workers find convenient, and many think it’s fun to interact with AI, but none of this is very useful to business, and it means enterprises often miss the real value. Fortunately, agentic AI is less subject to this sort of usage than traditional online AI chatbot tools, but even agents can be directed at things that don’t serve a business purpose. Does saving a worker ten minutes in writing something help the company? Not unless the worker’s time (unit value of labor) is valuable, and even then not unless the free time can somehow be recovered for something else. Often, that’s not the case.
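The ten-minutes question is just arithmetic, and it’s worth doing once. The back-of-envelope version below uses illustrative figures of my own choosing (a loaded labor rate and a recovered-time fraction), not data from any enterprise.

```python
# Back-of-envelope value of AI time savings: saved minutes only count
# at the worker's loaded labor rate, and only for the fraction of the
# freed time actually recovered for other work. Figures are illustrative.

def ai_time_saving_value(minutes_saved: float,
                         hourly_labor_cost: float,
                         recoverable_fraction: float) -> float:
    """Dollar value of the time an AI tool saves one worker, one task."""
    return (minutes_saved / 60.0) * hourly_labor_cost * recoverable_fraction

# Ten minutes saved at a $60/hour loaded rate, half of it ever recovered:
value = ai_time_saving_value(10, 60.0, 0.5)
print(f"${value:.2f}")  # $5.00
```

Set the recoverable fraction to zero, which is what “the free time can’t be recovered for something else” means, and the benefit is zero no matter how impressive the tool.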
This relates to the final myth, which is that “AI makes people stupid”. Obviously it does not, but what it does do is impact not their intelligence but their knowledge. Calculators, and then smartphones, have impacted people’s ability to do arithmetic on their own, to the point where many lose any sense of what a right answer would look like, and thus can’t spot an AI error. I was asked once, in a military formation, what the cube root of 1,728 was. Since that’s the number of cubic inches in a cubic foot, I knew it was twelve. Thus, if you told me that the cube of 13 was fifteen thousand and nine, I’d know it was wrong. Suppose you didn’t know that, and used the result? Missing some basic knowledge means you can miss an AI error, which is a problem given that we know AI can make them.
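The cube-root story is itself a small worked example of the sanity check that missing knowledge takes away. Knowing that 12³ is 1,728 tells you 13³ must be only somewhat larger, so a claimed value of fifteen thousand and nine fails the plausibility test instantly:

```python
# The column's arithmetic as a sanity check: 12**3 is 1,728 (the cubic
# inches in a cubic foot), so 13**3 must be only somewhat larger, and a
# claimed value of 15,009 is implausible on its face.

assert 12 ** 3 == 1_728
print(13 ** 3)                 # 2197, nowhere near 15,009
print(round(1_728 ** (1 / 3)))  # 12
```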
Enterprises tell me that, on average, an AI project takes a third more time to complete than a traditional software project. I think a lot of that can be traced to these myth/miss points. We don’t really understand AI, and what we hear or read about it is more likely to be clickbait than useful. Just sorting through the trash takes time, which may be why IBM’s consulting revenue was a bright spot in their latest earnings report. AI projects fail more often than traditional projects too, so if you’re thinking of one, enterprises who know would tell you to take your time.
