What do AI and wireless have in common? Yeah, this is sort of a trick question, and most people would probably respond with "hype", which is true. What's also true is that both have a kind of generational succession to them. In wireless, most remember 4G, know 5G is current, and expect 6G next. AI succession isn't as numerological; we had AI and machine learning, then generative AI, then RAG (retrieval-augmented generation), and now many would say we have autonomous agents. Like 6G, the AI agent concept is a bit fuzzy at the moment, but also like 6G, it has potential, maybe even enough to salvage the technology as a whole.
The fuzz in AI agency is related to the fuzz in AI overall. The majority of AI that's deployed is not the generative AI we always hear about; it takes the form of smaller, simpler models. An AI agent is, at its heart, an AI element tasked with something specific. That doesn't necessarily mean it's generative AI based on LLMs. In my own view, and in the view of a slight majority of enterprises I hear from, it's not even necessarily fully autonomous. It's task-oriented AI, and the task might as easily be to recommend as to actually act.
Another source of fuzz is that the pace of generational change is too high; it's fruit flies, not elephants. Enterprises have been telling me from the first that they don't have AI expertise on staff, in no small part because the AI experts want to be hired by AI companies who will offer more. How, they now ask, are they to hit a target that's moving as fast as AI is? Thus, anyone who surveys enterprises looking for expert opinion on the evolution of AI is, by enterprises' own accounts, wasting their time. So I won't (exactly) do that.
Enterprises, by well over a 4:1 margin, think that any AI transformation is going to be self-hosted. They say it won't involve "generative" AI by about 2:1, and only slightly fewer say they believe AI will be used and valuable in contained missions, meaning multiple AI models are likely needed to really transform their business. They've also said that they like AI giving them advice, but are wary of it running things on its own. This AI fear is greatest where AI scope is greatest, so contained AI is more readily accepted as acting on its own. From this, you can see that at least implicitly, enterprises view AI agents as small specialists, acting like a harbor pilot on a ship leaving port. They can recommend, but they're not the captain.
You can see this clearly in network operations missions for AI. Fewer than ten percent of enterprises say they'd like to turn netops over to AI completely, though a third think that might change in five years. On the other hand, no enterprises reject the notion of an AI agent giving them advice, and only a quarter say they couldn't accept autonomous responses in specialized areas of their network. Traffic and WiFi capacity management? Bring it on! Fault response? Recommend, but let me make the final choice.
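To make that split concrete, here's a minimal Python sketch of what policy-scoped autonomy might look like in netops. Everything in it (the AutonomyLevel enum, the domain names, the POLICY table) is my own illustration, not any vendor's API; the point is only that contained domains get to act while broad ones only recommend.

```python
# A minimal sketch of policy-scoped autonomy in netops. All names here
# (AutonomyLevel, Finding, POLICY) are hypothetical illustrations.
from dataclasses import dataclass
from enum import Enum, auto

class AutonomyLevel(Enum):
    RECOMMEND_ONLY = auto()     # agent proposes, a human decides
    ACT_WITHIN_POLICY = auto()  # agent acts on its own, within preset bounds

@dataclass
class Finding:
    domain: str
    proposed_action: str

# Policy mirrors what enterprises describe: autonomy for contained
# domains, recommendation-only for broad ones like fault response.
POLICY = {
    "wifi_capacity": AutonomyLevel.ACT_WITHIN_POLICY,
    "traffic_mgmt": AutonomyLevel.ACT_WITHIN_POLICY,
    "fault_response": AutonomyLevel.RECOMMEND_ONLY,
}

def handle(finding: Finding) -> str:
    # Unknown domains default to the safer recommend-only mode.
    level = POLICY.get(finding.domain, AutonomyLevel.RECOMMEND_ONLY)
    if level is AutonomyLevel.ACT_WITHIN_POLICY:
        return f"EXECUTED: {finding.proposed_action}"
    return f"RECOMMENDED (awaiting operator approval): {finding.proposed_action}"

print(handle(Finding("wifi_capacity", "rebalance AP channel plan")))
print(handle(Finding("fault_response", "fail over core router R2")))
```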
What does this have to do with autonomous agents? You can probably deduce it at this point. Think about a company, a typical enterprise that runs on human rather than artificial intelligence. These companies are largely run by specialists in multiple areas, whose decisions fit within a broader framework set at the top. An AI analog of this would be a collection of AI models/agents, each with an area of specialization, coordinated perhaps by a top-level super-agent. Based on what I hear from enterprises, I think this approach is the one they're most comfortable with. They might also see each specialty area managed by its own hierarchy of agents, just as a real organization would be, with some of the lower-level agents allowed to make decisions on their own, as long as those decisions fit within policy constraints set from above.
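If you wanted to sketch that structure in code, it might look something like the following. All the names here (SuperAgent, CapacityAgent, and so on) are hypothetical; what matters is that autonomy is a per-domain grant from the coordinator, not a property of the specialist itself.

```python
# A sketch of the hierarchy enterprises describe: specialist agents
# coordinated by a top-level "super-agent" that enforces policy.
# Class and method names are my own illustrations, not a real framework.
from typing import Protocol

class SpecialistAgent(Protocol):
    domain: str
    def assess(self, telemetry: dict) -> str: ...

class CapacityAgent:
    """A contained specialist: looks only at capacity telemetry."""
    domain = "capacity"
    def assess(self, telemetry: dict) -> str:
        return "add WiFi capacity" if telemetry.get("util", 0) > 0.8 else "no action"

class SuperAgent:
    """Coordinates specialists; autonomy is granted per domain by policy."""
    def __init__(self, specialists: list, autonomous_domains: set):
        self.specialists = specialists
        self.autonomous_domains = autonomous_domains

    def run(self, telemetry: dict):
        for agent in self.specialists:
            proposal = agent.assess(telemetry)
            # The coordinator, not the specialist, decides whether the
            # proposal executes or is merely surfaced as advice.
            mode = "act" if agent.domain in self.autonomous_domains else "recommend"
            yield (agent.domain, mode, proposal)

boss = SuperAgent([CapacityAgent()], autonomous_domains={"capacity"})
for decision in boss.run({"util": 0.9}):
    print(decision)  # ('capacity', 'act', 'add WiFi capacity')
```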
To me, the important points here are, first, that autonomy has to be granted based on policy; second, that AI should be viewed as working within an agent hierarchy, just as humans do; and third, that some AI has the role of analyzing the results of other AI agents. I don't hear these points spontaneously from enterprises, but they sure seem to be validated by inference. Why don't I ask? Because my whole approach to getting user information on tech is to rely completely on spontaneous comments; asking questions inevitably leads the subjects and creates a bias in responses.
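That third point, AI analyzing other AI, could be as simple as a reviewer agent that gates a specialist's proposals against top-level policy. Again, this is a sketch under my own assumptions (the allowlist check is purely illustrative), not any real product's mechanism.

```python
# A sketch of the "AI reviews AI" point: a reviewer agent checks a
# specialist's proposed action before it can execute. The allowlist
# standing in for "policy" here is purely illustrative.
ALLOWED_ACTIONS = {"rebalance channels", "raise capacity alarm"}

def reviewer_agent(proposal: str) -> bool:
    """Approve only proposals the top-level policy explicitly allows."""
    return proposal in ALLOWED_ACTIONS

def gated_execute(proposal: str) -> str:
    if reviewer_agent(proposal):
        return f"executed: {proposal}"
    return f"escalated to human: {proposal}"

print(gated_execute("rebalance channels"))   # within policy -> executed
print(gated_execute("fail over core site"))  # outside policy -> escalated
```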
The reason I think these points are important is that I also don't see much recognition of them among AI providers. If I'm interpreting enterprise comments correctly, and so far they're not contacting me to say otherwise, then there's an opportunity here to frame AI the right way. It's also an indication that a lot of the current AI activity, startup and otherwise, may be missing the sweet spot in the market.
Some companies (like Cohere, as reported by Reuters) say they're focusing on customized models rather than following the industry push to build one huge model that behaves like a superhuman expert. IBM, as I've noted in other blogs, seems to be accepting a specialist-model approach too, and most small-language-model providers are at least offering specialist support, though in many cases they're still positioning for less demanding generative AI missions. Part of this, I believe, is that the majority of highly successful enterprise AI applications are chatbots, which mimic generative-LLM tools but operate in a more specialized way. Another part is that the various techniques for specializing cloud LLM tools (like RAG) are getting most of the ink in the space, no doubt partly because the providers are lobbying the media.
Is the human-organization-bound model of AI agents, a model that allows for both human and agent policy supervision, a missing link in enterprise transformation via AI? I think it may well be. We have, after all, made the study of optimum organizational structure a business-school classic, so shouldn’t we expect to do the same for optimizing AI agents? Have we perhaps gotten so obsessed with the future of AI, the concept of sentience, that we’re mortgaging the present? Maybe the proverbial hype cycle is partly to blame, or maybe the get-rich-quick mindset we see in the VC space.
If we did 6G right we could revolutionize not only networking but our lives. If we did AI right we could do the same, perhaps even more easily. If we did both right, the impact would be massive. So dreaming big isn't the problem so much as failing to parse big dreams into achievable chunks. We can do better.