Several articles (like THIS) have noted that generative AI use dipped sharply in early June as schools closed out for the summer. Most take the stand either that this shows homework cheating is a major use of AI, or that it shows kids need to integrate AI more into their lives. What I think it really shows is that there are two totally different AIs out there, and that the real ROI for AI investment, by both suppliers and businesses, depends almost entirely on the second, the one we don’t talk much about at all. That means we don’t realize that there are actually multiple factors that divide AI implementations, and that if we look deeper there may be more than two forms.
If you think about it, it’s logical to assume that “artificial intelligence” has some things in common with human intelligence. For example, we have “generalists” and “specialists” in a lot of fields, including medicine, and even thinking of the broad population, there are always those who know a little about a lot of things and those who know a lot about a few things, maybe only one. “Jack of all trades, master of none.” That means we could expect there to be an AI that functions as a source of general knowledge and an AI that’s specialized, our first division.
We also have to ask what we mean by “intelligence”. In humans, the term “intelligent” describes both people who have powerful intrinsic reasoning capability, a high IQ, and people who know a lot of things, which could relate to being well-educated, well-read, or just experienced.
With AI, there’s another division, and that’s the “chatbot” versus “copilot” versus “contractor” usage model. Is AI used to empower a human user, meaning that it acts as an advisor or agent in responding to questions or requests, or does it do something more independently? This is an “autonomy” question, and it’s obviously a sliding scale, going from simply providing information, to perhaps making suggestions, to actually doing something on request, or even acting without an explicit request at all, responding to external events.
If we take this back to generative AI usage and school, we can see that the “chatbot” or “copilot” form of AI is what students would likely want. You ask it to list countries alphabetically or by population, area, GDP, and so forth. There are a lot of subjects taught, a lot of student diversity of skills and interests and needs, so you want AI to be a bit like the nerd classmate who can give you an answer to a test question or write a paper for you. This is a knowledge/generalist model in chatbot form.
If this works for students, why not for business? Enterprises point out that the problem is that helping individual employees this way doesn’t necessarily translate into quantifiable benefits. Almost every enterprise spontaneously tells me that their company is happy to let employees use free (or sometimes very low-cost) AI services in this personal-productivity way. However, this use has to be tempered by awareness of issues of data security and sovereignty—company secrets could be divulged to an AI model, which is bad in a risk management and governance sense. Worse, they might explicitly become public, since AI models often train on interactions with them. It also has to consider whether each AI-empowered worker is actually gaining productivity, what that productivity is worth, and what the chances are that AI offers the wrong answer—hallucinates.
The most straightforward way to look at AI from an enterprise perspective is to divide the workers into four groups based on unit value of labor and the ability of AI to support them. The first group, classically called “blue-collar”, are workers who perform physical tasks. The second, “white-collar non-professional”, perform office tasks that don’t require special education/training, and the third, “supervisory”, perform tasks that require experience and perhaps a specific interpersonal skill set. The final group, “professional”, perform tasks that require education/training for specialization. Enterprise strategies for AI adoption could target one or more groups, but the way that would have to be done and the chances of success vary.
According to what my enterprises tell me, backstopped by government data, blue-collar workers make up roughly 30% of the workforce, and actually have a unit value of labor that’s higher than the workforce average. Supervisory workers make up about 20%, half of whom supervise blue-collar tasks. Professional/technical workers make up another 30%, and white-collar non-professionals make up 20%. Enterprises say that they have found that it’s the white-collar professional, white-collar supervisory, and white-collar non-professional workers who can use the knowledge/generalist chatbot/copilot mode of AI, but that the actual business value is difficult to obtain for the non-professional, non-supervisory component because of a typically low unit value of labor.
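To make the segmentation concrete, here's a quick arithmetic sketch using the workforce shares cited above. It simply confirms the shares sum to 100% and computes what fraction of workers falls into the chatbot/copilot-addressable groups (professionals, the white-collar half of supervisors, and white-collar non-professionals); the category names and the split of supervisors are taken from the article, not from any external dataset.

```python
# Workforce shares as stated in the article (illustrative arithmetic only).
shares = {
    "blue_collar": 0.30,
    "supervisory": 0.20,          # half of these supervise blue-collar work
    "professional": 0.30,
    "white_collar_nonpro": 0.20,
}
assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares cover the whole workforce

# Groups enterprises say can use the knowledge/generalist chatbot/copilot mode:
# professionals, white-collar supervisors, and white-collar non-professionals.
chatbot_addressable = (
    shares["professional"]
    + shares["supervisory"] / 2   # only the white-collar half of supervisors
    + shares["white_collar_nonpro"]
)
print(f"{chatbot_addressable:.0%}")  # → 60%
```

In other words, the chatbot mode of AI could in principle touch about 60% of workers, but a third of that addressable pool (the non-professional 20%) is exactly where the unit value of labor is too low to generate much business value.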
The problem with empowerment of any white-collar job is that even “generalist” knowledge usually has to be combined with enterprise-specific business data to create real value. This vastly increases the risk of a data security breach, and so enterprises tend to see the most valuable AI requiring self-hosting for data sovereignty and compliance reasons. This almost surely means capital and operational costs to sustain the AI, as well as the cost of training and data validity management. That means that the productivity gains AI provides have to be realized in the form of more work output or fewer workers, and that the unit value of labor for the workers involved has to be relatively high to cover these costs.
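The unit-value-of-labor argument above can be reduced to a simple breakeven calculation. The sketch below uses entirely hypothetical numbers (the $2M cost and $150k labor value are my illustrative assumptions, not figures from the article): the higher the unit value of labor, the smaller the productivity gain needed to pay for a self-hosted AI deployment.

```python
# Hypothetical breakeven sketch; all dollar figures are illustrative
# assumptions, not enterprise data.

def breakeven_productivity_gain(annual_ai_cost, workers, unit_value_of_labor):
    """Fraction of labor value the AI must add per year to cover its cost."""
    total_labor_value = workers * unit_value_of_labor
    return annual_ai_cost / total_labor_value

# A $2M/year self-hosted AI supporting 500 workers whose labor is worth
# $150k/worker/year needs roughly a 2.7% productivity gain to break even.
high_value = breakeven_productivity_gain(2_000_000, 500, 150_000)
print(f"high unit value of labor: {high_value:.1%}")   # → 2.7%

# The same system supporting workers worth $50k/worker/year needs 8.0%,
# which is why low unit-value white-collar roles are hard to justify.
low_value = breakeven_productivity_gain(2_000_000, 500, 50_000)
print(f"low unit value of labor:  {low_value:.1%}")    # → 8.0%
```

The point isn't the specific percentages, which depend entirely on the assumed costs, but the shape of the relationship: breakeven gain scales inversely with unit value of labor, which is the enterprises' argument in a single division.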
An increasing number of jobs fit into the professional/technical category, including jobs that involve working in the real world, interacting with people and things. These jobs create special challenges in empowerment via AI because AI can’t replace actual interaction without considerable augmentation, sometimes via IoT and other times through speech, drawing, etc. Integrating AI with the real world is clearly an investment, and one that involves multiple technologies, vendors, software elements, and likely AI capabilities. The distribution of workers by job classification is highly variable across verticals, which means that there is no single AI model that is universally optimum.
All of this is what has led enterprises to focus on “AI agents”, using their own definition of an agent: an AI model that performs a single specialized task, not unlike a piece of application software. As far as enterprises are concerned, it’s this “self-tuned” view of agents that will define the future business value of AI, not the widely hyped public generative AI services.
