AI is one of the few technologies that get ink in both the technical and the popular press. One example of the latter is this piece from NBC News. It’s interesting because it captures so many of the contradictions of AI, which of course makes it a nice framework for my own assessment, based on what I hear from enterprises…probably some of the same ones that contributed to the research the article cites. As you’ve probably guessed already, my views and those expressed in the article touch but don’t merge.
The essential conclusion of the piece is that AI is vastly overhyped in terms of how much of it is really happening. The story says that AI is a feature of earnings calls but that much of what’s said is at best exaggerated and perhaps often downright untrue. It cites research that says “few businesses are able to use AI tools currently, as the tools are expensive and relatively few people know how to work with them.” What’s interesting is that the truth about AI lies between what the story says and what the story says others are saying.
According to the research that’s cited, only 4.4% of businesses say they’ve used AI to “produce goods or services”. Whether that’s true or not depends on just what “produce” means. My own contacts with enterprises show that if one defines AI use in the sense most of us would understand, virtually every enterprise “uses” AI. Even limiting “use” to the production of goods or services doesn’t really move the ball much, since there’s little a company does that isn’t related to the production of goods or services.
On the other hand, there’s a lot of truth in the article’s assertion that every company is pushing its AI credentials, particularly on earnings calls. AI is indeed, as the article suggests, in its “very, very early stages.” This seems odd given that AI has been around in some form for decades. The fact that we’re viewing it as new, and the assertion that it’s in very limited use today, stem, IMHO, from the linkage most people draw between “AI” and “generative AI based on large language models and trained on the widest possible set of information resources.”
The “real” AI, as I’ve said before, isn’t the huge public generative AI models, but rather the specialized AI/ML applications that have been evolving for decades. The mere fact that they’ve been around a long time explains why nobody pays much attention to them; news (as I’ve said before) means “novelty”. However, it’s not this “historical” version of AI that’s important, but how it’s been evolving, and how generative AI is influencing its evolution.
About a fifth of the companies who have shared their AI thoughts with me are actually using AI, rather than just trying it out. Obviously that’s a lot more than 4.4%, but surveys on technology use are usually difficult when the technology can be acquired as a service by a line organization rather than built by IT. About half of that number are using AI in a fairly formal way, a figure still well above the one cited by the research. The point is that enterprises aren’t as out of touch on AI as the article suggests, and (as I’ve said in other blogs) they’re actually pretty savvy about how they have to use it to make it worthwhile.
Generative AI in its broadest form, large language models trained on real data, is in fact pretty revolutionary, and it has significant potential impacts on business operations. The stories about superintelligence and risks to humanity aren’t deterring enterprises in any real sense, but they do create some defensiveness among AI advocates, who often have to promote AI trials and pilot tests to senior executives who aren’t exactly up to date on the actual details of AI.
It doesn’t help that the media and Wall Street treat large-public-model generative AI as though it were the only AI when they discuss the topic. That framing obscures an important distinction: the claims of low adoption, of the kind we see in the referenced story and research, are based on formal AI commitments, which are indeed fairly rare, yet every single enterprise I’ve talked with has used public-model generative AI in some way.
Which may be the real story here. We really do have AI testing, formal or informal, going on in virtually every enterprise of any size. I don’t have a huge SMB base, but even there I’ve found that every organization with any formal IT team at all has looked at AI in some form, and well over half have been diddling with it. The problem is that all this diddling is happening at a lower level, all too often without any real sponsorship or interest from above. Half of all AI promoters in enterprises tell me that they don’t get support from their management. More than half admit that they themselves lack what they believe are the essential AI skills. All of them say that they need AI to be more personal.
Maybe Google has that in mind. Its new Gemini AI comes in three levels of power, ranging from Nano (which can run on a Pixel) to Ultra, the most powerful LLM Google has created, and Google plans to integrate Gemini in some form into almost every tool it offers. We may not see herds of Pixel-equipped business analysts testing new business theories, but a hands-on opportunity with AI might help bridge that skill gap by exposing users to the basics in the way that matters, focusing on application rather than technology.
Enterprises tend to view generative AI much the way they’d view an intern with a general mission, what one enterprise described as a “floating intern” who is getting exposed to, and making a contribution to, a variety of aspects of company operations. There are organizations that use this approach (I worked for one long ago, but I wasn’t the intern), but most would say they’d not bring in such a generalist intern. They’d want the intern to have a specific mission, one requiring not broad business knowledge but specific knowledge related to that mission. This seems to be the case with generative AI. Enterprises want to evaluate it in a mission, which means having a specific application in mind. But what is that application, given that the kind of generative AI they’re exposed to can’t really address their specific analytic needs? And what particular flavor of AI has the specific knowledge needed, or can acquire it at reasonable cost with adequate security?
To a degree, it seems we’re asking enterprises to do something they never do, perhaps without realizing it. For decades, they’ve told me that they don’t assess technologies in the abstract; they assess products they can buy and apply. Is AI really in that state? Yes, but not the sort of AI that’s getting all the attention. Generative AI from Microsoft or Google lets users kick AI’s tires, but isn’t that really assessing a technology rather than a product? The question is whether those tire-kicking exercises will actually translate into AI plans, or whether the in-the-background AI/ML work being done in product form by real vendors will emerge and steal generative AI’s limelight.