What do enterprises want from AI? The short answer is “everything”, largely because that’s what enterprises are conditioned to believe is possible. The long answer, of course, is nuanced, complicated, and perhaps in the end even realistic. Not surprisingly, it takes a bit of effort to dig out even the current symptoms of an AI evolution that’s only getting started. I got data from 82 enterprises that were moving along on their AI journey, and their comments offer at least a view of where things are and where they’re going.
The “going” part is easy on the surface; all 82 said they were trialing some form of large language model or generative AI, and 80 of the group said they were trying the technology in multiple applications. The top ones were writing copy (75), chatbots for marketing (71), sales (62), and support (44), software development (37), business intelligence (31), and operations (network and IT, 23). Only 19 of the 82 said they were beyond try-outs and into production, spread among all these areas except operations.
When I talked about specific missions, though, it was clear that for software development and business intelligence, almost half the enterprises were really using a specialized tool that had integrated AI rather than directly using generative AI or LLMs. Not only that, 71 of the 82 enterprises said that they preferred to use AI pre-integrated into an application rather than added on in some way, which actually favors specialized tools over generative AI in the form we’re used to.
The apparent reason for this bias is a desire to adopt AI with minimal changes to current practices and minimal impact on current teams. Of the 80 companies looking at LLM/generative forms of AI, 41 said there was worker pushback on the introduction, largely related to the conviction that their companies weren’t giving them proper training in the use of generative AI. That’s not much of a surprise given that 33 of the companies admitted they gave “little or no” training and expected people to “pick up” the technology through exposure. On average, this group said it took “a couple weeks” for people to get comfortable, but apparently a decent number were still wary at the end of that period.
The five companies most often mentioned by the total universe of 82 enterprises were OpenAI (for publicity alone they’re a winner), Microsoft (their Microsoft 365 office productivity integration was a top reason, and their upcoming Copilot the most-anticipated AI product), IBM (the AI leader at the start of the year, now in third place), Databricks, and DataGPT. I was surprised that neither Google nor Amazon managed to get into the top five (they placed seventh and ninth, respectively), but I think the reason is that those two companies tend to direct their AI marketing more at CIO organizations than at the line organizations that seem to be driving most successful AI ventures.
The final two of our five are interesting in that they represent a more systemic vision of AI adoption: not layered on, as is often the case with generative AI, and not tied to specific tools, as IBM’s offerings tend to be. In that regard, these two may be the most interesting of all, because they implicitly present a future where AI actually frames information access and management in a new way. Databricks’ “Lakehouse” model is the only widely recognized AI-centric framing of information, but DataGPT is raising interest as well. It’s too early to call DataGPT an up-and-comer, but saying it’s standing out a bit would be in order. No enterprise said they’d considered DataGPT and ruled it out, whereas at least a couple said they’d done that with every other company in the top five. Note, however, that DataGPT is a relative newcomer, so enterprises have had less time to assess it.
All of this seems to suggest that a trend I’ve noted in AI for some time is continuing: more focus on AI tools that work on private data, and less on tools that use broad public/Internet data. This is increasing the utility of AI, but it’s also creating security concerns, because companies need to surrender a lot of very private information to an AI tool in order to get the most useful results. That data is often the very stuff companies have refused from the first to store in the cloud, and so there’s a gradual refocusing of AI attention on things that run in-house. That, of course, often means abandoning massive LLM-and-GPU approaches in favor of limited, specialized AI features.
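To make that in-house pattern concrete, here’s a minimal sketch of what it can look like in practice. The endpoint URL, route, and JSON fields below are hypothetical placeholders rather than any particular vendor’s API; the point is simply that the confidential data travels only to a small, specialized model running on the company’s own infrastructure, never to a public generative-AI service.

```python
import requests

# Hypothetical in-house inference endpoint; the host, port, route, and
# JSON shape are illustrative assumptions, not a specific product's API.
LOCAL_LLM_URL = "http://ai.internal.example.com:8080/v1/completions"

def summarize_internally(confidential_text: str) -> str:
    """Send a prompt to a locally hosted, specialized model so that
    confidential data never leaves the company's own network."""
    response = requests.post(
        LOCAL_LLM_URL,
        json={
            "prompt": f"Summarize for the sales team:\n{confidential_text}",
            "max_tokens": 256,
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["text"]
```

The public-cloud alternative would simply swap in a vendor’s endpoint and an API key, and that swap is exactly the step these enterprises are reluctant to take with their most sensitive data.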
If we look beyond LLM-based AI at broader AI/ML applications, there is little consensus among the 82 enterprises. Of the group, 65 said they used one such tool already, and 14 said they used at least two (the remainder didn’t know; nobody was confident they used no such applications). No named tool had a significant lead, and in fact the missions the applications served were quite diverse.
Only 17 of the group said they were testing cloud-based AI, and Microsoft was the leader here, with Amazon and Google roughly tied for second. None said they were already in production with their own cloud AI applications, but the area was considered promising by well over three-quarters of the group. Of the 65 who were not using AI features in their cloud services, the reason given most often was the lack of a clear target mission, followed closely by a lack of internal development skills.
What about future trends, opportunities, and risks in the AI space? Remember that enterprises admit to a lack of internal AI skills, so their credentials for assessing the future of the technology are open to question. Not only that, even the enterprises that had production applications of AI were divided on future trends. For example, enterprises using a pre-integrated, specialized version of AI tended to see their use evolving incrementally, consistent with the way the tools that AI was integrated into were themselves being used. Those using generative AI for applications like copy development or chatbots believed, by a 3:1 margin, that the technology would eventually “largely replace” human workers.
A third of enterprises said they believed AI was a risk to jobs; the remainder believed it would improve productivity but would not result in significant job loss. Only a few percent, just above the level of statistical significance, thought AI was a societal risk or that it might eventually be an active threat to humanity; most laughed at the idea. The biggest threat with any significant support (from three-quarters of enterprises) is the deepfake risk: the possibility that AI would be so successful in emulating people that it would be used to construct doppelgangers of famous (or even ordinary) people to trick and defraud. Enterprises believe that rules and laws around AI governance are important, and when asked how those rules should be directed, they point to this people-emulation issue as the prime target.
A final interesting point is the enterprises’ own shift in AI views over the year. Of the 82 who commented, 42 said they were “more optimistic” that AI would present greater value to them in 2024. Only 9 said they believed it would be less useful than they’d thought, and the rest said their views hadn’t changed. But only 19 of the 82 said their view of AI’s missions for 2024 was broader than it had been this year, which suggests they believe they’ve already identified the best AI applications and are now just tuning the tools to address them. They aren’t more negative on AI, because they’re not negative at all, but they sure seem more realistic. That’s a good thing.