There’s been a rethinking of the value of cloud computing, for sure. As I pointed out in my last blog, there’s also been a rethinking of the value of NaaS. In both cases, hype has been one of the factors. Linus Torvalds recently characterized the AI space as “90% marketing and 10% reality”, and my candidate as the enterprise AI reality leader, IBM, disappointed Wall Street in its last report. Could it be that Torvalds is right, and that AI reality leadership isn’t important if AI is going nowhere?
Let’s start with a story about AI ROI. A study commissioned by Appen, also reported in The Register, says that ROI on AI projects declined this year, from 56.7% to 47.3%, and that the share of AI projects that actually get deployed declined along with it. The Register story says Appen attributes this to a lack of high-quality training data, but the first of my references includes a figure listing year-over-year changes in six AI bottlenecks, half of which got worse in 2024. Of course, Appen is a company that provides data services for AI, so there’s always the question of survey bias, and a look at what enterprises have told me may help address it.
So far in 2H24, I’ve gotten AI comments from 131 enterprises. At a high level, their commentary matches a lot of what Appen said; the number of new AI project launches has declined steadily since the beginning of the year, and interestingly it has declined faster in the second half (so far) than in the first. On average, the decline versus 2023 looks to be roughly 14%, which is more than the Appen study found. For the second half alone, the rate seems to have accelerated to an annualized 17%.
IBM’s earnings report showed that their consulting revenue dipped and missed, as did their infrastructure segment. Software was up, and beat. IBM said that their genAI business stood at three billion dollars, up a billion quarter over quarter. This, to me, says that the IT vendor who leads in strategic influence saw pressure on consulting services, a class of revenue likely to be linked with new strategic projects, but at the same time saw a gain in their AI business. Can we reconcile those points?
First, of the 354 enterprises who offered commentary on IT in the second half of this year so far, 288 said that they were being more cautious overall about new projects. Two stand-out reasons were Fed interest rate policy and the election, both macro-market conditions. None of the enterprises spontaneously cited AI project ROI, or any other AI issue, in connection with this slowdown.
Second, of the 131 who commented on AI, 108 said that generative AI services linked to personal productivity (Microsoft 365, for example) were not expanding as quickly because the benefit was in doubt. In addition, 77 said that generative AI tools were evolving quickly enough to convince them to “wait a while” until the state of the art was better understood. However, of the 14 who were doing self-hosted AI, none indicated they had cut back on the pace of that activity, and 9 said the pace was expanding. And only 5 of the 22 companies that had started AI trials in missions beyond personal productivity said they were holding off.
Third, 95 of the enterprises were evaluating something other than traditional generative AI, something more like deep learning, small models, or just ML. This group seemed largely unaffected by either project slowdowns or AI skepticism. Only 4 said they were slowing their initiatives and none were cutting back, though only 28 were “committed” or “in the process of deploying”. That last number doesn’t suggest a problem; the rate of advance in this group was consistent over the year.
How about the ROI comments? First, almost every CIO I’ve ever talked with would kill for projects that could deliver an almost-50% ROI; among the enterprises I talked with, the target ROI was rarely much more than 30%. Combine that with some of the other data in the study and you get what looks like survey bias. A company that addresses training-data shortcomings could be expected to encounter respondents who have them. The ROI bias is harder for me to explain, but it could be that the heady publicity generative AI has seen set unrealistic expectations, or that IT professionals weren’t involved in the projects and thus the ROI assessments weren’t accurate…take your pick.
Overall, I found that generative-AI services related to personal productivity, and “public” approaches to other LLM applications, were indeed suffering a little. According to enterprises, though, blaming a lack of good training data is an oversimplification. Overall lack of familiarity with AI was their number one problem, followed by a lack of confidence in, or support from, their AI partners. IBM’s success in the space, then, can be traced to its providing a combination of education and good advice, and to its progressively stronger open-model AI position.
What this means for AI overall can be summed up by Torvalds’ comment. Right now, a lot more is being spent on promoting AI than on actually realizing it, which is likely the problem with all of our hype waves. It’s troubling, then, but not unexpected, to see caution developing among prospective users of AI who aren’t getting solid guidance from a source they trust. The question here isn’t whether AI is over-hyped (over-marketed, to paraphrase Torvalds), because we live in a world of constant hype. It’s whether the hype is covering a real value trend for AI.
Google turned in a nice quarter, which they attributed in part to AI. They may be right, but that’s also not the question. The real question is whether Google is seeing the effects of AI hype (the tire-kicking and, yes, some questionable linking of effect to cause) or that real value trend. Which kind of makes this question the same as the first one.
Which, in detail, is this: Is the value of AI in its ability to help a lot of people a little, or a few people a lot? Google and Microsoft and Meta and even OpenAI can profit from mass use of AI applications that each yield little value but make it up in volume. IBM can profit from the use of AI by key decision-makers and professionals, a much smaller group, but one whose output has significant value. Can we call this? Not yet, I think, but consider how often hyped trends pan out. It does make you wonder.