When I blogged last week about Nvidia’s telecom report, I got a lot of questions about survey accuracy. Some asked why the problems existed, some wondered whether the same sort of issues might influence the stories we get from users themselves, and many wondered what impact those issues might have on tech adoption. We’ve all read about the success of AI or 5G or something, after all. How many of those stories are true? Some, almost surely. All, surely not. So let’s try to answer those user questions, in general and in relation to AI in particular.
I surveyed users in an organized way for about 30 years. I got started on a paid survey for a big network vendor, and it was so difficult to line up people who’d talk with you and who actually knew something that I kept in touch with the group, originally numbering 300. Over the years, the number dwindled a bit, but often when somebody left the list for some reason, they were happy to nominate a successor. The key point, to me, was that I knew the people and they knew me, and I felt I was able to get truthful responses. I did one or two surveys a year so I’d not burn them out, and I shared the results with them while preserving the confidentiality of everyone involved. I was confident I got the truth then, and I’m still confident about that today, so I had a pretty good baseline to assess what I heard in other surveys, and I was even asked on multiple occasions to critique a survey done by someone else.
What I found was that somewhere between 30% and 40% of people who are surveyed will give an inaccurate response. Sometimes it’s because they don’t know enough about the topic, sometimes because they want to look smart or influential or in touch, but in any case they’ll answer and be incorrect. I’ve seen people in that group claim to be using technologies that weren’t commercially available, or say they used a form of the technology that didn’t and couldn’t exist. In some cases, I had information about their company that was totally contradictory to their answer and totally credible. In some cases, they were just too enthusiastic and optimistic, claiming a value that didn’t exist or couldn’t actually be realized at the bottom line. And I’ve also seen surveys where fewer than ten percent of those surveyed even qualified for the survey in the first place, so the firm doing the job essentially falsified their results. The point is that I think that well over 90% of the surveys I’ve examined were, based on my own data, totally inaccurate. Of course, you have to ask whether I’m blowing smoke at you, and you should. You should always question market views.
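To see why an error rate like that matters so much, it helps to run the arithmetic. Here’s a minimal sketch of how a biased inaccuracy rate inflates a reported adoption figure; the specific rates used (20% true adoption, 35% of non-adopters claiming adoption, 5% of adopters understating it) are my illustrative assumptions, not survey data.

```python
# Illustrative sketch: how biased response inaccuracy inflates a
# reported adoption rate. All numbers here are assumptions for
# illustration, not survey data.

def reported_adoption(true_rate, false_positive, false_negative):
    """Adoption rate a survey would report, given the share of
    non-adopters who falsely claim adoption (false_positive) and
    the share of adopters who deny it (false_negative)."""
    honest_adopters = true_rate * (1.0 - false_negative)
    pretenders = (1.0 - true_rate) * false_positive
    return honest_adopters + pretenders

# Suppose 20% of firms have really adopted, and the inaccuracy is
# biased toward looking current: 35% of non-adopters claim adoption,
# while only 5% of adopters understate it.
print(reported_adoption(0.20, 0.35, 0.05))  # ~0.47
```

With those assumed numbers, the survey would report roughly 47% adoption against a true 20%. An error rate in the 30-40% range doesn’t just add noise; because the errors lean one way, it can more than double the apparent adoption.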
One thing enterprises told me, on this topic, is that most of the AI adoption so far has been “citizen AI” pulled in by line organizations, dealing with productivity in areas like document development, or with areas where data governance wasn’t an issue because the data wasn’t business-critical. This stuff necessarily used cloud-hosted tools, and was based on expensed services that didn’t involve an actual project approval and business case. If you asked these “citizens,” they’d tell you they were getting a benefit, but where the proposed uses of cloud AI expanded to include areas subject to governance, and so involved IT and the CFO, they were not approved and could not prove a business case. These casual uses of AI may be what’s driving the hype wave, because line personnel are more likely to exaggerate, or to ignore formal business-case issues.
To get this all linked to our topic, here’s the critical point. People want positive reinforcement, to be respected, maybe to be liked. People like to be surveyed; it makes them feel important. When there’s a hot technology, few will admit knowing nothing about it, and few will admit their own company is behind the massive wave of adoption they hear about. Hype waves carry a lot of people, and companies, along. The less involved with formal project processes these people are, the less likely it is that they’d present an accurate picture of benefits. They’ll just want to look smart, connected to the biggest and hottest topic. If you’re using AI, it’s less likely to take your job, right?
Companies want to look good too. They’re responsible to their investors, public or private. The public ones have to make quarterly regulatory filings, do earnings calls, and in general play up to Wall Street. If there’s a technology that’s sweeping the world, in a hype-wave sense at least, then there’s an advantage to having a story that engages with it, and a risk if you don’t. Make your own kind of music, sing your own special song, but it’s safer to occasionally be part of a chorus. Companies can’t fake financial reports without major risk, but they can spin numbers in a lot of ways.
Recently this has taken a specific form, one that exploits the fact that a massive hype wave creates a way of shielding negative things, or even making a positive out of them. For example, not a single enterprise or telco contact has told me that their company was able to cut a significant number of jobs by adopting AI, but most of those who had reduced headcount to cut costs and improve profits said their company had claimed AI enabled it. It was a good look, when saying you were laying off people to improve profits was not.
Where does this leave AI? There’s no easy answer to that. Right now, there is credible progress toward building a business case for the AI investment that’s already been made, and even for some modest growth in that investment. But Wall Street doesn’t reward modesty, and companies that have seen massive stock gains cannot sustain those gains by simply justifying current deployments. They need massive new business case development. Is that possible? Yes. The barrier to it is failing to recognize you need it. Companies won’t spend more and more without getting more and more. If there’s an expectation that hype will save the day, then will those new business cases develop?
Enterprises tell me that they really believe in AI agents, in the self-hosted, component-like form they’ve always said they needed to be deployed in. There is no difference between data governance in AI and data governance in software, they say. Most of their business-critical stuff is not going to run in a cloud, whether it uses AI or not. They also tell me that they’re working through the processes of building skills and identifying tools, and that they believe much, though not complete, progress will be made this year. But they do not see this as a revolution, for the simple reason that things aren’t adopted in a revolutionary way. It would displace too much cost, create too much risk, and strain the credibility of any business case. They also don’t see what’s being published about AI as particularly helpful. They don’t want full autonomy, for example; they want AI to operate within the same kinds of constraints that software copes with today. Trust but verify.
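To make that “trust but verify” posture concrete, here’s a minimal sketch of the pattern enterprises are describing: an agent’s proposed action passes through the same governance gate that any conventional software component would face before it executes. Everything here (the names, the check, the protected-data list) is a hypothetical illustration of the pattern, not any vendor’s API.

```python
# Minimal sketch of "trust but verify": an AI agent's proposed action
# is treated like any other software component's output, and must pass
# the same governance checks before it takes effect. All names here are
# hypothetical illustrations, not a real API.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    target: str      # e.g., a database table or config item
    operation: str   # e.g., "read", "update", "delete"
    payload: dict

def governance_check(action: ProposedAction) -> bool:
    """The same rules that constrain ordinary software: no writes to
    business-critical data outside an approved change, for instance."""
    protected = {"customer_billing", "order_ledger"}
    return not (action.target in protected and action.operation != "read")

def apply_agent_action(action: ProposedAction) -> None:
    # Trust the agent to propose, but verify before anything executes.
    if not governance_check(action):
        raise PermissionError(f"Blocked by governance: {action}")
    print(f"Executing {action.operation} on {action.target}")

apply_agent_action(ProposedAction("inventory_cache", "update", {"sku": 1}))
```

The design point is that the agent is just another component: the governance layer doesn’t care whether a proposal came from a model or from conventional code, which is exactly the parity enterprises say they want.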
We’ll have to wait a bit for reality to catch up with hype, and the risk now is that key AI players will see that as a threat to their stock and try to exaggerate further. That could actually slow AI evolution, and success. I understand the appeal of hype, of clicks, but this is not the way to run an industry.
