Here’s a basic truth that’s ignored too often: you can’t get right answers from wrong questions. You can, however, use wrong questions to boost clicks and hype, and we see that a lot with AI. Most recently, a non-technology publication published a story titled “20 percent say AI has taken over parts of their job: Survey”. The findings, IMHO, illustrate everything wrong with surveys, and they distort things way too important to distort.
First, “half of U.S. adults reported using AI tools in the last week”. I’m astonished anyone believes this, or at least interprets it as an actual, deliberate commitment to AI. I know mostly tech types, I think, and even among that group I’d barely hit that number. If I look at personal acquaintances only, my estimate would be maybe a quarter, unless you count “using AI” as getting an AI summary in a search. Historically, a third of all people surveyed will claim they use something hot even if they’ve never used it at all.
Next, 27% of users said that AI had automated some of their existing tasks. That’s about half of those who said they used it. Well, what were you using AI for if not to “automate” some of the things you do? For that matter, does your PC, your word processor, your calculator, your spreadsheet, even your email “automate” some of your existing tasks? Does a power saw automate some of your tasks? We’ve used tools to enhance productivity since the dawn of humanity, after all. AI is just another tool, a step on the path to more sophisticated tools. The question is not whether we use it, but how it’s used and, most importantly, whether the use creates value that someone is willing to pay to acquire.
The majority of AI use, even according to the survey, involves AI that’s free to the user for some reason. It’s bundled with something, paid for by their employer, or offered as a free-to-use tool to encourage people to rely on AI and then elect to pay for more or better stuff. It would be nice to see how that sort of AI evolution is going, but of course 1) nobody surveys it because the results won’t get clicks, and 2) the number of people who respond accurately would be swamped by the number who say something because they think it makes them look smart and sophisticated.
Among the people who offer me comments on enterprise tech, who number well over 500, my analysis is that all of them “use” AI in some form, that about 70% get it paid for by their employer, with an average cost of around $200 per year, and that about a third pay for AI on their own, the majority from the group who also get it from their employer, with a slightly lower ($120) average cost. I have a paid AI plan, and I’m sure a lot of industry analysts do as well.
AI saves me some research time, if I use it carefully. So do search engines. Spell and grammar checkers save me some time, too. Any work tool is supposed to save you time and make you more productive. But as enterprises point out to me all the time, saving worker time and improving worker productivity is not, in itself, a business case. You have to be able to somehow move that improvement to the bottom line to offset a cost. Right now, that collides with the problem of AI errors.
Recently I ran a test on my AI (Google Pro Deep Research). I gave it an assignment I’d already completed on my own: researching economic data from a number of sources. I’m a good economics researcher but by no means an economist, and I didn’t have a major problem getting the data. The AI project ran for over a half-hour, giving me progress reports that certainly made it look like it was getting to a result. But it didn’t. The AI was unable to find all the information, and it simply filled the columns for some of the data with “NA” for “not available”, when the data obviously was available. I didn’t consider this a complicated analysis, but AI didn’t produce any usable result at all.
The problem that produces is obvious: you can’t always get the right answer from AI. When you get no answer at all, as in my experiment, the failure is clear but the remedy is less so. I knew the right answer, but suppose I hadn’t? If you’re a business that expects AI to empower someone’s research, perhaps enabling you to put a lower-cost person in a job and create a labor-cost benefit, you’ve missed your goal. When you get a wrong answer, the problem is that you’re now stuck with a costly error unless your worker spots it, which means either the cost of the error has to be charged against the net AI benefit, or your benefit is reduced because you needed human oversight of AI results to keep errors from creeping in.
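The trade-off here can be sketched as simple arithmetic. The function and all the dollar figures below are my own hypothetical illustration, not survey data: gross time savings get offset both by uncaught errors and by the oversight you pay for to catch them.

```python
# Back-of-the-envelope sketch of a net AI benefit per worker.
# All numbers are hypothetical, chosen only to illustrate the trade-off.

def net_ai_benefit(gross_savings, expected_errors, cost_per_error,
                   oversight_cost, catch_rate):
    """Annual net benefit after error and oversight costs.

    gross_savings:   dollar value of worker time saved per year
    expected_errors: AI errors expected per year without review
    cost_per_error:  average cost of an error that reaches the business
    oversight_cost:  annual cost of human review of AI output
    catch_rate:      fraction of errors the reviewer catches (0..1)
    """
    # Oversight catches some errors; the rest hit the bottom line.
    undetected = expected_errors * (1 - catch_rate)
    return gross_savings - undetected * cost_per_error - oversight_cost

# Hypothetical case: $10,000/yr of time saved, 5 expected errors/yr,
# $2,000 per uncaught error, $3,000/yr of review catching 80% of errors.
print(net_ai_benefit(10_000, 5, 2_000, 3_000, 0.8))  # -> 5000.0

# Skip oversight entirely and every error lands on the bottom line.
print(net_ai_benefit(10_000, 5, 2_000, 0, 0.0))      # -> 0.0
```

The point of the sketch is just that the benefit can evaporate either way: pay for oversight and it eats the savings, or skip it and the errors do.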
Enterprises say that the popular model of AI, the “chatbot” that answers questions in some form, rarely creates any actual realizable improvement in profits. The agent models that focus either on a specialized activity for which a foundation model can be trained, or that fit somehow into a business workflow and access company-private data, can generate net improvements in profit, but they’re still exploring the best way to get to a favorable outcome in a world that seems focused on the kind of AI that they already know doesn’t work for them as they’d like.
The enterprises that have done a lot with their IMHO-realistic view of AI have proven it out fairly easily, but so far they represent less than 20% of enterprises. The low rate of success is due partly to the challenge of getting executive buy-in for an AI approach at odds with popular culture, and partly to a lack of tools and expertise. The good news is that there’s more and more going on with the right-for-enterprises model, though there’s still a measure of hype involved. “Live AI” and the notion of AI-based world models are still aimed more at a click-worthy theme than at promoting a practical path to an AI business case, but they’re a closer fit than anything we’ve seen so far, and that may make a difference even by the end of 2026.
Meanwhile, think carefully about any AI surveys you read. Combine the natural and proven tendency of people to say things they believe make them look smart or good with the fact that not much AI terminology is even defined consistently, and you don’t exactly have a prescription for accuracy. Don’t misunderstand; my approach of analyzing spontaneous commentary has limitations too, so take that into account here as well.
