Let’s face it, AI is a bit of a mess. Even among technologists, AI literacy is down at the stone-knives-and-bearskins level. Buyers don’t understand which aspects of AI technology connect to useful business missions, though they still believe that somehow it could be transformational. The space is tied to supergenius technical types, supergreedy VC types, PR shills, talking heads…and with all of this it still generates completely bogus responses maybe twenty percent of the time. And, finally, it’s supposedly going to take over the world and kill off humanity, a fear recently suggested as the reason for the OpenAI flap. At the high level, based on popular public comments, you can see how a planner might have a problem coming to terms with AI. Underneath, there’s still a sense that AI could be transformational if only we understood just what would be needed to apply it to transformational missions.
Think of AI today as the hype of 5G multiplied by ten, augmented with the notion that unchecked it will destroy all lifeforms, but also with the potential to be R2-D2 and C-3PO to our Han Solo, Princess Leia, and Luke Skywalker. Trusted partner, mortal enemy, and now seemingly locked in a business battle that even the participants seem helpless to explain or even describe. Another hype wave, a wave of productivity benefits, a boon to medicine and education, a source of the worst kinds of political and personal risk…what is AI? All of the above, and more, and maybe less too.
Is there a basic truth about AI? Yes, and let’s try to at least expose it if not understand it.
Almost everyone sees AI through the glass of “generative AI”, a form of AI built on what are called “large language models”, which are trained on enormous bodies of text to learn the statistical patterns of language, and which use those patterns to mimic human outputs like images, text, or even speech. Most people have experienced it in the form offered by companies like Google and Microsoft/OpenAI, as a “chatbot” that answers human-language questions that we’d normally try to answer through the use of a search engine. This mass-market application has to be trained on as large a subset of all human knowledge as possible, and has to capture enough of that knowledge to mimic what an expert would say about the topic. It takes massive banks of GPUs, consumes enough power to run a city, and costs a bundle.
Generative AI is the most problematic of all forms of AI, though. First, training it on all that human knowledge means crawling accessible content, much of which may well be copyrighted, much of which is surely false, and much of which is irrelevant. Second, delivering mass-market anything these days runs hard into the barrier of return on investment. Telcos are asking for subsidies because consumer broadband profits are plunging; people just won’t pay a lot for broadband. Will they pay a lot for generative answers? The Internet is largely ad-sponsored, like broadcast TV, and yet companies like Google and Microsoft, who sell and serve ads with their search results, would have to figure out how to introduce comparable numbers of ads into generative replies, or they’d have to charge consumers for usage. It’s hard to see how either would be possible, so it’s hard to say what the business case would be.
So why not business? Why couldn’t AI answer questions about marketing, do customer support, do product planning, improve the efficiency of operations teams, write software…all that stuff? Well, how much of your company or personal data is on the Internet and in use training AI models? How current is the training data, given that we’re generating, so pundits say, petabytes of new data yearly? How much of that vast collection of GPUs, assembled to ingest and “understand” mass-market data, would be needed to analyze our personal or company finances? Businesses don’t need mass-market AI, they need their-market AI. If there is any such thing, we’re not hearing much about it.
The real opportunity in AI, the aspect of it that could well change our lives, doesn’t require that AI provide free-text answers to open questions for the mass of the world’s population. What it requires is that AI provide valuable insights to people who have a high unit value of labor and who make a major contribution to our overall quality of life, health, and economy. Those insights are as specialized as those people are; we don’t go to a doctor for help with our taxes, or to a landscape architect for a market plan for a network vendor. A massive and powerful AI entity is less useful than a bunch of specialized and functionally limited entities.
This simple truth has a lot of impacts. First, we don’t train our specialized AI entities on the Internet, we train them on our own current and past data. Second, they don’t have to understand every nuance of language; they may not even need to understand free-form language at all. They can speak the languages of business analytics, for example, if their goal is to analyze business. All of this simplification means that we don’t need specialized AI to be powered by a warehouse full of GPUs; a single rack would probably serve most business missions.
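To make the contrast concrete, here’s a deliberately minimal, hypothetical sketch of the “specialized” idea, with invented data and plain Python. Instead of a model trained on the open Internet, a simple retrieval step matches a question against a company’s own records using bag-of-words cosine similarity; no free-form language understanding, no GPU warehouse. Everything here (the records, the function names) is illustrative, not a real product’s API.

```python
from collections import Counter
import math

def vectorize(text):
    # bag-of-words term counts; a stand-in for a real embedding model
    return Counter(text.lower().split())

def cosine(a, b):
    # cosine similarity between two term-count vectors
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# hypothetical in-house records -- the "training data" here is just
# the company's own current documents, not the open Internet
records = [
    "Q3 router sales fell 12 percent in the EMEA region",
    "support ticket volume doubled after the 4.2 firmware release",
    "marketing spend shifted from print to sponsored webinars",
]

def retrieve(query, docs):
    # return the in-house record most similar to the query
    qv = vectorize(query)
    return max(docs, key=lambda d: cosine(qv, vectorize(d)))

print(retrieve("why did router sales drop", records))
# prints the Q3 router sales record
```

The point of the sketch isn’t the technique, which is decades old, but the scale: a component like this, paired with a small model, runs on ordinary hardware against data the mass-market systems never see.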
There are really two AIs out there. One is the wildly publicized generative AI, which has vast scope but limited utility, and which poses (if any does) the risk of major negative societal and economic impacts. The other is “business AI”. We’re certain to see both of these develop and deploy, but I think the first of the two will always get the buzz and the second will inevitably provide the value.
What does this mean for job impacts? I think the second, “business AI” model is as much of a threat to employment practices and patterns as the generative version, or more. You can write pablum pieces based on likely-obsolete information with mass-Internet-trained generative AI, but you can’t do much company- or product-specific work. Microsoft’s Copilot will surely make document development a bit easier, but it’s not going to radically impact the number of people developing documents. Same with coding. On the other hand, you could use business AI to actually produce working material on company operations and product marketing. The question at this point isn’t whether the resulting material would meet quality goals, but rather whether it would actually displace workers, or simply shift their mission to one of “layout and review”, the process of directing AI at the right targets and checking the results for problems. Right now, that’s a question we can’t answer, but we probably will see some signs of where that’s all heading in 2024.