Sometimes the past can teach us the future. Is AI one of those times? Perhaps, and so I want to take a fresh look at some past data to find out.
Those of you who follow my blog know that I’ve often talked about the past three cycles of enterprise investment in information technology. The cycles were the “mainframe” period from roughly 1950 to 1970, the minicomputer period from 1970 to 1980, and the PC period from 1980 to 1999. Each of these three periods represented at least a small-scale revolution in automation, and each brought IT closer to workers.
In my ongoing assessment of AI, one of my tasks was to turn Google’s Gemini Pro Deep Research loose on the evolution of computing, and it raised a point I’d often thought about but never really covered: that this overall evolution can be seen as a result of the commoditization (Gemini Pro called it “deflation”) of IT. Less investment, over time, bought more processing power, and it was this basic truth that allowed IT to proliferate closer to workers.
The first computer I programmed was a racked system about five feet high, three feet wide, and ten feet long. It had less processing power and memory than my current (cheap-model) smartwatch, and probably cost a thousand times as much, or more. Size alone would have prevented it from playing the role PCs ended up playing, personalizing IT, but that process also demanded a totally different view of applications: a shift from things companies developed themselves to things they bought, and even from things that did stuff to things that let you do stuff. AI is arguably a step in this evolution.
Microsoft once told me, in the early days of Windows, that about 80% of the incremental performance of microprocessors in PC evolution was being directed at the GUI, and I believe it. Not only that, I think that investment was an essential part of the personalization of PCs. You need to offer results to workers in a form they can readily digest, and you need to spend resources to do that. Eventually, you reach a point where having “pretty pictures” is more important than what you’re offering a picture of, where the actual computing takes second place to the presentation of results.
One of the things that this evolution did, of course, was to shift “personal” from the context of “worker” to the context of “consumer”. The truth is that the personal computing revolution, a revolution driven entirely by the plummeting cost of computing technology (microprocessors, solid-state memory), was an essential prerequisite for the Internet age, and the age of consumer dominance of technology overall.
A “graphical user interface” is critical in the consumer market because simple textual interfaces are too limiting in terms of applications. Yes, you can read a book, but not one with illustrations. You can’t stream video, listening to music may or may not be satisfactory, and ads are almost impossible for a consumer audience already accustomed to TV ads. There is no question that a strong GUI advances office productivity, but it’s absolutely mandatory for consumer success. It’s likely no coincidence that the World Wide Web, meaning HTML and HTTP, came along in 1989-1990.
Gaming and entertainment, in my view, drove consumer PC success. Yes, applications like VisiCalc and Lotus 1-2-3 validated personal use of what were obviously business tools, but I don’t think they would have driven massive consumer use of PCs, and thus created the environment in which the Internet grew. Without that, we would never have had the mobile revolution, or the shift of data networking from an almost purely business tool to a market where businesses live in a world created by Internet consumerism.
Another critical element in the transformation of technology the Internet has given us is the deregulation/privatization movement of the 1980s. This movement was itself arguably driven by the same commoditization pressure, this time in the generation of transport bandwidth. With cheap transport, competitors established themselves in the long-distance space (MCI in the US, for example), and public pressure on telcos to open up from their protected monopoly (or even government department, as in “Postal, Telegraph, and Telephone” or PTT) status drove most major markets to deregulate. To sustain profits, uses for bandwidth other than voice calling were essential, but consumer data services were unknown…until the Internet.
In the 1990s in the US, there were only about 50,000 multi-site businesses that were candidates for business networking services, and this number has grown only slightly over time. On the other hand, the number of households was two thousand times as great, and has grown by a third since. What was, in the 1990s, an almost science-fiction data rate of 45 Mbps would, today, be considered inadequate for suburban household broadband service. There were only a few thousand 45 Mbps connections in the 1990s; today there are over 80 million US households with more bandwidth than that. Consumers have driven data traffic far more than businesses have. Consumers have also driven computer spending more than businesses have. What about AI?
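To put those market sizes side by side, here’s a minimal sketch in Python using the figures above; the household counts are derived from the ratios in the text, rounded for illustration, not survey data.

```python
# Rough market-size comparison, using the figures cited in this post.
# Household counts are derived from the text's ratios, for illustration only.

multi_site_businesses = 50_000                       # 1990s candidates for business networking
households_1990s = multi_site_businesses * 2_000     # "two thousand times as great"
households_today = households_1990s * 4 // 3         # "grown by a third since"

print(f"Households in the 1990s: {households_1990s:,}")    # 100,000,000
print(f"Households today:        {households_today:,}")    # ~133,000,000
print(f"Household-to-business ratio today: {households_today // multi_site_businesses:,}x")
```

The ratio is what matters: a consumer market thousands of times larger than the business one is what bent both traffic and spending toward the consumer.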
Computer success in the consumer market was pulled through by business investment in computing. Businesses pioneered data networking, even the Internet. It’s reasonable to believe that business applications of AI will be essential to drive AI investment, improve technology, and start AI on the same unit-cost-commoditization decline that both computing and bandwidth experienced, and that ultimately brought both into consumers’ price tolerance range. But will that be necessary? I think you can argue the other side.
The Internet has created the as-a-service world, not only in business terms with cloud computing, but also in consumer terms. The average household isn’t going to pay a thousand dollars for an industrial-class GPU, but would they pay a couple hundred for as-a-service access to the AI applications that would require that, and more, to run? It may be that AI consumerism is not only the thing that could validate all that AI spending by giants like Google, Meta, and Microsoft, and startups like OpenAI; it may be what they’ve been working toward all along.
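A minimal break-even sketch, using this post’s round numbers and treating “a couple hundred” as an annual fee (my reading, not a stated price):

```python
# Break-even sketch: one-time GPU purchase vs. as-a-service access.
# Dollar figures are the post's illustrative numbers; the annual-fee
# interpretation is an assumption, not a quoted price.

gpu_capital_cost = 1_000.0       # one-time "industrial-class GPU" purchase
service_fee_per_year = 200.0     # "a couple hundred," assumed annual

years_to_break_even = gpu_capital_cost / service_fee_per_year
print(f"The service fee stays below the GPU's cost for {years_to_break_even:.0f} years,")
print("and the service can draw on far more hardware than $1,000 buys.")
```

The point of the comparison is that the as-a-service model moves the capital burden off the household, just as cloud computing moved it off the business.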
There’s a noticeable dip in AI traffic when school is out for the summer, so student use of AI services is already significant, even dominant. I don’t do consumer surveys, but my model says that an equivalent to Google’s Gemini Pro would likely be accepted by 15% of households, which is a bit under 20 million. Give it away, and you could reach about two-thirds of households. However, the model suggests that you’d need a “household plan” for AI because families would resist paying per-person.
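As a quick check on those figures, here’s the arithmetic, assuming roughly 131 million US households (an external round number, not an output of my model):

```python
# Sanity-checking the adoption figures; the household count is an assumed
# round number (about the recent Census estimate), the percentages are the
# model outputs cited above.

us_households = 131_000_000

paid_adoption = 0.15 * us_households       # "accepted by 15% of households"
free_adoption = (2 / 3) * us_households    # reach if you "give it away"

print(f"Paid adoption: {paid_adoption / 1e6:.1f} million households")   # ~19.7M, "a bit under 20 million"
print(f"Free adoption: {free_adoption / 1e6:.1f} million households")   # ~87.3M
```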
The bigger question for consumer AI, though, is whether it can present enduring value. There’s no need to justify free things, but you do need to justify something you pay for. Further, my model says that while a currently compelling value will justify a capital purchase or an ongoing fee, the latter needs regular reinforcement that has to be marketed to the consumer, whereas capital deals like buying a PC induce the buyer to try to find other uses for something they’ve already paid for.
AI can succeed without the consumer, but could the service model succeed that way, given business concerns about data security? That’s something I’m still working on.
