Sometimes it’s best to break down something complicated so it can be understood. Sometimes, breaking things down leads to losing the forest in a maze of trees. Which of these truths applies to computing, to hosting, and ultimately to networking? We’re likely on the verge of finding out, but not quite there yet, which means that figuring out the answer could well lead to a competitive advantage. It also means we need to look at the impact of various AI strategies on the economy at large. AI can solve problems, create them, or do both at the same time, and every player who wants to profit from it needs to think about all these outcomes. As is often the case, we need to lay some groundwork.
To start with, we have to look at the impact of tech in general, and of AI in particular. Broadly, technology means “automation”, which in turn means improvements to productivity created by assisting or replacing human labor. The plus side of this is that it reduces the cost of business operation, but the minus is that it tends to reduce employment, which reduces buying power. A massive application of AI to cost reduction could have that impact, and that’s the logic behind the stories of AI taking our jobs. But if AI were to improve production, reduce the cost of goods, and empower more people to do better, it could in theory raise the overall standard of living, boost consumption and GDP, and create a better economy. It’s a matter of application.
If we looked back at enterprise IT investment, we’d find that the major waves of IT spending growth coincided with, and perhaps were driven by, the distribution of computing. We started with giant systems in pristine data centers, moved to minicomputers, then personal computers, and then to portable things like tablets and smartphones. At the same time, we brought IT closer to work, to the “point of activity”. Not surprisingly, this push toward closeness has created countervailing forces.
One such force is the pressure of technology literacy. Yes, we can bring computing literally into everyone’s hands, but that raises the level of personal tech literacy needed to sustain it. As the personal tools get more complicated, the people using them have to get more skilled, or they not only can’t apply the tools optimally, they can’t even protect themselves from errors, hacking, and the like.
Another force is the mighty force of economics. Some IT tools, like large language models, are expensive to host and may not be used by a given person/worker very often. That makes the economics first challenging, and eventually insurmountable. So, you build a pool and share it, which is what we see in cloud computing.
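To make the sharing argument concrete, here’s a toy calculation with invented numbers; the host cost, usage hours, and pool size are all assumptions, not data. A model host that sits idle most of the day carries an enormous cost per hour of actual use, and spreading it across a pool of occasional users is what makes the economics workable.

```python
# Toy utilization arithmetic; every figure here is an invented assumption.
host_cost_per_hour = 4.00      # assumed cost to run a dedicated model-serving host
use_hours_per_worker = 0.25    # assumed hours of actual use per worker per day

# Dedicated host: paid for 24 hours, used for a fraction of one.
dedicated_cost_per_use_hour = host_cost_per_hour * 24 / use_hours_per_worker
print(f"Dedicated: ${dedicated_cost_per_use_hour:,.0f} per hour of actual use")  # ~$384

# Shared pool: the same host serves many workers' occasional requests.
workers_sharing = 50
pooled_use_hours = workers_sharing * use_hours_per_worker                        # 12.5 hours/day
pooled_cost_per_use_hour = host_cost_per_hour * 24 / pooled_use_hours
print(f"Pooled:    ${pooled_cost_per_use_hour:.2f} per hour of actual use")      # ~$7.68
```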
AI is bringing both these forces to bear, and on that point we can take up our main question. Can AI be a net good in a complicated world? The industrial revolution of the past disenfranchised a host of craft specialists because anyone could run a machine. Will AI produce a set of super-machines that are even more productive than traditional industry, a new level of automation? If so, will it disenfranchise everyone and steal all jobs, or will it define something new? In the world of AI, and in any future that depends on AI driving a revolution, that’s the big question.
Ultimately the industrial revolution was good for the economy, and surely raised the average standard of living. In modern terms, it was accretive to GDP. Why? Because it did make quality goods cheaper, so less income was needed to buy them. At the same time, it gave more people a role in production, which gave them more income. More money, less money needed to buy, equals more buying, more economic activity, and all whirling around in a positive feedback loop. Would the AI revolution do the same? Maybe, or maybe not.
The industrial revolution was aimed at managing cost, but the cost was the cost of production. In pre-industrial times, there weren’t a lot of office jobs; workers were laborers and crafts types. Today, about 60% of the workforce is at a desk, doing business but not doing production. IT has, from the very first, targeted this group, and AI is targeting it today. The goal, truly, has been not to cut production cost but to raise profits by cutting “overhead”, and that difference has a pronounced impact on the economy overall. That’s what Amazon recently announced: a reduction in “headquarters” jobs.
Let’s suppose we cut office costs by 20%, and that our target is to manage profits, which is what we see in almost all AI applications today. Those cuts are surely largely labor, so we’ve displaced some jobs. Have we lowered the cost of goods? No. Have we raised production? No. Have we created new consumers? No. In fact, what we’ve done is raise the income of investors, boosting the capital side of the traditional three-legged stool of raw materials, labor, and capital. Some of that boost might trickle down, but how much? We don’t know, but it depends on whether the additional income goes to people who will actually spend it to buy things, rather than simply adding to stored wealth.
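As a sanity check on that reasoning, here’s a toy illustration with entirely invented numbers; it simply shows that a pure overhead cut raises profit without touching the cost of goods, output, or prices, while removing wages from the demand side.

```python
# Toy model of a 20% overhead cut; all figures are invented for illustration.
revenue = 100.0          # sales at unchanged prices
production_cost = 50.0   # cost of goods sold; the cut doesn't touch this
overhead = 30.0          # office/"headquarters" cost before the cut

profit_before = revenue - production_cost - overhead        # 20.0

overhead_after = overhead * (1 - 0.20)                      # 24.0 after a 20% cut
profit_after = revenue - production_cost - overhead_after   # 26.0

displaced_wages = overhead - overhead_after                 # 6.0 no longer paid out as income

print(f"Profit: {profit_before:.1f} -> {profit_after:.1f}")
print(f"Wages removed from the demand side: {displaced_wages:.1f}")
print("Cost of goods, output, and prices: unchanged.")
```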
There’s also the question of whether trickling down would create a net gain in the economy/GDP. Over the last few decades, the US has seen most job growth concentrated in the service sector, suggesting that boosting the top ten percent tends to expand service jobs more than jobs overall. These jobs generally pay less, so the purchasing power of workers may decline.
Of course, not all AI is focused on office jobs. Amazon has also said that AI and robotics could radically reduce its need to hire additional workers, which is just a way of saying that jobs will be cut without looking too threatening to the current workforce. However, Amazon isn’t making things; it’s shipping things. That may affect the retail cost but not the fundamental cost of goods, though it does manage to get us out of the office and into the real world.
Ah, the real world. Where things get manufactured, moved, stored, fixed, and so forth. Why has this not been as much of a target for “automation” as the office? The answer is that real-world, real-time work requires information technology to do more than math and reporting; it requires manipulation of real things, which means IT needs not only the ability to do the manipulating but also the ability to recognize the real-world state and operate within it. It’s event processing, not transaction processing, and it has to be done within the time constraints of real-world activity. Latency can kill, literally, in the real world, whereas in transaction processing it’s often buried in human response times and expectations. So we need to process events close to the processes generating them, at the “edge”.
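To illustrate the difference, here’s a minimal sketch; the 10 ms budget, the event fields, and the valve action are my assumptions, not any specific control system. The point is that an edge event handler has to decide and act inside a hard latency budget, where a missed deadline is a failure in itself rather than just a slow response.

```python
import time
from dataclasses import dataclass

# Minimal sketch of deadline-bound event processing at the edge.
# The 10 ms budget and the pressure/valve details are illustrative assumptions.

LATENCY_BUDGET_MS = 10.0   # assumed control-loop deadline for the process

@dataclass
class SensorEvent:
    source: str
    value: float
    t_generated: float     # monotonic timestamp of when the event occurred

def handle_event(event: SensorEvent) -> str:
    """Pick an action, then check whether we acted inside the budget."""
    action = "close_valve" if event.value > 100.0 else "no_op"
    elapsed_ms = (time.monotonic() - event.t_generated) * 1000.0
    if elapsed_ms > LATENCY_BUDGET_MS:
        # In transaction processing this would just be a slow response;
        # in a physical process, the damage may already be done.
        return f"MISSED_DEADLINE ({elapsed_ms:.1f} ms): {action}"
    return f"{action} ({elapsed_ms:.2f} ms)"

evt = SensorEvent(source="pressure-7", value=120.0, t_generated=time.monotonic())
print(handle_event(evt))
```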
If you read all the edge computing stories, you come away with the view that we’re on the edge (no pun intended) of edge greatness. The truth is that we’ve had edge computing for decades; I worked on edge projects fifty years ago. Almost every major industrial process today has edge computers running part or all of it. What we don’t see is edge computing in a public resource pool sense, meaning that the applications that drive edge computing have not yet faced the problems/forces of distribution I cited above. AI, I think, has the same not-yet-faced status, and what might change the situation for one may be what has to change it for the other.
So, what could make a public-edge model useful or even essential? There are multiple things. One is that the technical requirements of “the edge” become too large and too specialized to be realized locally. Another is that the concept of “local” changes. You can’t site compute locally if there really isn’t any single “local”.
AI is a potential force in expanding the technical requirements at the edge, both in terms of the hosting resources needed and the skills needed to deploy and sustain them. In turn, deployment or expansion of AI could be driven by greater complexity of the processes under control. As real-time processes get more complex, it becomes necessary to model them to properly interpret events and control actions. Context, as I’ve noted, becomes critical. We already have examples of contextual modeling, through digital twins, through AI, or through the two in combination. While this could make a shared AI resource pool attractive, it would have to address latency, likely by creating a hierarchy of “edge points” to shortstop time-critical events, supported by deeper shared resources for contextual analysis.
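A sketch of what that hierarchy might look like follows; the names, the budget, and the routing rule are my assumptions, not a reference design. The local edge point shortstops anything whose deadline falls inside its budget and forwards everything else to the deeper shared pool for contextual analysis.

```python
# Illustrative two-tier dispatch; all names and thresholds are assumptions.

EDGE_BUDGET_MS = 10.0   # events that must be answered inside this budget
                        # can't tolerate the round trip to the shared pool

def handle_locally(event: dict) -> str:
    # Fast, pre-modeled reflex action available at the local edge point.
    return f"edge: {event['type']} -> reflex action"

def forward_to_pool(event: dict) -> str:
    # Stand-in for a call into the shared pool, where a contextual model
    # (a digital twin, AI, or both) interprets the event against the bigger picture.
    return f"pool: {event['type']} -> contextual analysis"

def dispatch(event: dict) -> str:
    """Route an event by how quickly it must be answered."""
    if event["deadline_ms"] <= EDGE_BUDGET_MS:
        return handle_locally(event)
    return forward_to_pool(event)

print(dispatch({"type": "overpressure", "deadline_ms": 5}))      # shortstopped at the edge
print(dispatch({"type": "wear_trend", "deadline_ms": 60_000}))   # sent to the shared pool
```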
The locality question is a bit more straightforward. Enterprises are distributed entities, and if the goal of automation spreads across an enterprise, it spreads out geographically. That means there may be a need for company-wide processes hosted deeper than local ones, but the local processes are still needed, so we’re back to a hybrid AI model.
What this means, I think, is that we really need to apply AI to the 40% of workers who do the lifting, pushing, and building, which means we need to apply it to real-time missions. If we do that, we could reduce the cost of goods as well as the overhead of business. Then we need to look at AI as a means of enhancing labor, of empowering new workers in new roles, to ensure that we don’t end up killing demand by killing buyers’ purchasing power.
Would edge AI and production automation raise GDP? I’d love to give an unqualified “Yes!” to that, but we really don’t know for sure. Would they broaden the set of “new” jobs available to absorb workers? Almost surely, but how many new and good jobs would be created, compared with what would be lost, is difficult to say. In the extreme case, you could envision AI and robotics displacing almost every job, which would then demand some totally new economic system to sustain civilization. However, extremes are usually stamped out in their early stages by political and economic forces; displaced workers can’t buy what robots produce. Still, the issues raised by extensive expansion of automation, no matter what the technology, will have to be faced, surely beginning within five years. I hope we’re ready.
