What’s the question that defines 2025? No, it’s not about AI or the cloud. It’s not even about technology in a direct way, but about what’s real and how we decide just what that is.
What is reality? That’s one of those important-but-imponderable questions that comes up regularly, and in this case has been debated for at least a millennium. It’s hard to reach a conclusion from such a mass of debate, but the general view is that reality is defined by consensus. What is “real”, then, is what observers would agree is real. The sky is blue, the sky is “up”, gravity makes things fall “down”, the world is round…you get the picture.
One problem with this nice approach is that some topics can’t be assessed by a random set of observers. For example, you probably wouldn’t want to have an operation based on the opinion of a hundred or even a thousand randomly selected people. From this emerges what’s been called “specialized consensus”, which is consensus drawn from a group actually capable of assessing the topic.
We’ve depended on the specialized consensus model for generations, for medical and financial advice, for governance, and even for news, including technology news. For decades at least, though, we’ve also tried to validate specialized consensus views, because it’s clear that specialized consensus could lead to distortion if the group that provides it is somehow contaminated. The problem is that surveys of the views of many people, whether randomly selected or specialized, are themselves subject to accidental or intentional bias.
All of this may seem philosophical, but it has two distinct paths of impact that are uprooting decades of technology evolution, one of which affects our lives overall. The first is the rise of social media, and the second is the increased difficulty of validating new technologies. Both have their roots in the popularization of information that the Internet created.
Social media creates multiple virtual communities that defy many traditional geographic and cultural barriers. It makes it possible to gather information from, and distribute information to, an enormous segment of the world population. It allows both “consensus” and “specialized consensus” to be achieved, and at the same time erodes the influence of other information conduits. You can see this in the way that advertising investment is shifting from traditional media to social media. This combination is remaking sales, marketing, and entertainment.
The financial shift in ad spend favors distribution multiplication over production. Video content is the soul of entertainment, and historically it’s been ad-sponsored on TV or audience-paid in theaters. Cable TV introduced premium-channel models where consumers paid for content, and streaming has multiplied this option. Time-shifted DVR viewing has been largely displaced by streaming. All of this raises questions about exactly how much content can be produced with the model, and what level of quality will be available. The new content production industry is more diverse, and that diversity divides available revenue among more players, reducing the per-item budget.
Older content, then, will become more valuable in two ways, both of which are already impacting the market. First, rights to reuse material already viewed will become more expensive, which will make companies with long-term rights or their own libraries of material more valuable. Second, “remakes” of older material will become more popular.
New content trends are also already visible. “Reality” shows will increase in number and importance because they’re cheaper to produce. Animated or AI-generated material will also be favored for any new material, for the same reason. This is what’s behind the recent Hollywood strike negotiations regarding AI; obviously no actors would want to be replaced by their own avatars, and the same is true for those involved in production and editing. There will be a great appetite for reducing production costs, because of the renewed competition for revenue and eyeballs.
This will, of course, also generate pressure on technology. Everything in networking, like everywhere else, has to be paid for. Both operators and enterprises tell me they’d love to see some sort of transformational technology come along, but even today their focus is on transforming costs. Unless something changes, that focus can only become more intense, at least with regard to consumer Internet and related technologies.
Ah, the inevitable qualifier! It’s common to think of barriers to progress here as being resolved by technology, but what if the barriers are something else? Like…us.
What single word characterizes tech progress from the arguable dawn in the 1950s to present? I submit that the word would be closeness. Tech has become valuable to us by becoming close to us. We have integrated with it, linked it with our lives and our work, and by doing so shared ourselves with and through it. Every major step in tech evolution has been a step toward closeness, and I think any steps we take from here will be so, too. Which is a problem, because with every step we’ve taken, we’ve raised risks and created back-pressure.
In my very first job as a programmer, I worked with a dizzying number of executives at a big insurance company, and some long-term insurance and accounting professionals. One day, one of the latter burst into the big bullpen-like room junior programmers shared, shouting “Your computer is making mistakes and you’re covering up for it!” As it turned out, the mistake was made by the company that printed the checks, which had printed a magnetic-ink number and a text number that didn’t match, but the point is that tech got the blame automatically. Today, we’d automatically blame AI, perhaps. Or maybe we’d say we were hacked.
Tech can mislead us, even lie to us. Tech can expose us. Tech can enrich us. It’s done all of these things. We had our last big tech wave in enterprise technology over two decades ago, when in the past we never went more than five years between them. Did tech lose its edge, or did our fears overcome our enthusiasm? Was one more step one step too many?
None of the above, I think. The problem we have today is largely one of the tradeoff between benefit and risk. There’s a lot tech can do for us, but we have to surrender more of ourselves to get to the next step. For example, we know from a combination of TV fiction and real experience in some locations that surveillance cameras in public places can not only solve crimes but prevent them…at the cost of recording parts of our lives we’re not accustomed to having reviewed, and the risk of having the images misused. A camera on a street corner could record a crime, or even one that’s about to happen. It could warn of an unwary step off a curb or into a hole, too, but it could also record an illicit meeting, a gesture behind someone’s retreating back, an embarrassing wardrobe malfunction. A camera on a job site could guide a worker, or spy on the worker.
Some of the risks we routinely take today would have appalled people five or ten years ago. The risks we’d have to take to get more of that closeness appall some today. We’ll either need to reduce perceptions of risk, or present benefits that make those risks worthwhile, and that’s likely to be more of a challenge now, because to reap more IT benefits we’ll have to cross a barrier: the barrier of the real world. We’ve allowed ourselves to use tech and accepted the risks that the use might create. To get to the next level, we’ll have to let tech out, let it see and hear what we do even when we’re not using those capabilities at the moment.
The consensus definition of reality plays a role here, in two ways. First, closeness demands a sharing of real-world information with technology, and that sharing has to be objective. We can’t run real-world systems on subjectivism, unless each of us has our own real world. We can already see, in privacy debates, that the risk/reward picture for new technology has a significant element of subjectivism in it, which means that unless we can agree on steps to take, the broad integration of tech with the real world may not be able to gain public policy support. The second point is that even real-world conditions can be hard to assess objectively. Is someone running to cross at a crosswalk a risk that something like an autonomous vehicle should respond to, even when a sudden stop might cause a rear-end collision? Whether those issues can be addressed is, at this point, impossible to predict.