A lot of our tech advances are about carts and horses, chickens and eggs. In other words, they represent examples of co-dependency. 5G is perhaps the leading example, but you can make the same statement about carrier cloud and perhaps even IoT. The common issue with these things is that an application ecosystem depends on a technology investment, and the justification for that investment is the very application ecosystem that depends on it. Is this a deadlock we can break?
The future always depends on the present, and evolves from it. When you read about the things that 5G Release 16 is supposed to add to 5G Core, you see features that are very interesting, and that could in fact facilitate major changes in things like “Industry 4.0”, the initiative aimed at expanding M2M connections to automate more of industrial/manufacturing and even transportation processes. When you read about the more radical forms of IoT, you find things like universal open sensor networks that could create new applications in the same way the Internet, an open network, has. Reading and aiming aren’t the same as doing and offering, though. Something seems to be missing.
And it’s the same thing that’s often missing, the horse that pulls the cart or the egg that hatches the chicken. To mix in another metaphor, this one from a song, we need to prime the pump. The Internet grew up on top of the public switched telephone network (PSTN), and it grew out of a major business imperative for the telcos: creating a consumer data service to eat up some of the bandwidth that fiber technology was creating, thus preventing bit commoditization. We arguably have that fill-the-fiber imperative today, but we don’t have a pre-existing platform like the PSTN to bootstrap from. The question of our time is how we create a self-bootstrapping set of co-dependent technologies.
Evolutionary forces are one answer, the default answer. Eventually, when there’s a real opportunity, somebody figures out a way of getting things started, usually in a form where startup risk can be controlled. The problem with evolution is that it takes a long time. What we need is a way of identifying opportunities and managing startup risk that doesn’t require a few decades (or centuries) of trial and error. To get that, we have to identify credible opportunity sources, credible risks, and then manage them—with an architecture.
The hidden thing that we had in the Internet was just that: an architecture. TCP/IP and other “Internet protocols” are objectively not the perfect solution to universal data networking today, but they were a suitable transformation pathway. There were three important truths about the Internet protocols. First, they were specified; we didn’t need to invent something new, because we already had an Internet when consumer data services were emerging. Second, they were open, so any vendor could jump on them without having to license patents and so forth. That widened the scope of possible suppliers. The final truth is that they were generalized in terms of mission. The Internet firmly separated network and application, making the former generalized enough to support many different examples of the latter.
Architectures, in this context, could do two critical things. First, they could set a long-term direction for the technology components of a complex new application or service. That lets vendors build confidently toward a goal they can see, and for which they can assign probable benefits or business cases. Second, they could outline how the technology pieces bootstrap, ensuring that there’s a reasonable entry threshold that doesn’t require paying for the entire future infrastructure on the back of the first application.
I’ve noted before that one of the issues we have with architecture these days is that telcos think a lot about migration and a lot less about where they’re migrating to. You might think this contradicts my point about 5G, but while 5G outlines an “architecture” for wireless infrastructure, it conveniently declares application infrastructure out of scope. NFV did that with management functions, and it ended up being unmanageable in any realistic sense. So, there are two things that we’d need to have in a real architecture for the future of 5G services. We need a long-term service vision, and we need an architecture defining the necessary elements of the technology ecosystem overall, not just the piece the telcos want to carve out for themselves.
I think that IoT, at least in the sense that it would be a specific 5G application, could be architected effectively by defining both a service interface (protocol) and data formats. We have candidates for the first of these, but mostly for low-power 5G, which isn’t what the telcos, at least, are hoping for. Having a strong definition of a 5G mobile sensor protocol and data format, where “strong” means well-defined and accepted, is essential to getting that ecosystem moving.
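As a thought experiment, here’s a minimal sketch of what such a sensor data format might look like, written in TypeScript. Everything here, down to the interface and field names, is a hypothetical assumption for illustration; no 3GPP or other standards body has defined anything like this.

```typescript
// Hypothetical sketch of a standardized 5G sensor data format.
// All names are illustrative assumptions, not part of any actual
// 3GPP or IETF specification.

// A single self-describing reading from one sensor.
interface SensorReading {
  sensorId: string;                         // globally unique sensor identity
  sensorType: string;                       // e.g. "temperature", "vibration"
  timestamp: string;                        // ISO 8601 time of measurement
  value: number;                            // the measurement itself
  unit?: string;                            // e.g. "celsius", "m/s^2"
  location?: { lat: number; lon: number };  // optional geolocation
}

// The envelope a sensor or gateway would publish over the network.
interface SensorMessage {
  schemaVersion: string;     // lets the format evolve without breaking readers
  readings: SensorReading[];
  signature?: string;        // optional integrity/authenticity proof
}

// Because the format is self-describing, any application can consume
// readings without knowing the sensor vendor, mirroring the way the
// Internet separated network from application.
function averageOf(msg: SensorMessage, type: string): number {
  const values = msg.readings
    .filter(r => r.sensorType === type)
    .map(r => r.value);
  if (values.length === 0) return NaN;
  return values.reduce((a, b) => a + b, 0) / values.length;
}
```

The design point is the same one the Internet protocols made: a well-defined, open, mission-general format lets any vendor build to it without permission.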
If we wanted to go beyond just getting things moving, we’d have to consider the relationship between sensors and services, and then between services and service providers. You could build an IoT application with a simple “phone-home” approach, meaning that the application would be expected to be directly connected to sensors and controllers. That’s not likely to be the best approach for broad IoT adoption. A better approach would be to define a “service” as the digested information resources contributed by a set of sensors, and the sum of control behaviors available through a similar set of controllers.
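A minimal sketch of that “service” abstraction might look like the following, again in TypeScript and again entirely hypothetical. The point is that the application binds to digested information resources and abstract control behaviors, never to the devices themselves.

```typescript
// Hypothetical sketch of the "service" abstraction: applications see
// digested information and abstract control behaviors, never raw
// sensors or controllers. All names are illustrative assumptions.

// A digested information resource derived from a set of sensors.
interface InformationResource {
  resourceId: string;                     // e.g. "traffic.density"
  query(scope: string): Promise<number>;  // digested data, not raw readings
}

// An abstract control behavior backed by a set of controllers.
interface ControlBehavior {
  behaviorId: string;                     // e.g. "signal.timing.adjust"
  invoke(params: Record<string, unknown>): Promise<void>;
}

// The service an operator (or an OTT player) would actually offer.
interface IoTService {
  resources: InformationResource[];
  behaviors: ControlBehavior[];
}

// The application binds to the service, not to individual devices, so
// sensors can be added or replaced without breaking the application.
async function relieveCongestion(svc: IoTService, area: string): Promise<void> {
  const density = svc.resources.find(r => r.resourceId === "traffic.density");
  const timing = svc.behaviors.find(b => b.behaviorId === "signal.timing.adjust");
  if (density && timing && (await density.query(area)) > 0.8) {
    await timing.invoke({ area, extendGreenSeconds: 10 });
  }
}
```

In this model, the service rather than the sensor becomes the unit of commerce, which is exactly what makes the phone-home approach look limiting by comparison.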
That approach raises two challenges: the first is simply defining these services, and the second is establishing an information framework for their use. The latter leads to the question of the relationship between services and service providers. Would operators take on the responsibility for turning 5G-connected sensors and controllers into services, or would they simply provide the connectivity? If the latter, then we’re still stuck in co-dependency. If the former, then operators would have to accept what’s essentially an OTT service mission.
For “Industry 4.0,” things could be, if anything, a little more complicated. 5G, at the technology level, is straightforward, but Industry 4.0 would have to specialize based on the specific mission of the “industry” and of each company within it. Even presuming that we could target applications at the industry level, we’d still have dozens of different combinations of technology elements to deal with, which would mean having dozens of blueprints or templates, each defining a basic business suite that could then be specialized as needed for a given business.
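To illustrate what such a blueprint or template might look like, here’s a hypothetical sketch in TypeScript; the industry, resource, and parameter names are all invented for the example.

```typescript
// Hypothetical sketch of an industry blueprint: a generic base suite
// that gets specialized per company. All names are invented examples.

interface Blueprint {
  industry: string;                     // e.g. "discrete-manufacturing"
  requiredResources: string[];          // information resources the suite expects
  requiredBehaviors: string[];          // control behaviors the suite expects
  parameters: Record<string, unknown>;  // the knobs that vary per company
}

// A generic base template for one industry.
const manufacturingBase: Blueprint = {
  industry: "discrete-manufacturing",
  requiredResources: ["line.throughput", "machine.health"],
  requiredBehaviors: ["line.speed.set", "machine.shutdown"],
  parameters: { shiftCount: 3, maintenanceWindowHours: 4 },
};

// Company-level specialization changes only the knobs that differ, so
// dozens of industries reduce to dozens of templates rather than
// dozens of ground-up application builds.
function specialize(base: Blueprint, overrides: Partial<Blueprint>): Blueprint {
  return {
    ...base,
    ...overrides,
    parameters: { ...base.parameters, ...(overrides.parameters ?? {}) },
  };
}

const acmeWidgets = specialize(manufacturingBase, {
  parameters: { shiftCount: 2 },  // a smaller plant running two shifts
});
```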
Who does this? Software companies could work at it, but in today’s world we’d probably want to assume that an open-source initiative would drive the bus. That initiative would have to start with a broad application architecture and then build the industry-centric applications within it.
We can’t assume that solving a network problem, removing a network barrier, will automatically drive all the application development that the network would facilitate. We can’t even assume that the network will advance in deployment scope or features without specific near-term applications. We can’t assume those applications would develop absent the needed network support, so we’re back to that co-dependency. Standards haven’t solved it, and neither has open-source, so we may need to invent a different kind of “architecture forum” to align all our forces along the same axis.