Many of you who read my blog know that Andover Intel has a users-only email link, and we encourage consumers of technology to comment, ask questions, debate, or whatever, with complete confidentiality. My blogs on standards in general and 6G in particular generated emails from 51 of the 88 operators who have commented as users, and that’s the highest level of response so far.
The thing that stood out to me was, paraphrasing one operator friend, “OK, this is in fact a mess. What would you do differently?” That’s a good question, and one I’ll try to answer, taking into account the comments of the 51.
The challenge operators face is that they sell into what’s become, and likely will always now be, a telecom market dominated by consumer issues. That means the demand side blows in the wind, and that tendency is exacerbated by the fact that today’s network is dialtone for OTTs to deliver proof-of-concept, fast-fail tests that, if successful, will launch the next opportunity, the next driver for the underlayment the network provides. But at the same time, operators are tied to a stake in the ground created by long expectations of useful life for what they buy. The stake is seven to fifteen years deep, too deep to allow for rapid infrastructure change. This is a contributor to the supply-side mindset; you plan based on your capital constraints, which are set in no small way by the long depreciation cycle you need.
This, I think, frames the challenge. What about the solution? Can it be found? Well, if we think about it, it already has.
Enterprises have some of these same problems, and the solution they’ve hit on over time is the notion of commodity hardware, open middleware, and modular software. If operators want to be successful with their supply-side mindset, then they have to do the same, and to do that they have to shed a persistent bias.
Which is the “box mindset”. To telecom types, network features are created by devices. Those devices present physical interfaces that link them to other devices over network connections. Because the features you define are box-bound, even an evolutionary view of services threatens to displace current technology. Boxes also constrain you in terms of evolution, of compatibility with the past. If you need to interwork with Generation A in Generation B, and if both A and B are device-based, then you tie yourself to the earlier generation just to phase in, and that limits how much you can advance. This has to go.
The starting point for the right approach is simple: functionality is created by software, and hardware is just something you run software on. The functionality for any given mission, then, should be a series of logical components, each offering a unit of functionality. These components interact with each other via APIs and exchange messages through them. You don’t think in terms of defining interfaces but in terms of defining APIs. If the APIs link things that aren’t local to each other, you may have to carry the messages between them on a network connection, but the messages are the meat and the network connection is just the condiments.
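To make that concrete, here’s a minimal sketch in Go of a logical component defined purely by its API. The names (SessionControl, AttachRequest, and so on) are hypothetical, invented for illustration; the point is that the component’s definition says nothing about where it runs or what carries its messages.

```go
// Minimal sketch of an API-first functional component. All names here
// (SessionControl, AttachRequest, and so on) are hypothetical, for
// illustration only; they aren't drawn from any standard.
package main

import "fmt"

// AttachRequest and AttachResponse are the messages the API carries.
// The transport that moves them (in-process call, RPC, message bus)
// is deliberately not part of the definition.
type AttachRequest struct {
	DeviceID string
	SliceID  string
}

type AttachResponse struct {
	Accepted bool
	Reason   string
}

// SessionControl is a unit of functionality defined purely by its API.
type SessionControl interface {
	Attach(req AttachRequest) (AttachResponse, error)
}

// localSessionControl is one possible realization. A remote proxy that
// marshals the same messages over a network connection would satisfy
// the same interface, and callers couldn't tell the difference.
type localSessionControl struct{}

func (localSessionControl) Attach(req AttachRequest) (AttachResponse, error) {
	if req.SliceID == "" {
		return AttachResponse{Accepted: false, Reason: "no slice specified"}, nil
	}
	return AttachResponse{Accepted: true}, nil
}

func main() {
	var sc SessionControl = localSessionControl{}
	resp, _ := sc.Attach(AttachRequest{DeviceID: "dev-1", SliceID: "slice-a"})
	fmt.Printf("attach accepted: %v\n", resp.Accepted)
}
```

A remote version of the same component, reached over an RPC framework or a message bus, would satisfy exactly the same interface, which is the sense in which the network connection is just the condiments.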
You have to map these logical components and their APIs to hardware, eventually, and in order to do that you have to rely on the same sort of resource abstraction that’s been around in IT for decades—the operating system and middleware. For the former, it’s logical to consider Linux to be the presumptive target. For the latter, you have two sets of issues to consider.
The first set is whether the requirements of the mission put special performance, cost, or other constraints on the components you’re hosting. If your mission is to push packets, a data-plane mission, then you have to assume that you need special chips to handle the pushing. The middleware then needs to supply a set of bicameral “drivers”, with one side of the driver exposing a common function set and the other implementing it on a given chip type. The P4 language for switching is an example. The goal here is to support a set of logical functional elements on whatever is the best modern hardware mechanism available for the specific set of trade-offs you have to work with.
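Here’s a sketch of what a bicameral driver might look like, again in Go and again with names I’ve made up for illustration: the common function set is what data-plane components program against, and each chip type gets its own binding behind it.

```go
// A sketch of the "bicameral driver" idea: one side is the common
// function set the middleware exposes, the other binds it to a specific
// chip. ForwardingDriver and both bindings are illustrative guesses,
// not any vendor's actual SDK.
package forwarding

// ForwardingDriver is the chip-neutral side: the function set that
// data-plane components program against.
type ForwardingDriver interface {
	AddRoute(prefix string, nextHop string) error
	SetQueueWeight(port int, weight int) error
}

// softwareDriver implements the functions on the host CPU, useful for
// testing or low-rate paths.
type softwareDriver struct {
	routes map[string]string
}

func (d *softwareDriver) AddRoute(prefix, nextHop string) error {
	if d.routes == nil {
		d.routes = make(map[string]string)
	}
	d.routes[prefix] = nextHop
	return nil
}

func (d *softwareDriver) SetQueueWeight(port, weight int) error {
	return nil // nothing to program; the host kernel does the queuing
}

// asicDriver would translate the same calls into whatever the switching
// silicon understands (P4 runtime table entries, a vendor SDK, and so on).
type asicDriver struct{}

func (d *asicDriver) AddRoute(prefix, nextHop string) error {
	// Chip-specific programming would go here.
	return nil
}

func (d *asicDriver) SetQueueWeight(port, weight int) error {
	// Chip-specific programming would go here.
	return nil
}

// Both bindings present the same face to the functional elements above.
var (
	_ ForwardingDriver = (*softwareDriver)(nil)
	_ ForwardingDriver = (*asicDriver)(nil)
)
```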
If you’re not pushing packets, then your messages are “control plane” messages aimed at managing the functions and their cooperative behavior in fulfilling the mission. For this sort of thing, you have to look at the relationship the functional components have with each other in fulfilling that mission. Maybe the relationship is a simple peering, maybe client/server, maybe a community of partners, maybe a sequence of processes that’s source-steered, hop-by-hop, or orchestrated…you get the picture. We have middleware tools designed for all the possible relationships we could ever expect to encounter in telecom, and we’d surely lose nothing by mandating that no relationship-managing model could be standardized if it didn’t map to a model we already have proven out.
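To illustrate that mandate, here’s a hedged sketch of a “proven model” catalog. The pattern names and the middleware classes attached to them are my own illustrative assumptions, not a real registry; the point is simply that anything not already in the catalog shouldn’t get standardized.

```go
// A sketch of the mandate above: a relationship model is acceptable only
// if it maps to a coordination pattern we already have proven middleware
// for. The pattern names and middleware classes are illustrative
// assumptions, not a real catalog.
package relationships

import "fmt"

type Pattern string

const (
	ClientServer Pattern = "client-server"
	PeerGroup    Pattern = "peer-group"
	Orchestrated Pattern = "orchestrated"
	HopByHop     Pattern = "hop-by-hop"
)

// provenModels is the "already proven out" catalog: each pattern points
// at the class of middleware that implements it today.
var provenModels = map[Pattern]string{
	ClientServer: "RPC frameworks",
	PeerGroup:    "publish/subscribe messaging",
	Orchestrated: "workflow and orchestration engines",
	HopByHop:     "queued pipeline processing",
}

// Validate rejects any relationship model that doesn't map to something
// in the proven catalog.
func Validate(p Pattern) (string, error) {
	m, ok := provenModels[p]
	if !ok {
		return "", fmt.Errorf("no proven middleware model for %q", p)
	}
	return m, nil
}
```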
The second set of issues relates to the intrinsic or useful level of function execution and distributability. Does the functions’ ability to support the mission demand they be collected, or can they be distributed, either in groups or individually? Can the components be scaled by multiplication to handle loads? How do they maintain state? What data do they require? Is the model suitable for cloud hosting, and what are the economic issues related to the features it would need? There’s middleware available to handle almost every possible way software components are organized and workflows are threaded, so it’s a matter of picking something. This step, interestingly, is often attempted, and sadly it almost always leads to failure.
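One way to make the step concrete is to capture the answers to those questions as declared properties of each component. The sketch below is a hypothetical descriptor in Go, not any standard’s schema.

```go
// A sketch of the second set of issues, captured as a per-component
// descriptor the middleware could act on. Field names and values are
// hypothetical; the point is that distributability, scaling, state, and
// data needs are declared properties, not afterthoughts.
package descriptor

type StateModel string

const (
	Stateless     StateModel = "stateless"      // any instance can serve any request
	ExternalState StateModel = "external-state" // state kept in a shared store
	Sticky        StateModel = "sticky"         // requests must return to one instance
)

type ComponentDescriptor struct {
	Name          string
	CoLocateWith  []string   // components that must be hosted together
	ScaleOut      bool       // can load be handled by multiplying instances?
	State         StateModel // how the component maintains state
	DataNeeds     []string   // data sets the component requires
	CloudSuitable bool       // does cloud hosting make technical and economic sense?
}

// Example: a hypothetical session-management component that can scale
// out because its state lives in an external store.
var ExampleSessionManager = ComponentDescriptor{
	Name:          "session-manager",
	ScaleOut:      true,
	State:         ExternalState,
	DataNeeds:     []string{"subscriber-profile"},
	CloudSuitable: true,
}
```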
It’s not that this is the wrong approach; it isn’t. It’s not even that the person or group developing the functional map or model does it wrong. It’s that the map is used wrong. We see this almost everywhere in telecom, including in LTE, 5G, NFV, and more. People interpret the functional model as a network of devices. You can see this in the way that NFV developed, and ONAP as well. They map work paths onto device interfaces, or onto things that behave like them, and they build a monolith whose rigid structure guarantees that even if, by some miracle, it manages to serve the mission that stimulated its creation, it won’t survive a year of evolution.
The function map does have its challenges. As a software architect, I saw many enterprise applications that were first defined in a “flow chart” that described not a good way to do something, but rather the sequence of manual steps that the work would go through. Having something modeled on an inbox-outbox human chain is no better than, and no different from, having a set of boxes linked by physical interfaces. A function map has to be developed around the best way of combining the functions needed; if it starts by assuming the functions are discrete devices, that’s how it will end up.
I’ve seen, and you’ve likely also seen, some industry initiatives that have tried to apply some of these principles (MEF comes to mind), but I don’t think of this process as a standards activity as much as a methodology that standards activities should apply. The fact is, particularly for telecom standards, there’s generally only one body that can really set standards, and that’s not going to change.
I’ve looked at a lot of telecom standards in my career, and none of them were framed with the goal of achieving the separation of function and function hosting, software and hardware, that enterprises take for granted. That has to change or we’ll never see a successful telecom standard again. The old way of standardization cannot deliver it.