The future of networking probably depends on defining a future architecture for networking. Traditionally, standards bodies have driven progress in network technology and services, as the example of the 3GPP and 5G shows. When we talk about software-defined networks, software-driven services, and automated (even AI-driven) operations, we're in a different world, a world of software architectures and open-source. A recent Fierce Telecom article says the industry needs more collaboration among and within both standards bodies and open-source communities. Do we need it, and can we get it even if we do?
The article is an interview with T-Systems executive Axel Clauberg, and the key quote is "Because we as operators don't have enough resources, don't have enough skilled resources to actually reinvent the wheel in different organizations. So for me, driving collaboration between the organizations and doing this in an agile and fast way is very important." As someone who has been involved in both international communications and network standards and in open-source software, I can sympathize with Clauberg's view. Operators have historically had difficulty acquiring and retaining people with strong software architecture skills, and it's worse today with all the startup and cloud competition for the right talent. But collaboration isn't easy; there are several factors that can create chaos even individually, and sadly they tend to combine in the real world.
First is the classic problem of "e pluribus unum": who gets to define the "unum," the overall ecosystemic vision that aligns the "cooperating" parties into a single useful initiative? What has tended to happen in the standards area is that a body will take up what it sees as a "contained" issue, and then exchange liaison agreements with other groups in related areas. The idea is that these agreements will guarantee that everyone knows what everyone else is up to, and that where bodies are adjacent in terms of mission, they'll have a means of coordinating.
In practice, this approach tends to secure adjacencies but not ecosystems. There is still no clear vision of “the goal” in the broadest sense, and the problem with networking is that it is an ecosystem. You can’t have the right network without all the right pieces, and the definition of rightness piece-wise has to be based on the definition of rightness network-wise. But who is defining that? Years ago, the 3GPP started thinking about 5G, and they came up with what they believed were logical technical evolutions to address what they thought were meaningful market trends. Were they right? A lot of what’s been happening to pull 5G work apart and advance it selectively seems to show that our current vision of what 5G should be (and when) isn’t what the 3GPP came up with. Given that, how useful would liaisons be in creating a framework for cooperation between 3GPP and other standards groups?
Even in open-source, we have differences in perspective on what the glorious whole should look like. Containers should replace VMs, or maybe run inside them, or maybe run alongside them. They should be orchestrated and lifecycle-managed optimally for the cloud, or for a hybrid of cloud and data center, or for both separately. Differences in hardware and hosting should be accommodated through infrastructure abstraction, or through the federation of different infrastructure-specific configurations, or perhaps by picking only one approach to hosting and lifecycle management and making everything conform.
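To make the "infrastructure abstraction" option concrete, here is a minimal sketch (all class and function names are hypothetical, not drawn from any real orchestrator) of lifecycle logic written once against an abstract driver interface, so the same orchestration code can target white-box metal or VMs, rather than maintaining a federation of infrastructure-specific configurations:

```python
from abc import ABC, abstractmethod


class InfrastructureDriver(ABC):
    """Abstraction over one hosting environment (hypothetical interface)."""

    @abstractmethod
    def deploy(self, workload: str) -> str:
        """Deploy a workload and report where it landed."""


class BareMetalDriver(InfrastructureDriver):
    def deploy(self, workload: str) -> str:
        return f"{workload} on white-box metal"


class VmDriver(InfrastructureDriver):
    def deploy(self, workload: str) -> str:
        return f"{workload} in a VM"


def orchestrate(driver: InfrastructureDriver, workload: str) -> str:
    # Lifecycle logic is written once, against the abstraction;
    # per-infrastructure differences live entirely inside the drivers.
    return driver.deploy(workload)
```

The alternative, federated approach would instead keep a separate orchestration configuration per infrastructure type, which is exactly the divergence the abstraction is meant to avoid.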
Another interesting quote from Clauberg is "For me, the biggest nightmare would be if we would have competition between the organizations, competition and overlap." That's not the biggest nightmare I see. For me, the worst case is where we have a bunch of organizations that studiously avoid competing and overlapping, and by doing so operate in a majestic isolation that can produce the outcome we want only through serendipity. Liaison doesn't mean cooperation. Furthermore, I submit that the way the market has worked successfully in computing and the cloud is through the competition-and-overlap process. We know the optimum solution to a problem because it wins, and for it to win there has to be a race.
Why are telcos like T-Systems seeing the competition and survival-of-the-fittest notions of computing and software’s past as a bad thing in their own future? Answer: they don’t have time for it. Telcos started their transformation discussions over a decade ago, and in an architecture sense they’ve not really moved the ball much. Ten years ago, for example, nobody was looking at software-defined anything, or lifecycle automation. Now it’s clear that software is where things are heading, and so operators are trying to adapt quickly, having started out late, and at the same time they’re trying to avoid missteps and sunk costs.
The interesting thing is that open-source software strategies don't really sink much cost. If you assume that you're going to host on either COTS or white-box devices, then the equipment side of your strategy isn't much in doubt. If you acquire open-source software, then software costs are minimal to zero. Thus, you really don't have to worry about sunk costs unless you think you're not going to host things or use white-box technology. Which, in short, means you don't have to worry.
Carrier cloud should be a given in the sense of cloud infrastructure. There are also few questions regarding the OS; it's Linux. Yes, middleware tools are still up in the air, but that's true for the cloud overall, and it's not stopping the cloud. Are operators simply being nervous Nellies, or are there deeper issues?
One candidate for that deeper issue is the bottom-up implementation model that standards groups and operator-driven activities in general have been taking. If you start at the bottom, you are presuming an architecture and hoping the implementations add up to something useful. I've beaten this drum before, of course, so I don't propose to continue to do that here.
The next candidate is actually the one I cited in the first quote from Clauberg. Operators lack the qualified resources, which means they tend to be at the mercy of the rest of the industry, meaning the very vendors they think are trying to lock them in and ignore their priorities and issues. Gosh, gang, if you don't trust these people, why do you continue to under-staff in the places that could create a vendor-independent position? Back in 2013 I argued that without an open-source software emphasis in NFV, there would be little chance the initiative would meet its goals. Yet operators didn't do anything to pursue open-source, and NFV's goals have not been met. No surprise here, or at least there shouldn't be.
What do operators want from standards, or from open-source? "Success" or "transformation" isn't enough of an answer, and nobody is going to goal-set for you in the real world. Operators need to take "transformation" and decompose it into paths toward achieving it, both in the area of service innovation and in operations efficiency. They've yet to do that, and until they do, cooperation among the groups trying to help with transformation is as likely to focus on the wrong thing as on the right one.