Well, for everything there is a season, they say. If that’s true, then the Open Networking Foundation (ONF) has passed its own. The body is merging with the Linux Foundation after well over a decade of independent life, and the Foundation is taking over three of its projects: Broadband, Aether, and P4. Broadband deals with access technology, including SDN-Enabled Broadband Access (SEBA) and Virtual OLT Hardware Abstraction (VOLTHA). Aether is an open-source 5G platform, primarily for private 5G, and P4 is the flow-programming language developed for switching chips.
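I won’t dig into P4 itself here, but to give a sense of what “flow programming” means, here’s a minimal Python sketch of the match-action table idea that a P4 program expresses for a switching chip. The `FlowTable` class, the field names, and the rules are purely illustrative assumptions of mine, not drawn from any real P4 program or API.

```python
# Illustrative only: a toy match-action table, the abstraction a P4 program
# describes for a switching chip. Field names and actions are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Rule:
    match: dict               # header fields that must match, e.g. {"dst_ip": "10.0.0.2"}
    action: str               # "forward" or "drop"
    port: int | None = None   # egress port when the action is "forward"


@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def apply(self, packet: dict) -> str:
        """Return a verdict for the first rule whose match fields all agree with the packet."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                if rule.action == "forward":
                    return f"forward to port {rule.port}"
                return "drop"
        return "send to controller"  # table miss: punt the packet to the SDN controller


table = FlowTable([
    Rule(match={"dst_ip": "10.0.0.2"}, action="forward", port=3),
    Rule(match={"proto": "telnet"}, action="drop"),
])

print(table.apply({"dst_ip": "10.0.0.2", "proto": "http"}))  # forward to port 3
print(table.apply({"dst_ip": "192.168.1.9"}))                # send to controller
```

The table-miss case, where an unmatched packet is punted to a central controller, is the kernel of the SDN idea the ONF was formed around.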
When the ONF launched, it was focused on SDN, which was hot at the time but not so much today. The body moved forward as SDN lagged, and in my view P4 was one of the most potentially important projects it launched, but it ran hard into a reality of our times: the question of how new concepts and standards can actually become market-relevant. The only possible answer is “open source”, and that meant that whatever the ONF did, it would either have to cooperate with open-source communities like the Linux Foundation or sink. The merger with the Linux Foundation, arguably the current top open-source community, is the logical answer.
Standards have driven networking evolution almost from the first, with the old Bell System specs for modems being an example. The nice thing about a standard is that it tends to define network elements in black-box terms, meaning that externally visible interfaces and properties are defined but implementation is left to vendors. The IETF has been the source of the standards that have collectively created the Internet, and we’ve seen a healthy competition among vendors to implement the IETF specs. So why couldn’t the ONF, as a standards body, and SDN, as a standard, win out in the market? Are we losing something important?
It’s been said that the IQ of any group of people is equal to the IQ of its dumbest member, divided by the number of people. This cynical old saw was obviously intended to address the problem of collective action. “A camel is a horse designed by a committee” is another. Collective decision and action have been a problem for quite a while, too. I remember working on the first distributed computing network application for IBM S/360 systems, with IBM and a major supermarket chain. We spent several days in meetings answering the question of who “we” are when we use that term in statements like “We believe that….” Fruitful, huh?
The problem with this collective dysfunction in today’s world is that something delayed is something irrelevant. If you can’t get to market with a concept in a pretty short time, you miss the window or are eaten by faster competitors. This, I believe, is the reason for the disconnect that’s developed between the telco world and the rest of tech. The former has always lived in a world of long-cycle assets and supply-side planning, and everything else has become consumeristic. No consumer waits on standards; as soon as a concept comes along it’s either jumped on en masse or it’s old and dead. No wonder telco standards seem to take forever, and when they arrive they read with all the modernity of a Latin codex.
What you really need today to advance tech is something that’s available to be implemented, which means something that’s code and not specification. What company would unilaterally build code for a hot subject and turn it over to the world? What user would implement something totally proprietary and single-source? Answer to both: “None”. Collective implementation replaces collective definition in today’s tech world.
Open source development spreads the risks and the benefits. More importantly, it moves the process of dreaming up some new tech thing along at a market-reasonable pace. Arguably, some of the most successful things in tech have been driven by open source development. Look at Linux; it’s the server operating system that’s eaten the world. Even phone operating systems are based on one form of open-source software or another. Does this all mean that we’ve hit on the Right Answer for technology advancement? Perhaps, but we need to consider some critical points that are even now emerging.
Anyone who’s ever done a large software project, or even worked on one, recognizes the importance of architecture. There are many paths to achieving a given set of functional requirements, and most of them end up leading everyone to a dark place. A great design, a great architecture, is the foundation of great technology. Miss that, and there is a very good chance that the whole open source project will fail or sink into disuse. Given that, what happens if our great open-source project doesn’t start with a great architecture?
I’ve been involved in network standards groups and also in open-source network projects. While I can say that the latter have done better over time than the former, I can also say that neither of the two has delivered what I believe would be the best of outcomes for the market. Why?
Reason number one is that the hardest thing to create collectively is vision. Linux is arguably the most successful open-source project of all, and it started with the combination of a predecessor concept, UNIX, and the insight of a single engineer, Linus Torvalds. UNIX grew out of MULTICS, the brainchild of Bell Labs, GE, and MIT, a project that surely fit the camel-designed-by-committee model of disorder. Bell Labs drove a new project forward by redefining the architecture model, and Torvalds built on the UNIX foundation. This all started in the 1960s, Linux first emerged in 1991, and it gained ascendancy in the server space within two decades. But Linux is far from the only OS out there. Consumers like Windows and MacOS way better, and while Linux/UNIX is a seminal element in Apple’s stuff as well as Google’s Android and Chrome OS, its success is increasingly tied to creating a “shell” that shields users from its complexity. We’re not there yet on a universal Linux, and it’s been sixty years since the seminal concepts emerged.
Why does this matter? Because the problem with accelerated time to market is accelerated time to fail. How many of the “faults” in UNIX or Linux are flaws in conception, and how many are simply the result of changes in the market itself? Can you do something in six months and foresee the next sixty years? And if something comes along quickly to address the hottest current issues in the market, does it then kill all alternative approaches?
The final potential issue we need to think about is that of concentration. I blogged earlier this year about the seeming shift in enterprise attitudes regarding single-sourcing. We used to worry about lock-in, but today it seems that both enterprises and telcos are actively pursuing it. A single vendor reduces integration problems, gives you more leverage on pricing and in other related areas…we’ve all heard the stories. Are we heading for the same thing in open source, where a single strategy from a single body like the Linux Foundation is the heart of everything we’re doing in feature-hosting for network services? Sure looks that way, and could sweeping competitors aside in open source be as risky as focusing on a few large vendors for tech overall? We may find out.