Openness is good, right? We need open-source software. We need open APIs, open standards, and open-model networking. OK, let’s agree to agree on those points. The question is how exactly openness comes about, particularly in the network space. A collateral question is whether there are degrees of success, and if there are, the final question is how we can optimize things. Openness isn’t new, so we can look to the past for some lessons.
Perhaps the greatest success in open technology is Linux. It’s become the de facto server operating system, and some (a decreasing number, IMHO) think it could even take over the desktop. How did Linux establish the concept of open-source operating systems, and win out? In an important sense, it didn’t, and the details behind that point are a good place to start our consideration of the Optimum Open.
Minicomputers were one of the transformational changes in the computing market. Starting in the late 1960s and moving through the early 1980s, the “mini” was a more populist form of computing. IBM had launched the true mainframes in the mid-60s with the System/360, and as more companies got interested in more computing missions, the minicomputer was the obvious solution. Companies like Digital Equipment Corporation, Data General, CDC, Perkin-Elmer, and of course the mainstream IBM, HP, and Honeywell jumped in.
Every one of the mini vendors had its own software, its own operating system and middleware. Within 10 years, it was becoming obvious that the breadth of computer use created by the lower price points for minicomputers couldn’t be realized fully except through packaged software. Smaller companies could no more afford to build all their own applications than they could afford a mainframe. But there was a problem: the balkanization of operating systems and tools meant that software firms faced major costs building their wares for every mini option. By the early ‘80s, it was clear that most minicomputer vendors didn’t have an installed base large enough to attract third-party software.
There was, at this point, an alternative in UNIX. UNIX was a Bell Labs project whose source code was made available at little or no cost to universities, research firms, the government, and so forth. Mini vendors started to offer UNIX in parallel with their own operating systems, and gradually put all their efforts into UNIX. But UNIX had a problem too: the early spread had created multiple warring UNIX versions, and many small variations on the two main themes (AT&T’s System V and Berkeley’s BSD). Standards aimed at the APIs (POSIX) came along to reduce the impact of the divergence, but it was still there.
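As a side note on what API-level standardization actually bought, here’s a minimal C sketch (purely illustrative, not drawn from any vendor’s code; the /etc/hosts path is just an assumed example) that uses only standardized POSIX calls. The point is that the same source builds on any POSIX-compliant UNIX variant, whatever the vendor’s implementation looks like underneath.

    /* Illustrative only: uses only POSIX calls (open, read, write, close),
       so the same source compiles on any POSIX-compliant UNIX variant. */
    #include <fcntl.h>
    #include <unistd.h>

    int main(void)
    {
        char buf[4096];
        ssize_t n;

        /* Open a file through the standardized POSIX interface. */
        int fd = open("/etc/hosts", O_RDONLY);
        if (fd < 0)
            return 1;

        /* Copy its contents to standard output. */
        while ((n = read(fd, buf, sizeof(buf))) > 0)
            write(STDOUT_FILENO, buf, (size_t)n);

        close(fd);
        return 0;
    }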
Linux, which came along in 1991, was an implementation of the UNIX/POSIX APIs in open-source form, with one open-source community behind it. It was modern and lightweight, and its license terms seemed to ensure there’d be no lock-in or takeovers possible. It won.
The moral of this, for our Optimum Open, is that most open initiatives start with some credible issue that drives vendors to buy in. That’s important, because today’s open-model network has a mixture of driving forces, but arguably they tend to come from the buyer rather than from the vendor side.
Standards groups attempt to create open network elements by specifying functionality and interfaces, and while these efforts have been fairly successful in creating competition, they have failed to create commoditization. The problem is that a network element is a software-based functional unit that runs on hardware. As long as network software could be sustained in proprietary form, the hardware side didn’t matter much, and building open routing software is as much of a slog as building open operating systems was. Even today, we’re only beginning to understand what an open switch/router software package should look like, and what its relationship with hardware should be. Standards, in addition, have proven too high-inertia to keep pace with rapid market developments. That’s why there’s been so much pressure to come up with things like open-source router software and white-box devices.
Why hasn’t open-model networking created open software for the same reasons open-model computing did? The answer is that in the computing world, the purpose of the open operating system was to build a third-party ecosystem that would justify hardware sales for the key players. In open-model networking, that falls apart for three reasons.
The first reason is that open-model networking is trying to commoditize both hardware and software. Who promotes the open software model that pulls through hardware nobody makes much money on? Since the software is free and the hardware commoditized, there’s not a lot of dollars on the table for vendors, so only buyers have an incentive to drive the initiatives. Buyers of network devices don’t typically have big software development staffs to commit to open-source projects. Most such projects are advanced by…vendors.
The second reason is that there is no third-party software industry driving adoption of network devices. Open-source router software plus a commodity white box equals no money to be made on any piece of the ecosystem, if there are no software add-ons to sell.
The final reason is the vendor certification programs. You can be a Cisco or Juniper or Nokia or Ericsson certified specialist in using the devices. That certification makes employees more valuable, as long as the stuff they’re certified on is valuable. That encourages certified people to think in terms of the familiar vendors/products.
I don’t think that the kind of openness we’ve seen in software for decades will come about in networking unless one of two things happens. The first is that the service providers take a bigger role in developing open-model networking. Most of the progress so far has come from cloud providers and social network players (Google and Facebook, respectively). The second is that the computer hardware vendors get more aggressive.
I’ve worked with the service providers for much of my career, both within standards groups and independently. The great majority of them want open networking, but only a small minority is prepared to spend money on staffing up open-source or even internal network projects. Of that small group, perhaps a quarter or less has taken even a fruitful, much less optimum, approach. One comment I got last year seems relevant: “We’re ready to build networks from open devices, but we’re not ready to build open devices.” That puts the onus back on the vendors.
Even computer/server vendors have limited incentives to promote open-model networking, given that the goal of the buyer is for the stuff to be as close to free as possible. The server vendors could play in a hosted-virtual-function game, but NFV has failed to create a broadly useful model for that. The bigger hope may be the “embedded system hardware” devices, things like Raspberry Pi or the appliances on which SD-WAN is often hosted. But the best hope would be the chip vendors, and they have their own issues.
You make money selling what you make, which for chip vendors like Intel, AMD, NVIDIA, Qualcomm, Broadcom, and the rest is chips. Since these aren’t the kind of chips you eat, people have to consume them within devices that are purchased, so the things that promote those devices are the things chip vendors really love. A network these days has way more devices connected to it than living within it, which means that the stuff chip vendors focus on today (smartphones, PCs, tablets) is likely to be the stuff they focus on in the future.
This doesn’t mean that they wouldn’t take an excursion into a new space if there were enough money on the table. But how much money is there in open-model networking, which is aimed at near-zero cost to the buyer? Not only that, the devices and even the network model of an open-model network are only hazily understood today. Does Intel (for example) rush out to educate the market so competitors can jump in and try to displace it without bearing the educational costs?
We’re back to buyer-side support here. There’s only one group that has a true incentive to push open-model networking forward, and that’s the network operators. The big cloud providers have in fact advanced data center switching in an open direction, but their incentives to go further are limited. Telcos are another story.
And here, the change in leadership at AT&T may be the big factor. Stephenson was a supporter of AT&T’s efforts in open-model networking, which, while not perfect, were likely the best in the market. Will Stankey offer strong support, even better leadership? He says he will continue Stephenson’s vision, but whether he does and how he does it may decide the future of open-model networking.