One of the most important trends in network equipment is the increased availability of “merchant silicon”, chips designed to implement common network functions such as switching, routing, and route determination. It is this development that has led to the increased interest in “white box” or open-model technology in networks. While several vendors offer these chips, Broadcom has been the market leader, and it has recently released a new behemoth, the Tomahawk 6 switching chip.
While white-box devices haven’t yet proved to be the threat to networking giants like Cisco that some had predicted, they have put price and profit pressure on those vendors, and they have opened a competitive front in the SMB/channel space and in new applications of networking, such as AI. Both of these emerging missions present a threat to the network vendors and to the current equipment market dynamic.
What kept white boxes, whose price points are (enterprises and operators tell me) roughly 35% below those of the traditional vendors’ products, from sweeping the markets? Part of it was a willingness by the incumbent vendors to discount; enterprises say that if the price differential comes down to 15%, they’d likely stay with their current vendor. Part of it was the fact that neither enterprises nor operators want to introduce a new product into a running network, period. Management and integration issues, particularly finger-pointing, are seen as just too much of a risk.
SMBs, in contrast to operators or enterprises, often grow into needing a network or a network change. When there is no network already in place, these businesses like lower-cost products, and they often trust their integrator (a channel player) more than the vendor of the products they buy. Since most SMBs get data center products through the channel, it’s logical for them to get the data center network the same way.
The data center network, it turns out, is the place where the white-box opportunity seems to focus for businesses, and increasingly for operators as well. If we step outside the SMB space to look at operators and enterprises, we find a growing loss of uniformity in data center thinking. One big factor in this is the technical shift toward container infrastructure and the use of Kubernetes. Container deployment is usually based on creating “clusters”, groups of servers that support significant horizontal traffic among the hosted application components.
In the old days of monolithic applications, traffic in a data center network moved one way: vertically, out of the data center and onto the VPN to reach workers and others. As applications were componentized, and as integrated workflows linked applications to synchronize the impact of transactions across the entire business, inter-component traffic grew. One purpose of the container cluster concept was to improve performance and security by keeping the components that talk regularly in proximity to each other, both in an addressing sense and in terms of latency and available bandwidth. This is the “horizontal” form of traffic, and the requirement for it is what binds clusters into subnetworks.
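To make the proximity idea concrete, here’s a minimal sketch (my illustration, not any vendor’s design) of how a Kubernetes deployment can ask the scheduler to keep chatty components together; the “order-api” and “order-db” names, the container image, and the zone topology key are assumptions made up for the example.

```python
# Sketch: co-locating chatty components with Kubernetes pod affinity.
# Assumes the "kubernetes" Python client is installed and that a component
# labeled app=order-db already runs in the cluster; all names are illustrative.
import json
from kubernetes import client

affinity = client.V1Affinity(
    pod_affinity=client.V1PodAffinity(
        # Ask the scheduler to place these pods in the same zone as the
        # component they talk to most, keeping horizontal traffic local.
        required_during_scheduling_ignored_during_execution=[
            client.V1PodAffinityTerm(
                label_selector=client.V1LabelSelector(
                    match_labels={"app": "order-db"}),
                topology_key="topology.kubernetes.io/zone",
            )
        ]
    )
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="order-api"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "order-api"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "order-api"}),
            spec=client.V1PodSpec(
                affinity=affinity,
                containers=[client.V1Container(
                    name="order-api", image="example/order-api:1.0")],
            ),
        ),
    ),
)

# Print the manifest; applying it would go through AppsV1Api().create_namespaced_deployment().
print(json.dumps(client.ApiClient().sanitize_for_serialization(deployment), indent=2))
```

The mechanics matter less than the point: cluster tooling deliberately shapes where horizontal traffic lands, and that in turn shapes what the data center network has to carry.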
AI is a special and extreme case of this. AI is a novelty, so its deployment is a classic green field. It is also an application with an enormous amount of horizontal traffic (largely between GPUs) supporting a modest amount of vertical traffic. In fact, even outside training, AI is dominated by horizontal traffic, and its clusters need far more horizontal capacity. Because those clusters are new, they present a major new demand for switching that new white-box players can compete for.
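To put rough numbers on that imbalance, here’s a back-of-envelope sketch; every figure in it (GPU counts, NIC speeds, uplink sizes) is an assumption I’ve picked for illustration, not a vendor specification.

```python
# Back-of-envelope: horizontal (GPU-to-GPU) vs. vertical (user-facing) capacity
# in a hypothetical AI cluster. Every number here is an illustrative assumption.

GPUS_PER_SERVER = 8          # a dense GPU server
SERVERS = 32                 # a modest cluster
HORIZ_GBPS_PER_GPU = 400     # per-GPU fabric NIC for inter-GPU traffic
VERT_GBPS_PER_SERVER = 100   # front-end/uplink bandwidth per server

horizontal_capacity = GPUS_PER_SERVER * SERVERS * HORIZ_GBPS_PER_GPU   # Gbps
vertical_capacity = SERVERS * VERT_GBPS_PER_SERVER                     # Gbps

print(f"Horizontal fabric capacity: {horizontal_capacity / 1000:.1f} Tbps")
print(f"Vertical capacity: {vertical_capacity / 1000:.1f} Tbps")
print(f"Ratio: {horizontal_capacity / vertical_capacity:.0f}:1")
```

Under those assumptions the horizontal fabric is roughly thirty times the vertical path, which is why a new AI cluster is, in switching terms, effectively a green-field buy.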
We see this in the push that open-network vendor DriveNets has made for AI switching. Long a Broadcom chip user, DriveNets had previously focused largely on new core missions for operators, but its Network Cloud has always been built around a cluster of connected devices. The boxes themselves are fairly uniform in structure, to the point where different numbers of them can be deployed in different places to support multiple network missions, all backed by a common pool of spares and uniform operating tools and practices. The AI push is important because AI is only one reason why a new cluster would be deployed. Enterprises have an AI driver, but they also have the other cluster-driving trends noted above, so the AI move actually makes DriveNets a viable enterprise data center contender.
Obviously it does the same for the operator market. For operators, I’ve noted that the logical place to host value-add features would be the metro points: metro is a natural aggregation point, with local access connected to it at low latency; it concentrates enough users to create feature economies of scale; and things like Ciena’s coherent routing could increasingly mesh metro points optically as an alternative to a multi-hop electrical core.
We have drivers of change in both the enterprise and operator data center spaces, and the SMB market is logically ideal for white boxes and switching chips. We could well be seeing a situation where the data center switches that are the big profit engine for the network equipment vendors become open to more competition, particularly price competition.
To me, this is part of why HPE wants Juniper. If you own hosting, you have strategic control over what drives data center change, which is the applications, and that gives you a better position from which to push data center switches. Without that control, you can’t readily sell them. Truth be told, Juniper needs HPE more than HPE needs Juniper, but Juniper gives HPE an offense in the network space, and keeping IBM from buying Juniper adds a defense for HPE too.
We may be seeing a critical shift here, but one that’s been a long time developing. The value of IT is, in the end, created by applications. Applications link data to business processes and workers, and everything else is just plumbing. Plumbing is hard to differentiate. Networks are delivery mechanisms, composed of a limited number of functions and designed to avoid complexity in order to assure operational efficiency. We may be seeing the flight from complexity, the “dumb” network model, make differentiation at the network level difficult and make computing, particularly platform software tools, king of the hill. IBM has played that game all along; Broadcom may be playing it now. Differentiate where differentiation leads to strategic control, and turn everything else into silicon.