Well, it’s “Recap Friday” again, and fortunately there are quite a few sound bites we can recap this week. At the top of the list is the Credit Suisse report on their Next-Generation Data Center Conference. Like most of these conferences, it was a vendor love-fest littered with the usual exaggerations (and perhaps a few outright lies), but there were some decent pickings among the details.
On the SDN front, it’s becoming clear that server consolidation is indeed driving data centers to scale up, and that this is aggravating the primary problem enterprises face in the data center network: poor resource efficiency, due in large part to the natural inefficiency of Ethernet switching. What SDN can do is all but eliminate that inefficiency by permitting explicit flow routing and thus better utilization…maybe. The qualifier here is that of the three models of SDN that we (reluctantly) recognize, the overlay model doesn’t really bear on this because it doesn’t influence per-device routing, and the distributed model doesn’t bear on it because the protocol development to support it is at the IP layer and not Ethernet. Only the “centralized” OpenFlow model applies.
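To make that distinction concrete, here’s a toy sketch of what the centralized model buys you (plain Python, not OpenFlow or any real controller’s API; every name here is illustrative): a controller that sees the whole topology can place each flow on an explicitly chosen path, including paths that spanning-tree Ethernet would have blocked outright.

```python
# Toy model of centralized, explicit flow routing (illustrative only; this
# is not OpenFlow or any real controller API). A central controller places
# each flow on a chosen path by writing per-switch forwarding entries.
from dataclasses import dataclass, field

@dataclass
class Switch:
    name: str
    flow_table: list = field(default_factory=list)  # (match, out_port) pairs

def install_path(path, match):
    """Write an explicit flow entry on every (switch, out_port) hop."""
    for switch, out_port in path:
        switch.flow_table.append((match, out_port))

s1, s2, s3 = Switch("s1"), Switch("s2"), Switch("s3")

# Two flows between the same endpoints can take different paths, spreading
# load across links that spanning-tree Ethernet would have disabled.
install_path([(s1, 2), (s2, 4)], match="flowA")
install_path([(s1, 3), (s3, 1)], match="flowB")

print(s1.flow_table)  # [('flowA', 2), ('flowB', 3)] -- per-flow path choices
```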
We also have to point out that the trend speaks against traditional fabric solutions. A data center that gets fat by scaling up traditional applications doesn’t create much cross-component traffic, and so doesn’t require any-to-any connectivity. What you need in this kind of data center isn’t non-blocking any-to-any, but malleable some-to-some. That validates traffic-driven malleable-capacity models like Plexxi’s over fully connective fabric models.
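A back-of-the-envelope comparison shows the gap (the numbers here are made up purely for illustration): a non-blocking any-to-any fabric provisions capacity for every rack pair just in case, while a some-to-some design only has to cover the pairs that actually exchange traffic.

```python
# Illustrative arithmetic only: any-to-any fabric capacity vs. the capacity
# the observed traffic actually needs in a scale-up data center.
racks, link_gbps = 16, 10

# Non-blocking any-to-any: provision every rack pair, traffic or not.
any_to_any_gbps = (racks * (racks - 1) // 2) * link_gbps

# Malleable some-to-some: provision only the pairs that are talking.
observed_flows_gbps = {("r1", "r2"): 8, ("r1", "r5"): 3, ("r3", "r4"): 6}
some_to_some_gbps = sum(observed_flows_gbps.values())

print(any_to_any_gbps)    # 1200 Gbps provisioned "just in case"
print(some_to_some_gbps)  # 17 Gbps the real traffic pattern needs
```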
There were also (as expected) some cloud claims in evidence. One is that adding private cloud to virtualization could increase your efficiency by an enormous margin, which I don’t buy for a minute unless we assign the term “could” to events whose probability is equal to that of your flapping your arms in exasperation and launching yourself over the rooftops. What’s important in justifying the transition from virtualized data centers to a private cloud model is the ability of applications to exploit highly dynamic resource pools. We are not there yet in most enterprises, and we’re not getting there in the next couple of years. When we do start that move, it will be more to exploit “externalized” dynamism, the ability to roll components into public-cloud overflow resources. That doesn’t require a redesign of the local data center.
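As a sketch of what “externalized” dynamism amounts to (the names and the capacity figure here are entirely hypothetical), the local pool is consumed first and only the excess spills to public-cloud overflow, which is exactly why the local data center doesn’t need redesigning.

```python
# Hypothetical cloudburst placement: fill the local data center first, then
# spill the remainder into public-cloud overflow capacity. Nothing about the
# local pool changes; it just stops being the only place components can run.
LOCAL_SLOTS = 10  # components the local data center can host (made-up figure)

def place(components):
    return {"local": components[:LOCAL_SLOTS],
            "public_cloud_overflow": components[LOCAL_SLOTS:]}

placement = place([f"component-{i}" for i in range(13)])
print(placement["public_cloud_overflow"])  # the three that burst outward
```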
Speaking of cloud, we have the story that IBM and EMC are both looking to acquire cloud provider SoftLayer, perhaps spending a couple of billion on the deal. There’s nothing remarkable about SoftLayer’s service offering in my view; it’s IaaS, and that’s not the way of the future. However, they have stuff in place, they have a framework from which you can create your own cloud or host on theirs, and they’re privately held. They also have decent marketing/positioning, and that may set them apart from rival Joyent, which actually has better technology but has been completely hopeless at articulating it. It pays to sing, gang; let this be a lesson.
I think EMC is a more likely winner here, or at least they have more to win than IBM, but I also think somebody like Cisco should be the one looking to deal. You cannot be a provider of cloud infrastructure if you don’t offer a public cloud service to ease your customers onto a private cloud and to showcase your wares. You can’t be an IT kingpin without offering cloud infrastructure, and Chambers says he’s going to make Cisco an IT kingpin. QED. But EMC does need a cloud too, particularly given its new push on Pivotal, though SoftLayer is stronger in the SMB space where things like cloud-specific applications are a ways out, adoption-wise.
On the SDN front we have news that Dell is favoring SDN standards via the OMG. On the one hand, we sure seem to have enough people wanting to define SDN standards these days; so many, in fact, that a gladiatorial match to cull the herd may be in order. On the other hand, I’m firmly behind the notion that we need to be looking at everything related to software control of network services in object terms (and who better for that than the OMG?). The key question for SDN, though, is the timing of a holistic vision. We don’t have that now, not from anyone. We have itty-bitty SDN pieces that vendors have glommed onto and tried to spin into a sexy story. For example, if data center efficiency is the goal and traffic engineering is the path to it, then the Nicira overlay model is a complete waste of time, because you can’t engineer traffic with a network overlay the switches don’t even know about. But as long as we lack a top-down vision of SDN, everyone can claim to be it, or to have it.
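Here’s one hedged reading of what “object terms” might look like (a sketch of my own, not anything the OMG has actually specified): every network service presents the same abstract lifecycle, so software control can create, modify, and tear down services without caring what implements them underneath.

```python
# Sketch of network services "in object terms" (illustrative; not an OMG
# specification). Software control sees one abstract lifecycle; the service
# object hides how it is actually realized in the network.
from abc import ABC, abstractmethod

class NetworkService(ABC):
    @abstractmethod
    def create(self, endpoints: list, sla: dict) -> str:
        """Instantiate the service and return a handle."""

    @abstractmethod
    def modify(self, handle: str, sla: dict) -> None:
        """Change committed behavior without recreating the service."""

    @abstractmethod
    def destroy(self, handle: str) -> None:
        """Tear the service down and release its resources."""

class EngineeredPath(NetworkService):
    """One concrete service: an explicitly routed path between endpoints."""
    def create(self, endpoints, sla):
        return f"path:{endpoints[0]}->{endpoints[1]}:{sla.get('gbps', 1)}G"
    def modify(self, handle, sla):
        pass  # re-signal the path against the new SLA
    def destroy(self, handle):
        pass  # remove the flow entries along the path

print(EngineeredPath().create(["a", "b"], {"gbps": 10}))  # path:a->b:10G
```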
I think this whole issue of SDN standards is going to come to a head in the Network Functions Virtualization (NFV) area. Unlike SDN, NFV is focused on function hosting, which forces it to consider the network as a service-layer cloud of features. That’s logically a software framework, and a framework needs an object model to define it; otherwise functions are neither portable nor able to interact with each other, and both are required if NFV is to meet its stated goals. SDN has gone so far and gotten so diffuse that I don’t think it’s possible to fix it any more. Somebody will have to envelop it; whether it’s the OMG, or NFV, or NFV endorsing the OMG, something has to take the right slant on this. And even the OMG and NFV could be too late to the party. My model says both concepts are dead if they’re not completed by 2015, because by then pressure on network cost will have driven operators to proprietary strategies.
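To see why portability and interaction both hinge on an object model, consider a minimal sketch (the names are mine, not from the NFV specifications): if every hosted function implements one packet-handling interface, functions become deployable on any host and chainable in any order, regardless of vendor.

```python
# Minimal sketch of an NFV-style object model (illustrative names; not from
# the NFV specifications). One shared interface is what makes functions both
# portable across hosts and able to interact in a service chain.
from abc import ABC, abstractmethod
from typing import Optional

class VirtualFunction(ABC):
    @abstractmethod
    def process(self, packet: bytes) -> Optional[bytes]:
        """Handle one packet; return it (possibly rewritten) or None to drop."""

class Firewall(VirtualFunction):
    def process(self, packet):
        return None if packet.startswith(b"BAD") else packet

class Nat(VirtualFunction):
    def process(self, packet):
        return packet.replace(b"10.0.0.1", b"203.0.113.7")

def run_chain(packet, chain):
    """Pass a packet through an ordered chain of functions, vendor-agnostic."""
    for fn in chain:
        packet = fn.process(packet)
        if packet is None:
            return None  # dropped somewhere along the chain
    return packet

print(run_chain(b"from 10.0.0.1: hello", [Firewall(), Nat()]))
```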