The iPhone 5 may not be revolutionary (well, let’s be fair—it’s not), but there’s never been much doubt it would be successful; in fact, the early allotment sold out in no time, and much faster than earlier models did. The question for the industry now is how to deal with this phenomenon. It’s not the traffic; it’s the fact that Apple and other appliance vendors are continuing to advance their role in driving the direction of mobile. Wireline is already about pushing bits at low margins. What are operators to do, and what about equipment vendors? Some seem to have reasonable ideas, and some less than reasonable.
DT is planning a major rollout of vectored DSL to reduce the cost of broadband deployment versus FTTH, and this is one of those deals that I believe might be less than reasonable. Operators overall tell me that advanced loop technology is profitable if you can deliver multi-channel video over it, and DSL is video-compatible only with the fairly complex IPTV overlay that AT&T’s U-verse models. With fiber you can deliver video, as well as broadband essentially without limit. Vectored DSL, which reduces the crosstalk among bundled pairs that plagues short-haul ultra-fast stuff like VDSL, seems to me a rather limited and optimistic response to market needs. In any event you still need FTTC, because the loops have to be short to deliver the 100 Mbps or more that cable can deliver.
In contrast, Telefonica is continuing to push the service-layer envelope with an adventure in augmented reality—not the Google Glass stuff but interactive multimedia advertising targeted specifically at the mobile user. Their deal with Aurasma, a leading global player in the space, is both an indication of Telefonica’s determination to get its own mobile ad position into the market and an indictment of network vendors’ service-layer strategies. I’ve said this before, but how many times do we need to see carriers jumping into bed with specialized service-platform vendors before we realize that the enormous opportunity for a service-layer architecture to boost network equipment vendors’ fortunes is now largely lost? This is a particular problem for vendors like Alcatel-Lucent, Ericsson, and NSN, who need as much monetization-oriented story as they can get to differentiate against Huawei and to build opportunities for professional services.
Alcatel-Lucent won a big deal to transform Telefonica’s NMS, which is an accomplishment by any measure, but it’s still a kind of retrospective success in that it builds on what the network has always been and not what it has to become. The iPhone is creating the future of networking, and managing networks is just a small step above pushing bits. I’d like to see Alcatel-Lucent recognize that if you’re going to be talking about more efficient network operations you need not only transformed management but also transformed practices, and that leads to SDN.
Software defined networking is on the move, though like most hype-driven trends it’s sometimes hard to say whether all the moving is in a constructive direction. One recent focus has been on the application of OpenFlow to optical switching, something that could be jury-rigged with the current level of specification but could also benefit from some more explicit standards help. The problem is that the move, in my view, is highlighting some fundamental differences in how OpenFlow is interpreted. Some see it as a protocol to communicate forwarding rules—I count myself in this group. Others seem to see it as a specification for a low-cost switch, meaning that they expect the OpenFlow parameters to map right to data-path handling in silicon. That’s not going to happen now, but I don’t think it should ever have been expected. We should be expecting OpenFlow devices to “compile” rules in a way that’s optimized to their specific implementation of forwarding-plane behavior. There will still have to be control-plane processes in OpenFlow switches, no matter what, and we may as well give them some useful missions now to avoid confusion later on.
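The “compile” interpretation can be sketched in a few lines of Python. This is purely illustrative, not the OpenFlow wire protocol: `FlowRule`, `TcamSwitch`, and the field names are hypothetical, and the point is only that the device translates abstract match/action rules into its own native forwarding representation rather than mapping them 1:1 onto silicon.

```python
# Illustrative sketch: a controller hands a switch abstract match/action
# rules; the switch "compiles" them into whatever its forwarding plane
# actually supports. All names and fields here are hypothetical.

from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict                      # e.g. {"in_port": 1, "eth_type": 0x0800}
    actions: list                    # e.g. ["output:2"]
    priority: int = 0

class TcamSwitch:
    """A hypothetical switch whose data path is a priority-ordered table.

    An optical switch would instead compile the same rules into
    wavelength/port cross-connects; the controller-facing rule format
    stays identical.
    """
    def __init__(self):
        self.table = []

    def install(self, rule: FlowRule):
        # "Compile": translate the abstract rule into this device's
        # native entry format, kept sorted so highest priority wins.
        entry = (rule.priority, tuple(rule.match.items()), tuple(rule.actions))
        self.table.append(entry)
        self.table.sort(key=lambda e: -e[0])

    def forward(self, packet: dict):
        for _priority, match, actions in self.table:
            if all(packet.get(k) == v for k, v in match):
                return list(actions)
        return []   # table miss; a real switch would punt to the controller

sw = TcamSwitch()
sw.install(FlowRule({"in_port": 1}, ["output:2"], priority=10))
sw.install(FlowRule({}, ["drop"], priority=0))     # wildcard default
print(sw.forward({"in_port": 1}))   # ['output:2']
print(sw.forward({"in_port": 3}))   # ['drop']
```

Note that the control-plane work lives in `install`, not in the data path—which is exactly why some control-plane process has to exist in the switch no matter how “dumb” we want the hardware to be.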
Speaking of confusion, I’m still frustrated by the fact that everything in the media’s vision of the cloud world seems to revolve around IaaS and Amazon competition. Virtual hosting is a two-dimensional economy game: first, it depends on how much cost your virtual framework can replace, and second, on how much better your economy of scale is than the enterprise’s. It doesn’t take a supercomputer to figure out that displacing only the hardware platform isn’t the path to optimal opportunity. In fact, the real cloud opportunity will always lie in creating what I’ll call a “cloud PaaS”, a model of a virtual network operating system that’s inherently distributable and hostable. Amazon, I think, is moving in this direction with its storage options—these are services that exist in and for the cloud. So, obviously, are some other cloud platforms like OpenStack. None of them so far are really talking about the future of the cloud. If there IS any future, then it has to lie in the definition of a platform that explicitly captures the cloud’s benefits by making them available to applications as OS services.
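The two-dimensional game can be made concrete with back-of-envelope arithmetic. All the numbers below are hypothetical; the point is the shape of the math—savings depend on how much of the stack the cloud displaces times the provider’s scale advantage on that slice.

```python
# Back-of-envelope sketch of the "two-dimensional economy game".
# Every parameter value here is a hypothetical illustration, not data.

def cloud_savings(displaceable_share, scale_advantage, enterprise_cost=1.0):
    """Fraction of total cost saved by moving to the cloud.

    displaceable_share: portion of the enterprise's cost the cloud model
        actually replaces (hardware only for IaaS; more for a platform
        that also absorbs OS, middleware, and operations).
    scale_advantage: provider's cost for that slice relative to the
        enterprise's (0.7 means the provider runs it 30% cheaper).
    """
    displaced = enterprise_cost * displaceable_share
    return displaced - displaced * scale_advantage

# IaaS displaces only the hardware platform -- a thin slice of total cost.
iaas = cloud_savings(displaceable_share=0.2, scale_advantage=0.7)
# A "cloud PaaS" that also absorbs the software stack displaces far more.
paas = cloud_savings(displaceable_share=0.6, scale_advantage=0.7)
print(f"IaaS saves {iaas:.0%}, platform model saves {paas:.0%}")
# -> IaaS saves 6%, platform model saves 18%
```

Even with an identical scale advantage, tripling the displaceable share triples the savings—which is the whole argument for competing on the platform rather than on hosting.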