Cloud computing is probably the most important concept of our time, but also likely the most misunderstood. It will never do what proponents say it will, which is displace private IT; in fact, it’s unlikely to displace more than about a quarter of today’s IT spending. However, it will generate new spending, and it will ultimately define how all of IT is used and done.
The biggest problem we have with the cloud is that we think it’s an IT strategy when it’s in fact an application architecture. Cloud computing is based on the fact that as you lower the price of network connectivity, you eliminate the barriers to distributing intelligence. Componentization of software, already a reality for reasons of development efficiency, combines with this trend toward distributability to create a highly elastic model of computing in which information processing and information migrate around inside a fabric of connected resources.
What we’ve seen of cloud computing up to now has mostly not been cloud computing at all, but the application of hosted IT to the problem of server consolidation. Application development lagged the revolution at the resource end of the equation, and it still does. That gap is disappearing quickly, though, as companies and vendors alike come to realize what can be done. However, we’re stuck with the notion that the cloud is a replacement for the data center, and that notion will take time to overcome.
Cisco might be seeing something here, though. Most of Cisco’s marketing-hype innovations, like the “Internet of Everything”, don’t do anything but generate media coverage, which is their intended purpose on both Cisco’s part and the media’s. But Cisco’s attempt to jump one step beyond the cloud, “fog computing” (just as the Internet of Everything was an attempt to jump beyond the Internet of Things), may actually have a justification, a value. If we assigned the notion of “the cloud” to the original hosted-consolidation mission, then “the fog”, as the name for where we’re headed, might help people realize that we’re not going to change the world by hosting stuff, but by re-architecting IT around a new model of the network. Still, “fog” isn’t the most helpful descriptor we could have, and we’re light on the details that might tell us whether Cisco’s “fog” is describing something or obscuring it, so we’ll have to look deeper.
The drive to the future of the cloud is really linked to two trends: the first is the continued (and inevitable) reduction in cost per bit, and the second is mobility. As I noted earlier in this piece, lower transport costs reduce the penalty for distributing processes. In a net-neutral world, it’s possible to protect low-cost transport resources inside a ring of data centers, because these interior elements aren’t part of the network. That encourages operators to think in terms of “interior lines of communication”, because those lines don’t always cannibalize their current service revenues. And mobile users? They are changing the industry’s goal from “knowledge networking” to “answer networking”.
I’ve noted in prior blogs that it’s easiest to visualize the mobile future as being made up of users who are, in a virtual sense, moving through a series of context fabrics based on factors like their location, social framework, and mission. Each fabric represents the information available about the specific context it covers, but that information isn’t exposed the way the classic notion of IoT sensors would be, as devices to be read directly; the fabrics are analytic frameworks instead. You could visualize a person walking down the street and, as they move, transiting an LBS (location-based services) fabric, their own social fabric, a shopping fabric, a dining fabric, even a work fabric.
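As a thought experiment only, here’s a minimal Python sketch of what a context fabric might look like as a structure; every name in it is invented for illustration, not drawn from any real API. The point is that a fabric is something you ask questions of, not a set of sensors you read.

```python
from dataclasses import dataclass, field

@dataclass
class ContextFabric:
    """A hypothetical analytic framework over one context (location,
    social, shopping, dining, work...). Callers ask it questions;
    they never touch the underlying data sources directly."""
    name: str
    knowledge: dict = field(default_factory=dict)  # pre-analyzed state

    def answer(self, question: str):
        # Return a derived answer, never raw source data.
        return self.knowledge.get(question)

# A user walking down the street "transits" several fabrics at once.
fabrics = [
    ContextFabric("lbs", {"nearest_transit": "5th Ave station"}),
    ContextFabric("social", {"friends_nearby": ["Alice"]}),
    ContextFabric("dining", {"open_table": "7:30pm"}),
]
```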
The information from these fabrics is assimilated not by the user’s smartphone but by an agent process that represents the user, hosted in the cloud. This agent will dip into the available fabric processes for information as needed, and those processes will be part of the “mobile service” the user sees. Some of them will be provided as features by the user’s mobile carrier, others by third parties.
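Building on the ContextFabric sketch above (and again with every name hypothetical), the agent process might look something like this: a cloud-hosted object that fields a request, dips into whichever fabrics apply, and returns a digested answer rather than raw feeds.

```python
class UserAgent:
    """A hypothetical cloud-hosted process representing one user."""

    def __init__(self, user_id: str, fabrics: list):
        self.user_id = user_id
        # Some fabrics come from the mobile carrier, some from third parties.
        self.fabrics = fabrics

    def handle(self, question: str) -> dict:
        # Dip into each available fabric for whatever it can contribute...
        contributions = {f.name: f.answer(question) for f in self.fabrics}
        # ...then digest the results into one answer for the user's device.
        found = {name: a for name, a in contributions.items() if a is not None}
        return {"user": self.user_id, "question": question, "answer": found}

agent = UserAgent("user-123", fabrics)  # the fabrics list from the sketch above
print(agent.handle("nearest_transit"))
```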
The cellular network and the mobile service network are now separated into two layers. One is the user-to-agent connection, which would look pretty much like mobile services look today, except that the primary traffic anchor point is not the public network gateway but the on-ramp to the cloud where agent processes are hosted. The second layer is the set of inter-process links that allow agent processes to contact fabric processes for services.
Many of these fabric processes will be multi-tenant, server-based applications that look a lot like the Internet elements of today, and many will be dedicated cloud components customized to the user. Some fabric processes, like video delivery, will be able to use the connection to the user directly (they are brokered by the user’s agent, but the agent doesn’t stand in the data path), while others will deliver information to the agent for analysis, digestion, and delivery to the user. We could call the two classes of fabric processes Internet processes, much like those of today, and Undernet processes, which are structured more like back-end applications.
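The split between the two classes is easy to show in the same hypothetical sketch style; the names and the broker function here are mine, not anyone’s product. The key distinction is whether the agent stays in the data path.

```python
from dataclasses import dataclass
from enum import Enum, auto

class FabricClass(Enum):
    INTERNET = auto()   # brokered by the agent, but data flows directly to the user
    UNDERNET = auto()   # agent-mediated, back-end-style process

@dataclass
class FabricProcess:
    name: str
    klass: FabricClass

    def serve(self, request: str):
        if self.klass is FabricClass.INTERNET:
            # Hand back a direct endpoint; the agent won't carry this traffic.
            return f"stream://{self.name}/{request}"
        # Otherwise return raw results for the agent to digest.
        return f"{self.name} results for {request}"

def broker(fabric: FabricProcess, request: str) -> dict:
    """The agent brokers every session but mediates only Undernet ones."""
    result = fabric.serve(request)
    if fabric.klass is FabricClass.INTERNET:
        return {"connect_to": result}          # the device connects directly
    return {"answer": f"digested: {result}"}   # the agent digests, then forwards

print(broker(FabricProcess("video-delivery", FabricClass.INTERNET), "movie-42"))
print(broker(FabricProcess("shopping", FabricClass.UNDERNET), "best-deals"))
```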
Things like the IoT are important in networking not because they’ll somehow multiply the number of devices on the Internet. We all know, if we think about it, that IoT devices won’t be directly on the Internet: the practice would be insecure, and the notion of a bunch of anonymous sensors being queried for information is a processing nightmare. Instead we’ll have an IoT fabric process, probably several from different sources. What’s important is that the IoT could be a driver of the Undernet model, which would create a form of intra-cloud connection whose character is different from that of traditional networking. We don’t have users in the traditional sense; we have members that are application pools running in data centers. It’s likely that these data centers will be fiber-linked and that we’ll then have a service layer created by virtualized/SDN technology on top.
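To illustrate, still in the same hypothetical style, an IoT fabric process might look like an aggregator whose private side ingests sensor readings over a closed network and whose public side exposes only derived answers; nothing here reflects any actual IoT platform.

```python
class IoTFabric:
    """A hypothetical IoT fabric process. Sensors feed it over a
    private, closed network; consumers ask it analytic questions.
    No sensor is ever addressable, or even visible, from outside."""

    def __init__(self):
        self._readings = {}  # sensor_id -> latest value (never exposed)

    def ingest(self, sensor_id: str, value: float) -> None:
        # Private side: sensors push readings; nothing queries them.
        self._readings[sensor_id] = value

    def average(self, prefix: str) -> float:
        # Public side: only derived, anonymized answers leave the fabric.
        values = [v for k, v in self._readings.items() if k.startswith(prefix)]
        return sum(values) / len(values) if values else float("nan")

fabric = IoTFabric()
fabric.ingest("temp/5th-ave/001", 21.5)
fabric.ingest("temp/5th-ave/002", 22.1)
print(fabric.average("temp/5th-ave"))  # consumers see 21.8, never the sensors
```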
Business productivity support, in the form of corporate cloud applications hosted in the public cloud or the data center, creates fabric processes in the Undernet. Things like the SOA-versus-REST debate, even things like NFV and orchestration, become general questions about how to bind elements of intelligence that give a mobile user or worker what they need when they need it. We lose the question of the at-home worker to the notion of the worker who’s equally at work wherever they are. Collaboration becomes a special case of social-fabric behavior; marketing becomes an intersection of the location fabric and the mission fabric. Service features and application components become simply software atoms that can be mixed and matched as needed.
Security will be different here too. The key to a secure framework is for the Undernet to be accessible only to agent processes that are validated, and it’s feasible to think about how to do that validation because we’re not inheriting the Internet’s model of open connectivity. You have to be authenticated, which means you have to be a proven identity with a proven business connection to the framework, so your transactions can be settled.
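One way to picture that validation, purely as a sketch and not any established protocol, is a credential that binds an agent’s proven identity to a settlement account, checked before any fabric process becomes reachable. The HMAC approach and every identifier below are my own illustration.

```python
import hashlib
import hmac
import secrets

# Hypothetical Undernet admission control: only validated agent
# processes get in, so there is no anonymous, Internet-style access.
SETTLEMENT_KEY = secrets.token_bytes(32)  # held by the framework operator

def issue_credential(agent_id: str, account_id: str) -> str:
    """Bind a proven identity to a settlement (billing) account."""
    msg = f"{agent_id}:{account_id}".encode()
    return hmac.new(SETTLEMENT_KEY, msg, hashlib.sha256).hexdigest()

def admit(agent_id: str, account_id: str, credential: str) -> bool:
    """Gatekeeper check run before any fabric process is reachable."""
    expected = issue_credential(agent_id, account_id)
    return hmac.compare_digest(expected, credential)

cred = issue_credential("agent-123", "acct-77")
assert admit("agent-123", "acct-77", cred)      # validated agent gets in
assert not admit("agent-123", "acct-99", cred)  # wrong account: refused
```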
All of this is very different from what we have, so in one sense you can say that Cisco is right to give it a different name. On the other hand, it’s a major shift, major to the point where it’s very possible that incumbency in the current network/Internet model won’t be much help in the Undernet model of the future. We’ll still have access networks as we do now, and still have server farms, but we’ll have a network of agents and not a network of devices. So the question for Cisco, and for Cisco’s rivals, is whether the giant understands where we’re headed and is prepared to move decisively to get there first. Are the PR events the first signs of Cisco staking out a position, or a smoke screen to hide the fact that they aren’t prepared to admit to a change that might cost them their incumbency? We’ll probably get the answer late in 2015.