Everyone believes that virtualization is important. I hear that from all 301 enterprises I’ve chatted with on the topic this year, and from all 88 of the network operators. There are really no service providers or enterprises that I think aren’t committed to virtualization, and committed in multiple ways. It’s also a topic that most buyers (and of course all sellers) say they understand. In the enterprise space, 249 of the group said they were “completely satisfied” with their virtualization understanding, and only 3 said they believed they had material deficiencies in their understanding of the concept. But do either enterprises or providers really understand?
If you do a quick search on the term, you see that there are at least a half-dozen different definitions of virtualization, and the same spread of definitions exists among both providers and enterprises. It’s an operating system strategy, it’s the basis for cloud computing, it’s something that lets you create virtual representations (nothing obvious about that)…you get the picture. None of the definitions actually define the concept; they define the way it’s applied to specific missions.
Here’s my take. Virtualization is the process of creating a functional abstraction of an IT or network element that is then presented and used like the real thing would be used, but is constructed from a collection of resources. That definition applies to all the missions, and because it’s oriented toward the fundamental core of them all, it contains the critical things we need to be thinking about when we talk about virtual networks, virtual functions, virtual servers, clouds, and so forth. Many of you will recognize that “abstraction” is also the basis for intent modeling, and the commonly cited concept of the “black box” whose contents can be known only by examining its exposed properties.
A functional abstraction is the distillation of the properties of something we want to use. A server is something we use for hosting. A virtual machine is a functional abstraction of a server. So is a container, and so are other elements of cloud computing, like IaaS. In order for that abstraction to be used, two things are required. First, it has to be presented, which means that some implementing process has to assert the interfaces that the real “something” would present, so that from the outside the functional abstraction looks like what it represents. In other words, you have to create a black box that from the outside looks like the real thing. Second, it has to be mapped or realized by filling in the black box with a set of real resources that fulfill its defined external properties.
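To make the “present” and “realize” steps concrete, here’s a minimal Python sketch. The class names are my own invention for illustration, not any product’s API: a `Server` abstraction is the black box, a `VirtualMachine` presents its interface, and a hidden `PhysicalHost` realizes it.

```python
from abc import ABC, abstractmethod

# The "black box": the external properties a real server would present.
class Server(ABC):
    @abstractmethod
    def run(self, workload: str) -> str:
        """Host a workload and return a result handle."""

# A concrete resource that could sit inside the box.
class PhysicalHost:
    def __init__(self, name: str):
        self.name = name

    def execute(self, workload: str) -> str:
        return f"{workload} running on {self.name}"

# The virtualization layer: presents the Server interface (step one)
# and realizes it by mapping to a real resource (step two).
class VirtualMachine(Server):
    def __init__(self, backing_host: PhysicalHost):
        self._host = backing_host   # hidden inside the black box

    def run(self, workload: str) -> str:
        return self._host.execute(workload)

# From the outside, a VirtualMachine is used exactly like a Server;
# the caller never sees which PhysicalHost realizes it.
vm = VirtualMachine(PhysicalHost("rack3-blade7"))
print(vm.run("web-frontend"))
```

The point of the sketch is that the consumer binds only to what the black box exposes; everything inside it can change without the consumer knowing.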
If we look at virtualization this way, we can see that there have been three distinct phases to its application. In the first phase, virtualization was used to map between a pool of resources and a specific instance presented for use. You create virtual machines from a pool of servers, and when you present one for use the user/application doesn’t know just where it came from (except to the extent that you allow assignment to be influenced by parameters like location). This was the basis for most data center virtualization and also for early cloud IaaS.
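A rough sketch of that first-phase mapping, again with invented names and toy placement logic, might look like this: a requester asks a pool for a virtual machine and gets back an opaque handle, with placement influenced only by parameters it chooses to supply.

```python
import random
from typing import List, Optional, Tuple

# A hypothetical first-phase mapper: virtual machines are drawn from a
# pool of hosts, and the requester never learns which host was chosen
# (unless a constraint like location narrows the choice).
class HostPool:
    def __init__(self, hosts: List[Tuple[str, str]]):
        # hosts: (host_name, location) pairs
        self.hosts = hosts

    def allocate_vm(self, location: Optional[str] = None) -> str:
        candidates = [name for name, loc in self.hosts
                      if location is None or loc == location]
        if not candidates:
            raise RuntimeError("no capacity matching the request")
        chosen = random.choice(candidates)  # placement is the pool's business
        return f"vm-on-{chosen}"            # opaque handle returned to the requester

pool = HostPool([("hostA", "us-east"), ("hostB", "us-east"), ("hostC", "eu-west")])
print(pool.allocate_vm())                    # could land anywhere in the pool
print(pool.allocate_vm(location="eu-west"))  # constrained by a parameter
```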
The second phase of virtualization differs in that it presents not an abstraction of a specific real resource (a server) but rather an available behavior set. You could think of this as “interface abstraction” because the behavior sets are usually presented via an interface, or even “service abstraction” because the behavior represents something that looks like a service you can consume. IP VPNs are an example of this, as are PaaS and SaaS cloud services.
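Here’s a small sketch of that second-phase idea, with illustrative names rather than any standard service API: the consumer binds to a VPN behavior set, and whether MPLS or SD-WAN realizes it is invisible.

```python
from abc import ABC, abstractmethod

# Second-phase abstraction: what is presented is a behavior set (a service
# interface), not a stand-in for one specific box.
class VpnService(ABC):
    @abstractmethod
    def connect_site(self, site_id: str) -> None: ...
    @abstractmethod
    def send(self, src: str, dst: str, payload: bytes) -> None: ...

class MplsVpn(VpnService):
    def connect_site(self, site_id: str) -> None:
        print(f"provisioning MPLS PE attachment for {site_id}")
    def send(self, src: str, dst: str, payload: bytes) -> None:
        print(f"forwarding {len(payload)} bytes over the MPLS core")

class SdWanVpn(VpnService):
    def connect_site(self, site_id: str) -> None:
        print(f"spinning up an SD-WAN edge for {site_id}")
    def send(self, src: str, dst: str, payload: bytes) -> None:
        print(f"tunneling {len(payload)} bytes over the Internet")

# The consumer binds to the service behavior, not to the realization.
def onboard(service: VpnService) -> None:
    service.connect_site("branch-12")
    service.send("branch-12", "hq", b"hello")

onboard(MplsVpn())
onboard(SdWanVpn())
```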
What about the third phase? This is where we are now, and what might arguably be the end-game for virtualization. Here, you still abstract a behavior set, but instead of focusing on one that’s available, you use the flexibility and agility of abstraction to present behaviors you can use, want, or need, but that are not currently available. We have some of this already in the form of the web services offered by cloud providers, and in artificial intelligence.
The overall goal of all these phases is to separate functionality from explicit resource commitments. The properties at the top of a virtualized model are consistent, dependable, and user-bound in that applications and practices are dependent on them. The way those properties are realized evolves as technology changes make new tools available, and as resources are pooled and allocated, fail and scale. Virtualization is a kind of adapting process. You have a specific kind of plug, which demands a compatible socket, and you use virtualization to map between what the plug has to see and what’s available to pump stuff into it.
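The plug-and-socket idea is really just an adapter, so here’s a compact, purely hypothetical Python sketch of it: the “plug” is the contract the application depends on, and the adapter maps it onto whatever “socket” happens to be available today.

```python
# All names here are invented for illustration.
class ObjectStore:                          # the available "socket"
    def put(self, key: str, data: bytes) -> None:
        print(f"stored {len(data)} bytes under {key}")

class StoragePlug:                          # the fixed consumer-side contract
    def save(self, name: str, data: bytes) -> None:
        raise NotImplementedError

class ObjectStoreAdapter(StoragePlug):
    """Virtualization as adaptation: map the plug onto the socket."""
    def __init__(self, store: ObjectStore):
        self.store = store
    def save(self, name: str, data: bytes) -> None:
        self.store.put(key=name, data=data)

ObjectStoreAdapter(ObjectStore()).save("invoice-2024", b"...")
```

When the underlying resource changes, only the adapter changes; the plug the application sees stays put.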
You might wonder what the difference is here, phase-wise, so let me offer an example. If we go back to the origins of network functions virtualization (NFV) we see that the goal was to create a virtual network function (VNF) that replaced a physical network function (PNF), meaning a device or appliance. This was very much a phase-one process because what you abstracted was not a service seen by a user or application, but a device as seen by other devices. NFV let you slide in a VNF for a device, and so preserved the network as well as the service. If we were to look at VPNs, SD-WAN, or a cloud service, we’d see a second-phase abstraction, meaning that we would abstract a service interface and the functions/features presented there.
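A sketch of that phase-one NFV framing, with interface names that are illustrative rather than drawn from any NFV specification: what is abstracted is the device as its neighbors and management systems see it, so a VNF can stand in for a PNF without the rest of the network changing.

```python
from abc import ABC, abstractmethod

# The device-level black box: how other devices and management see a firewall.
class FirewallDevice(ABC):
    @abstractmethod
    def apply_rule(self, rule: str) -> None: ...
    @abstractmethod
    def health(self) -> str: ...

class PhysicalFirewall(FirewallDevice):     # the PNF: a box in a rack
    def apply_rule(self, rule: str) -> None:
        print(f"pushing '{rule}' to appliance CLI")
    def health(self) -> str:
        return "appliance OK"

class VirtualFirewall(FirewallDevice):      # the VNF: software on a pooled host
    def __init__(self, host: str):
        self.host = host
    def apply_rule(self, rule: str) -> None:
        print(f"pushing '{rule}' to firewall instance on {self.host}")
    def health(self) -> str:
        return f"instance on {self.host} OK"

# Because both present the same device behavior, sliding a VNF in for a PNF
# preserves the network and the service built on it.
for fw in (PhysicalFirewall(), VirtualFirewall("server-42")):
    fw.apply_rule("deny tcp any any port 23")
    print(fw.health())
```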
The third-and-maybe-final phase is the most interesting, because it’s aimed not at current resources or services, but at new features, functions, and missions. The point of this form of virtualization is not to provide an alternative path to fulfilling an existing function, but rather to create new capabilities that facilitate consumption. This is where I think NFV should have gone, where cloud computing is already heading, and where “carrier cloud” should really be directed. In this form of virtualization, the abstraction is purely functional, high-level in nature, and designed to facilitate development above rather than create interoperability and efficiency below. In a sense, it creates, via abstraction APIs, the “platform-as-a-service” model I’ve talked about, one that has been in use for web services for ages.
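As a hedged illustration of what a third-phase, developer-facing abstraction might look like, here’s a hypothetical Python sketch of a platform-style feature. Everything here is invented for the example; the point is that developers consume a new functional capability as an API, never the hosting, scaling, or realization behind it.

```python
# Toy building blocks hidden behind the platform abstraction.
class EventStream:
    def publish(self, topic: str, event: dict) -> None:
        print(f"event on {topic}: {event}")

class AnomalyDetector:
    def score(self, sample: float) -> float:
        return abs(sample - 50.0) / 50.0    # toy scoring logic

class TelemetryInsightService:
    """A platform-style feature exposed as an abstraction API: applications
    build above it, and its realization can evolve freely below."""
    def __init__(self):
        self._detector = AnomalyDetector()
        self._stream = EventStream()

    def ingest(self, device_id: str, sample: float) -> None:
        if self._detector.score(sample) > 0.8:
            self._stream.publish("anomalies", {"device": device_id, "value": sample})

svc = TelemetryInsightService()
svc.ingest("router-7", 97.0)     # an application built above the abstraction
```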
The problem with this seemingly wonderful, effective, responsive, and logical outcome is that it requires a wider vision of the future than we’re accustomed to seeing in the service space. You have to understand development and your partnership with developers. You have to understand software architecture and its role in framing tools that support the direction the service market will take. You have to understand the economic stake, and the economic benefits, that would accrue to all the stakeholders. It’s a heavy lift, but I think it’s where we’re inevitably going to take virtualization. It’s mostly a question of who sees this future, when they see it, and how much they can exploit it to achieve a measure of market dominance.