In this second of my pieces on the things we’re forgetting in carrier cloud, NFV, and lifecycle automation, I want to look at the issue of tenancy. You may have noticed that VNFs, and NFV processes in general, tend to be focused on single-tenant services. Virtual CPE (vCPE), the current leader in VNF popularity, is obviously designed for the tenant whose premises are involved.
What makes this interesting (and perhaps critically important) is that many of the most credible applications for function virtualization aren’t single-tenant by nature, and even single-tenant services may include features that are used by multiple tenants. Are these applications perhaps different enough in their requirements that the same framework used for single-tenant services isn’t optimum for them? If so, there’s a barrier to adoption of NFV principles just where they may be most important.
Back in August of 2013, I did a presentation on this issue, describing the concept I called “Infrastructure Services”. An infrastructure service is a functional element that isn’t instantiated for each tenant/service, but rather is shared in some way. A good example of such a service is the Internet. If you were to frame out a model of an SD-WAN service, you’d need to presume Internet connectivity at the target sites, and represent that connectivity in the model and management, but you’re not going to deploy an instance of the Internet every time someone orders SD-WAN.
One particularly important (and particularly unrepresented) example of an infrastructure service is the NFV software itself. In a TMF-modeled deployment, we’d have a data model (the SID in TMF terms) that includes a state/event table to direct service events to operations processes. The element that processes a service data model and steers events to those processes is what I called the “Service Factory”, itself one of the infrastructure services. Similarly, each of the operations processes could be an infrastructure service. You could then say that even the service lifecycle software could be framed in the same way as the services it’s deploying; “Deploy” is a lifecycle state, as is “Order”, and so forth.
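To make that concrete, here’s a minimal sketch, in Python, of how a Service Factory might steer service events to operations processes through a state/event table. The state names, event names, and handler functions are all hypothetical illustrations of mine, not anything drawn from the TMF or NFV ISG specifications.

```python
# Hypothetical state/event table: each (lifecycle state, event) pair names
# the operations process that should handle the event.
STATE_EVENT_TABLE = {
    ("Order",  "Activate"):       "deploy_service",
    ("Deploy", "DeployComplete"): "start_billing",
    ("Active", "Fault"):          "remediate",
    ("Active", "Deactivate"):     "tear_down",
}

# Hypothetical operations processes; in practice each could be a stateless
# microservice or a shared infrastructure service reached through an API.
def deploy_service(model, event): print("deploying", model["name"])
def start_billing(model, event):  print("billing started for", model["name"])
def remediate(model, event):      print("remediating", model["name"])
def tear_down(model, event):      print("tearing down", model["name"])

PROCESSES = {f.__name__: f for f in
             (deploy_service, start_billing, remediate, tear_down)}

def service_factory(model, event):
    """Process a service data model: steer the event to the right process."""
    handler = STATE_EVENT_TABLE.get((model["state"], event["type"]))
    if handler:
        PROCESSES[handler](model, event)

# A service sitting in the "Order" state receives an Activate event.
service_factory({"name": "sdwan-101", "state": "Order"}, {"type": "Activate"})
```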
One reason this is a good starting point for infrastructure services is that an operations process might be a “microservice”, fully stateless and thus fully scalable, and another operations process might be an infrastructure service that’s shared, like something that makes an entry in a billing journal. Note that these services—either type—could be created outside NFV or could be NFV-created, providing that NFV had (as I was recommending it have) a means of specifying that a given service was “infrastructure” rather than per-tenant.
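As an illustration of that distinction (the names and address here are hypothetical, not a real API), the operations process below is stateless, so any number of copies can run and scale freely, while the billing journal it posts to is a shared infrastructure service reached at a well-known address rather than deployed per tenant.

```python
import json
import urllib.request

# Hypothetical address of a shared billing-journal infrastructure service.
# It isn't instantiated per service; every tenant's processes post to it.
BILLING_JOURNAL_API = "http://10.200.0.20:8080/journal"

def record_usage(service_id, tenant_id, minutes):
    """A stateless operations process: everything it needs arrives in the
    call, so instances can be spun up, scaled, or replaced at will."""
    entry = {"service": service_id, "tenant": tenant_id, "minutes": minutes}
    req = urllib.request.Request(
        BILLING_JOURNAL_API,
        data=json.dumps(entry).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```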
Given this, what can we say about the properties of an infrastructure service? The best way to try to lay out the similarities/differences versus per-tenant VNF-like services is to look at what we can say about the latter first.
Recall from my other blogs on NFV that the NFV ISG has taken a very box-centric view of virtualization. A virtual function is the hosted analog of a physical network function, meaning a device. You connect virtual network functions by service chaining, which creates a linear path similar to what you’d find in a network of devices. A per-tenant VNF is deployed on the activation of the service, and disappears when the service is deactivated.
An infrastructure service is deployed either as a persistent function or as a function that comes to life when it’s used. Both could be scalable and redeployable, provided the implementation of the function is consistent with that goal. It probably lives inside a subnet, the same structure that VM and container deployments presume to hold the components of an application. It doesn’t expose the equivalent of an “interface” to a “device”, but rather an API.
In terms of access, you’d be able to access a per-tenant function from another function within the same service, or selectively from the outside if an API of a function were exposed. Inside a subnet, as used by both OpenStack and Kubernetes/Docker, you have a private address space that lets the subnet elements communicate, and you have a means of exposing selective APIs to a broader address space, like a VPN or the Internet. This is pretty much what the NFV community seems to be thinking about as the address policy.
An infrastructure service is by nature not limited to a single tenant/service instance, but exposing it to the whole world, or to a VPN, isn’t the only way to share it. This, in fact, might be the most profound difference between infrastructure services and virtual functions, so let’s dig on it a bit.
A given service has a “service domain”, representing the address space in which its unique functions are deployed. Think of this as the classic private Class C IP range, 192.168.x.x. There are over 65 thousand available addresses in that range, which should cover the needs of per-service deployed-function addressing.
Next, it would be reasonable to say that every service buyer might have a “tenant domain”, an address space that exposes the APIs of the functions/services/elements that have to be shared across services. Let’s say that this tenant space is the Class B private range, 172.y.x.x, where y is between 16 and 31. That range has over a million available addresses, plenty to cover the cross-service functions that have to be addressed/exposed within a tenant’s service infrastructure. An example of such a function might be a management system.
What about the stuff outside the tenant’s own domain? The Class A private range, 10.x.x.x, has almost 17 million available addresses. We could imagine this space being divided up into four function groups. The first would be used for the service lifecycle software application itself. The second would represent the address space into which service- and tenant-specific APIs were exposed to allow lifecycle software to access them. The third would be for the infrastructure elements (hosts, etc.), and the final one for those infrastructure services that are available to per-service, per-tenant, or lifecycle-software use. For this last one, think IMS.
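To show that those numbers hold together, here’s a quick sketch using Python’s ipaddress module; the way I carve the 10.x.x.x space into four equal function groups is purely my own illustration, not a prescription.

```python
import ipaddress

# A per-service "service domain": one /24 out of the 192.168.0.0/16 block.
service_domain = ipaddress.ip_network("192.168.1.0/24")

# A per-buyer "tenant domain": one /16 out of the 172.16.0.0/12 block.
tenant_domain = ipaddress.ip_network("172.16.0.0/16")

# The shared space: split 10.0.0.0/8 into four equal function groups.
shared = ipaddress.ip_network("10.0.0.0/8")
groups = list(shared.subnets(new_prefix=10))   # four /10 blocks
labels = ["lifecycle software", "exposed service/tenant APIs",
          "infrastructure elements", "shared infrastructure services"]

print("192.168.0.0/16:", ipaddress.ip_network("192.168.0.0/16").num_addresses)  # 65536
print("172.16.0.0/12: ", ipaddress.ip_network("172.16.0.0/12").num_addresses)   # 1048576
print("10.0.0.0/8:    ", shared.num_addresses)                                  # 16777216
for label, net in zip(labels, groups):
    print(f"{label}: {net} ({net.num_addresses} addresses)")
```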
One of the important things about infrastructure services is that you don’t really deploy them so much as “register” them. Infrastructure services are known by their APIs, meaning network addresses. The addresses may represent a persistent resource or an on-demand one, but they will rarely represent something that’s deployed in the way VNFs are generally expected to be. Thus, infrastructure services are really cloud components.
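A simple way to picture “registering” rather than deploying is a catalog that maps an infrastructure-service name to the API address where it can already be reached; the registry interface below is purely illustrative.

```python
# Illustrative registry: infrastructure services are known by API addresses,
# whether the resource behind them is persistent or instantiated on demand.
INFRASTRUCTURE_SERVICES = {}

def register(name, api_address, on_demand=False):
    """Record where an existing (or on-demand) service can be reached."""
    INFRASTRUCTURE_SERVICES[name] = {"api": api_address, "on_demand": on_demand}

def lookup(name):
    """Lifecycle software resolves a name to an address; nothing gets deployed."""
    return INFRASTRUCTURE_SERVICES[name]["api"]

# Hypothetical entries: an IMS core and the billing journal from the earlier sketch.
register("ims-core", "http://10.192.0.10:5060")
register("billing-journal", "http://10.200.0.20:8080/journal")
print(lookup("ims-core"))
```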
This isn’t the first time I’ve said that many of the things we think of as “NFV” aren’t. Applications like IMS, like CDNs, like most of IoT, are consumers of infrastructure services, not VNFs. In fact, the first NFV ISG PoC involved an open-source IMS implementation, and it demonstrated that deploying an infrastructure service like IMS could be done with the same process used for traditional VNFs, but that addressing and address assignment were subnet-based and thus “cloud-like” rather than service-chained and box-like. Proof, I think, that even back in 2013 we had an implicit recognition that shared service components were more likely to follow cloud principles than VNF principles.
Infrastructure services aren’t theoretical, they’re real, and we have examples of them all through the current initiatives on carrier cloud, virtualization, and lifecycle automation. Like virtual networking, infrastructure services are fundamental, and like virtual networking they’ve been largely ignored. They were raised from the very first (in the first NFV PoC, as I’ve noted here) and they should have been an integrated part of NFV, but that didn’t happen. It may happen now, at least in the sense that they may get back-filled into NFV’s framework, because of the attention being paid to mobile, 5G, content delivery, and carrier cloud services. It makes sense to frame virtualization to address all its missions, and infrastructure services are certainly a mission to be addressed.