We now have a number of public or semi-public operator architectures for next-gen networks, and we’ve had semi-public vendor architectures for some time. The “semi-” qualifier here means that a number of architectures in both spaces are not explicitly under NDA, but are nevertheless not particularly well described. We also have a number of viewpoints on what has to be done that are not reflected well in any of the models. I’ve tried to synthesize a general model from all this, and I want to try to describe it here. Comments welcome, of course!
A good approach has to start at the top, with the conception of a retail service as distinct from a network service. A retail service (which the TMF calls a “product”) is a commercial offering that will normally combine several network services to create something with the geographic scope and functional utility needed. Retail services are the province of OSS/BSS systems, and they have a price and an SLA associated with them. Customer care necessarily focuses on the retail service, though that focus has to be qualified by the fact that some of the underlayment is also at least somewhat visible to a customer.
What is that underlayment? In my view, the TMF has the best answer for a part of it—the notion of a “customer-facing service” as a component of a “product”. A CFS is an orderable element of a retail service that, like the overarching retail service or product, has a price and an SLA. It’s my view that this is all mediated by OSS/BSS systems too, because the distinction between a “product” and a “CFS” is minimal; one is a collection of the other.
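To make that layering concrete, here’s a minimal sketch in Python. The class and field names are mine, invented for illustration rather than drawn from the TMF SID; the point is only that a product is a priced, SLA-bearing bundle of CFSs that are themselves priced and SLA-bearing:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SLA:
    availability_pct: float      # e.g. 99.95
    max_latency_ms: float

@dataclass
class CustomerFacingService:
    """An orderable component of a retail service, with its own price and SLA."""
    name: str
    monthly_price: float
    sla: SLA

@dataclass
class Product:
    """A retail service: a commercial bundle of CFSs, also with a price and SLA."""
    name: str
    monthly_price: float
    sla: SLA
    components: List[CustomerFacingService] = field(default_factory=list)
```

The distinction is structural rather than qualitative, which is why both levels can live comfortably inside the OSS/BSS.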
The next-gen question really comes down to what’s under the CFS layer, for three reasons. First, it’s pretty clear that below CFS we’re getting to the place where “deployment” and “management” become technical tasks. Second, because this is the point of technology injection, it’s also the point where next-gen and current-gen become relevant distinctions. Finally, at this point we have to start thinking about what makes up a “manageable element” to an OSS/BSS and how we’d express or model it.
The TMF places “resource-facing services” under CFSs, and my own work in the SDN/NFV space has suggested to me that this may be too simplistic a concept. My own suggestion is that we think about how virtualization and the cloud work, and use that mechanism to guide the model from this point down.
In virtualization and the cloud, we create a virtual artifact that represents a repository for functionality. That artifact is then “assigned” or “deployed” by associating it with resources. Thus, what I would suggest is that we define the next layer as the Network Function layer. We compose CFSs and products by collecting network functions. It seems to me, then, that these NFs are the lowest point at which OSS/BSS systems have visibility and influence. The management of “services” is the collected management of NFs.
An NF could, in theory, be created in two ways. First, we could have lower-level “network service” functionality that’s created and managed through a management system API. Second, we could have something that has to be deployed—NFV virtual network functions (VNFs) or router instances or cloud application components. This is the process that I’ve called “binding” in my own work—we’re binding a network function to an implementation thereof. I suggested (without success) that the TMF define a domain for this—to complement their service, resource, and product domains.
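Here’s a minimal sketch of what binding might look like, assuming hypothetical class names. The service layer holds an NF and a binding; whether that binding drives a management API or deploys software is invisible from above:

```python
from abc import ABC, abstractmethod

class NFBinding(ABC):
    """One possible implementation of a network function, whatever its form."""
    @abstractmethod
    def instantiate(self) -> None: ...
    @abstractmethod
    def teardown(self) -> None: ...

class ManagementAPIBinding(NFBinding):
    """Realizes the NF by driving an existing network service via a management API."""
    def __init__(self, api_endpoint: str, service_id: str):
        self.api_endpoint, self.service_id = api_endpoint, service_id
    def instantiate(self) -> None:
        pass   # activate the service through the management system
    def teardown(self) -> None:
        pass   # deactivate it the same way

class DeployedBinding(NFBinding):
    """Realizes the NF by deploying software: a VNF, a router instance, a cloud component."""
    def __init__(self, image: str, host_pool: str):
        self.image, self.host_pool = image, host_pool
    def instantiate(self) -> None:
        pass   # spin up the image on the chosen resource pool
    def teardown(self) -> None:
        pass   # decommission the instance

class NetworkFunction:
    """What the service layer composes; the binding behind it is never exposed upward."""
    def __init__(self, name: str, binding: NFBinding):
        self.name, self.binding = name, binding
    def deploy(self) -> None:
        self.binding.instantiate()
```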
The NF binding process is what gives next-gen infrastructure both operations efficiency and service agility. Anything that can deliver the characteristics of an NF is a suitable implementation of that NF. The concept is inherently open, inherently multi-vendor and multi-technology. But the binding is critical because it has to harmonize whatever is below into a single model.
The cloud and virtualization wouldn’t be worth much if an application hosted in a container or virtual machine had to know what specific hardware and platform software was underneath. The abstraction (container, VM) insulates applications from those details, and so the NF has to insulate the OSS/BSS and its related services/products from the infrastructure details too. The NF is, in effect, a management translator. It has a lifecycle because it’s the compositional base for services that have lifecycles. The binding process has to be able to drive the lifecycle state of the NF no matter what the resource details are below.
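A sketch of that translation, again with invented names: the NF exposes a small set of functional lifecycle states, and the binding’s job is to map whatever the resources report into those states:

```python
from enum import Enum, auto

class NFLifecycle(Enum):
    ORDERED = auto()
    DEPLOYING = auto()
    ACTIVE = auto()
    DEGRADED = auto()
    FAILED = auto()
    RETIRED = auto()

# Hypothetical resource-level events mapped to functional lifecycle states.
# The keys would differ for every implementation; the states never do.
RESOURCE_EVENT_TO_STATE = {
    "vm_started":       NFLifecycle.DEPLOYING,
    "vnf_healthy":      NFLifecycle.ACTIVE,
    "port_errors_high": NFLifecycle.DEGRADED,
    "vnf_crashed":      NFLifecycle.FAILED,
}

def translate(resource_event: str) -> NFLifecycle:
    """Management translation in miniature: resource detail in, functional state out."""
    # Anything the mapping doesn't recognize is flagged as degraded pending diagnosis.
    return RESOURCE_EVENT_TO_STATE.get(resource_event, NFLifecycle.DEGRADED)
```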
VMs and containers are the keys to virtualization because they’re the intermediary abstraction. The NF is the key to next-gen infrastructure for the same reason. Thus, I think it’s how we formulate and implement the NFs that will make the difference in any model of next-gen networks.
There are two pathways to defining an NF. The first is the commercial pathway, which says that an NF is defined when an opportunity, competition, or optimization goal attaches value to some feature that’s not currently available or well-supported. The second is the exposure pathway, which says that if there’s something that can be done at the technical level within infrastructure, it can be exposed for exploitation. In either case, there are some specific things that have to be provided.
The top requirement is that an NF has to be a true abstraction. The concept of the “intent model” has recently emerged to describe an element that’s defined by what it intends to do, not how it does it. Clearly that’s the critical property here, whichever pathway the definition of an NF has taken. Operators have to define their own NFs as proper intent models, and they have to demand that vendors who claim to offer support for “services” define the features in intent-model form so they map nicely to NFs.
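What might an intent-model statement of an NF look like? Here’s one hedged guess; the fields are mine, not drawn from any standard, but they capture the “what, not how” principle:

```python
from dataclasses import dataclass
from typing import Dict

@dataclass(frozen=True)
class NFIntent:
    """Describes what the NF does and guarantees; says nothing about how it's built."""
    function: str            # e.g. "vpn-access" or "firewall"
    sla: Dict[str, float]    # e.g. {"availability_pct": 99.9, "max_latency_ms": 20.0}
    scope: str               # e.g. "metro-atlanta"

# Any implementation (VNF, legacy device, cloud component) that can honor this
# intent is, by definition, a valid realization of the NF.
vpn_access = NFIntent(function="vpn-access",
                      sla={"availability_pct": 99.9, "max_latency_ms": 20.0},
                      scope="metro-atlanta")
```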
The second point is the one I already raised about management translation. From the top, the NF has to present management in functional terms. From the bottom, it has to collect management in technical terms, because resources are technical elements that have to be managed as they are. Since there are by definition multiple ways to realize a given NF abstraction, the NF processes have to harmonize the different models below into a single functional management model above.
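As an illustration of that harmonization (the resource-side field names here are invented), two very different implementations can be reduced to the same functional view:

```python
# Both functions return the same two functional keys; only the resource-side
# details they read from differ.

def functional_status_from_ems(ems_record: dict) -> dict:
    """A legacy element-management reading, e.g. {"oper_state": "up", "alarms": 0}."""
    return {"operational": ems_record.get("oper_state") == "up",
            "degraded": ems_record.get("alarms", 0) > 0}

def functional_status_from_vnf(vnf_record: dict) -> dict:
    """A VNF monitor reading, e.g. {"pod_phase": "Running", "restarts": 3}."""
    return {"operational": vnf_record.get("pod_phase") == "Running",
            "degraded": vnf_record.get("restarts", 0) > 0}
```

Above the NF, the OSS/BSS sees only the common keys, whichever source produced them.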
I cannot overstress the importance of this. With functional abstraction at the NF level accompanied by management abstraction, any two implementations of a given NF are equivalent at their top interface. That means that as pieces of a service they can be substituted for one another.
One interesting aspect of this point is that even if we were to have absolutely no standardization of how VNFs were deployed and managed and onboarded, we could simply present different approaches under a common NF functional umbrella and compose them interchangeably. Another is that we could build both VNF and legacy infrastructure versions of an NF and use them interchangeably as well. Finally, if we put a model hierarchy with decision points on deployment underneath an NF, we could mix infrastructure and build services across the mixture because the NF composition process for all the different infrastructures in place would be harmonized.
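Those “decision points on deployment” could be as simple as a policy table under the NF that picks an implementation per infrastructure domain. The domains and catalog entries below are hypothetical:

```python
def select_binding(nf_name: str, domain: str) -> str:
    """Pick an implementation of an NF based on the infrastructure it must land on.
    The service layer asks only for the NF; everything below this point is policy."""
    catalog = {
        ("vpn-access", "legacy-mpls-core"): "mpls-management-api",
        ("vpn-access", "nfv-edge-pool"):    "vpn-vnf-image-v3",
        ("firewall",   "nfv-edge-pool"):    "fw-vnf-image-v2",
    }
    try:
        return catalog[(nf_name, domain)]
    except KeyError:
        raise ValueError(f"no implementation of {nf_name} for domain {domain}")
```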
This doesn’t solve everyone’s problems, of course. VNF providers would either have to try to promote some harmony below the NF level or they’d have to provide implementations of the NFs in their market target zone, and potentially provide them for every infrastructure option below. Thus, NF abstraction is a complete tool for OSS/BSS management integration but only a starting point for how the next level should be managed.
I do think that we could gain some insight even down there, though. I propose that an intent-model concept of functional layers is “fractal” in that what’s a service to one layer is the infrastructure of the layer below. If NFs decompose into what are essentially sub-NFs, then the hierarchy says that by intent-modeling every layer we can use alternative implementations there in an equivalent way. We then simplify the way in which we create openness because NF-decomposition is now a standard process at all layers and it solves the openness problem where we use it—which is everywhere. That presupposes a common modeling approach at least as far down as the point where we get very configuration-specific. That’s been my own approach all along, and I hope that this explains why.
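To show what I mean by “fractal”, here’s a small sketch (names invented): a node in the model either binds directly to resources or decomposes into children that are processed exactly the same way, layer after layer:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModelNode:
    """An intent-modeled element at any layer: service, NF, or sub-NF."""
    name: str
    binding: Optional[str] = None          # set when the node maps straight to resources
    children: List["ModelNode"] = field(default_factory=list)

def deploy(node: ModelNode, depth: int = 0) -> None:
    pad = "  " * depth
    if node.binding:
        print(f"{pad}bind {node.name} -> {node.binding}")
    else:
        print(f"{pad}decompose {node.name}")
        for child in node.children:
            deploy(child, depth + 1)       # same process, one layer down

service = ModelNode("business-vpn", children=[
    ModelNode("vpn-access", binding="vpn-vnf-image-v3"),
    ModelNode("vpn-core", children=[
        ModelNode("core-transport", binding="mpls-management-api"),
    ]),
])
deploy(service)
```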
How would that lower-level decomposition work? That’s a topic for a future blog!