One of the things I found interesting about the SDN World Congress last week was that it asserted, in effect, that the whole wasn’t the sum of the parts, but rather that any one part was good enough to stand in for the whole. Anyone who had “network”, “function”, or “virtualization” in their story in any form erected the great banner of NFV for people to flock to.
An operator told the story of going to various booths to see their NFV strategy, and asking at some point “Where does this happen?” in reference to some specific NFV requirement. They were told “Oh, that’s in the higher layer”. SDN déjà vu, right? All the world’s computing power, knowledge, and software skill couldn’t possibly cover all the stuff we’ve already pushed north of those northbound APIs. Now NFV is doing the same thing.
The “why” of this is pretty simple. Besides the universal desire to get maximum PR benefit with minimal actual product collateral, this is a hard problem to solve. I’ve talked about it from the network side in the past, and most of you who read this know that the CloudNFV project I’m Chief Architect for claims to solve it. Perhaps knowing how we do that will help explain why so many people are raising their eyes toward that higher layer.
Virtualization demands mapping between abstraction and reality, and the more complicated and unbounded the need for abstract things is, the more difficult it is to realize them on resources. It’s easy to make a virtual disk, harder to make a virtual computer, and darn hard to make a virtual network—because there are a lot of pieces in the latter and because there are a lot of things you’d have to ask your network abstraction to support.
When I first looked at this problem, about a year ago when the NFV concept was launched, I knew from running my own open-source project in the service layer that the architecture to fulfill NFV could be defined, but that implementing that architecture from “bare code” would be very difficult. There are too many pieces. What turned CloudNFV from a concept into a project was finding an implementation framework that resolved the difficulties.
In software terms, the challenge of both SDN and NFV is that both demand a highly flexible way of linking data and process elements around service missions. We always say we’re moving toward that sort of agility with things like service-oriented architecture (SOA) and big-data frameworks like Hadoop, but in fact what we’re doing isn’t making our architecture agile, it’s making a new architecture for every new mission. We’re stuck on two things: guilt by association in data management, and lockstep thinking in our processes.
We wouldn’t read as much about unstructured data as we do if we didn’t need structure to conceptualize data relationships. Look at a table, any table, for proof. The problem is that once you structure data, you associate it with a specific use. You can’t look something up by a field that isn’t a key. When we have multiple visualization needs, we may be able to create multiple views of the data, but that process really means breaking down our “defaults” and building something else, and it doesn’t work well for large data masses or for real-time missions.
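As a toy illustration of that “guilt by association” point (this is my own sketch, not anyone’s product), consider a record set keyed for one use. The keyed lookup is cheap, but any other question degrades to a scan, and making it fast means freezing yet another structure with yet another assumed use:

```python
# Records structured (keyed) for one use: lookup by service ID.
services = {
    "svc-1": {"customer": "acme", "type": "firewall"},
    "svc-2": {"customer": "acme", "type": "nat"},
    "svc-3": {"customer": "globex", "type": "firewall"},
}

# Lookup by the key the structure was built for is cheap...
assert services["svc-2"]["type"] == "nat"

# ...but a different question means scanning the whole structure:
acme_services = [sid for sid, rec in services.items()
                 if rec["customer"] == "acme"]

# To make that fast, we build *another* view -- another frozen
# structure, associated with another specific use.
by_customer = {}
for sid, rec in services.items():
    by_customer.setdefault(rec["customer"], []).append(sid)
```

Every new mission repeats the last step, which is exactly the “new architecture for every new mission” trap.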
On the process side, the whole goal of componentization and SOA has been to create what software geeks call “loose coupling” or “loose binding”. That means pieces of a process can be dynamically linked into it, and as long as the new pieces have the required functionality, they fit, because we’ve described enough about them (through WSDL, for example) to make sure of that. But look inside this and what you find is that we’ve taken business processes and linked them to IT processes by flowing work along functional lines. Our “agile processes” are driven like cattle in a pen or people marching in formation. We have in-boxes and out-boxes for workers, so we give that same model to our software. Yet, as anyone who works knows, real work is event-driven. There is no flow, except in a conceptual sense. The in- and out-boxes reflect not the needs of the business but the way we happen to distribute work. We did the same with SOA.
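A minimal sketch of that contrast, with function names that are mine and purely illustrative: a flow-style pipeline bakes the order of steps into code, while an event-driven dispatcher only reacts to whatever condition actually arrives.

```python
# Flow style: steps wired in a fixed order, each handing work to the
# next, like in-boxes and out-boxes. Step functions are placeholders.
def validate(order):
    order["valid"] = True
    return order

def allocate(order):
    order["host"] = "h1"
    return order

def activate(order):
    order["state"] = "active"
    return order

def provision(order):
    # The "flow" is frozen here: validate, then allocate, then activate.
    return activate(allocate(validate(order)))

# Event style: handlers register interest; there is no predetermined
# flow, only reactions to conditions as they occur.
handlers = {}

def on(event_type):
    def register(fn):
        handlers.setdefault(event_type, []).append(fn)
        return fn
    return register

def dispatch(event_type, payload):
    for fn in handlers.get(event_type, []):
        fn(payload)

@on("order.received")
def start_validation(order):
    order["state"] = "validating"
```

In the flow version, changing the process means editing `provision`; in the event version, it means registering or removing a handler, which is closer to how real work actually happens.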
When I found EnterpriseWeb’s stuff, I saw a way out of this double mess. They have a flat, structureless data model and a process model that’s effectively a bunch of functionality floating out in space with no particular connection set or mission. To this seeming disorder, you bring a semantic layer that lets you define bindings by defining how you want something done. Data and processes are linked with each other as needed. The context of doing things (the in- and out-boxes) is defined by the semantics and maintained in that layer, so when we have to do something we marshal whatever we need and give it a unit of work to do. You can scale this dynamically to any level, fail over from one thing to another, load-balance, and scale in and out. All of that is automatic. And since the contextual links represent the way work was actually done, we can journal the links and capture the state of the system not only now but at any point in the past.
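My reading of that model, reduced to a toy sketch (every name here is mine, not EnterpriseWeb’s API): untyped facts on one side, unattached functions on the other, and a binding that resolves both against the facts only at the moment a unit of work arrives.

```python
# Flat, structureless facts: no tables, no schema, just tagged triples.
facts = [
    ("svc-9", "needs", "firewall"),
    ("svc-9", "customer", "acme"),
    ("node-a", "offers", "firewall"),
]

# A free-floating function with no fixed wiring to any data structure.
def deploy(service, node):
    return f"{service} deployed on {node}"

# A "semantic" binding: the intent ("fulfill this service") is resolved
# against the facts when work arrives, not compiled into a fixed flow.
def fulfill(service):
    need = next(o for s, p, o in facts if s == service and p == "needs")
    node = next(s for s, p, o in facts if p == "offers" and o == need)
    return deploy(service, node)
```

Adding a new kind of resource or service here means adding facts, not restructuring data or rewiring process code, which is the agility claim in miniature.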
This is how we made CloudNFV (which runs, and was demoed to operators in Frankfurt last week) work. We have abstracted all the data and semantics of the NFV application into “Active Virtualization”, whose virtualizations are those semantic-layer overlays that define what’s supposed to happen under the conditions NFV poses. We can make this structure look like whatever anyone wants to see; management can be expressed through any API, because any data model is as good as any other. We can make it totally agile, because the way something is handled is built into a semantic model, not fossilized into fixed data and software elements. It’s taking an old saw like “You are what you eat”, which implies your composition is set by a long and complicated digestive sequence, and changing it to “You are what you want”; wants don’t have much inertia.
Orchestration is more than OpenStack. Management is more than SNMP. NFV is more than declaring something to be hostable. SDN includes the stuff north of the APIs. We do the industry a disservice when we shy away from requirements for any reason, and it’s not necessary that we do. Six companies, none of them network giants, have built a real NFV implementation, and there’s no excuse for the market to settle for less from any purported supporter of the technology.