Want to know the biggest technical problem in networking today? It’s the fact that we can’t seem to stop thinking “interfaces” when we need to be thinking “APIs”. Every day we see examples of this, and every day the industry seems to ignore the significance. We need to face it now, if we’re going to make orchestration, automation, and even IoT work right.
API stands for Application Programming Interface, but the term “interface” by itself is usually applied to a hardware interface, something like RS-232, or a wireless standard like 802.11ac. By convention, “interfaces” are implemented in hardware, and “APIs” in software. That’s a huge difference, because if you churn out a million boxes that have to be updated to a new “interface”, you might be looking at replacing them all. If a million boxes need an API changed, it’s almost always just a software update.
This update facility means you have to spend far more time future-proofing hardware interfaces than APIs. APIs are not only more naturally agile in response to change, they're usually based on message-passing and can have more direct flexibility built into them. Further, one API standard is fairly easy to transmute into another; the technique is called the Adapter design pattern, and it makes one API look like another. The approach works as long as both APIs carry the same basic data elements, even if those elements are in different forms, like one API that accepts "speed" in Mbps and another in Gbps.
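To make that concrete, here's a minimal Java sketch of the Adapter pattern; the interface names and the unit conversion are invented for illustration. One provisioning API takes "speed" in Mbps, another in Gbps, and the adapter makes the first usable wherever the second is expected.

```java
// A hypothetical pair of provisioning APIs that carry the same data element,
// "speed", in different units, plus the adapter that bridges them.

// One API expresses link speed in Mbps...
interface MbpsProvisioner {
    void provisionLink(String linkId, int speedMbps);
}

// ...another expresses it in Gbps.
interface GbpsProvisioner {
    void provisionLink(String linkId, double speedGbps);
}

// The adapter lets an MbpsProvisioner stand in wherever a GbpsProvisioner is expected.
class GbpsToMbpsAdapter implements GbpsProvisioner {
    private final MbpsProvisioner target;

    GbpsToMbpsAdapter(MbpsProvisioner target) {
        this.target = target;
    }

    @Override
    public void provisionLink(String linkId, double speedGbps) {
        // Same data element, different form: convert Gbps to Mbps and delegate.
        target.provisionLink(linkId, (int) Math.round(speedGbps * 1000));
    }
}
```

Because the conversion lives in a thin software layer, neither API has to change when the other evolves; that's exactly the agility hardware interfaces can't offer.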
All this may sound like academic/standards mumbo jumbo, but it’s actually so important that the success of standards could depend on it. The reason is that if networking standards bodies apply the same processes to standards involving APIs that they’ve traditionally used for interfaces, they’ll get bogged down in details that have little or no consequence, and likely collide with other activities working in the same space from the software side.
Interface bias in standards groups tends to generate software models that look like connected boxes. Those models focus on traffic flows rather than on events, and that undermines the exploitation of specialized software features like concurrency, scalability, and resiliency. If you look at the end-to-end model of NFV that the ETSI ISG produced in 2013, you'll see this approach in action. It's fine as a representation of functional relationships, but it's not suitable for driving software design. A software person would have taken a different tack, starting with the "interfaces" of the end-to-end model and thinking of them as APIs instead.
Software processes that manage asynchronous activity have to be event-driven, because asynchronous conditions can only be communicated in terms of what's happening and what state the elements are in. Ask yourself what happens when, while a "box process" is handling one condition during deployment or scaling, another condition (related or unrelated) arises. You'd like to be able to pass the new event off to another instance of the process, and coordinate the cases where two consecutive events impact the same service. How do you show that in a box-process model? You don't.
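As a rough illustration of what "event-driven" means here, the sketch below (all names and states are hypothetical) keeps per-service state in a shared table, so any process instance can pick up the next event and react based on where that service is in its lifecycle rather than on which box happened to receive the message.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// A minimal state/event sketch: the service's state, not the process handling it,
// determines what an incoming event means.
enum ServiceState { DEPLOYING, ACTIVE, SCALING, FAILED }
enum ServiceEvent { DEPLOY_COMPLETE, LOAD_SPIKE, FAULT }

class ServiceStateMachine {
    private final Map<String, ServiceState> states = new ConcurrentHashMap<>();

    // Any process instance can call this; two consecutive events on the same
    // service are coordinated through the shared state table.
    void onEvent(String serviceId, ServiceEvent event) {
        ServiceState current = states.getOrDefault(serviceId, ServiceState.DEPLOYING);
        if (event == ServiceEvent.FAULT) {
            states.put(serviceId, ServiceState.FAILED);
            return;
        }
        switch (current) {
            case DEPLOYING:
            case SCALING:
                if (event == ServiceEvent.DEPLOY_COMPLETE) states.put(serviceId, ServiceState.ACTIVE);
                break;
            case ACTIVE:
                if (event == ServiceEvent.LOAD_SPIKE) states.put(serviceId, ServiceState.SCALING);
                break;
            default:
                break;
        }
    }
}
```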
A software developer with event-processing experience can see a lot of different ways to accomplish the same functional task, but in a software-optimized way. In the early ExperiaSphere project, well before the NFV ISG launched, I demonstrated the notion of a "service factory" that could be instantiated any number of times, where any instance could be passed a copy of the service model along with an event and handle that event correctly. The "interfaces" here, really APIs, were event-passing interfaces that communicated only the event data. The service factory pulled the service information from a database and did its thing.
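Here's a simplified sketch of that service-factory idea, not the actual ExperiaSphere code; the class and method names are invented. The point is that the factory instance is stateless: the API passes it only an event, and the instance pulls the service model from a repository, applies the event, and stores the result.

```java
// A sketch of the "service factory" concept: any instance, handed an event,
// can fetch the service model and process it.
record ServiceModel(String serviceId, String modelDocument) {}
record Event(String serviceId, String type) {}

interface ServiceRepository {
    ServiceModel fetch(String serviceId);   // pulls the service model from a database
    void store(ServiceModel model);
}

class ServiceFactory {
    private final ServiceRepository repository;

    ServiceFactory(ServiceRepository repository) {
        this.repository = repository;
    }

    // The "API" here is pure event passing: only the event data crosses the boundary.
    public void handle(Event event) {
        ServiceModel model = repository.fetch(event.serviceId());
        // ...apply the event to the model's state/event logic, then persist the result...
        repository.store(model);
    }
}
```

Because no instance holds state of its own, you can spin up as many factories as load demands, which is exactly the concurrency and scalability a box-process model hides.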
Software architects defining APIs usually work in two separate steps. First, they decide what the higher-level interface will be. In the software world, there are examples like REST (the web's HTTP model), JSON (JavaScript's object notation), SOA and RPC, and a whole series of specialized APIs for the cloud. Nearly all of these APIs are designed to deliver payloads, meaning formatted data packages, and the payload is the second thing the architect defines. A software API, then, is a combination of a simple package-exchange framework and some package format/content detail.
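A hedged example of that two-part split, using Java's standard HTTP client: the exchange framework is REST over HTTP, and the payload is a JSON package. The endpoint URL and the fields in the package are invented purely for illustration.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class DeployClient {
    public static void main(String[] args) throws Exception {
        // Step two of the API definition: the package format/content.
        String payload = """
            {
              "service": "ip-vpn-example",
              "intent": "deploy",
              "sla": { "availability": "99.99" }
            }
            """;

        // Step one: the package-exchange framework, here a REST POST over HTTP.
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://orchestrator.example.com/services"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}
```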
Does this mean that network standards people should be thinking of defining those package format/content rules? Not necessarily. APIs follow what some have jokingly called the “two-consenting-adults” rule; the two processes that are linked by an API have to agree on the package, but every possible process combination doesn’t have to use the same one. The best example of an application of this rule is in “intent modeling”.
An intent model is an abstraction of a complex process that defines its behavior, inputs and outputs, and service-level agreement. Does every intent model have the same API and package? It shouldn't, because not every intent model has the same "intent" or does the same thing. A software geek would probably say that if you're defining intent-model APIs, you'd first create a broad classification of intent models: "IP subnet", "IP VPN", "cloud-host", and so forth. There might be a hierarchy, so that "IP subnet" and "IP VPN" would be subclasses of "IP network". Software is developed this way now; in Java you can define a class that "implements" something, "extends" something, and so forth.
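A minimal Java sketch of that kind of classification might look like the following; the class names and SLA fields are hypothetical, but they show the "implements"/"extends" relationships the paragraph describes.

```java
// Hypothetical intent-model classes, showing a broad class with refining subclasses.
interface IntentModel {
    String name();
    void deploy();                  // realize the intent
    ServiceLevelAgreement sla();    // the SLA the model commits to
}

record ServiceLevelAgreement(double availabilityPercent, int latencyMs) {}

// The broad "IP Network" class of intent...
abstract class IpNetwork implements IntentModel { }

// ...with subclasses that refine it.
class IpSubnet extends IpNetwork {
    public String name() { return "IP-Subnet"; }
    public void deploy() { /* allocate addresses, configure routing, etc. */ }
    public ServiceLevelAgreement sla() { return new ServiceLevelAgreement(99.9, 20); }
}

class IpVpn extends IpNetwork {
    public String name() { return "IP-VPN"; }
    public void deploy() { /* establish tunnels, apply policy, etc. */ }
    public ServiceLevelAgreement sla() { return new ServiceLevelAgreement(99.99, 10); }
}
```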
Interface people tend to focus instead on the notion of a universal data/information model, which demands that they explore all the possible things a software function might do and all the data it might need to do it. Obviously, that takes a lot of time and produces something that's potentially enormous. That "something" is also very brittle, meaning that any change in technology or service could give rise to new data needs, which would then have to be added to the old model. Since that old model is "universal", extending it raises the risk that previous implementations will have to be updated too. With the "right" approach, new data is only significant in the specific packages where it might be found; everything else stays the same.
People might argue that this loosey-goosey approach would make integration difficult or impossible, but that's not true. Suppose some body, like the TMF or ETSI, defined a set of intent classes to represent the elements of network services, meaning the features. Suppose then that they defined the basic input/output requirements for each, which would not be difficult. They could then say that anyone who implemented an Intent-Class IP-Subnet had to run properly with the specified class data packages, and could offer extensions only on top of that basic operation. Now integration is a choice for operators: stick with the basic class for guaranteed interoperability, or exploit an extension's benefits. If enough people decided to use an extension, it might then be made part of the basic class, or more likely used to create a derivative class.
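Building on the hypothetical IpSubnet class from the earlier sketch, a derivative class carrying an extension might look like this; the name and the encryption extension are invented examples.

```java
// A hypothetical derivative class: it still runs properly as an Intent-Class IP-Subnet,
// and offers its extension only on top of that basic operation.
class EncryptedIpSubnet extends IpSubnet {
    private final String cipherSuite;   // extension data, invisible to basic-class users

    EncryptedIpSubnet(String cipherSuite) {
        this.cipherSuite = cipherSuite;
    }

    @Override
    public void deploy() {
        super.deploy();                 // the basic IP-Subnet behavior is preserved
        /* ...then layer on encryption using cipherSuite... */
    }
}
```

An orchestrator that knows only the basic class can still deploy this element; one that understands the extension can exploit it.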
Network services, or features of network services, aren't the only place where confusing APIs with interfaces will end up biting us if we're not careful. IoT is at great risk, and in fact so is event processing in general. Here it's important to remember two things: simplicity contains costs and processing effort, and software can adapt to change more easily than hardware can.
The big problem with IoT is that we've yet to face the truth about it. We are not going to be putting hundreds of millions of sensors directly on the Internet; what we're going to be doing is exposing vast amounts of sensor data on the Internet. That is best done through APIs, and of course they should be APIs that deliver event information to functional/lambda elements or microservices. Again, what's needed here are "API classes" that allow multiple types of sensors to be combined with event processing and analytics to deliver secure, compliant, useful information.
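As a rough sketch of what such an "API class" could look like (the names and the single method are assumptions, not any existing IoT standard), applications would subscribe to curated event streams rather than talking to sensors directly:

```java
import java.util.function.Consumer;

// A hypothetical "API class" for sensor data: the subscriber receives event
// information, never raw access to the sensors themselves.
record SensorEvent(String sensorClass, String sensorId, long timestamp, double value) {}

interface SensorEventService {
    // Deliver events of a given sensor class (say, "traffic-counter" or "temperature")
    // to a functional/lambda element or microservice; security, compliance, and
    // analytics policies are applied before the data ever reaches the subscriber.
    void subscribe(String sensorClass, Consumer<SensorEvent> handler);
}
```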
We’re never going to make NFV integration effective without a model of services and infrastructure that’s hierarchical and supports inheritance and extension, like programming languages (Java for example) do. We’re never going to make it work at scale without state/event logic, and newer things like IoT are going to be even more dependent on those two requirements than NFV is. Nothing that’s done today in NFV, orchestration, IoT, automation, or whatever is going to have any long-term value without those two concepts, and it’s time we accept that and fit them in.