APIs are good and perhaps even great, and “open” APIs are even better. That’s the party line, and it has many elements of truth to it. Look deeper, though, and you can find some truly insidious things about APIs, things that could actually hamper the software-driven transformation of networking.
“API” stands for “Application Programming Interface”, and in software design it’s the way that two software components pass requests between them. If a software component wants to send work to another, it uses an API. “Services”, which are generalized and independent components, are accessed through APIs too, and so are “microservices”. In fact, any time application behavior is created by stitching work through multiple software elements, you can bet that APIs are involved in some way. The very scope of APIs should alert you to a fundamental truth: there are a lot of different APIs, with a lot of variation in features, styles, mechanisms of use, and so forth.
At the highest level of usage, one distinction is whether an API is “open”, meaning that its rules for use are published and available to all, without restrictive licenses. Many software vendors have copyrighted not only their software but their APIs, to prevent someone from mimicking their functionality with a third-party tool. That practice was the rule in the past, but it’s become less common as buyers demand more control over how they assemble software components into useful workflows. Certainly today we don’t want closed APIs, but opening all APIs doesn’t ensure that a new operator business model would emerge.
One of the reasons is that most of the pressure to open up APIs is directed at opening connection services to exploit their primitive elements. The thinking (supposedly) is that by doing this, operators would be able to make more money selling the pieces than selling entire services. That hasn’t turned out to be true in the other times and places where wholesaling service elements was tried, and few if any operators today believe it’s a good idea. We need new things, new features, exposed by our new APIs, and those new APIs have to expose them correctly, meaning optimally.
At the functional or technical level, you can divide APIs into two primary groups—the RESTful and the RPC groups. RESTful APIs are inherently client-server in structure; there is a resource (the server) that can deliver data to the client on request. The behavior of the server is opaque to the client, meaning that it does its thing and returns a result when it’s ready. RPC stands for “remote procedure call”, and with an RPC API you have what’s essentially a remote interface to what otherwise looks like a piece of your own program, a “procedure”.
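To make the distinction concrete, here’s a minimal Python sketch of the two styles, using hypothetical endpoints and a hypothetical remote method name. The RESTful call asks an opaque resource for a representation; the RPC call invokes what looks like a local procedure that happens to run remotely.

```python
# Hypothetical endpoints and method names, just to contrast the two styles.
import json
import urllib.request
import xmlrpc.client

# RESTful style: ask an opaque resource for a representation of its state.
def get_service_status_rest(base_url: str, service_id: str) -> dict:
    with urllib.request.urlopen(f"{base_url}/services/{service_id}") as resp:
        return json.load(resp)

# RPC style: call what looks like a local procedure that runs remotely.
def get_service_status_rpc(endpoint: str, service_id: str) -> dict:
    proxy = xmlrpc.client.ServerProxy(endpoint)
    return proxy.get_service_status(service_id)   # hypothetical remote procedure
```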
Within each of these groups, APIs involve two distinct elements—a mechanism of access and a data model. The former describes just how a request is passed, and the latter what the request and response look like. Generally, APIs within a given group can be “transformed” or “adapted” to match, even if the mechanisms of access and data model are somewhat different. There’s a programming Design Pattern called “Adapter” to describe how to do that.
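Here’s a minimal sketch of that Adapter idea, with invented interfaces and data models, showing how a client written to one API “shape” can be served by a component with a different access mechanism and record format:

```python
# A client expects TargetInventoryAPI; a wrapper adapts a component with a
# different mechanism and data model to that contract. All names are invented.

class TargetInventoryAPI:
    """The interface our client code expects."""
    def get_device(self, device_id: str) -> dict:
        raise NotImplementedError

class VendorInventory:
    """An existing component with a different access style and data model."""
    def fetch(self, query: dict) -> list:
        # Pretend this returns vendor-format records.
        return [{"dev_id": query["id"], "oper_state": "up"}]

class VendorInventoryAdapter(TargetInventoryAPI):
    """Adapts VendorInventory to the TargetInventoryAPI contract."""
    def __init__(self, vendor: VendorInventory):
        self._vendor = vendor

    def get_device(self, device_id: str) -> dict:
        record = self._vendor.fetch({"id": device_id})[0]
        # Transform the vendor data model into the one the client expects.
        return {"id": record["dev_id"], "status": record["oper_state"]}

client_api: TargetInventoryAPI = VendorInventoryAdapter(VendorInventory())
print(client_api.get_device("router-7"))   # {'id': 'router-7', 'status': 'up'}
```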
Where things get complicated is in the implied functional relationship of the components that the API links. One important truth about an API is that it effectively imposes a structure on the software on either side of it. If you design an API to join two functional blocks in a diagram of an application, it’s likely that the API will impose those blocks on any implementation that uses it. You can’t have an interface between an NFV virtual network function and an element management system without having both of those things present.
We saw this in the NFV ISG’s end-to-end model, which defined functional blocks like Management and Orchestration (MANO), the VNF Manager (VNFM), and the Virtual Infrastructure Manager (VIM). While the diagram was described as a functional model, these blocks were the basis for the creation and specification of APIs, and those APIs then mandated that something very like the depicted functional-block structure actually be implemented.
The functional diagram, in turn, presumes the nature of the workflow between components, and in this case it presumes a traditional, monolithic management-application structure. That’s a problem, because a service is made up of many elements, each of which could be going through its own local transition at any given point. In traditional management systems, an element has a management information base (MIB) that represents its state, and the management system reads that information to decide how to handle it. Thus you get a management flow that starts with an element’s state, and processes then determine what to do about that state. Everything in a service has its own state, so it’s easy to see how deciding what to do about a single element’s state in the context of the grand whole could be difficult.
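A small sketch may help show why that’s hard. The element names and states below are invented; the point is that each “what do I do about this state?” decision sees only local state, not the service as a whole.

```python
# Traditional flow: poll each element's MIB-like state, then decide what to
# do element by element. In a real system read_mib would be an SNMP or
# NETCONF read; here it's just a lookup into an invented table.

service_elements = {
    "access-link": {"oper_state": "up"},
    "vFirewall":   {"oper_state": "degraded"},
    "core-tunnel": {"oper_state": "up"},
}

def read_mib(element: str) -> dict:
    return service_elements[element]

for name in service_elements:
    state = read_mib(name)
    if state["oper_state"] != "up":
        # Locally obvious action, but is it the right one for the whole service?
        print(f"{name}: state={state['oper_state']} -> restart? reroute? escalate?")
```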
The notion of state here naturally gives rise to the issue of stateful versus stateless processes and APIs. In theory, RESTful APIs should be stateless, meaning that the resource or server side doesn’t remember anything between messages. That makes it possible for clients (in theory, at least) to access any instance of a server/resource and get the same result. It also means you can fail a component over simply by re-instantiating it, and you can scale it under load.
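A tiny sketch of that difference, with invented handlers: the stateless version derives everything from the request itself, so any replica can answer it and a failed replica can simply be re-instantiated, while the stateful version only works on the instance that happens to remember the session.

```python
# Stateful: breaks if the next request lands on a different instance,
# because the session context lives in this process's memory.
session_cache: dict = {}

def handle_stateful(session_id: str, request: dict) -> dict:
    context = session_cache.setdefault(session_id, {"step": 0})
    context["step"] += 1
    return {"step": context["step"]}

# Stateless: the client (or a shared store) carries the context in each call,
# so any instance can serve it and instances can be scaled or replaced freely.
def handle_stateless(request: dict) -> dict:
    step = request.get("step", 0) + 1
    return {"step": step, "echo": request.get("payload")}
```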
All of this has to be related to some broad software-architecture goal to be truly useful, and as I’ve said many times, I think that goal is an intent-data-model-and-event-driven structure similar to the one the TMF proposed a decade ago. In this structure, an event is analyzed based on the current state of the modeled element it’s associated with, and this analysis (based on a state/event table) kicks off an associated process and sets a successor state if needed.
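Here’s a minimal sketch of that mechanism, with invented states, events, and process names. Each modeled element carries its current state; an incoming event is looked up against that state to find the process to run and the successor state to set.

```python
# A state/event table keyed by (current state, event), yielding the process
# to kick off and the successor state. States, events, and process names
# are invented for illustration.

STATE_EVENT_TABLE = {
    ("ordered",    "activate"): ("deploy_process",    "activating"),
    ("activating", "deployed"): ("verify_process",    "active"),
    ("active",     "fault"):    ("remediate_process", "repairing"),
    ("repairing",  "restored"): ("verify_process",    "active"),
}

def handle_event(element: dict, event: str):
    key = (element["state"], event)
    if key not in STATE_EVENT_TABLE:
        return None                      # event not meaningful in this state
    process, next_state = STATE_EVENT_TABLE[key]
    element["state"] = next_state        # set the successor state
    return process                       # the process to kick off
```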
In an event-driven process, everything reduces to an event. That means that higher-layer things like service orders or management status requests are events, and events activate the processes in a standard way. Which is how? The answer is that there should be a standard “process linkage” API that is used by the event-handling tool, one that presumes a process link (in the form of a URI, the general case of our familiar URL) is present in the state/event table, and that then activates that process. The exact mechanism isn’t critical, but if you want the process to be scalable, it should be stateless and designed to a REST API.
What’s the data model passed? The answer is the element in the data model that’s currently in focus, the one whose state/event table is being used. Whatever data that intent-modeled structure holds is made available to the process being activated, along with the event data itself. It’s fairly easy to transform data elements to match process requirements, so this kind of API would be very easy to define and use.
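Putting the last two points together, the process-linkage dispatch could be as simple as this sketch: the state/event table yields a process name, a table of URIs (placeholders here) locates the process, and activation is a stateless POST of the in-focus model element plus the event data.

```python
# Dispatch to a process via a URI held in a linkage table. The URIs and
# process names are placeholders; the payload is the in-focus model element
# plus the event itself.

import json
import urllib.request

PROCESS_LINKS = {
    "deploy_process":    "http://processes.example/deploy",
    "verify_process":    "http://processes.example/verify",
    "remediate_process": "http://processes.example/remediate",
}

def activate_process(process: str, element: dict, event: dict):
    body = json.dumps({"element": element, "event": event}).encode("utf-8")
    req = urllib.request.Request(
        PROCESS_LINKS[process],
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```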
The process of posting events could be more complicated, but not necessarily. My own ExperiaSphere work showed that for the TMF’s approach to data-coupling of events and processes to work, it was essential that there be a rule that events could only be exchanged between adjacent model elements—superior/subordinate elements, in other words. This limits the need to make the entire model visible to everything, and it also simplifies the way events are exchanged. Presumably it’s easy to make a model element “see” its neighbors, and if each neighbor is identified, posting an event to it is straightforward.
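A sketch of that adjacency rule, with illustrative class and field names: a model element can post events only to its superior or its own subordinates, so no element ever needs visibility into the whole model.

```python
# Each element knows only its superior and its subordinates; posting an
# event to anything else is rejected. Names and states are illustrative.

class ModelElement:
    def __init__(self, name: str, superior=None):
        self.name = name
        self.state = "ordered"
        self.superior = superior
        self.subordinates = []
        if superior is not None:
            superior.subordinates.append(self)

    def post_event(self, target: "ModelElement", event: str):
        if target is not self.superior and target not in self.subordinates:
            raise ValueError(f"{self.name} is not adjacent to {target.name}")
        target.receive_event(event, source=self)

    def receive_event(self, event: str, source: "ModelElement"):
        print(f"{self.name} received '{event}' from {source.name}")

service = ModelElement("vpn-service")
site = ModelElement("site-access", superior=service)
site.post_event(service, "fault")        # allowed: subordinate to superior
```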
There is a complexity at the “bottom” of a model hierarchy, where the model element encloses not a substructure of elements but an implementation of a feature. Real hardware and software events would have to be recognized at the implementation level, and a primitive “bottom” element would then have to generate a corresponding event upward to its superior element. Only “bottom” elements enclosing actual implementations would have to worry about this kind of event translation or correlation.
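A sketch of such a “bottom” element, with an invented raw-event-to-model-event mapping and a placeholder callback standing in for the superior element:

```python
# Invented raw events and mapping; notify_superior stands in for posting a
# model-level event to the superior element.

RAW_TO_MODEL_EVENT = {
    "linkDown": "fault",
    "linkUp":   "restored",
    "cpuHigh":  "degraded",
}

class BottomElement:
    """A model element that encloses an implementation, not substructure."""
    def __init__(self, name: str, notify_superior):
        self.name = name
        self.notify_superior = notify_superior     # posts a model event upward

    def on_resource_event(self, raw_event: str):
        model_event = RAW_TO_MODEL_EVENT.get(raw_event)
        if model_event is None:
            return                                 # not meaningful at the model level
        self.notify_superior(self.name, model_event)

def superior_receives(source: str, event: str):
    print(f"superior got '{event}' from {source}")

port = BottomElement("access-port", superior_receives)
port.on_resource_event("linkDown")                 # surfaces upward as 'fault'
```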
If we had true model-driven service definition and lifecycle management, the only APIs we’d really need would be those that generate an event, one to be passed into the model’s event-to-process orchestration to drive a change in state or inquire about status. These APIs would be very simple, which means they could be transformed easily too. The barriers to customization and to the creation of useful services would fall, not because the APIs enabled the change but because they didn’t prevent what the fundamental architecture enabled.
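Such an event-injection API could be little more than this sketch: a single endpoint that accepts an event and hands it to the model’s event-to-process orchestration, represented here by a placeholder function.

```python
# One endpoint, one job: accept an event and feed it to the orchestration.
# The orchestrate() function is a placeholder for the state/event handling
# sketched earlier; the port and payload fields are illustrative.

import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def orchestrate(event: dict):
    print(f"event for element {event.get('element')}: {event.get('type')}")

class EventAPI(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        orchestrate(event)
        self.send_response(202)   # accepted for asynchronous handling
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), EventAPI).serve_forever()
```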
Which is the point here. With APIs, as with so many topics, we’re seizing on a concept that’s simple rather than on a solution that’s actually useful. We need to rethink the structure of network features, not the way they’re accessed. Till we do that, APIs could be a doorway into nothing much useful.