We know we need to have security and compliance in SDN and NFV, simply because we have them today in other technologies. We also know, in at least a loose sense, that the same sort of processes that secure legacy technology could also secure SDN and NFV. The challenge, I think, is in making “security” or “compliance” a part of an SDN or NFV object or abstraction so that it could be deployed automatically and managed as part of the SDN/NFV service.
Security, or “compliance” in the sense of meeting standards for data/process protection, has three distinct meanings. First, it is a requirement that can be stated by a user. Second, it is an attribute of a specific service, connection, or process. Finally, it is a remedy that can be applied as needed. If we want a good security/compliance model for SDN and NFV, we need to address all three of these meanings.
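To make those three roles concrete, here’s a rough sketch in Python. Every name is purely illustrative (nothing here comes from an SDN or NFV specification); the point is simply that a requirement is something stated, an attribute is something possessed, and a remedy is something applied.

```python
from dataclasses import dataclass

# Hypothetical sketch: the three roles that "security" or "compliance" can play.

@dataclass(frozen=True)
class Requirement:
    """Something a user states, e.g. 'secure-path'."""
    name: str

@dataclass(frozen=True)
class Attribute:
    """Something a service element possesses, e.g. 'encrypted-transport'."""
    name: str

@dataclass(frozen=True)
class Remedy:
    """Something that can be applied when no element offers the needed attribute."""
    name: str
    satisfies: frozenset   # requirement names this remedy covers when applied

# Example: an encryption overlay that satisfies a stated 'secure-path' requirement.
e2e_encryption = Remedy("End-to-End-Security", satisfies=frozenset({"secure-path"}))
```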
The notion of attributes and remedies is particularly significant as we start to see security features built into network and data center architectures. This trend offers real benefits, but it also carries risks, because there’s no such thing as a blanket security approach, nor is “compliance” meaningful without understanding what you’re complying with. Both security and compliance are evolving requirement sets with evolving ways of addressing them. That means we have to be able to define the precise boundaries of any security/compliance strategy, and we have to be able to implement it in an agile way, one that won’t interfere with overall service agility goals.
Let’s start by looking at a “service”. In either SDN or NFV, it’s my contention that a service is represented by an “object” or model element. At the highest level, this service object is where user requirements would be stated, and so it’s reasonable to say that a service object should have a requirements section where security/compliance needs are expressed. Think of these as being things like “I need secure paths for information” and “I need secure storage and processing”.
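As a minimal (and, again, purely hypothetical) sketch, a service object carrying that kind of requirements section might look like this:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ServiceObject:
    """Hypothetical top-level service object with a requirements section."""
    name: str
    requirements: List[str] = field(default_factory=list)

# The user states security/compliance needs at the top of the model;
# everything below must either satisfy them or trigger a remedy.
branch_vpn = ServiceObject(
    name="Branch-Office-VPN",
    requirements=["secure-path", "secure-storage-and-processing"],
)
```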
When a service object is decomposed, meaning when it’s analyzed at lower levels of the structure on the path toward making actual resource assignments, the options down there should be explored with an eye to these requirements. In a simple sense, we either have to use elements of a service that meet the service-level requirements (just as we’d have to do for capacity, SLA, etc.) or we have to remedy the deficiencies. The path to that starts by looking at a “decomposable” view of a service.
At this next level, a “service” can be described as a connection model and a set of connected elements. Draw an oval on a sheet of paper—that’s the connection model. Under the oval draw some lines with little stick figures at the end, and that represents the users/endpoints. Above the oval draw some more lines with little gear-sets, and those represent the service processes. That’s a simple but pretty complete view of a service.
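Expressed as a data structure rather than a drawing, that picture might look something like this; it’s just a sketch of my oval-and-lines view, not a standard model:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ConnectionModel:
    """The 'oval': the way users and processes are connected, e.g. 'IP-Subnet'."""
    kind: str

@dataclass
class Endpoint:
    """A 'stick figure' below the oval: a user or site."""
    site: str

@dataclass
class ServiceProcess:
    """A 'gear-set' above the oval: a hosted service process."""
    function: str      # e.g. "firewall" or "content-cache"

@dataclass
class DecomposedService:
    connection: ConnectionModel
    endpoints: List[Endpoint] = field(default_factory=list)
    processes: List[ServiceProcess] = field(default_factory=list)
```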
If a service consists of the stuff in our little diagram, then what we have to do to deploy one is to commit the resources needed for the pieces. Security and compliance requirements would then have to be matched to the attributes of the elements in our catalog of service components. If we have a connection model of “IP-Subnet”, we’d look through the catalog for a model whose security and compliance attributes matched the requirements of our service. Similarly, we’d have to identify service processes (if they were used) that also matched the requirements.
My view is that all these catalog entries would be objects as well, built up from even lower-level elements that would eventually resolve to actual resources. A service architect could therefore build an IP-Subnet that had security/compliance attributes and committed resources to fulfill them, and another IP-Subnet that had no such attributes. The service order process would then pick the right decomposition based on the stated requirements.
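Here’s a rough illustration of that order-time selection. The catalog entries and attribute names are invented for the example; what matters is that selection is a generic match of stated requirements against offered attributes.

```python
from dataclasses import dataclass, field
from typing import Optional, Set

@dataclass
class CatalogEntry:
    """One deployable decomposition of an abstract element such as 'IP-Subnet'."""
    element: str                 # the abstraction this entry decomposes
    name: str                    # which decomposition this is
    attributes: Set[str] = field(default_factory=set)

# Hypothetical catalog: two decompositions of the same abstraction.
CATALOG = [
    CatalogEntry("IP-Subnet", "IP-Subnet-Basic", set()),
    CatalogEntry("IP-Subnet", "IP-Subnet-Secure", {"secure-path", "audit-logging"}),
]

def satisfies(entry: CatalogEntry, requirements: Set[str]) -> bool:
    """An entry qualifies if its attributes cover every stated requirement."""
    return requirements <= entry.attributes

def select(element: str, requirements: Set[str]) -> Optional[CatalogEntry]:
    """Pick a catalog decomposition of 'element' that meets the requirements."""
    for entry in CATALOG:
        if entry.element == element and satisfies(entry, requirements):
            return entry
    return None   # no match: a remedy has to be applied, or the order fails

print(select("IP-Subnet", {"secure-path"}).name)   # IP-Subnet-Secure
print(select("IP-Subnet", set()).name)             # IP-Subnet-Basic
```

The design point worth noting is that `select` knows nothing about security specifically; any attribute the catalog can name, a requirement can ask for.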
It’s possible, of course, that no such decomposition is provided. In that case, a remedy process has to be applied. If you want the service creation and management process to be fully automated (which I think everyone would say is the goal), then the application of the remedy has to be automated too. What might that look like?
Like another service model, obviously. If we look at our original oval-and-line model, we can see that the “lines” connecting the connection model to the service processes and users could also be decomposable. We could, for example, visualize such a line as being either an “Access-Pipe” or a “Secure-Access-Pipe”. If it’s the latter, we can meet the security requirements as long as we also have an IP-Subnet that has the security attribute. If not, we’d have to apply an End-to-End-Security process, which could invoke encryption at each of our user or service process connections.
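Continuing the hypothetical sketch, the remedy is then just another decomposition choice, taken when the attribute match fails:

```python
from typing import List, Set

def plan_secure_connection(access_options: Set[str],
                           subnet_attributes: Set[str]) -> List[str]:
    """
    Hypothetical remedy logic: use inherently secure elements when the catalog
    offers them, otherwise fall back to an End-to-End-Security overlay.
    """
    if "Secure-Access-Pipe" in access_options and "secure-path" in subnet_attributes:
        # The pieces themselves carry the needed attribute; no remedy required.
        return ["Secure-Access-Pipe", "IP-Subnet-Secure"]
    # Remedy: deploy encryption at every user and service-process connection.
    return ["Access-Pipe", "IP-Subnet-Basic",
            "End-to-End-Security (encryption at each connection point)"]

print(plan_secure_connection({"Access-Pipe", "Secure-Access-Pipe"}, {"secure-path"}))
print(plan_secure_connection({"Access-Pipe"}, set()))
```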
Just to make things a bit more interesting, you can probably see that an encryption add-on, to be credible, might have to be on the user premises. Think of it as a form of vCPE. If the customer has the required equipment in which to load the encryption function, we’re home free. If not, then the customer’s access pipe for that branch would not have a secure option associated with it. In that case there would be no way to meet the service requirements unless the customer equipment were to be updated.
I think there are two things this example makes clear. The first is that it’s possible to define “security” and “compliance” as a three-part process (requirement, attribute, remedy) that can be modeled and automated just like anything else. The second is that the ability of a given SDN or NFV deployment tool to provide that automation will depend on the sophistication of the service modeling process.
A “service model” should reflect a structural hierarchy in which each element can be decomposed downward into something else, until you reach an atomic resource. That resource might be a “real” thing like a server, or it might be a virtual artifact like a VPN that is composed by commanding an NMS that represents an opaque resource structure. The hierarchy has to be supported by software that can apply rules based on requirements and attributes to select decomposition paths to follow.
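Here’s a very rough sketch of what such a rules-driven, hierarchical decomposition might look like. The structure and the names are mine, not drawn from MANO or from any SDN specification, and the matching rule is deliberately simplified.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Set

@dataclass
class ModelNode:
    """One element in the service hierarchy."""
    name: str
    attributes: Set[str] = field(default_factory=set)
    # Alternative decompositions; an empty list means an atomic resource
    # (a real server, or an opaque NMS-managed artifact like a VPN).
    decompositions: List[List["ModelNode"]] = field(default_factory=list)

def decompose(node: ModelNode, requirements: Set[str]) -> Optional[List[ModelNode]]:
    """
    Recursively choose a decomposition path whose attributes cover the
    requirements. Simplification: every child on a chosen path must cover
    all requirements still unmet at its level.
    """
    unmet = requirements - node.attributes
    if not node.decompositions:
        return [node] if not unmet else None   # atomic: succeed only if nothing is left
    for option in node.decompositions:
        resolved: List[ModelNode] = []
        for child in option:
            leaves = decompose(child, unmet)
            if leaves is None:
                break
            resolved += leaves
        else:
            return resolved
    return None

# A VPN abstraction with a plain and a hardened decomposition.
vpn = ModelNode("VPN", set(), [[ModelNode("vRouter")],
                               [ModelNode("vRouter-Hardened", {"secure-path"})]])
print(decompose(vpn, {"secure-path"}))    # picks the hardened path
print(decompose(vpn, {"pci-compliant"}))  # None: no path satisfies, so a remedy is needed
```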
At one level, NFV MANO at least should be able to do this sort of thing, given that there are a (growing) number of service attributes that it proposes to apply for resource selection. At another level, there’s no detail on how MANO would handle selective decomposition of models or even enough detail to know whether the models would be naturally hierarchical. There’s also the question of whether the process of decomposition could become so complex (as it attempts to deal with attributes and requirements and remedies) that it would be impossible to run it efficiently.
It’s my view that implementations of service modeling can meet requirements like security and compliance only if the modeling language can express arbitrary attributes and requirements and match them at the model level rather than having each combination be hard-coded into the logic. That should be a factor that operators look at when reviewing both SDN and NFV tools.