Where do you go when you’re tired of orchestration? Cloudify says it’s “micro-orchestration”, according to their latest release of the Spire edge-orchestration platform. Given that Spire is targeted at edge computing, and given that operators are obviously looking at the edge and considering how they’d monetize it, I think it’s clear that edge computing is in fact driving orchestration needs. I also think it’s reigniting the question of how NFV orchestration, cloud orchestration, and edge orchestration fit together.
NFV, as an ETSI Industry Specification Group (ISG) project, was launched to define how appliances used in network services could be replaced by hosted functions. While the broad concept of NFV doesn’t specify whether the functions are single- or multi-tenant, the focus the ISG had on virtual CPE and service chaining has bent the activity toward single-service, single-tenant missions. These missions were presumed to involve services under fairly long-term contract, and since physical appliances tend to be deployed for a long time, their virtual equivalents were presumed to be semi-permanent.
Cloud computing is about replacing dedicated on-premises servers with virtual servers (in some form) from a resource pool. The goal is to run applications, nearly all of which are inherently multi-user, and the great majority of which are persistent, meaning they’re loaded and run for an indefinite but fairly long time. Where dynamism comes into the cloud is largely through the use of multi-component applications, some of whose pieces are expected to scale under load, and all of which are expected to be replaced if they break.
Edge computing is about placing hosting resources close to the points of user activity. The linkage of the hosting to what the user is doing implies that edge computing is tactical, extemporaneous, and temporary in nature. In most cases, it’s likely that edge computing would be used as an adjunct to cloud computing or premises “distributed” or “private cloud” deployments that use a resource pool (containers come to mind). An edge component might serve a single user or multiple users, depending on just what was happening in the area the component was serving at a given time. It might have to scale, and it might also be unloaded to save resources if there was no need for it at a given moment.
What we call “orchestration” is the process of placing a software component in a hosting resource and setting up the necessary parameters and connections for that component to work. NFV is an example of “simple orchestration”, largely because the goal of NFV orchestration is primarily to deploy (or redeploy in case of failure) and because the connection of VNFs is presumed to be a matter of linking through trunks, just like physical devices would be connected. The cloud started with similarly simple deploy/redeploy thinking (NFV is OpenStack-centric, and OpenStack is also the basic model for virtual machine deployments in the cloud).
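To make the “simple orchestration” pattern concrete, here’s a minimal sketch in Python of the deploy/redeploy behavior described above: pick a host from the pool, place the component, and place it again elsewhere if the host fails. All of the names here (Host, deploy_component, and so on) are invented for illustration; they don’t correspond to NFV MANO or OpenStack APIs.

```python
class Host:
    """A hosting resource in the pool; 'healthy' stands in for real monitoring."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy
        self.components = []

def deploy_component(component, pool):
    """Place the component on the first healthy host; fail if none exists."""
    for host in pool:
        if host.healthy:
            host.components.append(component)
            return host
    raise RuntimeError("no healthy host available")

def redeploy_on_failure(component, current_host, pool):
    """The whole 'simple' lifecycle rule: if the host fails, deploy again."""
    if current_host.healthy:
        return current_host
    current_host.components.remove(component)
    return deploy_component(component, pool)

pool = [Host("edge-1"), Host("edge-2")]
first = deploy_component("vFirewall", pool)
first.healthy = False  # simulate a host failure
second = redeploy_on_failure("vFirewall", first, pool)
print(second.name)     # the component lands on the surviving host
```

The point of the sketch is what it leaves out: there’s no scaling, no event steering, no notion of state beyond “deployed or not,” which is exactly why this level of orchestration stops being enough as dynamism increases.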
So, you may be thinking, it’s edge computing that’s changing orchestration. Nope, it’s microservices. A cloud is based on a featureless resource pool, so it shouldn’t matter whether you smear the pool out toward the edge. What’s changing isn’t “edge computing”, it’s the way we componentize applications and services that are based on hosted features. More dynamism demands more dynamic orchestration, and microservices are by nature more dynamic.
A microservice is a lightweight software component that performs a simple task on a simple request. Unlike transactions, which often involve complex processing and many steps, microservices are presumed to have an input (the request, or event) and then to return a result that’s based on that input alone. That means that a single microservice can support a whole community of users whose requests are intermingled, because the order in which they’re handled doesn’t matter to a microservice. The context or state of the user’s job has to be maintained elsewhere, if it’s needed.
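The statelessness property described above can be sketched in a few lines of Python. The handler’s result depends only on the request itself, so requests from many users can be freely interleaved; any per-user context lives in an external store, not in the microservice. The store and handler names here are illustrative, not from any real framework.

```python
session_store = {}  # stands in for an external state store (e.g., a database)

def handle(request):
    """A stateless microservice: the output is a pure function of the input."""
    return request["a"] + request["b"]

def handle_with_context(user, request):
    """If context is needed, it's fetched from (and saved to) the store."""
    total = session_store.get(user, 0) + handle(request)
    session_store[user] = total
    return total

# Interleaved requests from two users; ordering across users doesn't matter.
print(handle_with_context("alice", {"a": 1, "b": 2}))   # 3
print(handle_with_context("bob",   {"a": 10, "b": 5}))  # 15
print(handle_with_context("alice", {"a": 4, "b": 0}))   # 7
```

Because `handle` itself holds nothing between calls, you can run many copies of it anywhere in the pool, which is what makes microservices natural candidates for scaling and for edge placement.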
So why are we hung up on edges here? Three reasons. First, nobody but software people understands microservices, so stories about them have limited marketing value, and you can’t sell ads for those stories easily. Second, it is true that the easiest way to explain an application of a microservice is to say “events” or “IoT”, because simple signals are indeed a perfect thing for microservices to process, and most events originate at the edge. Third, if you are doing event processing, you may want to limit the latency associated with processing the event to keep from messing up the real-world activity your application is supposed to be supporting. Push handling to the edge and you can do that.
The orchestration challenge of microservices, and the “container” architecture that’s most often used to host them, has gradually grown as the dynamism of our target microservice architectures has increased. The original container systems, like Docker and the early Kubernetes releases, were very similar to OpenStack in terms of what they were expected to do for deployment. Containers were a simpler environment, and that simplicity made deployment setup easier, but the steps were much the same. Over time, containers have evolved, generally by adding elements to Kubernetes to create and expand what I’ve been calling the Kubernetes Ecosystem.
The problem with Kubernetes as an ecosystem is that it doesn’t orchestrate everything, even with the add-on tools that make it an ecosystem in the first place. There are ways of accommodating things like bare metal or VMs, but one of the specific shortcomings of the framework is that it’s not a universal model-based approach. What everyone (me included, or perhaps in particular) would like is a model-driven framework that includes intent-based features to model complex applications and services, and that can be used to represent hosting and connectivity in a fairly arbitrary way. Such a framework, as the TMF’s work with NGOSS Contract proves, could also be used to provide state/event activation of processes for lifecycle automation.
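The state/event activation pattern that NGOSS Contract describes can be sketched simply: each modeled element carries a state, and a state/event table maps each (state, event) pair to the lifecycle process that should run. The states, events, and process names below are invented for illustration; a real implementation would bind these to processes referenced from the service model itself.

```python
# Hypothetical lifecycle processes; each one runs, then advances the state.
def deploy(elem):
    elem["state"] = "active"
    return "deployed"

def repair(elem):
    elem["state"] = "repairing"
    return "repair started"

def restore(elem):
    elem["state"] = "active"
    return "restored"

# The state/event table: (current state, event) -> lifecycle process.
STATE_EVENT_TABLE = {
    ("ordered",   "activate"): deploy,
    ("active",    "fault"):    repair,
    ("repairing", "repaired"): restore,
}

def dispatch(elem, event):
    """Steer an event to the process bound to the element's current state."""
    process = STATE_EVENT_TABLE.get((elem["state"], event))
    if process is None:
        return "event ignored in state " + elem["state"]
    return process(elem)

svc = {"state": "ordered"}
print(dispatch(svc, "activate"))  # deployed
print(dispatch(svc, "fault"))     # repair started
print(dispatch(svc, "repaired"))  # restored
```

The value of the table is that the same event means different things in different states (a “fault” during repair isn’t handled like a fault during normal operation), which is what turns simple deploy/redeploy orchestration into genuine lifecycle automation.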
Modeling is pretty much what Cloudify has focused on. Its architecture is based on TOSCA, which is what I think should be the modeling language for any cloud-hosted, component-based, service or application. It appears to have some event-handling capability, though it’s not clear to me whether there’s a facile way of defining state/event tables within TOSCA models and using them to steer events to processes (that capability exists in TOSCA). If that’s not present in Cloudify’s approach, then it’s not micro-orchestrating enough.
I can’t tell for sure whether it is or not. When I blog, I rely on a company’s documentation and never on their representations, because there’s no proof in a statement, only a document. Cloudify’s website is much like the websites of DriveNet or SnapRoute, two companies I blogged about last week. The material is aimed almost totally at a developer rather than at a decision-maker. If micro-orchestration is totally transformational, it’s going to take more than a developer to buy it; someone has to sell senior executives on the total range of technology impact that transformation implies.
Our biggest challenge in orchestration today is getting past the term itself into the details. You can have little orchestras and big ones, general guidance or total lifecycle management, and yet we use the same word. Everyone who talks orchestration, including Cloudify, should provide enough material to confidently convey the technology’s overall implications and the business impact buyers could expect.
I like Cloudify, but they need to look over their shoulder. The Kubernetes ecosystem I talked about is advancing very rapidly, filling in the gaps we still have in total application and service lifecycle automation. There’s a huge community behind it, and if Cloudify or any other player with a specific solution wants to keep up with the market, they’ll need to get ahead, because the ecosystem is way too large a mass to pass once you’ve gotten behind.