I can vividly recall one of my early telco transformation meetings. It was just after NFV had launched, but before any real work had been done. At the meeting, two of the telco experts sitting next to each other expressed their views on OSS/BSS. One wanted to transform it, retaining as much as possible of the current systems. The other wanted to start over. This polarized view of OSS/BSS futures, it turned out, was fairly pervasive among operators and it’s still dominant today.
The notion of transforming OSS/BSS has a long history, going back more than a decade in fact. The first transformational notion I saw was the TMF’s NGOSS Contract work, something I’ve cited often. This was an early attempt to reorganize operations processes into services (SOA, at the time) and to use the contract data model to steer service events to the right process. This, obviously, was the “event-driven OSS/BSS” notion, and also the “service-based” or “component-based” model.
We sort of did services and components, but the event-driven notion has been harder to promote. There are some OSS/BSS vendors who are now talking about orchestration, meaning the organization of operations work through software automation, but not all orchestration is event-driven (as we know from the NFV space and the relatively mature area of DevOps for software deployment). Thus, it would be interesting to think about what would happen should OSS/BSS systems be made event-driven. How would this impact the systems, and how would it impact the whole issue of telco transformation?
We have to go back, as always, to the seminal work on NGOSS Contract to jump off into this topic. The key notion was that a data model coupled events to processes, which in any realistic implementation means that the OSS/BSS is structured as a state/event system with the model recording state. If you visualized the service at the retail level as a classic “black box” or abstraction, you could say that it had six states: Orderable, Activating, Active, Terminating, Terminated, and Fault. An “order” event transitions the service to the Activating state, and a report that the service is properly deployed would transition it to the Active state. Straightforward, right? In every state there’s a key event that represents its “normal” transition driver, and there’s also a logical progression of states. All except “Fault,” of course, which would presumably be entered on any report of an abnormal condition.
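To make the idea concrete, here’s a minimal sketch of that six-state lifecycle as a transition table. The event names (Order, Deployed, Terminate, TornDown, Abnormal) are my own illustrative choices, not drawn from GB942; the point is just that each state has one normal driver and that any abnormal report lands in Fault.

```python
from enum import Enum, auto

class State(Enum):
    ORDERABLE = auto()
    ACTIVATING = auto()
    ACTIVE = auto()
    TERMINATING = auto()
    TERMINATED = auto()
    FAULT = auto()

class Event(Enum):
    ORDER = auto()       # hypothetical event names for illustration
    DEPLOYED = auto()
    TERMINATE = auto()
    TORN_DOWN = auto()
    ABNORMAL = auto()

# The "normal" transition driver for each state.
TRANSITIONS = {
    (State.ORDERABLE, Event.ORDER): State.ACTIVATING,
    (State.ACTIVATING, Event.DEPLOYED): State.ACTIVE,
    (State.ACTIVE, Event.TERMINATE): State.TERMINATING,
    (State.TERMINATING, Event.TORN_DOWN): State.TERMINATED,
}

def next_state(state: State, event: Event) -> State:
    # Any abnormal report, from any state, enters Fault.
    if event is Event.ABNORMAL:
        return State.FAULT
    # Events with no meaning in the current state are ignored here;
    # a real system would log or queue them.
    return TRANSITIONS.get((state, event), state)
```

Out-of-context events are simply absorbed in this sketch; a production implementation would need a policy for them, which is exactly the kind of detail the NGOSS Contract documents left open.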
You can already see this is too simplistic to be useful, of course. If the service at the retail level is an abstract opaque box, it can’t be that at the deployment level in most cases. Services have access and transport components, features, and different vendor implementations at various places. So inside our box there has to be a series of little subordinate boxes, each of which represents a step along the way to actually deploying. Each of these subordinates is connected to its superior in a state/event sense.
When you send an Order event to a retail service, the event has to be propagated to its subordinates so they are all spun up. Only when all the subordinates have reported being Active can you report the service itself to be Active. You can see that the state/event process also synchronizes the cooperative tasks that are needed to build a service. All of this was implicit in the NGOSS Contract work, but not explained in detail in the final documents (GB942).
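That propagate-down, synchronize-up behavior can be sketched with a toy service hierarchy. The class and event names here are hypothetical; the behavior to notice is that an Order fans out to subordinates, and a superior only reports Active once every subordinate has.

```python
class ServiceNode:
    """One box in the service model; leaves represent actual deployments."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.parent = None
        for child in self.children:
            child.parent = self
        self.state = "Orderable"

    def handle(self, event):
        if event == "Order":
            self.state = "Activating"
            for child in self.children:
                child.handle("Order")   # propagate the event downward
            # leaves now wait for a "Deployed" report from the resource layer
        elif event == "Deployed" and not self.children:
            self._become_active()
        elif event == "SubordinateActive":
            # synchronize upward: Active only when ALL subordinates are
            if all(c.state == "Active" for c in self.children):
                self._become_active()

    def _become_active(self):
        self.state = "Active"
        if self.parent:
            self.parent.handle("SubordinateActive")
```

Ordering a retail service built from, say, an access node and a transport node spins up both; the retail node sits in Activating until the second of the two reports Deployed, at which point it goes Active on its own.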
Operations processes, in this framework, are run in response to events. When you report an event to a subordinate (or superior) component of a service, the state that component is in and the event itself combine to define the processes to be run. The way that an OSS/BSS responds to everything related to a service is by interpreting events within the state/event context of the data models for the components.
This approach contrasts with what could be described as the transactional or workflow approach that has been the model for most business applications, including most OSS/BSS. In a transactional model, operations tasks are presumed to be activated by something (yes, we could think of it as an event), and once activated the components then run in a predefined way. This is why we tend to think of OSS/BSS components like “Order Management” or “Billing”; the structure mirrors normal business software elements.
To make the OSS/BSS operate as an event-driven system, you need to do three things. First, you need a data model that defines a service and its subordinate elements in a structured way, so that each of the elements can be given a specific state/event table defining how it reacts to events. Second, you need events for the system to react to. Third, you need OSS/BSS processes defined as services or components that can be invoked from the intersection of current state and received event in any given state/event table.
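Those three pieces fit together in a simple dispatch loop: the table cell at (current state, event) names the operations processes to run and the state to move to. Everything below — the process names like validate_order and start_billing, and the table contents — is an invented illustration, not a real OSS/BSS catalog.

```python
# Hypothetical OSS/BSS processes, exposed as invocable components.
def validate_order(ctx): ctx["log"].append("validate_order")
def allocate_resources(ctx): ctx["log"].append("allocate_resources")
def start_billing(ctx): ctx["log"].append("start_billing")
def open_trouble_ticket(ctx): ctx["log"].append("open_trouble_ticket")

# Each cell: (processes to run, next state). Authored per model element
# as part of service-building.
STATE_EVENT_TABLE = {
    ("Orderable", "Order"): ([validate_order, allocate_resources], "Activating"),
    ("Activating", "Deployed"): ([start_billing], "Active"),
    ("Active", "Abnormal"): ([open_trouble_ticket], "Fault"),
}

def dispatch(element, event, ctx):
    """Run the processes the table names for this element's state, then
    advance the element to the next state the table specifies."""
    processes, new_state = STATE_EVENT_TABLE[(element["state"], event)]
    for proc in processes:
        proc(ctx)
    element["state"] = new_state
```

The operations software itself carries no workflow logic here; the sequence of work is entirely a property of the model, which is the essence of the NGOSS Contract idea.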
Most OSS/BSS systems are already modular, and both operators and vendors have told me that there’s little doubt that any of them could be used in a modular-as-service way. Similarly, there are plenty of business applications that are event-driven, and we have all manner of software tools to code conditions as events and associate them with service models. What we lack, generally, are the models themselves. It’s not that we don’t have service modeling, but that the models rarely have state/event tables. Those would have to be authored as part of service-building.
You can see from this description that the process of modernizing OSS/BSS based on NGOSS-Contract state/event principles is almost identical to the process of defining virtualized function deployments as described by the NFV ISG, or the way that AT&T’s ECOMP proposes to build and manage services. That has three important consequences.
First, it would clearly be possible to organize both SDN/NFV service lifecycle management and OSS/BSS modernization around the same kind of model, meaning of course that it could be the same model. Properly done, a move in one space would move you in the other, and since automation of both operations and the lower-level lifecycle management processes are essential for opex efficiency and service agility, the combined move could meet transformation goals.
Second, the model could be defined either at the OSS/BSS level or “below” it, perhaps as independent NFV orchestration. From wherever it starts, it could then percolate up or down to cover the other space. Anyone in the OSS/BSS space, the SDN/NFV space, or the DevOps/orchestration space could play this role.
Third, this level of model-driven integration of operations processes with service and resource management processes at the lower level isn’t being touted today. We see services and service modeling connected to OSS/BSS, presumably through basic order interfaces. If that’s accidental, it seems to suggest that even advanced thinkers in the vendor and operator communities aren’t thinking about full-scope service automation. If it’s deliberate, then it isolates operations modernization from the service modeling and orchestration trends, which in my view would marginalize OSS/BSS and hand victory to those who wanted to completely replace it rather than modernize it.
That returns us to those two people at the meeting table, the two who had diametrically opposed views of the future of OSS/BSS. Put in the terms of the modeling issue we’ve been discussing here, the “modernize” view would favor incorporating OSS/BSS state/event handling into the new service automation and modeling activity that seems to be emerging in things like ECOMP. The “trash it and start over” view says that the differences in the role of OSS/BSS in a virtual world are too profound to be accommodated.
My own view falls between these two perspectives. There are a lot of traditional linear workflows involved in OSS/BSS today, and many of them (like billing) really don’t fit a state/event model. However, the old workflow-driven thinking doesn’t match cloud computing trends, distributed services, and virtualization needs. What seems to be indicated (and which operators tell me vendors like Amdocs and Netcracker are starting to push) is a hybrid approach where service management as an activity is visualized as a state/event core built around a model, and traditional transactional workflow tasks are spawned at appropriate points. It’s not all-or-nothing, it’s fix-what’s-broken.
Or, perhaps, it’s neither. The most challenging problem with the OSS/BSS modernization task and the integration of OSS/BSS with broader virtualization-driven service management, is the political challenge created by the organization of most operators. So far, SDN and NFV have been CTO projects. OSS/BSS is a CIO domain, and there is usually a fair degree of tension between these two groups. Even where the CIO organization has a fairly unanimous vision of OSS/BSS evolution (in the operator I opened this blog with, both views on operations evolution were held within the CIO organization) there’s not much drive so far to unite that vision with virtualization at the infrastructure level.
Could standardization help this? The standards efforts tend to align along these same political divides. The TMF is the go-to group for CIO OSS/BSS work, and the CTO organizations have been the participants in the formal bodies like the NFV ISG. Underneath it all is the fact that all these activities rely on consensus, which has been hard to come by lately as vendor participants strive for competitive advantage. We may need to look to a vendor for the startling insights needed. Would we have smartphones today without Steve Jobs, if a standards process had to create them? Collective insight is hard, and we’ve not mastered it.