“The times, they are a-changin’,” according to an old Dylan song. That’s surely true in the telecom world, according to the telcos themselves. Looking back over the first full year of Andover Intel operation, and the conversations I’ve had with 88 telcos and cable companies worldwide, I can see both direct and indirect evidence of just how much has changed. And, of course, how much has stayed the same. Those who hope that the telecom space will right itself without subsidies, and with improved spending on infrastructure, necessarily hope that the stuff that’s changing has changed for the better. They hope, but is it a vain hope?
Is your business the same as it was five years ago? That’s a question I’ve asked every one of the 88 players, and all agreed that it was not the same. Will it be the same five years from now as it is today? Again, all 88 say it will not be. Will it be better? That question splits the group, with 49 saying it will be better, 25 saying it will be the same, and the remaining 14 saying it will be more challenging. Even that split has qualifiers, though, when you dig down.
The biggest technology shift operators cite, one that goes back about three decades, is the Internet and its impact on data services. The Internet created the consumer data market, something that operators really liked when it first came along in the 1990s, but which has been a source of the current profit problems. It also centered business data services on an IP foundation, eliminating the mix of data protocols (SNA, DECnet, and so forth) that had existed, and also eliminating the need for enterprises to build networks by combining digital trunk services (DDS, T1/E1, T3/E3, SONET) with nodes like routers. The whole concept of a virtual network came along as a result.
All operators accept that technology shift, and all admit that it created the first true change in the nature of “services”. In the past, services were about making connections to specific things, like people or applications. After the shift, services connected people to experiences. This created what 57 of the 88 operators saw as the biggest business transformation, which was the shift from connecting to delivering. When you connect things, you’re explicitly creating a union or a fusion. Person A needs a connection to talk with Person B; the connection doesn’t create the other person, but it does create a unique value. But when you “deliver” something, the focus shifts to the thing being delivered. That’s the start of what telcos called “disintermediation”, because suddenly their own contribution (the connection) was subducted as buyers and users, consumer and business alike, focused on what was being delivered, which telcos didn’t provide.
What do telcos think is the technical challenge that will have to be met to make the future better for them? According to them, it comes down to improved service composition and service operations. Which, the technical types believe, really comes down to APIs. However, both among and within the individual operators, there’s a polarization regarding just what “APIs” mean.
Operator technologists and forward-thinking service planners, people I find in 71 of 88 operators, acknowledge that improvements in service revenues are going to come from the introduction of new services, built on a new set of valuable features that are more likely to be created through software running on servers than behaviors of switches or routers. They don’t see the future as a set of new devices but as a set of new features, and they see the process of service introduction as one of assembling features and presenting them. To them, the biggest question of APIs is “what features” they expose, followed by composition and operations questions.
Senior management, even in the CTO organization but especially at the COO/CFO/CIO/CEO levels, sees APIs as a marketing tool. They’re still in the days of disintermediation, meaning that they see their problem as being disconnected from the retail process rather than as the nature of retail demand. They see APIs as a means of exposing things, features, they already have so that these can be exploited by companies with a better understanding of retail markets, whether business- or consumer-oriented. Thus, the APIs may not expose features as much as lower-level services, and are used not by operators themselves but by partners.
I think that the work of the NFV ISG has, in a way, been directed by the dominance of the second of these perspectives. When you frame the goal of function hosting as the replacement of appliances, you necessarily frame it in the context of what appliances are already deployed to do, which means current (traditional connection) services. However, NFV anticipated traditional device interfaces and on-us services rather than retail partnerships and the use of APIs. I think that NFV needed either to embrace features as the goal (the first of the two approaches, advocated by the literati of the operator organizations) or to embrace partnerships. It has really done neither.
Part of the reason may be that when you are looking at the feature-composition mission of API exploitation, you’re really arguing for a true and pure cloud approach, meaning that most of what the NFV ISG did was irrelevant. The cloud deals with componentization constantly, supports both monolithic and functional/microservice approaches, and manages scalability and resiliency routinely through its normal tools. The process of service composition, in an ideal world, would combine selecting feature APIs that were valuable, presenting them in the form of optimized APIs, and using cloud tools for composition and deployment.
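As a minimal sketch of that ideal (and nothing more), assuming purely hypothetical feature functions standing in for real operator feature APIs, the Python below assembles two “features” into a single retail-facing service call:

```python
# Hypothetical feature-to-service composition; real feature APIs would be
# hosted software functions exposed through an API gateway.
from dataclasses import dataclass
from typing import Callable, Dict

FeatureFn = Callable[[dict], dict]

@dataclass
class ComposedService:
    """A retail service assembled from lower-level feature APIs."""
    name: str
    features: Dict[str, FeatureFn]

    def invoke(self, request: dict) -> dict:
        # Pass the request through each selected feature, accumulating
        # results into a single retail-facing response.
        result = dict(request)
        for feature_name, feature in self.features.items():
            result[feature_name] = feature(result)
        return result

# Two invented features: connectivity plus a non-connection feature.
def vpn_connectivity(ctx: dict) -> dict:
    return {"status": "connected", "endpoints": ctx.get("endpoints", [])}

def content_cache(ctx: dict) -> dict:
    return {"status": "warmed", "origin": ctx.get("origin", "default")}

service = ComposedService(
    name="managed-branch-access",
    features={"connectivity": vpn_connectivity, "caching": content_cache},
)
print(service.invoke({"endpoints": ["siteA", "siteB"], "origin": "cdn-1"}))
```

The point of the sketch is that nothing in it is network-specific; the composition and deployment machinery is exactly what the cloud already provides.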
The operations side of the story isn’t as straightforward. The problem is the highly variable relationship between service features, hosting points, retail services, and service users and their experience. Any service an operator offers is likely to involve connectivity as well as a variable number of non-connection features. The service may involve multiple users in multiple locations, and some of the service features may have instances dedicated to a single user, shared by a group of users, or shared by all users, in any combination. The process of service composition, deployment, and orchestration has to be able to record and maintain the relationships among features, services, resources, and users, and to manage all of that against an implicit or explicit SLA (or SLAs).
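To make the bookkeeping concrete, here is a minimal sketch of the kind of relationship records such a layer would have to maintain; every name, scope, and SLA value is invented for illustration:

```python
# Illustrative relationship records linking users, features, resources,
# and SLAs; names and values are invented for the sketch.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Resource:
    resource_id: str           # hosting point, link, or device
    site: str

@dataclass
class FeatureInstance:
    feature: str               # e.g. "vpn", "firewall"
    scope: str                 # "per-user", "per-group", or "shared"
    resources: List[Resource] = field(default_factory=list)

@dataclass
class SLA:
    metric: str                # e.g. "availability"
    target: float              # e.g. 99.95

@dataclass
class Service:
    service_id: str
    users: List[str]
    features: List[FeatureInstance]
    slas: List[SLA]

svc = Service(
    service_id="svc-001",
    users=["acme-hq", "acme-branch-12"],
    features=[
        FeatureInstance("vpn", "shared", [Resource("edge-7", "metro-east")]),
        FeatureInstance("firewall", "per-user", [Resource("pod-3", "metro-east")]),
    ],
    slas=[SLA("availability", 99.95)],
)

# Operations has to be able to answer questions like "which services
# does this resource touch?" from these records.
def services_on_resource(services, resource_id):
    return [s.service_id for s in services
            for f in s.features
            for r in f.resources
            if r.resource_id == resource_id]

print(services_on_resource([svc], "edge-7"))  # ['svc-001']
```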
One part of this particular challenge is an issue that came up well over a decade ago in the IETF, called “infrastructure to application exposure” or (in IETF whimsy) “i2aex”. The proposal was to collect resource telemetry (resources meaning both hardware and features in our context here) in a repository database, which would then be queried by all applications (composition, deployment, operations, management, and even services and features) to establish overall operating state. The indirection involved would protect resources from what could mimic a denial-of-service attack if a lot of applications tried to access the MIBs directly, and would also permit customized views (APIs) representing collective behaviors that were either actual services or independently managed components.
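A minimal sketch of that indirection, using an invented schema and metric names rather than anything i2aex actually specified, looks like this: a collector writes telemetry into the repository once, and applications query derived views instead of polling MIBs directly.

```python
# A repository collects telemetry once; applications query derived views
# instead of polling device MIBs directly. Schema and metrics are invented.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE telemetry (resource TEXT, metric TEXT, value REAL, ts INTEGER)")

def ingest(resource, metric, value, ts):
    # Only the collector touches the resources themselves.
    db.execute("INSERT INTO telemetry VALUES (?, ?, ?, ?)",
               (resource, metric, value, ts))

def view_service_latency(resources):
    # A customized "view" representing the collective behavior of the
    # resources behind one service, exposed as a simple query.
    marks = ",".join("?" for _ in resources)
    row = db.execute(
        "SELECT AVG(value) FROM telemetry "
        "WHERE metric = 'latency_ms' AND resource IN (" + marks + ")",
        resources).fetchone()
    return row[0]

ingest("edge-7", "latency_ms", 12.5, 1000)
ingest("pod-3", "latency_ms", 18.0, 1000)
print(view_service_latency(["edge-7", "pod-3"]))  # 15.25
```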
Another piece of the challenge is the “butterfly wing” effect. As the number of feature elements that make up a service increases, the difficulties associated with linking a service issue with a feature issue explode. This is an area where AI could be helpful, because the need to run mass correlations in a short period of time is itself a threat to effective responses. If you don’t know what’s actually happening, if you can’t see past all those butterfly wings, then you’re in trouble. But in order for AI or anything else to work, there has to be knowledge of the desired state of collective infrastructure behavior, a template of what’s expected.
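As a toy illustration of why that template matters, assuming invented features, metrics, and thresholds, the sketch below reports only the behaviors that stray from a declared desired state, which is the anchor any AI correlation would need to reason against:

```python
# Compare observed behavior to a declared desired-state template; the
# features, metrics, and thresholds are invented for illustration.
desired = {
    "vpn":      {"status": "connected", "max_latency_ms": 20.0},
    "firewall": {"status": "active",    "max_latency_ms": 5.0},
}

observed = {
    "vpn":      {"status": "connected", "latency_ms": 42.7},
    "firewall": {"status": "active",    "latency_ms": 4.1},
}

def deviations(desired, observed):
    # Report only the features whose observed state strays from the
    # template; without the template there is nothing to correlate against.
    issues = []
    for feature, goal in desired.items():
        actual = observed.get(feature, {})
        if actual.get("status") != goal["status"]:
            issues.append((feature, "status", actual.get("status")))
        if actual.get("latency_ms", 0.0) > goal["max_latency_ms"]:
            issues.append((feature, "latency_ms", actual.get("latency_ms")))
    return issues

print(deviations(desired, observed))  # [('vpn', 'latency_ms', 42.7)]
```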
There are operators, or at least tech mavens within operator organizations, who see all of this, and who’ve been telling me about their problems for five years at least. That’s good and bad at the same time, because these valuable few are also telling me that they’re not making much progress in getting the issues to the forefront. For example, they say that their management believes that AI alone is a solution, that it could somehow address all their complexity issues without any preferred state or any guidance on “rightness”. These literati don’t think that will work, and I agree.