While the quote is popularly, and probably wrongly, attributed to Albert Einstein, many recognize the statement that “Insanity is doing the same thing over and over again and expecting different results.” One would think that sentiment would discourage continued reliance on failed concepts, but that’s apparently not true in the telco world. There, it means that you can rinse and repeat as long as you give the failed concept a different slant. Thus, telco APIs have become the latest example of hunkering down on the ever-commoditizing base of connection service. Take something that makes less every year, expose it as an API, and watch it make more money! It’s like having a tunnel into Fort Knox. There are obviously different views of this (HERE, HERE, my own analysis of Ericsson’s Vonage APIs, and Ericsson’s own release), so let’s dig into the issue and see whether the whole API thing makes any sense at all.
Let’s start with the positive. It is true that the future of telco profits is tied to a greater participation of telcos in the revenue stream for higher-layer services, which we’d call “over the top” or OTT. It is true that the easiest way to get that while avoiding competing with the established OTTs is to expose “feature APIs” that those OTTs would then consume. It is true that if every telco has its own APIs, the burden of dealing with that multiplicity across a global OTT service area creates a big barrier. And it’s true that establishing a kind of API consortium company involving a big API source and big operators would at least help reduce that multiplicity.
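To make the “feature API” idea concrete, here is a minimal, purely hypothetical sketch of what an OTT developer’s side of such a call might look like. The endpoint schema, field names, and the `QOS_E` profile value are invented for illustration, loosely modeled on the quality-on-demand style of API the consortium has discussed; a real operator API would define its own schema.

```python
import json

# Hypothetical illustration only: the field names and profile values below
# are invented; a real telco "quality on demand" API defines its own schema.

def build_qos_request(device_ip: str, app_server_ip: str,
                      profile: str = "QOS_E", duration_s: int = 600) -> str:
    """Build the JSON body an OTT app might POST to a telco QoS endpoint
    to request premium network treatment for one session."""
    body = {
        "device": {"ipv4Address": device_ip},
        "applicationServer": {"ipv4Address": app_server_ip},
        "qosProfile": profile,      # e.g. a low-latency gaming profile
        "duration": duration_s,     # seconds of premium treatment requested
    }
    return json.dumps(body)

# What the OTT would send; whether it's worth paying for depends entirely
# on whether the network can deliver the treatment end to end.
print(build_qos_request("203.0.113.7", "198.51.100.20"))
```

The point of the sketch is the article’s point: the call itself is trivial, so the value lives entirely in what the network does when it receives it.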
Now the negative. It’s true that the value of an API is the value of what it exposes, so creating an API to share a useless service, or one that doesn’t itself facilitate any new OTT revenue or reduce OTT cost, is useless. It’s true that having a common API consortium driving widespread new OTT opportunity doesn’t exactly facilitate competitive differentiation. It’s true that other network vendors, like Nokia, have their own API ideas and that their APIs are different from Ericsson’s and the consortium’s. It’s true that the equity partners in the group will all somehow have to share in revenues, making the deal less attractive to the telcos who need revenue the most.
How do we reconcile these groups? With a song, I propose. “You’ve got to accentuate the positive, eliminate the negative, latch on to the affirmative….”
It seems to me that accentuating the positive starts with getting the pricing and royalty payment plans right, because the real and perhaps only positive we have to play on is the standardization of telco APIs, so that OTT efforts to exploit them don’t become overly complex because of per-telco differences. In order for the standardization to be meaningful, it would have to be offered across a wide range of telcos, far beyond the current sponsors of the initiative. The limiting factor in that is the details of the pricing/payments strategy, which for now isn’t public and which I’m told off the record isn’t fully settled.
One clear challenge here is that absent some royalty payments made to founders of the group, there’s little incentive to commit funds to the initiative, but those payments will have to be spread out, with the sponsor telcos presumably sharing in whatever is distributed beyond Ericsson’s portion (Ericsson holds 50%). If a non-member telco licenses the APIs, the license fee would have to be shared, and in some cases the non-member might be contributing to the revenue of a competitor. I think a lot of refinement of these mechanics will be essential to ensure the proposed benefits can actually be harvested.
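The arithmetic of that sharing problem is easy to illustrate. The numbers below are entirely hypothetical, since the actual split isn’t public: assume Ericsson keeps 50% and the sponsor telcos divide the remainder equally.

```python
# Illustrative arithmetic only: the real split is not public. Assumption:
# Ericsson keeps 50% of a license fee, sponsor telcos split the rest equally.

def license_fee_split(fee: float, n_sponsor_telcos: int) -> dict:
    """Return each party's share of a single API license fee."""
    ericsson_share = fee * 0.50
    per_telco = (fee - ericsson_share) / n_sponsor_telcos
    return {"ericsson": ericsson_share, "per_sponsor_telco": per_telco}

# A hypothetical $1M license paid by a non-member telco, with 12 sponsors:
print(license_fee_split(1_000_000, 12))
```

Under these assumptions each sponsor sees only a small fraction of the fee, while the non-member paying it may be funding direct competitors, which is exactly the attractiveness problem the telcos who need revenue most will face.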
I also think that membership by OTTs in general, and by cloud providers, content providers, and social media players in particular, is essential. The telco view of what constitutes a useful API set has, up to now, been singularly naive. What’s needed is input from the players who would presumably be paying for the API use. This might suggest either changes to the proposed API set, or perhaps even new APIs. Right now, only Google Cloud represents the OTTs, and that’s surely too narrow a base.
Which leads to eliminating the negative, which I think is the real challenge here. There are services, like a time service, that could be argued as being useful in API form, but of course there are plenty of time sources already available on the Internet, via GPS, and so forth. Turbo-button gaming? That’s another idea that’s been proposed for literally decades but has never taken off. The problem, in part, is that applications transit multiple communications networks and often multiple hosting locations, and it’s difficult to guarantee performance across that spectrum of players, especially given the Internet’s historic bias against settlement and in favor of bill-and-keep, where any inter-provider deals to settle premium-handling payments could run afoul of ever-shifting net neutrality policies.
My best OTT contacts made it clear, when Ericsson first bought Vonage, that they didn’t find the APIs involved to be enough to launch any significant new services, or to lower their barriers to introducing them. Connectivity is the essential property of “Internet-as-a-network” or its “data dialtone” capability, and there are few ways to expose it in an API that create anything different from what’s naturally available. I don’t think there are any ways to expose a feature with revolutionary potential, and that’s what the experts say too. Thus, eliminating the negative requires latching on to an affirmative, which would be a new set of features to expose, features beyond basic connectivity. Over three-quarters of enterprises say that those features would almost have to be related to IoT and digital twins, because that’s the leading edge of business/process automation.
Enterprises think that telco API roles in IoT would have to focus on what they think of as “public sensor” applications. The great majority of IoT is based on local sensors and local-edge computing, owned and deployed on enterprise facilities. There is little value enterprises see for operators in this kind of application, but there are applications (including smart cities and many transportation-related applications) that require deployment on public property, and where overbuild created by per-enterprise investment could compromise everyone’s business case.
The question for telcos is not so much what APIs this might drive as how to encourage public sensors. One way, and the one with the best chance of success, would be for telcos to deploy the sensors themselves. Enterprises think this would be the best approach to building a business, but telcos don’t like the first cost, or even the need for direct investment, because of the bottom-line impact. The alternative would be to somehow induce others to deploy. How that could work is open to debate.
Some enterprises think that an affirmative partnership strategy would be needed, where the telco offered sensor/effector connectivity for little or nothing, and banked on a set of add-on features, which could range from administration and security to digital twin services. Telcos don’t show much interest in this either, according to what I hear from them.
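The add-on model those enterprises describe can be sketched in a few lines. Everything here is hypothetical and invented for illustration: the idea is that raw sensor connectivity (the “data dialtone”) is offered at little or no cost, while the telco meters value-added processing such as maintaining a digital-twin record.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the "connectivity free, add-ons metered" model some
# enterprises describe. All names and fields here are invented illustrations.

@dataclass
class PublicSensorTwin:
    """A minimal digital-twin record a telco add-on service might maintain
    for one public sensor (e.g. a smart-city traffic camera)."""
    sensor_id: str
    state: dict = field(default_factory=dict)
    billable_events: int = 0  # only the add-on processing is metered

    def ingest(self, reading: dict) -> None:
        # Free tier: raw delivery of the reading (the "data dialtone").
        # Paid tier: normalization and twin-state maintenance, metered here.
        self.state.update(reading)
        self.billable_events += 1

twin = PublicSensorTwin("traffic-cam-042")
twin.ingest({"vehicles_per_min": 37})
twin.ingest({"vehicles_per_min": 41, "congestion": "moderate"})
print(twin.state, twin.billable_events)
```

The design choice the sketch highlights is the article’s argument: the revenue sits in the twin-maintenance layer, not in the connectivity the telcos are comfortable selling.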
And that, I think, is the heart of the issue. No matter what color you paint a pig, it’s still a pig. The strategy of exposing current connection features through an API has had many colors over the last two decades, but all its aspects come down to an effort to exploit the comfortable, not the positive. If that doesn’t change, all telcos will end up with is a mud-colored pig, which is pretty much what they started with.
Who could change it? Interestingly, maybe only Google. They’re part of the group, and they’re the cloud provider that telcos have consistently told me they trust most among the cloud giants. Google probably knows the right new features to create and to expose. Will they tell the telcos, and will the telcos accept Google’s guidance? That may be the question that determines the success of this venture.