Telco interest in AI is high, as it is in most other verticals these days. Still, telcos are generally a bit behind in their progress at assessing AI: of the 44 vertical clusters I get data on, telcos fall in the bottom third in AI capability based on self-assessment. Only 13 of the 51 telcos who had AI comments to make said they had actual, serious AI initiatives underway, and only another 22 said such initiatives were in planning. What’s particularly interesting, or perhaps sad, is that the reasons for this seem to tie back to the same basic culture issues that have stalled telco progress in so many ways already. It’s the supply-side bias.
Telcos tend to plan in terms of resources, not in terms of needs or demand. The classic “build it and they will come” mindset focuses on building, which necessarily focuses on what you’re building and how you’re doing it. So, for example, 47 of the 51 telcos said they were looking at “generative AI” rather than at AI in general, and that’s because generative AI gets all the ink. You can imagine a telco CEO, having read a story on the topic, calling in the CTO and saying “Hey, we need to be considering this generative AI stuff. Get somebody on it.” What results is a plan to evaluate generative AI without any specific business case to drive the process.
Telcos actually have a potentially great set of business cases available. As I pointed out in my blog earlier this week, they are almost universally trying to reduce headcount, particularly in consumer sales and customer support. These are missions that chatbot technology is well-suited for. One big telco told me that they got better customer responses from a chatbot interaction than they did with offshore call center support in a side-by-side test of the two. Despite this, several telcos told me that a recent TMF event on generative AI hardly even mentioned the chatbot as a tool, and Light Reading’s article on the event doesn’t either.
The big issue for every enterprise of any sort that wants to evaluate AI is the business case. It’s hard to model this at this point with so little objective telco input available, but my preliminary efforts suggest that you could cut customer sales and support costs by roughly 30% from current levels simply by using chatbots for all support interactions, replacing most of the requirement for interaction with a human agent. That’s a pretty big reduction. Further reductions of around 20% could be achieved in what I’ve called “process opex”, meaning the network and service management tasks. However, public-model generative AI isn’t helpful in either of these areas; you need a specialized form of AI that operates on private data.
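To make that arithmetic concrete, here’s a minimal sketch of how the two reductions might be modeled. Only the 30% and 20% figures come from my estimates above; the baseline dollar amounts are purely hypothetical placeholders for illustration.

```python
# Rough savings model for the two AI targets discussed above.
# The percentage reductions come from the preliminary estimates in the text;
# the baseline dollar figures are hypothetical placeholders, not real data.

def projected_savings(baseline_cost: float, reduction: float) -> float:
    """Annual savings from applying a fractional cost reduction."""
    return baseline_cost * reduction

# Hypothetical annual baselines for an illustrative telco.
customer_support_baseline = 500_000_000   # consumer sales and customer support
process_opex_baseline = 300_000_000       # network and service management

support_savings = projected_savings(customer_support_baseline, 0.30)
process_savings = projected_savings(process_opex_baseline, 0.20)

print(f"Support savings: ${support_savings:,.0f}")
print(f"Process savings: ${process_savings:,.0f}")
print(f"Total:           ${support_savings + process_savings:,.0f}")
```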
Customer and sales support can’t be addressed using models trained on the Internet. Business intelligence can’t be analyzed that way either. You need something that’s customized to your own information. The language (the “prompts”) associated with generative AI models works fine with private data, but you have to train with that data rather than with generic Internet data. What’s most helpful, according to the broader spectrum of enterprise verticals, is a model that’s trained on the general form of input/output and then fed the specific data of the company as a prompt. Since this is a simpler use, enterprises say that hosting the model on premises is practical. However, it’s not clear whether a chatbot application, which requires human-credible conversational syntax, can be done effectively this way.
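A minimal sketch of that “feed the company data as a prompt” pattern is below. It assumes an on-premises model exposed through an OpenAI-compatible HTTP endpoint (a common pattern for self-hosted LLM servers); the URL, model name, and account records are all invented for illustration.

```python
import requests

# Sketch: private company data never leaves the premises; it is retrieved
# locally and passed to an on-prem model as prompt context.
LOCAL_LLM_URL = "http://localhost:8000/v1/chat/completions"  # hypothetical endpoint

def answer_with_private_context(question: str, private_records: list[str]) -> str:
    context = "\n".join(private_records)  # e.g. rows pulled from an internal CRM
    payload = {
        "model": "local-model",  # placeholder name for the on-prem model
        "messages": [
            {"role": "system",
             "content": "Answer using only the company data provided below.\n" + context},
            {"role": "user", "content": question},
        ],
    }
    resp = requests.post(LOCAL_LLM_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

# Example: a support-style query grounded in (hypothetical) account records.
print(answer_with_private_context(
    "Why was the customer's bill higher this month?",
    ["Account 1234: plan changed 2024-05-01",
     "Account 1234: one-time activation fee $25"],
))
```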
Telcos, like other verticals, say that they’re not prepared to share their company data with a public-model generative AI tool. Most aren’t even willing to have it stored or used in the cloud, so they also resist the idea of using public cloud services to host AI. They want their AI hosted in their own data centers. There’s growing realization of this among the big AI providers (Amazon, Google, Microsoft, OpenAI), and thus growing interest in creating AI tools that can be run on premises, and AI hybrids that keep private data on premises while drawing on broader AI knowledge. However, telcos seem more focused on public-model capabilities, like using AI tools to write documents or even to deal with email.
The problem with these tools is that I’ve been unable to gather anything that proves they could reduce opex or headcount. While individual workers and even managers may like them, and say they can reduce the time required for many different kinds of writing tasks, they don’t as yet associate that with concrete savings. One AI planner told me that management was “looking at ties not at coats”, meaning they were looking at AI for glamour reasons rather than practical ones, and so had accumulated no data on value at all.
There are many operational missions for generative AI, and most could probably be supported on premises. Network operations, business operations, and business analytics could all be visualized as the process of analyzing tabular data to produce a tabular result. Think of it as a model written in AI-speak, based on a kind of spreadsheet template that’s filled in. There are ample commercial and open-source large language model (LLM) tools available that can be hosted on premises or even on IaaS services in the cloud. The latter, according to enterprises, is at least a little more secure than using public AI services to analyze company data. Enterprise AI specialists like the template-and-model idea, but it doesn’t seem to have caught on at the telco level as yet, and therefore telcos haven’t really been working to develop the “templates” needed.
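To make the “spreadsheet template that’s filled in” idea concrete, here’s a minimal sketch of what such a template might look like before it’s sent to an on-prem model. The analysis task, column names, and operational rows are all invented for illustration.

```python
import csv
import io

# Sketch of the template idea: the template fixes the analysis task and the
# output format, and the operator's own tabular data is inserted as rows.
ANALYSIS_TEMPLATE = """You are analyzing network-operations data.
Input table (CSV):
{input_table}

Produce a CSV table with columns: site,issue,recommended_action.
Only include sites whose packet_loss_pct exceeds 1.0."""

def fill_template(rows: list[dict]) -> str:
    """Render the rows as CSV and insert them into the analysis template."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
    return ANALYSIS_TEMPLATE.format(input_table=buf.getvalue())

# Hypothetical operational snapshot; in practice this would come from OSS/BSS systems.
sample = [
    {"site": "CO-14", "packet_loss_pct": "0.2", "utilization_pct": "61"},
    {"site": "CO-22", "packet_loss_pct": "2.7", "utilization_pct": "88"},
]
print(fill_template(sample))  # this prompt would then go to an on-prem LLM
```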
This is unfortunate, because I think it’s clear that telcos could launch a standalone collective activity to build specialized templates for use, or do that within a current body like ETSI, the TMF, or even an open-source group like the Linux Foundation or Apache. I think AI template/model creation would be very similar to an open-source project. This should have been the first step in the telco AI process. Don’t start by diddling with the public AI stuff; work out how to create your own templates, your own foundation models, that are generic enough for any telco to use. That might sound a bit farfetched, but remember that telcos started out as either regulated monopolies or actual parts of the government, so the telco vertical is probably way more standardized than the typical vertical.
It’s also possible that a vendor would be willing and able to do it. IBM has been making overtures to telcos on the use of watsonx, and so have Google and OpenAI, though the latter with less emphasis and less engagement, or so I’m told. Juniper is IMHO the leading vendor in the network-AI space, and they might be in a position to leverage that through HPE, if the deal doesn’t distract all the key people in both companies. Vendors could run this on their own, or they could join a telco initiative. The vendors would prefer the former, and while history says the telcos would likely prefer the latter, right now they don’t seem capable of expressing any useful view of their own.
I’d love to know whether there are telcos that would support an initiative like this, but I try not to ask questions that would bias the people I’m talking to; that approach means you end up researching your own views. None of them brought the idea up, but they should have. Perhaps eventually they will.