There’s always going to be some vendor whining regarding any technology. Somehow, buyers don’t seem to accept that it is their duty to buy the latest technology offered, whether they actually have a business case for it or not. Vendors keep insisting that even if the new stuff doesn’t really do what’s needed, that’s only a minor hurdle the buyers need to work around. No surprise, then, that Microsoft used some of its MWC time to rant about the pace of “open cloud” adoption in Open RAN and 5G.
The central argument Microsoft makes isn’t totally stupid. They’re saying that the RAN should be cloudified, which is also what most operators (77 of 81) say. They’re saying that a cloudified RAN should be compatible with the container services of any public cloud, which is what 71 of 81 operators say to me. A Light Reading story on the topic reports that AT&T admits their Ericsson RAN software will run in part on Ericsson’s CNIS cloud infrastructure rather than on public cloud in general, or on Microsoft’s Azure in particular. The article says “That sounds far from ideal because the whole rationale for cloudification, as far as many experts are concerned, is that said operator can collapse all its network functions onto the same underlying platform, scrap the silos and be more efficient.” That point, I have to say, doesn’t get the same positive operator reaction, at least based on what they tell me.
Of the same 81 operators, only 20 say they believe their Open RAN stuff should run only on a public cloud provider, and there’s no difference in that view whether you’re talking about a single provider or multiple ones. The quote above implies that “network functions” means generalized NFV-style virtual network functions (VNFs) and not just RAN-related ones, and if you make that clear, only 16 say public cloud hosting should be the uniform strategy. All the rest believe they would likely elect to host at least some network functions on their own infrastructure, on vendor-supplied technology, or otherwise somewhere other than the public cloud. I think that’s actually the right choice, for a number of complicated reasons.
The RAN part of a network is a two-layer structure that’s generally seen as running from the various tower sites to the on-ramp to the core network. While it’s not usually presented this way, I think the best way to visualize it is in terms of those two layers. The bottom layer is the “user plane”, and it’s best to think of it as an IP network with some added features to link it to the other layer. That other layer, the “control plane”, is responsible for things like subscriber management and mobility management.
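Just to make that two-layer picture concrete, here’s a minimal sketch in Python. Every class, field, and value in it is a hypothetical I made up for illustration; none of it comes from a 3GPP or O-RAN specification. The point is simply that the user plane is forwarding capacity sitting at a site with hooks up to the control plane, and the control plane is a set of functions whose hosting location is a choice the operator gets to make.

```python
# Purely illustrative model of the two-layer RAN view described above.
# All names and values are hypothetical, not taken from any specification.
from dataclasses import dataclass, field
from typing import List

@dataclass
class UserPlaneElement:
    """Bottom layer: essentially IP forwarding, plus hooks up to the control plane."""
    site: str                       # e.g., a tower or aggregation site
    forwarding_capacity_gbps: float
    control_hooks: List[str] = field(default_factory=list)  # control-plane functions it talks to

@dataclass
class ControlPlaneElement:
    """Top layer: subscriber management, mobility management, and similar logic."""
    name: str
    hosting: str                    # "public-cloud", "operator-cloud", "vendor-stack", ...

# A toy slice of the structure: user-plane boxes at the towers, control-plane
# functions hosted wherever the operator chooses.
ran = {
    "user_plane": [
        UserPlaneElement(site="tower-001", forwarding_capacity_gbps=25.0,
                         control_hooks=["mobility-management"]),
    ],
    "control_plane": [
        ControlPlaneElement(name="mobility-management", hosting="operator-cloud"),
    ],
}

if __name__ == "__main__":
    for cp in ran["control_plane"]:
        print(f"{cp.name} is hosted on {cp.hosting}")
```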
There is no question, either in my mind or in the minds of the mobile operators I chat with, that the control-plane elements of the RAN can be hosted in the cloud, and in containers. Of the 73 operators who actually offered mobile services, 66 had comments in this area. Of that group, 64 believed control-plane elements could be cloud-hosted, 47 said they should be where the service geography was broad enough, and 18 said they were already either planning or using cloud hosting of some control-plane elements.
The data plane (what I called the user plane above) is more complicated. I’ve already said, in past blogs, that I don’t believe cloud-hosting of RAN data-plane features is generally useful. My reason is that the data plane is a switching function, best handled by devices built on custom “white-box” switching chips rather than by general-purpose CPU/GPU technology, and I still believe that to be true. I don’t think that “cloud” resource pools can be expected, or even justified, way out toward the RAN edge. An appliance is more logical. Of our 73 operators with mobile services, 59 held that view, but 62 had another concern they rated higher.
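To see why, consider some rough, order-of-magnitude arithmetic. Both numbers below are assumptions I’m making purely for illustration, not measurements or vendor quotes, but they show the scale of the gap between switching silicon and general-purpose servers.

```python
# Rough, order-of-magnitude arithmetic on why the data plane favors switching
# silicon over general-purpose servers. Every number here is an illustrative
# assumption, not a measurement or a vendor quote.
ASIC_CAPACITY_TBPS = 12.8        # assumed capacity of a single white-box switching ASIC
SERVER_FORWARDING_GBPS = 100.0   # assumed forwarding throughput of one well-tuned server

servers_needed = (ASIC_CAPACITY_TBPS * 1000) / SERVER_FORWARDING_GBPS
print(f"Servers to match one switching ASIC (under these assumptions): {servers_needed:.0f}")
# => roughly 128 servers' worth of forwarding in one appliance, before you even
#    count power, space, and operations way out at the RAN edge.
```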
What concern? That they would essentially be committing to ceding data-plane traffic to cloud-provider handling rather than to a more traditional IP network. Some worried about ingress/egress pricing, but all worried that they would, in effect, be starting to offload traffic onto a cloud provider’s internal network. In fact, of those 62, 52 said they had similar concerns about cloud applications in general, and SASE in particular. One Tier One said “I’m worried that the core network of the future is a set of private DCI [data center interconnect] links between cloud data centers, and we’re just providing access technology.” That, of course, is the most expensive and least profitable piece of the entire network infrastructure.
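To see why egress pricing alone makes operators nervous, here’s a toy calculation. The per-gigabyte price and the traffic volume are assumptions I picked for illustration; they aren’t quotes from any cloud provider or figures from any operator.

```python
# Toy calculation of the egress-pricing worry. Price and volume are assumed
# for illustration only; they are not real quotes or operator data.
EGRESS_PRICE_PER_GB = 0.08       # assumed public-cloud egress list price, USD per GB
MONTHLY_USER_PLANE_TB = 500.0    # assumed monthly traffic through cloud-hosted functions, one region

monthly_cost = MONTHLY_USER_PLANE_TB * 1000 * EGRESS_PRICE_PER_GB
print(f"Assumed monthly egress bill: ${monthly_cost:,.0f}")
# => $40,000 per month for a single, modest region -- and that's before the
#    traffic itself starts riding the cloud provider's internal network.
```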
There are vendors, including NVIDIA and Broadcom, who wouldn’t mind having the concept of a “resource pool” redefined to include broader missions than general-purpose computing. The AI-RAN Alliance, announced at MWC, says “We utilize AI for the enhancement of RAN performance. We build infrastructure where AI and RAN can share information and collaborate. We enable new AI applications to run on RAN.” Will this group promote a broader definition of a generalized RAN-and-AI-and-Edge resource pool? Too early to say; there’s nothing useful on their website yet.
Would it matter if the group actually came up with something? That may be the better question. Almost from the dawn of Open RAN and 5G there’s been interest in (and attempts at) broadening the scope of the “resources”. The whole notion of the RIC (RAN Intelligent Controller) is that there’s a need to manage a set of RAN network functions. Most vendors who support Open RAN and the RIC suggest that new functions could create new revenue streams, and there’s a lot of optimization of things like traffic, beamforming, and slicing that could be managed there. However, it’s not clear how valuable AI would be for any RIC-controlled functions, even less clear whether the AI that’s used would need to be hosted there, and totally unclear whether any of this would actually be enough to generate a real commercial opportunity.
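For what it’s worth, here’s a toy sketch of the kind of closed-loop optimization a RIC-hosted function might run. The metric names, thresholds, and actions are all hypothetical; this isn’t modeled on any real O-RAN interface or vendor SDK. It’s the sort of thing that tunes what’s already there, which matters for the point below.

```python
# Toy sketch of a closed-loop optimization function of the sort a RIC might host.
# Metric names, thresholds, and actions are hypothetical, not from any O-RAN
# interface or vendor SDK.
import random

def read_cell_load(cell_id: str) -> float:
    """Stand-in for a real telemetry read; returns a load fraction between 0 and 1."""
    return random.random()

def rebalance(cell_id: str, load: float) -> str:
    """Pick a (hypothetical) action based on the observed load."""
    if load > 0.8:
        return f"shift traffic away from {cell_id}"
    if load < 0.2:
        return f"put {cell_id} into power-save"
    return f"leave {cell_id} alone"

if __name__ == "__main__":
    for cell in ["cell-A", "cell-B", "cell-C"]:
        load = read_cell_load(cell)
        print(f"{cell}: load={load:.2f} -> {rebalance(cell, load)}")
```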
Vendors will always try to promote stuff for buyers to buy. The media will always work hard to find stories people will click on. Promoting either of the two creates buzz but not business cases, as I’ve been saying recently. Could AI features in a RIC domain be helpful? Probably, but unless we actually figure out something dramatic they could do there, I don’t think “helpful” is compelling enough.
We don’t need “helpful”. What we need is something transformative, compelling, sufficient to change the patterns of infrastructure investment. I’ve been involved in telco-land for decades, and from the very start too many vendors have focused on “things you can do” with a technology rather than things that could justify it. Could we do stuff, edge-computing stuff, with RAN resources? Sure, and we could do things with smartphones, smart watches, and thermostats. None of those things will change the world, or the network.
Here’s the thing. Edge computing, as we already know, is currently exploding, but not the way it’s been described. Enterprises are putting it in spots near the processes it supports, on premises and in-facility. We also know that AI is going into phones, and we know that phone AI is already being supported by deeper AI processes. Do we need AI applications, not RAN applications of AI but AI applications hosted in the RAN? I doubt it. What would prove me wrong? Identifying missions that go beyond “it could be done there” into the realm of “it can be done a lot better there.” I think finding those missions is going to take a long time.