I’ve never held the view that private 5G would be an enormous opportunity for vendors, cloud providers, or operators, and I still don’t. I do believe, though, that private 5G may be extremely important to all these groups, and to unlocking additional IT and network spending and investment. Enterprises are seeing the issue more clearly now, which gives me more ammunition to support this position and more clarity on the issues we face in realizing both private 5G and what it could lead to.
Let me start by saying that I do not believe that private or public 5G, or 6G, will replace WiFi. The great majority of WiFi applications don’t need anything cellular technology can offer, nor would they pay the price of getting it. As a network technology, WiFi wins, but many of the things we need today go beyond the network. Enterprises themselves say that private 5G has a potentially critical role to play in the evolution of edge computing and real-time IoT applications.
Private 5G is a successor concept and a trailblazer concept all in one. In the former sense, it’s a modernization of the private LTE and private wireless initiatives of the past, initiatives that have seen some success in the marketplace. In the latter sense, it raises issues that 6G will have to consider, and thus what’s done with it, or even proposed for it, could influence the direction 6G takes, and the direction of future network services.
All private wireless technology is specialized in that its value is limited to enterprises or government agencies that need something that looks much like mobile services but have a reason not to use public cellular services to fill that need. In the past, one strong reason was cost: a lot of devices/users would have to pay for public-level connectivity they not only don’t need but have to be protected from. Another was security; that protection need reflects a potential device/user vulnerability to things like hacking or DDoS attacks.
5G as it was standardized didn’t particularly impact these two factors, and little that’s been done since really does either, at least not directly. What did impact things was the Open RAN movement, which expanded on the 5G standards to open up radio network functional elements that the 3GPP didn’t. In doing this, Open RAN raised the possibility that RAN software could be modified to support non-RAN, even non-network, function hosting. You could plug an application component in and, by tightly coupling it to the RAN and its hosting points, offer its services at lower latency than any other model of shared hosting could duplicate.
You’d think something like this would be so appealing to cloud providers who want to host private 5G that they’d have jumped on offering the capability, but I’ve not heard from any cloud provider or enterprise who saw that happen. Instead, an enterprise seems to have taken the lead in that area, building what one expert there called a “RIC-Rack”, a plugin to the RAN Intelligent Controller that would allow connections to both the near- and non-real-time RIC, coupling software to the RAN resource pool and infrastructure. This initiative was self-hosted, but the company says it’s working to use “public” hosting, meaning presumably the cloud or something offered by a network operator.
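To make the idea concrete, here’s a minimal sketch of what a RIC-Rack-style plugin layer might look like, in Python. Everything in it is an assumption for illustration: the class names, the endpoint URLs, and the event type are invented, not the enterprise’s actual design or an O-RAN API. What it shows is the coupling model, where application callbacks register against RAN events and run on the same hosting point as the near-real-time RIC.

```python
# Hypothetical sketch of a "RIC-Rack" plugin layer. All class, endpoint,
# and event names are illustrative assumptions, not a real O-RAN API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class RicEndpoints:
    # Near-RT RIC handles control loops at roughly 10ms-1s timescales;
    # non-RT RIC handles policy/optimization at longer ones.
    near_rt_url: str   # assumed address, e.g. an in-cluster service
    non_rt_url: str

class RicRack:
    """Couples a tenant application component to both RIC layers."""

    def __init__(self, endpoints: RicEndpoints):
        self.endpoints = endpoints
        self._handlers: dict[str, Callable[[dict], None]] = {}

    def on_near_rt_event(self, event_type: str):
        """Register a callback for a near-real-time RAN event, meant to
        run on the same hosting point as the RIC for low latency."""
        def decorator(fn: Callable[[dict], None]):
            self._handlers[event_type] = fn
            return fn
        return decorator

    def dispatch(self, event_type: str, payload: dict) -> None:
        # In a real system the RIC's event stream would drive this;
        # here we invoke the registered handler directly.
        handler = self._handlers.get(event_type)
        if handler:
            handler(payload)

# Usage: a tenant IoT application reacting to a RAN-level event.
rack = RicRack(RicEndpoints("http://near-rt-ric.local:8080",
                            "http://non-rt-ric.local:9090"))

@rack.on_near_rt_event("ue_latency_report")
def steer_workload(report: dict) -> None:
    # Co-located application logic could react within the near-RT
    # control loop's latency budget.
    print(f"UE {report['ue_id']} latency: {report['latency_ms']} ms")

rack.dispatch("ue_latency_report", {"ue_id": "42", "latency_ms": 7})
```

The design point worth noting is that the application never touches the radio directly; it consumes events the RIC already produces, which is what makes latency-sensitive coupling plausible without rewriting RAN software.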
There appear to be two barriers to this concept: the performance and the security of the RAN and mobile infrastructure. Obviously, something that runs within the framework of the RIC might hog resources to the point where it compromised services, and any form of shared hosting has to ensure that a tenant can’t touch other tenants’ code or the code that controls the infrastructure overall. Thus, something like a RIC-Rack should be standardized and validated, perhaps as part of an initiative like O-RAN. Some in the enterprise that’s working on this idea believe that standardization is their only path to getting cooperation from cloud providers and operators.
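The kind of guardrail such a standard would have to define can also be sketched. Below is an illustrative per-tenant budget check; the quota fields, numbers, and interface are assumptions, and a real validated design would enforce limits at the platform level (containers, schedulers) rather than in application code. It shows the principle: an unregistered or over-budget tenant is simply refused, so one plugin can’t starve the RAN or consume another tenant’s share.

```python
# Illustrative per-tenant resource budgeting for RIC-hosted plugins.
# Fields, numbers, and interface are assumptions, not any standard.
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantQuota:
    cpu_millicores: int      # share of the RIC hosting point's CPU
    memory_mb: int
    max_events_per_sec: int  # caps how hard a plugin can drive the RIC

class QuotaEnforcer:
    """Rejects plugin work that would let one tenant starve the RAN."""

    def __init__(self):
        self._quotas: dict[str, TenantQuota] = {}
        self._usage: dict[str, int] = {}  # events admitted per tenant

    def register(self, tenant_id: str, quota: TenantQuota) -> None:
        self._quotas[tenant_id] = quota
        self._usage[tenant_id] = 0

    def admit_event(self, tenant_id: str) -> bool:
        quota = self._quotas.get(tenant_id)
        if quota is None:
            return False  # unknown tenants get nothing: isolation default
        if self._usage[tenant_id] >= quota.max_events_per_sec:
            return False  # over budget: protect the service, drop the event
        # A real enforcer would reset this counter every second; the
        # reset logic is omitted here for brevity.
        self._usage[tenant_id] += 1
        return True

enforcer = QuotaEnforcer()
enforcer.register("tenant-a",
                  TenantQuota(cpu_millicores=500, memory_mb=256,
                              max_events_per_sec=2))
print(enforcer.admit_event("tenant-a"))  # True
print(enforcer.admit_event("tenant-a"))  # True
print(enforcer.admit_event("tenant-a"))  # False: rate cap hit
print(enforcer.admit_event("tenant-b"))  # False: never registered
```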
Others are perhaps more cynical. A decision to actually offer RIC-Rack services could generate a kind of edge-hosting land rush, something that might hurt the cloud providers because it would encourage broader deployment of resource pools, raising their capex before there’s certainty that the facilities would pay back. The problem in getting that assurance relates to identifying just what goes into the rack and how it’s both built and justified. I think this may really be the standards issue in another guise; the rack is a kind of middleware, but what general set of hosting features should it offer to intersect with application opportunity and business-case justification in a broad market?
OK, why not create a standard for the rack? The problem there may be that middleware goes, obviously, in the middle of something. The “somethings” here are the real RIC and the edge software component, and while what the RIC needs can be identified by examining the source code in an open implementation, the edge application needs are harder to pin down. Does the rack, for example, only provide access to RIC/network features that could be inferred to be valuable to service users, or should it be able to add operator features that might require deeper integration with service logic? Should the rack include edge features that are likely needed generally by applications, even if they aren’t RIC features? You get the picture. Without an idea of what the feature goals are, it’s impossible to define a standard that would optimally address the opportunity.
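That scoping question can be framed as an interface-design choice. The sketch below uses invented names to show three candidate tiers a rack standard might or might not include; deciding where to draw the line among them is exactly the unresolved issue.

```python
# Hypothetical feature tiers a "rack" middleware standard would have to
# scope. All names are invented to illustrate the design question.
from abc import ABC, abstractmethod

class RicFeatures(ABC):
    """Tier 1: pass-through of RIC/network state clearly useful to users."""
    @abstractmethod
    def cell_load(self, cell_id: str) -> float: ...
    @abstractmethod
    def ue_latency_ms(self, ue_id: str) -> float: ...

class OperatorFeatures(ABC):
    """Tier 2: operator value-adds needing deeper service-logic integration."""
    @abstractmethod
    def request_qos_boost(self, ue_id: str, duration_s: int) -> bool: ...

class EdgeFeatures(ABC):
    """Tier 3: general edge-hosting features that aren't RIC features at all."""
    @abstractmethod
    def local_store(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def publish(self, topic: str, message: bytes) -> None: ...

# A standard that stops at Tier 1 is easy to validate but may miss the
# application opportunity; one that includes all three tiers is really
# a full edge platform, a far bigger standardization job.
class RackV1(RicFeatures):
    """Narrow reading of the standard: network telemetry only."""
    def cell_load(self, cell_id: str) -> float:
        return 0.42   # stubbed value for illustration
    def ue_latency_ms(self, ue_id: str) -> float:
        return 7.0    # stubbed value for illustration

print(RackV1().cell_load("cell-17"))  # 0.42
```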
Which is what? That’s the other problem. With all the hype around the concept of edge computing and the value of low-latency application access, we don’t have much real information on what applications we’re talking about. Part of the reason is that buyers typically don’t spend much time figuring out what products/services vendors or operators should be offering them, but rather spend it deciding what to do with what is offered. That means that “edge computing services” would have to be viewed as displacing the services currently hosted on premises, usually outside the data center as part of process control activities. Displacing something is complex, and the debate over just what should run in the cloud versus in the data center is proof of that.
So why not work out the needed features, if you’re a cloud provider eager to push into new revenue areas? Or if you’re an IT or network vendor who wants to unlock additional spending? It goes back to the “don’t build a road for others to share” mindset. Creating a market is an enormous marketing chore, and building out products/services to address the market you created is similarly enormous. As soon as you take a clear step, what you’re up to will be obvious to competitors, who can then jump onto your bandwagon and reap any benefits that actually develop. That truth is what underlies Cisco’s well-known “fast follower” versus “leader” preference. Let someone else take the risk, then step on their initiative.
I’ve noted this problem before. When a complex opportunity is presented, particularly now that the low-apple opportunities have been picked, it takes an ecosystem of supporting products and services to address it. Who, as a part of that ecosystem, will take the risk to move? We’ll have to see in the case of private 5G.