Are Cloud Providers Getting into Networks?
https://andoverintel.com/2025/04/24/are-cloud-providers-getting-into-networks/ (Thu, 24 Apr 2025)

There has, for years, been a potential for the cloud providers’ networks to create competition for enterprise networks based on MPLS VPNs. I noted in an earlier blog that enterprises were seriously looking at reducing their WAN costs by using SD-WAN and/or SASE. This obviously generated an opportunity for cloud providers to offer WAN services, and also for someone to offer multi-cloud interconnect. Maybe it also offers more, as Fierce Network suggests.

The increased role of public cloud services as a front-end application element, and the growing tendency for enterprises to think of workers, customers, and partners as roles within a common hybrid cloud application set, has meant that many enterprise traffic flows have already abandoned the MPLS VPNs in branch locations, flowing instead from Internet to cloud to data center.

Google’s Cloud WAN, Amazon’s AWS Cloud WAN, and Azure Virtual WAN are all cloud provider services that aim to increase the role of the cloud provider in what was once a branch-and-HQ MPLS VPN lake. I’m not seeing, or hearing about, any major initiatives from cloud providers to take on branch connectivity missions for non-cloud customers. Enterprises also, so far, seem not to be looking for that sort of relationship, but changes could be in the wind.

Recall my comments on the possibility that something IMS-like might create an opportunity to steer traffic by QoS? Well, these cloud WAN services are all essentially fast paths from the Internet (for access to the sites) to the cloud WAN. If we presumed that there was a fast path from the user to the cloud WAN on-ramp closest to them, it might then create a QoS facility for the cloud WAN that was better end-to-end than an externally provided SD-WAN solution.

Another possibility related to that is the chance that some form of access network partitioning, not unlike wireless network slicing, could offer a way to separate premium services. Would this violate net neutrality? Who knows? There have been times, particularly early on, when neutrality policy also excluded services that aimed at Internet bypass in order to offer paid prioritization. There have been times when it allowed it, and probably times when sunrise and sunset might have been under suspicion. Let’s face it, neutrality policy is in the eye of the party in power.

Another Fierce Network article raises a third possibility. Lumen Technologies, cited in the first of the Fierce Network pieces for its partnership with Google’s Cloud WAN, is of the view that the Internet is not the enterprise, or the enterprise network. Lumen proposes to build something that is: a fabric of high-capacity Ethernet trunks that link the cloud providers and enterprises. The company would like to connect the local hubs of all these providers too, and I’d not be surprised to find Lumen connecting some major enterprise sites and partnering with metro/regional fiber access players as well.

This is one of those good-news-bad-news things. Enterprises would surely be happy to see cloud providers offer competition in premium business services. Cloud providers would, of course, be even happier to sell them. Telcos? Well, that’s where the bad news comes in.

Telcos are, as ISPs, more likely subject to neutrality rules, including wavering policies on Internet bypass. In the US, the original justification for extending net neutrality in that direction was to prevent non-neutral services, because they were more profitable, from undermining Internet investment that directly impacts…wait for it…voters. The risk, as the telcos see it, is that cloud providers with no neutrality issues to worry about might end up providing those premium services and increasing the disintermediation the telcos already suffer at the hands of OTTs. Before, they were disintermediated from higher-level, more valuable services. Now they could be disintermediated from services in their own zone.

On the other hand, might operators who used 5G or 6G IMS and network convergence (the much-hoped-for “all-IP network”) offer what I previously characterized, based on some insightful comments, as nIMS across wireline and wireless? Remember that the IMS spec calls for specialized bearer channels based on service requirements, and it seems unlikely to me that 6G would deliver microsecond latency for everything, which suggests that low latency might be an example of a service requirement for a specialized bearer channel.
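
To make the bearer idea concrete, here’s a minimal sketch of how a service requirement might map to a bearer profile. The profile names, latency figures, and selection rule are all illustrative assumptions of mine, not anything drawn from the IMS or 3GPP specifications.

```python
# Illustrative only: hypothetical bearer profiles and a selection rule, in the
# spirit of "specialized bearer channels based on service requirements."
BEARER_PROFILES = {
    "best_effort": {"latency_ms": None, "priority": 9},
    "interactive": {"latency_ms": 100, "priority": 6},
    "real_time": {"latency_ms": 10, "priority": 2},
}

def select_bearer(required_latency_ms=None):
    """Pick the loosest profile whose latency bound still meets the requirement."""
    if required_latency_ms is None:
        return "best_effort"
    candidates = [(name, prof["latency_ms"]) for name, prof in BEARER_PROFILES.items()
                  if prof["latency_ms"] is not None
                  and prof["latency_ms"] <= required_latency_ms]
    if not candidates:
        return "real_time"  # tightest profile we have; a real system might reject the request
    return max(candidates, key=lambda c: c[1])[0]

print(select_bearer())     # -> best_effort
print(select_bearer(200))  # -> interactive
print(select_bearer(20))   # -> real_time
```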

This last point may be critical, because it illustrates a paradox that’s been created for the network operators. On the one hand, they’re buoyed by the hope that new applications in general, and new real-time applications in particular, are a major future source of revenue for them. The same thing, I must point out, is the hope of edge computing and many who envision a “smart” future. On the other hand, these applications will surely demand latency and availability guarantees that are not justified for the Internet overall, and may not be economically feasible if they have to be presented on a no-extra-charge basis to everything.

Best-efforts Internet isn’t an ideal, optimal, or perhaps even compliant service on which to base real-time applications. We need to accept, at the policy level, what we’re mandating economically and technically. Real-time is the next time we need to be worrying about. It’s likely that a real-time application would demand a subset of Internet features, optimized for low latency and high availability. Getting to that could happen, in theory, either by refining how real-time-enabling features work, or separating real-time from the Internet. Cloud providers, who are also the most likely players to take a shot at the real-time-dependent edge computing space, have a big incentive to try to advance those special features. Could they do it themselves? Yes, deeper in the network, but not likely in the access network unless the features become part of “standards”.

Could it be? In theory, mobile services (IMS, 4G, and 5G) support multiple data network gateways, so theoretically a cloud provider could offer connection to one of their hubs. Would operators themselves be allowed to offer something like this? I’m skeptical. Allowing it would, in my view, reverse past policies aimed at keeping operators from exploiting the unfair advantage their position confers. Could neutrality put them at an unfair disadvantage here?

There’s a potential collision between public policy in the traditional sense, and the health of the market that, to the FCC and many other regulatory bodies, is the primary goal of regulation in the first place. We have standards and technology trying to advance, but at risk of running into a barrier created not by technology limits or economics, but by policy. The role of the Internet has evolved to become truly the modern equivalent of dialtone, and we need to think about that in technology planning and regulation.

What Operators and Vendors Hope 6G Will Offer
https://andoverintel.com/2025/04/23/what-operators-and-vendors-hope-6g-will-offer/ (Wed, 23 Apr 2025)

In my blog yesterday about the future of operator network services and infrastructure, I mentioned the possibility (well, maybe “hope” would be more accurate) that the 6G initiatives might address some issues in a useful way. Since we’re at least five years from a solid idea of what 6G is going to do (we might get an idea of what it’s targeted at in three years, but not how it will deploy), can we even see a glimmer of direction? I got some views from both operators and vendors, and (of course) I have my own. Here’s what comes out of that mix.

Among operators, the big requirement for 6G is simple; it has to be more technologically evolutionary than revolutionary. Operators are almost universal in their belief that 5G didn’t return on the investment it demanded. The majority (just shy of three-quarters) also express reluctance to accept any promise of new revenue to offset costs, which means that they expect 6G “transformation” to be accomplished with a lot less hardware deployment than 5G.

Among vendors, you get almost the opposite picture, which isn’t much of a surprise. They want 6G to be a true infrastructure revolution. One vendor eagerly told me that “6G will transform every device that touches a network.” You can almost see them counting the prospective money. The thing they believe is that it’s all about speed and latency. “We will offer terabit service; we will offer latency measured in single-digit microseconds.”

Build it and they will come meets build it on the cheap.

There are some “technical” goals that both camps accept, and they’re mostly related to cost reductions that satisfy the operators but focus on something other than capex, so they don’t hit vendors’ bottom lines. One example is energy efficiency; another is reductions in network complexity. There’s also interest in RF efficiency to get more from available spectrum, which could lower the cost of achieving some of the feature/function goals.

A couple of smart people in both the operator and vendor camps offered what I think is the most realistic summary of 6G goals. To paraphrase, 6G has to prepare the network for the evolution of applications and experiences that businesses and consumers will want or need going forward, so the network will not create a barrier to their growth. This means that the network will have to anticipate these new requirements in a way that fosters early adoption and experimentation, but does not impose a high “first cost” on operators, an impact that could delay or even stop deployment.

These smarties think that what’s likely to happen is an advance into 6G in phases, each phase offering a useful step that manages cost but introduces new capabilities to encourage application development. One very interesting comment these people offered was that private 6G and FWA may end up being the keys to 6G success. The reason is that many of the features of 6G would logically develop from applications with a very limited scope, more suited to private wireless, and would expand to wider geographies only if they were proved out in the early private missions.

Again, paraphrasing: Low-latency applications are real-time applications, and these are today almost totally limited to plant and campus distances, supported by WiFi and wired connections. One goal of 6G is to be the sole accepted solution to real-time low-latency missions, which we’d call IoT missions. To make that happen, it has to go into the plants and campuses, and displace local connection options there, rather than depend on new wider-ranging and as-yet-undeveloped applications that WiFi and wires can’t serve.

The first phase of 6G, say this group, has to focus on efficiency and on this private-mission goal. They see early private 6G utilizing current 5G spectrum, but using a simplified network model versus 5G, and relying more on machine learning to make components of the service work autonomously. As these applications expand, the literati think they’d likely open new spectrum at the high end of the mid-band, in the 20-24 GHz piece of the 14-24 GHz range. This spectrum has potential as a private resource, in part because the shorter wavelength doesn’t propagate as far, and carving out something in this range for local exploitation has less impact on future exploitation of the mid-band by operators.

The simplified model this group envisions is about building wireless networks with fewer boxes, meaning eliminating things in 6G that were really introduced in 4G and continued in 5G. One proposal many make is to completely rethink the concepts of the IP Multimedia Subsystem. Anyone who’s looked at an IMS diagram recognizes that there are more different boxes in it than in the whole of the rest of IP networking. The group believes that the nIMS (as one calls it) should be viewed as a mobility-capable overlay on a standard data infrastructure, one that is provisioned to have the capacity needed and that therefore requires little in the way of capacity management.

One goal of nIMS is to make it possible to deploy private 6G at the plant/campus level, using resources enterprises either have (current local-edge process control servers) or can acquire and manage in the way they’re used to handling compute. Think of it as running on Linux blades, where the only new thing needed is the radio setup. Some of the thoughtful group think that any specialized features might also be hosted in a router blade, and that in particular, any User Plane functions of nIMS should be router/switch features and not custom devices.

To facilitate the expansion of the private 6G model, the group thinks that spectrum policy is critical. They recommend that a chunk of the top end of the 14-24 GHz spectrum be allocated for government/utility use in smart cities, with priority given to any municipality with a population of at least 100,000. Since smart cities are one of the specific 6G application targets, this would encourage development of 6G solutions rather than force cities to use an earlier alternative. They also say that 6G should run within any 5G spectrum available, public or private, which would mean it could in theory be used with existing radios if 5G were already deployed.

The big points this group makes are related. First, you have to make 6G start small, with minimal “first cost”, and with that cost focused on preparing for a private, application-driven 6G evolution. Second, you have to ensure that every step in the process is aimed at just getting you to the next step, opening new benefits gradually as the service opportunities justify them. Operators in this group agree that it’s first cost, the cost of an initial deployment, that matters, and that by limiting mass-market deployment early on, operators could manage their first cost and risk.

But, of course, you don’t address the vendor problem of wanting a boatload of money, nor do you eliminate the risk that the gradual roll-out doesn’t generate any validating missions, and that 6G then never does anything revolutionary at all. But better to fail on the cheap than to fail expensively.

Can We See Two Decades into Telecom’s Future?
https://andoverintel.com/2025/04/22/can-we-see-two-decades-into-telecoms-future/ (Tue, 22 Apr 2025)

I went back over some of my own writing from a decade or two ago, and it made me wonder how much we could hope to uncover about the future of network infrastructure for service providers a decade or more from now. Everyone loves transformations; they generate interest for us and clicks for publications and advertisers. Can we expect one?

One thing we can say confidently is that there’s still more provider interest in mobile services than in wireline. That’s probably a good place to start, so will mobile drive the future network? I think it depends on what part of the network we’re talking about.

The recent interest in satellite-to-4G-phone service illustrates that one force in mobile networks is the desire to be connected everywhere. There are two prongs to this particular force: rural broadband and remote in-touch services. The former relates to the desire to project reasonable broadband quality to locations that are difficult to serve with traditional wireline or even FWA technology. The latter relates to both emergency calling in the wild and simply keeping in touch when you’re kind of off the grid.

Another force in mobile networking is the presumption that broadband capacity limits the ability of the Internet and OTT ecosystems to provide new services and features. Thus, having more mobile bits to play with is inherently a good thing, which is why so much of 5G hype centered on the greater bandwidth it could provide. Pushing more bits requires fundamental changes to not only mobile infrastructure but also mobile devices, and this of course enriches vendors.

Wireline, as I’ve pointed out, is generally less interesting to operators than mobile services, but there is a third force in this space, created by the combination of a gradual gain in the use of the Internet as a transport medium for VPNs (via SD-WAN or SASE) and the uncertain regulatory status of premium handling features on the Internet itself.

The future of service provider networking likely depends on how these three forces work for and against each other in the coming decades.

Let’s assume that the “everywhere” force dominates. This clearly further accentuates the interest in and profit from wireless, and likely focuses a lot of attention on 6G. Given the almost-universal (and rightful) discontent operators had with the ROI of 5G, we can expect this attention to take the form of explicit ROI pressure by the operators, pressure that will demand that either a convincing and significant new service revenue stream can be identified, or that the cost of the 6G transformation is minimized. Operators are skeptical about the former (once 5G-bitten, twice 6G-shy) and so are already saying they want 6G to be a software upgrade wherever possible. That means that any significant new equipment revenue would likely be limited to the radio itself, perhaps not even including other elements of the RAN subsystem.

I would expect that this would tend to direct 6G to “capacity” in some missions (like FWA) but perhaps more to the integration of satellite services and mobile services, improving on the symbiosis with 4G already coming in satellite broadband. There is potential for this to create a value-add (meaning incrementally profitable) service, which would help 6G’s credibility considerably.

Anything that makes FWA potentially better, particularly things that might raise the number of customers per unit area and the per-customer capacity, would shift deployment focus more toward FWA. The inability of mobile devices to meaningfully display content at resolutions above full HD (1920×1080) makes it hard to justify faster device-to-network connectivity, but in FWA missions even 8K video is a reasonable longer-term target.
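
As a rough illustration of why that matters, here’s a back-of-the-envelope sketch. The per-stream bitrates are assumed ballpark figures for compressed streaming video, not measurements, but they show how quickly an FWA household mix outruns a single phone-class stream.

```python
# Back-of-the-envelope only; the bitrates below are assumptions for illustration.
ASSUMED_STREAM_MBPS = {"1080p": 8, "4K": 25, "8K": 80}

def household_demand(streams):
    """Sum the assumed bitrates for concurrent streams, plus 25% headroom."""
    total = sum(ASSUMED_STREAM_MBPS[resolution] for resolution in streams)
    return round(total * 1.25, 1)

print(household_demand(["1080p"]))              # one phone-class stream: ~10 Mbps
print(household_demand(["8K", "4K", "1080p"]))  # an FWA household mix: ~141 Mbps
```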

Universal connectivity might also come to mean IoT connectivity. For IoT to step up, it seems essential that it be both inexpensive enough to adopt for basic sensor missions, and widespread enough to accommodate the sort of mobile IoT applications already emerging in transportation, even personal vehicles. Might a universal 6G mobile-and-satellite combo be a way to go there? Perhaps, if it could be structured to be inexpensive for the real missions it targets, and have limited or no risk of being arbitraged for missions served already by higher-priced mobile services.

The big risks in this evolution overall are the proven inadequacy of standards initiatives to address real-market conditions, and the risk of vengeful vendors pushing back on a process that leaves them limited opportunity. One of the largely unrecognized impacts of a software-6G framework is that there is virtually no opportunity for any vendor other than one of today’s wireless infrastructure giants (Ericsson, Huawei, Nokia) to gain much traction. This is the group that has dominated 5G despite operators expressing a desire for an “open” solution, and so whatever 5G technology has been deployed and is expected to be software-upgraded to 6G (with perhaps some new radio stuff injected) comes from these vendors. Operators admit that they don’t see much chance of buying 6G software from a new vendor to go into 5G hardware from one of these three. “We’d be multiplying integration and finger-pointing when we want to limit it,” one told me.

All of this tumult tends to collide with the goal of operators to converge all their services on a single infrastructure. This collision is created in no small part by the nature of the Internet’s bill-and-keep, net-neutral approach to services. Mobile services, from Release 5 of IMS two decades ago, have allowed for special handling of sessions to mobile devices. Might this encourage operators to think of their access networks as service-neutral, their core as an extension of “the Internet”, and try to push services out into the access side to avoid colliding with neutrality rules, which blow in the political wind? Could IMS, network slicing, or both somehow extend into wireline access? Could 6G help with that?

Another possibility some operators cite is repurposing the current MPLS VPN model. MPLS VPNs are created with substantially the same resources as operator Internet backbone services, and they don’t get neutrality push-back. Suppose MPLS were used for what we might call an “SD-WAN-net”, a backbone where business services run, separated from each other by the SD-WAN level of technology and from the Internet by MPLS? Would that still pass the regulatory sniff test? If it did, might it meet up with an IMS-ish structure in the access network of both mobile and wireline? Might something like that also support IoT? I think this is the question we need to be looking at if we cast our eyes two decades forward to look at service provider evolution, so I’ll touch on it later this week.

The Enterprise View of the Network of the Future
https://andoverintel.com/2025/04/17/the-enterprise-view-of-the-network-of-the-future/ (Thu, 17 Apr 2025)

What network models are enterprises looking at for the future? How might the network of 2028 differ from that of 2025? I got some information from 294 enterprises that offers some answers to these questions, but they also point out that there are many different drivers operating on networks over the next three years, and these drivers impact enterprises based on things like the vertical market they’re part of, the size of the company and their office sites, and the unit value of labor and the nature of work in each office. So one size won’t fit all here, but we can only deal with broad market questions based on averages.

Enterprises don’t see any specific shifts, over the next three years, in the missions that drive networking. If that’s true, then it’s likely that the same cost-driven network modernization policies that have been in force over the last decade (except during the COVID lockdowns) will continue to drive things. More for less, then, is the goal.

One area where there’s a lot of interest is branch or secondary site access. The traditional approach to this has been the MPLS VPN, but enterprises are very concerned about site access costs, and VPN access will, they say, cost as much as ten times the price of the same bandwidth delivered as business Internet. Yes, they admit that QoS and reliability might be better with the MPLS VPN, but they point out that companies depend on the Internet for direct prospect/customer connectivity already, and that mobile workers and those in home offices also depend on an Internet path back to enterprise applications.

Of our 294 enterprises, all say they already depend on the Internet to reach customers and prospects, and all say that mobile workers use the Internet to reach the company VPN. About a third say that some branch locations are connected via the Internet already, including those who use mobile-worker VPN access techniques for very small office locations.

Formal SD-WAN or SASE is harder to pin down; roughly a fifth of enterprises say they use it, but I’m not confident that all those who don’t say so belong in the “don’t use” category. For example, 147 of the 294, exactly half the total, commented on what site conditions would, to them, justify MPLS VPN connectivity, and why would they have a view here if they didn’t have to decide between MPLS VPNs and an Internet-based VPN connection somewhere?

With regard to those comments on site conditions, the primary metric enterprises say they look at is the number of workers. On average, companies suggest that an IP VPN isn’t cost-effective in sites with fewer than 17 workers, but the nature of the workers and the work seems to influence this. Companies whose remote sites are largely staffed with high-unit-value-of-labor professionals (healthcare, finance, etc.) set the decision point as low as 8 workers, and those whose office locations are staffed with people whose labor value is lower set it as high as 28. The presence of any custom network-connected devices (again, common in healthcare and finance) argues in favor of MPLS, too.
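
A minimal sketch of that decision rule, using only the thresholds quoted above (8, 17, and 28 workers); real site-by-site decisions obviously weigh far more factors, and the function and field names are purely illustrative.

```python
# Thresholds taken from the enterprise comments above; everything else is illustrative.
MIN_WORKERS_FOR_MPLS = {"high": 8, "average": 17, "low": 28}  # keyed by unit value of labor

def prefers_mpls(workers, labor_value="average", custom_devices=False):
    """Return True if the site profile leans toward an MPLS VPN connection."""
    if custom_devices:  # specialized network-connected gear argues for MPLS regardless
        return True
    return workers >= MIN_WORKERS_FOR_MPLS[labor_value]

print(prefers_mpls(12, "high"))                     # True: above the 8-worker line
print(prefers_mpls(20, "low"))                      # False: below the 28-worker line
print(prefers_mpls(5, "low", custom_devices=True))  # True: custom devices tip the decision
```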

Of the 294 companies, 208 say they could do more with SD-WAN and related technologies, meaning that there are sites where they believe it could be applied. The “why-wasn’t-it-done” comments center on simple inertia; enterprises tend not to change things unless there’s something specific to drive them to consider change. Here, there seems to be an indication that a driver is emerging. Among the 208, 78 say that they are “considering” initiating or expanding the displacement of MPLS VPNs, and all in this group cite a company drive to reduce network service costs. Most interestingly, of the 28 comments I’ve gotten since the latest tariff/market flap, 22 say that reconsideration of MPLS VPN usage is likely to come along. This suggests that financial uncertainty could generate interest in a preemptive cost-reduction push, even before any specific need for it has emerged.

AI is another complex topic, even beyond its network implications but especially within them. Going back to the first Andover Intel comments on AI in early 2023, enterprises weren’t really considering the issue at all. Within a year, they had divided into two camps, the AI-traffic optimists and pessimists, with roughly two-thirds in the first group and the remainder in the second. Even at this point, there wasn’t much specificity regarding what kind of traffic would be generated and where it would go.

In late 2024, most AI-traffic optimists said that the primary impact would be from self-hosting AI, and likely in the data center LAN. The optimist group had grown, at this point, to account for three-quarters of enterprises. The pessimist group either didn’t see self-hosting at all (they believed in cloud AI) or didn’t believe in large language models, generative AI, or extensive AI training. This group, while small, included a larger percentage of companies who’d actually adopted AI for significant business missions.

I think the pattern of viewpoints is due to the evolution in the level of actual AI experience, in the planning phase and then in deployment. Everyone comments on things that they know are new and think are interesting, but only when businesses start a real exploration can we expect the comments to mean much in a predictive sense. This is why surveys get things wrong so often; if you don’t validate actual familiarity in some way, you can bet that everyone will comment on any exciting new technology, but their comments will rarely matter more than what you’d get from a random encounter on the street.

The broadest comment from enterprises, made by 221 of 294, was that increasing the performance and reliability of the data center network was important. This group is larger than the number who explicitly cite AI as a driver to the data center LAN (133), and things like increased use of virtualization and greater traffic loads in general were cited, but equally important was the view that technology improvements should make it possible to increase capacity well beyond any (minimal) increases in cost. Modernization, then, is still an explicit driver of network change.

Enterprises didn’t have any significant number of comments on their broad networking goals, another indication that they aren’t seeing these goals changing. I think that even AI and SD-WAN are simply examples of things they hope to accommodate in the orderly more-for-less modernization. What might drive a more proactive, business-driven, model shift?

A business case, obviously. If we look only at the question of model-generative forces, we could speculate that the business case would have to create new network-connected workflows, which means a combination of new data and new data consumers. “New” here would mean previously unexploited, which likely means that the new data consumers would have to be drawn from the 40% of non-office-centric workers. New data would then mean information either not previously collected at all, or collected but not fully utilized, that associates with these workers. That would almost surely mean IoT data, or “telemetry” as some would call it.

OK, you’ve heard this from me before, but truth is truth. If we want to see a big jump in network and IT spending, we have to unlock a big new benefits pool. But there is perhaps a new wrinkle here. Of a separate group of 188 companies who reported use of IoT, 173 said they had a significant edge computing initiative dedicated to the control of real-time processes. Of that group, 73 said they collected all of this for analysis, 42 said they collected some, and the remaining 58 said they didn’t collect any of it except as a backup. Is all that data not being collected and analyzed truly useless? Could we examine it and look for productive uses? Might we then identify some new network missions? Worth thinking about.

The Evolution of “Non-Transactional” Flows and Applications
https://andoverintel.com/2025/04/16/the-evolution-of-non-transactional-flows-and-applications/ (Wed, 16 Apr 2025)

One of the biggest, and yet least-recognized, challenges enterprises face in software deployment these days is addressing non-transactional models of application workflow. We’ve spent decades understanding and popularizing online transaction processing (OLTP), in large part because for decades that was the only kind of application you found at the core of businesses. That’s changing today, in part because we’re changing what enterprises do with applications (and through them, with their workers, partners, and customers), and in part because we’re learning that transactional workflows can themselves sometimes benefit from non-transactional thinking.

An OLTP application is designed to automate or replicate a piece of a commercial process, something that in the old days might have been done through an exchange of paperwork. OLTP applications typically present some form of menu to allow the user to select the transaction they want, and then drive the user through a series of steps to complete it. An order, for example, might present a product page for selection (which might require a database inquiry for available products and quantities on hand), then, in response to the selection made, generate a form that obtains information like quantity, features (if selectable), ship-to, bill-to, etc. From this information, the application builds a transaction that then flows to one or more applications. In the old days, we’d have said that transactional applications fell into three categories: inquiry, update, and delete.
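
Here’s a minimal sketch of that pattern, with the application assembling a transaction and routing it by category; the handler and field names are purely illustrative, not any particular system’s API.

```python
# A toy OLTP dispatch: the "menu plus form" steps would assemble the transaction,
# and the application routes the completed transaction by type.
def handle_inquiry(txn): return f"lookup {txn['item']}"
def handle_update(txn): return f"update {txn['item']} qty={txn['qty']}"
def handle_delete(txn): return f"delete {txn['item']}"

HANDLERS = {"inquiry": handle_inquiry, "update": handle_update, "delete": handle_delete}

def run_transaction(txn):
    """Route a completed transaction to the right back-end process."""
    return HANDLERS[txn["type"]](txn)

order = {"type": "update", "item": "SKU-1001", "qty": 4}
print(run_transaction(order))  # -> update SKU-1001 qty=4
```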

From roughly the 1970s, contemporaneous with the explosion in OLTP, we saw another kind of workflow model, what’s often called the “event-driven” model. This came along in response to a need to recognize that some tasks had to be visualized as the handling of a series of events generated by an external source, each of which had to be handled in the context of the relationship with that source, the “state”. Thus, this is often called “state/event” handling. The approach emerged in the late 1960s in the handling of network protocols, including the old IBM Binary Synchronous and Systems Network Architecture protocols (Bisync and SNA) and later TCP/IP, and it has exploded with process control and IoT applications.

If you look at the missions associated with transactional and non-transactional workflows, you can see that one fundamental difference is the relationship between the source and application. In transactional flows, the application drives the context completely. In IoT or process event-driven systems, the presumption is that the source has an inherent context that the application has to synchronize with, or that both sides have such a context, one that then has to be unified. But how different are these, really? That’s the new question, created in no small part by the growth in the Internet and the cloud.

Both of these models, and in fact any application or process model that involves two parties connected through a flow of messages, are cooperative state/event systems. In many cases, including most transactional processes, it’s possible to simplify the workflow and state/event processing to simply watching for a sign that the other party has been disconnected or has somehow lost synchronization. Think of saying “What?” in a conversation, to attempt to regain contextual communication. But if you make the two sides of the process more complex, and handle them through more complex steps and with a more complex mode of communication, you start looking a lot like a state/event process. You have to run timers to tell you how long to wait for something, expect acknowledgment events, and so forth.
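
A toy sketch of that kind of state/event handling, with a timeout event standing in for the “how long do I wait” problem; the states, events, and actions are invented for illustration and don’t model any real protocol.

```python
# Each (state, event) pair maps to an action and a next state.
STATE_TABLE = {
    ("idle", "send_request"):       ("start_timer", "awaiting_ack"),
    ("awaiting_ack", "ack"):        ("stop_timer", "idle"),
    ("awaiting_ack", "timeout"):    ("resend", "awaiting_ack"),
    ("awaiting_ack", "disconnect"): ("cleanup", "idle"),
}

def step(state, event):
    """Return (action, next_state); unknown pairs are treated as protocol errors."""
    return STATE_TABLE.get((state, event), ("error", state))

state = "idle"
for event in ["send_request", "timeout", "ack"]:
    action, state = step(state, event)
    print(f"{event} -> {action}, now {state}")
```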

Then there are the new missions. Social-media applications demonstrated that chatting, whether pairwise or in a group, doesn’t fit a transactional model well. It does fit state/event. In fact, a lot of the cloud computing features relating to event processing came from tools created by social media companies or online content delivery companies. What’s happening because of this Internet/cloud symbiosis is a rethinking of how even transactional applications should be viewed. “We don’t see applications as front-end and back-end now, we see them as Internet applications and core data center applications, coupled,” one CIO told me. This reflects the fact that the Internet has changed how we “shop” or, more broadly, how we decide what we want to do in an online relationship. There’s a lot more time spent making choices than on executing the one we select. Think of your last Amazon purchase; you browse a lot of stuff, taking minutes or even hours. When you’re done, you add it to the cart and check out. Only that last piece is really “transactional”; the rest is online content browsing and research.

This application division, more dramatic in impact on software than a front/back-end division of application components, facilitates the use of cloud computing by dividing one transactional mission into two new missions—one of option browsing and decision support and one of database and business process management. It also lays the groundwork for how edge computing applications in real-time process management could be structured to facilitate the cloud-hosting of some elements, with or without a migration of cloud hosting outward toward individual users.

If you explore a complete real-time industrial or other process control application, you almost always find that it starts with a very latency-sensitive control loop for the actual process system control piece, and then concatenates one or more additional control loops to handle tasks related to, triggered by, but not synchronized with, the first loop. For example, we might have a local production line whose movement and actions create a local control loop. This loop, for some events, might have to signal for parts replenishment or goods removal, and this signal would be at a second level, one that involves a task not immediately linked to the original process control steps, but rather simply related to their result. For example, it might pull something from a local point of storage of material. That loop, if it draws that storage level too low, could then signal another loop, which might be a traditional transaction, for the ordering, shipment, receipt, and payment for the parts. The first loop is highly latency sensitive, the second somewhat to significantly less so (depending on the time required to move from local storage to the industrial process point), and the last not likely latency-sensitive at all. We could do loop 3 in the cloud, and perhaps loop 2, and almost surely not loop one, based on current cloud latency.
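
A sketch of that three-loop structure is below. The latency budgets and tier round-trip times are illustrative assumptions, chosen only to mirror the argument that tight loops stay local while looser loops can move to shared hosting.

```python
# Budgets and round-trip times are assumed for illustration, not measured.
LOOPS = [
    {"name": "process control", "budget_ms": 5},         # loop 1: line movement and actions
    {"name": "parts replenishment", "budget_ms": 500},   # loop 2: pull from local storage
    {"name": "reorder transaction", "budget_ms": 30_000} # loop 3: order, ship, pay
]
ASSUMED_RTT_MS = {"regional_cloud": 80, "metro_edge": 20, "on_premises": 1}

def place(loop):
    """Pick the most-shared tier whose assumed round trip fits the loop's budget."""
    for tier in ["regional_cloud", "metro_edge", "on_premises"]:
        if ASSUMED_RTT_MS[tier] <= loop["budget_ms"]:
            return tier
    return "on_premises"

for loop in LOOPS:
    print(loop["name"], "->", place(loop))
# process control -> on_premises; the other two loops fit the regional cloud.
```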

Even true process control applications have components that could be edge-hosted in the cloud. Transactions, once the decision-making front element is separated from the true database processing, are likely less demanding in terms of latency than the most stringent pieces of process control, making it likely they could be accommodated even more easily using a shared resource pool. The volume of these front-of-transaction events might be sufficient to develop satisfactory economy of scale at points further out, meaning closer to users/processes, than traditional cloud hosting. We could do more in “the cloud”, then.

The point here is that there is a realistic model for cloud/edge symbiosis, one that would accommodate any migration of hosting toward the point of process but would not require it. This, in my view, requires providers of edge-cloud services, vendors, analysts, and the media to forgo the usual “everything goes” approach, recognizing that it’s simply not possible to distribute shared hosting to the same latency distance from the controlled processes as premises hosting can achieve. Expecting too much is a good way of ensuring less than the optimum.

Is It Time to Rethink Netops?
https://andoverintel.com/2025/04/15/is-it-time-to-rethink-netops/ (Tue, 15 Apr 2025)

Enterprises have always managed their networks, but just how that’s done has always had its own twists and turns. The common thinking is expressed by the FCAPS acronym, meaning fault, configuration, accounting, performance, and security, and this is what we could call the “prescriptive” thinking. But enterprises themselves seem to recognize some higher-level issues, mostly relating to the relationship between QoS, QoE, and “fault”.

The emerging debate has, say 84 of the 389 enterprises I’ve gotten input from, created three loose camps representing primary approaches to ensuring proper network operations. The three are the QoE camp, the preemptive camp, and the prescriptive camp, and the way the jousting among these informal groups shapes up may end up determining not only how we manage networks but how we build them.

The most vocal of the groups, the QoE camp, has 32 enterprises who explicitly claim membership. This group believes that network management processes should be considered to be fault isolation processes, and that they’re invoked not because some fault is detected by the hardware but because a user complains. Networks, they reason, are about delivering quality of experience, and so it’s experiences that should be the focus of management activity. A tree falling in the wood, to this group, is nobody’s concern, so don’t bother listening for it. If somebody notices, and reports a negative consequence, you deal with that.

I think the reason this group is vocal is that this approach is favored by the community of technologists who believe in greater “network populism”, greater focus on line departments and less on technology. It certainly gets more CxO attention; of 183 comments on network management I’ve gotten from CxOs, 104 emphasized the need to focus on whether the network was fulfilling its mission, over focus on technical metrics. One CIO said “I don’t care what the latency or packet loss rate is, I care how many complaints I get.” This CIO hastens to say that doesn’t mean you ignore signs of trouble, only that you don’t step in with remediation that might go wrong until the signs start to impact the mission.

What seems to drive this view is the increased role that human error plays in network problems. Among the 32 enterprises who champion this position, all say that human error is a larger source of faults than hardware, software, or service problems. They reason that if netops teams are out there trying to tweak some obscure network parameter to hit an FCAPS goal, they’re likely to break something that users felt had been working fine. Of the 183 CxO comments, this view was held by 128, which is probably even more telling. I’d have to say that from what I hear, this is the dominant camp.

The competing camp? It’s the next one, the preemptive camp. This group, 24 in number among the enterprises claiming a preferred approach, might be considered a variant on the QoE approach. They say that, yes, the user experience alone should be the target netops aims to hit, and so yes, it should be complaint-driven. However, smart management says that not having a complaint is better than addressing one. Thus, you build a network not to be optimally cost-effective, but to be optimally complaint-prevented. Think overcapacity on links and redundancy in devices and you’ll save money in opex and make users (and management) happier.

One adherent to this view describes a “three-two-zero” approach. You always have three paths available from any device onward to its source of applications or data. You never have more than two devices transited in the path of the information, and the failure of any device or any pathway has zero chance of creating a user complaint. This means that you look at systemic health through preventive planning and maintenance, the “P” in FCAPS, perhaps. A fault in the network won’t hurt QoE, it will just turn some lights on or off, and you recognize a failure because the indication is inescapable. You fix it by swapping in a replacement for whatever went into a bad state. No need for diagnosis; it’s staring you in the face, and users aren’t impacted, so there’s little pressure to act in haste and mess up.
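
As a rough sketch of the rule, the snippet below checks a made-up topology for three node-disjoint paths and at most two transit devices on the shortest path (a fuller check would bound the length of every path, not just the shortest one).

```python
# Illustrative topology check for the "three-two-zero" idea; the graph is invented.
import networkx as nx

def meets_three_two(graph, device, app_source):
    """Three node-disjoint paths, and the shortest path uses at most two transit devices."""
    disjoint_paths = nx.node_connectivity(graph, device, app_source)
    transit_devices = nx.shortest_path_length(graph, device, app_source) - 1  # hops minus one
    return disjoint_paths >= 3 and transit_devices <= 2

G = nx.Graph()
# The access device "dev" reaches the data center "dc" through three switches.
G.add_edges_from([("dev", "sw1"), ("dev", "sw2"), ("dev", "sw3"),
                  ("sw1", "dc"), ("sw2", "dc"), ("sw3", "dc")])
print(meets_three_two(G, "dev", "dc"))  # True: three disjoint paths, one transit device each
```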

The remaining 28 are probably just explicit advocates of what the rest of the 389 who commented would say they do: prescriptive management, business as usual, based on the traditional rules like FCAPS. The approach here is traditional; you have management telemetry that provides insight into network conditions, and for which there’s a normal range of readings to expect. If something pushes outside that range, you take steps to bring it back. Simple.

The abstract organizational-political push for greater line influence over tech is an external, and perhaps the driving, force behind displacing this approach, but there’s also awareness of its issues inside netops groups. The problem is the same one users and CxOs cite, which is human error. Networks are very complex and getting more complex daily. One network professional said that in the fifteen years of their career, they saw the number of monitored variables increase from the dozens to many hundreds, and the number of parameters that could be set rise from “around a hundred” to “probably two or three thousand”. The level of interdependence of parameters has also grown, though nobody was comfortable quantifying how much. The point they raise instead is that moving “A” is much more likely to cause a flood of changes to “B” and beyond than it was in the past. Errors, then, are not just more likely, they’re almost inevitable.

Pilots call an understanding of the state of their aircraft overall, and its place in the real world, “situational awareness”. Netops people agree that good netops practices demand you have it, but that network reality in 2025 makes it hard to achieve. They’d hope that this is an issue AI could help with, but stress that autonomous action is a threat to the staff’s situational awareness. “Tell me what’s going on, tell me what seems the best steps to take and what their likely impact will be, and let me decide which and when. Then go forward stepwise, giving me a chance to override,” one netops pro suggests as an AI paradigm that would work. Seems logical to me.
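
Here’s a minimal sketch of that paradigm: an assistant proposes a finding, ordered steps, and expected impact, and each step waits for an operator go/no-go. The diagnosis and actions are placeholders, not a real AI or any vendor’s tool.

```python
def propose_plan(observation):
    """Stand-in for an AI assistant: explain the finding, then suggest ordered steps."""
    return {
        "finding": f"Elevated packet loss on {observation['link']}",
        "steps": ["drain traffic to backup path", "reset optics", "restore traffic"],
        "expected_impact": "brief reroute, no user-visible outage",
    }

def run_with_override(plan, approve_step):
    """Execute one step at a time, pausing for a human go/no-go before each."""
    for step in plan["steps"]:
        if not approve_step(step):
            print("operator override, stopping before:", step)
            return
        print("executing:", step)

plan = propose_plan({"link": "core-7 to agg-3"})
print(plan["finding"], "|", plan["expected_impact"])
run_with_override(plan, approve_step=lambda step: step != "reset optics")
# Runs the first step, then stops when the operator declines "reset optics".
```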

Impacts of the HPE Deal for Juniper: Up or Down?
https://andoverintel.com/2025/04/10/impacts-of-the-hpe-deal-for-juniper-up-or-down/ (Thu, 10 Apr 2025)

We’re still waiting for movement by someone on DoJ’s opposition to the HPE/Juniper deal. Meanwhile, the two companies, their customers, and their competitors are all gaming out the possible outcomes. That’s difficult because we can’t be sure why HPE or Juniper wanted the deal in the first place.

The general view in the market, and on Wall Street, is that HPE wanted Juniper’s AI. I’m more skeptical of this as time goes by. If Mist AI was the target and if DoJ is worried about the impact of the deal on the enterprise WiFi market, why would HPE not offer to divest that part of Aruba? In any case, it seems to me that buying Juniper to get Mist would be like buying a dozen eggs (at their inflated prices) to get some egg cartons.

I think that the Juniper deal is more likely to be HPE’s reaction to two forces. One is the force of commoditization. Everything in enterprise IT has been under price pressure, and that’s not going to stop now. In a commodity market, you need to maintain account control and sell as many pieces of a project as possible, or you’re locked into a price war. The other force is IBM. Here’s a company that is constantly underrated by the media, but their stock has been up in a down quarter, and they have the most strategic influence of any company according to my data. They also handle almost three-quarters of commercial transaction processing globally, and of course the ownership of transaction processing means owning the data and also owning what drives networking.

IBM doesn’t sell network gear any longer, so if we view computing, storage, and networking all collapsing into a commoditization black hole, then you can’t afford to have a missing piece. Juniper would give HPE a big footprint in a space where IBM has only nail clippings.

For Juniper, you have to ask why, if Mist AI is an asset worth buying a whole company over, they can’t overwhelm their traditional rival, Cisco, with it. The fact is that a commodity market doesn’t easily admit to feature differentiation at all, and network equipment has always been a case of Cisco versus not-Cisco, meaning that Juniper doesn’t have a realistic chance of gaining market share on Cisco, only gaining share in those who reject Cisco out of hand. The problem Juniper has is that it’s a lot easier for Cisco to gain not-Cisco market share, and a good way of doing that would be through increased strategic influence. For several decades, the real driver of enterprise network success has been the data center network, and data center network decisions are driven by the compute side, which is why IBM has so much influence. Cisco has at least some server position, but Juniper doesn’t. HPE, of course, does, and is second to IBM in influence there.

So, let’s assume all this is valid thinking. How do we game the possible outcomes?

OK, DoJ drops its objections, with or without concessions from HPE, or they lose the case. In this situation, HPE takes a big forward leap in terms of market importance. They don’t need to architecturally integrate Juniper; they’re added products for HPE to leverage in deals they control at the strategic level. They now have something IBM does not, and they’ll surely threaten IBM with it.

Who loses? IBM in the long term, because they will either have to buy a network player or use their influence to turn data center networks into a white-box lake, with no dominant strategic player. In the short term, Cisco, because HPE/IBM battles shift focus to something Cisco isn’t much of a player in, which is the compute space.

This might create another winner, Broadcom. Obviously, if IBM needs white-box supremacy in the data center, Broadcom chips win out. Similarly, any battle for influence that elevates the data center will elevate VMware. In fact, Broadcom might be a winner no matter how the deal works out, as I’ll get to.

Now for the other outcome: the merger is not approved, so what happens? HPE and Juniper both lose. The former now has less to fend off IBM on one side, and Dell and Broadcom/VMware on the other. They now have to make Aruba into a decided asset, which diverts them from the defense of the server space. The latter is now facing Cisco, IBM, and HPE/Aruba in a battle for influence, and that will be very difficult for them to win, or even bring to a draw.

Who wins? Cisco and IBM, obviously, but also Broadcom. Platform software like VMware’s stuff is part of the server/compute picture, and a strategic battle at that level helps them whether the battle is offensive or defensive. Not only that, I think that the future of networking really lies in virtual networks, and VMware has a long-standing NSX positioning there. Both Cisco and Juniper have virtual-network stuff, but both companies have been cautious lest they undermine the feature credibility of the real hardware they sell. Broadcom has no such concern.

What about enterprise AI? How might that be impacted (if at all) by the success or failure of the merger? I think that AI becomes more important if the deal goes through, because it magnifies the compute battle, a battle IBM knows is really about AI as part of the transaction workflow rather than AI as a personal copilot. If the deal fails, I don’t see HPE or Juniper driving enterprise AI any differently.

I think the deal should go through, not for reasons of market optimality but as a matter of law. I don’t think DoJ has presented a valid reason to stop the deal; the notion that somehow enterprise WiFi is threatened with monopoly is, to me, simply silly. It would be a shame to do something market-negative for a silly reason, but politics is politics after all, and silliness is not unknown there.

Is There a Way to Fight Telco Commoditization?
https://andoverintel.com/2025/04/09/is-there-a-way-to-fight-telco-commoditization/ (Wed, 09 Apr 2025)

Here’s a seemingly obvious truth for you; there’s no such thing as an infinite TAM. Any market can be saturated, and as saturation approaches all markets can expect to see a slowdown in growth rate. So it is with wireline broadband in general, and cable broadband in the US in particular, according to a Light Reading piece. The thing that I find interesting about these discussions is that people are happy to talk about the trend and seemingly reluctant to accept the cause. Is increased competition behind a decline in subscriber growth when everyone is reporting a decline? A zero-sum game with losers needs winners too, so the truth here should be obvious, as I said.

The wireline broadband market TAM, the total addressable market, tends to grow as the number of financially viable target households grows. This, in turn, is largely driven by a combination of population growth, perhaps more by the growth of the middle class, and the impact of any broadband subsidy programs. But not only does the TAM set a cap on a market, growth is almost always impacted by just how far along you are in the adoption curve. In the early 2000s, broadband Internet was in its infancy, and growth rates were high. As the market matured, not only did we pick off more and more of the incremental prospects, we picked the low apples first, the households with the highest tech literacy and the greatest willingness to pay.
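
The arithmetic of that curve is easy to sketch. The TAM, starting base, and adoption parameter below are arbitrary illustrations rather than estimates of any real market, but they show why growth rates fall as penetration approaches the cap.

```python
# Illustrative logistic S-curve only: assumed TAM and adoption parameter, no real data.
TAM = 100_000_000   # assumed financially viable households
r = 0.4             # assumed annual adoption-rate parameter
subs = 2_000_000    # assumed starting subscriber base

for year in range(1, 11):
    prev = subs
    growth = r * prev * (1 - prev / TAM)  # growth slows as subscribers approach the TAM
    subs = prev + growth
    print(f"year {year:2d}: {subs / 1e6:6.1f}M subscribers, growth {growth / prev:.1%}")
```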

Where we are today in broadband reminds me of where we were with voice in the 1980s. Technology was making bits cheaper, and so there was growing competition for the long-distance calling space because it dodged the high cost of providing access. Voice didn’t consume many bits, so transporting it in aggregate was cheaper by the day. To break out of this, what was needed was a broadly used service that consumed a lot of bits, something that people would value. The Internet gave us that, but since then what’s happened? We’ve run out of opportunity for revenue growth there too.

What competition has really done in broadband is to eliminate any easy path to increasing average revenue per user, by charging more for service. I had a special rate on my home broadband, negotiated by a threat to change providers, and as its expiration approached, the operator contacted me spontaneously and extended the offer. So, no new households spring up, and no willingness to pay a higher price for services either. Consumers cluster at the low end of the service plan inventory because they don’t need more. I’d bet that my home Internet is twice as fast as I actually need, and I’m at the low end of the price/capacity spectrum. Along the way, in fact, I got a 50% increase in my speed, and didn’t even notice a change.

This is the issue operators face, the thing we call “commoditization”. Almost everyone does things online today that they didn’t do before; I stream all the content I consume at home, for example, and yet this increase in usage hasn’t driven me to buy top-tier services. This, I believe, means that operators have missed an important truth, which is that the potential revenues from the increased bandwidth needed for new online services will never offset competitive- and technology-driven reductions in revenue per bit. If you want to make money on new services, you can’t just carry them, you have to build them, and offer them.

This, I think, is why operators’ superficial strategies for things like IoT have failed to do what they hoped, and why most 5G hype turned out to be just that…hype. Connectivity cannot be made valuable in the abstract; it’s what you do with it that matters. Operators who want to focus on the former and let others handle the latter surrender all their pricing power and any opportunity to benefit from new applications, as they’ve been doing for decades.

So does that mean the separate-subsidiary community is right? There’s a conference on it today, but I don’t think it can come to any useful conclusion. First, would regulators allow telcos to fund such a subsidiary fully? That was forbidden in the past. Even if they did, all this allows is for a new tortoise to enter the race with a bunch of experienced hares. The OTT community knows how to do value-added services. If there are people in the telcos who also know, where have they been hiding? Who staffs the subsidiary? Will they try to hire everyone from the outside? No, they’ll pull key people from inside, from the pool of connectivity enthusiasts. Will they then be competitive? You tell me; I know my answer, and I’ll bet you do too.

This is the classic dilemma of the “smart versus dumb” network. I debated this with David Isenberg at a BCR event back in 2004. I took the “smart” side and won the debate in a setup that was designed to go the other way. But we ended up with dumb networks, so did I lose in the end? Yes, in one sense, because we went the dumb-network route. No, in another. What I argued was that there was not enough profit in dumb networks to sustain investment, and to roughly quote my comment then: “There is no chance whatsoever in my lifetime that we’ll re-regulate. If we can’t change fundamental policies and regulations, what chance do we have to repeal basic economics? We’d have to re-regulate to retain capital credibility for the carrier industry in the dumb network scenario.” Isn’t that a pretty good description of the threat facing telecom today?

I also had an impromptu debate with an FCC Chairman at another event, and he said that the FCC was responsible, overall, for the health of the industry. I agree, and I also agree that the dumb network approach helped create the pace of innovation we’ve seen from the Internet. But could regulation have kept the whole industry healthy and still innovative? I think so. I still believe that barring settlement for premium services and handling, an element in most net neutrality policies, hurt the industry overall, and I know this was being discussed almost a decade before my 2004 debate. But most of all, I believe that the fact that regulatory policy shifts in the political winds has hurt.

The health of the industry should have been the regulatory goal, and it was not. It can’t be now; you can’t stuff the deregulation genie back in the bottle. Could the industry work it out? There was, a couple of decades ago, an example of a “standards” initiative that I think is the model that could work. The IPSphere Forum (IPSF) had no membership fees, was run by a “service provider council” whose meetings were open only to providers themselves, and that addressed services and then infrastructure rather than the other way around. The climate of the time wasn’t in favor of the kind of revolutionary thinking that it generated, but now? It might work. In fact, it might be the only sort of thing that could.

Politics, Tariffs, and Telecom https://andoverintel.com/2025/04/08/politics-tariffs-and-telecom/ Tue, 08 Apr 2025 11:40:37 +0000 https://andoverintel.com/?p=6081 There’s a lot of comment these days that everything has changed, that there’s a new global order that cuts across economics and politics. It’s all likely true, though how profoundly it will change things is still a bit uncertain. Light Reading certainly has it correct when they say that “Nokia’s new boss must tackle Trump tariffs and mobile uncertainty.” There’s even more for Justin Hotard to worry about, though. Let’s take things in order.

Tariffs could certainly complicate things for global network operators, and for their vendors. I’ve seen estimates that network equipment could be expected to rise in cost by about 7%, but some say as little as 4% and some as much as 15%. It depends on what you’re buying, who you’re getting it from, and where it’s made and installed. It also depends on how things fall out with the tariffs over time, which depends on the goal in establishing them. If the goal is to negotiate, then the impact may be smaller, and a few vendors and telcos even tell me that costs could decline if the threat of US tariffs results in other tariffs being cut. This happy outcome isn’t expected by many. If the goal is to boost US manufacturing, then the impact could be real and long-lived.

This hits operators and their key vendors at a bad time. As I’ve noted in recent blogs, network operators tend to have their largest capital budgets during periods of standards-driven transformation. 5G has been such a period, but it’s pretty clear that 5G has largely run its course, and that it was over-hyped in terms of profit impact, which limits operators’ tolerance for massive capex. The passing of the 5G influence tends to put operators back into the profit-per-bit capex starvation they’d faced before 5G. And 6G, whatever it is, may not save them. Operators want it to be pure software, and of course vendors want it to fork-lift everything into the trash.

As someone on LinkedIn recently suggested in a comment on one of my blogs, it may be that operators have to face the fact that they’re selling a commodity. True, but those same operators, their vendors, their customers, and the vast sea of OTT players have to realize that most commodity providers aren’t contending with unpredictable new demand sources, sources that a lot of tech, and tech users, depend on to continue the flood of new Internet-based applications, content, and missions.

The big question, I believe, is whether the impact of tariffs will have the effect of reducing operator willingness to capitalize new connection capacity in the face of further increases in cost and thus declines in ROI. If that happens, then the Internet experience may not stand up to any significant new demand, which could limit a lot of consumer and business spending that’s not strictly related to the services themselves. Do I need new TVs as much if streaming video gets glitchier instead of better? How about IoT, autonomous vehicles? The Internet is the new central nervous system of the global economy, and any limits in its growth pose a broad and significant threat.
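
To illustrate that ROI squeeze, here’s a small sketch that applies the tariff scenarios cited above (4%, 7%, 15%) to a hypothetical capacity project with flat revenue. Every number here is an assumption chosen for illustration, not a forecast of any operator’s economics.

```python
# Illustrative sketch, not a forecast: how a tariff-driven equipment
# cost increase compresses the return on new capacity when the
# revenue that capacity earns stays flat.

BASE_CAPEX = 100.0       # cost of a capacity upgrade, arbitrary units
ANNUAL_REVENUE = 18.0    # assumed flat annual revenue the upgrade earns
ANNUAL_OPEX = 6.0        # assumed annual operating cost

for uplift in (0.00, 0.04, 0.07, 0.15):   # tariff scenarios from the text
    capex = BASE_CAPEX * (1 + uplift)
    annual_return = (ANNUAL_REVENUE - ANNUAL_OPEX) / capex
    payback_years = capex / (ANNUAL_REVENUE - ANNUAL_OPEX)
    print(f"tariff {uplift:>4.0%}: simple ROI {annual_return:.1%}, "
          f"payback {payback_years:.1f} years")

# Even the mid-range 7% scenario pushes payback out noticeably; at 15%,
# marginal capacity projects may no longer clear an operator's hurdle rate.
```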

What 6G needs to do to break this cycle is to advance the Internet ecosystem overall, rather than to create some new supply-side vision of infrastructure whose connection with demand, and with global value, is tenuous to say the least. Yes, it’s better that a 6G that’s yet another standard-writer’s intellectual exercise is limited to software to manage its cost, but utility maximization would be the best outcome. But…but…how do you do that? Operators really want to hunker down in the comfort of the bit-pushing business, and if they don’t embrace adding value, only cutting costs, then it’s impossible to fix the problem in the long run, and tariff increases will hurt.

Why couldn’t the 6G initiative, the 3GPP, fix things? Well, why didn’t they fix 5G, or 4G? Truth be told, standards in the telecommunications space have always been an extreme example of supply-side built-it-and-they-will-come-ism. Toss in all the political tensions of the world, including tariffs, and do you have a formula for a new level of enlightened cooperation? It sure doesn’t seem likely to me, and I’ve been a part of a number of international standards initiatives.

There should be no question that the Internet, as the central nervous system I described, is the foundation of global technology, and likely the foundation for the evolution of technology as an influence in all our lives. I think most would agree with this, and yet we build the Internet up from concepts that go back decades, perhaps as far as a century, and with business models at least that old. An agile top layer on a glacial core isn’t agile any longer, and so we need to be planning broadband infrastructure from the ground up, toward the service future we want to achieve.

How about “structural separation”? This has also gotten some attention on LinkedIn recently. My concern here is that, post-deregulation in the US, we ended up with that very concept, under the guise of a “fully separate subsidiary” for “information services.” Where is all that today? The short answer is that regulatory uncertainties killed it off; if you impose a wholesale requirement on an industry, you have an impact on its planning that’s similar to the impact a country has on business when it imposes nationalization. It’s hard to go back, because you’ve demonstrated you’re willing to do something destructive. Telcos are still trying to shake off the impact of rules that were intended to save others from predatory telco practices, when today it’s the telcos we’re trying to save. We could have planned better then; it probably would have made things easier today.

Most telcos agree on an ideal infrastructure model, what I’ve called the “metro” approach. You set up major nodes in key metro areas (roughly 250 in the US, and another 800 in the rest of the world), multi-home or even mesh them with an all-optical core, then aggregate access onto fiber to reach users. In these nodes, you inject any high-level features you have, and also provide interconnect with other operator networks. This model could reduce cost and facilitate higher-level service participation, but it doesn’t depend on that participation, so it might be easier for connection-centric telco executives to buy into.
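
For readers who think in structures rather than diagrams, here’s a toy Python sketch of that metro model. The node names, feature labels, and fields are hypothetical illustrations of mine, not anything operators or standards bodies have specified.

```python
# A toy sketch of the "metro" model described above: major metro nodes
# meshed over an all-optical core, access aggregated onto fiber, and
# higher-level features and interconnect injected at the metro node.
# Names, counts, and fields are illustrative assumptions, not a design.
# (Requires Python 3.9+ for the built-in generic annotations.)
from dataclasses import dataclass, field
from itertools import combinations

@dataclass
class MetroNode:
    name: str
    access_feeds: list[str] = field(default_factory=list)   # fiber aggregation
    features: list[str] = field(default_factory=list)       # injected services
    interconnects: list[str] = field(default_factory=list)  # other operators

# Roughly 250 such nodes in the US in the model; three stand in here.
metros = [MetroNode(n) for n in ("NYC", "Chicago", "Dallas")]

# Multi-home/mesh the metro nodes over the all-optical core.
optical_core = {frozenset((a.name, b.name)) for a, b in combinations(metros, 2)}

# Aggregate access and inject features/interconnect at the node.
metros[0].access_feeds.append("FTTH-cluster-01")
metros[0].features.append("edge-hosting")
metros[0].interconnects.append("cloud-provider-onramp")

print(len(optical_core), "core links for", len(metros), "metro nodes")
```

The design point is that the mesh lives among a few hundred metro nodes rather than thousands of edge offices, which keeps the optical core simple and puts feature injection and interconnect where traffic already concentrates.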

Communications is, at its heart, a cooperative process. In a world where cooperation may be getting harder to achieve, it’s going to take some enlightened effort to get things onto an optimum track, and keep them there.

Why Do Operators Always Seem to Get Openness Wrong? https://andoverintel.com/2025/04/03/why-do-operators-always-seem-to-get-openness-wrong/ Thu, 03 Apr 2025 11:46:38 +0000 https://andoverintel.com/?p=6079 I’d bet you that if technology buyers were asked for a single word to describe what they’re looking for in tech these days, the word would be “open”. Why, then, do we keep hearing that network operators are disappointed in the progress of things like Open RAN? Why do enterprises, in contrast, seem to adopt open technology so easily? There are a lot of factors here we need to explore.

The most obvious factor to consider is the credibility of the product source and the source-risk tolerance of the buyer. In general, operators tend to keep infrastructure longer than enterprises, so they have to worry more about whether their vendor will stay in business and keep supporting their stuff. Of the 88 operators who have commented on this topic, all say that they require a financially and commercially stable source. Of the 411 enterprises who offered comments, only 69 say that, and 104 say that it’s a “secondary” factor for them.

The “why” of this is important, though. Operators say that one of the goals of open technology is to multiply the number of sources, whereas enterprises presume that their source for open technology will be a vendor they’re already familiar with. Thus, enterprises presume there will be no new source for open technology, while operators think developing one is a major goal of openness, and they measure open offerings against that expectation.

Even that raises a further “why?”, and here again we find a difference in the operator and enterprise views. Operators want new sources because they have believed for years that it’s the only way for them to reduce capex, and every operator says they need to manage capex. In contrast, only a bit more than a third of enterprises even mention that, so while enterprises may also believe that vendors overcharge them, they’re comfortable with competition among their vendors, or with the notion of beating them up over pricing.

The final “why?” comes in here. Enterprises are in general looking for an open technology as a point product. A server, platform software, whatever. Operators are looking for an open solution, usually to a new standard or regulatory requirement, that will mandate multiple product elements. If all these elements come from a pool of sources, there’s a massive integration requirement that operators simply don’t want to deal with. Enterprises rarely even mention this issue, again perhaps because they presume that their current vendors will be the source of open technology. Think Red Hat.

Another factor in the openness challenge, one some operators point out, is that a solution that doesn’t require integration, or that carries less integration risk, is necessarily based on some widely recognized model, like a standard. Operators have generally faced a standards-driven upgrade requirement every five to ten years, and some (as I indicated in yesterday’s blog) have begun to take an interest in long-cycle planning for infrastructure changes even absent a driving standard. In contrast, very few enterprises talk about any preemptive modernization of their IT systems. The need for that has been largely erased by decades of stable client/server and virtualization thinking. The architecture of applications still raises questions about hybrid use of the cloud, but even there the framework is maturing.

There are also some supply-side considerations to think about here. For several decades, almost all network operator infrastructure change has been driven by mobile standards, given that mobile services have long been the bright spot in profits and the focus of competition. This space has been dominated by a small number of vendors (Ericsson, Huawei, and Nokia, primarily, with Huawei being replaced in many areas due to government pressure). Enterprise IT, in contrast, has a dozen major vendors or more. The chances of a “new” vendor entering the enterprise IT space are far better than the chances of a new vendor entering the mobile infrastructure space. In most cases, Open RAN players that aren’t one of the big vendors are partial-solution players, which gets buyers back to the integration problem.

The most significant new factor in the picture differs for the enterprise and operator communities, too. For enterprises, it’s the cloud and AI, and perhaps more generally the role of as-a-service versus in-house. For operators, it’s the determination to push 6G standardization as a software-only strategy, which would tend to make it easier for new players to enter the space.

What operators are asking for is an infrastructure model that, in the end, is more like that of the enterprise: give us a stable, long-term equipment model and then augment it with the necessary feature advances through software, and perhaps through feature-as-a-service. In all, it could create a revolution in operator infrastructure, which is perhaps why the whole Open RAN and open-model 5G effort should have focused on that goal from the first. Which is also why 6G needs to follow operator demands here. Which it may, or may not, and we can see some hints of why by looking at Open RAN and 5G.

One contaminating issue was that the 3GPP 5G specifications didn’t really think about creating a universal hardware framework, one that could even support 5G fully, much less endure beyond it. There was an unhealthy mix of trying to preserve the general 4G-LTE model and trying to converge with wireline infrastructure. We all know the old saw about serving two masters, and making it three might well have stalled 5G for years more.

That was the second contaminating issue. The whole 5G standardization process was simply too lengthy, and that’s typical of these international standards initiatives. Another old saw is “The IQ of any group of people is equal to the IQ of the dumbest, divided by the number of people.” Consensus is not only time-consuming, but it’s an enemy of innovation.

The third issue was Wall Street and the media. The Street values stocks by quarterly earnings, and something that’s aiming at 10-year relevance is hardly going to come to fruition in three months. Something whose value is boring financial credibility isn’t going to generate any clicks on stories either, and those clicks not only help ad-sponsored media but also the stock price of vendors and operators.

All of these issues will impact 6G evolution as much as they did 5G, and you also have to wonder whether the vendors, who tend to dominate all standards processes by the simple mechanism of staffing them with their own people, would accede to a standard whose goal was to reduce operator spending on equipment. That’s something we should watch as 6G evolves, but we probably won’t know the final picture on 6G until around 2029.
