Andover Intel (https://andoverintel.com): All the facts, Always True

Network Evolution Follows Data Flow Evolution
https://andoverintel.com/2026/04/23/network-evolution-follows-data-flow-evolution/ (Thu, 23 Apr 2026)

It’s likely not a surprise to anyone who has followed my views to hear that I don’t think AI is going to revolutionize the WAN. The data center network, yes. It may surprise some to hear me say that AI isn’t the only force acting on the data center, though, and that the other perhaps-major force on the data center would likely revolutionize the WAN too. This other force might also be the major driver of AI, which might kind of close the circle here. Complicated, huh? So let’s get to it.

Technology isn’t revolutionary in itself; it’s what we do with it that is. That’s true of network impact too. Networks carry data and support data movement, so if we want to track network change, we have to look at the changes in data movement and their causes. Enterprises have already been tracking one such change, one that AI applications are now amplifying: “horizontalization”.

In the old days of monolithic applications, everything followed a nice input-process-output flow model. A user at a terminal generated a transaction, which flowed to the computer system running their application, which generated some database activity, and then returned something to that user. You could plot this top-down, which is what led to it being characterized as a “vertical” flow.

In the 1990s, we saw some kinks in this simple model of data flow, emerging from a combination of the trend to componentize applications and the trend to integrate applications by creating paths between them for shared action support. This, of course, was “horizontal” traffic. One enterprise noted that, within their company, they’d seen vertical traffic roughly double in the last ten years, but horizontal traffic had exploded by twelve times.

The force behind all of this was the need to achieve more productivity by improving information integration, quality, and distribution. Early IT was a bunch of application silos, and horizontal traffic was the result of recognizing that businesses didn’t run on silos, but on the whole of their interactions with customers, suppliers, regulators and government agencies, and so forth. There’s strength in numbers, it’s said, and though the saying was talking about a different kind of numbers, it’s true for the business kind as well.

AI continues this trend, because what businesses want from AI is a set of agents that can do things that would otherwise require human action. That doesn’t necessarily mean closed-loop autonomy, but it does mean that the agents have to assimilate what the humans they augment or replace know or can know. More horizontalism, in short, and the next and biggest force creates the most horizontal traffic of all.

Real-time services have a couple of faces of their own. One face, the most traditional, is the force of IT proximity. For seventy years, progressive growth in IT commitment and IT spending has been driven by bringing IT closer to workers. We started with mainframe computers in glass rooms tended by acolytes, and moved to personal computers and even smartphones sitting on nearly everyone’s desk. We used to bring work to the mainframes, but we ended up bringing IT close to the work, making it a part of working.

But what about the no-desk situations? That’s the second of our faces of real-time services. It might seem like an evolution of the current trends, but it’s aiming at a different kind of work and workers. We’ve managed to enhance the productivity of those who could work through IT, but not those who’d need IT to work through them. Real-world systems are ones that make IT a cooperative piece of real-world processes, even ones involving people, and so they have to know about the real world, both conditions and rules. The new information relationships needed are a potential driver for AI, and of course new information relationships mean new information flows, which means network requirements.

I think that the initial focus of real-time services will be one of facility optimization, working to improve process control within a plant or campus. Most of the information needed for this will surely have been collected by earlier systems, so this is more a compute or compute-and-AI manipulation of a world model (digital twin).

The orderly expansion of this first step would take things to larger facilities, where fixed conveyance (assembly lines, belts, and the like) inevitably gives way to moving vehicles of some sort, and where it’s more and more likely that human labor has to be integrated. Both these introductions require new data to be collected, and in the case of the worker additions, they also likely require new information to be distributed to synchronize human and automated elements.

Running a world model, even for a “local world,” is computationally intensive and generates horizontal data flows within the model, which is likely to be distributed both in terms of hosting elements and in terms of physical location. Thus, the initial impact of our real-time evolution will be sharp growth in horizontal traffic. Yes, there will be new telemetry and control flows, but these will likely stay on private facilities for now.

What expands us outward is the tightening of relationships between world models and human workers. It’s probably inevitable that, to the extent this trend is even recognized, it gets twisted into robots and humans working side by side, but that’s a development that neither enterprises nor I believe will come along before about 2030. Initially, it’s likely to take the form of guidance, relying first on a phone/tablet display and then on augmented/virtual reality. Eventually it may involve simply timing mechanical movement to synchronize it with human movement, but all of this will mean analysis of video images to determine what the human workers are doing and where they are in the cooperative process sequence.

This process of coordination between humans and machines is what ultimately generates the network traffic, even in the WAN. Some jobs, like almost all of them in the public safety sector and the military, are long on the need to collect “awareness augmentation” information, both to alert a person to conditions they need to handle and to populate a model that enables broader process control, like a “smart city” environment.

What this all means for data centers and networking is what will drive the future of both, and that’s next week’s topic!

The Basis for Low-Latency Service Opportunity
https://andoverintel.com/2026/04/22/the-basis-for-low-latency-service-opportunity/ (Wed, 22 Apr 2026)

The real change that the Internet brought about, in network service terms, is a shift from human consumption of connectivity to human consumption of applications through connections. If low-latency services are the future of telecom, then what do the “services” serve, in terms of applications, and how do the applications develop? That’s the key question that any talk of a new telco spending initiative has to answer, so let’s try to answer it based on carrier comment.

First, the only low-latency services that telcos themselves mention are those related to some form of real-time process control. Process control means control of things that take action in the real world, that do things. This category divides into two main segments: processes bound to fixed machinery and processes linked to mobile elements, which might be mechanical, human, or both. There is general agreement among telcos that bound processes are not likely to generate real-time service opportunities for them unless mobile elements are introduced in some way, so it’s fair to say that mobility of process elements is an essential feature of applications that credibly consume low-latency services.

The challenge with mobility is that there’s generally an inverse relationship between the latency requirements and the geographic scope of the process. The further a process is spread, the greater the mechanical delay associated with movement within it, and thus (if we neglect, for this discussion, the issue of autonomous vehicles, which we’ll get to later) the less valuable low latency is. If I want to coordinate the dispatch of something between facilities that will take minutes or more to complete, there’s little to be gained by asking for it with microseconds-level latency. And if a process is confined to a single facility, the control of it can be locally hosted there, eliminating the need for a connection service; it’s practical to replace it with WiFi or private cellular.
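
To put rough numbers on that inverse relationship, here’s a back-of-the-envelope sketch. The fiber propagation figure is the usual two-thirds-of-light-speed approximation; the process timescales and the 1% network budget are purely illustrative assumptions, not survey data.

```python
# Rough illustration of the inverse relationship described above: the
# slower (and more spread-out) a physical process is, the less a
# low-latency connection is worth. Assumes light in fiber covers about
# 200 km per millisecond (~2/3 c); process timescales are invented examples.

FIBER_KM_PER_MS = 200.0
NETWORK_SHARE = 0.01  # let the network consume at most 1% of the process time

cases = [
    ("robot arm motion step", 0.005),     # 5 ms
    ("warehouse vehicle maneuver", 1.0),  # 1 s
    ("inter-facility dispatch", 600.0),   # 10 min
]

for name, process_s in cases:
    budget_ms = process_s * 1000.0 * NETWORK_SHARE
    max_km = budget_ms * FIBER_KM_PER_MS / 2.0  # the round trip halves the reach
    print(f"{name}: RTT budget {budget_ms:,.2f} ms -> host within ~{max_km:,.0f} km")
```

A 5 ms motion step forces hosting within a few kilometers, while a ten-minute dispatch would tolerate any terrestrial latency, which is exactly why widely spread processes don’t credibly consume low-latency services.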

Most process control apps, according to enterprises, are hosted on their own process controllers (which today are most often “embedded control” or “real-time” systems) local to the processes themselves. This is true of facility-mobile applications like warehouse stocking and removal, as well as bound processes, simply because it gives enterprises the most control and because the suppliers of the mechanical elements of these processes often also provide the applications and control hosting tools, which expect local connectivity. Telcos think that the evolution of these applications to include more mobile elements and greater geographic scope would create the opportunities for low-latency services, but enterprises are more skeptical.

This gets us to the notion of autonomous vehicles. Enterprises say that they would not expect to see the ROI in creating autonomous transport vehicles for passage between facilities absent any broad autonomous vehicle support. In most cases, the cost of a driver isn’t a large enough component of total cost of the process elements to make displacing human drivers worthwhile, especially given that it would mean replacing the current fleet elements. Today, they say, mobile process elements tend to be introduced into new facilities, as a part of the process of opening one and equipping it. Given that, it’s not surprising that enterprises suggest it could be smarter to aggregate related processes into a larger facility controlled by and connected with their own resources.

It’s hard to say how realistic this is, though. It is true that cloud computing is generally less economical than self-hosting; it’s justified primarily where demand is too variable to permit cost-effective self-hosting. Might that also be true for edge hosting services, and thus justify low-latency connections to them? This is another place where there’s a diverging viewpoint in the market, this time both among telcos and among enterprises.

The “cloud-like” perspective is that if somebody deployed edge hosting, applications to exploit it would evolve; absent the proactive deployment, there’s no reason for enterprises to plan such applications, since they’d lack a mechanism to run them. The “cloud-unlike” perspective says that the cloud worked because self-hosting highly variable applications was the only option before the cloud, so applications that could benefit from cloud hosting already existed; the latent opportunity for cloud services was real. For a low-latency WAN service opportunity to be real, a whole distributed and coordinated multi-process relationship, whose control the service could host and connect, would already have to exist. How would that come to be supported?

You can see sense, and value, in each of these approaches, but I think you can also see the difference: cloud hosting of highly variable application elements could be justified by simple economy of scale, but what about edge hosting? Real-time control is hard to “schedule” resources for. You have them or you don’t, and if you don’t, what happens to the control process? This is way more complex to analyze, which is why there’s a debate.

Complicating the debate, for the telcos, is the issue of OTT competition, meaning at least the cloud providers but perhaps others as well. One telco planner told me that his company believes that electrical utilities and even local governments might get into the edge hosting business. The cloud providers have taken steps to make their cloud middleware tools available on customer-owned servers, which means that they can encourage enterprises to build applications around a cloud model, making them easier to migrate to future edge services. The same telco planner says that in any event, metro-level latency might well fit with current cloud hosting, at least for companies proximate to current cloud hosting points. In all, there’s a lot of credible competition for edge hosting, so why not let someone else do that, and simply connect that which emerges?

Because, others say, it might take a long time to emerge, time telcos might not have. Furthermore, since telcos have no experience with nor credibility in the application-hosting market, couldn’t those who develop it do so while minimizing the low-latency service opportunity? Chip improvements, in cost and performance, could well make it possible to build more intelligence into mobile devices, and we already know that autonomous vehicles that need connectivity all the time to stay operating pose a major risk that regulators could use as an excuse to bar their operation. How long until connectivity can be fully guaranteed, fully trusted, at any latency? My Internet was down yesterday for a brief time, long enough to have caused a lot of crashes or traffic problems.

I think we know that an event-driven, model-based (digital-twin world-model) structure is what’s needed to make real-time process control work. I think we know that the components of the application would be functions whose inputs are the model element needed and the event triggering it. I think we know that this makes the application naturally distributable, which means we can host a piece where the constraints like cost and latency are optimally balanced. What we don’t know is how this gets started so as to maximize the capabilities for all concerned, and since this future is an ecosystem as complex as any we’ve attempted to build, we need to get on the stick and figure that out.
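
To make that component shape concrete, here’s a minimal sketch of the kind of function this implies: a stateless handler fed a triggering event plus the slice of the world model it needs. Every name here is a hypothetical illustration, not a reference to any real product or standard.

```python
# Sketch of an event-driven, digital-twin-style component: each function
# takes a triggering event and the model element it needs, and returns the
# updated element plus any follow-on events. All names are hypothetical.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Event:
    kind: str      # e.g. "vehicle.position"
    payload: dict

@dataclass
class ModelElement:
    element_id: str
    state: dict    # the twin's current view of this real-world element

Handler = Callable[[Event, ModelElement], tuple[ModelElement, list[Event]]]

def vehicle_position_handler(event: Event, elem: ModelElement):
    """Record a reported position; emit a reroute event if off the planned path."""
    elem.state["position"] = event.payload["position"]
    follow_ons = []
    if event.payload["position"] not in elem.state.get("planned_path", []):
        follow_ons.append(Event("vehicle.reroute", {"id": elem.element_id}))
    return elem, follow_ons

# Because a handler touches only its event and its model slice, each one can
# be hosted wherever the cost/latency balance dictates, which is the
# "naturally distributable" property argued for above.
```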

Time to Stop the “Gs” in Mobile Services
https://andoverintel.com/2026/04/21/time-to-stop-the-gs-in-mobile-services/ (Tue, 21 Apr 2026)

There’s a growing polarization of views in the mobile infrastructure and standards area. On the one hand, we have a group who say that 6G will be the “next wave” in mobile infrastructure, creating a burst of telco spending on network equipment. On the other, there’s an increasingly vocal group who not only don’t see such a burst, but don’t really see 6G, or any other “Gs”, at all. Can we get to the bottom of the story by looking at telco comments over the last six months? Let’s see, and you can also find some interesting comment in this Light Reading story.

I have 88 telco comments on mobile evolution made to Andover Intel since 2024, and 81 of those telcos have commented on 6G in the last six months. Of the latter group, 12 say that 6G will create a burst of spending, 38 that there will be a bubble of spending, 23 see no significant change in spending, and 8 don’t think there will even really be a 6G, or any such organized phasing of mobile standards, for decades at least. Why the division?

The biggest factor is the set of drivers behind any new investment. All of the 12 who see a 6G burst believe that low-latency applications will generate a “significant” or even “vast” new pool of telco revenue. Another 31, largely from our “bubble of spending” group, think that “some” new low-latency opportunity will emerge, and the rest say that they believe there will be no real new service opportunities emerging from 6G at all. The group that sees no actual, meaningful new mobile standards generations at all believes that no such opportunity will ever emerge.

The problem here is that even those who believe in low-latency services don’t see them becoming a major market. They point out that consumers, who make up the majority of mobile spending, have never shown any willingness to pay much for incremental service improvements. Most won’t pay for additional bandwidth, and many are abandoning unlimited plans for metered service to lower their costs. The prevailing view is that consumers will take what’s offered for no cost, which undercuts any telco business case unless you believe in subsidization via ads, which few telcos do.

The next factor, which operates on both the bubble and nothing-new-from-6G groups, is whether anything else could justify a mobile refresh of any sort. Most see nothing other than technology improvements that could be linked to cost reduction. The majority of the bubble group believe that such improvements are viable, and in fact could provide a bridge to the time when low-latency applications would gain traction after 6G rolls out. Maybe even before; most AI RAN proponents (19 of the group think it’s worth looking at, though only 5 think it’s definitely a reasonable direction to take) believe AI is the path to operations efficiency and would also pave the way for edge hosting. They see AI RAN deploying before 6G. Another 18 think AI RAN might be a part of 6G, but even this group is concerned about costs.

72 of the operators in the 6G comment group say that they believe the primary requirement for 6G is to prove an ROI for any investment, and 66 say that they don’t believe that any significant investment could generate enough ROI, so the goal has to be to minimize any 6G costs. Target bubble thinking, not burst thinking, in short.

The Light Reading piece notes some differences in the views of US telcos versus those in the EU, and perhaps also those in Asia/Pacific. The comments I’ve collected validate this to a variable extent. It’s true that US telcos are more likely to be influenced by vendor positioning, which of course favors anything that results in more spending. However, they are still largely cautious; only 1 of the 12 burst-thinking telcos is from the US. Asia/Pacific telcos have, in general, higher demand density than any others in the world, and thus get a higher return on infrastructure spending and value cell capacity and efficiency more. The EU, which has more competition than other geographies, is the home of the most skeptical telcos. There may be a link there to the move to get an EU commitment to telco subsidies from OTTs, of course.

I think that what this shows overall is that 5G had a major impact on telco thinking, and a largely negative one. A few telcos characterize 5G as “the death of the Field of Dreams”, and most telcos seem to embrace the thought if not the exact terminology. They were caught up in what was clearly a vendor-induced hype wave, which resulted in a huge ROI shortfall and a more cautious attitude. However, a slight majority of telcos (49 of 88) have, over time, said that they believed some credible new revenue source was essential for their business, and I think that explains the size of the “bubble” group; they want to believe but were 5G-bitten and so are doubtful of 6G. They’re also doubtful of things like Open RAN, V-RAN, and AI RAN, but to return to a historical analogy from a past blog: to a Titanic survivor in a lifeboat, any light looks like a rescuer coming, because the alternative of no rescue is simply too dire.

There’s a risk here, of course. Being too willing to believe in baseless hype is never a good strategy, but being closed to real opportunity is no better. The challenge is to recognize what real opportunity is, and what 5G has done is demonstrate that visible market consensus, in a hype-and-click-bait age, is rarely real consensus. What is? That’s the challenge facing all these telco evolutions.

People talk and listen and see with biological equipment. Thus, they’re prepared to exploit network services that rely on those senses. Unless you believe in remote touch and smell, we have to move beyond simple sense-exploiting if we need new service opportunities, which means we need to develop an ecosystem of applications that exploit those senses in a different way, and that use a service like low-latency connectivity to do it.

Here’s my view. 6G can’t be good, but it can be bad. Other “Gs” are even worse, because as long as there’s a formal body with Field-of-Dreams historicity, vendors will use it to try to push the same sort of hype that gave us the 5G disappointment. It’s those vendors whose pursuit of profit drives them to innovative thinking that we’ll have to rely on to give us a telecom future, so we have to drive them to do that by shutting off the Gs once and for all. Telco service standards cannot save telecom, but false hope could kill it.

How Could We Make AI RAN Work?
https://andoverintel.com/2026/04/16/how-could-we-make-ai-ran-work/ (Thu, 16 Apr 2026)

“All that glitters is not gold”, so the saying goes. All that has benefits isn’t a good investment either, and that’s been a recurring problem in assessing technology in general and network tech in particular. Many new technologies were capable of doing something well, even better, but not enough so to justify an investment that often involved ripping out un-depreciated assets and taking new risks. Operators tell me that’s where things really are with AI RAN. Of a dozen who have done real work on the topic, ten say that they are only about halfway to a business case with it. Can they get the rest of the way? Here are their issues and the potential remedies they suggest, with my own views mixed in.

The biggest problem that network operators cite is that AI RAN is too much of a moving target in both cost and benefits. “You talk about building AI into the RAN,” one technologist said, “as though there was one sort of AI and one model of RAN, and neither is true.” Another: “How much energy can you save if you’re relying on a technology that needs new power plants to run?” There is no real definition of just what’s needed to run an AI RAN model, which means both its capital cost and its opex are impossible to predict accurately. The benefits AI RAN targets are only real under certain conditions of deployment, both in terms of density of cells and customers, and in terms of age of existing infrastructure. It’s also uncertain whether an advance in wireless standards to 6G could justify a shift to AI from ASICs, given that telcos don’t want 6G to require a forklift upgrade because 5G hasn’t returned on the investment in it.

How do you fix this? “The AI RAN camp needs to put its technology on a diet,” says one operator expert. “We aren’t going to run an AGI [artificial general intelligence] model at a cell site.” Most of the dozen operators told me that they believe AI RAN should perhaps be called “ML RAN”, because they think the real needs of AIops could be met with machine learning and a modest hosting requirement. That would lower both the cost of the gear and the power and cooling requirements. But, they say, all the impetus behind AI RAN is being created by chip giants like Nvidia, who don’t want a watered-down model.

That gives rise to the second problem, which is that AI RAN has become a theoretical on-ramp for edge computing. If you can’t lower costs with superchip AI RAN, sell AI services as edge services, the theory goes. “Who do we sell it to?” one operator asks. “Why would we win against hyperscalers who are already selling AI?” Not at the network edge, perhaps, but every cloud provider does offer an on-premises middleware tool that’s clearly a camel’s nose for any edge service tent that becomes feasible. “So because we can’t make an AIops business case for AI RAN because it’s too suppositional, we should add some other value thing in that’s even more iffy.”

Edge services at the network edge, rather than in the metro, are just too big a step, according to operators. If you want extreme latency control, host on premises next to the processes under control, which is and has been the established practice. The only way to break out of that would be to lower the cost of your edge service radically, which can’t be done profitably unless you can achieve a mighty economy of scale. At a cell site? Get real. Even metro hosting could be challenging, but that’s where you have to start with edge computing. Instead of moving expensive assets out to the cell edge, you lower latency between customer and metro, something that was a goal for 5G and is expected to be one for 6G.

The third question raised comes out of this evolutionary approach. Given that AI technology is changing rapidly, and that many of its value propositions are under pressure, how do you justify a long-term investment in it? Telcos typically depreciate over a longer cycle than enterprises, and if AI is something that has to be eased into, how long could it take before we have a convincing AI answer? What’s the risk of investing in it without such an answer, especially if the final justification emerges only at the end of a long and as-yet-undefined application evolution to something like real-time, augmented reality, robots, and so forth?

This issue can be seen as a consequence of the other two. Time, they say, heals all wounds, and while that’s nonsense at one level, it is true that many if not all the other issues could be resolved by the evolution of network-dependent applications, particularly those elusive real-world-real-time applications I’ve blogged about in the past. Given that, you could argue that AI RAN benefits are inevitable, so why not get started? The problem is that the realization of the benefits and the evolution of the best-available technology could render current investment obsolete. How then can you justify getting started?

The only possible answer to this one is to work hard, right now, to frame the technical requirements for those future applications, and assess AI/ML directions to align the technology with the needs. That could redirect early investment in AI RAN along lines more likely to create an optimum return on investment for network operators. The problem with this lies in the inherent opportunism of the vendors in the AI space who would almost surely dominate that hard work. We can see today that Nvidia’s focus is on validating its own market position, which tends to create an emphasis on things like large-scale robotics, which would require considerable work and time to integrate into enterprise operations and people’s lives. What those endpoints require is not the question that the early evolutionary steps need to answer. It’s not the final, most exciting destination, but the first steps on the route, that really matter.

Are We Ever Going to See a Realistic AI Survey Story?
https://andoverintel.com/2026/04/15/are-we-ever-going-to-see-a-realistic-ai-survey-story/ (Wed, 15 Apr 2026)

Here’s a basic truth that’s ignored too often: you can’t get right answers from wrong questions. You can, however, use the combination to boost clicks and hype, and so we see this a lot with AI. Most recently, a non-technology publication published a story titled “20 percent say AI has taken over parts of their job: Survey”. The findings, IMHO, show all the issues with surveys, and distort things way too important to distort.

First, “half of U.S. adults reported using AI tools in the last week”. I’m astonished anyone believes this, or at least interprets it as an actual, deliberate, commitment to AI. I think I know mostly tech types, and even among that group I’d barely hit that number. If I look at personal acquaintances only, my estimate would be maybe a quarter unless you count “using AI” as getting an AI summary in a search. Historically, a third of all people surveyed will claim they use something hot, even if they’ve never used it at all.

Next, 27% said that AI had automated some of their existing tasks; that’s about half of those who said they used it. Well, what were you using AI for if not to “automate” some of the things you do? And another thing: does your PC, your word processor, your calculator, your spreadsheet, even your email “automate” some of your existing tasks? Does a power saw automate some of your tasks? We’ve used tools to enhance productivity since the dawn of humanity, after all. AI is just another tool, a step on a path to sophistication of tools. The question is not whether we use it, but how it’s used and, most important, whether the use creates value that someone is willing to pay to acquire.

The majority of AI use, even according to the survey, involves AI that’s free to the user for some reason. It’s bundled with something, paid by their employer, or it’s offered as a free-to-use tool to encourage people to rely on AI and so elect to pay for more or better stuff. It would be nice to see how that sort of AI evolution is going, but of course 1) nobody surveys it because the results won’t get clicks, and 2) the number of people who respond accurately would be swamped by the number who say something because they think it makes them look smart and sophisticated.

Among the people who offer me comments on enterprise tech, who number well over 500, my analysis is that all of them “use” AI in some form, that about 70% get it paid for by their employer, at an average cost of around $200 per year, and that about a third pay for AI on their own, the majority from the group who also get it from their employer, at a slightly lower ($120) average cost. I have a paid AI plan, and I’m sure a lot of industry analysts do as well.

AI saves me some research time, if I use it carefully. So do search engines. Spell and grammar checkers save me some time, too. Which shows that any work tool is supposed to save you time, make you more productive. The thing enterprises point out to me all the time is that saving worker time, improving worker productivity, is not in itself making a business case. You have to be able to somehow move that improvement to the bottom line to offset a cost. Right now, that collides with the problem of AI errors.

Recently I ran a test on my AI (Google Pro Deep Research). I gave it an assignment that I’d already completed on my own, researching economic data from a number of sources. I’m a good economics researcher, though by no means an economist, and I didn’t have a major problem getting the data. The project ran for over a half-hour on AI, giving me progress reports that surely looked like it was getting to the result. But it didn’t. AI was unable to find all the information, and simply left the columns for some of the data with “NA” for “not available”, when it obviously was available. I didn’t consider this to be a complicated analysis, but AI didn’t produce any result at all.

The problem that produces is obvious: you can’t always get the right answer from AI. When you don’t get any answer, as was the case with my experiment, the failure is clear but the remedy is less so. I knew the right answer, but suppose I didn’t? If you’re a business that expects AI to empower someone’s research, perhaps enabling you to use a lower-cost person in a job and creating a labor-cost benefit, you missed your goal. When you get a wrong answer, the problem is that you’re now stuck with a costly error unless your worker spots the problem, which means either the cost of the error has to be charged against the net AI benefit, or your benefit is reduced because you needed human oversight of AI results to prevent errors from creeping in.

Enterprises say that the popular model of AI, the “chatbot” that answers questions in some form, rarely creates any actual realizable improvement in profits. The agent models that focus either on a specialized activity for which a foundation model can be trained, or that fit somehow into a business workflow and access company-private data, can generate net improvements in profit, but they’re still exploring the best way to get to a favorable outcome in a world that seems focused on the kind of AI that they already know doesn’t work for them as they’d like.

The enterprises who have done a lot with their IMHO-realistic view of AI have proven it out fairly easily, but so far this is less than 20% of enterprises. The low rate of success is due in part to the challenges in getting executive buy-in for an AI approach at odds with popular culture, and partly to the lack of tools and expertise. The good news is that there are more and more things going on with the right-to-enterprises model, though there’s still a measure of hype involved. “Live AI”, and the notion of AI-based world models, are still aimed at a more click-worthy theme than at promoting a practical path to realizing an AI business case, but they’re a closer fit than we’ve seen so far, and that may make a difference even by the end of 2026.

Meanwhile, think carefully about any AI surveys you read. Combine the natural and proven tendency of people to say things they believe make them look smart and good with the fact that not much about AI terminology is even defined consistently, and you don’t exactly have a prescription for accuracy. Don’t misunderstand; my approach of analyzing spontaneous commentary has limitations, too, so you should take that into account here as well.

Why We Need to Rethink “Cloud-Native”
https://andoverintel.com/2026/04/14/why-we-need-to-rethink-cloud-native/ (Tue, 14 Apr 2026)

I’m an avowed opponent of industry terms that have no stable definition, particularly when that lack perpetuates myths and hype. And, yes, you can argue that “AI” or “artificial intelligence” is such a term, but there’s an earlier one that I think is particularly destructive to the telecom world. It’s “cloud-native”.

The accepted definition of cloud-native is something like “a software approach that fully exploits the hosting model that cloud computing offers, maximizing its benefits.” OK, that sounds good, but it raises a number of questions. Does it have to exploit all the features of the cloud, most of them, some of them, or what? How do you define “fully”? But the biggest, I think, is how do you do it, in a software architecture sense?

Many of you know I worked on an open-source project called “ExperiaSphere”, which I launched to create an implementation of what I believed “cloud native” should be. Not the only one, for sure, but one that, by framing the goals in software, would offer both a proof it could be done and a way of answering those earlier questions I posed. I won’t talk about the project here, but rather about the principles, and they were designed specifically to drive the evolution of services and service management in a telco world.

Let’s start with basics. A service of any sort is a collection of resource commitments that create the feature set being sold. The service-in-waiting, then, is a recipe, a set of instructions about how the resources are committed. The resources are a pool of cooperative elements, devices, servers, connections, and so forth. These assert features, and those features are the pantry that a service sale draws on to follow the recipe that defines the desired outcome.
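
To make the recipe-and-pantry metaphor concrete, here’s a minimal data-model sketch. The structures and names are hypothetical illustrations of the idea, not drawn from ExperiaSphere or any TMF/telco standard.

```python
# The recipe/pantry idea as data structures: resources assert features,
# features form the pantry, and a service-in-waiting is a recipe of steps
# that each draw on a feature. Names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Feature:
    """A capability asserted by a resource, e.g. a 10G path or VM hosting."""
    name: str
    asserted_by: str  # id of the resource asserting it

@dataclass
class RecipeStep:
    feature_needed: str               # the pantry feature this step draws on
    parameters: dict = field(default_factory=dict)

@dataclass
class ServiceRecipe:
    """A service-in-waiting: instructions for committing resources."""
    name: str
    steps: list[RecipeStep]

# The pantry: features indexed by name, asserted by the resource pool.
pantry: dict[str, list[Feature]] = {
    "10g-path": [Feature("10g-path", "router-7")],
    "vm-host":  [Feature("vm-host", "server-42")],
}

# A recipe that a service sale would instantiate by drawing on the pantry.
vpn = ServiceRecipe("basic-vpn", [
    RecipeStep("10g-path", {"endpoints": 2}),
    RecipeStep("vm-host", {"image": "firewall"}),
])
```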

Service sale? So there’s also a process, the process of selling a service, billing for it, tracking the accounts, building new recipes, committing resources to a sale, releasing them when the term expires, restoring operation if a resource fails and has to be replaced…a bunch of processes, then. In the old-line monolith world, these processes would all be applications, with a queue of inputs and a bunch of outputs. That’s what the original OSS/BSS systems were, in fact. That’s not cloud-native.

How do we make this cloud-native? We have to stop thinking of commercial paper like orders and bills as drivers, and instead think of them as byproducts. What drives these processes, all of them in fact, is events. Things that happen, signals that request an outcome. The challenge in this whole service ecosystem lies in handling these events within the context of the business—financial constraints, legal constraints, resource constraints, and even the constraints set by those service-driven commitments of resources. All these constraints encourage us to think of the service ecosystem as a set of models, and that includes both the resources (financial and otherwise) and the processes themselves. An event is processed based on the state of the things that constrain it, whatever they are, which makes the structure of cloud-native applications one of state-event systems. Send an event to a process model, and it handles it based on its state.

A sale is an event, so it goes to a sales-event-handler, which draws a recipe from a file and dispatches it to the resource process as an event. If the state of resources and resource policies permit, the result is a commitment, which is the instantiation of the sale/model on the resource set. Now, that model is a state/event process too. If it gets a fault event from a resource, it acts on it by either replacing the failed element or reporting a failure upward to a billing process, which also got an event notifying it that a service had been instantiated. You get the picture.
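
Here’s a minimal sketch of that flow as a state/event table, with states, event kinds, and handler names all hypothetical, to show the shape of the design rather than any real OSS/BSS implementation.

```python
# A toy state/event dispatcher for the sale flow just described. A model
# carries its state; a table maps (state, event kind) to the microservice
# that handles that intersection. All names are invented for illustration.

def handle_sale(model, event):
    """Sales handler: pull the recipe and dispatch a commit to resources."""
    model["state"] = "ordering"
    return [("resource-model", {"kind": "commit", "recipe": event["recipe"]})]

def handle_commit(model, event):
    """Commitment succeeded: activate the service and notify billing."""
    model["state"] = "active"
    return [("billing-model", {"kind": "instantiated", "service": model["id"]})]

def handle_fault(model, event):
    """A resource fault: mark degraded and report upward."""
    model["state"] = "degraded"
    return [("billing-model", {"kind": "outage", "service": model["id"]})]

DISPATCH = {
    ("idle", "sale"): handle_sale,
    ("ordering", "commit"): handle_commit,
    ("active", "fault"): handle_fault,
}

def process(model: dict, event: dict) -> list:
    """Run the microservice at this state/event intersection, if any."""
    handler = DISPATCH.get((model["state"], event["kind"]))
    return handler(model, event) if handler else []

svc = {"id": "svc-1", "state": "idle"}
follow_ups = process(svc, {"kind": "sale", "recipe": "basic-vpn"})
print(svc["state"], follow_ups)  # ordering [('resource-model', {...})]
```

The handlers hold no state of their own; everything they need rides in the model and the event, which is what lets them be spun up when and where they’re needed.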

Essentially, cloud-native stuff is model driven state/event stuff. The processes are responses to events that are linked (in a table or graph) to each state/event combination in each model. The models contain everything needed to process an event, so the processes at all these intersections are microservices; you spin them up when and where you need them. Thus, there is no “OSS” or “BSS” or “NMS” in the traditional sense; all these are really just a bunch of microservices floating in state-event-model hyperspace. What connects them is the state/event relationships driven by events that weave through them to reflect stuff going on in the real world. An application of old, then, is really just a structured event flow and the processes it connects.

For convenience, though, you can talk about things like service management, resource management, accounting management, the whole FCAPS thing and the whole TMF thing. The policies that constrain all the ways that events are processed, in various states of various models, collectively define an “application” set. My own view is that there is a “business model” that frames the business flow, and under that there’s a service model set and its processes, a resource management set and its processes, and the business-level models and processes that reflect customer management, accounting, purchasing, inventory, personnel and payroll, and the rest. But a customer, a router, an invoice, and so forth are all models or events in their own right. Accounting and sales manage customer models or supplier models.

The properties that we looked for in that basic cloud-native definition fall out of this. You can spin up as many instances of a given state/event microservice as you need, where and when you need them. The commitments in resources are elastic both for the services and for their management. Same for resources. Same for software. It’s not a bunch of machines chugging along, it’s almost an organism, something that expands and contracts almost like breathing.

You might wonder about this, of course. Why does Tom think he knows how to do this? First, I’ve done a lot of event-driven software, as a programmer, an architect, and running teams. Second, I’ve followed the processes of people who have done it wrong, like the ONAP work. Telco types think in monolithic terms. You can convert a monolithic design into microservices, though, and if that’s how you define cloud-native, then you’re there. It’s not the right way and it won’t really meet the goals because architecture constrains implementation. Think back to the old days of IT; read a record, process, and write. Then turn that into read a queue of requests…and so forth. Now, divide that into microservices and tell me it’s cloud-native, optimally scalable, resilient, and elastic. Just dividing stuff up doesn’t make it elastic or agile, and until the people in telco-land get this, cloud-native is forever beyond their reach.

What Does AI Think of AI RAN?
https://andoverintel.com/2026/04/12/what-does-ai-think-of-ai-ran/ (Sun, 12 Apr 2026)

I thought it would be interesting to have an AI analysis of the whole AI RAN scene. Here’s the result in both report and audio summary form. The material was produced by Google Gemini Pro and NotebookLM.

The Week in Networking: Week Ending April 11th, 2026
https://andoverintel.com/2026/04/12/the-week-in-networking-week-ending-april-11th-2026/ (Sun, 12 Apr 2026)

Here’s the latest AI analysis of the top five network technology announcements for the week ending April 11th. As always, note that this is AI-generated and does not necessarily reflect the views of Andover Intel. The audio file is an AI analysis of the same top five, but with a commentary on how Andover Intel would view each release based on an analysis of our blogs. The material is produced by Google’s Gemini Pro and NotebookLM.

Please feel free to comment and ask questions on LinkedIn!

Is AI RAN a Real Value or a Prop to AI Hype?
https://andoverintel.com/2026/04/09/is-ai-ran-a-real-value-or-a-prop-to-ai-hype/ (Thu, 09 Apr 2026)

What’s behind the push for AI RAN? Not a single enterprise has, over the last year, even mentioned any interest in or benefits of it to them. Of the 88 operators who I’ve had contact with in that period, only 22 said they believed it might, stress the qualifier, have value, and 49 said they saw no real benefit. So why do we keep hearing about it? Perhaps this Light Reading piece has an answer to that.

There are a lot of theories about AI RAN. Some focus on the benefits it could bring in terms of managing cells and spectrum, as a replacement for ASICs. Some focus on the potential for edge-hosting AI services. Some on both. Operations and security value are both often cited. There is theoretical benefit for all of this, but provable value? Not according to virtually every enterprise (as buyers) and the majority of telcos (sellers).

The problem with the use of GPUs as an ASIC replacement is that you’re looking at a real cost penalty with a speculative benefit. Yes, there are indications that the full exploitation of MIMO could be more likely with GPUs than ASICs, spectral efficiency might be higher, and there might be operational benefits. However, most operators think these benefits are not compelling to the point where they could justify deployment of new RAN equipment, and they’re doubtful whether the benefits could be realized if they were phased in with orderly modernization initiatives. Pockets of AI, in short, don’t seem to operators to offer a value.

You could address the problem of pockets of AI by linking AI RAN to a massive new infrastructure wave, which brings us to the next-generation wireless point. Many say that AI RAN is essential for 6G, but operators note that their own priority for 6G is not to have it be a big forklift upgrade. Half of operators say that were that to be a requirement, they’d seriously look at not advancing to 6G at all, and almost half think that their pressure on the 3GPP would induce the body to reject any such requirement.

Leveraging something that doesn’t deploy in the first place is surely a challenge, but it may be even tougher than that. The pressure on operator applications for AI RAN inevitably creates interest in lowering the cost of achieving some of AI RAN’s promised benefits. About a third of operators believe that promoting GPUs as an ASIC replacement raises the question of ASIC improvement. Why not simply do a better ASIC? The majority believe that even if it were determined that AI could be a benefit, it would not be the generalized AI we see today, driving those massive data centers, but rather a form of machine learning, or perhaps a simple GPU and a foundation model. This “small-model” approach isn’t spontaneously validated, but it is expressed as a counterpoint to the perception that AI RAN means generalized AI.

This directly impacts the notion of using AI RAN resources to offer edge computing. The more specialized the RAN hosting mission is, the less likely it is that it could support edge services, since any limitations in what the edge host could offer could stick enterprises who used the service with limited application migration support. The AI RAN edge might work with today’s applications, but a new requirement? That would be a tough sell.

Tougher, given that enterprises are more likely to see a risk in using shared AI RAN hosting resources for edge computing than a benefit. A bit over a third of enterprises say, spontaneously, that they’d be concerned about the security of a mobile network whose hosting was shared with edge service users. Some pointed to the issue of GPU hacking, just recently noted. Most just think that if security gains are a justification for AI RAN, then sharing resources with edge services is a more-than-compensating risk. They have challenges securing their own resource pools, after all. Why would operators not have them too?

Interestingly, though, operator comments suggest that a lot (if not all) of these concerns could be overlooked if their current primary RAN vendor were to adopt AI RAN. Remember that operators have moved to the position that the biggest problem with “openness” is that it presumes a willingness to exercise a broader range of vendor choices, which to operators just means more integration worries and costs, and more finger-pointing if a problem occurs.

So, who wants AI RAN? Two groups, say operators. The first is the “outliers” in the mobile infrastructure space, who want the incumbents to share the wealth. Their hope would be that the innovation that AI RAN might (again, note the qualifier) create could (same) produce something so beneficial it would promote replacement of infrastructure at a faster rate. Second, the AI players, notably Nvidia.

Operators do not want to do a lot of integration. That means that multi-vendor RAN is a heavy lift in itself, and also that pockets of new RAN technology would be avoided even if the theoretical benefits of AI RAN could be achieved in a pocket-deployment environment. In any event, all the giants in mobile infrastructure have learned to embrace open initiatives with the realization that as long as they do that, it’s likely they won’t really be admitting others into their tents.

For the AI players, AI RAN is almost essential, for three reasons. First, they have to keep shaking the earth to sustain the hype wave. A ton of capital has been sunk into AI data centers, which for the most part are running AI that nobody is paying for. Applications needed; line up here. Second, just a little extension in hype-wave life might be enough for something real to come along and sustain AI spending. Third, even just attempts to realize opportunities that don’t actually pan out, or even exist, might create a useful application or technique that could build that “something real”.

I think it’s this group of AI types, more than any collection of buyers or of mobile vendors, that are really pushing the notion of AI RAN. That doesn’t mean that the move is totally self-serving and cynical, but I think that both are there in ample measure. Many, including me, have predicted that the AI wave would, as hype waves all do, crest and crash eventually. Will AI RAN crash with it? Unless it deals with real value propositions, I suspect it will.

How Much of a Benefit is the Telco IRR?
https://andoverintel.com/2026/04/08/how-much-of-a-benefit-is-the-telco-irr/ (Wed, 08 Apr 2026)

My blog on telco challenges, reasons, and possible solutions included a comment on the advantages of having a relatively low internal rate of return. This, I contend, lets telcos invest in projects that would be financially unattractive to OTTs, including cloud providers. It generated a dozen CFO comments, both from telcos and enterprises, and so I’m following up with a more detailed look at IRR impacts, based on these comments.

Let’s look first at the IRR advantage telcos have. Social media companies and cloud providers have a return on invested capital (the company-wide average of IRRs) in the 30s, percentage-wise. Telcos have ROICs in the high single digits, so we could say they’re about a third of the cloud and AI providers. A project with an ROI of 20% would fall far short of the target for the former group, and would be a significant improvement for the latter. Let’s test that through the comments.

CFOs tell me that you need to look at tech project spending through the lens of corporate budgeting and public-company financial reporting. First and foremost, they say, IRR is regularly compared with what some call the “hurdle rate”, which is the rate of return that’s the de facto minimum as seen by boards of directors, senior management, and Wall Street analysts. Many of the CFOs tell me that the hurdle rate is perhaps a better name for the benchmark against which projects are assessed, but others say that the CFO would normally set an ROI target somewhat higher than the hurdle rate, since the latter represents the point where you transition from “questionable value” to “negative value”. Hurdle rates, being totally financial in makeup, also don’t consider any risk premium, which typically rises as the payback period for a project or the useful life of the assets increases. So far, the “theory” of the IRR advantage holds, but with identified complications.
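
To see the arithmetic behind the claimed advantage, here’s a toy comparison. The cash flows are invented for illustration, and the two hurdle rates simply echo the single-digit-versus-30s contrast above; none of this is drawn from any company’s financials.

```python
# Toy IRR/hurdle-rate comparison: a project that clears a telco-style
# single-digit hurdle can fall far short of an OTT-style 30% one.
# All figures are invented for illustration.

def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; one cash flow per year, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def irr(cash_flows: list[float]) -> float:
    """Discount rate where NPV crosses zero, found by bisection on [0, 1]."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if npv(mid, cash_flows) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

project = [-100, 30, 30, 30, 30, 30]  # invest 100, return 30/year for 5 years
rate = irr(project)
print(f"project IRR: {rate:.1%}")                    # about 15.2%
print(f"clears a 9% telco hurdle: {rate > 0.09}")    # True
print(f"clears a 30% OTT hurdle:  {rate > 0.30}")    # False
```

The same project is a win for the low-hurdle company and a non-starter for the high-hurdle one, which is the whole of the advantage; whether it can be exploited is the question the rest of the comments address.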

CFOs in verticals where the economic output is goods for sale also note that the rate of inventory turnover compared to the length of the supply chain is also a factor that impacts project decisions. If you can sell something before you need to pay for having acquired it, your cash-flow benefits are considerable and you actually don’t have much capital tied up. Projects that can accelerate this sell-before-you-pay will reap a financial benefit beyond the normal ROI calculations.

It is true, CFOs tell me, that companies with a low IRR/hurdle rate can justify projects whose ROI is lower, but whether that’s a leverageable benefit isn’t a given. For example, the finances of two companies in the same vertical but with widely different hurdle rates would normally be assessed by investors and creditors in a way that favors the one with the higher hurdle rate, because it shows that company is using capital better. Where things are more complicated is where we’re looking at companies in different verticals, like telcos and OTTs of various types, and where the risk premium for a potential project is high.

Let’s look at the central issue in opportunity creation in real-time applications, which is where the applications are hosted relative to the location of the processes involved. Today, almost all real-time applications are self-hosted by enterprises at points close to the processes. This makes the process/application control loop “short” and reduces the risk that a connection problem will stall the process being controlled. Could you achieve better economy of scale with a reliable edge computing service and good edge-connection QoS? Likely so, but whether this would be a good project for the company with the application depends on how the savings from infrastructure economy of scale compare to the cost of the service and connection.

From the perspective of the provider of edge hosting services, or of connections to support these services, the risk premium is high for two reasons, which bears on the question of whether telcos could actually exploit their ROI advantage. The obvious reason is that there’s no current convincing proof that these services can make a business case for buyers, and if they can, it could take some time. The less-obvious reason is that the public cloud providers are already preparing for an edge opportunity to emerge, and if it does, they could well be entrenched competitors with application skills, in a space new to telcos, who lack those essential skills.

Every major cloud provider offers a premises-hosted middleware toolkit whose goal is to commit an enterprise to an application structure that favors the provider’s cloud services. In many cases, this toolkit is used to link on-site edge computing for process control with the cloud-hosted front-end pieces of applications. It’s reasonable to say that these toolkits create a bridge between today’s self-hosted model of edge computing and any future model, likely a more metro-ized one, since process control integration is most logical when facilities are close enough to engage in shared activity.

Low-latency service needs are not likely to be enough to save a telco role here, and may not even be valid. Integration of two or more adjacent-but-not-connected facilities would mean goods, parts, and materials would have to be transported among the facilities, and that’s a macro-time process. You don’t need millisecond latency to control something that would likely take minutes or even hours to complete once undertaken. Where physical stuff has to be moved, integrated process control value is limited. Thus, only mobile applications would likely benefit from low-latency services, and the number of these is limited.

It’s possible that, given time, mobile real-time needs would grow. I’ve blogged in the past about the opportunity link between augmented reality glasses and real-time services, and I believe that link exists. The questions are how long it would take for it to mature, and whether telcos could supply the skills needed to play in it, or would just be trapped in a pure connectivity mission that would, like consumer broadband, inevitably commoditize.

AI is a special problem right now, according to both telco CFO-staff and enterprises. There are three risk factors to consider there. First, can the technology actually make a business case, given that to do so it likely has to be substantially self-hosted in order to resolve governance concerns? Second, can it be trusted not to create a problem that even human oversight can’t properly deal with? Finally, is it moving so fast that any investment in AI will be rendered obsolete far faster than an investment in traditional hosting? Over 80% of enterprises say that right now AI is assigned a higher risk premium. Telco comments on that are too sparse to be useful.

This all bears on the question of telcos’ IRR advantage. Can telcos win at hosting? Is there any real value to edge-as-a-service without it? The answer is that IRR advantage alone isn’t going to be compelling. There’s no reason to exploit a financial advantage to exploit a non-opportunity. This means that telcos need to develop application-level relationships, which would demand they frame a strong edge computing model with effective APIs, and do it right now. Otherwise, the cloud providers will own the application model and make it very difficult for telcos to gain any traction with real-time applications.
