Andover Intel (https://andoverintel.com) "All the facts, Always True"

Is Satellite Emergency Service More Disintermediation Risk?
https://andoverintel.com/2026/03/12/is-satellite-emergency-service-more-disintermediation-risk/ (Thu, 12 Mar 2026 11:24:33 +0000)

Here's a comment I heard from an MWC attendee: "It's interesting that the most buzz from the show came from a topic, Starlink Mobile, that represents telco Disintermediation 2.0." I think it's a fair point.

Telcos have complained for decades that others have exploited their connectivity assets, demanding low prices for Internet access and then building high-margin services on top. This was the original meaning of "disintermediation", and it's interesting that the term is now being applied to a satellite service set that doesn't even ride on telco connectivity, but rather augments it. In a more philosophical sense, though, it may be valid. Could satellite players offer emergency connectivity to telcos just to demonstrate to users that satellite is almost always available, and then expand that "emergency-only" role to erode the telcos' dominance of persistent service?

The "Featured Story" from Light Reading for the show over the weekend was "At MWC, SpaceX execs tout Starlink V2 – and a key carrier partner for it." SpaceX, speaking at a keynote, talked about the value of universal connectivity, not only broadband in areas where terrestrial infrastructure can't serve, but also "life-saving connectivity", meaning emergency communications in those same areas. I think that's a valid story, but it's also one with implications.

How do you use mobile communications? Roughly 80% of the people who tell me about their personal mobile use combine "Internet" applications that in most cases are (or could be) connected via WiFi with spontaneous personal calls/messaging that very often has to connect via cellular service. That mirrors my own usage; I don't need mobile broadband most of the time, because all I do when "mobile" is answer calls or texts.

OK, suppose that SpaceX or somebody else (Amazon or Google, for example) offers nothing except mobile calls and texts? I get a phone number that always works, everywhere. I drop my normal mobile service completely, and simply connect via WiFi in the fixed places where I really use other connected applications. Where does this leave telcos?

That telcos need to cut deals to offer customers universal emergency connectivity shows that mobile services can't fulfill all connectivity needs. Satellite services can, particularly if we limit their target to calls and texts only. If we assume that the satellite service was part of a kind of VPN that would automatically (via the smartphone or device) connect via WiFi wherever such a service was available, we'd have a model that would use relatively little satellite bandwidth, and one that for many users could replace traditional mobile services entirely.
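To make the idea concrete, here's a minimal sketch of the kind of bearer-selection policy such a device-resident VPN might apply. Everything here is hypothetical; the names and the logic are mine, not anything SpaceX or any telco has described.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Bearer(Enum):
    WIFI = auto()
    SATELLITE = auto()

@dataclass
class LinkState:
    wifi_available: bool      # device sees a usable WiFi network
    wifi_authenticated: bool  # the VPN has a working tunnel over it

# Traffic classes the hypothetical service recognizes.
CALL_TEXT = "call_text"       # low-bandwidth, always carried
BROADBAND = "broadband"       # high-bandwidth, WiFi-only

def select_bearer(traffic: str, link: LinkState) -> Bearer | None:
    """WiFi-first policy: satellite is reserved for calls/texts,
    keeping satellite channel load minimal, per the model above."""
    if link.wifi_available and link.wifi_authenticated:
        return Bearer.WIFI            # offload everything when WiFi works
    if traffic == CALL_TEXT:
        return Bearer.SATELLITE       # the always-available fallback
    return None                       # broadband simply waits for WiFi

# Example: on the road with no WiFi, a text still goes through.
assert select_bearer(CALL_TEXT, LinkState(False, False)) == Bearer.SATELLITE
assert select_bearer(BROADBAND, LinkState(False, False)) is None
```

The point of the policy is the last branch: broadband waits for WiFi, which is what keeps the satellite load small enough for the model to work.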

Who might want this? Think almost any MVNO, but in particular a player like a cable MSO, some of whom already offer WiFi extensions to their mobile service options. Or Google, whose Fi service uses T-Mobile cellular, offers satellite emergency connectivity on some recent Pixel models, and supports international connectivity. Anyone launching an Internet satellite service, of course. Who doesn't, or shouldn't? Telcos.

Many of the younger people I know wouldn't like this, because they rely more on social media than on calls and texts. But could social-media providers accept some feature limitations in order to encourage satellite providers to integrate them into their call-text VPN? Why not?

Mobile services are used in different ways by different people. When public WiFi was limited, there was a lot of value in full-scale mobile broadband. Today that's less true, particularly for those who don't use social media as a substitute for continuous physical presence.

So isn't this a justification for the 6G integration of satellite service, and perhaps even WiFi, with mobile services? Not unless the telcos want to accelerate disintermediation of their mobile services. The smart play for satellite players would be to encourage this sort of integration, in order to use WiFi or even mobile service to offload higher-bandwidth applications, or service in areas where there are enough users to load up satellite channels.

We could be, nay probably are, headed to a time when instead of satellite being a small-scale emergency add-on to mobile service, mobile could be a specialty off-ramp for satellite, something to use only if WiFi isn't available to serve the mission. I think that telcos could have had significant influence in this area, but it's too late now. The old adage that telcos fear competition more than they value opportunity has reared up and bitten them, and hard.

Satellite voice services came along in the early 1980s in response to high long-distance rates, which telcos kept in place to protect their profits. The telcos eventually abandoned those rates, because voice traffic had a minimal impact on network capacity in the Internet age. But their institutional memory kept pegging satellite providers as the competitive enemy, and so they shunned deals with them to extend coverage to places where even mobile infrastructure couldn't profitably serve.

Similarly, telcos could have recognized that social media created an alternative to many calls and texts. If they had, might they have launched social-media-linked services that integrated call/text services into the social media site, rather than having the sites build a parallel service? Sure, but wary of "disintermediation" by OTTs whom they saw as predatory competitors, telcos hunkered down on the old, and by doing so fostered the new while losing the opportunity for new services of their own.

Telcos, friends, are too slow, too cautious, too protective of the remnants of the past. Their own trade shows are becoming a showcase for others who are faster, more risk-tolerant, less rooted in current thinking, and less fearful of change. Those others are increasingly controlling the agenda, and the last of the opportunities for telcos to seize any high ground is passing away.

If the satellite impact is real, it would destabilize the telco mobile services business, which is their most profitable, and so it would destabilize the telcos themselves. We would almost surely have a major profit and infrastructure investment problem. Thus, there's a public policy point to consider here. What happens if the current trend continues? Telcos would eventually have to become public utilities in the old regulated sense, with regulator-set pricing and profit levels, or even become a government monopoly. Sound like pre-1980s, pre-privatization, thinking? It is. I think what we're seeing is that we went about those reforms wrong, just as we have in "deregulating" other utilities like electricity, water, and gas, and perhaps even things like mail services. Is there a real, and unappreciated, risk in deregulating essential services? We should be asking that now, before it's too late.

Some Telco Views from MWC
https://andoverintel.com/2026/03/11/some-telco-views-from-mwc/ (Wed, 11 Mar 2026 11:31:37 +0000)

As my telco contacts digested MWC, they offered an interesting consensus; 55 of the 75 who commented on the show made this remark: "Open RAN can't fix 5G/6G." That's a telling comment on both, and one that obviously raises the "Why?" question.

It's clear to almost every telco that we're evolving to a more "converged" view of broadband, one based on common infrastructure for fixed, mobile, satellite, and even WiFi. The clarity is spoiled by the fact that while that seems obvious at this point, and should have been a fundamental point in the design of 5G/6G architecture, it really didn't happen for 5G and isn't happening for 6G either.

“If you accept broadband convergence, the primary goal has to be reuse of infrastructure elements across every type of access. We don’t have that,” one telco noted. The big offender, they say, is mobile, but there’s also a need for what another telco called “the Grand Unifier”.

If you look at the architecture of 5G, you can argue that it really defines three things—the RAN, mobility management, and the core. Most of the telcos think that the "core" piece should not exist as a part of mobile infrastructure at all, but rather be a single common central element of broadband. Access features can map to core features, but they should never extend into the core. That's an essential starting point, telcos think, but not the whole story.

Mobility management, if you cut to the chase, has two main elements—registration of devices and "finding" the access point for those devices as they move about. The function should be a boundary function at the point of connection to the service core, perhaps some metro-level point, where a "service address" that's known to the service network (say, the Internet) is mapped onto a tunnel that gets the packets to the right access point. Today, that's mobile-only, but many telcos think it should be a universal feature, one that lets devices roam to any cell, any wireline broadband connection (through WiFi to a device), and satellite. This is a goal, most agree, of 6G, but the implementation may be an issue. What a select group of thoughtful telcos want is for this to be done by creating a different relationship between the control and user planes.
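A toy illustration of what that boundary function implies (my sketch, not anything from a 6G spec): a single registry that any access type updates, with the service address as the stable key.

```python
# Hypothetical universal mobility registry: one table, any access type.
# A device registers wherever it attaches; the boundary function maps
# its stable "service address" to the current attachment point.
registry: dict[str, tuple[str, str]] = {}  # service_addr -> (access_type, attach_point)

def register(service_addr: str, access_type: str, attach_point: str) -> None:
    """Called when a device attaches via any access: cell, WiFi, satellite."""
    registry[service_addr] = (access_type, attach_point)

def resolve_tunnel(service_addr: str) -> tuple[str, str] | None:
    """Boundary-function lookup: where should packets for this address go?"""
    return registry.get(service_addr)

# A device roams from a metro cell to home WiFi; the mapping simply changes.
register("198.51.100.7", "5g-cell", "gnb-metro-12")
register("198.51.100.7", "wifi", "bng-metro-3")
print(resolve_tunnel("198.51.100.7"))  # ('wifi', 'bng-metro-3')
```

The registry itself doesn't care what kind of access updated it, which is the "universal feature" the telcos are describing.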

Does this sound like vRAN? Not according to the commenting telcos. According to them, vRAN is about turning functions into software, and of course it’s about the RAN. Telcos say that you have to start with the whole mobility management issue if you’re really going to optimize infrastructure for a converged future. That means taking all the user-plane functions and incorporating them in the router. If you’re going to create tunnels and steer/route onto and off the tunnels, you do that in the routers, so that all the data handling for all access options is built into devices that are already handling “traffic”. You then create an interface between the routers and the mobility control-plane features so that mobility management can create tunnels, direct traffic, and break them down. MPLS, they note, already does tunneling in routers, and many think that MPLS should be the mechanism of choice here.

Once you have the data plane centered in traditional routers, you can host the control-plane functions as software features on utility servers. This makes mobility management a true overlay element, and also enables it to direct service traffic to any access technology. However, there might still be some extra handling issues to face, given that every access network assigns a device an address. How do you manage directing a service address to an access address? One telco suggestion was to treat an entire access network as a private address space and push and pop address headers at the boundary. This could be a function of a router, if the router could pull address translation from a database, somewhat as SDN switches would pull routing information, caching it for the duration of its need. The access networks would presumably manage this at the boundary.
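Here's a minimal sketch of that suggestion, with all names invented: the boundary router pushes an outer header pulled from an authoritative translation store, caching the mapping the way an SDN switch caches flow entries.

```python
import time

# Authoritative store, a stand-in for the access network's own mapping.
translation_db = {"198.51.100.7": "10.0.4.22"}  # service addr -> private access addr

_cache: dict[str, tuple[str, float]] = {}
CACHE_TTL = 30.0  # seconds; cached "for the duration of its need"

def access_address(service_addr: str) -> str:
    """Pull the translation from the database, SDN-style, and cache it."""
    entry = _cache.get(service_addr)
    if entry and time.monotonic() - entry[1] < CACHE_TTL:
        return entry[0]
    addr = translation_db[service_addr]        # the slow, authoritative path
    _cache[service_addr] = (addr, time.monotonic())
    return addr

def push_header(packet: dict) -> dict:
    """At the boundary, wrap the packet in the access network's address."""
    return {"outer_dst": access_address(packet["dst"]), "inner": packet}

def pop_header(wrapped: dict) -> dict:
    """At the far boundary, strip the outer header again."""
    return wrapped["inner"]

pkt = {"dst": "198.51.100.7", "payload": "hello"}
assert pop_header(push_header(pkt)) == pkt
```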

What about the "open" part, like Open RAN? The rule, the telcos say, remains that control/data-plane separation, with the former in software and the latter inside routers, is paramount in designing the mechanisms for access, whatever they are. Thus, they'd want an initiative to open the RAN to continue the separation of control and data, and to use routers/switches for the latter. As for the former, there is some interest in having all control-plane functions be software, but a recognition that as you get closer to the antenna, the value of not only openness but even of control/data separation diminishes.

This attitude arises from two factors. First, the major telcos don't want to integrate multiple vendors. Second, they don't think that openness begets innovation in the RAN, because only the giants can really afford to innovate. "The concept of 'open' is just like any other tech feature, meaning it has to pay back overall," one commented. "The media loves an open RAN," said another, "but for us, not so much." Most admit that the whole concept of openness ends up being justified as a way to hammer down prices, and as a defense against a vendor leaving the market. Those justifications increasingly don't work.

What appears to be true, surely for those telcos who offered me comment and likely for others, is that neither Open RAN nor vRAN is seen by telcos as a broad solution to their business challenges. Some see a path to creating an infrastructure model and service model in a 6G-ish sort of way, but with specific technology elements so far absent from, or even contradicted in, 6G standards. Others are simply trying to navigate the demand forces that drive a need for greater capacity, the lack of differentiated services that could command premium payment, and the growing pressure on them to constrain costs to stabilize their business model.

Bound and Unbound Systems in Real Time Automation
https://andoverintel.com/2026/03/10/bound-and-unbound-systems-in-real-time-automation/ (Tue, 10 Mar 2026 11:29:02 +0000)

My views on the importance of real-time applications for the advancement of tech overall, and for new telecom service opportunity generation, are well-known to those who follow my blogs. Over the last six months, 64 enterprise IT planners/architects have offered me comments on their own views and experiences in this area, and I think they offer a window into what's really likely to happen in the space.

I promise enterprises anonymity in return for their sharing views with me, and that means not providing anything that might lead to their being identified. Since terminology on this topic is inconsistent, I’m going to frame the concepts in my own words to avoid giving away the person who commented.

Enterprises overall agree that real-time applications are key to any increase in IT spending, or to any changes in the telecom services they're likely to consume. Twice as many say, for example, that this is the major driver in both areas than say that AI is even a "significant" contributor, unless AI is used as part of a real-time strategy. As I've noted in the past, almost all the actual real-time application progress has come in the form of orderly expansion of existing process automation applications, which today rely almost totally on local specialized edge systems using what are typically known as "real-time operating systems" (RTOS) and running on systems optimized for placement local to the processes they support. These systems support what we can call a "bound process", meaning that the process involved uses some form of mechanical system like an assembly line, substation, or refinery. This bound process can be represented by an IT-generated model that today would be characterized as a "digital twin".

The collective comments of the 64 specialists indicate that the driver for change to these bound-process applications is the fact that, in nearly all cases, they are tightly coupled to related processes that are not hosted within the same facility. A factory needs to acquire parts/materials, and ship finished goods, both of which are external to the current applications. Where efficiency can be improved by linking all these interdependent elements of a “business”, the linkage can sometimes be handled by simply exchanging events/triggers, and often the linking processes themselves take measurable time, so these exchanges don’t require special communications resources. If there is a tighter coupling required, which is likely the case if the various interdependent elements are not co-located but are still proximate (within a larger facility, like a plant or yard, or perhaps even among metro-located elements) then real-time control of the interactions may be useful. This is the source of most realistic edge computing service opportunity.
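A trivial sketch of the loose-coupling case (all names are mine): a queued trigger is enough when the linked process operates on its own timescale, which is why no special communications resources are needed.

```python
from queue import Queue

# Loose coupling via events: the factory's bound-process controller and an
# external logistics process exchange triggers, not shared real-time state.
events: Queue = Queue()

def factory_controller(units_done: int) -> None:
    """The bound process raises a trigger when a shipment is ready."""
    if units_done >= 100:
        events.put({"type": "SHIPMENT_READY", "units": units_done})

def logistics_process() -> None:
    """The external process reacts on its own, non-real-time, schedule."""
    while not events.empty():
        ev = events.get()
        if ev["type"] == "SHIPMENT_READY":
            print(f"schedule truck for {ev['units']} units")

factory_controller(120)
logistics_process()  # schedule truck for 120 units
```

Tighter coupling, the proximate-but-not-co-located case, is where this queue would have to become a low-latency service, which is the edge-computing opportunity the paragraph describes.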

Our specialists also note that when this sort of multi-process symbiosis is assessed, it is sometimes (or even often) the case that some of the new processes being assessed are “unbound”, meaning that they involve elements that are more autonomous in behavior, like human workers or things being run by them. A truck doesn’t run on a track, it runs on a road. Workers in a warehouse move according to a combination of policy/training, their own will and assessment of conditions, and the local conditions themselves. While it might be possible to create a digital twin representing unbound processes, it’s not a simple task of creating a model of a static set of elements with static relationships, as it would be in the case of bound processes.

The big barrier to including unbound processes in a process automation application is creating a process model for them. The best way to get that, all of the 64 agree, is by incorporating video analysis into both the creation of the model and the populating of real-world conditions into the model. That makes AI analysis of video the most important AI mission relating to new IT spending and new telecom service opportunities.

One example of this has already been announced, and I mentioned it HERE. Arda's mission is apparently primarily model-building, but details are sparse at this point (47 of the 64 specialists had heard of it, but none had gotten any briefings from the company). This sort of capability would allow a company to place cameras to record activity in a space, or in something like a vehicle, and from that determine how the process was actually being conducted. The specialists doubt that this could be done without the benefit of human interpretation, or the ability to draw on the digital twin models of any bound processes in the facility, to relate movement and position to mission and task. Obviously, we need to see more detailed work in this area.

In any event, building a model from the real world isn’t enough according to the specialists. You need to be able to analyze video to populate the model with conditions, or there’s no way that the model can accurately reflect real-world behavior fully. A digital twin of a busy intersection might offer you a lot of insight into how the intersection might behave under various conditions, but not much on how it’s currently behaving. If the purpose of a process model is to facilitate the introduction of IT knowledge and IT-directed action into a real-world process, you need to know what the conditions are in the moment when that stuff is being introduced.
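To illustrate the distinction, here's a hypothetical sketch of a digital twin whose static model answers "what if" while a video-analysis feed answers "what now"; the detection format and all names are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class IntersectionTwin:
    """Static model plus live state: the static part supports 'what if',
    the live part (populated from video analysis) supports 'what now'."""
    lanes: int = 4                                     # static model element
    signal_phases: tuple = ("NS", "EW")                # static model element
    queue_lengths: dict = field(default_factory=dict)  # live, per approach

    def ingest_detections(self, detections: dict) -> None:
        """Hypothetical feed from an AI video-analysis stage, e.g.
        {'north': 7, 'south': 2, ...} vehicles queued per approach."""
        self.queue_lengths.update(detections)

    def recommend_phase(self) -> str:
        """IT-directed action grounded in current, not modeled, conditions."""
        ns = self.queue_lengths.get("north", 0) + self.queue_lengths.get("south", 0)
        ew = self.queue_lengths.get("east", 0) + self.queue_lengths.get("west", 0)
        return "NS" if ns >= ew else "EW"

twin = IntersectionTwin()
twin.ingest_detections({"north": 7, "south": 2, "east": 1, "west": 0})
print(twin.recommend_phase())  # NS
```

Without the `ingest_detections` path, the twin can only simulate; with it, it can direct.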

The ability to model unbound processes is critical to optimizing the impact of real-time applications on business efficiency, which means on willingness to invest in IT resources to do the optimizing, which means spending on IT, and spending on network service enhancements to expand the scope of the applications. A smart city needs to know a lot about what’s going on in the moment, or it isn’t smart enough.

This all raises some questions, of course, the biggest one perhaps being the impact of video analysis on personal privacy. A worker in a facility might have some concerns about some video-analyzing AI agent watching them like Big Brother, but this could likely be contained. However, spreading this kind of thing to public streets and to buildings, to augment public-safety workers for example, could mean that more of the general public are dragged in. Street-level camera surveillance is accepted and even sought in some countries, and resisted in others (including the US).

The bound/unbound system issue is something that enterprises are starting to address, and it’s already demonstrating that it has major implications in terms of both targeting and technology. Given that system models, digital twins, are complex in themselves, adding in the dimension of how they’re populated effectively in the real world threatens to delay their realization. Fortunately, there are initiatives that are starting to provide technical solution pathways, if not final answers, to these problems. There’s a lot of money on the table in this space, so it’s important.

MWC, COBOL, and Tech Fables
https://andoverintel.com/2026/03/05/mwc-cobol-and-tech-fables/ (Thu, 05 Mar 2026 12:48:07 +0000)

What do MWC and COBOL have in common? Two things, one obvious and one not. The obvious link is AI, which is the dominant conversation at MWC. The not-obvious one is that the so-called obvious is covering up the important and real stuff.

Let’s start with COBOL, which is an acronym for “Common Business-Oriented Language”. I’ve done a lot of COBOL programming in my career, and I’m confident I could still code in it if I wanted to bother. The current focus on COBOL is a result of claims that because AI has a new tool to translate COBOL to another language, it threatens the whole software industry, or at least threatens IBM’s mainframe incumbency. Nonsense.

COBOL is probably the easiest programming language to learn and use, because it has an almost English-language syntax. I’ve coded in a lot of languages, and I am of the firm belief that any competent programmer could learn it in a week and work in it confidently.

The reason that's important is that if COBOL were an albatross hanging around the necks of CIOs in IBM shops, they could have easily addressed that by simply changing the code they already ran, or compiling the programs on a different platform. You don't need AI translation of code, people; you can get COBOL compilers for everything from PCs to Linux servers, in both commercial (including IBM) and free/open-source form (try GnuCOBOL if you need something). So if somebody wants to flee a mainframe, and if COBOL programs were the barrier, they've had the pathway to leap that barrier from the first, and they still have it, independent of AI.

So why all the hype about this? Some of it is simply the result of click-seeking; you need stories so you grab on to something that has click potential. More people read about threats than about hopeful developments; good takes care of itself but bad has to be managed. Some is more complicated.

Wall Street is driving a lot of this nonsense. IBM’s stock has been on a roll because IBM alone, of all the big AI players, has actually had the story straight from the first. They have the highest enterprise-agent self-hosting success rate according to enterprises. You’d love to have gotten in on the ground floor on their stock, but most probably believed the hype that IBM was a dinosaur. I was actually asked to write a story on that before the stock took off on AI success, and I refused because I knew that it was B.S. Anyway, hedge funds have another chance, and a profitable one. They hit IBM’s stock with short-selling, drive it down, and make money when they cover their shorts with now-lower-priced IBM stock. They then buy, and when IBM goes up again, they make more money. If they can encourage the IBM-COBOL-dinosaur hype, they increase the potential for a big IBM drop, and the money they’ll make. And in the Internet and the click-obsessed world it created, they can do that easily.

Nonsense pays, at least for some.

Which gets us to MWC. What is a conference like that intended to do? Publicize, sell. Vendors spend a lot of money exhibiting there, and buyers spend a fair piece of change attending. All this investment has to come with a return. For vendors, it's visibility, attention. For buyers, it's education, exposure. For both, there has to be something compelling going on, or nobody will care. The easiest way to get that is to ride a good hype wave, which is why we have an MWC focus on the intersection of two of them. One, 6G, ties into the ever-present hope that the Next Big Thing in mobile will redeem telco capex. The other, AI, ties into the current popular buzz. Link the two, as Nvidia has surely been working to do, and you have a Great Attractor.

What gets covered up, then? Besides the obvious and general answer, the truth? Behind the scenes, under the hype, there’s an alternate reality that happens to be at least related to the truth.

The problem with hype waves is that they crest and fall into the drink. That may be fine from a story perspective; the news that AI has died would be just as click-worthy as that it was going to kill you, your family and friends, and humanity in general. For vendors, though, it promises a stock crash that could destroy them, even though the hedge funds would make money on it. So, underneath it all, there’s stuff going on to build the Thing that Will Emerge From the Deep when the AI wave crashes. It may not be as dramatic as the original hype wave, but it could save a lot of vendor grief. So we need to know what it is.

I think it’s clear that there are a lot of players who believe that telcos need some form of business salvation. AI and 6G are the preferred pathways to that, but only because everyone wants a deus ex machina (for those not familiar with the phrase, the Oxford Dictionary defines it as “an unexpected power or event saving a seemingly hopeless situation, especially as a contrived plot device in a play or novel.”) sort of solution. That works in novels and plays, but not in the real world.

We hear that the issues with telcos, the barriers to AI, are data coherence, APIs, lack of skilled personnel, and so forth. All of that is true and false at the same time. Yes, those are issues, but not the issue. Every project faces them regularly. The solution is to make changes, spend money, and the issue is the “spend money” part. Could 6G revolutionize infrastructure? Yes, if telcos spend on it. Could AI transform telcos and enterprises? Yes, if they spend on it. The problem is that in order for that spending to make any business sense, there has to be a suitable return on investment. Make a 6G revolution, and nobody will be able to afford it, because we know based on 5G experience that you can’t “build it and they will come”, you have to wait till they want/need it, then build it. With AI, what all the stories come down to is that investing billions in AI models could make something real, but would it do what we’re already doing better, better enough, to pay those billions back?

6G revolutions would make the vendors happy, but telcos have already said they don't want a 6G outcome that requires major infrastructure upgrades. So, no revolution. AI replacing humans would make the AI pundits, and probably the media, happy (up to when AI replaced them, at least), but suppose that a giant AI data center can do the job of a human? Is spending a billion to get a human outcome when humans are already in the jobs a smart move? Oh, you say, the data center could replace a lot of humans, but if that's the case then where is the need for all this new AI investment coming from? Where are the cost savings so far? Sure, all this MWC 6G stuff and AI-consciousness stuff is entertaining, but are we really talking about multi-billion investments justified by the fact that it's fun to read about them?

API work, open-source elements, in 6G could eliminate the telco fears of integration and operations complexity. The same could open enterprise core business data for AI exploitation, safely. There are initiatives in progress that would do these things, some that are even aimed at doing them, but you’ll not likely hear about them because they’re defensive positions, things behind the attractive hype that’s intended to be the survival bunker for when the hype fails. Talk about them now and you’re shouting “The Emperor is buck naked!” Then your stock tanks, because the hype was the here and now, and the fallback is a step on a longer and more boring path.

The good news is that outside MWC (and not in COBOL, I'd add), there's real progress happening. A former OpenAI bigwig is reportedly starting a company (Arda) to automate manufacturing; it will analyze video to create a "digital twin" of an environment, then train robotic elements to work within it. This is a rational use of AI to connect itself to real-world processes, a critical step to an optimum future. We also had news of an AT&T initiative to replace AI applications in netops that were based on giant LLMs hosted in the cloud with small models hosted on their own resources. This is what enterprises have been talking about for AI all along.

Hype waves on 6G and AI are likely to leave us all high and dry, unless…unless somehow some of the incremental steps that are contemplated, even announced, accidentally turn out something incrementally useful, or some player gets smart and tries to do that deliberately. There are signs, both in announcements like the one I just cited, and others from Nvidia and AMD in their telco-related announcements, that there’s a chance of both. The question for MWC, and for the AI community, is whether this will happen in time, or whether the waves of 6G and AI will crash over them.

Surveys, Hype, Tech Adoption, and AI
https://andoverintel.com/2026/03/04/surveys-hype-tech-adoption-and-ai/ (Wed, 04 Mar 2026 13:02:45 +0000)

When I blogged last week about Nvidia's telecom report, I got a lot of questions about the issues related to survey accuracy. Some related to why the problems existed, some wondered whether the same sort of issues might influence the stories we get from users themselves, and many wondered what the impact of the issues might be on tech adoption. We've all read about the success of AI or 5G or something, after all. How many of those stories are true? Some, almost surely. All, surely not. So let's try to answer those user questions, in general and in relation to AI in particular.

I surveyed users in an organized way for about 30 years. I got started on a paid survey for a big network vendor, and it was so difficult to line up people who’d talk with you and who actually knew something that I kept in touch with the group, originally numbering 300. Over the years, the number dwindled a bit, but often when somebody left the list for some reason, they were happy to nominate a successor. The key point, to me, was that I knew the people and they knew me, and I felt I was able to get truthful responses. I did one or two surveys a year so I’d not burn them out, and I shared the results with them while preserving the confidentiality of everyone involved. I was confident I got the truth then, and I’m still confident about that today, so I had a pretty good baseline to assess what I heard in other surveys, and I was even asked on multiple occasions to critique a survey done by someone else.

What I found was that somewhere between 30% and 40% of people who are surveyed will give an inaccurate response. Sometimes it's because they don't know enough about the topic, sometimes it's because they want to look smart or influential or in touch, but in any case they'll answer and be incorrect. I've seen people in that group claim to be using technologies that weren't commercially available, or say they used a form of the technology that didn't and couldn't exist. In some cases, I had information about their company that was totally contradictory to their answer and totally credible. In some cases, they were just too enthusiastic and optimistic, claiming a value that didn't exist or couldn't actually be realized at the bottom line. And I've also seen surveys where fewer than ten percent of those surveyed even qualified for the survey in the first place, so the firm doing the job essentially falsified its results. The point is that I think well over 90% of the surveys I've examined were, based on my own data, totally inaccurate. Of course, you have to ask whether I'm blowing smoke at you, and you should. You should always question market views.

One thing enterprises told me, on this topic, is that most of the AI adoption so far has been “citizen AI” pulled in by line organizations, and dealing with productivity in areas like document development, or where data governance wasn’t an issue because the data wasn’t business-critical. This stuff necessarily used cloud-hosted tools, and also was based on expensed services that didn’t involve an actual project approval and business case. If you ask these “citizens”, they’d tell you that they were getting a benefit, but where the proposed uses of cloud AI expanded to include areas that were subject to governance, and involved IT and the CFO, they were not approved and could not prove a business case. These casual uses of AI may be what’s driving the hype wave, because line personnel are more likely to exaggerate, or to ignore formal business-case issues.

To get this all linked to our topic, here's the critical point. People want positive reinforcement, to be respected, maybe to be liked. People like to be surveyed; it makes them feel important. When there's a hot technology, few will admit to knowing nothing about it, and few will admit their own company is behind the massive wave of adoption they hear about. Hype waves carry a lot of people, and companies, along. The less involved with formal project processes these people are, the less likely it is that they'd present an accurate picture of benefits. They'll just want to look smart, connected to the biggest and hottest topic. If you're using AI, it's less likely to take your job, right?

Companies want to look good too. They’re responsible to their investors, public or private. The public ones have to make quarterly regulatory filings, do earnings calls, and in general play up to Wall Street. If there’s a technology that’s sweeping the world, in a hype-wave sense at least, then there’s an advantage to have a story that engages with it, and a risk if you don’t. Make your own kind of music, sing your own special song, but it’s safer to occasionally be part of a chorus. Companies can’t fake financial reports without major risk, but they can spin numbers in a lot of ways.

Recently this has taken a specific form, which exploits the fact that a massive hype wave creates a way of shielding negative things, or even making a positive out of them. For example, not a single enterprise or telco contact has told me that their company had been able to cut a significant number of jobs by adopting AI, but most of those who had reduced headcount to improve profits by cutting costs said their company had claimed AI had enabled it. It was a good look, when saying you were laying off people to improve profits was not.

Where does this leave AI? There's no easy answer to that. Right now, there is credible progress toward building a business case for the AI investment that's already been made, and even for some modest growth in that investment. But Wall Street doesn't reward modesty, and companies that have seen massive stock gains cannot sustain them by simply justifying current deployments. They need massive new business case development. Is that possible? Yes. The barrier to it is failing to recognize you need it. Companies won't spend more and more without getting more and more. If there's an expectation that hype will save the day, then will those new business cases develop?

Enterprises tell me that they really believe in AI agents, in the self-hosted, component-like form they’ve always said they needed to be deployed in. There is no difference between data governance in AI and data governance in software, they say. Most of their business-critical stuff is not going to run in a cloud, whether using AI or not. They also tell me that they’re working through the processes of building skills and identifying tools, and that they believe that much, but not complete, progress will be made this year. But they do not see this as a revolution for the simple reason that things aren’t adopted in a revolutionary way. It would displace too much cost, create too much risk, and stress any credibility in a business case. They also don’t see what’s being published about AI as particularly helpful. They don’t want full autonomy, for example, they want AI to operate within the same kind of constraints as software copes with today. Trust but verify.
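What enterprises describe sounds like ordinary software guardrails wrapped around an agent. Here's a hypothetical sketch; the whitelist, threshold, and names are all mine, purely for illustration of the "trust but verify" pattern.

```python
def verified_agent_action(agent_output: dict) -> dict:
    """'Trust but verify': the agent proposes, deterministic checks dispose,
    just as output from conventional software would be validated."""
    allowed_actions = {"reroute_ticket", "draft_reply"}   # explicit whitelist
    if agent_output.get("action") not in allowed_actions:
        raise PermissionError(f"action not permitted: {agent_output.get('action')}")
    if agent_output.get("confidence", 0.0) < 0.8:
        return {"status": "escalated", "to": "human_review"}
    return {"status": "executed", "action": agent_output["action"]}

# An under-confident agent proposal is escalated, not executed.
print(verified_agent_action({"action": "draft_reply", "confidence": 0.6}))
# {'status': 'escalated', 'to': 'human_review'}
```

The agent is treated as a component inside the constraint, not an autonomous actor above it, which is exactly the distinction enterprises are drawing.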

We’ll have to wait a bit for reality to catch up with hype, and the risk now is that key AI players will find that threatens their stock, and try to further exaggerate. That could actually slow AI evolution, and success. I understand the appeal of hype, of clicks, but this is not the way to run an industry.

Exploring The Value of "Open 6G"
https://andoverintel.com/2026/03/03/exploring-the-value-of-open-6g/ (Tue, 03 Mar 2026 12:29:17 +0000)

There are a lot of questions being asked about 6G, despite the fact that (or perhaps because) it's likely half a decade away. One of them is how 6G should treat the RAN, and whether "open RAN" should be mandated. Telcos I chat with favor that position by 2:1, but interestingly, even those who do tell me that they don't believe it will really change the RAN or the vendor landscape much. A decent number even think that the debate is destructive to addressing other, more important-to-telco, questions.

What vendors think about this is more complicated, as a Light Reading story indicates. I don’t have any credible contacts with Huawei or ZTE at this point, but some telcos who do tell me the same story the article cites, and that to me suggests some truths. Truth One: Incumbent vendors believe they’d be better off without open RAN. Truth Two: Most incumbent vendors realize that the initiative failed to have a major impact in 5G, and will almost surely do the same for 6G. Truth Three: “almost surely” isn’t necessarily enough of a guarantee.

Huawei is somewhat unique among the big mobile infrastructure players in that it faces political pressure to drop its gear from networks, and it has a larger share of emerging-market telcos than its rivals. This combination means that there is more pressure on its incumbencies, and a greater chance that the telcos under that pressure would look at an open strategy to save money.

But let’s face it, we live in a world dominated by clicks, and nobody really wants to stand up in public against openness. That doesn’t mean they’d give it more than lip service, though, and that in turn means that it’s fair to ask whether the whole idea has, in the telco world, any value at all. Let’s look at that question first.

Telcos, enterprises, and pretty much everyone who consumes technology really want a system in which everything is plug and play—no integration, no complexity of operations. They also want the best possible technology at the lowest possible price, preferably free. Sadly, we live in a world of unsatisfied wants as much as we live in one dominated by clicks. The fact is that the benefits of openness demand multiplicity of players and competition among approaches, and if you forswear the notions of integration and resolving operational complexity, you won’t have that. A bunch of open players emerges when you can have best-of-breed competition at the micro-component level, and many/most of the competitors then would likely be micro-competitors who had to be combined to provide a complete solution.

How do you eliminate integration and operational complexity in an open infrastructure? The only possible solution is AI, which to me means that pushing open 6G means betting that by the time it comes along, autonomous AI operation of infrastructure will be accepted broadly, and that the open strategy would be designed to maximize the chances of that happening. We do not know whether the first is true, and we should know that at present we have no credible idea of how that AI optimization could evolve.

That last point is important, because if we had an open RAN initiative in/for 6G that could address the point, it might justify the time spent on promoting the concept and the divisive and distracting impact of its inclusion. Given my experience with standards bodies and industry groups, I think a formalized inclusion of open RAN in 6G would virtually exclude all chance of progress in the AI optimization space. I also think that involving standards/industry groups is likely to do nothing but slow and dilute the effort, which is why I think the AMD/GSMA initiative is almost certain to do little or nothing to help.

This is something that Nvidia should undertake, instead of issuing platitude studies, and contribute their results in an open release. They’re promising an open-source 6G model, but that’s not enough to resolve the integration/operations concerns. An AI operations interface model could influence development of AI tools, and their inclusion in 6G deployments.

Why not the 6G vendors? It goes back to the truth that nobody who's an incumbent has anything to gain from open RAN adoption, nor from any AI initiatives that would facilitate multi-vendor deployment. The vendors would surely accept AI tools in a public sense (as they did with Open RAN) if someone like Nvidia pursued the strategy, but if they did something on their own they'd justifiably focus it on their own product line, and gain from their investment.

So, OK, there could be value in an open element to 6G, but IMHO it would disappear if it were confined to the RAN. As far as telcos are concerned, there's no such thing as partially open infrastructure, partially limited integration, or partially limited operations complexity. The benefits of openness would dissipate or even disappear if they were limited to one piece of infrastructure. You can see that attitude in how Open RAN for 5G has failed to deliver; telcos buy infrastructure, not RAN infrastructure, and so you need to extend the thing you expect to facilitate openness to the entire mobile infrastructure. Which, of course, makes it a lot more complicated.

My conclusion here is that 6G should not mandate open RAN, or open anything, at this point. Absent a way to totally address the integration/operations issues of telcos, such a decision would not have any better outcome than Open RAN has had in 5G. It would, again IMHO, likely have a worse outcome, because fighting 5G-like battles in 6G risks having the whole initiative fail for the same reasons that 5G did—which we can summarize as being a supply-side-field-of-dreams mindset.

Future mobile services can’t be profitable if they don’t offer differentiation based on QoS, meaning that they support applications that require a level of QoS better than the best-efforts mindset of today’s mobile broadband. That doesn’t mean that these future services can skate by simply offering the QoS; they have to be able to promote the applications. That, in turn, means they have to feed an active project process or launch one. Since the latter is hardly possible for a telco, they have to hope for the former.

My discussions with enterprises on the project-centricity of their technology spending told me something new: if you can't launch a credible project that changes tech adoption, you can only feed one that got started without you. Enterprises dismiss the notion of revolutions because they'd require revolutionary spending and revolutionary project approval cycles, and impose revolutionary risk. That, I believe, means dealing with the evolution of IoT applications in process management and control, from being premises-based to something more metro-scale. If I'm correctly analyzing what enterprises are telling me, that evolution is the only path to 6G success.

More on Enterprise AI Targeting
https://andoverintel.com/2026/02/26/more-on-enterprise-ai-targeting/ (Thu, 26 Feb 2026 13:08:42 +0000)

My blog last week on occupation statistics and project targets generated immediate comments from enterprises, 208 in total as of yesterday. Based on these, I want to make some additional points, not only on the value of using occupation data to target projects, but on selecting targets optimally based on that data.

It was this latter point that I found most interesting, and also a key factor in how we interpret the comments of enterprises on things like AI. None of the 208 said that they were planning AI projects based on value targeting, meaning picking targets based on the characteristics of the employee group(s) whose productivity was to be augmented. Only 143 said that they looked at value targeting even as a secondary factor, though all said it was important. That sure seems a contradiction, one that deserves more explanation.

According to the 208, new tech projects including AI were almost always initiated by line department comments and questions, and sometimes by outright requests. They almost always were extensions or augmentations of things already being done with some form of IT, and about half of them came about because of a change in business conditions, regulations, economic trends, etc. This is why enterprises tend to see AI agents as software components; the projects that deploy them are projects that had deployed software in the past.

What’s different with AI versus “IT” projects is that line organizations are more likely to initiate requests for specific capabilities, or even contract for AI-as-a-service offerings, without IT coordination. IT is also more likely to “offer” AI in response to questions, comments, or requests, from line organizations. The key thing here is that AI planning is reported to be fragmented; companies do not say that they’re formulating a broad AI strategy as much as sneaking up on one a project at a time.

According to enterprises, the simple truth is that AI productivity projects in their company are almost always aimed at managerial and professional/technical workers. These job categories have two or more of three critical characteristics; they have a high unit value of labor, they have decision-making and expense approval roles, and they have significantly faster head-count growth than the labor force at large. Management positions hit all three of these, and so are most often targeted. Computer science, engineering, and healthcare professionals hit at least two of the three.
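A toy scoring function shows how the three characteristics could combine into a value target; the weights here are purely illustrative, not something any of the 208 reported.

```python
def target_score(unit_labor_value: float, headcount_growth: float,
                 has_approval_role: bool) -> float:
    """Toy composite of the three characteristics: high unit labor value,
    headcount growth, and decision/approval authority. Weights are invented."""
    score = unit_labor_value * (1.0 + headcount_growth)
    return score * (1.5 if has_approval_role else 1.0)

# Managers hit all three factors; a shrinking clerical role hits none.
print(target_score(140_000, 0.08, True))   # ~226800.0
print(target_score(48_000, -0.02, False))  # ~47040.0
```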

There is also a vertical-market difference in AI empowerment to consider. Interestingly, the highest percentage of empowerment comes in the “educator” job category, which has nearly 90% AI use (and almost all of it is cloud-hosted AI). IT vendors, architect/engineering firms, and financial analysts all report 70% or higher AI use.

Almost all of this AI use relates to the cloud-hosted, as-a-service form of AI, much of it in chat form but increasingly using AI tools. Getting data on AI agent deployment is much more challenging due to the early stage of agent adoption, but some interesting information does emerge from the comments of the 208.

So far, AI agent use is largely driven through or by IT, unlike the use of generative AI as a service. It is somewhat more likely to be stimulated by line department interaction than normal IT projects are, but all the applications so far fit into one of the three categories of agent I've blogged on before: workflow, interactive, and integrated. The integrated model, requiring as it does embedding in other software, is the most likely to be driven by IT. The interactive form is the most likely to be highly influenced by line organizations, and it is the only kind so far reported to ever be acquired and deployed entirely by them. But even here, the majority of this style of agent almost always involves IT, because the great majority of agent missions (over 90% so far) involve either local hosting of the model for data governance reasons, or selection of an as-a-service provider who can meet compliance goals.

The targeting of agent applications that are part of a workflow is obviously the same as the targeting of the workflow overall, unless agents extend the workflow to a different user set. Right now, that's reportedly the case for only about ten percent of workflow agents, but I suspect it will grow as enterprises realize agents are a good way to extend many applications. However, only a bit over half of workflow applications of AI agents actually target empowerment directly, meaning the ones that involve generating a display or report output based on analysis. Those involved in editing or other aspects of transaction handling don't generate worker-visible outputs, and so don't relate directly to empowerment.

Where AI agents are used interactively, the main mission today is the support chatbot; it accounts for two-thirds of interactive agent missions. Support chatbots can operate either pre- or post-sale, with the latter currently dominating, and there are at least current differences in how they're deployed. Pre-sale chatbots, dealing as they do with customer-facing data that's hardly proprietary, are more likely to be cloud-hosted, and make up the largest class of agent-as-a-service applications today. Currently, most of the pre-sale chatbot agents that are self-hosted relate to B2B sales.

The post-sale or customer support chatbots are, today, seen most often as involving data governance policies, and so are more likely to be self-hosted. However, where the product/service is B2C rather than B2B, as-a-service models are preferred currently. This is particularly true where the expected customer base is large, widely distributed geographically, or both. In those situations, an as-a-service model is said by enterprises to handle the variable load levels better.
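The hosting split enterprises describe can be distilled into a rough decision sketch; the rules below are my summary of the comments, not a policy any enterprise stated verbatim.

```python
def chatbot_hosting(proprietary_data: bool, b2c: bool,
                    large_variable_load: bool) -> str:
    """Rough decision logic distilled from the comments above (a sketch)."""
    if proprietary_data and not b2c:
        return "self-hosted"        # governance dominates: typical B2B post-sale
    if b2c and large_variable_load:
        return "as-a-service"       # elastic, widely distributed load favors cloud
    return "as-a-service" if not proprietary_data else "self-hosted"

print(chatbot_hosting(proprietary_data=False, b2c=True, large_variable_load=True))
# as-a-service (the pre-sale pattern)
```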

Most of the 208 enterprises admit that a strategy of targeting productivity-justified AI based on how many workers could be empowered, on the total unit value of labor, or on some combination of the two would be smart if AI deployment were decided on a centralized basis. But that's not the case today, and enterprises admit it's not likely to be the way they do tech projects in the future either.

This poses an interesting question, which is whether project incrementalism with AI or other tech advances can fully realize tech potential. Companies do projects, not revolutions. Vendors want revolutions, or at least some do, and if that’s the case does it mean that vendors will have to try to influence project development in directions that optimize the long-term, market-wide business case? I think that’s what Nvidia is trying to do by painting glossy, hopeful, pictures, but that approach risks a major problem if they don’t see the market clearly, and marketing hype can blind those who produce it as easily as those intended to consume it.

Telco AI as Nvidia Sees It Versus What They Tell Me
https://andoverintel.com/2026/02/25/telco-ai-as-nvidia-sees-it-versus-what-they-tell-me/ (Wed, 25 Feb 2026 12:46:37 +0000)

Nvidia, no surprise, is very interested in AI adoption by telcos, and they've just released a 2026 survey on it. Let's take a look, exploring the conclusions and also how the results compare with what I hear from the 88 telcos I've chatted with in the last six months.

Most of those who follow my blog know that I am uncomfortable with surveys, given that experience has shown me that people often don't respond accurately, that the survey design often dictates the responses, and that many who are asked to participate have no qualifications to answer the questions. However, I admit that the only reliable way to get data from users of technology would be to investigate in person, which is hardly practical. The Andover Intel approach is to glean information from users' questions and spontaneous remarks rather than to ask what will almost surely be leading questions. Even this is biased, since we can't assume that a representative sample of buyers will ask or remark. Keep that in mind here, please.

The Nvidia survey covers global responses from over a thousand people. I rely on somewhere between three dozen and almost 300 telco responses, representing 88 telcos, and I consolidate multiple responses by organization. My fear with the Nvidia approach is that so large a group is unlikely to be a representative sample, or to contain only people qualified to answer. Keep this in mind too.

The keynote comment in the report illustrates the tension here. It says that 99% of respondents say that AI has helped enhance employee productivity, and 26% say the impact has been significant. Of the 88 telcos who commented on AI from January 2025 to the present, only 17 said they had seen any productivity benefit from AI, and none said the benefits were significant. The issue here, I think, is that AI might "improve productivity" in that it might make some tasks easier, but whether this is a benefit depends on whether that improvement translates to a bottom-line impact. So far, none of my 88 telco contacts claim any substantial cost reductions attributable to AI.

This is an issue I’ve pointed out with enterprise AI too. In fact, most people say that there are situations where they find AI helpful, but most won’t pay for it. Same among workers; you can say it helps you without proving there’s a benefit to offset your company’s paying for it. We’ve proved, with AI, that if you give something away, people will take it. That doesn’t seem to me to be either surprising, or helpful in making a business case to justify major AI infrastructure investment.

The next point is that 65% of Nvidia’s responders said that network automation was being driven by AI. I hear that AI is being assessed for network automation missions by over three-quarters of telcos. I’ve heard that most operators think network automation is a top use case, more than the 59% who say that in the Nvidia survey, but all of this comes under a subhead that “Autonomous Networks Take Priority”, and telcos tell me that they are not yet ready to allow AI to autonomously manage networks. Would they like to? Sure, but none thought they’d take the leap in 2026, and only 32 said they believed they would rely on autonomous netops in three years.

The next subhead says that "Distributed AI Computing is on the Rise, Fueling the Path to AI-RAN". Only three telcos tell me that they are doing anything at all here, even serious testing of distributed AI for AI-RAN. The detailed comments in this section do admit that a lot of this is in the area of wireless R&D, but while the report says that telcos are "stepping up investments in AI-native RAN", telcos are not telling me that, and the comment flies in the face of reports that more and more players are leaving open-RAN models, which is where the only three telcos with any AI-RAN activity are playing with it.

I think you’re getting the point here, so let me just highlight some key points:

  • Survey says almost half report AI has helped open new revenue opportunities; no telco told me that.
  • Survey says that 89% believe that open-source is important to their AI strategy, that 60% say their company uses generative AI, and 48% that they use or are evaluating AI agents. All my telco contacts say open-source AI is important, all say their company uses generative AI somewhere, and all say they are evaluating AI agents.
  • The top use case for AI in the survey is network automation, but the top use case I hear is customer service chatbots, which top the list for 62 of 88 telcos. I note that the North America data from the survey more closely matches my responses, but my telco comments are worldwide. The survey said that half of respondents said network automation topped their use-case list in ROI; none tell me that.
  • In the survey, 54% said data-related issues were their top AI challenge; my comments put expertise at the top (70 of 88) with data a fairly distant second (43 of 88). However, that figure is close to the survey comments on data issues. The fact that data issues include privacy and sovereignty means to me that those responding to Nvidia were focusing primarily on public-cloud generative AI.
  • While I've noted that the attitudes expressed on the acceptance of autonomous network operation via AI are highly optimistic, the report later admits that only 4% report high or full autonomy, and 25% report little or just basic autonomy.
  • The report shows that AI-native plans seem to focus on wireless networks, which my own telco comments validate. It also shows that the interest is greatest for small telcos, which I can also validate; they have less access to skilled netops personnel. Most of the plans are safely in the future (2027-2030), which is why those who comment to me don’t really see anything happening; something in that time range is almost never budgeted, so it’s not real.
  • The survey said 90% believed AI was helping to increase annual revenue; none tell me that. The opex question produced exactly the same split, in both the survey and the comments to me.
  • 99% in the survey said that AI had boosted productivity; fewer than 20% tell me that. In the survey, 26% say the impact is major or significant; none tell me that.
  • 89% say they are increasing their AI budgets for 2026; this is also what I hear, but only 11 characterized their increase as significant.
  • The survey said that 48% were using or assessing AI agents; all telcos tell me they are assessing them, and 48 said they had at least one trial/test or use in play.
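To make the comparison apples-to-apples, here’s a minimal Python sketch that converts my raw counts out of 88 into percentages. The counts come from the bullets above; the labels and the pairing against the survey’s figures are my own shorthand:

    # Convert the post's raw telco counts (panel of 88) to percentages
    # for side-by-side comparison with the survey's figures.
    PANEL = 88

    counts = {
        "chatbots are the top use case": 62,
        "expertise is the top challenge": 70,
        "data is the top challenge": 43,
        "2026 budget increase is significant": 11,
        "at least one AI-agent trial/test or use": 48,
    }

    for item, n in counts.items():
        print(f"{item}: {n}/{PANEL} = {100 * n / PANEL:.0f}%")

Run, this shows chatbots topping the list for about 70% of my contacts, expertise as the top challenge for about 80%, and agent activity in play for about 55%, against the survey’s 48%.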

In all, I can’t validate the results of the survey except in a few areas. I’m not saying that it’s wrong, but that it doesn’t conform to what I hear myself. Subjectively, I think it is likely wrong, for the bias reasons I’ve noted above. I’ve done surveys for decades, and audited survey results from others. It is incredibly difficult to get an accurate result, even if you’re trying, and surveys published by vendors are usually published because they paint a picture favorable to the vendor, which should be no surprise and isn’t an unreasonable thing to find. AI in telco has most of the same challenges as AI in any other vertical; that’s what I’m hearing. Attitudes are hopeful, exploratory, but not often fully committed, and I don’t think that they’re likely to become fully committed in 2026.

Occupation Types, Productivity Benefits, and Tech Projects https://andoverintel.com/2026/02/19/occupation-types-productivity-benefits-and-tech-projects/ Thu, 19 Feb 2026 12:27:53 +0000 https://andoverintel.com/?p=6315 Productivity enhancement produces value, and thus justification, only if you either generate more business or fulfill existing business opportunity at a lower cost. The former demands an elastic market for the product/service, and I don’t have enough information to assess how likely that is in the US or global economies, nor do I think there’s any credible source that does. That means the value of productivity improvement lies in its ability to reduce labor cost, either by reducing labor or by using cheaper labor. That, in turn, means those two factors have to justify any tech project, including AI.

This means it’s important when complications arise in those productivity factors. A recent analysis by Axios may be exposing an issue that faces AI and other cost-management strategies. Productivity tools have historically been aimed at office workers for the simple reasons that the “information content” of their jobs was high, they already had devices to tap into information, and they were conveniently located in fixed positions. As the piece points out, over the last three years or so, US employment has shifted a bit away from office workers. Are the people we’ve depended on for productivity enhancement dwindling in the workforce? If so, why. If so, what’s next for productivity? My view, based on enterprise comments, has been that office workers represent 60% of the workforce, and non-offce, labor, workers the other 40% If we used up the former, we’d have to depend on the latter. OK, but does the available labor statistical data validate, enhance, or repudiate that?

To start with, it’s a bit difficult to validate the Axios data because the categories they define aren’t explicitly in the BLS (US Bureau of Labor Statistics) data. I also distrust the 2025 data because of the combination of government shutdown and politics. Finally, I don’t think that just grabbing recent years gives us a valid trend. So I’ve done my own analysis of the raw BLS information, and I’ll try to address the questions based on this analysis.

There are 21 major BLS occupation categories plus a total-jobs category. I’ve used enterprise comments to assign them to two groups: “office/desk” jobs that represent people who are largely information workers, and “labor” jobs that involve manipulating real-world things. The former group has 11 job categories, the latter has 10, and I’ve looked at all of these, and at total employment, over a ten-year period.

From 2015 to 2024 (a decade), US employment overall has grown at a CAGR of 1.12%. Obviously, this has to miss the large number of under-the-table workers, most of whom are involved in various personal-service types of jobs, “labor” in a broad sense. Despite this, if we look at the balance of “office/desk” jobs and “labor”, it shifts by only about two percentage points (58/42 in 2015 to 56/44 in 2024) over the decade. Enterprise views, then, are close to the statistics, but not spot on.
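For those who want to check the arithmetic, here’s a minimal Python sketch of the CAGR calculation; the starting total is a hypothetical placeholder, not actual BLS data, and I treat 2015-2024 as nine compounding years between the two endpoints:

    # Compound annual growth rate between two endpoint totals.
    def cagr(start: float, end: float, years: int) -> float:
        return (end / start) ** (1 / years) - 1

    # Hypothetical 2015 employment total (a placeholder, not BLS data),
    # grown at the post's 1.12% rate over nine compounding years.
    total_2015 = 140_000_000
    total_2024 = total_2015 * (1 + 0.0112) ** 9

    print(f"overall CAGR: {cagr(total_2015, total_2024, 9):.2%}")  # 1.12%

    # The office/labor split moved from 58/42 in 2015 to 56/44 in 2024,
    # a shift of about two percentage points over the decade.
    print(f"office share shift: {100 * (0.58 - 0.56):.0f} points")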

Statistics help us dig deeper, in any event. The biggest sources of growth the data shows were in Management, Business/Financial Operations, Healthcare Support, and Transportation/Material Moving: two each in office/desk and labor. In the same period, the biggest area of job loss was Personal Care and Service. In total, three labor occupation categories showed job losses, versus two office categories, and two office categories had substantially neutral job growth, compared to three labor categories. So far, not a major difference.

The picture this seems to be creating is driven by “automation” and company revenue opportunity. We’re seeing growth in job categories where the vertical they represent is strong, so the companies can earn a good return on their labor cost. We see slower growth, no growth, and even declines in jobs that are readily automated or that are changed by automation, like sales (where jobs decline because of online shopping). In production occupations, we see a decline in jobs reflecting increased process automation, while in transportation and material-moving jobs, which have not been highly targeted by process automation, there’s a significant increase.

But automation hasn’t created revolutionary change, and in fact the end result might validate Axios’ view. If we look at the CAGR for office jobs versus labor, we do find what the Axios piece suggests; the former grew at only 77% of the labor force growth overall, while the latter grew 129%, almost double the rate. However, this has to be explored in two further ways—in the context of the overall economic situation, and by specific job classification—to yield a final result.

There’s a broad indication of a growing sense of financial insecurity among consumers, the ultimate economic engine. The largest decline, in the personal services area, reflects less willingness to spend on things that could be done by the individual. However, this doesn’t extend to areas where the service is not directly paid by the consumer. Healthcare, where job counts are increasing explosively, is largely covered by insurance benefits or government programs. Online shopping impacts office jobs by taking retail out of offices/stores. In other words, automation may not be fully responsible for things we’re seeing.

Now, let’s now look at jobs in more detail. It seems logical that a place to apply new automation strategies like AI would be the specific places where jobs are increasing based on traditional approaches.

Among office jobs, management occupations, business and financial operations, and computer/mathematical occupations all show very high CAGRs relative to jobs overall, more than double the pace. These are all knowledge-worker categories, and so all are jobs that could be enhanced by AI-delivered information. They are also all jobs with a high unit value of labor. The troubling fact here is that they should have been jobs that traditional automation could have targeted. Why didn’t it? I would argue, based on casual enterprise comments, that automation in any form tends to reduce lower-level jobs but create higher-level ones. More but smaller teams, more managers, remember? Automation specializes, in short, and shifts activity from human work to human supervision. This shift has reduced job count, but increased unit value of labor.

In the labor categories, we see the highest CAGRs in jobs with relatively lower unit values of labor. This likely means that automation practices to date have pulled higher-value jobs out of the market, which means that what remains is likely to be harder to target. Axios is right in saying that automation appears to have already impacted office labor, making the opportunity for productivity enhancement, and thus the AI opportunity, greater in labor. However, there’s a lot of work to be done before real-time automation is capable of enhancing labor productivity. Is there any hope of pulling more out of the office space?

I think the real reason why labor is a better target is more complicated than the Axios chart suggests. To get to labor productivity, we have to integrate computer technology into the work itself, not rebuild jobs around the computer. The latter process has made elite knowledge workers a premium requirement, and empowering them demonstrably does not reduce their job counts. If we could use AI, and in particular AI agents, in the way enterprises want to, which is to build them into workflows, we could target the flows of human work as much as computer work, and create something that has a bigger net impact on labor cost.

Does this mean that AI replaces us? That, unfortunately, gets us back to the point about price, demand, and opportunity. The industrial revolution replaced a lot of craft workers, but it lowered goods prices enough to create more demand and the overall effects on employment and quality of life were positive. How many workers could AI displace? Could it create new spending to create new jobs of a different type? Would these jobs be accessible to the displaced workers? I don’t think this is going to be an immediate issue for us to consider, but it probably will be down the line.

How Much of a Game-Changer is Cisco’s G300? https://andoverintel.com/2026/02/18/how-much-of-a-game-changer-is-ciscos-g300/ Wed, 18 Feb 2026 12:27:28 +0000 https://andoverintel.com/?p=6313 Cisco’s announcement of its G300 chip was positioned to address AI workloads, but these days about the only thing we’re not claiming is driven by AI is politics, and I’m not sure about that either. In any event, anything that’s positioned relative to AI has to be examined in two ways. First, does it actually provide value in AI evolution? Second, how is that value derived, from AI or from something broader that happens to at least possibly consume it?

The first question above may be the hardest to answer with any authority, given the range of things that are claimed to be AI missions. There seem to be four AI models out there, best understood by arranging them in a 2×2 row/column grid. You can draw this out or visualize it, as you wish.

For the rows, label the first “cooperative” and the second “autonomous”. For the columns, label the first “cloud” and the second “self-hosted”. OK?
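Drawn out, the grid looks like this; the one-line cell summaries are my own shorthand for the four boxes described below:

                   cloud                             self-hosted
    cooperative    chat-style LLM services           in-house interactive models
                   reached via the Internet          (sovereignty, privacy, cost, QoE)
    autonomous     agentic AI that acts for the      AI embedded in applications and
                   user, hosted by cloud giants      workflows, little human interaction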

The cooperative cloud model, our first square, represents the AI that most people who use AI are thinking about. There’s a huge data center complex hosting LLMs, and users connect to it almost always via the Internet. This connection lets the users ask questions, typically ones whose answer is derived from general knowledge that is culled from the Internet itself.

The autonomous cloud model, down one row in the “cloud” column, is where many of today’s cloud giants are trying to get to. “Agentic” AI is positioned by them as a form of AI that does something rather than tells people something. The challenge with that, in a value sense, is that it’s not clear how much of it can be done without ingesting what the user would consider personal, or what a worker’s company would see as proprietary.

Now for the self-hosted column, starting with the “cooperative” row. Here we have missions that use language models hosted by a company or, in theory, a person. In roughly 80% of applications reported so far, this is done to overcome data sovereignty issues or personal privacy concerns, and in the other 20% because either cost or QoE demands dedicated handling rather than a resource pool. However, this group of missions still works interactively with a user.

In the self-hosted autonomous box, we have applications where the AI does something similar to what a traditional application or component would do, which is to process input and produce output without continual human interaction. This is the AI model that enterprises have generally found capable of making a business case, but it’s also variable in terms of how it uses data, just as applications/components are.

Generally, Cisco’s chip opportunities, the justification for and value of the G300 line, would arise out of either hosting AI or delivering large quantities of data to/from the models, wherever they were hosted. Obviously, it’s massive data movement that justifies the G300. Massive data movement would tend to rule out our 1:1 box, the conversational cloud, but would likely play in at least some of the AI applications of the other boxes.

Generally, I think, Cisco’s chip opportunities are also greatest where the network impact is across many buyers, particularly because Cisco has a broad network incumbency. That means it likely derives more opportunity from the self-hosted column than the cloud column. The cloud-autonomous combination might involve data movement, but the capacity of the user-to-cloud connection limits the value of being able to push data faster from its enterprise source.

To me, this says that Cisco gains from the G300 to the extent that enterprises host their own AI and apply it to locally sourced data. If the cloud model prevails, then the best Cisco could hope for is that they’d sell G300 gear to hyperscalers who demanded a massive discount. The worst is that those buyers would use generic chips from people like Broadcom or Nvidia, and Cisco would see nothing from it. So, the G300 is not a bet on AI in general, but on self-hosted AI. Cisco’s press release, linked above, shows that with the heading “Silicon One G300: The Networking Foundation for the Agentic Era.” Yes, for sure that’s kissing the most credible current AI baby in a PR sense, but it’s also a positioning statement.

Does Cisco see what enterprises have said all along? I think that’s likely, and for sure Cisco’s broad AI positioning would seem to apply mostly to enterprises who plan to host their own, in their own data centers. However, Cisco would surely not send hyperscaler hosts of AI away, and might well also hope to grab up any smaller ones that evolve, such as within telcos. So, there is AI value to the G300.

On to our second point, which is whether Cisco can add value to AI, or even outside AI. The answer to that, I think, is a clear “Yes!”

The biggest risk to any network-mission value proposition is congestion that impacts QoE. Traffic management is complicated and costly, but essential if capacity is limited. Something like 80% of router code is dedicated to it, and enterprises estimate that it generates three-quarters of user complaints and two-thirds of their network operations costs. If you simply made networks very fast, raised the capacity, you’d have a profound impact on QoE, business cases, and costs.

What would the optimum data center network for the 2020s era look like? Enterprises would say “infinite capacity, 100% availability, and free”, but they all know that’s not realistic. What they actually would like is one with a capacity so high that congestion becomes unlikely, where alternate paths can be created to respond to failures but don’t congest in the process, and that offers autonomous recovery but high netops visibility. In short, they’d like what Cisco seems to be promising the G300 and the rest of its portfolio can approach. And since Cisco says (in the same release) “To enable AI network builders of all sizes – hyperscale to enterprise – Cisco is introducing the next generation of Cisco N9000 and Cisco 8000 fixed and modular Ethernet systems, powered by Silicon One, and designed for the extreme power and thermal demands of AI workloads,” they’re promising a level of capacity higher than enterprises would likely need themselves.
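As a rough illustration of what “capacity so high that congestion becomes unlikely” means in fabric terms, here’s a minimal Python sketch of the oversubscription arithmetic for a generic leaf-spine data center fabric; the port counts and speeds are hypothetical, not Cisco G300 specifications:

    # Oversubscription ratio for a generic leaf switch: server-facing
    # bandwidth versus spine-facing bandwidth. A ratio at or below 1:1
    # means the fabric itself should rarely be the congestion point.
    server_ports = 48      # downlinks at 100 Gbps each (hypothetical)
    uplink_ports = 8       # uplinks at 800 Gbps each (hypothetical)

    downlink_gbps = server_ports * 100
    uplink_gbps = uplink_ports * 800

    print(f"downlink: {downlink_gbps} Gbps, uplink: {uplink_gbps} Gbps")
    print(f"oversubscription: {downlink_gbps / uplink_gbps:.2f}:1")  # 0.75:1

At 0.75:1 the leaf has more uplink than downlink capacity, which is one way to read the “throw capacity at the problem” design philosophy this class of silicon is meant to enable.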

The most important point, though, is that the value of capacity doesn’t stem from AI, it stems from the increasing need to organize general business value from specific applications and data. AI is itself a tool in doing that, but enterprise needs for QoE, resilience, and cost-efficiency existed long before AI came along.

Traffic management is needed when traffic conditions need to be managed, and those conditions drive not only most of the opex but most of networking’s complexity. Where traffic conditions are most critical is in the data center, where traditional transactional data, increasing real-time data, and business intelligence-gathering are creating a web of information flows that are complex, QoS-dependent, and business-critical. Those who build the data center of the future would love to trade a modest capex increase for a reduction in opex and complexity. If a major data center networking vendor doesn’t offer that, then there’s always white boxes, so Cisco is on the right track as long as they don’t get so caught up with the AI hype wave that they’re caught in a collapse.
