Can Vendor Strategic Influence Counter IT Spending Pressure? https://andoverintel.com/2024/04/25/can-vendor-strategic-influence-counter-it-spending-pressure/ Thu, 25 Apr 2024 11:39:15 +0000

What is "strategic influence", how does it impact buyers and sellers, and what do trends in it mean for the tech market? For decades, I've worked to model market behavior as a means of providing another data point to surveys, which present a multitude of issues if you rely on them completely. One thing that's come out of all that work is that major technology initiatives are driven rather than simply developing, and that the ability of a vendor to advance major technology changes depends on the degree to which it can influence buyer strategic planning. Hence, "strategic influence".

At the start of this century, IBM was the runaway leader in strategic influence. If we set a scale using my models, and assign values to mean that a vendor has totally decisive influence (score 100), enough influence to outplay competitors (50) or no influence on strategy at all (zero), IBM in the year 2000 had a score of 48, and over the period up to 2010, it vacillated between 46 and 53. No other vendor of any type ever achieved a score even in the 40s. FYI, in the 1980s, IBM’s score hit its high of 65, but retrocasting my model, I estimate that in the late 1960s it hit 79.

Other IT vendors hit their high in the 1990s, which is also when IBM's score dipped into the high 30s, but none of them ever equaled IBM. HPE managed to score 31 in this period, its high point, and Dell hit 27, also its highest. The only software platform player to break into the 20s was VMware, which managed 22 in 2014 and fell slowly after that, to 18 in 2023. Cloud providers hit their high between 2015 and 2022, with Microsoft leading at 37, Amazon at 30, and Google at 22. They've all fallen by roughly 4 points since.

Network vendors did their best between 2002 and 2008, with Cisco hitting a high of 24 in that period, rivaling IT vendors, but today even Cisco can’t break into the 20s (in 2023 their score was 16). They had another boost during COVID but didn’t quite get back to their best levels.

Now let’s look beyond influence for a moment, at another metric, which is the percentage of IT spending that represents new projects, versus orderly upgrades to infrastructure. Up to the 1990s, “new” spending tended to account for between 58% and 65% of all IT spending, and since 2000 it has dropped pretty steadily, to the point where in 2023 the new projects made up only 38% of IT spending.

You can see that there’s an interesting synchrony involving strategic influence and IT spending. When IT vendors had stronger strategic influence, more project spending was present. This raises the question of which factor was the driver. My older data from users can’t answer that, but the free-form commentary I’ve gotten over the last year can offer some insight.

Enterprises seem to have boosted their project activity in response to vendors having delivered new technology that opened up new avenues for applying IT at a good ROI. Up to some point in the 1990s, this happened on a regular cycle whose average length was roughly 12 years, but those cycles then ceased. I contend that this failure to introduce new paradigms was the leading cause of the decline in “new” IT spending, and also for shifts in strategic influence. No new paradigms, no new spending.

So can we declare that to be the chain of events? Not totally, because my enterprise chats also suggest that as new project spending declined, vendor strategic influence fell because there were fewer initiatives where that influence could be applied. And as budgets focused on cost control, vendors became more cautious about even proposing something new. So you can see an element of circularity here.

Wouldn’t vendors, faced with falling IT spending, want to introduce something new? Sure, but here we encounter another force, the “higher apple” problem. The technology shifts needed to move the needle on productivity gains have grown, because the easy connections between tech and productivity have already been exploited. What’s needed now is more of an ecosystem than a product. A lot of pieces have to be put into place, and that means either that a vendor has to share profits with others who supply missing elements, or that it has to have enough strategic influence to ensure that if it supplies all of the pieces, it can drive the project and lock it in to itself. And, of course, it has to drive a “spend more” story in a market that’s been dominated by “control costs”.

I think this is more of a factor in network equipment, and even in IT spending, than I first believed, because of this positive-feedback effect. We are now in a market situation where buyers have become conditioned to maintaining the status quo, at first because they weren’t being presented with any transformational technology options, and then almost out of habit. Vendors then found it harder to offer anything that didn’t control costs and simply hold the line. And the problems with transformational new projects that started the whole mess became more acute, because cost-driven buyers don’t promote aggressive new product planning.

IBM, champion of strategic influence, missed on revenue yesterday. They also announced their acquisition of HashiCorp to improve their already-strong position in hybrid cloud. Their consulting revenue was down slightly versus 2023, and I think that reflects growing buyer concerns about the economy overall. Even with strong strategic engagement, IBM can’t promote aggressive new project adoption against such a negative tide. It’s the biggest tech problem of our time: how do we restore confidence that IT can really improve and transform business? It’s a question that’s going to get harder to answer as buyers get more ossified into the status quo.

Might Software Become More Important than Hardware in Networks? https://andoverintel.com/2024/04/24/might-software-become-more-important-than-hardware-in-networks/ Wed, 24 Apr 2024 11:32:45 +0000

Is networking heading into a software-first period? Should it be? There are areas on this topic where enterprises and operators agree, areas where those two groups disagree, and areas where there’s more or less agreement within each group. Are there any conclusions to be had from this disorder? Let’s see.

Let’s start with some numbers. I had comments on a software focus in networks from 229 enterprises and 72 operators. Of that group, 178 and 68, respectively, said that they believed software was of increased importance to their network infrastructure planning. Both groups cited largely the same drivers for the trend too; open-model networking, increased focus on operations, cost of equipment, and lengthening of period of expected useful life. The groups didn’t place the factors at the same level of importance, and enterprises showed more variability in their ranking of the factors than operators did.

The operators seemed to divide almost equally between a capex-focused camp (35) and an opex-focused camp (33). The first camp listed equipment cost and open-model networking as their top two priorities, and the second listed increased operations focus and lengthening period of useful life as their top two. The comments of both camps were particularly interesting.

The capex camp of operators was made up primarily of retail ISPs, but also included some mobile operators still facing 5G RAN modernization. This camp was the one willing to accept open-model networking, but their preference was still to knock down the price of network hardware, hopefully from their current major vendor but, if not, from a main competitor. The 5G operators in this camp had far more interest in open-model networking than the retail ISPs did.

The opex camp of operators is made up of most of the larger, mature ISPs and mobile operators. They are not, at this time, facing major new buildouts, and in fact are hoping to avoid them. They are more focused on extending the life of current gear and raising profits by cutting costs, especially opex but also capex, but they see an open-model transition as being likely to cost them more on the operations side than it might save on the capital side.

Operators are often reluctant to classify themselves firmly into any single camp, mostly because the great majority are trying to cut costs anywhere they can. For most, the 5G budget period has come and gone, and the fact that 5G isn’t likely to raise revenues has become clear to the financial markets, making costs the exclusive focus. But even where some 5G buildout is still budgeted, the decline in 5G revenue credibility is impacting how the money will be spent, and how much pressure there will be to turn some funds back. In 2020, only one operator in eight said they saw such pressure; in the last six months, one in three said they faced it.

5G did contribute something important to operator infrastructure planning: the notion that “open-model” can easily be turned into a synonym for “single-vendor”. While many (perhaps even most) operators and vendors believe that it’s easier to adopt open-model networks in a greenfield environment, in practice most say that they’re electing to focus on one vendor during deployment and “open up” down the line—perhaps. For sure, the open model assures them of a way out of their single vendor if it becomes necessary, but they hope it won’t.

This attitude is most prevalent in the access network; as you get deeper (metro and core), all operators are more interested in and willing to exploit open technology. In part, I think, that comes back to our question about software-centricity. Most management focus is applied close to where services are delivered, not deep in the network where they’re invisible. In addition, management practices and tools can be less consistent where there are fewer pieces of gear. You can see that this doesn’t add up to much software drive for the capex camp.

On the opex side of the camp structure, there’s a diversity problem, in three dimensions. First, network infrastructure is almost always made up of multiple equipment types. Second, it often includes multiple vendors. Third, management needs vary from equipment-centric FCAPS to service and customer experience management.

The ONAP mission was to resolve all of this, but ONAP started off with fatal architectural flaws that my own (yes, extensive) experience with network operations says can never be corrected once put into place. Of the 88 operators I’ve engaged with, 57 say they are totally opposed to ONAP, 20 say they view it negatively, 8 say they have no view, and three say they have either adopted it or are trying it. Interestingly, ONAP was launched as an operator initiative, and its architectural flaws can be attributed, IMHO, to lack of software architecture experience among operators themselves.

What this all means is that for network operators, software is an important focus that so far is almost entirely unfocused. The Nephio project, whose goal is to apply Kubernetes to both network equipment (the NFV ISG’s physical network functions) and virtual network functions, hasn’t corrected all the problems of ONAP and doesn’t address everything in management and operations. Nothing else has really emerged. There is a lot of interest among operators in software, and some actual commitment to having software take a larger role in infrastructure planning, but operators want vendors to lead in transformation, and that’s not yet happening.

On the enterprise side, it’s much harder to even define camps, but if we need to try (the effort could be at least helpful) we could divide our 408 enterprises who offered comments into three—the data-center-centric group of 178, the cloud-centric group of 87, and the VPN-centric group of 143. The division here is based on what is exerting the most planning influence. In software terms, all three camps rate management of QoE at the top of their lists, but they expect to go about it in three different ways.

Companies with a data-center-centric perspective on networking were the rule up to a decade ago or so, and even now are the largest group. They have a fairly static set of remote offices that they connect overwhelmingly through MPLS VPN services, and their use of the cloud is still limited for their own workforce. Most of their network budget is associated with their data center LAN, and it’s driven mostly by application growth, containerization, or both.

This is the group most likely to embrace open-model networking, meaning white-box switching, but their QoE focus is really more likely to be on Kubernetes and on a virtual-network plugin for it. In a LAN, the realistic goal is and should be to bury issues in capacity and connectivity. However, open-model is still a minority strategy; they want to stay the course with their dominant vendor. Even their virtual-network plugin is somewhat likely to come from that vendor, though VMware’s NSX is more popular.

The cloud-centric enterprises are highly focused on supporting applications aimed outside the company, on using the cloud to support their own workforce, or both. With the Internet and the cloud providing the “access network”, this group is migrating slowly (by default, usually) to a cloud-as-a-network model, which means that they really don’t worry about network software except as a cloud-resident SD-WAN component or SASE. Most of their QoE control is exercised via cloud management.

The final group, the VPN-centric ones, are perhaps the most interesting. One reason is that they are under a diverse set of pressures for change. You’re VPN-centric in part because your data center isn’t evolving quickly, which might be an indication that you are doing more in the cloud. You’re also likely to be more dynamic in terms of remote locations, which could also lead you to the cloud, or to an SD-WAN transformation.

VPN-centric enterprises, not surprisingly, are the group most interested in software to monitor and manage the user experience. In part, that’s because the VPNs have an SLA and so there are contractual guarantees that could be enforced to assure QoE. SLA monitoring for enforcement has a long history, and also a history of forcing (or encouraging, if you prefer) operators to monitor their SLAs from the supply side.
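
To make the monitoring point concrete, here’s a minimal sketch of a demand-side SLA check of the kind these enterprises describe. The metric names and threshold values are illustrative assumptions, not drawn from any actual VPN contract.

    # Hypothetical demand-side SLA check for a VPN-centric enterprise.
    # The metrics and thresholds below are illustrative, not from a real contract.

    SLA_THRESHOLDS = {
        "latency_ms": 80.0,   # round-trip latency ceiling
        "loss_pct": 0.5,      # packet loss ceiling
        "jitter_ms": 20.0,    # jitter ceiling
    }

    def check_sla(measurements):
        """Compare measured path metrics (e.g., from synthetic probes run
        from a branch toward the data center) against SLA thresholds.
        Returns a list of (metric, measured, limit) violations."""
        violations = []
        for metric, limit in SLA_THRESHOLDS.items():
            value = measurements.get(metric)
            if value is not None and value > limit:
                violations.append((metric, value, limit))
        return violations

    # Example: a branch probe result that breaches the latency threshold.
    probe = {"latency_ms": 95.2, "loss_pct": 0.1, "jitter_ms": 12.0}
    for metric, value, limit in check_sla(probe):
        print(f"branch-042: {metric}={value} exceeds the contracted limit of {limit}")

The point of the sketch is that the enforcement logic is trivial; what matters, as these enterprises note, is whether the measurements come from the demand side or only from the operator’s supply side.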

Where does this lead us with regard to software-centricity? To two different places, I think. For enterprises, software will surely drive network decisions more, but not network software per se. All the enterprise forces are combining to define “QoE” in a way that depends more on the applications and their hosting than on their connectivity. For operators, it’s going to come down to whether the VPN-centric camp of enterprise buyers is diminished because of an implicit migration to the Internet and the cloud. There is a real chance that the Internet is going to be the universal connection fabric of the future, and if that’s the case then best-efforts wins, and operator service and customer experience management has to find itself a new role, one aimed at a kind of defensive QoE mission to avoid churn.

Let’s Hope Network Vendors have a Secret Agenda https://andoverintel.com/2024/04/22/lets-hope-network-vendors-have-a-secret-agenda/ Mon, 22 Apr 2024 11:42:31 +0000

We all understand the concept of a secret agenda. Most of us understand that the term has a kind of intrinsic taint about it, a notion that the goal is to mislead. Well, we’re right to be cynical, but at the same time we’d all better hope that network vendors have one of those cynically rooted secret agendas right now. Because what they’re saying they are planning would likely lead to disaster.

One of the very visible trends we’ve been seeing in network vendor investor calls and conferences is a stated shift of focus to the enterprise market, to respond to a slowing in network operator spending. Hey, it’s OK that telcos aren’t buying routers like they used to, it’s OK that 5G is entering a kind of nuclear winter. We’ll just sell that sort of gear to enterprises instead, until the inevitable “modernization cycle” kicks in and ignites network operator spending again.

There are two problems here. First, enterprise network spending has been under pressure for decades, and in fact one of the things that has helped boost operator network spending has been the virtual-network trends that have let enterprises trade network equipment capex for service expenses. Enterprises don’t buy routers like operators do. At the very least, a shift to focus on enterprise network equipment would starve out whole product families.

The second problem is that the “modernization” cycles for decades have depended on something more than abstract modernization, for one simple reason. If your business has to do better, be more profitable, every year you either have to earn more revenue or lower your costs. For two decades now, the whole of the Internet age, network operators have been facing a truth that new revenue was getting very hard to find. Thus, cost management. If you had to “modernize” under cost pressure, you needed to spend less to sustain the same services, not more. Thus, vendors who supplied you would receive less.

Then there’s 5G. One of the “something more” things in the world of telco modernization is a new standard that requires new technology. The area where that’s most likely to come along is mobile, like 5G. Obviously, we had earlier “Gs” and with each of them there was an opening to new service revenues. There were plenty of claims about what 5G would drive, but none of them proved a significant benefit. So 5G vendors decided that private 5G would be the answer. That, it turns out, also has two problems.

First, if enterprises needed or even wanted private wireless, couldn’t they have adopted 4G? Then what would drive them to spend again just to get 5G, a technology that couldn’t justify new services for operators? If they hadn’t adopted 4G, why would 5G (with no new service potential) suddenly convince them private wireless was the way to go?

Second, what about WiFi? Everybody uses it already, it’s cheap to buy and easy to manage, and new stuff is always backward compatible with existing devices. OK, it has range issues, but no matter how many buildings you have and where they are, you can stick hubs in the right places and cover all your workers. OK, maybe a few who move around while accessing your network are missed, but that’s what mobile broadband services are for, right? There are some verticals where private 5G makes some sense, but not nearly enough to compensate for declining spending by mobile operators.

You can see why we need a secret agenda here. Any network vendor has to tell a story like “the enterprise market will take up operator spending slack”, but they’d better have a hidden set of people working to actually solve the problems. Is there such a thing, and do they even know what the problems are? That’s the difficult question, and there’s no better proof of that than AI.

AI is a very old concept, older than the majority of people who talk or read about it today. The first work was done in the 1950s and the majority of the underlying concepts were out there by the 1970s. What we’re seeing now is really a dawn of AI mass accessibility, not a dawn of AI. Generative AI is driving the AI bus, but it really doesn’t prove an AI business case, it just demonstrates that if you make something understandable and interesting, people will play with it.

This doesn’t mean that AI couldn’t drive something positive for networking and IT, suppliers and buyers. It just means we aren’t really even looking for it, at least not broadly, and that’s what happened to 5G. Proponents of both focused on what you could do with the technology and not on what would justify it. The latter question is the real problem; it’s not enough that a technology be useful, it has to be enough more useful than the status quo to justify incremental spending.

Why hasn’t the network vendor community solved that real problem? Because it’s not a network problem. Networks are connecting tissue, and the solutions we need aren’t yet there to be connected. IT, hardware and software, will have to lead the way here, and there are a few vendors there (IBM is the most obvious) that could do something. We also have two new hybrid players, Broadcom and HPE, who have both networking and IT elements, and who would thus have motivation to create a whole value organism, if they can see it.

Even some “pure” network vendors might have a shot here. Nokia has shown an interest in the industrial control space and so has Ericsson. Cisco sells servers and platform software. What’s not clear is whether these vendors have even as good a chance of seeing the whole ecosystem as Broadcom and HPE do.

And what is that whole ecosystem, the organism everyone needs? If the heart of that organism is the abstract solution that has the requisite utility, the brain is the application model that’s going to deliver on the solution. I’ve argued that the abstract solution can be inferred by the way that computing investment has evolved, and also that digital twins and real-time applications were the model we should look to for implementation. I’m not saying those two are the only, or even best, answer but some answer is critical.

Analyzing the Poll Results from Telco as a Platform https://andoverintel.com/2024/04/18/analyzing-the-poll-results-from-telco-as-a-platform/ Thu, 18 Apr 2024 11:41:22 +0000

In this earlier blog, I talked about the challenges of the emerging “Telco as a Platform” concept, and referenced a TelecomTV story on their summit on the topic. That story included a poll on the question “What are the main benefits to network operators of a Telco as a Platform strategy?” The poll cited seven points, and I want to use this blog to comment on each. Let me start by saying that my own discussions with telcos are consistent with the poll results, with perhaps some qualifications and comments I interpret from my own interactions, as noted below.

First, create new service opportunities in the enterprise market. It’s pretty obvious that the enterprise market is the only place where a telco could hope to launch a new service based on hosted platform features and have any chance of success. However, there are two issues associated with that market that would seem to argue against there being much of an opportunity for telco as a platform.

The first of these issues is that telco services for enterprises are already under pressure from things like SD-WAN and cloud networking/SASE. If the services were to migrate away to something like SD-WAN over consumer broadband, it could mean that the telco is not in a position to pursue telco platform services successfully.

The second issue is that public cloud providers have already offered most of the kinds of services that enterprises would be interested in beyond basic connectivity, so there’s already established competition in this space. Do telcos believe they can compete with the public cloud providers? Or, as I believe, are they just stuck in a connection-service mindset?

Second, develop new channels to market across the portfolio. Who are these new channels? Again, public cloud providers would seem to be the organizations most engaged with enterprises and therefore most likely to represent new channel opportunities. Unfortunately, as already noted, these cloud providers are competitors to telcos for new services. Could the goal be to offer partners like public cloud providers lower-level communication services to resell through APIs? If so, it would seem to move the telco from being a retail player with full margins to being a wholesale player with only partial margins.

Third, enhance innovation efforts with developers and partners. This seems to me to be the clearest statement possible that the telcos would really prefer their partners to take on the burden of developing retail innovations using wholesale elements that are provided by the telcos. If that’s the case, it’s essential that those wholesale elements not be things that the end customer is already consuming in retail form from the telco (connectivity), or it’s revenue-dilutive. But if that’s the case, why would a public cloud provider even want to partner with the telco when they could provide the same services as the telco could provide and keep all of the money?

Fourth, compete better with global tech companies. “Global tech companies” here obviously means companies like Amazon, Microsoft, and Google, or in other words public cloud providers. So what we’re essentially saying here is that the operators want to launch new services to better compete with the same players that they declared to be partners in some of the previous points. In any event, competing with Big Tech means getting into OTT-type services, and it’s obvious operators really don’t want to do that.

Carrier cloud as a concept was the real opportunity for telcos to get into competition with big tech. They didn’t take that opportunity up. Given that, it seems too late at this point for them to try something like this.

Fifth, monetize their investments in cloud native and automation. What investment are we talking about here? Is it NFV? I would contend the cloud providers have already invested plenty in cloud native, while telco investments have been limited to supporting hosting of their own small set of network features. Telcos would have had the opportunity to deploy edge computing in central offices and other real estate close to the edge. Again, they didn’t take that opportunity up, and it’s questionable now whether they could make the investment without frightening Wall Street to death, and in time to exploit any edge opportunity.

Sixth, create market differentiation from other telcos and techcos. You can’t differentiate at the wholesale level, except by pricing. You also can’t differentiate versus other telcos if all of you are relying on retail providers to leverage your assets in their offerings. If you’re wholesaling to partners, the partners are going to demand API commonality with the other telcos who would also be wholesaling features, because their organization and target market boundaries don’t correspond with those of the telcos. They won’t want to build special versions of their retail service to accommodate API differences.

Seventh, become more attractive to investors and skilled talent. A telco as a platform strategy is not going to make a telco more attractive to investors or to skilled talent. Any successful strategy that raises revenues, cuts costs, and improves profits through some combination of these things would make telcos more attractive. A failure of any strategy, including telco as a platform, would reduce telco attractiveness significantly. Thus, if telcos are intent on pursuing TasaP, they need to be sure they make a success of it. Can they? They only have one shot.

If you only have one arrow, you need to focus on only one target. If there’s anything that’s clear from the results of the poll, it’s that telcos have not collectively settled on one target. Yes, it’s true that a single telco wouldn’t necessarily have all of these objectives, but it’s also true that telcos historically operate as a collective, meaning that it isn’t enough for a single telco to have a unified goal. A group of telcos large enough to set standards has to have a unified goal. I’m not seeing it in these poll results, nor do I see it in my own interactions with telcos.

Telco as a platform is not an end in itself, it’s a means to an end. The end, to telcos, should be improvement in their profits, because that’s the primary goal of any company, or should be. I believe that a rational TasaP strategy is essential for telcos to optimize the use of hosted features in new services, but having one doesn’t select service or feature targets. Before jumping into TasaP, telcos need to do their homework on their future services.

Neutrality and Groundhog Day? https://andoverintel.com/2024/04/17/neutrality-and-groundhog-day/ Wed, 17 Apr 2024 11:39:25 +0000

Well, the FCC is back to the net neutrality drawing board, drawing inevitable comparisons with the classic movie. The link I referenced is a good primer on the concept, written by someone who, like me, is skeptical about the concept. It’s not that I don’t believe we need some regulation, but it’s important that the rulemaking not go too far. Here is a link to the current fact sheet (April 4, 2024) on the activity, and it suggests that the worst risks might not be realized. But the new rule still creates issues that I doubt the FCC will be able to resolve. In addition, if there is an administration change, we can be sure we’ll end up with a replacement rule, and that raises a point I’ll get to below.

I think we can divide neutrality goals into three broad categories. First, there are rules to prevent ISPs from throttling traffic for selfish or opportunistic reasons, which is the issue my first reference link talks about. Second, there are rules that relate to how ISPs settle for peering traffic, and third there are rules that aim to build a fence around one of the first two issues to ensure that an ISP doesn’t dodge the effect of a rule with a clever service strategy.

I have to disagree that Comcast’s decision to throttle peer-sharing protocols to manage the limitations of upload traffic on their cable infrastructure was the dawn of the neutrality debate, while I do agree that it gave a name to it. The real start of the neutrality problem was the launching of the World Wide Web, the concept of web servers and browsers that started with Mosaic, twelve years before the Comcast kerfuffle. The Web made the Internet about experience delivery, which is inherently asymmetrical in traffic terms…and in business terms. The networking needed to connect users is way more expensive than the one needed to connect content, and since the Internet was launched in a bill-and-keep model, that meant that “retail ISPs” were certain to be under business pressure eventually. The question of settlement among ISPs came up within a few years of the Web’s launch, and the solution that came along was peering agreements among ISPs. Peering agreements are the only permitted source of settlement among ISPs and with content providers.

What the FCC has done is first to restore the previous classification of the Internet as a telecommunications service, primarily to give the FCC regulatory authority. In a perfect world, what should happen is a new telecommunications act, but obviously getting that done in the current political climate would be very difficult. The disputes over this point are, IMHO, primarily from parties who don’t want the regulations that follow. The FCC also takes steps to forbear from applying many of the rules (like tariffs) to ISPs that are applied to other telecommunications providers.

Those regulations start with “straightforward, clear rules that prohibit blocking, throttling, or engaging in paid or affiliated prioritization arrangements”, and here we see a mingling of the first of my three rule classes with the third. The original goal of preventing traffic discrimination could be circumvented by introducing payment for better treatment, which when the rest of traffic is “best efforts” has the effect of implicit throttling. You can easily see that big players could pay for delivery of their stuff where startups might find this a burden.

The risk posed here is that there are applications that cannot be reliably supported on best-efforts service. To forbid paid prioritization is to foreclose these applications from using the Internet. Given that the Internet is declared to be a “telecommunications service”, would it not make sense that it be considered the telecommunications service, discouraging alternate services that could only create a form of overbuild? And what happens if such an alternate service proposes to carry what is now traditional Internet traffic, given that it was launched to provide better QoS?

You can also argue that when an Internet user buys gigabit broadband rather than 100 Mbps, they are engaging in paid prioritization. Is it inconsistent to allow them to do that, but to say that paying for a “fast lane” or specific QoS is potentially anti-competitive? In early discussions on this topic, it was suggested that the rule allow users to pay for a fast lane, but not content providers, on the likely correct theory that it was this third-party payment that posed the real risk. Then, of course, you have to prevent content players from reimbursing their users out of their own fees.

The proposed order does recognize that “edge providers” exist and may need special services beyond basic Internet access service (BIAS, as the FCC terms it). This portion seems to at least attempt to address IoT/M2M applications and perhaps edge computing as well. The FCC seems to aim at allowing at least a “special lane” for edge traffic, but says that what creates the difference is the user of the service, which would suggest that if you used an “edge provider” service to deliver something the Internet would customarily handle, you are now serving an Internet user, thus making it a BIAS, and you lose your ability to offer special handling.

We now come to the ever-popular topic of Internet peering. There have been proposals over time that include everything from having no peering charges at all (open free peering, in other words) to having tariffed peering, and even to making traffic-oriented settlement mandatory. The FCC seems to be taking the position that the current commercial peering process, where peering agreements, including charges, are up to the companies themselves, is the preferred strategy. This is certainly the least market-disruptive approach that could be taken, but it obviously leaves questions open regarding the relative market power and health of the ISPs that support retail customers versus those that support content providers. IMHO, the FCC peering stance suggests at least that they would not favor big-tech subsidies of the type EU telcos have sought.

What this order would do, as drafted, seems to be returning Internet regulation to a prior state, restoring “neutrality”. However, most ISPs and tech companies tell me that they didn’t really see much difference between the “non-neutral” period and the “neutral” one, and even the FCC notes that state neutrality rules, and the chance that rules would change every time the party in power changed, have dampened any desire to specialize Internet services away from the long-term practices that have emerged. To me, this means that any fundamental policy shift can’t be imposed by the FCC, but will require another Telecom Act to amend or replace what we got in 1996.

Telco as a Platform? https://andoverintel.com/2024/04/16/telco-as-a-platform/ Tue, 16 Apr 2024 10:55:56 +0000

TelecomTV had its first “Telco as a Platform” event, and the concept is surely interesting. Of course, these days, we can’t assume that “interesting” means or even implies “helpful”. So, let’s look at the referenced article and see what might have been a hit, and of course what might have been missed. For those who want to look at some session videos, here is the link. There’s also a nice summary of the concept on LinkedIn. In this blog, I’ll look at the concept, and later this week, I’ll review a poll taken at the event.

I’ve been an advocate of looking at new services as something based on middleware and APIs, and I think there’s some of that in the concept of Telco as a Platform. I’ve not been enthusiastic about the notion of operators exposing 5G and other service elements via APIs (which I think the LinkedIn piece would consider “Telco Network as a Platform”), and that’s in there too. I’ve been dismissive of the idea that operators could raise profits by selling these features to others, and that’s also in there. Can we frame something uniformly good out of this mix?

Let’s start by saying that if future services are highly dependent on hosted features rather than purely on appliance/device behavior, then some sort of organized middleware platform is essential, as I noted in a blog last week. Without that, development of features and operationalization of services would be highly inefficient. As I said, I think this concept is within TasaP borders, so why can’t I just say that this initiative is a good thing? It’s the impact of the other stuff.

Any references to extending 5G value (or 6G value) via APIs makes me nervous, because 5G has been around long enough to have demonstrated the value of the concept, if it were real. It has not done so. I think that this point illustrates one of the pitfalls that TasaP advocates are falling into, the notion that third-party innovation (meaning, usually, OTT innovation) is a good way of getting new service revenue without the telco having to get into that uncomfortable space themselves. Yet if you look at the poll taken of the attendees of TelecomTV’s event, I think it’s clear that telcos are looking to have OTTs pull telco chestnuts out of the fire.

The overwhelming majority of so-called new telecommunication services that have come along so far from telcos themselves have been nothing more than billing modifications to existing services. Think of things like turbo buttons and temporary high-speed additions to service plans as examples of this. Customers are interested in these things to the extent that they save money for the customer overall, which means they’ll lose money for the telco overall.

Since the APIs that are associated with things like 5G or with any other existing service tend to be APIs that expose existing features, it’s difficult for these to really change the game very much in terms of the service value proposition. This is why I believe that any attempt to exploit current infrastructure through APIs is doomed to failure. Not only that, these initiatives tend to focus operators, reasonably, on the safe step of going forward with the familiar rather than developing the new.

How about something like network functions virtualization, or NFV? It’s true that NFV would theoretically permit the creation of new features, features that don’t exist today, but these features would have to be valuable before exposing them is going to be profitable, and it’s not clear that NFV has really demonstrated any such profit opportunity up to now.

If we can’t identify new revenue opportunities associated with telco as a platform, can we at least identify cost savings? The answer to that, I think, is a little bit murky because, up until now, relatively few service elements are actually hosted and therefore susceptible to a platform strategy. First and foremost, telecom services are made up of connectivity and built from connectivity. That means network equipment is still the dominant element in these services. Even if we were to host a few feature elements, as would be the case with something like 5G, it’s not clear that these elements would present enough of a platform opportunity to drive any real change in operations. Making them more efficient, operationally speaking, would impact a relatively small part of the service.

What would be necessary to make the telco as a platform concept successful? First, I think you’d have to be talking about a greenfield opportunity rather than a simple enhancement to existing services and existing infrastructure. Second, I think you would need a large number of hosted features rather than features embedded in an appliance. Meeting both these conditions would be problematic for two reasons.

The first problem is fear of the unknown. Imagine the telco, which has for decades been building services and infrastructure by connecting devices, suddenly deciding to use a large number of hosted features rather than those familiar tools. That problem is magnified if we’re talking about a brand new service, something that has not been offered before and which is therefore frightening to them on its own.

The second problem is technical imagination. If we’re going to build a large number of features, it stands to reason that those features are going to have common elements, and those common elements are themselves going to have to be exposed as features through APIs. If we were to fail to do this, then the building of the features themselves would become operationally and developmentally inefficient. But can a telco, with relatively little service-building experience, possibly anticipate the middleware that would be necessary to create and expose all of these new features?

Here’s a basic and uncomfortable truth. Before telcos can talk about new services based on features, and TasaP to create and support those new services, telcos are going to have to become software-oriented companies, and that’s going to take a long time. Ironically, initiatives that try to advance TasaP may actually be hurting the concept, because they force consideration of a second step before the first step has been taken, or even could hope to be taken.

Where are We with Digital Twins in General, and for Networks in Particular? https://andoverintel.com/2024/04/15/where-are-we-with-digital-twins-in-general-and-for-networks-in-particular/ Mon, 15 Apr 2024 11:37:39 +0000

OK, I admit that I have a significant interest (some might call it an obsession) with digital twin technology. I think it’s justified, though, because I truly believe that the next critical step in information technology is its integration with the real world, with our work and our lives. To make that happen, to make it even possible, demands we be able to synchronize IT with real-world systems, allowing IT to then influence things. That means digital twin technology.

Some of the real-time applications of digital twin technology may not seem immediately like real-world stuff, and one such application is networking. Networks, utility grids, and similar things are obviously real-world, though, even if we focus not on the people who are involved but on the cooperative element behaviors that form their routine operation. In fact, we could argue that things like air corridors, highway systems, assembly lines, companies, and smart structures or cities are too.

For cooperative systems like networks, a digital twin brings in the important notion of the whole, the mission, the context of things…the sort of thing that air traffic controllers call “getting the picture”. That’s the essential basis for cooperation, and you can argue (and I would) that without it you can’t understand or manage cooperative systems because the element behaviors influence the behavior, and behavioral goals, of other elements. Adaptive routing has to reflect this interdependence by spreading notices of change (topology updates) until the network has “converged” on a new state.

A lot of the things that have been cited as AI benefits in networking (see my blog HERE) could not be realized at all except as an AI cooperation with a digital twin of the network. In fact, digital twin technology in combination with traditional network operations tools and practices, could realize more of those benefits than AI alone could, and I’ve described my view of the process in the blog referenced in the last sentence. Thus, a network digital twin is a logical first step in network transformation.

There is vendor interest in network digital twins already. Ericsson has a nice primer on the topic, and so does Nokia. Forward Networks has an ebook with good insights as well, and NVIDIA did a piece on using one in IT training. Looking beyond vendors, there was an IEEE call for papers on the topic last year, and an often-cited IEEE paper on the topic was published in 2021. The IETF also did a concepts paper on network digital twins.

Despite the fact that this is hardly a new technology and that it already has some vendor support, I don’t as yet have any comments from enterprises or operators suggesting it’s in use. I did have a nice chat with a very savvy operator technologist on the topic, one that at least raised what a big operator sees as issues and opportunities.

The first point my savvy friend made was that status synchronization of a network digital twin is a challenge, but what’s really difficult is doing something with what you learn. The value of the whole concept, in fact, is directly proportional to the granularity and speed at which you can exercise network control. The operator technologist said either SDN or extensive use of MPLS TE with explicit path selection is essential to actually leveraging a network digital twin.

The next point was a bit more complicated, and perhaps subjective in application. Network complexity determines whether the goal is to define multiple alternate network states to be selected, or to analyze conditions to calculate an alternate state. A complex network, said the techie, isn’t necessarily one that has a lot of trunks and nodes; it’s one that has strong interdependency factors. If a problem in one spot has a very limited scope of impact, then the network is not complex in the alternate-state sense. If a new state is likely to have extensive impact, then the network is considered complex.

By way of example, the least complex network is one that is fully meshed with large-capacity trunks, and where there are no obvious external factors likely to create multiple simultaneous failures. In such a situation, you’d expect an alternate state would likely require only minimal reconfiguration. A complex network example is a network with few alternate paths between nodes, which would mean that any new routing would likely impact many nodes and trunks, and where external factors like power or weather could generate distributed simultaneous failures.
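
To illustrate the “scope of impact” idea, here’s a rough sketch that asks how many node pairs see their path change when a single trunk fails. The toy topology and the way complexity is scored are invented for the example; a real analysis would work against live routing data rather than a hand-built graph.

    from itertools import combinations
    from collections import deque

    # Toy topology: node -> set of neighbors (undirected trunks).
    # This graph is invented purely for illustration.
    TOPOLOGY = {
        "A": {"B", "C"}, "B": {"A", "C", "D"},
        "C": {"A", "B", "D"}, "D": {"B", "C", "E"},
        "E": {"D"},
    }

    def hops(adj, src, dst):
        """BFS hop count between src and dst, or None if unreachable."""
        seen, queue = {src}, deque([(src, 0)])
        while queue:
            node, dist = queue.popleft()
            if node == dst:
                return dist
            for nxt in adj[node] - seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
        return None

    def impact_scope(adj, a, b):
        """Fraction of node pairs whose path length changes (or that become
        unreachable) if trunk a-b fails; a proxy for interdependency."""
        degraded = {n: set(adj[n]) for n in adj}
        degraded[a].discard(b)
        degraded[b].discard(a)
        pairs = list(combinations(adj, 2))
        affected = sum(1 for s, d in pairs
                       if hops(adj, s, d) != hops(degraded, s, d))
        return affected / len(pairs)

    # A trunk whose failure touches most node pairs suggests a "complex"
    # network in the sense the operator described.
    print(impact_scope(TOPOLOGY, "D", "E"))   # 0.4: every pair involving E

A fully meshed, high-capacity network would score near zero on a measure like this for any single trunk, which matches the operator’s “simple” case; sparse topologies with few alternate paths score high.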

In the simplest networks, digital-twin technology may have little benefit, while in complex networks it may be essential to effective operation. As networks move from simple toward complex, the optimum use of digital twins evolves from simulation to predefine alternate states to dynamic reconfiguration. The simulation might be a place where AI would be used, and obviously AI could be used to support dynamic reconfiguration.

The final point is profound and often overlooked. It’s essential that the network digital twin remain in contact with the network. All of these proposed benefits depend on having the digital twin of the network properly synchronized with the network, which isn’t as straightforward as it might seem. How is the telemetry passed if there’s a failure, and how is control exercised? My contact points out that in the complex configurations where digital twinning would be most valuable as an operations tool, it is very likely that some event/status telemetry would be lost, and that management connectivity would likely be compromised.

Having the network fall back on adaptive connectivity is one possibility, but it delays things and risks having adaptive mechanisms undoing or at least competing with digital-twin control. Using broadcast mechanisms or an alternative channel (wireless is what my tech friend suggested) is better.
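
Here’s a small sketch of the kind of staleness guard that idea implies: the twin only drives the network while it can show it’s synchronized, and otherwise hands off to a fallback. The timeout value, the element model, and the push/fallback hooks are all assumptions for illustration, not any vendor’s API.

    import time

    STALE_AFTER_SECONDS = 10.0   # illustrative; real values depend on telemetry cadence

    class TwinElement:
        """Minimal twin of one network element: last-known state plus a timestamp."""
        def __init__(self, name):
            self.name = name
            self.state = {}
            self.last_update = None

        def ingest(self, telemetry, now=None):
            """Apply a telemetry report from the real device to the twin."""
            self.state.update(telemetry)
            self.last_update = now if now is not None else time.time()

        def is_stale(self, now=None):
            """True if the twin may no longer mimic the real element."""
            now = now if now is not None else time.time()
            return self.last_update is None or now - self.last_update > STALE_AFTER_SECONDS

    def control_decision(element, desired_state, push_fn, fallback_fn):
        """Only drive the network from the twin while it is demonstrably in
        sync; otherwise hand off to the fallback (adaptive behavior or an
        out-of-band management channel)."""
        if element.is_stale():
            fallback_fn(element.name)            # e.g., revert to adaptive routing
        else:
            push_fn(element.name, desired_state)  # e.g., via an SDN controller adapter

    # Usage sketch (hooks are placeholders):
    # twin = TwinElement("core-router-7")
    # twin.ingest({"if-3": "up", "load": 0.62})
    # control_decision(twin, {"if-3": "reroute-to-b"}, push_fn=print, fallback_fn=print)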

This last point is a potential issue for all digital twins. The worst thing that can happen to a digital twin is that it fails to mimic the state of the system it represents, creating what in AI would be called a “hallucination”. The second-worst thing would be to be unable to manipulate the state of the real world as intended. In any digital twin system, but in particular systems intended to sustain communication, we’ll need a reliable way to prevent this disconnect problem.

The Role of AI in Networking: Enterprise-Positive, Operator-Maybe https://andoverintel.com/2024/04/11/the-role-of-ai-in-networking-enterprise-positive-operator-maybe/ Thu, 11 Apr 2024 11:46:21 +0000

Is networking the next bastion of AI hype, or is there actually a lot of value we could harvest there? That’s an important question, one this Light Reading article shows is already being asked. It’s true, as the article suggests, that mobile operators are hopeful, and also true that so are operators overall and even enterprises. If all this hope converges on some common AI missions, that convergence might be an AI proof point. But is it converging, and is there any other validation out there? Do operators, and network vendors, have an opportunity related to AI? It’s complicated.

The article offers three areas AI might target, “performance and efficiency and drive profitability”. The first two seem to me to be dependent on an AI mission in capacity planning and/or traffic management, and the last would have to target revenue, capex, or opex. I would argue that capex savings would have to derive from improvements in performance or efficiency, so that leaves revenue augmentation or opex reduction.

I think AI could likely be a benefit in capacity planning, and since applying it to that mission wouldn’t likely incur a major cost, I think it would be a worthwhile step for AI and operators, and even a tool to help enterprises size data center trunks and branch access connections. However, enterprises agree with my view that the value to them would be limited unless major changes in network traffic were contemplated. That matches operator views that this particular idea is most useful in greenfield situations. One that many mentioned was FWA, where node placement optimization is considered a significant issue.
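
For a sense of how modest this mission can be, here’s a minimal trunk-sizing sketch: fit a trend to utilization history and flag when the forecast crosses a planning threshold. The sample history and the 70% threshold are invented for illustration; an AI tool would presumably use richer models and real telemetry, but the decision it supports looks much like this.

    # Toy trunk-sizing aid: fit a linear trend to utilization history and
    # report when forecast peak utilization crosses a planning threshold.
    # The sample data and the 70% threshold are illustrative assumptions.

    PLANNING_THRESHOLD = 0.70   # plan an upgrade when forecast exceeds 70% of capacity

    def linear_trend(samples):
        """Least-squares slope/intercept over (month_index, utilization) pairs."""
        n = len(samples)
        xs = list(range(n))
        mean_x = sum(xs) / n
        mean_y = sum(samples) / n
        cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
        var = sum((x - mean_x) ** 2 for x in xs)
        slope = cov / var
        return slope, mean_y - slope * mean_x

    def months_until_upgrade(samples, horizon=36):
        """First future month where forecast utilization crosses the planning
        threshold, or None if it stays below it within the horizon."""
        slope, intercept = linear_trend(samples)
        for month in range(len(samples), len(samples) + horizon):
            if slope * month + intercept >= PLANNING_THRESHOLD:
                return month - len(samples)
        return None

    # Twelve months of peak utilization on a data center trunk (fraction of capacity).
    history = [0.41, 0.43, 0.44, 0.47, 0.48, 0.50, 0.52, 0.53, 0.55, 0.57, 0.58, 0.60]
    print(months_until_upgrade(history))   # -> 5: about five months of headroom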

Traffic management via AI faces two primary questions. First, how much could be accomplished without adopting a central control paradigm like that of SDN? You can’t do local traffic management to achieve any benefits beyond dealing with a local failure or congestion. MPLS offers traffic engineering benefits, but adaptive routing would seem to defeat any AI attempts at network-wide optimization through MPLS, beyond initial route planning. Second, is optical capacity so cheap that you couldn’t justify spending on optimizing how you use it? And the most capex-intensive part of network infrastructure is the access part, where there are rarely many alternate paths for traffic management to choose from. Operators say this AI mission is likely effective only in greenfield applications, too.

Are we out of options here? I don’t think it’s that dire, but I do think that we may be proving that AI isn’t a panacea for operator profit problems or enterprise network challenges. There are two broad AI opportunities related to networking, and one really related to the business of networking. Of course, they all have potential issues.

Security is perhaps the biggest network-specific opportunity for AI, one that exists for service providers and enterprises alike. A dozen enterprises and twice as many operators say that they’ve determined that almost all the significant security issues they’ve ever encountered or even heard of leave a traffic footprint, and a majority could be at least addressed and minimized by network reaction. Operators believe that network security can be sold, and enterprises agree.

The problem is collecting the knowledge. It’s not practical, and may not be possible or even legal, to inspect packets in detail to identify problems, but looking at the patterns in traffic and in interactions is another matter, one where AI could clearly help. One enterprise told me they’d minimized a ransomware attack by detecting a pattern of spread, which generated an unusual relationship between user activity and database service activity. In this case, finding the pattern was a happy accident, but the company is now exploring AI options to solidify the strategy.

You can improve AI’s potential in security if you have the ability to recognize “sessions” or relationships among users and applications. You can improve it further if you use policy to manage what sessions are allowed. Most malware manifests its presence by attempting connection, many of which are prohibited, and just knowing this is happening and detecting a spread is a solid signal of a problem. It also identifies the specific risk actors, allowing them to be cut off.
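
As a simple illustration of the session-policy idea, here’s a sketch that checks attempted sessions against an allowed list and treats a growing fan-out of prohibited attempts as the “spread” signal. The policy entries, role names, and threshold are invented; a real system would derive them from observed session baselines.

    from collections import defaultdict

    # Toy session-policy and spread-detection sketch. The policy entries and
    # the fan-out threshold are invented for illustration.

    ALLOWED_SESSIONS = {
        ("web-tier", "app-tier"),
        ("app-tier", "db-tier"),
        ("workstation", "web-tier"),
    }

    SPREAD_THRESHOLD = 5   # distinct prohibited destinations before we alarm

    prohibited_fanout = defaultdict(set)

    def observe_session(source_role, dest_role, source_id):
        """Check an attempted session against policy and track the fan-out of
        prohibited attempts per source, which is the 'spread' signal."""
        if (source_role, dest_role) in ALLOWED_SESSIONS:
            return None
        prohibited_fanout[source_id].add(dest_role)
        if len(prohibited_fanout[source_id]) >= SPREAD_THRESHOLD:
            return (f"possible spread from {source_id}: "
                    f"{sorted(prohibited_fanout[source_id])}")
        return f"prohibited session {source_role}->{dest_role} from {source_id}"

    # A workstation suddenly probing roles it never talks to is the kind of
    # footprint the enterprises described.
    for dest in ["db-tier", "backup", "hr-apps", "finance-apps", "dev-tier"]:
        alert = observe_session("workstation", dest, "ws-1017")
        if alert:
            print(alert)

The AI angle is in replacing the hard-coded policy and threshold with learned baselines of normal session behavior, so that suspicious patterns stand out even when no explicit policy is violated.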

AI could be especially valuable as an alternative to using explicit session policy violations to detect suspicious behavior. The session patterns of users or applications could be enough in themselves to identify a problem in the making. However, this mission may pose challenges for operators, because enterprises are wary of having anyone inspect their data to the point needed to achieve session awareness, so traffic volume alone might be the only path.

Problem detection and analysis is the second network-specific AI mission with good potential. Operators who see a managed service opportunity view AI as a way to offer a valuable service without incurring a lot of human cost, improving both sales and margins. Enterprises list this as their own primary area of AI interest in networking, both for current and proposed/planned AI use. It does require a lot of visibility into the network to be effective, which so far has tended to make it most successful in enterprise-deployed scenarios.

Managed services are nothing new, and the obvious truth is that were they a transformational revenue opportunity, operators would have already realized it. The question is whether AI could change that picture by lowering the cost so significantly that managed services could look compelling to buyers and still be profitable to sellers. I think it could, but if that were the case, couldn’t enterprises buy the AI tools themselves? Enterprises say that the visibility risks would lead them, by more than a five-to-one margin, to pick their own AI over a managed service. Might AI be useful to SMBs, though? Yes, but operators are hesitant to target the space at this point, citing the price sensitivity known to exist there.

What about that traditional-business opportunity? Well, operators and enterprises are all businesses, all with customers to sell to, accounting, employees, and so forth. All of these activities could be improved through the use of AI, but particularly the area of prospect/customer relations. However, these applications are more difficult to frame as services, and because they aren’t networking-specific they’re beyond the scope of this blog.

How can we sum up the potential of AI in networking? There are compelling value propositions for enterprises, and Juniper has already demonstrated that. HPE might push it even further. There are some interesting opportunities for operators, but probably none that would be transformational, and many of the proposed missions there have minimal credibility. About a third of operators told me they thought AI could help lower opex, but none thought it was the long-term answer to their profit-growth prayers. I don’t think so either.

Seeking a Realistic Model for Feature Hosting https://andoverintel.com/2024/04/10/seeking-a-realistic-model-for-feature-hosting/ Wed, 10 Apr 2024 11:38:42 +0000

We seem all too often to miss our chance to ask important questions while there’s still time to plan out the optimum answer. So it is, I think, with the concept of service feature hosting. We haven’t really even tried to define what a service feature is, in fact. And while the decade-old Network Functions Virtualization (NFV) effort dealt with how to host virtual appliances, it hasn’t reached a satisfactory solution to even that problem, much less the very different challenge of feature hosting.

Hosting anything demands addressing the issues of component placement, deployment and redeployment, component connectivity, and management to any implicit or explicit SLA. Hosting network features requires addressing all of these issues in conjunction with the behavior of the network’s dedicated devices in general, and in particular in conjunction with router configuration (especially BGP and BGP/MPLS).

All the hosting demands are exacerbated by the issue of resource equivalence. If, within a pool of resources, the operations practices have to be adapted to specifics of a resource, you don’t really have a pool at all. Hosting resources can differ based on either hardware or platform software, meaning the operating system, middleware, and system tools available. In addition, there is a risk that different feature implementations would demand different operations tools and practices; VMs versus containers or stateful versus stateless implementations are obvious examples.
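
To show what resource equivalence means in practice, here’s a small sketch that groups a hosting inventory into equivalence classes based on the attributes that would force different operations practices. The attribute names and the sample inventory are assumptions for illustration.

    from collections import defaultdict

    # Sketch: group hosting resources into equivalence classes by the
    # attributes that would force different operations practices. The
    # attribute names and sample inventory are invented for illustration.

    EQUIVALENCE_KEYS = ("cpu_arch", "accelerator", "os", "runtime")

    def equivalence_class(host):
        """A pool is only a pool if its members share the same class."""
        return tuple(host.get(k, "none") for k in EQUIVALENCE_KEYS)

    def build_pools(inventory):
        pools = defaultdict(list)
        for host in inventory:
            pools[equivalence_class(host)].append(host["name"])
        return pools

    inventory = [
        {"name": "edge-01", "cpu_arch": "x86_64", "os": "linux", "runtime": "containers"},
        {"name": "edge-02", "cpu_arch": "x86_64", "os": "linux", "runtime": "containers"},
        {"name": "edge-03", "cpu_arch": "arm64", "os": "linux", "runtime": "vms"},
    ]

    for cls, members in build_pools(inventory).items():
        print(cls, members)
    # Two classes come out of this inventory, so operationally there are two
    # pools rather than one -- the equivalence problem in miniature.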

It doesn’t stop with deployment either. One of the major challenges relating to this feature-to-network coordination is that adaptive routing might well move traffic away from features, and if network traffic topology changes required massive relocation of hosted feature elements, the result could be a significant overloading of operations tools and a major SLA violation.
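A hedged sketch of that coordination problem: given a hypothetical mapping of features to sites, and the sites a rerouted flow now transits, count how many features would be stranded and whether the moves exceed what operations could absorb. The data structures and the threshold are illustrative assumptions only:

```
def relocation_impact(features, new_paths, max_moves=10):
    """Estimate how many hosted features a routing change would strand.

    features: dict mapping feature name -> site currently hosting it.
    new_paths: dict mapping feature name -> set of sites the rerouted traffic now transits.
    Both structures, and the max_moves threshold, are illustrative assumptions.
    """
    stranded = [f for f, site in features.items() if site not in new_paths.get(f, set())]
    return stranded, len(stranded) > max_moves

stranded, overload = relocation_impact(
    {"firewall-a": "site1", "cdn-cache": "site2", "dpi": "site3"},
    {"firewall-a": {"site4"}, "cdn-cache": {"site2"}, "dpi": {"site5", "site6"}},
    max_moves=1,
)
print(stranded, overload)  # ['firewall-a', 'dpi'] True
```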

All of this has to be assessed in the context of the nature of the service whose features are, at least in part, hosted. Today, telecom services are almost entirely related to connection—with others, with applications, or with experiences. If our, and telcos’, definition of “services” expands, that would surely take it more into the realm of applications and experiences. In fact, this expansion (to the extent it happens) is surely the largest possible new source of feature hosting requirements. Given the multiplicity of applications and experiences that might be targeted, it’s easy to see how capex and opex efficiency could be compromised. If, as I think is likely the case, many of the targets would introduce similar requirements (IoT, for example), there is also a risk that independent development might create multiple divergent implementations, wasting time and money.

All of the challenges described here so far could be addressed with an organized hosting model that would standardize development and operations practices. There are two possible approaches, each represented by a strong example that we’ll use to compare them, the “IoT approach” and the “edge computing approach”.

The IoT approach says that the application/experience targets form a number of distinct classes, within which there is a significant opportunity to identify common elements that could become middleware and support standardized operations tools and practices. This inherent commonality means that a lot can be accomplished both in standardizing and accelerating development and in promoting resource equivalence to enhance capex and opex efficiency.

The problem with the IoT approach is finding those distinct classes and dealing with what falls outside them all. IoT applications, as a class, are unified at the mission level (real-world, real-time) and at the work level (event flows from sensors launch control flows to effectors). My own experience in the space suggests that the total "common-element" code in an IoT application (excluding any transaction processing it triggers) is greater than the application-specific code. However, this degree of class cohesion is rare, and to achieve any of it, it's usually necessary to define classes in a more granular way, which dilutes the benefit by multiplying the number of models needed. Realistically, going with the IoT approach likely means that either service expansion would be limited to a small number of targets or some strategy would be needed to host features as efficiently as possible even when they don't fall into a specific target application class.
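To illustrate what that common-element code might look like, here is a toy event-to-control dispatcher of the kind the mission and work levels imply; the class and event names are hypothetical, not drawn from any real middleware:

```
class RealTimeBus:
    """Toy stand-in for the 'common element' in an IoT-class application:
    sensor events trigger control flows to effectors. All names are hypothetical."""

    def __init__(self):
        self._handlers = {}

    def on(self, event_type, handler):
        """Register a control-flow handler for a sensor event type."""
        self._handlers.setdefault(event_type, []).append(handler)

    def emit(self, event_type, payload):
        """Deliver a sensor event to every registered handler."""
        for handler in self._handlers.get(event_type, []):
            handler(payload)

# The application-specific part reduces to the handlers themselves.
bus = RealTimeBus()
bus.on("door_open", lambda event: print(f"actuate lock at {event['site']}"))
bus.emit("door_open", {"site": "dock-7"})
```

The point is that the bus, not the handlers, is the reusable class asset; the application-specific code shrinks to the handlers.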

The alternative is the “edge computing” model, which is really based on the notion that there exists a new and different platform strategy that could be deployed to attract applications/features, and by doing so tend to standardize requirements and implementations. If this is to have any practical distinction versus the IoT approach, we must presume that its goal is to define a platform model whose features support both a broad range of applications and specifically a set of applications not yet substantially realized, a “green field” whose targeting avoids time-consuming and costly rewrites of existing application elements.

Edge computing is an obvious example here, and also a demonstrator of the limitations of this approach. The presumption inherent in "edge computing" is that there exists a group of applications that are highly sensitive to latency and at the same time involve users who are somewhat distributed yet related in behavior, such that local computing resources can't be used efficiently and cloud computing can't fulfill latency requirements. The obvious issue here is that the only major type of application that's latency-sensitive is IoT, which means both approaches to a hosting model converge on a single application.
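The latency argument can be reduced to a simple tier-selection rule; the latency and cost figures below are assumptions for illustration, not measurements:

```
def choose_tier(latency_budget_ms, tiers=None):
    """Pick the cheapest hosting tier that still meets a latency budget.

    tiers: (name, typical_round_trip_ms, relative_cost) triples; the numbers
    below are purely illustrative assumptions, not measurements."""
    if tiers is None:
        tiers = [("on_premises", 2, 3.0), ("edge", 10, 2.0), ("cloud_region", 40, 1.0)]
    feasible = [t for t in tiers if t[1] <= latency_budget_ms]
    return min(feasible, key=lambda t: t[2])[0] if feasible else None

print(choose_tier(15))   # 'edge'
print(choose_tier(100))  # 'cloud_region'
```

If the only workloads with tight latency budgets are IoT workloads, a rule like this only ever picks the edge for IoT, which is the convergence problem just described.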

So what? Well, beyond the obvious risk of having only one application on which to base the future, what's the impact? I think the real problem is what this unhappy convergence does to the evolution of the middleware, and what that means in turn for development and operations.

Operators overall are unhappy with the idea that there's only one application left to drive revenues, but we know from their past obsessive focus on device registration, their only real contribution to IoT, that they don't want to be in the application business at all. Faced with a multiplicity of applications they're afraid of, and with a convergence between the biggest and most credible of that group (IoT) and a simple, generalized-infrastructure alternative (edge placement of computing assets), they're already lining up to jump to the latter.

And that's a problem, for two reasons. First, if the edge is really nothing more than a relocated subset of the cloud, then operators in general and telcos in particular are already at a crushingly fatal disadvantage versus the cloud providers. Second, if real edge applications exist at all, why have they not already been validated, given that they face no technical barrier the cloud hasn't already addressed?

My personal view, which I hasten to say I cannot validate with spontaneous telco input, is that real-world, real-time services and applications are the only new thing on any tech horizon. IoT is likely an element in most of these, but the broader category of real-real (world and time) is the real opportunity, because it's broad enough to represent significant revenue and has enough common elements to define a useful set of middleware for development and tools for operations.

Could a standard advance this concept? I don’t think so; it really demands an open-source project to move it effectively, and we have no data on whether open-source could unlock capex. Such a project could in theory spawn a standard, too. That combo might be a point of hope, but it would still take time, and time is running out.

Taking Another Look at Telco Health https://andoverintel.com/2024/04/09/taking-another-look-at-telco-health/ Tue, 09 Apr 2024 11:22:06 +0000 https://andoverintel.com/?p=5765 There is just no end to the bad news, or at least the bad stories, regarding the health of the telcos. Light Reading had an interesting piece, one that fairly links the problem to mobile hype, meaning 5G and 6G. The linkage is fair, but it doesn't tell the whole story. The piece also seems to suggest ways the crisis is being mitigated, many of which have IMHO zero chance of working on any scale.

One key point in the story is that while telcos are clearly facing a profit problem, the companies facing a crisis are the equipment vendors. Smaller players with a specific telecom focus have been going bankrupt, and large players have been cutting staff, consolidating, or both. Juniper, arguably the number-two generalized network equipment vendor behind Cisco, recently reported its shareholders had “overwhelmingly” approved its acquisition by HPE.

Telcos tend to relate their problems to a steady decline in “profit per bit”, which some have suggested is an unreasonable metric that distorts the issue. I don’t disagree, but it does illustrate what I believe to be the basic problem, which is that the Internet and consumer broadband have transformed the mission that telecom is expected to serve. This necessitates the transformation of the telecom business model, and so far the industry hasn’t gotten this second transformation right.

We often call telcos "CSPs", the acronym standing for "communications service provider", but today telcos aren't really providing communications services in most cases. Instead they are providing delivery for digital experiences, and the distinction is critical. Older services tended to charge by connection, but modern consumer broadband doesn't "see" connections, it carries traffic. Given that most consumer broadband is sold on a flat monthly fee rather than by usage, the more valuable the Internet is, the more traffic is generated and the lower the profit earned per bit.

The reason why profit per bit may not be a proper way of looking at ISP health is that most of the capital cost of consumer broadband lies in making the connection to users, via spectrum or physical media, and most of opex is related to either access or per-customer (service) issues. Thus, while revenue doesn't rise with traffic, neither does cost rise in proportion to traffic. If profits at telcos are under pressure, it's not entirely due to Internet traffic growth. And telco profits, and their stock prices, are under pressure: over the last five years the two big US operators' shares are down over 28% while the S&P has doubled.
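To make that arithmetic concrete, here is a minimal sketch with purely assumed numbers (a flat monthly fee, a mostly fixed access cost, and a small per-GB traffic cost); none of it is drawn from any operator's actual figures:

```
def per_bit_economics(monthly_fee, monthly_gb, access_cost, traffic_cost_per_gb):
    """Illustrative arithmetic only; every input is an assumption, not a reported figure.

    Revenue is a flat monthly fee; cost is a fixed access component plus a
    small per-GB traffic component."""
    bits = monthly_gb * 8e9
    revenue_per_bit = monthly_fee / bits
    margin = monthly_fee - (access_cost + traffic_cost_per_gb * monthly_gb)
    return revenue_per_bit, margin

# Traffic doubles while the fee and access cost stay flat:
print(per_bit_economics(60.0, 300, 35.0, 0.01))  # (~2.5e-11, 22.0)
print(per_bit_economics(60.0, 600, 35.0, 0.01))  # (~1.25e-11, 19.0)
```

Doubling traffic halves revenue per bit, but under these assumptions the margin falls only modestly, which is why profit per bit overstates the traffic problem even though the overall profit pressure is real.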

Another interesting point that stock analysis shows is T-Mobile's success where its traditional telco competitors failed. Its shares gained almost 129% over the same five years, beating the S&P. Could it be that wireline broadband is less of an issue for it? The majority of Internet traffic growth is video, and home video is usually delivered over physical media. Mobile service has long been more profitable than fixed, anyway. Perhaps headcount matters too; both AT&T and Verizon have far more employees than T-Mobile, with AT&T having the most.

I have to wonder if my demand density metric is at play here, and if it is just what that means for telcos in the long run. Operators with a low demand density have a greater challenge making infrastructure profitable, and a part of that is surely that outside operations organizations are less efficient because they are spread out.

None of these factors that influence telco profits and capital spending seem susceptible to positive changes, not in the near term and perhaps not in the longer term either, at least not unless the overall service model changes. There are financial analysts (the cited article above quotes one) who believe that an orderly modernization or investment cycle will start shortly, but I don’t believe it.

It's not that telcos won't modernize or invest, but that there has to be a validation of such a decision, and a budget for it. If capital investment, like the purchase of network equipment, has been suppressed for years (which it has), then obviously this validation hasn't happened in that period. What starts it now? It can't be simply a matter of making a decision.

And it isn't. Two things have proven to be valid activators. The first is a new service opportunity with revenue-generating potential. The second is a standard or regulatory requirement whose support is necessary to advance the state of the infrastructure even absent a new service. 5G and 6G are examples of the latter, which is perhaps why these standards have gotten so much attention. But standards without services are a problem, and 5G demonstrated it.

5G got a budget, but it was clear that the required spending would send profits lower unless compensating revenue was generated. People came up with plenty of 5G "use cases", but what could be done with a technology proved (as it often has) very different from what buyers would actually spend money on. It may well be that networking had, during a long stretch where information delivery lagged information availability, gotten used to being what everyone was waiting for. At about the time of 5G, the situation reversed, which meant that before 5G capabilities could be relevant, a whole application ecosystem would have to emerge, complete with its own justification. 5G couldn't create all the pieces needed, and that will also be the problem with 6G.

5G probably contributed to the vendor profit starvation we’re now seeing, too. If operators found 5G created non-covered costs, they’d almost surely put pressure on other areas of capex to compensate. In fact, of the 33 telcos who offered me comments on capex, 21 said they believed their 2024 budgets would be impacted this way, and all said it was possible.

So what could fix this? I was somewhat surprised to hear 19 of these 21 telcos say that a new and more revenue-focused standard was the answer, even though 14 of that group admitted that telecom standards were flawed, perhaps fatally. Why, to mix metaphors, try to ride a dead horse? They say it’s because there is no other pathway, that a major capital investment in something new simply can’t be made without standards sanction. That’s surely something to think about, because none of the operators believed any such initiative was in the works, and a new one would take years.
