Andover Intel https://andoverintel.com All the facts, Always True Thu, 27 Mar 2025 11:36:06 +0000 How “Universal” Should Broadband Be, and How Do We Get There? https://andoverintel.com/2025/03/27/how-universal-should-broadband-be-and-how-do-we-get-there/ Thu, 27 Mar 2025 11:36:06 +0000 https://andoverintel.com/?p=6072 One of the pressing questions of the Internet age is whether broadband Internet access is a service so essential that it must be made available to everyone. Should there be truly “universal service” in broadband? At what price? Many programs and laws have addressed this, but over the last year or so in the US it’s an issue that’s increasingly come up at the state level. New York has led here, but other states are also exploring the idea. Light Reading cites a study by the PUC in California’s debate on affordable broadband, and it seems to suggest that affordable broadband wouldn’t have a significant impact on provider revenues. Is that true, and does it matter? A lot depends on public policy and public attitudes, but we can look at the fundamentals to at least get a glimpse of the issues we face.

Broadband policy, like most of public policy, is complicated by the general truth that getting products and services costs money, and that money is easier for some to part with than for others. For decades, it’s been the dominant global policy to establish some form of “universal service”, wherein those who can’t afford the market price for communications are able to obtain it for something they can afford. In recent years in the US, the implementation of this policy has started to shift from imposing a charge on service to create a subsidy pool that then makes up the difference in price, to requiring that ISPs offer broadband packages below market rate to qualified customers. Needless to say, this move has generated pushback from operators who already, in some global markets, are asking for broad subsidies, even for market-priced services.

Those who (like me) are historically minded will realize that what’s happening here is a collision between the established concepts of public utilities and of public-stock corporations. Telecom used to be either a regulated monopoly (a public utility) or even a government department (postal, telegraph, and telephone, or PTT). Today, it’s almost always more like any other stock company, and in fact utilities in general are drifting away from the monopoly model. The question is whether broadband Internet access should be made available at an affordable price, below market price, and if so whether the mechanism to lower prices should be based on subsidies paid to the operators, on operators accepting a lower profit on some customers, or on some combination of the two.

What California is debating is whether operators should offer low-income customers Internet access below the current $30 per month rate. If we look at the report data, it’s focused not on unserved customers but on low-income customers (those at or below 200% of the federally set poverty line) who are already getting broadband, either at the $30 per month 100/20 Mbps Affordable Connectivity Program rate or by paying market rates for a faster connection. Those groups represent 500,000 and 850,000 customers, respectively. The report indicates there are over 5.8 million low-income households, meaning that almost four and a half million eligible households are unserved.
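The household arithmetic is worth checking explicitly. This is just a quick sketch of the report figures cited above; the code itself is illustrative, not anything from the report:

```python
# Quick check of the report's household arithmetic
# (figures in millions of California low-income households).

total_eligible = 5.8    # households at or below 200% of the poverty line
served_acp = 0.5        # on the $30/month 100/20 Mbps ACP rate
served_market = 0.85    # paying market rates for a faster connection

unserved = total_eligible - served_acp - served_market
print(round(unserved, 2))  # 4.45 -- "almost four and a half million"
```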

What the report concludes is that the reduction in revenue that would result from mandating a $15 per month service is likely less than one percent, and would be only 2% if all currently served eligible households, including those now electing market-rate plans, were to shift to the new $15 plan. The report notes that many households say that any price at all would deter them from using broadband Internet, and proposes a phased introduction of subsidies from 2026 through 2034 that would eventually address all 5.8 million eligible households. The cost of these subsidies would depend on the mandated operator price of the low-income service tier. The report also argues that the benefits to the state of broader broadband participation could justify these subsidies.

Obviously, taxpayers and voters at any level set “public policy” and I’m not going to comment on what the right policy is, or even if there is one. The technology issue is, IMHO, to frame services, infrastructure, and businesses so that the promised benefits are achieved to the greatest extent, while mitigating any negative impacts as much as possible. Obviously, you can’t mandate profitable operation, and so operators must be able to earn enough to invest, or we’d face a necessary return to a public utility model and perhaps even to a future where broadband services would have to be made a part of the government, as they were in some countries in the past (remember the PTTs?).

To me, this starts with the recognition that universal broadband will require very efficient access infrastructure. Access represents a third of capex and 42% of opex, according to what I get from operators. Deploying specialized infrastructure to serve low-income households makes them more costly to serve, so using as much common infrastructure as possible is essential. This is particularly true if it’s a goal to set relatively high levels of performance on even basic service plans; the 100/20 Mbps service level, for example, is hard to achieve over random telephone twisted-pair.

Both CATV infrastructure and FTTx can provide a framework for delivery of a fairly wide range of services. For example, you can hang an FWA node off a fiber that could also support FTTH, and since many low-income households are concentrated in urban areas where 5G cellular broadband would be efficient, it would be smart to encourage this sort of infrastructure versus trying to make-do with in-ground copper loops.

Opex here may be a larger issue. Operators tell me that, on the average, low-income households are more likely to require support. This is particularly true for senior citizen households, since in general seniors are less tech-savvy. Might it be prudent, as part of an affordable broadband plan, to mandate an AI support chatbot, or at least offer financial/tax encouragement to the use of one? Might a state want to set standards on support chatbots, or even provide a baseline solution that operators would supplement with their own service-specific data?

A similar level of planning is essential in ensuring that the value to the state/government entity created by universal broadband is maximized by planning services delivered over broadband to citizens. Things like broadband telemedicine, for example, are useless if they’re not offered, and if public health programs don’t cover them. All the services likely to be beneficial are also services involving a lot of personal data, and so a mechanism to secure them reliably is critical.

There’s an unfortunate tendency to try to offset real costs with hypothetical benefits. It’s just as dangerous to offset real costs with benefits that you don’t try to maximize, or even assure. I personally think that there really is a benefit to universal broadband, but I don’t think it’s going to fall into our laps. Governments, operators, and enterprises are all subject to economic reality in the end. Dealing with that at a technical level really isn’t difficult, but it is an explicit step that has to be worked for.

Why We Need to Rethink the Way We Do IT Projects https://andoverintel.com/2025/03/26/why-we-need-to-rethink-the-way-we-do-it-projects/ Wed, 26 Mar 2025 11:54:03 +0000 https://andoverintel.com/?p=6070 As I noted in my blog yesterday, transformational changes in technology buying by enterprises depend on the launching of new tech projects that unlock new benefits, unleashing new sources of funding. For two decades, the contribution of new benefits to IT budgets has fallen, to the point where today it makes up less than a fifth of its peak level. Obviously, something bad is going on. Part of it is that we’ve picked the low apples of business benefits, making it harder to gain new funding, but there are other factors we can dig out, too. We can also get some insight into why new higher-apple projects are harder to launch.

The way that projects get launched has evolved through three distinct phases since the dawn of IT. In the first phase, which lasted from the 1950s through the 1970s, tech projects were normally driven by analysis of current processes and the way they could be improved. The technology to do the improving was known and often even already in use, and so we could call this a “harnessing” phase. The second phase, which ran from roughly 1980 through 2000, saw project opportunities known to exist (often by having been analyzed in the first phase) exploited when a missing piece in the tech puzzle came along. We could call this the “completing the puzzle” phase. The third, and current, phase involves project opportunities that require a rethinking of the way the business is done, a rethinking profound enough that it hadn’t been done in past phases. Call this the “business analysis” phase. This phase evolution, we’ll see, is critical to where we are in tech evolution overall.

My personal experience as a software architect spans all these phases. In the first phase, for example, it was common for companies to let IT organizations pick targets of “automation” because they understood tech capabilities, and so could visualize how they could be applied by observing current operations. Tech capabilities were understood, and applied. In the second, when something new came along (like the IBM PC) the new capability could be evaluated in light of past assessments, and it was easy to see where something could be done. In both these phases, we were technology driven, and we could assume that understanding the technology could quickly lead to recognizing how to apply it.

Not so the current phase. The problem here is that when a technology comes along, say “cloud computing”, the application of the technology will require a re-examination of the entire business process and technology framework to get a handle on a project approach and frame a business case. This, to me, suggests that the organization of a project team needed to change. In the first phase, companies had separate “programmers” and “system analysts”, behavior that in the second phase started to erode into a single “programmer-analyst”. Have we seen a change to accommodate today’s situation?

Not according to enterprises. In fact, until late 2024 I didn’t see any statistically significant number of enterprises thinking about reorganizing the way they did projects. Since then, a dozen or so enterprises have talked about what one called a “cloud team”, a special group of IT professionals whose job was to frame projects for implementation via hybrid cloud. These teams, led by an experienced software architect and staffed by people with specific cloud development training or experience, did the heavy lifting that their companies had come to realize was needed for successful use of the cloud.

What sets the cloud team concept apart is that it presumes that the project to be assessed will be disruptive both in technical and business terms. It starts by cataloging where the cloud is different, then where the “differences” are likely to matter most in a business sense. I think it’s a great idea, but keep in mind that cloud computing is almost 20 years old (AWS launched in 2006), and we didn’t come up with the cloud team concept until 2024. Even now, use of a formal cloud team is down in the single-digit-percentages in enterprises. Interestingly, all my cloud-provider contacts say that, to some extent at least, they use the concept themselves in feature planning. None say they promote it to buyers, but of course there may be some influence exercised by cloud provider people I don’t interact with.

Where I have heard of more widespread use of a “team” concept is in industrial IoT. In fact, most enterprises in the manufacturing vertical, and about half of those in transportation and utilities, say that they develop applications with a team very similar to the cloud teams, but with the important addition of some number of operations professionals from the line organizations involved. I’ve worked myself on several of these teams in IoT applications in transportation and facilities management, but I’ve not personally seen them used in more general business applications.

Why hasn’t this concept spread more quickly, more broadly? Here I do have some personal experience. The user company I worked for the longest was, when I was first hired, letting centralized IT manage development and operations of data centers and networks. About 18 months in, they divided IT development up by major line of business, because line departments found the central group unresponsive. They went back to centralized when the divided IT proved inefficient, and ended up going back and forth several times. The point, I think, was that you have to view a “team” as a collection of people for a task, not as a permanent organizational element, and since both the line and IT sides of companies are organizational in structure, there’s a tendency to weaken teams in favor of organizational separation of tasks. Enterprise comments over the last several years, including comments from those with cloud teams, support this view.

AI seems to be a technology in desperate need of a team, and so is the empowerment of those missed 40% of workers I’ve blogged about many times. In one case, we have a specific technology in mind and need optimal applications. In the other, we have a target mission but not a specific technology. I wonder whether the answer is the role of “enterprise architect”, but strengthened, and in some cases offered co-leadership alongside the software architect. The role got started at the right point, in the early 2000s, when the concepts of workflows, service-oriented architecture (SOA), and the enterprise service bus (ESB) came along. Enterprises say it didn’t get, and doesn’t have, broad traction, and that where it exists the mission the role plays is inconsistent. It seems to me that enterprise architects should be the designers of business processes, and they should work with software architects to manage the automated side. All this would add up to what a modern project should look like.

Most enterprises can create a team to do a project, but not to consider one or to assess potential opportunities. Logically such a team should be under the CIO, and all the cloud teams I found were in fact done that way, but keep in mind that a cloud team has perhaps a more technical-assessment role, and those in place tend to look at projects already being done inefficiently. For my enterprise readers, I’d love to hear your thoughts and experiences on this topic.

I think that the teams concept is critical. There is no single formula for tech project success, because we’re not back-filling technology into recognized opportunity areas. With things like AI, we’re not just shooting at a moving target of opportunity, we’re shooting from a moving technology vehicle. Both, arguably, are moving randomly. It’s going to take a new level of cooperation.

What happens if we don’t get a team approach, or something else, that changes our magic ratio in a favorable direction? That would favor two things, incumbency and industry consolidation. Absent funding to justify major displacements, budgets tend to simply replace aging gear in kind, which favors whoever provides that gear. The fact that requirements aren’t changing because nothing is changing them makes feature differentiation more difficult, which means price differentiation and commoditization is likely. That situation in turn drives consolidation of vendors for efficiency.

The industry would be more exciting if we could raise our ratio. Jobs would be more interesting, stock prices would likely be higher, and all in all, good things could be expected. But it might take time, and that introduces a qualifier into our “could be better”, the qualifier being “in the long term”. Right now, long-term thinking means to the end of the current fiscal year. Is that long enough for this kind of progress to be made? Probably not, but we’ll see.

A Shot at Modeling Buyer Behavior for 2025 and 2026 https://andoverintel.com/2025/03/25/a-shot-at-modeling-buyer-behavior-for-2025-and-2026/ Tue, 25 Mar 2025 11:44:56 +0000 https://andoverintel.com/?p=6068 In my early days as an industry analyst, I did what most did and issued market forecasts. It didn’t take me long to find out that there was minimal correlation between what enterprises or operators said they would be doing in the future and what they actually did. To get around that, I built a “decision model” to forecast not behavior but the drivers of behavior, then used the model to move to the creation of a forecast. I don’t survey users these days, or produce forecasts, but I thought it would be interesting to fiddle with the model using the things enterprises tell me. Not to produce long-term forecasts, but to predict near-term tech decisions. The results, I think, are interesting.

The biggest question vendors, and the tech markets, face today is probably one that’s almost never asked explicitly. That question is “How is buyer decision behavior changing in the next year or so?” If we expect the future of any technology to be different from the present, there has to be a pending change in decision outcomes. The model says that there’s a chain of truths that have been operative for decades, and it’s that chain we have to pull to get to some answers.

One thing clear from decades of history is that technology change is brought about by spending change. If you look at the residual value of a given class of technology (network gear, servers, etc.), you find that the pace of change we’ll actually see in a year depends on the ratio between the budget available and that residual value. When that ratio is large, enterprises entertain changes in technology direction, changes in vendor commitments, and so forth. When it’s small, they tend to stay the course.
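The ratio idea is simple enough to state explicitly. The sketch below is my own illustration of the model’s core variable, not Andover Intel’s actual model code, and the figures are hypothetical:

```python
def change_ratio(annual_budget, residual_value):
    """Ratio of the budget available for a technology class to the
    residual (undepreciated) value of the installed base. A large
    ratio favors changes in direction and vendors; a small one
    favors staying the course."""
    return annual_budget / residual_value

# Hypothetical example: a $10M annual budget against a $100M
# residual installed base yields a "small" ratio.
print(change_ratio(10.0, 100.0))  # 0.1
```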

Let me offer some specifics here. Over the two years of Andover Intel’s information-gathering, we’ve had a small ratio in play, and that remains true as 2025 opens, for three reasons. First, the number of new projects whose business case opens new budget contributions is at a historic low. Of the enterprises who offered comment here, the average contribution of new projects to IT and network spending is less than 10%, when over the last 30-plus years it’s averaged 34%. For fifteen years, it ran just around 50%, with four years when it went as high as 65%. Second, the asset base has grown over time, making it harder to displace assets not yet fully depreciated. Finally, years and years of tech investment have leveled the peaks and valleys of asset depreciation, so there’s no single point where “modernization” is facilitated by having a lot of equipment aging out at once.

The result of this low-ratio condition is that any real opportunity to gain market share is generated by specialized places in the IT picture where new projects (that less-than-10% contribution) have impact on a narrow front. Think AI and network equipment, for example. In-house AI deployment depends largely on the creation of AI clusters that generate more horizontal traffic, thus creating a specialized data center network that doesn’t have to conform to past technology choices or vendors. Why do network vendors love AI hype? Because it gives them a chance to gain market share, a chance that in 2025 isn’t offered by anything else.

Another interesting point is that the model says that new projects generating higher ratios and thus more transformational opportunity will take slightly over a year from approval to the delivery of a distinctively higher budget, and the budget/ratio good times will last (on the average) slightly less than 2 years, then trail back to baseline over another two to three years. That means that if we want 2026 to be a great year, a year of vendor opportunity, we’d need to identify something to drive it in 2025, and in early 2025 at that.

This truth creates another interesting model prediction. The value of startups is greatest during a period when a prospective driver has arisen, and projects that exploit it are being framed. This puts incumbent vendors under pressure to quickly prepare to exploit the gains in budget dollars, and thus facilitates an exit strategy for startups. Good exits rarely occur during periods when ratios are very low, absent that new driver, and also rarely occur during periods when the ratios are really high, largely because the budget largess has already driven users to make their product decisions. Only revenue will justify a good exit at that point.

A third point is that buyers, whether they’re enterprises or network operators or cloud providers, don’t actually believe the hype. Taking our current AI hype as an example, the tone of published material on AI was overwhelmingly positive in 2024 and is the same in 2025. It was like the cloud—everything is moving to it. Enterprise technologists never believed in the universal cloud, and they don’t believe in universal AI either. And here’s an interesting point; the cloud drove cloud spending, but didn’t drive a revolution in enterprise IT capex. It increased the expense budget only. Right now AI is doing the same thing.

Point four is that the greater the budget contribution and the new-project benefit pool, the longer the positive impacts will last, but also the longer it will take for the peak to be reached. What I’ve characterized as the “three cycles of IT” in the past (in the 1960s, the 1970s, and the 1980s, roughly) happened because of a transformational shift in technology application that had broad impact. Interestingly, each of these cycles got shorter and each contributed less to total spending than the one before. Since then, because of what my model identifies as the “ratio problem”, we’ve not had a major cycle. I submit that this is because it’s harder these days to transform.

Enterprises universally characterize the benefits of IT and networking as “improvements in productivity”. There’s only so much improvement possible in anything, and the most attractive targets are things that offer a lot of gain with minimal cost and risk. ROI, in short. We’ve done them, or at least the attractive and easy ones. What’s left is actually very large—the potential untapped productivity benefit pool is larger than what’s already been realized—but also very complex, and requires a larger “up front” investment. We’ve done what we’ve done by exploiting things familiar and in many cases already in place, but what remains will require breaking new ground.

The big take-away from all of this is that vendors who are looking for better profits had better start looking for better business cases. Even stealing market share as a strategy will be difficult in a time when our ratios seem to be stuck at a low value; the safe choice and the easy choice is to stay the course. But better business cases mean breaking new ground. The obvious questions are “where is the new ground” and “will it be broken”. I’ve pointed out in past blogs that roughly 40% of the workforce, the group not tied to desks, are obvious candidates. However, they’ve been that all along. The real question is whether we’ll uncover something to lower the barriers to reaching this non-empowered group. Since we’ve never been in ratio hell for as long as we have now, models can’t answer that one, but I did get some insights from enterprises that I’ll share in the next blog.

Did NVIDIA Make a Business Case at Its GTC Conference? https://andoverintel.com/2025/03/20/did-nvidia-make-a-business-case-at-its-gtc-conference/ Thu, 20 Mar 2025 11:55:38 +0000 https://andoverintel.com/?p=6066 You’ve got to admire somebody who’s willing to say that AI is “underhyped”, which is what Fierce Network’s story on the NVIDIA GTC Conference says is the view of CEO Jensen Huang. Is it even possible to have something underhyped these days? I wonder, but the comment gives us a reason to look at the role of AI in networking, both at the enterprise and operator levels. There may even be some implications we can draw regarding AI overall, hidden in the details.

To get an important point out of the way, what company CEOs say at conferences is propaganda, and NVIDIA is under pressure on Wall Street right now. They need to look confident about AI opportunities because they need the Street to see their future market as growing. Thus, you have to assume that no public forum is going to miss a chance to convey an optimistic vision. The question here is whether AI is really under- or over-hyped, meaning whether the future holds more and more of it, or if it’s a flash in the media pan. Huang can’t make an opportunity real, only realize one that’s already waiting in the wings. Reality here, as usual, depends on real buyers.

I chat with 88 operators, and of that group I’ve had AI comments recently from 67. Every one indicated a belief that AI had value in their business, but here’s where we have to make an important point, which is that an application for any technology isn’t the same as a justification for it. You can use a brick to hammer in a nail, but hammering is a poor justification for buying bricks. To make it even more subtle, you can demonstrate that AI has “value” in creating a research report, but is the overall value enough for you to be willing to pay for AI, and pay enough that providing it is profitable?

The story says “Nvidia is working with 150 telcos around the world, including 90% of the top 50, and they are rapidly adopting AI across their business for internal productivity, customer experience and improving performance and performance-per-watt on the wireless network….” I agree; I’ve chatted with some of them. The question remains whether these applications of AI are really going to transform their business, or even end up justifying continued AI spending.

The top application of AI today, in every single vertical, is the “assistant”, which uses AI as a personal productivity tool. Almost two-thirds of the people who use assistants admit to me that they’d never pay for what they get; either they’re using a free tool or their company is picking up the tab. A quarter of the comments I get on the technology suggest that people actually hide assistant use because they don’t believe their management would approve it, and other surveys published recently suggest the same thing. Every operator told me they use assistants somewhere, but none said it was transformational.

The second-place enterprise AI application is the support chatbot, which 58 of the 67 operators said they used. This application, at least, gets a positive check from operators’ executive suite, so while 44 of the 58 said that support chatbots had proved more expensive than expected, had not generated as positive a customer response as had been hoped, or both, nobody said they were dropping the plans. But 38 CFOs, when asked what percent of bottom-line growth they could attribute to adoption of this application, could not respond with a number, and 3 said it was “minimal”.

Spectrum efficiency, bandwidth conservation, and network reliability all get positive marks, but again none of them were cited as offering any significant improvement to the bottom line. Even proposed opex reduction attributable to AI was an application CFOs were unwilling to say would generate a significant benefit. Add all the applications cited by NVIDIA together, and you get what a few CFOs said could be “a percent or two” of improvement.

But is this enough? It may well be enough to justify operator AI interest. The operator CFOs admit that they were approving AI projects whose ROI was far lower than their target. Two said that they had or would approve a project with a single-digit ROI. The reason is that cost reduction, if provable, is a major target for operators who have little faith in new revenue opportunities.

Enterprises are also eager to find ways of reducing the cost of network equipment and services, and in particular to reduce the number of operations errors that can impact QoE or security. CFOs of enterprises are likely (by a 2:1 margin) to accept a project ROI lower than their normal target for network AI that’s directed at these missions. A slightly lower percentage feel the same way about AI chatbots applied to their own pre- and post-sale support missions.

This leaves us at an important point. AI reality isn’t necessarily, or even likely to be, AI transformation or revolution. Yes, there are good things that operators, and enterprises, can do with it. Many will be done, but will they justify the level of investment already made, the almost-a-hundred-billion boost in cloud capex, for example? Much less, justify its increase in the future? Not so far. People won’t pay enough for what they’ve proved AI can do. But it can do more, can be transformational. We have to learn how to make that happen, and just a pretty song at a conference isn’t the answer.

What Enterprises Think an AI Transformation Looks Like https://andoverintel.com/2025/03/19/what-enterprises-think-an-ai-transformation-looks-like/ Wed, 19 Mar 2025 11:35:48 +0000 https://andoverintel.com/?p=6063 As everyone who reads my blogs surely knows, I’m trying to get a line on a vision of AI that enterprises believe could really transform their operation. One major challenge is that enterprises are themselves unsure of what the optimal AI solution would look like, and thus often can’t offer me much in the way of suggestions. I now have 52 solid exchanges on AI with enterprise strategists in 48 companies, and I want to summarize what I consider expert views.

A little about the 52 is in order here. Of that group, 41 offered comments on their role or background, and it was surprisingly diverse. The largest group were software architects, then developers with a DevOps role, then team leaders. There were no CIOs or development managers in the group. Whatever the role, all had experience with AI tools, and 37 had some formal AI training.

Let me start by saying that none of the 52 said that AI had already transformed their business, but 47 said it had a transformational impact on some part of their operation or another. All 52 experts say that AI is potentially a business-transformational technology, meaning that it has the potential, but only 13 of the 52 (13 experts, representing 13 enterprises) think that the public-LLM-generative-AI stuff that dominates the news is transformational. For that type of AI, there is in fact only one application that the 52 agree is valuable, and that’s the “support chatbot”. Almost every enterprise has pre- and post-sale support missions that have traditionally required human call centers, and about three-quarters have already (pre-AI) gone to some form of automated support. Most have also tried offshore call centers, but as of now that strategy is being questioned because of pushback from callers. Even with a human support agent, though, there’s still the challenge of getting the right answer delivered. The 13 enterprises who believe that generative AI chatbots are transformational are all companies that have large interactive support needs.

The other 39 experts from 35 enterprises tell me that to transform their business, AI would have to integrate into their business processes in multiple places. Just as no single traditional software application alone could transform a business, these 39 say no single AI application could do so. Getting AI into any single point in a business process, it turns out, is more complicated than it seems.

One obvious way to achieve AI introduction is to link AI to individual workers already involved in the target business process. The 39 experts agree that there are places where an AI "assistant" or "copilot" could have a positive impact. They believe this would come not from broad empowerment (integrating AI with document development or email for a large body of employees), but from introducing a more specialized AI model to a small number of employees whose work product and effectiveness are highly valuable. In my terms, assistant technology's value depends on the unit value of labor of the workers being assisted. They believe that some of these assistant missions might benefit from training with public data, but none think any would require training on data with the breadth of the Internet. A high-value worker typically has a specialist job, and requires specialist assistance.

If you're not going to rely totally on AI integration via an assistant-to-worker bond, then you have to introduce AI as a component of a business process flow in its own right, which would mean as a part of an application workflow. Enterprises have long recognized the need to pipeline applications (or their components) together in a workflow, and things like the enterprise service bus (ESB) and business process execution language (BPEL) were foundations of the original Service-Oriented Architecture (SOA) model, which IBM championed, designed to do this.

ESB/BPEL is only one example of the broader need to integrate business process elements together in a way that reflects their real-world context in the business. You can also bind them explicitly (each calls its successor and passes the needed data), you can use a publish-and-subscribe event processing approach, you can use a digital twin model, a service mesh…you get the picture.
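To make the publish-and-subscribe option concrete, here's a minimal in-process sketch. It's purely illustrative: the `EventBus` class, the topic name, and handlers like `ai_fraud_scorer` are my own hypothetical names, not any vendor's API. The point it demonstrates is the one the experts make: an AI element binds into a workflow exactly like any other application component.

```python
# Minimal pub/sub sketch: an AI element subscribes to a business event
# alongside conventional components. All names are illustrative.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every subscribed component, in order.
        for handler in self._subs[topic]:
            handler(event)

def validate_order(event):
    # A conventional (non-AI) workflow component.
    event["validated"] = True

def ai_fraud_scorer(event):
    # Stand-in for a call to an AI model; a trivial rule for illustration.
    event["fraud_score"] = 0.9 if event["amount"] > 10_000 else 0.1

bus = EventBus()
bus.subscribe("order.received", validate_order)
bus.subscribe("order.received", ai_fraud_scorer)  # AI bound like any component

order = {"id": 1, "amount": 250}
bus.publish("order.received", order)
print(order["fraud_score"])  # 0.1
```

The explicit-binding alternative would simply have `validate_order` call the scorer directly; the pub/sub form is looser, which is why it's attractive when AI components are expected to come and go.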

So, apparently, do IBM and Oracle, the two vendors that enterprise AI experts recognize as playing a positive strategic role in getting AI organized into a business process. Both companies have told their strategic accounts that AI is essentially an application component to be integrated, and that the same rules and policies that guide the orchestration of application workflows have to work for AI. Thus, there may be different approaches taken depending on the specific nature of the business process, the way workflows are currently steered, and the role AI is to play.

Where AI is used for forecasting, modeling, planning, or other analytic missions, enterprises tend to think naturally about having AI integrated into existing analytics tools, which would likely mean one of the explicit or static mechanisms of binding it in. Where AI is to be used in processing business data, such as commercial transactions, it would very likely simply look like an application component linked in via whatever mechanism was employed (an ESB, a broker, etc.).

Using AI in event-driven applications, which would include both control of process systems and IT or network operations, is an application that all 52 of the experts think would be useful and 50 think could be transformational. It's also the area with the most variability of viewpoint on how AI would be integrated into the picture. Industrial process control, transportation systems, and similar real-world control missions strike 39 of the experts as specific applications where digital twins should provide for AI integration and context control. For AI in IT and network operations support, the optimum solution (according to 46 of the experts) is closer to the technique used to integrate AI into business planning. This likely reflects that IT and network operations are visualized as planning and supervisory tasks, whereas industrial process control puts AI into the work itself. So far, none of the experts is suggesting that AI would actually take control of every aspect of network routing, for example.
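The digital-twin idea the 39 experts describe can be sketched very simply. This is a toy model under my own assumptions (the device fields and the `ai_advisor` hook are hypothetical, not a product interface): telemetry events keep the twin synchronized with the real device, and the AI element reasons over the twin's consolidated state rather than over raw event streams, which is how the twin provides "context control".

```python
# Toy digital-twin sketch: the twin mirrors device state from telemetry,
# and an AI element consults the twin for context. Names are illustrative.
class DeviceTwin:
    def __init__(self, device_id):
        self.device_id = device_id
        self.state = {"temp_c": 0.0, "rpm": 0}

    def apply_event(self, event):
        # Each telemetry event updates the mirrored state.
        self.state.update(event)

def ai_advisor(twin):
    # Stand-in for an AI model reasoning over twin state; here a
    # trivial threshold rule returning a recommended action.
    if twin.state["temp_c"] > 90.0:
        return "throttle"
    return "normal"

twin = DeviceTwin("pump-7")
twin.apply_event({"temp_c": 95.5, "rpm": 1800})
print(ai_advisor(twin))  # "throttle"
```

In a real deployment the advisor would be a model call and the recommendation would feed a control loop, but the structural point stands: the twin, not the AI, owns the real-world context.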

While most (46 of 52) of the AI experts say their organizations have a handle on integrating AI with their business processes to the point where significant business improvements could be generated, all agree that the issue needs to be addressed more effectively to impact the broad market. All 46 said their CIO was integration-literate, but none believed any other C-suite people in their company were even modestly AI-literate, and 49 thought that was a problem because it made framing a project and getting approval more difficult.

Chatting with this group has demonstrated, to me at least, that there's a big difference in the way transformational AI is visualized. The majority of enterprise management/executive personnel think of it purely in chat-generative terms, but the people responsible for transformational projects think of it almost as a programming language. This difference, I think, isn't just a matter of exposure; surely it's also the result of a longer and more formal assessment of AI value. Are we all headed in that direction? Maybe, but one somewhat demoralizing point is that 51 of the 52 said they believed vendors were more focused on the hosted-chat model of AI. Said one, "The amateurs are winning." That would be bad for those who hope for optimal AI adoption any time soon.

AI Might End Up a Casualty of Bubble-Think https://andoverintel.com/2025/03/18/ai-might-end-up-a-casualty-of-bubble-think/ Tue, 18 Mar 2025 11:48:57 +0000 https://andoverintel.com/?p=6061 I’m now seeing comments about the “AI bubble” and its “bursting” even beyond the tech media. We certainly had a major tech dump in stocks last week, so it’s fair to ask whether the problem is, as Axios said, that the AI bubble had burst. I guess you know by now that I’m going to say that it’s complicated. I’m going to say that we have a problem, a problem bubbles have largely created, that goes beyond stock market prices.

Tech today lives in a climate that favors, even loves, bubbles. Stocks going up is always good news. Companies are rewarded for saying things that make their stock rise, and for saying things that limit its falling. Ad sponsorship of nearly all tech news (and most other news, too) means that we tend to see or hear what those who place ads think will help them. And let’s face it, we all click on stuff that’s interesting, and the more clicks something can generate, the better coverage it will get.

Back in the 1980s, I was doing semiannual market surveys as networking publications transitioned from paid subscription to ad sponsorship. At the time this started, we had about 14 thousand organized points of network equipment procurement in the US, and the reader base for primary network publications was about the same. A decade later, I found we had increased the reader base by a factor of ten, and had gained fewer than 500 new points of purchase. If you added up the budgets that people said they were responsible for on those reader service cards, the total exceeded the total capex budget, and approached the US GDP. The point? Readership was now dominated by network amateurs. What do you think online tech news has done?

AI has been around for decades. In the 1990s, I did consulting in the field, in fact, and at that point I don't recall much interest in the topic, even in tech publications. You could read about "knowledge engineers" and "subject matter experts", and anyone who wasn't pretty well-grounded in the topic had no idea what any of that meant. Today, we've solved that problem. We have stories about how AI is going to steal your job or make mankind extinct, and millions read them. Consumability generates clicks, not insight. Search engine optimization (SEO) targets the masses, and the masses don't make good technology project decisions. Sadly, most of what non-technical C-suite people read falls into this mass-click category.

Generative AI is consumable, at two levels. First, we can read about how it’s advancing, enhancing, and eventually coming for us. Apocalypse Now, or maybe Apocalypse Z. Second, we can play with it online for free, and get subscriptions for AI-as-a-Service. The second of these lets AI bypass the usual capital ROI controls a CFO would apply, so AI can sort of sneak into businesses. Business case, ROI, who needs it? As a manager or executive, or maybe just a senior person, I can sign for up to a hundred bucks of AI service a month, and everyone is doing it so my decision will probably never be questioned.

We’ve created a hype and bubble industry that used to be tech.

Or, well, we’ve created a hype and bubble industry on top of tech. What’s most likely to be published is what search-engine optimization determines is most likely to be read, which favors mass readers over those whose information needs drive actual tech progress. We still need all that tech stuff, the stuff that the 14 thousand professionals read about, but how they get the things they need, how they learn about the aspects of AI that aren’t sensational enough to get noticed, is all pushed into the background. It’s the tortoise in the race, with the hype hare out in front by a mile.

And that, my friends, is where we are right now with AI. The sad truth is that AI is a revolution, something that could add billions to IT spending because it could generate more billions in benefits. How? You’ll probably never know until it’s already happened.

Last week's stock market wasn't really AI's fault, but AI played a role. The stock market isn't what you think either. You probably picture it as professional investors looking for value, but the great majority of tech stock trades aren't made by investors at all, but by traders, and the majority of those are the hedge funds you hear about. Investors can make money when their stock goes up, but traders can make money when their stock goes down, too. What they can't do much with is stocks going nowhere. When something like AI comes along, they love the upside it generates. When AI uncertainty raises its head, they love the downside too, and trading professionals have tools that actually let them break a market. When things start looking dicey, they trigger those tools and the market dumps. A couple of days later, the market comes back. Every up, every down, wins. These swings are the easiest to see, and to cause, in market areas where hype has raised expectations unrealistically, like AI.

The real value of AI, a value that companies like IBM saw from the first and that I’ve blogged on for a year or more, is that it can generate functional elements that can be introduced into business processes just like we already introduce software elements. These functional elements are what enterprises have been telling me are “AI agents”. As this piece points out, though, the concept of the agent has already been contaminated. We’ve made it into, paraphrasing the article, generative AI with an instruction manual. It’s really a software component to enhance a business process.
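The "functional element, not chatbot" idea can be sketched as an ordinary pipeline in which the AI piece is just one typed step. This is my own illustrative sketch (the step names and the ticket-routing scenario are hypothetical): because the AI step takes and returns the same record shape as every other step, it composes exactly like conventional software.

```python
# Sketch of an AI "agent" as an ordinary workflow component. Each step
# takes and returns the same record, so the agent composes like any
# other software element. All names are illustrative.
def normalize(record):
    record["text"] = record["text"].strip().lower()
    return record

def ai_classify(record):
    # Stand-in for an AI model call; a keyword rule for illustration.
    record["category"] = "billing" if "invoice" in record["text"] else "general"
    return record

def route(record):
    record["queue"] = {"billing": "finance-q", "general": "triage-q"}[record["category"]]
    return record

def run_pipeline(record, steps):
    for step in steps:
        record = step(record)
    return record

result = run_pipeline({"text": "  Question about my INVOICE "},
                      [normalize, ai_classify, route])
print(result["queue"])  # "finance-q"
```

Swap the keyword rule for a model call and nothing else in the pipeline changes, which is the sense in which the agent is "a software component to enhance a business process" rather than generative AI with an instruction manual.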

That's the reason why AI isn't all hype. What's hype is our conception of its value, a conception created by the way we learn about technology advances. The real value of AI will take time to develop, perhaps more time because focusing on the wrong thing invariably leads to delays in recognizing the right one. Read through the article, though, and you'll find comments from Anthropic and Salesforce that seem to straddle hype and reality, at least. A year ago, we didn't even see that much. When reality starts to dominate, we'll see another AI wave, and while it probably won't be as exciting as the current one, and may not even create the same Wall Street boom, it will surely deliver new winners.

How will enterprises get to the real AI value proposition? The answer lies in two factors. First, there are some vendors who enterprises believe are delivering strategic AI advice—IBM and Oracle are the leaders in terms of enterprise comments. These vendors, where they have influence, can help management frame AI projects optimally. Second, some enterprises are creating special AI teams, combining technical people with AI skills, technical people who understand current business process and workflows, and decision-makers. It’s a bit early to call these teams a success, but the early signs are positive. Their greatest benefit may be their ability to translate AI technology into business-consumable form, something that getting AI projects approved and making them successful surely demands.

To My Telco Friends https://andoverintel.com/2025/03/15/to-my-telco-friends/ Sat, 15 Mar 2025 14:20:33 +0000 https://andoverintel.com/?p=6059 I just want to note that, while many of you realize you're free to take advantage of Andover Intel's offer to technology users (see that page on this site for details), some tell me they didn't realize that they were both "users" and "providers". In the former role, as consumers of technology, telcos are of course welcome to comment and ask questions via our user mailbox.

What Does IBM/Juniper Cooperation Mean to the HPE/Juniper Deal? https://andoverintel.com/2025/03/13/what-does-ibm-juniper-cooperation-mean-to-the-hpe-juniper-deal/ Thu, 13 Mar 2025 11:34:50 +0000 https://andoverintel.com/?p=6057 Those who follow networking likely know that the long-in-process merger between HP Enterprise (HPE) and Juniper Networks is being challenged by the DoJ. Then we heard (both companies issued the same press release on February 28) that Juniper and IBM were cooperating to simplify netops, and that the deal would include "joint sales, marketing and product integration efforts." IBM is quoted, saying "Our collaboration with Juniper illustrates how ecosystem partnerships can help accelerate the adoption of AI for critical business use cases such as network management, demonstrating even greater value to enterprise customers." Wait! How could a network company proposing to merge with HPE collaborate with an HPE competitor? What's going on? Essentially, we have three possibilities we need to examine as a hierarchy.

In the top layer, IBM and Juniper may be responding to the DoJ action. Presuming the deal with HPE is off, they may simply be creating another Juniper relationship to exploit post-deal, or they might be using the relationship to relieve DoJ concerns. If the former, we have two possibilities: IBM is prepping for an independent Juniper, or IBM may be looking to acquire or merge with Juniper itself. We'll talk about all of these.

It’s really hard for me to see HPE and IBM both promoting the same Juniper stuff. The press release I quoted says that Juniper and IBM would be cooperating in sales and marketing. Would that work if HPE owned Juniper? We’ll get to that below.

If Juniper/HPE think that the DoJ would kill the deal, then both companies would be exploring their alternatives to the merger. For HPE, they’d need to look at another networking company, or simply forget mergers as a pathway to expanding their network business. If they fear that the DoJ would kill the Juniper deal on competitive grounds, could HPE hope to have any deal approved? I’d suspect not, so they’d walk away.

Juniper is another story. Data center networking drives enterprise network capex, and data center hosting and applications drive data center networking. Juniper would benefit from having HPE pull them through into deals, because HPE sells what runs in the data center. Simple, right? Thus, we could expect to see Juniper doing one of two things. One, they try to do something cooperative with another data-center strategic influence giant, and IBM is the absolute number one there. Two, they could try to get acquired by such a giant, meaning Juniper might replace HPE with IBM as a parent.

If the goal is to try to address DoJ concerns, then the presumption would be that "IBM Guest Services", the element of the announcement that explicitly links the cooperation to Mist AI and WiFi networks, is the point, given that the whole WiFi issue was the basis of the DoJ objections. Don't worry, regulators, HPE isn't going to lock up WiFi AI, because Juniper is partnering with a major HPE competitor, IBM.

I think that it’s pretty clear that the DoJ means business; they’re not being discouraged by light-hearted comments. Thus, I think we can assume that the Juniper-IBM deal is in fact a reaction to the DoJ. I think it’s a kind of hedging. It may be enough to convince the DoJ that they should approve Juniper/HPE, and if so the deal goes through. IBM and HPE then have to somehow work out how any cooperation would work down the line, but that’s not impossible, given that a large percentage of IBM’s Red Hat customers probably run on HPE gear. If the DoJ is intransigent after the IBM deal, then Juniper and IBM have to decide how far they’d be wanting to take cooperation.

IBM used to be in the network business, but their mainstay was their proprietary (and insightfully excellent) SNA technology. As IP players, they never excelled, and they ended up selling off their network business to rival Cisco. Would IBM now acquire Cisco’s primary competitor? Seems questionable. A cooperative deal with Juniper doesn’t put them in Cisco’s face, and Cisco is after all the largest network player in enterprise accounts. We might expect IBM to do deals with other network vendors too, under this level of cooperation, but it would be hard to do that if they actually bought Juniper.

But…but…but. This doesn’t address the question of Juniper’s viability. Suppose the DoJ squashes the HPE merger. That means that if IBM doesn’t buy Juniper instead, Juniper has to rely on cooperation with IBM to pull up its own sales and revenues, or essentially contend they can go it alone. Their quarters haven’t been bad, after all.

Now, we return to the question of whether IBM and HPE are incompatible bedfellows, or just a bit strange. The IBM cooperation is valuable to Juniper, period. IBM has the best and smartest position on AI, according to enterprises, and the largest strategic influence on enterprise planning. Juniper's AI-native networking push surely aligns with that. With regard to HPE/IBM, remember that IBM doesn't sell volume servers (its mainframes and Power systems are hardly growth-market offerings), so it really isn't necessarily competing excessively with HPE. Self-hosted AI benefits IBM. If it sells servers, it benefits HPE. If it sells switches, it benefits Juniper.

I think an IBM/Juniper deal would be even more interesting, even profound, than an HPE/Juniper deal. However, I think that IBM's position in the broad market and in AI in particular is a software position. Red Hat and watsonx are both software, and Red Hat is the hook IBM has hung its entire business on as a means of broad-market opportunity harvesting. The question is whether buying Juniper would contaminate this, and that's a tough one. Would it alienate Cisco? Surely Cisco wouldn't like it, but what impact might that have? Cisco can't supply an alternative to Red Hat, nor can they match IBM's influence with major accounts.

If the DoJ approves the HPE/Juniper deal, then I think IBM’s deal with Juniper probably offers everyone something. IBM gets a showcase example of AI agent operations. HPE gets watsonx and IBM AI options, and Juniper gets another channel to help them sell AI networking.

Can we play out what happens if the DoJ does reject the HPE merger with Juniper? If that happens, then IBM is most likely to push hard on the cooperation with Juniper, and HPE and IBM become more direct strategic competitors in network equipment for enterprises. HPE is, I think, harmed immediately. Juniper becomes more and more dependent on IBM for strategic influence, even in AI, and eventually it might be prudent for IBM to acquire Juniper rather than leave them hanging in the market, potentially a target for a competitor.

Whatever happens with the DoJ, I think the Juniper/IBM cooperation raises the stakes, turning what many might have seen as a yawn of a decision into what might prove a profound pivot point for IT, networking, and AI.

The Factors Driving Enterprise AI Planning https://andoverintel.com/2025/03/12/the-factors-driving-enterprise-ai-planning/ Wed, 12 Mar 2025 11:58:54 +0000 https://andoverintel.com/?p=6055 Enterprises, like telcos, face a world dominated by AI and other hype, while their own technical reality is still pretty pedestrian. There are some points of congruence between hype areas and real planning focus, but even in those areas the influences driving enterprise network and IT planning are more complicated than responding to a new tech wave. That’s what makes it worthwhile to have a look at them.

I've gotten 288 enterprise comments on tech infrastructure planning and spending so far this year. All of these included discussions of their "driving priorities", as one put it, and none of the popular hype targets, most notably AI and the cloud, were cited by more than a quarter of the group as primary considerations, much less the top ones, in network and IT planning and deployment. However, over half said that both the cloud and AI were factors to weigh in the future, and a third said there would be some impact on this year's budgets.

You can probably guess enterprises' top priority: cost management, cited first by 217 of the 288. Second, cited by 46, was improved QoE. Optimizing the cloud was cited as the top driver by 17, and AI by 8. I can't give you reliable data on what percentage of data center switching budgeting was driven by AI, but it's surely nothing like what the article I cited earlier would suggest. Data center switching changes were budgeted by 199 of the 288, though, and it was the top network equipment spending proposed. All 25 enterprises who listed AI or the cloud as the driving priority expected to spend more on data center switching, showing that where there was significant planning emphasis on either area, it drove significant changes in the data center and its network.

Outside the data center, things are mixed. Of the 288, 54 said they expected to spend on edge-LAN upgrades, most in their main site. Another 52 expected to spend on SD-WAN, with slightly more than half that group already SD-WAN users. Only 17 said they were budgeting for edge routers to attach to VPNs other than SD-WAN. None indicated they were budgeting for more WAN traffic due to cloud usage or AI.

If these numbers surprise you, let me repeat a point I’ve often made. Surveys are largely useless because people don’t tell the truth on them. Many will shade their responses because it looks better to say that you’re doing cloud or AI planning when asked what’s driving your 2025 budgets, even if the questions aren’t structured to favor that answer (which, regularly, they are). But let’s continue, keeping in mind that the data I provide is an analysis of spontaneous comments, which I feel are a better measure of reality.

Looking explicitly at network spending, only 8 enterprises mentioned WiFi 6 or 7 in their comments, and in all these cases the improved technology wasn’t a driver of the upgrade, but just a beneficiary of an upgrade driven most often by a larger number of users or higher rate of bandwidth consumption per user. Enterprises agree the more modern standards are best if you are upgrading, though.

In the SD-WAN case, of 288 enterprises, 221 said they were under pressure to lower their WAN service costs, but only 148 planned to attempt that, including the 52 who had budgeted for new or increased use of SD-WAN. Another 57 said that they had assessed SD-WAN but would not likely adopt it unless it was offered by a credible (preferably a current) operator rather than something they had to create themselves. Of the 52 with specific SD-WAN plans or deployments, 5 were based on an operator-provided service. IMHO, this means there is a significant interest in SD-WAN services, and that there’s a barrier to roll-your-own SD-WAN models.

In AI networking, 6 of the 8 said they were deploying a "small AI cluster". Only four offered any comment on cluster size, but all of these planned fewer than 100 GPUs. Interestingly, all were associated with already-underway deployments. Of the 122 who said they expected to deploy AI down the line, the majority believed they'd use "agents" described in a way most consistent with small-model AI or even basic ML. Only 12 expected to deploy large-scale (LLM) generative AI, and this group also (interestingly) mentioned less-than-100-GPU limits. Of the 122 who expected to deploy AI, the primary reason (held by 98) was data sovereignty, with another 25 saying they didn't want usage-priced AI services hanging over them. The impression I get from AI plans is that enterprises are still trying to come to terms with AI adoption costs and benefits.

Within the 288 enterprises, 190 said their network and IT spending was going to be focused on sustaining current applications and missions. Only 11 said they had any new applications in mind; the rest made no comment on the topic. Of the 11, two said that the new stuff would account for more than fifteen percent of their total IT spending, five said it would account for less than ten percent, and the others didn’t characterize it. This shows that “project spending” continues to run far below historical levels (over the last 40 years, it’s averaged 37% of total IT spending). Of the 288, 38 said that “productivity projects” would be the only source of new money in IT, and the remainder didn’t even mention the topic. Only two enterprises suggested what new projects might be, and both mentioned IoT.

What’s surprising here is that many more (122) enterprises indicated they expected AI adoption, yet none suggested AI would be a source of productivity projects. Does this mean enterprises don’t see things like the AI copilot applications as AI adoption, or as a valid target for increased spending, or as a way to improve productivity? I can’t tell from the comments, but I have my own idea.

Which is? Which is that enterprises haven't really figured out AI. It gets good ink, as we say in the media business, so they believe that all the cool, upstanding people are looking at it, which of course means they are too. But the same attitude, back in the 1990s, resulted in a third of enterprises telling me they used gigabit Ethernet when there were no commercial products available. The generative AI model doesn't require planning, insight, or really anything but interest, and so it's been the dominant AI approach. However, will enterprises pay for it? They tell me that they won't, and if that's true then everyone who believes in AI will need to help enterprises work out its value proposition and set a path to realizing it.

Telco Comments on MWC https://andoverintel.com/2025/03/11/telco-comments-on-mwc/ Tue, 11 Mar 2025 11:32:26 +0000 https://andoverintel.com/?p=6053 OK, MWC is over, and most operators say that it didn’t have answers for them on how to increase their profits. “We had a lot of vendors telling us the same old things,” one told me. Is there no new thing? Must operators, to achieve their goals, finally start doing things they’ve been told they had to do all along, but wouldn’t accept? I had 19 operators offer fairly extensive comment on that point, and it’s worth looking at what they say.

First, and perhaps most important, 11 of these operators started with the comment that Wall Street influence on operators was unreasonable, unrealistic, unprecedented. “We’re not a growth stock,” runs the typical theme. “Historically, telcos have been utilities, dividend stocks. We’ll never be able to match the NASDAQ leaders for growth.” There’s a lot of truth in this, I have to say. I can remember, on my very first job, talking with a seasoned guy in his 50s about financial security. His recommendation was to invest in utilities, including telcos, to earn steady dividend income. You don’t get rich quick this way, though, and I suspect that the Street view these days is all about that.

The reason this is important is that the right strategy for telcos depends on their goal, which for public companies, as nearly all telcos are, means shareholder value. If shareholders have unrealistic expectations, then telcos have little chance of meeting them. As "value" or "dividend" plays, it's another matter. All they have to do is sustain good free cash flows to generate dividends, and that's a whole different thing to plan for.

New revenues are essential to getting a telco beyond the value label. According to 15 of the 19, telcos could likely sustain a dividend/value role by managing costs effectively, but they were not going to achieve status as a growth stock; none believed that even optimally applied cost management could sustain growth-stock credibility over the long term. Of the 19, 12 thought it would be smart for telcos to apply cost management principles to their operation before attempting to find new revenue sources, and given that cost management was seen as critical even to a long-term value-stock position, cost management was universally accepted as the first step toward a viable future.

The big question, say the telcos, is what the optimum future infrastructure would look like. To answer that, operators point to network complexity as the primary issue in both opex and capex. There are too many devices, too many layers, in the network. The ideal model, say 15 of 19 operators, would aggregate edge traffic as directly as possible to a single metro-level point, which would then be fully meshed with other metros. This would reduce the number of devices and create a point for “feature injection” to generate new service revenues.

One implication of this goes against many current views on network evolution: intelligence at the extreme edge. None of the 15 believed that was practical; the capex/opex burden would increase. Thus, the whole notion of Open RAN, with a controller (the RIC) managing resources, is held to be impractical. Mobile/5G smarts should, to the greatest extent possible, be focused on a major metro hosting point. To give you an idea of what that would mean, there are said to be somewhere between 150 and 300 metro hosting points in the US, and about twice that number in the EU.

To achieve this, operators think they'd need a form of "passive aggregation" to the greatest possible extent. That means more optical capacity and optical deployment, and fewer electrical layers. Three operators offered their own statistics here, which were fairly consistent: for a given total device capacity, a router cost twice as much to buy and four times as much to operate as a reconfigurable optical add-drop multiplexer (ROADM).
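Those ratios can be turned into a simple worked comparison. Only the 2x capex and 4x opex ratios come from the operator comments; the base dollar figures and the five-year horizon below are arbitrary assumptions I've added for illustration.

```python
# Illustrative five-year cost comparison using the operator-reported
# ratios: per unit of capacity, a router costs 2x a ROADM to buy and
# 4x to operate. Base figures (100 capex, 10 annual opex for the ROADM)
# are arbitrary assumptions.
YEARS = 5
roadm_capex, roadm_opex = 100.0, 10.0
router_capex, router_opex = 2 * roadm_capex, 4 * roadm_opex

roadm_tco = roadm_capex + YEARS * roadm_opex     # 100 + 50  = 150
router_tco = router_capex + YEARS * router_opex  # 200 + 200 = 400
print(router_tco / roadm_tco)  # ~2.67
```

Under these assumptions the router's five-year total cost of ownership runs well over twice the ROADM's, and the gap widens with the horizon because opex dominates, which is why the operators frame the issue as fewer electrical layers, not just cheaper boxes.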

The problem, say the 15 operators, is that this isn’t how networks have been built traditionally, nor is it how the literal interpretation of mobile-network standards or open models would build them. While costs would be lower overall with this approach, the cost of premature displacement of a lot of technology makes the step unattractive, particularly given the fact that 5G deployment has resulted in many fairly new installations.

You may wonder why I’ve focused the issue of cost management on infrastructure. The reason is that the 15 operators all said that their previous mechanisms for reducing opex had picked the low apples, the opportunities that didn’t relate to a restructuring of the network itself. They also agree that given the fact that all cost reduction mechanisms ultimately run out of gas (you can’t have negative costs), profit growth will eventually have to come from revenue growth, which means stealing market share or selling new stuff.

The remaining four of the 19 operators still believe that cost reductions can be had without restructuring their network plant, and that new revenue can be gained within the traditional models. These operators were characterized by four things. First, their primary service geography had a high demand density, meaning that a lot of buying power was concentrated in a contained area. That makes their capital deployments more efficient, and also reduces the number of devices needed. Second, they had a large population of business users. Third, their residential base was more technically literate, making support less costly. Finally, they had strong fixed/mobile symbiosis in marketing, so a larger percentage of their customers got both mobile and wireline from them. Only two of the four seem likely to sustain these factors over time.

One topic that did interest many operators (ten of the 19) but is hardly new or glamorous is SD-WAN. A few, like BT, seem to think that future profit depends in part on the ability to raise "service networking" above infrastructure, so that changes in the latter avoid impacting the former, and vice versa. More (eight of the ten) believe it's inevitable that competition will force them to offer SD-WAN services.

Did MWC answer telcos' questions on how to grow their profits, or at least stabilize them? Not based on the comments I got, but to be fair, I have to point out that telco comments don't suggest they attended the show with that as a goal. One particularly interesting comment was "We went to MWC like someone who goes to a supermarket to buy dinner fixings, but without any recipe in mind." That may be the critical point: you can't find the ingredients of a transformation if you don't first decide just what your guiding parameters will be. Maybe next time.
