Andover Intel (https://andoverintel.com) — All the facts, Always True

AI Operations Tools are Taking Off: To Where?
https://andoverintel.com/2025/02/20/ai-operations-tools-are-taking-off-to-where/
Thu, 20 Feb 2025 12:38:51 +0000

One early point of application for AI in general, and AI agents in particular, is network/IT operations. In the last six months, 154 enterprises have told me their interest in this area has increased dramatically, and the number who have adopted or say they will adopt AI in that role has doubled. What's behind this? There clearly has to be some specific mission or missions driving the new interest in AIops. Those 154 enterprises can shed some light on the matter.

By a slight margin (76 to 66), enterprises said their top mission for AIops was "reducing errors". Second place was "reducing downtime", and third (and last, with 12 mentions) was "faster problem response". Obviously there's a relationship among these three, so the ranking is really an indicator of how they view operations problems. Of the group, 88 have a specific idea of what that problem is, and 66 are outlook-oriented rather than focused on what they're responding to.

Everyone agrees that the enemy overall is complexity. One CIO said “there’s nothing in my infrastructure that hasn’t gotten more complicated in the last decade, and nothing I don’t think is headed for more complications in the next one.” The number one source of the complexity problem is componentization of applications, though about a third of enterprises will specifically say “the cloud”. The point is that if you build up applications from components, then deployment and operationalization is more complicated, and network connectivity is as well. Just figuring out what’s supposed to be happening is a burden, and deciding what should be done is piled on top of whatever you think is the root cause. Then it’s implementing that decision without unintended consequences that hits you.

I think the last paragraph explains a lot. Enterprises who have looked deeply into the cause of network issues realize that what's going wrong is almost surely a slip in one of those three steps, meaning an error. Only about 8% of enterprises see their root problem as an "original" problem like an equipment or service fault; most realize that the real problem lies in the accuracy of their response. That, I think, makes AIops an ideal solution.

But what do they expect AIops to do? Here we have some issues, because the "literati" of the group (44 of 154) say that the first and critical need is to establish a picture of the operating states and their validity, while the rest offer no specific pathway to achieving that goal.

Complex tech systems, meaning IT and network infrastructure and all the components that make up applications, have many operating states, meaning many combinations of conditions that could be expected to exist. Some of these states allow “normal” operation, meaning they sustain the collective mission of the applications, while others are fault states that do not. Most, say the 44 literati, fall somewhere between. The literati say that the first of our three steps, figuring out what’s supposed to be happening, is really a matter of determining the current state and classifying the result into one of the three categories I just related. This provides a specific mission for AIops, which is essential if you expect to find a tool to do what you want.
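As a toy illustration of that classification step (my own sketch, not any vendor's tool; the condition names and state catalog are invented), matching observed conditions against a catalog of known operating states might look like this:

```python
# Hypothetical sketch: classify the current system state by matching observed
# condition flags against a catalog of known states, each tagged "normal",
# "degraded" (somewhere between), or "fault".

def classify_state(conditions, known_states):
    """Return the category of the known state sharing the most conditions.

    conditions: set of observed condition flags, e.g. {"link_up", "high_latency"}
    known_states: list of (condition_set, category) pairs
    """
    best_category, best_overlap = "unknown", -1
    for state_conditions, category in known_states:
        overlap = len(conditions & state_conditions)
        if overlap > best_overlap:
            best_category, best_overlap = category, overlap
    return best_category

KNOWN = [
    ({"link_up", "low_latency"}, "normal"),
    ({"link_up", "high_latency"}, "degraded"),
    ({"link_down"}, "fault"),
]

print(classify_state({"link_up", "high_latency"}, KNOWN))  # degraded
```

A real AIops tool would of course infer states from telemetry rather than hand-built sets, but the shape of the problem — map observed conditions onto a known-state taxonomy — is the same.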

The second step, deciding what should be done, is potentially more complex. The literati generally agree that since the state of the system under control is determined by the collective QoE of its users, it's not difficult to decide where to look to assign state. On the other hand, the elements of our application system, business-wide, are potentially highly interdependent. The literati offer a multi-phase approach to this step. First, you decide the range of things that would return the system to an acceptable operating state, then you pick the one that has the best combination of positive and negative impacts. They see this as a predictive process that might be either an AI analysis of the remedy's target new-state against past history, or a simulation.
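The literati's two-phase selection can be reduced to a toy sketch (mine, not theirs; the remedy names and scores are invented, and in practice the predictions would come from AI analysis of past history or a simulation):

```python
# Hypothetical sketch: enumerate candidate remedies, score each by predicted
# benefit and predicted negative impact, and pick the best net outcome.

def pick_remedy(candidates, predict):
    """predict(remedy) -> (benefit, risk); return the remedy with best net score."""
    def net(remedy):
        benefit, risk = predict(remedy)
        return benefit - risk
    return max(candidates, key=net)

# Invented example scores, standing in for a predictive model or simulation.
HISTORY = {
    "reroute_traffic": (8, 2),   # decent fix, low disruption
    "restart_router": (9, 7),    # fixes more, but disrupts users
    "do_nothing": (0, 0),
}

best = pick_remedy(HISTORY, lambda r: HISTORY[r])
print(best)  # reroute_traffic
```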

The third step, implementing the decisions necessary to bring about the new state the previous step identified, is also complicated, according to the literati. Problem resolutions almost always have to be implemented in a specific way, in a specific sequence, or they fail. Enterprises say that the largest number of mistakes made in this step stem from a failure to do things in the right order. How that order is determined is also a matter of prediction, again requiring either an AI recommendation based on past history, or a simulation.
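Once the prerequisites between remediation actions are known (however they were predicted), sequencing them is essentially a topological sort. A minimal sketch, assuming an invented four-step network remediation plan:

```python
from graphlib import TopologicalSorter  # Python 3.9+ standard library

# Hypothetical remediation plan: each step maps to the set of steps that
# must finish before it can safely run.
steps = {
    "drain_traffic": set(),
    "update_config": {"drain_traffic"},
    "restart_device": {"update_config"},
    "restore_traffic": {"restart_device"},
}

# static_order() yields an execution sequence that respects every prerequisite.
order = list(TopologicalSorter(steps).static_order())
print(order)  # ['drain_traffic', 'update_config', 'restart_device', 'restore_traffic']
```

The hard part, as the enterprises note, isn't executing the sort; it's getting the dependency map right in the first place.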

What we have here, then, is something with multiple complex steps, so it’s no wonder enterprises are looking first and foremost to reducing operations errors. The problem, say the literati, is that it’s very difficult to apply AIops the way it should be used to accomplish what enterprises are looking for. The problem goes back to the issues of agency, and a related area we could call “jurisdiction”.

Find an AIops tool today, and you almost always find a tool with a very specific, very limited, scope. Let's say you have a netops AI tool. Obviously, it handles network operations, and it's likely that it does that for a single vendor and maybe even a single product area, like wireless LAN or data center networking. The literati agree that while limited-mission AIops like this are useful, they're not as useful as AIops overall would be. The reason is simple; in the interdependent world of IT these days, almost nothing you do at the network level can be assessed for mission value in isolation. You might fix a network problem only to break something elsewhere, with worse impact. "You need to keep looking at QoE overall," said one operations specialist, "not just network availability or QoS."

Enterprises overall, including the literati, say that they are not being offered a truly systemic AIops strategy today. They also agree that this isn’t a barrier to their adopting some of the limited AIops they are offered, especially at the network level, but they do say that the limitation has an impact on what they’re prepared to let AI do.

Agent AI, applied to just a part of operations, isn’t considered something enterprises are generally willing to accept in autonomous response form. None of the literati, and only 38 of the 154 who responded on AIops, said that they’d likely let a limited AI tool actually implement its recommendations. Thus, the value of AIops might be limited by the limits in its scope.

While vendors might love to fix that, it’s far from clear who’d step up. You’d need a strong position with data center hardware and platform software, and in network equipment, to be an ideal candidate. HPE/Juniper, were it to survive the DoJ challenge, might well be the candidate, and if they were they’d probably induce others to look at the space as well. It could add a little interest to the increased competition in the equipment space. Nothing does that as well as a potentially powerful differentiator, which full-scope AIops would surely be.

While you can’t get as much out of a limited-scope AIops as you might like, you can still get enough. Enterprises are generally happy with the tools they have, even those who wish they could do more. AIops is a good example of an agent-AI model, and it’s also likely a good example of an application that would benefit from a system of linked AI agents to increase its scope. I think that’s where we’re headed.

Is a Fusion of AI, Digital Twins, and the Metaverse Essential?
https://andoverintel.com/2025/02/19/is-a-fusion-of-ai-digital-twins-and-the-metaverse-essential/
Wed, 19 Feb 2025 13:04:29 +0000

If a fusion of AI and digital twins is at least a convenient abstraction to use in describing the relationship, does a metaverse figure in? If so, how? Could it be that what I’ve called the “metaverse of things,” or MoT, is what the AI/digital-twin fusion is, and if not, what is MoT in this picture? Very few enterprises from my recent sample of 76 had any comment on these points, but these early views are important.

One of the most important points enterprises have made to me about IoT, AI, digital twins, and real-world productivity enhancement is that they don’t see AI agents operating autonomously as often as they see them cooperating with workers. Enterprises who see large-scale operations enhanced by technology, using models based on digital twins, AI, or a combination of the two, see those models necessarily embracing some processes that remain fully human-controlled. Thus, there is a need to consider how these models interact with humans.

In simple automation, a model element that needs human assistance can display or print a message that will then be read and actioned by a worker. As things get more complicated, it may be necessary to provide workers with a more sophisticated visual representation of the model process and their own world, meaning some virtual reality picture. That’s where metaverses in general, and the MoT concept in particular, could come into play.

Metaverses are virtual worlds, and their two most distinguishing characteristics are that they have inhabitants and can be visualized in some way. Given that, it’s tempting to say that an MoT concept, as an extension of an AI/digital-twin partnership, would make sense in applications where one, or preferably both, of those characteristics are present. However, deciding when that’s the case isn’t easy, say the four enterprises who made comments.

The four enterprises don’t feel that every digital twin, AI or not, that feeds a human system necessarily has an inhabitant or requires visualization in a metaverse sense. I think that the view comes down to this; if a human’s interaction is observational, then they’re not a participant in or inhabitant of the model system, and it’s not necessary to visualize the system as a virtual reality. If there is a human inhabitant of the system, one that is modeled by it and thus a participant, that also doesn’t mean the system is a virtual reality.

A better test, say these enterprises, is the nature of the control inherent in a digital model of the real world. If you visualize an IoT industrial process that’s autonomous, meaning that the digital twin system (with or without AI) decides what to do and then in some way commands devices to do it, then there’s no need for virtual-reality visualization. They do point out that “autonomous” here has to mean that not only is a human not regularly controlling the system, but also that they are not supervising it. If the system facilitates human control or is subject to human supervision, then the model has to present a visualization that supports the human role.

How about situations where the human is part of the process being twinned, but perhaps acting as a kind of sensor or effector, not actually running or supervising? This is where our four enterprises see complication, and to show why we need an illustration.

Let’s presume that we have a business process called “pick and ship goods”. It’s a warehouse-centric process that ties in goods to be identified and moved, a place where the goods are stored, a vehicle onto which the goods are placed for shipment, and something that moves between place and vehicle. Enterprises generally agree that this pick-and-ship model is usually inefficient and often opens them up to theft of goods, so it’s a regular target for automation of some sort. Whether that includes a virtual-reality, MoT, approach depends to a great extent on what that “something” is.

Suppose it’s a bunch of human workers. We’d expect a system to deliver a manifest for the vehicle, and workers would use this to grab the goods from the shelf and load them. All the workers need is the manifest, and if there’s no “supervision” process to validate that the right stuff is being loaded, there’d be little benefit to creating a VR representation of the process, so no MoT.

Now let’s suppose that the goods are all tagged with barcodes or RFIDs or something similar, and that these tags are read by a sensor at the point where the vehicle is loaded, to prevent something from being loaded that’s not supposed to be there, and to ensure the entire manifest of goods is picked and loaded. There might be a light set above the truck bay, with “red” indicating something not on the manifest was being loaded, “yellow” that the manifest wasn’t yet complete, and “green” that loading was correct. What we’ve done here is introduce supervision.
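The light logic just described fits in a few lines. A hypothetical sketch (the item names are invented, and a real system would be driven by tag-reader events rather than function calls):

```python
def loading_light(scanned_item, manifest, already_loaded):
    """Return the bay-light color for a just-scanned package."""
    if scanned_item not in manifest:
        return "red"      # item not on the manifest: don't load it
    remaining = manifest - (already_loaded | {scanned_item})
    # yellow while the manifest is still incomplete, green when this scan completes it
    return "yellow" if remaining else "green"

manifest = {"box-A", "box-B", "box-C"}
print(loading_light("box-X", manifest, set()))               # red
print(loading_light("box-A", manifest, set()))               # yellow
print(loading_light("box-C", manifest, {"box-A", "box-B"}))  # green
```

The interesting part, as the next paragraphs show, isn't this check itself but the new supervision problems it creates.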

Worker walks up to truck with package. “Yellow” light, load away. Red light? Put the box aside and go back to the bins to pick again. Green light? Close up the vehicle and dispatch it, then go on to the next pick and load. The problem is that you now have three new problems to supervise. One, did the worker load anyway? Two, where did the wrong item get placed, and how will it get put back? Three, will the worker pick the correct item next time?

For the first problem, you could expect the worker to put the wrong item into a “put back” bin with its own sensor set, and so you’d expect to see the item go there and not onto the vehicle. But did the worker pick two items, put one back, and load the other on the vehicle? You could add in sensors to try to home in on what’s happening, or let a warehouse supervisor monitor things. In the latter case, you may want to give the supervisor a VR representation of the process to facilitate that.

The notion of supervision, and perhaps some extra sensors, could address the other two points. Workers could be detailed to move things from the put-back bin to the shelves, and it would be possible to give workers a portable device to scan the tag on each package to get put-back directions and also to scan a tag on the location where the item was put to verify placement. If the pick and load process workers had the same device, they could validate each pick against the manifest, and that could reduce the need for supervision and for VR-type visualization.

The extent to which human interaction is essential in a modeled process, then, is the real measure of the value of an MoT extension into visualization, and the thing that likely differentiates a digital-twin model from an MoT model. What I find most interesting in reviewing the comments of my four leading-edge enterprises is that they suggest that an MoT conception of process automation may be important as we try to evolve to broader use of IoT and process automation, and not a later step to be added to refine the process. If that’s true, then we need to think about these points quickly.

How Enterprises Actually Want to Use AI
https://andoverintel.com/2025/02/18/how-enterprises-actually-want-to-use-ai/
Tue, 18 Feb 2025 12:34:57 +0000

“No responsible CEO is going to turn a company over to AI, period.” That’s surely emphatic, and it’s also the comment an enterprise CEO sent me over the last weekend. Others, lower down in the same enterprise and in other ones, were commenting on my blogs on “agent AI” and why enterprises had been advocating something of that nature even before the technology was being discussed. Now, with the concept of agents out in the open, more and more enterprise IT planners are seeing the potential.

While the concept of agent AI has been discussed for months, enterprise tech planners typically focus on things that are productized, meaning that they have firm specifications to review, interfaces to validate, and a price that can be plugged into financial justifications. We don’t have much of that so far, and so in the last ten days I’ve gotten comments on some of my recent points in blogs from only 76 enterprises. Others, presumably, are focused on items they can actually act on.

Before I talk about what the 76 had to say, let me say something about “stimulation bias”. If you start singing a song about flowers, the people within earshot are more likely to think and talk about flowers than they would be without your song. Thus, people responding to my blogs are most likely to comment on what I’ve said, either for or against. Keep this in mind, please.

The 76 agreed on several important points. First, any deployment of AI they’re likely to do will be integrated with their current processes, in a process-specific way. That means that they believe they will deploy multiple AI models, each specializing in something, rather than any super-model. This view is totally consistent with what enterprises have been telling me about AI since 2023. Second, while only a third of the number would spontaneously call the individual models “agents”, the attributes they see for the models fit the notion of agent AI far better than they fit the notion of either super-models that do many things, or “horizontal” models that address workers’ productivity across multiple jobs, the “copilot” approach we often see. Third, there is no requirement that these “agents” (which is what I’ll call these small models hereafter) be autonomous. In fact, enterprises think that some would be integrated with human processes and others with current software, and the small-model nature of the agents means that their context is likely set by, and their actions mediated by, other business processes around them.

The reasons for these views are also interesting. First and foremost, enterprises say that AI adoption has to work like adoption of any technology, meaning it has to be phased through their business to minimize disruption of overall business processes, control costs, and limit displacement of gear that’s not fully depreciated. Of the 76, 55 said that they’d want their earliest AI agents to be ones where the early-adopter risk was limited, and that they’d tackle more significant and impactful areas down the line when their skills and confidence levels were higher.

Enterprises also suggested that their early applications might be ones that were the opposite of the popular view that agentic AI had to operate autonomously. Autonomy isn’t widely trusted at this point, so having AI suggest things rather than do them automatically is the preferred approach. I’ve seen this for applications in netops and ITops already; “tell a professional” what’s happening and how to deal with it is preferred.

The final point made, the one my CEO comment reflects, is that surrendering control of a business, or even an entire business process, to AI is way out of current CxO comfort zones, and even among AI/IT advocates, something less than a third are willing to promote at this point. AI is a junior type, or some sort of specialist-geek type, that needs supervision and oversight. Compliance types who commented (11 of them) all said that fully autonomous AI in most missions could not pass an internal audit.

Where my stimulation bias point raises its ugly head is in how AI would look. Right now, 59 of the 76 see AI as an application component, a piece of software not unlike a microservice. Of the remaining 17, 8 say they have no specific way they’d think of an AI agent, and the remaining 9 like the notion of the “digital twin element” I’ve blogged about. That, of course, could be because of my blogs; only 6 said they had heard of, considered, or deployed digital twins. Of the 59 who saw AI agents as software components, 13 said they’d deployed or were evaluating digital twins, so not all who are familiar with the concept see agent AI playing any role.

The lack of a consistent/universal view of agent AI is probably the main reason why it’s been slow to evolve. Enterprises like tech concepts that are widely and consistently articulated by their strategic vendors. That’s not the case now, and in fact of the 76 enterprises who responded, only 4 said that they had a vendor telling them an AI story consistent with their own requirements, and none said that multiple vendors were telling the same story.

Even IBM, the vendor with the highest level of enterprise strategic influence and the one most say has the best handle on AI, still confuses enterprises. A recent IBM “Think” piece contrasts “agentic” and “generative” AI. The article says that “Agentic AI is focused on decisions as opposed to creating the actual new content, and doesn’t solely rely on human prompts nor require human oversight.” The first part of that is fine in the view of those 76 enterprises, but the second part is problematic. I think the problem here is more one of terminology than anything else. AI types think and speak differently than business types.

IBM’s piece says that generative AI is what generates content of some sort, where agentic AI generates decisions and takes actions. That distinction doesn’t map to how enterprises who’ve chatted with me about AI see things. They think that generative AI is AI based on general intelligence training of the sort that’s trained on Internet content, and agent AI (I think the term “agentic” is leading us astray) is AI with training in a limited, specialized, area. For example, enterprises would classify most copilot applications as generative, where IBM thinks of them as “agentic”. Enterprises would also say that something like a tax preparation agent was just that, an AI agent that cooperated with a human, where IBM’s definition would likely put it in the generative category—it generates a tax return.

I’m concerned here, frankly. Yes, I’d hoped that my AI-and-digital-twin symbiosis would get more recognition, but more than that I’d hoped that what enterprises were hearing from IBM was aligning better with what they tell me they need to hear. Is IBM going down an “agentic” rat-hole here? I hope not, but it may be. If it is, then other players may have a shot at leading AI in the future.

Speaking of future, of the 9 enterprises who mentioned an AI connection with digital twins, 4 had comments about the three-way relationship between digital twins, AI, and the metaverse, or more specifically what I’ve called the “metaverse of things” or MoT. I’ll cover that in my blog tomorrow.

Does a DoJ Block of the HPE/Juniper Merger Really Help Cisco?
https://andoverintel.com/2025/02/13/does-a-doj-block-of-the-hpe-juniper-merger-really-help-cisco/
Thu, 13 Feb 2025 12:46:13 +0000

There are a lot of stories coming out on whether the DoJ’s decision to try to block the HPE/Juniper deal ends up helping Cisco (HERE and HERE, for example), and Juniper’s CEO has also made that comment. The sense of the view is that if blocking the merger helps the current dominant player, how can the merger be anti-competitive? To know whether the claim is true, we have to look at how the success of the merger would hurt Cisco, and whether that hurt is a result of the merger itself or simply a reflection of market conditions. I’ve already come out against the DoJ on this, so I won’t go into that in detail here.

What “helps” Cisco, in the real world, is what raises its profits. How else can you measure benefit to a company? To do that, a company has to cut costs, raise revenues, or both. Cutting costs is a zero-sum game, so ultimately Cisco and every other vendor in the network (and any other) space is dependent on raising revenue to achieve a gain. That can be done by having buyers spend more (increase ARPU), or having more buyers (increase TAM or market share). It’s hard to invent new buyers in a mature industry, so the only way to get more buyers is to steal market share.

Cisco and Juniper have been locked in a market-share battle for almost three decades. That could mean that if the HPE/Juniper merger going through would create a more effective Cisco competitor, then having the merger blocked would indeed “help” Cisco. But my use of “could” rather than “would” is deliberate. Does the merger create a more effective competitor? Even if it does, is blocking it really a help? Let’s look at both points.

The problem in networking today, and the problem with tech in general, is that the TAM pie isn’t growing enough. Why does a company buy network gear? Two reasons. One, old gear is either broken or hard to maintain, so it needs to be refreshed. Two, new network benefits justify new network spending. This sets up the “race” that Cisco would run with Juniper today or HPE/Juniper if the deal goes through.

Suppose a company has a switch or router that’s getting long in the tooth. How do they replace it? It’s part of a broad deployment, in which over 90% of enterprises say there’s currently a dominant player. In fact, over 60% say that there’s only one player in a given part of the network, the part where our antiquated device lives. The vast majority of enterprises would replace it in kind, with the current version of the same device, from the same vendor.

Suppose a lot of gear was to be shifted out, though? Maybe it was deployed at the same time and so it’s all obsolescent. Maybe the vendor did something that turned off you or your management. Now, there’s a real chance there’d be a competitive bid, even without any new requirements.

Suppose a company does have a new source of network demand, a new mission that contributes a new business case? In many cases, this would require only some upgrades to current gear, perhaps some new line cards. In that case, the current vendor(s) supplying the devices to upgrade would win. In other cases, the new mission might require new gear, often deployed in a new enclave. That new mission would then justify taking a new look at vendors, a look where the incumbent vendor might be favored (if they’d made a good impression), or might not be.

Cisco, as the incumbent, has an advantage in total brown-field deployments, because those deployments favor the incumbent. If Cisco hasn’t messed up, they also have an advantage in green-field deployments by reason of their incumbency, but not as strong an advantage. They could be knocked off.

Cisco’s reorg clearly shows that Cisco itself thinks the current market situation poses threats to its profit growth, which it sort of does even statistically; the more share you have the harder it is to gain share, anti-trust concerns aside. But it’s also true that incumbents typically want to defend more than to attack, and so they are conservative in their product management. Cisco’s “fast follower” strategy is an example of this, and it goes back decades.

OK, suppose you are Cisco and you want to avoid competitive risks without a total remake of the market. What do you do? First, you try to defuse any differentiators that might arise from new tech developments. Think AI. Second, you try to exercise account control, strategic influence, to get companies to plan their strategies for new network missions in your own terms. That lays out where Cisco has to look at HPE/Juniper to decide if the merger is a threat.

Cisco’s reorg around AI is, IMHO, clearly aimed at keeping control of the one development that could create a new little enclave of network deployment. In-house AI means a cluster of GPU servers, perhaps requiring a data center network of their own. Not only that, rival Juniper’s AIops positioning has been effective against Cisco in areas where any sort of major network upgrade was proposed, and even won a few deals that displaced Cisco gear. HPE, of course, is an AI server vendor, and so has a definite involvement in any new self-hosted AI deployment.

In fact, HPE might well be a threat to Cisco’s account control even without Juniper. They already have the Aruba line, and their strategic influence among enterprise buyers is better than Cisco’s. For decades, my involvement with enterprises has shown that data center networking drives enterprise networking, and data center technology drives data center networking. Cisco has servers (UCS) but they don’t have a significant position in the enterprise data center market. HPE does.

So, from all of this, it would seem that blocking the merger helps Cisco. If that’s the case, given that Cisco is the incumbent, it could also hurt competition if the DoJ succeeds, but I’ve already given my views on that. However, it’s important to note that the only solution for Cisco in the long run is to add to the size of the network pie, and get the largest share of the addition. Yes, HPE/Juniper might be in a better place to do that, but HPE alone, and IBM alone, and maybe even Dell and Oracle and Broadcom alone, could do that too. Networks deliver stuff, and it’s the applications that create the stuff that builds business cases.

The big imponderable here is whether any of these Cisco competitors, including the now-disputed combo of HPE and Juniper, would be able to build any convincing new business cases. Or, of course, whether Cisco could. I personally doubt that Cisco would take a lead in developing new IT business cases, or that it could succeed if it did. But I also wonder whether the other players could. Only IBM has shown real appetite for business-case evangelism so far, but Oracle seems to be making some moves. Could Juniper drive HPE to do the same? Open question, I’m afraid, but I think the net here is that despite having just reported a decent quarter, Cisco does face risks down the road, and I think blocking the merger would reduce those risks.

Digital Twins, AI, and Agents
https://andoverintel.com/2025/02/12/digital-twins-ai-and-agents/
Wed, 12 Feb 2025 12:24:57 +0000

What’s more important to the future of tech, digital twins or AI? If you have to pick only one, I’d argue that the digital twin would win, because the future of tech depends on more real-time, real-world automation and lifestyle augmentation. Digital twins that model the real world seem inescapable in that mission, and in fact are already being adopted there. But AI can also play a role, and as I noted in an earlier blog HERE, companies like NVIDIA are seeing the link, and the connection is further noted HERE. I think the case for symbiosis is clear, surely clearer than the progress, and sadly the article I reference doesn’t offer much insight. Let’s try to get some from what enterprises tell me.

The article I cited above is interesting to me because, since it represents a survey and I don’t do formal surveys, it provides at least a slant on intentions. I rely on anecdotal comments from enterprises, which are hard to read intent from reliably, but they do offer a view of actual use and implementation. However, my data is best used by considering missions rather than the vertical an enterprise represents.

Manufacturing automation is the place where digital twins are used the most today; 69% of enterprises involved in manufacturing say they are using digital twins and 100% are considering the use. Facility automation missions (smart cities, buildings, campuses, etc.) use digital twins in 34% of all enterprises, and in 73% of those involved in warehousing, utilities, refining and fuel/power distribution, and the like. Beyond these missions, penetration of digital twin technology is limited; only 9% of enterprises cited other uses to me.

Right now, less than 10% of enterprises use AI in IoT-real-world-related missions at all, but the comments I get suggest that those who use digital twins are more likely to use AI, though symbiotic usage is still limited. A big part of the explanation is simply the novelty of the AI and digital twins individually, and the fact that any relationship has been promoted only recently (in 2025, in fact).

There are three ways that AI and digital twins could form a symbiosis. First, they could be used for “adjacent” elements of a real-world-automating process, meaning they cooperate to do separate and discrete things. Second, they could share responsibility for a step, with a flow of information between them being essential to the implementation. Third, one could generate or facilitate the other. From what I hear, the implementations of AI and digital twin symbiosis are tracking in that order.

Manufacturing is not just building stuff, it's also moving parts and finished goods, gauging demand, predicting supply pricing for optimum purchasing, and so forth. Every single enterprise that currently uses both digital twins and AI in its real-world processes uses some AI for this non-building-stuff activity, and their comments tell me they expect to use more. Today, just short of a majority of manufacturing digital twin users use digital twins in some aspects of the transportation of material and finished goods, warehousing, and so on.

This first-level symbiosis seems to be the primary driver of the second level, the process cooperation level. For example, scheduling involves not only the processes of moving things around, but also the timing of the completion of an order, based in part on movement, in part on storage, and in part on actual manufacturing. Thus, there is value in having feedback across the boundaries of all these processes, and a means of coordinating the whole as the sum of the process parts. That role integrates things done via digital twinning with things done via AI.

A complex real-world process is almost surely not a single digital twin, but a hierarchical system of twins. As process cooperation between digital twins and AI matures, it seems to some enterprises that AI, in agent form, could become an element in the hierarchy. In other words, AI could create the same sort of model for some process components as digital twin technology would for others, and AI might also create a "layer aggregator" element that combined digital twins at a lower level. Right now, only a small number of highly thoughtful process planners think in these terms, but their comments are compelling. They believe that both AI and digital twin models are agents, and that the future of automation in any form is the stitching of both types of tech agents, along with human agents, into a final, complete, business process. I think this is an exceptionally valuable insight, and also one that justifies our final level of symbiosis.

If automation is really a fundamental management of agents, then you need to agent-ize as much of any process as you can. NVIDIA Cosmos and a growing number of AI tools facilitate the creation of digital twins, which in turn yields agents that can be incorporated into higher-level process models, models that presumably could also be facilitated via AI.
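The hierarchy those thoughtful planners describe can be sketched in code. The following is purely illustrative (every class and name here is my own invention, not any vendor's API): leaf elements may be either digital twins or AI agents, and a "layer aggregator" rolls them up into a higher-level process model.

```python
# Hypothetical sketch of a hierarchical twin/agent process model.
# All names are invented for illustration; no vendor API is implied.
from abc import ABC, abstractmethod

class ProcessElement(ABC):
    """Common contract for any element in the hierarchy: twin, AI agent, or aggregator."""
    @abstractmethod
    def state(self) -> dict:
        ...

class DigitalTwin(ProcessElement):
    """Models one real-world component from telemetry (simulated here as static data)."""
    def __init__(self, name, telemetry):
        self.name = name
        self.telemetry = telemetry
    def state(self):
        return {self.name: self.telemetry}

class AIAgent(ProcessElement):
    """Stands in for a component modeled by AI rather than by an explicit twin."""
    def __init__(self, name, predictor):
        self.name = name
        self.predictor = predictor  # any callable returning a predicted state
    def state(self):
        return {self.name: self.predictor()}

class LayerAggregator(ProcessElement):
    """Combines lower-level twins/agents into one higher-level model element."""
    def __init__(self, name, children):
        self.name = name
        self.children = children
    def state(self):
        merged = {}
        for child in self.children:
            merged.update(child.state())
        return {self.name: merged}

# A toy manufacturing hierarchy: a line twin plus an AI demand agent,
# rolled up by a plant-level aggregator.
line = DigitalTwin("assembly_line", {"rate_per_hour": 120})
demand = AIAgent("demand_forecast", lambda: {"units_next_week": 4800})
plant = LayerAggregator("plant", [line, demand])
print(plant.state())
```

The point of the sketch is that the aggregator doesn't care whether a child is a twin or an AI agent; both satisfy the same contract, which is exactly the "everything is an agent" view the planners express.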

The evolution of digital twin and AI symbiosis is interesting because it seems to indicate an integration of what are often two very different "IT" activities: the edge-real-time-embedded-control stuff and the traditional information processing and analytics stuff. The enterprises who are the most thoughtful about the second-stage evolution of the symbiosis are showing signs of better integration of these activities, and that may mean either that this sort of symbiosis drives IT integration to its highest level, or that it demands that level of integration in order to develop. Chicken first, or egg?

I have to wonder whether, whichever comes first, we're not seeing a need for a shift in the way we do IT. To the extent that real-time process automation has been done, it's typically been handled independently of, or only somewhat integrated with, traditional IT. Might we now be seeing a need to think of an entire business as a real-time, automated, system? We now have a university-built generative-agent AI twin representing a thousand humans. This sort of thing could help us model entire workforces, eventually.

Does this essentially create new AI-kills-off-mankind risk? “Distributed Hal?” I’m sure somebody will suggest that eventually, but in the meantime we can always watch other humans with suspicion. That’s where our biggest threat has been all along.

Telcos and Opex: Too Little Too Late or Too Late Too Little? https://andoverintel.com/2025/02/11/telcos-and-opex-too-little-too-late-or-too-late-too-little/ Tue, 11 Feb 2025 12:35:55 +0000 https://andoverintel.com/?p=6026 We've all heard the phrase "Too little, too late", and surely seen it applied to telcos, but I want to propose another, seemingly contradictory, phrase to describe their current state: "Too late, too little".

Twenty years ago, telcos spent more per revenue dollar on opex than on capex (roughly 40 cents vs 22 cents). What I've called "process opex", meaning the operations cost of the network itself (equipment and its support, customer support, energy, etc.), was 26 cents. To ensure that network upgrades made necessary by service evolution, standards evolution, and equipment updates didn't make them look too bad on Wall Street, telcos worked hard to eat away at opex. Their particular focus was cutting the staffing associated with support and consumer installations, and they made great strides in that area, cutting human costs by about 30% overall by 2020. Then they hit a wall.

Of 88 operators I've gotten comments from, 83 said that they had no overall plan for managing opex through that 20-year period, meaning that they'd played a kind of whack-a-mole, making tactical changes predominantly in areas that were regularly running over budget. Of that group, 56 said that their reductions in human costs had been accomplished by measures that were only marginally beneficial, and all 88 said that their overall opex had not been radically altered. In fact, according to the operators themselves, they'd ended up with a process opex of 32 cents in 2024. Their measures had not kept pace with the factors that drove process opex, and opex overall, up. Why?

According to operators, the biggest issue is the trend in consumer online usage and the business response to it. Second is Wall Street's demand for instant (meaning this financial quarter) gratification, followed by what some operators call the "tech news as entertainment" bias in user expectations. Then there are unrealistic operator revenue expectations, followed by outmoded standards and technology consensus practices. All this has pushed operators away from a thoughtful consideration of opex trends and of the technologies that would best address them. Now it's too late, because there's too little time and latitude to do the right thing. To see why, let's look at each of these issues in more detail.

Online services, admit operators, are driven entirely by OTT players rather than by operators’ own plans. Every year, consumers consume more, businesses supply more, and everyone expects more because tech is worming its way into every life, every activity, more thoroughly. There is nothing telcos are worse at than opportunity or demand-side planning; they’re supply-side organizations through a hundred fifty years of DNA.

This has had a major impact on customer care. “The average household in our customer base uses more data than the average company did in 1980”, one operator told me, “and they’re just as strident in their protests against problems with their experience, but almost incapable of even basic self-support.” As a result, 29 of the 83 operators with no overall opex management plan said that their customer care in 2024 cost more than it did in 2000, despite sharp reductions in head count. In fact, 15 of the operators in the group said that cost rises were because of their headcount reductions. “Service-specific technology for support doesn’t survive radical changes in the service,” one said.

Why did so many whack opex moles, though? Wall Street wanted to see cost management that kept pace with necessary network capex budgets, and that precluded any opex strategy that took years to mature. Then there’s the impact of massive change. Why are air traffic controllers still pushing paper slips around to track flights? Same thing. Making a big change to a critical system can create critical disruptions, and from the perspective of the users (fliers in the case of ATC), the best you can hope for is that nobody notices what you’ve done.

Operators are reluctant to even estimate what a systematic approach would have cost, but the 11 who did offered an average estimate of a 60% increase in operations technology spending over a period of three years. None of the operators thought there was any chance they could have gotten approval from the C-suite for that, and none believed they could get it today either. "Yeah, we're taking action. We're sticking our head deeper in the sand," one OSS planner said.

It's important to note that all 83 of those operators said that vendors' own interest in their Street cred meant that none were offering the tools for this vast transformation in opex thinking either.

Both consumers and companies seek out sensational changes, because stories about them are entertaining and get all the attention, and clicks win in today's world. That means operators are also under pressure to respond to things that may be unprofitable or even impossible. Hedge funds drive stock prices because they drive trading volume, and they play on the PR side of new services and technologies. Hedge fund managers are not operations-literate, and in any event hedge funds profit from bubbles, and then again when those bubbles burst. You can't offer boring-if-true comments on things like 5G, 6G, edge computing, or AI at an investor conference. You can't offer exciting comments without spinning your cost wheels to deliver at least some trials.

All of this leads operators to accept optimistic and unrealistic estimates of new revenue opportunities. Look at 5G and what it promised, and look now at how little has been delivered. Of the 88 operators, 87 said that 5G was over-hyped, and 84 said that 6G was almost surely unrealistic as it’s evolving. Every operator said that many of their new-revenue plans of the last two decades were delusions, two-thirds said most were, and almost a third said that all had been.

Why are all these delusional things getting continued support? A big reason is the antiquated procedures operators undertake to define new services and their technologies, procedures honed when they were supply-side-dominated regulated monopolies. I’ve been involved in telco standards for decades, and I can tell you that any innovative and potentially revolutionary idea is inevitably buried. There are too many reasons for that to go into here, but I see no signs of this changing. Operators agree: “We look for consensus in a telco and vendor population that could never hope to reach it,” one told me.

What I find surprising is that only 9 of 88 operators hit on what I believe to be the biggest problem of all, which is competition and churn. For mobile operators, competition and churn are cited as the biggest factors increasing process opex; for fixed operators it’s still a major factor. When you offer a service whose only meaningful differentiation is price and reliability, and when the latter is generally considered “good enough” across all choices, you are forced to manage costs and try to offer suitable, effective, and cheap support.

The obvious question at this point, if you accept the views of operators themselves, is whether it’s really “too late”. Have operators passed the point where they could expect to remedy the situation? The answer to that, I’m afraid, is hard to pin down. They’d have to undertake a massive program of rethinking operations, a task they’ve been singularly unable to undertake in the past. In fact, they can’t even get internal agreement on whether to “modernize” or “replace” OSS/BSS, a debate I’ve personally witnessed.

That’s really the question here. In 2012 I was seeing operators face the question of a rethinking of operations, and dodging it. Nothing helpful, IMHO, has happened since. Was the topic so daunting it scared them off? I don’t know, but if operators don’t face it, they’ll have to face the consequences of falling profits.

What We Like, What We Hate, and How we Make Tech Decisions https://andoverintel.com/2025/02/06/what-we-like-what-we-hate-and-how-we-make-tech-decisions/ Thu, 06 Feb 2025 12:33:20 +0000 https://andoverintel.com/?p=6024 It used to be that we'd look at tech purchases in terms of what buyers like. We may now be entering a period when what matters is what they hate. A big part of this attitude adjustment relates to the shift of tech purchases from new projects to maintenance budgets, and another part is due to the increased difficulty tech buyers experience in getting and retaining skilled workers. Let's look at how this shift is impacting both enterprises and network operators.

When you go back a couple of decades, you find that buyers’ most-expressed concern was “vendor lock-in”. They saw their vendors’ strategy of building one of those “circular pull-through” ecosystems, where if you bought anything from them, that thing made you dependent on a whole family of products that were associated with your purchase, and which then let the vendor mercilessly exploit you by charging more and more as you got less and less mobility of vendor/product choice. Or they saw a vendor abandoning something or going out of business, leaving you high and dry.

Today, things are different. Today, what buyers fear most is…integration. The whole that’s made up of the sum of parts is a whole that needs assembly instructions, and that’s bad. Then, when the whole breaks, all the parts will point to each other and shrill out accusations. That’s bad, too. Finally, every player in the integrated whole will gradually evolve, so the whole can’t be assembled from the original parts any longer. That’s fatal, because you now have to start over.

The assembly issue is the source of the classic objections to integration. To get some number of vendors' stuff stuck together into a cohesive system, the buyer has to be willing to take the lead, or has to employ an integrator. The former requires a significant level of tech skill, more than enterprises think they can easily acquire and retain. The latter adds cost, and increasingly enterprises are reporting that the integrator is just "another finger in the circular pointing". In addition, the percentage of enterprises who say they can trust an integrator has fallen from 69% in 2010 to 33% in 2024.

The question of whether integrators are just more fingers in the pointing contest has also seen a view shift. In 2010, only 11% of enterprises cited issues with integrator finger-pointing, where in 2024 it rose to 77%. Most of this was due to a radical jump in the number of situations where integrators failed to meet their commitments to get things working together. In almost half these cases, the integrator blamed the enterprise for failing to address all the things expected of the integrator in the negotiations and contract, and about a third came from cases where the integrator blamed one (12%) or more (20%) vendors, but failed to “break the tie” between the parties.

Finger-pointing in the operations period is at least consistent; just over 80% of enterprises reported it in 2019 and 84% in 2024. I’ve been called in to address the fall-out of this particular problem, and it’s incredibly complex and difficult for enterprises to even identify someone qualified to do the job. What’s needed is a pretty deep understanding of the specific issue involved, which of course is difficult if that issue hasn’t been identified. I have to admit that all my experiences here came about because I had a preexisting project relationship with the enterprise involved, so they knew me and my areas of expertise.

The final problem, that of divergent evolution, has grown from only 7% of enterprises reporting it in 2010 to 34% reporting it in 2023. What's especially scary about this is that the cause of the problem is linked to the open/open-source movement. In fairness, it should be linked to any sort of standardization.

One early text on computer networks comments that "The wonderful thing about standards is that there are so many to choose from." Very true, and even so with open-source software. Let's turn to the operators for an example. Back when Open RAN was planned, almost 90% of operators said it would assure interoperability of elements. In 2024, when it was being deployed, only 40% of operators said that it had done so. This, in my view, is why a recent Light Reading article suggests that operators are blowing kisses at open ecosystems while buying from the same few giants.

I think that all classes of buyers are now seeing "open" or "standard" approaches as "catastrophe" insurance, not as a general-coverage contract. If my chosen vendor is open and fails, worst-case, I can hope to juggle in a replacement without a crushing integration cost. Lock-in? So what? Finger-pointing is eliminated with only one finger in the game, and if that player goes away, I have an exit strategy. Ugly, perhaps, but if I pick a big enough vendor the first time, the risk of their failing is limited.

This seems to me to be the challenge of open-model networking when applied to building a cooperative system, rather than a set of discrete devices with standard interfaces. Enterprises are, today, less likely to believe that even routers will interwork seamlessly across vendor lines. In 2010, 13% said they could not be relied upon to do that, while today 37% have that view. But expand to the level of a system of devices, something like Open RAN? Enterprises don’t have direct experience with that at levels I can say are statistically significant, but only 14% of enterprises say they’d get something like Open RAN from multiple vendors.

A lot of the old factors that drive vendor and product selection are falling by the wayside, but in an uneven way. Enterprises, for example, love open-model data centers based on Linux, standardized hardware, and even standardized middleware, but they’d really like to get it all from some source who can put it together. The same is true for operators and for networks. It’s great to have open technology, but it’s essential to avoid integration issues. These changes are driven by shifts in the nature and extent of our technology commitments, and they’re going to keep coming, keep influencing buyers and sellers, for years to come.

Who’s Winning in the Cloud (and Who Might Win Later)? https://andoverintel.com/2025/02/05/whos-winning-in-the-cloud-and-who-might-win-later/ Wed, 05 Feb 2025 12:31:46 +0000 https://andoverintel.com/?p=6021 Who’s winning the enterprise cloud race, why are they winning, and what might others do to change their own fortunes? Enterprises have been undergoing a kind of gestalt moment with the cloud, so it’s actually a good time to present them with other options, as well as to recognize that picking the right approach has been determining the fate of cloud providers for some time and will continue to do so.

The top-line question, the one about who’s winning in the cloud space, is hard to answer because it’s hard to define what “winning” means. The top cloud provider, according to Wall Street, is Amazon. The top cloud provider, according to enterprises, is Microsoft. The most strategic cloud provider is IBM. The most AI-centric is Microsoft. The most technically sophisticated is Google. The one to watch is Oracle. How different can you get?

My Wall Street friends tell me that Amazon’s cloud supremacy is based on the fact that AWS hosts more of the online services, both startups and mature public companies, than any other cloud. Online services, including and perhaps especially those you can characterize as related to social media, are the perfect cloud services. They are highly bursty in nature, making it expensive to build out to the capacity needed to cover peaks while still being at least somewhat economical in the valleys. They exchange information that has fewer governance rules than enterprise applications would. The business case for the cloud in this space is obvious, and what’s valuable above all is scope and stability of operation, and cost. Amazon has met those requirements from the first.

For enterprises, this has created a bit of mystery. Why, they wonder, is Amazon so great? Most bought into the view that “everything’s moving to the cloud”, which implied that somehow the business case for the cloud was universal. This is what resulted in a flood of disillusionment and stories of repatriation. Microsoft, who wasn’t selling to those online companies but to enterprises, had a better understanding of enterprise cloud needs, which is why enterprises have rated them at the top for five years running.

Microsoft’s challenge, according to enterprises, has become its lack of strategic influence among the larger enterprises, whose potential budget for the cloud is the highest. Strategic influence is important in influencing the launching of new projects, which obviously are key to maximizing and optimizing cloud utilization. This has helped companies like IBM and Oracle, who both have greater influence, gain cloud market opportunity.

OK, what I'm seeing is that Microsoft is likely to continue its expansion in enterprise market share, provided that current drivers remain dominant. If new projects come into the picture, either because market conditions drive buyers' interest in them or because players with strategic influence do the same, then things could change in favor of IBM and Oracle. New things, new differentiators, new outcomes.

AI is complicating the cloud picture too, and here we see a mixture of complex forces playing on all the vendors and cloud providers. Enterprises are most interested in the use of AI as an adjunct to current applications, especially ones related to business analytics. This has favored IBM and Oracle, who understand that space best. On the other hand, enterprise interest in AI overall could be classified as interest in the agent form of AI, which utilizes more specialized and limited machine learning or deep learning tools and is less reliant on large language models. In implementing agent strategies, IBM is reported to be "unbiased" with regard to cloud hosting of AI, while Oracle somewhat favors its own AI cloud hosting. None of the major cloud providers seem to be positioning for the broad interest in AI agents at this point, according to enterprises.

Right now, cloud AI is almost entirely generative AI of the type used in conjunction with search engines, chatbots, or email/document copilot elements. Self-hosted AI is used in the latter two missions, primarily to address data governance requirements. It would appear, based on enterprise comments, that if AI agents were offered as a service, either as a SaaS-like component for general consumption or in the form of a platform (PaaS) on which enterprises could host their own agents, a business case could be made for cloud hosting. That doesn't guarantee it would be successful, only that enterprises could consider the economics of a cloud AI agent in the same way as they increasingly expect to consider the economics of cloud hosting overall.

Microsoft is not seen as a leader in this, nor is Amazon. Google might have its own best shot at gaining market share here, since they are thought to have superior technical knowledge of the cloud and cloud-to-premises relationships. However, Google doesn’t have any significant strategic influence with enterprises; IBM, Oracle, HPE, and Dell all score higher and Amazon and Microsoft around the same as Google. Google also appears to be too smart to engage effectively with enterprises whose access to cloud-skilled personnel is limited.

It appears that there is an opportunity for cloud providers to exploit with respect to AI agent hosting, but that they’re not yet exploiting it. That may sound surprising, but the problem seems to be one of “AI literacy”. Enterprises in general have not been aware of the agent AI model for much more than a couple of months, and so have not been exploring its potential. Stories on agent AI have, IMHO, been short on facts and long on hype, and so have not supported effective planning. IBM and Oracle, according to enterprises, are now presenting at least a realistic view, but even for them there’s still the inertia of enterprise strategic planning to deal with. It will likely take until 2H25 for many agent-related projects to get started, and enterprises will likely first explore the way agent AI is used before they worry about where it’s hosted.

Cloud providers, given their relatively low level of enterprise strategic influence, would likely be best served by exploring options for the general-consumption AI agent-as-a-service concept. There are plenty of areas in finance and business analytics that could benefit from an expert agent’s help, and analytic missions generally can tolerate latency better than missions in process control, for example.

Has the DoJ Lost its Mind? https://andoverintel.com/2025/02/04/has-the-doj-lost-its-mind/ Tue, 04 Feb 2025 12:35:24 +0000 https://andoverintel.com/?p=6019 No matter the vertical, nothing is as difficult to assess as the anti-trust implications of M&A. In tech, that’s particularly true because of the pace of evolution and the breadth of influence and symbiosis within the space. Rulings that are based on traditional factors are almost certain to be flawed in reasoning, and thus likely to come to the wrong conclusion. That’s surely true of the decision to file suit to block the HPE merger with Juniper Networks.

I’ve worked with both HPE and Juniper in the past, and I’ve blogged about the deal as well, so you won’t be surprised that I have strong views on the DoJ suit to block it. Let me present my views in as organized a way as I can.

The first comment I'd make is that it seems to me (and many others) that the DoJ's case is built around the wireless LAN space, and frankly I think that anybody who believes modern companies would build an M&A around WLAN is simply crazy. If you want to believe that WiFi isn't a highly competitive space, with or without the HPE/Juniper deal, I can't stop you, but I'm sorry to say you're not thinking this through. Networking is a vast space, and wireless LAN is a pimple on it. In all my years of working with enterprises, talking with hundreds of them every year, I have never had one tell me that their network strategies were based on wireless LAN. The question isn't whether the merger is or isn't justified based on the wireless LAN space; that space isn't what the DoJ or regulators in general should even be worrying about.

Second, neither HPE nor Juniper is the leader in that vast and real enterprise network market; Cisco is. The combination of HPE and Juniper builds a better competitor against the market-leading Cisco, which is surely pro-competitive rather than anti-competitive. Cisco is primarily a network equipment company that also dabbles in servers (UCS) and software (Webex). HPE, absent the merger, is a server company that dabbles in networking. To me, it's obvious that HPE is building up its network side with the proposed merger, aiming at the broad opportunity in networking and defending against the risk that Cisco would expand its interest in servers and software, which I contend is just what Cisco's recent reorg suggests it might do.

Third, returning to Cisco’s reorg, is that not a pretty clear signal that networking is commoditizing? Show me a commodity market that doesn’t generate M&A pressure? When margins are low and differentiation is hard to come by, companies look to M&A to create more efficient operation. It’s not something regulators should want to stamp out; if anything it should be encouraged to assure that a market doesn’t stagnate because the meager profits are spread among too many players to allow for R&D and innovation.

Fourth, Mist AI may have its roots in wireless LAN, but it's now been subsumed into Juniper's AI-Native Networking and AIops positioning. Does anyone really believe (besides, apparently, some in the DoJ) that AI's network value proposition is wireless-LAN-specific? Staying with Cisco, isn't AI a big focus in their reorg? And if there's pressure on capital spending on network equipment (which we all know to be the case), then doesn't it make sense to assume that AIops would infiltrate netops? If that's the case, then how does HPE's ability to leverage Juniper's wireless LAN AI give it any unfair advantage in the market? Who in networking isn't expecting or already deploying AI in operations?

Fifth, commoditization in network equipment is furthered by generic switching chips, a market in which Broadcom dominates. Broadcom’s merger with VMware was approved, and this gives the company a key position in both networking and servers (via its VMware software). I’d argue that “white-box” devices are the real competitors in the networking space and that they effectively prevent anyone from exercising monopoly power in the market.

Sixth, the real issue in enterprise network equipment, and the only issue that really matters, is data center networking. For decades, this space has been influenced largely by the data center side, meaning in the end the policies on application software and enterprise data deployment. The greatest strategic influence in that space is IBM, who doesn’t even have network products these days. Could the HPE deal with Juniper make it more competitive in the data center? Sure; HPE is number two or three (Oracle and HPE juggle for second) in the data center strategic influence race. But who does M&A to make themselves less competitive? Does HPE/Juniper make that market less competitive? No; it makes it more competitive.

I think DoJ is suffering from a combination of being locked in the past and the classic forest-for-the-trees problem. Wireless LAN isn’t a key market; it’s been a decade since it has been. In any event, even the data center network market is more influenced by outside factors, as IBM’s influence shows, than by the juggling of equipment issues or competitors.

The bigger picture, though, is that both networking and IT are in need of a new set of business cases to drive up both spending and innovation, and I’ve long said that the biggest barrier to getting these business cases is their breadth of impact. Cisco, as I’ve noted before, was arguably launched by a single product—a NIC for a DEC minicomputer. The days when a tech revolution could be kicked off by something that simple and localized are gone forever. What we need now is a parallel push across many different technologies, and that’s not something that a marketplace that specializes in pieces of this and that will easily create.

We have systemic challenges in IT today, not little areas that when fixed relieve market pressures on a broad front. Only systemic players can meet these challenges, so for the DoJ to cripple a merger that creates one is a very wrong move, one that protects nobody and risks everything. The suit to block the merger, in my view, has no justification and makes no sense when examined through the lens of market reality.

OK, Who Wins in AI in the Post-DeepSeek World https://andoverintel.com/2025/01/30/ok-who-wins-in-ai-in-the-post-deepseek-world/ Thu, 30 Jan 2025 13:04:57 +0000 https://andoverintel.com/?p=6017 OK, DeepSeek happened. OK, it convulsed stocks. That was then, this is now. If the DeepSeek announcement was an earthquake on Wall Street, where will the rubble fall hardest, and what might be left in a condition not only better than expected but better than before DeepSeek came along? Those are the questions we’ll look at today.

Let's start with what I think is the most important point here. While there might be a winner in a race to nowhere, the winner can't expect much out of it. In the world of AI, OpenAI and the whole AGI (artificial general intelligence) thing has captivated those looking for entertainment, but they don't offer any clear path to monetization any time soon. Whether DeepSeek is great at the sort of online generative AI chatbots we hear so much about, then, is kind of irrelevant. What matters is whether it's great at something with real potential to generate ROI.

Enterprises have told me from the first that, to them, the value of AI lies mostly in what it might bring to existing applications. Their hot-buttons are largely in the area of business analytics, but there’s also value, in almost any other application area and almost every vertical, in embedding expert intelligence in applications. There are big benefits here, so that covers the notion of the “R” in “ROI”.

On the “I” side, we have less clarity. Nobody should seriously believe that an AI expert that takes 800 GPUs to run, ten times that many to train, and needs a river to cool is going to be economical. Entertaining, maybe, but not economical. Yet that’s what the majority of our AI seems to have been directed to producing. I’ve heard AI players bragging about the number of GPUs their model needed, when any CIO would tell them that the goal should be to use fewer rather than more. We could embed simple-model AI agents in everything, and we can’t do that with megamodels.
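The scaling argument here can be made with simple arithmetic. Below is a back-of-envelope sketch; the per-GPU cost and GPU counts are purely hypothetical numbers I’ve chosen to illustrate the ratio, not quotes from any vendor or the article.

```python
# Back-of-envelope serving-cost comparison. All numbers are
# illustrative assumptions, not real pricing.
GPU_HOURLY_COST = 2.50   # assumed $/GPU-hour (hypothetical)
HOURS_PER_MONTH = 730    # average hours in a month

def monthly_serving_cost(gpus: int) -> float:
    """Cost of keeping a model's inference cluster running for a month."""
    return gpus * GPU_HOURLY_COST * HOURS_PER_MONTH

mega_model = monthly_serving_cost(800)  # the "800 GPUs to run" case
small_agent = monthly_serving_cost(2)   # a contained, limited-resource agent

print(f"Megamodel:   ${mega_model:,.0f}/month")
print(f"Small agent: ${small_agent:,.0f}/month")
print(f"Ratio:       {mega_model / small_agent:.0f}x")
```

Whatever the real hourly rate turns out to be, the ratio between the two deployments is what matters: a 400x cost gap is the difference between an AI agent you can embed everywhere and one you can only justify as a showcase.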

What DeepSeek has done is illustrate how vulnerable the AI hype wave is to ROI reality, or even a tale that conveys a risk to that reality. If AI needs to be minimalist to succeed broadly, then the “cloud model” of enormous GPU data centers is a losing proposition, and that seems to be what people have been chasing. At least that’s what we’re hearing about. Truth be told, we don’t really know for sure that DeepSeek can even do what it seems to promise, but we can be darn sure that the threat alone is enough to send chills through our current AI thinking, and that almost surely means someone will move to realize the DeepSeek threat in an effort to counter it. This will change AI, and impact everyone in it, so let’s look at each of the elements of AI to assess the impact.

Top of our list is the AI model players, like OpenAI, Google, and Meta. I’ve blogged for some time that inside the AI skunk works of all these players is a set of initiatives with the same goal as DeepSeek. That’s because I find it impossible to believe that enterprises haven’t told all these companies the same thing they’ve told me. Contained, limited-resource AI agents have been what enterprises wanted from the first. IBM has been the only player in the whole market to not only recognize but articulate that, but I’m sure it’s been widely known.

Why then have the model players continued to measure success by counting GPUs? Because it generated good ink, because cloud players were good early sales opportunities, and because supporting a race by selling racing gear is smart even if the race is to nowhere. The real AI opportunity is complicated, and nobody wants to wait to realize benefits in a financial world where only the next earnings report matters.

Now, these guys will have to shift, because VCs and other vendors are likely to go after the DeepSeek opportunity, and the big model players will have to follow. The risk is that in visibly doing so, they validate the impact of DeepSeek and may hurt their current position. These companies face a risk, but watch Anthropic in particular, because they seem to be doubling down on the AI hype side of the picture.

Next, we have the AI chip players, most notably NVIDIA, and this is the group of players we can expect to see impacted by DeepSeek the most. In fact, most of the market debacle really came down to this space in general, and NVIDIA in particular. However, DeepSeek’s impact isn’t to destroy the space but to popularize it, which means there will be winners too.

In the near term, NVIDIA is at risk, because pressure on the public generative AI giants will put near-term chip spending under pressure. NVIDIA actually makes lower-end chips, including ones used by DeepSeek, but they’re lower-profit than the keystone GPUs the big guys buy. I’m sure they’ll push the lower-end stuff, but they can’t be seen to be abandoning their current cash cow.

Not so for players like AMD and Broadcom, and even Intel. If populist AI is the future, then cheap AI chips are essential, and players with no real exposure to the hype side of AI have everything to gain by pushing to get traction, which means pushing for competing low-cost AI models too. Somebody could find gold here, so this is a space to watch. Even NVIDIA’s risk is more near-term than long-term.

How about the software players? IBM, Oracle, Intuit, SAP…you know who I mean. This group could find themselves with nowhere to go but up, but a misstep on the ladder could be very hurtful. IBM, who alone has seen the AI truth, has an uncluttered path upward. Oracle, having bet a lot on AI training via the cloud, needs to shift its focus to enterprise-hosted AI, and if they mess that up they’re inviting trouble; IBM is ramping up its own initiatives there. Intuit has major risks to navigate, because AI agents could make new entrants into taxes and bookkeeping practical, and could open the door to a simplification of both spaces, just as Intuit has been raising prices across its product line to boost profit growth.

Platform software vendors likely have a net upside here, not only because a more populist version of AI would mean more locally hosted agents, which means more platform tools to host them, but because AI operations support offers a potential differentiator. Operations missions for AI are camel’s-nose opportunities, a way of getting AI into place without navigating a massive transformation. If the AI is agent-based, and requires modest resources, it can expand from any early ops outpost.
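The “ops outpost” pattern described above can be sketched in a few lines. This is a minimal illustration, not any vendor’s product: the classifier here is a keyword-rule stub standing in for a compact, locally hosted model, and all names and severity labels are hypothetical. The point is the modest-footprint shape of the agent, not the classification logic.

```python
# Sketch of a small ops-triage agent: a locally hosted classifier
# that sorts log lines before a human sees them. The keyword rules
# below are a stub for a compact local AI model; every name here
# is illustrative.
from dataclasses import dataclass

@dataclass
class Triage:
    line: str
    severity: str  # "page", "ticket", or "ignore" (hypothetical labels)

def triage(line: str) -> Triage:
    lowered = line.lower()
    if "outage" in lowered or "unreachable" in lowered:
        return Triage(line, "page")    # wake someone up now
    if "error" in lowered or "timeout" in lowered:
        return Triage(line, "ticket")  # queue for later review
    return Triage(line, "ignore")      # routine noise

logs = [
    "2025-01-30 core-router unreachable",
    "2025-01-30 API timeout on /billing",
    "2025-01-30 heartbeat ok",
]
for t in map(triage, logs):
    print(t.severity, "->", t.line)
```

An agent this small can run next to the systems it watches, which is exactly why an ops mission makes a low-risk first foothold: swap the stub for a real local model and the footprint barely changes.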

For the server vendors, this move is probably neutral-to-positive. It’s not yet clear how much new hardware would be needed for local AI hosting, but I do think that at least the shift to agent AI will promote local hosting of AI in the form of data center clusters.

Network players, notably Cisco, are a mixed bag. Self-hosting AI generates AI-cluster opportunity, but in general, AI in itself is neutral on network impact. However, network operations has the same opportunity for AI integration as IT operations, and it is likely to gain credibility through the linkage with agent AI. This would favor Juniper, since Juniper has the most mature AI position in the space.

Realistic AI, I believe, would benefit us more than hyped AI, but just because we’ve proven with DeepSeek that hype AI is vulnerable doesn’t mean it won’t continue on. Here we are on Thursday, and much of the market mess DeepSeek created on Monday seems forgotten. Yes, stocks aren’t back to the level they were on Friday of last week, but the panic is over. But not the impacts.

Wall Street loves bubbles, not just because professionals make money while a bubble is expanding but because they can make money when it bursts; traders can profit from a change in valuation in either direction. If I’m right, the Street, and both AI prospects and suppliers, will have to navigate this same sort of panic again, because what’s driven AI up to now is almost all hype. There is a value proposition for AI, perhaps a powerful one, but that’s not what we’re hearing about now. The question is which player in the AI space, if any, has known this was hype all along and has been working behind the scenes to fund that AI reality. Whoever has will emerge strong, and those who haven’t will surely suffer.
