If you haven’t had enough of both AI and tech company deals, consider the announcement by Cisco and NVIDIA. The overall goal is pretty clear: support enterprise deployment of AI in enterprises’ own data centers. There’s also a good chance the move is intended to overhang Juniper’s AI-Native announcement, respond to IBM’s apparent AI success, and perhaps even overshadow the HPE/Juniper deal. Are we going to see big AI-data-center wars among network vendors? Anything Cisco does has to be looked at because of its market power in networking, so that question has to be addressed.
Let’s start with a basic truth. Cisco is not taking a leading position here, because they really don’t want one. Traditionally, Cisco has been a fast follower, letting others take the risk and then overwhelming them with market power. Cisco has also always been one to counterpunch competitive positioning, overhanging others’ stories to keep them from getting too much attention. There is nothing technically revolutionary in this announcement; it’s really nothing more than a business relationship, a marketing move. Is it, then, nothing more than an attempt to stifle other announcements, or is it the opening move in a Cisco fast-follower strategy? We’ll have to look at a lot of data points to decide, starting with the Cisco/NVIDIA deal itself.
The primary piece of the deal is pretty simple. Cisco is adding NVIDIA’s Tensor Core GPUs to its M7 UCS rack and blade servers, and adding NVIDIA AI Enterprise software tools to its price list. In addition, the two are adding jointly developed and validated reference architectures for AI hosting to Cisco’s Validated Designs inventory. Cisco therefore intends to sell a complete AI hosting solution, not just an AI networking solution.
Enterprises have been telling me from the first that they were not going to push company secrets into public AI models; they were going to host AI themselves. That would mean hosting a large language model (LLM), of which there are both open-source and proprietary versions available. That’s not bad in itself, but the problem it creates is that enterprises tend to pick an LLM strategy first and only then work out how to host it, which means they go into their RFP process without a favored hosting approach. Neither NVIDIA nor Cisco wants that; as incumbents in their respective fields, they want the RFPs wired to their advantage. But since neither presents a natural source on its own, why not band together and be one?
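To make the self-hosting idea concrete, here’s a minimal sketch of what serving an open-source LLM on in-house GPU hardware can look like, using the open-source Hugging Face transformers library. The model name and prompt are illustrative placeholders of my own, not anything Cisco or NVIDIA has specified in the deal.

```python
# Minimal sketch: running an open-weight LLM on local (on-prem) GPUs
# with the Hugging Face transformers library. The model chosen here is
# just an example of an open model an enterprise might pick.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # hypothetical open-model choice
    device_map="auto",                           # spread across available local GPUs
)

# Company data never leaves the data center; the prompt is handled locally.
result = generator("Summarize last quarter's sales anomalies:", max_new_tokens=100)
print(result[0]["generated_text"])
```

The point isn’t the specific library; it’s that the hosting decision (servers, GPUs, and the network connecting them) is where Cisco and NVIDIA want to be standing when the RFP gets written.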
The most important question the deal raises is whether there really is an about-to-explode enterprise AI hosting opportunity. Remember, the whole deal might be an example of Cisco’s competitive counterpunching, too. Right now, we don’t know just how much incremental value AI will add to enterprise analytics and business intelligence, the missions that enterprises tell me would be the only ones with enough value to drive a significant AI deployment. We do know that IBM is looking very good in the AI hype race, because unlike most other runners, they are actually delivering and installing AI in customer locations. We also know that HPE’s acquisition of Juniper may be aimed at competing with IBM in AI. Is that enough to justify a Cisco fast-follower move?
IBM’s watsonx is the number one AI innovation enterprises cite to me, and this article is a good description of their approach. You can see it’s a pretty comprehensive package. IBM is a competitor of HPE, and while I think it’s unlikely that HPE is buying Juniper just for its AI, I do think that HPE has aspirations to rival IBM in the AI space. Given that IBM isn’t a networking company, or even much of a server company, HPE may think its best strategy is not to immediately take on watsonx, but to emphasize the stuff they have that IBM does not, while still holding an AI card to satisfy the media. Networking AI data centers could be that positioning.
Juniper’s AI-Native announcement included the network switches for an AI data center. HPE would pick up that asset and could add it to their own AI hosting plans, which would then be a threat to Cisco for sure, and might even be perceived as a threat to NVIDIA…in positioning and marketing terms at least. That would also explain why Cisco introduced some linkage between the NVIDIA deal and Cisco Networking Cloud and Digital Experience Monitoring. Juniper has that sort of thing in its AI-Native announcement, and HPE would acquire the capability with Juniper.
Given that HPE already has ProLiant servers for AI, and they include NVIDIA GPUs, the tougher question is why NVIDIA would create a channel-conflict risk with the Cisco deal. I don’t think there’s a single rack-mount server vendor who doesn’t offer NVIDIA GPUs. It’s not much different from having Intel or AMD sell CPU chips to a bunch of vendors, then adding a new vendor. So what? It’s already non-exclusive. But it’s also true that if what we’re going to see is a combination of a hype-driven positioning war and perhaps early experimental deployment, then it wouldn’t hurt NVIDIA a bit to have both HPE and Cisco, rival players in the AI data center space, offering NVIDIA chips.
On balance, I think that both Cisco and NVIDIA actually see an explosion coming in enterprise-hosted AI. I think both want to be able to capitalize on it. I think both realize that while the early activity from IBM, the HPE/Juniper deal (which won’t close for nine to twelve months, according to my Wall Street friends), and Juniper’s AI-Native announcement aren’t strong enough to carry competitors to an easy victory, given time those competitors might come up with something. HPE in particular poses a risk because they already sell NVIDIA GPU systems, they have platform software, they have data center engagement, and they have Juniper and its AI-Native Network positioning. It’s a matter of whether HPE is smart enough to take advantage of all this.
Anyone who’s read my blogs since the HPE/Juniper deal came along knows that I’m not at all confident that HPE is capable of pushing the combined value proposition optimally. While creating a true, full-spectrum, AI-native-data-center-and-network story isn’t dumb, it’s not optimally smart either. It might be smart enough to hold a place for HPE, but not smart enough to prevent Cisco from stealing their thunder. HPE is going to have to play this incredibly well if they want to fend off a Cisco response.
HPE has the motivation, at least. The server market that forms the foundation of HPE’s business has been commoditizing as much as networking has. Cisco has sold servers, the UCS line, for a long time. They’ve never come up in my enterprise discussions as a server powerhouse; in most cases their sales seem to have been to operators or in network-coupled missions. It’s hard to see how Cisco could become a real player in the AI hosting space without a lot of additional effort. It could peddle self-hosted AI to enterprises who wanted it, but could Cisco actually play in a true AI explosion in the data center?
So what if there is a real and growing-to-hot market for enterprise self-hosted AI? That may be something NVIDIA is also looking at. They don’t want to rush out and promote private AI when they’re selling GPUs hand over fist to the big public-model players, particularly if it turns out that private AI isn’t a great business. So let Cisco bear the standard in the private AI space. If Cisco finds there really is a wave of opportunity, well, they don’t make AI GPUs, NVIDIA does. Cisco’s dominance is in networking, and if AI data center deployments create an enormous network opportunity, Cisco needs to be ready to leverage that into a dominant move.
Back to my opening, to competitive counterpunching versus the fast-follower strategy. The latter is what Cisco is doing here. It’s smart, but it’s also dangerous, because while it positions Cisco to interdict an HPE initiative and to counter IBM’s early AI success, it sets Cisco up as a merchant and not a vendor in the AI hosting space. It might also convince either or both of these Cisco competitors to actually do something smart, something enabling. There are great stories buried in the HPE/Juniper deal. There are great stories buried in the Juniper AI-Native announcement. IBM’s AI story is proving out at the sales level already. While the stories are a big reach for companies that don’t like even little reaches, they are also a reach to a place that Cisco won’t try to lead a charge to. Aggression from HPE or IBM could mean Cisco fast-following wouldn’t be fast enough. In that case, can Cisco change their DNA fast enough to counter at all?
There’s no new pressure on IBM to be aggressive in this space, but there is enormous pressure on HPE. They have to justify a $14 billion acquisition. HPE/Juniper can win this current battle of AI…for a short time. They’ve got perhaps six months to step up and own the real value proposition, the thing I focused on in my last two blogs. After that, HPE risks Cisco’s fast-follower strategy overtaking them, and once you let Cisco get out in front of you, their sales dynamism will make you a perpetual also-ran. Tech revolution is for tech revolutionaries, and Cisco’s deal with NVIDIA ensures that we’re going to find out whether there are any of those left at HPE or Juniper.