What is a “private cloud”? I looked at enterprise views on this last year, so it’s a topic that’s hardly new. Still, there are new developments, as shown by recent reports of Broadcom’s success with VMware. The classic definition, a dedicated cloud environment exclusive to a single company, isn’t helpful. Enterprises themselves seem to see the term from a number of different angles, and their views are shifting enough to justify taking another look at the topic. Since my last blog in the fall of 2024, I’ve had comments on private cloud from 328 enterprises, and those comments seem to reflect three distinct visions of the term.
The most popular characterization, cited by 202 of the enterprises, is that a private cloud is a software deployment of virtualization technology for use in the data center. Those who say this seem to define virtualization technology as a combination of either virtual machine or container hosting and the tools needed for deployment, orchestration, and management of that hosting.
The second-place characterization is a refinement of the first, differing more in priority or mission than in substance. A group of 103 enterprises say that a “private cloud” is a software platform or middleware toolkit that provides for application hosting using either data center VM technology or cloud provider VMs. This, you’ll note, is the definition that dominated in September 2024, and the great majority who held it then hold it now.
The third-place characterization is cited by 20 of the enterprises, and it’s relatively new. This group says that a private cloud is a self-hosted implementation of public cloud technology, designed to replace expensive public cloud services. Think of it as the result of disillusionment with cloud costs, which we know has generated some very high-profile stories on repatriation. Still, as my numbers show, it’s a minority definition, and in fact the great majority of enterprises in this group have not abandoned public cloud services.
In one sense, the shift since September 2024 seems to reflect an abandonment of the “everything-moves-to-the-cloud” philosophy and a broad sense that cloud adoption policies need to be rethought. The majority view, held by the 202, is that you need to build your data center and your cloud hosting so as to create a hybrid common hosting model. The next view is that you need to prep for that, and the final view is that you need to make the data center as resource-efficient as a public cloud, so you can host things with the same root economies but without any cloud provider mark-up.
This shift is creating tension in other areas, one of which is the whole microservices-and-cloud-native thing. If the cloud is suspect (or at least the motives of cloud providers are suspect), then might everything cloud-specific be tainted? Just as enterprises have been oversold on cloud benefits, have they been oversold on cloud technology? Maybe.
Only 273 enterprises have offered me comments on “cloud-native” in the last couple of years. Of this group, 22 expressed a positive view of their experience with it, 129 expressed a negative view, and the remainder were still exploring the value proposition and assessing the potential cost, benefits, and risks. I’m not comfortable claiming statistical significance for any trend toward the negative view, but there do seem to be more such comments over the last nine months or so. This roughly coincides with the increase in talk about repatriation, so I think we are seeing some signs that cloud-native concepts, applied incorrectly, have contributed to cloud cost overruns.
One other interesting fact out of the current comment group is that 302 of them said that cloud-provider tools beyond basic hosting (VMs, containers) and some database functions were likely to have a higher TCO than third-party software enterprises can also run in their data centers. In fact, 138 said that they realized now that relying on cloud web services rather than hosted middleware from a vendor like Red Hat or VMware was focusing their development too much in a cloud direction, raising costs and making it more difficult to either switch cloud providers or repatriate applications. Worse, it made it hard to develop for the data center, because the available middleware didn’t work like the cloud tools did, so they had no past development experience to draw on.
Is all this an indication of a sinister plot by cloud providers to lock users in? To a degree, it surely is. No seller is going to ignore the benefit of account control or fail to do what they can to achieve it. However, there are other factors. One is that early cloud applications came from social-media and other OTT providers, and while these did have some potential to teach lessons in online development, most didn’t reflect the usage model that enterprise applications present. How Netflix or Facebook builds apps is highly relevant to someone getting into online content or social media, less so for someone building a front-end to their customer support tools, and even less for someone augmenting core business applications. But early on, nobody was doing either of those things, so all the attention went to what was actually being built.
I think that the trends I’m postulating here may be why Broadcom has made a success of VMware despite dire predictions to the contrary. It may even be why Broadcom did the acquisition in the first place. They could well have seen the signs of cloud-cost disenchantment, realized that it was going to drive a change in how the cloud was both viewed and used, and bought up the asset most likely to benefit from the change.
Could we have anticipated this, as an industry? Did the detour we seem to have taken into cloud-la-la-land end up wasting money and raising the bar for the development of not only a rational buyer business case but a rational cloud provider business model? Were we misled by hype? I think the answer to all these questions is “Yes!”, and I think we have to ask whether we’re doing the same thing with AI today.