Cloud computing doesn’t always save money. That’s contrary to the popular view, certainly contrary to the publicized view, and controversial to boot, but readers will recognize that it’s a point I’ve made often. A recent article in VentureBeat makes that point, but it has been its own source of controversy, and frankly I’m not in agreement with many of the themes of the piece. So I went back over my own enterprise data with the hope of balancing the scales for the cloud, and I’ll share what I learned.
The first and most important point was that any discussion about “cloud” savings or benefits is an exercise in futility, because the topic is too broad. There are three major classes of cloud project, and the economics of each is unique. Second, most assessments of cloud costs versus data center costs fail because they don’t level the playing field in terms of assumptions. Third, most cloud projects are badly done, and a bad project creates bad results except by happy accident. I’ll talk about these points and present what enterprises have told me over the last three years, and you can draw your own conclusions.
Cloud projects can be grouped into three classes based on just what’s running. The first class represents applications actually moved to the cloud, things that were previously run on-premises and have been transported to cloud hosting with little or no modification. These represent about 25% of enterprise cloud applications. The second class represents application front-end additions hosted in the cloud, new GUI and presentation logic added to legacy applications. These represent about 65% of enterprise cloud applications, the largest class by far. The third class is cloud-native independent applications, applications written for the cloud and not designed as front-ends to legacy applications. These are only 10% of current enterprise cloud applications.
Applications moved to the cloud are limited in the extent to which they can exploit cloud benefits, and users report that the primary motivation for moving them in the first place is a form of server consolidation. You can’t scale these applications because they weren’t designed that way, and the cloud doesn’t do much to add resiliency to them either, according to users. Only a fifth of these applications “clearly met the business case” for the cloud. Another 40% were “marginally” justified, and the remaining 40% “failed to meet the business case”. About a third of that failed group were migrated back in-house. The experience here shows that cloud hosting is not inherently more cost-effective if the cloud’s benefits are constrained by application design.
The situation is very different for the second group, the application front-end usage that dominates enterprise cloud use. Users say that nearly 70% of these applications met the business case, another 25% did so marginally, and only 5% failed to meet the business case. Interestingly, only 10% of the failures were either repatriated to the premises or under active consideration for it. Users are happy with the cloud in these missions, period.
The third group is a bit paradoxical. According to users, just shy of 40% of these applications were clearly justified, another 10% marginally justified, and the remaining half failed to meet the business case. Over half of that failed group were already being (or had been) repatriated or rewritten, with the latter somewhat more likely than the former.
Why cloud-native, of all categories, isn’t meeting goals comes down to the second and third of my three opening points. Users were comparing in-house hosting costs to cloud costs, when the cloud was actually delivering more benefits than the premises hosting option could. Cloud-native applications are, when properly designed, scalable and resilient in a way that’s nearly impossible to achieve outside the cloud. The “when properly designed” qualifier, of course, is a key point.
Cloud-native development skills are in high demand, and most enterprises will admit that they have a difficult time acquiring and retaining people with that skill set. Many will admit that part of the problem is that their development managers and leads are themselves lacking in the skills, making it hard for them to identify others who actually have the right background. Without proper skills, the cloud-native applications often don’t exploit the cloud, and there are then fewer (if any) benefits to offset what’s inevitably a higher hosting cost.
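To make the “exploiting the cloud” point concrete, here’s a minimal sketch in Python of the design difference involved. It assumes a trivial piece of shared state (a per-user visit counter), and the class names and the stand-in store are hypothetical illustrations, not anything drawn from my enterprise data. The point is only that a lifted application keeps its state inside the process, so adding instances or surviving a failure isn’t straightforward, while a cloud-native design externalizes that state so instances are interchangeable and can be scaled or replaced freely.

```python
# A minimal sketch of what "exploiting the cloud" means at the design level.
# The names here are hypothetical; the shared state is a simple visit counter.

class InProcessService:
    """Lift-and-shift pattern: state lives (and dies) with the instance."""
    def __init__(self):
        self._counts = {}  # lost if the instance fails; invisible to replicas

    def record_visit(self, user: str) -> int:
        self._counts[user] = self._counts.get(user, 0) + 1
        return self._counts[user]


class SharedStore:
    """Stand-in for an external replicated store (in production, a managed service)."""
    def __init__(self):
        self._data = {}

    def increment(self, key: str) -> int:
        self._data[key] = self._data.get(key, 0) + 1
        return self._data[key]


class StatelessService:
    """Cloud-native pattern: instances hold no state of their own."""
    def __init__(self, store: SharedStore):
        self._store = store

    def record_visit(self, user: str) -> int:
        return self._store.increment(user)


if __name__ == "__main__":
    store = SharedStore()
    # Two "instances" behind a notional load balancer stay consistent...
    a, b = StatelessService(store), StatelessService(store)
    print(a.record_visit("alice"), b.record_visit("alice"))   # 1 2
    # ...while two in-process instances silently diverge.
    c, d = InProcessService(), InProcessService()
    print(c.record_visit("alice"), d.record_visit("alice"))   # 1 1
```

The second pattern is what lets a scheduler add, remove, or relocate instances without losing anything, and it’s the kind of design decision that gets missed when the skills aren’t there.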
If we turn back to the VentureBeat article, we can see that their story is about “cloud” economics, and it’s biased in a sense because the majority of public cloud use comes from tech companies, social media and the like, and not from enterprises. The Dropbox example the article cites illustrates this point, but it also shows why it’s dangerous to use cloud applications from startups to judge cloud economics overall.
Startups are capital-hungry, so the last thing they want to do is rush out and buy data centers and servers to host their expected customer load, then pay for them while they try to develop the business. Most start in the public cloud, and most who do eventually end up with two things: highly efficient cloud-native applications, and a workload that could justify a private data center with ample economy of scale. As I’ve pointed out a number of times, cloud economy of scale doesn’t increase in a linear way as the size of the resource pool increases; there’s an Erlang plateau. Any reasonably successful dot-com company could expect to nearly match public cloud economies, and if they don’t have to pay the cloud providers’ profit margins, they will of course save money.
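For those who want to see what that plateau looks like, here’s a minimal sketch using the standard Erlang B recurrence. The 1% blocking target and the pool sizes are illustrative assumptions, not numbers from my surveys; the point is that per-server utilization at a fixed grade of service climbs quickly and then flattens, so a reasonably large private pool captures most of the statistical efficiency a hyperscale pool does.

```python
# Illustrating the "Erlang plateau": at a fixed grade of service (1% blocking
# here), per-server utilization rises quickly with pool size and then flattens,
# so very large pools gain little further economy of scale.

def erlang_b(servers: int, offered_load: float) -> float:
    """Blocking probability for an M/M/c/c system via the Erlang B recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

def max_load(servers: int, target_blocking: float = 0.01) -> float:
    """Largest offered load (Erlangs) the pool carries at the target blocking, by bisection."""
    lo, hi = 0.0, 2.0 * servers
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if erlang_b(servers, mid) <= target_blocking:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    # Utilization climbs from under 50% at ten servers toward the high 90s
    # for large pools, then flattens -- the plateau.
    for pool in (10, 50, 100, 500, 1000, 5000):
        load = max_load(pool)
        print(f"{pool:5d} servers: {load:8.1f} Erlangs at 1% blocking, "
              f"utilization {load / pool:6.1%}")
```

That flattening is why a successful dot-com with a big, steady workload doesn’t need a hyperscaler’s pool size to get hyperscaler-like efficiency.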
Enterprises aren’t in that position. Most of their core applications are going to stay in the data center, and be augmented with cloud front-ends to improve quality of experience and resiliency. They could not replicate cloud capabilities in scalability and availability without spending more than they’d spend on the cloud, and their satisfaction with that class of applications shows they realize it. They also realize that without the special cloud benefits, the cloud for them will be more expensive. Where they can realize those benefits, the cloud is great. Where they cannot, the cloud is less great, and maybe not even good.
Front-end application development isn’t the same as core application development; the pricing, security, and compliance implications of the cloud don’t fit with simply transplanting things from the data center, as the experience of users with the first class of applications shows. The concepts of “cloud-native” development, from high-level application design to the APIs and details of the microservices, are not well understood, and almost everything written about the topic is superficial and useless at best, and wrong at worst. That’s why our third class of applications isn’t as successful as the second, front-end, class; there’s more to deal with when you do a whole application in the cloud rather than just a front-end.
There’s a lesson for the operators here, of course, and for 5G and O-RAN. Cloud-native hosting, or any hosting, of network functions and features is not a job for amateurs. Every single enterprise I’ve talked with about cloud projects told me that they underestimated the complexity of the transition, the level of knowledge required, and the extent to which the cloud changes basic software development assumptions. That’s true for anyone, not just enterprises, and the network operators need to keep that in mind.