Cloud computing has surely been transformational, but it also contends with AI for the lead in the tech-hype category. What gives the cloud the edge in this cynical race, perhaps, is the endurance of the hype. Year after year we heard about how everything was moving to the cloud. Now we hear that everything, or most stuff at least, is being repatriated. Broadcom said in a recent “private cloud report” that 69% of enterprises are considering repatriating some cloud apps. My own data suggests it may be more like three-quarters, though the number of apps involved isn’t impressive. More significantly, almost all enterprises have told me they realize they haven’t gotten the cloud paradigm right, and they’re being more cautious about cloud project approvals. That’s already slowed cloud growth somewhat.
We don’t really need more data on the statistics of cloud projects, nor, I think, do we need to talk about what’s getting repatriated. We should instead look at the cloud projects that have been highly successful. I have comments from 48 enterprises whose cloud projects blew past their approval levels and delivered even more than had been projected. What did these enterprises do differently?
If I set the bar for critical success association at 75%, meaning that at least 36 of the 48 enterprises must have referenced a particular approach or screening criterion, I find four specific things that qualify. I’ll look at each of these, and a couple of near-misses, in this blog.
The top success requirement, listed by 45 of 48 enterprises, was applications with a high level of load variability, where peak workloads were at least 2.5 times average workloads. This requirement plays to a basic truth about hosting: if you have to size for peaks that are very high indeed, you waste too much money self-hosting. Join with other enterprises whose peak periods are likely driven by different factors than yours, and your loads can fit into others’ valleys on shared hosting resources. The cloud provider, by statistically multiplexing work, can offer better economics than you can generate yourself.
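A toy simulation can make the multiplexing argument concrete. This is a minimal sketch under invented assumptions (ten tenants, each a flat base load of 100 units with a 2.5x daily peak landing at a different hour); the numbers are illustrative, not drawn from the enterprises surveyed.

```python
import random

random.seed(42)

HOURS = 24 * 7  # one week of hourly load samples

def bursty_load(peak_hour, base=100.0, peak_ratio=2.5):
    """Synthetic hourly load: flat base with a daily spike at peak_hour."""
    series = []
    for h in range(HOURS):
        load = base * peak_ratio if h % 24 == peak_hour else base
        series.append(load + random.uniform(-5, 5))  # a little noise
    return series

# Ten tenants whose daily peaks land at different hours (0, 2, ..., 18).
tenants = [bursty_load(peak_hour=i * 2) for i in range(10)]

def peak_to_avg(series):
    """Peak-to-average ratio: how much capacity headroom a host must carry."""
    return max(series) / (sum(series) / len(series))

individual = [peak_to_avg(t) for t in tenants]
combined = peak_to_avg([sum(col) for col in zip(*tenants)])

print(f"average individual peak/avg: {sum(individual) / len(individual):.2f}")
print(f"combined peak/avg:           {combined:.2f}")
```

Each tenant alone must provision for roughly 2.4x its average load, but because the peaks don’t coincide, the pooled load’s peak sits only a few percent above its average, and that reclaimed headroom is what a provider can turn into better pricing.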
It may even be better than that, say 41 enterprises. Almost all online retail is driven to some degree by holiday sales, which means the peak loads are synchronized, and in theory the cloud provider should face less favorable economics under its standard pricing paradigms. In practice, other activity seems to fade back during those retail peaks, or perhaps the cloud providers accept a period of lower efficiency as the price of getting the customer. Either way, this group all moved traditionally seasonal applications to cloud hosting, and had great success.
The second success requirement, cited by 40 of the 48, was significant geographic spread of the applications’ users. All in this group had a user base of at least continental scope; 33 reported multi-continent spread, and 17 were “global” in the scope of their applications’ users. Of this group of 40, 34 said their applications required a high degree of user interactivity (more on that below), and to support this model for a broadly scattered user base, they needed to draw on similarly distributed hosting resources. Note that even companies that supported a purely regional base sometimes had trouble providing a high level of user QoE across their whole user spread, so the overall quality of infrastructure also plays a role.
The third success requirement, cited by 37 of 48, was that front-end user interactivity created more than 40% of the compute load. This makes sense because the more compute needs are focused on the application elements that present highly variable and QoE-sensitive demand, the harder it is to build a data center to accommodate that variability.
Companies increasingly see their IT activity as falling into transaction processing, reporting, and user interaction (workers, partners, prospects, customers). The first two classes of activity don’t normally exhibit the load variability or user distribution needed to make the cloud a clear economic winner. The last is the key, so the more work it represents as a percentage of total compute load, the more sense it makes to offload it to the cloud. Otherwise, normal capacity management practices can likely make it work in the data center.
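To see why the interactive share matters, here’s a back-of-the-envelope comparison. All the unit prices and peak ratios below are invented for illustration (data center capacity at 1.0 per unit provisioned, cloud at a 40% premium per unit actually used, a steady back end at a 1.2x peak); only the 40% interactive share and 2.5x front-end peak ratio echo figures from the text.

```python
# Illustrative (made-up) unit costs: the data center charges for provisioned
# peak capacity; the cloud charges a higher rate, but only for usage.
DC_COST_PER_UNIT_CAPACITY = 1.0
CLOUD_COST_PER_UNIT_USED = 1.4

total_avg_load = 1000.0
front_end_share = 0.40     # interactive front end as share of total compute
front_end_peak_ratio = 2.5 # bursty, QoE-sensitive demand
back_end_peak_ratio = 1.2  # steady transaction/reporting work (assumed)

fe_avg = total_avg_load * front_end_share
be_avg = total_avg_load * (1 - front_end_share)

# All in the data center: provision capacity for each component's peak.
all_dc = (fe_avg * front_end_peak_ratio
          + be_avg * back_end_peak_ratio) * DC_COST_PER_UNIT_CAPACITY

# Hybrid: steady back end stays in the DC; bursty front end pays cloud per use.
hybrid = (be_avg * back_end_peak_ratio * DC_COST_PER_UNIT_CAPACITY
          + fe_avg * CLOUD_COST_PER_UNIT_USED)

print(f"all data center: {all_dc:.0f}")
print(f"hybrid:          {hybrid:.0f}")
```

With these assumptions the hybrid split comes out well ahead; shrink the interactive share or its peak ratio and the advantage narrows, which is consistent with a threshold like the 40% figure the enterprises cited.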
Requirement number four is a division of functionality in the application’s use, creating distinct “consideration” and “action” phases, or a limited relationship between the application’s data needs and core business data. This was also cited by 37 enterprises (24 of which also cited the third requirement above). What it means is that if the interactive part of the application requires significant core database access, moving it to the cloud creates both a data transfer cost and a data sovereignty, security, and governance challenge. Very few enterprises believed that hosting core business data in the cloud was a viable concept, and most felt that exchanging it regularly with cloud components created the same risks.
Retail sales, which usually involve a pretty significant browsing-and-considering interaction before anything transactional is actually committed, or which require only a minimal amount of core data to support the initial consideration phase, are ideal targets. If everything the application user does requires access to core data, the cloud is less attractive.
The first of our near-miss requirements, cited by 32 of 48, was application missions that involve many low-unit-value transactions rather than a few high-unit-value ones. I’m personally inclined to think that this criterion is most likely linked to the same factors as our first and fourth, but the companies who cite it say that the higher the unit value of each transaction, the more likely it is to involve interaction with core data, thus binding the application closer to the data center.
The second near-miss requirement, cited by 27 of 48, may also be a technical companion of one of the four validated ones: the application’s implementation lends itself to breaking out the front-end, user-interface portion. Long-term economics wouldn’t be affected by this factor; it’s a project cost factor. However, some users report applications that can’t readily be modified, either because they’re provided by a third party or because the application is old and maintenance is difficult.
There was also one interesting data point on “practice”, meaning the way a prospective cloud application was implemented. Of our 48, 41 said their approach was to start with only basic cloud features augmented by open platform tools, and to adopt specialized cloud web services only after a cost audit had been done. This group said that in nearly all cases, these specialized tools lower development costs but raise operating costs, and won’t generate optimum return in most applications. It’s important to note that the same open platform tools are available for data center deployment, making hybrid cloud easier and facilitating migration of components between cloud and data center if an evolution of requirements dictates a rebalancing down the line.
Overall, the thing that stands out for me is that all 48 of these enterprises seemed to understand the cloud. They hadn’t bought into the simplistic media notion of universal replacement of data centers by cloud hosting, nor did they believe everything had to be built as cloud-native microservice meshes or based on sophisticated cloud-resident features. They were savvy shoppers, and that’s the most important lesson of all for enterprises facing the evolution of application hosting.
