Wall Street is always important, but not always insightful. Because the Street's view of a company or strategy determines share price, which usually has a decisive impact on management policy, we have to follow it. But the Street has its own agenda, a desire for a revolution that can be played, so we have to take its views on technology itself with a grain of salt. Credit Suisse recently held a conference on TMT (telecommunications, media, and technology) that brought all this home.
The most important thing that I take away from the conference, based on attendee comments and material, is that the Street still doesn’t understand the cloud. Most, I think, don’t even know what it means at a technical level. Nowhere is this more obvious than in the way the Street covers “hybrid cloud”.
Suppose you build an extension on your home, maybe a new deck or a dormer and extra room. Are you building a house? Obviously not; the house is being extended. If you have a data center and add on a cloud front-end, are you building a hybrid cloud? No, you're extending the data center strategy by adding a public cloud strategy. The point of the analogy is that when you build onto an existing foundation, it's the addition that dominates the planning, or should. You don't need to add much to the house to support the deck, nor do you have to add much to the data center to support hybrid cloud.
One Street takeaway from the CS conference was that hardware vendors were seeing “choppy” demand but “the secular focus remains squarely on the push for hybrid cloud.” The push is real, but the impact on hardware is highly speculative.
Nearly all the enterprise cloud use today is focused on providing a web/mobile front-end to legacy applications. Those applications expose a set of APIs (application programming interfaces) that are normally used for online transaction processing, and the cloud then feeds the APIs with work, essentially replacing dedicated terminals (remember those?) or local PCs. The user experience is largely set by the cloud front-end. This is "hybrid cloud", meaning that applications are a hybrid of public cloud and data center processes. The former, the cloud front-end, is the new piece, though. Little or nothing is done to the data center.
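A minimal sketch of that front-end pattern, in Python, may make it concrete: the cloud-hosted piece shapes the user's web/mobile request and feeds it to the legacy transaction API, playing the role the dedicated terminal once did. All the names here (place_order, OrderRequest, the transaction field names) are hypothetical illustrations, not any real product's API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class OrderRequest:
    """What the web/mobile user experience collects in the cloud."""
    user_id: str
    sku: str
    quantity: int

def place_order(req: OrderRequest, legacy_api: Callable[[dict], dict]) -> dict:
    """Cloud front-end handler: validate and shape the user's input, then
    feed the data center API exactly as a terminal session once did."""
    if req.quantity <= 0:
        raise ValueError("quantity must be positive")
    # The legacy API sees the same transaction format it always has;
    # nothing in the data center changes.
    txn = {"USER": req.user_id, "ITEM": req.sku, "QTY": req.quantity}
    return legacy_api(txn)

# Stand-in for the unchanged legacy transaction processor in the data center.
def fake_legacy_api(txn: dict) -> dict:
    return {"status": "ACCEPTED", "echo": txn}
```

The design choice worth noting is that all the novelty lives in the front-end function; the back-end stand-in is deliberately untouched, which is exactly the "extending the house" point above.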
As enterprises evolve this front-end/back-end cloud-and-data-center model, what usually happens is that work on the user experience opens an opportunity to create a different information relationship between the user and applications. This different relationship is created by exposing new APIs, new processing features or information resources. I’ve talked with a lot of enterprises about how this is done, and none of them indicated it was a task that required rethinking data center hardware.
Another related cloud takeaway actually comes from somebody who should know better, IBM. IBM said that “Chapter One” of the cloud was where we were now, where 20% of “the work” had moved to the cloud. They said this was largely “customer-facing applications”, which is a vague characterization of the cloud-front-end model that’s really what’s out there, but then they said that “Chapter Two” was moving the other 80%.
There is a next-to-zero chance that 100% of enterprise applications are moving out of the data center. There is only a small chance that even half of them will; my model says that the fully mature cloud model will end up with about 42% of enterprise work in the cloud and the rest in the data center. Most of that 42% isn't "moved", it's newly developed for the cloud model. Later on, IBM introduces another set of numbers: 60% of workloads migrate and 40% don't. More realistic, but my model reverses the percentages, and I think there's very, very little chance that IBM's figures are right.
One reason for the difference in cloud view might be the way IBM and the Street view containers and Kubernetes. IBM thinks the biggest drag on adoption of the cloud is the difficulty in gaining comfort in containerization. That raises my next point of Street confusion; the Street thinks all cloud applications have to be containerized, and that all containerization is aimed at the cloud.
Containers are portable application components, in effect. They are an essential element in a good virtualization strategy, and containers, along with Kubernetes orchestration of container deployment, were really first adopted by enterprises in a pure data center hosting context. You can tell that because the public cloud adoption of Kubernetes (Google invented Kubernetes, so we'll exclude their use) lagged the enterprise data center use of the concept. Not only that, recent advances to Kubernetes have been aimed at making it work in a hybrid cloud, which wouldn't have been a necessary add-on had Kubernetes been designed for the cloud to start with.
Containers are a great strategy for any data center, and a beyond-great strategy for any company with multiple data centers. Container adoption has been proceeding in the data center from the first, and it’s almost certain that we’d have containers and Kubernetes sweeping the software/platform space were there no cloud computing at all. However, that doesn’t mean that there’s no container action going on in the hybrid cloud space.
What's really going on here is what you might call the "gravitational pull" of the cloud. As we add APIs to data center applications to enrich the user experience, we create "cloud pull" on the application components that present those APIs. In more technical terms, software running in the data center, as it's bound more tightly to the cloud's scalable and elastic processes, tends to need to become more scalable and elastic itself. Non-scalable APIs can't effectively connect to a highly scalable front-end.
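The mismatch is easy to see in back-of-the-envelope terms: an elastic front-end multiplies its per-instance load by however many instances it scales to, while the non-scalable API behind it has fixed capacity. This toy calculation (all the numbers are made up for illustration) shows why the back-end component feels the "pull":

```python
def backend_saturated(front_end_instances: int,
                      requests_per_instance: float,
                      backend_capacity: float) -> bool:
    """True when the offered load from an elastic front-end exceeds the
    fixed capacity (requests/sec) of a non-scalable back-end API."""
    offered_load = front_end_instances * requests_per_instance
    return offered_load > backend_capacity

# Four front-end instances at 50 req/s fit under a 300 req/s API;
# scale the front-end to ten instances and the fixed back-end saturates.
```

The only fixes are to throttle the front-end (defeating its elasticity) or to make the API-presenting component scalable too, which is the "cloud pull" at work.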
Making these new components at the front-end/back-end boundary scalable means applying more cloud principles to their design, and also making it possible to move some instances into the cloud if local resources dry up. This "cloudbursting" is where you end up needing to orchestrate across the boundary between cloud and data center, or among clouds.
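The cloudbursting placement decision itself is simple to sketch: prefer local data center capacity while it lasts, and send the overflow instances to the public cloud. This is a deliberately minimal illustration with hypothetical names; real orchestration would sit in a Kubernetes-style scheduler spanning both environments and would weigh cost and latency, not just slot counts.

```python
def place_instances(needed: int, dc_free_slots: int) -> dict:
    """Split a scale-out request across the data center and the cloud,
    bursting into the cloud only when local capacity is exhausted."""
    local = min(needed, dc_free_slots)   # prefer local resources first
    burst = needed - local               # overflow "bursts" to the cloud
    return {"data_center": local, "cloud": burst}

# With 5 free local slots, a request for 8 instances places 5 locally
# and bursts 3 to the cloud; a request for 3 stays entirely local.
```

The same logic run in reverse, draining cloud instances as local slots free up, is what lets components "float" back into the data center when the load subsides.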
In effect, we have a middle ground between cloud and data center where elements can float between the two, scaling up into the cloud or back into the data center depending on load and performance issues. That middle ground will get larger, constrained eventually by the fact that cloud economics favor applications that have highly variable workloads, and core business applications (once the front-ends have been moved to the cloud) lack that characteristic. This constrained-migration thing is entirely driven by software issues, of course.
And that is what I think is the big miss for the Street, as the conference illustrates. There’s a focus on the cloud and the future in hardware terms, not recognizing that what’s driving change in IT and networking alike is software. The vendor with the best hardware may win server deals, but they’re not going to drive cloud migration, and in fact their sales are slaved to the hosting and application strategies that software will set, and is setting.