Is it “everything old is new again”, or “everything new is a bad idea”? Whichever it is, it sure seems like a lot of hot new tech ideas are getting a harder second look these days. The cloud, AI, and microservices are all tech revolutions that are getting some serious questions asked about their value. It’s like tradition is returning to IT, which of course isn’t what media/analysts traditionally like to see. How many articles about the data center of old winning out can we tolerate? But tradition gave us punched cards, analog modems, and mobile phones the size of a Manhattan phone book (which, of course, is another tradition we’ve weaned ourselves away from). Innovation is what created the modern world, so how can these seemingly innovative things now be questioned? Enterprises have some ideas.
“It’s all bandwagoning,” one long-time CIO told me. “You get something new and interesting, and it gets noticed. Those who are using it get noticed, and being noticed is better than being anonymous. So you use it.” About two-thirds of all enterprise contacts I’ve had over the last three decades have said something like this. But however true it is, it doesn’t explain why those new things turn out to be called “bad”. Some new and interesting things have been solid winners from the first. Some had a slow start and then gradually proved themselves. Why do we see three more recent revolutions all questioned?
Many enterprises still see this as a “bandwagoning” issue. The problem with status-driven adoption is that it isn’t driven by thorough, thoughtful assessment. As a result, there’s a higher probability that the new thing will be applied the wrong way, or to something it doesn’t fit at all. About half of all enterprises think this is a root cause of our harder-look phenomenon.
A root cause, but not the only root cause. Almost the same percentage think that hype is a root cause. “The cloud will take over, or maybe AI will take over, and in any event you need to be planning for microservices,” another CIO remarked. “It’s hard to resist at least an exploration of something that everyone is reading about, particularly your CEO. And everyone knows that exploratory projects have inertia; they can blunder into adoption just because so much time and effort was sunk into evaluation.”
It’s interesting that the CEO who’s held the position the longest of all those I’ve interacted with took a different slant. “There’s really one driver of IT change, and that’s the need to better empower people and their decisions. That means making IT reliable and accessible, more of each every year. And every year, after more low-apple strategies have been followed, it gets harder.”
The fusion of these four comments, I think, reveals what’s as close to the truth as we’re likely to get. What we’re seeing is a failure of cost-benefit analysis, created in large part because we’ve taken good ideas too far, not because we’ve created bad ideas. All three of these technologies are good, probably great, and possibly even revolutionary, but none of them is the “universal constant”, the hypothetical factor that, multiplied by any guess, yields the correct answer. No matter how great your new electric drill is, it won’t be valuable in turning on your TV. But…we’ve tried that with all three.
“The dumbest thing we did,” one enterprise CFO told me, “was to try to do business analytics in the cloud. When the software was in the cloud and the business data was in the data center, the transfer costs and latency killed us. When we put the data in the cloud too, the cost of updating it and our compliance people ganged up to kill us.” Of course any thorough assessment of that application would have revealed the problem, but everyone was caught up in the cloud euphoria.
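Just to show the kind of arithmetic a thorough assessment would have surfaced, here is a minimal back-of-envelope sketch in Python. Every number in it (data volume, per-GB transfer price, round trips per query, WAN latency) is an illustrative assumption, not a figure from this CFO’s case, and real pricing varies by provider.

```python
# Back-of-envelope check for cloud analytics with the data left in the data center.
# Every input below is an illustrative assumption, not a figure from the case above.

GB_MOVED_PER_DAY = 500        # assumed data pulled from the data center per day
TRANSFER_PRICE_PER_GB = 0.09  # assumed per-GB transfer price (varies by provider)
ROUND_TRIPS_PER_QUERY = 20    # assumed WAN round trips one analytic query needs
WAN_RTT_MS = 40               # assumed data-center-to-cloud round-trip time

monthly_transfer_cost = GB_MOVED_PER_DAY * TRANSFER_PRICE_PER_GB * 30
added_latency_per_query = ROUND_TRIPS_PER_QUERY * WAN_RTT_MS / 1000  # in seconds

print(f"Transfer cost: ~${monthly_transfer_cost:,.0f} per month")
print(f"Added latency: ~{added_latency_per_query:.1f} s per query")
```

Even with modest assumptions like these, the transfer bill and the per-query delay show up long before the first invoice does, which is exactly the point of testing the business case first.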
“We thought that microservices would give us scalability and availability, but the application was so complex it never ran correctly, and in any case just processing the one most common transaction took five times as long as it used to,” said a development director. “We fixed it by tossing out the whole idea of services and going to a monolithic model, but that didn’t give us what we wanted either.” The director admitted that they eventually realized that some “servicication” was good, but that too much was a disaster, and they’d never realized that would be the case until they’d had two failures.
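The latency side of that story is easy to reason about with a toy model: every in-process call that becomes an inter-service hop adds serialization and a network round trip, and those hops pile up on the most common transaction. The sketch below is hypothetical; the call counts and per-hop times are assumptions for illustration, not measurements from the director’s application.

```python
# Toy model: how per-hop overhead multiplies when one transaction is split
# across many services. All numbers are illustrative assumptions only.

IN_PROCESS_CALL_MS = 0.01  # assumed cost of a local function call
NETWORK_HOP_MS = 5.0       # assumed cost of one inter-service call (serialize + RTT)
BUSINESS_LOGIC_MS = 40.0   # assumed actual work done in the transaction

def transaction_time_ms(internal_calls: int, service_hops: int) -> float:
    """Total latency for one transaction, given how its calls are distributed."""
    return (BUSINESS_LOGIC_MS
            + internal_calls * IN_PROCESS_CALL_MS
            + service_hops * NETWORK_HOP_MS)

monolith = transaction_time_ms(internal_calls=50, service_hops=0)
fine_grained = transaction_time_ms(internal_calls=10, service_hops=40)

print(f"Monolith:        ~{monolith:.0f} ms")
print(f"Over-decomposed: ~{fine_grained:.0f} ms ({fine_grained / monolith:.1f}x slower)")
```

The model says nothing about where to draw service boundaries; it only shows why “some servicization is good, too much is a disaster” is a quantitative question, not a matter of taste.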
“We paid nearly a hundred thousand dollars for an AI analysis of our business, and it produced five recommendations that we presented to a department head meeting,” a CEO told me. “Two of them would have involved doing something everyone agreed would completely fail, for very obvious reasons. One was illegal, and the two remaining ones couldn’t generate enough benefit over three years to justify the cost of the analysis and ongoing AI use.” This company, like others, found out that it’s often impossible to know how much you’ll get from an AI project without first doing the project.
What do enterprises think would fix the problem? Some offer joking (maybe half-joking) notions like “Put all your visionaries in a sealed room for the first year after something new comes along!” Almost all CFOs, and well over half of CIOs, say that what’s needed is to “stamp out generalization and justification by reference”. With regard to both the cloud and AI, both groups agree that there should have been a requirement to frame a business case in enough detail to permit a test of it before going forward. “We accepted that everything was moving to the cloud, or that the cloud was always cheaper. Well, a lot of that everything is moving back, because the cloud usually turned out to be more expensive.”
The problem with things like cloud computing and AI, say these CxOs, is that companies don’t quantify benefits, which they say takes three steps. First, you have to identify the source of the costs and the offsetting benefits, and how they’re to be realized by the technology. Second, you have to define a test or trial to validate both the cost and benefit assumptions associated with the first step. Finally, you have to run the test/trial and gather the data to get approval. You don’t accept published claims.
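Here is a minimal sketch of what that final step could look like in practice: compare what the trial actually measured against the assumptions in the business case, and only approve if they hold up. The metric names, figures, and tolerance are hypothetical, chosen only to illustrate the workflow these CxOs describe.

```python
# Sketch of the validation step: check trial measurements against the
# business-case assumptions before approving. Names, numbers, and the
# tolerance below are hypothetical, not a prescribed method.

def validate_business_case(assumed: dict, measured: dict, tolerance: float = 0.15) -> bool:
    """Approve only if every measured metric stays within tolerance of its assumption."""
    ok = True
    for metric, assumed_value in assumed.items():
        measured_value = measured[metric]
        drift = abs(measured_value - assumed_value) / assumed_value
        status = "OK" if drift <= tolerance else "FAILED"
        ok = ok and drift <= tolerance
        print(f"{metric:>20}: assumed {assumed_value:>9,.0f}, "
              f"measured {measured_value:>9,.0f}  [{status}]")
    return ok

# Step 1 output: assumed annual costs and benefits from the business case.
assumed = {"annual_cost_usd": 250_000, "annual_benefit_usd": 400_000}
# Step 2/3 output: what the limited trial actually measured, scaled to a year.
measured = {"annual_cost_usd": 310_000, "annual_benefit_usd": 330_000}

approved = validate_business_case(assumed, measured)
print("Approve project" if approved else "Revisit the business case")
```

The mechanics are trivial; the discipline of writing the assumptions down, testing them, and letting the test say “no” is what’s usually missing.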
Microservices are in one way different and in another not so much. None of the enterprises said that CFO validation of a microservice decision was needed unless implementing the decision was associated with project spending that had to be reviewed. Even when that happened, it was rare for the CFO review to actually look at the “value” of microservices versus alternatives. CIOs say essentially the same thing; microservices were recommended by development teams, and no more likely to be questioned than the choice of a programming language. They admit that they wonder whether other decisions, like containerization or virtualization, should also have been given a harder look.
In summary, enterprises admit they’ve not given new technologies the thorough validation that they should have. I think that the comment on “bandwagoning” captures a big part of the problem, but I also think that we’ve come to rely too much on information sources that are more beholden to vendors/sellers than to buyers, and also on summaries or snippets of information rather than extensive documentation. Enterprises are mixed on this one, though; most executives are looking for CliffsNotes. Maybe they’re finding them too often already.