IBM does a lot of good stuff, covering a fairly wide area, including AI. I cited an IBM-sponsored report as one of two in a blog last week, in fact. There’s another report out that at least relates to AI, but also has a broader target. Called “6 blind spots tech leaders must reveal”, it’s subtitled “How to drive growth in the generative AI era”, but there’s a lot more to it than AI if you look closely. There’s stuff I think is insightful and that connects with what I hear myself, and also stuff I think is totally wrong.
Before I take that close look, I want to answer a question some have raised privately to me. The question was “Is IBM a client of yours?” and the answer to that is “No.” I did some consulting with IBM way back, at least 25 years ago, but nothing at all since then, and nothing relating to their AI. Nobody can pay to have me blog about them here, and a vendor’s status as a client doesn’t impact what I say. Even if I write a contributed piece for a vendor (I’ve not done that for IBM), it’s done with the understanding that I will say only what I truly believe. That’s a promise, so now let’s get on with the analysis.
The report opens with something I think we all accept, which is that the CIO is critical to business success because technology is critical to business success. Enterprises that gave me strategic views in 2024 (394 in total) said (by a margin of 347 to 47) that effective use of technology was the biggest requirement for success, and would be so as far into the future as they could see.
Another interesting data point comes next: the C-level executives agree that IT “has become less effective at basic technology services over the last 10 years”. Even CIOs agree with this; 69% said it was effective ten years ago versus 47% now. My own contacts are in rough agreement with this, but they, and I, disagree with the report regarding the cause.
The report cites what some would say are root-cause issues but that my own contacts believe are simply abstract points that demand a more concrete assessment. It’s hard to disagree that “technology teams are called to have greater symbiosis with our business” but what develops that? My own data would point out that roughly 20 years ago, we started to see a decline in the portion of IT spending that originated with new projects, and that decline was acute ten years ago. What happened was that finding new IT projects that could make a business case became more and more difficult, meaning that IT was deliberately focusing on sustaining what had been justified in the past. How could you hope to sustain effectiveness at “basic technology services” when your IT wasn’t evolving and your requirements were?
The question of abstraction comes to a head on page 6 of the report, which lists the “Four critical capabilities and characteristics [that] set tech outperformers apart”. Nobody I know would argue with them, but I think almost every enterprise would say first that just citing them doesn’t implement them, and second that projects of the type now not getting approval are largely attempts to do the implementing.
What comes next in the report might justifiably be called a significant insight, but it’s one that would be easy to miss. In a section headed “Going all in on cloud and AI”, the report says “Today, tech leaders are prioritizing infrastructure investments, spending nearly one-third more on hybrid cloud than AI. Looking ahead, they are fully committed to the power of cloud and AI together.” Yes, it’s true that more is spent on hybrid cloud than on AI, because (so enterprises tell me) they’ve been doing application modernization on the cheap by building cloud front-ends to core applications. Yes, it’s also true that enterprises are committed to both cloud and AI in the future (though more growth is expected in hybrid cloud spending than in AI spending), but here’s where the significant insight gets underplayed. The report goes into the six blind spots without making what I think is actually its key point. That point is that what the report calls “generative AI”, which I believe is really about applications of large- and small-language models, could allow IT to connect to business in business’ own terms. If we can train a language model to understand business, then we can deepen that critical relationship without having to continually spend on new stuff. We get “project benefits” that are now foreclosed, because we lower the cost barrier to approval. How do the six points relate to that? Let’s see.
The first point is “Tech must be the core of everything we do”, and I respectfully disagree. Digital transformation, meaning IT transformation, has to make the business the core. Every past step that resulted in a big boost to IT spending over time moved tech closer to the business. Creating a business link to AI could be the next wave, by letting tech adapt dynamically to needs.
Point two is “Our collaboration is only skin-deep”, an opaque way of saying that we need to get buy-in for tech with a closer relationship between tech and CFOs. True, but not if our goal is to get CFOs to rubber-stamp tech spending. We need to adapt IT to fit business better in a way that leverages its assets to avoid a lot of unique and expensive projects. An IT model of a business created through AI? Why not?
Here’s an interesting statement, the third point: “Generative AI could break our organization.” The point I get from this is that our “IT model of a business” that AI creates could stress infrastructure and reveal current applications and data as a sort of house of cards. Part of this relates to the application modernization versus new projects dilemma I already noted, and part to the fact that any process that offers more autonomy in integrating IT and business needs could bypass controls over both utilization and information quality. Tech debt issues, if they exist, could be magnified.
A potentially related point, number four in the report, is “Our AI may be irresponsible.” That same autonomy cited in the last paragraph could also impact governance. Today, almost eighty percent of CxOs who comment to me on their use of generative AI in the form of a public service say they believe that some confidential data has been exposed. This is why most enterprises think that unlocking the real value of AI will demand self-hosting; most have already said they wouldn’t host core data or applications in the cloud for data sovereignty reasons.
Point five is another one that I think duplicates much of the earlier points: “Our data could be a liability.” AI is another of those historically famous “garbage-in-garbage-out” things; feed it bad stuff and you get bad results. Yes, enterprises agree, and interestingly this is perhaps the most common criticism of “citizen developer” or “low-/no-code” applications. Make it easy for anyone to do IT, and everyone who does it poses a risk that they’ve not only created a bad result, but contributed their badness to the company’s data inventory.
Enterprises with AI experience note that the selection of data to be used for training and analysis is critical, and cannot be left to amateurs. LLMs are a form of deep learning, which in turn is a form of machine learning. “What do you want your model to learn?” asks one expert. “If you’re not careful it doesn’t learn from your mistakes, it learns them.” Many companies train models not on primary data but on derivations, and many fail to train using outside factors that bear on the analysis. For example, modeling for sales/marketing strategy, a big element of business analytics, requires you to consider economic and even political factors that influence buyers. Selecting what you need to train on, based on what you want to learn from AI, requires data science skills.
As I often say in consulting engagements, “There’s no substitute for knowing what you’re doing,” which means there’s also no substitute for getting people who do on your team. Point six on the list, “We’re still fighting yesterday’s talent battle”, seems to me to also raise that project-versus-budget tension in IT spending. For two decades, we’ve favored IT spending that focuses on sustaining the old, and of course that reduces the incentive to bring in people who understand the new. We’re also not bringing IT closer to the worker, the movement that led to all the past tech booms. I’ve pushed the notion that not only have we missed empowering the 40% of the workforce that isn’t largely desk-bound, we’ve failed to introduce IT to their work, preferring that they come to IT. I think this section of the report misses that by suggesting that the workforce has to rethink practices and re-skill itself; work often defines how it needs to be done, and we need to accept and improve that way of working when redefining the task isn’t possible.
“The AI revolution is underway” is the title of the closing page, and…well…maybe “barely underway” would be a better point. Suppose that we somehow transported a modern compound bow and arrow back to the early days of stone tools. You might imagine a number of uses that would be tried, but would the people ever grasp the real mission? AI doesn’t represent as extreme a challenge, but I think there’s a thread there worth pulling. What is the value of AI? Is it empowering or replacing? Does it help make us productive as individuals, within our tasks, or re-task us? Do I hit someone over the head with the compound bow, stab them with an arrow, or shoot it from a couple hundred yards away? My choice of use will determine the value of the tool.
I think the profound truth the piece gives us a glimpse of is that AI unbridled is not going to do what we hope it will. It needs to be guided, contextualized. For AI to be fully impactful, we can’t depend on humans bridling its misbehavior, so we need to apply it within a “smart enterprise.” That doesn’t mean we put enterprises on autopilot, but that we collect and policy-bind information about operations so that it can run what we want it to and advise where we need it.
Yes, to cite another historic cliché, “to a hammer everything looks like a nail”, and you know I’m a believer in digital twin technology. I think that the best of AI lies in applying it within a model of a task, a process, a business, a life. I think that would meet the goals, and address the risks, of what the report discusses. I’d love to hear what you think, particularly from the over 400 enterprises who share comments with me. I think we can have some valuable discussions.