According to a recent story, an astonishing 80% of AI projects fail, and project failures are certainly a big problem. Of the 419 enterprises that offered me comments on their IT projects this year, all but 7 said that at least one IT project they’d launched in the past year had failed. None, however, reported anything like an 80% failure rate across all technology areas. I think the difference comes down to what counts as a project failure. Enterprises would likely agree that 80% of AI projects either fail to gain approval or fail in execution, but the great majority fall into the former category. In fact, from what I can glean from the commentary offered me, execution failures run about 35% for AI projects, 31% for cloud projects, and 24% for other projects. The rate of approval failure, roughly 40%, appears fairly consistent across all technologies. Thus, the actual difference between the AI failure rate and that of other technologies is, at most, a 75%-to-64% spread.
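That spread falls out of simple addition. Here’s a minimal sketch, assuming (as my framing implies) that approval failures and execution failures are both counted against the same base of proposed projects, so the two rates can simply be summed:

```python
# Hypothetical model of the failure math above: approval failures
# (~40%, roughly constant across technologies) plus execution failures
# (quoted as a share of all proposed projects). The additive treatment
# is my assumption about how the numbers combine.

APPROVAL_FAILURE = 0.40

EXECUTION_FAILURE = {
    "AI": 0.35,
    "cloud": 0.31,
    "other": 0.24,
}

combined = {tech: APPROVAL_FAILURE + rate
            for tech, rate in EXECUTION_FAILURE.items()}

# combined["AI"] is about 0.75 and combined["other"] about 0.64,
# which is the "75%-to-64% spread" quoted in the text.
```

Under that assumption, AI’s total failure rate edges out other technologies by only about 11 points, nothing like the gap the 80% headline suggests.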
Rather than trying to figure out where the higher number comes from, I want to look at IT project failures themselves. What drives them? Then we’ll see whether we can uncover what might lie behind the higher number the article cites.
Where projects failed to gain approval in the past year, the reasons offered fell into two main categories: failure to meet corporate ROI targets (62%) and a compliance/governance problem (36%). However, the former group accounted for almost all internal IT project failures, while compliance/governance was a larger factor in AI and cloud projects. In the case of AI, it accounted for almost two-thirds of all approval failures.
Turning to execution failures, we see a very similar division of causes: in 67% of cases, the problem was a failure to meet the business case, and in 28%, a failure to deliver on compliance/governance measures. The former was usually discovered only near the end of the project, while the latter was often caught earlier, during implementation.
Let’s look at execution failures in another light. Citizen developer projects using low- or no-code technology suffered execution failures at a 39% rate, a bit higher than reported for AI or cloud projects under CIO control. This rate, however, was more consistent across all the technologies I’ve cited, and it reflects the fact that citizen developers are less practiced in project management tasks, from inception through execution. I’ve seen this phenomenon often in my career, and I think it points to why the AI failure rate the article notes is so high.
AI covers a lot of technology ground, and the most popular form, the generative AI offered by the cloud giants, is delivered as a service. That means that in most enterprises, anyone with a budget can commit to it, whereas projects involving capital investment require approvals. Of my 419 enterprises, 388 said that line departments in their company could adopt AI-as-a-service without any special oversight, and 345 said they had uncovered examples of this in their own company, usually because a failure in the application led to an appeal to the CIO.
The article states that “The amount of hype surrounding generative AI means some executives believe its use can magically transform a company for the better.” Implicit in that is the presumption that those executives could make an AI commitment when their own AI knowledge was developed from the hype. That’s true for most line executives, but rarely for CIOs themselves. And enterprises tell me this is exactly what they find: what one called “phantom” projects involving AI or other technologies, launched on the strength of the publicity a technology achieves, which convinces a line manager that the technology could help them. Perhaps because of past experience (and frustration) with getting a formal IT/CIO project approved and delivered, the manager seeks an alternative in as-a-service form, which even a low-level manager can expense up to a specified limit. So, without any real expertise, the project goes forward, and it usually fails. Not because it’s an AI project, but because it isn’t getting the review that any tech project should get.
But the article cites six AI project failure reasons, and only the first two (“misunderstanding or miscommunicating what problem needs to be solved using AI” and “lacking the necessary data required to adequately train an effective AI model”) can be laid at the “citizen AI” doorstep. The other causes relate to real technical implementation failures of a type a line manager could never have been involved in. What I think this reflects is the “appeal to the CIO” that usually accompanies a line-driven technical project failure: the CIO organization picks up the project and tries to fix what is often (even usually) unfixable.
The usual trajectory enterprises describe for a failed AI project validates this. The project starts with a line decision to use some as-a-service AI tool. It runs into trouble because it either doesn’t do what’s expected of it or costs too much. The CIO organization looks at it, determines that it can’t be done with as-a-service AI for data sovereignty reasons, and tries either to frame a self-hosted version (running into the problems of model selection and hosting requirements) or to solve the sovereignty issues through data engineering. In most cases, no solution can make the business case, so the project fails.
Projects driven by the CIO group, either because they started there or because management had the good sense to restart a failed citizen AI project from the planning phase, still tended to fail a bit more often than other technology projects, owing to a lack of AI experience at both the conceptual and execution levels. The kinds of issues the article mentions are in fact too complex for most enterprises even to raise before they have practical AI experience to draw on.
Which leads us to another point. Enterprises say that almost 75% of all first projects in a technology area like AI fail, but the failure rate falls with each attempt: second projects fail at roughly 48%, third at 34%, and the rate then stabilizes at around 29%. One CIO offered this wry advice to prospective AI users: “Hire a team that’s already had a couple of AI failures!”
Maybe not as crazy as it sounds, huh?