My blog last week on occupation statistics and project targets generated immediate comments from enterprises, 208 in total as of yesterday. Based on these, I want to make some additional points, not only on the value of using occupation data to target projects, but also on selecting targets optimally based on that data.
It was this latter point that I found most interesting, and also a key factor in how we interpret the comments of enterprises on things like AI. None of the 208 said that they were planning AI projects based on value targeting, meaning picking targets based on the characteristics of the employee group(s) whose productivity was to be augmented. Only 143 said that they looked at value targeting even as a secondary factor, though all of them said it was important. That sure seems like a contradiction, one that deserves more explanation.
According to the 208, new tech projects including AI were almost always initiated by line department comments and questions, and sometimes by outright requests. They almost always were extensions or augmentations of things already being done with some form of IT, and about half of them came about because of a change in business conditions, regulations, economic trends, etc. This is why enterprises tend to see AI agents as software components; the projects that deploy them are projects that had deployed software in the past.
What’s different with AI versus “IT” projects is that line organizations are more likely to initiate requests for specific capabilities, or even contract for AI-as-a-service offerings, without IT coordination. IT is also more likely to “offer” AI in response to questions, comments, or requests from line organizations. The key thing here is that AI planning is reported to be fragmented; companies do not say that they’re formulating a broad AI strategy as much as sneaking up on one a project at a time.
According to enterprises, the simple truth is that AI productivity projects in their company are almost always aimed at managerial and professional/technical workers. These job categories have two or more of three critical characteristics: a high unit value of labor, decision-making and expense-approval roles, and significantly faster head-count growth than the labor force at large. Management positions hit all three of these, and so are most often targeted. Computer science, engineering, and healthcare professionals hit at least two of the three.
There is also a vertical-market difference in AI empowerment to consider. Interestingly, the highest percentage of empowerment comes in the “educator” job category, which has nearly 90% AI use (and almost all of it is cloud-hosted AI). IT vendors, architect/engineering firms, and financial analysts all report 70% or higher AI use.
Almost all of this AI use relates to the cloud-hosted as-a-service form of AI, much of it in chat form but increasingly using AI tools. Getting data on AI agent deployment is much more challenging because of how early we are in agent adoption, but some interesting information does emerge from the comments of the 208.
So far, AI agent use is largely driven through or by IT, unlike the use of generative AI as a service. It is somewhat more likely than a normal IT project to be stimulated by line-department interaction, but all the applications so far fit into one of the three categories of agent I’ve blogged on before: workflow, interactive, and integrated. The integrated model, requiring as it does embedding in other software, is the most likely to be driven by IT. The interactive form is the most likely to be heavily influenced by line organizations, and it is the only kind so far reported to ever be acquired and deployed entirely by a line organization. But even this style of agent almost always involves IT, because the great majority of agent missions (over 90% so far) involve either local hosting of the model for data governance reasons, or selection of an as-a-service provider who can meet compliance goals.
The targeting of the agent applications that are part of a workflow is obviously the same as the targeting of the workflow overall, unless agents extend the workflow to a different user set. Right now, that’s reportedly the case for only about ten percent of workflow agents, but I suspect it will grow as enterprises realize agents are a good way to extend many applications. However, only a bit over half of workflow applications of AI agents actually target empowerment directly: those that involve generating a display or report output based on analysis. Agents involved in editing or other aspects of transaction handling don’t generate worker-visible outputs and so don’t relate directly to empowerment.
Where AI agents are used interactively, the main mission today is the support chatbot; it accounts for two-thirds of interactive agent missions. Support chatbots can operate either pre- or post-sale, with the latter currently dominating, and there are differences, at least for now, in how they’re deployed. Pre-sale chatbots, dealing as they do with customer-facing data that’s hardly proprietary, are more likely to be cloud-hosted, and they make up the largest class of agent-as-a-service applications today. Currently, most of the pre-sale chatbot agents that are self-hosted relate to B2B sales.
The post-sale or customer-support chatbots are, today, most often seen as subject to data governance policies, and so are more likely to be self-hosted. However, where the product/service is B2C rather than B2B, as-a-service models are currently preferred. This is particularly true where the expected customer base is large, widely distributed geographically, or both. In those situations, an as-a-service model is said by enterprises to handle the variable load levels better.
Most of the 208 enterprises admit that a strategy of targeting productivity-justified AI based on how many workers could be empowered, on the total unit value of their labor, or on some combination of the two would be smart if AI deployment were decided on a centralized basis. But that’s not the case today, and enterprises admit it’s not likely to be the way they do tech projects in the future.
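The value-targeting arithmetic enterprises describe but don’t practice can be sketched simply: score each candidate employee group by total labor value at stake, and rank the groups. The sketch below is purely illustrative; the job categories, head counts, unit labor values, and growth rates are hypothetical examples, not figures from the enterprise comments.

```python
# Illustrative value-targeting sketch: rank candidate employee groups by
# total annual labor value (headcount x unit value of labor), weighted by
# head-count growth. All figures are hypothetical, not survey data.

def target_score(headcount: int, unit_value: float, growth_rate: float) -> float:
    """Total labor value at stake, adjusted for one year of head-count growth."""
    return headcount * unit_value * (1.0 + growth_rate)

# Hypothetical groups: (name, headcount, annual unit value of labor, growth rate)
candidates = [
    ("managers", 400, 180_000, 0.05),
    ("engineers", 900, 150_000, 0.08),
    ("support staff", 2_000, 60_000, 0.01),
]

# Highest-value targets first
ranked = sorted(candidates, key=lambda c: target_score(*c[1:]), reverse=True)
for name, *_ in ranked:
    print(name)
```

The point of the exercise is that a smaller, higher-unit-value group can outrank a much larger, cheaper one, which is exactly why managerial and professional categories dominate the targeting enterprises report.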
This poses an interesting question, which is whether project incrementalism with AI or other tech advances can fully realize tech potential. Companies do projects, not revolutions. Vendors want revolutions, or at least some do, and if that’s the case, does it mean that vendors will have to try to influence project development in directions that optimize the long-term, market-wide business case? I think that’s what Nvidia is trying to do by painting glossy, hopeful pictures, but that approach risks a major problem if they don’t see the market clearly, and marketing hype can blind those who produce it as easily as those intended to consume it.
