I don’t think there’s anyone out there who wouldn’t agree that AI, and in particular generative AI, is a fast-moving technology. There are literally dozens of announcements relating to it every week, and the sum of these is creating not only increased interest and opportunity, but also increased confusion and indecision. One CIO told me, “I can’t launch a review of generative AI before the tools I’m targeting change out from under it.” Some enterprises are taking the “mile wide and inch deep” approach, assigning one or two people to quickly assess new developments and recommend further review where it’s warranted.
One of the biggest changes enterprises are seeing is a shift from the broad vision of artificial intelligence to something one of them called “artificial expertise”. In this approach, AI tools are applied to a very specific task rather than to the general processes of research and document development. Coding is an example, but image generation, image analysis, and even video analysis are also gaining support.
In fact, of 77 enterprises who offered comments on this topic, 43 said that their interest in AI was now directed exclusively at one or more specialized missions rather than at the broad “public experiments” with things like ChatGPT or Bard. The hottest missions that group cites are code assistance and AI analysis of real-time video, and each of these demonstrates a trend toward focusing AI attention on the places where it could provide the greatest benefit. That increased focus on a business case is a good thing, because without it AI turns into another of those all-too-common hype waves.
Coding assistance, which is largely focused on “copilot” applications tightly coupled with development tools like repositories and interactive development environments, is part of a broader trend of applying AI to the entire application lifecycle, facilitating what’s sometimes called “rapid development” or “continuous integration/continuous deployment” (CI/CD). Those who know my own background as a programmer and software architect won’t be surprised to hear that I believe a lot of software development is drudgery. If you take a person picked for insight and innovation and wrap them in a lot of routine tasks, they will often mess up. If AI could take over those routine tasks, it could speed development and improve code quality. Selecting library elements and validating API usage are two things that come to mind, and 40 of the 43 enterprises said that code copilot applications of AI were a target.
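To make the “drudgery” point concrete, here’s a minimal sketch of one routine check a copilot could absorb: flagging calls to APIs a team has deprecated. The deprecation table and sample source are my own illustrative assumptions, and a deterministic scan stands in for the model; the point is the category of work, not any vendor’s product.

```python
import ast

# Hypothetical deprecation list for illustration; a real copilot would
# draw on project history, docs, and the repository itself.
DEPRECATED = {
    "imp.load_module": "use importlib instead",
    "string.atoi": "use int() instead",
}

def flag_deprecated_calls(source: str) -> list[str]:
    """Return advisory messages for deprecated module.function calls."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)):
            name = f"{node.func.value.id}.{node.func.attr}"
            if name in DEPRECATED:
                findings.append(f"line {node.lineno}: {name} is "
                                f"deprecated; {DEPRECATED[name]}")
    return findings

sample = "import imp\nmod = imp.load_module('m', None, 'm', ('', '', 5))"
for finding in flag_deprecated_calls(sample):
    print(finding)
```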
Another area of interest is the review function associated with both coding and with deployment scripting and configuration. In CI/CD applications in particular, there’s a risk that the pace of the project will outrun human review effectiveness. AI is a “second pair of eyes” that can spot issues and flag them for correction, or even suggest the corrections itself. Users are still more likely to accept AI as an advisor than to allow it to take over a task completely, but it’s clear that even two months’ experience with advisory AI is enough to start companies thinking about testing an interventionist approach.
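Here’s a sketch of what that advisory posture can look like in a CI pipeline. The ai_review() function is a hypothetical placeholder for whatever model or service a team actually uses; the one real behavior shown is the policy choice between warning and blocking.

```python
import subprocess
import sys

def ai_review(diff: str) -> list[str]:
    """Hypothetical placeholder for whatever review model a team uses;
    returns a list of human-readable findings for the diff."""
    return ["unvalidated input in deploy script (illustrative finding)"]

def review_gate(advisory: bool = True) -> int:
    """Surface AI findings on the latest change; warn or block per policy."""
    diff = subprocess.run(["git", "diff", "HEAD~1"],
                          capture_output=True, text=True).stdout
    findings = ai_review(diff)
    for finding in findings:
        print(f"AI review: {finding}", file=sys.stderr)
    # Advisory mode never fails the build; flipping the flag is the
    # "interventionist" step enterprises are only starting to consider.
    return 0 if advisory or not findings else 1

if __name__ == "__main__":
    sys.exit(review_gate(advisory=True))
```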
I wasn’t surprised by the code support interest enterprises cited, but I have to admit I’d not have picked real-time video analysis as a prime target. The interest here isn’t as broad; only 29 of the 43 enterprises mentioned it, but that’s still two-thirds of the group. What seems to be driving this interest is a variant on the “second pair of eyes” idea, except this time AI provides the first pair. Visual inspection and monitoring are a major part of many business processes, and most companies know that having a person watch for a highly unlikely event is prone to errors because of boredom, inattention, or even falling asleep. There are also examples (from the medical field, in particular) where AI inspection is proving able to identify things that people, even experts, can miss. At the very least, AI can winnow down a mass of inspections to the few that require human attention.
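A minimal sketch of that winnowing idea, with a plain frame-difference standing in for a real inspection model; the video path and threshold are illustrative assumptions.

```python
import cv2  # OpenCV, assumed installed (pip install opencv-python)

def frames_needing_review(path: str, threshold: float = 12.0):
    """Yield (frame_index, score) for frames that change sharply from
    the one before; a crude stand-in for a trained inspection model."""
    cap = cv2.VideoCapture(path)
    prev, index = None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            score = cv2.absdiff(gray, prev).mean()  # mean pixel change
            if score > threshold:
                yield index, score  # only these reach a human
        prev, index = gray, index + 1
    cap.release()

# "inspection_feed.mp4" is a hypothetical input, used for illustration.
for idx, score in frames_needing_review("inspection_feed.mp4"):
    print(f"frame {idx}: score {score:.1f} -> route to a human reviewer")
```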
Going back to the full 77 enterprises with AI views to offer, all had been evaluating the general, public forms of generative AI, and 34 said they were using it in a formal and sanctioned way. All 77 said they thought it was probably in use within the company without specific project sanction. Among the 34, the primary applications were chatbots (31), document development (29), image and artwork (19), and research (18). All of these areas except research were rated as “good” applications with “growing” interest. In the research area, which included any development of material based on well-known data, the hallucination problem was cited as a major concern, one that limited the utility of generative AI because it required too much review and validation of the output.
The final point I’ve drawn from my conversations is that generative AI is having a mixed impact on the broader topic of AI and ML. All 77 enterprises agree that generative AI has raised the profile of AI within their company. In 31 cases, they say the overall impact on acceptance of AI/ML has been positive; in 29 cases they say it’s been negative (the other 17 think it’s had no impact). For those who believe that generative AI has hurt AI overall, the reason is a mixture of overblown expectations and the problem of errors. Senior management, they say, tends to lump all AI together, so when executives read about the issues with generative AI they become concerned that those issues infect other AI/ML tools. Of the “negative impact” group, 17 say they’ve experienced pushback on approval of other AI/ML projects. Of course, of the 31 who found positive impact from generative AI, 27 said that it had made getting project approval easier.
I think what we’re seeing here is more a reflection of the AI literacy of senior management than of any AI-specific trends. Companies that have mustered or developed AI literacy think the overall approachability of generative AI has improved senior management’s understanding, probably because executives had trusted subordinates who could explain things. Where in-house skills were limited, management fears ran rampant and impacted AI projects overall.
One clear point, though, is that it’s far easier to adopt AI features in software with a specific mission (like coding or real-time video analysis) than it is to adopt AI in the abstract. AI constrained within set boundaries, and linked to specific tasks with specific benefits, is the force that’s really driving AI adoption. It’s likely to remain so, and this is the trend we need to be watching, more than the non-specialized forms of generative AI. If the latter has a real business case, it will almost surely evolve through further specialization toward the mission-specific, narrow applications of the kind we’re starting to see now.