If enterprises (and I) are correct, the future success of AI is very likely to depend not on enormous generative AI intelligence models, but on relatively contained and probably hierarchically organized agents. I believe that these agents will be “deep learning” models, some perhaps rising to the scope of an LLM but many being much smaller. What they will all be is specialists, expertly trained in some fairly contained function. That raises the question of how this new structure can be effectively harnessed to fulfill business missions and build business cases.
There is a strong temptation to think of this structuring challenge as an AI challenge, and I think that if we looked far into the future we might find that it truly is, but that ignores the present, which is at best risky and at worst a business-case disaster. We cannot possibly believe that AI technology will displace current software quickly; the business disruption and asset displacement impacts of such a move are insurmountable. We have to evolve to the future, and that means a new kind of thinking, about AI and about applications. We have to think in terms of processes and of greenpatches.
A business is, functionally speaking, a complex mesh of processes. Each of these processes exists in two domains—the organizational/human domain and the IT domain. If you work for an enterprise and read my blogs, you are part of the CIO’s organization, and there’s an accounting organization, sales, marketing, payroll and personnel, and so forth. These high-level organizational elements are super-processes, divided organizationally into smaller elements, much as “accounting” is divided into “receivables” and “payables”.
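To make that hierarchy concrete, here’s a minimal sketch of the process tree as a data structure. The class and the names are purely illustrative, not anything an enterprise actually runs; the point is only that the atomic leaves of the tree are where we’ll look for AI targets.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BusinessProcess:
    """One node in the business-process hierarchy described above."""
    name: str
    children: List["BusinessProcess"] = field(default_factory=list)

    def leaves(self):
        """Yield the most atomic processes, the likeliest early AI targets."""
        if not self.children:
            yield self
        for child in self.children:
            yield from child.leaves()

# A hypothetical slice of the enterprise: one super-process and its subordinates.
accounting = BusinessProcess("Accounting", [
    BusinessProcess("Receivables"),
    BusinessProcess("Payables"),
])

print([p.name for p in accounting.leaves()])   # ['Receivables', 'Payables']
```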
What organizes these processes today is largely an organizational/human activity; IT is mostly directed at facilitating human activity. However, we do have, even today, workflows at the IT level that cross process boundaries. Those who remember early software componentization will recall the “enterprise service bus” that aligned IT services with cross-business-process workflows. While both the technology and the terminology are ancient by today’s software standards, the fact is that this is how businesses work, and how IT works within businesses, today. We could say that a business process is the unit of company activity, and an IT service is the collective support for each business process.
How would this evolve to optimally utilize AI? Rather than speculating about a future when the over-mind of AI calls out its AI minions to perform atomic tasks, seeing all, knowing all, and organizing all, let’s be realistic. The easiest path of evolution would be to address the IT services within the overall process structure, or perhaps more accurately, to look at the more atomic levels of the business process hierarchy as our early AI targets. The most productive early targets would be ones where we had a large incremental AI benefit to claim, and a relatively low disruption/displacement cost—which is what I’m calling a “greenpatch”.
A greenpatch is a variant on the old “greenfield” concept. In IT, a greenfield project is one that is new, not displacing any existing technology solution or solution element. Think of a greenpatch as a special case, a place where either a new IT service can be framed via AI, or where a current framework can be rebuilt using AI, with a compelling ROI. The goal is to stick AI elements in where they’d do the most net good, keeping as much of the surroundings in place as possible to reduce bleed-through impact. When you draw back your bow, the goal isn’t to hit the moon, but to avoid hitting your own foot. Put in AI terms, this means finding either atomic IT services or current human processes and considering their implementation as AI agents.
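As a rough way to picture the greenpatch test, here’s a toy scoring sketch. The candidates and dollar figures are hypothetical placeholders, and a real business case would obviously go far deeper than benefit minus displacement cost, but the ranking logic is the point.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    """A candidate IT service or human process scored for greenpatch fit."""
    name: str
    ai_benefit: float          # estimated incremental benefit of an AI agent
    displacement_cost: float   # cost of disrupting/displacing what is there now

    @property
    def net_value(self) -> float:
        return self.ai_benefit - self.displacement_cost

# Hypothetical candidates with placeholder figures, not real ROI data.
candidates = [
    Candidate("Field-activity monitoring", 900_000, 150_000),   # little IT to displace
    Candidate("Core billing rewrite", 1_200_000, 2_500_000),    # heavy displacement
    Candidate("Tax-filing support", 400_000, 100_000),
]

# The greenest patches rank first: high incremental benefit, low disruption.
for c in sorted(candidates, key=lambda c: c.net_value, reverse=True):
    print(f"{c.name}: net {c.net_value:,.0f}")
```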
This is one reason I’m so interested in the NVIDIA Cosmos models I blogged about yesterday. Obviously the greenest of the greenpatches would be areas where existing IT integration was minimal, as would likely be the case with the roughly 40% of the workforce whose activity isn’t deskbound, but rather is out in the real world doing real things. Cosmos is designed to model humans doing stuff, and so could open this area up to AI enhancement.
A real-world AI greenpatch system would likely start by having a Cosmos-like entity analyze videos of the human activity to derive a physical-system model. That would then be used to plan optimizations and monitor for efficiency and safety. If the AI system required integration with either people or other IT systems, the required information could be introduced or output in an appropriate form—think an “adapter-API” that speaks AI on one side and IT/human on the other.
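Here’s a minimal sketch of what such an adapter-API might look like, assuming a hypothetical analyze_shift function and a stand-in StubWorldModel; Cosmos’s real interface will differ, so treat this as the shape of the adapter, not a specification.

```python
class StubWorldModel:
    """Stand-in for a Cosmos-like world model; the real interface will differ."""
    def run(self, model_input: dict) -> dict:
        return {"efficiency": ["idle time at station 3"], "safety": []}

def analyze_shift(model, video_metadata: dict, work_order: dict) -> dict:
    """Hypothetical adapter-API: speaks AI on one side, IT/human on the other."""
    # IT/human side -> AI side: assemble the model input from existing records.
    model_input = {
        "scene": video_metadata,              # e.g. camera, timestamp, location
        "planned_tasks": work_order["tasks"],
    }
    assessment = model.run(model_input)       # the AI-facing call
    # AI side -> IT/human side: reshape the output for downstream systems and people.
    return {
        "work_order_id": work_order["id"],
        "efficiency_flags": assessment.get("efficiency", []),
        "safety_flags": assessment.get("safety", []),
    }

report = analyze_shift(StubWorldModel(),
                       {"camera": "dock-2", "shift": "night"},
                       {"id": "WO-1138", "tasks": ["unload", "stage", "inspect"]})
print(report["efficiency_flags"])
```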
Deep learning or simple machine learning (ML) could be used to perform deskbound tasks as well, and in fact we can build an example from one such task to illustrate. Suppose we have a group of people working on the corporate tax filings. There may be software in play here, or it may be largely a human process. Either way, we already have some foundation models in AI aimed at doing taxes of various kinds, so we could plant such a model in this process to facilitate it. To interface, we’d look at what information feeds the current process, and develop an “adapter API” that would format those sources to match the model’s input/prompt requirements. So, perhaps, we end up with a federal-tax AI model and one or more state-tax AI models.
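A sketch of that adapter idea follows, assuming a hypothetical SpecialistTaxModel with a prepare_filing call; real tax-AI products will have their own prompt formats and interfaces, so this only shows the reshaping step.

```python
def adapt_ledger_to_prompt(ledger: dict, jurisdiction: str) -> str:
    """Adapter API: reshape the data feeding the current process into the
    input/prompt format a tax model expects."""
    lines = [f"Prepare the {jurisdiction} corporate filing from these figures:"]
    lines += [f"  {item}: {amount}" for item, amount in ledger.items()]
    return "\n".join(lines)

class SpecialistTaxModel:
    """Hypothetical specialist agent, e.g. federal or a given state."""
    def __init__(self, jurisdiction: str):
        self.jurisdiction = jurisdiction
    def prepare_filing(self, prompt: str) -> dict:
        # A real model would return a drafted filing; this just echoes structure.
        return {"jurisdiction": self.jurisdiction, "status": "draft"}

ledger = {"revenue": 10_000_000, "deductible_expenses": 7_250_000}
for model in (SpecialistTaxModel("federal"), SpecialistTaxModel("New York")):
    print(model.prepare_filing(adapt_ledger_to_prompt(ledger, model.jurisdiction)))
```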
If, hypothetically, the only taxes we had were handled by these two models, then organizationally speaking, we have a superior process (Tax-Handling) with only AI-model systems as subordinates. Getting AI to “supervise” AI isn’t a giant reach, so we could assume we’d use a new supervisory AI model to assist, or even be, the Tax-Handling process. At any level, the way the AI is trained has to match the activity being supported; world foundation models like Cosmos would be used for human-action activity, for example.
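And a sketch of that supervisory layer, again with purely hypothetical classes, simplified so the snippet stands on its own. The point is the organizational shape, a supervisor fanning work out to specialist subordinates, not the intelligence of any of the pieces; the supervisor here is trivially scripted but could itself be a trained model.

```python
class SpecialistModel:
    """Hypothetical subordinate agent: federal, or one per state."""
    def __init__(self, jurisdiction: str):
        self.jurisdiction = jurisdiction
    def prepare_filing(self, ledger: dict) -> dict:
        return {"jurisdiction": self.jurisdiction, "status": "draft"}

class TaxHandlingSupervisor:
    """The superior 'Tax-Handling' process sitting above its specialists."""
    def __init__(self, specialists):
        self.specialists = specialists
    def run_filing_cycle(self, ledger: dict) -> list:
        # Fan the shared books out to each subordinate and collect the drafts.
        return [s.prepare_filing(ledger) for s in self.specialists]

supervisor = TaxHandlingSupervisor([SpecialistModel("federal"),
                                    SpecialistModel("New York")])
drafts = supervisor.run_filing_cycle({"revenue": 10_000_000})
print([d["jurisdiction"] for d in drafts])   # ['federal', 'New York']
```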
This is probably not the way most people think of AI being used, and even enterprises don’t spontaneously come up with it. What they do is implicitly or explicitly reject the notion of some huge monolithic AI controlling everything, and I’m not seeing these objections grounded in fear of HAL-style misbehavior by that AI entity as much as in the practical problems of both achieving AI capable of such a scope of control, and evolving to its use.
Slow and steady, in other words, wins the AI race too.