Mainframes, monoliths, hybrid cloud, AI…all these are terms that we kick around, but one company, IBM, embodies them all in a way that no other does. IBM is also doing pretty darn well these days; its stock has risen over 20% in the last six months and over 50% in the last year. Is there a relationship between the four tech terms I opened with and IBM’s success? Can answering that offer us any insight into the future of computing and even networking? Could their recent announcement of the LinuxONE Emperor 5 help? Let’s see, but let me start by saying that IBM is not a client and my views on this are my own; they don’t even know I’m writing this.
Business IT is really about transaction processing. A “transaction” is the modern equivalent of the stuff we used to call “commercial paper”, the invoices, payments, ledgers, reports, and stuff that define business operations and support the essential buying and selling that make up our economy. Commercial paper or transactions originate where business is done, which is widely distributed, but transaction processing tends to concentrate toward the point where databases reside. That’s why early IT was totally concentrated in data centers, and why almost every major enterprise has a data center where things are concentrated and stored today.
Here’s the thing. A company almost always needs a kind of “single source of truth” with regard to things like accounts, inventories, and so forth. If there were multiple databases representing, let’s say, current inventory levels, you’d need to keep them synchronized to avoid having differences among them cause significant errors, like selling something you don’t actually have on hand. You also need to mediate access to a record to ensure that a collision in usage doesn’t leave it in a bad state. We’ve long had something called “distributed transaction processing” to manage synchronized access and updates.
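The mediated-access point can be made concrete with a few lines of code. This is a minimal illustrative sketch, not any real transaction-processing API; the class and method names are my own invention. A lock serializes the check-and-decrement on a single inventory record, so concurrent sales can never drive the on-hand count negative:

```python
import threading

class InventoryRecord:
    """One inventory record with mediated (lock-serialized) access."""

    def __init__(self, on_hand: int):
        self.on_hand = on_hand
        self._lock = threading.Lock()

    def try_sell(self, qty: int) -> bool:
        """Atomically check stock and decrement; refuse if stock is short."""
        with self._lock:
            if self.on_hand >= qty:
                self.on_hand -= qty
                return True
            return False

if __name__ == "__main__":
    record = InventoryRecord(on_hand=10)
    results = []

    def worker():
        # Each of five concurrent "sales" tries to take 3 units.
        results.append(record.try_sell(3))

    threads = [threading.Thread(target=worker) for _ in range(5)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

    # Exactly three sales can succeed (3 × 3 = 9 ≤ 10); the rest are
    # refused, and the count never goes negative.
    print(sum(results), record.on_hand)
```

Without the lock, two threads could both pass the stock check before either decrements, which is exactly the “collision in usage” problem: the record ends up in a bad state, or you sell stock you don’t have. Real distributed transaction processing extends this same idea across multiple systems and databases.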
The concept of a “mainframe” evolved as the early computer systems, designed for single-thread “batch” processing, grew to handle multiple simultaneous programs and online real-time access. IBM’s System/360 was arguably the first mainframe, released in the mid-1960s. Distributed computing in various forms came along in the next decade, and more localized (and eventually “personal”) computing gradually absorbed many of the tasks workers needed IT to help with, but transaction processing still normally centralized on the “mainframe”, so it became a kind of database engine. IBM says “Mainframes are data servers that are designed to process up to 1 trillion web transactions daily with the highest levels of security and reliability.”
That point frames the whole mainframe/Linux relationship. On the one hand, a mainframe data engine, and even a mainframe that hosts applications that demand a lot of database access, is one thing. A mainframe as a continued single source for applications is another. For two decades, IBM contended with the fact that mainframes weren’t the ideal application platform, and that even transaction processing had elements that should be run elsewhere.
IBM’s insight into cloud computing was that the data-server role was a persistent business requirement. That meant the cloud was a hosting point for more presentation-level functions, projected outward, closer to the user, with the elasticity needed to absorb load variations and to offload an essential choke point in every process: the core database functions. Things like reporting, accounting, and other central business functions are logically more efficient if they’re run local to the point of data storage, so the IBM vision of a “data center” is a mainframe, surrounded by local adjunct servers that peruse the databases, augmented by elements distributed outward, including to the cloud or to desktop PCs.
The spread of processing threatens the mainframe, and IBM took a number of steps to minimize the threat. One was what one enterprise wryly calls the “miniframe”, the lower-end Z-series products that are designed to run in a data center cluster adjacent to the real central mainframe. Another was the effort to accommodate Linux, which accelerated with IBM’s acquisition of Red Hat. The newest LinuxONE stuff, I think, is designed to build a platform that is based on Linux and that can also be that central mainframe, running more database-centric applications as well…applications like AI.
Business analytics is obviously database-centric, and it’s also the area that enterprises have consistently said offers the greatest potential for AI augmentation. If AI is separated from the data it works on, and if AI elements are separated from each other, then the network significantly impacts the performance of AI, to the point where its value in some real-time missions might be compromised. It’s clear that the latest LinuxONE stuff from IBM is a response to this. Make a mainframe into both a database engine and an AI engine and you have the ideal business analytics centerpiece.
IBM sees the logical structure of IT as a set of concentric layers, the innermost of which is the database collection and the outermost the users of applications, human or machine. That’s logical, which makes the big question why we seem to have missed the obvious all along, and we can divide the causes into “practical” and “cynical”.
The practical cause is the way that computer hardware evolved, toward the integrated circuit, solid-state memory, and mini- and microcomputers. This evolution allowed computing to be pushed out to the worker level, and it’s natural that vendor and buyer attention both focused on the new things this outward push enabled. Think of the word processor, the spreadsheet, and the other components of desktop office suites. This push opened new areas for computer vendors and software providers, and that tended to focus product news on the outer layers of the model. It was a lot easier to be a new PC vendor than to be a new mainframe vendor; there was no incumbent to fight to displace.
The cynical cause is the combination of ad sponsorship and search-engine optimization (SEO) created by the online age of publications. A typical first-line manager has the budget authority to purchase a PC, and could very well simply buy one online or at a retail store. Enterprises tell me a typical database-machine mainframe today would cost almost two million dollars, that it’s often surrounded by smaller mainframes costing a couple hundred thousand, and for all of this the purchase process is a long slog through layers of evaluation and approval. The former kind of buy is driven by marketing; you can’t send salespeople out to push PCs to line managers. The latter buy is supported by a permanent on-site sales team, and you don’t need to beat the bushes to find prospects; they’re already customers. Thus, all the advertising is pushed by the outer-layer players, and that creates an artificial picture of how IT is done.
One thing this has done is to give IBM a unique position in terms of account control and strategic influence. Mainframe accounts almost always have sales teams, plus pre- and post-sale support for enterprise project planning. Other IT vendors rarely have this level of persistent customer contact, and so they exert much less influence on planning. IBM has a decisive lead in enterprise AI, despite what you may read to the contrary, because they can get their story to the right people at the right time, and because their accounts are the largest consumers of IT.
So why do so many enterprises depend on mainframes? The truth is that for most, a mainframe data engine is almost surely the best strategy in terms of performance and availability. The big problem for the mainframe in its traditional form (the IBM Z-series) is that the software running there is often old, sometimes ancient, and significant modernization pressure on applications leads to the question of whether the existing software can be updated effectively. I think the LinuxONE mainframes are IBM’s answer to this; you can retain the mainframe engine role but adopt an operating system and programming model (Red Hat software) that is the same as would be used on the layers surrounding the database engine. By linking AI into it, as the most recent stories do, IBM is sowing the seeds of a new mainframe model that doesn’t have the old-software vulnerability. If they succeed, they cement themselves in a strategic leadership position for decades to come.
The wider implications here are even more important. If the layered model is indeed obvious and real, then we should be keeping it in mind when considering almost every development in IT opportunity or technology. You need to run data-centric stuff proximate to the data; that means governance is likely to demand centralization of those applications. The greatest impact on networking is likely to be created by missions that change upper-layer requirements, which is why real-time IoT is so critical. The lesson here is that despite being considered a dinosaur by many, IBM has actually gotten things right here.