One of the biggest, and yet least-recognized, challenges enterprises face in software deployment these days is addressing non-transactional models of application workflow. We’ve spent decades understanding and popularizing online transaction processing (OLTP), in large part because for decades that was the only kind of application you found at the core of a business. That’s changing today, in part because we’re changing what enterprises do with applications (and through them, with their workers, partners, and customers), and in part because we’re learning that transactional workflows can themselves sometimes benefit from non-transactional thinking.
An OLTP application is designed to automate or replicate a piece of a commercial process, something that in the old days might have been done through an exchange of paperwork. OLTP applications typically present some form of menu to let the user select the transaction they want, and then drive the user through a series of steps to complete it. An order-entry application, for example, might present a product page for selection (which might require a database inquiry for available products and quantities on hand), then, in response to the selection made, generate a form that obtains information like quantity, features (if selectable), ship-to, bill-to, and so on. From this information, the application builds a transaction that then flows to one or more applications. In the old days, we’d have said that transactional applications fell into three categories: inquiry, update, and delete.
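To make that sequence concrete, here is a minimal sketch of the menu-select, form-fill, build-and-submit flow; the catalog, field names, and submission step are hypothetical stand-ins rather than anything from a real order-entry system.

```python
# Minimal sketch of a menu-driven OLTP order flow (all names are hypothetical).
# A real system would do this against actual databases, screens, and downstream apps.

from dataclasses import dataclass

@dataclass
class OrderTransaction:
    sku: str
    quantity: int
    ship_to: str
    bill_to: str

# Stand-in for the database inquiry: product -> quantity on hand
CATALOG = {"SKU-100": 42, "SKU-200": 0}

def select_product(sku: str) -> str:
    """Step 1: the 'product page' -- check availability before presenting the form."""
    if CATALOG.get(sku, 0) <= 0:
        raise ValueError(f"{sku} is not available")
    return sku

def build_transaction(sku: str, quantity: int, ship_to: str, bill_to: str) -> OrderTransaction:
    """Step 2: the form gathers the remaining details and builds the transaction."""
    return OrderTransaction(sku, quantity, ship_to, bill_to)

def submit(txn: OrderTransaction) -> None:
    """Step 3: hand the completed transaction to downstream applications."""
    print(f"Submitting {txn} to order processing")

if __name__ == "__main__":
    sku = select_product("SKU-100")
    submit(build_transaction(sku, quantity=2, ship_to="Warehouse A", bill_to="HQ"))
```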
Starting in roughly the 1970s, contemporaneous with the explosion in OLTP, we saw another kind of workflow model, what’s often called the “event-driven” model. This came along in response to the recognition that some tasks had to be visualized as the handling of a series of events generated by an external source, each of which had to be handled in the context of the relationship with that source, the “state”. Thus, this is often called “state/event” handling. The approach first emerged in the late 1960s in the handling of network protocols, including the old IBM Binary Synchronous and Systems Network Architecture (Bisync and SNA) and later TCP/IP, and it has since exploded with process control and IoT applications.
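The classic way to implement that pattern is a table keyed by the current state and the arriving event, giving an action and a next state; the sketch below assumes hypothetical states, events, and actions purely to show the shape.

```python
# Minimal state/event table sketch (all states, events, and actions are hypothetical).
# For each (state, event) pair we define an action and the next state -- the pattern
# used in protocol handlers and, today, in IoT and process-event applications.

def on_connect(ctx):    print("session opened for", ctx["peer"])
def on_data(ctx):       print("handling data in context of", ctx["peer"])
def on_disconnect(ctx): print("session closed for", ctx["peer"])
def on_error(ctx):      print("unexpected event; resynchronizing with", ctx["peer"])

STATE_TABLE = {
    ("idle", "connect"):         (on_connect,    "connected"),
    ("connected", "data"):       (on_data,       "connected"),
    ("connected", "disconnect"): (on_disconnect, "idle"),
}

def handle_event(state, event, ctx):
    """Look up the (state, event) pair; unknown pairs fall back to an error action."""
    action, next_state = STATE_TABLE.get((state, event), (on_error, state))
    action(ctx)
    return next_state

# Usage: drive the table with a stream of events from an external source.
state = "idle"
for event in ["connect", "data", "data", "disconnect"]:
    state = handle_event(state, event, ctx={"peer": "device-17"})
```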
If you look at the missions associated with transactional and non-transactional workflows, you can see that one fundamental difference is the relationship between the source and the application. In transactional flows, the application drives the context completely. In event-driven systems like IoT or process control, the presumption is that the source has an inherent context the application has to synchronize with, or that both sides have such a context, one that then has to be unified. But how different are these, really? That’s the new question, created in no small part by the growth of the Internet and the cloud.
Both these models, and in fact any application or process model that involves two parties connected through a flow of messages, are state/event, cooperative systems. In some cases, most transactional processes among them, it’s possible to simplify the workflow and state/event processing to simply watching for a sign that the other party has been disconnected or has somehow lost synchronization. Think of saying “What?” in a conversation to try to regain contextual communication. But if you make the two sides of the process more complex, and handle them through more complex steps and with a more complex mode of communication, you start looking a lot like a state/event process. You have to run timers to tell you how long to wait for something, expect acknowledgment events, and so forth.
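A rough sketch of that timer-and-acknowledgment bookkeeping, with the timeout, retry count, and transport stand-ins all being illustrative assumptions, might look like this:

```python
# Sketch of the timer-and-acknowledgment handling described above.
# Timeout values, retry policy, and the transport stand-ins are all assumptions.

import time

ACK_TIMEOUT_SECONDS = 2.0
MAX_RETRIES = 3

def send_and_await_ack(send, wait_for_ack):
    """Send a message, start a timer, and wait for the acknowledgment event.

    `send` and `wait_for_ack` stand in for whatever transport the two cooperating
    parties actually use; `wait_for_ack(deadline)` returns True if the ack arrived
    before the deadline.
    """
    for attempt in range(1, MAX_RETRIES + 1):
        send()
        deadline = time.monotonic() + ACK_TIMEOUT_SECONDS
        if wait_for_ack(deadline):
            return True                                       # partner is still in sync
        print(f"no ack on attempt {attempt}; retrying")       # the "What?" of the conversation
    return False                                              # assume the partner is lost; resynchronize

# Usage with trivial stand-ins: the ack never arrives, so all retries fail.
ok = send_and_await_ack(send=lambda: print("sending"), wait_for_ack=lambda deadline: False)
print("in sync" if ok else "lost synchronization")
```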
Then there are the new missions. Social-media applications demonstrated that chatting, whether pairwise or in a group, doesn’t fit a transactional model well. It does fit state/event. In fact, a lot of the cloud computing features relating to event processing came from tools created by social media or online content delivery companies. What’s happening because of this Internet/cloud symbiosis is a rethinking of how even transactional applications should be viewed. “We don’t see applications as front-end and back-end now, we see them as Internet applications and core data center applications, coupled,” one CIO told me. This reflects the fact that the Internet has changed how we “shop” or, more broadly, how we decide what we want to do in an online relationship. Far more time is spent making choices than executing the one we select. Think of your last Amazon purchase: you browse a lot of items, taking minutes or even hours, and when you’re done you add your choice to the cart and check out. Only that last piece is really “transactional”; the rest is online content browsing and research.
This application division, more dramatic in impact on software than a front/back-end division of application components, facilitates the use of cloud computing by dividing one transactional mission into two new missions—one of option browsing and decision support and one of database and business process management. It also lays the groundwork for how edge computing applications in real-time process management could be structured to facilitate the cloud-hosting of some elements, with or without a migration of cloud hosting outward toward individual users.
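One way to picture the split, purely as a sketch with assumed service names and data shapes, is a read-mostly browsing and decision-support piece suited to shared cloud hosting, handing a compact order intent to the core business-process piece:

```python
# Sketch of the two-mission split described above: a browsing/decision-support piece
# suited to cloud hosting, and a database/business-process piece that stays with the
# core systems. Service names and data shapes are illustrative assumptions.

from dataclasses import dataclass

# --- Mission 1: option browsing and decision support (cloud-friendly, read-mostly) ---
PRODUCT_CONTENT = {
    "SKU-100": {"name": "Widget", "price": 19.95, "in_stock": True},
    "SKU-200": {"name": "Gadget", "price": 4.50, "in_stock": False},
}

def browse(query: str) -> list[dict]:
    """Serve cacheable catalog content; no business state changes here."""
    return [p for p in PRODUCT_CONTENT.values() if query.lower() in p["name"].lower()]

# --- Mission 2: database and business process management (core data center) ---
@dataclass
class OrderIntent:
    sku: str
    quantity: int

def commit_order(intent: OrderIntent) -> str:
    """The only truly transactional step: validate, persist, and kick off fulfillment."""
    if not PRODUCT_CONTENT.get(intent.sku, {}).get("in_stock"):
        return "rejected: out of stock"
    return f"order accepted for {intent.quantity} x {intent.sku}"

# Usage: lots of browsing, then one short transaction at the end.
print(browse("widget"))
print(commit_order(OrderIntent(sku="SKU-100", quantity=2)))
```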
If you explore a complete real-time industrial or other process control application, you almost always find that it starts with a very latency-sensitive control loop for the actual process-control piece, and then concatenates one or more additional control loops to handle tasks related to, triggered by, but not synchronized with, the first loop. For example, we might have a local production line whose movement and actions create a local control loop. For some events, this loop might have to signal for parts replenishment or goods removal, and this signal would be handled at a second level, one that involves a task not immediately linked to the original process control steps but rather simply related to their result; for example, it might pull something from a local store of material. That second loop, if it draws the storage level too low, could then signal another loop, which might be a traditional transaction for the ordering, shipment, receipt, and payment of the parts. The first loop is highly latency-sensitive, the second somewhat to significantly less so (depending on the time required to move from local storage to the industrial process point), and the third not likely latency-sensitive at all. Based on current cloud latency, we could run the third loop in the cloud, perhaps the second, and almost surely not the first.
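A simple way to express that tiering is to match each loop's latency budget against the round-trip latency of candidate hosting points; the budgets and round-trip figures in this sketch are illustrative assumptions, not measurements.

```python
# Sketch of latency-driven placement for the three loops described above.
# All latency budgets and round-trip estimates are illustrative assumptions.

LOOP_BUDGETS_MS = {
    "loop1_machine_control": 10,      # direct process control: very tight budget
    "loop2_replenishment":   50,      # pull from local storage: looser but bounded
    "loop3_reorder_txn":     30_000,  # order/ship/receive/pay: not latency-sensitive
}

HOST_RTT_MS = {
    "on_premises":    2,
    "edge_cloud":     20,
    "regional_cloud": 80,
}

def cheapest_viable_host(budget_ms: int) -> str:
    """Pick the most shared (assumed most economical) hosting point that meets the budget."""
    # Ordered from most shared/economical to most local; an assumption for illustration.
    for host in ("regional_cloud", "edge_cloud", "on_premises"):
        if HOST_RTT_MS[host] <= budget_ms:
            return host
    return "on_premises"

for loop, budget in LOOP_BUDGETS_MS.items():
    print(f"{loop}: host on {cheapest_viable_host(budget)}")
```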
Even true process control applications have components that could be edge-hosted in the cloud. Transaction processing, with its decision-making front element separated from the true database processing, is likely less demanding in terms of latency than the most stringent pieces of process control, which makes it likely that those front elements could be accommodated even more easily using a shared resource pool. The volume of these front-of-transaction events might be sufficient to develop satisfactory economy of scale at points further out, meaning closer to users and processes, than traditional cloud hosting. We could then do more in “the cloud.”
The point here is that there is a realistic model for cloud/edge symbiosis, one that would accommodate any migration of hosting toward the point of process but would not require it. This, in my view, requires providers of edge-cloud services, vendors, analysts, and the media to forgo the usual “everything goes” approach, recognizing that it’s simply not possible to bring shared hosting to the same latency distance from the controlled processes as premises hosting can achieve. Expecting too much is a good way of ensuring less than the optimum.