After I posted some comments on how I’d do 6G, as a software-architect type, I got a LinkedIn request and some operator requests via our user-only mailbox asking me to expand on the details. So here goes, along with a diagram.
Let me start by explaining how this approach came about. In the early 2000s, I saw that telco work on network service software was still highly monolithic, and even the IPsphere stuff I was involved in tended to presume a monolithic approach. The elements were coupled by events, but each was still a single software component, and all events were queued for handling by a single process that couldn’t scale. Events couldn’t be prioritized or contextualized prior to processing, which made the whole process more complicated.
I launched a project I called “ExperiaSphere”, programmed in Java. At a high level, the goal of ExperiaSphere was to abstract the notion of “services” to a higher level than network infrastructure. Features were defined at the higher level, and mapped to infrastructure at the time of deployment.
In ExperiaSphere, services to users were represented by a composition of “Experiams”, templates that each described some functional piece. A service template was created in a “factory” that could build the service, so the templates were in effect order forms. Any factory that could accept an order could fulfill it, which meant mapping the features to infrastructure, a process I called “binding”. I built test frameworks using multiple computer systems to test distributability and event processing, and ran this for about five years, fiddling with its structure as an exercise in a more cloud-friendly way of handling deployment and service management.
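The order-form relationship between templates, factories, and binding can be sketched in a few lines of Java (the language ExperiaSphere was built in). This is purely illustrative; the names here (Experiam, ServiceTemplate, fulfill) are my shorthand for this post, not the original ExperiaSphere API.

```java
import java.util.*;

public class BindingSketch {
    // An Experiam is a template describing one functional piece of a service.
    record Experiam(String feature) {}
    // A service template composes Experiams; it is, in effect, an order form.
    record ServiceTemplate(String name, List<Experiam> experiams) {}
    // A binding maps an abstract feature to a concrete infrastructure resource.
    record Binding(String feature, String resource) {}

    // A factory that accepts the order fulfills it by binding every feature.
    static List<Binding> fulfill(ServiceTemplate order, Map<String, String> inventory) {
        List<Binding> bindings = new ArrayList<>();
        for (Experiam e : order.experiams()) {
            String resource = inventory.get(e.feature());
            if (resource == null)
                throw new IllegalStateException("factory cannot bind " + e.feature());
            bindings.add(new Binding(e.feature(), resource));
        }
        return bindings;
    }

    public static void main(String[] args) {
        ServiceTemplate order = new ServiceTemplate("video-session",
            List.of(new Experiam("access"), new Experiam("transport"), new Experiam("cdn")));
        Map<String, String> inventory = Map.of(
            "access", "cell-site-17", "transport", "core-path-3", "cdn", "edge-cache-9");
        System.out.println(fulfill(order, inventory));
    }
}
```

The point of the structure is that *any* factory holding a suitable inventory can fulfill the same order form; the binding step is where abstraction meets infrastructure.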
What I tested were three cases: a service session between users of different network providers, the use of a service hosted locally to the user, and the use of a remote service. In all these cases, the remote elements of any session were visible only to the extent that the session required; the remote service was itself composed from multiple Experiams, and only an abstract of it was available to be composed into a session. It’s this model that I was referencing as what I believed would be a better way of building a 6G (or even 5G) deployment, and the rough application of it is shown below.

Referring to the figure above, we start with two databases (shown at the bottom left), each maintained by operators. The first, the Subscriber Database, could be the current customer data maintained by operators. The second, the Session Database, holds the token representing an information exchange. An entry is created when an exchange session is started, populated initially from the subscriber databases. In the case of a subscriber accessing a remote service like content, a generic service-related database entry is used.
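A minimal sketch of that session-creation step, assuming hypothetical record shapes (SubscriberEntry, SessionEntry, and the generic service fallback are my illustrations, not a defined schema):

```java
import java.util.*;

public class SessionSketch {
    // What a Subscriber Database row might carry, reduced to essentials.
    record SubscriberEntry(String id, String homeOperator, String address) {}
    // The Session Database token: created at session start, populated from subscriber data.
    record SessionEntry(String sessionId, SubscriberEntry a, SubscriberEntry b, String state) {}

    // Generic entry used when the remote party is a service (e.g. content), not a subscriber.
    static final SubscriberEntry GENERIC_SERVICE =
        new SubscriberEntry("service", "n/a", "service-endpoint");

    static SessionEntry createSession(String id, SubscriberEntry caller, SubscriberEntry callee) {
        return new SessionEntry(id, caller,
            callee != null ? callee : GENERIC_SERVICE, "SETTING_UP");
    }

    public static void main(String[] args) {
        SubscriberEntry alice = new SubscriberEntry("alice", "op-1", "10.0.0.1");
        // Subscriber-to-service session: the far end falls back to the generic entry.
        System.out.println(createSession("s-100", alice, null));
    }
}
```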
The subscriber template would have policy entries to define how certain things/events are handled. Can you access data while roaming? When is satellite/wireline switchover invoked, and how? These policies shape the creation of the state/event decoder (see below), setting the way events are handled for the session.
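One way to picture how policy entries condition the decoder: the subscriber’s policies determine which events the session’s decoder will even accept. The policy fields and event names below are hypothetical examples keyed to the two questions in the text.

```java
import java.util.*;

public class PolicySketch {
    // Two illustrative policy entries from the subscriber template.
    record Policy(boolean dataWhileRoaming, boolean satelliteFallback) {}

    // Build the set of events this subscriber's state/event decoder will handle.
    static Set<String> allowedEvents(Policy p) {
        Set<String> events = new HashSet<>(Set.of("SETUP", "TEARDOWN", "CELL_CHANGE"));
        if (p.dataWhileRoaming()) events.add("ROAMING_DATA_REQUEST");
        if (p.satelliteFallback()) events.add("SATELLITE_SWITCHOVER");
        return events;
    }

    public static void main(String[] args) {
        // No roaming data, but satellite switchover allowed.
        System.out.println(allowedEvents(new Policy(false, true)));
    }
}
```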
The session template is stored in the databases of the operators owning the subscribers and synchronized between the two for any events that change session state overall. The template includes not only the needed data on the session (addresses, etc.) but also a state/event decoder in tabular or graph form. This tracks the state of the session (setting up, active, decommissioning, fault, etc.) and relates this state to an event received from any source, which could be the subscriber(s), the remote service, the infrastructure, mobility, a switch to another access method, and so forth. When a state/event combination is decoded, the result is the combination of the new state (if any) and the identity of a microservice to be run. This is forwarded through a mediator (orchestrator) to a “service mesh” appropriate to the nature of the microservice, meaning whether it’s a real-time or non-real-time feature.
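A state/event decoder in tabular form is easy to sketch: a lookup keyed on (state, event) yields the new state plus the microservice identity to forward to the mediator. The states follow the text; the event names and microservice identifiers in the table are my invented examples.

```java
import java.util.*;

public class DecoderSketch {
    enum State { SETTING_UP, ACTIVE, FAULT, DECOMMISSIONING }
    // The decode result: new state plus the identity of the microservice to run.
    record Outcome(State newState, String microservice) {}

    // The decoder table, keyed on "state|event". Entries are illustrative.
    static final Map<String, Outcome> TABLE = Map.of(
        "SETTING_UP|SETUP_COMPLETE", new Outcome(State.ACTIVE, "activate-session"),
        "ACTIVE|CELL_CHANGE",        new Outcome(State.ACTIVE, "rebind-access"),
        "ACTIVE|LINK_FAILURE",       new Outcome(State.FAULT, "fault-handler"),
        "FAULT|RECOVERED",           new Outcome(State.ACTIVE, "resync-session"),
        "ACTIVE|TEARDOWN",           new Outcome(State.DECOMMISSIONING, "release-resources"));

    static Outcome decode(State state, String event) {
        Outcome o = TABLE.get(state + "|" + event);
        if (o == null)
            throw new IllegalStateException("unhandled combination: " + state + "/" + event);
        return o;
    }

    public static void main(String[] args) {
        // A cell change while active keeps the session ACTIVE and runs "rebind-access",
        // which would then be forwarded through the mediator to the right service mesh.
        System.out.println(decode(State.ACTIVE, "CELL_CHANGE"));
    }
}
```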
Most real-time features would relate to events generated by the subscriber, perhaps because of an explicit request or because they moved. Examples would be a user in a vehicle moving between cells, or a user requesting a change in service either explicitly or implicitly (service switching from mobile to wireline when a user got home, or when only satellite service was available). These events would be handled locally, which might include providing some “wait-a-minute” signal, but the processing might require rolling changes to the delivery chain, in which case the events might kick off other changes, even reaching the partner user. The service template for each user would contain the local connectivity and the selection of any intermediate connection resources onward to the partner subscriber/service.
One casualty of this approach is that the latency when moving between cells is likely to be greater than currently experienced. But constraining latency here seems unnecessary: other changes, like switching between wireline and wireless, couldn’t be accomplished as quickly as a current cell shift anyway without incurring more cost than the market could likely bear.
When an event is processed, the service template guides it to actioning microservices based on state/event, as noted earlier. This could be done in a variety of ways; the microservices might be resident and scalable, resident and supporting a queue of events, non-resident functions loaded on demand, and so on. I think the goal would be to make anything that wasn’t regularly used, and wasn’t on a tight latency budget, non-resident, loaded near the event source as needed. However, any microservice could be loaded and run anywhere.
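The three residency options just listed can be sketched as a dispatch decision. The registry contents and dispatch strings are illustrative only; the point is that the invocation style follows the microservice’s residency class, not its logic.

```java
import java.util.*;

public class DispatchSketch {
    // The three hosting options from the text.
    enum Residency { RESIDENT_SCALABLE, RESIDENT_QUEUED, ON_DEMAND }

    // Hypothetical registry: which residency class each microservice gets.
    static final Map<String, Residency> REGISTRY = Map.of(
        "rebind-access",  Residency.RESIDENT_SCALABLE,  // tight latency budget
        "billing-update", Residency.RESIDENT_QUEUED,    // regular, latency-tolerant
        "annual-audit",   Residency.ON_DEMAND);         // rarely used

    static String dispatch(String microservice) {
        // Anything unregistered defaults to on-demand loading near the event source.
        return switch (REGISTRY.getOrDefault(microservice, Residency.ON_DEMAND)) {
            case RESIDENT_SCALABLE -> "invoke a running replica of " + microservice;
            case RESIDENT_QUEUED   -> "queue event for resident " + microservice;
            case ON_DEMAND         -> "load " + microservice + " near the event source, then run";
        };
    }

    public static void main(String[] args) {
        System.out.println(dispatch("rebind-access"));
        System.out.println(dispatch("annual-audit"));
    }
}
```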
The “correlation” box requires some explanation. My work in ExperiaSphere convinced me that most events will be generated by a process that is already related to a session template, but where this is not the case, the correlation process would require an index that allows the impacted session template(s) to be recognized and the event steered as needed. In ExperiaSphere, I posted events with the template identified where it was known; where no template was identified, a correlation process found the impacted templates. An example of how this could be used is a cell-site failure: all the impacted sessions could be switched either to another site with overlapping coverage or to an alternative service like satellite, if policy allowed.
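The correlation index amounts to an inverted index from infrastructure elements to the session templates that depend on them. A minimal sketch, with hypothetical names throughout:

```java
import java.util.*;

public class CorrelationSketch {
    // Inverted index: infrastructure element -> session templates that depend on it.
    static final Map<String, Set<String>> INDEX = new HashMap<>();

    // Maintained as sessions bind to infrastructure during deployment.
    static void register(String resource, String sessionId) {
        INDEX.computeIfAbsent(resource, k -> new HashSet<>()).add(sessionId);
    }

    // For an event that arrives with no session template attached,
    // find every impacted session template so the event can be steered.
    static Set<String> impacted(String resource) {
        return INDEX.getOrDefault(resource, Set.of());
    }

    public static void main(String[] args) {
        register("cell-site-17", "session-a");
        register("cell-site-17", "session-b");
        register("cell-site-18", "session-c");
        // A cell-site failure event is correlated to each impacted session,
        // which can then re-home to an overlapping site or (if policy allows) satellite.
        System.out.println(impacted("cell-site-17"));
    }
}
```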
My goal in this is to replace the telco-popular notion of a monolithic management system fed by a common event queue. The template approach creates a highly elastic and distributed model for hosting that can contextualize events and process them better than the alternative.
