This is the fifth and final blog in my series on edge computing, and in it we’ll talk about two critical issues. First, how do we network “the edge” to realize the capabilities buyers expect? Second, how do we secure the edge, given that its tight coupling to the real world makes any breach a profound risk?
Any discussion of edge networking has to address the question of just where “the edge” is. Local-edge networking, the connection made to and from an on-premises processing point, is obviously the user’s responsibility. However, the user has to ensure that this connectivity doesn’t compromise the latency improvements that edge computing is aimed at delivering.
The great majority of connections to a local edge will be made using a local technology, like wires, WiFi, or an IoT protocol like ZigBee. From the local edge inward, connections could be made in a variety of ways, including the WAN options of the Internet (and SD-WAN) and the corporate VPN. These WAN options would be applicable to cloud processing, data center processing, and any edge-as-a-service being consumed. Whatever option was selected, it would be critical to maintain the performance of the connection, both in terms of capacity (to avoid congestion delay) and in terms of intrinsic latency, which is a combination of propagation delay and handling delay by devices. All of this is already understood by enterprises.
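The latency arithmetic here is simple but worth making concrete. A minimal sketch, using the common rule of thumb of roughly 200 km per millisecond for light in fiber; the distances, hop counts, and per-device handling times are illustrative assumptions, not measurements:

```python
# Hypothetical latency budget for an edge connection. All figures are
# illustrative assumptions, not measurements of any real network.

FIBER_KM_PER_MS = 200  # ~2/3 the speed of light, a common rule of thumb

def propagation_delay_ms(distance_km):
    """One-way propagation delay over fiber."""
    return distance_km / FIBER_KM_PER_MS

def path_latency_ms(distance_km, hops, per_hop_handling_ms):
    """Intrinsic one-way latency: propagation plus per-device handling."""
    return propagation_delay_ms(distance_km) + hops * per_hop_handling_ms

# A 40 km metro path through 4 devices at 0.1 ms handling each comes to
# roughly 0.6 ms, leaving the rest of the application's latency budget
# for processing; congestion delay would add on top of this intrinsic figure.
print(path_latency_ms(40, 4, 0.1))
```

The point of the split is that propagation delay is fixed by geography, so handling delay (and congestion) is where network design actually has leverage.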
The big network question isn’t related to local-edge at all, but to edge-as-a-service, and it’s not so much how you connect to it, but how traffic is handled within the edge. Recall from past blogs in this series that “the edge” as a service is almost surely a metro-area service. That means that edge hosting resources would be distributed within a metro area in a variety of ways, likely determined by the total edge opportunity within a metro area. For example, it might start with a single metro process point, and expand as opportunity grows to include distributed hosting points closer to major traffic sources.
The key to metro edge success is the network, because it’s critical that the metro edge hosting resource pool be fully equivalent, meaning that you could pick a hosting point from the pool at random and still fulfill the edge hosting SLA, including latency. That argues for a fabric-like connection model, with a lot of capacity between hosting points and a minimum of handling variability—a mesh. You also need the metro fabric to couple to data center switching at each hosting point, and the same level of meshing at the switching level to avoid any variability of latency and QoE. The better this whole network fabric is, the bigger the metro edge can be without compromising the SLA, and the more efficient the pool of resources would be.
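What “full equivalence” buys you can be sketched in a few lines: if every hosting point in the metro pool meets the SLA latency bound, a scheduler can place work anywhere, even at random. The pool names and latency figures below are hypothetical:

```python
import random

# Sketch of "full equivalence" in a metro edge pool: if every hosting
# point meets the SLA latency bound, placement can be arbitrary.
# Hosting-point names and latencies are hypothetical.

SLA_LATENCY_MS = 5.0

def pool_is_equivalent(latencies_ms, sla_ms=SLA_LATENCY_MS):
    """True if any randomly chosen hosting point satisfies the SLA."""
    return all(lat <= sla_ms for lat in latencies_ms.values())

metro_pool = {"metro-core": 1.2, "edge-a": 2.8, "edge-b": 3.5}

if pool_is_equivalent(metro_pool):
    host = random.choice(list(metro_pool))  # any choice fulfills the SLA
    print(f"placed on {host}")
```

The better the fabric, the more hosting points fit under the SLA bound, which is exactly the point made above: fabric quality sets the maximum size, and therefore the efficiency, of the pool.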
Edge computing is potentially the biggest driver of metro connectivity, and it’s almost certain that it will contribute enough (via things like 5G feature/function hosting) to make the metro network the most important piece of the network overall. If cloud providers dominate edge-as-a-service, then much of the traffic from the metro areas will go to cloud provider data centers, and the larger metros at least will surely have direct fiber connectivity. I expect that more and more traffic, and particularly high-value traffic, will bypass any IP core networks completely, or will connect to the core only to reach Internet destinations.
CDNs have already shifted the balance of traffic away from the core network, and edge-as-a-service would make the shift decisive. If gaming or autonomous vehicles were to become major edge applications (which is not certain, despite the hype) then we would likely see metro-to-metro meshing as well to accommodate distribution and migration of edge application components. That would tap further traffic from the IP core. All of this indicates a focus on deployment of metro router/metro fabric complexes and a reduction in traditional core routing requirements.
Finally, it’s possible that edge computing, if widely successful, could combine with cloud computing to change the whole notion of a VPN. If edge and cloud combine to become the connection path for both users and “things”, then the concept of a corporate VPN would reduce to nothing more than connections to the Internet/edge and connections between data centers. Think about that one, and you can see that it would bring a truly seismic change in the industry!
That might happen through something that’s already bringing about change, which is virtual networking. The connectivity dimension of edge computing, and the fact that it could change VPNs, mixes edge impact with things like SD-WAN. Even if the future of edge networking and cloud networking diminishes traditional VPN roles, it doesn’t diminish the need to manage connectivity better, and differently. Virtual networking can manage multi-tenancy; those needs arguably launched the whole trend. Virtual networking can quickly extend itself and contract itself to accommodate changes in application scope, too. And virtual networking can enhance security.
Edge impacts on security could surely be profound. Obviously, any complex distributed system has a larger attack surface, particularly when much of it is hosted on multi-tenant infrastructure. Network operators have shown little awareness of this in their NFV work, but fortunately public cloud providers have been focused on multi-tenant hosting security from the very first, and they’ve generally done a good job at the infrastructure level. The edge security problem is likely to arise higher up, in the middleware and application components.
Stringent latency constraints tend to discourage complex security tools, and if edge computing serves event-driven applications, the event sources themselves become attack points. In addition, the APIs that serve to connect application components from event source to ultimate processing destinations will have to be protected. All of this has to be done in a way that, as I’ve noted, doesn’t compromise latency benefits, and it has to be consistent or there’s a major risk of holes developing through which all manner of bad things can enter.
It seems certain that a lot of the protection here will have to come at the network level, something public cloud providers already know from their multi-tenant work, work that relies on various forms of virtual networking. We should expect that edge applications would run in private address spaces, for example. Think of VPNs that seamlessly cross between premises local edge, edge-as-a-service, the cloud, and the data center. SD-WAN can offer that.
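Tenant separation by address segmentation can be sketched concretely. In the toy model below, each tenant gets a private address space and the virtual network layer forwards only within a space; the tenant names and prefixes are invented for illustration:

```python
import ipaddress

# Sketch of tenant separation by address segmentation: each edge tenant
# gets a private address space, and the virtual network layer refuses to
# forward across spaces. Tenant names and prefixes are hypothetical.

TENANT_SPACES = {
    "tenant-a": ipaddress.ip_network("10.10.0.0/16"),
    "tenant-b": ipaddress.ip_network("10.20.0.0/16"),
}

def forwarding_allowed(src, dst):
    """Permit forwarding only when src and dst fall in one tenant's space."""
    for net in TENANT_SPACES.values():
        if ipaddress.ip_address(src) in net and ipaddress.ip_address(dst) in net:
            return True
    return False  # cross-tenant or unknown addresses are dropped

print(forwarding_allowed("10.10.1.5", "10.10.2.9"))  # same tenant: allowed
print(forwarding_allowed("10.10.1.5", "10.20.1.5"))  # crosses tenants: dropped
```

The real mechanisms (overlay encapsulation, per-tenant routing tables) are more elaborate, but the invariant they enforce is the same one this check expresses.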
One significant security issue with edge computing, presuming proper tenant separation by address segmentation, is likely to be the introduction of malware components, a SolarWinds-style hack. The risk is exacerbated by the possibility that many edge computing features will be based on middleware libraries that may or may not be fully secured. This risk would be mitigated if the primary cloud providers or major software vendors provided the edge middleware, but the sheer dynamism of the edge might make it difficult to spot offending elements because of shifting hosting points and traffic patterns.
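One mitigation for this supply-chain risk is to verify every middleware component against a signed manifest before it can be deployed to a hosting point. A minimal sketch of the idea, with an invented component name and manifest:

```python
import hashlib

# Sketch of a supply-chain integrity check for edge middleware: a component
# deploys only if its SHA-256 digest matches a trusted manifest entry.
# The component name, manifest, and payload bytes are illustrative.

TRUSTED_MANIFEST = {
    "event-router": hashlib.sha256(b"event-router v1.0").hexdigest(),
}

def component_is_trusted(name, payload):
    """True only if the component's digest matches the signed manifest."""
    expected = TRUSTED_MANIFEST.get(name)
    return expected is not None and hashlib.sha256(payload).hexdigest() == expected

print(component_is_trusted("event-router", b"event-router v1.0"))  # genuine build
print(component_is_trusted("event-router", b"tampered build"))     # rejected
```

A check like this catches a tampered binary, though not a compromised build pipeline that signs its own malware, which is why the provenance of the manifest itself matters just as much.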
Could zero-trust security, which is offered by a very few SD-WAN vendors, be a key to securing the connectivity of edge computing? I think it could, and that could be a major benefit, because whatever security can be offered at the connectivity level, whatever insights into bad behavior can be obtained there, will not only reduce risk but also reduce the need for security elsewhere, security that may prove difficult to provide without adding a bunch of new layers.
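The essence of zero trust at the connectivity level is default-deny: no session is presumed valid, and only explicitly authorized (identity, destination, port) combinations connect. A toy sketch, with invented identities and policy entries:

```python
# Sketch of a zero-trust connectivity rule: default-deny, with explicit
# per-session policy. Identities, destinations, and ports are hypothetical.

ALLOWED_SESSIONS = {
    ("sensor-gw-01", "edge-app", 8443),
    ("edge-app", "twin-store", 5432),
}

def session_permitted(identity, destination, port):
    """Permit only sessions the policy explicitly lists; deny everything else."""
    return (identity, destination, port) in ALLOWED_SESSIONS

print(session_permitted("sensor-gw-01", "edge-app", 8443))    # authorized path
print(session_permitted("sensor-gw-01", "twin-store", 5432))  # not in policy: denied
```

Because the check is a single set lookup, it adds essentially nothing to latency, which is what makes connectivity-level enforcement attractive for latency-sensitive edge traffic.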
That introduces what I think is the biggest edge security issue, and that is complexity and its impact on operations. People are going to ignore this because many see edge-as-a-service as being nothing more than a kind of local cloud. The problem with that vision is that there’s no justification for edge computing if it’s simply relocated cloud computing. Latency-sensitive applications don’t look like cloud applications; if they did they wouldn’t be latency-sensitive. The applications, as I’ve noted in this series, will drive a whole new application architecture, an architecture that’s highly dynamic not only within the edge provider’s infrastructure, but in its hybrid relationship with users. Future edge applications, to justify the edge, will be more complex, and that’s a problem.
The greatest potential application of artificial intelligence and machine learning may well be edge operations. We’ve had (including recently, with Akamai and Fastly) examples of how little issues can create global problems, and arguably these little issues come down to operator errors or process failures. In a low-latency world, the effects of blunders have low latencies too, and they multiply.
Internet security is almost as big an industry as networking, and that proves that if you don’t build security into a technology architecture, you can’t really hope to secure it. We will need to consider edge application security fully before we start deploying edge applications, period. Remember that edge computing is really about digital twinning and real-world application synchrony. We mess up the application and we mess up the real world. The consequences of security issues in edge computing are absolutely dire, and the risk of malware and hacking is perhaps even worse.
Let’s close things out now for this blog series on the edge. Edge computing, if it exists as a differentiated strategy, has to serve a set of missions that aren’t currently addressed. Those missions seem to focus on latency-sensitive IoT-like applications, particularly ones that require an application to create and/or maintain a digital twin of a real-world system. There are two possible edge models, one where the edge is hosted by the user near the event source, and the other “edge-as-a-service” where it’s hosted by a provider in a metro complex. Since we can already support the former, any transformation of computing that arises out of the edge would have to come from the latter.
Edge-as-a-service necessarily involves a set of tools, what we could call “middleware” or “web services”. These tools should provide a common model for building and running edge applications, both local and as-a-service, and the model has to make security a built-in, not an add-on, property. Something like a Hierarchical State Machine (HSM) is the most promising of currently identified technology concepts to form the basis for digital-twinning applications.
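The defining property of a hierarchical state machine is that a substate which doesn’t handle an event defers it to its parent, so common behavior lives once, at the top of the hierarchy. A minimal sketch, modeling a hypothetical door twin (the states and events are invented for illustration):

```python
# Minimal hierarchical state machine (HSM) sketch for a digital twin:
# a substate that does not handle an event defers to its parent state.
# The states and events model a hypothetical door twin.

class State:
    def __init__(self, name, parent=None, handlers=None):
        self.name = name
        self.parent = parent
        self.handlers = handlers or {}  # event -> next state name

    def handle(self, event):
        """Walk up the hierarchy until some state handles the event."""
        state = self
        while state is not None:
            if event in state.handlers:
                return state.handlers[event]
            state = state.parent
        return None  # event ignored at every level

# Shared behavior (fault handling) lives in the parent state once;
# the "closed" substate only declares what is specific to it.
operational = State("operational", handlers={"fault": "maintenance"})
closed = State("closed", parent=operational, handlers={"open_cmd": "open"})

print(closed.handle("open_cmd"))  # handled locally
print(closed.handle("fault"))     # deferred to the parent state
```

That deferral rule is what makes HSMs attractive for digital twins: cross-cutting events like faults or loss of synchrony can be handled uniformly regardless of which detailed substate the twin happens to be in.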
A foundation of both the edge infrastructure and edge applications is a proper set of network tools to provide secure and agile connectivity among components, and in and out of the edge. If a lot of agility, tenant separation, and application security has to be provided at the edge, for applications whose structure and distribution will vary, then it seems likely that virtual networking will have to play a major role.
We already have SD-WAN tools that span from user to cloud to data center, and it would seem very logical to incorporate these tools in the edge, particularly if they can also provide zero-trust security. For IoT applications, of course, overhead and latency will be key issues since access bandwidth could be limited and it would make no sense to introduce latency through a virtual network when the applications’ justification depends on low latency.
The final point about the edge is (OK, you probably guessed it!) we need a single edge architecture from the deployment of infrastructure upward to the deployment and operationalization of applications. The edge is going to be a mixture of applications, ranging from elements of telecom infrastructure for 5G through IoT, and on to gaming and more. If every application has its own toolkit, it’s hard to see how we can avoid overspending on the edge, and operational and security challenges.
Some sort of “edge computing” is certain, because we already have it in IoT applications and more. Edge-as-a-service is likely certain too, largely because of 5G, but the kind of profoundly different edge computing we read about is far from certain. We need to have a viable edge model, and here we will almost surely end up relying on public cloud providers, unless software vendors act quickly to counter the cloud providers’ early success. If we want the best possible edge computing model, we’ll still have to work for it. I hope this series has, by laying out issues and options, improved our chances of success.