What will drive the security space in 2026? That's an important question for network vendors, but what will replace security as a driver may be even more important. Security, after all, has been a revenue and profit bright spot for vendors for over a decade, and that seems to be coming to an end.
The notion that security as a driver of network spending would have a finite lifespan shouldn't have come as a surprise. Very few technologies endure in the spotlight forever, and those that do, like routing, tend to be in a state of slow commoditization. That which you need forever, and in larger quantities, just can't be a source of forever profit growth for vendors. On the other hand, vendors are hardly likely to step on an established product set when there's still some life remaining. So while the challenges with security are real, vendors really need to look less at simple abandonment and more at some "what next?" thinking.
Security is a defensive spending target; companies embraced it because of growing risks from bad actors. However, the place to secure anything is at the on-ramps, and since cloud computing has rebuilt the front-end strategy, the attack surface has moved out to the cloud. That erodes the value of premises-based tools, something Cisco learned with Splunk's under-performance. Security also has a lot of dimensions, driven by the growing number of "attack vectors" that are emerging.
Enterprises tell me that they see three broad security missions. One is to protect in-flight traffic from interception. Another is to protect against “denial of service” created by overloading networks or applications with spurious traffic, and the third is to protect applications and databases from unauthorized access. The first of these is seen as handled through end-to-end traffic encryption, the second is clearly a network mission, and the third (and most important) has elements that could be handled in either the endpoint or the network.
Denial of service attacks can originate with a single source or distributed sources (DDoS). The attack is recognized by a heavy volume of traffic, and the remedy is to block the identified sources at the point of entry. This "identified sources" point is the hard part; the more distributed the attack, the harder it is to distinguish attack traffic from legitimate traffic, and the harder it is to prevent the attack from overwhelming the entry point where you're trying to determine legitimacy. DDoS attacks are usually aimed at an Internet portal, since the Internet offers the easiest target for attackers given that it doesn't authenticate users at the point of connection. ISPs and online security providers (Cloudflare, for example) can play a role here by preventing "bots" from attacking or by recognizing attack traffic patterns and quenching the stuff before delivery to the target. In any event, this stuff has to be caught at the front-end, the network portal, so there's only a certain amount that enterprise network equipment can do.
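To make the "identified sources" problem concrete, here's a minimal sketch of per-source volume tallying at a portal. The threshold and traffic tuples are invented for illustration; real mitigation platforms use far richer signals, and as the text notes, the more distributed the attack, the less reliable a per-source volume test becomes.

```python
from collections import Counter

# Illustrative threshold only; real systems tune this dynamically.
VOLUME_THRESHOLD = 10_000  # bytes per window before a source looks suspicious

def find_attack_sources(packets):
    """Tally per-source volume in a time window and flag sources above
    the threshold. A single-source flood stands out clearly; a widely
    distributed attack pushes each source's volume down toward legitimate
    levels, which is exactly why DDoS is hard to filter at the entry point."""
    volume = Counter()
    for src, nbytes in packets:
        volume[src] += nbytes
    return {src for src, total in volume.items() if total > VOLUME_THRESHOLD}

# Hypothetical one-window packet log: (source_ip, bytes) pairs.
window = [("10.0.0.5", 6_000), ("10.0.0.5", 7_000), ("192.168.1.9", 500)]
blocklist = find_attack_sources(window)
# "10.0.0.5" exceeds the threshold and would be blocked at the point of entry
```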
That leaves application/database security, and as I’ve noted, enterprises rate this as their top issue. The problem here is that as enterprises shift toward cloud front-ends for their applications, the security has to be provided in, and increasingly by, the cloud platform being used. The prevailing cloud-front-end-data-center-transaction-processing model exposes low-level application and data assets to the cloud, so if the cloud doesn’t secure them, there’s a major risk of a successful attack.
This is a problem because there are a growing number of attack vectors aimed at applications. The classic one is the outsider intrusion, which often relies on faults in the application or platform software (an "exploit") to gain outside access to something inside, so to speak. Another common one is credential-stealing, where someone impersonates a valid user by gaining their ID and password. A third is "infection of the valid", where malware gains access to a system or device with legitimate access and exploits it, bypassing the user completely.
The issue of password and ID failures is difficult to address with the network; it's easier when it's done through a software tool that manages access. However, some SD-WAN products have the ability to recognize users and their valid application/data partner elements, as a byproduct of managing traffic, routing, and priority. This is a "virtual network" approach, and it may be one thing to watch for security in 2026.
In theory, virtual networks alone can augment security by facilitating separation of assets and users so as to disconnect people from things they shouldn’t need to access given their roles. For example, access to payroll applications, personnel records, or governed assets could be limited by partitioning them and their users. If the virtual network is augmented, as I’ve noted some SD-WANs are, to know both users and the assets they can connect with, the combination can go even further to cement access security provisions.
There are three problems with this happy situation. First, enterprises find it’s a lot of work to create and maintain user-based access rights. While having a grouping strategy to allow for assets and users to be assigned to groups or roles, then providing group/role-based rights, can reduce this burden, over half of enterprises say it’s created too many problems for them in keeping workers connected to what they need.
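The group/role-based rights model described above can be sketched in a few lines. All the role, user, and asset names here are invented for illustration; the point is that rights attach to roles rather than individuals, which reduces the maintenance burden but still breaks worker access whenever role assignments drift out of date.

```python
# Hypothetical role-to-asset grants; rights attach to roles, not users.
ROLE_ACCESS = {
    "payroll-admin": {"payroll-app", "personnel-records"},
    "engineer": {"build-system", "source-repo"},
}

# Hypothetical user-to-role assignments; this is the table enterprises
# say is burdensome to keep current.
USER_ROLES = {
    "alice": {"payroll-admin"},
    "bob": {"engineer"},
}

def can_access(user, asset):
    """A user may reach an asset only if one of their roles grants it.
    An unknown user or an unassigned role simply yields no access."""
    return any(
        asset in ROLE_ACCESS.get(role, set())
        for role in USER_ROLES.get(user, set())
    )
```

So `can_access("alice", "payroll-app")` is true while `can_access("bob", "payroll-app")` is false; partitioning governed assets this way is exactly the separation the virtual-network approach promises.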
Second, virtual-network or SD-WAN traffic access control doesn’t prevent the injected-malware problem. If a user with the proper rights is infected, the malware inherits the user’s access rights, including virtual-network access and connectivity.
Third, and perhaps most significantly, this approach requires that the partitioning extend to the cloud or front-end technology, which makes it, many would say, more of a SASE extension. It may also mean that when the Internet is used to support work from home, other remote work, or multi-site workers whose system connections vary, some kind of login/identity step has to be taken, one that can then make the proper network connection. That means the virtual connection management process itself could be hacked, and if that's the only threat barrier, you're in trouble. If an intruder somehow gains access to an interface that's already on a virtual network, and other access barriers are not provided for key assets, there's no protection. If you prevent this by requiring per-asset application access control, you're eliminating the benefit of the virtual network's unification of access management.
It seems like where all this is heading is that the only positive contribution networks can make to security is in analyzing traffic patterns to look for signs of unusual behavior. This could mean looking at traffic patterns from every endpoint, users and assets alike, but some enterprises say that it would be easier and just as effective to look at traffic to "governed" assets. The pushback here is the classic "big brother is watching" problem; would enterprises use this for worker surveillance? In any event, would it be effective if malware planted in "authorized" systems were designed not to create unusual patterns of access? About a third of enterprises think that this, and any other network-based security, simply invites hackers to focus on planted malware, credential fraud, and how to make either or both harder to detect. Thus, they think, software maintenance to address exploits, and security scans to detect malware, are more important.
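A toy version of this traffic-pattern analysis is a simple baseline deviation test on volume to a governed asset. This is a bare z-score sketch with made-up numbers; real observability tools use far richer models, and, as the enterprises quoted above point out, malware built to mimic normal access patterns would evade it entirely.

```python
import statistics

def is_unusual(history, observed, sigmas=3.0):
    """Flag traffic to a governed asset when the observed volume deviates
    from its historical baseline by more than `sigmas` standard deviations.
    A zero-variance baseline flags any change at all."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) > sigmas * stdev

# Hypothetical daily request counts to a payroll database.
baseline = [100, 110, 95, 105, 90]
# A sudden burst of 500 requests trips the detector; 105 does not.
```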
But isn’t that what Splunk does, at least as a subset? This opens what may be the two real challenges for security tools provided by network vendors. The first is that the division of the security problem between the software side and the network side kicks the decisions up to the CIO level, and most CIOs are far more application-and-software oriented. The second is that the defense-oriented view of security tends to fragment responses; it makes sense to defend what you fear will be attacked. Splunk is really an observability tool with security applications. Observability has general value, but it’s proved harder to push to enterprises in no small part because “general value” and “unfocused” are only a smidgen apart in the minds of many enterprise types. Traffic-oriented analysis to detect security issues is logically just a feature, so if the market is indeed shifting in that direction for network’s participation in security, it may be a shift that hurts every network solution.
