Security is really the only network budget area that enterprises tell me is safe from pressure. In fact, of 294 enterprises who commented on their plans in the last quarter of 2023, 288 said they expected a modest-or-better increase in their security budgets overall. The problem is that while spending is increasing, satisfaction levels aren’t. Of that same group, 211 said they believed they overspent on security, and the percentage who expected to spend more was roughly the same for this subgroup as it was for enterprises overall.
One reason I believe we have a problem here is that few enterprises can articulate a coherent overall security strategy. In that same target group, 19 said they were “confident” they had such a strategy, and 157 said they “believed” they did. The rest were either unsure (102) or sure they didn’t (16). The fact that the number who were sure they had no coherent strategy was almost equal to the number who were confident they had one is telling. It’s very clear that enterprises continue to throw money at security solutions in response to events, either their own experiences or those of others.
There’s also a lot of uncertainty regarding what coherent strategies are available. The number one strategy cited is “zero trust”, which is listed as the primary model by 122 of the enterprises. “Firewalls” gets 85 votes, “application access control” gets 38, and nothing else gets more than ten votes. But of the 122 who say zero trust is the leading-edge security model, only 33 said they were confident they could define the properties and requirements of the solution. How many are right depends on what you consider “right”, and that is of course a problem in itself.
To me, zero trust means that there is no implied right of access for anything; access rights have to be explicitly conveyed through some trust-grant mechanism. That’s a definition accepted by 21 of the 33 who were confident they had a definition, and even by a majority of those who didn’t cite zero-trust as the primo approach. As to what trust-grant mechanism is appropriate, there is little or no consensus, and it’s clear that this is the big problem that zero trust, and security overall, have to address.
Strictly speaking, an explicit trust grant would have to be set by policy, which means that there has to be both an authority that can issue one and a mechanism to define and enforce it. It turns out that creating either of these two things is problematic, and sustaining a grant policy once established is even harder, because of the complex notion of “explicit”.
To most of us, something that’s “explicit” is something “stated clearly and exactly, leaving no room for interpretation, confusion, or doubt.” For example, “John can access these resources, and Mary can access those resources,” is an explicit statement of trust policy. This raises two questions: who can make the statement, and how can their decision be implemented? Both are problematic, the first because the lines of authority often aren’t clear, and the second because it’s just hard to do.
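To make that concrete, an explicit trust policy can be thought of as an exhaustive allow-list: anything not granted is denied, with no implied access. This is only an illustrative sketch; the names and structure are hypothetical, not a reference to any product:

```python
# Hypothetical sketch: an explicit trust policy as an exhaustive allow-list.
# Any (consumer, asset) pair not listed is denied -- no implied access.
TRUST_GRANTS = {
    "john": {"crm-db", "sales-reports"},
    "mary": {"hr-records", "payroll"},
}

def is_access_allowed(consumer: str, asset: str) -> bool:
    """Explicit grant or nothing: absence of a rule means denial."""
    return asset in TRUST_GRANTS.get(consumer, set())
```

The point of the sketch is the default: an unknown consumer, or an unlisted asset, falls through to denial rather than to some inherited permission.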
An information relationship is a pair-wise pathway between a consumer and an asset. Who decides what the consumer’s rights are, and aren’t the rights of the asset also important? A typical organization chart will show a hierarchy of command and control, so who along the path can set the policy? Since a similar chart could be drawn for the asset, how do the two charts merge into a unified approach? Do you have to go upward to the first common box? If so, you may end up at the CIO level or even higher. Only seven companies said they had a clear definition of who could set policy.
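That “first common box” question is essentially a lowest-common-ancestor search over the two reporting chains. A rough sketch, using a made-up org chart, shows how quickly the search can land at the CIO:

```python
# Hypothetical org chart: each node maps to its parent; the chart and names
# are invented for illustration only.
ORG_PARENT = {
    "john": "sales-team", "sales-team": "sales-div", "sales-div": "cio",
    "crm-db": "it-ops", "it-ops": "cio",
}

def chain_to_root(node: str) -> list:
    """Walk upward from a node to the top of the chart."""
    chain = [node]
    while chain[-1] in ORG_PARENT:
        chain.append(ORG_PARENT[chain[-1]])
    return chain

def first_common_box(consumer: str, asset: str):
    """Find the lowest box that sits above both the consumer and the asset."""
    ancestors = set(chain_to_root(consumer))
    for node in chain_to_root(asset):
        if node in ancestors:
            return node
    return None
```

In this invented chart, the only box above both “john” and “crm-db” is the CIO, which illustrates why policy authority so often drifts to the top.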
All of these companies favor a role-based approach that includes what we could call a “role hierarchy”. In this approach, you define a series of roles, with the favored approach being to start at an organizational level. Each division or operating group has a high-level role, and that role defines the trust policies for everyone in it. You can restrict trust below this level, but you cannot add it in, and the normal practice (say enterprises) is to restrict trust grants further as you dive down the organizational chart.
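One way to picture that restrict-only rule is as set subtraction down the hierarchy: a child role inherits its parent’s grants and can remove some, but has no way to add any. The class below is a hypothetical sketch under that assumption, not any enterprise’s actual implementation:

```python
from typing import Optional, Set

class Role:
    """A role in a 'role hierarchy': grants flow downward and can only shrink."""
    def __init__(self, name: str, grants: Optional[Set[str]] = None,
                 parent: Optional["Role"] = None,
                 restrict: Set[str] = frozenset()):
        self.name = name
        if parent is None:
            # A top-level (division) role states its grants explicitly.
            self.grants = frozenset(grants or ())
        else:
            # Lower roles inherit from the parent and may only remove
            # grants via 'restrict' -- there is no way to add one.
            self.grants = parent.grants - frozenset(restrict)
```

Because the constructor only ever subtracts from the parent’s set, a trust grant absent at the division level can never reappear further down the chart.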
In this approach, it’s the responsibility of what we could call the “trust engineer” to consult with both the management chain of the organization and the management chain of the assets to which the organization might connect, to establish the rules. There is no interest I can find in engineering multiple chains, one for users and one for assets, and then having a sort of policy merge take place. The big problem here, of course, is the work associated with establishing the explicit connection rights that everything requires.
The companies also agree that the key step in implementing trust policies is the notion of absolute identity. Everything, users and assets, software, data, and people, has to be known for what it is, with great certainty. For “everything” here, we should read (as a practical matter) everything that has a network address and can connect or be connected to.
For both the question of the “who” and the “how” we have to add in the complication of variability. Variability comes in two levels, the first being related to changes in workflows that have to be reflected in trust grants, and the second being changes in the relationship between “who” and “where”, a classic network problem that’s reflected in the address-to-entity relationship.
People change roles, roles change tools, and things like illnesses or vacations can disrupt work policies. That means that unless trust grants are changed, people can end up not being connected to the assets they need, or application components and data could lose touch with each other.
Identity is an issue for a number of reasons. One is that many users/assets get their addresses assigned dynamically, which means that the address of an entity may not be an absolute indication of who or what it is. Do you require some form of strict sign-on for users, with 2FA? A security key? And how does an application component get an identity?
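One consequence of dynamic addressing is that an address-to-identity binding is only as durable as the lease that created it, so it has to carry an expiry and be re-verified rather than treated as permanent. A hypothetical sketch of that idea:

```python
import time
from typing import Optional

class AddressBinding:
    """Binds a network address to an identity only for a limited lease."""
    def __init__(self, address: str, identity: str, ttl_seconds: float):
        self.address = address
        self.identity = identity
        # The binding expires with the lease; after that, the address
        # tells you nothing about who or what is behind it.
        self.expires_at = time.time() + ttl_seconds

    def resolve(self, now: Optional[float] = None) -> Optional[str]:
        """Return the identity only while the binding is still live."""
        if now is None:
            now = time.time()
        return self.identity if now < self.expires_at else None
```

The TTL and class name here are assumptions for illustration; the point is simply that a stale binding must resolve to nothing, not to the last known identity.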
Finally, there’s the related question of “distrust”. An element that misbehaves must, at some point, be considered to have lost trust, and thus lost rights. How do you decide that misbehavior has occurred, how do you revoke trust, and how does it get restored? Just a bit over half of zero-trust users believed they had solid strategies here.
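A minimal way to model revocable trust is a per-entity state that loses its grants after some misbehavior threshold and stays distrusted until explicitly re-certified. The threshold and the “strike count” here are illustrative assumptions, not a recommendation:

```python
class TrustState:
    """Tracks whether an entity is still trusted; restoration is explicit."""
    def __init__(self, threshold: int = 3):
        self.threshold = threshold  # misbehavior reports before revocation
        self.strikes = 0
        self.trusted = True

    def report_misbehavior(self) -> None:
        """Record a misbehavior signal; revoke trust at the threshold."""
        self.strikes += 1
        if self.strikes >= self.threshold:
            self.trusted = False  # revoked: all grants suspended

    def recertify(self) -> None:
        """Trust comes back only through an explicit re-certification."""
        self.strikes = 0
        self.trusted = True
```

The design choice worth noting is that `recertify` is a deliberate act: trust never drifts back on its own with time.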
The distrust issue may be the most problematic of all, and not just for zero-trust strategies. Any form of security is grounded in the notion that bad stuff can be recognized. How? And if it is, will the security framework allow offending entities to be disconnected until they’re re-certified? Malware planted on a system inherits the identity of the user, or at least inherits the address-to-identity association. A key logger might be running on a system right now; would it be detected? If not, might it collect enough to allow the owning user to be spoofed? Even some basic evidence of issues, like an unknown open port on a system’s IP address, could be missed if nobody does a port scan, or if the results of one aren’t understood.
Going back to my earlier comment on halves, it’s my view that less than half of products that assert “zero trust” support actually provide anything that approaches being useful. This is one of the biggest problems in security; there’s nowhere that the tendency to hype-wash technology products does more damage. If you’re serious about security, you need zero trust, and if you’re serious about zero trust you’d better take a very hard look at any vendors who offer it to you, or you may be stepping backward instead of forward.