The recent massive US government hack has raised a lot of security concerns, as it should have. The details of just what was done and how remain a bit murky, but I think one thing that's clear is that the hack attacked applications and data through management and monitoring facilities, not directly. That's something that should have been expected, and protected against, but there's a history of ignoring this dimension of security, and we need to rewrite that history.
I’m not going to try to analyze the specifics of the recent hack, or assign blame. Others with direct exposure to hacked sites are better equipped. Instead, what I want to do is address an attitude that I think has contributed to hacks before, including this one, and somehow still seems to go on creating problems. A big part of the problem we’re having today is virtualization, and that’s everywhere.
It’s traditional to think of an application or database as a set of exposed interfaces through which users gain access. Security focuses on protecting this “attack surface” to prevent unauthorized access. This is important, but when we talk about “defense in depth” and other modern security concepts, we tend to forget that hacking doesn’t always take the direct route.
Early hacks I was aware of didn't involve applications at all, but rather focused on attacking the system management APIs or interfaces. A hacker who could sign on to a server with administrative credentials could do anything and everything, and many organizations forgot (and still forget) to remove default credentials after they've installed an operating system. As we've added layers of middleware for orchestration, monitoring, and network management, we've added administrative functions that could be hacked.
All these management applications have their own user interfaces, and so it's tempting to view them as extensions of the normal application paradigm. The problem is that more and more of the new tools are really involved in creating and sustaining virtual resources. There are layers of new things under the old things we're used to securing, and it's easy to forget they're there. When we do, we present whole new attack surfaces, and yet we're surprised when they get attacked.
One of the most insidious, and serious, problems in the virtual world is the problem of address spaces. Anything we host and run has to be part of an address space if we expect to connect to it, or connect its components. When something is deployed, it gets an address from an address space. Homes and small offices almost always use a single "real" or public IP address, and within the facility assign each user or addressable component an address from a "private" address space. These private addresses aren't visible on the Internet, meaning that the outside world can't send things to them directly; it can only reply to something they've sent.
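As a minimal sketch of that distinction, Python's standard ipaddress module can classify which addresses in a deployment fall into private address space and which are publicly routable; the sample addresses below are purely illustrative.

```python
# Minimal sketch: classify deployed addresses as private or public.
# The sample addresses are illustrative only.
import ipaddress

deployed_addresses = [
    "192.168.1.25",  # typical home/small-office private address
    "10.0.4.17",     # private address from a larger internal range
    "8.8.8.8",       # a well-known public address, shown for contrast
]

for addr in deployed_addresses:
    ip = ipaddress.ip_address(addr)
    scope = "private" if ip.is_private else "public"
    print(f"{addr}: {scope}")
```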
The "security" of private address spaces is often taken for granted, but they aren't fully secure. An attacker who could form the proper packet headers, and get them to the edge of that space, could create a message addressed to a private address and have it delivered. That's not the main problem for our new middleware tools, though. The problem is that anything that's inside that private address space is able to address everything else there.
Address space management is a critical piece of network security policy. It won’t prevent all security problems, but if you get it wrong it’s pretty likely to cause a bunch of them. One of my criticisms of NFV is that it pays little attention to address space management, and the same could be said for some container and Kubernetes implementations. Whatever can access an application or component can attack it, which means that address space control can limit the attack surface by keeping internal elements (those not designed to be accessed by users) inside a private space.
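As a hypothetical illustration of that kind of address-space control, the sketch below audits a component inventory against a simple policy: internal-only elements must be addressed from a designated private range. The component names, addresses, and range are assumptions invented for the example.

```python
# Hypothetical sketch: audit a component inventory against an address-space policy.
# Internal-only components should sit inside the designated private range; anything
# that doesn't needlessly enlarges the attack surface.
import ipaddress

INTERNAL_SPACE = ipaddress.ip_network("10.20.0.0/16")  # illustrative private range

components = [
    {"name": "order-frontend", "address": "198.51.100.14", "internal_only": False},
    {"name": "order-db",       "address": "10.20.3.40",    "internal_only": True},
    {"name": "billing-worker", "address": "198.51.100.77", "internal_only": True},  # violation
]

for comp in components:
    addr = ipaddress.ip_address(comp["address"])
    if comp["internal_only"] and addr not in INTERNAL_SPACE:
        print(f"VIOLATION: {comp['name']} is internal-only but addressed at {addr}")
```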
Another issue is created by management interfaces. Things designed to work on IP networks presume universal connectivity and rely on access control to protect them from hacking. This presents major problems, because ID/password discipline is inconvenient to apply, so people tend to pick easy (and hackable) passwords, write them on post-it notes, and so forth. Another problem is one we've retained from the early days: software is often shipped with a default identity ("admin") and password (blank), and users can forget to delete that identity, rendering the interface open to anyone who knows the software.
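A minimal sketch of how you might audit for that last problem, assuming you can read the configured account list: the account entries and default pairs below are illustrative, not drawn from any particular product.

```python
# Hypothetical sketch: scan configured management accounts for factory defaults.
# The default pairs and configured accounts are illustrative only.
KNOWN_DEFAULTS = {("admin", ""), ("admin", "admin"), ("root", "root")}

configured_accounts = [
    ("admin", ""),                      # factory default never removed -- flagged below
    ("ops-team", "S3cure!Passphrase"),  # a deliberately set credential
]

for user, password in configured_accounts:
    if (user, password) in KNOWN_DEFAULTS:
        print(f"WARNING: account '{user}' still uses a factory-default credential")
```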
In modern networks and applications, this problem is exacerbated by "management creep". I've seen examples of this in NFV, where VNF management is extended, for convenience, to cover the resources the VNF is hosted on, which then renders those resources hackable by anyone with access to VNF management.
Then there's monitoring. You can't monitor something without installing the monitoring element where it can access its target, which means that monitoring tools have an almost-automatic back door into many things a user would never willingly (or knowingly) expose. As in my earlier address-space example, the monitoring element can be contaminated and influence not only the specific thing it's supposed to probe, but perhaps other things within the same address space. This kind of attack would defeat address-space partitioning as a way of reducing the attack surface.
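One partial mitigation is simply to watch where your monitoring elements actually reach. The sketch below is a hypothetical check that compares a probe's observed connections against the targets it was deployed to monitor; the agent names, addresses, and scope are all invented for illustration.

```python
# Hypothetical sketch: verify a monitoring agent only touches its declared targets.
# Agent names, addresses, and scope are illustrative only.
ALLOWED_TARGETS = {"10.20.3.40", "10.20.3.41"}  # what this probe is supposed to monitor

observed_connections = [
    ("monitor-probe-1", "10.20.3.40"),
    ("monitor-probe-1", "10.20.7.99"),  # outside the declared scope -- flagged below
]

for agent, target in observed_connections:
    if target not in ALLOWED_TARGETS:
        print(f"ALERT: {agent} reached {target}, outside its declared monitoring scope")
```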
The net here, sadly, is that most organizations and most developers don't think enough about security. Network operations and IT operations share that weakness, but in their case it takes the form of "tunnel vision": a focus on threats to a public API or interface rather than on internal interfaces between components that, while not intended to be public, may still be exposed.
The development people have their own tunnel-vision problem too, which is that while many would realize that introducing an infected component into an application during development is almost self-hacking in action, they don’t take steps to prevent it. A number of enterprises told me that their first reaction to the recent massive hack was to review their own development pipeline to ensure that malware couldn’t be introduced. Every one of them said that their initial review showed that it was easy for a developer to contaminate code, and “possible” for an outsider to access a repository.
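One concrete form that kind of pipeline review could take, as a hypothetical sketch: verify that every build input matches a pinned hash before it's allowed into the build. The artifact name and expected hash below are placeholders, not real values.

```python
# Hypothetical sketch: verify build inputs against a pinned hash manifest so a
# tampered dependency or component is caught before it enters the build.
# The artifact name and expected hash are placeholders.
import hashlib
from pathlib import Path

PINNED_HASHES = {
    "vendor/libfoo-1.4.2.tar.gz": "d2c76e0000000000000000000000000000000000000000000000000000000000",
}

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

for name, expected in PINNED_HASHES.items():
    artifact = Path(name)
    if not artifact.exists():
        print(f"MISSING: {name}")
    elif sha256_of(artifact) != expected:
        print(f"TAMPERED: {name} does not match its pinned hash")
```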
Do you want a comforting solution to all of this, a tool or practice that will make you feel safe again? It's not here, nor in my view is it even possible. There is no one answer. Zero-trust security is a helpful step. Securing APIs and authenticating workflows at each step would help too. Both will work until the next organized attempt to breach comes along…and then maybe they won't. The only answer is to take every possible precaution to defend, and every possible action to audit for attempts at intrusion. Unusual access patterns, like repeated authentication failures or anomalous traffic, could be an indication of a breach in the making, and if recognized in time could keep it from becoming a reality.
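As a minimal sketch of that kind of auditing, assuming you have an authentication log to read, the check below flags sources with repeated login failures; the threshold and the sample events are made up for illustration.

```python
# Hypothetical sketch: flag sources with repeated authentication failures, one simple
# form of the "unusual access pattern" auditing described above. Events and threshold
# are illustrative; a real deployment would read from its own audit trail.
from collections import Counter

FAILURE_THRESHOLD = 5

auth_events = [
    {"source": "198.51.100.23", "result": "failure"},
    {"source": "198.51.100.23", "result": "failure"},
    {"source": "10.20.3.40",    "result": "success"},
    # ...more events would accumulate here over time...
]

failures = Counter(e["source"] for e in auth_events if e["result"] == "failure")
for source, count in failures.items():
    if count >= FAILURE_THRESHOLD:
        print(f"POSSIBLE BREACH ATTEMPT: {count} failed logins from {source}")
```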
This is perhaps a legitimate place for AI to come in, if not “to the rescue” in an absolute sense, at least in the sense of mitigating risk to the point where you could play the odds with more confidence. Because, make no mistake, security is a game of improving the odds more than one of absolute solutions. Play smart, and you do better, but you can never relax.