Most of the time, vulnerabilities are things that grant attackers superpowers: the ability to read or write arbitrary files on the server filesystem, the ability to run arbitrary commands, etc. But sometimes vulnerabilities can get in the way of each other. When this happens enough in an application, you can wind up with something The Simpsons called “Three Stooges Syndrome”:
Something that we frequently see at Bishop Fox during pen tests is a component or feature that is vulnerable to some serious issue but isn’t quite exploitable. This can happen for a variety of reasons, often not due to any explicit security feature. Fully exploiting a vulnerability sometimes means threading a needle, pushing functionality in ways the system was never intended to behave. And if the conditions don’t line up perfectly, you might wind up with a serious vulnerability that simply can’t be exploited right away.
I’m here to tell you that you should not ignore these issues! Too often we see this sort of bug either ignored or de-prioritized. The reality is that the issue might become exploitable at any time due to ordinary and seemingly innocuous code changes. Don’t be like Mr. Burns. It’s easy to look at low-severity findings in your pen test report and walk away thinking you’re indestructible. The reality may be very different.
Let’s explore a hypothetical example of a web application vulnerable to SQL injection. During the pen test, we notice that inserting a single quotation mark into the user’s city field results in an HTTP 500 error, and after cross-referencing this with the application source code, we can tell that the input is not being sanitized on its way to the back-end database. This seems like a textbook case of SQL injection, so we start attempting exploitation right away.
But there’s one holdup: the city field is only eight characters long. Any longer input we enter will be truncated to eight characters. This rather throws a wrench into our exploitation plans. To actually inject queries, they’d have to be extremely short, and doing something meaningful, like exposing password hashes from the database, might not be possible.
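To make the failure mode concrete, here is a minimal sketch of the scenario. The schema, the `city` column, and the `find_by_city` helper are all hypothetical stand-ins for the article’s app, but the shape of the bug is the classic one: user input concatenated into SQL, with only a length limit standing in the way.

```python
import sqlite3

# Hypothetical schema and endpoint logic, sketched with an in-memory DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, city TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'Phoenix')")

MAX_LEN = 8  # the field-length limit that currently blocks exploitation

def find_by_city(city: str):
    city = city[:MAX_LEN]  # truncation happens before the query is built
    # Vulnerable: user input is concatenated directly into the SQL string.
    query = f"SELECT name FROM users WHERE city = '{city}'"
    return conn.execute(query).fetchall()

# A classic payload gets cut down to broken SQL, so the injection "fails"
# today -- but only because of the length limit, not because the input is
# ever sanitized.
try:
    find_by_city("' OR '1'='1")
except sqlite3.OperationalError as e:
    print("query error:", e)  # the HTTP 500 the tester observed
```

Raise `MAX_LEN` to 100 and nothing about this code looks more dangerous, yet the full payload now fits and the injection works.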
But a length limit on a field is a fragile security control. There might be some clever query that is short enough to fit under the character limit, or, more likely, the limit may be extended later in development. It’s not hard to imagine that a developer would (correctly) realize that eight characters is far too short for the name of a city. The residents of Taumatawhakatangihangakoauauotamateaturipukakapikimaungahoronukupokaiwhenuakitanatahu, New Zealand, would be very upset if they couldn’t enter their city name. Furthermore, what developer would reasonably expect that extending a city field from eight characters to 100 would catastrophically undermine the security of their web application? This is why such an issue can’t be ignored.
I sometimes like to refer to the Update Software button as the Would-you-like-to-maybe-break-everything button. This can be true in more ways than you might expect.
Sometimes Bishop Fox will find a vulnerability that is not exploitable because a component is so old that it doesn’t yet support the functionality necessary for the attack to work. For example, imagine a web application that uses an ancient XML parser that doesn’t support external entities (the mechanism behind XXE attacks). But it’s an internal component not directly exposed to users, so it’s no big deal, right? Well, this functionality might just be what an unrelated server-side request forgery (SSRF) vulnerability has been waiting for. Fixing one issue may make another exploitable.
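Here is a small sketch of that accidental safety. The payload below is a classic XXE probe pointed at an internal URL (the metadata address is just a common illustrative target), and Python’s standard-library parser happens to refuse to resolve external entities, much like the article’s ancient parser:

```python
import xml.etree.ElementTree as ET

# Classic XXE probe: the external entity asks the parser to fetch an
# internal URL -- exactly the request an SSRF chain would ride on.
payload = """<?xml version="1.0"?>
<!DOCTYPE data [
  <!ENTITY xxe SYSTEM "http://169.254.169.254/latest/meta-data/">
]>
<data>&xxe;</data>"""

# The stdlib ElementTree parser does not resolve external entities, so
# the attack fails here -- "accidental" safety. Swapping in a parser
# configured to load DTDs and resolve entities (as some lxml setups do)
# would make this same payload dangerous.
try:
    ET.fromstring(payload)
except ET.ParseError as e:
    print("entity not resolved:", e)
```

Nothing in the application code changed between the safe and unsafe versions of this story, only the parser underneath it.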
I get why you would feel hesitant to take a perfectly working system and “improve” it. But there’s a point at which being cautious turns into being negligent. Keeping around old components out of fear is not a sound security strategy. This is why it’s critical that security be a conscious effort and not something that happens by accident.
Another common source of an “accidentally secure” system is a security mechanism that exists but in an unstable state. If the code never changed, the application would remain fine. But web applications rarely remain untouched for long. It is in the nature of a web application to continually change, and it should be designed accordingly.
Consider the following rough diagram of the logical flow of a web API as user data propagates from API endpoints to the back-end database:
As you can see, all of the endpoints in the code pass through an input validation routine before reaching the final database connector. Looks secure, right? What could possibly go wrong? A lot, actually. Here’s one possibility:
A new endpoint is added. Only this time, the programmer didn’t notice that there’s an input validation routine that needs to be called. They (not unreasonably) assumed that the database connector layer would handle sanitizing anything going to the database. Suddenly we have an exploitable SQL injection vulnerability, created by a simple mistake.
Instead, a more secure design would look like this:
With sanitization occurring at the database layer, developers are free to add new functionality without fear of accidentally destroying everything for unseen reasons. Furthermore, this design eliminates the possibility that one of those internal pieces of functionality accidentally bypasses the input sanitization. It is sturdy and unlikely to suddenly blow up because some small detail changed.
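In practice, the sturdy version of this design usually means the database layer only ever accepts parameterized queries. The `db_query` helper below is hypothetical, but it shows the idea: if every endpoint must go through it, a forgetful new endpoint still can’t reintroduce injection.

```python
import sqlite3

# Minimal sketch of "sanitize at the database layer" using an in-memory DB.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, city TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'Phoenix')")

def db_query(sql: str, params: tuple = ()):
    # Parameterized queries: user input travels as bound data, never as
    # SQL text, so no endpoint can concatenate its way into an injection.
    return conn.execute(sql, params).fetchall()

# A brand-new endpoint that skips any extra validation is still safe:
payload = "' OR '1'='1"
print(db_query("SELECT name FROM users WHERE city = ?", (payload,)))  # []
```

The payload is simply treated as a (nonexistent) city name and matches nothing, no matter what the endpoint above this layer forgot to do.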
It’s important when evaluating a security vulnerability to identify the root cause. When a vulnerability looks serious but seems to be currently unexploitable, it typically means that something is going very wrong internally. The root cause may well be that the security of some component is not working as intended and could collapse at any moment. Take this into consideration when prioritizing what to fix.
The hallmark of a medium- or low-severity vulnerability is that its risk is conditional in some way. It may not be a big deal on its own, but when combined with something else, it can become a very serious issue. Resist the urge to ignore these conditional vulnerabilities, because what’s accidentally secure one moment can be accidentally insecure in the next.
Application security should be a deliberate act. Just because the stars may have aligned in your favor today does not mean that they will do so tomorrow. As code churns and new features are developed, issues that depend on quirks of implementation may change their behavior. This is why it’s critical to have a strong security-minded design and mechanisms to enforce it.