Pentesters, Don't Overstate the Risks of Your Findings

Recently we got a call from a concerned client: A penetration testing firm engaged by one of their branch offices had reported a major vulnerability in a mobile app! The finding was in a core technology that was believed to be highly secure and was already integrated into several production apps, so a certain amount of panic ensued. According to policy, critical risks had to be fixed immediately, so stakeholders from various departments were pulled in. They'd have to find a way to handle the issue, and quickly, even if it meant working late nights and weekends.

Unfortunately, the report was sparse on information: Apparently, some reverse engineering of the app had been done and a secret key embedded in the Java bytecode had been exposed. The issue was rated "critical", and the report included a code snippet showing the uncovered secret - a variable conspicuously named "key" that appeared to contain a secret value. What was missing, however, was the context in which the key was used (it was impossible to tell from the code snippet alone), and there was no mention of any specific attack enabled by the finding. As a fix for the critical vulnerability, the tester recommended to "obfuscate the code".

The mystery was resolved two days later: The key in question wasn't part of the app's core logic after all, but an access token for a third-party analytics API. Whether this constituted a security risk was debatable, but it certainly wasn't a critical issue. The time and resources invested, including several urgent meetings, phone calls and emails, had been wasted.
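
For illustration, here's a hypothetical reconstruction of what the decompiled snippet might have looked like - all class names, identifiers and values below are invented, and no real SDK or endpoint is implied. Out of context, a constant named "key" looks alarming; with context, it's merely an API credential for a usage-tracking service:

public class AnalyticsClient {

    // Conspicuously named, but not a cryptographic secret: the value merely
    // identifies the app to a third-party analytics API.
    private static final String key = "a91f3c0b7d2e485fb6c1d0a2e9f47c33";

    public void reportEvent(String eventName) {
        // The token is sent as an API credential; it doesn't encrypt or
        // sign anything.
        System.out.println("POST /v1/events?token=" + key + "&name=" + eventName);
    }
}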

Despite standardization efforts, risk assessment has always remained a somewhat subjective exercise. It's just not an exact science: Even if you follow a systematic risk assessment scheme, variables like "exploit difficulty", "access complexity" and "impact" leave a lot of room for interpretation. Manual penetration testing is a (mentally) tough job, and whether we like it or not, there's pressure - imposed by our peers or by ourselves - to be "successful": Even if you are aware that it doesn't objectively matter, being unable to exploit a system after trying everything for weeks feels like failure.

We're also part of an industry where the risk of some vulnerability classes is commonly exaggerated, especially the highly accessible ones. Case in point: Cross-site request forgery. Yes, CSRF attacks aren't impossible, but in many cases the attack vector is too convoluted to warrant more than an "informational" risk rating. Yet, you'll see "critical" CSRF bugs being published on a regular basis.

People who uncover critical bugs are the heroes of our industry. And nowadays, we're even being gifted with "branded bugs" - vulnerabilities so awesome that they deserve their own name. It's no wonder, then, that we tend to overestimate the risk of the security issues we find.

Keep in mind that the result of your penetration test should be reasonable, workable recommendations that help increase the security of the test object. The goal is not to teach the client a lesson! The number and criticality of the vulnerabilities reported are not a measure of a report's quality if there is no substance behind them.

How do I assess vulnerability risks correctly?

Understanding the test object and the related attack vectors on a technical level is a prerequisite for assessing risk. You should be confident about the nature of the issue and its potential impact, and you should be able to make a judgement about the probability that the attack will actually be carried out. This is only possible if you understand the technology you are testing (then again, you shouldn't be testing something you don't understand anyway).

Eliminate subjectivity

Leverage standards to make your risk assessment as objective and comparable as possible. CVSSv3 might not be a perfect expression of risk in every situation, but it at least facilitates consistency between risk ratings. If you're working in a team, try to standardize risk ratings as part of your penetration testing methodology.
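
As a worked example, here's a minimal sketch of the CVSS v3.1 base-score formula as published in the FIRST.org specification (the metric weights are the standard ones; the sample vector at the bottom - a plausible rating for a leaked analytics token - is our own assumption):

public class CvssBase {

    // CVSS v3.1 "Roundup": the smallest number with one decimal place that is
    // greater than or equal to the input.
    static double roundUp(double value) {
        long scaled = Math.round(value * 100_000);
        return (scaled % 10_000 == 0)
                ? scaled / 100_000.0
                : (Math.floor(scaled / 10_000.0) + 1) / 10.0;
    }

    static double baseScore(double av, double ac, double pr, double ui,
                            boolean scopeChanged, double c, double i, double a) {
        double iss = 1 - (1 - c) * (1 - i) * (1 - a); // impact sub-score
        double impact = scopeChanged
                ? 7.52 * (iss - 0.029) - 3.25 * Math.pow(iss - 0.02, 15)
                : 6.42 * iss;
        double exploitability = 8.22 * av * ac * pr * ui;
        if (impact <= 0) return 0.0;
        return scopeChanged
                ? roundUp(Math.min(1.08 * (impact + exploitability), 10))
                : roundUp(Math.min(impact + exploitability, 10));
    }

    public static void main(String[] args) {
        // Hypothetical vector for a leaked analytics token:
        // AV:N (0.85), AC:L (0.77), PR:N (0.85), UI:N (0.85), S:U,
        // C:L (0.22), I:N (0), A:N (0)
        double score = baseScore(0.85, 0.77, 0.85, 0.85, false, 0.22, 0, 0);
        System.out.println("Base score: " + score); // 5.3 - "Medium", not "Critical"
    }
}

Plugging the same metrics into the official calculator is a quick way to sanity-check a gut-feeling rating before it goes into the report.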

If you're unsure about some risk in a certain context, you might be able to get ideas from standards documents. For example, the OWASP Mobile Application Security Verification Standard (MASVS) contains a list of security requirements for mobile apps. A failure of an L1 or L2 control in the MASVS should be treated as a vulnerability, while failures of the resiliency (anti-reversing) controls fall more into the nice-to-have category. This will help you prevent errors such as rating a lack of root detection as a critical issue.
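
To make the root detection example concrete, here's a minimal sketch of the kind of check those resiliency requirements describe - probing common locations of the su binary on Android (the path list is illustrative, not exhaustive; a production check would combine several signals):

import java.io.File;

public class RootCheck {

    // Common filesystem locations of the "su" binary on rooted devices.
    private static final String[] SU_PATHS = {
            "/system/bin/su", "/system/xbin/su", "/sbin/su", "/su/bin/su"
    };

    public static boolean isLikelyRooted() {
        for (String path : SU_PATHS) {
            if (new File(path).exists()) {
                return true; // a known su location exists - device is likely rooted
            }
        }
        return false;
    }
}

The absence of such a check weakens an app's resilience against tampering, but by itself it doesn't expose any data - which is why rating it as a critical vulnerability misses the mark.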

Be realistic

Take a good look at the issue in question and be mindful in your risk assessment. Remember, you don't have anything to prove: Fairly secure systems are rare, but they do exist.

TL;DR

Part of the pentester's job is to provide a realistic risk assessment and workable recommendations for the vulnerabilities discovered. To assess risk, you need to understand the impact and context of a vulnerability. Leverage standards to ensure the quality and consistency of the assessment.

About the Author

Bernhard Mueller is a full-stack hacker, security researcher, and winner of BlackHat's Pwnie Award.

Follow him on Twitter: @muellerberndt