Secure Coding 101: 4 Common Mistakes Developers Make When Fixing Cross-Site Scripting
Even though awareness of web security issues has been on the rise, preventing and fixing XSS issues throughout an application is not always completely straightforward - especially if security was not considered early in the development life cycle.
In our code reviews and pen-tests, we often find XSS prevention methods that are incomplete or even completely miss the mark. The following list details the four most common mistakes we encounter. Avoid these when writing code!
Note: This article presupposes that the reader is familiar with the technical details of cross-site scripting attacks. For those who want to learn more about the attack, we suggest reading the attack description on OWASP.
1. Blacklist filtering
Intuitively, it makes sense to assume that filtering out "bad" input would solve the XSS problem. Developers therefore often attempt to filter certain typical cross-site scripting input strings. A common filter we have actually encountered several times blocks requests for strings containing "<script>" and "alert" - preventing exactly the proof-of-concept from a VA or pentest report, but nothing else.
But even more comprehensive blacklist filters are a bad idea. Trying to prevent every possible way script can be executed in web browsers is pretty much impossible. Have a look at the OWASP XSS Filter Evasion Cheat Sheet and you'll get the idea.
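A minimal Python sketch makes the problem concrete. The filter below is hypothetical (the function name and blocked strings are just the example described above, not code from any real application), yet it behaves exactly like the filters we encounter:

```python
def naive_blacklist_filter(value: str) -> str:
    """Hypothetical blacklist filter of the kind described above:
    reject any input containing '<script>' or 'alert'."""
    if "<script>" in value or "alert" in value:
        raise ValueError("input blocked by XSS filter")
    return value

# The canonical proof-of-concept from the pentest report is blocked:
#   naive_blacklist_filter("<script>alert(1)</script>")  -> ValueError
# ...but trivial variations sail straight through:
for payload in [
    "<ScRiPt>confirm(1)</ScRiPt>",     # case variation defeats the substring match
    "<img src=x onerror=confirm(1)>",  # event handler, no <script> tag at all
    "<svg onload=confirm(1)>",         # another vector without 'script' or 'alert'
]:
    assert naive_blacklist_filter(payload) == payload  # filter lets it pass
```

Every payload in the list executes script in the browser, and none of them trips the filter - which is why chasing individual attack strings is a losing game.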
2. Applying filters instead of escaping output
In theory, whitelist filters can comprehensively prevent XSS. If you strip all non-alphanumeric characters from each and every input variable, there won't be any possibility of injecting meta-characters of any form. We have seen web applications where this has actually been applied, and it does work.
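Such a filter is a one-liner. The sketch below is illustrative (the function name is ours), and it also previews the downside discussed next - legitimate characters are destroyed along with the attacks:

```python
import re

def strip_to_alphanumeric(value: str) -> str:
    """Hypothetical whitelist filter: keep only ASCII letters and digits."""
    return re.sub(r"[^A-Za-z0-9]", "", value)

# No HTML/JavaScript meta-characters survive the filter:
assert strip_to_alphanumeric("<script>alert(1)</script>") == "scriptalert1script"

# ...but legitimate data is mangled right along with the attacks:
assert strip_to_alphanumeric("O'Brien & Sons") == "OBrienSons"
```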
The problem with this approach is that permitting only alphanumeric characters for all inputs is impractical most of the time. Special characters, such as single and double quotes, have legitimate reasons to be submitted in text inputs. Existing functionality of the web application might break. So you'll have to make exceptions to the filter, and XSS issues will start crawling back in.
Whitelist input filters are not the best tool for catching XSS flaws (though you should still have them, for plenty of other reasons). The best way to tackle XSS flaws is escaping untrusted data.
3. Escaping output inconsistently
Very often we find that a web application prevents XSS attacks in some locations while other locations are still vulnerable. This is a common problem when protections have been applied retroactively, e.g. in reaction to a penetration testing report.
Whether we are talking about fixing an existing application or building security into a new one, it is always better to provide central library functions for escaping untrusted data, and make sure that these functions are applied in every single location where untrusted data is included as part of the output. If a standard library can be used, even better. For example, the OWASP Enterprise Security API (ESAPI) provides protection against common XSS attacks and can be retrofitted into the majority of web applications.
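To illustrate the idea of a central escaping function (ESAPI itself is a Java library, so this Python sketch only stands in for it; the function name is ours, and the stdlib's `html.escape` does the actual encoding):

```python
import html

def escape_for_html(untrusted: str) -> str:
    """Single, central escaping function, called at every location where
    untrusted data is written into HTML output.
    html.escape(quote=True) encodes & < > " ' as HTML entities."""
    return html.escape(untrusted, quote=True)

# A classic attribute-breakout payload is neutralized:
assert escape_for_html('"><script>alert(1)</script>') == \
    "&quot;&gt;&lt;script&gt;alert(1)&lt;/script&gt;"
```

Routing all output through one function like this is what makes retroactive fixes auditable: you can grep for output locations that bypass it instead of re-reviewing every template.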
4. Escaping the wrong characters
Even developers who escape untrusted data consistently sometimes apply the wrong kind of escaping. HTML entity encoding protects data that is rendered in the HTML body, but it is not sufficient when the same data ends up inside an unquoted attribute, a JavaScript block, or a URL. Use a library function to escape the untrusted data wherever it is rendered in the output, and make sure to apply the right kind of escaping for the context in which the data is rendered.
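A short Python illustration of why context matters (here, the stdlib's `html.escape` and `json.dumps` stand in for a proper context-aware encoding library):

```python
import html
import json

untrusted = "x onmouseover=confirm(1)"

# The payload contains none of & < > " ', so HTML entity encoding
# leaves it completely untouched...
assert html.escape(untrusted) == untrusted

# ...and inside an *unquoted* attribute it still injects an event handler:
bad = f"<input value={html.escape(untrusted)}>"
assert bad == "<input value=x onmouseover=confirm(1)>"

# Escaping for the actual context fixes it: quote the attribute...
good_attr = f'<input value="{html.escape(untrusted)}">'
assert good_attr == '<input value="x onmouseover=confirm(1)">'

# ...and use a JavaScript string encoder for script contexts:
good_js = f"var data = {json.dumps(untrusted)};"
assert good_js == 'var data = "x onmouseover=confirm(1)";'
```

The same escaped string is harmless in one context and an active attack in another - which is exactly why the escaping routine must know where its output will land.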
About the Author
Bernhard Mueller is a full-stack hacker, security researcher, and winner of BlackHat's Pwnie Award.
Follow him on Twitter: @muellerberndt