Rate limiting is a core defensive technique for any public‑facing web service. By throttling requests from a single client you can protect upstream resources, mitigate brute force attacks and keep latency predictable. Nginx ships with the limit_req module which makes it easy to enforce request caps based on IP address. This guide walks through a complete setup, explains each directive, shows how to test the limits, and offers tips for logging and fine tuning.
Prerequisites
- A Linux server with root or sudo access
- Nginx version 1.9.0 or newer (the limit_req module is built in on most distributions)
- Basic familiarity with editing /etc/nginx/*.conf files
If you are using a distribution that splits configuration into /etc/nginx/conf.d/ and /etc/nginx/sites-enabled/, the examples below will work in either location as long as they are included by the main nginx.conf.
Understanding the limit_req Module
The module works with two concepts:
- Zone – a shared memory area that stores request counters keyed by a variable (usually $binary_remote_addr).
- Limit rule – applied inside a server or location block, referencing the zone and optionally configuring burst capacity and delay behavior.
A minimal configuration looks like this:
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;
$binary_remote_addr is the binary representation of the client IP, which saves memory compared to the plain string form. zone=perip:10m creates a 10‑megabyte shared memory segment named perip; roughly 1 MB stores about 16,000 entries, so 10 MB is enough for most small to medium sites. rate=5r/s limits each IP to five requests per second.
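To see why 10 MB goes a long way, you can work out the capacity from the per-entry size the Nginx documentation quotes (about 64 bytes per stored state on 64‑bit systems; some platforms use 128). A quick back‑of‑the‑envelope sketch:

```shell
# Rough capacity of a limit_req zone: zone size / bytes per stored state.
# 64 bytes per state is the nginx docs' figure for 64-bit builds.
zone_bytes=$((10 * 1024 * 1024))   # zone=perip:10m
state_bytes=64
echo "$((zone_bytes / state_bytes)) entries"   # prints: 163840 entries
```

When the zone fills up, Nginx recycles the oldest entries, so undersizing it mainly hurts sites with very large numbers of distinct client IPs.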
The actual enforcement happens with the limit_req directive:
location /api/ {
limit_req zone=perip burst=10 nodelay;
proxy_pass http://backend;
}
burst=10 allows a short spike of up to ten extra requests above the steady rate. nodelay tells Nginx to serve those burst requests immediately instead of pacing them out; anything beyond the burst is rejected outright.
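Since Nginx 1.15.7 there is also a middle ground between queueing the whole burst and nodelay: the delay= parameter serves the first few excess requests immediately and paces only the rest. A sketch reusing the perip zone from above:

```nginx
location /api/ {
    # The first 5 excess requests are served immediately; the remaining
    # 5 of the burst are delayed so they arrive at the configured rate.
    limit_req zone=perip burst=10 delay=5;
    proxy_pass http://backend;
}
```

This keeps short page-load bursts snappy while still smoothing sustained overruns.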
Step 1 – Install or Verify Nginx
On Ubuntu/Debian:
sudo apt update
sudo apt install nginx -y
On CentOS/RHEL:
sudo yum install epel-release -y
sudo yum install nginx -y
Start and enable the service:
sudo systemctl start nginx
sudo systemctl enable nginx
Check that the limit_req module is available. It is part of the standard HTTP module set and compiled in by default, so the thing to look for is its absence:
nginx -V 2>&1 | grep -- 'without-http_limit_req_module'
If that command prints nothing, the module has not been disabled and you are ready.
Step 2 – Define a Shared Memory Zone
Edit /etc/nginx/nginx.conf (or create a file under conf.d/). Add the zone definition inside the http block:
http {
# Existing directives ...
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;
# Include other config files
include /etc/nginx/conf.d/*.conf;
}
Save and test the syntax:
sudo nginx -t
If the test passes, reload Nginx:
sudo systemctl reload nginx
Step 3 – Apply Limits to a Location
Open the site configuration you want to protect. For example /etc/nginx/conf.d/example.conf:
server {
listen 80;
server_name example.com;
location / {
# Default content or proxy
root /var/www/html;
index index.html;
}
# Protect the API endpoint
location /api/ {
limit_req zone=perip burst=10 nodelay;
proxy_pass http://127.0.0.1:8080;
}
}
The limit_req line tells Nginx to consult the perip zone for each request that matches /api/. If a client exceeds 5 r/s plus the burst of 10, Nginx returns HTTP 503 by default.
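If you would rather signal rate limiting explicitly with 429 Too Many Requests, the limit_req_status directive (available since Nginx 1.3.15) changes the rejection code. A minimal variant of the block above:

```nginx
location /api/ {
    limit_req zone=perip burst=10 nodelay;
    limit_req_status 429;   # reject with 429 instead of the default 503
    proxy_pass http://127.0.0.1:8080;
}
```

A dedicated 429 also makes it trivial to distinguish throttled clients from genuine upstream failures in your logs.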
Step 4 – Customize Burst and Delay
You may want to let occasional spikes pass without penalty. Removing nodelay causes excess requests to be queued and released at the configured rate instead of being rejected outright. Example:
location /api/ {
limit_req zone=perip burst=20;
proxy_pass http://127.0.0.1:8080;
}
Now a client can send 5 r/s continuously plus up to 20 queued requests that are processed gradually at the configured rate. Adjust burst based on typical traffic patterns.
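The worst-case queueing delay a client can see follows directly from those two numbers: a full burst drains at the zone's rate, so delay ≈ burst / rate. A quick sketch of that arithmetic:

```shell
# With rate=5r/s and burst=20 (no nodelay), a client that fires the whole
# burst at once waits for the queue to drain at 5 requests per second.
rate=5    # requests per second (rate=5r/s)
burst=20  # extra queued requests (burst=20)
echo "worst-case delay: $((burst / rate)) seconds"   # prints: worst-case delay: 4 seconds
```

If four seconds is longer than your clients will wait, either lower burst or add nodelay (or delay=) so early burst requests are served immediately.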
Step 5 – Test the Configuration
A quick way to verify limits is using curl in a loop:
for i in $(seq 1 30); do
curl -s -o /dev/null -w "%{http_code} " http://example.com/api/status
done; echo
You should see a series of 200 responses followed by 503 once the limit is hit. To see each request's verdict, log the $limit_req_status variable (available since Nginx 1.17.6):
log_format main '$remote_addr - $status [$limit_req_status] "$request"';
access_log /var/log/nginx/access.log main;
After reloading Nginx, the bracketed field shows PASSED for requests under the limit, DELAYED for queued ones, and REJECTED when the limit triggers; it is empty for requests no limit applied to.
Optional – Centralized Logging with Syslog
If you aggregate logs in a SIEM, add a syslog target:
error_log syslog:server=127.0.0.1:514,facility=local7,severity=info;
Rejections are written to the error log at the error level by default (tune this with the limit_req_log_level directive), so every rate‑limit event now reaches the collector as a log entry that can trigger alerts.
Common Pitfalls
- Using $remote_addr instead of $binary_remote_addr – the plain string form takes up to 15 bytes for an IPv4 address versus 4 bytes binary, reducing the number of entries the zone can hold.
- Setting the zone too small – a 10 MB zone is usually enough, but high‑traffic sites may need 20 MB or more to avoid eviction of counters.
- Forgetting limit_req_status – leaving rejections at the default 503 makes it hard to tell from logs whether a response came from rate limiting or an upstream error; setting it to 429 removes the ambiguity.
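Before enforcing new limits against production traffic, it can help to stage them with limit_req_dry_run (Nginx 1.17.1+), which evaluates the limit and logs the verdict without rejecting anything:

```nginx
location /api/ {
    limit_req zone=perip burst=10 nodelay;
    limit_req_dry_run on;   # log REJECTED_DRY_RUN instead of returning errors
    proxy_pass http://127.0.0.1:8080;
}
```

With the $limit_req_status log format from Step 5, dry-run mode records REJECTED_DRY_RUN and DELAYED_DRY_RUN, letting you size rate and burst against real traffic before flipping enforcement on.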
Conclusion
Nginx’s built‑in request throttling gives you a lightweight, high‑performance way to protect services on a per‑IP basis. By defining a shared memory zone, applying the limit to specific locations, and tuning burst settings you can stop abusive traffic without adding external dependencies. The configuration snippets above are ready to copy into any modern Nginx deployment. Remember to test with realistic request patterns, monitor logs for unexpected rejections, and adjust the zone size as your traffic grows.