NGINX Comprehensive Guide: Installation, Configuration, and Security Hardening
NGINX (pronounced "engine-x") is a high-performance HTTP server, reverse proxy, and load balancer designed for high concurrency and low memory usage. It was initially created to solve the C10K problem (handling 10,000+ concurrent connections) and has since become one of the most popular web servers in the world.
Table of Contents
- Introduction to NGINX
- Installation
- Basic Configuration
- Virtual Hosts
- SSL/TLS Configuration
- Reverse Proxy Setup
- Load Balancing
- Performance Optimization
- Logging and Monitoring
- Security Hardening
- Troubleshooting
- Additional Resources
Introduction to NGINX
Key features:
- Event-driven, asynchronous architecture
- Reverse proxy capabilities
- Load balancing
- Caching
- SSL/TLS termination
- HTTP/2 and HTTP/3 support
- WebSockets support
- Fast static content delivery
Installation
Debian/Ubuntu
# Update package lists
sudo apt update
# Install NGINX
sudo apt install nginx
# Start NGINX
sudo systemctl start nginx
# Enable NGINX to start on boot
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
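Once the service reports active, you can confirm NGINX is actually serving requests. A quick check against the default configuration listening on port 80:
# Fetch the default welcome page headers from the local machine
curl -I http://localhost
# Expect an "HTTP/1.1 200 OK" response with a "Server: nginx" header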
CentOS/RHEL
# Install EPEL repository (if not already installed)
sudo yum install epel-release
# Install NGINX
sudo yum install nginx
# Start NGINX
sudo systemctl start nginx
# Enable NGINX to start on boot
sudo systemctl enable nginx
# Check status
sudo systemctl status nginx
Alpine Linux
# Update package lists
apk update
# Install NGINX
apk add nginx
# Start NGINX
rc-service nginx start
# Add to default runlevel
rc-update add nginx default
From Source
# Install dependencies
sudo apt install build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev
# Download and extract NGINX
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar -zxvf nginx-1.24.0.tar.gz
cd nginx-1.24.0
# Configure, compile and install
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_v2_module
make
sudo make install
# Create systemd service file
sudo nano /etc/systemd/system/nginx.service
Add the following to the systemd service file:
[Unit]
Description=The NGINX HTTP and reverse proxy server
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStartPre=/usr/local/nginx/sbin/nginx -t
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=true
[Install]
WantedBy=multi-user.target
Then enable and start the service:
sudo systemctl daemon-reload
sudo systemctl start nginx
sudo systemctl enable nginx
Docker
# Pull the NGINX image
docker pull nginx
# Run NGINX container
docker run --name my-nginx -p 80:80 -d nginx
# To use a custom configuration
docker run --name my-nginx -v /path/to/nginx.conf:/etc/nginx/nginx.conf:ro -p 80:80 -d nginx
Basic Configuration
NGINX configuration files are typically located in /etc/nginx/ (package installation) or /usr/local/nginx/conf/ (source installation).
Main configuration files:
- nginx.conf: Main configuration file
- sites-available/: Directory for available site configurations
- sites-enabled/: Directory for enabled site configurations (symlinks to sites-available)
- conf.d/: Directory for additional configuration files
Basic structure of nginx.conf:
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
Basic Server Block
A simple server block for serving static content:
server {
    listen 80;                                # Listen on port 80 for HTTP connections
    server_name example.com www.example.com;  # The domain names this server block responds to
    root /var/www/example.com;                # The root directory for website files
    index index.html index.htm;               # Files to try when a directory is requested (in this order)

    location / {
        try_files $uri $uri/ =404;  # First try the exact URI, then the URI as a directory,
                                    # and if neither exists, return a 404 error
    }

    # Additional configuration...
}
This is the most basic NGINX server configuration block. It tells NGINX to:
- Listen for HTTP requests on port 80
- Respond to requests for “example.com” and “www.example.com”
- Serve files from the /var/www/example.com directory
- Look for index.html or index.htm when a directory is requested
- For all requests, try to match them to a file or directory, and return a 404 error if nothing is found
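After creating or editing a server block, always validate the configuration and reload before relying on it. A typical workflow on a systemd-based system:
# Validate the configuration, then apply it without downtime
sudo nginx -t && sudo systemctl reload nginx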
Virtual Hosts
Virtual hosts (server blocks) allow you to serve multiple domains from a single NGINX instance.
Name-based Virtual Hosts
server {
    listen 80;                      # Both blocks listen on the same port
    server_name site1.example.com;  # First virtual host domain name
    root /var/www/site1;            # Root directory specific to this site
    # Other directives...
}

server {
    listen 80;                      # Same port as above
    server_name site2.example.com;  # Second virtual host domain name
    root /var/www/site2;            # Different root directory for this site
    # Other directives...
}
Name-based virtual hosting allows you to host multiple websites on a single IP address. NGINX determines which server block to use based on the Host header sent by the client (the domain name in the browser). This is the most common type of virtual hosting since it doesn’t require multiple IP addresses.
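You can verify name-based routing before DNS is even set up by sending an explicit Host header. A quick check, assuming the server's IP is 203.0.113.10 (a documentation address used here for illustration):
# Ask the same IP for two different sites; NGINX picks the server block by Host header
curl -H "Host: site1.example.com" http://203.0.113.10/
curl -H "Host: site2.example.com" http://203.0.113.10/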
IP-based Virtual Hosts
server {
    listen 192.168.1.10:80;         # First server listens on a specific IP address
    server_name site1.example.com;  # Domain name for the first site
    root /var/www/site1;            # Root directory for first site
    # Other directives...
}

server {
    listen 192.168.1.20:80;         # Second server listens on a different IP address
    server_name site2.example.com;  # Domain name for the second site
    root /var/www/site2;            # Root directory for second site
    # Other directives...
}
IP-based virtual hosting uses different IP addresses to distinguish between websites. Each website is bound to a different IP address, providing more isolation between sites and allowing for separate SSL certificates per IP. This approach is less common now due to IPv4 address shortages but can be useful in specific scenarios.
Default Server
server {
    listen 80 default_server;  # This is the fallback server for unmatched requests
    server_name _;             # Underscore is a catch-all for hostnames
    return 444;                # Special NGINX code that closes the connection without a response;
                               # this prevents unknown-host access
}
The default server block handles requests that don’t match any other server block’s server_name. The default_server parameter marks it as the catch-all, and the underscore (_) is a placeholder that matches any hostname. Returning status code 444 immediately closes the connection without sending any response, which is a security best practice to prevent probing of your server with random domain names.
SSL/TLS Configuration
Setting up SSL/TLS
server {
    listen 80;  # Standard HTTP port
    server_name example.com www.example.com;

    # Redirect HTTP to HTTPS - this is a permanent redirect (301)
    # that preserves the original hostname and URI path
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;  # HTTPS port with SSL enabled and HTTP/2 protocol
    server_name example.com www.example.com;

    # Certificate and private key paths
    ssl_certificate /etc/nginx/ssl/example.com.crt;      # Full certificate chain
    ssl_certificate_key /etc/nginx/ssl/example.com.key;  # Private key

    # Strong SSL settings
    ssl_protocols TLSv1.2 TLSv1.3;  # Only allow TLS 1.2 and 1.3 (no older, vulnerable protocols)
    ssl_prefer_server_ciphers on;   # Prefer server's cipher order over client's

    # Modern cipher suite that provides strong security
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256';

    # SSL session settings to improve performance
    ssl_session_timeout 1d;            # How long sessions are stored
    ssl_session_cache shared:SSL:10m;  # 10MB shared cache across worker processes
    ssl_session_tickets off;           # Disable TLS session tickets (security best practice)

    # OCSP Stapling - efficiently checks if certificates are revoked
    ssl_stapling on;                      # Enable OCSP stapling
    ssl_stapling_verify on;               # Verify the OCSP response
    resolver 8.8.8.8 8.8.4.4 valid=300s;  # DNS resolvers to use (Google's DNS)
    resolver_timeout 5s;                  # How long to wait for a resolver response

    # Additional security headers
    # HSTS tells browsers to only use HTTPS for this domain for the next ~2 years
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
    # Prevents browsers from MIME-type sniffing (security feature)
    add_header X-Content-Type-Options nosniff;
    # Prevents your site from being embedded in iframes on other sites (clickjacking protection)
    add_header X-Frame-Options DENY;
    # Enables the browser's XSS protection and prevents page loading if an attack is detected
    add_header X-XSS-Protection "1; mode=block";

    # Document root and index files
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
This configuration implements a complete SSL/TLS setup with current security best practices:
- The first server block handles HTTP requests and performs a permanent redirect to HTTPS
- The second server block configures HTTPS with:
- Modern TLS protocols (1.2 and 1.3 only)
- Strong cipher suites that prioritize security
- Performance optimizations through session caching
- OCSP stapling for efficient certificate revocation checking
- Security headers that protect against common attacks:
- HSTS forces HTTPS connections
- X-Content-Type-Options prevents MIME-type sniffing attacks
- X-Frame-Options prevents clickjacking
- X-XSS-Protection enables the browser's built-in XSS filtering (note: modern browsers have deprecated and ignore this header, but it is harmless to send)
This configuration follows Mozilla’s “Modern” SSL recommendations and provides both excellent security and good performance.
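To confirm the protocol and certificate settings are served as intended, you can inspect the handshake with OpenSSL:
# Show the negotiated protocol, cipher, and certificate chain
openssl s_client -connect example.com:443 -servername example.com </dev/null
# Restrict the client to TLS 1.2 to verify it is still accepted
openssl s_client -connect example.com:443 -servername example.com -tls1_2 </dev/null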
Using Let’s Encrypt with Certbot
# Install Certbot
sudo apt install certbot python3-certbot-nginx
# Obtain and install certificate
sudo certbot --nginx -d example.com -d www.example.com
# Auto-renewal (Certbot creates a cron job or systemd timer)
sudo systemctl status certbot.timer
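Renewal failures are easier to catch before they matter. Certbot supports a dry run that exercises the full renewal process without saving certificates:
# Simulate renewal against Let's Encrypt's staging environment
sudo certbot renew --dry-run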
Reverse Proxy Setup
NGINX can proxy requests to backend servers (e.g., Node.js, Python, or PHP applications).
Basic Reverse Proxy
server {
    listen 80;                # Listen on port 80 for HTTP
    server_name example.com;  # Domain name

    location / {
        # Forward all requests to a local application server
        proxy_pass http://localhost:3000;

        # Use HTTP/1.1 for proxy connections
        proxy_http_version 1.1;

        # Support WebSocket connections
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';

        # Pass the original host header to the backend server
        # This is important for applications that generate URLs based on the host header
        proxy_set_header Host $host;

        # Don't use cached responses when the client has set certain headers
        proxy_cache_bypass $http_upgrade;
    }
}
This basic reverse proxy configuration allows NGINX to receive requests from clients and forward them to a backend application server (like Node.js, Rails, Django, etc.). It acts as a middleman, which:
- Hides the backend server from direct access
- Can handle SSL termination (the backend doesn’t need to)
- Can serve static files separately from the application (see the sketch after this list)
- Supports WebSocket connections with the proper headers
- Preserves the original request information when passing to the backend
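Expanding on the static-files point above, a common pattern is to let NGINX serve assets directly from disk and only proxy dynamic requests. A minimal sketch, assuming the application's assets live in /var/www/example.com/static (an illustrative path):
location /static/ {
    # Serve asset files directly, bypassing the application server
    alias /var/www/example.com/static/;
    expires 30d;  # Allow browsers to cache static assets
}

location / {
    proxy_pass http://localhost:3000;  # Everything else goes to the application
}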
WebSocket Proxy
server {
    listen 80;                # Listen on port 80
    server_name example.com;  # Domain name

    location /websocket/ {
        # Forward WebSocket connections to a dedicated backend
        proxy_pass http://localhost:8080;

        # Required for WebSockets
        proxy_http_version 1.1;                  # Use HTTP/1.1
        proxy_set_header Upgrade $http_upgrade;  # Support protocol upgrade
        proxy_set_header Connection "upgrade";   # Upgrade connection when requested

        # Forward important client information to the backend
        proxy_set_header Host $host;              # Original host requested
        proxy_set_header X-Real-IP $remote_addr;  # Original client IP

        # Add the client's IP to the X-Forwarded-For header
        # Maintains the IP address chain if the request passed through multiple proxies
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # Tell the backend what protocol the client used (HTTP or HTTPS)
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
This configuration specifically handles WebSocket connections, which are persistent connections that allow real-time bidirectional communication between clients and servers. The key differences from a standard proxy:
- It’s limited to a specific path (/websocket/)
- It includes the critical Upgrade and Connection headers required for WebSockets
- It passes additional headers to provide the backend with information about the client
- The X-Forwarded-* headers allow the backend to understand it’s behind a proxy
Buffering and Timeouts
server {
    # ...
    location / {
        proxy_pass http://backend;  # Pass requests to an upstream server or group

        # Buffering configuration
        proxy_buffering on;           # Enable response buffering (improves performance)
        proxy_buffer_size 8k;         # Size of the buffer for response headers
        proxy_buffers 8 32k;          # 8 buffers of 32k each (256k total)
        proxy_busy_buffers_size 64k;  # Limit for busy buffers (actively sending to the client)

        # Timeouts
        proxy_connect_timeout 60s;  # Max time to establish a connection with the backend
        proxy_send_timeout 60s;     # Max time between two successive write operations
        proxy_read_timeout 60s;     # Max time between two successive read operations
    }
}
This configuration focuses on performance optimization and reliability for proxied connections:
- Buffering: NGINX reads the backend response quickly and then serves it to clients at their pace
- Improves performance by freeing the backend server faster
- Protects against slow clients that could otherwise tie up backend connections
- The buffer sizes control memory usage and efficiency
- Timeouts: Protect against hanging connections and server problems
- Connect timeout prevents hanging on unavailable backends
- Send timeout prevents NGINX from waiting too long while sending requests
- Read timeout prevents NGINX from waiting too long for backend responses
Properly configured timeouts are essential for high-traffic sites to prevent resource exhaustion from stalled connections.
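One caveat worth knowing: buffering should be disabled for endpoints that stream incremental responses (server-sent events, long-polling), since NGINX would otherwise hold partial output back. A hedged sketch, assuming a /events/ path for the streaming endpoint:
location /events/ {
    proxy_pass http://backend;
    proxy_buffering off;    # Deliver bytes to the client as soon as the backend sends them
    proxy_read_timeout 1h;  # Long-lived connections need a generous read timeout
}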
Load Balancing
NGINX can distribute traffic across multiple backend servers.
Basic Load Balancing
# Define an upstream server group - a named group of servers
upstream backend {
    server backend1.example.com;  # First backend server
    server backend2.example.com;  # Second backend server
    server backend3.example.com;  # Third backend server
}

server {
    listen 80;
    server_name example.com;

    location / {
        # Route requests to the named upstream group
        proxy_pass http://backend;  # "backend" refers to the upstream group defined above

        # Pass important headers to backend servers
        proxy_set_header Host $host;                                  # Original requested host
        proxy_set_header X-Real-IP $remote_addr;                      # Client's IP address
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;  # Chain of IPs
    }
}
This basic load balancing configuration distributes client requests across multiple backend servers. NGINX uses this setup to:
- Define a pool of available servers in the upstream block
- Route requests to these servers using the proxy_pass directive
- Pass important client information to the backend servers
By default, NGINX uses a round-robin algorithm, sending requests to each server in turn. If a server fails to respond, NGINX automatically routes requests to the next available server, providing basic high availability.
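The upstream block also accepts per-server parameters beyond the hostname. For example, a standby machine can be marked backup so it only receives traffic when the primary servers are unavailable:
upstream backend {
    server backend1.example.com;
    server backend2.example.com;
    server backup1.example.com backup;  # Used only when the servers above are down
}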
Load Balancing Methods
# Round Robin (default) - distributes requests sequentially across servers
upstream backend {
    server backend1.example.com;  # Each server gets requests in turn
    server backend2.example.com;  # First request → server1, second → server2, third → server1, etc.
}

# Least Connections - sends requests to the server with the fewest active connections
upstream backend {
    least_conn;                   # Enable the least connections method
    server backend1.example.com;  # These servers receive requests based on their current load
    server backend2.example.com;  # The server with fewer active connections gets the next request
}

# IP Hash - sends requests from the same client IP to the same server
upstream backend {
    ip_hash;                      # Enable the IP hash method
    server backend1.example.com;  # A client's requests consistently go to the same server
    server backend2.example.com;  # Based on a hash of the client's IP address
}

# Weighted - controls the distribution ratio between servers
upstream backend {
    server backend1.example.com weight=3;  # This server gets 3 requests...
    server backend2.example.com weight=1;  # ...for every 1 request this server gets
                                           # (75% vs 25% distribution)
}
NGINX offers several load balancing algorithms to optimize request distribution based on different needs:
- Round Robin: The default method that distributes requests evenly in sequence
- Simple and fair, but doesn’t account for server capacity or current load
- Least Connections: Sends new requests to the server with the fewest active connections
- Helps balance load when request processing times vary
- Better for situations where connections might stay open for different durations
- IP Hash: Consistently routes requests from the same client IP to the same backend server
- Provides session persistence without cookies or shared storage
- Essential for applications that don’t store session data centrally
- Weighted Distribution: Allows different servers to handle different proportions of traffic
- Useful when servers have different performance capabilities
- Can be combined with other methods (weighted least_conn, etc.)
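As the last point notes, weights can be combined with other methods. A sketch of weighted least-connections:
# Weighted Least Connections - load-aware routing with uneven server capacity
upstream backend {
    least_conn;
    server backend1.example.com weight=3;  # The larger machine takes a bigger share
    server backend2.example.com weight=1;
}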
Health Checks
upstream backend {
    # Remove a server from rotation after 3 failures, for 30 seconds
    server backend1.example.com max_fails=3 fail_timeout=30s;  # First server with health parameters
    server backend2.example.com max_fails=3 fail_timeout=30s;  # Second server with health parameters
}
This configuration implements passive health checks, which automatically detect and handle server failures:
- max_fails=3: If a server fails to respond 3 times, NGINX temporarily marks it as unavailable
- fail_timeout=30s: Defines both:
- How long NGINX considers a server unavailable after reaching max_fails
- The time period during which max_fails is counted
NGINX’s passive health checks work by monitoring actual client requests rather than sending dedicated probe requests. When a server fails to respond or returns errors, NGINX tracks these failures and temporarily removes unhealthy servers from the rotation.
For more advanced health checks (like active probing or checking specific endpoints), NGINX Plus or third-party modules are required.
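Related per-server parameters let you take machines out of rotation deliberately, which is handy during maintenance:
upstream backend {
    server backend1.example.com max_fails=3 fail_timeout=30s;
    server backend2.example.com down;  # Manually removed from rotation (e.g., for maintenance)
}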
Performance Optimization
Worker Processes and Connections
# Auto-detect the number of CPU cores - optimizes for the server's hardware
worker_processes auto;  # NGINX will use one worker per CPU core

events {
    # Increase max connections per worker
    worker_connections 4096;  # Each worker can handle up to 4096 connections simultaneously
    multi_accept on;          # Accept as many connections as possible at once
    use epoll;                # Use an efficient event processing method (Linux only)
}
This configuration optimizes NGINX for maximum performance by:
- Auto-scaling workers: worker_processes auto tells NGINX to create one worker process per CPU core, which is optimal for performance in most cases. Each worker process handles connections independently.
- Increasing connection limits: The default worker_connections is often 1024, but increasing it to 4096 allows each worker to handle more concurrent connections. The total max connections = worker_processes × worker_connections.
- Enabling multi_accept: When enabled, each worker will accept all new connections at once rather than one at a time.
- Using epoll: This is a high-performance event notification mechanism on Linux that’s more efficient than the default methods. On FreeBSD/macOS, you would use kqueue instead.
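Raising worker_connections only helps if the operating system lets each worker open that many file descriptors, and a proxied request holds both a client and an upstream socket. A hedged companion setting, sized here at roughly twice worker_connections:
# Per-worker limit on open file descriptors (set in the main context)
worker_rlimit_nofile 8192;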
File Handling Optimizations
http {
    # Enable sendfile for faster file transfers
    sendfile on;     # Uses kernel sendfile instead of read+write
    tcp_nopush on;   # Optimizes the amount of data sent at once
    tcp_nodelay on;  # Disables Nagle's algorithm to reduce latency

    # File descriptor cache
    open_file_cache max=1000 inactive=20s;  # Cache up to 1000 file descriptors; remove if unused for 20s
    open_file_cache_valid 30s;              # Check cache validity every 30s
    open_file_cache_min_uses 2;             # A file must be accessed at least twice to stay in the cache
    open_file_cache_errors on;              # Cache file lookup errors
}
These optimizations improve file delivery performance:
- sendfile: Bypasses user space and transfers data directly from the file system cache to the network stack, eliminating the need for NGINX to read the file into memory first.
- tcp_nopush/tcp_nodelay: When used with sendfile:
- tcp_nopush optimizes the packet size by buffering data before sending
- tcp_nodelay disables Nagle’s algorithm, reducing latency for small frequent packets
- open_file_cache: NGINX maintains a cache of open file descriptors, metadata, and errors:
- Reduces the overhead of opening files repeatedly
- Periodic validation ensures cache freshness
- Configurable retention based on usage patterns
- Error caching prevents repeated failed lookups
Compression
http {
    # Enable gzip compression
    gzip on;              # Enable compression
    gzip_comp_level 5;    # Compression level (1-9, higher = more compression but more CPU)
    gzip_min_length 256;  # Don't compress very small files (inefficient)
    gzip_proxied any;     # Compress responses to proxied requests
    gzip_vary on;         # Add the Vary: Accept-Encoding header

    # Specify MIME types to compress
    gzip_types
        application/atom+xml
        application/javascript  # JavaScript files
        application/json        # JSON data
        application/ld+json
        application/manifest+json
        application/rss+xml     # RSS feeds
        application/vnd.geo+json
        application/vnd.ms-fontobject
        application/x-font-ttf
        application/x-web-app-manifest+json
        application/xhtml+xml
        application/xml         # XML data
        font/opentype           # Web fonts
        image/bmp
        image/svg+xml           # SVG images
        image/x-icon            # Icons
        text/cache-manifest
        text/css                # CSS stylesheets
        text/plain              # Plain text files
        text/vcard
        text/vnd.rim.location.xloc
        text/vtt
        text/x-component
        text/x-cross-domain-policy;
}
This gzip compression configuration significantly reduces bandwidth usage and improves load times:
- Balanced compression level: Level 5 offers a good compromise between CPU usage and compression ratio (level 1 is fastest, 9 is highest compression)
- Smart compression thresholds: Only compresses files larger than 256 bytes, as very small files don’t benefit much from compression
- Comprehensive MIME type list: Compresses all compressible content types including:
- Text-based formats (HTML, CSS, JavaScript)
- Data formats (JSON, XML)
- Web fonts and vector graphics
- Application-specific formats
- Proper headers: The gzip_vary on directive adds the Vary: Accept-Encoding header, which is essential for caches and CDNs to store compressed and uncompressed versions of a response separately
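You can verify compression is active by requesting a compressible resource (a CSS file is used here for illustration) with an Accept-Encoding header:
# The response should include "Content-Encoding: gzip" and "Vary: Accept-Encoding"
curl -H "Accept-Encoding: gzip" -I http://example.com/style.css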
Logging and Monitoring
Custom Log Formats
http {
    # Default log format
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    # JSON log format for easier parsing with log analysis tools
    log_format json_combined escape=json '{'
        '"time_local":"$time_local",'            # Timestamp of the request
        '"remote_addr":"$remote_addr",'          # Client's IP address
        '"remote_user":"$remote_user",'          # HTTP Basic Auth username (if any)
        '"request":"$request",'                  # Full HTTP request line (method, path, protocol)
        '"status": "$status",'                   # HTTP response status code
        '"body_bytes_sent":"$body_bytes_sent",'  # Response size in bytes
        '"request_time":"$request_time",'        # Time taken to process the request
        '"http_referrer":"$http_referer",'       # Where the client came from
        '"http_user_agent":"$http_user_agent"'   # Client's browser/agent
    '}';

    # Apply the log format to access logs
    access_log /var/log/nginx/access.log json_combined;
    error_log /var/log/nginx/error.log warn;  # Log levels: debug, info, notice, warn, error, crit
}
This configuration defines custom log formats that provide detailed information about client requests:
- Main format: The traditional combined log format used by most web servers
- Includes client IP, timestamp, HTTP request, status code, response size, referrer, and user agent
- Format is human-readable but requires parsing for automated analysis
- JSON format: Structured logging that’s easier to process with tools like ELK Stack
- The escape=json parameter ensures special characters are properly escaped
- Each field is explicitly named for clarity
- Includes request_time for performance monitoring
- Log levels: The error_log directive specifies what severity of errors to log
- warn captures warnings and more severe issues
- For debugging, you can temporarily change to the debug level
Structured logs are invaluable for monitoring, troubleshooting, and security auditing. The JSON format is particularly useful for log aggregation and analysis tools.
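Noise such as load-balancer health checks can drown out useful entries. The access_log directive accepts a conditional if= parameter; a minimal sketch, assuming health checks arrive at /healthz (an illustrative path):
# In the http context: classify requests, then log only the interesting ones
map $request_uri $loggable {
    /healthz  0;  # Don't log health-check requests
    default   1;
}

access_log /var/log/nginx/access.log json_combined if=$loggable;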
Rate Limiting
http {
    # Define a limit zone:
    #   key: $binary_remote_addr (client IP)
    #   zone name and size: mylimit, 10 MB
    #   maximum rate: 10 requests per second
    limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;
}

server {
    # ...
    location / {
        # Apply rate limiting: zone name, burst allowance,
        # and nodelay to process bursts immediately
        limit_req zone=mylimit burst=20 nodelay;
        # ...
    }
}
This configuration implements rate limiting to protect the server from being overwhelmed:
- Zone definition:
- $binary_remote_addr: Uses the client IP as the limiting key (each client gets its own limit)
- zone=mylimit:10m: Creates a shared memory zone named “mylimit” with 10 MB of size
- rate=10r/s: Limits each client to 10 requests per second
- Application in location:
- burst=20: Allows a burst of up to 20 additional requests beyond the rate
- nodelay: Processes requests in the burst immediately rather than queueing them
Rate limiting is essential for preventing:
- Brute force attacks
- Denial of service (DoS) attacks
- Excessive API usage
- Resource exhaustion from aggressive crawlers
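By default, requests rejected by limit_req receive a 503. Returning 429 (Too Many Requests) communicates intent more clearly to clients and monitoring:
# Status code returned when a request is rate-limited
limit_req_status 429;
# Optionally reduce the log level for rejected requests
limit_req_log_level warn;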
Request Tracing
server {
    # ...
    # Add a request ID to responses
    add_header X-Request-ID $request_id;  # $request_id is a built-in random unique identifier

    location / {
        # Pass the same ID to the backend for end-to-end tracing
        proxy_set_header X-Request-ID $request_id;  # Forward the ID to backend services
        proxy_pass http://backend;
    }
}
This configuration adds request tracing capabilities:
- Unique request identifier: NGINX generates a random ID for each request using the built-in $request_id variable
- End-to-end tracing: The same ID is:
- Added as a response header back to the client
- Forwarded to the backend server in a request header
This simple addition is powerful for debugging and monitoring:
- Allows correlating logs across multiple services
- Helps track a single request through complex architectures
- Enables users to reference specific requests when reporting issues
In distributed systems and microservices architectures, request tracing is crucial for understanding request flow and diagnosing problems.
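Tracing is most useful when the ID also appears in your logs. You could extend the json_combined format from the Logging section with one extra field (abbreviated here; add the line to the existing definition rather than declaring the format twice):
log_format json_combined escape=json '{'
    '"request_id":"$request_id",'  # Correlate log lines with the X-Request-ID header
    '"time_local":"$time_local",'
    '"request":"$request",'
    '"status":"$status"'
'}';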
Security Hardening
Hide NGINX Version
http {
    # Hide the NGINX version in headers and error pages
    server_tokens off;
}
Secure Cookie Settings
server {
    # ...
    # Secure cookie settings for proxied responses
    proxy_cookie_path / "/; HttpOnly; Secure; SameSite=Strict";
}
Content Security Policy
server {
    # ...
    # Content Security Policy
    add_header Content-Security-Policy "default-src 'self'; script-src 'self' https://trusted-cdn.com; img-src 'self' data: https:; style-src 'self' 'unsafe-inline' https://trusted-cdn.com; font-src 'self'; connect-src 'self'; media-src 'self'; object-src 'none'; child-src 'self'; frame-ancestors 'none'; form-action 'self'; upgrade-insecure-requests;" always;
}
Restricting Access
# Allow/deny by IP
location /admin/ {
    allow 192.168.1.0/24;  # Allow this subnet
    deny all;              # Deny everyone else
}

# Basic authentication
location /protected/ {
    auth_basic "Restricted Area";               # Realm shown in the login prompt
    auth_basic_user_file /etc/nginx/.htpasswd;  # File containing username:password pairs
}
To create a .htpasswd file:
sudo apt install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd username
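Access controls can also be combined. With satisfy any, a request is allowed if it comes from a trusted network or presents valid credentials; the default, satisfy all, requires both:
location /admin/ {
    satisfy any;                   # Pass if EITHER check below succeeds
    allow 192.168.1.0/24;          # Trusted internal network
    deny all;
    auth_basic "Restricted Area";  # Everyone else must authenticate
    auth_basic_user_file /etc/nginx/.htpasswd;
}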
Request Filtering
# Block certain user agents (place inside a server block)
if ($http_user_agent ~* "(scraper|bot|crawler)") {
    return 403;
}

# Block certain request methods
if ($request_method !~ "^(GET|HEAD|POST)$") {
    return 444;  # Close the connection without a response
}

# Size limits (http, server, or location context)
client_max_body_size 10m;      # Maximum allowed request body (e.g., uploads)
client_body_buffer_size 128k;  # Buffer size for reading request bodies
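Because if has well-known pitfalls in NGINX outside of return/rewrite, a more robust pattern for user-agent blocking is a map in the http context feeding a simple return:
# In the http context: classify user agents once
map $http_user_agent $blocked_agent {
    default                  0;
    ~*(scraper|bot|crawler)  1;
}

# In the server context: act on the classification
if ($blocked_agent) {
    return 403;
}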
ModSecurity WAF
ModSecurity is a Web Application Firewall (WAF) that can be integrated with NGINX.
# Install dependencies
sudo apt install git build-essential libpcre3 libpcre3-dev libssl-dev libtool autoconf apache2-dev libxml2-dev libcurl4-openssl-dev libgeoip-dev liblmdb-dev
# Clone ModSecurity
git clone --depth 1 -b v3/master --single-branch https://github.com/SpiderLabs/ModSecurity
cd ModSecurity
git submodule init
git submodule update
# Build ModSecurity
./build.sh
./configure
make
sudo make install
# Clone NGINX connector
cd ..
git clone --depth 1 https://github.com/SpiderLabs/ModSecurity-nginx.git
# Download NGINX source (match your current version)
NGINX_VERSION=$(nginx -v 2>&1 | sed 's/nginx version: nginx\///; s/\s.*$//')
wget https://nginx.org/download/nginx-${NGINX_VERSION}.tar.gz
tar zxvf nginx-${NGINX_VERSION}.tar.gz
cd nginx-${NGINX_VERSION}
# Configure and build NGINX with ModSecurity
./configure --with-compat --add-dynamic-module=../ModSecurity-nginx
make modules
sudo cp objs/ngx_http_modsecurity_module.so /etc/nginx/modules/
# Enable module in NGINX
echo 'load_module modules/ngx_http_modsecurity_module.so;' | sudo tee -a /etc/nginx/modules-enabled/50-mod-security.conf
Configure ModSecurity in NGINX:
# In http context
modsecurity on;
modsecurity_rules_file /etc/nginx/modsecurity/main.conf;
Create main ModSecurity configuration:
sudo mkdir -p /etc/nginx/modsecurity
# Copy from the directory where you cloned ModSecurity earlier
sudo cp ModSecurity/modsecurity.conf-recommended /etc/nginx/modsecurity/modsecurity.conf
sudo cp ModSecurity/unicode.mapping /etc/nginx/modsecurity/
Enable OWASP Core Rule Set:
cd /etc/nginx/modsecurity
sudo git clone https://github.com/coreruleset/coreruleset.git
sudo cp coreruleset/crs-setup.conf.example coreruleset/crs-setup.conf
# Create main.conf
sudo nano /etc/nginx/modsecurity/main.conf
Content of main.conf:
Include /etc/nginx/modsecurity/modsecurity.conf
Include /etc/nginx/modsecurity/coreruleset/crs-setup.conf
Include /etc/nginx/modsecurity/coreruleset/rules/*.conf
Restart NGINX:
sudo systemctl restart nginx
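Note that the recommended configuration ships with SecRuleEngine DetectionOnly, so violations are logged but not blocked. After switching it to On in modsecurity.conf, you can verify blocking with an obviously malicious request:
# A classic XSS probe; expect a 403 once blocking is enabled
curl -i "http://localhost/?q=<script>alert(1)</script>"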
File Permissions
# Set correct permissions for NGINX files
sudo chmod 640 /etc/nginx/nginx.conf
sudo chmod 644 /etc/nginx/mime.types
sudo chmod -R 755 /var/www
sudo chown -R root:root /etc/nginx
sudo chown -R www-data:www-data /var/www  # www-data on Debian/Ubuntu; use "nginx" on RHEL-based systems
Troubleshooting
Common Commands
# Test configuration syntax
sudo nginx -t
# Output: "nginx: configuration file /etc/nginx/nginx.conf test is successful"
# Reload configuration without downtime
sudo nginx -s reload
# Gracefully reloads config without dropping connections
# Check running processes
ps aux | grep nginx
# Shows all running NGINX processes (master and workers)
# Check open ports
sudo netstat -tulpn | grep nginx
# Shows which ports NGINX is listening on
# (on modern systems without netstat, use: sudo ss -tulpn | grep nginx)
# Check logs
sudo tail -f /var/log/nginx/error.log
sudo tail -f /var/log/nginx/access.log
# Real-time log monitoring with -f (follow)
These essential commands are used for day-to-day NGINX administration:
- Configuration testing: Always test your configuration before applying it
- Catches syntax errors and some logical issues
- Prevents configuration errors from causing downtime
- Zero-downtime reloads: The -s reload signal:
- Makes NGINX re-read the configuration
- Gracefully replaces worker processes
- Maintains existing connections
- Process inspection: Checking NGINX processes helps verify it’s running as expected
- Should show one master process and multiple worker processes
- Number of workers typically matches CPU cores
- Port verification: Ensures NGINX is listening on the expected ports
- Helps debug connection issues
- Identifies potential port conflicts
- Log monitoring: Real-time log access is crucial for troubleshooting
- The -f option continuously displays new log entries
- Error logs show issues, access logs show request patterns
Common Issues
- 502 Bad Gateway
- Check if backend service is running
- Verify backend service address and port
- Check proxy_pass directive
- Increase proxy timeouts (see the quick check after this list)
- 403 Forbidden
- Check file permissions
- Verify SELinux settings (if applicable)
- Check allow/deny directives
- 404 Not Found
- Verify root directory
- Check location blocks
- Verify try_files directive
- Connection Refused
- Check if NGINX is running
- Verify that NGINX is listening on the correct port
- Check firewall settings
- SSL Certificate Issues
- Verify certificate paths
- Check certificate expiration
- Ensure certificate chain is complete
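For the 502 case, the fastest diagnosis is usually to bypass NGINX and query the backend directly, assuming it listens on localhost:3000 as in the proxy examples:
# If this fails, the problem is the backend, not NGINX
curl -v http://localhost:3000/
# Then check NGINX's view of the failure
sudo tail -n 50 /var/log/nginx/error.log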
Debug Logging
# Enable debug logging
error_log /var/log/nginx/error.log debug;
# Sets the most verbose logging level
# Options from least to most verbose: crit, error, warn, notice, info, debug
Increasing the log level to debug provides detailed information about NGINX’s processing logic:
- Shows request handling decisions
- Exposes configuration interpretation
- Includes connection handling details
- Helps diagnose complex issues
Debug logging should only be enabled temporarily for troubleshooting due to:
- Significant performance impact
- Rapid log file growth
- Potential exposure of sensitive information
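If your NGINX binary was built with --with-debug, you can limit the performance impact by enabling debug output for a single client address only:
events {
    # Verbose debug logging only for connections from this IP
    debug_connection 192.168.1.100;
}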
Header Debugging
location /headers {
    default_type text/plain;  # Serve the response as plain text
    # Return a formatted string containing key request information
    return 200 'Request Headers:\n$http_host\n$http_user_agent\n$http_referer\n$http_cookie\n$http_x_forwarded_for\n\nServer Variables:\n$request_method\n$remote_addr\n$server_protocol';
}
This diagnostic endpoint creates a special URL that displays request information:
- Purpose: Creates a testing endpoint that shows how NGINX sees incoming requests
- Implementation: Returns a text response with:
- Key HTTP headers sent by the client
- Important NGINX variables for the request
- Usage: Visit /headers in a browser or with curl to see:
- What headers are being received
- How proxies might be modifying requests
- Values of server variables
This is invaluable for debugging client-server interactions, proxy configurations, or header-related issues without needing to check log files.
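A typical way to exercise the endpoint, including a header you control, looks like this:
# Send a custom X-Forwarded-For and see how NGINX reports it
curl -s -H "X-Forwarded-For: 198.51.100.7" http://example.com/headers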