NGINX is an alternative to apache2 / httpd.
It is generally faster and less complex than apache2, and is now more commonly used in production environments.
Commonly used features are: Reverse-proxy, Handling SSL Certificates, load balancing and caching (files, FastCGI, uWSGI).
By default it creates one worker process per CPU core, and each worker can easily handle on the order of 10,000 connections.
With a 2-core CPU you have 3 processes: 1 master and 2 workers. With httpd (Apache2) you can easily end up with ~220 processes / worker threads.
NGINX uses far less RAM and far fewer processes than apache2.
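The worker count and the per-worker connection limit are set in nginx.conf; a minimal sketch (the values shown are the common defaults):

```nginx
# /etc/nginx/nginx.conf (top-level context)
worker_processes auto;          # one worker process per CPU core

events {
    worker_connections 1024;    # simultaneous connections per worker
}
```

With worker_processes auto nginx picks the core count itself, so you don't have to adjust the config when moving between machines.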
Warning: NGINX has no .htaccess-like feature (and is faster partly because of that), so you have to deny access to folders via the nginx config instead of an .htaccess file.
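For example, to deny access to a folder directly in the nginx config (the /private/ path is just an example):

```nginx
# Deny access to a private directory
location /private/ {
    deny all;
}

# Also commonly used: block leftover Apache .ht* files
location ~ /\.ht {
    deny all;
}
```

These location blocks go inside the server block of the site in question.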
It is also considerably more difficult to install third-party modules into nginx than into apache2.
Read more: https://en.wikipedia.org/wiki/Nginx
Official site: https://www.nginx.com/company/ (official partners include IBM, Microsoft, Red Hat, AWS and more)
Install nginx via sudo yum install nginx
You may have to start and enable nginx manually via sudo systemctl enable nginx --now
The default configuration is at /etc/nginx/nginx.conf
The default web pages are located in /usr/share/nginx/html
If you can't connect, check your firewall; it may be blocking HTTP (port 80) traffic.
To allow HTTP and HTTPS through firewalld, run the following:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
You should now be able to reach your website from your browser.
Configurations for domains or subdomains are stored in /etc/nginx/conf.d/ (on Debian in /etc/nginx/sites-available).
On Debian, these files are then linked into the /etc/nginx/sites-enabled directory via the ln -s command.
To create the first file for your domain (we use the example.com domain here), simply create the file and paste your config into it: sudo nano /etc/nginx/conf.d/example.com.conf (on Debian: sudo nano /etc/nginx/sites-available/example.com)
Warning: On CentOS the config files have to end with .conf, or else they won't load.
Content of the file:
server {
    listen 80;
    listen [::]:80;
    server_name example.com;
    root /var/www/example.com;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}

Be careful with the root /var/www/example.com line of the config: this directory has to exist and contain your website files.
Having an HTTPS webserver is standard nowadays and easy to set up as well.
Install certbot to get SSL/TLS certificates for HTTPS: sudo yum install certbot python-certbot-nginx (on newer systems the package may be called python3-certbot-nginx).
To Create these certificates just run the command (in our example for example.com and test.example.com):
sudo certbot --nginx
For this command to work you need a config file with your server_name already in the conf.d/ directory, because certbot reads all the server_name entries and obtains HTTPS certificates for them automatically. You will be asked a few questions; just fill them out correctly. If certbot asks whether you want an automatic redirect from HTTP to HTTPS, say Yes (the second option) so that traffic only runs over HTTPS.
On the newest Certbot versions this redirect is configured automatically.
Afterwards check your firewall to make sure HTTPS traffic is allowed.
Then reload nginx with sudo nginx -s reload and you are good to go.
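After a successful run, certbot typically rewrites the server block along these lines (a sketch only; the exact certificate paths depend on your domain and certbot version):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name example.com;
    root /var/www/example.com;
    index index.html;

    # Paths created by certbot under /etc/letsencrypt/
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
}
```

The original port-80 server block is kept (or rewritten) to redirect to this one when you chose the redirect option.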
With nginx you can redirect URLs on your server to internal services on different ports / sockets.
This makes it easier to handle HTTPS (SSL) certificates for your services, because nginx can handle the HTTPS connections and internally redirect them to your applications in plain HTTP.
This way you don't have to install SSL modules into your applications because nginx will handle that.
For the reverse proxy to work under CentOS you have to allow it in SELinux. Execute the following command:
sudo setsebool -P httpd_can_network_connect 1
location /myapp/ {
    proxy_pass http://localhost:3000/;
}

Now people can access your Node.js app via example.com/myapp/ instead of example.com:3000/.
location /myapp/ {
    allow 1.1.1.1;
    deny all;
    proxy_pass http://localhost:3000/;
}

This allows the IP 1.1.1.1 to access this location; all others get an HTTP 403 error instead.
location /myapp/ {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://localhost:3000/;
}

Now your backend application sees the real IP of the requester and not the localhost IP of the nginx server.
location ~* \.io {
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_set_header X-NginX-Proxy false;

    proxy_pass http://localhost:3001;
    proxy_redirect off;

    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

Don't forget to change the port in the proxy_pass line.
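A common refinement for WebSocket proxying is to only send Connection: upgrade when the client actually asked for an upgrade, via a map in the http block (this is the pattern from the nginx documentation, not something from the config above):

```nginx
# Goes into the http {} context, e.g. in /etc/nginx/nginx.conf
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# Then, inside the location block, use:
#   proxy_set_header Connection $connection_upgrade;
```

This way plain HTTP requests through the same location get a normal Connection header instead of a forced upgrade.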
NGINX allows you to cache responses from your proxied services.
This means you don't have to implement caching into your backend webservers.
The nginx doc website for caching: https://docs.nginx.com/nginx/admin-guide/content-cache/content-caching/
The nginx site for the proxy cache commands used: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
First put the proxy_cache_path directive into your http config block, which should be in the default nginx.conf at /etc/nginx/nginx.conf
http {
    # ...
    proxy_cache_path /opt/nginx/cache keys_zone=mycache:10m;
    # ...
}

The above config creates the cache named "mycache", stored in the folder /opt/nginx/cache. Note that 10m is not the maximum cache size: it is the size of the shared memory zone holding the cache keys and metadata (one megabyte holds roughly 8,000 keys). Use the max_size parameter to limit the actual cache size on disk.
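If you also want to cap the disk space the cache may use, add max_size (and optionally inactive, which evicts entries that haven't been accessed for a while); the values here are illustrative:

```nginx
http {
    # 10m of key metadata, at most 1g of cached data on disk;
    # entries not accessed for 60 minutes are evicted.
    proxy_cache_path /opt/nginx/cache keys_zone=mycache:10m max_size=1g inactive=60m;
}
```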
# MyService
proxy_cache mycache;

location /myservice/ {
    # ...
    proxy_cache_key "$scheme$proxy_host$request_uri";
    proxy_cache_valid 200 302 5m;
    proxy_cache_valid 404 1m;
    proxy_cache_valid any 10s;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_pass http://localhost:3000/;
}

The above example caches requests to /myservice/, keyed by scheme, host and request URI. 200 and 302 responses are cached for 5 minutes, 404 responses for 1 minute, and everything else for 10 seconds. The client (browser) can see whether a response came from the cache by inspecting the X-Cache-Status header in the response.
By default nginx only caches GET and HEAD requests; this can be changed with proxy_cache_methods:

proxy_cache_methods GET HEAD POST;

To change the cache time from inside your backend, set the X-Accel-Expires response header to the number of seconds this specific response should be cached.
func main() {
    // ...
    router.GET("/getnothing", func(c *gin.Context) {
        // Tell nginx to cache this response for 600 seconds.
        c.Writer.Header().Set("X-Accel-Expires", "600")
        c.String(200, "Nothing")
    })
    // ...
}

This example sets the cache time for this request/response to 600 seconds, i.e. 10 minutes, no matter what is configured in the nginx server or location config.
PHP is not that easy to install with nginx. It's not difficult, but definitely more work than with apache2.
First install the PHP backend php-fpm. Example command: sudo apt-get install php-fpm (the installed version depends on your distribution).
Now check if php-fpm is actually running with the following command (replace version if yours is different):
sudo systemctl status php-fpm

If it says enabled and running, you are good to go. If it doesn't, enable and start it via sudo systemctl enable php-fpm --now.
On CentOS:

location ~ \.php$ {
    include /etc/nginx/fastcgi_params;
    fastcgi_pass unix:/run/php-fpm/www.sock;
}

On Debian-based systems:

location ~ \.php$ {
    include snippets/fastcgi-php.conf;
    fastcgi_pass unix:/run/php/php7.4-fpm.sock;
}

It is highly recommended to simply use the PHP block that should already be present (usually commented out) in your nginx default config.
Also make sure index.php is listed in your index directive so it is served as a directory index:

server {
    index index.html index.php;
    ...
}
A simple password-protected folder can be achieved via .htpasswd files.
location /admin/ {
    auth_basic "Admin Login";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

If you want to keep the .htpasswd file in your web folder, deny access to it:
location = /.htpasswd {
    deny all;
    return 404;
}

To create this file you can use the apache2-utils (Debian) or httpd-tools (CentOS) package.
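If you don't want to install those packages, an htpasswd line can also be generated with openssl's apr1 hash, which nginx's auth_basic understands. The username "admin" and the password here are just example values; append the output line to /etc/nginx/.htpasswd yourself:

```shell
# Print an htpasswd entry for user "admin" (example credentials).
echo "admin:$(openssl passwd -apr1 'secretpassword')"
```

With the packages installed, the equivalent is sudo htpasswd -c /etc/nginx/.htpasswd admin (drop -c when adding further users, as it overwrites the file).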
The nginx server is fast, but it also has its limits.
One limit is Linux's "max open files" limit: a process can only have so many files (and sockets) open at once. You can see the soft limits with the ulimit -a command; append -H to see the hard limits. On Raspbian these are 1024 (soft) and 1048576 (hard).
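To check just the open-file limits for the current shell:

```shell
# Soft limit on open file descriptors
ulimit -n
# Hard limit (the ceiling the soft limit may be raised to)
ulimit -Hn
```

Raising these persistently is done via /etc/security/limits.conf or the systemd unit, depending on your distribution.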
Another big limit is the maximum number of connections a system can handle: there are only so many local ports it can use.
To see the port range available for outgoing connections, run: cat /proc/sys/net/ipv4/ip_local_port_range. On Raspbian it is 32768-60999, i.e. roughly 28,000 usable ports, which means roughly 28,000 simultaneous connections per destination address and port.
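The size of that range can be computed directly (using the Raspbian numbers from above):

```shell
# Read your own range:
cat /proc/sys/net/ipv4/ip_local_port_range
# Ports in the example range 32768-60999:
echo $((60999 - 32768 + 1))   # prints 28232
```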
Your PHP-FPM / other backend application can be limited too. Your nginx worker threads may work fast, but if you have a PHP website and PHP-FPM is slow, then everything gets slowed down too.