nginx CentOS 8 quickstart

About nginx

NGINX is an alternative to apache2 / httpd.
It is generally faster, less complex, and very widely used in production environments.
Commonly used features are: reverse proxying, handling SSL certificates (TLS termination), load balancing and caching (static files, FastCGI, uWSGI).
By default it creates one worker process per available CPU core, and each worker can easily handle up to 10,000 connections.
With a 2-core CPU you therefore have 3 processes: 1 master and 2 workers. With httpd (Apache2) you can easily end up with ~220 processes / worker threads.
NGINX uses far less RAM and far fewer processes than apache2.
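Once nginx is running you can see the master/worker layout yourself (a quick check, works on any distro):

ps -C nginx -o pid,ppid,user,args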
Warning: NGINX doesn't have a feature like .htaccess (but is faster because of that), so you have to deny access to folders via the nginx config instead of an .htaccess file.
It is also considerably more difficult to install third-party modules into nginx than into apache2.

Read more: https://en.wikipedia.org/wiki/Nginx
Official site: https://www.nginx.com/company/ (Official partners are IBM, Microsoft, Red Hat, AWS and more)

Installation

Install nginx via sudo yum install nginx
You may have to start and enable nginx manually via sudo systemctl enable nginx --now
Default-Configuration is at /etc/nginx/nginx.conf
Default-Webpages are located in /usr/share/nginx/html
If you can't connect, check your firewall; it may be blocking HTTP / port 80 traffic.
To allow HTTP and HTTPS through firewalld, run the following:
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
You should now be able to reach your website from your browser.
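You can also test from the shell on the server itself (this assumes the default test page is still in place):

curl -I http://localhost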

Config & Commands

Configurations for domains or subdomains are stored in /etc/nginx/conf.d/ (on Debian in /etc/nginx/sites-available). On Debian these files are then linked into the /etc/nginx/sites-enabled directory via the ln -s command.
To create your first file for your domain (we use the example.com domain here) simply create the file and paste your config in there: sudo nano /etc/nginx/conf.d/example.com.conf (on Debian: sudo nano /etc/nginx/sites-available/example.com)
Warning: On CentOS the config files have to end with .conf, or else they won't load.
Content of the file:

server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  root /var/www/example.com;
  index index.html;
  location / {
    try_files $uri $uri/ =404;
  }
}
Please be careful with the root /var/www/example.com line of the config.
This folder has to exist! It is your web root, so everything in it will be publicly available under the server_name example.com.
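If the folder doesn't exist yet, create it together with a placeholder page (paths taken from the example config above; the restorecon call makes sure the SELinux file context is correct on CentOS):

sudo mkdir -p /var/www/example.com
echo "It works!" | sudo tee /var/www/example.com/index.html
sudo restorecon -Rv /var/www/example.com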
For Debian users: To "activate" the config you have to symlink the file to the sites-enabled folder. To do this simply use the following command: sudo ln -s /etc/nginx/sites-available/example.com /etc/nginx/sites-enabled/example.com
To test your config file you can use the nginx test-config command: sudo nginx -t
If there are no problems you can reload the config via sudo nginx -s reload

HTTPS

Serving your site over HTTPS is standard nowadays and easy to set up as well.
Install certbot to get SSL certificates for HTTPS: sudo yum install certbot python3-certbot-nginx (on CentOS 8 both packages come from the EPEL repository).
To create the certificates just run the following command (in our example for example.com and test.example.com):
sudo certbot --nginx
For this command to work you need to have a config file with your server_name already in the conf.d/ directory, because certbot reads all the server_name entries and generates HTTPS certificates for them automatically. You will be asked a few questions; just fill them out. If certbot asks whether you want an automatic redirect from HTTP to HTTPS, say Yes (the second option) so that traffic runs over HTTPS only.
This redirect setting happens automatically on the newest Certbot version.
Afterwards check that your firewall allows HTTPS traffic (see the firewalld commands above).
Then restart nginx with sudo nginx -s reload and you are good to go.
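Certbot also sets up automatic renewal (via a systemd timer or cron job, depending on the package). You can verify that renewal will work with:

sudo certbot renew --dry-run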

Reverse-Proxy

With nginx you can forward URLs on your server to internal services running on different ports / sockets.
This makes it easier to handle HTTPS (SSL) certificates for your services, because nginx can handle the HTTPS connections and internally redirect them to your applications in plain HTTP.
This way you don't have to install SSL modules into your applications, nginx handles all of that.

For the reverse proxy to work under CentOS you have to allow it in SELinux. Execute the following command:

sudo setsebool -P httpd_can_network_connect 1
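
You can check that the boolean is now set with:

getsebool httpd_can_network_connect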

Now you can, for example, make a NodeJS http-server that listens on port 3000 available through your website via the example.com/myapp/ url.
This exact scenario can easily be done with the nginx "location" block.
Simply add the following inside your server block:
location /myapp/ {
  proxy_pass http://localhost:3000/;
}
Now people can access your nodejs app via example.com/myapp/ instead of example.com:3000/. This way you also don't have to open port 3000.
The config file example from the Config & Commands Section of this article would then look like this:
server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  root /var/www/example.com;
  index index.html;
  location / {
    try_files $uri $uri/ =404;
  }
  location /myapp/ {
    proxy_pass http://localhost:3000/;
  }
}

Warning: Be careful where you add the location block. If you have already set up HTTPS via certbot, you have to add the location block to the server block that listens on port 443 and NOT the one on port 80.
The port 80 server block is then only there to redirect incoming requests to port 443.
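After certbot has rewritten the config, the file will look roughly like this (a sketch only; certbot inserts the exact certificate paths and a few extra "managed by Certbot" lines itself), with the /myapp/ location inside the 443 block:

server {
  listen 443 ssl;
  listen [::]:443 ssl;
  server_name example.com;
  root /var/www/example.com;
  index index.html;
  ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
  ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
  location / {
    try_files $uri $uri/ =404;
  }
  location /myapp/ {
    proxy_pass http://localhost:3000/;
  }
}

server {
  listen 80;
  listen [::]:80;
  server_name example.com;
  return 301 https://$host$request_uri;
}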

Socket.io
If you want to use socket.io behind an nginx reverse proxy you have to add the following location block:
location ~* \.io {
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
  proxy_set_header Host $http_host;
  proxy_set_header X-NginX-Proxy false;

  proxy_pass http://localhost:3001;
  proxy_redirect off;

  proxy_http_version 1.1;
  proxy_set_header Upgrade $http_upgrade;
  proxy_set_header Connection "upgrade";
}
Don't forget to change the port in the proxy_pass line.

PHP with nginx

PHP is not that easy to set up with nginx. It's not difficult, but definitely more work than with apache2.
First install the PHP backend php-fpm, which will most likely be version 7.2 for you: sudo yum install php-fpm (on Debian: sudo apt-get install php-fpm)
Now check whether php-fpm is actually running with the following command (on Debian the service name contains the version, e.g. php7.3-fpm):

sudo systemctl status php-fpm
If it says enabled and running then you are good to go. If it doesn't state that then enable and start it via sudo systemctl enable php-fpm --now.
Now you have to configure nginx to use php-fpm. Edit your nginx domain config (probably located at /etc/nginx/conf.d/example.com.conf) and add the following code to your server block:
location ~ \.php$ {
  include /etc/nginx/fastcgi_params;
  fastcgi_index index.php;
  fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
  fastcgi_pass unix:/run/php-fpm/www.sock;
}

Debian only config:
location ~ \.php$ {
  include snippets/fastcgi-php.conf;
  fastcgi_pass unix:/run/php/php7.3-fpm.sock;
}
Check your nginx config with sudo nginx -t.
If there are no problems then you can reload the config with sudo nginx -s reload.
Now you can create a test file and check whether everything works:
Simply create a file like /usr/share/nginx/html/test.php, put the PHP test code in it (see below) and save it.
If you now visit that file on your website (example: example.com/test.php) it should print the phpinfo page!
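The usual test code for this (it just prints the PHP configuration page) is:

<?php
phpinfo();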

Performance limits

The nginx server is fast, but it also has its limits.
One limit is the Linux "max open files" limitation: a process can only have so many file descriptors open at once, and every connection needs at least one. You can see the soft limits with the ulimit -a command and the hard limits by appending -H to it. On Raspbian this is 1024 (soft) and 1048576 (hard).
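If nginx runs into this limit you can raise it for the worker processes in /etc/nginx/nginx.conf (the numbers here are just an example; keep them within your system's hard limit). worker_rlimit_nofile goes into the main context, worker_connections into the existing events block:

worker_rlimit_nofile 65535;

events {
    worker_connections 4096;
}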

Another big limit is the maximum number of connections. There are only so many local ports the system can use for outgoing connections. To see the available range use the following command: cat /proc/sys/net/ipv4/ip_local_port_range. On Raspbian it is 32768-60999, which is roughly 28,000 ports, so roughly 28,000 outgoing connections to a single backend can be open at the same time.
This matters in particular for a reverse proxy: every client request uses two connections, one from the client to nginx and a second one from nginx to the proxied backend, and it is that second, outgoing connection that consumes one of these ports.
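If that becomes a bottleneck, the range can be widened (adjust the values to your needs; to make the setting permanent put it into /etc/sysctl.conf or a file under /etc/sysctl.d/):

sudo sysctl -w net.ipv4.ip_local_port_range="1024 65535"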

Last but not least: your PHP-FPM / other backend application can be a bottleneck too. The nginx worker processes may be fast, but if you have a PHP website and PHP-FPM is slow, the whole site is slowed down with it.