Nginx: Proxy, Basic Auth and URI-to-Port Mapping


I have a Digital Ocean droplet, and wanted to

  • Install Nginx;
  • Put Nginx in front of two distinct web services I have running on this server;
  • Enable HTTP Basic Auth on some URIs;
  • Map the HTTP ports that one of these services dynamically creates to subdomain names.


The server runs Ubuntu 20.04 on a Digital Ocean droplet.

DNS Config

I created the following A records pointing to the droplet's static IP:

*.service1.example.com

Nginx install


sudo apt update
sudo apt install nginx

More details here

Nginx configuration

The basic reverse proxy configuration was made with the help of this online nginx config generator by Digital Ocean.

Some nice command snippets to highlight:

# Create a backup of your current NGINX configuration:
tar -czvf nginx_$(date +'%F_%H-%M-%S').tar.gz nginx.conf sites-available/ sites-enabled/

# Extract the new compressed configuration archive using tar
# (replace <config-archive>.tar.gz with the archive downloaded from the generator):
tar -xzvf <config-archive>.tar.gz | xargs chmod 0644

# Test the configuration before reloading:
sudo nginx -t

# Reload Nginx configuration
sudo systemctl reload nginx

Basic Auth

First, we need to install htpasswd, which is included in the apache2-utils package:

sudo apt update
sudo apt install apache2-utils

To enable HTTP Basic Auth, just add the following lines inside an nginx location block:

location / {

    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
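Once this is in place, authenticated clients resend the credentials on every request, base64-encoded in the Authorization header. A quick shell sketch of what the browser sends (the user-name and secret values are illustrative):

```shell
# Basic auth is just "user:password" base64-encoded in a request header.
# "user-name" and "secret" are illustrative values, not real credentials.
credentials=$(printf '%s' 'user-name:secret' | base64)
printf 'Authorization: Basic %s\n' "$credentials"
```

Note that base64 is encoding, not encryption, which is why Basic Auth should only be used over HTTPS.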

Then, add user credentials to the specified file, /etc/nginx/.htpasswd:

cd /etc/nginx
htpasswd -c .htpasswd user-name
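If you'd rather not install apache2-utils at all, openssl can generate a compatible entry with its Apache-style MD5 scheme. A sketch, where the user alice and the password are placeholders:

```shell
# Append an htpasswd-compatible line without apache2-utils.
# openssl's -apr1 option produces the Apache MD5 format that nginx accepts.
# "alice" and "s3cret" are placeholder values.
entry="alice:$(openssl passwd -apr1 's3cret')"
echo "$entry"
# entries look like alice:$apr1$<salt>$<hash>
```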

Robots.txt and X-Robots-Tag

Now that the website has a public DNS name pointing to it, Google and other search engine crawlers will try to index it. However, this web page should be kept private and should not appear in Google search results. For this purpose, robots.txt and noindex exist. Robots.txt is a text file at the requested location, and noindex may be an HTML tag or an HTTP header.

While a crawler is gathering page information, it will try to open robots.txt and, based on its contents, will respectfully follow its directives, for instance to ignore pages at that location.

However, we do not even need to create the robots.txt file, because Nginx can serve it directly, with the following rule:

location = /robots.txt {
    log_not_found off;
    access_log    off;
    add_header Content-Type text/plain;
    return 200 "User-agent: *\nDisallow: /\n";
}

We must comment out the following lines in /etc/, to prevent a conflict:

# robots.txt
# location = /robots.txt {
#     log_not_found off;
#     access_log    off;
# }

Regarding noindex, the following rule adds the X-Robots-Tag header to all resources (pages and files):

    # reverse proxy
    location / {
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
    }
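Putting the pieces together, a private service's server block might look like the following sketch. The server_name and the upstream port 3000 are assumptions; adapt them to your own service:

```nginx
server {
    listen      80;
    listen      [::]:80;
    # hypothetical name; use your real subdomain
    server_name service1.example.com;

    # keep crawlers away
    location = /robots.txt {
        log_not_found off;
        access_log    off;
        add_header Content-Type text/plain;
        return 200 "User-agent: *\nDisallow: /\n";
    }

    # reverse proxy, with basic auth and noindex headers
    location / {
        add_header X-Robots-Tag "noindex, nofollow, nosnippet, noarchive";
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        # 3000 is a placeholder for the service's actual port
        proxy_pass http://127.0.0.1:3000;
    }
}
```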

Dynamic Port to URL mapping

The following config maps ports and acts as a reverse proxy: a request to a numeric subdomain is reverse proxied to the corresponding local port.

server {
    listen      80;
    listen      [::]:80;
    server_name "~^(\d+)\.";

    # security

    # reverse proxy
    location / {
        proxy_pass http://127.0.0.1:$1;
    }

    # additional config
}
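The server_name regex captures the leading digits of the requested hostname into $1, which proxy_pass then uses as the local port. The same extraction can be sketched in shell (the subdomain below is hypothetical):

```shell
# Mimic the nginx server_name capture: pull the leading digits
# (the port) out of the requested hostname.
# "8080.service1.example.com" is a hypothetical subdomain.
host="8080.service1.example.com"
port=$(printf '%s' "$host" | sed -E 's/^([0-9]+)\..*$/\1/')
echo "$port"   # prints 8080
```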

This approach is preferable to the alternative of listening on a port range with the listen directive, because the latter opens sockets regardless of incoming connections and may exhaust nginx's or the OS's socket limits.





