Issue accessing SSL version of node-red

SSL is installed and working on port 1880. I can access the editor and it's secure.

On my AWS Ubuntu server I've enabled the ports so that node-red can listen on 80/443.

Using the http in custom ports node, set to 443, trying to access a page comes up with "This site can’t provide a secure connection".

Using port 80 brings up the page, but obviously without SSL or switching to it.

Would this mean I would have to change the base port of 1880 to 80/443 in settings.js for this to work with a domain name without the port? Or how can I get this to work without altering the server port?

Hi @steveh92

I'm afraid you will find this difficult with the custom HTTP node alone; the node itself needs to support SSL, and it currently does not.

Node-RED's own version DOES - because it's using the same server instance that Node-RED is using (where SSL is configured to work correctly).

Your options are:

  • Raise a feature request for SSL support with whoever maintains the node you are using
  • Use some sort of proxy service (like Cloudflare)
  • Use NGINX as a reverse proxy (it will provide the SSL support layer)

A lot of people here use NGINX - as it's all on the same machine, and it generally works really well.

EDIT
This is how NGINX will work at a basic level.

You open up port 443 (which NGINX will listen on and provide a secure connection).
It will then connect to Node-RED internally (on any port you need, such as 1881 for your custom HTTP).

You will not need to open up port 1881, as it is only being connected to locally by NGINX.
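
Something like this, very roughly (untested - the domain, certificate paths and the 1881 port are only placeholders for your own values):

# NGINX terminates SSL on 443 and talks to the custom HTTP port internally
server {
  listen 443 ssl;
  server_name your.domain.com;                     # placeholder
  ssl_certificate     /etc/ssl/certs/server.crt;   # placeholder
  ssl_certificate_key /etc/ssl/certs/server.key;   # placeholder

  location / {
    proxy_pass http://localhost:1881;              # your custom HTTP port - never opened to the internet
  }
}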

Please give this a read on accessing Node-RED securely:
Safely accessing Node-RED over the Internet - FAQs - Node-RED Forum (nodered.org)


Thanks @marcus-j-davies.

I will give NGINX a try and report back.

It's like I get past one inconvenience only to be presented with another :joy: but in general I'm really liking the functionality of node-red.

The wonderful thing about NGINX is that you can map hostnames to different ports:

editor.domain.com -> map to localhost:1880 (NR)
api.domain.com -> map to localhost:1881 (custom)
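
A rough sketch of what that mapping might look like (untested - the hostnames are the examples above and the certificate paths are placeholders):

# editor.domain.com -> Node-RED editor
server {
  listen 443 ssl;
  server_name editor.domain.com;
  ssl_certificate     /etc/ssl/certs/server.crt;   # placeholder
  ssl_certificate_key /etc/ssl/certs/server.key;   # placeholder
  location / {
    proxy_pass http://localhost:1880;
  }
}

# api.domain.com -> custom HTTP endpoints
server {
  listen 443 ssl;
  server_name api.domain.com;
  ssl_certificate     /etc/ssl/certs/server.crt;   # placeholder
  ssl_certificate_key /etc/ssl/certs/server.key;   # placeholder
  location / {
    proxy_pass http://localhost:1881;
  }
}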


That is usually because the browser and the server cannot agree on a common encryption method.

No, and you shouldn't anyway since 80 and 443 are privileged ports.

Use a proxy server. It will let you configure external ports separately from the internal ones.


Ah, just read the rest and I see Marcus has already steered you in this direction.


So I've created a file called nodered.

server {
  listen 443;
  server_name **mydomain**;
  access_log /var/log/nginx/access.log;
  location / {
    proxy_pass http://localhost:1880;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
  }
}

And restarted NGINX; now when I go into node-red it says that it cannot create a server on port 443.

What have I missed?

Is something already listening on port 443?
Also remember to set your SSL files, and the flag for it to support SSL:

listen              443 ssl;
ssl_certificate     /etc/ssl/certs/server.crt;
ssl_certificate_key /etc/ssl/certs/server.key;
...

Note: I don't have much exposure to NGINX - I just know it's a go-to for this type of thing.


You need to change Node-RED's settings back to the default port.

You cannot use port 443 without setting up the TLS keys.

server {
  # Variables we don't want to share with other people (server names, etc)
  include /etc/nginx/conf.d/includes/core_variables.conf;

  # Ports to listen on. Remove insecure ports if not needed.
  listen *:443 ssl default_server;

  # If you want to limit this config to specific (sub)domain names:
  server_name $mydomain;

  # What to serve if no html file name provided
  index index.html index.htm;

  # A full set of headers have to be redone for every context that defines another header
  #include /etc/nginx/conf.d/includes/common_security_headers.conf;

  # TLS:
    # Specify the public cert and private key - need fullchain for max security - NB: vars don't appear to work here
    ssl_certificate /location/fullchain.cer;
    ssl_certificate_key /location/cert.key;
    # Require safe TLS protocols only
    ssl_protocols TLSv1.2 TLSv1.3;
    # Only use secure encryption ciphers
    #ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    ssl_prefer_server_ciphers On;
    # Configure for Strict Transport Security (HSTS) - set conditionally in security_headers.hdr_conf
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains preload" always;

    # enable session resumption to improve https performance http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
    ssl_session_cache shared:SSL:128m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits. To generate your dhparam.pem file, run in the terminal
    #     openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
    #ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    # enable ocsp stapling http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/
      # Local router DNS resolver first followed by
      # Cloudflare resolver 1dot1dot1dot1.cloudflare-dns.com as this is fast and secure
      # Annoyingly, you cannot use variables in the resolver directive!
      resolver 192.168.1.1 1.1.1.1 1.0.0.1  ipv6=off; # [2606:4700:4700::1111] [2606:4700:4700::1001];
      ssl_stapling on;
      ssl_stapling_verify on;
      # trusted cert not required if using fullchain above
      #ssl_trusted_certificate /etc/nginx/ssl/star_forgott_com.crt;
  # End of TLS

  ## Help prevent buffer overflow attacks
    client_body_buffer_size 1K;
    client_header_buffer_size 1k;
    client_max_body_size 100k; # NB: 1k/10k too small for node-red deploy? OK: 2M, NOK: 1k, 5k, 10k, 
    large_client_header_buffers 2 1k;

  # Default root folder - if all else fails, look here for common content
  root $defaultroot;
  # Redirect missing pages to default static home page - remember to put this in the right place if using location /
  error_page 404 /404.html;
  # redirect server error pages to the static page /50x.html
  error_page 500 502 503 504 /50x.html;

  # Configuration for Node-RED
  include /etc/nginx/conf.d/includes/red.conf;

  # Test URL's for generating errors
  location = /err500 {
    return 500;
  }
  location = /err404 {
    return 404;
  }

}

Note that I put the Node-RED specific stuff into a separate config file, and similarly any common config that has to go in every location entry, such as the security and proxy headers.

Incidentally, it is annoying but you cannot put your certificate file references into a variable. I don't think you can include them from another file either.


NGINX is the only thing on 443/80.

And Node-RED is already set to 1880.

I've tried to follow your example but nginx won't start now with the following error.

EDIT
Realized you mentioned config files, I just need to understand how to do this as well :joy:

I've followed this example

https://atextor.de/2016/06/17/how-to-properly-set-up-node-red-with-nginx.html

It says to make sure that in settings.js the https features for option 1, the http refresh and requireHttps, are commented out. Still not working.

I am getting a different message when trying to access the page now though: 502 Bad Gateway - nginx/1.18.0 (Ubuntu).

Think I'm nearly done for the day; I've never used Ubuntu server before, so I'm learning on the go :joy:

EDIT
The changes have worked - the domain is now working over SSL without the port. All working. Just not too sure I've followed it all correctly.

The 502 means NGINX could not get a valid response from the upstream - i.e. some issue between the proxy and Node-RED, typically the wrong host/port in proxy_pass or Node-RED not listening where NGINX expects it.

NGINX config can be a bit complex but once you've set it up, it just works.

I make sure that I change the root url for the editor in settings.js:

    /** By default, the Node-RED UI is available at http://localhost:1880/
     * The following property can be used to specify a different root path.
     * If set to false, this is disabled.
     *  WARNING: If left unset or set to a path that user paths sit beneath,
     *           any admin middleware such as the httpAdminMiddleware function
     *           will also run for those user paths.
     *           This can have unintended consequences.
     */
    httpAdminRoot: process.env.httpAdminRoot || '/red',

That way, you know that you can proxy that separately. Don't forget to repeat any proxy and security settings for each proxied location - I'll add an example.

There are more settings for an ideal setup. And really, you should also add some proxy trust settings to settings.js:

    /** If you need to set an http proxy to reach out, please set an environment variable
     * called http_proxy (or HTTP_PROXY) outside of Node-RED in the operating system.
     * For example - http_proxy=http://myproxy.com:8080
     * (Setting it here will have no effect)
     * You may also specify no_proxy (or NO_PROXY) to supply a comma separated
     * list of domains to not proxy, eg - no_proxy=.acme.co,.acme.co.uk
     */

and also set Node-RED to only listen on localhost (if the proxy is on the same server):

    /** By default, the Node-RED UI accepts connections on all IPv4 interfaces.
     * To listen on all IPv6 addresses, set uiHost to "::",
     * The following property can be used to listen on a specific interface. For
     * example, the following would only allow connections from the local machine.
     * This can be useful security when putting NR behind a reverse proxy on the same device.
     */
    // uiHost: process.env.HOST || '127.0.0.1',

@TotallyInformation @marcus-j-davies thank you both for your help once again.

Sounds like a job for Monday. This has been fun and frustrating at the same time.

I was planning to just do a Windows installation on AWS and do this, but I chose to follow the guide node-red had on Ubuntu. Not sure if that's made things better for myself or not.

I've enabled adminAuth but it doesn't accept the username and password specified within it?


The thing to achieve here is to allow NGINX to act as your SSL proxy for Node-RED and handle the mapping

(and act as an authentication barrier to the editor :wink: )

Good ending to a Friday!

:slight_smile:

Short term - harder if you don't know Linux. Long term - much easier. And I say that as a mostly Windows user; Linux servers are (mostly) a lot easier and more performant than Windows in so many cases.

Well, Tuesday's job will be to do the auth in NGINX, not in Node-RED. :wink: Again, a steeper learning curve in the short term but massively better in the long term.


And the promised NGINX config for proxying the Node-RED admin Editor:

  ## -- assumes you haven't nested this location in a parent location --
  # Proxy the Node-RED Editor https://my.public.domain/red/ to http://localhost:1880/red/
  location /red/ {

    # A full set of headers have to be redone for every context that defines another header
    include /etc/nginx/conf.d/includes/common_security_headers.conf;

    # Common headers MUST be re-included whenever setting another header in a route
    include /etc/nginx/conf.d/includes/common_proxy_headers.conf;

    # ==> Of course, you could have a separate auth here! <==

    # Reverse Proxy for websockets
    include /etc/nginx/conf.d/includes/common_ws_proxy_headers.conf;

    # Reverse Proxy (NB: only one proxy_pass allowed per location)
    proxy_pass https://localhost:1880/red/; # <== CHANGE TO MATCH THE EDITOR's URL

    # Tell upstream which proxy was used
    proxy_set_header X-JK-Proxy "REDadmin";
    # Tell client which proxy was used (not really all that useful)
    add_header X-JK-Proxy "REDadmin";
  }

/etc/nginx/conf.d/includes/common_security_headers.conf:

# Default header properties
# These have to be respecified for EVERY server/location context if that context defines another header. 
# So we use an include file that won't be loaded by the nginx.conf file directly.

# don't allow the browser to render the page inside an frame or iframe and avoid clickjacking http://en.wikipedia.org/wiki/Clickjacking
# if you need to allow [i]frames, you can use SAMEORIGIN or even set an uri with ALLOW-FROM uri https://developer.mozilla.org/en-US/docs/HTTP/X-Frame-Options
add_header X-Frame-Options SAMEORIGIN;

# when serving user-supplied content, include a X-Content-Type-Options: nosniff header along with the Content-Type: header,
# to disable content-type sniffing on some browsers. https://www.owasp.org/index.php/List_of_useful_HTTP_headers
add_header X-Content-Type-Options nosniff;

# This header enables the Cross-site scripting (XSS) filter built into most recent web browsers.
# It's usually enabled by default anyway, so the role of this header is to re-enable the filter for 
# this particular website if it was disabled by the user.
# https://www.owasp.org/index.php/List_of_useful_HTTP_headers
add_header X-XSS-Protection "1; mode=block";

# Content Security Policy (CSP) -  tell the browser that it can only download content from the domains you explicitly allow
# http://www.html5rocks.com/en/tutorials/security/content-security-policy/
# https://www.owasp.org/index.php/Content_Security_Policy
# Must be configured for your specific needs
#add_header Content-Security-Policy ........ ;

# You may want this in case something tries to refer from your site to something like Facebook https://scotthelme.co.uk/a-new-security-header-referrer-policy/
add_header Referrer-Policy "strict-origin-when-cross-origin";

# Configure for Strict Transport Security (HSTS) - only if https is in use - see map in default.conf
add_header Strict-Transport-Security $sts;

# NGINX seems to often ignore directive to not blab
add_header server 'home'; # <== CHANGE THIS TO WHATEVER YOU WANT

/etc/nginx/conf.d/includes/common_proxy_headers.conf:

# Common reverse proxy settings
# Don't forget to also add proxy_pass url;

  proxy_set_header Forwarded "by=$host;for=$proxy_add_x_forwarded_for;host=$host;proto=$scheme";
  proxy_set_header Via       "$scheme/1.1 $host:$server_port";

  proxy_set_header Host              $host;
  proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Host  $host;
  proxy_set_header X-Forwarded-Port  $server_port;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Original-URI    $request_uri;
  proxy_set_header X-Real-IP         $remote_addr;

  # Proxy timeouts
  proxy_connect_timeout       60s;
  proxy_send_timeout          60s;
  proxy_read_timeout          60s;

  # If proxied responses happening too slowly, try turning off the buffering
  #proxy_buffering off;

/etc/nginx/conf.d/includes/common_ws_proxy_headers.conf:

# Common headers for proxy of websockets

proxy_http_version          1.1;
proxy_set_header Upgrade    $http_upgrade;
proxy_set_header Connection "upgrade";

And if you want to add a login to a location, this is a very simplistic example using HTTP BASIC AUTH.

  # Test route for HTTP Basic authentication
  # Needs a matching route in Node-RED
  location /authbasic/ {

    # A full set of headers have to be redone for every context that defines another header
    include /etc/nginx/conf.d/includes/common_security_headers.conf;

    # Common headers MUST be re-included whenever setting another header in a route
    include /etc/nginx/conf.d/includes/common_proxy_headers.conf;

    satisfy all; # Only really needed when using IP restrictions, etc

    # Block router, allow from anywhere else on local network and block everything else
    deny  192.168.1.1;
    allow 192.168.1.0/24;
    deny  all;

    # sudo htpasswd -c /etc/nginx/.htpasswd me # thisisme
    auth_basic "Auth Basic"; # relm, any text you like
    auth_basic_user_file /etc/nginx/.htpasswd;
    add_header X-JK-Proxy "Basic Auth Test";
    add_header X-JK-Username $remote_user; # Returns the logged in username

    # proxy_set_header Authorization ""; # If you don't want the upstream to receive the auth
    proxy_pass https://localhost:1880/authbasic/;
  }

There is more if you need it; it can get quite complex and you need to think through what parts you want protected. Not forgetting that you can also nest locations so some things can be inherited (but some things cannot).
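
As a purely hypothetical illustration of nesting (the /red/admin/ path is made up, not a real Node-RED route - it just shows what is and isn't inherited):

# Outer location proxies the editor; the nested location adds auth on top
location /red/ {
  # These proxy headers are inherited by the nested location below,
  # unless that location sets any proxy_set_header of its own
  include /etc/nginx/conf.d/includes/common_proxy_headers.conf;
  proxy_pass https://localhost:1880/red/;

  # Nested location - note that proxy_pass is NOT inherited, so it must be repeated
  location /red/admin/ {
    proxy_pass https://localhost:1880/red/admin/;
    auth_basic "Admin";
    auth_basic_user_file /etc/nginx/.htpasswd;
  }
}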


Oh, and any headers that start with X-JK - those are for convenience, as you can check them in Node-RED. They can help debug routing issues. Anything that starts with X- is generally a custom header.


I now know who to come to, when I need to delve into NGINX again :wink:


This is why I put LOTS of annotations into my config files - there is no way I'd remember all this stuff otherwise! :woozy_face:


Haven't had much time to look at this today and I'm having a few issues before implementing any of the stuff @TotallyInformation posted on NGINX.

Things I've done

  • Changed root URL for editor.
  • Set Node-RED to only listen on localhost.

Questions I have

  1. I take it the config for proxying Node-RED you pasted would be "include /etc/nginx/conf.d/includes/red.conf"?
  2. In the server config you posted, there's a file in the includes called core_variables.conf - how do I set this up?
  3. You posted about a login to a location - where does this go? And will this fix not being able to access the editor? It doesn't accept the username and password supplied in settings.js.

Think I'm having some second thoughts about this implementation.

Seems like a lot of working around things just to have a normal-looking domain. And as @TotallyInformation said, I should probably not use this to handle such a system as a client portal for users to log into to see information from our ERP system.

Really like this tool, maybe I can use it in some manner still.

I like to split my configs up into smaller pieces. red.conf is included at the top level. Here is my default.conf that lives in /etc/nginx/conf.d:

# Default settings for NGINX
#
# See http://tautt.com/best-nginx-configuration-for-security/

# don't send the nginx version number in error pages and Server header
server_tokens off;

# Create a new var. if https is being used
map $https  $isHttps {
  "on"     true;
  default  '';
}
# Set Strict Transport Security (HSTS) if https in use - see security_headers.hdr_conf
map $https  $sts {
  "on"     "max-age=15768000; includeSubDomains preload";
  default  '';
}

# A full set of headers have to be redone for every context - only if another header is set
include /etc/nginx/conf.d/includes/common_security_headers.conf;

# Default HTTP Server - Redirect all http traffic to https
server {
  # Variables we don't want to share with other people (server names, certificate filenames, etc)
  include /etc/nginx/conf.d/includes/core_variables.conf;

  listen *:80 default_server;
  #listen [::]:80 default_server;

  # If you want to limit this config to specific (sub)domain names:
  server_name $servername;

  # Shouldn't be needed but just in case
  location / {
    root   $defaultroot;
    index  index.html index.htm;
  }
  error_page  404  /index.html;
  error_page   500 502 503 504  /50x.html;
  location = /50x.html {
      root   $defaultroot;
  }

  # Permanent redirect
  return 301 https://$host$request_uri;
}

# Default HTTPS server - only allow TLS - include Node-RED and other locations
server {
  # Variables we don't want to share with other people (server names, etc)
  include /etc/nginx/conf.d/includes/core_variables.conf;

	# Ports to listen on. Remove insecure ports if not needed.
	listen *:443 ssl default_server;
  #listen 1880 ssl http2;
	#listen [::]:443 ssl http2 default_server;

  # If you want to limit this config to specific (sub)domain names:
  server_name $mydomain;

  # What to serve if no html file name provided
  index index.html index.htm;

  # A full set of headers have to be redone for every context that defines another header
  #include /etc/nginx/conf.d/includes/common_security_headers.conf;

  # TLS:
    # Specify the public cert and private key - need fullchain for max security - NB: vars don't appear to work here
    ssl_certificate /path/to/fullchain.cer;
    ssl_certificate_key /path/to/cert.key;
    # Require safe TLS protocols only
    ssl_protocols TLSv1.2 TLSv1.3;
    # Only use secure encryption ciphers
    #ssl_ciphers EECDH+CHACHA20:EECDH+AES128:RSA+AES128:EECDH+AES256:RSA+AES256:EECDH+3DES:RSA+3DES:!MD5;
    ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    ssl_prefer_server_ciphers On;
    # Configure for Strict Transport Security (HSTS) - set conditionally in security_headers.hdr_conf
    #add_header Strict-Transport-Security "max-age=15768000; includeSubDomains preload" always;

    # enable session resumption to improve https performance http://vincent.bernat.im/en/blog/2011-ssl-session-reuse-rfc5077.html
    ssl_session_cache shared:SSL:128m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;

    # Diffie-Hellman parameter for DHE ciphersuites, recommended 2048 bits. To generate your dhparam.pem file, run in the terminal
    #     openssl dhparam -out /etc/nginx/ssl/dhparam.pem 2048
    #ssl_dhparam /etc/nginx/ssl/dhparam.pem;

    # enable ocsp stapling http://blog.mozilla.org/security/2013/07/29/ocsp-stapling-in-firefox/
      # Local router DNS resolver first followed by
      # Cloudflare resolver 1dot1dot1dot1.cloudflare-dns.com as this is fast and secure
      # Annoyingly, you cannot use variables in the resolver directive!
      resolver 192.168.1.1 1.1.1.1 1.0.0.1  ipv6=off; # [2606:4700:4700::1111] [2606:4700:4700::1001];
      ssl_stapling on;
      ssl_stapling_verify on;
      # trusted cert not required if using fullchain above
      #ssl_trusted_certificate /etc/nginx/ssl/star_forgott_com.crt;
  # End of TLS

  # If you want to log access for this server separate to the main log at /var/log/nginx/access.log
  # access_log  /var/log/nginx/host.access.log  main;

  ## Help prevent buffer overflow attacks
    client_body_buffer_size 1K;
    client_header_buffer_size 1k;
    client_max_body_size 100k; # NB: 1k/10k too small for node-red deploy? OK: 2M, NOK: 1k, 5k, 10k, 
    large_client_header_buffers 2 1k;

  # Default root folder - if all else fails, look here for common content
  root $defaultroot;
  # Redirect missing pages to default static home page - remember to put this in the right place if using location /
  error_page 404 /404.html;
  # redirect server error pages to the static page /50x.html
  error_page 500 502 503 504 /50x.html;

  # Configuration for Node-RED
  include /etc/nginx/conf.d/includes/red.conf;

  # Test URL's for generating errors
  location = /err500 {
    return 500;
  }
  location = /err404 {
    return 404;
  }

} # --- End of default HTTPS server --- #

Don't forget to change the paths to the certs. And change the DNS entries if needed.

And here is /etc/nginx/conf.d/includes/core_variables.conf:

# Some core variables so that we don't need
# to expose things we don't want to when sharing
# conf files

set $servername ".example.com";
set $defaultroot "/usr/share/nginx/html";

# Limit server name to specific domain
set $mydomain ".example.com";

It goes into any location where you want a login.

Sorry, not sure what is causing that for you.

No, you can remove the login from settings.js as you won't need/want it. You would be letting NGINX handle the authentication. This is a whole separate subject, since there are many ways to do logins with NGINX - from simple HTTP BASIC logins, through forms-based, to 3rd-party services, both local and cloud-based.

Node-RED is really for automation and data-driven apps including data-driven web apps. Using it just as a portal is probably overkill.