Uibuilder + reverse proxy

As it turns out, ChatGPT is a decent JavaScript coach. Since my last post, I've been able to create a fully working dynamic web form without relying on the low-code components in uibuilder. Very happy with the results.

I am running node-red in a Docker container along with some other Dockerized apps (using the excellent IOTStack).

I also have nginx acting as a reverse proxy for node-red and the other apps. Nginx is not running inside Docker, it's running on the host directly. I have mapped localhost:1880 to localhost/node-red and that works well (I have some minor annoyances with sometimes needing a final "/" and other times not, but that's an aside). I've set up other apps like Grafana the same way, each with a different path.

In uibuilder/node-red I have a mixture of dynamic pages and some static pages. Each uibuilder node ends up having a URL like http://localhost/node-red/some-uibuilder-page/.

Here are my questions:

1/ I would like the uibuilder pages to be the default page the user sees if they try to access http://localhost. What's the best way to do that without breaking access to the other Docker apps?

2/ Is there a way to serve uibuilder pages outside the /node-red/ URL path? In a perfect world I'd like to have access to the node-red editor via a different url to the uibuilder pages.

3/ Any strong opinions on running nginx on the host versus as a dockerized app?

All comments and suggestions are welcome!

Thanks.

You can do that and you have a few choices. If your uibuilder url is test, then the local full path would be http://localhost:1880/test/ if you haven't changed any other node-red settings. So you can map that to / in NGINX if you like. Just make sure it appears before any other maps for Node-RED so that it takes preference.
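A minimal sketch of that mapping (assuming the uibuilder node's url is test and Node-RED's default port; the page's supporting paths such as /uibuilder/ still need proxying too, as discussed further down):

# Map the site root only (exact match) to the uibuilder "test" page,
# leaving the other proxied apps on their own paths
location = / {
  proxy_pass http://localhost:1880/test/;
}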

Alternatively, you could put uibuilder onto its own Express server instead of Node-RED's. This is a simple configuration change in node-red's settings.js file; the details are in the uibuilder documentation. So, for example, you could make all uibuilder nodes attach to port 3001: http://localhost:3001/test/. This is arguably simpler to get your head around since there cannot be any overlapping or clashing endpoints between standard Node-RED and uibuilder. It also has the potential advantage that you can make changes to the http and socket.io settings and middleware without impacting the rest of node-red, or make changes to node-red's http and websocket servers without impacting uibuilder endpoints.

There are probably a couple of other variations on these themes but that's the main thrust of things.

Yup! This is really a variation on q1. By the way, I always change the default node-red editor path and make it something like http://localhost:1880/red, that way I know I'm never going to get a clash between the editor and anything else.
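That is a one-line change in settings.js (httpAdminRoot is the standard Node-RED setting for the Editor path; pick whatever path you like):

// In node-red's settings.js - move the Editor (admin UI) off the root path
httpAdminRoot: '/red',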

But, while you certainly don't have to do it this way, the ultimate approach is to put uibuilder onto its own port as shown above.

I am an Enterprise Architect, I have strong opinions on EVERYTHING! It is a natural outcome of the job. :grin:

But more seriously, I don't like to use Docker unless I have to since it adds a lot of complexity. So for something like NGINX and Node-RED, I prefer to run them direct. I keep Docker in the back pocket for complex things like the Ubiquiti Unifi Wi-Fi controller which needs MongoDB and Java servers. Also useful for things I only need to run occasionally such as a Linux remote desktop which I hardly ever need running.

If you run things like proxy and web servers under Docker, you really need to know what you are doing and all of the config is just that much harder. For home use, I just can't see the point: on the odd occasion I need to run an upgrade, the OS takes care of it, and a config change usually means a couple of rounds of tweaking anyway, so a quick edit followed by a systemctl restart of the service is quick and easy.

In more managed/corporate environments I wouldn't use Docker either; in fact I do my best to eliminate it entirely from our work environments. We mostly run cloud environments, and Docker is a waste of time there; better to put the effort into creating configuration templates that can set up everything, including sizing the virtual infrastructure, firewalls, virtual networking and more. For on-prem/custom datacentre setups, we would mostly use simple virtual machines.

Thanks for the helpful reply.

Some additional information and some follow up questions.

The way my project is currently set up is as follows:

  1. Nginx is set up as a reverse proxy running on an Ubuntu 20.04 host. The host is an SBC, but not a Raspberry Pi since those are impossible to find at reasonable prices.
  2. The host runs IOTStack, which is a collection of dockerized apps (Grafana, Node-RED, InfluxDB, etc.).
  3. The host is typically behind a corporate firewall. It does not have a publicly accessible URL/domain name.
  4. I have TLS/https set up on nginx, but it's a bit hacky because there's not a public URL.
  5. I do not have TLS set up for the dockerized apps as I was told that if I secured the traffic to the nginx server, it wasn't necessary to secure traffic between nginx and the docker apps.

The reason I chose IOTStack is that it simplifies installation of the various apps considerably. It's also a pretty widely used project, so I figured it was reasonably battle-tested.

I like the idea of using a separate Express server, although that opens up more questions about how to secure things. If my existing approach of securing nginx and not everything behind it isn't a terrible idea, I suppose I could take the same approach and simply map "/" to http://localhost:3001/test/.

Follow-up questions
(i) I tried adding the code from the uibuilder security documentation to settings.js but could not access uibuilder. I commented out the https and certificate lines to keep things simple. Here's what I added:

uibuilder: {
   /** Tells uibuilder to stand up a separate instance of ExpressJS and use it
    *  as the uibuilder web server. Also defines the TCP port to use which
    *  has to be different to Node-RED's port. Adjust to meet your own requirements.
    */
   port: process.env.UIBPORT || 3001,
   /** Tells uibuilder to use https instead of http */
//   customType: 'https',
   /** Tells uibuilder what private key and public certificate to use for TLS encryption */
//   https: {
//      key:  fs.readFileSync( path.join( 'each', 'folder', 'to', 'your', 'privatekey.pem' ) ),
//      cert: fs.readFileSync( path.join( 'each', 'folder', 'to', 'your', 'certificate.pem' ) ),
//   },
},

Did I miss a step?

(ii) I know you're not a big fan of Docker and I understand that security etc gets into the realm of personal opinion quite quickly. However, I'd welcome your thoughts on the current setup, particularly the choices around TLS.

As ever, I appreciate the fantastic support!

Yes, that is mostly true. Though if you ever need to move some services to other devices, you need to remember to change that of course.

No, that should be OK. Check the Node-RED log at startup and it tells you what it is using. You can also check the data in the Editor.

If you have a working uibuilder page and can get to the command line on the device, you can also check with wget or similar, since you should be able to request it over localhost.

Most likely, you haven't got port 3001 enabled in the appropriate docker containers.
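With IOTStack / docker-compose that usually means publishing the extra port on the Node-RED service and recreating the container - something like this sketch (the service name and the rest of the entry will vary with your own docker-compose.yml):

nodered:
  # ... existing IOTStack service definition ...
  ports:
    - "1880:1880"   # Node-RED flows and Editor
    - "3001:3001"   # uibuilder's dedicated ExpressJS server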

As long as you are happy with it, know (and have documented for future reference) how to configure it, have processes for updates and backups, and as long as the system isn't being overloaded, your current architecture should be fine.

In terms of TLS, you certainly need it for access from outside the device so you will want to make sure that the certificate update processes are robust. Depending on what CA service you are using, you will have various options for managing the certs. For example, with Let's Encrypt, I mostly use DNS verification rather than opening a port 80 into my local network. And I run the cert updates as a CRON job using the acme.sh official shell script as I get several certificates and that process puts them all into a known location so that I can reference or copy them to the right places as needed.

Most likely, you haven't got port 3001 enabled in the appropriate docker containers.

And how did I miss that? As you suspected, if you actually open the port, it's a lot more likely to work :crazy_face:

I successfully got uibuilder running on port 3001 and updated my nginx configuration. What's the best way to ensure all of the relevant paths get preserved? e.g. I have uibuilder set up to use "test", so I mapped "/" to "http://localhost:3001/test/". The CSS seems to be loading, but the dynamic form isn't showing up. In the console I see a mix of "blocked due to MIME type" and "loading failed" errors. I believe the MIME error is caused by not finding the file, so the root cause for both types of errors is the same.

The resource from “https://localhost/uibuilder/uibuilder.iife.min.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). system-settings.html
Loading failed for the <script> with source “https://localhost/uibuilder/uibuilder.iife.min.js”. system-settings.html:11:53
The resource from “https://localhost/uibuilder/vendor/vue-select/dist/vue-select.css” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). system-settings.html
The resource from “https://localhost/uibuilder/vendor/vue/dist/vue.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). system-settings.html
Loading failed for the <script> with source “https://localhost/uibuilder/vendor/vue/dist/vue.js”. system-settings.html:12:54
The resource from “https://localhost/uibuilder/vendor/vue-select/dist/vue-select.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). system-settings.html
Loading failed for the <script> with source “https://localhost/uibuilder/vendor/vue-select/dist/vue-select.js”. system-settings.html:14:68
Uncaught ReferenceError: Vue is not defined
    <anonymous> https://localhost/system-settings.js:4
system-settings.js:4:1
The resource from “https://localhost/uibuilder/vendor/vue-select/dist/vue-select.css” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff). 6 system-settings.html
Source map error: Error: request failed with status 404
Resource URL: https://localhost/uibuilder/vendor/vue-select/dist/vue-select.css
Source Map URL: vue-select.css.map

Last, but not least, you've mentioned in a few places that documentation is a struggle (I agree!). For low-code examples where you're dealing with people like me that don't fully know what they are doing, I've found this tool to be invaluable: https://scribehow.com/

I tried a few different things:

  1. Set the redirect for "/" to http://localhost:3001, and typed in the full url i.e. http://ip_address/test/
    This worked, but obviously the user has to type in http://ip_address/test/

  2. Set the redirect for "/" to http://localhost:3001/test, and typed in the base url i.e. http://ip_address/
    I could not get this to work. I tried adding a rewrite statement:

rewrite ^/test/(.*)$ /$1 break;

but that didn't help. It's clearly a file paths issue, but I'm not sure how best to solve it.

There's nothing unusual about the install paths. They're all the normal/default ones.

You need to make sure that http://localhost:3001/uibuilder is also available via the proxy and that websockets are configured as well.
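For the port-3001 setup that could look something like this sketch (the exact paths your page needs are whatever shows up as failing in the browser's network tab):

# Make uibuilder's own library/vendor paths reachable through the proxy
location /uibuilder/ {
  # socket.io normally connects under this path too, so allow websocket upgrades
  proxy_http_version          1.1;
  proxy_set_header Upgrade    $http_upgrade;
  proxy_set_header Connection "upgrade";

  proxy_pass http://localhost:3001/uibuilder/;
}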

In my NGINX config, I have things split up like so:

[screenshot: NGINX config split across default.conf, red.conf and include files]

The default config defines the base default server. It includes red.conf into that base server config. Things such as security, websocket and custom header configs are in separate files because you have to include them for every defined location (they don't get inherited).
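Roughly like this (a sketch reconstructed from the description; file names and paths are illustrative):

# default.conf - defines the base default server
server {
  listen 443 ssl;

  # ... TLS and other server-wide settings ...

  # All of the Node-RED / uibuilder location blocks live in red.conf
  include /etc/nginx/red.conf;
}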

For example, on every path that uses websockets, you will need this:

# Common headers for proxy of websockets
proxy_http_version          1.1;
proxy_set_header Upgrade    $http_upgrade;
proxy_set_header Connection "upgrade";

For every proxied path, I also include these:

# Common reverse proxy settings
# Don't forget to also add proxy_pass url;

  proxy_set_header Forwarded "by=$host;for=$proxy_add_x_forwarded_for;host=$host;proto=$scheme";
  proxy_set_header Via       "$scheme/1.1 $host:$server_port";

  proxy_set_header Host              $host;
  proxy_set_header X-Forwarded-For   $proxy_add_x_forwarded_for;
  proxy_set_header X-Forwarded-Host  $host;
  proxy_set_header X-Forwarded-Port  $server_port;
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_set_header X-Original-URI    $request_uri;
  proxy_set_header X-Real-IP         $remote_addr;

  # Proxy timeouts
  proxy_connect_timeout       60s;
  proxy_send_timeout          60s;
  proxy_read_timeout          60s;

  # If proxied responses happening too slowly, try turning off the buffering
  #proxy_buffering off;

You should not need any rewrites when using NGINX as a reverse proxy. Here is a snippet that proxies the editor (which I always move to /red using node-red's settings.js) and the node-red Dashboard:

# Deal with all other Node-RED user endpoints - e.g. uibuilder
# Takes over the whole root url which means you can't use it for anything else
# Better to set httpNodeRoot to something (e.g. 'nr') and then just proxy that (e.g. '/nr/')
location / {
  # A full set of headers have to be redone for every context that defines another header
  include /etc/nginx/conf.d/includes/common_security_headers.conf;

  # Reverse Proxy
  proxy_pass https://localhost:1880/; # <== CHANGE TO MATCH Node-RED's base URL

  # Common headers MUST be re-included whenever setting another header in a route
  include /etc/nginx/conf.d/includes/common_proxy_headers.conf;

  # Add a header that helps identify what location we went through, remove for production
  add_header X-JK-Proxy "ROOT";

  # ... more stuff here not relevant to this thread ...

  # Proxy the Node-RED Editor https://my.public.domain/red/ to http://localhost:1880/red/
  location /red/ {
    
    # ==> Of course, you could have a separate auth here! <==

    # Reverse Proxy for websockets
    include /etc/nginx/conf.d/includes/common_ws_proxy_headers.conf;

    # Reverse Proxy
    proxy_pass https://localhost:1880/red/; # <== CHANGE TO MATCH THE EDITOR's URL
    
    # Tell upstream which proxy was used
    proxy_set_header X-JK-Proxy "RED";
    # Tell client which proxy was used (not really all that useful)
    add_header X-JK-Proxy "RED";

    # Proxy the Node-RED Dashboard https://my.public.domain/red/dash/ to http://localhost:1880/ui/
    location  /red/dash/ {

      # ==> Feel free to have different auth here <==

      # Reverse Proxy
      proxy_pass https://localhost:1880/ui/; # <== CHANGE TO MATCH THE DASHBOARD's URL
      
      # Tell upstream which proxy was used
      proxy_set_header X-JK-Proxy "RED-DASH";
      # Tell client which proxy was used
      add_header X-JK-Proxy "RED-DASH";
    }

    # Proxy the Node-RED Dashboard https://my.public.domain/red/ui/ to http://localhost:1880/ui/
    location  /red/ui/ {

      # ==> Feel free to have different auth here <==

      # Reverse Proxy
      proxy_pass https://localhost:1880/ui/; # <== CHANGE TO MATCH THE DASHBOARD's URL
      
      # Tell upstream which proxy was used
      proxy_set_header X-JK-Proxy "RED-UI";
      # Tell client which proxy was used
      add_header X-JK-Proxy "RED-UI";
    }
  }

}

As you can see, there are various things you can do to achieve more flexibility. You should also proxy port 1880 to an error page - or, better still, use iptables or similar to block it from external access.

Don't forget to check your browser's dev tools network tab to see what isn't being loaded and add an appropriate entry to the proxy.

The easiest thing for you would be to proxy the whole of localhost:3001 and accept the use of /test/. If you want /test/ to become / via the proxy, that is more difficult to achieve since you then have to provide additional overrides to get all of the generic uibuilder paths to be correct and you probably have to change your html pages to correct their css and script paths.
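In other words, something along these lines (a sketch only, reusing the include files shown above):

# Hand the whole of the uibuilder server (port 3001) to the proxy root
location / {
  include /etc/nginx/conf.d/includes/common_security_headers.conf;
  include /etc/nginx/conf.d/includes/common_proxy_headers.conf;
  # socket.io needs the websocket upgrade headers
  include /etc/nginx/conf.d/includes/common_ws_proxy_headers.conf;

  # Everything stays on its normal path: /test/, /uibuilder/, etc.
  proxy_pass http://localhost:3001/;
}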


Thanks. All very helpful and everything now works. I kept the /test/ URL for simplicity, as you suggested. There are enough moving parts already.

