You can do that and you have a few choices. If your uibuilder url is test
, then the local full path would be http://localhost:1880/test/
if you haven't changed any other Node-RED settings. So you can map that to /
in NGINX if you like. Just make sure it appears before any other mappings for Node-RED so that it takes preference.
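As a minimal sketch (assuming Node-RED is on localhost:1880 on the same box and your instance url really is test), the NGINX side might look something like this:

```nginx
# A minimal sketch, assuming Node-RED on localhost:1880 on the same
# host and a uibuilder instance with a url of "test".
location / {
    proxy_pass http://localhost:1880/test/;  # the "/" prefix is replaced by "/test/"
    proxy_http_version 1.1;                  # needed for the websocket upgrade
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
# Any other Node-RED endpoints you still need (the editor, uibuilder's
# Socket.IO and common resource paths, etc.) want their own, more
# specific location blocks alongside this catch-all.
```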
Alternatively, you could put uibuilder onto its own Express server instead of Node-RED's. This is a simple configuration change in Node-RED's settings.js file; details are in the uibuilder documentation. So, for example, you could make all uibuilder nodes attach to port 3001: http://localhost:3001/test/. This is arguably simpler to get your head around since there cannot be any overlapping or clashing endpoints between standard Node-RED and uibuilder. It also has the potential advantage that you can change uibuilder's http and Socket.IO settings and middleware without impacting the rest of Node-RED, or change Node-RED's http and websocket servers without impacting uibuilder endpoints.
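For reference, the settings.js change is just a uibuilder property. A sketch, with 3001 only as an example value:

```js
// In Node-RED's settings.js (inside module.exports). uibuilder reads
// this at startup and creates its own ExpressJS server on this port
// instead of attaching to Node-RED's http server.
uibuilder: {
    port: 3001,  // example value; any free port will do
},
```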
There are probably a couple of other variations on these themes but that's the main thrust of things.
Yup! This is really a variation on q1. By the way, I always change the default Node-RED editor path to something like http://localhost:1880/red
; that way I know I'm never going to get a clash between the editor and anything else.
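That's the standard httpAdminRoot setting; in settings.js it looks like:

```js
// In Node-RED's settings.js: move the editor (and admin API) off
// the root path so it can't clash with anything else.
httpAdminRoot: '/red',
```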
But, while you certainly don't have to do it this way, the cleanest approach is to put uibuilder onto its own port, as shown above.
I am an Enterprise Architect; I have strong opinions on EVERYTHING! It is a natural outcome of the job.
But more seriously, I don't like to use Docker unless I have to, since it adds a lot of complexity. So for something like NGINX and Node-RED, I prefer to run them directly. I keep Docker in the back pocket for complex things like the Ubiquiti UniFi Wi-Fi controller, which needs MongoDB and Java servers. It is also useful for things I only need to run occasionally, such as a Linux remote desktop that I hardly ever need.
If you run things like proxy and web servers under Docker, you really need to know what you are doing, and all of the config is just that much harder. For home use, I just can't see the point: the OS takes care of upgrades, and on the odd occasion I need to make a config change (which usually means a couple of rounds of fiddling anyway), a quick edit followed by a systemctl restart of the service is quick and easy.
In more managed/corporate environments I wouldn't use Docker either; in fact, I do my best to eliminate it entirely from our work environments. We mostly run cloud environments, and Docker is a waste of time there; better to put the effort into creating configuration templates that can set up everything, including sizing virtual infrastructure, firewalls, virtual networking and more. For on-prem/custom datacentre setups, we mostly use simple virtual machines.