Reverse proxy for node red with nginx and secure link setup?

Hello,
Not sure if this is the right forum for this topic:
I'm currently running node red behind a reverse proxy with client-certificate based authentication, which is the most secure way I can think of to access home automation via the internet.
The downside is that I have to install a certificate in every browser I want to use for access.
For less critical use cases (I'm running several node red instances in docker) I was wondering whether it is somehow possible to use the nginx secure link feature instead. (It works quite well for static content.)
The issue is: the node red URL comes up with a different "socketid" parameter every time, so I'm not sure how to handle that, or whether it is relevant at all in this case (since that extra part is only added to the URL later, once the node red port is contacted).
Has anyone here ever tried this "light" version of authentication with a secret md5 hash in the URL?
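For context, the nginx side of my current client-certificate setup looks roughly like this (server name, certificate paths and upstream name are placeholders, not my real config):

server {
    listen 443 ssl;
    server_name nodered.example.com;

    ssl_certificate        /etc/nginx/certs/fullchain.pem;
    ssl_certificate_key    /etc/nginx/certs/privkey.pem;

    # only browsers presenting a client cert signed by this CA are let through
    ssl_client_certificate /etc/nginx/certs/ca.crt;
    ssl_verify_client      on;

    location / {
        proxy_pass http://nodered-server:1880/;

        # websocket upgrade so the editor and dashboard keep working
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}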

Never looked at secure link. Not sure that I'd trust it for Internet connected links but might be OK internally.

Truth be told though, it is trivial to get a Let's Encrypt certificate that your browsers will trust. You do need to access the node-red endpoints with a DNS name rather than just an IP address, though. You can do that by distributing or updating HOSTS files on each computer or, if you have a decent router, by configuring "hairpin NAT", which lets you use the same DNS name internally and externally while the traffic never traverses the WAN because the router "hairpins" it. That's what I do on my Ubiquiti router. But, as I said, you don't have to do that; you do need a public domain name either way, and you can get one via Cloudflare for a few dollars a year.
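For the HOSTS file route, one line per client is enough; the address and name here are just examples (the file is /etc/hosts on Linux/macOS, C:\Windows\System32\drivers\etc\hosts on Windows):

192.168.1.50    nodered.example.com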

I use acme.sh as my Let's Encrypt client and run a CRON job to do updates on a schedule. I have a couple of wildcard certificates provided that way. Let's Encrypt requires domain verification, which you can do a couple of ways; I use DNS verification, which was easy to set up with Cloudflare DNS and does not require an inbound hole in your firewall.
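In case it helps, the core of it is something like this (domain, token and file paths are placeholders for your own values):

# Cloudflare API token for the dns_cf verification hook
export CF_Token="your-cloudflare-api-token"

# issue a wildcard certificate using DNS-01 verification
acme.sh --issue --dns dns_cf -d example.com -d '*.example.com'

# install the cert where nginx expects it and reload nginx on renewal
acme.sh --install-cert -d example.com \
  --key-file       /etc/nginx/certs/example.com.key \
  --fullchain-file /etc/nginx/certs/example.com.fullchain.pem \
  --reloadcmd      "systemctl reload nginx"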

With a wildcard certificate from Let's Encrypt, you get a cert for your domain and can then use it on any sub-domain name. It is slightly more secure to get sub-domain specific certs but unless you are running something of high value, this really isn't needed.

My question actually has nothing to do with encrypting the connection as such, or with the use of wildcard certificates. That is handled via an SSL certificate anyway. (One reason for the reverse proxy.)

Just to make the use case a bit more precise: the client certificate is only used to identify the client, so the reverse proxy can decide whether a certain browser is allowed to connect to the node red dashboard. That way, there is no need to set a password for the dashboard in node red, as the web server takes care of authenticating the client.

The secure link feature of nginx also does authentication, just not with a client-side certificate but with an md5 hash as part of the URL. Secure link is not there to encrypt the connection. Now the issue is: the path of the URL changes as soon as node red comes into play (e.g. the web socket id parameter changes every time), so the initial hash would no longer be valid.
I was just wondering whether anyone has had success getting an nginx setup with secure links running that works around the later change of the URL.

I am not well versed in nginx, but reading the documentation, the hash is based on an expression you set up yourself, if I'm not mistaken.

secure_link_md5 "$secure_link_expires$uri$remote_addr secret";

Is there a way to inject the socketid into that expression? (i.e. get the socketid first and make it part of the hash) Although that sounds more like obfuscation.
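For reference, the documentation's pattern puts that expression inside a location block roughly like this (the secret and the upstream name are only placeholders, and the secure_link module has to be available in your nginx build):

location /ui/ {
    # hash and optional expiry arrive as query arguments, e.g. ?md5=...&expires=...
    secure_link $arg_md5,$arg_expires;
    secure_link_md5 "$secure_link_expires$uri$remote_addr secret";

    if ($secure_link = "")  { return 403; }  # hash missing or wrong
    if ($secure_link = "0") { return 410; }  # link expired

    proxy_pass http://nodered-server:1880;

    # websocket upgrade for the dashboard's socket.io connection
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}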

I guess what you mean is to generate a new hash on the fly then? Because the socket id isn’t static:

  1. client request url with initial correct md5 hash.
    example.com/h/…md5hash…/ui
  2. nginx checks hash and rewrites to reverse proxy url if hash is ok.
  3. and here it gets complicated: node red extends the url with something like …/ui/#!/0?socketid=7et76P9FGmbwYoZ8AAtE

But then the md5 hash is of course no longer valid. It would be very easy to recalculate the md5 hash; I just wouldn't know at which point I could step in to update the hash in the URL.
All that happens in the nginx config file at this point is:
proxy_pass http://nodered-server:1880/;

I would expect this to be the last step, after the hash has been validated? But as said, nginx is beyond me. I do use Nginx Proxy Manager with Authelia (with 2FA), which uses nginx, but I don't deal with the config that much.

Can't you tell it to just use the URL up to the /ui? For a 'normal' website, does one have to provide a different hash for every page of the site?
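A rough sketch of how that could look, if the hash only covers the fixed /ui part of the path (the secret, the /h/ prefix and the upstream name are made up for illustration, and I haven't tested this against node red):

# client bookmarks https://example.com/h/<md5hash>/ui
# the hash is generated once, e.g.:
#   echo -n '/ui mysecret' | openssl md5 -binary | openssl base64 | tr +/ -_ | tr -d =
location ~ ^/h/(?<hash>[^/]+)(?<rest>/ui.*)$ {
    secure_link $hash;

    # fixed expression, so later query parameters (socketid etc.) never enter the hash
    secure_link_md5 "/ui mysecret";

    if ($secure_link = "") { return 403; }

    # strip the /h/<hash> prefix and pass the rest, query string included, to node red
    proxy_pass http://nodered-server:1880$rest$is_args$args;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}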

Ah, sorry, I misunderstood. OK, so I'd need to look up the details but I think that, in outline, you need to start with a self-signed root cert on the server and use that to sign the client certs. That way, you don't have to load each client's cert as a trusted root.

This article seems to be reasonable and explains how to create your own CA which is what you want: Client-Side Certificate Authentication with nginx (fardog.io). But yes, there is no way around having to either create/cross-sign or distribute the client certs.
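Roughly, following that kind of article, the steps come down to something like this (names and lifetimes are just placeholders):

# create your own small CA (this is what nginx will trust)
openssl genrsa -out ca.key 4096
openssl req -new -x509 -days 3650 -key ca.key -out ca.crt -subj "/CN=Home CA"

# create and sign one client certificate per browser/device
openssl genrsa -out client.key 2048
openssl req -new -key client.key -out client.csr -subj "/CN=my-phone"
openssl x509 -req -days 825 -in client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out client.crt

# bundle as .p12 so it can be imported into a browser or phone
openssl pkcs12 -export -in client.crt -inkey client.key -certfile ca.crt -out client.p12

# nginx then only needs to know the CA, not every individual client cert:
#   ssl_client_certificate /etc/nginx/certs/ca.crt;
#   ssl_verify_client on;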

That is true of a client cert as well of course. BUT, the starting point of all security for web services is TLS. You MUST have TLS/HTTPS enforced otherwise you might as well not bother.

You don't want the socket id in the expression since that changes on every reconnection between your browser tab and node-red.

Right, so going back to basics for a second. You need to work out what defines a "certain browser". It can only be something that the browser can return to the server, of course. Off the top of my head, the only thing I can think of that the server would know in advance (because it has to calculate the hash in advance) would be the IP address, or rather the first 2-3 parts of the address. I can't think of anything else that would make sense. Not especially secure, since IP addresses are fairly easy to spoof. And it would mean having all clients that can access the Editor on a separate VLAN.

I can think of a few tweaks but it all comes back to having something that is known in advance by NGINX.

HOWEVER, if that is all you want, then why not simply filter access to the Editor's URL(s) by IP address? That is a built-in feature of NGINX already. I'll see if I can dig out an example.

OK, looks like I took my IP address filtering endpoint out of my live config so I can't share.

However, you just need to look up the satisfy all; directive and you should be able to find the details from that.
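From memory, the shape of it is something like this (the network range, password file and upstream name are placeholders):

# only allow the LAN AND require a login for the editor (satisfy all = both must pass)
location / {
    satisfy all;

    allow 192.168.1.0/24;
    deny  all;

    auth_basic           "node-red editor";
    auth_basic_user_file /etc/nginx/.htpasswd;

    proxy_pass http://nodered-server:1880/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}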

Sorry, but you are referring to something totally different. How to implement client-side certificates or encrypt a connection is not at all the question here.
You should probably google “nginx secure link” so you understand the issue described here.

IP address filtering is not applicable at all as a replacement for authentication: just think for a second; you would probably like to use your mobile phone to connect from outside your home Wi-Fi… what IP address should nginx filter on then?

And that thing known in advance is, for instance, the md5 hash in the URL of an nginx secure link…

I know about secure link; I was trying to suggest some ideas around better management of client certificates, which is where you said you started from. From what I can tell, I don't believe that secure link is all that helpful.

I know all about authentication and authorisation. I was responding though to what you've said in your replies.

If I were implementing client-side passwordless authentication, I wouldn't be doing it any of these ways at all since other than client certificates (which can be a pain to manage), NONE of the other suggestions are in any way "Secure".
