Node-RED SSL reverse proxy with Google OAuth

You're so close. It looks like your API zone key doesn't have quite the right permissions: it's trying to hit the Cloudflare API but the request is denied. Double-check the key permissions in Cloudflare.

To try again:

    docker stack rm stackname
    docker stack deploy -c compose.yml stackname

It looks ok to me, unless I am going blind. I did the curl test and that worked.

This API token will affect the below accounts and zones, along with their respective permissions

    All zones - Zone Settings:Read, Zone:Read, DNS:Edit

can you post your whole compose file to a pastebin and link it?

In the "environment" section, you need to put your tokens with no quotes. Just paste the token directly after the =, exactly as in my example. Is it possible you have some invalid characters there?
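For example, a sketch of how it should look (the variable names are the ones traefik's Cloudflare provider expects, but check them against my compose file; the values are placeholders):

```yaml
    environment:
      # correct: value pasted directly after the '=', no quotes, no spaces
      - CF_API_EMAIL=you@example.com
      - CF_DNS_API_TOKEN=pasteyourtokenhere
      # wrong: in list-style env entries the quotes become part of the value
      # - CF_DNS_API_TOKEN="pasteyourtokenhere"
```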

Pretty sure I have put the token in as specified.
Before pasting it I was going through it line by line, checking against yours and re-reading the thread, and I saw:

Does that mean your compose file should have replicas: 1 instead of mode: global? If so, could that be a factor here? I suspect not.

If you only have one node, you should use replicas: 1
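In the compose file that's the deploy section of the service; a sketch of the two options (comment/uncomment whichever matches your setup):

```yaml
    deploy:
      # mode: global   # runs one task on every node in the swarm
      replicas: 1      # single-node setup: run exactly one task
```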

I can delete my stack completely and re-deploy from what I have above, on different hosts in different locations, so I know that it works.

I had to fiddle with the formatting of those env vars to get the key to work; it has to be exactly right.

OK, progress is being made. I changed it to replicas: 1 and rebooted, and now when I wget https://vps02.mydomain.org.uk I get a certificate issued by CN=Fake LE Intermediate X1, which Google leads me to believe is the staging certificate. So I commented out the staging line, removed the container and deployed it again, but I am still getting the staging one. Perhaps it takes time to get through...

Yeah, you're there.

You need to hard-refresh your browser now; it will refuse to forget that staging cert.

Open a new incognito tab and you'll likely get the real cert served.

In Chrome, press F12, then right-click the refresh button and choose 'Hard Reload'.

Also, don't remove the container, remove the whole stack. Now that you're in swarm mode, forget about containers: you define stacks of services and let Docker manage the containers. It's confusing with one node, but once you have more, you'll see why.

In portainer you can leave the stack existing, and click the services and hit 'remove'. This will remove all the networks, containers, etc. Then you can easily redeploy.

Eureka!
You will forgive me if I forgo jumping out of the bath and running naked down the road; it is cold and raining.
Many thanks for the help.

One thing I notice is not working is the http -> https redirection: if I go to http://domain I get 404 Not Found.

Also one further question, can I access node red other than via the domain name when on my local network?

I suppose the next thing I should do is get some authorisation going so I can leave the port open without worrying, though I assume the only way in is via the domain name, so it is not as if a casual bot looking for open ports is going to find it. Then I need to start adding services, but hopefully that won't be too difficult now I have got the first component running.

Thanks again.

Colin

To make this a bit easier I've put this up on github:

You can deploy directly from github in Portainer:

Supply the env vars as shown here and hit deploy.

Hmm, works for me. Check that port 80 is open on your node, and that your firewall is forwarding it properly.
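In case it helps once port 80 is confirmed open: the usual traefik v2 way to get the http -> https redirect is a redirectscheme middleware attached to a router on the port-80 entrypoint. A sketch (the router, middleware, and entrypoint names here are assumptions; match them to the compose file in the repo):

```yaml
    deploy:
      labels:
        # catch plain-http requests on the port-80 entrypoint...
        - traefik.http.routers.nodered-http.rule=Host(`nodered.mydomain.net`)
        - traefik.http.routers.nodered-http.entrypoints=http
        # ...and run them through a middleware that redirects to https
        - traefik.http.routers.nodered-http.middlewares=https-redirect
        - traefik.http.middlewares.https-redirect.redirectscheme.scheme=https
        - traefik.http.middlewares.https-redirect.redirectscheme.permanent=true
```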

If you want to bypass traefik for local-only access, edit the compose file to expose the port:

  nodered:
    image: nodered/node-red:latest-12
    volumes: 
      - nodered:/data
    environment:
      - TZ=$TIMEZONE
    networks: 
      - traefik-public
      - bridge
    ports:
      - 1880:1880

Then it will behave just like any node-red container when accessed locally by the host's IP address, i.e. you can access it at:

http://222.222.222.222:1880

If you add services to the stack, like an MQTT broker or something, they can access it like this (without exposing any ports):

http://nodered:1880

If you add services in other stacks, and they share a common network, again you can access it directly:

http://nodered:1880

If you add services in other stacks, on other networks, you need a shared network and the stack-qualified service name to get Docker to resolve it. So if you called your stack 'node-red' and your service 'nodered':

http://node-red_nodered:1880

I use a DNS entry to force the hostname 'nodered.mydomain.net' to resolve to the IP address of the Docker master node. So, whether I am internal or external, I access it with:

https://nodered.mydomain.net

For auth I suggest using OAuth; I'll post an example. Then when you access it, you have to log in with your Google creds.

I will continue to develop the docker-compose on GitHub. I'll add the traefik dashboard and OAuth.

I understand your excitement. Congrats!

Ok,

Check my GitHub repo; I have added the traefik dashboard and an IP address whitelist.

I use ranges for the IP whitelist to add all the networks I usually would connect from.
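As traefik v2 labels, such a middleware is a one-liner; a sketch (the middleware name and ranges here are made up; use your own networks):

```yaml
        # allow only requests whose source IP falls in these ranges
        - traefik.http.middlewares.my-whitelist.ipwhitelist.sourcerange=192.168.0.0/16,203.0.113.0/24
```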

The dashboard is helpful as you continue adding more to traefik:

I have now updated the repo with the OAuth instructions. This means you can log into node-red over SSL with your Google creds.

When you reach your instance you will be redirected to Google to log in:

This also still goes through the IP whitelist middleware. The flow is roughly:

traefik --> IP whitelist --> oauth container --> google auth --> forward auth --> node-red

As you add services, they are all under the same single-sign-on, so you just get logged straight in.

As a bonus, if you have 2FA setup with your google account, it works the same, because all the authentication is handled by google.

Your node-red instance can be protected by Titan security keys (mine is!), yet you don't have to set up any auth, nor SSL, at all inside node-red.

The traefik dashboard is protected the same way.

I don't know much about OAuth (I seem to know less and less somehow). If I use it, does all external traffic have to go through it? I will have a Mosquitto broker which will be accessible externally, but I presume OAuth authorisation is not appropriate for that.

Also, with OAuth and node-red, how would I define who is allowed access to the node-red dashboard and flows?

On the port 80 problem: it is definitely forwarded to the machine, and I have checked the iptables and can't see anything that would stop it; I can see that Traefik/Docker have added rules to accept ports 80 and 443. In fact, I wouldn't get a 404 error if it were the firewall, would I? I have looked at the logs and don't see anything happen when I hit the http URL. Have you got any suggestions on how I can debug this?

Sorry to burst another bubble, but this is absolutely not true. Not only can and will bots discover domain names, but unless you have taken steps with your local firewall, access is still possible via IP address. Keep checking both the Cloudflare analytics and your local firewall/router logs.

Well, I can't get in by IP address, and I did say 'casual bots'; obviously the domain name can be found. It is irrelevant anyway: it is on an isolated network, and I keep tearing it down and rebuilding it while working on the ansible scripts to build it. Also it is a laptop that is suspended when I am not actually working on it.
Plus, I did say that authorisation was the next thing to do, which it is.


OAuth just lets Google handle the authentication, so you don't need to come up with new passwords and type them into YAML files or whatever. Once Google authenticates you, it sets a cookie in the browser with a key. That cookie enables the single-sign-on functionality.

The oauth container can be set to allow all, allow only a list of emails, or allow a list of domains.
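Assuming the repo uses thomseddon's traefik-forward-auth (the COOKIE_DOMAIN variable suggests it does), that choice is made with env vars on the oauth service; a sketch with made-up addresses:

```yaml
    environment:
      # allow only these specific accounts (comma-separated)...
      - WHITELIST=you@gmail.com,partner@gmail.com
      # ...or allow any account from a domain instead
      # - DOMAIN=mydomain.org.uk
```

If neither is set, I believe any authenticated Google account is allowed through, so you'll want one of them.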

Grafana, with a couple of env vars, will take the OAuth token, auto-create authorized users attached to the email they used for auth, and auto-assign them to an organization. If we could figure out how they did that, I would love that capability in node-red as well.

I do not protect all services behind OAuth; like you mentioned, MQTT clients can't authenticate that way, so they get the IP whitelist instead.

Get the traefik dashboard up. It will help you understand how traefik works, and you'll more easily see what is missing for your redirect.

When you see "middleware" in traefik, it's exactly that: something in the middle between the incoming request to the proxy and the service. You can have just one, like the IP whitelist, or you can make a chain of middlewares and then call that for your service. Middlewares can evaluate all the http headers that came in the request, the URL that was requested, and the client's source IP.
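A chain is itself just another middleware; a sketch as labels (all names here are illustrative):

```yaml
        # run the whitelist first, then the oauth forward-auth
        - traefik.http.middlewares.secured.chain.middlewares=my-whitelist,oauth-verify
        # attach the whole chain to the service's router
        - traefik.http.routers.nodered.middlewares=secured
```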

I highly recommend playing with the service "whoami", it really helps troubleshooting. Sometimes your source IP and headers are not what you think :wink:
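A minimal whoami service you could drop into the stack (the labels are assumptions; copy the router/entrypoint pattern from the repo's compose file):

```yaml
  whoami:
    # tiny web server that echoes back your source IP and request headers
    image: containous/whoami
    networks:
      - traefik-public
    deploy:
      labels:
        - traefik.enable=true
        - traefik.http.routers.whoami.rule=Host(`whoami.mydomain.net`)
        - traefik.http.services.whoami.loadbalancer.server.port=80
```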

Will do, thanks again.

I am trying to get the OAuth example in your repo working and I have a couple of issues.
The first is just a typo, I think; at line 140 or thereabouts is
- COOKIE_DOMAIN=$DOMAINN
I guess this should be
- COOKIE_DOMAIN=$DOMAIN
The second is that I can't get the .toml config to work.
In the instructions it says to create the config using
docker config create traefik-auth.toml
but I think the command needs a config name as well as the file name. Looking in the compose file, it has

    configs:
        - source: traefik-auth.toml
          target: /etc/traefik-auth.toml

which I deduced means the config is supposed to have the same name as the file, so I used
sudo docker config create traefik-auth.toml traefik-auth.toml
and now sudo docker config inspect traefik-auth.toml shows it. However, when I run stack deploy I get
service traefik: undefined config "traefik-auth.toml"
I have looked carefully for typos, retyped the line in case of unprintable characters, found examples online, and so on, but I can't see what is wrong with it.

I see the problem in my compose file, give me a few mins to fix it.
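For anyone following along: the usual cause of "undefined config" is that an external config must also be declared at the top level of the stack file, not just under the service. A sketch of the likely fix (an assumption until the repo is updated):

```yaml
configs:
  traefik-auth.toml:
    external: true   # created beforehand: docker config create traefik-auth.toml traefik-auth.toml

services:
  traefik:
    configs:
      - source: traefik-auth.toml
        target: /etc/traefik-auth.toml
```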