Node-RED in Docker: access to GPIO

I have been running Node-RED on an RPi3B for a while.
Since many of my IoT services have been ported to Docker, I thought it was time for Node-RED as well. With

docker run -d -it --restart=always --name nodered -p 1880:1880 -v /home/pi/Docker-data/nodered/data:/data nodered/node-red:latest

I get almost what I want, most of my flows work as expected.

But the nodes "node-red-contrib-dht-sensor", "node-red-contrib-ds18b20-sensor" and "node-red-node-pi-gpio" do not get access to GPIO, so I cannot control relays or read
the sensors connected to the GPIO pins. There seem to be missing permissions:

bcm2835_init: Unable to open /dev/gpiomem: Permission denied

I have tried running with the --device /dev/gpiomem flag and also with --privileged, without luck.
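For concreteness, this is the kind of invocation being attempted, with one commonly suggested addition: on Raspbian, /dev/gpiomem belongs to the gpio group, so the container user usually needs that group id as well. A sketch only, not a verified fix:

```shell
docker run -d --restart=always --name nodered \
  -p 1880:1880 \
  -v /home/pi/Docker-data/nodered/data:/data \
  --device /dev/gpiomem \
  --group-add "$(stat -c %g /dev/gpiomem)" \
  nodered/node-red:latest
```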

How should this be solved?

As usual, it is all about reading the docs...

So I have followed the instructions and got general GPIO digital read/write working.
But DHT and DS18B20 are still not available. Maybe this is well documented on each node's description page, but I can't figure out how to solve it.
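For reference, the approach the Node-RED Docker documentation describes for Pi GPIO is to keep the hardware access on the host: run the pigpiod daemon there and use node-red-node-pi-gpiod nodes inside the container, pointed at the host's address. A rough sketch (the bridge IP below is an assumption about the default Docker network):

```shell
# On the Raspberry Pi host (pigpio ships with Raspbian):
sudo systemctl enable pigpiod
sudo systemctl start pigpiod

# In the container's Node-RED editor, install node-red-node-pi-gpiod
# and configure its nodes with the host's address, e.g. 172.17.0.1
# (the default docker0 bridge gateway), default port 8888.
```

This sidesteps device permissions entirely, since only a TCP connection leaves the container.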

Please help...

I think you would need to dig into the nodes to see how they actually talk to the device - and then try to allow those access paths out through the container... maybe /dev/mem or I2C or... In general, if you really need low-level hardware access, then containers may not be the best way.

Thanks, dceejay!
I don't HAVE to use Docker, so I guess I will continue with the standard installation.
Or maybe I'll find another way to read the sensors outside Node-RED and use MQTT to transfer the data.

But if anyone has had success with this issue, please let me know.

Yes - there may well be a way - though I always feel that if I have to open up that many holes, then what is the point of the containerisation? The host "owns" the IO, so it should virtualise it and then share it back into the container (or as many containers as you like) as necessary.

One reason is to have simple access between several containers running on the same host: if they've been set to run on the same Docker network, each container can be reached using its container name as its hostname. I do, however, agree with what you said.

Yes. Though it's generally much easier to get network connections in and out than low-level drivers.

Thanks for your inputs!

The main reason for me to consider a Docker approach (besides getting even more familiar with Docker) is that my RPi will be located remotely, with low bandwidth and no regular physical access.

So if something goes wrong, I guess it would be easier to replace a container than to reinstall Node-RED or restore from a Raspbian/SD-card backup.

But for now I will use a regular RPi installation (with Docker and Portainer running alongside), and perhaps do some lab tests if someone points out how I could make the Docker alternative work with sensors etc., as mentioned in my first post. If I succeed, it should be easy to replace the standard Node-RED installation with a Docker version.

While of course everything should be running in Node-RED... :sunglasses: ... sometimes it may be easier to find some simple standalone scripts that read the sensors and (maybe) publish to MQTT or something. Then just run those on the host and link to the containers via MQTT etc.
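That standalone-script idea can be sketched for the DS18B20, which Raspbian exposes through the 1-Wire sysfs interface once the w1-gpio overlay is enabled. The MQTT topic and broker address below are assumptions, and the DHT family has no sysfs interface, so it would need its own reader:

```shell
#!/bin/sh
# Hypothetical host-side script: read each DS18B20 via the 1-Wire sysfs
# interface and publish the temperature with mosquitto_pub.

# Extract the temperature in degrees C from the w1_slave file contents;
# the raw value appears after "t=" in millidegrees.
parse_temp() {
  t=$(printf '%s' "$1" | sed -n 's/.*t=//p')
  awk -v t="$t" 'BEGIN { printf "%.1f", t / 1000 }'
}

for dev in /sys/bus/w1/devices/28-*/w1_slave; do
  [ -r "$dev" ] || continue
  temp=$(parse_temp "$(cat "$dev")")
  # Broker and topic are placeholders; adjust to your setup.
  mosquitto_pub -h localhost -t "sensors/ds18b20" -m "$temp"
done
```

Run it from cron (or a systemd timer) on the host, and subscribe to the topic from an MQTT-in node in the containerised Node-RED.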

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.