Is there much of a difference in resource usage between running Node-RED in Docker and a regular installation?

I have a Pi 3B+ in my boat that acts as an access point and has a few other duties, some of them through Node-RED. But I see that when running my Docker images it gets a lot slower than if I stop Docker, and it taxes the CPU a lot. I have one app (for Telldus Tellstick Duo) that has to run in Docker, so I can't get rid of it completely, but that is a very lightweight app. Will it lighten the load any if I move Node-RED to a regular installation instead of Docker, or is it about the same?

Most of us on the forum will generally recommend running Node-RED natively if you can; it takes the complexity of Docker out of the equation.

Unfortunately, I can't give you any hard numbers comparing in and out of Docker, since I don't run it in Docker myself.

What I can say is that it isn't hard to set up, run and maintain Node-RED natively. Of course, there are some maintenance tasks you should do regularly, and those may be slightly more involved than periodically updating your Docker image.

Node-RED is "just" a Node.js app and so can be managed like any other Node.js app. npm is the native tool for managing Node.js apps.

Running it in Docker should be no different at all from running it locally; both are just processes running on the same kernel.

The only possible difference would be the absolutely minimal overhead of iptables routing packets to the container.
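If you're curious, you can see those rules on the host; something like this should list the NAT entries Docker adds for published ports (assuming the default bridge networking):

    sudo iptables -t nat -L DOCKER -n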

Is there no difference at all in the amount of memory used? Seems unlikely.


Nope, it should be the same, with the option to cap the amount of memory a container can allocate (though I'm sure you could do the same to a "native" process; it's just easier with a container).
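A rough sketch of both, just to illustrate (flag names assume a fairly recent Docker and systemd; nodered.service is the unit name the Pi install script creates, so adjust if yours differs):

    # cap a container at 256 MB
    docker run -d --memory=256m --name nodered nodered/node-red

    # roughly the same for a "native" process via a systemd override
    sudo systemctl edit nodered.service
    #   [Service]
    #   MemoryMax=256M        # MemoryLimit on older, cgroup-v1 systems
    sudo systemctl restart nodered.service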

The question of whether there is "much" of a difference is hard to answer, because "much" isn't really measurable. Physically, though, both setups draw on the same pool of resources, so there must be some difference, however small it is in practice.

Within a docker compose file, you can limit the amount of resources a container can use.
For my two separate Node-RED instances, I have limited them to 1 CPU and 256 MB of RAM.

Depending on the flows you are running, you may need to increase the RAM value, but I have not had any issues so far.

    deploy:
        resources:
            limits:
                cpus: "1"
                memory: 256m
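If I remember correctly, those deploy: limits are only honoured by the newer docker compose (v2) command; older docker-compose versions ignored them outside swarm mode. Either way, you can check that the cap actually took effect:

    docker stats --no-stream
    # look at the "MEM USAGE / LIMIT" column for the Node-RED container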

Thank you all for the interesting discussion! :+1: I will limit the memory usage; I think that's the problem. It's using too much for some reason or another and forcing the Pi to swap. I will also move all the flows I can from that Pi to the main Pi in the boat, a Pi 4 with 8 GB of RAM (if I remember correctly), which has lots of resources to spare.
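My plan is to cap both the container and the Node.js heap itself, so Node-RED gets garbage-collected well before the Pi starts swapping. Something like this, using the official nodered/node-red image and the standard NODE_OPTIONS variable (the 256m/192 values are just a first guess):

    docker run -d --memory=256m \
      -e NODE_OPTIONS="--max-old-space-size=192" \
      -p 1880:1880 -v node_red_data:/data nodered/node-red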

Oh, and I was running Node-RED for about five years as a standalone Node.js app, but decided to consolidate everything I had into Docker containers to simplify updates, and to use a leaner SD card image, with the Docker system files, Docker Compose files and data files for the apps on a separate SSD. So on the main Pi in the boat I'm running Node-RED, Glances (monitoring the Pi's resources), Home Assistant and SignalK (boating software) containers and ESPHome, and once a month I just stop them, run docker system prune -a to remove the images and restart them, upgrading everything to the latest image. So far that has worked very well.
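For what it's worth, with everything defined in a compose file that monthly routine boils down to a few commands (this assumes the Compose v2 docker compose plugin):

    docker compose pull       # fetch the latest images
    docker compose up -d      # recreate only the containers with a newer image
    docker system prune -af   # remove the old, now-unreferenced images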

Edit: BTW, here is the top result on the Pi; isn't Node-RED a bit heavy on the memory here? I only have three small flows, mostly MQTT stuff, and a reboot every night at 04:00.