I am sick of the performance my Raspberry Pi 4 offers and have ordered a mini PC.
I want to move to a Proxmox container running on this machine. I am not sure whether I should put
Node-RED on Ubuntu 22 or on Windows 10 / 11. And I am not sure which files to move.
Do I take the ".node-red" dir and copy it over? Just take parts of it? Would that work on both systems (Ubuntu / Windows)?
Not sure that running in a container on a mini PC will actually be faster. You would need to take a careful look at the hardware and config. It would also be sensible to look at why the Pi 4 seems slow. The most common mistake is running a Linux desktop on the Pi alongside the server tasks. But doing the same in a container would probably result in similar issues.
Also, I'd question what a container would actually give you on a mini PC. It isn't as if it is hard to run and manage Node-RED and the associated servers natively. And note that you can't run Proxmox on top of Windows, nor Windows inside a Proxmox (LXC) container - you'd need a full VM for that.
As to Windows vs Linux. For this kind of task, a Linux server will ALWAYS win out unless you have significant commitment to management of Windows environments.
To migrate, you just need the userDir folder (usually ~/.node-red). But you can leave behind the node_modules folder as you need to rebuild that anyway. So just copy everything else to the right location on the new device, change to the new folder and run npm install.
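A minimal sketch of that migration, assuming the default userDir location. The paths below are throwaway placeholders so the example is self-contained; in real use you would substitute ~/.node-red and copy across the network (rsync or scp work just as well as the tar pipe shown here):

```shell
SRC=/tmp/old-node-red           # stands in for ~/.node-red on the Pi
DEST=/tmp/new-node-red          # stands in for ~/.node-red on the new box

# Fake a userDir so the example runs anywhere:
mkdir -p "$SRC/node_modules/some-contrib-node" "$DEST"
echo '{}' > "$SRC/flows.json"
echo '{}' > "$SRC/package.json"

# Copy everything except node_modules, which npm will rebuild:
tar -C "$SRC" --exclude='node_modules' -cf - . | tar -C "$DEST" -xf -

ls "$DEST"
# On the real target you would then run:  cd "$DEST" && npm install
```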
I only run the desktop on the Pi to connect via VNC if needed. As far as I know, the Linux desktop itself should just eat some memory. There are no other tasks running besides WireGuard, which runs but is not in use while my problems occur. Node-RED sometimes loses the connection without errors in the logs. I use a massive amount of flows. The mini PC has much better specs: 32 GB RAM, roughly 5x faster single- and multi-core.
I would like to use Proxmox containers because I hope there will be enough power to run some other tasks like Nextcloud, Pi-hole or a Telegram bot, and to have nice, fast rollbacks if something goes wrong.
I think we've demonstrated multiple times on this forum that running a Linux desktop on a device with constrained resources is a bad idea that kills performance of the server.
Also, you don't need to run the desktop to use VNC - at least if you've got Docker - that's what I do on my home server which is an old Lenovo laptop (8GB RAM, i5) running Debian Buster. Start the container up if you want to remote to a desktop, otherwise leave it switched off. In any case, after a while of running a headless Linux device, you'll realise that the desktop delivers almost nothing in terms of usability (assuming that you have a real desktop somewhere to use). And if you are planning to get a mini-PC anyway, leave the remote desktop on the Pi and leave it off the mini-PC - getting the best of both.
Well, you've a point there. I use Docker just for things that would be complex to manage otherwise: namely the VNC remote (which I virtually never use, since I can run a Hyper-V VM on my desktop anyway and have both a Windows environment and a Linux desktop configured) and the UniFi Controller for my two meshed Ubiquiti WiFi APs. The controller doesn't really need to be running full time either, though the server has plenty of oomph and I generally leave it running.
I have Node-RED, Mosquitto, InfluxDB, Grafana, Zigbee2MQTT, Telegraf, and Docker (oh, and in the last few days I gave in and now run Samba as well) running natively on the server. Each of those is trivial to run, update and manage natively - I just keep a backup of /etc/ and my home folders, plus some config/install notes in a digital notebook (Microsoft OneNote). So a full rebuild is pretty easy, though that only happens if I'm moving to a new generation of hardware. That might happen every 5 years or so. Maybe less often now I'm on the laptop.
I had a few problems with ill-advised changes in Mosquitto (their failure, not mine), which were easily resolved, and with Zigbee2MQTT (their upgrade process is a bit pants and I ended up with a dead config a couple of times until I put together my own upgrade script). On the very rare occasion that a major Node-RED upgrade causes an issue, I've both local and remote backups so I can simply restore; npm makes version changes trivial anyway.
Personally, I prefer the simplicity and stability of Debian, but that is just personal preference. I like that Debian is also close to Raspbian (which is built on Debian), which is where I started with Node-RED and MQTT. I also run Debian on Hyper-V on the desktop and in WSL on the desktop too; I've also had various VPSs running Debian, so I'm most familiar with it. Ubuntu, while nominally based on Debian, has enough differences these days to matter, and I really don't like them pushing things like Snap, which I think experience shows is just a distraction for now.
I am sick of the performance my Raspberry Pi 4 offers and ordered a Mini PC.
That's interesting, because there are people who have Pis running more than 25 Docker containers (and have ditched their servers for it), and there is also Proxmox for the Pi (called Pimox) that people run without too many issues. The key is, just as @TotallyInformation states: don't install a desktop if you use it as a server.
So I should just try to disable the desktop via raspi-config and that's all? (Just to make sure I got this right.)
Just to be clear:
I saw posts in other forums saying "the desktop can run, it just eats a little RAM". I use the desktop only very seldom, to do some setup stuff via VNC viewer. So what is the reason it should cause problems?
Yes, correct, it changes the startup to - oh, I can never remember the proper name - but it just starts up the OS, network and non-desktop systemd services, then stops. Of course, that still means you have all of the desktop files installed, which is a bit of a pain since you will forever be updating them - you DO run a periodic sudo apt update && sudo apt upgrade, I hope? But it also means you can reinstate the desktop if you ever want to.
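For reference, a sketch of the commands involved; the "proper name" here is the systemd default boot target. The raspi-config option codes are taken from current Raspberry Pi OS builds, so check yours before relying on them:

```shell
# Boot to console instead of the desktop:
sudo systemctl set-default multi-user.target

# ...or non-interactively via raspi-config (B1 = console, no autologin):
sudo raspi-config nonint do_boot_behaviour B1

# To reinstate the desktop later:
sudo systemctl set-default graphical.target
```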
Personally, I don't ever install the desktop but that's just me.
Hmm, well if you look at the architecture of the Linux desktop, you will quickly see why it is a monster. It has many interconnected layers of software, all developed by different people and teams. It is frankly amazing that it runs at all! Running all of that on a constrained device is going to occupy CPU threads and memory, and possibly make heavy use of the slow SD-card storage interface on the Pi.
As I say, you can get a slow but usable VNC remote desktop by running a suitable Docker container for the odd occasion where you want it.
Thanx for your answer!
I disabled the desktop and that didn't help. Then I switched the Pi over from WiFi (AC) to wired LAN, which helped a little. But I found a bug with my cam flow and multiple connected clients (other thread). Currently I'm working on a solution for that.
You speak about connection errors. Do you mean that your Node-RED dashboard in the browser loses the connection from time to time?
If so, this happens because Node-RED is busy processing some very CPU-intensive nodes in one of your flows. I would recommend identifying those nodes first and checking whether something can be done to improve this.
Note that Node-RED is based on Node.js, which is in essence a single-threaded application. So if the execution of a certain node is very CPU intensive, this might make your dashboard unresponsive and eventually result in a "connection lost" error.
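A tiny illustration of that single-threaded behaviour (requires Node.js on the machine; the 200 ms busy loop stands in for a CPU-heavy node, the 10 ms timer for the dashboard's websocket keep-alive):

```shell
node -e '
const start = Date.now();
// The timer stands in for a websocket ping the dashboard is waiting on:
setTimeout(() => {
  console.log("timer wanted 10 ms, actually fired after",
              Date.now() - start, "ms");
}, 10);
// Simulated CPU-intensive node: block the event loop for ~200 ms.
while (Date.now() - start < 200) {}
' > /tmp/event-loop-demo.txt
cat /tmp/event-loop-demo.txt
```

The timer cannot fire until the busy loop releases the (single) thread, so it reports roughly 200 ms instead of 10. Scale that up to seconds of ffmpeg work inside a node and the browser gives up and reports a lost connection.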
If ffmpeg is converting between formats, yes, that is very CPU intensive. But if it is just copying streams, there is no hard load. For example, if you present RTSP streams or HTTP Live Streaming (HLS) playlists, it works very well even on an RPi when using the nodes developed by @kevinGodell.
In case you need to convert formats with ffmpeg, you should use the GPU, and then an RPi might not be the best choice.
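To illustrate the difference, a hedged sketch (the camera URL and filenames are placeholders; `h264_v4l2m2m` is the Pi's hardware H.264 encoder in recent ffmpeg builds, so availability depends on your ffmpeg version):

```shell
# Stream copy: just repackages the bitstream, almost no CPU load
ffmpeg -i rtsp://camera.local/stream -c copy -t 10 copy.mp4

# Software transcode: very CPU intensive on a Pi
ffmpeg -i copy.mp4 -c:v libx264 transcoded.mp4

# Hardware-assisted transcode on a Pi (the VPU does the encoding)
ffmpeg -i copy.mp4 -c:v h264_v4l2m2m hw.mp4
```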
Sorry, I didn't express myself clearly. That is exactly what I meant.
So the browser reports a lost connection because Node-RED running on your Pi was busy executing a node (or several nodes) that used the CPU for a long time.
As Node-RED is in essence a single-threaded application, and as it appears that your Node-RED application is very CPU intensive (it would be good to confirm this by monitoring your CPU usage), you can also improve performance by splitting your Node-RED application into multiple Node-RED instances. This way you make use of the other CPU cores of your Raspberry Pi; otherwise you are only using one core while the other three are mostly idle.
Before going down that path, I would first check whether you can optimize the nodes in your flows that are so CPU intensive.
I think you are right that splitting Node-RED would give multi-thread power... but it feels wrong. I think load balancing is something for a new version of Node-RED. @knolleary is the only one who could say whether there is something planned in upcoming releases that helps here!?
That's what I do constantly. But it's not enough to get things smooth. My thought was that a mini PC with much higher single-thread performance could also be a solution. This is the performance compared to my current Raspberry - shouldn't that help boost things up:
Based on your input I assume that your Node-RED application is CPU bound (using 100% of a single core), in which case switching to the AMD Ryzen would indeed have a big impact on responsiveness, as it is about 4 times faster.
... but note that the AMD Ryzen is power hungry, so if you keep it running continuously, that is something to consider as well.
As a Raspberry Pi 4 has 4 CPU cores, you could also consider splitting your Node-RED application into 2, 3 or 4 Node-RED instances and dividing the CPU-intensive workload amongst them.
Of course that requires a bit of a redesign, as you need to split the workload amongst several instances, and most likely those instances also need to communicate with each other.
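A hedged sketch of what that split could look like on one Pi, assuming Node-RED is installed globally. The `--userDir` and `--port` flags are standard node-red command-line options; the userDir names below are made up, and the MQTT glue between the instances is up to you:

```shell
# First instance: dashboard and lightweight flows on port 1880
node-red --userDir ~/.node-red-main --port 1880 &

# Second instance: the CPU-heavy flows on port 1881, so they land on
# another core and cannot stall the first instance's event loop
node-red --userDir ~/.node-red-heavy --port 1881 &

# The instances can then exchange messages via a local broker such as
# Mosquitto, using MQTT-in/MQTT-out nodes in each flow.
```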
Hi! I don't want to make things more complicated by adding another alternative. However, I was facing a similar problem (what to choose, how…), and I ended up with the following, very satisfying scenario:
I bought a Synology DS920+ with a 20 GB memory extension. Now I'm running on it:
- Home Assistant in a Docker container
- Node-RED as an add-on in Home Assistant
- a Debian client in a VM
- UniFi Cloud Controller in a Docker container
- built-in cloud sync and backup services
Performance is awesome, plus you get all the convenient and useful services (backup etc.) that come with a NAS.
All software installation is more or less plug-and-play.
Maybe this scenario is useful for someone. Happy to answer any questions.
PS: I forgot to mention that Home Assistant is my central integration platform for all devices and integrations, plus it provides the UI for my smart home. Node-RED is my automation platform, which has seamless access to all entities provided in Home Assistant. Very handy!