Running external Python script fast

Hi,

I am working on a flow which runs on a Raspberry Pi 4B. The flow occasionally calls external Python scripts which interact with various HATs (reading analog signals, communicating via Modbus and CAN). The whole thing, or at least parts of it, should run at about 50 Hz. Not exactly, but in that range.

The problem is that each call to a Python script adds about 20-40 ms of delay, which is way too much.
I tried to simplify the scripts to the minimum and tried both the exec node and the Pythonshell node.
Simplifying improved the situation a little, but I am still far from the desired 5 ms jitter.

Has anybody tried to do something similar?
Any suggestion is greatly appreciated.

Don't design it like that. Instead, have a loop inside the script itself that takes the readings at the speed you need and then communicates the results back to NR, possibly after some calculation already done in the script. If you do not need to send every reading back, you could send only changes or averaged values. 50 Hz means one reading every 20 ms.
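Roughly something like this (just a sketch; the read function and the one-second averaging window are placeholders you would replace with your own HAT code):

```python
import time

PERIOD = 0.02                      # 50 Hz -> one reading every 20 ms

def read_adc():
    return 0.0                     # placeholder for the real HAT read

samples = []
next_tick = time.monotonic()
while True:
    samples.append(read_adc())
    if len(samples) >= 50:         # once per second, send an averaged value
        print(sum(samples) / len(samples), flush=True)   # stdout is picked up by the exec node
        samples.clear()
    next_tick += PERIOD
    time.sleep(max(0.0, next_tick - time.monotonic()))
```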

If the delay while the script starts up from the exec node is a problem, perhaps it would be better to have it running all the time (via systemd, with auto restart).

When you want the readings, send it a message, perhaps by MQTT, to trigger the read/report loop.
Return the batch of readings by MQTT too.
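A rough sketch of that pattern, assuming the paho-mqtt package and made-up topic names (sensors/trigger, sensors/batch):

```python
import json
import time
import paho.mqtt.client as mqtt

def read_all():
    return {"temp1": 0.0, "temp2": 0.0}      # placeholder for the real HAT reads

def on_message(client, userdata, msg):
    # a message on the trigger topic starts one read/report cycle
    batch = []
    for _ in range(50):                      # one second's worth at 50 Hz
        batch.append(read_all())
        time.sleep(0.02)
    client.publish("sensors/batch", json.dumps(batch))

client = mqtt.Client()                       # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("sensors/trigger")
client.loop_forever()
```

Doing the reads directly in the callback blocks the MQTT network loop for the duration of the batch; for longer batches you would move that work into its own thread.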

Great hints, thank you!

I redesigned the whole thing as suggested: the Python script is started by an exec node, runs continuously, and communicates with the nodes via MQTT. The delays are much better; a proportional controller in the script keeps the loop timing in the millisecond range.
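In case it helps anyone else, a proportional correction of the sleep time can look roughly like this (not my exact code; the gain and period values are just illustrative):

```python
import time

TARGET = 0.02            # 20 ms loop period (50 Hz)
KP = 0.5                 # proportional gain, illustrative value only

def do_one_cycle():
    pass                 # placeholder: read the HATs, publish via MQTT

sleep_time = TARGET
last = time.monotonic()
while True:
    do_one_cycle()
    now = time.monotonic()
    error = (now - last) - TARGET        # how far the last period was off target
    sleep_time = max(0.0, sleep_time - KP * error)
    last = now
    time.sleep(sleep_time)
```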

The new difficulty is that I can't reliably stop and restart the script.
I can kill it by its PID, but when I restart it, the MQTT links don't work, despite showing a "connected" status.

How could I improve this?

As this is long-running, node-red-node-daemon may be a better way to run it.

Without knowing the internal design of your Python script, I can only tell you how I have implemented it in all my own scripts. Most of my scripts also have several threads running, and it is very important that those are also stopped when you stop the script; otherwise they will be left "hanging" in your system until the next reboot. I also use MQTT in all of them, and I can send a "stop" command via MQTT from Node-RED that aborts the threads, after which the script terminates itself. This has worked perfectly for years.
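In outline it is something like this (simplified, with a made-up control topic, using the paho-mqtt client):

```python
import threading
import time
import paho.mqtt.client as mqtt

stop_event = threading.Event()

def worker(name):
    # every thread checks the event so it can exit cleanly
    while not stop_event.is_set():
        time.sleep(0.02)                 # placeholder: read a sensor, publish the value

def on_message(client, userdata, msg):
    if msg.payload.decode() == "stop":   # "stop" sent from Node-RED
        stop_event.set()

client = mqtt.Client()                   # paho-mqtt 1.x style constructor
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("script/control")       # made-up control topic
client.loop_start()

threads = [threading.Thread(target=worker, args=(n,)) for n in ("temp1", "temp2")]
for t in threads:
    t.start()

stop_event.wait()                        # block until the stop command arrives
for t in threads:
    t.join()                             # make sure no thread is left hanging
client.loop_stop()
client.disconnect()
```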

You can set up a systemd job to run the script.
Then sudo systemctl enable script will start it up when the computer boots.

From Node-RED, an exec node can stop it via
sudo systemctl stop script
or start it again via
sudo systemctl restart script
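A minimal unit file for such a service could look something like this (the service name, user and paths are just examples):

```ini
# /etc/systemd/system/script.service
[Unit]
Description=Sensor reading script
After=network.target

[Service]
ExecStart=/usr/bin/python3 /home/pi/sensors.py
Restart=always
User=pi

[Install]
WantedBy=multi-user.target
```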

Perhaps it's not as nicely contained as using node-red-node-daemon, but it works well for me on a Raspberry Pi.

… only if the user you are running as has sudo privileges (which the pi user has by default on a Raspberry Pi, but not on most systems).

Thank you all for the prompt replies.

I went for the NR daemon node; it worked perfectly on the first try. I also feel it will be easier to maintain, as it stays internal to NR.

Currently the NR daemon node starts a single Python script, which has a couple of separate threads (e.g. measuring temperature 1, temperature 2, etc.).
I wonder if it would make sense to use separate daemon nodes for them instead. What is your opinion?

If you are sure that the threads are also terminated when you terminate the main script, it is fine. Otherwise, if threads are left hanging in the system, they will not release the memory they occupy, and starting a new instance of the script will start yet more threads. Call it a memory leak or whatever you prefer.
