Avoiding race condition in flow context?

Hi folks!

I just started using Node-RED and I have a simple question: how can I avoid race conditions while manipulating the flow context? Is there some kind of lock mechanism for data stored in the different contexts, so that only one node can work on it at any given time?

I want to store a list of timers in the flow context, where one node would be responsible for inserting new items into the list (or updating existing ones), and another node would be responsible for continuously checking this list for expired timers, acting on them, and removing them from the list.

My current implementation works flawlessly, but I'm afraid it is mostly a matter of luck that it hasn't produced any errors or false behaviour yet, with both nodes manipulating the same context variable at roughly the same time. Both nodes get the current context value, make changes to it, then set it back - and this is probably risky.
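The risky pattern can be sketched like this in plain JavaScript, with a Map standing in for the flow context (the names here are illustrative, not Node-RED API):

```javascript
// Illustrative sketch of the get/modify/set pattern described above.
// A plain Map stands in for Node-RED's flow context store.
const flowContext = new Map();

function addTimer(timer) {
    // 1. get the current value
    const timers = flowContext.get("timers") || [];
    // 2. modify the local copy
    timers.push(timer);
    // 3. set it back - if anything else updated "timers" between
    //    steps 1 and 3, that update is silently overwritten
    flowContext.set("timers", timers);
}

addTimer({ name: "bathroom_auto_off", expiresAt: Date.now() + 60 * 1000 });
console.log(flowContext.get("timers").length); // 1
```

As far as I understand, with the default synchronous context store this sequence cannot actually be preempted mid-function, but the lost-update hazard becomes real as soon as an asynchronous context store or an await sits between the get and the set.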

What would you suggest to avoid this problem?
Thanks in advance!

Hi @Semmu, welcome to the forum.

It kinda sounds like you might have fallen into the common trap of programming Node-RED like a regular language (as opposed to the flow paradigm) & there are potentially other, more "Node-RED-like" ways of structuring your flows.

Of course this is speculation without a closer look at your flows.

To put a little meat on that, why store timers in context when you could use an inject or CRON node? I've never had to do that. :thinking:

Often when people use a lot of context, there is an alternative means like passing variables in the msg properties instead of getting and setting context.

PS, a good sign of this: if you set something in context & later in the same flow get that something back from context, you should be passing it in the message instead.
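As an illustrative sketch (plain functions standing in for function nodes, not Node-RED API), the difference is between stashing a value in context and carrying it on the message:

```javascript
// Carrying state on the message instead of in context (illustrative).
// Each function stands in for a function node in the flow.
function pickTimer(msg) {
    msg.timerName = "bathroom_auto_off"; // travels with the message
    return msg;
}

function buildCommand(msg) {
    // a downstream node reads the value straight off the message,
    // so no shared context (and no race) is involved
    msg.payload = { start: msg.timerName };
    return msg;
}

const out = buildCommand(pickTimer({ payload: {} }));
console.log(out.payload); // { start: 'bathroom_auto_off' }
```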

Again, this is speculation without seeing your flows.

Thanks for the quick reply!

I agree with you that my flow/algorithm feels a little weird in the flow-based programming world, but I actually couldn't figure out a different solution that would match the concept better and feel more native. So here I am :smiley:


Some details about why I need it:

I'm writing a simple "library" to define Zigbee events in a declarative way, i.e. you can define all your automations in a single yaml file and my code handles the rest (the parsing, processing and also sending other messages if needed).
I just got tired of creating a flow for every single automation in my flat, so this project was born.

Using this, you can very easily create an automation which, for example, turns a lamp on and off when you press a button, as follows:

base_topic: zigbee2mqtt
automations:
- when: "LEDButton"
  condition: action == "on"
  send: { state: "on" }
  to: "LEDLights"
- when: "LEDButton"
  condition: action == "off"
  send: { state: "off" }
  to: "LEDLights"

(actually this will be further simplified very soon, you will be able to access the caught event in the send directive as well, reducing the number of definitions needed)

Now to make it even more powerful, I want to introduce the concept of timers. You list them in the yaml file as well: you name them, define the duration, and the action they should do when they expire (i.e. send a message to a Zigbee device).

timers:
- name: bathroom_auto_off
  duration: 60
  send: { state: "off" }
  to: "BathroomLights"

And I want to start, reset, and cancel these timers based on other Zigbee devices, meaning these timers will start and fire based on the actual Zigbee traffic in my network. For example:

automations:
- when: "BathroomMotion"
  condition: occupancy == false
  start: bathroom_auto_off

Here we start the bathroom_auto_off timer after we receive a message from the BathroomMotion device with the payload containing { occupancy: false }. It will expire in 60 seconds, and then it should send the { state: "off" } message to BathroomLights.


To achieve this, I need to store the active timers somewhere, and currently I'm using the flow context for it: every time something happens that should start or modify a timer, I create a simple object containing its identifier and when it should expire/fire, and I append this object to the list of currently active timers.

Then in a separate flow, which is triggered every second, I check this list, act on any timers that have expired, and remove them from the list.
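A minimal sketch of those two pieces (plain JavaScript, with an array standing in for the flow-context variable; all names are illustrative):

```javascript
// The shared timer list that both pieces manipulate.
const activeTimers = [];

// Piece 1: start or reset a timer (upsert by name).
function startTimer(name, durationSec, action) {
    const expiresAt = Date.now() + durationSec * 1000;
    const existing = activeTimers.find(t => t.name === name);
    if (existing) {
        existing.expiresAt = expiresAt; // reset: push the deadline out
        existing.action = action;
    } else {
        activeTimers.push({ name, expiresAt, action });
    }
}

// Piece 2: the once-per-second sweep - fire and drop expired timers.
function sweep(now = Date.now()) {
    const expired = activeTimers.filter(t => t.expiresAt <= now);
    for (const t of expired) {
        t.action();
        activeTimers.splice(activeTimers.indexOf(t), 1);
    }
    return expired.map(t => t.name);
}
```

startTimer would run whenever a matching Zigbee message arrives and sweep on the one-second tick; the race appears precisely because both functions read and write activeTimers.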

Because I couldn't come up with a better solution, I have to manipulate this shared data between two flows, where one could fire any time, and the other is running once each second.

I may be missing some clever, flow-programming style solution, but I can't come up with anything else :smiley:

I thought about actually sending a msg object for each timer and using a variable delay node to delay firing the corresponding action, but that solution lacks the possibility of resetting a timer (i.e. starting it again from the beginning) or cancelling it altogether while it is waiting in the delay node.
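One alternative worth sketching: instead of a delay node, keep setTimeout handles keyed by timer name, which makes reset and cancel straightforward (illustrative names; in a real flow the map would live in context):

```javascript
// Pending timers keyed by name (stands in for a context variable).
const handles = new Map();

function start(name, ms, onFire) {
    cancel(name); // starting an already-active timer resets it
    handles.set(name, setTimeout(() => {
        handles.delete(name);
        onFire();
    }, ms));
}

function cancel(name) {
    const handle = handles.get(name);
    if (handle !== undefined) {
        clearTimeout(handle);
        handles.delete(name);
    }
}
```

The trade-off versus a persisted context list is that setTimeout handles do not survive a redeploy or restart, so pending timers would be lost.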

By the way, in the meantime, I found a semaphore module, so I will probably use that to avoid the concurrent modification of my shared data. It should work without any issues, but I'm still interested in better solutions if you got any ideas!
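The idea behind such a semaphore can be sketched as a tiny promise chain that serializes the asynchronous read-modify-write sections (an illustration of the concept only, not the API of any particular semaphore node):

```javascript
// Serialize async critical sections by chaining them on one promise.
class Mutex {
    constructor() {
        this.tail = Promise.resolve();
    }
    // Run fn only after every previously queued fn has finished.
    run(fn) {
        const result = this.tail.then(fn);
        this.tail = result.catch(() => {}); // keep the chain alive on errors
        return result;
    }
}
```

Each flow would wrap its get-modify-set of the shared timer list in mutex.run(), so the two sections can never interleave even if one of them awaits in the middle.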


And I actually plan to release this thing once it is complete and I have been using it for a while without any issues :nerd_face:


Regarding the dynamic timers, the Cron-plus node is fully automatable. You can create, delete, pause, stop, and restart schedules (timers), add limits, and send whatever payload you want. It can also persist your dynamic timers across restarts.
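Working from memory of the cron-plus README (so double-check the node's docs for the exact command set and field names), the control messages look roughly like this:

```javascript
// Sketch of control messages for a cron-plus node - the command and
// field names here are from memory, verify against the node's README.
const startBathroomTimer = {
    payload: {
        command: "add",                 // create or replace a schedule
        name: "bathroom_auto_off",
        // a one-shot schedule firing 60 seconds from now
        expression: new Date(Date.now() + 60 * 1000).toISOString(),
        payload: { state: "off" },      // what to emit when it fires
        type: "json"
    }
};

// Cancelling would be another control message, e.g.:
const cancelBathroomTimer = {
    payload: { command: "remove", name: "bathroom_auto_off" }
};
```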

I too am a lazy programmer (is there any other type?) and I always look to automate instead of repeating the same mundane tasks - so I understand why. However, you have now moved from the friendly visual programming environment to a custom text file.

Having not looked at your code and flows, I can only speculate, but I suspect much of this could be simplified to an inject + subflow. I.e. inject those parameters into a subflow to create the dynamic logic. Meaning that instead of the dynamic part being a custom yaml-file modification, you add another subflow instance & an inject node with the parameters. I.e. you stay in the nice, friendly visual flow editor.

I am over simplifying but hope you grasp my train of thought.

My $0.02, FWIW:

The two approaches I personally use for flow synchronization are:

  1. join node in manual mode usually is sufficient
  2. only in rare cases, node-red-contrib-semaphore is required

"Proper" semaphores are necessary when paths arising from diverse asynchronous event sources converge.

In the more common case of a single flow splitting and then re-converging downstream, join is almost always sufficient.

Your approach appears to fall into the category where a semaphore is required. As you have obviously discovered for yourself, this treats static context as the resource being protected by the semaphore. As @Steve-Mcl said, that is not a very "flow-like" way of using Node-RED, but who are we to judge? (I personally am not a huge fan of the graphical paradigm and use Node-RED not because of its "flow-centric" design but because of its active community providing a rich ecosystem for my home automation needs. In fact, I tend to think about my flows' logic first through the lens of the functional programming paradigm and then translate such an algorithm into messages moving along paths through a Node-RED flow. Effectively, each message on each path is a lexical closure. The addition of the complete mechanism in Node-RED 1.x even brought a simplified form of continuation passing to its repertoire, There is an old adage to the effect that "all programming languages evolve in the direction of Scheme," but I digress...)

In that context (no pun intended), a very common pattern in my flows is to send a single message down two or more paths with distinct topics which are then recombined later using a join node set to convert a fixed number of incoming messages into a single payload using the topics as keys.

This allows each "parallel" path to fiddle with message properties to its heart's content and all of the message modifications can then be "summed" after the join. This "goes with the flow" (pun intended) of Node-RED's design where state is carried through a flow in message properties rather than using a lot of static context as global variables.

IMHO, static context is best used for data that needs to be persisted between asynchronous invocations of a flow, e.g. the current state of a physical device as captured from asynchronous state-change messages. This does appear to be your primary use-case. But even that can often be minimized through the "cheat" of mechanisms like retained MQTT messages that move the persistence of global state out of Node-RED entirely and into some other system better designed for that purpose.

My own approach to reducing the kind of redundancy your package is intended to solve supports the style recommended by @Steve-Mcl. I.e. another common pattern in my flows is to rely heavily on retained MQTT messages to persist state and to drive my flow logic using subflows whose behaviors are conditional on the values of both the components of the incoming topic and the payloads.

You probably already know this, but it is always worth emphasizing: whether you use a join-based approach or a semaphore, be sure to include the right catch nodes (or exception handlers in JavaScript functions) in the right places, so that the message counts meet the flow's requirements even in the case of runtime errors. Since Node-RED is single-threaded under the covers, deadlock can be even more deadly than on platforms with true concurrency.

HTH
