I'm certainly not against it, but for that to happen, Nick has to approve. And with the other proposals I'd like him to look at, I'm already giving him a lot of work!
I'm really interested in this topic because I fully understand the need.
I'm using some subflows myself with a complex mechanism that turns them into almost-singleton subflows (rough sketch after the example below):
- a lock/unlock mechanism inside the subflow, with the lock stored in a flow/global context object
- all the input state needed is stored in that same flow/global context object
- if the subflow is not locked, lock it and process the flow
- if the subflow is locked, only store the input in flow/global context so the already-running flow can use it
As an example usage: I get events from presence sensors and light controls, and I need a subflow to manage automatic lights based on the presence sensor values, taking as input an object with a sensor/light pair. Obviously I want to reuse the same subflow for different sensor/light pairs, and I also want to reuse it for new events related to the same sensor/light pair.
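Roughly, the entry Function node of such a subflow could look like the sketch below (the context keys `lightsLock` / `lightsQueue` and the payload shape are just made up for illustration, not my real flows):

```javascript
// Entry Function node of the subflow (illustrative names only).
// One lock and one input queue per sensor/light pair, kept in flow context.
const pair = msg.payload;                      // e.g. { sensor: "hall_pir", light: "hall_lamp" }
const key  = pair.sensor + "|" + pair.light;

const locks  = flow.get("lightsLock")  || {};
const queues = flow.get("lightsQueue") || {};

if (!locks[key]) {
    // Not locked: take the lock and let this msg run through the subflow.
    locks[key] = true;
    flow.set("lightsLock", locks);
    return msg;
}

// Already locked: only store the new input so the running flow can use it.
queues[key] = queues[key] || [];
queues[key].push(msg.payload);
flow.set("lightsQueue", queues);
return null;                                   // nothing forwarded, the input was only queued
```

The exit of the subflow then does the mirror image: release the lock, or pull the next queued input and loop it back in.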
Practically, a single-instance subflow could be done by adding a subflow option "single instance by calling node" that binds the subflow instance to the id of the node that calls the subflow. If the option is checked, there is only one subflow instance that can be used and reused from the same caller node.
If multi-binding is needed, that means a subflow instance variable is needed to act as the subflow instance id.
Both could be merged into a single option if there were a variable that represents the parent node id.
It would result in a single instance per caller node (using $caller_node_id as the value) or a single named instance that can be shared by several caller nodes (using an instance name).
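Just to illustrate the keying idea (this says nothing about how Node-RED would actually implement it), the two options only differ in which key selects the shared instance:

```javascript
// Conceptual only: which key identifies "the" single instance of a subflow.
// - bound to the caller node:   key = the caller node id
// - shared named instance:      key = an instance name set on the subflow
function singletonKey(callerNodeId, instanceName) {
    return instanceName || callerNodeId;
}

// e.g. singletonKey("abc123.def", undefined) -> one instance per caller node
//      singletonKey("abc123.def", "lights")  -> one shared "lights" instance
```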
I too struggled at first to understand the need for singletons, but after creating a complex NR install architecture with many nested subflow instances (totaling many hundreds of nested nodes) and seeing how Node-RED struggles with post-deployment processing (diffing and restarting all of those node instances individually), the desire for them became clear.
Since link call nodes can't link into or out of a subflow, and I'm avoiding flow context almost entirely in my install, I created my own singleton architecture using the brilliant SubLink community node, which allows linking into/out of subflows.
I converted my two heaviest subflows (in reuse and internal node count) to use SubLinks instead of input + output nodes, and used a minimal wrapper that filters the SubLinks on the ingress by subflow template name (saved into the properties of the heavy subflows) and on the egress by the wrapper subflow ID. To handle nesting, I use a simple return stack populated with those wrapper subflow IDs. I created a new flow, Singletons, that contains only the singleton subflow instance for each (and for any future subflows I convert to optimize performance). Works great, and it reduced my post-deploy processing time from 14 seconds (until the flows file is available and triggers can be processed) to 2 seconds.
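The return stack itself is the simple part; in Function nodes it is roughly something like the sketch below (the property and env-var names are only illustrative, and the SubLink configuration itself isn't shown). On the wrapper ingress:

```javascript
// Wrapper ingress (Function node): remember who is calling by pushing this
// wrapper instance's id onto a return stack carried on the msg.
// WRAPPER_ID is a subflow property (env var) set on each wrapper instance.
msg._returnStack = msg._returnStack || [];
msg._returnStack.push(env.get("WRAPPER_ID"));
return msg;
```

and on the singleton egress:

```javascript
// Singleton egress (Function node): pop the stack and expose the id that the
// egress filter switches on, so the msg is routed back to the right wrapper.
msg._returnTo = (msg._returnStack || []).pop();
return msg;
```

Nesting then just works, because the innermost wrapper is popped first.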
Link call can reach out from a subflow
Oh no, did I waste my time??
It is implied (by exclusion) in the built-in help - but yes, it absolutely could be clearer!
Yeah that tripped me up. Basically, the link call can reach out to a flow, and msg will return into the subflow, right?
Correct, but to be ultra precise (unlike the docs): the link call can call out from within a subflow and reach a link-in node on a regular flow, and the msg will return to the calling link-call node in the subflow instance. This means multiple instances can make use of the "singleton"/"subroutine".
My solution is likely overengineered. Nevertheless, I can reuse its architecture (using the return stack to handle config for nesting) in a new version built on the native Node-RED link call feature. In case anyone's curious, I'm wrapping the link call node in a subflow that takes a subflow property to define the target and config for a given subroutine flow call.
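Concretely (just a sketch, with my own made-up property names), the wrapper is a Function node feeding a link call node set to its dynamic target mode, which in recent Node-RED versions reads the target link-in name from msg.target:

```javascript
// Function node just before a Link Call node set to "dynamic" target mode.
// TARGET and CONFIG are subflow properties (env vars) on the wrapper instance.
msg.target = env.get("TARGET");   // name of the link-in node of the subroutine flow
msg.config = env.get("CONFIG");   // per-call config handed to the subroutine flow
return msg;
```

Each wrapper instance then only differs in those two properties.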
Okay, I made the conversion away from my custom SubLink-based singleton solution to a Link Call-only approach. However, I'm noticing the optimization is poorer with Link Calls, despite a lower overall node count: 2 wrappers instead of 3, and no dummy subflow instances needed, but 2 more flows.
Post-deployment processing (i.e. after the deploy finishes but before the flows file can be committed and/or triggers clicked) has slowed from 2 s to 4 s, a 100% increase. Are flows a lot heavier to process than subflows, or something?
Yep, I just reverted to my manual singleton subflow architecture, and post-deployment processing is back to 2 seconds vs 4 seconds using link call. So I'll stick with my custom solution for now.
If anyone has any idea why the flows take longer to process (restart, presumably) than the subflows, I'm all ears. I'm especially interested to know if there's some other tradeoff involved here that I don't know about (other than a little more complexity managing a custom solution).