Just some questions and thoughts before I change anything in my working flow. I have a "simple" subflow as seen below. Inside it I have the tf coco ssd node. The flow works very well, but since the subflow is instantiated multiple times I could save memory (RAM) if I could instead place the tf coco ssd node outside of the subflow, shared on a common tab, and call it on demand
With the newly released Link Call node, I can see an opportunity to optimize my flow: making a "subroutine" on a tab with a single tf coco ssd node, loading the model only once. Would this reduce the overall performance if there are many images to be analyzed "at the same time"? I mean, since NR is single threaded, would having several models loaded improve performance compared to just one that is shared?? I'm a bit worried that making this change would degrade the overall performance in those situations where many images are analyzed "in parallel"
Well, yes, true. But I thought there was some theory behind it somewhere
If several models are loaded, one for each instance, would that mean the execution in the underlying library happens "in parallel" for each loaded model once they received the data to crunch?? Or do they block each other in some way?
Unfortunately I found that I cannot use the Link Call node as I hoped: included in a subflow, it cannot reach Link nodes outside of the subflow
EDIT: I already know I could do the trick using another output in the flow and adding filtering after the input for the result, routing it further. But from a design perspective, I thought using the Link Call node would be more elegant. If I recall correctly, @Steve-Mcl is the author of the Link Call node?? Hope he can make it work in a subflow so it can reach Link nodes on the "outside" as well
No, I was in the discussions & possibly the loudest supporter but Nick wrote it.
I don't think that is a good idea (and I can't explain why right now ;P)
To test your idea, you would move (copy) your internal logic from the subflow to a separate (standard) flow tab, then "call it" instead of instantiating another subflow. Instead of having 3 returns, you would build a single return object with an identifying property telling the caller where to route the message next.
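A minimal sketch of that single-return idea, as it might look in a Function node just before the Link Out (return) node, and in the caller after the Link Call node. The property name `route` and the route values are my own invention here, purely for illustration:

```javascript
// Inside the called flow tab, just before the Link Out (return) node:
// tag the message so the caller knows where to route it, replacing
// the three separate subflow outputs with one identifying property.
function buildReturn(msg, detections) {
    if (detections.length === 0) {
        msg.route = "empty";      // nothing detected
    } else if (detections.some(d => d.class === "person")) {
        msg.route = "person";     // at least one person found
    } else {
        msg.route = "other";      // something else detected
    }
    msg.payload = detections;
    return msg;
}

// After the Link Call node, a Switch node on msg.route does the
// dispatching; simulated here as a plain function.
function dispatch(msg) {
    const routes = { empty: [], person: [], other: [] };
    routes[msg.route].push(msg);
    return routes;
}

// Example:
const out = buildReturn({}, [{ class: "person", score: 0.92 }]);
console.log(out.route); // "person"
```

In the editor this would simply be a Switch node configured on `msg.route` with one rule per value, so the three downstream wires of the old subflow stay unchanged.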
Enlighten me: why would it be a bad idea to allow a subflow to communicate "to the outside" and then receive a processed result? I can already do it all using MQTT inside the subflow: send out the data, process it in another tab or process, construct a suitable payload and return it back. All via MQTT