I am building an application in Node-RED and I need to create new subflows and nodes from a function node. The idea is that this function node will create and delete nodes and flows at runtime, when we send it certain messages over MQTT.
The application needs to connect an undetermined number of machines (3 one day, 20 the next) to record operating data. The general idea is that when an operator starts using a specific machine, they indicate in an app which machine it is via its identifier, and they indicate again when they finish using it.
The problem is that these machines communicate over a very old protocol called Open Protocol. I have been given a driver in the form of a Node-RED node that needs that identifier at design time, but I won't have it until it is sent through the app. So I will need a node for each new machine that connects, and the only thing that has occurred to me is to have a function node that dynamically creates these nodes.
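A rough sketch of what I mean. The node type and property names here are made up, since I don't know the driver's internals; the only real part is that Node-RED's Admin HTTP API does expose `POST /flow` for adding a flow to the running configuration:

```javascript
// Hypothetical sketch of generating a per-machine flow at runtime.
// "op-connection" and its host/port properties are placeholders for
// whatever the Open Protocol driver node actually expects.
function buildMachineFlow(machineId, ip, port) {
    return {
        label: `machine-${machineId}`,
        nodes: [
            {
                id: `op-${machineId}`,
                type: "op-connection",      // placeholder node type
                name: `Machine ${machineId}`,
                host: ip,
                port: port
            }
        ]
    };
}

const flow = buildMachineFlow("M42", "192.168.0.42", 4545);
console.log(JSON.stringify(flow, null, 2));

// This object would then be pushed to the editor's Admin HTTP API,
// e.g. POST http://localhost:1880/flow, to add the flow without
// redeploying everything else.
```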
Honestly, my idea sounds terrible xd. Can you think of a simpler solution?
I know (and have used) Open Protocol with Atlas Copco tools.
Part of the problem with the Open Protocol nodes (as I remember them from ~2 years ago) is that you need to set the connection IP at design time. So I understand your idea of "dynamically generating flows" - but honestly, this is neither simple nor ideal.
There are 2 approaches I would take (one is more effort, but more benefit, than the other).
Make a subflow of your working version with environment properties like "TOOL_IP", "TOOL_PORT", etc. Then it is just a case of duplicating the subflow instance, changing a couple of settings, and deploying the flows. Be careful though: redeploying can interrupt the comms of ongoing processes.
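Inside the subflow, a function node reads those instance properties with `env.get(...)`. Here is a minimal standalone sketch of the same lookup (using `process.env` so it runs outside Node-RED; the property names TOOL_IP/TOOL_PORT and the fallback values are just examples - 4545 is the usual Open Protocol TCP port):

```javascript
// In a Node-RED function node inside the subflow this would be:
//   const host = env.get("TOOL_IP");
//   const port = Number(env.get("TOOL_PORT"));
// Standalone equivalent for illustration (names/defaults are assumptions):
const host = process.env.TOOL_IP || "192.168.0.10";  // per-instance property
const port = Number(process.env.TOOL_PORT || 4545);  // usual Open Protocol port
const connection = { host, port };
console.log(connection);
```

Each subflow instance then carries its own TOOL_IP/TOOL_PORT values, so the shared logic inside never hard-codes a machine.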
Fork and improve the Open Protocol nodes to permit dynamic connections to be created. With that capability, you could fashion a flow that generates connections on demand and uses link-call (subroutines) for a single reusable piece of processing and mapping.
Oh, wait, I do have one other solution, but I feel I may have used up all of my "grace with the community" recently by posting links to FlowFuse (the company I now work for) as a solution. In short, with FlowFuse you could create 1 Node-RED instance with env vars that describe and set up the machine. When you need another, you simply fire up a new instance and set the env vars appropriately. Simple as that. FlowFuse has a concept of "devices" too, which is ideal for this. You create 1 working solution, expose the IP, port, destination server, ID (whatever changes between your machines) as env vars, and then deploy that same instance to as many "devices" as you want (a device is just a NUC or Raspberry Pi or VM or some kind of computer).
BTW, FlowFuse is essentially a Node-RED runner. It is open source (just like Node-RED) and lets you create 1, 10, or 1000 Node-RED instances, with user management, snapshot backups, env var management, audit logging - everything you would need to modularise this properly.
If you want to head down this path, I can point you in the right direction - give me a shout or a DM on here.
Steve, thank you very much for your help. I will discuss the three solutions with my colleagues, but most likely we will opt for the third. If I run into problems and it's not too much trouble, I'll write you a DM.
Once again, thank you very much for your help, and I send you a virtual hug.