Here I am at it again with another philosophy topic. Got something on my mind, and I'd love to hear other people's thoughts. I've been experimenting with different ways of streamlining and customizing tools in Node-RED to make work simpler. What I discovered is that I need a "function", i.e. something reusable: something I can use again and again to get the same output. Subflows work great for this purpose.
For example, I have a subflow I can throw data in a simple format at; it reorganizes the data as required by a remote service and sends it via http. It also outputs the same data I sent in, so I can continue working with it afterwards if needed.
Example input msg:
```js
msg = {
    ts: 1742842713,
    payload: {
        power: 123,
        temperature: 45,
    },
    // any other props for the specific context
}
```
The subflow then reorganizes this data into the required format (so I don't have to do it each and every time I send data):
```js
msg = {
    payload: {
        ts: 1742842713,
        values: {
            power: 123,
            temperature: 45,
        }
    },
    _backup: RED.util.cloneMessage(msg)
}
```
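The first function node of such a subflow might look roughly like this. This is a sketch, not my exact node: inside Node-RED you'd call `RED.util.cloneMessage(msg)`, so here `cloneMessage` is a stand-in built on `structuredClone` purely so the snippet runs standalone.

```javascript
// Stand-in for RED.util.cloneMessage() so the snippet runs outside Node-RED
const cloneMessage = (msg) => structuredClone(msg);

// First node of the subflow: reshape the payload, keep a snapshot of the original msg
function reshapeWithBackup(msg) {
    const backup = cloneMessage(msg);   // taken before anything is mutated
    msg.payload = {
        ts: msg.ts,                     // move the timestamp into payload
        values: msg.payload,            // nest the readings under "values"
    };
    delete msg.ts;
    msg._backup = backup;               // rides along until the subflow's last node
    return msg;
}

const out = reshapeWithBackup({ ts: 1742842713, payload: { power: 123, temperature: 45 } });
// out.payload → { ts: 1742842713, values: { power: 123, temperature: 45 } }
```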
Now the trick here is the cloning, adding a backup. Upon completion, the subflow reverts to the cloned backup of the incoming message! This is great for a number of reasons:
- gives me a limited "scope" which doesn't pollute what happens afterwards.
- easy to avoid pollution by not creating more of it, while also temporarily removing any incoming pollution in the form of left-over props in msg from earlier nodes.
- no need to have 100% control of all possible mutations nodes may perform on a msg (deleting props, overwriting props, saving new props, modifying behavior depending on incoming props).
- I'm using http as an example here, but the same applies to virtually all I/O nodes, or even nodes in general.
- easy to wrap repeating chores like splitting and joining inside a subflow and never have to deal with them again.
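The other end of the subflow, the restore step, is then trivial. A standalone sketch, assuming the `_backup` prop was set on the way in as above:

```javascript
// Last node of the subflow: discard whatever the nodes in between did to msg
// and hand back the pristine copy taken on the way in.
function restoreBackup(msg) {
    return msg._backup ?? msg;   // fall back to msg itself if no backup was taken
}

// What an http node might hand us: payload overwritten, response headers added
const polluted = {
    payload: "<html>OK</html>",
    headers: { "content-type": "text/html" },
    statusCode: 200,
    _backup: { ts: 1742842713, payload: { power: 123, temperature: 45 } },
};
const restored = restoreBackup(polluted);
// restored → { ts: 1742842713, payload: { power: 123, temperature: 45 } }
```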
The http node in the example above overwrites msg.payload with the response from the server, which is something I'm never interested in in this case. Further, the http node is affected by incoming props like msg.headers, msg.method and so forth (as it should be). But I'm never going to set or use those here. Still, it outputs response headers in msg.headers, which, if left unchecked, will pollute any subsequent http node downstream.
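For that specific problem there's also a lighter-weight defence than a full backup: delete just the response props the http request node sets on its output (msg.headers and msg.statusCode), so they can't masquerade as request settings for the next http node. A sketch:

```javascript
// Strip the response props the http request node attaches to its output,
// so a downstream http node doesn't pick them up as request settings.
function stripHttpResponseProps(msg) {
    delete msg.headers;
    delete msg.statusCode;
    return msg;
}

const cleaned = stripHttpResponseProps({ payload: "body", headers: { etag: "x" }, statusCode: 200 });
// cleaned → { payload: "body" }
```

Of course, this only works for props you know about, which is exactly the "100% control" problem the backup sidesteps.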
All of this made me nervous; I never have a full overview of precisely which props a node outputs, the effect of incoming props, and so forth. With the backup, I don't have to worry about it. However, when I log messages in transit between nodes, they grow to gargantuan size. A simple msg with data might pretty-print as ~200 lines of JSON, but with all these backups going on, it's thousands of lines of JSON (per message). This way of handling data going in and out of subflows was so successful, removing so much hassle and so many bugs, that I now have subflows within subflows, all doing the same cloning and reverting on input/output.
The huge size hasn't been a noticeable problem yet; after all, some thousands of lines of JSON isn't really a problem in modern JavaScript. But this doesn't scale well, and perhaps Node-RED isn't the best tool for it? I guess the philosophical part of my problem is how to deal with scopes. I want something equivalent to a JavaScript function which takes a limited set of arguments and returns a single response. Of course I could venture out and make a JS library on the side, but I prefer the benefit of doing everything in Node-RED.
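One idea I've toyed with for the size problem (untested in anger, so take it as a sketch): park the backup in context storage keyed by `msg._msgid` instead of carrying it on the message, so logged messages stay small. This assumes the nodes in between preserve `_msgid`, which most do. Here a plain `Map` stands in for `flow.get()`/`flow.set()` so the snippet runs standalone:

```javascript
// A plain Map stands in for Node-RED context storage (flow.get()/flow.set()
// in a real function node); only the _msgid key travels with the message.
const backups = new Map();

function stashBackup(msg) {
    backups.set(msg._msgid, structuredClone(msg));
    return msg;
}

function popBackup(msg) {
    const original = backups.get(msg._msgid);
    backups.delete(msg._msgid);   // don't leak one clone per message forever
    return original ?? msg;       // fall back if nothing was stashed
}

let msg = { _msgid: "abc123", ts: 1742842713, payload: { power: 123 } };
msg = stashBackup(msg);
msg.payload = "overwritten by some node in between";
msg = popBackup(msg);
// msg.payload → { power: 123 }, and the message in transit never carried the clone
```

The trade-off is you now own the cleanup: a message that dies mid-flow leaves its clone behind unless you expire old entries.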
Does anyone else have similar experience, or any good solutions or tips for designing robust, easily maintained and reusable functionality in Node-RED?