OK, I get that point and I don't disagree at all... It's about context/scope, whatever you want to call it.
I am well aware that the more generic a lump of code is, the more configuration it will take to make it do anything useful, and I also know that the returns diminish the more generic you go.
Heck, that is the whole point of Node-RED, isn't it?
It offers task-centric processing based on a common interface container that is shaped into an applicable form by the task in hand.
Looking at the Working with Messages docs again, in more detail and with better understanding now, I can see that split, change and join are more powerful than I originally realized, and I also now appreciate the distinction between sending multiple messages serially and working with just one big one.
That, I think, was the point you were making with the two alternative structures you suggested.
But back to generic...
No, I don't think I created something generic in an absolute sense; of course it will only work in context, in this case processing data from one specific type of communication node and performing actions on that message.
However, using either structure you suggest would require code to be altered every time I added a new source of data of that type, or changed/augmented what was being returned.
That seems to fly in the face of the basic underlying strategy of Node-RED.
I get that I will never simply add a value to a block of data and have it magically appear on my UI without configuring something, but if the something can be a 'standard node' as opposed to custom code, wouldn't that be better?
I am including a configurable function in the concept of 'standard node' here.
When I say generic, in this specific context, I am saying my function, or stack of identical functions, don't need to know what they are processing, just what to do with the data, making their function generic, albeit for a specific task on a specific type of message...
Need a new value just add another identical node to get the next value.
Need to skip a few registers, insert a change node to modify the current offset, and then add another identical node.
Looking at the examples that seems entirely consistent with what is there now.
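To make that "stack of identical nodes" idea concrete, here is a minimal sketch of what the body of such a Function node could look like. The field names (`msg.offset`, `msg.value`) and the big-endian 16-bit register assumption are mine, not an existing API; it is wrapped as a plain function here so it runs outside Node-RED, but inside a Function node only the body would be needed.

```javascript
// Sketch of a "generic" function node: it doesn't know WHAT it is reading,
// only HOW. The offset travels with the message, so chaining identical
// copies reads successive values, and a change node can skip registers
// simply by bumping msg.offset between them.
function readNextRegister(msg) {
    const buf = Buffer.isBuffer(msg.payload) ? msg.payload : Buffer.from(msg.payload);
    const offset = msg.offset || 0;       // where the previous node left off
    msg.value = buf.readUInt16BE(offset); // one 16-bit register, big-endian assumed
    msg.offset = offset + 2;              // next identical node reads the next register
    return msg;
}

// Two identical copies in a row read two consecutive registers:
let msg = { payload: Buffer.from([0x00, 0x2a, 0x01, 0x00]) };
msg = readNextRegister(msg); // msg.value === 42,  msg.offset === 2
msg = readNextRegister(msg); // msg.value === 256, msg.offset === 4
```

"Need a new value, add another identical node" then falls out naturally: each copy is configured only by where the previous one left the offset.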
Whether I use a split/change/join or a branch structure is a very good point and one I need to understand more fully...
I am not aware yet of the finer points, and to be fair only just aware of the core principles, so I am in no place right now to go asserting which architecture will work best in this case.
I am very happy to concede that my function, heck any function, may be better changing/appending the original message or splitting it into a series of new messages on the same connector/wire.
For that matter, it may be better to simply extract a value and save it in global context for use elsewhere, which, to be fair, sounds like a good plan to me.
However, at some point I need to:
Identify a value based on its position in the data block.
Process the raw data into a useful value, probably scaling and rounding at the same time.
Give that thing a label/context to identify it.
Place the newly identified data into a database along with all previous data similarly classified.
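The first three steps above can be sketched as one configurable function driven by a small map. The map entries and field names here are illustrative assumptions, not an existing node's configuration format:

```javascript
// Hypothetical point map: offset into the raw buffer, label, scale, decimals.
const pointMap = [
    { offset: 0, name: "boiler_temp",   scale: 0.1,  dp: 1 },
    { offset: 2, name: "line_pressure", scale: 0.01, dp: 2 },
];

function decode(buf) {
    const out = {};
    for (const p of pointMap) {
        const raw = buf.readInt16BE(p.offset);               // 1. identify by position
        out[p.name] = Number((raw * p.scale).toFixed(p.dp)); // 2. scale/round, 3. label
    }
    return out; // step 4: `out` is now ready to store, keyed by name
}

decode(Buffer.from([0x01, 0x2c, 0x00, 0x64])); // → { boiler_temp: 30, line_pressure: 1 }
```

Adding a value then means adding one map entry rather than editing code, which is the "configure, don't rewrite" property being argued for here.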
Mmmm... Looks like we are more or less on the same page, which is good, as you will be on the right page and I have hardly skimmed the book yet.
I am assuming most of that process is easily doable with standard nodes, and I have no desire to go mucking about in code to do a worse job than has already been provided for by the Node-RED platform.
I am also happy to concede that the bits I currently think standard nodes will not do can actually be done with standard nodes once I know how to use them.
However, even the docs you pointed me to suggest a function node for tasks not already defined and provisioned for by a standard node.
It seems to me that jumping through hoops to split a message into multiple, potentially hundreds of, individual messages and then reassembling those into a new message, with the bytes reordered so that the standard conversion will work, would be counterproductive and hard to follow.
I would still need a node to label and format each item, or a custom function to do that for all of them, in order to put that data where it needs to be, in a useful form.
I have a device, a PLC, that is inherently flexible, and its doing a few jobs.
I could set up multiple queries to small blocks of memory associated with each PLC task.
If I do that, every time I add or modify a function on the PLC I would need to add a new getter, and setter, and a new stream to process that data. I would also have to consider the impact on existing streams that also talk to that device.
Or I can define one block of memory to work with, handle it once per data exchange, and have my PLC code use that block for all UI and integration functionality.
Going that way, a single stream has only two tasks:
Get and format data - make it available globally
Monitor global data, associated with that device - format and send the data if anything changes.
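The second task, "send only if anything changes", is the interesting bit of logic. A minimal sketch, with an in-memory Map standing in for Node-RED's global/flow context store (the real thing would use `global.get()`/`global.set()` inside a Function node):

```javascript
// Change detector: remember what was last sent per key, forward only deltas.
const lastSent = new Map(); // stand-in for Node-RED context, my assumption

function changed(current) {
    const out = {};
    for (const [key, value] of Object.entries(current)) {
        if (lastSent.get(key) !== value) { // only forward values that moved
            out[key] = value;
            lastSent.set(key, value);
        }
    }
    return out; // {} means nothing to send this cycle
}

changed({ temp: 21, pressure: 5 }); // first pass: everything counts as changed
changed({ temp: 21, pressure: 6 }); // → { pressure: 6 }
```

Task 1 would then just decode the block and write it into the same store, keeping the two flows fully decoupled, which is the point of going via global data in the first place.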
In either case I have to manually identify the data, essentially giving an offset, as the defining key, a name and a format. This is no different conceptually from naming a memory address.
In Node-RED, if I can do that simply by dragging a node into place and giving it a couple of set points, surely that is better than having to mess about editing an existing node that is tested and working.
I build industrial applications for a living, and the one thing I strive to avoid, all the time, is long gangly code that does multiple things, mainly because editing that can be dangerous, literally life-and-limb stuff in many cases.
Look, I take the point: multiple outputs vs multiple messages on a single output is a thing I need to understand the finer points of, and don't. Given that Node-RED is single-threaded, I suspect I need to know a great deal more to decide which is the best approach, why, and when.
I am going to look at the docs and YouTube to see when a switch is recommended and what for, specifically in comparison to split-type operations. That seems pertinent. Is it?
With regard to building functions that can only handle a specific set of named things, as opposed to a function to be used multiple times and configured, simply, to handle one thing.
Well, I am not convinced yet, and the latter seems like the better plan.
Everything else I do is based on standard functions, and Node-RED is no different: configurable nodes to do common tasks.
Why would I build a function that wasn't configurable?
In all honesty, what I really want, ultimately, is a data object...
Within the object would be a name, a format for the raw data, a source, and a format for the displayed data, as a minimum.
Now, I think a fully configured source for all data is a panacea too far, so making the source an identifiable function block on a named stream seems reasonable.
And of course the address, or offset, of the data is both unique to a block and implicit to the function.
Given that, there is no reason that said function couldn't use the data item to configure its functionality... Simply adding it would configure it, assuming the data was defined first.
Could a single function do that for multiple items of data? Probably, but it would be way more complex.
Are there already nodes that do that? I don't know, but I will find out before I go trying to make one.
If I start with the concept of one node one data item, no matter if the message is serial or parallel, I can start simple, data config in the node, and update that later if the data object idea gets implemented.
Looking at where IoT is going and how that will impact industrial automation, I think OPC is likely to get sidelined in favour of time-series databases and pushing value pairs without pre-configuring the underlying structure; that seems like the way forward for most of this stuff.
Let's face it, if you need a classic SQL view for some reason and have to normalize the data, you can still do that; it's just that all that pesky mucking about with COM objects, and paying DB admins to sort out the resulting mess, is long gone, and the time-series tables form the input structure.
Which I guess gets us back to the original question...
Irrespective of the arrangement of the function, with respect to its handling of the original message, I still have to swap words in the underlying data.
I could process the original message and modify it, but I would rather not do that, as it would make debugging harder and isn't a recommended strategy anyway.
That means I have to copy/clone the data, preferably a chunk at a time, 4 bytes or 2 words, depending on whether it is an array or a buffer we are talking about.
If I went with your example and built on that, which I very well might, I still have to manipulate the byte/word order for 32-bit data, and I am likely to want to cherry-pick those segments from a much bigger data set.
Can I process the buffer items directly?
I will be looking at Node.js now, but pointers wouldn't hurt, as there are probably many possible ways to go and only a few good ones.
If not, and I am therefore working with arrays, am I better off using the array data in the first place?
I realize I have to get a buffer to do the conversion; that's fine.
(Original data) buffer > array from part of that > mess with byte order > buffer > conversion(0)
(Original data) array > array from part of that > mess with byte order > buffer > conversion(0)
(Original data) array > mess with byte order > buffer > conversion(n)
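On "mess with byte order": to answer the earlier question, yes, buffers can be processed directly, with no detour through arrays. A minimal sketch of the word swap, assuming the source delivers the low 16-bit word first ("CDAB" order, common on Modbus-style PLCs; if the device uses a different order, the two `copy` calls just move accordingly):

```javascript
// Cherry-pick 4 bytes from a larger buffer, swap the two 16-bit words,
// and read the result as a 32-bit big-endian value.
function readUInt32WordSwapped(buf, offset) {
    const swapped = Buffer.alloc(4);
    buf.copy(swapped, 0, offset + 2, offset + 4); // high word arrives second
    buf.copy(swapped, 2, offset, offset + 2);     // low word arrives first
    return swapped.readUInt32BE(0);
}

// Registers [0x0001, 0x0000] in CDAB order encode the value 1:
readUInt32WordSwapped(Buffer.from([0x00, 0x01, 0x00, 0x00]), 0); // → 1
```

Node also offers `buf.swap16()` and `buf.swap32()` for in-place byte reversal within each 16- or 32-bit group, which covers some reorderings in one call, though note they mutate the buffer, so clone first.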
The last thing that occurs to me is that the convert function itself may be an option.
I know you can add a function to a prototype; could I access the code for the conversion, make a new version handling the bytes in a different order, and then add that to the environment?
That would be clean...Potentially.
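It is not silly; it works. A sketch of the idea, with the caveat that patching built-in prototypes is generally discouraged (a plain helper function is the safer pattern, and inside Node-RED function-node sandboxes the patch may not persist). The name `readFloatCDAB` is my invention, not part of Node.js:

```javascript
// Attach a word-swapped float reader alongside the built-in readFloatBE.
// Assumes CDAB word order: the low 16-bit word arrives first.
Buffer.prototype.readFloatCDAB = function (offset = 0) {
    const swapped = Buffer.from([
        this[offset + 2], this[offset + 3], // high word arrives second
        this[offset], this[offset + 1],     // low word arrives first
    ]);
    return swapped.readFloatBE(0);
};

// IEEE-754 1.0f is 3F 80 00 00; in CDAB order that is 00 00 3F 80:
Buffer.from([0x00, 0x00, 0x3f, 0x80]).readFloatCDAB(0); // → 1
```

You don't get at the source of the built-in `readFloatBE` to modify it, but you don't need to: reorder the bytes into a scratch buffer and delegate to it, as above.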
I am so far out of my depth here it's hard to know if that is just a silly thing to say.
I want to move forward, so I am going with the first solution, clunky or otherwise, that I find for now.
I will modify it once I have a better idea what I am doing, but for now getting the value, any way that works, will do.