This is something that I mentioned in my transfer encoding chunked topic:
This could be avoided by having something that checks the parts and ensures that messages are passed on in order, something like an order-guarantee node that would buffer only those messages that arrive out of order. The node would maintain the current part number, i.e., if the last part sent was X then the next part to be sent will be X+1; all other messages are buffered until X+1 arrives. Is there something like that?
Of course the simplest answer would be: use a join node. But in the context of that post it makes no sense, since I'm splitting large files into small messages and joining them all back together would be counterproductive. A node that only partially buffered messages and released them in order would - hopefully - avoid excessive memory usage.
How would this work?
For example, ten messages sent in the following order:
4 5 0 1 8 2 3 7 6 9
an order node would do the following:
Receive: 4
Internal Buffer: 4
Send: <nothing>
Receive: 5
Internal Buffer: 4 5
Send: <nothing>
Receive: 0
Internal Buffer: 4 5
Send: 0
Receive: 1
Internal Buffer: 4 5
Send: 1
Receive: 8
Internal Buffer: 4 5 8
Send: <nothing>
Receive: 2
Internal Buffer: 4 5 8
Send: 2
Receive: 3
Internal Buffer: 8
Send: 3
Send: 4
Send: 5
Receive: 7
Internal Buffer: 8 7
Send: <nothing>
Receive: 6
Internal Buffer: <empty>
Send: 6
Send: 7
Send: 8
Receive: 9
Internal Buffer: <empty>
Send: 9
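To make the logic concrete, here is a minimal sketch in TypeScript of what such a node could do internally. It is only an illustration, not any existing node's implementation: the Resequencer class and its names are made up, and it assumes each message carries its part index.

```typescript
// Minimal re-sequencer sketch: releases messages strictly in index order,
// buffering only the ones that arrive ahead of the next expected index.
class Resequencer<T> {
  private nextIndex = 0;                  // index of the next message to release
  private buffer = new Map<number, T>();  // out-of-order messages, keyed by index

  // Feed one message; returns the (possibly empty) list of messages
  // that can now be released, in order.
  push(index: number, msg: T): T[] {
    this.buffer.set(index, msg);
    const released: T[] = [];
    // Drain the buffer as long as the next expected index is present.
    while (this.buffer.has(this.nextIndex)) {
      released.push(this.buffer.get(this.nextIndex)!);
      this.buffer.delete(this.nextIndex);
      this.nextIndex++;
    }
    return released;
  }
}

// Replaying the arrival order from the example above:
const r = new Resequencer<number>();
for (const i of [4, 5, 0, 1, 8, 2, 3, 7, 6, 9]) {
  const out = r.push(i, i);
  console.log(`Receive: ${i}  Send: ${out.length ? out.join(" ") : "<nothing>"}`);
}
```

Running this reproduces the trace above: at most a few messages sit in the buffer at any time, and everything comes out as 0 1 2 ... 9.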
So the node would maintain a buffer and an index of the next part to send. Of course this would need to be grouped by some id, but I believe the file read node already does this by grouping all the data blocks it generates with an id.
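Building on the sketch above, the grouping could be as simple as keeping one Resequencer per group id. The parts object below (with an id and an index) is just an assumed shape for illustration, not any specific node's API:

```typescript
// One Resequencer per message group, keyed by the id attached to each block.
interface Parts { id: string; index: number }

const groups = new Map<string, Resequencer<unknown>>();

// Returns the messages of this group that are now ready to forward, in order.
function handle(parts: Parts, payload: unknown): unknown[] {
  let r = groups.get(parts.id);
  if (!r) {
    r = new Resequencer<unknown>();
    groups.set(parts.id, r);
  }
  return r.push(parts.index, payload);
}
```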
Is there any node that does this already?