How does the TCP In node split a received buffer payload into messages?


I am using the TCP In node to listen on a port where a legacy application is sending a stream of binary data, which I am decoding with the buffer parser node. So far so good.

I have one problem though: how does the TCP In node know where to split the stream into separate messages? The stream I am handling from the legacy app is rather old-school, but TCP In sometimes seems to create multiple messages for the "same" legacy message, if you understand what I mean. Is there any way to control how the input stream is separated into messages, or how does TCP handle that in general?

Kind regards

Is this using the TCP-in node or the TCP-request node?
The TCP node just passes on whatever it is handed by the underlying OS, which may or may not split a buffer.

I am using the TCP-In node with the "stream of buffers" setting. When I route it to a debug node, it seems to split the buffer somewhat randomly.

I think I probably need to pre-process the buffer with buffer parser so it gets chopped up into segments according to the length I get in the first part of each message?

It depends on whether you need the rest of the buffer later. Ideally you feed it into a function node with some code that appends each incoming buffer to any previous leftovers (held in node context), then slices off the front part you need and sends that on, keeping the remainder for when the next part arrives.

Thanks, you don't happen to have a small example of this?
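Here is a minimal sketch of the reassembly logic such a function node could use. It assumes (hypothetically, since the legacy protocol isn't specified here) that each message starts with a 2-byte big-endian length prefix covering just the payload; adjust the prefix size and offsets to match your actual framing.

```javascript
// Length-prefixed reassembly: append the new chunk to any leftover bytes,
// then slice complete messages off the front. Incomplete tails are returned
// as the new leftover to be held until the next chunk arrives.
function extractMessages(leftover, chunk) {
    let buf = Buffer.concat([leftover, chunk]);
    const messages = [];
    while (buf.length >= 2) {
        const len = buf.readUInt16BE(0);       // payload length from the 2-byte prefix
        if (buf.length < 2 + len) break;       // message not fully received yet
        messages.push(buf.subarray(2, 2 + len));
        buf = buf.subarray(2 + len);           // drop the consumed message
    }
    return { messages, leftover: buf };
}
```

Inside a Node-RED function node you would keep the leftover in context, roughly: read it with `context.get('leftover') || Buffer.alloc(0)`, call the function with `msg.payload`, store `result.leftover` back with `context.set`, and send one message per entry in `result.messages`.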

I've tried using some examples I found, and it works reasonably well when the data arrives as one chunk in TCP In, but when it "splits up" (as seen in the debug node) I get problems. My example flow looks like this (JSON also attached).

Do you mean that I should add a function node just after TCP In to handle the buffer?

flows.json (7.3 KB)

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.