TCP connect node splits answer in the middle

Hi!

I - again - need some help with the following issue:
I need to connect to a server via TCP, which works fine with the tcp request node.
Then I have to send an opening message to start the communication - up to this point everything works fine.
The server sends back a message containing some well-defined bytes, which I can parse out, followed by a few hundred bytes representing a JSON message. But since the connection doesn't follow a standard protocol, the tcp request node splits the server's answer somewhere in the middle of the incoming buffers, right before the last '}' of the JSON data.

Do I have to analyse the message.payload parts to check whether there are enough '}' characters at the end of the payload - and if so, how do I do that?

Also, I can't let the request node close the connection after ..., because it should remain open until the next packet comes in, which can be anywhere from a few seconds to a couple of minutes later.

Does anybody have an idea, how to get this done?

Thanks for your ideas!
Florian

Hi @FloRu,

Do the packets at the start of each payload hint at the size?
If so, when you receive them, create a Buffer in context with

Buffer.alloc(size), whilst at the same time keeping track of how many bytes have arrived and where the cursor is after each write.

Once you have written up to the size, it should in theory contain a complete JSON object - then rinse and repeat?

I don't use the TCP nodes that much - so there may be other options

https://nodejs.org/api/buffer.html

Have you tried a Split node set to handle as a stream, splitting on your well-defined bytes?

That should work - but then you may have to wait for the next "well defined bytes" before it sends on the previous (now complete) packet, which may be a few minutes later...
If it always returns two parts then a Join node could be set to manually join 2 messages... but that could easily get confused, I suspect.
Does the JSON section end with anything that is unique?

Yes, Marcus, they do.

So that would be an option to try. It will take some effort for me to get this working, but I'll try it - thank you!

I tried to figure out the end of the (mostly truncated) Buffers, but there seems to be no unique ending. The JSON tree has different depths... So the Split node (streaming mode) doesn't work for me.

Next I will try to figure out how to work with Buffers and fill them with the received values until the byte count matches. ...this will take some time for me... :wink:

Thanks for your ideas!

Best,
Florian

If they are buffers of strings you should be able to:
- convert the input to a string
- concatenate anything left over + the new input string
- if length >= target, take .substring(0, target) as the next complete chunk and send it
- move what's left to the leftover, ready for the next input (roughly as sketched below)

I must admit I'm slightly surprised that any extra input that arrives would ever make it longer than the stated length... what extra are they sending to make it too long?
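
In a function node that could look something like this (just a sketch - leftover and targetLength are made-up context keys for wherever you store the announced length, and it assumes the payload is plain UTF-8 text):

// Accumulate incoming text and emit a chunk once the announced length is reached
let leftover = flow.get("leftover") || "";
const target = flow.get("targetLength");              // length announced by the header

leftover += msg.payload.toString("utf-8");

if (target && leftover.length >= target) {
    msg.payload = leftover.substring(0, target);      // the complete chunk
    flow.set("leftover", leftover.substring(target)); // anything extra waits for next time
    return msg;                                       // send the complete chunk on
}

flow.set("leftover", leftover);
return null;                                          // not complete yet - wait for more input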

As above - this may be becoming a little over-engineered.
But if you really wanted to go down this route, the below may be a starting point.

Note: this may not be fully correct, but I'm being shouted at to do the washing up.

/* Determine if this is a header payload */

/* Get size info start byte */

if (<is-header>) {
    const Size = msg.payload.readInt32LE(<some-off-set>);
    const Buf = Buffer.alloc(Size);
    flow.set("Buf", Buf);
    flow.set("Cursor", 0);
    flow.set("Length", Size);
}
else {
    const Length = msg.payload.length;
    const Buf = flow.get("Buf");
    const Cursor = flow.get("Cursor");
    const Total = Cursor + Length;

    // copy the raw bytes straight into the cached buffer at the current cursor
    msg.payload.copy(Buf, Cursor);
    flow.set("Cursor", Total);

    if (Total === flow.get("Length")) {
        /* at this point - you should expect the next payload to be a new header */
        return {
            payload: JSON.parse(Buf.toString('utf-8'))
        };
    }
}


EDIT: forgot the msg prefix

Oh, some information I should have posted before (sorry for that):
Every message starts with four identical predefined bytes, followed by four bytes containing the length of the JSON object that follows those 8 bytes. (see here: Connect to a TCP Server with special packet order)
Interestingly, the tcp request node nearly always splits the information immediately after the first 8 bytes and then - if the object is big enough - again after 1343 bytes.
Sometimes - but very seldom - there is only a second part of up to around 14-something bytes, which is easy for me to handle. Now I'm trying to handle the third message parts...

@marcus-j-davies Thanks Marcus for this starting point. I'll come back as soon as I stumble over the next point! :wink:

If doing this (I do suggest investigating other ways first), you need to ensure you don't overflow the buffer (should I add .com here? :wink:).

So write in no more than what is left to write - I don't know how the Buffer object reacts to overflows.
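
Something like this should keep it safe (just a sketch, reusing the Buf / Cursor / Length context values from my earlier example):

// Clamp the copy so we never write past the end of the cached buffer
const Buf = flow.get("Buf");
const Cursor = flow.get("Cursor");
const Length = flow.get("Length");

const incoming = msg.payload;                                // a Buffer
const writable = Math.min(incoming.length, Length - Cursor);

incoming.copy(Buf, Cursor, 0, writable);                     // copy only what fits
flow.set("Cursor", Cursor + writable);

if (incoming.length > writable) {
    node.warn(`dropped ${incoming.length - writable} excess byte(s)`);
}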

Sorry for being stupid, but I'm really new to the handling of Buffers...

First, I'm trying to filter out the header bytes and read out the length of the upcoming buffers. Here's my code:

let packetBuf = Buffer.from(msg.payload, 'hex');
let packetMessageLengthArr = []; 
let packetMessageLength = {};


// ### --- If packet starts with 0x6EDAC0DE...
if (packetBuf[0] == 0x6e && packetBuf[1] == 0xda && packetBuf[2] == 0xc0 && packetBuf[3] == 0xde) {

    // ### --- taking the four indicator bytes for the upcoming packet length...
    packetMessageLengthArr = [packetBuf[4], packetBuf[5], packetBuf[6], packetBuf[7]];

    // ### --- ...and trying to get a decimal length out of the values...
    let packetMessageLength = packetMessageLengthArr.readUInt32LE();

    // ### --- But only getting the error message: TypeError: packetMessageLengthArr.readUInt32LE is not a function
    let msgOut = { packetMessageLength: packetMessageLength };

    node.send (msgOut);
}

return;

If I use the same .readUInt32LE() function in another node with only four buffer bytes as msg.payload, everything works fine. What am I doing wrong? :smirk:

You have converted those 4 length bytes into an array… it needs to remain a Buffer. I think you need .slice() instead.

It sounds like the sending end is sending chunks. I don't know if they have a limited hardware buffer size on the sending end or are just being conservative to fit into a single TCP packet, but it definitely feels like they are just sending things that way.

Thanks @dceejay - that makes sense.

New code:

let packetBuf = Buffer.from(msg.payload);
let packetStart = packetBuf.slice(0, 4).toString('hex');
let packetLengthBytes;
let packetLength;

// ### --- If packet starts with 0x6EDAC0DE (toString('hex') returns lowercase)...
if (packetStart == "6edac0de") {

    // ### --- taking the four indicator bytes for the upcoming packet length...
    packetLengthBytes = packetBuf.slice(4,8);

    // ### --- calculating the length out of the buffers
    packetLength = packetLengthBytes.readUInt32LE();

    // ### --- noting and showing the length
    flow.set('packetLength', packetLength);
    node.status({fill:"green",shape:"dot",text:"Länge: " + packetLength + " Bytes."});
}

return;

works well!

I'm not sure if the packets get sliced by the sender, but it could be. If I have some time later, I'll try to connect to the same server with different Node-RED instances to get a better idea of what's happening there. But first I have to carry on...

So you took the Red Pill then eh! :smile:

I must admit, I like working with raw data buffers, and when it works, I feel I can wear the same shirts as Sheldon Cooper!

A little tip:
If you do this a lot - mixing strings and variables:

node.status({fill:"green",shape:"dot",text:"Länge: " + packetLength + " Bytes."});

Use backticks with template literals - you will thank yourself for it later.

node.status({fill:"green",shape:"dot",text:`Länge: ${packetLength} Bytes.`});

Thanks Marcus!

Yes, the red pill was my decision.
And also: yes - it works!
Since the packets seem to arrive in the right order, I can join them perfectly with these Buffers!

I use the length given in the header to allocate the buffer and to check whether it's full. Once it's filled up, I send out the cached values and delete the cache.
And to make sure the buffer doesn't get overfilled, I drop the packets (and delete the cache) if the size would exceed what the header described. Normally the next packet should contain the start bytes again, so I can begin from the start.
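
Roughly, the function node ended up like this (a simplified sketch, not my exact code - the context keys and status texts are just illustrative):

// Reassemble: 0x6E 0xDA 0xC0 0xDE + 4 length bytes (UInt32LE) + JSON body
let chunk = Buffer.from(msg.payload);

// New header? -> start a fresh cache sized from the length field
if (chunk.length >= 8 && chunk.slice(0, 4).toString('hex') === '6edac0de') {
    const len = chunk.readUInt32LE(4);
    flow.set('cache', Buffer.alloc(len));
    flow.set('cursor', 0);
    flow.set('length', len);
    chunk = chunk.slice(8);                  // body bytes that arrived together with the header
    if (chunk.length === 0) return null;
}

const cache = flow.get('cache');
const cursor = flow.get('cursor');
const length = flow.get('length');
if (!cache) return null;                     // no header seen yet -> ignore this part

// Would this part overfill the cache? -> drop it all and wait for the next header
if (cursor + chunk.length > length) {
    flow.set('cache', undefined);
    node.status({ fill: "red", shape: "dot", text: "overflow - packet dropped" });
    return null;
}

chunk.copy(cache, cursor);
flow.set('cursor', cursor + chunk.length);

// Cache full? -> parse the JSON, send it on and clear the cache
if (cursor + chunk.length === length) {
    msg.payload = JSON.parse(cache.toString('utf-8'));
    flow.set('cache', undefined);
    node.status({ fill: "green", shape: "dot", text: `Länge: ${length} Bytes.` });
    return msg;
}
return null;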

To all of you: thanks for your help! :+1:
