Serial port data at high rate: problem with timestamps

There is no suggestion that any data is lost in this case

There is no suggestion that handshaking is in use in this case.

I was showing one correct way to achieve a consistent 20 ms interval for the data. Following this approach, you can achieve microsecond-level or better accuracy. At high speeds, you may experience the issues mentioned.

Of course, if the current timing variation is acceptable to you, then there is no need for the extra effort :smile:

I have to disagree. The correct way would be to use a faster method of capturing the data, such as a dedicated Arduino etc.

Were we talking about a similar thing? Dedicated hardware is recommended to ensure a consistent rate: the data are saved into an external buffer and fed into the RPi's serial port.

I have architected and developed many products with nanosecond-level accuracy.
Sorry, I have no experience using an Arduino for high-speed design. How high a speed can an Arduino achieve?

That will guarantee the data gets to the Pi at the correct time, but there is no indication that this is the problem here. The timestamps are calculated after the buffer gets into Node-RED. It may well be that the variability in the time it takes Node-RED to get round to applying the timestamp is the problem.

Do I understand correctly that the issue boils down to the fact that between the reception of the serial input and the assignment of the timestamp there are several layers, and each adds (to some degree) variance to when the timestamp is applied?

I personally like the idea of using an Arduino with the sole purpose of waiting for serial input and assigning a timestamp, then somehow sending it (MQTT was mentioned) to the RPi, but...

What would be the best option if the OP does not want to use separate hardware but only the RPi?

It is not so much layers as serial happenings. Node-RED works by passing messages from node to node. Each time a node completes its handling of a message, Node-RED queues the message for the next node to handle, and then gives the processor to another node to handle the next message in its queue (that may not be the node that has just been sent a message; that one will have to wait its turn). Also, nodejs may do other things such as garbage collection at this point. The result is that even if the serial input arrives at exactly the right time and the serial node sends on the message, there is an indeterminate wait before the next node (which will add the timestamp) gets hold of the processor.
I am not saying that this is the whole of the problem here, but it may well be part of it.
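The effect is easy to reproduce outside Node-RED. A minimal Node.js sketch (my own illustration, not anything from the flow discussed here): the "arrival" time is captured, then some other work holds the event loop, standing in for other nodes or garbage collection, before the "timestamp" is finally taken.

```javascript
// Sketch: how late a timestamp can be applied when other work runs first,
// mimicking a downstream node stamping a message well after the serial
// node handed it off. busyWork is an illustrative stand-in for other
// nodes / GC holding the processor.

function busyWork(ms) {
    const t0 = Date.now();
    while (Date.now() - t0 < ms) { /* spin: the event loop is blocked */ }
}

const arrival = Date.now();  // when the bytes "arrived"
busyWork(30);                // something else gets the processor first
const stamped = Date.now();  // when the timestamp node finally runs

console.log(stamped - arrival); // roughly 30 ms of added latency
```

In a real flow the blocking time is not a fixed 30 ms but varies from message to message, which is exactly the timestamp jitter being described.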


And it's not just nodejs that is "in the way" - Linux itself isn't a real-time operating system, so other running processes may get a time slice to run as well. And then on the Pi in particular, the USB (and thus probably the serial port) shares the same hardware bus as the ethernet port, so there will be contention/sharing there as well.

So if realtime is critical then the Pi probably isn't the way to go (though @davidz's suggestion to "force" correct timestamps could possibly be made to work - or indeed don't worry about individual timestamps, just count say 1000 readings and then apply a calculated average to them all later in your process).
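The batch-averaging idea above can be sketched in a few lines: record only when a batch of readings started and ended, then spread evenly spaced timestamps across them afterwards. This is my own illustration (the function name `spreadTimestamps` is an assumption, not an existing node or API).

```javascript
// Sketch of the batch-averaging idea: ignore per-sample arrival times,
// keep only the batch start and end, and assign each reading an evenly
// spaced timestamp after the fact.

function spreadTimestamps(batchStartMs, batchEndMs, count) {
    const step = (batchEndMs - batchStartMs) / (count - 1);
    const stamps = [];
    for (let i = 0; i < count; i++) {
        stamps.push(Math.round(batchStartMs + i * step));
    }
    return stamps;
}

// 5 readings captured over 80 ms come out exactly 20 ms apart:
console.log(spreadTimestamps(1000, 1080, 5)); // [1000, 1020, 1040, 1060, 1080]
```

Jitter on individual readings disappears; only the accuracy of the batch start/end times matters.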


I would put it stronger than that :slight_smile: If it is critical, you need a sufficiently powerful microprocessor system running a realtime OS - such as an ESP32.

Yes, you are correct that data coming in to the serial port may not be the cause of the varying rate. 50 Hz is an extremely low rate in industry and can easily be guaranteed with a tiny microcontroller.

So I was suggesting that:

  1. Get the timestamp of the first message and use a context or flow variable to save it.
  2. For each subsequent message, read the saved timestamp from the previous message and simply add 20 ms. This is the timestamp for your current message. Save this timestamp again for the next message.

This gets rid of the OS/serial/hardware uncertainty. The starting time is still not accurate due to OS delay (that uncertainty could be removed by adding a GPS-calibrated timestamp to the starting message, but this is overkill 99.999% of the time).

BTW, we also need to identify when the current data stream stops and a new one starts. When a new data stream comes in, we need to set a new starting timestamp.
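The steps above, including the stream-restart check, can be sketched as plain Node.js (the gap threshold `STREAM_GAP_MS`, the helper name `stampMessage`, and the variable `lastStamp` standing in for Node-RED's `context.get`/`context.set` are all illustrative assumptions):

```javascript
// Sketch of the forced-timestamp logic: keep the previous stamp, add a
// fixed 20 ms for each message, and restart the clock when a gap in
// arrival times signals a new data stream.

const INTERVAL_MS = 20;     // expected sample interval (50 Hz)
const STREAM_GAP_MS = 200;  // arrival gap treated as a new stream

let lastStamp;              // stands in for Node-RED context storage

function stampMessage(arrivalMs) {
    let stamp;
    if (lastStamp === undefined || arrivalMs - lastStamp > STREAM_GAP_MS) {
        stamp = arrivalMs;                // new stream: restart the clock
    } else {
        stamp = lastStamp + INTERVAL_MS;  // otherwise: previous + 20 ms
    }
    lastStamp = stamp;
    return stamp;
}

// Jittery arrivals within one stream land on a clean 20 ms grid:
console.log(stampMessage(1000)); // 1000  (first message)
console.log(stampMessage(1023)); // 1020
console.log(stampMessage(1038)); // 1040
console.log(stampMessage(2000)); // 2000  (gap > 200 ms: new stream)
```

In an actual function node, `lastStamp` would live in `context` so it survives between messages, and `arrivalMs` would just be `Date.now()` when the message arrives.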

I went with my own timer. Thank you very much!

