I am writing sensor data with a time stamp to a CSV file using Node-RED. Now I want to store the in time and the out time of a sensor in the same row of the CSV, for memory-management reasons.
For example, sensor1's in-time punch at 21-03-2024 17:40 is stored in the CSV file, linked to the sensor1 details. When that sensor's out-time punch arrives at 25-03-2024 11:40, I want to update it in the same row.
Can you suggest an efficient way to do this? (Is iterating over the whole file the right way?)
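Something like this is what I am aiming for (the column names are just an example):

```
sensor,in_time,out_time
sensor1,21-03-2024 17:40,25-03-2024 11:40
```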
You could hold all the data as a (complicated) context variable using persistent storage, and provide an "Export as CSV" option.
You could write your own code to manage a CSV file as if it was a database.
Or you could use a database, already written and optimised for managing data.
The choice is up to you, or maybe whoever set your pointless limitation.
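If you go with the second option, the usual pattern is: read the whole CSV, update the matching row, write the whole file back out. A rough sketch in Python (nothing Node-RED-specific; the file name, the column names and the punch() helper are all invented for illustration):

```python
import csv
from pathlib import Path

CSV_PATH = Path("sensor_log.csv")              # invented file name
FIELDS = ["sensor", "in_time", "out_time"]

def punch(sensor, in_time=None, out_time=None):
    """Read the whole CSV, update (or append) the row for this sensor, write it all back."""
    rows = []
    if CSV_PATH.exists():
        with CSV_PATH.open(newline="") as f:
            rows = list(csv.DictReader(f))
    for row in rows:
        if row["sensor"] == sensor:            # found the sensor's row: update it in place
            if in_time:
                row["in_time"] = in_time
            if out_time:
                row["out_time"] = out_time
            break
    else:                                      # no row yet for this sensor: add one
        rows.append({"sensor": sensor, "in_time": in_time or "", "out_time": out_time or ""})
    with CSV_PATH.open("w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        writer.writerows(rows)

punch("sensor1", in_time="21-03-2024 17:40")   # in-time punch creates the row
punch("sensor1", out_time="25-03-2024 11:40")  # out-time punch updates the same row
```

That is fine for a handful of sensors, but rewriting the whole file on every punch is exactly the sort of work a real database already does for you, which is why the third option is usually the better one.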
No idea if you could do this in Node-RED, but going way back in computing history (i.e. when I started) you had fixed-length records in files and maintained your own index files (well, posh folk using COBOL didn't).
Each record was a fixed length, so it was easy to work out the location of the record you wanted (record number * record length) and use this as an offset into the file...
I think (but have never tried it) Python still supports this with the binary read/write open modes ('rb' / 'r+b') and the seek() function.
I would then use the sensor number as the record number, so for sensor 1 you have:
DD-MM-YYYY HH:MM = record length of 16 characters + 1 for the line feed at the end = 17 bytes
Record number = 1 for Sensor 1
Position in file = 1 * 17 = byte 17 is the start of the record
Note the above assumes the first byte of the file is position zero and sensor numbers start at zero.
Technically this is not a CSV file, as there are no commas in it - I would call it a plain text file for Linux (on Windows you would normally need a CR/LF at the end of each record rather than just a line feed).
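To make that concrete, here is a minimal Python sketch of the idea (untested, and the file name, function names and sensor count are all invented). It pre-allocates one blank 17-byte record per sensor and then overwrites a record in place using seek():

```python
RECORD_LEN = 17  # 16 chars of "DD-MM-YYYY HH:MM" + 1 byte for the line feed

def create_file(path, sensor_count):
    """Pre-allocate one blank fixed-length record per sensor (sensor numbers start at 0)."""
    with open(path, "wb") as f:
        f.write((b" " * (RECORD_LEN - 1) + b"\n") * sensor_count)

def write_record(path, sensor_number, timestamp):
    """Overwrite the record for one sensor in place; timestamp must be exactly 16 characters."""
    assert len(timestamp) == RECORD_LEN - 1
    with open(path, "r+b") as f:               # read/write binary, file must already exist
        f.seek(sensor_number * RECORD_LEN)     # position in file = record number * record length
        f.write(timestamp.encode("ascii") + b"\n")

def read_record(path, sensor_number):
    with open(path, "rb") as f:
        f.seek(sensor_number * RECORD_LEN)
        return f.read(RECORD_LEN - 1).decode("ascii")

create_file("punches.dat", sensor_count=10)
write_record("punches.dat", 1, "21-03-2024 17:40")   # sensor 1's record starts at byte 17
print(read_record("punches.dat", 1))
```

As written each record only holds one timestamp; to keep the in time and the out time in the same record (which is what the original question asks for) you would simply double the fixed width and seek to the second half of the record when the out-time punch arrives.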