Add extra column header to existing CSV file


I'm having a problem adding another header column to an existing CSV file.

I have an existing CSV file with the following data:

Header = Time, Temp, Humidity
Data = 102236, 16, 45

This CSV file is constantly updated: new data is appended to it every minute.
If I want to add an extra header to the existing CSV file, how would I be able to write to it without losing the data?

So the new CSV would be:
Header = Time,Temp,Humidity,Pressure
Data = 102236,16,45,5

I appreciate your help

You can't really add a column to a CSV file using an append, only rows. What you need to do is read the CSV into memory (I'd probably go with an array of objects), add the column to the in-memory entity you've created, then write out to a new CSV (or, if you're feeling cocky, overwrite the existing one).
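As a minimal sketch of that read/modify/rewrite step in plain JavaScript (the column name, fill value, and the assumption that the file fits in memory and has no quoted fields containing commas are all mine; adapt to your schema):

```javascript
// Sketch: add one column to an in-memory CSV string.
// Old rows get `fillValue` in the new column; the header row
// gets the new column name appended.
function addColumn(csvText, columnName, fillValue = "") {
  const lines = csvText.trim().split("\n");
  const header = lines[0] + "," + columnName;                 // extend header row
  const rows = lines.slice(1).map(r => r + "," + fillValue);  // pad legacy rows
  return [header, ...rows].join("\n") + "\n";
}

const input = "Time,Temp,Humidity\n102236,16,45\n";
const output = addColumn(input, "Pressure");
// output is "Time,Temp,Humidity,Pressure\n102236,16,45,\n"
```

In Node-RED this would sit in a function node between a `file in` node (set to output a single utf8 string) and a `file` write node.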


Yes, that would work, but I'm afraid if the file gets too big it will crash Node-RED.
The file would be over 100 MB.

I will try it.

Thanks Jay

That is usually way too big to be working in CSV format. Is there a technical reason that you've decided to go that route? I would highly recommend migrating something that large to some sort of database.


What do you want to do with the "old" data? Add 0 in the new column, leave it blank, or ...?

Hi dceejay,

I want to add a new header column, and I want to keep the old data so I can look at the sensor data throughout the year. I will add more columns throughout my project if needed.

So you need to add some dummy value to the end of each row, just in case you add a second extra column?

The newly added column at the end of the original header will hold the new data.

The software I'm currently using to extract data uses CSV, and its compression ratio is very good.

Jay, as mentioned above, you said I could read in the CSV file and replace the header.
I'm struggling with replacing the header with the new header.
I can get the data and the header of the file, but I don't know how to combine the new header with the old data I have.
Please elaborate on which node I could use.

Many thanks

Play with this. You'll need to edit the csv column template to match your schema, but this should work to update legacy CSVs with a new column.

[{"id":"51cd58d4.63e4a8","type":"inject","z":"32697264.aa1c16","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":1040,"y":360,"wires":[["8114a6ea.1e9138"]]},{"id":"46d15bc3.978964","type":"csv","z":"32697264.aa1c16","name":"","sep":",","hdrin":true,"hdrout":"","multi":"mult","ret":"\\n","temp":"","skip":"0","x":1350,"y":360,"wires":[["c0b5860b.379e48"]]},{"id":"539b3829.709ed8","type":"change","z":"32697264.aa1c16","name":"Add column","rules":[{"t":"set","p":"payload.newcolumn","pt":"msg","to":"null","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1270,"y":400,"wires":[["5f0fa5f7.f5475c"]]},{"id":"6456bc4d.498144","type":"csv","z":"32697264.aa1c16","name":"back to CSV","sep":",","hdrin":true,"hdrout":true,"multi":"one","ret":"\\n","temp":"","skip":"0","x":1050,"y":440,"wires":[["8ddb8c84.6a81"]]},{"id":"c0b5860b.379e48","type":"split","z":"32697264.aa1c16","name":"split (retaining order)","splt":"\\n","spltType":"str","arraySplt":1,"arraySpltType":"len","stream":false,"addname":"","x":1080,"y":400,"wires":[["539b3829.709ed8"]]},{"id":"5f0fa5f7.f5475c","type":"join","z":"32697264.aa1c16","name":"merge array","mode":"auto","build":"string","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":false,"timeout":"","count":"","reduceRight":false,"reduceExp":"","reduceInit":"","reduceInitType":"","reduceFixup":"","x":1430,"y":400,"wires":[["6456bc4d.498144"]]},{"id":"8ddb8c84.6a81","type":"file","z":"32697264.aa1c16","name":"Destination file","filename":"","appendNewline":true,"createDir":false,"overwriteFile":"false","x":1220,"y":440,"wires":[[]]},{"id":"8114a6ea.1e9138","type":"file in","z":"32697264.aa1c16","name":"Source File","filename":"","format":"lines","chunk":false,"sendError":false,"x":1210,"y":360,"wires":[["46d15bc3.978964"]]}]
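The flow above reads the file as lines, parses them with a `csv` node, splits the resulting array, adds the new property with a `change` node, joins, converts back to CSV with headers, and writes the result out. If it helps to see the same migration in one place, here is a sketch of an equivalent single function node (the column name, fill value, and sample payload are my assumptions; in the real flow `msg.payload` comes from the `file in` node set to output a single utf8 string):

```javascript
// Stand-in for the incoming message from the "file in" node:
const msg = { payload: "Time,Temp,Humidity\n102236,16,45\n" };

const NEW_COLUMN = "Pressure";  // assumed name; edit to match your schema
const FILL = "";                // value written into the legacy rows

const lines = msg.payload.trim().split("\n");
lines[0] += "," + NEW_COLUMN;                 // extend the header row
for (let i = 1; i < lines.length; i++) {
    lines[i] += "," + FILL;                   // pad every legacy row
}
msg.payload = lines.join("\n") + "\n";
// In the real function node, finish with:  return msg;
```

Like the flow, this assumes the whole file fits in memory, which is worth keeping in mind given the 100 MB size mentioned earlier.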

You need to import the data to a database. Then you can have additional fields as needed. SQLite would be a good place to start as it doesn't need an additional service running.

I would try to take a copy of the CSV periodically - though as you are writing it so often, that may be a challenge as you might find the end of the file being caught mid-write. There are probably several ways to mitigate that depending on the details of your sensor and whether it matters if you were to occasionally lose a reading. Clearing down the CSV file would help keep things manageable.

Of course, if you can, you should change the source to write to something that Node-RED can consume directly. You could possibly substitute a UNIX pipe for the CSV file, for example.

I think what Dave was hinting at is the question "When you add a new column, what will be in that column for all the rows that previously existed?"

You need to keep that in mind.

I will be storing the data in a cloud database periodically. I'm very grateful for the information and will consider it once I have this up and running. ;)

Sorry, I misunderstood Dave's question. As the data was not there for the new column, all the rows for that column will be blank.
I'm hoping it will write the new value for that column on the same row as the other readings.

Thanks Jay.

I have been working on it and I'm having trouble: after I split it, add in the new header, and join them again, then transfer to the `csv` node, it only outputs the header string and doesn't send the data that I read in from the file.
Even though I can see all the arrays in the join node.

flows (5).json (2.4 KB)

Use your own column names of course. Can't show you what my output looks like because it has PHI in it, but hopefully this gets you on the right track.

Is it possible to send in a "msg." for the column template instead of editing inside the csv node?

Not to my knowledge, though this operation should only really be done if you change your schema, which shouldn't be a frequent event.

I hope you take that into consideration if you ever try to use the new column in calculations that include the old rows....