Node-RED is too fast for InfluxDB 2?!

Hello
I'm currently on vacation and have my new Raspi 5 with me, with a top-of-the-line 256 GB SSD and Node-RED, InfluxDB 2, and Grafana pre-installed.
I've also taken my saved power data with me as CSV files.
Parsing and error-checking the CSV files works flawlessly.
Writing the daily data saved since October 2023 (one line per day) to InfluxDB 2.x works.
Writing the minute data doesn't.
Okay, the minute file has about 220,000 lines. After about 20,000 lines, InfluxDB gets stuck and nothing more is saved.
What can I do?
I can only describe the flow verbally because the Fritzbox and the Raspi aren't connected to the network.
The flow is actually simple.
timestamp -> function node (set flow variables) -> file-in node -> CSV node -> function node (check and construct) -> influxdb out node

Best regards

Welcome to the forum @Egon

You could add a Delay node, set to Rate Limit mode at an appropriate rate, in front of the influx node in order to slow it down a bit.

Or split the inbound data into chunks of 10k records perhaps.
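
On the command line that could be as simple as this (assuming the minute data sits in a file called minutes.csv; adjust the names to yours):

```bash
# Split the big CSV into pieces of 10,000 lines each;
# "minutes.csv" and the "chunk_" prefix are just example names.
split -l 10000 minutes.csv chunk_
```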

It's also worth noting, in case you didn't know, that InfluxDB can create summary records for you: you can automatically summarise the detailed data into a separate table. That is worth doing, along with automatically trimming the size of your detailed table, because eventually InfluxDB is likely to run out of memory if you don't.
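
In InfluxDB 2.x that kind of summarising is done with a task. A rough sketch, registered via the influx CLI (the bucket names power and power_hourly, the org name, and the hourly mean are all just examples, not taken from your setup):

```bash
# Write a Flux downsampling task to a file, then register it.
# Every hour it averages the detailed data into a second bucket.
cat > downsample.flux <<'EOF'
option task = {name: "downsample-power", every: 1h}

from(bucket: "power")
    |> range(start: -task.every)
    |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)
    |> to(bucket: "power_hourly")
EOF

influx task create --org my-org --file downsample.flux
```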

I'd already tried delays in various places, but that wasn't a solution either.
The effort to solve this should stay limited, though, because once it works it's more or less a one-time event, and creating 20 files with the head and tail commands isn't that much effort either.
Once the old data has been imported, the existing flows will be modified so that they no longer write to a CSV file but directly to InfluxDB.
What I'll do with the minute data isn't clear yet anyway; we'll see whether and how I condense or delete it.

Many thanks so far

Hello everyone,
I'd like to briefly explain my solution.

  1. I used >split< to split the file into chunks of 14,000 lines each.
  2. I used >head< to copy the first line with the column headings into each of the split files; a >for each< loop was helpful here (see the sketch after this list).
  3. Since my Node-RED flow itself was working, I then successfully ran it once for each split file.
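
In shell terms, the idea was something like this (the file names are just examples, not my real ones):

```bash
# Keep the header line, split the rest into 14,000-line chunks
head -n 1 minutes.csv > header.csv
tail -n +2 minutes.csv | split -l 14000 - part_

# Prepend the header to every chunk so the CSV node still sees column names
for f in part_*; do
    cat header.csv "$f" > "$f.csv"
done
```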

Best regards