It isn't the best option if you still need access to the data in your Node-RED flows, because then you'd either have to keep it in memory anyway or retrieve it from the DB again.
Really, until you get to much larger data stores (in which case you'd want a full DB engine rather than SQLite anyway), or you need more complex relational SQL that would be harder and less efficient to reproduce in JavaScript code, I don't see a lot of point in using a DB.
A JavaScript object is pretty powerful and fairly easy to deal with. If you need more complex data handling, you could always use a data analysis tool like nodejs-polars, which gives you a dataframe handler similar to popular Python analysis tools.
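As a sketch of what I mean, here is plain JavaScript doing the kind of grouping/averaging that people often reach for a DB to do. The data and field names here are invented purely for illustration:

```javascript
// A stream of timestamped readings, e.g. accumulated in flow context
const readings = [
    { sensor: "kitchen", temp: 21.4, ts: 1700000000000 },
    { sensor: "kitchen", temp: 22.1, ts: 1700000060000 },
    { sensor: "hall",    temp: 19.8, ts: 1700000000000 },
]

// Group by sensor and compute an average - no DB engine needed
const bySensor = {}
for (const r of readings) {
    const s = bySensor[r.sensor] ??= { count: 0, sum: 0 }
    s.count++
    s.sum += r.temp
}
for (const [sensor, s] of Object.entries(bySensor)) {
    console.log(sensor, (s.sum / s.count).toFixed(1))
}
```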
A SQL DB will be more efficient at handling certain types of table relationships, like joins, and at certain calculations across tables, but at the cost of additional memory and processor overhead. Don't forget that to get performance out of a DB, at least the indexes need to be held in memory. If you are running Node-RED on a memory-constrained device, you may already be pushing your memory quite hard. Also, when doing lots of fast inserts, the very indexing that makes queries efficient actually reduces insert performance, as the engine has to update not only the table but also the index(es) and, for full engines, the transaction log as well.
Bottom line is that you have to go quite far to beat the native, in-memory performance of node.js variable handling.
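In Node-RED terms, that mostly just means the context store. A minimal function-node sketch (the `history` key and the cap size are arbitrary choices):

```javascript
// Append the incoming reading to an in-memory array held in flow context
const history = flow.get("history") ?? []
history.push({ ts: Date.now(), value: msg.payload })

// Cap the array so memory use stays bounded
if (history.length > 10000) history.shift()

flow.set("history", history)
return msg
```

And if the data needs to survive a restart, the built-in persistent (file-based) context store covers that without any DB at all.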
Well, it's probably worth a bit of investigation; I suspect that even ChatGPT could guide you on this.
SQL DBs are "relational". That is, they use row-oriented tables where the columns are pre-defined and rows can be indexed. If your data looks like a CSV file, a SQL DB should be good at handling it. Even better if your data looks like several CSV files, each sharing a common column.
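For illustration, a rough sketch of that "several CSV files with a common column" case from Node.js, using the better-sqlite3 package (the table and column names are invented):

```javascript
const Database = require("better-sqlite3")
const db = new Database("example.db")

// Two CSV-like tables sharing a common column (sensor_id)
db.exec(`
    CREATE TABLE IF NOT EXISTS sensors  (sensor_id INTEGER PRIMARY KEY, location TEXT);
    CREATE TABLE IF NOT EXISTS readings (sensor_id INTEGER, ts INTEGER, temp REAL);
`)

// A cross-table join with aggregation - exactly what SQL engines are built for
const rows = db.prepare(`
    SELECT s.location, AVG(r.temp) AS avg_temp
    FROM readings r
    JOIN sensors  s ON s.sensor_id = r.sensor_id
    GROUP BY s.location
`).all()
```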
However, if your data is primarily a stream of timestamped values, you will probably be better off with a column-oriented DB such as the time-series DBs like InfluxDB. Not only are these more efficient at recording streams of timestamped data, they are also really good at time/date-based analysis. Postgres has a time-series extension (TimescaleDB) as well if you want to stay closer to a familiar SQL DB engine.
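A sketch of what writing a point to InfluxDB looks like with their JavaScript client (@influxdata/influxdb-client); the URL, token, org and bucket below are placeholders. In Node-RED you'd more likely use the contributed node-red-contrib-influxdb nodes, but this shows the shape of the data:

```javascript
const { InfluxDB, Point } = require("@influxdata/influxdb-client")

// Connection details are placeholders for illustration only
const writeApi = new InfluxDB({ url: "http://localhost:8086", token: "my-token" })
    .getWriteApi("my-org", "my-bucket")

// Each reading becomes a timestamped point; Influx handles the time indexing
writeApi.writePoint(
    new Point("temperature").tag("sensor", "kitchen").floatField("value", 21.4)
)

writeApi.close() // flushes any buffered points
```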
If your data is super-simple key/value pairs, you might, I suppose, even look at something like Redis.
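Again just a sketch, using the node-redis client (connection details and key names are made up):

```javascript
const { createClient } = require("redis")

;(async () => {
    const client = createClient() // defaults to localhost:6379
    await client.connect()

    // Plain key/value - about as simple as storage gets
    await client.set("sensor:kitchen:temp", "21.4")
    console.log(await client.get("sensor:kitchen:temp"))

    await client.quit()
})()
```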
The question really is: Why are you retaining the data? What are you going to do with it in the future? (OK, yes, that's 2 questions!)