If you have an unreliable platform, you need something with the lowest possible write latency. Redis is designed for that kind of thing - though not usually running on the same server; more usually, the Redis server would sit on a low-latency network connection.
Redis is a caching store rather than a full database and so is generally much simpler and lower latency in use.
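As a rough illustration only (the npm `redis` v4 client and the `redis.local` hostname here are my assumptions, not part of any particular setup), a write looks like this:

```javascript
// Sketch using the npm "redis" (v4) client - host and key names are illustrative.
const { createClient } = require('redis');

async function main() {
    // Connect to a Redis server over the local, low-latency network
    const client = createClient({ url: 'redis://redis.local:6379' });
    client.on('error', (err) => console.error('Redis error', err));
    await client.connect();

    // A single SET is a fast, atomic, in-memory write - no large JSON
    // file rewrite is involved at the point of capture.
    await client.set('sensor:temperature', JSON.stringify({ value: 21.4, ts: Date.now() }));

    await client.quit();
}

main().catch(console.error);
```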
Personally, in the situation you are in, I would output things as quickly as possible to MQTT with replication to an external MQTT service. That would likely give you pretty reliable, low-latency, replicated data.
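For instance (a sketch assuming the npm `mqtt` client; the broker URL and topic are placeholders), each reading can be published to the local broker with QoS 1 and the retain flag, and the local broker can then be bridged to the external service:

```javascript
// Sketch using the npm "mqtt" client - broker URL and topic are illustrative.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://localhost:1883');

client.on('connect', () => {
    const reading = JSON.stringify({ value: 21.4, ts: Date.now() });
    // QoS 1 + retain: the broker confirms delivery and keeps the last value,
    // so a restarted consumer immediately gets the latest state.
    client.publish('site/sensor/temperature', reading, { qos: 1, retain: true }, (err) => {
        if (err) console.error('publish failed', err);
        client.end();
    });
});
```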
But, of course, much depends on the data, the flows, and the performance of your other services.
I would still want to know why the data is being corrupted though. Is it that the data is too large and therefore takes too long to write the in-memory JSON to file (which is how Node-RED's persistent file context storage works)?
Also, if you are getting corrupted data writes and you are using a Pi with SD-card storage, there is a very good chance that the card itself is damaged due to unclean shutdowns. Even when using an SSD or HDD, you are quite likely to get filing system corruption if you don't enforce clean shutdowns. A regular disk check will be needed.
I do think it does what it can. It uses the standard Node.JS filing system library (actually I think it uses the fs-extra library but it is much the same thing) which already tries to be as reliable as possible. But filing systems are filing systems - if the power goes off halfway through a large update, you are probably going to get a corrupted file.
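For reference, the usual filing-system-level mitigation is to write to a temporary file and then rename it over the original, since a rename within one filesystem is atomic. This is only a sketch of the general pattern - I am not claiming this is what Node-RED or fs-extra do internally:

```javascript
// General write-then-rename pattern - a sketch, not necessarily what
// Node-RED/fs-extra do internally. A fuller version would also fsync.
const fs = require('fs/promises');

async function atomicWriteJSON(path, data) {
    const tmp = path + '.tmp';
    // Write the complete file under a temporary name first...
    await fs.writeFile(tmp, JSON.stringify(data));
    // ...then rename it into place. On the same filesystem the rename is
    // atomic, so a power cut leaves either the old file or the new one,
    // never a half-written mix of both.
    await fs.rename(tmp, path);
}
```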
So also check the size of the data you have in a specific context variable and look to split it into smaller chunks if possible. Or break it down and send it as chunks to a Mosquitto broker with retained messages. It is possible to map a JSON structure onto multiple topics fairly easily.
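Something like this (again assuming the npm `mqtt` client; the broker URL and `site/device1` topic root are made up) walks an object and publishes each leaf value to its own retained topic:

```javascript
// Sketch: publish each leaf of a JSON object to its own retained topic.
const mqtt = require('mqtt');

function publishTree(client, root, obj) {
    for (const [key, value] of Object.entries(obj)) {
        const topic = `${root}/${key}`;
        if (value !== null && typeof value === 'object') {
            publishTree(client, topic, value);   // recurse into sub-objects
        } else {
            // Each small retained message rewrites quickly, so a power cut
            // risks at most one leaf rather than the whole structure.
            client.publish(topic, String(value), { qos: 1, retain: true });
        }
    }
}

const client = mqtt.connect('mqtt://localhost:1883');
client.on('connect', () => {
    publishTree(client, 'site/device1', {
        temperature: 21.4,
        humidity: { value: 48, unit: '%' }
    });
    client.end();   // default end() waits for in-flight messages to be acked
});
```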
If you need to guarantee data capture, Node-RED is almost certainly not the correct tool to be using. It is too generic.