For some time now, I've used my Pi just to collect local sensor data and forward it over MQTT to a 'free' Oracle cloud server, which does all the heavy lifting: storing it in a database, visualising it in Grafana and so on. So last month's FlowForge webinar 'Build an Edge-to-Cloud Solution with the MING Stack' was of interest to me (unfortunately it hasn't been saved on the FlowForge website, but here is a link from Influx).
One issue I've encountered is that if the internet temporarily fails, the edge-to-cloud connection drops and I lose data, which shows up as gaps in the charts. So I reached out to Jay Clifford (Developer Advocate, InfluxData), who kindly came up with a great solution.
Jay suggested passing the sensor data into the node-red-contrib-influxdb node, which is configured to send its output via a local IP/port to Telegraf, also installed on the Pi.
Telegraf in turn forwards the data over HTTP directly to InfluxDB, which is hosted in the cloud alongside Grafana.
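For anyone wanting to try the same thing, here's a rough sketch of the kind of Telegraf config involved. I'm assuming the influxdb_v2_listener input plugin as the local endpoint that Node-RED writes to; the port, cloud URL, organisation, bucket and token below are all placeholders, not my real values:

```toml
# /etc/telegraf/telegraf.conf - illustrative sketch only, values are placeholders

[agent]
  ## Number of metrics held in memory while the output is unreachable
  ## (10,000 is the default)
  metric_buffer_limit = 10000

## Accept InfluxDB-style writes from Node-RED on a local port
[[inputs.influxdb_v2_listener]]
  service_address = ":8086"

## Forward everything to the cloud-hosted InfluxDB over HTTP(S)
[[outputs.influxdb_v2]]
  urls = ["https://eu-central-1-1.aws.cloud2.influxdata.com"]
  token = "${INFLUX_TOKEN}"
  organization = "my-org"
  bucket = "sensors"
```

The node-red-contrib-influxdb server config then simply points at 127.0.0.1 and whatever port the listener is on, rather than at the cloud instance.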
Telegraf has a configurable internal buffer that holds any metrics not yet acknowledged by InfluxDB. The default limit is 10,000 metrics, which in my case is enough to cover most network downtime.
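If 10,000 metrics isn't enough headroom, the limit is a single line in the [agent] section. At my ingest rate of roughly 8,000 metrics per hour (as in the test below), the default covers about 75 minutes of downtime, so doubling it would very roughly cover two and a half hours. The value here is only an example, not a recommendation:

```toml
[agent]
  ## Buffered metrics are held in RAM, so size this with the Pi's memory in mind.
  ## 20,000 is just an example figure.
  metric_buffer_limit = 20000
```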
To test, I set up two feeds: one using MQTT as before, and the other going through Telegraf as described above.
I'm feeding approximately 8,000 metrics per hour to both. I then disabled the internet for roughly an hour before re-enabling it, so around 8,000 metrics accumulated during the outage, comfortably within the 10,000-metric buffer.
Looking at just one of the feeds, the MQTT data is shown as white circles and the Telegraf data as a blue line. As can be seen, the buffered metrics for the whole outage period were written through as soon as the network resumed.