InfluxDB output error handling

Hi all,
I am successfully writing to my InfluxDB database using JSON messages. I'm writing one message per second and so far everything works perfectly. However, I am looking for advice on how to do this more intelligently.

  1. Should I be storing at least a few messages and then writing them to the database in one dump? If so, does this just entail combining the messages into one large JSON message and sending it off with the influx-batch node?

  2. At some point I am sure I will lose connection with the InfluxDB server, or it will have to restart or have some other issue, so how do I ensure that I do not lose any data during that time? If I begin using a batch process, I suppose I would try to send the message and, if there is an error, keep appending to the message and try to send it later. How do I get access to the error message from either the standard influxdb node or the batch node? I can see error messages appear in the debug sidebar, but how can I handle them in my Node-RED flow?

Thanks!

Hi,

  1. You could use a batch node (group messages by time interval) paired with a join node to aggregate values over 10s or so, and then create the points for the influx-batch node.

  2. In my flows, I use a catch node that is tied only to the influx-batch node (not "all nodes"), so the msg.payload sent on by the catch node will always contain the array of influx points. At the moment I just write these to a file (json node --> file node), which can be imported later (see the sketch below).

So whenever a write operation fails, all missing influx points will be stored in the file.
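
If you ever need to replay that file, here is one minimal sketch (assuming influx.dat holds one JSON-serialised points array per line, as the json --> file nodes above produce): read it with a file-in node set to "a msg per line", then pass each message through a function node like this into the influx-batch node:

    // Function node: parse one stored line back into influx points.
    // Each line of influx.dat is the JSON-serialised array of points
    // from a failed write, so parsing it restores msg.payload in the
    // shape the influx-batch node expects.
    msg.payload = JSON.parse(msg.payload);
    return msg;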

Thank you. After posting my comment I stumbled on the catch node but wasn't really sure how I could use it. Your idea makes a lot of sense! Thanks for the help.

A small example of this pattern, if someone else needs it too:

[{"id":"39707832.e4efe8","type":"batch","z":"17cef33a.d6d7bd","name":"","mode":"interval","count":10,"overlap":0,"interval":"5","allowEmptySequence":false,"topics":[],"x":690,"y":340,"wires":[["d413bb3b.64f998"]]},{"id":"3f7f0ade.84fa56","type":"inject","z":"17cef33a.d6d7bd","name":"","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"","once":false,"onceDelay":0.1,"x":500,"y":340,"wires":[["39707832.e4efe8"]]},{"id":"494a6509.c6ce1c","type":"join","z":"17cef33a.d6d7bd","name":"","mode":"auto","build":"string","property":"payload","propertyType":"msg","key":"topic","joiner":"\\n","joinerType":"str","accumulate":"false","timeout":"","count":"","reduceRight":false,"x":1070,"y":340,"wires":[["be73f92c.8107e8"]]},{"id":"d413bb3b.64f998","type":"function","z":"17cef33a.d6d7bd","name":"create point","func":"let point = {\n    timestamp: Date.now(),\n    measurement: 'measurement',\n    tags: {\n        foo: 'bar'\n    },\n    fields: {\n        value: msg.payload\n    }\n};\n\nmsg.payload = point;\n\n\nreturn msg;","outputs":1,"noerr":0,"x":870,"y":340,"wires":[["494a6509.c6ce1c"]]},{"id":"be73f92c.8107e8","type":"influxdb batch","z":"17cef33a.d6d7bd","influxdb":"266bb20f.6a81ee","precision":"ms","retentionPolicy":"","name":"","x":1330,"y":340,"wires":[]},{"id":"a2212189.549de","type":"catch","z":"17cef33a.d6d7bd","name":"","scope":["be73f92c.8107e8"],"x":890,"y":440,"wires":[["47c463fc.1974ec"]]},{"id":"47c463fc.1974ec","type":"json","z":"17cef33a.d6d7bd","name":"","property":"payload","action":"","pretty":false,"x":1050,"y":440,"wires":[["ebe307ae.0af588"]]},{"id":"ebe307ae.0af588","type":"file","z":"17cef33a.d6d7bd","name":"","filename":"influx.dat","appendNewline":true,"createDir":false,"overwriteFile":"false","x":1200,"y":440,"wires":[[]]},{"id":"266bb20f.6a81ee","type":"influxdb","z":"","hostname":"127.0.0.1","port":"8086","protocol":"http","database":"localhost","name":"","usetls":false,"tls":""}]

This pattern also works well for other storage nodes (MySQL, MSSQL, ...).

If you are writing one message per second then I wouldn't worry about batching them up. If it were 1000/sec or 10000/sec then it might be a different matter.

Thanks Colin. It's good to know where the limits are. Initially I was having some serious IO issues that I thought were perhaps related to bad schema design or to writing data too quickly. However, after updating from InfluxDB 1.7.3 to 1.7.4 and changing the index setting in the config file from the default "inmem" to "tsi1", I have not had any problems. I was still wondering if it was poor form to write as frequently as I am, but it sounds like I am well within reasonable limits.
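
For anyone else making the same change: in InfluxDB 1.x that index setting lives in the [data] section of influxdb.conf, and the value is "tsi1":

    [data]
      # switch from the default in-memory index to the disk-based TSI index
      index-version = "tsi1"

Note that existing shards keep their old index until they are converted (the influx_inspect buildtsi tool does that), so on its own the setting only affects new shards.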

I always start with the KISS principle (Keep It Simple, Stupid): do things the simplest way initially, and only look at optimising if it turns out there are performance issues. When it gets to that stage, it virtually always turns out that the bottlenecks you anticipated initially are not the significant ones, so the effort of premature optimisation is usually a waste of time and makes the solution more complex, and therefore more likely to be buggy.