InfluxDB node gives "internal server error" but the data is written to the database

My system
I am running InfluxDB 1.7.6 on Ubuntu 16.04, using the tsi1 index, and I have tried various wal-fsync-delay values in my config file, from 0ms to 5000ms. Ubuntu runs as a virtual server on my physical Ubuntu server, and the database is written to a RAID 6 array with spinning disks.

Problem
Mostly, I am writing data from Node-RED to InfluxDB using the influxdb node, and seemingly at random I get generic "internal server error" messages (as shown in the Node-RED debug panel) after writing data. However, I have checked, and the data is definitely written to InfluxDB.

Does anyone have any idea why these errors appear, or whether I need to modify settings on either Node-RED or InfluxDB? Here is a screenshot showing the debug node output of the JSON written to InfluxDB, followed by the error message the influx node spits out, and then my terminal inspection of InfluxDB showing that the value is actually there:

[Screenshot: nodered_influx_forum]

What is node c24e7...?

That is the influxdb batch node.

Which version of influxdb are you using? The server, not the node.

Influx 1.7.6

I suppose the next step is to look in the InfluxDB log.
Is InfluxDB running on the same machine as Node-RED?

InfluxDB is on a separate Ubuntu server machine.
I have all kinds of these errors in the log (accessed with: sudo journalctl -u influxdb.service):

Jun 02 16:00:47 spvirt4 influxd[19351]: ts=2019-06-02T22:00:47.546934Z lvl=error msg="[500] - "timeout"" log_id=0FnKmlS0000 service=httpd
Jun 02 16:01:20 spvirt4 influxd[19351]: ts=2019-06-02T22:01:20.008765Z lvl=info msg="failed to store statistics" log_id=0FnKmlS0000 service=monitor error=timeout

This looks to be a common problem, but it's unclear whether the data is still being written for those folks, as it is for me.

My journalctl log is massive, by the way: over 30,000 lines, and it has only been running since May 14. I'll do some googling too, but if you have recommendations on how to limit its size, that would be awesome. Thanks!
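In case it helps, journald's disk usage can be capped system-wide in /etc/systemd/journald.conf. This is a sketch, and the sizes below are just example values, not recommendations for your setup:

```
# /etc/systemd/journald.conf (sketch; sizes are examples)
[Journal]
SystemMaxUse=200M        # total cap on persistent journal storage
SystemMaxFileSize=50M    # rotate individual journal files at this size
```

Apply the change with `sudo systemctl restart systemd-journald`. For an immediate one-off trim of the existing journal, `sudo journalctl --vacuum-size=200M` does the same thing without a config change.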

Is the Ubuntu server heavily loaded? Particularly the disc.

InfluxDB is hosted on a virtual server that runs nothing else. I have been struggling with some high I/O usage issues there; I have had some luck tracking them down, but I would not say the problem is totally solved. Part of the issue was how I provisioned processors for the virtual server.

In fact, I get a timeout error even when I insert a value using the InfluxDB command-line interface, and just as with Node-RED, the value is confirmed to be written to the database. So this really has nothing to do with Node-RED. I posted this info on the InfluxDB forum a while back but did not get a response; I might try posting again with some updated info. It looks like I am not alone with this problem.
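For what it's worth, the "data written but the request reports a timeout" pattern is consistent with InfluxDB 1.x returning a 500 "timeout" once its internal write timeout expires, even though the write completes shortly after. That timeout is configurable in the config file; a sketch, with an example value rather than a recommendation:

```
# /etc/influxdb/influxdb.conf (InfluxDB 1.x sketch; value is an example)
[coordinator]
  write-timeout = "30s"   # default is "10s"; slow disks can delay the ack past this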

Is there a way to stop the influxdb batch node's timeout errors from being written to syslog? This is not a debug node; the influxdb batch node itself is logging the error. I have yet to figure out my problem, but since all the data is written to InfluxDB, I'm inclined to ignore it for now. However, my syslog is growing really quickly with all these error messages.
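On the Node-RED side, log verbosity is controlled by the logging block in settings.js. This is a sketch of that standard block; whether it silences these particular messages depends on the level the node logs them at (errors are only suppressed below the "error" level):

```
// settings.js (sketch; level shown is an example)
logging: {
    console: {
        level: "warn",   // one of: fatal, error, warn, info, debug, trace
        metrics: false,
        audit: false
    }
}
```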

As far as InfluxDB is concerned, it's pretty low usage: a few points written per minute on average.

You can configure logging in the InfluxDB config file: you can tell it where and, to some extent, what to log.
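For InfluxDB 1.x, the relevant sections are [logging] (overall log level) and [http] (per-request access logging, which is often the bulk of the volume). A sketch; the values shown are examples, not defaults you must change:

```
# /etc/influxdb/influxdb.conf (InfluxDB 1.x sketch; values are examples)
[logging]
  level = "warn"        # log only warnings and errors (default is "info")

[http]
  log-enabled = false   # disable per-request HTTP access log lines
```

Note that the "[500] - timeout" lines in the journal above are error-level, so lowering the level to "warn" would not remove those in particular, but it does cut down the info-level noise.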