I just noticed this recently, but I randomly see the following:
But when I check the node:
Is this some type of timing thing? An intermittent network issue? It seems more annoying than anything else. My database server is rock solid, no warnings or errors, and none of my other applications, Python scripts, etc., are reporting connection issues with the database.
I can't seem to reproduce the issue on demand; for example, restarting NR does not trigger the behavior. Looking through the database server log, I am not seeing spurious disconnects, so I am still looking for the cause, but I thought someone might have seen this or have suggestions. I'm thinking at some point I may set up a watchdog routine to watch the database connections. LOL. One more quirky thing to track down.
Ah, right, I should have noted... MariaDB in this case.
Server version: 10.3.27-MariaDB-0+deb10u1-log Raspbian 10
So a mysql variant. It is interesting: I have been using MariaDB for years, and with NR now for over a year, if memory serves. Very few issues thus far. I have one instance running on a Pi, another on an old i386 Linux system. The Pi transaction-log ships deltas to the i386 system, whereas the NR flows connect to the Pi instance. I checked /var/log/mysql/error.log; no errors during the time frame of the last hiccup.
Since the next line in the code after that sets the status to "not yet connected", that suggests it was a transient issue and by the time you looked at the node it had already reconnected.
Is node-red possibly restarting? Look in syslog or the node-red log to find out.
Also you could attach a Catch node and a Status node and connect them to debug nodes set to output to the console. Then you will be able to look back through the node-red log (or syslog) to get a clue as to what is going on.
No evidence of NR restarting; in fact, while I was reviewing the various logs I had an active session open to the editor, etc., and the issue just happened to trigger... again. Since then, nothing. So it has happened twice in the last 36 hours, and in neither case did NR restart. The node reconnects quickly, whatever the cause.
It does not seem to happen on deploy; that would have been too easy. And it has disappeared again over the last few days. I thought it might be a connection keepalive timeout thing, but I am not optimizing or pooling connections... maybe the mysql configuration node does that itself? It definitely feels like an environment-specific scenario... sooner or later I will catch what is going on. The Pi hosting the database is a Pi 4 with 8GB, so I am watching it; nothing obvious thus far.
I had a similar (timing??) problem a few months ago and ended up creating a simple 'keep alive' flow.
I just do a SELECT query every 15-minutes. There have been some updates (one just over a week ago) to node-red-node-mysql since I wrote the code, so I don't know if my workaround is still needed.
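For anyone wanting to copy the idea, the keep-alive can be sketched as an Inject node firing every 15 minutes into a Function node like the one below, wired to the mysql node. This assumes node-red-node-mysql reading its SQL statement from `msg.topic`, which is how that node works as far as I know; `keepAlive` here is just a name for the Function node body.

```javascript
// Sketch of the Function node body for a keep-alive flow: set a trivial
// query on msg.topic (node-red-node-mysql takes the statement from there)
// and pass the message on to the mysql node.
function keepAlive(msg) {
    msg.topic = "SELECT 1"; // cheapest possible query, just exercises the connection
    return msg;
}
```

In the editor the Function node contains just the two lines inside the function; the Inject node's repeat interval does the scheduling.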
Cool... 15 minutes is a solid default, I would say. With any SQL connection, new connections are expensive, so you want to keep them alive longer than you would with other communication protocols in general.
Connection pooling is also important. I use one or two SQL configuration node references across many flows rather than creating many unique SQL connections, in effect reducing the connect/disconnect cycling as much as possible. Any SQL administrator or best-practices guide will scream about applications that connect and disconnect too often.
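The reuse idea above is the same thing a connection pool automates. A minimal illustration of the pattern, outside Node-RED (the `connect` function here is a stand-in for a real driver call, not any particular library's API):

```javascript
// Illustration of connection reuse: callers share one handle instead of
// opening a new one per request. `connect` is a placeholder for a real
// driver call (TCP connect + auth handshake make it expensive in practice).
let connectCount = 0;
let shared = null;

function connect() {
    connectCount += 1;           // expensive step we want to do rarely
    return { id: connectCount }; // placeholder for a driver connection object
}

function getConnection() {
    if (shared === null) {
        shared = connect(); // pay the cost once...
    }
    return shared;          // ...then reuse it for every subsequent query
}
```

Ten callers asking for a connection still trigger only one connect; a real pool generalizes this to a small, bounded set of reusable connections.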
For example, ESP (8266 or 32) devices that talk directly to a SQL database have a bad habit of this connect/disconnect pattern... at least the code I have seen does. A much better solution would be for the ESP devices to publish via MQTT to a 'consolidator' NR flow, for example, which then updates the SQL database downstream.
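The consolidator flow can be sketched as mqtt-in → Function → mysql. Below is a possible Function node body; the table and column names (`sensor_data`, `device`, `value`) are purely illustrative, and it assumes node-red-node-mysql accepts named bind variables (`:name`) via `msg.payload`, which I believe it does, but check the node's readme.

```javascript
// Sketch of a Function node for an MQTT -> SQL "consolidator" flow.
// Turns a raw MQTT reading into a parameterised INSERT for the mysql node.
function toInsert(msg) {
    const device = msg.topic;          // MQTT topic, e.g. "sensors/esp32-1"
    const value = Number(msg.payload); // raw reading published by the ESP
    // node-red-node-mysql takes the statement from msg.topic and (assumed
    // here) :name bind variables from msg.payload.
    msg.topic = "INSERT INTO sensor_data (device, value) VALUES (:device, :value)";
    msg.payload = { device: device, value: value };
    return msg;
}
```

With this shape, many ESPs publish cheap MQTT messages and only the single NR flow holds a database connection, which is exactly the connect/disconnect reduction described above.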