So the problem is not that they are not being rotated, but that they are growing to ridiculous sizes. Run
tail -f /var/log/syslog
and you will hopefully get a clue as to what the problem is.
Well, my /var/log/daemon.log currently has 17725 lines in it (wc -l /var/log/daemon.log), of which 15167 contain "Node-RED" (grep Node-RED /var/log/daemon.log | wc -l).
And, frankly, I think my Node-RED logging is out of control at 2MB.
Yours is clearly unacceptable.
This command may be useful. It should list the node ids in the log, with the number of occurrences:
grep Node-RED /var/log/daemon.log | awk '{print $11}' | sort | uniq -c | sort -n
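Note that $11 is just the column where the node id happens to land in my syslog format; if the ids don't line up on your system, this throwaway check prints each field of the first matching line with its index so you can adjust the field number:
grep Node-RED /var/log/daemon.log | head -1 | awk '{for (i = 1; i <= NF; i++) print i, $i}'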
Ignoring anything with fewer than 100 occurrences, I get this:
237 red/nodes/flows.start
563 [exec:1bb8bcbf.633e63]
2230 red/nodes/flows.stop
5628 [exec:5986a4c350aaea76]
5628 [exec:f9f419f7e7b8341b]
I have two nodes that have each run over 5600 times today. I can find these nodes and see what's going on. Is there a loop? Am I receiving data from MQTT faster than I expected? etc.
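If you want to trace an id like 5986a4c350aaea76 back to the actual node, you can paste it into the editor's search box, or query the flow file directly. A minimal sketch, assuming jq is installed and the flow file is in the default location (on older installs it is flows_<hostname>.json, so adjust the path):
jq '.[] | select(.id == "5986a4c350aaea76")' ~/.node-red/flows.json
That prints the node's JSON, including its name and the id of the tab it sits on (the z property).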
I think I found the problem.
I also run a node.js script, which I found on GitHub a few years ago, to connect to my Ziggo Next STB.
This script has not been updated for node.js 16.
See some of the errors below... this script is constantly heartbeating and rapidly fills the daemon and syslog log files.
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: errno: -113,
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: code: 'EHOSTUNREACH',
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: syscall: 'connect',
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: address: '192.168.1.64',
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: port: 8009
Sep 20 00:12:26 NodeRED-Pi Node-RED[373]: }
Sep 20 00:12:26 NodeRED-Pi npm[13145]: /home/pi/Ziggo/NextRemoteJs/node_modules/axios/lib/core/createError.js:16
Sep 20 00:12:26 NodeRED-Pi npm[13145]: var error = new Error(message);
Sep 20 00:12:26 NodeRED-Pi npm[13145]: ^
Sep 20 00:12:26 NodeRED-Pi npm[13145]: Error: Request failed with status code 403
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at createError (/home/pi/Ziggo/NextRemoteJs/node_modules/axios/lib/core/createError.js:16:15)
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at settle (/home/pi/Ziggo/NextRemoteJs/node_modules/axios/lib/core/settle.js:17:12)
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at IncomingMessage.handleStreamEnd (/home/pi/Ziggo/NextRemoteJs/node_modules/axios/lib/adapters/http.js:269:11)
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at IncomingMessage.emit (node:events:525:35)
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at endReadableNT (node:internal/streams/readable:1358:12)
Sep 20 00:12:26 NodeRED-Pi npm[13145]: at processTicksAndRejections (node:internal/process/task_queues:83:21) {
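Incidentally, the EHOSTUNREACH entries just mean that whatever is trying to connect to 192.168.1.64 on port 8009 cannot reach it at all, and it keeps retrying. A quick way to check that port from the Pi itself (plain netcat, nothing Node-RED specific):
nc -zv 192.168.1.64 8009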
With the frequency and amount of logging you have, I would put money on there being a loop in your flows (or a terribly buggy node).
True... it is this node.js script: /home/pi/Ziggo/NextRemoteJs/node_modules
This script accesses the functions of my STB (Ziggo Next) over HTTP.
I installed it a few years ago from GitHub.
Back then I also created a Node-RED flow which accesses only the ON/OFF functions of the script with an http request node.
The creator has not updated the script for node.js v16.
I guess that is what causes the constant loop of error messages?
There is a thing called logrotate going on. Log files are kept (for a week?) and the oldest ones get deleted. All but the current and one previous file are compressed as .gz.
It is all configurable, but I'm not sure how, nor what the defaults are.
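For what it's worth, on a stock Debian/Raspberry Pi OS install the rotation of daemon.log is driven by /etc/logrotate.d/rsyslog; from memory it looks roughly like this, so check your own file for the actual values:
/var/log/daemon.log
{
        rotate 4
        weekly
        missingok
        notifempty
        compress
        delaycompress
        postrotate
                # the exact reload command differs between Debian releases
                /usr/lib/rsyslog/rsyslog-rotate
        endscript
}
The delaycompress directive is why the most recent rotated file is left uncompressed while the older ones end up as .gz.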
Writing huge log files to the root filesystem, especially on an SD card, is a significant issue for your Pi's lifespan.
I use Log2ram to reduce log writes to the storage on my main Pi.
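One thing to keep in mind with Log2ram: it copies /var/log into a fixed-size RAM disk and syncs it back to the card periodically, and the size is set in its config file. From memory of the project's README (so treat the value as approximate):
# /etc/log2ram.conf
SIZE=40M
The SIZE may need increasing if your logs are anywhere near that big.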
I've never had any issues with Pi log files on SD card in all the years I've run Pis. A decent card with plenty of spare room, and one that does wear levelling (most do not; the Samsung Evo cards do), takes care of it all and the card will last years. Not as long as a disk or SSD, of course, but long enough given the cost.
And putting the log files on a RAM disk can cause other issues, especially in cases like this where a log file suddenly grows really big.
Interestingly, on my Debian server:
Well, I can't explain that. Maybe you used the "Alternative installation script", or maybe you changed the log level in settings.js.
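For reference, the log level lives in the logging section of settings.js; in a default install the relevant part looks roughly like this (exact comments and defaults vary between releases):
logging: {
    console: {
        level: "info",   // one of: fatal, error, warn, info, debug, trace
        metrics: false,
        audit: false
    }
},
Dropping level to "warn" or "error" substantially reduces what Node-RED writes to syslog.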
Yes, that's true. We don't know what size or make the OP's SD card is, but he clearly failed at the "plenty of spare room" step.