Tail node issues

I wanted to use the Tail node to monitor log files from various machines. In the tests I ran locally, the Tail node worked fine and sent each new line that I appended to the file.

The remote folders that hold the logs are mounted over NFS, and if I run the following command from the Node-RED folder, I get a constant stream of data:

    tail -f -n 1 apl_data.log

That leads me to think it should work fine. However, when I set up the Tail node with that same file, I get nothing.

Does anyone know if I'm missing something?

EDIT: I just tested running the tail command above through the exec node, using the full path to the file (the same one configured in the Tail node), and the standard output is exactly what I would expect from the Tail node. The Tail node itself still gives me nothing.

The underlying libraries used by the Tail node don't support watching files over NFS.
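As far as I know, those libraries rely on filesystem events (inotify, via Node's fs.watch), and such events are not generated for writes made by other NFS clients, whereas GNU tail detects remote filesystems and falls back to polling. A quick way to see the difference, assuming inotify-tools is installed (the path is just an example):

    # Watches the NFS-mounted file for events; with the writes coming
    # from another NFS client, this typically stays silent:
    inotifywait -m /mnt/nfs/logs/apl_data.log

    # Plain tail still streams new lines, because it falls back to
    # polling the file instead of waiting for events:
    tail -f -n 1 /mnt/nfs/logs/apl_data.log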

I will stick with tailing through the exec node, then.

What I am not sure about is how Node-RED handles those processes... are they killed if the flows are redeployed or restarted?

I know I can kill them by sending a message to the exec node, but if I'm working on the flows and every deploy spawns several new tails, memory will be scarce soon.
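In the meantime, I can at least check for orphans from a shell (the patterns below are examples):

    # List any tail processes still running, with full command lines:
    pgrep -af 'tail -f'

    # If orphans do pile up after a few redeploys, clean them out:
    pkill -f 'tail -f -n 1 /mnt/nfs/logs'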

I'm posting this here in case someone runs into similar problems and wants to be spared the time and aggravation it cost me.

I had a few remote machines whose logs I had to monitor, some running CentOS Linux, some running Windows (7 and XP).

As pointed out in the previous messages, the Tail node does not support monitoring remote files, so for the Linux machines I mounted the log folders on my Node-RED server over NFS and set up the following flow with the exec node:
[flow screenshot: exec node running the tail command, with the error output looping back through a 60-second delay to retry]
The command being run in the exec node is:

    tail -f -n 1 <full file path>
The backwards loop with the 60-second delay picks up any error code that is thrown and forces a retry. I could have used the -F option (follow with retry), but that doesn't throw errors; it just waits for the file to become available. This way I also get debug output if I need it.
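To make the difference concrete (the path is an example):

    # -f gives up with an error when it can no longer read the file
    # (e.g. the NFS mount drops), which is what feeds the retry loop:
    tail -f -n 1 /mnt/nfs/logs/apl_data.log

    # -F (--follow=name --retry) keeps retrying silently until the file
    # is readable again, so the flow never sees a return code to act on:
    tail -F -n 1 /mnt/nfs/logs/apl_data.log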

This went smoothly and allowed real-time log monitoring (or as close to it as it gets).

Now, for the Windows machines, I had to mount the log folders with Samba. I realised that with the same setup, the tail command would lock the file and the Windows machine would no longer be able to write new lines to its log files.
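For reference, the shares were mounted roughly like this (host, share, mount point and user are placeholders):

    # Mount the Windows log share over CIFS/Samba, read-only since we
    # only ever need to read the logs:
    sudo mount -t cifs //winhost/logs /mnt/winlogs -o ro,username=loguser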

I then tried copying the log files I needed from the mounted folders to a local folder and monitoring them there. The issue is that cp overwrites the whole file, so the tail command (run through the exec node, as above) would resend the entire file every time it was overwritten.

I solved this by setting up two separate flows: one to update the log files in the local folder, and one to monitor those files.

In a similar flow setup, I used an exec node to update the files, with a 60-second retry in case an error code is returned:
[flow screenshot: exec node running the rsync command, with the same 60-second retry loop on errors]
The full command being run is:

    rsync -t --inplace --no-whole-file <remote file full path> <local file path>

  • The -t option preserves the file's timestamp, so I can see when the file was last updated rather than when I last synced it.
  • The --inplace --no-whole-file options ensure that only the lines added since the last sync are transferred, appended in place to the end of the local file (no duplicates), so the Tail node never sees the file being rewritten.
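A filled-in version of the command (paths are placeholders), plus a way to check that the transfer really is incremental:

    # One sync of a Windows log into the locally monitored folder:
    rsync -t --inplace --no-whole-file /mnt/winlogs/app.log /data/logs/app.log

    # --stats reports how much literal data was sent; for an append-only
    # log it should be roughly the size of the new lines, not the file:
    rsync -t --inplace --no-whole-file --stats /mnt/winlogs/app.log /data/logs/app.log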

With this done, I just used the regular Tail node on the local files, sending one message per line, and then processed the messages (in my case, inserting each line into a MySQL database for further analysis).
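For anyone wanting to do something similar, a minimal sketch of the kind of table the lines could go into (database, user, table and column names are all made up for illustration):

    # Hypothetical schema for storing the tailed lines:
    mysql -u nodered -p logdb -e "
      CREATE TABLE IF NOT EXISTS log_lines (
        id INT AUTO_INCREMENT PRIMARY KEY,
        received_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
        line TEXT
      );"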

I hope this is of use.
