How can I stop spline-curve from spamming my logs every 2 minutes?

Hello.

I'm running Node-RED v2.0.6 in a Docker (version 20.10.8) container on a Raspberry Pi 3 B+ running Raspbian Buster (10). The container is launched with a docker-compose (version 1.21.0) file.

I'm not exactly sure when it started, but over the past few days spline-curve has begun spamming my journal logs. It's possible this started after I upgraded the node, although I upgraded several nodes and don't remember whether spline-curve was one of them.

Here's an example of the offending log entry. It recurs every 2 minutes. If I disable that node, the next spline-curve node starts acting up.

Oct 11 09:45:50 raspberrypi bda9e364eea0[647]: {
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   id: '1e81b3fe9a34f938',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   type: 'spline-curve',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   z: '7ce022ba.012614',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   g: '93bf2d67.051f18',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   name: 'enhanced brightness curve',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   output_key: 'lightControl.brightness',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   input_key: 'lightControl.brightness',
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   points: [
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:     { x: 0, y: 0.102, uid: 0, index: 0 },
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:     { x: 0.188, y: 0.427, uid: 1, index: 0 },
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:     { x: 0.748, y: 1, uid: 2, index: 0 }
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   ],
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   x: 1180,
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   y: 260,
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   wires: [ [ '95297af392f08256' ] ],
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]:   _alias: '40d66f29.302368'
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]: }
Oct 11 09:45:50 raspberrypi bda9e364eea0[647]: [ 'lightControl', 'brightness' ]

Here is the docker-compose.yaml I'm using to launch Node-RED:

version: "3"
services:
  node-red:
    container_name: nodered
    image: nodered/node-red:latest
    volumes:
      - "/home/pi/projects/nodered/.node-red:/data"
      - "/etc/localtime:/etc/localtime:ro"
      - "/etc/timezone:/etc/timezone:ro"
    restart: unless-stopped
    network_mode: host
    healthcheck:
      disable: true
    environment:
      - TZ=America/New_York

And the following is my Node-RED settings.js file. Since the issue started, I've changed the logging section to what's shown below in an attempt to resolve it, but it made no difference.

module.exports = {
    uiPort: process.env.PORT || 1880,
    mqttReconnectTime: 15000,
    serialReconnectTime: 15000,
    debugMaxLength: 1000,
    credentialSecret: "--snip--",
    httpNodeAuth: {user:"node",pass:"--snip--"},
    httpStaticAuth: {user:"node",pass:"--snip--"},
    functionGlobalContext: {
    },
    functionExternalModules: false,
    exportGlobalContextKeys: true,
    contextStorage: {
        default: {
            module:"localfilesystem"
        },
    },
    logging: {
        console: {
            level: "fatal",
            metrics: false,
            audit: false
        }
    },
    externalModules: {
    },
    editorTheme: {
        projects: {
            enabled: false,
            workflow: {
                mode: "manual"
            }
        }
    }
}

My theory has been that the log level for Node-RED, docker-compose, Docker, or containerd is somehow set too low (i.e. too verbose). Here is what I've tried so far to fix it, without success.

I have Docker set to log to the systemd journal. My /etc/docker/daemon.json originally looked like this:

{
    "log-driver": "journald",
    "storage-driver": "overlay2",
}

Thinking that might be the cause, I tried changing the log level and disabling debug output. My daemon.json now looks like this, but the spam persists:

{
    "log-driver": "journald",
    "storage-driver": "overlay2",
    "debug": false,
    "log-level": "fatal"
}

Next I tried to set containerd's log level to fatal by modifying /etc/containerd/config.toml. This is how it looked originally:

disabled_plugins = ["cri"]

And I've changed it to the following and restarted containerd, but the messages persist:

disabled_plugins = ["cri"]

[debug]
  address = "/run/containerd/debug.sock"
  uid = 0
  gid = 0
  level = "fatal"

Any thoughts?

Looking at the node's source code, I can see it has two console.log() statements that print out the node's configuration and another property every time an instance of the node receives a message.

This isn't configurable - it is hardcoded into the node.
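
For reference, the pattern looks roughly like this (a simplified sketch, not the node's actual source - the names and handler body are illustrative):

// Simplified sketch of the pattern, not the actual spline-curve source.
// On every incoming message the handler dumps the node's config object and
// the split key path straight to stdout with console.log().
module.exports = function (RED) {
    function SplineCurveNode(config) {
        RED.nodes.createNode(this, config);
        const node = this;

        node.on('input', function (msg) {
            console.log(config);                       // the whole node config object
            console.log(config.input_key.split('.'));  // e.g. [ 'lightControl', 'brightness' ]

            // ...curve interpolation and forwarding happen here...
            node.send(msg);
        });
    }
    RED.nodes.registerType('spline-curve', SplineCurveNode);
};

Because console.log() writes straight to the process's stdout, it bypasses Node-RED's logger completely - which is why the level: "fatal" setting in your settings.js has no effect on these messages.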

You could try raising an issue to ask its author to remove those statements - but the node doesn't look like it has been touched for some time.


Thanks for taking the time to look through the code and track down the issue. I've forked the node and commented out the console.log() statements as a temporary fix.
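
In case it helps anyone else, the change in my fork is essentially just this (illustrative, following the sketch above rather than the node's exact source):

node.on('input', function (msg) {
    // Commented out: these wrote to stdout on every message and flooded the journal.
    // console.log(config);
    // console.log(config.input_key.split('.'));

    // A gentler alternative might be node.debug(config), which goes through
    // Node-RED's logger and so respects the level set in settings.js.
    node.send(msg);
});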

As an aside, I also want to make a correction: changing the log level in containerd's config file does seem to suppress the repeated log message after all (I think).

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.