Ghost of a node in past deleted flow?

Just a curious question, I keep seeing the log report the status of a node, that is part of a flow I have explicitly deleted. What causes this?

For example, the following node was a function node that was part of a flow I deleted in the editor, and yet it is still referenced in the log?

```
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 3, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 4, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 3, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 4, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
```

When this issue occurs, it tends to drive the Node.js process's CPU utilization up beyond anything that makes sense. In this specific example, top reports the following (see below) while the editor is effectively empty: no configured flows, no active flows per se.

```
top - 21:12:11 up 2 days, 12:33,  1 user,  load average: 1.67, 1.93, 1.99
Tasks: 104 total,   2 running,  59 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.4 us,  0.7 sy, 50.0 ni, 47.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   997768 total,   178316 free,   595228 used,   224224 buff/cache
KiB Swap:   102396 total,    78588 free,    23808 used.   304308 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6543 pi        25   5  667176 577972  27564 R 202.0 57.9   5:22.43 node
  335 root      20   0   52932   2560   1332 S   6.6  0.3 271:13.05 pigpiod
 6597 root      20   0    8136   3224   2704 R   0.7  0.3   0:00.30 top
 5177 root      20   0   11660   2648   2444 S   0.3  0.3   0:06.69 sshd
 6465 root      20   0       0      0      0 I   0.3  0.0   0:01.89 kworker/u8:3-br
 6534 root      20   0       0      0      0 I   0.3  0.0   0:00.18 kworker/0:0-eve
    1 root      20   0   28084   3048   2368 S   0.0  0.3   0:07.90 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.27 kthreadd
    3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp
    8 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq
    9 root      20   0       0      0      0 S   0.0  0.0   0:17.56 ksoftirqd/0
   10 root      20   0       0      0      0 I   0.0  0.0   1:32.18 rcu_sched
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.10 migration/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/1
   14 root      rt   0       0      0      0 S   0.0  0.0   0:00.08 migration/1
   15 root      20   0       0      0      0 S   0.0  0.0   0:01.63 ksoftirqd/1
   18 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/2
   19 root      rt   0       0      0      0 S   0.0  0.0   0:00.07 migration/2
   20 root      20   0       0      0      0 S   0.0  0.0   0:01.83 ksoftirqd/2
   23 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/3
```

Or is this some odd one-off situation? I have so far only seen this on one specific Pi 3 device, but I have not been looking for it elsewhere yet either.

So looking at the flows.json... it really does have a flow still defined. So the editor and the file are out of sync.

There is a known, rare issue where a tab is deleted but its nodes remain; they become lost to the editor. Given the node you have running in the background, check its z property - if it is set to 0 or an empty string, then you have hit this issue. Unfortunately it happens so rarely that we've not been able to debug it properly.
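
If you want to check the whole file in one go, something like this rough sketch should list any such nodes. It assumes the flows file is at the default Pi location - adjust the path to wherever yours lives:

```javascript
// ghost-nodes.js - rough check for "lost" nodes in a Node-RED flows file.
// Lists nodes whose z property is 0/empty, or points at a tab/subflow that
// no longer exists. The path is an assumption - adjust it to your install
// (often ~/.node-red/flows.json or flows_<hostname>.json).
const fs = require('fs');

const flows = JSON.parse(fs.readFileSync('/home/pi/.node-red/flows.json', 'utf8'));

// ids of every container a node could legitimately sit on
const containers = new Set(
  flows.filter(n => n.type === 'tab' || n.type === 'subflow').map(n => n.id)
);

flows.forEach(n => {
  if (n.type === 'tab' || n.type === 'subflow') return;
  // Config nodes (mqtt-broker etc.) may legitimately have no z, so only
  // flag nodes where z exists but is falsy, or points at a missing tab.
  if ('z' in n && !n.z) {
    console.log(`ghost node (empty z): ${n.type} "${n.name || n.id}"`);
  } else if (n.z && !containers.has(n.z)) {
    console.log(`ghost node (missing tab ${n.z}): ${n.type} "${n.name || n.id}"`);
  }
});
```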

In the next release, 1.2, we've added code to the editor to detect nodes with this 0/empty z property and add a 'recovery' tab to put them on, so the user can do something about them.

Cool. The interesting thing was that this broken flow file sent the CPU utilization above 100%, but that happened ONLY while a deploy was being attempted. When the broken flow was just running, it stayed close to 100% but not over.

I think I have the same or a related issue (on NR 1.2.2), since I am seeing MQTT traffic in my broker (domoticz) that I know I created at some point in the past few days but have since replaced with another MQTT message format. The nodes are still sending messages to the MQTT node, which is still sending messages to the broker.

Multiple nodes are involved: at least an inject, a function node and an MQTT out node.

Where can I find the recovery tab you mention? Can I provide anything that would be useful for debugging purposes?

What do you see if you use the Search feature (Ctrl+F) to search for the topic?
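
If the editor search doesn't turn anything up, the same check can be run against the flows file directly - a minimal sketch, with the path and the search string as placeholders:

```javascript
// Offline equivalent of the editor search: find every node in the flows file
// that mentions a given string (an MQTT topic, part of a payload, ...),
// including nodes that no longer appear on any tab.
const fs = require('fs');

const flows = JSON.parse(fs.readFileSync('/home/pi/.node-red/flows.json', 'utf8'));
const needle = 'domoticz/in'; // placeholder - put your topic here

flows
  .filter(n => JSON.stringify(n).includes(needle))
  .forEach(n => console.log(`${n.type} "${n.name || n.id}" (z: ${n.z || 'none'})`));
```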

When searching for parts of the MQTT message content, I've found this node:

So I think I have found the source of the MQTT messages: a resend every 5 minutes matches the log of the MQTT broker and the Domoticz device. I was playing around with trigger nodes to try to create a failsafe mode for a specific actuator. When I decided I did not want to include that in my functional flows, I cut/pasted the 2 nodes out of the flow (where they were connected to an MQTT out node) and into my "Test" flow.

So the question is: why does a node that is no longer connected to an MQTT out node still send messages through the MQTT out node it used to be connected to?

I've deleted the trigger nodes from my test flow and am keeping an eye on the Domoticz log. If you want to check the JSON, I copied the flows file before I deleted the nodes.

Looks like something else is sending the MQTT.

`2020-11-09 16:33:52.456 MQTT: Topic: domoticz/in, Message: {"idx":82,"nvalue":1,"svalue":"","Battery":100,"RSSI":7}`

I've never added Battery/RSSI/svalue to the msg.payload, so it cannot be from Node-RED. I need to see where this message is coming from...
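
For comparison, the function nodes I use ahead of the MQTT out node only ever set idx and nvalue - roughly like this (simplified sketch, the idx is just an example):

```javascript
// Simplified sketch of the kind of function node body I use ahead of the
// MQTT out node - note there is no svalue, Battery or RSSI field anywhere.
msg.topic = "domoticz/in";
msg.payload = { idx: 82, nvalue: 1 };
return msg;
```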

[edit] Disregard my replies - an ESP was somehow sending bogus MQTT status messages.

Do you know how to use a network sniffer, or Wireshark? A simple packet dump/capture would give you a starting point for finding the source of the MQTT messages by looking at the TCP/IP packet traffic... basic methodology for tracking down this sort of thing.

Dang... I was thinking that it has been a while since I got to chase a hacker! If you don't practice, you get rusty.

I used the mosquitto logs to see where the message was coming from; somehow the Tasmota firmware was sending out messages every 300 s. I did not want to dive into the Tasmota functionality, so I flashed the device with ESPeasy to do the same trick - some added standalone ESPeasy rules could come in handy for that relay.

A standard feature of Tasmota is that, by default, it sends the status every 5 minutes. That is of course easily configurable to any period you want.
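
That interval sounds like Tasmota's TelePeriod setting, and if you ever want to stay on Tasmota it can be changed over MQTT as well. A quick one-off sketch using the mqtt npm package - the broker address and the device topic ('tasmota-relay') are placeholders for your own values:

```javascript
// One-off script: change a Tasmota device's telemetry interval over MQTT
// using the 'mqtt' npm package (npm install mqtt). Broker address and the
// device's topic are placeholders - substitute your own.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://192.168.1.10');
client.on('connect', () => {
  // TelePeriod takes a value in seconds; 60 here instead of the default 300
  client.publish('cmnd/tasmota-relay/TelePeriod', '60', () => client.end());
});
```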
