Ghost of a node from a previously deleted flow?

Just a curious question: I keep seeing the log report the status of a node that is part of a flow I have explicitly deleted. What causes this?

For example, the following was a function node that was part of a flow I deleted in the editor, and yet it is still referenced in the log:

```
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 3, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 4, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 3, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 4, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 0, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 1, Payload undefined
27 Sep 20:53:57 - [warn] [function:Enable Or Disable?] Topic 2, Payload undefined
```

When this issue occurs, it tends to drive the Node.js process CPU utilization up beyond anything that makes sense. In this specific example, top reports the following while the editor is effectively empty: no configured flows, no active flows per se.

```
top - 21:12:11 up 2 days, 12:33,  1 user,  load average: 1.67, 1.93, 1.99
Tasks: 104 total,   2 running,  59 sleeping,   0 stopped,   0 zombie
%Cpu(s):  1.4 us,  0.7 sy, 50.0 ni, 47.8 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :   997768 total,   178316 free,   595228 used,   224224 buff/cache
KiB Swap:   102396 total,    78588 free,    23808 used.   304308 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 6543 pi        25   5  667176 577972  27564 R 202.0 57.9   5:22.43 node
  335 root      20   0   52932   2560   1332 S   6.6  0.3 271:13.05 pigpiod
 6597 root      20   0    8136   3224   2704 R   0.7  0.3   0:00.30 top
 5177 root      20   0   11660   2648   2444 S   0.3  0.3   0:06.69 sshd
 6465 root      20   0       0      0      0 I   0.3  0.0   0:01.89 kworker/u8:3-br
 6534 root      20   0       0      0      0 I   0.3  0.0   0:00.18 kworker/0:0-eve
    1 root      20   0   28084   3048   2368 S   0.0  0.3   0:07.90 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.27 kthreadd
    3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp
    8 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_wq
    9 root      20   0       0      0      0 S   0.0  0.0   0:17.56 ksoftirqd/0
   10 root      20   0       0      0      0 I   0.0  0.0   1:32.18 rcu_sched
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.10 migration/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/1
   14 root      rt   0       0      0      0 S   0.0  0.0   0:00.08 migration/1
   15 root      20   0       0      0      0 S   0.0  0.0   0:01.63 ksoftirqd/1
   18 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/2
   19 root      rt   0       0      0      0 S   0.0  0.0   0:00.07 migration/2
   20 root      20   0       0      0      0 S   0.0  0.0   0:01.83 ksoftirqd/2
   23 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/3
```

Or is this some odd situation? So far I have only seen this on one specific Pi 3 device, but I have not been looking for it elsewhere yet either.

So looking at the flows.json... it really does have a flow still defined. So the editor and the file are out of sync.

There is a known, rare issue where a tab is deleted but its nodes remain; they become lost to the editor. Given the node you have running in the background, check its z property - if it is set to 0 or an empty string, then you have hit this issue. Unfortunately, it happens so rarely that we've not been able to debug it properly.
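
For reference, a quick way to spot these orphaned nodes is to scan the flows file directly (typically `~/.node-red/flows_<hostname>.json`). The sketch below is not an official Node-RED tool, just an illustration: it assumes the standard flows.json layout, where tabs and subflow definitions are entries of type `tab`/`subflow`, canvas nodes carry `x`/`y` coordinates, and `z` points at the containing tab or subflow.

```javascript
// find-orphans.js - illustrative sketch, not part of Node-RED itself.
// Lists canvas nodes whose z is empty/0 or references a tab/subflow
// that no longer exists in the file.
const fs = require('fs');

const file = process.argv[2] || 'flows.json';
const flows = JSON.parse(fs.readFileSync(file, 'utf8'));

// ids of all containers (tabs and subflow definitions) still present
const containers = new Set(
  flows.filter(n => n.type === 'tab' || n.type === 'subflow').map(n => n.id)
);

// canvas nodes have x/y; config nodes don't, and legitimately have no z
const orphans = flows.filter(n =>
  n.type !== 'tab' && n.type !== 'subflow' &&
  'x' in n && 'y' in n &&
  (!n.z || !containers.has(n.z))
);

orphans.forEach(n =>
  console.log(`orphan: id=${n.id} type=${n.type} name=${n.name || ''} z=${JSON.stringify(n.z)}`)
);
console.log(`${orphans.length} orphaned node(s) out of ${flows.length} entries`);
```

Run it with `node find-orphans.js <path-to-flows-file>`; it only reads the file, so it is safe to run while Node-RED is up.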

In the next release, 1.2, we've added code to the editor to detect the 0 z property and add a 'recovery' tab to put the nodes on so the user can do something about it.
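
Until that release, if you want to clean the file up by hand, one possible workaround (again just a sketch, under the same assumptions about the flows.json layout as above) is to write a cleaned copy of the file with the orphaned canvas nodes removed, then swap it in - stop Node-RED and keep a backup of the original first.

```javascript
// prune-orphans.js - illustrative sketch of a manual workaround, not part of Node-RED.
// Writes a cleaned copy of the flows file with orphaned canvas nodes removed.
const fs = require('fs');

const file = process.argv[2] || 'flows.json';
const flows = JSON.parse(fs.readFileSync(file, 'utf8'));

const containers = new Set(
  flows.filter(n => n.type === 'tab' || n.type === 'subflow').map(n => n.id)
);

const keep = flows.filter(n =>
  n.type === 'tab' || n.type === 'subflow' ||   // keep tabs and subflow definitions
  !('x' in n) ||                                // keep config nodes (no canvas position)
  (n.z && containers.has(n.z))                  // keep nodes on a surviving tab/subflow
);

fs.writeFileSync(file + '.pruned.json', JSON.stringify(keep, null, 4));
console.log(`removed ${flows.length - keep.length} node(s); wrote ${file}.pruned.json`);
```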

Cool. The interesting thing was that this broken flow file sent the CPU utilization above 100%, but that happened ONLY while a deploy was being attempted. When the broken flow was just running, it was close to 100% but not over.