Node-RED security: auditing login successes and failures

(Node-RED on Raspbian)

If you plan to have Node-RED running for a long time, you really should keep an eye on who is using it.
One handy thing you can do is tail the system log to see when someone logs in or fails to log in.
You can hook that up to, say, a push notification service. After all, if you are meant to be the only user and someone else logs in, you want to know about it!

Note that this example works on a native Raspbian Node-RED install, which sends its logs to /var/log/syslog, but you can adapt it to any OS if you know where Node-RED's system log events go. You will need to install the tail node (node-red-node-tail).
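Depending on your Node-RED version and how it was installed, audit events may be switched off by default; if nothing tagged [audit] appears in your syslog, check the logging section of settings.js. This is only a sketch of the relevant block (the property names follow the stock settings file), so compare it with your own before editing:

```javascript
// ~/.node-red/settings.js (excerpt) - sketch only, merge into your existing logging section
logging: {
    console: {
        level: "info",   // normal runtime logging
        metrics: false,  // runtime metrics events, leave off
        audit: true      // emit [audit] events such as auth.login and auth.login.fail
    }
}
```

With that in place, the flow below picks the events up from syslog and filters on the [audit] tag.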

[{"id":"a9668ccf.572f9","type":"tail","z":"f72ea602.fbc808","name":"Tail System Log (auditing)","filetype":"text","split":"[\\r]{0,1}\\n","filename":"/var/log/syslog","x":210,"y":820,"wires":[["1c9f220b.4d678e"]]},{"id":"9dbc0336.02c94","type":"switch","z":"f72ea602.fbc808","name":"Login OK/Fail?","property":"payload","propertyType":"msg","rules":[{"t":"cont","v":"auth.login.fail","vt":"str"},{"t":"cont","v":"auth.login.revoke","vt":"str"},{"t":"cont","v":"auth.login","vt":"str"},{"t":"cont","v":"Under-voltage detected","vt":"str"}],"checkall":"false","repair":false,"outputs":4,"x":240,"y":920,"wires":[["fc093431.ed41f8"],[],["2d4cfd71.4662c2"],["cca381cb.79448"]]},{"id":"fc093431.ed41f8","type":"change","z":"f72ea602.fbc808","name":"\"Failed Login\"","rules":[{"t":"set","p":"payload","pt":"msg","to":"Failed Login","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":480,"y":880,"wires":[["3b4c6619.a3904a"]]},{"id":"2d4cfd71.4662c2","type":"change","z":"f72ea602.fbc808","name":"\"New Login\"","rules":[{"t":"set","p":"payload","pt":"msg","to":"New Login","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":470,"y":920,"wires":[["cd9a5ece.cf544"]]},{"id":"1c9f220b.4d678e","type":"switch","z":"f72ea602.fbc808","name":"[audit] filter","property":"payload","propertyType":"msg","rules":[{"t":"cont","v":"[audit]","vt":"str"}],"checkall":"true","repair":false,"outputs":1,"x":430,"y":820,"wires":[["9dbc0336.02c94"]]},{"id":"577bc171.4e762","type":"comment","z":"f72ea602.fbc808","name":"Notify on all logins and failed logins.","info":"","x":180,"y":780,"wires":[]},{"id":"cca381cb.79448","type":"change","z":"f72ea602.fbc808","name":"\"Low Voltage Detected\"","rules":[{"t":"set","p":"payload","pt":"msg","to":"Low Voltage Detected","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":510,"y":960,"wires":[["a34d8895.d7de48"]]}]

You probably also want to set up:

A restart notification. This sends a notification every time Node-RED restarts, because some poorly coded nodes can put it into a "reboot loop". You can't really tell this is happening from the UI, so it's handy to know. If you do end up with a crashing Node-RED instance, the only fix I have found is to "npm uninstall" the crashing node (from your Node-RED user directory, typically ~/.node-red) - it will be named in the logs.

[{"id":"3a0d9724.4282f8","type":"inject","z":"f72ea602.fbc808","name":"","topic":"","payload":"Node Red Started","payloadType":"str","repeat":"","crontab":"","once":true,"onceDelay":0.1,"x":210,"y":60,"wires":[["4efc3f1b.ad6b8","199e497.4e69cb7"]]},{"id":"199e497.4e69cb7","type":"change","z":"f72ea602.fbc808","name":"\"HTTPS Restarted\"","rules":[{"t":"set","p":"payload","pt":"msg","to":"HTTPS Restarted","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":410,"y":60,"wires":[["9b6e0c6.f7381f","de5fdc90.df014"]]},{"id":"3f0a6e2b.57b392","type":"comment","z":"f72ea602.fbc808","name":"Check for badly behaved code causing reboots - notify","info":"","x":220,"y":20,"wires":[]}]

An excessive-CPU monitor, to flag when you accidentally add a flow that uses too much CPU. You should be able to run hundreds of flows and graphs, plus an InfluxDB instance and Grafana, on a single Raspberry Pi 3B at under 50% CPU, as long as you turn off debug logging in Grafana and InfluxDB. A sustained CPU spike usually means you've added something that loops or passes extremely large messages. Here is an example of how you can monitor it:

[{"id":"8a37c273.d444a","type":"cpu","z":"f72ea602.fbc808","name":"","msgCore":true,"msgOverall":false,"msgArray":false,"msgTemp":false,"x":350,"y":3260,"wires":[["18fc59f4.559636","c3eefa51.281788"]]},{"id":"ea7dd38c.f1ca4","type":"comment","z":"f72ea602.fbc808","name":"Monitor CPU usage - it should never be 100% unless there is a badly behaved node just added","info":"","x":380,"y":3220,"wires":[]},{"id":"f0d643f6.fac5a","type":"inject","z":"f72ea602.fbc808","name":"Every second","topic":"","payload":"","payloadType":"date","repeat":"1","crontab":"","once":true,"onceDelay":0.1,"x":160,"y":3260,"wires":[["8a37c273.d444a"]]},{"id":"c3eefa51.281788","type":"switch","z":"f72ea602.fbc808","name":"CPU > 90% ?","property":"payload","propertyType":"msg","rules":[{"t":"gt","v":"80","vt":"str"}],"checkall":"true","repair":false,"outputs":1,"x":560,"y":3260,"wires":[["df97cbd7.758fe8"]]},{"id":"df97cbd7.758fe8","type":"timeframerlt","z":"f72ea602.fbc808","name":"If 10 or more times in a minute ","throttleType":"count","timeLimit":"60","timeLimitType":"seconds","countLimit":"10","byresetcountLimit":0,"x":790,"y":3260,"wires":[["aeb86c0.4721e98","6968a815.393d88"]]},{"id":"deacce9f.5d78d","type":"change","z":"f72ea602.fbc808","name":"\"Excessive CPU use\"","rules":[{"t":"set","p":"topic","pt":"msg","to":"Excessive CPU use","tot":"str"},{"t":"set","p":"payload","pt":"msg","to":"Excessive CPU use","tot":"str"}],"action":"","property":"","from":"","to":"","reg":false,"x":1360,"y":3260,"wires":[["e244649c.8695f8"]]},{"id":"aeb86c0.4721e98","type":"delay","z":"f72ea602.fbc808","name":"Throttle to 1 message per hour","pauseType":"rate","timeout":"5","timeoutUnits":"seconds","rate":"1","nbRateUnits":"1","rateUnits":"hour","randomFirst":"1","randomLast":"5","randomUnits":"seconds","drop":true,"x":1090,"y":3260,"wires":[["deacce9f.5d78d"]]}]

Happy to answer further questions.


This should be more than possible even with logging turned on. The key issue is whether you've exceeded your physical memory and therefore started paging. Paging is a very slow and intensive task on a Pi thanks to the SD card interface (it makes little real difference how good your card is).

This is on my Pi2: [Grafana dashboard screenshot]

And my Pi3: [Grafana dashboard screenshot]

That's cool, which node did you use for the swap, load and memory sensing?

No nodes were harmed in the making of that dashboard :smile:

It is all Telegraf, InfluxDB and Grafana.

Sometimes, Node-RED isn't needed at all. I know, shocking isn't it!! :rofl:

By the way, there is virtually no configuration of InfluxDB needed, very little Telegraf configuration, and there are several example dashboards for Grafana that will show details like that. So you don't have to be a Linux ops expert to produce such a dashboard.
