Pi OS 11 64 bit, NR 3.0, Node 16, 11 IoT devices, using node-red-contrib-tuya-smart-device modules, load consistently over 100%?

NR 3.0, Node 16, Pi 3B, 11 IoT devices, using node-red-contrib-tuya-smart-device modules, load consistently over 100%? Anyone else seeing something similar? The Pi is running 64-bit Pi OS 11, BTW. Just a sanity check question.

The Pi is doing no other work, only the NR environment, 1 single flow as noted above, 11 IoT switches. I disabled all but 1 of the IoT control nodes, and the 100% loading still occurs.

I then switched off the 64-bit kernel by commenting out arm_64bit=1 in /boot/config.txt. Load is still over 100%.
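For reference, that change was just commenting out the 64-bit flag in /boot/config.txt, roughly like this (the option is spelled arm_64bit on current Pi OS images; whether a 32-bit kernel is actually present depends on the image):

    # /boot/config.txt - comment out to drop back to the 32-bit kernel, then reboot
    #arm_64bit=1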

Memory usage is always around 15 to 16% in all tests.

Disabling the Tuya Smart based flow, of course, made the NR loading disappear.

I then enabled another flow, which happens to be a WiFi scan audit using an exec node to run the iwlist Linux CLI, still on the 32-bit kernel. During an active WiFi scan, loading can go from 10% to 50%, but once the scan is done it drops to almost 0 for the NR process(es).
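The exec node in that flow just runs something along these lines (wlan0 is an assumption, use whatever your wireless interface is called, and a full active scan needs root):

    sudo iwlist wlan0 scan | grep -i essid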

So the way I see it, the node-red-contrib-tuya-smart-device modules are load intensive. Is anyone else seeing similar or comparable behavior?

If you are running VNC, that consumes some resources.
A busy dashboard will as well.

One thing you could do is upgrade to a Raspberry Pi 4B... aka... throw hardware at the problem. You get very good memory options with the 4B (which is not an issue for you), but it also allows a fair amount of overclocking. I have mine running a very conservative 1.8GHz on the CPU and 600MHz on the GPU.

Here is a guide for overclocking the 4B (a very unlikely source, but Tom's Hardware has a few contributing writers who like playing around with the Pi).
Overclocking the RPi4B CPU and GPU
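The relevant /boot/config.txt entries for the numbers I quoted look something like this (these are just my values, not a recommendation; step up gradually and watch temperatures):

    # /boot/config.txt - conservative Pi 4B overclock
    arm_freq=1800
    gpu_freq=600
    # more aggressive clocks usually also need an over_voltage=N line and decent cooling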

Yeah, I could 'cheat', but I would rather not have a software design issue force a hardware requirement. I will likely test on a 4 anyway just to confirm that it would work. I have 4Bs with each of the memory options, which I use for validation testing, so memory is not going to be an issue even when testing. :slight_smile:

What is sad is that almost every one of the Tuya NR node contributors has stopped supporting their solution. I have tested the 5 (or so) options and 4 have been abandoned. I just heard from another one earlier today; the guy offered to turn the entire project over to me if I was interested in taking it over.

This, combined with Realtek-based IoT devices that Tuya has locked down and no Tasmota for Realtek ICs, is not a great situation. Even CloudFree is having issues supplying their out-of-the-box Tasmota-based IoT devices.

It really is looking like I will have to isolate all my Tuya-based IoT on a separate VLAN and then use only the official Tuya APIs, etc. Guess them's the breaks.

How often are you scanning?

I use a lot of i2c... and I find it helpful to limit the amount of unnecessary reading I'm doing from the device, just to keep Node-RED from needlessly updating variables/charts, etc...
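As a rough sketch of what I mean, a hypothetical function node sitting between the i2c read and the chart (the built-in rbe/filter node does much the same):

    // only pass the reading on when it actually changes,
    // so downstream charts/variables aren't updated needlessly
    const last = context.get('last');
    if (msg.payload === last) {
        return null;           // drop unchanged readings
    }
    context.set('last', msg.payload);
    return msg;                // forward only real changes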

node-red-contrib-tuya-smart-device was last updated just 4 weeks ago.

I have to say I haven't looked at the load of my Pi (mostly 'cause I don't have a clue how to) but I have quite a lot running and have not had any problems

Yeah, he is more getting at the underlying code that they all rely on being almost abandoned - in most cases it is the tuyapi library - the guy who wrote it no longer has any Tuya devices and hence is not putting anything into it.

I think most people are sweating on seeing what comes out of the Home Assistant partnership with Tuya and then seeing if there is something in there that can be reverse engineered.

Craig

The node-red-contrib-tuya-smart-device package is just about the last one supported; that was the intent of my comment.

As for polling: after the very first interaction, the CPU load goes through the roof and stays there. This is on a Pi 3B that is doing nothing else, just NR, just one flow. It does not matter if I have one IoT smart plug active or 15; once the control node connects to the device, we are off to the races. There is no way I can see to change the polling rate. I can turn the automatic connection feature on/off, which I did, but once I connect to the device the issue returns, so the code design is in part at issue, because even when I explicitly disconnect from the IoT device the loading does not back off, which is just odd.

I have not looked at the actual code yet, but I will when I get the chance, because of course I need to see why this is happening. It may not be an issue with the specific package node source, but lower in the API stack, even back to the Tuya API itself... which should not surprise me at this point.

I have not tried using JavaScript directly or even Python methods, but that is on the plate as well to try to find the true issue. I have not used the Tuya REST API yet, but I may get to that as well.
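When I get to the JavaScript test, the plan is a minimal script against the tuyapi library directly, something like the sketch below (id/key/ip are placeholders and the option names are per the tuyapi README), so I can watch the node process in top with Node-RED completely out of the picture:

    // test-tuya.js - minimal tuyapi connection, just to watch CPU load
    const TuyAPI = require('tuyapi');

    const device = new TuyAPI({
        id: 'DEVICE_ID_HERE',     // placeholder
        key: 'DEVICE_KEY_HERE',   // placeholder
        ip: '192.168.1.50',       // placeholder; omit and use device.find() to discover
        version: '3.3'            // protocol version, device dependent
    });

    device.on('connected', () => console.log('connected'));
    device.on('disconnected', () => console.log('disconnected'));
    device.on('error', err => console.error('error:', err));
    device.on('data', data => console.log('data:', data));

    (async () => {
        await device.connect();
        // sit connected for a minute and watch the node process in top
        setTimeout(() => device.disconnect(), 60 * 1000);
    })();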

I have had a few Tuya light bulbs for several years now but at that time it was not being able to flash them with Tasmota that persuaded me to try something different, so I moved to Zigbee devices.

Fortunately (so far at least) I have never had a problem with controlling the Tuya lamps with Node-RED and I have never noticed any issues with the Pi that the node-red-contrib-tuya-smart-device nodes are running on. This may be because this particular Pi is a 4B and uses an SSD rather than an SD card as storage.

I had not realised that the Tuya API library was no longer supported and that is a shame as I guess there are people with a lot of Tuya devices.

Using top to see CPU load, my Pi uses between 0.7% and 1.3%, with a very occasional blip to 15.3%, for Node-RED. Influx is a much bigger hit at 0.3% - 21.1% load.
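In case it is useful, I just filter top down to the processes I care about, along these lines (assuming the usual process names):

    top -p "$(pgrep -d, -f 'node-red|influxd')"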


Thanks for the details. This will be of great benefit when I test on a 4B, which I plan to do in the next few days. I have already re-imaged and re-tested on the original 3B, and the heavy loading is consistent across Pi OS 11 images (at 64 bit) on different SD cards. Next I will use a different 3B but the same SD cards, to see if the problem moves to the different hardware. If so, then I know this is pretty much not a hardware issue but software based.

I am a big fan of Tasmota, and where I can, I have re-flashed or even purchased devices already flashed with it. I am also working on custom firmware/MicroPython images for ESPs, so my time to chase this down is a bit constrained for the next couple of weeks.

Interesting that you mention an SSD on the 4B; I did this for my ESXi ARM based 4B, running the VMs and Docker images on SSD rather than SD. As I am typing this, I have added to my test list that I really should run a test in a VM and in a Docker image, just to do complete testing of course. Given the issue looks to be software component based, I am not expecting any surprises, but you just never know!
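For the Docker leg of that test I will probably just use the stock container command from the Node-RED docs, something like this (volume and container names are my choice):

    docker run -it -p 1880:1880 -v node_red_data:/data --name mynodered nodered/node-red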

For example, I recently found a quirk, maybe even a bug, with machine.WDT (the watchdog timer in MicroPython on the Pico W): even when you feed the dog correctly, the WDT still fires, causing the Pico W to reboot/reset. That should not happen when you feed the dog appropriately.
