Does node-red-contrib-inspector fail on a 32-bit OS?

Sorry, deploy.

Do you mean global context?
You can set NR to use persistent storage, which will save all the data in a file.
Presumably you can transfer that file to the other Pi.
Or you can use the opportunity to rewrite with fewer global variables :upside_down_face:
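For reference, persistent context is enabled in Node-RED's settings.js via the contextStorage option. A minimal sketch (the flushInterval value is just an example):

```javascript
// In ~/.node-red/settings.js (a fragment, not the whole file).
// With this, context (including global) is written to disk under
// ~/.node-red/context, so it survives restarts and can be copied
// to another Pi together with the rest of ~/.node-red.
module.exports = {
    // ... your other settings ...
    contextStorage: {
        default: {
            module: "localfilesystem",
            config: { flushInterval: 30 }  // seconds between writes (example value)
        }
    }
};
```

After restarting Node-RED, `global.set()`/`global.get()` use the file-backed store by default.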

I did that intentionally, so that's no option for me. :wink:

I think it is most unlikely that you have a corrupted deploy (unless you have a dodgy SD card). Did you keep a backup of a flow showing the problem, so we can analyse it properly?

@Colin At the moment I am focusing on rebuilding my project on the second RPi.

@Colin @dceejay @jbudd @BartButenaers @TotallyInformation ,
After quite some work, I managed to configure the second RPi 3B+ that I have, so I can experiment with it to try to solve my problem without disturbing my running project. After restoring the flows and databases, I found I still had the problem of CPU usage increasing over time. I tried lots of things, like switching off many parts of the application, deleting databases, etc., but nothing seemed to help (and of course every time I had to wait quite a while to see the result).

While I could see no solution there, I started to examine the whole application, and finally, after quite some work, I found a very tricky loop. So I am very glad I found the cause of the problem!!!

The whole process was a very good learning experience for me in many ways.

Some points here:
When I installed the latest 64-bit OS (Debian 11) on the RPi 3B+, I found that my application used about 65% of memory; in comparison, my current project on the 32-bit OS uses only about 40%. I didn't see much better performance on the 64-bit OS either! So I decided not to go for a 64-bit OS for my application in the near future. Maybe when I opt for an RPi 4B, I will reconsider.

I also tried using Bart's node-red-contrib-v8-cpu-profiler, which was also an eye-opener, but it didn't really help with my problem in this case. Maybe it will for some problem in the future.

Influx. Among the Influx nodes there is a backup node. It is very easy to use, but there is no "Restore" node, so I didn't know how to restore the databases in an easy way. Now I have been playing around with the commands:

$ influxd backup -portable -database <database_name> <path + folder name>
$ influxd restore -portable -db <database_name> <path + folder name>

Note: the -database/-db flag is optional. If left out, all databases will be backed up or restored.
Because I found this very easy, I am constructing a standard routine around these commands.

Those were some of my experiences; I hope they are helpful to some of you.

Anyway, I want to thank you for all your help given to me so far.


Yes, that would be expected, since memory addresses can be twice as long. If I remember rightly, you don't see any real benefit until you have 4GB+ of RAM.

