I created a dashboard with which you can take measurements on a Raspberry Pi. The results are displayed in a table and saved in a MySQL database. The user can also download the data as CSV.
I have almost finished the project and ran a stress test: one measurement every 200 ms from 50 measuring instruments for one hour. As expected I got around 900,000 rows and a total size of 50 MB. Since then, whenever I try to load the dashboard it freezes and nothing works. The CPU sometimes goes up to 200-300%. I've disabled the table so no data is loaded from the database.
Without the table it works fine again, but when I try to download the CSV it freezes again. So I logged the MySQL slow queries to a file and got this:
/usr/sbin/mysqld, Version: 10.3.34-MariaDB-0+deb10u1-log (Raspbian 10). started with:
Tcp port: 0 Unix socket: /run/mysqld/mysqld.sock
Time Id Command Argument
# Time: 220923 15:22:27
# User@Host: nodered[nodered] @ localhost [127.0.0.1]
# Thread_id: 36 Schema: devices QC_hit: No
# Query_time: 2.656102 Lock_time: 0.001574 Rows_sent: 879100 Rows_examined: 879100
# Rows_affected: 0 Bytes_sent: 19673746
SELECT * FROM Messungen;
So I tried to query the database manually from the console, with success.
Is there anything I can configure in Node-RED? Why does Node-RED freeze all the time (I mean, 50 MB is nothing)?
Thanks in advance
What exactly do you mean by "download the CSV"?
And what are you doing with it?
Reading it all in one chunk? Reading it one line at a time? Feeding it to a chart in one chunk, or point by point? Are you feeding it to a chart? How many points across is it? Have you limited the number you can display? etc.
@Colin I'm sending a query to the MySQL node to convert the data to CSV and export it to a folder, from which the user downloads it.
@dceejay Trying to convert the whole database to CSV or JSON. The table is generated from the JSON, and the user can download the database as CSV.
So it's all in memory then... Hopefully you have at least 8 GB of RAM, or at least 4 GB and have edited the service file to set the max RAM to 3 GB - otherwise it will be paging out to disk.
Also you can try breaking the flow at various points after the SQL node to find exactly where the problem is.
Seems like you are right. The browser error message showed an out-of-memory error. I've tried to increase the RAM available to Node-RED with
node --max_old_space_size=3072 /usr/local/bin/node-red but it didn't work. @dceejay which service file are you talking about?
I've got at least the 4 GB version of the Raspberry Pi.
@Colin That's what I did. In the first case the only node after the SQL node is the table; in the other case there is no node after it. The SQL node should convert the database to CSV and export the file into the tmp folder:
(SELECT 'Register', 'Druck', 'Zeitstempel') UNION (SELECT Register, Druck, Zeitstempel FROM Messungen INTO OUTFILE '/tmp/Messungen.csv' FIELDS ENCLOSED BY '\"' TERMINATED BY ';' ESCAPED BY '\"' LINES TERMINATED BY '\\r\\n');
The service file is usually
/lib/systemd/system/nodered.service - edit the max_old_space value there, then use node-red-start to start the service (or node-red-reload to stop and restart it). But I suspect you may still be out of luck: if the data is 2 GB and it ever gets modified, you will end up with both the original and the "copied" version in memory at the same time = 4 GB... but you may be lucky.
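For reference, on a standard install the memory cap mentioned above lives on the ExecStart line of that unit file. A rough sketch (the exact line varies by Node-RED version, so check your own file rather than copying this verbatim):

```shell
# Excerpt of /lib/systemd/system/nodered.service (illustrative only).
# Raise --max-old-space-size, then reload systemd and restart, e.g.:
#   sudo systemctl daemon-reload && node-red-reload
ExecStart=/usr/bin/env node-red-pi --max-old-space-size=3072
```

Note that after editing any systemd unit file, `systemctl daemon-reload` is required before the change takes effect.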
Hi, thanks for your answer. There is actually no
nodered.service inside my
/lib/systemd/system. I also thought about splitting the SQL request, because why should I load 900,000 rows into the table in one go? When it crashes, Chrome uses about 6 GB of RAM. But I don't really know how to do this yet.
Presumably you didn't install using the recommended script in the Node-RED docs.
What are you going to do with a 900,000-row CSV file?
You might be better off running the SQL command using an exec node; then it will run outside the Node-RED environment.
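As a sketch of that approach (the user, password, and database name here are placeholders, not taken from the thread), the exec node could call the mysql command-line client directly, so the rows never pass through Node-RED's memory:

```shell
# Hypothetical exec-node command: let the mysql CLI stream the export.
# --batch prints a header line plus tab-separated rows; tr swaps the
# tabs for the semicolon separator used elsewhere in this thread.
mysql -u nodered -p'secret' devices --batch \
  -e "SELECT Register, Druck, Zeitstempel FROM Messungen" \
  | tr '\t' ';' > /tmp/Messungen.csv
```

The exec node would simply report the exit code when the file is ready.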
You should probably change your query to pull a limited range of records and have your server (the Pi?) piece the results together into one file, if that's all you're doing. You can run several small queries relatively quickly and append each chunk to the file you're creating; appending takes very little extra RAM or disk access. Without knowing which SQL database you're using (MySQL, Oracle, ...), look up the LIMIT keyword and go from there. That should get you a range of results you can start appending with. The only other thing you need to know at that point is the size of your table, which you can also get very easily.
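The chunked approach described above can be sketched as follows. `queryRows` and `appendToFile` are hypothetical helpers standing in for the MySQL node and a file-append step; the table and column names are taken from the query earlier in the thread:

```javascript
// Sketch of a chunked CSV export, assuming a hypothetical async helper
// queryRows(sql) that returns an array of row objects (in Node-RED this
// would be the MySQL node or a driver call).

const CHUNK = 50000; // rows per query; tune to taste

function toCsvLines(rows) {
  // Quote every field and double embedded quotes, matching the
  // FIELDS ENCLOSED BY '"' ... ESCAPED BY '"' settings used above.
  return rows
    .map(r => [r.Register, r.Druck, r.Zeitstempel]
      .map(v => `"${String(v).replace(/"/g, '""')}"`)
      .join(';'))
    .join('\r\n');
}

async function exportInChunks(queryRows, appendToFile, totalRows) {
  appendToFile('"Register";"Druck";"Zeitstempel"\r\n'); // header row once
  for (let offset = 0; offset < totalRows; offset += CHUNK) {
    // Only one chunk of rows is ever held in memory at a time.
    const rows = await queryRows(
      `SELECT Register, Druck, Zeitstempel FROM Messungen ` +
      `LIMIT ${CHUNK} OFFSET ${offset}`);
    appendToFile(toCsvLines(rows) + '\r\n');
  }
}
```

With 900,000 rows and a chunk size of 50,000, that is 18 small queries instead of one 50 MB result set held in memory at once.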