Slow database responses, 4+ seconds

Hi!

I made a complex game with Node-RED that uses 130 MySQL nodes.
The SQL nodes are triggered repeatedly, at intervals between 1 and 30 seconds.
In the beginning it worked very fast, but over time the replies have become slower and slower;
sometimes it takes 4 seconds to get a result.
The table size doesn't matter; it happens with 100 or with 30,000 records.
The query looks like the one below:

SELECT * FROM `telegram` WHERE `messageType` = 'telegram' AND `send`= '0' AND `time` < '"+time+"' ORDER BY `time` LIMIT 1

Running the latest versions of Node-RED and the SQL node; the database is MariaDB.

The strange thing is that when I run the query in phpMyAdmin it runs very fast, and it even seems to affect the speed in Node-RED: it seems to run faster afterwards.

I already added different indexes on the fields, without result.

any help is welcome!

W.O.W

Why?

At a wild guess (and depending on quite a few factors), these 130 nodes will be the reason (potentially the connections are being swamped).

Or perhaps you are hitting wait locks on the database?

Why don't you have just one SQL node (or a smaller number) and feed it the SQL dynamically?
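With node-red-node-mysql, the node executes whatever SQL string it finds in `msg.topic`, so a single function node in front of one shared MySQL node can build each query dynamically. A minimal sketch using the query from the opening post (`buildQueryMsg` is just an illustrative name):

```javascript
// Sketch of a Node-RED function node feeding one shared MySQL node.
// node-red-node-mysql runs the SQL string it finds in msg.topic.
function buildQueryMsg(msg) {
    // Same concatenation style as the original flow; note that
    // placeholder binding would be safer than splicing values in.
    msg.topic =
        "SELECT * FROM `telegram` " +
        "WHERE `messageType` = 'telegram' AND `send` = '0' " +
        "AND `time` < '" + msg.time + "' " +
        "ORDER BY `time` LIMIT 1";
    return msg;
}

// Outside Node-RED the helper can be exercised directly:
console.log(buildQueryMsg({ time: "2024-01-01 12:00:00" }).topic);
```

In the flow, link nodes from each trigger point can all feed this one function node, which then feeds the single MySQL node, so only one database connection is in play.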

There are a lot of different checks and actions.
I have been thinking of reducing them, but I don't know how; it is a lot of if/then logic, and after a successful insert or action it goes back to the database to mark the row as done, for example.

Changed it to 67 nodes now.
I used link nodes to route all the single-line queries to one node,
and also removed some SQL nodes that were only there for testing.
Is that the better approach: one SQL node with multiple links into it?

But there is still a delay of 4-5 seconds before getting a result.

What device are you running NR on?
How much memory does it have?
What else is running on the device?
What type and size of storage is it using?

It runs on a VPS with lots of memory and CPU,
on an SSD, and nothing else is running on it.

Running the query in phpMyAdmin takes 0.0023 seconds :cry:

If you create a new flow (start NR with `node-red dbspeed.flow`, or if running Projects just start a new project) and add only an `inject`, the SQL node and a `debug`, how long does it take compared to phpMyAdmin?

I'm trying to see if it is NR, the quantity of SQL nodes, or the rest of the flow...

That is still too many unless you are sure that they are using a shared connection; otherwise you are not comparing like-for-like with your phpMyAdmin query.

You should also replace all of your queries with pre-defined queries ("prepared statements"), as these will be vastly faster to execute.
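The underlying `mysql` driver for Node.js supports `?` placeholders via `query(sql, values)`; whether your version of the Node-RED MySQL node passes `msg.payload` through as bind values is worth checking in its README. A sketch of the message shape, assuming it does, using the query from the opening post:

```javascript
// Sketch: parameterised ("prepared") query for one shared MySQL node.
// The values travel separately in msg.payload, so the SQL text never
// changes and the server can reuse its parsed plan; it also removes
// the quoting/injection risk of string concatenation.
function buildPreparedMsg(time) {
    return {
        topic:
            "SELECT * FROM `telegram` " +
            "WHERE `messageType` = ? AND `send` = ? AND `time` < ? " +
            "ORDER BY `time` LIMIT 1",
        payload: ["telegram", "0", time], // bound in placeholder order
    };
}

const prepared = buildPreparedMsg("2024-01-01 12:00:00");
console.log(prepared.topic);
console.log(prepared.payload);
```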

Tried it.
Then it runs very fast, almost instantly.

Ok, if it works fine in the little test, then there is something about the number of SQL nodes you are using, or whatever else is happening in your flow.

What is the indexing configuration for the SQL server instance/tables? You may be hashing the living daylights out of the indexing so that the indexes can't keep up, or exceeding the default cache space for the SQL instance. There are many factors that can explain a slowdown like this.

You may even be tripping up the server so that it can't use connection pooling correctly. There is one key, well-known fact about SQL instances: creating a connection is a VERY costly transaction. This is universal to all SQL variants, and it is why connection pooling exists. It is also why every decent SQL programming guide says, in some form: open a connection once, and never close it until you are completely done with it.
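That cost asymmetry is why drivers such as mysql2 offer connection pools. The idea can be shown with a toy pool in plain JavaScript; the `connect` callback here is a stand-in for a real (expensive) TCP-plus-authentication handshake:

```javascript
// Toy connection pool: pay the expensive "connect" once per slot,
// then hand the same connection out again and again.
class Pool {
    constructor(size, connect) {
        this.idle = [];
        this.connect = connect;   // expensive factory, called lazily
        this.size = size;
        this.created = 0;
    }
    acquire() {
        if (this.idle.length > 0) return this.idle.pop(); // reuse: cheap
        if (this.created >= this.size) throw new Error("pool exhausted");
        this.created += 1;
        return this.connect();    // only pay the setup cost here
    }
    release(conn) {
        this.idle.push(conn);     // keep it open for the next caller
    }
}

// 100 queries, but only one real "connection" is ever created.
let connects = 0;
const pool = new Pool(5, () => ({ id: ++connects }));
for (let i = 0; i < 100; i++) {
    const c = pool.acquire();
    // ... run the query on c ...
    pool.release(c);
}
console.log(connects); // → 1
```

A real pool (e.g. mysql2's `createPool`) adds timeouts, health checks and concurrency control, but the economics are the same: the handshake price is paid once per slot, not once per query.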

What does your SQL configuration look like? How many connections are allowed at maximum? How large is the transaction buffer? Even a simple SELECT query can flood the transaction buffer in unexpected ways if you do something in just the right way, for example.

I had a SQL instance where I had to raise the query buffer to 2G because of a SELECT query that pulled 50 records, none of them larger than 1K. Sure enough, when I started watching the actual packet trace, another process was hammering the SQL instance, no one had scaled the configuration to handle it, and every other SQL application suffered. I mention this as an illustration, not as an explanation of your issue. The point being: look for something crazy or non-obvious as the cause, and make no assumptions. :slight_smile:

Forgot to mention: MariaDB is very tight on resource allocation by default, especially, in my experience, when run on a Pi. The configuration is where I would start. The default limit on concurrent connections is only 100, and the default query buffer is 500M if memory serves. These are shared resources per SQL instance. Also, if you are actively opening and closing connections, bear in mind that a SQL instance tends to lag on actually closing a connection even after it reports it as closed: it will refresh caches, update indexes, and do all kinds of things to protect data integrity before it bothers with connection state changes.

Thanks for all the support, help and tips!

I found the problem...
Node-RED isn't the cause, it's me :wink:
I have another database on a remote server with a table of user forms.
I copy this table to my local table with WHERE NOT EXISTS, and I was running it over the whole table, every time, all over again. And if you do that every second... yeah, exactly.
With 1000 records that costs time, a lot of time.
Now I only do it for the last hour, and that makes a big difference.
But I am still looking for a better way to mirror a table on another server.
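One way to do the windowed copy is to compute the cutoff in a function node and send a parameterised SELECT to the remote server, inserting only the returned rows locally. A sketch, with illustrative table/column names (`forms`, `created`) that are not from the thread:

```javascript
// Sketch: ask the remote server only for rows added since a cutoff,
// instead of re-scanning the whole table with WHERE NOT EXISTS.
function buildSyncMsg(now, windowMs) {
    // Cutoff formatted as a MySQL DATETIME string, e.g. "2024-01-01 12:00:00"
    const cutoff = new Date(now.getTime() - windowMs)
        .toISOString().slice(0, 19).replace("T", " ");
    return {
        topic: "SELECT * FROM `forms` WHERE `created` >= ?",
        payload: [cutoff],   // bound to the placeholder by the driver
    };
}

const syncMsg = buildSyncMsg(new Date("2024-01-01T13:00:00Z"), 3600 * 1000);
console.log(syncMsg.payload[0]); // → "2024-01-01 12:00:00"
```

A sturdier variant stores the newest `created` value seen so far (e.g. in flow context) and uses that as the next cutoff, so nothing is missed even if the flow stalls for more than an hour.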

Doesn't MySQL have a built-in mechanism for mirroring? ...
After a quick google I came up with https://dev.mysql.com/doc/refman/8.0/en/replication.html
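MariaDB supports the same primary/replica replication scheme as MySQL, so a replica can keep the table mirrored continuously instead of a flow copying rows on a timer. A minimal sketch of the primary's `my.cnf` (option names can vary by version, and the database name here is made up; check the MariaDB docs for your release):

```ini
# my.cnf fragment for the primary (source) server
[mysqld]
server-id    = 1            # unique id per server in the topology
log-bin      = mysql-bin    # enable the binary log that replicas read
binlog-do-db = forms_db     # illustrative database name
```

The replica then gets its own `server-id` and is pointed at the primary with `CHANGE MASTER TO ...` followed by `START SLAVE` (MySQL 8 renamed these to `CHANGE REPLICATION SOURCE TO` / `START REPLICA`; MariaDB still accepts the older names).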

The only question I have is how much MariaDB has deviated from its MySQL roots. I know for a fact that the low-level communication protocol differences between MySQL and MariaDB break some Python libraries and ESP C libraries. I ran into this trying to use the ESP MySQL module with MariaDB: MariaDB deviated just enough to break the ESP C MySQL library. I moved on to MQTT to avoid the issue, but at the time having an ESP module with direct access to a DB made sense.