Batch processing / buffering SQL statements

I have an inject node that kicks off a SQL statement in each of my 26 flows. Each one is triggered with a 250 ms delay after the previous query. Is there a way to run all these commands at the same time without using a trigger or delay? I get connection errors when 2 queries run at the same time.

Here are 2 of my queries that I would like to run at 7am with no delays.

Flow 1:
[{"id":"366d13bb.948b9c","type":"inject","z":"b223080c.955768","name":"7am","topic":"","payload":"","payloadType":"date","repeat":"","crontab":"00 07 * * *","once":false,"onceDelay":0.1,"x":133,"y":1480,"wires":[["fe0c8074.2023c"]]},{"id":"fe0c8074.2023c","type":"function","z":"b223080c.955768","name":"SQL Function","func":"pld = \"SELECT COUNT(InCycle) as Cycles \";\npld = pld + \"FROM MachineData \";\npld = pld + \"WHERE DATEDIFF(hh, Timestamp, getutcdate()) \";\npld = pld + \"<= 8 AND InCycle = 1 AND Machine = 'LH35N-3000B' \";\npld = pld;\n\nmsg.payload = pld;\n\nreturn msg;","outputs":1,"noerr":0,"x":303,"y":1480,"wires":[["b21e9ed1.9670e"]]},{"id":"150c5165.6e873f","type":"MSSQL","z":"b223080c.955768","mssqlCN":"46d8d0b1.17143","name":"MSSQL","query":"","outField":"payload","x":703,"y":1480,"wires":[["d1aedc1f.a426"]]},{"id":"bab9b744.3ff668","type":"MSSQL","z":"b223080c.955768","mssqlCN":"46d8d0b1.17143","name":"MSSQL","query":"","outField":"payload","x":1143,"y":1480,"wires":[["af83d5e0.981e68"]]},{"id":"9c48e5f7.d7b7b8","type":"function","z":"b223080c.955768","name":"SQL","func":"m = \"LH35N-3000B\";\nw = 3;\ne = msg.payload;\n\npld = \"INSERT INTO [Dev].[dbo].[CycleStarts] (Machine, WorkShift, NumCycles) \";\npld = pld + \"VALUES ('\" + m + \"', '\" + w + \"', '\" + e + \"') \";\npld = pld;\n\nmsg.payload = pld\nreturn msg;","outputs":1,"noerr":0,"x":1013,"y":1480,"wires":[["bab9b744.3ff668"]]},{"id":"d1aedc1f.a426","type":"change","z":"b223080c.955768","name":"","rules":[{"t":"set","p":"payload","pt":"msg","to":"payload[0].Cycles","tot":"jsonata"}],"action":"","property":"","from":"","to":"","reg":false,"x":863,"y":1480,"wires":[["9c48e5f7.d7b7b8"]]},{"id":"b21e9ed1.9670e","type":"trigger","z":"b223080c.955768","op1":"","op2":"","op1type":"nul","op2type":"pay","duration":"250","extend":true,"units":"ms","reset":"","bytopic":"topic","name":"","x":503,"y":1480,"wires":[["150c5165.6e873f"]]},{"id":"af83d5e0.981e68","type":"link out","z":"b223080c.955768","name":"Link 3","links":["fea9dcc5.bd676","cd458d5.f0d907","b5b63582.b1b228","cf9dcc71.e4e5d","ff29118d.2ce2f","ac2bdfb1.5f8cf","17161625.f5b2ea","21d1f0bd.4a718","20eb599e.ef1026","c1ab3fcd.f630f","c779c1b.2fa324","d781c0ff.5bb21","7e83d8f8.230058","310367ed.becea8","e0445e5f.2b7cb","811262a7.8c46b","6485882c.940c38","a5e3a23d.227c8","b4962b01.928db8","36247116.f6e51e","4a5b8ad3.52e324","12d327d9.88e8f8","b507b6f8.73a698","c3173846.4b9fa8","2b7a31dc.8fc7fe","39ea60a8.92ad4","bb675ac8.b4fd18"],"x":1258,"y":1480,"wires":[]},{"id":"46d8d0b1.17143","type":"MSSQL-CN","z":"","name":"DevNew","server":"","encyption":true,"database":"Dev"}]


I'm no expert, but surely you should be preparing these SQL statements so that SQL Server doesn't have to rebuild the statement each time. That will save a lot of overhead on the server. You should also be able to run them as a single transaction, which would also reduce overhead.
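To illustrate the "single transaction" idea, here is a minimal sketch of a helper you could use in one Node-RED function node to concatenate several statements into one batch, so the MSSQL node makes a single connection instead of 26. The helper name is hypothetical; only the general T-SQL `BEGIN TRANSACTION` / `COMMIT` wrapping is assumed.

```javascript
// Sketch: combine several statements into one atomic batch so only
// one connection/round-trip is needed. Helper name is made up.
function buildTransactionBatch(statements) {
    return [
        "BEGIN TRANSACTION;",
        // Ensure each statement ends with exactly one semicolon.
        ...statements.map(s => s.trim().replace(/;?$/, ";")),
        "COMMIT TRANSACTION;"
    ].join("\n");
}

// In a function node you would then do something like:
// msg.payload = buildTransactionBatch([query1, query2, ...]); return msg;
```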

To me this looks like a classic scenario for INSERT with SELECT (inserting the results of a SELECT clause directly into a table).


It may even narrow your flows down from 26 to 1... if I understood you correctly.
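Based on the example flow above, a sketch of that idea: the two-step "SELECT the count, then INSERT it" becomes one INSERT ... SELECT statement, built by a single parameterized function. The helper name is hypothetical, and the string concatenation mirrors the original flow; real code should use parameterized queries rather than splicing values into SQL.

```javascript
// Sketch: collapse "SELECT count -> change node -> INSERT" into one
// INSERT ... SELECT statement, using the table/column names from the
// posted flow. Helper name is hypothetical.
function buildInsertSelect(machine, shift) {
    return "INSERT INTO [Dev].[dbo].[CycleStarts] (Machine, WorkShift, NumCycles) " +
           "SELECT '" + machine + "', '" + shift + "', COUNT(InCycle) " +
           "FROM MachineData " +
           "WHERE DATEDIFF(hh, Timestamp, getutcdate()) <= 8 " +
           "AND InCycle = 1 AND Machine = '" + machine + "'";
}

// msg.payload = buildInsertSelect("LH35N-3000B", 3); return msg;
```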

Those flows were a small example of 2 of them. I cannot post my entire flow due to the 32,000-character limit.

Here is a picture of 1 flow. I am trying to streamline it and reduce the number of connection errors caused by overlapping SQL commands. I have tried stored procedures, but after 1 flow, I lose track...

I will try to break up my export, so that I can get it all in a post.

WOW... that's simply... ENORMOUS

It looks like you are already mastering the art of copy & paste, but may I suggest that you:

  • split your main flow to several (smaller) flows
  • each smaller flow will start with input link and will end with output link
  • also, each smaller flow will accept parameters (you can bind them to the message body)
  • create a main flow that will trigger the smaller flows (better flow and logical control)
  • execute flows only when others have already completed ("listen" to the output link node)
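The "accept parameters bound to the message body" point above can be sketched as one shared function node that 26 link-in wires all feed, with the per-machine values carried on the message. The property names (`msg.machine`) and helper name are assumptions, not anything from the posted flow:

```javascript
// Sketch: one parameterized function node replacing 26 hard-coded
// copies. The machine name arrives on the message instead of being
// baked into the query string. Property/helper names are made up.
function buildCountQuery(msg) {
    return "SELECT COUNT(InCycle) as Cycles " +
           "FROM MachineData " +
           "WHERE DATEDIFF(hh, Timestamp, getutcdate()) <= 8 " +
           "AND InCycle = 1 AND Machine = '" + msg.machine + "'";
}

// In the shared function node:
// msg.payload = buildCountQuery(msg); return msg;
```

Each small flow then only needs an inject that sets `msg.machine` before the link-out node.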

In my opinion, at this scale you have to manage the orchestration yourself in front of Node-RED, using the built-in nodes to do so (I guess that is why those nodes are there in the first place).

What do you think?

Agree with @ymz that some rationalisation might be in order. But overall, stored procedures are what will help you, so you may want to look at how you can keep track of them.

Every time you send a text SQL command, the server has to go through the whole thing checking for errors, tokenising and so on. Stored procedures mean that this is only done when the procedure is updated, after that you are simply passing some parameters to the query.
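As a concrete sketch of that difference: instead of rebuilding the whole query text in a function node, you send a short EXEC with parameters and the server reuses the already-compiled procedure. The procedure name and parameter names below are hypothetical; they would match whatever you define in SQL Server:

```javascript
// Sketch: call a stored procedure instead of shipping the full query
// text on every trigger. Procedure/parameter names are hypothetical.
function buildProcCall(machine, shift) {
    return "EXEC [Dev].[dbo].[RecordCycleStarts] " +
           "@Machine = '" + machine + "', @WorkShift = " + shift;
}

// msg.payload = buildProcCall("LH35N-3000B", 3); return msg;
```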

You might also look for patterns in your flows and push those into sub-flows which will greatly simplify your Node-RED code.


Thank you for the suggestions. I broke the main flow into 26 flows. Each flow represents an independent machine. I have approximately 60 machines in our plant.

I will do some more research into sub-flows and try to figure out how to keep track of my stored procedures.


I created some stored procedures and then combined them into a machine-specific procedure. The query runs great using SSMS. See below:

When I run it in NR, I only get the first value. See Below:

Any help would be greatly appreciated.

Thanks again,

I would love to but I'm afraid I don't have access to MS SQL Server.

Is there some way to join that data together before returning?

Interesting... I will see what I can find.


Unless I'm mistaken, you can simply use the upload button to attach the flows_xxx.json file to your post, instead of breaking it into pieces and pasting into multiple posts... in fact, if you wanted to try it and it worked, feel free to delete those other 5 text blobs (I don't think anyone will read that far down past those posts otherwise).

If you are doing basically the same thing for every machine, you can almost certainly create a "parameterized" subflow -- just send in the bits that are different for each machine instance, and create one smarter flow that uses the input values to execute machine-specific tasks. Same with the stored queries, as Julian and others have mentioned.

But if you want to show dashboards for every machine, you probably already know that you cannot put the ui_xxx nodes into subflows and deploy 26 of them. Instead, it's better to create a single dashboard designed to show all data for a single machine -- then, based on some UI input (e.g. a pulldown list of machines), query for all the data related to that one machine and pass it to the dashboard.


Put the results of the stored procedures into a table and query the table from NR?

Thank you for that info. I will upload the flows_xxx.json file and delete the chaos. :sweat_smile: flows_HMI-Server.json (2.0 MB)

That is exactly what I was thinking! Working on it now. Thanks!

I finally got the stored procedure data into one table. I will continue working on simplifying my flows.

Thanks to everyone!

New flow, same info. Huge difference!


Well done sir.
I concur that stored procedures are the way to go, instead of a bunch of functions scattered about the flow that just build queries that never change. If you need to change a query, you have to go to each one and change it.

A stored procedure can be changed in one place.