Stored Credentials - any progress on this?

Continuing the discussion from Location of stored credentials:

Hi,
I am trying to use a single directory for node modules while creating separate directories for the different instances of Node-RED, so the user directory is shared but each instance's flow file and settings file live in its own directory. This is causing a problem for me: every time I restart an instance, the credentials are reset (blanked out).
I think this is the same topic as the one discussed here. Is there any workaround, or do I have to keep a separate user directory for each instance?

I think that the only way you could do this would be to make the credential setting a more complex function. I can't remember whether Node-RED allows this off the top of my head.
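For what it's worth, one thing that is likely to help here (a sketch only, not tested against this exact setup) is setting `credentialSecret` explicitly in each instance's settings file, rather than letting Node-RED auto-generate a key and store it in the shared userDir. The file name and environment variable below are just illustrative:

```javascript
// settings-instance1.js  (illustrative name - one settings file per instance)
module.exports = {
    // Each instance points at its own flow file (the credentials are stored
    // alongside it as flows_cred.json by default).
    flowFile: '/opt/nodered/instance1/flows.json',

    // Explicit key used to encrypt/decrypt the credentials file.
    // If this is left unset, Node-RED generates a random key and keeps it in
    // the userDir, which is easy to lose or overwrite when several instances
    // share that directory - and then the credentials get reset.
    credentialSecret: process.env.NR1_CRED_SECRET || "a-long-random-string",

    // ...the rest of the usual settings...
};
```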

My basic requirement is to log into a MySQL database, and I have only one database.
I can communicate with all instances over MQTT. Is it advisable to use MQTT to send queries from the different instances to one main instance that has the MySQL node?
I am able to do it, but I am asking whether there is any performance issue. I have not rigorously tested what happens when multiple queries arrive close together. Also, is there a way to match the sending code with the receiving code, so that only the instance that originated the query gets the result and it does not get passed on to some other MQTT node?

Certainly that is a viable way to simplify things, turning one NR instance into a data hub. There is probably a small overhead in using a single node.js app to do all the queries, but I doubt it is noticeable unless you get thousands of simultaneous requests. Depending on how fast the client instances need their answers, you could maybe do some batching of the requests, which might help in some circumstances. There will, of course, also be a limit on the amount of data you can send out in short order. Too many variables to say for sure; you probably need to work up some tests.
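Just to illustrate the batching idea (a sketch only; the 250 ms window is arbitrary), a function node on the hub side could collect incoming query messages and release them towards the MySQL node in bursts:

```javascript
// Function node on the hub instance: collect incoming query messages and
// flush them onwards every 250 ms instead of one at a time.
const batch = context.get('batch') || [];
batch.push(msg);
context.set('batch', batch);

// Start a flush timer if one isn't already running.
if (!context.get('timerRunning')) {
    context.set('timerRunning', true);
    setTimeout(() => {
        const queued = context.get('batch') || [];
        context.set('batch', []);
        context.set('timerRunning', false);
        // Pass each queued query on; they could instead be merged into a
        // single multi-statement query if the database node supports it.
        queued.forEach(m => node.send(m));
    }, 250);
}

return null; // nothing is forwarded immediately
```

The join node (in its "after a timeout" mode) can collect messages in much the same way without any code.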

This is a common requirement. The requests will need some kind of tracking ID; you could generate a random UUID, for example, or if using MQTT you could simply use the MQTT client id. I've never really pushed the features of MQTT v5, but I'm thinking its response-topic and correlation-data properties might be a way to direct a reply back to the message that requested it. If that doesn't work, you probably need to set up some topic filters on the broker so that specific MQTT client ids can only see certain topics.
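A rough sketch of that tracking-ID idea, assuming each client publishes to a shared request topic and subscribes only to its own reply topic. The topic names, the 'client1' identifier and the msg.request property are all invented for illustration:

```javascript
// --- Client instance: function node just before its mqtt-out node ---
// Tag the query with a tracking ID and say where the reply should go.
const queryId = Date.now().toString(36) + Math.random().toString(36).slice(2);
msg.topic = 'db/request';                 // shared request topic on the broker
msg.payload = {
    id: queryId,
    replyTo: 'db/reply/client1',          // this instance's private reply topic
    sql: msg.payload                      // the query text
};
return msg;
```

```javascript
// --- Hub instance: function node after the database query has run ---
// The original request object was stashed on msg.request earlier in the flow
// (by the function node that unpacked the incoming MQTT message).
msg.topic = msg.request.replyTo;          // publish the answer only to the asker
msg.payload = { id: msg.request.id, result: msg.payload };
return msg;
```

Each client then subscribes only to its own `db/reply/...` topic and checks that the id in the reply matches the one it sent, so results never end up at the wrong instance.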

I can't really test the MQTT v5 thing at the moment as I'm off to see my eye surgeon. However, I am interested in the results if you do some experimentation.

I think if you're using best practices - batching requests, rate limiting, etc. - you should be fine. The more concerning thing would be security, but assuming you're accessing resources through some sort of secure layer (either actual or intrinsic), that should be fine. I would also suggest, just to be friendly to your data, setting up a delay between concurrent requests. Giving your requests a bit of breathing room will help mitigate any performance concerns.

Do also be aware that your requests might need to be controlled in some way - e.g. filtering out wildcard requests, DROP TABLE statements and the like - just so you don't get a request come in that asks for literally all of your data five times in a row, or something to that effect.
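As a sketch of that sort of guard (the "SELECT-only" rule here is just an example policy, not a recommendation), a function node on the hub could drop anything that isn't a single plain SELECT before it ever reaches the database:

```javascript
// Function node on the hub: crude query filter in front of the database node.
const sql = String((msg.payload && msg.payload.sql) || '').trim();

const isSelect = /^select\s/i.test(sql);                           // must start with SELECT
const multipleStatements = sql.replace(/;\s*$/, '').includes(';'); // no "a; b" chaining

if (!isSelect || multipleStatements) {
    node.warn(`Rejected query from ${(msg.payload && msg.payload.replyTo) || 'unknown'}: ${sql}`);
    return null;                          // swallow the message entirely
}
return msg;                               // looks acceptable, pass it along
```

Parameterised queries on the hub side are the stronger protection against malformed input, but even a crude filter like this stops the obvious DROP TABLE cases.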

This, of course, is a general problem when allowing user queries on DBs. You should also limit the number of queries per appropriate time period and per user. Together, these measures should help mitigate either accidental or deliberate misuse.
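A per-user quota can be a few lines in a function node on the hub (the 30-per-minute figure and the use of the replyTo field as the identity are both just examples):

```javascript
// Function node on the hub: per-client rate limit using flow context.
const LIMIT = 30;                                  // max queries per minute (example)
const WINDOW_MS = 60 * 1000;
const now = Date.now();
const who = (msg.payload && msg.payload.replyTo) || 'unknown';

const counts = flow.get('queryCounts') || {};
const entry = counts[who] || { windowStart: now, count: 0 };

if (now - entry.windowStart > WINDOW_MS) {         // start a fresh window
    entry.windowStart = now;
    entry.count = 0;
}
entry.count += 1;
counts[who] = entry;
flow.set('queryCounts', counts);

if (entry.count > LIMIT) {
    node.warn(`Rate limit exceeded for ${who}, dropping query`);
    return null;                                   // drop the excess request
}
return msg;
```

The built-in delay node in rate-limit mode does a similar job globally without any code; the snippet above just adds the per-client split.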