Node Lifecycles and error handling at "flow deployment time"

Hi Everyone.

TLDR: is there a way to raise an error and mark a node as uninitialized / unavailable at deploy time?

I have a node I implemented that does a bit of setup when it gets deployed: authentication, and pulling some basic config information from a third-party system. When the node setup fails, I would like to inform the user that there was an issue (and what said issue was). Furthermore, if Node-RED failed to deploy the node, I would like it to report the error in the logs (or to the local "Catch" node).

Apologies if this is already well documented, but I am unfortunately not that experienced yet with Node.js and Node-RED development- but I am here to learn.

As you mention "pulling", "third party", and "authentication", you are likely doing async work (callbacks/promises/async-await), so how you have structured this code and where it lives (regular node? config node?) will determine what is possible.

Typically, a regular node calls node.error(err, msg). The msg parameter makes the error catchable by the Catch node, but as this happens during setup there will be no msg; you can still call node.error(...) without it. REF

To indicate to the user that there is an issue with the node, you can use node.status REF
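Putting those two together, a minimal sketch of the deploy-time pattern might look like this. The initNode wrapper and the stub node object are hypothetical stand-ins so the logic runs outside Node-RED; in a real node this code lives in the constructor you pass to RED.nodes.registerType.

```javascript
// Sketch: report async setup failures via node.error and node.status.
// "initNode" and the stub "node" are hypothetical; in a real node this
// logic lives in the constructor registered with RED.nodes.registerType.
function initNode(node, setup) {
    node.status({ fill: "yellow", shape: "ring", text: "connecting..." });
    return setup()
        .then(() => {
            node.status({ fill: "green", shape: "dot", text: "ready" });
        })
        .catch(err => {
            // No msg object exists at deploy time, so this is logged
            // but cannot be routed to a Catch node.
            node.error("Setup failed: " + err.message);
            node.status({ fill: "red", shape: "ring", text: "setup failed" });
        });
}

// Stub node so the sketch runs standalone:
const node = {
    lastStatus: null,
    errors: [],
    status(s) { this.lastStatus = s; },
    error(e) { this.errors.push(e); }
};

initNode(node, () => Promise.reject(new Error("bad credentials")))
    .then(() => console.log(node.lastStatus.text, node.errors[0]));
```

The same helper works for the success path: pass a setup function that resolves and the status goes green instead.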

Thanks! I completely forgot about the node.error function.

For now I am delaying the registration of the node.on("input", (msg, send, done) => {}) handler until I know the node can be started up. I'm still trying to figure out a mechanism to prevent flows from being enabled in the event that a node cannot be initialized-

Speaking of which, is there a cleaner way to get a current list of deployed flows other than this?
Without filtering for "tab" types, I end up with the nodes mixed into the list of flows as well.

            .then(result => {
                // guard against a missing/invalid result before filtering
                if (result === undefined || result === null || !Array.isArray(result.flows)) {
                    return [];
                }
                return result.flows
                    .filter(f => f.type === "tab")
                    .map(f => ({
                        name: f.label,
                        // flow objects carry "disabled", so invert it
                        enabled: !f.disabled
                    }));
            })
First, you need to ensure that your setup code can never fail unexpectedly. That is to say, you trap every possible error.

The standard method for synchronous code is to wrap things in a try { ... } catch (err) { ... } block, where the code that might fail is in the try block and the code to do something if it fails is in the catch block (which gets the err object containing the error reasons).
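A minimal sketch of that pattern, using a URL parse as a hypothetical stand-in for a setup step that might throw:

```javascript
// Trap synchronous setup errors with try/catch and keep the reason.
// parseEndpoint is a hypothetical stand-in for setup code that may throw.
function parseEndpoint(url) {
    return new URL(url); // throws TypeError on invalid input
}

let setupError = null;
let endpoint = null;
try {
    endpoint = parseEndpoint("not a valid url");
} catch (err) {
    setupError = err; // keep the error object for later reporting
}

console.log(setupError ? "setup failed: " + setupError.message : "ok");
```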

Deploy-time code will most likely be in the runtime (.js) part of the node (rather than the Editor HTML part). It can be tricky to communicate between the editor and the runtime. Generally, if you may need feedback to the Editor, it is better to get the html code to do the work. In your case, that would either mean calling the 3rd-party API from the Editor or calling an API defined in your runtime code.
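One way to define such an API in the runtime is via RED.httpAdmin, which the editor side can then query with $.getJSON. A sketch follows, with a tiny stub standing in for the Express app Node-RED normally provides; the route, token check, and response shape are all illustrative assumptions:

```javascript
// Sketch: expose a runtime check through an admin endpoint the editor
// can call. In a real node, RED.httpAdmin is the Express admin app
// provided by Node-RED; here a minimal stub records the route so the
// handler logic can run on its own.
const routes = {};
const RED = { httpAdmin: { get: (path, fn) => { routes[path] = fn; } } };

// Runtime side (the node's .js file):
RED.httpAdmin.get("/my-node/check", (req, res) => {
    // Hypothetical credential check; replace with your 3rd-party call.
    const ok = req.query.token === "expected-token";
    res.json({ ok: ok, reason: ok ? "" : "authentication failed" });
});

// The editor side would then call something like:
//   $.getJSON("my-node/check", { token: ... }, resp => { ... });

// Exercise the handler via the stub:
let reply;
routes["/my-node/check"]({ query: { token: "wrong" } }, { json: r => { reply = r; } });
console.log(reply);
```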

(Sorry, I wrote most of this before Steve's answer but had to do some "real" work :slight_smile: ).

So of course, I forgot that the best approach is probably to generate a node.error(...) from within the runtime and trap that in the Editor.

A better and more "normal"/"expected" way to do this is to hook up events as normal and have an internal setupNormal flag or check function that you can inspect when a msg arrives. Then, if setupNormal !== true, call node.error(new Error('setup invalid blah, blah'), msg). This means the user/flow has a chance to catch it with a catch node. You can still also set the node's status using node.status(...) for a visual cue.
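A sketch of that guard, with a stub node so it runs outside Node-RED (setupNormal and setupError are the hypothetical flags your setup code would set):

```javascript
// Always register the input handler; guard it with a setup flag so
// failures surface as catchable errors (msg is passed to node.error).
function registerInput(node) {
    node.on("input", (msg, send, done) => {
        if (node.setupNormal !== true) {
            // Passing msg makes this catchable by a Catch node.
            node.error(new Error("setup invalid: " + node.setupError), msg);
            done();
            return;
        }
        send(msg); // normal processing would happen here
        done();
    });
}

// Stub node so the sketch runs standalone:
const node = {
    setupNormal: false,
    setupError: "authentication failed",
    errors: [],
    sent: [],
    handlers: {},
    on(event, fn) { this.handlers[event] = fn; },
    error(err, msg) { this.errors.push(err.message); }
};

registerInput(node);
node.handlers.input({ payload: 1 }, m => node.sent.push(m), () => {});
console.log(node.errors[0]); // "setup invalid: authentication failed"
```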

There really should be no need for a node to know (or care) about other nodes. What are you trying to achieve?

Is there any particular advantage to logging the same error for every message? The context seems a little misplaced by having the message throw the error when the error was picked up by the configuration / initialization of the node during deployment. The message is not the one at fault here- the node config is.

I am simply thinking in terms of some of the flows that we have implemented- I might do a deployment of the flow in the morning, but I might only see the first event on this particular flow in the evening, by which time the support team will be alerted. Since they are mostly just keeping an eye on the monitoring dashboards, they will have to wake a very grumpy developer somewhere to go fix it. :rofl:

I completely agree with you, and this holds true for 99% of the cases, but in our particular case we are consuming messages from several source systems over a variety of protocols (webhooks, message queues, etc.) using nodes we are building ourselves. If we know that a particular flow has a configuration issue (again, in a node of our own creation), we are somewhat hesitant to accept a payload from a source system, since we know that we won't be able to process it in full while they can cache it until we can. We would much rather tell the source system to hang on a bit than let them open the floodgates and swarm us with payloads; once they have handed over said payload and we have acknowledged receiving it, we are on the hook for it.

This is just an MVP for now and our hardware will be severely constrained until we have proven that it is fit for purpose. A local retry cache is a bit of a luxury, but something we will definitely look at.

It can be noisy, but without this a system could not be configured to catch (and log / react to) issues where a message was sent to a node that did nothing (which is what you get if you don't hook up the "on input" handler). You can of course anti-repeat it and only send the error the first time, but my point is that not hooking up the input means a system cannot (in an automated fashion, via flows) handle and react to misconfiguration.


Can you not do this in oneditsave() so that the error is picked up when the node is configured rather than when it is deployed?

Not a bad idea. I did a quick test and it doesn't seem like there is a mechanism for aborting the save process: the couple of examples I have seen of it are mostly just for remapping or parsing user input, and neither returning a value (true, false, or string) nor throwing an exception inside the oneditsave function aborts the save process.

There is however a workaround: declare a hidden input on the node and name it something like "validator". In the defaults, set its value to an empty string and define a validate function that looks something like value => value === "". Then, during oneditsave, I can update the value with $("#node-input-validator").val("Failed authentication with system ABC"); based on a callback from the node's backend.
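Pulled out of the editor, the moving parts of that trick are plain data and functions, so they can be sketched (and exercised) on their own; the field name and error text are illustrative:

```javascript
// The hidden "validator" field trick, reduced to its moving parts.
// In the editor, "defaults" lives inside RED.nodes.registerType({...})
// and the value is updated via $("#node-input-validator").val(...).
const defaults = {
    validator: {
        value: "",
        // Valid only while empty; oneditsave writes an error message
        // into it when the backend check fails.
        validate: value => value === ""
    }
};

// Simulate what oneditsave would do after a failed backend check:
let validatorValue = "";
validatorValue = "Failed authentication with system ABC"; // hypothetical message

console.log(defaults.validator.validate(validatorValue)); // false -> node flagged invalid
```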

This will give a visual indicator that there is a validation issue with the node, though it won't tell you what the issue is (unless you add a whole bunch of fields and name them things like "SystemABC_Authentication" and "SystemABC_InvalidClientID").

As for disabling flows with invalid configurations on restart: it seems more cumbersome than I first suspected, so instead I wrote a quick "control tower" service that inspects flows for nodes that have a "validate on startup" property. The service is polled by ingress nodes, which remain dormant until the "control tower" gives them the go-ahead. All nodes that have a "validate on startup" property have to notify the control tower that they are ready to accept traffic, and this status gets reset every time the flow gets redeployed.

You will need to do the lookup before then. Typically I do it when an appropriate input field changes value. That way, you have time to make the API call, get the response and, if the result is invalid, you can turn off the node's "Done" button to prevent it from being saved.

Yes, that may work. Though may not be needed if you can hang the API call off the change event of another input.

Yes, this is simply a gatekeeper function at the start of your flow. You could use a flow variable for that of course. That is the standard way to maintain context in flows (they are generically called "context variables").
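For example, a Function node at the head of the flow could gate on such a variable. The "ready" key is an assumption; in a Function node, flow, msg, and node are provided by Node-RED, so they are stubbed here to let the logic run standalone:

```javascript
// Gatekeeper sketch for a Function node at the start of a flow.
// Assumes some other part of the flow sets the "ready" flow variable
// to true once setup has been validated.
function gate(msg, flow, node) {
    if (flow.get("ready") !== true) {
        node.warn("flow not ready - dropping message");
        return null; // returning null stops the message in a Function node
    }
    return msg;
}

// Stub flow context and node for demonstration:
const store = new Map();
const flow = { get: k => store.get(k), set: (k, v) => store.set(k, v) };
const node = { warnings: [], warn(m) { this.warnings.push(m); } };

console.log(gate({ payload: 1 }, flow, node)); // null while not ready
flow.set("ready", true);
console.log(gate({ payload: 1 }, flow, node)); // the msg passes through
```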