The flow designated as pre/post cannot communicate with the other flows. We could return the content of a done output, but that's it. As Ralph said, if we start all the flows, we cannot freeze their logic without modifying the node structure - which would break all the nodes.
It's not enough to read the source code.
What I tried to express: The flow "just" creates the environment. All activity lives in the nodes. Once (or better: Only if) you get the nodes under control, you get the flow under control...
Isn't this like running the three segments in their own sandboxes & in sequence? Sandboxing the segments could be possible with reasonable effort...
That goes against the event-driven nature of Node-RED - events don't have specific start/stop points in time. Events are triggered and they trigger other events, but there is no clear delineation in time.
One of the first things I discovered: the naming conventions in the source code can be confusing. It's organic software that has grown, not something designed by a committee of specialists who created a standard against which Node-RED was implemented (see Unix for that - which is why, for Linux, it was relatively clear what had to be done).
What you (@GogoVega) could also explore is the function node and its various stages:
I've never had the need to understand exactly when "On Start" or "On Stop" is called, but I assume it would be when a "flow" gets stopped and each individual node is also stopped. There would be no guarantee of the order in which the nodes get stopped - which, I guess, is @GogoVega's question.
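For reference, a minimal sketch of what could go in those two tabs of a Function node - `setupDevice()` and `teardownDevice()` are made-up helpers. As far as I know, if the "On Start" code returns a Promise, incoming messages are queued until it resolves:

```js
// "On Start" tab: runs when the node is started (runtime start or deploy).
// Returning a Promise delays message handling until it resolves.
flow.set("initialised", false);
return setupDevice().then(function () {   // hypothetical async initialisation
    flow.set("initialised", true);
});
```

```js
// "On Stop" tab: runs when the node (or its flow) is stopped or redeployed.
return teardownDevice();                  // hypothetical async cleanup
```

The "On Stop" tab is where the ordering question bites: each node does its own cleanup, but there is no guarantee which node's "On Stop" runs first.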
The reason I opened this thread is this FR #2296.
The reasoning is interesting (for the PR) but not complete for an industrial process, because the goal here is to create a sequence of three segments that allows an entire flow to be used (not just a function in a Function node).
The NR process does not change; we add two segments which act like the rest, with the difference that they are only used for one job during the runtime's lifetime.
Let's take an example: I have a device which controls a robot arm. When this device is restarted, it performs a step which puts the arm back in the right position before it can start its cycle.
NR Starting
Pre-flows => Puts the arm back in the right position (start position)
flows => normal cycle
NR Stopping (normal shutdown)
flows stopping => not possible to do a job here
Post-flow => Place arm in safety position
For me it's mainly the API because everything goes through it
But this exists at node level with the "on start" and "on stop" methods of a Function node (for example). Of course these events also exist for custom nodes, so you can do the same there.
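For a custom node, that looks roughly like the sketch below - the node name "example-device" and `connectToDevice()` are made up for illustration:

```js
// Sketch of a custom node using the start/stop lifecycle.
module.exports = function (RED) {
    function ExampleDeviceNode(config) {
        RED.nodes.createNode(this, config);
        const node = this;

        // "Start": the constructor runs when the node instance is created
        // (runtime start or deploy).
        const client = connectToDevice(config.address);   // hypothetical helper

        node.on("input", function (msg, send, done) {
            client.write(msg.payload);
            send(msg);
            done();
        });

        // "Stop": called when the flow is stopped or redeployed. The runtime
        // waits (with a timeout) for done() before it moves on.
        node.on("close", function (removed, done) {
            client.disconnect(function () { done(); });
        });
    }
    RED.nodes.registerType("example-device", ExampleDeviceNode);
};
```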
Imagine shutting down your computer (at least this is what happens with my Mac): each running process is sent a message to shut down. The operating system waits for a couple of minutes and then says "hey, program XYZ is not shutting down, so I won't shut down". Those couple of minutes that the OS waits are essentially arbitrary - and the program that didn't shut down doesn't inform the OS that it's not shutting down; the OS assumes it after a couple of minutes.
And so it is with Node-RED: it can't know there is something wrong, it can only assume. So any pre/post calls can happen, but Node-RED doesn't know when to continue. So people start counting messages, and then you have a state where something doesn't reply to a message ... then you have timeouts ... it gets very complicated to handle the edge cases.
So here: what happens if the robot arm never hits its start position? What does NR do? If NR starts to rely on external "things" happening, then it will never be able to do anything. That's why operating systems have a hard reset: the power button!
It would be different if you have a clear, limited number of devices and a clear dependency between them but you can't generalise and assume Node-RED will always have this. Therefore this should be done by the setup that you are running and not built into Node-RED.
This is why a sequence is more effective: the pre and post sequences must call done to say "I have finished my job". Allowing time for a node to close does not allow a flow to continue; a node can end up sending a message to an already-closed node...
If the arm cannot initialise itself to get into the right place, the process cannot start. This is just an example, but yes, it will require human intervention (via the dashboard, for example).
and that's where the hand-waving starts - no offence. But those are the edge cases that crop up and then it's "oh, what do we do now"-time.
Hence my preference is not to support any of this, because it introduces too many unknowns for too little benefit.
If you really want to get into the weeds and start dealing with this dependency logic, then it might be better to have a real-time operating system or some other system designed specifically for robot arm control.
Each sequence in the operation would be a tab or group of tabs. When a particular tab or group of tabs is enabled then that part of the sequence is running. To switch to the next part of the sequence disable and enable tabs as required. At the moment this could be done, but it would require a Deploy to switch sequences.
I'll put it another way: it's as if NR had three instances. When NR starts, it executes the first instance until it has finished its job, then closes it and starts the second instance, which stays open as long as NR is running. When NR stops, it closes the second instance (like the current stop - each node closes), then executes the third instance -> done -> stop.
The pre and post sequences are not obligatory.
I think it's simpler to define an option on the flow than to modify the editor to integrate an editor split into three. It would be a flow that looks like a subflow, with a done output, but which cannot communicate with the other flows.
I think there is merit to exploring the concept - without any commitment to implement anything yet.
It is a common question on how to run something once when Node-RED starts up - to initialise some state. That is typically going to be an Inject node configured to trigger on startup - but that doesn't guarantee the message will trigger (or the initialisation flow has completed) before an MQTT node starts sending incoming messages.
The solution today is to add gate nodes throughout your flow. This can get complex the more 'regular' flows you have that need to block until the initialisation has completed.
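To make that concrete, the usual gate is just a Function node checking a context flag that the startup flow sets when it has finished; the flag name "initDone" is arbitrary, and you would need global context instead of flow context if the flows live on different tabs:

```js
// Gate: a Function node placed in front of a "regular" flow.
// It only passes messages once the startup flow has set the flag.
if (flow.get("initDone")) {
    return msg;   // initialisation finished - let the message through
}
// Not ready yet: drop the message (or queue it in context for later replay,
// which is exactly where the extra complexity creeps in).
return null;
```

The startup branch (an Inject node set to fire once at startup) ends in another Function node doing `flow.set("initDone", true)`, and the gate has to be duplicated in every flow that must wait.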
As a concept, being able to guarantee that a given flow starts before the others would reduce that complexity and make it much easier to reason about the flows.
But there are lots of open-ended questions:
what signal is used to indicate the initialisation flow is complete?
what if the initialisation flow hangs or never completes due to some external influence? Does it then get a timeout of some sort?
I'll also point to some design work done (in 2019...) around graceful shutdown of flows - providing a way to halt incoming work whilst allowing in-progress work to complete before fully stopping.
There are some other designs in that repo's PR list, around timeouts and message tracking, that could all be relevant to this discussion.
I had seen that design; that's why I created this thread - because, as you said, it's much more effective to block a flow than to block each node.
I had thought of an output called done.
K-Toumura proposes it, and it could be one way: the user would choose between a timeout and a complete block. As I said above, NR already works in the same way; it is entirely possible to use the dashboard to manually unblock the situation (as an example, of course).
I would add that we are adding a sequence (which allows an additional job to be done) - the safety of the machine must be ensured by other mechanisms in addition to NR.
K-Toumura's idea is to close the injection nodes and wait for the other nodes to finish (done or timeout).
Which implies:
the runtime must know the injection nodes
the injection nodes must stop their message generation logic
the other nodes must call done as soon as they have finished their job
Question: how do we guarantee the first point? In the node definition? Or do we leave the choice to the user - but in that case I would prefer a list of candidate nodes (I mean the user chooses from an already-filtered list containing only injection nodes).
and 3a - all nodes must implement this - so you can only make your flow do this if ALL the 3rd party nodes that you use (and indeed core nodes) all call done/complete correctly. You will need to audit this.
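For reference, the per-message completion that point 3/3a relies on looks like this in a node's input handler (`doSomethingAsync()` is a hypothetical helper) - this is what a Complete node, or any drain/timeout logic, depends on:

```js
// Every node on the message path has to call done() for the runtime
// (or a Complete node) to know the message has been fully handled.
node.on("input", function (msg, send, done) {
    doSomethingAsync(msg, function (err, result) {   // hypothetical async work
        if (err) {
            done(err);          // report the failure (picked up by Catch nodes)
        } else {
            msg.payload = result;
            send(msg);
            done();             // mark the message as completed
        }
    });
});
```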
Yes it does - BUT - we are really talking about startup/shutdown sequences here, aren't we? While we think of Node-RED as flow-based, it is "just" another Node.js app at heart and already has standard startup/shutdown sequences. All I'm saying is that having admin-level hooks to extend the startup/shutdown would be potentially useful in a few use-cases, wouldn't add a lot of complexity (but would add some flexibility), and would be in keeping with other parts such as the ExpressJS and Socket.IO middleware capabilities we already have.
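To illustrate the idea: the Express middleware setting below is real, but the two `onRuntime*` hooks are purely hypothetical names for the kind of extension point being suggested - nothing like them exists today:

```js
// settings.js - the kind of admin-level extension point that already exists...
httpAdminMiddleware: function (req, res, next) {
    // e.g. extra logging/auth in front of the editor & admin API
    next();
},

// ...and a hypothetical startup/shutdown pair in the same spirit
// (illustration only - these options are NOT part of Node-RED):
onRuntimeStart: async function () {
    // run once after the runtime is up, before flows are started
},
onRuntimeStop: async function () {
    // run once after flows have stopped, before the process exits
},
```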
I think there is a documentation page where the events that occur for node initialisation are listed - not sure. Hooks and events are great, and I'd be the first in line for more. As I was playing around with my server-less Node-RED, I also discovered that there was no event triggered once everything was done ... I ended up using workspace:dirty, treating only the first time it was triggered as meaning that everything was basically ready in the editor. (Hint: RED.events.DEBUG = true is a good way to see what is happening in the frontend.)
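A sketch of that workaround, runnable from the browser console: workspace:dirty is the event mentioned above, and treating its first occurrence as "ready" is just the heuristic described here, not an official API.

```js
// Editor-side: log all editor events and treat the first "workspace:dirty"
// as "the editor has basically finished loading".
RED.events.DEBUG = true;   // verbose event logging in the browser console

let editorReady = false;
RED.events.on("workspace:dirty", function () {
    if (!editorReady) {
        editorReady = true;
        console.log("Editor is (basically) ready");
    }
});
```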
What I'm against is bending Node-RED into something it never was: a sequential calculating machine. It's event-driven and it should remain that way. On the other hand, if someone creates something like a systemd node that is in charge of starting, stopping and restarting stuff, why not. I always come back to my Linux analogy: all the kernel does is handle interrupts; things like systemd handle starting and stopping of stuff. (I know: the kernel does much more, but not a lot ;))
I never asked for NR to be sequential (cyclical). (And that's why giving a priority to each flow would be complex to do.)
I would like to add, as Julian said, the startup/shutdown sequences. These sequences act like the current NR, with the only difference that they have just one job, and as soon as that job is finished, NR moves on to the next sequence. This allows a flow to be executed before the main flow(s) are launched, as Nick also said.