Import flow overriding current one

Hi there!

I have some usability feedback that we may be able to find a solution for together.

Very often I find myself receiving an exported flow from somebody asking for help. I'll open an instance of Node-RED, import the flow, find and fix some bug, and then export the flow and e-mail it back. The issue comes when this flow needs to be imported back into the instance of the person who asked.

Currently, an imported flow is "appended" to the current flow. That is exactly what we want in some cases, but in situations like the one above, what we really want is to replace the current flow with the imported one. Right now, the user needs to manually delete everything, including configuration nodes and subflows, before importing, which can be pretty cumbersome and error-prone, especially if the other party is not very fluent with Node-RED (maybe your mom/dad helping out in a remote setup :upside_down_face:).

As a possible solution for this, without breaking much of the current workflow, I thought about adding a new "overwrite flow" option to the "Import to" selection, like in this mock:

One thing that could also be changed (but that's debatable) is to show this same modal when dragging a file onto the workspace, with the content pre-filled in the text area. So in the end, the instructions to the user would be "Drag File > 'Overwrite' > 'Import' > 'Deploy'".

What do you all think about it? I can later try to implement it and send a PR for it.


Alternatively, if you set up a private repository on GitHub, you could fix the flow and commit it to the repo, then they could do a git pull and restart Node-RED.

That's true for sure, but it only helps when the instance running Node-RED has internet access (or sometimes any network at all) :slight_smile:

Hi @gfcittolin

this is definitely something on our backlog to address.

It becomes even more relevant when you're sharing flows using subflows - as you end up importing copy after copy of the subflow definition.

In terms of design, the user may not know if they are importing something that would overwrite an existing flow - so presenting that as a choice they have to make at the start may not be ideal.

The approach I assumed we'd take is to detect whether the importing flow is a duplicate of something that's already been imported and then ask the user whether to import a copy or replace the existing thing.

However that is a non-trivial task.

The current code, by default, generates all new node ids when it imports something - allowing the user to import multiple copies. So that makes it hard to detect any potential clash.

So some design work needs to be done around how we detect the duplicates.
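To make the problem concrete, here is a minimal sketch (not the actual editor code) of one way duplicate detection could work: map every node id in an export to a positional placeholder before comparing, so the regenerated ids stop mattering. It deliberately ignores the harder parts - node reordering, config-node references hidden inside node properties, partial overlaps - which is exactly where the design work lies.

```javascript
// Minimal sketch only - not how the Node-RED editor does it today.
// Normalise a flow export by replacing node ids with positional placeholders,
// then compare the normalised JSON of the imported and existing flows.

function normalise(flow) {
    // Map each original id to a stable placeholder based on its position.
    const idMap = new Map();
    flow.forEach((node, index) => idMap.set(node.id, "node-" + index));

    const remap = (value) => idMap.has(value) ? idMap.get(value) : value;

    return flow.map(node => ({
        ...node,
        id: remap(node.id),
        z: remap(node.z),                                   // owning tab/subflow, if any
        wires: (node.wires || []).map(port => port.map(remap))
    }));
}

function looksLikeDuplicate(importedFlow, existingFlow) {
    // Order-sensitive comparison: good enough to show the idea, not robust.
    return JSON.stringify(normalise(importedFlow)) ===
           JSON.stringify(normalise(existingFlow));
}
```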


Hey @knolleary

Somehow I always miss it when something I come up with has already been raised somewhere else. Sorry about that!

For sure. I've thought about this possibility, but first there's the problem of the ids being different after an import/export cycle, and second, the flow may be so different that it gets classified as something to append, when what we actually want is to replace it.

Agreed, especially when importing snippets from the forum. Has someone posted just the node I have to change, or the whole flow, fixed?

From the point of view of a software user, I personally prefer having options to choose from with well-known behaviour over a single action whose outcome depends on the situation, even if the options come with some cognitive overhead when operating them. Well-thought-out defaults can help here (like keeping the current "current flow" option). That all said, there may be a need for a "mixed" import, like adding new nodes but not duplicating subflows. If this were a simple issue, it would have been solved already :slight_smile:

Even so, I think adding something like the "overwrite flow" option above would solve the use case where you consciously want to replace everything with what is being imported.

Just a bit more context on the situations we have: we deploy Node-RED with ST-One, and very often it sits in an industrial network without internet access. Sometimes the flow needs to be changed (new requirements, bugs, etc.), so we take our backups, make the required changes, and send the new flow to someone who can open Node-RED and replace the current flow with what we've sent.

Having git/projects doesn't help here (although we do have it managed in external git repos in some situations), and having some way to deterministically know that what you've sent is what is actually running (no duplicate nodes, nothing left over) would be very good. We sometimes overcome this by having the user replace the flow.json file manually.


Git could still surely help. It is just that much of the logic would need to be outside of Node-RED itself. Git doesn't have to have a central server after all.

But wouldn't it be even simpler to have flows that handle the flow file itself? A simple flow would let a user send out their flow file. Another flow would let a user replace their flow with a new file (with a backup for safety) and then restart Node-RED.

Or am I missing something?
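For illustration only, here is a minimal sketch of the "replace with backup" step such a flow could drive (for example through an exec node). The paths, file names and the restart mechanism are assumptions about a typical install, not a recommendation for any particular setup.

```javascript
// replace-flows.js - minimal sketch of "back up the current flows file, then
// swap in the new one". Assumes a default-ish userDir layout; adjust to taste.
const fs = require("fs");
const os = require("os");
const path = require("path");

const userDir = process.env.NODE_RED_USER_DIR || path.join(os.homedir(), ".node-red");
const flowsFile = path.join(userDir, "flows.json");   // name depends on your settings
const newFile = process.argv[2];                      // flow file received from the user

if (!newFile) {
    console.error("usage: node replace-flows.js <new-flow-file>");
    process.exit(1);
}

// Sanity check: refuse anything that isn't a parseable JSON array of nodes.
const incoming = JSON.parse(fs.readFileSync(newFile, "utf8"));
if (!Array.isArray(incoming)) {
    throw new Error("Flow file should contain a JSON array of nodes");
}

// Keep a timestamped backup so a bad update can be rolled back.
const backup = flowsFile + "." + new Date().toISOString().replace(/[:.]/g, "-") + ".bak";
fs.copyFileSync(flowsFile, backup);
fs.writeFileSync(flowsFile, JSON.stringify(incoming, null, 4));

console.log("Flows replaced, backup written to", backup);
console.log("Now restart Node-RED (systemd, pm2, or whatever manages the service).");
```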

Hi Julian! Thanks for your input! There are surely other ways around it.

I don't much like the alternative of the flow updating itself, because of the risk of a lockout if an update goes wrong for whatever reason. In my environment, Node-RED runs with very restrictive system permissions, such that it is not able to restart itself. We could do it over the API (and I've done this for flow backups), but then we'd need some way to manage Node-RED's password so we don't have it hard-coded in the flow itself, then we'd need a UI for that... and it starts to get over-complicated.
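(For reference, this is roughly the shape of the API-based backup I mean - a minimal sketch only, assuming Node 18+ for the global fetch, with the host and credentials as placeholders, and with the password-handling problem left exactly as unsolved as described above.)

```javascript
// backup-flows.js - minimal sketch of pulling a flow backup over the Admin API.
const fs = require("fs");

const base = "http://192.168.0.10:1880";   // placeholder address of the edge device

async function backup(username, password) {
    // Exchange the admin credentials for a bearer token (only needed when
    // adminAuth is enabled in settings.js).
    const tokenRes = await fetch(base + "/auth/token", {
        method: "POST",
        headers: { "Content-Type": "application/x-www-form-urlencoded" },
        body: new URLSearchParams({
            client_id: "node-red-admin",
            grant_type: "password",
            scope: "*",
            username,
            password
        })
    });
    const { access_token } = await tokenRes.json();

    // Fetch the currently deployed flows and save them locally.
    const flowsRes = await fetch(base + "/flows", {
        headers: { Authorization: "Bearer " + access_token }
    });
    const flows = await flowsRes.json();
    fs.writeFileSync("flows-backup.json", JSON.stringify(flows, null, 4));
}

// Hard-coded credentials are exactly the problem described above; this is a sketch.
backup("admin", "secret").catch(err => console.error(err));
```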

Maybe what I'm failing to communicate is the benefit of the concepts of "backup" and "restore". A big part of my professional background is interfacing with the automation world, where you often find the concept of a "project file" that you can back up at any time and then easily and reliably restore to the same or another device. You can save a lot of downtime recovering from issues and greatly simplify the maintenance of a system this way. This is why I thought it would be worth adding something like this to the core of Node-RED rather than monkey-patching something around it. We do have the "backup" concept with the "export" feature, but the "restore" concept does not match exactly with the "import" we have today.

It is totally fine if we find out it's not the time to change something like this in the near future; we can totally live and work with how it works today. The discussion around it has already been very productive from my point of view.

(P.S.: In the land of dreams, it would be awesome to have the whole userDir as an exportable project unit, including installed nodes and everything... but when I think about the sort of problems this would cause - node version incompatibilities, compiled binaries... - better to let this idea stay in the land of dreams :wink:)

Understood. That is why the process should be outside of Node-RED for the most part. But you can create a flow for users to use in order to simplify things if that is needed. Also why any update mechanism should allow for easy backout.

Sensible. One of the best ways to run a Node-RED instance is via systemd, so that you can leverage all of the protections and flexibility that gives you, including logging, enforcement of the running user, setting environment variables and controlled restarts. Your restart script can be quite comprehensive, so that after an update of the flows file, if the instance won't restart, you could reinstate the previous flows file and try again.

But this isn't hard to do with Node-RED. It's just that it isn't yet standardised. Being a node.js microservice, you can use the same kind of tools that you'd use for any other similar service.

As I say, not everything should live in the Node-RED core and a productionised backup/restore/recovery would, I think, be a prime candidate for something that is slightly separate to the core. Though I certainly agree, as I think others would too, that having standardised methods would be great. As always though, core development falls mainly on just a very few people so has to be prioritised.

This is exactly what the forum is good for though. Talking through ideas and possibilities. For example, if a standardised backup/recovery/restore process could be agreed, possibly others would step up to help code it.

Your userDir is already exportable, or indeed gitable (is that a word?!), as long as you exclude node_modules and take care where you put it (since it includes your credentials file). Backup & recovery is actually really easy, especially if you take care with the settings.js and package.json files to make sure that they are portable. My installations are all highly portable - at least outside of some hardware-specific nodes.

What I think you are considering though is taking that into a production, multi-user environment. I don't currently have that need which is why I've not thought through all of the details but it doesn't seem that hard to achieve.


One thing I have to agree with is that not everything should be inside Node-RED; otherwise you end up embracing the whole world instead of focusing on the core of the project. I've seen this happen in other projects already. There's a blurred line between what is worth bringing inside and what should be delegated to other tools.

Most of the time, the only tool our users have is a browser on a Windows machine, used to access Node-RED on a remote edge device. Due to these perhaps odd constraints of our setup (no network/internet, no access to the file system, restrictive environment, etc.), having something more tightly coupled for backup/restore is very important for the user experience; at least that is what a couple of years of experience bringing Node-RED to the shop floor has shown us.

As a short-term solution to this, we're planning to either extend the menu to add such options (and it's awesome that Node-RED has an API allowing such extensions), or implement it externally in our device management interface. Having an option to overwrite the flow would solve most of the usability issues above, so instead of developing something specific to our use case, we could build something that everybody could benefit from (since we'll already be investing time in developing it). If we go down the path of developing something internal for our use case, we can at least share our experience of doing so :slight_smile:
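For the externally-implemented variant, a minimal sketch of what the "restore/overwrite" side could look like over the Admin API, reusing the same token handling as the backup sketch earlier in the thread; the host, token source and file name are placeholders.

```javascript
// restore-flows.js - minimal sketch: replace the running configuration with the
// contents of a saved flow file via the Admin API. Assumes Node 18+ for fetch.
const fs = require("fs");

async function restore(base, token, file) {
    const flows = JSON.parse(fs.readFileSync(file, "utf8"));

    // A "full" deployment replaces the active configuration with exactly what
    // is sent - no appending, no duplicated subflows or config nodes.
    const res = await fetch(base + "/flows", {
        method: "POST",
        headers: {
            "Content-Type": "application/json",
            "Authorization": "Bearer " + token,
            "Node-RED-Deployment-Type": "full"
        },
        body: JSON.stringify(flows)
    });
    if (!res.ok) {
        throw new Error("Deploy failed: " + res.status);
    }
}

restore("http://192.168.0.10:1880", process.env.NODE_RED_TOKEN, "flows-backup.json")
    .catch(err => console.error(err));
```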

Thank you all so far for the discussions and inputs!
