Goal - export flows plus associated bits

... associated bits like config nodes (and their credentials), sub-flows, required packages, etc.

First I thought I should learn how to use the API and build some routines that walk a given flow, identify its config nodes, sub-flows and packages, and start with a list/report of what won't be copied by a plain export.

I've looked at the flow JSON structure, the awkward settings.js file, the package.json file, and the .config file. It seems that by cross-referencing these files I could work out what's missing.
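
For example, I imagine the cross-reference looking something like this rough, untested sketch. It assumes a Linux-style default userDir of ~/.node-red, a flow file named flows.json, and the older single .config file (newer releases split that into .config.nodes.json) - adjust the file names for your install:

```javascript
// Rough sketch only: cross-reference the flow file against the runtime's node
// registry to see which node types are provided by which installed module.
const fs = require('fs');
const path = require('path');

const userDir = path.join(process.env.HOME || '', '.node-red');
const flows = JSON.parse(fs.readFileSync(path.join(userDir, 'flows.json'), 'utf8'));
const config = JSON.parse(fs.readFileSync(path.join(userDir, '.config.json'), 'utf8'));

// Build a node-type -> module lookup from the registry section of .config.json
const typeToModule = {};
for (const [moduleName, moduleInfo] of Object.entries(config.nodes || {})) {
  for (const nodeSet of Object.values(moduleInfo.nodes || {})) {
    for (const type of nodeSet.types || []) {
      typeToModule[type] = moduleName;
    }
  }
}

// Report every type used in the flow file and which module (if any) provides it
const report = {};
for (const node of flows) {
  if (node.type === 'tab' || node.type.startsWith('subflow')) continue; // structural entries
  report[node.type] = typeToModule[node.type] || 'not in the registry';
}
console.table(report);
```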

Questions:

    • Can someone suggest a way to start? What exists? What am I missing?
    • Why doesn't a flow export cover things like missing npm installs?
    • How about including things defined in functionGlobalContext?
    • Can the API be accessed, for example the Admin API method GET /settings, from within a function node in a flow? If so, is there an example? (One possible approach is sketched just after this list.)
    • Is this a fool's errand? :wink:
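
To make that Admin API question concrete, this is the kind of thing I'm imagining pasting into a function node. It's untested and assumes the default port 1880, httpAdminRoot left at "/", no adminAuth, and that axios has been exposed to function nodes via functionGlobalContext in settings.js:

```javascript
// Function node body (sketch). Prerequisite assumed: settings.js contains
//   functionGlobalContext: { axios: require('axios') }
// and axios is installed in the userDir. With adminAuth enabled you would
// also need to send a bearer token with the request.
const axios = global.get('axios');

axios.get('http://localhost:1880/settings')   // Admin API: GET /settings
    .then(res => {
        msg.payload = res.data;               // the runtime settings object
        node.send(msg);
        node.done();
    })
    .catch(err => node.error(err, msg));

return; // the message is sent asynchronously above
```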

I think at some point I'd like to find a way of updating a flow and its parts via Ansible and/or some kind of continuous deployment method.

Thanks for any tips.

Pretty much everything in your userDir folder is needed other than the node_modules folder.

Actually quite a difficult ask. Not impossible, but I expect there are simply a great many things above this on the to-do list. Perhaps a suggestion to Nick and some supplied code might be welcome; you'll need to get input from him.

The installs are defined, like any other Node.js application, in the package.json file. Currently, though, I don't think there is a direct link from the node name in the flows file to the package that installed it, so you'd probably have to trawl through every node module's package.json to find all the data. Otherwise, I suspect a change to core would be required to capture the node/module mapping table when installing a new node - but even that would be hard, since you can install node modules manually, not just from within Node-RED, so NR would need to diff the installed nodes and modules whenever it started up. That seems like a lot of work just to make a flow export a bit nicer?
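
The trawling itself is simple enough. Something like the sketch below (illustrative only; it assumes a default userDir and skips scoped @scope/... packages) would list every installed node module and its version - but it still doesn't tell you which module a given node type in a flow came from without the registry data:

```javascript
// Illustrative: walk userDir/node_modules and list every package that declares
// a "node-red" section in its package.json, i.e. every installed node module.
const fs = require('fs');
const path = require('path');

const modulesDir = path.join(process.env.HOME || '', '.node-red', 'node_modules');
const installed = {};

for (const entry of fs.readdirSync(modulesDir)) {
  const pkgFile = path.join(modulesDir, entry, 'package.json');
  if (!fs.existsSync(pkgFile)) continue;               // skips @scope dirs, .bin, etc.
  const pkg = JSON.parse(fs.readFileSync(pkgFile, 'utf8'));
  if (pkg['node-red']) {
    installed[pkg.name] = pkg.version;                 // module name -> version
  }
}

console.log(JSON.stringify(installed, null, 2));
```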

How would you differentiate between what was needed for an export and what isn't?

No. But it might not be the best approach, depending on what you are really trying to achieve. If all you want is to drive updates via a DevOps process, then replacing everything that has changed is possible right now (it needs a restart of NR, of course). Doing it by, let's say, sending just bits of a subflow probably requires the pluggable stuff that is currently being worked on?

Thank you, Julian, for responding generously to my many questions. It helps me understand my challenge. I expect to deploy to multiple systems that have different sets of flows, and it will be an ongoing process. That's why I am looking at it through a DevOps lens.

I've never found a reference for the various data elements in the flow JSON and had just assumed there was a path in one of the id values (id or z) that pointed back to the source nodes. That does make it more difficult. I see now that the z value seems to be the same for all the nodes in a flow; I assume that's the flow ID.
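
For my own notes, a minimal export seems to look like this (ids are invented): each node has its own id, the z property points at the containing tab's id, and nothing in the export names the npm module behind a given type.

```javascript
// Illustrative flow export (ids made up). All nodes on one tab share the same z,
// which is the id of that tab ("flow"); no field links a "type" to its npm module.
const exportedFlow = [
  { id: "a1b2c3d4e5f6", type: "tab",    label: "Flow 1" },
  { id: "n1a2b3c4d5e6", type: "inject", z: "a1b2c3d4e5f6", wires: [["n2f6e5d4c3b2"]] },
  { id: "n2f6e5d4c3b2", type: "debug",  z: "a1b2c3d4e5f6", wires: [] }
];
```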

Your summary of the fool's errand is great. The goal is to remove as much variability and manual tweaking of deliverable flows as possible. Deploying an updated flow with a new node to 20 Node-RED instances three times a day will get very tedious. Restarting NR on a regular basis would likely mean adding RabbitMQ or some other store-and-forward mechanism.

So, you've inspired many thoughts. Thanks.

I think there is much that you can do. You probably want to simply standardise the Node-RED palette across all instances and have a change process that lets people request new nodes to be added. Not as flexible, but certainly more controlled, and it would let you drive installations from Ansible.

I'd need to know more about the device config - whether Windows or Linux for example. If your users are running NR via task-manager (Windows) or systemd, then you can simply use remote OS features to restart remotely. (PowerShell or SSH/BASH scripts for example).

Personally, I'd use the Projects feature that lets me develop with git integration locally and build whatever CI/CD pipeline I want off the back of the git repository.

The project includes a package.json that lists the node dependencies for the project and can be edited from within the editor (currently a manual task, but better than nothing). I can then commit changes and push to a remote git repo to kick off whatever CI/CD pipeline I want. The trick is not to run the Projects feature on the deployment targets - they run as 'standalone' instances, using the flow files and package.json from the project.
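
Roughly, a project's package.json ends up looking like this - the module names are just examples, and it's shown here as an annotated object even though the real file is plain JSON:

```javascript
// Example project package.json content (illustrative). The dependencies section
// pins the extra nodes each deployment target needs to install; the node-red
// section records which flow and credential files the project uses.
const projectPackage = {
  "name": "my-node-red-project",
  "version": "0.0.1",
  "dependencies": {
    "node-red-dashboard": "3.x",      // example extra nodes the flows rely on
    "node-red-node-email": "1.x"
  },
  "node-red": {
    "settings": {
      "flowFile": "flows.json",
      "credentialsFile": "flows_cred.json"
    }
  }
};
```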

I wrote up how you can do this on my blog a couple of years ago - it uses the IBM Cloud as a deployment target, but that's just an example. All of the basic principles remain the same.


Node-RED would be running in a Linux environment. The sources sending data in can't be relied on to resend it if the system goes down, even for a few seconds. That's why I was thinking a front-end queue might help mitigate data loss.

I am tending towards an Ansible solution.

Thanks.

Nick,
Excellent article. It will serve as a guide for future CI development.

Thank you,
Chris.
