Making it easier to work within a team with Node-RED

Hey everyone,

We have been using Node-RED within a team of 4 people on financial systems for almost a year now, and we recently greatly improved our ability to work together without going crazy.

So I'm here to share a few tips and also get some feedback from other teams!

  1. flows.json sucks. Seriously.
    = We managed to solve that using the great lib "Flow Manager", which splits every flow and subflow into its own JSON file. Node-RED should really consider making this official. A single JSON file is pure madness.

  2. Merging changes was impossible. If two people made changes to the same flow, there was no ordering in the JSON file, so without opening the flow and merging manually (which is really hard depending on the flow) it couldn't be done.
    = We overrode the Node-RED method that generates the node ids (in the editor-client) to use ULID, a time-based unique id generator. (We also needed to change the Flow Manager to ensure that saving orders nodes by id... and unfortunately, we also needed to monkey-patch the editor-client, because the importFlows function (used when copy/pasting nodes, or importing) doesn't use RED.nodes.id like the other functions, which is what we override from outside via editorTheme.page.scripts.)
    That has improved our confidence in working with Node-RED by 10x. We can easily merge changes on GitHub now, because nodes never lose their ordering and new nodes are always at the end of the file, so we could finally relax and stop being afraid of PRs.

  3. The Node-RED Themes collection rocks!
    Use it!
    We actually have our own theme that adds CSS to change the tabs to a vertical layout, so it's easier to find them.
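To illustrate why time-ordered ids make merging painless, here is a minimal sketch (this is a hypothetical helper, not the actual Flow Manager patch; the ids below are fake ULID-like strings):

```javascript
// Hypothetical helper: sort a flow's nodes by id before serializing.
// With time-ordered (ULID-style) ids, lexicographic order equals
// creation order, so existing nodes keep their position in the file
// and newly added nodes always land at the end, which keeps git
// diffs small and merges trivial.
function serializeFlow(nodes) {
    const sorted = [...nodes].sort((a, b) => a.id.localeCompare(b.id));
    return JSON.stringify(sorted, null, 4);
}

// Fake ULID-like ids: the timestamp prefix makes the older node sort first.
const nodes = [
    { id: "01J9ZZZZZZZZZZZZZZZZZZZZZZ", type: "debug" },  // created later
    { id: "01J0AAAAAAAAAAAAAAAAAAAAAA", type: "inject" }, // created first
];
console.log(serializeFlow(nodes));
```

However the nodes are ordered in memory, the serialized file always lists them in creation order.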

Next, we will work on adding some observability, message tracking, etc.

So, teams using Node-RED, what are you doing to make your lives easier?

Thanks!

7 Likes

Interesting stuff, thanks for sharing!

1 Like

The idea of time-based IDs for nodes is certainly interesting and could provide useful insight into how a flow was built, I assume the ID isn't updated when
How do these changes impact importing/exporting between your Node-RED environment and another "standard" one?
What about the examples in node packages? Do they still work?

Perhaps this should be re-classified as a node-red enhancement request?

I know that splitting the JSON file has been looked at in the past. Of course, backward compatibility is something that would need extensive testing. But if you already have some workable code, that's a great starting point.

1 Like

Good topic indeed. My thoughts.

As with everything in life, a single JSON file has advantages and disadvantages. It seems really bad for your use case. I guess that a Node-RED application with dozens of JSON files would complicate deployment in many use cases.

Perhaps one thing that is worth improving (as mentioned many times in the past in this forum) is an overhaul of the admin API. I never could understand why the JSON file generated by the import/export in the editor is not the same as the one generated by the API. Anyway, I found my way and occasionally use the admin API for my benefit without hurdles.

Specifically on point #3, I advocated in the past that it would be nice to have a list of flows in a vertical orientation. This is no longer needed, as we can see all tabs vertically (as well as the associated flow id) in the information tab on the right with a single keystroke (ctrl-g i). It is amazingly easy to find tabs, enable and disable them, etc.

Edit: forgot to mention that there is also the "List Flows" action, which helps you navigate the tabs. A single right click brings up a menu with many options to make your life easier.

Hello Sam!

Everything still works because the ID is only used for loading and connecting everything together.

When we first changed the "RED.nodes.id" function to ULID, pasting nodes into new flows would keep the old id on the pasted node. We worked like this for a while until we monkey-patched the importFlows function that exists in the editor client, so pasted nodes now get the new id method too.

If you export a flow using ULID and import it on another instance that doesn't include our id-override method, everything works fine.

Splitting flows is done by this great lib: node-red-contrib-flow-manager (node) - Node-RED

We had to change it to save the JSON files ordered by node id, but everything else remains untouched.
I will post some code below.

Hey Andrei,

IMHO, separate files are way easier to deploy, because you can develop all the flows together and then copy only the flows/subflows you actually want running.
We do that a lot (of course, we needed to create a standard of never linking flows or creating dependencies between them).
But the original behavior is not changed by the Flow Manager plugin. When you deploy, the plugin will create the individual JSON files, but Node-RED will also save flows.json with everything.
When Node-RED starts, if the Flow Manager is enabled, it will try to load all the individual JSON files and then create a new flows.json from them.

1 Like

I'm sending below a sample of the override method for the id function.

We apply it on settings.js using:

    editorTheme: {
        page: {
            scripts: [path.join(__dirname, "path/to/the/script/below.js")]
        },
    },

After you apply this, all new nodes will use the ULID id generation, but pasted nodes or imported flows will keep using the old id method (they will still work fine), because inside the editor-client code, importFlows uses getID directly instead of RED.nodes.id like all the other functions, and we cannot override that from page scripts.
To fix that, we currently monkey-patch the editor-client code (when our Node-RED instance starts, it loads red.js from the node_modules editor-client package, replaces the getID usage, and minifies it again).
Also, to save it ordered, we changed the Flow Manager lib (node-red-contrib-flow-manager (node) - Node-RED) to simply order by the element "id" before saving.

console.log("Initializing id-override");

function createError(message) {
    var err = new Error(message);
    err.source = "ulid";
    return err;
}
// These values should NEVER change. If
// they do, we're no longer making ulids!
var ENCODING = "0123456789ABCDEFGHJKMNPQRSTVWXYZ"; // Crockford's Base32
var ENCODING_LEN = ENCODING.length;
var TIME_MAX = Math.pow(2, 48) - 1;
var TIME_LEN = 10;
var RANDOM_LEN = 16;
function replaceCharAt(str, index, char) {
    if (index > str.length - 1) {
        return str;
    }
    return str.substr(0, index) + char + str.substr(index + 1);
}
function incrementBase32(str) {
    var done = undefined;
    var index = str.length;
    var char = void 0;
    var charIndex = void 0;
    var maxCharIndex = ENCODING_LEN - 1;
    while (!done && index-- >= 0) {
        char = str[index];
        charIndex = ENCODING.indexOf(char);
        if (charIndex === -1) {
            throw createError("incorrectly encoded string");
        }
        if (charIndex === maxCharIndex) {
            str = replaceCharAt(str, index, ENCODING[0]);
            continue;
        }
        done = replaceCharAt(str, index, ENCODING[charIndex + 1]);
    }
    if (typeof done === "string") {
        return done;
    }
    throw createError("cannot increment this string");
}
function randomChar(prng) {
    var rand = Math.floor(prng() * ENCODING_LEN);
    if (rand === ENCODING_LEN) {
        rand = ENCODING_LEN - 1;
    }
    return ENCODING.charAt(rand);
}
function encodeTime(now, len) {
    if (isNaN(now)) {
        throw new Error(now + " must be a number");
    }
    if (now > TIME_MAX) {
        throw createError("cannot encode time greater than " + TIME_MAX);
    }
    if (now < 0) {
        throw createError("time must be positive");
    }
    if (Number.isInteger(now) === false) {
        throw createError("time must be an integer");
    }
    var mod = void 0;
    var str = "";
    for (; len > 0; len--) {
        mod = now % ENCODING_LEN;
        str = ENCODING.charAt(mod) + str;
        now = (now - mod) / ENCODING_LEN;
    }
    return str;
}
function encodeRandom(len, prng) {
    var str = "";
    for (; len > 0; len--) {
        str = randomChar(prng) + str;
    }
    return str;
}
function decodeTime(id) {
    if (id.length !== TIME_LEN + RANDOM_LEN) {
        throw createError("malformed ulid");
    }
    var time = id.substr(0, TIME_LEN).split("").reverse().reduce(function (carry, char, index) {
        var encodingIndex = ENCODING.indexOf(char);
        if (encodingIndex === -1) {
            throw createError("invalid character found: " + char);
        }
        return carry += encodingIndex * Math.pow(ENCODING_LEN, index);
    }, 0);
    if (time > TIME_MAX) {
        throw createError("malformed ulid, timestamp too large");
    }
    return time;
}
function detectPrng() {
    var allowInsecure = arguments.length > 0 && arguments[0] !== undefined ? arguments[0] : false;
    var root = arguments[1];

    if (!root) {
        root = typeof window !== "undefined" ? window : null;
    }
    var browserCrypto = root && (root.crypto || root.msCrypto);
    if (browserCrypto) {
        return function () {
            var buffer = new Uint8Array(1);
            browserCrypto.getRandomValues(buffer);
            return buffer[0] / 0xff;
        };
    } else {
        try {
            var nodeCrypto = require("crypto");
            return function () {
                return nodeCrypto.randomBytes(1).readUInt8() / 0xff;
            };
        } catch (e) {}
    }
    if (allowInsecure) {
        try {
            console.error("secure crypto unusable, falling back to insecure Math.random()!");
        } catch (e) {}
        return function () {
            return Math.random();
        };
    }
    throw createError("secure crypto unusable, insecure Math.random not allowed");
}
function factory(currPrng) {
    if (!currPrng) {
        currPrng = detectPrng();
    }
    return function ulid(seedTime) {
        if (isNaN(seedTime)) {
            seedTime = Date.now();
        }
        return encodeTime(seedTime, TIME_LEN) + encodeRandom(RANDOM_LEN, currPrng);
    };
}
function monotonicFactory(currPrng) {
    if (!currPrng) {
        currPrng = detectPrng();
    }
    var lastTime = 0;
    var lastRandom = void 0;
    return function ulid(seedTime) {
        if (isNaN(seedTime)) {
            seedTime = Date.now();
        }
        if (seedTime <= lastTime) {
            var incrementedRandom = lastRandom = incrementBase32(lastRandom);
            return encodeTime(lastTime, TIME_LEN) + incrementedRandom;
        }
        lastTime = seedTime;
        var newRandom = lastRandom = encodeRandom(RANDOM_LEN, currPrng);
        return encodeTime(seedTime, TIME_LEN) + newRandom;
    };
}
var ulid = factory();

if (RED?.nodes) {
    RED.nodes.id = function () {
        return ulid();
    };

    console.log("Id function overridden!");
} else {
    console.log("RED.nodes not found.");
}
1 Like

Thanks, I liked the idea. I totally agree with the split for team use. That said, I think a single-person operation would be easier with one single JSON file. But it's a clever solution. By the way @miziagui, just as a curiosity: you are in finance, so for what purposes do you use NR? Tks.

I don't have a strong opinion on this, but I think that introducing features and tools that support team development in commercial settings should be balanced against adding complexity and learning requirements for the individual, hobby-level user.

3 Likes

Hey Fabios!

Initially we started using Node-RED for integration purposes. Financial systems usually involve lots of integration, and we used NR for that.
But later, we started to use it to manage our event-driven approach, in a low-code-platform way.
For us, every flow in Node-RED is like a "service" (avoiding incorrectly using the word micro-service), and its input and output happen only through messaging (currently, NATS JetStream).
There are no dependencies between flows; all the communication between them goes through NATS and may be processed by any instance in the cluster.
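As a toy illustration of the pattern (an in-memory stand-in, not NATS JetStream; the subject names are made up):

```javascript
// Toy in-memory bus standing in for NATS: each "flow" only subscribes
// to subjects and publishes results. Flows never reference each other
// directly, so any instance in a cluster could pick up the work.
class Bus {
    constructor() { this.subs = new Map(); }
    subscribe(subject, handler) {
        if (!this.subs.has(subject)) this.subs.set(subject, []);
        this.subs.get(subject).push(handler);
    }
    publish(subject, msg) {
        for (const h of this.subs.get(subject) || []) h(msg);
    }
}

const bus = new Bus();
const validated = [];

// "validation flow": consumes invoice.created, emits invoice.validated
bus.subscribe("invoice.created", m =>
    bus.publish("invoice.validated", { ...m, ok: true }));
// downstream consumer flow
bus.subscribe("invoice.validated", m => validated.push(m));

bus.publish("invoice.created", { id: 42 });
```

The two "flows" here share nothing but subject names, which is what allows them to be deployed and scaled independently.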

2 Likes

I do understand your point, but there was a lot of effort put into Node-RED, and I think the platform can aspire to more than just hobbyist usage... it's actually already used for a lot more. We are not the only financial team using NR in our country... I know a few of them.
For that, it needs tooling.

The only thing I talked about that could change the way of doing things is splitting flows. But that actually makes things easier, as I said above, because you can choose what you want to deploy... and honestly, deploying a single JSON file or a folder of JSON files is about the same effort.
But again, even with splitting turned on, you can still deploy only the flows.json if you want! That's what makes that library really valuable. Btw, the Flow Manager has been around for a few years already.

1 Like

Agreed. If I understand correctly, NR was developed at IBM for serious internal use. I doubt it ever "aspired" to gain as much hobbyist support as it has.

As I said, I think your particular suggestion has merit, and it has the advantage of not forcing new or non-professional users to learn tools or techniques that go beyond their needs. I was simply pointing out that accessibility and ease of use have always been strengths of NR and should not be sacrificed for the sake of productivity.

1 Like

Node-RED is already very widely used in commercial settings running production workloads. It is already far more than a hobbyist tool. There's a long list of companies who use Node-RED as part of the products or services listed on nodered.org.

This isn't to take anything away from your feedback - just setting the record straight.

In terms of your specific feedback, splitting the flow file up has been on the backlog for a long time. It's just a matter of designing it properly and implementing it. In the absence of someone choosing to contribute to help get it done, it sits in the backlog alongside lots of other things. But it is one of those items that is nearing the top of the list... so its time may be coming.

Using a time-based hash for node IDs is certainly an interesting idea - particularly when combined with the sorting you do.

I'm always open to ideas around improving the team collaboration features in Node-RED. It's come a long way since the early days, but I certainly recognise there's more to do. Real-world feedback like yours is really useful.

7 Likes

Thanks for the reply @miziagui, when I heard about NATS it didn't support mqtt. But now it does :star_struck:

1 Like

Yes it does. But we don't use it with MQTT, because we need more features from JetStream.
So we created a new NATS node with JetStream support. When we finish this first version, we will release it as open source.

1 Like

Fantastic @miziagui, thanks for that. Now, @knolleary has left you in front of the goalpost to implement the flow file split; just touch the ball and score the goal!

Using NR for financial systems sounds very scary to me. Please tell me you are not using it to process actual financial transactions. NR works in memory. What happens to a financial transaction if you lose power or the system crashes? Do transactions get lost or possibly duplicated? For financial transactions I would look at something like Camunda, where flows are backed by a database and have ACID properties.

I am not familiar with Camunda, but for data integrity Apache NiFi shouldn't be overlooked; it seems to have similar features, although NiFi is geared towards data-loss prevention.