Passing Data from Node-RED to a Node.js Script

Hi All,

Our Node-RED application is getting rather large and becoming hard to scale. In short, we pull data from a PLC using an OPC connection, parse it out, and then send it to a cloud platform, and we have been slowly incorporating MongoDB. The flow is attached to help make sense of it.

We want to slowly move away from Node-RED and handle the OPC connection, the parsing of data, and the sending of it to its respective destinations from a Node.js script running locally on the machine. Since translating our entire flow would be quite the task, we were going to do it in small chunks, the first chunk being the MongoDB connection.

I did some Google searches and I might just be using the wrong verbiage, but we're basically looking for a way to send data from Node-RED into the Node.js script we're running locally. So after the variables are parsed out and separated individually to be sent to the cloud, they'd also somehow be sent to this Node.js script, which would then read from or write to our MongoDB based on the data it receives. One easy way I can think of would be to send the data to the cloud, pull it back down, and talk to MongoDB that way, but I was wondering if there's a node package, or a way in a function node or HTTP request, to do it directly.

Hope this makes sense...it might not lol.

Thanks!
BGX_NODE-RED_12-30_MONGO.json (374.1 KB)

Are you running that script as a separate daemon or can you run it as a module within another app?

If the second, you are golden, because you can simply embed it into Node-RED. Just make your script into a Node.js module, and you can expose it via a `require` in the `functionGlobalContext` section of settings.js. Then it can be called from within a function node after doing a `const modulename = global.get('modulename')`.

If you already run it as a separate daemon service, you should add an API interface to facilitate the exchange of data. This could use UNIX pipes, UDP, TCP, HTTP, WebSockets, or maybe even MQTT, all of which are available from within Node-RED.


Thank you for your response. As of now we run it manually as a separate service, just in the testing phase. If this file (or various different files, each querying or writing different pieces to MongoDB) is running on the same machine as Node-RED, is this something an exec node could do? Calling the file to run when needed, getting the response, and shooting it back into Node-RED? Bear with me if this is a dumb question or just doesn't make sense, as I am pretty new to Node.js. I follow the API portion; I'd just have to figure out how to do that.

Of course. But that isn't really helping you with your original stated problem, is it? Nor will it necessarily be very performant. It will also quite severely limit what data you can exchange.

There are very good reasons for using Node-RED to prototype applications that then get migrated to something more specific and performant.

If you want a gradual transition, the best approach may well be to set up a simple Node.js service (or .NET, or whatever your organisation's preferred tooling is) with a simple REST interface (or one of the many other interface types that Node-RED supports; choose one that supports the ongoing development of the new service). Then you can use that interface and (re)develop it over time as you move more parts of the service workload to the live service.

The live service doesn't have to take on the whole workload straight away.

The live service can then be configured to run under its own user/group permissions via systemd (assuming a Linux host), which will give you system-level reporting, the ability to properly monitor it, auto-recovery/restart, and lots of other good features.
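For reference, a systemd unit for a service like this might look roughly as follows. The unit name, user, and paths are all hypothetical placeholders:

```ini
# /etc/systemd/system/plc-bridge.service  (name and paths are assumptions)
[Unit]
Description=PLC data bridge service
After=network.target

[Service]
Type=simple
User=plcbridge
Group=plcbridge
ExecStart=/usr/bin/node /opt/plc-bridge/index.js
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

You would enable it with `sudo systemctl enable --now plc-bridge` and read its log with `journalctl -u plc-bridge`.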

That makes sense; I appreciate your response. Could you please elaborate on the following statement? I believe I am following, but after some quick googling I am still slightly confused.

If you want a gradual transition, the best approach may well be to set up a simple Node.js service (or .NET, or whatever your organisation's preferred tooling is) with a simple REST interface (or one of the many other interface types that Node-RED supports; choose one that supports the ongoing development of the new service). Then you can use that interface and (re)develop it over time as you move more parts of the service workload to the live service.

Apologies if any of this is not at the right level, I'm not sure what you know.

So Node.js is good at setting up services, i.e. applications that run continuously; Node-RED itself is a good example, of course. But you could use a different platform if you have people around who know something different: Python, for example, or .NET (typically only on Windows). The point is that you will now create INTERFACES to that application for the exchange of data (APIs).

The most common API approach right now is REST, which is basically web endpoints: like web pages, but for machines to exchange information. These are simple to set up, pretty robust, and reasonably performant. Node.js applications typically use a library called ExpressJS to provide web services (that's what Node-RED itself uses).

But there are loads of other ways to exchange data between two services, and Node-RED supports most of them right out of the box. So you want to choose an interface type that you can use over and over again as your application grows and matures. From what you've said, you'll eventually be using OPC and MongoDB at least, each of which has its own interface too, but these are more product-specific and not so good at exchanging general data. Though you might be able to use MongoDB as a common interface... but now I'm digressing, let's keep it simple.

The basic "live" app then starts life as a Node.js app with ExpressJS configured and listening. Setting up something like that as a service is covered in many articles and easy enough to learn. systemd is the core service orchestrator for most Linux OSes, and it will take care of making sure your app is always running and will provide a logging interface. With Node.js apps, you can use simple console.log statements to output to the log and, over time, replace them with something more standardised and robust like the Winston logging module.

Now you have a platform you can work with. First, set up a POST endpoint using ExpressJS in your app. Just set it to dump any data it receives to the application's log. Then use an http-request node in Node-RED and send some data over; you should see your app's log update.

Then you are in a position to start taking parts of your flows that represent a sensible chunk of logic/processing and translating them to plain Node.js-flavoured JavaScript. Create each chunk as a function (or a class, if getting fancy) in your app, then add a new endpoint that takes in the data from Node-RED (using a POST method, normally). If you need to return some data, that simply becomes the response from the endpoint (typically a JSON response).
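As a tiny, hypothetical illustration of that translation step (the function name, the scaling factor, and the data shape are all invented):

```javascript
// Original function-node body (runs inside Node-RED):
//
//   msg.payload = { celsius: msg.payload.raw / 10, at: Date.now() };
//   return msg;
//
// The same logic as a standalone function in the new service.
// It takes plain data instead of a msg object, and whatever it
// returns becomes the JSON response of the endpoint that calls it.
function scaleReading(data) {
    return { celsius: data.raw / 10, at: Date.now() };
}

module.exports = { scaleReading };

console.log(scaleReading({ raw: 215 }).celsius); // prints 21.5
```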

It is probably easier to do than to explain in words.

Does that help?


Ahh ok, yes, that is a huge help. I appreciate you breaking it down into simple terms; that all makes sense. Now it's just a matter of making it happen! Thanks again!
