Advice on deploying apps with hardware specific nodes on different platforms

Looking for advice on how to organise an application I've written.

The app has two components. The first is a monitor that controls the fermentation of beer: it senses the temperature and turns solenoids on and off to chill when necessary. The monitor writes sensor values to a database and reads back the temperature setpoints and other data about the tank contents to make chilling decisions. There are several of these monitors in the brewery, each controlling up to a dozen tanks.

The second part is the database and web server which displays the information in the database as raw numbers and as charts and allows the user to set parameters. It also interfaces with other systems in the brewery. I want one of these to bring together the monitors in one place.

The two components are currently in one app running on RPi but I'd like to be able to run the database component on more substantial hardware, potentially in the cloud. The two parts share a fair amount of code so I'd ideally like to have it as one codebase with an environment variable that controlled what "mode" the app was running in. Most of the shared code is for generating HTML pages to display data (right now my app uses plain HTML pages but I am looking to move it to UI Builder).

The problem is that the code uses RPi-specific nodes that can't be installed on non-RPi hardware, so the app won't deploy. Of course I could just use an RPi for the database/web server component, which is the direction I'm headed if I can't solve this problem.

So is there any way to...

  1. Allow the code to be deployed with certain nodes missing? (that will never be executed)
  2. Share Node-Red flows between two apps and keep both in sync if changed?

Ideally I'd like to stay away from creating custom nodes.

Why not break it into two flows and use MQTT to pass the data from the RPi flow that's reading the sensors to a flow on another device that would deal with the database?

The issue isn't passing the information; I'm using a database for that with no problems.

Problem is the two flows share code and I don't want to constantly be updating flows in two different codebases.

I'm just thinking out loud here, so bear with me.
To do this, you would have to isolate the common code. You could put the common code in a separate tab. Then the issue is connecting the inputs into that tab and the output from that tab (common code) to wherever it has to go.

You could use link-in and link-out nodes but the wire connections on the two different devices would not match up.

You could use a local MQTT broker on each device. Then use that to pass the msgs in and out of the common code tab.

But you would still have to export that tab from one device and import it into the other device and remove the old version of the tab.

The issue is that all the code in the flow is in one file. When you do a deploy, all the nodes and tabs are written to the flow.json file. There is - currently - nothing like an 'include' to include a common flow chunk which would be in a json file.

One more thought. What if you took the common code out and put it on a separate device, and used MQTT to send the data from all the places that use the common code? Each MQTT topic would contain the device the msg came from.
topic device 1: commoncode/input/device1
topic device 2: commoncode/input/device2
topic device 3: commoncode/input/device3
The common code mqtt-in node would subscribe to 'commoncode/input/#', causing it to get the msgs from all devices. At the end of processing you would publish the output using a topic 'commoncode/output/device1' etc., and device1 would have an mqtt-in node subscribing to 'commoncode/output/device1', getting only its data back.

This way, you would only change the code in one place and all the devices would use the single copy.
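To make the topic routing above concrete, here's a minimal sketch in plain JavaScript. The topic names come from the scheme in this post; the helper function names themselves are hypothetical, not from any existing flow:

```javascript
// Build the per-device input topic that each monitor publishes on.
function inputTopic(device) {
    return "commoncode/input/" + device;
}

// Build the per-device output topic the common-code flow replies on.
function outputTopic(device) {
    return "commoncode/output/" + device;
}

// The common-code flow subscribes to "commoncode/input/#"; for each
// message it recovers the originating device from the last topic
// segment so the result can go back to the right device.
function deviceFromTopic(topic) {
    var parts = topic.split("/");
    return parts[parts.length - 1];
}

// Example round trip for device1:
var t = inputTopic("device1");               // "commoncode/input/device1"
var reply = outputTopic(deviceFromTopic(t)); // "commoncode/output/device1"
```

In a flow, the mqtt-in node on the common-code device would feed `msg.topic` through something like `deviceFromTopic()` before publishing the reply.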

p.s. What kinds of beer do you brew, and are they good? :grin:

Hi Paul, check out our website. I'd gladly buy you a beer, but it might be hard to get to our taproom at the moment :grin:

The common code is for rendering html and it uses quite a bit of state information so I don't think MQTT would work well. I think the best solution is to isolate the common code in tabs and just go through an import/remove when code is updated which is a bit clumsy but should work.

Alternatively, I'm testing whether an RPi will do the job that I was going to put on the server, which means I can just use source control and keep the entire project together.


Yeah especially since I’m in the US. Have a beer for me, enjoy it and tell me how it was so I can dream about it :grinning:

Can you be a bit more specific here? Are you talking about custom nodes you have developed or the "official" node-red-node-pi-gpio nodes? The latter can be installed, configured, and deployed on non-RPi hardware; they just will not execute. In that case you could do just what you propose: install identical flows on both machines and use an environment variable to determine which segments run on each machine. If you are talking about your own nodes, you may still be able to find out how the NR developers do it and steal their tricks.
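As a minimal sketch of the environment-variable idea (the variable name `APP_MODE` and the two mode names are assumptions, not anything from this thread):

```javascript
// Decide which "mode" this instance runs in from an environment variable.
// "monitor" = RPi tank controller, "server" = database/web server.
function appMode(env) {
    var mode = (env.APP_MODE || "monitor").toLowerCase();
    if (mode !== "monitor" && mode !== "server") {
        throw new Error("Unknown APP_MODE: " + mode);
    }
    return mode;
}

// A switch or function node could use this to drop messages headed for
// hardware-only flow segments when running in server mode.
function shouldRunHardwareNodes(env) {
    return appMode(env) === "monitor";
}

// e.g. in a function node:
//   if (!shouldRunHardwareNodes(process.env)) return null;
```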

Or do you mean that the devices all have identical flow files, which you want to synchronise across the devices?

I'm using some vendor supplied nodes, specifically ones from Sequent Microsystems for their RTD temperature measurement card (node-red-contrib-sm-rtd) and relay card (node-red-contrib-sm-8relind).
[rtd-rpi/node-red-contrib-sm-rtd at master · SequentMicrosystems/rtd-rpi · GitHub]

If I could get these to build on the server (it complains about not having RPI I2C libraries) then that would be the perfect solution.

@Colin I'm using projects so I can deploy flows okay across devices. I'm really looking for a solution like @drmibell suggested where I can deploy RPI specific nodes on non-RPI hardware.

I'm afraid I don't have time to research this for you, but I have a few suggestions. You might be able to patch the nodes yourself and submit a PR to the developer or, if not, at least raise an issue on GitHub. Several of the supported (non-core) RPi nodes have been modified to load on other platforms. See here and here. By checking the environment, these nodes are able to register with the runtime even if they cannot execute. The developer ought to have an interest in having his nodes available for developing and debugging flows on non-RPi platforms.

Thanks @drmibell for the pointers. I'll do a bit more digging.

I couldn't see the relevance of your first link, but the second link has some code that checks for "BCM" in /proc/cpuinfo - I'm assuming every RPi has a BCM**** processor? This looks easy to implement.

    var fs = require('fs');
    // unlikely if not on a Pi
    try {
        var cpuinfo = fs.readFileSync("/proc/cpuinfo").toString();
        if (cpuinfo.indexOf(": BCM") === -1) { throw "Info : "+RED._("rpi-gpio.errors.ignorenode"); }
    } catch(err) {
        throw "Info : "+RED._("rpi-gpio.errors.ignorenode");
    }

So throwing

    throw "Info : "+RED._("rpi-gpio.errors.ignorenode");

makes the runtime disable that node?

Is this how the magic is being done in the official RPI nodes? I had a brief look at node-red-nodes/hardware/PiGpio at master · node-red/node-red-nodes · GitHub but it wasn't immediately apparent where it was checking or what it was doing.

Apologies, I'm pretty green on how nodes are constructed. Thanks for your time.

The relevant bit of the first link is at the bottom of the page under Node Updates:

  • the Pi-specific GPIO nodes are now available on all platforms - however they only do something when running on a Pi. This makes it easier to view/edit flows on your laptop that are destined for a Pi.

The GPIO nodes do their magic here by running a shell script that runs a python script that returns an error if the platform is wrong. (As far as I understand it, having reached the limit of my understanding.) Exactly what causes the runtime to disable the node is a mystery to me too.

Well, the magic is the else block after that test - node-red-nodes/36-rpi-gpio.js at 1f2a25637babb274ea40f6060c9a92e06edea3d3 · node-red/node-red-nodes · GitHub - where, rather than running the Python script that actually handles the IO, it just shows a dummy status instead. Obviously nodes don't really need to go that far and can just not register any handler for messages - or not load up at all - that's up to the author. However, if they "require" external libraries it may be slightly more involved, as they need to avoid loading those up front if they can help it, so it does require some thought.


Thanks Dave -- I'm starting to catch on here. Aside from lack of elegance, are there reasons (at least in this case) not to just try to load the library and trap the error?

That is certainly a way to do it, and indeed may be as good a test as checking the processor etc.
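For what it's worth, the load-and-trap idea might look something like this (`some-rpi-only-lib` is a placeholder, not a real package):

```javascript
// Try to require a platform-specific library; return null instead of
// crashing if it is missing or failed to build on this platform.
function tryRequire(name) {
    try {
        return require(name);
    } catch (err) {
        return null;
    }
}

var i2c = tryRequire("some-rpi-only-lib");
if (i2c === null) {
    // register the node anyway, but disable the parts that need hardware
}
```

This avoids any cpuinfo sniffing: the test is simply "did the native library load here or not".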


@blunoz I have done a fork of node-red-contrib-sm-rtd that should install and deploy on a non-RPi host. Since I do not have the Sequent Microsystems hardware, I can't test that it still works properly on a Pi. If it checks out for you, I could do a PR and possibly work on the relay card nodes.

You are a legend Mike! Thanks so much, I’ll give it a go and let you know how it works out.

@blunoz have you had a chance to test this? The link I posted is not quite right, since you want the "non-rpi" branch of this fork.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.