Our production environment consists of a Raspberry Pi running Docker with Node-RED. It uses the BACnet node (node-red-contrib-bacnet), which seems to have a strict requirement: the Docker container must run with the host network driver (Host network driver | Docker Docs). This doesn't work on Windows, so how can we run this Node-RED container in a Windows dev environment?
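For reference, the production setup boils down to something like this minimal compose sketch (image tag and paths are illustrative, not our exact file); the network_mode: host line is the part Docker Desktop on Windows can't honour:

```yaml
# Minimal sketch of the Linux/Pi setup (illustrative, not the real file)
services:
  nodered:
    image: nodered/node-red:latest
    network_mode: host   # required for the BACnet node; Linux-only, not supported on Docker Desktop for Windows
    volumes:
      - ./data:/data     # flows, settings.js, installed nodes
    restart: unless-stopped
```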
I'm also curious why BACnet has this limitation. We can work with all sorts of client connections in Node-RED without this problem, for example MQTT, HTTP/REST APIs, databases, Modbus, or anything else really.
I don't. I install what is necessary. However, it sounds like you want a means of managing remote instances, in which case I would use FlowFuse to orchestrate Node-RED via its Device Agent. It would have the necessary node packages assigned and any connection details set via environment variables, and I would deploy using a FlowFuse pipeline. I get seamless interconnectivity between Node-RED instances and one place to monitor and access them all (even the Node-RED editor, via a secure tunnel, into the DMZ, all outbound, i.e. without poking holes in the firewall).
Personal reply (not an official FlowFuse response)
There are so many benefits and time-saving features (automatic snapshots, remote deployments, technical support, role-based access, audit logs, Node-RED log access, device groups, pipelines, built-in security, compliance, etc.), with new features pretty much every week. This all requires time, effort, experience, manpower (and ultimately money).
Degrees of multiplayer are already released in the latest Node-RED version, and it is being actively developed (by Nick, CTO of FlowFuse). This will fuse seamlessly with FlowFuse once fully developed.
I don't doubt FlowFuse is a great product. But $35 per instance per month is ridiculously expensive. And when you say multiplayer, we have experience with this and I don't like it. It works okay-ish, but it is a constant hassle with the merge popups.
The thing with a custom Docker setup is this:
- Easy onboarding (docker compose up) to get Node-RED and all related dependency services up and running, for example a database and migration tools (see the sketch after this list).
- Support for .env variables.
- Pre-configured: all packages and settings are set up automatically.
- Git version control.
- A separate, individual dev environment: when you work, you work on your own private instance, ideally without affecting the production environment in any way (we haven't gotten this far yet). Which means absolutely no disturbances from anyone else; you can add or delete anything without consequence.
- Everyone has exactly the same environment running.
- Scales for free: add any number of devices without cost.
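To make the onboarding point concrete, here is a rough sketch of what such a compose file might look like (the service names, the Postgres image and the env variable names are examples, not our actual setup):

```yaml
# Hypothetical dev-environment compose file: one `docker compose up`
# brings up Node-RED and its dependency services, configured via .env
services:
  nodered:
    image: nodered/node-red:latest
    env_file: .env            # e.g. DB_HOST, DB_PASSWORD (example names)
    ports:
      - "1880:1880"
    volumes:
      - ./data:/data          # pre-configured flows, settings and packages
  db:
    image: postgres:16
    env_file: .env
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```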
Installing what's necessary is pretty much the opposite of easy onboarding. It's a manual process of installing an arbitrary number of packages: you have to find out which ones are needed, then search for and install them one by one. With Docker, you can switch between devices without ever worrying about which packages are needed.
I enjoy discussing the pros and cons of different setups, but if you are employed by FlowFuse or work as a salesman, I don't want to argue against that bias.
Back to BACnet, I appreciate your knowledge. So it operates on layer 2 (MAC addresses)? Not on IP or port?
That was my personal one-off instance approach, which is FAR easier than setting up and learning Docker, any day of the week.
The professional approach I suggested was FlowFuse (the company I now work for). I am also a maintainer of the Node-RED project, and I have 25 years of automotive and industrial controls integration and programming experience on most PLCs and networks.
As mentioned above, I work for FlowFuse (senior software engineer). Good luck in your endeavour.
Onboarding doesn't require learning Docker. It's just running docker compose up; it's in the readme. I don't think there is any easier approach than running one command to get the entire environment up locally, including the DB container or anything else. For me personally, learning Docker and setting it up was useful and a big plus in a software engineering career.
I don't know much about BACnet, but from what I can see, you specify an IP address, so that's already above OSI layer 2? It sounds just like any other client, yet it has this restriction.
In this video, they also specify the client IP address:
I'm not sure why that is needed. Normally you don't have to specify a client IP address to set up connections.
Something else related to BACnet: when running it unsuccessfully in Docker on Windows, of course no connection was made. However, the BACnet nodes didn't produce any error messages. How is error handling supposed to be handled? Not producing error messages is going to be painful to work around. Do we need to send a duplicate message in parallel, then wait for it to be joined (if successful) or continue without the join (if not successful)?
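For what it's worth, one way to approximate that pattern is a small watchdog in a function node rather than a join. This is only a sketch of the idea (the msg.arm property, the topic keying and the 5-second timeout are made up for illustration, not something the BACnet node provides): wire a copy of each request into the function node with msg.arm set (e.g. via a change node), and wire the BACnet node's output into it as well.

```javascript
// Watchdog sketch for nodes that fail silently (illustrative only).
// Armed requests start a timer; a reply from the BACnet node cancels it.
// If no reply arrives in time, an explicit error message is emitted.
const TIMEOUT_MS = 5000;                      // illustrative timeout
const timers = context.get('timers') || {};   // pending timers, keyed by topic

if (msg.arm) {
    // Copy of the outgoing request: (re)start a timer for this topic.
    if (timers[msg.topic]) clearTimeout(timers[msg.topic]);
    timers[msg.topic] = setTimeout(() => {
        node.send({ topic: msg.topic, payload: null, error: 'BACnet read timed out' });
    }, TIMEOUT_MS);
    context.set('timers', timers);
    return null;                              // nothing to forward yet
}

// Reply from the BACnet node: cancel the pending timer and pass it on.
if (timers[msg.topic]) {
    clearTimeout(timers[msg.topic]);
    delete timers[msg.topic];
    context.set('timers', timers);
}
return msg;
```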
Edit: BACnet/IP does indeed rely on broadcasts (UDP broadcasts, by default on port 47808), which are carried as layer-2 broadcast frames and don't cross routers, so the client needs to be on the same subnet.
One solution seems to be to add a BBMD (BACnet/IP Broadcast Management Device), which forwards broadcasts between subnets.
Source:
This may be a complete red herring, but this article may give you some clues as well: Homebridge, Docker, and Wake-on-Lan | Dev With Imagination
Whether the configuration then breaks your current ease of setup is another matter. It will be interesting to find out.
I am not sure if this may help to fix the problem. I am running NR on a Windows machine, but since the container is Linux it's more like a WSL Docker emulation, I would say. My devices are in the same network as the host and, as you already noticed, there is no way to change the network for the container so that it runs in the same network as the host. Since I don't use fixed IP addresses but only MAC addresses to define the devices in the flow, I run an ARP ping on the Windows host and forward the network information (MACs and IPs, as a JSON list) to the Docker container to make the network and the devices known to the NR container. I also use this .wslconfig to bridge the WireGuard container and make the host network reachable while using the VPN IP:
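In case it helps others reading along, the relevant .wslconfig setting is roughly of this shape (a hypothetical sketch, not necessarily the poster's actual file; mirrored networking requires Windows 11 22H2+ and WSL 2.0+):

```ini
; %UserProfile%\.wslconfig (hypothetical example, not the poster's file)
[wsl2]
; Mirrored networking makes the WSL VM share the host's network interfaces,
; so containers can see the same subnet as the Windows host.
networkingMode=mirrored
```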