This does not need to be on any future todo list... everything you need is already in node-red. However, you are asking a lower-level question: how can one computer execute a command on another computer and see the results...
You really only have two choices, either
- computer A connects to computer B through ssh, runs the command remotely, and captures the output results directly, or
- computer A sends a msg to computer B, which executes a pre-determined command, and sends the results back to computer A
The first solution requires you to exchange public/private keys and establish the logins correctly between the two computers. This is not node-red specific -- IT people have been working this way for decades, but it requires shell access on both machines and some knowledge of the OSes involved.
The second solution relies on having node-red running on both computers: A (the master) and B (the slave). Computer B listens for outside msgs through any of the supported protocols (http, tcp, udp, websocket, mqtt, etc). A flow running on computer A sends the request to computer B, which runs the `cat /proc/cpuinfo` command in an `exec` node, and the output is sent back to computer A.
Running commands on remote computers is supposed to be hard to do -- otherwise none of our computing infrastructure would be safe.
Actually, there is at least one more way to collect information from remote machines... and it's more in line with IoT and node-red's sweet spot. Each target machine runs its own autonomous flows in node-red. Set an `inject` node to trigger a flow every X minutes that gets the data and pushes it into either a central database (sql, influx, redis, etc) or a pub/sub queue (e.g. mqtt). Then, whenever the master flow needs to see what's happening at the remote machines, it just reads the data from that central database or queue. Much cleaner that way, imo.