Local Node-RED Server Communication with Arduino and Jetson Nano

I'm completely new to Node-RED and thinking of using it for our project. We are controlling a robotic arm with wearable-glove movement (an MPU and other sensors). The communication is to be done wirelessly in a local environment. How would I go about setting up something like the following: MPU/sensors --> Arduino Uno --> serial --> laptop --> Node-RED --> server/web app <--> Jetson Nano --> robotic arm? The entire premise is to send the sensor data wirelessly and later handle it on the Jetson Nano. I came across Node-RED because it allows serial data to be easily read and transmitted. We must use an Arduino Uno (no WiFi) due to some architectural issues we have discovered with the MPU and other boards.
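
For context, this is roughly what I have in mind for the laptop end of the serial link (pyserial; the port name, baud rate, and the comma-separated line format are just placeholders for what I've been testing with, not final):

```python
# Sketch of the laptop-side serial reader. Assumed CSV format from the Uno:
# "pitch,roll,flex1,flex2" once per sample at ~50 Hz.
import serial

PORT = "/dev/ttyACM0"   # placeholder; something like COM3 on Windows
BAUD = 115200           # must match Serial.begin() in the Uno sketch

with serial.Serial(PORT, BAUD, timeout=1) as ser:
    while True:
        line = ser.readline().decode("ascii", errors="ignore").strip()
        if not line:
            continue
        try:
            pitch, roll, flex1, flex2 = (float(v) for v in line.split(","))
        except ValueError:
            continue  # skip partial or garbled lines
        print(pitch, roll, flex1, flex2)  # hand off to Node-RED / network here
```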

Understood. How time-critical is the communication? Could you describe the whole path, from your sensors to the robotic arm? Do you really need all of those steps in the communication chain? You can run Node-RED directly on the Nano (I do myself) and remove some of the middle steps.

I can explain the project in much more depth. In terms of timing, we are expecting a delay, but nothing too crazy, maybe about 1 second. The Arduino Uno will be sampling the sensors at about 50 samples/second. I have tested the sensors, drivers, and robot control locally on the Uno and it works perfectly.

The plan for the project is as follows. The wearable glove component consists of an Arduino Uno, an MPU6050 (using Jeff Rowberg's library), and flex sensors; it samples 4 values. The Uno on the glove is connected to a laptop via serial. The user (input) is located in, say, room A. In room B, right beside room A, there is the stationary robotic arm with 4-6 servo motors. The arm is connected to a PCA9685 driver with an external 6 V power source, and the driver in turn is connected to the Jetson Nano. Additionally, a Pi camera is connected.

We need to get the input values sampled in room A over to room B. Furthermore, the Pi camera video stream, which includes object dimension recognition using OpenCV, needs to be sent back to room A. The user in room A should be able to open some form of web app to see the live video stream and control the robotic arm as he moves/tilts his hand and flexes his fingers. The environment is to be local, with no internet connection involved: a local web app and server over a WiFi connection. If you have the time to discuss this off the forum, I'd appreciate that too.
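
For the arm end on the Jetson, I'm expecting something along these lines (Adafruit's ServoKit library driving the PCA9685; the channel assignments and the value-to-angle mapping below are just placeholders, not the final calibration):

```python
# Rough sketch of the arm side on the Jetson: PCA9685 via Adafruit ServoKit.
# Channel order and the assumed 0..1023 input range are placeholders.
from adafruit_servokit import ServoKit

kit = ServoKit(channels=16)  # the PCA9685 exposes 16 PWM channels

def apply_sample(pitch, roll, flex1, flex2):
    """Map one glove sample onto four servo channels."""
    for channel, value in enumerate((pitch, roll, flex1, flex2)):
        # Clamp and scale an assumed 0..1023 reading to a 0..180 degree angle.
        angle = max(0.0, min(180.0, value * 180.0 / 1023.0))
        kit.servo[channel].angle = angle
```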

Do you plan to run this on the Nano as well? It will require some resources for sure (I run YOLO on a Nano myself).

No, I think you will be best supported here on the forum; many experts are around for all kinds of suggestions and ideas. Otherwise, if it suits you better, you can go here: Jobs - Node-RED Forum

Consider using MQTT for communication between your computers
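
On the laptop side, Node-RED's built-in MQTT nodes can publish the serial data without writing any code. On the Jetson side, a small subscriber could look something like this (paho-mqtt; the broker address, topic name, and payload format below are just examples, not a fixed convention):

```python
# Minimal MQTT subscriber for the Jetson side (paho-mqtt 1.x style;
# paho-mqtt 2.x additionally wants a CallbackAPIVersion argument to Client()).
import paho.mqtt.client as mqtt

BROKER = "192.168.1.10"   # placeholder: laptop running a broker, e.g. Mosquitto
TOPIC = "glove/sample"    # placeholder topic name

def on_message(client, userdata, msg):
    # Payload assumed to be the same CSV line the Uno sends: "pitch,roll,flex1,flex2"
    values = [float(v) for v in msg.payload.decode().split(",")]
    print("sample:", values)  # hand off to the servo code here

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()
```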

That feels like a long chain; too long for reliable real-time control, I suspect. I can't remember off the top of my head what the practical maximum delay is for human interaction, but I believe it is <500 ms? You have to fit the entire feedback chain into that, I think; otherwise the remote user will find it extremely hard to control the arm. (I'm not an expert in this though, so I might be wrong.)

You're probably right about the time delay this may incur; I'm not sure of that at the moment. In a local hard-wired setup I've tested everything and it all seems to run in order. The next step is the wireless integration, and hence I'm still looking for the best approach.

I'm assuming this is a research project rather than a prototype remote surgical appliance!

Trying to give the user control of the arm while limiting the feedback to OpenCV output via a Pi, I can't imagine ever achieving any kind of finesse, but hey, I'm no expert.

Maybe a direct video feed to the person in addition to the dimension data? Merge the OpenCV images back onto the direct video feed on the laptop, perhaps.
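
Roughly, that merge could be as simple as drawing the recognition results onto each frame before it goes out to the web app. A sketch (the `detect()` function here is a stand-in for whatever your OpenCV step actually produces, and the box/label values are placeholders):

```python
# Sketch of overlaying detection results onto the live video frames.
import cv2

cap = cv2.VideoCapture(0)  # Pi camera; on a Jetson this may be a GStreamer pipeline

def detect(frame):
    # Placeholder: return one bounding box and a dimension label.
    h, w = frame.shape[:2]
    return (w // 4, h // 4, w // 2, h // 2), "120 x 80 mm"

while True:
    ok, frame = cap.read()
    if not ok:
        break
    (x, y, bw, bh), label = detect(frame)
    cv2.rectangle(frame, (x, y), (x + bw, y + bh), (0, 255, 0), 2)
    cv2.putText(frame, label, (x, y - 8),
                cv2.FONT_HERSHEY_SIMPLEX, 0.6, (0, 255, 0), 2)
    cv2.imshow("merged feed", frame)  # in practice, stream this frame to the web app
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```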

You could say it's a bit of both; it's for our final-year engineering project. Sorry if I wasn't clear earlier, but there is absolutely no OpenCV feedback into the control loop. It is merely an object-recognition feature that is sent along with the video stream to the user. The entire system can still be considered a single-input, single-output system.