My project is using the Emotiv Epoc+ 14-channel headset to detect EEG signals (brain signals) and send them to Node-RED. From there, the EEG signals will be sent to an Arduino board to control the motion of a robot, e.g. move left or right based on commands coming from the headset.
I want to use this to trigger a prosthesis as part of my graduation project, so I hope you can help me.
That sounds like a cool project. I did some work with the Emotiv headset years ago to drive a taxi-cab around a race track for a programme on the BBC. That was before Node-RED existed, however. A colleague wrote some code using their SDK to get the data from the headset published to an MQTT broker. From there we could hook up anything we wanted by subscribing to the appropriate topics. Unfortunately it isn't code I'm able to share (this was a long time ago, I doubt I still have a copy, and it isn't my IP to share). But the principle is there.
Thank you for your reply
What advice would you give me, given that I don't know much about MQTT yet?
For the basics to learn about MQTT and how you can use it, check out the Essentials of MQTT: https://www.hivemq.com/mqtt-essentials/
I'd say read at least up to part 6, but read all the parts if you're up for it.
Then note that Node-RED has built-in MQTT client nodes you can use, and that there are MQTT client libraries for Arduino as well. I believe Nick even wrote one, PubSubClient: https://pubsubclient.knolleary.net/
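The key MQTT idea the Essentials series covers is topic filters: a subscriber doesn't name one exact topic, it gives a pattern where `+` matches one topic level and `#` matches all remaining levels. As a rough illustration (a standalone sketch of the matching rules from the MQTT spec, not code from any client library), the logic looks like this:

```python
# Sketch of MQTT topic-filter matching rules (illustrative, not from a library):
#   '+' matches exactly one topic level, '#' matches all remaining levels.
# The example topic names (headset/...) are made up for this discussion.

def topic_matches(topic_filter: str, topic: str) -> bool:
    """Return True if an MQTT-style topic filter matches a topic name."""
    filter_levels = topic_filter.split("/")
    topic_levels = topic.split("/")
    for i, level in enumerate(filter_levels):
        if level == "#":             # multi-level wildcard: matches the rest
            return True
        if i >= len(topic_levels):   # filter has more levels than the topic
            return False
        if level != "+" and level != topic_levels[i]:
            return False
    return len(filter_levels) == len(topic_levels)
```

So a Node-RED mqtt-in node subscribed to `headset/#` would see every message the headset side publishes, while `headset/+/command` would see only the command messages.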
You could follow the basic idea Nick described above: use the Emotiv SDK to publish to MQTT, then have Node-RED and your Arduino read those messages and act on them.
MQTT is simply a messaging mechanism that lets your program "subscribe" to any updates on topics it cares about.
So if you can work out how to get data out of the headset into a form that Node-RED can understand (serial? TCP/IP? Web API?) then you will be good to go.
MQTT is simply a useful mechanism to act as glue between systems. For example, if your Arduinos had network/WiFi capability (or you used an ESP8266 instead), then you could send the headset signals to MQTT topics and get the Arduinos to subscribe to those topics so that they would react to changes.
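To make that glue layer concrete, here is a minimal sketch of the translation step that would sit in the middle (in Node-RED this would typically be a function node feeding an mqtt-out node). The command names, topic name, and motor payload format are all assumptions for illustration; the Emotiv SDK and your robot will define their own:

```python
import json

# Hypothetical sketch: translate a detected headset command into an
# MQTT (topic, payload) pair the robot-side Arduino could subscribe to.
# The command set, topic name, and payload format are invented for this
# example -- they are not part of any Emotiv or Node-RED API.

COMMANDS = {
    "left":    {"motor_l": -1, "motor_r":  1},  # pivot left
    "right":   {"motor_l":  1, "motor_r": -1},  # pivot right
    "push":    {"motor_l":  1, "motor_r":  1},  # drive forward
    "neutral": {"motor_l":  0, "motor_r":  0},  # stop
}

def to_mqtt_message(command: str, power: float) -> tuple[str, str]:
    """Build (topic, payload); unknown commands fail safe to 'stop'."""
    motors = COMMANDS.get(command, COMMANDS["neutral"])
    payload = {k: round(v * power, 3) for k, v in motors.items()}
    return ("robot/drive", json.dumps(payload))
```

The fail-safe default matters here: if the headset emits something unexpected, you want the robot (or a prosthesis!) to stop, not to keep executing the last command.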
In fact, though, using MQTT that way might introduce too much latency between the headset signals and the Arduino's control of the robot. I seem to remember reading somewhere that you need latency of under half a second for reasonable live control of a robot. Robotic surgery, for example, needs reliable latency below 600 milliseconds (0.6 s), and at or below 200 ms latency is pretty much undetectable, at least for surgery. Also bear in mind that a Pi running Linux is nowhere near a realtime system, and a background task or Node.js garbage collection could easily introduce transient delays above these levels.
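So it's worth actually measuring before committing to the architecture. One simple approach (a sketch under the assumption that you timestamp each message at the publishing side and that both clocks are synchronised, otherwise measure a round trip instead) is to collect the delays and check them against a budget. The 0.5 s figure is just the rough number mentioned above, not a specification:

```python
# Hypothetical latency check: compare per-message send/receive timestamps
# against a control-latency budget. Assumes synchronised clocks (or use a
# round-trip measurement and halve it). 0.5 s is an assumed budget, not a spec.

LATENCY_BUDGET_S = 0.5

def check_latencies(sent_times: list[float], received_times: list[float]):
    """Return (worst-case latency, fraction of messages over budget)."""
    delays = [r - s for s, r in zip(sent_times, received_times)]
    over_budget = sum(1 for d in delays if d > LATENCY_BUDGET_S)
    return max(delays), over_budget / len(delays)
```

If the worst case regularly blows the budget, you'd want to move the time-critical loop closer to the hardware and keep MQTT for the non-critical signalling.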