This is my first project that I will deploy, and I have many questions: what are the best practices and a good configuration? Is running a separate Node-RED instance for every reader the best approach if there is a lot of incoming data every second?
Without any information about what you are doing, we can't really say what would be the right thing for you to do.
Good morning,
I have to connect many RFID readers to Node-RED. One reader can read between 4000 and 6000 tags per second, and Node-RED needs to check every tag, store the transaction history, and run analyses.
That's why I'm thinking of linking every reader to its own Node-RED instance, to minimise processing time.
Just out of curiosity, which real-life application requires reading 4000 tags per second?
Our application is warehouse management using RFID technologies
@Andrei if each ID card was 1 inch, 4000 stacked edge to edge would be 4000 inches. That works out to 333 feet. If they are read by one reader, that means it has to run at 333 feet/second or 227 mph. Man I'd like to see that.
p.s. 6000 ID cards would be running at 340 mph
You can get tags as small as 1.25mm square with a height of 0.55mm
There are also tags for medical use that go down to 700 microns (0.03 inches), or even 300 microns (0.01 inches)
Learn something new every day! Now I'd really like to see the scanning. (I'm just a big kid)
However, if you want to process 4,000/second, that is one every 0.00025 seconds. So you would have to read the tag and do all the rest of the flow in that amount of time.
I don't think it would work on a Pi Zero
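Just to make that budget concrete, here is a rough Node.js sketch. The handler is purely hypothetical and only stands in for "check the tag and record it"; the point is simply to compare the measured per-tag time against the 0.25 ms budget.

```javascript
// Rough budget check: at 4000 tags/sec each tag gets 1/4000 = 0.25 ms.
// handleTag() is a stand-in for the real "check tag + store transaction" work.
function handleTag(tag, history) {
  if (typeof tag.id !== 'string') return false; // pretend validation
  history.push({ id: tag.id, ts: Date.now() }); // pretend history write
  return true;
}

const history = [];
const N = 4000; // one second's worth of tags at the claimed rate
const start = process.hrtime.bigint();
for (let i = 0; i < N; i++) {
  handleTag({ id: 'TAG-' + i }, history);
}
const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
console.log(`Processed ${N} tags in ${elapsedMs.toFixed(2)} ms ` +
            `(${(elapsedMs / N).toFixed(4)} ms per tag, budget 0.25 ms)`);
```

Of course the real per-tag work (database writes, analysis) would dominate, which is exactly why the hardware matters.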
@HasseneKh what type of machinery/system is having 4000 reads per second? I’ve gotta see this!
He stated it was warehouse management.
I suspect that number of readings isn't generated by one single line, but by the whole location.
Still, 4000 is a lot. Even for the whole system.
Ever seen an Amazon location or something comparable? Not that far-fetched.
Actually yes, I’ve even installed one.
So there would be many locations where tags could be read simultaneously spread over the whole location.
All in all, a very interesting use-case. I would like to see if Node-RED can handle that properly, given it runs on appropriate hardware for the job.
I have never actually tried load testing my use case, but I routinely handle 200-300 messages per second (running on Windows Server) with no issues.
I have checked the technical specs of the Zebra RFID readers: the real range is between 400 and 900 tags/sec for mobile readers, and fixed readers can go up to 1200 per second.
I hope that too.
Whilst the reader may go at 1200/sec, that is very unlikely to be a sustained rate over a long period of time. It is more likely to come in sudden bursts of activity as tags are brought into range. But regardless,
How would the RFID readers connect to Node-RED? Is Node-RED reading from the readers directly, or is there some messaging bus in between? What sort of work will need to be done for each tag that is read?
You need to pick the right architecture for the type of scenarios you want to manage.
If you have multiple readers working independently of each other (ie, scanning different sets of tags around a large warehouse), then having separate Node-RED instances may well be a better way to distribute the workload.
If it's a more static setup, then you may get away with a single Node-RED instance running in a centralised location. But it would very much come down to what you wanted the flows to do.
If you push Node-RED out to the individual devices, you need to consider how you'd manage the flows and updates - although that's no different to any software you are running across multiple devices.
So it really does depend on the scenarios you want to support - and what testing you can do to find what works.
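As one possible illustration of a messaging bus sitting in between (this assumes MQTT and the `mqtt` npm package; the broker address, topic names and reader callback are all made up for the example), a small gateway beside each reader could publish its reads for an MQTT-in node in Node-RED to pick up:

```javascript
// Hypothetical reader gateway: forwards each tag read to an MQTT broker
// so Node-RED (via an MQTT-in node) never talks to the reader directly.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.local:1883'); // assumed broker address

// onTagRead stands in for whatever callback the reader's SDK actually provides.
function onTagRead(readerId, tagId) {
  const payload = JSON.stringify({ reader: readerId, tag: tagId, ts: Date.now() });
  // One topic per reader keeps the option open to split the workload across
  // several Node-RED instances later, each subscribing to its own readers.
  client.publish(`warehouse/readers/${readerId}/tags`, payload, { qos: 1 });
}

client.on('connect', () => {
  // Simulated burst: 100 reads as fast as the loop can go.
  for (let i = 0; i < 100; i++) onTagRead('reader-01', `TAG-${i}`);
});
```

That keeps the reader-facing code tiny and pushes the routing question (one central instance or one per reader) onto the broker topics rather than the hardware.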
Thanks for your reply. I will try the scenarios that I have, and I hope to find the optimal one.
The rate of scanning doesn't have to match the rate of processing in Node-RED (assuming the rate of reading is not constant and has highs and lows).
The scanner could read at whatever speed, queue the reads up somewhere, and have one or more Node-RED instances process them from that queue.
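A minimal sketch of that decoupling, under the same assumptions as the gateway example above (MQTT via the `mqtt` npm package, made-up broker and topic names): reads are buffered in memory as fast as they arrive, and a timer drains the backlog at whatever rate the processing side can actually sustain.

```javascript
// Hypothetical consumer: accepts bursts from the broker but works through
// the backlog at its own steady pace, so scan rate and processing rate
// never have to match.
const mqtt = require('mqtt');

const client = mqtt.connect('mqtt://broker.local:1883'); // assumed broker address
const queue = [];

client.on('connect', () => client.subscribe('warehouse/readers/+/tags'));
client.on('message', (topic, message) => {
  queue.push(JSON.parse(message.toString())); // enqueue as fast as reads arrive
});

// Drain up to 500 queued reads every 100 ms (a ~5000/sec ceiling), no matter
// how bursty the incoming scans are.
setInterval(() => {
  const batch = queue.splice(0, 500);
  for (const read of batch) {
    // Placeholder for the real work: check the tag, store the transaction, etc.
    console.log(`processed tag ${read.tag} from ${read.reader}`);
  }
}, 100);
```

The same idea works inside Node-RED itself (for example an MQTT-in node feeding a delay node in rate-limit mode), or with a proper queue such as RabbitMQ if the backlog needs to survive restarts.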