The node-red-docker project also provides info and an example Debian build.
You could build your own TensorFlow, or find a prebuilt version that supports it. I did that about a year ago when I was working in Python on a CLI version of license plate detection.
Or, if you are worried about the performance loss from not using AVX2, switch to the GPU version ---> @tensorflow/tfjs-node-gpu and don't process on the CPU at all.
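If you want to try that, the swap is roughly this (a sketch, assuming your Node-RED user directory is `~/.node-red` -- adjust the path to your install; the GPU binding also requires a CUDA-capable setup):

```shell
# Swap the CPU TensorFlow binding for the CUDA build.
# Run inside your Node-RED user directory (assumed to be ~/.node-red).
cd ~/.node-red
npm uninstall @tensorflow/tfjs-node
npm install @tensorflow/tfjs-node-gpu
```

Then restart Node-RED so the node picks up the new binding.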
Higher up in this GIANT post, and now also in the documentation, I've linked node-red's wonderful guide on creating a custom image: https://github.com/node-red/node-red-docker/blob/master/docker-custom/README.md
I've also linked a node-red Docker image I built that works with this node, along with a long explanation of the security reasons why you should not use my version.
https://discourse.nodered.org/t/announce-node-red-contrib-facial-recognition/37384/53
LOL - I'm really into working out all the bugs I can with this project while the code is fresh in my brain.
WOOOOOOOOOOOOT! You're the first Windows user I know of that got it working.
I do not get any issues:
20 Dec 12:04:02 - [info] Loading palette nodes
2020-12-20 12:04:03.421491: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2020-12-20 12:04:03.436727: I tensorflow/core/platform/profile_utils/cpu_utils.cc:94] CPU Frequency: 2894560000 Hz
2020-12-20 12:04:03.437173: I tensorflow/compiler/xla/service/service.cc:168] XLA service 0x52f7800 initialized for platform Host (this does not guarantee that XLA will be used). Devices:
2020-12-20 12:04:03.437380: I tensorflow/compiler/xla/service/service.cc:176] StreamExecutor device (0): Host, Default Version
I think it defaults to a lower working build number in your use case.
New Version: 0.29.105 released
Documentation only - No need to update, no changes to js file.
Because the node can crash when users send hundreds of images at it, I've made a Simple_Queue_Method example flow. This way you are not waiting on a delay node; it releases the next msg (image) in the queue as soon as the previous image is done being processed.
Do note: this will take up RAM as the queue grows, and TensorFlow is RAM hungry. Only use this method if the queue will drain over time. Don't let it grow out of control.
NOTE: other node-red nodes required
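The idea behind the queue can be sketched in plain JavaScript (illustrative only -- the actual example is a Node-RED flow built from nodes, not this code; `processImage` here is a stand-in for the facial-recognition node):

```javascript
// Sketch of the Simple_Queue_Method idea: hold messages in a queue and
// release the next one only when the previous image finishes processing,
// instead of guessing at a fixed interval with a delay node.

const queue = [];
const processed = [];   // finished payloads, in order
let busy = false;

// Stand-in for the facial-recognition node: finishes asynchronously.
function processImage(msg, done) {
  setTimeout(() => done(msg), 10);
}

function enqueue(msg) {
  queue.push(msg);      // RAM grows with the queue -- don't let it run away
  releaseNext();
}

function releaseNext() {
  if (busy || queue.length === 0) return;
  busy = true;
  const msg = queue.shift();
  processImage(msg, (result) => {
    processed.push(result.payload);
    busy = false;
    releaseNext();      // release the next queued image immediately
  });
}

// Usage: a burst of images arrives faster than they can be processed.
for (let i = 1; i <= 3; i++) enqueue({ payload: `image-${i}` });
```

In a real flow you'd keep the queue in flow context inside a function node and trigger the release from the recognition node's output.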
This topic was automatically closed 14 days after the last reply. New replies are no longer allowed.