I have set up this OCR node to read a two-digit value via a webcam. I convert the image to grayscale and adjust the contrast, but after about 15 seconds of the yellow "processing" dot, the OCR node stalls: I get the red message shown ("Lost connection to server, reconnecting...") and never receive the node's output.
My understanding is that the OCR node goes online and uses the Tesseract library.
Is there a better way to approach this problem? FWIW, the digits I am reading always range from about 25 to 55.
This could indicate that the node raised an error that led to a termination of the Node-RED (client or) server.
Have you checked the two consoles (in your browser and on the server) for any issues?
Thanks for the tip. A host of errors is indeed being thrown by that node. It seems the node's help file did not mention any of these parameters as required inputs.
(sorry for the screenshot instead of the error code copied/pasted)
Looking at the Tesseract repo, no additional parameters should be necessary for successful recognition.
There's a relevant remark regarding supported image format / data types:
Note: images must be a supported image format and a supported data type. For example, a buffer containing a png image is supported. A buffer containing raw pixel data is not supported.
You could check that your image data buffer meets this requirement...
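One quick way to sanity-check this is to test whether the buffer starts with a known file signature before handing it to Tesseract.js. The sketch below (plain Node.js, names are my own, not from any Node-RED node) checks for the PNG magic bytes; a buffer of raw pixel data will fail this check:

```javascript
// Sketch: distinguish a PNG-encoded buffer from raw pixel data.
// Every PNG file begins with the same 8 "magic" bytes.
const PNG_MAGIC = Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]);

function looksLikePng(buf) {
    return Buffer.isBuffer(buf)
        && buf.length >= 8
        && buf.subarray(0, 8).equals(PNG_MAGIC);
}

// Raw pixel data (all zeros) is rejected; a PNG header passes.
console.log(looksLikePng(Buffer.alloc(100)));                             // false
console.log(looksLikePng(Buffer.concat([PNG_MAGIC, Buffer.alloc(10)])));  // true
```

In a Node-RED function node you would apply the same check to `msg.payload` and use `node.warn(...)` when it fails, so the bad input is caught before the OCR node ever sees it.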
After a few days of experimenting, I found that Tesseract.js cannot reliably or repeatably perform seven-segment optical character recognition. So I went back to searching and found some interesting attempts and technologies, but the best was a standalone program called SSOCR (seven-segment optical character recognition). After applying the standard image cleanup tools, it has done a very nice job of interpreting the numbers. My test flow is shown below; you can see I needed to trim the output and convert from string to number. I will incorporate this into the overall flow, whereby the image is grabbed via the webcam, cleaned up, saved to the hard drive, and then read by SSOCR via the Exec node.
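For anyone replicating this, the trim-and-convert step after the Exec node can be a small function node. This is only a sketch of my approach (the function name and the `null`-on-failure choice are mine): ssocr writes the recognized digits to stdout followed by a newline, so the string is trimmed and parsed into a number:

```javascript
// Sketch: post-process the Exec node's stdout from ssocr.
// ssocr prints the recognized digits plus a trailing newline.
function parseSsocrOutput(stdout) {
    const cleaned = stdout.trim();   // strip the trailing newline/whitespace
    const value = Number(cleaned);   // convert string -> number
    return Number.isNaN(value) ? null : value;  // null signals a failed read
}

console.log(parseSsocrOutput("42\n")); // 42
console.log(parseSsocrOutput("err"));  // null
```

In the flow itself, the function node would set `msg.payload = parseSsocrOutput(msg.payload)` and you could route `null` results to a debug or retry branch.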
You could probably use an AI-based image-recognition node trained on your 7-segment indicator images. You don't have many possible combinations, so this should be feasible: essentially, the model would compare the webcam picture against stored reference images.
See Image analysis and comparison - #5 by Jotarod