Can't get a Python script to work

I am designing a facial recognition system for college; essentially, a driver needs to be recognised. I am using Node-RED as a tool to sequence the chain of events.

I have a working Python script on my Pi, using OpenCV. I've tried numerous ways to get it to run but keep getting errors.

And this is my code in the function node:

#! /usr/bin/python

# import the necessary packages
from imutils.video import VideoStream
from imutils.video import FPS
import face_recognition
import imutils
import pickle
import time
import cv2

#Initialize 'currentname' to trigger only when a new person is identified.
currentname = "unknown"
#Load the known face encodings from the encodings.pickle model file
encodingsP = "encodings.pickle"
#use this xml file
cascade = "haarcascade_frontalface_default.xml"

# load the known faces and embeddings along with OpenCV's Haar
# cascade for face detection
print("[INFO] loading encodings + face detector...")
data = pickle.loads(open(encodingsP, "rb").read())
detector = cv2.CascadeClassifier(cascade)

# initialize the video stream and allow the camera sensor to warm up
print("[INFO] starting video stream...")
#vs = VideoStream(src=0).start()
vs = VideoStream(usePiCamera=True).start()

# start the FPS counter
fps = FPS().start()

# loop over frames from the video file stream
while True:
	# grab the frame from the threaded video stream and resize it
	# to 500px (to speedup processing)
	frame = vs.read()
	frame = imutils.resize(frame, width=500)
	# convert the input frame from (1) BGR to grayscale (for face
	# detection) and (2) from BGR to RGB (for face recognition)
	gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
	rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)

	# detect faces in the grayscale frame
	rects = detector.detectMultiScale(gray, scaleFactor=1.1,
		minNeighbors=5, minSize=(30, 30))

	# OpenCV returns bounding box coordinates in (x, y, w, h) order
	# but we need them in (top, right, bottom, left) order, so we
	# need to do a bit of reordering
	boxes = [(y, x + w, y + h, x) for (x, y, w, h) in rects]

	# compute the facial embeddings for each face bounding box
	encodings = face_recognition.face_encodings(rgb, boxes)
	names = []

	# loop over the facial embeddings
	for encoding in encodings:
		# attempt to match each face in the input image to our known
		# encodings
		matches = face_recognition.compare_faces(data["encodings"],
			encoding)
		name = "Unknown" #if face is not recognized, then print Unknown

		# check to see if we have found a match
		if True in matches:
			# find the indexes of all matched faces then initialize a
			# dictionary to count the total number of times each face
			# was matched
			matchedIdxs = [i for (i, b) in enumerate(matches) if b]
			counts = {}

			# loop over the matched indexes and maintain a count for
			# each recognized face
			for i in matchedIdxs:
				name = data["names"][i]
				counts[name] = counts.get(name, 0) + 1

			# determine the recognized face with the largest number
			# of votes (note: in the event of an unlikely tie Python
			# will select first entry in the dictionary)
			name = max(counts, key=counts.get)
			#If someone in your dataset is identified, print their name on the screen
			if currentname != name:
				currentname = name
				print(currentname)

		# update the list of names
		names.append(name)

	# loop over the recognized faces
	for ((top, right, bottom, left), name) in zip(boxes, names):
		# draw the predicted face name on the image - color is in BGR
		cv2.rectangle(frame, (left, top), (right, bottom),
			(0, 255, 225), 2)
		y = top - 15 if top - 15 > 15 else top + 15
		cv2.putText(frame, name, (left, y), cv2.FONT_HERSHEY_SIMPLEX,
			.8, (0, 255, 255), 2)

	# display the image to our screen
	cv2.imshow("Facial Recognition is Running", frame)
	key = cv2.waitKey(1) & 0xFF

	# quit when 'q' key is pressed
	if key == ord("q"):
		break

	# update the FPS counter
	fps.update()

# stop the timer and display FPS information
fps.stop()
print("[INFO] elapsed time: {:.2f}".format(fps.elapsed()))
print("[INFO] approx. FPS: {:.2f}".format(fps.fps()))

# do a bit of cleanup
cv2.destroyAllWindows()
vs.stop()

These are the errors coming up - I would really appreciate the help

I don't think your Python code has a chance of running in that python node.
Run the script outside of NR and use MQTT to communicate with it.

OK, I'll give that a go now, thank you!

OK, I'm completely at a loss as to what to do here.

"no such file: encodings.pickle" - try it with an absolute path.

How do you mean?

Have you looked at node-red-contrib-facial-recognition? You might find it useful.

Yes, I tried that out of curiosity.
I also installed TensorFlow so I could use tensorflow.gpu, but I was getting an error regarding this.
I'll do it again and post it.

I would much prefer to use the facial recognition script I already have working. So could someone explain how I could use MQTT to communicate with it while running it outside of NR?

If the Python program can subscribe to MQTT messages, then all you have to do in NR is use the mqtt-out node to send a msg and the mqtt-in node to subscribe to the topic you want.
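For the Python side, a minimal subscriber sketch using the third-party paho-mqtt library (1.x-style callbacks) could look like the following. The topic names are assumptions - match them to whatever you configure in your mqtt-out/mqtt-in nodes:

```python
import json

# Topic names are assumptions - they must match the NR mqtt nodes
TRIGGER_TOPIC = "facialrec/trigger"
RESULT_TOPIC = "facialrec/result"

def parse_trigger(payload: bytes) -> str:
    """Decode the msg.payload bytes sent by the NR mqtt-out node."""
    return payload.decode("utf-8").strip()

def run():
    # third-party package: pip install paho-mqtt
    import paho.mqtt.client as mqtt

    def on_connect(client, userdata, flags, rc):
        # subscribe (again) every time we (re)connect to the broker
        client.subscribe(TRIGGER_TOPIC)

    def on_message(client, userdata, msg):
        command = parse_trigger(msg.payload)
        if command == "recognise":
            # ... run the OpenCV / face_recognition code here ...
            client.publish(RESULT_TOPIC, json.dumps({"name": "unknown"}))

    client = mqtt.Client()
    client.on_connect = on_connect
    client.on_message = on_message
    client.connect("localhost", 1883, 60)  # assumes Mosquitto on the Pi
    client.loop_forever()
```

Calling `run()` at the bottom of the script would keep it blocked in `loop_forever()`, waiting for messages from NR.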

I have set up Mosquitto MQTT.
Looking up some YouTube videos to understand it.

Please take into account that I'm very new to all of this.
How do I subscribe my Python program to MQTT? Is it a bit of code at the end of the script?

I understand that the broker acts as a post office, so to speak, and my Python program is the publisher, so I want my NR flow to be the subscriber.

Am I on the right track?

Read this tutorial, it is the best MQTT intro that I have found: MQTT Essentials - All Core Concepts explained


In your python script, change:

encodingsP = "encodings.pickle"

to

encodingsP = "/the/absolute/path/encodings.pickle"

i.e. the location of the file, as the function cannot find it.
This will also apply to the haarcascade file.
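One way to avoid hard-coding the absolute path is to build it from the script's own location, which works no matter which working directory NR launches the script from. A small sketch - the filenames match the script above, and it assumes both files sit next to the .py file:

```python
import os

# Directory containing this script file
BASE_DIR = os.path.dirname(os.path.abspath(__file__))

# Resolve the data files relative to the script instead of the current
# working directory (which is what made "encodings.pickle" fail under NR)
encodingsP = os.path.join(BASE_DIR, "encodings.pickle")
cascade = os.path.join(BASE_DIR, "haarcascade_frontalface_default.xml")
```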

If you search this forum you will find several discussions about python scripts and mqtt. I have even published some examples you can study, like this one

You can find more examples if you search for "python script mqtt"


Tried that and still the same error.

Trying to run my Python script in the background, but it's just bringing me to my Geany script. Tried "_run" after geany and it brought up a blank terminal.

I need to get the script running in the background before I can proceed.
Any help would be greatly appreciated.

Please keep the discussion about your issue in one thread; it's easier to give advice that way.

Maybe you could describe the complete functionality you are looking for?

Is it constantly running, reading from the camera, or is it triggered by some means? Does it require any intervention or action from an operator? From your flow I guess you trigger it by some means, a button or so, then you are waking up some operator with a sound file alert, and after some delay starting the script that reads images from the camera and presents them on the screen. When finished, the operator closes the view. Is this your expected functionality or just experimental?

I assume now that you have managed to get a script running. I suggest you do the following:

  1. Take my example script from the link I provided above
  2. Move all the necessary functionality from your script into mine, so that you bring the parts you need together, but make sure the script doesn't close when the viewing is done
  3. Set up a simple flow using an inject node to start the script (later you can configure the inject node to start the script when NR starts)

Basically you should now have a script that runs, waiting for commands from NR. The commands I would implement support for should basically ask the script to:

  • when triggered take one or more pictures using the camera & try to find match
  • present image on screen with matching results
  • send the result back to NR

In my script you can see the structure. There is already support for one command, namely "abort", which simply terminates the script.
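The command handling described above can be sketched as a small dispatch function; the "recognise" command name is an assumption and the recognition step is left as a stub, but "abort" matches the example script:

```python
import json

def handle_command(command: str) -> dict:
    """Map an incoming MQTT command to an action for the main loop.

    "abort" terminates the script; "recognise" would run the
    camera + face_recognition code (stubbed out here) and return
    a payload to publish back to NR.
    """
    if command == "abort":
        return {"action": "terminate"}
    if command == "recognise":
        # ... grab frame(s), run detection/recognition, show image ...
        name = "unknown"  # placeholder for the recognised driver's name
        return {"action": "result", "payload": json.dumps({"name": name})}
    # unknown commands are ignored rather than crashing the script
    return {"action": "ignore"}
```

The MQTT on_message callback would call `handle_command` with the decoded payload and publish the `payload` field back to NR when present.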

EDIT: just thinking that maybe you could start with a simpler approach and later add more features. See next post.

Could you try to make a simple flow and just start your script like this?
It should work the same as if you run it from the command line. If not, try adding sudo in front of the command.
What is happening now?