Simple face recognition with various tools

chantana chantrapornchai
4 min read · Nov 12, 2018


We explore several face recognition libraries and tools: OpenCV, Dlib (OpenFace), and deep-learning-based tools.

Face recognition consists of two steps: face detection and face recognition. The first phase, face detection, is the process of finding the bounding box of a face in the frame; face recognition is the task of identifying whose face it is. The first phase is really important, and there are many resources that perform this task. The common ones are OpenCV and dlib. OpenCV ships several types of cascade classifiers, e.g. LBP and Haar (https://github.com/informramiz/Face-Detection-OpenCV). There are also templates for the frontal face, eyes, nose, etc. (https://github.com/opencv/opencv/tree/master/data)

In the first phase, we can perform face detection using either the OpenCV or the dlib library.

  1. Using OpenCV, we load the template for the face first.

f_cascade = cv2.CascadeClassifier('lbpcascade_frontalface.xml')

or other templates, e.g. 'haarcascade_frontalface_alt.xml'.

Then, convert the img (previously read by cv2.imread or captured from video) to grayscale, and detect faces with detectMultiScale, which returns the bounding boxes in the faces array.

grayimg = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = f_cascade.detectMultiScale(grayimg, scaleFactor=1.1, minNeighbors=5)

Each element of the faces array is a tuple (x, y, w, h):

for face in faces:
    (x, y, w, h) = face

which we can use to draw a rectangle in the image, showing the bounding box.

(image: detection result with the OpenCV Haar cascade)

2. In the case of dlib, we have to use the dlib library. Then, we initialize the HOG + SVM based face detector.

import dlib
face_detector = dlib.get_frontal_face_detector()
(image: detection result with dlib HOG)

or you can use your own detector, e.g.

# wget http://arunponnusamy.com/files/mmod_human_face_detector.dat
cnn_face_detector = dlib.cnn_face_detection_model_v1("mmod_human_face_detector.dat")
(image: detection result with the dlib CNN detector)

3. OpenCV has various templates that we can use. To utilize each of them, two parameters must be set properly: the scale factor and the minimum number of neighbours. The blog https://fairyonice.github.io/Object-detection-using-Haar-feature-based-cascade-classifiers-on-my-face.html discusses how to iteratively find the best two values.
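That iterative search can be sketched as a small grid search. The scoring used here (detection counts closest to the known number of faces per image) is our own assumption, not the exact method from the linked blog, and `grid_search_params` is a hypothetical helper:

```python
import itertools

def grid_search_params(detect, images, true_counts,
                       scale_factors=(1.05, 1.1, 1.2, 1.3),
                       min_neighbors=(3, 4, 5, 6)):
    """Pick the (scaleFactor, minNeighbors) pair whose detection counts
    deviate least from the known face counts.

    `detect(img, sf, mn)` should return a list of boxes, e.g. by wrapping
    f_cascade.detectMultiScale(gray, scaleFactor=sf, minNeighbors=mn).
    """
    best, best_err = None, float("inf")
    for sf, mn in itertools.product(scale_factors, min_neighbors):
        # Total deviation from the true face count over all labeled images.
        err = sum(abs(len(detect(img, sf, mn)) - t)
                  for img, t in zip(images, true_counts))
        if err < best_err:
            best, best_err = (sf, mn), err
    return best
```

The labeled images only need their face counts, not bounding boxes, which makes collecting the tuning set cheap.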

4. From the above, it is seen that dlib works quite a bit better than OpenCV; most of the open-source face recognition projects I have seen rely on dlib. With dlib it is also interesting to extract the 68-point face landmarks. Face landmarks are a special feature which can be used for face recognition. Previous work used 5-point landmarks (two eyes, nose, mouth) as an indicator for detecting faces in images. The 68 points are more valuable since they can be used as features of a face, and they can also be used to extract parts of the face: eyes, nose, mouth. To use dlib to get landmarks, we need a shape predictor:

dlib_detector = dlib.get_frontal_face_detector()
shape_predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

Then we detect the faces in the image first and pass each face bounding box to the shape predictor to find the landmark points.

faces = dlib_detector(image_gray, 1)

for (i, rect) in enumerate(faces):
    shape = shape_predictor(image_gray, rect)

In order to draw the (x, y) points given by the shape array on the image, we must convert the coordinates to values accepted by OpenCV, using face_utils from the imutils package.

from imutils import face_utils

shape = shape_predictor(image_gray, rect)
shape = face_utils.shape_to_np(shape)
# convert dlib's rectangle to an OpenCV-style bounding box
# [i.e., (x, y, w, h)], then draw the face bounding box
(x, y, w, h) = face_utils.rect_to_bb(rect)
cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 1)
# show the face number
cv2.putText(image, "Face #{}".format(i + 1), (x - 10, y - 10),
            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 2)
# loop over the (x, y)-coordinates for the facial landmarks
# and draw them on the image
for (x, y) in shape:
    cv2.circle(image, (x, y), 5, (0, 0, 255), -1)
(image: dlib face landmark result)

We also know the position of each part of the face from this shape array:

FACIAL_LANDMARKS_IDXS = collections.OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("nose", (27, 35)),
    ("left_eye", (42, 48)),
    ("eyes_eyebrow", (17, 48)),
    ("jaw", (0, 17))
])
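Given the index table above, slicing a (68, 2) landmark array into face parts is straightforward. Here is a small sketch; the `get_part` helper is our own, added for illustration:

```python
import collections
import numpy as np

# Same (start, end) index table as above, into the 68-point array.
FACIAL_LANDMARKS_IDXS = collections.OrderedDict([
    ("mouth", (48, 68)),
    ("right_eyebrow", (17, 22)),
    ("left_eyebrow", (22, 27)),
    ("right_eye", (36, 42)),
    ("nose", (27, 35)),
    ("left_eye", (42, 48)),
    ("jaw", (0, 17)),
])

def get_part(shape, name):
    """Return the (x, y) points of one facial part from a (68, 2) array."""
    start, end = FACIAL_LANDMARKS_IDXS[name]
    return shape[start:end]
```

With the cropped points, `cv2.boundingRect(get_part(shape, "left_eye"))` would give a box around just that part.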

5. Now, we can do simple face recognition using these landmarks. A simple idea: for each image of each class, find the landmarks and average them somehow, and use the representative value as the feature of that class. Then, for an unknown image, we get its landmark set and compare it to the representative value of each class; whichever class gives the smallest distance is the answer. This is similar to the idea of the OpenFace library (https://cmusatyalab.github.io/openface/). Alternatively, one can use the landmarks of each image as part of the features for training, as in the idea of FaceNet.
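The simple idea above — average the landmarks per class, then assign an unknown face to the nearest class mean — can be sketched as follows. The Euclidean distance and the flattening of the (68, 2) array are our own assumptions; a real system would first normalize the landmarks for pose and scale:

```python
import numpy as np

def class_means(landmark_sets):
    """landmark_sets: {class_name: list of (68, 2) landmark arrays}.
    Returns {class_name: flattened mean landmark vector} as the
    representative value of each class."""
    return {name: np.mean([s.reshape(-1) for s in samples], axis=0)
            for name, samples in landmark_sets.items()}

def classify(shape, means):
    """Assign `shape` (a (68, 2) array) to the class with the nearest mean."""
    vec = shape.reshape(-1)
    return min(means, key=lambda name: np.linalg.norm(vec - means[name]))
```

In practice the landmark arrays would come from the shape predictor in step 4 (after face_utils.shape_to_np).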

The trial code on GitHub for FaceNet using the OpenFace library is at github.com/cchantra/face_training (face_recognition1.ipynb). We adapted it from http://krasserm.github.io/2018/02/07/deep-face-recognition/.

The code for face detection is in the same repository, in face_detection.ipynb.

6. There are many other open-source face recognition projects based on dlib and face landmarks.
