FaceAIKit is a Python library designed for face detection and recognition applications. With FaceAIKit, you can easily integrate state-of-the-art face detection and recognition capabilities into your applications and research projects. The library supports inference on various devices: CPU (via ONNX), GPU (via ONNX), Rockchip RK3566 (RKNN framework), and Rockchip RK3588 (RKNN framework).
- Face Detection - Quickly locate faces within images or video streams using efficient algorithms for detecting faces in various contexts and orientations.
- Facial Landmark Detection - Identify key facial landmarks, such as eyes, nose, and mouth, to understand facial expressions.
- Face Recognition - Perform facial recognition to identify and verify individuals by comparing detected faces.
- Estimation of head rotation - Not available yet.
- Customizable - Fine-tune and customize the library's models to suit your specific needs and applications.
- 27.12.2023 - FaceAIKit alpha version was introduced.
You can install FaceAIKit using pip:
```bash
pip install face-ai-kit
```

Modules are configured using a YAML config file placed in the config folder. The library also allows you to use your own config file, which must be placed in the folder referenced by the FACEAIKITDIR environment variable (see the sketch after the list of config parts below).
The config file is split into the following parts:
- retinaface_detector: configuration for the RetinaFace detector; supported modules (RetinaFace)
- recognition: supported modules (ArcFace)
- landmarks: supported modules (N19)
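For example, a custom config folder can be selected by setting the environment variable before initializing the library. This is only a minimal sketch: the path below is a placeholder, and it assumes the folder contains a YAML file mirroring face_ai_kit/config/base.yaml.

```python
import os

# Placeholder path: the folder is assumed to contain your own YAML config
# with the same structure as face_ai_kit/config/base.yaml.
os.environ["FACEAIKITDIR"] = "/path/to/custom/config"
```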
Publicly available models are for CPU and GPU only. Models for other platforms are available on request. The trained models can be downloaded from Google Drive.
| Model name | Restriction | Note |
|---|---|---|
| ArcFace | For research purposes only | Trained on MS1M dataset, Resnet50 backbone |
| MagFace | Given by MagFace | Converted model from MagFace |
| N19 | For research purposes only | Internal neural network, trained on WFLW dataset |
| RetinaFace | For research purposes only | Trained on WIDERFace dataset |
To get started with FaceAIKit, please refer to the documentation and examples provided in this repository. You'll find detailed guides and sample code to help you integrate the library into your projects quickly.
Test images included in examples/data were obtained from https://vis-www.cs.umass.edu/fddb/.
To use the library, initialize it by creating a new instance of the FaceRecognition class. During initialization, the face recognition algorithm must be selected, and the path to a custom configuration file can be set as a class parameter.
Supported face recognition algorithms:
- ArcFace - 'arcface'
- MagFace - 'magface'
```python
lib = FaceRecognition(recognition='arcface')
```

To use your own config file, pass the config_file argument with the path to a config file that has the same structure as face_ai_kit/config/base.yaml.
```python
lib = FaceRecognition(recognition='arcface', config_file='config/base.yaml')
```

Face detection is performed by calling the face_detection method. The function expects a NumPy image or a path to an image as an input parameter. In addition to this argument, the face alignment method can be set using the align parameter.
```python
lib.face_detection(image, align='square')
```

Supported face alignment algorithms:
- Square - 'square'
- Keypoints - 'keypoints'
- None - To use the output of the face detector directly
The output of face_detection is an array of dictionaries, each containing face, roi, and score.
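As an illustration, the detection output might be consumed as in the sketch below; the exact key names and the ROI format are assumptions based on the description above, and the image path is a placeholder.

```python
import cv2

image = cv2.imread("examples/data/sample.jpg")  # placeholder image path
detections = lib.face_detection(image, align='square')

for det in detections:
    # Each entry is assumed to be a dict with 'face', 'roi' and 'score' keys.
    print("score:", det["score"])
    print("roi:", det["roi"])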
Face recognition can be performed using several methods, each of which has a different use case.
In the input images, faces are detected first, and the two faces with the highest confidence scores are then compared.

```python
lib.verify(face_image1, face_image2)
lib.verify_batch(face_image1, face_image2)
lib.verify_rois(face_image1, face_roi1, face_image2, face_roi2)
```
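A minimal verification sketch is shown below. It assumes verify accepts NumPy images in the same way as face_detection; the file names are placeholders, and the structure of the returned result is not documented here.

```python
import cv2

# Placeholder image paths; any two face images can be used.
face_image1 = cv2.imread("examples/data/person1.jpg")
face_image2 = cv2.imread("examples/data/person2.jpg")

# The result is printed as-is; its exact structure is left unspecified here.
result = lib.verify(face_image1, face_image2)
print(result)
```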
Detection of face landmarks can be performed using the following code:

```python
lib.landmarks(face_image1, face_roi1, face_image2, face_roi2)
```

The library supports the N19 landmark detector (our model) and the MediaPipe landmark detector. With N19, it is possible to detect 98 keypoints defined by the WFLW annotation. The MediaPipe detector, on the other hand, allows detection of 192 face keypoints.

The N19 model will be replaced by another model from the N19 series in the near future.
Contributions, bug reports, and feature requests are welcome! Feel free to submit issues to improve FaceAIKit and make it even more powerful and user-friendly.
FaceAIKit is licensed under the MIT License, allowing you to use it in both open-source and commercial projects.
Please note that the pre-trained models included in FaceAIKit are intended for research purposes only.
If you have any questions, feedback, or inquiries about FaceAIKit, please don't hesitate to contact us at tgoldmann@seznam.cz or igoldmann@fit.vutbr.cz.

