Details
Human faces are one of the most important biometrics for recognition. Face imagery is easily and nonintrusively collectible, whereas other biometrics such as fingerprints or iris scans are impractical in many scenarios (e.g., a surveillance setting). Because of the universality of faces as a biometric, there has been a proliferation of face recognition approaches in the research literature.
Along with this proliferation of algorithms there has also been an explosion of datasets designed to support research in face recognition. Face recognition nonetheless remains a difficult problem, largely because of challenging imaging conditions and variations caused by expression, gender and pose. More recently, 3D scanning technology has matured and its price of entry has dropped substantially. This has led to renewed interest in face recognition based on 3D models of human faces. One unexplored avenue of research in facial analysis is the potential of using 3D models to augment the performance of traditional 2D, appearance-based techniques.
Our dataset is designed with two main goals in mind. First, we would like to make available accurate and complete 3D models of faces to researchers who are primarily interested in the analysis of 3D meshes and textures of human faces. That is, our dataset is designed to be useful for research on pure 3D analysis techniques.
We have also designed our dataset to go beyond the scope of pure 3D analysis, allowing researchers to investigate how to close the gap between 2D, appearance-based computer vision algorithms and methods that work on more precise 3D models. In particular, the dataset is intended for evaluating the use of 3D information in computer vision problems such as 3D face pose estimation and 3D face recognition directly from video data or still images.
To this end, the pipeline of data acquisition is designed to provide both 3D data and 2D videos consistent with each other.
- First, a 3D model of each subject's face is captured using a 3D scanner.
- Second, we record high-definition (HD) video of the subject performing a range of specific head rotations (this roughly corresponds to a cooperative setting). Five zoom levels are used in order to capture the subject's face at multiple image resolutions.
- Finally, the subject is recorded with two PTZ cameras, one indoor and one outdoor. These two scenarios represent a more non-cooperative setting, and the subject is asked to behave spontaneously. Three zoom levels are captured in each video in order to cover a broad range of face resolutions.
The dataset consists of:
- High-resolution 3D scans of human faces from many subjects.
- Several video sequences of varying resolution, conditions and zoom level for each subject.
Each subject is recorded in the following situations:
- In a controlled setting in HD video.
- In a less-constrained (but still indoor) setting using a standard PTZ surveillance camera.
- In an unconstrained, outdoor environment under challenging recording conditions.
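As a minimal illustration of working with the 3D scans, the sketch below parses vertices and triangular faces from a mesh file. This assumes the scans are distributed as Wavefront OBJ meshes, which the description above does not specify; treat the helper as hypothetical.

```python
def load_obj(path):
    """Parse vertices and triangular faces from a Wavefront OBJ file.

    Hypothetical helper: assumes the 3D face scans are stored as OBJ
    meshes, which this dataset description does not guarantee.
    """
    vertices, faces = [], []
    with open(path) as fh:
        for line in fh:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":
                # Vertex line: "v x y z"
                vertices.append(tuple(float(x) for x in parts[1:4]))
            elif parts[0] == "f":
                # Face line: keep only the vertex index of each
                # "v/vt/vn" triple; OBJ indices are 1-based.
                faces.append(tuple(int(p.split("/")[0]) - 1
                                   for p in parts[1:4]))
    return vertices, faces
```

A loader like this yields the raw geometry (vertex positions and triangle connectivity) needed by most 3D facial analysis pipelines.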
If you use the dataset, please cite our paper as follows:
@inproceedings{Bagdanov:2011:FHF:2072572.2072597,
  author    = {Bagdanov, Andrew D. and Del Bimbo, Alberto and Masi, Iacopo},
  title     = {The Florence 2D/3D Hybrid Face Dataset},
  booktitle = {Proceedings of the 2011 Joint ACM Workshop on Human Gesture and Behavior Understanding},
  series    = {J-HGBU '11},
  year      = {2011},
  isbn      = {978-1-4503-0998-1},
  location  = {Scottsdale, Arizona, USA},
  pages     = {79--80},
  numpages  = {2},
  url       = {http://doi.acm.org/10.1145/2072572.2072597},
  doi       = {10.1145/2072572.2072597},
  acmid     = {2072597},
  publisher = {ACM},
  address   = {New York, NY, USA},
  keywords  = {3D, datasets, face recognition, face retrieval, facial analysis},
}
MICA (3D Morphable Model based on the extended Florence 3D face dataset)
The MICA dataset consists of about 2,300 subjects, built by unifying existing small- and medium-scale datasets under a common FLAME topology. For each subdataset, we provide the reconstructed face geometries, including the FLAME parameters.
Download the 3D fittings, including the FLAME model parameters.
Follow the instructions in our GitHub repository (https://github.com/Zielon/MICA/tree/master/datasets) to download the complete data (note that the individual datasets have to be downloaded separately).
In case of any questions, contact mica@tue.mpg.de
The data is shared only for academic, non-commercial usage.
If you use the dataset, please cite our paper as follows:
@article{MICA:ECCV2022,
  author  = {Zielonka, Wojciech and Bolkart, Timo and Thies, Justus},
  title   = {Towards Metrical Reconstruction of Human Faces},
  journal = {European Conference on Computer Vision},
  year    = {2022}
}