Development of face recognition and expression recognition algorithms requires input data that allows thorough testing of algorithm performance under various conditions. Researchers are developing various methods to address challenges such as illumination, pose, and expression changes, as well as facial disguises. In this paper, we introduce a dataset of thermal facial images, which contains a set of neutral images in various poses as well as a set of facial images with different posed expressions, collected with a thermal infrared camera. Since the properties of the face in the thermal domain strongly depend on time, the data collection was repeated in order to show the impact of aging, and the corresponding set of data is provided. The paper describes the measurement methodology and the database structure. We present baseline results obtained with state-of-the-art facial descriptors combined with distance metrics for thermal face re-identification. Three selected local descriptors, histograms of oriented gradients (HOG), local binary patterns (LBP), and local derivative patterns (LDP), are used for an elementary assessment of the database. The dataset offers a wide range of applications, from thermal face recognition to thermal expression recognition.
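The abstract above pairs local descriptors with distance metrics for re-identification. As a minimal sketch of that idea, the snippet below implements a basic 8-neighbour LBP code image, converts it into a normalized histogram descriptor, and compares two descriptors with the chi-square distance. This is a generic illustration of the descriptor-plus-metric pipeline, not the authors' exact configuration (the paper does not specify radii, histogram sizes, or the distance metric used).

```python
import numpy as np

def lbp_image(gray):
    """Compute basic 8-neighbour local binary pattern (LBP) codes.

    gray: 2-D grayscale array; the 1-pixel border is skipped for simplicity.
    Returns an (H-2, W-2) array of codes in [0, 255].
    """
    c = gray[1:-1, 1:-1].astype(np.int32)
    # 8 neighbours, enumerated clockwise starting from the top-left pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=np.int32)
    h, w = gray.shape
    for bit, (dy, dx) in enumerate(shifts):
        n = gray[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx].astype(np.int32)
        # set bit if the neighbour is at least as bright as the centre pixel
        codes += (n >= c).astype(np.int32) << bit
    return codes

def lbp_histogram(gray, bins=256):
    """Normalized histogram of LBP codes, usable as a face descriptor."""
    h, _ = np.histogram(lbp_image(gray), bins=bins, range=(0, bins))
    return h / max(h.sum(), 1)

def chi_square(h1, h2, eps=1e-10):
    """Chi-square distance between two histograms (0 for identical inputs)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))
```

In a re-identification setting, a probe image's descriptor would be compared against a gallery of enrolled descriptors and the smallest distance taken as the match; practical systems typically compute histograms over a grid of face sub-regions and concatenate them rather than using one global histogram.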
Biometric identification systems, i.e., systems that are able to recognize humans by analyzing their physiological or behavioral characteristics, have gained a lot of interest in recent years. They can be used to raise the security level in certain institutions or can serve as a convenient replacement for PINs and passwords for regular users. Automatic face recognition is one of the most popular biometric technologies, widely used even by many low-end consumer devices such as netbooks. However, even the most accurate face identification algorithm would be useless if it could be fooled by presenting a photograph of a person instead of the real face. Therefore, proper liveness detection is extremely important. In this paper, we present a method that differentiates between video sequences showing real persons and their photographs. First, we calculate the optical flow of the face region using the Farnebäck algorithm. Then, we convert the motion information into images and perform initial data selection. Finally, we apply a Support Vector Machine (SVM) to distinguish between real faces and photographs. The experimental results confirm that the proposed approach could be successfully applied in practice.