Abstract |
---|
Localizing facial features is a critical component in many computer vision applications such as expression recognition, face recognition, face tracking, animation, and red-eye correction. Practical applications require detectors that operate reliably under a wide range of conditions, including variations in illumination, pose, ethnicity, gender and age. One challenge for the development of such detectors is the inherent trade-off between robustness and precision: robust detectors tend to provide poor localization, while detectors sensitive to small changes in local structure, which are needed for precise localization, generate a large number of false alarms. Here we present an approach to this trade-off based on context-dependent inference. First, robust detectors are used to detect contexts in which target features occur; then, precise detectors are trained to localize the features given the detected context. This paper describes the approach and presents a thorough empirical examination of the parameters needed to achieve practical levels of performance, including the size of the training database, the size of the detector's receptive fields, and methods for information integration. The approach operates in real time and achieves, to our knowledge, the most accurate localization performance to date. |
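The two-stage idea in the abstract, a robust detector first narrows the search to a context region, and a precise detector then localizes the feature only within that region, can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the detector functions, the window size, and the synthetic image are all assumptions, with simple intensity scores standing in for trained detectors.

```python
import numpy as np

def robust_context_detector(image, window=8):
    """Stage 1 (illustrative): coarse scan that returns the window with the
    highest aggregate intensity -- a stand-in for a robust context detector."""
    h, w = image.shape
    best, best_box = -np.inf, None
    for y in range(0, h - window + 1, window):
        for x in range(0, w - window + 1, window):
            score = image[y:y + window, x:x + window].sum()
            if score > best:
                best, best_box = score, (y, x, window)
    return best_box  # (top, left, size) of the detected context

def precise_feature_detector(image, box):
    """Stage 2 (illustrative): pixel-level search restricted to the context --
    a stand-in for a precise, context-conditioned feature detector."""
    y0, x0, s = box
    patch = image[y0:y0 + s, x0:x0 + s]
    dy, dx = np.unravel_index(np.argmax(patch), patch.shape)
    return (y0 + dy, x0 + dx)

# Synthetic image: a bright "context" region containing one peak "feature".
rng = np.random.default_rng(0)
img = rng.random((32, 32)) * 0.1
img[16:24, 8:16] += 0.5   # context region (e.g. a face)
img[19, 11] = 5.0         # target feature (e.g. an eye corner)

ctx = robust_context_detector(img)
loc = precise_feature_detector(img, ctx)
print(ctx, loc)  # the feature is found inside the detected context
```

The point of the design is that the expensive, false-alarm-prone precise search runs only inside the region the robust stage has already vetted, rather than over the whole image.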
Year | DOI | Venue |
---|---|---|
2009 | 10.1142/S0218001409007247 | International Journal of Pattern Recognition and Artificial Intelligence |
Keywords | Field | DocType |
Machine vision, feature detection, image registration | Machine vision, Computer science, Robustness (computer science), Artificial intelligence, Detector, Facial recognition system, Information integration, Computer vision, Pattern recognition, Inference, Image registration, Machine learning, Facial motion capture | Journal |
Volume | Issue | ISSN |
23 | 3 | 0218-0014 |
Citations | PageRank | References |
27 | 2.71 | 13 |
Authors |
---|
3 |
Name | Order | Citations | PageRank |
---|---|---|---|
Micah Eckhardt | 1 | 89 | 11.86 |
Ian R. Fasel | 2 | 656 | 40.60 |
Javier R. Movellan | 3 | 1853 | 150.44 |