Int J Performability Eng, 2019, Vol. 15, Issue (3): 763-771. doi: 10.23940/ijpe.19.03.p5.763771


Facial Components-based Representation for Caricature Face Recognition

Qiang Ma* and Qingshan Liu   

  1. School of Information and Control, Nanjing University of Information Science and Technology, Nanjing, 210044, China
  • Contact: 20161221637@nuist.edu.cn
  • About author: Qiang Ma is a Master's student in the School of Information and Control at Nanjing University of Information Science and Technology. His research interests include machine learning and face recognition. Qingshan Liu received his M.S. degree from Southeast University in 2000 and his Ph.D. from the Chinese Academy of Sciences in 2003. From 2010 to 2011, he was an assistant research professor in the Department of Computer Science at the Computational Biomedicine Imaging and Modeling Center at Rutgers University. Prior to that, he was an associate professor in the National Laboratory of Pattern Recognition at the Chinese Academy of Sciences. From 2004 to 2005, he was an associate researcher in the Multimedia Laboratory at the Chinese University of Hong Kong. He is currently a professor in the School of Information and Control at Nanjing University of Information Science and Technology. His research interests include image and vision analysis.

Abstract: Caricature face recognition is an interesting but difficult task due to the large exaggeration between the two face modalities: photos and caricatures. We therefore propose a new representation for recognition that fuses the representations learned from photos, caricatures, and generated faces, where each generated face contains the four main facial components. Photos, caricatures, and generated faces are fed into Photo-ResNet, Caricature-ResNet, and Generated-ResNet, respectively, to learn modality-specific representations, and the three learned representations are then passed to a fully connected layer. We train with Softmax loss and Center Loss, which reduces the intra-class distance. To evaluate the proposed representation, we build a new dataset for caricature face recognition consisting of 259 subjects with 6,490 caricatures and 8,143 photos; it is the largest available caricature dataset. Several baseline methods are evaluated on it for caricature face recognition. To test the discriminative power of the proposed representation, two further experiments are conducted: retrieving photos given a selected caricature (CTP) and retrieving caricatures given a selected photo (PTC). Our proposed method outperforms other convolutional neural network (CNN)-based representations.
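The training objective described above combines a Softmax (cross-entropy) loss with a Center Loss term that pulls each fused feature toward its class center. A minimal NumPy sketch of that joint objective is given below; the feature dimensions, the concatenation-based fusion, and the balance weight `lam` are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def softmax_loss(logits, labels):
    """Mean cross-entropy over a batch of class logits."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def center_loss(features, labels, centers):
    """Half the mean squared distance of each feature to its class center."""
    diffs = features - centers[labels]
    return 0.5 * (diffs ** 2).sum(axis=1).mean()

def fused_representation(photo_feat, caric_feat, gen_feat):
    """Concatenate the three branch features before the shared FC layer
    (one simple fusion choice; the paper's exact fusion may differ)."""
    return np.concatenate([photo_feat, caric_feat, gen_feat], axis=1)

# Toy example: 4 samples, 2 subjects, 8-dim features per branch.
rng = np.random.default_rng(0)
labels = np.array([0, 1, 0, 1])
fused = fused_representation(rng.normal(size=(4, 8)),
                             rng.normal(size=(4, 8)),
                             rng.normal(size=(4, 8)))   # shape (4, 24)
centers = np.zeros((2, fused.shape[1]))                 # learnable in practice
logits = rng.normal(size=(4, 2))                        # from the FC layer

lam = 0.01  # assumed balance weight between the two loss terms
total = softmax_loss(logits, labels) + lam * center_loss(fused, labels, centers)
```

Minimizing the Center Loss term shrinks the spread of features within each subject, which is what makes the fused representation more discriminative across the photo and caricature modalities.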

Key words: caricature face recognition, generated face, facial components, Center Loss, caricature dataset