1. Lucey P., Cohn J. F., Kanade T., Saragih J., Ambadar Z., Matthews I. The extended Cohn-Kanade dataset (CK+): A complete dataset for action unit and emotion-specified expression. In 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition - Workshops, IEEE, pp. 94-101, 2010.
2. FER-2013. https://www.kaggle.com/c/challenges-in-representation-learning-facial-expression-recognition-challenge/data. (accessed on 21 November 2018)
3. KDEF. https://www.emotionlab.se/resources/kdef. (accessed on 27 November 2017)
4. Simonyan K., Zisserman A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
5. Harandi M. T., Sanderson C., Shirazi S., Lovell B. C. Graph embedding discriminant analysis on Grassmannian manifolds for improved image set matching. In CVPR 2011, IEEE, pp. 2705-2712, 2011.
6. Viola P., Jones M. Rapid object detection using a boosted cascade of simple features. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Kauai, Hawaii, USA, 2001.
7. Vora S., Rangesh A., Trivedi M. M. On generalizing driver gaze zone estimation using convolutional neural networks. In 2017 IEEE Intelligent Vehicles Symposium (IV), IEEE, pp. 849-854, 2017.
8. Pantic M., Valstar M., Rademaker R., Maat L. Web-based database for facial expression analysis. In 2005 IEEE International Conference on Multimedia and Expo, IEEE, p. 5, 2005.
9. Izard C. E. Emotion theory and research: Highlights, unanswered questions, and emerging issues. Annual Review of Psychology, vol. 60, no. 1, 2009.
10. Eyben F., Wöllmer M., Poitschke T., Schuller B., Blaschke C., Färber B., Nguyen-Thien N. Emotion on the road—necessity, acceptance, and feasibility of affective computing in the car. Advances in Human-Computer Interaction, 2010.
11. Krizhevsky A., Sutskever I., Hinton G. E. ImageNet classification with deep convolutional neural networks. Communications of the ACM, vol. 60, no. 6, pp. 84-90, 2017.