Face Expression Recognition in Grayscale Images Using Image Segmentation and Deep Learning

Authors

  • Ghena Zuhair Abdualghani, Altinbas University, Electrical and Computer Engineering, Türkiye
  • Asst. Prof. Dr. Sefer Kurnaz, Altinbas University, Electrical and Computer Engineering, Türkiye

Keywords:

Deep learning (DL), convolutional neural network (CNN), machine learning, face recognition, VGG19.

Abstract

Reading the emotional states conveyed by the face is central to human-centered computing. Face recognition enables a computer to identify individuals in a photograph or video; facial expression recognition, by contrast, helps a computer analyze a person's emotional state, leading to richer human-computer interaction. Several salient facial traits, such as the eyes and the shape of the lips, can be used to decipher an individual's emotions: when people grin, their lips curve upward and their eyebrows lower, and comparable cues exist for anger, grief, surprise, and other emotions. This study proposes an approach based on transfer learning and deep learning techniques for human facial expression recognition. The Extended Cohn-Kanade (CK+) dataset is used for the experiments. The proposed approach evaluates four deep learning models, namely VGG19, a Conv2D CNN, VGG16, and DenseNet201. Combined with CNN features, the VGG16 model outperforms all the existing approaches as well as the other DL models used in this research, with an accuracy of 99.10%. The proposed approach can efficiently identify human emotions from a grayscale image in a very short time.
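A practical detail behind the pipeline the abstract describes: CK+ frames are grayscale, while ImageNet-pretrained backbones such as VGG16 and VGG19 expect three-channel input, so the single channel is typically replicated before being fed to the network, and the emotion labels are one-hot encoded for classification. The sketch below illustrates that preprocessing only; the helper names, the 48×48 crop size, and the label ordering are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

# The seven CK+ expression labels (the ordering here is an assumption).
CKPLUS_EMOTIONS = ["anger", "contempt", "disgust", "fear",
                   "happy", "sadness", "surprise"]

def to_three_channel(gray_batch):
    """Replicate a grayscale batch (N, H, W) into the (N, H, W, 3)
    layout expected by ImageNet-pretrained backbones such as VGG16."""
    x = gray_batch.astype("float32") / 255.0          # scale pixels to [0, 1]
    return np.repeat(x[..., np.newaxis], 3, axis=-1)  # copy the channel 3 times

def one_hot(labels, num_classes=len(CKPLUS_EMOTIONS)):
    """Turn integer class ids into one-hot target vectors."""
    return np.eye(num_classes, dtype="float32")[labels]

# Tiny synthetic batch standing in for CK+ face crops.
batch = np.random.randint(0, 256, size=(4, 48, 48), dtype=np.uint8)
x = to_three_channel(batch)
y = one_hot(np.array([0, 3, 6, 1]))
print(x.shape, y.shape)  # (4, 48, 48, 3) (4, 7)
```

From here, `x` and `y` could be passed to a fine-tuned backbone; the channel replication keeps the pretrained first-layer filters usable without retraining them from scratch.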

References


Calvo, M. G., & Nummenmaa, L. (2016). Perceptual and affective mechanisms in facial expression recognition: An integrative review. Cognition and Emotion, 30(6), 1081-1106. doi: 10.1080/02699931.2015.1049124.

Happy, S. L., & Routray, A. (2014). Automatic facial expression recognition using features of salient facial patches. IEEE Transactions on Affective Computing, 6(1), 1-12.

Furrer, F., Burri, M., Achtelik, M., & Siegwart, R. (2016). RotorS—A modular Gazebo MAV simulator framework. In Robot Operating System (ROS) (pp. 595-625). Springer, Cham.

Scherer, K. R. (2000). Psychological models of emotion. The neuropsychology of emotion, 137(3), 137-162.

Picard, R. W. (2000). Affective computing. MIT Press.

Sorbello, R., Chella, A., Calí, C., Giardina, M., Nishio, S., & Ishiguro, H. (2014). Telenoid android robot as an embodied perceptual social regulation medium engaging natural human–humanoid interaction. Robotics and Autonomous Systems, 62(9), 1329-1341.

Cui, R., Liu, M., & Liu, M. (2016). Facial expression recognition based on ensemble of multiple CNNs. In Chinese Conference on Biometric Recognition (CCBR 2016), LNCS 9967 (pp. 511-578). Springer International Publishing.

Sebe, N., Lew, M. S., Sun, Y., Cohen, I., Gevers, T., & Huang, T. S. (2007). Authentic facial expression analysis. Image and Vision Computing, 25(12), 1856-1863.

Siddiqi, M. H., Ali, R., Sattar, A., Khan, A. M., & Lee, S. (2014). Depth camera-based facial expression recognition system using multilayer scheme. IETE Technical Review, 31(4), 277-286.

Trujillo, L., Olague, G., Hammoud, R., & Hernandez, B. (2005). Automatic feature localization in thermal images for facial expression recognition. In 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05)- Workshops (pp. 14-14). IEEE.

Poursaberi, A., Noubari, H. A., Gavrilova, M., & Yanushkevich, S. N. (2012). Gauss–Laguerre wavelet textural feature fusion with geometrical information for facial expression identification. EURASIP Journal on Image and Video Processing, 2012(1), 17.

Owusu, E., Zhan, Y., & Mao, Q. R. (2014). A neural-AdaBoost based facial expression recognition system. Expert Systems with Applications, 41(7), 3383-3390.

Uçar, A., Demir, Y., & Güzeliş, C. (2016). A new facial expression recognition based on curvelet transform and online sequential extreme learning machine initialized with spherical clustering. Neural Computing and Applications, 27(1), 131-142.

Jain, D. K., Shamsolmoali, P., & Sehdev, P. (2019). Extended deep neural network for facial emotion recognition. Pattern Recognition Letters, 120, 69-74. ISSN 0167-8655.

Lopes, A. T., de Aguiar, E., De Souza, A. F., & Oliveira-Santos, T. (2017). Facial expression recognition with convolutional neural networks: coping with few data and the training sample order. Pattern Recognition, 61, 610-628. ISSN 0031-3203.

Jain, N., Kumar, S., Kumar, A., Shamsolmoali, P., & Zareapoor, M. (2018). Hybrid deep neural networks for face emotion recognition. Pattern Recognition Letters, 115, 101-106. ISSN 0167-8655.

Sajjanhar, A., Wu, Z., & Wen, Q. (2018). Deep learning models for facial expression recognition. In 2018 Digital Image Computing: Techniques and Applications (DICTA) (pp. 1-6). IEEE. doi: 10.1109/DICTA.2018.8615843.

Wen, G., Hou, Z., Li, H., Li, D., Jiang, L., & Xun, E. (2017). Ensemble of deep neural networks with probability-based fusion for facial expression recognition. Cognitive Computation, 9(5), 597-610.

Zavarez, M. V., Berriel, R. F., & Oliveira-Santos, T. (2017). Cross-database facial expression recognition based on fine-tuned deep convolutional network. In 2017 30th SIBGRAPI Conference on Graphics, Patterns and Images (SIBGRAPI) (pp. 405-412). IEEE.

Abate, A. F., Nappi, M., Riccio, D., & Sabatino, G. (2007). 2D and 3D face recognition: A survey. Pattern Recognition Letters, 28(14), 1885-1906.

Li, X., Mori, G., & Zhang, H. (2006). Expression-invariant face recognition with expression classification. In The 3rd Canadian Conference on Computer and Robot Vision (CRV 2006) (pp. 77-77). IEEE.

Liang, M., & Hu, X. (2015). Recurrent convolutional neural network for object recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (pp. 3367-3375).

Cao, W., Feng, Z., Zhang, D., & Huang, Y. (2020). Facial expression recognition via a CBAM embedded network. Procedia Computer Science, 174, 463-477.

Published

2023-06-23

How to Cite

Ghena Zuhair Abdualghani, & Asst. Prof. Dr. Sefer Kurnaz. (2023). Face Expression Recognition in Grayscale Images Using Image Segmentation and Deep Learning. International Journal of Scientific Trends, 2(6), 28–44. Retrieved from http://scientifictrends.org/index.php/ijst/article/view/107

Issue

Section

Articles