Int J Performability Eng, 2023, Vol. 19, Issue (9): 579-586. doi: 10.23940/ijpe.23.09.p3.579586


Hyperparameter Tuning in Deep Learning-Based Image Classification to Improve Accuracy using Adam Optimization

Janarthanan Sekar* and Ganesh Kumar T   

  1. School of Computing Science and Engineering, Galgotias University, Greater Noida, India
  • Contact: *E-mail address: jana.mkce@gmail.com

Abstract: Deep learning (DL) is a cutting-edge image-processing technology in which imagery from various satellite sources is processed for analysis, enhancement, and classification. This article presents a multilayer DL framework that classifies different types of vegetation and land cover using IRS-P6 satellite images drawn from multiple time scales and sources. The core of the design is an ensemble of supervised and unsupervised neural networks (NNs) for optical image categorization and for restoring data lost to mist, reflections, and other natural effects that degrade images. As the baseline supervised architectures, we contrast the traditional densely connected multilayer perceptron (MLP) and random forest, the most popular method in the remote sensing field, with convolutional NNs (CNNs). In general, these conventional procedures yielded lower accuracy, 94.3%, and required longer computation times to train the model. The hyperparameters to tune are the number of neurons, the input layer, the optimizer, the number of epochs, the filter size, and the number of iterations; a second stage adjusts the number of layers, a step that other conventional algorithms lack. Accuracy can also be degraded by using too many layers. To overcome this, applying the Adam optimizer yields a higher accuracy of 96.72% with faster computation and lower memory usage.

Key words: accuracy assessment, classification, CNNs, deep learning, image classification, remote sensing
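The Adam optimizer highlighted in the abstract combines momentum (a running mean of gradients) with a per-parameter adaptive step size (a running mean of squared gradients), plus bias correction for both estimates. The following is a minimal illustrative sketch of one Adam update applied to a toy scalar objective; it is not the paper's implementation, and the learning rate and objective are chosen purely for demonstration.

```python
import math

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter.

    m: exponential moving average of gradients (first moment)
    v: exponential moving average of squared gradients (second moment)
    t: 1-based step count, used for bias correction
    """
    m = beta1 * m + (1 - beta1) * grad           # update first moment
    v = beta2 * v + (1 - beta2) * grad * grad    # update second moment
    m_hat = m / (1 - beta1 ** t)                 # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)                 # bias-corrected second moment
    theta = theta - lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3).
theta, m, v = 0.0, 0.0, 0.0
for t in range(1, 5001):
    grad = 2 * (theta - 3)
    theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)
# theta is now close to the minimizer at 3
```

In a deep network such as the CNN described above, the same update is applied element-wise to every weight tensor; the learning rate, beta values, and epsilon are themselves hyperparameters that tuning frameworks sweep alongside layer counts and filter sizes.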