An Experimental Study on the Deviations in Performance of FNNs and CNNs in the Realm of Grayscale Adversarial Images
Date Issued
01-01-2023
Author(s)
Mathew, D. A. Steve
Shree, N. Durga
Chowdhary, Chiranji Lal
Abstract
Convolutional Neural Networks (CNNs) are known for their unparalleled accuracy in the classification of benign images. It is observed, however, that neural networks are prone to reduced accuracy when classifying images with noise perturbation. The present study establishes that CNNs are extremely vulnerable when predicting noisy images, while Feed-forward Neural Networks (FNNs) are least affected by noise perturbation, maintaining their accuracy almost undisturbed. FNNs show better classification accuracy when tested with noise-intensive, single-channel images that appear as sheer noise to human vision. The hand-written digit images from the MNIST dataset are classified using FNN architectures with 1 and 2 hidden layers and CNN architectures with 3, 4, 6, and 8 convolutions, which yield the stated experimental inferences. A systematic analysis of the deviations in the performance of these architectures shows that FNNs maintain a classification accuracy of more than 85% irrespective of the intensity of noise, whereas CNNs exhibit a steady decline in classification accuracy as noise intensity increases. Correlation analysis and mathematical modelling of the accuracy trends indicate that the rate at which classification accuracy declines with increasing noise intensity for the CNN with 8 convolutions is half that of the other CNNs. This experimental study is a step towards quantifying the performance of deep learning image classification models in the context of adversarial images.
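As a rough illustration of the setup the abstract describes, the following is a minimal sketch, not the authors' code: it assumes TensorFlow/Keras, and the layer widths, noise levels, and epoch count are illustrative placeholders. It trains a one-hidden-layer FNN and a small CNN on MNIST, then evaluates both on test images perturbed with Gaussian noise of increasing intensity.

```python
# Minimal sketch (assumed Keras setup, not the paper's exact configuration):
# compare FNN vs. CNN accuracy on MNIST under increasing Gaussian noise.
import numpy as np
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# FNN with one hidden layer (the paper also studies a 2-hidden-layer variant)
fnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# CNN with 3 convolutional layers (the paper studies 3, 4, 6, and 8)
cnn = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Reshape((28, 28, 1)),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

for model in (fnn, cnn):
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(x_train, y_train, epochs=3, verbose=0)

# Perturb the test set with zero-mean Gaussian noise of increasing std. dev.
# (a placeholder for whatever noise intensities the study actually used).
for sigma in (0.0, 0.2, 0.4, 0.8):
    noisy = np.clip(x_test + np.random.normal(0.0, sigma, x_test.shape), 0, 1)
    for name, model in (("FNN", fnn), ("CNN", cnn)):
        _, acc = model.evaluate(noisy, y_test, verbose=0)
        print(f"sigma={sigma:.1f}  {name} accuracy={acc:.3f}")
```

Under the abstract's claim, a run of this kind would show the FNN's accuracy staying roughly flat across the noise levels while the CNN's accuracy falls as sigma grows.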
Volume
16