CHARACTERIZATION OF ADVERSARIAL VULNERABILITIES IN NEURAL NETWORK ARCHITECTURES FOR IMAGE RECOGNITION

Authors

Abstract

Convolutional Neural Networks (CNNs) are central to modern computer vision, particularly in image recognition systems deployed in safety-critical scenarios such as traffic and parking management. Despite their high predictive performance under nominal conditions, these models are highly susceptible to adversarial perturbations that remain almost imperceptible to the human eye. This paper investigates the adversarial vulnerability of two widely used architectures, DenseNet121 and GoogLeNet (InceptionV3), in the task of parking-space occupancy recognition. Both models were trained and evaluated on a subset of the PKLot dataset comprising images from three distinct parking lots under different weather conditions. Adversarial examples were generated with the Fast Gradient Sign Method (FGSM) and the Carlini & Wagner (C&W) attack and then presented to the trained models at inference time. The experimental results show that, although the original networks achieve high accuracy, precision, and recall on clean data, their predictions degrade substantially under adversarial perturbations, with a marked drop in correct occupancy estimates. The C&W attack consistently induces stronger performance degradation than FGSM, exposing critical weaknesses in state-of-the-art CNN architectures when confronted with carefully crafted adversarial inputs.
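To make the two attack settings named above concrete, the following is a minimal PyTorch sketch of an FGSM attack. The function name, the epsilon value, and the assumption that inputs are normalized to [0, 1] are illustrative choices, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """One-step FGSM: perturb inputs along the sign of the loss gradient
    so as to maximally increase the classification loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # x_adv = x + epsilon * sign(grad_x J(theta, x, y))
    adv_images = images + epsilon * images.grad.sign()
    # Keep pixel values in the valid [0, 1] range (an assumed normalization).
    return adv_images.clamp(0, 1).detach()
```

The C&W attack, by contrast, is optimization-based: it searches for the smallest perturbation (here in the L2 sense) that still flips the prediction. Below is a simplified, untargeted sketch under the same assumptions; it omits the binary search over the trade-off constant c used in the original formulation.

```python
def cw_l2_attack(model, images, labels, c=1.0, steps=100, lr=0.01, kappa=0.0):
    """Simplified untargeted C&W L2 attack (no binary search over c)."""
    # Optimize in tanh space so perturbed pixels automatically stay in [0, 1].
    w = torch.atanh((images * 2 - 1).clamp(-0.999, 0.999)).detach().requires_grad_(True)
    optimizer = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        adv = (torch.tanh(w) + 1) / 2
        logits = model(adv)
        true_logit = logits.gather(1, labels.unsqueeze(1)).squeeze(1)
        # Best competing logit: mask out the true class before taking the max.
        other_logit = logits.scatter(1, labels.unsqueeze(1), float('-inf')).max(1).values
        # Margin term f(x'): positive while the true class still wins.
        f = (true_logit - other_logit + kappa).clamp(min=0)
        # Per-sample objective: squared L2 distortion plus weighted margin.
        loss = ((adv - images) ** 2).flatten(1).sum(1) + c * f
        optimizer.zero_grad()
        loss.sum().backward()
        optimizer.step()
    return ((torch.tanh(w) + 1) / 2).detach()
```

Because C&W directly minimizes distortion subject to misclassification, it typically finds smaller yet more damaging perturbations than the single-step FGSM, which is consistent with the stronger degradation the abstract reports for C&W.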

Published

2026-04-27