ROBUST METHODS AGAINST ADVERSARIAL ATTACKS IN CLASSIFICATION OF EMOTIONS IN AUDIO
Abstract
Adversarial attacks can severely degrade the performance of neural networks, yet their specific behavior in audio emotion classification has received little study. This paper presents, analyzes, and compares several defense methods against common adversarial attacks. Experimental results show that these techniques can mitigate the impact of attacks that would otherwise reduce accuracy by over 75%, limiting the drop to less than 10% once a defense is applied. Furthermore, the paper describes how this is relevant not only for robustness against individual attacks, but also for the robustness of the system against small perturbations in general.
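The abstract does not name the specific attacks or defenses evaluated. As a purely illustrative sketch, the code below assumes a PyTorch audio-emotion classifier and shows the Fast Gradient Sign Method (FGSM), one of the most common gradient-based attacks, together with adversarial training, a standard defense; the function names, mixing weights, and epsilon value are assumptions for illustration, not the paper's method.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.01):
    # Fast Gradient Sign Method: perturb the input features
    # (e.g. spectrograms) in the direction that maximizes the
    # classification loss. Epsilon is an assumed step size.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + epsilon * x.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    # One training step on an equal mix of clean and adversarial
    # examples, a standard defense against gradient-based attacks.
    x_adv = fgsm_attack(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) \
         + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()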
License
This work is licensed under a Creative Commons Attribution 4.0 International License.