ROBUST METHODS AGAINST ADVERSARIAL ATTACKS IN CLASSIFICATION OF EMOTIONS IN AUDIO

Authors

  • Victor Begha, UEPG
  • Alceu de Souza Britto Júnior, Universidade Estadual de Ponta Grossa

Abstract

Adversarial attacks can severely degrade the performance of neural networks, yet their specific behavior in audio emotion classification has received little study. This paper presents, analyzes, and compares several defense methods against common adversarial attacks. Experimental results show that these techniques can mitigate attacks that would otherwise lower accuracy by over 75%, reducing the accuracy loss to under 10%. Furthermore, the paper describes how this matters not only for robustness against individual attacks, but also for the robustness of the system against small input perturbations in general.
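The paper itself does not include code; as a purely illustrative sketch of the kind of attack such defenses target, below is a minimal NumPy implementation of the Fast Gradient Sign Method (FGSM), one of the most common adversarial attacks, applied to a toy logistic classifier standing in for an audio emotion model. All names and values here are hypothetical, not taken from the paper.

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """FGSM attack on a logistic classifier (toy stand-in for an audio model).

    x: input feature vector (e.g., extracted audio features)
    y: true label in {0, 1}
    w, b: model parameters
    eps: perturbation budget (max per-feature change)
    Returns x + eps * sign(dL/dx), where L is the cross-entropy loss.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid probability of class 1
    grad_x = (p - y) * w           # gradient of cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy example: a clean input that the model classifies correctly as class 1.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.array([2.0, -1.0, 1.0])     # w @ x + b = 4.5 > 0, so predicted class 1
x_adv = fgsm_perturb(x, y=1, w=w, b=b, eps=2.0)
# The perturbed input stays within eps of x per feature, yet flips the
# model's decision (w @ x_adv + b < 0), which is what the paper's defense
# methods aim to prevent.
```

A defense such as adversarial training would feed inputs like `x_adv` back into the training loop with the correct label, so the model learns to classify them correctly despite the perturbation.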

Published

2021-04-14

Section

Articles