Adversarial Attacks and Defences: Threats and Prospects Analysis

Graduate Thesis uoadl:3232946

Unit:
Department of Informatics and Telecommunications
Informatics
Deposit date:
2022-10-03
Year:
2022
Author:
PAPASTAVROU ARISTI
Supervisors info:
ΚΩΝΣΤΑΝΤΙΝΟΣ ΧΑΤΖΗΚΟΚΟΛΑΚΗΣ (Konstantinos Chatzikokolakis), Associate Professor, Department of Informatics and Telecommunications, National and Kapodistrian University of Athens
Original Title:
Adversarial Attacks and Defences: Threats and Prospects Analysis
Languages:
English
Greek
Translated title:
Adversarial Attacks and Defences: Threats and Prospects Analysis
Summary:
As users in the new age of data, most of the technologies we rely on in our day-to-day
lives are built on machine learning models that are now more complex than ever. Moreover,
innovative technologies and state-of-the-art networks that were once considered safe and
effective turn out to be unstable under small, carefully crafted perturbations of their input
images. Despite the importance of this phenomenon, no effective methods have been proposed
to accurately compute the robustness of state-of-the-art deep classifiers to such
perturbations on large-scale datasets. This makes it difficult to apply neural networks in
security-critical areas.
As a result, attackers could easily trick a model into making incorrect predictions or giving
away sensitive information. Fake (adversarial) data could even be used to corrupt models
without our knowledge. The field of adversarial machine learning covers both sides of this coin.
Adversarial machine learning is the study of the attacks on machine learning algorithms,
and of the defenses against such attacks. A recent survey exposes the fact that practitioners
report a dire need for better protection of machine learning systems in industrial
applications. In this thesis we study adversarial attacks that are likely to fool anyone,
from everyday users to large companies, because the results they produce look very realistic
and the error is hard to detect unless one is familiar with the data. Take self-driving
cars as an example. The everyday driver trusts that the vendors have programmed the
car well, so that when it is in autopilot it knows how to analyze the signals on the road
and when to start, stop, or go. What happens, though, when a malicious programmer
applies a black-box attack to the database of said car and mislabels the STOP sign as GO
in order to confuse the car's computer?
In fact, very small and often imperceptible perturbations of the data samples are sufficient
to fool state-of-the-art classifiers and cause incorrect classification. So, taking into
account these newly discovered weaknesses of Deep Neural Networks with respect to noise
perturbations, what does it take to create a well-structured attack that lowers a model's
confidence? Is it possible to build defence mechanisms that can secure our networks
against such adversarial, and sometimes dynamic, attacks?
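As a minimal illustration of how such a confidence-lowering perturbation works, the sketch below applies a gradient-sign step (in the spirit of the well-known FGSM attack) to a toy logistic-regression model. The weights, input, and perturbation budget are made-up values for illustration, not data or code from this thesis.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A "trained" logistic-regression model; w and b are assumed weights.
w = np.array([2.0, -3.0, 1.0])
b = 0.5

def confidence(x):
    """Model's confidence that x belongs to class 1."""
    return sigmoid(w @ x + b)

# A clean input the model classifies confidently as class 1.
x = np.array([1.0, -0.5, 0.2])

# For a linear model, the gradient of the class-1 logit w.r.t. the
# input is just w; a gradient-sign attack steps against that gradient.
epsilon = 0.3                      # L-infinity perturbation budget
x_adv = x - epsilon * np.sign(w)   # step down the class-1 gradient

print(confidence(x))      # high confidence on the clean input
print(confidence(x_adv))  # noticeably lower after the small perturbation
```

Even this tiny budget visibly erodes the model's confidence; with a larger epsilon the same step flips the predicted label entirely.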
The goal of this thesis is to further examine, analyze and understand the mechanisms and
reasoning behind adversarial attacks against machine learning models, as well as the
effectiveness of each attack against the same target database. We also present a
hybrid version of the well-known boundary attack, propose optimizations for different
attack strategies, and evaluate some known defence mechanisms against these adversarial
attacks.
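The boundary attack mentioned above is decision-based: it queries the model only for labels, starting from an already misclassified point and random-walking along the decision boundary toward the clean input. The sketch below shows that generic loop on a made-up toy classifier; it is not the hybrid variant developed in this thesis, and all step sizes and the model are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy black-box binary classifier: class 1 iff x[0] > 0.
# Stand-in for any model that only exposes its predicted label.
def predict(x):
    return int(x[0] > 0.0)

x_orig = np.array([1.0, 0.0])   # clean input, classified as class 1
x_adv = np.array([-2.0, 0.0])   # starting point, already misclassified

# Boundary-attack-style walk: repeatedly step toward the clean input,
# add random exploration noise, and keep the candidate only if it
# remains misclassified (i.e. stays on the adversarial side).
for _ in range(2000):
    step = 0.05 * (x_orig - x_adv)                   # pull toward original
    noise = 0.02 * rng.standard_normal(x_adv.shape)  # orthogonal-ish jitter
    candidate = x_adv + step + noise
    if predict(candidate) != predict(x_orig):
        x_adv = candidate

dist = np.linalg.norm(x_adv - x_orig)
print(dist)  # far below the starting distance of 3.0
```

The result is an adversarial example that sits close to the decision boundary near the clean input, found without any gradient access; that query-only setting is what makes the attack relevant to the black-box scenarios discussed above.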
Throughout this thesis, a plethora of plots and tables are provided to help the reader
follow the experiments that support this study.
Main subject category:
Technology - Computer science
Keywords:
Adversarial Machine Learning, Neural Networks, Perturbations, Data Privacy, Classification, Mislabeling, Adversarial Attacks and Defences
Index:
Yes
Number of index pages:
6
Contains images:
Yes
Number of references:
21
Number of pages:
65
Ptixiaki_Aristi_Papastavrou.pdf (2 MB)