Adversarial attacks on deep learning, defence mechanisms and their use for network explainability

Thursday, February 6, 2020 2 p.m. to 3 p.m.

Speaker: Ajmal Mian

From: The University of Western Australia

Abstract

Deep learning is at the heart of the current rise of machine learning and artificial intelligence. However, deep models are vulnerable to adversarial attacks: subtle perturbations of the input that lead to incorrect decisions, often made with high confidence. In this talk, I will give a brief introduction to methods for generating adversarial perturbations. I will discuss early defence mechanisms against such attacks, including our own work. I will then present our method for generating the first-ever attack on skeleton-based human action recognition that also translates to the physical world. Following this, I will explain our Label Universal Targeted Attack (LUTA), which makes a deep model predict a specific target label, with high probability, for any sample of a given source class only. This is achieved by stochastically maximizing the log-probability of the target label for the source class while suppressing leakage to the non-source classes. LUTA perturbations achieve high fooling rates on large-scale ImageNet models and transfer well to the physical world. Finally, I will demonstrate the use of LUTA as a tool for deep model autopsy: the resulting perturbation patterns reveal the inner workings of the deep models and of the training process itself, exposing the feature embedding space.
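
As a rough illustration of the objective described in the abstract, the following is a minimal sketch of a LUTA-style optimization loop in PyTorch. It is not the authors' implementation: the model, the source/non-source data loaders, and the hyperparameters (perturbation bound `eps`, step size `lr`, leakage weight `lam`, number of `steps`) are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def luta_style_perturbation(model, source_loader, nonsource_loader,
                            target_label, eps=10/255, lr=0.01, steps=500,
                            lam=1.0, device="cpu"):
    """Illustrative sketch: optimize one universal perturbation that pushes
    source-class samples toward `target_label` while penalizing leakage,
    i.e. its effect on non-source samples. All settings are assumptions."""
    model.eval()
    # One image-sized perturbation shared by all source-class samples.
    x0, _ = next(iter(source_loader))
    delta = torch.zeros_like(x0[:1], device=device, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)

    src_iter, non_iter = iter(source_loader), iter(nonsource_loader)
    for _ in range(steps):
        try:
            xs, _ = next(src_iter)
        except StopIteration:
            src_iter = iter(source_loader); xs, _ = next(src_iter)
        try:
            xn, yn = next(non_iter)
        except StopIteration:
            non_iter = iter(nonsource_loader); xn, yn = next(non_iter)
        xs, xn, yn = xs.to(device), xn.to(device), yn.to(device)

        # Maximize the log-probability of the target label on source samples.
        logp_src = F.log_softmax(model(xs + delta), dim=1)
        tgt = torch.full((xs.size(0),), target_label,
                         dtype=torch.long, device=device)
        loss_src = F.nll_loss(logp_src, tgt)

        # Suppress leakage: keep non-source samples on their true labels.
        logp_non = F.log_softmax(model(xn + delta), dim=1)
        loss_leak = F.nll_loss(logp_non, yn)

        loss = loss_src + lam * loss_leak
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Keep the perturbation within an L_inf ball of radius eps.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return delta.detach()
```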


Location:

L3Harris Engineering Center: 101


Calendar:

Events at UCF

Category:

Speaker/Lecture/Seminar
