Dissertation Defense: Towards the Safety and Robustness of Deep Models

Wednesday, November 15, 2023 2 p.m. to 4 p.m.

Announcing the Final Examination of Md Nazmul Karim for the degree of Doctor of Philosophy

The main focus of this doctoral dissertation is the problem of robust representation learning under different scenarios. Deep neural networks (DNNs) have become an integral part of recent advances in tasks such as image recognition, semantic segmentation, and object detection. Representation learning plays a crucial role in the success of DNNs: important features are extracted from data through mechanisms such as convolutional neural networks (CNNs) for image data. In practical applications, the robustness of those features must be ensured against different adversaries; hence, robust representation learning. By learning robust representations, DNNs generalize better to new data, become robust to label noise as well as domain shift, and become more resilient to outside attacks such as backdoor attacks. Accordingly, this dissertation explores the impact of robust representation learning in three directions: i) backdoor attacks, ii) backdoor defenses, and iii) noisy labels.

First, we study backdoor attack creation, detection, and removal from different perspectives. Backdoor attacks raise AI safety and robustness concerns: an adversary can insert malicious behavior into a DNN by altering its training data. We analyze how inserting a backdoor affects representation learning using the decision boundary hypothesis. Second, we aim to remove the backdoor from a DNN using two types of defense: i) training-time defense and ii) test-time defense. Training-time defense prevents the model from learning the backdoor during training, whereas test-time defense purifies the model after the backdoor has already been inserted. Third, we explore noisy label learning (NLL) from two perspectives: a) offline NLL and b) online continual NLL.
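The backdoor threat model described above — an adversary altering training data to implant malicious behavior — can be illustrated with a minimal data-poisoning sketch. The corner-patch trigger, the 10% poison rate, and the target label below are illustrative assumptions, not the attack configuration studied in the dissertation.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1, patch=3, seed=0):
    """Stamp a trigger patch on a random fraction of images and flip their labels.

    A model trained on this data learns to associate the trigger with
    `target_label` while behaving normally on clean inputs.
    """
    images, labels = images.copy(), labels.copy()
    rng = np.random.default_rng(seed)
    n_poison = int(len(images) * poison_rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Trigger: a bright square in the bottom-right corner of each chosen image.
    images[idx, -patch:, -patch:] = 1.0
    labels[idx] = target_label  # attacker-chosen class the trigger should map to
    return images, labels, idx

# Toy data: 100 all-zero 8x8 grayscale "images", labels cycling over 5 classes.
X = np.zeros((100, 8, 8))
y = np.arange(100) % 5
Xp, yp, idx = poison_dataset(X, y, target_label=0)
```

A training-time defense would aim to detect and exclude the indices in `idx` before training; a test-time defense would instead try to purify a model already trained on `Xp`, `yp`.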
Representation learning under noisy labels is severely impacted by memorization of the noisy labels, which leads to poor generalization. We address this with uniform sampling and contrastive-learning-based representation learning, and we also evaluate the algorithm's efficiency in an online continual learning setup.
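As a rough illustration of the uniform-sampling idea, the sketch below draws class-balanced batches from an imbalanced, possibly mislabeled dataset; the function name, batch size, and toy label split are assumptions for the example, not the dissertation's actual algorithm.

```python
import numpy as np

def uniform_class_sample(labels, per_class, seed=0):
    """Draw the same number of examples from every observed class so that
    no (possibly noisy) class dominates a representation-learning batch."""
    rng = np.random.default_rng(seed)
    picked = []
    for c in np.unique(labels):
        pool = np.flatnonzero(labels == c)
        # Sample with replacement only when a class has too few examples.
        picked.append(rng.choice(pool, size=per_class, replace=len(pool) < per_class))
    return np.concatenate(picked)

# Imbalanced toy label set: a 50/10/40 split over three classes.
labels = np.array([0] * 50 + [1] * 10 + [2] * 40)
batch = uniform_class_sample(labels, per_class=10)
```

Balanced batches of this kind are then a natural input to a contrastive objective, which pulls together representations of samples the current model treats as similar rather than trusting the given labels outright.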

Committee in Charge:
Nazanin Rahnavard, Chair, ECE
Azadeh Vosoughi, University of Central Florida
Chen Chen, University of Central Florida
Yogesh Singh Rawat, University of Central Florida
Mubarak Shah, Computer Science


Location:

L3Harris Engineering Center: HEC 356

Contact:

College of Graduate Studies 407-823-2766 editor@ucf.edu

Calendar:

Graduate Thesis and Dissertation

Category:

Uncategorized/Other

Tags:

engineering defense Dissertation