SECURE LEARNING IN ADVERSARIAL ENVIRONMENTS

Friday, March 24, 2017, noon to 1 p.m.

Advances in machine learning have led to rapid and widespread deployment of software-based inference and decision making, enabling applications such as data analytics, autonomous systems, and security diagnostics. Current machine learning systems, however, assume that training and test data follow the same, or similar, distributions, and do not consider active adversaries manipulating either distribution. Recent work has demonstrated that motivated adversaries can circumvent anomaly detection or classification models at test time through evasion attacks, or can inject well-crafted malicious instances into training data to induce classification errors through poisoning attacks. In addition, attacks that undermine the integrity of learning systems can also compromise the privacy of users' data.
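To make the evasion threat concrete, here is a minimal sketch of a test-time evasion attack; it is a toy example of my own, not anything from the talk. Against a linear detector, an attacker with a small perturbation budget shifts a flagged instance against the weight vector until it is classified as benign. All weights, feature values, and the budget eps below are illustrative assumptions.

    import numpy as np

    # Toy linear detector f(x) = sign(w . x + b); weights are illustrative.
    w = np.array([1.0, -2.0, 0.5])
    b = -0.1

    def predict(x):
        return np.sign(w @ x + b)

    x = np.array([0.8, -0.3, 0.4])    # instance the detector flags as malicious (+1)

    # The gradient of the score w.r.t. x is w itself, so the attacker takes
    # a fixed-size step against the sign of w (an FGSM-style perturbation).
    eps = 0.5                          # attacker's per-feature budget (assumed)
    x_adv = x - eps * np.sign(w)

    print(predict(x), predict(x_adv))  # 1.0 -1.0: the perturbed instance evades

The same picture explains poisoning: instead of perturbing a test point, the attacker injects crafted points into the training set so that the learned weight vector w itself shifts.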

In this talk, I will describe my recent research addressing evasion attacks, poisoning attacks, and privacy problems for machine learning systems in adversarial environments. The key approach is to use game-theoretic analysis, modeling the interaction between an intelligent adversary and a machine learning system as a Stackelberg game; this allows us to design robust learning strategies that explicitly account for the adversary's optimal response. I'll briefly discuss human-subject experiments that support the predictions of the mathematical models, and I will also introduce a real-world malware detection system that was deployed based on adversarial machine learning analysis.
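One common way to write this leader-follower interaction is as a bilevel optimization; the notation below is a generic sketch of my own, not necessarily the formulation used in the talk. The learner (leader) commits to model parameters \theta, and the adversary (follower) best-responds to each instance x_i within a perturbation budget \epsilon:

    % Leader: choose \theta anticipating the follower's optimal response.
    \min_{\theta} \; \sum_{i=1}^{n} \ell\bigl( f_\theta(x_i^{*}(\theta)),\, y_i \bigr)
    % Follower: given \theta, each instance is perturbed to maximize the loss.
    \quad \text{where} \quad
    x_i^{*}(\theta) \in \operatorname*{arg\,max}_{\|x - x_i\| \le \epsilon} \ell\bigl( f_\theta(x),\, y_i \bigr)

Because the leader's objective already incorporates the follower's best response, a model trained this way is robust by construction against the class of adversaries it anticipates.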

Location:

TA 117


Calendar:

Fall

Category:

Speaker/Lecture/Seminar
