Adversarial Machine Learning
Seminar | April 26 | 12-1 p.m. | 205 South Hall
Doug Tygar, Professor, Department of Electrical Engineering and Computer Sciences
Center for Long-Term Cybersecurity (CLTC)
Please join us on April 26, 2018, at 12 p.m. for an interactive seminar on adversarial machine learning, featuring Doug Tygar, Professor of Computer Science and of Information Management at UC Berkeley.
A light lunch is available for those who RSVP to attend.
Abstract
Machine learning would seem to be a powerful technology for Internet computer security. If machines can learn when a system is functioning normally and when it is under attack, then we can build mechanisms that automatically and rapidly respond to emerging attacks. Such a system might be able to automatically screen out a wide variety of spam, phishing, network intrusions, malware, and other nasty Internet behavior. But the actual deployment of machine learning in computer security has been less successful than we might hope. What accounts for the difference? To understand the issues, let's look more closely at what happens when we use machine learning. In one popular model, supervised learning, we train a system using labeled data to produce a classifier. While standard machine learning algorithms are robust against input data with errors from random distributions, it turns out that they are vulnerable to errors that are strategically chosen by an adversary. In this talk, I will demonstrate a number of methods that adversaries can use to corrupt machine learning.
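The contrast the abstract draws — robustness to random errors but vulnerability to strategically chosen ones — can be illustrated with a minimal sketch (not from the talk; the classifier, data distribution, and perturbation budget here are illustrative assumptions). A simple linear classifier tolerates random input noise of a given size, yet a perturbation of the same size aimed along the classifier's weight vector degrades it sharply:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50   # input dimension (illustrative)
n = 500  # points per class

# Two Gaussian classes separated along a mean-difference direction.
mu = np.full(d, 0.35)
X0 = rng.normal(-mu, 1.0, size=(n, d))   # class -1
X1 = rng.normal(+mu, 1.0, size=(n, d))   # class +1

# "Train" a linear classifier: nearest centroid, i.e. w = mean difference,
# with the decision boundary at the midpoint of the two centroids.
w = X1.mean(axis=0) - X0.mean(axis=0)
b = -0.5 * (X1.mean(axis=0) + X0.mean(axis=0)) @ w
predict = lambda X: np.sign(X @ w + b)

# Fresh test set.
T = np.vstack([rng.normal(-mu, 1.0, size=(n, d)),
               rng.normal(+mu, 1.0, size=(n, d))])
y = np.r_[-np.ones(n), np.ones(n)]
acc = lambda X: (predict(X) == y).mean()

eps = 2.0  # perturbation budget (L2 norm), same for both perturbations

# Random noise of norm eps: a random direction's component along w is
# only about eps/sqrt(d), so predictions barely change.
noise = rng.normal(size=T.shape)
noise *= eps / np.linalg.norm(noise, axis=1, keepdims=True)
acc_random = acc(T + noise)

# Adversarial noise of the SAME norm: push each point straight toward
# the decision boundary, along -y * w / ||w||.
adv = -y[:, None] * (w / np.linalg.norm(w)) * eps
acc_adv = acc(T + adv)

print(f"clean: {acc(T):.2f}  random: {acc_random:.2f}  adversarial: {acc_adv:.2f}")
```

The same L2 budget leaves accuracy nearly untouched when spent randomly but causes a large drop when spent strategically — the gap grows with dimension, since a random direction wastes almost all of its norm orthogonal to the decision boundary.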
My colleagues and I at UC Berkeley, along with other research teams around the world, have been studying these problems and developing new machine learning algorithms that are robust against adversarial input. The search for adversarial machine learning algorithms is thrilling: it combines the best work in robust statistics, machine learning, and computer security. One significant tool security researchers use is the ability to look at attack scenarios from the adversary's perspective (the black hat approach), and in that way show the limits of computer security techniques. In the field of adversarial machine learning, this approach yields fundamental insights. Even though a growing number of adversarial machine learning algorithms are available, the black hat approach shows us that there are some theoretical limits to their effectiveness.
This talk discusses joint work with Anthony Joseph and other members of the SecML research group at UC Berkeley.
About the Speaker
Doug Tygar works in the areas of computer security, privacy, and electronic commerce. His current research includes privacy, security issues in sensor webs, digital rights management, and usable computer security. His awards include a National Science Foundation Presidential Young Investigator Award, an Okawa Foundation Fellowship, a teaching award from Carnegie Mellon, and invited keynote addresses at PODC, PODS, VLDB, and many other conferences.
Doug Tygar has written three books; his book Secure Broadcast Communication in Wired and Wireless Networks (with Adrian Perrig) is a standard reference and has been translated into Japanese. He designed cryptographic postage standards for the US Postal Service and has helped build a number of security and electronic commerce systems, including Strongbox, Dyad, NetBill, and Micro-Tesla. He served as chair of the Defense Department's ISAT Study Group on Security with Privacy, and he was a founding board member of ACM's Special Interest Group on Electronic Commerce.
This event is presented as part of the UC Berkeley Center for Long-Term Cybersecurity's 2018 Spring Seminar Series.
All Audiences
RSVP online by April 24.
