Title:
Stochastic optimization under distributional shifts
Abstract:
Learning problems commonly exhibit an interesting feedback
mechanism wherein the population data reacts to decision makers'
actions. This is the case for example when members of the population
respond to a deployed classifier by manipulating their features so as
to improve the likelihood of being positively labeled. In this way,
the population is manipulating the learning process by distorting the
data distribution that is accessible to the learner. In this talk, I will present some recent modelling frameworks and algorithms for dynamic problems of this type, rooted in stochastic optimization and game theory.
Joint work with Evan Faulkner (UW), Maryam Fazel (UW), Adhyyan Narang
(UW), Lillian J. Ratliff (UW), and Lin Xiao (Facebook AI).
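To make the feedback loop in the abstract concrete, here is a minimal Python sketch (not taken from the talk) of a learner repeatedly deploying a linear classifier while the population shifts its features in response to the deployed decision rule. The response model `respond`, the helper `logistic_grad`, the synthetic data, and all constants are illustrative assumptions, not the speaker's actual formulation.

```python
# Sketch of decision-dependent learning: the learner only ever sees the
# data distribution induced by the classifier it has just deployed.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 1000
base_X = rng.normal(size=(n, d))                           # features before manipulation
true_w = rng.normal(size=d)
y = np.sign(base_X @ true_w + 0.1 * rng.normal(size=n))    # fixed labels in {-1, +1}

def respond(X, theta, eps=0.5):
    """Assumed population reaction: each member nudges its features in the
    direction that raises its score under the deployed rule theta."""
    return X + eps * theta / (np.linalg.norm(theta) + 1e-12)

def logistic_grad(theta, X, y):
    """Gradient of the average logistic loss at theta on data (X, y)."""
    margins = y * (X @ theta)
    return -(X * (y / (1.0 + np.exp(margins)))[:, None]).mean(axis=0)

# Repeated gradient descent on the shifting distribution: deploy theta,
# observe the reacted data, take a step, and repeat.
theta = np.zeros(d)
step = 0.5
for t in range(200):
    X_shifted = respond(base_X, theta)    # distribution distorted by theta
    theta -= step * logistic_grad(theta, X_shifted, y)

print("final decision rule:", np.round(theta, 3))
```

Under assumptions of this kind, the iteration settles at a point that is optimal for the very distribution it induces, which is the type of equilibrium behavior the stochastic-optimization and game-theoretic frameworks in the talk are designed to analyze.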
Bio:
Dmitriy Drusvyatskiy received his PhD from the Operations
Research and Information Engineering department at Cornell University
in 2013, followed by a postdoctoral appointment in the Combinatorics
and Optimization department at the University of Waterloo, 2013-2014.
He joined the Mathematics department at the University of Washington
as an Assistant Professor in 2014 and was promoted to Associate
Professor in 2019.
Dmitriy's research broadly focuses on designing and analyzing
algorithms for large-scale optimization problems, primarily motivated
by applications in data science. Dmitriy has received a number of
awards, including the Air Force Office of Scientific Research (AFOSR)
Young Investigator Program (YIP) Award, an NSF CAREER award, the
INFORMS Optimization Society Young Researcher Prize in 2019, and
finalist citations for the Tucker Prize 2015 and the Young Researcher
Best Paper Prize at ICCOPT 2019. Dmitriy is currently a co-PI of the
NSF-funded Transdisciplinary Research in Principles of Data Science
(TRIPODS) institute at the University of Washington.
Research is currently supported by NSF CAREER grant DMS 1651851 and NSF grant CCF 1740551.