Title:
When Does Interference Matter? Decision-Making in Platform Experiments
Abstract:
Online platforms and marketplaces use A/B experiments to test new features and design changes. Because of constraints on inventory, such experiments typically yield biased estimates of treatment effects owing to *interference* between the treatment and control groups; this phenomenon has been extensively studied in recent literature. By contrast, there has been relatively little discussion of the impact of interference on *decision-making*. In this talk, we consider a benchmark Markovian model of a capacity-constrained platform, and study the impact of interference on (1) the false positive probability and (2) statistical power. We show that for a particular class of "monotone" treatments (informally, treatments where the sign of the effect does not depend on the level of available inventory), using the standard t statistic with the naïve difference-in-means estimator and the classical variance estimator both correctly controls the false positive probability and generally yields *higher* statistical power than any unbiased estimation method. We also show that, in principle, these guarantees can break down when treatments are not monotone.
Our results have important implications for the practical deployment of debiasing strategies for A/B experiments. In particular, they highlight the need for platforms to carefully define their objectives and understand the nature of their interventions when determining appropriate estimation and decision-making approaches. Notably, when interventions are monotone, the platform may actually be worse off by pursuing a debiased decision-making approach.
Joint work with Hannah Li, Anushka Murthy, and Gabriel Weintraub.
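For intuition, the sketch below simulates a naïve analysis of the kind described in the abstract. It is purely illustrative and not the paper's benchmark model: a hypothetical capacity-constrained platform is modeled as a simple Markov chain over available inventory, arriving customers are Bernoulli-randomized into treatment and control, and the standard two-sample t statistic is formed from the difference-in-means estimator with the classical variance estimator. All model parameters (capacity, booking probabilities, the monotone lift, return rate) are assumptions chosen for illustration.

```python
# Illustrative sketch only; all parameters are assumptions, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

# --- Assumed model parameters (hypothetical) ---
CAPACITY = 20          # total units of inventory
P_RETURN = 0.1         # per-period probability an occupied unit becomes available again
P_BOOK_CONTROL = 0.30  # booking probability for a control customer (if inventory available)
P_BOOK_TREAT = 0.36    # booking probability for a treated customer (a "monotone" lift)
N_CUSTOMERS = 20_000   # number of sequential customer arrivals

available = CAPACITY
outcomes, assignments = [], []

for _ in range(N_CUSTOMERS):
    # Occupied units return to the available pool independently each period.
    occupied = CAPACITY - available
    available += rng.binomial(occupied, P_RETURN)

    # Bernoulli randomization of the arriving customer.
    treated = rng.random() < 0.5
    p_book = P_BOOK_TREAT if treated else P_BOOK_CONTROL

    # Bookings require available inventory; this shared constraint is the source of
    # interference, since treated bookings deplete inventory seen by control customers.
    booked = int(available > 0 and rng.random() < p_book)
    available -= booked

    assignments.append(treated)
    outcomes.append(booked)

y = np.array(outcomes, dtype=float)
w = np.array(assignments)

# Naïve difference-in-means estimator of the treatment effect.
y_t, y_c = y[w], y[~w]
diff_in_means = y_t.mean() - y_c.mean()

# Classical variance estimator and the standard two-sample t statistic.
var_hat = y_t.var(ddof=1) / len(y_t) + y_c.var(ddof=1) / len(y_c)
t_stat = diff_in_means / np.sqrt(var_hat)

print(f"difference-in-means estimate: {diff_in_means:.4f}")
print(f"t statistic:                  {t_stat:.2f}")
```

Comparing the resulting decision (e.g., rejecting at |t| > 1.96) against the treatment's true system-level effect is one way to explore, under these assumed dynamics, the false positive and power questions the talk studies.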
Bio:
Ramesh Johari is a Professor at Stanford University, with a full-time appointment in the Department of Management Science and Engineering (MS&E), and a courtesy appointment in the Department of Electrical Engineering (EE). He is an associate director of Stanford Data Science, and co-director of the Stanford Causal Science Center. He is a member of the Operations Research group and the Social Algorithms Lab (SOAL) in MS&E, the Information Systems Laboratory in EE, and the Institute for Computational and Mathematical Engineering. He received an A.B. in Mathematics from Harvard, a Certificate of Advanced Study in Mathematics from Cambridge, and a Ph.D. in Electrical Engineering and Computer Science from MIT. His current research interests include market design, causal inference, and experimentation.