Online A/B tests have become an indispensable tool across the technology industry: when performed correctly, "online" experiments can inform effective decision making and product development. It should therefore not be surprising that Gupta et al. [2019] estimate that online businesses alone collectively run hundreds of thousands of experiments annually.
Modern online experiments are often run in marketplaces where multiple populations of units (e.g., buyers and sellers) with competing interests and strategic responses interact and dynamically adapt their behavior to the treatment over time. Despite this modern setting, the industry still heavily relies on assumptions and corresponding designs that closely resemble those of classical randomized experiments dating back to Neyman [1923/1990] and Fisher [1937]. A natural concern in these settings is that the presence of cross-unit interference (spillovers) might invalidate the analysis. To address these shortcomings, a rapidly growing literature on experimental design in settings with interference or spillovers has developed over the last few decades [Hudgens and Halloran, 2008, Rosenbaum, 2007, Aronow, 2012, VanderWeele et al., 2014, Ogburn and VanderWeele, 2014, Aronow and Samii, 2017]. In this paper we show that even in settings where interference is absent, multiple randomization designs can lead to greater efficiency in estimating causal effects.
Efficient switchback experiments via multiple randomization designs
2023