The 2020 meeting of the American Economic Association begins on January 3 in San Diego, and among the Amazon economists attending will be Pat Bajari, VP and chief economist for Amazon’s Core AI group, who is a coauthor on two papers accepted to the conference.
Economic research at Amazon, Bajari explains, is distinctive in the way it crosses disciplinary boundaries. “These disciplines are like their own worlds,” he says. “It’s easy to get siloed doing engineering, machine learning, natural-language processing, computer vision, stats, operations research, economics, and so on. But when these disciplines interact, you get more interesting and useful results.”
Apples to apples
One of Bajari’s two papers at AEA is a case in point. Titled “New Goods, Productivity and the Measurement of Inflation: Using Machine Learning to Improve Quality Adjustments,” it applies new AI techniques to an old problem in the calculation of inflation rates.
“If you look at a product line, over the course of a year, 80% of the products might vanish,” Bajari explains. “When you calculate the rate of inflation, you’re usually doing an annual measure of price changes. But if 80% of products are gone, that measurement can be inaccurate.”
A famous example, Bajari explains, is personal computers in the late ’90s. At the time, he says, 95% of computers would sell out in the course of a year. The computers on the shelves one January could have very different technical specifications from those on the shelves a year later, making direct price comparison misleading.
Consequently, the standard method of calculating inflation indicated little change in the price of personal computers, even though the price of computational power was plummeting. The classical solution to this problem is so-called hedonic pricing, in which the price of a product is factored into several components, which can be compared independently.
So, for instance, late-’90s computers could be compared according to their price per megahertz of processing speed, per megabyte of random-access memory, per megabyte of storage, and so on.

Bajari’s first AEA paper updates hedonic pricing for the age of deep learning. On the paper, he joins Victor Chernozhukov, a professor of economics at MIT and a senior principal economist in Amazon’s Core AI group; Ramon Huerta, a research scientist at the University of California, San Diego, and a principal applied scientist in the Amazon North American Consumer group; George Monokroussos, a former senior economist at Amazon; and three other members of Core AI: Zhihao Cen, a senior applied scientist; Junbo Li, a senior software engineer; and Manoj Manukonda, a senior data engineer.
Instead of factoring product prices into component prices directly, the researchers trained a machine learning model to identify correlations between product features and prices. If the model is trained on data from one year but fed descriptions of products on the shelves a year later, it will estimate the products’ prices according to the earlier year’s valuation. Comparing the predicted and actual prices provides a measure of inflation.
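As a rough sketch of that idea (not the paper’s actual implementation), a model can be fit on one year’s product features and prices and then used to value the following year’s assortment at the earlier year’s implicit valuation; comparing observed prices with those predictions yields a quality-adjusted index. The feature names and the gradient-boosting model below are illustrative assumptions:

```python
# Minimal sketch of an ML-based hedonic price index (illustrative only).
# Feature names and the choice of model are assumptions, not the paper's setup.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

FEATURES = ["cpu_ghz", "ram_gb", "storage_gb", "screen_in"]  # hypothetical attributes

def hedonic_index(year1: pd.DataFrame, year2: pd.DataFrame) -> float:
    """Train on year-1 prices, value year-2 products at the year-1 valuation,
    and compare with the prices actually observed in year 2."""
    model = GradientBoostingRegressor()
    model.fit(year1[FEATURES], np.log(year1["price"]))

    predicted_at_year1_valuation = np.exp(model.predict(year2[FEATURES]))
    observed = year2["price"].to_numpy()

    # A ratio above 1 suggests prices rose relative to what the same quality
    # would have cost a year earlier; below 1 suggests quality-adjusted deflation.
    return float(np.mean(observed / predicted_at_year1_valuation))
```

Working in log prices keeps the comparison multiplicative, which matches how price indices are usually expressed, and the model could be retrained on a rolling window so the benchmark always reflects a recent valuation.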
Internally, Amazon can use this type of model to analyze business trends. But if central bankers applied a similar model to products representative of the economy as a whole, they could observe inflation rate variations in real time.
“If central bankers have a view with a one-day latency, it could give them signals about whether monetary policy is too loose or too tight,” Bajari explains.
Feedback loops
Bajari’s other AEA paper examines the design of randomized experiments. It reports work done in collaboration with Guido Imbens, a member of Core AI and the Applied Econometrics Professor and professor of economics at the Stanford Graduate School of Business; Thomas Richardson, a professor of statistics at the University of Washington and an Amazon Scholar; Brian Burdick, the director of Core AI; Ido Rosen, a principal software engineer in Burdick’s group; and James McQueen, a senior applied scientist in Amazon’s Customer Behavior Analytics group.
The most familiar example of a randomized experiment is a drug trial, where some subjects receive an experimental drug, some receive a placebo, and their outcomes are compared. But randomized experiments are also common in industry.
Suppose, for instance, that Amazon researchers develop a new algorithm for calculating how much of a product to restock at a fulfillment center as a function of recent sales rates and supply on hand. In simulations, the algorithm promises more reliable delivery and greater customer satisfaction, but there’s a question about whether those theoretical gains will translate into practice.
Amazon might conduct a randomized experiment in which some fulfillment centers use the new algorithm, some use the old algorithm, and the average results are compared. Such experiments, however, are liable to so-called spillover effects, where the “treatment” — in this case, the deployment of the new algorithm — ends up having consequences for the control group — in this case, the fulfillment centers using the old algorithm.
Suppose that the treatment results in faster delivery of certain products, and consequently, those products grow in popularity. Amazon’s recommendation engine begins recommending those products more frequently, even to customers served by fulfillment centers using the old restock algorithm. Demand for the products spikes, and the control group starts selling through its stock — a negative outcome, in terms of the experimental design. When the results of the experiment are tallied, the control group’s performance is artificially depressed because of the treatment.
“This type of spillover does not happen in standard medical-drug trials, because one individual taking the new drug does not affect the outcome for another individual taking the placebo,” Imbens says. “But it is a feature of many experiments at Amazon and similar companies, where we have complex feedback loops.”
Exerting controls
One way to identify such spillover effects would be to ensure that, for every product that receives the treatment, there’s a related product that doesn’t, regardless of where it’s stored. That would make it possible to determine whether demand spikes are affecting product classes as a whole or are limited to treated products. But it complicates the experimental design.
The researchers’ paper presents an ambitious blueprint for performing such complex experiments. It describes how to simultaneously measure average effects and identify spillovers within a single experiment — by, for instance, systematically varying the treatment’s application to pairs of fulfillment centers and products. It also presents statistical techniques for analyzing the results of such experiments.
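What such a double randomization could look like is sketched below, with hypothetical names (FC1, widget, and so on) and an arbitrary 50/50 assignment rule; it is a minimal illustration, not the paper’s actual design or analysis. Each fulfillment center and each product is randomized independently, so every center-product pair lands in one of four cells:

```python
# Illustrative double-randomized assignment over fulfillment centers and products.
# Names, the 50/50 rule, and the cell structure are assumptions for this sketch.
import random

def double_randomize(centers, products, seed=0):
    rng = random.Random(seed)
    center_treated = {c: rng.random() < 0.5 for c in centers}
    product_treated = {p: rng.random() < 0.5 for p in products}

    # Each (center, product) pair falls into one of four cells:
    # (treated, treated), (treated, control), (control, treated), (control, control).
    return {(c, p): (center_treated[c], product_treated[p])
            for c in centers for p in products}

assignment = double_randomize(["FC1", "FC2", "FC3"], ["widget", "gadget"])
for (center, product), cell in sorted(assignment.items()):
    print(center, product, cell)
```

Comparing the fully treated cell with the fully untreated one gives an average effect, while the mixed cells (a treated product at an untreated center, or vice versa) are what let the analysis pick up spillovers like the recommendation-driven demand shift described above.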
The researchers’ blueprint could be applied in a host of different contexts — movie recommendations, rideshare services, short-term-property-rental sites, homebuying sites, retail sites, job search sites, and the like. It also generalizes from double randomization, in which a given product can receive different treatments at different fulfillment centers and a given fulfillment center can treat some products and not others, to higher-dimensional randomization, in which treatment also varies according to season, delivery destination, vendor, and so on.
“When people do these kinds of experiments, they usually randomize only one variable at a time,” Bajari explains. “We want to go further with this idea, where we use multiple randomizations to learn supply responses, demand responses, equilibria — all with the goal to keep improving the customer experience.”
Helping identify the causal relationships that underlie the data, Bajari says, is one of the ways in which the economic perspective is useful. But another is in deciding what to measure, across what time frame.
“Usually, ML and AI are tools for making decisions,” Bajari says. “If you have a particular product, how much should you stock of it? You want to make that decision in a present-value-maximizing way. You don’t want to sacrifice long-term success for short-term gains. If you only looked at short-term numbers, we would cut safety stock by half. Then customers would be more apt to find products out of stock, which means they might be less likely to shop on Amazon, which in turn could hurt growth.
“If you want to use ML and AI to make decisions in a rational way, you need a way to trade off long-term and short-term results. This is a place where economists help. What should a firm rationally optimize for? That’s just squarely in economics. That’s what we do.”
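A back-of-the-envelope way to read “present-value-maximizing” in that quote is the textbook discounting formula below; it is a generic form, not an Amazon model:

```latex
% Generic present-value objective (textbook form; not taken from Amazon)
\mathrm{PV} \;=\; \sum_{t=0}^{T} \frac{\pi_t}{(1+r)^t}
```

Here π_t is the expected profit in period t and r is the discount rate. Cutting safety stock raises near-term profit but, as Bajari notes, can lower later profits through stockouts and lost customers, and the discounted sum is what makes that trade-off explicit.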
Amazon's involvement at AEA/ASSA
Paper and presentation schedule
Friday, 1/3 | 2:30 pm - 4:30 pm | Marriott Marquis San Diego | San Diego Ballroom A
"GDPR and the Home Bias of Venture Investment"
Jian Jia (Illinois Institute of Technology) · Ginger Jin (University of Maryland/Amazon Scholar) · Liad Wagman (Illinois Institute of Technology)
"New Goods, Productivity and the Measurement of Inflation: Using Machine Learning to Improve Quality Adjustments"
Pat Bajari (Amazon) · Zhihao Cen (Amazon) · Victor Chernozhukov (MIT/Amazon) · Ramon Huerta (UCSD/Amazon) · Junbo Li (Amazon) · Manoj Manukonda (Amazon) · George Monokroussos (Wayfair)
"Double Randomized Online Experiments"
Pat Bajari (Amazon) · Brian Burdick (Amazon) · Guido Imbens (Stanford Graduate School of Business/Amazon) · James McQueen (Amazon) · Thomas Richardson (University of Washington/Amazon Scholar) · Ido Rosen (Amazon)
Saturday, 1/4 | 2:30 pm - 4:30 pm | Marriott Marquis San Diego | Del Mar
"Sustained Credit Card Borrowing"
Sergei Koulayev (Amazon) · Daniel Grodzicki (Pennsylvania State University/Consumer Financial Protection Bureau)
Workshops
Econometrica Session: New Developments in Econometrics
Chair: Guido Imbens