Amazon wins best-paper award for protecting privacy of training data

Calibrating noise addition to word density in the embedding space improves utility of privacy-protected text.

Differential privacy is a popular framework for quantifying the privacy risk of releasing aggregate statistics based on individual data. In the context of machine learning, differential privacy protects privacy by adding noise to the data used to train a model. But the addition of noise can also reduce model performance.

In a pair of papers at the annual meeting of the Florida Artificial Intelligence Research Society (FLAIRS), the Privacy Engineering for Alexa team is presenting a new way to calibrate the noise added to the textual data used to train natural-language-processing (NLP) models. The idea is to distinguish cases where a little noise is enough to protect privacy from cases where more noise is necessary. This helps minimize the impact on model accuracy while maintaining privacy guarantees, which aligns with the team’s mission to measurably preserve customer privacy across Alexa.

One of the papers, “Density-aware differentially private textual perturbations using truncated Gumbel noise”, has won the conference’s best-paper award.

A simplified example of the method proposed in the researchers' award-winning paper. Noise is added to the three nearest neighbors of a source word, A, and to A itself. After noise addition, the word closest to A's original position — B — is chosen as a substitute for A.
Credit: Glynis Condon

Differential privacy says that, given an aggregate statistic, the probability that the underlying dataset does or does not contain a particular item should be virtually the same. The addition of noise to the data helps enforce that standard, but it can also obscure relationships in the data that the model is trying to learn.

In NLP applications, a standard way to add noise involves embedding the words of the training texts. An embedding represents words as vectors, such that vectors that are close in the space have related meanings. 

Adding noise to an embedding vector produces a new vector, which would correspond to a similar but different word. Ideally, substituting the new words for the old should disguise the original data while preserving the attributes that the NLP model is trying to learn. 
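This perturb-and-decode step can be sketched as follows. The toy embedding, its vectors, and the noise scale are all hypothetical, chosen only for illustration; the noise is a standard d-dimensional Laplace sample (uniform direction, gamma-distributed radius), and the noisy vector is decoded back to the nearest real word:

```python
import numpy as np

# Toy 2-D "embedding": made-up vectors for illustration only.
EMBEDDING = {
    "good":  np.array([1.0, 1.0]),
    "great": np.array([1.1, 0.9]),
    "fine":  np.array([0.9, 1.2]),
    "car":   np.array([5.0, 5.0]),
}

def perturb_word(word, scale, rng):
    """Add multivariate-Laplace-style noise to a word's vector, then
    decode the noisy vector back to the nearest word in the vocabulary."""
    vec = EMBEDDING[word]
    # A uniform random direction times a gamma-distributed radius is the
    # standard way to sample d-dimensional Laplace noise.
    direction = rng.normal(size=vec.shape)
    direction /= np.linalg.norm(direction)
    radius = rng.gamma(shape=vec.size, scale=scale)
    noisy = vec + radius * direction
    return min(EMBEDDING, key=lambda w: np.linalg.norm(EMBEDDING[w] - noisy))

rng = np.random.default_rng(0)
substitute = perturb_word("good", scale=0.05, rng=rng)
```

With a small noise scale, the substitute is almost always the source word itself or one of its close semantic neighbors; larger scales push the output toward unrelated words, which is exactly the utility cost the papers address.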

However, words in an embedding space tend to form clusters, united by semantic similarity, with sparsely populated regions between clusters. Intuitively, within a cluster, much less noise should be required to ensure enough semantic distance to preserve privacy. But if the noise added to each word is based on the average distance between embeddings, factoring in the sparser regions, it may be more than is necessary for words in dense regions.

A simplified representation of words (red dots) in an embedding space. Adding noise to a source vector (A) produces a new vector, and the nearest (green circle) embedded word (B) is chosen as a substitute. In the graph at left, adding a lot of noise to the source word produces an output word that is far away and hence semantically dissimilar. In the middle graph, however, a lot of noise is needed to produce a semantically different output. In the graph at right, the amount of noise is calibrated to the density of the vectors around the source word.

This leads us to pose the following question in our FLAIRS papers: Can we recalibrate the noise added such that it varies for every word depending on the density of the surrounding space, rather than resorting to a single global sensitivity?

Calibration techniques

We study this question from two different perspectives. In the paper titled “Research challenges in designing differentially private text generation mechanisms”, my Alexa colleagues Oluwaseyi Feyisetan, Zekun Xu, Nathanael Teissier, and I discuss general techniques to enhance the privacy of text mechanisms by exploiting features such as local density in the embedding space.  

For example, one technique deduces a probability distribution (a prior) that assigns high probability to dense areas of the embedding and low probability to sparse areas. This prior can be produced using kernel density estimation, which is a popular technique for estimating distributions from limited data samples. 
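A minimal sketch of such a prior, using a hand-rolled Gaussian kernel density estimate over made-up candidate vectors (the function name, bandwidth, and vectors are illustrative assumptions, not the paper's):

```python
import numpy as np

def kde_prior(candidates, bandwidth):
    """Gaussian kernel density estimate evaluated at each candidate
    embedding, normalized into a probability distribution (a prior)."""
    # Pairwise squared distances between all candidate vectors.
    diffs = candidates[:, None, :] - candidates[None, :, :]
    sq_dists = (diffs ** 2).sum(axis=-1)
    # Sum a Gaussian bump centered at every observed point.
    density = np.exp(-sq_dists / (2.0 * bandwidth ** 2)).sum(axis=1)
    return density / density.sum()

# Three words in a tight cluster plus one isolated word (made-up vectors).
vectors = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
prior = kde_prior(vectors, bandwidth=0.5)
```

As intended, the clustered words receive higher prior mass than the isolated one, so a sampler guided by this prior favors substitutes from dense regions.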

However, these distributions are often highly nonlinear, which makes them difficult to sample from. In this case, we can either opt for an approximation to the distribution or adopt indirect sampling strategies such as the Metropolis–Hastings algorithm (which is based on well-known Markov chain Monte Carlo techniques). 
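The idea behind such indirect sampling can be sketched with a minimal random-walk Metropolis–Hastings loop. Here the target is a hypothetical one-dimensional prior (an unnormalized standard normal) rather than an actual embedding-space density; the step size and chain length are illustrative:

```python
import math
import random

def metropolis_hastings(log_density, x0, steps, step_size, rng):
    """Random-walk Metropolis-Hastings: draws approximate samples from a
    distribution we can evaluate (up to a constant) but cannot sample
    from directly."""
    x, samples = x0, []
    for _ in range(steps):
        proposal = x + rng.gauss(0.0, step_size)
        # Accept with probability min(1, p(proposal) / p(x)).
        accept = math.exp(min(0.0, log_density(proposal) - log_density(x)))
        if rng.random() < accept:
            x = proposal
        samples.append(x)
    return samples

# Hypothetical nonlinear prior: log p(x) = -x^2/2 (standard normal).
rng = random.Random(0)
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000, 1.0, rng)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

The chain's sample mean and variance approach those of the target distribution, which is all the sampler needs: only pointwise density evaluations, never direct draws.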

Another technique we discuss is to impose a limit on how far away a noisy embedding may be from its source. We explore two ways to do this: distance-based truncation and k-nearest-neighbor-based truncation. 

Distance-based truncation simply caps the distance between the noisy embedding and its source, according to some measure of distance in the space. This prevents the addition of a large amount of noise, which is useful in the dense regions of the embedding. But in the sparse regions, this can effectively mean zero perturbation, since there may not be another word within the distance limit. 
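Distance-based truncation amounts to projecting any noisy vector that escapes a fixed-radius ball back onto that ball's boundary. A sketch under the Euclidean metric (the radius and vectors are illustrative assumptions):

```python
import numpy as np

def truncate_to_ball(source, noisy, radius):
    """Distance-based truncation: if the noisy vector left the ball of
    the given radius around the source, project it onto the boundary."""
    offset = noisy - source
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return noisy  # within the limit: leave untouched
    return source + offset * (radius / dist)

src = np.array([0.0, 0.0])
near = truncate_to_ball(src, np.array([0.3, 0.4]), radius=1.0)   # unchanged
far = truncate_to_ball(src, np.array([10.0, 0.0]), radius=1.0)   # clipped
```

This caps the perturbation at the radius, but, as noted above, in a sparse region the capped ball may contain no word other than the source, leaving the text effectively unperturbed.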

To avoid this drawback, we consider the alternate approach of k-nearest-neighbor-based truncation. In this approach, the k words closest to the source delineate the acceptable search area. We then execute a selection procedure to choose the new word from these candidates (plus the source word itself). This is the approach we adopt in our second paper.

A schematic of distance-based (left and middle graphs) and nearest-neighbor-based (right graph) truncation techniques. In the first graph, the blue circle represents a limit on the distance from the source word, A. Randomly adding noise produces a vector within this limit, and the output word B is selected. In the middle graph, a large amount of noise has been randomly added, but it’s truncated at the boundary of the blue circle. The right graph shows k-nearest-neighbor truncation, where a random number of neighbors (in this case, three) are selected around the source word, A. Noise is added to each of these neighbors independently, and the nearest word after noise addition — B — is chosen (see animation, above).

In “Density-aware differentially private textual perturbations using truncated Gumbel noise”, Nan Xu, a summer intern with our group in 2020 and currently a PhD student in computer science at the University of Southern California, joins us to discuss a particular algorithm in detail. 

This algorithm calibrates noise by selecting a few neighbors of the source word and perturbing the distance to these neighbors using samples from the Gumbel distribution (the rightmost graph, above). We chose the Gumbel distribution because it is more computationally efficient than existing mechanisms for differentially private selection (e.g., the exponential mechanism). The number of neighbors is chosen randomly using Poisson samples.
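The selection step described above can be sketched as follows. This is a simplified illustration, not the paper's exact calibration: the vocabulary, parameter names, Poisson mean, and Gumbel scale are all assumptions, and the real mechanism's noise scales are set to meet a formal privacy budget:

```python
import numpy as np

def truncated_gumbel_select(source, vocab, mean_k, gumbel_scale, rng):
    """Simplified density-aware selection: draw k from a Poisson, keep
    the source word's k nearest neighbors plus the source itself, add
    Gumbel noise to each candidate's negative distance, and return the
    arg-max."""
    words = list(vocab)
    dists = np.array([np.linalg.norm(vocab[w] - vocab[source]) for w in words])
    k = min(len(words), max(1, rng.poisson(mean_k)) + 1)  # +1 keeps the source
    candidates = np.argsort(dists)[:k]                    # truncated search area
    # Gumbel-perturbed scores: closer candidates score higher on average.
    scores = -dists[candidates] + rng.gumbel(0.0, gumbel_scale, size=k)
    return words[candidates[np.argmax(scores)]]

VOCAB = {  # made-up 2-D embedding for illustration
    "good":  np.array([1.0, 1.0]),
    "great": np.array([1.1, 0.9]),
    "fine":  np.array([0.9, 1.2]),
    "nice":  np.array([1.2, 1.1]),
    "car":   np.array([5.0, 5.0]),
}
rng = np.random.default_rng(0)
out = truncated_gumbel_select("good", VOCAB, mean_k=3, gumbel_scale=0.3, rng=rng)
```

Adding Gumbel noise to scores and taking the arg-max is equivalent to sampling from a softmax over those scores (the "Gumbel-max trick"), which is why it can replace an explicit exponential-mechanism sampling step at lower computational cost.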

Together, these two techniques, when calibrated appropriately, provide the required amount of differential privacy while enhancing utility. We call the resulting algorithm the truncated Gumbel mechanism, and it better preserves semantic meaning than the multivariate Laplace mechanism, a widely used method for adding noise to textual data. (The left and middle graphs of the top figure above depict the use of Laplace mechanisms.) 

In tests, we found that this new algorithm provided improvements in accuracy of up to 9.9% for text classification tasks on two different datasets. Our paper also includes a formal proof of the privacy guarantees offered by this mechanism and analyzes relevant privacy statistics. 

Our ongoing research efforts continue to improve upon the techniques described above and enable Alexa to continue introducing new features and inventions that make customers’ lives easier while keeping their data private.
