Alexa at five: Looking back, looking forward

Today is the fifth anniversary of the launch of the Amazon Echo, so in a talk I gave yesterday at the Web Summit in Lisbon, I looked at how far Alexa has come and where we’re heading next.

This poster of the original Echo device, signed by the scientists and engineers who helped make it possible, hangs in Rohit's office.

Amazon’s mission is to be the earth’s most customer-centric company. With that mission in mind and the Star Trek computer as an inspiration, on November 6, 2014, a small multidisciplinary team launched Amazon Echo, with the aspiration of revolutionizing daily convenience for our customers using artificial intelligence (AI).

Before Echo ushered in the convenience of voice-enabled ambient computing, customers were used to searches on desktops and mobile phones, where the onus was entirely on them to sift through blue links to find answers to their questions or connect to services. While app stores on phones offered “there’s an app for that” convenience, the cognitive load on customers continued to increase.

Alexa-powered Echo broke these human-machine interaction paradigms, shifting the cognitive load from customers to AI and causing a tectonic shift in how customers interact with a myriad of services, find information on the Web, control smart appliances, and connect with other people.

Enhancements in foundational components of Alexa

In order to be magical at the launch of Echo, Alexa needed to be great at four fundamental AI tasks:

  1. Wake word detection: On the device, detect the keyword “Alexa” to get the AI’s attention;
  2. Automatic speech recognition (ASR): Upon detecting the wake word, convert audio streamed to the Amazon Web Services (AWS) cloud into words;
  3. Natural-language understanding (NLU): Extract the meaning of the recognized words so that Alexa can take the appropriate action in response to the customer’s request; and
  4. Text-to-speech synthesis (TTS): Convert Alexa’s textual response to the customer’s request into spoken audio.
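To make the division of labor concrete, here is a minimal sketch of how these four stages could be chained. Every function in it is a toy stand-in that returns canned values; none of the names correspond to Alexa's actual components or APIs.

```python
from typing import Optional

def detect_wake_word(audio: bytes) -> bool:
    # Stand-in for the on-device keyword-spotting model.
    return b"alexa" in audio

def recognize_speech(audio: bytes) -> str:
    # Stand-in for the cloud-side ASR service.
    return "play jazz music"

def understand(text: str) -> dict:
    # Stand-in for NLU: map recognized words to an intent and slots.
    if text.startswith("play"):
        return {"intent": "PlayMusic", "slots": {"genre": text.split()[1]}}
    return {"intent": "Unknown", "slots": {}}

def synthesize_speech(reply: str) -> bytes:
    # Stand-in for TTS: response text to spoken audio.
    return reply.encode("utf-8")

def handle_request(audio: bytes) -> Optional[bytes]:
    if not detect_wake_word(audio):        # 1. wake word detection (on device)
        return None
    text = recognize_speech(audio)         # 2. ASR: streamed audio -> words
    frame = understand(text)               # 3. NLU: words -> meaning
    reply = f"Playing {frame['slots'].get('genre', 'music')} for you."
    return synthesize_speech(reply)        # 4. TTS: response text -> audio

print(handle_request(b"alexa, play some jazz"))
```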

Over the past five years, we have continued to advance each of these foundational components. In both wake word and ASR, we’ve seen fourfold reductions in recognition errors. In NLU, the error reduction has been threefold — even though the range of utterances that NLU processes, and the range of actions Alexa can take, have both increased dramatically. And in listener studies that use the MUSHRA audio perception methodology, we’ve seen an 80% reduction in the naturalness gap between Alexa’s speech and human speech.

Our overarching strategy for Alexa’s AI has been to combine machine learning (ML) — in particular, deep learning — with the large-scale data and computational resources available through AWS. But these performance improvements are the result of research on a variety of specific topics that extend deep learning, including

  • semi-supervised learning, or using a combination of unlabeled and labeled data to improve the ML system;
  • active learning, or the learning strategy where the ML system selects more-informative samples to receive manual labels;
  • large-scale distributed training, or parallelizing ML-based model training for efficient learning on a large corpus; and
  • context-aware modeling, or using a wide variety of information — including the type of device where a request originates, skills the customer uses or has enabled, and past requests — to improve accuracy.
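As one concrete illustration of the second item, a common form of active learning is uncertainty sampling: the system asks annotators to label only the unlabeled examples its current model is least sure about. The sketch below is a generic version of that idea, not Alexa's production pipeline.

```python
import numpy as np

def select_for_labeling(probabilities: np.ndarray, budget: int) -> np.ndarray:
    """Uncertainty sampling: pick the `budget` unlabeled examples whose
    predicted class distribution has the highest entropy.

    probabilities: (num_examples, num_classes) model scores on unlabeled data.
    Returns the indices of the examples to send for manual annotation.
    """
    eps = 1e-12
    entropy = -np.sum(probabilities * np.log(probabilities + eps), axis=1)
    return np.argsort(entropy)[-budget:]

# Toy example: the second utterance is the most ambiguous, so it is selected.
scores = np.array([[0.95, 0.03, 0.02],
                   [0.40, 0.35, 0.25],
                   [0.85, 0.10, 0.05]])
print(select_for_labeling(scores, budget=1))  # -> [1]
```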

For more coverage of the anniversary of the Echo's launch, see "Alexa, happy birthday" on Amazon's Day One blog.

Customer impact

From Echo’s launch in November 2014 to now, we have gone from zero customer interactions with Alexa to billions per week. Customers now interact with Alexa in 15 language variants and more than 80 countries.

Through the Alexa Voice Service and the Alexa Skills Kit, we have democratized conversational AI. These self-serve APIs and toolkits let developers integrate Alexa into their devices and create custom skills. Alexa is now available on hundreds of different device types. There are more than 85,000 smart-home products that can be controlled with Alexa, from more than 9,500 unique brands, and third-party developers have built more than 100,000 custom skills.

Ongoing research in conversational AI

Alexa’s success doesn’t mean that conversational AI is a solved problem. On the contrary, we’ve just scratched the surface of what’s possible. We’re working hard to make Alexa …

1. More self-learning

Our scientists and engineers are making Alexa smarter faster by reducing reliance on supervised learning (i.e., building ML models on manually labeled data). A few months back, we announced that we’d trained a speech recognition system on a million hours of unlabeled speech using the teacher-student paradigm of deep learning. This technology is now in production for UK English, where it has improved the accuracy of Alexa’s speech recognizers, and we’re working to apply it to all language variants.

In the teacher-student paradigm of deep learning, a powerful but impractically slow teacher model is trained on a small amount of hand-labeled data, and it in turn annotates a much larger body of unlabeled data to train a leaner, more efficient student model.
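The caption above summarizes the idea; the sketch below shows a generic teacher-student (knowledge distillation) training step in PyTorch. The model sizes, temperature, and loss are illustrative assumptions, not the recipe used for Alexa's speech recognizers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative models: a large "teacher" classifier and a small "student".
teacher = nn.Sequential(nn.Linear(40, 512), nn.ReLU(), nn.Linear(512, 10))
student = nn.Sequential(nn.Linear(40, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)

def distillation_step(unlabeled_batch: torch.Tensor, temperature: float = 2.0) -> float:
    """One training step: the teacher labels unlabeled data, and the student
    learns to match the teacher's softened output distribution."""
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(unlabeled_batch) / temperature, dim=-1)
    student_log_probs = F.log_softmax(student(unlabeled_batch) / temperature, dim=-1)
    loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with random "acoustic features" standing in for real speech data.
print(distillation_step(torch.randn(8, 40)))
```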

This year, we introduced a new self-learning paradigm that enables Alexa to automatically correct ASR and NLU errors without any human annotator in the loop. In this novel approach, we use ML to detect potentially unsatisfactory interactions with Alexa through signals such as the customer’s barging in on (i.e., interrupting) Alexa. Then, a graphical model trained on customers’ paraphrases of their requests automatically revises failing requests into semantically equivalent forms that work.

For example, “play Sirius XM Chill” used to fail, but from customer rephrasing, Alexa has learned that “play Sirius XM Chill” is equivalent to “play Sirius Channel 53” and automatically corrects the failing variant.

Using this implicit learning technique and occasional explicit feedback from customers — e.g., “did you want/mean … ?” — Alexa is now self-correcting millions of defects per week.
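The Sirius XM example can be pictured as a learned rewrite table. The sketch below is a drastically simplified stand-in for the graphical model described above: it mines successful customer rephrases from hypothetical interaction logs and applies the best-supported rewrite to a known-failing request.

```python
from collections import defaultdict

# Hypothetical interaction logs: (original request, customer's rephrase, rephrase succeeded).
logs = [
    ("play sirius xm chill", "play sirius channel 53", True),
    ("play sirius xm chill", "play sirius channel 53", True),
    ("play sirius xm chill", "play chill music", False),
]

# Count how often each rewrite of a failing request led to a successful interaction.
rewrite_counts = defaultdict(lambda: defaultdict(int))
for original, rephrase, succeeded in logs:
    if succeeded:
        rewrite_counts[original][rephrase] += 1

def self_correct(request: str, min_support: int = 2) -> str:
    """Rewrite a known-failing request into its best-supported paraphrase."""
    candidates = rewrite_counts.get(request, {})
    if candidates:
        best, support = max(candidates.items(), key=lambda kv: kv[1])
        if support >= min_support:
            return best
    return request  # no trusted rewrite, so leave the request unchanged

print(self_correct("play sirius xm chill"))  # -> "play sirius channel 53"
```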

2. More natural

In 2015, when the first third-party skills began to appear, customers had to invoke them by name — e.g., “Alexa, ask Lyft to get me a ride to the airport.” However, with tens of thousands of custom skills, it can be difficult to discover skills by voice and remember their names. This is a unique challenge that Alexa faces.

To address this challenge, we have been exploring deep-learning-based name-free skill interaction to make skill discovery and invocation seamless. For several thousand skills, customers can simply issue a request — “Alexa, get me a ride to the airport” — and Alexa uses information about the customer’s context and interaction history to decide which skill to invoke.

Another way we’ve made interacting with Alexa more natural is by enabling her to handle compound requests, such as “Alexa, turn down the lights and play music”. Among other innovations, this required more efficient techniques for training semantic parsers, which analyze both the structure of a sentence and the meanings of its parts.
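As a rough picture of what a semantic parser produces for such a request, the toy function below splits the compound utterance and maps each clause to an intent frame. A real parser analyzes the sentence structure jointly rather than splitting on "and", and the intent names here are made up for illustration.

```python
# A deliberately simplified stand-in for a semantic parser: split the compound
# request into clauses and map each clause to an intent frame.
def parse_compound(utterance: str) -> list:
    frames = []
    for clause in utterance.lower().split(" and "):
        if "turn down the lights" in clause:
            frames.append({"intent": "SetLightBrightness", "direction": "down"})
        elif "play music" in clause:
            frames.append({"intent": "PlayMusic"})
        else:
            frames.append({"intent": "Unknown", "text": clause})
    return frames

print(parse_compound("Alexa, turn down the lights and play music"))
# -> [{'intent': 'SetLightBrightness', 'direction': 'down'}, {'intent': 'PlayMusic'}]
```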

Alexa’s responses are also becoming more natural. This year, we began using neural networks for text-to-speech synthesis. This not only results in more-natural-sounding speech but makes it much easier to adapt Alexa’s TTS system to different speaking styles — a newscaster style for reading the news, a DJ style for announcing songs, or even celebrity voices, like Samuel L. Jackson’s.
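For skill developers, the newscaster style is exposed through SSML. The snippet below sketches what a skill response using it might look like; treat the exact tag and locale support as something to confirm against the current Alexa Skills Kit documentation rather than as a guaranteed interface.

```python
# Illustrative only: an Alexa skill response asking the neural TTS engine to
# read the text in the newscaster speaking style via SSML.
ssml = (
    "<speak>"
    '<amazon:domain name="news">'
    "Here are today's top stories."
    "</amazon:domain>"
    "</speak>"
)

response = {
    "version": "1.0",
    "response": {
        "outputSpeech": {"type": "SSML", "ssml": ssml},
        "shouldEndSession": True,
    },
}
print(response)
```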

3. More knowledgeable

Every day, Alexa answers millions of questions that she’s never been asked before, an indication of customers’ growing confidence in Alexa’s question-answering ability.

The core of Alexa’s knowledge base is a knowledge graph, which encodes billions of facts and has grown 20-fold over the past five years. But Alexa also draws information from hundreds of other sources.

And now, customers are helping Alexa learn through Alexa Answers, an online interface that lets people add to Alexa’s knowledge. In a private beta test and the first month of public release, Alexa customers have furnished Alexa Answers with hundreds of thousands of new answers, which have been shared with customers millions of times.

4. More context-aware and proactive

Today, through an optional feature called Hunches, Alexa can learn how you interact with your smart home and suggest actions when she senses that devices such as lights, locks, switches, and plugs are not in the states that you prefer. We are currently expanding the notion of Hunches to include another Alexa feature called Routines. If you set your alarm for 6:00 a.m. every day, for example, and on waking, you immediately ask for the weather, Alexa will suggest creating a Routine that sets the weekday alarm to 6:00 and plays the weather report as soon as the alarm goes off.

Earlier this year, we launched Alexa Guard, a feature that you can activate when you leave the house. If your Echo device detects the sound of a smoke alarm, a carbon monoxide alarm, or glass breaking, Alexa Guard sends you an alert. Guard’s acoustic-event-detection model uses multitask learning, which reduces the amount of labeled data needed for training and makes the model more compact.
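As a generic illustration of multitask learning for acoustic event detection (not Guard's actual architecture), a single shared encoder can feed several small per-event classification heads, so labeled data for each sound helps shape a common representation:

```python
import torch
import torch.nn as nn

class MultiTaskSoundDetector(nn.Module):
    """Toy multitask model: one shared encoder, one small head per acoustic event."""

    def __init__(self, num_features: int = 64,
                 events=("smoke_alarm", "co_alarm", "glass_break")):
        super().__init__()
        self.encoder = nn.Sequential(          # shared across all detection tasks
            nn.Linear(num_features, 128), nn.ReLU(), nn.Linear(128, 32), nn.ReLU()
        )
        self.heads = nn.ModuleDict(            # one binary detector per event
            {event: nn.Linear(32, 1) for event in events}
        )

    def forward(self, features: torch.Tensor) -> dict:
        shared = self.encoder(features)
        return {event: torch.sigmoid(head(shared)) for event, head in self.heads.items()}

# Toy usage with random "audio features" standing in for real spectrogram frames.
model = MultiTaskSoundDetector()
scores = model(torch.randn(1, 64))
print({event: float(p) for event, p in scores.items()})
```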

This fall, we will begin previewing an extended version of Alexa Guard that recognizes additional sounds associated with activity, such as footsteps, talking, coughing, or doors closing. Customers can also create Routines that include Guard — activating Guard automatically during work hours, for instance.

5. More conversational

Customers want Alexa to do more for them than complete one-shot requests like “Alexa, play Duke Ellington” or “Alexa, what’s the weather?” This year, we have improved Alexa’s ability to carry context from one request to another, the way humans do in conversation.

For instance, if an Alexa customer asks, “When is The Addams Family playing at the Bijou?” and then follows up with the question “Is there a good Mexican restaurant near there?”, Alexa needs to know that “there” refers to the Bijou. Some of our recent work in this area won one of the two best-paper awards at the Association for Computational Linguistics’ Workshop on Natural-Language Processing for Conversational AI. The key idea is to jointly model the salient entities with transformer networks that use a self-attention mechanism.
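As a rough illustration of the mechanism named in that work (not the published model itself), scaled dot-product self-attention lets every token in the dialogue weigh every other token when building its representation, which is what allows a word like "there" to attend strongly to "the Bijou":

```python
import numpy as np

def self_attention(embeddings: np.ndarray) -> np.ndarray:
    """Minimal scaled dot-product self-attention over one sequence of token
    embeddings. Queries, keys, and values are the embeddings themselves here;
    a real transformer learns separate projection matrices for each."""
    d = embeddings.shape[-1]
    scores = embeddings @ embeddings.T / np.sqrt(d)   # token-to-token affinities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ embeddings                       # each token: weighted mix of the sequence

# Toy example: 4 "tokens" with 8-dimensional embeddings.
np.random.seed(0)
tokens = np.random.randn(4, 8)
print(self_attention(tokens).shape)  # -> (4, 8)
```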

However, completing complex tasks that require back-and-forth interaction and anticipation of the customer’s latent goals is still a challenging problem. For example, a customer using Alexa to plan a night out would have to use different skills to find a movie, a restaurant near the theater, and a ride-sharing service, coordinating times and locations.

We are currently testing a new deep-learning-based technology, called Alexa Conversations, with a small group of skill developers who are using it to build high-quality multiturn experiences with minimal effort. The developer supplies Alexa Conversations with a set of sample dialogues, and a simulator expands them into roughly 100 times as much training data. Alexa Conversations then uses that data to train a state-of-the-art deep-learning model to predict dialogue actions, without the need for hand-authored rules.

Dialogue management involves tracking the values of "slots", such as time and location, throughout a conversation. Here, blue arrows indicate slots whose values must be updated across conversational turns.
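A minimal way to picture the slot tracking shown in the figure is a dictionary of slot values carried across turns, where each new turn updates only the slots it mentions. This is a toy stand-in for the learned dialogue-state tracker in Alexa Conversations:

```python
# Toy dialogue-state tracking: each turn contributes slot updates, and the
# tracker carries earlier values forward (so "there" can resolve to the theater).
def track_state(turns):
    state = {}
    for slot_updates in turns:
        state.update(slot_updates)   # new values overwrite; untouched slots persist
        print(state)
    return state

track_state([
    {"movie": "The Addams Family", "venue": "Bijou", "time": "7pm"},   # turn 1
    {"cuisine": "Mexican", "location": "near the Bijou"},              # turn 2: venue and time carry over
])
```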

At re:MARS, we demonstrated a new Night Out planning experience that uses Alexa Conversations technology and novel skill-transitioning algorithms to automatically coordinate conversational planning tasks across multiple skills.

We’re also adapting Alexa Conversations technology to the new concierge feature for Ring video doorbells. With this technology, the doorbell can engage in short conversations on your behalf, taking messages or telling a delivery person where to leave a package. We’re working hard to bring both of these experiences to customers.

What will the next five years look like?

Five years ago, it was inconceivable to us that customers would be interacting with Alexa billions of times per week and that developers would, on their own, build 100,000-plus skills. Such adoption is inspiring our teams to invent at an even faster pace, creating novel experiences that will increase utility and further delight our customers.

1. Alexa everywhere

The Echo family of devices and Alexa’s integration into third-party products have made Alexa a part of millions of homes worldwide. We have also been working hard to bring the convenience Alexa created in the home to customers on the go. Echo Buds, Echo Auto, and the Day 1 Editions of Echo Loop and Echo Frames are already demonstrating that Alexa-on-the-go can simplify our lives even further.

With greater portability comes greater risk of slow or lost Internet connections. Echo devices with built-in smart-home hubs already have a hybrid mode, which allows them to do some spoken-language processing when they can’t rely on Alexa’s cloud-based models. This is an important area of ongoing research for us. For instance, we are investigating new techniques for compressing Alexa’s machine learning models so that they can run on-device.
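One widely used compression technique, offered here as an example rather than a statement of what ships on Echo devices, is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. PyTorch exposes a dynamic variant through torch.quantization.quantize_dynamic:

```python
import os
import torch
import torch.nn as nn

# A small stand-in model; imagine it is part of an on-device speech or NLU component.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 64))

# Dynamic post-training quantization: Linear-layer weights are stored as 8-bit
# integers, shrinking the model and speeding up CPU inference at a small accuracy cost.
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

def size_mb(m: nn.Module, path: str = "tmp_model.pt") -> float:
    torch.save(m.state_dict(), path)
    size = os.path.getsize(path) / 1e6
    os.remove(path)
    return size

print(f"fp32: {size_mb(model):.2f} MB -> int8: {size_mb(quantized):.2f} MB")
```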

The new on-the-go hardware isn’t the only way that Alexa is becoming more portable. The new Guest Connect experience allows you to log into your Alexa account from any Echo device — even ones you don’t own — and play your music or preferred news.

2. Moving up the AI stack

Alexa’s unparalleled customer and developer adoption provides new challenges for AI research. In particular, to further shift the cognitive load from customers to AI, we must move up the AI stack, from predictions (e.g., extracting customers’ intents) to more contextual reasoning.

One of our goals is to seamlessly connect disparate skills to increase convenience for our customers. Alexa Conversations and the Night Out experience are the first steps in that direction, completing complex tasks across multiple services and skills.

To enable the same kind of interoperability across different AIs, we helped found the Voice Interoperability Initiative, a consortium of dozens of tech companies uniting to promote customer choice by supporting multiple, interoperable voice services on a single device.

Alexa will also make better decisions by factoring in more information about the customer’s context and history. For instance, when a customer asks an Alexa-enabled device in a hotel room “Alexa, what are the pool hours?”, Alexa needs to respond with the hours for the hotel pool and not the community pool.

We are inspired by the success of learning directly from customers through the self-learning techniques I described earlier. This is an important area where we will continue to incorporate new signals, such as vocal frustration with Alexa, and learn from direct and indirect feedback to make Alexa more accurate.

3. Alexa for everyone

As AI systems like Alexa become an indispensable part of our social fabric, bias mitigation and fairness in AI will require even deeper attention. Our goal is for Alexa to work equally well for all our customers. In addition to our own research, we’ve entered into a three-year collaboration with the National Science Foundation to fund research on fairness in AI.

We envision a future where anyone can create conversational-AI systems. With the Alexa Skills Kit and Alexa Voice Service, we made it easy for developers to innovate using Alexa’s AI. Even end users can build personal skills within minutes using Alexa Skill Blueprints.

We are also thrilled with the Alexa Prize competition, which is democratizing conversational AI by letting university students perform state-of-the-art research at scale. University teams are working on the ultimate conversational-AI challenge of creating socialbots that can converse coherently and engagingly with humans for 20 minutes on a range of current events and popular topics.

The third instance of the challenge is under way, and we are confident that the university teams will continue to push boundaries — perhaps even give their socialbots an original sense of humor, by far one of the hardest AI challenges.

Together with developers and academic researchers, we’ve made great strides in conversational AI. But there’s so much more to be accomplished. While the future is difficult to predict, one thing I am sure of is that the Alexa team will continue to invent on behalf of our customers.
