Interspeech
This year's Interspeech will be held in Graz, Austria, whose famed clock tower was built in the mid-1500s
Photo courtesy of Getty Images

The 16 Alexa-related papers at this year’s Interspeech

At next week’s Interspeech, the largest conference on the science and technology of spoken-language processing, Alexa researchers have 16 papers, which span the five core areas of Alexa functionality: device activation, or recognizing speech intended for Alexa and other audio events that require processing; automatic speech recognition (ASR), or converting the speech signal into text; natural-language understanding, or determining the meaning of customer utterances; dialogue management, or handling multiturn conversational exchanges; and text-to-speech, or generating natural-sounding synthetic speech to convey Alexa’s responses. Two of the papers are also more-general explorations of topics in machine learning.

Device Activation

Model Compression on Acoustic Event Detection with Quantized Distillation
Bowen Shi, Ming Sun, Chieh-Chi Kao, Viktor Rozgic, Spyros Matsoukas, Chao Wang

The researchers combine two techniques to shrink neural networks trained to detect sounds by 88%, with no loss in accuracy. One technique, distillation, involves using a large, powerful model to train a leaner, more-efficient one. The other technique, quantization, involves using a fixed number of values to approximate a larger range of values.
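The two ideas can be sketched in a few lines. This is a minimal illustration, not the paper's method: `distillation_targets` softens a (hypothetical) teacher model's outputs so a smaller student can learn from them, and `quantize` snaps continuous weights to a fixed grid of values.

```python
import numpy as np

def distillation_targets(teacher_logits, temperature=2.0):
    """Soften the teacher's logits into probabilities the student trains on."""
    z = teacher_logits / temperature
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def quantize(weights, num_levels=16):
    """Approximate a continuous weight range with a fixed set of values."""
    lo, hi = weights.min(), weights.max()
    step = (hi - lo) / (num_levels - 1)
    return lo + np.round((weights - lo) / step) * step
```

Storing only the level index for each weight (here, 4 bits for 16 levels instead of 32-bit floats) is where the size reduction comes from.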

Sub-band Convolutional Neural Networks for Small-footprint Spoken Term Classification
Chieh-Chi Kao, Ming Sun, Yixin Gao, Shiv Vitaladevuni, Chao Wang

Convolutional neural nets (CNNs) were originally designed to look for the same patterns in every block of pixels in a digital image. But they can also be applied to acoustic signals, which can be represented as two-dimensional mappings of time against frequency-based “features”. By restricting an audio-processing CNN’s search only to the feature ranges where a particular pattern is likely to occur, the researchers make it much more computationally efficient. This could make audio processing more practical for power-constrained devices.
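The efficiency gain is easy to see in a toy sketch (an assumption-laden illustration, not the paper's model): convolving a kernel only over the frequency rows where a pattern can occur does proportionally less work than sweeping the full spectrogram, and yields the same values on the rows it does cover.

```python
import numpy as np

def subband_conv(spectrogram, kernel, band):
    """Valid-mode 2-D convolution restricted to one frequency sub-band.

    spectrogram: 2-D array of shape (freq_bins, time_frames)
    band: (low_bin, high_bin) rows where the target pattern may occur
    """
    lo, hi = band
    sub = spectrogram[lo:hi]              # search only the sub-band
    kf, kt = kernel.shape
    out = np.empty((sub.shape[0] - kf + 1, sub.shape[1] - kt + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(sub[i:i + kf, j:j + kt] * kernel)
    return out
```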

A Study for Improving Device-Directed Speech Detection toward Frictionless Human-Machine Interaction
Che-Wei Huang, Roland Maas, Sri Harish Mallidi, Björn Hoffmeister

This paper is an update of prior work on detecting device-directed speech, or identifying utterances intended for Alexa. The researchers find that labeling dialogue turns (distinguishing initial utterances from subsequent utterances) and using signal representations based on Fourier transforms rather than mel-frequencies improve accuracy. They also find that, among the features extracted from speech recognizers that the system considers, confusion networks, which represent word probabilities at successive sentence positions, have the most predictive power.
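A confusion network can be pictured as one word-probability distribution per sentence position. The sketch below (toy data, not the paper's features) shows the kind of per-position signals, such as best-word probability and entropy, that a downstream classifier could consume.

```python
import math

# A toy confusion network: one word distribution per sentence position.
confusion_network = [
    {"play": 0.9, "pray": 0.1},
    {"some": 0.6, "sun": 0.4},
    {"music": 0.97, "magic": 0.03},
]

def slot_features(cn):
    """Per-position features: best-word probability and entropy."""
    feats = []
    for slot in cn:
        best = max(slot.values())
        ent = -sum(p * math.log(p) for p in slot.values() if p > 0)
        feats.append((best, ent))
    return feats
```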

Automatic Speech Recognition (ASR)

Acoustic Model Bootstrapping Using Semi-Supervised Learning
Langzhou Chen, Volker Leutnant

The researchers propose a method for selecting machine-labeled utterances for semi-supervised training of an acoustic model, the component of an ASR system that takes an acoustic signal as input. First, for each training sample, the system uses the existing acoustic model to identify the two most probable word-level interpretations of the signal at each position in the sentence. Then it finds examples in the training data that either support or contradict those probability estimates, which it uses to adjust the uncertainty of the ASR output. Samples that yield significant reductions in uncertainty are preferentially selected for training.
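The selection step can be sketched as follows. This is a simplified stand-in for the paper's procedure: it assumes each sample carries the top-hypothesis posterior before and after the evidence adjustment, and keeps the samples whose entropy dropped most.

```python
import math

def binary_entropy(p):
    """Uncertainty between the two most probable interpretations."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_for_training(samples, k):
    """Keep the k samples with the largest uncertainty reduction.

    samples: list of (utterance_id, p_before, p_after), where p_* is the
    posterior of the top hypothesis before/after adjustment.
    """
    scored = [(binary_entropy(pb) - binary_entropy(pa), uid)
              for uid, pb, pa in samples]
    scored.sort(reverse=True)
    return [uid for _, uid in scored[:k]]
```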

Improving ASR Confidence Scores for Alexa Using Acoustic and Hypothesis Embeddings
Prakhar Swarup, Roland Maas, Sri Garimella, Sri Harish Mallidi, Björn Hoffmeister

Speech recognizers assign probabilities to different interpretations of acoustic signals, and these probabilities can serve as inputs to a machine learning model that assesses the recognizer’s confidence in its classifications. The resulting confidence scores can be useful to other applications, such as systems that select machine-labeled training data for semi-supervised learning. The researchers append embeddings — fixed-length vector representations — of both the raw acoustic input and the speech recognizer’s best estimate of the word sequence to the inputs to a confidence-scoring network. The result: a 6.5% reduction in equal-error rate (the error rate at the decision threshold where the false-negative and false-positive rates are equal).
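For readers unfamiliar with the metric, a minimal equal-error-rate computation looks like this: sweep a threshold over the scores and report the operating point where the false-negative and false-positive rates are closest to equal.

```python
def equal_error_rate(positive_scores, negative_scores):
    """Return the error rate at the threshold where false-negative
    and false-positive rates are (closest to) equal."""
    best = None
    for t in sorted(set(positive_scores) | set(negative_scores)):
        fnr = sum(s < t for s in positive_scores) / len(positive_scores)
        fpr = sum(s >= t for s in negative_scores) / len(negative_scores)
        gap = abs(fnr - fpr)
        if best is None or gap < best[0]:
            best = (gap, (fnr + fpr) / 2)
    return best[1]
```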

Multi-Dialect Acoustic Modeling Using Phone Mapping and Online I-Vectors
Harish Arsikere, Ashtosh Sapru, Sri Garimella

Multi-dialect acoustic models, which help convert multi-dialect speech signals to words, are typically neural networks trained on pooled multi-dialect data, with separate output layers for each dialect. The researchers show that mapping the phones — the smallest phonetic units of speech — of each dialect to those of the others offers comparable results with shorter training times and better parameter sharing. They also show that recognition accuracy can be improved by adapting multi-dialect acoustic models, on the fly, to a target speaker.
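At its simplest, phone mapping is a table lookup from one dialect's phone inventory into another's, which is what lets the dialects share model outputs. The inventory below is hypothetical, for illustration only.

```python
# Hypothetical mapping between two dialects' phone inventories, so both
# can share a single acoustic-model output layer.
US_TO_GB = {"ae": "a", "ou": "@U"}

def map_pronunciation(phones, phone_map):
    """Rewrite a phone sequence in the target dialect's inventory,
    leaving phones that already match unchanged."""
    return [phone_map.get(p, p) for p in phones]
```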

Neural Machine Translation for Multilingual Grapheme-to-Phoneme Conversion
Alex Sokolov, Tracy Rohlin, Ariya Rastrow

Grapheme-to-phoneme models, which translate written words into their phonetic equivalents (“echo” to “E k oU”), enable speech recognizers to handle words they haven’t seen before. The researchers train a single neural model to handle grapheme-to-phoneme conversion in 18 languages. The results are comparable to those of state-of-the-art single-language models for languages with abundant training data and better for languages with sparse data. Multilingual models are more flexible and easier to maintain in production environments.
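A common way to make one model serve many languages is to prepend a language token to the input sequence. The sketch below substitutes a lookup table for the neural sequence-to-sequence model, and the German entry is a hypothetical example, but it shows the conditioning idea: the same spelling maps to different phones depending on the token.

```python
# A lookup table stands in for the multilingual seq2seq G2P model here;
# the key point is the language token that conditions the output.
PRONUNCIATIONS = {
    ("<en>", "echo"): ["E", "k", "oU"],
    ("<de>", "echo"): ["E", "C", "o"],   # hypothetical German entry
}

def g2p(word, language_token):
    """Return the phone sequence for (language token, word)."""
    return PRONUNCIATIONS.get((language_token, word.lower()))
```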

Scalable Multi Corpora Neural Language Models for ASR
Anirudh Raju, Denis Filimonov, Gautam Tiwari, Guitang Lan, Ariya Rastrow

Language models, which compute the probability of a given sequence of words, help distinguish between different interpretations of speech signals. Neural language models promise greater accuracy than existing models, but they’re difficult to incorporate into real-time speech recognition systems. The researchers describe several techniques to make neural language models practical, from a technique for weighting training samples from out-of-domain data sets to noise contrastive estimation, which turns the calculation of massive probability distributions into simple binary decisions.
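Noise contrastive estimation can be sketched in a few lines. Under the standard NCE formulation (simplified here; not the paper's exact objective), the model scores one observed word against k words drawn from a noise distribution, and the loss is an ordinary binary logistic loss on "data or noise?" decisions — no normalization over the full vocabulary required.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def nce_loss(data_score, data_noise_logprob, noise_scores, noise_logprobs, k):
    """Binary 'data vs. noise' loss on unnormalized model scores.

    *_score: the model's unnormalized log-score for a word.
    *_noise_logprob: that word's log-probability under the noise distribution.
    """
    # Posterior that the observed word came from the data distribution.
    p_data = sigmoid(data_score - math.log(k) - data_noise_logprob)
    loss = -math.log(p_data)
    for s, lq in zip(noise_scores, noise_logprobs):
        p_noise_as_data = sigmoid(s - math.log(k) - lq)
        loss -= math.log(1.0 - p_noise_as_data)
    return loss
```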

Natural-Language Understanding

Neural Named Entity Recognition from Subword Units
Abdalghani Abujabal, Judith Gaspers

Named-entity recognition is crucial to voice-controlled systems — as when you tell Alexa “Play ‘Spirit’ by Beyoncé”. A neural network that recognizes named entities typically has dedicated input channels for every word in its vocabulary. This has two drawbacks: (1) the network grows extremely large, which makes it slower and more memory intensive, and (2) it has trouble handling unfamiliar words. The researchers trained a named-entity recognizer that instead takes subword units — characters, phonemes, and bytes — as inputs. It offers comparable performance with a vocabulary of only 332 subwords, versus 74,000-odd words.
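The vocabulary-size and coverage tradeoff is easy to demonstrate with a toy corpus (illustrative only, using characters as the subword unit): a word never seen in training is out-of-vocabulary at the word level, but its characters are almost always covered.

```python
def build_vocab(corpus, unit="word"):
    """Build a word-level or character-level input vocabulary."""
    tokens = set()
    for sentence in corpus:
        if unit == "word":
            tokens.update(sentence.split())
        else:  # subword: here, individual characters
            tokens.update(sentence.replace(" ", ""))
    return sorted(tokens)
```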

Dialogue Management

HyST: A Hybrid Approach for Flexible and Accurate Dialogue State Tracking
Rahul Goel, Shachi Paul, Dilek Hakkani-Tür

Dialogue-based computer systems need to track “slots” — types of entities mentioned in conversation, such as movie names — and their values — such as Avengers: Endgame. Training a machine learning system to decide whether to pull candidate slot values from prior conversation or compute a distribution over all possible slot values improves slot-tracking accuracy by 24% over the best-performing previous system.
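The hybrid decision can be caricatured as follows (a simplified sketch with made-up scores, not the paper's architecture): when a copy mechanism is confident the value was already mentioned, pull it from conversation history; otherwise fall back to a distribution over all possible slot values.

```python
def track_slot(history_candidates, open_vocab_dist, copy_confidence,
               threshold=0.5):
    """Hybrid slot tracking: copy from history when confident,
    otherwise pick from an open distribution over all values."""
    if copy_confidence >= threshold and history_candidates:
        return max(history_candidates, key=history_candidates.get)
    return max(open_vocab_dist, key=open_vocab_dist.get)
```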

Towards Universal Dialogue Act Tagging for Task-Oriented Dialogues
Shachi Paul, Rahul Goel, Dilek Hakkani-Tür

Dialogue-based computer systems typically classify utterances by “dialogue act” — such as requesting, informing, and denying — as a way of gauging progress toward a conversational goal. As a first step in developing a system that will automatically label dialogue acts in human-human conversations (to, in turn, train a dialogue-act classifier), the researchers create a “universal tagging scheme” for dialogue acts. They use this scheme to reconcile the disparate tags used in different data sets.
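In practice, such a scheme reduces to a per-corpus mapping from native labels onto one shared inventory. The corpus names and tags below are invented for illustration; the universal tag names are not the paper's actual inventory.

```python
# Toy reconciliation of disparate dialogue-act tags (hypothetical labels).
UNIVERSAL_MAP = {
    "corpusA": {"req": "REQUEST", "inf": "INFORM", "neg": "DENY"},
    "corpusB": {"ask": "REQUEST", "tell": "INFORM"},
}

def to_universal(corpus, tag):
    """Map a corpus-native dialogue-act tag onto the shared inventory."""
    return UNIVERSAL_MAP[corpus].get(tag, "OTHER")
```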

Topical-Chat: Towards Knowledge-Grounded Open-Domain Conversations
Karthik Gopalakrishnan, Behnam Hedayatnia, Qinlang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, Dilek Hakkani-Tür

The researchers report a new data set, which grew out of the Alexa Prize competition and is intended to advance research on AI agents that engage in social conversations. Pairs of workers recruited through Mechanical Turk were given information on topics that arose frequently during Alexa Prize interactions and asked to converse about them, documenting the sources of their factual assertions. The researchers used the resulting data set to train a knowledge-grounded response generation network, and they report automated and human evaluation results as state-of-the-art baselines.

Text-to-Speech

Towards Achieving Robust Universal Neural Vocoding
Jaime Lorenzo Trueba, Thomas Drugman, Javier Latorre, Thomas Merritt, Bartosz Putrycz, Roberto Barra-Chicote, Alexis Moinet, Vatsal Aggarwal

A vocoder is the component of a speech synthesizer that takes the frequency-spectrum snapshots generated by other components and fills in the information necessary to convert them to audio. The researchers trained a neural-network-based vocoder using data from 74 speakers of both genders in 17 languages. The resulting “universal vocoder” outperformed speaker-specific vocoders, even on speakers and languages it had never encountered before and unusual tasks such as synthesized singing.

Fine-Grained Robust Prosody Transfer for Single-Speaker Neural Text-to-Speech
Viacheslav Klimkov, Srikanth Ronanki, Jonas Rohnke, Thomas Drugman

The researchers present a new technique for transferring prosody (intonation, stress, and rhythm) from a recording to a synthesized voice, enabling the user to choose whose voice will read recorded content, with inflections preserved. Where earlier prosody transfer systems used spectrograms — frequency spectrum snapshots — as inputs, the researchers’ system uses easily normalized prosodic features extracted from the raw audio.

Machine Learning

Two Tiered Distributed Training Algorithm for Acoustic Modeling
Pranav Ladkat, Oleg Rybakov, Radhika Arava, Sree Hari Krishnan Parthasarathi, I-Fan Chen, Nikko Strom

When neural networks are trained on large data sets, the training needs to be distributed, or broken up across multiple processors. A novel combination of two state-of-the-art distributed-learning algorithms — GTC and BMUF — achieves both higher accuracy and more-efficient training than either method alone when learning is distributed across 128 parallel processors.

The researchers' new method splits distributed processors into groups, and within each group, the processors use the highly accurate GTC method to synchronize their models. At regular intervals, designated representatives from all the groups use a different method — BMUF — to share their models and update them accordingly. Finally, each representative broadcasts its updated model to the rest of its group.
Animation by Nick Little
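The two-tier structure can be simulated with plain parameter averaging. This is a deliberately simplified sketch: a within-group average stands in for GTC's synchronous updates, and a cross-group average among representatives stands in for BMUF (which in reality uses block momentum rather than a plain mean).

```python
import numpy as np

def two_tier_sync(models, group_size):
    """Two-tiered synchronization: groups sync internally, then one
    representative per group syncs across groups and broadcasts back."""
    models = [np.asarray(m, dtype=float) for m in models]
    groups = [models[i:i + group_size]
              for i in range(0, len(models), group_size)]
    # Tier 1: exact synchronization inside each group (GTC stand-in).
    reps = [np.mean(g, axis=0) for g in groups]
    # Tier 2: representatives average across groups (BMUF stand-in).
    global_model = np.mean(reps, axis=0)
    # Broadcast: every worker ends up with the shared model.
    return [global_model.copy() for _ in models]
```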

One-vs-All Models for Asynchronous Training: An Empirical Analysis
Rahul Gupta, Aman Alok, Shankar Ananthakrishnan

A neural network can be trained to perform multiple classifications at once: it might recognize multiple objects in an image, or assign multiple topic categories to a single news article. An alternative is to train a separate “one-versus-all” (OVA) classifier for each category, which classifies data as either in the category or out of it. The advantage of this approach is that each OVA classifier can be re-trained separately as new data becomes available. The researchers present a new metric that enables comparison of multiclass and OVA strategies, to help data scientists determine which is more useful for a given application.
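The operational advantage of OVA is visible even in a toy version (trivial keyword matchers stand in for real classifiers here): each category's scorer is independent, so one can be retrained without touching the others.

```python
class OneVsAll:
    """One independent binary scorer per category; any single category
    can be retrained alone. Keyword matchers stand in for real models."""

    def __init__(self):
        self.scorers = {}

    def set_category(self, name, keywords):
        self.scorers[name] = set(keywords)   # (re)train one category alone

    def predict(self, text):
        words = set(text.lower().split())
        return sorted(c for c, kw in self.scorers.items() if words & kw)
```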
