Accelerating parallel training of neural nets

Earlier this year, we reported a speech recognition system trained on a million hours of data, a feat made possible by semi-supervised learning, in which training data is annotated by machines rather than by people.

These sorts of massive machine learning projects are becoming more common, and they require distributing the training process across multiple processors. Otherwise, training becomes too time consuming.

In theory, doubling the number of processors should halve the training time, but in practice, it doesn’t work that way: the need to synchronize distributed computations results in some inevitable communication overhead. Parallelization schemes always involve some trade-off between training efficiency and the accuracy of the resulting model.

In a paper we’re presenting at this year’s Interspeech, we describe a new approach to parallelizing the training of neural networks that combines two state-of-the-art methods and improves on both. One of the existing methods prioritizes model accuracy, and the other prioritizes training efficiency. In tests that involved training an acoustic model — a vital component of a speech recognition system — our hybrid method proved both more accurate than the accuracy-prioritizing method and more efficient than the efficiency-prioritizing method.

The accuracy-prioritizing method was proposed in 2015 by Amazon senior principal scientist Nikko Strom, who’s also a coauthor on the new paper. At the time, the method — known as synchronous GTC, for gradient threshold compression — was a breakthrough, enabling parallelization to as many as 32 processors with little loss in model accuracy. Our experiments indicate, however, that beyond 32 processors, communications overhead can make GTC much less efficient.

When we needed to increase the processor count to handle a million hours of data, we switched to a different parallelization method, called BMUF, for blockwise model update filtering. BMUF scales better than GTC, but at the price of accuracy. In experiments we report in our new paper, at 64 processors, BMUF triples the training rate achievable with GTC — but it also triples the loss in accuracy.

In our new work, we split the distributed processors into groups, and each group performs GTC within itself. Every so often, however, the groups share their latest models using BMUF, which re-synchronizes models across all processors.

Our new method splits distributed processors into groups, and within each group, the processors use the highly accurate GTC method to synchronize their models. At regular intervals, designated representatives from all the groups share their models and update their own local models accordingly. Finally, each representative broadcasts its updated model to the rest of its group.
Animation by Nick Little
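
The overall schedule can be written down compactly. In the sketch below, each processor's model is just a numpy vector, and both the within-group GTC exchange and the cross-group BMUF step are reduced to plain averaging so the example stays self-contained; the real GTC and BMUF updates are sketched later in the article, and every name and parameter value here is illustrative rather than taken from the paper.

```python
import numpy as np

def hybrid_sync(models, step, group_size=8, bmuf_interval=50):
    """One synchronization round of the hybrid scheme: every group syncs
    internally at every step (GTC in the real method, a plain average here),
    and all groups sync with one another every `bmuf_interval` steps
    (BMUF in the real method)."""
    groups = [list(range(i, min(i + group_size, len(models))))
              for i in range(0, len(models), group_size)]
    # Within-group sync: stand-in for the compressed GTC gradient exchange.
    for group in groups:
        group_avg = np.mean([models[k] for k in group], axis=0)
        for k in group:
            models[k] = group_avg.copy()
    # Cross-group sync: stand-in for the BMUF exchange between group representatives.
    if step > 0 and step % bmuf_interval == 0:
        global_avg = np.mean([models[g[0]] for g in groups], axis=0)  # representatives' models
        for k in range(len(models)):
            models[k] = global_avg.copy()                             # broadcast back within each group
    return models
```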

A neural network consists of thousands or even millions of very simple processors, often called neurons. Each neuron receives data from several other neurons, processes it, and passes the results to still other neurons. Connections between neurons have associated weights, which determine how big a role the outputs of one neuron play in the computations performed by the next.
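
To make the role of the weights concrete, here is a single neuron's computation in a few lines of numpy; the particular weights, inputs, and nonlinearity are placeholders chosen only for illustration.

```python
import numpy as np

def neuron_output(inputs, weights, bias=0.0):
    """A single neuron: weight each incoming value, sum them, and pass the
    result through a simple nonlinearity before handing it to the next neurons."""
    return np.maximum(0.0, np.dot(weights, inputs) + bias)   # ReLU nonlinearity

# The weights decide how much each incoming value matters to this neuron:
print(neuron_output(np.array([0.2, 0.9]), np.array([1.5, -0.3])))
```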

During training, a network will process a small amount of data, known as a minibatch, evaluate the results, and update its weights accordingly. Then it will process the next minibatch, and so on.
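
A bare-bones version of that loop, using a linear model and a squared-error loss purely for illustration (the real system trains a deep acoustic model):

```python
import numpy as np

def train_single_worker(data, targets, batch_size=32, lr=0.01):
    """Minimal minibatch loop: process a small batch, compute the gradient,
    update the weights, and move on to the next batch."""
    w = np.zeros(data.shape[1])
    for start in range(0, len(data), batch_size):
        x, y = data[start:start + batch_size], targets[start:start + batch_size]
        grad = 2 * x.T @ (x @ w - y) / len(x)   # gradient of the squared-error loss
        w -= lr * grad                          # weight update from this minibatch
    return w
```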

The natural way to parallelize this procedure is to give each processor its own copy of the initial network weights and its own set of minibatches. After each minibatch, each processor will broadcast the updates it needs to make to its weights (technically, “gradients”) to all the other processors. A given processor simply combines all the updates it receives and applies them to its own, local copy of the network. Then it processes its next minibatch.
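
A toy simulation of that scheme, again with a linear model standing in for the network; each "worker" is just a loop index, and the broadcast is an in-memory average rather than network communication.

```python
import numpy as np

def train_data_parallel(shards, targets, lr=0.01):
    """Naive data parallelism: every worker keeps a full copy of the weights,
    computes a gradient on its own minibatch, and applies the combined
    gradient from all workers, so the copies never drift apart."""
    n_workers = len(shards)
    dim = shards[0][0].shape[1]
    weights = [np.zeros(dim) for _ in range(n_workers)]      # one replica per worker
    for step in range(min(len(s) for s in shards)):
        grads = []
        for k in range(n_workers):
            x, y = shards[k][step], targets[k][step]         # this worker's next minibatch
            grads.append(2 * x.T @ (x @ weights[k] - y) / len(x))
        combined = np.mean(grads, axis=0)                    # stand-in for broadcast-and-combine
        for k in range(n_workers):
            weights[k] -= lr * combined                      # every replica applies the same update
    return weights[0]
```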

This method is equivalent to training a model on a single processor. But broadcasting a full set of weight updates after each minibatch is so bandwidth intensive that it eats up the time savings from parallelization. GTC modifies this procedure in a few crucial ways.

First, it exploits the fact that the weight updates that follow the processing of a single minibatch tend to make major modifications to only a few neural connections. GTC requires the establishment of a threshold — the T in GTC — below which weight updates will not be broadcast, saving bandwidth. (Weight updates that fall below the threshold are, however, accumulated locally, where they may still factor into later computations.)

The weight threshold, denoted by the Greek letter tau, is determined empirically and varies from application to application. In our experiments, the optimal setting of tau turned out to be 8.

Next, when broadcasting its weight updates, each processor sends only one of two values, tau or -tau. Those two values can be represented by a single bit of information, compressing (the C in GTC) the update message. (Like updates that fall below the threshold, the residuals of updates above the threshold are accumulated locally and factor into later computations.)
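
A minimal numpy sketch of the two ideas just described: only updates whose accumulated magnitude crosses tau are sent, each as a single sign bit, and whatever is not sent stays in a local residual buffer. The function name, the dense encoding of the message, and the default value of tau are all illustrative rather than taken from the paper.

```python
import numpy as np

def gtc_compress(gradient, residual, tau=8.0):
    """Threshold-and-quantize a gradient for broadcast.

    `residual` accumulates everything not yet sent; only entries whose
    accumulated value crosses +/-tau are transmitted, and each transmitted
    entry is represented solely by its sign (i.e., as +tau or -tau)."""
    residual += gradient                      # fold the new gradient into the unsent remainder
    message = np.zeros_like(residual)
    message[residual >= tau] = tau            # each sent entry needs only one bit
    message[residual <= -tau] = -tau
    residual -= message                       # keep the leftover for later rounds
    return message, residual
```

Each receiving processor then applies the decoded tau and -tau values just as it would apply ordinary, uncompressed updates.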

These two modifications do sacrifice some accuracy. That’s why, in our experiments, we evaluate relative increase in error rate. All three methods we compare — GTC, BMUF, and our GTC-BMUF hybrid — increase error rate. The question is how much, and what we gain in efficiency in exchange.

With BMUF, each processor continually updates its own local copy of the neural model. After a fixed number of minibatches — say, 50 — it broadcasts its model, and all the processors update their models accordingly. This drastically cuts down on bandwidth consumption, but it decreases model accuracy.
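
The paragraph above describes only the periodic model exchange; the full BMUF method additionally filters the block-level update (the "filtering" in its name) with a momentum-like term. The sketch below includes a simple version of that filter, with names and hyperparameters chosen purely for illustration.

```python
import numpy as np

def bmuf_style_sync(local_models, global_model, prev_update, block_momentum=0.9):
    """One BMUF-style synchronization: average the workers' local models,
    filter the resulting block-level update with a momentum term, and hand
    the updated global model back to every worker."""
    avg = np.mean(local_models, axis=0)                      # average of all local copies
    block_update = avg - global_model                        # how far this block of minibatches moved the model
    update = block_momentum * prev_update + (1 - block_momentum) * block_update
    new_global = global_model + update                       # filtered global step
    new_locals = [new_global.copy() for _ in local_models]   # every worker restarts from the new global model
    return new_locals, new_global, update
```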

By combining these approaches — GTC locally, BMUF globally — we get the best of both worlds. On the acoustic-modeling task we used for testing, our hybrid is slightly less efficient than BMUF at 32 cores, offering a 31-fold speedup relative to single-core processing; BMUF’s speedup is actually superlinear, at 36.6-fold. But our method reduces the error rate, while BMUF increases it by 3.5%. GTC, with a 26-fold speedup, increases error rate by 1.4%.

At 64 cores, GTC offers the best accuracy, with an error rate increase of 2.8%, versus 3.1% for our method and 8.9% for BMUF. But GTC’s efficiency actually falls, to 17-fold, versus 57-fold for BMUF and 42-fold for our method.

At 128 cores, our method is the undisputed champion, offering a 97-fold speedup and a 4.7% error rate increase, versus 80-fold/9.6% for BMUF and 11-fold/15.6% for GTC.
