A version of the BERT language model that’s 20 times as fast

Determining the optimal architectural parameters reduces network size by 84% while improving performance on natural-language-understanding tasks.

In natural-language understanding (NLU), the Transformer-based BERT language model is king. Its high performance on multiple tasks has strongly influenced contemporary NLU research. 

On the other hand, it is a relatively big and slow model, which makes it unsuitable for some applications. Multiple efforts have been made to compress the BERT architecture, but the choice of architectural parameters (the number of layers, the number of processing nodes per layer, and so on) has been somewhat arbitrary, and the resulting models are rarely much better than the original at optimizing the balance between the model’s size, speed, and error rate.

A few weeks ago, we released part of the code for Bort, a highly optimized language model (LM) extracted from the BERT architecture through a combination of two rigorous algorithmic techniques especially designed for neural-network compression.

We tested Bort on 23 NLU tasks, and on 20 of them, it improved on BERT’s performance — by 31% in one case — even though it’s 16% the size and about 20 times as fast.

The code doesn’t contain the full versions of the algorithms we used to extract Bort from BERT, but I will discuss them here briefly.

Principled approach

A solid understanding of the intrinsic difficulty of a problem allows us to design efficient, correct algorithms that can be trusted to perform consistently. Think of trying to sort an array of numbers by trying out random permutations: this approach quickly becomes intractable as the size of the input grows. Thankfully, sorting is a well-studied problem with fast, correct solutions and nicely bounded runtime growth. Why can’t compressing BERT, or any network, be the same way?

We wanted to produce a version of BERT whose architectural parameters minimized its parameter size, inference speed, and error rate, and I wanted to show that these architectural parameters were optimal. I call this problem “optimal subarchitecture extraction”, or OSE. Note that this is a different compression approach from weight pruning, which heuristically removes connections from a previously trained network.

An algorithm solving OSE should also work for any input, not just BERT; a general-purpose algorithm would save a lot of time — and GPU cycles — otherwise spent on trial and error approaches.

Comparison of weight pruning (bottom left) and optimal subarchitecture extraction (or OSE, bottom right) for a toy source network (top). Weight pruning removes edges from a (usually trained) network, while OSE reparametrizes the layers. For example, the source network has two fully connected linear layers of dimensions 4x3 and 4x4 (second and third from the left), making its architectural parameters (2, 4, 3, 4, 4). The optimal subarchitecture ended up with two layers of dimensions 3x3 and 3x2, so the architectural parameters are (2, 3, 3, 3, 2). The parameter size was brought down from 2(4*3 + 4 + 4*4 + 4) = 72 to 2(3*3 + 3 + 3*2 + 2) = 40.
Credit: Glynis Condon
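
To make the counting argument concrete, here is a minimal Python sketch that reproduces the caption’s arithmetic. The (in_dim, out_dim) layer description and the factor of two are just the conventions used in the caption, not part of the extraction algorithm itself.

```python
def parameter_size(layers):
    """layers: list of (in_dim, out_dim) pairs for fully connected layers."""
    count = sum(i * o + o for i, o in layers)  # weights plus one bias per output unit
    return 2 * count                           # the caption doubles the total; kept for consistency

print(parameter_size([(3, 4), (4, 4)]))  # source network: 72
print(parameter_size([(3, 3), (3, 2)]))  # optimal subarchitecture: 40
```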

OSE is a computationally hard problem, so the best we can hope for is an algorithm that returns something in the ballpark of the optimum. But even a ballpark algorithm could be impractically time consuming with a large enough input. 

My research showed that there is an efficient algorithm for this problem whenever the functions being optimized — parameter size, inference speed, and error rate — have a polynomial correlation with the architectural parameters, meaning that they’re bounded by some polynomial function of those parameters.

This is easy to see for the parameter size of the network, since it boils down to a counting argument based on the number of nodes per layer, times the number of layers (see image caption above). The same argument can be made for the inference speed — the rate at which a network outputs a result given an input — as it is related to the parameter size. 

Under some circumstances, the network’s error rate has a similar correlation. Furthermore, whenever the “cost” associated with the first (call it A) and last (C) layers of the network is lower than that of the middle layers (B), the runtime of a solution will not explode. I call all these assumptions the ABnC property, which BERT turns out to have.
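
As a rough illustration of that cost condition (not the formal definition from the paper), the sketch below checks whether the first and last layers of a network are cheaper than every middle layer, given some per-layer cost estimates:

```python
def satisfies_abnc_costs(layer_costs):
    """Illustrative check only: the first (A) and last (C) layers are cheaper than
    every middle (B) layer. `layer_costs` could be per-layer parameter counts or
    FLOP estimates; the formal ABnC property in the paper is defined more carefully."""
    if len(layer_costs) < 3:
        return False
    first, *middle, last = layer_costs
    return all(first < b and last < b for b in middle)

print(satisfies_abnc_costs([10, 50, 60, 55, 12]))  # True: cheap first and last layers
print(satisfies_abnc_costs([80, 50, 60, 55, 12]))  # False: expensive first layer
```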

For inputs having the ABnC property, the algorithm I designed behaves like a fully polynomial-time approximation scheme, or FPTAS. Being an FPTAS means that an approximation parameter — which defines how far short of the optimal solution you’re willing to fall — can now be given as an input to the algorithm, and the algorithm’s execution time depends polynomially on that parameter. 

The more accurate you want the approximation to be, the longer the algorithm takes to execute. But at least the trade-off can be precisely specified, and the runtime of the algorithm is guaranteed to not grow too quickly.

I ran the FPTAS on BERT and obtained a set of architectural parameters (Bort), and from the proofs we knew that it would be Pareto optimal. That is, it optimized the balance between inference speed, parameter size, and error rate: a faster model would necessarily be bigger or more error prone; a more accurate model would be larger or slower. Any model that broke that balance would necessarily be suboptimal.
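
The sketch below is not the FPTAS; it only illustrates the optimality criterion. Given a handful of hypothetical candidate architectures scored on parameter size, inference time, and error rate, it keeps the ones that no other candidate dominates on all three objectives.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    params: int        # parameter size
    latency_ms: float  # inference time per example (lower is faster)
    error: float       # error rate

def dominates(a, b):
    """True if `a` is at least as good as `b` on every objective and strictly better on one."""
    no_worse = a.params <= b.params and a.latency_ms <= b.latency_ms and a.error <= b.error
    strictly_better = a.params < b.params or a.latency_ms < b.latency_ms or a.error < b.error
    return no_worse and strictly_better

def pareto_front(candidates):
    """Keep only the candidates that no other candidate dominates."""
    return [c for c in candidates if not any(dominates(other, c) for other in candidates)]

# Hypothetical, made-up numbers purely for illustration.
candidates = [
    Candidate("big-slow-accurate", params=340_000_000, latency_ms=900.0, error=0.10),
    Candidate("small-fast", params=56_000_000, latency_ms=45.0, error=0.12),
    Candidate("dominated", params=400_000_000, latency_ms=950.0, error=0.12),
]
print([c.name for c in pareto_front(candidates)])  # ['big-slow-accurate', 'small-fast']
```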

Generalizability

The true strength of BERT lies in its generalizability across tasks. BERT learns to represent words of a language as points in a shared space, where proximity in the space implies semantic similarity. That general model can then be fine-tuned on specific tasks, such as question answering or text classification.

To see whether Bort generalizes as well as BERT, we needed to pre-train it and then test it on several different natural-language-understanding tasks. The FPTAS doesn’t return a trained model, and it didn’t know that our ultimate goal was fine-tuning.

When fine-tuning, I opted to follow BERT’s approach and attach a single linear classifier to Bort. I assumed that a good LM would have no problem with this setup, but fine-tuning proved difficult, and several variations on this strategy (e.g., knowledge distillation, deeper classifiers, and hyperparameter search) failed to yield a model that reached the theoretical limits predicted by the FPTAS.
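
For readers who want to see what “attach a single linear classifier” typically looks like in code, here is a minimal PyTorch sketch. The encoder is a stand-in for Bort, and the first-token pooling is an illustrative choice rather than a detail taken from the paper.

```python
import torch
import torch.nn as nn

class LinearClassifierHead(nn.Module):
    """A single linear classifier on top of a sentence-level representation,
    in the spirit of BERT-style fine-tuning. `encoder` is a stand-in for Bort:
    any module mapping token IDs to hidden states of size `hidden_size`."""
    def __init__(self, encoder: nn.Module, hidden_size: int, num_labels: int):
        super().__init__()
        self.encoder = encoder
        self.classifier = nn.Linear(hidden_size, num_labels)

    def forward(self, input_ids: torch.Tensor, attention_mask: torch.Tensor = None):
        hidden_states = self.encoder(input_ids, attention_mask)  # (batch, seq_len, hidden_size)
        pooled = hidden_states[:, 0]     # first-token pooling, an assumed (common) choice
        return self.classifier(pooled)   # (batch, num_labels) logits
```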

In machine learning, it is crucial to have a good selection of features to make the data representative of the task, which in turn makes the task learnable. Deep networks operate in a hierarchical fashion, which makes them fantastic at selecting features automatically. If the model is too small and the data too scarce, however, the task may not be learnable. And Bort is, by design, a small model.

The Agora algorithm

To address this problem, I developed a second algorithm, called Agora, which leverages the development set for the task Bort is being fine-tuned on. In machine learning, a labeled data set is usually split into three components: the training set is used to train a model; the development (or dev) set is used to check for over-/underfitting; and the test set is used to assess the model’s generalizability.

With Agora, Bort is first fine-tuned on the training set, then applied to the data in the dev set. Agora finds the data points in the dev set that the fine-tuned Bort model labeled incorrectly and samples new points near them in a chosen representation space. Those samples are labeled by a second model and then added to the training set.
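
Schematically, one round of this loop looks like the sketch below. This is a simplified illustration of the description above, not the published algorithm; the fine-tuning, prediction, and sampling steps are left as caller-supplied placeholders.

```python
def agora_round(model, teacher, train_set, dev_set, fine_tune, predict, sample_near, n_per_point=5):
    """One schematic Agora round (an illustration of the description above,
    not the published algorithm). `fine_tune`, `predict`, and `sample_near`
    are caller-supplied placeholders: `sample_near(x, k)` draws k points near
    x in some representation space, and `teacher` is the second model that labels them."""
    fine_tune(model, train_set)
    mistakes = [x for x, y in dev_set if predict(model, x) != y]
    new_points = [(x_new, predict(teacher, x_new))
                  for x in mistakes
                  for x_new in sample_near(x, n_per_point)]
    return train_set + new_points  # augmented training set for the next round
```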

It might sound silly to add randomized points derived from the dev set to the training set. You might expect the fine-tuned model to overfit the dev set, so that it doesn’t generalize well to unfamiliar inputs — which would mean that it hadn’t actually learned.

But this is a powerful approach to situations with scarce or ill-formed data. The proof of this is non-trivial, but intuitively, this algorithm works because it “reconstructs” the input in a way that is equivalent, in a precise sense, to the distribution of the task we are modeling in the first place. I believe the best part about Agora is not its effectiveness but that it is another proof of what the great Patrick Winston showed fifty years ago: you can’t learn something that you do not already know almost completely.

In this toy example, we are attempting to learn a data set that looks like a torus. However, the training set is not representative of the distribution. Agora samples from the development set, generates new points, labels them, and adds them back to the training set. Given enough rounds, it is able to generate a data set learnable by any model — even a random guesser!

Credit: Glynis Condon

The application of both algorithms to BERT yielded Bort, which has an effective size (not counting the embedding layer) of 5.5% of the original BERT architecture and a net size of 16%. The effective size is the more important figure, because the first layer, by the ABnC property, is not as expensive as the other layers when performing inference. Indeed, Bort is up to 20 times faster, on a CPU, than BERT. It can also be pretrained much more rapidly than usual — likely because the FPTAS prefers faster-converging architectures. And it obtained improvements of up to 31%, absolute, with respect to BERT, across multiple NLU benchmarks.
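
As a practical aside, the two size figures can be computed along these lines for a BERT-style PyTorch model. The `embeddings` name prefix is a common convention in such implementations, not something specific to Bort.

```python
import torch.nn as nn

def net_and_effective_sizes(model: nn.Module, embedding_prefix: str = "embeddings"):
    """Net size counts every parameter; effective size excludes the embedding layer.
    The `embeddings` prefix is an assumed naming convention, not Bort-specific."""
    net = sum(p.numel() for p in model.parameters())
    effective = sum(p.numel() for name, p in model.named_parameters()
                    if not name.startswith(embedding_prefix))
    return net, effective
```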

Acknowledgments: Daniel J. Perry, my coauthor for the Bort paper.
