Transformers (Vaswani et al., 2017) have gradually become a key component of many state-of-the-art natural-language-representation models. A recent Transformer-based model, BERT (Devlin et al., 2018), achieved state-of-the-art results on various natural-language-processing benchmarks, including GLUE, SQuAD v1.1, and SQuAD v2.0. However, this model is computationally expensive and has a huge number of parameters. In this work, we revisit the architecture choices of BERT in an effort to obtain a lighter model. We focus on reducing the number of parameters, yet our methods can be applied to other objectives, such as FLOPs or latency. We show that much more efficient light BERT models can be obtained by reducing algorithmically chosen, correct architecture design dimensions rather than by reducing the number of Transformer encoder layers. In particular, our schuBERT achieves 6.6% higher average accuracy on the GLUE and SQuAD datasets than BERT with three encoder layers, while having the same number of parameters.
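To make the distinction between shrinking depth and shrinking other design dimensions concrete, the following is a minimal sketch of how a parameter budget can be met either way. The hyperparameter values and the simplified per-layer formula are illustrative assumptions, not figures or methods taken from the paper.

```python
def encoder_params(num_layers, hidden, ff_inner):
    """Rough parameter count for a BERT-style encoder stack.

    Per layer: the four self-attention projection matrices (4 * hidden^2)
    plus the two feed-forward matrices (2 * hidden * ff_inner).
    Embeddings, biases, and LayerNorm terms are ignored for simplicity.
    """
    per_layer = 4 * hidden * hidden + 2 * hidden * ff_inner
    return num_layers * per_layer

# Option A: keep BERT-base width but cut the depth to three encoder layers.
depth_reduced = encoder_params(num_layers=3, hidden=768, ff_inner=3072)

# Option B: keep all twelve layers but shrink width-related design dimensions
# (hidden size, feed-forward inner size) to land in a similar budget.
# The specific values 432 and 1296 are hypothetical, chosen only to match
# Option A's budget in this simplified count.
width_reduced = encoder_params(num_layers=12, hidden=432, ff_inner=1296)

print(f"3-layer, full-width encoder:    {depth_reduced / 1e6:.1f}M params")
print(f"12-layer, reduced-width encoder: {width_reduced / 1e6:.1f}M params")
```

Both configurations land near the same parameter budget (roughly 21M vs. 22M in this simplified count), which is the kind of like-for-like comparison the paper's claim rests on; the paper's contribution is choosing which dimensions to shrink algorithmically rather than by hand.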