Syncfree optimizers and compiler improvements for efficient model training

2023
Deep learning training compilers accelerate training and make it more resource-efficient. We present a deep learning compiler for training with three main features: a syncfree optimizer, compiler caching, and multi-threaded execution. We demonstrate speedups on common language and vision problems against native and XLA baselines implemented in PyTorch.
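To illustrate the syncfree idea, here is a minimal conceptual sketch (not the paper's implementation) in NumPy. With dynamic loss scaling, a conventional optimizer step copies an inf/nan flag to the host and branches in Python, forcing a device synchronization; a syncfree step instead keeps the flag as a device value and gates the parameter update arithmetically, so the step can be enqueued without waiting. The function name and SGD rule below are illustrative assumptions.

```python
import numpy as np

def syncfree_sgd_step(params, grads, lr):
    """Conceptual syncfree SGD step: the overflow flag never leaves the
    'device'; it zeroes the update instead of triggering a host-side branch."""
    # 1.0 if any gradient is inf/nan, else 0.0 -- computed as an array op,
    # standing in for a flag that would stay resident on the accelerator.
    found_inf = np.float32(not np.all(np.isfinite(grads)))
    # nan_to_num keeps the arithmetic finite; (1 - found_inf) makes the
    # whole update a no-op when an overflow was detected.
    return params - lr * (1.0 - found_inf) * np.nan_to_num(grads)

p = np.array([1.0, 2.0], dtype=np.float32)
updated = syncfree_sgd_step(p, np.array([0.5, 0.5], dtype=np.float32), lr=0.1)
skipped = syncfree_sgd_step(p, np.array([np.inf, 0.5], dtype=np.float32), lr=0.1)
```

In the overflow case the parameters are returned unchanged, matching the "skip step" behavior of dynamic loss scaling, but without the blocking device-to-host copy.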