Over the past few years, a team of Amazon scientists has been developing a book that is gaining popularity with students and developers attracted to the booming field of deep learning, a subset of machine learning focused on large-scale artificial neural networks. Called Dive into Deep Learning, the book arrives in a unique form factor, integrating text, mathematics, and runnable code. Drafted entirely in Jupyter notebooks, the book is a fully open-source living document, with each update automatically propagating to the PDF, HTML, and notebook versions.
Its authors are Aston Zhang, an AWS senior applied scientist; Zachary Lipton, an AWS scientist and an assistant professor of Operations Research and Machine Learning at Carnegie Mellon University; Mu Li, an AWS principal scientist; and Alex Smola, an AWS vice president and distinguished scientist.
Dive into Deep Learning now supports @TensorFlow. The first 7 chapters are released today with more on their way. Special thanks to @TerryTangYuan and @NeurlAP for contributing code and our Google friends @JeffDean @DynamicWebPaige and @tomerk11. More at https://t.co/MzhIsaxnwY.
— D2L.ai (@D2L_ai) July 7, 2020
Recently the authors added two programming frameworks to their book: PyTorch and TensorFlow. That gives the book—originally written for MXNet—even broader appeal within the open-source machine-learning community of students, developers, and scientists. The book is also incorporated into Amazon Machine Learning University courseware.
Amazon Science previously spoke with the authors about their book, and we recently reconnected with them to learn about the significance of the new frameworks they’ve added.
Q. What’s the significance of adding PyTorch and TensorFlow implementations to Dive into Deep Learning?
Mu Li: The book is designed to teach people the different algorithms used in machine learning. A big asset of the book is that we provide all the code. Originally, we used MXNet because it’s a major framework and easy to learn. But then we started getting a lot of requests for PyTorch and TensorFlow implementations. So, we decided to add them to the book.
Aston Zhang: Another factor is that for machine learning practitioners, it’s not enough to know just one framework. That’s because a researcher may propose a new model or algorithm and provide an implementation in only one framework. So, if you don’t know that framework, you can’t work with the model. Dive into Deep Learning now provides a way to address those different implementations. It fixes a pain point for our readers.
Zachary Lipton: Like any good product, you have to pay attention to what people are doing. And the audience for a book that’s only available in one framework is somewhat limited. Already, a great team from IIT Roorkee asked us if they could translate the code portions of our book, yielding a PyTorch version, and we gave it our blessing. We knew that a massive audience of students and practitioners would be excited for the PyTorch and TensorFlow versions.
Q. How does the change make the book better?
Alex Smola: The book is basically a collection of Jupyter notebooks – you can read the book in your web browser and run every code example in real time. Because we support multiple frameworks, we can have multiple code paths within each notebook, so you can compare both the implementations and the results they give, side by side. That’s very powerful as a teaching tool.
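A minimal sketch of what that side-by-side comparison looks like in practice (our illustration, not an excerpt from the book; it assumes all three frameworks are installed): the same automatic-differentiation exercise written three ways. Each snippet computes the gradient of y = 2x·x, which is 4x, so all three print [0, 4, 8, 12].

```python
# --- MXNet ---
from mxnet import autograd, np, npx
npx.set_np()
x = np.arange(4.0)
x.attach_grad()               # allocate storage for the gradient
with autograd.record():       # record operations for differentiation
    y = 2 * np.dot(x, x)
y.backward()
print(x.grad)                 # [ 0.  4.  8. 12.]

# --- PyTorch ---
import torch
x = torch.arange(4.0, requires_grad=True)
y = 2 * torch.dot(x, x)
y.backward()
print(x.grad)                 # tensor([ 0.,  4.,  8., 12.])

# --- TensorFlow ---
import tensorflow as tf
x = tf.Variable(tf.range(4, dtype=tf.float32))
with tf.GradientTape() as tape:   # record operations for differentiation
    y = 2 * tf.tensordot(x, x, axes=1)
print(tape.gradient(y, x))        # tf.Tensor([ 0.  4.  8. 12.], ...)
```

In the book’s rendered pages these variants appear as selectable tabs on the same page rather than one long script, which is what makes the comparison work as a teaching device.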
Mu Li: We feel that by adding PyTorch and TensorFlow to Dive into Deep Learning, we’ve made it the best textbook for learning about and executing machine learning and deep learning. It’s a textbook, but it also teaches you how to implement the code. Another thing is that some people already using PyTorch want to learn deep learning systematically. Now they can run different algorithms from scratch and learn how to do that in different frameworks.
Zachary Lipton: Nobody can survive as a professional in machine learning without the skills to work with multiple frameworks. You might learn in MXNet or TensorFlow, but then switch jobs and need to rapidly port those skills to a place that uses PyTorch, even if you’re not familiar with it. In general, it’s important for people to learn several languages.
Q. Is any one of the frameworks superior to the others?
Alex Smola: Each of them has some advantages over the other and given the state of the open-source landscape, those advantages constantly shift. They’re all competing with each other for which is the fastest, most usable, has the best data loaders and so on. At one point, people argued that philosophy was best written in German, and music best written in Italian. If you want to have the widest audience, you don’t want to limit yourself to one approach to doing things.
Dive into Deep Learning now supports @PyTorch. The first 8 chapters are ready with more on their way. Thanks to DSG IIT Roorkee @dsg_iitr, particularly @gollum_here who adapted the code into PyTorch. More at https://t.co/CbXbMUO3I9. @mli65 @smolix @zacharylipton @astonzhangAZ
— D2L.ai (@D2L_ai) June 4, 2020
Aston Zhang: We’re not asking our readers to use just one framework. We provide three implementations. Readers can click on each framework, learn how it works, and decide what works best for them. If you’re a new user, you can see the subtle differences between the three and compare their speed. Also, we separate text and code—the text is framework-neutral, but for the code we ask the community to contribute material. We’ve had people from Google, Alibaba, IIT, and elsewhere add material. For the first few chapters, Anirudh Dagar and Yuan Tang have contributed most of the PyTorch and TensorFlow adaptations. Many others have also helped with the adaptations to these frameworks.
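To make that separation concrete, here is roughly how framework-neutral text and per-framework code sit together in the book’s markdown sources. The `#@tab` directive comes from the d2l-book build tooling as we understand it, so treat the exact marker syntax as illustrative rather than authoritative.

````markdown
The framework-neutral prose lives outside the code blocks, and each
implementation of the same step is tagged with the framework it targets.

```{.python .input}
#@tab mxnet
from mxnet import np, npx
npx.set_np()
x = np.arange(12).reshape(3, 4)
```

```{.python .input}
#@tab pytorch
import torch
x = torch.arange(12).reshape(3, 4)
```

```{.python .input}
#@tab tensorflow
import tensorflow as tf
x = tf.reshape(tf.range(12), (3, 4))
```
````

The build then renders one tab per framework from a single source file, so contributors can add a new implementation without touching the surrounding text.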
Zachary Lipton: The book is starting to be useful as a Rosetta stone of sorts, allowing practitioners to see the best strategy for solving the same problem in multiple frameworks—MXNet, PyTorch, TensorFlow—without having to chase down incompatible and idiosyncratic variants on GitHub.
Q. Was it challenging to add the different frameworks to the book?
Mu Li: Yes! PyTorch and MXNet are similar, but TensorFlow is pretty different. Fortunately, TensorFlow 2.0 is very different from TensorFlow 1.0, and closer to MXNet.
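For readers who haven’t followed TensorFlow’s evolution: the biggest shift in TensorFlow 2.0 is that eager execution became the default, so code runs imperatively, line by line, the way MXNet and PyTorch code does. A small sketch of the difference (ours, not from the book):

```python
import tensorflow as tf

# TensorFlow 2.x executes eagerly by default: each operation runs
# immediately and returns concrete values, as in MXNet or PyTorch.
assert tf.executing_eagerly()

x = tf.constant([1.0, 2.0, 3.0])
y = x * 2 + 1
print(y.numpy())  # [3. 5. 7.] -- values available at once, no session

# Under TensorFlow 1.x, the lines above would only have built a symbolic
# graph; getting numbers out required something like:
#   with tf.Session() as sess:
#       print(sess.run(y))
```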
Alex Smola: The proper tuning and refinement of the models took quite a while; we had to ensure the TensorFlow implementations of modern convolutional neural networks were just as good as the ones in PyTorch and MXNet. That’s due to the different ways the frameworks implement things. And we still need to back-port the content into Mandarin. This isn’t a trivial endeavor, because there currently isn’t great tooling available to sync the text with the source code.
Q. What has been the response to the additions to Dive into Deep Learning?
Mu Li: Very good. In the past three months, compared with the prior three, we’ve seen about a 40 percent increase in users.
Q. What motivates you to continue improving Dive into Deep Learning?
Alex Smola: We write books because we want to teach and share content. It’s also our way of saying “thanks” to the machine-learning community. The book is a key enabler for spreading knowledge about machine learning and its applications much more widely. We want to make it easy for people to come in, learn about machine learning, and then surprise us with their additions to the book.
Zachary Lipton: I don't think anyone involved in the project thinks of it as a book that will someday be finished, in the traditional sense. Having everything online, we can update and add material much, much more quickly than if it were made from dead trees.
Aston Zhang: Every day we get feedback from users around the world. Their comments, suggestions, encouragement, and endorsements motivate me to continue improving our book.