and to reduce the potential for harm. We should consider what kind of organizations we are prepared to work for. How serious are they in their commitment to reducing the potential harms of AI? Are they simply “ethics-washing” to reduce reputational risk, or do they actually implement mechanisms to halt ethically suspect projects? All readers are encouraged to investigate these issues further. The online course at https://ethics-of-ai.mooc.fi/ is a useful introductory resource. If you are a professor teaching from this book, you are encouraged to raise these issues with your students. If you are a student taking a course where this is not done, then lobby your professor to make this happen. If you are deploying or researching AI in a corporate environment, you are encouraged to scrutinize your employer’s values and to help change them (or leave) if they are wanting.

This work is subject to a Creative Commons CC-BY-NC-ND license. (C) MIT Press.

1.5 Structure of book

The structure of the book follows the structure of this introduction. Chapters 2–9 walk through the supervised learning pipeline. We describe shallow and deep neural networks and discuss how to train them and measure and improve their performance. Chapters 10–13 describe common architectural variations of deep neural networks, including convolutional networks, residual connections, and transformers. These architectures are used across supervised, unsupervised, and reinforcement learning. Chapters 14–18 tackle unsupervised learning using deep neural networks. We devote a chapter each to four modern deep generative models: generative adversarial networks, variational autoencoders, normalizing flows, and diffusion models. Chapter 19 is a brief introduction to deep reinforcement learning. This is a topic that easily justifies its own book, so the treatment is necessarily superficial. However, this treatment is intended to be a good starting point for readers unfamiliar with this area.
Despite the title of this book, some aspects of deep learning remain poorly understood. Chapter 20 poses some fundamental questions. Why are deep networks so easy to train? Why do they generalize so well? Why do they need so many parameters? Do they need to be deep? Along the way, we explore unexpected phenomena such as the structure of the loss function, double descent, grokking, and lottery tickets. The book concludes with chapter 21, which discusses ethics and deep learning.

1.6 Other books

This book is self-contained but is limited to coverage of deep learning. It is intended to be the spiritual successor to Deep Learning (Goodfellow et al., 2016), which is a fantastic resource but does not cover recent advances. For a broader look at machine learning, the most up-to-date and encyclopedic resource is Probabilistic Machine Learning (Murphy, 2022, 2023). However, Pattern Recognition and Machine Learning (Bishop, 2006) is still an excellent and relevant book.

If you enjoy this book, then my previous volume, Computer Vision: Models, Learning, and Inference (Prince, 2012), is still worth reading. Some parts have dated badly, but it contains a thorough introduction to probability, including Bayesian methods, and good introductory coverage of latent variable models, geometry for computer vision, Gaussian processes, and graphical models. It uses identical notation to this book and can be found online. A detailed treatment of graphical models can be found in Probabilistic Graphical Models: Principles and Techniques (Koller & Friedman, 2009), and Gaussian processes are covered by Gaussian Processes for Machine Learning (Williams & Rasmussen, 2006).

For background mathematics, consult Mathematics for Machine Learning (Deisenroth et al., 2020). For a more coding-oriented approach, consult Dive into Deep Learning (Zhang et al., 2023). The best overview for computer vision is Szeliski (2022), and there is also the impending book Foundations of Computer Vision (Torralba et al., 2024).
A good starting point to learn about graph neural networks is Graph Representation Learning (Hamilton, 2020). The definitive work on reinforcement learning is Reinforcement Learning: An Introduction (Sutton & Barto, 2018). A good initial resource is Foundations of Deep Reinforcement Learning (Graesser & Keng, 2019).

Draft: please send errata to udlbookmail@gmail.com.

1.7 How to read this book

Most remaining chapters in this