text "roomwassilentagain. istoodthereforafewmoments,tryingtomakesenseofwhathadjusthappened. thenirealizedthat thestudentswereallstaringatme,waitingformetosaysomething. itriedtothinkofsomethingwitty orclevertosay,butmymindwasblank. soijustsaid,“well,thatwasstrange,’andthenistartedmy lecture. figure 1.8 conditional text synthesis. given an initial body of text (in black), generative models of text can continue the string plausibly by synthesizing the “missing”remainingpartofthestring. generatedbygpt3(brownetal.,2020). figure 1.9 variation of the human face. the human face contains roughly 42 muscles, so it’s possible to describe most of the variation in images of the same personinthesamelightingwithjust42numbers. ingeneral,datasetsofimages, music, and text can be described by a relatively small number of underlying variables although it is typically more difficult to tie these to particular physical mechanisms. images from dynamic faces database (holland et al., 2019). 1.2.2 latent variables some (but not all) generative models exploit the observation that data can be lower dimensionalthantherawnumberofobservedvariablessuggests. forexample,thenum- ber of valid and meaningful english sentences is considerably smaller than the number of strings created by drawing words at random. similarly, real-world images are a tiny subsetoftheimagesthatcanbecreatedbydrawingrandomrgbvaluesforeverypixel. this is because images are generated by physical processes (see figure 1.9). thisleadstotheideathatwecandescribeeachdataexampleusingasmallernumber of underlying latent variables. here, the role of deep learning is to describe the mapping betweentheselatentvariablesandthedata. thelatentvariablestypicallyhaveasimple draft: please send errata to udlbookmail@gmail.com.10 1 introduction figure 1.10latentvariables. manygenerativemodelsuseadeeplearningmodel to describe the relationship between a low-dimensional “latent” variable and the observed high-dimensional data. the latent variables have a simple probability distributionbydesign. hence,newexamplescanbegeneratedbysamplingfrom thesimpledistributionoverthelatentvariablesandthenusingthedeeplearning model to map the sample to the observed data space. figure 1.11 image interpolation. in each row the left and right images are real and the three images in between represent a sequence of interpolations created byagenerativemodel. thegenerativemodelsthatunderpintheseinterpolations havelearnedthatallimagescanbecreatedbyasetofunderlyinglatentvariables. byfindingthesevariablesforthetworealimages,interpolatingtheirvalues,and then using these intermediate variables to create new images, we can generate intermediate results that are both visually plausible and mix the characteristics of the two original images. top row adapted from sauer et al. (2022). bottom row adapted from ramesh et al. (2022). this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.1.3 reinforcement learning 11 figure 1.12 multiple images generated from the caption “a teddy bear on a skateboard in times square.” generated by dall·e-2 (ramesh et al., 2022). probability distribution by design. by sampling from this distribution and passing the result through the deep learning model, we can create new samples (figure 1.10). thesemodelsleadtonewmethodsformanipulatingrealdata. forexample,consider findingthelatentvariablesthatunderpintworealexamples. 
1.2.3 Connecting supervised and unsupervised learning

Generative models with latent variables can also benefit supervised learning models where the outputs have structure (figure 1.4). For example, consider learning to predict the images corresponding to a caption (figure 1.12). Rather than directly mapping the text input to an image, we can learn a relation between the latent variables that explain the text and the latent variables that explain the image.

Figure 1.12 Multiple images generated from the caption “A teddy bear on a skateboard in Times Square.” Generated by DALL·E-2 (Ramesh et al., 2022).
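As a rough sketch of this idea (not the method of any particular system), the code below relates the two sets of latent variables with a small “prior” network: a text encoder produces the latent variables that explain the caption, the prior maps them to latent variables that explain an image, and an image decoder maps that latent into data space. Every component and dimension here is an illustrative placeholder; in a real system each network would be trained, and the caption would be embedded by a learned text model.

import torch
import torch.nn as nn

text_feature_dim, text_latent_dim, image_latent_dim = 300, 128, 42
image_dim = 64 * 64 * 3

# Placeholder components; each would be a trained model in practice.
text_encoder = nn.Sequential(          # caption features -> text latent variables
    nn.Linear(text_feature_dim, text_latent_dim), nn.ReLU(),
)
prior = nn.Sequential(                 # text latent variables -> image latent variables
    nn.Linear(text_latent_dim, 256), nn.ReLU(),
    nn.Linear(256, image_latent_dim),
)
image_decoder = nn.Sequential(         # image latent variables -> flattened image
    nn.Linear(image_latent_dim, image_dim), nn.Sigmoid(),
)

caption_features = torch.randn(1, text_feature_dim)  # stand-in for an embedded caption
z_text = text_encoder(caption_features)              # latent variables explaining the text
z_image = prior(z_text)                              # related latent variables explaining the image
generated_image = image_decoder(z_image)             # map the image latent into data space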