text ", we apply downsampling separately to each channel, so the output has half the width and height but the same number of channels. 10.4.2 upsampling the simplest way to scale up a network layer to double the resolution is to duplicate all the channels at each spatial position four times (figure 10.12a). a second method is max unpooling; this is used where we have previously used a max pooling operation for downsampling, and we distribute the values to the positions they originated from (figure 10.12b). a third approach uses bilinear interpolation to fill in the missing values between the points where we have samples. (figure 10.12c). a fourth approach is roughly analogous to downsampling using a stride of two. in notebook10.4 that method, there were half as many outputs as inputs, and for kernel size three, each downsampling &upsampling output was a weighted sum of the three closest inputs (figure 10.13a). in transposed convolution, this picture is reversed (figure 10.13c). there are twice as many outputs this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.4 downsampling and upsampling 173 figure 10.11 methods for scaling down representation size (downsampling). a) sub-sampling. theoriginal4×4representation(left)isreducedtosize2×2(right) byretainingeveryotherinput. colorsontheleftindicatewhichinputscontribute totheoutputsontheright. thisiseffectivelywhathappenswithakernelofstride two, except that the intermediate values are never computed. b) max pooling. each output comprises the maximum value of the corresponding 2×2 block. c) mean pooling. each output is the mean of the values in the 2×2 block. figure 10.12 methods for scaling up representation size (upsampling). a) the simplest way to double the size of a 2d layer is to duplicate each input four times. b) in networks where we have previously used a max pooling operation (figure10.11b),wecanredistributethevaluestothesamepositionstheyoriginally camefrom(i.e.,wherethemaximawere). thisisknownasmaxunpooling. c)a third option is bilinear interpolation between the input values. figure 10.13 transposed convolution in 1d. a) downsampling with kernel size three, stride two, and zero padding. each output is a weighted sum of three inputs (arrows indicate weights). b) this can be expressed by a weight matrix (same color indicates shared weight). c) in transposed convolution, each input contributesthreevaluestotheoutputlayer,whichhastwiceasmanyoutputsas inputs. d) the associated weight matrix is the transpose of that in panel (b). draft: please send errata to udlbookmail@gmail.com.174 10 convolutional networks as inputs, and each input contributes to three of the outputs. when we consider the associated weight matrix of this upsampling mechanism (figure 10.13d), we see that it is the transpose of the matrix for the downsampling mechanism (figure 10.13b). 10.4.3 changing the number of channels sometimes we want to change the number of channels between one hidden layer and the nextwithoutfurtherspatialpooling. thisisusuallysowecancombinetherepresentation with another parallel computation (see chapter 11). to accomplish this, we apply a convolution with kernel size one. each element of the output layer is computed by taking a weighted sum of all the channels at the same position (figure 10.14). we can repeatthismultipletimeswithdifferentweightstogenerateasmanyoutputchannelsas we need. the associated convolution weights have size 1×1×c ×c . hence, this is i o knownas1×1convolution. 
10.4.3 Changing the number of channels

Sometimes we want to change the number of channels between one hidden layer and the next without further spatial pooling. This is usually so we can combine the representation with another parallel computation (see chapter 11). To accomplish this, we apply a convolution with kernel size one. Each element of the output layer is computed by taking a weighted sum of all the channels at the same position (figure 10.14). We can repeat this multiple times with different weights to generate as many output channels as we need. The associated convolution weights have size $1\times 1\times C_i \times C_o$, where $C_i$ and $C_o$ are the numbers of input and output channels. Hence, this is known as 1×1 convolution. Combined with a bias and activation function, it is equivalent to running the same fully connected network on the channels at every position.

10.5 Applications

We conclude by describing three computer vision applications. We describe convolutional networks for image classification, where the goal is to assign the image to one of a predetermined set of categories. Then we consider object detection, where the goal is to identify multiple objects in an image and find the bounding box around each. Finally, we describe an early system for semantic segmentation