text ". the convolutional kernel is now a 2d object. a 3×3 kernel ω ∈ r3×3 applied to a 2d input comprising of elements x ij computes a single layer of hidden units h as: ij "" # x3 x3 hij = a β+ ωmnxi+m−2,j+n−2 , (10.6) m=1n=1 where ω are the entries of the convolutional kernel. this is simply a weighted sum mn overasquare3×3inputregion. thekernelistranslatedbothhorizontallyandvertically problem10.13 across the 2d input (figure 10.9) to create an output at each position. oftentheinputisanrgbimage,whichistreatedasa2dsignalwiththreechannels (figure 10.10). here, a 3×3 kernel would have 3×3×3 weights and be applied to the notebook10.3 threeinputchannelsateachofthe3×3positionstocreatea2doutputthatisthesame 2dconvolution height and width as the input image (assuming zero padding). to generate multiple problem10.14 output channels, we repeat this process with different kernel weights and append the resultstoforma3dtensor. ifthekernelissizek×k, andtherearec inputchannels, i appendixb.3 eachoutputchannelisaweightedsumofc ×k×k quantitiesplusonebias. itfollows i tensors that to compute c output channels, we need c ×c ×k×k weights and c biases. o i o o this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.4 downsampling and upsampling 171 figure10.92dconvolutionallayer. eachoutputh computesaweightedsumof ij the 3×3 nearest inputs, adds a bias, and passes the result through an activation function. a) here, the output h (shaded output) is a weighted sum of the nine 23 positionsfromx tox (shadedinputs). b)differentoutputsarecomputedby 12 34 translating the kernel across the image grid in two dimensions. c–d) with zero padding, positions beyond the image’s edge are considered to be zero. 10.4 downsampling and upsampling the network in figure 10.7 increased receptive field size by scaling down the representa- tion at each layer using stride two convolutions. we now consider methods for scaling down or downsampling 2d input representations. we also describe methods for scaling them back up (upsampling), which is useful when the output is also an image. finally, we consider methods to change the number of channels between layers. this is helpful when recombining representations from two branches of a network (chapter 11). 10.4.1 downsampling therearethreemainapproachestoscalingdowna2drepresentation. here,weconsider the most common case of scaling down both dimensions by a factor of two. first, we draft: please send errata to udlbookmail@gmail.com.172 10 convolutional networks figure 10.10 2d convolution applied to an image. the image is treated as a 2d inputwiththreechannelscorrespondingtothered,green,andbluecomponents. with a 3×3 kernel, each pre-activation in the first hidden layer is computed by pointwisemultiplyingthe3×3×3kernelweightswiththe3×3rgbimagepatch centered at the same position, summing, and adding the bias. to calculate all the pre-activations in the hidden layer, we “slide” the kernel over the image in bothhorizontalandverticaldirections. theoutputisa2dlayerofhiddenunits. to create multiple output channels, we would repeat this process with multiple kernels, resulting in a 3d tensor of hidden units at hidden layer h . 1 can sample every other position. when we use a stride of two, we effectively apply this problem10.15 method simultaneously with the convolution operation (figure 10.11a). second, max pooling retains the maximum of the 2×2 input values (figure 10.11b). 
10.4 Downsampling and upsampling

The network in figure 10.7 increased receptive field size by scaling down the representation at each layer using stride two convolutions. We now consider methods for scaling down or downsampling 2D input representations. We also describe methods for scaling them back up (upsampling), which is useful when the output is also an image. Finally, we consider methods to change the number of channels between layers. This is helpful when recombining representations from two branches of a network (chapter 11).

10.4.1 Downsampling

There are three main approaches to scaling down a 2D representation. Here, we consider the most common case of scaling down both dimensions by a factor of two. First, we can sample every other position. When we use a stride of two, we effectively apply this method simultaneously with the convolution operation (figure 10.11a) (problem 10.15). Second, max pooling retains the maximum of the 2×2 input values (figure 10.11b). This induces some invariance to translation; if the input is shifted by one pixel, many of these maximum values remain the same. Finally, mean pooling or average pooling averages the inputs.
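To make the three options concrete, the sketch below (again NumPy; the function names are ours, chosen for illustration) applies each one to a small array. Every option halves the height and width of the representation.

```python
import numpy as np

def subsample(x):
    """Keep every other position in each dimension (equivalent to a stride of two)."""
    return x[::2, ::2]

def max_pool_2x2(x):
    """Retain the maximum of each non-overlapping 2x2 block."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

def mean_pool_2x2(x):
    """Average each non-overlapping 2x2 block."""
    H, W = x.shape
    return x[:H // 2 * 2, :W // 2 * 2].reshape(H // 2, 2, W // 2, 2).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)
print(subsample(x))      # shape (2, 2): values at even rows and columns
print(max_pool_2x2(x))   # shape (2, 2): maximum of each 2x2 block
print(mean_pool_2x2(x))  # shape (2, 2): mean of each 2x2 block
```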