text "used when inpainting missing pixels and account for the partial masking of the input. gated convolutions learn the mask from the previous layer (yu et al., 2019; chang et al., 2019b). hu et al. (2018b) propose squeeze-and-excitation networks which re-weight the channels using information pooled across all spatial positions. downsamplingandupsampling: averagepoolingdatesbacktoatleastlecunetal.(1989a) and max pooling to zhou & chellappa (1988). scherer et al. (2010) compared these methods and concluded that max pooling was superior. the max unpooling method was introduced by zeiler et al. (2011) and zeiler & fergus (2014). max pooling can be thought of as applying draft: please send errata to udlbookmail@gmail.com.182 10 convolutional networks figure 10.21imagenetperformance. eachcirclerepresentsadifferentpublished model. blue circles represent models that were state-of-the-art. models dis- cussed in this book are also highlighted. the alexnet and vgg networks were remarkable for their time but are now far from state of the art. resnet-200 and densenet are discussed in chapter 11. imagegpt, vit, swin, and davit are discussedinchapter12. adaptedfromhttps://paperswithcode.com/sota/image- classification-on-imagenet. an l∞ norm to the hidden units that are to be pooled. this led to applying other lk norms appendixb.3.2 (springenberg et al., 2015; sainath et al., 2013), although these require more computation and vectornorms are not widely used. zhang (2019) introduced max-blur-pooling, in which a low-pass filter is appliedbeforedownsamplingtopreventaliasing,andshowedthatthisimprovesgeneralization over translation of the inputs and protects against adversarial attacks (see section 20.4.6). shi et al. (2016) introduced pixelshuffle, which used convolutional filters with a stride of 1/s to scale up 1d signals by a factor of s. only the weights that lie exactly on positions are used to create the outputs, and the ones that fall between positions are discarded. this can be implemented by multiplying the number of channels in the kernel by a factor of s, where the sth output position is computed from just the sth subset of channels. this can be trivially extended to 2d convolution, which requires s2 channels. convolution in 1d and 3d: convolutionalnetworksareusuallyappliedtoimagesbuthave also been applied to 1d data in applications that include speech recognition (abdel-hamid etal.,2012),sentenceclassification(zhangetal.,2015;conneauetal.,2017),electrocardiogram classification (kiranyaz et al., 2015), and bearing fault diagnosis (eren et al., 2019). a survey of 1d convolutional networks can be found in kiranyaz et al. (2021). convolutional networks havealsobeenappliedto3ddata,includingvideo(jietal.,2012;sahaetal.,2016;tranetal., 2015) and volumetric measurements (wu et al., 2015b; maturana & scherer, 2015). invariance and equivariance: part of the motivation for convolutional layers is that they are approximately equivariant with respect to translation, and part of the motivation for max this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 183 pooling is to induce invariance to small translations. zhang (2019) considers the degree to which convolutional networks really have these properties and proposes the max-blur-pooling modification that demonstrably improves them. there is considerable interest in making net- works equivariant or invariant to other types of transformations, such as reflections, rotations, and scaling. 
Convolution in 1D and 3D: Convolutional networks are usually applied to images but have also been applied to 1D data in applications that include speech recognition (Abdel-Hamid et al., 2012), sentence classification (Zhang et al., 2015; Conneau et al., 2017), electrocardiogram classification (Kiranyaz et al., 2015), and bearing fault diagnosis (Eren et al., 2019). A survey of 1D convolutional networks can be found in Kiranyaz et al. (2021). Convolutional networks have also been applied to 3D data, including video (Ji et al., 2012; Saha et al., 2016; Tran et al., 2015) and volumetric measurements (Wu et al., 2015b; Maturana & Scherer, 2015).

Invariance and equivariance: Part of the motivation for convolutional layers is that they are approximately equivariant with respect to translation, and part of the motivation for max pooling is to induce invariance to small translations. Zhang (2019) considers the degree to which convolutional networks really have these properties and proposes the max-blur-pooling modification that demonstrably improves them. There is considerable interest in making networks equivariant or invariant to other types of transformations, such as reflections, rotations, and scaling. Sifre & Mallat (2013) constructed a system based on wavelets that induced both translational and rotational invariance in image patches and applied this to texture classification. Kanazawa et al. (2014) developed locally scale-invariant convolutional neural networks. Cohen & Welling (2016) exploited group theory to construct group CNNs, which are equivariant to larger families of transformations, including reflections and rotations.
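As a rough illustration of the blur-before-subsample idea behind max-blur-pooling, the following NumPy sketch applies a dense (stride-1) max, a small binomial low-pass filter, and then subsampling, for a 1D signal; the [1, 2, 1]/4 kernel and the function name are our choices for illustration rather than the exact configuration of Zhang (2019).

import numpy as np

def max_blur_pool_1d(x, stride=2):
    # 1) Dense max over a sliding window of length `stride` (stride 1).
    windows = np.stack([x[i:len(x) - stride + 1 + i] for i in range(stride)])
    dense_max = windows.max(axis=0)
    # 2) Low-pass (binomial) blur to suppress frequencies that would alias.
    kernel = np.array([1.0, 2.0, 1.0]) / 4.0
    blurred = np.convolve(dense_max, kernel, mode="same")
    # 3) Subsample; plain strided max pooling performs steps 1 and 3 only.
    return blurred[::stride]

x = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0])
print(max_blur_pool_1d(x))              # response to an impulse
print(max_blur_pool_1d(np.roll(x, 1)))  # response to the impulse shifted by one sample

Because the signal is low-pass filtered before subsampling, the pooled output varies smoothly as the input is translated, which is the property that the max-blur-pooling modification is designed to improve.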