text "network depth indefinitely doesn’t continue to help; after a certain depth, the system becomes difficult to train. this is the motivation for residual connections, which are the topic of the next chapter. notes dumoulin&visin(2016)presentanoverviewofthemathematicsofconvolutionsthatexpands on the brief treatment in this chapter. convolutional networks: early convolutional networks were developed by fukushima & miyake (1982), lecun et al. (1989a), and lecun et al. (1989b). initial applications included this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 181 handwritingrecognition(lecunetal.,1989a;martin,1993),facerecognition(lawrenceetal., 1997),phonemerecognition(waibeletal.,1989),spokenwordrecognition(bottouetal.,1990), and signature verification (bromley et al., 1993). however, convolutional networks were popu- larizedbylecunetal.(1998),whobuiltasystemcalledlenetforclassifying28×28grayscale images of handwritten digits. this is immediately recognizable as a precursor of modern net- works;itusesaseriesofconvolutionallayers,followedbyfullyconnectedlayers,sigmoidactiva- tions rather than relus, and average pooling rather than max pooling. alexnet (krizhevsky et al., 2012) is widely considered the starting point for modern deep convolutional networks. imagenet challenge: dengetal.(2009)collatedtheimagenetdatabaseandtheassociated classificationchallengedroveprogressindeeplearningforseveralyearsafteralexnet. notable subsequent winners of this challenge include the network-in-network architecture (lin et al., 2014), which alternated convolutions with fully connected layers that operated independently on all of the channels at each position (i.e., 1×1 convolutions). zeiler & fergus (2014) and simonyan&zisserman(2014)trainedlargeranddeeperarchitecturesthatwerefundamentally similar to alexnet. szegedy et al. (2017) developed an architecture called googlenet, which introduced inception blocks. these use several parallel paths with different filter sizes, which are then recombined. this effectively allowed the system to learn the filter size. thetrendwasforperformancetoimprovewithincreasingdepth. however,itultimatelybecame difficult to train deeper networks without modifications; these include residual connections and normalization layers, both of which are described in the next chapter. progress in the imagenet challenges is summarized in russakovsky et al. (2015). a more general survey of image classification using convolutional networks can be found in rawat & wang (2017). the improvement of image classification networks over time is visualized in figure 10.21. types of convolutional layers: atrous or dilated convolutions were introduced by chen etal.(2018c)andyu&koltun(2015). transposedconvolutionswereintroducedbylongetal. (2015). odenaetal.(2016)pointedoutthattheycanleadtocheckerboardartifactsandshould be used with caution. lin et al. (2014) is an early example of convolution with 1×1 filters. many variants of the standard convolutional layer aim to reduce the number of parameters. theseincludedepthwiseorchannel-separateconvolution(howardetal.,2017;tranetal.,2018), inwhichadifferentfilterconvolveseachchannelseparatelytocreateanewsetofchannels. for akernelsizeofk×k withc inputchannelsandc outputchannels,thisrequiresk×k×c parameters rather than the k ×k ×c ×c parameters in a regular convolutional layer. 
A related approach is grouped convolutions (Xie et al., 2017), where each convolution kernel is only applied to a subset of the channels, with a commensurate reduction in the parameters (illustrated in the sketch below). In fact, grouped convolutions were used in AlexNet for computational reasons; the whole network could not run on a single GPU, so some channels were processed on one GPU and some on another, with limited interaction points. Separable convolutions treat each kernel as an outer product of 1D vectors; they use C+K+K parameters for each of the C channels. Partial convolutions (Liu et al., 2018a) are designed for inputs in which some pixels are missing; the convolution is computed from only the valid positions and rescaled to compensate.
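The grouped case can be sketched in the same way (again assuming PyTorch; the channel count C, kernel size K, and number of groups G are arbitrary). Splitting the channels into G groups means each output channel is computed from only C/G input channels, so the parameter count drops by a factor of G:

```python
import torch.nn as nn

C, K, G = 64, 3, 4  # illustrative channels, kernel size, and number of groups

# Grouped convolution: channels are split into G groups, and each output
# channel is computed from only the C/G input channels in its group.
grouped = nn.Conv2d(C, C, kernel_size=K, groups=G, bias=False)

n_params = sum(p.numel() for p in grouped.parameters())
print(n_params)  # K*K*(C/G)*C = 3*3*16*64 = 9216, i.e., 1/G of the full 36864
```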