text "esteves et al. (2018) introduced polar transformer networks, which are invariant to translations and equivariant to rotation and scale. worrall et al. (2017) developed harmonic networks, the first example of a group cnn that was equivariant to continuous rotations. initialization and regularization: convolutional networks are typically initialized using xavierinitialization(glorot&bengio,2010)orheinitialization(heetal.,2015),asdescribed insection7.5. however,theconvolutionorthogonalinitializer(xiaoetal.,2018a)isspecialized problem10.19 for convolutionalnetworks (xiao et al., 2018a). networks of up to 10,000 layerscan be trained using this initialization without the need for residual connections. dropout is effective for fully connected networks but less so for convolutional layers (park & kwak,2016). thismaybebecauseneighboringimagepixelsarehighlycorrelated,soifahidden unitdropsout,thesameinformationispassedonviaadjacentpositions. thisisthemotivation for spatial dropout and cutout. in spatial dropout (tompson et al., 2015), entire feature maps are discarded instead of individual pixels. this circumvents the problem of neighboring pixels carryingthesameinformation. similarly, devries&taylor(2017b)propose cutout, inwhicha square patch of each input image is masked at training time. wu & gu (2015) modified max poolingfordropoutlayersusingamethodthatinvolvessamplingfromaprobabilitydistribution over the constituent elements rather than always taking the maximum. adaptive kernels: the inception block (szegedy et al., 2017) applies convolutional filters of different sizes in parallel and, as such, provides a crude mechanism by which the network can learn the appropriate filter size. other work has investigated learning the scale of convolutions as part of the training process (e.g., pintea et al., 2021; romero et al., 2021) or the stride of downsampling layers (riad et al., 2022). insomesystems,thekernelsizeischangedadaptivelybasedonthedata. thisissometimesin thecontextofguidedconvolution,whereoneinputisusedtohelpguidethecomputationfrom another input. for example, an rgb image might be used to help upsample a low-resolution depth map. jia et al. (2016) directly predicted the filter weights themselves using a different network branch. xiong et al. (2020b) change the kernel size adaptively. su et al. (2019a) moderate weights of fixed kernels by a function learned from another modality. dai et al. (2017) learn offsets of weights so that they do not have to be applied in a regular grid. object detection and semantic segmentation: objectdetectionmethodscanbedivided into proposal-based and proposal-free schemes. in the former case, processing occurs in two stages. a convolutional network ingests the whole image and proposes regions that might contain objects. these proposal regions are then resized, and a second network analyzes them toestablishwhetherthereisanobjectthereandwhatitis. anearlyexampleofthisapproach wasr-cnn(girshicketal.,2014). thiswassubsequentlyextendedtoallowend-to-endtraining (girshick, 2015) and to reduce the cost of the region proposals (ren et al., 2015). subsequent workonfeaturepyramidnetworksimprovedbothperformanceandspeedbycombiningfeatures draft: please send errata to udlbookmail@gmail.com.184 10 convolutional networks across multiple scales lin et al. (2017b). in contrast, proposal-free schemes perform all the processinginasinglepass. yoloredmonetal.(2016),whichwasdescribedinsection10.5.2, is the most celebrated example of a proposal-free scheme. 
Object detection and semantic segmentation: Object detection methods can be divided into proposal-based and proposal-free schemes. In the former case, processing occurs in two stages. A convolutional network ingests the whole image and proposes regions that might contain objects. These proposal regions are then resized, and a second network analyzes them to establish whether there is an object there and what it is. An early example of this approach was R-CNN (Girshick et al., 2014). This was subsequently extended to allow end-to-end training (Girshick, 2015) and to reduce the cost of the region proposals (Ren et al., 2015). Subsequent work on feature pyramid networks improved both performance and speed by combining features across multiple scales (Lin et al., 2017b). In contrast, proposal-free schemes perform all the processing in a single pass. YOLO (Redmon et al., 2016), which was described in section 10.5.2, is the most celebrated example of a proposal-free scheme. The most recent iteration of this framework at the time of writing is YOLOv7 (Wang et al., 2022a). A recent review of object detection can be found in Zou et al. (2023).

The semantic segmentation network described in section 10.5.3 was developed by Noh et al. (2015). Many subsequent approaches have been variations of U-Net (Ronneberger et al., 2015), which is described in section 11.5.3. Recent surveys of semantic segmentation can be found in Minaee et al. (2021).
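As a practical aside, pretrained versions of several of the proposal-based detectors above are distributed with torchvision. A minimal usage sketch, assuming torchvision 0.13 or later (older versions use a pretrained=True flag instead of the weights argument):

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Faster R-CNN (Ren et al., 2015) with a ResNet-50 backbone and feature
# pyramid network (Lin et al., 2017b), pretrained on COCO.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# The model takes a list of (3, H, W) images with values scaled to [0, 1].
images = [torch.rand(3, 480, 640)]
with torch.no_grad():
    predictions = model(images)

# One dict per image: bounding boxes, class labels, and confidence scores.
boxes = predictions[0]["boxes"]    # (N, 4) boxes in (x0, y0, x1, y1) format
labels = predictions[0]["labels"]  # (N,) COCO category indices
scores = predictions[0]["scores"]  # (N,) detection confidences
```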