dl_dataset_1 / dataset_chunk_125.csv
net architecture, which concatenates outputs of all prior layers to feed into the current layer, and U-Nets, which incorporate residual connections into encoder-decoder models.

Notes

Residual connections: Residual connections were introduced by He et al. (2016a), who built a network with 152 layers, which was eight times larger than VGG (figure 10.17), and achieved state-of-the-art performance on the ImageNet classification task. Each residual block consisted of a convolutional layer followed by batch normalization, a ReLU activation, a second convolutional layer, and a second batch normalization. A second ReLU function was applied after this block was added back to the main representation. This architecture was termed ResNet v1. He et al. (2016b) investigated variations of residual architectures in which processing could also be applied either (i) along the skip connection or (ii) after the two branches had recombined. They concluded that neither was necessary, leading to the architecture in figure 11.7, which is sometimes termed a pre-activation residual block and is the backbone of ResNet v2. They trained a network with 200 layers that improved further on the ImageNet classification task (see figure 11.8). Since this time, new methods for regularization, optimization, and data augmentation have been developed, and Wightman et al. (2021) exploit these to present a more modern training pipeline for the ResNet architecture.
Why residual connections help: Residual networks certainly allow deeper networks to be trained. Presumably, this is related to reducing shattered gradients (Balduzzi et al., 2017) at the start of training and to the smoother loss surface near the minima, as depicted in figure 11.13 (Li et al., 2018b). Residual connections alone (i.e., without batch normalization) increase the trainable depth of a network by roughly a factor of two (Sankararaman et al., 2020). With batch normalization, very deep networks can be trained, but it is unclear that depth is critical for performance. Zagoruyko & Komodakis (2016) showed that wide residual networks with only 16 layers outperformed all residual networks of the time for image classification. Orhan & Pitkow (2017) propose a different explanation for why residual connections improve learning in terms of eliminating singularities (places on the loss surface where the Hessian is degenerate).

Related architectures: Residual connections are a special case of highway networks (Srivastava et al., 2015), which also split the computation into two branches and additively recombine. Highway networks use a gating function that weights the inputs to the two branches in a way that depends on the data itself, whereas residual networks send the data down both branches in a straightforward manner. Xie et al. (2017) introduced the ResNeXt architecture, which places a residual connection around multiple parallel convolutional branches.

Residual networks as ensembles: Veit et al. (2016) characterized residual networks as ensembles of shorter networks and depicted the "unraveled network" interpretation (figure 11.4b). They provide evidence that this interpretation is valid by showing that deleting layers in a trained network (and hence a subset of paths) has only a modest effect on performance. Conversely, removing a layer in a purely sequential network like VGG is catastrophic. They also looked at the gradient magnitudes along paths of different lengths and showed that the gradient vanishes in longer paths. In a residual network consisting of 54 blocks, almost all of the gradient updates during training were from paths of length 5 to 17 blocks, even though these constitute only 0.45% of the total paths. It seems that adding more blocks effectively adds more parallel shorter paths rather than creating a network that is truly deeper.

Regularization for residual networks: L2 regularization of the weights has a fundamentally different effect in vanilla networks and residual networks without batch norm. In the former, it encourages the output of the layer to be a constant function determined by the biases. In the latter, it encourages the residual block to compute the identity plus a constant determined
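The path-counting claim in the ensembles paragraph above can be checked with a few lines of Python. Under the unraveled-network view, each of the 54 blocks is either traversed or skipped, giving 2^54 paths, of which C(54, k) have length k; summing over lengths 5 to 17 reproduces the quoted figure of roughly 0.45%. This is an illustrative sanity check, not code from Veit et al. (2016).

```python
from math import comb

n_blocks = 54                # residual blocks in the unraveled network
total_paths = 2 ** n_blocks  # each block is either traversed or skipped

# Count the paths that pass through between 5 and 17 blocks (inclusive).
short_paths = sum(comb(n_blocks, k) for k in range(5, 18))

print(f"{short_paths / total_paths:.3%} of all paths")  # roughly 0.45%
```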