dl_dataset_1 / dataset_chunk_121.csv
"atch normalization is applied independently to each hidden unit. in a standard neural network with k layers, each containing d hidden units, there would be kd problem11.6 learned offsets δ and kd learned scales γ. in a convolutional network, the normalizing statistics are computed over both the batch and the spatial position. if there were k notebook11.3 layers, each containing c channels, there would be kc offsets and kc scales. at test batchnorm time, we do not have a batch from which we can gather statistics. to resolve this, the statistics m and s are calculated across the whole training dataset (rather than just a h h batch) and frozen in the final network. 11.4.1 costs and benefits of batch normalization batchnormalizationmakesthenetworkinvarianttorescalingtheweightsandbiasesthat contribute to each activation; if these are doubled, then the activations also double, the estimated standard deviation s doubles, and the normalization in equation 11.8 com- h pensatesforthesechanges. thishappensseparatelyforeachhiddenunit. consequently, therewillbealargefamilyofweightsandbiasesthatallproducethesameeffect. batch normalizationalsoaddstwoparameters,γ andδ, ateveryhiddenunit, whichmakesthe modelsomewhatlarger. hence,itbothcreatesredundancyintheweightparametersand adds extra parameters to compensate for that redundancy. this is obviously inefficient, but batch normalization also provides several benefits. stableforwardpropagation: ifweinitializetheoffsetsδtozeroandthescalesγ toone, then each output activation will have unit variance. in a regular network, this ensures thevarianceisstableduringforwardpropagationatinitialization. inaresidualnetwork, the variance must still increase as we add a new source of variation to the input at each layer. however, it will increase linearly with each residual block; the kth layer adds one unit of variance to the existing variance of k (figure 11.6c). at initialization, this has the side-effect that later layers make a smaller change to theoverallvariationthanearlierones. thenetworkiseffectivelylessdeepatthestartof training since later layers are close to computing the identity. as training proceeds, the network can increase the scales γ in later layers and can control its own effective depth. higher learning rates: empirical studies and theory both show that batch normaliza- tion makes the loss surface and its gradient change more smoothly (i.e., reduces shat- tered gradients). this means we can use higher learning rates as the surface is more predictable. we saw in section 9.2 that higher learning rates improve test performance. regularization: we also saw in chapter 9 that adding noise to the training process can improve generalization. batch normalization injects noise because the normaliza- tion depends on the batch statistics. the activations for a given training example are normalized by an amount that depends on the other members of the batch and will be slightly different at each training iteration. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.11.5 common residual architectures 195 11.5 common residual architectures residual connections are now a standard part of deep learning pipelines. this section reviews some well-known architectures that incorporate them. 11.5.1 resnet residual blocks were first used in convolutional networks for image classification. the resultingnetworksareknownasresidualnetworks,orresnetsforshort. 
11.5 Common residual architectures

Residual connections are now a standard part of deep learning pipelines. This section reviews some well-known architectures that incorporate them.

11.5.1 ResNet

Residual blocks were first used in convolutional networks for image classification. The resulting networks are known as residual networks, or ResNets for short. In ResNets, each residual block contains a batch normalization operation, a ReLU activation function, and a convolutional layer. This is followed by the same sequence again before being added back to the input (figure 11.7a; see problem 11.7). Trial and error have shown that this order of operations works well for image classification.

For very deep networks, the number of parameters may become undesirably large. Bottleneck residual blocks make more efficient use of parameters using three convolutions. The first has a 1×1 kernel and reduces the number of channels. The second is a regular 3×3 kernel, and the third is another 1×1 kernel to increase the number of channels back to the original amount (figure 11.7b). In this way, we can integrate
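The following is a sketch of the two block types just described, written with standard PyTorch layers; it is an illustrative reconstruction following figure 11.7, not code from the book, and the channel counts in the usage example (64 channels, reduced to 16 inside the bottleneck) are arbitrary.

import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    # BatchNorm -> ReLU -> 3x3 conv, repeated twice, then added back to the input
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.block(x)

class BottleneckBlock(nn.Module):
    # 1x1 conv reduces the channels, 3x3 conv operates on the reduced representation,
    # 1x1 conv restores the original channel count; each conv is preceded by BN + ReLU
    def __init__(self, channels, reduced_channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, reduced_channels, kernel_size=1),
            nn.BatchNorm2d(reduced_channels), nn.ReLU(),
            nn.Conv2d(reduced_channels, reduced_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(reduced_channels), nn.ReLU(),
            nn.Conv2d(reduced_channels, channels, kernel_size=1),
        )

    def forward(self, x):
        return x + self.block(x)

x = torch.randn(8, 64, 32, 32)            # batch of 8 feature maps, 64 channels, 32x32
print(ResidualBlock(64)(x).shape)         # torch.Size([8, 64, 32, 32])
print(BottleneckBlock(64, 16)(x).shape)   # torch.Size([8, 64, 32, 32])

For these illustrative sizes, the two 3×3 convolutions of the plain block use about 74k weights (2 × 3 × 3 × 64 × 64), while the bottleneck uses about 4.4k (1024 + 2304 + 1024), which is the parameter saving the text refers to.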