Vishwas1 committed on
Commit
ef7ab22
1 Parent(s): 3ef34c8

Upload dataset_chunk_117.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_117.csv +2 -0
dataset_chunk_117.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "11.1 Sequential processing
Figure 11.1 Sequential processing. Standard neural networks pass the output of each layer directly into the next layer.
... linear transformation. In a convolutional network, each layer consists of a set of convolutions followed by an activation function, and the parameters comprise the convolutional kernels and biases. Since the processing is sequential, we can equivalently think of this network as a series of nested functions:
y = f4[ f3[ f2[ f1[x, ϕ1], ϕ2 ], ϕ3 ], ϕ4 ].   (11.2)
11.1.1 Limitations of sequential processing
In principle, we can add as many layers as we want, and in the previous chapter we saw that adding more layers to a convolutional network does improve performance; the VGG network (figure 10.17), which has eighteen layers, outperforms AlexNet (figure 10.16), which has eight layers. However, image classification performance decreases again as further layers are added (figure 11.2). This is surprising, since models generally perform better as more capacity is added (figure 8.10). Indeed, the decrease is present for both the training set and the test set, which implies that the problem is training deeper networks rather than the inability of deeper networks to generalize.
This phenomenon is not completely understood. One conjecture is that, at initialization, the loss gradients change unpredictably when we modify parameters in early network layers. With appropriate initialization of the weights (see section 7.5), the gradient of the loss with respect to these parameters will be reasonable (i.e., no exploding or vanishing gradients). However, the derivative assumes an infinitesimal change in the parameter, whereas optimization algorithms use a finite step size. Any reasonable choice of step size may move to a place with a completely different and unrelated gradient; the loss surface looks like an enormous range of tiny mountains rather than a single smooth structure that is easy to descend. Consequently, the algorithm doesn't make progress in the way that it does when the loss function gradient changes more slowly. (Notebook 11.1: Shattered gradients)
This conjecture is supported by empirical observations of gradients in networks with a single input and output. For a shallow network, the gradient of the output with respect to the input changes slowly as we change the input (figure 11.3a). However, for a deep network, a tiny change in the input results in a completely different gradient (figure 11.3b). This is captured by the autocorrelation function of the gradient (figure 11.3c): nearby gradients are correlated for shallow networks, but this correlation quickly drops to zero for deep networks. This is termed the shattered gradients phenomenon. (Appendix B.2.1: Autocorrelation function)
Figure 11.2 Decrease in performance when adding more convolutional layers. a) A 20-layer convolutional network outperforms a 56-layer neural network for image classification on the test set of the CIFAR-10 dataset (Krizhevsky & Hinton, 2009). b) This is also true for the training set, which suggests that the problem relates to training the original network rather than a failure to generalize to new data. Adapted from He et al. (2016a).
Figure 11.3 Shattered gradients. a) Consider a shallow network with 200 hidden units and Glorot initialization (He initialization without the factor of two) for both the weights and biases.
The gradient ∂y/∂x of the scalar network output y with respect to the scalar input x changes relatively slowly as we change the input x. b) For a deep network with 24 layers and 200 hidden units per layer, this gradient changes very quickly and unpredictably. c) The autocorrelation function of the gradient shows that nearby gradients become unrelated (have autocorrelation close to zero) for deep networks. This shattered gradients phenomenon may explain why it is hard to train deep networks. Gradient descent algorithms rely on the loss surface being relatively smooth, so the gradients should be related before and after each update step. Adapted from Balduzzi et al. (2017)."
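The nested-function view in Eq. 11.2 is easy to check mechanically: composing the layers one at a time and nesting the calls gives the same output. The sketch below is not code from the book; the layer width, the ReLU activation, the He-style weight scale, and the helper name make_layer are illustrative assumptions, and PyTorch is used purely for convenience.

```python
import torch

def make_layer(d_in, d_out):
    # One layer f_k[., phi_k]: a linear map followed by a ReLU activation,
    # with parameters phi_k = (omega_k, beta_k).
    omega = torch.randn(d_out, d_in) * (2.0 / d_in) ** 0.5  # He-style scale (illustrative)
    beta = torch.zeros(d_out)
    return lambda h: torch.relu(omega @ h + beta)

# Four layers f1..f4, each 10 units wide (sizes chosen only for illustration).
f1, f2, f3, f4 = [make_layer(10, 10) for _ in range(4)]

x = torch.randn(10)

# Layer-by-layer (sequential) view ...
h1 = f1(x)
h2 = f2(h1)
h3 = f3(h2)
y_sequential = f4(h3)

# ... is exactly the nested-function view of Eq. 11.2.
y_nested = f4(f3(f2(f1(x))))

assert torch.equal(y_sequential, y_nested)
```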
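The shattered-gradients observation described above (and in the Figure 11.3 caption) can be probed in rough form: build a shallow and a deep scalar-input, scalar-output network, differentiate the output with respect to the input over a grid of inputs, and compare the autocorrelation of the two gradient signals. The sketch below follows the caption's stated settings (200 hidden units, 24 hidden layers for the deep network, Glorot weight initialization) but is not the book's Notebook 11.1; the helper names make_mlp, input_gradients, and autocorrelation are invented here, and the bias scale is an arbitrary stand-in for the caption's random bias initialization.

```python
import torch
import torch.nn as nn

def make_mlp(depth, width=200):
    # Scalar-input, scalar-output ReLU network with `depth` hidden layers.
    layers, d_in = [], 1
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.ReLU()]
        d_in = width
    layers.append(nn.Linear(d_in, 1))
    net = nn.Sequential(*layers)
    for m in net.modules():
        if isinstance(m, nn.Linear):
            nn.init.xavier_normal_(m.weight)   # Glorot initialization of the weights
            nn.init.normal_(m.bias, std=0.1)   # random biases; the scale is an arbitrary choice
    return net

def input_gradients(net, xs):
    # dy/dx at every input in xs (shape (N, 1)), computed with autograd.
    x = xs.clone().requires_grad_(True)
    y = net(x).sum()  # each output depends only on its own input, so one backward pass suffices
    (grad,) = torch.autograd.grad(y, x)
    return grad.squeeze(1)

def autocorrelation(g, max_lag=50):
    # Normalized autocorrelation of the gradient signal across input shifts.
    g = g - g.mean()
    denom = (g * g).mean()
    return torch.stack([(g[: len(g) - k] * g[k:]).mean() / denom for k in range(max_lag)])

xs = torch.linspace(-2.0, 2.0, 1000).unsqueeze(1)
g_shallow = input_gradients(make_mlp(depth=1), xs)   # one hidden layer of 200 units
g_deep = input_gradients(make_mlp(depth=24), xs)     # 24 hidden layers of 200 units

# Shallow network: nearby gradients are related, so the autocorrelation decays slowly.
# Deep network: the autocorrelation drops toward zero almost immediately (shattered gradients).
print(autocorrelation(g_shallow)[:5])
print(autocorrelation(g_deep)[:5])
```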