Vishwas1 committed on
Commit
1dd03b4
1 Parent(s): 53475e7

Upload dataset_chunk_35.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_35.csv +2 -0
dataset_chunk_35.csv ADDED
@@ -0,0 +1,2 @@
 
 
 
+ text
+ "approximated much more efficiently with deep networks. functions have been identifiedthatrequireashallownetworkwithexponentiallymorehiddenunitstoachieve an equivalent approximation to that of a deep network. this phenomenon is referred to as the depth efficiency of neural networks. this property is also attractive, but it’s not clear that the real-world functions that we want to approximate fall into this category. 4.5.4 large, structured inputs wehavediscussedfullyconnectednetworkswhereeveryelementofeachlayercontributes to every element of the subsequent one. however, these are not practical for large, this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.4.5 shallow vs. deep neural networks 51 figure 4.7 the maximum number of linear regions for neural networks increases rapidly with the network depth. a) network with d =1 input. each curve rep- i resentsafixednumberofhiddenlayersk,aswevarythenumberofhiddenunits d perlayer. forafixedparameterbudget(horizontalposition),deepernetworks produce more linear regions than shallower ones. a network with k = 5 layers and d = 10 hidden units per layer has 471 parameters (highlighted point) and can produce 161,051 regions. b) network with d =10 inputs. each subsequent i pointalongacurverepresentstenhiddenunits. here,amodelwithk =5layers andd=50hiddenunitsperlayerhas10,801parameters(highlightedpoint)and can create more than 1040 linear regions. structuredinputslikeimages,wheretheinputmightcomprise∼106 pixels. thenumber of parameters would be prohibitive, and moreover, we want different parts of the image to be processed similarly; there is no point in independently learning to recognize the same object at every possible position in the image. thesolutionistoprocesslocalimageregionsinparallelandthengraduallyintegrate information from increasingly large regions. this kind of local-to-global processing is difficult to specify without using multiple layers (see chapter 10). 4.5.5 training and generalization a further possible advantage of deep networks over shallow networks is their ease of fitting; it is usually easier to train moderately deep networks than to train shallow ones (see figure 20.2). it may be that over-parameterized deep models have a large family of roughlyequivalentsolutionsthatareeasytofind. however,asweaddmorehiddenlayers, training becomes more difficult again, although many methods have been developed to mitigate this problem (see chapter 11). deep neural networks also seem to generalize to new data better than shallow ones. in practice, the best results for most tasks have been achieved using networks with tens or hundreds of layers. neither of these phenomena are well understood, and we return to them in chapter 20. draft: please send errata to [email protected] 4 deep neural networks 4.6 summary inthischapter,wefirstconsideredwhathappenswhenwecomposetwoshallownetworks. we argued that the first network “folds” the input space, and the second network then applies a piecewise linear function. the effects of the second network are duplicated where the input space is folded onto itself. we then showed that this composition of shallow networks is a special case of a deep network with two layers. we interpreted the relu functions in each layer as clipping theinputfunctionsinmultipleplacesandcreatingmore“joints”intheoutputfunction. we introduced the idea of hyperparameters, which for the networks we’ve seen so far, comprise the number of hidden layers and the number of hidden units in each. 
4.6 Summary

In this chapter, we first considered what happens when we compose two shallow networks. We argued that the first network “folds” the input space, and the second network then applies a piecewise linear function. The effects of the second network are duplicated where the input space is folded onto itself.

We then showed that this composition of shallow networks is a special case of a deep network with two layers (a short numerical check of this equivalence appears below). We interpreted the ReLU functions in each layer as clipping the input functions in multiple places and creating more “joints” in the output function. We introduced the idea of hyperparameters, which, for the networks we’ve seen so far, comprise the number of hidden layers and the number of hidden units in each.

Finally, we compared shallow and deep networks. We saw that (i) both networks can approximate any function given enough capacity, (ii) deep networks produce many more linear regions per parameter, (iii) some functions can be approximated much more efficiently by deep networks, (iv) large, structured inputs like images are best processed in multiple stages, and (v) in practice, the best results for most tasks are achieved using deep networks with many layers.

Now that we understand deep and shallow network models, we turn our attention to training them. In the next chapter, we discuss loss functions. For any given parameter values ϕ, the loss function returns a single number that indicates the mismatch between"
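To make the summary’s claim concrete, that composing two shallow networks yields a special case of a two-layer deep network, here is a minimal NumPy sketch. The network sizes and all weight values are arbitrary illustrative choices, not taken from the excerpt.

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Shallow network 1: scalar input -> 3 hidden units -> scalar output.
theta0 = np.array([0.2, -0.1, 0.4]); theta1 = np.array([1.0, -2.0, 0.5])
phi0, phi = -0.3, np.array([0.8, -0.5, 1.2])

# Shallow network 2: scalar input -> 2 hidden units -> scalar output.
psi0 = np.array([0.1, -0.4]); psi1 = np.array([2.0, -1.0])
omega0, omega = 0.5, np.array([1.5, 0.7])

def shallow1(x):
    return phi0 + phi @ relu(theta0 + theta1 * x)

def shallow2(y):
    return omega0 + omega @ relu(psi0 + psi1 * y)

# The same computation as one deep network with two hidden layers:
# network 1's output weights are folded into network 2's first-layer
# biases and weights (the folded weight matrix is a rank-1 outer product).
b1, W1 = theta0, theta1
b2, W2 = psi0 + psi1 * phi0, np.outer(psi1, phi)

def deep(x):
    h1 = relu(b1 + W1 * x)
    h2 = relu(b2 + W2 @ h1)
    return omega0 + omega @ h2

xs = np.linspace(-2.0, 2.0, 7)
print(np.allclose([shallow2(shallow1(x)) for x in xs],
                  [deep(x) for x in xs]))   # True
```

The folded second-layer weight matrix has rank one, which is why the composition is a special case rather than the most general two-layer deep network.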