"1 general formulation this notation becomes cumbersome for networks with many layers. hence, from now on, we will describe the vector of hidden units at layer k as h , the vector of biases k (intercepts) that contribute to hidden layer k+1 as β , and the weights (slopes) that k are applied to the kth layer and contribute to the (k+1)th layer as ω . a general deep k network y=f[x,ϕ] with k layers can now be written as: h = a[β +ω x] 1 0 0 h = a[β +ω h ] 2 1 1 1 h = a[β +ω h ] 3 2 2 2 . . . hk = a[βk−1+ωk−1hk−1] y = β +ω h . (4.15) k k k the parameters ϕ of this model comprise all of these weight matrices and bias vectors ϕ={β ,ω }k . k k k=0 if the kth layer has d hidden units, then the bias vector β will be of size d . k k−1 k the last bias vector β has the size d of the output. the first weight matrix ω has size d ×d where dkis the size of theoinput. the last weight matrix ω is d ×0d , notebook4.3 1 i i k o k deepnetworks and the remaining matrices ω are d ×d (figure 4.6). k k+1 k we can equivalently write the network as a single function: problems4.3–4.6 (cid:2) (cid:3) y = βk +ωka βk−1+ωk−1a[...β2+ω2a[β1+ω1a[β0+ω0x]]...] . (4.16) 4.5 shallow vs. deep neural networks chapter 3 discussed shallow networks (with a single hidden layer), and here we have described deep networks (with multiple hidden layers). we now compare these models. draft: please send errata to [email protected] 4 deep neural networks 4.5.1 ability to approximate different functions in section 3.2, we argued that shallow neural networks with enough capacity (hidden units) could model any continuous function arbitrarily closely. in this chapter, we saw that a deep network with two hidden layers could represent the composition of two shallow networks. if the second of these networks computes the identity function, then this deep network replicates a single shallow network. hence, it can also approximate any continuous function arbitrarily closely given sufficient capacity. problem4.7 4.5.2 number of linear regions per parameter a shallow network with one input, one output, and d > 2 hidden units can create up to d+1 linear regions and is defined by 3d+1 parameters. a deep network with one problems4.8–4.11 input, one output, and k layers of d >2 hidden units can create a function with up to (d+1)k linear regions using 3d+1+(k−1)d(d+1) parameters. figure4.7ashowshowthemaximumnumberoflinearregionsincreasesasafunction of the number of parameters for networks mapping scalar input x to scalar output y. deepneuralnetworkscreatemuchmorecomplexfunctionsforafixedparameterbudget. this effect is magnified as the number of input dimensions d increases (figure 4.7b), i although computing the maximum number of regions is less straightforward. thisseemsattractive,buttheflexibilityofthefunctionsisstilllimitedbythenumber of parameters. deep networks can create extremely large numbers of linear regions, but these contain complex dependencies and symmetries. we saw some of these when we considereddeepnetworksas“folding”theinputspace(figure4.3). so,it’snotclearthat the greater number of regions is an advantage unless (i) there are similar symmetries in the real-world functions that we wish to approximate or (ii) we have reason to believe that the mapping from input to output really does involve a composition of simpler functions. 4.5.3 depth efficiency both deep and shallow networks can model arbitrary functions, but some functions can be"