text "in equation 4.6, the nine slope parameters ψ ,ψ ,...,ψ can 11 21 33 take arbitrary values, whereas, in equation 4.5, these parameters are constrained to be the outer product [θ′ ,θ′ ,θ′ ]t[ϕ ,ϕ ,ϕ ]. 11 21 31 1 2 3 4.3 deep neural networks intheprevioussection, weshowedthatcomposingtwoshallownetworksyieldsaspecial case of a deep network with two hidden layers. now we consider the general case of a deep network with two hidden layers, each containing three hidden units (figure 4.4). the first layer is defined by: h = a[θ +θ x] 1 10 11 h = a[θ +θ x] 2 20 21 h = a[θ +θ x], (4.7) 3 30 31 the second layer by: ′ h = a[ψ +ψ h +ψ h +ψ h ] 1 10 11 1 12 2 13 3 ′ h = a[ψ +ψ h +ψ h +ψ h ] 2 20 21 1 22 2 23 3 ′ h = a[ψ +ψ h +ψ h +ψ h ], (4.8) 3 30 31 1 32 2 33 3 and the output by: ′ ′ ′ ′ ′ ′ ′ ′ y =ϕ +ϕ h +ϕ h +ϕ h . (4.9) 0 1 1 2 2 3 3 draft: please send errata to udlbookmail@gmail.com.46 4 deep neural networks considering these equations leads to another way to think about how the network con- notebook4.2 structs an increasingly complicated function (figure 4.5): clipping functions 1. the three hidden units h ,h , and h in the first layer are computed as usual by 1 2 3 forming linear functions of the input and passing these through relu activation functions (equation 4.7). 2. the pre-activations at the second layer are computed by taking three new linear functions of these hidden units (arguments of the activation functions in equa- tion 4.8). at this point, we effectively have a shallow network with three outputs; wehavecomputedthreepiecewiselinearfunctionswiththe“joints”betweenlinear regions in the same places (see figure 3.6). 3. atthesecondhiddenlayer,anotherrelufunctiona[•]isappliedtoeachfunction (equation 4.8), which clips them and adds new “joints” to each. 4. the final output is a linear combination of these hidden units (equation 4.9). inconclusion,wecaneitherthinkofeachlayeras“folding”theinputspaceorascre- atingnewfunctions,whichareclipped(creatingnewregions)andthenrecombined. the former view emphasizes the dependencies in the output function but not how clipping creates new joints, and the latter has the opposite emphasis. ultimately, both descrip- tions provide only partial insight into how deep neural networks operate. regardless, it’s important not to lose sight of the fact that this is still merely an equation relating input x to output y′. indeed, we can combine equations 4.7–4.9 to get one expression: ′ ′ ′ y = ϕ +ϕ a[ψ +ψ a[θ +θ x]+ψ a[θ +θ x]+ψ a[θ +θ x]] 0 1 10 11 10 11 12 20 21 13 30 31 ′ +ϕ a[ψ +ψ a[θ +θ x]+ψ a[θ +θ x]+ψ a[θ +θ x]] 2 20 21 10 11 22 20 21 23 30 31 ′ +ϕ a[ψ +ψ a[θ +θ x]+ψ a[θ +θ x]+ψ a[θ +θ x]], 3 30 31 10 11 32 20 21 33 30 31 (4.10) although this is admittedly rather difficult to understand. 4.3.1 hyperparameters we can extend the deep network construction to more than two hidden layers; modern networksmighthavemorethan"