text "three times to create nine linear composing regions. the same principle applies in higher dimensions (figure 4.2). networks a different way to think about composing networks is that the first network “folds” the input space x back onto itself so that multiple inputs generate the same output. then the second network applies a function, which is replicated at all points that were folded on top of one another (figure 4.3). 4.2 from composing networks to deep networks the previous section showed that we could create complex functions by passing the output of one shallow neural network into a second network. we now show that this is a special case of a deep network with two hidden layers. the output of the first network (y = ϕ +ϕ h +ϕ h +ϕ h ) is a linear combina- 0 1 1 2 2 3 3 tion of the activations at the hidden units. the first operations of the second network (equation 4.3 in which we calculate θ′ +θ′ y, θ′ +θ′ y, and θ′ +θ′ y) are linear in 10 11 20 21 30 31 the output of the first network. applying one linear function to another yields another linear function. substituting the expression for y into equation 4.3 gives: h′ = a[θ′ +θ′ y] = a[θ′ +θ′ ϕ +θ′ ϕ h +θ′ ϕ h +θ′ ϕ h ] 1 10 11 10 11 0 11 1 1 11 2 2 11 3 3 h′ = a[θ′ +θ′ y] = a[θ′ +θ′ ϕ +θ′ ϕ h +θ′ ϕ h +θ′ ϕ h ] 2 20 21 20 21 0 21 1 1 21 2 2 21 3 3 h′ = a[θ′ +θ′ y] = a[θ′ +θ′ ϕ +θ′ ϕ h +θ′ ϕ h +θ′ ϕ h ], (4.5) 3 30 31 30 31 0 31 1 1 31 2 2 31 3 3 which we can rewrite as: ′ h = a[ψ +ψ h +ψ h +ψ h ] 1 10 11 1 12 2 13 3 ′ h = a[ψ +ψ h +ψ h +ψ h ] 2 20 21 1 22 2 23 3 ′ h = a[ψ +ψ h +ψ h +ψ h ], (4.6) 3 30 31 1 32 2 33 3 draft: please send errata to udlbookmail@gmail.com.44 4 deep neural networks figure 4.2 composing neural networks with a 2d input. a) the first network (fromfigure3.8)hasthreehiddenunitsandtakestwoinputsx andx andreturns 1 2 ascalaroutputy. thisispassedintoasecondnetworkwithtwohiddenunitsto produce y′. b) the first network produces a function consisting of seven linear regions,oneofwhichisflat. c)thesecondnetworkdefinesafunctioncomprising two linear regions in y ∈[−1,1]. d) when these networks are composed, each of the six non-flat regions from the first networkis divided intotwo new regions by the second network to create a total of 13 linear regions. figure 4.3 deep networks as folding input space. a) one way to think about the first network from figure 4.1 is that it “folds” the input space back on top of itself. b) the second network applies its function to the folded space. c) the final output is revealed by “unfolding” again. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.4.3 deep neural networks 45 figure 4.4 neural network with one input, one output, and two hidden layers, each containing three hidden units. where ψ = θ′ +θ′ ϕ ,ψ = θ′ ϕ ,ψ = θ′ ϕ and so on. the result is a network 10 10 11 0 11 11 1 12 11 2 with two hidden layers (figure 4.4). itfollowsthatanetworkwithtwolayerscanrepresentthefamilyoffunctionscreated by passing the output of one single-layer network into another. in fact, it represents a broader family because"