text ".3) is θ ϕ +θ ϕ , where the first term is the slope in 11 1 31 3 panel (g) and the second term is the slope in panel (i). each hidden unit contributes one “joint” to the function, so with three hidden units, notebook3.1 there can be four linear regions. however, only three of the slopes of these regions are shallownetworksi independent; the fourth is either zero (if all the hidden units are inactive in this region) or is a sum of slopes from the other regions. problem3.9 3.1.2 depicting neural networks we have been discussing a neural network with one input, one output, and three hidden units. wevisualizethisnetworkinfigure3.4a. theinputisontheleft,thehiddenunits are in the middle, and the output is on the right. each connection represents one of the ten parameters. to simplify this representation, we do not typically draw the intercept parameters, so this network is usually depicted as in figure 3.4b. ∑ 1forthepurposesofthisbook,alinearfunctionhastheformz′=ϕ0+ iϕizi. anyothertypeof functionisnonlinear. forinstance, therelufunction(equation3.2)andtheexampleneuralnetwork thatcontainsit(equation3.1)arebothnonlinear. seenotesatendofchapterforfurtherclarification. draft: please send errata to udlbookmail@gmail.com.28 3 shallow neural networks figure 3.3 computation for function in figure 3.2a. a–c) the input x is passed throughthreelinearfunctions,eachwithadifferenty-interceptθ•0 andslopeθ•1. d–f) each line is passed through the relu activation function, which clips neg- ative values to zero. g–i) the three clipped lines are then weighted (scaled) by ϕ ,ϕ , and ϕ , respectively. j) finally, the clipped and weighted functions are 1 2 3 summed, and an offset ϕ that controls the height is added. each of the four 0 linear regions corresponds to a different activation pattern in the hidden units. in the shaded region, h is inactive (clipped), but h and h are both active. 2 1 3 this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.2 universal approximation theorem 29 figure 3.4 depicting neural networks. a) the input x is on the left, the hidden units h ,h , and h in the center, and the output y on the right. computation 1 2 3 flowsfromlefttoright. theinputisusedtocomputethehiddenunits,whichare combined to create the output. each of the ten arrows represents a parameter (intercepts in orange and slopes in black). each parameter multiplies its source and adds the result to its target. for example, we multiply the parameter ϕ 1 by source h and add it to y. we introduce additional nodes containing ones 1 (orange circles) to incorporate the offsets into this scheme, so we multiply ϕ by 0 one (with no effect) and add it to y. relu functions are applied at the hidden units. b) more typically, the intercepts, relu functions, and parameter names are omitted; this simpler depiction represents the same network. 3.2 universal approximation theorem in the previous section, we introduced an example neural network with one input, one output, relu activation functions, and three hidden units. let’s now generalize this slightly and consider the case with d hidden units where the dth hidden unit is: h =a[θ +θ x], (3.5) d d0 d1 and these are combined linearly to create the output: xd y =ϕ + ϕ h . (3.6) 0 d d d=1 the number of hidden units in a shallow network is a measure of the network capacity. with relu activation functions, the output of a network with d hidden units has at problem3.10 mostd jointsandsoisapiecewiselinearfunctionwithatmostd+1linearregions. 
As we add more hidden units, the model can approximate more complex functions. Indeed, with enough capacity (hidden units), a shallow network can describe any continuous 1D function defined on a compact subset of the real line to arbitrary precision.
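This capacity claim can be illustrated constructively, without any training. The sketch below (an illustration, not material from the book; the helper name fit_piecewise and the sine target are assumptions) chooses the parameters of equations 3.5–3.6 so that the network becomes the piecewise linear interpolant of a target function at a set of knots, using one hidden unit per segment; the maximum error shrinks as the number of hidden units $D$ grows.

```python
import numpy as np

def shallow_network(x, theta, phi):
    # Same forward pass as equations 3.5-3.6: ReLU hidden units, linear readout.
    hidden = np.maximum(theta[:, 0][:, None] + theta[:, 1][:, None] * x, 0.0)
    return phi[0] + phi[1:] @ hidden

def fit_piecewise(target_fn, knots):
    """Choose parameters so the network linearly interpolates target_fn at the
    knots: one ReLU unit per segment, with its joint at the segment's left end."""
    values = target_fn(knots)
    slopes = np.diff(values) / np.diff(knots)        # slope of each straight segment
    coeffs = np.diff(slopes, prepend=0.0)            # change of slope introduced at each joint
    theta = np.stack([-knots[:-1], np.ones(len(knots) - 1)], axis=1)  # ReLU(x - knot)
    phi = np.concatenate(([values[0]], coeffs))      # offset phi_0 plus per-unit weights
    return theta, phi

for num_knots in (6, 11, 21, 41):                    # D = num_knots - 1 hidden units
    knots = np.linspace(0.0, 2 * np.pi, num_knots)
    theta, phi = fit_piecewise(np.sin, knots)
    x = np.linspace(0.0, 2 * np.pi, 2001)
    err = np.max(np.abs(shallow_network(x, theta, phi) - np.sin(x)))
    print(f"D = {len(theta):2d} hidden units -> max error {err:.4f}")
```

Each hidden unit contributes a basis function $\mathrm{ReLU}(x - \text{knot})$, and its weight $\phi_d$ is the change of slope at that knot, so the construction mirrors the joint-counting argument above.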