subset of the real line to arbitrary precision. To see this, consider that every time we add a hidden unit, we add another linear region to the function. As these regions become more numerous, they represent smaller sections of the function, which are increasingly well approximated by a line (figure 3.5). The universal approximation theorem proves that for any continuous function, there exists a shallow network that can approximate this function to any specified precision.

Figure 3.5 Approximation of a 1D function (dashed line) by a piecewise linear model. a–c) As the number of regions increases, the model becomes closer and closer to the continuous function. A neural network with a scalar input creates one extra linear region per hidden unit. The universal approximation theorem proves that, with enough hidden units, there exists a shallow neural network that can describe any given continuous function defined on a compact subset of ℝ^{D_i} to arbitrary precision.

3.3 Multivariate inputs and outputs

In the above example, the network has a single scalar input x and a single scalar output y. However, the universal approximation theorem also holds for the more general case where the network maps multivariate inputs x = [x_1, x_2, ..., x_{D_i}]^T to multivariate output predictions y = [y_1, y_2, ..., y_{D_o}]^T. We first explore how to extend the model to predict multivariate outputs. Then we consider multivariate inputs. Finally, in section 3.4, we present a general definition of a shallow neural network.

3.3.1 Visualizing multivariate outputs

To extend the network to multivariate outputs y, we simply use a different linear function of the hidden units for each output. So, a network with a scalar input x, four hidden units h_1, h_2, h_3, and h_4, and a 2D multivariate output y = [y_1, y_2]^T would be defined as:

    h_1 = a[θ_{10} + θ_{11} x]
    h_2 = a[θ_{20} + θ_{21} x]
    h_3 = a[θ_{30} + θ_{31} x]
    h_4 = a[θ_{40} + θ_{41} x],                                            (3.7)

and

    y_1 = ϕ_{10} + ϕ_{11} h_1 + ϕ_{12} h_2 + ϕ_{13} h_3 + ϕ_{14} h_4
    y_2 = ϕ_{20} + ϕ_{21} h_1 + ϕ_{22} h_2 + ϕ_{23} h_3 + ϕ_{24} h_4.      (3.8)

The two outputs are two different linear functions of the hidden units. As we saw in figure 3.3, the "joints" in the piecewise functions depend on where the initial linear functions θ_{•0} + θ_{•1} x are clipped by the ReLU functions a[•] at the hidden units. Since both outputs y_1 and y_2 are different linear functions of the same four hidden units, the four "joints" in each must be in the same places. However, the slopes of the linear regions and the overall vertical offset can differ (figure 3.6).

Figure 3.6 Network with one input, four hidden units, and two outputs. a) Visualization of network structure. b) This network produces two piecewise linear functions, y_1[x] and y_2[x]. The four "joints" of these functions (at vertical dotted lines) are constrained to be in the same places since they share the same hidden units, but the slopes and overall height may differ.

Figure 3.7 Visualization of a neural network with 2D multivariate input x = [x_1, x_2]^T and scalar output y.

3.3.2 Visualizing multivariate inputs

To cope with multivariate inputs x, we extend the linear relations between the input and the hidden units. So a network with two inputs x = [x_1, x_2]^T and a scalar output y (figure 3.7) might have three hidden units defined by:

    h_1 = a[θ_{10} + θ_{11} x_1 + θ_{12} x_2]
    h_2 = a[θ_{20} + θ_{21} x_1 + θ_{22} x_2]
    h_3 = a[θ_{30} + θ_{31} x_1 + θ_{32} x_2].                              (3.9)
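As a concrete illustration, the following is a minimal NumPy sketch of the network in figure 3.7: two inputs, three hidden units computed as above, and a single linear output. The activation a[•] is taken to be the ReLU used elsewhere in this chapter; the particular parameter values θ, ϕ and the helper name shallow_net_2in are arbitrary illustrative choices, not values from the text.

```python
# Minimal sketch (illustrative, not from the text) of the figure 3.7 network:
# two inputs, three ReLU hidden units, one linear output.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Row i holds [theta_i0, theta_i1, theta_i2]: bias and the two input slopes
# for hidden unit h_i. Values are arbitrary illustrative choices.
theta = np.array([[ 0.3, -1.0,  2.0],
                  [-1.0,  2.0, -0.5],
                  [ 0.5,  0.8,  0.4]])

# phi = [phi_0, phi_1, phi_2, phi_3]: output bias and hidden-unit weights.
phi = np.array([-0.2, 2.0, -1.0, 0.7])

def shallow_net_2in(x1, x2):
    h = relu(theta[:, 0] + theta[:, 1] * x1 + theta[:, 2] * x2)  # hidden activations
    return phi[0] + phi[1:] @ h                                  # linear readout y

print(shallow_net_2in(0.4, -0.3))  # evaluate at one example input
```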
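Similarly, a short sketch of the one-input, four-hidden-unit, two-output network of equations 3.7–3.8 makes the shared "joints" explicit: each output can only change slope where a hidden unit's pre-activation θ_{i0} + θ_{i1} x crosses zero, so both outputs bend at the same input values. Again, the ReLU activation is as in this chapter, and the parameter values are arbitrary illustrative choices.

```python
# Minimal sketch (illustrative, not from the text) of equations 3.7-3.8:
# one input, four ReLU hidden units, two outputs.
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Row i holds [theta_i0, theta_i1] for hidden unit h_i (equation 3.7).
theta = np.array([[ 0.0,  1.0],
                  [ 0.4, -1.0],
                  [-1.0,  2.0],
                  [ 1.0, -0.5]])

# Row j holds [phi_j0, phi_j1, ..., phi_j4] for output y_j (equation 3.8).
phi = np.array([[ 0.1,  1.0, -2.0,  0.5,  1.5],
                [-0.3, -1.0,  0.5,  2.0, -0.5]])

def forward(x):
    h = relu(theta[:, 0] + theta[:, 1] * x)   # four shared hidden activations
    return phi[:, 0] + phi[:, 1:] @ h         # two different linear readouts

# Each output can only change slope where a hidden unit switches on or off,
# i.e. where theta_i0 + theta_i1 * x = 0, so the "joints" of y1 and y2 coincide.
joints = -theta[:, 0] / theta[:, 1]
print("shared joint locations:", np.sort(joints))
print("y1, y2 at x = 0.7:", forward(0.7))
```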