dl_dataset_1 / dataset_chunk_25.csv
"= i {1,5,10,50,100}. the number of regions increases rapidly in high dimensions; with d = 500 units and input size d = 100, there can be greater than 10107 i regions(solidcircle). b)thesamedataareplottedasafunctionofthenumberof parameters. thesolidcirclerepresentsthesamemodelasinpanel(a)withd= 500 hidden units. this network has 51,001 parameters and would be considered very small by modern standards. figure3.10numberoflinearregionsvs. inputdimensions. a)withasingleinput dimension,amodelwithonehiddenunitcreatesonejoint,whichdividestheaxis into two linear regions. b) with two input dimensions, a model with two hidden unitscandividetheinputspaceusingtwolines(herealignedwithaxes)tocreate four regions. c) with three input dimensions, a model with three hidden units candividetheinputspaceusingthreeplanes(againalignedwithaxes)tocreate eight regions. continuing this argument, it follows that a model with d input i dimensions and d hidden units can divide the input space with d hyperplanes i i to create 2di linear regions. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.5 terminology 35 figure 3.11 visualization of neural net- work with three inputs and two out- puts. this network has twenty param- eters. therearefifteenslopes(indicated by arrows) and five offsets (not shown). xd y =ϕ + ϕ h , (3.12) j j0 jd d d=1 where a[•] is a nonlinear activation function. the model has parameters ϕ={θ••,ϕ••}. figure 3.11 shows an example with three inputs, three hidden units, and two outputs. problems3.14–3.17 the activation function permits the model to describe nonlinear relations between input and the output, and as such, it must be nonlinear itself; with no activation func- tion, or a linear activation function, the overall mapping from input to output would be restricted to be linear. many different activation functions have been tried (see fig- ure 3.13), but the most common choice is the relu (figure 3.1), which has the merit notebook3.4 of being easily interpretable. with relu activations, the network divides the input activation space into convex polytopes defined by the intersections of hyperplanes computed by functions the “joints” in the relu functions. each convex polytope contains a different linear function. the polytopes are the same for each output, but the linear functions they contain can differ. 3.5 terminology weconcludethischapterbyintroducingsometerminology. regrettably,neuralnetworks have a lot of associated jargon. they are often referred to in terms of layers. the left of figure3.12istheinput layer,thecenteristhehidden layer,andtotherightistheoutput layer. we would say that the network in figure 3.12 has one hidden layer containing four hidden units. the hidden units themselves are sometimes referred to as neurons. when we pass data through the network, the values of the inputs to the hidden layer (i.e., before the relu functions are applied) are termed pre-activations. the values at the hidden layer (i.e., after the relu functions) are termed activations. forhistoricalreasons,anyneuralnetworkwithatleastonehiddenlayerisalsocalled amulti-layerperceptron,ormlpforshort. networkswithonehiddenlayer(asdescribed in this chapter) are sometimes referred to as shallow neural networks. networks with multiple hidden layers (as described in the next chapter) are referred to as deep neural networks. 
Neural networks in which the connections form an acyclic graph (i.e., a graph with no loops, as in all the examples in this chapter) are referred to as feed-forward networks. If every element in one layer connects to every element in the next (as in all the examples in this chapter), the network is fully connected. These connections ...

Figure 3.12 Terminology. A shallow network consists of an input layer, a hidden layer, and an output layer.
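The 2^{D_i} count from the argument of figure 3.10 can also be checked numerically. The sketch below is an illustrative construction assumed for this example (each hidden unit places its joint on one coordinate axis, h_d = ReLU(x_d)); it samples random inputs and counts the distinct patterns of active/inactive units, which correspond to the linear regions.

```python
import numpy as np

def count_regions(D_i, n_samples=200_000, seed=0):
    """Count distinct ReLU activation patterns for D_i hidden units whose
    joints are the axis-aligned hyperplanes x_d = 0 (i.e., h_d = ReLU(x_d))."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, size=(n_samples, D_i))
    patterns = x > 0.0                    # which units are active at each sample
    return len(np.unique(patterns, axis=0))

for D_i in [1, 2, 3, 4]:
    print(D_i, count_regions(D_i))        # 2, 4, 8, 16 -> 2**D_i linear regions
```

Each of the D_i axis-aligned joints can be crossed independently, so the activation patterns enumerate the 2^{D_i} orthant-like regions; with general hyperplanes and many more hidden units the count grows far faster (e.g., more than 10^107 regions for D = 500 units and D_i = 100 inputs, as quoted above).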