dl_dataset_1 / dataset_chunk_21.csv
"). second, we pass the 10 11 20 21 30 31 three results through an activation function a[•]. finally, we weight the three resulting activations with ϕ ,ϕ , and ϕ , sum them, and add an offset ϕ . 1 2 3 0 to complete the description, we must define the activation function a[•]. there are many possibilities, but the most common choice is the rectified linear unit or relu: ( 0 z <0 a[z]=relu[z]= . (3.2) z z ≥0 this returns the input when it is positive and zero otherwise (figure 3.1). it is probably not obvious which family of input/output relations is represented by equation 3.1. nonetheless, the ideas from the previous chapter are all applicable. equa- tion 3.1 represents a family of functions where the particular member of the family draft: please send errata to [email protected] 3 shallow neural networks figure 3.1 rectified linear unit (relu). this activation function returns zero if the input is less than zero and returns theinputunchangedotherwise. inother words, it clips negative values to zero. note that there are many other possi- ble choices for the activation function (see figure 3.13), but the relu is the most commonly used and the easiest to understand. figure 3.2 family of functions defined by equation 3.1. a–c) functions for three differentchoicesofthetenparametersϕ. ineachcase,theinput/outputrelation is piecewise linear. however, the positions of the joints, the slopes of the linear regions between them, and the overall height vary. depends on the ten parameters in ϕ. if we know these parameters, we can perform inference (predict y) by evaluating the equation for a given input x. given a training dataset {x ,y }i , we can define a least squares loss function l[ϕ] and use this to mea- i i i=1 sure how effectively the model describes this dataset for any given parameter values ϕ. to train the model, we search for the values ϕˆ that minimize this loss. 3.1.1 neural network intuition in fact, equation 3.1 represents a family of continuous piecewise linear functions (fig- ure 3.2) with up to four linear regions. we now break down equation 3.1 and show why it describes this family. to make this easier to understand, we split the function into two parts. first, we introduce the intermediate quantities: this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.3.1 neural network example 27 h = a[θ +θ x] 1 10 11 h = a[θ +θ x] 2 20 21 h = a[θ +θ x], (3.3) 3 30 31 where we refer to h , h , and h as hidden units. second, we compute the output by 1 2 3 combining these hidden units with a linear function:1 y =ϕ +ϕ h +ϕ h +ϕ h . (3.4) 0 1 1 2 2 3 3 figure 3.3 shows the flow of computation that creates the function in figure 3.2a. each hidden unit contains a linear function θ•0 +θ•1x of the input, and that line is clipped by the relu function a[•] below zero. the positions where the three lines cross zero become the three “joints” in the final output. the three clipped lines are then weighted by ϕ , ϕ , and ϕ , respectively. finally, the offset ϕ is added, which controls 1 2 3 0 the overall height of the final function. problems3.1–3.8 each linear region in figure 3.3j corresponds to a different activation pattern in the hidden units. when a unit is clipped, we refer to it as inactive, and when it is not clipped, we refer to it as active. for example, the shaded region receives contributions from h and h (which are active) but not from h (which is inactive). 
Each linear region in figure 3.3j corresponds to a different activation pattern in the hidden units. When a unit is clipped, we refer to it as inactive, and when it is not clipped, we refer to it as active. For example, the shaded region receives contributions from h_1 and h_3 (which are active) but not from h_2 (which is inactive). The slope of each linear region is determined by (i) the original slopes θ_•1 of the active inputs for this region and (ii) the weights ϕ_• that were subsequently applied. For example, the slope in the shaded region (see problem 3
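To make the link between activation patterns and region slopes concrete, the following sketch (using the same kind of illustrative parameters as above; the helper names activation_pattern and region_slope are hypothetical) reports which hidden units are active at a given input and the resulting local slope, computed as the sum of θ_•1 ϕ_• over the active units.

import numpy as np

def activation_pattern(x, theta):
    # A unit is active where its pre-activation theta_k0 + theta_k1 * x is positive
    return (theta[:, 0] + theta[:, 1] * x) > 0

def region_slope(x, theta, phi):
    # Slope of the local linear region: sum of theta_k1 * phi_k over active units;
    # clipped (inactive) units contribute nothing
    active = activation_pattern(x, theta)
    return np.sum(theta[active, 1] * phi[1:][active])

# Illustrative parameter values only
theta = np.array([[ 0.3, -1.0],
                  [-1.0,  2.0],
                  [-0.5,  0.65]])
phi = np.array([-0.3, 2.0, 1.0, -7.0])

for x in [-0.5, 0.2, 0.6, 1.5]:
    print(x, activation_pattern(x, theta), region_slope(x, theta, phi))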