Vishwas1 committed on
Commit
c74fcee
1 Parent(s): 880ce1d

Upload dataset_chunk_19.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_19.csv +2 -0
dataset_chunk_19.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "values and visualize the loss function as a surface (figure 2.3). the “best” parameters are at the minimum of this surface. draft: please send errata to [email protected] 2 supervised learning 2.2.3 training theprocessoffindingparametersthatminimizethelossistermedmodelfitting,training, or learning. the basic method is to choose the initial parameters randomly and then improvethemby“walkingdown”thelossfunctionuntilwereachthebottom(figure2.4). one way to do this is to measure the gradient of the surface at the current position and take a step in the direction that is most steeply downhill. then we repeat this process until the gradient is flat and we can improve no further.2 2.2.4 testing having trained the model, we want to know how it will perform in the real world. we do this by computing the loss on a separate set of test data. the degree to which the prediction accuracy generalizes to the test data depends in part on how representative andcompletethetrainingdatais. however,italsodependsonhowexpressivethemodel is. asimplemodellikealinemightnotbeabletocapturethetruerelationshipbetween input and output. this is known as underfitting. conversely, a very expressive model may describe statistical peculiarities of the training data that are atypical and lead to unusual predictions. this is known as overfitting. 2.3 summary a supervised learning model is a function y=f[x,ϕ] that relates inputs x to outputs y. the particular relationship is determined by parameters ϕ. to train the model, we definealossfunctionl[ϕ]overatrainingdataset{x ,y }. thisquantifiesthemismatch i i between the model predictions f[x ,ϕ] and observed outputs y as a function of the i i parameters ϕ. then we search for the parameters that minimize the loss. we evaluate the model on a different set of test data to see how well it generalizes to new inputs. chapters 3–9 expand on these ideas. first, we tackle the model itself; 1d linear regressionhastheobviousdrawbackthatitcanonlydescribetherelationshipbetweenthe inputandoutputasastraightline. shallowneuralnetworks(chapter3)areonlyslightly more complex than linear regression but describe a much larger family of input/output relationships. deep neural networks (chapter 4) are just as expressive but can describe complex functions with fewer parameters and work better in practice. chapter 5 investigates loss functions for different tasks and reveals the theoretical underpinnings of the least-squares loss. chapters 6 and 7 discuss the training process. chapter 8 discusses how to measure model performance. chapter 9 considers regular- ization techniques, which aim to improve that performance. 2thisiterativeapproachisnotactuallynecessaryforthelinearregressionmodel. here,it’spossible to find closed-form expressions for the parameters. however, this gradient descent approach works for morecomplexmodelswherethereisnoclosed-formsolutionandwheretherearetoomanyparameters toevaluatethelossforeverycombinationofvalues. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.notes 23 figure2.4linearregressiontraining. thegoalistofindthey-interceptandslope parameters that correspond to the smallest loss. a) iterative training algorithms initializetheparametersrandomlyandthenimprovethemby“walkingdownhill” untilnofurtherimprovementcanbemade. here,westartatposition0andmove a certain distance downhill (perpendicular to the contours) to position 1. then we re-calculate the downhill direction and move to position 2. eventually, we reachtheminimumofthefunction(position4). 
b)eachposition0–4frompanel (a) corresponds to a different y-intercept and slope and so represents a different line. as the loss decreases, the lines fit the data more closely. notes lossfunctionsvs. costfunctions: inmuchofmachinelearningandinthisbook,theterms lossfunctionandcostfunctionareusedinterchangeably. however,moreproperly,alossfunction istheindividualtermassociatedwithadatapoint(i.e.,eachofthesquaredtermsontheright- handsideofequation 2.5), andthecostfunction istheoverallquantitythat ismin"
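The chunk above describes fitting a 1d linear regression by gradient descent: initialize the y-intercept and slope randomly, then repeatedly step in the most steeply downhill direction of the least-squares loss until no further improvement is possible. A minimal sketch of that loop, not part of the uploaded dataset; the function names, the learning rate `lr`, the step count, and the averaged gradient are assumptions made for illustration:

```python
# Rough sketch of the training loop described in the text: gradient descent on
# the least-squares loss of a 1d linear regression model y = phi0 + phi1 * x.
import numpy as np

def loss(phi0, phi1, x, y):
    # cost function: sum of the per-example squared losses (equation 2.5 in the text)
    return np.sum((phi0 + phi1 * x - y) ** 2)

def gradient(phi0, phi1, x, y):
    # gradient of the loss surface with respect to the two parameters,
    # averaged over the data points so the assumed step size does not
    # depend on the number of examples (the minimizer is unchanged)
    residual = phi0 + phi1 * x - y
    return 2.0 * np.mean(residual), 2.0 * np.mean(residual * x)

def fit(x, y, lr=0.1, steps=1000, seed=0):
    rng = np.random.default_rng(seed)
    phi0, phi1 = rng.standard_normal(2)   # choose initial parameters randomly
    for _ in range(steps):
        g0, g1 = gradient(phi0, phi1, x, y)
        phi0 -= lr * g0                   # step in the downhill direction
        phi1 -= lr * g1
    return phi0, phi1
```

For example, `fit(np.linspace(0, 1, 30), 1 + 2 * np.linspace(0, 1, 30))` should walk toward an intercept near 1 and a slope near 2, mirroring the path from position 0 to position 4 in figure 2.4.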
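Footnote 2 in the chunk notes that this iteration is not strictly needed for linear regression, because closed-form expressions for the parameters exist. A sketch of that alternative under the same hypothetical parameterization (the name `fit_closed_form` is not from the source):

```python
# Closed-form alternative mentioned in the footnote: solve the least-squares
# problem directly instead of walking downhill.
import numpy as np

def fit_closed_form(x, y):
    X = np.stack([np.ones_like(x), x], axis=1)    # design matrix [1, x]
    phi, *_ = np.linalg.lstsq(X, y, rcond=None)   # minimizes the sum of squares
    return phi[0], phi[1]                         # y-intercept, slope
```

As the footnote says, this shortcut exists only because the model is so simple; the iterative "walking downhill" recipe is what carries over to the more complex models of later chapters, where no closed-form solution is available.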