"2.1 supervised learning overview in supervised learning, we aim to build a model that takes an input x and outputs a prediction y. for simplicity, we assume that both the input x and output y are vectors ofapredeterminedandfixedsizeandthattheelementsofeachvectorarealwaysordered in the same way; in the prius example above, the input x would always contain the age ofthecarandthenthemileage, inthatorder. thisistermedstructuredortabulardata. to make the prediction, we need a model f[•] that takes input x and returns y, so: y=f[x]. (2.1) draft: please send errata to [email protected] 2 supervised learning when we compute the prediction y from the input x, we call this inference. the model is just a mathematical equation with a fixed form. it represents a family ofdifferentrelationsbetweentheinputandtheoutput. themodelalsocontainsparam- eters ϕ. the choice of parameters determines the particular relation between input and output, so we should really write: y=f[x,ϕ]. (2.2) when we talk about learning or training a model, we mean that we attempt to find parameters ϕ that make sensible output predictions from the input. we learn these parametersusingatrainingdatasetofi pairsofinputandoutputexamples{x ,y }. we i i aimtoselectparametersthatmapeachtraininginputtoitsassociatedoutputasclosely as possible. we quantify the degree of mismatch in this mapping with the loss l. this is a scalar value that summarizes how poorly the model predicts the training outputs from their corresponding inputs for parameters ϕ. we can treat the loss as a function l[ϕ] of these parameters. when we train the model, we are seeking parameters ϕˆ that minimize this loss function:1 h i ϕˆ =argmin l[ϕ] . (2.3) ϕ if the loss is small after this minimization, we have found model parameters that accu- rately predict the training outputs y from the training inputs x . i i after training a model, we must now assess its performance; we run the model on separatetest datatoseehowwellitgeneralizestoexamplesthatitdidn’tobserveduring training. if the performance is adequate, then we are ready to deploy the model. 2.2 linear regression example let’s now make these ideas concrete with a simple example. we consider a model y = f[x,ϕ] that predicts a single output y from a single input x. then we develop a loss function, and finally, we discuss model training. 2.2.1 1d linear regression model a 1d linear regression model describes the relationship between input x and output y as a straight line: y = f[x,ϕ] = ϕ +ϕ x. (2.4) 0 1 1more properly, the loss function also depends on the training data {xi,yi}, so we should writel[{xi,yi},ϕ],butthisisrathercumbersome. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.2.2 linear regression example 19 figure 2.1 linear regression model. for a given choice of parameters ϕ = [ϕ ,ϕ ]t, the model makes a predic- 0 1 tion for the output (y-axis) based on the input (x-axis). different choices for the y-intercept ϕ and the slope ϕ 0 1 change these predictions (cyan, orange, and gray lines). the linear regression model (equation 2.4) defines a family of input/output relations (lines) and the parametersdeterminethememberofthe family (the particular line). this model has two parameters ϕ = [ϕ ,ϕ ]t, where ϕ is the y-intercept of the line 0 1 0 and ϕ is the slope. different choices for the y-intercept and slope result in different 1 relations between input and output (figure 2.1). 
2.2.2 Loss

For this model, the training dataset
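The excerpt cuts off before the loss for this model is defined. As context for equation 2.3, the sketch below assumes a least-squares loss, L[ϕ] = Σ_i (f[x_i, ϕ] − y_i)², and minimizes it with simple gradient descent; both the choice of loss and the toy training data are assumptions made for illustration, not taken from this excerpt.

```python
import numpy as np

def f(x, phi):
    # 1D linear regression model (equation 2.4)
    return phi[0] + phi[1] * x

def loss(phi, x, y):
    # Assumed least-squares loss L[phi]: sum of squared prediction errors
    return np.sum((f(x, phi) - y) ** 2)

# Toy training data {x_i, y_i}, invented for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.9, 2.1, 2.9, 4.2])

# Gradient descent seeking phi_hat = argmin_phi L[phi] (equation 2.3).
phi = np.array([0.0, 0.0])   # initial [y-intercept, slope]
lr = 0.01                    # step size
for _ in range(2000):
    residual = f(x, phi) - y                                     # prediction errors
    grad = 2 * np.array([residual.sum(), (residual * x).sum()])  # dL/dphi_0, dL/dphi_1
    phi = phi - lr * grad

print("phi_hat =", phi, " loss =", loss(phi, x, y))
```

After convergence, phi_hat approximates the intercept and slope of the best-fitting line through the toy data; if the remaining loss is small, the trained model maps each training input close to its associated output, exactly as described in section 2.1.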