dl_dataset_1 / dataset_chunk_27.csv
"notes 37 figure 3.13 activation functions. a) logistic sigmoid and tanh functions. b) leaky relu and parametric relu with parameter 0.25. c) softplus, gaussian errorlinearunit,andsigmoidlinearunit. d)exponentiallinearunitwithparam- eters0.5and1.0,e)scaledexponentiallinearunit. f)swishwithparameters0.4, 1.0, and 1.4. history of neural networks: mcculloch & pitts (1943) first came up with the notion of an artificialneuronthatcombinedinputstoproduceanoutput,butthismodeldidnothaveaprac- tical learning algorithm. rosenblatt (1958) developed the perceptron, which linearly combined inputs and then thresholded them to make a yes/no decision. he also provided an algorithm to learn the weights from data. minsky & papert (1969) argued that the linear function was inadequate for general classification problems but that adding hidden layers with nonlinear activation functions (hence the term multi-layer perceptron) could allow the learning of more generalinput/outputrelations. however,theyconcludedthatrosenblatt’salgorithmcouldnot learn the parameters of such models. it was not until the 1980s that a practical algorithm (backpropagation, see chapter 7) was developed, and significant work on neural networks re- sumed. thehistoryofneuralnetworksischronicledbykurenkov(2020),sejnowski(2018),and schmidhuber (2022). activation functions: the relu function has been used as far back as fukushima (1969). however,intheearlydaysofneuralnetworks,itwasmorecommontousethelogisticsigmoidor tanhactivationfunctions(figure3.13a). thereluwasre-popularizedbyjarrettetal.(2009), nair&hinton(2010),andglorotetal.(2011)andisanimportantpartofthesuccessstoryof modernneuralnetworks. ithasthenicepropertythatthederivativeoftheoutputwithrespect to the input is always one for inputs greater than zero. this contributes to the stability and efficiency of training (see chapter 7) and contrasts with the derivatives of sigmoid activation draft: please send errata to [email protected] 3 shallow neural networks functions, which saturate (become close to zero) for large positive and large negative inputs. however,therelufunctionhasthedisadvantagethatitsderivativeiszerofornegativeinputs. if all the training examples produce negative inputs to a given relu function, then we cannot improve the parameters feeding into this relu during training. the gradient with respect to the incoming weights is locally flat, so we cannot “walk downhill.” this is known as the dying relu problem. many variations on the relu have been proposed to resolve this problem (figure3.13b), including(i)theleaky relu(maasetal.,2013), whichalsohasa linearoutput fornegativevalueswithasmallerslopeof0.1,(ii)theparametricrelu(heetal.,2015),which treats the slope of the negative portion as an unknown parameter, and (iii) the concatenated relu(shangetal.,2016),whichproducestwooutputs,oneofwhichclipsbelowzero(i.e.,like a typical relu) and one of which clips above zero. a variety of smooth functions have also been investigated (figure 3.13c–d), including the soft- plus function (glorot et al., 2011), gaussian error linear unit (hendrycks & gimpel, 2016), sigmoid linear unit (hendrycks & gimpel, 2016), and exponential linear unit (clevert et al., 2015). mostoftheseareattemptstoavoidthedyingreluproblemwhilelimitingthegradient for negative values. klambauer et al. (2017) introduced the scaled exponential linear unit (fig- ure 3.13e), which is particularly interesting as it helps stabilize the variance of the activations when the input variance has a limited range (see section 7.5). 
Ramachandran et al. (2017) adopted an empirical approach to choosing an activation function. They searched the space of possible functions to find the one that performed best over a variety of supervised learning tasks.
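The excerpt breaks off here, but the function that this search arrived at, swish, is usually written as swish(z) = z * sigmoid(beta * z). The sketch below uses that standard definition (an assumption, since the formula is not given in this chunk) with the three beta values quoted for figure 3.13f.

```python
# Swish with the beta values shown in figure 3.13f (assumed standard form).
import numpy as np

def swish(z, beta=1.0):
    return z / (1.0 + np.exp(-beta * z))   # z * sigmoid(beta * z)

z = np.linspace(-3.0, 3.0, 7)
for beta in (0.4, 1.0, 1.4):
    print(beta, swish(z, beta))
```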