"changingormanipulatingthecolorspace,noiseinjection,andapplyingspatialfilters. moreelaboratetechniquesincluderandomlymixingimages(inoue,2018;summers&dinneen, 2019), randomly erasing parts of the image (zhong et al., 2020), style transfer (jackson et al., 2019), and randomly swapping image patches (kang et al., 2017). in addition, many studies haveusedgenerativeadversarialnetworksorgans(seechapter15)toproducenovelbutplau- sible data examples (e.g., calimeri et al., 2017). in other cases, the data have been augmented with adversarial examples (goodfellow et al., 2015a), which are minor perturbations of the training data that cause the example to be misclassified. a review of data augmentation for images can be found in shorten & khoshgoftaar (2019). draft: please send errata to [email protected].160 9 regularization augmentationmethodsforacousticdataincludepitchshifting,timestretching,dynamicrange compression, and adding random noise (e.g., abeßer et al., 2017; salamon & bello, 2017; xu etal.,2015;lasseck,2018),aswellasmixingdatapairs(zhangetal.,2017c;yunetal.,2019), maskingfeatures(parketal.,2019),andusingganstogeneratenewdata(munetal.,2017). augmentationforspeechdataincludesvocaltractlengthperturbation(jaitly&hinton,2013; kandaetal.,2013),styletransfer(gales,1998;ye&young,2004),addingnoise(hannunetal., 2014), and synthesizing speech (gales et al., 2009). augmentationmethodsfortextincludeaddingnoiseatacharacterlevelbyswitching,deleting, and inserting letters (belinkov & bisk, 2018; feng et al., 2020), or by generating adversarial examples(ebrahimietal.,2018),usingcommonspellingmistakes(coulombe,2018),randomly swapping or deleting words (wei & zou, 2019), using synonyms (kolomiyets et al., 2011), altering adjectives (li et al., 2017c), passivization (min et al., 2020), using generative models tocreatenewdata(qiuetal.,2020), and round-triptranslationtoanotherlanguageandback (aiken & park, 2010). augmentation methods for text are reviewed by bayer et al. (2022). problems problem 9.1 consider a model where the prior distribution over the parameters is a normal distribution with mean zero and variance σ2 so that ϕ yj pr(ϕ)= norm [0,σ2], (9.21) ϕj ϕ j=1 q wherej indexesthemodelparameters. wenowmaximize i pr(y |x ,ϕ)pr(ϕ). showthat i=1 i i the associated loss function of this model is equivalent to l2 regularization. problem 9.2 how do the gradients of the loss function change when l2 regularization (equa- tion 9.5) is added? problem 9.3∗ consider a linear regression model y = ϕ +ϕ x with input x, output y, and 0 1 parameters ϕ and ϕ . assume we have i training examples {x ,y } and use a least squares 0 1 i i loss. consider adding gaussian noise with mean zero and variance σ2 to the inputs x at each x i training iteration. what is the expected gradient update? problem 9.4∗ derive the loss function for multiclass classification when we use label smooth- ing so that the target probability distribution has 0.9 at the correct class and the remaining probability mass of 0.1 is divided between the remaining d −1 classes. o problem 9.5 show that the weight decay parameter update with decay rate λ: ∂l ϕ←−(1−λ)ϕ−α , (9.22) ∂ϕ on the original loss function l[ϕ] is equivalentto a standard gradientupdate using l2 regular- ization so that the modified loss function l˜[ϕ] is: x λ"