Vishwas1 committed on
Commit
b8c68ba
1 Parent(s): 619690e

Upload dataset_chunk_106.csv with huggingface_hub

Files changed (1)
  1. dataset_chunk_106.csv +2 -0
dataset_chunk_106.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "convolutional net- workhas2,050parameters,andthefullyconnectednetworkhas150,185parameters. by the logic of figure 10.4, the convolutional network is a special case of the fully connected draft: please send errata to [email protected] 10 convolutional networks figure 10.6 receptive fields for network with kernel width of three. a) an input with eleven dimensions feeds into a hidden layer with three channels and convo- lution kernel of size three. the pre-activations of the three highlighted hidden unitsinthefirsthiddenlayerh aredifferentweightedsumsofthenearestthree 1 inputs, so the receptive field in h has size three. b) the pre-activations of the 1 four highlighted hidden units in layer h each take a weighted sum of the three 2 channels in layer h at each of the three nearest positions. each hidden unit in 1 layer h weights the nearest three input positions. hence, hidden units in h 1 2 have a receptive field size of five. c) the hidden units in the third layer (kernel size three, stride two) increases the receptive field size to seven. d) by the time we add a fourth layer, the receptive field of the hidden units at position three have a receptive field that covers the entire input. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 169 figure10.7convolutionalnetworkforclassifyingmnist-1ddata(seefigure8.1). the mnist-1d input has dimension d =40. the first convolutional layer has i fifteen channels, kernel size three, stride two, and only retains “valid” positions to make a representation with nineteen positions and fifteen channels. the fol- lowing two convolutional layers have the same settings, gradually reducing the representation size. finally, a fully connected layer takes all sixty hidden units from the third hidden layer. it outputs ten activations that are subsequently passed through a softmax layer to produce the ten class probabilities. figure 10.8 mnist-1d results. a) the convolutional network from figure 10.7 eventually fits the training data perfectly and has ∼17% test error. b) a fully connected network with the same number of hidden layers and the number of hiddenunitsineachlearnsthetrainingdatafasterbutfailstogeneralizewellwith ∼40% test error. the latter model can reproduce the convolutional model but failstodoso. theconvolutionalstructurerestrictsthepossiblemappingstothose thatprocesseverypositionsimilarly,andthisrestrictionimprovesperformance. draft: please send errata to [email protected] 10 convolutional networks one. the latter has enough flexibility to replicate the former exactly. figure 10.8 shows notebook10.2 bothmodelsfitthetrainingdataperfectly. however, thetesterrorfortheconvolutional convolution formnist-1d network is much less than for the fully connected network. this discrepancy is probably not due to the difference in the number of parameters; we know overparameterization usually improves performance (section 8.4.1). the likely explanation is that the convolutional architecture has a superior inductive bias (i.e., interpolates between the training data better) because we have embodied some prior knowledge in the architecture; we have forced the network to process each position in the input in the same way. we know that the data were created by starting with a template that is (among other operations) randomly translated, so this is sensible. thefullyconnectednetworkhastolearnwhateachdigittemplatelookslikeatevery position. 
in contrast, the convolutional network shares information across positions and hence learns to identify each category more accurately. another way of thinking about thisisthatwhenwetraintheconvolutionalnetwork,wesearchthroughasmallerfamily of input/output mappings, all of which are plausible. alternatively, the convolutional structure can be considered a regularizer that applies an infinite penalty to most of the solutions that a fully connected network can describe. 10.3 convolutional networks for 2d inputs the previous section described convolutional networks for processing 1d data. such networkscanbeappliedtofinancialtimeseries,audio,andtext. however,convolutional networks are more usually applied to 2d image data"
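
The receptive-field growth described in the figure 10.6 caption above can be checked mechanically: each layer with kernel size k adds (k - 1) times the product of the preceding strides to the receptive field. Below is a minimal sketch in Python; the helper name receptive_field and the layer lists are our own illustration, not from the source text.

def receptive_field(kernel_sizes, strides):
    # receptive field of one unit in the last layer, in input positions
    rf, jump = 1, 1  # jump: spacing between adjacent units, in input positions
    for k, s in zip(kernel_sizes, strides):
        rf += (k - 1) * jump  # a kernel of size k reaches (k - 1) extra steps
        jump *= s             # stride s spreads subsequent units s times wider
    return rf

# the four stages of figure 10.6 (kernel size three; the third layer has stride two)
print(receptive_field([3], [1]))                    # 3  (panel a)
print(receptive_field([3, 3], [1, 1]))              # 5  (panel b)
print(receptive_field([3, 3, 3], [1, 1, 2]))        # 7  (panel c)
print(receptive_field([3, 3, 3, 3], [1, 1, 2, 1]))  # 11 (panel d: whole input)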
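
The figure 10.7 architecture and the 2,050-parameter count quoted in the text can likewise be reproduced. The following is a minimal PyTorch sketch under stated assumptions: ReLU activations between layers (the caption does not name the activation), with PyTorch's default padding of zero standing in for the "valid" positions the caption describes.

import torch
import torch.nn as nn

# figure 10.7: three "valid" convolutions (kernel 3, stride 2, 15 channels)
# reduce 40 positions -> 19 -> 9 -> 4; 4 positions x 15 channels = 60 units
model = nn.Sequential(
    nn.Conv1d(1, 15, kernel_size=3, stride=2), nn.ReLU(),   # 40 -> 19
    nn.Conv1d(15, 15, kernel_size=3, stride=2), nn.ReLU(),  # 19 -> 9
    nn.Conv1d(15, 15, kernel_size=3, stride=2), nn.ReLU(),  # 9 -> 4
    nn.Flatten(),
    nn.Linear(60, 10),  # ten activations; a softmax then gives probabilities
)

x = torch.randn(1, 1, 40)  # one mnist-1d example (batch, channels, positions)
print(model(x).shape)      # torch.Size([1, 10])
print(sum(p.numel() for p in model.parameters()))  # 2050, as quoted in the text

The count decomposes as 60 (first conv) + 690 + 690 (two 15-to-15 convs) + 610 (fully connected layer), confirming that the final layer indeed sees 4 x 15 = 60 hidden units.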