Vishwas1 committed
Commit 619690e
1 Parent(s): de44c29

Upload dataset_chunk_105.csv with huggingface_hub

Files changed (1)
  dataset_chunk_105.csv +2 -0
dataset_chunk_105.csv ADDED
@@ -0,0 +1,2 @@
+ text
+ "a convolutional layer with kernel size three and stride two computes a weighted sum at every other position. f) this is also a special case of a fully connected network with a different sparse weight structure. figure10.5channels. typically,multipleconvolutionsareappliedtotheinputx and stored in channels. a) a convolution is applied to create hidden units h 1 toh ,whichformthefirstchannel. b)asecondconvolutionoperationisapplied 6 to create hidden units h to h , which form the second channel. the channels 7 12 arestoredina2darrayh thatcontainsallthehiddenunitsinthefirsthidden 1 layer. c) if we add a further convolutional layer, there are now two channels at each input position. here, the 1d convolution defines a weighted sum over both input channels at the three closest positions to create each new output channel. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press.10.2 convolutional networks for 1d inputs 167 figure 10.5a–b illustrates this with two convolution kernels of size three and with zeropadding. thefirstkernelcomputesaweightedsumofthenearestthreepixels,adds abias,andpassestheresultsthroughtheactivationfunctiontoproducehiddenunitsh 1 toh . thesecomprisethefirstchannel. thesecondkernelcomputesadifferentweighted 6 sum of the nearest three pixels, adds a different bias, and passes the results through the activationfunctiontocreatehiddenunitsh toh . thesecomprisethesecondchannel. 7 12 in general, the input and the hidden layers all have multiple channels (figure 10.5c). iftheincominglayerhasc channelsandkernelsize k, thehidden unitsineachoutput i problems10.6–10.8 channel are computed as a weighted sum over all c channels and k kernel positions i using a weight matrix ω ∈ rci×k and one bias. hence, if there are co channels in the notebook10.1 next layer, then we need ω∈rci×co×k weights and β ∈rco biases. 1dconvolution 10.2.6 convolutional networks and receptive fields chapter 4 described deep networks, which consisted of a sequence of fully connected layers. similarly, convolutional networks comprise a sequence of convolutional layers. thereceptive fieldofahiddenunitinthenetworkistheregionoftheoriginalinputthat feedsintoit. consideraconvolutionalnetworkwhereeachconvolutionallayerhaskernel size three. the hidden units in the first layer take a weighted sum of the three closest inputs,sohavereceptivefieldsofsizethree. theunitsinthesecondlayertakeaweighted sum of the three closest positions in the first layer, which are themselves weighted sums of three inputs. hence, the hidden units in the second layer have a receptive field of size five. inthisway,thereceptivefieldofunitsinsuccessivelayersincreases,andinformation from across the input is gradually integrated (figure 10.6). problems10.9–10.11 10.2.7 example: mnist-1d we now apply a convolutional network to the mnist-1d data (see figure 8.1). the input x is a 40d vector, and the output f is a 10d vector that is passed through a softmax layer to produce class probabilities. we use a network with three hidden layers (figure 10.7). the fifteen channels of the first hidden layer h are each computed using 1 a kernel size of three and a stride of two with “valid” padding, giving nineteen spatial positions. the second hidden layer h is also computed using a kernel size of three, a 2 strideoftwo,and“valid”padding. thethirdhiddenlayeriscomputedsimilarly. atthis stage, the representation has four spatial positions and fifteen channels. 
these values are reshaped into a vector of size sixty, which is mapped by a fully connected layer to the ten output activations. thisnetworkwastrainedfor100,000stepsusingsgdwithoutmomentum,alearning rate of 0.01, and a batch size of 100 on a dataset of 4,000 examples. we compare this to problem10.12 a fully connected network with the same number of layers and hidden units (i.e., three hidden layers with 285, 135, and 60 hidden units, respectively). the"
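For readers who want to check the shapes in the MNIST-1D passage above, here is a minimal sketch of that network. It is an illustration based only on the text in this chunk (three convolutional layers with fifteen channels each, kernel size three, stride two, “valid” padding, a flatten to sixty values, and a fully connected layer to ten outputs); the use of PyTorch and of ReLU activations are assumptions, since the chunk names neither a framework nor an activation function.

    # Minimal sketch: assumes PyTorch and ReLU activations; layer sizes follow the text.
    import torch
    import torch.nn as nn

    # Input x: a 40-D MNIST-1D signal treated as one channel -> shape (batch, 1, 40).
    model = nn.Sequential(
        nn.Conv1d(1, 15, kernel_size=3, stride=2),   # "valid" padding: 40 -> 19 positions
        nn.ReLU(),
        nn.Conv1d(15, 15, kernel_size=3, stride=2),  # 19 -> 9 positions
        nn.ReLU(),
        nn.Conv1d(15, 15, kernel_size=3, stride=2),  # 9 -> 4 positions
        nn.ReLU(),
        nn.Flatten(),                                # 15 channels x 4 positions = 60 values
        nn.Linear(60, 10),                           # ten output activations (pre-softmax)
    )

    x = torch.randn(100, 1, 40)      # a batch of 100, matching the batch size in the text
    print(model(x).shape)            # torch.Size([100, 10])

    # Each Conv1d stores C_i x C_o x K weights plus C_o biases, as stated in the text:
    # e.g., the second convolution has 15 * 15 * 3 = 675 weights and 15 biases.
    for name, p in model.named_parameters():
        print(name, tuple(p.shape))

    # Optimizer named in the text: SGD without momentum, learning rate 0.01.
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

Note that with stride two the receptive fields grow faster than in the stride-one example of section 10.2.6, where successive kernel-size-three layers have receptive fields of size three, five, seven, and so on.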