dl_dataset_1 / dataset_chunk_116.csv
"convolution with kernel size five, stride two, and a dilation rate of one. the second hidden layer h is computed using a convolution with kernelsize three, stride one, and 2 a dilation rate of one. the third hidden layer h is computed using a convolution with kernel 3 sizefive,strideone,andadilationrateoftwo. whatarethereceptivefieldsizesateachhidden layer? problem10.12the1dconvolutionalnetworkinfigure10.7wastrainedusingstochasticgradient descentwithalearningrateof0.01andabatchsizeof100onatrainingdatasetof4,000examples for 100,000 steps. how many epochs was the network trained for? problem 10.13 draw a weight matrix in the style of figure 10.4d that shows the relationship between the 24 inputs and the 24 outputs in figure 10.9. problem 10.14 consider a 2d convolutional layer with kernel size 5×5 that takes 3 input channels and returns 10 output channels. how many convolutional weights are there? how many biases? problem 10.15 draw a weight matrix in the style of figure 10.4d that samples every other variable in a 1d input (i.e., the 1d analog of figure 10.11a). show that the weight matrix for 1d convolution with kernel size and stride two is equivalent to composing the matrices for 1d convolution with kernel size one and this sampling matrix. problem 10.16∗ consider the alexnet network (figure 10.16). how many parameters are used in each convolutional and fully connected layer? what is the total number of parameters? problem 10.17 what is the receptive field size at each of the first three layers of alexnet (figure 10.16)? problem 10.18 how many weights and biases are there at each convolutional layer and fully connected layer in the vgg architecture (figure 10.17)? problem 10.19∗ consider two hidden layers of size 224×224 with c and c channels, respec- 1 2 tively, connected by a 3×3 convolutional layer. describe how to initialize the weights using he initialization. draft: please send errata to [email protected] 11 residual networks the previous chapter described how image classification performance improved as the depth of convolutional networks was extended from eight layers (alexnet) to eighteen layers (vgg). this led to experimentation with even deeper networks. however, per- formance decreased again when many more layers were added. this chapter introduces residual blocks. here, each network layer computes an addi- tive change to the current representation instead of transforming it directly. this allows deeper networks to be trained but causes an exponential increase in the activation mag- nitudes at initialization. residual blocks employ batch normalization to compensate for this, which re-centers and rescales the activations at each layer. residual blocks with batch normalization allow much deeper networks to be trained, and these networks improve performance across a variety of tasks. architectures that combine residual blocks to tackle image classification, medical image segmentation, and human pose estimation are described. 11.1 sequential processing every network we have seen so far processes the data sequentially; each layer receives the previous layer’s output and passes the result to the next (figure 11.1). for example, a three-layer network is defined by: h = f [x,ϕ ] 1 1 1 h = f [h ,ϕ ] 2 2 1 2 h = f [h ,ϕ ] 3 3 2 3 y = f [h ,ϕ ], (11.1) 4 3 4 where h , h , and h denote the intermediate hidden layers, x is the network input, y 1 2 3 is the output, and the functions f [•,ϕ ] perform the processing. 
In a standard neural network, each layer consists of a linear transformation followed by an activation function, and the parameters ϕ_k comprise the weights and biases of the linear transformation.
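The chapter overview above describes a residual block as adding a computed change back onto the current representation, with batch normalization re-centering and rescaling the activations. The NumPy sketch below illustrates that additive update; it is not the book's implementation, and the internal ordering (batch normalization, then linear transformation, then ReLU), the omission of the learned scale and offset, and all sizes are assumptions made for illustration.

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def batchnorm(H, eps=1e-5):
    # Re-center and rescale each channel across the batch
    # (the learned scale and offset of full batch normalization are omitted here).
    return (H - H.mean(axis=0, keepdims=True)) / (H.std(axis=0, keepdims=True) + eps)

def residual_block(H, W, b):
    # Residual update: compute a change and add it to the current representation,
    # H_out = H + f[H, phi], instead of replacing H outright.
    change = relu(batchnorm(H) @ W + b)   # one possible internal ordering (assumed)
    return H + change

rng = np.random.default_rng(1)
batch, width = 100, 16                    # illustrative batch size and layer width
H = rng.standard_normal((batch, width))   # current representation
W = rng.standard_normal((width, width)) * np.sqrt(2.0 / width)
b = np.zeros(width)

H_next = residual_block(H, W, b)          # same shape as H, so such blocks can be stacked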