…segmentation, where the goal is to assign a label to each pixel according to which object is present.

Figure 10.14 1×1 convolution. To change the number of channels without spatial pooling, we apply a 1×1 kernel. Each output channel is computed by taking a weighted sum of all of the channels at the same position, adding a bias, and passing the result through an activation function. Multiple output channels are created by repeating this operation with different weights and biases.

10.5.1 Image classification

Much of the pioneering work on deep learning in computer vision focused on image classification using the ImageNet dataset (figure 10.15). This contains 1,281,167 training images, 50,000 validation images, and 100,000 test images, and every image is labeled as belonging to one of 1000 possible categories.

Figure 10.15 Example ImageNet classification images. The model aims to assign an input image to one of 1000 classes. This task is challenging because the images vary widely along different attributes (columns). These include rigidity (monkey < canoe), number of instances in the image (lizard < strawberry), clutter (compass < steel drum), size (candle < spider web), texture (screwdriver < leopard), distinctiveness of color (mug < red wine), and distinctiveness of shape (headland < bell). Adapted from Russakovsky et al. (2015).

Most methods reshape the input images to a standard size; in a typical system, the input x to the network is a 224×224 RGB image, and the output is a probability distribution over the 1000 classes. The task is challenging: there are a large number of classes, and they exhibit considerable variation (figure 10.15). In 2011, before deep networks were applied, the state-of-the-art method misclassified ∼25% of the test images, where a classification counts as correct if the true class is among the model's top five suggestions. Five years later, the best deep learning models had eclipsed human performance.

In 2012, AlexNet was the first convolutional network to perform well on this task. It consists of eight hidden layers with ReLU activation functions, of which the first five are convolutional and the rest fully connected (figure 10.16).

Figure 10.16 AlexNet (Krizhevsky et al., 2012). The network maps a 224×224 color image to a 1000-dimensional vector representing class probabilities. The network first convolves with 11×11 kernels and stride 4 to create 96 channels. It decreases the resolution again using a max pool operation and applies a 5×5 convolutional layer. Another max pooling layer follows, and three 3×3 convolutional layers are applied. After a final max pooling operation, the result is vectorized and passed through three fully connected (FC) layers and finally the softmax layer.

The network starts by downsampling the input using an 11×11 kernel with a stride of four to create 96 channels. It then downsamples again using a max pooling layer before applying a 5×5 kernel to create 256 channels. Three more convolutional layers with kernel size 3×3 follow (see Problems 10.16–10.17), eventually resulting in a 13×13 representation with 256 channels. This is resized into a single vector of length 43,264 and then passed through three fully connected layers containing 4096, 4096, and 1000 hidden units, respectively. The last layer is passed through the softmax function to output a probability distribution over the 1000 classes. The complete network contains ∼60 million parameters, most of which are in the fully connected layers at the end of the network. Code sketches of this architecture and its training setup appear at the end of this section.
The dataset size was augmented by a factor of 2048 using (i) spatial transformations and (ii) modifications of the input intensities (see Notebook 10.5: Convolution for MNIST). At test time, five different cropped and mirrored versions of the image were run through the network, and their predictions were averaged. The system was trained using SGD with a momentum coefficient of 0.9 and a batch size of 128. Dropout was applied in the fully connected layers, and an L2 (weight decay) regularizer was used. This system achieved a 16.4% top-5 error rate and a 38.1
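As a concrete companion to the architecture description above, here is a minimal sketch in PyTorch. It is an assumption-laden reconstruction, not the original implementation: the padding values and the 384-channel widths of the two middle 3×3 layers (taken from Krizhevsky et al., 2012) are choices made so that the tensor sizes match those quoted in the text, which flattens the 13×13×256 representation directly.

import torch
import torch.nn as nn

# Minimal AlexNet-style network following the text. Padding values and the
# 384-channel widths of the middle 3x3 layers are assumptions chosen to
# reproduce the quoted tensor sizes (55 -> 27 -> 13).
alexnet = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2),  # 3x224x224 -> 96x55x55
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # -> 96x27x27
    nn.Conv2d(96, 256, kernel_size=5, padding=2),           # -> 256x27x27
    nn.ReLU(),
    nn.MaxPool2d(kernel_size=3, stride=2),                  # -> 256x13x13
    nn.Conv2d(256, 384, kernel_size=3, padding=1),          # -> 384x13x13
    nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1),          # -> 384x13x13
    nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1),          # -> 256x13x13
    nn.ReLU(),
    nn.Flatten(),                                           # -> vector of 43,264
    nn.Dropout(p=0.5),                # dropout in the fully connected layers
    nn.Linear(43264, 4096), nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(4096, 4096), nn.ReLU(),
    nn.Linear(4096, 1000),                                  # class logits
)

x = torch.randn(1, 3, 224, 224)            # one RGB image
probs = torch.softmax(alexnet(x), dim=1)   # distribution over the 1000 classes
print(probs.shape)                         # torch.Size([1, 1000])

Note that the figure 10.16 caption mentions a final max pooling operation before vectorization; including it, as the original paper did, reduces the flattened vector from 13×13×256 = 43,264 to 6×6×256 = 9,216 and brings the total parameter count to the quoted ∼60 million. Either way, the overwhelming majority of the parameters sit in the fully connected layers at the end of the network.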
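Next, a hedged sketch of the training configuration, continuing from the alexnet model above. Only the momentum coefficient (0.9), the batch size (128), dropout, and the use of an L2 regularizer come from the text; the learning rate, the weight-decay coefficient, and the specific transforms (RandomResizedCrop, RandomHorizontalFlip, and ColorJitter standing in for the paper's crop/mirror scheme and PCA-based intensity perturbation) are illustrative assumptions.

import torch
import torchvision.transforms as T

# (i) spatial transformations and (ii) intensity modifications; ColorJitter
# is a stand-in for the PCA-based intensity scheme of the original paper.
train_transform = T.Compose([
    T.RandomResizedCrop(224),   # random crop (spatial transformation)
    T.RandomHorizontalFlip(),   # mirroring (spatial transformation)
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.ToTensor(),
])

# SGD with momentum 0.9 as stated; lr and weight-decay value are assumed.
optimizer = torch.optim.SGD(alexnet.parameters(),
                            lr=0.01, momentum=0.9, weight_decay=5e-4)

# Batches of 128 would be drawn with, e.g.:
# loader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True)

# Test time: run several cropped/mirrored views through the network and
# average the predicted class distributions.
def predict_averaged(model, views):   # views: (n, 3, 224, 224) tensor
    model.eval()
    with torch.no_grad():
        return torch.softmax(model(views), dim=1).mean(dim=0)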
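Finally, a brief demonstration of the 1×1 convolution described in the figure 10.14 caption near the start of this excerpt. The channel counts here are arbitrary illustrative choices; the point is that the channel dimension changes while the spatial grid does not.

import torch
import torch.nn as nn

# A 1x1 convolution remaps channels independently at each spatial position:
# a weighted sum over input channels, plus a bias, then an activation.
conv1x1 = nn.Conv2d(in_channels=256, out_channels=64, kernel_size=1)

x = torch.randn(1, 256, 13, 13)   # 13x13 grid with 256 channels
y = torch.relu(conv1x1(x))        # weighted sum + bias + activation
print(y.shape)                    # torch.Size([1, 64, 13, 13]) -- same grid

This is equivalent to applying the same small fully connected layer at every spatial position with shared weights, which is why it changes the number of channels without any spatial pooling.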