hyperparameters 132
8.6 Summary 133

9 Regularization 138
9.1 Explicit regularization 138
9.2 Implicit regularization 141
9.3 Heuristics to improve performance 144
9.4 Summary 154

10 Convolutional networks 161
10.1 Invariance and equivariance 161
10.2 Convolutional networks for 1D inputs 163
10.3 Convolutional networks for 2D inputs 170
10.4 Downsampling and upsampling 171
10.5 Applications 174
10.6 Summary 179

11 Residual networks 186
11.1 Sequential processing 186
11.2 Residual connections and residual blocks 189
11.3 Exploding gradients in residual networks 192
11.4 Batch normalization 192
11.5 Common residual architectures 195
11.6 Why do nets with residual connections perform so well? 199
11.7 Summary 199

12 Transformers 207
12.1 Processing text data 207
12.2 Dot-product self-attention 208
12.3 Extensions to dot-product self-attention 213
12.4 Transformers 215
12.5 Transformers for natural language processing 216
12.6 Encoder model example: BERT 219
12.7 Decoder model example: GPT3 222
12.8 Encoder-decoder model example: machine translation 226
12.9 Transformers for long sequences 227
12