text ". each layer is connected to the next by forward con- nections (arrows). for this reason, these models are referred to as feed-forward networks. when every variable in one layer connects to every variable in the next, we call this a fully connected network. each connection represents a slope parameter in the underlying equation, and these parameters are termed weights. thevariablesinthehiddenlayeraretermedneuronsorhiddenunits. thevalues feeding into the hidden units are termed pre-activations, and the values at the hidden units (i.e., after the relu function is applied) are termed activations. represent slope parameters in the underlying equations and are referred to as network weights. the offset parameters (not shown in figure 3.12) are called biases. 3.6 summary shallowneuralnetworkshaveonehiddenlayer. they(i)computeseverallinearfunctions of the input, (ii) pass each result through an activation function, and then (iii) take a linear combination of these activations to form the outputs. shallow neural networks make predictions y based on inputs x by dividing the input space into a continuous surface of piecewise linear regions. with enough hidden units (neurons), shallow neural networks can approximate any continuous function to arbitrary precision. chapter4discussesdeepneuralnetworks,whichextendthemodelsfromthischapter by adding more hidden layers. chapters 5–7 describe how to train these models. notes “neural” networks: if the models in this chapter are just functions, why are they called “neural networks”? the connection is, unfortunately, tenuous. visualizations like figure 3.12 consistofnodes(inputs,hiddenunits,andoutputs)thataredenselyconnectedtooneanother. this bears a superficial similarity to neurons in the mammalian brain, which also have dense connections. however, there is scant evidence that brain computation works in the same way as neural networks, and it is unhelpful to think about biology going forward. this work is subject to a creative commons cc-by-nc-nd license. (c) mit press."