Network implementation 

Now that we have all the tensors required to implement the network (the input x, the learnable parameters w and b, and the target y), we perform a matrix multiplication between x and w, then add b to the result. That gives us our predicted y. The function is implemented as follows:

def simple_network(x):
    # wx + b: multiply the input by the weights and add the bias
    y_pred = torch.matmul(x, w) + b
    return y_pred
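
To make the shapes concrete, here is a minimal, self-contained sketch of how x, w, and b might be created and passed through simple_network; the shapes, the random initialization, and the seed are illustrative assumptions rather than the exact setup used elsewhere in this chapter:

import torch

torch.manual_seed(0)

x = torch.randn(17, 1)                   # 17 training examples, one feature each (assumed shape)
w = torch.randn(1, requires_grad=True)   # learnable weight
b = torch.randn(1, requires_grad=True)   # learnable bias

def simple_network(x):
    y_pred = torch.matmul(x, w) + b      # wx + b for every example
    return y_pred

print(simple_network(x).shape)           # torch.Size([17]), one prediction per example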

PyTorch also provides a higher-level abstraction in torch.nn, called layers, which takes care of the initialization and the underlying operations needed for most common neural network techniques. Here we use the lower-level operations to understand what happens inside these functions. In later chapters, namely Chapter 5, Deep Learning for Computer Vision, and Chapter 6, Deep Learning with Sequence Data and Text, we will rely on the PyTorch abstractions to build complex neural networks and functions. The previous model can be represented as a torch.nn layer, as follows:

f = nn.Linear(17, 1) # Much simpler.
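
As a rough illustration of what this layer encapsulates (the batch size and input shape below are assumptions for the example), nn.Linear creates and manages its own weight and bias tensors and applies the same kind of affine transform we wrote by hand:

import torch
import torch.nn as nn

f = nn.Linear(17, 1)                     # creates weight of shape (1, 17) and bias of shape (1,) internally

x = torch.randn(64, 17)                  # an assumed batch of 64 samples with 17 features each
y_pred = f(x)                            # computes x @ f.weight.T + f.bias
print(y_pred.shape)                      # torch.Size([64, 1])
print(f.weight.shape, f.bias.shape)      # torch.Size([1, 17]) torch.Size([1])

Because the layer owns its parameters, they are registered automatically and can be retrieved through f.parameters(), which is what an optimizer consumes later.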

Now that we have calculated the predicted y values, we need to know how good our model is, which is measured by the loss function.