Tuning a neural network by hand
The essence of neural networks is just simple function fitting. A neural network is a universal function approximator: given enough data and network parameters, it can approximate any (continuous) function to any desired accuracy[1]. This is immensely powerful, given that pretty much any behaviour (identifying faces, playing chess, holding a conversation) can be defined as a function, and, with the right encoding, any such function can be described mathematically.

Above is the simplest case of a feed-forward neural network: a single hidden layer with a single neuron. With the ReLU activation function, this is simply a linear function that has been cut off at zero.
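As a minimal sketch of that single-neuron case (the weight and bias values below are arbitrary illustrative choices, not values from the post):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# A single hidden neuron with an arbitrary weight and bias.
w, b = 0.5, -2.0

def single_neuron(x):
    # The linear function w*x + b, cut off at zero by ReLU.
    return relu(w * x + b)

x = np.linspace(-10, 10, 5)
print(single_neuron(x))  # zero below x = 4, then rises linearly
```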
[Interactive diagram: Node 1, Node 2, Node 3 (hidden layer) and Output node]
The output node is the weighted sum of the hidden nodes plus its own bias. For simplicity, the weights of the final edges have been set to 1. It is now fairly easy to hand-tune the network parameters to approximate a target function of:
\[
0.012 \cdot x^{2} + 1
\]
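As a rough sketch of such a hand-tuning, assuming the fit is over x in [0, 100] (the range, the knee positions at 33 and 67, and the slopes below are illustrative assumptions, not the post's actual values):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

# Hand-picked (weight, bias) pairs for three hidden ReLU nodes.
# Each node switches on at a chosen knee point and adds slope there.
hidden = [(0.4, 0.0),       # active from x = 0, slope 0.4
          (0.8, -0.8 * 33), # kicks in at x = 33, adds slope 0.8
          (0.8, -0.8 * 67)] # kicks in at x = 67, adds slope 0.8
output_bias = 1.0           # matches the "+ 1" in the target

def network(x):
    # Output node: weighted sum of the hidden nodes (final edge
    # weights fixed to 1) plus its own bias.
    return sum(relu(w * x + b) for w, b in hidden) + output_bias

def target(x):
    return 0.012 * x**2 + 1

x = np.linspace(0, 100, 201)
print("max abs error:", np.max(np.abs(network(x) - target(x))))
```

Each hidden node turns on at one of the chosen knee points, so the output is a piecewise-linear curve that hugs the parabola; placing the knees closer together (or adding more hidden nodes) shrinks the error further.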