AI by Hand

Published: November 23, 2024
Modified: July 19, 2025

Training the Neural Network

Let us head off to this website and build a small neural network by hand:

https://student.desmos.com/join/6x8gme

Every time we give the network a new “sketch” to train with, potentially all of its connection weights are updated, using backpropagation.

Cost-Gradient for each Weight

  1. The cost function was the squared error averaged over all \(n\) neurons:

\[ C(W, b) = \frac{1}{2n}\sum^{n ~ neurons}_{i=1}e_i^2 \tag{1}\]

  2. Serious Magic: We want to differentiate this sum with respect to each Weight. Before we calculate \(\frac{dC}{dW^l_{jk}}\), we realize that any weight \(W^l_{jk}\) connects only as an input to one neuron \(k\), which outputs \(a^l_k\). No other term in the above summation depends upon this specific Weight, so the summation reduces to just one term, the one pertaining to the activation-output \(a^l_k\)!

\[ \begin{align} \frac{d~C}{d~\color{orange}{\pmb{W^l_{jk}}}} &= \frac{d}{d~\color{orange}{\pmb{W^l_{jk}}}}\Bigg(\frac{1}{2n}\sum^{n~neurons}_{i=1}e_i^2\Bigg)\\ \\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \frac{d}{d~\color{orange}{\pmb{W^l_{jk}}}}\Big(\pmb{\color{red}{\Large{e^{l}_k}}}\Big) \qquad \text{only the } k^{th} \text{ neuron in layer } l \text{ survives}\\ \\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \frac{d}{d~\color{orange}{\pmb{W^l_{jk}}}}\Big(\color{red}{\Large{a^{l}_k - d^l_k}}\Big) \end{align} \]

  3. Now, the relationship between \(a^{l}_k\) and \(W^l_{jk}\) involves the sigmoid function. (And the desired output \(d^l_k\) does not depend upon any weight at all!)

\[ \begin{align} \color{red}{\pmb{a^l_k}} ~ &= \sigma~\Bigg(\sum^{neurons~in~l-1}_{j=1} \pmb{\color{orange}{W^l_{jk}}} ~ * ~ a^{l-1}_j ~ + ~ b^l_k\Bigg)\\ &= \color{red}{\sigma(everything)}\\ \end{align} \]

  4. We also know \[ \large{\frac{d\sigma(x)}{dx}} = \sigma(x) * \big(1 - \sigma(x)\big) \]

  5. Final Leap: Using the great chain rule for differentiation, we obtain:

\[ \begin{align} \frac{d~C}{d~\color{orange}{\pmb{W^l_{jk}}}} &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \frac{d}{d~\color{orange}{\pmb{W^l_{jk}}}}\Big(\color{red}{\Large{a^{l}_k - d^l_k}}\Big)\\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \frac{d~\color{red}{\pmb{a^l_k}}}{d~\color{orange}{\pmb{W^l_{jk}}}}\\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \frac{d~\color{red}{\sigma(everything)}}{d~\color{orange}{\pmb{W^l_{jk}}}}\\ \\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \sigma(everything) * \big(1 - \sigma(everything)\big) * \frac{d(everything)}{d~\color{orange}{\pmb{W^l_{jk}}}} \qquad \text{Applying the Chain Rule!}\\ \\ &= \frac{\color{skyblue}{\large{e^l_k}}}{n} ~ * ~ \color{red}{a^{l-1}_j} ~ * ~ \sigma~\Bigg(\sum^{neurons~in~l-1}_{j=1} \pmb{\color{orange}{W^l_{jk}}} ~ * ~ a^{l-1}_j ~ + ~ b^l_k\Bigg) * \Bigg(1 - \sigma~\Bigg(\sum^{neurons~in~l-1}_{j=1} \pmb{\color{orange}{W^l_{jk}}} ~ * ~ a^{l-1}_j ~ + ~ b^l_k\Bigg)\Bigg) \end{align} \tag{2}\]

How do we understand this monster equation intuitively? Let us first draw a diagram to visualize the components:

Let us take the Weight \(W^l_{jk}\). It connects neuron \(j\) in layer \(l-1\) with neuron \(k\) in layer \(l\), using the activation \(a^{l-1}_j\). The relevant output error (that contributes to the Cost function) is \(e^l_{k}\).

  • The product \(\large{\color{red}{a^{l-1}_j} ~ * ~ \color{lightblue}{e^l_k}}\) is like a correlation product of the two quantities at the input and output of neuron \(k\). This product contributes a sense of slope: the larger either of these is, the larger the Cost-slope going from neuron \(j\) to \(k\).
  • How do we account for the magnitude of the Weight \(W^l_{jk}\) itself? Surely that matters! Yes, but note that \(W^l_{jk}\) is entwined with the remaining inputs and weights via the \(\sigma\) function term! We must differentiate that term and put that differential into the product. That gives us the two other product terms in the formula above which involve the sigmoid function.

So, monster as it is, the formula is quite intuitive and even beautiful!

What does this Gradient Look Like?

This gradient is calculated, in vectorized fashion, for all weights: stacking the partial derivatives \(\frac{dC}{dW^l_{jk}}\) for every weight in the network gives the full gradient of the Cost.
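Here is a minimal sketch in plain JavaScript (usable inside a p5.js sketch) of Equation 2 applied to every weight of one layer. The function and variable names (`costGradient`, `aPrev`, `e`) are illustrative, not from any library.

```js
// Sketch: per-weight Cost gradient for one layer, following Equation 2.
// W[j][k] is the weight from neuron j (layer l-1) to neuron k (layer l),
// aPrev holds the activations of layer l-1, b the biases of layer l,
// and e the errors e^l_k of layer l.
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

function costGradient(W, b, aPrev, e) {
  const n = e.length; // number of neurons in layer l
  const grad = [];
  for (let j = 0; j < aPrev.length; j++) {
    grad[j] = [];
    for (let k = 0; k < n; k++) {
      // z is the "everything" inside the sigmoid for neuron k
      let z = b[k];
      for (let i = 0; i < aPrev.length; i++) {
        z += W[i][k] * aPrev[i];
      }
      const s = sigmoid(z);
      // Equation 2: (e_k / n) * a^{l-1}_j * sigma(z) * (1 - sigma(z))
      grad[j][k] = (e[k] / n) * aPrev[j] * s * (1 - s);
    }
  }
  return grad;
}
```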

How Does the NN Use this Gradient?

So, now that we have the gradient of the Cost with respect to \(W^l_{jk}\), we can adapt \(W^l_{jk}\) by moving a small tuning step in the opposite direction:

\[ W^l_{jk}~\Big|~new = W^l_{jk}~\Big|~old - \alpha * \frac{d~C}{d~W^l_{jk}} \tag{3}\]

and we adapt all weights in opposition to their individual Cost gradients. The parameter \(\alpha\) is called the learning rate.
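In code, Equation 3 is just an element-wise update of the weight array. A minimal sketch, assuming the same 2-D `W[j][k]` layout and the `grad` returned by `costGradient()` above:

```js
// Sketch: Equation 3 applied to every weight of a layer.
// alpha is the learning rate; grad has the same shape as W.
function updateWeights(W, grad, alpha) {
  for (let j = 0; j < W.length; j++) {
    for (let k = 0; k < W[j].length; k++) {
      W[j][k] -= alpha * grad[j][k]; // step against the gradient
    }
  }
  return W;
}
```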

Yes, but not all neurons have a desired output, so what do we use for the error? Only the output neurons have a desired output!

The backpropagated error, peasants! Each neuron has already “received” its share of error, which is converted to Cost, whose gradient with respect to all input weights of that specific neuron is calculated using Equation 2, and each weight is then adapted using Equation 3.
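A sketch of that “sharing” step, in the spirit of Tariq Rashid’s Make Your Own Neural Network (see References): each hidden neuron receives the downstream errors weighted by its connecting weights. The function name and array layout are illustrative.

```js
// Sketch: give each hidden neuron j its share of the next layer's errors,
// weighted by the connecting weights. eNext[k] is the error of neuron k
// in layer l; W[j][k] connects hidden neuron j to that neuron.
function backpropagateError(W, eNext) {
  const eHidden = new Array(W.length).fill(0);
  for (let j = 0; j < W.length; j++) {
    for (let k = 0; k < eNext.length; k++) {
      eHidden[j] += W[j][k] * eNext[k]; // weighted share of error
    }
  }
  return eHidden;
}
```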

Here Comes the Rain Maths Again!

Now we are ready (maybe?) to watch these two beautifully made videos on Backpropagation: one from Dan Shiffman, and the other from Grant Sanderson, a.k.a. 3Blue1Brown.

Gradient Descent in Code

  • Using p5.js
  • Using R (with torch)
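As a minimal illustration, here is a plain-JavaScript sketch that trains a single sigmoid neuron on a toy OR-like dataset, reusing `sigmoid()`, `costGradient()`, and `updateWeights()` from the sketches above. The dataset, network size, and learning rate are illustrative choices, not taken from the p5.js or torch versions.

```js
// Sketch: gradient descent for a single sigmoid neuron,
// combining the forward pass with Equations 2 and 3.
// Reuses sigmoid(), costGradient(), and updateWeights() from above.
const inputs  = [[0, 0], [0, 1], [1, 0], [1, 1]];
const targets = [[0], [1], [1], [1]];                   // a simple OR-like task

let W = [[Math.random() - 0.5], [Math.random() - 0.5]]; // W[j][k]
let b = [Math.random() - 0.5];
const alpha = 0.5;

for (let epoch = 0; epoch < 5000; epoch++) {
  for (let s = 0; s < inputs.length; s++) {
    const aPrev = inputs[s];
    // Forward pass: a_k = sigma( sum_j W[j][k] * aPrev[j] + b[k] )
    const a = b.map((bk, k) =>
      sigmoid(aPrev.reduce((z, aj, j) => z + W[j][k] * aj, bk)));
    // Error e_k = a_k - d_k, then Equations 2 and 3
    const e = a.map((ak, k) => ak - targets[s][k]);
    const grad = costGradient(W, b, aPrev, e);
    W = updateWeights(W, grad, alpha);
    // Bias update: same chain rule, but d(everything)/db_k = 1
    for (let k = 0; k < b.length; k++) {
      b[k] -= alpha * (e[k] / e.length) * a[k] * (1 - a[k]);
    }
  }
}
// After training, the neuron's outputs should be close to the targets.
```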

References

  1. Tariq Rashid. Make your own Neural Network. PDF Online
  2. Mathoverflow. Intuitive Crutches for Higher Dimensional Thinking. https://mathoverflow.net/questions/25983/intuitive-crutches-for-higher-dimensional-thinking
  3. Interactive Backpropagation Explainer https://xnought.github.io/backprop-explainer/