
The Multilayer Perceptron

Published: November 23, 2024
Modified: July 19, 2025

What is a Multilayer Perceptron?

This was our bare-bones Perceptron, or neuron as we will refer to it henceforth:

\[ y = sign~(~\sum_{k=1}^n W_k*x_k + b~) \]
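In code, this neuron is just a weighted sum followed by a hard threshold. Here is a minimal sketch in plain JavaScript, of the kind one could drop into a p5.js sketch; the names `perceptron` and `sign` are our own, for illustration only.

```js
// A bare-bones perceptron: weighted sum of the inputs plus a bias,
// passed through a hard-threshold (sign) activation.
function sign(z) {
  return z >= 0 ? 1 : -1;
}

function perceptron(inputs, weights, bias) {
  let sum = bias;
  for (let k = 0; k < inputs.length; k++) {
    sum += weights[k] * inputs[k];
  }
  return sign(sum);
}

// Example: a two-input neuron
console.log(perceptron([1, -2], [0.5, 0.8], 0.1)); // -1
```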

For the multilayer perceptron, two changes were made:

  • changing the hard-threshold activation into a softer, sigmoid activation;

  • adding one or more hidden layers.

Let us discuss these changes in detail.

What is the Activation Block?

  • We said earlier that the weighting-and-adding is a linear operation.
  • While this is useful, simple linear transformations of the data are not capable of generating what we might call learning or generalization ability.
  • The output of the perceptron is a “learning decision” that is made by deciding whether the combined output is greater or smaller than a threshold.
  • We need some non-linear block that allows the network to create nonlinear transformations of the data space, such as curving it, folding it, or creating bumps, depressions, twists, and so on.

Figure: Activation
  • This nonlinear function needs to be chosen with care so that it is both differentiable and keeps the math analysis tractable. (More on this later.)
  • Such a nonlinear mathematical function is implemented in the Activation Block.
  • See the example in Figure 1: the red and blue areas, which we wish to separate and classify with our DLNN, are not separable unless we fold and curve our 2D data space.
  • Once the space has been warped in this way, the separation can be achieved using a linear operation, i.e. a LINE!!

Figure 1: From Colah's blog, used sadly without permission

What is the Sigmoid Function?

The hard threshold used in the Perceptron allowed us to make certain decisions based on linear combinations of the input data. But what if the dataset possesses classes that are not separable in a linear way? What if different categories of points are intertwined, with a curved boundary between classes?

As noted above, we need a non-linear block that lets the network warp the data space: curving it, folding it, creating bumps, depressions, twists, and so on.

Figure 2: From Colah's blog, used sadly without permission

In Figure 2, for instance, no amount of stretching or compressing of the surface can separate the two sets (blue and red) using a line or plane, unless the surface can be warped into another dimension by folding.

So how do we implement this nonlinear Activation Block?

  • One of the popular functions used in the Activation Block is based on the exponential function \(e^x\).
  • Why? Because the exponential function retains its identity when differentiated! This is a very convenient property.

Figure: Sigmoid Activation
Note: Remembering Logistic Regression

Recall your study of Logistic Regression. There, the Sigmoid function was used to model the odds of the (Qualitative) target variable against the (Quantitative) predictor.

Note: But Why Sigmoid?

Because the Sigmoid function is differentiable, and approximately linear in the middle of its range. Oh, and remember the Chain Rule? Writing \(f(x) = \frac{1}{1 + e^{-x}}\) for the Sigmoid, we have:

\[ \begin{align} \frac{df(x)}{dx} &= \frac{d}{dx}\Bigg(\frac{1}{1 + e^{-x}}\Bigg)\\ &= -(1 + e^{-x})^{-2} * \frac{d}{dx}(1 + e^{-x})~~\text{(using the Chain Rule)}\\ &= -(1 + e^{-x})^{-2} * (-e^{-x})\\ &= \frac{e^{-x}}{(1 + e^{-x})^{2}}\\ &= \frac{(1 + e^{-x}) - 1}{(1 + e^{-x})^{2}}\\ &= \frac{1}{1 + e^{-x}} * \Bigg(\frac{1 + e^{-x}}{1 + e^{-x}} - \frac{1}{1 + e^{-x}}\Bigg)\\ \text{and therefore}~~\frac{df(x)}{dx} &= f(x) * (1 - f(x)) \end{align} \]
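A quick numerical check of this identity in plain JavaScript: evaluating \(f(x)*(1-f(x))\) should agree with a finite-difference estimate of the slope. The helper names `sigmoid` and `sigmoidDerivative` are our own.

```js
// Sigmoid and its derivative, using the identity f'(x) = f(x) * (1 - f(x))
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

function sigmoidDerivative(x) {
  const fx = sigmoid(x);
  return fx * (1 - fx);
}

// Sanity check against a centred finite-difference approximation
const x0 = 0.7;
const h = 1e-5;
const numerical = (sigmoid(x0 + h) - sigmoid(x0 - h)) / (2 * h);
console.log(sigmoidDerivative(x0), numerical); // both ≈ 0.2217
```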

What are Hidden Layers?

The MLP stacks several layers of perceptrons, as shown below:


  • Here, i1, i2, and i3 are input neurons: they are simply inputs and are drawn as circles in the literature.
  • h1, h2, and h3 are neurons in the so-called hidden layer; hidden because they are not inputs!
  • The neurons o1, o2, and o3 are output neurons.
  • Signals/information flow from left to right in the diagram. We have shown every neuron connected to every neuron in the next layer downstream.

How do we mathematically, and concisely, express the operation of the MLP? Let us set up a notation for the MLP weights.

  • \(l\) : layer index;
  • \(j\), \(k\) : neuron index in two adjacent layers
  • \(W^l_{jk}\) (i.e. \(W^{layer}_{{source}~{destn}}\)) : weight from \(j\)th neuron / \((l−1)\)th layer to \(k\)th neuron / \(l\)th layer;
  • \(b^l_k\) : bias of the \(k\)th neuron in the \(l\)th layer.
  • \(a^l_k\) : activation (output) of \(k\)th neuron / \(l\)th layer.


We can write the outputs of layer 2 as:

\[ \begin{align} (k = 1): ~ a_{12} = sigmoid~(~\color{red}{W^2_{11}*a_{11}} + \color{skyblue}{W^2_{21}*a_{21}} + \color{forestgreen}{W^2_{31}*a_{31}} ~ + b_{12})\\ (k = 2): ~ a_{22} = sigmoid~(~W^2_{12}*a_{11} + W^2_{22}*a_{21} + W^2_{32}*a_{31}~ + b_{22} )\\ (k = 3): ~ a_{32} = sigmoid~(~W^2_{13}*a_{11} + W^2_{23}*a_{21} + W^2_{33}*a_{31}~ + b_{32})\\ \end{align} \]

In (dreaded?) matrix notation:

\[ \begin{bmatrix} a_{12}\\ a_{22}\\ a_{32}\\ \end{bmatrix} = sigmoid~\Bigg( \begin{bmatrix} \color{red}{W^2_{11}} & \color{skyblue}{W^2_{21}} & \color{forestgreen}{W^2_{31}}\\ W^2_{12} & W^2_{22} & W^2_{32}\\ W^2_{13} & W^2_{23} & W^2_{33}\\ \end{bmatrix} * \begin{bmatrix} \color{red}{a_{11}}\\ \color{skyblue}{a_{21}}\\ \color{forestgreen}{a_{31}}\\ \end{bmatrix} + \begin{bmatrix} b_{12}\\ b_{22}\\ b_{32}\\ \end{bmatrix} \Bigg) \]

In compact notation we write, in general:

\[ A^l = \sigma\Bigg(W^lA^{l-1} + B^l\Bigg) \]

\[ a^l_k = \sigma(\sum_j W^l_{jk} * a^{l−1}_j + b^l_k) \tag{1}\]
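To make Equation 1 concrete, here is a small sketch of a single layer's forward pass in plain JavaScript, using arrays rather than a matrix library. The name `layerForward` is our own; each row of `W` holds the incoming weights of one neuron in layer \(l\), matching the matrix laid out above.

```js
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// One layer of the MLP: A_l = sigmoid(W_l * A_(l-1) + B_l).
// W is an array of rows; row k holds the incoming weights of neuron k in layer l.
function layerForward(W, A_prev, B) {
  return W.map((row, k) => {
    let z = B[k];
    for (let j = 0; j < row.length; j++) {
      z += row[j] * A_prev[j];
    }
    return sigmoid(z);
  });
}

// Example: three neurons in layer 1 feeding three neurons in layer 2
const A1 = [0.2, 0.7, 0.1];
const W2 = [
  [0.9, 0.3, 0.4], // weights into neuron 1 of layer 2
  [0.2, 0.8, 0.2], // weights into neuron 2 of layer 2
  [0.1, 0.5, 0.6], // weights into neuron 3 of layer 2
];
const B2 = [0.1, 0.1, 0.1];
console.log(layerForward(W2, A1, B2)); // the three activations of layer 2
```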

Wait, But Why?

  • The “vanilla” perceptron was a big advance in AI and learning. However, it was soon realized that it can only make classification decisions for data that are linearly separable.
  • Including a differentiable non-linearity in the activation block allows us to deform the coordinate space in which the data points are mapped.
  • This deformation may offer views of the data in which the categories become separable by an n-dimensional plane.
  • This idea is also used in a machine learning algorithm called Support Vector Machines.

MLPs in Code

  • Using p5.js
  • Using R
  • Using torch
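As a starting point, here is a minimal sketch of a complete MLP forward pass in plain JavaScript, of the kind one could drop into a p5.js sketch. The names (`layerForward`, `mlpForward`, `randomMatrix`) are our own illustrative choices, and the weights are random placeholders rather than trained values; training them is the subject of the next section on backpropagation.

```js
// A tiny MLP: 3 inputs -> 3 hidden neurons -> 2 output neurons.
// The weights here are random placeholders; in a real network they are
// learned by backpropagation (see the next section).
function sigmoid(x) {
  return 1 / (1 + Math.exp(-x));
}

// Forward pass through one fully-connected layer
function layerForward(W, A_prev, B) {
  return W.map((row, k) =>
    sigmoid(row.reduce((z, w, j) => z + w * A_prev[j], B[k]))
  );
}

// Random weight matrix with entries in (-1, 1)
function randomMatrix(rows, cols) {
  return Array.from({ length: rows }, () =>
    Array.from({ length: cols }, () => Math.random() * 2 - 1)
  );
}

const W_hidden = randomMatrix(3, 3);
const B_hidden = [0, 0, 0];
const W_output = randomMatrix(2, 3);
const B_output = [0, 0];

function mlpForward(inputs) {
  const hidden = layerForward(W_hidden, inputs, B_hidden);
  return layerForward(W_output, hidden, B_output);
}

console.log(mlpForward([0.5, -0.2, 0.9])); // two output activations in (0, 1)
```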

References

  1. Tariq Rashid. Make Your Own Neural Network. PDF Online.
  2. MathOverflow. Intuitive Crutches for Higher Dimensional Thinking. https://mathoverflow.net/questions/25983/intuitive-crutches-for-higher-dimensional-thinking
  3. 3D MatMul Visualizer. https://bhosmer.github.io/mm/ref.html