
Introduction to Neural Networks (The Perceptron)

The perceptron

The perceptron is the simplest neural unit:

  1. take inputs
  2. compute weighted sum
  3. add bias
  4. apply activation

Mathematically:

z = w·x + b

output = activation(z)
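The four steps above can be sketched in a few lines of Python (the function name, sample weights, and step activation are illustrative choices, not a fixed API):

```python
def perceptron(x, w, b):
    """One perceptron: weighted sum of inputs, plus bias, through a step activation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b  # z = w·x + b
    return 1 if z >= 0 else 0                     # step activation

# Fires when the weighted sum reaches the (negated) bias threshold:
print(perceptron([1.0, 0.0], [0.5, 0.5], -0.25))  # → 1
print(perceptron([0.0, 0.0], [0.5, 0.5], -0.25))  # → 0
```

The step function here is the classic perceptron activation; modern networks swap it for differentiable functions like sigmoid or ReLU so gradients can flow.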



```mermaid
flowchart LR
  x1[x1] --> N["Σ (weights) + bias"]
  x2[x2] --> N
  x3[x3] --> N
  N --> A[Activation]
  A --> y[Output]
```


What it can do

A single perceptron can learn a linear decision boundary.

That means:

  • it can separate data that is linearly separable
  • it cannot solve XOR on its own, because XOR is not linearly separable (a multi-layer network is needed)
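To make "linearly separable" concrete, here is a minimal sketch of the classic perceptron learning rule trained on the AND gate, which a single unit can separate (the function name, learning rate, and epoch count are illustrative):

```python
def train_perceptron(data, epochs=20, lr=0.1):
    """Perceptron learning rule: nudge weights by (target - prediction) * input."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in data:
            z = w[0] * x[0] + w[1] * x[1] + b
            pred = 1 if z >= 0 else 0
            err = target - pred          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
preds = [1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else 0 for x, _ in AND]
print(preds)  # → [0, 0, 0, 1]
```

Swap `AND` for the XOR truth table and the same loop never converges: no single line can put (0,1) and (1,0) on one side and (0,0) and (1,1) on the other.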

Key terms

  • weights: importance of each input
  • bias: shift term
  • activation: non-linear function

Mini-checkpoint

Why do we need activation functions?

  • Without them, the network is just a linear model (even with many layers).
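The collapse can be checked numerically: composing two layers with no activation is the same as one layer whose weight matrix is the product of the two (the matrices below are arbitrary examples):

```python
def matvec(M, v):
    """Matrix-vector product."""
    return [sum(m * xi for m, xi in zip(row, v)) for row in M]

def matmul(A, B):
    """Matrix-matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

W1 = [[1.0, 2.0], [0.0, 1.0]]   # "layer 1" weights (illustrative)
W2 = [[0.5, -1.0], [2.0, 0.0]]  # "layer 2" weights (illustrative)
x = [3.0, -1.0]

two_layers = matvec(W2, matvec(W1, x))   # W2 @ (W1 @ x)
one_layer = matvec(matmul(W2, W1), x)    # (W2 @ W1) @ x
print(two_layers == one_layer)  # → True: the stack is a single linear map
```

Inserting a non-linear activation between the layers breaks this equivalence, which is what lets depth add expressive power.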

If this helped you, consider buying me a coffee ☕
