Modeling a Neuron in micrograd (As Explained by Karpathy)

Hi there! I'm Shrijith Venkatramana, founder of Hexmos. Right now, I’m building LiveAPI, a tool that makes generating API docs from your code ridiculously easy.

Modeling a Neuron

Neuron Model

In serious neural network implementations, we model the neuron in the following way:

  1. Input x0 (axon)
  2. Weight w0 (synapse)
  3. "Influence" x0*w0 (dendrite)
  4. Sum of "influences" = x0*w0 + x1*w1 + ... (cell body)
  5. Bias b

The above leads to the cell body expression:

\sum (x_i \cdot w_i) + b
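To make this concrete with the example values used in the first code block below (x1 = 2.0, w1 = -3.0, x2 = 0.0, w2 = 1.0, b = 6.7):

(2.0 * -3.0) + (0.0 * 1.0) + 6.7 = -6.0 + 0.0 + 6.7 = 0.7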

We also have:

  1. Activation function - a squashing function (e.g., tanh or sigmoid)

Activation Function

  1. The output along the axon is then:
f(\sum (x_i \cdot w_i) + b)
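As a rough sanity check, here is the same forward pass in plain Python (a sketch using the example values from the micrograd code later in this post, with the 6.8813735870195432 bias from the tanh example):

import math

x1, x2 = 2.0, 0.0          # inputs
w1, w2 = -3.0, 1.0         # weights
b = 6.8813735870195432     # bias

n = x1*w1 + x2*w2 + b      # cell body: sum of influences plus bias
o = math.tanh(n)           # activation function applied to the cell body
print(n, o)                # n ≈ 0.8814, o ≈ 0.7071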

Representing the Model Neuron (defined above) in micrograd
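The code below assumes the Value class and the graphviz-based draw_dot helper built earlier in the lecture (both are also in the micrograd repo). As a rough reference, at this point Value looks something like this minimal sketch:

class Value:
    # minimal sketch of micrograd's Value as built up to this point in the lecture
    def __init__(self, data, _children=(), _op='', label=''):
        self.data = data
        self.grad = 0.0            # filled in manually during backprop below
        self._prev = set(_children)
        self._op = _op
        self.label = label

    def __repr__(self):
        return f"Value(data={self.data})"

    def __add__(self, other):
        return Value(self.data + other.data, (self, other), '+')

    def __mul__(self, other):
        return Value(self.data * other.data, (self, other), '*')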

# inputs x1, x2
x1 = Value(2.0, label='x1')
x2 = Value(0.0, label='x2')

# weights w1, w2
w1 = Value(-3.0, label='w1')
w2 = Value(1.0, label='w2')

# bias of the neuron
b = Value(6.7, label='b')

x1w1 = x1 * w1; x1w1.label = 'x1*w1'
x2w2 = x2 * w2; x2w2.label = 'x2*w2'
x1w1x2w2 = x1w1 + x2w2; x1w1x2w2.label = 'x1*w1 + x2*w2'

n = x1w1x2w2 + b; n.label = 'n'

draw_dot(n)

Result:

Neuron in micrograd

Implementing tanh into Value (for the Activation Function)

We have the following tanh formula:

\tanh(x) = \frac{e^{2x} - 1}{e^{2x} + 1}

We can implement the function as follows:

import math

class Value:
    ...

    def tanh(self):
        # tanh(x) = (exp(2x) - 1) / (exp(2x) + 1)
        x = self.data
        t = (math.exp(2*x) - 1) / (math.exp(2*x) + 1)
        out = Value(t, (self, ), 'tanh')
        return out
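A quick usage check: feeding in the n value that the example below produces (≈ 0.8814) gives the o ≈ 0.7071 seen in the graph:

print(Value(0.8813735870195432).tanh().data)  # ≈ 0.7071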

We'll add a new node o which is the tanh(n):

# inputs x1, x2
x1 = Value(2.0, label='x1')
x2 = Value(0.0, label='x2')

# weights w1, w2
w1 = Value(-3.0, label='w1')
w2 = Value(1.0, label='w2')

# bias of the neuron (a specific value picked so the numbers in this example come out nicely)
b = Value(6.8813735870195432, label='b')

x1w1 = x1 * w1; x1w1.label = 'x1*w1'
x2w2 = x2 * w2; x2w2.label = 'x2*w2'
x1w1x2w2 = x1w1 + x2w2; x1w1x2w2.label = 'x1*w1 + x2*w2'

n = x1w1x2w2 + b; n.label = 'n'

o = n.tanh(); o.label = 'o'

draw_dot(o)

And we get:

tanh demo

Derivative of o - Derivative of tanh

The formula for the derivative of tanh is:

\frac{d}{dx}\tanh(x) = 1 - \tanh(x)^2

So, we want to find out do/dn:

do/dn = 1 - tanh(n)**2 = 1 - o**2

We know that do/do = 1, so o.grad = 1.

To find do/dn, we plug in the value of o from the graph above (o ≈ 0.7071):

do/dn = 1 - o**2 ≈ 1 - 0.7071**2 ≈ 0.5

Therefore:

n.grad = 0.5

Getting all the backprop values calculated (manually)

We leverage the patterns we learned previously about how backprop flows through addition and multiplication to quickly fill in grad for each node:

o.grad = 1
n.grad = 1 - o.data**2

## addition - grad just flows through to previous stages
x1w1x2w2.grad = n.grad
b.grad = n.grad
x2w2.grad = x1w1x2w2.grad
x1w1.grad = x1w1x2w2.grad

## multiplication - element.grad = sibling.data * next.grad
x2.grad = w2.data * x2w2.grad
w2.grad = x2.data * x2w2.grad
x1.grad = w1.data * x1w1.grad
w1.grad = x1.data * x1w1.grad
draw_dot(o)
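Plugging in the actual numbers (o ≈ 0.7071, so n.grad = 1 - o**2 ≈ 0.5), the gradients work out to roughly:

o.grad        = 1.0
n.grad        ≈ 0.5
b.grad        ≈ 0.5
x1w1x2w2.grad ≈ 0.5
x1w1.grad     ≈ 0.5
x2w2.grad     ≈ 0.5
x1.grad = w1.data * 0.5 ≈ -1.5
w1.grad = x1.data * 0.5 ≈  1.0
x2.grad = w2.data * 0.5 ≈  0.5
w2.grad = x2.data * 0.5 ≈  0.0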

Result:

Backprop result
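One way to double-check a gradient like x1.grad is a quick numerical estimate in plain Python: nudge x1 by a small h, re-run the forward pass, and compare the slope to the value above (a sketch, independent of the Value class):

import math

def forward(x1, x2=0.0, w1=-3.0, w2=1.0, b=6.8813735870195432):
    # same forward pass as above, in plain Python
    n = x1*w1 + x2*w2 + b
    return math.tanh(n)

h = 0.0001
x1 = 2.0
print((forward(x1 + h) - forward(x1)) / h)  # ≈ -1.5, matching x1.grad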

Reference

The spelled-out intro to neural networks and backpropagation: building micrograd - YouTube
