Close your eyes and listen to the brain.
What does it sound like?
Scientists did just that: they hooked amplifiers up to electrodes and listened to individual neurons fire, and those sounds helped them discover how our eyes form an image of the outside world.
Where would you start if you wanted to model this process?
One thing that is generally helpful is to first identify the different components of a system. We have:
(1) the image in front of us
(2) some representation of the neuron
(3) an output: the image we perceive
How should we put those pieces together?
But what is wrong with this simple arrangement? In reality, it's probably more like:
But the brain is made up of many neurons. How would we model just one?
Now what if we do this for every neuron? Then we would have a whole picture.
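To make this concrete, here is a minimal sketch of that idea in Python, assuming a simple linear model of a neuron: each neuron looks at its own slice (patch) of the picture, weights every pixel, sums them up, and never fires "negatively." All names, sizes, and weights here are illustrative, not a real model of the brain.

```python
import numpy as np

rng = np.random.default_rng(0)

def neuron_response(patch, weights):
    """Weighted sum of pixel intensities, rectified so firing is never negative."""
    return max(0.0, float(np.sum(patch * weights)))

image = rng.random((8, 8))             # a toy "picture" in front of us
weights = rng.standard_normal((4, 4))  # one neuron's weighting of its patch

# Tile the image: each neuron gets its own 4x4 slice of the picture.
responses = []
for r in range(0, 8, 4):
    for c in range(0, 8, 4):
        responses.append(neuron_response(image[r:r + 4, c:c + 4], weights))

print(responses)  # four numbers, one per neuron/slice
```

Putting the per-slice responses side by side is the "whole picture" idea: many small neurons, each reporting on its own piece of the scene.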
The next question that arises is: How does each neuron know which slice of the picture to look at?
Well, what are the different components of a picture? At its most basic, a picture is made up of regions of light and color, and the edges that separate them.
Edges seem like they would give us more information about a picture, so let's go down that route. What is an edge, and how can we break that concept down? A major component of an edge is its orientation.
So maybe each neuron has an orientation it prefers. And all of these neurons work together to create one picture.
So we can take lots of bars with different orientations, run them through a Gabor filter (a sinusoidal wave windowed by a Gaussian bell, which responds much like the orientation-selective cells in our visual system), and see which orientation produces the strongest response.
Plotting the response at each orientation gives us a tuning curve looking like this.
Thus, this neuron’s preferred orientation is ~40 degrees. If we do this for every neuron, we’ll have a starting model of how our brain sees edges.
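The experiment above can be sketched in a few lines of Python. This is a hand-rolled version under simple assumptions: a standard Gabor function (cosine grating windowed by a Gaussian), a bright bar on a dark background, and the same angle convention for both, so the peak of the tuning curve should land at the filter's preferred angle. All parameter values (size, sigma, frequency) are illustrative.

```python
import numpy as np

def gabor(theta, size=21, sigma=3.0, freq=0.25):
    """A Gabor filter: a cosine grating at angle theta, windowed by a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the grating
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def bar(theta, size=21, width=1):
    """A bright bar through the center, using the same angle convention as gabor()."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return (np.abs(xr) <= width).astype(float)

preferred = np.deg2rad(40)        # suppose this model neuron favors ~40 degrees
filt = gabor(preferred)

# Show the filter bars at many orientations and record its response to each.
angles = np.arange(0, 180, 10)
responses = [float(np.sum(filt * bar(np.deg2rad(a)))) for a in angles]

best = int(angles[int(np.argmax(responses))])
print(best)  # the bar orientation that most excites this filter
```

The list `responses` is the tuning curve from the plot above: it rises as the bar rotates toward the filter's preferred orientation and falls away on either side.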
You just learned a few fundamentals about how your retina sends signals to your brain, creating a picture of the world in front of you.
To come: code showing how you might model this.