Abstraction in neuron banks


On an exhilarating walk with my wife, we discussed how to build on the lessons I learned from my Pattern Sniffer project and its "neuron bank", documented in my previous blog entry. There are loads of things to do, and it wasn't obvious how to squeeze more value out of the little I've done so far. But it finally became apparent.

One thing I was not happy about with Pattern Sniffer is that the world it perceives is "pure": there is just one pattern to perceive at a time. The world we perceive is rarely like this. As I walk along, I hear a bird singing, a car, and a lawn mower at the same time, and I am aware of each separately. Clearly, there is a lot of overlap in the raw information, yet I'm able to pick these things apart and be aware of all three at once. Pattern Sniffer could see two things going on in its tiny 5 x 5 pixel visual field, but it would see them as a single pattern. This is the kind of sterile world so many AI systems live in, because the experimenters don't know how to rise above this problem. Yet rising above it is a requirement if we want machines that can exist at the "perceptual level" of intelligence, and not just the "sensory level".

I said in my previous blog entry that my neurons' dendrites had a "care" property, but that I hadn't made use of it yet. My vision was that it would play an important role in recognizing patterns in a more abstract way, but I didn't yet know how. I still need to get to work and document my results, but first I wanted to capture some of the thoughts we came up with that I can now practically explore.

As we walked, I pointed at a car and explained that somehow, I'm able to "mask out" all the not-car parts of the scene and focus only on the car part. It's very hard to explain what that means, but I tried to relate it in terms of my neuron banks. Consider the "left bar" pattern:

"Left Bar" pattern.

What if we had a neuron in a bank that could recognize this pattern? But let's say I have another neuron that's a copy of it, save for one thing: each dendrite that expects a white pixel now doesn't actually care what's in the white area. We'll represent "don't care" pixels (dendrites) with blue diagonal stripes, like so:

"Left Bar" pattern with white pixels replaced by "don't care" pixels.

In this case, I'm assuming the "care" property would be a numeric value, from 0 (don't care) to 1 (care very much), multiplied into the strength of the match on that dendrite, which ultimately contributes to the neuron's total match score. Now let's say the neuron bank is confronted with a perfect left bar pattern. Clearly, the neuron with the "solid" left bar pattern, with all dendrites having care = 1, will get a stronger match than the neuron with the "masked" version of the left bar pattern, because the don't-care dendrites will not contribute positively to the match score. So if only one neuron gets to "win" this matching game, the neuron with the solid left bar pattern will always win.

An exact match trumps a masked match.
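To make this concrete, here's a rough Python sketch of one scoring rule that behaves this way. The formula and the penalty value are my own placeholders, not settled design or the actual Pattern Sniffer code:

```python
# Assumed scoring rule (a placeholder, not Pattern Sniffer's actual code):
# a matching dendrite contributes +care, a mismatching one contributes
# -care * MISMATCH_PENALTY, and a "don't care" dendrite (care = 0)
# contributes nothing either way.
MISMATCH_PENALTY = 5  # chosen so a few mismatches outweigh many matches

def score(expected, care, image):
    """Care-weighted match score of a neuron's pattern against a 5 x 5 image."""
    return sum(c * (1 if e == i else -MISMATCH_PENALTY)
               for e_row, c_row, i_row in zip(expected, care, image)
               for e, c, i in zip(e_row, c_row, i_row))

# 1 = black pixel, 0 = white pixel
left_bar    = [[1, 0, 0, 0, 0] for _ in range(5)]
solid_care  = [[1, 1, 1, 1, 1] for _ in range(5)]  # care about every pixel
masked_care = [[1, 0, 0, 0, 0] for _ in range(5)]  # care only about the bar

# Shown a perfect left bar, the solid neuron wins outright:
print(score(left_bar, solid_care, left_bar))   # 25
print(score(left_bar, masked_care, left_bar))  # 5
```

The masked neuron tops out at 5 because its twenty don't-care dendrites can never add to its score.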

But now let's say we showed our neuron bank an "L" shaped pattern. The "masked" left bar pattern is going to fare better than the "solid" left bar, like so:

The "don't care" pixels don't get penalized by the "lower bar" part.

Now let's say we also had "bottom bar" neurons, in both solid and masked versions. Things get interesting with the "L" pattern. Let's say we even have a neuron that has learned the solid "L" pattern. The following illustrates these variations:

The "L" neuron has the best match, followed by the masked left and bottom bar.
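We can check this ordering with the same sketched scoring rule as before (matches contribute +care, mismatches -care times a penalty, don't-cares nothing; the penalty value is a placeholder), ranking all five neurons against the "L" input:

```python
# Same assumed scoring rule as above; the penalty value is a placeholder.
MISMATCH_PENALTY = 5

def score(expected, care, image):
    return sum(c * (1 if e == i else -MISMATCH_PENALTY)
               for e_row, c_row, i_row in zip(expected, care, image)
               for e, c, i in zip(e_row, c_row, i_row))

left_bar   = [[1, 0, 0, 0, 0] for _ in range(5)]
bottom_bar = [[0, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]]
ell        = [[1, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]]  # left + bottom bars
all_care   = [[1, 1, 1, 1, 1] for _ in range(5)]

neurons = {
    "solid L":           (ell, all_care),
    "solid left bar":    (left_bar, all_care),
    "masked left bar":   (left_bar, [[1, 0, 0, 0, 0] for _ in range(5)]),
    "solid bottom bar":  (bottom_bar, all_care),
    "masked bottom bar": (bottom_bar, [[0, 0, 0, 0, 0]] * 4 + [[1, 1, 1, 1, 1]]),
}

for name, (expected, care) in sorted(neurons.items(),
                                     key=lambda kv: -score(*kv[1], ell)):
    print(name, score(expected, care, ell))
# solid L 25
# masked left bar 5
# masked bottom bar 5
# solid left bar 1
# solid bottom bar 1
```

The solid bars lose most of their score to the heavy mismatch penalty on the arm they didn't expect, while the masked bars simply ignore it.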

OK, so if we already have a neuron with a strong match on the "L" pattern, what good are the masked left and bottom bars? Here's where having a neuron hierarchy comes in handy. If we are regularly seeing left bars, bottom bars, and L patterns, a higher level neuron bank could potentially see that the masked-pattern neurons match more things than the solid-pattern neurons do and thus find them to be more generally useful than the specific-pattern neurons. It could then reward them by encouraging them to gain confidence, even though they are not the best matches.
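Here's a toy sketch of that "general usefulness" signal: run a stream of stimuli we regularly see past the bank, count how many distinct stimuli each neuron matches, and bump the confidence of the broad matchers. The threshold, starting confidence, and reward size are all placeholder numbers, and the scoring rule is the same assumed one as above:

```python
# Sketch of a higher-level "usefulness" signal. Threshold, starting
# confidence, and reward size are all placeholder numbers.
MISMATCH_PENALTY = 5  # same assumed scoring rule as before

def score(expected, care, image):
    return sum(c * (1 if e == i else -MISMATCH_PENALTY)
               for e, c, i in zip(expected, care, image))

# Flattened 5 x 5 patterns: 1 = black, 0 = white.
left   = tuple(1 if col == 0 else 0 for row in range(5) for col in range(5))
bottom = tuple(1 if row == 4 else 0 for row in range(5) for col in range(5))
ell    = tuple(max(l, b) for l, b in zip(left, bottom))

neurons = {                               # name: (pattern, care mask)
    "solid left":    (left, (1,) * 25),
    "masked left":   (left, left),        # care only where the bar is
    "solid bottom":  (bottom, (1,) * 25),
    "masked bottom": (bottom, bottom),
}

stimuli = [left, bottom, ell]  # patterns the bank "regularly sees"
THRESHOLD = 4                  # a match must clear this to count
REWARD = 0.1                   # confidence bump per distinct stimulus matched

confidence = {}
for name, (pattern, care) in neurons.items():
    hits = sum(1 for s in stimuli if score(pattern, care, s) >= THRESHOLD)
    confidence[name] = 0.5 + REWARD * hits  # everyone starts at 0.5

# Each masked neuron matches two stimuli (its own bar plus the L), each
# solid neuron only one, so the masked neurons end up more confident.
print(confidence)
```

Notice the reward goes by breadth of matching, not by who won any single round, which is exactly why the masked neurons come out ahead despite never being the best match on the L.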

One thing my current neuron banks assume is that there is a single best match, and that only that one neuron gets rewarded for matching a pattern, while all the others may in fact be penalized. Yet this doesn't seem to fit how our brains work, at some level. Remember: I said I can hear and be aware of a bird singing, a car, and a lawn mower at the same time. That's what I want my software to do, too. See, if we're regularly seeing left bars and bottom bars, it may just be that, when we see an "L" in the input, it's actually just a left bar and a bottom bar, seen together. That's another interpretation.

Being able to explain the total input in terms of multiple perceived stimuli must be more "satisfying" to certain parts of our brains than alternative explanations that see the input as all part of a single cause that is not currently known. Being able to engender this could bring a machine a lot closer to the perceptual level of intelligence.
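One crude way to sketch this multi-cause interpretation is a greedy loop: take the best-matching neuron, mark the black pixels it accounts for as explained, and try again with the remaining neurons on what's left. The threshold and scoring rule are the same placeholders as before, and this is just one plausible mechanism, not a claim about how it should ultimately work:

```python
# Greedy multi-cause sketch: repeatedly take the best-matching neuron,
# mark the black pixels it accounts for as explained, and try again with
# what's left. Threshold and scoring rule are placeholders, as before.
MISMATCH_PENALTY = 5
THRESHOLD = 4

def score(expected, care, image):
    return sum(c * (1 if e == i else -MISMATCH_PENALTY)
               for e, c, i in zip(expected, care, image))

# Flattened 5 x 5 patterns: 1 = black, 0 = white.
left   = tuple(1 if col == 0 else 0 for row in range(5) for col in range(5))
bottom = tuple(1 if row == 4 else 0 for row in range(5) for col in range(5))
ell    = tuple(max(l, b) for l, b in zip(left, bottom))

# Only masked bar neurons are available; nothing knows the whole "L".
neurons = {
    "masked left bar":   (left, left),
    "masked bottom bar": (bottom, bottom),
}

def explain(image, available):
    """Return a list of neuron names that jointly account for the image."""
    causes = []
    unexplained = {i for i, p in enumerate(image) if p == 1}
    while unexplained and available:
        best = max(available, key=lambda n: score(*available[n], image))
        pattern, care = available[best]
        covered = {i for i in unexplained if care[i] and pattern[i] == image[i]}
        if score(pattern, care, image) < THRESHOLD or not covered:
            break
        causes.append(best)
        unexplained -= covered
        available = {n: v for n, v in available.items() if n != best}
    return causes

print(explain(ell, neurons))   # ['masked left bar', 'masked bottom bar']
print(explain(left, neurons))  # ['masked left bar']
```

The L gets fully explained by two familiar causes without any neuron having to learn it as a new pattern, which is the "satisfying" outcome I'm after.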

So that's what I'm probably going to study next. One challenge will be figuring out how to deal with allowing multiple neurons to be rewarded for doing the right thing in a given moment without encouraging neurons to learn redundant information. We'll see.

