Emotional and moral tagging of percepts and concepts

Back in April, I suffered head trauma that almost killed me and landed me in the hospital for, thankfully, only a day. My wife, the sweet prankster that she is, went to a newsstand and got me copies of Scientific American Mind and Discover Presents: The Brain, an Owner's Manual (a one-off, not a periodical). The former had a picture of a woman with the upper portion of her head as a hamburger and the latter a picture of a head with its skullcap removed revealing the brain. So I got a good laugh and some interesting reading.

I'm reading an article now in The Brain titled "Conflict". The basic position author Carl Zimmer offers is encapsulated in the subtitle: morality may be hardwired into our brains by evolution. In my opinion, there is some merit to this idea, but I don't subscribe wholeheartedly to all of what the article promotes. Zimmer argues that the parts of our brains that respond emotionally to moral dilemmas are different from the parts that respond rationally and that, in fact, the emotional responses often happen faster than the intellectual ones. He further contends that our moral judgments come out of these more primitive, instant emotional responses. I have thought this as well, but not for the reason Zimmer proffers: that moral reasoning is automatic and built in.

I'd agree that, yes, we react automatically and almost instantly, emotionally and moralistically, before we start seriously analyzing a moral question. But I would argue that it's because one's "moral compass" is programmable, yet largely knee-jerk. Most humans may be born with some basic moral elements, like empathy and an aversion to seeing or allowing other people to suffer. But we can readily reprogram this mechanism to respond instantly to things evolution obviously didn't plan for. For example, most Americans recognize the danger smoking poses to health. So smoking around other people comes with an understanding that it endangers their health, often without their consenting to the risks. That knowledge quickly becomes associated with the "second-hand smoke" concept. I would argue that people with this knowledge respond instantly, emotionally and moralistically, when the subject of second-hand smoke comes up, regardless of the content of the conversation in which it's referenced. Even before the sentence is completely uttered, moral judgment and emotional indignation are already taking hold in the listener's mind. Why is this?

The article just prior to this one, "Fear" by Steven Johnson, points out that the amygdala is activated when the brain responds to "fear conditioning", as when a rat is trained to associate a tone with an electric shock.

Johnson cites a fascinating case of a woman who suffered a tragic loss of short-term memory. Her doctor could leave for 15 minutes, return, and the woman would not recognize him or recall having any history or relation with him. Each time they met, he would shake her hand as part of the greeting ritual. One day, he concealed a tack in his hand when he went to shake hers. After that, while she still did not consciously recognize the doctor, she no longer wished to shake his hand. In experiments with rats, researchers found that removing the part of the neocortex that remembers events did not stop the rats from responding to fear conditioning. On the other hand, removing the amygdala did seem to take away the automatic fear reaction they had learned, even though they could still remember the events associated with their fear conditioning.

Johnson leaves open the question of whether the amygdala is actually storing memories of events for later responses versus simply being a way of "tagging" memories stored in other parts of the brain. My opinion is that tagging makes more sense. Imagine some part of your cortex stores the salient facts associated with some historical event that was traumatic. If the amygdala has connections to that portion of the cortex, they could be strengthened in such a way that anything that triggers memories of that event would also activate the amygdala via that strong link. If the amygdala is really just a part of the brain that kicks off the emotional responses the body and mind undergo, this seems a really simple mechanism for connecting thoughts with emotions.
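The tagging idea above can be sketched in a few lines of code. This is purely illustrative, under the assumption that cortical memories don't store fear themselves but hold a weighted link to an amygdala-like node that fires the emotional response; the class name, memory labels, and learning-rate update rule are all invented for the sketch.

```python
# Sketch: memories carry weighted links to an "amygdala" node.
# Conditioning strengthens the link; recall fires the emotion via
# the link, not via the memory's factual content. Hypothetical model.

class AmygdalaTag:
    """Weighted links from memories to an emotional response."""
    def __init__(self):
        self.links = {}  # memory id -> link strength

    def condition(self, memory_id, intensity, rate=0.5):
        # Strengthen the link each time the memory co-occurs with an
        # aversive event (a crude incremental update toward intensity).
        old = self.links.get(memory_id, 0.0)
        self.links[memory_id] = old + rate * (intensity - old)

    def recall(self, memory_id):
        # Recalling the memory activates the emotional response in
        # proportion to the learned link strength -- the "tag".
        return self.links.get(memory_id, 0.0)

amygdala = AmygdalaTag()
amygdala.condition("handshake", intensity=1.0)  # the concealed tack
amygdala.condition("handshake", intensity=1.0)  # reinforced
print(amygdala.recall("handshake"))   # strong learned fear (0.75)
print(amygdala.recall("breakfast"))   # untagged memory (0.0)
```

Note the asymmetry this captures: wiping the memory store (the dict of facts elsewhere in the system) would leave the links intact, while wiping the links removes the fear response even if the facts survive, matching the rat experiments described above.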

In the hypothetical example I gave earlier, there could be a strong link between the "second-hand smoke" concept and the amygdala (or some other part of the brain associated with anger). So anything that activates those neurons would also trigger an instant emotional response that would become part of the context of the conversation or event.

I would propose including this sort of "tagging" of the contents of consciousness (or even subconsciousness) in just about any broad AI research project. Strong emotions tend to be important in mediating learning. We remember things that evoke strong emotions, after all, and more easily forget things that don't. That has implications for learning algorithms. But conversely, memories of just about any sort in an intelligent machine could come with emotional tags that help to set the machine's "emotional state", even when that low-level response seems incongruous with the larger context. For example, a statement like "we are eliminating second-hand smoke here by banning smoking in this office" might be intended to make a non-smoker happy, but the "second-hand smoke" concept, simply by being invoked, might instantly add a small anger component to the emotional soup of the listener. That way, when the mind recognizes that the statement is about a remedy, the remedy's value registers as proportional to the anger engendered by the problem.
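A toy version of this "emotional soup" mechanism might look like the following. The concepts, emotions, and weights are invented for illustration; the point is only that merely mentioning a tagged concept stirs its tags into the listener's state, before any higher-level judgment of what the sentence actually means.

```python
# Hypothetical concept -> emotional-tag table. Simply invoking a
# concept adds its components to the current emotional state,
# independent of the sentence's overall intent.
EMOTION_TAGS = {
    "second-hand smoke": {"anger": 0.3},
    "ban":               {"relief": 0.2},
}

def hear(statement, soup):
    # Each tagged concept mentioned contributes to the soup first;
    # recognizing the statement as a remedy would come later.
    for concept, tags in EMOTION_TAGS.items():
        if concept in statement:
            for emotion, weight in tags.items():
                soup[emotion] = soup.get(emotion, 0.0) + weight
    return soup

soup = hear("we are banning smoking to eliminate second-hand smoke", {})
print(soup)  # both anger (from the problem) and relief (from the remedy)
```

On this model the incongruity in the office example is expected: a sentence announcing a remedy still carries a trace of anger, and that trace is exactly what gives the remedy its felt value.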

Although I haven't talked much about moralistic tagging, per se, I guess I'm assuming that there is a strong relationship between how we respond emotionally to things and how we view their moral content. To be sure, I'm not suggesting that one's ethical judgments always (or should always) jibe with one's knee-jerk emotional reactions to things. Still, it seems this is somewhat a default for us, and not a bad starting point for thinking about how to relate moral thinking to rational thinking in machines.

Being able to tag any particular percepts or concepts learned (or even given a priori) may sound circular, mainly because it is. Emotions beget emotions, as it were. But there are obvious bootstraps. If a robot is given "pain sensors" to, say, detect damage or potential damage, that could be a source of emotional fear and/or anger.

These emotions, in addition to affecting short-term planning, could also be saved with the memory of a damage event and even any other perceptual input (e.g., location in the world or smells) available during that event. Later, recalling the event or detecting or thinking about any of those related percepts could trigger the very same emotions, thus affecting whatever else is the subject of consideration, including affecting its emotional tagging. In this way, the emotions associated with a bad event could propagate through many different facets of the machine's knowledge and "life". This may sound like random chaos -- like tracking mud into a room and having other feet track that mud into other rooms -- but I would expect there to be natural connections from state to state, provided the machine is not prone to random thinking without reason. I think putting "tracers" in such a process and seeing what thoughts become "infected" would be fascinating fodder for study.
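The "tracer" experiment described above can be sketched as a spreading-activation pass over an association graph. Everything here is hypothetical: the graph, the decay factor, and the threshold are made up, and a real system would have far richer links; the trace returned is the record of which thoughts became "infected".

```python
# Hypothetical association graph: memory/percept -> related percepts
# that were present during, or linked to, the damage event.
ASSOCIATIONS = {
    "damage event": ["loading dock", "oil smell"],
    "loading dock": ["warehouse"],
    "oil smell": [],
    "warehouse": [],
}

def propagate(source, strength, decay=0.5, threshold=0.1):
    """Spread an emotional tag (e.g. fear) through the association
    graph, decaying with distance, and return a trace of every node
    it infects and how strongly."""
    tags, frontier = {}, [(source, strength)]
    while frontier:
        node, s = frontier.pop()
        if s < threshold or tags.get(node, 0.0) >= s:
            continue  # too weak, or already tagged at least this strongly
        tags[node] = s
        for neighbor in ASSOCIATIONS.get(node, []):
            frontier.append((neighbor, s * decay))
    return tags

trace = propagate("damage event", strength=1.0)
print(trace)  # the damage event, its percepts, and their neighbors
```

The threshold is what keeps this from being mud tracked endlessly from room to room: the tag dies out a few associative hops from its source unless something re-strengthens it along the way.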

