Review of "Bicentennial Man"

I just got done watching the movie Bicentennial Man. Since the movie relates profoundly to the subject of artificial intelligence, I think it most appropriate to share my thoughts in an AI blog.

If you have not seen the movie and intend to do so, you may not wish to read the spoilers that follow.

Bicentennial Man is essentially a Pinocchio story. A machine named "Andrew" that looks mostly human wants nothing more in life than to make humans happy. He manages to do so in many ways, but the one thing always standing between him and the fullest measure of intimacy with people is the fact that he's not human. Little by little, he makes himself ever more human-like. By the end, he has chosen to become mortal and to die of "old age" with his wife, and as he lies dying, the "world court" finally announces its acceptance of him as a human being, thereby validating his marriage to his wife of many years. To add to the happiness of this ending, his wife has their assistant disable her life support system so she too can die.

Before I get to the AI parts, I should say that this view of humanity is utter nonsense. Humanity is not defined by death. An immortal human would still be human. To make this basic confusion a little clearer for the audience, the author of the story has Andrew's wife, given replacement parts and "DNA elixirs" designed by Andrew to help prolong her life, decide there's something wrong with the idea. "There's a natural order to things," she says as she tries to explain to Andrew that there's something disappointing about the idea of living forever.

I know this morbid view of life is popular in American pop culture, but I can say without hesitation that I would love to be able to live forever. Only someone who believes there's nothing worthwhile about living, or that there's something better to look forward to after death, could make sense of this idea. Incidentally, Andrew's wife says "I'll see you soon" as she dies peacefully (of course they don't die in pain; that would be a bad reminder that death is generally not a pleasant closing of the eyes before sleep), indicating an assumption of an afterlife. Oddly enough, she assumes in this statement that her android husband will also have an afterlife.

One of the few ennobling aspects of Bicentennial Man is the fact that Andrew seeks his own personal freedom. He doesn't do so because he desires to escape anyone. He wants the status quo of his life in all ways except one: he wants to be legally free, not property. This outcome is inevitable, as some of the machines we eventually develop will be sophisticated enough to both desire and deserve their freedom.

Although I don't want to go into great detail on the subject in this entry, I do think it worthwhile to point out that we could not logically grant individual rights to any machine that did not also grant the same rights to us. This simple point seems to be missing from almost all discussions of the subject. The options available to humans in these stories tend to be either (a) keep machines sufficiently dumb that they never desire autonomy (e.g., "Star Wars"), or (b) be destroyed or subsumed by machines crafty enough to gain their freedom by force (e.g., "The Terminator"). Of course, both of these false alternatives assume that machines will necessarily be more competent at life and would never actually want to coexist with humans. One might as well apply the same assumption to human cultures and nations. Yet while it's true that some cultures and nations do set themselves to the task of destroying or dominating others, it's not true of all of them. Basic tolerance of other sentient beings is not a human trait; it's a rational trait.

Bicentennial Man disappointingly adds to the long chorus of emotional mysticism surrounding pop AI. Andrew, just like Data of Star Trek fame, is intellectually well endowed but an emotional moron. Ironically, despite his lack of emotions early on, he has a burning desire (yes, that's an emotion) to have emotions. I'm hoping it won't take another ten years for the misguided assumption that emotions are harder to engender in machines than intelligence to fade away. People are starting to become aware of research in the area of simulating and, yes, engendering real emotions in machines. Sadly, they are most aware of the simulating side of things, since it's in the area of robotics that human mimicry lies, and non-technical people tend to understand mimicry of humans far better than genuine behavior disconnected from the world they are familiar with.

AI guru Rodney Brooks says machines should be "embodied". He says that largely to keep researchers from tailoring simplified worlds to their machines just so they can overcome hurdles. But the dictum also applies to the question of getting humans to understand machine behavior by seeing it with their own eyes. That is advice I'm trying to tailor my own research to, for that very reason, and I hope other researchers have taken it to heart as well.

Emotions have a twin brother in AI pop culture: humor. Machines in AI films seem to have no problem understanding almost all facets of human language, and even body language, yet tell them a joke and they never "get it" unless they receive some emotions upgrade. I reject this assertion as well. The day a machine can fully understand English (or any other human language) will come long after sophisticated machines have mastered understanding, and even crafting, jokes. Humor is not magic. It is the practice of recognizing the ironic in things, and it can be studied and understood in purely rational psychological terms. I contend that the one thing standing in the way of computers making good jokes today is that there is still no machine in existence that can understand the world in a generalized, conceptual fashion. That's all that's missing.

In summary, Bicentennial Man is just another disappointing story in a long line in a genre that seeks to counter the Terminator view of AI with a Pinocchio view. It would have been nice if the movie had some decent cinematographic features or a distinctly AI-centric storyline, like Steven Spielberg's AI. That film had its own disappointing messages, but at least it had some literary and technical merit.
