
Sunday, January 23, 2005

The fallacy of bigger brains


I recently read a great article in the February 2005 issue of Scientific American titled "The Littlest Human", by Kate Wong. Scientists have been studying a newly found member of the Homo evolutionary family, of which Homo sapiens is the last surviving species. They have named it Homo floresiensis, after the Indonesian island of Flores on which the first known remains were discovered. As the artist's rendition of H. floresiensis shows, they were very small creatures. In fact, they were about the size of the australopithecines (remember Lucy?), the line from which the Homo tree is thought to have emerged, making them the smallest members of Homo yet found. They appear to have existed as recently as 18,000 years ago, long after the demise of the Neanderthals, believed to have been the last of the Homo line to die out, leaving only us.

While I have a deep interest in the origin of the human species, what made this story particularly interesting in the context of AI is the question of intelligence that it has raised in the scientific community. Wong describes the creatures some scientists have affectionately dubbed "hobbits" as having brains the size of a grapefruit, yet points out that there is evidence that these hobbits were making sophisticated stone tools, even though some species of Homo with larger brains were not. The obvious question, then: is intelligence a matter of brain size?

Wong carefully points out that scientists of various persuasions are weighing in on this question and that there is as yet no clear answer. I think the answer is obvious, though: intelligence is a reflection of structure, not mass. Wong points out, for example, that the people credited with being among the brightest of humanity span the full range of human brain sizes. In one example, two well-known intellectuals are cited, one of whom had half the cranial volume of the other. The smaller-brained of the two might as well have been missing an entire hemisphere.

So why should I care as an AI researcher? Because for years people have been telling us that the reason we don't have intelligent machines yet is that today's computers are too slow. It's just a matter of time, they tell us, until computers have enough transistors, memory, or whatever other basic physical characteristic we care to use as a measure of computing power. When we reach that threshold, computers will somehow magically wake up and start cracking jokes and deciding whether to enslave humans or just kill them altogether.

Equating a greater number of computing units with greater intelligence is misguided. If brain size were the key in "wet" life, then why don't creatures with much larger brains than ours (certain whales, for example) exhibit at least our own levels of wit and creativity? I am fully convinced that we could have had intelligent machines decades ago. I am further convinced that multiplying today's computing power by ten or a hundred times will not automatically bring them about, either. Google is a monster of computing power and it's still not "smart". Don't plan on having a computer of your own with as much computing power as Google any time soon, by the way.

The actual question is one of structure and complexity. This concept is illustrated over and over again throughout the history of computer science. Computer games illustrate it well. When games like Doom and Tomb Raider came out in the nineties, people were astonished at what a whole new generation of computer graphics could do on the average home computer. What people may not remember now is that 3D graphics engines capable of rendering graphics just as compelling had been around for decades. Few could use them, because few had the expensive hardware needed to run them fast enough. Did these games come with hardware upgrades? Of course not. What they had was a set of ingenious new algorithms for generating compelling 3D graphics. The same thing happened when people started streaming audio and video over the Internet. The first systems were power hungry, requiring massive bandwidth and expensive hardware. Now the average user can get the same results with lower bandwidth and a cheap PC, thanks to some incredible compression algorithms and other ingenious techniques invented by companies like RealNetworks.
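The same point can be made with a toy example of my own (not from the article): two programs computing the identical result, where the difference between hopeless and instant is entirely in the structure of the algorithm, not the speed of the machine running it.

```python
# Illustration: two ways to compute the nth Fibonacci number.
# The naive recursive version makes exponentially many calls; the
# iterative version makes only n loop passes. No hardware upgrade can
# rescue the first approach for large n, but a structural change can.

def fib_naive(n, counter):
    """Naive recursion; counter[0] tallies the total number of calls."""
    counter[0] += 1
    if n < 2:
        return n
    return fib_naive(n - 1, counter) + fib_naive(n - 2, counter)

def fib_fast(n):
    """Iterative version: n additions, no redundant work."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

calls = [0]
assert fib_naive(20, calls) == fib_fast(20) == 6765
print(calls[0])  # tens of thousands of calls for the naive version
```

Both functions agree on the answer; only their structure differs, and that difference dwarfs any plausible gain from faster transistors.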

AI researchers like to blame our failure to deliver on the goals we've been boasting about for decades on many things, but insufficient hardware is our favorite whipping boy. Let's be honest, though, and tell the world that we just haven't found the right algorithms yet.

Funny that rocket science should be the standard against which we measure engineering complexity. AI research sometimes seems to make rocket science look like a weekend crafts project. Everyone who has contributed, and continues to do so, deserves credit for doing the difficult and pursuing the seemingly impossible. To anyone who has thought of giving up -- especially those who wonder whether they should even bother getting started in our largely dead field -- I want to encourage you to keep the faith. I am convinced you have more than enough computing power in your own PC to give life to an intelligent mind. It's just a question of your creativity and persistence, and it will happen. Don't give up.
