Using your face and a webcam to control a computer


I don't normally do reviews of ordinary products. Still, I tried out an interesting one recently that makes practical use of a fairly straightforward machine vision technique that I thought worth describing.

The product is called EyeTwig and is billed as a "head mouse". That is, you put a camera near your computer monitor, aim it at your face, and run the program. Then, when you move your head left and right or up and down, the Windows cursor, normally controlled by your mouse, moves about the screen in a surprisingly intuitive and smooth fashion.

Most people would recognize the implication that this could be used by the disabled. Thinking it through, though, I realized this application is limited mainly to people who lack mobility below the neck, and many in that situation also have limited mobility of their heads. Still, a niche market is still a market. I think the product's creator sees that the real potential lies in an upcoming version that will also be useful as a game controller.

In any event, the program impressed me enough that I wondered how it works. The vendor was unwilling to tell me in detail, but I took a stab at hypothesizing how it worked and ran some simple experiments. I think the technique is fascinating in itself, and it could also be applied in kiosks, military systems, and various other interesting settings.

When I first saw how EyeTwig worked, I was impressed. I wondered what sorts of techniques it might use to recognize a face and detect that its orientation is changing. The more I studied how it behaved, though, the more I realized it uses a very simple set of techniques. I realized, for example, that it ultimately relies on 2D techniques, not 3D ones. Although the instructions say to tilt your head, I found that simply shifting my head left and right or up and down worked just as well.

Machine recognition of faces is by now a rather conventional process. My understanding is that most techniques start by searching for the eyes. Human eyes almost universally appear as two dark patches (eye sockets are usually shadowed) of similar size, roughly side by side, with a fairly consistent ratio of spacing to patch size. So programs find candidate patch pairs, assume they are eyes, and then look for the remaining facial features in relation to those patches.
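The vendor didn't confirm the algorithm, but the dark-patch-pair idea described above is simple enough to sketch. The following is a minimal, illustrative Python version of my own devising: it labels connected dark regions in a grayscale image, then keeps pairs that are similar in size, roughly side by side, and spaced in proportion to their size. All thresholds and function names here are assumptions, not anything from EyeTwig itself.

```python
import numpy as np

def dark_patches(gray, thresh=60, min_area=5):
    """Label connected regions of dark pixels; return centroids and areas."""
    dark = gray < thresh
    h, w = dark.shape
    seen = np.zeros_like(dark, dtype=bool)
    patches = []
    for sy in range(h):
        for sx in range(w):
            if dark[sy, sx] and not seen[sy, sx]:
                # iterative flood fill, 4-connectivity
                stack = [(sy, sx)]
                seen[sy, sx] = True
                pixels = []
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and dark[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(pixels) >= min_area:
                    ys, xs = zip(*pixels)
                    patches.append({"cy": sum(ys) / len(ys),
                                    "cx": sum(xs) / len(xs),
                                    "area": len(pixels)})
    return patches

def eye_pairs(patches, size_ratio=2.0, max_tilt=0.3, min_sep=2.0, max_sep=8.0):
    """Keep patch pairs that plausibly form two eyes: similar area,
    roughly side by side, spacing proportional to patch size."""
    pairs = []
    for i in range(len(patches)):
        for j in range(i + 1, len(patches)):
            a, b = patches[i], patches[j]
            big = max(a["area"], b["area"])
            small = min(a["area"], b["area"])
            if big / small > size_ratio:
                continue  # too different in size
            dx = abs(a["cx"] - b["cx"])
            dy = abs(a["cy"] - b["cy"])
            if dx == 0 or dy / dx > max_tilt:
                continue  # not roughly side by side
            scale = (a["area"] + b["area"]) ** 0.5  # rough patch "diameter"
            if not (min_sep <= dx / scale <= max_sep):
                continue  # spacing out of proportion to patch size
            pairs.append((i, j))
    return pairs
```

On a synthetic frame with two dark squares for eyes and a dark bar for a mouth, only the square pair survives the geometry tests; the mouth is rejected because it sits too far below either eye.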

[Figure: Using a white-board to simulate a face]

EyeTwig appears to be no different. In addition to finding eyes, though, I discovered that it looks for what I'll loosely call a "chin feature": a mustache, a mouth, or some other horizontal, dark feature directly under the eyes. I discovered this by experimenting with abstract drawings of the human face. My goal was to see how minimal a drawing could be and still suffice for EyeTwig to work. The figure at right shows one of the minimal designs that worked very well: a small white-board with two vertical lines for eyes and one horizontal line for a "chin". When I slid the board left and right or up and down, EyeTwig moved the cursor as expected.
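Since plain sliding of the board moved the cursor, the control step is presumably just 2D: validate that a chin-like feature sits between and below the eyes, then map the shift of the face's position in the image to a cursor movement. Here is a rough sketch of that idea; the feature coordinates, the averaging, and the `gain` sensitivity parameter are all my own assumptions about how such a mapping might work.

```python
def is_chin(left_eye, right_eye, candidate):
    """A 'chin feature' should sit horizontally between the eyes and below them.
    All arguments are (x, y) centroids in image coordinates (y grows downward)."""
    lx, ly = left_eye
    rx, ry = right_eye
    cx, cy = candidate
    return lx < cx < rx and cy > max(ly, ry)

def face_centroid(left_eye, right_eye, chin):
    """Average the three feature centroids into a single (x, y) face position."""
    x = (left_eye[0] + right_eye[0] + chin[0]) / 3.0
    y = (left_eye[1] + right_eye[1] + chin[1]) / 3.0
    return x, y

def cursor_delta(rest, current, gain=8.0):
    """Scale the 2D image-plane shift of the face since calibration
    into a cursor movement. gain is a hypothetical sensitivity setting."""
    return ((current[0] - rest[0]) * gain, (current[1] - rest[1]) * gain)
```

At startup the program would record the face position as `rest`; on each new frame, `cursor_delta` turns the displacement into cursor motion. This is consistent with my observation that translating the whole drawing, with no 3D rotation at all, was enough to drive the cursor.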

One thing that made testing the program much easier is that the border of its viewer changes between red and green to indicate whether it recognizes what it sees as a face.

In short, EyeTwig employs an ingenious yet simple technique for recognizing that a face is prominently featured in the view of an ordinary web-cam, with no special training of the software required. For someone looking to deploy practical face recognition applications, it provides an interesting illustration and technique.
