New Roamer project


I keep trying to figure out how to get started with this blog. I'm in the middle of a new research project, but I don't have much time right now to describe it in detail. Still, I think it will be worthwhile to give updates as it progresses, so the best compromise is to at least summarize the current state of the project.

For various reasons, I've called it "Roamer". One of my first design goals was something I've wanted to do for many years: to create a rich "physical" environment that can be used for AI and AL research. I've basically succeeded in that already. The environment allows for one or more "planets", like Petri dishes, each hosting its own experiment. Each planet is a two-dimensional, rectangular region populated by various barriers, force fields, and, most importantly, particles. All "critters" are composed of particles: essentially circles with distinct masses, radii, colors, and so on that act like balls zipping around the planet and interacting with the other elements. One key element is the link, a sort of spring that connects any two particles, so a critter can minimally be thought of as a collection of particles joined by springs. One may laugh at how a critter jiggles and bounces under this physics, but I find it's easy to follow what's going on as one watches. The math behind the forces involved is pretty simple, but the overall behavior is fairly convincing.
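I haven't shown Roamer's actual code here, but the particle-and-link idea can be sketched in a few lines. This is my own illustrative version, not the project's implementation; the particle fields, the damping scheme, and all parameter names are assumptions:

```python
import math

def spring_step(p1, p2, rest_len, k, damping, dt):
    """Advance two linked particles one time step.

    Each particle is a dict with position (x, y), velocity (vx, vy),
    and mass m. The link applies a Hooke's-law force along the line
    between the particles, plus simple velocity damping.
    """
    dx, dy = p2["x"] - p1["x"], p2["y"] - p1["y"]
    dist = math.hypot(dx, dy) or 1e-9            # guard against zero distance
    f = k * (dist - rest_len)                    # positive stretch pulls inward
    fx, fy = f * dx / dist, f * dy / dist
    for p, sx, sy in ((p1, fx, fy), (p2, -fx, -fy)):
        p["vx"] = (p["vx"] + sx / p["m"] * dt) * (1.0 - damping)
        p["vy"] = (p["vy"] + sy / p["m"] * dt) * (1.0 - damping)
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
```

Two particles stretched past the rest length accelerate toward each other; a whole critter is just many of these pairwise links evaluated each frame.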

On top of this "physical world", I've begun creating robotic components, derived from the basic particle class. Some sense touch and smell. Some produce thrust or induce links to act like muscles. There will soon be others offering vision, the ability to grab objects, and so on.
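The idea of deriving sensors from the particle class might look something like the following. This is purely a sketch under my own assumptions; the class names, the smell-source representation, and the falloff formula are all invented for illustration:

```python
class Particle:
    """Minimal stand-in for the base particle: a circle with a position."""
    def __init__(self, x, y, radius=1.0):
        self.x, self.y, self.radius = x, y, radius

class NoseParticle(Particle):
    """A sensing particle: reports smell strength falling off with distance."""
    def smell(self, sources):
        # sources: list of (x, y, strength) smell emitters on the planet
        total = 0.0
        for sx, sy, strength in sources:
            d2 = (sx - self.x) ** 2 + (sy - self.y) ** 2
            total += strength / (1.0 + d2)   # bounded even at zero distance
        return total
```

A thruster or muscle component would subclass the same base, overriding the per-frame update to inject force instead of reading the environment.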

I've also begun creating "brain" components. I originally made these as particles, but found that cumbersome, so I created a "brain chassis" particle that's meant to house decision-making components. The first two are the finite state machine (FSM) and the "olfactor", which recognizes the smells that the nose particles detect.
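A finite state machine as a brain component can be surprisingly small. Here's a generic sketch of the pattern (the state names, events, and actions are hypothetical wiring of my own, not Roamer's):

```python
class FSM:
    """A tiny finite state machine: maps (state, event) to (next_state, action)."""
    def __init__(self, start, table):
        self.state = start
        self.table = table

    def step(self, event):
        next_state, action = self.table.get((self.state, event),
                                            (self.state, None))   # unknown events: stay put
        self.state = next_state
        return action

# Hypothetical wiring: wander until the olfactor reports food, then seek it.
brain = FSM("wander", {
    ("wander", "smell_food"): ("seek", "turn_toward_smell"),
    ("seek", "smell_gone"):   ("wander", "resume_wandering"),
})
```

The olfactor would feed events like `smell_food` into `step`, and the returned action would drive the muscle links.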

I'm at a point now where creating new demonstrations is becoming quite a chore, because each critter design is hard-coded into the program. Now that I have some experience creating critters and wiring brains for them, I understand the commonality involved in these tasks, so I'm devising a way to represent designs for worlds in XML files instead of code. This may sound superfluous and overly limiting, but one significant benefit is that a "body segment" can be modeled after another, already-defined segment and then modified slightly. That makes it easy to build a critter out of repeating segments, even segments that grow progressively different or serve different purposes. It's a sort of object-oriented way of describing things, with inheritance and polymorphism. So far, I've proven the concept with segments nested within segments and ultimately embodied as particles. I have yet to implement the links that tie them together, but that will be pretty easy. More importantly, I have yet to apply the same concept to brain components. Once those steps are done, I'll be able to create richly complex critters with much less effort.
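To make the inheritance idea concrete, here's a hypothetical sketch of what such a definition file could look like. This is not Roamer's actual schema; the element names, the `basedOn` attribute, and every value are invented for illustration:

```xml
<!-- Hypothetical sketch only, not the project's real format. -->
<critter name="worm">
  <segment name="head">
    <particle name="core" radius="2.0" mass="1.0"/>
    <particle name="nose" radius="0.5" mass="0.2" type="smell"/>
  </segment>
  <!-- body1 is modeled after head, overriding only what differs -->
  <segment name="body1" basedOn="head">
    <particle name="nose" type="none"/>
  </segment>
  <!-- body2 inherits from body1 and shrinks it a little -->
  <segment name="body2" basedOn="body1" scale="0.8"/>
</critter>
```

Each `basedOn` segment copies its parent's particles and then applies overrides, which is the inheritance-and-polymorphism flavor described above.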

Although my present goals are oriented toward AI research, I keep getting tempted by how relevant this "physical world" I've created seems to be to artificial life (AL) research. There's no reason I couldn't add some extra code and turn it into a world of evolving critters using traditional genetic algorithm techniques; the XML definitions of critters could be the genetic code, for instance. One reason I don't intend to any time soon, though, is that while my simulation of how the world works is pretty good, it's also brittle. I tried to build conservation of energy and a notion of entropy into the system, but I could not get away from the fact that in some circumstances "energy" gets created from nothing and sometimes spirals out of control until the world experiences numeric overflow exceptions. I would expect evolving critter designs to find and exploit these flaws. And while such exploits would not necessarily be bad, so long as they don't cause exceptions, what I consider unacceptable about them is that they lose the nicely intuitive feel of the system, making it harder for a casual viewer to get a quick sense of what's happening.
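One common source of this kind of energy creation, quite apart from anything specific to Roamer, is the integration scheme itself. A toy one-dimensional spring-mass demo of my own shows the effect: naive explicit Euler pumps energy into the system on every step, while the semi-implicit variant (position updated with the new velocity) keeps it bounded:

```python
def explicit_euler(x, v, k=1.0, m=1.0, dt=0.1, steps=1000):
    """Integrate a 1-D spring-mass system; both updates use the old values."""
    for _ in range(steps):
        a = -k * x / m
        x, v = x + v * dt, v + a * dt
    return 0.5 * m * v * v + 0.5 * k * x * x   # total energy

def semi_implicit_euler(x, v, k=1.0, m=1.0, dt=0.1, steps=1000):
    """Same system, but the position update uses the freshly updated velocity."""
    for _ in range(steps):
        v = v + (-k * x / m) * dt
        x = x + v * dt
    return 0.5 * m * v * v + 0.5 * k * x * x
```

Starting from x = 1, v = 0 (energy 0.5), the explicit version multiplies the energy by roughly (1 + dt²) every step, so it grows by orders of magnitude over a thousand steps, while the semi-implicit one stays near 0.5.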

On that note, I consider it important in AI and AL research not only to create things that are smart or lifelike, but to do so in a way that most people can see for themselves, at least on some level. That's one reason I've wanted for so many years to create a physics-plus-graphics engine like the one I have now. For a researcher like me with no research funding, I think it basically satisfies Rodney Brooks's requirement that a robot be "embodied", in a way that grid-style worlds and other tightly constrained artifices can't reliably be expected to simulate. I don't ever expect a robot designed in this 2D world to be turned into a physical machine in our 3D world for us to kick around, though. I see this as the end of the line for such critters.

I do, incidentally, think the model I've devised thus far can readily be extended to a 3D world. The two main reasons I chose a 2D model are that it's harder to program a useful graphics engine and viewer for a 3D world, and that it's simply harder for the researcher or casual observer to understand what's going on in 3D, where lots of important things can be hidden from view. Still, this seems a natural step ... for another day.

