
Wednesday, January 12, 2005

A review of the premises behind Pile


Meandering through the trickle of AI-related news out on the Web, I recently came across information about a purportedly novel computing paradigm named "Pile" (http://pilesys.com/). The company formed to capitalize on it, Pile Systems, Inc., makes the following bold claim on its "about" page under the heading "Why Pile can change computing":



The Pile system is a revolutionary new approach to data and computing which eliminates the most fundamental current restrictions in regard to complexity, scalability and computability.

Pile represents and computes arbitrary electronic input exclusively as relations (virtual data) in a fully connected and scalable combinatory space. It dynamically generates data like a computer game instead of storing and retrieving it in a traditionally slow and clumsy process.



This sounds benign and interesting enough at first blush. Having read an outside review of Pile, I can genuinely say I'm interested in learning more about a way of representing information as relationships, because it sounds a bit like the ideas I've been pursuing in my own AI research.

Still, the little red flag in my head goes up whenever I read phrases like "revolutionary new approach", because genuine revolutions don't happen very often. Most innovations are modest extensions of existing ideas. The red flag warns of iconoclasm and excessively bold claims.

I am continuing to study the site and the concept of Pile. There may be genuine value to it. Or it may be a fraud. Until I give it a fair airing, I can't make that final judgment.

That said, I wanted to give a preliminary review of the premises laid out in what seems to be the seminal introductory text on the subject: "Pile System White Paper: Computing Relations", by Peter Krieg. I would normally wait until I'd gotten further along in my understanding of the subject, but I am so incensed by the stated premises about the limitations of current computers and of AI that I thought they merited their own separate review.

The paper begins, "In the 60+ years of modern computing history we have taken the fundamental architecture of computing, i.e. the logic governing the way we represent, structure and operate as well as the method of representation we use to register events, for granted. We rarely become aware that these are mere design decision from the early days of computer technology, neither naturally given nor possibly the best of choices." OK, this is a fair statement. Anyone familiar with neural networks would agree that there are already demonstrated alternatives.

A little later, though, Krieg raises the tempo a bit as he writes, "A time of crisis has always been a time where we are willing to take a closer look at foundations in order to find long term cures that go beyond patches and band-aids. The current crisis of computing -- an economic as well as a technical crisis -- is also an opportunity to reconsider the very basic assumptions that this industry has been built upon and reflect on possible alternatives that hold the promise of curing the systemic ills of computing." Let's be honest: there is no crisis of computing. Most organizations that need computing resources are doing just fine with the current breed of computers. In all my years as a software developer, I've never heard a businessman lament a crisis in computing. They complain about the cost of computer hardware and software licensing, not about basic capabilities. The little red flag starts waving around a bit.

"Attempts in the 1960ies and 1970ies to address these issues have been silenced by the onslaught of Artificial Intelligence, for over 40 years the 'Great White Hope' of computing. Only now that the failure of AI has become evident -- as was predicted early by its critics -- and even the mention of it becomes a liability to anyone seeking publication or funding, can we revisit some of the arguments and take a fresh look at the foundations." Few would dispute that AI researchers have made bold claims they have not yet been able to substantiate, and yes, AI has a black eye because of it. Yet while even I would concede that AI is nearly dead as a recognized field, I wouldn't say AI has been a complete failure, nor that its time has passed. Such are the claims of people who don't really understand much about machines or intelligence, I think. So Krieg has now laid out the smoldering ashes of the dark ages out of which we are prepared to emerge into a bright new future. The little red flag waves a little more enthusiastically now.

"In fact, computers today are just that: extremely complicated, highly integrated yet fundamentally stupid clocks." This, of course, is nonsense. A clock is not a general-purpose computer. A Turing machine, by contrast, can be used to solve any computable information-processing problem. "They are neither adaptive nor even scalable: in spite of ever speedier and more complicated chips, in spite of even faster growing memories and storage devices, their operations keep drowning in data and complexity." Given that a Turing-complete machine can emulate any other kind of information-processing machine, saying that a Von Neumann machine (VNM) -- nearly all computers today are of this type -- is not adaptive is just plain nonsense. One may quip that the software run on a VNM is not adaptive enough to deal with a certain class of problems, but one should not equate the limits of a program with the limits of the VNM it runs on. Saying that a VNM is not scalable is also nonsense. The famous Connection Machine (up to 65,536 processors in one system) and now Google (over 100,000 computers) should easily lay that question to rest. The little red flag begins hopping around frantically.
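The universality point is easy to make concrete. Here is a minimal sketch (my own illustration, nothing from the white paper) of a single-tape Turing machine simulator in Python; the transition table shown implements binary increment, but any table -- including one describing some other machine -- can be dropped in, which is precisely why a general-purpose computer is nothing like a clock:

```python
def run_turing_machine(transitions, tape, state="start", blank="_", max_steps=10_000):
    """Simulate a single-tape Turing machine.

    transitions: {(state, symbol): (new_state, write_symbol, move)}
    where move is -1 (left) or +1 (right). Halts when no rule applies.
    """
    tape = dict(enumerate(tape))  # sparse tape: cell index -> symbol
    head = 0
    for _ in range(max_steps):
        symbol = tape.get(head, blank)
        if (state, symbol) not in transitions:
            break  # no applicable rule: the machine halts
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = [tape.get(i, blank) for i in range(min(tape), max(tape) + 1)]
    return "".join(cells).strip(blank)

# Example table: increment a binary number. Scan right to the end of the
# input, then propagate the carry leftward.
INCREMENT = {
    ("start", "0"): ("start", "0", +1),
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),
}

print(run_turing_machine(INCREMENT, "1011"))  # → 1100 (11 + 1 = 12)
```

The simulator itself is fixed and deterministic, yet its behavior is entirely determined by the table it is handed -- the table is just data.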

I would be remiss if I overlooked that last part about "drowning in data and complexity." What could this mean? The next statement is even more puzzling: "The reasons lie in the very foundations of their architecture: logic and representation." The little red flag stops its waving and hopping and scratches its head.

Krieg goes on to reveal the nature of the problem by reference to a collection of jargon pulled from AI, philosophy, and even quantum mechanics. He goes so far as to claim that VNMs -- and yes, he's clearly mixing VNMs and the AI programs that run on them at this point -- rely on deterministic rules, and that quantum mechanics suggests there are no such things. Well, he's right, and one can go even further: almost all technology we have ever created relies on basic determinism, the view of causality that says we can predict likely outcomes from certain classes of starting states. Before declaring this just quaint, back-country superstition, let's acknowledge that nature has done the same. Almost everything about the machinery that we and all other known life forms possess has evolved in concert with the basic premise of determinism. What good is a muscle if one can't assume it will work in a predictable manner, for example? How about an eye?

"All machines including today's computers are exactly such closed deterministic mechanisms." This claim worries me a little, as I'm assuming that Pile is going to be presented as an alternative to this paradigm. The only problem is that Pile Systems sells software that runs on these deterministic machines.

"Deterministic systems by definition are incapable of learning, as learning would change them in unpredicted ways - turning it into non-deterministic systems." I guess this is supposed to be the killing blow to VNMs and/or AI research to date. This premise is just plain false, though. Determinism does not preclude learning. I could point to the simple neural network demonstrator program I made recently, but I'll grant that Krieg places neural networks somehow above VNMs and other AI. So take FLARE, which I recently reviewed. Now there's a system that is about as classical as one gets in the realm of AI. It relies wholly on deterministic processes for reasoning and learning, yet it's clearly able to adapt itself to new knowledge. How about Cyc? It may not yet have achieved the goals Doug Lenat had for it decades ago, but it clearly adapts to new knowledge. The little red flag is pretty confident the rest of my brain can take it from here and retires for the day, cheerful about another job well done.
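In the spirit of my little neural network demonstrator, here is a sketch (my own toy example, not code from FLARE or Cyc) showing that a fully deterministic process can learn: a perceptron with fixed initial weights, a fixed learning rate, and a fixed sample order. Every run is bit-for-bit identical -- perfectly predictable -- and yet the system changes its own behavior in response to data, which is learning by any reasonable definition:

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Deterministic perceptron training: no randomness anywhere.
    Fixed initial weights, fixed sample order, fixed learning rate,
    yet the weights adapt to fit the training data."""
    n = len(samples[0][0])
    w = [0.0] * n  # weights start at zero every run
    b = 0.0        # bias
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out  # deterministic error signal
            b += lr * err
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Learn the AND function from examples.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
print([predict(w, b, x) for x, _ in AND])  # → [0, 0, 0, 1]
```

Run it twice and you get identical weights both times: determinism and adaptation coexist without the slightest tension.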

After a bit of mumbo jumbo attempting to play on our annoyance at having to adapt to computers instead of having them adapt to us, and about how adaptive systems cannot have "pre-knowledge about the signals they detect", Krieg goes on to introduce a new term, "poly-logic systems", and declares that it "is essential to understand living systems and phenomena like cognition, learning, adapting or complexity." The little red flag pokes its head out again, ears perked. To his credit, Krieg partially walks back his dismissal of logic, declaring that "polylogic" is not "another logic" but another "architecture of logic".

It becomes clear at this point that the rest of the white paper will focus on what polylogic is and hence on what Pile's novel conception of computing is. I'm going to read on and find out more. Still, I can't help but have a bad taste in my mouth at this point. Given the gross misunderstandings and continual confusion among Von Neumann machines, relational databases, and traditional AI research so far, it's hard to imagine a clean concept will follow. It may still be a valid one. I'm eager to find out.
