Friday, September 24, 2004

Fresh Ideas

I swung by Marvin's place yesterday and he showed me a section he is drafting for his upcoming book The Emotion Machine. The basic idea is that there may be sections of the brain that are "consciousness activity detectors" -- agents that recognize that self-reflective processes are active. This could explain why there seems to be some uniformity to the phenomena we call consciousness, even if there are really many different kinds of processes at play.

It's awesome to see these ideas fresh from his mind; generally, it's been fascinating seeing the book evolve over the past ten years. It's very much been like watching someone else's child grow up -- you see the incubation of the early, ill-formed notions, the trying out of different variations, the changes of interest and focus, the sudden spurts of development, and eventually the maturation into a distinguished member of society. I expect this book will be one for the ages.

Monday, September 20, 2004

older, better?

Is it a myth that people slow down as they get older? In my experience, complex ideas now seem simpler, I'm better at solving problems, and generally I'm faster at learning new things. I'm certainly more critical of my ideas, but my sense is that the ones that get through my filters are better than the ones I would have produced a few years ago.

Someone once told me about a class they took with Gerry Sussman, one of the great teachers here at MIT. One day he came in and started writing equations on the board. As he went on, the equations got more and more painful to follow, and eventually one of the students gave up and asked him, "How can you possibly understand this complex stuff?" Gerry's response was, "Well, I couldn't when I was your age -- but when I turned 26 I grew three new registers."

Sunday, September 19, 2004

the term "symbolic AI"

There seems to be a basic misunderstanding among many amateurs and students of AI (and shamefully, among some professionals as well) about what "symbolic AI" refers to.

Today there are many AI techniques that employ symbols -- logical AI obviously, AI based on frames and semantic networks (which differ from logical AI in that they often lack a clear logical semantics), genetic programming (where one searches through spaces of symbolically represented genotypes), and Bayesian networks (where the nodes are often labeled with meaningful symbols). Neural networks often do not contain anything you could reasonably call a symbol, but there are communities within AI that have developed many ideas about how to express structures such as frames and semantic networks in connectionist substrates.
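
To make this concrete, here's a small sketch in Python of how symbols show up in several of these otherwise very different techniques; the particular frames, links, genotypes, and probabilities are invented for illustration:

    # A frame: a symbol with named slots, themselves filled by other symbols.
    bird_frame = {
        "name": "Bird",
        "isa": "Animal",
        "slots": {"covering": "feathers", "locomotion": "flying"},
    }

    # A semantic network: labeled links between symbolic nodes.
    semantic_net = [
        ("Canary", "isa", "Bird"),
        ("Bird", "has-part", "Wings"),
    ]

    # A genetic-programming genotype: a symbolic expression tree that evolution searches over.
    genotype = ("if", ("gt", "x", "0"), ("mul", "x", "2"), ("neg", "x"))

    # A Bayesian network fragment: nodes labeled with meaningful symbols,
    # each with a conditional probability table over its values.
    bayes_net = {
        "Rain":     {"parents": [],       "cpt": {(): 0.2}},
        "WetGrass": {"parents": ["Rain"], "cpt": {(True,): 0.9, (False,): 0.1}},
    }

    for label, structure in [("frame", bird_frame), ("semantic net", semantic_net),
                             ("GP genotype", genotype), ("Bayes net", bayes_net)]:
        print(label, "->", structure)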

Yet despite this variety I often run into students who lump logical AI, frames, and semantic networks into one category; genetic programming into another; and Bayesian networks and other probabilistic models into a third. There seems to be no special reason for this lumping other than the so-called symbolic vs. connectionist debates of the late 1980s and early 1990s, which established this false dichotomy.

If AI is going to become a mature field, we need to start teaching students to make finer distinctions -- which means that they will need to be trained more deeply in more subject areas, including all of the above.

Saturday, September 18, 2004

Commonsense Reasoning Wikipedia entry

I started a Wikipedia entry on Commonsense reasoning, opening it with the line:

"Commonsense reasoning is the branch of Artificial intelligence concerned with replicating human thinking."

I'm not sure everyone would agree with this definition--some might argue that it is the branch of AI concerned with particular techniques such as default reasoning with logic, in which case it is applicable to a wide range of problems beyond those people solve.

Or, one might choose to regard Commonsense Reasoning as concerned with solving the kinds of problems people solve--e.g. understanding sentences, recognizing scenes, interacting socially, etc.--but not necessarily in the same ways people solve them. I tend to fall more into this camp.

Friday, September 17, 2004

LifeNet coming soon

We're getting closer to our release of LifeNet, a commonsense reasoning system that uses a 1.5-slice DBN-like probabilistic model (it's actually a pairwise Markov random field). This isn't the best way to represent commonsense knowledge, but I suspect it's not a bad substrate on top of which to layer other techniques, and we have a number of ideas about how to rapidly grow this model.
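
For anyone who hasn't run into the formalism, here's a minimal sketch in Python of the kind of pairwise Markov random field I mean: binary proposition nodes with pairwise potentials both within a time slice and between adjacent slices (the "1.5 slice" structure). The propositions and potential values below are invented for illustration and aren't actual LifeNet content:

    import itertools

    # Binary proposition nodes in two adjacent time slices, t and t+1.
    nodes = ["I-am-hungry@t", "I-eat@t", "I-am-hungry@t+1"]

    # Pairwise potentials: larger values mean that pair of truth values is more compatible.
    # Keys are (node_a, node_b); values map (value_a, value_b) -> potential.
    potentials = {
        ("I-am-hungry@t", "I-eat@t"):         {(1, 1): 3.0, (1, 0): 1.0, (0, 1): 0.5, (0, 0): 2.0},
        ("I-eat@t", "I-am-hungry@t+1"):       {(1, 1): 0.5, (1, 0): 3.0, (0, 1): 2.0, (0, 0): 1.0},
        ("I-am-hungry@t", "I-am-hungry@t+1"): {(1, 1): 2.0, (1, 0): 1.0, (0, 1): 1.0, (0, 0): 2.0},
    }

    def score(assignment):
        """Unnormalized probability of a full truth assignment: the product of all pairwise potentials."""
        s = 1.0
        for (a, b), table in potentials.items():
            s *= table[(assignment[a], assignment[b])]
        return s

    # Brute-force normalization is fine at this toy scale; a real system would use
    # approximate inference such as Gibbs sampling or loopy belief propagation.
    Z = sum(score(dict(zip(nodes, values)))
            for values in itertools.product([0, 1], repeat=len(nodes)))
    query = {"I-am-hungry@t": 1, "I-eat@t": 1, "I-am-hungry@t+1": 0}
    print("P(query assignment) =", score(query) / Z)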

Marvin Minsky once drew a diagram as a way of thinking about how to relate the various representations discussed in The Society of Mind. LifeNet was originally intended to represent knowledge at the Transframe level, and it does indeed have some of the properties of knowledge at that level. But it is probably a better fit to the Microneme level, with the extension that LifeNet associates propositions across time slices as well as within them.

Friday, September 10, 2004

a practical problem in modern computational epistemology

To me, the idea that humans represent knowledge as stories is one of the more compelling arguments for the need for rich knowledge representations like those used by Cyc. Simpler representations lack the power to express the variety of content and structures that stories contain -- situations, places, times, objects, events, beliefs, goals, emotions, and so forth.

That said, I am dubious of the proliferation of symbol names that seems to be the product of most ontological approaches to AI. The problem is that it is difficult to decide when to stop. Pat Hayes once recounted to me a story about a group of ontological engineers arguing about whether a painting that was hanging on the wall was "in" the room--but then an even fiercer argument broke out about whether the paint on the wall was "in" the room! It's certainly an interesting exercise to produce and refine such distinctions, but my sense is that this is not a well-defined task outside of some purpose or goal that guides the production of these distinctions.

Stories, however, open up an interesting new possibility. Rather than using a large collection of special symbol names that distinguish between an ever-increasing collection of cases of "in-ness", we can instead say "in" as in the story STORY-532. At some point, symbolic distinctions could begin to be made by exploiting a symbol's extrinsic contexts of use as a basis for further refinement, rather than trying to shoehorn that context into the symbol name itself.
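
Here's a minimal sketch in Python of the contrast I have in mind; every symbol name below, including the second story identifier, is an invented placeholder:

    # One approach: mint a new symbol name for each shade of "in-ness".
    facts_with_minted_symbols = [
        ("in-hanging-on-wall-of", "painting-17", "room-3"),
        ("in-as-coating-of-wall-of", "paint-layer-5", "room-3"),
    ]

    # The story approach: keep the plain symbol "in", but index each use by the
    # story in which it occurs, so that context can support later refinement.
    facts_with_story_context = [
        {"relation": "in", "args": ("painting-17", "room-3"), "story": "STORY-532"},
        {"relation": "in", "args": ("paint-layer-5", "room-3"), "story": "STORY-533"},
    ]

    def contexts_of(relation, facts):
        """Collect the stories in which a relation symbol is used -- raw material for
        deciding later whether a finer distinction is actually worth drawing."""
        return [fact["story"] for fact in facts if fact["relation"] == relation]

    print(contexts_of("in", facts_with_story_context))

Nothing deep happens in the code; the point is just that the context of use lives alongside the plain symbol rather than being baked into its name, so finer distinctions can be drawn later, when some purpose actually calls for them.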