Thursday, July 28, 2005

John McCarthy in town



This is John McCarthy, me, Ed Fredkin, and Marvin Minsky at dinner. John McCarthy was in town for an interview about the Dartmouth conference, the event many regard as marking the birth of AI. It's rare that he and Marvin get together these days. I consider them the two people in the field most dedicated to building human-level AI.

Saturday, July 16, 2005

Interior Grounding

It has been hard to give this blog any attention while finishing my dissertation, but now that it is done -- I am now Dr. Singh! -- I will try to post a bit more often (although it is difficult to combat the Inverse Law of Usenet Bandwidth).

I travelled last week to the AAAI meeting with Marvin Minsky, who gave the keynote talk on some new ideas he has been developing about how minds grow. The basic idea is called "interior grounding," and it is about how minds might develop certain simple ideas before they begin building articulate connections to the outside world. Marvin developed this idea partly in reaction to the popular desire to build AI "baby machines" that start with blank slates and develop ideas by extensive exposure to the outside world. The difficulty with baby machines is that the world is a confusingly complex place, and making sense of it is no simple matter. There may be ways to discover useful concepts in advance of exposure to the rich outside world, concepts that -- when the mind is ready -- make learning about that world easier.

I think one reason Marvin likes the interior grounding idea is that it is very compatible with the Society of Mind theory, where the mind is seen not as a single agent but as a society of simpler agents. In this society, some agents are directly connected to sensors and effectors that interface to the external world. But other agents don't see the outside world at all -- their concern is with the activities of other agents within the mind. Perhaps most of our agents are of this sort, concerned not with the outside world but with the other agents around them. If this is the case, then many of our mental agents may begin learning and developing sophisticated notions about how minds work well before our sensorimotor agents have learned very much about the outside world.
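To make that picture concrete, here is a tiny sketch -- just my own toy illustration, not code from any actual Society of Mind system -- of a society where a couple of agents read sensors while a reflective agent watches only the other agents:

    import random

    class Agent:
        def __init__(self, name):
            self.name = name
            self.activity = 0.0  # how active this agent was on the last step

        def step(self, world):
            raise NotImplementedError

    class SensorAgent(Agent):
        """Directly connected to the external world through a sensor."""
        def step(self, world):
            self.activity = world.read_sensor(self.name)

    class ReflectiveAgent(Agent):
        """Never sees the world; its 'percepts' are the activities of other agents."""
        def __init__(self, name, watched):
            super().__init__(name)
            self.watched = watched  # the other agents this one monitors

        def step(self, world):
            self.activity = sum(a.activity for a in self.watched) / len(self.watched)

    class World:
        """Stand-in for the external environment."""
        def read_sensor(self, sensor_name):
            return random.random()

    if __name__ == "__main__":
        world = World()
        eyes = SensorAgent("eyes")
        ears = SensorAgent("ears")
        attention = ReflectiveAgent("attention", watched=[eyes, ears])
        society = [eyes, ears, attention]
        for _ in range(3):
            for agent in society:
                agent.step(world)
            print({a.name: round(a.activity, 2) for a in society})

The point of the toy is only that the reflective agent's whole "world" is the other agents; nothing in its code refers to the external environment.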

Sunday, February 27, 2005

big minds need to be SOMs

Every now and then someone asks me why there haven't been many implementations of Marvin Minsky's Society of Mind theory of intelligence; instead, most AI people seem to use homogeneous planners or inference engines and fairly uniform knowledge representation schemes, and these methods are often reasonably effective.

I've always puzzled over this myself, and I have some ideas for why this is the case. One possible reason is quite simple. It could be that the systems we've been building so far are simply too small to need the kinds of organizational principles that are described in the Society of Mind. It is only now, when we are beginning to accumulate knowledge bases with millions or even billions of items, intricately represented and capable of being combined in many ways to produce many inferences, that society-like organizational principles are needed. Until now, using societal principles was overkill, like writing quicksort in object-oriented style -- procedural style is shorter, cleaner, and simpler. "Cellular" abstractions are beneficial mainly for larger programs.
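To see what I mean by the analogy: a procedural quicksort is only a few lines, and wrapping it in classes would add machinery without adding clarity. (This is just the textbook algorithm, offered as an illustration of scale, not anything from an AI system.)

    # Procedural quicksort -- short and clear; an object-oriented version
    # would mostly add ceremony. Shown only to illustrate the analogy.
    def quicksort(items):
        if len(items) <= 1:
            return items
        pivot, rest = items[0], items[1:]
        smaller = [x for x in rest if x < pivot]
        larger = [x for x in rest if x >= pivot]
        return quicksort(smaller) + [pivot] + quicksort(larger)

    print(quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9]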

Monday, February 21, 2005

Machine Desirantes

Most television science fiction these days lacks advanced ideas -- generally, the shows are run-of-the-mill soap operas or social commentary. But the other day I ran across an impressive animated show called Ghost in the Shell. The episode concerned the development of some self-reflection and community-reflection in a group of AI robots. I was reminded a little bit of when I first saw Max Headroom on television -- it's clear that this new show is ahead of its time, in the sense that it treats as commonsensical such ideas as minds being able to enter and influence other minds, AI systems that share cognitive resources so that there is a blurring of identity, and a continuum of entities between people and AI systems. Max Headroom didn't make it past its second season, but the world it predicted is in many ways familiar today. I expect the same to hold true for Ghost in the Shell.

Saturday, January 15, 2005

Dualism

I ran across this peculiar thought experiment by David Chalmers.

He starts with the concept of the world as a simulation running in an external reality. But he looks not at the usual case where the brains of the agents in the simulation are also in the simulation, but instead at the different case where their brains are running outside the simulation and where the agents have no direct access to those brains. This is the situation, for example, of a typical "artificial life" system, or of a video game where the characters are driven by some AI system.

Chalmers seems to be claiming that, to the agents living in the simulation, assuming they've gotten interested in the question of what minds are and how they work, the most reasonable explanation is a dualistic one -- and that we ourselves should consider dualistic explanations as not so outlandish.

I see a couple of issues here.

First, it's not clear that the simulated agents need access to their physical brains in order to posit theories of how they work. People have been doing this for years by indirect psychological experiments. It may be possible for the agents to do similar experiments, ones that help them choose between different theories of their cognitive architectures. If they're smart enough and spend enough time on the problem, perhaps they would hit upon the right mechanism.

Second, Chalmers's scenario doesn't make me very much more comfortable about dualism. The analogue in the real world would be that the brain we see is only a conduit to another "mental world" where the mind really happens. But why would the brain need hundreds of specialized centers just to be, in effect, a glorified radio transmitter? One could make a case for this, but it strikes me as enormously unlikely.

Monday, November 15, 2004

Akihabara

Just got back from a fundraising trip to Tokyo with Marvin and Ian. We were hosted by friends who took very good care of us, and I got to see quite a bit of the Tokyo area. It was great fun to visit Japan again -- the last time I was there was just about 10 years ago. My favorite part was wandering the electronic components stores in Akihabara. I didn't realize Japan had such a vibrant hardware hacker culture. There were several multistory buildings packed with dozens of small shops that sold parts of all sorts. I haven't seen these sorts of shops in the Boston area (with the exception of the monthly MIT Swapfest) -- presumably because mail order is so efficient in this country. But I remember how nice it was as a kid in Montreal to be able to just run over to a well-stocked electronics store when I needed a part. One question I had was how the Japanese proprietors could afford to run such small businesses -- as in the one in this picture.


Monday, October 25, 2004

the meaning of knowledge

I met with some officials from the Department of Defense's National Security Agency today. I was describing our work on commonsense reasoning when the question came up of what we meant by the word "knowledge." In our systems we usually start off with a corpus of commonsense facts, stories, descriptions, etc. expressed in English, which are then converted through a variety of processing techniques into more usable knowledge representations such as semantic networks and probabilistic graphical models. My suggestion was that, from the perspective of the computer, only the latter forms should be considered "knowledge," because they can be put to use by an automated inference procedure. But in the long run, as our parsing and reasoning tools get more sophisticated, we may come to be able to use more and more of the collected corpus.
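As a toy illustration of the kind of conversion I mean -- the patterns and relation names below are invented for this example, not our actual pipeline -- one might turn English statements into semantic-network edges like this:

    import re

    # Hypothetical surface patterns; a stand-in for the real parsing tools.
    PATTERNS = [
        (re.compile(r"^(?:a |an )?(?P<x>.+?) is used for (?P<y>.+)$", re.I), "UsedFor"),
        (re.compile(r"^(?:a |an )?(?P<x>.+?) is a kind of (?P<y>.+)$", re.I), "IsA"),
        (re.compile(r"^(?:a |an )?(?P<x>.+?) can (?P<y>.+)$", re.I), "CapableOf"),
    ]

    def to_edge(sentence):
        """Turn an English statement into a (concept, relation, concept) triple, or None."""
        text = sentence.strip().rstrip(".")
        for pattern, relation in PATTERNS:
            match = pattern.match(text)
            if match:
                return (match.group("x"), relation, match.group("y"))
        return None

    print(to_edge("A fork is used for eating."))  # ('fork', 'UsedFor', 'eating')
    print(to_edge("A dog is a kind of animal."))  # ('dog', 'IsA', 'animal')

Only once a statement is in a form like this -- something an inference procedure can traverse or combine -- would I call it "knowledge" in the sense I meant above.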

Generally, the word "knowledge" is very inclusive because the AI community has discovered a vast array of knowledge representation forms, and every one of them is useful for some purpose or other. Thus, the more important questions may not be about what is and isn't knowledge but, given some knowledge, questions such as the following:

For what purposes can it be used? When is it applicable? Is it true? According to whom? Under what circumstances? Who might find this knowledge useful? Is it expressed clearly enough? Are there other units of knowledge that may be useful in conjunction with this one? How long should we expect this knowledge to stay relevant? How might this knowledge have been acquired, and from where might we acquire more like it? What background might you need to make sense of it?

And so forth. The point, I suppose, is that like most words that point to complex ideas, understanding the word "knowledge" requires that we consider its many contexts of use, and the issues that show up in those contexts.