Sunday, February 27, 2005

big minds need to be SOMs

Every now and then someone asks me why there haven't been many implementations of Marvin Minsky's Society of Mind theory of intelligence; instead, most AI people seem to use homogeneous planners or inference engines and fairly uniform knowledge representation schemes, and these methods are often reasonably effective.

I've always puzzled over this myself, and I have some ideas for why this is the case. One possible reason is quite simple. It could be that the systems we've been building so far are simply too small to need the kinds of organizational principles that are described in the Society of Mind. It is only now, when we are beginning to accumulate knowledge bases with millions or even billions of items, intricately represented and capable of being combined in many ways to produce many inferences, that society-like organizational principles are needed. Until now, using societal principles was overkill, like writing quicksort in object-oriented style -- procedural style is shorter, cleaner, and simpler. "Cellular" abstractions are beneficial mainly for larger programs.
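To make the quicksort analogy concrete, here's a minimal sketch in Python (my choice of language for illustration; the class name `QuickSorter` is invented). The procedural version is a few lines; wrapping the same algorithm in a class adds ceremony without adding capability -- the object-oriented "cellular" structure only starts to pay off when a program is large enough to need it.

```python
# Procedural quicksort: short, clean, and simple at this scale.
def quicksort(items):
    if len(items) <= 1:
        return list(items)
    pivot, *rest = items
    left = [x for x in rest if x < pivot]       # elements below the pivot
    right = [x for x in rest if x >= pivot]     # elements at or above it
    return quicksort(left) + [pivot] + quicksort(right)


# The same algorithm in object-oriented dress (hypothetical class):
# more machinery, no extra power for a problem this small.
class QuickSorter:
    def __init__(self, items):
        self.items = list(items)

    def sort(self):
        return quicksort(self.items)
```

The point isn't that classes are bad, but that organizational abstractions carry a fixed cost that only larger systems amortize.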


Blogger Bob Mottram said...

As any organisation grows it needs to differentiate in order to maintain coherence. This applies whether the organisation is a growing embryo or a growing business or society. When the system consists of only a few intercommunicating parts, homogeneous architectures work well, but as things get bigger you need more specialists and managers.

Few people have considered what the architecture of an artificial mind should look like. Most just take whatever algorithmic techniques we have at present (such as neural networks) as gospel and assume that they can do it all. Many AI algorithms are very behaviorist in flavour and simply treat the brain as a simplistic input-output device with very little in the way of intermediate processes. Reinforcement learning theory goes to the ultimate extreme in this respect, abstracting away everything which might be of interest.

Of course it's the intermediate processes which are critical for producing any sort of sophisticated cognition, and the way that we learn to understand what we see with our eyes is perhaps the ultimate demonstration of this.

February 28, 2005 at 5:53 PM  
Blogger mindpixel said...

Ah, Push. Just found your blog. Saw SOM and my heart skipped a beat. Thought you meant Self-organizing map. And now that I think about it, they are really the same thing from symbolic and continuous POVs respectively.

And Hi, Bob. Funny running into you here.

January 12, 2006 at 7:13 AM  

