To me, the idea that humans represent knowledge as stories is one of the more compelling arguments for rich knowledge representations like those used by Cyc. Simpler representations lack the power to express the variety of content and structure that stories contain: situations, places, times, objects, events, beliefs, goals, emotions, and so forth.
That said, I am dubious of the proliferation of symbol names that seems to be the product of most ontological approaches to AI. The problem is that it is difficult to decide when to stop. Pat Hayes once recounted to me a story about a group of ontological engineers arguing over whether a painting hanging on the wall was "in" the room, after which an even fiercer argument broke out over whether the paint on the wall was "in" the room! It is certainly an interesting exercise to produce and refine such distinctions, but my sense is that doing so is not a well-defined task outside of some purpose or goal that guides the production of those distinctions.
Stories, however, open up an interesting new possibility. Rather than coining an ever-increasing collection of special symbol names to distinguish the many cases of "in-ness", we can instead say "in" as in the story STORY-532. At some point, symbolic distinctions could begin to be made by exploiting a symbol's extrinsic contexts of use as the basis for further refinement, rather than by trying to shoehorn that context into the symbol name itself.
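One way to picture this idea is a knowledge base that keeps a single coarse relation symbol and tags each use of it with the story that supplies its context. The sketch below is purely illustrative (the class, the `Assertion` tuple, and the story identifiers such as `STORY-532` are all hypothetical names, not part of Cyc or any existing system); it shows how distinct senses of "in" could be recovered later by grouping uses by story, instead of being minted as new symbols up front.

```python
# Illustrative sketch: one relation symbol, contextualized by stories.
# All names here are hypothetical, invented for this example.
from collections import defaultdict
from typing import NamedTuple


class Assertion(NamedTuple):
    relation: str  # a single coarse symbol, e.g. "in"
    subject: str
    obj: str
    story: str     # the extrinsic context of use, e.g. "STORY-532"


class StoryIndexedKB:
    def __init__(self):
        self.by_relation = defaultdict(list)

    def tell(self, assertion: Assertion) -> None:
        self.by_relation[assertion.relation].append(assertion)

    def senses(self, relation: str) -> dict:
        """Group the uses of one relation by the story that contextualizes them."""
        groups = defaultdict(list)
        for a in self.by_relation[relation]:
            groups[a.story].append(a)
        return dict(groups)


kb = StoryIndexedKB()
kb.tell(Assertion("in", "painting", "room", "STORY-532"))
kb.tell(Assertion("in", "paint", "room", "STORY-533"))

# One symbol "in", two contexts of use; any finer distinction between
# "painting in room" and "paint in room" can be drawn later, by comparing
# the stories, rather than encoded in the symbol name itself.
print(sorted(kb.senses("in")))
```

The point of the design is that refinement is deferred: the ontology stays small, and the burden of disambiguation shifts from the symbol inventory to the recorded contexts of use.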