Wednesday, April 19, 2006

Into the matrix ...

As preparation for this week's Computing session on AI, I briefly considered browsing the library's holdings of the journal Artificial Intelligence, which boasts such recent articles as "Conformant planning via heuristic forward search" [*] (the what, now?) and "Is real-valued minimax pathological?" [**]

Instead, I had a quick look at the Advanced Higher AI resource pack (all 132 pages of it). But it's a truly scary document. Far too scary, in fact.

Finally, I dug out my paperback copy of William Gibson's Neuromancer, purchased in 1986 to read on the bus to university (the first time around). A terrific read. Still gripping, twenty years on. (Incidentally, something called cherishedwig-at-yahoo has the bare-faced cheek to post a review on Amazon, headlined "Not great, but I still like it". Not great?! "Lacklustre writing style ... poor writing ... rather flat characters"?! Whaaat? How dare he disrespect the wellspring of cyberpunk!)

So, if I end up teaching AI next year, I know what my recommended reading will be!


[*] Abstract: Conformant planning is the task of generating plans given uncertainty about the initial state and action effects, and without any sensing capabilities during plan execution. The plan should be successful regardless of which particular initial world we start from. It is well known that conformant planning can be transformed into a search problem in belief space, the space whose elements are sets of possible worlds. We introduce a new representation of that search space, replacing the need to store sets of possible worlds with a need to reason about the effects of action sequences. The reasoning is done by implication tests on propositional formulas in conjunctive normal form (CNF) that capture the action sequence semantics. Based on this approach, we extend the classical heuristic forward-search planning system FF to the conformant setting. The key to this extension is an appropriate extension of the relaxation that underlies FF's heuristic function, and of FF's machinery for solving relaxed planning problems: the extended machinery includes a stronger form of the CNF implication tests that we use to reason about the effects of action sequences. Our experimental evaluation shows the resulting planning system to be superior to the state-of-the-art conformant planners MBP, KACMBP, and GPT in a variety of benchmark domains. [Now scroll back up]
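For anyone (like me) who glazed over at "belief space": the idea is just ordinary state-space search where each "state" is the set of worlds you might currently be in. The paper's contribution is avoiding those explicit sets via CNF reasoning, but a naive sketch of the underlying belief-space search might look something like the following (Python; the robot example and all the names are mine, not the paper's, and this is emphatically not their planner FF):

```python
from collections import deque

def conformant_plan(initial_worlds, actions, is_goal):
    """Breadth-first search in belief space.

    initial_worlds : set of possible initial worlds (hashable states)
    actions        : dict mapping action name -> function(world) -> successor world
    is_goal        : predicate on a single world

    Returns a list of action names that reaches the goal from *every*
    possible initial world, or None if no such plan exists.
    """
    start = frozenset(initial_worlds)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        belief, plan = frontier.popleft()
        # A conformant plan must succeed regardless of which world is the real one.
        if all(is_goal(w) for w in belief):
            return plan
        for name, apply_action in actions.items():
            successor = frozenset(apply_action(w) for w in belief)
            if successor not in seen:
                seen.add(successor)
                frontier.append((successor, plan + [name]))
    return None


# Toy example: a robot is somewhere in {0, 1, 2} on a line with a wall at 0,
# has no sensors, and must end up provably at position 0.
if __name__ == "__main__":
    actions = {"left": lambda x: max(0, x - 1)}
    print(conformant_plan({0, 1, 2}, actions, lambda x: x == 0))  # ['left', 'left']
```

Two blind steps left work from every starting position, which is the whole conformant trick in miniature.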
[**] Abstract: Deeper searches in game-playing programs relying on the minimax principle generally produce better results. Theoretical analyses, however, suggest that in many cases minimaxing amplifies the noise introduced by the heuristic function used to evaluate the leaves of the game tree, leading to what is known as pathological behavior, where deeper searches produce worse results. In most minimax models analyzed in previous research, positions' true values and sometimes also heuristic values were only losses and wins. In contrast to this, a model is proposed in this paper that uses real numbers for both true and heuristic values. This model did not behave pathologically in the experiments performed. The mechanism that causes deeper searches to produce better evaluations is explained. A comparison with chess is made, indicating that the model realistically reflects position evaluations in chess-playing programs. Conditions under which the pathology might appear in a real-value model are also examined. The essential difference between our real-value model and the common two-value model, which causes the pathology in the two-value model, is identified. Most previous research reports that the pathology tends to disappear when there are dependences between the values of sibling nodes in a game tree. In this paper, another explanation is presented which indicates that in the two-value models the error of the heuristic evaluation was not modeled realistically. [Phew ... now scroll back up]
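And the object under discussion in the second abstract is nothing more exotic than depth-limited minimax with a real-valued (and noisy) evaluation at the leaves. A bare-bones sketch (Python; the names and the toy tree are mine, not the authors' experimental code):

```python
import random

def minimax(node, depth, children, heuristic, maximizing=True):
    """Plain depth-limited minimax with a real-valued heuristic.

    node      : any game position
    depth     : remaining search depth
    children  : function(node) -> list of successor positions ([] at terminals)
    heuristic : function(node) -> estimated real value of the position
    """
    succ = children(node)
    if depth == 0 or not succ:
        return heuristic(node)  # noisy leaf estimate of the true value
    values = [minimax(c, depth - 1, children, heuristic, not maximizing)
              for c in succ]
    return max(values) if maximizing else min(values)


# Toy demo: a fixed two-ply tree whose leaves are evaluated with some noise.
if __name__ == "__main__":
    tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
    true_value = {"a1": 0.9, "a2": 0.2, "b1": 0.4, "b2": 0.5}
    noisy = lambda n: true_value.get(n, 0.0) + random.uniform(-0.1, 0.1)
    print(minimax("root", 2, lambda n: tree.get(n, []), noisy))
```

The pathology question is then whether, averaged over many such trees, deeper calls to this routine track the true minimax value better or worse than shallow ones; with real-valued leaves, the paper's answer is "better". [Phew ... now scroll back up, again]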