
Learning by Failing to Explain

dc.date.accessioned: 2004-10-20T20:02:23Z
dc.date.accessioned: 2018-11-24T10:22:08Z
dc.date.available: 2004-10-20T20:02:23Z
dc.date.available: 2018-11-24T10:22:08Z
dc.date.issued: 1986-05-01 en_US
dc.identifier.uri: http://hdl.handle.net/1721.1/6850
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6850
dc.description.abstract: Explanation-based Generalization requires that the learner obtain an explanation of why a precedent exemplifies a concept. It is, therefore, useless if the system fails to find this explanation. However, it is not necessary to give up and resort to purely empirical generalization methods. In fact, the system may already know almost everything it needs to explain the precedent. Learning by Failing to Explain is a method that exploits current knowledge to prune complex precedents, isolating the mysterious parts of the precedent. The idea has two parts: the notion of partially analyzing a precedent to get rid of the parts which are already explainable, and the notion of re-analyzing old rules in terms of new ones, so that more general rules are obtained. en_US
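The "partial analysis" step in the abstract can be illustrated with a minimal sketch. The encoding here is hypothetical, not taken from the thesis: a precedent is assumed to be a set of observed parts, and each known rule a (premises, conclusion) pair, where the conclusion counts as already explainable when all of its premises are present. What survives pruning is the unexplained residue left for further learning.

```python
def prune_precedent(parts, rules):
    """Remove every part of a precedent that some known rule explains.

    parts: iterable of observed parts of the precedent (hypothetical encoding).
    rules: list of (premises, conclusion) pairs; a part is explainable when
           it is the conclusion of a rule whose premises all appear in parts.
    Returns the "mysterious" residue that current knowledge cannot explain.
    """
    parts = set(parts)
    explainable = {conclusion
                   for premises, conclusion in rules
                   if conclusion in parts and set(premises) <= parts}
    return parts - explainable


# Illustrative use: rule {"a", "b"} -> "c" explains away part "c",
# leaving "x" (and the rule's own premises) as the unexplained residue.
residue = prune_precedent({"a", "b", "c", "x"},
                          [({"a", "b"}, "c")])
print(residue)  # prints {'a', 'b', 'x'} (set order may vary)
```

This single pass is only a sketch; the thesis's second idea, re-analyzing old rules in terms of newly learned ones to obtain more general rules, is not modeled here.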
dc.format.extent: 140 p. en_US
dc.format.extent: 15467251 bytes
dc.format.extent: 5755509 bytes
dc.language.iso: en_US
dc.subject: learning en_US
dc.subject: explanation en_US
dc.subject: heuristic parsing en_US
dc.subject: design en_US
dc.subject: graph grammars en_US
dc.subject: subgraph isomorphism en_US
dc.title: Learning by Failing to Explain en_US


Files in this item

Files          Size     Format                  View
AITR-906.pdf   5.755Mb  application/pdf         View/Open
AITR-906.ps    15.46Mb  application/postscript  View/Open

This item appears in the following Collection(s)
