
Learning Disjunctive Concepts From Examples

dc.date.accessioned: 2004-10-04T14:51:10Z
dc.date.accessioned: 2018-11-24T10:12:55Z
dc.date.available: 2004-10-04T14:51:10Z
dc.date.available: 2018-11-24T10:12:55Z
dc.date.issued: 1979-09-01
dc.identifier.uri: http://hdl.handle.net/1721.1/6325
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6325
dc.description.abstract: This work proposes a theory for machine learning of disjunctive concepts. The paradigm followed is one of teaching and testing, where teaching is accomplished by presenting a sequence of positive and negative examples of the target concept. The core of the theory has been implemented and tested as computer programs. The theory addresses the problem of deciding when it is appropriate to merge descriptions and when it is appropriate to form a disjunctive split. The approach outlined has the advantage that it allows recovery from overgeneralizations. Negative examples play an important role in the decision-making process, as well as in detecting overgeneralizations and instigating recovery. Because of this ability to recover from overgeneralizations when they occur, the system is less sensitive to the ordering of the training sequence than other systems. The theory is presented in a domain- and representation-independent format: a few conditions abstract the assumptions made about any representation scheme to be employed within the theory. The work is illustrated in several different domains, demonstrating the generality and flexibility of the theory.
dc.format.extent: 15147837 bytes
dc.format.extent: 12035000 bytes
dc.language.iso: en_US
dc.title: Learning Disjunctive Concepts From Examples
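
The abstract above describes a decision procedure: merge a new positive example into an existing description when it is safe to do so, otherwise form a disjunctive split, and use negative examples to detect overgeneralization and trigger recovery. As a rough, hypothetical illustration only (this is not the algorithm given in AIM-548, and the names and representation below are invented for the sketch), a minimal Python version over fixed-length attribute vectors might look like the following.

    # Illustrative sketch only: a toy disjunctive-concept learner over attribute
    # vectors, not the procedure from AIM-548. A description is a tuple whose
    # slots are either a concrete value or None (meaning "any value").

    def covers(description, example):
        """A description covers an example if every constrained slot matches."""
        return all(d is None or d == e for d, e in zip(description, example))

    def generalize(description, example):
        """Minimal generalization: relax (set to None) every slot that disagrees."""
        return tuple(d if d == e else None for d, e in zip(description, example))

    class DisjunctiveLearner:
        def __init__(self):
            self.disjuncts = []   # current concept: a disjunction of descriptions
            self.negatives = []   # remembered negative examples

        def observe(self, example, is_positive):
            if not is_positive:
                self.negatives.append(example)
                # Recovery: a disjunct covering a negative example was
                # overgeneralized; retract it (a real system would re-specialize).
                self.disjuncts = [d for d in self.disjuncts
                                  if not covers(d, example)]
                return
            if any(covers(d, example) for d in self.disjuncts):
                return  # already covered, nothing to learn
            # Try to merge the example into an existing disjunct, but only if the
            # merged description excludes every known negative example.
            for i, d in enumerate(self.disjuncts):
                merged = generalize(d, example)
                if not any(covers(merged, n) for n in self.negatives):
                    self.disjuncts[i] = merged
                    return
            # No safe merge: form a disjunctive split by adding a new disjunct.
            self.disjuncts.append(tuple(example))

    learner = DisjunctiveLearner()
    learner.observe(("red", "small", "round"), True)
    learner.observe(("red", "large", "round"), True)    # merges to ("red", None, "round")
    learner.observe(("blue", "large", "round"), False)  # negative example
    learner.observe(("green", "small", "square"), True) # no safe merge: new disjunct
    print(learner.disjuncts)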


Files in this item

File          Size       Format
AIM-548.pdf   12.03 MB   application/pdf
AIM-548.ps    15.14 MB   application/postscript

