
Learning from Incomplete Data

dc.date.accessioned: 2004-10-20T20:49:37Z
dc.date.accessioned: 2018-11-24T10:23:20Z
dc.date.available: 2004-10-20T20:49:37Z
dc.date.available: 2018-11-24T10:23:20Z
dc.date.issued: 1995-01-24
dc.identifier.uri: http://hdl.handle.net/1721.1/7202
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7202
dc.description.abstract: Real-world learning tasks often involve high-dimensional data sets with complex patterns of missing features. In this paper we review the problem of learning from incomplete data from two statistical perspectives---the likelihood-based and the Bayesian. The goal is two-fold: to place current neural network approaches to missing data within a statistical framework, and to describe a set of algorithms, derived from the likelihood-based framework, that handle clustering, classification, and function approximation from incomplete data in a principled and efficient manner. These algorithms are based on mixture modeling and make two distinct appeals to the Expectation-Maximization (EM) principle (Dempster, Laird, and Rubin 1977)---both for the estimation of mixture components and for coping with the missing data.
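The two uses of EM that the abstract describes---fitting the mixture components and treating missing feature values as hidden variables---can be sketched as follows. This is a minimal illustration under simplifying assumptions (a diagonal-covariance Gaussian mixture, missing values marked as NaN), not a reproduction of the paper's algorithms; the function name `em_mixture` and all parameters are hypothetical.

```python
import numpy as np

def em_mixture(X, k=2, iters=50, seed=0):
    """EM for a diagonal-covariance Gaussian mixture with missing (NaN) features.

    Sketch only: responsibilities are computed by marginalizing over the
    missing dimensions (exact for a diagonal covariance), and in the M-step
    each missing entry is replaced by its conditional expectation under the
    component, with the component variance added back for those entries.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    obs = ~np.isnan(X)                       # mask of observed entries
    mu = rng.normal(size=(k, d))             # component means
    var = np.ones((k, d))                    # diagonal variances
    pi = np.full(k, 1.0 / k)                 # mixing proportions

    for _ in range(iters):
        # E-step: log responsibilities from observed dimensions only.
        logr = np.tile(np.log(pi), (n, 1))
        for j in range(k):
            z = np.where(obs,
                         (X - mu[j]) ** 2 / var[j] + np.log(2 * np.pi * var[j]),
                         0.0)                # missing dims contribute nothing
            logr[:, j] += -0.5 * z.sum(axis=1)
        logr -= logr.max(axis=1, keepdims=True)
        r = np.exp(logr)
        r /= r.sum(axis=1, keepdims=True)

        # M-step: impute missing entries with the component mean, then
        # update parameters with responsibility-weighted statistics.
        for j in range(k):
            Xhat = np.where(obs, X, mu[j])
            w = r[:, j][:, None]
            mu[j] = (w * Xhat).sum(axis=0) / w.sum()
            dev2 = (Xhat - mu[j]) ** 2
            # a missing entry also contributes its (old) conditional variance
            dev2 = np.where(obs, dev2, dev2 + var[j])
            var[j] = (w * dev2).sum(axis=0) / w.sum() + 1e-6
        pi = r.mean(axis=0)
    return mu, var, pi, r
```

The key point the abstract makes is visible in the structure: the same EM machinery handles both the latent component assignments (the responsibilities `r`) and the missing feature values (the imputation inside the M-step), rather than requiring a separate preprocessing step to fill in the data.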
dc.format.extent: 11 p.
dc.format.extent: 388268 bytes
dc.format.extent: 515095 bytes
dc.language.iso: en_US
dc.subject: AI
dc.subject: MIT
dc.subject: Artificial Intelligence
dc.subject: missing data
dc.subject: mixture models
dc.subject: statistical learning
dc.subject: EM algorithm
dc.subject: maximum likelihood
dc.subject: neural networks
dc.title: Learning from Incomplete Data


Files in this item

File          Size     Format                  View
AIM-1509.pdf  515.0Kb  application/pdf         View/Open
AIM-1509.ps   388.2Kb  application/postscript  View/Open

This item appears in the following Collection(s)
