
Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples

dc.date.accessioned: 2004-10-04T15:14:46Z
dc.date.accessioned: 2018-11-24T10:14:44Z
dc.date.available: 2004-10-04T15:14:46Z
dc.date.available: 2018-11-24T10:14:44Z
dc.date.issued: 1990-07-01
dc.identifier.uri: http://hdl.handle.net/1721.1/6530
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6530
dc.description.abstract: Learning an input-output mapping from a set of examples can be regarded as synthesizing an approximation of a multi-dimensional function. From this point of view, this form of learning is closely related to regularization theory. In this note, we extend the theory by introducing ways of dealing with two aspects of learning: learning in the presence of unreliable examples and learning from positive and negative examples. The first extension corresponds to dealing with outliers among the sparse data. The second one corresponds to exploiting information about points or regions in the range of the function that are forbidden.
dc.format.extent: 3388253 bytes
dc.format.extent: 1212626 bytes
dc.language.iso: en_US
dc.title: Extensions of a Theory of Networks for Approximation and Learning: Outliers and Negative Examples
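The first extension described in the abstract, tolerating outliers among sparse examples, can be illustrated with a small sketch. This is not the memo's actual formulation; the basis choice, parameters, and the Huber-style reweighting scheme below are all illustrative assumptions. The idea shown: fit a regularized radial-basis-function expansion to data containing one corrupted example, and compare plain least squares against a variant that iteratively downweights points with large residuals.

```python
import numpy as np

# Illustrative sketch only: regularized Gaussian-RBF regression on sparse
# data with one injected outlier. All parameters here are invented.
x = np.linspace(0.0, 1.0, 12)
y = np.sin(2 * np.pi * x)
y[5] += 3.0                          # corrupt one example (the outlier)

t = np.linspace(0.0, 1.0, 8)         # RBF centers
s, lam = 0.15, 1e-3                  # kernel width, regularization strength
G = np.exp(-(x[:, None] - t[None, :]) ** 2 / (2 * s ** 2))

def fit(w):
    """Solve the weighted, regularized normal equations for coefficients."""
    A = G.T @ (w[:, None] * G) + lam * np.eye(len(t))
    return np.linalg.solve(A, G.T @ (w * y))

c_ls = fit(np.ones_like(x))          # plain regularized least squares

# Robust variant: iteratively downweight examples with large residuals,
# so the outlier stops dominating the fit.
w = np.ones_like(x)
for _ in range(5):
    r = np.abs(G @ fit(w) - y)
    k = 1.0                          # fixed Huber-style threshold (assumed)
    w = np.where(r <= k, 1.0, k / np.maximum(r, 1e-12))
c_rob = fit(w)

# Compare both fits against the true function on the clean examples.
clean = np.delete(np.arange(len(x)), 5)
truth = np.sin(2 * np.pi * x[clean])
err_ls = np.abs(G[clean] @ c_ls - truth).mean()
err_rob = np.abs(G[clean] @ c_rob - truth).mean()
```

On the clean points, the reweighted fit should track the underlying function more closely than plain least squares, since the latter distorts itself to partially accommodate the corrupted example.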


Files in this item

File           Size       Format
AIM-1220.pdf   1.212 MB   application/pdf
AIM-1220.ps    3.388 MB   application/postscript
