
Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering

dc.date.accessioned: 2004-10-04T14:35:52Z
dc.date.accessioned: 2018-11-24T10:11:32Z
dc.date.available: 2004-10-04T14:35:52Z
dc.date.available: 2018-11-24T10:11:32Z
dc.date.issued: 1990-04-01
dc.identifier.uri: http://hdl.handle.net/1721.1/6014
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/6014
dc.description.abstract: The theory developed in Poggio and Girosi (1989) shows the equivalence between regularization and a class of three-layer networks that we call regularization networks or Hyper Basis Functions. These networks are also closely related to the classical Radial Basis Functions used for interpolation tasks and to several pattern recognition and neural network algorithms. In this note, we extend the theory by defining a general form of these networks with two sets of modifiable parameters in addition to the coefficients $c_\alpha$: moving centers and adjustable norm weights.
dc.format.extent: 18 p.
dc.format.extent: 2271885 bytes
dc.format.extent: 901116 bytes
dc.language.iso: en_US
dc.subject: learning networks
dc.subject: regularization
dc.title: Extensions of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering
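The abstract describes Hyper Basis Function networks: a weighted sum of radial basis functions in which, beyond the coefficients $c_\alpha$, both the centers and a norm-weight matrix are adjustable. The following is a minimal illustrative sketch of evaluating such a network, assuming a Gaussian basis function and a squared weighted norm $\|x - t_\alpha\|_W^2 = (x - t_\alpha)^\top W^\top W (x - t_\alpha)$; the function name and parameter layout are hypothetical, not taken from the paper.

```python
import numpy as np

def hbf(x, centers, coeffs, W):
    """Evaluate a Hyper Basis Function network at point x (sketch).

    centers : (n_centers, dim) movable centers t_alpha
    coeffs  : (n_centers,) coefficients c_alpha
    W       : (dim, dim) norm-weight matrix (adjustable in the theory)
    Gaussian basis assumed here for illustration only.
    """
    diffs = centers - x              # displacements x - t_alpha (sign-symmetric in the norm)
    wd = diffs @ W.T                 # apply the norm-weight matrix
    r2 = np.sum(wd * wd, axis=1)     # squared weighted distances ||x - t_alpha||_W^2
    return coeffs @ np.exp(-r2)      # sum_alpha c_alpha * G(||x - t_alpha||_W^2)

# Tiny usage example with two centers in 2-D and an identity norm weight.
centers = np.array([[0.0, 0.0], [1.0, 1.0]])
coeffs = np.array([1.0, -0.5])
W = np.eye(2)
y = hbf(np.array([0.0, 0.0]), centers, coeffs, W)
```

With an identity `W` this reduces to a classical Radial Basis Function expansion; learning in the extended theory would also adjust `centers` and `W`, not just `coeffs`.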


Files in this item

File          Size     Format
AIM-1167.pdf  901.1Kb  application/pdf
AIM-1167.ps   2.271Mb  application/postscript

