dc.date.accessioned | 2005-12-22T02:29:32Z | |
dc.date.accessioned | 2018-11-24T10:24:29Z | |
dc.date.available | 2005-12-22T02:29:32Z | |
dc.date.available | 2018-11-24T10:24:29Z | |
dc.date.issued | 2005-05-17 | |
dc.identifier.uri | http://hdl.handle.net/1721.1/30545 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/30545 | |
dc.description.abstract | We study properties of algorithms that minimize (or almost minimize) empirical error over a Donsker class of functions. We show that the L2-diameter of the set of almost-minimizers converges to zero in probability. Hence, as the number of samples grows, it becomes unlikely that adding one point (or several points) to the training set will result in a large jump (in L2 distance) to a new hypothesis. We also show that, under some conditions, the expected errors of the almost-minimizers become close at a rate faster than n^{-1/2}. | |
dc.format.extent | 9 p. | |
dc.format.extent | 7033622 bytes | |
dc.format.extent | 434782 bytes | |
dc.language.iso | en_US | |
dc.subject | AI | |
dc.subject | empirical risk minimization | |
dc.subject | stability | |
dc.subject | empirical processes | |
dc.title | Some Properties of Empirical Risk Minimization over Donsker Classes | |
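The abstract states two convergence results. The LaTeX sketch below formalizes them under notation introduced here for illustration (P, P_n, \xi_n, and \mathcal{M}_n are assumed symbols and need not match the paper's own); the precise conditions for the second claim are in the paper, not the abstract.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Hedged formal sketch of the two claims in the abstract; the
% notation (P, P_n, \xi_n, \mathcal{M}_n) is introduced here and
% is not necessarily the paper's.
Let $\mathcal{F}$ be a $P$-Donsker class, $P_n$ the empirical
measure of $n$ i.i.d. samples from $P$, and $\xi_n \downarrow 0$
a tolerance. Define the set of $\xi_n$-almost-minimizers of the
empirical error:
\[
  \mathcal{M}_n \;=\; \Bigl\{\, f \in \mathcal{F} :
    P_n f \;\le\; \inf_{g \in \mathcal{F}} P_n g + \xi_n \,\Bigr\}.
\]
The first claim is that the $L_2(P)$-diameter of this set vanishes
in probability:
\[
  \operatorname{diam}_{L_2(P)} \mathcal{M}_n
  \;=\; \sup_{f,\, g \in \mathcal{M}_n} \|f - g\|_{L_2(P)}
  \;\xrightarrow{\;\mathbb{P}\;}\; 0
  \qquad (n \to \infty),
\]
so with high probability an almost-minimizer cannot jump far in
$L_2$ distance when training points are added. The second claim is
that, under additional conditions stated in the paper, the expected
errors of almost-minimizers coalesce faster than the $n^{-1/2}$
empirical-process rate:
\[
  \sqrt{n}\, \sup_{f,\, g \in \mathcal{M}_n}
    \bigl( P f - P g \bigr)
  \;\xrightarrow{\;\mathbb{P}\;}\; 0 .
\]

\end{document}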