dc.date.accessioned | 2008-05-05T15:46:05Z | |
dc.date.accessioned | 2018-11-26T22:25:16Z | |
dc.date.available | 2008-05-05T15:46:05Z | |
dc.date.available | 2018-11-26T22:25:16Z | |
dc.date.issued | 2008-04-11 | en_US |
dc.identifier.uri | http://hdl.handle.net/1721.1/41517 | |
dc.identifier.uri | http://repository.aust.edu.ng/xmlui/handle/1721.1/41517 | |
dc.description.abstract | When a series of problems is related, representations learned on earlier tasks may be useful for solving later ones. In this paper we propose a novel approach to transfer learning with low-dimensional, non-linear latent spaces. We show how such representations can be jointly learned across multiple tasks in a Gaussian Process framework. When transferred to a new task with relatively few training examples, learning can be faster and/or more accurate. Experiments on digit recognition and newsgroup classification tasks show significantly improved performance compared to baselines that use a representation derived from a semi-supervised learning approach or a discriminative approach trained only on the target data. | en_US |
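To make the abstract's idea concrete, below is a minimal sketch of jointly learning a shared low-dimensional latent space across tasks with Gaussian processes, in the spirit of a multi-task GP-LVM. This is not the report's implementation: the toy data, the RBF kernel with fixed hyperparameters, the optimizer choice, and all variable names are illustrative assumptions.

```python
# Sketch: several tasks share one latent representation X; we fit X by
# maximizing the sum of GP marginal log-likelihoods over all tasks.
# Toy data and fixed kernel hyperparameters are assumptions for illustration.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
N, Q = 30, 2                            # data points, latent dimensionality
tasks = [rng.standard_normal((N, 5)),   # task 1 observations (N x D1)
         rng.standard_normal((N, 3))]   # task 2 observations (N x D2)

def rbf_kernel(X, lengthscale=1.0, variance=1.0, noise=1e-2):
    # Squared-exponential kernel on latent coordinates, plus noise jitter.
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(X**2, axis=1)[None, :] - 2.0 * X @ X.T
    return variance * np.exp(-0.5 * sq / lengthscale**2) + noise * np.eye(len(X))

def neg_log_likelihood(x_flat):
    # GP-LVM objective summed over tasks; all tasks share the same X
    # (and hence the same kernel matrix K), which couples them.
    X = x_flat.reshape(N, Q)
    K = rbf_kernel(X)
    _, logdet = np.linalg.slogdet(K)
    Kinv = np.linalg.inv(K)
    nll = 0.0
    for Y in tasks:
        D = Y.shape[1]
        nll += 0.5 * (D * logdet + np.trace(Kinv @ Y @ Y.T))
    return nll

res = minimize(neg_log_likelihood, rng.standard_normal(N * Q), method="L-BFGS-B")
X_shared = res.x.reshape(N, Q)          # jointly learned latent coordinates
print("final negative log-likelihood:", res.fun)
```

In a transfer setting, one would reuse the structure learned here (the latent space and kernel settings) when fitting a new task with few labeled examples, rather than learning a representation from the target data alone.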
dc.format.extent | 10 p. | en_US |
dc.relation | Massachusetts Institute of Technology Computer Science and Artificial Intelligence Laboratory | en_US |
dc.title | Transferring Nonlinear Representations using Gaussian Processes with a Shared Latent Space | en_US |