
Model Selection in Summary Evaluation

dc.date.accessioned: 2004-10-20T20:48:55Z
dc.date.accessioned: 2018-11-24T10:23:15Z
dc.date.available: 2004-10-20T20:48:55Z
dc.date.available: 2018-11-24T10:23:15Z
dc.date.issued: 2002-12-01
dc.identifier.uri: http://hdl.handle.net/1721.1/7181
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/7181
dc.description.abstract: A difficulty in the design of automated text summarization algorithms is objective evaluation. Viewing summarization as a tradeoff between length and information content, we introduce a technique based on a hierarchy of classifiers to rank, through model selection, different summarization methods. This summary evaluation technique allows for broader comparison of summarization methods than traditional techniques of summary evaluation. We present an empirical study of two simple, albeit widely used, summarization methods that shows the different usages of this automated task-based evaluation system and confirms the results obtained with human-based evaluation methods over smaller corpora.
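The abstract describes ranking summarization methods by a downstream task rather than by direct comparison to reference summaries. The sketch below is a hypothetical toy illustration of that idea, not the paper's actual pipeline or data: two simple summarizers (a lead-k baseline and a random-k baseline) are ranked by how accurately a small word-count classifier, trained and tested on their summaries, recovers the topic of synthetic documents. All names, topics, and documents here are invented for illustration.

```python
# Toy task-based summary evaluation (illustrative assumption, not the
# paper's method): rank summarizers by downstream classification accuracy.
import random
from collections import Counter

random.seed(0)

TOPICS = {
    "sports": ["game", "team", "score", "player", "season", "coach"],
    "finance": ["market", "stock", "bank", "profit", "trade", "price"],
}
FILLER = ["the", "of", "and", "report", "today", "said", "new", "city"]

def make_doc(topic, length=60):
    """Synthetic document: topical words up front (news-lead style), filler after."""
    words = random.choices(TOPICS[topic], k=length // 3)
    words += random.choices(FILLER, k=length - len(words))
    return words

def lead_summary(doc, k=10):
    return doc[:k]                 # first-k-words baseline

def random_summary(doc, k=10):
    return random.sample(doc, k)   # random-k-words baseline

def train(examples):
    """Per-topic word counts: a tiny multinomial topic model."""
    counts = {t: Counter() for t in TOPICS}
    for topic, words in examples:
        counts[topic].update(words)
    return counts

def classify(counts, words):
    """Assign the topic whose training counts best cover the summary's words."""
    def score(topic):
        total = sum(counts[topic].values()) + 1
        return sum(counts[topic][w] / total for w in words)
    return max(TOPICS, key=score)

def evaluate(summarize, n=200):
    """Accuracy of a classifier trained and tested on this method's summaries."""
    data = [(t, make_doc(t)) for _ in range(n) for t in TOPICS]
    split = len(data) // 2
    model = train([(t, summarize(d)) for t, d in data[:split]])
    test = data[split:]
    correct = sum(classify(model, summarize(d)) == t for t, d in test)
    return correct / len(test)

print("lead-10 summaries:  ", evaluate(lead_summary))
print("random-10 summaries:", evaluate(random_summary))
```

The summarizer whose summaries support higher downstream accuracy at the same length is ranked higher, mirroring the length-versus-information-content tradeoff the abstract describes.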
dc.format.extent: 1739841 bytes
dc.format.extent: 1972183 bytes
dc.language.iso: en_US
dc.subject: AI
dc.title: Model Selection in Summary Evaluation


Files in this item

File              | Size    | Format
AIM-2002-023.pdf  | 1.972Mb | application/pdf
AIM-2002-023.ps   | 1.739Mb | application/postscript
