Show simple item record

Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements

dc.date.accessioned: 2006-01-10T18:47:00Z
dc.date.accessioned: 2018-11-24T10:24:42Z
dc.date.available: 2006-01-10T18:47:00Z
dc.date.available: 2018-11-24T10:24:42Z
dc.date.issued: 2006-01-09
dc.identifier.uri: http://hdl.handle.net/1721.1/30604
dc.identifier.uri: http://repository.aust.edu.ng/xmlui/handle/1721.1/30604
dc.description.abstract: TREC Definition and Relationship questions are evaluated on the basis of information nuggets that may be contained in system responses. Human evaluators provide informal descriptions of each nugget, and judgements (assignments of nuggets to responses) for each response submitted by participants. The best present automatic evaluation for these kinds of questions is Pourpre. Pourpre uses a stemmed unigram similarity of responses with nugget descriptions, yielding an aggregate result that is difficult to interpret, but is useful for relative comparison. Nuggeteer, by contrast, uses both the human descriptions and the human judgements, and makes binary decisions about each response, so that the end result is as interpretable as the official score. I explore n-gram length, use of judgements, stemming, and term weighting, and provide a new algorithm quantitatively comparable to, and qualitatively better than, the state of the art.
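The nugget-matching idea the abstract describes can be sketched as follows. This is an illustrative toy, not the actual Nuggeteer or Pourpre implementation: the crude suffix stemmer, the 0.5 threshold, and the example nuggets are all assumptions, but the shape (stemmed unigram similarity of a response against each nugget description, followed by a binary contains/does-not-contain decision per nugget) mirrors the approach the abstract outlines.

```python
def stem(word):
    # Crude suffix stripping standing in for a real stemmer (e.g. Porter).
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def unigram_similarity(response, description):
    # Fraction of the nugget description's stemmed unigrams that the
    # response covers (set overlap, order-insensitive).
    resp = {stem(w) for w in response.lower().split()}
    desc = {stem(w) for w in description.lower().split()}
    if not desc:
        return 0.0
    return len(resp & desc) / len(desc)

def judge(response, nugget_descriptions, threshold=0.5):
    # Binary decision per nugget, analogous to official assessor
    # judgements; the threshold value is an assumption.
    return {name: unigram_similarity(response, desc) >= threshold
            for name, desc in nugget_descriptions.items()}

# Hypothetical nuggets for a definition question about a company.
nuggets = {"founded": "company founded in 1998",
           "location": "headquartered in Mountain View California"}
decisions = judge("The company was founded in 1998 by two students.", nuggets)
# decisions["founded"] is True; decisions["location"] is False
```

A binary per-nugget decision like this is what makes the aggregate score directly interpretable against the official nugget recall, in contrast to a raw similarity aggregate.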
dc.format.extent: 15 p.
dc.format.extent: 236402 bytes
dc.language.iso: en_US
dc.subject: natural language
dc.subject: question answering
dc.title: Nuggeteer: Automatic Nugget-Based Evaluation Using Descriptions and Judgements


Files in this item

Files          Size     Format           View
nuggeteer.pdf  236.4Kb  application/pdf  View/Open

This item appears in the following Collection(s)
