
Improving extractive dialogue summarization by utilizing human feedback

  • Automatic summarization systems are usually trained and evaluated in a particular domain on fixed data sets. When such a system is applied to slightly different input, labor- and cost-intensive annotations must be created to retrain it. We address this problem by providing users with a GUI that allows them to correct automatically produced, imperfect summaries. Each corrected summary is in turn added to the pool of training data, and the performance of the system is expected to improve as it adapts to the new domain.
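The correction loop described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation; all names (`TrainingPool`, `add_correction`, `correct_via_gui`) are assumptions.

```python
# Sketch of the human-in-the-loop adaptation cycle: the system produces an
# imperfect summary, a user corrects it in a GUI, and the corrected summary
# joins the training pool for the next retraining round.
# All identifiers here are illustrative, not from the paper.

class TrainingPool:
    """Pool of (dialogue, extractive summary) training examples."""

    def __init__(self, examples=None):
        self.examples = list(examples or [])

    def add_correction(self, dialogue, corrected_summary):
        # A user-corrected summary becomes a new gold-standard example,
        # letting the summarizer adapt to the new domain on retraining.
        self.examples.append((dialogue, corrected_summary))


def correct_via_gui(dialogue, system_summary):
    # Placeholder for the GUI step: in practice the user edits the
    # automatic summary; here we simulate a fixed correction.
    return system_summary + " at noon"


pool = TrainingPool()
dialogue = ["A: shall we meet tomorrow?", "B: yes, at noon."]
system_summary = "A and B meet tomorrow"
pool.add_correction(dialogue, correct_via_gui(dialogue, system_summary))
print(len(pool.examples))
```

Each pass through the GUI grows the pool, so retraining gradually shifts the system toward the new domain without a separate annotation campaign.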

Author:Margot Mieskes, Christoph Müller, Michael Strube
Parent Title (English):Proceedings of the IASTED international conference on artificial intelligence and applications as part of the 25th IASTED international multi-conference on applied informatics, February 12-14, 2007, Innsbruck, Austria
Publisher:ACTA Press
Place of publication:Anaheim [et al.]
Editor:Vladan Devedzic
Document Type:Conference Proceeding
Year of first Publication:2007
Date of Publication (online):2022/11/30
Publishing Institution:Leibniz-Institut für Deutsche Sprache (IDS) [secondary publication]
Contributing Corporation:International Association of Science and Technology for Development (IASTED)
Tag:GUI; automatic summarization; feedback; learning; multi-party dialogues
GND Keyword:Annotation; Computerlinguistik; Dialog; Digital Humanities; Graphische Benutzeroberfläche; Maschinelles Lernen; Zusammenfassung
First Page:627
Last Page:632
DDC classes:400 Sprache / 400 Sprache, Linguistik
Open Access?:yes
Licence (German):Urheberrechtlich geschützt (protected by copyright)