
Evaluating a Dependency Parser on DeReKo

We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task, which contains text from the news domain, whereas DeReKo also covers other domains including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as the main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain from the training corpus.
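The core idea of the abstract, using the parser's own attachment probabilities as a proxy for attachment scores and validating the proxy via correlation on annotated data, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the per-document numbers and the plain Pearson-correlation helper are purely hypothetical.

```python
# Hypothetical sketch: correlate a parser's average attachment probability
# per document with the attachment score measured against gold annotations.
# The data values below are illustrative, not taken from the paper.
from statistics import mean


def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx ** 0.5 * vary ** 0.5)


# Per-document averages: the parser's estimated attachment probability
# versus the unlabeled attachment score (UAS) on a manually annotated set.
est_prob = [0.97, 0.93, 0.88, 0.84]   # parser confidence (illustrative)
gold_uas = [0.95, 0.92, 0.86, 0.81]   # measured UAS (illustrative)

r = pearson(est_prob, gold_uas)
print(f"Pearson r = {r:.3f}")
```

If such a correlation is high, estimated scores can rank unannotated domains by expected parsing quality without further manual annotation, which is the evaluation strategy the abstract describes.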




Author:Peter Fankhauser, Bich-Ngoc Do, Marc Kupietz
Parent Title (English):Proceedings of the LREC 2020 Workshop, Language Resources and Evaluation Conference, 11–16 May 2020, 8th Workshop on Challenges in the Management of Large Corpora (CMLC-8)
Publisher:European Language Resources Association
Place of publication:Paris
Editor:Piotr Bański, Adrien Barbaresi, Simon Clematide, Marc Kupietz, Harald Lüngen, Ines Pisetta
Document Type:Conference Proceeding
Year of first Publication:2020
Date of Publication (online):2020/05/12
Tag:Dependency Parsing; Large Corpora
GND Keyword:Computerlinguistik; Evaluation; Korpus <Linguistik>; Parser; Zuverlässigkeit
First Page:10
Last Page:14
DDC classes:400 Sprache / 400 Sprache, Linguistik
Open Access?:yes
Leibniz-Classification:Sprache, Linguistik
Licence (English):License LogoCreative Commons - Attribution-NonCommercial 4.0 International