This White Paper sets out commonly agreed definitions of activities of consortia within NFDI. It aims to provide a common basis for reporting and reference regarding selected questions of cross-consortial relevance in the DFG's template for the Interim Reports. The questions were prioritised by an NFDI Task Force on Evaluation and Reporting (formerly Task Force Monitoring) in the course of discussing possible answers to the DFG template. In this process, the need to agree on a generalizable meaning of terms commonly used in the context of NFDI, and of reporting in particular, was identified from cross-consortial perspectives. The questions with the greatest need for clarification are discussed in this White Paper. As NFDI evolves, the Task Force will likely propose further joint approaches to reporting in information infrastructures.
While each is of broad relevance, the questions addressed relate to substantially different aspects of the consortia's work. They are therefore also structured slightly differently.
The Federal-State Agreement (Bund-Länder-Vereinbarung, BLV) on the establishment and funding of a National Research Data Infrastructure (NFDI) (hereinafter BLV-NFDI) states in §1 that the funding pursues "the establishment and further development of overarching research data management" and thereby an "increase in the efficiency of the entire science system". To this end, the BLV-NFDI specifies seven goals that refine these main objectives. This White Paper formulates the participating consortia's shared understanding of the seven goals specified in the BLV-NFDI. On the basis of this understanding, the Task Force Evaluation and Reporting has made proposals on how the achievement of these goals can be recorded, described, and measured.
We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task, which contains text from the news domain, whereas DeReKo also covers other domains, including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as the main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain from the training corpus.
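The validation step described in the abstract can be sketched as follows. This is an illustrative example, not the paper's actual code: it assumes hypothetical per-sentence values for the parser's estimated attachment probability and the unlabeled attachment score (UAS) measured against gold annotations, and checks how strongly the two correlate.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical per-sentence values (for illustration only):
# the parser's estimated attachment probability vs. the actual UAS
# computed on a manually annotated test set.
estimated_prob = [0.95, 0.90, 0.82, 0.75, 0.60]
actual_uas     = [0.96, 0.88, 0.80, 0.78, 0.65]

r = pearson_r(estimated_prob, actual_uas)
print(f"Pearson r = {r:.3f}")
```

A high correlation on annotated data would justify using the probability estimates as a proxy for parsing quality on the unannotated portions of the corpus, which is the premise of the domain comparison in the abstract.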