This White Paper sets out commonly agreed definitions of activities of consortia within NFDI. It aims to provide a common basis for reporting and reference regarding selected questions of cross-consortial relevance in the DFG's template for the Interim Reports. The questions were prioritised by the NFDI Task Force on Evaluation and Reporting (formerly Task Force Monitoring) as a result of discussing possible answers to the DFG template. In this process, the need to agree on a generalisable meaning of terms commonly used in the context of NFDI, and of reporting in particular, was identified from cross-consortial perspectives. The questions that showed the greatest need for clarification are discussed in this White Paper. As NFDI evolves, the Task Force will likely propose further joint approaches for reporting in information infrastructures.
While each is of broad relevance, the questions addressed relate to substantially different aspects of the consortia's work. They are therefore also structured slightly differently.
The federal-state agreement (Bund-Länder-Vereinbarung, BLV) on the establishment and funding of a National Research Data Infrastructure (NFDI) (hereinafter BLV-NFDI) states in §1 that the funding pursues "the establishment and further development of an overarching research data management" and thereby an "increase in the efficiency of the entire science system". To this end, the BLV-NFDI specifies seven goals that refine these main objectives. This White Paper formulates the participating consortia's shared understanding of the seven goals specified in the BLV-NFDI. On the basis of this understanding, the Task Force Evaluation and Reporting has made proposals for how the achievement of these goals can be recorded, described, and measured.
Lexical discourse particles such as 'gut', 'schön', 'genau', 'richtig', 'klar', etc., which have equivalents in other word classes (e.g. as adjectives) and an inherent semantic content, are a frequent phenomenon in spoken language. In their diverse, finely nuanced use, they contribute substantially to the organisation of conversation. The focus of this empirical, interactional-linguistic study lies on a detailed description of the range of forms and functions as well as the usage practices of 'gut' and 'schön'. Functional, sequential, prosodic, and combinatorial regularities are identified, and the relationship between 'gut/schön' and their adjectival counterparts is discussed. The usage features and domains of the discourse particles are further compared with predicative forms containing 'gut/schön' in order to demonstrate the specific characteristics and functional capacity of lexical discourse particles and to discuss the formats with regard to pragmaticalisation.
Dictionary usage research views dictionaries primarily as tools for solving linguistic problems. A large proportion of dictionary use now takes place online and can thus be easily monitored using tracking technologies. Using the data gathered through such tracking, we hope to optimise the user experience of dictionaries and other linguistic resources. Usage statistics are also used for the external evaluation of linguistic resources. In this paper, we pursue the following three questions from a quantitative perspective: (1) What new insights can we gain from collecting and analysing usage data? (2) What limitations of the data and/or the collection process do we need to be aware of? (3) How can these insights and limitations inform the development and evaluation of linguistic resources?
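As a minimal illustration of the kind of quantitative usage analysis meant here, the sketch below counts lookups per headword from a hypothetical tab-separated access log. The log format, field layout, and file name are assumptions for illustration, not the setup described in the paper.

from collections import Counter

# Assumed log line format: "timestamp<TAB>user_hash<TAB>headword"
def lookup_counts(log_path):
    counts = Counter()
    with open(log_path, encoding="utf-8") as log:
        for line in log:
            fields = line.rstrip("\n").split("\t")
            if len(fields) == 3:
                counts[fields[2]] += 1  # count one lookup for this headword
    return counts

# Most frequently looked-up headwords, e.g. to prioritise editorial work.
top_lookups = lookup_counts("access.log").most_common(10)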
We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The parser is trained on the German dataset from the SPMRL 2014 Shared Task, which contains text from the news domain, whereas DeReKo also covers other domains, including fiction, science, and technology. To avoid costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as the main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo and show that the scores decrease with increasing distance of a domain from the training corpus.
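A minimal sketch of the correlation check described above, assuming hypothetical per-domain parser confidences and gold attachment scores; the numbers and variable names are illustrative, not the paper's results.

from scipy.stats import pearsonr

# Hypothetical per-domain data: the parser's mean attachment probability
# (its own confidence) and the labeled attachment score (LAS) measured
# on a manually annotated sample from the same domain.
estimated_confidence = [0.93, 0.88, 0.81, 0.76, 0.71]  # news ... technology
measured_las         = [0.91, 0.86, 0.78, 0.74, 0.68]

# If confidence tracks true accuracy, the correlation is high, which
# justifies using confidence as a proxy where no gold trees exist.
r, p_value = pearsonr(estimated_confidence, measured_las)
print(f"Pearson r = {r:.3f} (p = {p_value:.3g})")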
We describe a systematic and application-oriented approach to training and evaluating named entity recognition and classification (NERC) systems. Its purpose is to identify an optimal system and to train an optimal model for named entity tagging of DeReKo, a very large general-purpose corpus of contemporary German (Kupietz et al., 2010). DeReKo's strong dispersion with respect to genre, register, and time forces us to base our decision for a specific NERC system on an evaluation performed on a representative sample of DeReKo rather than on performance figures reported for the individual NERC systems when evaluated on more uniform and less diverse data. We create and manually annotate such a representative sample as evaluation data for three different NERC systems, for each of which various models are learnt on multiple training datasets. The proposed sampling method can be viewed as a generally applicable method for sampling evaluation data from an unbalanced target corpus for any kind of natural language processing task.
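A minimal sketch of drawing a stratum-proportional evaluation sample from an unbalanced corpus, assuming documents tagged with a stratum label such as genre. Function and field names are illustrative; the paper's actual sampling procedure may differ.

import random
from collections import defaultdict

def proportional_sample(documents, sample_size, seed=42):
    """Draw an evaluation sample whose strata (here: genre) appear in
    the same proportions as in the full, unbalanced corpus."""
    random.seed(seed)
    by_stratum = defaultdict(list)
    for doc in documents:
        by_stratum[doc["genre"]].append(doc)
    total = len(documents)
    sample = []
    for stratum, docs in by_stratum.items():
        # At least one document per stratum so rare genres are not lost.
        k = max(1, round(sample_size * len(docs) / total))
        sample.extend(random.sample(docs, min(k, len(docs))))
    return sample

# Toy corpus skewed towards news, mirroring an unbalanced target corpus.
corpus = [{"id": i, "genre": g}
          for i, g in enumerate(["news"] * 700 + ["fiction"] * 200 + ["science"] * 100)]
eval_sample = proportional_sample(corpus, sample_size=50)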