The understanding of story variation, whether motivated by cultural currents or other factors, is important for applications of formal models of narrative such as story generation or story retrieval. We present the first stage of an experiment to elicit natural narrative-variation data suitable for evaluating story similarity, for qualitative and quantitative analysis of story variation, and for further data processing. We also present a few preliminary results from this first stage, using Red Riding Hood and Romeo and Juliet as base texts.
Accentuation, Uncertainty and Exhaustivity - Towards a Model of Pragmatic Focus Interpretation
(2010)
This paper presents a model of pragmatic focus interpretation that is assumed to be part of a complete language comprehension model and that is inspired by Levelt's language processing model. The model is derived from our empirical data on the role of accentuation, prosodic indicators of uncertainty, and context in pragmatic focus interpretation. In its present state, the model is restricted to these data, but it nevertheless generates predictions.
This paper addresses long-term archival of large corpora. We focus on three aspects specific to language resources: (1) the removal of resources for legal reasons, (2) the versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases as well as of different collections, and (3) the conversion of data to new formats for digital preservation. We motivate why language resources may have to be changed and why formats may need to be converted. As a solution, we suggest the use of an intermediate proxy object called a signpost. The approach is exemplified with the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
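As an illustration, here is a minimal sketch of what such a signpost proxy object might look like, assuming it records the renditions of one object across releases and formats. The field names are illustrative assumptions, not the DeReKo/AGD implementation.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical sketch of a "signpost" proxy object: a stable record that
# stands in for a resource across releases, removals, and format migrations.
# All field names are illustrative assumptions.

@dataclass
class Rendition:
    release: str             # identifier of the corpus release
    location: Optional[str]  # storage path/URI; None once the bytes are removed
    media_type: str          # format of this rendition, e.g. "application/tei+xml"

@dataclass
class Signpost:
    pid: str                              # persistent identifier of the object
    renditions: list[Rendition] = field(default_factory=list)
    removed_reason: Optional[str] = None  # set when content is withdrawn for legal reasons

    def resolve(self, release: str) -> Optional[Rendition]:
        """Return the rendition belonging to a given release, if any."""
        for r in self.renditions:
            if r.release == release:
                return r
        return None
```

The design point the sketch tries to capture is that the signpost itself persists even when a rendition is removed or converted, so references from multiple releases and collections stay resolvable.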
We continue the study of the reproducibility of Propp's annotations begun by Bod et al. (2012). We present four experiments in which test subjects were taught Propp's annotation system; we conclude that Propp's system requires a significant amount of training, but that with sufficient time investment, annotators can be reliably trained to apply it to simple tales.
We present web services which implement a workflow for transcripts of spoken language following the TEI guidelines, in particular ISO 24624:2016 “Language resource management – Transcription of spoken language”. The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
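As a hedged illustration, a client might submit a transcript to such a conversion service over HTTP roughly as follows. The endpoint URL, file names, and header values are assumptions for illustration only, not the documented interface of these services.

```python
import requests

# Hypothetical client for a transcript-conversion web service.
# SERVICE_URL is a placeholder, not the real endpoint; consult the
# service documentation for the actual interface.
SERVICE_URL = "https://example.org/tei-spoken/convert"

with open("interview.exb", "rb") as f:  # e.g. an EXMARaLDA transcript
    response = requests.post(
        SERVICE_URL,
        data=f.read(),
        headers={"Content-Type": "application/xml"},
    )
response.raise_for_status()

# The service is assumed to return an ISO 24624:2016 / TEI document.
with open("interview.tei.xml", "wb") as out:
    out.write(response.content)
```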
CMDI Explorer
(2021)
We present CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
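For readers unfamiliar with CMDI, records are XML documents whose resource proxies enumerate the items a (virtual) collection points to. The following minimal sketch lists those proxies by hand, in the spirit of what CMDI Explorer automates; the input file name is hypothetical, and the namespace shown is the CMDI 1.2 one (CMDI 1.1 uses "http://www.clarin.eu/cmd/").

```python
import xml.etree.ElementTree as ET

# Namespace for CMDI 1.2 records (an assumption about the input version).
CMD_NS = {"cmd": "http://www.clarin.eu/cmd/1"}

tree = ET.parse("collection.cmdi.xml")  # hypothetical input file
root = tree.getroot()

# List the resource proxies, i.e. the items the collection points to.
for proxy in root.findall(".//cmd:ResourceProxy", CMD_NS):
    rtype = proxy.findtext("cmd:ResourceType", namespaces=CMD_NS)
    ref = proxy.findtext("cmd:ResourceRef", namespaces=CMD_NS)
    print(f"{proxy.get('id')}: {rtype} -> {ref}")
```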
We present a technique called event mapping that allows one to project text representations onto event lists, to produce an event table, and to derive quantitative conclusions for comparing the text representations. The main application of the technique is the case where two classes of text representations have been collected in two different settings (e.g., as annotations in two different formal frameworks), so that the two classes can be compared with respect to their systematic differences in the event table. We illustrate how the technique works by applying it to data collected in two experiments (one using annotations in Vladimir Propp's framework, the other using natural language summaries).
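A hedged sketch of the core bookkeeping, assuming representations have already been reduced to event labels: project each representation onto an event list, tally the lists into an event table, and read off per-class differences. The projection function and toy data below are stand-ins, not the paper's actual mapping.

```python
from collections import Counter

def project(representation: list[str]) -> list[str]:
    """Hypothetical projection: here, representations are already event labels."""
    return representation

def event_table(class_a: list[list[str]], class_b: list[list[str]]) -> dict:
    """Tally event frequencies per class into a single event table."""
    table: dict[str, dict[str, int]] = {}
    for label, reps in (("A", class_a), ("B", class_b)):
        counts = Counter(e for rep in reps for e in project(rep))
        for event, n in counts.items():
            table.setdefault(event, {"A": 0, "B": 0})[label] = n
    return table

# Toy data standing in for, e.g., Propp annotations vs. summary-derived events.
table = event_table(
    class_a=[["villainy", "departure", "struggle"], ["villainy", "return"]],
    class_b=[["villainy", "struggle"], ["departure", "return"]],
)
for event, counts in sorted(table.items()):
    print(f"{event}: A={counts['A']} B={counts['B']}")
```

Systematic differences between the two settings then show up as rows of the table where the class counts diverge.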
The study empirically examines the interpretation of focus accents in German. To this end, a methodology is developed, and it is discussed how experimental investigation can proceed given the current state of focus theory. Methodologically, experiments that directly measure interpretation provide an alternative to the widespread practice of using only preference and production data to investigate the interpretation of stimuli, and it is shown why such an alternative is necessary.
The empirical results show that theories assuming an association of free focus with scalar implicature (exhaustivity) or with question–answer congruence must be both extended and restricted, as follows: On the one hand, situational factors must be taken into account in the interpretation to a greater extent than has been done so far, especially in their interaction with ‘physical’ properties of the speech signal (focus marking). On the other hand, a prototypical definition of focus is called for, one that connects the major concepts of focus on the phonetic-phonological, semantic, and information-structural levels and takes their prototypical coincidence to be the basis of focus interpretation and the corresponding intuitions.
From Proof Texts to Logic. Discourse Representation Structures for Proof Texts in Mathematics
(2009)
We present an extension to Discourse Representation Theory that can be used to analyze mathematical texts written in the commonly used semi-formal language of mathematics (or at least a subset of it). Moreover, we describe an algorithm that can be used to check the resulting Proof Representation Structures for their logical validity and adequacy as a proof.
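To illustrate the kind of object involved, here is a hedged sketch of a Discourse Representation Structure (DRS) as a data structure, together with a proof-oriented extension. The PRS-specific fields are illustrative assumptions, not the paper's actual definition of Proof Representation Structures.

```python
from dataclasses import dataclass, field

@dataclass
class DRS:
    # A classical DRS: a set of discourse referents plus conditions over them.
    referents: list[str] = field(default_factory=list)   # e.g. ["p"]
    conditions: list[str] = field(default_factory=list)  # e.g. ["prime(p)"]

@dataclass
class PRS(DRS):
    # Assumed extension for proof texts: what a step claims and relies on.
    claim: str | None = None                               # statement being established
    justifications: list[str] = field(default_factory=list)

# "Let p be a prime greater than 2. Then p is odd."
step = PRS(
    referents=["p"],
    conditions=["prime(p)", "p > 2"],
    claim="odd(p)",
    justifications=["every prime greater than 2 is odd"],
)
print(step)
```

A validity check of the kind the abstract describes would then traverse such structures and verify that each claim follows from the accumulated conditions and justifications.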