In examples like
(1) Du hast scheints / Weiß Gott nichts begriffen.
(2) It cost £200, give or take.
(3) Qu’est ce qu’il a dit?
verbal constructions (VK for short; here the parts set in bold) are used in a way that runs counter to the grammar of verbal constructions. In (1) and (2) the verbal construction is used like an adverb or particle, that is, like an expression functioning as an (adverbial) adjunct or supplement. In (3) the verbal construction has become part of a periphrastic interrogative construction. How are such ‘refunctionalizations’ – as I will call the phenomenon pre-theoretically for now – to be classified? Are they cases of lexicalization or of grammaticalization? Or a phenomenon of a third kind? The refunctionalization of verbal syntagms or constructions – I use the abbreviation UVK for ‘umfunktionalisierte verbale Konstruktion(en)’, i.e. refunctionalized verbal construction(s) – is a phenomenon that has so far received comparatively little study, for instance compared with the refunctionalization of prepositional phrases, which across languages can become complex, „secondary“ prepositions (compare German auf Grund + genitive / von, English on top of, French à cause de).
We study German affixoids, a type of morpheme in between affixes and free stems. Several properties have been associated with them – increased productivity; a bleached semantics, which is often evaluative and/or intensifying and thus of relevance to sentiment analysis; and the existence of a free morpheme counterpart – but these have not been validated empirically. In experiments on a new data set that we make available, we put these key assumptions from the morphological literature to the test and show that, although affixoids generate many low-frequency formations, their instances can be classified as affixoid or non-affixoid with a best F1-score of 74%.
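A purely illustrative sketch of how such an instance-level decision could look, using two of the properties named above (productivity and boundness) as invented features with made-up thresholds; the paper's actual features and classifier are not reproduced here:

```python
# Toy sketch only: decide whether a German morpheme behaves affix-like
# (affixoid) in its bound uses. Features mirror the properties discussed
# in the abstract; the thresholds and feature design are invented.

def classify_instance(n_distinct_formations, freq_bound, freq_free):
    """Return 'affixoid' if the morpheme looks productive and mostly bound."""
    productivity = n_distinct_formations               # many distinct formations
    boundness = freq_bound / max(1, freq_bound + freq_free)  # share of bound uses
    score = 0
    if productivity > 50:   # invented threshold
        score += 1
    if boundness > 0.8:     # invented threshold
        score += 1
    return "affixoid" if score == 2 else "non-affixoid"

print(classify_instance(120, 900, 100))  # highly productive, mostly bound
print(classify_instance(5, 50, 500))     # rare, mostly used as free stem
```

The point of the sketch is only that the hypothesized affixoid properties are operationalizable as classifier features, which is what the paper tests empirically.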
In this paper we apply methods for creating a large lexicon of verbal polarity shifters to German. Polarity shifters are content words that can move the polarity of a phrase towards its opposite, such as the verb “abandon” in “abandon all hope”. This is similar to how negation words like “not” influence polarity. Both shifters and negation are required for high-precision sentiment analysis. Lists of negation words are available for many languages, but the only language for which a sizable lexicon of verbal polarity shifters exists is English. That lexicon was created by bootstrapping from a sample of annotated verbs with a supervised classifier that uses a set of data- and resource-driven features. We reproduce and adapt this approach to create a German lexicon of verbal polarity shifters, confirming that the approach works for multiple languages. We further improve classification by leveraging cross-lingual information from the English shifter lexicon. Using this improved approach, we bootstrap a large number of German verbal polarity shifters, drastically reducing the annotation effort. The resulting German lexicon of verbal polarity shifters is made publicly available.
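The bootstrapping idea, a seed of annotated verbs grown by a classifier's confident predictions, can be sketched roughly as follows; the seed verbs, the single hand-picked feature list, and the `predict` heuristic are invented placeholders for the paper's data- and resource-driven features:

```python
# Minimal self-training (bootstrapping) sketch for growing a polarity
# shifter lexicon from a small annotated seed. All data here is a toy
# stand-in, not the actual resource.

SEED = {  # verb -> is_shifter (toy gold annotations)
    "abandon": True, "destroy": True, "lose": True,
    "eat": False, "walk": False, "see": False,
}

# Toy feature: verbs on this hand-picked list count as shifter-like.
DESTRUCTIVE = {"abandon", "destroy", "lose", "remove", "fail"}

def predict(verb):
    """Stand-in for a supervised classifier's confident prediction."""
    return verb in DESTRUCTIVE

def bootstrap(unlabeled, lexicon):
    """One bootstrapping pass: add confidently predicted shifters."""
    for verb in unlabeled:
        if verb not in lexicon and predict(verb):
            lexicon[verb] = True
    return lexicon

lexicon = {v: True for v, s in SEED.items() if s}
lexicon = bootstrap(["remove", "fail", "sleep"], lexicon)
print(sorted(lexicon))  # seed shifters plus newly bootstrapped ones
```

In the real setting the predictor is retrained between passes and only high-confidence predictions are admitted, which is what keeps the annotation effort low.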
The present submission reports on a pilot project conducted at the Institute for the German Language (IDS), aimed at strengthening the connection between ISO TC37SC4 “Language Resource Management” and the CLARIN infrastructure. In terminology management, attempts have recently been made to use graph-theoretical analyses to better understand the structure of terminology resources. The project described here applies some of these methods to potentially incomplete concept fields produced over the years by numerous researchers serving as experts and editors of ISO standards. The main results of the project are twofold. On the one hand, they comprise concept networks dynamically generated from a relational database and browsable by the user. On the other hand, the project has yielded significant qualitative feedback that will be offered to ISO. We provide the institutional context of this endeavour, its theoretical background, and an overview of the data preparation and tools used. Finally, we discuss the results and illustrate some of them.
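A minimal illustration of the graph-theoretical view described above, assuming concept relations exported as rows from a relational database; counting connected components is one simple way to surface fragmented, potentially incomplete concept fields. The table layout and data below are hypothetical:

```python
# Sketch: turn relation rows (as if fetched via SQL) into an undirected
# adjacency structure, then find connected components. Each component is
# one concept field; several small components suggest fragmentation.
from collections import defaultdict

rows = [  # (concept, related_concept) pairs; hypothetical data
    ("corpus", "annotation"), ("annotation", "segmentation"),
    ("lexicon", "lemma"),
]

graph = defaultdict(set)
for a, b in rows:
    graph[a].add(b)
    graph[b].add(a)

def components(graph):
    """Depth-first traversal collecting connected components."""
    seen, comps = set(), []
    for node in list(graph):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n] - comp)
        seen |= comp
        comps.append(comp)
    return comps

print(len(components(graph)))  # two disconnected concept fields
```

The browsable networks in the project would add labels and relation types on top of this bare structure.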
The actual or anticipated impact of research projects can be documented in scientific publications and project reports. While project reports are available at varying levels of accessibility, they may be rarely used or shared outside of academia. Moreover, the connection between the outcomes of a research project and their potential secondary use may not be made explicit in a project report. This paper outlines two methods for classifying and extracting the impact of publicly funded research projects. The first method identifies impact categories and assigns them to research projects, and by extension to their reports, using subject-matter experts without considering the content of the reports; this process resulted in the classification schema we describe in this paper. The second method, which is still work in progress, extracts impact categories from the actual text data.
In recent years, the availability of large annotated and searchable corpora, together with a new interest in the empirical foundation and validation of linguistic theory and description, has sparked a surge of novel and interesting work using corpus-based methods to study the grammar of natural languages. However, a look at relevant current research on the grammar of the Germanic, Romance, and Slavic languages reveals a variety of different theoretical approaches and empirical foci, which can be traced back to different philological and linguistic traditions. Still, this current state of affairs should not be seen as an obstacle but as an ideal basis for a fruitful exchange of ideas between different research paradigms.
We present a study on gaps in spoken language interaction as potential candidates for syntactic boundaries. On the basis of an online annotation experiment, we can show that both gap duration and gap type affect the likelihood of a gap being a syntactic boundary. We discuss the potential of these findings for automating the segmentation process.
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation marks, the segmentation of spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several segmentation schemes have been proposed, some of them based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German, and we can show that these guidelines are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure, the word count per segment and unit type, and observed that both the word count and the distribution of unit types differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties intrinsic to the social interactions that participants are involved in. This can be used for further analysis of social interaction and opens up the possibility of automatic segmentation of transcripts.
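The word-count-per-segment-and-unit-type measure mentioned above is simple enough to sketch; the segment boundaries, the unit-type labels, and the toy transcript below are assumptions for illustration, not the corpus's actual annotation scheme:

```python
# Sketch: given syntax-based segments labelled with a unit type, compute
# the mean word count per unit type. Segments and labels are toy data.
from collections import defaultdict
from statistics import mean

segments = [  # (unit_type, tokens) per annotated segment
    ("main_clause", ["ich", "komme", "morgen"]),
    ("main_clause", ["das", "weiß", "ich", "nicht"]),
    ("fragment", ["ja"]),
]

by_type = defaultdict(list)
for unit_type, tokens in segments:
    by_type[unit_type].append(len(tokens))

for unit_type, counts in sorted(by_type.items()):
    print(unit_type, mean(counts))
```

Comparing these per-type means across interactional settings (e.g. institutional vs. informal talk) is the kind of contrast the study reports.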