In examples such as
(1) Du hast scheints / Weiß Gott nichts begriffen.
(2) It cost £200, give or take.
(3) Qu’est ce qu’il a dit?
verbal constructions (VC for short; in each case the parts set in bold in the original) are used in a way that runs counter to the grammar of verbal constructions. In (1) and (2), the verbal construction is used like an adverb or particle, that is, like an expression functioning as an (adverbial) adjunct or supplement. In (3), the verbal construction has become part of a periphrastic interrogative construction. How should such 'refunctionalizations' (as I will provisionally, pre-theoretically call the phenomenon) be classified? Are we dealing with lexicalization or with grammaticalization? Or with a phenomenon of a third kind? The refunctionalization of verbal syntagms or constructions (I use the abbreviation RVC for 'refunctionalized verbal construction(s)') has so far been studied less thoroughly than, for instance, the refunctionalization of prepositional phrases, which across languages can develop into complex, "secondary" prepositions (compare German auf Grund + genitive / von, English on top of, French à cause de).
Many studies on dictionary use presuppose that users do indeed consult lexicographic resources. However, little is known about what users actually do when they try to solve language problems on their own. We present an observational study in which learners of German were allowed to browse the web freely while correcting erroneous German sentences. In this paper, we focus on the multi-methodological approach of the study, especially the interplay between quantitative and qualitative methods. In one example study, we show how the analysis of verbal protocols, the correction task, and the screen recordings can reveal the effects of intuition, language (learning) awareness, and determination on the accuracy of the corrections. In another example study, we show how preconceived hypotheses about the problem at hand can hinder participants from arriving at the correct solution.
We present ESDexplorer (https://owid.shinyapps.io/ESDexplorer), a browser application which allows the user to explore the data from a large European survey on dictionary use and culture. We built ESDexplorer with several target groups in mind: our cooperation partners, other researchers, and a more general public interested in the results. We also present the architecture and technological realisation of the application in detail and discuss some legal aspects of data protection that motivated certain architectural choices.
The actual or anticipated impact of research projects can be documented in scientific publications and project reports. While project reports are available at varying levels of accessibility, they may rarely be used or shared outside of academia. Moreover, the connection between the outcomes of a research project and their potential secondary use may not be made explicit in a project report. This paper outlines two methods for classifying and extracting the impact of publicly funded research projects. The first method identifies impact categories and assigns them to research projects (and, by extension, to their reports) using subject-matter experts, without considering the content of the research reports. This process resulted in a classification schema that we describe in this paper. With the second method, which is still work in progress, impact categories are extracted from the actual text data.
We present an approach for modeling German negation in open-domain, fine-grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units (not only common negation words) and that different negation words have different scopes. Our approach is examined on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identify different types of negation words that share the same scopes. We show that negation modeling based on these types alone already largely outperforms traditional negation models, which assume the same scope for all negation words and employ window-based scope detection rather than scope detection based on syntactic information.
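The contrast between the two scope-detection strategies can be illustrated with a minimal sketch. This is not the authors' implementation; the sentence, the hand-built dependency structure, and the window size are assumptions for illustration only. `heads[i]` gives the index of token i's head, with -1 marking the root.

```python
def window_scope(tokens, neg_index, k=4):
    """Window-based scope: simply the k tokens following the negation word."""
    return set(range(neg_index + 1, min(len(tokens), neg_index + 1 + k)))

def subtree(heads, root):
    """All token indices in the dependency subtree rooted at `root` (inclusive)."""
    members = {root}
    changed = True
    while changed:
        changed = False
        for i, h in enumerate(heads):
            if h in members and i not in members:
                members.add(i)
                changed = True
    return members

def syntactic_scope(heads, neg_index):
    """Syntax-based scope: the subtree of the word the negation attaches to,
    minus the negation word itself."""
    scope = subtree(heads, heads[neg_index])
    scope.discard(neg_index)
    return scope

# "das hat die Stimmung nicht verbessert", with hand-built dependencies
tokens = ["das", "hat", "die", "Stimmung", "nicht", "verbessert"]
heads = [1, -1, 3, 5, 5, 1]
print(window_scope(tokens, 4))    # {5}: only the sentence-final verb
print(syntactic_scope(heads, 4))  # {2, 3, 5}: the whole verb phrase
```

Because German negation often precedes a clause-final verb, the window to the right misses most of the material the negation actually affects, while the subtree recovers it.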
We present the pilot edition of the GermEval Shared Task on the Identification of Offensive Language. This shared task deals with the classification of German tweets from Twitter. It comprises two tasks: a coarse-grained binary classification task and a fine-grained multi-class classification task. The shared task had 20 participants submitting 51 runs for the coarse-grained task and 25 runs for the fine-grained task. Since this is a pilot task, we describe the process of extracting the raw data for the data collection as well as the annotation schema. We evaluate the results of the systems submitted to the shared task. The shared task homepage can be found at https://projects.cai.fbi.h-da.de/iggsa/
We address the detection of abusive words. The task is to identify such words among a set of negative polar expressions. We propose novel features that draw on information from both corpora and lexical resources. These features are calibrated on a small, manually annotated base lexicon, which we use to produce a large lexicon. We show that the word-level information we learn cannot be equally derived from a large dataset of annotated microposts. We demonstrate the effectiveness of our (domain-independent) lexicon in the cross-domain detection of abusive microposts.
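The appeal of a word-level lexicon is that applying it across domains is trivial. The following toy sketch (the lexicon entries and tokenization are placeholder assumptions, not the authors' resources) shows the application step: a micropost is flagged when it contains any lexicon entry.

```python
# Placeholder entries standing in for a learned abuse lexicon.
ABUSE_LEXICON = {"idiot", "moron", "scum"}

def flag_abusive(post, lexicon=ABUSE_LEXICON):
    """Return (is_abusive, matched_words) for a single micropost."""
    tokens = [t.strip(".,!?;:").lower() for t in post.split()]
    hits = [t for t in tokens if t in lexicon]
    return bool(hits), hits

print(flag_abusive("What an idiot!"))   # (True, ['idiot'])
print(flag_abusive("Have a nice day"))  # (False, [])
```

The domain-independence claim in the abstract rests on the lexicon itself: once abusive words are identified at the word level, the same lookup works on microposts from any domain.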
Negation is an important contextual phenomenon that needs to be addressed in sentiment analysis. Next to common negation function words, such as not or none, there is also a considerable class of negation content words, also referred to as shifters, such as the verbs diminish, reduce or reverse. However, many of these shifters are ambiguous. For instance, spoil as in spoil your chance reverses the polarity of the positive polar expression chance, while in spoil your loved ones, no negation takes place. We present a supervised learning approach to disambiguating verbal shifters. Our approach takes various features into consideration, particularly generalization features.
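A hedged sketch of what such features might look like: the feature names, the toy polarity lexicon, and the semantic classes below are illustrative assumptions, not taken from the paper. The point is the generalization feature, which backs off from the concrete object noun to a coarser class so the classifier can transfer evidence to unseen nouns.

```python
POLARITY = {"chance": "positive", "rumour": "negative"}  # toy polar lexicon
SEMCLASS = {"chance": "OPPORTUNITY", "ones": "PERSON"}   # toy semantic classes

def shifter_features(verb, obj):
    """Feature dict for the decision: does `verb` shift the polarity of `obj`?
    obj_class is the generalization feature: evidence from 'spoil your chance'
    can transfer to other OPPORTUNITY nouns the classifier has never seen."""
    return {
        f"verb={verb}": 1,
        f"obj_polarity={POLARITY.get(obj, 'neutral')}": 1,
        f"obj_class={SEMCLASS.get(obj, 'UNK')}": 1,
    }

print(shifter_features("spoil", "chance"))
print(shifter_features("spoil", "ones"))
```

Each dict would be fed to a standard supervised classifier together with a shifting/non-shifting label for the verb-object pair.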
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation marks, the segmentation of spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several ways of segmenting have been proposed, some of which are based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German and show that they are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure, the word count per segment and unit type, and observed that the word count and the distribution of each unit type differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties that are intrinsic to the social interactions in which participants are involved. This can be used for further analysis of social interaction and opens up the possibility of automatic segmentation of transcripts.
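The measure itself is simple enough to sketch in a few lines. The segment data and unit-type labels below are invented for illustration; only the computation (mean word count per segment, broken down by unit type) follows the description above.

```python
from collections import defaultdict

def mean_word_count(segments):
    """segments: iterable of (unit_type, token_list) pairs.
    Returns {unit_type: mean number of words per segment of that type}."""
    totals, counts = defaultdict(int), defaultdict(int)
    for unit_type, tokens in segments:
        totals[unit_type] += len(tokens)
        counts[unit_type] += 1
    return {u: totals[u] / counts[u] for u in totals}

# Invented example segments with hypothetical unit-type labels.
segments = [
    ("clause", ["ich", "komme", "morgen"]),
    ("clause", ["das", "weiß", "ich", "nicht"]),
    ("fragment", ["genau"]),
]
print(mean_word_count(segments))  # {'clause': 3.5, 'fragment': 1.0}
```

Comparing these per-type means across transcripts from different interactional settings yields exactly the kind of distributional contrast the study reports.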