Using the polyfunctional multi-word unit <was weiß ich> as an example, this study examines the interplay of pragmatic and phonetic differentiation in pragmaticalization processes. For this purpose, spontaneous-speech attestations from the corpus "Deutsch heute" are analysed. The observed range of phonetic variation points to a complex relationship with the respective pragmatic functions.
We present a test suite for POS tagging of German web data. The test suite provides the original raw text as well as gold tokenisations and is annotated with part-of-speech tags. It includes a new dataset of German tweets with a current size of 3,940 tokens. To increase the amount of data, we harmonised the annotations in existing web corpora on the basis of the Stuttgart-Tübingen Tag Set. The current version of the corpus comprises 48,344 tokens of web data, around half of them from Twitter. We also present experiments showing how different experimental setups (training set size, additional out-of-domain training data, self-training) influence tagger accuracy. All resources and models will be made publicly available to the research community.
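The kind of evaluation such experiments rely on, token-level tagging accuracy against gold annotations, can be illustrated with a minimal sketch. The unigram tagger, the toy sentences, and the STTS-style tags below are hypothetical stand-ins, not the paper's actual models or data:

```python
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sents):
    """Collect per-word tag counts from (word, tag) training sentences."""
    counts = defaultdict(Counter)
    for sent in tagged_sents:
        for word, tag in sent:
            counts[word][tag] += 1
    # Most frequent tag per word; overall most frequent tag as fallback for unknown words.
    fallback = Counter(t for s in tagged_sents for _, t in s).most_common(1)[0][0]
    model = {w: c.most_common(1)[0][0] for w, c in counts.items()}
    return model, fallback

def accuracy(model, fallback, gold_sents):
    """Token-level accuracy of the tagger on gold-annotated sentences."""
    correct = total = 0
    for sent in gold_sents:
        for word, tag in sent:
            correct += model.get(word, fallback) == tag
            total += 1
    return correct / total

# Toy data with STTS-style tags (illustrative only).
train = [[("das", "ART"), ("Haus", "NN")],
         [("das", "ART"), ("ist", "VAFIN"), ("gut", "ADJD")]]
test = [[("das", "ART"), ("Haus", "NN"), ("ist", "VAFIN")]]

model, fb = train_unigram_tagger(train)
print(accuracy(model, fb, test))  # 1.0
```

Varying the size of `train` in such a setup is the simplest way to chart how accuracy responds to training-set size.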
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation marks, the basis for segmenting spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several ways of segmenting have been proposed, some of which are based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German. We show that these guidelines are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure, the word count per segment and unit type, and observed that the word count and the distribution of each unit type differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties that are intrinsic to the social interactions in which participants are involved. This can be used for further analysis of social interaction and opens up the possibility of automatically segmenting transcripts.
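The measure used above, word count per segment and unit type, amounts to a simple aggregation over annotated segments. A minimal sketch, with made-up unit-type labels and segments rather than the study's actual annotation scheme:

```python
from collections import defaultdict

def mean_words_per_unit(segments):
    """Mean word count per unit type.

    segments: list of (unit_type, list_of_words) pairs.
    """
    totals = defaultdict(lambda: [0, 0])  # unit_type -> [word_sum, segment_count]
    for unit_type, words in segments:
        totals[unit_type][0] += len(words)
        totals[unit_type][1] += 1
    return {u: s / n for u, (s, n) in totals.items()}

# Illustrative segments only; labels do not reflect the actual guidelines.
segments = [
    ("clause", ["das", "wissen", "wir", "nicht"]),
    ("clause", ["er", "kommt", "morgen", "wieder", "nach", "hause"]),
    ("fragment", ["ja"]),
]
print(mean_words_per_unit(segments))  # {'clause': 5.0, 'fragment': 1.0}
```

Comparing such per-type means across recordings from different interactional settings is all the "rather simple measure" requires.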
While reading, one stumbles over the inconspicuous article den. Shouldn't that be dem? Correct. The local phrase am Stadioneingang and the temporal phrase am Sonntag are in the dative, as can be seen unambiguously from the definite article dem, which here has merged with the preposition an to form am. And the article that follows the comma and introduces the supplement known as a 'loose apposition' likewise refers to Stadioneingang or Sonntag and should agree with this head noun, that is, it should also stand in the dative, not, as in the examples, in the accusative.
This paper aims to describe different patterns of syntactic extensions of turns-at-talk in mundane conversations in Czech. Within interactional linguistics, same-speaker continuations of possibly complete syntactic structures have been described for typologically diverse languages, but have not yet been investigated for Slavic languages. Based on previously established descriptions of various types of extensions (Vorreiter 2003; Couper-Kuhlen & Ono 2007), our initial description shall therefore contribute to the cross-linguistic exploration of this phenomenon. While all previously described forms for continuing a turn-constructional unit seem to exist in Czech, some grammatical features of this language (especially free word order and strong case morphology) may lead to problems in distinguishing specific types of syntactic extensions. Consequently, this type of language allows for critically evaluating the cross-linguistic validity of the different categories and underlines the necessity of analysing syntactic phenomena within their specific action contexts.
The grammatical information system grammis combines descriptive texts on German grammar with dictionaries of specific word classes and grammatical terminology. In this paper, we describe the first attempts at analyzing user behavior for an online grammar of the German language and the implementation of an analysis and data extraction tool based on Matomo, a web analytics tool. We focus on the analysis of the keywords the users search for, either within grammis or via an external search platform like Google, and the analysis of the interaction between the text components within grammis and the integrated dictionaries. The overall results show that about 50% of the searches are for grammatical terms, and that the users shift from texts to dictionaries, mainly by using the integrated links to the dictionary of terminology within the texts. Based on these findings, we aim to improve grammis by extending its integrated dictionaries.
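The headline finding, that about 50% of searches are for grammatical terms, rests on matching extracted search queries against a terminology lexicon. A minimal sketch of that classification step, with an invented lexicon and query log rather than grammis' actual Matomo data:

```python
def grammatical_term_share(queries, term_lexicon):
    """Share of search queries that exactly match a lexicon of grammatical terms."""
    hits = sum(q.lower() in term_lexicon for q in queries)
    return hits / len(queries)

# Hypothetical terminology lexicon and query log (illustrative only).
terms = {"dativ", "apposition", "konjunktiv"}
queries = ["Dativ", "am Sonntag", "Apposition", "weil"]
print(grammatical_term_share(queries, terms))  # 0.5
```

A real analysis would need normalisation beyond lowercasing (inflected forms, spelling variants), but the proportion reported in the abstract is a statistic of exactly this shape.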