In this paper, a method for measuring synchronic corpus (dis-)similarity put forward by Kilgarriff (2001) is adapted and extended to identify trends and correlated changes in diachronic text data, using the Corpus of Historical American English (Davies 2010a) and the Google Ngram Corpora (Michel et al. 2010a). The paper shows that this fully data-driven method, which extracts the word types whose frequency has changed most markedly in a given period, is computationally very cheap, and that it supports interpretations of diachronic trends that are both intuitively plausible and motivated from the perspective of information theory. Furthermore, it demonstrates that the method can identify correlated linguistic changes as well as diachronic shifts that can be linked to historical events. Finally, the method can help to improve diachronic POS tagging and complement existing NLP approaches. This indicates that the approach can facilitate an improved understanding of diachronic processes in language change.
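The core step the abstract describes — ranking word types by how strongly their frequency changes between two periods — can be sketched with a chi-squared comparison in the spirit of Kilgarriff's corpus-similarity measure. This is a minimal illustration on invented toy counts, not the paper's actual implementation or corpora:

```python
from collections import Counter

def chi_squared_per_word(freq_a, freq_b):
    """Rank word types by their chi-squared contribution between two
    time slices: large scores mark the most pronounced frequency changes."""
    total_a = sum(freq_a.values())
    total_b = sum(freq_b.values())
    scores = {}
    for word in set(freq_a) | set(freq_b):
        a = freq_a.get(word, 0)
        b = freq_b.get(word, 0)
        # Expected counts under the null hypothesis that the word's
        # relative frequency is the same in both time slices.
        p = (a + b) / (total_a + total_b)
        exp_a, exp_b = p * total_a, p * total_b
        scores[word] = (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical counts for two decades (illustrative, not real data).
slice_1900 = Counter({"carriage": 50, "telephone": 5, "the": 1000})
slice_1950 = Counter({"carriage": 5, "telephone": 60, "the": 1000})
ranking = chi_squared_per_word(slice_1900, slice_1950)
# "telephone" and "carriage" rank highest; the stable "the" ranks last.
```

Because each word's score needs only its two counts and the slice totals, the ranking is linear in vocabulary size, which matches the abstract's point that the method is computationally very cheap.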
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of how lexical items are used. These aspects of use include '(lexical) meaning' in a very broad sense and involve different dimensions; they are established in, and emerge from, the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word's usage spectrum is summarized as a collocation profile. Self-organizing methods are then applied to visualize the complex similarity structure spanned by these profiles. The visualizations point to the typical aspects of a word's use, and to the common and distinctive aspects of use of any two words.
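A collocation profile of the kind the tutorial summarizes can be sketched as the set of a word's co-occurring words ranked by an association measure. The sketch below uses pointwise mutual information over sentence co-occurrence as an illustrative measure; the tutorial's actual statistics, window definition, and data differ, and the example sentences are invented:

```python
import math
from collections import Counter

def collocation_profile(target, sentences, top_n=5):
    """Summarize a word's usage spectrum as a collocation profile:
    co-occurring words ranked by pointwise mutual information (PMI)."""
    word_freq = Counter()   # sentences each word occurs in
    pair_freq = Counter()   # sentences each word shares with the target
    n = len(sentences)
    for sent in sentences:
        tokens = set(sent.lower().split())
        word_freq.update(tokens)
        if target in tokens:
            pair_freq.update(tokens - {target})
    profile = {}
    for word, joint in pair_freq.items():
        # PMI: log-ratio of observed to chance co-occurrence probability.
        p_joint = joint / n
        p_target = word_freq[target] / n
        p_word = word_freq[word] / n
        profile[word] = math.log2(p_joint / (p_target * p_word))
    return sorted(profile.items(), key=lambda kv: -kv[1])[:top_n]

# Toy corpus: "barked" co-occurs with "dog" above chance, "cat" below it.
sentences = [
    "the dog barked loudly",
    "the dog chased the cat",
    "the cat slept quietly",
    "a loud dog barked again",
]
profile = dict(collocation_profile("dog", sentences, top_n=10))
```

Two such profiles, treated as vectors over the shared vocabulary, span the similarity structure that the tutorial then visualizes with self-organizing methods.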