Sprichwörter im Gebrauch
(2017)
The contributions to this conference volume address the creation of digital historical newspaper corpora, characteristics and developmental tendencies of newspaper language at various levels and on the basis of individual corpora, as well as the assessment of newspaper language from a contemporary perspective.
The papers originate from the workshop "Die Zeitung als das Medium der neueren Sprachgeschichte? Korpora, Analyse und Wirkung", held at the Institut für Deutsche Sprache (IDS), in cooperation with the Europäisches Zentrum für Sprachwissenschaften (EZS), on 20–21 November 2014 in Mannheim.
We present a method to identify and document a phenomenon for which there is very little empirical data: German phrasal compounds occurring as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora that are large and diverse enough to contain rarely seen phenomena. The method is based on word segmentation and morphological analysis, and it takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved with a combination of rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations seem to be linked to the degree of expressivity.
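The segmentation step described in the abstract can be illustrated with a minimal sketch: given a lexicon of known words (a hypothetical toy lexicon here; the actual method additionally relies on morphological analysis and a data-driven learning process), a long unknown token is recursively split into known components, and multi-component splits become candidate single-token phrasal compounds.

```python
def segment(token, lexicon, min_len=2):
    """Recursively split a token into known lexicon words.

    Returns a list of component words, or None if no full
    segmentation exists. min_len avoids spurious short splits.
    """
    token = token.lower()
    if token in lexicon:
        return [token]
    # Try the longest known prefix first, then recurse on the rest.
    for i in range(len(token) - min_len, min_len - 1, -1):
        head, tail = token[:i], token[i:]
        if head in lexicon:
            rest = segment(tail, lexicon, min_len)
            if rest is not None:
                return [head] + rest
    return None

# Toy example: a written-together phrase flagged as a candidate
# phrasal compound because it segments into three known words.
lexicon = {"ich", "liebe", "dich"}
print(segment("ichliebedich", lexicon))  # → ['ich', 'liebe', 'dich']
```

A token whose segmentation yields three or more components (as here) would be passed on for fine-grained checking against rule-based and frequency-based word lists.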
In this paper, an exploratory data-driven method is presented that extracts from diachronic corpora the word types that have undergone the most pronounced change in frequency of occurrence in a given period of time. Combined with statistical methods from time series analysis, the method is able to find meaningful patterns and relationships in diachronic corpora, an approach that is still uncommon in linguistics. This indicates that the method can facilitate an improved understanding of diachronic processes.
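The core extraction step can be sketched as follows (a simplified assumption of the method: ranking word types by the log-ratio of their smoothed relative frequencies in two periods; the paper's actual procedure additionally applies time series analysis):

```python
from math import log

def frequency_change(counts_a, counts_b):
    """Rank word types by log-ratio of relative frequency
    between an earlier period (counts_a) and a later one (counts_b).

    counts_a / counts_b map word -> raw count. Add-one smoothing
    keeps the ratio defined for words absent from one period.
    """
    total_a = sum(counts_a.values())
    total_b = sum(counts_b.values())
    vocab = set(counts_a) | set(counts_b)
    scores = {}
    for w in vocab:
        fa = (counts_a.get(w, 0) + 1) / (total_a + len(vocab))
        fb = (counts_b.get(w, 0) + 1) / (total_b + len(vocab))
        scores[w] = log(fb / fa)  # > 0: rising, < 0: falling
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Toy diachronic counts: "internet" rises, "dampf" falls.
ranked = frequency_change({"dampf": 2, "auto": 1},
                          {"auto": 5, "internet": 4})
print(ranked[0][0], ranked[-1][0])  # → internet dampf
```

Word types at the extremes of this ranking are the ones whose frequency trajectories would then be examined with time-series methods.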
The Google Ngram Corpora seem to offer a unique opportunity to study linguistic and cultural change in quantitative terms. To avoid breaking any copyright laws, the data sets are not accompanied by any metadata regarding the texts the corpora consist of. Some of the consequences of this strategy are analyzed in this article. I chose the example of measuring censorship in Nazi Germany, an analysis that received widespread attention and was published in the paper accompanying the release of the Google Ngram data (Michel et al. (2010): Quantitative analysis of culture using millions of digitized books. Science, 331(6014): 176–82). I show that without proper metadata, it is unclear whether the results actually reflect any kind of censorship at all. Collectively, the findings imply that observed changes in this period of time can only be linked directly to World War II to a certain extent. Therefore, instead of speaking about general linguistic or cultural change, it seems preferable to explicitly restrict the results to linguistic or cultural change 'as it is represented in the Google Ngram data'. On a more general level, the analysis demonstrates the importance of metadata, the availability of which is not just a nice add-on, but a powerful source of information for the digital humanities.