Refine
Year of publication
- 2017 (13)
Document Type
- Article (13)
Has Fulltext
- yes (13)
Keywords
- Korpus &lt;Linguistik&gt; (13)
Publication state
- Published version (8)
- Secondary publication (2)
- Postprint (1)
Review state
- Publisher's copy-editing (5)
- Peer review (8)
With this image, Hermann Unterstöger describes, in a "Sprachlabor" column in the Süddeutsche Zeitung of 23 March 2013, the success story that the noun (das) Narrativ has written over the last 30 years. While Unterstöger subtly invokes the intertextual link to Sebastian Brant's "Narrenschiff" and to Katherine Anne Porter's novel of the same name (Ship of Fools), Matthias Heine, author of "Seit wann hat geil nichts mehr mit Sex zu tun? 100 deutsche Wörter und ihre erstaunlichen Karrieren", takes a blunter tone in an article in the WELT of 13 November 2016, as that book title would lead one to expect. There he writes: "Hinz und Kunz schwafeln heutzutage vom ‚Narrativ'" ("every Tom, Dick, and Harry blathers on about the 'narrative' these days").
The Google Ngram Corpora seem to offer a unique opportunity to study linguistic and cultural change in quantitative terms. To avoid breaking any copyright laws, the data sets are not accompanied by any metadata regarding the texts the corpora consist of. Some of the consequences of this strategy are analyzed in this article. I chose the example of measuring censorship in Nazi Germany, an example that received widespread attention when it was published in the paper accompanying the release of the Google Ngram data (Michel et al. (2010): Quantitative analysis of culture using millions of digitized books. Science, 331(6014): 176–82). I show that without proper metadata, it is unclear whether the results actually reflect any kind of censorship at all. Collectively, the findings imply that observed changes in this period of time can be linked directly to World War II only to a limited extent. Therefore, instead of speaking about general linguistic or cultural change, it seems preferable to explicitly restrict the results to linguistic or cultural change 'as it is represented in the Google Ngram data'. On a more general level, the analysis demonstrates the importance of metadata, the availability of which is not just a nice add-on, but a powerful source of information for the digital humanities.
Recently, a claim was made, on the basis of the German Google Books 1-gram corpus (Michel et al., Quantitative Analysis of Culture Using Millions of Digitized Books. Science 2010; 331: 176–82), that there was a linear relationship between six non-technical non-Nazi words and three 'explicitly Nazi words' during World War II (Caruana-Galizia. 2015. Politics and the German language: Testing Orwell's hypothesis using the Google N-Gram corpus. Digital Scholarship in the Humanities [Online]. http://dsh.oxfordjournals.org/cgi/doi/10.1093/llc/fqv011 (accessed 15 April 2015)). Here, I try to show that apparent relationships like this are the result of misspecified models that do not take into account the temporal structure of time-series data. The main point of this article is to demonstrate why such analyses run the risk of incorrect statistical inference, producing effect estimates that are meaningless and can lead to wrong conclusions.
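The statistical pitfall at issue, apparent relationships between independent trending series, can be illustrated with a small simulation on invented data (a generic demonstration, not a re-analysis of the n-gram series discussed in the abstract):

```python
import math
import random

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / math.sqrt(vx * vy)

random.seed(42)
n = 200
# two series that share nothing except an upward linear trend
x = [0.05 * t + random.gauss(0, 1) for t in range(n)]
y = [0.05 * t + random.gauss(0, 1) for t in range(n)]

r_levels = pearson(x, y)   # strongly positive, driven purely by the shared trend
dx = [b - a for a, b in zip(x, x[1:])]
dy = [b - a for a, b in zip(y, y[1:])]
r_diffs = pearson(dx, dy)  # near zero once first-differencing removes the trend
```

Correlating the raw levels yields a strong relationship that exists only because both series trend upward; after first-differencing, which removes the trend, the apparent relationship disappears. This is the kind of misspecification the abstract refers to.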
On 1 September 2016, the research project "Lexik des gesprochenen Deutsch" (LeGeDe) began its work at the Institut für Deutsche Sprache in Mannheim as a cooperation between the departments of Pragmatics and Lexis. This third-party-funded project of the Leibniz Association (Leibniz Competition 2016; funding line 1: innovative projects) runs for three years (1 September 2016 to 31 August 2019), with a team drawn from the fields of lexicology, lexicography, conversation analysis, corpus and computational linguistics, and empirical methods. In addition to key facts about the project, its various points of departure, its subject area, its goals, and the LeGeDe data basis, the following contribution presents, above all, some fundamental research questions and methodological approaches, as well as first proposals for obtaining, analysing, and structuring the data. Various options for the lexicographic implementation are sketched, and the outlook summarises some remaining challenges.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view/j/cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
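For context, a minimal sketch of the kind of test under discussion, the log-likelihood ratio (G²) commonly used to compare word frequencies across two corpora, with invented counts:

```python
import math

def log_likelihood_g2(a, n1, b, n2):
    """Two-cell log-likelihood ratio (G2) for a word occurring a times
    in a corpus of n1 tokens and b times in a corpus of n2 tokens."""
    e1 = n1 * (a + b) / (n1 + n2)  # expected count in corpus 1
    e2 = n2 * (a + b) / (n1 + n2)  # expected count in corpus 2
    g2 = 0.0
    for observed, expected in ((a, e1), (b, e2)):
        if observed > 0:
            g2 += observed * math.log(observed / expected)
    return 2 * g2

# invented counts: 100 vs. 50 occurrences in two 10,000-token corpora
g2 = log_likelihood_g2(100, 10_000, 50, 10_000)
# g2 is roughly 16.99, well above the 3.84 cutoff for p < 0.05
# at one degree of freedom
```

The p-value attached to such a score presupposes that the corpora are random samples from the population of interest, which is precisely the assumption the abstract argues is not fulfilled.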
Lexicographic meaning descriptions of German lexical items which are formally and semantically similar and therefore easily confused (so-called paronyms) often fail to reflect the items' current usage. They can even contradict speakers' intuitions or disagree with lexical usage as observed in public discourse. The reasons are manifold: the language data used for compiling dictionaries is outdated, or lexicographic practice is rather conventional and does not take advantage of corpus-assisted approaches to semantic analysis. Despite the variety of modern electronic and online reference works, speakers face uncertainties when dealing with easily confusable words, for example sensibel/sensitiv (sensitive) or kindisch/kindlich (childish/childlike). Existing dictionaries often do not provide satisfactory answers as to how to use these sets correctly. Numerous questions addressed in online forums show where the uncertainties with paronyms lie and why users demand further assistance concerning proper contextual usage (cf. Storjohann 2015). There are different reasons why users misuse certain items or mix up words which are similar in form and meaning. As data from written and more spontaneous language resources suggest, some confusions arise from ongoing semantic change in the current use of some paronyms. This paper identifies shortcomings of contemporary German dictionaries and discusses innovative ways of empirical lexicographic work that might pave the way for a new data-driven, descriptive reference work on confusable German terms. Such a guide is currently being developed at the Institute for German Language in Mannheim, using corpora and diverse corpus-analytical methods. Its objective is a dictionary with contrastive entries that serves as a useful reference tool in situations of language doubt; at the same time, it aims at sensitizing users to context dependency and language change.
In this paper, an exploratory data-driven method is presented that extracts word-types from diachronic corpora that have undergone the most pronounced change in frequency of occurrence in a given period of time. Combined with statistical methods from time series analysis, the method is able to find meaningful patterns and relationships in diachronic corpora, an idea that is still uncommon in linguistics. This indicates that the approach can facilitate an improved understanding of diachronic processes.
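As an illustration of the general idea (a toy sketch with invented counts, not the paper's actual method), word types can be ranked by the magnitude of their relative-frequency change between the first and last time slice of a diachronic corpus:

```python
import math

# invented diachronic frequency table: word -> counts per time slice
counts = {
    "narrativ": [2, 3, 10, 40, 90],
    "telefon":  [50, 55, 52, 48, 50],
    "depesche": [80, 40, 15, 5, 2],
}
totals = [1000, 1100, 1050, 1200, 1150]  # corpus size per time slice

def rel_freq(word):
    """Relative frequency of a word in each time slice."""
    return [c / t for c, t in zip(counts[word], totals)]

def change_score(word):
    """Absolute log-ratio of last vs. first relative frequency;
    a small epsilon guards against zero counts."""
    f = rel_freq(word)
    eps = 1e-9
    return abs(math.log((f[-1] + eps) / (f[0] + eps)))

# words with the most pronounced frequency change come first
ranked = sorted(counts, key=change_score, reverse=True)
```

A real implementation would of course look at the whole trajectory (e.g., a regression slope or a variance-based measure over all slices) rather than just the endpoints, and would feed the resulting series into time-series methods as the abstract describes.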