L3: Lexik empirisch und digital
Document Type
- Article (11)
- Part of a Book (5)
- Preprint (3)
- Other (2)
- Conference Proceeding (1)
Is part of the Bibliography
- yes (22)
Keywords
- Vocabulary (11)
- COVID-19 (10)
- German (10)
- Lexicostatistics (9)
- Online media (8)
- Diversity (8)
- Corpus (linguistics) (6)
- Information theory (3)
- Language statistics (3)
- Data analysis (2)
Publication state
- Published version (14)
- Secondary publication (8)
- Postprint (3)
Review state
- Peer review (10)
- Publisher copy-editing (5)
Publisher
- Leibniz-Institut für Deutsche Sprache (IDS) (6)
- Cornell University (3)
- IDS-Verlag (2)
- MDPI (2)
- Springer Nature (2)
- de Gruyter (2)
- Benjamins (1)
- De Gruyter (1)
- Erich Schmidt (1)
- Wiley (1)
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
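As a rough illustration of how one might work with such a fold-based frequency table (this is not one of the official Python/R/Stata scripts; the file name and the lemma/POS/frequency column layout are assumptions made for the example), a minimal Python sketch for counting types and hapax legomena in a single fold could look like this:

```python
# Minimal sketch, not the authors' official script: read one fold of a
# DeReKoGram-style 1-gram table and count vocabulary size and hapax legomena.
# File name and column layout (lemma, POS, frequency) are assumptions.
import csv
from collections import Counter

freqs = Counter()
with open("derekogram_1grams_fold01.tsv", encoding="utf-8") as f:
    reader = csv.reader(f, delimiter="\t")
    for lemma, pos, freq in reader:
        freqs[(lemma, pos)] += int(freq)

vocabulary_size = len(freqs)                               # distinct lemma/POS types
hapax_legomena = sum(1 for n in freqs.values() if n == 1)  # types occurring exactly once
print(f"types: {vocabulary_size}, hapaxes: {hapax_legomena}")
```

Aggregating the counters of several folds before counting would reproduce the kind of vocabulary-growth curve described in the case study.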
Computational language models (LMs), most notably exemplified by the widespread success of OpenAI's ChatGPT chatbot, show impressive performance on a wide range of linguistic tasks, thus providing cognitive science and linguistics with a computational working model to empirically study different aspects of human language. Here, we use LMs to test the hypothesis that languages with more speakers tend to be easier to learn. In two experiments, we train several LMs—ranging from very simple n-gram models to state-of-the-art deep neural networks—on written cross-linguistic corpus data covering 1293 different languages and statistically estimate learning difficulty. Using a variety of quantitative methods and machine learning techniques to account for phylogenetic relatedness and geographical proximity of languages, we show that there is robust evidence for a relationship between learning difficulty and speaker population size. However, contrary to expectations derived from previous research, our results suggest that languages with more speakers tend to be harder to learn.
One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large-scale quantitative cross-linguistic analysis of written language by training a language model on more than 6500 different documents as represented in 41 multilingual text collections consisting of ~ 3.5 billion words or ~ 9.0 billion characters and covering 2069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
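To make the quantity concrete: the entropy estimated here is, in essence, the average number of bits the model needs per symbol when predicting a text. A minimal Python sketch of this idea (not the study's actual pipeline; the `char_prob` interface stands in for any trained model and is a hypothetical assumption) is shown below.

```python
# Minimal sketch: estimate the entropy of a text under a language model as the
# average negative log2-probability per character ("average prediction complexity").
# `char_prob(context, ch)` is a hypothetical stand-in for any trained model.
import math

def estimate_entropy(text, char_prob):
    """Cross-entropy in bits per character: H ~ -(1/N) * sum(log2 p(c_i | c_<i))."""
    log_prob_sum = 0.0
    for i, ch in enumerate(text):
        p = char_prob(text[:i], ch)   # model's probability of ch given its left context
        log_prob_sum += math.log2(p)
    return -log_prob_sum / len(text)

# Toy usage with a uniform model over 27 symbols (26 letters + space):
uniform = lambda context, ch: 1.0 / 27
print(estimate_entropy("the quick brown fox", uniform))  # ~ log2(27) ~ 4.75 bits/char
```

A better model assigns higher probability to the actual next symbols, so its estimated entropy is lower; comparing these estimates across languages yields the complexity rankings discussed in the abstract.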
It was recently suggested in a study published in Nature Human Behaviour that the historical loosening of American culture was associated with a trade-off between higher creativity and lower order. To this end, Jackson et al. generate a linguistic index of cultural tightness based on the Google Books Ngram corpus and use this index to show that American norms loosened between 1800 and 2000. While we remain agnostic toward a potential loosening of American culture and a statistical association with creativity/order, we show here that the methods used by Jackson et al. are neither suitable for testing the validity of the index nor for establishing possible relationships with creativity/order.
In a previous study published in Nature Human Behaviour, Varnum and Grossmann claim that reductions in gender inequality are linked to reductions in pathogen prevalence in the United States between 1951 and 2013. Since the statistical methods used by Varnum and Grossmann are known to induce (seemingly) significant correlations between unrelated time series, so-called spurious or nonsense correlations, we test here whether the statistical association between gender inequality and pathogen prevalence in its current form is also the result of mis-specified models that do not correctly account for the temporal structure of the data. Our analysis clearly suggests that this is the case. We then discuss and apply several standard approaches of modelling time-series processes in the data and show that there is, at least as of now, no support for a statistical association between gender inequality and pathogen prevalence.
In a recent paper published in the Journal of Language Evolution, Kauhanen, Einhaus & Walkden (KEW) challenge the results presented in one of my papers (Koplenig, Royal Society Open Science, 6, 181274 (2019)), in which I tried to show through a series of statistical analyses that large numbers of L2 (second language) speakers do not seem to affect the (grammatical or statistical) complexity of a language. To this end, I focus on the way in which the Ethnologue assesses language status: a language is characterised as vehicular if, in addition to being used by L1 (first language) speakers, it also has a significant number of L2 users. KEW criticise both the use of vehicularity as a (binary) indicator of whether a language has a significant number of L2 users and the idea of imputing a zero proportion of L2 speakers to non-vehicular languages whenever a direct estimate of that proportion is unavailable. While I recognise the importance of post-publication commentary on published research, I show in this rejoinder that both points of criticism are explicitly mentioned and analysed in my paper. In addition, I also comment on other points raised by KEW and demonstrate that neither of the two alternative analyses offered by KEW stands up to closer scrutiny.
We investigate the optional omission of the infinitival marker in a Swedish future tense construction. During the last two decades the frequency of omission has been rapidly increasing, and this process has received considerable attention in the literature. We test whether this accumulated knowledge can yield accurate predictions of language variation and change. We extracted all occurrences of the construction from a very large collection of corpora. The dataset was automatically annotated with language-internal predictors which have previously been shown or hypothesized to affect the variation. We trained several models in order to make two kinds of predictions: whether the marker will be omitted in a specific utterance and how large the proportion of omissions will be for a given time period. For most of the approaches we tried, we were not able to achieve better-than-baseline performance. The only exception was predicting the proportion of omissions using autoregressive integrated moving average models for one-step-ahead forecasts, and in this case time was the only predictor that mattered. Our data suggest that most of the language-internal predictors do have some effect on the variation, but the effect is not strong enough to yield reliable predictions.
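For readers unfamiliar with the forecasting setup, a minimal Python sketch of rolling one-step-ahead ARIMA forecasts is given below. This is an assumed setup, not the paper's exact specification: the toy proportion series and the (1, 1, 0) model order are illustrative choices only.

```python
# Minimal sketch (illustrative assumptions, not the paper's specification):
# rolling one-step-ahead forecasts of a yearly omission proportion with ARIMA,
# where time itself is effectively the only predictor.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

proportions = np.array([0.05, 0.06, 0.08, 0.11, 0.15, 0.21, 0.28, 0.36, 0.45, 0.53])

forecasts = []
for t in range(5, len(proportions)):                 # expanding training window
    model = ARIMA(proportions[:t], order=(1, 1, 0))  # fit on data up to year t-1
    fitted = model.fit()
    forecasts.append(fitted.forecast(steps=1)[0])    # predict the next year's proportion

mae = np.mean(np.abs(np.array(forecasts) - proportions[5:]))
print(f"one-step-ahead MAE: {mae:.3f}")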
In a recent article, Meylan and Griffiths (Meylan & Griffiths, 2021, henceforth, M&G) focus their attention on the significant methodological challenges that can arise when using large-scale linguistic corpora. To this end, M&G revisit a well-known result of Piantadosi, Tily, and Gibson (2011, henceforth, PT&G) who argue that average information content is a better predictor of word length than word frequency. We applaud M&G, who conducted a very important study that should be read by any researcher interested in working with large-scale corpora. The fact that M&G mostly failed to find clear evidence in favor of PT&G's main finding motivated us to test PT&G's idea on a subset of the largest archive of German language texts designed for linguistic research, the German Reference Corpus consisting of ∼43 billion words. We find only very little support for the primary data point reported by PT&G.
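To make the contrast between the two predictors concrete, the following Python sketch computes, on a toy corpus, each word type's average information content (mean surprisal given the preceding word) and correlates it and plain frequency with word length. This is a bare illustration of the quantity at issue, not PT&G's or M&G's actual code; the toy corpus and the smoothing-free bigram estimates are assumptions.

```python
# Minimal sketch: average information content of a word type as its mean
# surprisal -log2 p(word | previous word), estimated from raw bigram counts.
# Toy corpus and unsmoothed estimates are illustrative assumptions only.
import math
from collections import Counter
from scipy.stats import spearmanr

tokens = "the cat sat on the mat and the cat saw the dog".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

surprisal_lists = {}
for prev, w in zip(tokens, tokens[1:]):
    p = bigrams[(prev, w)] / unigrams[prev]          # p(w | prev) from raw counts
    surprisal_lists.setdefault(w, []).append(-math.log2(p))
info = {w: sum(s) / len(s) for w, s in surprisal_lists.items()}  # bits per occurrence

words = list(info)
lengths = [len(w) for w in words]
print(spearmanr(lengths, [info[w] for w in words]))                                 # length vs. information content
print(spearmanr(lengths, [-math.log2(unigrams[w] / len(tokens)) for w in words]))   # length vs. (log) frequency
```

On real corpora the question is which of the two correlations is stronger once corpus and preprocessing choices are controlled for, which is exactly where M&G's methodological concerns apply.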
The following guide provides a basic overview of the steps to consider when planning and carrying out an empirical study in German linguistics. We describe the general workflow and the underlying concepts in a general, model-like fashion and illustrate them with simple examples. A fuller elaboration based on examples from different linguistic research questions and fields, and thus more illustrations of how the individual steps can be implemented for specific research questions, can be found in the case studies in → Part III of this volume. More detailed discussions of the central concepts of empirical work in linguistics can be found in → Part VI of this volume. Suggestions for further reading are given at the end of the chapter.