Quantitative Linguistics
Document Type
- Article (21)
Has Fulltext
- yes (21)
Keywords
- Language statistics (9)
- German (8)
- COVID-19 (4)
- Corpus <linguistics> (4)
- Lexicostatistics (4)
- Online media (4)
- Diversity (4)
- Vocabulary (4)
- Entropy (2)
- Information theory (2)
Publication state
- Published version (15)
- Secondary publication (6)
- Postprint (3)
Review state
- Peer review (15)
- (Publisher's) editorial review (2)
- Peer review (1)
- Publisher's editorial review (1)
Publisher
- Leibniz-Institut für Deutsche Sprache (IDS) (4)
- De Gruyter (2)
- Springer Nature (2)
- Benjamins (1)
- Buske (1)
- Erich Schmidt (1)
- MDPI (1)
- MEITS (1)
- Peeters (1)
- RAM (1)
Computational language models (LMs), most notably exemplified by the widespread success of OpenAI's ChatGPT chatbot, show impressive performance on a wide range of linguistic tasks, thus providing cognitive science and linguistics with a computational working model to empirically study different aspects of human language. Here, we use LMs to test the hypothesis that languages with more speakers tend to be easier to learn. In two experiments, we train several LMs—ranging from very simple n-gram models to state-of-the-art deep neural networks—on written cross-linguistic corpus data covering 1293 different languages and statistically estimate learning difficulty. Using a variety of quantitative methods and machine learning techniques to account for phylogenetic relatedness and geographical proximity of languages, we show that there is robust evidence for a relationship between learning difficulty and speaker population size. However, contrary to expectations derived from previous research, our results suggest that languages with more speakers tend to be harder to learn.
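At the simple end of the model range described in this abstract, learning difficulty can be operationalized as held-out perplexity of an n-gram model. The following is a minimal sketch using a character-bigram model with add-one smoothing; the function name and smoothing choice are illustrative assumptions, not the study's actual pipeline:

```python
import math
from collections import Counter

def bigram_perplexity(train: str, test: str) -> float:
    """Character-bigram perplexity with add-one smoothing:
    higher perplexity ~ harder to predict ~ harder to learn."""
    vocab = set(train) | set(test)
    V = len(vocab)
    bigrams = Counter(zip(train, train[1:]))   # counts of (prev, next) pairs
    unigrams = Counter(train[:-1])             # counts of contexts
    log_prob = 0.0
    for a, b in zip(test, test[1:]):
        # smoothed conditional probability P(b | a)
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + V)
        log_prob += math.log2(p)
    n = len(test) - 1
    return 2 ** (-log_prob / n)
```

A test text whose bigrams were frequent in training scores a lower perplexity than one with unseen bigrams, which is the sense in which such scores index predictability.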
One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large-scale quantitative cross-linguistic analysis of written language by training a language model on more than 6500 different documents as represented in 41 multilingual text collections consisting of ~3.5 billion words or ~9.0 billion characters and covering 2069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
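The entropy index described here can be illustrated with the simplest possible estimator: the Shannon entropy of the empirical character distribution. This is only a sketch; the paper infers entropy from a trained language model, which yields a much tighter estimate than this unigram upper bound:

```python
import math
from collections import Counter

def char_entropy(text: str) -> float:
    """Shannon entropy (bits per character) of the empirical
    character distribution -- a crude upper bound on the
    entropy rate a stronger language model would estimate."""
    counts = Counter(text)
    n = len(text)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

A fully predictable text has entropy 0 bits/character; a text using k characters uniformly has entropy log2(k).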
Information theory can be used to assess how efficiently a message is transmitted on the basis of different symbolic systems. In this paper, I estimate the information-theoretic efficiency of written language for parallel text data in more than 1000 different languages, both on the level of characters and on the level of words as information-encoding units. The main results show that (i) the median efficiency is ∼29% on the character level and ∼45% on the word level, (ii) the efficiencies on the two levels are strongly correlated with each other, and (iii) efficiency tends to be higher for languages with more speakers.
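One plausible way to turn entropy into an efficiency figure, offered here purely as an illustrative assumption rather than the paper's exact definition, is the ratio of the empirical symbol entropy to the maximum entropy attainable with the same symbol inventory:

```python
import math
from collections import Counter

def encoding_efficiency(text: str) -> float:
    """Unigram efficiency sketch: empirical entropy divided by the
    maximum entropy log2(|inventory|) that would be reached if all
    observed symbols were used equally often. Assumes the text
    contains at least two distinct symbols."""
    counts = Counter(text)
    n = len(text)
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    h_max = math.log2(len(counts))
    return h / h_max
```

Under this reading, an efficiency below 100% reflects the redundancy of natural writing systems: some symbols are used far more often than others.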
The annual microcensus provides Germany’s most important official statistics. Unlike a census, it does not cover the whole population but a representative 1% sample of it. In 2017, the German microcensus asked a question on the language of the population, i.e. ‘Which language is mainly spoken in your household?’ Unfortunately, the question, its design, and its position within the microcensus questionnaire have several shortcomings, the main one being that multilingual repertoires cannot be captured. We therefore offer recommendations for improving the microcensus language question: first and foremost, the question (i.e. its wording, design, and answer options) should make it possible to count multilingual repertoires.
The coronavirus pandemic may be the largest crisis the world has had to face since World War II. It does not come as a surprise that it is also having an impact on language as our primary communication tool. In this short paper, we present three interconnected resources that are designed to capture and illustrate these effects on a subset of the German language: an RSS corpus of German-language newsfeeds (with freely available untruncated frequency lists), a continuously updated HTML page tracking the diversity of the vocabulary in the RSS corpus, and a Shiny web application that enables other researchers and the broader public to explore the corpus in terms of basic frequencies.
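Vocabulary diversity of the kind tracked by the HTML page can be summarized, in a minimal sketch, by a type-token ratio per time slice; the daily token lists below are hypothetical examples, not data from the RSS corpus:

```python
def type_token_ratio(tokens):
    """Lexical diversity: number of distinct word forms (types)
    divided by the total number of tokens."""
    return len(set(tokens)) / len(tokens)

# Hypothetical daily token lists: a narrowing news vocabulary
# shows up as a falling ratio of types to tokens.
day1 = "markets culture sports virus election weather".split()
day2 = "virus virus lockdown virus masks virus".split()
```

Comparing the ratio across days (here, day1 yields a higher value than day2) is one simple way to make a shrinking vocabulary visible over time.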
cOWIDplus Analyse is a continuously updated resource on the question of whether, and how strongly, the vocabulary of selected German online news reports is systematically narrowing during the coronavirus pandemic, and whether (and when) the vocabulary will broaden again after the crisis. In this article, the authors explain the research question behind the resource, the underlying data, the method, and the results obtained so far.
Since 2017, the German microcensus has included a question on the language of the population. The last language survey in a German census dates from 1939; accordingly, there are currently no meaningful language statistics for Germany. The new microcensus language question, however, has considerable shortcomings; it was evidently designed as a proxy question for measuring cultural integration. This paper discusses the questions and analyses their first results. Subsequently, other variants of language questions are presented, with particular attention to the exemplary language questions in the Canadian census. Finally, the language question of the IDS's Deutschland-Erhebung 2018 (Germany Survey 2018), including its results, is presented; alongside the microcensus, the Deutschland-Erhebung 2018 is so far the only representative language survey in Germany.