Refine
Year of publication
Document Type
- Article (27)
- Part of a Book (11)
- Other (3)
- Preprint (3)
- Book (1)
- Conference Proceeding (1)
Keywords
- German (15)
- Corpus <linguistics> (15)
- Language statistics (10)
- Vocabulary (10)
- COVID-19 (6)
- Online media (6)
- Dictionary (6)
- Data analysis (5)
- Lexicostatistics (5)
- Diversity (5)
Publication state
- Published version (46)
Review state
- Peer-Review (25)
- (Publisher's) editorial review (12)
Publisher
- Leibniz-Institut für Deutsche Sprache (IDS) (6)
- Cornell University (3)
- IDS-Verlag (3)
- MDPI (3)
- Buro van die WAT (2)
- De Gruyter (2)
- Springer (2)
- Springer Nature (2)
- de Gruyter (2)
- Buro van die Wat (1)
One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large-scale quantitative cross-linguistic analysis of written language by training a language model on more than 6,500 different documents as represented in 41 multilingual text collections consisting of ~3.5 billion words or ~9.0 billion characters and covering 2,069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
This paper presents the results of a survey on dictionary use in Europe, the largest survey of dictionary use to date with nearly 10,000 participants in nearly thirty countries. The paper focuses on the comparison of the results of the Slovenian participants with the results of the participants from other European countries. The comparisons are made both with the European averages and with the results from individual countries, in order to determine in which aspects Slovenian participants share similarities with other dictionary users (and non-users) around Europe, and in which aspects they differ. The findings show that in many ways the Slovenian users are similar to their European counterparts, with some noticeable exceptions, including a (much) stronger preference for digital dictionaries over print ones, above-average reliance on other people when a dictionary does not contain the relevant information, and the largest gap between the price of a dictionary and the amount users are willing to spend on one.
This contribution gives an overview of the methodological starting points of the project MIT.Qualität and presents some central findings on model building, corpus-linguistic analysis, and acceptability surveys within the language community. We show how existing models of text quality can be extended on the basis of an analysis of relevant advice literature. Two empirical case studies were carried out, both focusing on the establishment of textual coherence by means of the causal connector weil ('because'). We first present a contrastive corpus analysis. We then show how different task designs can be used to test various aspects of acceptability within the language community.
The starting point of this study was anecdotal evidence: one of the authors' children sat her Abitur in 2022 and, over her entire time at the Gymnasium, had read exactly one complete work ('Ganzschrift') by a female author: Die Judenbuche by Annette von Droste-Hülshoff. Undoubtedly a text worth reading, but could it really be that someone takes the Abitur in Germany in 2022, even chooses the advanced course in German, and otherwise reads no book by a female author in German class? The compulsory reading list for the German Abitur in the federal state in question likewise includes no novel and no drama by a female author among its recommended texts. Our curiosity piqued, we searched for a list of all the literature recommended for German classes at Gymnasien in Baden-Württemberg (where the anecdote took place) and found an extensive list on the website of the Ministry of Education containing 298 works. An analysis by author gender showed that 31 of the entries on this list are titles by female authors, i.e. around 10%.
It was recently suggested in a study published in Nature Human Behaviour that the historical loosening of American culture was associated with a trade-off between higher creativity and lower order. To this end, Jackson et al. generate a linguistic index of cultural tightness based on the Google Books Ngram corpus and use this index to show that American norms loosened between 1800 and 2000. While we remain agnostic toward a potential loosening of American culture and a statistical association with creativity/order, we show here that the methods used by Jackson et al. are neither suitable for testing the validity of the index nor for establishing possible relationships with creativity/order.
This contribution explores the relationship between the English CEFR (Common European Framework of Reference for Languages) vocabulary levels and user interest in English Wiktionary entries. User interest was operationalized as the number of views of these entries in Wikimedia server logs covering a period of four years (2019–2022). Our findings reveal a significant relationship between CEFR levels and user interest: entries classified at lower CEFR levels tend to attract more views, which suggests a greater user interest in more basic vocabulary. A multiple regression model controlling for other known or potential factors affecting interest (corpus frequency, polysemy, word prevalence, and age of acquisition) confirmed that lower CEFR levels attract significantly more views even after taking the other predictors into account. These findings highlight the importance of CEFR levels in predicting which words users are likely to look up, with implications for lexicography and the development of language learning materials.
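To make the modelling step concrete, the following is a minimal sketch of how such a regression could be set up in Python with pandas and statsmodels; the file name, column names, and variable coding are assumptions for illustration, not the study's actual data or model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per Wiktionary entry with its view count, CEFR level,
# and the covariates named in the abstract (illustrative column names).
df = pd.read_csv("wiktionary_views.csv")

# Treat the CEFR level as an ordered categorical predictor (A1 ... C2).
df["cefr"] = pd.Categorical(
    df["cefr"], categories=["A1", "A2", "B1", "B2", "C1", "C2"], ordered=True
)

# View counts and corpus frequencies are heavily skewed, so model them log-transformed.
model = smf.ols(
    "np.log1p(views) ~ C(cefr) + np.log1p(corpus_frequency)"
    " + polysemy + prevalence + age_of_acquisition",
    data=df,
).fit()
print(model.summary())

Coding the CEFR level as a categorical predictor keeps the level-by-level comparison interpretable after the covariates have been partialled out.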
Many studies on dictionary use presuppose that users do indeed consult lexicographic resources. However, little is known about what users actually do when they try to solve language problems on their own. We present an observation study in which learners of German were allowed to browse the web freely while correcting erroneous German sentences. In this paper, we focus on the multi-methodological approach of the study, especially the interplay between quantitative and qualitative approaches. In one example study, we show how the analysis of verbal protocols, the correction task and the screen recordings can reveal the effects of intuition, language (learning) awareness, and determination on the accuracy of the corrections. In another example study, we show how preconceived hypotheses about the problem at hand might hinder participants from arriving at the correct solution.
In the past two decades, more and more dictionary usage studies have been published, but most of them deal with questions related to what users appreciate about dictionaries, which dictionaries they use and what type of information they need in specific situations — presupposing that users actually consult lexicographic resources. However, language teachers and lecturers in linguistics often have the impression that students do not use enough high-quality dictionaries in their everyday work. With this in mind, we launched an international cooperation project to collect empirical data to evaluate what it is that students actually do while attempting to solve language problems. To this end, we applied a new methodological setting: screen recording in conjunction with a thinking-aloud task. The collected empirical data offers a broad insight into what users really do while they attempt to solve language-related tasks online.
cOWIDplus
(2020)
The coronavirus crisis is influencing the language of German-language online media. Our hypothesis is that the diversity of the vocabulary being used is narrowing. We also expect that, once the crisis has been 'overcome', the diversity of the vocabulary will settle back at a 'pre-pandemic level'. This second hypothesis can only be tested as time goes on.
cOWIDplus Analyse is a continuously updated resource addressing the question of whether and how strongly the vocabulary of selected German online press reports is being systematically narrowed during the coronavirus pandemic, and whether or when the vocabulary will broaden again after the crisis. In this article, the authors explain the research question behind the resource, the underlying data, the method, and the results obtained so far.
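As a rough illustration of how such a narrowing of vocabulary could be tracked over time, the sketch below computes a standardized type-token ratio per calendar week; the measure, the weekly grouping, and the placeholder data are assumptions for illustration, not necessarily the metric used in cOWIDplus Analyse.

def sttr(tokens, window=1000):
    """Standardized type-token ratio: mean type-token ratio over consecutive
    windows of fixed size, which makes samples of different sizes comparable."""
    ratios = []
    for start in range(0, len(tokens) - window + 1, window):
        chunk = tokens[start:start + window]
        ratios.append(len(set(chunk)) / window)
    return sum(ratios) / len(ratios) if ratios else float("nan")

# Hypothetical usage: `weekly_tokens` maps a calendar week to the tokenized
# press texts of that week (placeholder data, not the cOWIDplus feeds themselves).
weekly_tokens = {"2020-W10": ["die", "corona", "krise"], "2020-W11": ["der", "lockdown"]}
for week, tokens in sorted(weekly_tokens.items()):
    print(week, round(sttr(tokens), 3))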
cOWIDplus Viewer
(2020)
Eine europaweite Umfrage zu Wörterbuchbenutzung und -kultur. Ergebnisse der deutschen Teilnehmenden
(2018)
Everyday object, mediator of disputes, toy, national symbol, work aid, or just something that mainly academics are interested in? What role do monolingual dictionaries play today? To pursue these and other questions, we coordinated, together with Iztok Kosem (University of Ljubljana) and Robert Lew (Adam Mickiewicz University in Poznań), the largest Europe-wide survey on dictionary use and dictionary culture to date. Together with 26 'local' partners from all over Europe, we carried out this survey within the framework of the European Network of e-Lexicography (ENeL). The results of the study promise new insights into the societal status of dictionaries in many European countries. Because the data were collected as closely in parallel as possible in the participating countries, interesting comparisons of the local 'dictionary cultures' will also be possible. The survey focused on general monolingual dictionaries in the respective national language(s).
This article examines the contrasts and commonalities between languages for specific purposes (LSP) and their popularizations on the one hand and the frequency patterns of LSP register features in English and German on the other. For this purpose, corpora of expert-expert and expert-lay communication are annotated for part-of-speech and phrase structure information. On this basis, the frequencies of pre- and post-modifications in complex noun phrases are statistically investigated and compared for English and German. Moreover, using parallel and comparable corpora, it is tested whether English-German translations obey the register norms of the target language or whether the LSP frequency patterns of the source language 'shine through'. The results provide an empirical insight into language contact phenomena involving specialized communication.
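As a rough, dependency-based approximation of the counting step (the study itself relies on phrase-structure annotation), one could tally dependents to the left and right of noun heads, e.g. with spaCy; the model name and the example sentence are assumptions for illustration.

import spacy

# Assumes the German model is installed (python -m spacy download de_core_news_sm).
# Counting dependents to the left/right of a noun head is only a coarse stand-in
# for pre- and post-modification in a proper phrase-structure annotation.
nlp = spacy.load("de_core_news_sm")
doc = nlp("Die im letzten Jahr veröffentlichte Studie zur Wortschatzentwicklung ist umstritten.")

pre, post = 0, 0
for token in doc:
    if token.pos_ == "NOUN":
        pre += sum(1 for child in token.children if child.i < token.i and child.pos_ != "DET")
        post += sum(1 for child in token.children if child.i > token.i)
print(pre, post)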
Wiktionary is increasingly gaining influence in a wide variety of linguistic fields such as NLP and lexicography, and has great potential to become a serious competitor for publisher-based and academic dictionaries. However, little is known about the "crowd" that is responsible for the content of Wiktionary. In this article, we want to shed some light on selected questions concerning large-scale cooperative work in online dictionaries. To this end, we use quantitative analyses of the complete edit history files of the English and German Wiktionary language editions. Concerning the distribution of revisions over users, we show that — compared to the overall user base — only very few authors are responsible for the vast majority of revisions in the two Wiktionary editions. In the next step, we compare this distribution to the distribution of revisions over all the articles. The articles are subsequently analysed in terms of rigour and diversity, typical revision patterns through time, and novelty (the time since the last revision). We close with an examination of the relationship between corpus frequencies of headwords in articles, the number of article visits, and the number of revisions made to articles.
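A minimal sketch of how the concentration of revisions over users could be quantified from a flat export of such an edit-history file is given below; the file and column names are assumptions for illustration, not the format of the actual Wiktionary dumps.

import pandas as pd

# Hypothetical flat export of an edit-history dump: one row per revision,
# with the user name of the contributor.
revisions = pd.read_csv("enwiktionary_edit_history.csv", usecols=["user"])

per_user = revisions["user"].value_counts()      # revisions per user, descending
cum_share = per_user.cumsum() / per_user.sum()   # cumulative share of all revisions

top = max(1, int(round(len(per_user) * 0.01)))   # top 1% most active users
print(f"Top 1% of users account for {cum_share.iloc[top - 1]:.1%} of all revisions")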
A central goal of linguistics is to understand the diverse ways in which human language can be organized (Gibson et al. 2019; Lupyan/Dale 2016). In our contribution, we present results of a large-scale cross-linguistic analysis of the statistical structure of written language (Koplenig/Wolfer/Meyer 2023), in which we approach this question from an information-theoretic perspective. To this end, we trained a language model on more than 6,500 different documents as represented in 41 parallel/multilingual text collections, so-called corpora, consisting of ~3.5 billion words or ~9.0 billion characters and covering 2,069 different languages that are spoken as a native language by more than 90% of the world population, or ~46% of all languages that have a standardized written representation. Figure 1 shows that our database covers a large variety of different text types, e.g. religious texts, legalese texts, subtitles for various movies and talks, newspaper texts, web crawls, Wikipedia articles, or translated example sentences from a free collaborative online database. Furthermore, we use word frequency information from the Crúbadán project, which aims at creating text corpora for a large number of (especially under-resourced) languages (Scannell 2007). We statistically infer the entropy rate of each language model as an information-theoretic index of (un)predictability/complexity (Schürmann/Grassberger 1996; Takahira/Tanaka-Ishii/Dębowski 2016). Equipped with this database and information-theoretic estimation framework, we first evaluate the so-called 'equi-complexity hypothesis', the idea that all languages are equally complex (Sampson 2009). We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. This constitutes evidence against the equi-complexity hypothesis from an information-theoretic perspective. We then present, discuss and evaluate evidence for a complexity-efficiency trade-off that unexpectedly emerged when we analysed our database: high-entropy languages tend to need fewer symbols to encode messages and vice versa. Given that, from an information-theoretic point of view, the message length quantifies efficiency – the shorter the encoded message, the higher the efficiency (Gibson et al. 2019) – this indicates that human languages trade off efficiency against complexity. More explicitly, a higher average amount of choice/uncertainty per produced/received symbol is compensated by a shorter average message length. Finally, we present results that could point toward the idea that the absolute amount of information in parallel texts is invariant across different languages.
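For illustration only, the following sketch estimates a per-character entropy rate with a simple maximum-likelihood character n-gram model; the study itself relies on different, statistically inferred estimators (Schürmann/Grassberger 1996; Takahira/Tanaka-Ishii/Dębowski 2016), and the file paths are placeholders.

import math
from collections import Counter, defaultdict

def ngram_entropy_rate(text, order=3):
    """Plug-in estimate of the conditional entropy of the next character given the
    previous `order` characters, in bits per character (toy approximation only)."""
    context_counts = defaultdict(Counter)
    for i in range(order, len(text)):
        context_counts[text[i - order:i]][text[i]] += 1

    log_prob_sum, n = 0.0, 0
    for counts in context_counts.values():
        total = sum(counts.values())
        for count in counts.values():
            log_prob_sum += count * math.log2(count / total)
            n += count
    return -log_prob_sum / n if n else float("nan")

# Hypothetical usage on two parallel text files (paths are placeholders):
for path in ["sample_deu.txt", "sample_eng.txt"]:
    text = open(path, encoding="utf-8").read()
    print(path, round(ngram_entropy_rate(text), 3))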
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
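The fold-wise case study could, under assumptions about the file layout, be sketched as follows; the file names and the two-column format are illustrative, not the actual structure of the DeReKoGram distribution.

import pandas as pd

# Hypothetical layout: one TSV per fold with columns "lemma" and "frequency".
# The real DeReKoGram files are richer (POS, n-gram components); this only sketches
# the vocabulary-growth and hapax-legomena case study described above.
fold_files = [f"derekogram_1grams_fold{i:02d}.tsv" for i in range(1, 17)]

counts = pd.Series(dtype="float64")
for n_folds, path in enumerate(fold_files, start=1):
    fold = pd.read_csv(path, sep="\t", usecols=["lemma", "frequency"])
    # accumulate lemma frequencies over all folds included so far
    counts = counts.add(fold.groupby("lemma")["frequency"].sum(), fill_value=0)
    print(f"{n_folds} fold(s): vocabulary = {len(counts)}, hapax legomena = {(counts == 1).sum()}")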
Based on the privative derivational suffix -los, we test statements found in the literature on word formation using a – at least in this field – novel empirical basis: a list of affective-emotional ratings of base nouns and associated -los derivations. In addition to a frequency analysis based on the German Reference Corpus, we show that, in general, emotional polarity (so-called valence, positive vs. negative emotions) is reversed by suffixation with -los. This change is stronger for more polarized base nouns. The perceived intensity of emotion (so-called arousal) is generally lower for -los derivations than for base nouns. Finally, to capture the results theoretically, we propose a prototypical -los construction in the framework of Construction Morphology.
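A minimal sketch of how the polarity-reversal claim could be checked against such a ratings list is given below; the file name, column names, and the neutral point of the valence scale are assumptions for illustration.

import pandas as pd

# Hypothetical ratings table with one row per base noun / -los derivation pair and
# valence ratings centred at 0 (below 0 = negative emotion).
pairs = pd.read_csv("los_valence_ratings.csv")  # assumed columns: base, derivation, valence_base, valence_los

# Polarity counts as "reversed" when base and derivation lie on opposite sides of neutral.
reversed_polarity = (pairs["valence_base"] * pairs["valence_los"]) < 0
print(f"Polarity reversed for {reversed_polarity.mean():.1%} of pairs")

# The claim that the change is stronger for more polarized base nouns can be checked
# by correlating the absolute base valence with the size of the valence shift.
shift = (pairs["valence_base"] - pairs["valence_los"]).abs()
print(pairs["valence_base"].abs().corr(shift))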