Language attitudes matter; they influence people’s behaviour and decisions. Therefore, it is crucial to learn more about patterns in the way that languages are evaluated. One means of doing so is using a quantitative approach with data representative of a whole population, so that results mirror dispositions at a societal level. This kind of approach is adopted here, with a focus on the situation in Germany. The article consists of two parts. First, I will present some results of a new representative survey on language attitudes in Germany (the Germany Survey 2017). Second, I will show how language attitudes penetrate even seemingly objective data collection processes by examining the German Microcensus. In 2017, for the first time in eighty years, the German Microcensus included a question on language use ‘at home’. Unfortunately, however, the question was clearly tainted by language attitudes instead of being objective. As a result, the Microcensus significantly misrepresents the linguistic reality of different migrant languages spoken in Germany.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view//cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Classical null-hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test for judging the relevance of a result (e.g. a difference between two corpora) by answering the question of whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial rather than due to chance. In this paper, I outline such a test.
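The kind of classical significance test these two abstracts call into question can be sketched concretely. The following is a minimal illustration (not taken from either paper) of a Pearson chi-square test comparing the frequency of one word across two corpora; all counts are hypothetical, and for one degree of freedom the p-value can be obtained from the complementary error function.

```python
# Sketch: the classical chi-square test on word frequencies whose
# randomness assumption the abstracts above argue is not fulfilled
# in corpus data. All counts below are made-up illustrative numbers.
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# target-word count vs. remaining tokens in two hypothetical corpora
word_a, size_a = 120, 1_000_000
word_b, size_b = 90, 1_200_000

chi2 = chi2_2x2(word_a, size_a - word_a, word_b, size_b - word_b)
# for 1 degree of freedom: P(X > chi2) = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.2f}, p = {p:.5f}")
```

Such a test would report the difference as highly significant; the point of the papers above is that this inference presupposes random sampling from the population of interest, an assumption that collections of naturally occurring language data do not satisfy.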
Studying Lexical Dynamics and Language Change via Generalized Entropies: The Problem of Sample Size
(2019)
Recently, it was demonstrated that generalized entropies of order α offer novel and important opportunities to quantify the similarity of symbol sequences where α is a free parameter. Varying this parameter makes it possible to magnify differences between different texts at specific scales of the corresponding word frequency spectrum. For the analysis of the statistical properties of natural languages, this is especially interesting, because textual data are characterized by Zipf’s law, i.e., there are very few word types that occur very often (e.g., function words expressing grammatical relationships) and many word types with a very low frequency (e.g., content words carrying most of the meaning of a sentence). Here, this approach is systematically and empirically studied by analyzing the lexical dynamics of the German weekly news magazine Der Spiegel (consisting of approximately 365,000 articles and 237,000,000 words that were published between 1947 and 2017). We show that, analogous to most other measures in quantitative linguistics, similarity measures based on generalized entropies depend heavily on the sample size (i.e., text length). We argue that this makes it difficult to quantify lexical dynamics and language change and show that standard sampling approaches do not solve this problem. We discuss the consequences of the results for the statistical analysis of languages.
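A generalized entropy of order α can be illustrated with a short sketch. The Rényi form is used here as an assumption; the paper's exact similarity measure may differ. Varying α reweights frequent versus rare word types, which is why the parameter can magnify differences at specific scales of a Zipfian frequency spectrum.

```python
# Sketch (assumption: Renyi-type generalized entropy of order alpha).
# Small alpha emphasizes the many rare types; large alpha emphasizes
# the few very frequent types of a Zipfian word frequency spectrum.
import math
from collections import Counter

def renyi_entropy(tokens, alpha):
    """Generalized (Renyi) entropy of order alpha of a token sequence."""
    counts = Counter(tokens)
    n = len(tokens)
    probs = [c / n for c in counts.values()]
    if alpha == 1.0:  # Shannon entropy as the alpha -> 1 limit
        return -sum(p * math.log(p) for p in probs)
    return math.log(sum(p ** alpha for p in probs)) / (1.0 - alpha)

# toy token sequence standing in for a (much longer) text sample
text = "the quick brown fox jumps over the lazy dog the fox".split()
for alpha in (0.5, 1.0, 2.0):
    print(alpha, round(renyi_entropy(text, alpha), 3))
```

The entropy decreases monotonically in α, and, as the abstract stresses, any such estimate computed from a finite sample depends heavily on the sample size, which is the core problem the paper investigates.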
This contribution presents a quantitative approach to speech, thought and writing representation (ST&WR) and steps towards its automatic detection. Automatic detection is necessary for studying ST&WR in a large number of texts and thus identifying developments in form and usage over time and in different types of texts. The contribution summarizes results of a pilot study: First, it describes the manual annotation of a corpus of short narrative texts in relation to linguistic descriptions of ST&WR. Then, two different techniques of automatic detection – a rule-based and a machine learning approach – are described and compared. Evaluation of the results shows success with automatic detection, especially for direct and indirect ST&WR.
Using selected morphosyntactic phenomena as examples, the studies in this volume show how a corpus-linguistic approach can be used to describe the diversity and variability of language use in greater detail than was previously possible. The starting point is the idea that linguistic variation is an integral part of (standard) language and must therefore also be captured descriptively. The first aim is to describe as precisely as possible the distribution and frequency of the different realizations of selected variables. A comprehensive description of a variation phenomenon also involves identifying and weighting the factors that govern the distribution of the variants. In this context, hypotheses from the relevant research literature are tested using modern statistical methods. In addition, the studies contain an exploratory component concerned with uncovering new patterns, regularities, and linguistic relationships. In doing so, various corpus-linguistic and statistical approaches and methods are tried out and evaluated.
Large-scale empirical evidence indicates a fascinating statistical relationship between the estimated number of users of a language and its linguistic and statistical structure. In this context, the linguistic niche hypothesis argues that this relationship reflects negative selection against morphological paradigms that are hard for adults to learn, because languages with large numbers of speakers are assumed to be typically spoken and learned by greater proportions of adults. In this paper, this conjecture is tested empirically for more than 2000 languages. The results call into question the idea that non-native speakers shape the grammatical and statistical structure of languages, as it is demonstrated that the relative proportion of non-native speakers does not significantly correlate with either morphological or information-theoretic complexity. While large numbers of adult learners/speakers thus do not seem to affect the (grammatical or statistical) structure of a language, the results suggest that there is indeed a relationship between the number of speakers and (especially) information-theoretic complexity, i.e. entropy rates. A potential explanation for the observed relationship is discussed.
Diachronic changes in the lexicon are usually studied by example, through specific phenomena or phenomenon areas. We address the question of whether and how processes of change can also be measured at a global level, i.e. without committing to particular sections of the lexicon. To investigate this question, we use the Spiegel corpus, which contains all issues of the weekly magazine since 1947. We discuss fundamental challenges that must be solved along the way, such as the distribution of linguistic data and the consequences of differing subcorpus sizes, i.e. in this concrete case the varying size of the Spiegel corpus over time. We present a method, accompanied by a "litmus test" for checking the results, that allows us to trace processes of lexical change quantitatively down to the micro level, i.e. between two months or even weeks.