This paper explores how attitudes affect the seemingly objective process of counting speakers of varieties using the example of Low German, Germany’s sole regional language. The initial focus is on the basic taxonomy of classifying a variety as a language or a dialect. Three representative surveys then provide data for the analysis: the Germany Survey 2008, the Northern Germany Survey 2016, and the Germany Survey 2017. The results of these surveys indicate that there is no consensus concerning the evaluation of Low German’s status and that attitudes towards Low German are related to, for example, proficiency in the language. These attitudes are shown to matter when counting speakers of Low German and investigating the status it has been accorded.
Studying Lexical Dynamics and Language Change via Generalized Entropies: The Problem of Sample Size
(2020)
Recently, it was demonstrated that generalized entropies of order α offer novel and important opportunities to quantify the similarity of symbol sequences where α is a free parameter. Varying this parameter makes it possible to magnify differences between different texts at specific scales of the corresponding word frequency spectrum. For the analysis of the statistical properties of natural languages, this is especially interesting, because textual data are characterized by Zipf’s law, i.e., there are very few word types that occur very often (e.g., function words expressing grammatical relationships) and many word types with a very low frequency (e.g., content words carrying most of the meaning of a sentence). Here, this approach is systematically and empirically studied by analyzing the lexical dynamics of the German weekly news magazine Der Spiegel (consisting of approximately 365,000 articles and 237,000,000 words that were published between 1947 and 2017). We show that, analogous to most other measures in quantitative linguistics, similarity measures based on generalized entropies depend heavily on the sample size (i.e., text length). We argue that this makes it difficult to quantify lexical dynamics and language change and show that standard sampling approaches do not solve this problem. We discuss the consequences of the results for the statistical analysis of languages.
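To make the idea concrete, here is a minimal R sketch (illustrative only, not the measures used in the study) of a generalized entropy of order α computed from word frequencies; varying alpha shifts the weight between the few very frequent and the many rare word types.

    # Rényi (generalized) entropy of order alpha for a word frequency distribution.
    renyi_entropy <- function(freqs, alpha) {
      p <- freqs / sum(freqs)
      if (abs(alpha - 1) < 1e-12) {
        return(-sum(p * log(p)))          # the limit alpha -> 1 recovers Shannon entropy
      }
      log(sum(p^alpha)) / (1 - alpha)
    }

    tokens <- tolower(unlist(strsplit("the cat sat on the mat and the dog sat too", " ")))
    freqs  <- as.numeric(table(tokens))
    # small alpha gives more weight to the many rare types, large alpha to the few frequent ones
    sapply(c(0.5, 1, 2), function(a) renyi_entropy(freqs, a))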
Large-scale empirical evidence indicates a fascinating statistical relationship between the estimated number of users of a language and its linguistic and statistical structure. In this context, the linguistic niche hypothesis argues that this relationship reflects a negative selection against morphological paradigms that are hard to learn for adults, because languages with a large number of speakers are assumed to be typically spoken and learned by greater proportions of adults. In this paper, this conjecture is tested empirically for more than 2000 languages. The results question the idea of the impact of non-native speakers on the grammatical and statistical structure of languages, as it is demonstrated that the relative proportion of non-native speakers does not significantly correlate with either morphological or information-theoretic complexity. While it thus seems that large numbers of adult learners/speakers do not affect the (grammatical or statistical) structure of a language, the results suggest that there is indeed a relationship between the number of speakers and (especially) information-theoretic complexity, i.e. entropy rates. A potential explanation for the observed relationship is discussed.
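As a purely illustrative sketch of the kind of correlation analysis described (all numbers below are invented, not the paper's data), a rank correlation between speaker counts and a complexity score could be computed in R like this:

    speakers   <- c(1.2e9, 3.5e8, 7.7e7, 5.0e6, 3.0e5, 2.0e4)   # invented native-speaker counts
    complexity <- c(6.2, 6.5, 7.1, 7.6, 7.4, 8.0)               # invented complexity scores
    cor.test(log10(speakers), complexity, method = "spearman")  # rank correlation with p-value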
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
This contribution presents a quantitative approach to speech, thought and writing representation (ST&WR) and steps towards its automatic detection. Automatic detection is necessary for studying ST&WR in a large number of texts and thus identifying developments in form and usage over time and in different types of texts. The contribution summarizes results of a pilot study: First, it describes the manual annotation of a corpus of short narrative texts in relation to linguistic descriptions of ST&WR. Then, two different techniques of automatic detection – a rule-based and a machine learning approach – are described and compared. Evaluation of the results shows success with automatic detection, especially for direct and indirect ST&WR.
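As a toy illustration of what a rule-based component might look like (the rules below are invented for this sketch and are not those of the pilot study), direct speech can be flagged by quotation marks and indirect speech by a reporting verb followed by a dass-clause:

    sentences <- c(
      '"Ich komme morgen", sagte sie.',
      "Er meinte, dass er morgen komme.",
      "Sie ging nach Hause."
    )
    reporting_verbs <- "(sagte|meinte|fragte|antwortete|rief)"
    direct   <- grepl('["\u201E\u201C].+["\u201E\u201C]', sentences)           # quoted material present
    indirect <- !direct & grepl(paste0(reporting_verbs, ",? dass"), sentences) # reporting verb + dass-clause
    data.frame(sentence = sentences, direct = direct, indirect = indirect)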
Using selected morphosyntactic phenomena as examples, the studies in this volume show how a corpus-linguistic approach can be used to describe the diversity and variability of language use in greater detail than was previously possible. The starting point is the idea that linguistic variation must be regarded as an integral part of the (standard) language and therefore also has to be captured descriptively. The first aim is to describe the distribution and frequency of the different realisations of selected variables as precisely as possible. A comprehensive description of a variation phenomenon also includes identifying and weighting the factors that govern the distribution of the variants. In this context, hypotheses from the relevant research literature are tested using modern statistical methods. In addition, the studies contain an exploratory component concerned with discovering new patterns, regularities and linguistic relationships. To this end, various corpus-linguistic and statistical approaches and procedures are tried out and evaluated.
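A compact sketch of how the weighting of such factors is commonly modelled (variable names and data are invented here; the individual studies use their own variables and corpora) is a logistic regression of the variant choice on candidate factors:

    set.seed(3)
    d <- data.frame(
      variant  = rbinom(500, 1, 0.4),                           # 1 = variant A, 0 = variant B (invented)
      register = sample(c("spoken", "written"), 500, replace = TRUE),
      region   = sample(c("north", "south"), 500, replace = TRUE)
    )
    m <- glm(variant ~ register + region, data = d, family = binomial)
    summary(m)   # coefficient signs and sizes indicate direction and weight of each factor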
Quantitatively oriented empirical linguistics typically aims to take large amounts of linguistic material into view at once and, by means of suitable methods of analysis, both to discover new phenomena and to investigate known phenomena more systematically. The aim of our contribution is to reflect methodologically, on the basis of two exemplary research questions, on where the quantitative-empirical approach to the analysis of lexical data really works as hoped and where there may even be inherent, systemic limits. For this purpose, we pick out two very different research questions: on the one hand, the near-real-time analysis of productive processes of lexical change, and on the other hand, the trade-off between word order and word structure regularity in the languages of the world. These two research questions are located at very different levels of abstraction. We hope, however, that they allow us to show across a broad range at which levels the quantitative analysis of lexical data can take place. In addition, we would like to use these very different analyses to reflect on the possibilities and limits of the quantitative approach and thus to clarify the interpretive power of the methods.
Speakers have (often unconscious) opinions about and attitudes towards languages and varieties. One way of capturing such language attitudes is via quantitative methods (questionnaires with open and closed questions), which allow data to be collected from large groups, ideally in a representative way. This contribution presents such surveys conducted at the Institut für Deutsche Sprache and explains in more detail two instruments of language attitude research that were used in them: the Allgemeines Sprachbewertungsinstrument (ASBI) and the Attitudes-Towards-Languages scale (AToL). It is also shown how dialectometric analysis and visualisation methods can be applied to attitude data.
In step with the growing influence of quantitative methods in linguistics, the appealing and appropriate visualisation of linguistic data is becoming ever more important. R is a flexible and free environment for carrying out statistical analyses that offers numerous options for data visualisation and is very well suited to large data sets. Statistical analyses and visualisations of data are thus interlinked within a single environment. Thanks to the numerous add-on packages, up-to-date methods for analysing and presenting (linguistic) data continue to be available.
The contribution provides a strongly application-oriented introduction to the program and, with the help of many practical exercises and worked examples, lays the foundations for independently developing one's own skills in working with the software. In addition to a short, more theoretically oriented introduction to exploratory and explanatory strategies for visualising data, various packages that can be used for visualisation in R are presented.
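By way of illustration, a minimal example with ggplot2 (one widely used visualisation package; the contribution's own choice of packages may differ) plots a small set of invented word frequencies:

    library(ggplot2)

    freq <- data.frame(
      word = c("der", "die", "und", "Sprache", "Korpus"),   # invented counts for illustration
      n    = c(5400, 5100, 4800, 120, 45)
    )
    ggplot(freq, aes(x = reorder(word, n), y = n)) +
      geom_col() +          # bar chart of raw counts
      coord_flip() +        # horizontal bars for readable labels
      labs(x = "word type", y = "frequency")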
Visualisations play an important role in the research process across the sciences. They serve to illustrate findings, but also act as an independent means of gaining insight. Such visualisations are important in linguistics as well, for example in the form of maps, tree graphs and concept networks. In corpus-linguistic work, exploratory visualisations are often an important means of making the data surveyable and interpretable. The book reflects on the theoretical foundations of scientific visualisations in linguistics, presents practical examples and also introduces visualisation tools.
Who understands Low German today and who can speak it? Who makes use of media and cultural events in Low German? What images do people in northern Germany associate with Low German and what is their view of their regional language?
These and further questions are answered in this brochure with the help of representative data collected in a telephone survey of a total of 1,632 people from eight federal states (Bremen, Hamburg, Lower Saxony, Mecklenburg-West Pomerania and Schleswig-Holstein as well as Brandenburg, North Rhine-Westphalia and Saxony-Anhalt).
In the NLP literature, adapting a parser to new text with properties different from the training data is commonly referred to as domain adaptation. In practice, however, the differences between texts from different sources often reflect a mixture of domain and genre properties, and it is by no means clear what impact each of those has on statistical parsing. In this paper, we investigate how differences between articles in a newspaper corpus relate to the concepts of genre and domain and how they influence parsing performance of a transition-based dependency parser. We do this by applying various similarity measures for data point selection and testing their adequacy for creating genre-aware parsing models.
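One similarity measure that could serve for such data point selection is cosine similarity over word frequency vectors; the sketch below is a generic illustration, not the specific measures compared in the paper:

    cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

    term_freqs <- function(text) {
      tokens <- tolower(unlist(strsplit(text, "[^[:alpha:]]+")))
      table(tokens[tokens != ""])
    }
    # align two frequency tables on a shared vocabulary, filling absent words with 0
    vec <- function(tf, vocab) { v <- as.numeric(tf[vocab]); v[is.na(v)] <- 0; v }

    target  <- term_freqs("The court ruled on the appeal yesterday.")
    article <- term_freqs("The minister commented on the ruling and the appeal.")
    vocab   <- union(names(target), names(article))
    cosine_sim(vec(target, vocab), vec(article, vocab))   # higher = more similar vocabulary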
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view/j/cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Languages employ different strategies to transmit structural and grammatical information. While, for example, grammatical dependency relationships in sentences are mainly conveyed by the ordering of the words for languages like Mandarin Chinese, or Vietnamese, the word ordering is much less restricted for languages such as Inupiatun or Quechua, as these languages (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 different languages that are spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word order information tend to rely less on word structure information and vice versa. Or put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that, despite differences in the way information is expressed, there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative the word order of the book, the less informative its word structure and vice versa. We argue that this might suggest that, on the one hand, languages encode information in very different (but efficient) ways. On the other hand, content-related and stylistic features are statistically encoded in very similar ways.
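The general idea of quantifying word order information can be sketched with a compression heuristic (a rough illustration only, not the estimators used in the study): compare how well a compressor handles a text before and after its word order is destroyed by shuffling.

    set.seed(42)
    words <- rep(unlist(strsplit("in the beginning god created the heaven and the earth", " ")), 200)
    bytes <- function(x) length(memCompress(charToRaw(paste(x, collapse = " ")), type = "gzip"))
    bytes(sample(words)) - bytes(words)   # extra bytes needed once word order regularities are gone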
In order to demonstrate why it is important to correctly account for the (serially dependent) structure of temporal data, we document an apparently spectacular relationship between population size and lexical diversity: for five out of seven investigated languages, there is a strong relationship between the population size of a country and the lexical diversity of its primary language. We show that this relationship is the result of a misspecified model that does not consider the temporal aspect of the data by presenting a similar but nonsensical relationship between the global annual mean sea level and lexical diversity. Given the fact that in the recent past, several studies were published that present surprising links between different economic, cultural, political and (socio-)demographic variables on the one hand and cultural or linguistic characteristics on the other hand, but seem to suffer from exactly this problem, we explain the cause of the misspecification and show that it has profound consequences. We demonstrate how a simple transformation of the time series can often solve problems of this type and argue that the evaluation of the plausibility of a relationship is important in this context. We hope that our paper will help both researchers and reviewers to understand why it is important to use special models for the analysis of data with a natural temporal ordering.
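A minimal R illustration of the pitfall, and of the kind of simple transformation (differencing) the abstract alludes to (toy simulated data, not the series analysed in the paper): two independent random walks look strongly related in their levels, while the apparent relationship disappears once the series are differenced.

    set.seed(7)
    x <- cumsum(rnorm(70))   # e.g. yearly values of a demographic indicator (simulated)
    y <- cumsum(rnorm(70))   # e.g. yearly values of a lexical diversity measure (simulated)
    cor.test(x, y)                 # levels: often a spuriously "significant" correlation
    cor.test(diff(x), diff(y))     # year-to-year changes: the apparent relationship vanishes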
I present a study using eye-tracking-while-reading data from participants reading German jurisdictional texts. I am particularly interested in nominalisations. It can be shown that nominalisations are read significantly longer than other nouns and that this effect is quite strong. Furthermore, the results suggest that nouns are read faster in reformulated texts. In the reformulations, nominalisations were transformed into verbal structures. The reformulations did not lead to increased processing times of the verbal constructions, but the reformulated texts were read faster overall. Where appropriate, the results are compared to a previous study by Hansen et al. (2006) that used the same texts but a different methodology and statistical analysis.