Communicating in social media and dealing with hypertexts is no longer a fringe phenomenon in 2020. The linguistic particularities of internet-based communication and social media are by now well researched and described, but in German grammars they have so far been treated marginally at best, with the exception of Hoffmann (2014). Even more recent approaches to text analysis, e.g. Ágel (2017), concentrate on formally stable, linearly organized written texts. The same applies to approaches that were primarily developed for assessing writing products in educational contexts.
In this paper, we use the 2012 log files of two German online dictionaries (the Digital Dictionary of the German Language and the German version of Wiktionary) and the 100,000 most frequent words in the Mannheim German Reference Corpus from 2009 to answer a question first asked by de Schryver et al. (2006): do dictionary users really look up frequent words? Using an approach to the comparison of log files and corpus data that differs completely from that of the aforementioned authors, we provide empirical evidence, contrary to the results of de Schryver et al. and Verlinde/Binon (2010), that the corpus frequency of a word can indeed be an important factor in determining what online dictionary users look up. Finally, we incorporate word class information readily available in Wiktionary into our analysis to improve our results considerably.
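The comparison behind this claim can be illustrated with a rank correlation between corpus frequencies and look-up counts. The sketch below is not the method used in the paper, and all words and counts are invented for illustration:

```python
# Toy illustration: does corpus frequency predict look-up frequency?
# All counts below are invented; this is not the paper's actual data.

corpus_freq = {"und": 912000, "gehen": 54000, "Haus": 31000,
               "Quarantäne": 900, "obsolet": 400}   # corpus counts
lookups     = {"und": 500, "gehen": 40, "Haus": 60,
               "Quarantäne": 10, "obsolet": 5}      # log-file look-ups

words = sorted(corpus_freq)

def rank(values):
    """Rank of each value (1 = smallest); the toy data contains no ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

rf = rank([corpus_freq[w] for w in words])
rl = rank([lookups[w] for w in words])

n = len(words)
d2 = sum((a - b) ** 2 for a, b in zip(rf, rl))
rho = 1 - 6 * d2 / (n * (n * n - 1))  # Spearman's rho (no ties)
print(f"Spearman rho = {rho:.2f}")    # → 0.90 for this toy data
```

A rho close to 1 would indicate that frequent words are indeed looked up often; real log-file data would of course require handling ties and much larger word lists.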
Languages employ different strategies to transmit structural and grammatical information. While, for example, grammatical dependency relationships are mainly conveyed by the ordering of words in languages like Mandarin Chinese or Vietnamese, word order is much less restricted in languages such as Inupiatun or Quechua, because these languages (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 different languages, spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word order information tend to rely less on word structure information, and vice versa. Put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that, despite differences in the way information is expressed, there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative the word order of a book, the less informative its word structure, and vice versa. We argue that this might suggest that, on the one hand, languages encode information in very different (but efficient) ways, while on the other hand, content-related and stylistic features are statistically encoded in very similar ways.
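As a rough illustration of the underlying idea (not the estimator used in the study), the information carried by word order can be proxied by how much worse a text compresses once its words are shuffled, since shuffling destroys sequential regularities while leaving word-internal structure intact:

```python
import random
import zlib

def compressed_size(tokens):
    """Length in bytes of the zlib-compressed token sequence."""
    return len(zlib.compress(" ".join(tokens).encode("utf-8"), 9))

# A small, deliberately repetitive synthetic text (illustration only).
text = ("the dog chased the cat because the cat stole the food "
        "and the dog wanted the food back ") * 20
tokens = text.split()

random.seed(0)
shuffled = tokens[:]
random.shuffle(shuffled)

orig = compressed_size(tokens)
shuf = compressed_size(shuffled)
# Shuffling destroys word-order regularities, so the shuffled version
# compresses worse; the gap is a crude proxy for word-order information.
print(f"original: {orig} bytes, shuffled: {shuf} bytes")
```

A language whose compressed size grows sharply under shuffling packs much of its information into word order; one whose size barely changes carries that information elsewhere, e.g. inside the word.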
In order to demonstrate why it is important to correctly account for the (serially dependent) structure of temporal data, we document an apparently spectacular relationship between population size and lexical diversity: for five out of seven investigated languages, there is a strong relationship between the population size of a country and the lexical diversity of the primary language spoken in that country. We show that this relationship is the result of a misspecified model that does not consider the temporal structure of the data, by presenting a similar but nonsensical relationship between the global annual mean sea level and lexical diversity. Since several recent studies present surprising links between economic, cultural, political and (socio-)demographic variables on the one hand and cultural or linguistic characteristics on the other, but seem to suffer from exactly this problem, we explain the cause of the misspecification and show that it has profound consequences. We demonstrate how a simple transformation of the time series can often solve problems of this type, and argue that evaluating the plausibility of a relationship is important in this context. We hope that our paper will help both researchers and reviewers to understand why it is important to use appropriate models for the analysis of data with a natural temporal ordering.
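The modelling problem can be reproduced in a few lines: two series that merely share an upward trend correlate almost perfectly in levels, while first differencing (one simple transformation of the kind mentioned above) makes the apparent relationship collapse. All numbers below are synthetic:

```python
# Synthetic demonstration: two unrelated series that share an upward
# trend correlate almost perfectly in levels, but the apparent
# relationship collapses after first differencing.
years = range(50)
population = [1000 + 20 * t + (7 * t) % 5 for t in years]   # trend + wiggle
sea_level  = [300 + 3 * t + (11 * t) % 7 for t in years]    # trend + wiggle

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    var_x = sum((a - mx) ** 2 for a in x)
    var_y = sum((b - my) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

def diff(x):
    """First differences: x[t+1] - x[t]."""
    return [b - a for a, b in zip(x, x[1:])]

r_levels = pearson(population, sea_level)
r_diffs = pearson(diff(population), diff(sea_level))
print(f"levels: r = {r_levels:.2f}, first differences: r = {r_diffs:.2f}")
```

In levels, the shared trend dominates and r is close to 1; after differencing, only the unrelated fluctuations remain and the correlation drops towards zero.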
Questions of design
(2014)
All lexicographers working on online dictionary projects who do not wish to use an established form of design for their online dictionary, or who simply have new kinds of lexicographic data to present, face the problem of which arrangement is best suited to the intended users of the dictionary. In this chapter, we present data on questions relating to the design of online dictionaries. This provides projects that use these or similar ways of presenting their lexicographic data with valuable information about how potential dictionary users assess and evaluate them. In addition, the answers to the corresponding open-ended questions show, detached from concrete design models, which criteria potential users value in a good online presentation. Clarity and an uncluttered look dominate in many answers, as does the possibility of customization, provided the latter does not come at the cost of overly complex usability.
The first international study (N=684) we conducted within our research project on online dictionary use included very general questions on the topic. In this chapter, we present the corresponding results on questions such as the use of printed versus online dictionaries, the types of dictionaries used, the devices used to access online dictionaries, and the willingness to pay for premium content. The data we collected show that our respondents use both printed and online dictionaries and, according to their self-reports, many different kinds of dictionaries. In this context, our results revealed some clear cultural differences: in German-speaking areas, spelling dictionaries are more common than in other linguistic areas, where thesauruses are widespread. Only a minority of our respondents are willing to pay for premium content, but most are prepared to accept advertising. Our results also show that our respondents mainly use dictionaries on big-screen devices, e.g. desktop computers or laptops.
Studying Lexical Dynamics and Language Change via Generalized Entropies: The Problem of Sample Size
(2020)
Recently, it was demonstrated that generalized entropies of order α, where α is a free parameter, offer novel and important opportunities to quantify the similarity of symbol sequences. Varying this parameter makes it possible to magnify differences between texts at specific scales of the corresponding word frequency spectrum. For the analysis of the statistical properties of natural languages, this is especially interesting because textual data are characterized by Zipf’s law: there are very few word types that occur very often (e.g., function words expressing grammatical relationships) and many word types with a very low frequency (e.g., content words carrying most of the meaning of a sentence). Here, this approach is studied systematically and empirically by analyzing the lexical dynamics of the German weekly news magazine Der Spiegel (consisting of approximately 365,000 articles and 237,000,000 words published between 1947 and 2017). We show that, analogous to most other measures in quantitative linguistics, similarity measures based on generalized entropies depend heavily on the sample size (i.e., text length). We argue that this makes it difficult to quantify lexical dynamics and language change, and show that standard sampling approaches do not solve this problem. We discuss the consequences of these results for the statistical analysis of languages.
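The family of measures can be illustrated with Rényi entropies, one standard form of generalized entropies of order α (the similarity measures in the paper are more involved). The toy corpus below is sampled from a Zipf-like distribution; the estimate for small α, which is sensitive to the many rare words, keeps growing with sample size:

```python
import math
import random
from collections import Counter

def renyi_entropy(tokens, alpha):
    """Rényi entropy (in bits) of order alpha of the word frequency
    distribution of a token sequence."""
    counts = Counter(tokens)
    n = len(tokens)
    probs = [c / n for c in counts.values()]
    if alpha == 1.0:  # Shannon entropy as the limiting case
        return -sum(p * math.log2(p) for p in probs)
    return math.log2(sum(p ** alpha for p in probs)) / (1 - alpha)

# Zipf-like toy corpus: the word of rank r is drawn with probability ~ 1/r.
random.seed(42)
vocab = [f"w{r}" for r in range(1, 5001)]
weights = [1 / r for r in range(1, 5001)]
corpus = random.choices(vocab, weights=weights, k=100_000)

# Entropy estimates for small alpha (sensitive to rare words) grow with
# sample size, while large alpha is dominated by the few most frequent
# words and stabilizes much faster.
for size in (1_000, 10_000, 100_000):
    sample = corpus[:size]
    print(size, round(renyi_entropy(sample, 0.5), 2),
          round(renyi_entropy(sample, 2.0), 2))
```

This sample-size dependence is exactly why comparing texts of different lengths with such measures is problematic: the estimate reflects how much text was seen, not only the underlying distribution.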