One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large-scale quantitative cross-linguistic analysis of written language by training a language model on more than 6,500 different documents as represented in 41 multilingual text collections, consisting of ~3.5 billion words or ~9.0 billion characters and covering 2,069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
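The entropy-as-complexity idea can be illustrated with a toy calculation. The sketch below is an assumption of this summary, not the study's pipeline (which trains a full language model and statistically infers its entropy): it uses plain character-unigram entropy as a crude upper bound on prediction complexity.

```python
from collections import Counter
from math import log2

def unigram_entropy(text: str) -> float:
    """Shannon entropy (bits per character) of the character unigram
    distribution -- a crude stand-in for model-based prediction complexity."""
    counts = Counter(text)
    n = len(text)
    return -sum(c / n * log2(c / n) for c in counts.values())

# A text drawn from two equiprobable symbols needs 1 bit/char;
# one drawn from eight equiprobable symbols needs 3 bits/char.
print(unigram_entropy("abababab"))  # 1.0
print(unigram_entropy("abcdefgh"))  # 3.0
```

A real estimate would condition on context (as a language model does), which can only lower these unigram figures.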
In this paper, we address inconsistencies in dictionary information and show how different corpus methods and computer tools can assist in providing systematic cross-referencing. We raise the question of how hyperlinking in an electronic reference work can be approached systematically in order to guarantee consistent symmetrical links between synonyms or antonyms. Firstly, it is argued that working with a comprehensive corpus alone does not ensure consistent cross-referencing. It is shown that a top-down corpus-driven linguistic analysis likewise does not guarantee the lexicographic documentation of binary lexico-semantic relations covered by corpus data, as proposed by Paradis/Willners (2006a, b). Secondly, with the help of dictionary examples taken from elexiko (an online dictionary of contemporary German), we demonstrate how a combination of corpus-driven and corpus-based procedures enables lexicographers to exploit corpus material systematically and in more depth than either method alone. We also discuss where and why lexicographers remain prone to inconsistencies in the editing process, irrespective of the underlying corpus methodology. Finally, we introduce a cross-reference management tool that has been developed for elexiko and explain its technological prerequisites and implications. This software supports lexicographers in detecting existing and missing references from and to a specific headword, and it offers options for correcting discrepancies automatically and conveniently. Overall, we suggest a method that combines linguistic competence, complementary corpus approaches and additional software to ensure that links between synonymic and antonymic pairings are given in both directions.
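The symmetry check at the heart of such a cross-reference tool can be sketched in a few lines. The function below is a hypothetical illustration, not elexiko's actual software: it scans a toy link table for synonym/antonym references that are not given in both directions.

```python
def asymmetric_links(links: dict[str, set[str]]) -> list[tuple[str, str]]:
    """Return every directed link (a, b) whose reverse link (b, a) is
    missing -- the discrepancies a lexicographer would need to fix."""
    return sorted(
        (a, b)
        for a, targets in links.items()
        for b in targets
        if a not in links.get(b, set())
    )

# Toy data: 'schnell' links to 'rasch', but 'rasch' does not link back,
# and 'flink' links to 'schnell' without a reverse link either.
demo = {"schnell": {"rasch"}, "rasch": set(), "flink": {"schnell"}}
print(asymmetric_links(demo))  # [('flink', 'schnell'), ('schnell', 'rasch')]
```

A production tool would additionally distinguish link types (synonym vs. antonym) and suggest automatic corrections, as the abstract describes.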
According to a widespread conception, quantitative linguistics will eventually be able to explain empirical quantitative findings (such as Zipf’s Law) by deriving them from highly general stochastic linguistic ‘laws’ that are assumed to be part of a general theory of human language (cf. Best (1999) for a summary of possible theoretical positions). Due to their formal proximity to methods used in the so-called exact sciences, theoretical explanations of this kind are assumed to be superior to the supposedly descriptive-only approaches of linguistic structuralism and its successors. In this paper I shall try to argue that on close inspection such claims turn out to be highly problematic, both on linguistic and on science-theoretical grounds.
On 12 May 1965, the State of Israel and the Federal Republic of Germany officially established diplomatic relations. More than 15 years after the founding of the two states and 20 years after the end of the Shoah, this brought a complex process of slow political rapprochement to a by no means self-evident conclusion. The fiftieth anniversary of this event in 2015 was the occasion for numerous events worldwide, above all in Israel and Germany, documented on an official bilateral website <www.de50il.org/> (accessed: 6 November 2017). As part of the anniversary celebrations, a first version of the „Wörterbuch deutscher Lehnwörter im Hebräischen“ (dictionary of German loanwords in Hebrew) by Uriel Adiv was officially released in the „Lehnwortportal Deutsch“ of the IDS at a festive evening event at the Jewish Museum Berlin on 30 September 2015. A second version, substantially revised and improved by co-author Jakob Mendel, went online in May 2017. This contribution presents some background on the German loanword vocabulary in Modern Hebrew and sheds light on the genesis of the work and its place in the loanword-lexicographic publication platform „Lehnwortportal Deutsch“ <http://lwp.ids-mannheim.de/> (accessed: 6 November 2017).
The present study examines the dynamics of the kanji combinations that form common (or general) and proper nouns in Japanese. The following three results were obtained. First, the degree distributions result from two similar mechanisms, each a steady state of a birth-and-death process with different birth and death rates, yielding a positive negative binomial distribution for proper nouns and a positive Waring distribution for common nouns. Second, all rank-frequency distributions follow the negative hypergeometric distribution, which is used very frequently in ranking problems. Third, the building of kanji compounds follows a disassortative strategy: the higher the outdegree of a kanji, the more it prefers kanji with lower indegrees. A linear dependence can be observed with common nouns, whereas the relationship between compounded kanji is rather curvilinear with proper nouns. The actual analytical expression is not yet known.
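A minimal sketch of the first result's key ingredient, the positive (zero-truncated) negative binomial distribution: the untruncated distribution is renormalised after dropping x = 0, since a kanji that forms no compound never enters the data. The parameters used below are arbitrary demo values, not the values fitted to the kanji data.

```python
from math import comb

def pos_nbinom_pmf(x: int, k: int, p: float) -> float:
    """Positive (zero-truncated) negative binomial on x = 1, 2, ...

    Untruncated NB: C(x + k - 1, x) * p**k * (1 - p)**x; the x = 0 mass
    p**k is removed and the remainder rescaled to sum to one.
    """
    if x < 1:
        return 0.0
    nb = comb(x + k - 1, x) * (p ** k) * ((1 - p) ** x)
    return nb / (1 - p ** k)

# The truncated pmf is a proper distribution on {1, 2, ...}.
total = sum(pos_nbinom_pmf(x, 3, 0.4) for x in range(1, 200))
print(round(total, 6))  # 1.0
```

Fitting k and p to observed compound counts (e.g. by maximum likelihood) is what the study does for the proper-noun data; the positive Waring case for common nouns is analogous.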
Sprachkritik, dahinsickernd
(2007)
Three popular collections of essays concerning correct language use in German are reviewed from a linguist’s point of view. It is claimed that the overall picture of language that Sick conveys to the layperson is inadequate; in addition, the author fails to reflect explicitly on the purpose and consequences of his prescriptive approach to language use.
Languages employ different strategies to transmit structural and grammatical information. While, for example, grammatical dependency relationships in sentences are mainly conveyed by the ordering of words in languages like Mandarin Chinese or Vietnamese, word ordering is much less restricted in languages such as Inupiatun or Quechua, as these languages (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 different languages that are spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word order information tend to rely less on word structure information, and vice versa. Put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that, despite differences in the way information is expressed, there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative the word order of the book, the less informative its word structure, and vice versa. We argue that this might suggest that, on the one hand, languages encode information in very different (but efficient) ways, while, on the other hand, content-related and stylistic features are statistically encoded in very similar ways.
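The word-order side of this trade-off can be illustrated with a compression-based sketch. The study itself infers information from a trained language model; the toy function below substitutes zlib as a crude entropy proxy, an assumption of this summary: if word order carries information, shuffling the words should make the text compress worse.

```python
import random
import zlib

def compressed_size(words: list) -> int:
    """Compressed size in bytes of the space-joined text (zlib level 9)."""
    return len(zlib.compress(" ".join(words).encode("utf-8"), 9))

def word_order_information(words: list, seed: int = 0) -> int:
    """Bytes of regularity destroyed by shuffling word order.

    A positive value means the original ordering carried compressible
    structure that the shuffle destroyed.
    """
    shuffled = list(words)
    random.Random(seed).shuffle(shuffled)
    return compressed_size(shuffled) - compressed_size(words)

# A toy "language" with highly regular word order: shuffling it should
# yield a strictly positive information estimate.
text = ("the cat sat on the mat and the dog sat on the rug " * 40).split()
print(word_order_information(text) > 0)  # True
```

A parallel measure for word structure (e.g. compressing with word-internal characters scrambled) would give the second axis of the trade-off; comparing the two across languages is the paper's large-scale analysis.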
Open peer commentary on the target article “Who Conceives of Society?” by Ernst von Glasersfeld. Excerpt: I will focus on one crucial step in von Glasersfeld’s argumentation, viz. his view that every individual constructs his own private meanings (understood as conceptual structures or elements thereof) for linguistic expressions, so that linguistic interaction and even communication in general is based on a notion of compatibility between different speakers’ private conceptual schemes. The central question here is: “Just what does it mean that different private conceptual schemes (private meanings) are compatible, or what constitutes a viable criterion to this end?” As von Glasersfeld himself stresses twice (§28, §37), the criteria to be looked for can only be “public,” residing in properties of verbal and non-verbal actions of the interacting individuals, properties that can be sensed and processed by the participating system.
This paper deals with the distribution of word length in short native mythological and historical Eskimo narrative texts. To my knowledge, no Eskimo-Aleut data have been the object of quantitative linguistic investigation so far. Due to the strong linguistic and stylistic homogeneity of the examined texts, it was assumed that these texts can be subsumed under a single law of word length distribution, if the word length distribution of a text is considered as a function of certain of its properties, such as author, language, and genre. So far, word length distribution in texts of a wide variety of languages and genres has been demonstrated to follow distributions of the compound Poisson family of discrete probability distributions. In view of the morphological idiosyncrasies of the Eskimo language in general, which are responsible for an unusually high mean word length of about 4.5 to 5.2 syllables per word in the texts, it is interesting to see whether Eskimo texts show a significantly different behaviour with respect to word length. The results demonstrate that the Eskimo data employed in this study can be fitted well by the Hyperpoisson distribution. Two further discrete probability distributions are then deduced from certain morphology-based assumptions about Eskimo, and it turns out that most of the Eskimo data can be fitted by these two distributions as well. Finally, the question to what extent these results point to a more grammar-oriented theory of word length is discussed.
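The Hyperpoisson distribution mentioned here is straightforward to write down: P(X = x) = a^x / ((b)_x * 1F1(1; b; a)), where (b)_x is the rising factorial and 1F1 the confluent hypergeometric normalising series. The sketch below implements the pmf directly; the parameters a and b are arbitrary demo values, not the ones fitted to the Eskimo texts.

```python
def rising(b: float, x: int) -> float:
    """Rising factorial (b)_x = b * (b + 1) * ... * (b + x - 1)."""
    out = 1.0
    for i in range(x):
        out *= b + i
    return out

def hyperpoisson_pmf(x: int, a: float, b: float, terms: int = 200) -> float:
    """Hyperpoisson pmf: a**x / ((b)_x * 1F1(1; b; a)) for x = 0, 1, ...

    The normalising constant 1F1(1; b; a) = sum_j a**j / (b)_j is
    approximated by truncating the (rapidly converging) series.
    """
    norm = sum(a ** j / rising(b, j) for j in range(terms))
    return a ** x / (rising(b, x) * norm)

# The pmf sums to 1 over its support (up to truncation error).
total = sum(hyperpoisson_pmf(x, 2.5, 1.8) for x in range(60))
print(round(total, 6))  # 1.0
```

With b = 1 the distribution reduces to the ordinary Poisson; fitting a and b to observed syllable-count frequencies (e.g. by minimising a chi-square statistic) is the kind of procedure such word-length studies apply.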