One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large-scale quantitative cross-linguistic analysis of written language, training a language model on more than 6500 documents drawn from 41 multilingual text collections, together comprising ~3.5 billion words (~9.0 billion characters) and covering 2069 languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. Comparing complexity rankings across corpora, we show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
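The notion of entropy as an index of prediction complexity can be illustrated with a toy plug-in estimator. The study itself infers entropy from trained language models; the character n-gram estimate below is only an illustrative stand-in, not the paper's method:

```python
import math
from collections import Counter

def char_entropy(text: str, order: int = 2) -> float:
    """Plug-in estimate of the conditional entropy H(next char | previous
    `order` chars) in bits per character, from n-gram counts.
    A language whose text is harder to predict yields a higher value."""
    # Count contexts (length `order`) and full n-grams (length `order` + 1).
    ctx = Counter(text[i:i + order] for i in range(len(text) - order))
    ngrams = Counter(text[i:i + order + 1] for i in range(len(text) - order))
    total = sum(ngrams.values())
    h = 0.0
    for g, c in ngrams.items():
        p_joint = c / total           # P(context, next char)
        p_cond = c / ctx[g[:order]]   # P(next char | context)
        h -= p_joint * math.log2(p_cond)
    return h
```

For perfectly periodic text the conditional entropy drops to zero once the context determines the next character, while text drawn uniformly from k symbols approaches log2(k) bits per character; cross-linguistic comparison would additionally require the careful corpus-size and script normalization the study performs.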
According to a widespread conception, quantitative linguistics will eventually be able to explain empirical quantitative findings (such as Zipf’s Law) by deriving them from highly general stochastic linguistic ‘laws’ that are assumed to be part of a general theory of human language (cf. Best (1999) for a summary of possible theoretical positions). Due to their formal proximity to methods used in the so-called exact sciences, theoretical explanations of this kind are assumed to be superior to the supposedly merely descriptive approaches of linguistic structuralism and its successors. In this paper I argue that on close inspection such claims turn out to be highly problematic, both on linguistic and on philosophy-of-science grounds.
On 12 May 1965, the State of Israel and the Federal Republic of Germany officially established diplomatic relations. More than 15 years after the founding of the two states and 20 years after the end of the Shoah, this brought a complex process of slow political rapprochement to a by no means self-evident conclusion. The fiftieth anniversary of this event in 2015 was the occasion for numerous events worldwide, above all in Israel and Germany, documented on an official bilateral website <www.de50il.org/> (accessed 6 Nov 2017). As part of the anniversary celebrations, a first version of the “Wörterbuch deutscher Lehnwörter im Hebräischen” (Dictionary of German Loanwords in Hebrew) by Uriel Adiv was officially released in the IDS platform “Lehnwortportal Deutsch” at a ceremonial evening event in the Jewish Museum Berlin on 30 September 2015. A second version, substantially revised and improved by co-author Jakob Mendel, went online in May 2017. This article presents some background on the German loanword vocabulary of Modern Hebrew and sheds light on the genesis of the dictionary and its place in the loanword-lexicographic publication platform “Lehnwortportal Deutsch” <http://lwp.ids-mannheim.de/> (accessed 6 Nov 2017).
Sprachkritik, dahinsickernd [Language criticism, trickling through]
(2007)
Three popular collections of essays concerning correct language use in German are reviewed from a linguist’s point of view. It is claimed that the overall picture of language that Sick conveys to the layperson is inadequate; in addition, the author fails to reflect explicitly on the purpose and consequences of his prescriptive approach to language use.
Languages employ different strategies to transmit structural and grammatical information. While grammatical dependency relationships in sentences are mainly conveyed by the ordering of words in languages like Mandarin Chinese or Vietnamese, word ordering is much less restricted in languages such as Inupiatun or Quechua, which (also) use the internal structure of words (e.g. inflectional morphology) to mark grammatical relationships in a sentence. Based on a quantitative analysis of more than 1,500 unique translations of different books of the Bible in almost 1,200 languages that are spoken as a native language by approximately 6 billion people (more than 80% of the world population), we present large-scale evidence for a statistical trade-off between the amount of information conveyed by the ordering of words and the amount of information conveyed by internal word structure: languages that rely more strongly on word-order information tend to rely less on word-structure information, and vice versa. Put differently, if less information is carried within the word, more information has to be spread among words in order to communicate successfully. In addition, we find that, despite differences in the way information is expressed, there is also evidence for a trade-off between different books of the biblical canon that recurs with little variation across languages: the more informative a book’s word order, the less informative its word structure, and vice versa. We argue that this suggests that, on the one hand, languages encode information in very different (but efficient) ways, while, on the other hand, content-related and stylistic features are statistically encoded in very similar ways.
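The trade-off described above can be caricatured with simple corpus statistics. As a hedged sketch under assumed proxies (not the paper's actual measures): unigram word entropy stands in for word-structure information, since richer morphology produces more distinct word forms, and the mutual information between adjacent words stands in for word-order information, since it measures how much one word's identity constrains the next:

```python
import math
from collections import Counter

def word_entropy(words: list[str]) -> float:
    """Unigram word entropy in bits -- a rough proxy for word-structure
    information: richer morphology yields more distinct word forms."""
    counts = Counter(words)
    n = len(words)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def word_order_information(words: list[str]) -> float:
    """Mutual information between adjacent words in bits -- a rough proxy
    for how much information word ordering carries."""
    uni = Counter(words)
    bi = Counter(zip(words, words[1:]))
    n_uni, n_bi = len(words), len(words) - 1
    mi = 0.0
    for (w1, w2), c in bi.items():
        p_xy = c / n_bi                              # P(w1, w2) over bigrams
        p_x, p_y = uni[w1] / n_uni, uni[w2] / n_uni  # unigram marginals
        mi += p_xy * math.log2(p_xy / (p_x * p_y))
    return mi
```

On parallel texts, as in the study, comparing the two quantities across translations of the same book would let one observe whether languages scoring high on one proxy tend to score low on the other; the plug-in estimates here are biased on small samples and serve only to make the two axes of the trade-off concrete.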