A model of grammar needs to reconcile the undesirability inherent in allomorphy, namely the apparent extra burden it places on learning and memory, with its occurrence and possible stability. OT approaches this task by positing an anti-allomorphy constraint, henceforth referred to as "OO-correspondence", which requires leveling (i.e. sameness of sound structure) in related word forms (Benua 1997). The occurrence of allomorphy then indicates crucial domination of OO-correspondence by other constraints. To assess the adequacy of this proposal it is necessary to establish the level of abstractness at which OO-correspondence applies and to examine the consequences of this decision for ranking order. While proponents of OT tacitly assume the level in question to be rather concrete, the notion of allomorphy as originally envisioned in Structuralism was defined by distinctness at a more abstract level referred to as "phonemic" (Harris 1942; Nida 1944). The basic intuition here is that the defining property of subphonemic sound properties, their conditionedness by context, entails that whatever burden they put on learning and memory is of a fundamentally different nature than that entailed by phonemic distinctness. The evidence from German supports that intuition in that leveling can be shown to target phonemic sound structure to the exclusion of subphonemic properties. Allomorphy, defined by phonemic alternation, tends to serve phonological optimization in closed-class items (function words, affixes) while serving to express morphological distinctions in open-class items. The key to demonstrating the correlations in question lies in the discernment of phonemic structure, which is therefore at the core of the article.
Linguistic usage patterns are not just coincidental phenomena on the textual surface but constitute a fundamental constructional principle of language. At the same time, however, linguistic patterns are highly idiosyncratic in the sense that they tend to be item-specific and unpredictable, thus defying all attempts at capturing them by general abstract rules. […] What all these approaches [that deal with constructions, collocations, patterns, etc. K.S.] share, in addition to their interest in recurrent patterns, is a strong commitment to the value of usage, be it in the wider sense of usage as an empirical basis for sound linguistic analysis and description or in the narrower sense of usage as constituting the basis for the emergence and consolidation of linguistic knowledge. (Herbst et al. 2014: 1)
As a consequence of the feasibility of studying language data in new quantitative dimensions, phraseology faces a paradigm shift. The traditional focus on strongly lexicalized, often idiomatic multi-word expressions (MWEs) has led to an overestimation of their unique status in the mental lexicon. The majority of MWEs are typical lexical realisations of templates ('MW patterns') that emerged from repeated usage and can be instantiated with ever-changing lexical elements. The pattern restrictions, which are primarily functional, cannot always be predicted by rules but result from recurring contextual factors. This article first shows the nature and the interrelations of MW patterns that are reconstructed with complex corpus-driven methods. It then discusses a vision of a new phraseography of MW patterns that describes their hierarchies and functions on the basis of authentic corpus data such as KWIC bundles, slot-filler tables, and collocation profiles.
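The KWIC bundles mentioned above can be illustrated with a minimal sketch. The `kwic` function, the toy corpus string, and the window size below are illustrative assumptions, not material from the article or its corpus.

```python
# Minimal keyword-in-context (KWIC) sketch over a tokenized text.
# Corpus string and window size are invented for illustration.

def kwic(tokens, keyword, window=3):
    """Return (left context, keyword, right context) tuples for each hit."""
    hits = []
    for i, tok in enumerate(tokens):
        if tok.lower() == keyword.lower():
            left = tokens[max(0, i - window):i]
            right = tokens[i + 1:i + 1 + window]
            hits.append((" ".join(left), tok, " ".join(right)))
    return hits

corpus = "the pattern emerged from repeated usage and the pattern can vary".split()
for left, kw, right in kwic(corpus, "pattern"):
    print(f"{left:>30} | {kw} | {right}")
```

Aligning many such concordance lines on the keyword is what makes recurrent slot fillers to its left and right visually and statistically detectable.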
The Google Ngram Corpora seem to offer a unique opportunity to study linguistic and cultural change in quantitative terms. To avoid breaking any copyright laws, the data sets are not accompanied by any metadata regarding the texts the corpora consist of. Some of the consequences of this strategy are analyzed in this article. I chose the example of measuring censorship in Nazi Germany, a case that received widespread attention and was presented in the paper accompanying the release of the Google Ngram data (Michel et al. (2010): Quantitative analysis of culture using millions of digitized books. Science, 331(6014): 176–82). I show that without proper metadata, it is unclear whether the results actually reflect any kind of censorship at all. Collectively, the findings imply that observed changes in this period of time can only be linked directly to World War II to a certain extent. Therefore, instead of speaking about general linguistic or cultural change, it seems preferable to explicitly restrict the results to linguistic or cultural change 'as it is represented in the Google Ngram data'. On a more general level, the analysis demonstrates the importance of metadata, the availability of which is not just a nice add-on, but a powerful source of information for the digital humanities.
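The basic operation behind Ngram-style time series is normalising raw yearly counts into relative frequencies. A toy sketch, with counts invented for illustration rather than taken from the Google Ngram data:

```python
# Normalise raw per-year counts of one target term into relative frequencies.
# All numbers below are invented for illustration.

raw_counts = {1933: 12, 1938: 4, 1943: 1}          # hits for one target term
totals = {1933: 10_000, 1938: 9_000, 1943: 2_500}  # all tokens in that year

rel_freq = {year: raw_counts[year] / totals[year] for year in raw_counts}
for year, freq in sorted(rel_freq.items()):
    print(year, f"{freq:.6f}")
```

Note that a drop in such a curve cannot by itself distinguish censorship from a shift in the composition of the underlying corpus, which is exactly why the missing metadata matters.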
We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use—both individually and collectively—over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
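The pipeline described above (n-gram feature extraction feeding a text classifier) can be sketched compactly. The two "registers", the training snippets, and the nearest-centroid classifier below are toy stand-ins for the scitex data and the models used in the study, assumed purely for illustration.

```python
# Toy register classification: word-bigram features + nearest-centroid
# by cosine similarity. Training texts and labels are invented.
from collections import Counter

def ngrams(text, n=2):
    """Count word n-grams in a whitespace-tokenized, lowercased text."""
    toks = text.lower().split()
    return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

def cosine(a, b):
    """Cosine similarity between two sparse count vectors (Counters)."""
    num = sum(a[k] * b[k] for k in set(a) & set(b))
    den = (sum(v * v for v in a.values()) ** 0.5) * (sum(v * v for v in b.values()) ** 0.5)
    return num / den if den else 0.0

# one toy training text per "register"
train = {
    "cs":  "the algorithm computes the parse tree of the input string",
    "bio": "the protein sequence aligns with the reference genome data",
}
centroids = {label: ngrams(text) for label, text in train.items()}

def classify(text):
    feats = ngrams(text)
    return max(centroids, key=lambda label: cosine(feats, centroids[label]))

print(classify("the parser computes the tree of the string"))  # prints "cs"
```

In the study itself the features are richer (part-of-speech aggregates, lexico-grammatical patterns) and the classifiers are trained models, but the shape of the pipeline, from texts to feature vectors to a label, is the same.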