Korpuslinguistik
In corpus linguistics and quantitative linguistics, a wide variety of formal measures are used to quantify the frequency of use of a word, an expression, or more abstract or complex linguistic elements in a given corpus, and, where appropriate, to compare it with other usage frequencies. For a selection of these measures (absolute frequency, relative frequency, probability distribution, difference coefficient, frequency class), the following summarizes how they are defined, what properties they have, and under which conditions they can be meaningfully applied and interpreted; here it can matter whether the frequency measure is applied to a corpus as a whole or to individual subcorpora. In addition to the limitations noted for the individual frequency measures, the following simplified relationship holds in general: the rarer a word is in the given corpus overall, and the smaller that corpus is, the more strongly the observed usage frequency of the word depends on random factors, i.e., the lower the statistical reliability of the observation.
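Some of the measures named above can be illustrated with a short sketch. The toy corpus, the rounding convention for the frequency class, and the normalization used for the difference coefficient are illustrative assumptions; definitions vary in the literature, so this is a sketch of the general idea, not a reproduction of the measures as defined in the text:

```python
from collections import Counter
from math import log2

# Toy corpus; in practice these would be tokenized texts, e.g. from DeReKo.
corpus = "the cat sat on the mat the cat slept on the sofa".split()

counts = Counter(corpus)   # absolute frequencies
n = len(corpus)            # corpus size in tokens

def rel_freq(word):
    """Relative frequency: absolute frequency normalized by corpus size."""
    return counts[word] / n

def freq_class(word):
    """Frequency class: roughly, how many halvings separate a word from the
    most frequent word (a common definition; rounding conventions vary)."""
    f_max = max(counts.values())
    return round(log2(f_max / counts[word]))

def diff_coefficient(f1, n1, f2, n2):
    """One common form of the difference coefficient,
    d = (r1 - r2) / (r1 + r2), for relative frequencies r1, r2
    in two (sub)corpora; d lies in [-1, 1]."""
    r1, r2 = f1 / n1, f2 / n2
    return (r1 - r2) / (r1 + r2)

# Example: "the" occurs 4 times in 12 tokens; "cat" is one frequency
# class below "the" (half as frequent).
```

Comparing relative rather than absolute frequencies is what makes subcorpora of different sizes comparable at all; the reliability caveat above applies unchanged, since a rare word's relative frequency in a small subcorpus is dominated by chance.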
This paper presents ongoing research which is embedded in an empirical-linguistic research program, set out to devise viable research strategies for developing an explanatory theory of grammar as a psychological and social phenomenon. As this phenomenon cannot be studied directly, the program attempts to approach it indirectly through its correlates in language corpora, which is justified by referring to the core tenets of Emergent Grammar. The guiding principle for identifying such corpus correlates of grammatical regularities is to imitate the psychological processes underlying the emergent nature of these regularities. While previous work in this program focused on syntagmatic structures, the current paper goes one step further by investigating schematic structures that involve paradigmatic variation. It introduces and explores a general strategy by which corpus correlates of such structures may be uncovered, and it further outlines how these correlates may be used to study the nature of the psychologically real schematic structures.
Empirical synchronic language studies generally seek to investigate language phenomena at one point in time, even though this point in time is often not stated explicitly. Until today, surprisingly little research has addressed the implications of this time-dependency of synchronic research for the composition and analysis of data that are suitable for conducting such studies. Existing solutions and practices tend to be too general to meet the needs of all kinds of research questions. In this theoretical paper, targeted at both corpus creators and corpus users, we propose to take a decidedly synchronic perspective on the relevant language data. Such a perspective may be realised either in terms of sampling criteria or in terms of analytical methods applied to the data. As a general approach for both realisations, we introduce and explore the FReD strategy (Frequency Relevance Decay), which models the relevance of language events from a synchronic perspective. This general strategy represents a whole family of synchronic perspectives that may be customised to meet the requirements imposed by the specific research questions and language domain under investigation.
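As a purely illustrative sketch of the general idea behind relevance decay (not the paper's actual parameterization), one could weight each language event by its temporal distance from the synchronic reference point, so that distant attestations contribute less to an observed frequency. The exponential form, the yearly granularity, and the `half_life` parameter are hypothetical choices made for this example:

```python
def fred_weight(event_year, reference_year, half_life=10.0):
    """Hypothetical relevance weight for a language event: decays
    exponentially with distance from the synchronic reference point.
    half_life (in years) is an illustrative parameter, not from the paper."""
    distance = abs(reference_year - event_year)
    return 0.5 ** (distance / half_life)

def weighted_frequency(event_years, reference_year, half_life=10.0):
    """Sum decayed weights instead of raw counts: attestations near the
    reference point count (nearly) fully, distant ones fade out."""
    return sum(fred_weight(y, reference_year, half_life) for y in event_years)

# Ten attestations from 2010 vs. ten from 1990, reference point 2010:
recent = weighted_frequency([2010] * 10, 2010)  # full weight
older = weighted_frequency([1990] * 10, 2010)   # strongly discounted
```

Customising the decay function (shape, rate, cut-off) is one way such a family of synchronic perspectives could be adapted to different research questions.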
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of aspects of the use of lexical items. These aspects include '(lexical) meaning' in a very broad sense and involve different dimensions; they are established in and emerge from the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word's usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word's use, and to the common and distinctive aspects of any two words.
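A minimal sketch of the underlying idea, assuming a simple co-occurrence window and raw counts: the published approach ranks collocates with statistical collocation analysis and visualizes the similarity structure with self-organizing methods, whereas this example only shows how profiles arise from contexts and how profile similarity can be quantified:

```python
from collections import Counter, defaultdict
from math import sqrt

# Tiny illustrative token stream (not from the tutorial's data).
tokens = ("strong tea strong coffee powerful engine "
          "powerful computer strong coffee").split()

def collocation_profiles(tokens, window=1):
    """Count co-occurrences within +/- window positions; each word's
    Counter of neighbours serves as its (raw-frequency) collocation
    profile here."""
    profiles = defaultdict(Counter)
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                profiles[w][tokens[j]] += 1
    return profiles

def cosine(p, q):
    """Cosine similarity between two profiles: words used in similar
    contexts end up with similar profiles."""
    dot = sum(p[k] * q[k] for k in p)
    norm = (sqrt(sum(v * v for v in p.values()))
            * sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

profs = collocation_profiles(tokens)
# cosine(profs["strong"], profs["powerful"]) reflects their shared contexts.
```

The pairwise similarities computed this way span exactly the kind of similarity structure that the self-organizing visualizations described above make accessible for inspection.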
Valenz und Kookkurrenz
(2015)