Korpuslinguistik (Corpus Linguistics)
Since large volumes of data and computing capacity have become available to research, linguistics too has increasingly worked in a data-driven manner. Data-driven research does not start from a hypothesis, but searches for statistical patterns in the data. In doing so, language is often treated in a highly simplified way as a linear sequence of words. This study shows for the first time how additionally incorporating syntactic annotations helps to capture linguistic structures of German more accurately.
The comparison of the academic languages of linguistics and literary studies serves as an application example. The two disciplines are often grouped together as subdisciplines of German studies (Germanistik). Their scholarly practice, however, differs systematically with regard to research data, methods, and research interests, which is also reflected in their academic languages.
This chapter gives an overview of corpora of internet-based communication that are freely available as digital resources and can be used for one's own linguistic research. In Section 1 we explain basic corpus-linguistic concepts needed for working with corpora of internet-based communication, and we define the language-use domain of internet-based communication, which constitutes the subject of the resource type described here. Section 2 provides an overview of existing corpus resources for German and presents selected corpora for other European languages. In Section 3 we conclude with a brief look at current research questions that arise in corpus linguistics and language technology with respect to the construction and preparation of corpora of internet-based communication.
A corpus-linguistic investigation with a comprehensive analysis of the more frequently occurring adverb-formation patterns of German suggests that the saturation of the internal argument slot of an originally relational expression plays an important role in adverb production (Brandt 2020). A closer look at the differences between -ermaßen and -erweise adverbs points to a grammatical distinction between sentence adverbs and adverbs of manner: in the case of -ermaßen, saturation proceeds via token reflexivity, whereas the internal slot of -erweise formations is closed by more frequent and possibly expansive mechanisms. In addition, the pleonastic quality of formations based on gerundival participles promotes the productivity of -erweise adverbs.
In the coronavirus discourse, completely different opinions and positions on the role of the state collide. This study examines these positions with corpus-linguistic methods, based on media reporting and reader comments in German-speaking Switzerland. Right-wing and corona-sceptical platforms are also included in the analysis. The corpus-pragmatic approach rests on computing and interpreting word embeddings, a method for modelling semantic spaces. The analysis shows how incommensurable semantics develop in the discourse.
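The embedding step behind such a semantic-space analysis can be illustrated in miniature. The sketch below builds simple count-based context vectors and compares two words by cosine similarity; real embedding models trained on large corpora are far more sophisticated, and the toy sentences here are invented purely for illustration:

```python
from collections import Counter
import math

def cooccurrence_vectors(tokens, window=2):
    """Build simple count-based context vectors for each word."""
    vectors = {}
    for i, word in enumerate(tokens):
        context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
        vectors.setdefault(word, Counter()).update(context)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm_u = math.sqrt(sum(c * c for c in u.values()))
    norm_v = math.sqrt(sum(c * c for c in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

# Invented toy sentences; a real study would use millions of tokens.
tokens = ("der staat muss schützen der staat muss helfen "
          "die regierung muss schützen").split()
vecs = cooccurrence_vectors(tokens)
print(round(cosine(vecs["staat"], vecs["regierung"]), 3))
```

Words that occur in similar contexts ("staat" and "regierung" above) end up with similar vectors; discourse analysis then compares how such neighbourhoods differ between sub-corpora.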
This thesis is a corpus linguistic investigation of the language used by young German speakers online, examining lexical, morphological, orthographic, and syntactic features and changes in language use over time. The study analyses the language in the Nottinghamer Korpus deutscher YouTube‐Sprache ("Nottingham corpus of German YouTube language", or NottDeuYTSch corpus), one of the first large corpora of German-language comments taken from the video-sharing website YouTube, built specifically for this project. The metadata-rich corpus comprises c. 33 million tokens from more than 3 million comments posted underneath videos uploaded by mainstream German-language youth-orientated YouTube channels between 2008 and 2018.
The NottDeuYTSch corpus was created to enable corpus linguistic approaches to studying digital German youth language (Jugendsprache), in response to the identified need for more specialised web corpora (see Barbaresi 2019). The methodology for compiling the corpus is described in detail in the thesis to facilitate the future construction of web corpora. The thesis is situated at the intersection of Computer-Mediated Communication (CMC) and youth language, which have been important areas of sociolinguistic scholarship since the 1980s, and explores what we can learn from a corpus-driven, longitudinal approach to (online) youth language. To do so, the thesis uses corpus linguistic methods to analyse three main areas:
1. Lexical trends and the morphology of polysemous lexical items. For this purpose, the analysis focuses on geil, one of the most iconic and productive words in youth language, and presents a longitudinal analysis, demonstrating that usage of geil has decreased, and identifying lexical items that have emerged as potential replacements. Additionally, geil serves as a case study of innovative morphological productivity, demonstrating how different senses of geil are used as a base lexeme or affixoid in compounding and derivation.
2. Syntactic developments. The novel grammaticalization of several subordinating conjunctions into both coordinating conjunctions and discourse markers is examined. The investigation is supported by statistical analyses that demonstrate an increase in the use of non-standard syntax over the timeframe of the corpus, and the results are compared with those from other corpora of written language.
3. Orthography and the metacommunicative features of digital writing. This analysis identifies orthographic features and strategies in the corpus, e.g. the repetition of certain emoji, and develops a holistic framework to study metacommunicative functions, such as the communication of illocutionary force, information structure, or the expression of identities. The framework unifies previous research that had focused on individual features, integrating a wide range of metacommunicative strategies within a single, robust system of analysis.
By using qualitative and computational analytical frameworks within corpus linguistic methods, the thesis identifies emergent linguistic features in digital youth language in German and sheds further light on lexical and morphosyntactic changes and trends in the language of young people over the period 2008‐2018. The study has also further developed and augmented existing analytical frameworks to widen the scope of their application to orthographic features associated with digital writing.
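The longitudinal lexical analysis described in point 1 boils down to tracking the relative frequency of a word per year. A minimal sketch of that idea, using invented toy comments rather than the NottDeuYTSch data or the thesis's actual code:

```python
from collections import Counter

def relative_frequency_by_year(comments, target):
    """Per-year frequency of `target`, in occurrences per million tokens."""
    tokens_per_year = Counter()
    hits_per_year = Counter()
    for year, text in comments:
        tokens = text.lower().split()
        tokens_per_year[year] += len(tokens)
        hits_per_year[year] += tokens.count(target)
    return {y: 1_000_000 * hits_per_year[y] / tokens_per_year[y]
            for y in sorted(tokens_per_year)}

# Toy comments tagged with posting year (invented data for illustration).
comments = [
    (2009, "das video ist so geil geil"),
    (2009, "voll geil danke"),
    (2017, "das ist nice"),
    (2017, "so ein cooles video"),
]
print(relative_frequency_by_year(comments, "geil"))
```

Normalising by yearly token counts rather than raw hits is what makes frequencies comparable across years of very different corpus size.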
This paper presents an algorithm and an implementation for efficient tokenization of texts of space-delimited languages based on a deterministic finite state automaton. Two representations of the underlying data structure are presented and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
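The core idea of a finite-state tokenizer for space-delimited text can be sketched as a two-state automaton over character classes. The paper's actual automaton is considerably richer (punctuation, abbreviations, URLs), so the following is only an illustration of the single-pass principle:

```python
def dfa_tokenize(text):
    """Minimal two-state DFA: INSIDE a token vs. BETWEEN tokens.

    Transitions are driven purely by the character class
    (whitespace vs. non-whitespace), so tokenization is a single
    left-to-right pass with no backtracking.
    """
    BETWEEN, INSIDE = 0, 1
    state = BETWEEN
    tokens, start = [], 0
    for i, ch in enumerate(text):
        is_space = ch.isspace()
        if state == BETWEEN and not is_space:
            start, state = i, INSIDE        # token begins
        elif state == INSIDE and is_space:
            tokens.append(text[start:i])    # token ends
            state = BETWEEN
    if state == INSIDE:                     # flush the final token
        tokens.append(text[start:])
    return tokens

print(dfa_tokenize("Ein  schneller\tTest."))
```

Because the transition function depends only on the current state and character class, such a tokenizer runs in linear time, which is the basis of the speed advantage the paper reports.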
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that while not all tools are applicable in this setting, no compromises regarding quality need to be made.
So far, Sepedi negations have been considered mainly from the point of view of lexicographical treatment. Theoretical works on Sepedi have been used for this purpose, with the objective of a neat description of these negations in a (paper) dictionary. This paper takes a different perspective: instead of theoretical works, corpus linguistic methods are used: (1) a Sepedi corpus is examined on the basis of existing descriptions of the occurrences of a relevant verb, looking at its negated forms from a purely prescriptive point of view; (2) a "corpus-driven" strategy is employed, looking only for sequences of negation particles (or morphemes) in order to list occurring constructions, without taking into account the verbs occurring in them, apart from their endings. The approach in (2) is only intended to show a possible methodology for extending existing theories on occurring negations. We would also like to help lexicographers establish a frequency-based order of entries for possible negation forms in their dictionaries by showing them the number of respective occurrences. As with all corpus linguistic work, however, we must regard corpus evidence not as representative, but as tendencies of language use that can be detected and described. This is especially true for Sepedi, for which only a few, small corpora exist. This paper also describes the resources and tools used to create the necessary corpus, and how it was annotated with parts of speech and lemmas. Exploring the quality of available Sepedi part-of-speech taggers with regard to verbs, negation morphemes, and subject concords may be a positive side result.
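Strategy (2), searching only for sequences of negation particles while ignoring the verb apart from its ending, can be sketched as a regular-expression scan over the corpus. The particle inventory (ga, sa, se), the verb-ending pattern, and the toy text below are simplified assumptions for illustration, not the paper's actual query:

```python
import re

# Assumed negation particles and a crude "verb ending in -a/-e" pattern;
# both are simplifications for illustration only.
NEG = r"(?:ga|sa|se)"
PATTERN = re.compile(rf"\b({NEG}(?: {NEG})*) (\w+?(?:e|a))\b")

def negation_constructions(corpus):
    """Count occurring particle sequences, keyed by sequence and verb ending."""
    counts = {}
    for particles, verb in PATTERN.findall(corpus):
        key = (particles, verb[-1])   # keep the particle sequence + final vowel only
        counts[key] = counts.get(key, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])

corpus = "ga se bone ... ga se bone ... se bale"   # toy text, not real corpus data
print(negation_constructions(corpus))
```

Sorting by frequency yields exactly the kind of frequency-based ordering of negation constructions that the paper proposes for dictionary entries.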