This case study examines the quantitative distribution of direct and non-direct forms of speech representation, comparing two types of literature: highbrow literature ("Hochliteratur"), defined as works shortlisted for literary prizes, and dime novels ("Heftromane"), mass-produced narrative fiction distributed mainly through newsstands. The study starts from manually annotated data, uses it to assess the reliability of automatic annotation tools, and then applies those tools to analyze a total of 250 full texts. It can be shown that the two types of literature, as well as different genres of dime novels, differ in the forms of speech representation they use.
Automatic recognition of speech, thought, and writing representation in German narrative texts
(2013)
This article presents the main results of a project that explored ways to automatically recognize and classify a narrative feature—speech, thought, and writing representation (ST&WR)—using surface information and methods of computational linguistics. The task was to detect and distinguish four types—direct, free indirect, indirect, and reported ST&WR—in a corpus of manually annotated German narrative texts. Rule-based as well as machine-learning methods were tested and compared. The results were best for recognizing direct ST&WR (best F1 score: 0.87), followed by indirect (0.71), reported (0.58), and finally free indirect ST&WR (0.40). The rule-based approach worked best for ST&WR types with clear patterns, such as indirect and marked direct ST&WR, and often gave the most accurate results. Machine learning was most successful for types without clear indicators, such as free indirect ST&WR, and proved more stable. When looking at the percentage of ST&WR in a text, the results of machine-learning methods always correlated best with the results of manual annotation. Creating a union or intersection of the results of the two approaches did not lead to striking improvements. A stricter definition of ST&WR, which excluded borderline cases, made the task harder and led to worse results for both approaches.
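To illustrate why marked direct ST&WR is the easiest type to recognize with rules, consider a minimal sketch of a quotation-mark-based detector. This is an assumption-laden toy, not the project's actual rule set: it only finds spans enclosed in two common German quotation styles („…“ and »…«), whereas the article's rules are far more elaborate and also handle unmarked and borderline cases.

```python
import re

# Hypothetical sketch of a rule-based detector for *marked* direct speech.
# Matches either German low-high quotes („...“) or guillemets (»...«);
# exactly one of the two capture groups is set per match.
QUOTE_RE = re.compile(r'„([^“]*)“|»([^«]*)«')

def find_marked_direct_speech(text: str) -> list[str]:
    """Return candidate direct-speech spans delimited by quotation marks."""
    return [m.group(1) or m.group(2) for m in QUOTE_RE.finditer(text)]

sample = '„Komm her!“, rief sie. Er dachte nach. »Warum?«, fragte er.'
print(find_marked_direct_speech(sample))  # ['Komm her!', 'Warum?']
```

A pattern like this captures why rule-based methods score highest on marked direct ST&WR: the delimiters are explicit surface cues. Free indirect ST&WR ("Er dachte nach. Warum nur?") has no such markers, which is where the machine-learning methods proved more successful.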