This chapter addresses the requirements and linguistic foundations of automatic relational discourse analysis of complex text types such as scientific journal articles. It is argued that besides lexical and grammatical discourse markers, which have traditionally been employed in discourse parsing, cues derived from the logical and genre-specific document structure and the thematic structure of a text must be taken into account. An approach is presented to modelling such types of linguistic information in terms of XML-based multi-layer annotations, and to a text-technological representation of additional knowledge sources. By means of quantitative and qualitative corpus analyses, cues and constraints for automatic discourse analysis can be derived. Furthermore, the proposed representations are used as the input sources for discourse parsing. A short overview of the projected parsing architecture is given.
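The idea of XML-based multi-layer annotation can be illustrated with a small standoff sketch, in which several layers refer to the same base text via character offsets. The layer names, tag set, and example sentences below are invented for illustration and are not taken from the chapter.

```python
# Minimal sketch of standoff multi-layer annotation: each XML layer
# annotates spans of the same base text by character offsets.
import xml.etree.ElementTree as ET

base_text = "However, the results differ. This suggests a new model."

# Layer 1: logical document structure (hypothetical tag set).
logical = ET.fromstring(
    '<layer name="logical">'
    '<segment start="0" end="28" type="sentence"/>'
    '<segment start="29" end="55" type="sentence"/>'
    '</layer>'
)

# Layer 2: lexical discourse markers (hypothetical tag set).
markers = ET.fromstring(
    '<layer name="markers">'
    '<marker start="0" end="7" relation="Contrast"/>'
    '</layer>'
)

def spans(layer):
    """Project a standoff annotation layer back onto the base text."""
    return [(el.get("type") or el.get("relation"),
             base_text[int(el.get("start")):int(el.get("end"))])
            for el in layer]

print(spans(logical))
print(spans(markers))  # [('Contrast', 'However')]
```

Keeping the layers separate in this way allows each level (document structure, markers, thematic structure) to be produced and validated independently before a parser consumes them together.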
Discourse segmentation is the division of a text into minimal discourse segments, which form the leaves in the trees that are used to represent discourse structures. A definition of elementary discourse segments in German is provided by adapting widely used segmentation principles for English minimal units, while considering punctuation, morphology, syntax, and aspects of the logical document structure of a complex text type, namely scientific articles. The algorithm and implementation of a discourse segmenter based on these principles is presented, as well as an evaluation of test runs.
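A toy illustration of punctuation- and marker-based segmentation is sketched below. The cue list and rules are simplified stand-ins for the segmentation principles the abstract describes, not the actual algorithm, and the example operates on English rather than German.

```python
# Toy discourse segmenter: split at punctuation, then at cue words.
import re

# Hypothetical cue list: connectives that may open a new segment.
SEGMENT_CUES = {"because", "although", "while", "whereas"}

def segment(sentence):
    """Split a sentence into minimal discourse segments."""
    # First split at commas and semicolons, then at cue words.
    parts = re.split(r"[;,]\s*", sentence.rstrip("."))
    segments = []
    for part in parts:
        current = []
        for tok in part.split():
            if tok.lower() in SEGMENT_CUES and current:
                segments.append(" ".join(current))
                current = []
            current.append(tok)
        if current:
            segments.append(" ".join(current))
    return segments

print(segment("The test failed because the corpus was small, "
              "although results improved."))
```

A real segmenter for German would additionally need morphological information (e.g. finite verb detection) and the logical document structure, as noted above.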
Researchers in many disciplines, sometimes working in close cooperation, have been concerned with modeling textual data in order to account for texts as the prime information unit of written communication. The list of disciplines includes computer science and linguistics as well as more specialized disciplines like computational linguistics and text technology. What many of these efforts have in common is the aim to model textual data by means of abstract data types or data structures that support at least the semi-automatic processing of texts in any area of written communication.
In dependency-syntactic systems such as those of Engel (1982), Hudson (1984), Schubert (1987), Mel'čuk (1988), or Starosta (1988), generally only words can govern other words or phrases. Although this assumption is quite practicable, it leads to a whole range of syntax-theoretical shortcomings that make even elaborated dependency grammars appear deficient compared with competing theories of grammar. The aim of the present contribution is to demonstrate the necessity of granting governing capacity to more complex units as well, and to provide a suitable formal instrument for this purpose with the concept of the 'complex element'.
The paper investigates the evolution of document grammars from a linguistic point of view. Document grammars have been developed in the past decades in order to formalize knowledge about the structure of textual information. A well-known instance of a document grammar is the »Document Type Definition« (DTD) as part of the Extensible Markup Language (XML). DTDs allow the definition of so-called tree grammars that constrain the application of tag sets in the process of annotating a document. In an XML-based document workflow, DTDs play a crucial role in validating and transforming huge amounts of text into standardized data formats. An interesting point in the development of XML DTDs is the fact that restricting the formal expressiveness paved the way to a better understanding of the formal properties of document grammars, and to the recent development of more powerful formalisms such as XML Schema. In this sense, the simplicity of the original approach, resulting from the necessary restriction of previous approaches, yielded new complexity on formally understood grounds.
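The notion of a tree grammar constraining annotation can be sketched in a few lines: each element type restricts the admissible sequence of its children, analogous to a DTD declaration such as `<!ELEMENT article (title, section+)>`. The element names and content models below are invented for illustration.

```python
# A toy tree grammar in the spirit of a DTD: content models are
# regular expressions over sequences of child element names.
import re
import xml.etree.ElementTree as ET

CONTENT_MODEL = {
    "article": r"title(,section)+",   # like (title, section+)
    "section": r"title(,para)*",      # like (title, para*)
    "title": r"",                     # empty content
    "para": r"",
}

def validates(element):
    """Check a subtree against the toy tree grammar."""
    children = ",".join(child.tag for child in element)
    model = CONTENT_MODEL.get(element.tag)
    if model is None or not re.fullmatch(model, children):
        return False
    return all(validates(child) for child in element)

doc = ET.fromstring(
    "<article><title/><section><title/><para/></section></article>"
)
print(validates(doc))  # True: the document conforms to the grammar
```

Real DTD validators work on the same principle: content models are compiled to finite automata that are run over each element's child sequence.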
Against the background of a new linguistic perspective that regards scientific presentations as an independent, complex, multimodal text type, this contribution first focuses on the multimodality of presentations. The analytical treatment of scientific presentations is then complemented by first results of our reception experiments, which included, among other things, studies on how well different scientific presentations convey knowledge.
Discourse parsing of complex text types such as scientific research articles requires the analysis of an input document on linguistic and structural levels that go beyond traditionally employed lexical discourse markers. This chapter describes a text-technological approach to discourse parsing. Discourse parsing with the aim of providing a discourse structure is seen as the addition of a new annotation layer for input documents marked up on several linguistic annotation levels. The discourse parser generates discourse structures according to Rhetorical Structure Theory. An overview of the knowledge sources and components for parsing scientific journal articles is given. The parser's core consists of cascaded applications of the GAP, a Generic Annotation Parser. Details of the chart parsing algorithm are provided, as well as a short evaluation in terms of comparisons with reference annotations from our corpus and with recently developed systems with a similar task.
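The chart parsing idea behind such an architecture can be shown with a minimal bottom-up sketch: adjacent discourse spans are combined into larger spans whenever a rule licenses a relation. The labels and rules below are invented for illustration and do not reproduce the GAP's actual rule set.

```python
# Minimal bottom-up chart parser over discourse spans.
# Each rule maps a (left label, right label) pair to a new span label.
RULES = {
    ("nucleus", "satellite"): "elaboration",
    ("satellite", "nucleus"): "concession",
    ("elaboration", "satellite"): "elaboration",
}

def chart_parse(leaves):
    """Fill a triangular chart; cell (i, j) holds labels of span i..j."""
    n = len(leaves)
    chart = {(i, i + 1): {leaves[i]} for i in range(n)}
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            j = i + width
            chart[(i, j)] = set()
            for k in range(i + 1, j):  # try every split point
                for left in chart[(i, k)]:
                    for right in chart[(k, j)]:
                        label = RULES.get((left, right))
                        if label:
                            chart[(i, j)].add(label)
    return chart

chart = chart_parse(["nucleus", "satellite", "satellite"])
print(chart[(0, 3)])  # labels covering all three segments
```

A cascaded setup, as described for the GAP, would run several such passes, each consuming the annotation layers produced by earlier ones.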