Researchers in many disciplines, sometimes working in close cooperation, have been concerned with modeling textual data in order to account for texts as the prime information unit of written communication. The list of disciplines includes computer science and linguistics as well as more specialized disciplines like computational linguistics and text technology. What many of these efforts have in common is the aim to model textual data by means of abstract data types or data structures that support at least the semi-automatic processing of texts in any area of written communication.
Discourse parsing of complex text types such as scientific research articles requires the analysis of an input document on linguistic and structural levels that go beyond traditionally employed lexical discourse markers. This chapter describes a text-technological approach to discourse parsing. Discourse parsing with the aim of providing a discourse structure is seen as the addition of a new annotation layer for input documents marked up on several linguistic annotation levels. The discourse parser generates discourse structures according to Rhetorical Structure Theory. An overview of the knowledge sources and components for parsing scientific journal articles is given. The parser's core consists of cascaded applications of the GAP, a Generic Annotation Parser. Details of the chart parsing algorithm are provided, as well as a short evaluation in terms of comparisons with reference annotations from our corpus and with recently developed systems with a similar task.
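The chart-parsing approach described in this abstract can be illustrated with a minimal CKY-style sketch. The segment labels and the tiny rule set below are invented for illustration only and do not reflect the actual GAP grammar:

```python
# Minimal CKY-style chart sketch of relational discourse parsing.
# Labels and rules are illustrative, not the GAP grammar.

# Elementary discourse units (EDUs), each with a cue-derived label.
edus = ["claim", "evidence", "claim", "background"]

# Binary RST-style rules: (relation, left_label, right_label, result_label).
rules = [
    ("Elaboration", "claim", "evidence", "claim"),
    ("Background", "claim", "background", "claim"),
    ("Joint", "claim", "claim", "claim"),
]

def parse(edus, rules):
    n = len(edus)
    # chart[i][j] holds analyses covering EDUs i..j (inclusive).
    chart = [[[] for _ in range(n)] for _ in range(n)]
    for i, label in enumerate(edus):
        chart[i][i].append((label, label))  # (result label, structure)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span - 1
            for k in range(i, j):  # split point between left and right part
                for llab, ltree in chart[i][k]:
                    for rlab, rtree in chart[k + 1][j]:
                        for rel, a, b, res in rules:
                            if llab == a and rlab == b:
                                chart[i][j].append((res, (rel, ltree, rtree)))
    return chart[0][n - 1]  # all analyses spanning the whole document

analyses = parse(edus, rules)
```

With the toy rules above, the four EDUs receive two competing full-document analyses, e.g. one attaching the background segment last; a real parser would rank such alternatives using the cues from the other annotation layers.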
The Extensible Markup Language (XML), a simplified version of the Standard Generalized Markup Language (SGML), was developed for the exchange of structured data on the Internet. It allows information not only to be structured in a uniform, media-independent format; the structuring principles themselves can also be described by a formal set of rules, a grammar. Only this enables further processing steps such as guided data entry, data conversion, and flexible navigation and viewing of the data. Beyond elementary information modelling, meta-structuring by means of so-called architectures has introduced a new aspect: the object-oriented layering of structure grammars. This book is the first to present both structuring techniques, elementary and architectural, in a coherent form. It is aimed at readers who want a detailed, practice-oriented treatment of the possibilities of SGML-based information modelling.
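The point that the structuring principles are themselves describable by a grammar can be sketched with a small internal DTD and a conforming document. The element names here are invented for illustration:

```xml
<!-- The DTD is the grammar: it states which elements may occur where. -->
<!DOCTYPE article [
  <!ELEMENT article (title, section+)>
  <!ELEMENT section (heading, para+)>
  <!ELEMENT title   (#PCDATA)>
  <!ELEMENT heading (#PCDATA)>
  <!ELEMENT para    (#PCDATA)>
]>
<article>
  <title>Structured Information Modelling</title>
  <section>
    <heading>Motivation</heading>
    <para>A validating parser can reject any document
          whose structure violates the grammar above.</para>
  </section>
</article>
```

It is this formal grammar, not the markup alone, that makes processes such as guided data entry and data conversion possible: tools can derive input forms and transformation rules directly from the element declarations.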
Situiertheit
(1993)
Integrated Linguistic Annotation Models and Their Application in the Domain of Antecedent Detection
(2011)
Seamless integration of various linguistic resources, which are often heterogeneous in their output formats, and a combined analysis of the respective annotation layers are crucial tasks for linguistic research. After a decade of concentration on the development of formats for structuring single annotations of specific linguistic issues, a variety of specifications for storing multiple annotations over the same primary data has been developed in recent years. The paper focuses on integrating logical document structure information, as a knowledge resource, into a text document in order to enhance automatic anaphora resolution, both for candidate detection and for antecedent selection. The paper investigates the data structures necessary for knowledge integration and retrieval.
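The idea of storing multiple annotation layers over the same primary data is commonly realized as standoff annotation, where each layer refers to the text only by character offsets. The following sketch uses invented layer names and labels, not any specific annotation format:

```python
# Standoff annotation sketch: independent layers annotate the same
# primary text via character offsets, so no layer alters the text
# and layers never conflict structurally.

primary = "Peter saw the report. He filed it."

layers = {
    # logical document structure layer
    "logdoc": [(0, 21, "sentence"), (22, 34, "sentence")],
    # coreference layer: anaphor and antecedent spans
    "coref": [(0, 5, "antecedent"), (22, 24, "anaphor")],
}

def spans_with_label(layers, layer, label):
    """Return the surface strings of all spans carrying a given label."""
    return [primary[s:e] for s, e, l in layers[layer] if l == label]

antecedents = spans_with_label(layers, "coref", "antecedent")
```

Because both layers index the same primary data, a resolver can intersect them, e.g. restricting antecedent candidates to spans that fall inside a particular logical document unit.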
This chapter addresses the requirements and linguistic foundations of automatic relational discourse analysis of complex text types such as scientific journal articles. It is argued that besides lexical and grammatical discourse markers, which have traditionally been employed in discourse parsing, cues derived from the logical and genre-specific document structure and from the thematic structure of a text must be taken into account. An approach to modelling these types of linguistic information in terms of XML-based multi-layer annotations, and to a text-technological representation of additional knowledge sources, is presented. By means of quantitative and qualitative corpus analyses, cues and constraints for automatic discourse analysis can be derived. Furthermore, the proposed representations are used as the input sources for discourse parsing. A short overview of the projected parsing architecture is given.
In this contribution, we discuss and compare alternative options for modelling the entities and relations of wordnet-like resources in the Web Ontology Language OWL. Based on different modelling options, we developed three models of representing wordnets in OWL: the instance model, the class model, and the metaclass model. These OWL models mainly differ with respect to the ontological status of lexical units (word senses) and synsets. While in the instance model lexical units and synsets are represented as individuals, in the class model they are represented as classes; both model types can be encoded in the dialect OWL DL. As a third alternative, we developed a metaclass model in OWL Full, in which lexical units and synsets are defined as metaclasses, the individuals of which are classes themselves. We apply the three OWL models to each of three wordnet-style resources: (1) a subset of the German wordnet GermaNet, (2) the wordnet-style domain ontology TermNet, and (3) GermaTermNet, in which TermNet technical terms and GermaNet synsets are connected by means of a set of "plug-in" relations. We report on the results of several experiments in which we evaluated the performance of querying and processing these different models: (1) a comparison of all three OWL models (class, instance, and metaclass model) of TermNet in the context of automatic text-to-hypertext conversion, and (2) an investigation of the potential of the GermaTermNet resource by the example of a wordnet-based semantic relatedness calculation.
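The contrast between the three modelling options can be sketched in Turtle. The prefix and resource names below are invented for illustration and do not reproduce the actual GermaNet or TermNet vocabularies:

```turtle
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix ex:   <http://example.org/wordnet#> .

ex:Synset      a owl:Class .
ex:LexicalUnit a owl:Class .
ex:hasMember   a owl:ObjectProperty .

# Instance model (OWL DL): the synset is an individual of ex:Synset,
# and membership of a lexical unit is an object-property assertion.
ex:synset-dog a ex:Synset .
ex:lu-dog     a ex:LexicalUnit .
ex:synset-dog ex:hasMember ex:lu-dog .

# Class model (OWL DL): the synset is itself a class; a lexical
# unit can then be typed as one of its individuals.
ex:SynsetDog a owl:Class .
ex:lu-dog-2  a ex:SynsetDog .

# Metaclass model (OWL Full): the synset class is additionally an
# instance of ex:Synset. Typing a class as an individual of another
# class is what pushes the model out of OWL DL into OWL Full.
ex:SynsetDog a ex:Synset .
```

This also shows why the choice matters for processing: DL reasoners accept the first two models, whereas the metaclass model trades DL compatibility for the ability to treat synsets simultaneously as classes and as described objects.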