Integrated Linguistic Annotation Models and Their Application in the Domain of Antecedent Detection
(2011)
Seamless integration of linguistic resources that are often heterogeneous in their output formats, and a combined analysis of the respective annotation layers, are crucial tasks for linguistic research. After a decade of concentration on developing formats that structure single annotations for specific linguistic issues, a variety of specifications for storing multiple annotations over the same primary data have been developed in recent years. The paper focuses on integrating logical document structure, as a knowledge resource, into a text document in order to enhance automatic anaphora resolution, both for candidate detection and for antecedent selection. The paper investigates the data structures necessary for knowledge integration and retrieval.
Lexical chaining has become an important part of many NLP tasks. However, the quality of a chaining process, and hence of its annotation output, depends on the quality of the chaining resource. A framework for chaining is therefore needed that integrates divergent resources in order to balance their deficits and to compare their strengths and weaknesses. In this paper we present an application that implements a meta-model framework for lexical chaining, exemplified with three resources, together with its generalized exchange format.
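The core idea of lexical chaining can be illustrated with a minimal sketch. The toy relatedness resource and the greedy attachment strategy below are assumptions for illustration only; the abstract's framework instead plugs in real lexical resources behind a common interface.

```python
# Illustrative sketch only: a toy lexical chainer over a hand-made
# relatedness resource. A real system would consult a lexical net
# (e.g. a wordnet) instead of this hard-coded set of pairs.

RELATED = {
    ("car", "vehicle"), ("vehicle", "truck"), ("apple", "fruit"),
}

def related(a, b):
    """Symmetric lookup in the toy relatedness resource."""
    return (a, b) in RELATED or (b, a) in RELATED

def build_chains(tokens):
    """Greedily attach each token to the first chain containing a related word;
    tokens related to no existing chain start a new chain."""
    chains = []
    for tok in tokens:
        for chain in chains:
            if any(related(tok, member) for member in chain):
                chain.append(tok)
                break
        else:
            chains.append([tok])
    return chains

print(build_chains(["car", "apple", "vehicle", "truck", "fruit"]))
# [['car', 'vehicle', 'truck'], ['apple', 'fruit']]
```

Because the chain membership of every token is decided by the resource's relatedness judgments, swapping the resource changes the chains, which is exactly why comparing resources within one framework matters.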
The paper discusses two topics. First, an approach to using multiple layers of annotation is sketched; with respect to the XML representation, this approach is similar to standoff annotation. The second topic is the use of heterogeneous linguistic resources (e.g., XML-annotated documents, taggers, lexical nets) as a source for semi-automatic multi-dimensional markup to resolve typical linguistic issues, with anaphora resolution as a case study.
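The standoff idea behind such multi-layer annotation can be sketched as follows. The layer names, labels, and the pointer notation below are hypothetical; the point is only that each layer stores offsets into untouched primary data, so arbitrarily many layers can coexist without conflicting markup.

```python
# Illustrative sketch only: standoff annotation keeps the primary text
# untouched and stores each annotation layer as (start, end, label)
# offsets into it. Layer names and labels here are invented.
primary = "Peter saw Mary. He waved."

layers = {
    # hypothetical part-of-speech layer
    "pos": [(0, 5, "NE"), (6, 9, "VVFIN"), (10, 14, "NE")],
    # hypothetical coreference layer: "He" points back to the span 0:5
    "coref": [(16, 18, "anaphor->0:5")],
}

def spans(layer):
    """Resolve a layer's offsets against the primary data."""
    return [(primary[s:e], label) for s, e, label in layers[layer]]

print(spans("pos"))    # [('Peter', 'NE'), ('saw', 'VVFIN'), ('Mary', 'NE')]
print(spans("coref"))  # [('He', 'anaphor->0:5')]
```

Since layers only reference offsets, overlapping annotations (which inline XML cannot nest) pose no problem, which is the usual motivation for standoff designs.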
Research today is often performed in collaborative projects whose partners come from different backgrounds, institutions, and countries. Standards can be a crucial tool for harmonizing these differences and for creating sustainable resources. However, choosing a standard depends on having enough information to evaluate and compare different annotation and metadata formats. In this paper we present ongoing work on an interactive, collaborative website that collects information on standards in the field of linguistics as a means of guiding interested researchers.
Digital research infrastructures can be divided into four categories: large equipment, IT infrastructure, social infrastructure, and information infrastructure. Modern research institutions often employ both IT infrastructure and information infrastructure, such as databases or large-scale research data. In addition, information infrastructure depends to some extent on IT infrastructure. In this paper, we discuss the IT, information, and legal infrastructure issues that research institutions face.
The TEI has served for many years as a mature annotation format for corpora of different types, including linguistically annotated data. Although it is based on the consensus of a large community, it does not have the legal status of a standard. During the last decade, efforts have been undertaken to develop definitive de jure standards for linguistic data that not only act as a normative basis for the exchange of language corpora but also address recent advancements in technology, such as web-based standards, and the use of large and multiply annotated corpora.
In this article we provide an overview of the process of international standardization and discuss some of the international standards currently being developed under the auspices of ISO/TC 37, the technical committee "Terminology and other Language and Content Resources". We then discuss the relationship between the TEI Guidelines and these specifications with respect to their formal model, notation format, and annotation model. The paper concludes with recommendations for dealing with language corpora.
This article outlines the possibilities opened up by the use of open standards in the field of e-learning and Web Based Training (WBT). Drawing on experience from the BMBF project MiLCA ("Medienintensive Lehrmodule in der Computerlinguistik-Ausbildung", media-intensive teaching modules in computational-linguistics education), it discusses the advantages of an XML-based markup language, combined with an open-source WBT platform, for structuring learning objects. Implementing a complete XML import into the WBT system is only the first step in a much more far-reaching development in which text-linguistic and computational-linguistic methods gain ever greater importance. For example, the use of didactically motivated metadata will enable authors to prepare learning objects adaptively and in a learner-centered way. The integration of ontologies and taxonomies is a further aspect that opens up even more precise options for maintaining and reusing learning objects. The article includes an annotated example learning object to illustrate the developments outlined above and their impact on future academic education.