Gerd Hentschel is one of the pioneers of modern computational lexicography and of IT-supported corpus development. One of his first journal publications, entitled Einsatz von EDV und Mikrocomputer in einem lexikographischen Forschungsprojekt zum deutschen Lehnwort im Polnischen (Hentschel 1983), addresses the question of how, under the technical conditions of the time, research and documentation work on Polish Germanisms could usefully be supported by computers. This work later culminated in the online publication of the Wörterbuch der deutschen Lehnwörter in der polnischen Schrift- und Standardsprache (WDLP). From today's perspective it is remarkable what limitations work with computers still faced 40 years ago. On this occasion, we take the liberty of illustrating this point in some detail.
This paper reports on an ongoing international project of compiling a freely accessible online Dictionary of German Loans in Polish Dialects. The dictionary will be the first comprehensive lexicographic compendium of its kind, serving as a complement to existing resources on German lexical loans in the literary or standard language. The empirical results obtained in the project will shed new light on the distribution of German loanwords among different dialects, also in comparison to the well-documented situation in written Polish. The dictionary will have a strong focus on the dialectal distribution of Polish dialectal variants for a given German etymon, accessible through interactive cartographic representations and corresponding search options. The editorial process is realized with dedicated collaborative web tools. The new resource will be published as an integrated part of an online information system for German lexical borrowings in other languages, the Lehnwortportal Deutsch, and is therefore highly cross-linked with other loanword dictionaries on Polish as well as Slavic and further European languages.
We describe a simple and efficient Java object model and application programming interface (API) for (possibly multi-modal) annotated natural language corpora. Corpora are represented as elements like Sentences, Turns, Utterances, Words, Gestures and Markables. The API allows linguists to access corpora in terms of these discourse-level elements, i.e. at a conceptual level they are familiar with, with the flexibility offered by a general purpose programming language. It is also a contribution to corpus standardization efforts because it is based on a straightforward and easily extensible data model which can serve as a target for conversion of different corpus formats.
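The kind of discourse-level access described above can be sketched in a few lines. The class and attribute names below are hypothetical and only illustrate the general idea of such an object model; they are not the actual API:

```python
# Illustrative sketch of a discourse-level corpus object model
# (all class and attribute names are hypothetical, not the actual API).

class Word:
    def __init__(self, text, start_time=None, end_time=None):
        self.text = text
        self.start_time = start_time  # optional timing for multi-modal data
        self.end_time = end_time

class Sentence:
    def __init__(self, words):
        self.words = words

    def text(self):
        return " ".join(w.text for w in self.words)

class Corpus:
    def __init__(self, sentences):
        self.sentences = sentences

    def words(self):
        # iterate over all word tokens, in document order
        for s in self.sentences:
            yield from s.words

corpus = Corpus([Sentence([Word("Hello"), Word("world")]),
                 Sentence([Word("Good"), Word("morning")])])
print(sum(1 for _ in corpus.words()))   # 4
print(corpus.sentences[0].text())       # Hello world
```

The point of such a model is that a linguist queries the corpus in terms of Sentences and Words rather than raw XML nodes, while still having a full programming language available.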
We present an implemented XML data model and a new, simplified query language for multi-level annotated corpora. The new query language involves automatic conversion of queries into the underlying, more complicated MMAXQL query language. It supports queries for sequential and hierarchical, but also associative (e.g. coreferential) relations. The simplified query language has been designed with non-expert users in mind.
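The general pattern of expanding a compact surface query into a more verbose underlying form can be illustrated with a toy translator. Both syntaxes below are invented purely for illustration; neither is MMAXQL or the simplified query language itself:

```python
# Toy illustration of query simplification: a compact surface form is
# expanded into a more verbose underlying form. Both syntaxes here are
# invented for illustration; they are NOT MMAXQL.

RELATIONS = {
    ">": "dominates",        # hierarchical relation
    ".": "precedes",         # sequential relation
    "~": "associated_with",  # associative relation, e.g. coreference
}

def expand(query):
    """Expand e.g. 'sentence > word' into 'dominates(sentence, word)'."""
    for op, name in RELATIONS.items():
        if f" {op} " in query:
            left, right = query.split(f" {op} ", 1)
            return f"{name}({left.strip()}, {right.strip()})"
    return query  # single-level query, nothing to expand

print(expand("sentence > word"))  # dominates(sentence, word)
print(expand("np ~ pronoun"))     # associated_with(np, pronoun)
```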
We describe a simple procedure for the automatic creation of word-level alignments between printed documents and their respective full-text versions. The procedure is unsupervised, uses standard, off-the-shelf components only, and reaches an F-score of 85.01 in the basic setup and up to 86.63 when using pre- and post-processing. Potential areas of application are manual database curation (incl. document triage) and biomedical expression OCR.
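As an illustration of the general idea, a word-level alignment between noisy OCR tokens and the corresponding full text can be approximated with off-the-shelf standard-library components alone. This sketch is not the paper's actual pipeline; the tokens and the similarity threshold are invented for the example:

```python
# Sketch of unsupervised word-level alignment between an OCR token
# sequence and the corresponding full-text tokens, using only the
# standard library (difflib). Illustrative only, not the paper's method.
import difflib

ocr_tokens  = ["Tbe", "quick", "brown", "f0x", "jumps"]
gold_tokens = ["The", "quick", "brown", "fox", "jumps", "high"]

matcher = difflib.SequenceMatcher(a=ocr_tokens, b=gold_tokens, autojunk=False)
alignments = []
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "equal":
        alignments.extend(zip(range(i1, i2), range(j1, j2)))
    elif tag == "replace" and (i2 - i1) == (j2 - j1):
        # one-to-one substitutions (OCR errors): accept if similar enough
        for i, j in zip(range(i1, i2), range(j1, j2)):
            sim = difflib.SequenceMatcher(a=ocr_tokens[i],
                                          b=gold_tokens[j]).ratio()
            if sim >= 0.5:
                alignments.append((i, j))

print(alignments)  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

Unmatched gold tokens (here "high") simply remain unaligned, which is the desired behaviour when the OCR output is incomplete.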
pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java- and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.
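For illustration, MMAX2-style stand-off data can be read with nothing but the Python standard library. The file layout below is simplified for the example, and the code is emphatically not the pyMMAX2 API:

```python
# Minimal illustration of reading MMAX2-style stand-off annotation data
# with the standard library. The layout is simplified for the example;
# this is NOT the pyMMAX2 API.
import xml.etree.ElementTree as ET

basedata = """<words>
  <word id="word_1">Colorless</word>
  <word id="word_2">green</word>
  <word id="word_3">ideas</word>
</words>"""

markables = """<markables>
  <markable id="markable_1" span="word_1..word_3" level="np"/>
</markables>"""

words = {w.get("id"): w.text for w in ET.fromstring(basedata)}

def span_text(span):
    """Resolve a 'word_i..word_j' span against the basedata words."""
    first, last = span.split("..")
    lo, hi = int(first.split("_")[1]), int(last.split("_")[1])
    return " ".join(words[f"word_{i}"] for i in range(lo, hi + 1))

for m in ET.fromstring(markables):
    print(m.get("level"), "->", span_text(m.get("span")))  # np -> Colorless green ideas
```

The separation of basedata (the tokens) from markables (annotations pointing at token spans) is the essence of the stand-off approach that pyMMAX2 exposes to Python code.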
The chapter on formats and models for lexicons deals with the different available data formats of lexical resources and elaborates on their structure and possible uses. Motivated by the restrictions encountered when merging lexical resources based on widely used formalisms and international standards, a formal lexicon model is developed which is related to graph structures in annotations; for lexicons, this model is termed the Lexicon Graph. Within this model, the concepts of lexicon entries and lexical structures frequently described in the literature are formally defined, and examples are given. The chapter also addresses the problem of ambiguity in these formal terms. An implementation of the defined structures based on XML and XML technologies such as XQuery is given, and the relation to international standards is included as well.
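The idea of representing a lexicon as a graph can be sketched minimally in Python. All names below are illustrative and do not reproduce the chapter's formal Lexicon Graph definition:

```python
# Toy sketch of a lexicon represented as a graph: lemmas and senses are
# nodes, lexical relations are labelled edges. All names are illustrative;
# this is not the chapter's formal Lexicon Graph definition.

edges = [
    ("bank", "has_sense", "bank/1 'river side'"),
    ("bank", "has_sense", "bank/2 'financial institution'"),
    ("bank/2 'financial institution'", "synonym_of", "financial institution"),
]

def senses(edges, lemma):
    """All sense nodes reachable from a lemma via 'has_sense' edges."""
    return [t for (s, rel, t) in edges if s == lemma and rel == "has_sense"]

# A lemma with more than one outgoing 'has_sense' edge makes ambiguity
# explicit in the graph structure itself.
print(len(senses(edges, "bank")))  # 2
```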
XML has been designed for creating structured documents, but the information that is encoded in these structures is, by definition, out of scope for XML. Additional sources such as documentation, which are normally not easily interpretable by computers, are needed to determine the intention of specific tags in a tag set. The Component Metadata Infrastructure (CMDI) takes a rather pragmatic approach to fostering interoperability between XML instances in the domain of metadata descriptions for language resources. This paper gives an overview of this approach.
This short contribution reports on the project "Hypertextualisierung auf textgrammatischer Grundlage" (HyTex), which investigates how linearly organized documents can be converted into delinearized hyperdocuments using semi-automatic methods based on text-grammatical markup and linguistically motivated modelling of terminological knowledge. The goal is to convert a collection of technical texts into a hypertext in such a way that comprehension difficulties caused by terminology are resolved while reading through appropriate link offers, so that the texts can also be read selectively by semi-experts of the domain. The contribution focuses on modelling terminological knowledge with XML Topic Maps and its role in the automatic generation of hyperlinks.
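The core step of terminology-driven link generation can be illustrated with a toy sketch; the term inventory and target URLs below are invented for this example and do not reflect the HyTex implementation:

```python
# Toy sketch of terminology-driven hyperlink generation: occurrences of
# known terms in a text are wrapped in links to their glossary entries.
# The term inventory and target URLs are invented for illustration.
import re

glossary = {
    "markup": "terms.html#markup",
    "hypertext": "terms.html#hypertext",
}

def link_terms(text, glossary):
    # longer terms first, so multi-word terms would win over substrings
    pattern = "|".join(sorted(map(re.escape, glossary), key=len, reverse=True))
    return re.sub(
        rf"\b({pattern})\b",
        lambda m: f'<a href="{glossary[m.group(1).lower()]}">{m.group(1)}</a>',
        text,
        flags=re.IGNORECASE,
    )

print(link_terms("Markup enables hypertext.", glossary))
```

In a HyTex-like setting the glossary would be derived from the modelled terminological knowledge, so that each link resolves a potential comprehension difficulty for semi-expert readers.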