Computerlinguistik
Active Learning (AL) has been proposed as a technique to reduce the amount of annotated data needed in the context of supervised classification. While various simulation studies for a number of NLP tasks have shown that AL works well on gold-standard data, there is some doubt whether the approach can be successful when applied to noisy, real-world data sets. This paper presents a thorough evaluation of the impact of annotation noise on AL and shows that systematic noise resulting from biased coder decisions can seriously harm the AL process. We present a method to filter out inconsistent annotations during AL and show that this makes AL far more robust when applied to noisy data.
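The filtering idea can be sketched as a plain active-learning loop in which a freshly annotated example is discarded when the current model confidently predicts a different label. Everything below (margin-based uncertainty, the 0.9 confidence threshold, the `train`/`predict_proba`/`oracle` callables) is a hypothetical illustration, not the paper's actual setup:

```python
import random

def uncertainty(probs):
    """Margin between the two most probable labels; a small margin means
    the model is uncertain about this instance."""
    top = sorted(probs.values(), reverse=True)
    return top[0] - (top[1] if len(top) > 1 else 0.0)

def active_learning_with_filtering(pool, oracle, train, predict_proba,
                                   seed_size=10, rounds=5, batch=5,
                                   threshold=0.9):
    """Query the most uncertain instances each round, then drop newly
    annotated examples whose label contradicts a confident model
    prediction (a simple stand-in for the paper's consistency filter)."""
    pool = list(pool)
    random.shuffle(pool)
    labeled = [(x, oracle(x)) for x in pool[:seed_size]]
    unlabeled = pool[seed_size:]
    for _ in range(rounds):
        model = train(labeled)
        # least certain instances first
        unlabeled.sort(key=lambda x: uncertainty(predict_proba(model, x)))
        queried, unlabeled = unlabeled[:batch], unlabeled[batch:]
        for x in queried:
            y = oracle(x)                      # possibly noisy annotation
            probs = predict_proba(model, x)
            best = max(probs, key=probs.get)
            # consistency filter: skip annotations the model confidently rejects
            if best != y and probs[best] >= threshold:
                continue
            labeled.append((x, y))
    return train(labeled)
```

Any classifier that exposes training and per-label probabilities can be plugged in for `train` and `predict_proba`; the filter itself is independent of the model family.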
Problems in parsing morphologically rich languages are caused, amongst other things, by the higher variability in structure due to less rigid word-order constraints and by the higher number of distinct lexical forms. Both properties can result in sparse-data problems for statistical parsing. We present a simple approach to addressing these issues. Our approach makes use of self-training on instances selected with regard to their similarity to the annotated data. Our similarity measure is based on the perplexity of part-of-speech trigrams of new instances measured against the annotated training data. Preliminary results show that our method outperforms a self-training setting where instances are simply selected in order of occurrence in the corpus, and we argue that self-training is a cheap and effective method for improving parsing accuracy for morphologically rich languages.
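The selection step described above can be illustrated with a small add-one-smoothed POS-trigram model: score each unlabeled sentence's tag sequence against the annotated data and keep the best-matching ones. The smoothing scheme, padding symbols, and function names are assumptions made for this sketch, not the authors' exact implementation:

```python
import math
from collections import Counter

def trigram_model(tag_sequences):
    """Count POS trigrams and their bigram contexts in the annotated data."""
    tri, bi, vocab = Counter(), Counter(), set()
    for tags in tag_sequences:
        padded = ["<s>", "<s>"] + list(tags) + ["</s>"]
        vocab.update(padded)
        for i in range(len(padded) - 2):
            tri[tuple(padded[i:i + 3])] += 1
            bi[tuple(padded[i:i + 2])] += 1
    return tri, bi, len(vocab)

def perplexity(tags, model, alpha=1.0):
    """Add-alpha smoothed perplexity of one POS sequence; lower values mean
    the sequence looks more like the annotated training data."""
    tri, bi, v = model
    padded = ["<s>", "<s>"] + list(tags) + ["</s>"]
    n = len(padded) - 2
    logp = 0.0
    for i in range(n):
        t3 = tuple(padded[i:i + 3])
        p = (tri[t3] + alpha) / (bi[t3[:2]] + alpha * v)
        logp += math.log(p)
    return math.exp(-logp / n)

def select_for_self_training(candidates, model, k):
    """Choose the k unlabeled sentences (given as tag sequences) whose
    POS trigrams best match the annotated data."""
    return sorted(candidates, key=lambda tags: perplexity(tags, model))[:k]
```

The selected sentences would then be parsed automatically and added to the training set, as in any self-training setup.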
This paper presents initial experiences with, and reflections on, the task of designing a microstructure programme for a hypertext dictionary. A number of publications on the hypertextualization of printed dictionaries have appeared by now; in most of them, however consistent the hypertextualization, the tie to a printed original remains. In contrast to such hypertext dictionaries, the following considerations start from a hypertext that is independent of any printed original and whose general aim is to convey information about the German vocabulary. The experiences and considerations presented here are tied to a concrete project: LEKSIS, the lexical-lexicological information system of the Institut für Deutsche Sprache, Mannheim. A (further) project description is omitted here; it can be found in Fraas/Haß-Zumkehr (1999) and on the homepage at http://www.ids-mannheim.de/wiw. Against the background of this project, we discuss the conditions and lexicographic consequences of the medium hypertext as opposed to print.
We present SPLICR, the Web-based Sustainability Platform for Linguistic Corpora and Resources. The system is aimed at people who work in linguistics or computational linguistics: a comprehensive database of metadata records can be explored in order to find language resources appropriate for one's specific research needs. SPLICR also provides an interface that enables users to query and to visualise corpora. The project in which the system is being developed aims at sustainably archiving the approximately 60 language resources that have been constructed in three collaborative research centres. Our project has two primary goals: (a) to process and sustainably archive the resources so that they are still available to the research community in five, ten, or even twenty years' time; (b) to enable researchers to query the resources both on the level of their metadata and on the level of linguistic annotations. In more general terms, our goal is to enable solutions that leverage the interoperability, reusability, and sustainability of heterogeneous collections of language resources.
In the context of the HyTex project, our goal is to convert a corpus into a hypertext, basing conversion strategies on annotations which explicitly mark up the text-grammatical structures and relations between text segments. Domain-specific knowledge is represented in the form of a knowledge net, using topic maps. We use XML as an interchange format. In this paper, we focus on a declarative rule language designed to express conversion strategies in terms of text-grammatical structures and hypertext results. The strategies can be formulated in a concise formal syntax which is independent of the markup and which can be transformed automatically into executable program code.
Generating hyperlink offers for reconstructing terminology-based knowledge prerequisites
(2002)
This paper sketches strategies for the (semi-)automatic annotation of definitional text segments and term-usage instances on the basis of grammatically annotated corpora. The aim of our considerations is to make the specific knowledge prerequisites that underlie the use of technical terms, and that play a decisive role in text comprehension, reconstructable via automatically generated hyperlink offers during the selective reading of specialized texts in a hypertext environment.
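As a rough illustration of what such (semi-)automatic annotation might look like, the toy matcher below flags sentences in a POS-tagged corpus in which a known technical term is closely followed by a German copula or definition cue. The cue list, window size, and function name are invented for this sketch and do not reflect the project's actual annotation scheme:

```python
def find_definition_candidates(tagged_sentences, terms):
    """Flag sentences in which a known term is followed within three tokens
    by a German definition cue; a toy heuristic, not the project's scheme."""
    cues = {"ist", "sind", "bezeichnet", "bedeutet", "heißt"}
    candidates = []
    for sent in tagged_sentences:          # sent: list of (token, POS) pairs
        tokens = [tok for tok, _ in sent]
        lowered = [t.lower() for t in tokens]
        for term in terms:
            if term.lower() in lowered:
                i = lowered.index(term.lower())
                # cue verb shortly after the term suggests a definition
                if any(t in cues for t in lowered[i + 1:i + 4]):
                    candidates.append((term, " ".join(tokens)))
                    break
    return candidates
```

In a real pipeline, the grammatical annotation would allow far more precise patterns than this surface heuristic, and flagged segments would become link targets for term-usage instances elsewhere in the text.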
Action, conceptualization, and social interaction co-develop, mutually scaffolding and supporting each other within a virtuous feedback cycle in the development of human language in children. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and, in particular, language acquisition in robots. We thus include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple complementary approaches in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, while research on human learning relevant to developmental robotics has also contributed useful results.
Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods.
This paper describes the lexical database tool LOLA (Linguistic-Oriented Lexical database Approach), which has been developed for the construction and maintenance of lexicons for the machine translation system LMT. First, the requirements such a tool should meet are discussed; then LMT, the lexical information it requires, and some issues concerning vocabulary acquisition are presented. Afterwards, the architecture and components of the LOLA system are described, and we show how we tried to meet the requirements worked out earlier. Although LOLA was originally designed and implemented for the German-English LMT prototype, it aimed from the beginning at a representation of lexical data that can be reused for other LMT or MT prototypes, or even other NLP applications. A special point of discussion is therefore the adaptability of the tool and its components, as well as the reusability of the lexical data stored in the database for lexicon development for LMT or for other applications.
To date, computational lexicography concerned with building lexicons for language-processing systems has taken little notice of metalexicographic research results. Yet the theory-guided investigation of the components and structures of dictionary texts is an important prerequisite for converting dictionaries into dictionary databases that can serve as a data basis both for building lexicons for machine language processing and for building hypertext dictionaries for human users. This article is a plea for the relevance of metalexicographic research results to computational lexicographic practice. First, the research areas of computational lexicography and computer-assisted lexicography are delimited from each other; then their relationship to lexicographic practice on the one hand and to metalexicography on the other is outlined. The main part of the paper shows, using so-called dictionary parsing as an example, how metalexicographic methods and research results can be put into practice in computational linguistics.