Computational Linguistics
Document Type
- Conference Proceeding (302)
- Part of a Book (126)
- Article (87)
- Book (26)
- Working Paper (16)
- Other (15)
- Report (11)
- Contribution to a Periodical (7)
- Doctoral Thesis (7)
- Master's Thesis (4)
Language
- English (422)
- German (186)
- Multiple languages (2)
- French (1)
Keywords
- Computational linguistics (205)
- Corpus (linguistics) (166)
- Annotation (78)
- German (76)
- Automatic language analysis (69)
- Research data (50)
- Natural language (49)
- Digital Humanities (42)
- Spoken language (40)
- Machine learning (33)
Publication state
- Published version (373)
- Secondary publication (108)
- Postprint (55)
- Preprint (2)
- (Publisher's) copy-editing (1)
- First publication (1)
Publisher
- Association for Computational Linguistics (40)
- European Language Resources Association (32)
- de Gruyter (30)
- Springer (26)
- European Language Resources Association (ELRA) (23)
- Institut für Deutsche Sprache (21)
- Zenodo (17)
- Linköping University Electronic Press (13)
- The Association for Computational Linguistics (11)
- CLARIN (9)
Learning from Errors. Systematic Analysis of Complex Writing Errors for Improving Writing Technology
(2015)
In this paper, we describe ongoing research on writing errors, with the ultimate goal of developing error-preventing editing functions for word processors. Drawing on state-of-the-art error research carried out in various fields, we propose applying the general concept of action slips introduced by Norman. We demonstrate the feasibility of this approach using a large corpus of writing errors in published texts. The concept of slips considers both the process and the product: some failure in a procedure results in an error in the product, i.e., one that is visible in the written text. In order to develop preventive functions, we need to determine the causes of such visible errors.
Lexical chaining has become an important part of many NLP tasks. However, the quality of a chaining process, and hence of its annotation output, depends on the quality of the chaining resource. A chaining framework is therefore needed that integrates divergent resources in order to balance their deficits and to compare their strengths and weaknesses. In this paper we present an application that implements such a framework as a meta-model of lexical chaining, exemplified on three resources, together with its generalized exchange format.
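The abstract does not reproduce the generalized exchange format itself. As a rough illustration only, a chain record might minimally carry its members, the licensing relations, and the resource that produced the links. A minimal sketch in Python, with every field name assumed rather than taken from the paper:

```python
import json

# Hypothetical, minimal exchange record for one lexical chain.
# Field names are illustrative; the paper's actual format may differ.
chain = {
    "id": "chain-0001",
    "resource": "GermaNet",  # chaining resource that licensed the links
    "members": ["Haus", "Gebäude", "Wohnung"],
    "links": [
        {"from": "Haus", "to": "Gebäude", "relation": "hypernymy", "weight": 0.9},
        {"from": "Gebäude", "to": "Wohnung", "relation": "meronymy", "weight": 0.6},
    ],
}

print(json.dumps(chain, ensure_ascii=False, indent=2))
```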
The task-oriented and format-driven development of corpus query systems has led to the creation of numerous corpus query languages (QLs) that vary strongly in expressiveness and syntax. This is a severe impediment to the interoperability of corpus analysis systems, which lack a common protocol. In this paper, we present KoralQuery, a JSON-LD-based general corpus query protocol that aims to be independent of particular QLs, tasks and corpus formats. In addition to describing the system of types and operations that KoralQuery is built on, we exemplify the representation of corpus queries in the serialized format and illustrate use cases in the KorAP project.
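To make the idea concrete, here is a sketch of what a JSON-LD serialization of a simple two-token sequence query could look like, written as a Python dict. The typed objects and operation labels mimic the paper's description of types and operations, but the exact vocabulary shown here is assumed, not quoted from the KoralQuery specification:

```python
import json

# Sketch of a KoralQuery-style serialization for the sequence "der"
# followed by a token with lemma "Baum". Keys such as "koral:token"
# and "operation:sequence" are illustrative stand-ins.
query = {
    "@context": "http://korap.ids-mannheim.de/ns/koral/0.3/context.jsonld",
    "query": {
        "@type": "koral:group",
        "operation": "operation:sequence",
        "operands": [
            {"@type": "koral:token",
             "wrap": {"@type": "koral:term", "layer": "orth", "key": "der"}},
            {"@type": "koral:token",
             "wrap": {"@type": "koral:term", "layer": "lemma", "key": "Baum"}},
        ],
    },
}

print(json.dumps(query, indent=2))
```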
The present thesis introduces KoralQuery, a protocol for the generic representation of queries to linguistic corpora. KoralQuery defines a set of types and operations which serve as abstract representations of linguistic entities and configurations. By combining these types and operations in a nested structure, the protocol can express linguistic structures of arbitrary complexity. It achieves a high degree of neutrality with regard to linguistic theory, as it provides flexible structures that allow certain parameters to be set in order to access several complementary and concurrent sources and layers of annotation on the same textual data. JSON-LD is used as a serialisation format for KoralQuery, which allows for the well-defined and normalised exchange of linguistic queries between query engines and thus promotes their interoperability. The automatic translation of queries issued in any of three supported query languages into such KoralQuery serialisations is the second main contribution of this thesis. By employing the introduced translation module, query engines can work independently of particular query languages, as their backend technology may rely entirely on the abstract KoralQuery representations of the queries. Query engines may thus support several query languages at once without any additional overhead. The original idea of a general format for the representation of linguistic queries comes from an initiative called Corpus Query Lingua Franca (CQLF), whose theoretical backbone and practical considerations are outlined in the first part of this thesis. This part also includes a brief survey of three typologically different corpus query languages, demonstrating their wide variety of features and defining the minimal target space of linguistic types and operations to be covered by KoralQuery.
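One way to picture the query-language independence described here: each supported QL registers a translator that emits the abstract representation, and the engine's backend consumes only that. A minimal sketch, in which all names (register, to_abstract, the toy one-token Poliqarp rule) are hypothetical:

```python
from typing import Callable, Dict

# Hypothetical registry mapping query-language names to translators
# that emit an abstract (KoralQuery-like) dict; the engine never
# touches the raw query string of any particular QL.
TRANSLATORS: Dict[str, Callable[[str], dict]] = {}

def register(ql_name: str):
    def wrap(fn: Callable[[str], dict]) -> Callable[[str], dict]:
        TRANSLATORS[ql_name] = fn
        return fn
    return wrap

@register("poliqarp")
def translate_poliqarp(query: str) -> dict:
    # Toy translation: treat the whole string as one orthographic token.
    return {"@type": "koral:token",
            "wrap": {"@type": "koral:term", "layer": "orth", "key": query}}

def to_abstract(ql_name: str, query: str) -> dict:
    """Translate a query in any registered QL into the abstract form."""
    return TRANSLATORS[ql_name](query)

print(to_abstract("poliqarp", "Baum"))
```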
In this feasibility study, we aim to contribute to the practical use of domain ontologies for hypertext classification by introducing an algorithm that generates potential keywords. The algorithm uses structural markup information and lemmatized word lists as well as a domain ontology on linguistics. We present the calculation and ranking of keyword candidates based on ontology relationships, word position, frequency information, and statistical significance as evidenced by log-likelihood tests. Finally, the results of our machine-driven classification are validated empirically against manually assigned keywords.
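The statistical-significance component can be made concrete with Dunning's log-likelihood (G²) over a 2×2 contingency table, comparing a candidate's frequency in the hypertext against a reference corpus. The sketch below shows only this ranking step with made-up counts; the paper additionally weights ontology relationships and word position:

```python
import math

def log_likelihood(freq_doc: int, size_doc: int,
                   freq_ref: int, size_ref: int) -> float:
    """Dunning's G^2 over the 2x2 table (term vs. rest, doc vs. reference)."""
    table = [
        [freq_doc, freq_ref],
        [size_doc - freq_doc, size_ref - freq_ref],
    ]
    n = size_doc + size_ref
    col_totals = [size_doc, size_ref]
    g2 = 0.0
    for i in range(2):
        row_total = table[i][0] + table[i][1]
        for j in range(2):
            observed = table[i][j]
            expected = row_total * col_totals[j] / n
            if observed > 0:
                g2 += observed * math.log(observed / expected)
    return 2.0 * g2

# Hypothetical candidate counts: (occurrences in the hypertext,
# occurrences in a reference corpus). Higher G^2 = stronger keyness.
candidates = {"ontology": (12, 2), "linguistics": (9, 40), "the": (60, 5000)}
DOC_SIZE, REF_SIZE = 1_000, 100_000
ranked = sorted(candidates,
                key=lambda w: log_likelihood(candidates[w][0], DOC_SIZE,
                                             candidates[w][1], REF_SIZE),
                reverse=True)
print(ranked)
```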
As the nature of negative polarity items (NPIs) and their licensing contexts is still under much debate, a broad empirical basis is an important cornerstone for further insights in this area of research. The work discussed in this paper is intended as a contribution to realizing this objective. We briefly introduce the phenomenon of NPIs and outline major theories about their licensing, as well as various licensing contexts, before discussing our two major topics. Firstly, a corpus-based retrieval method for NPI candidates is described that ranks the candidates according to their distributional dependence on the licensing contexts. Our method extracts single-word candidates and is extended to also capture multi-word candidates. The basic idea for automatically collecting NPI candidates from a large corpus is that an NPI behaves like a kind of collocate of its licensing contexts. Manual inspection and interpretation of the candidate lists identify the actual NPIs. Secondly, an online repository for NPIs and other items that show distributional idiosyncrasies is presented, which offers a sustainable empirical database for further (theoretical) research on these items.
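The collocation idea can be illustrated with a deliberately simplified scorer: rank each word by the share of its occurrences that fall inside licensing contexts. This is a stand-in for the paper's actual ranking measure, and the function names and toy data are invented:

```python
from collections import Counter
from typing import Iterable, List, Tuple

def rank_npi_candidates(sentences: Iterable[Tuple[List[str], bool]],
                        min_freq: int = 2) -> List[Tuple[str, float]]:
    """Rank words by their distributional dependence on licensing contexts.

    `sentences` yields (tokens, licensed) pairs, where `licensed` marks a
    sentence containing a licenser (e.g., negation). Scoring by the share
    of a word's occurrences inside licensed contexts is a simplification
    of the paper's collocation-based ranking.
    """
    in_licensed, total = Counter(), Counter()
    for tokens, licensed in sentences:
        for tok in tokens:
            total[tok] += 1
            if licensed:
                in_licensed[tok] += 1
    scored = [(w, in_licensed[w] / total[w])
              for w in total if total[w] >= min_freq]
    return sorted(scored, key=lambda ws: ws[1], reverse=True)

# Toy data: "jemals" (ever) occurs only under negation, "Haus" anywhere.
data = [
    (["niemand", "hat", "jemals", "gewonnen"], True),
    (["keiner", "will", "jemals", "zurück"], True),
    (["das", "Haus", "ist", "groß"], False),
    (["niemand", "mag", "das", "Haus"], True),
]
print(rank_npi_candidates(data))
```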
This contribution deals with conceptual and methodological questions from a project in which a new reference edition of Thomas Mann's complete works is being prepared for publication in two media: as a book and as an electronic edition. The basis for this is an information pool in which the texts are held in SGML/XML and linked by a topic map. The contribution outlines the architecture of the system as well as the technical and conceptual considerations behind it. It shows how the electronic version in particular breaks new ground, creating a working tool for literary scholars that offers entirely new ways of accessing Thomas Mann's work.
Lexicographic data are normally linked with each other in complex ways. Within the electronic lexicographic context in particular, the following issues arise: how to encode these cross-reference structures so that the lexicographers' editorial work of linking entries is easy to handle, and so that the presentation options remain adequately flexible. The objective of this paper is to elucidate the presentation of an XML model of cross-reference structures as part of a complete modelling concept. In doing so, the modelling potential of the XML-related standard XLink and a new lexicographic concept are brought together with cross-project guidelines for the modelling of link structures.
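As an illustration of the kind of structure under discussion, the following sketch builds a dictionary cross-reference with standard XLink attributes using Python's ElementTree. The element names (entry, crossRef) are hypothetical and follow no particular project schema; only the xlink:* attributes are standard:

```python
import xml.etree.ElementTree as ET

XLINK = "http://www.w3.org/1999/xlink"
ET.register_namespace("xlink", XLINK)

# Hypothetical dictionary entry: a cross-reference from "Auto" to the
# target entry "Wagen", encoded with standard XLink attributes.
entry = ET.Element("entry", {"id": "auto"})
ET.SubElement(entry, "lemma").text = "Auto"
ref = ET.SubElement(entry, "crossRef", {
    f"{{{XLINK}}}type": "simple",
    f"{{{XLINK}}}href": "#wagen",
    f"{{{XLINK}}}title": "synonym",
})
ref.text = "Wagen"

print(ET.tostring(entry, encoding="unicode"))
```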
This contribution presents an XML schema for annotating a high-level narratological category: speech, thought and writing representation (ST&WR). It focuses on two aspects: firstly, the original schema is presented as an example of the challenge of encoding a narrative feature in a structured and flexible way; secondly, ways of adapting this schema to the TEI are considered, in order to make it usable for other, TEI-based projects.
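One conceivable TEI adaptation, sketched below, would express ST&WR categories as @ana values on TEI <seg> elements rather than as a project-specific element inventory. The category identifiers are invented for illustration and are not the original schema's inventory:

```python
# Hypothetical TEI-style markup: ST&WR categories as @ana references on
# <seg>. Category names like "stwr.speech.direct" are illustrative.
tei_fragment = """\
<p>
  <seg ana="#stwr.speech.direct">"Ich gehe jetzt", sagte sie.</seg>
  <seg ana="#stwr.thought.reported">Er überlegte, ob er folgen sollte.</seg>
</p>"""
print(tei_fragment)
```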