Korpuslinguistik
Year of publication
- 2015 (27)
Document Type
- Conference Proceeding (11)
- Part of a Book (10)
- Book (3)
- Article (1)
- Master's Thesis (1)
- Other (1)
Keywords
- Korpus <Linguistik> (23)
- Annotation (9)
- Corpus annotation (6)
- Corpus technology (6)
- Datenbanksystem (6)
- Deutsch (5)
- Historische Sprachwissenschaft (5)
- Large corpora (5)
- Corpus linguistics (4)
- Computerlinguistik (3)
Publication state
- Veröffentlichungsversion (17)
- Zweitveröffentlichung (5)
- Postprint (1)
Review state
Publisher
- Institut für Deutsche Sprache (12)
- Narr (7)
- German Society for Computational Linguistics & Language Technology (GSCL) (2)
- Association for Computational Linguistics (ACL); Curran Associates, Inc. (1)
- Linköping University Electronic Press, Linköpings universitet (1)
- Narr Francke Attempto (1)
- Stauffenburg (1)
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of all linguistic theories that take social interaction involving language into account. However, determining communicative functions is a notoriously difficult task both for human interpreters and for systems, as it involves an interpretative process that integrates various sources of information. Existing work on communicative function classification either comes from dialogue act tagging, where it is generally coarse-grained with respect to feedback phenomena, or is token-based and does not address the variety of forms that feedback utterances can take. This paper introduces an annotation framework, a dataset and the related annotation campaign (involving 7 raters annotating nearly 6000 utterances). We present its evaluation not merely in terms of inter-rater agreement but also in terms of the usability of the resulting reference dataset, both from a linguistic research perspective and from a more applicative viewpoint.
The project Referenzkorpus Altdeutsch (‘Old German Reference Corpus’) aims to establish a deeply annotated text corpus of all extant Old German texts. As the automated part-of-speech and morphological pre-annotation is amended by hand, a quality control system for the results is a desirable objective. To this end, standardized inflectional forms, generated from the morphological information, are compared with the attested word forms. Their creation is described by way of example for the Old High German part of the corpus. As is shown, in a few cases some features of the attested word forms are also required in order to determine as exactly as possible the shape of the inflected lemma form to be created.
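The comparison step described above can be sketched as follows. This is a hypothetical illustration only: the paradigm entries, tag names and word forms below are invented examples, not the project's actual annotation scheme.

```python
# Sketch of the quality-control idea: regenerate a standardized
# inflected form from the lemma plus its morphological annotation,
# then flag tokens whose attested form deviates from it.
# Paradigm data and tag labels here are invented for illustration.

# Toy paradigm table: (lemma, morph tag) -> standardized inflected form.
PARADIGM = {
    ("tag", "Nom.Sg"): "tag",
    ("tag", "Nom.Pl"): "taga",
    ("geban", "Inf"): "geban",
    ("geban", "3.Sg.Pres"): "gibit",
}

def check_annotation(attested, lemma, morph):
    """Return None if the attested form matches the generated one,
    otherwise a tuple describing the suspicious annotation."""
    expected = PARADIGM.get((lemma, morph))
    if expected is None:
        return (attested, lemma, morph, "no paradigm entry")
    if attested != expected:
        return (attested, lemma, morph, f"expected {expected!r}")
    return None

tokens = [
    ("gibit", "geban", "3.Sg.Pres"),   # annotation consistent
    ("taga", "tag", "Nom.Sg"),         # number tag probably wrong
]
suspicious = [r for t in tokens if (r := check_annotation(*t))]
```

A mismatch does not prove the annotation wrong; as the abstract notes, some attested forms legitimately diverge, so flagged tokens would go back to a human annotator.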
Usenet is a large online resource containing user-generated messages (news articles) organised in discussion groups (newsgroups) which deal with a wide variety of different topics. We describe the download, conversion, and annotation of a comprehensive German news corpus for integration in DeReKo, the German Reference Corpus hosted at the Institut für Deutsche Sprache in Mannheim.
The availability of large multi-parallel corpora offers an enormous wealth of material to contrastive corpus linguists, translators and language learners, if we can exploit the data properly. Necessary preparation steps include sentence and word alignment across multiple languages. Additionally, linguistic annotation such as part-of-speech tagging, lemmatisation, chunking, and dependency parsing facilitates precise querying of linguistic properties and can be used to extend word alignment to sub-sentential groups. Such highly interconnected data is stored in a relational database to allow for efficient retrieval and linguistic data mining, which may include the statistics-based selection of good example sentences. The varying information needs of contrastive linguists require a flexible linguistic query language for ad hoc searches. Such queries, in the format of generalised treebank query languages, will be automatically translated into SQL queries.
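The storage-and-query idea can be sketched with an in-memory SQLite database: tokens and alignment links live in relational tables, and a treebank-style query compiles into a join. The table layout, column names and query-language notation below are illustrative assumptions, not the project's actual schema.

```python
# Minimal sketch: tokens and word-alignment links in relational tables,
# queried via SQL joins. Schema and data are invented for illustration.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE token (id INTEGER PRIMARY KEY, sent_id INT, lang TEXT,
                    form TEXT, lemma TEXT, pos TEXT);
CREATE TABLE align (src_id INT, tgt_id INT);  -- word-alignment links
""")
con.executemany("INSERT INTO token VALUES (?,?,?,?,?,?)", [
    (1, 1, "de", "Haus", "Haus", "NN"),
    (2, 1, "en", "house", "house", "NN"),
])
con.execute("INSERT INTO align VALUES (1, 2)")

# A hypothetical query-language expression such as
#   [pos="NN"] aligned-to [lang="en"]
# could be compiled into a join like this:
rows = con.execute("""
    SELECT s.form, t.form FROM token s
    JOIN align a ON a.src_id = s.id
    JOIN token t ON t.id = a.tgt_id
    WHERE s.pos = 'NN' AND t.lang = 'en'
""").fetchall()
```

Because every annotation layer becomes a table (or a column), each new query-language construct maps to one more join condition, which is what makes the automatic translation into SQL tractable.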
This article reports on the ongoing CoRoLa project, which aims at creating a reference corpus of contemporary Romanian (from 1945 onwards), open for free online use by researchers in linguistics and language processing, teachers of Romanian, and students. We are investing considerable effort in persuading large publishing houses and other owners of IPR on relevant language data to join us and contribute selections of their text and speech repositories to the project. The CoRoLa project is coordinated by two Computer Science institutes of the Romanian Academy, and enjoys the cooperation of, and consulting from, professional linguists from other institutes of the Romanian Academy. We foresee a written component of the corpus of more than 500 million word forms, and a speech component of about 300 hours of recordings. The entire collection of texts (covering all functional styles of the language) will be pre-processed and annotated at several levels, and also documented with standardized metadata. The pre-processing includes cleaning the data and harmonising the diacritics, sentence splitting and tokenization. Annotation will include morpho-lexical tagging and lemmatization in the first stage, followed by syntactic, semantic and discourse annotation in a later stage.
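The pre-processing steps named above (cleaning, diacritic harmonisation, sentence splitting, tokenization) can be sketched in a few lines. This is only an illustration of the kind of work involved, not the CoRoLa pipeline itself; one common harmonisation task for Romanian is mapping legacy cedilla codepoints (ş, ţ) to the standard comma-below forms (ș, ț).

```python
# Hedged sketch of a Romanian pre-processing pipeline: whitespace
# cleaning, diacritic harmonisation, naive sentence splitting and
# tokenization. The real pipeline is considerably more elaborate.
import re

# Map legacy cedilla codepoints to the standard comma-below forms.
DIACRITICS = str.maketrans("şţŞŢ", "șțȘȚ")

def preprocess(text):
    text = " ".join(text.split())          # clean stray whitespace
    text = text.translate(DIACRITICS)      # harmonise diacritics
    sentences = re.split(r"(?<=[.!?])\s+", text)
    # naive tokenizer: split punctuation off word forms
    return [re.findall(r"\w+|[^\w\s]", s) for s in sentences]

tokens = preprocess("Aceasta  este o propoziţie. Încă una!")
```

Real sentence splitters must of course handle abbreviations, numbers and quotation marks, which this regex deliberately ignores.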
When a past participle (Partizip II) becomes independent in meaning and usage, linguists speak of lexicalization. The result is a pseudo-participle that can no longer be identified as a verb form. But how systematically can we capture participles whose behaviour partly points to lexicalization, yet which at the same time appear to have a transparent verbal base in present-day German?
This volume describes past participles of experiencer-object verbs such as verwirrt, frustriert or begeistert on the basis of their particular semantics and analyses the usage of 21 selected items with corpus-linguistic methods, both qualitatively and quantitatively. The focus is on their use in combination with the copula or passive auxiliaries sein and werden and with the causative verb machen, in which the participial forms occur in verbal and/or adjectival use. This yields several remarkable results and previously unnoticed correlations.
With an increasing amount of text data available, it is possible to automatically extract a variety of information about language. One way to obtain knowledge about subtle relations and analogies between words is to observe which words are used in the same contexts. Recently, Mikolov et al. proposed a method to efficiently compute Euclidean word representations which seem to capture subtle relations and analogies between words in the English language. We demonstrate that this method also captures analogies in the German language. Furthermore, we show that we can transfer information extracted from large non-annotated corpora into small annotated corpora, which are then, in turn, used for training NLP systems.
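The analogy mechanism referred to above is the vector-offset method: the vector König − Mann + Frau should land nearest to Königin. A toy illustration with hand-made 3-dimensional vectors (real models use hundreds of dimensions trained on large corpora; the words and values below are invented for demonstration):

```python
# Toy demonstration of the vector-offset analogy method
# (Mikolov et al.): answer "mann : könig = frau : ?" by nearest
# cosine neighbour of vec(könig) - vec(mann) + vec(frau).
# The vectors are hand-made; dimensions loosely encode
# "royal", "male", "female".
import math

VEC = {
    "könig":   (0.9, 0.9, 0.1),   # royal + male
    "königin": (0.9, 0.1, 0.9),   # royal + female
    "mann":    (0.1, 0.9, 0.1),   # male
    "frau":    (0.1, 0.1, 0.9),   # female
    "hund":    (0.2, 0.5, 0.2),   # distractor
}

def cos(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def analogy(a, b, c):
    """Return the word d (excluding a, b, c) whose vector is most
    cosine-similar to vec(b) - vec(a) + vec(c)."""
    target = tuple(VEC[b][i] - VEC[a][i] + VEC[c][i] for i in range(3))
    return max((w for w in VEC if w not in (a, b, c)),
               key=lambda w: cos(VEC[w], target))

result = analogy("mann", "könig", "frau")
```

With trained embeddings the same arithmetic recovers many morphosyntactic and semantic regularities, which is what the paper evaluates for German.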
In this paper we present some preliminary considerations concerning the possibility of automatically parsing an annotated corpus for N-N compounds. This should in principle be possible at least for relational and stereotype compounds, if the lemmatization of the corpus connects the lemmata with lexical entries as described in Höhle (1982). These lexical entries then supply the necessary information about the argument structure of a relational noun or about the stereotypical purpose associated with the noun’s referent, which can be used to establish a relation between the first constituent and the head constituent of the compound.
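The lookup idea can be sketched as follows: the lexical entry of the head noun supplies either an argument slot (relational nouns) or a stereotypical purpose, from which a relation between the two constituents is derived. The entries, relation labels and function names below are invented examples, not Höhle's (1982) actual formalism.

```python
# Illustrative sketch: derive the relation inside an N-N compound
# from the lexical entry of its head noun. Lexicon contents and
# relation labels are invented for demonstration purposes.

LEXICON = {
    # relational noun: the head takes the first constituent
    # as an argument (e.g. Hundebesitzer 'dog owner')
    "besitzer": {"type": "relational", "relation": "ARGUMENT_OF"},
    # stereotype noun: the referent has a typical purpose
    # (e.g. Brotmesser 'bread knife')
    "messer":   {"type": "stereotype", "relation": "USED_ON"},
}

def interpret_compound(modifier, head):
    """Return (modifier, relation, head), or None if the head noun
    carries no lexical information and the relation stays open."""
    entry = LEXICON.get(head)
    if entry is None:
        return None
    return (modifier, entry["relation"], head)

dog_owner = interpret_compound("hund", "besitzer")
bread_knife = interpret_compound("brot", "messer")
```

For heads without such entries the relation remains underspecified, which matches the paper's restriction to relational and stereotype compounds.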