Korpuslinguistik
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005: 285; Null-hypothesis significance testing of word frequencies: a follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that license inferences from a sample (in this case, a collection of naturally occurring language data) to a population (in this case, the languages under study) are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
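To make concrete what kind of test is at issue, the following is a minimal sketch (with invented frequencies, not data from the paper) of the classic chi-squared test of whether a word's frequency differs between two corpora:

```python
# Minimal illustration of the kind of test under discussion: a
# chi-squared test of whether a word's relative frequency differs
# between two corpora. The counts are invented for illustration.
from scipy.stats import chi2_contingency

word_a, size_a = 120, 1_000_000  # occurrences / corpus size (tokens), corpus A
word_b, size_b = 210, 1_500_000  # occurrences / corpus size (tokens), corpus B

table = [
    [word_a, size_a - word_a],
    [word_b, size_b - word_b],
]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")
```

The resulting p-value is only meaningful if the tokens can be treated as independent draws from the population of interest; the paper's argument is that corpus data violate precisely this kind of assumption.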
Distributional models of word use constitute an indispensable tool in corpus-based lexicological research for discovering paradigmatic relations and syntagmatic patterns (Belica et al. 2010). Recently, word embeddings (Mikolov et al. 2013) have revived the field by making it possible to construct and analyze distributional models on very large corpora. This is accomplished by reducing the very high dimensionality of word co-occurrence contexts, the size of the vocabulary, to a small number of dimensions, such as 100–200. However, word use and meaning can vary widely along dimensions such as domain, register, and time, and word embeddings tend to represent only the most prevalent meaning. In this paper we therefore construct domain-specific word embeddings to allow for systematically analyzing variations in word use. Moreover, we also demonstrate how to reconstruct domain-specific co-occurrence contexts from the dense word embeddings.
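As a rough illustration of the general approach (not the authors' implementation), domain-specific embeddings can be obtained by training a separate model per subcorpus and comparing a word's neighbourhoods across domains; the corpus files, probe word, and hyperparameters below are placeholders:

```python
# Sketch: train one embedding model per domain-specific subcorpus and
# compare a word's nearest neighbours across domains. Corpus files,
# the probe word, and hyperparameters are placeholders.
from gensim.models import Word2Vec
from gensim.models.word2vec import LineSentence

domains = {"news": "news_corpus.txt", "science": "science_corpus.txt"}

models = {}
for domain, path in domains.items():
    models[domain] = Word2Vec(
        LineSentence(path),  # one pre-tokenized sentence per line
        vector_size=100,     # compress co-occurrence space to 100 dims
        window=5,
        min_count=10,
    )

# Variation in word use shows up as diverging neighbourhoods per domain.
for domain, model in models.items():
    print(domain, model.wv.most_similar("cell", topn=5))
```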
Common Crawl is a very large, heterogeneous multilingual corpus of documents crawled from the internet, surpassing 20 TB of data and distributed as a set of more than 50 thousand plain-text files, each containing many documents written in a wide variety of languages. Even though each document has a metadata block associated with it, this metadata lacks any information about the language in which the document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium- to low-resource infrastructures where I/O speed is the main constraint. We develop the pipeline so that it can easily be reapplied to any kind of heterogeneous corpus and parameterised for a wide range of infrastructures. We also distribute a 6.3 TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.
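A bare-bones sketch of the per-file classification step is shown below, using fastText's publicly available lid.176.bin language-identification model; multiprocessing stands in for the paper's multithreaded design, and all paths are placeholders rather than details of the actual pipeline:

```python
# Bare-bones sketch of the classification step: read a plain-text shard
# line by line, predict each line's language with fastText's public
# lid.176.bin model, and append the line to a per-language output file.
# All file names are placeholders; the real pipeline is more elaborate.
import fasttext
from multiprocessing import Pool

model = None  # loaded once per worker process


def classify_shard(shard_path: str) -> None:
    global model
    if model is None:
        model = fasttext.load_model("lid.176.bin")
    with open(shard_path, encoding="utf-8", errors="replace") as shard:
        for line in shard:
            line = line.rstrip("\n")
            if not line:
                continue
            labels, _ = model.predict(line)  # e.g. ('__label__en',)
            lang = labels[0].replace("__label__", "")
            # Appending per line keeps the sketch short; a real pipeline
            # would hold per-language file handles open instead.
            with open(f"out/{lang}.txt", "a", encoding="utf-8") as out:
                out.write(line + "\n")


if __name__ == "__main__":
    shards = [f"shard_{i:05d}.txt" for i in range(4)]  # placeholder names
    with Pool(processes=4) as pool:  # one worker per shard
        pool.map(classify_shard, shards)
```

A production version would also shard outputs per worker, since concurrent appends from several processes to the same file can interleave lines.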
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on these three schema versions, a fourth version has been created which takes into account the experience gained from encoding our corpora and which is specifically designed for the submission of a feature request to the TEI Council. On our poster we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
The present paper examines a variety of ways in which the Corpus of Contemporary Romanian Language (CoRoLa) can be used. A multitude of examples is intended to highlight the wide range of interrogation possibilities that CoRoLa opens up for different types of users. The querying of CoRoLa displayed here is supported by the KorAP frontend, through the query language Poliqarp. Interrogations address annotation layers, such as the lexical, morphological and, in the near future, syntactic layers, as well as the metadata. Other issues discussed are how to build a virtual corpus, how to deal with errors, and how to find and identify expressions.
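For readers unfamiliar with the setup, the following hypothetical sketch shows how Poliqarp queries might be sent to a KorAP instance over its web API; the instance URL, endpoint path, and response fields are assumptions modelled on KorAP's public API conventions, not documented details of the CoRoLa deployment:

```python
# Hypothetical sketch of querying a KorAP instance with Poliqarp.
# The instance URL, endpoint path, and response fields are assumptions
# modelled on KorAP's public API, not verified CoRoLa details.
import requests

KORAP_API = "https://korap.racai.ro/api/v1.0/search"  # assumed endpoint


def search(query: str, ql: str = "poliqarp", count: int = 10) -> list:
    """Run a query and return the list of match objects from the JSON."""
    resp = requests.get(KORAP_API, params={"q": query, "ql": ql, "count": count})
    resp.raise_for_status()
    return resp.json().get("matches", [])


# Lexical layer: all inflected forms of the Romanian lemma "merge" ('to go')
for match in search("[base=merge]"):
    print(match.get("snippet"))
```

Queries such as [base=merge] address the lemma layer; constraints on other annotation layers and on metadata follow the same bracketed attribute-value syntax.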
Introduction
(2019)
Corpus-based lexicography is an interesting and varied field of scientific application that should also play a greater role in first-language German teaching and in German-as-a-foreign-language instruction. In our contribution we therefore present suitable corpora and corpus analysis tools with whose help users can not only retrace individual types of information given in a dictionary but also compile them independently. In addition to existing approaches, this is demonstrated using the example of the Denktionary, a wiki-based dictionary for which school students themselves wrote corpus-based articles in first-language German lessons as part of the project Schüler machen Wörterbücher – Wörterbücher machen Schule.
Little strokes fell great oaks. Creating CoRoLa, the reference corpus of contemporary Romanian
(2019)
The paper presents the long-standing tradition of Romanian corpus acquisition and processing, which reaches its peak with the reference corpus of contemporary Romanian language (CoRoLa). The paper describes the decisions behind the kinds of texts collected, as well as the processing and annotation steps, highlighting the structure and importance of metadata in the corpus. The reader is also introduced to three ways of exploring the corpus's rich linguistic data. Besides querying the corpus, word embeddings extracted from it are useful both for various natural language processing applications and, via user-friendly interfaces, for linguists exploiting the data.