Refine
Year of publication
Document Type
- Part of a Book (18)
- Conference Proceeding (9)
- Article (4)
- Working Paper (1)
Keywords
- Corpus <linguistics> (29)
- Corpus analysis platform (KorAP) (5)
- Computational linguistics (4)
- German (4)
- Research data (4)
- Contrastive linguistics (4)
- Romanian (4)
- Language data (4)
- User interface (3)
- German Reference Corpus (DeReKo) (3)
Publication state
- Published version (17)
- Secondary publication (10)
- Postprint (1)
Review state
- Peer review (15)
- Publisher's copy-editing (10)
- Review status unknown (1)
Publisher
- De Gruyter (7)
- European Language Resources Association (ELRA) (5)
- Editura Academiei Române (3)
- IDS-Verlag (3)
- Leibniz-Institut für Deutsche Sprache (2)
- CECL Papers 1 (1)
- Gesellschaft für Sprachtechnologie und Computerlinguistik (1)
In the mid-1990s, the Faculty of Linguistics and Literary Studies at Bielefeld University began to establish the field of text technology, in both research and education. Text technology is a new field of research on the border between computational linguistics and computational philology.
This paper focuses on text technology in academic education. In 2002, Text Technology was introduced as a minor subject for B.A. programs. It is organized in modules: Module 1 introduces the characteristics of electronic texts and documents, typography, typesetting systems, and hypertext. Module 2 introduces one or two programming languages relevant to the field of humanities computing. Markup languages and the principles of information structuring are the main topics of Module 3. The formal foundations of computer-based text processing, such as formal languages and their grammars, logic, etc., are the subject of another module. The paper ends with a short description of other bachelor's and master's programs at Bielefeld University that contain text-technological themes.
Little strokes fell great oaks. Creating CoRoLa, the reference corpus of contemporary Romanian
(2019)
The paper presents the long-standing tradition of Romanian corpus acquisition and processing, which reaches its peak with the reference corpus of contemporary Romanian (CoRoLa). The paper describes the decisions behind the kinds of texts collected, as well as the processing and annotation steps, highlighting the structure and importance of metadata in the corpus. The reader is also introduced to the three ways of exploring the corpus's rich linguistic data. Besides querying the corpus, word embeddings extracted from it are useful both for various natural language processing applications and for linguists, when user-friendly interfaces offer the possibility to exploit the data.
In recent years, discourse linguistics has established itself as a linguistic subdiscipline that reconstructs socially relevant worlds of thought and ideas through transtextual studies of linguistic patterns. Digitalization has not only fundamentally changed our society and shaped new forms of communication and innovative cultural practices, but has also significantly influenced discourse-linguistic work. The establishment of discourse linguistics, as well as of discourse-oriented lexicography, was thus shaped by its close alignment with computer-assisted methods (Bubenhofer 2009, Teubert/Čermáková 2007, Halliday et al. 2004), which make large text collections accessible for discourse analyses. Since discourse-analytical research in the Foucauldian tradition is not interested in individual instances but works with contextual patterns and intertextual reference structures, corpus-assisted analysis offers a productive starting point for discourse studies. This applies in particular to discourse lexicography, in which dictionaries on cultural-historical discourses are compiled on a broad data basis.
In this chapter, we first introduce basic concepts of query systems and query languages for searching corpora. These concepts are intended to help you better understand and compare the individual query languages. The common query languages differ in many details. In the second part, we present these details, along with the possibilities and limits of the individual query languages, with many example tasks and matching solutions in three query languages each.
This paper reports on the latest developments of the European Reference Corpus EuReCo and the German Reference Corpus DeReKo in relation to three of the most important CMLC topics: interoperability, collaboration on corpus infrastructure building, and legal issues. Concerning interoperability, we present new ways to access DeReKo via KorAP at the API and plugin levels. In addition, we report on advances in the EuReCo and ICC initiatives with the provision of comparable corpora, and on recent problems with license acquisitions, together with our solution approaches using an indemnification clause and model licenses that include scientific exploitation.
The German Reference Corpus DeReKo serves as an empirical basis for German linguistics. In this contribution, we give an overview of the fundamentals of and news about DeReKo and its possible uses, as well as an insight into its overall strategic conception, which aims to make DeReKo usable, despite limited resources, for as many applications as possible on the one hand and for innovative and demanding applications on the other. In particular, we explain strategies for preparing very large corpora with necessarily heuristic procedures, and challenges that arise on the way to the linguistic exploitation of such corpora.
This paper reports on recent developments within the European Reference Corpus EuReCo, an open initiative that aims at providing and using virtual and dynamically definable comparable corpora based on existing national, reference or other large corpora. Given the well-known shortcomings of other types of multilingual corpora such as parallel/translation corpora (shining-through effects, over-normalization, simplification, etc.) or web-based comparable corpora (covering only web material), EuReCo provides a unique linguistic resource offering new perspectives for fine-grained contrastive research on authentic cross-linguistic data, applications in translation studies and foreign language teaching and learning.
News from KorAP
(2019)
The corpus analysis platform KorAP is being developed at the Leibniz Institute for the German Language (IDS) as the successor system to COSMAS II and provides comprehensive access to a part of DeReKo (Kupietz et al. 2010). Although some functionality is still missing, KorAP is already usable in production. In the following, we present some new possibilities and special features, using the investigation of social media corpora as an example.
Making corpora accessible and usable for linguistic research is a huge challenge in view of (too) big data, legal issues, and a rapidly evolving methodology. This affects not only the design of user-friendly graphical interfaces to corpus analysis tools, but also the availability of programming interfaces supporting access to the functionality of these tools from various analysis and development environments. RKorAPClient is a new research tool in the form of an R package that interacts with the web API of the corpus analysis platform KorAP, which provides access to large annotated corpora, including the German Reference Corpus DeReKo with 45 billion tokens. In addition to optionally authenticated KorAP API access, RKorAPClient provides further processing and visualization features to simplify common corpus analysis tasks. This paper introduces the basic functionality of RKorAPClient and exemplifies various analysis tasks based on DeReKo that are bundled within the R package and can serve as a basic framework for advanced analysis and visualization approaches.
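As a rough illustration of what such an API client does under the hood, the sketch below builds a search URL for a KorAP-style web service and reads a match count from a JSON reply. The endpoint path, parameter names (`q`, `ql`, `cq`), and response fields are assumptions modeled loosely on KorAP's web interface, not the actual RKorAPClient implementation.

```python
from urllib.parse import urlencode

# Hypothetical endpoint; the path and parameter names are assumptions,
# not the documented KorAP interface.
BASE_URL = "https://korap.ids-mannheim.de/api/v1.0/search"

def build_query_url(query, query_language="poliqarp", virtual_corpus=None):
    """Build a search URL for a KorAP-style corpus API.

    `virtual_corpus` is an optional corpus query restricting the search
    to a virtual corpus defined via metadata (e.g. a publication-date range).
    """
    params = {"q": query, "ql": query_language}
    if virtual_corpus is not None:
        params["cq"] = virtual_corpus
    return BASE_URL + "?" + urlencode(params)

def total_matches(response_json):
    """Read the total match count from a (hypothetical) JSON response."""
    return response_json.get("meta", {}).get("totalResults", 0)
```

A client library wraps exactly this kind of request construction and response unpacking, so that users can work with result tables and frequencies instead of raw HTTP.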
Enabling appropriate access to linguistic research data, both for many researchers and for innovative research applications, is a challenging task. In this chapter, we describe how we address this challenge in the context of the German Reference Corpus DeReKo and the corpus analysis platform KorAP. The core of our approach, which is based on and tightly integrated into the CLARIN infrastructure, is to offer access at different levels. The graduated access levels make it possible to find a low-loss compromise between the possibilities opened up and the costs incurred by users and providers for each individual use case, so that, viewed over many applications, the ratio between effort and results achieved can be effectively optimized. We also report on experiences with the current state of this approach.
KorAP, the new corpus analysis platform of the IDS, which will replace COSMAS II over the coming two to three years, offers some special functionality, particularly for research on grammatical variation. A fundamental feature, for example, is that KorAP supports the representation and querying of arbitrary, and arbitrarily many, annotation layers, for instance for constituency and dependency relations, and thus facilitates, or makes possible in the first place, the search for specific grammatical phenomena. In addition, KorAP supports the construction of virtual corpora based on metadata variables and thereby facilitates contrastive studies. This article explains the KorAP functionality relevant to research on grammatical variation in detail and provides an insight into its foundations.
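The idea behind virtual corpora defined by metadata variables can be sketched in a few lines; the document representation and field names below are made up for illustration and are not KorAP's actual data model.

```python
def virtual_corpus(documents, **criteria):
    """Select the documents whose metadata satisfy all criteria.

    Each criterion is either a fixed value or a one-argument predicate,
    keyed by a metadata field name (field names here are hypothetical).
    """
    def matches(doc):
        for field, wanted in criteria.items():
            value = doc.get(field)
            if callable(wanted):
                if not wanted(value):
                    return False
            elif value != wanted:
                return False
        return True
    return [doc for doc in documents if matches(doc)]

# Example: a tiny "archive" with invented metadata fields.
archive = [
    {"title": "A", "country": "DE", "year": 2005, "genre": "news"},
    {"title": "B", "country": "AT", "year": 1998, "genre": "news"},
    {"title": "C", "country": "DE", "year": 1995, "genre": "fiction"},
]
```

The point of the abstraction is that no documents are copied: a virtual corpus is just a metadata filter evaluated at query time, which is what makes contrastive comparisons between, say, country- or period-specific subcorpora cheap.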
The International Comparable Corpus (ICC) (Kirk/Čermáková 2017; Čermáková et al. 2021) is an open initiative which aims to improve the empirical basis for contrastive linguistics by compiling comparable corpora for many languages and making them as freely available as possible as well as providing tools with which they can easily be queried and analysed. In this contribution we present the first release of written language parts of the ICC which includes corpora for Chinese, Czech, English, German, Irish (partly), and Norwegian. Each of the released corpora contains 400k words distributed over 14 different text categories according to the ICC specifications. Our poster covers the design basics of the ICC, its TEI encoding, a demonstration of using the ICC via different query tools, and an outlook on future plans.
Similar to the European Reference Corpus EuReCo (Kupietz et al. 2020), the ICC follows the approach of reusing existing linguistic resources wherever possible in order to cover as many languages as possible with realistic effort in as short a time as possible. In contrast to EuReCo, however, comparable corpus pairs are not defined dynamically in the usage phase; rather, the compositions of the corpora are fixed in the ICC design. The approaches are thus complementary in this respect. The design principles and composition of the ICC are based on those of the International Corpus of English (ICE) (Greenbaum (ed.) 1996), with the deviation that the ICC includes the additional text category blog post and excludes spoken legal texts (see Čermáková et al. 2021 for details). ICC's fixed-design approach has the advantage that all single-language corpora in the ICC have the same composition with respect to the selected text types, which guarantees that the selected broad spectrum of potential influencing variables for linguistic variation is always represented. The disadvantage, however, is that this can only be achieved for quite small corpora, and that the generalisability of comparative findings based on the ICC corpora will often need to be checked on larger monolingual corpora or translation corpora (Čermáková/Ebeling/Oksefjell Ebeling forthcoming). Since such issues with comparability and representativeness are inevitable in one way or another and need to be dealt with, our poster will discuss and exemplify the text selections in more detail.
In this paper, we present our experiences and decisions in dealing with challenges in developing, maintaining, and operating online research software tools in the field of linguistics. In particular, we highlight reproducibility, dependability, and security as important aspects of quality management – taking into account the special circumstances in which research software is usually created.
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that, while not all tools are applicable in this setting, no compromises regarding quality need to be made.
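A joint speed-and-quality evaluation of a tokenizer can be sketched as follows. The interface and the bag-of-tokens F1 metric are illustrative assumptions, not the evaluation procedure used in the paper, which would typically score token boundaries against a gold standard.

```python
import time
from collections import Counter

def evaluate_tokenizer(tokenize, texts, gold_tokens):
    """Time a tokenizer over `texts` and score it against `gold_tokens`.

    `tokenize` maps a string to a list of tokens; `gold_tokens` holds the
    reference tokenization for each text. Quality is reported as a
    token-level F1 over bag-of-token overlaps.
    """
    start = time.perf_counter()
    outputs = [tokenize(text) for text in texts]
    elapsed = time.perf_counter() - start

    tp = fp = fn = 0
    for out, gold in zip(outputs, gold_tokens):
        out_counts, gold_counts = Counter(out), Counter(gold)
        overlap = sum((out_counts & gold_counts).values())  # multiset intersection
        tp += overlap
        fp += sum(out_counts.values()) - overlap
        fn += sum(gold_counts.values()) - overlap

    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    produced = tp + fp
    return {"f1": f1,
            "tokens_per_second": produced / elapsed if elapsed else float("inf")}
```

Running this over a large, diverse sample surfaces exactly the trade-off the paper describes: two tools with near-identical F1 can differ by orders of magnitude in tokens per second, which is decisive at corpus scale.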