We report on a new project building a Natural Language Processing resource for Zulu by making use of resources already available. By semi-automatically combining tagging results with the results of morphological analysis, we expect to reduce the amount of manual work when generating a fine-grained gold standard corpus usable for training a tagger. From the tagged corpus, we plan to extract verb-argument pairs with the aim of compiling a verb valency lexicon for Zulu.
The paper deals with the process of computer-aided transcription of Arabic-German data material for interaction-based studies. First, it sheds light on some major methodological challenges posed by conversation-analytic approaches: given current corpus technology, the reciprocity, linearity, and simultaneity of linguistic activities cannot be reconstructed in an analytically adequate way when Arabic characters are used in multilingual, bidirectional transcripts. The difficulty of transcribing Arabic encounters is compounded by the fact that the non-standard varieties and spoken-language phenomena of Arabic have so far received insufficient (conversation-analytic) attention. The second part of the paper is therefore dedicated to the solutions developed and tested so far, namely a rigorous, conversation-analytically grounded transcription system for Spoken Arabic.
Content
1 Predicting learner knowledge of individual words using machine learning
Drilon Avdiu, Vanessa Bui, Klára Ptačinová Klimčíková
2 Automatic Generation and Semantic Grading of Esperanto Sentences in a Teaching Context
Eckhard Bick
3 Toward automatic improvement of language produced by non-native language learners
Mathias Creutz, Eetu Sjöblom
4 Linguistic features and proficiency classification in L2 Spanish and L2 Portuguese
Iria del Río
5 Integrating large-scale web data and curated corpus data in a search engine supporting German literacy education
Sabrina Dittrich, Zarah Weiss, Hannes Schröter, Detmar Meurers
6 Formalism for a language agnostic language learning game and productive grid generation
Sylvain Hatier, Arnaud Bey, Mathieu Loiseau
7 Understanding Vocabulary Growth Through An Adaptive Language Learning System
Elma Kerz, Andreas Burgdorf, Daniel Wiechmann, Stefan Meeger, Yu Qiao, Christian Kohlschein, Tobias Meisen
8 Summarization Evaluation meets Short-Answer Grading
Margot Mieskes, Ulrike Padó
9 Experiments on Non-native Speech Assessment and its Consistency
Ziwei Zhou, Sowmya Vajjala, Seyed Vahid Mirnezami
10 The Impact of Spelling Correction and Task Context on Short Answer Assessment for Intelligent Tutoring Systems
Ramon Ziai, Florian Nuxoll, Kordula De Kuthy, Björn Rudzewitz, Detmar Meurers
The subject is a comparative empirical corpus study on the meaning of the expression geschäftsmäßig in (Federal German) general language use and in legal technical usage. Using a current case of contested word meaning (here concerning § 217 StGB), the study illustrates what computer-assisted analysis of language use can contribute to interpretation in court and to the prognosis of normative texts in legislation.
Dulko is an error-annotated German-Hungarian learner corpus under construction at the University of Szeged. Since summer 2017 it has been funded by the Alexander von Humboldt Foundation within the framework of an institutional partnership between the IDS and the Institute of German Studies at the University of Szeged ("Deutsch-ungarischer Sprachvergleich: korpustechnologisch, funktional-semantisch und sprachdidaktisch (DeutUng)"). The learner data collected in Dulko consist of German-language essays and translations from Hungarian into German, both elicited under controlled conditions. The participants are students at the Institute of German Studies at the University of Szeged with Hungarian as their native language and German as their first or second foreign language.
This contribution briefly presents a reference work on areal variation in the grammar of German: the "Variantengrammatik des Standarddeutschen", published in the form of an online wiki. It is the main result of many years of collaboration by the project group "Variantengrammatik" under the direction of the authors of this contribution. For the project, an areally weighted and annotated corpus was compiled, consisting of the local and regional sections of the online editions of 68 regionally distributed newspapers. The selected newspapers are divided among fifteen areas of the contiguous German-speaking region. The tokenised, lemmatised and part-of-speech-annotated corpus on which the Variantengrammatik is based comprises about 600 million words.
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a loss-less standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard ’Transcription of spoken language’ with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the “Tsakorpus” search platform. This step allows us to make the corpora available through a web-based search interface. As an addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
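The two conversion steps described above (spoken-language annotation to ISO/TEI, then to a search-platform JSON) can be sketched roughly as follows. The element names, attributes, and JSON field names below are heavily simplified inventions standing in for the real ISO/TEI and Tsakorpus formats:

```python
import json
import xml.etree.ElementTree as ET

# Illustrative input: a minimal ISO/TEI-like utterance block. The real
# ISO 'Transcription of spoken language' schema is far richer.
TEI_SNIPPET = """
<annotationBlock who="SPK0" start="0.0" end="1.2">
  <u>
    <w lemma="go" pos="V">went</w>
    <w lemma="home" pos="ADV">home</w>
  </u>
</annotationBlock>
"""

def tei_block_to_json(xml_text):
    """Convert one utterance block into a search-platform-style
    sentence dict; the JSON keys are assumptions for illustration."""
    block = ET.fromstring(xml_text)
    tokens = [
        {"wf": w.text, "lemma": w.get("lemma"), "pos": w.get("pos")}
        for w in block.iter("w")
    ]
    return {
        "speaker": block.get("who"),
        "start": float(block.get("start")),
        "end": float(block.get("end")),
        "tokens": tokens,
    }

sentence = tei_block_to_json(TEI_SNIPPET)
print(json.dumps(sentence, indent=2))
```

In the project itself, the first step is performed by an XSL transformation over EXMARaLDA XML; the sketch only shows the shape of the structural mapping.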
The teaching of languages for specific purposes is becoming ever more relevant in today's European society, which is characterised by 'movements' of various kinds. At the same time, learner groups are becoming increasingly heterogeneous, and teachers, who are usually not experts in the subject area, find it difficult to design learner-appropriate courses, since opportunities for training and further education are rare. Many questions remain open or have only been partially answered, and a uniform answer is not always possible; nevertheless, instead of problem cases we also want to present experiments and solutions. We want to show how, and with which means and tools, languages for specific purposes can be described, and what effects this can have in the classroom. After an overview of the different ways of defining 'language for specific purposes', we show what effects the different emphases can have in teaching. Finally, we present a small corpus-linguistic experiment (a corpus of the articles on the thematic focus 'Fachsprache' in ZIF 2019-1) in order to offer suggestions for the use of corpora, since corpora can have a positive effect in all phases of teaching (before, during and after), for teachers as well as for learners.
The DRuKoLA project
(2019)
DRuKoLA, the accompanying project in the making of the Corpus of Romanian Language, is a cooperation between German and Romanian computer scientists, corpus linguists and linguists, aiming at linking reference corpora of European languages under one corpus analysis tool able to manage big data. KorAP, the analysis tool developed at the Leibniz Institute for the German Language (Mannheim), is being tailored for the Romanian language in a first attempt to reunite reference corpora under the EuReCo initiative, detailed in this paper. The paper describes the necessary steps of harmonization within KorAP and the corpus of Romanian language and discusses, as one important goal of this project, criteria and ways to build virtual comparable corpora to be used for contrastive linguistic analyses.
The present paper examines a variety of ways in which the Corpus of Contemporary Romanian Language (CoRoLa) can be used. A multitude of examples is intended to highlight the wide range of interrogation possibilities that CoRoLa opens up for different types of users. The querying of CoRoLa displayed here is supported by the KorAP frontend through the query language Poliqarp. Interrogations address annotation layers, such as the lexical, morphological and, in the near future, the syntactic layer, as well as the metadata. Other issues discussed are how to build a virtual corpus, how to deal with errors, and how to find and identify expressions.
The user interfaces for corpus analysis platforms must provide a high degree of accessibility for ordinary users and at the same time provide the possibility to answer complex research questions. In this paper, we present the design concepts behind the user interface of KorAP, a corpus analysis platform that has evolved into the main gateway to CoRoLa, the Reference Corpus of Contemporary Romanian Language. Based on established principles of user interface design, we show how KorAP addresses the challenge of providing a user-friendly interface for heterogeneous corpus data to a wide range of users with different research questions.
Little strokes fell great oaks. Creating CoRoLa, the reference corpus of contemporary Romanian
(2019)
The paper presents the quite long-standing tradition of Romanian corpus acquisition and processing, which reaches its peak with the reference corpus of contemporary Romanian language (CoRoLa). The paper describes decisions behind the kinds of texts collected, as well as processing and annotation steps, highlighting the structure and importance of metadata to the corpus. The reader is also introduced to the three ways in which (s)he can plunge into the rich linguistic data of the corpus, waiting to be discovered. Besides querying the corpus, word embeddings extracted from it are useful to various natural language processing applications and for linguists, when user-friendly interfaces offer them the possibility to exploit the data.
Introduction
(2019)
Persuasionsstrategien in deutschen rechtsorientierten Zeitungen. Eine korpuslinguistische Studie
(2019)
Corpus linguistics has often proved fruitful for examining different types of discourse, including the discourse on refugees. The aim of the paper is to show how patterns of language use can be brought into focus with the help of techniques grounded in corpus linguistics, yielding information about themes and topoi. After showing what types of words (keywords, collocations) and what types of phenomena (topoi, metaphors and frames) will be considered, the article turns to the methodology and the adopted criteria. After presenting the primary corpus (articles from right-oriented newspapers) and the comparison corpus (articles from 'Die Zeit'), the main results of the analysis are presented and discussed.
This article investigates the transitive-oblique alternation in German that involves the preposition an ‘at, on’, e.g. ein Buch schreiben ‘write a book’ vs. an einem Buch schreiben ‘work on / write a book’ (lit. write at a book). The crucial semantic difference between the two structures is the obligatory atelic interpretation of the prepositional an-variant. Based on a corpus study for twenty verbs that were discussed in the previous work, I revisit the assumptions that were made by Filip (1999). First, the incremental theme verbs like bauen ‘build’ or essen ‘eat’ appear only seldom with an. This questions the central role of incrementality as the semantic explanation for the acceptability of the an-variant. Second, selectional preferences of verbs differ in the two argument structures. This observation challenges the assumption that the an-phrase and the direct object are alternative syntactic realizations of the same verbal argument. Overall, this first corpus-based study of the an-construction reveals complex interactions between the semantics of individual verbs, verb classes and the meaning of the preposition an.
The following article shows how several verbal argument structure patterns can build clusters or families. Argument structure patterns are conceptualised as form-meaning pairings related by family relationships. These are based on formal and / or semantic characteristics of the individual patterns making up the family. The small family of German argument structure patterns containing vor sich her and vor sich hin is selected to illustrate the process whereby pattern meaning combines with the syntactic and semantic properties of the patterns’ individual components to constitute a higher-level family or cluster of argument structure patterns. The study shows that the patterns making up the family are similar with regard to some of their formal characteristics, but differ quite clearly with respect to their meaning. The article also discusses the conditions of usage of the individual patterns of the family, the contribution of verb meaning and prepositional meaning to the overall meaning of the patterns, coercion effects, and productivity issues.
Argumentstrukturmuster. Ein elektronisches Handbuch zu verbalen Argumentstrukturen im Deutschen
(2019)
Valency-based and construction-based approaches to argument structure have been competing for quite a while. However, while valency-based approaches are backed up by numerous valency dictionaries as comprehensive descriptive resources, nothing comparable exists for construction-based approaches. This paper describes the foundations of an ongoing project at the Institut für Deutsche Sprache in Mannheim. The aim of the project is the compilation of an online description of a net of German argument structure patterns. The main purpose of this resource is to provide an empirical basis for evaluating the adequacy of valency- versus construction-based theories of argument structure. The paper addresses the theoretical background, in particular the concepts of pattern and argument structure, and the corpus-based method of the project. Furthermore, it describes the coverage of the resource, the microstructure of the articles, and the macrostructure, which is conceived of as a net of argument structure patterns based on family resemblance.
This contribution presents the scientific and methodological challenges of creating an innovative, corpus-based lexicographic resource on the lexis of spoken German in interaction, and points out new directions for lexicographic work. In addition to general project information on the starting points, the data basis, the methods, the goals and the specific subject area, selected results of two project-related empirical studies on expectations towards a lexicographic resource of spoken German are presented. For corpus-based quantitative information, the capabilities of a tool developed within the project are demonstrated. Furthermore, insight is given into the conceptual and methodological considerations concerning the microstructure of the planned resource.
Are borrowed neologisms accepted more slowly into the German language than German words resulting from the application of word formation rules? This study addresses this question by focusing on two possible indicators for the acceptance of neologisms: a) the frequency development of 239 German neologisms from the 1990s (loanwords as well as new words resulting from the application of word formation rules) in the German reference corpus DeReKo, and b) the frequency development in the use of pragmatic markers ('flags', namely quotation marks and phrases such as sogenannt 'so-called') with these words. In the second part of the article, a psycholinguistic approach to evaluating the (psychological) status of different neologisms and non-words in an experimentally controlled study is outlined, together with plans to carry out interviews in a field test to collect speakers' opinions on the acceptance of the analysed neologisms. Finally, implications for the lexicographic treatment of both types of neologisms are discussed.
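The 'flag' indicator described above can be illustrated with a toy computation. The corpus lines, the target word, and the 12-character context window below are invented purely for illustration, not the study's actual data or procedure:

```python
import re

# Toy corpus lines tagged with a year; the flags mirror the indicators
# described above (quotation marks, "sogenannt").
corpus = [
    (1995, 'Das sogenannte "Internet" ist neu.'),
    (1995, 'Über das "Internet" wird viel geredet.'),
    (2005, 'Das Internet ist Alltag.'),
    (2005, 'Internet und Telefon gehören zusammen.'),
]

def flagged_ratio(lines, word):
    """Share of occurrences of `word` accompanied by a pragmatic flag
    in a short context window."""
    flagged = total = 0
    for _, text in lines:
        for m in re.finditer(word, text):
            total += 1
            window = text[max(0, m.start() - 12):m.end() + 1]
            if '"' in window or "sogenannt" in window:
                flagged += 1
    return flagged / total if total else 0.0

for year in (1995, 2005):
    lines = [l for l in corpus if l[0] == year]
    print(year, flagged_ratio(lines, "Internet"))
```

A falling flagged ratio over time would be one sign that a neologism is becoming accepted.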
A large database is a desirable basis for multimodal analysis, and the development of more elaborate methods, databases, and tools for a stronger empirical grounding of multimodal analysis is a prevailing topic within multimodality. Corpora of multimodal data are a prerequisite for this. Our contribution develops a proposal for gathering and building multimodal corpora of audio-visual social media data, predominantly YouTube data.

It has two parts: First, we outline a participation framework that can represent the complexity of YouTube communication. To this end we 'dissect' the different communicative and multimodal layers of which YouTube consists. Besides the video performance, YouTube also integrates comments, social media operators, commercials, and announcements of further YouTube videos. The data consist of various media and modes and are interactively engaged in various discourses. Hence, it is rather difficult to decide what can be considered a basic communicative unit (or a 'turn') and how it can be mapped. Another decision to be made is which elements have higher priority than others and thus have to be integrated into an adequate transcription format. We illustrate our conceptual considerations with the example of so-called Let's Plays, which present and comment on computer gaming processes.

The second part is devoted to corpus building. Most previous studies either worked with ad hoc data samples or outlined data mining and data sampling strategies. Our main aim is to delineate systematically, based on the conceptual outline of the first part, the elements that should be part of a YouTube corpus. To this end we describe in a first step which components (e.g., the video itself, the comments, the metadata, etc.) should be captured. In a second step we outline which relations (e.g., screen appearances, hypertextual structures, etc.) deserve to become part of the corpus, and why.
In sum, our contribution aims at outlining a proposal for gathering and systematizing multimodal data, specifically audio-visual social media data, in a corpus derived from a conceptual modeling of important communicative processes of the research object itself.
This article investigates the use of überhaupt and sowieso in German and Dutch. These two words are frequently classified as particles, if only because of their pragmatic functions. The frequent use of particles is considered a specific trait common to German and Dutch, and the description of their semantics and pragmatics is notoriously difficult. It is unclear whether both particles have the same meaning in Dutch (where they are loanwords) and German, whether they can fulfil the same syntactic functions, and to what extent the (semantic and pragmatic) functions of überhaupt and sowieso overlap. There has already been linguistic research on überhaupt and sowieso by Fisseni (2009) using the world-wide web and by Bruijnen and Sudhoff (2013) using the EUROPARL corpus. In the present study, we critically evaluated the corpus study, integrating information on the original utterance language and discussing the adequacy of this corpus. Moreover, we conducted an experimental survey collecting subjective-intuitive judgements in three dimensions, thus gathering more data on sparse and informal constructions.
By using these complementary methods, we obtain a more nuanced picture of the use of überhaupt and sowieso in both languages: On the one hand, the data show where the use of both words is more similar and on the other hand, differences between the languages can also be discerned.
In an earlier publication it was claimed that there is no useful relationship between Swahili-English dictionary look-up frequencies and the occurrence frequencies for the same wordforms in Swahili-English corpora, at least not beyond the top few thousand wordforms. This result was challenged using data for German by a different team of researchers using an improved methodology. In the present article the original Swahili-English data is revisited, using ten years’ worth of it rather than just two, and using the improved methodology. We conclude that there is indeed a positive relationship. In addition, we show that online dictionary look-up behaviour is remarkably similar across languages, even when, as in our case, one is dealing with languages from very dissimilar language families. Furthermore, online dictionaries turn out to have minimum look-up success rates, below which they simply cannot go. These minima are language-sensitive and vary depending on the regularity of the searched-for entries, but are otherwise constant no matter the size of randomly sampled dictionaries. Corpus-informed sampling always improves on any random method. Lastly, from the point of view of the graphical user interface, we argue that the average user of an online bilingual dictionary is better served with a single search box, rather than separate search boxes for each dictionary side.
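The kind of relationship examined here is typically measured with a rank correlation. Below is a minimal stdlib sketch of Spearman's rho applied to invented look-up and corpus frequency counts (the numbers are not from the study):

```python
def rankdata(values):
    """Assign 1-based average ranks; ties share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # mean of the tied positions
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank-transformed data."""
    rx, ry = rankdata(x), rankdata(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical wordforms: dictionary look-up counts vs. corpus
# occurrence counts (both invented for illustration).
lookups = [940, 310, 120, 45, 8]
corpus_freq = [20100, 5400, 2100, 800, 60]
print(round(spearman(lookups, corpus_freq), 3))
```

A rho near 1 over large rank ranges would correspond to the positive relationship the article reports.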
The Archive for Spoken German (Archiv für Gesprochenes Deutsch, AGD; Stift/Schmidt 2014) at the Leibniz Institute for the German Language is a research data centre for corpora of spoken German. Founded as the Deutsches Spracharchiv (DSAv) in 1932, it has built up, through its own projects, cooperations, and the transfer of data from completed research projects, holdings of soon 100 variation, interview and conversation corpora, which document, among other things, dialectal language use, forms of oral communication, and the language use of particular speaker types or on particular topics. Today these holdings are almost completely digitised, and a large part is offered to the scientific community for use in research and teaching via the Database for Spoken German (Datenbank für Gesprochenes Deutsch, DGD) on the internet.
This paper investigates emergent pseudo-coordination in spoken German. In a corpus-based study, seven verbs in the first conjunct are analyzed regarding the degree of semantic bleaching and the development of subjective or aspectual meaning components. Moreover, it is shown that each verb shows distinct tendencies for co-occurrences, especially with deictic adverbs in the first conjunct and with specific verbs and verb classes in the second conjunct. It is argued that pseudo-coordination is originally motivated by the need for 'chunking' in unplanned speech and that it is still prominently used in this function in German, in contrast to languages in which pseudo-coordination is grammaticalized further.
Sprechen im Umbruch. Zeitzeugen erzählen und argumentieren rund um den Fall der Mauer im Wendekorpus
(2019)
This contribution describes a multiply annotated corpus of German-language song lyrics as a data basis for interdisciplinary research scenarios. The resource permits empirically grounded analyses of linguistic phenomena, systemic-structural interrelations, and tendencies in the lyrics of modern pop music. We present the design and annotations of the corpus, which is stratified into thematic and author-specific archives, as well as descriptive statistics using the example of the Udo Lindenberg archive.
In this paper, we present our work in progress to automatically identify free indirect representation (FI), a type of thought representation used in literary texts. With a deep learning approach using contextual string embeddings, we achieve F1 scores between 0.45 and 0.5 (sentence-based evaluation for the FI category) on two very different German corpora, a clear improvement on earlier attempts at this task. We show how consistently marked direct speech can help in this task. In our evaluation, we also consider human inter-annotator scores and thus address measures of certainty for this difficult phenomenon.
This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe-project (LeGeDe=Lexik des gesprochenen Deutsch). First of all, it summarizes the theoretical and methodological approaches that were used for the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data. Therefore, the data of two corpora (written and spoken German) were compared with quantitative methods. The information that was gathered on the selected headword candidates can be assigned to two different sections: meanings and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out. The results of these two studies were also taken into account in the development of the prototype. Focusing on the presentation of the resource’s content, the paper shows both the different lexicographical information in selected dictionary entries, and the information offered by the provided hyperlinks and external texts. As a conclusion, it summarizes the most important innovative aspects that were specifically developed for the implementation of such a resource.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view//cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on the three schema versions, a fourth version has been created which takes into account the experience gained in encoding our corpora and which is specifically designed for the submission of a feature request to the TEI Council. On our poster we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
This paper presents types and annotation layers of reply relations in computer-mediated communication (CMC). Reply relations hold between post units in CMC interactions and describe references from one given post to a previous post. We classify three types of reply relations in CMC interactions: first, technical replies, i.e. the possibility to reply directly to a previous post by clicking a 'reply' button; second, indentations, e.g. in wiki talk pages, in which users insert their contributions into the existing talk page by indenting them; and third, interpretative reply relations, i.e. the reply action is not realised formally but signalled by other structural or linguistic means such as address markers '@', greetings, citations and/or Q-A structures. We take a look at existing practices in the description and representation of such relations in corpora and at examples of chat, Wikipedia talk pages, Twitter and blogs. We then provide an annotation proposal that combines the different levels of description and representation of reply relations and which adheres to the schemas and practices for encoding CMC corpus documents within the TEI framework as defined by the TEI CMC SIG. It constitutes a prerequisite for correctly identifying higher levels of interactional relations such as dialogue acts or discussion trees.
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
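The paper outlines its own test; purely as a generic illustration of the underlying idea (judging a difference between two corpora without a token-level randomness assumption), here is a document-level permutation test on invented per-document frequencies. This is not necessarily the author's procedure:

```python
import random

random.seed(0)  # fixed seed for a reproducible sketch

# Per-document relative frequencies of some feature in two corpora
# (numbers invented). Treating whole documents, rather than tokens,
# as the unit of resampling sidesteps the token-level randomness
# assumption criticised above.
corpus_a = [0.012, 0.015, 0.011, 0.014, 0.016, 0.013]
corpus_b = [0.021, 0.019, 0.024, 0.020, 0.022, 0.018]

def perm_test(a, b, n_iter=10000):
    """Two-sided permutation p-value for the difference of means."""
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_iter):
        random.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / n_iter

p = perm_test(corpus_a, corpus_b)
print(p)
```

A small p here says only that the observed difference is rarely matched under random regrouping of the documents, not that the corpora were randomly sampled from a population.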
Text corpora come in many different shapes and sizes and carry heterogeneous annotations, depending on their purpose and design. The true benefit of corpora is rooted in their annotation and the method by which this data is encoded is an important factor in their interoperability. We have accumulated a large collection of multilingual and parallel corpora and encoded it in a unified format which is compatible with a broad range of NLP tools and corpus linguistic applications. In this paper, we present our corpus collection and describe a data model and the extensions to the popular CoNLL-U format that enable us to encode it.
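A minimal reader for the CoNLL-U format mentioned above can be sketched as follows; the `Align=…` value in the MISC column is a made-up stand-in for the kind of key=value extension the paper describes, not the project's actual extension:

```python
# A minimal CoNLL-U sentence (tab-separated ten-column token lines,
# '#' comment lines carrying sentence metadata).
CONLLU = """\
# sent_id = 1
# text = Sie liest .
1\tSie\tsie\tPRON\t_\t_\t2\tnsubj\t_\tAlign=0.4
2\tliest\tlesen\tVERB\t_\t_\t0\troot\t_\tAlign=1.1
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_
"""

FIELDS = ["id", "form", "lemma", "upos", "xpos",
          "feats", "head", "deprel", "deps", "misc"]

def parse_sentence(text):
    """Parse one CoNLL-U sentence into (metadata dict, token dicts)."""
    meta, tokens = {}, []
    for line in text.splitlines():
        if line.startswith("#"):
            key, _, val = line[1:].partition("=")
            meta[key.strip()] = val.strip()
        elif line.strip():
            tokens.append(dict(zip(FIELDS, line.split("\t"))))
    return meta, tokens

meta, tokens = parse_sentence(CONLLU)
print(meta["text"], "->", [t["lemma"] for t in tokens])
```

Keeping extensions inside the MISC column (or in comment lines) is what lets such data remain readable by standard CoNLL-U tooling.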
Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of documents crawled from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files, each containing many documents written in a wide variety of languages. Even though each document has a metadata block associated with it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can easily be reapplied to any kind of heterogeneous corpus and parameterised for a wide range of infrastructures. We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.
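The shard-parallel classification idea can be sketched with a toy language identifier. A real pipeline of this kind would use a trained model (e.g. fastText language identification); the stopword heuristic, shard contents, and label set below are purely illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy stand-in for a real language identifier.
STOPWORDS = {
    "en": {"the", "and", "of"},
    "de": {"der", "und", "die"},
}

def guess_language(doc):
    """Pick the language whose stopword list overlaps the document most."""
    words = set(doc.lower().split())
    scores = {lang: len(words & sw) for lang, sw in STOPWORDS.items()}
    return max(scores, key=scores.get)

def classify_shard(docs):
    """One worker handles one plain-text shard of many documents."""
    return [(guess_language(d), d) for d in docs]

# Two invented shards standing in for Common Crawl's plain text files.
shards = [
    ["the cat and the dog", "der Hund und die Katze"],
    ["the end of the story"],
]

with ThreadPoolExecutor(max_workers=2) as pool:
    results = [pair for shard_out in pool.map(classify_shard, shards)
               for pair in shard_out]

print(results)
```

Threads suit the I/O-bound setting the paper targets: workers spend most of their time reading and writing shards, so classification can overlap with disk access.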
Nearly all of the very large corpora of English are “static”, which allows a wide range of data, such as collocates, to be pre-processed once. The challenge comes with large “dynamic” corpora, which are updated regularly and for which such preprocessing is much more difficult. This paper provides an overview of the NOW corpus (News on the Web), which is currently 8.2 billion words in size and grows by about 170 million words each month. We discuss the architecture of NOW and provide many examples showing how data from NOW can (uniquely) be extracted to examine a wide range of ongoing changes in English.
As the Web ought to be considered a series of sources rather than a source in itself, a problem facing corpus construction resides in meta-information and categorization. In addition, we need focused data to shed light on particular subfields of the digital public sphere. Blogs are relevant to that end, especially if the resulting web texts can be extracted along with their metadata and made available in coherent and clearly describable collections.
This paper reports on the latest developments of the European Reference Corpus EuReCo and the German Reference Corpus in relation to three of the most important CMLC topics: interoperability, collaboration on corpus infrastructure building, and legal issues. Concerning interoperability, we present new ways to access DeReKo via KorAP at the API and plugin levels. In addition, we report on advances in the EuReCo and ICC initiatives regarding the provision of comparable corpora, and on recent problems with license acquisition and the solutions we propose, using an indemnification clause and model licenses that include scientific exploitation.
Contents:
1. Johannes Graën, Tannon Kew, Anastassia Shaitarova and Martin Volk, "Modelling Large Parallel Corpora", pp. 1-8
2. Pedro Javier Ortiz Suárez, Benoît Sagot and Laurent Romary, "Asynchronous Pipelines for Processing Huge Corpora on Medium to Low Resource Infrastructures", pp. 9-16
3. Vladimír Benko, "Deduplication in Large Web Corpora", pp. 17-22
4. Mark Davies, "The best of both worlds: Multi-billion word “dynamic” corpora", pp. 23-28
5. Adrien Barbaresi, "On the need for domain-focused web corpora", pp. 29-32
6. Marc Kupietz, Eliza Margaretha, Nils Diewald, Harald Lüngen and Peter Fankhauser, "What's New in EuReCo? Interoperability, Comparable Corpora, Licensing", pp. 33-39
Relativpronomenselektion und grammatische Variation: 'was' vs. 'das' in attributiven Relativsätzen
(2019)
Distributional models of word use constitute an indispensable tool in corpus-based lexicological research for discovering paradigmatic relations and syntagmatic patterns (Belica et al. 2010). Recently, word embeddings (Mikolov et al. 2013) have revived the field by making it possible to construct and analyze distributional models on very large corpora. This is accomplished by reducing the very high dimensionality of word co-occurrence contexts, the size of the vocabulary, to a small number of dimensions, such as 100-200. However, word use and meaning can vary widely along dimensions such as domain, register, and time, and word embeddings tend to represent only the most prevalent meaning. In this paper we thus construct domain-specific word embeddings to allow for systematically analyzing variations in word use. Moreover, we also demonstrate how to reconstruct domain-specific co-occurrence contexts from the dense word embeddings.
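The dimensionality-reduction step and the reconstruction of co-occurrence contexts can be illustrated with a truncated SVD on a toy co-occurrence matrix. The counts below are invented, and SVD here is only a stand-in for whatever embedding method the paper actually uses; the point is that dense vectors plus the context basis suffice to approximate the original co-occurrence rows.

```python
import numpy as np

# Invented toy co-occurrence counts: rows = target words, columns = contexts.
words = ["bank", "money", "river"]
contexts = ["loan", "water", "credit", "shore"]
X = np.array([[4.0, 1.0, 3.0, 1.0],
              [5.0, 0.0, 4.0, 0.0],
              [0.0, 5.0, 0.0, 4.0]])

# Truncated SVD: dense k-dimensional embeddings = U_k * S_k.
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
embeddings = U[:, :k] * S[:k]          # one dense row per word

# Reconstruct approximate co-occurrence contexts from the dense vectors.
X_hat = embeddings @ Vt[:k, :]

for w, row in zip(words, X_hat):
    print(w, "->", contexts[int(np.argmax(row))])
```

Training one such model per domain-specific subcorpus, and comparing a word's reconstructed context rows across models, is one way to make variation in word use visible.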
A very powerful instrument for investigating words and the relations between them is the analysis of typical usage contexts, regardless of whether the evidence points to the constitution of meaning, its change, or confusion of words, three aspects that all play a role in characterizing paronymy. Even though mature methods are available for identifying typical usage patterns, it should be borne in mind when comparing the analyses that they are subject to various influencing factors. Besides the underlying data and the definition and handling of the relevant context, the following discussion focuses in particular on the role that different subsets of an inflectional paradigm can play when a lemma, as the totality of that paradigm, has been chosen as the linguistic unit of reference for an investigation. The line of argument is illustrated with an exemplary examination of paronym candidates.
Einleitung
(2019)
Digital corpora have fundamentally changed the conditions under which researchers study language phenomena. Extensive collections of written and spoken language now form the empirical basis for mathematically precise generalizations about the segments of reality to be described. The data material is highly complex, consisting of raw texts plus various layers of linguistic annotation as well as extra-linguistic metadata. As an immediate consequence, designing adequate query solutions poses a considerable challenge. This book therefore presents a database-based approach that addresses the problems of multidimensional corpus queries. Starting from a characterization of the requirements of linguistically motivated searches, storage and query strategies for multiply annotated corpora are developed and evaluated against a catalogue of linguistic requirements. A particular focus is the introduction of problem-oriented segmentation and parallelization.
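As an illustration of the database-based idea (the schema below is invented for this sketch and is not the design developed in the book), a token table combined with a generic annotation-layer table already supports simple multidimensional queries over multiply annotated text:

```python
import sqlite3

# Illustrative schema: one row per token, plus a separate table
# holding any number of annotation layers per token.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE token (
        id   INTEGER PRIMARY KEY,
        pos  INTEGER NOT NULL,   -- position in the text
        form TEXT NOT NULL
    );
    CREATE TABLE annotation (
        token_id INTEGER REFERENCES token(id),
        layer    TEXT NOT NULL,  -- e.g. 'pos', 'lemma', 'ner'
        value    TEXT NOT NULL
    );
""")
con.executemany("INSERT INTO token VALUES (?, ?, ?)",
                [(1, 0, "Hunde"), (2, 1, "bellen")])
con.executemany("INSERT INTO annotation VALUES (?, ?, ?)", [
    (1, "pos", "NN"), (1, "lemma", "Hund"),
    (2, "pos", "VVFIN"), (2, "lemma", "bellen"),
])

# A query across layers: token forms whose 'pos' annotation is 'NN'.
hits = con.execute("""
    SELECT t.form FROM token t
    JOIN annotation a ON a.token_id = t.id
    WHERE a.layer = 'pos' AND a.value = 'NN'
""").fetchall()
print(hits)  # [('Hunde',)]
```

Keeping each annotation layer as rows of one long table, rather than as columns, means new layers can be added without schema changes, at the price of joins whose cost motivates exactly the kind of segmentation and parallelization strategies the book discusses.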