Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus
(2021)
Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these corpora are either available only for English or not available to the general public due to copyright issues. There are some freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and ParaCrawl corpora, but they suffer from quality issues, especially for low-resource languages, and recreating or updating them is very complex. In this work, we reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented, and we use it to create a corpus similar to OSCAR but larger and based on more recent data. Also, unlike OSCAR, the metadata information is kept at the document level. We release our pipeline under an open-source license and publish the corpus under a research-only license.
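The modular, parameterizable design described in the abstract can be illustrated with a small sketch. This is a hypothetical toy pipeline, not the actual Ungoliant implementation (which is written in Rust); the stage names, the toy language tagger, and the threshold are invented for illustration.

```python
# Hypothetical sketch of a modular corpus-cleaning pipeline; a real
# pipeline would use proper language identification (e.g. a fastText model).
from dataclasses import dataclass, field
from typing import Callable, Iterable, Optional

@dataclass
class Document:
    text: str
    # Document-level metadata, as in the corpus described above.
    metadata: dict = field(default_factory=dict)

# A pipeline stage takes a Document and returns it (possibly annotated)
# or None to discard it.
Stage = Callable[[Document], Optional[Document]]

def min_length_filter(n: int) -> Stage:
    """Discard documents shorter than n characters."""
    def stage(doc: Document) -> Optional[Document]:
        return doc if len(doc.text) >= n else None
    return stage

def tag_language(doc: Document) -> Optional[Document]:
    """Toy language tagger, for illustration only."""
    doc.metadata["lang"] = "en" if " the " in f" {doc.text} " else "unknown"
    return doc

def run_pipeline(docs: Iterable[Document], stages: list[Stage]) -> list[Document]:
    out = []
    for doc in docs:
        for stage in stages:
            doc = stage(doc)
            if doc is None:
                break
        else:
            out.append(doc)
    return out

docs = [Document("Short."),
        Document("This is the kind of sentence a web corpus keeps.")]
kept = run_pipeline(docs, [min_length_filter(20), tag_language])
print(len(kept), kept[0].metadata["lang"])  # 1 en
```

Because each stage is an independent function, stages can be swapped, reordered, or parameterized without touching the rest of the pipeline, which is the design property the abstract emphasizes.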
This paper explores how attitudes affect the seemingly objective process of counting speakers of varieties using the example of Low German, Germany’s sole regional language. The initial focus is on the basic taxonomy of classifying a variety as a language or a dialect. Three representative surveys then provide data for the analysis: the Germany Survey 2008, the Northern Germany Survey 2016, and the Germany Survey 2017. The results of these surveys indicate that there is no consensus concerning the evaluation of Low German’s status and that attitudes towards Low German are related to, for example, proficiency in the language. These attitudes are shown to matter when counting speakers of Low German and investigating the status it has been accorded.
In this paper we examine the composition and interactional deployment of suspended assessments in ordinary German conversation. We define suspended assessments as lexicosyntactically incomplete assessing TCUs that share a distinct cluster of prosodic-phonetic features which auditorily makes them come off as 'left hanging' rather than cut-off (e.g., Schegloff/Jefferson/Sacks 1977; Jasperson 2002) or trailing-off (e.g., Local/Kelly 1986; Walker 2012). Using CA/IL methodology (Couper-Kuhlen/Selting 2018) and drawing on a large body of video-recorded face-to-face conversations, we highlight the verbal, vocal and bodily-visual resources participants use to render such unfinished assessing TCUs recognizably incomplete and identify six recurrent usage types. Overall, the suspension of assessing TCUs appears to either serve as a practice for circumventing the production of assessments that are interactionally inapposite, or as a practice for coping with local contingencies that render the very doing of an assessment problematic for the speaker. Data are in German with English translations.
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose compositional meanings are hard to trace back to individual word semantics, there is still ample scope for improvement in computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a list of Wikipedia idioms, achieving state-of-the-art accuracy.
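One of the count-based collocation measures of the kind such an approach combines is pointwise mutual information (PMI), which rewards word pairs that co-occur more often than chance. The following is a minimal illustrative sketch; the toy corpus and the specific comparison are invented here and do not reproduce the paper's feature set.

```python
# PMI as an example of a count-based collocation (fixedness) measure.
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, w1, w2):
    """Pointwise mutual information of an adjacent word pair."""
    n_bigrams = sum(bigram_counts.values())
    n_tokens = sum(unigram_counts.values())
    p_xy = bigram_counts[(w1, w2)] / n_bigrams
    p_x = unigram_counts[w1] / n_tokens
    p_y = unigram_counts[w2] / n_tokens
    return math.log2(p_xy / (p_x * p_y))

tokens = "kick the bucket and kick the ball and fill the bucket".split()
unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))

# The recurrent pair scores higher than an incidental one, reflecting
# the formal fixedness that idiomatic constructions tend to exhibit.
print(pmi(bigrams, unigrams, "kick", "the") > pmi(bigrams, unigrams, "and", "kick"))
```

In a full system, scores like this would be computed over a large corpus and fed, together with predictive collocation and context-(dis)similarity features, into a classifier such as a Random Forest.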
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms issued for KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset consisting of random ngram samples. The classification is done by implementing a rather shallow, statistics-based pipeline without intensive preprocessing or examination at the morphosyntactic and semantic levels. We describe the overall approach and the differences between the original dataset and the dataset of the KONVENS task, provide experimental classification results, and analyse the individual contributions of our feature sets.
This poster summarizes the results of the CLARIAH-DE Work Package 3: Skills Training and Promotion of Junior Researchers.
For a research field that is characterised by rapid technical development, CLARIAH-DE has to include, as part of its objective, the promotion of the data literacy necessary for the efficient use of this digital research infrastructure. To develop, consolidate and refine a common programme in this area, work package 3 set itself the following sub-goals:
- Consolidation of the activities from the previous projects into a joint service
- Cataloguing and reflecting on the methods and tools used in the research field, with the aim of identifying remaining gaps
- Skills training for, individual support of, and promotion of junior researchers
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality and curation criteria for annotated audiovisual language data. Building on resources developed earlier by the participating institutions, QUEST also develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are a questionnaire and automatic quality assurance for depositors of language resources, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we consider three main data maturity levels in order to decide on a suitable level of strictness for the quality assurance. This division was introduced so that a single set of ideal quality criteria does not prevent researchers from depositing or even assessing their (legacy) data. The tools described in the paper are work in progress and are expected to be released by the end of the QUEST project in 2022.
CMDI Explorer
(2021)
We present CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
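The kind of traversal such a tool performs can be sketched against a simplified CMDI record. The sample record below is invented and, for brevity, omits the CMDI namespaces and header that real records carry; the element names (ResourceProxyList, ResourceProxy, ResourceType, ResourceRef) follow the CMDI resource-proxy structure.

```python
# Sketch: extracting resource links from a simplified, namespace-free
# CMDI-like record. Real CMDI records are namespaced and richer than this.
import xml.etree.ElementTree as ET

CMDI_SAMPLE = """
<CMD>
  <Resources>
    <ResourceProxyList>
      <ResourceProxy id="r1">
        <ResourceType>Resource</ResourceType>
        <ResourceRef>https://example.org/corpus/item1.xml</ResourceRef>
      </ResourceProxy>
      <ResourceProxy id="r2">
        <ResourceType>Metadata</ResourceType>
        <ResourceRef>https://example.org/collection/sub.cmdi</ResourceRef>
      </ResourceProxy>
    </ResourceProxyList>
  </Resources>
</CMD>
"""

def resource_refs(cmdi_xml: str, resource_type: str = "Resource") -> list[str]:
    """Collect ResourceRef URLs of the given ResourceType from a record."""
    root = ET.fromstring(cmdi_xml)
    refs = []
    for proxy in root.iter("ResourceProxy"):
        if proxy.findtext("ResourceType") == resource_type:
            refs.append(proxy.findtext("ResourceRef"))
    return refs

print(resource_refs(CMDI_SAMPLE))  # ['https://example.org/corpus/item1.xml']
```

Collection items extracted this way could then be handed to downstream services, which is the kind of forwarding (e.g. to the Switchboard) that the abstract describes.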
Signposts for CLARIN
(2021)
This paper presents an implementation of CMDI-based signposts and its use. Arnold, Fisseni et al. (2020) present signposts as a solution to challenges in the long-term preservation of corpora. Though signposts are applicable to digital resources in general, we focus on corpora, especially those that are continuously extended or subject to modification (e.g., due to legal injunctions), that may overlap with respect to constituents, and that may be migrated to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure, notably virtual collections, and document the design of the CMDI profile.
Playing videogames is a popular social activity; people play videogames in different places, on different media, in different situations, alone or with partners, online or offline. Unsurprisingly, they thereby share space (physically or virtually) with other playing or non-playing people. Across different contexts and settings, the special issue investigates how non-players become participants in the gaming interaction and how players and non-players co-construct presence. The introduction provides a problem-related context for the individual contributions and then briefly presents them.