Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus
(2021)
Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these corpora are either available only for English or not available to the general public due to copyright issues. There are some freely available multilingual corpora for training deep learning NLP models, such as the OSCAR and Paracrawl corpora, but they suffer from quality issues, especially for low-resource languages, and recreating or updating them is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on more recent data. Unlike OSCAR, metadata is provided at the document level. We release our pipeline under an open-source license and publish the corpus under a research-only license.
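The abstract above describes a modular, parameterizable filtering pipeline for building a web corpus with document-level metadata. A minimal sketch of that general idea might look like the following; the stage names, thresholds, and the toy language-identification heuristic are illustrative assumptions, not the actual Ungoliant implementation:

```python
# Hypothetical sketch of a modular document-filtering pipeline for web
# corpus construction. Stage names and thresholds are illustrative only;
# the real Ungoliant pipeline is written in Rust and far more elaborate.
from dataclasses import dataclass, field
from typing import Callable, Iterable, Iterator, Optional

@dataclass
class Document:
    text: str
    metadata: dict = field(default_factory=dict)  # document-level metadata

# A stage takes a document and returns it (possibly annotated),
# or None to drop it from the corpus.
Stage = Callable[[Document], Optional[Document]]

def min_length(n: int) -> Stage:
    """Parameterizable stage: drop documents shorter than n characters."""
    def stage(doc: Document) -> Optional[Document]:
        return doc if len(doc.text) >= n else None
    return stage

def tag_language(doc: Document) -> Optional[Document]:
    """Toy language identification; a real pipeline would use a trained
    language-ID model (e.g. fastText) instead of this heuristic."""
    doc.metadata["lang"] = "en" if " the " in f" {doc.text} " else "unknown"
    return doc

def run_pipeline(docs: Iterable[Document], stages: list) -> Iterator[Document]:
    """Run each document through every stage; a stage returning None drops it."""
    for doc in docs:
        for stage in stages:
            doc = stage(doc)
            if doc is None:
                break
        else:
            yield doc

docs = [Document("the quick brown fox jumps over the lazy dog"),
        Document("too short")]
kept = list(run_pipeline(docs, [min_length(20), tag_language]))
```

Because each stage is an independent function, stages can be reordered, replaced, or parameterized without touching the driver loop, which is the kind of modularity the abstract claims over the original goclassy pipeline.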
This paper explores how attitudes affect the seemingly objective process of counting speakers of varieties using the example of Low German, Germany’s sole regional language. The initial focus is on the basic taxonomy of classifying a variety as a language or a dialect. Three representative surveys then provide data for the analysis: the Germany Survey 2008, the Northern Germany Survey 2016, and the Germany Survey 2017. The results of these surveys indicate that there is no consensus concerning the evaluation of Low German’s status and that attitudes towards Low German are related to, for example, proficiency in the language. These attitudes are shown to matter when counting speakers of Low German and investigating the status it has been accorded.
Public discourse about language, such as that conducted in the media, is typically driven by a language-critical stance. To what extent this published opinion actually reflects the majority opinion of speakers is very much an open question. In this paper, we report on a recent survey of language attitudes in Germany. We show that the wording of a question strongly influences the results, and we report which linguistic changes respondents say they have perceived in recent times.
To date, there are no accurate, representative statistics on which languages are spoken in Germany. Although various surveys ask about native languages or languages spoken at home, due to several shortcomings in survey design the results of the existing surveys do not adequately reflect the linguistic reality of the population living in Germany. Drawing on three surveys, this paper shows that the very instruments used to collect language data are themselves shaped by language attitudes, which severely limits the validity of the results. These shortcomings apply to language statistics for the entire population of Germany, children and adolescents included.
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose compositional meanings are hard to trace back to individual word semantics, there is still ample scope for improvement in computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a list of Wikipedia idioms, achieving state-of-the-art accuracy.
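The approach above, classifying candidate expressions as idiomatic or literal with a Random Forest over graded feature intensities, can be sketched as follows. The three feature names mirror the characteristics named in the abstract, but the feature values here are synthetic toy data, not the paper's actual measures or corpus:

```python
# Illustrative sketch: Random Forest classification of candidate expressions
# as idiomatic (1) vs. literal (0) from hand-crafted feature intensities.
# Feature values are synthetic toy data standing in for the real measures
# (collocation strength, formal fixedness, context dissimilarity).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Idiomatic candidates tend toward high non-compositionality/fixedness scores,
# literal ones toward low scores (a deliberately easy synthetic separation).
n = 200
idiomatic = rng.normal(loc=[0.8, 0.7, 0.6], scale=0.15, size=(n, 3))
literal = rng.normal(loc=[0.3, 0.2, 0.3], scale=0.15, size=(n, 3))
X = np.vstack([idiomatic, literal])
y = np.array([1] * n + [0] * n)  # 1 = idiomatic, 0 = literal

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Feature importances expose each measure's individual contribution,
# the kind of analysis the abstract describes.
importances = dict(zip(["collocation", "fixedness", "context"],
                       clf.feature_importances_))
```

Shifting the decision threshold of `clf.predict_proba` up or down is one simple way to explore the recall/precision trade-off mentioned in the abstract.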
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms at KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset of random n-gram samples. Classification is performed by a rather shallow, statistics-based pipeline without intensive preprocessing or morphosyntactic and semantic analysis. We describe the overall approach and the differences between the original dataset and the KONVENS task dataset, provide experimental classification results, and analyse the individual contributions of our feature sets.