Datensatz Schwache Maskulina
(2023)
The dataset contains a collection of 1,156 nouns (with few exceptions masculine) that are attested at least once with the "weak" ending -(e)n immediately following an occurrence of the accusative or dative form of the indefinite article (einen / einem) in the Korpusgrammatik study corpus (Bubenhofer et al. 2014), based on the German Reference Corpus DeReKo (Kupietz et al. 2010, 2018), Release 2017-II (e.g. einen Aktivisten, einem Autoren). Details on data collection can be found in Weber & Hansen (2023).
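A search of this kind can be sketched with a simple regular expression over plain text. The pattern and function name below are illustrative only and do not reproduce the actual DeReKo query or the manual filtering described above.

```python
import re

# Illustrative sketch: find candidate "weak" noun forms that directly follow
# the indefinite article in the accusative (einen) or dative (einem) and end
# in -en or -n. A real pipeline would run on DeReKo query results, not raw text.
WEAK_CANDIDATE = re.compile(r"\b(einen|einem)\s+([A-ZÄÖÜ][a-zäöüß]+(?:en|n))\b")

def weak_candidates(text):
    """Return (article, noun) pairs for potential weak masculine forms."""
    return WEAK_CANDIDATE.findall(text)

sample = "Sie trafen einen Aktivisten und sprachen mit einem Autoren."
print(weak_candidates(sample))
```

Such a pattern over-generates heavily (any capitalized word in -n after einen/einem matches), which is why the dataset description mentions manual inspection of the hits.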
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms at KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset of random n-gram samples. Classification is done with a rather shallow, statistics-based pipeline that requires no intensive preprocessing or in-depth morphosyntactic and semantic analysis. We describe the overall approach and the differences between the original dataset and the KONVENS task dataset, provide experimental classification results, and analyse the individual contributions of our feature sets.
The dataset contains 10,113 corpus attestations of constructions in which a noun occurs with a dass-clause or a zu-infinitive (das Versprechen, dass man sich irgendwann wiedersieht vs. das Versprechen, sich irgendwann wiederzusehen).
The data were collected from:
1. the Korpusgrammatik study corpus (Bubenhofer et al. 2014), based on the German Reference Corpus DeReKo (Kupietz et al. 2010, 2018), Release 2017-II.
2. the "Forum" subcorpus of the DECOW16B web corpus (Schäfer & Bildhauer 2012).
We investigate the optional omission of the infinitival marker in a Swedish future tense construction. During the last two decades the frequency of omission has been rapidly increasing, and this process has received considerable attention in the literature. We test whether the knowledge which has been accumulated can yield accurate predictions of language variation and change. We extracted all occurrences of the construction from a very large collection of corpora. The dataset was automatically annotated with language-internal predictors which have previously been shown or hypothesized to affect the variation. We trained several models in order to make two kinds of predictions: whether the marker will be omitted in a specific utterance and how large the proportion of omissions will be for a given time period. For most of the approaches we tried, we were not able to achieve a better-than-baseline performance. The only exception was predicting the proportion of omissions using autoregressive integrated moving average models for one-step-ahead forecast, and in this case time was the only predictor that mattered. Our data suggest that most of the language-internal predictors do have some effect on the variation, but the effect is not strong enough to yield reliable predictions.
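The one-step-ahead forecasting setup can be illustrated with a toy sketch: an ARIMA(1,1,0)-style model whose single AR coefficient is fitted by ordinary least squares on the differenced series. The proportions below are made up, and the study itself used properly fitted ARIMA models; this only shows the mechanics of a one-step forecast.

```python
def arima_110_one_step(series):
    """One-step-ahead forecast from a toy ARIMA(1,1,0) model:
    difference the series, fit the AR(1) coefficient by least squares
    (no intercept), and project the next value."""
    diffs = [b - a for a, b in zip(series, series[1:])]
    num = sum(x * y for x, y in zip(diffs, diffs[1:]))
    den = sum(x * x for x in diffs[:-1])
    phi = num / den if den else 0.0          # AR(1) coefficient on the diffs
    return series[-1] + phi * diffs[-1]      # undo the differencing

# Hypothetical yearly proportions of omitted infinitival markers.
props = [0.10, 0.13, 0.17, 0.22, 0.28]
print(round(arima_110_one_step(props), 3))
```

In this setup time alone drives the forecast, which mirrors the paper's finding that for the proportion-forecasting task no language-internal predictor added value.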
So far, there have been few descriptions of structures capable of storing lexicographic data, ISO 24613:2008 being one of the latest. Another is by Spohr (2012), who designs a multifunctional lexical resource able to store data of different dictionary types in a user-oriented way. Technically, his design is based on a hierarchical XML/OWL (eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route and describes a model based on entities and the relations between them, implemented with MySQL, a relational database management system in which data are stored in tables and queried with SQL (Structured Query Language). The model was developed in the context of the project "Scientific eLexicography for Africa", and the resulting lexicographic database will be implemented with MySQL. The principles of the ISO model and of Spohr's model are adhered to, with one major difference in the implementation strategy: we do not place the lemma at the centre of attention but the sense description; all other elements, including the lemma, depend on the sense description. This article also describes the lexicographic data sets contained and how they have been collected from different sources. As our aim is to compile several prototypical internet dictionaries (a monolingual Northern Sotho dictionary, a bilingual learners' Xhosa–English dictionary and a bilingual Zulu–English dictionary), we describe the necessary microstructural elements for each of them and the principles we adhere to when designing different ways of accessing them. We plan to make the model and the (empty) database, with all graphical user interfaces that have been developed, freely available by mid-2015.
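The sense-centred design principle can be sketched as a small relational schema. The table and column names below are hypothetical, not the project's actual schema, and SQLite stands in for MySQL so the example is self-contained.

```python
import sqlite3

# Sketch of a sense-centred schema: the sense description is the central
# entity, and lemmas (like all other elements) reference it via a foreign
# key, rather than the lemma being the hub of the entry.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sense (
    sense_id   INTEGER PRIMARY KEY,
    definition TEXT NOT NULL
);
CREATE TABLE lemma (
    lemma_id INTEGER PRIMARY KEY,
    form     TEXT NOT NULL,
    language TEXT NOT NULL,
    sense_id INTEGER NOT NULL REFERENCES sense(sense_id)
);
""")
con.execute("INSERT INTO sense VALUES (1, 'domestic animal kept as a pet')")
con.execute("INSERT INTO lemma VALUES (1, 'inja', 'zu', 1)")
con.execute("INSERT INTO lemma VALUES (2, 'dog', 'en', 1)")
rows = con.execute(
    "SELECT l.form FROM lemma l JOIN sense s ON l.sense_id = s.sense_id "
    "ORDER BY l.lemma_id"
).fetchall()
print(rows)
```

Because lemmas hang off senses rather than the reverse, one sense description can serve several language-specific forms, which is what makes the design multifunctional across the planned monolingual and bilingual dictionaries.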
The QUEST (QUality ESTablished) project aims at ensuring the reusability of audio-visual datasets (Wamprechtshammer et al., 2022) by devising quality criteria and curation processes. RefCo (Reference Corpora) is an initiative within QUEST, in collaboration with DoReCo (Documentation Reference Corpus, Paschen et al. (2020)), focusing on language documentation projects. Previously, Aznar and Seifart (2020) introduced a set of quality criteria dedicated to documenting fieldwork corpora. Based on these criteria, we establish a semi-automatic review process for existing and work-in-progress corpora, in particular for language documentation. The goal is to improve the quality of a corpus by increasing its reusability. A central part of this process is a template for machine-readable corpus documentation and automatic data verification based on this documentation. In addition to the documentation and automatic verification, the process involves a human review and potentially results in a RefCo certification of the corpus. For each of these steps, we provide guidelines and manuals. We describe the evaluation process in detail, highlight the current limits of automatic evaluation, and show how the manual review is organized accordingly.
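The idea of verifying a corpus against machine-readable documentation can be sketched as follows. The required fields and the checks are invented for illustration; they are not RefCo's actual template.

```python
# Hypothetical required fields for a machine-readable corpus description;
# a real template would be far richer (sessions, tiers, speakers, formats).
REQUIRED_FIELDS = {"title", "languages", "annotation_tiers", "license"}

def verify_documentation(doc):
    """Return a list of human-readable problems; an empty list means the
    automatic check passed and the corpus can go on to human review."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - doc.keys())]
    if not doc.get("annotation_tiers"):
        problems.append("no annotation tiers documented")
    return problems

doc = {"title": "Example corpus", "languages": ["und"], "annotation_tiers": []}
print(verify_documentation(doc))
```

The point of such a check is that it is cheap to rerun on every new version of a corpus, leaving only the genuinely judgement-based questions to the human reviewers.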
Metadata provides important information relevant both to finding and to understanding corpus data. Meaningful linguistic data require both reasonable annotations and documentation of these annotations. This documentation is part of a dataset's metadata. While corpus documentation has often been provided in the form of accompanying publications, machine-readable metadata that both contains the bibliographic information and documents the corpus data has many advantages. Metadata standards allow for the development of common tools and interfaces. In this paper I add a new perspective from an archive's point of view, look at the metadata provided for four learner corpora, and discuss the suitability of established standards for machine-readable metadata. I am aware that there is ongoing work towards metadata standards for learner corpora. However, I would like to keep the discussion going and add another point of view: increasing the findability and reusability of learner corpora in an archiving context.
We address the task of distinguishing implicitly abusive sentences about identity groups ("Muslims contaminate our planet") from other group-related negative polar sentences ("Muslims despise terrorism"). Implicitly abusive language consists of utterances whose abusiveness is not conveyed by abusive words (e.g. "bimbo" or "scum"). So far, the detection of such utterances could not be properly addressed, since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of solving implicit abuse by separately addressing its different subtypes, we present a new, focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For this task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification and non-conformist views. The approach generalizes across different identity groups and languages.
CLARIN stands for “Common Language Resources and Technology Infrastructure”. In 2012 CLARIN ERIC was established as a legal entity with the mission to create and maintain a digital infrastructure to support the sharing, use, and sustainability of language data (in written, spoken, or multimodal form) available through repositories from all over Europe, in support of research in the humanities and social sciences and beyond. Since 2016 CLARIN has had the status of Landmark research infrastructure and currently it provides easy and sustainable access to digital language data and also offers advanced tools to discover, explore, exploit, annotate, analyse, or combine such datasets, wherever they are located. This is enabled through a networked federation of centres: language data repositories, service centres, and knowledge centres with single sign-on access for all members of the academic community in all participating countries. In addition, CLARIN offers open access facilities for other interested communities of use, both inside and outside of academia. Tools and data from different centres are interoperable, so that data collections can be combined and tools from different sources can be chained to perform operations at different levels of complexity. The strategic agenda adopted by CLARIN and the activities undertaken are rooted in a strong commitment to the Open Science paradigm and the FAIR data principles. This also enables CLARIN to express its added value for the European Research Area and to act as a key driver of innovation and contributor to the increasing number of industry programmes running on data-driven processes and the digitalization of society at large.
Contents:
1. Vasile Pais, Maria Mitrofan, Verginica Barbu Mititelu, Elena Irimia, Roxana Micu and Carol Luca Gasan: Challenges in Creating a Representative Corpus of Romanian Micro-Blogging Text. Pp. 1-7
2. Modest von Korff: Exhaustive Indexing of PubMed Records with Medical Subject Headings. Pp. 8-15
3. Luca Brigada Villa: UDeasy: a Tool for Querying Treebanks in CoNLL-U Format. Pp. 16-19
4. Nils Diewald: Matrix and Double-Array Representations for Efficient Finite State Tokenization. Pp. 20-26
5. Peter Fankhauser and Marc Kupietz: Count-Based and Predictive Language Models for Exploring DeReKo. Pp. 27-31
6. Hanno Biber: “The word expired when that world awoke.” New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times. Pp. 32-35
In this paper we investigate the coverage of the two knowledge sources WordNet and Wikipedia for the task of bridging resolution. We report on an annotation experiment which yielded pairs of bridging anaphors and their antecedents in spoken multi-party dialog. Manual inspection of the two knowledge sources showed that, with some interesting exceptions, Wikipedia is superior to WordNet when it comes to the coverage of information necessary to resolve the bridging anaphors in our data set. We further describe a simple procedure for the automatic extraction of the required knowledge from Wikipedia by means of an API, and discuss some of the implications of the procedure’s performance.
In this paper, we present a suite of flexible UIMA-based components for information retrieval research which have been successfully used (and re-used) in several projects in different application domains. Implementing the whole system as UIMA components is beneficial for configuration management, component reuse, implementation costs, analysis and visualization.
This paper introduces LRTwiki, an improved variant of the Likelihood Ratio Test (LRT). The central idea of LRTwiki is to employ a comprehensive domain specific knowledge source as additional “on-topic” data sets, and to modify the calculation of the LRT algorithm to take advantage of this new information. The knowledge source is created on the basis of Wikipedia articles. We evaluate on the two related tasks product feature extraction and keyphrase extraction, and find LRTwiki to yield a significant improvement over the original LRT in both tasks.
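The paper's Wikipedia-based modification is not reproduced here, but the underlying likelihood ratio test (Dunning's log-likelihood statistic over a 2x2 contingency table) can be sketched as:

```python
import math

def llr(k11, k12, k21, k22):
    """Dunning's log-likelihood ratio (G^2) for a 2x2 contingency table:
    k11 = joint count, k12/k21 = counts with only one event, k22 = neither."""
    def h(*ks):
        # sum of k*log(k/n); zero counts contribute nothing (0*log 0 := 0)
        n = sum(ks)
        return sum(k * math.log(k / n) for k in ks if k)
    return 2 * (h(k11, k12, k21, k22)
                - h(k11 + k12, k21 + k22)   # row marginals
                - h(k11 + k21, k12 + k22))  # column marginals

# A perfectly independent table scores (numerically) zero ...
print(llr(10, 10, 10, 10))
# ... while a strongly associated one scores high.
print(llr(100, 1, 1, 100))
```

LRTwiki's contribution, per the abstract, is to feed additional Wikipedia-derived "on-topic" counts into this calculation; the statistic itself is the standard one shown above.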
Data sets of publication meta data with manually disambiguated author names play an important role in current author name disambiguation (AND) research. We review the most important data sets used so far, and compare their respective advantages and shortcomings. From the results of this review, we derive a set of general requirements to future AND data sets. These include both trivial requirements, like absence of errors and preservation of author order, and more substantial ones, like full disambiguation and adequate representation of publications with a small number of authors and highly variable author names. On the basis of these requirements, we create and make publicly available a new AND data set, SCAD-zbMATH. Both the quantitative analysis of this data set and the results of our initial AND experiments with a naive baseline algorithm show the SCAD-zbMATH data set to be considerably different from existing ones. We consider it a useful new resource that will challenge the state of the art in AND and benefit the AND research community.
Driven by research funding organizations (e.g. in the EU's Horizon 2020, by the Alliance of German Science Organisations, or in DFG-funded projects), data management is increasingly becoming part of the research landscape. For computational linguistics, however, research data management is also part of the field itself: data modelling and transformation for sustainable data storage belong to the domain of text technology and text linguistics, as does the modelling of the descriptive data accompanying datasets.
The dataset contains 16,604 corpus attestations of noun phrases with genitive and von attributes (die Ideen zahlreicher Kinder, die Ideen von zahlreichen Kindern); the genitive attributes can appear pre- or postnominally (Mannheims Sehenswürdigkeiten, die Sehenswürdigkeiten Mannheims).
Each attestation is annotated for country, decade, and medium. In addition, there is information on the head and/or attribute lemma (e.g. name type, inflection class), the phrase as a whole (e.g. definiteness, case), and the attribute phrase (e.g. case distinction, length). Numerous special cases are also annotated (e.g. genitive with an uninflected adjective as in Gebäck Mannheimer Bäckereien, phrases with an adjectivally inflected attribute noun as in die Ideen Jugendlicher, die Ideen von Jugendlichen).
The dataset contains 409 corpus attestations of noun phrases with embedded genitive attributes that in turn contain an embedded genitive attribute (Petras Nachfolgers Beisein). The attestations are classified according to whether the first embedded noun phrase precedes or follows the head noun of the overall noun phrase (Petras Nachfolgers Beisein vs. Beisein Petras Nachfolgers) and whether the first embedded noun phrase contains an adjective in addition to a genitive (Beisein Petras direkten Nachfolgers). For each attestation, the lemmas of the three nouns are given in their order of embedding. Metadata (country, year) are also included.
The dataset comprises all relevant attestations of the structures listed above from the KoGra study corpus. The queries for the four structure types yielded 15,875 potential hits, of which manual inspection confirmed 409 as actual noun phrases with doubly embedded genitive attributes.
The dataset serves the investigation of special cases of the genitive attribute (Kopf 2021).
Datensatz Nominalphrasen
(2021)
The Nominalphrasen dataset contains attestations of non-pronominal (i.e. full, lexical) noun phrases (NPs) headed by a noun or a nominalization. Each attestation is annotated for a range of linguistically relevant features. In total, the dataset contains 8,137 attestations. After sorting out false hits (see the columns "valide" and "nicht-valide_Begründung"), 7,813 relevant attestations remain. The query targeted the head noun; for details on data collection, see Weber (2021a). The head noun appears in the column "Kopf_der_NP". In some cases the NP consists only of the head noun, but in most cases it extends beyond it; it then covers part of the preceding context (column "Satzkontext_vor_Beleg") and/or the following context ("Satzkontext_nach_Beleg"). The dataset serves the investigation of the syntactic functions of NPs (Weber 2021a) and of determination in the NP (Weber 2021b).
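Working with such a dataset might look like the following sketch. Only the column names come from the description above; the delimiter, the ja/nein values in the "valide" column, and the toy rows are assumptions for illustration.

```python
import csv
import io

# Toy rows mimicking the dataset's documented columns; the real file's full
# column inventory, values, and encoding are not reproduced here.
data = """valide;Kopf_der_NP;Satzkontext_vor_Beleg;Satzkontext_nach_Beleg
ja;Haus;das große;am Stadtrand
nein;Laufen;;
"""

def valid_records(text):
    """Keep only the rows marked as valid attestations (column 'valide')."""
    reader = csv.DictReader(io.StringIO(text), delimiter=";")
    return [row for row in reader if row["valide"] == "ja"]

print([r["Kopf_der_NP"] for r in valid_records(data)])
```

Filtering on the validity column first, as sketched here, reproduces the step from 8,137 raw hits down to the 7,813 relevant attestations described above.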
Implicitly abusive language – What does it actually look like and why are we not getting there?
(2021)
Abusive language detection is an emerging field in natural language processing which has recently received a large amount of attention. Still, the success of automatic detection is limited. In particular, the detection of implicitly abusive language, i.e. abusive language that is not conveyed by abusive words (e.g. dumbass or scum), does not work well. In this position paper, we explain why existing datasets make learning implicit abuse difficult and what needs to be changed in the design of such datasets. Arguing for a divide-and-conquer strategy, we present a list of subtypes of implicitly abusive language and formulate research tasks and questions for future research.
We examine the task of detecting implicitly abusive comparisons (e.g. “Your hair looks like you have been electrocuted”). Implicitly abusive comparisons are abusive comparisons in which abusive words (e.g. “dumbass” or “scum”) are absent. We detail the process of creating a novel dataset for this task via crowdsourcing that includes several measures to obtain a sufficiently representative and unbiased set of comparisons. We also present classification experiments that include a range of linguistic features that help us better understand the mechanisms underlying abusive comparisons.