The classification of verbs in Levin's (1993) English Verb Classes and Alternations: A Preliminary Investigation, based both on intuitive semantic grouping and on their participation in valence alternations, is often used by the NLP community as evidence of the semantic similarity of verbs (Jing & McKeown 1998; Lapata & Brew 1999; Kohl et al. 1998). In this paper, we compare the Levin classification with the work of the FrameNet project (Fillmore & Baker 2001), where words (not just verbs) are grouped according to the conceptual structures (frames) that underlie them, and their combinatorial patterns are inductively derived from corpus evidence. This means that verbs grouped together in FrameNet (FN) might be semantically similar but have different (or no) alternations, and that verbs which share the same alternation might be represented in two different semantic frames.
In this paper, we report on an effort to develop a gold standard for the intensity ordering of subjective adjectives. Rather than pursue a complete order as produced by paying attention to the mean scores of human ratings only, we take into account to what extent assessors consistently rate pairs of adjectives relative to each other. We show that different available automatic methods for producing polar intensity scores produce results that correlate well with our gold standard, and discuss some conceptual questions surrounding the notion of polar intensity.
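The correlation between automatic intensity scores and a gold-standard ordering, as described above, is typically measured with a rank correlation coefficient. The sketch below computes Kendall's tau over adjective pairs; the adjectives and scores are invented for illustration, not taken from the paper's data.

```python
from itertools import combinations

def kendall_tau(gold, scores):
    """Kendall's tau between a gold intensity ranking and automatic scores.

    gold and scores map each adjective to a rank or score; a pair counts
    as concordant when both orderings agree on its relative intensity.
    """
    concordant = discordant = 0
    for a, b in combinations(gold, 2):
        g = gold[a] - gold[b]
        s = scores[a] - scores[b]
        if g * s > 0:
            concordant += 1
        elif g * s < 0:
            discordant += 1
    n = concordant + discordant
    return (concordant - discordant) / n if n else 0.0

# Toy example: gold ranks (1 = weakest) vs. hypothetical automatic scores.
gold = {"good": 1, "great": 2, "fantastic": 3}
auto = {"good": 0.2, "great": 0.7, "fantastic": 0.9}
print(kendall_tau(gold, auto))  # 1.0: the scores reproduce the gold order
```

Tau ranges from -1 (fully reversed order) to 1 (identical order); pairs that assessors do not rate consistently relative to each other can simply be left out of the comparison.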
In this paper, we describe MLSA, a publicly available multi-layered reference corpus for German-language sentiment analysis. The construction of the corpus is based on the manual annotation of 270 German-language sentences considering three different layers of granularity. The sentence-layer annotation, as the most coarse-grained annotation, focuses on aspects of objectivity, subjectivity and the overall polarity of the respective sentences. Layer 2 is concerned with polarity on the word- and phrase-level, annotating both subjective and factual language. The annotations on Layer 3 focus on the expression-level, denoting frames of private states such as objective and direct speech events. These three layers and their respective annotations are intended to be fully independent of each other. At the same time, it should also be possible to explore and discover interactions that may exist between the different layers. The reliability of the respective annotations was assessed using the average pairwise agreement and Fleiss' multi-rater measures. We believe that MLSA is a beneficial resource for sentiment analysis research, algorithms and applications that focus on the German language.
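The Fleiss multi-rater measure mentioned above can be computed from a table of per-item category counts. Below is a minimal, self-contained sketch; the three-item, four-annotator table is a made-up example, not MLSA data.

```python
def fleiss_kappa(ratings):
    """Fleiss' kappa for a ratings table.

    ratings[i][c] = number of annotators who assigned category c to item i;
    every item must receive the same total number of ratings n.
    """
    N = len(ratings)            # number of items
    n = sum(ratings[0])         # ratings per item
    k = len(ratings[0])         # number of categories
    # Per-item agreement P_i, then the mean observed agreement P_bar.
    p_i = [(sum(c * c for c in row) - n) / (n * (n - 1)) for row in ratings]
    p_bar = sum(p_i) / N
    # Chance agreement P_e from the marginal category proportions.
    p_cat = [sum(row[c] for row in ratings) / (N * n) for c in range(k)]
    p_e = sum(p * p for p in p_cat)
    return (p_bar - p_e) / (1 - p_e)

# Toy table: 3 items, 4 annotators, 2 polarity categories.
table = [[4, 0], [0, 4], [3, 1]]
print(round(fleiss_kappa(table), 3))  # 0.657
```

Average pairwise percentage agreement can be read off the same table: P_bar above is exactly the proportion of agreeing annotator pairs, averaged over items.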
Auf dem Weg zu einer Kartographie: automatische und manuelle Analysen am Beispiel des Korpus ISW ('Towards a cartography: automatic and manual analyses exemplified by the ISW corpus')
(2021)
Recent work suggests that concreteness and imageability play an important role in the meanings of figurative expressions. We investigate this idea in several ways. First, we try to define more precisely the context within which a figurative expression may occur, by parsing a corpus annotated for metaphor. Next, we add both concreteness and imageability as “features” to the parsed metaphor corpus, by marking up words in this corpus using a psycholinguistic database of scores for concreteness and imageability. Finally, we carry out detailed statistical analyses of the augmented version of the original metaphor corpus, cross-matching the features of concreteness and imageability with others in the corpus such as parts of speech and dependency relations, in order to investigate in detail the use of such features in predicting whether a given expression is metaphorical or not.
This paper addresses the task of finding antecedents for locally uninstantiated arguments. To resolve such null instantiations, we develop a weakly supervised approach that investigates and combines a number of linguistically motivated strategies that are inspired by work on semantic role labeling and coreference resolution. The performance of the system is competitive with the current state-of-the-art supervised system.
We provide a unified account of semantic effects observable in attested examples of the German applicative ('be-') construction, e.g. Rollstuhlfahrer Poul Schacksen aus Kopenhagen will den 1997 erschienenen Wegweiser Handiguide Europa fortführen und zusammen mit Movado Berlin berollen ('Wheelchair user Poul Schacksen from Copenhagen wants to continue the guide 'Handiguide Europe', which came out in 1997, and roll Berlin together with Movado.'). We argue that these effects do not come from lexico-semantic operations on 'input' verbs, but are instead the products of a reconciliation procedure in which the meaning of the verb is integrated into the event-structure schema denoted by the applicative construction. We analyze the applicative pattern as an argument-structure construction, in terms of Goldberg (1995). We contrast this approach with that of Brinkmann (1997), in which properties associated with the applicative pattern (e.g. omissibility of the theme argument, holistic interpretation of the goal argument, and planar construal of the location argument) are attributed to general semantico-pragmatic principles. We undermine the generality of the principles as stated, and assert that these properties are instead construction-particular. We further argue that the constructional account provides an elegant model of the valence-creation and valence-augmentation functions of the prefix. We describe the constructional semantics as prototype-based: diverse implications of be-predications, including iteration, transfer, affectedness, intensity and saturation, derive via regular patterns of semantic extension from the topological concept of coverage.
Alternations play a central role in most current theories of verbal argument structure, which are designed primarily to model the syntactic flexibility of verbs. Accordingly, these frameworks take verbs, and their projection properties, to be the sole contributors of thematic content to the clause. Approached from this perspective, the German applicative (or be-prefix) construction has puzzling properties. First, while many applicative verbs have transparent base forms, many, including those coined from nouns, do not. Second, applicative verbs are bound by interpretive and argument-realization conditions which cannot be traced to their base forms, if any. These facts suggest that applicative formation is not appropriately modeled as a lexical rule.
Using corpus data from a diverse array of genres, Michaelis and Ruppenhofer propose a unified solution to these two puzzles within the framework of Construction Grammar. Central to this account is the concept of valence augmentation: argument-structure constructions denote event types, and therefore license valence sets which may properly include those of their lexical fillers. As per Panini's Law, resolution of valence mismatch favors the construction over the verb. Like verbs of transfer and location, the applicative construction has a prototype-based event-structure representation: diverse implications of applicative predications--including iteration, transfer, affectedness, intensity and saturation--are shown to derive via regular patterns of semantic extension from the topological concept of coverage.
We introduce a system that learns the participants of arbitrary given scripts. This system processes data from web experiments, in which each participant can be realized with different expressions. It computes participants by encoding semantic similarity and global structural information into an Integer Linear Program. An evaluation against a gold standard shows that we significantly outperform two informed baselines.
We introduce a method for error detection in automatically annotated text, aimed at supporting the creation of high-quality language resources at affordable cost. Our method combines an unsupervised generative model with human supervision from active learning. We test our approach on in-domain and out-of-domain data in two languages, in AL simulations and in a real world setting. For all settings, the results show that our method is able to detect annotation errors with high precision and high recall.
Catching the common cause: extraction and annotation of causal relations and their participants
(2017)
In this paper, we present a simple, yet effective method for the automatic identification and extraction of causal relations from text, based on a large English-German parallel corpus. The goal of this effort is to create a lexical resource for German causal relations. The resource will consist of a lexicon that describes constructions that trigger causality as well as the participants of the causal event, and will be augmented by a corpus with annotated instances for each entry, that can be used as training data to develop a system for automatic classification of causal relations. Focusing on verbs, our method harvested a set of 100 different lexical triggers of causality, including support verb constructions. At the moment, our corpus includes over 1,000 annotated instances. The lexicon and the annotated data will be made available to the research community.
We present a new resource for German causal language, with annotations in context for verbs, nouns and adpositions. Our dataset includes 4,390 annotated instances for more than 150 different triggers. The annotation scheme distinguishes three different types of causal events (CONSEQUENCE, MOTIVATION, PURPOSE). We also provide annotations for semantic roles, i.e. the cause and the effect of the causal event, as well as the actor and affected party, if present. In the paper, we present inter-annotator agreement scores for our dataset and discuss problems for annotating causal language. Finally, we present experiments where we frame causal annotation as a sequence labelling problem and report baseline results for the prediction of causal arguments and for predicting different types of causation.
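Framing causal annotation as sequence labelling, as the abstract describes, usually means projecting the annotated trigger and argument spans onto per-token BIO labels. The sketch below shows that conversion step; the sentence, span offsets, and role names are hypothetical, chosen only to mirror the cause/effect roles mentioned above.

```python
def spans_to_bio(tokens, spans):
    """Project token-span annotations onto BIO labels.

    spans: list of (start_token, end_token_exclusive, role) triples,
    e.g. the CAUSE or EFFECT argument of a causal trigger.
    """
    labels = ["O"] * len(tokens)
    for start, end, role in spans:
        labels[start] = f"B-{role}"
        for i in range(start + 1, end):
            labels[i] = f"I-{role}"
    return labels

# Hypothetical sentence with a causal trigger and its two arguments.
tokens = ["The", "strike", "caused", "major", "delays"]
spans = [(0, 2, "CAUSE"), (2, 3, "TRIGGER"), (3, 5, "EFFECT")]
print(spans_to_bio(tokens, spans))
# ['B-CAUSE', 'I-CAUSE', 'B-TRIGGER', 'B-EFFECT', 'I-EFFECT']
```

A standard sequence tagger (e.g. a CRF or a fine-tuned transformer) can then be trained directly on these label sequences to predict causal arguments.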
In the paper we investigate the impact of data size on a Word Sense Disambiguation task (WSD). We question the assumption that the knowledge acquisition bottleneck, which is known as one of the major challenges for WSD, can be solved by simply obtaining more and more training data. Our case study on 1,000 manually annotated instances of the German verb drohen (threaten) shows that the best performance is not obtained when training on the full data set, but by carefully selecting new training instances with regard to their informativeness for the learning process (Active Learning). We present a thorough evaluation of the impact of different sampling methods on the data sets and propose an improved method for uncertainty sampling which dynamically adapts the selection of new instances to the learning progress of the classifier, resulting in more robust results during the initial stages of learning. A qualitative error analysis identifies problems for automatic WSD and discusses the reasons for the great gap in performance between human annotators and our automatic WSD system.
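The core of uncertainty sampling, before the dynamic adaptation the paper proposes, is to annotate next the instances on which the classifier's predicted sense distribution has the highest entropy. A minimal sketch, with an invented pool of predicted distributions (the instance ids and probabilities are not from the paper):

```python
import math

def entropy(dist):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in dist if p > 0)

def select_most_uncertain(pool, k):
    """Pick the k unlabelled instances the classifier is least sure about.

    pool maps instance ids to predicted sense distributions; higher
    entropy = more informative to have a human annotate next.
    """
    return sorted(pool, key=lambda i: entropy(pool[i]), reverse=True)[:k]

# Toy pool of predicted sense distributions for instances of 'drohen'.
pool = {
    "s1": [0.95, 0.05],  # classifier confident
    "s2": [0.50, 0.50],  # maximally uncertain
    "s3": [0.70, 0.30],
}
print(select_most_uncertain(pool, 2))  # ['s2', 's3']
```

The paper's improvement, as summarized above, makes this selection sensitive to the classifier's learning progress rather than using a fixed criterion throughout; that adaptive component is not reproduced here.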
Active Learning (AL) has been proposed as a technique to reduce the amount of annotated data needed in the context of supervised classification. While various simulation studies for a number of NLP tasks have shown that AL works well on gold-standard data, there is some doubt whether the approach can be successful when applied to noisy, real-world data sets. This paper presents a thorough evaluation of the impact of annotation noise on AL and shows that systematic noise resulting from biased coder decisions can seriously harm the AL process. We present a method to filter out inconsistent annotations during AL and show that this makes AL far more robust when applied to noisy data.
This paper presents a compositional annotation scheme to capture the clusivity properties of personal pronouns in context, that is, their ability to construct and manage in-groups and out-groups by including/excluding the audience and/or non-speech act participants in reference to groups that also include the speaker. We apply and test our schema on pronoun instances in speeches taken from the German parliament. The speeches cover a time period from 2017-2021 and comprise manual annotations for 3,126 sentences. We achieve high inter-annotator agreement for our new schema, with a Cohen's κ in the range of 89.7-93.2 and a percentage agreement of > 96%. Our exploratory analysis of in/exclusive pronoun use in the parliamentary setting provides some face validity for our new schema. Finally, we present baseline experiments for automatically predicting clusivity in political debates, with promising results for many referential constellations, yielding an overall 84.9% micro F1 for all pronouns.
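The two agreement figures reported above, Cohen's κ and raw percentage agreement, are both computable from two annotators' label sequences. A minimal sketch with invented clusivity labels (the label names and values are illustrative, not the paper's data):

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences."""
    assert len(a) == len(b)
    n = len(a)
    # Observed agreement: proportion of items both annotators label alike.
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: product of each annotator's marginal label rates.
    ca, cb = Counter(a), Counter(b)
    expected = sum((ca[l] / n) * (cb[l] / n) for l in set(a) | set(b))
    return (observed - expected) / (1 - expected)

# Toy clusivity labels from two hypothetical annotators.
ann1 = ["incl", "excl", "incl", "incl", "excl", "incl"]
ann2 = ["incl", "excl", "incl", "excl", "excl", "incl"]
print(round(cohen_kappa(ann1, ann2), 3))  # 0.667
```

Percentage agreement is just the `observed` term; κ additionally discounts the agreement expected by chance, which is why it is the stricter of the two figures.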
We present a method for detecting annotation errors in manually and automatically annotated dependency parse trees, based on ensemble parsing in combination with Bayesian inference, guided by active learning. We evaluate our method in different scenarios: (i) for error detection in dependency treebanks and (ii) for improving parsing accuracy on in- and out-of-domain data.
Who is we? Disambiguating the referents of first person plural pronouns in parliamentary debates
(2021)
This paper investigates the use of first person plural pronouns as a rhetorical device in political speeches. We present an annotation schema for disambiguating pronoun references and use our schema to create an annotated corpus of debates from the German Bundestag. We then use our corpus to learn to automatically resolve pronoun referents in parliamentary debates. We explore the use of data augmentation with weak supervision to further expand our corpus and report preliminary results.