400 Language, Linguistics
Corpus-based identification and disambiguation of reading indicators for German nominalizations (2010)
Corpus data are often structurally and lexically ambiguous, so corpus extraction methodologies must take such ambiguities into account. Given an extraction task, all relevant ambiguities must first be identified; to resolve them, the contextual cues responsible for one reading or another have to be considered. Our present work examines German -ung-nominalizations and their sortal readings. Depending on the semantic group they belong to, a number of these nominalizations can be read as either an event or a result. Here we concentrate on nominalizations of verbs of saying (henceforth: "verba dicendi"), identify their context partners, and describe their influence on the sortal reading of the nominalizations in question. We present a tool that calculates the sortal reading of such nominalizations and may thus improve not only corpus extraction but also, e.g., machine translation. Lastly, we describe successful attempts to identify the correct sortal reading, draw conclusions, and outline future work.
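The indicator-based disambiguation described in this abstract can be pictured as a simple rule-based classifier over context partners. The indicator lists and function name below are illustrative assumptions, not the authors' actual resources:

```python
# Hypothetical sketch of an indicator-based sortal-reading classifier for
# German -ung nominalizations of verba dicendi. The indicator sets are
# invented examples, not the resources used in the paper.

EVENT_INDICATORS = {"während", "dauern", "stattfinden", "beginnen"}        # during, last, take place, begin
RESULT_INDICATORS = {"vorliegen", "enthalten", "unterschreiben", "lesen"}  # be available, contain, sign, read

def sortal_reading(context_words):
    """Return 'event', 'result', or 'ambiguous' for a nominalization,
    given the lemmas of its context partners."""
    ev = sum(w in EVENT_INDICATORS for w in context_words)
    res = sum(w in RESULT_INDICATORS for w in context_words)
    if ev > res:
        return "event"
    if res > ev:
        return "result"
    return "ambiguous"

print(sortal_reading(["während", "dauern"]))   # event
print(sortal_reading(["unterschreiben"]))      # result
```

In practice such a tool would of course operate on parsed corpus sentences and weighted indicators rather than bare lemma lists; the sketch only illustrates the decision logic.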
Between classical symbolic word sense disambiguation (WSD), which uses explicit deep semantic representations of sentences and texts, and statistical WSD, which uses word co-occurrence information, there is a recent tendency towards mediating methods. Similar to so-called lightweight semantics (Marek, 2009), we suggest making only sparse use of semantic information. We describe an approximation model based upon flat underspecified discourse representation structures (FUDRSs, cf. Eberle, 2004) that weighs knowledge about context structure, lexical semantic restrictions, and interpretation preferences. We give a catalogue of guidelines for human annotation of texts with the corresponding indicators. Using this, the reliability of an analysis tool that implements the model can be tested with respect to annotation precision and disambiguation prediction, and we show how both can be improved by bootstrapping the system's knowledge from corpus information. For the balanced test corpus considered, the recognition rate of the preferred reading is 80–90% (depending on the smoothing of parse errors).
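The idea of weighing indicator knowledge against interpretation preferences can be illustrated with a small scoring function. The indicator names and weights below are invented for illustration and are not part of the FUDRS model itself:

```python
# Toy weighted-indicator scorer: each annotated indicator contributes a
# weight towards one reading, and the reading with the highest total wins.
# Indicator names and weights are hypothetical.

WEIGHTS = {
    # (indicator, reading) -> weight
    ("temporal_modifier", "event"): 2.0,
    ("agentive_by_phrase", "event"): 1.5,
    ("stative_predicate", "result"): 2.0,
    ("definite_article", "result"): 0.5,
}

def preferred_reading(indicators):
    """Return the highest-scoring reading for a set of annotated
    indicators, or None if no indicator matched."""
    scores = {}
    for ind in indicators:
        for (name, reading), w in WEIGHTS.items():
            if name == ind:
                scores[reading] = scores.get(reading, 0.0) + w
    return max(scores, key=scores.get) if scores else None
```

Bootstrapping, in this picture, would amount to re-estimating the weights from corpus counts instead of fixing them by hand.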
We propose to use abusive emojis, such as the “middle finger” or “face vomiting”, as a proxy for learning a lexicon of abusive words. Since an emoji represents extralinguistic information, a single emoji can co-occur with different forms of explicitly abusive utterances. We show that our approach generates a lexicon that matches the performance of the most advanced lexicon-induction method in cross-domain classification of abusive microposts. That method, in contrast, depends on manually annotated seed words and expensive lexical resources (e.g. WordNet) for bootstrapping. We demonstrate that the same emojis can also be used effectively in languages other than English. Finally, we show that emojis can be exploited to classify mentions of ambiguous words, such as “fuck” and “bitch”, into generally abusive and merely profane usages.
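One common way to induce such a lexicon is to score each word by its pointwise mutual information (PMI) with the abusive emojis; the abstract does not specify the association measure, so the sketch below is an assumption, as are the thresholds:

```python
import math
from collections import Counter

# Abusive emojis used as distant-supervision labels (middle finger,
# face vomiting); the full emoji set would be larger in practice.
ABUSIVE_EMOJIS = {"🖕", "🤮"}

def induce_lexicon(posts, min_count=2):
    """Score words by their PMI with abusive emojis over a collection of
    microposts. A toy sketch; real induction would need far more data
    plus tokenization and frequency filtering."""
    word_counts = Counter()   # posts containing the word
    joint_counts = Counter()  # posts containing the word AND an abusive emoji
    n_posts = len(posts)
    n_abusive = 0
    for post in posts:
        abusive = any(e in post for e in ABUSIVE_EMOJIS)
        n_abusive += abusive
        for w in set(post.split()):
            if w in ABUSIVE_EMOJIS:
                continue
            word_counts[w] += 1
            if abusive:
                joint_counts[w] += 1
    lexicon = {}
    for w, c in word_counts.items():
        if c < min_count or joint_counts[w] == 0:
            continue
        pmi = math.log((joint_counts[w] / n_posts)
                       / ((c / n_posts) * (n_abusive / n_posts)))
        if pmi > 0:  # keep only words positively associated with abuse
            lexicon[w] = pmi
    return lexicon
```

Because the emojis, not any seed words, supply the supervision signal, the same procedure transfers to other languages without language-specific lexical resources.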
Novel formats of construction-based description hold great potential for phenomena that fall through the cracks of traditional linguistic reference works. Using the example of German verb argument structure constructions with a prepositional object, we demonstrate that a construction-based description of such phenomena is superior to existing lexicographic and grammaticographic treatments, but that it also poses a number of new problems. The most fundamental of these relates to the fact that construction-based analyses can be proposed at different levels of abstraction. We illustrate pertinent problems relating to the precise identification of constructional form and meaning and suggest a multi-layered descriptive format for web-based electronic reference constructica that can accommodate these challenges. Semantically, the proposed solution integrates both lumping and splitting perspectives on constructional grain size and permits users to flexibly zoom in and out on individual elements in the resource. Formally, it can capture variation in the number and marking of realised arguments, as found in e.g. passives and transitivity alternations. Aspects of the theoretical controversy between Construction Grammar and Valency Theory are addressed where relevant, but our focus is on questions of description and the practical implementation of construction-based analyses in a suitable type of linguistic reference work.
The puzzle we consider in this paper is that Merchant (2004) judges certain elliptical utterances in context to be ungrammatical, while Culicover and Jackendoff (2005) judge similar examples to be grammatical. The main difference between the examples appears to be that Merchant’s are introduced by no, while Culicover and Jackendoff’s are introduced by yes. We propose that the different judgments do not reflect grammaticality, but complexity associated with ambiguity. First, there is an ambiguity with respect to the reference of noun phrases in discourse: the relationship of the fragment to the preceding discourse is ambiguous. Second, there is an ambiguity with respect to the discourse function of an utterance, in particular whether it is an affirmation triggered by yes or a denial triggered by no. In the case of a denial, it needs to be established which part of the preceding statement has to be corrected, while in the case of an affirmation, no such ambiguity arises. The interaction between these two interpretive functions may, under certain circumstances, render particular sentences in discourse difficult to interpret. Interpretive difficulty has the subjective flavor of ‘ungrammaticality’; in the case we discuss here, such judgments form the basis of a particular linguistic analysis. But, we argue, manipulation of the discourse context can simplify discourse interpretation by resolving the ambiguity, which removes the interpretive difficulty. The conclusion we draw is that the phenomenon in question is not a matter of linguistic structure, but of discourse interpretation.