Von Bush administration zu Kohl-Regierung: Englische Einflüsse auf deutsche Nominalkonstruktionen?
(2010)
This paper develops a new, functionally motivated taxonomy of the adnominal genitive and corresponding von-phrases, jointly referred to as 'possessive attributes'. It draws on findings from typological research and on comparison with other, chiefly Germanic, languages. A descriptive framework for the NP is introduced, with reference as the overarching 'functional domain' and its associated subdomains. Possessive attributes can be identified as one means of expression of the subdomain of modification. It is shown that possessive attributes can realise different functional types of modification: referential-anchoring (der Hut meiner Schwester 'my sister's hat'), qualitative (ein Autor deutscher Herkunft 'an author of German origin'), and classificatory (ein Mann der Tat 'a man of action'). Peripheral possessive attributes such as the partitive genitive (eine Tasse heißen Tees 'a cup of hot tea') and the genitive of identity (das Laster der Unbescheidenheit 'the vice of immodesty') are also taken into account. The new classification of possessive attributes by functional subdomains is preferable to the traditional one in that it relies only on basic distinctions according to the referential-semantic status of the modifier (conceptual versus referential) and the modifier's contribution to the compositional meaning of the NP (anchoring versus qualitative or classificatory). Moreover, it is supported by diagnostic tests such as the pronominalisation test.
How (and when) do speakers generalise from memorised exemplars of a construction to a productive schema? This paper offers a novel, corpus-based take on this issue by examining semantic extension processes. Focusing on clusters of German ADJ N expressions involving the heavily polysemous adjective tief 'deep', it shows that type frequency (a commonly used measure of productivity) needs to be relativised to distinct semantic classes within the overall usage spectrum of a given construction in order to predict the occurrence of novel types within a particular region of that spectrum. Some methodological and theoretical implications for usage-based linguistic model building are considered.
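The relativised measure can be sketched in a few lines: type frequency is the number of distinct noun types, counted once over the whole usage spectrum and once per semantic class. The semantic classes and the tief + N occurrences below are invented for illustration, not data from the study:

```python
from collections import defaultdict

def type_frequency_by_class(tokens):
    """Count distinct noun types for a fixed adjective, overall and per
    semantic class. `tokens` is a list of (noun, semantic_class)
    occurrences, e.g. of German 'tief + N' expressions."""
    overall = set()
    per_class = defaultdict(set)
    for noun, sem_class in tokens:
        overall.add(noun)
        per_class[sem_class].add(noun)
    return len(overall), {c: len(s) for c, s in per_class.items()}

# Hypothetical occurrences of "tief + N", sorted into invented classes:
usage = [
    ("See", "SPATIAL"), ("Tal", "SPATIAL"), ("See", "SPATIAL"),
    ("Stimme", "SOUND"), ("Ton", "SOUND"),
    ("Trauer", "EMOTION"), ("Liebe", "EMOTION"), ("Trauer", "EMOTION"),
]
total, by_class = type_frequency_by_class(usage)
```

A high overall type frequency can thus mask a class in which the construction is barely productive, which is why the per-class counts matter for predicting novel types in that region.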
This paper describes work directed towards the development of a syllable prominence-based prosody generation functionality for a German unit selection speech synthesis system. A general concept for syllable prominence-based prosody generation in unit selection synthesis is proposed. As a first step towards its implementation, an automated syllable prominence annotation procedure based on acoustic analyses has been performed on the BOSS speech corpus. The prominence labeling has been evaluated against an existing annotation of lexical stress levels and manual prominence labeling on a subset of the corpus. We discuss methods and results and give an outlook on further implementation steps.
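The abstract does not spell out the acoustic analysis, but the general shape of an acoustics-based prominence score can be sketched as follows: z-normalise a few per-syllable measurements across the utterance and average them, so syllables that stand out on several dimensions score high. The feature set, the averaging, and the measurements are all illustrative assumptions, not the BOSS procedure itself:

```python
from statistics import mean, stdev

def prominence_scores(syllables):
    """Toy syllable-prominence scoring: average of per-feature z-scores.
    Each syllable is a dict of hypothetical acoustic measurements
    (duration in s, RMS energy, F0 peak in Hz)."""
    feats = ("duration", "energy", "f0_peak")
    stats = {}
    for f in feats:
        vals = [s[f] for s in syllables]
        stats[f] = (mean(vals), stdev(vals) or 1.0)  # guard zero spread
    scores = []
    for s in syllables:
        zs = [(s[f] - stats[f][0]) / stats[f][1] for f in feats]
        scores.append(sum(zs) / len(zs))
    return scores

# Invented three-syllable utterance; the middle syllable is longer,
# louder, and higher-pitched, so it should receive the top score.
utterance = [
    {"duration": 0.12, "energy": 0.30, "f0_peak": 180.0},
    {"duration": 0.25, "energy": 0.55, "f0_peak": 240.0},
    {"duration": 0.10, "energy": 0.28, "f0_peak": 175.0},
]
scores = prominence_scores(utterance)
```

Thresholding or binning such scores would then yield discrete prominence labels that can be compared against lexical stress levels, as in the evaluation described above.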
Bootstrapping Supervised Machine-learning Polarity Classifiers with Rule-based Classification
(2010)
In this paper, we explore the effectiveness of bootstrapping supervised machine-learning polarity classifiers with the output of domain-independent rule-based classifiers. The benefit of this method is that no labeled training data are required. Nevertheless, the method captures in-domain knowledge by training the supervised classifier on in-domain features, such as bag of words.
We investigate how important the quality of the rule-based classifier is and which features are useful for the supervised classifier. The former addresses the question of how far constructions relevant to polarity classification, such as word-sense disambiguation, negation modeling, and intensification, matter for this self-training approach. We not only compare how this method relates to conventional semi-supervised learning but also examine how it performs in more difficult settings in which the classes are not balanced and mixed reviews are included in the dataset.
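The bootstrapping setup can be sketched in miniature: a lexicon-based rule classifier labels raw reviews, and a bag-of-words model is trained on those noisy labels. The lexicon, the reviews, and the choice of a tiny Naive Bayes as the supervised learner are all illustrative assumptions, not the paper's actual components:

```python
import math
from collections import Counter

def rule_classify(text, pos_lex, neg_lex):
    """Domain-independent rule-based polarity: count lexicon hits."""
    words = text.lower().split()
    score = sum(w in pos_lex for w in words) - sum(w in neg_lex for w in words)
    return "pos" if score >= 0 else "neg"

def train_nb(labeled):
    """Train a tiny bag-of-words Naive Bayes on rule-labeled texts."""
    counts = {"pos": Counter(), "neg": Counter()}
    docs = Counter()
    for text, label in labeled:
        docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["pos"]) | set(counts["neg"])

    def classify(text):
        best, best_lp = None, -math.inf
        for label in ("pos", "neg"):
            lp = math.log(docs[label] / sum(docs.values()))
            total = sum(counts[label].values()) + len(vocab)  # add-1 smoothing
            for w in text.lower().split():
                lp += math.log((counts[label][w] + 1) / total)
            if lp > best_lp:
                best, best_lp = label, lp
        return best
    return classify

pos_lex, neg_lex = {"great", "good"}, {"bad", "awful"}
unlabeled = [
    "great camera good lens",
    "awful battery bad screen",
    "good value great shots",
    "bad support awful menu",
]
# Step 1: label the raw corpus with the rule-based classifier ...
bootstrapped = [(t, rule_classify(t, pos_lex, neg_lex)) for t in unlabeled]
# Step 2: ... then train the supervised model on those noisy labels.
classify = train_nb(bootstrapped)
```

The point of the second step is that the trained model now also weights in-domain words ("lens", "screen") that the domain-independent lexicon never mentioned.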
Opinion holder extraction is one of the important subtasks in sentiment analysis. The effective detection of an opinion holder depends on the consideration of various cues on various levels of representation, though these are hard to formulate explicitly as features. In this work, we propose to use convolution kernels for this task, since they identify meaningful fragments of sequences or trees by themselves. We not only investigate how different levels of information can be effectively combined in different kernels but also examine how the scope of these kernels should be chosen. In general relation extraction, the two candidate entities thought to be involved in a relation are commonly chosen as the boundaries of the sequences and trees. In opinion holder extraction, however, the definition of boundaries is less straightforward, since several expressions beside the candidate opinion holder may be eligible as a boundary.
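As a minimal illustration of the convolution-kernel idea, counting shared substructures instead of hand-crafting features, here is a contiguous-subsequence kernel over token sequences. The real kernels in this line of work operate on richer sequence and tree representations; the sentences below are invented:

```python
from collections import Counter

def subseq_kernel(x, y, max_len=3):
    """Count contiguous token subsequences (up to max_len tokens)
    shared by sequences x and y: a simple convolution kernel, since
    similarity is a sum over matching fragments of the inputs."""
    def fragments(seq):
        frags = Counter()
        for n in range(1, max_len + 1):
            for i in range(len(seq) - n + 1):
                frags[tuple(seq[i:i + n])] += 1
        return frags
    fx, fy = fragments(x), fragments(y)
    return sum(fx[f] * fy[f] for f in fx if f in fy)

a = "the critics claim that".split()
b = "the experts claim that".split()
k = subseq_kernel(a, b)  # shared: 3 unigrams + 1 bigram = 4
```

A kernelised learner (e.g. an SVM) can consume such similarities directly, which is what makes the choice of sequence or tree scope, rather than a feature list, the central modeling decision.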
This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis.
We will present various computational approaches to modeling negation in sentiment analysis. We will, in particular, focus on aspects such as the level of representation used for sentiment analysis, negation word detection, and the scope of negation. We will also discuss the limits and challenges of negation modeling for this task.
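The simplest scope model such surveys discuss, a fixed-size negation window over a polarity lexicon, can be sketched as follows. The lexicon, negator set, and window size are toy choices for illustration:

```python
NEGATORS = {"not", "never", "no"}
POLARITY = {"good": 1, "great": 1, "bad": -1, "boring": -1}  # toy lexicon

def polarity_with_negation(tokens, scope=3):
    """Lexicon-based polarity with a fixed negation window: after a
    negation word, the polarity of the next `scope` tokens is inverted.
    No syntax is consulted; this is the crudest scope model."""
    score, negated_until = 0, -1
    for i, tok in enumerate(tokens):
        if tok in NEGATORS:
            negated_until = i + scope
            continue
        p = POLARITY.get(tok, 0)
        score += -p if i <= negated_until else p
    return score

s1 = polarity_with_negation("the movie was good".split())      # 1
s2 = polarity_with_negation("the movie was not good".split())  # -1
```

More elaborate approaches replace the fixed window with syntactic scope (e.g. the negated constituent from a parse), which is exactly the kind of design choice the survey compares.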
This contribution gives an overview of the development and tasks of the Fachverband Deutsch als Fremdsprache (FaDaF) since its foundation in 1989/90. It traces the association's lines of development: as the successor organisation to the Arbeitskreis Deutsch als Fremdsprache beim DAAD (AkDaF), it has taken over, continued, and further developed that body's tasks.
Even grammars that try to be as comprehensible as possible can hardly avoid using technical terms unknown to novices. To overcome this inconvenience, the grammatical information system grammis of the Institut für Deutsche Sprache incorporates a glossary specialised in the terms used within the system. This glossary, actually named Grammatische Grundbegriffe ('elementary terms of grammar') and tied by hyperlinks to the technical terms in the 'core grammar' of grammis, offers short and simple explanations, mainly by means of exemplification. The idea is to give users a provisional understanding they can get along with while following the main topics they are interested in. Explicitly, the glossary is not a stand-alone dictionary of grammatical terms, and it should not be regarded as one.
This paper shows how corpora and related tools can be used to analyse and present significant colligational patterns lexicographically. In German, patterns such as das nötige Wissen vermitteln and sein Wissen unter Beweis stellen play a vital role when learning the language, as they exhibit relevant idiomatic usage and lexical and syntactic rules of combination. Each item has specific semantic and grammatical functions and particular preferences with respect to position and distribution. An analysis of adjectives, for example, identifies preferences in adverbial, attributive, or predicative functions.
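Ranking candidate patterns like nötiges Wissen against the background corpus typically relies on an association measure. Here is a small pointwise-mutual-information sketch over invented (ADJ, N) counts; PMI is one common choice for such analyses, not necessarily the measure used for this work:

```python
import math
from collections import Counter

def pmi(pairs):
    """Pointwise mutual information for (ADJ, N) co-occurrences:
    log2 of observed pair probability over the probability expected
    if adjective and noun combined independently."""
    pair_c = Counter(pairs)
    adj_c = Counter(a for a, _ in pairs)
    n_c = Counter(n for _, n in pairs)
    total = len(pairs)
    return {
        (a, n): math.log2((c / total) / ((adj_c[a] / total) * (n_c[n] / total)))
        for (a, n), c in pair_c.items()
    }

# Invented toy corpus of adjective-noun co-occurrences:
corpus = [
    ("nötig", "Wissen"), ("nötig", "Wissen"), ("nötig", "Geld"),
    ("gut", "Wissen"), ("gut", "Idee"), ("gut", "Idee"),
]
scores = pmi(corpus)
```

Pairs scoring above 0 co-occur more often than chance predicts; in lexicographic practice such rankings are only a starting point, with the candidates then vetted manually for idiomatic relevance, as described above.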
Traditionally, corpus analyses of syntagmatic constructions have not been conducted for lexicographic purposes. This paper shows how to utilise corpora to extract and examine typical syntagms and how the results of such an analysis are documented systematically in ELEXIKO, a large-scale corpus-based Internet reference work of German. It also demonstrates how this dictionary accounts for the lexical and grammatical interplay between units in a syntagm and how authentic corpus material and complementary prose-style usage notes are a useful guide to text production or reception.