Grammatik
Null subjects (NSs) have been a central research topic in generative syntax ever since the 1980s. This chapter considers German NSs from both a dialectological and a diachronic perspective and attempts to reconstruct a direct line in the licensing conditions of pro-drop from Old High German (OHG) through Middle High German (MHG) and Early New High German (ENHG) to current dialects of New High German (NHG). In particular, we argue that German changed from a consistent, yet asymmetric pro-drop language to a partial, but symmetric one. To demonstrate that this development took place, and the steps involved, we survey the existing empirical evidence and introduce new data.
Complement phrases are essential for constructing well-formed sentences in German. Identifying verb complements and categorizing complement classes are challenging tasks even for linguists specialized in verb valency. Against this background, we introduce an ML-based algorithm that identifies and classifies the complement phrases of any German verb in any written sentence context. We use a large training set of example sentences from a valency dictionary, enriched with POS tagging, and the ML technique of Conditional Random Fields (CRF) to generate the classification models.
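The abstract above does not spell out the feature representation, but CRF sequence labellers of this kind are typically fed per-token feature dictionaries built from word forms and a small POS window. The following sketch illustrates that general setup; the feature names, the example sentence, and the STTS-style tags are illustrative, not the authors' actual feature set.

```python
# Hypothetical sketch of windowed token features for a CRF-based
# complement-phrase labeller (feature names are illustrative only).

def token_features(tokens, pos_tags, i):
    """Feature dict for token i: word form, its POS, and a +/-1 POS window."""
    return {
        "word": tokens[i].lower(),
        "pos": pos_tags[i],
        "prev_pos": pos_tags[i - 1] if i > 0 else "BOS",   # beginning of sentence
        "next_pos": pos_tags[i + 1] if i < len(tokens) - 1 else "EOS",
    }

def sentence_features(tokens, pos_tags):
    """One feature dict per token, ready for a CRF trainer."""
    return [token_features(tokens, pos_tags, i) for i in range(len(tokens))]

# "Sie wartet auf den Bus" with STTS tags; "auf den Bus" would be the
# prepositional complement of "warten" in a valency dictionary.
tokens = ["Sie", "wartet", "auf", "den", "Bus"]
pos = ["PPER", "VVFIN", "APPR", "ART", "NN"]
feats = sentence_features(tokens, pos)
```

A CRF implementation (e.g. one accepting lists of feature dicts per sentence) would then be trained on such sequences paired with complement-class labels from the valency dictionary.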
We present an approach to modeling German negation in open-domain, fine-grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units (not only common negation words) and that different negation words have different scopes. Our approach is evaluated on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identify different types of negation words that share the same scopes. We show that negation modeling based on these types alone largely outperforms traditional negation models, which assume the same scope for all negation words and employ window-based scope detection rather than scope detection based on syntactic information.
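The contrast between a fixed-window baseline and type-sensitive scoping can be sketched as follows. The negator classes and scope lengths below are invented for illustration; the paper's actual types and its syntax-based scope detection are not reproduced here.

```python
# Toy negator lexicon: each negation word carries its own scope length
# (token positions to its right). Values are invented for the sketch.
NEGATOR_SCOPE = {
    "nicht": 3,    # short, local scope
    "kein": 2,     # scopes over the following NP
    "niemand": 6,  # roughly clause-wide
}

def negated_positions_window(tokens, window=3):
    """Baseline: the same fixed window after every negation word."""
    negated = set()
    for i, t in enumerate(tokens):
        if t in NEGATOR_SCOPE:
            negated.update(range(i + 1, min(i + 1 + window, len(tokens))))
    return negated

def negated_positions_typed(tokens):
    """Type-sensitive: each negator class uses its own scope length."""
    negated = set()
    for i, t in enumerate(tokens):
        if t in NEGATOR_SCOPE:
            negated.update(range(i + 1, min(i + 1 + NEGATOR_SCOPE[t], len(tokens))))
    return negated
```

For "ich habe kein Geld mehr heute", the typed model stops the scope of "kein" after the noun phrase, while the fixed-window baseline also swallows the following adverb; that mismatch is exactly what type-sensitive (and, further, syntax-based) scoping corrects.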
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
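The core reconstruction step can be illustrated with a minimal sketch: in a German clause, the finite verb (STTS tag VVFIN) and its separated particle (tag PTKVZ) are rejoined into a single form. Clause segmentation, lemmatisation, and the handling of spoken-language phenomena discussed in the paper are assumed solved upstream; the example sentence is invented.

```python
# Minimal sketch: rejoin a separated particle (STTS: PTKVZ) with the
# finite verb (STTS: VVFIN) of the same clause. Real corpora need clause
# boundaries and lemmatisation on top of this.

def reconstruct_particle_verb(tagged_tokens):
    """tagged_tokens: list of (form, stts_tag) pairs for one clause.
    Returns the reconstructed particle+verb form, or None if the clause
    contains no separated particle verb."""
    verb = particle = None
    for form, tag in tagged_tokens:
        if tag == "VVFIN":
            verb = form
        elif tag == "PTKVZ":
            particle = form
    if verb and particle:
        return particle.lower() + verb.lower()  # "an" + "fängt" -> "anfängt"
    return None

# "er fängt morgen an" -> separated form of "anfangen"
sent = [("er", "PPER"), ("fängt", "VVFIN"), ("morgen", "ADV"), ("an", "PTKVZ")]
```

Precision/recall trade-offs of the kind the paper discusses would enter here, e.g. in how strictly candidate particles are filtered against a lexicon of attested particle verbs.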