Grammatikforschung
This is a study of how aspects of information structure can be captured within a formal grammar of Spanish, couched in the framework of Head-Driven Phrase Structure Grammar (HPSG, Pollard and Sag 1994). While a large number of morphological, syntactic and semantic phenomena in a variety of languages have been successfully analysed in this theory, information structure has not received the same attention in the HPSG literature. Yet as a theory of signs, HPSG should include all levels of description; without them, the structural descriptions offered by the grammar ultimately remain incomplete. Languages often explicitly mark the information-structural partitioning of utterances. Depending on the particular language, the linguistic resources used for this purpose include prosody (stress/intonation), syntax (e.g. constituent order, special syntactic constructions) and morphology (e.g. special affixes). In HPSG, phonological, syntactic, semantic and pragmatic information is represented in parallel, an architecture well suited to modelling the sort of interfaces called for.
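The parallel architecture described above can be illustrated with a toy sketch. This is not from the dissertation itself; the attribute names (PHON, SYN, SEM, INFO-STRUCT) and the focus-marking constraint are illustrative assumptions. The point is only that a sign is a single feature structure in which prosodic and information-structural attributes sit side by side, so a grammar constraint can relate them directly via unification.

```python
def unify(f, g):
    """Unify two feature structures modelled as nested dicts.

    Returns the merged structure, or None if two atomic values clash.
    This is a minimal stand-in for the typed feature structures of HPSG.
    """
    if isinstance(f, dict) and isinstance(g, dict):
        out = dict(f)
        for key, val in g.items():
            if key in out:
                sub = unify(out[key], val)
                if sub is None:
                    return None  # clash somewhere below this attribute
                out[key] = sub
            else:
                out[key] = val
        return out
    return f if f == g else None

# A hypothetical lexical sign: phonology, syntax and semantics in parallel.
libro = {
    "PHON": {"FORM": "libro"},
    "SYN": {"HEAD": "noun"},
    "SEM": {"PRED": "book"},
}

# An assumed focus-marking constraint: nuclear accent in PHON
# co-occurs with a focus value in INFO-STRUCT.
focus_constraint = {
    "PHON": {"ACCENT": "nuclear"},
    "INFO-STRUCT": {"FOCUS": "+"},
}

focused = unify(libro, focus_constraint)
```

Because all levels live in one structure, applying the constraint enriches the sign with both the prosodic and the pragmatic information at once, which is the architectural property the abstract appeals to.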
The principal claim of this dissertation is that there is a unique structural core shared by Double Object, Dative Experiencer and Existential/Presentational constructions. This core is argued to take the form of a Cipient Predication structure, 'cipient' covering traditional notions like (affected) source/goal, recipient, indirect object or dative experiencer. Central questions arising in defining Cipient Predication are: How are cipients thematically licensed, and what is the role of there in argument-structural terms? What is the structural locus of cipients/there? What is the role and nature of dative case? How can the possessive interpretation and the blocking and definiteness effects associated with the above-mentioned constructions be explained? Cipients are presented as external arguments and logical subjects (location individuals) of predicates derived from a propositional meaning embedded in the VP, the predicate being formed by a lower tense head 'little t' that is overtly realized as there. Little t is argued to encode a distinction at the reference-time level, structural dative hinging on a tense property just as structural nominative does. The cipient relates as a whole to a part, namely a VP-internal location argument that together with the theme furnishes the propositional meaning ('possession'). As logical subjects, cipients anchor the predicate to the utterance context, forcing its interpretation in extralinguistic terms ('blocking effects'). It is proposed that, lacking structurally encoded subjects, Existential/Presentational constructions are not saturated expressions in syntax, precluding the interpretation of certain quantifiers (most/every; 'definiteness effects').
Cipient Predication, couched in terms of the Minimalist Program (in particular, Chomsky 1999) and a semantics relying on tense and the ontological distinction of locations as well as scalar and part-whole structure, should be of interest to scholars working on datives, argument structure, and the syntax/semantics/pragmatics interface more generally.
This dissertation joins the debate on the use of the German construction werden + infinitive and contributes a third position to the so-called temporalist/modalist controversy, one set apart from a categorial-grammatical classification. It investigates the strength of the epistemic-modal and the temporal meaning components of the periphrasis werden + infinitive, and how that strength depends on determining factors, in comparison with the present tense, the perfect and the present used with future reference (Präsens pro futuro). To this end, primarily web-based experimental paradigms are employed, which not only place the investigations on a broad empirical basis but also permit a systematic and controlled analysis. In addition, the thesis contains a methodological part addressing possible differences between web-based and laboratory-based experiments.
This thesis describes work in three areas: grammar engineering, computer-assisted language learning and grammar learning. These three parts are connected by the concept of a grammar-based language learning application. Two types of grammars are of concern. The first we call resource grammars: extensive descriptions of natural languages. Part I focuses on this kind of grammar. The other type are domain-specific or application-specific grammars, which describe only the fragment of natural language determined by the domain of a certain application. Domain-specific grammars are relevant for Part II and Part III. Another important distinction is between humans learning a new natural language using computational grammars (Part II) and computers learning grammars from example sentences (Part III). Part I of this thesis focuses on grammar engineering and grammar testing. It describes the development and evaluation of a computational resource grammar for Latin. Latin is known for its rich morphology and free word order, both of which have to be handled in a computationally efficient way. A special focus is on methods for evaluating computational grammars using corpus data; such an evaluation is presented for the Latin resource grammar. Part II, the central part, describes a computer-assisted language learning application based on domain-specific grammars. The application demonstrates how computational grammars can be used to guide user input and how language learning exercises can be modeled as grammars. This allows us to put computational grammars at the center of the design of language learning exercises used to help humans learn new languages. Part III, the final part, is dedicated to a method for learning domain- or application-specific grammars from a wide-coverage grammar and small sets of example sentences.
Here a computer learns a grammar for a fragment of a natural language from example sentences, potentially without any additional human intervention. These learned grammars can be based, e.g., on the Latin resource grammar described in Part I and used as domain-specific lesson grammars in the language learning application described in Part II.
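One simple way to realize this idea, sketched here as a toy (the grammar, rules and sentences are invented for illustration and are not taken from the thesis), is to parse the example sentences with the wide-coverage grammar and keep exactly the rules that the parses use. The result is a small domain-specific subgrammar that covers the examples.

```python
from collections import defaultdict

# Toy "wide-coverage" grammar in Chomsky normal form:
# binary rules A -> B C, and lexical rules A -> 'word'.
BINARY = [("S", "NP", "VP"), ("NP", "Det", "N"),
          ("VP", "V", "NP"), ("VP", "V", "PP"), ("PP", "P", "NP")]
LEXICAL = [("Det", "the"), ("N", "dog"), ("N", "cat"), ("N", "lesson"),
           ("V", "sees"), ("V", "learns"), ("P", "with")]

def cky_parse(words):
    """CKY recognition; chart[(i, j)] maps a category to one backpointer."""
    n = len(words)
    chart = defaultdict(dict)
    for i, w in enumerate(words):
        for cat, word in LEXICAL:
            if word == w:
                chart[(i, i + 1)][cat] = ("lex", word)
    for span in range(2, n + 1):
        for i in range(n - span + 1):
            j = i + span
            for k in range(i + 1, j):
                for a, b, c in BINARY:
                    if b in chart[(i, k)] and c in chart[(k, j)]:
                        chart[(i, j)][a] = ("bin", b, c, k)  # keeps one parse
    return chart

def used_productions(chart, cat, i, j, acc):
    """Collect the rules used in the recovered parse of words[i:j] as cat."""
    entry = chart[(i, j)][cat]
    if entry[0] == "lex":
        acc.add((cat, entry[1]))
    else:
        _, b, c, k = entry
        acc.add((cat, b, c))
        used_productions(chart, b, i, k, acc)
        used_productions(chart, c, k, j, acc)
    return acc

def domain_grammar(sentences):
    """Keep only the wide-grammar rules needed to parse the examples."""
    rules = set()
    for s in sentences:
        words = s.split()
        chart = cky_parse(words)
        if "S" in chart[(0, len(words))]:
            used_productions(chart, "S", 0, len(words), rules)
    return rules

rules = domain_grammar(["the dog sees the cat"])
```

The extracted rule set omits, for instance, the PP rules the example never exercises; in a realistic setting the same filtering idea would be applied to a full resource grammar rather than this toy CFG.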
This thesis investigates temporal and aspectual reference in the typologically unrelated African languages Hausa (Chadic, Afro-Asiatic) and Medumba (Grassfields Bantu). It argues that Hausa is a genuinely tenseless language and compares the interpretation of temporally unmarked sentences in Hausa to that of morphologically tenseless sentences in Medumba, where tense marking is optional and graded. The empirical behavior of the optional temporal morphemes in Medumba motivates analyzing them as existential quantifiers over times and thus provides new evidence that languages vary in whether their (past) tense is pronominal or quantificational (see also Sharvit 2014). The thesis proposes for both Hausa and Medumba that the alleged future tense marker is a modal element that obligatorily combines with a prospective future shifter (which is covert in Medumba). Cross-linguistic variation in whether or not a future marker is compatible with non-future interpretation is proposed to be predictable from the aspectual architecture of the given language.
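The pronominal/quantificational contrast can be made concrete with standard schematic denotations from the tense-semantics literature; these are illustrative and not necessarily the thesis's exact formulation. Here \(g\) is the assignment function and \(t_c\) the utterance time:

```latex
% Standard schematic denotations; not taken verbatim from the thesis.
\begin{align*}
\text{pronominal past:}       \quad & [\![\,\mathrm{PAST}_i\,]\!]^{g,\,t_c} = g(i),
  \ \text{defined only if } g(i) \prec t_c\\
\text{quantificational past:} \quad & [\![\,\mathrm{PAST}\,]\!]^{g,\,t_c} =
  \lambda P_{\langle i,t \rangle}.\ \exists t\,[\,t \prec t_c \wedge P(t)\,]
\end{align*}
```

On the first denotation, past tense refers to a contextually given time like a pronoun; on the second, it merely asserts the existence of some time before the utterance time, which is the behavior the Medumba data are argued to exhibit.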
Manual development of deep linguistic resources is time-consuming and costly and is therefore often described as a bottleneck for traditional rule-based NLP. In my PhD thesis I present a treebank-based method for the automatic acquisition of LFG resources for German. The method automatically creates deep and rich linguistic representations from labelled data (treebanks) and can be applied to large data sets. My research is based on, and substantially extends, previous work on automatically acquiring wide-coverage, deep, constraint-based grammatical resources from the English Penn-II treebank (Cahill et al., 2002; Burke et al., 2004; Cahill, 2004). The best results for English show a dependency f-score of 82.73% (Cahill et al., 2008) against the PARC 700 dependency bank, outperforming the best hand-crafted grammar of Kaplan et al. (2004). Preliminary work has been carried out to test the approach on languages other than English, providing proof of concept for the applicability of the method (Cahill et al., 2003; Cahill, 2004; Cahill et al., 2005). While first results have been promising, a number of important research questions have been raised. The original approach, presented first in Cahill et al. (2002), is strongly tailored to English and the data structures provided by the Penn-II treebank (Marcus et al., 1993). English is configurational and rather poor in inflectional forms. German, by contrast, features semi-free word order and a much richer morphology. Furthermore, treebanks for German differ considerably from the Penn-II treebank as regards the data structures and encoding schemes underlying the grammar acquisition task. In my thesis I examine the impact of language-specific properties of German as well as linguistically motivated treebank design decisions on PCFG parsing and LFG grammar acquisition.
I present experiments investigating the influence of treebank design on PCFG parsing and show which types of representation are useful for the PCFG and LFG grammar acquisition tasks. Furthermore, I present a novel approach to cross-treebank comparison, measuring the effect of controlled error insertion on treebank trees and parser output from different treebanks. I complement the cross-treebank comparison with a human evaluation using TePaCoC, a new test suite for testing parser performance on complex grammatical constructions. Manual evaluation on TePaCoC data provides new insights into the impact of flat vs. hierarchical annotation schemes on data-driven parsing. I present treebank-based LFG acquisition methodologies for two German treebanks. An extensive evaluation along different dimensions complements the investigation and provides valuable insights for the future development of treebanks.
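The first step shared by all treebank-based acquisition pipelines of this kind, inducing a PCFG from labelled trees by relative-frequency estimation, can be sketched as follows. The toy treebank and labels are invented for illustration; real acquisition would run over a full treebank such as TiGer or TüBa-D/Z and add the LFG f-structure annotation on top.

```python
from collections import Counter

# Toy treebank: trees as nested tuples (label, child, ...); leaves are words.
TREEBANK = [
    ("S", ("NP", ("PPER", "sie")),
          ("VP", ("VVFIN", "liest"),
                 ("NP", ("ART", "das"), ("NN", "Buch")))),
    ("S", ("NP", ("ART", "das"), ("NN", "Kind")),
          ("VP", ("VVFIN", "liest"))),
]

def productions(tree, acc):
    """Read off the CFG productions of one tree, top down."""
    label, *children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    acc.append((label, rhs))
    for c in children:
        if not isinstance(c, str):
            productions(c, acc)
    return acc

def induce_pcfg(treebank):
    """Relative-frequency estimate: P(A -> rhs) = count(A -> rhs) / count(A)."""
    counts = Counter()
    lhs_totals = Counter()
    for tree in treebank:
        for lhs, rhs in productions(tree, []):
            counts[(lhs, rhs)] += 1
            lhs_totals[lhs] += 1
    return {(lhs, rhs): c / lhs_totals[lhs] for (lhs, rhs), c in counts.items()}

pcfg = induce_pcfg(TREEBANK)
```

Because the probabilities are read directly off the trees, any design decision in the annotation scheme (flat vs. hierarchical structures, category granularity) changes the induced rule set and its probabilities, which is exactly why treebank design matters for the parsing and acquisition experiments described above.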
This thesis deals with expressions consisting of two noun phrases connected by a comitative preposition, referred to as comitative constructions (CCs). It focuses on CCs in Polish, with some comparisons to other languages, and provides an analysis at the morphosyntax-semantics-pragmatics interface in the paradigm of Head-Driven Phrase Structure Grammar with the integrated model-theoretic semantic framework of Lexicalized Flexible Ty2. After three different readings of Polish CCs are postulated (accompanitive, conjunctive, and open and closed inclusive), a number of semantic phenomena are discussed which provide evidence for this classification. Further examination of the data shows that all CC types behave uniformly with regard to their syntactic properties but exhibit differences regarding agreement and person, number and gender resolution. These differences have previously been explained by syntactic stipulations. This thesis argues that a syntactic approach to CCs lacks real empirical motivation, and it demonstrates that some of the existing analyses are problematic for a number of empirical and/or theoretical reasons. It further offers an alternative analysis based on the assumption that all CC types have a uniform, adjunction-based syntactic structure, and that the crucial differences between them are semantic in nature, being triggered by the meaning of the comitative preposition. The core of the proposed semantic analysis consists of three different logical representations of the comitative preposition, whose truth conditions allow us to make the right predictions about the different behavior of the three CC types. All other lexical components of CCs, including plural pronouns, bear their customary forms and meanings in each type of CC.
By implementing this idea in a constraint-based framework whose description language incorporates a formal semantic representation language, and by modeling the morphosyntactic, semantic, pragmatic and referential properties of CCs within a single grammatical paradigm, we arrive at an analysis that accounts for these expressions in a very natural way.
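For illustration only, the three readings might be distinguished along the following lines; these are schematic placeholders drawn from common analyses of comitatives, not the representations actually proposed in the thesis. Here \(x\) and \(y\) stand for the referents of NP\(_1\) and NP\(_2\), \(\oplus\) denotes sum formation and \(\sqsubseteq\) the part-of relation:

```latex
% Schematic placeholders, not the thesis's actual representations.
\begin{align*}
\text{conjunctive:}   \quad & [\![\,\mathrm{NP}_1\ \textit{z}\ \mathrm{NP}_2\,]\!] = x \oplus y
  && \text{(sum individual; plural predication)}\\
\text{accompanitive:} \quad & [\![\,\textit{z}\ \mathrm{NP}_2\,]\!] =
  \lambda P\,\lambda x\,\lambda e.\; P(e)(x) \wedge \mathrm{accompany}(e, y)
  && \text{(event modifier)}\\
\text{inclusive:}     \quad & y \sqsubseteq [\![\,\mathrm{NP}_1\,]\!]
  && \text{(NP}_2\text{'s referent is part of the pronoun's group)}
\end{align*}
```

The point of such a three-way split is that a single adjunction-based syntax can feed three distinct truth-conditional contributions of the preposition, which is the division of labor the abstract argues for.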