Refine
Year of publication
Document Type
- Conference Proceeding (411)
- Part of a Book (332)
- Article (253)
- Book (27)
- Doctoral Thesis (19)
- Working Paper (14)
- Other (7)
- Contribution to a Periodical (6)
- Review (3)
- Part of Periodical (2)
Language
- English (1078)
Is part of the Bibliography
- no (1078)
Keywords
- Deutsch (225)
- Korpus <Linguistik> (213)
- Computerlinguistik (118)
- Englisch (78)
- Konversationsanalyse (69)
- Annotation (59)
- Automatische Sprachanalyse (50)
- Interaktion (42)
- Semantik (42)
- Gesprochene Sprache (41)
Publicationstate
- Veröffentlichungsversion (559)
- Postprint (148)
- Zweitveröffentlichung (112)
- Preprint (2)
- Ahead of Print (1)
- Erstveröffentlichung (1)
Reviewstate
- Peer-Review (397)
- (Verlags)-Lektorat (303)
- Qualifikationsarbeit (Dissertation, Habilitationsschrift) (16)
- Peer-review (14)
- Verlags-Lektorat (9)
- Review-Status-unbekannt (5)
- Peer-Revied (3)
- Abschlussarbeit (Bachelor, Master, Diplom, Magister) (Bachelor, Master, Diss.) (2)
- Zweitveröffentlichung (2)
- (Verlags-)Lektorat (1)
Publisher
- de Gruyter (60)
- Benjamins (50)
- IDS-Verlag (47)
- Springer (42)
- Association for Computational Linguistics (35)
- European Language Resources Association (ELRA) (32)
- Institut für Deutsche Sprache (24)
- European Language Resources Association (22)
- Elsevier (21)
- Niemeyer (20)
Speakers’ dialogical orientation to the particular others they talk to is implemented by practices of recipient-design. One such practice is the use of negation as a means to constrain interpretations of the speaker’s actions by the partner. The paper situates this use of negation within the larger context of other recipient-designed uses of negation which negate assumptions the speaker makes about what the addressee holds to be true (second-order assumptions) or what the addressee assumes the speaker holds to be true (third-order assumptions). The focus of the study is on the ways in which speakers use negation to disclaim interpretations of their turns which partners have displayed or may possibly arrive at. Special emphasis is given to the positionally sensitive uses of negation, which may occur before, after or inserted between the nucleus actions whose interpretation is constrained by the negation. Interactional motivations and rhetorical potentials of the practice are pointed out, partly depending on the position of the negation vis-à-vis the nucleus action. The analysis shows that the concept of ‘recipient design’ is in need of distinctions which have not been in focus in prior research.
"Standard language" is a contested concept, ideologically, empirically and theoretically. This is particularly true for a language such as German, where the standardization of the spoken language was based on the written standard and was established with respect to a communicative situation, i.e. public speech on stage (Bühnenaussprache), which most speakers never come across. As a consequence, the norms of the oral standard exhibit many features which are infrequent in the everyday speech even of educated speakers. This paper discusses ways to arrive at a more realistic conception of (spoken) standard German, which will be termed "standard usage". It must be founded on empirical observations of speakers’ linguistic choices in everyday situations. Arguments in favor of a corpus-based notion of standard have to consider sociolinguistic, political, and didactic concerns. We report on the design of a large study of linguistic variation conducted at the Institute for the German Language (project "Variation in Spoken German", Variation des gesprochenen Deutsch) with the aim of arriving at a representative picture of "standard usage" in contemporary German. It systematically takes into account both diatopic variation, covering the multi-national space in which German is an official language, and diastratic variation in terms of varying degrees of formality. Results of the study of phonetic and morphosyntactic variation are discussed. At least for German, a corpus-based notion of "standard usage" inevitably includes some degree of pluralism concerning areal variation, and it needs to do justice to register-based variation as well.
With the advent of mobile devices, mediatized political discourse has become more dynamic. I assume that the microblog Twitter can be considered a medium for spatial coordination during protests. Therefore, the case of neo-Nazi demonstrations and counter-protests in the city of Dresden in February 2012 is analysed. The data consist of microposts produced during the event. Quantitative analysis of hashtag and retweet frequencies was performed, as well as qualitative speech act pattern analysis and a tempo-spatial discourse analysis of selected subsets of microposts. Results show that verbal georeferencing, and thereby the construction of space, is a common linguistic practice. Empirical analysis indicates a strong relation between communicational online space and physical offline place: protest participants permanently reconfigure the spatial context discursively, and the contested protest area thus becomes a temporarily meaningful place.
Positioning analysis, a variant of discourse analysis, was used to explore the narratives of 40 psychiatric patients (11 females and 29 males; mean age = 40 years) who had manifest difficulties with engagement with statutory mental health services. Positioning analysis is a qualitative method that captures how people linguistically position the roles and identities of themselves and others in their day-to-day lives and narratives. The language of disengagement incorporated the passive positioning of self in relation to their lives and treatment through the use of metaphor, the passive voice and them and us attribution, while the discourse of engagement incorporated more active positioning of self, achieved through the use of the personal pronoun we and metaphoric references to balanced relationships. The findings corroborate previous thematic analysis that highlighted the importance of identity and agency in the ‘making or breaking’ of therapeutic relationships (Priebe et al. 2005). Implications are discussed in relation to how positioning analysis may help signal and emphasize important life and therapeutic experiences in spoken narratives as well as clinical consultations.
The purpose of this paper is to describe the functions of ‘where’-based relative elements in six Balkan languages, paying particular attention to non-standard varieties. Relative elements based on an originally interrogative pronoun meaning ‘where’ are attested in all Balkan languages and, more generally, in all European languages. In accordance with the locative meaning of the original pronoun, ‘where’-based relative elements are primarily used to relativize locatives. However, it will be shown that in some Balkan languages, and especially in non-standard varieties, these elements have extended their functional domain. This process does not appear to be random, but rather to pattern with the following hierarchy: locative > unspecific connector > other syntactic positions (indirect/direct object, subject). Additionally, ‘where’-based relative elements will be compared with ‘what’-based ones in order to highlight common patterns of development.
The present investigation targets the phenomenon commonly called control. Many languages, including German and Polish, employ non-finite clauses (besides finite clauses) as propositional complements. The subject of these complement clauses is left unexpressed and must generally be interpreted co-referentially with the subject or object of the matrix clause (subject or object control). However, there are also infinitive-selecting verbs that do not allow for a co-referential interpretation of the embedded subject; semantically, the embedded infinitives of these anti-control verbs are thus less dependent on or less unifiable with the matrix proposition. In Polish anti-control constructions, non-finite complements are overtly marked with the complementizer żeby, suggesting that they are structurally more complex (namely, containing a C-projection) than the non-finite complements in control constructions lacking żeby (modulo special contexts, viz. 'control switch'). In a comparative perspective, the paper brings corpus-linguistic and experimental evidence to bear on the question whether, surface appearances notwithstanding, the infinitival complements of anti-control verbs in German should similarly be analyzed as truly sentential, i.e., C-headed structures.
The paper reports the results of the curation project ChatCorpus2CLARIN. The goal of the project was to develop a workflow and resources for the integration of an existing chat corpus into the CLARIN-D research infrastructure for language resources and tools in the Humanities and the Social Sciences (http://clarin-d.de). The paper presents an overview of the resources and practices developed in the project, describes the added value of the resource after its integration and discusses, as an outlook, to what extent these practices can be considered best practices which may be useful for the annotation and representation of other CMC and social media corpora.
This study explores how ‘gatherings’ turn into ‘encounters’ in a virtual world (VW) context. Most communication technologies enable only focused encounters between distributed participants, but in VWs both gatherings and encounters can occur. We present close sequential analysis of moments when, after a silent gathering, interaction among participants in a VW is gradually resumed, and also investigate the social actions in the verbal (re-)opening turns. Our findings show that, as in face-to-face situations, participants in VWs often use different types of embodied resources to achieve the transition, rather than relying on verbal means only. However, the transition process in VWs has distinctive characteristics compared to the one in face-to-face situations. We discuss how participants in a VW use virtually embodied pre-beginnings to display what we call encounter-readiness, instead of displaying lack of presence by avatar stillness. The data comprise 40 episodes of video-recorded team interactions in a VW.
This paper focuses on the first Slavonic-Romanian lexicons, compiled in the second half of the 17th century, and their use(rs), proposing a method of investigating the manner in which lexical information available in the above corpus relates, if at all, to the vocabulary of texts from the same period. We chose to investigate their relation to an anonymous Old Testament translation made from Church Slavonic, also from the second half of the 17th century, which is supposed to have been produced in the same geographical area, in the same Church Slavonic school, or even by the same author as the lexicons. After applying a lemmatizer to both the Biblical text (Books of Genesis and Daniel) and the Romanian material from the lexicons, we analyse the results and complement the statistical analysis with a series of case studies, focusing on some common lexemes that might be an indicator of the relatedness of the texts. Even if the analysis points out that the lexicons might not have been compiled as a tool for the translation of religious texts, it proves to be a useful method that reveals interesting data and provides the basis for more extensive approaches.
This paper presents the application of the <tiger2/> format to various linguistic scenarios with the aim of making it the standard serialisation for the ISO 24615 [1] (SynAF) standard. After outlining the main characteristics of both the SynAF metamodel and the <tiger2/> format, as extended from the initial Tiger XML format [2], we show through a range of different language families how <tiger2/> covers a variety of constituency and dependency based analyses.
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications for aspects of use of lexical items. These aspects include ‘(lexical) meaning’ in a very broad sense and involve different dimensions; they are established in, and emerge from, the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word’s usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word’s use, and to the common and distinctive aspects of any two words.
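A minimal sketch of the collocation-profile idea described above, assuming a toy corpus and window size (the tutorial's actual methods are considerably more elaborate): each word is summarized by the counts of its neighbours, and profiles are compared by cosine similarity.

```python
from collections import Counter
from math import sqrt

def collocation_profile(corpus, target, window=2):
    """Count co-occurring words within +/-window of each occurrence of target."""
    profile = Counter()
    for sentence in corpus:
        for i, tok in enumerate(sentence):
            if tok == target:
                lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        profile[sentence[j]] += 1
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse count profiles."""
    dot = sum(p[w] * q.get(w, 0) for w in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Toy corpus: 'coffee' and 'tea' occur in near-identical contexts
corpus = [
    "the hot coffee was strong".split(),
    "the hot tea was strong".split(),
    "the cold beer was fresh".split(),
]
sim = cosine(collocation_profile(corpus, "coffee"),
             collocation_profile(corpus, "tea"))
```

Words with similar usage spectra end up with similar profiles, which is what the self-organizing visualizations then lay out in two dimensions.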
To build a comparable Wikipedia corpus of German, French, Italian, Norwegian, Polish and Hungarian for contrastive grammar research, we used a set of XSLT stylesheets to transform the MediaWiki annotations to XML. Furthermore, the data has been annotated with word-class information using different taggers. The outcome is a corpus with rich metadata and linguistic annotation that can be used for multilingual research on various linguistic topics.
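The project itself used XSLT stylesheets; purely as an illustrative stand-in, a sketch of the same kind of markup-to-XML transformation for a tiny subset of MediaWiki syntax (headings, bold) might look as follows. The element names are invented, not the corpus schema.

```python
import re
import xml.etree.ElementTree as ET

def wiki_to_xml(wikitext):
    """Convert a tiny subset of MediaWiki markup (headings, bold) to XML."""
    article = ET.Element("article")
    for line in wikitext.splitlines():
        m = re.match(r"^(=+)\s*(.*?)\s*\1$", line)
        if m:
            # '== Title ==' becomes <head level="2">Title</head>
            head = ET.SubElement(article, "head", level=str(len(m.group(1))))
            head.text = m.group(2)
        elif line.strip():
            # plain lines become paragraphs; bold markers are stripped
            p = ET.SubElement(article, "p")
            p.text = re.sub(r"'''(.*?)'''", r"\1", line)
    return ET.tostring(article, encoding="unicode")

xml = wiki_to_xml("== Grammatik ==\nDas ist '''wichtig'''.")
```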
A comparison between morphological complexity measures: typological data vs. language corpora
(2016)
Language complexity is an intriguing phenomenon argued to play an important role in both language learning and processing. The need to compare languages with regard to their complexity resulted in a multitude of approaches and methods, ranging from accounts targeting specific structural features to global quantification of variation more generally. In this paper, we investigate the degree to which morphological complexity measures are mutually correlated in a sample of more than 500 languages of 101 language families. We use human expert judgements from the World Atlas of Language Structures (WALS), and compare them to four quantitative measures automatically calculated from language corpora. These consist of three previously defined corpus-derived measures, which are all monolingual, and one new measure based on automatic word-alignment across pairs of languages. We find strong correlations between all the measures, illustrating that both expert judgements and automated approaches converge to similar complexity ratings, and can be used interchangeably.
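As a hedged sketch of what a corpus-derived complexity measure looks like in practice (the paper's three monolingual measures and the word-alignment measure are more sophisticated), one can use the type-token ratio of word forms as a crude proxy for morphological richness and correlate it with expert scores. Both the toy "corpora" and the expert judgements below are invented.

```python
from math import sqrt

def ttr(tokens):
    """Type-token ratio: a crude corpus-derived proxy for morphological richness."""
    return len(set(tokens)) / len(tokens)

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented mini-corpora with increasing numbers of distinct word forms
corpora = {
    "isolating":     "the dog see the dog the cat see the cat".split(),
    "fusional":      "dogs saw dog cats saw cat the the see see".split(),
    "agglutinative": "dog dogs dogged cat cats catted see saw sees seen".split(),
}
expert = [1.0, 2.0, 3.0]  # hypothetical WALS-style complexity judgements
corpus_scores = [ttr(corpora[k]) for k in ("isolating", "fusional", "agglutinative")]
r = pearson(expert, corpus_scores)
```

A high correlation between the two rankings is the kind of convergence between expert judgements and automated measures the paper reports.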
Authors like Fillmore 1986 and Goldberg 2006 have made a strong case for regarding argument omission in English as a lexical and construction-based affordance rather than one based on general semantico-pragmatic constraints. They do not, however, address the question of how grammatical restrictions on null complementation might interact with broader narrative conventions, in particular those of genre. In this paper, we attempt to remedy this oversight by presenting a comprehensive overview of genre-based argument omissions and offering a construction-based analysis of genre-based omission conventions. We consider five genre-based omission types: instructional imperatives (Culy 1996, Bender 1999), labelese, diary style (Haegeman 1990), match reports (Ruppenhofer 2004) and quotative clauses. We show that these omission types share important traits; all, for example, have anaphoric rather than indefinite construals. We also show, however, that the omission types differ from each other in idiosyncratic ways. We then address several interrelated representational problems posed by the grammatical treatment of genre-based omissions. For example, the constructions that represent genre-based omission conventions must interact with the lexical entries of verbs, many of which do not generally permit omitted arguments. Accordingly, we offer constructional analyses of genre-based omissions that allow constructions to override lexical valence constraints.
Ph@ttSessionz and Deutsch heute are two large German speech databases. They were created for different purposes: Ph@ttSessionz to test Internet-based recordings and to adapt speech recognizers to the voices of adolescent speakers, Deutsch heute to document regional variation of German. The databases differ in their recording technique, the selection of recording locations and speakers, elicitation mode, and data processing.
In this paper, we outline how the recordings were performed, how the data was processed and annotated, and how the two databases were imported into a single relational database system. We present acoustical measurements on the digit items of both databases. Our results confirm that the elicitation technique affects the speech produced, that f0 is quite comparable despite different recording procedures, and that large speech technology databases with suitable metadata may well be used for the analysis of regional variation of speech.
Several previous attempts have been made to annotate utterances of verbal feedback in English with communicative functions. Here, we suggest an annotation scheme for verbal and non-verbal feedback utterances in French, including the categories base, attitude, previous and visual. The data comprise conversations, map tasks and negotiations from which we extracted ca. 13,000 candidate feedback utterances and gestures. Twelve students were recruited for the annotation campaign of ca. 9,500 instances. Each instance was annotated by between 2 and 7 raters. The evaluation of the annotation agreement resulted in an average best-pair kappa of 0.6. While the base category, with the values acknowledgement, evaluation, answer, elicit and other, achieves good agreement, this is not the case for the other main categories. The data sets, which also include automatic extractions of lexical, positional and acoustic features, are freely available and will be used further for machine learning classification experiments to analyse the form-function relationship of feedback.
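The kappa statistic reported above corrects raw agreement for chance. As a minimal illustration (the abbreviated label names are hypothetical shorthands, not the scheme's actual values), Cohen's kappa for a single rater pair can be computed as:

```python
from collections import Counter

def cohen_kappa(r1, r2):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    # chance agreement: product of each label's marginal probabilities
    expected = sum(c1[l] / n * c2[l] / n for l in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Two raters labelling five feedback instances
k = cohen_kappa(["ack", "ack", "eval", "answer", "ack"],
                ["ack", "eval", "eval", "answer", "ack"])
```

A best-pair kappa averages this statistic over the best-agreeing rater pairs per instance set.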
The main objective of this article is to describe the current activities at the Mannheim Institute for German Language regarding the implementation of a domain-specific ontology for German grammar. We differentiate ontology bases from ontology management systems, point out the benefits of database-driven solutions, and go step by step through all phases of the ontology lifecycle. In order to demonstrate the practical use of our approach, we outline the interface between our ontology and the grammis web information system, and compare the ontology-based retrieval mechanism with traditional full-text search.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information which are directly or indirectly provided in the blog or can be obtained by means of statistical analysis or computational tools. At this point, only directly available information (e.g. the title of the blog post, the name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions as well as for the development of NLP methods and techniques (e.g. for authorship detection).
Most cultures have metaphors for time that involve movement, for example, ‘time passes’. Although time is objectively measured, it is subjectively understood, as we can perceive time as stationary, whereby we move towards future events, or we can perceive ourselves as stationary, with time moving past us and events moving towards us. This paper reports a series of studies that first examines whether people think about time in a metaphor-consistent manner (Study 1) and then explores the relationship between ‘time perspective’, level of perceived personal agency, and time representations (Study 2), the relationship between emotional experiences and time representation (Study 3), and whether this relationship is bidirectional by manipulating either emotional experiences (Study 4) or time representation (Study 5). Results provide bidirectional evidence for an ego-moving representation of time, with happiness eliciting more agentic control, and evidence for a time-moving passivity associated with emotional experiences of anxiety and depression. This bidirectional relationship suggests that our representation of time is malleable, and therefore, current emotional experiences may change through modification of time representations.
We present an implemented XML data model and a new, simplified query language for multi-level annotated corpora. The new query language involves automatic conversion of queries into the underlying, more complicated MMAXQL query language. It supports queries for sequential and hierarchical, but also associative (e.g. coreferential) relations. The simplified query language has been designed with non-expert users in mind.
A key difference between traditional humanities research and the emerging field of digital humanities is that the latter aims to complement qualitative methods with quantitative data. In linguistics, this means the use of large corpora of text, which are usually annotated automatically using natural language processing tools. However, these tools do not exist for historical texts, so scholars have to work with unannotated data. We have developed a system for systematic iterative exploration and annotation of historical text corpora, which relies on an XML database (BaseX) and in particular on the Full Text and Update facilities of XQuery.
Linguistic query systems are special purpose IR applications. We present a novel state-of-the-art approach for the efficient exploitation of very large linguistic corpora, combining the advantages of relational database management systems (RDBMS) with the functional MapReduce programming model. Our implementation uses the German DEREKO reference corpus with multi-layer linguistic annotations and several types of text-specific metadata, but the proposed strategy is language-independent and adaptable to large-scale multilingual corpora.
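As a rough sketch of the MapReduce side of such a design (the actual system combines an RDBMS with MapReduce over DEREKO; the partitioning and query below are invented), a map step counts query hits per document within each corpus partition, and a reduce step merges the partial counts:

```python
from collections import Counter
from functools import reduce

def map_partition(docs, query_token):
    """Map step: count query-token hits per document within one corpus partition."""
    counts = Counter()
    for doc_id, tokens in docs:
        counts[doc_id] += tokens.count(query_token)
    return counts

def reduce_counts(a, b):
    """Reduce step: merge partial hit counts from two partitions."""
    a.update(b)  # Counter.update adds counts rather than replacing them
    return a

# Two invented corpus partitions; document 'd1' spans both
partitions = [
    [("d1", ["der", "Hund", "der"]), ("d2", ["die", "Katze"])],
    [("d1", ["der", "Garten"]), ("d3", ["der"])],
]
hits = reduce(reduce_counts,
              (map_partition(p, "der") for p in partitions),
              Counter())
```

Because the reduce step is associative, partitions can be mapped in parallel and merged in any order, which is what makes the model attractive for very large corpora.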
So far, there have been few descriptions of structures capable of storing lexicographic data, ISO 24613:2008 being one of the latest. Another one is by Spohr (2012), who designs a multifunctional lexical resource which is able to store data of different types of dictionaries in a user-oriented way. Technically, his design is based on the principle of a hierarchical XML/OWL (eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route, describing a model based on entities and the relations between them; MySQL, a relational database management system based on SQL (Structured Query Language), stores data in tables together with definitions of the relations between them. The model was developed in the context of the project "Scientific eLexicography for Africa", and the lexicographic database built on it will be implemented with MySQL. The principles of the ISO model and of Spohr's model are adhered to, with one major difference in the implementation strategy: we do not place the lemma in the centre of attention, but the sense description — all other elements, including the lemma, depend on the sense description. This article also describes the contained lexicographic data sets and how they have been collected from different sources. As our aim is to compile several prototypical internet dictionaries (a monolingual Northern Sotho dictionary, a bilingual learners' Xhosa–English dictionary and a bilingual Zulu–English dictionary), we describe the necessary microstructural elements for each of them and the principles we adhere to when designing different ways of accessing them. We plan to make the model and the (empty) database, with all graphical user interfaces that have been developed, freely available by mid-2015.
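A minimal sketch of a sense-centred relational schema in this spirit, using SQLite in place of MySQL so the example is self-contained (table and column names are illustrative, not the project's): the sense description is the hub table, and lemmas in each language reference it.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- the sense description is the central entity
    CREATE TABLE sense (
        sense_id    INTEGER PRIMARY KEY,
        description TEXT NOT NULL
    );
    -- lemmas depend on a sense, not the other way round
    CREATE TABLE lemma (
        lemma_id INTEGER PRIMARY KEY,
        form     TEXT NOT NULL,
        language TEXT NOT NULL,
        sense_id INTEGER NOT NULL REFERENCES sense(sense_id)
    );
""")
conn.execute("INSERT INTO sense VALUES (1, 'domesticated canine')")
conn.execute("INSERT INTO lemma VALUES (1, 'inja', 'zu', 1)")
conn.execute("INSERT INTO lemma VALUES (2, 'dog',  'en', 1)")

# access route: from a sense description to all lemmas expressing it
rows = conn.execute("""
    SELECT l.form FROM lemma l
    JOIN sense s ON s.sense_id = l.sense_id
    WHERE s.description = 'domesticated canine'
    ORDER BY l.form
""").fetchall()
```

Putting the sense at the centre means one sense record can serve monolingual and bilingual access structures alike.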
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that are easy to retrieve. A typical relation type that meets these requirements are pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer has already in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
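Evaluating an extraction system against such relation tables essentially reduces to set comparison of extracted versus gold relation instances. A minimal sketch (the food pairs are invented, not from the gold standard):

```python
def precision_recall(gold_pairs, extracted_pairs):
    """Score an extraction system against a gold relation table of item pairs."""
    gold, extracted = set(gold_pairs), set(extracted_pairs)
    tp = len(gold & extracted)  # correctly extracted relation instances
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# invented 'consumed together' instances
gold = [("bread", "butter"), ("fish", "chips"), ("wine", "cheese")]
system = [("bread", "butter"), ("wine", "cheese"), ("tea", "beer")]
p, r = precision_recall(gold, system)
```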
Large classes at universities (>1,600 students) create their own challenges for teaching and learning. Audience feedback is lacking, and fine-tuning lectures, courses and exam preparation to address individual needs is very difficult to achieve. At RWTH Aachen University, a course concept and a knowledge-map learning tool aimed at supporting individual students in preparing for exams in information science through theme-based exercises were developed and evaluated. The tool was grounded in the notion of self-regulated learning, with the goal of enabling students to learn independently.
This paper argues that a lectometric approach may shed light on the distinction between destandardization and demotization, a pair of concepts that plays a key role in ongoing discussions about contemporary trends in standard languages. Instead of a binary distinction, the paper proposes three different types of destandardization, defined as quantitatively measurable changes in a stratigraphic language continuum. The three types are illustrated on the basis of a case study describing changes in the vocabulary of Dutch in The Netherlands and Flanders between 1990 and 2010.
We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP- and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system.
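A toy hand-written decision tree over candidate features gives the flavour of the approach; the features, thresholds and the tree itself are invented for illustration, not those of the paper's learned trees.

```python
def classify(candidate):
    """Toy decision tree deciding whether a candidate is the pronoun's antecedent.

    Feature names and thresholds are illustrative only."""
    if candidate["distance"] > 2:           # candidate too far back in the dialogue
        return "not-antecedent"
    if not candidate["agrees_in_number"]:   # number mismatch with the pronoun
        return "not-antecedent"
    if candidate["is_np"]:
        return "antecedent"
    # non-NP candidates (events, propositions) need stronger recency evidence
    return "antecedent" if candidate["recency_rank"] == 1 else "not-antecedent"

candidates = [
    {"distance": 1, "agrees_in_number": True, "is_np": True, "recency_rank": 2},
    {"distance": 4, "agrees_in_number": True, "is_np": True, "recency_rank": 1},
]
labels = [classify(c) for c in candidates]
```

In the learned setting, such trees are induced automatically from annotated dialogues rather than written by hand, and feature selection determines which tests appear.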
Creating and maintaining metadata for various kinds of resources requires appropriate tools to assist the user. The paper presents the metadata editor ProFormA for the creation and editing of CMDI (Component Metadata Infrastructure) metadata in web forms. This editor supports a number of CMDI profiles currently being provided for different types of resources. Since the editor is based on XForms and server-side processing, users can create and modify CMDI files in their standard browser without the need for further processing. Large parts of ProFormA are implemented as web services in order to reuse them in other contexts and programs.
In this paper we present a new approach to lexicographical design for the description of German speech act verbs. This approach is based on an action-theoretical semantic conception. The several conditions for linguistic action provide the basis for the elaboration of the central semantic features. The systematic relationship of these features is reflected in the organization of a lexical database which allows various possibilities of access to different types of lexical information.
In the following paper we shall give an outline of the semantic framework for describing speech act verbs, i.e. verbs of communication, with the practical goal of a semantic database for a (dictionary of) synonymy of German speech act verbs which enables the user not only to find a list of synonymous verbs but also to gain insight into the semantic relations between the words.
The semantic framework is based on
(i) a set of conditions for performing speech acts as the relevant domain of reference
(ii) the introduction of a notion of situation, or better, a type of situation
The performative as well as the descriptive use of the verbs can be reduced to their fundamental dependency on the situations in which they are used: on the one hand with regard to the possibility of the action itself, and on the other hand with regard to the possibility of their designation. For both ways of use the relevant aspects of the situation constitute the necessary conditions.
This paper presents three electronic collections of polarity items: (i) negative polarity items in Romanian, (ii) negative polarity items in German, and (iii) positive polarity items in German. The presented collections are part of a linguistic resource on lexical units with highly idiosyncratic occurrence patterns. The motivation for collecting and documenting polarity items was to provide a solid empirical basis for linguistic investigations of these expressions. Our database provides general information about the collected items, specifies their syntactic properties, and describes the environment that licenses a given item. For each licensing context, examples from various corpora and the Internet are introduced. Finally, the type of polarity (negative or positive) and the class (superstrong, strong, weak or open) associated with a given item is specified. Our database is encoded in XML and is available via the Internet, offering dynamic and flexible access.
The authors present a multilingual electronic database of lexical items with idiosyncratic occurrence patterns. Currently, our database consists of: (1) a collection of 444 bound words in German; (2) a collection of 77 bound words in English; (3) a collection of 58 negative polarity items in Romanian; (4) a collection of 84 negative polarity items in German; and (5) a collection of 52 positive polarity items in German. The database is encoded in XML and is available via the Internet, offering dynamic and flexible access.
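An XML entry in such a database might be structured along the following lines; the element layout is hypothetical, not the published schema, and Python's ElementTree stands in for whatever access layer the resource actually uses. Flexible access then amounts to XPath-style queries over entries.

```python
import xml.etree.ElementTree as ET

# Hypothetical entry for the German NPI 'jemals' with two licensing contexts
entry = ET.fromstring("""
<item id="jemals" language="de" polarity="negative" class="weak">
  <licenser type="negation">
    <example>Niemand hat das jemals bezweifelt.</example>
  </licenser>
  <licenser type="question">
    <example>Hat das jemals jemand bezweifelt?</example>
  </licenser>
</item>
""")

# retrieve all examples for one licensing context type
negation_examples = [ex.text for ex in
                     entry.findall("licenser[@type='negation']/example")]
```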
This paper outlines the generation process of a specific computational linguistic representation termed the Multilingual Time Map, conceptually a multi-tape finite-state transducer encoding linguistic data at different levels of granularity. The first component acquires phonological data from syllable-labelled speech data, the second component defines feature profiles, the third component generates feature hierarchies and augments the acquired data with the defined feature profiles, and the fourth component displays the Multilingual Time Map as a graph.
One of the most popular techniques used in HPSG-based studies to describe linguistic phenomena is the raising mechanism. Besides ordinary raising verbs or adjectives, this tool has been applied for handling verbal complexes and discontinuous constituents, among other phenomena. In this paper, a new application for raising within the HPSG paradigm will be discussed, thereby investigating data from the prepositional domain. We will analyze linguistic properties of word combinations in German consisting of a preposition, a noun, and another preposition (such as auf Grund von (‘by virtue of’)), thus arguing that raising is the most appropriate method for satisfactorily describing the crucial syntactic features which are typical for those expressions. The objective of this paper is thus to demonstrate the efficiency of the raising mechanism as used in HPSG, and therefore, to emphasize the importance of designing a satisfactory uniform theory of raising within this grammar framework.
The understanding of story variation, whether motivated by cultural currents or other factors, is important for applications of formal models of narrative such as story generation or story retrieval. We present the first stage of an experiment to elicit natural narrative variation data suitable for evaluation with respect to story similarity, for qualitative and quantitative analysis of story variation, and also for data processing. We also present a few preliminary results from the first stage of the experiment, using Red Riding Hood and Romeo and Juliet as base texts.
XML has been designed for creating structured documents, but the information that is encoded in these structures is, by definition, out of scope for XML. Additional sources that are normally not easily interpretable by computers, such as documentation, are needed to determine the intention of specific tags in a tag-set. The Component Metadata Infrastructure (CMDI) takes a rather pragmatic approach to foster interoperability between XML instances in the domain of metadata descriptions for language resources. This paper gives an overview of this approach.
This paper presents the current results of an ongoing research project on corpus distribution of prepositions and pronouns within Polish preposition-pronoun contractions. The goal of the project is to provide a quantitative description of Polish preposition-pronoun contractions taking into consideration morphosyntactic properties of their components. It is expected that the results will provide a basis for a revision of the traditionally assumed inflectional paradigms of Polish pronouns and, thus, for a possible remodeling of these paradigms. The results of corpus-based investigations of the distribution of prepositions within preposition-pronoun contractions can be used for grammar-theoretical and lexicographic purposes.
The paper deals with the use of ICH WEIß NICHT (‘I don’t know’) in German talk-in-interaction. Pursuing an Interactional Linguistics approach, we identify different interactional uses of ICH WEIß NICHT and discuss their relationship to variation in argument structure (SV(O), (O)VS, V-only). After ICH WEIß NICHT with full complementation, speakers emphasize their lack of knowledge or display reluctance to answer. After variants without an object complement, in contrast, speakers display uncertainty about the truth of the following proposition or about its sufficiency as an answer. Thus, while uses with both subject and object tend to close a sequence or display lack of knowledge, responses without an object function as a prepositioned epistemic hedge or a pragmatic marker framing the following TCU. When ICH WEIß NICHT is used in response to a statement, it indexes disagreement, independently of all complementation patterns.
This paper presents the system architecture as well as the underlying workflow of the Extensible Repository System of Digital Objects (ERDO), which has been developed for the sustainable archiving of language resources within the Tübingen CLARIN-D project. In contrast to other approaches focusing on archiving experts, the described workflow can be used by researchers without prior knowledge in the field of long-term storage for transferring data from their local file systems into a persistent repository.
This paper describes the lexical database tool LOLA (Linguistic-Oriented Lexical database Approach), which has been developed for the construction and maintenance of lexicons for the machine translation system LMT. First, the requirements such a tool should meet are discussed; then LMT, the lexical information it requires, and some issues concerning vocabulary acquisition are presented. Afterwards, the architecture and the components of the LOLA system are described, and it is shown how we tried to meet the requirements worked out earlier. Although LOLA was originally designed and implemented for the German-English LMT prototype, it has aimed from the beginning at a representation of lexical data that can be reused for other LMT or MT prototypes or even other NLP applications. A special point of discussion will therefore be the adaptability of the tool and its components as well as the reusability of the lexical data stored in the database for lexicon development for LMT or for other applications.
Connectives are conjunctions, prepositions, adverbs and other particles which share the function of encoding semantic relations between sentences, or rather, between semantic objects some of which can be meanings of sentences. The relata linked by any such relation will fall into one of four distinct categories: they will be physical objects, states of affairs, propositions, or pragmatic options (the atoms of human interaction). Physical objects constitute the conceptual domain of space, states of affairs the domain of time, propositions the epistemic domain, and pragmatic options the deontic domain. The relations encodable in any of these domains can be divided into four basic types: similarity relations, situating relations, conditional relations, and causal relations. Conceptual domains and types of relations define the universe of possible connections between semantic objects.
Connectives differ as to the interpretations they permit in terms of conceptual domains and types of relations. Very few connectives are specialized for relata of one particular category and relations of one particular type. Possible examples in German are später (‘later on’) and zwischenzeitlich (‘in the meantime’), which encode situating relations between states of affairs. Other connectives are specialized for relata of one particular category, but are underspecified with respect to the type of relation. An example is German sobald (‘as soon as’), which can only connect states of affairs, but accepts situating, conditional and causal readings. Connectives of a third group are specialized for relations of a certain type, but are underspecified with respect to the category of the relata. Examples of this kind are German weil (‘because’) and trotzdem (‘nevertheless’), which encode causal relations, but accept states of affairs, propositions and pragmatic options as their relata. Connectives of a fourth group are underspecified both for the category of relata and the type of relation. An example is German da (‘there’), which accepts relata of any category and allows for situating, conditional and causal readings. Connectives like und (‘and’) and oder (‘or’) exhibit an even higher degree of underspecification, in that they allow for all kinds of relations and relata.
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of linguistic theories that take social interaction, involving language, into account. This paper introduces the corpora and datasets of a project scrutinizing this kind of feedback utterances in French. We present the genesis of the corpora (for a total of about 16 hours of transcribed and phone force-aligned speech) involved in the project. We introduce the resulting datasets and discuss how they are being used in on-going work with focus on the form-function relationship of conversational feedback. All the corpora created and the datasets produced in the framework of this project will be made available for research purposes.
This paper presents a survey on hate speech detection. Given the steadily growing body of social media content, the amount of online hate speech is also increasing. Due to the massive scale of the web, methods that automatically detect hate speech are required. Our survey describes key areas that have been explored to automatically recognize these types of utterances using natural language processing. We also discuss limits of those approaches.
This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis.
We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.
This article presents a revised version of GAT, a transcription system first developed by a group of German conversation analysts and interactional linguists in 1998. GAT tries to follow as many principles and conventions as possible of the Jefferson-style transcription used in Conversation Analysis, yet proposes some conventions which are more compatible with linguistic and phonetic analyses of spoken language, especially for the representation of prosody in talk-in-interaction. After ten years of use by researchers in conversation and discourse analysis, the original GAT has been revised, against the background of past experience and in light of new necessities for the transcription of corpora arising from technological advances and methodological developments over recent years. The present text makes GAT accessible for the English-speaking community. It presents the GAT 2 transcription system with all its conventions and gives detailed instructions on how to transcribe spoken interaction at three levels of delicacy: minimal, basic and fine. In addition, it briefly introduces some tools that may be helpful for the user: the German online tutorial GAT-TO and the transcription editing software FOLKER.
A tale of many stories: explaining policy diffusion between European higher education systems
(2013)
The thesis ”A Tale of Many Stories - Explaining Policy Diffusion between European Higher Education Systems" systematically examines diffusion processes and their effects with regard to a rather neglected policy area – the case of European higher education policy. The thesis contributes to the slowly growing number of comparative and mechanism-based studies on policy diffusion and represents the first study on the diffusion of policies between European higher education systems. The main aim is to contrast and compare testable and coherent explanatory models on the functioning of different diffusion mechanisms. Three sets of explanatory models on the relationship between variables triggering and conditioning diffusion mechanisms and their impact on policy adoption are drawn from mechanism-based thinking on policy diffusion: on learning, socialization, and externalities. These approaches conceptualize the policy process in terms of interdependencies between international and national actors. Explanatory models based on assumptions about domestic policies and the common responses of countries to similar policy problems extend this theoretical framework. The thesis is based on event history modelling of policy change and adoption in the higher education systems of 16 West European countries between the years 1980 and 1998. Overall, 14 policy items describing performance-orientated reforms for public universities, ranging from the adoption of external quality assurance systems to tuition fees, are examined. Empirically, the main research question is what international, national and policy-specific factors cause and condition diffusion processes and the adoption of public policies. Evidence can be found for and against all of the four theoretical approaches tested. In comparison, many of the assumptions related to interdependencies lack robustness, whereas the common response model is the most stable one.
This does not mean that explanatory models based on interdependent decision-making are not suitable for analysing policy diffusion in higher education. Rather, interdependency is a multi-dimensional concept that requires a comparative assessment of diffusion mechanisms. Some of the explanatory factors based on interdependent decision-making are still supported by the empirical analysis, though. From this point of view, the recommendation for analysing diffusion is to start with a model based on domestic politics that is successively extended by explanatory factors dealing with interdependencies between international and national actors. Diffusion variables matter, but they are only one side of the tale on policy diffusion.
In this paper we present an evaluation of rule-based morphological components for German for use in an interactive editing environment. The criteria for the evaluation are deduced from the intended use of these components, namely availability, performance, programming interfaces, and analysis quality. We evaluated systems that have been developed and maintained for decades as well as new systems. However, we note serious general shortcomings when looking closer at recent implementations and come to the conclusion that the oldest system is the only one that satisfies our requirements.
This paper describes work in progress on I5, a TEI-based document grammar for the corpus holdings of the Institut für Deutsche Sprache (IDS) in Mannheim and the text model used by IDS in its work. The paper begins with background information on the nature and purposes of the corpora collected at IDS and the motivation for the I5 project (section 1). It continues with a description of the origin and history of the IDS text model (section 2), and a description (section 3) of the techniques used to automate, as far as possible, the preparation of the ODD file documenting the IDS text model. It ends with some concluding remarks (section 4). A survey of the additional features of the IDS-XCES realization of the IDS text model is given in an appendix.
The paper presents an XML schema for the representation of genres of computer-mediated communication (CMC) that is compliant with the encoding framework defined by the TEI. It was designed for the annotation of CMC documents in the project Deutsches Referenzkorpus zur internetbasierten Kommunikation (DeRiK), which aims at building a corpus on language use in the most popular CMC genres on the German-speaking Internet. The focus of the schema is on those CMC genres which are written and dialogic―such as forums, bulletin boards, chats, instant messaging, wiki and weblog discussions, microblogging on Twitter, and conversation on “social network” sites.
The schema provides a representation format for the main structural features of CMC discourse as well as elements for the annotation of those units regarded as “typical” for language use on the Internet. The schema introduces an element <posting>, which describes stretches of text that are sent to the server by a user at a certain point in time. Postings are the main constituting elements of threads and logfiles, which, in our schema, are the two main types of CMC macrostructures. For the microlevel of CMC documents (that is, the structure of the <posting> content), the schema introduces elements for selected features of Internet jargon such as emoticons, interaction words and addressing terms. It allows for easy anonymization of CMC data for purposes in which the annotated data are made publicly available and includes metadata which are necessary for referencing random excerpts from the data as references in dictionary entries or as results of corpus queries.
Documentation of the schema as well as encoding examples can be retrieved from the web at http://www.empirikom.net/bin/view/Themen/CmcTEI. The schema is meant to be a core model for representing CMC that can be modified and extended by others according to their own specific perspectives on CMC data. It could be a first step towards an integration of features for the representation of CMC genres into a future new version of the TEI Guidelines.
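A minimal sketch of what a `<posting>`-based encoding might look like, built with Python's standard xml.etree module; the attribute names (`who`, `when`) and the emoticon markup shown here are illustrative assumptions, not the DeRiK project's exact schema.

```python
import xml.etree.ElementTree as ET

# Build a toy thread: the macrostructure (thread of postings) and one
# microlevel feature (an emoticon) from the schema described above.
thread = ET.Element("thread")
posting = ET.SubElement(thread, "posting",
                        who="#A02", when="2012-06-01T10:15:00")
p = ET.SubElement(posting, "p")
p.text = "Great idea "
emoticon = ET.SubElement(p, "emoticon")
emoticon.text = ":-)"

xml_string = ET.tostring(thread, encoding="unicode")
print(xml_string)
```

The `who` attribute pointing at a person record (rather than an inline user name) is one simple way to support the anonymization requirement mentioned above.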
This paper formulates a proposal for standardising spoken language transcription, as practised in conversation analysis, sociolinguistics, dialectology and related fields, with the help of the TEI guidelines. Two areas relevant to standardisation are identified and discussed: first, the macro structure of transcriptions, as embodied in the data models and file formats of transcription tools such as ELAN, Praat or EXMARaLDA; second, the micro structure of transcriptions as embodied in transcription conventions such as CA, HIAT or GAT. A two-step process is described in which first the macro structure is represented in a generic TEI format based on elements defined in the P5 version of the Guidelines. In the second step, character data in this representation is parsed according to the regularities of a transcription convention resulting in a more fine-grained TEI markup which is also based on P5. It is argued that this two step process can, on the one hand, map idiosyncratic differences in tool formats and transcription conventions onto a unified representation. On the other hand, differences motivated by different theoretical decisions can be retained in a manner which still allows a common processing of data from different sources. In order to make the standard usable in practice, a conversion tool—TEI Drop—is presented which uses XSL transformations to carry out the conversion between different tool formats (CHAT, ELAN, EXMARaLDA, FOLKER and Transcriber) and the TEI representation of transcription macro structure (and vice versa) and which also provides methods for parsing the micro structure of transcriptions according to two different transcription conventions (HIAT and cGAT). Using this tool, transcribers can continue to work with software they are familiar with while still producing TEI-conformant transcription files. The paper concludes with a discussion of the work needed in order to establish the proposed standard. 
It is argued that both tool formats and the TEI guidelines are in a sufficiently mature state to serve as a basis for standardisation. Most work consequently remains in analysing and standardising differences between different transcription conventions.
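The second step described above, parsing character data according to the regularities of a transcription convention, can be sketched as follows; the GAT-style pause notation "(0.5)" matches the conventions mentioned in the abstract, but the output element is a simplification of the TEI markup the proposal actually defines.

```python
import re

def parse_pauses(utterance: str) -> str:
    """Replace measured pauses like (0.5) in plain transcription text
    with explicit markup, as a toy version of micro-structure parsing."""
    return re.sub(r"\((\d+(?:\.\d+)?)\)",
                  r'<pause dur="\1"/>',
                  utterance)

print(parse_pauses("so (0.5) what do you mean"))
# so <pause dur="0.5"/> what do you mean
```

A full converter such as TEI Drop would apply a battery of such convention-specific rules (for pauses, overlaps, lengthening, etc.) to the generic TEI representation produced in step one.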
The paper will give a concise account of the theory of Lexical Event Structures. The paper has three objectives which correspond to the following three sections. In section 2 I will sketch the theory and discuss the empirical goals the theory pursues (section 2.1) and the semantic components Lexical Event Structures consist of (section 2.2). Section 3 is devoted to linguistic phenomena whose explanation depends on Lexical Event Structures. In section 3.1 I will briefly illustrate to what extent Lexical Event Structures are related to phenomena from five central empirical domains of lexical semantics, and in section 3.2 it will be shown how Lexical Event Structures function in a linking theory. Section 4 aims to show how the central semantic concepts in Lexical Event Structures can be anchored to concepts which are well-founded in cognitive science. Section 4.1 discusses the event concept employed and illustrates the relation between the perception of movements and the use of verbs of movement. Section 4.2 deals with the concept of volition with respect to the licensing conditions for intransitive verb passives. In section 4.3 the distinction between durativity and punctuality, which has proven relevant for a number of verb semantic phenomena, is tied to the way we perceive events and structure our own actions. Section 5 provides a conclusion.
We present SPLICR, the Web-based Sustainability Platform for Linguistic Corpora and Resources. The system is aimed at people who work in Linguistics or Computational Linguistics: a comprehensive database of metadata records can be explored in order to find language resources that could be appropriate for one’s specific research needs. SPLICR also provides a graphical interface that enables users to query and to visualise corpora. The project in which the system is developed aims at sustainably archiving the ca. 60 language resources that have been constructed in three collaborative research centres. Our project has two primary goals: (a) To process and to archive sustainably the resources so that they are still available to the research community in five, ten, or even 20 years' time. (b) To enable researchers to query the resources both on the level of their metadata as well as on the level of linguistic annotations. In more general terms, our goal is to enable solutions that leverage the interoperability, reusability, and sustainability of heterogeneous collections of language resources.
Abertura/Opening
(2010)
In this paper I explore the theoretical significance of phonologically conditioned gaps in word formation. The data support the original approach to gaps in Optimality Theory proposed by Prince & Smolensky (1993), which crucially involves MPARSE as a ranked and violable constraint. The alternative CONTROL model proposed by Orgun & Sprouse (1999) is found to be inadequate because of lost generalisations and technical flaws. It is shown that a careful distinction between various morphophonological effects (e.g. paradigm uniformity effects, phonological repair and ‘stem selection’) is necessary to shed light on the morphology–phonology interface. The data investigated here support affix-specific constraint rankings, but argue against any stratal organisation of morphology.
The Manatee corpus management system on which the Sketch Engine is built is efficient, but unable to harness the power of today’s multiprocessor machines. We describe a new, compatible implementation of Manatee which we develop in the Go language and report on the performance gains that we obtained.
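The parallelization idea behind the new implementation, fanning a query out over independent corpus shards, can be illustrated with a toy search, written here in Python rather than Go for consistency with the other sketches; the shard layout and all function names are hypothetical and not taken from Manatee or the Sketch Engine.

```python
from concurrent.futures import ThreadPoolExecutor

def search_shard(shard, query):
    """Return positions of the query token within one corpus shard."""
    return [i for i, tok in enumerate(shard) if tok == query]

def parallel_search(shards, query):
    """Search all shards concurrently and collect non-empty hit lists."""
    with ThreadPoolExecutor() as pool:
        results = pool.map(lambda s: search_shard(s, query), shards)
    return {n: hits for n, hits in enumerate(results) if hits}

shards = [["the", "cat", "sat"], ["on", "the", "mat"]]
print(parallel_search(shards, "the"))  # {0: [0], 1: [1]}
```

In Go, the same fan-out would typically use goroutines and a channel per shard, which is the kind of multiprocessor utilization the abstract refers to.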
Accentuation, Uncertainty and Exhaustivity - Towards a Model of Pragmatic Focus Interpretation
(2010)
This paper presents a model of pragmatic focus interpretation that is assumed to be part of a complete language comprehension model and that is inspired by Levelt's language processing model. The model is derived from our empirical data on the role of accentuation, prosodic indicators of uncertainty and context for pragmatic focus interpretation. In its present state, the model is restricted to these data, but nevertheless generates predictions.
This paper is concerned with a novel methodology for generating phonetic questions used in tree-based state tying for speech recognition. In order to implement a speech recognition system, language-dependent knowledge which goes beyond annotated material is usually required. The approach presented here generates phonetic questions for decision trees based on a feature table that summarizes the articulatory characteristics of each sound. On the one hand, this method allows better language-specific triphone models to be defined given only a feature table as linguistic input. On the other hand, the feature-table approach facilitates efficient definition of triphone models for other languages since again only a feature table for the language in question is required. The approach is exemplified with speech recognition systems for English and Thai.
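The feature-table idea can be sketched as follows: phones sharing a feature value are grouped into one candidate question for decision-tree state tying. The table entries and the question representation are illustrative assumptions, not the paper's actual feature tables for English or Thai.

```python
# Toy articulatory feature table: phone -> feature/value pairs.
FEATURES = {
    "p": {"voiced": False, "manner": "stop"},
    "b": {"voiced": True,  "manner": "stop"},
    "s": {"voiced": False, "manner": "fricative"},
    "z": {"voiced": True,  "manner": "fricative"},
}

def questions_from_table(table):
    """Derive phonetic questions: each (feature, value) pair yields the
    set of phones it covers, usable as a yes/no split on triphone contexts."""
    questions = {}
    for phone, feats in table.items():
        for feat, value in feats.items():
            questions.setdefault((feat, value), set()).add(phone)
    return questions

qs = questions_from_table(FEATURES)
print(sorted(qs[("voiced", True)]))    # ['b', 'z']
print(sorted(qs[("manner", "stop")]))  # ['b', 'p']
```

Porting the recognizer to a new language then only requires supplying a new table, which is the portability argument made above.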
This study investigates high vowel laxing in the Louisiana French of the Lafourche Basin. Unlike Canadian French, in which the high vowels /i, y, u/ are traditionally described as undergoing laxing (to [I, Y, U]) in word-final syllables closed by any consonant other than a voiced fricative (see Poliquin 2006), Oukada (1977) states that in the Louisiana French of Lafourche Parish, any coda consonant will trigger high vowel laxing of /i/; he excludes both /y/ and /u/ from his discussion of high vowel laxing. The current study analyzes tokens of /i, y, u/ from pre-recorded interviews with three older male speakers from Terrebonne Parish. We measured the first and second formants and duration for high vowel tokens produced in four phonetic environments, crossing syllable type (open vs. closed) by consonant type (voiced fricative vs. any consonant other than a voiced fricative). Results of the acoustic analysis show optional laxing for /i/ and /y/ and corroborate the finding that high vowels undergo laxing in word-final closed syllables, regardless of consonant type. Data for /u/ show that the results vary widely by speaker, with the dominant pattern (shown by two out of three speakers) that of lowering and backing in the vowel space of closed syllable tokens. Duration data prove inconclusive, likely due to the effects of stress. The formant data published here constitute the first acoustic description of high vowels for any variety of Louisiana French and lay the groundwork for future study on these endangered varieties.
This paper presents Release 2.0 of the SALSA corpus, a German resource for lexical semantics. The new corpus release provides new annotations for German nouns, complementing the existing annotations of German verbs in Release 1.0. The corpus now includes around 24,000 sentences with more than 36,000 annotated instances. It was designed with an eye towards NLP applications such as semantic role labeling but will also be a useful resource for linguistic studies in lexical semantics.
The web portal Lehnwortportal Deutsch (lwp.ids-mannheim.de), developed at the Institute for the German Language (IDS), aims to provide unified access to existing and possibly new dictionaries of German loanwords in other languages. Internally, the lexicographical information is represented as a directed acyclic graph of relations between words. The graph abstracts from the idiosyncrasies of the individual component dictionaries. This paper explores two different strategies to make complex graph-based cross-dictionary queries in such a portal more accessible to users. The first strategy effectively hides the underlying graph structure, but allows users to assign scopes (internally defined in terms of the graph structure) to search criteria. A second type of search strategy directly formulates queries in terms of the relational graph structure. In this case, search results are not entries but n-tuples of words (metalemmata, loanwords, etyma); a query consists of specifying properties of these words and relations between them. A working prototype of an easy-to-use human-readable declarative query language is presented and ways to interactively construct queries are discussed.
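The tuple-style query idea from the second strategy can be sketched as a join over a small relation set; the relation names ("loaned_as", "derived_from") and the example words are invented for illustration and do not reflect the portal's actual query language or data.

```python
def find_triples(edges):
    """Join two relations on the shared metalemma to yield
    (metalemma, loanword, etymon) triples, i.e. a graph query whose
    results are word tuples rather than dictionary entries."""
    loans = {(m, l) for m, r, l in edges if r == "loaned_as"}
    etyma = {(m, e) for m, r, e in edges if r == "derived_from"}
    return [(m, l, e) for (m, l) in loans for (m2, e) in etyma if m == m2]

# Invented example data: one metalemma with a loanword and an etymon.
EDGES = [
    ("Zucker", "loaned_as", "cukier"),
    ("Zucker", "derived_from", "sukkar"),
]
print(find_triples(EDGES))  # [('Zucker', 'cukier', 'sukkar')]
```

Specifying properties of each tuple position and relations between them, as described above, amounts to adding filter conditions to such a join.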
Hierarchical predictive coding has been identified as a possible unifying principle of brain function, and recent work in cognitive neuroscience has examined how it may be affected by age–related changes. Using language comprehension as a test case, the present study aimed to dissociate age-related changes in prediction generation versus internal model adaptation following a prediction error. Event-related brain potentials (ERPs) were measured in a group of older adults (60–81 years; n = 40) as they read sentences of the form “The opposite of black is white/yellow/nice.” Replicating previous work in young adults, results showed a target-related P300 for the expected antonym (“white”; an effect assumed to reflect a prediction match), and a graded N400 effect for the two incongruous conditions (i.e. a larger N400 amplitude for the incongruous continuation not related to the expected antonym, “nice,” versus the incongruous associated condition, “yellow”). These effects were followed by a late positivity, again with a larger amplitude in the incongruous non-associated versus incongruous associated condition. Analyses using linear mixed-effects models showed that the target-related P300 effect and the N400 effect for the incongruous non-associated condition were both modulated by age, thus suggesting that age-related changes affect both prediction generation and model adaptation. However, effects of age were outweighed by the interindividual variability of ERP responses, as reflected in the high proportion of variance captured by the inclusion of by-condition random slopes for participants and items. We thus argue that – at both a neurophysiological and a functional level – the notion of general differences between language processing in young and older adults may only be of limited use, and that future research should seek to better understand the causes of interindividual variability in the ERP responses of older adults and its relation to cognitive performance.
The transition between phases of activities is a practical problem which participants in an interaction have to deal with routinely. In meetings, the sequence of phases of activity is often outlined by a written agenda. However, transitions still have to be accomplished by local interactional work of the participants. In a detailed conversation analytic case study based on video data, it is shown how participants collaboratively accomplish an emergent interactional state of affairs (a break-like activity) which differs widely from the state of affairs which was projected by a written agenda (the next presentation), although in doing so, the participants still show their continuous orientation to the agenda. The paper argues that the reconstruction of emergent developments in interaction calls for a multimodal analysis of interaction, because the fine-grained multimodal co-ordination of bodily and verbal resources provides for opportunities of sequentially motivated, relevant next actions. These, however, can amount to emergent activity sequences, which may be at odds with the activity types which are projected by an interactional agenda or expected on behalf of some institutional routine.
American English and German AI, AU observed in cognates such as Wein, wine, Haus, house are usually treated on a par, represented with the same initial vowel (cf. [ai], [au] for Am. Engl. and German). Yet, acoustic measurements indicate differences, as the relevant trajectories characteristically cross in Am. Engl., but not in German. These data may indicate consistency with the same initial target for these diphthongs in German, supporting the choice of the same symbol /a/ in phonemic representation, as opposed to distinct targets (and distinct initial phonemes) in American English.
The paper presents the process of developing the AirFrame database, a specialized lexical resource in which aviation terminology is defined in the form of semantic frames, following the methodology of the Berkeley FrameNet (FN). First, the structure of the database is presented, and then the methodology applied in developing and populating the database is described. The link between specialized aviation frames and general language semantic frames, of which frames defining entities, processes, attributes and events are particularly relevant, is discussed on the example of the semantic frame of Flight and its related frames. The paper ends with discussing possibilities of using AirFrame as a model for further developing resources in which general and specialized knowledge are linked.
This paper introduces the Aix Map Task corpus, a corpus of audio and video recordings of task-oriented dialogues. It was modelled after the original HCRC Map Task corpus. Lexical material was designed for the analysis of speech and prosody, as described in Astésano et al. (2007). The design of the lexical material, the protocol and some basic quantitative features of the existing corpus are presented. The corpus was collected under two communicative conditions, one audio-only condition and one face-to-face condition. The recordings took place in a studio and a sound attenuated booth respectively, with head-set microphones (and in the face-to-face condition with two video cameras). The recordings have been segmented into Inter-Pausal-Units and transcribed using transcription conventions containing actual productions and canonical forms of what was said. It is made publicly available online.
The English language has taken advantage of the Digital Revolution to establish itself as the global language; however, only 28.6% of Internet users speak English as their native language. Machine Translation (MT) is a powerful technology that can bridge this gap. In development since the mid-20th century, MT has become available to every Internet user in the last decade, due to free online MT services. This paper aims to discuss the implications that these tools may have for the privacy of their users and how they are addressed by EU data protection law. It examines the data-flows in respect of the initial processing (both from the perspective of the user and the MT service provider) and potential further processing that may be undertaken by the MT service provider.
A model of grammar needs to reconcile the undesirability inherent to allomorphy, the apparent extra burden on learning and memory, with its occurrence and possible stability. OT approaches this task by positing an anti-allomorphy constraint, henceforth referred to as "OO-correspondence", which requires leveling (i.e. sameness of sound structure) in related word forms (Benua 1997). The occurrence of allomorphy then indicates crucial domination of OO-correspondence by other constraints. To assess the adequacy of this proposal it is necessary to establish the level of abstractness at which OO-correspondence applies and to examine the consequences of this decision for ranking order. While proponents of OT tacitly assume the level in question to be rather concrete, the notion of allomorphy as originally envisioned in Structuralism was defined by distinctness at a more abstract level referred to as "phonemic" (Harris 1942; Nida 1944). The basic intuition here is that the defining property of subphonemic sound properties, their conditionedness by context, entails that whatever burden they put on learning and memory is of a fundamentally different nature than that entailed by phonemic distinctness. The evidence from German supports that intuition in that leveling can be shown to target phonemic sound structure to the exclusion of subphonemic properties. Allomorphy, defined by phonemic alternation, tends to serve phonological optimization in closed class items (function words, affixes) while serving to express morphological distinctions in open class items. The key to demonstrating the correlations in question lies in the discernment of phonemic structure, which is therefore at the core of the article.
We describe a simple and efficient Java object model and application programming interface (API) for (possibly multi-modal) annotated natural language corpora. Corpora are represented as elements like Sentences, Turns, Utterances, Words, Gestures and Markables. The API allows linguists to access corpora in terms of these discourse-level elements, i.e. at a conceptual level they are familiar with, with the flexibility offered by a general purpose programming language. It is also a contribution to corpus standardization efforts because it is based on a straightforward and easily extensible data model which can serve as a target for conversion of different corpus formats.
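To illustrate what a discourse-level object model of this kind might look like, here is a minimal, hypothetical Java sketch; the class and method names (Word, Utterance, text()) are invented for illustration and are not the actual API described in the paper.

```java
// Minimal, hypothetical sketch of a discourse-level corpus object model.
// Names (Word, Utterance, text()) are illustrative assumptions only.
import java.util.List;
import java.util.stream.Collectors;

public class CorpusSketch {
    // A token with its surface form.
    public record Word(String form) {}

    // An utterance is a sequence of words; text() renders it as a string,
    // letting a linguist work with discourse-level elements directly.
    public record Utterance(List<Word> words) {
        public String text() {
            return words.stream().map(Word::form).collect(Collectors.joining(" "));
        }
    }

    public static void main(String[] args) {
        Utterance u = new Utterance(List.of(new Word("Hello"), new Word("world")));
        System.out.println(u.text()); // prints "Hello world"
    }
}
```

A real implementation along these lines would add further element types (Sentence, Turn, Gesture, Markable) and conversion routines from existing corpus formats, as the abstract indicates.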
Wortgeschichte digital (Digital Word History) is an emerging historical dictionary of the German language that focuses on describing semantic shifts from about 1600 through today. This article provides deeper insight into the dictionary’s “cross-reference clusters,” one of its software tools that performs visualization of its reference network. Hence, the clusters are a part of the project’s macrostructure. They serve as both a means for users to find entries of interest and a tool to elucidate relations among dictionary entries. Rather than delve into technical aspects, this article focuses on the applied logics of the software and discusses the approach in light of the dictionary’s microstructure. The article concludes with some considerations about the clusters’ advantages and limitations.
Taking a usage-based perspective, lexical-semantic relations and other aspects of lexical meaning are characterised as emerging from language use. At the same time, they shape language use and therefore become manifest in corpus data. This paper discusses how this mutual influence can be taken into account in the study of these relations. An empirically driven methodology is proposed that is, as an initial step, based on self-organising clustering of comprehensive collocation profiles. Several examples demonstrate how this methodology may guide linguists in explicating implicit knowledge of complex semantic structures. Although these example analyses are conducted for written German, the overall methodology is language-independent.
The paper presents the results of a joint effort of a group of multimodality researchers and tool developers to improve the interoperability between several tools used for the annotation and analysis of multimodality. Each of the tools has specific strengths so that a variety of different tools, working on the same data, can be desirable for project work. However this usually requires tedious conversion between formats. We propose a common exchange format for multimodal annotation, based on the annotation graph (AG) formalism, which is supported by import and export routines in the respective tools. In the current version of this format the common denominator information can be reliably exchanged between the tools, and additional information can be stored in a standardized way.
We investigate whether prototypicality or prominence of semantic roles can account for role-related effects in sentence interpretation. We present two acceptability-rating experiments testing three different constructions: active, personal passive and DO-clefts involving the same type of transitive verbs that differ with respect to the agentive role features they select. Our results reveal that there is no cross-constructional advantage for prototypical roles (e.g., agents), hence disconfirming a central tenet of role prototypicality. Rather, acceptability clines depend on the construction under investigation, thereby highlighting different role features. This finding is in line with one core assumption of the prominence account stating that role features are flexibly highlighted depending on the discourse function of the respective construction.
This paper aims to describe different patterns of syntactic extensions of turns-at-talk in mundane conversations in Czech. Within interactional linguistics, same-speaker continuations of possibly complete syntactic structures have been described for typologically diverse languages, but have not yet been investigated for Slavic languages. Based on previously established descriptions of various types of extensions (Vorreiter 2003; Couper-Kuhlen & Ono 2007), our initial description shall therefore contribute to the cross-linguistic exploration of this phenomenon. While all previously described forms for continuing a turn-constructional unit seem to exist in Czech, some grammatical features of this language (especially free word order and strong case morphology) may lead to problems in distinguishing specific types of syntactic extensions. Consequently, this type of language allows for critically evaluating the cross-linguistic validity of the different categories and underlines the necessity of analysing syntactic phenomena within their specific action contexts.
The paper presents the results of a survey on lexicographic practices and lexicographers’ needs across Europe that was conducted in the context of the Horizon 2020 project European Lexicographic Infrastructure (ELEXIS) among the observer institutions of the project. The survey is a revised and upgraded version of the survey which was originally conducted among ELEXIS lexicographic partner institutions in 2018 (Kallas et al. 2019a). The main goal of this new survey was to complement the data from the ELEXIS lexicographic partner institutions in order to get a more complete picture of lexicographic practices both for born-digital and retro-digitised resources in Europe. The results offer a detailed insight into many aspects of the lexicographic process at European institutions, such as funding, training, staff, lexicographic expertise, software and tools. In addition, the survey reflects on current trends in lexicography and reveals what institutions see as the most important emerging trends that will affect lexicography in the short-term and long-term future. Overall, the results provide valuable input informing the development of tools, resources, guidelines and training materials within ELEXIS.
This article describes an English Zulu learners’ dictionary that is part of a larger set of information tools, namely an online Zulu course, an e-dictionary of possessives (which was implemented earlier) accompanied by training software offering translation tasks on several levels, and an ontology of morphemic items categorizing and describing all parts of speech of Zulu. The underlying lexicographic database contains the usual type of lexicographic data, such as translation equivalents and their respective morphosyntactic data, but its entries have been extended with data related to the lessons of the online course in order to enable the learner to link both tools autonomously. The ‘outer matter’ is integrated into the website in the form of several texts on additional web pages (how-to-use, typical outputs, grammar tables, information on morphosyntactic rules, etc.). The dictionary comprises a modular system, where each module fulfils one of the necessary functions.
Lexical chaining has become an important part of many NLP tasks. However, the goodness of a chaining process and hence its annotation output depends on the quality of the chaining resource. Therefore, a framework for chaining is needed which integrates divergent resources in order to balance their deficits and to compare their strengths and weaknesses. In this paper we present an application that incorporates the framework of a meta model of lexical chaining exemplified on three resources and its generalized exchange format.
The paper reports on a dictionary of German loanwords in the languages of the South Pacific that is compiled at the Institut für Deutsche Sprache in Mannheim. The loanwords described in this dictionary mainly result from language contact between 1884 and 1914, when the German empire was in possession of large areas of the South Pacific where overall more than 700 indigenous languages were spoken. The dictionary is designed as an electronic XML-based resource from which an internet dictionary and a printed dictionary can be derived. Its printed version is intended as an ‘inverted loanword dictionary’, that is, a dictionary that – in contrast to the usual praxis in loanword lexicography – lemmatizes the words of a source language that have been borrowed by other languages. Each of the loanwords will be described with respect to its form and meaning and the contact situation in which it was borrowed. Among the outer texts of the dictionary are (i) a list of all sources with bibliographic and archival information, (ii) a commentary on each source, (iii) a short history of the language contact with German for each target language, and perhaps (iv) facsimiles of source texts. The dictionary is supposed to (i) help to reconstruct the history of language contact of the source language, (ii) provide evidence for the cultural contact between the populations speaking the source and the target languages, (iii) enable linguistic theories about the systematic changes of the semantic, morphosyntactic, or phonological lexical properties of the source language when its words are borrowed into genetically and typologically different languages, and (iv) establish a thoroughly described case for testing typological theories of borrowing.
This paper discusses an investigation of how senses are ordered across eight dictionaries. A dataset of 75 words was used for this purpose, and two senses were examined for each word. The words are divided into three groups of 25 words each according to the relationship between the senses: Homonymy, Metaphor, and Systematic Polysemy. The primary finding is that WordNet differs from the other dictionaries in terms of Metaphor. The order of the senses was more often figurative/literal, and it had the highest percentage of figurative senses that were not found. We discuss leveraging another dictionary, COBUILD, to re-order the senses according to frequency.
Just like most varieties of West Germanic, virtually all varieties of German use a construction in which a cognate of the English verb 'do' (standard German 'tun') functions as an auxiliary and selects another verb in the bare infinitive, a construction known as 'do'-periphrasis or 'do'-support. The present paper provides an Optimality Theoretic (OT) analysis of this phenomenon. It builds on a previous analysis by Bader and Schmid (An OT-analysis of 'do'-support in Modern German, 2006) but (i) extends it from root clauses to subordinate clauses and (ii) aims to capture all of the major distributional patterns found across (mostly non-standard) varieties of German. In so doing, the data are used as a testing ground for different models of German clause structure. At first sight, the occurrence of 'do' in subordinate clauses, as found in many varieties, appears to support the standard CP-IP-VP analysis of German. In actual fact, however, the full range of data turn out to challenge, rather than support, this model. Instead, I propose an analysis within the IP-less model by Haider (Deutsche Syntax - generativ. Vorstudien zur Theorie einer projektiven Grammatik, Narr, Tübingen, 1993 et seq.). In sum, the 'do'-support data will be shown to have implications not only for the analysis of clause structure but also for the OT constraints commonly assumed to govern the distribution of 'do', for the theory of non-projecting words (Toivonen in Non-projecting words, Kluwer, Dordrecht, 2003) as well as research on grammaticalization.
This contribution presents an XML schema for annotating a high-level narratological category: speech, thought and writing representation (ST&WR). It focusses on two aspects: firstly, the original schema is presented as an example of the challenge of encoding a narrative feature in a structured and flexible way, and secondly, ways of adapting this schema to TEI are considered, in order to make it usable for other, TEI-based projects.
This paper aims to address these problems by dealing with theoretical and methodological questions concerning the national effects of the Bologna Process and the role national factors play in determining the impact of these effects. Altogether the purpose of the paper is to serve as a starting point for future research – both as a guide for systematic and comparative empirical work on higher education, but also for further theoretical and methodological reasoning concerning research on (higher) education policy. As higher education research so far particularly lacks an approach allowing for a competitive and systematic falsification of theoretical arguments by clearly indicating testable and specific hypotheses as well as variables behind the research design (Goedegebuure/Vught 1996), we propose to fall back on neighbouring disciplines, namely social science, to improve and enhance the analysis (Slaughter 2001: 398; Altbach 2002: 154; Teichler 1996a: 433, 2005: 448). Several strands of research have to be considered – namely literature on Europeanization as well as insights and approaches of studies dealing with cross-national policy convergence. Taking into account the non-obligatory and mainly intergovernmental character of the Bologna Process, the main focus of the paper is on factors related to the effects of transnational communication. The inherent goal is to extend the research agenda on higher education (McLendon 2003: 184ff) and to leave behind the restriction of analysing only a few cases by striving for a research design that allows for systematic testing and sufficient explanations of cross-national policy convergence at the interface between the Bologna Process and domestic factors.
Although there is a growing interest of policy makers in higher education issues (especially on an international scale), there is still a lack of theoretically well-grounded comparative analyses of higher education policy. Even broadly discussed topics in higher education research like the potential convergence of European higher education systems in the course of the Bologna Process suffer from a thin empirical and comparative basis. This paper aims to deal with these problems by addressing theoretical questions concerning the domestic impact of the Bologna Process and the role national factors play in determining its effects on cross-national policy convergence. It develops a distinct theoretical approach for the systematic and comparative analysis of cross-national policy convergence. In doing so, it relies upon insights from related research areas — namely literature on Europeanization as well as studies dealing with cross-national policy convergence.
This study investigated whether an analysis of narrative style (word use and cross-clausal syntax) of patients with symptoms of generalised anxiety and depression disorders can help predict the likelihood of successful participation in guided self-help. Texts by 97 people who had made contact with a primary care mental health service were analysed. Outcome measures were completion of the guided self-help programme, and change in symptoms assessed by a standardised scale (CORE-OM). Regression analyses indicated that some aspects of participants' syntax helped to predict completion of the programme, and that aspects of syntax and word use helped to predict improvement of symptoms. Participants using non-finite complement clauses with above-average frequency were four times more likely to complete the programme (95% confidence interval 1.4 to 11.7) than other participants. Among those who completed, the use of causation words and complex syntax (adverbial clauses) predicted improvement, accounting for 50% of the variation in well-being benefit. These results suggest that the analysis of narrative style can provide useful information for assessing the likelihood of success of individuals participating in a mental health guided self-help programme.
MRI data of German vowels and consonants was acquired for 9 speakers. In this paper tongue contours for the vowels were analyzed using the three-mode factor analysis technique PARAFAC. After some difficulties, probably related to what constitutes an adequate speaker sample for this three-mode technique to work, a stable two-factor solution was extracted that explained about 90% of the variance. Factor 1 roughly captured the dimension low back to high front; Factor 2 that from mid front to high back. These factors are compared with earlier models based on PARAFAC. These analyses were based on midsagittal contours; the paper concludes by illustrating from coronal and axial sections how non-midline information could be incorporated into this approach.
Anaphora by pronouns
(1983)
An adequate conception of anaphora is still a desideratum. Considering the anaphoric use of third-person personal pronouns, the present study contributes to the solution of the question of what anaphora is. Major tenets of generative approaches to pronominal anaphora are surveyed; descriptive and methodological problems with transformational as well as interpretive treatments are discussed. The prevailing assumption that anaphora is a syntactically based phenomenon is shown to be inadequate. In particular, it is argued that pronominal anaphora does not constitute a case of either a syntactic (agreement) relation or a semantic (coreference) relation between antecedents and anaphors, i.e. linguistic expressions. In fact, there is no grammatical antecedent-anaphor relation that is essential to the description of pronouns. Pronouns are to be treated in their own right rather than by recourse to supposed antecedents. An account of the use of pronouns has to be based on a notion of speaker reference and on a unified description of lexical entries for pronouns that specify their meanings. Sample entries for English are suggested. It is emphasized that pronoun meanings reflect social, not biological, classifications of possible referents. To the extent that pronouns are used according to morphosyntactic features, as in languages like German or French, lexical entries for pronouns should specify the pronouns' 'associative potential'. Associative potential has the same function as conceptual meaning, viz. delimiting the associated extension. In addition to this, pronouns turn out to differ from 'normal definite nominals' only in the low conceptual content of their meanings. Pronoun occurrences that apparently agree with and are coreferential with referential antecedents are found to form a restricted subclass of pronoun use in general as well as of anaphoric pronoun use. Thus one must refrain from forcing each and every pronoun occurrence into this mold.
Instead, anaphora by pronouns is characterized as a type of use where pronouns serve to refer to referents that the speaker considers to be retrievable from the universe-of-discourse.
Annotating Discourse Relations in Spoken Language: A Comparison of the PDTB and CCR Frameworks
(2016)
In discourse relation annotation, there is currently a variety of different frameworks being used, and most of them have been developed and employed mostly on written data. This raises a number of questions regarding interoperability of discourse relation annotation schemes, as well as regarding differences in discourse annotation for written vs. spoken domains. In this paper, we describe our experience annotating two spoken domains from the SPICE Ireland corpus (telephone conversations and broadcast interviews) according to two different discourse annotation schemes, PDTB 3.0 and CCR. We show that annotations in the two schemes can largely be mapped onto one another, and discuss differences in operationalisations of discourse relation schemes which present a challenge to automatic mapping. We also observe systematic differences in the prevalence of implicit discourse relations in spoken data compared to written texts, and find that there are also differences in the types of causal relations between the domains. Finally, we find that PDTB 3.0 addresses many shortcomings of PDTB 2.0 wrt. the annotation of spoken discourse, and suggest further extensions. The new corpus has roughly the size of the CoNLL 2015 Shared Task test set, and we hence hope that it will be a valuable resource for the evaluation of automatic discourse relation labellers.
Annotating Spoken Language
(2014)
We continue the study of the reproducibility of Propp’s annotations from Bod et al. (2012). We present four experiments in which test subjects were taught Propp’s annotation system; we conclude that Propp’s system needs a significant amount of training, but that with sufficient time investment, it can be reliably trained for simple tales.