Refine
Year of publication
Document Type
- Part of a Book (108)
- Article (82)
- Conference Proceeding (54)
- Book (1)
- Doctoral Thesis (1)
- Master's Thesis (1)
- Review (1)
Language
- English (248)
Keywords
- Deutsch (65)
- Korpus <Linguistik> (51)
- Konversationsanalyse (25)
- Interaktion (23)
- Computerlinguistik (22)
- Gesprochene Sprache (19)
- Natürliche Sprache (17)
- Sprachpolitik (15)
- Maschinelles Lernen (14)
- Digital Humanities (12)
Publication state
- Zweitveröffentlichung (248)
Review state
Publisher
- de Gruyter (30)
- Benjamins (16)
- European Language Resources Association (16)
- Springer (13)
- Narr (9)
- Oxford University Press (8)
- Narr Francke Attempto (7)
- Cambridge University Press (5)
- Editura Academiei Române (5)
- Elsevier (5)
With the advent of mobile devices, mediatized political discourse has become more dynamic. I assume that the microblog Twitter can be considered a medium for spatial coordination during protests. The case analysed is that of neo-Nazi demonstrations and counter-protests in the city of Dresden in February 2012. The data consist of microposts produced during the event. Quantitative analysis of hashtag and retweet frequencies was performed, as well as qualitative speech act pattern analysis and a tempo-spatial discourse analysis of selected subsets of microposts. Results show that a common linguistic practice is verbal georeferencing, which discursively constructs space. Empirical analysis indicates a strong relation between communicational online space and physical offline place: protest participants permanently reconfigure the spatial context discursively, and the contested protest area thus becomes a temporarily meaningful place.
"What makes this so complicated?" On the value of disorienting dilemmas in language instruction
(2017)
A "polyglottal" speech synthesis - modifications for a replica of Kempelen's speaking machine
(2019)
This paper argues that there is a correlation between functional and purely grammatical patterning in language, yet the nature of this correlation has to be explored. This claim is based on the results of a corpus-driven study of the Slavic aspect, drawing on the so-called Distributional Hypothesis. According to the East-West Theory of the Slavic aspect, there is a broad east-west isogloss dividing the Slavic languages into an eastern group and a western group. There are also two transitional zones in the north and south, which share some properties with each group (Dickey 2000; Barentsen 1998, 2008). The East-West Theory uses concepts of cognitive grammar such as totality and temporal definiteness, and is based on various parameters of aspectual usage in discourse, including contexts such as habituals, general factuals, historical (narrative) present, performatives, sequenced events in the past etc. The purpose of the above-mentioned study is to challenge the semantic approach to the Slavic aspect by comparing the perfective and imperfective verbal aspect on the basis of purely grammatical co-occurrence patterns (see also Janda & Lyashevskaya 2011). The study focused on three Slavic languages: Russian, which, following the East-West Theory, belongs to the eastern group, Czech, which belongs to the western group, and Polish, which is considered transitional in its aspectual patterning.
This paper argues that a lectometric approach may shed light on the distinction between destandardization and demotization, a pair of concepts that plays a key role in ongoing discussions about contemporary trends in standard languages. Instead of a binary distinction, the paper proposes three different types of destandardization, defined as quantitatively measurable changes in a stratigraphic language continuum. The three types are illustrated on the basis of a case study describing changes in the vocabulary of Dutch in The Netherlands and Flanders between 1990 and 2010.
We present a new resource for German causal language, with annotations in context for verbs, nouns and adpositions. Our dataset includes 4,390 annotated instances for more than 150 different triggers. The annotation scheme distinguishes three different types of causal events (CONSEQUENCE, MOTIVATION, PURPOSE). We also provide annotations for semantic roles, i.e. the cause and effect for the causal event as well as the actor and affected party, if present. In the paper, we present inter-annotator agreement scores for our dataset and discuss problems for annotating causal language. Finally, we present experiments where we frame causal annotation as a sequence labelling problem and report baseline results for the prediction of causal arguments and for predicting different types of causation.
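The sequence-labelling framing mentioned in the abstract above can be illustrated with a minimal sketch. The sentence, trigger, and role tags here are invented for illustration and are not the paper's actual annotation scheme; the sketch only shows how BIO tags recover cause/effect spans around a causal trigger:

```python
# A BIO-tagged sentence (invented example): one tag per token,
# marking the causal trigger and its CAUSE/EFFECT argument spans.
tokens = ["Der", "Sturm", "verursachte", "grosse", "Schäden", "."]
tags   = ["B-CAUSE", "I-CAUSE", "B-TRIGGER", "B-EFFECT", "I-EFFECT", "O"]

def spans_from_bio(tokens, tags):
    """Collect (label, text) span pairs from a BIO-tagged token sequence."""
    spans, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:
                spans.append(current)
            current = (tag[2:], [tok])          # open a new span
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)              # continue the open span
        else:                                   # "O" or inconsistent "I-"
            if current:
                spans.append(current)
            current = None
    if current:
        spans.append(current)
    return [(label, " ".join(toks)) for label, toks in spans]

print(spans_from_bio(tokens, tags))
# → [('CAUSE', 'Der Sturm'), ('TRIGGER', 'verursachte'), ('EFFECT', 'grosse Schäden')]
```

A sequence-labelling model for this task would predict the tag column from the token column; the span-decoding step above stays the same regardless of the model.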
In a number of languages, agreement in specificational copular sentences can or must be with the second of the two nominals, even when it is the first that occupies the canonical subject position. Béjar & Kahnemuyipour (2017) show that Persian and Eastern Armenian are two such languages. They then argue that ‘NP2 agreement’ occurs because the nominal in subject position (NP1) is not accessible to an external probe. It follows that actual agreement with NP1 should never be possible: the alternative to NP2 agreement should be ‘default’ agreement. We show that this prediction is false. In addition to showing that English has NP1, not default, agreement, we present new data from Icelandic, a language with rich agreement morphology, including cases that involve ‘plurale tantum’ nominals as NP1. These allow us to control for any confound from the fact that typically in a specificational sentence with two nominals differing in number, it is NP2 that is plural. We show that even in this case, the alternative to agreement with NP2 is agreement with NP1, not a default. Hence, we conclude that whatever the correct analysis of specificational sentences turns out to be, it must not predict obligatory failure of NP1 agreement.
We question the growing consensus in the literature that European Americans behave as a homogenous pan-ethnic coalition of voters. Seemingly below the radar of scholarship on voting groups in American politics, we identify a group of white voters that behaves differently from others: German Americans, the largest ethnic group, regionally concentrated in the ‘Swinging Midwest’. Using county level voting returns, ancestry group information from the American Community Survey (ACS), current survey data and historical census data going back as early as 1910, we provide evidence for a partisan and a non-partisan pathway that motivated German Americans to vote for Trump in 2016: a historically grown association with the Republican Party and an acquired taste for isolationist attitudes that mobilizes non-partisan German Americans to support isolationist candidates. Our findings indicate that European American experiences of migration and integration still echo into the political arena of today.
Accentuation, Uncertainty and Exhaustivity - Towards a Model of Pragmatic Focus Interpretation
(2010)
This paper presents a model of pragmatic focus interpretation that is assumed to be part of a complete language comprehension model and that is inspired by Levelt's language processing model. The model is derived from our empirical data on the role of accentuation, prosodic indicators of uncertainty and context for pragmatic focus interpretation. In its present state, the model is restricted to these data, but nevertheless generates predictions.
In this paper, we present an overview of freely available web applications providing online access to spoken language corpora. We explore and discuss various solutions with which the corpus providers and corpus platform developers address the needs of researchers who are working with spoken language. The paper aims to contribute to the long-overdue exchange and discussion of methods and best practices in the design of online access to spoken language corpora.
Action ascription can be understood from two broad perspectives. On one view, it refers to the ways in which actions constitute categories by which members make sense of their world, and forms a key foundation for holding others accountable for their conduct. On another view, it refers to the ways in which we accountably respond to the actions of others, thereby accomplishing sequential versions of meaningful social experience. In short, action ascription can be understood as a matter of categorisation of prior actions or responding in ways that are sequentially fitted to prior actions, or both. In this chapter, we review different theoretical approaches to action ascription that have developed in the field, as well as the key constituents and resources of action ascription that have been identified in conversation analytic research, before going on to discuss how action ascription can itself be considered a form of social action.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005: 285; Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
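One well-known facet of the debate summarised above can be made concrete: for a fixed difference in relative frequency, the chi-square statistic of a 2×2 word-frequency comparison grows linearly with corpus size, so any non-zero difference eventually comes out "significant". A stdlib-only sketch with invented frequencies (not data from the paper):

```python
def chi_square_2x2(f1, n1, f2, n2):
    """Pearson chi-square for one word's frequency in two corpora.
    f1/f2: occurrences of the word; n1/n2: corpus sizes in tokens."""
    table = [[f1, n1 - f1], [f2, n2 - f2]]
    total = n1 + n2
    chi2 = 0.0
    for i, row_sum in enumerate((n1, n2)):
        for j, col_sum in enumerate((f1 + f2, total - f1 - f2)):
            expected = row_sum * col_sum / total
            chi2 += (table[i][j] - expected) ** 2 / expected
    return chi2

# Same relative-frequency difference (100 vs. 110 per million tokens),
# observed in corpora of growing size:
for scale in (1, 10, 100):
    n = 1_000_000 * scale
    print(scale, round(chi_square_2x2(100 * scale, n, 110 * scale, n), 2))
# The statistic grows linearly with corpus size, so the p-value shrinks
# toward zero: with large enough corpora the null hypothesis is always
# rejected, regardless of whether the difference is meaningful.
```

This is only one strand of the argument (the paper's central point concerns whether corpora are random samples at all), but it shows why sheer corpus size interacts badly with significance testing.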
Terminological resources play a central role in the organization and retrieval of scientific texts. Both simple keyword lists and advanced modelings of relationships between terminological concepts can make a most valuable contribution to the analysis, classification, and finding of appropriate digital documents, either on the web or within local repositories. This seems especially true for long-established scientific fields with elusive theoretical and historical branches, where the use of terminology within documents from different origins is often far from being consistent. In this paper, we report on the progress of a linguistically motivated project on the onomasiological re-modeling of the terminological resources for the grammatical information system grammis. We present the design principles and the results of their application. In particular, we focus on new features for the authoring backend and discuss how these innovations help to evaluate existing, loosely structured terminological content, as well as to efficiently deal with automatic term extraction. Furthermore, we introduce a transformation to a future SKOS representation. We conclude with a positioning of our resources with regard to the Knowledge Organization discourse and discuss how a highly complex information environment like grammis benefits from the re-designed terminological KOS.
This article describes an English-Zulu learners’ dictionary that is part of a larger set of information tools, namely an online Zulu course, an e-dictionary of possessives (which was implemented earlier) accompanied by training software offering translation tasks on several levels, and an ontology of morphemic items categorizing and describing all parts of speech of Zulu. The underlying lexicographic database contains the usual type of lexicographic data, such as translation equivalents and their respective morphosyntactic data, but its entries have been extended with data related to the lessons of the online course in order to enable the learner to link both tools autonomously. The ‘outer matter’ is integrated into the website in the form of several texts on additional web pages (how-to-use, typical outputs, grammar tables, information on morphosyntactic rules, etc.). The dictionary comprises a modular system, where each module fulfils one of the necessary functions.
Just like most varieties of West Germanic, virtually all varieties of German use a construction in which a cognate of the English verb 'do' (standard German 'tun') functions as an auxiliary and selects another verb in the bare infinitive, a construction known as 'do'-periphrasis or 'do'-support. The present paper provides an Optimality Theoretic (OT) analysis of this phenomenon. It builds on a previous analysis by Bader and Schmid (An OT-analysis of 'do'-support in Modern German, 2006) but (i) extends it from root clauses to subordinate clauses and (ii) aims to capture all of the major distributional patterns found across (mostly non-standard) varieties of German. In so doing, the data are used as a testing ground for different models of German clause structure. At first sight, the occurrence of 'do' in subordinate clauses, as found in many varieties, appears to support the standard CP-IP-VP analysis of German. In actual fact, however, the full range of data turn out to challenge, rather than support, this model. Instead, I propose an analysis within the IP-less model by Haider (Deutsche Syntax - generativ. Vorstudien zur Theorie einer projektiven Grammatik, Narr, Tübingen, 1993 et seq.). In sum, the 'do'-support data will be shown to have implications not only for the analysis of clause structure but also for the OT constraints commonly assumed to govern the distribution of 'do', for the theory of non-projecting words (Toivonen in Non-projecting words, Kluwer, Dordrecht, 2003) as well as research on grammaticalization.
Although there is a growing interest of policy makers in higher education issues (especially on an international scale), there is still a lack of theoretically well-grounded comparative analyses of higher education policy. Even broadly discussed topics in higher education research like the potential convergence of European higher education systems in the course of the Bologna Process suffer from a thin empirical and comparative basis. This paper aims to deal with these problems by addressing theoretical questions concerning the domestic impact of the Bologna Process and the role national factors play in determining its effects on cross-national policy convergence. It develops a distinct theoretical approach for the systematic and comparative analysis of cross-national policy convergence. In doing so, it relies upon insights from related research areas — namely literature on Europeanization as well as studies dealing with cross-national policy convergence.
Repeating the movements associated with activities such as drawing or sports typically leads to improvements in kinematic behavior: these movements become faster, smoother, and exhibit less variation. Likewise, practice has also been shown to lead to faster and smoother movement trajectories in speech articulation. However, little is known about its effect on articulatory variability. To address this, we investigate the extent to which repetition and predictability influence the articulation of the frequent German word “sie” [zi] (they). We find that articulatory variability is proportional to speaking rate and the duration of [zi], and that overall variability decreases as [zi] is repeated during the experiment. Lower variability is also observed as the conditional probability of [zi] increases, and the greatest reduction in variability occurs during the execution of the vocalic target of [i]. These results indicate that practice can produce observable differences in the articulation of even the most common gestures used in speech.
The project Referenzkorpus Altdeutsch (‘Old German Reference Corpus’) aims to establish a deeply-annotated text corpus of all extant Old German texts. As the automated part-of-speech and morphological pre-annotation is amended by hand, a quality control system for the results seems a desirable objective. To this end, standardized inflectional forms, generated using the morphological information, are compared with the attested word forms. Their creation is described by way of example for the Old High German part of the corpus. As is shown, in a few cases, some features of the attested word forms are also required in order to determine as exactly as possible the shape of the inflected lemma form to be created.
This paper describes a rule-based approach to detect direct speech without the help of any quotation markers. Fictional and non-fictional texts were used as datasets. Our evaluation shows that the results appear stable throughout different datasets in the fictional domain and are comparable to the results achieved in related work.
This study builds on a large body of work on the use of linguistic forms for requests in social interaction. Using Conversation Analysis / Interactional Linguistics, this study explores the use of two recurrent linguistic formats for requesting in spoken German – simple interrogatives ('do you do ..?') and kannst du VP? ('can you do..?') interrogatives. Based on a corpus of video-recorded, naturally occurring mundane data, this study demonstrates one of the interactional factors that is relevant for the choice between alternative interrogative request formats in spoken German – the recipient's embodied availability before and during the request initiation. It is shown that simple interrogatives are used to request an action from a recipient who is either available or involved in their own project, which, however, does not have to be suspended or interrupted to comply with the request. In contrast, kannst du VP? interrogatives occur in environments in which the recipient is already engaged in a project that must be suspended in order to grant the request.
This chapter explores the Linguistic Landscape of six medium-size towns in the Baltic States with regard to languages of tourism and to the role of English and Russian as linguae francae. A quantitative analysis of signs and of tourism web sites shows that, next to the state languages, English is the most dominant language. Yet, interviews reveal that underneath the surface, Russian still stands strong. Therefore, possible claims that English might take over the role of the main lingua franca in the Baltic States cannot be maintained. English has a strong position for attracting international tourists, but only alongside Russian which remains important both as a language of international communication and for local needs.
Beyond Citations: Corpus-based Methods for Detecting the Impact of Research Outcomes on Society
(2020)
This paper proposes, implements and evaluates a novel, corpus-based approach for identifying categories indicative of the impact of research via a deductive (top-down, from theory to data) and an inductive (bottom-up, from data to theory) approach. The resulting categorization schemes differ in substance. Research outcomes are typically assessed by using bibliometric methods, such as citation counts and patterns, or alternative metrics, such as references to research in the media. Shortcomings with these methods are their inability to identify impact of research beyond academia (bibliometrics) and considering text-based impact indicators beyond those that capture attention (altmetrics). We address these limitations by leveraging a mixed-methods approach for eliciting impact categories from experts, project personnel (deductive) and texts (inductive). Using these categories, we label a corpus of project reports per category schema, and apply supervised machine learning to infer these categories from project reports. The classification results show that we can predict deductively and inductively derived impact categories with 76.39% and 78.81% accuracy (F1-score), respectively. Our approach can complement solutions from bibliometrics and scientometrics for assessing the impact of research and studying the scope and types of advancements transferred from academia to society.
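The abstract above reports classification accuracy as F1-scores. As a reminder of how such scores are computed for multi-class labels, here is a minimal stdlib sketch; the impact-category labels and the gold/predicted sequences are invented for illustration, not taken from the paper:

```python
from collections import Counter

def f1_scores(gold, pred):
    """Per-class F1 plus macro-averaged F1 for multi-class labels."""
    labels = sorted(set(gold) | set(pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1   # predicted p, but gold was something else
            fn[g] += 1   # gold g was missed
    per_class = {}
    for lab in labels:
        prec = tp[lab] / (tp[lab] + fp[lab]) if tp[lab] + fp[lab] else 0.0
        rec  = tp[lab] / (tp[lab] + fn[lab]) if tp[lab] + fn[lab] else 0.0
        per_class[lab] = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    macro = sum(per_class.values()) / len(labels)
    return per_class, macro

# Invented impact categories for ten project reports:
gold = ["ECON", "HEALTH", "ECON", "POLICY", "ECON",
        "HEALTH", "POLICY", "ECON", "HEALTH", "ECON"]
pred = ["ECON", "HEALTH", "ECON", "ECON", "ECON",
        "POLICY", "POLICY", "ECON", "HEALTH", "ECON"]
per_class, macro = f1_scores(gold, pred)
print(per_class, round(macro, 3))
```

Whether a reported F1 is micro- or macro-averaged matters when the category distribution is skewed, as it typically is for impact categories mined from project reports.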
Beyond the stars: exploiting free-text user reviews to improve the accuracy of movie recommendations
(2009)
In this paper we show that the extraction of opinions from free-text reviews can improve the accuracy of movie recommendations. We present three approaches to extract movie aspects as opinion targets and use them as features for the collaborative filtering. Each of these approaches requires different amounts of manual interaction. We collected a data set of reviews with corresponding ordinal (star) ratings of several thousand movies to evaluate the different features for the collaborative filtering. We employ a state-of-the-art collaborative filtering engine for the recommendations during our evaluation and compare the performance with and without using the features representing user preferences mined from the free-text reviews provided by the users. The opinion mining based features perform significantly better than the baseline, which is based on star ratings and genre information only.
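The idea of augmenting collaborative filtering with opinion-mined features, as described above, can be sketched in a few lines: user profiles combine star ratings with aspect-preference weights, and neighbours are found by cosine similarity over the combined vectors. The users, movies, and aspect weights below are invented, and this is a generic user-based CF sketch, not the engine the paper actually used:

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse feature vectors (dicts)."""
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in set(u) | set(v))
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Invented profiles: star ratings per movie plus opinion-mined aspect
# preferences (e.g. how much a user's free-text reviews praise acting).
alice = {"movie:Heat": 5, "movie:Alien": 4, "aspect:acting": 0.9, "aspect:plot": 0.2}
bob   = {"movie:Heat": 4, "movie:Alien": 5, "aspect:acting": 0.8, "aspect:plot": 0.3}
carol = {"movie:Heat": 1, "aspect:acting": 0.1, "aspect:plot": 0.9}

# Alice resembles Bob (similar ratings and aspect profile) more than Carol:
print(round(cosine(alice, bob), 3), round(cosine(alice, carol), 3))
```

In a real system the aspect weights would come from the opinion-mining step, and predicted ratings would be similarity-weighted averages over the nearest neighbours' ratings.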
Canadian heritage German across three generations: A diary-based study of language shift in action
(2019)
It is well known that migration has an effect on language use and language choice. If the language of origin is maintained after migration, it tends to change in the new contact setting. Often, migrants shift to the new majority language within a few generations. The current paper examines a diary corpus containing data from three generations of one German-Canadian family, ranging from 1867 to 1909, and covering the second to fourth generation after immigration. The paper analyzes changes that can be observed between the generations, with respect to the language system as well as to the individuals’ decision on language choice. The data not only offer insight into the dynamics of acquiring a written register of a heritage language, and the eventual shift to the majority language. They also allow us to identify different linguistic profiles of heritage speakers within one community. It is discussed how these profiles can be linked to the individuals’ family backgrounds and how the combination of these backgrounds may have contributed to giving up the heritage language in favor of the majority language.
This paper studies how the turn-design of a highly recurrent type of action changes over time. Based on a corpus of video-recordings of German driving lessons, we consider one type of instructions and analyze how the same instructional action is produced by the same speaker (the instructor) for the same addressee (the student) in consecutive trials of a learning task. We found that instructions become shorter, more indexical and syntactically less complex; interactional sequences become more condensed and activities designed to secure mutual understanding become rarer. This study shows how larger temporal frameworks of interpersonal interactional histories which range beyond the interactional sequence impinge on the recipient-design of turns and the deployment of multimodal resources in situ.
We present web services implementing a workflow for transcripts of spoken language following TEI guidelines, in particular ISO 24624:2016 "Language resource management - Transcription of spoken language". The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
This paper deals with different types of verbal complementation of the German verb verdienen. It focuses on constructions that have been undergoing a grammaticalization process and thus express deontic modality, as in Sie verdient geliebt zu werden (ʽShe deserves to be lovedʼ) and Sie verdient zu leben (ʽShe deserves to liveʼ) (Diewald, Dekalo & Czicza 2021). These constructions are connected to parallel complementation types with passive and active infinitives containing a correlate es, as in Sie verdient es, geliebt zu werden and Sie verdient es, zu leben, as well as finite clauses with the subordinator dass with and without correlative es, as in Sie verdient, dass sie geliebt wird and Sie verdient es, dass sie geliebt wird. This paper presents a close comparative investigation of these six types of constructions based on their relevant semantic and syntactic properties in terms of clause linkage (Lehmann 1988). We analyze the relevant data retrieved from the DWDS corpus of the 20th century and present an expanded grammaticalization path for verdienen-constructions. The finite complementation with dass is regarded as an example of a separate structural option called “elaboration”. Concerning the use of correlative es, it is shown that it does not have any substantial effect on the grammaticalization of modal verdienen-constructions.
The ubiquity of smartphones has been recognised within conversation analysis as having an impact on conversational structures and on the participants’ interactional involvement. However, most of the previous studies have relied exclusively on video recordings of overall encounters and have not systematically considered what is taking place on the device. Due to the personal nature of smartphones and their small displays, onscreen activities are of limited visibility and are thus potentially opaque for both the co-present participants (“participant opacity”) and the researchers (“analytical opacity”). While opacity can be an inherent feature of smartphones in general, analytical opacity might not be desirable for research purposes. This chapter discusses how a recording set-up consisting of static cameras, wearable cameras and dynamic screen captures allowed us to address the analytical opacity of mobile devices. Excerpts from multi-source video data of everyday encounters will illustrate how the combination of multiple perspectives can increase the visibility of interactional phenomena, reveal new analytical objects and improve analytical granularity. More specifically, these examples will emphasise the analytical advantages and challenges of a combined recording set-up with regard to smartphone use as multiactivity, the role of the affordances of the mobile device, and the prototypicality and “naturalness” of the recorded practices.
Communication of stereotypes in the classroom: biased language use of German and Turkish adolescents
(2014)
Little is known about the linguistic transmission and maintenance of mutual stereotypes in interethnic contexts. This field study, therefore, investigated the linguistic expectancy bias (LEB) and the linguistic intergroup bias (LIB) among German and Turkish adolescents (13 to 20 years) in the school context. The LEB refers to the general phenomenon of describing stereotypes more abstractly. The LIB is the tendency to use language abstraction for in-group protective reasons. Results revealed an unmoderated LEB, whereas the LIB only occurred when foreigners were in the numerical majority, the classroom composition was perceived as a learning disadvantage, or the interethnic conflict frequency was high. These findings provide first evidence for the use of both LEB and LIB in an interethnic classroom setting.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information which is directly or indirectly provided in the blog or can be derived by means of statistical analysis or computational tools. At this point, only directly available information (e.g., title of the blog post, name of the blogger etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure or related research questions as well as for the development of NLP methods and techniques (e.g. for authorship detection).
Are borrowed neologisms accepted more slowly into the German language than German words resulting from the application of word formation rules? This study addresses this question by focusing on two possible indicators for the acceptance of neologisms: a) frequency development of 239 German neologisms from the 1990s (loanwords as well as new words resulting from the application of word formation rules) in the German reference corpus DeReKo and b) frequency development in the use of pragmatic markers (‘flags’, namely quotation marks and phrases such as sogenannt ‘so-called’) with these words. In the second part of the article, a psycholinguistic approach to evaluating the (psychological) status of different neologisms and non-words in an experimentally controlled study and plans to carry out interviews in a field test to collect speakers’ opinions on the acceptance of the analysed neologisms are outlined. Finally, implications for the lexicographic treatment of both types of neologisms are discussed.
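The two acceptance indicators named in the abstract above, frequency development and the rate of pragmatic flagging, reduce to simple per-slice computations once corpus counts are available. The year slices, counts, and corpus sizes below are invented for illustration; real counts would come from querying DeReKo:

```python
# Invented counts for one neologism across corpus year-slices:
# year -> (occurrences, flagged occurrences, corpus size in tokens).
# "Flagged" = marked with quotation marks or phrases like sogenannt.
slices = {
    1995: (12, 9, 80_000_000),
    2000: (150, 60, 120_000_000),
    2005: (900, 120, 200_000_000),
}

for year, (occ, flagged, tokens) in sorted(slices.items()):
    per_million = occ / tokens * 1_000_000  # size-normalised frequency
    flag_rate = flagged / occ               # share of hedged uses
    print(year, round(per_million, 2), round(flag_rate, 2))
# A rising per-million frequency combined with a falling flag rate would
# be read as increasing acceptance of the neologism; comparing these two
# curves for loanwords vs. word-formation products addresses the paper's
# leading question.
```

Normalising to per-million frequencies is essential here, since the corpus itself grows across the compared time slices.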
Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage; its poetic genres impose unique combinatory constraints on linguistic elements. How does the constrained poetic structure facilitate speech segmentation when common linguistic and statistical cues are unreliable to listeners in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. Unprecedentedly, we found a phase precession phenomenon indicating predictive processes of speech segmentation—the neural phase advanced faster after listeners acquired knowledge of incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.
Construction-based language models assume that grammar is meaningful and learnable from experience. Focusing on five of the most elementary argument structure constructions of English, a large-scale corpus study of child-directed speech (CDS) investigates exactly which meanings/functions are associated with these patterns in CDS, and whether they are indeed specially indicated to children by their caretakers (as suggested by previous research, cf. Goldberg, Casenhiser and Sethuraman 2004). Collostructional analysis (Stefanowitsch and Gries 2003) is employed to uncover significantly attracted verb-construction combinations, and attracted pairs are classified semantically in order to systematise the attested usage patterns of the target constructions. The results indicate that the structure of the input may aid learners in making the right generalisations about constructional usage patterns, but such scaffolding is not strictly necessary for construction learning: not all argument structure constructions are coherently semanticised to the same extent (in the sense that they designate a single schematic event type of the kind envisioned in Goldberg’s [1995] ‘scene encoding hypothesis’), and they also differ in the extent to which individual semantic subtypes predominate in learners’ input.
Departing from Rooth's focus interpretation theory, the article discusses two types of (German) ellipsis phenomena: direct alternative and implicit alternative coordinative ellipsis. For the first type, which includes Stripping, Gapping, ATB, and RNR, it is characteristic that the semantic value of either conjunct instantiates the context variable of the respective focus operator in the other. For German polarity ellipsis and Sluicing, which constitute the other type, it is characteristic that the semantic value, which instantiates the variable given by the focus operator in the second conjunct, must be derived from the semantic value of the first conjunct and that the second conjunct always hosts an alternative set inducing item which demands new information focus in the first conjunct.
Our paper examines how bodily behavior contributes to the local meaning of OKAY. We explore the interplay between OKAY as response to informings and narratives and accompanying multimodal resources in German multi-party interaction. Based on informal and institutional conversations, we describe three different uses of OKAY with falling intonation and the recurrent multimodal patterns that are associated with them and that can be characterized as ‘multimodal gestalts’. We show that: 1. OKAY as a claim to sufficient understanding is typically accompanied by upward nodding; 2. OKAY after change-of-state tokens exhibits a recurrent pattern of up- and downward nodding with distinctive timing; and 3. OKAY closing larger activities is associated with gaze-aversion from the prior speaker.
The present paper outlines the projected second part of the Corpus Query Lingua Franca (CQLF) family of standards: CQLF Ontology, which is currently in the process of standardization at the International Organization for Standardization (ISO), in its Technical Committee 37, Subcommittee 4 (TC37SC4) and its national mirrors. The first part of the family, ISO 24623-1 (henceforth CQLF Metamodel), was successfully adopted as an international standard at the beginning of 2018. The present paper reflects the state of the CQLF Ontology at the moment of submission for the Committee Draft ballot. We provide a brief overview of the CQLF Metamodel, present the assumptions and aims of the CQLF Ontology, its basic structure, and its potential extended applications. The full ontology is expected to emerge from a community process, starting from an initial version created by the authors of the present paper.
Corpus REDEWIEDERGABE
(2020)
This article presents the corpus REDEWIEDERGABE, a German-language historical corpus with detailed annotations for speech, thought and writing representation (ST&WR). With approximately 490,000 tokens, it is the largest resource of its kind. It can be used to answer literary and linguistic research questions and serve as training material for machine learning. This paper describes the composition of the corpus and the annotation structure, discusses some methodological decisions and gives basic statistics about the forms of ST&WR found in this corpus.
One problem of data-driven answer extraction in open-domain factoid question answering is that the class distribution of labeled training data is fairly imbalanced. In an ordinary training set, there are far more incorrect answers than correct answers. The class imbalance is thus inherent to the classification task. It has a deteriorating effect on the performance of classifiers trained by standard machine learning algorithms. They usually have a heavy bias towards the majority class, i.e. the class which occurs most often in the training set. In this paper, we propose a method to tackle class imbalance by applying a form of cost-sensitive learning, which is preferable to sampling. We present a simple but effective way of estimating the misclassification costs on the basis of the class distribution. This approach offers three benefits. Firstly, it maintains the distribution of the classes of the labeled training data. Secondly, this form of meta-learning can be applied to a wide range of common learning algorithms. Thirdly, this approach can be easily implemented with the help of state-of-the-art machine learning software.
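The abstract does not reproduce the cost formula itself. As a hedged illustration of deriving misclassification costs purely from the class distribution, a common inverse-frequency heuristic (not necessarily the paper's exact estimator; function name and counts are invented) can be sketched as follows:

```python
from collections import Counter

def misclassification_costs(labels):
    """Estimate per-class misclassification costs from the class
    distribution alone: each class receives a cost inversely
    proportional to its relative frequency, so errors on the rare
    'correct answer' class are penalised more heavily during
    training, while the original data distribution stays untouched."""
    counts = Counter(labels)
    n_classes = len(counts)
    total = len(labels)
    return {c: total / (n_classes * freq) for c, freq in counts.items()}

# A typically imbalanced answer-extraction training set:
labels = ["incorrect"] * 90 + ["correct"] * 10
costs = misclassification_costs(labels)
```

With these invented counts, an error on the minority class costs nine times as much as one on the majority class, which is the kind of bias correction a cost-sensitive learner can exploit without resampling the data.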
In this chapter, we give an overview of what is specific to comparisons made within the perspective of Conversation Analysis (CA), and we position them in relation to other fields. We introduce the analytical mentality, methodology, and procedures of CA, and we show how we used it for the analysis of OKAY in this volume.
This article examines a recurrent format that speakers use for defining ordinary expressions or technical terms. Drawing on data from four different languages - Flemish, French, German, and Italian - it focuses on definitions in which a definiendum is first followed by a negative definitional component (‘definiendum is not X’), and then by a positive definitional component (‘definiendum is Y’). The analysis shows that by employing this format, speakers display sensitivity towards a potential meaning of the definiendum that recipients could have taken to be valid. By negating this meaning, speakers discard this possible, yet unintended understanding. The format serves three distinct interactional purposes: (a) it is used for argumentation, e.g. in discussions and political debates, (b) it works as a resource for imparting knowledge, e.g. in expert talk and instructions, and (c) it is employed, in ordinary conversation, for securing the addressee's correct understanding of a possibly problematic expression. The findings contribute to our understanding of how epistemic claims and displays relate to the turn-constructional and sequential organization of talk. They also show that the much quoted ‘problem of meaning’ is, first and foremost, a participant's problem.
Digital humanities research under United States and European copyright laws. Evolving frameworks
(2021)
This chapter summarizes the current state of copyright laws in the United States and European Union that most affect Digital Humanities research, namely the fair use doctrine in the US and research exceptions in Europe, including the Directive on Copyright in the Digital Single Market, which was finally adopted in 2019. This summary begins with a description of recent copyright advances most relevant to DH research, and finishes with an analysis of a significant remaining legal hurdle which DH researchers face: how do fair use and research exceptions deal with the critical issue of circumventing technological protection measures (TPM, a.k.a. DRM)? Our discussion of the lawful means of obtaining TPM-protected material may contribute to both current DH research and planning decisions and inform future stakeholders and lawmakers of the need to allow TPM circumvention for academic research.
DIL is a bilingual (German-Italian) online dictionary of linguistics. It is still under construction and contains 240 lemmas belonging to the subfield of “German as a Foreign Language”, but other subfields are in preparation. DIL is an open dictionary; participation of experts from various subfields is welcome. The dictionary is intended for a user group with different levels of knowledge, therefore it is a multifunctional dictionary. An analysis of existing dictionaries, either in their online or written form, was essential in order to make important decisions for the macro- or microstructure of DIL; the results are discussed. Criteria for the selection of entries and an example of an entry conclude the article.
A constructicon, i.e., a structured inventory of constructions, essentially aims at documenting functions of lexical and grammatical constructions. Among other parameters, so-called constructional collo-profiles, as introduced by Herbst (2018, 2020), are conclusive for determining constructional meanings. They provide information on how relevant individual words are for construction slots, they hint at usage preferences of constructions and serve as a helpful indicator for semantic peculiarities of constructions. However, even though collo-profiles constitute an indispensable component of constructicon entries, they pose major challenges for constructicographers: for a constructicographic enterprise it is not feasible to conduct collostructional analyses for hundreds or even thousands of constructions. In this article, we introduce a procedure based on the large language model BERT that makes it possible to predict collo-profiles without having to extensively annotate instances of constructions in a given corpus. Specifically, by discussing the constructions X macht Y ADJP (‘X makes Y ADJ’, e.g. he drives him crazy) and N1 PREP N1 (e.g., bumper to bumper, constructions over constructions), we show how the developed automated system generates collo-profiles based on a limited number of annotated instances. Finally, we place collo-profiles alongside other dimensions of constructional meanings included in the German Constructicon.
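To make the notion of a collo-profile concrete: a profile ranks the words filling a construction slot by how strongly they are attracted to that slot relative to their overall corpus frequency. The sketch below uses a simple pointwise-mutual-information-style score, not the Fisher exact test of classical collostructional analysis and not the paper's BERT procedure; all counts are invented:

```python
import math

def collo_profile(slot_fillers, corpus_freq, corpus_size):
    """Rank the fillers of a construction slot by an attraction score:
    log2 of how much more probable the word is in the slot than in
    the corpus at large. A toy stand-in for collostructional tests."""
    n = sum(slot_fillers.values())
    scores = {}
    for word, f in slot_fillers.items():
        p_slot = f / n
        p_corpus = corpus_freq[word] / corpus_size
        scores[word] = math.log2(p_slot / p_corpus)
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Invented counts for the ADJ slot of 'X macht Y ADJP':
slot = {"verrückt": 30, "glücklich": 12, "groß": 3}
corpus = {"verrückt": 200, "glücklich": 300, "groß": 5000}
profile = collo_profile(slot, corpus, corpus_size=1_000_000)
```

Under these assumed counts, a frequent but unspecific adjective like *groß* ranks far below *verrückt*, which is rare overall but typical of the slot; this is the asymmetry a collo-profile is meant to capture.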
Entity framing is the selection of aspects of an entity to promote a particular viewpoint towards that entity. We investigate entity framing of political figures through the use of names and titles in German online discourse, extending current research on entity framing through titling and naming, which has concentrated on English only. We collect tweets that mention prominent German politicians and annotate them for stance. We find that the formality of naming in these tweets correlates positively with their stance. This confirms sociolinguistic observations that naming and titling can have a status-indicating function and suggests that this function is dominant in German tweets mentioning political figures. We also find that this status-indicating function is much weaker in tweets from users that are politically left-leaning than in tweets by right-leaning users. This is in line with observations from moral psychology that left-leaning and right-leaning users assign different importance to maintaining social hierarchies.
The idea of this article is to take the immaterial and somehow ethereal nature of aesthetic concepts seriously by asking how aesthetic concepts are negotiated and thus formed in communication. My examples come from theatrical production, where aesthetic decisions naturally play a major role. In the given case, an aesthetic concept is introduced with which only the director, but none of the actors, is familiar at the beginning of the rehearsals. The concept, Wabi Sabi, comes from Japanese culture. As the whole rehearsal process was video recorded, it is possible to track the process of how the concept is negotiated and acquired over time. So, instead of defining criteria for what Wabi Sabi as an aesthetic concept “consists of,” this article seeks to show how the concept is introduced, explained and “used” within a practical context, in this case a theater rehearsal. In contrast to conventional models of aesthetic experience, I am interested in the ways in which an aesthetic concept is configured in and through socially organized interaction, and — vice versa — how that interaction contributes to the situational accomplishment of the same concept. In short: I am interested in the “doing” of aesthetic concepts, especially in “doing Wabi Sabi.”
Older adults are often exposed to elderspeak, a specialized speech register linked with negative outcomes. However, previous research has mainly been conducted in nursing homes without considering multiple contextual conditions. Based on a novel contextually-driven framework, we examined elderspeak in an acute general versus geriatric German hospital setting. Individual-level information such as cognitive impairment (CI) and audio-recorded data from care interactions between 105 older patients (M = 83.2 years; 49% with severe CI) and 34 registered nurses (M = 38.9 years) were assessed. Psycholinguistic analyses were based on manual coding (k = .85 to k = .97) and computer-assisted procedures. First, diminutives (61%), collective pronouns (70%), and tag questions (97%) were detected. Second, patients’ functional impairment emerged as an important factor for elderspeak. Our study suggests that functional impairment may be a more salient trigger of stereotype activation than CI and that elderspeak deserves more attention in acute hospital settings.
This paper investigates emergent pseudo-coordination in spoken German. In a corpus-based study, seven verbs in the first conjunct are analyzed regarding the degree of semantic bleaching and the development of subjective or aspectual meaning components. Moreover, it is shown that each verb displays distinct tendencies for co-occurrences, especially with deictic adverbs in the first conjunct and with specific verbs and verb classes in the second conjunct. It is argued that pseudo-coordination is originally motivated by the need for ‘chunking’ in unplanned speech and that it is still prominently used in this function in German, in contrast to languages in which pseudo-coordination is grammaticalized further.
The sentiment polarity of an expression (whether it is perceived as positive, negative or neutral) can be influenced by a number of phenomena, foremost among them negation. Apart from closed-class negation words like no, not or without, negation can also be caused by so-called polarity shifters. These are content words, such as verbs, nouns or adjectives, that shift polarities in their opposite direction, e. g. abandoned in “abandoned hope” or alleviate in “alleviate pain”. Many polarity shifters can affect both positive and negative polar expressions, shifting them towards the opposing polarity. However, other shifters are restricted to a single shifting direction. Recoup shifts negative to positive in “recoup your losses”, but does not affect the positive polarity of fortune in “recoup a fortune”. Existing polarity shifter lexica only specify whether a word can, in general, cause shifting, but they do not specify when this is limited to one shifting direction. To address this issue we introduce a supervised classifier that determines the shifting direction of shifters. This classifier uses both resource-driven features, such as WordNet relations, and data-driven features like in-context polarity conflicts. Using this classifier we enhance the largest available polarity shifter lexicon.
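The abstract names two feature families for the classifier (resource-driven features such as WordNet relations, and data-driven features such as in-context polarity conflicts). As a hedged illustration of the data-driven idea only, and not the paper's actual supervised classifier, the direction of a shifter could be guessed from counts of the polar arguments it is observed with; all counts and thresholds below are invented:

```python
def shifting_direction(pos_args, neg_args):
    """Toy heuristic: a shifter observed almost exclusively with
    negative polar arguments (e.g. 'recoup' + 'losses') is predicted
    to shift negative-to-positive, and vice versa; mixed evidence
    yields a bidirectional shifter. The 3x threshold is illustrative."""
    if neg_args >= 3 * max(pos_args, 1):
        return "negative-to-positive"
    if pos_args >= 3 * max(neg_args, 1):
        return "positive-to-negative"
    return "bidirectional"

# Invented corpus counts: (positive arguments, negative arguments)
observations = {
    "recoup":    (1, 12),   # 'recoup your losses'
    "abandon":   (10, 1),   # 'abandoned hope'
    "alleviate": (2, 11),   # 'alleviate pain'
    "reduce":    (8, 9),    # shifts both polarities
}
predictions = {w: shifting_direction(p, n) for w, (p, n) in observations.items()}
```

A real system would of course combine such distributional evidence with lexical-resource features rather than rely on a single threshold.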
We present a technique called event mapping that makes it possible to project text representations into event lists, produce an event table, and derive quantitative conclusions for comparing the text representations. The main application of the technique is the case where two classes of text representations have been collected in two different settings (e.g., as annotations in two different formal frameworks) and we can compare the two classes with respect to their systematic differences in the event table. We illustrate how the technique works by applying it to data collected in two experiments (one using annotations in Vladimir Propp’s framework, the other using natural language summaries).
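The pipeline is described abstractly; a minimal sketch of its shape (event lists, an event table of counts, and one quantitative comparison) might look as follows. The event labels and the choice of Jaccard overlap as the comparison measure are assumptions for illustration, not the authors' actual instantiation:

```python
from collections import Counter

def event_table(representations):
    """Project each text representation (here already reduced to a
    list of event labels) into a row of event counts."""
    return {name: Counter(events) for name, events in representations.items()}

def jaccard(row_a, row_b):
    """One possible quantitative comparison: overlap of the event
    types two representations mention."""
    a, b = set(row_a), set(row_b)
    return len(a & b) / len(a | b)

# Invented event lists, e.g. from a Proppian annotation vs. a summary:
reps = {
    "propp":   ["departure", "struggle", "victory", "return"],
    "summary": ["departure", "struggle", "wedding"],
}
table = event_table(reps)
overlap = jaccard(table["propp"], table["summary"])
```

Systematic differences between the two annotation settings would then show up as event types that consistently appear in one row of the table but not the other.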
In natural language processing, question answering systems have gained considerable importance over the last decade. Robust tools such as statistical syntax parsers and named entity recognizers in particular have made it possible to extract linguistically structured information from unannotated text corpora. In addition, the Text REtrieval Conference (TREC) annually defines benchmarks for general, domain-independent question answering scenarios. As a rule, question answering systems only perform well if they implement robust procedures for the different question types that occur in a question set. A characteristic question type are the so-called event questions. Although events have been the subject of intensive research in theoretical linguistics, above all in sentence semantics, since the middle of the last century, they have so far remained largely unexplored with respect to question answering systems. This thesis therefore addresses this problem. The aim of this work is, on the one hand, a characterization of event structure in question answering systems, developed with reference to theoretical linguistics and an analysis of the TREC 2005 question set. On the other hand, an event-based answer extraction procedure is to be designed and implemented that builds on the results of this analysis. Information from diverse linguistic levels is to be integrated in a data-driven manner into a uniform model. Special linguistic resources, such as WordNet and subcategorization lexicons, will play a central role. Furthermore, an event structure will be presented that allows events to be matched regardless of whether they are evoked by full verbs or nominalizations.
With the implementation of an event-based answer extraction module, the thesis ultimately also seeks to answer the question of whether explicit event modeling can improve the performance of a question answering system.
Sexual harassment severely impacts the educational system in the West African country Benin and the progress of women in this society that is characterized by great gender inequality. Knowledge of the belief systems rooting in the sociocultural context is crucial to the understanding of sexual harassment. However, no study has yet investigated how sexual harassment is related to fundamental beliefs in Benin or West African countries. We conducted a field study on 265 female and male students from several high schools in Benin to investigate the link between sexual harassment and measures of ambivalent sexism, gender identity, and rape myth acceptance. Almost half of the sample reported having experienced sexual harassment personally or among peers. Levels of sexism and rape myth acceptance were very high compared to other studies. These attitudes appeared to converge in a sexist belief system that was linked to personal experiences, the perceived probability of experiencing and fear of sexual harassment. Results suggest that sexual harassment is a societal problem and that interventions need to address fundamental attitudes held in societies low in gender equality.
Extending the possibilities for collaborative work with TEI/XML through the usage of a wiki system
(2013)
This paper presents and discusses an integrated project-specific working environment for editing TEI/XML-files and linking entities of interest to a dedicated wiki system. This working environment has been specifically tailored to the workflow in our interdisciplinary digital humanities project GeoBib. It addresses some challenges that arose while working with person-related data and geographical references in a growing collection of TEI/XML-files. While our current solution provides some essential benefits, we also discuss several critical issues and challenges that remain.
Knowledge in textual form is always presented as visually and hierarchically structured units of text, which is particularly true in the case of academic texts. One research hypothesis of the ongoing project Knowledge ordering in texts - text structure and structure visualisations as sources of natural ontologies is that the textual structure of academic texts effectively mirrors essential parts of the knowledge structure that is built up in the text. The structuring of a modern dissertation thesis (e.g. in the form of an automatically generated table of contents, ToC), for example, represents a compromise between requirements of the text type and the methodological and conceptual structure of its subject-matter. The aim of the project is to examine how visual-hierarchical structuring systems are constructed, how knowledge structures are encoded in them, and how they can be exploited to automatically derive ontological knowledge for navigation, archiving, or search tasks. The idea to extract domain concepts and semantic relations mainly from the structural and linguistic information gathered from tables of contents represents a novel approach to ontology learning.
To date, little is known about prosodic accommodation and its conversational functions in instances of overlapping talk in conversation. A major conversational action that happens in overlap is turn competition. It is not known whether participants accommodate prosodic parameters locally in the overlapped turn (initialisation) or access a repertoire of prosodic patterns that refer to general prosodic parameter norms (normalisation) when competing for the turn in overlap. This paper investigates the initialisation and normalisation of fundamental frequency (f0) and assesses its role as a resource for turn competition in overlap. We drew instances of overlapping talk from a corpus of conversational multi-party interactions in British English. We annotated the overlaps on a competitiveness scale and categorised them by overlap onset position and conversational function. We automatically extracted f0 parameters from the speech signal and processed them into f0 accommodation features that represent the normalising or the initialising use of f0. Using decision tree classification we found that f0 accommodation is only relevant as a turn competitive resource in overlaps that start clearly before a speaker transition. In this turn context, we found that normalising and initialising f0 features can both be relevant turn competitive resources. Their deployment depends on the conversational function of overlap.
We present a fine-grained NER annotation scheme with 30 labels and apply it to German data. Building on the OntoNotes 5.0 NER inventory, our scheme is adapted for a corpus of transcripts of biographic interviews by adding categories for AGE and LAN(guage) and also adding label classes for various numeric and temporal expressions. Applying the scheme to the spoken data as well as a collection of teaser tweets from newspaper sites, we can confirm its generality for both domains, also achieving good inter-annotator agreement. We also show empirically how our inventory relates to the well-established 4-category NER inventory by re-annotating a subset of the GermEval 2014 NER coarse-grained dataset with our fine label inventory. Finally, we use a BERT-based system to establish some baselines for NER tagging on our two new datasets. Global results in in-domain testing are quite high on the two datasets, near what was achieved for the coarse inventory on the CoNLL-2003 data. Cross-domain testing produces much lower results due to the severe domain differences.
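The relation between a fine-grained inventory and the classic 4-category scheme can be made concrete with a label projection. The mapping below is a hedged sketch: the labels AGE and LAN come from the abstract, the remaining fine labels are typical OntoNotes categories, and the projection itself is an assumption, not the paper's published mapping:

```python
# Hypothetical projection from a fine-grained inventory onto the
# 4-category scheme (PER/LOC/ORG/MISC); labels without a classic
# entity counterpart (numeric/temporal expressions) fall back to 'O'.
FINE_TO_COARSE = {
    "PERSON": "PER", "GPE": "LOC", "LOC": "LOC", "ORG": "ORG",
    "AGE": "MISC", "LAN": "MISC", "NORP": "MISC",
    "DATE": "O", "CARDINAL": "O",
}

def coarsen(tags):
    """Map a sequence of fine-grained labels to coarse labels,
    defaulting to 'O' for anything outside the mapping."""
    return [FINE_TO_COARSE.get(t, "O") for t in tags]

fine = ["PERSON", "AGE", "GPE", "DATE"]
coarse = coarsen(fine)
```

Such a projection is what makes the re-annotated GermEval subset directly comparable to systems trained on the 4-category inventory.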
Psychological research has neglected people whose accent does not match their appearance. Most research on person perception has focused on appearance, overlooking accents that are equally important social cues. If accents were studied, it was often done in isolation (i.e., detached from appearance). We examine how varying accent and appearance information about people affects evaluations. We show that evaluations of expectancy-violating people shift in the direction of the added information. When a job candidate looked foreign, but later spoke with a native accent, his evaluations rose and he was evaluated best of all candidates (Experiment 1a). However, the sequence in which information was presented mattered: When heard first and then seen, his evaluations dropped (Experiment 1b). Findings demonstrate the importance of studying the combination and sequence of different types of information in impression formation. They also allow predicting reactions to ethnically mixed people, who are increasingly present in modern societies.
Based on the empirical data of 97 fourth-graders from three districts of Braunschweig in Germany, this paper investigates the possibility of changing semantic frames in multilingual communities. The focus of study is the verb field of self-motion. In a free-sorting task involving 52 verbs, Turkish-speaking students, in particular, placed the verbs schleichen (‘to sneak’) and kommen (‘to come’) in the same group. When explaining the perceived similarity they also used the word schleichen (‘to sneak’), in a specific grammatical construction that is not found in Standard German. This paper suggests that semantic frames may change along with grammatical constructions when typologically distinct languages come into close contact.
Starting from early approaches within Generative Grammar in the late 1960s, the article describes and discusses the development of different theoretical frameworks of lexical decomposition of verbs. It presents the major subsequent conceptions of lexical decompositions, namely, Dowty’s approach to lexical decomposition within Montague Semantics, Jackendoff’s Conceptual Semantics, the LCS decompositions emerging from the MIT Lexicon Project, Pustejovsky’s Event Structure Theory, Wierzbicka’s Natural Semantic Metalanguage, Wunderlich’s Lexical Decompositional Grammar, Hale and Keyser’s Lexical Relational Structures, and Distributed Morphology. For each of these approaches, (i) it sketches their origins and motivation, (ii) it describes the general structure of decompositions and their location within the theory, (iii) it explores their explanative value for major phenomena of verb semantics and syntax, and (iv) it briefly evaluates the impact of the theory. Referring to discussions in article 7 [Semantics: Foundations, History and Methods] (Engelberg) Lexical decomposition, a number of theoretical topics are taken up throughout the paper concerning the interpretation of decompositions, the basic inventory of decompositional predicates, the location of decompositions on the different levels of linguistic representation (syntactic, semantic, conceptual), and the role they play for the interfaces between these levels.
This paper discusses German neologisms in the so-called “new media” and presents a German corpus-based online dictionary of neologisms. Several neological morphemes and lexemes, as well as their meanings, will be presented, showing that these new modes of communication are an important source of enrichment of the German lexicon.
This paper aims at showing how quantitative corpus linguistic analysis can inform qualitative analysis of digital media discourse with respect to the mediality of language in use. Using the example of protest discourse in Twitter, in the field of anti-Islamic ‘Pegida’ demonstrations, a three-step method of collecting, reducing and interpreting salient data is proposed. Each step is aligned with operative medial features of the microblog: hashtags, retweets and @-interactions. The exemplary analysis reveals the importance of discussions of attendance numbers in protest discourse and the asymmetry between administrative (i.e. the police) and non-administrative discourse agents. Furthermore, it exemplifies how frequency analysis and sequence analysis can be combined for research in media linguistics.
Present-day German uses two formally different patterns of compounding in N+N compounds. The first combines bare stems (e.g. Tisch+decke ‘tablecloth’) while the second contains an intervening linking element (LE) as in Geburt-s-ort ‘birth-LE-place’. The linked compounding type developed in Early New High German (1350–1650) from phrasal constructions by reanalyzing genitive attributes as first constituents of compounds. The present paper uses corpus data to explore three key stages in this development: In the initial stage, it shows how prenominal non-specific genitive constructions lent themselves to reanalysis due to their functional overlap and formal similarity. Additionally, compounds seem to have replaced not only prenominal genitives, but also structurally different postnominal genitives. In the second stage, the new compounding pattern increases in productivity between 1500 and 1710, especially compared to the older pattern without linking elements. The last stage pertains to changes in spelling practice. It shows that linked compounds were written separately in the beginning. Their gradual graphematic integration into directly connected words was reversed by a century of hyphenation (1650–1750). This is strikingly different from present-day spelling practice and shows that the linked pattern was still perceived as marked.
The present paper examines the rise and fall of Modern High German loanwords in English from 1600 until 2000, principally making use of the record of borrowing documented by the Oxford English Dictionary (OED) in its Third Edition (online version, in revision 2000-). Groups of loanwords are analysed by century, with reference to the changing social and cultural landscape characterising relationships between the relevant nations over this period. This is not a simple picture: each language grows over the period in different ways, and the speakers of English look to German at different times for different types of borrowing, as the political and intellectual balance alters.
From Open Source to Open Information. Collaborative Methods in Creating XML-based Markup Languages
(2000)
From Proof Texts to Logic. Discourse Representation Structures for Proof Texts in Mathematics
(2009)
We present an extension to Discourse Representation Theory that can be used to analyze mathematical texts written in the commonly used semi-formal language of mathematics (or at least a subset of it). Moreover, we describe an algorithm that can be used to check the resulting Proof Representation Structures for their logical validity and adequacy as a proof.
In this paper, we compare three different generalization methods for in-domain and cross-domain opinion holder extraction, namely simple unsupervised word clustering, an induction method inspired by distant supervision, and the usage of lexical resources. The generalization methods are incorporated into diverse classifiers. We show that generalization causes significant improvements and that the impact of improvement depends on the type of classifier and on how much training and test data differ from each other. We also address the less common case of opinion holders realized in patient position and suggest approaches, including a novel (linguistically informed) extraction method, for detecting those opinion holders without labeled training data, as standard datasets contain too few instances of this type.
Between January 2020 and summer 2021, many new words and phrases contributed to the expansion of the German vocabulary in order to enable communication under the new conditions during the corona pandemic. This rapid expansion of vocabulary has most notably affected lexicography as a discipline of applied linguistics. General language dictionaries or terminological dictionaries have quickly reflected on how the German lexicon evolved during the corona pandemic: new entries were added, others were revised. This paper, however, focuses on the ways in which a German (specialized) neologism dictionary project, the "Neologismenwörterbuch" published online at the "Leibniz Institute for the German Language, Mannheim" (see https://www.owid.de/docs/neo/start.jsp), has chosen to capture and document lexicographic information in a timely manner. Neologisms are (following the definition applied here) lexical units or senses/meanings which emerge in a language community over a specific period of time of language development, which diffuse, are generally accepted as language norms, and which the majority of speakers perceive as new for some time. Thus, the "Neologismenwörterbuch" used to record neologisms only retrospectively, that is, after their lexicalization. As a consequence, users of the dictionary were often not able to obtain details on words that were particularly conspicuous at a particular time in a specific discourse, thus raising questions concerning their meaning, correct spelling, etc. This, however, did not imply that the lexicographers of the project had not already collected these words with some preliminary information in a list of candidates for inclusion in an internal database. Therefore, the project started to publish online an index of monitored words including lexical units that had emerged since 2011, for which only time will tell whether they will diffuse and manifest as language norms.
This list format has also been used since April 2020 to issue a compilation of corona-related neologisms as part of the "Neologismenwörterbuch". In October 2021, this inventory included more than 1,800 corona-related neologisms, while more than 700 further candidates in an internal database still awaited lexicographic description and inclusion in the online index (see https://www.owid.de/docs/neo/listen/corona.jsp). In this paper, numerous examples illustrate how new words, new senses and new uses in the context of the Covid-19 pandemic are reflected in the dictionary.
Traditionally, research on language change has been a post-mortem activity, focused on isolated changes that are complete and often only documented in written texts. In the 1960s the field was advanced considerably by Labovian sociolinguistics and the investigation of “change in progress” adduced through patterns of community-internal linguistic variation correlated with external facts about speakers such as age and class (see Labov 1994 for an overview). However, despite the many benefits of such work on “dynamic synchrony,” we still know relatively little about how language change unfolds over the lifetimes of individual speakers, that is, in real time (cf. Bailey et al. 1991). The logistical challenges of such research are, of course, considerable. Whereas it is straightforward for psycholinguists to observe language development in children over the course of a few years, documenting changes in the verbal behavior of individuals over several decades is by contrast much less feasible. Nevertheless, present theoretical models of language change could be considerably improved by the results of real-time studies.
This chapter focuses on the contributions of German scholars to two of the three main research questions that have defined EU studies. Leaving aside the debate on the drivers of European integration, i.e. European integration theory, we will discuss the «governance turn» Fritz Scharpf, Beate Kohler-Koch, Arthur Benz, Ingeborg Tömmel and others promoted in studying EU institutions, as well as the more policy-oriented approaches by Adrienne Héritier and again Fritz Scharpf and their students. We will then address the ever-growing literature on Europeanization, which examines how EU policies, institutions and political processes have affected the domestic structures of member states, membership candidates, as well as neighborhood and third countries. In this context, German scholars have also made contributions to EU studies that are methodological rather than substantive in nature. Whereas Thomas König, Gerald Schneider, and others promoted the application of quantitative approaches, scholars like Bernhard Ebbinghaus and Markus Haverland dealt with general questions of research design, such as case selection and causal inference. Finally, we will also discuss German contributions to diffusion research. The European Union, as a most likely case for the diffusion of policies, has attracted considerable attention from scholars dealing with the question of when and how policies spread across time and space. It thus comes as no surprise that EU studies and diffusion research have mutually benefited from each other. In this regard, German scholars like Katharina Holzinger, Christoph Knill, Tanja Börzel, Thomas Plümper, Thomas Risse and others have played a prominent role, too.
During the second half of the 19th century, extended regions of the South Pacific came to be part of the German colonial empire. The colonial administration made repeated and diverse efforts to implement German as the official language in several settings (administration, government, education) in the colonial areas. Due to unfamiliar sociological and linguistic conditions, to competition with English as a(nother) prestigious colonizer language, and to the short time-span of German colonial rule, these efforts had only little language-related effect. Nevertheless, some linguistic traces remained, and these seem to reflect the areas in which language implementation was organized most thoroughly. The study combines two directions of investigation: First, taking a historical approach, legal and otherwise official documents and information are considered in order to understand how the implementation process was planned and (intended to be) carried out. Second, from a linguistic perspective, documented lexical borrowings and other traces of linguistic contact are identified that can corroborate the historical findings by reflecting a greater effect of contact in those areas where the implementation of German was carried out most strictly. The goal of this paper is, firstly, to trace the political and missionary activities in language planning with regard to German in the colonial Pacific, rather similar to a modern language policy scenario in which a new code of prestige or national unity is implemented. Secondly, these activities are evaluated against the outcome that can be observed, in historical practice as well as in long-term effects of language contact up until today.
This article approaches the question of how language change can be observed and described by empirical means: it traces the language biographies of Americans of German descent from Wisconsin. These case studies, two of which are examined in more detail here, reveal very different developments over the lifetime of a speaker. The retention and careful modification of language use by a Swiss German speaker stands in contrast to the almost complete loss of German language competence by a Low German speaker.
To reconstruct these processes of change in real time, the method of re-recording is presented: the renewed recording of speakers who had already been recorded once in earlier recording campaigns in Wisconsin (here: 1968 and 2001). First results of the underlying linguistic analyses are presented in tables.
In the 'Minimalist Program' (Chomsky 1995), A-movements (i.e., N-raising, V-raising, etc.) and A'-movements (i.e., extraposition, VP-adjunction, scrambling, etc.) are treated as operations of very unequal status. The present article examines the distinct properties of A-movements and A'-movements on the basis of three groups of arguments, namely topicalization (section 2), the displacement of weak pronouns (section 3), and verb-second under the symmetry assumption (section 4). The conclusion is that the analysis advocated here offers a way to distinguish A-movements from A'-movements without banishing the latter from the purview of grammar.
In this chapter, we will investigate smartphone-based showing sequences in everyday social encounters, that is, moments in which a personal mobile device is used for presenting (audio-)visual content to co-present participants. Despite a growing interest in object-centred sequences and mundane technology use, detailed accounts of the sequential, multimodal, and material dimensions of showing sequences are lacking. Based on video data of social interactions in different languages and on the framework of multimodal interaction analysis, this chapter will explore the link between mobile device use and social practices. We will analyse how smartphone showers and their recipients coordinate the manipulation of a technological object with multiple courses of action, and reflect upon the fundamental complexity of this by-now routine joint activity.
This is an introduction to a special issue of Dictionaries: Journal of the Dictionary Society of North America. It offers a characterization of neology and describes the Globalex-sponsored workshop at which the papers in the issue originated. It provides an overview of the papers, which treat lexicographical neology and neological lexicography in Danish, Dutch, Estonian, Frisian, Greek, Korean, Spanish, and Swahili and address relevant aspects of lexicography in those languages, presenting state-of-the-art research into neology and ideas about modern lexicographic treatment of neologisms in various dictionary types.
Novel formats of construction-based description hold great potential for phenomena that fall through the cracks in traditional kinds of linguistic reference works. Using the example of German verb argument structure constructions with a prepositional object, we demonstrate that a construction-based description of such phenomena is superior to existing lexicographic and grammaticographic treatments, but that it also poses a number of new problems. The most fundamental of these relates to the fact that construction-based analyses can be proposed on different levels of abstraction. We illustrate pertinent problems relating to the precise identification of constructional form and meaning and suggest a multi-layered descriptive format for web-based electronic reference constructica that can accommodate these challenges. Semantically, the proposed solution integrates both lumping and splitting perspectives on constructional grain size and permits users to flexibly zoom in and out on individual elements in the resource. Formally, it can capture variation in the number and marking of realised arguments as found in e.g. passives and transitivity alternations. Aspects of the theoretical controversy between Construction Grammar and Valency Theory are addressed where relevant, but our focus is on questions of description and the practical implementation of construction-based analyses in a suitable type of linguistic reference work.
This study explores the interdependence of qualitative and quantitative analysis in articulating empirically plausible and theoretically coherent generalizations about grammatical structure. I will show that the use of large electronic corpora is indispensable to the grammarian's work, serving as a rich source of semantic and contextual information, which turns out to be crucial in categorizing and explaining grammatical forms. These general concerns are illustrated by the patterns of use of Czech relative clauses (RC) with the non-declinable relativizer co, by taking a set of existing claims about these RCs and testing their accuracy on corpus material. The relevant analytic categories revolve around the referential type of the relativized noun, the interaction between relativization and deixis, and the semantic relationship between the relativized noun and the proposition expressed by the RC. The analysis demonstrates that some of the existing claims are fully invalid in the face of regularly attested semantic distinctions, while others are more or less on the right track but often not comprehensive or precise enough to capture the full richness of the facts.
In this paper we present some preliminary considerations concerning the possibility of automatically parsing an annotated corpus for N-N compounds. This should in principle be possible at least for relational and stereotype compounds, if the lemmatization of the corpus connects the lemmata with lexical entries as described in Höhle (1982). These lexical entries then supply the necessary information about the argument structure of a relational noun or about the stereotypical purpose associated with the noun's referent, which can be used to establish a relation between the first and the head constituent of the compound.
How Do Speakers Define the Meaning of Expressions? The Case of German x heißt y (“x means y”)
(2020)
To secure mutual understanding in interaction, speakers sometimes explain or negotiate expressions. Adopting a conversation-analytic and interactional-linguistic approach, I examine which kinds of expressions participants explain in different sequential environments, using the format x heißt y ("x means y"). When speakers use it to clarify technical terms or foreign words that are unfamiliar to co-participants, they often provide a situationally anchored definition that is nevertheless rather context-free and therefore transferable to future situations. When they instead explain common (but indexical, ambiguous, polysemous, or problematic) expressions, speakers design their explanation in close connection to the local context, building on situational circumstances. I argue that x heißt y definitions in interaction do not meet the requirements of scientific or philosophical definitions, but that this is irrelevant for the situational exigencies speakers face.
The present paper examines a variety of ways in which the Corpus of Contemporary Romanian Language (CoRoLa) can be used. A multitude of examples highlights the wide range of interrogation possibilities that CoRoLa opens up for different types of users. The querying of CoRoLa shown here is supported by the KorAP frontend, through the query language Poliqarp. Queries address annotation layers, such as the lexical, morphological and, in the near future, the syntactic layer, as well as the metadata. Other issues discussed are how to build a virtual corpus, how to deal with errors, how to find expressions and how to identify expressions.
Sentiment Analysis is the task of extracting and classifying opinionated content in natural language texts. Common subtasks are the distinction between opinionated and factual texts, the classification of polarity in opinionated texts, and the extraction of the participating entities of an opinion(-event), i.e. the source from which an opinion emanates and the target towards which it is directed. With the emerging Web 2.0, which describes the shift towards a highly user-interactive communication medium, the amount of subjective content on the World Wide Web is steadily increasing. Thus, there is a growing need to automatically process this type of content, which sentiment analysis provides. Both natural language processing, which is the task of providing computational methods for the analysis and representation of natural language, and machine learning, which is the task of building task-specific classification models on the basis of empirical data, may be instrumental in mastering the challenges of the automatic sentiment analysis of written text. Many problems in sentiment analysis have been addressed with machine learning methods exclusively using fairly low-level feature designs, such as bag of words, containing little linguistic information. In this thesis, we examine the effectiveness of linguistic features in various subtasks of sentiment analysis, drawing heavily on the insights gained by natural language processing. Linguistic features can be applied in various classification methods, be it in rule-based classification, where the linguistic features are directly encoded as a classifier; in supervised machine learning, where these features complement basic low-level features; or in bootstrapping methods, where these features form a rule-based classifier generating a labeled training set from which a supervised classifier can be trained.
In this thesis, we will in particular focus on scenarios where the combination of linguistic features and machine learning methods is effective. We will look at common text classification tasks, both coarse-grained and fine-grained, and extraction tasks.
Meta-communicative practices are generally reflexive in a fairly obvious sense: Inasmuch as speakers use them to talk about or comment on earlier/subsequent talk, they use language self-reflexively. In this paper, we explore a practice that is reflexive not only in this meta-communicative sense but also in a sequential-interactional one: Prefacing a conversational turn with I was gonna say. We show that the I was gonna say-preface furnishes the following general semantic-pragmatic affordances: (1) It retroactively relates the speaker’s subsequent talk to preceding talk from a co-participant, (2) it embodies a claim to prior, now-preempted, communicative intent with regard to what their co-participant has (just) said/done, (3) it therefore displays its speaker’s orientation to the relevance or the appropriate placement of the action(s) done in their own subsequent talk at an earlier moment in the interaction, and (4) it reflexively re-invokes, or retrieves, this earlier moment as the relevant sequential context for their action(s). We then go on to illustrate how speakers draw on these sequentially reflexive affordances for managing recurrent interactional contingencies in specific sequential environments. The paper ends with a discussion of the role that reflexivity plays in and for the deployment of this practice.
Perhaps the biggest challenge in derivational morphology is to reconcile morphological idiosyncrasy with semantic regularity. How can it be explained that words with dead affixes and irregular allomorphy can nonetheless exhibit straightforward and stable semantic relations to their etymological bases (cf. strength ‘property of being strong’, obedience ‘act of obeying’, ‘property of being obedient’)? Theories based on the idea of capturing regularity in terms of synthetic rules for building up complex words out of morphemes along with rules for interpreting such structures in a compositional fashion have not made - and arguably cannot make - sense of this phenomenon. Taking the perspective of the learner in acquisition, I propose an alternative approach to meaning assignment based, not on syntagmatic relations among their constituent morphemes, but on paradigmatic relations between whole words. This approach not only explains the conditions under which meaning relations between words are expected to be stable but also accounts for another notorious mystery in derivational morphology, the frequent occurrence of total synonymy among affixes, as opposed to words.
We present the German Sentiment Analysis Shared Task (GESTALT) which consists of two main tasks: Source, Subjective Expression and Target Extraction from Political Speeches (STEPS) and Subjective Phrase and Aspect Extraction from Product Reviews (StAR). Both tasks focused on fine-grained sentiment analysis, extracting aspects and targets with their associated subjective expressions in the German language. STEPS focused on political discussions from a corpus of speeches in the Swiss parliament. StAR fostered the analysis of product reviews as they are available from the website Amazon.de. Each shared task led to one participating submission, providing baselines for future editions of this task and highlighting specific challenges. The shared task homepage can be found at https://sites.google.com/site/iggsasharedtask/.
In this paper we present work on developing a computerized grammar for the Latin language. It demonstrates the principles and challenges of developing a grammar for a natural language in a modern grammar formalism. The grammar presented here provides a useful resource for natural language processing applications in different fields. It can easily be adapted for language learning and for use in language technology for cultural heritage, such as translation applications or support for post-correction of document digitization.
Automatic summarization systems usually are trained and evaluated in a particular domain with fixed data sets. When such a system is to be applied to slightly different input, labor- and cost-intensive annotations have to be created to retrain the system. We deal with this problem by providing users with a GUI which allows them to correct automatically produced imperfect summaries. The corrected summary in turn is added to the pool of training data. The performance of the system is expected to improve as it adapts to the new domain.
This paper presents experiments on sentence boundary detection in transcripts of spoken dialogues. Segmenting spoken language into sentence-like units is a challenging task, due to disfluencies, ungrammatical or fragmented structures and the lack of punctuation. In addition, one of the main bottlenecks for many NLP applications for spoken language is the small size of the training data, as the transcription and annotation of spoken language is by far more time-consuming and labour-intensive than processing written language. We therefore investigate the benefits of data expansion and transfer learning and test different ML architectures for this task. Our results show that data expansion is not straightforward and even data from the same domain does not always improve results. They also highlight the importance of modelling, i.e. of finding the best architecture and data representation for the task at hand. For the detection of boundaries in spoken language transcripts, we achieve a substantial improvement when framing the boundary detection problem as a sentence pair classification task, as compared to a sequence tagging approach.
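The reframing described above, from sequence tagging to sentence pair classification, can be sketched as follows (a minimal illustration, not the authors' implementation): instead of labelling each token, every gap between adjacent tokens becomes one classification instance that pairs the left and right context windows, with the label indicating whether a boundary falls in that gap. The example transcript and boundary position below are hypothetical.

```python
def to_pair_instances(tokens, boundary_gaps, window=3):
    """Turn a token list into (left, right, label) pair instances.

    tokens: transcript tokens (no punctuation, as in speech).
    boundary_gaps: set of gap indices i, meaning a boundary lies
        between tokens[i-1] and tokens[i].
    window: number of context tokens kept on each side of the gap.
    """
    instances = []
    for gap in range(1, len(tokens)):
        left = tokens[max(0, gap - window):gap]
        right = tokens[gap:gap + window]
        label = 1 if gap in boundary_gaps else 0
        instances.append((" ".join(left), " ".join(right), label))
    return instances

tokens = "yeah i think so well maybe not".split()
# Hypothetical gold boundary between "so" and "well" (gap index 4).
instances = to_pair_instances(tokens, {4}, window=2)
for left, right, label in instances:
    print(f"{label}  [{left}] || [{right}]")
```

Each (left, right) pair can then be fed to any pair-encoding classifier; the point of the reframing is that the model sees both sides of a candidate boundary at once rather than one token at a time.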
The present thesis investigates the syntagmatic relations of certain Finnish emotion verbs that are formed with the derivational suffix -ua/-yä (e.g. suuttua ‘get angry’, pelästyä ‘get frightened’). Prototypically, the suffix expresses reflexivity, but in the case of the “inchoative” emotion verbs it indicates a change of state on the part of the experiencer, from a non-emotional state to an emotional state.
Response particles manage intersubjectivity. This conversation analytic study describes German eben (“exactly”). With eben, speaker A locally agrees with the immediately prior turn of B (the “confirmable”) and establishes a second indexical link: A relates B’s confirmable to a position A herself had already displayed (the “anchor”). Through claiming temporal priority, eben speakers treat a just-formulated position as self-evident and mark independence. Further evidence for the three-part structure “anchor-confirmable-eben” that eben sets in motion retrospectively comes from instances where eben speakers supply a missing/opaque anchor via a postpositioned display of independent access. Data are in German with English translation.
Instruction practices in German driving lessons: Differential uses of declaratives and imperatives
(2018)
Building on a corpus of 70 hours of German driving lessons, this paper studies the use of declaratives vs. imperatives for instruction. It shows how these linguistic resources are adapted to different praxeological, temporal and participant-related environments. Declaratives are used for first instructions, task-setting and post-trial discussions. They exhibit complex syntax and do not call for immediate compliance. Their high degree of explicitness conveys how the action is to be carried out. Imperative instructions overwhelmingly correct ongoing actions of students or respond to their failure to produce expected actions. They exhibit minimal argument structure. They are reminders which presuppose that the student monitors the scene and can perform the action unproblematically. They index that requests have to be complied with immediately or even urgently.