Germany's single official national language is German. The dominance of German in schools, politics, the legal system, administration and the entire written public domain is so great that, for a long time, the lack of a coherent language policy was not perceived as a problem. State restraint in this area is due partly to historical reasons and partly to the federal system in Germany, which grants the federal states far-reaching responsibilities in the fields of education and culture. More recently, multilingualism among the population has increased, resulting in a growing interest in understanding the language situation in Germany and, in particular, in taking a closer look at the different minority languages. In 2017, for the first time in about 80 years, the German microcensus included a question on the language of the population. The Institute for the German Language has also carried out various representative surveys; in the winter of 2017/2018, a large representative survey with questions on language repertoire and language attitudes was in the field.
In this paper we examine the composition and interactional deployment of suspended assessments in ordinary German conversation. We define suspended assessments as lexicosyntactically incomplete assessing TCUs that share a distinct cluster of prosodic-phonetic features which auditorily makes them come off as 'left hanging' rather than cut-off (e.g., Schegloff/Jefferson/Sacks 1977; Jasperson 2002) or trailing-off (e.g., Local/Kelly 1986; Walker 2012). Using CA/IL methodology (Couper-Kuhlen/Selting 2018) and drawing on a large body of video-recorded face-to-face conversations, we highlight the verbal, vocal and bodily-visual resources participants use to render such unfinished assessing TCUs recognizably incomplete and identify six recurrent usage types. Overall, the suspension of assessing TCUs appears to either serve as a practice for circumventing the production of assessments that are interactionally inapposite, or as a practice for coping with local contingencies that render the very doing of an assessment problematic for the speaker. Data are in German with English translations.
In many European languages, propositional arguments (PAs) can be realized as different types of structures. Cross-linguistically, complex structures with PAs show a systematic correlation between the strength of the semantic bond and the syntactic union (cf. Givón 2001; Wurmbrand/Lohninger 2023). Different languages also show similarities with respect to the (lexical) licensing of different PAs (cf. Noonan 1985; Givón 2001; Cristofaro 2003 on different predicate types). On a more fine-grained level, however, variation across languages can be observed with respect both to the syntactic-semantic properties of PAs and to their licensing and usage. This presentation takes a multi-contrastive view of different types of PAs as syntactic subjects and objects by looking at five European languages: EN, DE, IT, PL and HU. Our goal is to identify the parameters of variation in the clausal domain with PAs and thereby to contribute to a better understanding of the individual language systems on the one hand and the nature of linguistic variation in the clausal domain on the other. Phenomena and Methodology: We investigate the following types of PAs: direct object (DO) clauses (1), prepositional object (PO) clauses (2), subject clauses (3), and nominalizations (4, 5). Additionally, we discuss clause union phenomena (6, 7). The analyzed parameters include, among others, finiteness, linear position of the PA, (non-)presence of a correlative element, (non-)presence of a complementizer, and lexical-semantic class of the embedding verb. The phenomena are analyzed based on corpus data (using mono- and multilingual corpora), experimental data (acceptability judgement surveys) or introspective data.
In this chapter, we will investigate smartphone-based showing sequences in everyday social encounters, that is, moments in which a personal mobile device is used for presenting (audio-)visual content to co-present participants. Despite a growing interest in object-centred sequences and mundane technology use, detailed accounts of the sequential, multimodal, and material dimensions of showing sequences are lacking. Based on video data of social interactions in different languages and on the framework of multimodal interaction analysis, this chapter will explore the link between mobile device use and social practices. We will analyse how smartphone showers and their recipients coordinate the manipulation of a technological object with multiple courses of action, and reflect upon the fundamental complexity of this by-now routine joint activity.
The ubiquity of smartphones has been recognised within conversation analysis as having an impact on conversational structures and on the participants’ interactional involvement. However, most of the previous studies have relied exclusively on video recordings of overall encounters and have not systematically considered what is taking place on the device. Due to the personal nature of smartphones and their small displays, onscreen activities are of limited visibility and are thus potentially opaque for both the co-present participants (“participant opacity”) and the researchers (“analytical opacity”). While opacity can be an inherent feature of smartphones in general, analytical opacity might not be desirable for research purposes. This chapter discusses how a recording set-up consisting of static cameras, wearable cameras and dynamic screen captures allowed us to address the analytical opacity of mobile devices. Excerpts from multi-source video data of everyday encounters will illustrate how the combination of multiple perspectives can increase the visibility of interactional phenomena, reveal new analytical objects and improve analytical granularity. More specifically, these examples will emphasise the analytical advantages and challenges of a combined recording set-up with regard to smartphone use as multiactivity, the role of the affordances of the mobile device, and the prototypicality and “naturalness” of the recorded practices.
Introduction
(2023)
We present a method to identify and document a phenomenon on which there is very little empirical data: German phrasal compounds occurring as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora that are large and diverse enough to contain rarely seen phenomena. The method is based on word segmentation and morphological analysis and takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved with a combination of rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations seem to be linked to the degree of expressivity.
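The word segmentation step underlying such an approach can be sketched as a dictionary-based decomposition check: a long token counts as a candidate phrasal compound if it splits exhaustively into known word forms. The lexicon and the example token below are invented for illustration; the paper's actual method is data-driven and morphologically informed.

```python
# Dictionary-based segmentation sketch: test whether a long token can be
# exhaustively split into known word forms, as when detecting German phrasal
# compounds written as a single token. Lexicon and example are invented.

LEXICON = {"ich", "liebe", "dich", "gefuehl", "moment"}

def segment(token, lexicon):
    """Return one segmentation of token into lexicon words, or None."""
    token = token.lower()
    n = len(token)
    # best[i] holds a segmentation of token[:i], if one exists
    best = [None] * (n + 1)
    best[0] = []
    for i in range(1, n + 1):
        for j in range(i):
            if best[j] is not None and token[j:i] in lexicon:
                best[i] = best[j] + [token[j:i]]
                break
    return best[n]

print(segment("Ichliebedichgefuehl", LEXICON))  # ['ich', 'liebe', 'dich', 'gefuehl']
print(segment("Xyzgefuehl", LEXICON))           # None
```

A real system would additionally require the final segment to be a plausible nominal head and would weight segmentations by corpus frequency.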
In this paper we present the results of an automatic classification of Russian texts into three levels of difficulty. Our aim is to build a study corpus of Russian in which an L2 student is able to select texts of a desired complexity. We build on a pilot study in which we classified Russian texts into two levels of difficulty. In the current paper, we apply the classification to an extended corpus of 577 labelled texts. The best-performing combination of features achieves an accuracy of 0.74 within at most one level difference.
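To illustrate the general idea of feature-based difficulty classification (this is not the authors' feature set, model, or thresholds — all values below are invented), a toy scorer using surface features such as average sentence and word length might look like this:

```python
# Toy three-level difficulty classifier based on surface features.
# Features and thresholds are invented for illustration; the paper's
# classifier is trained on 577 labelled Russian texts.

def features(text):
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = text.split()
    avg_sent_len = len(words) / max(len(sentences), 1)   # words per sentence
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)
    return avg_sent_len, avg_word_len

def difficulty_level(text):
    """Map a text to level 1 (easy), 2 (medium) or 3 (hard)."""
    avg_sent_len, avg_word_len = features(text)
    score = avg_sent_len + 2 * avg_word_len
    if score < 14:
        return 1
    if score < 22:
        return 2
    return 3

print(difficulty_level("The cat sat. It slept."))  # 1
```

A trained classifier would replace the hand-set thresholds with parameters learned from the labelled corpus.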
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
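The core of such a reconstruction can be sketched as follows: in a POS-tagged utterance, a separated verb particle (STTS tag PTKVZ) is reattached to the lemma of the finite verb. The example and the simple "first finite verb" heuristic are for illustration only; the paper's method handles the more complex structures found in spoken-language corpora.

```python
# Sketch of reconstructing separated German particle verbs from a
# POS-tagged utterance: the separated particle (STTS tag PTKVZ) is
# prefixed to the lemma of the finite verb (tag VVFIN).

def reconstruct_particle_verb(tokens):
    """tokens: list of (form, stts_tag, lemma). Return reconstructed lemma or None."""
    verb_lemma = None
    particle = None
    for form, tag, lemma in tokens:
        if tag == "VVFIN" and verb_lemma is None:
            verb_lemma = lemma
        elif tag == "PTKVZ":
            particle = form
    if verb_lemma and particle:
        return particle + verb_lemma   # e.g. "an" + "fangen" -> "anfangen"
    return None

# "er faengt morgen an" ('he starts tomorrow') -> lemma "anfangen"
utterance = [("er", "PPER", "er"), ("faengt", "VVFIN", "fangen"),
             ("morgen", "ADV", "morgen"), ("an", "PTKVZ", "an")]
print(reconstruct_particle_verb(utterance))  # anfangen
```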
It is well known that the distribution of lexical and grammatical patterns is size- and register-sensitive (Biber 1986, and later publications). This fact alone presents a challenge to many corpus-oriented linguistic studies focusing on a single language. When it comes to cross-linguistic studies using corpora, the challenge becomes even greater due to the lack of high-quality multilingual corpora (Kupietz et al. 2020; Kupietz/Trawiński 2022), which are comparable with respect to the size and the register. That was the motivation for the creation of the European Reference Corpus EuReCo, an initiative started in 2013 at the Leibniz Institute for the German Language (IDS) together with several European partners (Kupietz et al. 2020). EuReCo is an emerging federated corpus, with large virtual comparable corpora across various languages and with an infrastructure supporting contrastive research. The core of the infrastructure is KorAP (Diewald et al. 2016), a scalable open-source platform supporting the analysis and visualisation of properties of texts annotated by multiple and potentially conflicting information layers, and supporting several corpus query languages. Until recently, EuReCo consisted of three monolingual subparts: the German Reference Corpus DeReKo (Kupietz et al. 2018), the Reference Corpus of Contemporary Romanian Language (Barbu Mititelu/Tufiş/Irimia 2018), and the Hungarian National Corpus (Váradi 2002). The goal of the present submission is twofold. On the one hand, it reports about the new component of EuReCo: a sample of the National Corpus of Polish (Przepiórkowski et al. 2010). On the other hand, it presents the results of a new pilot study using the newly extended EuReCo. This pilot study investigates selected Polish collocations involving light verbs and their prepositional / nominal complements (Fig. 1) and extends the collocation analyses of German, Romanian and Hungarian (Fig. 2) discussed in Kupietz/Trawiński (2022).
In mid-2017, as part of our activities within the TEI Special Interest Group for Linguists (LingSIG), we submitted to the TEI Technical Council a proposal for a new attribute class that would gather attributes facilitating simple token-level linguistic annotation. With this proposal, we addressed community feedback complaining about the lack of a specific tagset for lightweight linguistic annotation within the TEI. Apart from @lemma and @lemmaRef, TEI encoders could until now only resort to the generic attribute @ana for inline linguistic annotation, or to the quite complex system of feature structures for robust linguistic annotation, the latter requiring relatively complex processing even for the most basic types of linguistic features. As a result, there now exists a small set of basic descriptive devices which have been made available at the cost of only very small changes to the TEI tagset. The merit of a predefined TEI tagset for lightweight linguistic annotation is the homogeneity of tagging and thus better interoperability of simple linguistic resources encoded in the TEI. The present paper introduces the new attributes, makes a case for one more addition, and presents the advantages of the new system over the legacy TEI solutions.
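A minimal encoding sketch of what such lightweight token-level annotation can look like (the attribute values below are illustrative; in TEI P5 the resulting class, att.linguistic, provides attributes such as @lemma, @pos, @msd and @join):

```xml
<s>
  <w lemma="the" pos="DET" msd="Definite=Def|PronType=Art">The</w>
  <w lemma="cat" pos="NOUN" msd="Number=Sing">cat</w>
  <w lemma="sleep" pos="VERB" msd="Tense=Pres|Person=3">sleeps</w>
  <pc pos="PUNCT" join="left">.</pc>
</s>
```

Compared with @ana pointing at external interpretation elements, or full feature structures, the attribute values stay inline and human-readable, which is what makes the encoding "lightweight".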
Standards in CLARIN
(2022)
This chapter looks at a fragment of the ongoing work of the CLARIN Standards Committee (CSC) on producing a shared set of recommendations on standards, formats, and related best practices supported by the CLARIN infrastructure and its participating centres. What might at first glance seem to be a straightforward goal has over the years proven to be rather complex, reflecting the robustness and heterogeneity of the emerging distributed digital research infrastructure and the various disciplines and research traditions of the language-based humanities that it serves and represents, and therefore part of the chapter reviews the various initiatives and proposals that strove to produce helpful standards-related guidance. The focus turns next to a subtask initiated in late 2019, its scope narrowed to one of the core activities and responsibilities of CLARIN backbone centres, namely the provision of data deposition services. Centres are obligated to publish their recommendations concerning the repertoire of data formats that are best suited for their research profiles. We look at how this requirement has been met by the particular centres and suggest that having centres maintain their information in the Standards Information System (SIS) is the way to improve on the current state of affairs.
Recent typological studies have shown that socio-linguistic factors have a substantial effect on at least certain structures of language. However, we are still far from understanding how such factors should be operationalized and how they interact with other factors in shaping grammar. To address both questions, this study examines the influence of socio-linguistic factors on the number of dedicated conditional constructions in a sample of 374 languages. We test the number of speakers, the degree of multilingualism, the availability of a literary tradition, the use of writing, and the use of the language in the education system. At the same time, we control for genealogical, contact, and bibliographical biases. Our results suggest that the number of speakers is the most informative predictor. However, we find that the association between the number of speakers and the number of dedicated conditional constructions is much weaker than assumed, once genealogical and contact biases are controlled for.
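One simple way to see why controlling for genealogical bias matters is to aggregate languages to family-level means before correlating a predictor with the outcome, so that large families do not dominate the estimate. The sketch below uses invented data and a deliberately crude aggregation; the study itself uses a far more elaborate statistical setup.

```python
# Family-level aggregation before correlating log speaker numbers with the
# number of dedicated conditional constructions. Data are invented.

from collections import defaultdict
from math import sqrt

# (family, log10 speakers, number of conditional constructions) -- invented
data = [("A", 8.0, 4), ("A", 7.5, 3), ("B", 4.0, 1),
        ("B", 3.5, 2), ("C", 6.0, 2), ("C", 6.5, 3)]

def family_means(rows):
    """Collapse each language family to its mean (predictor, outcome) pair."""
    groups = defaultdict(list)
    for fam, x, y in rows:
        groups[fam].append((x, y))
    return [(sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            for g in groups.values()]

def pearson(pairs):
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    cov = sum((x - mx) * (y - my) for x, y in pairs)
    vx = sum((x - mx) ** 2 for x, _ in pairs)
    vy = sum((y - my) ** 2 for _, y in pairs)
    return cov / sqrt(vx * vy)

print(pearson(family_means(data)))  # correlation at the family level
```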
The paper reports on the results of a scientific colloquium dedicated to the creation of standards and best practices which are needed to facilitate the integration of language resources for CMC stemming from different origins and the linguistic analysis of CMC phenomena in different languages and genres. The key issue to be solved is that of interoperability — with respect to the structural representation of CMC genres, linguistic annotations, metadata, and anonymization/pseudonymization schemas. The objective of the paper is to convince more projects to partake in a discussion about standards for CMC corpora and for the creation of a CMC corpus infrastructure across languages and genres. In view of the broad range of corpus projects which are currently underway all over Europe, there is a great window of opportunity for the creation of standards in a bottom-up approach.
OKAY originates from English, but it is increasingly used across languages. This chapter presents data from 13 languages, illustrating the spectrum of possible uses of OKAY in responding and claiming understanding in contexts of informings. Drawing on a wide range of interaction types from both informal and institutional contexts, including those crucially involving embodied practices, we show how OKAY can be used to (i) claim sufficient understanding, (ii) mark understanding of the prior informing as preliminary or not complete, and (iii) index discrepancy of expectation.
This contribution aims to shed light on the structural development of Luxembourgish German in the 19th century. The fact that it is embedded in a multilingual context raises many research questions. The evidence comprises predominantly bilingual German/French public notices issued by the City of Luxembourg in this period. The analysis of two conjunctions suggests that processes of replication and interlingual transfer are sources of variation. It shows that the influence of French was particularly acute during the "French period" (1795-1814). However, rather than working in isolation, the language contact phenomena operate on the basis of similar constructions existing in the borrowing language. In addition, older German forms quickly disappeared, despite showing similarity to forms in the local dialect.
One was a distinguished natural scientist and engineer, the other a self-taught scientist vilified as a conman: Christian Gottlieb Kratzenstein (1723-1795) and Wolfgang von Kempelen (1734-1804). Some of the former's postulations on human physiology and the articulation of speech proved wrong in later years; most of the latter's theories are considered applicable even today. The perhaps most contrasting approaches to speech synthesis during the 18th century are linked to their names. There are many essential differences between their approaches which show that these two researchers were not only representatives of different schools of thought, but also representatives of two different scientific eras: a speculative and philosophical approach on the one hand versus an empirical and logical approach on the other. Both Kratzenstein and Kempelen published books on their research. But while the "Tentamen" [4] of the physician Kratzenstein remains rather vague and imprecise in its descriptions of vowel production and synthesis, the "Mechanismus" [8] of the engineer Kempelen shows much more precision and correctness in almost every respect of human speech and language. The goal of this paper is to discuss the differences between these two contemporaneous researchers on speech synthesis and to compare their theories with present-day findings.
We argue that properties with a nominal origin get transferred regularly in certain German particle verb constructions to properties that are propositional insofar as they characterize the temporal structure of eventualities, understood to be described by propositional (= truth-assessable) representations of state changes. Accordingly, the oft-noted perfectivizing function of certain verbal particles like ein- in einfahren ('pull in', cf. Kühnhold 1972) is the effect of redressing a conflict at the syntax-semantics interface: On the one hand, constructions like in [die Grube]acc einfahren ('pull into the mine') exhibit transitive syntax (Gehrke 2008), requiring that the syntactic arguments be mapped onto well-distinguished or DIFFERENT referents in the semantics (Kemmer 1993). On the other hand, in/ein codes a spatio-temporal inclusion relation between its relata, contradicting the requirement imposed by the transitive syntax. Following Brandt (2019), we submit that the interface executes a manoeuvre that delays the interpretation of part of the contradiction-inducing DIFFERENCE feature. It is not locally interpreted (semantically represented) in toto but in part passed on to the next syntactic-semantic computational cycle. Here, the passed-on meaning is interpreted in the locally customary terms, in the case at hand, as a temporal index where the post-state of the depicted eventuality does not hold.
We present zu-excessive structures like Otto ist zu schwer 'Otto is too heavy' as instantiations of comparatives that have been reflexivized. Comparatives express asymmetric relations between distinguished referents, but reflexivization identifies argument places (or reduces two argument places to one), leading to a symmetry relation. Reflexivization is thus in conflict with the asymmetry property of comparatives and leads to an intermediate semantic representation that is contradictory. Two experiments substantiate that zu-excessives share this property with privative adjective and animal-for-statue constructions that similarly give rise to contradictory semantics. The processing of any of the constructions mentioned yields a positivity in the event-related-potential signature characteristic of conceptual reorganization; however, the observed positivity occurs earlier in the case of zu-excessives than in the other cases. We propose this difference is due to zu signalling the mandatory preparation for an ensuing repair rather than reflecting the repair operation itself that involves manipulating the standard of comparison, coded elsewhere in the string (if at all).
Corpus REDEWIEDERGABE
(2020)
This article presents the corpus REDEWIEDERGABE, a German-language historical corpus with detailed annotations for speech, thought and writing representation (ST&WR). With approximately 490,000 tokens, it is the largest resource of its kind. It can be used to answer literary and linguistic research questions and serve as training material for machine learning. This paper describes the composition of the corpus and the annotation structure, discusses some methodological decisions and gives basic statistics about the forms of ST&WR found in this corpus.
Researchers interested in the sounds of speech or the physical gestures of speakers make use of audio and video recordings in their work. Annotating these recordings presents a different set of requirements to the annotation of text. Special purpose tools have been developed to display video and audio signals and to allow the creation of time-aligned annotations. This chapter reviews the most widely used of these tools for both manual and automatic generation of annotations on multimodal data.
This article details the process of creating the Nottinghamer Korpus deutscher YouTube-Sprache ('The Nottingham German YouTube Language Corpus', or NottDeuYTSch corpus) and outlines potential research opportunities. The corpus was compiled to analyse the online language produced by young German speakers and offers significant opportunity for in-depth research across several linguistic fields, including lexis, morphology, syntax, orthography, and conversational and discursive analysis. The NottDeuYTSch corpus contains over 33 million words taken from approximately 3 million YouTube comments from videos published between 2008 and 2018 targeted at a young, German-speaking demographic, and represents an authentic snapshot of the language of young German speakers. The corpus was proportionally sampled by video category and year from a database of 112 popular German-speaking YouTube channels in the DACH region for optimal representativeness and balance, and contains a considerable amount of associated metadata for each comment that enables further longitudinal and cross-sectional analyses. The NottDeuYTSch corpus is available for analysis as part of the German Reference Corpus (DeReKo).
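Proportional sampling by category and year, as described above, can be sketched as a simple stratified quota computation; the strata counts and the target size below are invented for illustration.

```python
# Sketch of proportional (stratified) sampling by video category and year:
# each stratum contributes to the sample in proportion to its share of the
# available comments. All numbers are invented.

def proportional_quotas(stratum_sizes, target_total):
    """Number of items to draw per stratum, proportional to stratum size."""
    grand_total = sum(stratum_sizes.values())
    return {stratum: round(target_total * size / grand_total)
            for stratum, size in stratum_sizes.items()}

# comments available per (video category, year) -- invented numbers
available = {("Gaming", 2016): 60000, ("Music", 2016): 30000,
             ("Gaming", 2017): 90000, ("Music", 2017): 20000}

quotas = proportional_quotas(available, target_total=10000)
print(quotas)
```

Rounding can make the quotas deviate slightly from the target total; a production sampler would redistribute the remainder.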
"What makes this so complicated?" On the value of disorienting dilemmas in language instruction
(2017)
This chapter explores possibilities and methods for conducting digital discourse analyses of National Socialist source texts. Digital technology is treated as a heuristic tool with which language use during National Socialism can be examined on the basis of larger source corpora. A theoretical section argues that, during the analysis process, hermeneutic interpretation should be combined with broad corpus-based queries. This approach is illustrated with two empirical examples: a corpus of speeches by Hitler and Goebbels is used to trace the emergence and discursive elaboration of the National Socialist concept of "Lebensraum". Step by step, the chapter lays out which analytical paths can be pursued by querying key texts, keywords, concordances and collocations. The second example draws on petitions addressed by the population to state and party authorities to show how such sources can be manually annotated with a digital tool and subsequently analysed for patterns in language use.
CLARIN stands for “Common Language Resources and Technology Infrastructure”. In 2012 CLARIN ERIC was established as a legal entity with the mission to create and maintain a digital infrastructure to support the sharing, use, and sustainability of language data (in written, spoken, or multimodal form) available through repositories from all over Europe, in support of research in the humanities and social sciences and beyond. Since 2016 CLARIN has had the status of Landmark research infrastructure and currently it provides easy and sustainable access to digital language data and also offers advanced tools to discover, explore, exploit, annotate, analyse, or combine such datasets, wherever they are located. This is enabled through a networked federation of centres: language data repositories, service centres, and knowledge centres with single sign-on access for all members of the academic community in all participating countries. In addition, CLARIN offers open access facilities for other interested communities of use, both inside and outside of academia. Tools and data from different centres are interoperable, so that data collections can be combined and tools from different sources can be chained to perform operations at different levels of complexity. The strategic agenda adopted by CLARIN and the activities undertaken are rooted in a strong commitment to the Open Science paradigm and the FAIR data principles. This also enables CLARIN to express its added value for the European Research Area and to act as a key driver of innovation and contributor to the increasing number of industry programmes running on data-driven processes and the digitalization of society at large.
This paper studies how the turn-design of a highly recurrent type of action changes over time. Based on a corpus of video-recordings of German driving lessons, we consider one type of instruction and analyze how the same instructional action is produced by the same speaker (the instructor) for the same addressee (the student) in consecutive trials of a learning task. We found that instructions become progressively shorter, more indexical, and syntactically less complex; interactional sequences become more condensed, and activities designed to secure mutual understanding become rarer. This study shows how larger temporal frameworks of interpersonal interactional histories which range beyond the interactional sequence impinge on the recipient-design of turns and the deployment of multimodal resources in situ.
Mock fiction is a genre of humorous, fictional narratives. It is pervasive in adolescents’ peer-group interaction. Building on a corpus of informal peer-group interaction among 14 to 17 year-old German adolescents, it is shown how mock fiction is used to sanction identity-claims of peer-group co-members that are taken to be inadequate by the teller of a mock fiction. Mock fiction exposes and ridicules those claims by fictional exaggeration. Mock fiction is an indirect, yet sometimes even highly abusive means for criticizing and negotiating identities and statuses of peer-group members. The analysis shows how mock fiction is collaboratively produced, how it is used to convey criticism and to negotiate social norms indirectly, and how, in addition, it allows for performative self-positioning of the tellers as skilled, entertaining tellers and socio-psychological diagnosticians.
Positioning
(2015)
Over the last two decades, “positioning” has become an established concept used to elucidate how identities are deployed and negotiated in narratives. This chapter first locates positioning in the larger field of research on identities and discourse. Commonalities and differences in conceptions of positioning are highlighted. In the following, the historical development of theoretical approaches to positioning and their methodological implications are reviewed in more detail. The article closes by taking up two current lines of debate concerning the future development of the concept of positioning.
This paper shows how understanding in interaction is informed by temporality, and in particular, by the workings of retrospection. Understanding is a temporally extended, sequentially organized process. Temporality, namely, the sequential relationship of turn positions, equips participants with default mechanisms to display understandings and to expect such displays. These mechanisms require local management of turn-taking to be in order, i.e., the possibility and the expectation to respond locally and reciprocally to prior turns at talk. Sequential positions of turns in interaction provide an infrastructure for displaying understanding and accomplishing intersubjectivity. Linguistic practices specialized in displaying particular kinds of (not) understanding are adapted to the individual sequential positions with respect to an action-to-be-understood.
Social actions
(2021)
Social actions are recipient-designed actions that occur in the context of interaction sequences. This chapter focuses on sources and practices for the formation and ascription of social actions. While linguists stress the relevance of linguistic social action formats, conversation analysts highlight the relevance of the sequential position of an action, and sociolinguists point to the influence of social identities for action-formation and -ascription. The combination of these three approaches helps us to solve the analytic problem of indirectness, which, however, only rarely becomes a problem for the participants in an interaction themselves. Social properties which recurrently apply when using verbal and bodily resources of action-formation, i.e. the social actions themselves, inferred meanings, projected next actions, the participation framework, the activity type, speaker’s stance, participants’ identities, etc. lead to stable pragmatic connotations of those forms, i.e. action-meanings, which become idiomatic and part of our common-sense competence. Still, social actions are multi-layered and can be ambiguous at times. Therefore, their meaning can be open for negotiation. Intersubjectivity of action ascription is ultimately secured neither by conventions nor by speaker’s intentions, but is accomplished by their treatment in subsequent discourse.
The authors establish a phenomenological perspective on the temporal constitution of experience and action. Retrospection and projection (i.e. backward as well as forward orientation of everyday action), sequentiality and the sequential organization of activities as well as simultaneity (i.e. participants’ simultaneous coordination) are introduced as key concepts of a temporalized approach to interaction. These concepts are used to capture that every action is produced as an inter-linked step in the succession of adjacent actions, being sensitive to the precise moment where it is produced. The adoption of a holistic, multimodal and praxeological perspective additionally shows that action in interaction is organized according to several temporal orders simultaneously in operation. Each multimodal resource used in interaction has its own temporal properties.
Action ascription can be understood from two broad perspectives. On one view, it refers to the ways in which actions constitute categories by which members make sense of their world, and forms a key foundation for holding others accountable for their conduct. On another view, it refers to the ways in which we accountably respond to the actions of others, thereby accomplishing sequential versions of meaningful social experience. In short, action ascription can be understood as matter of categorisation of prior actions or responding in ways that are sequentially fitted to prior actions, or both. In this chapter, we review different theoretical approaches to action ascription that have developed in the field, as well as the key constituents and resources of action ascription that have been identified in conversation analytic research, before going on to discuss how action ascription can itself be considered a form of social action.
While the role of intentions in the constitution of actions gives rise to complex and highly controversial questions, it appears to be indisputable that action ascription in interaction mostly does without any overt ascription of intention. Yet, sometimes participants explicitly ascribe intentions to their interlocutors in order to make sense of their prior actions. The chapter examines intention ascriptions in response to a partner’s adjacent prior turn using the German modal verb construction willst du/wollen Sie (‘do you want’). The analysis focuses on the aspect of the prior action that the intention ascription addresses (action type, projected next action, motive, etc.), the action the intention ascription itself performs, and the next action it makes relevant from the prior speaker. It was found that intention ascriptions are used to clarify and intersubjectively ground the meaning of the prior turn, which otherwise seems underspecified, ambiguous or puzzling. Yet, they are also used to adumbrate criticism, e.g. that the prior turn projects a course of future actions which is considered inadequate, or to expose a concealed, problematic, allegedly “real” meaning of the prior turn.
Metalinguistic awareness of standard vs. non-standard usage. The case of determiners in spoken German
(2015)
In this chapter, we overview the specificity of comparisons made within the perspective of Conversation Analysis (CA), and we position them in relation to other fields. We introduce the analytical mentality, methodology, and procedures of CA, and we show how we used it for the analysis of OKAY in this volume.
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that while not all tools are applicable in this setting, no compromises regarding quality need to be made.
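The two evaluation criteria named here – tokenization quality and processing speed – can be sketched in a small harness. The gold segmentation and the whitespace baseline below are invented for illustration, not taken from the paper’s actual tool comparison:

```python
import time

def token_spans(text, tokens):
    # Map a token sequence back to character offsets in the text.
    spans, pos = [], 0
    for tok in tokens:
        start = text.index(tok, pos)
        spans.append((start, start + len(tok)))
        pos = start + len(tok)
    return spans

def boundary_scores(text, predicted, gold):
    # Precision/recall/F1 over token boundaries: the quality criterion.
    pred = set(token_spans(text, predicted))
    ref = set(token_spans(text, gold))
    tp = len(pred & ref)
    p, r = tp / len(pred), tp / len(ref)
    return p, r, 2 * p * r / (p + r)

def throughput(tokenize, text, runs=5):
    # Characters per second: the speed criterion for large corpora.
    start = time.perf_counter()
    for _ in range(runs):
        tokenize(text)
    return runs * len(text) / (time.perf_counter() - start)
```

A naive whitespace split, for example, fails to separate sentence-final punctuation (“so.”), which lowers both precision and recall against a gold segmentation.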
This chapter will present lessons learned from CLARIN-D, the German CLARIN national consortium. Members of the CLARIN-D communities and of the CLARIN-D consortium have been engaged in innovative, data-driven, and community-based research, using language resources and tools in the humanities and neighbouring disciplines. We will present different use cases and user stories that demonstrate the innovative research potential of large digital corpora and lexical resources for the study of language change and variation, for language documentation, for literary studies, and for the social sciences. We will emphasize the added value of making language resources and tools available in the CLARIN distributed research infrastructure and will discuss legal and ethical issues that need to be addressed in the use of such an infrastructure. Innovative technical solutions for accessing digital materials still under copyright and for data mining such materials will be presented. We will outline the need for close interaction with communities of interest in the areas of curriculum development, data management, and training the next generation of digital humanities scholars. The importance of community-supported standards for encoding language resources and the practice of community-based quality control for digital research data will be presented as a crucial step toward the provisioning of high quality research data. The chapter will conclude with a discussion of important directions for innovative research and for supporting infrastructure development over the next decade and beyond.
This article examines the language contact situation as well as the language attitudes of the Caucasian Germans, descendants of German-born inhabitants of the Russian Empire and the Soviet Union who emigrated in 1816/17 to areas of Transcaucasia. After deportations and migrations, the group of Caucasian Germans now consists of those who have since emigrated to Germany and those who still live in the South Caucasus. For the first time, sociolinguistic methods have been used to record data from the generation who experienced living in the South Caucasus and in Germany as well as from two succeeding generations. Initial results will be presented below with a focus on the language contact constellations of German varieties as well as on the consequences of language contact and language repression, both of which affect language attitudes.
The article entitled “Traitement de l’information: Spinfo, HKI et humanités numériques – l’expérience de Cologne” presents the history of the development of the digital humanities at the University of Cologne. The institutionalization of the digital humanities began at a time when, in the German-speaking world, the scope of the discipline was still being defined by the work of a few pioneers. Among them, the role of Elisabeth Burr, active notably in Tübingen, Duisburg, Bremen and Leipzig, deserves particular mention. The article traces the development of the digital humanities in Cologne from their beginnings in the 1960s, through their consolidation in the 1990s, up to the last two decades, during which Cologne has become an important centre of the discipline. This process illustrates how a new scientific discipline can become institutionalized within a German university. The article describes the perspective of two founding fields – linguistic information processing (German: Sprachliche Informationsverarbeitung, Spinfo) and historical-cultural information processing (German: Historisch-Kulturwissenschaftliche Informationsverarbeitung, HKI) – and their synthesis, which led in 2017 to the creation of the Institute for Digital Humanities, which today is, internally, a component of the Faculty of Arts and Humanities of the University of Cologne and, externally, an integral part of the international digital humanities community.
Theories of lexical decomposition assume that lexical meanings are complex. This complexity is expressed in structured meaning representations that usually consist of predicates, arguments, operators, and other elements of propositional and predicate logic. Lexical decomposition has been used to explain phenomena such as argument linking, selectional restrictions, lexical-semantic relations, scope ambiguities, and the inference behavior of lexical items. The article sketches the early theoretical development from noun-oriented semantic feature theories to verb-oriented complex decompositions. It also deals with a number of theoretical issues, including the controversy between decompositional and atomistic approaches to meaning, the search for semantic primitives, the function of decompositions as definitions, problems concerning the interpretability of decompositions, and the debate about the cognitive status of decompositions.
Starting from early approaches within Generative Grammar in the late 1960s, the article describes and discusses the development of different theoretical frameworks of lexical decomposition of verbs. It presents the major subsequent conceptions of lexical decompositions, namely, Dowty’s approach to lexical decomposition within Montague Semantics, Jackendoff’s Conceptual Semantics, the LCS decompositions emerging from the MIT Lexicon Project, Pustejovsky’s Event Structure Theory, Wierzbicka’s Natural Semantic Metalanguage, Wunderlich’s Lexical Decompositional Grammar, Hale and Keyser’s Lexical Relational Structures, and Distributed Morphology. For each of these approaches, (i) it sketches their origins and motivation, (ii) it describes the general structure of decompositions and their location within the theory, (iii) it explores their explanatory value for major phenomena of verb semantics and syntax, and (iv) it briefly evaluates the impact of the theory. Referring to discussions in article 7 [Semantics: Foundations, History and Methods] (Engelberg) Lexical decomposition, a number of theoretical topics are taken up throughout the paper concerning the interpretation of decompositions, the basic inventory of decompositional predicates, the location of decompositions on the different levels of linguistic representation (syntactic, semantic, conceptual), and the role they play for the interfaces between these levels.
Tok Pisin is a pidgin/creole language spoken since the late 19th century in most of the area that nowadays constitutes Papua New Guinea, where it emerged under German colonial rule. Unusually for a pidgin/creole, Tok Pisin is characterized by an extensive lexicographic history. The Tok Pisin Dictionary Collection at the Leibniz Institute for the German Language, described in this article, includes about fifty dictionaries. The collection forms the basis for the sketch of the history of Tok Pisin lexicography as part of colonial history presented here. The basic thesis is that in the history of Tok Pisin, lexicographic strategies, dictionary structures, and publication patterns reflect the interest (and disinterest) of various groups of colonial actors. Among these colonial actors, European scientists, Catholic missionaries, and the Australian and US militaries played important roles.
The present paper outlines the projected second part of the Corpus Query Lingua Franca (CQLF) family of standards: CQLF Ontology, which is currently in the process of standardization at the International Organization for Standardization (ISO), in its Technical Committee 37, Subcommittee 4 (TC37SC4) and its national mirrors. The first part of the family, ISO 24623-1 (henceforth CQLF Metamodel), was successfully adopted as an international standard at the beginning of 2018. The present paper reflects the state of the CQLF Ontology at the moment of submission for the Committee Draft ballot. We provide a brief overview of the CQLF Metamodel, present the assumptions and aims of the CQLF Ontology, its basic structure, and its potential extended applications. The full ontology is expected to emerge from a community process, starting from an initial version created by the authors of the present paper.
Question answering (QA) systems for retrieving information from knowledge graphs (KGs) have become a major area of interest in recent years. Current systems search for words and entities but cannot search for grammatical phenomena. The purpose of this paper is to present our research on developing a QA system that answers natural language questions about German grammar.
Our goal is to build a KG which contains facts and rules about German grammar, and is also able to answer specific questions about a concrete grammatical issue. An overview of the current research in the topic of QA systems and ontology design is given and we show how we plan to construct the KG by integrating the data in the grammatical information system Grammis, hosted by the Leibniz-Institut für Deutsche Sprache (IDS). In this paper, we describe the construction of the initial KG, sketch our resulting graph, and demonstrate the effectiveness of such an approach. A grammar correction component will be part of a later stage. The paper concludes with the potential areas for future research.
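As a rough illustration of the kind of structure such a KG could take, here is a toy in-memory triple store with pattern-based querying. The grammar “facts” below are invented placeholders, not actual Grammis content, and a real system would use an RDF store and SPARQL rather than Python lists:

```python
# Toy triple store: (subject, predicate, object) facts.
FACTS = [
    ("Dativ", "is_a", "Kasus"),
    ("Genitiv", "is_a", "Kasus"),
    ("helfen", "governs", "Dativ"),
]

def query(s=None, p=None, o=None):
    """Match triples against a pattern, with None as a wildcard,
    analogous to a SPARQL triple pattern."""
    return [t for t in FACTS
            if s in (None, t[0]) and p in (None, t[1]) and o in (None, t[2])]
```

A natural language front end would then translate a question such as “Which case does *helfen* govern?” into a pattern like `query(s="helfen", p="governs")`.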
“Die Sprach-Checker” (Eng. “Language Checkers”) are young citizen scientists from Mannheim’s highly diverse district Neckarstadt-West. Together with linguists, they investigate a tremendous treasure: their own multilingualism. They are exploring and (re)discovering their own languages and the other languages used in their environment while documenting and reflecting on their everyday experiences in and with different linguistic practices. Our aim is to raise awareness of their strengths and to promote appreciation for their language biographies, thus fostering a sense of identification with one’s own linguistic surroundings. Such a joint research endeavour offers empirical opportunities to address (linguistic) issues of societal relevance by collecting authentic data from the multicultural district and involving its residents and local stakeholders. In this paper, we will provide insights regarding the project’s background, conception, and outcomes. We address everyone who is planning or conducting a citizen science project with young people, especially children and adolescents, or who works at the interface between science and society.
We present web services which implement a workflow for transcripts of spoken language following the TEI guidelines, in particular ISO 24624:2016 “Language resource management – Transcription of spoken language”. The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
Preface
(2022)
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized, but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker’s transcription tier to be the basic tokenization layer, whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods of how speaker overlaps can be captured in the TEI-based ISO standard for spoken language transcriptions (ISO 24624:2016) and how they can be queried by MTAS – an open-source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions and discuss their benefits and drawbacks.
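The speaker-based search mode described in this abstract can be sketched roughly as follows. Since the transcriptions are tokenized but not token-synchronized with the audio, the sketch anchors whole contributions (with segment-level start times only) to the chosen base tier; the tier data, time values, and function name are invented for illustration:

```python
from bisect import bisect_right

def merge_onto_base(base_tier, other_tiers):
    """base_tier: list of (segment_start_time, [tokens]) for the chosen
    speaker; other_tiers: the same structure for the remaining speakers.
    Returns one token stream in which each foreign segment is appended to
    the base segment starting at or before it, so that token distance and
    precedence can be computed on a single ordering."""
    starts = [s for s, _ in base_tier]
    merged = [list(toks) for _, toks in base_tier]
    for tier in other_tiers:
        for s, toks in tier:
            # Index of the last base segment whose start time is <= s.
            i = max(bisect_right(starts, s) - 1, 0)
            merged[i].extend(toks)
    return [tok for seg in merged for tok in seg]
```

Choosing a different speaker as the base tier yields a different token ordering, which is exactly the point of the speaker-based search mode.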
Language Change
(2017)
The present chapter outlines a research program for historical linguistics based on the idea that the object of the formal study of language change should be defined as grammar change, that is, a set of discrete differences between the target grammar and the grammar acquired by the learner (Hale 2007). This approach is shown to offer new answers to some classical problems of historical linguistics (Weinreich et al. 1968), concerning, specifically, the actuation of changes and the observation that the transition from one historical state to another proceeds gradually. It is argued that learners are highly sensitive to small fluctuations in the linguistic input they receive, making change inevitable, while the impression of gradualness is linked to independent factors (diffusion in a speech community, and grammar competition). Special attention is paid to grammaticalization phenomena, which offer insights into the nature of functional categories, the building blocks of clause structure.
In recent years, the availability of large annotated and searchable corpora, together with a new interest in the empirical foundation and validation of linguistic theory and description, has sparked a surge of novel and interesting work using corpus-based methods to study the grammar of natural languages. However, a look at relevant current research on the grammar of the Germanic, Romance, and Slavic languages reveals a variety of different theoretical approaches and empirical foci, which can be traced back to different philological and linguistic traditions. Still, this current state of affairs should not be seen as an obstacle but as an ideal basis for a fruitful exchange of ideas between different research paradigms.
Complement phrases are essential for constructing well-formed sentences in German. Identifying verb complements and categorizing complement classes is challenging even for linguists who are specialized in the field of verb valency. Against this background, we introduce an ML-based algorithm which is able to identify and classify complement phrases of any German verb in any written sentence context. We use a large training set consisting of example sentences from a valency dictionary, enriched with POS tagging, and the ML-based technique of Conditional Random Fields (CRF) to generate the classification models.
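A typical ingredient of such a CRF setup is per-token feature extraction over POS-tagged sentences. The sketch below shows common linear-chain CRF features in dictionary form (as consumed, for instance, by a trainer such as sklearn-crfsuite); the feature set and the STTS-style tag names are illustrative assumptions, not the paper’s actual configuration:

```python
def token_features(sent, i):
    """sent: list of (word, pos) pairs; returns features for token i,
    including the neighbouring POS context a linear-chain CRF relies on."""
    word, pos = sent[i]
    feats = {"bias": 1.0, "word.lower": word.lower(), "pos": pos,
             "is_upper": word[0].isupper()}
    if i > 0:
        feats["-1:pos"] = sent[i - 1][1]   # preceding POS tag
    else:
        feats["BOS"] = True                # sentence start
    if i < len(sent) - 1:
        feats["+1:pos"] = sent[i + 1][1]   # following POS tag
    else:
        feats["EOS"] = True                # sentence end
    return feats
```

With BIO-style labels over complement classes, such feature dictionaries per token would form the training instances for the classification models.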
This article reports on the ongoing work on a new version of the metadata framework Component Metadata Infrastructure (CMDI), central to the CLARIN infrastructure. Version 1.2 introduces a number of important changes based on the experience gathered in the last five years of intensive use of CMDI by the digital humanities community, addressing problems encountered but also introducing new functionality. Next to the consolidation of the structure of the model and schema sanity, new means for lifecycle management have been introduced, aimed at combating the observed proliferation of components; a new mechanism for the use of external vocabularies will contribute to more consistent use of controlled values; and cues for tools will allow improved presentation of the metadata records to human users. The feature set has been frozen and approved, and the infrastructure is now entering a transition phase in which all the tools and data need to be migrated to the new version.
The newest generation of speech technology has caused a huge increase in audio-visual data that is now enhanced with orthographic transcripts, for example through automatic subtitling on online platforms. Research data centers and archives contain a range of new and historical data, which are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making that data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias of the ASR system, with higher recognition scores for the north of Germany and lower scores for the south. A detailed analysis of the narrow-region data revealed – despite relatively high ASR confidence – some specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have potential for ASR development, enabling targeted data selection for training/adaptation and increasing sensitivity towards varieties of pluricentric languages.
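Recognition scores of the kind compared across regions here are conventionally computed as word error rate (WER), i.e. the word-level edit distance between reference transcript and ASR hypothesis, normalized by reference length. A minimal sketch with toy sentence pairs (the paper’s actual scoring setup is not specified in the abstract):

```python
def wer(ref, hyp):
    """Word error rate: Levenshtein distance over word sequences divided
    by the length of the (non-empty) reference."""
    r, h = ref.split(), hyp.split()
    # (len(r)+1) x (len(h)+1) edit-distance table.
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            d[i][j] = min(
                d[i - 1][j - 1] + (r[i - 1] != h[j - 1]),  # substitution/match
                d[i - 1][j] + 1,                           # deletion
                d[i][j - 1] + 1,                           # insertion
            )
    return d[-1][-1] / len(r)
```

A regional bias then shows up as systematically higher WER on recordings from one area than from another, for the same ASR model.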
This conversation analytic study compares the use of negation particles in spoken German and Persian, namely nein/nee and na. While these particles have a range of functions in both languages (Ghaderi 2022; Imo 2017), their use in response to news remains understudied. We focus on nein/nee and na in two sequential contexts: (i) after prior disconfirmations (Extract (a)) and (ii) in response to either solicited or unsolicited informings (see Extracts (b) and (c), respectively). In both contexts, nein/nee and na mark unexpectedness and open up an opportunity space for more, but they do so in different ways and with different outcomes. Nein/nee- and na-turns after disconfirming, often minimal, responses to first-position confirmable turns mark the prior as unexpected (or even contrasting with the nein/nee/na-speaker’s expectations) and thus as expandable/accountable (cf. Ford 2001; Gubina/Betz 2021). Nein/nee/na-turns after informings (e.g., announcements that display a storyteller’s negative emotional stance) differ not only in sequential position but also in prosodic realization. They can be either falling or rising, but all are characterized by marked prosody, i.e. lengthening, very low onset, smiling or breathy voice, or high overall pitch. Through position and turn design features, such nein/nee- and na-turns not only mark a prior turn as counter to (normative) expectations, but may also display the speaker’s affective stance and affiliate with the affective stance of the prior interactant. By comparing the use of nein/nee and na in German and Persian in the two functions illustrated in Extracts (a) and (b/c), we will show (i) how nein/nee- and na-turns shape interactional trajectories after responsive actions and (ii) what role the particles play in managing news and stance-taking as well as epistemic and affective positioning.
Apart from revealing similarities in the use of German and Persian negation particles, the results of our crosslinguistic comparison will demonstrate that even if different languages have similar practices for specific actions, the use of these practices is language- and culture-specific. This means that even similar practices in different languages have their own “collateral effects” (Sidnell/Enfield 2012), linguistic and prosodic characteristic features, and, at least sometimes, consequences for social actions accomplished in the specific language (e.g., Dingemanse/Blythe/Dirksmeyer 2014; Evans/Levinson 2009; Floyd/Rossi/Enfield (eds.) 2020; Fox et al. 2009). Our study uses the method of Conversation Analysis (Sidnell/Stivers (eds.) 2013) and draws on more than 80 hours of audio and video recordings of spontaneous interactions (co-present, via video link, and on the telephone) in everyday and institutional contexts.
The issue: We discuss (declarative) prepositional object clauses (PO-clauses) in the West Germanic languages Dutch (NL), German (DE), and English (EN). In Dutch and German, PO-clauses occur with a prepositional proform (=PPF, Dutch: ervan, erover, etc.; German: drauf/darauf, drüber/darüber, etc.). This proform is optional with some verbs (1). In English, by contrast, P embeds a clausal complement in the case of gerunds or indirect questions (2), whereas P is obligatorily absent when the embedded CP is a that-clause in its base position (3a). However, when the that-clause is passivized or topicalized, the stranded P is obligatory (3b). Given this scenario, we will address the following questions: i) Are there structural differences between PO-clauses with a P/PPF and those in which the P/PPF is optionally or obligatorily omitted? ii) In particular, do PO-clauses without P/PPF structurally coincide with direct object (=DO) clauses? iii) To what extent are case and nominal properties of clauses relevant? We use wh-extraction as a relevant test for such differences.
Previous research: Based on pronominalization and topicalization data in German and Dutch, PO-clauses are different from DO-clauses independent of the presence of the PPF (see, e.g., Breindl 1989; Zifonun/Hoffmann/Strecker 1997; Berman 2003; Broekhuis/Corver 2015 and references therein) (4,5). English pronominalization and topicalization data (3b) appear to point in the same direction (Fischer 1997; Berman 2003; Delicado Cantero 2013). However, the obligatory absence of P before that-clauses in base position indicates a convergence with DO-clauses.
Experimental evidence: To bring further evidence to bear on these questions, we tested PO-clauses in all three languages for long wh-extraction, which is usually possible for DO-clauses in English and Dutch, and in German for southern regional varieties. For German and Dutch we conducted rating studies using the thermometer method (Featherston 2008). Each study contained two sets of sentences: the first set tested long wh-extraction with regular DO-clauses (6). The second set tested wh-extraction from PO-clauses with and without PPFs (7). The results show no significant difference in extraction from PO-clauses whether or not the PPF was present, even for those speakers who otherwise accept long-distance extraction in German. This supports a uniform analysis of PO-clauses with and without the PPF, in contrast to DO-clauses. For English we tested extraction with verbs that select PP-objects in two configurations – V+that-clause and V+P-gerund (8) – in comparison to sentences without extraction. Participants rated sentences on a scale of 1 (unnatural) to 7 (natural). We included the gerund for English as this is a regular alternative for such objects. The results show that extraction is licit in both configurations. This suggests that English PO-clauses differ from German and Dutch PO-clauses: they rather behave like DO-clauses in allowing extraction. Note, though, that the availability of extraction from P+gerund also shows that PPs are not islands for extraction in English. Overall, this shows that there is a split between English and German/Dutch PO-clauses when the P/PPF is absent: while these clauses behave like PO-clauses in the latter languages, extraction does not distinguish DO- from PO-clauses in English. We will discuss the results in relation to questions i)–iii) above.
This paper will address the challenge of creating a knowledge graph from a corpus of historical encyclopedias with a special focus on word sense alignment (WSA) and disambiguation (WSD). More precisely, we examine WSA and WSD approaches based on article similarity to link messy historical data, utilizing Wikipedia as a ground-truth component – as the lack of a critical overlap in content, paired with the amount of variation between and within the encyclopedias, does not allow for choosing a “baseline” encyclopedia to align the others to. Additionally, we compare the disambiguation performance of conservative methods like the Lesk algorithm to more recent approaches, i.e. using language models to disambiguate senses.
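The “conservative” baseline mentioned in this abstract, the Lesk algorithm, can be sketched in its simplified form: choose the sense whose gloss shares the most words with the surrounding context. The glosses and context below are invented English toy data, not material from the historical encyclopedias:

```python
def simplified_lesk(context_tokens, senses):
    """senses: mapping sense_id -> gloss string. Returns the sense whose
    gloss has the largest word overlap with the context (first sense wins
    ties, as in the simplest formulation)."""
    ctx = {w.lower() for w in context_tokens}
    best, best_overlap = None, -1
    for sense_id, gloss in senses.items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense_id, overlap
    return best
```

Replacing the bag-of-words overlap with similarity between language-model embeddings of gloss and context yields the “more recent approaches” the abstract contrasts this with.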
Twenty-two historical encyclopedias encoded in TEI: a new resource for the Digital Humanities
(2020)
This paper accompanies the corpus publication of EncycNet, a novel XML/TEI annotated corpus of 22 historical German encyclopedias from the early 18th to early 20th century. We describe the creation and annotation of the corpus, including the rationale for its development, suggested methodology for TEI annotation, possible use cases and future work. While many well-developed annotation standards for lexical resources exist, none can adequately model the encyclopedias at hand, and we therefore suggest how the TEI Lex-0 standard may be modified with additional guidelines for the annotation of historical encyclopedias. As the digitization and annotation of historical encyclopedias are settling on TEI as the de facto standard, our methodology may inform similar projects.
This paper explores the syntax of agreement in Insular Scandinavian in copular clauses with two potential goals for agreement. Data from three production experiments - one in Faroese and two in Icelandic - establish several new facts. First, in both languages agreement with the second nominal (DP2) is possible/preferred. Second, there is considerable variation (both within and between languages, and indeed speakers) in the patterns observed. Third, Icelandic shows a surprising pattern of “partial” agreement with DP2 - agreement in number but not person. We discuss the implications for current theorising about agreement, proposing that in these languages, at least, agreement is downwards, and that the available agreement options depend in part on the syntactic position of DP1 when agreement is established.
This paper discusses the interaction of Freezing with movement and focus on the basis of subextraction from the pivot of it-cleft sentences. It shows that subextraction is in principle possible, and that it is not sensitive to whether the pivot is related to a derived subject or real object. However, if the context induces an additional contrastive focus on the pivot, extraction is less acceptable. It is suggested that the problem is that two different sets of alternatives need to be construed on the basis of one and the same syntactically marked focus phrase, the pivot. Once the two sets of alternatives are syntactically separated, interpretation is less complex and licit.
The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise are easily available both to the CLARIN community and to the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so-called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas that are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K-centre – the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD) – and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices and available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.
This article discusses questions concerning the creation, annotation and sharing of spoken language corpora. We use the Hamburg Map Task Corpus (HAMATAC), a small corpus in which advanced learners of German were recorded solving a map task, as an example to illustrate our main points. We first give an overview of the corpus creation and annotation process including recording, metadata documentation, transcription and semi-automatic annotation of the data. We then discuss the manual annotation of disfluencies as an example case in which many of the typical and challenging problems for data reuse – in particular the reliability of interpretative annotations – are revealed.
Modeling the properties of German phrasal compounds within a usage-based constructional approach
(2017)
This paper discusses phrasal compounds in German (e.g. “Man-muss-doch-über-alles-reden-können”-Credo, ‘one-should-be-able-to-talk-about-everything motto’). It provides the first empirically based investigation and description of this word-formation type within the theoretical framework of construction grammar. While phrasal compounds pose a problem for “traditional” generative approaches, I argue that a usage-based constructional model (e.g. Langacker 1987; Goldberg 2006) which takes into consideration aspects of frequency provides a suitable approach to modeling and explaining their properties. For this purpose, a large inventory of phrasal compounds was extracted from the German Reference Corpus (DeReKo) and modeled as pairings of form and meaning at different levels of specificity and abstractness within a bottom-up process.
Overall, this paper not only presents a new and original approach to phrasal compounds, but also offers interesting perspectives for dealing with composition in general.
Action ascription is an emergent process of mutual displays of understanding. Usually, the kind of action that is ascribed to a prior turn by a next action remains implicit. Sometimes, however, actions are overtly ascribed, for example, when speakers expose the use of strategies. This happens particularly in conflictual interaction, such as public debates or mediation talks. In these interactional settings, one of the speakers’ goals is to discredit their opponents in front of other participants or an overhearing audience. This chapter investigates different types of overt strategy ascriptions in a public mediation: exposing the opponent’s use of rhetorical devices, exposing the opponent’s use of false premises, and exposing that an opponent is telling only a half-truth. This chapter shows how speakers use ascriptions of acting strategically as accusations to disclose their opponents’ intentions and ‘truths’ that the opponents allegedly conceal and that are detrimental to their position.
Our paper examines how bodily behavior contributes to the local meaning of OKAY. We explore the interplay between OKAY as response to informings and narratives and accompanying multimodal resources in German multi-party interaction. Based on informal and institutional conversations, we describe three different uses of OKAY with falling intonation and the recurrent multimodal patterns that are associated with them and that can be characterized as ‘multimodal gestalts’. We show that: 1. OKAY as a claim to sufficient understanding is typically accompanied by upward nodding; 2. OKAY after change-of-state tokens exhibits a recurrent pattern of up- and downward nodding with distinctive timing; and 3. OKAY closing larger activities is associated with gaze-aversion from the prior speaker.
We use a convolutional neural network to perform authorship identification on a very homogeneous dataset of scientific publications. In order to investigate the effect of domain biases, we obscure words below a certain frequency threshold, retaining only their POS-tags. This procedure improves test performance due to better generalization on unseen data. Using our method, we are able to predict the authors of scientific publications in the same discipline at levels well above chance.
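The obscuring step described above can be sketched as follows. This is a minimal illustration only: the frequency threshold, toy tokens, and POS tags are assumptions for the sketch, not the authors' actual pipeline.

```python
# Hypothetical sketch of frequency-threshold obscuring: tokens that occur
# fewer than min_freq times across the corpus are replaced by their POS tag,
# so an authorship model generalizes over topic-specific content words.
from collections import Counter

def obscure_rare_words(docs_tokens, docs_pos_tags, min_freq=5):
    """Replace tokens occurring fewer than min_freq times with their POS tag."""
    freq = Counter(tok for doc in docs_tokens for tok in doc)
    return [
        [tok if freq[tok] >= min_freq else tag
         for tok, tag in zip(doc, tags)]
        for doc, tags in zip(docs_tokens, docs_pos_tags)
    ]

# Toy example: "spectrometer", "measures", "is", "measured" each occur once
# and are obscured; "the" and "flux" occur twice and are kept.
docs = [["the", "spectrometer", "measures", "flux"],
        ["the", "flux", "is", "measured"]]
tags = [["DET", "NOUN", "VERB", "NOUN"],
        ["DET", "NOUN", "AUX", "VERB"]]
print(obscure_rare_words(docs, tags, min_freq=2))
```

The resulting mixed token/POS sequences can then be fed to the classifier in place of the raw text.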
Based on the empirical data of 97 fourth-graders from three districts of Braunschweig in Germany, this paper investigates the possibility of changing semantic frames in multilingual communities. The focus of study is the verb field of self-motion. In a free-sorting task involving 52 verbs, Turkish-speaking students, in particular, placed the verbs schleichen (‘to sneak’) and kommen (‘to come’) in the same group. When explaining the perceived similarity they also used the word schleichen (‘to sneak’), in a specific grammatical construction that is not found in Standard German. This paper suggests that semantic frames may change along with grammatical constructions when typologically distinct languages come into close contact.
The article addresses Solution-Oriented Questions (SOQs) as an interactional practice for relationship management in psychodiagnostic interviews. Therapeutic alliance results from the concordance of alignment, as willingness to cooperate regarding common goals, and of affiliation, as relationship based upon trust. SOQs particularly allow for both: They are situated at the end of a troublesome topic area, which is linked to low agency on the patient’s side, and they reveal understanding of and interest in the patient. Following the paradigm of Conversation Analysis and German Gesprächsanalyse this paper analyzes the design and functions of SOQs as a means for securing and enhancing the relationship in the process of therapy. Our data comprise 15 videotaped first interviews following the manual of the Operationalized Psychodynamic Diagnostics. The analyses refer to all SOQs found but will be illustrated by means of a single conversation.
The General Data Protection Regulation (GDPR) on personal data protection in the European Union entered into application on 25 May 2018. With its 173 recitals and 99 articles, it may be one of the most ambitious pieces of EU legislation to date. Rather than a guide to GDPR compliance for Digital Humanities researchers, this chapter looks at the use of personal data in DH projects from the data subject’s perspective, and examines to what extent the GDPR kept its promise of enabling the data subject to “take control of his data”. The chapter provides an overview of the right to privacy and the right to data protection, a discussion of the relation between the concept of data control and privacy and data protection law, an introduction to the GDPR, and an explanation of its relevance for scientific research in general and DH in particular. The main section of the chapter analyses two types of data control mechanisms (consent and data subject rights) and their impact on DH research.
Twitter data is used in a wide variety of research disciplines in the Social Sciences and Humanities. Although most Twitter data is publicly available, its re-use and sharing raise many legal questions related to intellectual property and personal data protection. Moreover, the use of Twitter and its content is subject to the Terms of Service, which also regulate re-use and sharing. This extended abstract provides a brief analysis of these issues and introduces the new Academic Research product track, which enables authorized researchers to access the Twitter API on a preferential basis.
The normative layer of CLARIN is, alongside the organizational and technical layers, an essential part of the infrastructure. It consists of the regulatory framework (statutory law, case law, authoritative guidelines, etc.), the contractual framework (licenses, terms of service, etc.), and ethical norms. Navigating the normative layer requires expertise, experience, and qualified effort. In order to advise the Board of Directors, a standing committee dedicated to legal and ethical issues, the CLIC, was created. Since its establishment in 2012, the CLIC has made considerable efforts to provide not only the BoD but also the general public with information and guidance. It has published many articles (both in proceedings of CLARIN conferences and in its own White Paper Series) and developed several LegalTech tools. It also runs a Legal Information Platform, where accessible information on various issues affecting language resources can be found.
Providing online repositories for language resources is one of the main activities of CLARIN centres. The legal framework regarding liability of Service Providers for content uploaded by their users has recently been modified by the new Directive on Copyright in the Digital Single Market. A new category of Service Providers, Online Content-Sharing Service Providers (OCSSPs), was added. It is subject to a complex and strict framework, including the requirement to obtain licenses from rightholders for the hosted content. This paper provides the background and effect of these changes to law and aims to initiate a debate on how CLARIN repositories should navigate this new legal landscape.
Privacy by Design (also referred to as Data Protection by Design) is an approach in which solutions and mechanisms addressing privacy and data protection are embedded throughout the entire project lifecycle, from the early design stage, rather than merely added as an additional layer to the final product. Formulated in the 1990s by the Privacy Commissioner of Ontario, the principle of Privacy by Design has been discussed by institutions and policymakers on both sides of the Atlantic, and was already mentioned in the 1995 EU Data Protection Directive (95/46/EC). More recently, Privacy by Design was introduced as one of the requirements of the General Data Protection Regulation (GDPR), obliging data controllers to define and adopt, already at the conception phase, appropriate measures and safeguards to implement data protection principles and protect the rights of the data subject. Failing to meet this obligation may result in a hefty fine, as was the case in the Uniontrad decision by the French Data Protection Authority (CNIL). The ambition of this paper is to analyse the practical meaning of Privacy by Design in the context of Language Resources, and to propose measures and safeguards that the community can implement to ensure respect of this principle.
Ethical issues in Language Resources and Language Technology are often invoked, but rarely discussed. This is at least partly because little work has been done to systematize ethical issues and principles applicable in the fields of Language Resources and Language Technology. This paper provides an overview of ethical issues that arise at different stages of Language Resources and Language Technology development, from the conception phase through the construction phase to the use phase. Based on this overview, the authors propose a tentative taxonomy of ethical issues in Language Resources and Language Technology, built around five principles: Privacy, Property, Equality, Transparency and Freedom. The authors hope that this tentative taxonomy will facilitate ethical assessment of projects in the field of Language Resources and Language Technology, and structure the discussion on ethical issues in this domain, which may eventually lead to the adoption of a universally accepted Code of Ethics of the Language Resources and Language Technology community.
Sometimes legal scholars get relevant but baffling questions from laypersons like: “The reference to a work is personal data, so does the GDPR actually require me to anonymise it? Or, as my voice data is personal data, does the GDPR automatically give me access to a speech recognizer using my voice sample? Or, can I say anything about myself without the GDPR requiring the web host to anonymise or remove the post? What can I say about others like politicians? And, what can researchers say about patients in a research report?” Based on these questions, the authors address the interaction of intellectual property and data protection law in the context of data minimisation and attribution rights, access rights, trade secret protection, and freedom of expression.
Digital humanities research under United States and European copyright laws. Evolving frameworks
(2021)
This chapter summarizes the current state of copyright laws in the United States and the European Union that most affect Digital Humanities research, namely the fair use doctrine in the US and research exceptions in Europe, including the Directive on Copyright in the Digital Single Market, which was finally adopted in 2019. The summary begins with a description of recent copyright advances most relevant to DH research and concludes with an analysis of a significant remaining legal hurdle that DH researchers face: how fair use and research exceptions deal with the critical issue of circumventing technological protection measures (TPM, a.k.a. DRM). Our discussion of the lawful means of obtaining TPM-protected material may both contribute to current DH research and planning decisions and inform future stakeholders and lawmakers of the need to allow TPM circumvention for academic research.
We present an approach for automatic detection and correction of OCR-induced misspellings in historical texts. The main objective is the post-correction of the digitized Royal Society Corpus, a set of historical documents from 1665 to 1869. Due to the aged material, the OCR procedure made mistakes, leaving files corrupted by thousands of misspellings and motivating a post-processing step. The current correction technique is a pattern-based approach which, due to its lack of generalization, suffers from poor recall.
To generalize from the patterns, we propose to use the noisy channel model. From the pattern-based substitutions we train a corpus-specific error model, complemented with a language model. With an F1-score of 0.61, the presented technique significantly outperforms the pattern-based approach, which has an F1-score of 0.28. Due to its more accurate error model, it also outperforms other implementations of the noisy channel model.
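As a rough illustration of the noisy channel model mentioned above, a correction can be chosen as the candidate w maximizing P(w) · P(obs | w). The toy vocabulary, unigram probabilities, and single-edit error model below are assumptions for the sketch, not the corpus-specific error and language models trained in the paper.

```python
# Minimal noisy-channel spelling correction sketch: score each in-vocabulary
# candidate w by language-model probability P(w) times channel probability
# P(obs | w), and return the argmax.
def edits1(word, alphabet="abcdefghijklmnopqrstuvwxyz"):
    """All strings one substitution, insertion, or deletion away from word."""
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    subs = [a + c + b[1:] for a, b in splits if b for c in alphabet]
    inserts = [a + c + b for a, b in splits for c in alphabet]
    return set(deletes + subs + inserts)

def correct(obs, lm, error_prob=0.01):
    """Return argmax_w P(w) * P(obs | w) over the language-model vocabulary."""
    candidates = [w for w in lm if w == obs or w in edits1(obs)]
    if not candidates:
        return obs  # no plausible correction found; keep the token
    # P(obs | w): certain if no edit was needed, a small constant per edit
    def channel(w):
        return 1.0 if w == obs else error_prob
    return max(candidates, key=lambda w: lm[w] * channel(w))

# Toy unigram language model with made-up probabilities.
lm = {"friction": 0.002, "fraction": 0.004, "function": 0.01}
print(correct("functiom", lm))  # → function
```

The paper's version replaces the uniform per-edit constant with error probabilities learned from the pattern-based substitutions, which is what gives it the edge over generic noisy-channel implementations.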
The lexicography of German
(2020)
This chapter discusses the main dictionaries of the German language as it is spoken and written in Germany, and also German as it is spoken and written in Austria, Switzerland, the eastern fringes of Belgium, and South Tyrol. It also briefly describes Pennsylvania German. Corpora and other language resources used in German dictionary-making are also presented. Finally, there is a discussion of some current issues in German lexicography, as well as future prospects.
In this paper, we discuss an efficient method of (semi-automatic) neologism detection for German and its application in the production of a dictionary of neologisms, focusing on the lexicographic process. By monitoring the language via evaluation of editorial (print and online) media and interpreting the findings on the basis of lexicographic competence, many, though not all, of the neologisms that qualify for inclusion in the Neologismenwörterbuch (2006–today) at the Institute for the German Language in Mannheim (IDS) can be identified. In addition, an automated corpus-linguistic method offers lexicographers neologism candidates based on a systematic analysis of large amounts of text. We explain the principles of the corpus-linguistic compilation of a list of candidates and show how lexicographers work with the results, combining them with their own findings in order to continuously enlarge this specialized online dictionary of new words in German.
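One way such corpus-based candidate extraction could work is to flag words that are frequent in a recent corpus slice but rare or absent in an older reference slice. This is a hedged sketch only: the thresholds, corpus slices, and example tokens are assumptions, not the IDS procedure.

```python
# Illustrative neologism candidate extraction by frequency contrast between
# a recent corpus slice and an older reference slice.
from collections import Counter

def neologism_candidates(recent_tokens, reference_tokens,
                         min_recent=3, max_reference=0):
    """Words frequent in the recent slice but (near-)absent in the reference."""
    recent = Counter(recent_tokens)
    reference = Counter(reference_tokens)
    return sorted(
        w for w, n in recent.items()
        if n >= min_recent and reference[w] <= max_reference
    )

# Toy slices: "smartphone" is frequent recently and absent in the reference.
recent = ["das", "smartphone", "liegt", "auf", "dem", "tisch",
          "das", "smartphone", "klingelt", "smartphone"]
reference = ["das", "telefon", "liegt", "auf", "dem", "tisch"]
print(neologism_candidates(recent, reference))  # → ['smartphone']
```

Such a list is only a starting point; as the abstract stresses, candidates are then filtered and combined with the lexicographers' own media-monitoring findings.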