In our paper, we present a case study on the quality of concept relations in the manually developed terminological resource of grammis, an information system on German grammar. We assess a SKOS representation of the resource using the tool qSKOS, create a typology of the issues identified by the tool, and conduct a qualitative analysis of selected cases. We identify and discuss aspects that can motivate quality issues and uncover that ill-formed relations are frequently indicative of deeper issues in the data model. Finally, we outline how these findings can inform improvements in our resource’s data model, discussing implications for the machine readability of terminological data.
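No code accompanies the abstract, but the flavor of a qSKOS-style check can be sketched in a few lines. The following is a minimal, hypothetical illustration of one such issue type — cyclic skos:broader relations — with invented concept names; it is not the grammis data model or the qSKOS implementation.

```python
# Hypothetical sketch of one qSKOS-style quality check: detecting cyclic
# hierarchical relations (skos:broader cycles) in a concept scheme.
# The concept names below are invented for illustration.

def find_broader_cycles(broader):
    """Return the set of concepts involved in a skos:broader cycle.

    `broader` maps each concept to the set of its broader concepts.
    """
    cyclic = set()

    def visit(node, path):
        if node in path:
            # The cycle consists of everything from the first visit onward.
            cyclic.update(path[path.index(node):])
            return
        for parent in broader.get(node, ()):
            visit(parent, path + [node])

    for concept in broader:
        visit(concept, [])
    return cyclic

relations = {
    "Tempus": {"Verbkategorie"},
    "Verbkategorie": {"Grammatik"},
    "Grammatik": {"Tempus"},   # ill-formed: closes a cycle
    "Modus": {"Verbkategorie"},
}
print(sorted(find_broader_cycles(relations)))
```

Here the cycle implicates three concepts while "Modus", which merely points into the cycle, is not itself part of it — the kind of distinction that helps localize deeper data-model issues.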
For a long time, the lecture dominated performatively presented scientific communication. Given academic traditions, it is possible to make a connection between the lecture and classical rhetoric, a highly differentiated instrument of analysis. The tradition of the lecture has been perpetuated in the presentation of research results, first in the use of transparencies and subsequently through computer-based projections. Yet the use of media technology has also allowed new practices to emerge, including mediation practices hitherto neglected in the theory of rhetoric.
Using video-recordings from one day of a theater project for young adults, this paper investigates how the meaning of novel verbal expressions is interactionally constituted and elaborated over the interactional history of a series of activities. We examine how the theater director introduces and instructs the group in the Chekhovian technique of acting, which is based on “imagining with the body,” and how the imaginary elements of the technique are “brought into existence” in the language of the instructions. By tracking shifts in the instructor’s use of the key expressions invisible/imaginary/inner body or movement through a series of exercises, we demonstrate how they are increasingly treated as real and perceivable bodily conduct. The analyses focus on the instructor’s attribution of factual and agentive properties to these expressions, and the changes that these properties undergo over the series of instructions. This case demonstrates the significance of longitudinal processes for the establishment of shared meaning in social interaction. The study thereby contributes to the field of interactional semantics and to longitudinal studies of social interaction.
According to Positioning Theory, participants in narrative interaction can position themselves on a representational level concerning the autobiographical, told self, and a performative level concerning the interactive and emotional self of the tellers. The performative self is usually much harder to pin down, because it is a non-propositional, enacted self. In contrast to everyday interaction, psychotherapists regularly topicalize the performative self explicitly. In our paper, we study how therapists respond to clients' narratives by interpretations of the client's conduct, shifting from the autobiographical identity of the told self, which is the focus of the client's story, to the present performative self of the client. Drawing on video recordings from three psychodynamic therapies (tiefenpsychologisch fundierte Psychotherapie) with 25 sessions each, we will analyze in detail five extracts of therapists' shifts from the representational to the performative self. We highlight four findings:
• Whereas clients' narratives often serve to support identity claims in terms of personal psychological and moral characteristics, therapists tend rather to focus on clients' feelings, motives, current behavior, and ways of interacting.
• In response to clients' stories, therapists first show empathy and confirm clients' accounts, before shifting to clients' performative self.
• Therapists ground the shift to clients' performative self by references to clients' observable behavior.
• Therapists do not simply expect affiliation with their views on clients' performative self. Rather, they use such shifts to promote the clients' self-exploration. Yet, if clients resist exploring their selves in more detail, therapists more explicitly ascribe motives and feelings that clients do not seem to be aware of. The shift in positioning levels thus seems to have a preparatory function for engendering therapeutic insights.
Coaching outcome research convincingly argues that coaching is effective and facilitates change in clients. While coaching practice literature depicts questions as a key vehicle for such change, empirical findings regarding the local and global change potential of questions are so far largely missing in both (psychological) outcome research and (linguistic and psychological) process research on coaching. The local change potential of questions refers to a turn-by-turn transformation as a result of their sequentiality, while the global change potential is related to the power of questions to initiate, process, and finalize established phases of change. This programmatic article on questions, or rather questioning sequences, in executive coaching pursues two goals: firstly, it takes stock of available insights into questions in coaching and advocates for Conversation Analysis as a fruitful methodological framework to assess the local change potential of questioning sequences. Secondly, it points to the limitations of a local turn-by-turn approach for unraveling the overall change potential of questions and calls for an interdisciplinary approach to bring local and global effectiveness into relation. Such an approach is premised on conversational sequentiality and psychological theories of change, and it facilitates research on questioning sequences as both local and global agents of change across the continuum of coaching sessions. We present the TSPP Model as a first result of such an interdisciplinary cooperation.
As part of a larger research paradigm on understanding client change in the helping professions from an interprofessional perspective, this paper applies a conversation analytic approach to investigate therapists’ requesting examples (REs) and their interactional and sequential contribution to clients’ change during the diagnostic evaluation process. The analyzed data comprises 15 videotaped intake interviews that followed the system of Operationalized Psychodynamic Diagnosis. Therapists’ requesting examples in psychodiagnostic interviews explicitly or implicitly criticize the patient’s prior turn as insufficient. They also open a retro-sequence and in the following turns provide for a description that helps clarify meaning and evince psychic or relational aspects of the topic at hand. While the therapist’s prior request initiates the patient’s insufficient presentation, the patient’s example presentation is regularly followed by the therapist’s summarizing comments or by further requests. Requesting examples thus are a particular case of requests that follow expandable responses regarding the sequential organization; yet, given that they make examples conditionally relevant, they are more specific. With the help of this sequential organization, participants co-construct common knowledge which allows the therapist to pursue the overall aim of therapy, which is to increase the patients’ awareness of their distorted perceptions, and thus to pave the way for change.
The newest generation of speech technology has caused a huge increase in audio-visual data that is nowadays enhanced with orthographic transcripts, for example through automatic subtitling on online platforms. Research data centers and archives contain a range of new and historical data which are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making that data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias of the ASR system, with higher recognition scores for the north of Germany and lower scores for the south. A detailed analysis of the narrow-region data revealed – despite relatively high ASR confidence – some specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have the potential to help ASR development make targeted data selection for training/adaptation and to increase sensitivity towards varieties of pluricentric languages.
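The abstract does not specify the metric behind its recognition scores; the standard measure in ASR evaluation is the word error rate (WER). A minimal, self-contained sketch with invented example strings:

```python
# Minimal word-error-rate (WER) sketch, the standard measure behind
# "recognition scores" in ASR evaluation. Example strings are invented.

def wer(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("das ist ein haus", "das is ein haus"))  # one substitution -> 0.25
```

Computing WER per region over a regionally balanced corpus is what makes a north/south bias of the kind reported above visible.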
Lean syntax: how argument structure is adapted to its interactive, material, and temporal ecology
(2020)
It has often been argued that argument structure in spoken discourse is less complex than in written discourse. This paper argues that lean argument structure, in particular argument omission, gives evidence of how the production and understanding of linguistic structures is adapted to the interactive, material, and temporal ecology of talk-in-interaction. It is shown how lean argument structure builds on participants' ongoing bodily conduct, joint perceptual salience, joint attention, and their orientation to expectable next actions within a joint project. The phenomena discussed in this paper are verb-derived discourse markers and tags, analepsis in responsive actions, and ellipsis in first actions, such as requests and instructions. The study draws from transcripts and audio- and video-recordings of naturally occurring interaction in German from the Research and Teaching Corpus of Spoken German (FOLK).
This article makes an empirical and a methodological contribution to the comparative study of action. The empirical contribution is a comparative study of three distinct types of action regularly accomplished with the turn format du meinst x (“you mean/think x”) in German: candidate understandings, formulations of the other’s mind, and requests for a judgment. These empirical materials are the basis for a methodological exploration of different levels of researcher abstraction in the comparative study of action. Two levels are examined: the (coarser) level of conditionally relevant responses (what a response speaker must do to align with the action of the prior turn) and the (finer) level of “full alignment” (what a response speaker can do to align with the action of a prior turn). Both levels of abstraction provide empirically viable and analytically interesting descriptive concepts for the comparative study of action. Data are in German.
Individuals with Autism Spectrum Disorder (ASD) experience a variety of symptoms, sometimes including atypicalities in language use. The study explored differences in semantic network organisation of adults with ASD without intellectual impairment. We assessed clusters and switches in verbal fluency tasks (‘animals’, ‘human feature’, ‘verbs’, ‘r-words’) via curve fitting in combination with corpus-driven analysis of semantic relatedness and evaluated socio-emotional and motor action related content. Compared to participants without ASD (n=39), participants with ASD (n=32) tended to produce smaller clusters, longer switches, and fewer words in semantic conditions (no p values survived Bonferroni correction), whereas relatedness and content were similar. In ASD, semantic networks underlying cluster formation appeared comparably small without affecting strength of associations or content.
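As a rough illustration of the cluster/switch measures used here, simplified: a cluster is a run of consecutive responses sharing a subcategory, and each transition between runs is a switch. Published scoring schemes differ in detail (some count cluster size as run length minus one), and the labels below are invented.

```python
# Hedged sketch of cluster/switch scoring in a verbal fluency task:
# a cluster is a run of consecutive responses sharing a subcategory,
# and each transition between runs counts as a switch. The labels are
# invented; published scoring schemes differ in detail.

def clusters_and_switches(labels):
    """Return (run lengths, number of switches) for a label sequence."""
    sizes = []
    current = None
    for label in labels:
        if sizes and label == current:
            sizes[-1] += 1          # extend the current cluster
        else:
            sizes.append(1)         # start a new cluster
            current = label
    return sizes, max(len(sizes) - 1, 0)

sizes, switches = clusters_and_switches(
    ["pet", "pet", "farm", "farm", "farm", "pet"])
print(sizes, switches)  # [2, 3, 1] 2
```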
This thesis describes work in three areas: grammar engineering, computer-assisted language learning, and grammar learning. These three parts are connected by the concept of a grammar-based language learning application. Two types of grammars are of concern. The first we call resource grammars: extensive descriptions of natural languages. Part I focuses on this kind of grammar. The others are domain-specific or application-specific grammars. These grammars only describe a fragment of natural language that is determined by the domain of a certain application. Domain-specific grammars are relevant for Part II and Part III. Another important distinction is between humans learning a new natural language using computational grammars (Part II) and computers learning grammars from example sentences (Part III). Part I of this thesis focuses on grammar engineering and grammar testing. It describes the development and evaluation of a computational resource grammar for Latin. Latin is known for its rich morphology and free word order, both of which have to be handled in a computationally efficient way. A special focus is on methods for evaluating computational grammars using corpus data. Such an evaluation is presented for the Latin resource grammar. Part II, the central part, describes a computer-assisted language learning application based on domain-specific grammars. The language learning application demonstrates how computational grammars can be used to guide the user input and how language learning exercises can be modeled as grammars. This allows us to put computational grammars at the center of the design of language learning exercises used to help humans learn new languages. Part III, the final part, is dedicated to a method for learning domain- or application-specific grammars based on a wide-coverage grammar and small sets of example sentences.
Here a computer learns a grammar for a fragment of a natural language from example sentences, potentially without any additional human intervention. These learned grammars can be based, e.g., on the Latin resource grammar described in Part I and used as domain-specific lesson grammars in the language learning application described in Part II.
Nonnative-accented speakers face prevalent discrimination. The assumption that people freely express negative sentiments toward nonnative speakers has also guided common research methods. However, recent studies did not consistently find downgrading, so that prejudice against nonnative accents might even be questioned at first sight. The present theoretical article bridges these contradictory findings in three ways: (a) We illustrate that nonnative speakers with foreign accents frequently may not be downgraded in commonly used first-impression and employment scenario paradigms. It appears that relatively controlled responding may be influenced by norms and motivations to respond without prejudice, whereas negative biases emerge in spontaneous responding. (b) We present an integrative view based on knowledge of modern forms of prejudice to develop modern notions of accentism, which allow for predictions of when accent biases are (not) likely to surface. (c) We conclude with implications for interventions and a tailored research agenda.
In this article, we examine the current situation of data dissemination and provision for CMC corpora. In doing so, we aim to provide a guiding grid for future projects that will improve the transparency and replicability of research results as well as the reusability of the created resources. Based on the FAIR guiding principles for research data management, we evaluate the 20 European CMC corpora listed in the CLARIN CMC Resource Family, identify successful strategies among the existing corpora, and establish best practices for future projects. We give an overview of existing approaches to data referencing, dissemination, and provision in European CMC corpora, and discuss the methods, formats, and strategies used. Furthermore, we discuss the need for community standards and offer recommendations for best practices when creating a new CMC corpus.
We present recognizers for four very different types of speech, thought and writing representation (STWR) for German texts. The implementation is based on deep learning with two different customized contextual embeddings, namely FLAIR embeddings and BERT embeddings. This paper gives an evaluation of our recognizers with a particular focus on the differences in performance we observed between those two embeddings. FLAIR performed best for direct STWR (F1=0.85), BERT for indirect (F1=0.76) and free indirect (F1=0.59) STWR. For reported STWR, the comparison was inconclusive, but BERT gave the best average results and best individual model (F1=0.60). Our best recognizers, our customized language embeddings and most of our test and training data are freely available and can be found via www.redewiedergabe.de or at github.com/redewiedergabe.
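The F1 values reported above can be read as span-level F1. A minimal sketch with invented gold and predicted spans — this is an illustration of the metric, not the paper's evaluation script:

```python
# Sketch of span-level F1 as typically used to evaluate STWR recognizers:
# spans are (start, end, type) triples; the sets below are invented toy data.

def f1_score(gold, predicted):
    tp = len(gold & predicted)  # exact-match true positives
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(gold)
    return 2 * precision * recall / (precision + recall)

gold = {(0, 5, "direct"), (10, 14, "indirect"), (20, 25, "reported")}
pred = {(0, 5, "direct"), (10, 14, "indirect"), (30, 33, "direct")}
print(round(f1_score(gold, pred), 2))  # 2 of 3 correct either way -> 0.67
```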
In this paper, we describe a schema and models which have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG Wiki and will accompany a feature request to the TEI Council in late 2019.
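As a toy illustration of the basic CMC unit, a post element can be serialized with any XML tooling. The attribute names and values below are simplified for illustration and are not the normative CMC-core schema:

```python
# Hedged sketch of a basic CMC unit in the spirit of CMC-core: a <post>
# element carrying author and timestamp information. Attribute names and
# values are simplified illustrations, not the normative TEI customisation.
import xml.etree.ElementTree as ET

post = ET.Element("post", {"who": "#userA", "when": "2019-05-01T10:15:00"})
p = ET.SubElement(post, "p")
p.text = "Hello, anyone here?"

print(ET.tostring(post, encoding="unicode"))
```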
The annual microcensus provides Germany’s most important official statistics. Unlike a census, it does not cover the whole population but a representative 1% sample of it. In 2017, the German microcensus asked a question on the language of the population: ‘Which language is mainly spoken in your household?’ Unfortunately, the question, its design, and its position within the microcensus questionnaire feature several shortcomings. The main shortcoming is that multilingual repertoires cannot be captured by it. We therefore offer recommendations for improving the microcensus’ language question: first and foremost, the question (i.e. its wording, design, and answer options) should make it possible to capture multilingual repertoires.
Linguistic Variation and Change in 250 Years of English Scientific Writing: A Data-Driven Approach
(2020)
We trace the evolution of Scientific English through the Late Modern period to modern time on the basis of a comprehensive corpus composed of the Transactions and Proceedings of the Royal Society of London, the first and longest-running English scientific journal established in 1665. Specifically, we explore the linguistic imprints of specialization and diversification in the science domain which accumulate in the formation of “scientific language” and field-specific sublanguages/registers (chemistry, biology etc.). We pursue an exploratory, data-driven approach using state-of-the-art computational language models and combine them with selected information-theoretic measures (entropy, relative entropy) for comparing models along relevant dimensions of variation (time, register). Focusing on selected linguistic variables (lexis, grammar), we show how we deploy computational language models for capturing linguistic variation and change and discuss benefits and limitations.
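The information-theoretic measures mentioned — entropy and relative entropy (Kullback-Leibler divergence) — can be sketched for toy unigram distributions as follows; the word probabilities are invented:

```python
# Minimal sketch of the information-theoretic measures named above:
# entropy of a word distribution and relative entropy (KL divergence)
# between two distributions. The toy distributions are invented.
from math import log2

def entropy(p):
    return -sum(pi * log2(pi) for pi in p.values() if pi > 0)

def kl_divergence(p, q):
    """D(p || q); assumes q covers p's support (e.g. after smoothing)."""
    return sum(pi * log2(pi / q[w]) for w, pi in p.items() if pi > 0)

early = {"phlogiston": 0.4, "experiment": 0.4, "oxygen": 0.2}
late = {"phlogiston": 0.1, "experiment": 0.4, "oxygen": 0.5}
print(round(entropy(early), 3))
print(round(kl_divergence(early, late), 3))
```

Comparing such divergences along the time and register dimensions is the kind of model comparison the data-driven approach above relies on.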
We present web services which implement a workflow for transcripts of spoken language following the TEI guidelines, in particular ISO 24624:2016 “Language resource management – Transcription of spoken language”. The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
Twenty-two historical encyclopedias encoded in TEI: a new resource for the Digital Humanities
(2020)
This paper accompanies the corpus publication of EncycNet, a novel XML/TEI annotated corpus of 22 historical German encyclopedias from the early 18th to early 20th century. We describe the creation and annotation of the corpus, including the rationale for its development, suggested methodology for TEI annotation, possible use cases and future work. While many well-developed annotation standards for lexical resources exist, none can adequately model the encyclopedias at hand, and we therefore suggest how the TEI Lex-0 standard may be modified with additional guidelines for the annotation of historical encyclopedias. As the digitization and annotation of historical encyclopedias are settling on TEI as the de facto standard, our methodology may inform similar projects.
We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task, which contains text from the news domain, whereas DeReKo also covers other domains including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser’s probability estimates for unlabeled and labeled attachment as the main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain from the training corpus.
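The validation step — checking that the parser's probability estimates track actual attachment scores — amounts to a correlation over paired values. A minimal Pearson-correlation sketch with invented numbers, not the paper's data:

```python
# Sketch of the validation step described above: checking that a parser's
# probability estimates correlate with actual attachment scores on a
# manually annotated test set. The paired values below are invented.
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

estimated = [0.97, 0.93, 0.88, 0.81, 0.75]   # parser's own estimates
actual =    [0.95, 0.92, 0.86, 0.83, 0.74]   # labeled attachment scores
print(round(pearson(estimated, actual), 3))
```

A high correlation on the annotated test set licenses using the estimates alone on the unannotated domains.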
The present paper outlines the projected second part of the Corpus Query Lingua Franca (CQLF) family of standards: CQLF Ontology, which is currently in the process of standardization at the International Standards Organization (ISO), in its Technical Committee 37, Subcommittee 4 (TC37SC4) and its national mirrors. The first part of the family, ISO 24623-1 (henceforth CQLF Metamodel), was successfully adopted as an international standard at the beginning of 2018. The present paper reflects the state of the CQLF Ontology at the moment of submission for the Committee Draft ballot. We provide a brief overview of the CQLF Metamodel, present the assumptions and aims of the CQLF Ontology, its basic structure, and its potential extended applications. The full ontology is expected to emerge from a community process, starting from an initial version created by the authors of the present paper.
This paper addresses long-term archival for large corpora. We focus on three aspects specific to language resources, namely (1) the removal of resources for legal reasons, (2) versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases but also part of different collections, and (3) the conversion of data to new formats for digital preservation. We motivate why language resources may have to be changed and why formats may need to be converted. As a solution, we suggest the use of an intermediate proxy object called a signpost. The approach is exemplified with respect to the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
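A signpost can be thought of as a small persistent record standing between a citable identifier and the data objects behind it. The following sketch is a hypothetical data structure with invented field names and paths, not the institute's implementation:

```python
# Hedged sketch of the "signpost" idea: a persistent proxy object between
# a citable identifier and the (possibly removed, versioned, or
# format-migrated) data objects behind it. Field names are invented.
from dataclasses import dataclass, field

@dataclass
class Signpost:
    pid: str                                      # persistent identifier
    versions: dict = field(default_factory=dict)  # version -> location
    withdrawn: set = field(default_factory=set)   # versions removed, e.g. for legal reasons

    def resolve(self, version):
        if version in self.withdrawn:
            return None  # the object is gone, but the signpost persists
        return self.versions.get(version)

sp = Signpost("hdl:10932/example-text")
sp.versions["2019-I"] = "dereko/2019-I/example.xml"
sp.versions["2020-I"] = "dereko/2020-I/example.tei.xml"  # format migration
sp.withdrawn.add("2019-I")  # removal by legal injunction

print(sp.resolve("2020-I"))
print(sp.resolve("2019-I"))
```

The citable identifier stays stable even though one version was withdrawn and another migrated to a new format.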
This technology watch report discusses digital repository solutions, in the context of the research infrastructure projects CLARIAH-DE, CLARIN, and DARIAH. It provides an overview of different repository systems, comparing them and discussing their respective applicabilities from the perspectives of the project partners at the time of writing.
Signposts for CLARIN
(2020)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold et al. (2020) present Signposts as a solution to challenges in the long-term preservation of corpora, especially corpora that are continuously extended and subject to modification, e.g. due to legal injunctions, but that may also overlap with respect to constituents and may be subject to migrations to new data formats. We describe the contribution Signposts can make to the CLARIN infrastructure and document the design of the CMDI profile.
The CMDI Explorer
(2020)
We present the CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. The CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
In order to satisfy the information needs of a wide range of researchers across a number of disciplines, large textual datasets require careful design, collection, cleaning, encoding, annotation, storage, retrieval, and curation. This daunting set of tasks has coalesced into a number of key themes and questions that are of interest to the contributing research communities: (a) what sampling techniques can we apply? (b) what quality issues should we be aware of? (c) what infrastructures and frameworks are being developed for the efficient storage, annotation, analysis, and retrieval of large datasets? (d) what affordances do visualisation techniques offer for the exploratory analysis of corpora? (e) what legal paths can be followed in dealing with IPR and data protection issues governing both the data sources and the query results? (f) how can we guarantee that corpus data remain available and usable in a sustainable way?
Repeating the movements associated with activities such as drawing or sports typically leads to improvements in kinematic behavior: these movements become faster, smoother, and exhibit less variation. Likewise, practice has also been shown to lead to faster and smoother movement trajectories in speech articulation. However, little is known about its effect on articulatory variability. To address this, we investigate the extent to which repetition and predictability influence the articulation of the frequent German word “sie” [zi] (they). We find that articulatory variability is proportional to speaking rate and the duration of [zi], and that overall variability decreases as [zi] is repeated during the experiment. Lower variability is also observed as the conditional probability of [zi] increases, and the greatest reduction in variability occurs during the execution of the vocalic target of [i]. These results indicate that practice can produce observable differences in the articulation of even the most common gestures used in speech.
Making corpora accessible and usable for linguistic research is a huge challenge in view of (too) big data, legal issues, and a rapidly evolving methodology. This affects not only the design of user-friendly graphical interfaces to corpus analysis tools, but also the availability of programming interfaces supporting access to the functionality of these tools from various analysis and development environments. RKorAPClient is a new research tool in the form of an R package that interacts with the web API of the corpus analysis platform KorAP, which provides access to large annotated corpora, including the German reference corpus DeReKo with 45 billion tokens. In addition to optionally authenticated KorAP API access, RKorAPClient provides further processing and visualization features to simplify common corpus analysis tasks. This paper introduces the basic functionality of RKorAPClient and exemplifies various analysis tasks based on DeReKo, which are bundled within the R package and can serve as a basic framework for advanced analysis and visualization approaches.
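The kind of request RKorAPClient issues can be sketched as URL construction against the KorAP search endpoint, here with Python's standard library only. The endpoint path and parameter names below follow the public KorAP instance as commonly documented, but should be treated as assumptions rather than a normative API reference:

```python
# Hedged sketch of a KorAP web API search request as a URL, built with the
# standard library. The endpoint path and the parameters `q` (query) and
# `ql` (query language) are assumptions about the public KorAP instance.
from urllib.parse import urlencode

BASE = "https://korap.ids-mannheim.de/api/v1.0/search"

def korap_query_url(query, query_language="poliqarp"):
    return BASE + "?" + urlencode({"q": query, "ql": query_language})

url = korap_query_url("Ameisenplage")
print(url)
```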
CLARIN contractual framework for sharing language data: the perspective of personal data protection
(2020)
The article analyses the responsibility for ensuring compliance with the General Data Protection Regulation (GDPR) in research settings. As a general rule, organisations are considered the data controller (the party responsible for GDPR compliance). Research, however, constitutes a unique setting influenced by academic freedom. This raises the question of whether academics could be considered controllers as well. Although there are some court cases and policy documents on this issue, it is not yet settled. The analysis serves as a preliminary analytical background for redesigning the CLARIN contractual framework for sharing data.
Privacy by Design (also referred to as Data Protection by Design) is an approach in which solutions and mechanisms addressing privacy and data protection are embedded throughout the entire project lifecycle, from the early design stage, rather than just added as an additional layer to the final product. Formulated in the 1990s by the Privacy Commissioner of Ontario, the principle of Privacy by Design has been discussed by institutions and policymakers on both sides of the Atlantic and was mentioned already in the 1995 EU Data Protection Directive (95/46/EC). More recently, Privacy by Design was introduced as one of the requirements of the General Data Protection Regulation (GDPR), obliging data controllers to define and adopt, already at the conception phase, appropriate measures and safeguards to implement data protection principles and protect the rights of the data subject. Failing to meet this obligation may result in a hefty fine, as was the case in the Uniontrad decision by the French Data Protection Authority (CNIL). The ambition of the proposed paper is to analyse the practical meaning of Privacy by Design in the context of language resources and to propose measures and safeguards that the community can implement to ensure respect of this principle.
Providing online repositories for language resources is one of the main activities of CLARIN centres. The legal framework regarding the liability of Service Providers for content uploaded by their users has recently been modified by the new Directive on Copyright in the Digital Single Market. A new category of Service Providers, Online Content-Sharing Service Providers (OCSSPs), was added. It is subject to a complex and strict framework, including the requirement to obtain licenses from rightholders for the hosted content. This paper provides the background to these legal changes, outlines their effects, and aims to initiate a debate on how CLARIN repositories should navigate this new legal landscape.
Corpus REDEWIEDERGABE
(2020)
This article presents the corpus REDEWIEDERGABE, a German-language historical corpus with detailed annotations for speech, thought and writing representation (ST&WR). With approximately 490,000 tokens, it is the largest resource of its kind. It can be used to answer literary and linguistic research questions and serve as training material for machine learning. This paper describes the composition of the corpus and the annotation structure, discusses some methodological decisions and gives basic statistics about the forms of ST&WR found in this corpus.
This is an introduction to a special issue of Dictionaries: Journal of the Dictionary Society of North America. It offers a characterization of neology and describes the Globalex-sponsored workshop at which the papers in the issue originated. It provides an overview of the papers, which treat lexicographical neology and neological lexicography in Danish, Dutch, Estonian, Frisian, Greek, Korean, Spanish, and Swahili and address relevant aspects of lexicography in those languages, presenting state-of-the-art research into neology and ideas about modern lexicographic treatment of neologisms in various dictionary types.
T-Shirt Lexicography
(2020)
This article presents a study of graphic inscriptions on garments such as T-shirts, inscriptions that resemble entries in general monolingual dictionaries of German. Referred to here as "T-shirt lexicography," the collected material is analyzed in terms of its form, content, and function, focusing on lexicographical aspects. T-shirt lexicography is an example of vernacular lexicography inasmuch as different lexicographical traditions are assumed (correctly as well as erroneously) by the (unknown) authors, but also adapted to their specific needs.
This contribution deals with right-dislocated complement clauses with the subordinating conjunction dass (‘that’) in German talk-in-interaction. The bi-clausal construction we analyze is as follows: The first clause, in which one argument is realized by the demonstrative pronoun das (‘this/that’), is syntactically and semantically complete; the reference of the pronoun is (re-)specified by adding a dass-complement clause after a point of possible completion (e.g., aber das hab ich nich MITbekommen. (0.32) dass es da so YOUtubevideos gab. ‘But I wasn’t aware of that. That there were videos about that on YouTube.’). The first clause always performs a backward-oriented action (e.g., an assessment) and the second clause (re-)specifies the propositional reference of the demonstrative, allowing for a (strategic) perspective shift. Based on a collection of 93 cases from everyday conversations and institutional interactions, we found that the construction is used close to the turn-beginning for referring to and (re-)specifying (parts of) another speaker’s prior turn; turn-internal uses tie together parts of a speaker’s multi-unit turn. The construction thus facilitates an incremental constitution of meaning and reference.
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality and curation criteria for annotated audiovisual language data. Building on existing resources developed earlier by the participating institutions, QUEST develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are the depositors’ questionnaire and automatic quality assurance, both developed as web applications. They are accompanied by a knowledge base that will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we split linguistic data into three resource classes (data deposits, collections and corpora). The class of a resource defines the strictness of the quality assurance it should undergo. This division is introduced so that overly strict quality criteria do not prevent researchers from depositing their data.
This paper analyses the variation we find in the realization of finite clausal complements in the position of prepositional objects in a set of Germanic languages. The Germanic languages differ with respect to whether prepositions can directly select a clause (North Germanic) or not and instead need a prepositional proform (Continental West Germanic). Within the Continental West Germanic languages, we find further differences with respect to the constituent structures. We propose that German strong vs. weak prepositional proforms (e.g. drauf vs. darauf) differ with respect to their syntax, while this is not the case for the Dutch forms (ervan vs. daarvan). What the Germanic languages under consideration share is that the prepositional element can be covert, except in English. English shows only limited evidence for the presence of P with finite clauses in the position of prepositional objects generally, but only with a selected set of verbs. This investigation is a first step towards a broader study of the nature of clauses in prepositional object positions and the implications for the syntax of clausal complementation.
This article examines the language contact situation as well as the language attitudes of the Caucasian Germans, descendants of German-born inhabitants of the Russian Empire and the Soviet Union who emigrated in 1816/17 to areas of Transcaucasia. After deportations and migrations, the group of Caucasian Germans now consists of those who have since emigrated to Germany and those who still live in the South Caucasus. This is the first time that sociolinguistic methods have been used to record data from the generation who experienced living in the South Caucasus and in Germany as well as from two succeeding generations. Initial results will be presented below with a focus on the language contact constellations of German varieties as well as on consequences of language contact and language repression, which both affect language attitudes.
Beyond Citations: Corpus-based Methods for Detecting the Impact of Research Outcomes on Society
(2020)
This paper proposes, implements and evaluates a novel, corpus-based approach for identifying categories indicative of the impact of research via a deductive (top-down, from theory to data) and an inductive (bottom-up, from data to theory) approach; the resulting categorization schemes differ in substance. Research outcomes are typically assessed using bibliometric methods, such as citation counts and patterns, or alternative metrics, such as references to research in the media. Shortcomings of these methods are their inability to identify the impact of research beyond academia (bibliometrics) and to consider text-based impact indicators beyond those that capture attention (altmetrics). We address these limitations by leveraging a mixed-methods approach for eliciting impact categories from experts and project personnel (deductive) and from texts (inductive). Using these categories, we label a corpus of project reports per category schema and apply supervised machine learning to infer these categories from project reports. The classification results show that we can predict deductively and inductively derived impact categories with 76.39% and 78.81% accuracy (F1-score), respectively. Our approach can complement solutions from bibliometrics and scientometrics for assessing the impact of research and studying the scope and types of advancements transferred from academia to society.
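The supervised step described in this abstract (labelled project reports in, impact categories out) can be sketched as a standard text-classification pipeline. The following minimal Naive Bayes sketch is illustrative only: the category names and report snippets are invented, and the paper's actual setup may use different features and models.

```python
import math
from collections import Counter, defaultdict

def tokenize(text):
    return text.lower().split()

class NaiveBayes:
    """Minimal multinomial Naive Bayes with add-one smoothing."""

    def fit(self, docs, labels):
        self.label_counts = Counter(labels)
        self.word_counts = defaultdict(Counter)
        self.vocab = set()
        for doc, label in zip(docs, labels):
            tokens = tokenize(doc)
            self.word_counts[label].update(tokens)
            self.vocab.update(tokens)
        return self

    def predict(self, doc):
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, -math.inf
        for label in self.label_counts:
            # log prior for the category
            score = math.log(self.label_counts[label] / total_docs)
            total = sum(self.word_counts[label].values())
            for token in tokenize(doc):
                # add-one smoothed log likelihood of each token
                score += math.log((self.word_counts[label][token] + 1)
                                  / (total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Invented impact categories and report snippets, for illustration only.
docs = [
    "new software tool released to industry partners",
    "policy recommendations adopted by the ministry",
    "open source tool downloads by companies",
    "findings informed new government policy",
]
labels = ["technology transfer", "policy impact",
          "technology transfer", "policy impact"]
clf = NaiveBayes().fit(docs, labels)
print(clf.predict("the software tool was released to companies"))
```

With realistic data one would of course use a held-out test set and report per-category F1, as the paper does.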
Interoperability in an Infrastructure Enabling Multidisciplinary Research: The case of CLARIN
(2020)
CLARIN is a European Research Infrastructure providing access to language resources and technologies for researchers in the humanities and social sciences. It supports the use and study of language data in general and aims to increase the potential for comparative research of cultural and societal phenomena across the boundaries of languages and disciplines, all in line with the European agenda for Open Science. Data infrastructures such as CLARIN have recently embarked on the emerging frameworks for the federation of infrastructural services, such as the European Open Science Cloud and the integration of services resulting from multidisciplinary collaboration in federated services for the wider domain of the social sciences and humanities (SSH). In this paper we describe the interoperability requirements that arise through the existing ambitions and the emerging frameworks. The interoperability theme will be addressed at several levels, including organisation and ecosystem, design of workflow services, data curation, performance measurement and collaboration. For each level, some concrete outcomes are described.
As part of the ZuMult project, we are currently modelling a backend architecture that should provide query access to corpora from the Archive of Spoken German (AGD) at the Leibniz Institute for the German Language (IDS). We are exploring how to reuse existing search engine frameworks that provide full-text indices and allow corpora to be queried with one of the corpus query languages (QLs) established and actively used in the corpus research community. For this purpose, we tested MTAS, an open-source Lucene-based search engine for querying text with multilevel annotations. We applied MTAS to three oral corpora stored in the TEI-based ISO standard for transcriptions of spoken language (ISO 24624:2016). These corpora differ from the corpus data that MTAS was developed for, because they include interactions with two or more speakers and are enriched, inter alia, with timeline-based annotations. In this contribution, we report our test results and address issues that arise when search frameworks originally developed for querying written corpora are transferred to the field of spoken language.
N-grams are of utmost importance for modern linguistics and language theory. The legal status of n-grams, however, raises many practical questions. Traditionally, text snippets are considered copyrightable if they meet the originality criterion, but no clear indicators as to the minimum length of original snippets exist; moreover, the solutions adopted in some EU Member States (the paper cites German and French law as examples) are considerably different. Furthermore, recent developments in EU law (the CJEU's Pelham decision and the new right of newspaper publishers) also provide interesting arguments in this debate. The proposed paper presents the existing approaches to the legal protection of n-grams and tries to formulate some clear guidelines as to the length of n-grams that can be freely used and shared.
Song lyrics can be considered a text genre that has features of both written and spoken discourse and potentially provides extensive linguistic and cultural information to scientists from various disciplines. So far, however, pop songs have played a rather subordinate role in empirical language research, most likely due to the absence of scientifically valid and sustainable resources. The present paper introduces a multiply annotated corpus of German lyrics as a publicly available basis for multidisciplinary research. The resource contains three types of data for the investigation and evaluation of quite distinct phenomena: TEI-compliant song lyrics as primary data, linguistically and literarily motivated annotations, and extralinguistic metadata. It promotes empirically and statistically grounded analyses of genre-specific features, systemic-structural correlations and tendencies in the texts of contemporary pop music. The corpus has been stratified into thematic and author-specific archives; the paper presents some basic descriptive statistics, as well as the public online frontend with its built-in evaluation forms and live visualisations.
The majority of new words are included in dictionaries only after a certain period of time during which they have become more frequent in use and have established morphosyntactic and orthographic features consistent with the language system they are borrowed into. In the case of borrowed new words, inclusion often takes place at a transitional state of assimilation to the language system, where delayed orthographic or phonetic change cannot be ruled out and the differentiation between standard-conforming and non-standard orthographic word forms of a lemma often depends on the proximity between the writing systems of the donor and the recipient language. Following a brief overview of loan words and their lexicographical description in the Neologismenwörterbuch, a specialized online dictionary of neologisms in contemporary German, this paper presents findings of an investigative case study on dictionary entries for a neologism borrowed from a logographic language system and discusses the potential of a corpus-based description of new loan words.
This paper discusses a theoretical and empirical approach to language fixedness that we have developed in the project Usuelle Wortverbindungen (UWV) at the Institut für Deutsche Sprache (IDS, ‘Institute for German Language’) in Mannheim over the last decade. The analysis described is based on the Deutsches Referenzkorpus (‘German Reference Corpus’, DeReKo), which is located at the IDS. The corpus analysis tool used for accessing the corpus data is COSMAS II (CII) and, for statistical analysis, the IDS collocation analysis tool (CA; Belica, 1995). For detecting lexical patterns and describing their semantic and pragmatic nature, we use the tool lexpan (‘Lexical Pattern Analyzer’), which was developed in our project. We discuss a new corpus-driven pattern dictionary that is relevant not only to the field of phraseology, but also to usage-based linguistics and lexicography as a whole.
This article explores a sequence organizational phenomenon that results from the use of a loosely specifiable turn format (viz., That’s + wh-clause) for launching (next) sequences while at the same time connecting back to a prior turn. Using this practice creates a sequential juncture, i.e., a pivot-like nexus between one sequence and a next. In third position, such junctures serve to accomplish seamless sequential transitions from one sequence into a next by presenting the latter as locally occasioned. The practice may, however, also be deployed in second position to launch actions that have not been made relevant or provided for by the preceding action and exhibit response relevance themselves. The sequential junctures then become retro-sequential in character: They transform the projected trajectory of the sequence in progress and create interlocking sequential structures. These findings highlight that sequence is practice, while pointing to understudied interconnections between tying and sequentiality. Data are in English.
How Do Speakers Define the Meaning of Expressions? The Case of German x heißt y (“x means y”)
(2020)
To secure mutual understanding in interaction, speakers sometimes explain or negotiate expressions. Adopting a conversation-analytic and interactional-linguistic approach, I examine which kinds of expressions participants explain in different sequential environments using the format x heißt y (“x means y”). When speakers use it to clarify technical terms or foreign words that are unfamiliar to co-participants, they often provide a situationally anchored definition that is nevertheless rather context-free and therefore transferable to future situations. When they explain common (but indexical, ambiguous, polysemous, or problematic) expressions instead, speakers design their explanations in close connection to the local context, building on situational circumstances. I argue that x heißt y definitions in interaction do not meet the requirements of scientific or philosophical definitions, but that this is irrelevant for the situational exigencies speakers face.
This article describes the development of the digital infrastructure at a research data centre for audio-visual linguistic research data, the Hamburg Centre for Language Corpora (HZSK) at the University of Hamburg in Germany, over the past ten years. The typical resource hosted in the HZSK Repository, the core component of the infrastructure, is a collection of recordings with time-aligned transcripts and additional contextual data, a spoken language corpus. Since the centre has a thematic focus on multilingualism and linguistic diversity and provides its service to researchers within linguistics and other disciplines, the development of the infrastructure was driven by diverse usage scenarios and user needs on the one hand, and by the common technical requirements for certified service centres of the CLARIN infrastructure on the other. Beyond the technical details, the article also aims to be a contribution to the discussion on responsibilities and services within emerging digital research data infrastructures and the fundamental issues in sustainability of research software engineering, concluding that in order to truly cater to user needs across the research data lifecycle, we still need to bridge the gap between discipline-specific research methods in the process of digitalisation and generic digital research data management approaches.
This paper describes the development of a systematic approach to the creation, management and curation of linguistic resources, particularly spoken language corpora. It also presents first steps towards a framework for continuous quality control to be used within external research projects by non-technical users, and discusses various domain- and discipline-specific problems and individual solutions. The creation of spoken language corpora is a time-consuming and costly process, and the created resources often represent intangible cultural heritage, containing recordings of, for example, extinct languages or historical events. Since high-quality resources are needed to enable re-use in as many future contexts as possible, researchers need to be provided with the necessary means for quality control. We believe that this includes methods and tools adapted to Humanities researchers as non-technical users, and that these methods and tools need to be developed to support the existing tasks and goals of research projects.
Towards Comprehensive Definitions of Data Quality for Audiovisual Annotated Language Resources
(2020)
Though digital infrastructures such as CLARIN have been successfully established and now provide large collections of digital resources, the lack of widely accepted standards for data quality and documentation still makes re-use of research data a difficult endeavour, especially for more complex resource types. The article gives a detailed overview over relevant characteristics of audiovisual annotated language resources and reviews possible approaches to data quality in terms of their suitability for the current context. Conclusively, various strategies are suggested in order to arrive at comprehensive and adequate definitions of data quality for this particular resource type.
In this article, we describe a user support solution for the digital humanities. As a case study, we trace the development of the CLARIN-D Helpdesk from 2013 to the current support solution, which has been extended to several other CLARIN-related software projects and to the DARIAH-ERIC. Furthermore, we describe the path towards a common support platform for CLARIAH-DE, which is currently in its final phase. We hope to further expand the helpdesk in the following years so that it can act as a hub for user support and a central knowledge resource for the digital humanities, not only in the German-speaking area but also across Europe and perhaps, at some point, worldwide.
Studying Lexical Dynamics and Language Change via Generalized Entropies: The Problem of Sample Size
(2020)
Recently, it was demonstrated that generalized entropies of order α, where α is a free parameter, offer novel and important opportunities to quantify the similarity of symbol sequences. Varying this parameter makes it possible to magnify differences between different texts at specific scales of the corresponding word frequency spectrum. For the analysis of the statistical properties of natural languages, this is especially interesting, because textual data are characterized by Zipf’s law, i.e., there are very few word types that occur very often (e.g., function words expressing grammatical relationships) and many word types with a very low frequency (e.g., content words carrying most of the meaning of a sentence). Here, this approach is systematically and empirically studied by analyzing the lexical dynamics of the German weekly news magazine Der Spiegel (consisting of approximately 365,000 articles and 237,000,000 words that were published between 1947 and 2017). We show that, analogous to most other measures in quantitative linguistics, similarity measures based on generalized entropies depend heavily on the sample size (i.e., text length). We argue that this makes it difficult to quantify lexical dynamics and language change and show that standard sampling approaches do not solve this problem. We discuss the consequences of the results for the statistical analysis of languages.
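One standard member of the family of generalized entropies of order α is the Rényi entropy. The sketch below only illustrates how α reweights frequent versus rare word types; it is not a reconstruction of the paper's exact similarity measure.

```python
import math
from collections import Counter

def renyi_entropy(probs, alpha):
    """Generalized (Rényi) entropy of order alpha.

    alpha < 1 emphasizes rare types, alpha > 1 emphasizes frequent
    types; alpha = 1 is treated as the Shannon limit.
    """
    if alpha == 1:  # Shannon entropy as the limiting case
        return -sum(p * math.log(p) for p in probs if p > 0)
    return math.log(sum(p ** alpha for p in probs if p > 0)) / (1 - alpha)

def word_probs(tokens):
    """Relative frequencies of the word types in a token list."""
    counts = Counter(tokens)
    n = len(tokens)
    return [c / n for c in counts.values()]

tokens = "the cat sat on the mat the end".split()
for a in (0.5, 1, 2):
    print(a, round(renyi_entropy(word_probs(tokens), a), 3))
```

For a uniform distribution all orders coincide (log of the number of types); for skewed, Zipf-like distributions the entropy decreases as α grows, which is what makes α useful as a magnifying glass on different parts of the frequency spectrum.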
The coronavirus pandemic may be the largest crisis the world has had to face since World War II. It does not come as a surprise that it is also having an impact on language as our primary communication tool. In this short paper, we present three inter-connected resources that are designed to capture and illustrate these effects on a subset of the German language: An RSS corpus of German-language newsfeeds (with freely available untruncated frequency lists), a continuously updated HTML page tracking the diversity of the vocabulary in the RSS corpus and a Shiny web application that enables other researchers and the broader public to explore the corpus in terms of basic frequencies.
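As a rough illustration of the kind of data these resources expose, an untruncated frequency list and a simple vocabulary-diversity measure can be computed from tokenised feed items. The headlines below are invented stand-ins for the RSS corpus.

```python
from collections import Counter

def frequency_list(tokens):
    """Untruncated frequency list: every type with its count,
    most frequent first (ties keep first-seen order)."""
    return Counter(tokens).most_common()

def type_token_ratio(tokens):
    """A basic vocabulary-diversity measure (sensitive to text
    length, so only comparable across equally sized samples)."""
    return len(set(tokens)) / len(tokens)

# Invented newsfeed headlines, standing in for the RSS corpus.
day1 = "neue regeln für die maskenpflicht in der schule".split()
day2 = "maskenpflicht in der schule bleibt bestehen".split()
print(frequency_list(day1 + day2)[:3])
print(round(type_token_ratio(day1 + day2), 2))
```

Tracking such figures per time slice is, in spirit, what the continuously updated diversity page does at a much larger scale.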
This paper presents the corpus-based lexicographical prototype that was developed within the framework of the project Lexik des gesprochenen Deutsch (=LeGeDe) as a thirdparty funded project. Research results regarding the information offered in dictionaries have shown that there is a necessity for information on spoken lexis and its interactional functions. The resulting LeGeDe-prototype is based on these needs and desiderata and is thus an innovative example for the adequate representation of spoken language in online dictionaries. It is available online since September 2019 (https://www.owid.de/legede/). In the following sections, after first focusing on the presentation of the project’s goals, the data basis, the intended end user, and the applied methods, we will illustrate the microstructure of the prototype and the information provided in a dictionary entry based on the lemma eben. Finally, we will summarize innovative aspects that are important for the implementation of such a resource.
Are borrowed neologisms accepted more slowly into the German language than German words resulting from the application of word formation rules? This study addresses this question by focusing on two possible indicators for the acceptance of neologisms: a) frequency development of 239 German neologisms from the 1990s (loanwords as well as new words resulting from the application of word formation rules) in the German reference corpus DeReKo and b) frequency development in the use of pragmatic markers (‘flags’, namely quotation marks and phrases such as sogenannt ‘so-called’) with these words. In the second part of the article, a psycholinguistic approach to evaluating the (psychological) status of different neologisms and non-words in an experimentally controlled study and plans to carry out interviews in a field test to collect speakers’ opinions on the acceptance of the analysed neologisms are outlined. Finally, implications for the lexicographic treatment of both types of neologisms are discussed.
We present a new resource for German causal language, with annotations in context for verbs, nouns and adpositions. Our dataset includes 4,390 annotated instances for more than 150 different triggers. The annotation scheme distinguishes three different types of causal events (CONSEQUENCE, MOTIVATION, PURPOSE). We also provide annotations for semantic roles, i.e. the cause and effect of the causal event, as well as the actor and affected party, if present. In the paper, we present inter-annotator agreement scores for our dataset and discuss problems of annotating causal language. Finally, we present experiments in which we frame causal annotation as a sequence labelling problem and report baseline results for the prediction of causal arguments and for predicting different types of causation.
This paper presents experiments on sentence boundary detection in transcripts of spoken dialogues. Segmenting spoken language into sentence-like units is a challenging task, due to disfluencies, ungrammatical or fragmented structures and the lack of punctuation. In addition, one of the main bottlenecks for many NLP applications for spoken language is the small size of the training data, as the transcription and annotation of spoken language is by far more time-consuming and labour-intensive than processing written language. We therefore investigate the benefits of data expansion and transfer learning and test different ML architectures for this task. Our results show that data expansion is not straightforward and even data from the same domain does not always improve results. They also highlight the importance of modelling, i.e. of finding the best architecture and data representation for the task at hand. For the detection of boundaries in spoken language transcripts, we achieve a substantial improvement when framing the boundary detection problem as a sentence pair classification task, as compared to a sequence tagging approach.
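The reframing behind the reported improvement, turning each candidate boundary position into a left-context/right-context pair with a binary label, can be sketched as follows. The windowing scheme and the toy transcript are illustrative assumptions, not the paper's exact setup.

```python
def boundary_instances(tokens, boundaries, window=3):
    """Frame boundary detection as pair classification: for every
    position between two tokens, emit the left and right context
    windows plus a binary label (True = sentence boundary here).

    `boundaries` is the set of gold inter-token positions, where
    position i lies between tokens[i-1] and tokens[i].
    """
    instances = []
    for i in range(1, len(tokens)):
        left = " ".join(tokens[max(0, i - window):i])
        right = " ".join(tokens[i:i + window])
        instances.append((left, right, i in boundaries))
    return instances

# Toy transcript without punctuation; gold boundary after "weiss".
toks = "ich weiss das war gestern".split()
for left, right, is_boundary in boundary_instances(toks, {2}, window=2):
    print(f"{left!r} | {right!r} -> {is_boundary}")
```

Each pair can then be fed to a sentence-pair classifier instead of tagging the token stream position by position.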
We present a fine-grained NER annotation scheme with 30 labels and apply it to German data. Building on the OntoNotes 5.0 NER inventory, our scheme is adapted for a corpus of transcripts of biographic interviews by adding categories for AGE and LAN(guage) and also adding label classes for various numeric and temporal expressions. Applying the scheme to the spoken data as well as to a collection of teaser tweets from newspaper sites, we can confirm its generality for both domains, also achieving good inter-annotator agreement. We also show empirically how our inventory relates to the well-established 4-category NER inventory by re-annotating a subset of the GermEval 2014 NER coarse-grained dataset with our fine-grained label inventory. Finally, we use a BERT-based system to establish some baselines for NER tagging on our two new datasets. Overall results in in-domain testing are quite high on the two datasets, near what was achieved for the coarse inventory on the CoNLL-2003 data. Cross-domain testing produces much lower results due to the severe domain differences.
The paper presents a discussion on the main linguistic phenomena of user-generated texts found in web and social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework. Given on the one hand the increasing number of treebanks featuring user-generated content, and its somewhat inconsistent treatment in these resources on the other, the aim of this paper is twofold: (1) to provide a short, though comprehensive, overview of such treebanks - based on available literature - along with their main features and a comparative analysis of their annotation criteria, and (2) to propose a set of tentative UD-based annotation guidelines, to promote consistent treatment of the particular phenomena found in these types of texts. The main goal of this paper is to provide a common framework for those teams interested in developing similar resources in UD, thus enabling cross-linguistic consistency, which is a principle that has always been in the spirit of UD.
I’ve got a construction looks funny – representing and recovering non-standard constructions in UD
(2020)
The UD framework defines guidelines for a crosslingual syntactic analysis in the framework of dependency grammar, with the aim of providing a consistent treatment across languages that not only supports multilingual NLP applications but also facilitates typological studies. Until now, the UD framework has mostly focussed on bilexical grammatical relations. In the paper, we propose to add a constructional perspective and discuss several examples of spoken-language constructions that occur in multiple languages and challenge the current use of basic and enhanced UD relations. The examples include cases where the surface relations are deceptive, and syntactic amalgams that either involve unconnected subtrees or structures with multiply-headed dependents. We argue that a unified treatment of constructions across languages will increase the consistency of the UD annotations and thus the quality of the treebanks for linguistic analysis.
The sentiment polarity of an expression (whether it is perceived as positive, negative or neutral) can be influenced by a number of phenomena, foremost among them negation. Apart from closed-class negation words like no, not or without, negation can also be caused by so-called polarity shifters. These are content words, such as verbs, nouns or adjectives, that shift polarities in their opposite direction, e. g. abandoned in “abandoned hope” or alleviate in “alleviate pain”. Many polarity shifters can affect both positive and negative polar expressions, shifting them towards the opposing polarity. However, other shifters are restricted to a single shifting direction. Recoup shifts negative to positive in “recoup your losses”, but does not affect the positive polarity of fortune in “recoup a fortune”. Existing polarity shifter lexica only specify whether a word can, in general, cause shifting, but they do not specify when this is limited to one shifting direction. To address this issue we introduce a supervised classifier that determines the shifting direction of shifters. This classifier uses both resource-driven features, such as WordNet relations, and data-driven features like in-context polarity conflicts. Using this classifier we enhance the largest available polarity shifter lexicon.
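How such a direction-aware lexicon would be applied downstream can be sketched with a toy lookup. The lexicon entries and polarity assignments below are simplified illustrations, not the actual resource described in the paper.

```python
# Toy direction-aware shifter lexicon: the value records which prior
# polarities a shifter can flip (entries are illustrative only).
SHIFTERS = {
    "abandoned": {"pos", "neg"},   # shifts both directions
    "alleviate": {"pos", "neg"},
    "recoup": {"neg"},             # only shifts negative -> positive
}
POLAR = {"hope": "pos", "pain": "neg", "losses": "neg", "fortune": "pos"}

def shifted_polarity(shifter, noun):
    """Polarity of `shifter + noun`: flip the noun's prior polarity
    only if the shifter covers that shifting direction."""
    prior = POLAR[noun]
    if prior in SHIFTERS.get(shifter, set()):
        return "neg" if prior == "pos" else "pos"
    return prior

print(shifted_polarity("recoup", "losses"))   # negative prior, flipped
print(shifted_polarity("recoup", "fortune"))  # positive prior, unaffected
print(shifted_polarity("abandoned", "hope"))  # positive prior, flipped
```

The classifier described in the paper fills in exactly the direction sets above, using WordNet-based and in-context polarity-conflict features.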
Older adults are often exposed to elderspeak, a specialized speech register linked with negative outcomes. However, previous research has mainly been conducted in nursing homes without considering multiple contextual conditions. Based on a novel contextually driven framework, we examined elderspeak in an acute general versus a geriatric German hospital setting. Individual-level information such as cognitive impairment (CI) and audio-recorded data from care interactions between 105 older patients (M = 83.2 years; 49% with severe CI) and 34 registered nurses (M = 38.9 years) were assessed. Psycholinguistic analyses were based on manual coding (k = .85 to k = .97) and computer-assisted procedures. First, diminutives (61%), collective pronouns (70%), and tag questions (97%) were detected. Second, patients’ functional impairment emerged as an important factor for elderspeak. Our study suggests that functional impairment may be a more salient trigger of stereotype activation than CI and that elderspeak deserves more attention in acute hospital settings.
This chapter describes the resources that speakers of Polish use when recruiting assistance and collaboration from others in everyday social interaction. The chapter draws on data from video recordings of informal conversation in Polish, and reports language-specific findings generated within a large-scale comparative project involving eight languages from five continents (see other chapters of this volume). The resources for recruitment described in this chapter include linguistic structures from across the levels of grammatical organization, as well as gestural and other visible and contextual resources of relevance to the interpretation of action in interaction. The presentation of categories of recruitment, and elements of recruitment sequences, follows the coding scheme used in the comparative project (see Chapter 2 of the volume). This chapter extends our knowledge of the structure and usage of Polish with detailed attention to the properties of sequential structure in conversational interaction. The chapter is a contribution to an emerging field of pragmatic typology.
In informal interaction, speakers rarely thank a person who has complied with a request. Examining data from British English, German, Italian, Polish, and Telugu, we ask when speakers do thank after compliance. The results show that thanking treats the other’s assistance as going beyond what could be taken for granted in the circumstances. Coupled with the rareness of thanking after requests, this suggests that cooperation is to a great extent governed by expectations of helpfulness, which can be long-standing, or built over the course of a particular interaction. The higher frequency of thanking in some languages (such as English or Italian) suggests that cultures differ in the importance they place on recognizing the other’s agency in doing as requested.
Entity framing is the selection of aspects of an entity to promote a particular viewpoint towards that entity. We investigate entity framing of political figures through the use of names and titles in German online discourse, enhancing current research in entity framing through titling and naming that concentrates on English only. We collect tweets that mention prominent German politicians and annotate them for stance. We find that the formality of naming in these tweets correlates positively with their stance. This confirms sociolinguistic observations that naming and titling can have a status-indicating function and suggests that this function is dominant in German tweets mentioning political figures. We also find that this status-indicating function is much weaker in tweets from users that are politically left-leaning than in tweets by right-leaning users. This is in line with observations from moral psychology that left-leaning and right-leaning users assign different importance to maintaining social hierarchies.
This paper studies practices of indexing discrepant assumptions accomplished by turn-constructional units with ich dachte ('I thought') in German talk-in-interaction. Building on the analysis of 141 instances from the corpus FOLK, we identify three sequential environments in which ich dachte is used to index that an assumption which a speaker (has) held contrasts with some other, contextually salient assumption. We show that practices which have been studied for English I thought are also routinely used in German: ich dachte is a means to manage epistemic incongruencies and to contrast an incorrect with a correct assumption in narratives. In addition, ich dachte is also used to account for the speaker's own prior actions which may have looked problematic because they built on misunderstandings which the speaker only discovered later. Moreover, ich dachte-practices may also be used to create comic effects by reporting an earlier, absurd assumption. The practices are discussed with regard to their role in regaining common ground, in managing relationships, in maintaining the identity of a rational actor, and in terms of their exploitation for other conversational interests. Special attention is paid to how co-occurring linguistic features, and sequential and pragmatic factors, account for local interpretations of ich dachte.
This article deals with narratives of traumatic experiences of parental violence in childhood, told by adult narrators in the context of clinical adult attachment interviews. The study rests on a corpus of interviews with 20 patients suffering from fibromyalgia, who were interviewed in the context of psychodynamic psychotherapy. Nine of the patients reported repeated experiences of parental violence. The article focuses on extracts from two interviews, which provide for a maximal contrast concerning the practices of telling experiences of violence and which are ‘clear cases’ of the practices that are characteristic of the whole corpus. The main differences between the different ways of telling concern:
• With respect to the ascription of guilt and responsibility, parental violence is portrayed as legitimate pedagogic action versus as evil-minded and culpable, lacking any rational justification.
• With respect to the process of the telling, we find narrative trajectories over which an initial vague gloss is increasingly unpacked by reports of highly violent actions versus narratives in which violence is overtly stated and morally ascribed from its very first mention.
Editorial
(2020)
Designed as a contribution to contrastive linguistics, the present volume brings the comparison of German with its closest neighbour, Dutch, and other Germanic relatives like English, Afrikaans, and the Scandinavian languages up to date. It takes its inspiration from the idea of a "Germanic Sandwich", i.e. the hypothesis that sets of genetically related languages diverge in systematic ways in diverse domains of the linguistic system. Its contributions set out to test this approach against new phenomena or data from synchronic, diachronic and, for the first time in a Sandwich-related volume, psycholinguistic perspectives. With topics ranging from nickname formation to the IPP (aka 'Ersatzinfinitiv'), from the grammaticalisation of the definite article to /s/-retraction, and from the role of verb-second order in the acquisition of L2 English to the psycholinguistics of gender, the volume appeals to students and specialists in modern and historical linguistics, psycholinguistics, translation studies, language pedagogy and cognitive science, providing a wealth of fresh insights into the relationships of German with its closest relatives while highlighting the potential inherent in the integration of different methodological traditions.
In the present article we argue that all communication is medial in the sense that every human sign-based interaction is shaped by medial aspects from the outset. We propose a dynamic, semiotic concept of media that focuses on the process-related aspect of mediality, and we test the applicability of this concept using as an example the second presidential debate between Clinton and Trump in 2016. The analysis shows in detail how the sign processing during the debate is continuously shaped by structural aspects of television and specific traits of political communication in television. This includes how the camerawork creates meaning and how the protagonists both use the affordances of this special mediality. Therefore, it is not adequate in our view to separate the technical aspects of the medium, the ‘hardware’, from the processual aspects and the structural conditions of communication. While some aspects of the interaction are directly constituted by the medium, others are more indirectly shaped and influenced by it, especially by its institutional dimension – we understand them as second-order media effects. The whole medial procedure with its specific mediality is a necessary, but not a sufficient condition of meaning-making. We distinguish the medial procedure from the semiotic modes employed, the language games played and the competence of the players involved.
Content
1 Substituto - A Synchronous Educational Language Game for Simultaneous Teaching and Crowdsourcing
Marianne Grace Araneta, Gülsen Eryigit, Alexander König, Ji-Ung Lee, Ana Luís, Verena Lyding, Lionel Nicolas, Christos Rodosthenous and Federico Sangati
2 The Teacher-Student Chatroom Corpus
Andrew Caines, Helen Yannakoudakis, Helena Edmondson, Helen Allen, Pascual Pérez-Paredes, Bill Byrne and Paula Buttery
3 Polygloss - A conversational agent for language practice
Etiene da Cruz Dalcol and Massimo Poesio
4 Show, Don’t Tell: Visualising Finnish Word Formation in a Browser-Based Reading Assistant
Frankie Robertson
In this paper we investigate the problem of grammar inference from a different perspective. The common approach is to try to infer a grammar directly from example sentences, which either requires a large training set or suffers from bad accuracy. We instead view it as a problem of grammar restriction or sub-grammar extraction. We start from a large-scale resource grammar and a small number of examples, and find a sub-grammar that still covers all the examples. To do this we formulate the problem as a constraint satisfaction problem, and use an existing constraint solver to find the optimal grammar. We have conducted experiments with English, Finnish, German, Swedish and Spanish, which show that 10–20 examples are often sufficient to learn an interesting domain grammar. Possible applications include computer-assisted language learning, domain-specific dialogue systems, computer games, Q/A-systems, and others.
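The core idea of sub-grammar extraction can be sketched as a combinatorial search: the paper uses a large-scale resource grammar and a real constraint solver, whereas the grammar rules, example parses, and brute-force search below are invented purely for illustration.

```python
from itertools import combinations

# Each example sentence is represented by the alternative sets of grammar
# rules that can derive it (its parses). We search for the smallest rule
# subset that still derives at least one parse of every example.
RULES = {"S->NP VP", "NP->Det N", "NP->PN", "VP->V NP", "VP->V", "PP->P NP"}
EXAMPLE_PARSES = [
    [{"S->NP VP", "NP->PN", "VP->V"}],                  # e.g. "John sleeps"
    [{"S->NP VP", "NP->PN", "VP->V NP", "NP->Det N"}],  # e.g. "John sees the cat"
]

def extract_subgrammar(rules, example_parses):
    """Return a smallest rule subset covering at least one parse per example."""
    rules = sorted(rules)
    for size in range(1, len(rules) + 1):
        for subset in combinations(rules, size):
            chosen = set(subset)
            if all(any(parse <= chosen for parse in parses)
                   for parses in example_parses):
                return chosen
    return None
```

In this toy setting the unused rule "PP->P NP" is dropped from the extracted sub-grammar; a constraint solver performs the same kind of minimization without enumerating all subsets.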
Preface
(2020)
We introduce a novel scientific document processing task for making previously inaccessible information in printed paper documents available to automatic processing. We describe our data set of scanned documents and data records from the biological database SABIO-RK, provide a definition of the task, and report findings from preliminary experiments. Rigorous evaluation proved challenging due to a lack of gold-standard data and a difficult notion of correctness. Qualitative inspection of results, however, showed the feasibility and usefulness of the task.
pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java- and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.
The theme of the AFinLA 2020 Yearbook Methodological turns in applied language studies is discussed in this introductory article from three interrelated perspectives, variously addressed in the three plenary presentations at the AFinLA Autumn Symposium 2019 as well as in the thirteen contributions to the yearbook. In the first set of articles presented, the authors examine the role and impact of technological development on the study of multimodal digital and non-digital contexts and discourses and ensuing new methods. The second set of studies in the yearbook revisits issues of language proficiency, critically discussing relevant concepts and approaches. The third set of articles explores participation and participatory research approaches, reflecting on the roles of the researcher and the researched community.
Having the necessary skills for staying in contact with friends and relatives through digital devices is crucial in today’s world. As the current COVID-19 pandemic shows, this holds especially true for the elderly. Being quarantined and restricted from physically meeting people, various communication technologies are more important than ever for staying social and informed on current events. In nursing homes, staff members are now finding new ways for staying in touch with family members by assisting residents in making video calls with mobile devices.
But what if elderly people cannot rely on personal assistance for accessing these alternative means of communication? This raises the general question of how older people can and do learn to use such technologies. Although the internet is full of guides and instructional videos on how to use smartphones or tablets, they are a cold comfort to someone who may not even know what an internet browser is.
Especially for digital newcomers, the tried and true method of face-to-face instruction is invaluable. While many older people turn to their children or grandchildren for help in all things digital, courses specifically tailored for elderly users are also increasingly popular.
More and more governmental initiatives and associations indeed acknowledge the already existing interest of elderly citizens in digital tools and their growing need to receive customized training (e.g. “SeniorSurf” and “Kansalaisen digitaidot” in Finland or “Silver Tipps” in Germany). For a researcher of social interaction, these courses can also provide a valuable window for discovering what it looks and sounds like to learn to use essential but sometimes alien technologies.
To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.
When humans have a conversation with one another, they generally take turns speaking one after the other without overlapping each other's talk or leaving silence between turns for long stretches of time. Previous research has shown that conversation is a structured practice following rules that help interlocutors to manage the flow of conversation interactively. While at the beginning of a conversation it remains open who will speak when about what and for how long, interlocutors regulate the flow of conversation as it unfolds. One basic set of rules that interlocutors operate with governs the allocation of speaking turns, with the central rule stating that whoever starts speaking first at a point in time when speaker change becomes relevant has the rights and obligations to produce the next turn. The organization of turn allocation, therefore, is one reason for conversational turn taking to be so remarkably fast, with the beginnings of turns most often being quite accurately aligned with the ends of the previous turns. Observations of this outstanding speed of turn taking gave rise to a number of questions concerning language processing in conversational situations. The studies presented in this thesis investigate some of these questions from the perspective of the current listener preparing to be the next speaker who will respond to the current turn.
The study presented in Chapter 2 investigates when next speakers begin to plan their own turn with respect to two points in time, (i) the moment when the incoming turn’s message becomes clear enough to make response planning possible and (ii) the moment when the incoming turn terminates. Results of previous studies were inconclusive about the timing of language planning in conversation, with evidence in favour of both late and early response planning. Furthermore, previous studies presented both evidence as well as counter evidence indicating that response planning depends or does not depend on an accurate prediction of the timing of the incoming turn’s end. The study presented here makes use of a novel experimental paradigm which includes a dialogic task that participants need to fulfil in response to critical utterances by a confederate. These critical utterances were structured, on the one hand, so that their message became clear either only at the end of the turn or before the end of the turn, and, on the other hand, so that it was either predictable or not predictable when exactly the turn would end. Participants' eye-movements as well as their response latencies indicated that they always planned their next turn as early as possible, irrespective of the predictability of the incoming turn’s end. The presented results provide evidence in favour of models of turn taking that predict speech planning to happen in overlap with the incoming turn.
Having established that next speakers begin to plan their turn in overlap, the study presented in Chapter 3 goes into more detail, investigating to what depth language planning progresses while the incoming turn is still unfolding. To this end, a number of psycholinguistic paradigms were combined. In the study’s main experiment, participants had to fulfil a switch-task in which they switched from picture naming in response to an auditorily presented question to making a lexical decision. By manipulating the relatedness of the word for lexical decision with the picture that was prepared to be named before the task-switch it was possible to draw inferences on which processing stages were entered during the speech production process in overlap with the incoming turn. Participants’ behavioural responses in the lexical decision task revealed that they entered the stage of phonological encoding while the incoming turn was still unfolding, showing that planning in overlap is not limited to conceptual preparation but includes all sub-processes of formulation.
Given that speech production regularly enters the stages of formulation in overlap with the incoming turn, as shown in Chapters 2 and 3, the question arises whether planning the next turn in overlap is cognitively more demanding than during the gap between turns. This question is approached in the study presented in Chapter 4 by measuring pupillometric responses of participants in a dialogic task. An increase in pupil diameter during a cognitive task is indicative of increased processing load, and pupillometric responses to planning in overlap with the incoming turn were found to be greater than responses to planning in the gap between turns. These results show that planning in overlap is more demanding than planning during the gap, even though it is highly practiced by speakers.
After Chapters 2 to 4 investigated the timing and mechanisms of speech planning in conversation, Chapter 5 turns towards the timing of articulation of a planned turn, asking what sources of information next speakers use to time the articulation of a planned utterance to start closely after the incoming turn comes to an end. In this Chapter’s study, participants taking turns with a confederate responded to utterances containing or not containing different cues to the location of the incoming turn’s end. Participants made use of lexical and turn-final intonational cues, but not of turn-initial intonational cues, responding faster when the relevant cues were present than when they were not present. These results show that the timing of turn initiation in next speakers depends on the recognition of the incoming turn’s point of completion and not merely on the progress in planning the next turn.
All evidence presented in Chapters 2 to 5 is summed up and bundled together in a cognitive model of turn taking, which is presented in Chapter 6. This model assumes, centrally, that the planning of a turn and the timing of its articulation are separate cognitive processes that run in parallel in any next speaker during conversation. Planning generally starts as early as possible, often in overlap with the incoming turn, while the timing of articulation depends on the next speaker’s level of certainty that speaker change has become relevant at a particular moment, with a number of cues to the end of the incoming turn leading to an increase of certainty. Next turns are assumed to often be planned down to fully formulated utterance plans including their phonological form as early as possible on the basis of anticipations of the incoming turn’s message, which are created with the help of the general and situational knowledge about the world, the current speaker and her intentions, as well as the input that has been received so far. The level of certainty that speaker change becomes relevant rises or decreases as lexico-syntactic, prosodic, and pragmatic projections about the development of the current turn are fulfilled or not fulfilled. As the incoming turn progresses towards its end as was projected by the current listener, he becomes certain that speaker change becomes relevant and will initiate articulation of the prepared next turn. Viewing these two processes, planning a next turn and timing of its articulation, as separate makes it possible to explain the observable fast timing of turn taking while still modelling the allocation of turns as interactionally managed by interlocutors — a considerable advantage of the presented model compared to more traditional perspectives on turn taking and conversation.
EFNIL, the European Federation of National Institutions for Language, promotes the standard languages and the linguistic diversity of the European countries as an essential characteristic of their cultural diversity and wealth. The 17th annual conference of EFNIL in Tallinn dealt with the relation between language and economy.
• Language politics often have economic intentions, the language use of the individual is embedded in economic conditions, languages seem to differ in their economic value. In recent years, economists and sociolinguists have developed models of describing these interdependencies.
• The interaction in multilingual settings needs professional handling. There are traditional instances such as language teaching or translation and new professional fields of the digital age such as multilingual databases. Lots of economic needs and opportunities appear in this field.
• Digitization and societal diversity are two elements leading to more successful interaction, assisted by the use of automatic everyday translation, the development of plain language etc.
This volume presents an extensive overview of the interplay of language and economy.
The lexicography of German
(2020)
This chapter discusses the main dictionaries of the German language as it is spoken and written in Germany, and also German as it is spoken and written in Austria, Switzerland, the eastern fringes of Belgium, and South Tyrol. It also briefly describes Pennsylvania German. Corpora and other language resources used in German dictionary-making are also presented. Finally, there is a discussion of some current issues in German lexicography, as well as future prospects.
Despite the importance of the agent role for language grammar and processing, its definition and features are still controversially discussed in the literature on semantic roles. Moreover, diagnostic tests to dissociate agentive from non-agentive roles are typically applied with qualitative introspection data. We investigated whether quantitative acceptability ratings obtained with a well-established agentivity test, the DO-cleft, provide evidence for the feature-based prototype account of Dowty (1991. Thematic proto-roles and argument selection. Language 67(3). 547–619), which postulates that agentivity increases with the number of agentive features that a role subsumes. We used four different intransitive verb classes in German and collected acceptability judgements from non-expert native speakers of German. Our results show that sentence acceptability increases linearly with the number of agentive features and, hence, agentivity. Moreover, our findings confirm that sentience belongs to the group of proto-agent features. In summary, this suggests that a multidimensional account including a specific mechanism for role prototypicality (feature accumulation) successfully captures gradient acceptability clines. Quantitative acceptability estimates are a meaningful addition to linguistic theorizing.
The 12th Web as Corpus workshop (WAC-XII) looks at the past, present, and future of web corpora, given that large web corpora are nowadays provided mostly by a few major initiatives and companies, and the diversity of the early years appears to have faded slightly. We also acknowledge that alternative sources of data (such as data from Twitter and similar platforms) have emerged, some of them only available to large companies and their affiliates, such as linguistic data from social media and other forms of the deep web. At the same time, gathering interesting and relevant web data (web crawling) is becoming an ever more intricate task as the nature of the data offered on the web changes (for example the death of forums in favour of more closed platforms).
As immigration and mobility increase, so do interactions between people from different linguistic backgrounds. Yet while linguistic diversity offers many benefits, it also comes with a number of challenges. In seven empirical articles and one commentary, this Special Issue addresses some of the most significant language challenges facing researchers in the 21st century: the power language has to form and perpetuate stereotypes, the contribution language makes to intersectional identities, and the role of language in shaping intergroup relations. By presenting work that aims to shed light on some of these issues, the goal of this Special Issue is to (a) highlight language as integral to social processes and (b) inspire researchers to address the challenges we face. To keep pace with the world’s constantly evolving linguistic landscape, it is essential that we make progress toward harnessing language’s power in ways that benefit 21st century globalized societies.
This paper reports on recent developments within the European Reference Corpus EuReCo, an open initiative that aims at providing and using virtual and dynamically definable comparable corpora based on existing national, reference or other large corpora. Given the well-known shortcomings of other types of multilingual corpora such as parallel/translation corpora (shining-through effects, over-normalization, simplification, etc.) or web-based comparable corpora (covering only web material), EuReCo provides a unique linguistic resource offering new perspectives for fine-grained contrastive research on authentic cross-linguistic data, applications in translation studies and foreign language teaching and learning.
This study examines asymmetries between so-called inherent and contextual categories in relation to the morphological complexity of the nominal and verbal inflectional domain of languages. The observations are traced back to the influence of adult L2 learning in scenarios of intense language contact. A method for a simple comparison of the amount of inherent versus contextual categories is proposed and applied to the German-based creole language Unserdeutsch (Rabaul Creole German) in comparison to its lexifier language. The same procedure will be applied to two further language pairs. The grammatical systems of Unserdeutsch and other contact languages display a noticeable asymmetry regarding their structural complexity. An analysis of different kinds of evidence suggests that the key explanatory factor is the role of (adult) L2 acquisition in the history of a language, whereby languages with periods of widespread L2 acquisition tend to lose contextual features. This impression is reinforced by general tendencies in pidgin and creole languages. Beyond that, there seems to be a tendency for inherent categories to be more strongly associated with the verb, while contextual categories seem to be more strongly associated with the noun. This leads to an asymmetry in categorical complexity between the noun phrase and the verb phrase in languages that experienced periods of intense L2 learning.
Journal for language technology and computational linguistics. Special Issue on offensive language
(2020)
Recent years have seen a sharp increase in studies of offensive language (and related notions such as abusive language, hate speech, verbal aggression etc.) as well as of patterns of online behavior such as cyberbullying and trolling. Multiple efforts have been launched for the exploration of computational approaches and the establishment of benchmark datasets for various languages (Basile et al. (2019), Wiegand et al. (2018), Zampieri et al. (2019)).