Comprehending conditional statements is fundamental for hypothetical reasoning about situations. However, the online comprehension of conditional statements containing different conditional connectives is still debated. We report two self-paced reading experiments on German conditionals presenting the conditional connectives wenn (‘if’) and nur wenn (‘only if’) in identical discourse contexts. In Experiment 1, participants read a conditional sentence followed by the confirmed antecedent p and the confirmed or negated consequent q. The final, critical sentence was presented word by word and contained a positive or negative quantifier (ein/kein ‘one/no’). Reading times of the two quantifiers did not differ between the two conditional connectives. In Experiment 2, presenting a negated antecedent, reading times for the critical positive quantifier (ein) did not differ between conditional connectives, while reading times for the negative quantifier (kein) were shorter for nur wenn than for wenn. The results show that comprehenders form distinct predictions about discourse continuations due to differences in the lexical semantics of the tested conditional connectives, shedding light on the role of conditional connectives in the online interpretation of conditionals in general.
Standards in CLARIN
(2022)
This chapter looks at a fragment of the ongoing work of the CLARIN Standards Committee (CSC) on producing a shared set of recommendations on standards, formats, and related best practices supported by the CLARIN infrastructure and its participating centres. What might at first glance seem to be a straightforward goal has over the years proven to be rather complex, reflecting the robustness and heterogeneity of the emerging distributed digital research infrastructure and the various disciplines and research traditions of the language-based humanities that it serves and represents; part of the chapter therefore reviews the various initiatives and proposals that strove to produce helpful standards-related guidance. The focus turns next to a subtask initiated in late 2019, its scope narrowed to one of the core activities and responsibilities of CLARIN backbone centres, namely the provision of data deposition services. Centres are obligated to publish their recommendations concerning the repertoire of data formats that are best suited for their research profiles. We look at how this requirement has been met by the particular centres and suggest that having centres maintain their information in the Standards Information System (SIS) is the way to improve on the current state of affairs.
This thesis is a corpus linguistic investigation of the language used by young German speakers online, examining lexical, morphological, orthographic, and syntactic features and changes in language use over time. The study analyses the language in the Nottinghamer Korpus deutscher YouTube-Sprache ("Nottingham corpus of German YouTube language", or NottDeuYTSch corpus), one of the first large corpora of German-language comments taken from the video-sharing website YouTube, built specifically for this project. The metadata-rich corpus comprises c.33 million tokens from more than 3 million comments posted underneath videos uploaded by mainstream German-language youth-orientated YouTube channels from 2008 to 2018.
The NottDeuYTSch corpus was created to enable corpus linguistic approaches to studying digital German youth language (Jugendsprache), having identified the need for more specialised web corpora (see Barbaresi 2019). The methodology for compiling the corpus is described in detail in the thesis to facilitate future construction of web corpora. The thesis is situated at the intersection of Computer‐Mediated Communication (CMC) and youth language, which have been important areas of sociolinguistic scholarship since the 1980s, and explores what we can learn from a corpus‐driven, longitudinal approach to (online) youth language. To do so, the thesis uses corpus linguistic methods to analyse three main areas:
1. Lexical trends and the morphology of polysemous lexical items. For this purpose, the analysis focuses on geil, one of the most iconic and productive words in youth language, and presents a longitudinal analysis, demonstrating that usage of geil has decreased, and identifies lexical items that have emerged as potential replacements. Additionally, geil is used to analyse innovative morphological productiveness, demonstrating how different senses of geil are used as a base lexeme or affixoid in compounding and derivation.
2. Syntactic developments. The novel grammaticalization of several subordinating conjunctions into both coordinating conjunctions and discourse markers is examined. The investigation is supported by statistical analyses that demonstrate an increase in the use of non‐standard syntax over the timeframe of the corpus and compares the results with other corpora of written language.
3. Orthography and the metacommunicative features of digital writing. This analysis identifies orthographic features and strategies in the corpus, e.g. the repetition of certain emoji, and develops a holistic framework to study metacommunicative functions, such as the communication of illocutionary force, information structure, or the expression of identities. The framework unifies previous research that had focused on individual features, integrating a wide range of metacommunicative strategies within a single, robust system of analysis.
By using qualitative and computational analytical frameworks within corpus linguistic methods, the thesis identifies emergent linguistic features in digital youth language in German and sheds further light on lexical and morphosyntactic changes and trends in the language of young people over the period 2008‐2018. The study has also further developed and augmented existing analytical frameworks to widen the scope of their application to orthographic features associated with digital writing.
This paper deals with different types of verbal complementation of the German verb verdienen. It focuses on constructions that have been undergoing a grammaticalization process and thus express deontic modality, as in Sie verdient geliebt zu werden (ʽShe deserves to be lovedʼ) and Sie verdient zu leben (ʽShe deserves to liveʼ) (Diewald, Dekalo & Czicza 2021). These constructions are connected to parallel complementation types with passive and active infinitives containing a correlate es, as in Sie verdient es, geliebt zu werden and Sie verdient es, zu leben, as well as finite clauses with the subordinator dass with and without correlative es, as in Sie verdient, dass sie geliebt wird and Sie verdient es, dass sie geliebt wird. This paper presents a close comparative investigation of these six types of constructions based on their relevant semantic and syntactic properties in terms of clause linkage (Lehmann 1988). We analyze the relevant data retrieved from the DWDS corpus of the 20th century and present an expanded grammaticalization path for verdienen-constructions. The finite complementation with dass is regarded as an example of a separate structural option called “elaboration”. Concerning the use of correlative es, it is shown that it does not have any substantial effect on the grammaticalization of modal verdienen-constructions.
Germany’s diverse history in the 20th century raises the question of how social upheavals were constituted in and through political discourse. By analysing basic concepts, the research network “The 20th century in basic concepts” (based at the Leibniz institutes IDS, ZfL, ZZF) aims to identify continuities and discontinuities in political and social discourse. In this way, historical sediments of the present are to be uncovered, and those challenges identified that emerged in the course of the 20th century and continue to shape political discourse to the present day.
This chapter explores possibilities and methods for conducting digital discourse analyses of National Socialist source texts. Digital technology is treated as a heuristic tool with which language use during National Socialism can be examined within larger source corpora. A theoretical section makes the general case for combining hermeneutic interpretation with broad corpus-based queries during the analysis process. This approach is illustrated with two empirical examples: using a corpus of Hitler and Goebbels speeches, the emergence and discursive elaboration of the National Socialist concept of "Lebensraum" is traced. Step by step, it is shown which analytical paths can be pursued by querying key texts, keywords, concordances, and collocations. The second example shows, on the basis of petitions addressed by the population to state and party authorities, how such sources can be manually annotated with a digital tool and subsequently analysed for patterns in language use.
CLARIN stands for “Common Language Resources and Technology Infrastructure”. In 2012 CLARIN ERIC was established as a legal entity with the mission to create and maintain a digital infrastructure to support the sharing, use, and sustainability of language data (in written, spoken, or multimodal form) available through repositories from all over Europe, in support of research in the humanities and social sciences and beyond. Since 2016 CLARIN has had the status of Landmark research infrastructure and currently it provides easy and sustainable access to digital language data and also offers advanced tools to discover, explore, exploit, annotate, analyse, or combine such datasets, wherever they are located. This is enabled through a networked federation of centres: language data repositories, service centres, and knowledge centres with single sign-on access for all members of the academic community in all participating countries. In addition, CLARIN offers open access facilities for other interested communities of use, both inside and outside of academia. Tools and data from different centres are interoperable, so that data collections can be combined and tools from different sources can be chained to perform operations at different levels of complexity. The strategic agenda adopted by CLARIN and the activities undertaken are rooted in a strong commitment to the Open Science paradigm and the FAIR data principles. This also enables CLARIN to express its added value for the European Research Area and to act as a key driver of innovation and contributor to the increasing number of industry programmes running on data-driven processes and the digitalization of society at large.
Action ascription can be understood from two broad perspectives. On one view, it refers to the ways in which actions constitute categories by which members make sense of their world, and forms a key foundation for holding others accountable for their conduct. On another view, it refers to the ways in which we accountably respond to the actions of others, thereby accomplishing sequential versions of meaningful social experience. In short, action ascription can be understood as matter of categorisation of prior actions or responding in ways that are sequentially fitted to prior actions, or both. In this chapter, we review different theoretical approaches to action ascription that have developed in the field, as well as the key constituents and resources of action ascription that have been identified in conversation analytic research, before going on to discuss how action ascription can itself be considered a form of social action.
While the role of intentions in the constitution of actions gives rise to complex and highly controversial questions, it appears to be indisputable that action ascription in interaction mostly does without any overt ascription of intention. Yet, sometimes participants explicitly ascribe intentions to their interlocutors in order to make sense of their prior actions. The chapter examines intention ascriptions in response to a partner’s adjacent prior turn using the German modal verb construction willst du/wollen Sie (do you want). The analysis focuses on the aspect of the prior action the intention ascription addresses (action type, projected next action, motive etc.), the action the intention ascription performs itself, and the next action it makes relevant for the prior speaker. It was found that intention ascriptions are used to clarify and intersubjectively ground the meaning of the prior turn, which seems otherwise underspecified, ambiguous or puzzling. Yet, they are also used to adumbrate criticism, e.g., that the prior turn projects a course of future actions which is considered to be inadequate, or to expose a concealed, problematic, allegedly “real” meaning of the prior turn.
This paper presents an algorithm and an implementation for efficient tokenization of texts of space-delimited languages based on a deterministic finite state automaton. Two representations of the underlying data structure are presented and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
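The DFA approach described above can be illustrated with a minimal sketch: two states (outside vs. inside a token), with transitions driven by the character class. This is only an illustration of the general technique for space-delimited languages, not the implementation evaluated in the paper.

```python
# Minimal sketch of DFA-style tokenization for a space-delimited language.
# Two states -- OUTSIDE a token and INSIDE a token -- with transitions
# driven by the character class (whitespace vs. everything else).

def dfa_tokenize(text):
    tokens = []
    start = None  # None = OUTSIDE; otherwise index where the current token began
    for i, ch in enumerate(text):
        if ch.isspace():
            if start is not None:      # INSIDE -> OUTSIDE: emit the token
                tokens.append(text[start:i])
                start = None
        elif start is None:            # OUTSIDE -> INSIDE: remember the start
            start = i
    if start is not None:              # flush a token running to the end
        tokens.append(text[start:])
    return tokens

print(dfa_tokenize("Ein  kleiner Test"))  # ['Ein', 'kleiner', 'Test']
```

A production tokenizer would extend the automaton with further character classes (punctuation, abbreviations, URLs), but the single-pass, constant-memory structure stays the same, which is what makes the approach fast.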
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that while not all tools are applicable in this setting, no compromises regarding quality need to be made.
This chapter will present lessons learned from CLARIN-D, the German CLARIN national consortium. Members of the CLARIN-D communities and of the CLARIN-D consortium have been engaged in innovative, data-driven, and community-based research, using language resources and tools in the humanities and neighbouring disciplines. We will present different use cases and users’ stories that demonstrate the innovative research potential of large digital corpora and lexical resources for the study of language change and variation, for language documentation, for literary studies, and for the social sciences. We will emphasize the added value of making language resources and tools available in the CLARIN distributed research infrastructure and will discuss legal and ethical issues that need to be addressed in the use of such an infrastructure. Innovative technical solutions for accessing digital materials still under copyright and for data mining such materials will be presented. We will outline the need for close interaction with communities of interest in the areas of curriculum development, data management, and training the next generation of digital humanities scholars. The importance of community-supported standards for encoding language resources and the practice of community-based quality control for digital research data will be presented as a crucial step toward the provisioning of high quality research data. The chapter will conclude with a discussion of important directions for innovative research and for supporting infrastructure development over the next decade and beyond.
Tok Pisin is a pidgin/creole language spoken since the late 19th century in most of the area that nowadays constitutes Papua New Guinea, where it emerged under German colonial rule. Unusually for a pidgin/creole, Tok Pisin is characterized by an extensive lexicographic history. The Tok Pisin Dictionary Collection at the Leibniz Institute for the German Language, described in this article, includes about fifty dictionaries. The collection forms the basis for the sketch of the history of Tok Pisin lexicography as part of colonial history presented here. The basic thesis is that in the history of Tok Pisin, lexicographic strategies, dictionary structures, and publication patterns reflect the interest (and disinterest) of various groups of colonial actors. Among these colonial actors, European scientists, Catholic missionaries, and the Australian and US militaries played important roles.
We present the use of count-based and predictive language models for exploring language use in the German Reference Corpus DeReKo. For collocation analysis along the syntagmatic axis we employ traditional association measures based on co-occurrence counts as well as predictive association measures derived from the output weights of skipgram word embeddings. For inspecting the semantic neighbourhood of words along the paradigmatic axis we visualize the high dimensional word embeddings in two dimensions using t-stochastic neighbourhood embeddings. Together, these visualizations provide a complementary, explorative approach to analysing very large corpora in addition to corpus querying. Moreover, we discuss count-based and predictive models w.r.t. scalability and maintainability in very large corpora.
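As an illustration of the count-based side of such an analysis, the following sketch computes pointwise mutual information (PMI) over co-occurrence counts in a small symmetric token window. The toy corpus and the choice of PMI are assumptions for demonstration; DeReKo-scale work uses a wider range of association measures and efficient indexing.

```python
import math
from collections import Counter

# Sketch of a count-based association measure: PMI over co-occurrences
# within a symmetric +/-window token window. Toy data, for illustration only.

def pmi_scores(tokens, window=2):
    unigrams = Counter(tokens)
    pairs = Counter()
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:                       # count each co-occurring neighbour
                pairs[(w, tokens[j])] += 1
    n, n_pairs = len(tokens), sum(pairs.values())
    return {
        (a, b): math.log2((c / n_pairs) / ((unigrams[a] / n) * (unigrams[b] / n)))
        for (a, b), c in pairs.items()
    }

toks = "strong tea strong coffee weak tea weak coffee".split()
scores = pmi_scores(toks)
```

A predictive analogue would replace the raw co-occurrence counts with association scores derived from skipgram output weights, as the abstract describes, while keeping the same ranking-and-inspection workflow.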
Preface
(2022)
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized, but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker’s transcription tier to be the basic tokenization layer, whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods by which speaker overlaps can be captured in the TEI-based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016) and how they can be queried by MTAS – an open source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions, and discuss their benefits and drawbacks.
This paper describes the TEI-based ISO standard 24624:2016 ‘Transcription of spoken language’ and other formats used within CLARIN for spoken language resources. It assesses the current state of support for the standard and the interoperability between these formats and with relevant tools and services. The main idea behind the paper is that a digital infrastructure providing language resources and services to researchers should also allow the combined use of resources and/or services from different contexts. This requires syntactic and semantic interoperability. We propose a solution based on the ISO/TEI format and describe the necessary steps for this format to work as an exchange format with basic semantic interoperability for spoken language resources across the CLARIN infrastructure and beyond.
Action ascription is an emergent process of mutual displays of understanding. Usually, the kind of action that is ascribed to a prior turn by a next action remains implicit. Sometimes, however, actions are overtly ascribed, for example, when speakers expose the use of strategies. This happens particularly in conflictual interaction, such as public debates or mediation talks. In these interactional settings, one of the speakers’ goals is to discredit their opponents in front of other participants or an overhearing audience. This chapter investigates different types of overt strategy ascriptions in a public mediation: exposing the opponent’s use of rhetorical devices, exposing the opponent’s use of false premises, and exposing that an opponent is telling only a half-truth. This chapter shows how speakers use ascriptions of acting strategically as accusations to disclose their opponents’ intentions and ‘truths’ that the opponents allegedly conceal and that are detrimental to their position.
The article addresses Solution-Oriented Questions (SOQs) as an interactional practice for relationship management in psychodiagnostic interviews. Therapeutic alliance results from the concordance of alignment, as willingness to cooperate regarding common goals, and of affiliation, as a relationship based upon trust. SOQs particularly allow for both: they are situated at the end of a troublesome topic area, which is linked to low agency on the patient’s side, and they reveal understanding of and interest in the patient. Following the paradigm of Conversation Analysis and German Gesprächsanalyse, this paper analyzes the design and functions of SOQs as a means for securing and enhancing the relationship in the process of therapy. Our data comprise 15 videotaped first interviews following the manual of the Operationalized Psychodynamic Diagnostics. The analyses refer to all SOQs found but will be illustrated by means of a single conversation.
The normative layer of CLARIN is, alongside the organizational and technical layers, an essential part of the infrastructure. It consists of the regulatory framework (statutory law, case law, authoritative guidelines, etc.), the contractual framework (licenses, terms of service, etc.), and ethical norms. Navigating the normative layer requires expertise, experience, and qualified effort. In order to advise the Board of Directors, a standing committee dedicated to legal and ethical issues, the CLIC, was created. Since its establishment in 2012, the CLIC has made considerable efforts to provide not only the BoD but also the general public with information and guidance. It has published many articles (both in proceedings of CLARIN conferences and in its own White Paper Series) and developed several LegalTech tools. It also runs a Legal Information Platform, where accessible information on various issues affecting language resources can be found.
The debate on the use of personal data in language resources usually focuses — and rightfully so — on anonymisation. However, this very same debate usually ends quickly with the conclusion that proper anonymisation would necessarily cause loss of linguistically valuable information. This paper discusses an alternative approach — pseudonymisation. While pseudonymisation does not solve all the problems (inasmuch as pseudonymised data are still to be regarded as personal data and therefore their processing should still comply with the GDPR principles), it does provide a significant relief, especially — but not only — for those who process personal data for research purposes. This paper describes pseudonymisation as a measure to safeguard rights and interests of data subjects under the GDPR (with a special focus on the right to be informed). It also provides a concrete example of pseudonymisation carried out within a research project at the Institute of Information Technology and Communications of the Otto von Guericke University Magdeburg.
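The core idea of consistent pseudonymisation (the same person always receives the same placeholder, so discourse-level references survive) can be sketched as follows. The names, the placeholder scheme, and the name-detection step are hypothetical illustrations, not the Magdeburg project's actual procedure; in practice the mapping table must be stored separately and protected, since it permits re-identification and the data remain personal data under the GDPR.

```python
import itertools

# Sketch of consistent pseudonymisation: every distinct name is mapped to a
# stable placeholder, so repeated mentions remain linkable in the corpus.
# Name detection is assumed to have happened already.

class Pseudonymizer:
    def __init__(self):
        self.mapping = {}                # name -> placeholder (the sensitive key)
        self._ids = itertools.count(1)

    def pseudonym(self, name):
        if name not in self.mapping:
            self.mapping[name] = f"PERSON_{next(self._ids):03d}"
        return self.mapping[name]

p = Pseudonymizer()
names = {"Anna", "Bert"}                 # hypothetical detected person names
tokens = ["Anna", "met", "Bert", ".", "Anna", "smiled", "."]
pseudonymised = [p.pseudonym(t) if t in names else t for t in tokens]
# Both mentions of "Anna" map to the same placeholder, preserving coreference.
```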
Ethical issues in Language Resources and Language Technology are often invoked, but rarely discussed. This is at least partly because little work has been done to systematize ethical issues and principles applicable in the fields of Language Resources and Language Technology. This paper provides an overview of ethical issues that arise at different stages of Language Resources and Language Technology development, from the conception phase through the construction phase to the use phase. Based on this overview, the authors propose a tentative taxonomy of ethical issues in Language Resources and Language Technology, built around five principles: Privacy, Property, Equality, Transparency and Freedom. The authors hope that this tentative taxonomy will facilitate ethical assessment of projects in the field of Language Resources and Language Technology, and structure the discussion on ethical issues in this domain, which may eventually lead to the adoption of a universally accepted Code of Ethics of the Language Resources and Language Technology community.
Between January 2020 and July 2021, many new words and phrases contributed to the expansion of the German vocabulary to enable communication under the new conditions that evolved during the Covid-19 pandemic. Medical and epidemiological vocabulary was integrated into the general language to a large extent. Suddenly, some lexemes from general language were used with very high frequency, while other words were used less often than before. These processes of language change can be studied in various ways, for example, in corpus linguistics with respect to the frequency or emergence of certain words in certain types of texts (e.g. press releases vs. posts in social media), in critical discourse analysis with respect to certain participants of the discourse (e.g. vocabulary of Covid-19 pandemic deniers), or in conversation analysis (e.g. with respect to new verbal interactions in greetings and farewells). The rapid expansion of vocabulary has also notably affected lexicography as a discipline of applied linguistics.
This article will focus on the ways in which a German neologism dictionary project has chosen to capture and document lexicographic information in a timely manner. Both challenges and advantages arise from lexicographic practice “at the pulse of time”. The Neologismenwörterbuch is presented as an example that lends itself well to such a discussion because its subject (neologisms) is characterized as new, innovative, and constantly changing.
Not only professional lexicographers, but also people without a professional background in lexicography, have reacted to the increased need for information on new words or medical and epidemiological terms being used in the context of the COVID-19 pandemic. In this study, corona-related glossaries published on German news websites are presented, as well as different kinds of responses from professional lexicography. They are compared in terms of the amount of encyclopaedic information given and the methods of definition used. In this context, answers to corona-related words from a German question-answer platform are also presented and analyzed. Overall, these different reactions to a unique challenge shed light on the importance of lexicography for society and vice versa.
This volume of Lexicographica: Series Maior focuses on lexicographic neology and neological lexicography concerning COVID-19 neologisms, featuring papers originally presented at the third Globalex Workshop on Lexicography and Neology (GWLN 2021).
The thirteen papers in this volume focus on ten languages: one Altaic (Korean), one Finno-Ugric (Hungarian), two Germanic (English and German), four Romance (French, Italian, [Brazilian and European] Portuguese and [Pan-American and European] Spanish), and one Slavic (Croatian), as well as the Sign Language of New Zealand. Specialized dictionaries of neologisms are discussed as well as general language ones, monolingual, bilingual and multilingual lexical resources, print and electronic dictionaries. Questions regarding terminology as well as general language and standard and norm regarding COVID-19 neologisms are raised and different methods of detecting candidates in media corpora, as well as by user contributions, are discussed.
In a recent article, Meylan and Griffiths (Meylan & Griffiths, 2021, henceforth, M&G) focus their attention on the significant methodological challenges that can arise when using large-scale linguistic corpora. To this end, M&G revisit a well-known result of Piantadosi, Tily, and Gibson (2011, henceforth, PT&G) who argue that average information content is a better predictor of word length than word frequency. We applaud M&G, who conducted a very important study that should be read by any researcher interested in working with large-scale corpora. The fact that M&G mostly failed to find clear evidence in favor of PT&G's main finding motivated us to test PT&G's idea on a subset of the largest archive of German language texts designed for linguistic research, the German Reference Corpus consisting of ∼43 billion words. We find only very little support for the primary data point reported by PT&G.
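The two competing predictors can be sketched on a toy corpus: unigram surprisal (negative log relative frequency) and average information content estimated from bigram probabilities. This follows the general logic of PT&G's measures but is a simplified illustration, not the estimation procedure used in the study or its replications.

```python
import math
from collections import Counter, defaultdict

# Toy illustration of two predictors of word length:
# unigram surprisal (-log2 relative frequency) vs. average information
# content (mean of -log2 P(word | previous word), bigram-estimated).

def predictors(tokens):
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    surprisal = {w: -math.log2(c / n) for w, c in unigrams.items()}
    contexts = defaultdict(list)
    for (prev, w), c in bigrams.items():
        # each of the c occurrences contributes -log2 P(w | prev)
        contexts[w].extend([-math.log2(c / unigrams[prev])] * c)
    avg_info = {w: sum(v) / len(v) for w, v in contexts.items()}
    return surprisal, avg_info

toks = "the cat sat on the mat".split()
surprisal, avg_info = predictors(toks)
# surprisal["the"] == log2(3); avg_info["cat"] == 1.0, since P(cat | the) = 1/2
```

At corpus scale, the question M&G raise is precisely how sensitive the correlation between these predictors and word length is to corpus composition and estimation choices.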
In a previous study published in Nature Human Behaviour, Varnum and Grossmann claim that reductions in gender inequality are linked to reductions in pathogen prevalence in the United States between 1951 and 2013. Since the statistical methods used by Varnum and Grossmann are known to induce (seemingly) significant correlations between unrelated time series, so-called spurious or non-sense correlations, we test here whether the statistical association between gender inequality and pathogens prevalence in its current form also is the result of mis-specified models that do not correctly account for the temporal structure of the data. Our analysis clearly suggests that this is the case. We then discuss and apply several standard approaches of modelling time-series processes in the data and show that there is, at least as of now, no support for a statistical association between gender inequality and pathogen prevalence.
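The underlying statistical point, that two independent trending series show a high correlation which disappears once the temporal structure is accounted for (here via first-differencing, one standard approach), can be sketched with simulated data; the series are purely illustrative and have nothing to do with the actual gender-inequality or pathogen data.

```python
import random

# Two independent series that both trend upward correlate strongly even
# though they are unrelated; first-differencing removes the shared trend
# and the spurious association vanishes.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def difference(series):
    return [b - a for a, b in zip(series, series[1:])]

random.seed(1)
a = [i + random.gauss(0, 3) for i in range(100)]  # trend + noise
b = [i + random.gauss(0, 3) for i in range(100)]  # independent trend + noise

r_levels = pearson(a, b)                           # high, driven by the trend alone
r_diff = pearson(difference(a), difference(b))     # close to zero
```

More rigorous treatments test for unit roots and fit explicit time-series models, but the differencing example already shows why correlations between undifferenced trending series are uninformative.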
It was recently suggested in a study published in Nature Human Behaviour that the historical loosening of American culture was associated with a trade-off between higher creativity and lower order. To this end, Jackson et al. generate a linguistic index of cultural tightness based on the Google Books Ngram corpus and use this index to show that American norms loosened between 1800 and 2000. While we remain agnostic toward a potential loosening of American culture and a statistical association with creativity/order, we show here that the methods used by Jackson et al. are neither suitable for testing the validity of the index nor for establishing possible relationships with creativity/order.
The present paper reports two acceptability-rating experiments and a supporting corpus study for Polish that tested the acceptability and frequency of five verb classes (WATCH, SEE, HATE, KNOW, EXHIBIT), entailing different sets of agentivity features, in different syntactic constructions: a) the personal passive (e.g. zachód słońca był oglądany ‘the sunset was watched’), b) the impersonal -no/-to construction (e.g. oglądano zachód słońca ‘people/they/one watched the sunset’), and c) the personal active construction (e.g. niektórzy oglądali zachód słońca ‘some (people) watched the sunset’). We asked whether acceptability ratings would show identical acceptability clines across constructions affected by agentivity, as predicted from Dowty’s (1991) prototype account of semantic roles with feature accumulation as its central mechanism, or whether clines would vary depending on syntactic construction, as predicted from Himmelmann & Primus’ (2015) prominence account that uses feature weighting to describe role-related effects. In contrasting the applicability of these two accounts, we also investigated whether previous research findings from German replicate in Polish, thereby revealing cross-linguistic stability or variation. Our results show that the five verb classes yield different acceptability clines in all three Polish constructions and that the clines for Polish and German passives show cross-linguistic variation. This pattern cannot be explained by role prototypicality, so that the experiments provide further evidence for the prominence account of role-related effects in sentence interpretation. Moreover, our data suggest that experiencer verbs interact differently with the animacy of the subject referent, yielding different results for perception verbs (SEE), emotion verbs (HATE), and cognition verbs (KNOW).
Enabling appropriate access to linguistic research data, both for many researchers and for innovative research applications, is a challenging task. In this chapter, we describe how we address this challenge in the context of the German Reference Corpus DeReKo and the corpus analysis platform KorAP. The core of our approach, which is based on and tightly integrated into the CLARIN infrastructure, is to offer access at different levels. The graduated access levels make it possible to find a low-loss compromise between the possibilities opened up and the costs incurred by users and providers for each individual use case, so that, viewed over many applications, the ratio between effort and results achieved can be effectively optimized. We also report on experiences with the current state of this approach.
Meta-communicative practices are generally reflexive in a fairly obvious sense: Inasmuch as speakers use them to talk about or comment on earlier/subsequent talk, they use language self-reflexively. In this paper, we explore a practice that is reflexive not only in this meta-communicative sense but also in a sequential-interactional one: Prefacing a conversational turn with I was gonna say. We show that the I was gonna say-preface furnishes the following general semantic-pragmatic affordances: (1) It retroactively relates the speaker’s subsequent talk to preceding talk from a co-participant, (2) it embodies a claim to prior, now-preempted, communicative intent with regard to what their co-participant has (just) said/done, (3) it therefore displays its speaker’s orientation to the relevance or the appropriate placement of the action(s) done in their own subsequent talk at an earlier moment in the interaction, and (4) it reflexively re-invokes, or retrieves, this earlier moment as the relevant sequential context for their action(s). We then go on to illustrate how speakers draw on these sequentially reflexive affordances for managing recurrent interactional contingencies in specific sequential environments. The paper ends with a discussion of the role that reflexivity plays in and for the deployment of this practice.
In this article we examine moments in which parents or other caregivers overtly invoke rules during episodes in which they take issue with, intervene against, and try to change a child’s ongoing behavior or action(s). Drawing on interactional data from four different languages (English, Finnish, German, Polish) and using Conversation Analytic methods, we first illustrate the variety of ways in which parents may use such overt rule invocations as part of their behavior modification attempts, showing them to be functionally versatile interactional objects. Their interactional flexibility notwithstanding, we find that parents typically invoke rules when, in the course of the intervention episode, they encounter trouble with achieving an acceptable compliant outcome. To get at the distinct import of rule formulations in this context, we then compare them to two sequential alternatives: parental expressions of an experienced negative affective state, and parental threats. While the former emphasize aspects of social solidarity, the latter seek to enforce compliance by foregrounding a power asymmetry between the parent and the child. Rule formulations, by contrast, are designedly impersonal and appear to be directed at what the parents construe as shortcomings in common-sense practical reasoning on the child’s part. Reflexively, the child is thereby cast as not having properly applied common-sense ‘practical reason’ when engaging in what is treated as the problematic behavior or action. Overt rule invocations can, therefore, be understood as indexical appeals to practical reason.
Metadata provides important information relevant both to finding and understanding corpus data. Meaningful linguistic data requires both reasonable annotations and documentation of these annotations. This documentation is part of the metadata of a dataset. While corpus documentation has often been provided in the form of accompanying publications, machine-readable metadata, both containing the bibliographic information and documenting the corpus data, has many advantages. Metadata standards allow for the development of common tools and interfaces. In this paper I want to add a new perspective from an archive's point of view and look at the metadata provided for four learner corpora, discussing the suitability of established standards for machine-readable metadata. I am aware that there is ongoing work towards metadata standards for learner corpora. However, I would like to keep the discussion going and add another point of view: increasing the findability and reusability of learner corpora in an archiving context.
The QUEST (QUality ESTablished) project aims at ensuring the reusability of audio-visual datasets (Wamprechtshammer et al., 2022) by devising quality criteria and curation processes. RefCo (Reference Corpora) is an initiative within QUEST, in collaboration with DoReCo (Documentation Reference Corpus, Paschen et al. (2020)), focusing on language documentation projects. Previously, Aznar and Seifart (2020) introduced a set of quality criteria dedicated to documenting fieldwork corpora. Based on these criteria, we establish a semi-automatic review process for existing and work-in-progress corpora, in particular for language documentation. The goal is to improve the quality of a corpus by increasing its reusability. A central part of this process is a template for machine-readable corpus documentation and automatic data verification based on this documentation. In addition to the documentation and automatic verification, the process involves a human review and potentially results in a RefCo certification of the corpus. For each of these steps, we provide guidelines and manuals. We describe the evaluation process in detail, highlight the current limits of automatic evaluation, and show how the manual review is organized accordingly.
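The automatic-verification step described above can be sketched as a check of a machine-readable documentation record against a template. The field names below are invented for illustration and are not the actual RefCo template.

```python
# Hypothetical template fields; not the actual RefCo documentation schema.
REQUIRED_FIELDS = {"title", "languages", "annotation_tiers", "license"}

def verify(record):
    """Return a list of human-readable problems; an empty list means the record passes."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - record.keys())]
    if "languages" in record and not record["languages"]:
        problems.append("at least one language must be documented")
    return problems

record = {"title": "Fieldwork corpus X", "languages": ["xyz"], "license": "CC BY 4.0"}
print(verify(record))  # ['missing field: annotation_tiers']
```

A report like this would then feed into the human review stage rather than replace it, which mirrors the semi-automatic character of the process.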
In this paper, we deal with register-driven variation from a probabilistic perspective, as proposed in Schäfer, Bildhauer, Pankratz & Müller (2022). We compare two approaches to analysing this variation within HPSG. On the one hand, we consider a multiple-grammar approach and combine it with the architecture proposed in the CoreGram project (Müller 2015), discussing its advantages and disadvantages. On the other hand, we take into account a single-grammar approach and argue that it appears to be superior due to its computational efficiency and cognitive plausibility.
Every Regional Dossier begins with an introduction about the region in question, followed by six chapters that each deal with a specific level of the education system (e.g. primary education). Chapters 8 and 9 cover the main lines of research into education of the minority language under discussion, and the prospects for the minority language in general and in education in particular, respectively. Chapter 10 provides a summary of statistics. Lists of (legal) references and useful addresses regarding the minority language are given at the end of the dossier.
The CLARIN Concept Registry (CCR) is the common semantic ground for most CMDI-based profiles to describe language-related resources in the CLARIN universe. While the CCR supports semantic interoperability within this universe, it does not extend beyond it. The flexibility of CMDI, however, allows users to use other term or concept registries when defining their metadata components. In this paper, we describe our use of schema.org, a light ontology used by many parties across disciplines.
This paper presents the Lehnwortportal Deutsch, a new, freely accessible publication platform for resources on German lexical borrowings in other languages, to be launched in the second half of 2022. The system will host digital-native sources as well as existing, digitized paper dictionaries on loanwords, initially for some 15 recipient languages. All resources remain accessible as individual standalone dictionaries; in addition, data on words (etyma, loanwords etc.) together with their senses and relations to each other is represented as a cross-resource network in a graph database, with careful distinction between information present in the original sources and the curated portal network data resulting from matching and merging information on, e. g., lexical units appearing in multiple dictionaries. Special tooling is available for manually creating graphs from dictionary entries during digitization and for editing and augmenting the graph database. The user interface allows users to browse individual dictionaries, navigate through the underlying graph and ‘click together’ complex queries on borrowing constellations in the graph in an intuitive way. The web application will be available as open source.
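The cross-resource network idea might be sketched as follows. This is a hypothetical miniature, not the portal's actual data model: etyma and loanwords as nodes, borrowing relations as edges labelled with the source dictionary, with queries spanning the individual dictionaries.

```python
from collections import defaultdict

# etymon -> list of (loanword, recipient language, source dictionary)
graph = defaultdict(list)

def add_borrowing(etymon, loanword, language, dictionary):
    graph[etymon].append((loanword, language, dictionary))

# Real German loanwords in English; the dictionary names are invented.
add_borrowing("Kindergarten", "kindergarten", "English", "Dict-A")
add_borrowing("Rucksack", "rucksack", "English", "Dict-A")
add_borrowing("Rucksack", "rucksack", "English", "Dict-B")  # same unit in a second source

def borrowings(etymon):
    """Cross-resource query: every recorded borrowing of one German etymon."""
    return sorted(set(graph[etymon]))
```

The portal's distinction between source data and curated network data would correspond to keeping each dictionary's entries intact while the merged graph records that, e.g., "Rucksack" appears in two resources.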
In semantic fieldwork, it is common to use a language other than the language under investigation for presenting linguistic materials to the language consultants, e.g. discourse contexts in acceptability judgment tasks. Previous works commenting on the use of a ‘meta-language’ or ‘language of wider communication’ in this sense (AnderBois and Henderson 2015; Matthewson 2004) have argued that this practice is not methodologically inferior to the exclusive use of the object language for elicitation, but that the fieldworker needs to be alert to potential influences of the meta-language or, indeed, the object language, on the elicited judgments. Thus, the choice of a language for presenting discourse contexts is an integral component of fieldwork methodology. This paper provides a research report with a focus on this component. It describes a multilingual fieldwork setting offering several potential meta-languages, which the fieldworker and the consultants master to varying degrees. The choice of the languages in this setting is discussed with regard to methodological, social and practical considerations and related to selected, more general methodological questions regarding semantic fieldwork practice.
We present a simple tool for extracting text and markup information from printouts of (not only) scientific documents. While the heavy-lifting OCR is done by off-the-shelf tesseract, our focus is on detection, extraction, and basic categorization of color-highlighted text sections, as well as on providing a framework for downstream processing of extraction results. The tool can be useful for document analysis tasks that must, or benefit from being able to, use printed paper.
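The colour-categorization step (as distinct from the OCR itself, which the abstract delegates to tesseract) could be sketched as a simple per-pixel classifier. The RGB thresholds below are invented for illustration and are not the tool's actual values.

```python
def classify_highlight(rgb):
    """Map an (R, G, B) pixel to a hypothetical highlight category."""
    r, g, b = rgb
    if r > 200 and g > 200 and b < 120:
        return "yellow"                 # classic highlighter yellow
    if r < 150 and g > 200 and b < 150:
        return "green"
    if r > 230 and g > 230 and b > 230:
        return None                     # plain paper background
    return "other"
```

Grouping OCR'd words by the dominant highlight category of their bounding boxes would then yield the categorized text sections for downstream processing.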
An annotated dataset consisting of personal designations found on the websites of 42 German, Austrian, Swiss and South Tyrolean cities. Our goal is to re-evaluate the websites every year in order to see how the use of gender-fair language develops over time. The dataset contains coordinates for the creation of map material.
Dictionaries are often a reflection of their time; their respective (socio-)historical context influences how the meaning of certain lexical units is described. This also applies to descriptions of personal terms such as man or woman. Lexicographers have a special responsibility to comprehensively investigate current language use before describing it in the dictionary. Accordingly, contemporary academic dictionaries are usually corpus-based. However, it is important to acknowledge that language is always embedded in cultural contexts. Our case study investigates differences in the linguistic contexts of the use of man and woman, drawing from a range of language collections (in our case fiction books, popular magazines and newspapers). We explain how potential differences in corpus construction would therefore influence the “reality” depicted in the dictionary. In doing so, we address the far-reaching consequences that the choice of corpus-linguistic basis for an empirical dictionary has on semantic descriptions in dictionary entries.
Furthermore, we situate the case study within the context of gender-linguistic issues and discuss how lexicographic teams can engage with how dictionaries might perpetuate traditional role concepts when describing language use.
This contribution investigates the use of the Czech particle jako (“like”/“as”) in naturally occurring conversations. Inspired by interactional research on unfinished or suspended utterances and on turn-final conjunctions and particles, the analysis aims to trace the possible development of jako from conjunction to a tag-like particle that can be exploited for mobilizing affiliative responses. Traditionally, jako has been described as conjunction used for comparing two elements or for providing a specification of a first element [“X (is) like Y”]. In spoken Czech, however, jako can be flexibly positioned within a speaking turn and does not seem to operate as a coordinating or hypotactic conjunction. As a result, prior studies have described jako as a polyfunctional particle. This article will try to shed light on the meaning of jako in spoken discourse by focusing on its apparent fuzzy or “filler” uses, i.e., when it is found in a mid-turn position in multi-unit turns and in the immediate vicinity of hesitations, pauses, and turn suspensions. Based on examples from mundane, video-recorded conversations and on a sequential and multimodal approach to social interaction, the analyses will first show that jako frequently frames discursive objects that co-participants should respond to. By using jako before a pause and concurrently adopting specific embodied displays, participants can more explicitly seek to mobilize responsive action. Moreover, as jako tends to cluster in multi-unit turns involving the formulation of subjective experience or stance, it can be shown to be specifically designed for mobilizing affiliative responses. Finally, it will be argued that the potential of jako to open up interactive turn spaces can be linked to the fundamental comparative semantics of the original conjunction.
The article investigates the hypothesis that prominence phenomena on different levels of linguistic structure are systematically related to each other. More specifically, it is hypothesized that prominence relations in morphosyntax reflect, and contribute to, prominence management in discourse. This hypothesis is empirically based on the phenomenon of agentivity clines, i.e. the observation that the relevance of agentivity features such as volition or sentience is variable across different constructions. While some constructions, including German DO-clefts, show a strong preference for highly agentive verbs, other constructions, including German basic active constructions, have no particular requirements regarding the agentivity of the verb, except that at least one agentivity feature should be present. Our hypothesis predicts that this variable relevance of agentivity features is related to the discourse constraints on the felicitous use of a given construction, which in turn, of course, requires an explicit statement of such constraints. We propose an original account of the discourse constraints on DO-clefts in German using the ‘Question Under Discussion’ framework. Here, we hypothesize that DO-clefts render prominent one implicit question from a set of alternative questions available at a particular point in the developing discourse. This then yields a prominent question-answer pair that changes the thematic structure of the discourse. We conclude with some observations on the possibility of relating morphosyntactic prominence (high agentivity) to discourse prominence (making a Question Under Discussion prominent by way of clefting).
Recent years have seen a growing interest in linguistic phenomena that challenge the received division of labour between lexicon and grammar, and hence often fall through the cracks of traditional dictionaries and grammars. Such phenomena call for novel, pattern-based types of linguistic reference works (see various papers in Herbst 2019). The present paper introduces one such resource: MAP (“Musterbank argumentmarkierender Präpositionen”), a web-based corpus-linguistic patternbank of prepositional argument structure constructions in German. The paper gives an overview of the design and functionality of the MAP-prototype currently developed at the Leibniz-Institute for the German Language in Mannheim. We give a brief account of the data and our analytic workflow, illustrate the descriptions that make up the resource and sketch available options for querying it for specific lexical, semantic and structural properties of the data.
The shortening of linguistic expressions naturally involves some sort of correspondence between short forms and (some portion of) the respective full forms. Based mostly on data from English and Hebrew this article explores the hypothesis that such correspondence concerns necessary sameness of symbolic form, referring either to graphemic or to a specific level of phonological representation. That level indicates a degree of abstractness defined by language-specific contrastiveness (i.e. “phonemic”). Reference to written form can be shown to be highly systematic in certain contexts, including cases where full forms consist of multiple stems. Specific asymmetries pertaining to the targeting of material by correspondence (e.g. initial vs. non-initial position) appear to be alike for both types of representation, a claim supported by a study based on a nomenclature strictly confined to writing (chemical element symbols).
Head alignment in German compounds: Implications for prosodic constituency and morphological parsing
(2022)
The notion of head alignment was introduced to account for the observation that in a word with multiple feet, one is more prominent than the others. In particular, this notion is meant to capture the characteristic edge-orientation of main stress by requiring the (left or right) word boundary and the respective (left or right) boundary of the head foot to coincide (McCarthy & Prince 1993). In the present paper the notion of head alignment will be applied to compounds, which are also characterized by the property that one of their members, located in a margin position, is most prominent.
The adequacy of an analysis in terms of head alignment hinges on the question of whether observable prominence peaks associate with the boundaries of independently motivated constituents. It will be argued that such links exist for German compounds, indicating reference to at least three distinct compound categories established on morphological grounds: copulative, phrasal, and a default class of “regular” compounds. The evidence for the relevant distinctions sheds light on morphological parsing, indicating that compound categories can be – and often are – determined by properties pertaining to their complete form, rather than by conditions affecting their (original) construction.
Words originating from shortening, including acronyms and clippings, constitute a treasure trove of insight into phonological grammar. In particular, they serve as an ideal testing ground for Optimality Theory (OT) and its view of grammar as an interaction of markedness constraints, which express (dis-)preferences regarding phonological structure in output forms, and faithfulness constraints, which require output forms to correspond to input structure (Prince and Smolensky 1993). This is because shortenings are characterised by a sharply diminished role of faithfulness, allowing for markedness constraints to make their force felt (“The Emergence of the Unmarked”). This article aims to demonstrate the heuristic value of shortening data for testing the OT model and for shedding light on various controversies in German phonology. A particular concern is to draw attention to the need for properly sorting the shortening data, to identify influences on phonological structure due to internal domain boundaries or to special correspondence effects potentially obscuring the view on the maximally unmarked patterns.
This paper presents a compositional annotation scheme to capture the clusivity properties of personal pronouns in context, that is their ability to construct and manage in-groups and out-groups by including/excluding the audience and/or non-speech act participants in reference to groups that also include the speaker. We apply and test our schema on pronoun instances in speeches taken from the German parliament. The speeches cover a time period from 2017-2021 and comprise manual annotations for 3,126 sentences. We achieve high inter-annotator agreement for our new schema, with a Cohen’s κ in the range of 89.7-93.2 and a percentage agreement of > 96%. Our exploratory analysis of in/exclusive pronoun use in the parliamentary setting provides some face validity for our new schema. Finally, we present baseline experiments for automatically predicting clusivity in political debates, with promising results for many referential constellations, yielding an overall 84.9% micro F1 for all pronouns.
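The agreement figures reported above (Cohen's κ and percentage agreement) can be illustrated with a toy two-annotator computation; the clusivity labels below are invented and are not the paper's actual schema.

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n              # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[lab] * cb.get(lab, 0) for lab in ca) / n ** 2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Invented clusivity labels for six pronoun instances.
ann1 = ["incl", "incl", "excl", "incl", "excl", "excl"]
ann2 = ["incl", "incl", "excl", "excl", "excl", "excl"]

pct = sum(x == y for x, y in zip(ann1, ann2)) / len(ann1)    # percentage agreement
kappa = cohens_kappa(ann1, ann2)
```

Kappa corrects the raw percentage agreement for the agreement expected by chance from each annotator's label distribution, which is why both figures are typically reported together.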
The question of whether a letter is a grapheme or not is a perennial issue in writing research. The answer depends on which criteria are used to differentiate between letters and graphemes and, ultimately, how the unit ‘grapheme’ is defined. This problem is particularly relevant to complex graphemes, i.e. sequences of letters that behave like a single grapheme in certain respects; a typical German example is ‹ch›. This paper argues for a scalar concept of graphemes, which compares the grapheme status of each of the units under investigation. For this purpose, new criteria for the identification of complex graphemes are used, which originate from handwriting analysis. This analysis shows that the letters of complex graphemes are disproportionately often connected with each other and also disproportionately often display deviating letter forms.
Within a rapidly digitalising society, it is important to understand how the learning and teaching of digital skills play out in situ, particularly amongst older adults who acquire these skills later in life. This paper focuses on participants engaged in the process of learning digital skills in adult education courses. Using video recordings from adult education centres in Finland and Germany, we explore how students mobilise their teachers’ assistance when encountering problems with their smartphones, laptops or tablets. Prior research on social interaction has shown that assistance can be recruited through a variety of verbal and embodied formats. In this specific educational setting, participants can use complaints about their digital skills or mobile devices to obtain assistance. Utilising multimodal conversation analysis, we describe two basic sequence types involving students’ complaints, discuss their cross-linguistic characteristics, and reflect on their connection to this educational setting and digital devices.
This article presents a discussion on the main linguistic phenomena which cause difficulties in the analysis of user-generated texts found on the web and in social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework of syntactic analysis. Given on the one hand the increasing number of treebanks featuring user-generated content, and its somewhat inconsistent treatment in these resources on the other, the aim of this article is twofold: (1) to provide a condensed, though comprehensive, overview of such treebanks—based on available literature—along with their main features and a comparative analysis of their annotation criteria, and (2) to propose a set of tentative UD-based annotation guidelines, to promote consistent treatment of the particular phenomena found in these types of texts. The overarching goal of this article is to provide a common framework for researchers interested in developing similar resources in UD, thus promoting cross-linguistic consistency, which is a principle that has always been central to the spirit of UD.
This paper investigates the long-term diachronic development of the perfect and preterite tenses in German and provides a novel analysis by supplementing Reichenbach’s (1947) classical theory of tense by the notion of underspecification. Based on a newly compiled parallel corpus spanning the entire documented history of German, we show that the development in question is cyclic: It starts out with only one tense form (preterite) compatible with both current relevance and narrative past readings in (early) Old High German and, via three intermediate stages, arrives at only one tense form again (perfect) compatible with the same readings in modern Upper German dialects. We propose that in order to capture all attested stages we must allow tenses to be unspecified for R (reference time), with R merely being inferred pragmatically. We then propose that the transitions between the different stages can be explained by the interplay between semantics and pragmatics.
The public as linguistic authority: Why users turn to internet forums to differentiate between words
(2022)
This paper addresses the question of why we face unsatisfactory German dictionary entries when looking up and comparing two similar lexical terms that are loan words, new words, (near-)synonyms, or confusables. It explains how users are aware of existing reference works but still search or post on language forums, often after consulting a dictionary and experiencing a range of dictionary-based problems. Firstly, these dictionary-based difficulties will be scrutinised in more detail with respect to content, function, presentation, and the language of definitions. Entries documenting loan words and commonly confused pairs from different lexical reference resources serve as examples to show the shortcomings. Secondly, I will explain why learning about your target group involves studying discussion forums. Forums are a valuable source for detailed user studies, enabling the examination of different communicative needs, concrete linguistic questions, speakers’ intuitions, and people’s reactions to posts and comments. Thirdly, with the help of two examples I will describe how the study of chats and forums had a major impact on the development of a recently compiled German dictionary of confusables. Finally, that same problem-solving approach is applied to the idea of a future dictionary of neologisms and their synonyms.
We address the task of distinguishing implicitly abusive sentences about identity groups (“Muslims contaminate our planet”) from other group-related negative polar sentences (“Muslims despise terrorism”). Implicitly abusive language consists of utterances whose abusiveness is not conveyed by abusive words (e.g. “bimbo” or “scum”). So far, the detection of such utterances could not be properly addressed, since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of tackling implicit abuse by separately addressing its different subtypes, we present a new focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For that task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification, and non-conformist views. The approach generalizes across different identity groups and languages.
The Leibniz Institute for the German Language (IDS) was established in Mannheim in 1964. Since then, it has been at the forefront of innovation in German linguistics as a hub for digital language data. This chapter presents various lessons learnt from over five decades of work by the IDS, ranging from the importance of sustainability, through its strong technical base and FAIR principles, to the IDS’ role in national and international cooperation projects and its expertise on legal and ethical issues related to language resources and language technology.
Dictionaries have been part and parcel of literate societies for many centuries. They assist communication, particularly across different languages, by aiding in understanding, creating, and translating texts. Communication problems arise whenever a native speaker of one language comes into contact with a speaker of another language. At the same time, English has established itself as a lingua franca of international communication. This marked tendency gives the lexicography of English a particular significance, as English dictionaries are used intensively and extensively by huge numbers of people worldwide.
We examine moments in social interaction in which a person formulates what another thinks or believes. Such formulations of belief constitute a practice with specifiable contexts and consequences. Belief formulations treat aspects of the other person's prior conduct as accountable on the basis that it provided a new angle on a topic, or otherwise made a surprising contribution within an ongoing course of actions. The practice of belief formulations subjectivizes the content that the other articulated and thereby topicalizes it, mobilizing commitment to that position, an account, or further elaboration. We describe how the practice can be put to work in different activity contexts: sometimes it is designed to undermine the other's position as a subjective 'mere belief', at other times it serves to mobilize further topic talk. Throughout, belief formulations show themselves to be a method by which we get to know ourselves and each other as mental agents.
Sometimes in interaction, a speaker articulates an overt interpretation of prior talk. Such moments have been studied as involving the repair of a problem with the other’s talk or as formulating an understanding of the matter at hand. Stepping back from the established notions of formulations and repair, we examine the variety of actions speakers do with the practice of offering an interpretation, and the order within this domain. Results show half a dozen usage types of interpretations in mundane interaction. These form a largely continuous territory of action, with recognizably distinct usage types as well as cases falling between these (proto)typical uses. We locate order in the domain of interpretations using the method of semantic maps and show that, contrary to earlier assumptions in the literature, interpretations that formulate an understanding of the matter at hand are actually quite pervasive in ordinary talk. These findings contribute to research on action formation and advance our understanding of understanding in interaction. Data are video- and audio-recordings of mundane social interaction in the German language from a variety of settings.
CLARIN, the "Common Language Resources and Technology Infrastructure", has established itself as a major player in the field of research infrastructures for the humanities. This volume provides a comprehensive overview of the organization, its members, its goals and its functioning, as well as of the tools and resources hosted by the infrastructure. The many contributors representing various fields, from computer science to law to psychology, analyse a wide range of topics, such as the technology behind the CLARIN infrastructure, the use of CLARIN resources in diverse research projects, the achievements of selected national CLARIN consortia, and the challenges that CLARIN has faced and will face in the future.
The book will be published in 2022, 10 years after the establishment of CLARIN as a European Research Infrastructure Consortium by the European Commission (Decision 2012/136/EU).
This edited volume offers up-to-date research on the interactive building and managing of relationships in organized helping. Its contributions address this core of helping in psychotherapy, coaching, doctor-patient interaction, and digital helping interaction, and document and analyze essential communicative practices of relationship management. A summarizing contribution identifies common dimensions of relationship management across the different helping contexts and thereby provides a framework for understanding and researching how interactive practices and helping relationships are interconnected. The volume brings together researchers and practitioners and merges academic approaches to studying relationships with practical knowledge about verbal helping in these settings. The book is intended for scholars in the field of organized helping as well as for students and researchers of communication and discourse/conversation analysis in professional and organized contexts. It is also addressed to practitioners interested in learning more about the micro- and meso-management of their working relationships.
Contents:
1. Vasile Pais, Maria Mitrofan, Verginica Barbu Mititelu, Elena Irimia, Roxana Micu and Carol Luca Gasan: Challenges in Creating a Representative Corpus of Romanian Micro-Blogging Text. Pp. 1-7
2. Modest von Korff: Exhaustive Indexing of PubMed Records with Medical Subject Headings. Pp. 8-15
3. Luca Brigada Villa: UDeasy: a Tool for Querying Treebanks in CoNLL-U Format. Pp. 16-19
4. Nils Diewald: Matrix and Double-Array Representations for Efficient Finite State Tokenization. Pp. 20-26
5. Peter Fankhauser and Marc Kupietz: Count-Based and Predictive Language Models for Exploring DeReKo. Pp. 27-31
6. Hanno Biber: “The word expired when that world awoke.” New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times. Pp. 32-35
Bringing together a team of global experts, this is the first volume to focus on the ways in which meanings are ascribed to actions in social interaction. It builds on the research traditions of Conversation Analysis and Pragmatics, and highlights the role of interactional, social, linguistic, multimodal, and epistemic factors in the formation and ascription of action-meanings. It shows how inference and intention ascription are displayed and drawn upon by participants in social interaction. Each chapter reveals practices, processes, and uses of action ascription, based on the analysis of audio and video recordings from nine different languages. Action ascription is conceptualised in this volume as not merely a cognitive process, but a social action in its own right that is used for managing interactional concerns and guiding the subsequent course of social interaction. It will be essential reading for academic researchers and advanced students interested in the relationship between language, behaviour and social interaction.
This volume brings together contributions by international experts reflecting on Covid-19-related neologisms and their lexicographic processing and representation. The papers analyze new words, new meanings of existing words, and new multiword units, where they come from, how they are transmitted (or differ) across languages, and how their use and meaning are reflected in dictionaries of all sorts. Recent trends in as many as ten languages are considered, including general and specialized language, monolingual as well as bilingual, and printed as well as online dictionaries.