Communicating on social media and dealing with hypertexts is, in 2020, no longer a fringe phenomenon. The linguistic particularities of internet-based communication and social media are by now well researched and described; so far, however, German grammars treat them at best marginally, with the exception of Hoffmann (2014). Even more recent approaches to text analysis, e.g. Ágel (2017), concentrate on stable, linearly organized written texts. The same applies to approaches developed primarily for assessing writing products in educational contexts.
The annual microcensus provides Germany’s most important official statistics. Unlike a census, it does not cover the whole population but a representative 1% sample of it. In 2017, the German microcensus included a question on the language of the population, namely ‘Which language is mainly spoken in your household?’ Unfortunately, the question, its design, and its position within the microcensus questionnaire exhibit several shortcomings, the main one being that multilingual repertoires cannot be captured. We offer recommendations for improving the microcensus language question: first and foremost, the question (i.e. its wording, design, and answer options) should make it possible to count multilingual repertoires.
This paper explores how attitudes affect the seemingly objective process of counting speakers of varieties using the example of Low German, Germany’s sole regional language. The initial focus is on the basic taxonomy of classifying a variety as a language or a dialect. Three representative surveys then provide data for the analysis: the Germany Survey 2008, the Northern Germany Survey 2016, and the Germany Survey 2017. The results of these surveys indicate that there is no consensus concerning the evaluation of Low German’s status and that attitudes towards Low German are related to, for example, proficiency in the language. These attitudes are shown to matter when counting speakers of Low German and investigating the status it has been accorded.
To date, there are no accurate, representative statistics on which languages are spoken in Germany. Although various surveys ask about mother tongues or about the languages spoken at home, several flaws in the survey design mean that the results of the existing surveys do not adequately reflect the linguistic reality of the population living in Germany. Drawing on three surveys, this contribution shows that the very instruments used to survey language are shaped by language attitudes, and that this severely limits the validity of the results. These shortcomings apply to language statistics for the entire population of Germany, children and adolescents included.
Collaborative work in NFDI
(2023)
The non-profit association National Research Data Infrastructure (NFDI) promotes science and research through a national research data infrastructure. Its aim is to develop and establish overarching research data management (RDM) for Germany and to increase the efficiency of the entire German science system. After a two-and-a-half-year build-up phase, the process of adding new consortia, each representing a different data domain, ended in March 2023. NFDI now comprises 26 disciplinary consortia (plus one additional basic-service collaboration), and the full extent of cross-consortial interaction is beginning to show.
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose compositional meanings are hard to trace back to individual word semantics, there is still ample scope for improvement in computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a Wikipedia list of idioms, achieving state-of-the-art accuracy.
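To illustrate the kind of count-based collocation measure the abstract mentions (a sketch of the general technique, not the authors' implementation), pointwise mutual information compares a bigram's observed frequency with what independent word frequencies would predict; the toy corpus and function name below are our own:

```python
import math
from collections import Counter

def pmi(bigram, tokens):
    """Pointwise mutual information of a bigram, a simple count-based
    collocation measure: log2( P(w1,w2) / (P(w1) * P(w2)) ).
    Positive values indicate the words co-occur more often than chance."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    p_w1 = unigrams[bigram[0]] / n
    p_w2 = unigrams[bigram[1]] / n
    p_bigram = bigrams[bigram] / (n - 1)
    return math.log2(p_bigram / (p_w1 * p_w2))

# Toy corpus: 'kick the' recurs, so its PMI comes out above zero.
tokens = "kick the bucket or kick the habit and fill the bucket".split()
score = pmi(("kick", "the"), tokens)
```

In the paper's setting such scores would be one feature column among several (fixedness, context similarity) fed to the Random Forest classifier.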
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms issued for KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset consisting of random ngram samples. The classification is done by implementing a rather shallow, statistics-based pipeline without intensive preprocessing or examination at the morphosyntactic and semantic levels. We describe the overall approach and the differences between the original dataset and the dataset of the KONVENS task, provide experimental classification results, and analyse the individual contributions of our feature sets.
The kick-off workshop "Lexik des gesprochenen Deutsch: Forschungsstand, Erwartungen und Anforderungen an die Entwicklung einer innovativen lexikografischen Ressource" took place on 16 and 17 February 2017 at the Institut für Deutsche Sprache (IDS) in Mannheim. The project "Lexik des gesprochenen Deutsch" (LeGeDe; Leibniz Competition 2016, funding line "Innovative Projects"), funded by the Leibniz Association, began its work at the IDS in September 2016. Its main goal is the creation of a corpus-based electronic resource on the lexis of spoken German, grounded in lexicological and conversation-analytic studies of authentic spoken-language data.
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a lossless standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard ‘Transcription of spoken language’ with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the “Tsakorpus” search platform. This step allows us to make the corpora available through a web-based search interface. In addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
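The second conversion step (ISO/TEI to a search-platform JSON) can be sketched roughly as follows. The XML fragment, element names and JSON layout here are a heavily simplified invention for illustration, not the actual ISO/TEI or Tsakorpus schemas:

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified stand-in for an ISO/TEI
# 'Transcription of spoken language' document (output of step 1).
TEI_SNIPPET = """
<TEI>
  <u who="SPK1" start="0.0" end="1.2">
    <w>pora</w><w>qonam</w>
  </u>
</TEI>
"""

def tei_to_json(tei_xml):
    """Step 2 sketch: flatten (simplified) ISO/TEI utterances into
    the kind of JSON records a corpus search platform could index."""
    root = ET.fromstring(tei_xml)
    sentences = []
    for u in root.iter("u"):
        sentences.append({
            "speaker": u.get("who"),
            "start": float(u.get("start")),
            "end": float(u.get("end")),
            "tokens": [w.text for w in u.iter("w")],
        })
    return json.dumps({"sentences": sentences})
```

A real converter would additionally carry over morpheme segmentation and media alignment, which is what makes the lossless ISO/TEI intermediate format valuable.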
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on resources previously developed by the participating institutions, QUEST develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are the depositors’ questionnaire and automatic quality assurance, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we split linguistic data into three resource classes (data deposits, collections and corpora). The class of a resource defines the strictness of the quality assurance it should undergo. This division is introduced so that overly strict quality criteria do not prevent researchers from depositing their data.
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on resources previously developed by the participating institutions, QUEST also develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are a questionnaire and automatic quality assurance for depositors of language resources, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we consider three main data maturity levels in order to decide on a suitable level of strictness for the quality assurance. This division has been introduced to avoid a situation in which a set of ideal quality criteria prevents researchers from depositing or even assessing their (legacy) data. The tools described in the paper are work in progress and are expected to be released by the end of the QUEST project in 2022.
The CMDI Explorer
(2020)
We present the CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. The CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
CMDI Explorer
(2021)
We present CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
This paper addresses long-term archiving for large corpora. It focuses on three aspects specific to language resources, namely (1) the removal of resources for legal reasons, (2) the versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases and also of different collections, and (3) the conversion of data to new formats for digital preservation. The paper motivates why language resources may have to be changed and why formats may need to be converted. As a solution, the use of an intermediate proxy object called a signpost is suggested. The approach is exemplified with the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
Signposts for CLARIN
(2020)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold et al. (2020) present signposts as a solution to challenges in the long-term preservation of corpora, especially corpora that are continuously extended and subject to modification, e.g. due to legal injunctions, but that may also overlap with respect to constituents and may be subject to migrations to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure and document the design of the CMDI profile.
Signposts for CLARIN
(2021)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold, Fisseni et al. (2020) present signposts as a solution to challenges in long-term preservation of corpora. Though applicable to digital resources in general, we focus on corpora, especially those that are continuously extended or subject to modification, e.g., due to legal injunctions, but also may overlap with respect to constituents, and may be subject to migrations to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure, notably virtual collections, and document the design for the CMDI profile.
In this contribution, we address the question of which steps must be taken to make scripts used in the preparation and/or analysis of research data as FAIR as possible. We focus both on reproducibility, i.e. the path from the (raw) data to the results of a study, and on reusability, i.e. the possibility of applying a study's methods to other data by means of the script, and examine the following aspects: working environment, data validation, modularization, documentation, and licensing.
Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20–44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a ‘wide’ yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
In many European languages, propositional arguments (PAs) can be realized as different types of structures. Cross-linguistically, complex structures with PAs show a systematic correlation between the strength of the semantic bond and the syntactic union (cf. Givón 2001; Wurmbrand/Lohninger 2023). Different languages also show similarities with respect to the (lexical) licensing of different PAs (cf. Noonan 1985; Givón 2001; Cristofaro 2003 on different predicate types). On a more fine-grained level, however, variation across languages can be observed both with respect to the syntactic-semantic properties of PAs and with respect to their licensing and usage. This presentation takes a multi-contrastive view of different types of PAs as syntactic subjects and objects by looking at five European languages: EN, DE, IT, PL and HU. Our goal is to identify the parameters of variation in the clausal domain with PAs and thereby to contribute to a better understanding of the individual language systems on the one hand and of the nature of linguistic variation in the clausal domain on the other. Phenomena and methodology: We investigate the following types of PAs: direct object (DO) clauses (1), prepositional object (PO) clauses (2), subject clauses (3), and nominalizations (4, 5). Additionally, we discuss clause union phenomena (6, 7). The analyzed parameters include, among others, finiteness, linear position of the PA, (non-)presence of a correlative element, (non-)presence of a complementizer, and the lexical-semantic class of the embedding verb. The phenomena are analyzed on the basis of corpus data (using mono- and multilingual corpora), experimental data (acceptability judgement surveys), or introspective data.
This article investigates mundane photo taking practices with personal mobile devices in the co-presence of others, as well as “divergent” self-initiated smartphone use, thereby exploring the impact of everyday technologies on social interaction. Utilizing multimodal conversation analysis, we examined sequences in which young adults take pictures of food and drinks in restaurants and cafés. Although everyday interactions are abundant in opportunities for accomplishing food photography as a side activity, our data show that taking pictures is also often prioritized over other activities. Through a detailed sequential analysis of video recordings and dynamic screen captures of mobile devices, we illustrate how photographers orient to the momentary opportunities for and relevance of photo taking, that is, how they systematically organize their photographing with respect to the ongoing social encounter and the (projected) changes in the material environment. We investigate how the participants multimodally negotiate the “mainness” and “sideness” (Mondada, 2014) of situated food photography and describe some particular features of participants’ conduct in moments of mundane multiactivity.
The term “pivot” usually refers to two overlapping syntactic units such that the completion of the first unit simultaneously launches the second. In addition, pivots are generally said to be characterized by the smooth prosodic integration of their syntactic parts. This prosodic integration is typically achieved by prosodic-phonetic matching of the pivot components. As research on such turns in a range of languages has illustrated, speakers routinely deploy pivots so as to be able to continue past a point of possible turn completion, in the service of implementing some additional or revised action. This article seeks to build on, and complement, earlier research by exploring two issues in more detail as follows: (1) what exactly do pivotal turn extensions accomplish on the action dimension, and (2) what role does prosodic-phonetic packaging play in this? We will show that pivot constructions not only exhibit various degrees of prosodic-phonetic (non-)integration, i.e., differently strong cesuras, but that they can be ordered on a continuum, and that this cline maps onto the relationship of the actions accomplished by the components of the pivot construction. While tighter prosodic-phonetic integration, i.e., weak(er) cesuring, co-occurs with post-pivot actions whose relationship to that of the pre-pivot tends to be rather retrospective in character, looser prosodic-phonetic integration, i.e., strong(er) cesuring, is associated with a more prospective orientation of the post-pivot’s action. These observations also raise more general questions with regard to the analysis of action.
We present a collection of (currently) about 5,500 commands directed to voice-controlled virtual assistants (VAs) by sixteen initial users of a VA system in their homes. The collection comprises recordings captured by the VA itself and with a conditional voice recorder (CVR) selectively capturing recordings that include the VA-directed commands plus some surrounding context. Alongside a description of the collection, we present initial findings on the patterns of use of the VA systems during the first weeks after installation, including usage timing, the development of usage frequency, distributions of sentence structures across commands, and (the development of) command success rates. We discuss the advantages and disadvantages of the applied collection-specific recording approach and describe potential research questions that can be investigated in the future based on the collection, as well as the merit of combining quantitative corpus-linguistic approaches with qualitative in-depth analyses of single cases.
Comprehending conditional statements is fundamental for hypothetical reasoning about situations. However, the online comprehension of conditional statements containing different conditional connectives is still debated. We report two self-paced reading experiments on German conditionals presenting the conditional connectives wenn (‘if’) and nur wenn (‘only if’) in identical discourse contexts. In Experiment 1, participants read a conditional sentence followed by the confirmed antecedent p and the confirmed or negated consequent q. The final, critical sentence was presented word by word and contained a positive or negative quantifier (ein/kein ‘one/no’). Reading times of the two quantifiers did not differ between the two conditional connectives. In Experiment 2, presenting a negated antecedent, reading times for the critical positive quantifier (ein) did not differ between conditional connectives, while reading times for the negative quantifier (kein) were shorter for nur wenn than for wenn. The results show that comprehenders form distinct predictions about discourse continuations due to differences in the lexical semantics of the tested conditional connectives, shedding light on the role of conditional connectives in the online interpretation of conditionals in general.
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
The paper reviews the results of work done in the context of TEI-Lex0, a joint ENeL / DARIAH / PARTHENOS initiative aimed at formulating guidelines for the encoding of retrodigitized dictionaries by streamlining and simplifying the recommendations of the “Print Dictionaries” chapter of the TEI Guidelines. TEI-Lex0 work is performed by teams concentrating on each of the main components of dictionary entries. The work presented here concerns proposals for constraining TEI-based encoding of orthographic, phonetic, and grammatical information on written and spoken forms of the lemma (headword), including auxiliary inflected forms. We also adduce examples of handling various types of orthographic and phonetic variants, as well as examples of handling the representation of inflectional paradigms, which have received less attention in the TEI Guidelines but which are nonetheless essential for properly exposing data content to the various uses that digitized lexica may have.
It is well known that the distribution of lexical and grammatical patterns is size- and register-sensitive (Biber 1986, and later publications). This fact alone presents a challenge to many corpus-oriented linguistic studies focusing on a single language. When it comes to cross-linguistic studies using corpora, the challenge becomes even greater due to the lack of high-quality multilingual corpora (Kupietz et al. 2020; Kupietz/Trawiński 2022) that are comparable with respect to size and register. That was the motivation for the creation of the European Reference Corpus EuReCo, an initiative started in 2013 at the Leibniz Institute for the German Language (IDS) together with several European partners (Kupietz et al. 2020). EuReCo is an emerging federated corpus, with large virtual comparable corpora across various languages and with an infrastructure supporting contrastive research. The core of the infrastructure is KorAP (Diewald et al. 2016), a scalable open-source platform supporting the analysis and visualisation of properties of texts annotated by multiple and potentially conflicting information layers, and supporting several corpus query languages. Until recently, EuReCo consisted of three monolingual subparts: the German Reference Corpus DeReKo (Kupietz et al. 2018), the Reference Corpus of Contemporary Romanian Language (Barbu Mititelu/Tufiş/Irimia 2018), and the Hungarian National Corpus (Váradi 2002). The goal of the present submission is twofold. On the one hand, it reports on the new component of EuReCo: a sample of the National Corpus of Polish (Przepiórkowski et al. 2010). On the other hand, it presents the results of a new pilot study using the newly extended EuReCo. This pilot study investigates selected Polish collocations involving light verbs and their prepositional / nominal complements (Fig. 1) and extends the collocation analyses of German, Romanian and Hungarian (Fig. 2) discussed in Kupietz/Trawiński (2022).
In mid-2017, as part of our activities within the TEI Special Interest Group for Linguists (LingSIG), we submitted to the TEI Technical Council a proposal for a new attribute class that would gather attributes facilitating simple token-level linguistic annotation. With this proposal, we addressed community feedback complaining about the lack of a specific tagset for lightweight linguistic annotation within the TEI. Apart from @lemma and @lemmaRef, until now TEI encoders could only resort to the generic attribute @ana for inline linguistic annotation, or to the quite complex system of feature structures for robust linguistic annotation, the latter requiring relatively complex processing even for the most basic types of linguistic features. As a result, there now exists a small set of basic descriptive devices which have been made available at the cost of only very small changes to the TEI tagset. The merit of a predefined TEI tagset for lightweight linguistic annotation is the homogeneity of tagging and thus better interoperability of simple linguistic resources encoded in the TEI. The present paper introduces the new attributes, makes a case for one more addition, and presents the advantages of the new system over the legacy TEI solutions.
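The kind of lightweight token-level annotation the proposal enables might look like the fragment below; the sentence, tag values, and helper function are our own illustration, with lemma and part-of-speech carried as plain attributes on `<w>` rather than as feature structures:

```python
import xml.etree.ElementTree as ET

# Illustrative TEI fragment: token-level annotation via simple
# attributes on <w>, with no feature-structure machinery needed.
FRAGMENT = """
<s>
  <w lemma="the" pos="DET">The</w>
  <w lemma="cat" pos="NOUN">cat</w>
  <w lemma="sleep" pos="VERB">sleeps</w>
</s>
"""

def lemmas(fragment):
    """Read the lemma annotations straight off the <w> elements."""
    root = ET.fromstring(fragment)
    return [w.get("lemma") for w in root.iter("w")]
```

The point of the attribute-class proposal is exactly this kind of trivially processable encoding: a consumer needs one attribute lookup per token instead of resolving `@ana` pointers or walking feature-structure markup.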
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars available for German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in terms of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring and adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation will focus on the legal aspects and consequences of this choice of language data.
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI-P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on the three schema versions, a fourth version has been created which takes into account the experiences from encoding our corpora and which is specifically designed for the submission of a feature request to the TEI Council. On our poster we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
In this paper, we describe a schema and models which have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG Wiki and will accompany a feature request to the TEI Council in late 2019.
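A minimal illustration of the basic-unit model (posts as authored, time-stamped contributions) might look as follows. The element and attribute names here are a simplified sketch in the spirit of the customisation, not the final CMC-core schema:

```python
import xml.etree.ElementTree as ET

# Simplified sketch of a chat exchange encoded as a sequence of
# posts; element/attribute inventory is illustrative only.
CHAT_LOG = """
<div type="chat">
  <post who="#userA" when="2019-06-01T12:00:03">hi all :)</post>
  <post who="#userB" when="2019-06-01T12:00:09">hey, welcome!</post>
</div>
"""

def posts_by_author(log, author):
    """Collect the text of all posts attributed to one author."""
    root = ET.fromstring(log)
    return [p.text for p in root.iter("post") if p.get("who") == author]
```

Modelling the post (rather than, say, the line or the sentence) as the basic unit is what lets one schema cover chat, wiki talk, Twitter and blog data alike: each genre differs mainly in the metadata attached to that unit.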
We investigate the optional omission of the infinitival marker in a Swedish future tense construction. During the last two decades the frequency of omission has been rapidly increasing, and this process has received considerable attention in the literature. We test whether the knowledge which has been accumulated can yield accurate predictions of language variation and change. We extracted all occurrences of the construction from a very large collection of corpora. The dataset was automatically annotated with language-internal predictors which have previously been shown or hypothesized to affect the variation. We trained several models in order to make two kinds of predictions: whether the marker will be omitted in a specific utterance and how large the proportion of omissions will be for a given time period. For most of the approaches we tried, we were not able to achieve a better-than-baseline performance. The only exception was predicting the proportion of omissions using autoregressive integrated moving average models for one-step-ahead forecast, and in this case time was the only predictor that mattered. Our data suggest that most of the language-internal predictors do have some effect on the variation, but the effect is not strong enough to yield reliable predictions.
This contribution addresses the question of the extent to which present-day Germans from Russia (adults and adolescents of the first generation, from the 1990s immigration wave out of language islands) can be regarded as re-migrants, which changes are taking place in their variety repertoires, and which difficulties and problems, but also advantages, arise from this specific migration configuration for the immigrated Germans from Russia. The particular situation of re-migration, with its specific linguistic and sociolinguistic issues, is illustrated with examples from the current IDS project "Migrationslinguistik". On the one hand, there are particular variety-linguistic constellations that show generation-specific contours in the Russian-German migrant population. On the other hand, this gives rise to unique conditions of language contact that can influence linguistic-communicative integration and the maintenance of Russian as a migrant language in particular ways.
With recourse to a broader understanding of the concept of translation, the transfer of source texts in one variety into another variety of the same language can also be called translation. This paper focuses on the target language – or rather, the target variety – "easy-to-read language", which is meant to make texts comprehensible for people with communication limitations. Considering its origins in the disability rights movement, the aim is to inform affected persons about their rights and democratic processes, i.e. to translate especially legal texts into the so-called easy-to-read language. Although there is a whole range of rules and guidelines for formulating in easy-to-read language, "none offers a sufficient approach for translation into easy-to-read language" (Bredel & Maaß, 2016a, p. 109). Standardization of the variety is also still a long way off. On the one hand, the contribution takes stock of legal regulations in easy-to-read language. On the other hand, four versions of the Federal Participation Law in easy-to-read language are analysed with regard to their external features and the constructions used to explain technical terminology. The analysis shows that legal texts in easy-to-read language are (still) quite limited in number and are also difficult to find. As for the second part, the constructions used exhibit great structural variance, both intra- and intertextually. It is therefore questionable whether the addressees can access the texts independently. It also remains necessary to make the rules, their formulations and their implementations clearer so that the translations fulfil their function.
The European language world is characterized by an ideology of monolingualism and national languages. This language-related world view interacts with social debates and definitions about linguistic autonomy, diversity, and variation. For the description of border minorities and their sociolinguistic situation, however, this view reaches its limits. In this article, the conceptual difficulties with a language area that crosses national borders are examined. It deals with the minority in East Lorraine (France) in particular. On the language-historical level, this minority is closely related to the language of its (big) neighbor Germany. At the same time, it looks back on a conflictive history with this country, has never constituted a (subordinate) political–administrative unit, and has experienced very little public support. We want to address the questions of how speakers themselves reflect on their linguistic situation and what concepts and argumentative figures they bring up in relation to which (Germanic) variety. To this end, we look at statements from guideline-based interviews. In the paper, we present initial observations gained through qualitative content analysis.
Linguistic Variation and Change in 250 Years of English Scientific Writing: A Data-Driven Approach
(2020)
We trace the evolution of Scientific English through the Late Modern period to modern time on the basis of a comprehensive corpus composed of the Transactions and Proceedings of the Royal Society of London, the first and longest-running English scientific journal established in 1665. Specifically, we explore the linguistic imprints of specialization and diversification in the science domain which accumulate in the formation of “scientific language” and field-specific sublanguages/registers (chemistry, biology etc.). We pursue an exploratory, data-driven approach using state-of-the-art computational language models and combine them with selected information-theoretic measures (entropy, relative entropy) for comparing models along relevant dimensions of variation (time, register). Focusing on selected linguistic variables (lexis, grammar), we show how we deploy computational language models for capturing linguistic variation and change and discuss benefits and limitations.
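One of the information-theoretic measures named above, relative entropy (Kullback–Leibler divergence), can be illustrated on smoothed unigram distributions of two text samples. The toy "early" and "late" sentences below are invented stand-ins, not material from the Royal Society corpus:

```python
# Hedged sketch: relative entropy D(p || q) between word distributions
# of two periods/registers. The two toy samples are invented; the study
# applies such measures to the Royal Society Corpus with language models.
import math
from collections import Counter

def distribution(tokens, vocab):
    counts = Counter(tokens)
    total = sum(counts.values())
    # Add-one smoothing keeps D(p || q) finite for unseen words.
    return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

def relative_entropy(p, q):
    """D(p || q) in bits: extra bits to encode p with a code optimal for q."""
    return sum(p[w] * math.log2(p[w] / q[w]) for w in p)

early = "the experiment was made by the author".split()
late = "the data show a significant effect of the treatment".split()
vocab = set(early) | set(late)
print(relative_entropy(distribution(early, vocab), distribution(late, vocab)) >= 0.0)  # True
```

Relative entropy is asymmetric and non-negative, which makes it suitable for asking how surprising one period's language is from the vantage point of another.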
The importance of research data management in science policy discourse and in everyday scientific work is steadily increasing. National and international research infrastructures, consortia, disciplinary data centres and institutional competence centres approach the challenges from different perspectives. This contribution presents the Data Center for the Humanities at the University of Cologne as an example of a university data centre with a disciplinary specialization in the humanities.
Most authors agree that modal particles - a class of function words widely considered characteristic of Modern German - cannot receive prosodic stress, though the reasons for this restriction have not yet been satisfactorily explained. This paper argues that unstressability follows from the general contribution of modal particles to compositional utterance meaning, which requires them to take scope over focus-background structures. Form and function of modal particle meanings are modelled and illustrated for five representative examples - the particles wohl, ja, eigentlich, eben and halt. It is argued that these as well as other particles, whenever they occur under prosodic stress, can preserve neither the meaning nor the syntactic behaviour of modal particles. All instances of stressed particles in German must therefore be categorized in other functional classes.
This paper investigates the syntactic behaviour of adverbial clauses in contemporary German and Italian. It focuses on three main questions: (i) How many degrees of syntactic integration of adverbial clauses are there to be distinguished by an adequate grammatical description of the two languages? (ii) Which linear and hierarchical positions in the structure of the matrix sentence can be occupied by adverbial clauses? (iii) Which is the empirical distribution of adverbial clauses introduced by the conjunctions als, während, wenn, obwohl and weil in German, as well as quando, mentre, se, sebbene and perché in Italian?
Responding to question (i), a distinction is drawn between strongly integrated, weakly integrated and syntactically disintegrated adverbial clauses. There are further degrees on the gradient of syntactic integration, which are not examined in this paper. Responding to question (ii), eight classes of structural positions in the matrix sentence are identified that can be occupied by adverbial clauses. Five of them are positions of syntactic integration, three are positions of disintegration. Responding to question (iii), the distribution of the ten classes of adverbial clauses is described on the basis of a corpus of internet data. Strongly integrated, weakly integrated and disintegrated adverbial clauses show clearly different distributions within the structure of the matrix sentence. The semantic classes of adverbial clauses (temporal, adversative, conditional, concessive, causal) are also distributed differently.
The proposed contribution will shed light on current and future challenges concerning legal and ethical questions in research data infrastructures. The authors will present the work of NFDI’s section on Ethical, Legal and Social Aspects (hereinafter: ELSA), whose aim is to facilitate cross-disciplinary cooperation between the NFDI consortia in the relevant areas of management and re-use of research data.
In 2010, ISO published a standard for syntactic annotation, ISO 24615:2010 (SynAF). Back then, the document specified a comprehensive reference model for the representation of syntactic annotations, but no accompanying XML serialisation. ISO’s subcommittee on language resource management (ISO TC 37/SC 4) is working on making the SynAF serialisation ISOTiger an additional part of the standard. This contribution addresses the current state of development of ISOTiger, along with a number of open issues on which we are seeking community feedback in order to ensure that ISOTiger becomes a useful extension to the SynAF reference model.
So-called "pragmaticalized multi-word units" are highly frequent in German and are at times subject to far-reaching processes of phonetic reduction. These processes can produce realization variants that, in retrospect, can be traced back to more than one lexical source form. With [ˈzɐmɐ], the present study examines a particularly striking case of this kind by means of a perception experiment.
Using the polyfunctional multi-word unit <was weiß ich> as an example, the interplay of pragmatic and phonetic differentiation in pragmaticalization processes is examined. To this end, spontaneous-speech tokens from the corpus "Deutsch heute" are analysed. The observed range of phonetic variation points to a complex relationship with the respective pragmatic functions.
The CLARIN infrastructure as an interoperable language technology platform for SSH and beyond
(2023)
CLARIN is a European Research Infrastructure Consortium developing and providing a federated and interoperable platform to support scientists in the field of the Social Sciences and Humanities in carrying out language-related research. This contribution provides an overview of the entire infrastructure with a particular focus on tool interoperability, ease of access to research data, tools and services, the importance of sharing knowledge within and across (national) communities, and community building. By taking into account FAIR principles from the very beginning, CLARIN succeeded in becoming a successful example of a research infrastructure that is actively used by its members. The benefits CLARIN members reap from their infrastructure secure a future for their common good that is both sustainable and attractive to partners beyond the original target groups.
Linguistic studies frequently work with a distinction between spoken and written language, or between communication of immediacy and distance (Nähe and Distanz). Assuming a continuum between these poles lends itself to locating the most diverse forms of utterance, including unconventional text types such as pop songs. We design, implement and evaluate an automated procedure that uses uncorrelated decision trees to make corresponding predictions at the text level. To identify the poles, we define a feature catalogue of linguistic phenomena discussed as markers of immediacy/orality or distance/literacy, and apply it to prototypical immediacy/orality texts as well as prototypical distance/literacy texts. Based on the very good classification accuracy, we then locate a series of further text types using the trained classifiers. Pop songs emerge as a "middle" text type that combines linguistically motivated features of different stages of the continuum. Furthermore, we show that our models locate utterances that were communicated orally but transcribed beforehand or afterwards, such as speeches or interviews, completely differently from prototypical conversational data, and we uncover classification differences for social media variants. The aim is not a systematic, binding placement on the continuum, but an empirical approximation to the question of which features that are comparatively easy to determine automatically ("shallow features") demonstrably influence this placement.
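The feature-catalogue idea can be sketched very simply: count occurrences of marker words for each pole and derive a text-level signal. The tiny marker lists and the plain difference score below are invented illustrations standing in for the paper's actual catalogue and its trained tree-ensemble classifiers:

```python
# Hedged sketch: "shallow features" for locating a text on the
# orality/literacy (Nähe/Distanz) continuum. Marker lists are invented
# illustrations, and a simple marker-count difference stands in for the
# trained uncorrelated-decision-tree (random forest) classifiers.
import re

ORALITY_MARKERS = {"halt", "eben", "ne", "also", "ja"}          # particles etc.
LITERACY_MARKERS = {"jedoch", "somit", "hinsichtlich", "bezüglich"}

def shallow_features(text):
    tokens = re.findall(r"\w+", text.lower())
    return {
        "oral": sum(t in ORALITY_MARKERS for t in tokens),
        "literate": sum(t in LITERACY_MARKERS for t in tokens),
        "mean_word_len": sum(map(len, tokens)) / len(tokens),
    }

def continuum_score(text):
    """> 0 leans towards Nähe/orality, < 0 towards Distanz/literacy."""
    f = shallow_features(text)
    return f["oral"] - f["literate"]

print(continuum_score("Das ist halt eben so, ne?"))                       # 3
print(continuum_score("Hinsichtlich der Frage ist dies jedoch strittig."))  # -2
```

In the actual study, many such features feed a classifier trained on prototypical texts of each pole; the score here merely shows how shallow counts already separate the two toy sentences.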
This contribution presents a quantitative approach to speech, thought and writing representation (ST&WR) and steps towards its automatic detection. Automatic detection is necessary for studying ST&WR in a large number of texts and thus identifying developments in form and usage over time and in different types of texts. The contribution summarizes results of a pilot study: First, it describes the manual annotation of a corpus of short narrative texts in relation to linguistic descriptions of ST&WR. Then, two different techniques of automatic detection – a rule-based and a machine learning approach – are described and compared. Evaluation of the results shows success with automatic detection, especially for direct and indirect ST&WR.
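A rule-based detector of the kind compared in the pilot study can be caricatured in a few lines: direct representation is flagged where a quoted span co-occurs with a reporting verb. The verb list and quote inventory below are illustrative assumptions, not the study's actual rules:

```python
# Hedged sketch of a rule-based recognizer for direct speech, thought and
# writing representation (ST&WR). The reporting verbs and quote characters
# are illustrative assumptions, not the pilot study's rule set.
import re

REPORTING_VERBS = {"sagte", "fragte", "dachte", "schrieb", "rief"}
# Straight, German low/high, and guillemet quotation marks.
QUOTE_RE = re.compile(r'["„“»«].+?["„“»«]')

def is_direct_stwr(sentence):
    """Rule: a quoted span plus a reporting verb marks direct ST&WR."""
    has_quote = bool(QUOTE_RE.search(sentence))
    tokens = set(re.findall(r"\w+", sentence.lower()))
    return has_quote and bool(tokens & REPORTING_VERBS)

print(is_direct_stwr('Sie sagte: "Ich komme morgen."'))  # True
print(is_direct_stwr("Er ging langsam nach Hause."))     # False
```

Such rules work well for direct representation, which is overtly marked; indirect and free indirect forms lack such surface cues, which is why the study also evaluates a machine learning approach.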
The paper explores factors that influence the distribution of constituent words of compounds over the head and modifier position. The empirical basis for the study is a large database of German compounds, annotated with respect to the morphological structure of the compound and the semantic category of the constituents. The study shows that the polysemy of the constituent word, its constituent family size, and its semantic category account for tendencies of the constituent word to occur in either modifier or head position. Furthermore, the paper explores the degree to which the semantic category combination of head and modifier word, e.g., x=substance and y=artifact, indicates the semantic relation between the constituents, e.g., y_consists_of_x.
Projektvorstellung – Redewiedergabe. Eine literatur- und sprachwissenschaftliche Korpusanalyse
(2018)
The ongoing DFG project "Redewiedergabe" is a use case of quantitative linguistics and literary studies and deals with the phenomenon of speech representation on the basis of large amounts of data. To this end, a corpus is being manually annotated with forms of speech representation on the one hand, and methods for the automatic recognition of the phenomenon are being developed on the other. The aim is to answer research questions about the development of speech representation, especially in the 19th century.
KoMuX, the Kompositamuster-Explorer (www.owid.de/plus/komux), is a web application that makes it possible to search more than 50,000 German nominal compounds specifically for abstract or lexically partially specified patterns. Different visualizations help to grasp structures and relationships within the result set.
The study presented here examines the proportions of different forms of speech representation in a comparison of two types of literature from opposite ends of the spectrum: highbrow literature, defined as works that appeared on the shortlists of literary prizes, and dime novels (Heftromane), mass-produced narrative works that are mostly sold through newsstands and were formerly pejoratively referred to as "novels of the lower classes" (Nusser 1981). Our thesis is that these types of literature differ in their narrative style, and that this is reflected in the forms of representation used. The focus of the study is on the dichotomy between direct and non-direct representation, which was already drawn in classical rhetoric.
We present recognizers for four very different types of speech, thought and writing representation (STWR) for German texts. The implementation is based on deep learning with two different customized contextual embeddings, namely FLAIR embeddings and BERT embeddings. This paper gives an evaluation of our recognizers with a particular focus on the differences in performance we observed between those two embeddings. FLAIR performed best for direct STWR (F1=0.85), BERT for indirect (F1=0.76) and free indirect (F1=0.59) STWR. For reported STWR, the comparison was inconclusive, but BERT gave the best average results and best individual model (F1=0.60). Our best recognizers, our customized language embeddings and most of our test and training data are freely available and can be found via www.redewiedergabe.de or at github.com/redewiedergabe.
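The comparison between the FLAIR-based and BERT-based recognizers rests on per-category F1 scores. As a minimal sketch, the function below computes F1 for one STWR label over token-level annotations; the gold/predicted labels are invented, not the project's evaluation data (which is available via github.com/redewiedergabe):

```python
# Hedged sketch: per-label F1, the metric used to compare the STWR
# recognizers. Label sequences below are invented toy data.

def f1(gold, pred, label):
    tp = sum(g == p == label for g, p in zip(gold, pred))
    fp = sum(p == label and g != label for g, p in zip(gold, pred))
    fn = sum(g == label and p != label for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

gold = ["direct", "O", "indirect", "direct", "O", "direct"]
pred = ["direct", "direct", "indirect", "O", "O", "direct"]
print(round(f1(gold, pred, "direct"), 2))  # → 0.67
```

The paper's actual evaluation operates on annotated spans rather than toy token lists, but the precision/recall trade-off captured by F1 is the same.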
In this paper, we present our work in progress to automatically identify free indirect representation (FI), a type of thought representation used in literary texts. With a deep learning approach using contextual string embeddings, we achieve F1 scores between 0.45 and 0.5 (sentence-based evaluation for the FI category) on two very different German corpora, a clear improvement on earlier attempts at this task. We show how consistently marked direct speech can help in this task. In our evaluation, we also consider human inter-annotator scores and thus address measures of certainty for this difficult phenomenon.
Our current era of globalization is characterized above all by increased mobility, namely by the increasing mobility of people and the development of new communication technologies, including the mobility of linguistic signs and resources. This process raises new theoretical and methodological questions in linguistics, which has resulted in the development of a new sociolinguistics of globalization (Blommaert 2010) in recent years. One of the most obvious ways to trace this new and dynamic development is to analyze individual language repertoires, especially those of migrants. In this essay, I examine aspects of the communicative repertoire of a refugee who fled to Germany in 2015 to escape the civil war in Syria. I draw on two interviews I conducted with him (in the following I refer to him by the pseudonym „Baran“). The first interview with Baran was recorded in 2016, a few months after his arrival in Germany. The second interview is from 2023, seven years later. In both recordings, German was the dominant language of interaction. I will analyze and show the characteristics of his German at the beginning of his immigration, how he resorts to practices of language mixing between German, Turkish and English (which has recently also been referred to as translanguaging) and how his German has developed over the course of the past seven years.
This contribution presents results of the project "Deutsch im Beruf: Die sprachlich-kommunikative Integration der Flüchtlinge", which is carried out at the Leibniz Institute for the German Language (IDS). The first part discusses the two-stage language assessment in the general integration courses, which was implemented together with the Goethe-Institut. In the first survey, at the beginning of the courses, the participants' social data and language biographies were collected with a tablet questionnaire. The second survey, at the end of the same courses, aimed to determine the participants' attained level of oral competence by analysing speech recordings. In the second part of the contribution, we present results of our ethnographic and conversation-analytic field studies, which we conducted in various work contexts such as qualification programmes, dual vocational training and company internships. With regard to the central questions of mutual understanding and language teaching in the workplace, our ethnographies revealed three prototypical practices, which we discuss in more detail: a) "hardly any securing of understanding and language teaching", b) "ad hoc securing of understanding and language teaching" and c) "systematic securing of understanding and language teaching". In the last part of the contribution, we focus on the results of our long-term ethnographic study of company internships of refugee students. The analysis of repairs shows the development of an L2 speaker's interactional competence, which goes hand in hand with increasing communicative integration in team conversations.
This paper analyses intensification in German digitally-mediated communication (DMC) using a corpus of YouTube comments written by young people (the NottDeuYTSch corpus). Research on intensification in written language has traditionally focused on two grammatical aspects: syntactic intensification, i.e. the use of particles and other lexical items, and morphological intensification, i.e. the use of compounding. Using a wide variety of examples from the corpus, the paper identifies novel ways in which intensification is realized in DMC and suggests a new taxonomy of classification for future analysis of intensification.
This paper introduces the Nottinghamer Korpus deutscher YouTube-Sprache (‘The Nottingham German YouTube Language Corpus’ - or NottDeuYTSch corpus). The corpus comprises over 33 million words, taken from roughly 3 million YouTube comments published between 2008 and 2018, written by a young, German-speaking demographic. The NottDeuYTSch corpus provides an authentic and representative linguistic snapshot of young German speakers and offers significant opportunities for in-depth research in several linguistic fields, such as lexis, morphology, syntax, orthography, multilingualism, and conversation and discourse analysis.
The NottDeuYTSch corpus is a freely available collection of YouTube comments written under German-language videos by young people between 2008 and 2018. The article uses the NottDeuYTSch corpus to investigate how YouTube comments can be used to produce learning materials and how corpora of Digitally-Mediated Communication can benefit intermediate learners of German. The article details the effects of authentic communication within YouTube comments on teenage learners, examining how they can influence the psycholinguistic factors of motivation, foreign language anxiety, and willingness to communicate. The article also discusses the benefits and limitations of using authentic corpus material for the development of teaching material.
Positioning oneself and others politically is an elementary linguistic and social practice. This is shown, for example, by discussions about European identity in times of the British exit from the EU and a controversial EU border policy, by attitudes towards arms deliveries to crisis regions in the wake of the war in Ukraine that broke out in 2022, and by recurring disputes over topics such as everyday racism, sexism and discrimination. These examples, which include current political events as well as ongoing, repeatedly flaring-up social debates about fundamental questions of living together, make one thing clear: where and how we locate ourselves in society is an everyday question. Political positionings are not only performed constantly; like non-positionings, they are also continuously thematized and controversially discussed. This introduction to the volume introduces the topic of political positioning by clarifying the term and giving an example from practice.
Germany’s diverse history in the 20th century raises the question of how social upheavals were constituted in and through political discourse. By analysing basic concepts, the research network “The 20th century in basic concepts” (based at the Leibniz institutes IDS, ZfL, ZZF) aims to identify continuities and discontinuities in political and social discourse. In this way, historical sediments of the present are to be uncovered and those challenges identified that emerged in the course of the 20th century and continue to shape political discourse until the present.
In an earlier publication it was claimed that there is no useful relationship between Swahili-English dictionary look-up frequencies and the occurrence frequencies for the same wordforms in Swahili-English corpora, at least not beyond the top few thousand wordforms. This result was challenged using data for German by a different team of researchers using an improved methodology. In the present article the original Swahili-English data is revisited, using ten years’ worth of it rather than just two, and using the improved methodology. We conclude that there is indeed a positive relationship. In addition, we show that online dictionary look-up behaviour is remarkably similar across languages, even when, as in our case, one is dealing with languages from very dissimilar language families. Furthermore, online dictionaries turn out to have minimum look-up success rates, below which they simply cannot go. These minima are language-sensitive and vary depending on the regularity of the searched-for entries, but are otherwise constant no matter the size of randomly sampled dictionaries. Corpus-informed sampling always improves on any random method. Lastly, from the point of view of the graphical user interface, we argue that the average user of an online bilingual dictionary is better served with a single search box, rather than separate search boxes for each dictionary side.
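The "positive relationship" between look-up and corpus frequencies is a rank-correlation question. The sketch below computes Spearman's coefficient on invented frequency pairs (the study itself uses ten years of Swahili-English look-up logs); a value near 1 would indicate that frequently looked-up wordforms are also frequent in corpora:

```python
# Hedged sketch: Spearman rank correlation between dictionary look-up
# frequencies and corpus frequencies. The frequency pairs are invented,
# and this simple implementation does not handle tied ranks.

def ranks(values):
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

lookups = [9421, 5210, 4788, 1203, 877, 412]        # invented look-up counts
corpus = [88000, 61000, 54000, 9000, 15000, 3000]   # invented corpus counts
print(round(spearman(lookups, corpus), 3))  # → 0.943
```

Even with one swapped pair the coefficient stays high, illustrating how a positive relationship can hold despite local disagreements between the two rankings.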
How do people communicate in mobile settings of interaction? How does mobility affect the way we speak? How does mobility exert influence on the manner in which talk itself is consequential for how we move in space? Recently, questions of this sort have attracted increasing attention in the human and social sciences. This Special Issue contributes to the emerging body of studies on mobility and talk by inspecting an ordinary and ubiquitous phenomenon in which communication among mobile participants is paramount: participation in traffic. This editorial presents previous work on mobility in natural settings, as carried out by interactionally oriented researchers. It also shows how the investigation into traffic participation adds new perspectives to research on language and communication.
Recipient design is a key constituent of intersubjectivity in interaction. Recipient design of turns is informed by prior knowledge about and shared experience with recipients. Designing turns in order to be maximally effective for the particular recipient(s) is crucial for accomplishing intersubjectively coordinated action. This paper reports on a specific pragmatic structure of recipient design, i.e. counter-factual recipient design, and how it impinges on intersubjectivity in interaction. Based on an analysis of video recordings of driving school lessons in German, two kinds of counterfactual recipient design of instructors' requests are distinguished: pedagogic and egocentric turn-design. Counterfactual, pedagogic turn-design is used strategically to diagnose student skills and to create opportunities for corrective instructions. Egocentric turn-design rests on private, non-shared knowledge of the instructor. Egocentrically designed turns imply expectations of how to comply with requests which cannot be recovered by the student and which lead to a breakdown of intersubjective cooperation. This paper identifies practices, sources and interactional consequences of these two kinds of counterfactual recipient design. In addition, the study enhances our understanding of recipient design in at least three ways. It shows that recipient design does not only concern referential and descriptive practices, but also the indexing of intelligible projections of next actions; it highlights the productive, other-positioning effects of recipient design; it argues that recipient design should be analyzed in terms of temporally extended interactional trajectories, linking turn-constructional practices to interactional histories and consecutive trajectories of joint action.
This contribution explores the connection between the complexity of political argumentation processes and the diversification of the semantics of keywords whose meaning is contested in the argumentation process and unfolded in numerous facets. The object of the study is the use of "Ökologie" ('ecology') in the arbitration talks on the Stuttgart 21 railway project. In contrast to existing analyses of semantic struggles, the focus is less on how one party semanticizes an expression in opposition to others. Rather, it is shown how semantic diversification and ambiguity of "Ökologie" arise in the expert argumentation process and which communicative effects this entails for the possibility of citizen participation. Three practices are identified with which the participants in the interaction themselves react to semantic diversification and ambiguity and attempt to make the expression unambiguously interpretable and the quaestio decidable: imputations of strategy, popularizations, and populism. The interaction analyses show that these practices themselves reproduce the very problem they are meant to solve.
This paper argues that conversation analysis has largely neglected the fact that meaning in interaction relies on inferences to a high degree. Participants treat each other as cognitive agents, who imply and infer meanings, which are often consequential for interactional progression. Based on the study of audio- and video-recordings from German talk-in-interaction, the paper argues that inferences matter to social interaction in at least three ways. They can be explicitly formulated; they can be (conventionally) indexed, but not formulated; or they may be neither indexed nor formulated yet would be needed for the correct understanding of a turn. The last variety of inferences usually remains tacit, but is needed for smooth interactional progression. Inferences in this case become an observable discursive phenomenon if misunderstandings are treated by the explication of correct (accepted) and wrong (unaccepted) inferences. The understanding of referential terms, analepsis, and ellipsis regularly relies on inferences. Formulations, third-position repairs, and fourth-position explications of erroneous inferences are practices of explicating inferences. There are conventional linguistic means like discourse markers, connectives, and response particles that index specific kinds of inferences. These practices belong to a larger class of inferential practices, which play an important role for indexing and accomplishing intersubjectivity in talk-in-interaction.
In social interaction, different kinds of word-meaning can become problematic for participants. This study analyzes two meta-semantic practices, definitions and specifications, which are used in response to clarification requests in German implemented by the format Was heißt X (‘What does X mean?’). In the data studied, definitions are used to convey generalizable lexical meanings of mostly technical terms. These terms are either unknown to requesters, or, in pedagogical contexts, requesters ask in order to check the addressee’s knowledge. Specifications, in contrast, clarify aspects of local speaker meanings of ordinary expressions (e.g., reference, participants in an event, standards applied to scalar expressions). Both definitions and specifications are recipient-designed with respect to the (presumed) knowledge of the addressee and tailored to the topical and practical relevancies of the current interaction. Both practices attest to the flexibility and situatedness of speakers’ semantic understandings and to the systematicity of using meta-semantic practices differentially for different kinds of semantic problems. Data come from mundane and institutional interaction in German from the public corpus FOLK.
How do people’s interactional practices change over time? Can conversation analysis identify those changes, and if so, how? In this introductory article, we scrutinize the novel insights that can be gained from examining interactional practices over time and discuss the related methodological challenges for longitudinal CA. We first retrace CA’s interest in the temporality of social interaction and then review three lines of current CA work on change over time: developmental studies, studies of sociohistorical change, and studies of joint interactional histories. Existing work shows how the execution of locally coordinated actions and their meanings change over time; how prior actions inform future actions; and how resources, practices, and structures of joint action emerge over people’s repeated interactional encounters. We conclude by arguing that the empirical analysis of the microlevel organization of social interaction, which is the hallmark of CA, can elucidate the fine-grained situated interactional infrastructure that provides for the larger-scale social dynamics that have been of interest to other lines of research.
Research on multimodal interaction has shown that simultaneity of embodied behavior and talk is constitutive for social action. In this study, we demonstrate different temporal relationships between verbal and embodied actions. We focus on uses of German darf/kann ich? (“may/can I?”) in which speakers initiate, or even complete the embodied action that is addressed by the turn before the recipient’s response. We argue that through such embodied conduct, the speaker bodily enacts high agency, which is at odds with the low deontic stance they express through their darf/kann ich?-TCUs. In doing so, speakers presuppose that the intersubjective permissibility of the action is highly probable or even certain. Moreover, we demonstrate how the speaker’s embodied action, joint perceptual salience of referents, and the projectability of the action addressed with darf/kann ich? allow for a lean syntactic design of darf/kann ich?-TCUs (i.e., pronominalization, object omission, and main verb omission). Our findings underscore the reflexive relationship between lean syntax, sequential organization and multimodal conduct.
Schegloff (1996) has argued that grammars are “positionally-sensitive”, implying that the situated use and understanding of linguistic formats depends on their sequential position. Analyzing the German format Kannst du X? (corresponding to English Can you X?) based on 82 instances from a large corpus of talk-in-interaction (FOLK), this paper shows how different action-ascriptions to turns using the same format depend on various orders of context. We show that not only sequential position, but also epistemic status, interactional histories, multimodal conduct, and linguistic devices co-occurring in the same turn are decisive for the action implemented by the format. The range of actions performed with Kannst du X? and their close interpretive interrelationship suggest that they should not be viewed as a fixed inventory of context-dependent interpretations of the format. Rather, the format provides for a root-interpretation that can be adapted to local contextual contingencies, yielding situated action-ascriptions that depend on constraints created by contexts of use.
Overtaking as an interactional achievement: video analyses of participants' practices in traffic
(2018)
In this article we pursue a systematic and extensive study of overtaking in traffic as an interactional event. Our focus is on the accountable organisation and accomplishment of overtaking by road users in real-world traffic situations. Data and analysis are drawn from multiple research groups studying driving from an ethnomethodological and conversation analytic perspective. Building on multimodal and sequential analyses of video recordings of overtaking events, the article describes the shared practices which overtakers and overtaken parties use in displaying, recognizing and coordinating their manoeuvres. It examines the three sequential phases of an overtaking event: preparation and projection; the overtaking proper; the re-alignment post-phase including retrospective accounts and assessments. We identify how during each of these phases drivers and passengers organize intra-vehicle and inter-vehicle practices: driving and non-driving related talk between vehicle occupants, the emerging spatiotemporal ecology of the road, and the driving actions of other road users. The data are derived from a two-camera set-up recording the road ahead and the car interior. The recordings are from three settings: daily commuting, driving lessons, race-car coaching. The events occur on a variety of road types (motorways, country roads, city streets, a race track, etc.), in six languages (English, Finnish, French, German, Italian, and Swedish) and in seven countries (Australia, Finland, France, Germany, Sweden, Switzerland, and the UK). From an exceptionally diverse collection of video data, the study of which is made possible thanks to the innovative collaboration of multiple researchers, the article exhibits the range of practical challenges and communicative skills involved in overtaking.
This special issue investigates early responses—responsive actions that (start to) unfold while the production of the responded-to turn and action is still under way. Although timing in human conduct has gained intense interest in research, the early production of responsive actions has so far largely remained unexplored. But what makes early responses possible? What do such responses tell us about the complex interplay between syntax, prosody, and embodied conduct? And what sorts of actions do participants accomplish by means of such early responses? By addressing these questions, the special issue seeks to offer new advances in the systematic analysis of temporal organization in interaction, contributing to broader discussions in the language and cognitive sciences as to the social coordination of human conduct. In this introductory article, we discuss the role of temporality and sequentiality in social interaction, specifically focusing on projective and anticipatory mechanisms and the interplay between multiple semiotic resources, which are crucial for making early responses possible.
According to Positioning Theory, participants in narrative interaction can position themselves on a representational level concerning the autobiographical, told self, and a performative level concerning the interactive and emotional self of the tellers. The performative self is usually much harder to pin down, because it is a non-propositional, enacted self. In contrast to everyday interaction, psychotherapists regularly topicalize the performative self explicitly. In our paper, we study how therapists respond to clients' narratives by interpretations of the client's conduct, shifting from the autobiographical identity of the told self, which is the focus of the client's story, to the present performative self of the client. Drawing on video recordings from three psychodynamic therapies (tiefenpsychologisch fundierte Psychotherapie) with 25 sessions each, we will analyze in detail five extracts of therapists' shifts from the representational to the performative self. We highlight four findings:
• Whereas clients' narratives often serve to support identity claims in terms of personal psychological and moral characteristics, therapists tend rather to focus on clients' feelings, motives, current behavior, and ways of interacting.
• In response to clients' stories, therapists first show empathy and confirm clients' accounts, before shifting to clients' performative self.
• Therapists ground the shift to clients' performative self by references to clients' observable behavior.
• Therapists do not simply expect affiliation with their views on clients' performative self. Rather, they use such shifts to promote the clients' self-exploration. Yet, if clients resist exploring their selves in more detail, therapists more explicitly ascribe motives and feelings that clients do not seem to be aware of. The shift in positioning levels thus seems to have a preparatory function for engendering therapeutic insights.
Taking the use of the esthetic term wabi sabi (Japanese compound noun) in a series of German- and English-language theater rehearsals as an example, this article studies the emergence of shared meanings and uses of an expression over an interactional history. We track how shared understandings and uses of wabi sabi develop over the course of a series of theater rehearsals. We focus on the practices by which understandings of wabi sabi are displayed, adopted, and negotiated. We discuss complexities and intransparencies of the manifestation of common ground in multiparty interactions and its relationship to the emergence of routine uses of the expression. Data are in English and German with English translation.
Our study deals with early bodily responses to directives (requests and instructions, i.e., second pair parts [SPPs]) produced before the first pair part (FPP) is complete. We show how early bodily SPPs build on the properties of an emerging FPP. Our focus is on the successive incremental coordination of components of the FPP with components of the SPP. We show different kinds of micro-sequential relationships between FPP and SPP: successive specification of the SPP building on the resources that the FPP makes available, the readjustment or repair of the SPP in response to the emerging FPP, and reflexive micro-sequential adaptations of the FPP to an early SPP. This article contributes to our understanding of the origins of projection in interaction and of the relationship between sequentiality and simultaneity in interaction. Data are video recordings of interaction in German.
This paper presents an algorithm and an implementation for efficient tokenization of texts of space-delimited languages based on a deterministic finite state automaton. Two representations of the underlying data structure are presented and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
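The core idea can be illustrated with a minimal sketch, assuming a simple character-class automaton (the classes and transition rule here are illustrative, not the paper's actual implementation): tokens of a space-delimited language are emitted whenever the automaton transitions between character classes.

```python
# Minimal DFA-style tokenizer sketch for a space-delimited language.
# States correspond to coarse character classes; a token boundary is
# emitted whenever the automaton changes state.

def char_class(c):
    """Map a character to a coarse class (illustrative, not exhaustive)."""
    if c.isspace():
        return "SPACE"
    if c.isalnum():
        return "WORD"
    return "PUNCT"

def tokenize(text):
    tokens = []
    state = None   # current character class (the DFA state)
    start = 0      # start offset of the token being scanned
    for i, c in enumerate(text):
        cls = char_class(c)
        if cls != state:                      # state transition => boundary
            if state is not None and state != "SPACE":
                tokens.append(text[start:i])  # flush the completed token
            state, start = cls, i
    if state is not None and state != "SPACE":
        tokens.append(text[start:])           # flush the final token
    return tokens
```

Because each character triggers exactly one table lookup and at most one transition, the scan is linear in the input length, which is what makes such automata attractive for very large corpora.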
When comparing different tools in the field of natural language processing (NLP), the quality of their results usually has first priority. This is also true for tokenization. In the context of large and diverse corpora for linguistic research purposes, however, other criteria also play a role – not least sufficient speed to process the data in an acceptable amount of time. In this paper we evaluate several state-of-the-art tokenization tools for German – including our own – with regard to these criteria. We conclude that while not all tools are applicable in this setting, no compromises regarding quality need to be made.
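A speed criterion of this kind can be measured with a simple harness like the following sketch (the tokenizer and corpus are stand-ins, not the tools evaluated in the paper):

```python
import time

# Hypothetical benchmark harness: wall-clock throughput of a tokenizer
# over a list of texts, i.e. tokens processed per second.

def throughput(tokenize, texts):
    """Return tokens per second for `tokenize` applied to all `texts`."""
    start = time.perf_counter()
    n_tokens = sum(len(tokenize(t)) for t in texts)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed if elapsed > 0 else float("inf")

# Example with Python's whitespace split as a trivial baseline tokenizer:
rate = throughput(str.split, ["ein kleiner test"] * 1000)
```

For a fair comparison, each real tool would be run on the same corpus with warm caches, and quality would be scored separately against a gold-standard segmentation.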
In this paper, we present our experiences and decisions in dealing with challenges in developing, maintaining and operating online research software tools in the field of linguistics. In particular, we highlight reproducibility, dependability, and security as important aspects of quality management – taking into account the special circumstances in which research software is usually created.
To improve grammatical function labelling for German, we augment the labelling component of a neural dependency parser with a decision history. We present different ways to encode the history, using different LSTM architectures, and show that our models yield significant improvements, resulting in an LAS for German that is close to the best result from the SPMRL 2014 shared task (without the reranker).
We propose a new type of subword embedding designed to provide more information about unknown compounds, a major source for OOV words in German. We present an extrinsic evaluation where we use the compound embeddings as input to a neural dependency parser and compare the results to the ones obtained with other types of embeddings. Our evaluation shows that adding compound embeddings yields a significant improvement of 2% LAS over using word embeddings when no POS information is available. When adding POS embeddings to the input, however, the effect levels out. This suggests that it is not the missing information about the semantics of the unknown words that causes problems for parsing German, but the lack of morphological information for unknown words. To augment our evaluation, we also test the new embeddings in a language modelling task that requires both syntactic and semantic information.
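The intuition behind compound embeddings can be sketched as follows: an unseen German compound is split into known constituents and represented by composing their vectors. The greedy split and the averaging composition below are a common baseline for illustration only, not necessarily the composition function the paper proposes.

```python
import numpy as np

# Illustrative: approximate an embedding for an out-of-vocabulary German
# compound by averaging the embeddings of its known constituent words.

def compound_embedding(compound, lexicon, dim):
    """Greedy left-to-right split of `compound` into words found in
    `lexicon` (longest match first), then average their vectors."""
    parts, i = [], 0
    while i < len(compound):
        for j in range(len(compound), i, -1):   # longest match first
            if compound[i:j] in lexicon:
                parts.append(compound[i:j])
                i = j
                break
        else:
            i += 1  # skip an unmatched character (e.g. a linking element)
    if not parts:
        return np.zeros(dim)
    return np.mean([lexicon[p] for p in parts], axis=0)

# Toy lexicon with 2-dimensional vectors:
lex = {"haus": np.array([1.0, 0.0]), "tür": np.array([0.0, 1.0])}
vec = compound_embedding("haustür", lex, dim=2)   # mean of the two parts
```

The composed vector then serves as an additional input to the parser alongside (or instead of) the unknown-word embedding.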
The song corpus affords insights into certain societal discourses that are less prominent in other language corpora. This also becomes apparent in the analysis of phrasemes in the song corpus.
Phrasemes are established word combinations; they preserve collective knowledge and collective culture. Element of Crime, Fettes Brot, Udo Lindenberg, Stefan Stoppok, Konstantin Wecker, and Marius Müller-Westernhagen, the authors of my small subcorpus, are anti-establishment and anything but conservative. While they frequently use phrasemes of the most diverse structures and types, they also frequently caricature them, play with them casually, question their meaning, and alter their meaning. Their particular stance gives rise to particular phrasemes and particular phraseme variants.
A survey of recent linguistic work shows a frequently emphasized close connection between language and identity, above all between one's own language and ethnic identity. That language can only be one resource for identity construction in a bi- or multilingual context, however, is rarely pointed out. The following essay examines the minority of German Aussiedler (ethnic German resettlers) from the former Soviet Union as a characteristic example of a loosened bond between language and ethnic identity. The focus is on the second generation, whose sense of belonging to an ethnic identity as Germans has not, or only rarely, changed despite the completed language shift.
This contribution focuses on speakers of German in a minority situation in the Caucasus. They are descendants of former German minorities of the Russian Empire and the Soviet Union who emigrated to the Transcaucasian territories in several phases from the end of the 18th century onwards. The people interviewed are those who, owing to interethnic marriages, escaped the deportations of 1941 and still live in the South Caucasus. Using methods characteristic of sociolinguistics, the author recorded, transcribed, and analyzed formal semi-structured interviews conducted in 2017 in the South Caucasus with two generations of descendants. The article presents the situation of the German varieties (Swabian dialect and Standard German) and their speakers in language-contact constellations in the Caucasus, as well as the measures taken by various groups of actors to preserve German language and culture in Georgia.
Drawing on a series of examples, this article presents various situations of bilingual dictionary use that demonstrate the importance of properly acquiring and developing lexicographic competences in the context of foreign-language teaching and learning, here specifically German as a foreign language. Three basic competences serve as the starting point: selecting the appropriate lexicographic resource for the communicative situation, disambiguating appropriately in the context of L2 reception and L2-to-L1 translation, and selecting and using the equivalent in the context of L2 production and translation into the L2. The aim of this contribution is to highlight the need for users of a bilingual lexicographic resource to correctly identify the lexicological information pertaining to the form, content, and use of the consulted lemmas, both in L2 reception and production and in the context of translation from and into the L2.
Individuals with Autism Spectrum Disorder (ASD) experience a variety of symptoms sometimes including atypicalities in language use. The study explored differences in semantic network organisation of adults with ASD without intellectual impairment. We assessed clusters and switches in verbal fluency tasks (‘animals’, ‘human feature’, ‘verbs’, ‘r-words’) via curve fitting in combination with corpus-driven analysis of semantic relatedness and evaluated socio-emotional and motor action related content. Compared to participants without ASD (n=39), participants with ASD (n=32) tended to produce smaller clusters, longer switches, and fewer words in semantic conditions (no p values survived Bonferroni correction), whereas relatedness and content were similar. In ASD, semantic networks underlying cluster formation appeared comparably small without affecting strength of associations or content.
Georg Trakl's poems are generally considered semantically difficult to access and pose considerable challenges for interpretation. This essay centers on a single sentence-like verse from one of Trakl's poems. Its aim is to show how literary interpretations of this verse can be reconstructed linguistically, namely on the basis of fundamental lexical properties, processes of meaning shift, pragmatically based enrichment processes, world knowledge and literary knowledge, and, in particular, detailed assumptions about argument structure. One conclusion of this essay is that the oscillating meaning of the verse under examination rests not only on reinterpretations and meaning enrichments but, above all, on the amalgamation of different argument-structure patterns.
Tok Pisin is a pidgin/creole language spoken since the late 19th century in most of the area that nowadays constitutes Papua New Guinea, where it emerged under German colonial rule. Unusually for a pidgin/creole, Tok Pisin is characterized by an extensive lexicographic history. The Tok Pisin Dictionary Collection at the Leibniz Institute for the German Language, described in this article, includes about fifty dictionaries. The collection forms the basis for the sketch of the history of Tok Pisin lexicography as part of colonial history presented here. The basic thesis is that in the history of Tok Pisin, lexicographic strategies, dictionary structures, and publication patterns reflect the interest (and disinterest) of various groups of colonial actors. Among these colonial actors, European scientists, Catholic missionaries, and the Australian and US militaries played important roles.
In the lexicon of pidgin and creole languages we can see an important part of these languages’ history of origin and of language contact. The current paper deals with the lexical sources of Tok Pisin and, more specifically, with words of German origin found in this language. During the period of German colonial domination of New Guinea and a number of insular territories in the Pacific (ca. 1885–1915), German words entered the emerging Tok Pisin lexicon. Based on a broad range of lexical and lexicographic data from the early 20th century up until today, we investigate the actual or presumed German origin of a number of Tok Pisin words and trace different lexical processes of integration that are linked to various, often though not always colonially determined, contact settings and sociocultural interactions.
In computational linguistics, cascaded processing of texts is common: texts are first segmented (tokenized), i.e. tokens and, where applicable, sentence boundaries are identified. This typically produces a list, or single-column table, which successive processing steps extend with additional columns – positional annotations such as parts of speech and lemmas for the tokens in the first column. During tokenization, all spaces are deleted. Punctuation marks have always been problematic here, since they can be extremely ambiguous, as have multiword names that contain spaces but actually belong together. This contribution focuses on the apostrophe, which is used in manifold ways in the texts of Udo Lindenberg, and on multiword names that we want to obtain as single tokens. For this purpose we use the complete Lindenberg archive of the song-korpus.de repository, categorize the phenomena that occur, create a gold standard, and develop a segmentation tool, based partly on rules and partly on machine learning, that recognizes and tokenizes the apostrophes in particular, but also – using a lexicon – multiword names as intended. We then train the RNN-Tagger (Schmid, 2019) and show that training specifically adapted to these texts yields accuracies ≥ 96%. The result is not only a gold standard of the annotated corpus, which is made available to the Songkorpus repository, but also an adapted version of the RNN-Tagger (available on GitHub) that can be used for similar texts.
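A rule-based component for the apostrophe problem can be sketched as follows. The rule is hypothetical and much simpler than the paper's actual tool: an apostrophe flanked by letters marks an elision inside a word (as in "geh'n") and must not split the token, while other apostrophes are separated as punctuation.

```python
import re

# Illustrative apostrophe-aware tokenizer for German song lyrics.
# A word may contain internal apostrophes ("geh'n", "hab'n"); a bare
# apostrophe or any other non-space character becomes its own token.
TOKEN = re.compile(r"[A-Za-zÄÖÜäöüß]+(?:'[A-Za-zÄÖÜäöüß]+)*|'|\S")

def tokenize_line(line):
    """Return the tokens of one lyric line, keeping elided forms intact."""
    return TOKEN.findall(line)
```

A lexicon lookup for multiword names (joining e.g. a two-token band name into one unit) would run as a post-processing pass over this token stream.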
We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task which contains text from the news domain, whereas DeReKo also covers other domains including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser’s probability estimates for unlabeled and labeled attachment as main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain to the training corpus.
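The validation step, checking that confidence estimates track real accuracy, amounts to a rank correlation between per-sentence attachment probabilities and attachment scores on the annotated test set. The sketch below implements a plain Spearman correlation without tie handling; the numbers it would be applied to are hypothetical, and the paper's exact statistic is not specified here.

```python
# Spearman rank correlation sketch: if per-sentence parser confidence
# correlates strongly with gold attachment scores, confidence can serve
# as a proxy evaluation on unannotated domains.

def rank(xs):
    """Ranks 1..n of the values in xs (assumes no ties)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    for pos, i in enumerate(order):
        ranks[i] = pos + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation coefficient (no tie correction)."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))
```

With a high correlation established on the annotated set, mean attachment probabilities can then be compared across the unannotated DeReKo domains.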
We present the use of count-based and predictive language models for exploring language use in the German Reference Corpus DeReKo. For collocation analysis along the syntagmatic axis we employ traditional association measures based on co-occurrence counts as well as predictive association measures derived from the output weights of skipgram word embeddings. For inspecting the semantic neighbourhood of words along the paradigmatic axis we visualize the high-dimensional word embeddings in two dimensions using t-distributed stochastic neighbour embeddings (t-SNE). Together, these visualizations provide a complementary, explorative approach to analysing very large corpora in addition to corpus querying. Moreover, we discuss count-based and predictive models with respect to scalability and maintainability in very large corpora.
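One traditional count-based association measure of the kind contrasted here is pointwise mutual information (PMI). The following sketch computes it from a toy list of observed co-occurrence pairs; the data and the exact measure used in the paper are assumptions for illustration.

```python
import math
from collections import Counter

# Count-based collocation measure sketch: PMI of a word pair from
# observed co-occurrence pairs, PMI(x, y) = log2 p(x, y) / (p(x) p(y)).

def pmi(pairs, w1, w2):
    """PMI of the ordered pair (w1, w2) over a list of observed pairs."""
    pair_counts = Counter(pairs)
    word_counts = Counter(w for p in pairs for w in p)
    n = len(pairs)
    p_xy = pair_counts[(w1, w2)] / n
    p_x = word_counts[w1] / (2 * n)   # each pair contributes two words
    p_y = word_counts[w2] / (2 * n)
    return math.log2(p_xy / (p_x * p_y))
```

A predictive counterpart would score the same pair from the output weights of a trained skipgram model instead of raw counts, which is precisely the contrast the abstract describes.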