In our paper, we present a case study on the quality of concept relations in the manually developed terminological resource of grammis, an information system on German grammar. We assess a SKOS representation of the resource using the tool qSKOS, create a typology of the issues identified by the tool, and conduct a qualitative analysis of selected cases. We identify and discuss aspects that can motivate quality issues and uncover that ill-formed relations are frequently indicative of deeper issues in the data model. Finally, we outline how these findings can inform improvements in our resource’s data model, discussing implications for the machine readability of terminological data.
This contribution describes the development of the graph-theoretic analysis tool Laniakea, which was created to visualize phenomena and changes in terminological networks. We set out the theoretical foundations, design decisions, and technical details of the tool's implementation. In addition, the contribution reports on experiences gathered when applying Laniakea during the revision of the terminological resources of the grammatical information system grammis.
Einleitung
(2020)
A corpus-based academic grammar of German is an enormous undertaking, especially if it aims at using state-of-the-art methodology while ensuring that its study results are verifiable. The Bausteine-series, which is being developed at the Leibniz Institute for the German Language (IDS), presents individual “building blocks” for such a grammar. In addition to the peer-reviewed texts, the series publishes the results of statistical analyses and, for selected topics, the underlying data sets.
Bericht vom ersten nationalen Best-Practice-Workshop der deutschen Open-Access-Monografienfonds
(2020)
For a long time, the lecture dominated performatively presented scientific communication. Given academic traditions, it is possible to make a connection between the lecture and classical rhetoric, a highly differentiated instrument of analysis. The tradition of the lecture has been perpetuated in the presentation of research results, first in the use of transparencies and subsequently through computer-based projections. Yet the use of media technology has also allowed new practices to emerge, including mediation practices hitherto neglected in the theory of rhetoric.
There are plenty of language battles, but who would have thought that the publication of the 28th edition of the Rechtschreibduden, of all things, would stir tempers to the point that several of these battles go into the next round. Publisher and editorial team are dragged onto the language-policy stage because the German language lends itself so well to the purposes of identitarian politics.
"Revolutions are the locomotives of history" is a famous saying by Karl Marx. Can this be applied to language history as well? And what are its locomotives? A recent thesis holds that pandemics, wars, and other "revolutionary" events with a strong impact on demography can set language-historical change in motion.
Die Sprachpolitik der AfD
(2020)
Language policy has established itself as a rewarding political field in recent years. In the environment of the AfD and in the party's parliamentary representation, appeals, motions, inquiries, and legislative initiatives address various topics that were already set in the AfD's 2016 manifesto. What kind of language-policy positions are these, and what explains the interest in these topics?
Nachruf auf Ulrich Engel
(2020)
affiziertes Objekt
(2020)
Türkisch in Deutschland
(2020)
Einleitung
(2020)
Russisch
(2020)
Usually, weak inflection of an attributive or nominalized adjective occurs if the adjective is preceded by an inflected determiner: mit diesem technischen Aufwand (‘at great technical expense’). Otherwise, the inflection of the adjective is strong: mit technischem Aufwand. Following this rule of thumb, we would expect strong inflection of an adjective following another adjective whenever the determiner is missing: mit hohem technischem Aufwand. But many German speakers opt for a weak dative singular ending -en following the strong ending -em on the first adjective: mit hohem technischen Aufwand. This chapter shows which explanatory variables play a role in this variation within standard German.
Interaktionale Semantik
(2020)
This contribution first presents the three basic methodological procedures of conversation analysis and of discursive psychology, which now follows its approach: transcription, detailed sequential analysis of single cases, and the (comparative) analysis of data collections. After an overview of basic findings on the organization of interactions, three psychological research areas are discussed: the constitution of identity in conversation, the role of cognition in social interaction, and research on psychotherapy talk.
Using video-recordings from one day of a theater project for young adults, this paper investigates how the meaning of novel verbal expressions is interactionally constituted and elaborated over the interactional history of a series of activities. We examine how the theater director introduces and instructs the group in the Chekhovian technique of acting, which is based on “imagining with the body,” and how the imaginary elements of the technique are “brought into existence” in the language of the instructions. By tracking shifts in the instructor’s use of the key expressions invisible/imaginary/inner body or movement through a series of exercises, we demonstrate how they are increasingly treated as real and perceivable bodily conduct. The analyses focus on the instructor’s attribution of factual and agentive properties to these expressions, and the changes that these properties undergo over the series of instructions. This case demonstrates the significance of longitudinal processes for the establishment of shared meaning in social interaction. The study thereby contributes to the field of interactional semantics and to longitudinal studies of social interaction.
According to Positioning Theory, participants in narrative interaction can position themselves on a representational level concerning the autobiographical, told self, and a performative level concerning the interactive and emotional self of the tellers. The performative self is usually much harder to pin down, because it is a non-propositional, enacted self. In contrast to everyday interaction, psychotherapists regularly topicalize the performative self explicitly. In our paper, we study how therapists respond to clients' narratives by interpretations of the client's conduct, shifting from the autobiographical identity of the told self, which is the focus of the client's story, to the present performative self of the client. Drawing on video recordings from three psychodynamic therapies (tiefenpsychologisch fundierte Psychotherapie) with 25 sessions each, we will analyze in detail five extracts of therapists' shifts from the representational to the performative self. We highlight four findings:
• Whereas clients' narratives often serve to support identity claims in terms of personal psychological and moral characteristics, therapists tend rather to focus on clients' feelings, motives, current behavior, and ways of interacting.
• In response to clients' stories, therapists first show empathy and confirm clients' accounts, before shifting to clients' performative self.
• Therapists ground the shift to clients' performative self by references to clients' observable behavior.
• Therapists do not simply expect affiliation with their views on clients' performative self. Rather, they use such shifts to promote the clients' self-exploration. Yet if clients resist exploring their selves in more detail, therapists more explicitly ascribe motives and feelings that clients do not seem to be aware of. The shift in positioning levels thus seems to have a preparatory function for engendering therapeutic insights.
Interaktive Emergenz und Stabilisierung. Zur Entstehung kollektiver Kreativität in Theaterproben
(2020)
Coaching outcome research convincingly argues that coaching is effective and facilitates change in clients. While coaching practice literature depicts questions as a key vehicle for such change, empirical findings regarding the local and global change potential of questions are so far largely missing in both (psychological) outcome research and (linguistic and psychological) process research on coaching. The local change potential of questions refers to a turn-by-turn transformation as a result of their sequentiality; the global change potential is related to the power of questions to initiate, process, and finalize established phases of change. This programmatic article on questions, or rather questioning sequences, in executive coaching pursues two goals: firstly, it takes stock of available insights into questions in coaching and advocates for Conversation Analysis as a fruitful methodological framework to assess the local change potential of questioning sequences. Secondly, it points to the limitations of a local turn-by-turn approach to unravel the overall change potential of questions and calls for an interdisciplinary approach to bring both local and global effectiveness into relation. Such an approach is premised on conversational sequentiality and psychological theories of change and facilitates research on questioning sequences as both local and global agents of change across the continuum of coaching sessions. We present the TSPP Model as a first result of such an interdisciplinary cooperation.
As part of a larger research paradigm on understanding client change in the helping professions from an interprofessional perspective, this paper applies a conversation analytic approach to investigate therapists’ requesting examples (REs) and their interactional and sequential contribution to clients’ change during the diagnostic evaluation process. The analyzed data comprises 15 videotaped intake interviews that followed the system of Operationalized Psychodynamic Diagnosis. Therapists’ requesting examples in psychodiagnostic interviews explicitly or implicitly criticize the patient’s prior turn as insufficient. They also open a retro-sequence and in the following turns provide for a description that helps clarify meaning and evince psychic or relational aspects of the topic at hand. While the therapist’s prior request initiates the patient’s insufficient presentation, the patient’s example presentation is regularly followed by the therapist’s summarizing comments or by further requests. Requesting examples thus are a particular case of requests that follow expandable responses regarding the sequential organization; yet, given that they make examples conditionally relevant, they are more specific. With the help of this sequential organization, participants co-construct common knowledge which allows the therapist to pursue the overall aim of therapy, which is to increase the patients’ awareness of their distorted perceptions, and thus to pave the way for change.
The newest generation of speech technology has caused a huge increase in audio-visual data that is now enhanced with orthographic transcripts, for example through automatic subtitling on online platforms. Research data centers and archives contain a range of new and historical data, which are currently only partially transcribed and therefore only partially accessible for systematic querying. Automatic Speech Recognition (ASR) is one option for making these data accessible. This paper tests the usability of a state-of-the-art ASR system on a historical (from the 1960s) but regionally balanced corpus of spoken German, and on a relatively new corpus (from 2012) recorded in a narrow area. We observed a regional bias of the ASR system, with higher recognition scores for the north of Germany and lower scores for the south. A detailed analysis of the narrow-region data revealed, despite relatively high ASR confidence, some specific word errors due to a lack of regional adaptation. These findings need to be considered in decisions on further data processing and the curation of corpora, e.g. correcting transcripts or transcribing from scratch. Such geography-dependent analyses also have potential for ASR development: they allow targeted data selection for training/adaptation and increase sensitivity towards varieties of pluricentric languages.
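Recognition scores of the kind compared here are conventionally based on word error rate (WER). As a minimal illustration, assuming nothing about the authors' actual evaluation pipeline, WER can be computed as the word-level edit distance between a reference transcript and an ASR hypothesis, normalized by the length of the reference:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: Levenshtein distance over words / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)
```

A higher WER for, say, southern speakers than for northern ones would be exactly the kind of regional bias the abstract describes.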
Questions are central interventions in coaching. Nevertheless, little is known about how they contribute to change in clients. With its focus on the sequential order of utterances such as "question – answer – reaction", linguistic conversation analysis can describe this change potential of questions and thus make it accessible for (continuing-education) practice and human resources management.
Lean syntax: how argument structure is adapted to its interactive, material, and temporal ecology
(2020)
It has often been argued that argument structure in spoken discourse is less complex than in written discourse. This paper argues that lean argument structure, in particular argument omission, gives evidence of how the production and understanding of linguistic structures is adapted to the interactive, material, and temporal ecology of talk-in-interaction. It is shown how lean argument structure builds on participants' ongoing bodily conduct, joint perceptual salience, joint attention, and their orientation to expectable next actions within a joint project. The phenomena discussed in this paper are verb-derived discourse markers and tags, analepsis in responsive actions, and ellipsis in first actions, such as requests and instructions. The study draws from transcripts and audio- and video-recordings of naturally occurring interaction in German from the Research and Teaching Corpus of Spoken German (FOLK).
This article makes an empirical and a methodological contribution to the comparative study of action. The empirical contribution is a comparative study of three distinct types of action regularly accomplished with the turn format du meinst x (“you mean/think x”) in German: candidate understandings, formulations of the other’s mind, and requests for a judgment. These empirical materials are the basis for a methodological exploration of different levels of researcher abstraction in the comparative study of action. Two levels are examined: the (coarser) level of conditionally relevant responses (what a response speaker must do to align with the action of the prior turn) and the (finer) level of “full alignment” (what a response speaker can do to align with the action of a prior turn). Both levels of abstraction provide empirically viable and analytically interesting descriptive concepts for the comparative study of action. Data are in German.
This contribution collects various potential usage patterns involving the German lemma wissen and draws on their interactional-linguistic functional descriptions in the research literature for an attempt at structuring them. At the center is a multifunctional, action-oriented approach to describing interaction in conversation. The contribution takes up considerations discussed in the research project Lexik des gesprochenen Deutsch (LeGeDe) on building a corpus-based lexicographic resource of lexical particularities of spoken German in interaction.
Keywords: patterns, lexis of spoken German, interaction, Internet lexicography
At the Leibniz Institute for the German Language (IDS), a novel dictionary was developed in the program area "Lexicography and Language Documentation" that descriptively covers easily confusable expressions in their current public usage. The electronic reference work "Paronyme – Dynamisch im Kontrast", published in 2018, stands out in three respects:
1) First, it offers multi-level contrastive layers of description and flexible forms of presentation;
2) second, its meaning explanations are designed along cognitive-conceptual lines, responding to a long-standing call for a more cognitively oriented lexicography;
3) third, it draws on data sources and analysis methods that made it possible to identify paronyms comprehensively and, for the first time, to evaluate them empirically.
For this reason, we took an empirical approach to the question of how, or whether, certain groups still use dictionaries at all today, and whether they consciously distinguish them from other language-related data on the web. The aim was to collect empirical data on how learners of German as a foreign language (DaF) actually work (rather than what they say about it in retrospect), above all to provide a better empirical basis for teaching. The central questions were:
• How do DaF learners use lexicographic resources today?
• Which search strategies do they apply?
• Do they differentiate between the various resources?
• Which strategies prove particularly successful?
IDS aktuell. Neues aus dem Leibniz-Institut für Deutsche Sprache in Mannheim. Jg. 2020, Heft 1
(2020)
IDS aktuell. Neues aus dem Leibniz-Institut für Deutsche Sprache in Mannheim. Jg. 2020, Heft 3
(2020)
IDS aktuell. Neues aus dem Leibniz-Institut für Deutsche Sprache in Mannheim. Jg. 2020, Heft 2
(2020)
Political borders demonstrably have a strong influence on both language use and language perception. This study analyzes, for the Bavarian dialect area spanning Germany, Austria, and Italy, how speakers/hearers structure that area spatially (horizontal-areal) and with respect to its behavioral spectrum (vertical-social). The perceptions of linguistic and extralinguistic features, and the attitudes toward them, are examined in detail.
Using a pluridimensional survey setting consisting of in-depth interviews, an online questionnaire, a mental-map survey, and a listener judgment test, it can be shown that extralinguistic barriers, such as political borders, correlate strongly with attitudinal-perceptual boundaries. In the awareness of the respondents, the state border between Germany and Austria thus also constitutes a language border.
effiziertes Objekt
(2020)
Terminology work in a commercial context assumes two working phases: a comprehensive descriptive phase, in which the concept structure and current terminology use are recorded but not yet evaluated, and a prescriptive phase, in which the actual standardization intervention takes place. In practice, the descriptive phase is often curtailed and the emphasis placed directly on prescription. In our contribution, we discuss the potential of thorough descriptive terminology work for improving knowledge communication within knowledge management. Using the example of a research project on the grammar of German, we show what this closely theory-oriented descriptive work looks like in practice, which challenges it entails, and how its results can support knowledge management.
Das "Verzeichnis grundlegender grammatischer Fachbegriffe" 2019. Anliegen, Konzeption, Perspektiven
(2020)
This chapter begins with a sketch of the specifics of our approach, an overview of the contents of the chapters on word formation and some methodological notes. It then discusses the general characteristics of word formations and of their overall inventory, comparing word formations to primary words. Furthermore, the chapter explores the relative frequencies of word formations in different vocabulary areas and traces the word formation profiles of individual parts of speech. Finally, it compiles the characteristic word formation rules for different parts of speech.
Individuals with Autism Spectrum Disorder (ASD) experience a variety of symptoms, sometimes including atypicalities in language use. The study explored differences in semantic network organisation of adults with ASD without intellectual impairment. We assessed clusters and switches in verbal fluency tasks ('animals', 'human feature', 'verbs', 'r-words') via curve fitting in combination with corpus-driven analysis of semantic relatedness, and evaluated socio-emotional and motor-action-related content. Compared to participants without ASD (n=39), participants with ASD (n=32) tended to produce smaller clusters, longer switches, and fewer words in semantic conditions (no p values survived Bonferroni correction), whereas relatedness and content were similar. In ASD, semantic networks underlying cluster formation appeared comparatively small without affecting the strength of associations or content.
So-called "pragmaticalized multi-word units" are highly frequent in German and are at times subject to far-reaching phonetic reduction processes. These can produce realization variants that, in retrospect, can be traced back to more than one lexematic source form. The present study uses a perception experiment to examine [ˈzɐmɐ], a particularly striking case of this kind.
This thesis describes work in three areas: grammar engineering, computer-assisted language learning, and grammar learning. These three parts are connected by the concept of a grammar-based language learning application. Two types of grammars are of concern. The first we call resource grammars: extensive descriptions of natural languages. Part I focuses on this kind of grammar. The others are domain-specific or application-specific grammars, which describe only the fragment of natural language determined by the domain of a certain application. Domain-specific grammars are relevant for Part II and Part III. Another important distinction is between humans learning a new natural language using computational grammars (Part II) and computers learning grammars from example sentences (Part III). Part I of this thesis focuses on grammar engineering and grammar testing. It describes the development and evaluation of a computational resource grammar for Latin. Latin is known for its rich morphology and free word order; both have to be handled in a computationally efficient way. A special focus is on methods for evaluating computational grammars using corpus data. Such an evaluation is presented for the Latin resource grammar. Part II, the central part, describes a computer-assisted language learning application based on domain-specific grammars. The application demonstrates how computational grammars can be used to guide user input and how language learning exercises can be modeled as grammars. This allows us to put computational grammars at the center of the design of language learning exercises used to help humans learn new languages. Part III, the final part, is dedicated to a method for learning domain- or application-specific grammars from a wide-coverage grammar and small sets of example sentences.
Here a computer learns a grammar for a fragment of a natural language from example sentences, potentially without any additional human intervention. These learned grammars can be based, for example, on the Latin resource grammar described in Part I and used as domain-specific lesson grammars in the language learning application described in Part II.
Nonnative-accented speakers face prevalent discrimination. The assumption that people freely express negative sentiments toward nonnative speakers has also guided common research methods. However, recent studies did not consistently find downgrading, so that prejudice against nonnative accents might even be questioned at first sight. The present theoretical article bridges these contradictory findings in three ways: (a) We illustrate that nonnative speakers with foreign accents frequently may not be downgraded in commonly used first-impression and employment-scenario paradigms. It appears that relatively controlled responding may be influenced by norms and motivations to respond without prejudice, whereas negative biases emerge in spontaneous responding. (b) We present an integrative view based on knowledge of modern forms of prejudice to develop modern notions of accentism, which allow for predictions of when accent biases are (not) likely to surface. (c) We conclude with implications for interventions and a tailored research agenda.
Blogg Dir deinen Urlaub nach Tunesien! Zur Erläuterung des Musters [VImp PROPReflexivDat NPAkk]
(2020)
This contribution offers a semantic and syntactic account of the pattern [VImp PROPReflexivDat NPAkk]. This pattern, which semantically correlates with verbs of acquisition such as anschaffen, is also attested with communication verbs such as bloggen and facebooken as well as with the contact verb rubbeln. Drawing on the concept of coercion, i.e. semantic adaptation, the co-occurrence of the pattern with these verbs is described and explained. The empirical source is the German corpora of 2012 and 2014 from the Corpora from the Web. The present study was conducted as part of my dissertation on the argument structure and meaning of medial communication verbs in German and Spanish in cross-linguistic comparison.
In this article, we examine the current situation of data dissemination and provision for CMC corpora. In doing so, we aim to provide a guiding grid for future projects that will improve the transparency and replicability of research results as well as the reusability of the created resources. Based on the FAIR guiding principles for research data management, we evaluate the 20 European CMC corpora listed in the CLARIN CMC Resource family, identify successful strategies among the existing corpora, and establish best practices for future projects. We give an overview of existing approaches to data referencing, dissemination, and provision in European CMC corpora, and discuss the methods, formats, and strategies used. Furthermore, we discuss the need for community standards and offer recommendations for best practices when creating a new CMC corpus.
In the discussion of methodology and methods, different scientific fields and research activities always find a common denominator. Ulrike Froschauer has engaged extensively and intensively with questions of the sociology of organizations for many years. Her book publications, such as "Organisationen in Bewegung. Beiträge zur interpretativen Organisationsanalyse" (2012) or "Organisationen im Wechselspiel von Dynamik und Stabilität" (2015), provide good access to her scholarly work. The field of our research group is a different one, namely media studies, specifically media reception research. In the 1980s we developed the integrative model of "Strukturanalytische Rezeptionsforschung" (structural-analytic reception research) and have elaborated it over the years at different research sites in numerous individual studies. What has connected us, the Viennese organizational sociologist Ulrike Froschauer and the Basel media sociologists, is a lasting interest in methodological questions.
We present recognizers for four very different types of speech, thought and writing representation (STWR) for German texts. The implementation is based on deep learning with two different customized contextual embeddings, namely FLAIR embeddings and BERT embeddings. This paper gives an evaluation of our recognizers with a particular focus on the differences in performance we observed between those two embeddings. FLAIR performed best for direct STWR (F1=0.85), BERT for indirect (F1=0.76) and free indirect (F1=0.59) STWR. For reported STWR, the comparison was inconclusive, but BERT gave the best average results and best individual model (F1=0.60). Our best recognizers, our customized language embeddings and most of our test and training data are freely available and can be found via www.redewiedergabe.de or at github.com/redewiedergabe.
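The F1 scores reported above are the harmonic mean of precision and recall. As a small, self-contained sketch (the tp/fp/fn counts are hypothetical inputs, not the redewiedergabe project's own evaluation code), the metric can be computed from raw counts like this:

```python
def precision_recall_f1(tp: int, fp: int, fn: int) -> tuple[float, float, float]:
    """Compute precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts, guarding against division by zero."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

An F1 of 0.85 for direct STWR, for instance, means precision and recall for that category average (harmonically) to 0.85.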
The study presented here examines the proportions of different forms of speech representation in a comparison of two types of literature from opposite ends of the spectrum: highbrow literature, defined as works shortlisted for literary prizes, and dime novels (Heftromane), mass-produced narrative works mostly sold through magazine outlets and formerly disparaged as "novels of the lower class" (Nusser 1981). Our thesis is that these types of literature differ in their narrative style and that this is reflected in the forms of representation used. The focus of the study is on the dichotomy between direct and non-direct representation, which was already drawn in classical rhetoric.
In this paper, we describe a schema and models which have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG Wiki and will accompany a feature request to the TEI council in late 2019.
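To make the idea of a basic CMC unit concrete, the following sketch builds a single post element in the TEI namespace using Python's standard library. The element name follows the model of posts described above, and the @who/@when attributes follow common TEI conventions; the authoritative element and attribute definitions are those of the CMC-core customisation itself, which should be taken from the SIG materials:

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"

def make_post(who: str, when: str, text: str) -> ET.Element:
    # A CMC "post" unit in the TEI namespace; @who would point to a
    # participant declared in the header, @when carries the timestamp.
    post = ET.Element(f"{{{TEI_NS}}}post", {"who": who, "when": when})
    p = ET.SubElement(post, f"{{{TEI_NS}}}p")
    p.text = text
    return post

# Serialise with TEI as the default namespace.
ET.register_namespace("", TEI_NS)
xml_str = ET.tostring(
    make_post("#A02", "2020-01-01T10:15:00", "Hello from the chat!"),
    encoding="unicode",
)
print(xml_str)
```

A full corpus document would wrap many such units in the usual TEI macro-structure (teiHeader plus text body).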
Linguistic signs in public space (the Linguistic Landscape, LL) carry, beyond their primary meaning and function such as information and advertising, secondary information about the hierarchy of languages, the representation of minority languages, linguistic tolerance towards multilingualism in that space, and so on. This multilayered nature makes linguistic signs in public space valuable learning objects with which students can train the discursive reading skills that are so important in professional life. The article opens up perspectives on ways of linking LL analysis with the contents of traditional German studies curricula as well as of neighbouring disciplines, and points to existing studies in this field.
Based on the Wikipedia corpora of the Leibniz Institute for the German Language, this article analyses morphosyntactic phenomena in a German-Italian comparison. Specifically, the case study focuses on confixes of Latin or Greek origin that were initially borrowed predominantly for the language of medicine. Meanwhile, with altered semantics, they are also used in general-language word formation: -phob- (German) and -fob- (Italian) as well as -man- (German) and -man- (Italian) occur in general-language word-formation products that show formal and functional equivalences in German and Italian. On the talk pages of the online encyclopedia, Wikipedia authors use terms such as Lösch(o)manie or cancellomania, which are to be read as illness metaphors, to metadiscursively regulate the behaviour of other authors in Wikipedia's collaborative text production.
This article presents new, representative data on areal variation in Germany, collected by the Leibniz Institute for the German Language in the 2017/2018 round of the Innovation Sample of the Socio-Economic Panel (SOEP) of the German Institute for Economic Research (DIW). First, dialect competence was surveyed; at the aggregate level, the familiar north-south divide appears, but the individual degree of competence among dialect speakers shows only minor regional differences. Second, evaluations of dialects were collected; here, northern German and Bavarian are rated particularly positively and Saxon particularly negatively, with regional patterns playing a role. Also striking is the very uniformly positive evaluation of Standard German throughout the country.
The annual microcensus provides Germany's most important official statistics. Unlike a census, it does not cover the whole population but a representative 1% sample of it. In 2017, the German microcensus asked a question on the language of the population: 'Which language is mainly spoken in your household?' Unfortunately, the question, its design, and its position within the microcensus questionnaire have several shortcomings, the main one being that multilingual repertoires cannot be captured by it. We therefore make recommendations for improving the microcensus language question: first and foremost, the question (i.e. its wording, design, and answer options) should make it possible to count multilingual repertoires.
Linguistic Variation and Change in 250 Years of English Scientific Writing: A Data-Driven Approach
(2020)
We trace the evolution of Scientific English through the Late Modern period to the present on the basis of a comprehensive corpus composed of the Transactions and Proceedings of the Royal Society of London, the first and longest-running English scientific journal, established in 1665. Specifically, we explore the linguistic imprints of specialization and diversification in the science domain which accumulate in the formation of "scientific language" and field-specific sublanguages/registers (chemistry, biology, etc.). We pursue an exploratory, data-driven approach using state-of-the-art computational language models and combine them with selected information-theoretic measures (entropy, relative entropy) for comparing models along relevant dimensions of variation (time, register). Focusing on selected linguistic variables (lexis, grammar), we show how we deploy computational language models to capture linguistic variation and change, and we discuss benefits and limitations.
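The information-theoretic measures mentioned above can be illustrated with a short sketch: Shannon entropy for the lexical diversity of one sample, and relative entropy (Kullback-Leibler divergence) for how surprising one register's word distribution is under another's. The toy word counts below are invented for illustration and are not data from the study; the smoothing constant `alpha` is likewise an assumption.

```python
from collections import Counter
from math import log2

def entropy(counts):
    """Shannon entropy (in bits) of a frequency distribution."""
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def relative_entropy(p_counts, q_counts, alpha=0.5):
    """KL divergence D(p || q) in bits, with additive smoothing of q so that
    every word seen in p receives non-zero probability under q."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values()) + alpha * len(vocab)
    kl = 0.0
    for w in vocab:
        p = p_counts.get(w, 0) / p_total
        if p == 0:
            continue  # terms with p = 0 contribute nothing
        q = (q_counts.get(w, 0) + alpha) / q_total
        kl += p * log2(p / q)
    return kl

# Toy "registers": word counts from two hypothetical subcorpora.
early = Counter("the air is put into the receiver of the engine".split())
late  = Counter("the compound is heated and the reaction is observed".split())

print(entropy(early))                  # lexical diversity of the early sample
print(relative_entropy(late, early))   # surprise of late usage under early usage
```

In a diachronic setting, one would compute such divergences between time slices or registers and track where they peak, which is the basic idea behind using relative entropy to locate periods of linguistic change.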
We present web services which implement a workflow for transcripts of spoken language following the TEI guidelines, in particular ISO 24624:2016 “Language resource management – Transcription of spoken language”. The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
Twenty-two historical encyclopedias encoded in TEI: a new resource for the Digital Humanities
(2020)
This paper accompanies the corpus publication of EncycNet, a novel XML/TEI annotated corpus of 22 historical German encyclopedias from the early 18th to early 20th century. We describe the creation and annotation of the corpus, including the rationale for its development, suggested methodology for TEI annotation, possible use cases and future work. While many well-developed annotation standards for lexical resources exist, none can adequately model the encyclopedias at hand, and we therefore suggest how the TEI Lex-0 standard may be modified with additional guidelines for the annotation of historical encyclopedias. As the digitization and annotation of historical encyclopedias are settling on TEI as the de facto standard, our methodology may inform similar projects.
This article describes the steps required to make the data of the Archive of the Counts von Platen (AGP) accessible to research data infrastructures (RDI): converting the data, extracting the metadata, indexing data and metadata, and extending the data models for data and metadata so that they adequately capture the archive's holdings. At the same time, it explains why such effort is worthwhile at all: so that the data become available to a larger audience, can be processed with the tools the infrastructures provide, and can be further linked and combined with external resources, creating clear added value.
We evaluate a graph-based dependency parser on DeReKo, a large corpus of contemporary German. The dependency parser is trained on the German dataset from the SPMRL 2014 Shared Task, which contains text from the news domain, whereas DeReKo also covers other domains, including fiction, science, and technology. To avoid the need for costly manual annotation of the corpus, we use the parser's probability estimates for unlabeled and labeled attachment as the main evaluation criterion. We show that these probability estimates are highly correlated with the actual attachment scores on a manually annotated test set. On this basis, we compare estimated parsing scores for the individual domains in DeReKo, and show that the scores decrease with increasing distance of a domain from the training corpus.
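The evaluation idea rests on the parser's own confidence correlating with gold attachment scores. A minimal sketch of checking such a correlation, on invented per-domain numbers that merely mimic the described pattern (higher estimated probability in the in-domain news data, lower in distant domains), not figures from the paper:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical numbers: the parser's mean estimated attachment probability
# per domain vs. labeled attachment score (LAS) on an annotated sample.
estimated_prob = [0.97, 0.94, 0.90, 0.86, 0.81]   # news ... most distant domain
actual_las     = [0.93, 0.91, 0.88, 0.83, 0.79]

r = pearson(estimated_prob, actual_las)
print(f"Pearson r = {r:.3f}")
```

A high r on an annotated sample is what licenses using the unannotated estimates as a stand-in for accuracy on the rest of the corpus.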
The present paper outlines the projected second part of the Corpus Query Lingua Franca (CQLF) family of standards: CQLF Ontology, which is currently in the process of standardization at the International Organization for Standardization (ISO), in its Technical Committee 37, Subcommittee 4 (TC37SC4), and its national mirrors. The first part of the family, ISO 24623-1 (henceforth CQLF Metamodel), was successfully adopted as an international standard at the beginning of 2018. The present paper reflects the state of the CQLF Ontology at the moment of submission for the Committee Draft ballot. We provide a brief overview of the CQLF Metamodel, present the assumptions and aims of the CQLF Ontology, its basic structure, and its potential extended applications. The full ontology is expected to emerge from a community process, starting from an initial version created by the authors of the present paper.
This article makes an empirical and a methodological contribution to the comparative study of action. The empirical contribution is a comparative study of three distinct types of action regularly accomplished with the turn format du meinst x (“you mean/think x”) in German: candidate understandings, formulations of the other’s mind, and requests for a judgment. These empirical materials are the basis for a methodological exploration of different levels of researcher abstraction in the comparative study of action. Two levels are examined: the (coarser) level of conditionally relevant responses (what a response speaker must do to align with the action of the prior turn) and the (finer) level of “full alignment” (what a response speaker can do to align with the action of a prior turn). Both levels of abstraction provide empirically viable and analytically interesting descriptive concepts for the comparative study of action. Data are in German.
This paper addresses long-term archival for large corpora. We focus on three aspects specific to language resources, namely (1) the removal of resources for legal reasons, (2) versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases but also part of different collections, and (3) the conversion of data to new formats for digital preservation. We motivate why language resources may have to be changed and why formats may need to be converted. As a solution, we suggest the use of an intermediate proxy object called a signpost. The approach is exemplified with respect to the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
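The signpost idea can be sketched as a small data structure: a stable proxy with a persistent identifier that survives removals, carries version history across releases and collections, and records format migrations, while the payload objects behind it may change or disappear. All names, fields, and values below are hypothetical illustrations of the concept, not the actual DeReKo/AGD implementation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Version:
    release: str             # e.g. a corpus release label
    location: Optional[str]  # storage path, or None if the payload was removed
    media_type: str          # format of this version's payload

@dataclass
class Signpost:
    """Stable proxy object standing in for a (possibly changing) resource."""
    pid: str                                        # persistent id, never reused
    versions: list = field(default_factory=list)
    collections: set = field(default_factory=set)   # collections containing it

    def add_version(self, release, location, media_type):
        self.versions.append(Version(release, location, media_type))

    def withdraw(self, release):
        """Drop the payload (e.g. after a legal injunction) but keep the
        signpost, so citations of the PID still resolve to its metadata."""
        for v in self.versions:
            if v.release == release:
                v.location = None

    def migrate(self, release, new_location, new_media_type):
        """Record a format conversion as a new version of the same object."""
        self.add_version(release, new_location, new_media_type)

sp = Signpost(pid="example:text-0001")
sp.add_version("2019-II", "/archive/2019-II/text-0001.sgml", "text/sgml")
sp.migrate("2020-I", "/archive/2020-I/text-0001.xml", "application/tei+xml")
sp.withdraw("2019-II")
print([(v.release, v.location) for v in sp.versions])
```

The point of the indirection is that removal and migration act on versions behind the proxy, while the identifier cited by users never changes.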
This technology watch report discusses digital repository solutions in the context of the research infrastructure projects CLARIAH-DE, CLARIN, and DARIAH. It provides an overview of different repository systems, comparing them and discussing their respective applicability from the perspectives of the project partners at the time of writing.
Signposts for CLARIN
(2020)
This paper presents an implementation of CMDI-based signposts and its use. Arnold et al. (2020) propose Signposts as a solution to challenges in the long-term preservation of corpora, especially corpora that are continuously extended, that are subject to modification (e.g. due to legal injunctions), that may overlap with respect to their constituents, and that may be migrated to new data formats. We describe the contribution Signposts can make to the CLARIN infrastructure and document the design of the CMDI profile.
The CMDI Explorer
(2020)
We present the CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. The CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
In order to satisfy the information needs of a wide range of researchers across a number of disciplines, large textual datasets require careful design, collection, cleaning, encoding, annotation, storage, retrieval, and curation. This daunting set of tasks has coalesced into a number of key themes and questions that are of interest to the contributing research communities: (a) what sampling techniques can we apply? (b) what quality issues should we be aware of? (c) what infrastructures and frameworks are being developed for the efficient storage, annotation, analysis, and retrieval of large datasets? (d) what affordances do visualisation techniques offer for the exploratory analysis of corpora? (e) what legal paths can be followed in dealing with IPR and data protection issues governing both the data sources and the query results? (f) how can we guarantee that corpus data remain available and usable in a sustainable way?
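As one concrete answer to question (a), a standard technique for drawing a fixed-size uniform sample from a corpus stream too large to hold in memory is reservoir sampling (Algorithm R). A minimal sketch; the "documents" are placeholder strings invented for illustration:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniformly sample k items from an iterable of unknown length,
    keeping at most k items in memory (Vitter's Algorithm R)."""
    rng = random.Random(seed)  # fixed seed for reproducible sampling
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)      # fill the reservoir first
        else:
            j = rng.randint(0, i)       # inclusive upper bound
            if j < k:                   # keep item with probability k/(i+1)
                reservoir[j] = item
    return reservoir

# Sample 5 "documents" from a large stream without materialising it.
sample = reservoir_sample((f"doc-{i:06d}" for i in range(100_000)), k=5)
print(sample)
```

Each item in the stream ends up in the final reservoir with equal probability k/n, which makes the technique suitable for drawing balanced evaluation samples from corpora that are only available as a single sequential pass.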
Repeating the movements associated with activities such as drawing or sports typically leads to improvements in kinematic behavior: these movements become faster, smoother, and exhibit less variation. Likewise, practice has also been shown to lead to faster and smoother movement trajectories in speech articulation. However, little is known about its effect on articulatory variability. To address this, we investigate the extent to which repetition and predictability influence the articulation of the frequent German word “sie” [zi] (they). We find that articulatory variability is proportional to speaking rate and the duration of [zi], and that overall variability decreases as [zi] is repeated during the experiment. Lower variability is also observed as the conditional probability of [zi] increases, and the greatest reduction in variability occurs during the execution of the vocalic target of [i]. These results indicate that practice can produce observable differences in the articulation of even the most common gestures used in speech.
Making corpora accessible and usable for linguistic research is a huge challenge in view of (too) big data, legal issues, and a rapidly evolving methodology. This affects not only the design of user-friendly graphical interfaces to corpus analysis tools, but also the availability of programming interfaces that support access to the functionality of these tools from various analysis and development environments. RKorAPClient is a new research tool in the form of an R package that interacts with the web API of the corpus analysis platform KorAP, which provides access to large annotated corpora, including the German reference corpus DeReKo with 45 billion tokens. In addition to optionally authenticated KorAP API access, RKorAPClient provides further processing and visualization features to simplify common corpus analysis tasks. This paper introduces the basic functionality of RKorAPClient and exemplifies various analysis tasks based on DeReKo, which are bundled within the R package and can serve as a basic framework for advanced analysis and visualization approaches.
CLARIN contractual framework for sharing language data: the perspective of personal data protection
(2020)
The article analyses the responsibility for ensuring compliance with the General Data Protection Regulation (GDPR) in research settings. As a general rule, organisations are considered the data controller (the party responsible for GDPR compliance). Research, however, constitutes a unique setting influenced by academic freedom, which raises the question of whether individual academics could be considered controllers as well. Although there are some court cases and policy documents on this issue, it is not yet settled. The analysis serves as a preliminary analytical background for redesigning the CLARIN contractual framework for sharing data.
Privacy by Design (also referred to as Data Protection by Design) is an approach in which solutions and mechanisms addressing privacy and data protection are embedded throughout the entire project lifecycle, from the early design stage, rather than just added as an additional layer to the final product. Formulated in the 1990s by the Information and Privacy Commissioner of Ontario, the principle of Privacy by Design has been discussed by institutions and policymakers on both sides of the Atlantic and was already mentioned in the 1995 EU Data Protection Directive (95/46/EC). More recently, Privacy by Design was introduced as one of the requirements of the General Data Protection Regulation (GDPR), obliging data controllers to define and adopt, already at the conception phase, appropriate measures and safeguards to implement data protection principles and protect the rights of the data subject. Failing to meet this obligation may result in a hefty fine, as was the case in the Uniontrad decision by the French Data Protection Authority (CNIL). The ambition of this paper is to analyse the practical meaning of Privacy by Design in the context of language resources and to propose measures and safeguards that the community can implement to ensure respect for this principle.
Providing online repositories for language resources is one of the main activities of CLARIN centres. The legal framework governing the liability of service providers for content uploaded by their users has recently been modified by the new Directive on Copyright in the Digital Single Market. A new category of service providers, Online Content-Sharing Service Providers (OCSSPs), was added; it is subject to a complex and strict framework, including the requirement to obtain licenses from rightholders for the hosted content. This paper provides the background to these legal changes, discusses their effects, and aims to initiate a debate on how CLARIN repositories should navigate this new legal landscape.