"Reproducibility crisis" and "empirical turn" are only two of the keywords invoked when giving reasons for research data management. Research data are omnipresent, and with increasingly automated data-processing procedures they become even more important. However, just because new methods require and produce data does not mean that the data are easily accessible or reusable, or that they make a difference in a researcher's CV, even though a large portion of research goes into data creation, acquisition, preparation, and analysis. In this talk I will show where data appear in the research process and where appropriate support for data management can be found, and I will advocate a procedure for including data work in research publications and résumés.
This presentation relies on work within the BMBF-funded project CLARIN-D. It also builds on work within the German National Research Data Infrastructure (NFDI) consortium Text+, DFG project number 460033370.
Prediction is a central mechanism in the human language processing architecture. The psycholinguistic and neurolinguistic literature has seen a lively debate about what form prediction may take and what status it has for language processing in the human mind and brain. While predictions are a ubiquitous finding, the implications of these results for models of language processing differ. For instance, eyetracking data suggest that predictions may rely on sublexical orthographic information in natural reading, while electrophysiological data provide mixed evidence for form-based predictions during reading. Other research has revealed that humans rapidly adapt to text specifics and that their predictive capacity varies, broadly speaking, in accordance with inter- and intra-individual language proficiency, which cuts across the speaker groups (e.g. L1 vs. L2 speakers, skilled vs. untrained readers) traditionally used for experimental contrasts. There is therefore evidence that the kind and strength of linguistic predictions depend on (at least) three sources of variability in language processing: speaker, text genre and experimental method.
The aim of this Research Topic is to develop a better understanding of prediction in light of the three sources of variability in language processing, by providing an overview of state-of-the-art research on predictive language processing and by bringing together research from various disciplines.
First, intra- and inter-individual differences and their influence on predictive processes remain underrepresented in experimental research on predictive processing. How do language users differ in their predictive abilities and strategies, and how are these differences shaped by, e.g., biological, social and cultural factors?
Second, while language users experience great stylistic diversity in their daily language exposure and use, the majority of language processing research still focuses on a very constrained register of well-controlled sentences composed in the standard language. How are predictions shaped by extra- and meta-linguistic context, such as register/genre or accent/speaker identity, and how may this influence the processing of experimental items in another language or text variety?
Third, the Research Topic invites contributions that make use of a multi-method approach, such as combined behavioral and electrophysiological measures or experimental methods combined with measures extracted from corpus data. What opportunities and challenges do we face when integrating multiple approaches to examine linguistic, experimental and individual differences in human predictive capacity?
We welcome contributions from all areas of empirical psycho- and neurolinguistics, but contributions must explicitly address variability and variation in language and language processing. Relevant topics include individual differences and the impact of genre, modality, register and language variety. Contributions that go beyond single word and single sentence paradigms are especially desirable. Experimental, corpus-based, meta-analytic and review papers, as well as theoretical/opinion pieces are welcome; however, papers of the latter type should support their arguments with substantial empirical evidence from the literature. Particularly desirable are contributions which combine topics and/or methods, such as the impact of an individual's native dialect on processing of constructions that show variability in the standard language (e.g. choice of auxiliary, agreement of mass nouns, etc.) or experimental methods combined with measures extracted from corpus data such as information-theoretic surprisal.
Simultaneous interpreting is a complex cognitive activity in which several processes run in parallel. Beyond monolingual text processing, interpreters also need interpreting-specific strategies, which must be acquired. Emergency strategies are applied only once the interpreter's capacity limit has been reached.
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
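The fold-based aggregation and the hapax-legomena count described above can be sketched as follows. The tab-separated line format and column order here are assumptions for illustration, not the dataset's documented layout.

```python
from collections import Counter

def aggregate_folds(fold_lines):
    """Sum (lemma, POS) frequencies over any subset of the 16 folds.

    fold_lines: one iterable of lines per fold; each line is assumed to be
    "lemma<TAB>POS<TAB>count" (a hypothetical layout for this sketch).
    """
    totals = Counter()
    for lines in fold_lines:
        for line in lines:
            lemma, pos, count = line.rstrip("\n").split("\t")
            totals[(lemma, pos)] += int(count)
    return totals

def hapax_legomena(totals):
    """N-grams occurring exactly once, as counted in the vocabulary-growth case study."""
    return {key for key, n in totals.items() if n == 1}
```

Because frequencies are additive across folds, working with a subset of folds (e.g. to save computational resources) only requires passing fewer iterables to `aggregate_folds`.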
Computational language models (LMs), most notably exemplified by the widespread success of OpenAI's ChatGPT chatbot, show impressive performance on a wide range of linguistic tasks, thus providing cognitive science and linguistics with a computational working model to empirically study different aspects of human language. Here, we use LMs to test the hypothesis that languages with more speakers tend to be easier to learn. In two experiments, we train several LMs—ranging from very simple n-gram models to state-of-the-art deep neural networks—on written cross-linguistic corpus data covering 1293 different languages and statistically estimate learning difficulty. Using a variety of quantitative methods and machine learning techniques to account for phylogenetic relatedness and geographical proximity of languages, we show that there is robust evidence for a relationship between learning difficulty and speaker population size. However, contrary to expectations derived from previous research, our results suggest that languages with more speakers tend to be harder to learn.
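The simplest models in the range described above can be illustrated with a character-bigram model whose average per-character surprisal serves as a crude learning-difficulty estimate. Training text, smoothing scheme, and vocabulary size below are illustrative assumptions, not the paper's setup.

```python
import math
from collections import Counter

def train_bigram(text):
    """Count character bigrams and their left contexts."""
    pair_counts = Counter(zip(text, text[1:]))
    context_counts = Counter(text[:-1])
    return pair_counts, context_counts

def avg_surprisal(text, model, alpha=1.0, vocab_size=128):
    """Mean negative log2 probability per character under add-alpha smoothing:
    a rough proxy for how hard the text is to predict (and hence to learn)."""
    pair_counts, context_counts = model
    total = 0.0
    for a, b in zip(text, text[1:]):
        p = (pair_counts[(a, b)] + alpha) / (context_counts[a] + alpha * vocab_size)
        total += -math.log2(p)
    return total / (len(text) - 1)
```

Text that matches the training distribution yields low average surprisal; unseen character sequences approach the smoothing ceiling of log2(vocab_size) bits.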
Recent years have seen a growing interest in grammatical variation, a core explanandum of grammatical theory. The present volume explores questions that are fundamental to this line of research: First, the question of whether variation can always and completely be explained by intra- or extra-linguistic predictors, or whether there is a certain amount of unpredictable – or ‘free’ – grammatical variation. Second, the question of what implications the (in-)existence of free variation would hold for our theoretical models and the empirical study of grammar. The volume provides the first dedicated book-length treatment of this long-standing topic. Following an introductory chapter by the editors, it contains ten case studies on potentially free variation in morphology and syntax drawn from Germanic, Romance, Uralic and Mayan.
Allusion
(2023)
Assessment
(2023)
Most broadly, an assessment is a type of social action by which an interactant expresses an evaluative stance towards someone or something (e.g., an object, an event, an action, an experience, a state of affairs, a place, a circumstance, etc.). The target of an assessment is typically called the ‘assessable’.
Collaborative work in NFDI
(2023)
The non-profit association National Research Data Infrastructure (NFDI) promotes science and research by building a national research data infrastructure. Its aim is to develop and establish overarching research data management (RDM) for Germany and to increase the efficiency of the entire German science system. After a two-and-a-half-year build-up phase, the process of adding new consortia, each representing a different data domain, ended in March 2023. NFDI now has 26 disciplinary consortia (and one additional basic-service collaboration). The full extent of cross-consortial interaction is now beginning to show.
KoMuX, the Kompositamuster-Explorer (www.owid.de/plus/komux), is a web application that makes it possible to search more than 50,000 German nominal compounds for abstract or lexically part-specified patterns. Different visualizations help users grasp structures and relationships within the result set.
Retro-sequence
(2023)
The Data Governance Act was proposed in late 2020 as part of the European Strategy for Data and adopted on 30 May 2022 (as Regulation 2022/868). It will enter into application on 24 September 2023. The Data Governance Act is a major development in the legal framework affecting CLARIN and the whole language community. With its new rules on the re-use of data held by public sector bodies and on the provision of data-sharing services, and especially its encouragement of data altruism, the Data Governance Act creates new opportunities and new challenges for CLARIN ERIC. This paper analyses the provisions of the Data Governance Act and aims at initiating the debate on how they will impact CLARIN and the whole language community.
For many reasons, Mennonite Low German is a language whose documentation and investigation is of great importance for linguistics. To date, most research projects that deal with this language and/or its speakers have had a relatively narrow focus, with many of the data cited being of limited relevance beyond the projects for which they were collected. In order to create a resource for a broad range of researchers, especially those working on Mennonite Low German, the dataset presented here has been transformed into a structured and searchable corpus that is accessible online. The translations of 46 English, Spanish, or Portuguese stimulus sentences into Mennonite Low German by 321 consultants form the core of the MEND corpus (Mennonite Low German in North and South America) in the Archive for Spoken German. In addition to describing the origin of this corpus and discussing possibilities and limitations for further research, we discuss the technical structure and search possibilities of the Database for Spoken German. Among other things, this database allows for a structured search of metadata, a context-sensitive token search, and the generation of virtual corpora that can be shared with others. Moreover, thanks to its text-sound alignment, one can easily switch from a particular text section of the corpus to the corresponding audio section. Aside from the desire to equip the reader with the technical knowledge necessary to use this corpus, a further goal of this paper is to demonstrate that the corpus still offers many possibilities for future research.
Conventional terminology resources reach their limits when it comes to automatic content classification of texts in the domain of expert-layperson communication. This can be attributed to the fact that (non-normalized) language usage does not necessarily reflect the terminological elements stored in such resources. We present several strategies for extending a terminological resource with term-related elements in order to optimize automatic content classification of expert-layperson texts.
We present a collection of (currently) about 5,500 commands directed to voice-controlled virtual assistants (VAs) by sixteen initial users of a VA system in their homes. The collection comprises recordings captured by the VA itself and with a conditional voice recorder (CVR) selectively capturing recordings including the VA-directed commands plus some surrounding context. Next to a description of the collection, we present initial findings on the patterns of use of the VA systems during the first weeks after installation, including usage timing, the development of usage frequency, distributions of sentence structures across commands, and (the development of) command success rates. We discuss the advantages and disadvantages of the applied collection-specific recording approach and describe potential research questions that can be investigated in the future, based on the collection, as well as the merit of combining quantitative corpus linguistic approaches with qualitative in-depth analyses of single cases.
Linguistic studies frequently work with a distinction between spoken and written language, or between communication of proximity ("Nähe") and distance ("Distanz"). Assuming a continuum between these poles lends itself to locating the most diverse forms of utterance, including unconventional text types such as pop songs. We design, implement, and evaluate an automated procedure that uses uncorrelated decision trees to make such predictions at the text level. To identify the poles, we define a feature catalogue of linguistic phenomena discussed as markers of proximity/orality or distance/literacy and apply it to prototypical proximity/orality texts as well as prototypical distance/written texts. Based on the very good classification quality, we then locate a series of further text types using the trained classifiers. Pop songs emerge as a "middle" text type that combines linguistically motivated features of different continuum levels. Furthermore, we show that our models place orally communicated but previously or subsequently transcribed utterances, such as speeches or interviews, completely differently from prototypical conversational data, and we uncover classification differences for social media variants. The goal is not a systematically binding placement within the continuum, but an empirical approximation to the question of which comparatively easily machine-extractable features ("shallow features") demonstrably influence the placement.
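The feature-extraction step underlying such a classifier can be sketched as follows. The marker lists and feature names are invented placeholders; the study's actual feature catalogue, and the random-forest classifier trained on it, are considerably richer than this illustration.

```python
import re

# Invented example markers for the two poles of the continuum; the real
# feature catalogue covers many more phenomena than these few cues.
ORAL_MARKERS = {"ich", "du", "mal", "ja", "halt"}     # proximity/orality cues
LITERATE_SUFFIXES = ("ung", "heit", "keit", "tion")   # distance/literacy cues

def shallow_features(text):
    """Extract a few cheap, machine-determinable ("shallow") text features."""
    tokens = re.findall(r"[a-zäöüß]+", text.lower())
    n = max(len(tokens), 1)
    return {
        "oral_rate": sum(t in ORAL_MARKERS for t in tokens) / n,
        "literate_rate": sum(t.endswith(LITERATE_SUFFIXES) for t in tokens) / n,
        "mean_word_len": sum(map(len, tokens)) / n,
    }
```

Feature vectors of this kind, computed for prototypical texts of both poles, would then serve as training input for the tree ensemble.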
The project "Sprachanfragen" (https://www.ids-mannheim.de/gra/projekte2/sprachanfragen/), launched in January 2022, pursues for the first time the goal of collecting language-query data, processing it, and building from it a monitor corpus open to the research community. In addition, a search interface is being developed that makes the language queries systematically analysable for research. The poster gives an overview of the project, shows first results, and offers an outlook on considerations for designing a chatbot to answer language queries automatically. A contribution to the 9th conference of the association "Digital Humanities im deutschsprachigen Raum" (DHd 2023: Open Humanities, Open Culture).
This article investigates mundane photo taking practices with personal mobile devices in the co-presence of others, as well as “divergent” self-initiated smartphone use, thereby exploring the impact of everyday technologies on social interaction. Utilizing multimodal conversation analysis, we examined sequences in which young adults take pictures of food and drinks in restaurants and cafés. Although everyday interactions are abundant in opportunities for accomplishing food photography as a side activity, our data show that taking pictures is also often prioritized over other activities. Through a detailed sequential analysis of video recordings and dynamic screen captures of mobile devices, we illustrate how photographers orient to the momentary opportunities for and relevance of photo taking, that is, how they systematically organize their photographing with respect to the ongoing social encounter and the (projected) changes in the material environment. We investigate how the participants multimodally negotiate the “mainness” and “sideness” (Mondada, 2014) of situated food photography and describe some particular features of participants’ conduct in moments of mundane multiactivity.
Since the mid-1990s, the Institut für Deutsche Sprache (IDS) in Mannheim has been investigating how the highly complex subject area of "grammar" can be conveyed in a scientifically sound and accessible way by exploiting hypertextual navigation structures. A consistent, theory-independent cross-linking of all text content is therefore of central importance. To support automatable cross-references between content that is formulated in different terminological vocabularies but describes the same linguistic phenomenon, an onomasiologically designed terminology database forms the backbone of the online system. This article describes the design and structure of this linguistic terminology.
The aim of this article is to explore silence and its linguistic realization with respect to the macro- and microstructure of the literary text. The theoretical background is formed by linguistic and literary studies that address communicative, pragmatic, semantic, cultural, and literary-historical aspects of silence and emphasize its distinction from stillness, which is to be understood as a natural phenomenon. Starting from the model of literary communication, the article points to the role of silence in the author-text-reader triad as well as to its possible realizations in the structure and language of the narrative text. Attention is directed not only to silence as not-speaking, but also to empty talk, which actualizes the semantics of silence within the communicative situation. These two contrasting forms of silence come to light in the Berlin novels of Robert Walser (1878-1956) and are subjected to close analysis from the perspective of macro- and microstylistics. The study examines the narrative principle of garrulousness in Geschwister Tanner (1907), the irony in Der Gehülfe (1908), and the fragmentary narration in Jakob von Gunten (1909), through which silence is realized both at the thematic level and in the structure and language of the text. As a narrative strategy, silence shapes the form and content of Walser's Berlin novels and thus achieves the effect the author intended for the reader.
Developments within the field of Second Language Acquisition (SLA) have meant that scholars are increasingly engaging with corpora and corpus-based resources, providing a source of “‘authentic’ language” to learners and educators (Mitchell 2020: 254), and contributing to “state-of-the-art research methodologies” (Deshors and Gries 2023: 164). However, there are areas in which progress can still be made, particularly in the area of metadata, such as information about the speaker and contexts of the language use, as well as increased variety in the text types and genres of corpora used to develop SLA materials (Paquot 2022: 36). This post discusses one such possibility for increasing the variety of text types and providing a rich source of authentic language that can be used to create engaging SLA materials, particularly for young people learning German, namely the use of the NottDeuYTSch corpus (to download the corpus in a variety of formats, see Cotgrove 2018).
Like the owl, Sprach-Checker big and small explore their Neckarstadt-West. Come along on a journey of discovery!
The book "Der Wörter-Sammel-Koffer" is a work of the Sprach-Checker. It was created within the project "Die Sprach-Checker - So sprechen wir in der Neckarstadt" (directed by Dr. Christine Möhrs & Elena Schoppa-Briele) of the Leibniz-Institut für Deutsche Sprache (IDS), Mannheim, in cooperation with the children's book author and illustrator Anke Faust, the Campus Neckarstadt-West, the Neckarstadt-Kids, and the Alte Feuerwache Mannheim.
The story of the owl grew out of the children's many witty ideas and was subsequently illustrated by the Sprach-Checker with watercolours, coloured pencils, and plenty of imagination.
Modular pivot
(2023)
A modular pivot is a type of turn-constructional pivot. It is built from syntactically entirely optional items (i.e. linguistic adjuncts) that can occur in both turn-initial and turn-final position and can therefore be used to patch a wide range of otherwise discrete turn-constructional units (TCUs) together (Clayman & Raymond 2015). A prime example of an item that lends itself to be deployed as a modular pivot are address terms (Clayman 2012).
Pivot
(2023)
The term pivot denotes an element of talk that can be understood to belong to two larger units of talk simultaneously, thereby joining them together and acting as a transitional link between them (Schegloff 1979: 275-276). Most commonly, the term is used to refer to lexico-syntactic elements that can be interpreted as ending one turn-constructional unit (TCU) while at the same time launching a next.
With the cGAT handbook, the FOLK project provides a guideline for computer-assisted transcription according to GAT 2. The handbook was developed on the basis of transcription practice in FOLK and contains a large number of authentic examples, which, together with the corresponding audio, can also be accessed via the Datenbank für Gesprochenes Deutsch (DGD).
The special issue opens up a construction-grammar perspective on (German) word-formation phenomena and goes back to a DFG-funded conference of the same name, held at the University of Düsseldorf in December 2020. The aim is to bring together for the first time research from German linguistics that is oriented towards construction grammar, and thus to lay the foundation for a 'Construction Word Formation' (cf. Booij 2010) in the German-speaking world as well. Furthermore, 'Construction Word Formation' as a discipline is hereby to be given a sharper profile. In this context, construction grammar should not be seen as a radical alternative to traditional word-formation approaches that completely reinvents the wheel, but rather as a further development that builds on traditional concepts, such as the notion of pattern, with prominent consideration of usage-based aspects.
The Encyclopedia of Terminology for Conversation Analysis and Interactional Linguistics is an online resource for students and scholars of CA/IL, publicly available on the EMCA Wiki page. Encyclopedias and glossaries are widespread across various fields and methods, and serve as immensely valuable resources. Given the extent to which the EMCA/IL community has expanded over the years—both terminologically as well as geographically—we hope that this encyclopedia of terminology will be well received by students and practitioners of CA and IL across the globe.
This paper presents an extended annotation and analysis of interpretative reply relations focusing on a comparison of reply relation types and targets between conflictual pages and neutral pages of German Wikipedia (WP) talk pages. We briefly present the different categories identified for interpretative reply relations to analyze the relationship between WP postings as well as linguistic cues for each category. We investigate referencing strategies of WP authors in discussion page postings, illustrated by means of reply relation types and targets taking into account the degree of disagreement displayed on a WP talk page. We provide richly annotated data that can be used for further analyses such as the identification of interactional relations on higher levels, or for training tasks in machine learning algorithms.
The landscape of digital lexical resources is often characterized by dedicated local portals and proprietary interfaces as primary access points for scholars and the interested public. In addition, legal and technical restrictions are potential issues that can make it difficult to efficiently query and use these valuable resources. As part of the research data consortium Text+, solutions for the storage and provision of digital language resources are being developed and provided in the context of the unified cross-domain German research data infrastructure NFDI. The specific topic of accessing lexical resources in a diverse and heterogenous landscape with a variety of participating institutions and established technical solutions is met with the development of the federated search and query framework LexFCS. The LexFCS extends the established CLARIN Federated Content Search that already allows accessing spatially distributed text corpora using a common specification of technical interfaces, data formats, and query languages. This paper describes the current state of development of the LexFCS, gives an insight into its technical details, and provides an outlook on its future development.
The proposed contribution will shed light on current and future challenges on legal and ethical questions in research data infrastructures. The authors of the proposal will present the work of NFDI’s section on Ethical, Legal and Social Aspects (hereinafter: ELSA), whose aim is to facilitate cross-disciplinary cooperation between the NFDI consortia in the relevant areas of management and re-use of research data.
This paper describes general requirements for evaluating and documenting NLP tools, with a focus on morphological analysers and the design of a gold standard. It is argued that any evaluation must be measurable, and documentation thereof must be made accessible to any user of the tool. The documentation must enable the user to compare different tools offering the same service; hence the descriptions must contain measurable values. A gold standard is a vital part of any measurable evaluation process; therefore, the corpus-based design of a gold standard, its creation, and the problems that occur are reported upon here. Our project concentrates on SMOR, a morphological analyser for German that is to be offered as a web service. We not only use this analyser for designing the gold standard, but also evaluate the tool itself at the same time. Note that the project is ongoing; therefore, we cannot present final results.
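The kind of measurable evaluation argued for above can be illustrated by scoring an analyser's outputs against gold-standard analyses. The words and analysis strings below are invented examples, not actual SMOR output.

```python
def precision_recall(system, gold):
    """Score a morphological analyser against a gold standard.

    system/gold: dict mapping word -> set of analysis strings.
    Returns (precision, recall) over all proposed/expected analyses.
    """
    tp = sum(len(system[w] & gold.get(w, set())) for w in system)  # correct analyses
    sys_total = sum(len(v) for v in system.values())
    gold_total = sum(len(v) for v in gold.values())
    precision = tp / sys_total if sys_total else 0.0
    recall = tp / gold_total if gold_total else 0.0
    return precision, recall
```

Reporting such values in the documentation makes tools offering the same service directly comparable, which is exactly the requirement the paper formulates.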
This volume brings together the papers of the 9th Hildesheim Evaluation and Retrieval Workshop (HIER), which took place at the University of Hildesheim on 9 and 10 July 2015. The HIER workshop series began in 2001 with the goal of presenting and discussing the research results of Hildesheim's information science. By now, cooperation partners from other institutions regularly take part, which we very much welcome. HIER also creates a forum for system presentations and practice-oriented contributions.
Open Science and language data: Expectations vs. reality. The role of research data infrastructures
(2023)
Language data are essential for any scientific endeavor. However, unlike numerical data, language data are often protected by copyright, as they easily meet the threshold of originality. The role of research infrastructures (such as CLARIN, DARIAH, and Text+) is to bridge the gap between uses allowed by statutory exceptions and the requirements of Open Science. This is achieved on the one hand by sharing language data produced by research organisations with the widest possible circle of persons, and on the other by mutualizing efforts towards copyright clearance and appropriate licensing of datasets.
Corpus-based identification and disambiguation of reading indicators for German nominalizations
(2010)
Corpus data is often structurally and lexically ambiguous; corpus extraction methodologies thus must be made aware of ambiguities. Therefore, given an extraction task, all relevant ambiguities must be identified. To resolve these ambiguities, contextual data responsible for one or another reading is to be considered. In the context of our present work, German -ung-nominalizations and their sortal readings are under examination. A number of these nominalizations may be read as an event or a result, depending on the semantic group they belong to. Here, we concentrate on nominalizations of verbs of saying (henceforth: "verba dicendi"), identify their context partners and their influence on the sortal reading of the nominalizations in question. We present a tool which calculates the sortal reading of such nominalizations and thus may improve not only corpus extraction, but also e.g. machine translation. Lastly, we describe successful attempts to identify the correct sortal reading, conclusions and future work.
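A toy version of such a reading calculator: context partners (here, governing verbs) vote for an event or a result reading of an -ung nominalization. The cue lists are invented illustrations, not the tool's actual resources.

```python
# Hypothetical context partners that favour one sortal reading; the real
# tool draws on a much larger inventory of disambiguating context cues.
EVENT_CUES = {"dauern", "stattfinden", "beginnen"}   # e.g. "die Mitteilung dauerte lange"
RESULT_CUES = {"vorliegen", "enthalten", "lesen"}    # e.g. "die Mitteilung liegt vor"

def sortal_reading(nominalization, context_verbs):
    """Decide between an event and a result reading from context-verb votes."""
    event = sum(v in EVENT_CUES for v in context_verbs)
    result = sum(v in RESULT_CUES for v in context_verbs)
    if event > result:
        return "event"
    if result > event:
        return "result"
    return "ambiguous"   # no cue, or a tie: leave the reading unresolved
```

A disambiguator of this kind can feed downstream applications such as corpus extraction or machine translation, which is the use case the paper names.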
This White Paper sets out commonly agreed definitions of activities of consortia within NFDI. It aims to provide a common basis for reporting and reference regarding selected questions of cross-consortial relevance in DFG's template for the Interim Reports. The questions were prioritised by an NFDI Task Force on Evaluation and Reporting (formerly Task Force Monitoring) as a result of discussing possible answers to the DFG template. In this process, the need to agree on a generalizable meaning of terms commonly used in the context of NFDI, and of reporting in particular, was identified from cross-consortial perspectives. The questions that showed the greatest need for clarification are discussed in this White Paper. As NFDI evolves, the Task Force will likely propose further joint approaches for reporting in information infrastructures.
While each is of broad relevance, the questions addressed relate to substantially different aspects of the consortia's work. They are thus also structured slightly differently.
This paper analyses intensification in German digitally-mediated communication (DMC) using a corpus of YouTube comments written by young people (the NottDeuYTSch corpus). Research on intensification in written language has traditionally focused on two grammatical aspects: syntactic intensification, i.e. the use of particles and other lexical items, and morphological intensification, i.e. the use of compounding. Using a wide variety of examples from the corpus, the paper identifies novel means of intensification in DMC and suggests a new classification taxonomy for future analysis of intensification.
This linguistic project investigated procedures of turn-taking and negotiation of meaning in authentic, telephone-interpreted counselling sessions. The data are available as PDFs (transcribed following the HIAT conventions) and as editable raw data in .exb files. The transcripts document counselling sessions between Arabic-speaking clients and a migration and social counsellor, with various interpreters brought in by telephone. The interpreters are always located in a different room from the counsellor and the clients. The sessions took place by prior appointment, with the counsellor calling the interpreter at the agreed time. The clients are Syrian refugees with subsidiary protection status and very limited knowledge of German who need help with family reunification, language acquisition, or other authentic concerns. The interpreters speak different regional varieties of Arabic. Data that could identify the participants or the counselling centre were anonymised. Technically caused sound masking and other actions by participants that, owing to the lack of co-presence, are not audible to all participants in the two interaction spaces, or are perceivable only visually in one of them, were annotated consistently in the respective annotation tiers. Non- and paraverbal phenomena such as breathing, by contrast, were transcribed only with regard to their communicative significance and their relevance for turn-taking, and were marked where acoustic perception was limited. Natural phenomena of human speech, such as inhaling, swallowing, and lip-smacking, that are not accompanied by turn-related actions were disregarded. The participants' nonverbal actions and their prosody are indicated only rudimentarily and only with regard to the research question.
Publications use either an English or a German abbreviation to refer to the corpus (TIGA and TeDo); the numbering of the files, however, is always the same.
Other languages in this data collection are various varieties of spoken Arabic. The collection originates from the DFG project ME 3439/3, "Turn-taking und Verständnissicherung beim Telefondolmetschen Deutsch-Arabisch" (turn-taking and securing understanding in German-Arabic telephone interpreting).
Strategic communication is used in various areas of human interaction to influence a particular target group. It lies at the intersection of manifold disciplines, such as communication studies, political science, psychology, management, and marketing. Strategic communication covers public as well as private communication, professional and non-professional communicators, and different communication channels.
As part of the NFDI, Text+ connects a wide range of geographically distributed data and services for research in the humanities and makes them available to the scientific community in accordance with the FAIR principles. In this contribution we describe the implementation by way of example in the Text+ data domain Collections, using corpora that are employed in various disciplines. The infrastructure is designed for extensibility, so that further resources can also be made available via Text+. An outlook on further expected developments is included. A contribution to the 9th conference of the association "Digital Humanities im deutschsprachigen Raum" - DHd 2023 Open Humanities Open Culture.
Background: The digital transformation is shaping social systems worldwide. Digital health covers various areas, such as the availability and analysis of data, opportunities for networking within one's own professional or patient group, and the way patients, relatives, and practitioners communicate with one another.
Objective: Digital health is examined with regard to its effects on the relationship and communication between patients, relatives, and practitioners. Changes that are already apparent are described, and perspectives are outlined.
Methods: The topic is explored from socio-philosophical, linguistic, and medical perspectives in the following areas: digital vs. analogue communication, narration vs. data collection, the internet and social media as sources of information, space for identity formation, and changes in the interaction of patients, relatives, and practitioners.
Results: Extending the interaction between patients and physicians to digital as well as face-to-face formats, and to asynchronous as well as synchronous communication, increases complexity but also flexibility. A focus on "objective" data can impair the view of the person with their individual biography, while digital spaces considerably expand the possibilities for identity formation on the patients' side and for interaction.
Discussion: Advantages of digitalisation (e.g. better self-management) and disadvantages (a focus on data rather than on the person) are already apparent. For paediatric and adolescent medicine, there is a need to expand professional communicative competences and professional health literacy, and to further develop the organisation of its care facilities.
National Socialism, one could argue, was all about belonging: belonging to the 'Volk' or the 'Volksgemeinschaft', belonging to the 'Aryan' or 'Non-Aryan race', belonging to the National Socialist 'movement', and so on. These categories of belonging worked in both inclusionary and exclusionary ways, and they were constituted, proclaimed, and enacted in large part through language. What is more, they had to be performed through communicative acts. For the normative side of National Socialist propaganda and legislation, this seems rather obvious and one-directional. On the side of the general population, however, it entailed a mixture of a communicative need to position oneself vis-à-vis National Socialism (mostly in affirmative ways) and the urge to do so willingly. When we look at the language use of 'ordinary people' in different communicative situations and texts during National Socialism, we have to focus on these dimensions of discursive collusion, co-constitution, and appropriation. People during National Socialism, such is our hypothesis, navigated through discourses of belonging and thereby made them real and effective. Besides diaries, war letters, and autobiographical writings, one way to grasp this phenomenon is to analyse petitions, i.e., letters of complaint and request sent in large numbers by 'ordinary people' to public authorities of the party and the state. As I will show with some examples, letter-writers tried to inscribe themselves within (what they took for) National Socialist discourses of belonging in order to legitimate their claims. By doing so, they co-constituted and co-created the discursive realm of National Socialism.