Refine
Document Type
- Article (419)
- Part of a Book (339)
- Conference Proceeding (125)
- Book (92)
- Other (74)
- Working Paper (23)
- Part of Periodical (11)
- Report (11)
- Review (9)
- Preprint (7)
Is part of the Bibliography
- yes (1116)
Keywords
- German (379)
- Corpus <linguistics> (258)
- Spoken language (76)
- Language use (76)
- Interaction (74)
- Grammar (68)
- Conversation analysis (60)
- Neologism (56)
- Communication (54)
- COVID-19 (49)
Publication state
- Published version (1116)
Review state
- (Verlags)-Lektorat (492)
- Peer-Review (457)
- Verlags-Lektorat (23)
- Peer-review (18)
- Qualifikationsarbeit (Dissertation, Habilitationsschrift) (8)
- Verlagslektorat (4)
- Review-Status-unbekannt (3)
- (Verlags)Lektorat (1)
- (Verlags-)lektorat (1)
- Abschlussarbeit (Bachelor, Master, Diplom, Magister) (Bachelor, Master, Diss.) (1)
Publisher
- Leibniz-Institut für Deutsche Sprache (IDS) (129)
- Institut für Deutsche Sprache (125)
- de Gruyter (91)
- IDS-Verlag (42)
- De Gruyter (39)
- Heidelberg University Publishing (34)
- Verlag für Gesprächsforschung (28)
- Leibniz-Institut für Deutsche Sprache (27)
- Zenodo (24)
- V&R unipress (22)
This paper introduces the Nottinghamer Korpus deutscher YouTube-Sprache ('The Nottingham German YouTube Language Corpus', or NottDeuYTSch corpus). The corpus comprises over 33 million words, taken from roughly 3 million YouTube comments published between 2008 and 2018 and written by a young, German-speaking demographic. The NottDeuYTSch corpus provides an authentic and representative linguistic snapshot of young German speakers and offers significant opportunities for in-depth research in several linguistic fields, such as lexis, morphology, syntax, orthography, multilingualism, and conversation and discourse analysis.
In this series, conference participants share their personal impressions of the Forum Citizen Science 2023 in Freiburg. In the second contribution, Rahaf Farag, research associate in the program area Documentation Center of the German Language at the IDS Mannheim, reports on stimulating panel discussions, commonalities across projects, and the diversity of project orientations.
From January to July 2023, primary school children from Mannheim's diverse Neckarstadt-West district are creating a book together with the children's book author and illustrator Anke Faust, in cooperation with the Leibniz-Institut für Deutsche Sprache (IDS). In it, they tell of the adventures their characters experience in Neckarstadt-West and of the language treasures to be found along the way. The IDS's cooperation partners for this project include the Campus Neckarstadt-West, the Alte Feuerwache Mannheim gGmbH, and the Verein Neckarstadt Kids e.V.
How can children and adolescents explore their multilingual everyday life in Mannheim's diverse Neckarstadt-West district, together with researchers from the Leibniz-Institut für Deutsche Sprache and its cooperation partners, the Campus Neckarstadt-West, the Alte Feuerwache Mannheim gGmbH, and the Verein Neckarstadt Kids e.V.?
We want to explore the potential of citizen science in a language-related project:
- for establishing trusting collaboration between the young citizen scientists and linguistic research,
- for high-quality educational offerings in the spirit of the UN Sustainable Development Goals, and
- for new impulses in the field of language contact and multilingualism research.
In this contribution, we outline the goals, questions, and methods of our project and give insights into the activities carried out so far and those planned for 2023.
We present a simple tool for extracting text and markup information from printouts of (not only) scientific documents. While the heavy-lifting OCR is done by off-the-shelf tesseract, our focus is on detection, extraction, and basic categorization of color-highlighted text sections, as well as on providing a framework for downstream processing of extraction results. The tool can be useful for document analysis tasks that must, or benefit from being able to, use printed paper.
This study aims to establish what lexical factors make it more likely for dictionary users to consult specific articles in a dictionary using the English Wiktionary log files, which include records of user visits over the course of 6 years. Recent findings suggest that lexical frequency is a significant factor predicting look-up behavior, with the more frequent words being more likely to be consulted. Three further lexical factors are brought into focus: (1) age of acquisition; (2) lexical prevalence; and (3) degree of polysemy operationalized as the number of dictionary senses. Age of acquisition and lexical prevalence data were obtained from recent published studies and linked to the list of visited Wiktionary lemmas, whereas polysemy status was derived from Wiktionary entries themselves. Regression modeling confirms the significance of corpus frequency in explaining user interest in looking up words in the dictionary. However, the remaining three factors also make a contribution whose nature is discussed and interpreted. Knowing what makes dictionary users look up words is both theoretically interesting and practically useful to lexicographers, telling them which lexical items should be prioritized in lexicographic work.
In many European languages, propositional arguments (PAs) can be realized as different types of structures. Cross-linguistically, complex structures with PAs show a systematic correlation between the strength of the semantic bond and the syntactic union (cf. Givón 2001; Wurmbrand/Lohninger 2023). Different languages also show similarities with respect to the (lexical) licensing of different PAs (cf. Noonan 1985; Givón 2001; Cristofaro 2003 on different predicate types). On a more fine-grained level, however, variation across languages can be observed both in the syntactic-semantic properties of PAs and in their licensing and usage. This presentation takes a multi-contrastive view of different types of PAs as syntactic subjects and objects by looking at five European languages: EN, DE, IT, PL, and HU. Our goal is to identify the parameters of variation in the clausal domain with PAs and thereby to contribute to a better understanding of the individual language systems on the one hand and of the nature of linguistic variation in the clausal domain on the other. Phenomena and methodology: We investigate the following types of PAs: direct object (DO) clauses (1), prepositional object (PO) clauses (2), subject clauses (3), and nominalizations (4, 5). Additionally, we discuss clause union phenomena (6, 7). The analyzed parameters include, among others, finiteness, the linear position of the PA, the (non-)presence of a correlative element, the (non-)presence of a complementizer, and the lexical-semantic class of the embedding verb. The phenomena are analyzed on the basis of corpus data (using mono- and multilingual corpora), experimental data (acceptability judgment surveys), or introspective data.
Despite being an official language of several countries in Central and Western Europe, German is not formally recognised as the official language of the Federal Republic of Germany. However, in certain situations the use of the German language, including its spelling rules, is subject to state regulation (by acts of the Federal Parliament or by administrative decisions). This article presents the content of this regulation, its scope, and the historical context in which it was adopted.
Our current era of globalization is characterized above all by increased mobility: the growing mobility of people and the development of new communication technologies, including the mobility of linguistic signs and resources. This process raises new theoretical and methodological questions in linguistics, which has led in recent years to the development of a new sociolinguistics of globalization (Blommaert 2010). One of the most direct ways to trace this new and dynamic development is to analyze individual language repertoires, especially those of migrants. In this essay, I examine aspects of the communicative repertoire of a refugee who fled to Germany in 2015 to escape the civil war in Syria, drawing on two interviews I conducted with him (in the following I refer to him by the pseudonym "Baran"). The first interview with Baran was recorded in 2016, a few months after his arrival in Germany; the second is from 2023, seven years later. In both recordings, German was the dominant language of interaction. I analyze the characteristics of his German at the beginning of his immigration, show how he resorts to practices of language mixing between German, Turkish, and English (recently also referred to as translanguaging), and trace how his German has developed over the past seven years.
The National Socialist mobilization regime was designed to induce contemporaries to perform acts of positioning: from the ubiquitous 'Hitler salute' to participation in party organizations or fundraising drives, to guided self-reflection in the diaries of National Socialist training camps. However, such a demand for affirmative positioning should not be understood solely as coercion, for that would obscure the fact that many contemporaries were in fact supporters of National Socialism, or at least did not reject the National Socialist social project fundamentally or on all points. For the historical context examined here, it therefore seems apt to assume a mixture of positioning pressure and positioning need that varied with the communicative situation and the actor's position.
Political positioning is an elementary linguistic and social practice. Where and how we locate ourselves and others in society is a question negotiated every day. Positionings are both explicitly thematized and controversially discussed, and brought about incidentally through linguistic practices. At the center of positioning are negotiations of social identity. Yet it is not only personal identities that are constituted, stabilized, or reinterpreted through positioning; society itself is directly or indirectly affected by the linguistic positioning practices of its members.
The contributions in this volume examine this interface between interaction and discourse from different disciplinary perspectives and discuss how positionings are performed, whether and to what extent they are political, and in what reciprocal relationships they stand to societal, social, and political arrangements and orders.
Language policy was never a major topic in election campaigns in the Federal Republic of Germany after 1949. Since the 2017 federal election, however, this has changed. At that time, under the impression of the large influx of migrants in 2016, some parties included positions on linguistic integration in their election manifestos. 'Positions' here means the explicit linguistic expression of a stance on a political topic or thematic area, intended, among other things in party platforms and election manifestos, to offer orientation regarding the (expected future) political action of party actors. The increasing diversity of German society likewise led, as early as the 2017 election, to topics of language education being taken up in the parties' programs. This contribution therefore examines the platforms and election manifestos of the largest parties with regard to their language-policy modes of expression.
Positioning oneself and others politically is an elementary linguistic and social practice. This is shown, for instance, by discussions of European identity in times of the British exit from the EU and a contested EU border policy, by stances on arms deliveries to crisis regions in the wake of the war in Ukraine, which broke out in 2022, and by recurring disputes over topics such as everyday racism, sexism, and discrimination. These examples, which encompass current political events as well as ongoing, repeatedly flaring societal debates about fundamental questions of living together, make one thing clear: where and how we locate ourselves in society is an everyday question. Political positionings are not only performed constantly; like non-positionings, they are just as continuously thematized and controversially discussed. This introduction to the volume introduces the topic of political positioning by clarifying the term and presenting an example from practice.
This conference booklet provides information about the 10th International Contrastive Linguistics Conference (ICLC-10), which took place in Mannheim, Germany, from 18 to 21 July 2023. It contains
– a description of the conference aims,
– details on the conference venue,
– information on committees,
– the conference program,
– the abstracts of the keynotes, oral and poster presentations, and
– an author index.
Utterances such as "Ich geh Schule" or expressions such as "lan" now seem to have a firm place in the repertoire of many German adolescents. At times there is considerable alarm over these migration-induced innovations, driven by the fear that this is not (proper) German. Yet like every other language, German is constantly changing. Societal change triggered by migration is only one cause of language change. Other processes of change are set off, for example, by the overlay of a more prestigious language, by peaceful neighborly contact over long periods, or by conquests and occupations. In the course of globalization, processes characterized by increased mobility, multilingualism, and new means of communication are also coming to the fore.
Der deutschen Muttersprachen
(2023)
This manual introduces a conversation-analytically informed coding scheme for episodes involving the direct social sanctioning of problem behavior in informal social interaction, developed in the project Norms, Rules, and Morality across Languages (NoRM-aL) at the Leibniz-Institute for the German Language. It outlines the background of the scheme's development, delimits the phenomena to which it can be applied, and provides instructions for its use.
The scheme asks for basic information about the recording and the participants involved in the episode before taking stock of different features of the sanctioning episode as a whole. This is followed by sets of specific coding questions about the sanctioning move itself (such as its timing and composition) and the reaction it engenders. The coding enables researchers to get a bird's-eye view of recurrent features of such episodes in larger quantities of data and allows for comparisons across different languages and informal settings.
“Die Sprach-Checker” (Eng. “Language Checkers”) are young citizen scientists from Mannheim’s highly diverse district Neckarstadt-West. Together with linguists, they investigate a tremendous treasure: their own multilingualism. They are exploring and (re)discovering their own languages and the other languages used in their environment while documenting and reflecting on their everyday experiences in and with different linguistic practices. Our aim is to raise awareness of their strengths and to promote appreciation for their language biographies, thus fostering a sense of identification with one’s own linguistic surroundings. Such a joint research endeavour offers empirical opportunities to address (linguistic) issues of societal relevance by collecting authentic data from the multicultural district and involving its residents and local stakeholders. In this paper, we will provide insights regarding the project’s background, conception, and outcomes. We address everyone who is planning or conducting a citizen science project with young people, especially children and adolescents, or who works at the interface between science and society.
Warum gibt es Futur II?
(2023)
Poster by the Text+ partner Leibniz-Institut für Deutsche Sprache Mannheim, presented at the workshop "Wohin damit? Storing and reusing my language data" on 22 June 2023 in Mannheim. The poster was produced in the context of the work of the association Nationale Forschungsdateninfrastruktur (NFDI) e.V. NFDI is funded by the Federal Republic of Germany and the 16 federal states; the Text+ consortium is funded by the Deutsche Forschungsgemeinschaft (DFG) – project number 460033370. The authors are grateful for this funding and support. Thanks are also due to all institutions and actors committed to the association and its goals.
This paper examines multi-unit turns that allow speakers to retrospectively close the prior sequence while prospectively launching a new sequence, which Schegloff (1986) referred to as interlocking organization. Using English telephone conversations as data, we focus on how multi-unit turns are used for topic shifts and show that interlocking organization operates in conjunction with other phonetic and lexical features, such as increased pitch and overt markers of disjunction (e.g., "listen"). In addition, speakers utilize an audible inbreath placed between the first and the second units as a central interactional resource to project further talk, thereby suppressing speaker transition and possibly highlighting the action delivered in the second unit as distinctly new. We propose that interlocking multi-unit turns, when used to make topically disjunctive moves, promote progressivity by avoiding a possible lapse in turn transition.
This contribution summarizes the lessons learned from the organization of a joint conference on text analytics research by the Business, Economic, and Related Data (BERD@NFDI) and Text+ consortia within the National Research Data Infrastructure (NFDI) in Germany. The collaboration aimed to identify common ground and foster interdisciplinary dialogue between scholars in the humanities and in the business domain. The lessons learned include the importance of presenting research questions using textual data to establish common ground, similarities in methodology for processing textual data between the consortia, similarities in research data management, and the need for regular interconsortial discussions on textual analysis methods and data. The collaboration proved valuable for interdisciplinary dialogue within the NFDI, and further collaboration between the consortia is planned.
"Reproducibility crisis" and "empirical turn" are only two of the keywords that come up when giving reasons for research data management. Research data are omnipresent, and with ever more automated data-processing procedures they become even more important. However, just because new methods require and produce data does not mean that those data are easily accessible or reusable, or that they make a difference in a researcher's CV, even though a large portion of research goes into data creation, acquisition, preparation, and analysis. In this talk I will show where data appear in the research process and where we may find appropriate support for data management, and I will advocate a procedure for including data work in research publications and résumés.
This presentation relies on work within the BMBF-funded project CLARIN-D. It also builds on work within the German National Research Data Infrastructure (NFDI) consortium Text+, DFG project number 460033370.
Prediction is a central mechanism in the human language processing architecture. The psycholinguistic and neurolinguistic literature has seen a lively debate about what form prediction may take and what status it has for language processing in the human mind and brain. While predictions are a ubiquitous finding, the implications of these results for models of language processing differ. For instance, eyetracking data suggest that predictions may rely on sublexical orthographic information in natural reading, while electrophysiological data provide mixed evidence for form-based predictions during reading. Other research has revealed that humans rapidly adapt to text specifics and that their predictive capacity varies, broadly speaking, in accordance with inter- and intra-individual language proficiency, which cuts across the speaker groups (e.g. L1 vs. L2 speakers, skilled vs. untrained readers) traditionally used for experimental contrasts. There is therefore evidence that the kind and strength of linguistic predictions depend on (at least) three sources of variability in language processing: speaker, text genre and experimental method.
The aim of this Research Topic is to develop a better understanding of prediction in light of the three sources of variability in language processing, by providing an overview of state-of-the-art research on predictive language processing and by bringing together research from various disciplines.
First, intra- and inter-individual differences and their influence on predictive processes remain underrepresented in experimental research on predictive processing. How do language users differ in their predictive abilities and strategies, and how are these differences shaped by, e.g., biological, social, and cultural factors?
Second, while language users experience great stylistic diversity in their daily language exposure and use, the majority of language processing research still focuses on a very constrained register of well-controlled sentences composed in the standard language. How are predictions shaped by extra- and meta-linguistic context, such as register/genre or accent/speaker identity, and how may this influence the processing of experimental items in another language or text variety?
Third, the Research Topic invites contributions that make use of a multi-method approach, such as combined behavioral and electrophysiological measures or experimental methods combined with measures extracted from corpus data. What opportunities and challenges do we face when integrating multiple approaches to examine linguistic, experimental and individual differences in human predictive capacity?
We welcome contributions from all areas of empirical psycho- and neurolinguistics, but contributions must explicitly address variability and variation in language and language processing. Relevant topics include individual differences and the impact of genre, modality, register and language variety. Contributions that go beyond single word and single sentence paradigms are especially desirable. Experimental, corpus-based, meta-analytic and review papers, as well as theoretical/opinion pieces are welcome; however, papers of the latter type should support their arguments with substantial empirical evidence from the literature. Particularly desirable are contributions which combine topics and/or methods, such as the impact of an individual's native dialect on processing of constructions that show variability in the standard language (e.g. choice of auxiliary, agreement of mass nouns, etc.) or experimental methods combined with measures extracted from corpus data such as information-theoretic surprisal.
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
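The fold-based design lends itself to incremental analysis of the kind described in the case study. As a minimal sketch (this is not the official DeReKoGram access script; the toy folds below merely stand in for the 16 corpus folds), cumulative vocabulary size and hapax-legomenon counts can be tracked as successive folds are added:

```python
from collections import Counter

def vocab_growth(folds):
    """Cumulative (vocabulary size, hapax count) after each fold is added.
    A hapax legomenon is a type seen exactly once so far."""
    counts = Counter()
    growth = []
    for fold in folds:
        counts.update(fold)
        vocab = len(counts)
        hapaxes = sum(1 for c in counts.values() if c == 1)
        growth.append((vocab, hapaxes))
    return growth

# Toy token lists standing in for the 16 DeReKoGram folds.
folds = [
    ["der", "die", "das", "der"],
    ["die", "und", "oder"],
    ["das", "nicht", "und"],
]
print(vocab_growth(folds))  # [(3, 2), (5, 3), (6, 2)]
```

With the real dataset, each fold would be a stream of lemmas read from the corresponding fold file rather than an in-memory list.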
Computational language models (LMs), most notably exemplified by the widespread success of OpenAI's ChatGPT chatbot, show impressive performance on a wide range of linguistic tasks, thus providing cognitive science and linguistics with a computational working model to empirically study different aspects of human language. Here, we use LMs to test the hypothesis that languages with more speakers tend to be easier to learn. In two experiments, we train several LMs—ranging from very simple n-gram models to state-of-the-art deep neural networks—on written cross-linguistic corpus data covering 1293 different languages and statistically estimate learning difficulty. Using a variety of quantitative methods and machine learning techniques to account for phylogenetic relatedness and geographical proximity of languages, we show that there is robust evidence for a relationship between learning difficulty and speaker population size. However, contrary to expectations derived from previous research, our results suggest that languages with more speakers tend to be harder to learn.
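At the simple end of the model spectrum the paper mentions, a held-out perplexity score from an n-gram model can serve as a rough learning-difficulty proxy. The sketch below is purely illustrative (the add-alpha smoothing choice and all names are our assumptions, not the study's actual setup): lower perplexity on held-out text suggests the training data made the language easier to predict.

```python
import math
from collections import Counter

def bigram_perplexity(train, heldout, alpha=1.0):
    """Held-out perplexity of an add-alpha-smoothed bigram model.
    Lower perplexity ~ easier to learn (illustrative proxy only)."""
    vocab_size = len(set(train) | set(heldout))
    unigrams = Counter(train)
    bigrams = Counter(zip(train, train[1:]))
    log_prob, n = 0.0, 0
    for prev, cur in zip(heldout, heldout[1:]):
        p = (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab_size)
        log_prob += math.log(p)
        n += 1
    return math.exp(-log_prob / n)

train = ["a", "b"] * 50
print(bigram_perplexity(train, ["a", "b"] * 10))  # low: pattern seen in training
print(bigram_perplexity(train, ["a", "c"] * 10))  # high: contains unseen bigrams
```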
Recent years have seen a growing interest in grammatical variation, a core explanandum of grammatical theory. The present volume explores questions that are fundamental to this line of research: First, the question of whether variation can always and completely be explained by intra- or extra-linguistic predictors, or whether there is a certain amount of unpredictable – or ‘free’ – grammatical variation. Second, the question of what implications the (in-)existence of free variation would hold for our theoretical models and the empirical study of grammar. The volume provides the first dedicated book-length treatment of this long-standing topic. Following an introductory chapter by the editors, it contains ten case studies on potentially free variation in morphology and syntax drawn from Germanic, Romance, Uralic and Mayan.
Allusion
(2023)
Assessment
(2023)
Most broadly, an assessment is a type of social action by which an interactant expresses an evaluative stance towards someone or something (e.g., an object, an event, an action, an experience, a state of affairs, a place, a circumstance, etc.). The target of an assessment is typically called the ‘assessable’.
Collaborative work in NFDI
(2023)
The non-profit association National Research Data Infrastructure (NFDI) promotes science and research through a national research data infrastructure. Its aim is to develop and establish overarching research data management (RDM) for Germany and to increase the efficiency of the entire German science system. After a two-and-a-half-year build-up phase, the process of adding new consortia, each representing a different data domain, ended in March 2023. NFDI now has 26 disciplinary consortia (and one additional basic-service collaboration). The full extent of cross-consortial interaction is now beginning to show.
KoMuX, the Compound Pattern Explorer (www.owid.de/plus/komux), is a web application that makes it possible to search more than 50,000 German nominal compounds for abstract or partially lexically specified patterns. Various visualizations help users grasp structures and relationships within the result set.
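To illustrate the kind of query such a tool supports, a partially lexically specified pattern can be approximated with simple wildcard matching (a toy sketch; the compound list and pattern syntax below are invented for illustration and are not KoMuX's actual interface):

```python
import fnmatch

def match_pattern(pattern, compounds):
    """Return compounds matching a pattern such as '*werk'
    (second constituent fixed, first constituent free)."""
    return [c for c in compounds if fnmatch.fnmatch(c.lower(), pattern.lower())]

compounds = ["Handwerk", "Netzwerk", "Handtuch", "Feuerwerk"]
print(match_pattern("*werk", compounds))  # ['Handwerk', 'Netzwerk', 'Feuerwerk']
```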
Retro-sequence
(2023)
The Data Governance Act was proposed in late 2020 as part of the European Strategy for Data and adopted on 30 May 2022 (as Regulation 2022/868). It will enter into application on 24 September 2023. The Data Governance Act is a major development in the legal framework affecting CLARIN and the whole language community. With its new rules on the re-use of data held by public sector bodies and on the provision of data-sharing services, and especially its encouragement of data altruism, the Data Governance Act creates new opportunities and new challenges for CLARIN ERIC. This paper analyses the provisions of the Data Governance Act and aims to initiate the debate on how they will impact CLARIN and the whole language community.
For many reasons, Mennonite Low German is a language whose documentation and investigation is of great importance for linguistics. To date, most research projects that deal with this language and/or its speakers have had a relatively narrow focus, with many of the data cited being of limited relevance beyond the projects for which they were collected. In order to create a resource for a broad range of researchers, especially those working on Mennonite Low German, the dataset presented here has been transformed into a structured and searchable corpus that is accessible online. The translations of 46 English, Spanish, or Portuguese stimulus sentences into Mennonite Low German by 321 consultants form the core of the MEND corpus (Mennonite Low German in North and South America) in the Archive for Spoken German. In addition to describing the origin of this corpus and discussing possibilities and limitations for further research, we discuss the technical structure and search possibilities of the Database for Spoken German. Among other things, this database allows for a structured search of metadata, a context-sensitive token search, and the generation of virtual corpora that can be shared with others. Moreover, thanks to its text-sound alignment, one can easily switch from a particular text section of the corpus to the corresponding audio section. Aside from the desire to equip the reader with the technical knowledge necessary to use this corpus, a further goal of this paper is to demonstrate that the corpus still offers many possibilities for future research.
Conventional terminology resources reach their limits when it comes to the automatic content classification of texts in the domain of expert-layperson communication. This can be attributed to the fact that (non-normalized) language usage does not necessarily reflect the terminological elements stored in such resources. We present several strategies for extending a terminological resource with term-related elements in order to optimize the automatic content classification of expert-layperson texts.
We present a collection of (currently) about 5,500 commands directed to voice-controlled virtual assistants (VAs) by sixteen initial users of a VA system in their homes. The collection comprises recordings captured by the VA itself and with a conditional voice recorder (CVR) selectively capturing the VA-directed commands plus some surrounding context. Alongside a description of the collection, we present initial findings on the patterns of use of the VA systems during the first weeks after installation, including usage timing, the development of usage frequency, the distribution of sentence structures across commands, and (the development of) command success rates. We discuss the advantages and disadvantages of the applied collection-specific recording approach and describe potential research questions that the collection can support in the future, as well as the merit of combining quantitative corpus-linguistic approaches with qualitative in-depth analyses of single cases.
Linguistic studies often work with a distinction between spoken and written language, or between the communication of immediacy and of distance. Assuming a continuum between these poles lends itself to locating a wide variety of forms of utterance, including unconventional text types such as pop songs. We design, implement, and evaluate an automated procedure that uses uncorrelated decision trees (random forests) to make such predictions at the text level. To identify the poles, we define a feature catalogue of linguistic phenomena discussed as markers of immediacy/orality or distance/literality, and apply it to prototypical immediate/oral texts and prototypical distant/written texts. Based on the very good classification quality, we then use the trained classifiers to locate a number of further text types. Pop songs emerge as a "middle" text type that combines linguistically motivated features of different stages of the continuum. We further show that our models locate orally communicated but previously or subsequently transcribed utterances, such as speeches or interviews, entirely differently from prototypical conversational data, and we uncover classification differences for social media varieties. The goal is not a systematic, binding placement on the continuum, but an empirical approach to the question of which comparatively easily machine-computable features ("shallow features") demonstrably influence placement.
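The feature-catalogue step can be sketched as follows: each text is mapped to a small vector of cheaply computable shallow features, which would then be fed to a random-forest classifier. The marker lists here are invented placeholders, not the study's actual catalogue:

```python
import re

# Toy marker lists standing in for the study's feature catalogue.
ORAL_MARKERS = {"ich", "du", "wir", "mal", "ja", "halt"}           # immediacy/orality
LITERAL_MARKERS = {"jedoch", "somit", "hinsichtlich", "folglich"}  # distance/literality

def shallow_features(text):
    """Map a text to a dict of shallow features (rates and lengths)."""
    tokens = re.findall(r"\w+", text.lower())
    n = max(len(tokens), 1)
    return {
        "oral_rate": sum(t in ORAL_MARKERS for t in tokens) / n,
        "literal_rate": sum(t in LITERAL_MARKERS for t in tokens) / n,
        "mean_word_len": sum(map(len, tokens)) / n,
    }

print(shallow_features("Ich sag mal, du hast ja recht"))
```

A classifier trained on such vectors for prototypically oral and prototypically written texts could then score new texts along the continuum.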
The project "Sprachanfragen" (https://www.ids-mannheim.de/gra/projekte2/sprachanfragen/), launched in January 2022, pursues for the first time the goal of collecting language-query data, processing it, and building from it a monitor corpus open to the research community. In addition, a search interface is being developed that makes the language queries systematically analyzable for research. The poster gives an overview of the project, presents first results, and offers an outlook on considerations for designing a chatbot for the automated answering of language queries. A contribution to the 9th conference of the association "Digital Humanities im deutschsprachigen Raum" (DHd 2023, Open Humanities Open Culture).