In this presentation I show first results from an ongoing study of the syntactic complexity of sanctioning turns in spoken language. This study is part of a larger project on the sanctioning of misconduct in social interaction in different European languages (English, German, Italian and Polish). For the study I use video recordings of different everyday settings (family breakfasts, board game interactions and car rides) with three or four participants. These data come from the Parallel European Corpus of Informal Interaction (Kornfeld/Küttner/Zinken 2023; Küttner et al. submitted). I focus on sanctioning turns with more than one turn-constructional unit (on TCUs, see, among others, Sacks/Schegloff/Jefferson 1974; Clayman 2013). The study asks how often TCUs are linked to each other in the different languages, for which functions, and how language diversity enters into this. Note that complex sanctioning turns do not always come as complex sentences.
This article gives an overview of the DFG-funded project Zugänge zu multimodalen Korpora gesprochener Sprache – Vernetzung und zielgruppenspezifische Ausdifferenzierung (ZuMult). It first discusses the language data and the technical basis of the applications on which the project builds. The other contributions in this special issue of KorDaF are then briefly introduced. The overarching topic of ZuMult is improving the accessibility of digital spoken language data for different applications and target groups; this special issue focuses on applications and users from foreign language didactics and from research and teaching in German as a foreign or second language (DaF/DaZ). The individual contributions highlight central methodological and/or technical aspects of this topic and describe the architecture and various prototypical applications developed in the project.
ZuRecht stands for Zugang zur Recherche in Transkripten ("access to search in transcripts"). It is a prototypical implementation of a web-based graphical user interface that provides access to transcripts of spoken language from the Archive for Spoken German (AGD) at the Leibniz Institute for the German Language (IDS). Access is provided via the new interface for searching spoken corpora developed in the ZuMult project. On the one hand, ZuRecht demonstrates the capabilities of the new interface by allowing complex queries over transcriptions of spoken language using CQP, a query language developed specifically for corpus research. On the other hand, ZuRecht is used as an extension of the Database for Spoken German (DGD) and opens up many new research possibilities for DGD users, particularly in the fields of conversation analysis and DaF/DaZ-related research. The article presents the functionalities of ZuRecht in detail and demonstrates their possible uses in the disciplines mentioned with examples.
Post-field syntax and focalization strategies in National Socialist political speech. This paper deals with a syntactic feature of spoken German, namely post-field filling, and with its occurrence in one specific discourse type (political speech) during one significant period in the history of the German language: National Socialism. On the basis of a collection of examples, the paper aims to point out the communicative-pragmatic function of right dislocation in NS political speech.
The QUEST (QUality ESTablished) project aims at ensuring the reusability of audio-visual datasets (Wamprechtshammer et al., 2022) by devising quality criteria and curating processes. RefCo (Reference Corpora) is an initiative within QUEST in collaboration with DoReCo (Documentation Reference Corpus, Paschen et al. (2020)) focusing on language documentation projects. Previously, Aznar and Seifart (2020) introduced a set of quality criteria dedicated to documenting fieldwork corpora. Based on these criteria, we establish a semi-automatic review process for existing and work-in-progress corpora, in particular for language documentation. The goal is to improve the quality of a corpus by increasing its reusability. A central part of this process is a template for machine-readable corpus documentation and automatic data verification based on this documentation. In addition to the documentation and automatic verification, the process involves a human review and potentially results in a RefCo certification of the corpus. For each of these steps, we provide guidelines and manuals. We describe the evaluation process in detail, highlight the current limits for automatic evaluation and how the manual review is organized accordingly.
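The abstract above describes a semi-automatic review built on a machine-readable documentation template plus automatic verification against it. The following is a minimal sketch of that idea; the field names and data layout are illustrative assumptions, not RefCo's actual template.

```python
# Sketch: check a (toy) machine-readable corpus documentation record,
# both for required metadata fields and for consistency between the
# documentation and the corpus content itself.

REQUIRED_FIELDS = {"title", "languages", "annotation_tiers", "licence"}

def verify_documentation(doc: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means 'passes'."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - doc.keys())]
    # Cross-check: every tier named in the documentation must occur in every session.
    for session in doc.get("sessions", []):
        for tier in doc.get("annotation_tiers", []):
            if tier not in session.get("tiers", {}):
                problems.append(f"session {session.get('id')}: documented tier '{tier}' missing")
    return problems

doc = {
    "title": "Toy fieldwork corpus",
    "languages": ["und"],
    "annotation_tiers": ["tx", "mb"],
    "sessions": [{"id": "s1", "tiers": {"tx": [], "mb": []}},
                 {"id": "s2", "tiers": {"tx": []}}],
}
print(verify_documentation(doc))
```

A human review would then work through the residual problem list rather than the whole corpus, which is the division of labour the RefCo process describes.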
This contribution investigates the use of the Czech particle jako (“like”/“as”) in naturally occurring conversations. Inspired by interactional research on unfinished or suspended utterances and on turn-final conjunctions and particles, the analysis aims to trace the possible development of jako from conjunction to a tag-like particle that can be exploited for mobilizing affiliative responses. Traditionally, jako has been described as a conjunction used for comparing two elements or for providing a specification of a first element [“X (is) like Y”]. In spoken Czech, however, jako can be flexibly positioned within a speaking turn and does not seem to operate as a coordinating or hypotactic conjunction. As a result, prior studies have described jako as a polyfunctional particle. This article will try to shed light on the meaning of jako in spoken discourse by focusing on its apparently fuzzy or “filler” uses, i.e., when it is found in a mid-turn position in multi-unit turns and in the immediate vicinity of hesitations, pauses, and turn suspensions. Based on examples from mundane, video-recorded conversations and on a sequential and multimodal approach to social interaction, the analyses will first show that jako frequently frames discursive objects that co-participants should respond to. By using jako before a pause and concurrently adopting specific embodied displays, participants can more explicitly seek to mobilize responsive action. Moreover, as jako tends to cluster in multi-unit turns involving the formulation of subjective experience or stance, it can be shown to be specifically designed for mobilizing affiliative responses. Finally, it will be argued that the potential of jako to open up interactive turn spaces can be linked to the fundamental comparative semantics of the original conjunction.
This contribution deals with the computer-assisted transcription of Arabic-German conversational data for interaction-oriented research. First, key methodological challenges of conversation-analytic work are addressed: with current corpus technology, the use of Arabic script in a multilingual, bidirectional transcript does not allow an analytically adequate reconstruction of the reciprocity, linearity and simultaneity of linguistic action. In addition, the transcription of Arabic conversational data is made difficult by the insufficient (conversation-analytic) study of non-standard varieties and spoken-language phenomena. The second part of the contribution is therefore devoted to the solutions developed and tested so far: a rigorous, conversation-analytically grounded transcription system for spoken Arabic.
We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP- and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system.
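To make the setup above concrete, here is a sketch of the kind of shallow features a decision-tree pronoun resolver for dialogue might extract and score. The feature set, data structures and the hand-written scoring rule are illustrative assumptions, not the authors' system.

```python
# Sketch: rank antecedent candidates for a pronoun in a dialogue
# transcript using a few shallow features. The score() function is a
# hand-written stand-in for a learned decision tree.

def features(pronoun: dict, candidate: dict) -> dict:
    return {
        "distance": pronoun["utterance"] - candidate["utterance"],  # in utterances
        "same_speaker": pronoun["speaker"] == candidate["speaker"],
        "is_np": candidate["type"] == "NP",          # NP- vs. non-NP-antecedent
        "number_agree": pronoun["number"] == candidate["number"],
    }

def score(f: dict) -> int:
    s = 2 if f["number_agree"] else -2
    s += 1 if f["is_np"] else 0
    s -= f["distance"]                               # prefer recent candidates
    return s

pron = {"utterance": 5, "speaker": "A", "number": "sg"}
cands = [
    {"id": "c1", "utterance": 4, "speaker": "B", "type": "NP", "number": "sg"},
    {"id": "c2", "utterance": 2, "speaker": "A", "type": "VP", "number": "pl"},
]
best = max(cands, key=lambda c: score(features(pron, c)))
print(best["id"])  # → c1
```

In a real system the scoring would of course be induced from annotated data (e.g. the Switchboard dialogues mentioned above) rather than written by hand.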
This paper describes the TEI-based ISO standard 24624:2016 ‘Transcription of spoken language’ and other formats used within CLARIN for spoken language resources. It assesses the current state of support for the standard and the interoperability between these formats and with rele- vant tools and services. The main idea behind the paper is that a digital infrastructure providing language resources and services to researchers should also allow the combined use of resources and/or services from different contexts. This requires syntactic and semantic interoperability. We propose a solution based on the ISO/TEI format and describe the necessary steps for this format to work as an exchange format with basic semantic interoperability for spoken language resources across the CLARIN infrastructure and beyond.
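For readers unfamiliar with the format, the following is a radically simplified fragment in the spirit of ISO 24624:2016 (utterances with speaker attributes, containing word tokens), parsed with the Python standard library. A real ISO/TEI document carries namespaces, timeline anchors and metadata that are omitted here.

```python
# Sketch: parse a toy ISO/TEI-style transcription fragment.
# <u> = utterance with a who-attribute, <w> = word token.
import xml.etree.ElementTree as ET

fragment = """
<body>
  <u who="#SPK0"><w>good</w><w>morning</w></u>
  <u who="#SPK1"><w>morning</w></u>
</body>
"""

root = ET.fromstring(fragment)
utterances = [(u.get("who"), [w.text for w in u.findall("w")])
              for u in root.findall("u")]
print(utterances)  # → [('#SPK0', ['good', 'morning']), ('#SPK1', ['morning'])]
```

Because every tool sees the same element names and attributes, a fragment like this can travel between transcription editors, repositories and search services, which is exactly the interoperability argument the paper makes.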
We present an implemented machine learning system for the automatic detection of nonreferential it in spoken dialog. The system builds on shallow features extracted from dialog transcripts. Our experiments indicate a level of performance that makes the system usable as a preprocessing filter for a coreference resolution system. We also report results of an annotation study dealing with the classification of it by naive subjects.
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized, but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker based search mode that enables any speaker’s transcription tier to be the basic tokenization layer whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods of how speaker overlaps can be captured in the TEI based ISO Standard for Spoken Language Transcriptions (ISO 24624:2016) and how they can be queried by MTAS – an open source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions and discuss their benefits and drawbacks.
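The speaker-based search mode described above can be sketched as follows: one speaker's tier is chosen as the basic tokenization layer, and the tokens of other speakers are mapped onto it via their time intervals, so that token distance is well defined even under overlap. The data structures here are illustrative assumptions.

```python
# Sketch: map an overlapping speaker's tokens onto a base speaker's
# tier by start time, so distances can be measured on one layer.

def map_to_base(base: list, other: list) -> dict:
    """For each token of `other`, return the index of the latest
    base-tier token that starts at or before it."""
    mapping = {}
    for tok in other:
        pos = 0
        for i, b in enumerate(base):
            if b["start"] <= tok["start"]:
                pos = i
        mapping[tok["word"]] = pos
    return mapping

base = [{"word": "so", "start": 0.0}, {"word": "what", "start": 0.4},
        {"word": "happened", "start": 0.7}]
overlap = [{"word": "yeah", "start": 0.5}]   # overlaps the base token "what"

m = map_to_base(base, overlap)
print(m["yeah"])  # → 1
```

With "yeah" anchored at position 1 of the base tier, a query like "yeah within two tokens of happened" becomes answerable, which is the point of making one tier the reference layer.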
In this paper we investigate the coverage of the two knowledge sources WordNet and Wikipedia for the task of bridging resolution. We report on an annotation experiment which yielded pairs of bridging anaphors and their antecedents in spoken multi-party dialog. Manual inspection of the two knowledge sources showed that, with some interesting exceptions, Wikipedia is superior to WordNet when it comes to the coverage of information necessary to resolve the bridging anaphors in our data set. We further describe a simple procedure for the automatic extraction of the required knowledge from Wikipedia by means of an API, and discuss some of the implications of the procedure’s performance.
This contribution analyses how verbosity, as a phenomenon of resistance, manifests itself linguistically and interactionally. In psychodynamic therapy, resistance is regarded as a protective function that shields patients from change and inhibits the progress of therapy, yet from a therapeutic perspective it is a valuable indicator of underlying, meaningful patient experiences. The analysis examines three case examples from recorded outpatient psychodynamic therapy sessions. The study identifies the following features of verbosity: a) a topic shift at the beginning of the respective narrative; b) the narratives concern third, non-present persons and/or everyday events; c) emotions are addressed little or not at all; d) the narratives show a high degree of detail. Therapists treat the narratives as verbose only implicitly: through an initially waiting stance, few or no follow-up questions, and by addressing emotions and the significance of what was said for the patients themselves. They also steer the conversation back to the patients or to the previous topic, or relate the narrated story to the current conversational situation.
A comparison of two affirmation markers in sequences of co-construction: voilà and genau
(2016)
This contribution investigates the German response particle genau and the French response particle voilà within collaborative turn sequences in videotaped ordinary conversations. Adopting a conversation analytic approach to cross-linguistic comparison, I will show that the basic epistemic value of both particles allows them to be used in similar sequential environments. When a co-participant formulates a candidate conclusion in environments where it can be easily inferred from previous talk, first speakers may confirm the adequacy of the pre-emptive completion by voilà or genau. These particles may then also be followed by self- or other-repeats. The analyses aim to illustrate that participants rely on a variety of practices in order to positively assess a pre-emptive completion, and to refute a supposed binary opposition of refusal vs. acceptance in the receipt slot.
In German oral discourse, previous research has shown that okay can be used both as a response token (e.g., for agreeing with the previous turn or for claiming a certain degree of understanding) and as a discourse marker (e.g., for closing conversational topics or sequences and/or indicating transitions). This contribution focuses on the use of okay as a response token and on how it is connected with the speakers’ interactional state of knowledge (their understanding, their assumptions, etc.). The analysis is based on video-recorded everyday conversations in German and on a sequential, micro-analytic approach (multimodal conversation analysis). The main function of conversational okay in the selected data set is to indicate the acceptance of prior information. With okay, however, speakers claim acceptance of a piece of information that they cannot verify or check. The analysis contrasts sequences containing okay alone with sequences in which change-of-state tokens such as ah and achso co-occur with okay. This illustrates that okay itself does not index prior information as new, and that it is not used for agreeing with or confirming prior information. Instead, it enables the speaker to adopt a kind of neutral, “non-agreeing” position towards a given piece of information.
This paper aims to describe different patterns of syntactic extensions of turns-at-talk in mundane conversations in Czech. Within interactional linguistics, same-speaker continuations of possibly complete syntactic structures have been described for typologically diverse languages, but have not yet been investigated for Slavic languages. Based on previously established descriptions of various types of extensions (Vorreiter 2003; Couper-Kuhlen & Ono 2007), our initial description shall therefore contribute to the cross-linguistic exploration of this phenomenon. While all previously described forms for continuing a turn-constructional unit seem to exist in Czech, some grammatical features of this language (especially free word order and strong case morphology) may lead to problems in distinguishing specific types of syntactic extensions. Consequently, this type of language allows for critically evaluating the cross-linguistic validity of the different categories and underlines the necessity of analysing syntactic phenomena within their specific action contexts.
This contribution describes the development and application of the TEI-based ISO standard ISO 24624:2016 Transcription of spoken language, which has been used for several years for spoken-language research data from a variety of contexts. A standardized file format enables interoperability between different tools and other services offered by data centres and infrastructures. Thanks to a methodologically grounded balance between standardization and flexibility, the ISO/TEI standard can also represent research data from different research contexts and thus facilitate interdisciplinary projects. The contribution presents several application areas from the life cycle of spoken-language research data in which extensions of existing software solutions based on the ISO/TEI standard have been implemented successfully, and gives further examples of the format's growing adoption.
So-called pragmaticalized multi-word units ("pragmatikalisierte Mehrworteinheiten") are highly frequent in German and are at times subject to far-reaching processes of phonetic reduction. These can produce realization variants that, in retrospect, can be traced back to more than one lexematic source form. Using [ˈzɐmɐ], the present study examines a particularly striking case of this kind by means of a perception experiment.
This paper reports on the efforts of twelve national teams in building the International Comparable Corpus (ICC; https://korpus.cz/icc) that will contain highly comparable datasets of spoken, written and electronic registers. The languages currently covered are Czech, Finnish, French, German, Irish, Italian, Norwegian, Polish, Slovak, Swedish and, more recently, Chinese, as well as English, which is considered to be the pivot language. The goal of the project is to provide much-needed data for contrastive corpus-based linguistics. The ICC corpus is committed to the idea of re-using existing multilingual resources as much as possible and the design is modelled, with various adjustments, on the International Corpus of English (ICE). As such, ICC will contain approximately the same balance of 40 per cent written language and 60 per cent spoken language, distributed across 27 different text types and contexts. A number of issues encountered by the project teams are discussed, ranging from copyright and data sustainability to technical advances in data distribution.
With its current design, the Research and Teaching Corpus of Spoken German (FOLK) is primarily geared towards user groups from linguistic research, but in principle it is also excellently suited to being put to use in action-oriented teaching of German as a foreign language (DaF) and possibly also as a second language (DaZ). Teachers and learners of German as a foreign or second language are a target group of growing societal relevance and also make up a considerable share of the corpus's registered users. Using an exemplary annotation project, this contribution therefore shows how the particular resources and potentials of FOLK can be usefully prepared for DaF teaching, specifically for the aspect of authentic, competent linguistic action in interaction, and made accessible step by step.
I’ve got a construction looks funny – representing and recovering non-standard constructions in UD
(2020)
The UD framework defines guidelines for a crosslingual syntactic analysis in the framework of dependency grammar, with the aim of providing a consistent treatment across languages that not only supports multilingual NLP applications but also facilitates typological studies. Until now, the UD framework has mostly focussed on bilexical grammatical relations. In the paper, we propose to add a constructional perspective and discuss several examples of spoken-language constructions that occur in multiple languages and challenge the current use of basic and enhanced UD relations. The examples include cases where the surface relations are deceptive, and syntactic amalgams that either involve unconnected subtrees or structures with multiply-headed dependents. We argue that a unified treatment of constructions across languages will increase the consistency of the UD annotations and thus the quality of the treebanks for linguistic analysis.
This paper presents the corpus-based lexicographical prototype that was developed within the framework of the third-party-funded project Lexik des gesprochenen Deutsch (LeGeDe). Research on the information offered in dictionaries has shown that there is a need for information on spoken lexis and its interactional functions. The LeGeDe prototype is based on these needs and desiderata and is thus an innovative example of the adequate representation of spoken language in online dictionaries. It has been available online since September 2019 (https://www.owid.de/legede/). In the following sections, after first presenting the project's goals, the data basis, the intended end users, and the applied methods, we illustrate the microstructure of the prototype and the information provided in a dictionary entry based on the lemma eben. Finally, we summarize innovative aspects that are important for the implementation of such a resource.
This contribution focuses on the third-party-funded LeGeDe project and the corpus-based lexicographic prototype for the specifics of spoken German in interaction that was developed during the project's lifetime. The development of a lexicographic resource of this kind builds on wide-ranging experience in compiling corpus-based online dictionaries (in particular at the Leibniz Institute for the German Language, Mannheim) and on current methods of corpus-based lexicology and interaction analysis; as a multimedia prototype for the corpus-based lexicographic treatment of spoken-language phenomena, it occupies an innovative position in modern online lexicography. In the section presenting the LeGeDe project, the contribution deals in detail with the project's research questions, goals, empirical data basis, and empirically surveyed user expectations of a resource on spoken German. The presentation of the complex structure of the LeGeDe prototype is illustrated with numerous examples. Alongside the central information on the macro- and microstructure and the lexicographic outer texts, the manifold cross-referencing and access structures are presented. In addition to the concluding summary, the contribution offers an outlook with extensive suggestions for future lexicographic work with spoken-language corpus data.
CLARIN contractual framework for sharing language data: the perspective of personal data protection
(2020)
The article analyses the responsibility for ensuring compliance with the General Data Protection Regulation (GDPR) in research settings. As a general rule, organisations are considered the data controller (the party responsible for GDPR compliance). Research, however, constitutes a unique setting influenced by academic freedom, which raises the question of whether individual academics could be considered controllers as well. There are some court cases and policy documents on this issue, but it is not yet settled. The analysis serves as a preliminary analytical background for redesigning the CLARIN contractual framework for sharing data.
We present web services which implement a workflow for transcripts of spoken language following the TEI guidelines, in particular ISO 24624:2016 “Language resource management – Transcription of spoken language”. The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
This article describes the development of the digital infrastructure at a research data centre for audio-visual linguistic research data, the Hamburg Centre for Language Corpora (HZSK) at the University of Hamburg in Germany, over the past ten years. The typical resource hosted in the HZSK Repository, the core component of the infrastructure, is a collection of recordings with time-aligned transcripts and additional contextual data, i.e. a spoken language corpus. Since the centre has a thematic focus on multilingualism and linguistic diversity and provides its service to researchers within linguistics and other disciplines, the development of the infrastructure was driven by diverse usage scenarios and user needs on the one hand, and by the common technical requirements for certified service centres of the CLARIN infrastructure on the other. Beyond the technical details, the article also aims to contribute to the discussion on responsibilities and services within emerging digital research data infrastructures and on the fundamental issues in the sustainability of research software engineering. It concludes that, in order to truly cater to user needs across the research data lifecycle, we still need to bridge the gap between discipline-specific research methods in the process of digitalisation and generic digital research data management approaches.
As part of the ZuMult project, we are currently modelling a backend architecture that should provide query access to corpora from the Archive of Spoken German (AGD) at the Leibniz-Institute for the German Language (IDS). We are exploring how to reuse existing search engine frameworks that provide full-text indices and allow corpora to be queried with one of the corpus query languages (QLs) established and actively used in the corpus research community. For this purpose, we tested MTAS, an open-source Lucene-based search engine for querying text with multilevel annotations. We applied MTAS to three oral corpora stored in the TEI-based ISO standard for transcriptions of spoken language (ISO 24624:2016). These corpora differ from the corpus data that MTAS was developed for, because they include interactions with two or more speakers and are enriched, inter alia, with timeline-based annotations. In this contribution, we report our test results and address issues that arise when search frameworks originally developed for querying written corpora are transferred to the field of spoken language.
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a lossless standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard ‘Transcription of spoken language’ with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the “Tsakorpus” search platform. This step allows us to make the corpora available through a web-based search interface. In addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
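The second pipeline step can be sketched in a few lines: an ISO/TEI-style utterance is read and re-emitted as the kind of token-list JSON a search platform consumes. The JSON field names below are illustrative assumptions, not Tsakorpus's actual schema.

```python
# Sketch: convert a toy ISO/TEI-style utterance into a JSON document
# suitable for indexing by a corpus search platform.
import json
import xml.etree.ElementTree as ET

tei = '<u who="#SPK0" start="0.0" end="1.2"><w>min</w><w>qorau</w></u>'
u = ET.fromstring(tei)

doc = {
    "speaker": u.get("who"),
    "start": float(u.get("start")),
    "end": float(u.get("end")),
    "tokens": [{"wf": w.text, "off": i} for i, w in enumerate(u.findall("w"))],
}
print(json.dumps(doc, ensure_ascii=False))
```

A real converter additionally carries over morpheme segmentation and timeline-based annotation tiers, but the shape of the transformation (structured XML in, flat searchable records out) is the same.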
We present a descriptive analysis of the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset of its size for opinion role extraction. Our analysis discusses the individual properties of the three components (subjective expressions, sources and targets) and their relations to one another. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
Automatic division of spoken language transcripts into sentence-like units is a challenging problem, caused by disfluencies, ungrammatical structures and the lack of punctuation. We present experiments on dividing up German spoken dialogues where we investigate the impact of task setup and data representation, encoding of context information as well as different model architectures for this task.
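The task setup above can be made concrete with a small sketch: a model tags each transcript token with a boundary label, and the sentence-like units are then read off the label sequence. The BIO-style labels here are toy gold labels standing in for model predictions; the representation is an illustrative assumption.

```python
# Sketch: derive sentence-like units from per-token boundary labels,
# the usual formulation when punctuation is unavailable.

def to_segments(tokens, labels):
    """'B' starts a new sentence-like unit, 'I' continues the current one."""
    segments, cur = [], []
    for tok, lab in zip(tokens, labels):
        if lab == "B" and cur:
            segments.append(cur)
            cur = []
        cur.append(tok)
    if cur:
        segments.append(cur)
    return segments

tokens = ["ja", "also", "ich", "weiß", "nicht", "gut", "machen", "wir"]
labels = ["B", "I", "I", "I", "I", "B", "I", "I"]
print(to_segments(tokens, labels))
```

The modelling question the paper investigates is then how well different architectures and context encodings predict the label sequence, not how the segments are read off.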
This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). First of all, it summarizes the theoretical and methodological approaches used in the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data: data from two corpora (written and spoken German) were compared with quantitative methods. The information gathered on the selected headword candidates can be assigned to two different sections: meanings, and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out. The results of these two studies were also taken into account in the development of the prototype. Focusing on the presentation of the resource’s content, the paper shows both the different lexicographical information in selected dictionary entries, and the information offered by the provided hyperlinks and external texts. As a conclusion, it summarizes the most important innovative aspects that were specifically developed for the implementation of such a resource.
Smooth turn-taking in conversation depends in part on speakers being able to communicate their intention to hold or cede the floor. Both prosodic and gestural cues have been shown to be used in this context. We investigate the interplay of pitch movements and hand gestures at locations at which speaker change becomes relevant, comparing their use in German and Swedish. We find that there are some shared functions of prosody and gesture with regard to turn-taking in the two languages, but that these shared functions appear to be mediated by the different phonological demands on pitch in the two languages.
This article begins by presenting the subject, research questions and goals of a study on "absolute" uses of modal verbs in verbal interaction, followed by remarks on the research context, theory, methodology and data basis. The results of the study are presented from three perspectives: first, modal verb uses that research has located between full-verb and ellipsis accounts; second, structures with (grammatical) context references; third, constructions and conventionalized action formats. The article closes with a discussion of the findings and an outlook on the potential of interactional-linguistic findings for the teaching of German as a foreign language.
Online Access Tools for Spoken German: The Resources of the Deutsches Spracharchiv in a Database
(2002)
This paper presents some details of the modernization of the Deutsches Spracharchiv (DSAv) and explores future possibilities for linguistic documentation and analysis using the Web. The Institut für Deutsche Sprache (IDS) in Mannheim is the central institution for linguistic research in Germany; the DSAv at the IDS is the centre for the documentation and study of spoken German. Its archives include the largest collection of sound recordings of spoken German (dialects and colloquial speech, including, for example, many extinct dialects of former German-speaking territories in Eastern Europe), altogether more than 15,000 sound recordings. The insufficient documentation and accessibility of this material has long been felt to be an essential deficit. The ability to edit the sound signal digitally offers much easier access to spoken language. Integrating the already existing information about the corpora and the transcribed texts into an information and full-text database, and linking the data with the acoustic signal (alignment), creates a data pool with considerably better documentation of the materials and fast, direct access to the recordings. The DSAv thus opens up entirely new research questions, for work at the IDS as well as for linguistics in general.
The "21. Arbeitstagung zur Gesprächsforschung" (21st Workshop on Conversation Research), with the overarching theme of comparative conversation research, took place from 21 to 23 March 2018 at the Institut für Deutsche Sprache in Mannheim. The aim of the conference was to bring together researchers who examine authentic interactional data from a comparative perspective. The theme arose from the growing interest in comparative questions within conversation-analytic research. The conference focused specifically on procedures and methods for carrying out comparative studies. The talks, project presentations, and data sessions addressed (1) comparison as a basic analytic operation of conversation analysis, (2) comparisons of alternative resources and practices for specific actions and activities in interaction, and (3) methodological challenges of comparative conversation research.
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation, the segmentation of spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several segmentation schemes have been proposed, some of them syntax-based. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German and show that they are applied consistently across annotators. We also investigated the influence of various interactional settings using a rather simple measure, the word count per segment and unit type, and observed that the word count and the distribution of each unit type differ across interactional settings. We conclude that our syntax-based segmentations reflect interactional properties intrinsic to the social interactions the participants are involved in. This can be used for further analysis of social interaction and opens up the possibility of automatic segmentation of transcripts.
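The word-count-per-segment measure described in the abstract can be illustrated with a minimal sketch. The data layout and the unit-type labels below are invented for illustration; they are not the authors' annotation scheme.

```python
from statistics import mean

# Hypothetical segments: each segment carries a unit-type label and its tokens.
segments = [
    {"unit_type": "main_clause", "tokens": ["das", "weiß", "ich", "nicht"]},
    {"unit_type": "fragment",    "tokens": ["echt"]},
    {"unit_type": "main_clause", "tokens": ["ich", "komme", "morgen"]},
]

def mean_word_count_by_type(segments):
    """Average token count per segment, grouped by unit type."""
    by_type = {}
    for seg in segments:
        by_type.setdefault(seg["unit_type"], []).append(len(seg["tokens"]))
    return {unit_type: mean(counts) for unit_type, counts in by_type.items()}

print(mean_word_count_by_type(segments))
# {'main_clause': 3.5, 'fragment': 1}
```

Comparing such per-type averages across interactional settings is one simple way to make the distributional differences the study reports visible.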
Except for some recent advances in spoken language lexicography (cf. Verdonik & Sepesy Maučec 2017; Hansen & Hansen 2012; Siepmann 2015), traditional lexicographic work is mainly oriented towards written language. In this paper, we describe a method used to identify relevant headword candidates for a lexicographic resource for spoken language currently being developed at the Institute for the German Language (IDS, Mannheim). We describe the challenges of headword selection for a dictionary of spoken language and, after outlining our headword concept, present the corpus-based procedures we used to facilitate the selection. After presenting the results for one-word lemmas, we discuss the opportunities and limitations of our approach.
Notions such as “corpus-driven” versus “theory-driven” bring into focus the specific role of corpora in linguistic research. As for phonology with its intrinsic focus on abstract categorical representation, there is a question of how a strictly corpus-driven approach can yield insight into relevant structures. Here we argue for a more theory-driven approach to phonology based on the concept of a phonological grammar in terms of interacting constraints. Empirical validation of such grammars comes from the potential convergence of the evidence from various sources including typological data, neutralization patterns, and in particular patterns observed in the creative use of language such as acronym formation, loanword adaptation, poetry, and speech errors. Further empirical validation concerns specific predictions regarding phonetic differences among opposition members, paradigm uniformity effects, and phonetic implementation in given segmental and prosodic contexts. Corpora in the narrowest sense (i.e. “raw” data consisting of spontaneous speech produced in natural settings) are useful for testing these predictions, but even here, special purpose-built corpora are often necessary.
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
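The core idea of reconstructing separated particle verbs can be sketched as follows. This is an illustrative simplification, not the authors' implementation; it assumes STTS-style POS tags, where VVFIN marks a finite full verb and PTKVZ a separated verb particle.

```python
def reconstruct_particle_verbs(tagged_tokens):
    """Given (token, tag, lemma) triples for one clause, pair each separated
    particle (PTKVZ) with the preceding finite verb and return reconstructed
    lemmas, e.g. particle 'an' + verb lemma 'fangen' -> 'anfangen'."""
    reconstructed = []
    verb_lemma = None
    for token, tag, lemma in tagged_tokens:
        if tag == "VVFIN":          # remember the most recent finite full verb
            verb_lemma = lemma
        elif tag == "PTKVZ" and verb_lemma is not None:
            reconstructed.append(lemma + verb_lemma)  # prefix particle to verb
            verb_lemma = None
    return reconstructed

# "sie fängt sofort an": finite verb 'fangen' + separated particle 'an'
clause = [("sie", "PPER", "sie"), ("fängt", "VVFIN", "fangen"),
          ("sofort", "ADV", "sofort"), ("an", "PTKVZ", "an")]
print(reconstruct_particle_verbs(clause))  # ['anfangen']
```

In real spoken-language transcripts the pairing is harder than this sketch suggests, since overlaps, self-repairs, and abandoned utterances can separate the particle from its verb or leave it without one, which is precisely the kind of structure the paper discusses.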
In this article I argue for the need for a grammar of spoken language. It would have the same functions as the grammar of written language: describing and explaining the fundamental units of spoken language and their features, and describing how these units are composed and combined. The basic units of a grammar of spoken language are: the sound, the word, the functional unit, the conversational turn, and the conversation itself. Furthermore, the central characteristics of spoken language and their impact on grammar have to be taken into account: interactivity, multimodality, processability, and great variability. After presenting my concept, I discuss three alternative conceptions of a grammar of spoken language: online syntax, construction grammar, and multimodal grammar. The article concludes by discussing the role of spoken language grammar in language and foreign language teaching.
This paper gives an insight into the basic concepts for a corpus-based lexical resource of spoken German, which is being developed by the project "The Lexicon of Spoken German" (Lexik des gesprochenen Deutsch, LeGeDe) at the Institute for the German Language (Institut für Deutsche Sprache, IDS) in Mannheim. The focus of the paper is on initial ideas for semi-automatic and automatic resources that assist the quantitative analysis of the corpus data for the creation of dictionary content. The work is based on the "Research and Teaching Corpus of Spoken German" (Forschungs- und Lehrkorpus Gesprochenes Deutsch, FOLK).
The kick-off workshop "Lexik des gesprochenen Deutsch: Forschungsstand, Erwartungen und Anforderungen an die Entwicklung einer innovativen lexikografischen Ressource" (The Lexicon of Spoken German: state of research, expectations, and requirements for the development of an innovative lexicographic resource) took place on 16 and 17 February 2017 at the Institut für Deutsche Sprache (IDS) in Mannheim. The project "Lexik des gesprochenen Deutsch" (LeGeDe), funded by the Leibniz Association (Leibniz Competition 2016, funding line "Innovative Projects"), started its work at the IDS in September 2016. Its main goal is the creation of a corpus-based electronic resource on the lexicon of spoken German, grounded in lexicological and conversation-analytic studies of authentic spoken-language data.
This paper is about the workflow for construction and dissemination of FOLK (Forschungs - und Lehrkorpus Gesprochenes Deutsch – Research and Teaching Corpus of Spoken German), a large corpus of authentic spoken interaction data, recorded on audio and video. Section 2 describes in detail the tools used in the individual steps of transcription, anonymization, orthographic normalization, lemmatization and POS tagging of the data, as well as some utilities used for corpus management. Section 3 deals with the DGD (Datenbank für Gesprochenes Deutsch - Database of Spoken German) as a tool for distributing completed data sets and making them available for qualitative and quantitative analysis. In section 4, some plans for further development are sketched.