This paper presents an algorithm and an implementation for the efficient tokenization of texts in space-delimited languages, based on a deterministic finite-state automaton. Two representations of the underlying data structure are presented, and a model implementation for German is compared with state-of-the-art approaches. The presented solution is faster than other tools while maintaining comparable quality.
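As a rough illustration of the approach (a minimal sketch assuming a dense transition-matrix representation with invented states and character classes; the paper's actual automaton and its double-array variant are more elaborate):

    # Minimal sketch of tokenization with a deterministic finite state
    # automaton, using a dense transition-matrix representation.
    # The character classes and states below are illustrative only.

    def char_class(c):
        if c.isspace():
            return 0   # whitespace
        if c.isalnum():
            return 1   # word character
        return 2       # punctuation and everything else

    # States: 0 = outside token, 1 = inside word, 2 = inside punctuation run.
    # TRANS[state][char_class] -> next state
    TRANS = [
        [0, 1, 2],
        [0, 1, 2],
        [0, 1, 2],
    ]

    def tokenize(text):
        tokens, state, start = [], 0, 0
        for i, c in enumerate(text):
            nxt = TRANS[state][char_class(c)]
            if nxt != state:          # state change marks a token boundary
                if state != 0:
                    tokens.append(text[start:i])
                start = i
            state = nxt
        if state != 0:
            tokens.append(text[start:])
        return tokens

    print(tokenize("Ein Test, bitte!"))  # ['Ein', 'Test', ',', 'bitte', '!']

Because every character triggers exactly one table lookup, the tokenizer runs in a single linear pass, which is the source of the speed advantage the abstract claims.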
We present the use of count-based and predictive language models for exploring language use in the German Reference Corpus DeReKo. For collocation analysis along the syntagmatic axis, we employ traditional association measures based on co-occurrence counts as well as predictive association measures derived from the output weights of skipgram word embeddings. For inspecting the semantic neighbourhood of words along the paradigmatic axis, we visualize the high-dimensional word embeddings in two dimensions using t-distributed stochastic neighbour embedding (t-SNE). Together, these visualizations provide a complementary, explorative approach to analysing very large corpora in addition to corpus querying. Moreover, we discuss count-based and predictive models with respect to scalability and maintainability in very large corpora.
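As one illustration of the count-based family of association measures (pointwise mutual information is a standard member of this family; the abstract does not commit to this particular measure, and the counts below are invented):

    import math

    def pmi(pair_count, count_w1, count_w2, total):
        """Pointwise mutual information from raw co-occurrence counts."""
        p_xy = pair_count / total
        p_x = count_w1 / total
        p_y = count_w2 / total
        return math.log2(p_xy / (p_x * p_y))

    # Toy counts, invented for illustration:
    print(round(pmi(30, 100, 50, 10_000), 2))  # 5.91

A positive score indicates that the pair co-occurs more often than the unigram frequencies alone would predict, which is what makes it a collocation candidate.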
The debate on the use of personal data in language resources usually focuses — and rightfully so — on anonymisation. However, this very same debate usually ends quickly with the conclusion that proper anonymisation would necessarily cause loss of linguistically valuable information. This paper discusses an alternative approach — pseudonymisation. While pseudonymisation does not solve all the problems (inasmuch as pseudonymised data are still to be regarded as personal data and therefore their processing should still comply with the GDPR principles), it does provide a significant relief, especially — but not only — for those who process personal data for research purposes. This paper describes pseudonymisation as a measure to safeguard rights and interests of data subjects under the GDPR (with a special focus on the right to be informed). It also provides a concrete example of pseudonymisation carried out within a research project at the Institute of Information Technology and Communications of the Otto von Guericke University Magdeburg.
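A minimal sketch of one common pseudonymisation technique, keyed hashing with consistent replacement; this is a generic illustration, not the procedure used in the Magdeburg project:

    # Generic pseudonymisation sketch: replace person names with stable,
    # keyed pseudonyms. Consistency (same name -> same pseudonym) preserves
    # co-reference across a corpus; the secret key stays with the data
    # controller, so re-identification without it is impractical.
    import hmac, hashlib

    SECRET_KEY = b"keep-this-out-of-the-released-data"

    def pseudonym(name: str) -> str:
        digest = hmac.new(SECRET_KEY, name.encode("utf-8"),
                          hashlib.sha256).hexdigest()
        return f"PERSON_{digest[:8]}"

    # The same input always maps to the same placeholder:
    print(pseudonym("Maria Schmidt"))   # e.g. PERSON_3f2a9c1b
    print(pseudonym("Maria Schmidt"))   # identical output

Keeping the mapping deterministic is what preserves the linguistically valuable co-reference structure that plain anonymisation would destroy, while the data remain personal data in the sense of the GDPR.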
Contents:
1. Vasile Pais, Maria Mitrofan, Verginica Barbu Mititelu, Elena Irimia, Roxana Micu and Carol Luca Gasan: Challenges in Creating a Representative Corpus of Romanian Micro-Blogging Text. Pp. 1-7
2. Modest von Korff: Exhaustive Indexing of PubMed Records with Medical Subject Headings. Pp. 8-15
3. Luca Brigada Villa: UDeasy: a Tool for Querying Treebanks in CoNLL-U Format. Pp. 16-19
4. Nils Diewald: Matrix and Double-Array Representations for Efficient Finite State Tokenization. Pp. 20-26
5. Peter Fankhauser and Marc Kupietz: Count-Based and Predictive Language Models for Exploring DeReKo. Pp. 27-31
6. Hanno Biber: “The word expired when that world awoke.” New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times. Pp. 32-35
In this paper, we address two problems in indexing and querying spoken language corpora with overlapping speaker contributions. First, we look into how token distance and token precedence can be measured when multiple primary data streams are available and when transcriptions happen to be tokenized but are not synchronized with the sound at the level of individual tokens. We propose and experiment with a speaker-based search mode that enables any speaker's transcription tier to serve as the basic tokenization layer, whereby the contributions of other speakers are mapped to this given tier. Secondly, we address two distinct methods of how speaker overlaps can be captured in the TEI-based ISO standard for spoken language transcriptions (ISO 24624:2016) and how they can be queried by MTAS, an open-source Lucene-based search engine for querying text with multilevel annotations. We illustrate the problems, introduce possible solutions, and discuss their benefits and drawbacks.
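A minimal sketch of the interval logic behind such a speaker-based mapping, assuming time-aligned contributions (the data structure is invented for illustration; it is not the ISO/TEI or MTAS representation):

    # Sketch: map other speakers' time-aligned contributions onto one
    # speaker's tier chosen as the basic tokenization layer. Contributions
    # are (start, end, speaker, text) tuples; an overlap exists when the
    # time intervals intersect.

    def overlaps(a_start, a_end, b_start, b_end):
        return a_start < b_end and b_start < a_end

    def map_to_tier(base_tier, contributions):
        """For each token on the base tier, collect overlapping
        contributions by other speakers."""
        result = []
        for tok_start, tok_end, token in base_tier:
            hits = [(spk, text)
                    for start, end, spk, text in contributions
                    if overlaps(tok_start, tok_end, start, end)]
            result.append((token, hits))
        return result

    base = [(0.0, 0.4, "so"), (0.4, 0.9, "anyway")]
    others = [(0.3, 0.8, "B", "mhm")]
    print(map_to_tier(base, others))
    # [('so', [('B', 'mhm')]), ('anyway', [('B', 'mhm')])]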
In this paper we investigate the coverage of the two knowledge sources WordNet and Wikipedia for the task of bridging resolution. We report on an annotation experiment which yielded pairs of bridging anaphors and their antecedents in spoken multi-party dialog. Manual inspection of the two knowledge sources showed that, with some interesting exceptions, Wikipedia is superior to WordNet when it comes to the coverage of information necessary to resolve the bridging anaphors in our data set. We further describe a simple procedure for the automatic extraction of the required knowledge from Wikipedia by means of an API, and discuss some of the implications of the procedure’s performance.
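A hedged sketch of what such an API-based extraction step can look like against the public MediaWiki API (the query parameters below are real MediaWiki/TextExtracts options; the paper's bridging-specific processing is not reproduced here):

    # Sketch: fetch the introductory extract of a Wikipedia article via
    # the public MediaWiki API (TextExtracts extension).
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def intro_extract(title: str) -> str:
        params = {
            "action": "query",
            "format": "json",
            "prop": "extracts",
            "exintro": 1,        # introduction section only
            "explaintext": 1,    # plain text, no HTML
            "titles": title,
        }
        pages = requests.get(API, params=params).json()["query"]["pages"]
        return next(iter(pages.values())).get("extract", "")

    print(intro_extract("Anaphora (linguistics)")[:80])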
The thesis describes a fully automatic system for the resolution of the pronouns 'it', 'this', and 'that' in English unrestricted multi-party dialog. Referential relations considered include both normal NP antecedence and discourse-deictic pronouns. The thesis contains a theoretical part with a comprehensive empirical study and a practical part describing machine learning experiments.
We present an implemented system for the resolution of it, this, and that in transcribed multi-party dialog. The system handles NP-anaphoric as well as discourse-deictic anaphors, i.e. pronouns with VP antecedents. Selectional preferences for NP or VP antecedents are determined on the basis of corpus counts. Our results show that the system performs significantly better than a recency-based baseline.
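One way to operationalise such corpus-count-based selectional preferences (a simplified illustration with invented counts, not the system's actual scoring):

    # Simplified illustration of corpus-count-based selectional
    # preferences: given a governing verb, compare how often it was
    # observed with an NP object vs. a clausal/VP complement, and prefer
    # the antecedent type with the higher relative frequency.

    COUNTS = {
        # verb: (count with NP object, count with VP/clausal complement)
        "eat":    (980, 20),
        "regret": (150, 850),
    }

    def preferred_antecedent_type(verb):
        np, vp = COUNTS[verb]
        return "NP" if np / (np + vp) > 0.5 else "VP"

    print(preferred_antecedent_type("eat"))     # NP
    print(preferred_antecedent_type("regret"))  # VP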
In this paper, we present a suite of flexible UIMA-based components for information retrieval research which have been successfully used (and re-used) in several projects in different application domains. Implementing the whole system as UIMA components is beneficial for configuration management, component reuse, implementation costs, analysis and visualization.
This paper introduces LRTwiki, an improved variant of the Likelihood Ratio Test (LRT). The central idea of LRTwiki is to employ a comprehensive domain specific knowledge source as additional “on-topic” data sets, and to modify the calculation of the LRT algorithm to take advantage of this new information. The knowledge source is created on the basis of Wikipedia articles. We evaluate on the two related tasks product feature extraction and keyphrase extraction, and find LRTwiki to yield a significant improvement over the original LRT in both tasks.
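For orientation, the statistic that LRTwiki builds on is the standard log-likelihood ratio over a 2x2 contingency table of observed counts O_ij and expected counts E_ij:

    \[ -2\log\lambda \;=\; 2\sum_{i,j} O_{ij}\,\ln\frac{O_{ij}}{E_{ij}} \]

How LRTwiki folds the Wikipedia-derived on-topic counts into these tables is the paper's contribution and is not reproduced here.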
We present WOMBAT, a Python tool which supports NLP practitioners in accessing word embeddings from code. WOMBAT addresses common research problems, including unified access, scaling, and robust and reproducible preprocessing. Code that uses WOMBAT for accessing word embeddings is not only cleaner, more readable, and easier to reuse, but also much more efficient than code using standard in-memory methods: a Python script using WOMBAT for evaluating seven large word embedding collections (8.7M embedding vectors in total) on a simple SemEval sentence similarity task involving 250 raw sentence pairs completes in under ten seconds end-to-end on a standard notebook computer.
Data sets of publication metadata with manually disambiguated author names play an important role in current author name disambiguation (AND) research. We review the most important data sets used so far and compare their respective advantages and shortcomings. From the results of this review, we derive a set of general requirements for future AND data sets. These include both trivial requirements, like absence of errors and preservation of author order, and more substantial ones, like full disambiguation and adequate representation of publications with a small number of authors and highly variable author names. On the basis of these requirements, we create and make publicly available a new AND data set, SCAD-zbMATH. Both the quantitative analysis of this data set and the results of our initial AND experiments with a naive baseline algorithm show the SCAD-zbMATH data set to be considerably different from existing ones. We consider it a useful new resource that will challenge the state of the art in AND and benefit the AND research community.
Lexicography
(2008)
Lexicon schemas and their use are discussed in this paper from the perspective of lexicographers and field linguists. A variety of lexicon schemas have been developed, with goals ranging from computational lexicography (DATR) through archiving (LIFT, TEI) to standardization (LMF, FSR). A number of requirements for lexicon schemas are given. The lexicon schemas are introduced and compared to each other in terms of conversion and usability for this particular user group, using a common lexicon entry and providing examples for each schema under consideration. The formats are assessed, and a final recommendation is given to potential users: to request standards compliance from the developers of the tools they use. This paper is intended to foster a discussion between authors of standards, lexicographers, and field linguists.
We describe a simple procedure for the automatic creation of word-level alignments between printed documents and their respective full-text versions. The procedure is unsupervised, uses standard, off-the-shelf components only, and reaches an F-score of 85.01 in the basic setup and up to 86.63 when using pre- and post-processing. Potential areas of application are manual database curation (incl. document triage) and biomedical expression OCR.
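A minimal sketch of unsupervised word-level alignment between an OCR token stream and the corresponding full-text tokens using standard components (difflib here stands in for whatever off-the-shelf parts the authors used; the tokens are invented):

    # Sketch: word-level alignment of an OCR token sequence against the
    # corresponding full-text tokens with a standard sequence matcher.
    from difflib import SequenceMatcher

    ocr_tokens  = ["the", "enzyrne", "catalyzes", "the", "reaction"]
    gold_tokens = ["the", "enzyme",  "catalyzes", "the", "reaction"]

    sm = SequenceMatcher(a=ocr_tokens, b=gold_tokens, autojunk=False)
    for tag, a0, a1, b0, b1 in sm.get_opcodes():
        if tag == "equal":
            for i, j in zip(range(a0, a1), range(b0, b1)):
                print(f"align {ocr_tokens[i]!r} <-> {gold_tokens[j]!r}")
        else:  # OCR errors surface as 'replace' / 'insert' / 'delete'
            print(f"{tag}: {ocr_tokens[a0:a1]} <-> {gold_tokens[b0:b1]}")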
pyMMAX2 is an API for processing MMAX2 stand-off annotation data in Python. It provides a lightweight basis for the development of code which opens up the Java- and XML-based ecosystem of MMAX2 for more recent, Python-based NLP and data science methods. While pyMMAX2 is pure Python, and most functionality is implemented from scratch, the API re-uses the complex implementation of the essential business logic for MMAX2 annotation schemes by interfacing with the original MMAX2 Java libraries. pyMMAX2 is available for download at http://github.com/nlpAThits/pyMMAX2.
We introduce a novel scientific document processing task for making previously inaccessible information in printed paper documents available to automatic processing. We describe our data set of scanned documents and data records from the biological database SABIO-RK, provide a definition of the task, and report findings from preliminary experiments. Rigorous evaluation proved challenging due to lack of gold-standard data and a difficult notion of correctness. Qualitative inspection of results, however, showed the feasibility and usefulness of the task.
Questions of datafication are an integral part of digital discourse analysis, not mere preliminaries. Unlike the analysis of language and communication that is not represented digitally, the analysis of digital(ised) discourses necessarily presupposes technical procedures and practices, algorithms, and software that constitute the object of study as a digital datum. The following sections briefly describe recurring aspects of these datafication techniques and practices, in particular with regard to data collection and transformation (section 2), corpus compilation (section 3), annotation (section 4), and approaches to analytical data exploration (section 5). The conclusion summarises the relevance of datafication work for the analytic process (section 6).
This paper analyses how verbosity manifests itself linguistically and interactionally as a phenomenon of resistance. In psychodynamic therapy, resistance is regarded as a protective function that shields patients from change and impedes the progress of therapy; from a therapeutic perspective, however, it is a valuable indicator of underlying, meaningful patient experiences. The analysis examines three case examples from recorded outpatient psychodynamic therapy sessions. The study identifies the following features of verbosity: a) a topic shift at the beginning of the respective narrative; b) the narratives concern third parties who are not present and/or everyday occurrences; c) emotions are addressed little or not at all; d) the narratives exhibit a high degree of detail. Therapists treat the narratives as verbose only implicitly: by initially adopting a wait-and-see stance, asking few if any follow-up questions, and by topicalising emotions and the meaning of what was said for the patients themselves. In addition, they steer the conversation back to the patients or to the previous topic, or transfer the narrated story onto the current conversational situation.
The Lyon team's research task consists in studying the way in which multilingual resources are mobilized in team work within collaborative activities: how they are exploited in a specific way in order both to enhance collaboration and to respect the specificities of the members' linguistic competences and practices within the team. Central to our analytical work, which is inspired by ethnomethodological conversation analysis, is the relationship between multilingual resources and the situated organization of linguistic uses and social practices.
You might not know what a "smombie" is, but you have certainly already met one today. In public streets and places, the so-called "smartphone zombies" regularly cross our paths. They walk slowly, in peculiar ways, their eyes and fingers focused on their smartphone displays. While some cities have already introduced dedicated walking lanes or ground-level traffic signs for smartphone users "on the go", it is not only road safety that is at stake. Frequently hunching over our phones causes cervical pain, we are addicted to likes on social media, and the fear of missing out prevents us from switching off our phones. If asked whether mobile device use is possibly harmful to our bodies and minds, most people would spontaneously agree. Our social skills seem to have been diminishing constantly since smartphones became an everyday tool: we stick to them like glue while waiting for the bus, while walking, while eating, even while being with others. Will we turn into social zombies in the end?
Due to structural parallels, English -er person nouns seem to integrate quickly into the system of German nomina agentis. What is striking, however, and has gone unexamined in previous studies, is the (non-)derivation of feminine forms (Movierung) of these borrowings (Sharon ist Manager alongside Sharon ist Managerin). A questionnaire study with twelve predicative constructions referring to female individuals shows, first, that for most participants (about three quarters) the derived feminine form is the norm. Only two participants never use it. For participants with variable usage, no influence of the participants' gender, age, or regional origin, or of the gender stereotype of the lexeme, could be demonstrated. The variation is, however, influenced by foreign-word status (native lexemes tend to be feminised more often than anglicisms), frequency of use (more frequent lexemes tend to be feminised more often than less frequent ones), and the length of the lexeme (shorter lexemes tend to be feminised more often than longer ones). The statistical analysis is complemented by smaller qualitative observations from the collected responses and from other data sources (especially corpora).
In this paper, we address the question of what steps need to be taken to make scripts used for preparing and/or analysing research data as FAIR as possible. We focus both on reproducibility, i.e. the path from the (raw) data to the results of a study, and on reusability, i.e. the possibility of applying a study's methods to other data by means of the script, and consider the following aspects: working environment, data validation, modularisation, documentation, and licensing.
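A small generic illustration of two of these aspects, data validation and modularisation, in a reusable analysis script (the column names and file format are invented; the paper's concrete recommendations are in the respective sections):

    # Sketch: fail fast on unexpected input, and keep I/O separate from
    # analysis so the module stays importable and reusable on other data.
    import csv
    import sys

    EXPECTED_COLUMNS = {"speaker", "utterance", "timestamp"}

    def validate(fieldnames):
        """Abort with a clear message if the input does not match the
        schema the analysis assumes."""
        missing = EXPECTED_COLUMNS - set(fieldnames or [])
        if missing:
            sys.exit(f"input lacks expected columns: {sorted(missing)}")

    def analyse(path):
        """Count data rows; separated from the CLI entry point so the
        function can be reused on other data sets."""
        with open(path, newline="", encoding="utf-8") as f:
            reader = csv.DictReader(f)
            validate(reader.fieldnames)
            return sum(1 for _ in reader)

    if __name__ == "__main__":   # keeps the module importable for reuse
        print(analyse(sys.argv[1]))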
Having the necessary skills for staying in contact with friends and relatives through digital devices is crucial in today’s world. As the current COVID-19 pandemic shows, this holds especially true for the elderly. Being quarantined and restricted from physically meeting people, various communication technologies are more important than ever for staying social and informed on current events. In nursing homes, staff members are now finding new ways for staying in touch with family members by assisting residents in making video calls with mobile devices.
But what if elderly people cannot rely on personal assistance for accessing these alternative means of communication? This raises the general question of how older people can and do learn to use such technologies. Although the internet is full of guides and instructional videos on how to use smartphones or tablets, they are a cold comfort to someone who may not even know what an internet browser is.
Especially for digital newcomers, the tried and true method of face-to-face instruction is invaluable. While many older people turn to their children or grandchildren for help in all things digital, courses specifically tailored for elderly users are also increasingly popular.
More and more governmental initiatives and associations indeed acknowledge the already existing interest of elderly citizens in digital tools and their growing need to receive customized training (e.g. “SeniorSurf” and “Kansalaisen digitaidot” in Finland or “Silver Tipps” in Germany). For a researcher of social interaction, these courses can also provide a valuable window for discovering what it looks and sounds like to learn to use essential but sometimes alien technologies.
Scholarly general dictionaries of German are today mostly compiled on a corpus basis, i.e. the language they describe is investigated empirically prior to its lexicographic description. These corpora, however, like large linguistic text collections of German in general, are dominated by newspaper texts. The collocations and typical contexts of use described in dictionaries are therefore at least partly based on this text type. In a case study of Mann ('man') and Frau ('woman'), we examine how strongly the description of such collocation sets would change if, instead of newspapers, popular magazines or fictional texts were used as the corpus basis, and how differently gender stereotypes would accordingly be represented. In doing so, we also discuss whether newspaper texts in this case provide an adequate and well-rounded picture of standard usage. On a more general level, this touches on a fundamental problem of corpus-linguistic research, namely the question of the extent to which corpora can paint an 'objective' picture of linguistic reality at all.
Our research task consists in studying the way in which multilingual resources are mobilized in team work within collaborative activities: how they are exploited in a specific way in order both to enhance collaboration and to respect the specificities of the members' linguistic competences and practices within the team. Central to our analytical work, which is inspired by ethnomethodological conversation analysis, is the relationship between multilingual resources and the situated organization of linguistic uses and social practices. These two aspects are reflexively articulated: multilingual resources are shaped by the very contexts of their use, and activities are constrained and thus structured by the available resources.
The Lyon team studies the way in which plurilingual resources are mobilized in collaborative activities within teamwork. The analytic approach is inspired by Conversation Analysis of ethnomethodological orientation and regards as central the relationship between plurilingual resources and the situated organization of linguistic uses and social practices. These two aspects are reflexively articulated: plurilingual resources are shaped by their context of use, and activities are mutually constrained and structured by the available resources.
This article proposes a reflection on the possible convergences between, on the one hand, a linguistic perspective on scientific interaction that is receptive to the "analytic mentality" of ethnomethodologically informed conversation analysis and, on the other hand, a perspective on scientific practices as advocated by the social studies of science, to which ethnomethodological studies of scientific work have contributed in a central way. This reflection is not carried out in the abstract but on the basis of empirical data, namely audio and video recordings of interactions between researchers in the course of their everyday scientific work. The analysed transcript excerpts allow us to demonstrate concretely the basic premises underlying our analytic practice; we pay particular attention to the methods by which the interaction partners of a first speaker continue the conversation after that speaker has uttered a proposition, in order to accept, modify, or reject this first proposition. Such an empirical survey allows us to clarify how the researchers' interactional practices intervene in the process by which scientific knowledge emerges, in the appearance of arguments, theses, and ideas that may either prevail and stabilise or, on the contrary, remain unstable and controversial, i.e. that succeed or, conversely, fail to crystallise into objects of knowledge.
The Centre de Sociologie de l'Innovation (CSI) of the Ecole des Mines in Paris is a stronghold of the sociology of science, where the works of Bruno Latour and Michel Callon were produced. Their investigations triggered a series of analyses of scientific practices that are sometimes subsumed, especially in the Anglophone literature, under the label "Actor-Network Theory" (ANT). This fundamental contribution to the sociology of science is characterised by heightened attention both to the practices of scientists, to "science in action", to objects, artefacts, and technical devices, and to the networks in which humans and non-humans assemble and circulate. A group of CSI researchers, Madeleine Akrich, Antoine Hennion, and Vololona Rabeharisoa, have kindly agreed to discuss quite freely, in the following text, the topic of the present issue of ZBBS and the way in which, in their fields of research and in their work, they position themselves with respect to the questions raised by taking social interactions in scientific work processes into account.
Erzählen multimodal
(2018)
We are witnessing an emerging digital revolution. For the past 25–30 years, at an increasing pace, digital technologies—especially the internet, mobile phones and smartphones—have transformed the everyday lives of human beings. The pace of change will increase, and new digital technologies will become even more tightly entangled in human everyday lives. Artificial intelligence (AI), the Internet of Things (IoT), 6G wireless solutions, virtual reality (VR), augmented reality (AR), extended reality (XR), robots and various platforms for remote and hybrid communication will become embedded in our lives at home, work and school.
Digitalisation has been identified as a megatrend, for example, by the OECD (2016; 2019). While digitalisation processes permeate all aspects of life, special attention has been paid to its impact on the ageing population, everyday communication practices, education and learning and working life. For example, it has been argued that digital solutions and technologies have the potential to improve quality of life, speed up processes and increase efficiency. At the same time, digitalisation is likely to bring with it unexpected trends and challenges. For example, AI and robots will doubtlessly speed up or take over many routine-based work tasks from humans, leading to the disappearance of certain occupations and the need for re-education. This, in turn, will lead to an increased demand for skills that are unique to humans and that technologies are not able to master. Thus, developing human competences in the emerging digital era will require not only the mastering of new technical skills, but also the advancement of interpersonal, emotional, literacy and problem-solving skills.
It is important to identify and describe the digitalisation phenomena—pertaining to individuals and societies—and seek human-centric answers and solutions that advance the benefits of and mitigate the possible adverse effects of digitalisation (e.g. inequality, divisions, vulnerability and unemployment). This requires directing the focus on strengthening the human skills and competences that will be needed for a sustainable digital future. Digital technologies should be seen as possibilities, not as necessities.
There is a need to call attention to the co-evolutionary processes between humans and emerging digital technologies—that is, the ways in which humans grow up with and live their lives alongside digital technologies. It is imperative to gain in-depth knowledge about the natural ways in which digital technologies are embedded in human everyday lives—for example, how people learn, interact and communicate in remote and hybrid settings or with artificial intelligence; how new digital technologies could be used to support continuous learning and understand learning processes better and how health and well-being can be promoted with the help of new digital solutions.
Another significant consideration revolves around the co-creation of our digital futures. Important questions to be asked are as follows: Who are the ones to co-create digital solutions for the future? How can humans and human sciences better contribute to digitalisation and define how emerging technologies shape society and the future? Although academic and business actors have recently fostered inclusion and diversity in their co-creation processes, more must be done. The empowerment of ordinary people to start acting as active makers and shapers of our digital futures is required, as is giving voice to those who have traditionally been silenced or marginalised in the development of digital technology. In the emerging co-creation processes, emphasis should be placed on social sustainability and contextual sensitivity. Such processes are always value-laden and political and intimately intertwined with ethical issues.
Constant and accelerating change characterises contemporary human systems, our everyday lives and the environment. Resilience thinking has become one of the major conceptual tools for understanding and dealing with change. It is a multi-scalar idea referring to the capacity of individuals and human systems to absorb disturbances and reorganise their functionality while undergoing a change. Based on the evolving new digital technologies, there is a pressing need to understand how these technologies could be utilised for human well-being, sustainable lifestyles and a better environment. This calls for analysing different scales and types of resilience in order to develop better technology-based solutions for human-centred development in the new digital era.
This white paper is a collaborative effort by researchers from six faculties and groups working on questions related to digitalisation at the University of Oulu, Finland. We have identified questions and challenges related to the emerging digital era and suggest directions that will make possible a human-centric digital future and strengthen the competences of humans and humanity in this era.
Zur Sprachenpolitik der EG
(1991)
Special Issue: Mobile Medienpraktiken im Spannungsfeld von Öffentlichkeit, Privatheit und Anonymität
(2019)
In conversation, turn-taking is usually fluid, with next speakers taking their turn right after the end of the previous turn. Most, but not all, previous studies show that next speakers start to plan their turn early, if possible already during the incoming turn. The present study makes use of the list-completion paradigm (Barthel et al., 2016), analyzing speech onset latencies and eye-movements of participants in a task-oriented dialogue with a confederate. The measures are used to disentangle the contributions to the timing of turn-taking of early planning of content on the one hand and initiation of articulation as a reaction to the upcoming turn-end on the other hand. Participants named objects visible on their computer screen in response to utterances that did, or did not, contain lexical and prosodic cues to the end of the incoming turn. In the presence of an early lexical cue, participants showed earlier gaze shifts toward the target objects and responded faster than in its absence, whereas the presence of a late intonational cue only led to faster response times and did not affect the timing of participants' eye movements. The results show that with a combination of eye-movement and turn-transition time measures it is possible to tease apart the effects of early planning and response initiation on turn timing. They are consistent with models of turn-taking that assume that next speakers (a) start planning their response as soon as the incoming turn's message can be understood and (b) monitor the incoming turn for cues to turn-completion so as to initiate their response when turn-transition becomes relevant.
Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre-recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence-final verbs evokes larger task-evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn-taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time course of production planning during background speech. E3 requires participants to verbally answer questions as a base task; in critical trials, however, participants switch to visual lexical decision just after they have begun planning their answer. The task switch is time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least up to word form activation.
Esipuhe/Preface
(2020)
Schriftlich-Mündlich
(1990)
In conversation, speakers need to plan and comprehend language in parallel in order to meet the tight timing constraints of turn taking. Given that language comprehension and speech production planning both require cognitive resources and engage overlapping neural circuits, these two tasks may interfere with one another in dialogue situations. Interference effects have been reported on a number of linguistic processing levels, including lexicosemantics. This paper reports a study on semantic processing efficiency during language comprehension in overlap with speech planning, where participants responded verbally to questions containing semantic illusions. Participants rejected a smaller proportion of the illusions when planning their response in overlap with the illusory word than when planning their response after the end of the question. The obtained results indicate that speech planning interferes with language comprehension in dialogue situations, leading to reduced semantic processing of the incoming turn. Potential explanatory processing accounts are discussed.
In the situated negotiation of turn-taking in interaction (Sacks, Schegloff and Jefferson, 1974), participants orient to the possible completion of turn-constructional units. By means of a delayed completion of a previous turn, a speaker can claim the right to speak beyond an intervening turn by another speaker. This article examines different forms of this "delayed completion" (Lerner, 1989) in spoken French. Using the theoretical framework of Conversation Analysis (ten Have, 1999), we show that this practice is not only a matter of problematic turn-taking but also of collaborative sequences, which are closely related to the phenomenon of collaborative syntactic constructions. By attending to these emergent syntactic structures, it is possible to demonstrate the situated and local, turn-by-turn negotiation of the right to speak and of the dynamics of turn-taking in ordinary conversation. On the basis of a collection of excerpts from naturally occurring interactions recorded in audio or video, different ways of claiming or sharing a turn are illustrated. In the analyses, particular attention is paid to several recurrent phenomena in delayed-completion sequences. Thus, the use of certain conjunctions as discourse markers, or the presence of vowel lengthening at the end of the first segment, appears to indicate co-occurrences of audible resources specific to different types of delayed completion in French conversation.