Refine
Year of publication
- 2015 (5)
Document Type
- Part of a Book (3)
- Article (2)
Language
- English (5)
Has Fulltext
- yes (5)
Is part of the Bibliography
- yes (5)
Keywords
- Computerlinguistik (1)
- Error analysis (1)
- Error classification (1)
- Fehleranalyse (1)
- Google Ngram Corpora (1)
- Interaktion (1)
- Interaktionsanalyse (1)
- Linguistische Datenverarbeitung (1)
- Pragmalinguistik (1)
- Raum (1)
Publication state
- Postprint (5)
Reviewstate
- Peer-Review (3)
- Verlags-Lektorat (2)
Publisher
- Amsterdam (1)
- Benjamins (1)
- Springer International Publishing (1)
This paper shows how understanding in interaction is informed by temporality, in particular by the workings of retrospection. Understanding is a temporally extended, sequentially organized process. Temporality, i.e., the sequential relationship of turn positions, equips participants with default mechanisms for displaying understandings and for expecting such displays. These mechanisms presuppose that the local management of turn-taking is in place, i.e., the possibility of, and the expectation of, responding locally and reciprocally to prior turns at talk. The sequential positions of turns in interaction provide an infrastructure for displaying understanding and accomplishing intersubjectivity. Linguistic practices specialized in displaying particular kinds of (not) understanding are adapted to the individual sequential positions relative to an action-to-be-understood.
The authors establish a phenomenological perspective on the temporal constitution of experience and action. Retrospection and projection (i.e., the backward and forward orientation of everyday action), sequentiality and the sequential organization of activities, as well as simultaneity (i.e., participants’ simultaneous coordination) are introduced as key concepts of a temporalized approach to interaction. These concepts capture the fact that every action is produced as an interlinked step in a succession of adjacent actions, sensitive to the precise moment at which it is produced. Adopting a holistic, multimodal, and praxeological perspective additionally shows that action in interaction is organized according to several temporal orders operating simultaneously. Each multimodal resource used in interaction has its own temporal properties.
Using the Google Ngram Corpora for six different languages (including two varieties of English), a large-scale time series analysis is conducted. It is demonstrated that diachronic changes in the parameters of the Zipf–Mandelbrot law (and the parameter of the Zipf law, all estimated by maximum likelihood) can be used to quantify and visualize important aspects of linguistic change (as represented in the Google Ngram Corpora). The analysis also reveals important cross-linguistic differences. It is argued that the Zipf–Mandelbrot parameters can serve as a first indicator of diachronic linguistic change, but that more thorough analyses should draw on the full spectrum of lexical, syntactic, and stylometric measures to fully understand the factors that actually drive those changes.
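The parameter estimation described in this abstract can be sketched as follows. This is a minimal, stdlib-only illustration: the function names and the synthetic rank-frequency data are our own, and a crude grid search stands in for the continuous optimizer a real maximum-likelihood analysis would use.

```python
import math

def zipf_mandelbrot_loglik(counts, a, b):
    """Log-likelihood of rank-frequency counts under f(r) ~ (r + b)^-a.

    counts[r-1] is the frequency of the rank-r word; the distribution
    is normalized over the observed ranks only.
    """
    ranks = range(1, len(counts) + 1)
    log_z = math.log(sum((r + b) ** -a for r in ranks))
    return sum(n * (-a * math.log(r + b) - log_z)
               for r, n in zip(ranks, counts))

def fit_zipf_mandelbrot(counts, a_grid, b_grid):
    # Maximum-likelihood fit by exhaustive grid search (illustrative only).
    return max(((a, b) for a in a_grid for b in b_grid),
               key=lambda ab: zipf_mandelbrot_loglik(counts, *ab))

# Synthetic rank-frequency profile generated from known parameters.
true_a, true_b = 1.2, 2.7
z = sum((r + true_b) ** -true_a for r in range(1, 501))
counts = [round(100_000 * (r + true_b) ** -true_a / z)
          for r in range(1, 501)]

a_hat, b_hat = fit_zipf_mandelbrot(counts,
                                   a_grid=[1.0, 1.1, 1.2, 1.3],
                                   b_grid=[0.0, 1.0, 2.7, 5.0])
```

Refitting `a_hat` and `b_hat` on frequency lists binned by year would yield the kind of diachronic parameter series the study tracks.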
Learning from Errors. Systematic Analysis of Complex Writing Errors for Improving Writing Technology
(2015)
In this paper, we describe ongoing research on writing errors, with the ultimate goal of developing error-preventing editing functions for word processors. Drawing on state-of-the-art error research from various fields, we propose applying the general concept of action slips introduced by Norman. We demonstrate the feasibility of this approach using a large corpus of writing errors in published texts. The concept of slips considers both the process and the product: a failure in a procedure results in an error in the product, i.e., one that is visible in the written text. To develop preventive functions, we need to determine the causes of such visible errors.
We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question of whether these disciplines develop a distinctive language use—both individually and collectively—over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based feature extraction (various aggregated part-of-speech-based features, n-grams, lexico-grammatical patterns) with automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
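The feature-extraction-plus-classification pipeline this abstract describes can be sketched in miniature. The sketch below is our own illustration, not the study's implementation: it uses word unigram features and a multinomial naive Bayes classifier, and the two-discipline toy data merely stands in for discipline-labelled scitex documents.

```python
import math
from collections import Counter, defaultdict

def ngrams(tokens, n=1):
    """Contiguous word n-grams, the simplest of the feature types mentioned."""
    return [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def train(labeled_docs, n=1):
    """Multinomial naive Bayes over n-gram features (toy register classifier)."""
    feat_counts = defaultdict(Counter)   # per-label feature frequencies
    doc_counts = Counter()               # per-label document counts (prior)
    vocab = set()
    for label, tokens in labeled_docs:
        feats = ngrams(tokens, n)
        feat_counts[label].update(feats)
        doc_counts[label] += 1
        vocab.update(feats)
    return feat_counts, doc_counts, vocab, n

def predict(model, tokens):
    feat_counts, doc_counts, vocab, n = model
    feats = ngrams(tokens, n)
    total = sum(doc_counts.values())
    def log_posterior(label):
        # Add-one (Laplace) smoothing over the shared vocabulary.
        denom = sum(feat_counts[label].values()) + len(vocab)
        return (math.log(doc_counts[label] / total)
                + sum(math.log((feat_counts[label][f] + 1) / denom)
                      for f in feats))
    return max(doc_counts, key=log_posterior)

# Toy training data standing in for discipline-labelled scitex texts.
docs = [
    ("computational_linguistics", "treebank parser corpus annotation grammar".split()),
    ("computational_linguistics", "corpus tagger annotation lexicon parser".split()),
    ("bioinformatics", "genome protein sequence alignment expression".split()),
    ("bioinformatics", "protein structure genome motif alignment".split()),
]
model = train(docs)
label = predict(model, "parser corpus annotation".split())
```

Swapping the unigram features for part-of-speech n-grams or lexico-grammatical patterns, while keeping the classifier fixed, is the kind of feature comparison the study's design allows.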