Prediction is a central mechanism in the human language processing architecture. The psycholinguistic and neurolinguistic literature has seen a lively debate about what form prediction may take and what status it has for language processing in the human mind and brain. While predictions are a ubiquitous finding, the implications of these results for models of language processing differ. For instance, eyetracking data suggest that predictions may rely on sublexical orthographic information in natural reading, while electrophysiological data provide mixed evidence for form-based predictions during reading. Other research has revealed that humans rapidly adapt to text specifics and that their predictive capacity varies, broadly speaking, in accordance with inter- and intra-individual language proficiency, which cuts across the speaker groups (e.g. L1 vs. L2 speakers, skilled vs. untrained readers) traditionally used for experimental contrasts. There is therefore evidence that the kind and strength of linguistic predictions depend on (at least) three sources of variability in language processing: speaker, text genre and experimental method.
The aim of this Research Topic is to develop a better understanding of prediction in light of the three sources of variability in language processing, by providing an overview of state-of-the-art research on predictive language processing and by bringing together research from various disciplines.
First, intra- and inter-individual differences and their influence on predictive processes remain underrepresented in experimental research on predictive processing. How do language users differ in their predictive abilities and strategies, and how are these differences shaped by, e.g., biological, social and cultural factors?
Second, while language users experience great stylistic diversity in their daily language exposure and use, the majority of language processing research still focuses on a very constrained register of well-controlled sentences composed in the standard language. How are predictions shaped by extra- and meta-linguistic context, such as register/genre or accent/speaker identity, and how may this influence the processing of experimental items in another language or text variety?
Third, the Research Topic invites contributions that make use of a multi-method approach, such as combined behavioral and electrophysiological measures or experimental methods combined with measures extracted from corpus data. What opportunities and challenges do we face when integrating multiple approaches to examine linguistic, experimental and individual differences in human predictive capacity?
We welcome contributions from all areas of empirical psycho- and neurolinguistics, but contributions must explicitly address variability and variation in language and language processing. Relevant topics include individual differences and the impact of genre, modality, register and language variety. Contributions that go beyond single word and single sentence paradigms are especially desirable. Experimental, corpus-based, meta-analytic and review papers, as well as theoretical/opinion pieces are welcome; however, papers of the latter type should support their arguments with substantial empirical evidence from the literature. Particularly desirable are contributions which combine topics and/or methods, such as the impact of an individual's native dialect on processing of constructions that show variability in the standard language (e.g. choice of auxiliary, agreement of mass nouns, etc.) or experimental methods combined with measures extracted from corpus data such as information-theoretic surprisal.
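As an illustration of the information-theoretic surprisal mentioned above, the following is a minimal sketch of how per-word surprisal, -log2 P(word | preceding word), can be estimated from bigram counts. The toy data and the simple bigram estimate are illustrative assumptions; actual studies derive surprisal from large corpora or language models.

```python
import math
from collections import Counter

def bigram_surprisal(tokens):
    """Surprisal -log2 P(w_i | w_{i-1}) for each token after the first,
    estimated from bigram counts over the same token sequence."""
    # Count each token only in predecessor positions, so that
    # P(w | prev) = count(prev, w) / count(prev as a predecessor).
    prev_counts = Counter(tokens[:-1])
    bigrams = Counter(zip(tokens, tokens[1:]))
    result = []
    for prev, word in zip(tokens, tokens[1:]):
        p = bigrams[(prev, word)] / prev_counts[prev]
        result.append((word, -math.log2(p)))
    return result

for word, s in bigram_surprisal("the cat sat on the mat the cat ran".split()):
    print(f"{word}: {s:.2f} bits")
```

High-surprisal words (rare continuations given their context) are exactly those where prediction-based accounts expect elevated reading times or N400 amplitudes.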
The working group was constituted at the workshop „Querbezüge des Knowledge Engineering zu Methoden des Software Engineering und der Entwicklung von Informationssystemen" at the 2nd German Conference on Expert Systems [AnS93]. Initially, ten different groups and individuals participated in the working group. To focus its work, the group decided to concentrate primarily on the topics of process models and methods. A process model was understood as the "specification of the work steps to be carried out in the development of a system, ... relations between the work steps are to be specified, as are requirements on the results to be produced." [AL0+93]. A method was understood as a "systematic procedure for solving tasks of a certain kind." [AL0+93]. Accordingly, the working group used the term methodology in the sense of a collection of methods. The group further agreed to carry out its work on the basis of a comparative case study. As a variation on the frequently used IFIP example [0SV82], the development of a (knowledge-based) system for conference administration was chosen as the task for the case study. In the course of its work, the working group organized a further workshop, „Vorgehensmodelle und Methoden zur Entwicklung komplexer Softwaresysteme", held at the 18th German Annual Conference on Artificial Intelligence [KuS94]. Unfortunately, the group's ongoing work showed that it is very difficult, especially for members from industry, to participate actively in such a working group over an extended period. As a result, only four groups remained for the final phase of the working group; they are the ones represented in this final report.
It should therefore be clear that this final report cannot be an analysis covering all aspects; rather, it must restrict itself to the conclusions that can be drawn from the methodologies analyzed. Nevertheless, in the authors' view these methodologies embody typical methodological approaches in the fields involved. To enable a systematic comparison of the methodologies, the working group developed a catalogue of criteria with which the characteristic properties of a methodology can be captured [Kri97]. This catalogue of criteria is used in the following to characterize each of the four methodologies in detail.
As has been customary for several years now, this year's IDS annual conference was again accompanied by a methods fair (Methodenmesse), at which application-oriented projects related to lexicon research presented themselves in keeping with the conference theme. The range of topics on display was very broad: innovative methodological approaches in translation studies, tools for the analysis and description of lexical patterns and for the detection of neologisms, new lexicographic resources, as well as infrastructure activities and a cooperation project between school students and researchers on vocabulary analysis. In the following, the individual projects presented at the fair are briefly introduced on the basis of the abstracts submitted by the participants.
Dictionary usage research is a topic of increasing importance within the field of lexicography. At the beginning of the new millennium, the dictionary user was still relatively unknown. In the last ten years, however, more and more user studies have been published, and methods, data and the conclusions that can be drawn from them have been successively refined. New possibilities of web-based data collection, e.g. the analysis of log files, have also enriched this field of research. This contribution aims to describe the state of the art in dictionary usage research in the digital era. I begin by providing a short overview of methodological and terminological basics and then place a special focus on three different methods of collecting empirical data on dictionary use: online questionnaires, eye tracking and the analysis of log files. All these methods are illustrated with user studies conducted at the Institute for the German Language in Mannheim.
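As a sketch of what the log-file analysis mentioned here can look like in practice, the following toy example counts look-ups per headword from web server log lines. The log format, user labels and entry paths are hypothetical, invented for illustration; they are not those of the Mannheim studies.

```python
import re
from collections import Counter

# Hypothetical log lines (timestamp, anonymized user, requested entry);
# the format is invented for illustration only.
log_lines = [
    "2014-03-01T10:02:11 anon42 GET /entry/Haus",
    "2014-03-01T10:02:54 anon17 GET /entry/laufen",
    "2014-03-01T10:03:10 anon42 GET /entry/Haus",
]

# Extract the looked-up headword from each request line.
pattern = re.compile(r"GET /entry/(\S+)")
lookups = Counter(m.group(1) for line in log_lines if (m := pattern.search(line)))
print(lookups.most_common(2))  # [('Haus', 2), ('laufen', 1)]
```

Aggregations of this kind (most-consulted entries, look-up sequences per session) are the typical starting point of log-file based usage studies.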
Corpus researchers, like those in many other scientific disciplines, are under continual pressure to demonstrate accountability and reproducibility in their work. This is unsurprisingly difficult when the researcher is faced with a wide array of methods and tools: simply tracking the operations performed can be problematic, especially when toolchains are configured by their developers but remain largely a black box to the user. Here we present a scheme for encoding this metadata inside the corpus files themselves in a structured data format, along with a proof-of-concept tool to record the operations performed on a file.
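A minimal sketch of how such an embedded processing history might be recorded as structured data; the field names and JSON structure below are hypothetical, not the authors' actual scheme.

```python
import json
import datetime

def record_operation(meta, tool, version, params):
    """Append one processing step to the provenance record embedded
    in a corpus file's metadata (hypothetical structure)."""
    meta.setdefault("processing_history", []).append({
        "tool": tool,
        "version": version,
        "parameters": params,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    })
    return meta

# Each pipeline step appends its own record, so the file carries
# a complete, ordered account of how it was produced.
meta = {"corpus": "example-corpus"}
record_operation(meta, "tokenizer", "1.2.0", {"lowercase": True})
record_operation(meta, "pos-tagger", "0.9", {"model": "de-news"})
print(json.dumps(meta, indent=2))
```

Because the record travels with the corpus file itself, it survives copying and sharing, unlike provenance kept in external lab notes.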
Intensivinterview
(1982)
This paper presents ongoing research embedded in an empirical-linguistic research program that sets out to devise viable research strategies for developing an explanatory theory of grammar as a psychological and social phenomenon. As this phenomenon cannot be studied directly, the program attempts to approach it indirectly through its correlates in language corpora, a move justified by reference to the core tenets of Emergent Grammar. The guiding principle for identifying such corpus correlates of grammatical regularities is to imitate the psychological processes underlying the emergent nature of these regularities. While previous work in this program focused on syntagmatic structures, the current paper goes one step further by investigating schematic structures that involve paradigmatic variation. It introduces and explores a general strategy by which corpus correlates of such structures may be uncovered, and it further outlines how these correlates may be used to study the nature of the psychologically real schematic structures.
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of aspects of use of lexical items. These aspects include ‘(lexical) meaning’ in a very broad sense and involve different dimensions; they are established in and emerge from the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word’s usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word’s use, and to the common and distinctive aspects of any two words.
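A minimal sketch of the collocation-profile idea: context words around each occurrence of a target word are counted, and two words are compared via the similarity of their profiles. The toy data, raw counts and cosine measure are simplifying assumptions; the actual approach works on large corpora with statistical association measures and self-organizing maps.

```python
import math
from collections import Counter

def collocation_profile(tokens, target, window=2):
    """Count context words within +/- window of each occurrence of target."""
    profile = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            for j in range(max(0, i - window), min(len(tokens), i + window + 1)):
                if j != i:
                    profile[tokens[j]] += 1
    return profile

def cosine(p, q):
    """Cosine similarity between two collocation profiles."""
    num = sum(p[w] * q[w] for w in set(p) & set(q))
    den = (math.sqrt(sum(v * v for v in p.values()))
           * math.sqrt(sum(v * v for v in q.values())))
    return num / den if den else 0.0

tokens = "the dog barks the dog sleeps the cat sleeps the cat purrs".split()
sim = cosine(collocation_profile(tokens, "dog"), collocation_profile(tokens, "cat"))
print(f"{sim:.2f}")
```

Words with largely overlapping usage spectra end up with similar profiles and thus high cosine values, which is what the self-organizing visualizations exploit.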
This paper deals with the problem of how to interrelate theory-specific treebanks and how to transform one treebank format into another. Currently, two approaches to achieving these goals can be differentiated. The first creates a mapping algorithm between treebank formats: categories of a source format are transformed into a target format via a given set of general or language-specific mapping rules. The second relates treebanks via a transformation to a general model of linguistic categories, for example based on the EAGLES recommendations for syntactic annotations of corpora, or relying on the HPSG framework. This paper proposes a new methodology that addresses these desiderata.
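The first approach, rule-based mapping between treebank formats, can be sketched as follows. The (label, children) tree representation and the rule table (mapping, e.g., source labels NX and VXFIN to target labels NP and VP) are hypothetical, chosen only to illustrate the idea.

```python
def map_categories(tree, rules):
    """Recursively rewrite node labels of a (label, children) tree
    using a rule table; labels without a rule are kept unchanged."""
    label, children = tree
    return (rules.get(label, label), [map_categories(c, rules) for c in children])

# Hypothetical mapping rules from a source format to a target format.
rules = {"NX": "NP", "VXFIN": "VP"}
src = ("S", [("NX", [("ART", []), ("NN", [])]),
             ("VXFIN", [("VVFIN", [])])])
print(map_categories(src, rules))
```

Real mapping rules are rarely such one-to-one label substitutions; they are often context-sensitive and may restructure subtrees, which is precisely why purely rule-based conversion between theory-specific formats is hard.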
Speakers’ linguistic experience is for the most part experience with language as used in conversational interaction. Though highly relevant for usage-based linguistics, the study of such data is as yet often left to other frameworks such as conversation analysis and interactional linguistics (Couper-Kuhlen and Selting 2001). On the basis of a case study of salient usage patterns of the two German motion verbs kommen and gehen in spontaneous conversation, the present paper argues for a methodological integration of quantitative corpus-linguistic methods with qualitative conversation analytic approaches to further the usage-based study of conversational interaction.