Prediction is a central mechanism in the human language processing architecture. The psycholinguistic and neurolinguistic literature has seen a lively debate about what form prediction may take and what status it has for language processing in the human mind and brain. While predictions are a ubiquitous finding, the implications of these results for models of language processing differ. For instance, eyetracking data suggest that predictions may rely on sublexical orthographic information in natural reading, while electrophysiological data provide mixed evidence for form-based predictions during reading. Other research has revealed that humans rapidly adapt to text specifics and that their predictive capacity varies, broadly speaking, in accordance with inter- and intra-individual language proficiency, which cuts across the speaker groups (e.g. L1 vs. L2 speakers, skilled vs. untrained readers) traditionally used for experimental contrasts. There is therefore evidence that the kind and strength of linguistic predictions depend on (at least) three sources of variability in language processing: speaker, text genre and experimental method.
The aim of this Research Topic is to develop a better understanding of prediction in light of the three sources of variability in language processing, by providing an overview of state-of-the-art research on predictive language processing and by bringing together research from various disciplines.
First, intra- and inter-individual differences and their influence on predictive processes remain underrepresented in experimental research on predictive processing. How do language users differ in their predictive abilities and strategies, and how are these differences shaped by e.g. biological, social and cultural factors?
Second, while language users experience great stylistic diversity in their daily language exposure and use, the majority of language processing research still focuses on a very constrained register of well-controlled sentences composed in the standard language. How are predictions shaped by extra- and meta-linguistic context, such as register/genre or accent/speaker identity, and how may this influence the processing of experimental items in another language or text variety?
Third, the Research Topic invites contributions that make use of a multi-method approach, such as combined behavioral and electrophysiological measures or experimental methods combined with measures extracted from corpus data. What opportunities and challenges do we face when integrating multiple approaches to examine linguistic, experimental and individual differences in human predictive capacity?
We welcome contributions from all areas of empirical psycho- and neurolinguistics, but contributions must explicitly address variability and variation in language and language processing. Relevant topics include individual differences and the impact of genre, modality, register and language variety. Contributions that go beyond single word and single sentence paradigms are especially desirable. Experimental, corpus-based, meta-analytic and review papers, as well as theoretical/opinion pieces are welcome; however, papers of the latter type should support their arguments with substantial empirical evidence from the literature. Particularly desirable are contributions which combine topics and/or methods, such as the impact of an individual's native dialect on processing of constructions that show variability in the standard language (e.g. choice of auxiliary, agreement of mass nouns, etc.) or experimental methods combined with measures extracted from corpus data such as information-theoretic surprisal.
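The information-theoretic surprisal mentioned above can be made concrete with a small sketch. This is a toy illustration using a bigram maximum-likelihood model over a made-up corpus, not the method of any particular contribution; real studies estimate surprisal from large reference corpora or neural language models.

```python
import math
from collections import Counter

# Toy corpus; a real study would use a large reference corpus.
tokens = "the cat sat on the mat and the cat slept".split()

# Bigram counts and counts of each token as a left context.
bigrams = Counter(zip(tokens, tokens[1:]))
contexts = Counter(tokens[:-1])

def surprisal(context: str, word: str) -> float:
    """Surprisal in bits: -log2 P(word | context), bigram MLE."""
    p = bigrams[(context, word)] / contexts[context]
    return -math.log2(p)

# "cat" follows "the" in 2 of the 3 occurrences of "the" as a context.
print(round(surprisal("the", "cat"), 3))  # 0.585
```

Higher surprisal values mark words that are less predictable in context, which is why surprisal is a common linking hypothesis between corpus-derived probabilities and behavioral or electrophysiological measures of processing effort.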
This edited volume offers up-to-date research on the interactive building and managing of relationships in organized helping. Its contributions address this core of helping in psychotherapy, coaching, doctor-patient interaction, and digital helping interaction, documenting and analyzing essential communicative practices of relationship management. A summarizing contribution identifies common dimensions of relationship management across the different helping contexts and thereby provides a framework for understanding and researching how interactive practices and helping relationships are interconnected. The volume brings together researchers and practitioners and merges academic approaches to studying relationships with practical knowledge about verbal helping in these settings. The book is intended for scholars in the field of organized helping as well as for students and researchers of communication and discourse/conversation analysis in professional and organized contexts. It is also addressed to practitioners interested in learning more about the micro- and meso-management of their working relationships.
Contents:
1. Andreas Dittrich: Intra-connecting a small exemplary literary corpus with semantic web technologies for exploratory literary studies, S. 1
2. John Kirk, Anna Čermáková: From ICE to ICC: The new International Comparable Corpus, S. 7
3. Dawn Knight, Tess Fitzpatrick, Steve Morris, Jeremy Evas, Paul Rayson, Irena Spasic, Mark Stonelake, Enlli Môn Thomas, Steven Neale, Jennifer Needs, Scott Piao, Mair Rees, Gareth Watkins, Laurence Anthony, Thomas Michael Cobb, Margaret Deuchar, Kevin Donnelly, Michael McCarthy, Kevin Scannell: Creating CorCenCC (Corpws Cenedlaethol Cymraeg Cyfoes – The National Corpus of Contemporary Welsh), S. 13
4. Marc Kupietz, Andreas Witt, Piotr Bański, Dan Tufiş, Dan Cristea, Tamás Váradi: EuReCo - Joining Forces for a European Reference Corpus as a sustainable base for cross-linguistic research, S. 15
5. Harald Lüngen, Marc Kupietz: CMC Corpora in DeReKo, S. 20
6. David McClure, Mark Algee-Hewitt, Douris Steele, Erik Fredner, Hannah Walser: Organizing corpora at the Stanford Literary Lab, S. 25
7. Radoslav Rábara, Pavel Rychlý, Ondřej Herman: Accelerating corpus search using multiple cores, S. 30
8. John Vidler, Stephen Wattam: Keeping Properties with the Data: CL-MetaHeaders – An Open Specification, S. 35
9. Vladimir Benko: Are Web Corpora Inferior? The Case of Czech and Slovak, S. 43
10. Edyta Jurkiewicz-Rohrbacher, Zrinka Kolaković, Björn Hansen: Web Corpora – the best possible solution for tracking phenomena in underresourced languages: clitics in Bosnian, Croatian and Serbian, S. 49
11. Vít Suchomel: Removing Spam from Web Corpora Through Supervised Learning Using FastText, S. 56
Contents:
1. Julien Abadji, Pedro Javier Ortiz Suárez, Laurent Romary and Benoît Sagot: "Ungoliant: An Optimized Pipeline for the Generation of a Very Large-Scale Multilingual Web Corpus", S. 1-9
2. Markus Gärtner, Felicitas Kleinkopf, Melanie Andresen and Sibylle Hermann: "Corpus Reusability and Copyright - Challenges and Opportunities", S. 10-19
3. Nils Diewald, Eliza Margaretha and Marc Kupietz: "Lessons learned in Quality Management for Online Research Software Tools in Linguistics", S. 20-26
Contents:
1. Johannes Graën, Tannon Kew, Anastassia Shaitarova and Martin Volk, "Modelling Large Parallel Corpora", S. 1-8
2. Pedro Javier Ortiz Suárez, Benoît Sagot and Laurent Romary, "Asynchronous Pipelines for Processing Huge Corpora on Medium to Low Resource Infrastructures", S. 9-16
3. Vladimír Benko, "Deduplication in Large Web Corpora", S. 17-22
4. Mark Davies, "The best of both worlds: Multi-billion word “dynamic” corpora", S. 23-28
5. Adrien Barbaresi, "On the need for domain-focused web corpora", S. 29-32
6. Marc Kupietz, Eliza Margaretha, Nils Diewald, Harald Lüngen and Peter Fankhauser, "What's New in EuReCo? Interoperability, Comparable Corpora, Licensing", S. 33-39
Contents:
1. Vasile Pais, Maria Mitrofan, Verginica Barbu Mititelu, Elena Irimia, Roxana Micu and Carol Luca Gasan: Challenges in Creating a Representative Corpus of Romanian Micro-Blogging Text. Pp. 1-7
2. Modest von Korff: Exhaustive Indexing of PubMed Records with Medical Subject Headings. Pp. 8-15
3. Luca Brigada Villa: UDeasy: a Tool for Querying Treebanks in CoNLL-U Format. Pp. 16-19
4. Nils Diewald: Matrix and Double-Array Representations for Efficient Finite State Tokenization. Pp. 20-26
5. Peter Fankhauser and Marc Kupietz: Count-Based and Predictive Language Models for Exploring DeReKo. Pp. 27-31
6. Hanno Biber: “The word expired when that world awoke.” New Challenges for Research with Large Text Corpora and Corpus-Based Discourse Studies in Totalitarian Times. Pp. 32-35
In order to satisfy the information needs of a wide range of researchers across a number of disciplines, large textual datasets require careful design, collection, cleaning, encoding, annotation, storage, retrieval, and curation. This daunting set of tasks has coalesced into a number of key themes and questions that are of interest to the contributing research communities: (a) what sampling techniques can we apply? (b) what quality issues should we be aware of? (c) what infrastructures and frameworks are being developed for the efficient storage, annotation, analysis and retrieval of large datasets? (d) what affordances do visualisation techniques offer for the exploratory analysis of corpora? (e) what legal paths can be followed in dealing with IPR and data protection issues governing both the data sources and the query results? (f) how can we guarantee that corpus data remain available and usable in a sustainable way?
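One of the cleaning tasks raised above, removing duplicate documents, can be sketched minimally. This is a hypothetical illustration of exact deduplication by content hashing, not the method of any contribution in these proceedings; production pipelines typically use near-duplicate techniques such as shingling or MinHash.

```python
import hashlib

def dedupe(docs):
    """Keep the first occurrence of each document, comparing by a
    hash of whitespace-normalized, lowercased text."""
    seen, unique = set(), []
    for doc in docs:
        key = hashlib.sha1(" ".join(doc.split()).lower().encode()).hexdigest()
        if key not in seen:
            seen.add(key)
            unique.append(doc)
    return unique

docs = ["The cat sat.", "the  cat sat.", "A dog barked."]
print(len(dedupe(docs)))  # 2: the first two normalize identically
```

Exact hashing like this only catches verbatim (or trivially reformatted) duplicates; catching paraphrases or boilerplate-heavy near-duplicates requires fuzzier similarity measures.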
How can we measure the impact – such as raised awareness of economic, ecological, and political matters – of information, such as scientific publications, user-generated content, and reports from the public administration, based on text data? This workshop brings together research from different theoretical paradigms and methodologies for the extraction of impact-relevant indicators from natural language text data and related meta-data. The papers in this workshop represent expertise in different methods for analyzing text data, spanning the whole spectrum of qualitative, quantitative, and mixed-methods techniques, as well as domain expertise in the field of impact measurement. The program was designed as an interdisciplinary half-day workshop discussing the possibilities, limitations, and synergistic effects of the different approaches.
Contents:
1. Christoph Kuras, Thomas Eckart, Uwe Quasthoff and Dirk Goldhahn: Automation, management and improvement of text corpus production, S. 1
2. Thomas Krause, Ulf Leser, Anke Lüdeling and Stephan Druskat: Designing a re-usable and embeddable corpus search library, S. 6
3. Radoslav Rábara, Pavel Rychlý and Ondřej Herman: Distributed corpus search, S. 10
4. Adrien Barbaresi and Antonio Ruiz Tinoco: Using elasticsearch for linguistic analysis of tweets in time and space, S. 14
5. Marc Kupietz, Nils Diewald and Peter Fankhauser: How to Get the Computation Near the Data: Improving data accessibility to, and reusability of analysis functions in corpus query platforms, S. 20
6. Roman Schneider: Example-based querying for specialist corpora, S. 26
7. Paul Rayson: Increasing interoperability for embedding corpus annotation pipelines in Wmatrix and other corpus retrieval tools, S. 33