Ancient Chinese poetry is constituted by structured language that deviates from ordinary language usage; its poetic genres impose unique combinatory constraints on linguistic elements. How does the constrained poetic structure facilitate speech segmentation when the linguistic and statistical cues that listeners ordinarily rely on are unreliable in poems? We generated artificial Jueju, which arguably has the most constrained structure in ancient Chinese poetry, and presented each poem twice as an isochronous sequence of syllables to native Mandarin speakers while conducting magnetoencephalography (MEG) recording. We found that listeners deployed their prior knowledge of Jueju to build the line structure and to establish the conceptual flow of Jueju. We also found a previously unreported phase precession phenomenon indicating predictive processes in speech segmentation: the neural phase advanced faster after listeners acquired knowledge of the incoming speech. The statistical co-occurrence of monosyllabic words in Jueju negatively correlated with speech segmentation, which provides an alternative perspective on how statistical cues facilitate speech segmentation. Our findings suggest that constrained poetic structures serve as a temporal map for listeners to group speech contents and to predict incoming speech signals. Listeners can parse speech streams by using not only grammatical and statistical cues but also their prior knowledge of the form of language.
This study examines asymmetries between so-called inherent and contextual categories in relation to the morphological complexity of the nominal and verbal inflectional domain of languages. The observations are traced back to the influence of adult L2 learning in scenarios of intense language contact. A method for a simple comparison of the number of inherent versus contextual categories is proposed and applied to the German-based creole language Unserdeutsch (Rabaul Creole German) in comparison to its lexifier language. The same procedure is then applied to two further language pairs. The grammatical systems of Unserdeutsch and other contact languages display a noticeable asymmetry regarding their structural complexity. Analysing different kinds of evidence, the key explanatory factor seems to be the role of (adult) L2 acquisition in the history of a language, whereby languages with periods of widespread L2 acquisition tend to lose contextual features. This impression is reinforced by general tendencies in pidgin and creole languages. Beyond that, there seems to be a tendency for inherent categories to be more strongly associated with the verb, while contextual categories seem to be more strongly associated with the noun. This leads to an asymmetry in categorical complexity between the noun phrase and the verb phrase in languages that experienced periods of intense L2 learning.
This paper reports on recent developments within the European Reference Corpus EuReCo, an open initiative that aims at providing and using virtual, dynamically definable comparable corpora based on existing national, reference, or other large corpora. Given the well-known shortcomings of other types of multilingual corpora, such as parallel/translation corpora (shining-through effects, over-normalization, simplification, etc.) or web-based comparable corpora (covering only web material), EuReCo provides a unique linguistic resource. It offers new perspectives for fine-grained contrastive research on authentic cross-linguistic data, as well as applications in translation studies and in foreign language teaching and learning.
The 12th Web as Corpus workshop (WAC-XII) looks at the past, present, and future of web corpora: large web corpora are nowadays provided mostly by a few major initiatives and companies, and the diversity of the early years appears to have faded slightly. We also acknowledge that alternative sources of data have emerged, such as linguistic data from Twitter and similar platforms, social media more broadly, and other parts of the deep web, some of which are available only to large companies and their affiliates. At the same time, gathering interesting and relevant web data (web crawling) is becoming an ever more intricate task as the nature of the data offered on the web changes (for example, the decline of forums in favour of more closed platforms).
As immigration and mobility increase, so do interactions between people from different linguistic backgrounds. Yet while linguistic diversity offers many benefits, it also comes with a number of challenges. In seven empirical articles and one commentary, this Special Issue addresses some of the most significant language challenges facing researchers in the 21st century: the power language has to form and perpetuate stereotypes, the contribution language makes to intersectional identities, and the role of language in shaping intergroup relations. By presenting work that aims to shed light on some of these issues, the goal of this Special Issue is to (a) highlight language as integral to social processes and (b) inspire researchers to address the challenges we face. To keep pace with the world's constantly evolving linguistic landscape, it is essential that we make progress toward harnessing language's power in ways that benefit 21st century globalized societies.
In this article, we examine the current situation of data dissemination and provision for CMC corpora. In doing so, we aim to provide a guiding grid for future projects that will improve the transparency and replicability of research results as well as the reusability of the created resources. Based on the FAIR guiding principles for research data management, we evaluate the 20 European CMC corpora listed in the CLARIN CMC Resource family, identify successful strategies among the existing corpora, and establish best practices for future projects. We give an overview of existing approaches to data referencing, dissemination, and provision in European CMC corpora, and discuss the methods, formats, and strategies used. Furthermore, we discuss the need for community standards and offer recommendations for best practices when creating a new CMC corpus.
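An evaluation of this kind can be pictured as a checklist scored per corpus. The following sketch is purely illustrative: the criteria names and the example corpus entry are assumptions for demonstration, not the actual rubric or data used in the study.

```python
# Hypothetical sketch: scoring a corpus against a simplified FAIR checklist.
# Criteria and example values are invented for illustration only.

FAIR_CRITERIA = [
    "persistent_identifier",   # Findable: e.g. a DOI or handle
    "rich_metadata",           # Findable: metadata in a searchable repository
    "clear_license",           # Accessible/Reusable: explicit terms of use
    "standard_format",         # Interoperable: e.g. TEI/XML
]

def fair_score(corpus: dict) -> float:
    """Return the fraction of checklist criteria a corpus satisfies."""
    met = sum(1 for c in FAIR_CRITERIA if corpus.get(c, False))
    return met / len(FAIR_CRITERIA)

example_corpus = {
    "persistent_identifier": True,
    "rich_metadata": True,
    "clear_license": False,
    "standard_format": True,
}
print(fair_score(example_corpus))  # 0.75
```

Such a score makes corpora directly comparable, though in practice each FAIR principle breaks down into finer-grained sub-criteria than this toy checklist suggests.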
Nonnative-accented speakers face prevalent discrimination. The assumption that people freely express negative sentiments toward nonnative speakers has also guided common research methods. However, recent studies did not consistently find downgrading, so much so that, at first sight, the existence of prejudice against nonnative accents might even be questioned. The present theoretical article bridges these contradictory findings in three ways: (a) We illustrate that nonnative speakers with foreign accents frequently may not be downgraded in commonly used first-impression and employment scenario paradigms. It appears that relatively controlled responding may be influenced by norms and motivations to respond without prejudice, whereas negative biases emerge in spontaneous responding. (b) We present an integrative view based on knowledge of modern forms of prejudice to develop modern notions of accentism, which allow for predictions of when accent biases are (not) likely to surface. (c) We conclude with implications for interventions and a tailored research agenda.
In this paper, we describe a schema and models that have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG Wiki and will accompany a feature request to the TEI Council in late 2019.
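To make the idea of a basic CMC unit concrete, the sketch below processes a minimal, hypothetical TEI-style encoding of two chat posts with Python's standard library. The element and attribute choices are loosely modelled on the post unit described above but are our own assumptions, not the official CMC-core schema.

```python
# Illustrative only: a toy TEI-style fragment of two chat posts, parsed with
# the Python stdlib. The markup below is an assumption, not the CMC-core schema.
import xml.etree.ElementTree as ET

snippet = """
<div type="thread">
  <post who="#A" when="2019-05-01T10:15:00">
    <p>Hello, anyone around?</p>
  </post>
  <post who="#B" when="2019-05-01T10:15:42">
    <p>Yes, just reading the logs.</p>
  </post>
</div>
"""

root = ET.fromstring(snippet)
posts = root.findall("post")
print(len(posts))                     # 2
print([p.get("who") for p in posts])  # ['#A', '#B']
```

The point of such an encoding is that the sequential organisation of the interchange (who posted, when, and in what order) is recoverable by generic XML tooling rather than corpus-specific scripts.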
We present recognizers for four very different types of speech, thought and writing representation (STWR) for German texts. The implementation is based on deep learning with two different customized contextual embeddings, namely FLAIR embeddings and BERT embeddings. This paper gives an evaluation of our recognizers with a particular focus on the differences in performance we observed between those two embeddings. FLAIR performed best for direct STWR (F1=0.85), BERT for indirect (F1=0.76) and free indirect (F1=0.59) STWR. For reported STWR, the comparison was inconclusive, but BERT gave the best average results and best individual model (F1=0.60). Our best recognizers, our customized language embeddings and most of our test and training data are freely available and can be found via www.redewiedergabe.de or at github.com/redewiedergabe.
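As a reminder of the metric behind the reported scores, the sketch below computes token-level F1 for one STWR class. The toy gold and predicted label sequences are invented for illustration; they are not data from the study.

```python
# Token-level F1 for a single class (here the STWR class "direct").
# Gold and predicted labels below are invented toy data.

def f1(gold, pred, label):
    """F1 for one label over aligned gold/predicted sequences."""
    tp = sum(1 for g, p in zip(gold, pred) if g == label and p == label)
    fp = sum(1 for g, p in zip(gold, pred) if g != label and p == label)
    fn = sum(1 for g, p in zip(gold, pred) if g == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = ["direct", "O", "direct", "indirect", "O"]
pred = ["direct", "direct", "direct", "O", "O"]
print(round(f1(gold, pred, "direct"), 2))  # 0.8
```

Scores such as F1=0.85 for direct STWR versus F1=0.59 for free indirect STWR thus summarise, per class, how the recognizers balance precision against recall.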