Transdisciplinary research is research not only on, but also for and, most of all, with practitioners. In the research framework of transdisciplinarity, scholars and practitioners collaborate throughout research projects with the aim of mutual learning. This paper shows the value transdisciplinarity can add to media linguistics. It does so by investigating the digital literacy shift in journalism: the change, in the last two decades, from the predominance of a writing mode that we have termed focused writing to a mode we have called writing-by-the-way. Large corpora of writing process data have been generated and analyzed with the multimethod approach of progression analysis in order to combine analytical depth with breadth. On the object level of doing writing in journalism, results show that the general trend towards writing-by-the-way opens up new niches for focused writing. On a meta level of doing research, findings explain under what conditions transdisciplinarity allows for deeper insights into the medialinguistic object of investigation.
This study investigates how, and by which linguistic means, Islam is constituted in public discourse. To this end, a corpus of supra-regional media texts was compiled and analyzed qualitatively. The analysis of the entire corpus indicates which content-related features of the Islam discourse are linguistically verifiable and recur throughout the discourse. Keywords such as Islam, Islamismus, Islamisierung, Muslim, Dschihad, Scharia and Koran are presented in detail. In addition, the stereotypes emerging from the corpus under investigation are reconstructed, and the metaphors and metaphorical concepts reflected in the Islam discourse are examined. Taking the three world events of the Iranian Revolution of 1978/79, September 11, 2001, and the Arab Spring of 2011 as examples, the study shows how Islam has been perceived at different points in time and to what extent socio-political events and conflicts can influence the way Islam is thematized.
Kultur ("culture") has not only become a key concept of the humanities but is also used, stripped of its terminological status, as an everyday term. This contribution examines how the expression Kultur (including derivations and compounds) is used in spoken interaction. Based on 82 instances in the FOLK corpus of the IDS Mannheim, it was found that speakers use the expression mostly in semi-formal to formal types of interaction. A broad spectrum of different, partly overlapping meanings emerges, similar to that found in the scholarly literature on cultural theory. In each case, relevant core meanings can be identified, to which more or less vaguely associated meanings are connected. Kultur proves to be a contested concept: the reference of Kultur, its evaluation, and its relevance as an explanatory resource are frequently disputed.
"Wie Schule Sprache macht"
(2019)
A "polyglottal" speech synthesis - modifications for a replica of Kempelen's speaking machine
(2019)
This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). It first summarizes the theoretical and methodological approaches that informed the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data; to this end, data from two corpora (one of written, one of spoken German) were compared using quantitative methods. The information gathered on the selected headword candidates can be assigned to two different sections: meanings and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out, and their results were also taken into account in the development of the prototype. Focusing on the presentation of the resource's content, the paper shows both the different lexicographical information in selected dictionary entries and the information offered by the provided hyperlinks and external texts. It concludes by summarizing the most important innovative aspects that were specifically developed for the implementation of such a resource.
We present a descriptive analysis of the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset of its size for opinion role extraction. Our analysis discusses the individual properties of the three components (subjective expressions, sources, and targets) and their relations to one another. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
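The general family of resampling procedures such a test could build on can be sketched in a few lines of Python. The snippet below is a generic permutation test over pooled token counts; it is an illustration of the idea, not the specific test the paper outlines, and the function names are my own:

```python
import random

def relative_freq(count, total):
    """Relative frequency of a feature in a corpus of `total` tokens."""
    return count / total

def permutation_test(count_a, total_a, count_b, total_b, n_iter=2000, seed=42):
    """Randomization test for the difference in relative frequency of a
    feature between two corpora. Pools all tokens, repeatedly reshuffles
    them into two pseudo-corpora of the original sizes, and returns an
    approximate p-value: the share of random splits whose frequency
    difference is at least as large as the observed one."""
    observed = abs(relative_freq(count_a, total_a) -
                   relative_freq(count_b, total_b))
    # 1 = occurrence of the feature, 0 = any other token.
    pool = [1] * (count_a + count_b) + \
           [0] * (total_a + total_b - count_a - count_b)
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_iter):
        rng.shuffle(pool)
        hits_a = sum(pool[:total_a])
        diff = abs(hits_a / total_a - (count_a + count_b - hits_a) / total_b)
        if diff >= observed:
            hits += 1
    return hits / n_iter
```

Note that this sketch still randomizes at the token level, which is exactly the kind of randomness assumption the paper above calls into question for corpus data; a test designed for corpora would need to resample at a higher level (e.g. documents).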
A Supervised learning approach for the extraction of opinion sources and targets from German text
(2019)
We present the first systematic supervised learning approach for the extraction of opinion sources and targets from German language data. A wide range of features is presented, in particular syntactic features and generalization features. We point out specific differences between opinion sources and targets. Moreover, we explain why implicit sources can be extracted even with fairly generic features. In order to ensure comparability, our classifier is trained and tested on the dataset of the STEPS shared task.
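A token-level feature extractor of the kind such a classifier consumes can be sketched as follows. This is a deliberately minimal, hypothetical feature set for illustration only; the system described above additionally uses syntactic and generalization features derived from parses:

```python
def token_features(tokens, i):
    """Build a feature dict for token i of a sentence: surface form,
    casing, and a one-word context window. In a sequence-labeling setup,
    each token would be classified as part of a source, a target, or
    neither, based on features like these."""
    tok = tokens[i]
    return {
        "word": tok.lower(),
        "is_title": tok.istitle(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }
```

For example, `token_features(["Die", "Regierung", "kritisiert", "den", "Plan"], 1)` yields features for "Regierung" including its lowercased form and its immediate neighbours, which a generic classifier can use even when the source is only implicitly marked.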
The Lehnwortportal Deutsch (2012 seqq.) serves as an integrated online information system on German lexical borrowings into other languages, synthesizing an increasing number of lexicographical dictionaries and providing basic cross-resource search options. The paper discusses the far-reaching revision of the system’s conceptual, lexicographical and technological underpinnings currently under way, focussing on their relevance for multilingual loanword lexicography.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view//cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Akkusativobjekt
(2019)
We investigate whether prototypicality or prominence of semantic roles can account for role-related effects in sentence interpretation. We present two acceptability-rating experiments testing three different constructions: active, personal passive and DO-clefts involving the same type of transitive verbs that differ with respect to the agentive role features they select. Our results reveal that there is no cross-constructional advantage for prototypical roles (e.g., agents), hence disconfirming a central tenet of role prototypicality. Rather, acceptability clines depend on the construction under investigation, thereby highlighting different role features. This finding is in line with one core assumption of the prominence account stating that role features are flexibly highlighted depending on the discourse function of the respective construction.
Distributional models of word use constitute an indispensable tool in corpus-based lexicological research for discovering paradigmatic relations and syntagmatic patterns (Belica et al. 2010). Recently, word embeddings (Mikolov et al. 2013) have revived the field by making it possible to construct and analyze distributional models on very large corpora. This is accomplished by reducing the very high dimensionality of word co-occurrence contexts (the size of the vocabulary) to a few dimensions, such as 100-200. However, word use and meaning can vary widely along dimensions such as domain, register, and time, and word embeddings tend to represent only the most prevalent meaning. In this paper we therefore construct domain-specific word embeddings to allow for systematically analyzing variations in word use. Moreover, we also demonstrate how to reconstruct domain-specific co-occurrence contexts from the dense word embeddings.
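The basic operation behind such analyses is ranking a word's nearest neighbours in embedding space; comparing the neighbour lists produced by embeddings trained on different domains then exposes domain-specific shifts in word use. A toy sketch with hand-made two-dimensional vectors (real embeddings would be trained on large corpora and have 100-200 dimensions):

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two dense embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def nearest_neighbours(word, embeddings, k=3):
    """Rank the rest of the vocabulary of one embedding space by cosine
    similarity to `word` and return the k most similar words."""
    target = embeddings[word]
    scored = [(cosine(target, vec), w)
              for w, vec in embeddings.items() if w != word]
    return [w for _, w in sorted(scored, reverse=True)[:k]]
```

Running `nearest_neighbours` on embedding spaces trained separately per domain (e.g. one per register or time slice) and diffing the resulting lists is one simple way to surface the variation in word use the paper targets.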
In the project LeGeDe ("Lexik des gesprochenen Deutsch"), we are developing a corpus-based lexicographical resource focusing on features of the lexicon of spoken German. To investigate the expectations of future users, two studies were conducted: interviews with a small group of experts and a large-scale online survey. We report on selected results, mainly from the online survey and with a focus on the learner's perspective. We want to show whether and to what extent the expectations of L2 learners differ from those of native speakers, and in which respects the two groups agree. We also want to give an outlook on the possibilities that the planned lexicographical resource will offer to learners.