Culture has not only become a key concept in the humanities but is also used, in determinologized form, as an everyday term. This paper examines how the expression Kultur (including derivations and compounds) is used in spoken interaction. On the basis of 82 instances in the FOLK corpus of the IDS Mannheim, it was found that speakers use the expression mostly in semi-formal to formal types of interaction. A broad spectrum of different, partly overlapping meanings emerges, similar to that found in the scholarly literature on cultural theory. In each case, relevant core meanings can be identified, to which more or less vaguely associated meanings are attached. Kultur turns out to be a contested concept: the reference of Kultur, its evaluation, and its relevance as an explanatory resource are frequently disputed.
"Wie Schule Sprache macht"
(2019)
A "polyglottal" speech synthesis - modifications for a replica of Kempelen's speaking machine
(2019)
This paper presents the prototype of a lexicographic resource for spoken German in interaction, conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). It first summarizes the theoretical and methodological approaches used in the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data; to this end, data from two corpora (of written and of spoken German) were compared using quantitative methods. The information gathered on the selected headword candidates falls into two sections: meanings and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out, and their results were taken into account in developing the prototype. Focusing on the presentation of the resource’s content, the paper shows both the different types of lexicographical information in selected dictionary entries and the information offered by the provided hyperlinks and external texts. It concludes by summarizing the most important innovative aspects developed specifically for the implementation of such a resource.
We present a descriptive analysis of the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German datasets of this size for opinion role extraction. Our analysis discusses the individual properties of the three components (subjective expressions, sources, and targets) and their relations to one another. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
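The abstract above does not reproduce the proposed test itself, but the scenario it describes (judging whether a frequency difference between two corpora is substantial) is conventionally approached with a log-likelihood (G²) comparison, which is exactly the kind of significance test whose randomness assumption the paper challenges. The following minimal sketch shows that classical calculation; the word counts and corpus sizes are invented for illustration.

```python
import math

def log_likelihood(count_a, total_a, count_b, total_b):
    """Log-likelihood (G^2) statistic for one word's frequency in two
    corpora -- the classical test whose underlying randomness
    assumption the paper argues is not fulfilled for corpus data."""
    # Expected counts under the null hypothesis of equal relative frequency.
    pooled = (count_a + count_b) / (total_a + total_b)
    expected_a = total_a * pooled
    expected_b = total_b * pooled
    g2 = 0.0
    for observed, expected in ((count_a, expected_a), (count_b, expected_b)):
        if observed > 0:
            g2 += 2 * observed * math.log(observed / expected)
    return g2

# Invented example: 150 vs. 90 occurrences in two corpora of 1M tokens each.
print(round(log_likelihood(150, 1_000_000, 90, 1_000_000), 2))
```

A G² value above 3.84 would conventionally be read as significant at p < 0.05; the paper's point is that this reading presupposes random sampling, which corpus compilation does not provide.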
A Supervised learning approach for the extraction of opinion sources and targets from German text
(2019)
We present the first systematic supervised learning approach for the extraction of opinion sources and targets from German-language data. A wide choice of different features is presented, in particular syntactic and generalization features. We point out specific differences between opinion sources and targets, and we explain why implicit sources can be extracted even with fairly generic features. To ensure comparability, our classifier is trained and tested on the dataset of the STEPS shared task.
The Lehnwortportal Deutsch (2012 seqq.) serves as an integrated online information system on German lexical borrowings into other languages, synthesizing an increasing number of loanword dictionaries and providing basic cross-resource search options. The paper discusses the far-reaching revision of the system’s conceptual, lexicographical and technological underpinnings currently under way, focussing on their relevance for multilingual loanword lexicography.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view//cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Akkusativobjekt
(2019)
Distributional models of word use constitute an indispensable tool in corpus-based lexicological research for discovering paradigmatic relations and syntagmatic patterns (Belica et al. 2010). Recently, word embeddings (Mikolov et al. 2013) have revived the field by making it possible to construct and analyze distributional models on very large corpora. This is accomplished by reducing the very high dimensionality of word co-occurrence contexts (the size of the vocabulary) to a few dimensions, such as 100-200. However, word use and meaning can vary widely along dimensions such as domain, register, and time, and word embeddings tend to represent only the most prevalent meaning. In this paper we therefore construct domain-specific word embeddings to allow for systematically analyzing variations in word use. Moreover, we also demonstrate how to reconstruct domain-specific co-occurrence contexts from the dense word embeddings.
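The core observation (that a word's co-occurrence profile differs across domains) can be illustrated without the paper's embedding machinery. The toy sketch below, with invented mini-corpora and a hypothetical window size, builds sparse per-domain co-occurrence vectors and compares them by cosine similarity; the paper's actual approach instead trains dense embeddings per domain.

```python
import math
from collections import Counter, defaultdict

def cooccurrence_vectors(sentences, window=2):
    """One sparse co-occurrence vector (a Counter) per word,
    counting neighbours within a symmetric token window."""
    vectors = defaultdict(Counter)
    for tokens in sentences:
        for i, word in enumerate(tokens):
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    vectors[word][tokens[j]] += 1
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in set(u) & set(v))
    norm = math.sqrt(sum(x * x for x in u.values())) \
         * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# Invented "domains": "maus" (mouse) has different neighbours in an
# IT context than in a nature context.
it_corpus = [["die", "maus", "klickt", "auf", "den", "bildschirm"],
             ["tastatur", "und", "maus", "am", "computer"]]
nature_corpus = [["die", "maus", "frisst", "käse", "im", "feld"],
                 ["eine", "maus", "läuft", "über", "das", "feld"]]
vec_it = cooccurrence_vectors(it_corpus)["maus"]
vec_nat = cooccurrence_vectors(nature_corpus)["maus"]
print(cosine(vec_it, vec_nat))  # low similarity: domain-specific usage
```

In the paper's setting, the same contrast is captured in 100-200 dense dimensions per domain rather than in vocabulary-sized sparse vectors.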
In the project LeGeDe („Lexik des gesprochenen Deutsch”), we are developing a corpus-based lexicographical resource focusing on features of the lexicon of spoken German. To investigate the expectations of future users, two studies were conducted: interviews with a small group of experts and a large-scale online survey. We report on selected results, mainly from the online survey and with a focus on the learning perspective. We show whether and to what extent the expectations of L2 learners differ from those of native speakers, and in which respects the two groups agree. We also give an outlook on the possibilities that the planned lexicographical resource will offer learners.
This article shows what may be gained by a pattern-based analysis and lexicographic representation of argument structure patterns as compared to one based solely on the valency properties of verbs. The pattern analysed expresses a state whereby two or more entities are positioned on a scale of distinct values. Formally, it minimally comprises a verb expressing a state or event and two NPs expressing the entities ranked. The NP referring to the entity occupying the lower position on the scale is embedded in a PP headed by vor. Because it allows the identification of instances containing verbs whose meaning is not straightforwardly related to that of the pattern, the pattern-based analysis raises the question of how the metaphorical state meaning of the pattern comes about. Since in a large number of instances the verb does not express a ranking and/or a state, the metaphorical state meaning of the pattern is argued to originate in these cases in the scalar meaning of the preposition and/or to be associated with the pattern itself.
Argumentstrukturmuster. Ein elektronisches Handbuch zu verbalen Argumentstrukturen im Deutschen
(2019)
Valency-based and construction-based approaches to argument structure have been competing for quite a while. However, while valency-based approaches are backed up by numerous valency dictionaries as comprehensive descriptive resources, nothing comparable exists for construction-based approaches. This paper describes the foundations of an ongoing project at the Institut für Deutsche Sprache in Mannheim. The aim of the project is to compile an online description of a network of German argument structure patterns. The main purpose of this resource is to provide an empirical basis for evaluating the adequacy of valency-based versus construction-based theories of argument structure. The paper addresses the theoretical background, in particular the concepts of pattern and argument structure, and the corpus-based method of the project. Furthermore, it describes the coverage of the resource, the microstructure of the articles, and the macrostructure, which is conceived as a network of argument structure patterns based on family resemblance.
This paper describes a rule-based approach to detecting direct speech without the help of any quotation markers. Fictional and non-fictional texts were used as datasets. Our evaluation shows that the results remain stable across different datasets in the fictional domain and are comparable to the results achieved in related work.
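The abstract does not spell out the paper's rule set, so the sketch below is a purely hypothetical illustration of what a quotation-marker-free, cue-based detector can look like: it flags sentences showing speech-like surface cues (1st/2nd person pronouns, question/exclamation endings, reporting verbs). The cue lists are invented, not the paper's rules.

```python
# Hypothetical cue inventory; the paper's actual rules are not reproduced here.
REPORTING_VERBS = {"sagte", "fragte", "rief", "antwortete", "flüsterte"}
SPEECH_PRONOUNS = {"ich", "du", "wir", "ihr"}

def looks_like_direct_speech(sentence):
    """Flag a sentence as candidate unmarked direct speech if it shows
    speech-like cues: a 1st/2nd person pronoun combined with a
    question/exclamation ending, or a reporting verb in the sentence."""
    tokens = [t.strip(".,!?") for t in sentence.lower().split()]
    has_pronoun = any(t in SPEECH_PRONOUNS for t in tokens)
    has_terminal = sentence.rstrip().endswith(("!", "?"))
    has_verb = any(t in REPORTING_VERBS for t in tokens)
    return (has_pronoun and has_terminal) or has_verb

print(looks_like_direct_speech("Kommst du morgen mit?"))      # speech-like
print(looks_like_direct_speech("Der Zug fuhr pünktlich ab."))  # narrative
```

A real system would combine many more cues (tense shifts, vocatives, deixis) and, as the paper's evaluation suggests, would need to be checked for stability across fictional and non-fictional data.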
The goal of the current contribution is to discuss the specific change potential of requesting examples in the helping formats ‘psychotherapy’ and ‘coaching’. Requesting examples are defined as retrospective requests from the therapist/coach to the patient/client to elaborate the latter’s directly preceding utterance via an exemplary concretization. Appropriately reflecting upon past events and personal experiences is often considered a key to change, given that such reflection allows patients/clients to develop alternative and new perspectives on their lives, their relationships, their selves, etc. Working with examples or presenting concrete experiences thus functions as a central change practice both in psychotherapy and in coaching. While this discursive practice entails an inherent change potential, the sequential, thematic, and action-theoretical design of requesting examples, as well as their interaction-type-specific change function(s), still has to be unfolded empirically. This has already been done in the context of therapy. We now widen the focus and contrast these findings with analyses of requesting examples in executive coaching. This contribution thus not only provides in-depth insight into the change potential of requesting examples but also helps to further differentiate therapy and coaching with regard to their discursive and interactive layout.