This paper presents the prototype of a lexicographic resource for spoken German in interaction, conceived within the framework of the LeGeDe project (LeGeDe = Lexik des gesprochenen Deutsch). It first summarizes the theoretical and methodological approaches used in the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data; to this end, data from two corpora (written and spoken German) were compared using quantitative methods. The information gathered on the selected headword candidates falls into two sections: meanings and functions in interaction.
Additionally, two studies on prospective users' expectations of the resource were carried out, and their results were also taken into account in the development of the prototype. Focusing on the presentation of the resource's content, the paper shows both the lexicographic information in selected dictionary entries and the information offered through the provided hyperlinks and external texts. It concludes by summarizing the most important innovative aspects developed specifically for the implementation of such a resource.
We present a descriptive analysis of the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset of its size for opinion role extraction. Our analysis discusses the individual properties of the three components (subjective expressions, sources, and targets) and their relations to each other. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
Classical null hypothesis significance tests are not appropriate in corpus linguistics because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios in which some kind of test would be useful for judging the relevance of a result (e.g. a difference between two corpora) by answering the question of whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
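The abstract does not spell out the proposed test. Purely as a point of reference for the kind of classical, randomness-based procedure the paper argues against, a standard two-sample permutation test on per-document relative frequencies could be sketched as follows (function and variable names are illustrative and not taken from the paper):

```python
import random

def permutation_test(freqs_a, freqs_b, n_perm=2000, seed=0):
    """Approximate p-value for the difference in mean per-document
    relative frequency between corpus A and corpus B, obtained by
    randomly reassigning documents to the two corpora."""
    rng = random.Random(seed)
    observed = abs(sum(freqs_a) / len(freqs_a) - sum(freqs_b) / len(freqs_b))
    pooled = list(freqs_a) + list(freqs_b)
    n_a = len(freqs_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of documents
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    # Add-one smoothing keeps the estimate strictly positive.
    return (extreme + 1) / (n_perm + 1)
```

Note that this baseline presupposes exactly the randomness assumption (documents as exchangeable random samples) whose violation motivates the paper.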
A supervised learning approach for the extraction of opinion sources and targets from German text
(2019)
We present the first systematic supervised learning approach for the extraction of opinion sources and targets from German-language data. A wide choice of different features is presented, in particular syntactic features and generalization features. We point out specific differences between opinion sources and targets. Moreover, we explain why implicit sources can be extracted even with fairly generic features. To ensure comparability, our classifier is trained and tested on the dataset of the STEPS shared task.
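The paper's concrete feature set is not reproduced in the abstract. As a hedged illustration of the kind of token-level lexical and syntactic context features such a classifier might consume, a minimal sketch could look like this (all names and the feature choices are hypothetical, not the paper's):

```python
def token_features(tokens, pos_tags, i):
    """Illustrative feature dict for token i, combining the word form
    with part-of-speech context; a real system would add syntactic
    (e.g. dependency path) and generalization features."""
    return {
        "word": tokens[i].lower(),
        "pos": pos_tags[i],
        "prev_pos": pos_tags[i - 1] if i > 0 else "BOS",
        "next_pos": pos_tags[i + 1] if i < len(tokens) - 1 else "EOS",
        "is_title": tokens[i].istitle(),  # crude proxy for named entities
    }
```

Feature dicts of this shape feed directly into standard sequence-labeling toolkits that accept sparse string-keyed features.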
The Lehnwortportal Deutsch (2012 seqq.) serves as an integrated online information system on German lexical borrowings into other languages, synthesizing a growing number of dictionaries and providing basic cross-resource search options. The paper discusses the far-reaching revision of the system's conceptual, lexicographic, and technological underpinnings currently under way, focusing on their relevance for multilingual loanword lexicography.
We investigate whether prototypicality or prominence of semantic roles can account for role-related effects in sentence interpretation. We present two acceptability-rating experiments testing three different constructions (active, personal passive, and DO-clefts) involving the same type of transitive verbs, which differ with respect to the agentive role features they select. Our results reveal no cross-constructional advantage for prototypical roles (e.g., agents), disconfirming a central tenet of role prototypicality. Rather, acceptability clines depend on the construction under investigation, thereby highlighting different role features. This finding is in line with a core assumption of the prominence account, which states that role features are flexibly highlighted depending on the discourse function of the respective construction.
Common Crawl is a very large, heterogeneous multilingual corpus of documents crawled from the internet, surpassing 20 TB of data and distributed as a set of more than 50 thousand plain-text files, each of which contains many documents written in a wide variety of languages. Even though each document has a metadata block associated with it, this metadata lacks any information about the language in which the document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean Common Crawl and classify it by language; we specifically design it to run efficiently on medium- to low-resource infrastructures where I/O speed is the main constraint. We develop the pipeline so that it can easily be reapplied to any kind of heterogeneous corpus and parameterised for a wide range of infrastructures. We also distribute a 6.3 TB version of Common Crawl that is filtered, classified by language, shuffled at line level to avoid copyright issues, and ready to be used for NLP applications.
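The authors' actual implementation is not shown in the abstract. As a minimal sketch of the classify-and-bucket step, with a toy stopword-overlap classifier standing in for a real language identifier (production pipelines typically use a trained model such as fastText's lid.176), one might write:

```python
from concurrent.futures import ThreadPoolExecutor

# Toy classifier: real pipelines use a trained language-ID model;
# this placeholder only distinguishes a few stopwords for illustration.
STOPWORDS = {
    "en": {"the", "and", "of", "is"},
    "de": {"der", "und", "die", "ist"},
}

def classify(doc):
    """Return the language whose stopword set best overlaps the document."""
    tokens = set(doc.lower().split())
    best, score = "unknown", 0
    for lang, words in STOPWORDS.items():
        overlap = len(tokens & words)
        if overlap > score:
            best, score = lang, overlap
    return best

def classify_corpus(docs, workers=4):
    """Classify documents in a thread pool and bucket them by language,
    preserving input order within each bucket."""
    buckets = {}
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for doc, lang in zip(docs, pool.map(classify, docs)):
            buckets.setdefault(lang, []).append(doc)
    return buckets
```

Threads (rather than processes) fit the stated I/O-bound setting, since the workers spend most of their time reading and writing files rather than computing.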
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on these three schema versions, a fourth version has been created that takes into account the experience gained from encoding our corpora and is specifically designed for the submission of a feature request to the TEI Council. On our poster, we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
Both compounds and multi-word expressions are complex lexical units, made up of at least two constituents. The most basic difference is that the former are morphological objects and the latter result from syntactic processes. However, the exact demarcation between compounds and multi-word expressions differs greatly from language to language and is often a matter of debate in and across languages. Similarly debated is whether and how these two different kinds of units complement or compete with each other.
The volume presents an overview of compounds and multi-word expressions in a variety of European languages. Central questions that are discussed for each language concern the formal distinction between compounds and multi-word expressions, their formation and their status in lexicon and grammar.
The volume contains chapters on German, English, Dutch, French, Italian, Spanish, Greek, Russian, Polish, Finnish, and Hungarian as well as a contrastive overview with a focus on German. It brings together insights from word-formation theory, phraseology and theory of grammar and aims to contribute to the understanding of the lexicon, both from a language-specific and cross-linguistic perspective.