The idea behind the project is to provide a quick and easy introduction to the analysis of large corpus data with CorpusExplorer. This freely available software currently offers more than 45 analyses/visualizations for a wide range of corpus-linguistic purposes and, thanks to its user-friendliness, is also suitable for use in university teaching. The EuroParl corpus serves as the example, but one's own text material (e.g. text files, eBooks, XML, Twitter, blogs, etc.) can also be annotated, analyzed and visualized with CorpusExplorer. The videos demonstrate the individual functions step by step.
The videos are framed by a small two-stage task: first, one should come up with a few questions/theses/assumptions that can be investigated using the EuroParl plenary protocols; some videos give explicit suggestions, or one can draw inspiration from the other contributions in Issue #3. The simplest questions/theses can already be answered with the videos presented here. As soon as things become more complex, one enters the second, reflexive part of the overarching task: working out how the goal can be reached by (repeatedly) combining the individual video/knowledge building blocks (for an example, see the script). In case of doubt, a handbook and e-mail support are also available.
In the NLP literature, adapting a parser to new text with properties different from the training data is commonly referred to as domain adaptation. In practice, however, the differences between texts from different sources often reflect a mixture of domain and genre properties, and it is by no means clear what impact each of those has on statistical parsing. In this paper, we investigate how differences between articles in a newspaper corpus relate to the concepts of genre and domain and how they influence parsing performance of a transition-based dependency parser. We do this by applying various similarity measures for data point selection and testing their adequacy for creating genre-aware parsing models.
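The abstract does not spell out which similarity measures are used. Purely as an illustrative sketch of similarity-based data point selection (not the paper's actual setup), one widely used option is to rank candidate training articles by Jensen-Shannon divergence between their word distributions and that of a target text; word_dist, js_divergence and select_similar below are hypothetical names.

import math
from collections import Counter

def word_dist(text):
    # Relative word-frequency distribution of a text.
    counts = Counter(text.lower().split())
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    # Jensen-Shannon divergence between two word distributions (0 = identical).
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0.0) + q.get(w, 0.0)) for w in vocab}
    def kl(a):
        return sum(a[w] * math.log2(a[w] / m[w]) for w in a if a[w] > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def select_similar(target_text, candidate_texts, k=100):
    # Keep the k candidate articles closest to the target in word distribution.
    target = word_dist(target_text)
    return sorted(candidate_texts,
                  key=lambda t: js_divergence(target, word_dist(t)))[:k]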
In my talk, I present an empirical approach to detecting and describing proverbs as frozen sentences with specific functions in current language use. We developed this approach in the EU project ‘SprichWort’ (based on the German Reference Corpus). The first chapter illustrates selected aspects of our complex, iterative procedure for validating proverb candidates. Based on our corpus-driven lexpan methodology of slot analysis, I then discuss semantic restrictions of proverb patterns. Furthermore, I show different degrees of proverb quality, ranging from genuine proverbs to non-proverb realizations of the same abstract pattern. On the one hand, the corpus validation reveals that proverbs are indeed perceived and used as relatively fixed entities, and often as sentences. On the other hand, proverbs are interpreted not only as an interesting phenomenon in their own right but also as part of the whole lexicon, embedded in networks of different lexical items.
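lexpan is the project's own tool; purely as an illustration of the general idea behind slot analysis (and not of lexpan itself), the hedged sketch below counts the fillers that occur in the open slot of a proverb pattern over plain-text sentences. The pattern, the corpus lines and the slot_fillers function are invented for the example.

import re
from collections import Counter

def slot_fillers(pattern, sentences):
    # Count the fillers of the open slot (written as X) in a proverb pattern,
    # e.g. "Ohne X kein Preis" matches "Ohne Fleiß kein Preis".
    regex = re.compile(
        r"\b" + re.escape(pattern).replace("X", r"(\w+)") + r"\b",
        re.IGNORECASE,
    )
    fillers = Counter()
    for sentence in sentences:
        for match in regex.finditer(sentence):
            fillers[match.group(1).lower()] += 1
    return fillers

corpus = [
    "Ohne Fleiß kein Preis, sagte sie.",
    "Ohne Moos nix los, aber ohne Fleiß kein Preis.",
    "Ohne Schweiß kein Preis, meinte der Trainer.",
]
print(slot_fillers("Ohne X kein Preis", corpus))
# Counter({'fleiß': 2, 'schweiß': 1})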
We present a method to identify and document a phenomenon on which there is very little empirical data: German phrasal compounds written as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora that are large and diverse enough to contain such rarely seen phenomena. The method is based on word segmentation and morphological analysis and takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved with a combination of rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations seem to be linked to the degree of expressivity.
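The described pipeline combines word segmentation, morphological analysis and data-driven learning; the minimal sketch below illustrates only the coarse segmentation step, assuming a word list derived from corpus frequencies. The segment function and the tiny lexicon are illustrative names, not part of the described method.

def segment(token, known_words, min_len=3):
    # Greedily check whether a long single token can be exhaustively split
    # into known words, flagging it as a candidate phrasal compound written
    # without internal punctuation. Returns the parts, or None if no full
    # segmentation is found.
    token = token.lower()
    if not token:
        return []
    for cut in range(len(token), min_len - 1, -1):  # prefer long parts
        head, rest = token[:cut], token[cut:]
        if head in known_words:
            tail = segment(rest, known_words, min_len)
            if tail is not None:
                return [head] + tail
    return None

# Tiny illustrative word list; in practice this would come from a large
# (web) corpus frequency list.
lexicon = {"nicht", "schon", "wieder", "jetzt", "aber"}
print(segment("nichtschonwieder", lexicon))  # ['nicht', 'schon', 'wieder']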
This paper gives an insight into the basic concepts of a corpus-based lexical resource of spoken German, which is being developed by the project "The Lexicon of Spoken German" (Lexik des gesprochenen Deutsch, LeGeDe) at the "Institute for the German Language" (Institut für Deutsche Sprache, IDS) in Mannheim. The paper focuses on initial ideas for semi-automatic and automatic resources that assist the quantitative analysis of the corpus data for the creation of dictionary content. The work is based on the "Research and Teaching Corpus of Spoken German" (Forschungs- und Lehrkorpus Gesprochenes Deutsch, FOLK).
In this paper, we present a first attempt to classify commonly confused words in German by examining their communicative functions in corpora. Although the use of so-called paronyms causes frequent uncertainties due to similarities in spelling, sound and semantics, the phenomenon has so far attracted little attention from either corpus linguistics or cognitive linguistics. Existing investigations rely on structuralist models, which do not account for empirical evidence. Still, they have developed an elaborate model based on formal criteria, primarily on word formation (cf. Lăzărescu 1999). From a corpus perspective, however, such classifications are incompatible with language in use and with the cognitive elements of misuse.
This article sketches first lexicological insights into a classification model derived from semantic analyses of written communication. Firstly, a brief description of the project is provided. Secondly, the focus turns to corpus-assisted paronym detection. Thirdly, the main section describes the datasets used for paronym classification and the classification procedures. As this is work in progress, the insights will be continually extended once spoken and CMC data are added to the investigations.
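Purely as a hedged illustration of what corpus-assisted paronym detection can start from (not the project's actual procedure), the sketch below pairs vocabulary items whose surface forms are highly similar; a real setup would additionally compare the corpus contexts in which the candidates occur. The threshold, the word list and the function names are assumptions.

from difflib import SequenceMatcher
from itertools import combinations

def form_similarity(a, b):
    # Surface similarity of two word forms, between 0 and 1.
    return SequenceMatcher(None, a, b).ratio()

def paronym_candidates(vocabulary, threshold=0.75):
    # Pair up words whose forms are similar enough to be potentially confusable;
    # comparing their corpus contexts would be the next filtering step.
    return [(a, b)
            for a, b in combinations(sorted(set(vocabulary)), 2)
            if form_similarity(a, b) >= threshold]

print(paronym_candidates(["kindlich", "kindisch", "formal", "formell"]))
# [('formal', 'formell'), ('kindisch', 'kindlich')]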