Korpuslinguistik
Refine
Year of publication
- 2017 (24)
Document Type
- Conference Proceeding (15)
- Article (5)
- Part of a Book (3)
- Book (1)
Has Fulltext
- yes (24)
Keywords
- Korpus <Linguistik> (22)
- Corpus linguistics (11)
- Corpus technology (6)
- Texttechnologie (5)
- Datenmanagement (4)
- Internet (4)
- Deutsch (3)
- Deutsches Referenzkorpus (DeReKo) (3)
- Englisch (3)
- Visualisierung (3)
Publication state
- Veröffentlichungsversion (21)
- Postprint (2)
- Zweitveröffentlichung (1)
Review state
- Peer-Review (24)
The idea behind the project: to provide a quick and easy introduction to the analysis of large corpus data using CorpusExplorer. This freely available software currently offers more than 45 analyses/visualizations for a wide range of corpus-linguistic purposes and, thanks to its user-friendliness, is also suitable for use in university teaching. The EuroParl corpus serves as the example, but users can also annotate, analyze, and visualize their own text material (e.g. text files, eBooks, XML, Twitter, blogs, etc.) with CorpusExplorer. The videos demonstrate the individual functions step by step.
The videos are framed by a small two-stage task. First, come up with a few questions/theses/assumptions that can be investigated using the EuroParl plenary protocols; some videos offer explicit suggestions, or you can draw inspiration from the other contributions in issue #3. The simplest questions/theses can already be answered with the videos presented here. As soon as things become more complex, you enter the second, reflexive part of the overarching task: consider how the goal can be reached by (repeatedly) combining the individual video/knowledge building blocks (for an example, see the script). In case of doubt, a manual and e-mail support are also available.
In the NLP literature, adapting a parser to new text with properties different from the training data is commonly referred to as domain adaptation. In practice, however, the differences between texts from different sources often reflect a mixture of domain and genre properties, and it is by no means clear what impact each of those has on statistical parsing. In this paper, we investigate how differences between articles in a newspaper corpus relate to the concepts of genre and domain and how they influence parsing performance of a transition-based dependency parser. We do this by applying various similarity measures for data point selection and testing their adequacy for creating genre-aware parsing models.
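The data point selection idea described in the abstract above can be sketched in a few lines: rank candidate training articles by how similar their word distribution is to a target text, and keep the closest ones. The choice of measure (Jensen-Shannon divergence over unigram distributions) and all names below are illustrative assumptions, not the paper's actual setup.

```python
import math
from collections import Counter

def unigram_dist(tokens):
    """Relative frequency distribution over a token list."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def js_divergence(p, q):
    """Jensen-Shannon divergence between two unigram distributions."""
    vocab = set(p) | set(q)
    m = {w: 0.5 * (p.get(w, 0) + q.get(w, 0)) for w in vocab}
    def kl(a):
        return sum(a.get(w, 0) * math.log2(a.get(w, 0) / m[w])
                   for w in vocab if a.get(w, 0) > 0)
    return 0.5 * kl(p) + 0.5 * kl(q)

def select_similar(target_tokens, candidate_articles, k=2):
    """Pick the k candidate articles most similar to the target text."""
    p = unigram_dist(target_tokens)
    return sorted(candidate_articles,
                  key=lambda art: js_divergence(p, unigram_dist(art)))[:k]
```

In a genre-aware setting, the selected articles would then form (part of) the training set for the parsing model; any similarity measure over article representations could be slotted into `select_similar` in the same way.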
The Archive for Spoken German (Archiv für Gesprochenes Deutsch, AGD; Stift/Schmidt 2014) at the Institut für Deutsche Sprache is the central repository for corpora of spoken German. Founded as the Deutsches Spracharchiv (DSAv) in 1932, it has built up holdings of about 50 variation and conversation corpora through its own projects, cooperations, and the transfer of data from completed research projects. Today these holdings are almost completely digitized, and a large part of them is made available to the scientific community for use in research and teaching via the Datenbank für Gesprochenes Deutsch (DGD) on the Internet.
In my talk, I present an empirical approach to detecting and describing proverbs as frozen sentences with specific functions in current language use. We developed this approach in the EU project ‘SprichWort’ (based on the German Reference Corpus). The first chapter illustrates selected aspects of our complex, iterative procedure for validating proverb candidates. Based on our corpus-driven lexpan methodology of slot analysis, I then discuss semantic restrictions of proverb patterns. Furthermore, I show different degrees of proverb quality, ranging from genuine proverbs to non-proverb realizations of the same abstract pattern. On the one hand, the corpus validation reveals that proverbs are clearly perceived and used as relatively fixed entities, and often as sentences. On the other hand, proverbs are interpreted not only as an interesting phenomenon in their own right but also as part of the lexicon as a whole, embedded in networks of different lexical items.
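The slot analysis mentioned above can be illustrated with a toy sketch: take an abstract proverb pattern with an open slot and count which fillers occupy that slot in corpus lines. The pattern, the mini-corpus, and the regular expression are invented for demonstration; this is not the actual lexpan tool.

```python
import re
from collections import Counter

# Abstract pattern with an open slot: "Der X heiligt die Mittel"
pattern = re.compile(r"\bDer (\w+) heiligt die Mittel\b")

# Invented corpus lines standing in for real corpus hits
corpus = [
    "Der Zweck heiligt die Mittel, sagte er.",
    "Der Erfolg heiligt die Mittel keineswegs.",
    "Der Zweck heiligt die Mittel nicht immer.",
]

# Count which lexical fillers occupy the slot
fillers = Counter(m.group(1) for line in corpus
                  for m in pattern.finditer(line))
print(fillers.most_common())  # [('Zweck', 2), ('Erfolg', 1)]
```

The filler frequency profile of a slot is the kind of evidence that can separate the genuine proverb realization (here the dominant filler) from other instantiations of the same abstract pattern.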
Making 1:n explorable: a search interface for the ZAS database of clause-embedding predicates
(2017)
We introduce a recently published corpus-based database of German clause-embedding predicates and present an innovative web application for exploring it. The application displays the predicates and the corpus examples for these predicates in two separate tables that can be browsed and searched in real time. While familiar web interface paradigms make it easy for users to get started, the data presentation and the interactive advanced search components for the two tables are designed to accommodate remarkably complex query needs without resorting to a dedicated query language or a more specialized tool. The 1:n relationship between predicates and their examples is exploited in the two tables in that the predicate table, for example, also shows, for each predicate and each example attribute, all values that occur in the examples for this predicate. An easy-to-use visual query builder for arbitrary Boolean combinations of search criteria can optionally be displayed to pre-filter the underlying data presented in both tables. Several options for altering quantifier scope can be activated with simple checkboxes and considerably widen the space of searchable constellations.
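The 1:n aggregation described above can be sketched in a few lines: for each predicate, collect every value a given attribute takes across that predicate's examples, and filter predicates by an existential condition on their examples. The records and field names below are hypothetical and do not reflect the actual ZAS database schema.

```python
from collections import defaultdict

# Hypothetical example records, each linked to its predicate (1:n)
examples = [
    {"predicate": "wissen", "clause_type": "dass", "mood": "indicative"},
    {"predicate": "wissen", "clause_type": "ob",   "mood": "indicative"},
    {"predicate": "fragen", "clause_type": "ob",   "mood": "indicative"},
    {"predicate": "wollen", "clause_type": "dass", "mood": "subjunctive"},
]

def aggregate(attr):
    """Predicate-table column: all values attr takes among a predicate's examples."""
    agg = defaultdict(set)
    for ex in examples:
        agg[ex["predicate"]].add(ex[attr])
    return dict(agg)

def predicates_with(attr, value):
    """Existential filter: predicates with at least one example where attr == value."""
    return sorted({ex["predicate"] for ex in examples if ex[attr] == value})

print(predicates_with("clause_type", "ob"))  # ['fragen', 'wissen']
```

The quantifier-scope checkboxes in the interface correspond to swapping such an existential filter for a universal one ("all examples of the predicate have attr == value"), which widens the space of expressible queries without a query language.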
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars available for German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in terms of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring and adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation will focus on the legal aspects and consequences of this choice of language data.