Korpuslinguistik
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of all linguistic theories that take social interaction involving language into account. However, determining communicative functions is a notoriously difficult task, both for human interpreters and for systems, as it involves an interpretative process that integrates various sources of information. Existing work on communicative function classification comes either from dialogue act tagging, where it is generally coarse-grained with respect to feedback phenomena, or it is token-based and does not address the variety of forms that feedback utterances can take. This paper introduces an annotation framework, the dataset, and the related annotation campaign (involving 7 raters annotating nearly 6,000 utterances). We present its evaluation not merely in terms of inter-rater agreement but also in terms of the usability of the resulting reference dataset, both from a linguistic research perspective and from a more applicative viewpoint.
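Inter-rater agreement of the kind mentioned in this abstract is typically quantified with a chance-corrected coefficient such as Cohen's kappa. A minimal sketch follows; the feedback-function labels are hypothetical, and the abstract does not specify which agreement measure the paper actually uses:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # observed agreement: share of items both raters labelled identically
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # expected agreement if the two raters labelled independently
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[c] * cb.get(c, 0) for c in ca) / (n * n)
    return (po - pe) / (1 - pe)

# hypothetical feedback-function labels from two raters
a = ["ack", "ack", "repeat", "ack", "other", "ack"]
b = ["ack", "repeat", "repeat", "ack", "other", "ack"]
print(round(cohens_kappa(a, b), 3))  # → 0.714
```

For more than two raters, a generalization such as Fleiss' kappa or Krippendorff's alpha would be used instead, but the two-rater case shows the chance-correction idea.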
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view/j/cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
Many applications in Natural Language Processing require a semantic analysis of sentences in terms of truth-conditional representations, often with specific desiderata as to which information needs to be included in the semantic analysis. However, only very few tools allow such an analysis. We investigate the representations produced by an automatic analysis pipeline of the C&C parser and Boxer to determine whether Boxer's analyses in the form of Discourse Representation Structures can be successfully converted into a more surface-oriented event-semantic representation, which serves as input to an algorithm for fusing hard and soft information. We use a data set of synthetic counter-intelligence messages for our investigation. We provide a basic conversion pipeline and subsequently discuss areas in which ambiguities and differences between the semantic representations present challenges in the conversion process.
We investigate how the granularity of POS tagsets influences POS tagging and, furthermore, how POS tagging performance relates to parsing results. For this, we use the standard "pipeline" approach, in which a parser builds its output on previously tagged input. The experiments are performed on two German treebanks, using three POS tagsets of different granularity and six different POS taggers, together with the Berkeley parser. Our findings show that a less granular POS tagset leads to better tagging results. However, both too coarse-grained and too fine-grained distinctions at the POS level decrease parsing performance.
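Varying tagset granularity in such a pipeline usually amounts to mapping fine-grained tags onto a coarser inventory before tagging or parsing. A sketch with German STTS tags follows; the mapping below is a hypothetical illustration, not one of the tagsets actually used in the experiments:

```python
# Hypothetical coarsening map from fine-grained STTS tags to
# universal-style coarse tags (for illustration only).
STTS_TO_COARSE = {
    "NN": "NOUN", "NE": "NOUN",
    "VVFIN": "VERB", "VVINF": "VERB", "VAFIN": "VERB",
    "ADJA": "ADJ", "ADJD": "ADJ",
    "ART": "DET", "APPR": "ADP",
}

def coarsen(tagged_sentence):
    """Map (word, fine_tag) pairs to coarse tags before parsing."""
    return [(w, STTS_TO_COARSE.get(t, "X")) for w, t in tagged_sentence]

sent = [("Der", "ART"), ("Hund", "NN"), ("bellt", "VVFIN")]
print(coarsen(sent))  # [('Der', 'DET'), ('Hund', 'NOUN'), ('bellt', 'VERB')]
```

The trade-off the abstract reports falls out of this design: a coarser inventory gives the tagger fewer confusable classes, but deprives the parser of distinctions it could exploit.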
Brown clustering has been used to help increase parsing performance for morphologically rich languages. However, much of this work has focused on using clustering techniques to replace terminal nodes or as a feature for parsing. Instead, we examine how effective Brown clustering is for unlexicalized parsing by creating data-driven POS tagsets, which are then used with the Berkeley parser. We investigate cluster sizes as well as which information (e.g. words vs. lemmas) clustering should be based on to yield the best parser performance. Our results approach the current state-of-the-art results for the German TüBa-D/Z treebank when using parser-internal tagging.
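Deriving a data-driven tagset from Brown clusters typically works by truncating each word's bit-string path in the binary merge hierarchy to a fixed prefix length, which controls the number of clusters and hence the tagset size. A sketch with invented cluster paths:

```python
# Invented Brown-cluster output: word -> bit-string path in the
# binary merge hierarchy. Truncating the path to a fixed prefix
# length yields data-driven tagsets of different granularity.
CLUSTERS = {
    "Hund": "0010", "Katze": "0011",
    "bellt": "1101", "läuft": "1100",
}

def cluster_tag(word, prefix_len=2):
    """Use a bit-string prefix as a POS tag ('UNK' for unseen words)."""
    path = CLUSTERS.get(word)
    return path[:prefix_len] if path else "UNK"

print([cluster_tag(w) for w in ["Hund", "Katze", "bellt", "Haus"]])
# ['00', '00', '11', 'UNK']
```

Shorter prefixes merge more words into one tag (here the two nouns share "00"), mirroring the cluster-size experiments described in the abstract.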
After a brief general introduction, this contribution presents the Database for Spoken German (Datenbank für Gesprochenes Deutsch, DGD) and the Research and Teaching Corpus of Spoken German (Forschungs- und Lehrkorpus Gesprochenes Deutsch, FOLK) as instruments specifically for conversation-analytic work. Using the example of sprich as a discourse marker for reformulations, the resources and tools for systematic corpus- and database-driven searches are illustrated step by step: the token, context, metadata, and position search options are demonstrated, each in relation to and in mutual interplay with qualitative case analyses, including the annotation of instances according to analytically relevant (structural and functional) categories. Finally, das heißt is drawn on as a further reformulation indicator for a comparative analysis. This contribution is a more detailed elaboration of a shorter, more technical-didactic online guide on this topic (Kaiser/Schmidt 2016) and has a stronger content-analytical focus.
Standardized statistical analysis of corpus data in the project "Korpusgrammatik" (KoGra-R)
(2017)
Using three example analyses, we show how KoGra-R, the analysis tool developed in the IDS project "Korpusgrammatik", can be used in quantitative-linguistic research to analyze frequency data at several linguistic levels. We demonstrate this with regional preferences in the selection of genitive allomorphs, the variation of relative pronouns, and the use of certain anaphoric expressions depending on whether or not the antecedent is in the same sentence. The statistical tests implemented in KoGra-R are suitable for each of these levels and provide at least a first statistically grounded impression of the data.
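The regional-preference example lends itself to a contingency-table test. Below is a minimal sketch of a Pearson chi-square on a hypothetical 2×2 table (region × genitive allomorph); the counts are invented, and KoGra-R's actual test battery may differ:

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], e.g. region x genitive allomorph."""
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# invented counts: -s vs. -es genitives in two regions
stat = chi_square_2x2(180, 60, 150, 110)
print(round(stat, 2), stat > 3.841)  # 3.841 = critical value, df=1, alpha=.05
```

A value above the critical threshold would indicate that allomorph choice and region are not independent in this (invented) table, which is the kind of first statistically grounded impression the abstract describes.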
This paper presents a short insight into a new project at the Institute for the German Language (IDS) in Mannheim. It outlines some basic ideas for a corpus-based dictionary of spoken German, which will be developed and compiled by the new project "The Lexicon of Spoken German" (Lexik des gesprochenen Deutsch, LeGeDe). The work is based on the "Research and Teaching Corpus of Spoken German" (Forschungs- und Lehrkorpus Gesprochenes Deutsch, FOLK), which is implemented in the "Database for Spoken German" (Datenbank für Gesprochenes Deutsch, DGD). Both resources, the database and the corpus, have been developed at the IDS.