Korpuslinguistik
Refine
Year of publication
Document Type
- Conference Proceeding (12)
- Article (6)
- Part of a Book (6)
- Book (4)
- Doctoral Thesis (1)
- Master's Thesis (1)
- Report (1)
- Review (1)
- Working Paper (1)
Has Fulltext
- yes (33)
Keywords
- Computerlinguistik (33) (remove)
Publication state
- Veröffentlichungsversion (22)
- Zweitveröffentlichung (3)
- Postprint (1)
Review state
Publisher
- European Language Resources Association (3)
- de Gruyter (3)
- European Language Resources Association (ELRA) (2)
- Leibniz-Institut für Deutsche Sprache (2)
- Leibniz-Institut für Deutsche Sprache (IDS) (2)
- Linköping University Electronic Press (2)
- CLARIN (1)
- Institut für Deutsche Sprache (1)
- Institut für Kommunikationsforschung und Phonetik (1)
- L'Harmattan (1)
Discourse segmentation is the division of a text into minimal discourse segments, which form the leaves of the trees used to represent discourse structure. A definition of elementary discourse segments in German is provided by adapting widely used segmentation principles for English minimal units, while considering punctuation, morphology, syntax, and aspects of the logical document structure of a complex text type, namely scientific articles. The algorithm and implementation of a discourse segmenter based on these principles are presented, as well as an evaluation of test runs.
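The kind of rule-based segmentation described above can be sketched as follows. This is a hypothetical toy illustration, not the authors' implementation: it splits at sentence-final punctuation and at commas that introduce a (deliberately tiny) list of German subordinating conjunctions; the function name `segment` and the `SUBORDINATORS` list are assumptions for the sake of the example.

```python
import re

# Toy list of German subordinating conjunctions; a real segmenter
# would use a much larger inventory plus morphological and syntactic cues.
SUBORDINATORS = {"weil", "dass", "obwohl", "wenn", "da"}

def segment(text):
    """Split text into rough elementary discourse segments (toy sketch)."""
    segments = []
    # Stage 1: split at sentence-final punctuation.
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        parts = re.split(r",\s+", sentence)
        current = parts[0]
        # Stage 2: split at commas only when a subordinator follows,
        # so enumeration commas do not create spurious segments.
        for part in parts[1:]:
            words = part.split()
            if words and words[0].lower() in SUBORDINATORS:
                segments.append(current)
                current = part
            else:
                current = current + ", " + part
        segments.append(current)
    return [s for s in segments if s]
```

For example, `segment("Er bleibt zu Hause, weil es regnet.")` yields two segments (main clause and causal clause), while an enumeration comma leaves the sentence intact.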
Large language corpora have become increasingly important as an empirical basis for the linguist's work. Work on corpus construction goes hand in hand with the development of ever more convenient computational-linguistic tools for managing and analysing large amounts of data. As the possibilities for accessing data advance, the question arises how linguistics can turn this into a gain in knowledge. This topical question about the relationship between data availability and growth of knowledge was at the centre of the 2006 annual conference of the Institut für Deutsche Sprache. The yearbook "Sprachkorpora - Datenmengen und Erkenntnisfortschritt" focuses on theoretical and methodological questions concerning the design and use of large corpora and treats them from the perspective of various linguistic subdisciplines such as grammar, lexis/lexicography, pragmatics/sociolinguistics, and computational linguistics/computer science. Accounts of current projects make clear the differing requirements on the composition and preparation of language corpora and on search facilities, as well as core methodological questions, e.g. the status of the linguistic datum itself or the combination of quantitative and qualitative methods.
FnhdC/HTML und FnhdC/S
(2007)
Vorwort
(2007)
The TEI has served for many years as a mature annotation format for corpora of different types, including linguistically annotated data. Although it is based on the consensus of a large community, it does not have the legal status of a standard. During the last decade, efforts have been undertaken to develop definitive de jure standards for linguistic data that not only act as a normative basis for the exchange of language corpora but also address recent advancements in technology, such as web-based standards, and the use of large and multiply annotated corpora.
In this article we provide an overview of the process of international standardization and discuss some of the international standards currently being developed under the auspices of ISO/TC 37, the technical committee "Terminology and other Language and Content Resources". We then discuss the relationship between the TEI Guidelines and these specifications with respect to their formal model, notation format, and annotation model. The paper concludes with recommendations for dealing with language corpora.
Wikipedia is a valuable resource, useful as a linguistic corpus or a dataset for many kinds of research. We built corpora from Wikipedia articles and talk pages in the I5 format, a TEI customisation used in the German Reference Corpus (Deutsches Referenzkorpus - DeReKo). Our approach is a two-stage conversion combining parsing with the Sweble parser and transformation with XSLT stylesheets. The conversion approach successfully generates rich and valid corpora regardless of the language. We also introduce a method to segment user contributions in talk pages into postings.
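The second stage of such a pipeline, transforming the parser's intermediate XML into TEI/I5-style markup, can be sketched with a toy stand-in for the XSLT stylesheets. All element and attribute names of the intermediate format (`article`, `title`, `paragraph`) are illustrative assumptions, not the actual Sweble output or I5 schema.

```python
import xml.etree.ElementTree as ET

def to_i5_div(article_xml):
    """Toy stand-in for the XSLT stage: map a hypothetical intermediate
    wiki-parser XML document onto a minimal TEI/I5-like <div>."""
    src = ET.fromstring(article_xml)
    div = ET.Element("div", {"type": "article"})
    # Article title becomes the <head> of the division.
    head = ET.SubElement(div, "head")
    head.text = src.get("title", "")
    # Each parsed paragraph becomes a <p> element.
    for para in src.findall("paragraph"):
        p = ET.SubElement(div, "p")
        p.text = para.text or ""
    return ET.tostring(div, encoding="unicode")
```

In the actual pipeline the same mapping would be written declaratively as XSLT templates, which keeps the transformation independent of the parser's implementation language.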
Machine learning methods offer great potential for automatically investigating large amounts of data in the humanities. Our contribution to the workshop reports on ongoing work in the BMBF project KobRA (http://www.kobra.tu-dortmund.de), where we apply machine learning methods to the analysis of big corpora in language-focused research on computer-mediated communication (CMC). At the workshop, we will discuss first results from training a Support Vector Machine (SVM) for the classification of selected linguistic features in talk pages of the German Wikipedia corpus in DeReKo provided by the IDS Mannheim. We investigate different representations of the data in order to integrate complex syntactic and semantic information into the SVM. The results are intended to support both corpus-based research on CMC and the annotation of linguistic features in CMC corpora.
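The classification setup described can be illustrated with a minimal linear SVM trained by the Pegasos subgradient method. This is a self-contained sketch under stated assumptions, not the project's implementation: real work would use an established SVM library and feature vectors encoding the syntactic and semantic information mentioned above, whereas here the data are toy two-dimensional vectors with labels +1/-1.

```python
import random

def train_svm(samples, labels, lam=0.01, epochs=200, seed=0):
    """Train a linear SVM weight vector with the Pegasos subgradient
    method (toy sketch; no bias term, plain Python lists)."""
    rng = random.Random(seed)
    dim = len(samples[0])
    w = [0.0] * dim
    t = 0
    for _ in range(epochs):
        for i in rng.sample(range(len(samples)), len(samples)):
            t += 1
            eta = 1.0 / (lam * t)  # decreasing learning rate
            x, y = samples[i], labels[i]
            margin = y * sum(wj * xj for wj, xj in zip(w, x))
            if margin < 1:
                # Hinge loss is active: shrink w and step toward y*x.
                w = [(1 - eta * lam) * wj + eta * y * xj
                     for wj, xj in zip(w, x)]
            else:
                # Only the regularisation term contributes.
                w = [(1 - eta * lam) * wj for wj in w]
    return w

def predict(w, x):
    """Classify x by the sign of the decision function."""
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) >= 0 else -1
```

In practice the interesting part is not the solver but the representation: each occurrence of a candidate linguistic feature must be mapped to a vector whose dimensions encode its lexical, syntactic, and semantic context.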