Korpuslinguistik
To build a comparable Wikipedia corpus of German, French, Italian, Norwegian, Polish and Hungarian for contrastive grammar research, we used a set of XSLT stylesheets to transform the MediaWiki annotations to XML. Furthermore, the data has been annotated with word-class information using different taggers. The outcome is a corpus with rich metadata and linguistic annotation that can be used for multilingual research on various linguistic topics.
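The authors' actual stylesheets are not reproduced here, but the first stage of such a pipeline can be sketched with the standard library alone: a minimal, assumed mapping from MediaWiki-style markup (headings, paragraphs) to simple XML elements.

```python
# Minimal sketch (not the authors' XSLT stylesheets): convert MediaWiki-style
# headings and paragraphs into simple XML using only the standard library.
import re
import xml.etree.ElementTree as ET

def mediawiki_to_xml(wikitext: str) -> ET.Element:
    """Map '== Heading ==' lines to <head> and other non-empty lines to <p>."""
    doc = ET.Element("div")
    for line in wikitext.splitlines():
        line = line.strip()
        if not line:
            continue
        m = re.match(r"^==+\s*(.*?)\s*==+$", line)
        if m:
            ET.SubElement(doc, "head").text = m.group(1)
        else:
            ET.SubElement(doc, "p").text = line
    return doc

doc = mediawiki_to_xml("== Geschichte ==\nEin kurzer Absatz.")
print(ET.tostring(doc, encoding="unicode"))
# → <div><head>Geschichte</head><p>Ein kurzer Absatz.</p></div>
```

In the real pipeline this intermediate XML would then be transformed further (e.g. with XSLT) into the target corpus format and enriched with tagger output.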
In this paper, we present an overview of freely available web applications providing online access to spoken language corpora. We explore and discuss various solutions with which the corpus providers and corpus platform developers address the needs of researchers who are working with spoken language. The paper aims to contribute to the long-overdue exchange and discussion of methods and best practices in the design of online access to spoken language corpora.
In the first volume of Corpus Linguistics and Linguistic Theory, Gries (2005. Null-hypothesis significance testing of word frequencies: A follow-up on Kilgarriff. Corpus Linguistics and Linguistic Theory 1(2). doi:10.1515/cllt.2005.1.2.277. http://www.degruyter.com/view/j/cllt.2005.1.issue-2/cllt.2005.1.2.277/cllt.2005.1.2.277.xml: 285) asked whether corpus linguists should abandon null-hypothesis significance testing. In this paper, I want to revive this discussion by defending the argument that the assumptions that allow inferences about a given population – in this case about the studied languages – based on results observed in a sample – in this case a collection of naturally occurring language data – are not fulfilled. As a consequence, corpus linguists should indeed abandon null-hypothesis significance testing.
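For readers unfamiliar with the kind of test at issue, here is a sketch of the log-likelihood (G²) statistic commonly used to compare a word's frequency across two corpora, in its frequent two-term simplification; the counts and corpus sizes below are invented for illustration and do not come from the paper.

```python
# Two-term log-likelihood (G²) for a word observed a times in corpus 1
# (size n1) and b times in corpus 2 (size n2). Illustrative values only.
import math

def log_likelihood(a: int, b: int, n1: int, n2: int) -> float:
    """G² comparing observed counts a and b against expected counts
    derived from the pooled relative frequency."""
    e1 = n1 * (a + b) / (n1 + n2)
    e2 = n2 * (a + b) / (n1 + n2)
    g2 = 0.0
    for obs, exp in ((a, e1), (b, e2)):
        if obs > 0:  # a zero count contributes nothing to the sum
            g2 += obs * math.log(obs / exp)
    return 2 * g2

# 120 vs. 40 occurrences in two corpora of one million tokens each:
g2 = log_likelihood(120, 40, 1_000_000, 1_000_000)
print(round(g2, 2))
# → 41.86
```

The paper's point is precisely that such a statistic licenses inferences about a population only under sampling assumptions that corpus data arguably do not meet.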
Corpora – as collections of texts that are ideally available and analysable in digital form – are a valuable empirical basis for linguistic studies. Building one's own corpora involves different challenges depending on the language domain. For all texts, metadata on the conditions of their production (time, source, etc.) should be collected so that these can be included as variables in analyses. Other information, such as topic classification (or annotations below the text level), is also helpful, but in many respects harder to specify in a general taxonomy, let alone to determine operationally. Beyond the "material" availability of the texts and their technical preparation, copyright – above all licensing and usage rights – as well as ethical responsibility and personality rights must be observed, not least to ensure that the data may be made legally accessible to third parties for the reproduction of studies. Before a new corpus is built for a project, it is therefore best to check whether a suitable one is already available. If a corpus is built, sustainable preservation and accessibility should be ensured, and its existence should be documented in an appropriate place.
So far, Sepedi negations have been considered mostly from the point of view of lexicographical treatment. Theoretical works on Sepedi have been used for this purpose, with the objective of a neat description of these negations in a (paper) dictionary. This paper takes a different perspective: instead of theoretical works, corpus linguistic methods are used: (1) a Sepedi corpus is examined on the basis of existing descriptions of the occurrences of a relevant verb, looking at its negated forms from a purely prescriptive point of view; (2) a "corpus-driven" strategy is employed, looking only for sequences of negation particles (or morphemes) in order to list occurring constructions, without taking into account the verbs occurring in them, apart from their endings. The approach in (2) is only intended to show a possible methodology for extending existing theories on occurring negations. We also aim to help lexicographers establish a frequency-based order of entries for possible negation forms in their dictionaries by showing them the number of respective occurrences. As with all corpus linguistic work, however, we must regard corpus evidence not as representative, but as tendencies of language use that can be detected and described. This is especially true for Sepedi, for which only a few, small corpora exist. This paper also describes the resources and tools used to create the necessary corpus and how it was annotated with part-of-speech and lemma information. Exploring the quality of available Sepedi part-of-speech taggers with respect to verbs, negation morphemes and subject concords may be a positive side result.
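The "corpus-driven" step (2) can be sketched as a simple scan for adjacent negation particles over tokenised text. The particle set {"ga", "sa", "se"} and the mini-corpus below are illustrative assumptions, not the authors' actual inventory or data.

```python
# Toy sketch of step (2): count runs of adjacent negation particles,
# ignoring the verbs themselves. Particle set and example are assumptions.
from collections import Counter

PARTICLES = {"ga", "sa", "se"}  # assumed Sepedi negation particles

def negation_sequences(tokens):
    """Count maximal runs (length >= 1) of adjacent negation particles."""
    counts = Counter()
    run = []
    for tok in tokens + ["<end>"]:  # sentinel flushes a trailing run
        if tok in PARTICLES:
            run.append(tok)
        else:
            if run:
                counts[" ".join(run)] += 1
            run = []
    return counts

tokens = "ga se monna ga a tsebe se sa lokago".split()
print(negation_sequences(tokens))
```

Ranking the resulting counts by frequency would give lexicographers exactly the kind of frequency-based ordering of negation constructions the paper argues for.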
Auf dem Weg zu einer Kartographie: automatische und manuelle Analysen am Beispiel des Korpus ISW
(2021)
Bericht von der Dritten Internationalen Konferenz „Grammatik und Korpora“, Mannheim, 22. - 24.9.2009
(2009)
Wikipedia is a valuable resource, useful as a linguistic corpus or a dataset for many kinds of research. We built corpora from Wikipedia articles and talk pages in the I5 format, a TEI customisation used in the German Reference Corpus (Deutsches Referenzkorpus - DeReKo). Our approach is a two-stage conversion combining parsing using the Sweble parser and transformation using XSLT stylesheets. The conversion approach is able to successfully generate rich and valid corpora regardless of language. We also introduce a method to segment user contributions in talk pages into postings.
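The paper's actual segmentation method is not reproduced here, but a common heuristic for splitting talk-page text into postings is to cut at MediaWiki signature patterns ("--[[User:...]] <timestamp>"); the sketch below assumes that heuristic and invented example text.

```python
# Hedged sketch: segment a talk page into postings by splitting at
# MediaWiki signatures. Regex and example are illustrative assumptions.
import re

SIGNATURE = re.compile(r"--\s*\[\[(?:User|Benutzer):[^\]]+\]\][^\n]*")

def split_postings(talk_text: str):
    """Return one posting per signature; each posting ends with its signature."""
    postings, start = [], 0
    for m in SIGNATURE.finditer(talk_text):
        postings.append(talk_text[start:m.end()].strip())
        start = m.end()
    return postings

talk = ("I agree with the change. --[[User:Alice]] 10:01, 1 Jan 2020\n"
        "I do not. --[[User:Bob]] 10:05, 1 Jan 2020")
print(len(split_postings(talk)))
# → 2
```

Unsigned contributions and interleaved replies make the real problem harder, which is why a dedicated segmentation method is worth presenting.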
In this paper, we describe a schema and models which have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG Wiki and will accompany a feature request to the TEI council in late 2019.
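To make the basic-unit model concrete, the sketch below generates a minimal post-like element with the standard library. The element and attribute names (<post>, @who, @when) are assumptions in the spirit of the CMC-core proposal; consult the actual customisation for the normative encoding.

```python
# Illustrative only: build a minimal CMC posting element. Element and
# attribute names are assumed, not quoted from the CMC-core schema.
import xml.etree.ElementTree as ET

post = ET.Element("post", {"who": "#A02", "when": "2019-05-04T20:03:12"})
p = ET.SubElement(post, "p")
p.text = "Hi all, does anyone know a good TEI tutorial?"

xml = ET.tostring(post, encoding="unicode")
print(xml)
```

In a full corpus, such units would be grouped into the macro-level interaction structures (threads, logfiles) that the proposed models describe.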