In corpus linguistics and quantitative linguistics, a wide variety of formal measures are used to measure the usage frequency of a word, an expression, or of more abstract or complex linguistic elements in a given corpus, and, where applicable, to compare it with other usage frequencies. In the following, for a selection of these measures (absolute frequency, relative frequency, probability distribution, difference coefficient, frequency class), we summarize how they are defined, what properties they have, and under which conditions they can be (meaningfully) applied and interpreted; here it can matter whether the frequency measure is applied to a corpus as a whole or to individual subcorpora. In addition to the limitations noted for the individual frequency measures, the following simplified rule holds in general: the rarer a word is in the given corpus overall, and the smaller that corpus is, the more the observed usage frequency of the word depends on random factors, i.e., the lower the statistical reliability of the observation.
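Some of the measures named above can be sketched in a few lines. The following Python helpers are illustrative only: the function names are invented, and the exact rounding convention for the frequency class is an assumption (a commonly used variant takes the rounded binary logarithm of the ratio between the most frequent word's count and the target word's count).

```python
from collections import Counter
import math

def frequency_measures(tokens, word):
    """Compute absolute frequency, relative frequency, and frequency
    class for `word` in a tokenized corpus. Hypothetical helper for
    illustration; not a definition from the original text."""
    counts = Counter(tokens)
    n = len(tokens)
    abs_freq = counts[word]                     # absolute frequency
    rel_freq = abs_freq / n                     # relative frequency
    f_max = counts.most_common(1)[0][1]         # count of the most frequent word
    # Frequency class: rounded log2 of f_max / f(word); assumed rounding.
    freq_class = round(math.log2(f_max / abs_freq)) if abs_freq else None
    return abs_freq, rel_freq, freq_class

def difference_coefficient(rel_a, rel_b):
    """Difference coefficient between two relative frequencies,
    e.g. from two subcorpora; ranges from -1 to 1."""
    return (rel_a - rel_b) / (rel_a + rel_b)
```

The frequency class illustrates the general caveat from the text: for rare words (small counts), a change of a single occurrence can shift the class, which is one way the low statistical reliability of rare-word observations shows up in practice.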
In this paper we present an approach to faceted search in large language resource repositories. This kind of search, which enables users to browse the repository by choosing their own sequence of facets, relies heavily on the availability of descriptive metadata for the objects in the repository. This approach therefore informs the collection of a minimal set of metadata for language resources. The work described in this paper has been funded by the EC within the ESFRI infrastructure project CLARIN.
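The core mechanism of faceted browsing, counting metadata values and narrowing the result set one facet at a time, can be sketched as follows. The record fields and values here are illustrative only and do not reflect CLARIN's actual metadata schema.

```python
from collections import Counter

# Toy metadata records; fields and values are assumptions for illustration.
records = [
    {"type": "Article", "year": 2009, "fulltext": "yes"},
    {"type": "Working Paper", "year": 2009, "fulltext": "yes"},
    {"type": "Article", "year": 2008, "fulltext": "no"},
]

def facet_counts(records, field):
    """How many records carry each value of a metadata field."""
    return Counter(r[field] for r in records)

def apply_facet(records, field, value):
    """Narrow the current result set by one chosen facet value."""
    return [r for r in records if r[field] == value]
```

Each user click corresponds to one `apply_facet` call, after which `facet_counts` recomputes the remaining options, which is why descriptive metadata must exist for every object: a record without a value for a field simply cannot be reached through that facet.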
While written corpora can be exploited without any linguistic annotations, speech corpora need at least a basic transcription to be of any use for linguistic research. The basic annotation of speech data usually consists of time-aligned orthographic transcriptions. To answer phonetic or phonological research questions, phonetic transcriptions are needed as well. However, manual annotation is very time-consuming and requires considerable skill and near-native competence. Therefore it can take years of speech corpus compilation and annotation before any analyses can be carried out. In this paper, approaches that address the transcription bottleneck of speech corpus exploitation are presented and discussed, including crowdsourcing the orthographic transcription, automatic phonetic alignment, and query-driven annotation. Currently, query-driven annotation and automatic phonetic alignment are being combined and applied in two speech research projects at the Institut für Deutsche Sprache (IDS), whereas crowdsourcing the orthographic transcription still awaits implementation.
We present data-driven methods for the acquisition of LFG resources from two German treebanks. We discuss problems specific to semi-free word order languages as well as problems arising from the data structures determined by the design of the different treebanks. We compare two ways of encoding semi-free word order, as done in the two German treebanks, and argue that the design of the TiGer treebank is more adequate for the acquisition of LFG resources. Furthermore, we describe an architecture for LFG grammar acquisition for German, based on the two German treebanks, and compare our results with a hand-crafted German LFG grammar.
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of aspects of use of lexical items. These aspects include ‘(lexical) meaning’ in a very broad sense and involve different dimensions; they are established in and emerge from the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word’s usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word’s use, and to the common and distinctive aspects of any two words.
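As a minimal sketch of what a collocation profile is, the following computes association scores for words co-occurring with a node word within a symmetric window. The abstract does not name the statistic it uses; the rough windowed PMI estimate below is an assumption chosen for brevity, and the function name is invented.

```python
from collections import Counter
import math

def collocation_profile(tokens, node, window=2):
    """Sketch of a collocation profile for `node`: an association score
    (a rough windowed PMI estimate) for each word co-occurring with it.
    Illustrative only; not the method described in the tutorial."""
    n = len(tokens)
    unigram = Counter(tokens)
    cooc = Counter()
    for i, t in enumerate(tokens):
        if t != node:
            continue
        # Count every token inside the window around this occurrence.
        for j in range(max(0, i - window), min(n, i + window + 1)):
            if j != i:
                cooc[tokens[j]] += 1
    # PMI-style score: log2( n * cooc / (f(node) * f(collocate)) )
    return {w: math.log2(n * c / (unigram[node] * unigram[w]))
            for w, c in cooc.items()}
```

Such profiles (one score vector per word) are exactly the kind of objects whose pairwise similarity structure the self-organizing visualizations mentioned above are meant to display.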
This article introduces the topic of “Multilingual language resources and interoperability”. We start with a taxonomy and parameters for classifying language resources. We then provide examples of and issues with interoperability, together with resource architectures to resolve such issues. Finally, we discuss aspects of linguistic formalisms and interoperability.
We report on completed work in a project concerned with providing methods, tools, best-practice guidelines, and solutions for sustainable linguistic resources. The article discusses several general aspects of sustainability and introduces an approach to normalizing corpus data and metadata records. Moreover, the architecture of the sustainability platform implemented by the authors is described.
The paper discusses from various angles the morphosyntactic annotation of DeReKo, the Archive of General Reference Corpora of Contemporary Written German at the Institut für Deutsche Sprache (IDS), Mannheim. The paper is divided into two parts. The first part covers the practical and technical aspects of this endeavor. We present results from a recent evaluation of tools for the annotation of German text resources that have been applied to DeReKo. These tools include commercial products, especially Xerox' Finite State Tools and the Machinese products developed by the Finnish company Connexor Oy, as well as software for which licenses are available free of charge to academic institutions, e.g. Helmut Schmid's Tree Tagger. The second part focuses on the linguistic interpretability of the corpus annotations and on more general methodological considerations concerning scientifically sound empirical linguistic research. The main challenge here is that, unlike the texts themselves, the morphosyntactic annotations of DeReKo do not have the status of observed data; instead they constitute a theory- and implementation-dependent interpretation. In addition, because of the enormous size of DeReKo, a systematic manual verification of the automatic annotations is not feasible. In consequence, the expected degree of inaccuracy is very high, particularly wherever linguistically challenging phenomena, such as lexical or grammatical variation, are concerned. Given these facts, a researcher who uses the annotations blindly runs the risk of studying not the language itself but rather the annotation tool or the theory behind it. The paper gives an overview of possible pitfalls and ways to circumvent them, and discusses the opportunities offered by using annotations in corpus-based and corpus-driven grammatical research against the background of a scientifically sound methodology.
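One simple way to gauge how theory- and tool-dependent automatic annotations are is to measure how often two independent taggers agree on the same tokens. The helper below is an illustrative spot check, not the evaluation procedure used for DeReKo; the function name and the example STTS-style tags are assumptions.

```python
def tagging_agreement(tags_a, tags_b):
    """Fraction of tokens on which two automatic POS annotations agree.
    Illustrative spot check; disagreement concentrates exactly where
    lexical or grammatical variation makes tagging hard."""
    if len(tags_a) != len(tags_b):
        raise ValueError("annotations must cover the same tokens")
    return sum(a == b for a, b in zip(tags_a, tags_b)) / len(tags_a)
```

On a sampled subcorpus, a low agreement rate flags regions where neither annotation should be trusted blindly, which is one practical way to circumvent the pitfall the abstract describes.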
This paper shows how a corpus-driven approach leads to a new perspective on central issues of phraseology and on lexicographical applications. It argues that a data-driven pattern search (applying statistical methods), an a posteriori interpretation of the data, and a user-oriented documentation of the usage of multi-word units (e.g. in lexicographical articles) constitute a step-by-step process in which each step has its own informational value and usefulness. The description of multi-word units (Usuelle Wortverbindungen) presented in this paper focuses on the second step, the high-quality analysis and interpretation of collocation data, exemplified by the fields of multi-word units centered around the word forms Idee/Ideen (‘idea’/‘ideas’).