While written corpora can be exploited without any linguistic annotations, speech corpora need at least a basic transcription to be of any use for linguistic research. The basic annotation of speech data usually consists of time-aligned orthographic transcriptions. To answer phonetic or phonological research questions, phonetic transcriptions are needed as well. However, manual annotation is very time-consuming and requires considerable skill and near-native competence. It can therefore take years of speech corpus compilation and annotation before any analyses can be carried out. In this paper, approaches that address this transcription bottleneck of speech corpus exploitation are presented and discussed: crowdsourcing the orthographic transcription, automatic phonetic alignment, and query-driven annotation. Currently, query-driven annotation and automatic phonetic alignment are being combined and applied in two speech research projects at the Institut für Deutsche Sprache (IDS), whereas crowdsourcing the orthographic transcription still awaits implementation.
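The time-aligned orthographic transcription and the query-driven lookup described above can be sketched as a minimal data structure plus a search function; all segment times, speaker labels and texts below are invented for illustration, not IDS corpus data.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # segment start time in seconds
    end: float     # segment end time in seconds
    speaker: str
    text: str      # orthographic transcription of the segment

# A tiny time-aligned transcription (values invented for illustration).
transcript = [
    Segment(0.00, 1.42, "SPK1", "guten Morgen"),
    Segment(1.42, 3.10, "SPK2", "hallo wie geht es dir"),
    Segment(3.10, 4.05, "SPK1", "danke gut"),
]

def segments_containing(transcript, word):
    """Query-driven lookup: return the time spans in which a word occurs."""
    return [(s.start, s.end) for s in transcript if word in s.text.split()]

print(segments_containing(transcript, "danke"))  # [(3.1, 4.05)]
```

Query-driven annotation then only needs to transcribe or align further the segments such a query returns, rather than the whole corpus.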
In this paper we present an approach to faceted search in large language resource repositories. This kind of search, which enables users to browse the repository by choosing their personal sequence of facets, relies heavily on the availability of descriptive metadata for the objects in the repository. The approach therefore informs the collection of a minimal set of metadata for language resources. The work described in this paper has been funded by the EC within the ESFRI infrastructure project CLARIN.
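The facet mechanism described above can be sketched minimally: filter a record set by the facet values chosen so far, then re-count the remaining values for each facet. The record fields below are illustrative and not the actual CLARIN metadata schema.

```python
from collections import Counter

# Toy metadata records; field names and values are invented for illustration.
records = [
    {"type": "Article", "language": "English", "year": 2009},
    {"type": "Book", "language": "German", "year": 2009},
    {"type": "Article", "language": "English", "year": 2008},
]

def apply_facets(records, facets):
    """Keep only records matching every facet value chosen by the user."""
    return [r for r in records if all(r.get(k) == v for k, v in facets.items())]

def facet_counts(records, field):
    """Count the remaining values for one facet, as shown in a refine panel."""
    return Counter(r[field] for r in records)

hits = apply_facets(records, {"language": "English"})
print(facet_counts(hits, "type"))  # Counter({'Article': 2})
```

Because each facet choice simply narrows the record set, users can apply facets in any personal sequence and always see consistent counts.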
Concurrent standardization as a necessity: The genesis of the new official orthographic guidelines
(2009)
The new official orthographic guidelines were brought into force by the official state authorities on August 1st, 1998, and their principal goals were a standardized representation of the guidelines and a «gentle simplification in respect of content». This regulation was not supported by the public, and in fact it was the starting point for a struggle for conceptual solutions and a quest for a consensus between different possible norms. Since orthography is an officially codified standard taking up a prominent position among linguistic standards, it is of particular socio-political importance. It was the foremost task of the Council for German Orthography (Rat für deutsche Rechtschreibung), instituted in December 2004, to elaborate a compromise in order to bring to an end the «Orthographical war» (Die Zeit), which had been waged enthusiastically for more than a decade. The concern of this article is to classify historically the agreement reached in 2006. Against this background, it can be stated that official guidelines will only be accepted if they are based upon actual usage in writing and if they take into account the interests of the reader. Both principles characterize the proposal made by the Council for German Orthography. An outlook on the Council's expected future activities concerning orthographic standardization concludes this article.
Manual development of deep linguistic resources is time-consuming and costly and is therefore often described as a bottleneck for traditional rule-based NLP. In my PhD thesis I present a treebank-based method for the automatic acquisition of LFG resources for German. The method automatically creates deep and rich linguistic representations from labelled data (treebanks) and can be applied to large data sets. My research is based on and substantially extends previous work on automatically acquiring wide-coverage, deep, constraint-based grammatical resources from the English Penn-II treebank (Cahill et al., 2002; Burke et al., 2004; Cahill, 2004). The best results for English show a dependency f-score of 82.73% (Cahill et al., 2008) against the PARC 700 dependency bank, outperforming the best hand-crafted grammar of Kaplan et al. (2004). Preliminary work has been carried out to test the approach on languages other than English, providing proof of concept for the applicability of the method (Cahill et al., 2003; Cahill, 2004; Cahill et al., 2005). While first results have been promising, a number of important research questions have been raised. The original approach, first presented in Cahill et al. (2002), is strongly tailored to English and the data structures provided by the Penn-II treebank (Marcus et al., 1993). English is configurational and rather poor in inflectional forms. German, by contrast, features semi-free word order and a much richer morphology. Furthermore, treebanks for German differ considerably from the Penn-II treebank as regards the data structures and encoding schemes underlying the grammar acquisition task. In my thesis I examine the impact of language-specific properties of German as well as linguistically motivated treebank design decisions on PCFG parsing and LFG grammar acquisition.
I present experiments investigating the influence of treebank design on PCFG parsing and show which types of representation are useful for the PCFG and LFG grammar acquisition tasks. Furthermore, I present a novel approach to cross-treebank comparison, measuring the effect of controlled error insertion on treebank trees and parser output from different treebanks. I complement the cross-treebank comparison with a human evaluation using TePaCoC, a new test suite for testing parser performance on complex grammatical constructions. Manual evaluation on the TePaCoC data provides new insights into the impact of flat vs. hierarchical annotation schemes on data-driven parsing. I present treebank-based LFG acquisition methodologies for two German treebanks. An extensive evaluation along different dimensions complements the investigation and provides valuable insights for the future development of treebanks.
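The dependency f-score used as the evaluation measure above can be sketched as a comparison of gold-standard and parser-produced dependency triples (head, relation, dependent). The triples below are invented toy examples, not PARC 700 data, and the sketch assumes both sides are non-empty sets.

```python
def dependency_fscore(gold, test):
    """Precision, recall and f-score over sets of dependency triples
    of the form (head, relation, dependent)."""
    correct = len(gold & test)          # triples the parser got right
    precision = correct / len(test)     # fraction of parser triples correct
    recall = correct / len(gold)        # fraction of gold triples recovered
    f = 2 * precision * recall / (precision + recall)
    return precision, recall, f

# Toy example: the parser mislabels one relation (OA instead of NK).
gold = {("sah", "SB", "er"), ("sah", "OA", "Haus"), ("Haus", "NK", "das")}
test = {("sah", "SB", "er"), ("sah", "OA", "Haus"), ("Haus", "OA", "das")}
p, r, f = dependency_fscore(gold, test)
```

Because both the label and the attachment must match, a single mislabelled relation costs both precision and recall, which is why the measure is a strict test of deep grammatical analysis.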
In this paper we address the question of what is needed, in terms of morphosyntactic encoding, to relate a so-called verb-specific modifier to a nominal head. For the purposes of this paper we shall assume that the notion of a verb-specific modifier includes adverbs and their phrasal or clausal projections, adpositional phrases, and noun phrases featuring a particular semantic case such as locative or instrumental. Noun-specific modifiers, in turn, are considered to be first and foremost adjectives and adjective phrases, next participles and their phrasal projections and, finally, relative clauses. The basic motivation underlying this distinction relates to markedness.
This dossier consists of an introduction to the region under study, followed by six sections each dealing with a specific level of the education system. These brief descriptions contain factual information presented in a readily accessible way. Sections eight to ten cover research, prospects, and summary statistics. For detailed information and political discussions about language use at the various levels of education, the reader is referred to other sources with a list of publications.
“Linguistic Landscapes” (LL) is a research method which has become increasingly popular in recent years. In this paper, we will first explain the method itself and discuss some of its fundamental assumptions. We will then recall the basic traits of multilingualism in the Baltic States before presenting results from our project, carried out together with a group of Master's students of Philology in several medium-sized towns in the Baltic States and focussing on our home town of Rēzekne in the highly multilingual region of Latgale in Eastern Latvia. In the discussion of some of the results, we will introduce the concept of “Legal Hypercorrection” as a term for stricter compliance with language laws than necessary. The last part will report on the advantages of LL for educational purposes relating to multilingualism, and for developing discussions on multilingualism among the general public.
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of aspects of use of lexical items. These aspects include ‘(lexical) meaning’ in a very broad sense and involve different dimensions; they are established in and emerge from the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word’s usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word’s use, and to the common and distinctive aspects of any two words.
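A minimal sketch of the collocation-profile idea above, assuming simple window-based co-occurrence counts and cosine similarity as the profile comparison; the actual approach uses more sophisticated statistics, and the toy sentence is invented for illustration.

```python
import math
from collections import Counter

def collocation_profile(tokens, target, window=1):
    """Count words co-occurring within a symmetric window around the target."""
    profile = Counter()
    for i, tok in enumerate(tokens):
        if tok == target:
            context = tokens[max(0, i - window):i] + tokens[i + 1:i + 1 + window]
            profile.update(context)
    return profile

def cosine(p, q):
    """Cosine similarity between two count profiles (0.0 for empty profiles)."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = (math.sqrt(sum(v * v for v in p.values()))
            * math.sqrt(sum(v * v for v in q.values())))
    return dot / norm if norm else 0.0

tokens = "der Hund bellt laut und der Hund schläft und die Katze schläft".split()
hund = collocation_profile(tokens, "Hund")    # Counter({'der': 2, ...})
katze = collocation_profile(tokens, "Katze")
```

Comparing such profiles pairwise yields the similarity structure that the self-organizing visualization methods mentioned above then map out.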
In opinion mining, there has been very little work investigating semi-supervised machine learning for document-level polarity classification. We show that semi-supervised learning performs significantly better than supervised learning when only a few labelled data are available. Semi-supervised polarity classifiers rely on a predictive feature set. (Semi-)manually built polarity lexicons are one option, but they are expensive to obtain and do not necessarily work in an unknown domain. We show that extracting frequently occurring adjectives and adverbs from an unlabelled set of in-domain documents is an inexpensive alternative which works equally well across different domains.
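The inexpensive feature extraction described above can be sketched as follows, assuming the unlabelled in-domain documents are already POS-tagged; the Penn-style tag prefixes (JJ for adjectives, RB for adverbs) and the toy documents are illustrative.

```python
from collections import Counter

def frequent_adj_adv(tagged_docs, top_n=100):
    """Collect the most frequent adjectives and adverbs from a set of
    POS-tagged, unlabelled in-domain documents as a polarity feature set."""
    counts = Counter(
        word.lower()
        for doc in tagged_docs
        for word, tag in doc
        if tag.startswith(("JJ", "RB"))  # Penn-style adjective/adverb tags
    )
    return [w for w, _ in counts.most_common(top_n)]

# Toy tagged documents (invented for illustration).
docs = [
    [("great", "JJ"), ("movie", "NN"), ("really", "RB")],
    [("bad", "JJ"), ("plot", "NN"), ("great", "JJ")],
]
features = frequent_adj_adv(docs, top_n=2)
```

Since only raw in-domain text and a POS tagger are needed, the feature set can be rebuilt cheaply for each new domain, unlike a hand-built polarity lexicon.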