We present a morphological analyzer for Spanish called SMM. SMM is implemented in the grammar development framework Malaga, which is based on the formalism of Left-Associative Grammar. We briefly present the Malaga framework, describe the implementation decisions for some interesting morphological phenomena of Spanish, and report on the evaluation results from the analysis of corpora. SMM was originally designed only for analyzing word forms; in this article we outline two approaches for using SMM and the facilities provided by Malaga to also generate verbal paradigms. SMM can also be embedded into applications by making use of the Malaga programming interface; we briefly discuss some application scenarios.
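Left-Associative Grammar analyzes a word form incrementally, always combining the input consumed so far with the next allomorph. As a toy illustration of the idea (this is not SMM's actual rule set; the mini-lexicon and category labels below are invented for the sketch), a stem-plus-ending analysis of a Spanish verb form might look like:

```python
# Toy left-associative word-form analysis: scan the form left to right
# and try to extend a recognized stem with a recognized ending.
# The mini-lexicon is illustrative only, not SMM's actual data.
STEMS = {"cant": "sing", "habl": "speak"}
ENDINGS = {"o": "1sg.pres", "amos": "1pl.pres", "an": "3pl.pres"}

def analyze(form):
    """Return all (gloss, category) analyses of a word form."""
    results = []
    for i in range(1, len(form)):  # split point moves left to right
        stem, ending = form[:i], form[i:]
        if stem in STEMS and ending in ENDINGS:
            results.append((STEMS[stem], ENDINGS[ending]))
    return results
```

A real Malaga grammar would state such combinations as rules over feature structures rather than a flat dictionary lookup, but the left-to-right extension of a partial analysis is the same.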
Speech Act Verbs
(2009)
The paper discusses particular logical consistency conditions satisfied by German proposition-embedding predicates which determine the question type (external and internal whether-form as well as exhaustive and non-exhaustive wh-form), the correlate type (es- or da-correlate) as well as the impact of the correlate on the respective consistency condition. It will turn out that some consistency conditions also determine the embedding of verb-second clauses and subject control.
Using concurrent electroencephalogram and eye movement measures to track natural reading, this study shows that N400 effects reflecting predictability are dissociable from those owing to spreading activation. In comparing predicted sentence endings with related and unrelated unpredicted endings in antonym constructions (‘the opposite of black is white/yellow/nice’), fixation-related potentials at the critical word revealed a predictability-based N400 effect (unpredicted vs. predicted words). By contrast, event-related potentials time locked to the last fixation before the critical word showed an N400 only for the unrelated unpredicted condition (nice). This effect is attributed to a parafoveal mismatch between the critical word and preactivated lexical features (i.e. features of the predicted word and its associates). In addition to providing the first demonstration of a parafoveally induced N400 effect, our results support the view that the N400 is best viewed as a component family.
This paper presents EXMARaLDA, a system for the computer-assisted creation and analysis of spoken
language corpora. The first part contains some general observations about technological and methodological requirements for doing corpus-based pragmatics. The second part explains the system's architecture and gives an overview of its most important software components: a transcription editor, a corpus management tool and a corpus query tool. The last part presents some corpora which have been or are currently being compiled with the help of EXMARaLDA.
Spoken language corpora, as used in conversation analytic research, language acquisition studies and dialectology, pose a number of challenges that are rarely addressed by corpus linguistic methodology and technology. This paper starts by giving an overview of the most important methodological issues distinguishing spoken language corpus work from the work with written data. It then shows what technological challenges these methodological issues entail and demonstrates how they are dealt with in the architecture and tools of the EXMARaLDA system.
This paper introduces LRTwiki, an improved variant of the Likelihood Ratio Test (LRT). The central idea of LRTwiki is to employ a comprehensive domain-specific knowledge source as additional “on-topic” data sets, and to modify the calculation of the LRT algorithm to take advantage of this new information. The knowledge source is created on the basis of Wikipedia articles. We evaluate on the two related tasks of product feature extraction and keyphrase extraction, and find LRTwiki to yield a significant improvement over the original LRT in both tasks.
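The original LRT referred to here is commonly computed as Dunning's log-likelihood ratio over term counts in an on-topic corpus versus a background corpus. A minimal sketch of that standard statistic follows (the counts and variable names are illustrative; this is the plain LRT, not the paper's LRTwiki modification):

```python
import math

def _ll(k, n, p):
    # Binomial log-likelihood, with the 0 * log(0) = 0 convention.
    s = 0.0
    if k > 0:
        s += k * math.log(p)
    if n - k > 0:
        s += (n - k) * math.log(1 - p)
    return s

def llr(k1, n1, k2, n2):
    """Dunning's log-likelihood ratio.

    k1/n1: term count / total tokens in the on-topic corpus,
    k2/n2: the same counts in the background corpus.
    """
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)  # pooled rate under the null hypothesis
    return 2 * (_ll(k1, n1, p1) + _ll(k2, n2, p2)
                - _ll(k1, n1, p) - _ll(k2, n2, p))
```

A term whose relative frequency is the same in both corpora scores (near) zero; the score grows as the on-topic rate diverges from the background rate, which is what makes the statistic useful for feature and keyphrase candidate ranking.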
Beyond the stars: exploiting free-text user reviews to improve the accuracy of movie recommendations
(2009)
In this paper we show that the extraction of opinions from free-text reviews can improve the accuracy of movie recommendations. We present three approaches to extract movie aspects as opinion targets and use them as features for the collaborative filtering. Each of these approaches requires different amounts of manual interaction. We collected a data set of reviews with corresponding ordinal (star) ratings of several thousand movies to evaluate the different features for the collaborative filtering. We employ a state-of-the-art collaborative filtering engine for the recommendations during our evaluation and compare the performance with and without using the features representing user preferences mined from the free-text reviews provided by the users. The opinion mining based features perform significantly better than the baseline, which is based on star ratings and genre information only.
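One way to picture how mined opinions can serve as recommendation features: aggregate per-movie aspect sentiments into feature vectors and compare items by cosine similarity, as in item-based collaborative filtering. A hedged sketch (the movies, aspects, and sentiment scores below are invented, and the paper's actual three extraction approaches and filtering engine are not reproduced here):

```python
import math
from collections import defaultdict

# Hypothetical mined opinions: (movie, aspect, sentiment in [-1, 1]).
opinions = [
    ("MovieA", "acting", 0.9), ("MovieA", "plot", 0.4),
    ("MovieB", "acting", 0.8), ("MovieB", "plot", 0.5),
    ("MovieC", "acting", -0.6), ("MovieC", "soundtrack", 0.7),
]

def aspect_vectors(ops):
    """Average each movie's sentiment per aspect into a feature vector."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for movie, aspect, s in ops:
        sums[movie][aspect] += s
        counts[movie][aspect] += 1
    return {m: {a: sums[m][a] / counts[m][a] for a in sums[m]} for m in sums}

def cosine(u, v):
    """Cosine similarity between two sparse aspect vectors."""
    num = sum(u[a] * v[a] for a in set(u) & set(v))
    du = math.sqrt(sum(x * x for x in u.values()))
    dv = math.sqrt(sum(x * x for x in v.values()))
    return num / (du * dv) if du and dv else 0.0
```

With the toy data above, MovieA and MovieB (praised for the same aspects) come out more similar than MovieA and MovieC, so an item-based recommender would rank B over C for a user who liked A; star ratings alone carry no such aspect-level signal.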
The paper discusses from various angles the morphosyntactic annotation of DeReKo, the Archive of General Reference Corpora of Contemporary Written German at the Institut für Deutsche Sprache (IDS), Mannheim. The paper is divided into two parts. The first part covers the practical and technical aspects of this endeavor. We present results from a recent evaluation of tools for the annotation of German text resources that have been applied to DeReKo. These tools include commercial products, especially Xerox' Finite State Tools and the Machinese products developed by the Finnish company Connexor Oy, as well as software for which licenses are available free of charge to academic institutions, e.g. Helmut Schmid's TreeTagger. The second part focuses on the linguistic interpretability of the corpus annotations and more general methodological considerations concerning scientifically sound empirical linguistic research. The main challenge here is that unlike the texts themselves, the morphosyntactic annotations of DeReKo do not have the status of observed data; instead they constitute a theory- and implementation-dependent interpretation. In addition, because of the enormous size of DeReKo, a systematic manual verification of the automatic annotations is not feasible. In consequence, the expected degree of inaccuracy is very high, particularly wherever linguistically challenging phenomena, such as lexical or grammatical variation, are concerned. Given these facts, a researcher using the annotations blindly will run the risk of not actually studying the language but rather the annotation tool or the theory behind it. The paper gives an overview of possible pitfalls and ways to circumvent them and discusses the opportunities offered by using annotations in corpus-based and corpus-driven grammatical research against the background of a scientifically sound methodology.