“Linguistic Landscapes” (LL) is a research method which has become increasingly popular in recent years. In this paper, we will first explain the method itself and discuss some of its fundamental assumptions. We will then recall the basic traits of multilingualism in the Baltic States, before presenting results from our project, carried out together with a group of Master's students of Philology in several medium-sized towns in the Baltic States and focussing on our home town of Rēzekne in the highly multilingual region of Latgale in Eastern Latvia. In the discussion of some of the results, we will introduce the concept of “Legal Hypercorrection” as a term for compliance with language laws that is stricter than the laws actually require. The last part will report on the advantages of LL for teaching about multilingualism and for stimulating discussion of multilingualism among the general public.
Beyond the stars: exploiting free-text user reviews to improve the accuracy of movie recommendations
(2009)
In this paper we show that the extraction of opinions from free-text reviews can improve the accuracy of movie recommendations. We present three approaches to extracting movie aspects as opinion targets and using them as features for collaborative filtering; each approach requires a different amount of manual interaction. To evaluate the different features for collaborative filtering, we collected a data set of reviews with corresponding ordinal (star) ratings for several thousand movies. We employ a state-of-the-art collaborative filtering engine for the recommendations in our evaluation and compare its performance with and without the features representing user preferences mined from the users' free-text reviews. The opinion-mining-based features perform significantly better than the baseline, which is based on star ratings and genre information only.
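The abstract describes augmenting a collaborative filtering engine with preference features mined from review text. The paper's actual feature design is not given here, but one common way to combine the two signals is to build a single sparse preference vector per user, mixing item ratings with mined aspect opinions, and compare users by cosine similarity. The following is an illustrative sketch under that assumption; the function names, the `alpha` blending weight, and the dict-based vector format are inventions for illustration, not the paper's implementation:

```python
import math

def cosine(u, v):
    # Cosine similarity over sparse dict vectors.
    common = set(u) & set(v)
    num = sum(u[k] * v[k] for k in common)
    den = math.sqrt(sum(x * x for x in u.values())) * \
          math.sqrt(sum(x * x for x in v.values()))
    return num / den if den else 0.0

def user_profile(star_ratings, aspect_opinions, alpha=0.5):
    """Blend a user's star ratings with aspect-opinion features mined
    from their reviews into one sparse preference vector; alpha scales
    the influence of the mined features."""
    profile = {("item", i): r for i, r in star_ratings.items()}
    for aspect, polarity in aspect_opinions.items():
        profile[("aspect", aspect)] = alpha * polarity
    return profile
```

Two users who rated disjoint movies but praised the same aspects (say, acting) would then still show non-zero similarity, which is the kind of effect the evaluation above measures.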
This paper introduces LRTwiki, an improved variant of the Likelihood Ratio Test (LRT). The central idea of LRTwiki is to employ a comprehensive domain-specific knowledge source as additional “on-topic” data sets, and to modify the calculation of the LRT algorithm to take advantage of this new information. The knowledge source is created on the basis of Wikipedia articles. We evaluate on two related tasks, product feature extraction and keyphrase extraction, and find that LRTwiki yields a significant improvement over the original LRT on both tasks.
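The Wikipedia-based modification itself is not spelled out in this abstract, but the baseline it improves on, the likelihood ratio test for term salience, can be sketched. The version below is the standard Dunning-style binomial log-likelihood ratio comparing a term's frequency in a foreground ("on-topic") corpus against a background corpus; the function names and the exact formulation are illustrative assumptions, not the paper's code:

```python
import math

def _log_l(k, n, p):
    # Binomial log-likelihood of k successes in n trials; clamp p
    # away from 0 and 1 to guard against log(0).
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    return k * math.log(p) + (n - k) * math.log(1 - p)

def llr(k1, n1, k2, n2):
    """Log-likelihood ratio for a term seen k1 times in n1 foreground
    tokens and k2 times in n2 background tokens. High scores mean the
    term is strongly over-represented in the foreground."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)          # pooled estimate (null model)
    return 2 * (_log_l(k1, n1, p1) + _log_l(k2, n2, p2)
                - _log_l(k1, n1, p) - _log_l(k2, n2, p))
```

Ranking candidate terms by this score and keeping the top-scoring ones is a typical way to extract product features or keyphrases; LRTwiki's contribution, per the abstract, is feeding Wikipedia-derived on-topic data into this calculation.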
Using concurrent electroencephalogram and eye movement measures to track natural reading, this study shows that N400 effects reflecting predictability are dissociable from those owing to spreading activation. In comparing predicted sentence endings with related and unrelated unpredicted endings in antonym constructions (‘the opposite of black is white/yellow/nice’), fixation-related potentials at the critical word revealed a predictability-based N400 effect (unpredicted vs. predicted words). By contrast, event-related potentials time locked to the last fixation before the critical word showed an N400 only for the unrelated unpredicted condition (nice). This effect is attributed to a parafoveal mismatch between the critical word and preactivated lexical features (i.e. features of the predicted word and its associates). In addition to providing the first demonstration of a parafoveally induced N400 effect, our results support the view that the N400 is best viewed as a component family.
The paper discusses particular logical consistency conditions satisfied by German proposition-embedding predicates, which determine the question type (external and internal whether-form, as well as exhaustive and non-exhaustive wh-form), the correlate type (es- or da-correlate), and the impact of the correlate on the respective consistency condition. It turns out that some consistency conditions also determine the embedding of verb-second clauses and subject control.
We present MaJo, a toolkit for supervised Word Sense Disambiguation (WSD) with an interface for Active Learning. Our toolkit combines a flexible, easily extended plugin architecture with a graphical user interface that guides the user through the learning process. MaJo integrates off-the-shelf NLP tools such as POS taggers and treebank-trained statistical parsers, as well as linguistic resources like WordNet and GermaNet. It enables the user to systematically explore the benefit gained from different feature types for WSD. In addition, MaJo provides an Active Learning environment in which the system presents carefully selected instances to a human oracle. The toolkit supports manual annotation of the selected instances and re-trains the system on the extended data set. MaJo also provides the means to evaluate the performance of the system against a gold standard. We illustrate the usefulness of our system by learning the frames (word senses) for three verbs from the SALSA corpus, a version of the TiGer treebank with an additional layer of frame-semantic annotation. We show how MaJo can be used to tune the feature set for specific target words and thus improve performance for these targets. We also show that syntactic features, when carefully tuned to the target word, can lead to a substantial increase in performance.
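The query-annotate-retrain cycle the abstract describes is the classic pool-based active-learning loop. The abstract does not say how MaJo selects its instances, so the sketch below assumes uncertainty sampling (query the instance whose predicted class probability is closest to 0.5); all names here are hypothetical, and `fit` and `oracle` are stand-ins for the trained classifier and the human annotator:

```python
def select_query(pool, predict_proba):
    # Uncertainty sampling: the instance whose positive-class
    # probability is closest to 0.5 is the least certain one.
    return min(pool, key=lambda x: abs(predict_proba(x) - 0.5))

def active_learning_round(pool, labelled, fit, oracle):
    """One round of pool-based active learning: train on the labelled
    data, query the most uncertain unlabelled instance, ask the oracle
    for its label, and move it into the labelled set."""
    predict_proba = fit(labelled)        # fit returns a probability function
    query = select_query(pool, predict_proba)
    pool.remove(query)
    labelled.append((query, oracle(query)))
    return query
```

Repeating `active_learning_round` until an annotation budget is exhausted, and evaluating against a gold standard after each round, mirrors the workflow the toolkit supports.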
Though polarity classification has been extensively explored at the document level, there has been little work investigating feature design at the sentence level. Owing to the small number of words within a sentence, polarity classification at the sentence level differs substantially from document-level classification: the resulting bag-of-words feature vectors tend to be very sparse, which lowers classification accuracy.
In this paper, we show that performance can be improved by adding features specifically designed for sentence-level polarity classification. We consider both explicit polarity information and various linguistic features. A great proportion of the improvement that can be obtained by using polarity information can also be achieved by using a set of simple domain-independent linguistic features.
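The feature combination described above, bag-of-words plus explicit polarity information plus simple linguistic features, can be sketched as a feature extractor. The tiny lexicons and the negation flag below are illustrative placeholders, not the paper's actual feature set:

```python
NEG_WORDS = {"not", "no", "never", "n't"}    # toy negation cues (assumption)
POS_LEX = {"good", "great", "excellent"}     # toy polarity lexicon (assumption)
NEG_LEX = {"bad", "poor", "terrible"}

def sentence_features(tokens):
    """Sparse bag-of-words counts plus a few dense sentence-level
    features: polarity-lexicon hit counts and a negation flag."""
    feats = {}
    for t in tokens:
        feats["bow=" + t] = feats.get("bow=" + t, 0) + 1
    feats["pos_lex"] = sum(t in POS_LEX for t in tokens)
    feats["neg_lex"] = sum(t in NEG_LEX for t in tokens)
    feats["negated"] = int(any(t in NEG_WORDS for t in tokens))
    return feats
```

The point of the dense features is that they fire even when the individual words are rare: two sentences with disjoint vocabularies can still share `pos_lex` and `negated` values, which counteracts the sparsity problem noted above.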
In opinion mining, there has been very little work investigating semi-supervised machine learning for document-level polarity classification. We show that semi-supervised learning performs significantly better than supervised learning when only few labelled instances are available. Semi-supervised polarity classifiers rely on a predictive feature set. (Semi-)manually built polarity lexicons are one option, but they are expensive to obtain and do not necessarily work in an unknown domain. We show that extracting frequently occurring adjectives and adverbs from an unlabelled set of in-domain documents is an inexpensive alternative that works equally well across different domains.
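The inexpensive alternative mentioned above, collecting frequent adjectives and adverbs from unlabelled in-domain text, amounts to a frequency filter over POS-tagged documents. A minimal sketch, assuming the documents arrive as lists of `(word, tag)` pairs with Penn-Treebank-style tags (`JJ*` for adjectives, `RB*` for adverbs) and a hypothetical `min_count` threshold:

```python
from collections import Counter

def frequent_modifiers(tagged_docs, min_count=2):
    """From POS-tagged in-domain documents (lists of (word, tag)
    pairs), keep the adjectives and adverbs that occur at least
    min_count times as candidate polarity features."""
    counts = Counter(
        w.lower()
        for doc in tagged_docs
        for w, tag in doc
        if tag.startswith(("JJ", "RB"))   # adjectives and adverbs
    )
    return {w for w, c in counts.items() if c >= min_count}
```

Because the word list is rebuilt from whatever unlabelled in-domain corpus is at hand, the feature set adapts to each new domain without any manual lexicon work, which is the property the abstract highlights.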
We compare the use of überhaupt and sowieso in Dutch and German. We use the world-wide web as the main resource and pursue a zigzag strategy, going back and forth between dictionaries, intuitions, and real usage data obtained through web searches. To our surprise, the results more or less confirm the decision of Dutch dictionaries to consider überhaupt and sowieso synonymous. In German, we find no synonymy, but only a large overlap of usage conditions in declarative sentences.