“Linguistic Landscapes” (LL) is a research method which has become increasingly popular in recent years. In this paper, we will first explain the method itself and discuss some of its fundamental assumptions. We will then recall the basic traits of multilingualism in the Baltic States, before presenting results from our project carried out together with a group of Master students of Philology in several medium-sized towns in the Baltic States, focussing on our home town of Rēzekne in the highly multilingual region of Latgale in Eastern Latvia. In the discussion of some of the results, we will introduce the concept of “Legal Hypercorrection” as a term for stricter-than-necessary compliance with language laws. The last part will report on the advantages of LL for teaching about multilingualism and for stimulating discussion of multilingualism among the general public.
Though polarity classification has been extensively explored at document level, there has been little work investigating feature design at sentence level. Due to the small number of words within a sentence, polarity classification at sentence level differs substantially from document-level classification in that the resulting bag-of-words feature vectors tend to be very sparse, which lowers classification accuracy.
In this paper, we show that performance can be improved by adding features specifically designed for sentence-level polarity classification. We consider both explicit polarity information and various linguistic features. A large proportion of the improvement that can be obtained by using polarity information can also be achieved with a set of simple, domain-independent linguistic features.
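The abstract does not list the concrete features; as a rough illustration of the general idea, the Python sketch below augments a binary bag-of-words representation with lexicon-based polarity counts and a simple negation flag. The tiny lexicon and feature names are invented for the example and are not the feature set used in the paper.

```python
# Minimal sketch: sentence-level features combining a binary bag-of-words with
# explicit polarity information (lexicon counts) and a simple linguistic cue
# (negation). Lexicon entries and feature names are illustrative assumptions.
from collections import Counter

POS_WORDS = {"good", "great", "excellent", "helpful"}
NEG_WORDS = {"bad", "poor", "terrible", "useless"}
NEGATORS = {"not", "never", "no", "n't"}

def sentence_features(sentence: str) -> dict:
    tokens = sentence.lower().split()
    feats = Counter({f"bow={t}": 1 for t in tokens})            # binary bag-of-words
    feats["pos_lex_count"] = sum(t in POS_WORDS for t in tokens)
    feats["neg_lex_count"] = sum(t in NEG_WORDS for t in tokens)
    feats["has_negation"] = int(any(t in NEGATORS for t in tokens))
    return dict(feats)

print(sentence_features("The plot is not good at all"))
```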
TePaCoC - A Testsuite for Testing Parser Performance on Complex German Grammatical Constructions
(2009)
TEI Feature Structures as a Representation Format for Multiple Annotation and Generic XML Documents
(2009)
Feature structures are mathematical entities (rooted labeled directed acyclic graphs) that can be represented as graph displays, attribute-value matrices or as XML adhering to the constraints of a specialized TEI tag set. We demonstrate that this latter ISO-standardized format can be used as an integrative storage and exchange format for sets of multiple annotation XML documents. This specific domain of application is rooted in the approach of multiple annotations, which marks a possible solution for XML-compliant markup in scenarios with conflicting annotation hierarchies. A more far-reaching proposal is its possible use as a meta-representation format for generic XML documents. For both scenarios our strategy concerning pertinent feature structure representations is grounded in the XDM (XQuery 1.0 and XPath 2.0 Data Model). The ubiquitous hierarchical and sequential relationships within XML documents are represented by specific features that take ordered list values. The mapping to the TEI feature structure format has been implemented in the form of an XSLT 2.0 stylesheet. It can be characterized as exploiting aspects of both the push and pull processing paradigms as appropriate. An indexing mechanism is provided with regard to the multiple annotation documents scenario. Hence, implicit links concerning identical primary data are made explicit in the result format. In comparison to alternative representations, the TEI-based format does well in many respects, since it is both integrative and well-formed XML. However, the result documents tend to grow very large depending on the size of the input documents and their respective markup structure. This may also be considered a downside regarding the proposed use for generic XML documents. On the positive side, it may be possible to connect to methods and applications that have been developed for feature structure representations in the fields of (computational) linguistics and knowledge representation.
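The mapping itself is an XSLT 2.0 stylesheet and is not reproduced here; the following Python sketch only illustrates the underlying idea under simplifying assumptions: each element becomes a TEI &lt;fs&gt;, its attributes and text become string-valued features, and its child elements become a single ordered-list-valued feature mirroring XDM document order. The helper function and the toy input are illustrative, not part of the published implementation.

```python
# Sketch of the representation idea: generic XML mapped to TEI feature
# structures, with hierarchy and sequence encoded as an ordered-list value.
import xml.etree.ElementTree as ET

def element_to_fs(elem: ET.Element) -> ET.Element:
    # Each XML element becomes a feature structure typed by its tag name.
    fs = ET.Element("fs", {"type": elem.tag})
    # Attributes become string-valued features.
    for name, value in elem.attrib.items():
        f = ET.SubElement(fs, "f", {"name": name})
        ET.SubElement(f, "string").text = value
    # Text content, if any, becomes a string-valued feature as well.
    if elem.text and elem.text.strip():
        f = ET.SubElement(fs, "f", {"name": "text"})
        ET.SubElement(f, "string").text = elem.text.strip()
    # Child elements become one ordered-list-valued feature (<vColl org="list">),
    # preserving the sequential document order of the XDM.
    children = list(elem)
    if children:
        f = ET.SubElement(fs, "f", {"name": "children"})
        coll = ET.SubElement(f, "vColl", {"org": "list"})
        for child in children:
            coll.append(element_to_fs(child))
    return fs

doc = ET.fromstring('<s id="s1"><w pos="DET">the</w><w pos="NN">cat</w></s>')
print(ET.tostring(element_to_fs(doc), encoding="unicode"))
```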
The paper discusses particular logical consistency conditions satisfied by German proposition-embedding predicates which determine the question type (external and internal whether-form as well as exhaustive and non-exhaustive wh-form), the correlate type (es- or da-correlate), as well as the impact of the correlate on the respective consistency condition. It will turn out that some consistency conditions also determine the embedding of verb-second clauses and subject control.
Generative lexicalized parsing models, which are the mainstay for probabilistic parsing of English, do not perform as well when applied to languages with different language-specific properties such as free(r) word order or rich morphology. For German and other non-English languages, linguistically motivated complex treebank transformations have been shown to improve performance within the framework of PCFG parsing, while generative lexicalized models do not seem to be as easily adaptable to these languages. In this paper, we show a practical way to use grammatical functions as first-class citizens in a discriminative model, which makes it possible to extend annotated treebank grammars with rich feature sets without running into sparse-data problems. We demonstrate the flexibility of the approach by integrating unsupervised PP attachment and POS-based word clusters into the parser.
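As a rough illustration of what treating grammatical functions as first-class citizens can mean at the feature level, the sketch below pairs each syntactic category with its grammatical function and backs lexical features off to word-cluster identifiers. The node representation, cluster table, and feature templates are assumptions made for this example, not the discriminative parser described in the paper.

```python
# Illustrative feature extraction: grammatical functions are part of every
# feature, and words back off to (hypothetical) unsupervised cluster ids.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    cat: str                                   # syntactic category, e.g. "NP"
    func: str                                  # grammatical function, e.g. "SB" (subject)
    word: Optional[str] = None                 # terminal word, if any
    children: List["Node"] = field(default_factory=list)

# Hypothetical word-cluster table (e.g. from Brown-style clustering).
CLUSTERS = {"Hund": "c17", "bellt": "c42"}

def local_features(node: Node) -> List[str]:
    # Pair category and grammatical function so the function is never dropped.
    feats = [f"cat+func={node.cat}/{node.func}"]
    if node.word is not None:
        feats.append(f"cluster+func={CLUSTERS.get(node.word, 'UNK')}/{node.func}")
    for child in node.children:
        feats.append(f"rule={node.cat}/{node.func}->{child.cat}/{child.func}")
        feats += local_features(child)
    return feats

tree = Node("S", "--", children=[
    Node("NP", "SB", children=[Node("NN", "HD", word="Hund")]),
    Node("VP", "HD", children=[Node("VVFIN", "HD", word="bellt")]),
])
print(local_features(tree))
```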
In opinion mining, there has been very little work investigating semi-supervised machine learning for document-level polarity classification. We show that semi-supervised learning performs significantly better than supervised learning when only few labelled data are available. Semi-supervised polarity classifiers rely on a predictive feature set. (Semi-)manually built polarity lexicons are one option, but they are expensive to obtain and do not necessarily work in an unknown domain. We show that extracting frequently occurring adjectives and adverbs from an unlabelled set of in-domain documents is an inexpensive alternative which works equally well across different domains.
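As a hedged sketch of this inexpensive alternative, the Python fragment below collects frequent adjectives and adverbs from unlabelled, POS-tagged in-domain documents and uses them as the feature vocabulary of a simple classifier; a semi-supervised procedure such as self-training would then be run on top of this feature set. The toy corpus, frequency threshold, and classifier choice are assumptions for illustration, not the paper's experimental setup.

```python
# Sketch: build a domain-adapted polarity feature set from unlabelled documents
# by keeping frequently occurring adjectives (JJ*) and adverbs (RB*).
from collections import Counter
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Unlabelled in-domain documents, assumed already POS-tagged as (token, tag) pairs.
unlabelled_tagged = [
    [("great", "JJ"), ("movie", "NN"), ("really", "RB"), ("enjoyable", "JJ")],
    [("boring", "JJ"), ("plot", "NN"), ("poorly", "RB"), ("acted", "VBN")],
]

min_count = 1                                   # illustrative frequency threshold
counts = Counter(tok for doc in unlabelled_tagged
                 for tok, tag in doc if tag.startswith(("JJ", "RB")))
vocab = sorted(w for w, c in counts.items() if c >= min_count)

# The extracted words serve as the feature vocabulary for the few labelled documents.
vectorizer = CountVectorizer(vocabulary=vocab)
X = vectorizer.fit_transform(["great and really enjoyable", "boring and poorly acted"])
clf = LogisticRegression().fit(X, [1, 0])       # 1 = positive, 0 = negative
print(clf.predict(vectorizer.transform(["really great plot"])))
```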