This paper describes general requirements for evaluating and documenting NLP tools, with a focus on morphological analysers and the design of a Gold Standard. It is argued that any evaluation must be measurable and that its documentation must be made accessible to every user of the tool. The documentation must enable users to compare different tools offering the same service; hence the descriptions must contain measurable values. A Gold Standard is a vital part of any measurable evaluation process; we therefore report on the corpus-based design of a Gold Standard, its creation, and the problems that arise. Our project concentrates on SMOR, a morphological analyser for German that is to be offered as a web service. We not only utilize this analyser for designing the Gold Standard but also evaluate the tool itself at the same time. Note that the project is ongoing; we therefore cannot present final results.
Corpus-based identification and disambiguation of reading indicators for German nominalizations
(2010)
Corpus data is often structurally and lexically ambiguous, so corpus extraction methodologies must be made aware of ambiguities. Given an extraction task, all relevant ambiguities must therefore be identified. To resolve them, the contextual data responsible for one reading or another must be considered. In our present work, German -ung-nominalizations and their sortal readings are under examination. A number of these nominalizations may be read as an event or as a result, depending on the semantic group they belong to. Here, we concentrate on nominalizations of verbs of saying (henceforth "verba dicendi"), identify their context partners, and examine their influence on the sortal reading of the nominalizations in question. We present a tool that calculates the sortal reading of such nominalizations and may thus improve not only corpus extraction but also, e.g., machine translation. Lastly, we describe successful attempts to identify the correct sortal reading, conclusions, and future work.
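The context-partner strategy the abstract describes can be illustrated with a toy rule that lets surrounding cues vote for one reading. The cue lists and the function below are invented for illustration; they are not the authors' actual tool or lexicon:

```python
# Hypothetical sketch: assigning a sortal reading (event vs. result) to a
# German -ung-nominalization based on simple context-partner cues.
# Both cue sets are invented examples, not the authors' resources.

EVENT_CUES = {"während", "dauerte", "fand statt"}               # assumed event indicators
RESULT_CUES = {"enthält", "liegt vor", "wurde veröffentlicht"}  # assumed result indicators

def sortal_reading(nominalization: str, context: str) -> str:
    """Return 'event', 'result', or 'ambiguous' for a nominalization in context."""
    ctx = context.lower()
    is_event = any(cue in ctx for cue in EVENT_CUES)
    is_result = any(cue in ctx for cue in RESULT_CUES)
    if is_event and not is_result:
        return "event"
    if is_result and not is_event:
        return "result"
    return "ambiguous"

print(sortal_reading("Mitteilung", "Die Mitteilung enthält wichtige Details."))
```

A real disambiguator would of course draw its cues from corpus evidence and weight them, rather than using a hand-picked keyword list.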
So far, comprehensive grammar descriptions of Northern Sotho have only been available in the form of prescriptive books aimed at teaching the language. This paper describes parts of the first morpho-syntactic description of Northern Sotho from a computational perspective (Faaß, 2010a). Such a description is necessary for implementing rule-based, operational grammars; it is also essential for annotating the training data to be utilised by statistical parsers. The work partially presented here may hence provide a resource for computational processing of the language, allowing linguistic representations beyond tagging to be produced, be it chunking or parsing. The paper begins by describing significant aspects of Northern Sotho verbal morpho-syntax (section 2). It is shown that the topology of the verb can be depicted as a slot system, which may form the basis for computational processing (section 3). Note that the implementation of the described rules (section 4) and the coverage tests are ongoing processes upon which we will report in more detail at a later stage.
This paper describes the application of probabilistic part-of-speech taggers to the Dzongkha language. A tag set containing 66 tags is designed, based on the Penn Treebank. A training corpus of 40,247 tokens is utilized to train the model. Using the lexicon extracted from the training corpus and a lexicon from the available word list, we applied two statistical taggers for comparison. The best result achieved was 93.1% accuracy in a 10-fold cross-validation on the training set. The winning tagger was thereafter applied to annotate a 570,247-token corpus.
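The 10-fold cross-validation setup mentioned above can be sketched as follows. The unigram "most frequent tag" baseline and the toy corpus are illustrative assumptions, not the probabilistic taggers or the Dzongkha data actually used:

```python
import random

# Illustrative sketch of 10-fold cross-validation for a trivial unigram
# tagger baseline. The three-word toy corpus and its tags are invented.

def train_unigram(tagged):
    """Map each word to its most frequent tag in the training data."""
    counts = {}
    for word, tag in tagged:
        counts.setdefault(word, {}).setdefault(tag, 0)
        counts[word][tag] += 1
    return {w: max(t, key=t.get) for w, t in counts.items()}

def accuracy(model, tagged, default="NOUN"):
    """Fraction of tokens tagged correctly; unknown words get a default tag."""
    hits = sum(model.get(w, default) == t for w, t in tagged)
    return hits / len(tagged)

corpus = [("the", "DET"), ("dog", "NOUN"), ("barks", "VERB")] * 40
random.seed(0)
random.shuffle(corpus)

k = 10
fold = len(corpus) // k
scores = []
for i in range(k):
    test = corpus[i * fold:(i + 1) * fold]
    train = corpus[:i * fold] + corpus[(i + 1) * fold:]
    scores.append(accuracy(train_unigram(train), test))
print(f"mean {k}-fold accuracy: {sum(scores) / k:.3f}")
```

On realistic data the folds would be drawn from distinct sentences, and the mean accuracy would be reported alongside its variance across folds.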
Research in grammatical theory, as the most recent IDS annual conference once again vividly demonstrated, must be conceived as a tenacious struggle between two fundamentally antagonistic principles: the rich abundance of linguistic occurrences, to whose thorough exploration a considerable part of current efforts in language theory and language technology is devoted, must always be countered by the attempt to contain this overflowing variance through abstraction and generalization, without thereby excessively and inadmissibly leveling the empirical findings.
Even grammars that try to be as comprehensible as possible can hardly avoid using technical terms unknown to novices. To overcome this inconvenience, the grammatical information system grammis of the Institut für Deutsche Sprache incorporates a glossary specialized in the terms used within the system. This glossary, named Grammatische Grundbegriffe (elementary terms of grammar) and tied by hyperlinks to technical terms in the 'core grammar' of grammis, offers short and simple explanations, mainly by means of exemplification. The idea is to give users a provisional understanding that lets them follow the main topics they are interested in. Explicitly, the glossary is not a stand-alone dictionary of grammatical terms and should not be regarded as one.
This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis.
We present various computational approaches to modeling negation in sentiment analysis. In particular, we focus on aspects such as the level of representation used for sentiment analysis, negation word detection, and the scope of negation. We also discuss the limits and challenges of negation modeling for this task.
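The kind of negation modeling the survey compares can be sketched minimally as a bag-of-words polarity score with a fixed-window negation scope. The lexicon, negator list, and window size below are illustrative assumptions, not any particular surveyed system:

```python
# Minimal sketch: lexicon-based polarity scoring where a negator flips the
# polarity of the next few tokens. All word lists here are toy examples.

LEXICON = {"good": 1, "great": 1, "bad": -1, "awful": -1}
NEGATORS = {"not", "never", "no"}
SCOPE = 3  # assumption: negation affects the next 3 tokens

def polarity(tokens):
    """Sum lexicon weights, flipping the sign inside a negation window."""
    score, flip_left = 0, 0
    for tok in tokens:
        if tok in NEGATORS:
            flip_left = SCOPE
            continue
        weight = LEXICON.get(tok, 0)
        score += -weight if flip_left > 0 else weight
        if flip_left > 0:
            flip_left -= 1
    return score

print(polarity("this is not a good movie".split()))  # "good" is flipped
```

Fixed windows are only one of the scope models discussed in the literature; syntactic scope detection replaces the token window with the negator's subtree in a parse.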
Bootstrapping Supervised Machine-learning Polarity Classifiers with Rule-based Classification
(2010)
In this paper, we explore the effectiveness of bootstrapping supervised machine-learning polarity classifiers with the output of domain-independent rule-based classifiers. The benefit of this method is that no labeled training data are required. Still, the method captures in-domain knowledge by training the supervised classifier on in-domain features, such as bags of words.
We investigate how important the quality of the rule-based classifier is and which features are useful for the supervised classifier. The former addresses the extent to which constructions relevant for polarity classification, such as word-sense disambiguation, negation modeling, or intensification, matter for this self-training approach. We not only compare how this method relates to conventional semi-supervised learning but also examine how it performs in more difficult settings in which classes are not balanced and mixed reviews are included in the dataset.
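The bootstrapping idea can be sketched as follows: a domain-independent rule-based classifier pseudo-labels unlabeled in-domain texts, and a bag-of-words classifier is then trained on those pseudo-labels. The rule lexicon and the frequency-based classifier below are simplifications for illustration, not the paper's actual setup:

```python
from collections import Counter

# Hedged sketch of self-training with a rule-based teacher. The word lists,
# the toy "reviews", and the classifier are all invented for illustration.

POS_WORDS = {"excellent", "great", "good"}
NEG_WORDS = {"terrible", "bad", "boring"}

def rule_label(tokens):
    """Domain-independent rule-based classifier; abstains on ties."""
    score = sum(t in POS_WORDS for t in tokens) - sum(t in NEG_WORDS for t in tokens)
    return "pos" if score > 0 else "neg" if score < 0 else None

def train_bow(texts):
    """Train per-class word frequencies on rule-labeled (pseudo-labeled) data."""
    counts = {"pos": Counter(), "neg": Counter()}
    for text in texts:
        tokens = text.split()
        label = rule_label(tokens)
        if label:                      # only confidently labeled texts are used
            counts[label].update(tokens)
    return counts

def classify(counts, text):
    """Supervised bag-of-words classifier trained on the pseudo-labels."""
    tokens = text.split()
    def score(label):
        total = sum(counts[label].values()) or 1
        return sum(counts[label][t] / total for t in tokens)
    return "pos" if score("pos") >= score("neg") else "neg"

unlabeled = ["a great and excellent film", "boring and bad plot",
             "good acting overall", "terrible pacing"]
model = train_bow(unlabeled)
print(classify(model, "excellent film"))
```

The payoff, as the abstract notes, is that the trained classifier also picks up in-domain words that the rule lexicon never mentions (here, e.g., "film" and "plot" acquire class-specific weight).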
Opinion holder extraction is one of the important subtasks in sentiment analysis. The effective detection of an opinion holder depends on the consideration of various cues at various levels of representation, though these are hard to formulate explicitly as features. In this work, we propose to use convolution kernels for this task, which identify meaningful fragments of sequences or trees by themselves. We not only investigate how different levels of information can be effectively combined in different kernels but also examine how the scope of these kernels should be chosen. In general relation extraction, the two candidate entities thought to be involved in a relation are commonly chosen as the boundaries of sequences and trees. In opinion holder extraction, however, the definition of boundaries is less straightforward, since several expressions besides the candidate opinion holder may be eligible to serve as a boundary.
This contribution gives an overview of the development and the tasks of the Fachverband Deutsch als Fremdsprache (FaDaF) since its founding in 1989/90. It traces the association's lines of development: as the successor organization to the Arbeitskreis Deutsch als Fremdsprache beim DAAD (AkDaF), the FaDaF took over, continued, and further developed its predecessor's tasks.