We propose a new type of subword embedding designed to provide more information about unknown compounds, a major source of OOV words in German. We present an extrinsic evaluation in which we use the compound embeddings as input to a neural dependency parser and compare the results to those obtained with other types of embeddings. Our evaluation shows that adding compound embeddings yields a significant improvement of 2% LAS over using word embeddings when no POS information is available. When POS embeddings are added to the input, however, the effect levels out. This suggests that it is not the missing information about the semantics of unknown words that causes problems for parsing German, but the lack of morphological information for unknown words. To augment our evaluation, we also test the new embeddings in a language modelling task that requires both syntactic and semantic information.
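The abstract above does not spell out how the compound embeddings are built; a minimal sketch of the general idea, assuming an unknown compound is greedily split into known constituents whose vectors are averaged (the splitting strategy, toy lexicon, and averaging are all illustrative assumptions, not the paper's method):

```python
import numpy as np

# Toy embedding table for known German words (hypothetical 2-d vectors).
emb = {
    "haus": np.array([0.2, 0.5]),
    "tür": np.array([0.7, 0.1]),
}

def compound_embedding(word, emb, dim=2):
    """Back off to constituent embeddings for an unknown compound.

    Greedily splits the word left-to-right into the longest known
    parts and averages their vectors; returns a zero vector if no
    full split is found (illustrative sketch only).
    """
    word = word.lower()
    parts, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in emb:
                parts.append(word[i:j])
                i = j
                break
        else:
            return np.zeros(dim)  # unsplittable: treat as fully OOV
    return np.mean([emb[p] for p in parts], axis=0)

vec = compound_embedding("Haustür", emb)  # mean of "Haus" and "Tür" vectors
```

Such a backed-off vector can then be fed to the parser in place of a single OOV embedding for the whole compound.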
Complex linguistic phenomena, such as Clitic Climbing in Bosnian, Croatian and Serbian (BCS), are often described intuitively, only from the perspective of the main tendency. In this paper, we argue that web corpora currently offer the best source of empirical material for studying Clitic Climbing in BCS. They thus allow the most accurate description of this phenomenon, as less frequent constructions can be tracked only in big, well-annotated data sources. We compare the properties of web corpora for BCS with traditional sources and give examples of studies of Clitic Climbing based on web corpora. Furthermore, we discuss problems related to web corpora and suggest some improvements for the future.
The 18th century was marked by great scientific upheavals, including in the anatomy and physiology of the human body. The lively discussion that grew out of these also extended to the still very young field of (mechanical) speech synthesis and its foundations. The speech synthesis concept of Wolfgang von Kempelen (1734–1804) is a particularly striking example of how a fundamentally sound scientific insight may fail to be put into practice, possibly owing to technological limitations. In principle, Kempelen's findings on human anatomy and physiology, and thus also on speech production, were largely accurate; their practical implementation, however, looks rather curious from today's perspective. Kempelen's vocal-tract concept is contrasted, by way of example, with the only slightly earlier speech synthesis prototype of Christian Gottlieb Kratzenstein (1723–1795). Many of Kratzenstein's "findings" must today be regarded as incorrect: his model of vowel synthesis shows striking parallels to Kempelen's on the one hand, yet rests on assumptions about physiology that are in many respects erroneous.
Universal Dependencies (UD) annotations, despite their usefulness for cross-lingual tasks and semantic applications, are not optimised for statistical parsing. In this paper, we ask what exactly causes the decrease in parsing accuracy when training a parser on UD-style annotations, and whether the effect is similarly strong for all languages. We conduct a series of experiments in which we systematically modify individual annotation decisions taken in the UD scheme and show that this results in increased accuracy for most, but not all, languages. We show that the encoding in the UD scheme, in particular the decision to make content words heads, increases dependency length for nearly all treebanks and increases arc direction entropy for many languages, and we evaluate the effect this has on parsing accuracy.
We present a major step towards the creation of the first high-coverage lexicon of polarity shifters. In this work, we bootstrap a lexicon of verbs by exploiting various linguistic features. Polarity shifters, such as ‘abandon’, are similar to negations (e.g. ‘not’) in that they move the polarity of a phrase towards its inverse, as in ‘abandon all hope’. While there exist lists of negation words, creating comprehensive lists of polarity shifters is far more challenging due to their sheer number. On a sample of manually annotated verbs we examine a variety of linguistic features for this task. Then we build a supervised classifier to increase coverage. We show that this approach drastically reduces the annotation effort while ensuring a high-precision lexicon. We also show that our acquired knowledge of verbal polarity shifters improves phrase-level sentiment analysis.
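The role a shifter lexicon plays in phrase-level sentiment analysis can be sketched minimally as follows, assuming tiny hand-made lexicons and a naive additive scoring rule (both are illustrative assumptions; the paper's bootstrapped lexicon and supervised classifier are far more elaborate):

```python
# Hypothetical mini-lexicons for illustration only.
SHIFTERS = {"abandon", "lose"}          # verbal polarity shifters
POLARITY = {"hope": 1, "fear": -1}      # word-level prior polarities

def phrase_polarity(tokens):
    """Naive phrase-level polarity with shifter handling.

    Sums word polarities; once a verbal shifter is seen, the polarity
    of the remaining phrase is inverted, so 'abandon all hope' flips
    the positive 'hope' to negative.
    """
    score, invert = 0, 1
    for tok in tokens:
        t = tok.lower()
        if t in SHIFTERS:
            invert = -1
            continue
        score += invert * POLARITY.get(t, 0)
    return score

phrase_polarity(["abandon", "all", "hope"])  # 'abandon' inverts +1 'hope'
```

Without the shifter entry for 'abandon', the phrase would be scored positive, which is exactly the error a high-coverage shifter lexicon prevents.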