Refine
Document Type
- Conference Proceeding (79)
- Article (17)
- Part of a Book (7)
- Book (5)
- Image (1)
- Part of Periodical (1)
Language
- English (96)
- German (13)
- Multiple languages (1)
Has Fulltext
- yes (110)
Keywords
- Computerlinguistik (110)
Publication state
- Veröffentlichungsversion [published version] (80)
- Zweitveröffentlichung [secondary publication] (24)
- Postprint (11)
- (Verlags)-Lektorat [publisher copy-editing] (1)
Review state
- Peer-Review (110)
Publisher
- Association for Computational Linguistics (19)
- European Language Resources Association (10)
- Gesellschaft für Sprachtechnologie und Computerlinguistik (5)
- Zenodo (5)
- CLARIN (4)
- LiU Electronic Press (4)
- European Language Resources Association (ELRA) (3)
- Incoma Ltd. (3)
- Linköping University Electronic Press (3)
- Springer (3)
In this paper, we investigate the practical applicability of Co-Training for the task of building a classifier for reference resolution. We are concerned with the question of whether Co-Training can significantly reduce the amount of manual labeling work while still producing a classifier with acceptable performance.
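A minimal sketch of a Co-Training loop in the Blum-and-Mitchell style the abstract alludes to: two classifiers trained on disjoint feature views pseudo-label the unlabeled examples they are most confident about, which then join the shared labeled pool. All names, the choice of GaussianNB, and the pool-growth parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def co_train(X_view_a, X_view_b, y, labeled_idx, unlabeled_idx,
             rounds=10, grow=5):
    """Minimal Co-Training over two feature views.

    y must hold real labels at labeled_idx positions; entries at
    unlabeled_idx are placeholders that get overwritten by pseudo-labels.
    """
    labeled, unlabeled = list(labeled_idx), list(unlabeled_idx)
    labels = np.asarray(y).copy()
    clf_a, clf_b = GaussianNB(), GaussianNB()
    for _ in range(rounds):
        clf_a.fit(X_view_a[labeled], labels[labeled])
        clf_b.fit(X_view_b[labeled], labels[labeled])
        for clf, X in ((clf_a, X_view_a), (clf_b, X_view_b)):
            if not unlabeled:
                return clf_a, clf_b
            proba = clf.predict_proba(X[unlabeled])
            # pseudo-label the examples this view is most confident about
            top = np.argsort(-proba.max(axis=1))[:grow]
            for i in top:
                labels[unlabeled[i]] = clf.classes_[proba[i].argmax()]
            labeled.extend(unlabeled[i] for i in top)
            unlabeled = [u for j, u in enumerate(unlabeled)
                         if j not in set(top)]
    return clf_a, clf_b
```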
The paper investigates the evolution of document grammars from a linguistic point of view. Document grammars have been developed over the past decades in order to formalize knowledge about the structure of textual information. A well-known instance of a document grammar is the »Document Type Definition« (DTD), part of the Extensible Markup Language (XML). DTDs make it possible to define so-called tree grammars that constrain the application of tag-sets when annotating a document. In an XML-based document workflow, DTDs play a crucial role in validating and transforming large amounts of text into standardized data formats. An interesting point in the development of XML DTDs is that restricting their formal expressiveness paved the way to a better understanding of the formal properties of document grammars and, more recently, to the development of a more powerful successor, XML Schema. In this sense, the simplicity of the original approach, which resulted from the necessary restriction of previous approaches, yielded new complexity on formally well-understood grounds.
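To illustrate the tree-grammar constraints a DTD expresses, here is a small sketch using lxml to validate documents against a toy DTD; the element names (doc, sec, title, p) are invented for the example.

```python
from io import StringIO
from lxml import etree

# A toy DTD: a <doc> is a sequence of <sec> elements, each of which must
# contain a <title> followed by one or more <p> paragraphs -- a tree
# grammar constraining how the tag-set may be applied during annotation.
dtd = etree.DTD(StringIO("""
<!ELEMENT doc   (sec+)>
<!ELEMENT sec   (title, p+)>
<!ELEMENT title (#PCDATA)>
<!ELEMENT p     (#PCDATA)>
"""))

valid   = etree.fromstring("<doc><sec><title>A</title><p>text</p></sec></doc>")
invalid = etree.fromstring("<doc><sec><p>no title first</p></sec></doc>")

print(dtd.validate(valid))    # True
print(dtd.validate(invalid))  # False: <sec> must start with <title>
```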
In order to determine priorities for improving timing in synthetic speech, this study looks at the role of segmental duration prediction and that of the phonological symbolic representation in the perceptual quality of a text-to-speech system. In perception experiments using German speech synthesis, two standard duration models (Klatt rules and CART) were tested. The input to these models consisted of a symbolic representation derived either from a database or from a text-to-speech system. The results of the perception experiments show that different duration models can only be distinguished when the symbolic representation is appropriate. Given the relative importance of the symbolic representation, post-lexical segmental rules were investigated, with the outcome that listeners differ in their preferences regarding the degree of segmental reduction. In conclusion, before fine-tuning duration prediction, it is important to derive an appropriate phonological symbolic representation in order to improve timing in synthetic speech.
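One of the two duration models mentioned, CART (Classification And Regression Trees), is essentially a regression tree over symbolic segment features. A minimal sketch with scikit-learn follows; the feature names, values, and toy durations are invented for illustration and are not the study's feature set.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeRegressor

# Toy training data: symbolic features per segment -> duration in ms.
segments = [
    {"phone": "a", "stress": 1, "position": "initial"},
    {"phone": "t", "stress": 0, "position": "final"},
    {"phone": "a", "stress": 0, "position": "medial"},
]
durations_ms = [95.0, 60.0, 70.0]

vec = DictVectorizer()                 # one-hot encodes the string features
X = vec.fit_transform(segments)
cart = DecisionTreeRegressor(max_depth=4).fit(X, durations_ms)

# Predict a duration for a new segment from its symbolic representation.
print(cart.predict(vec.transform([{"phone": "a", "stress": 1,
                                   "position": "final"}])))
```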
We present a lightweight tool for the annotation of linguistic data on multiple levels. It is based on reducing annotations to sets of markables that have attributes and stand in certain relations to each other. We describe the main features of the tool, emphasizing its simplicity, customizability, and versatility.
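A sketch of the simplified data model the abstract describes: markables carrying attributes and standing in typed relations. All class and field names are invented for illustration, not taken from the tool.

```python
from dataclasses import dataclass, field

@dataclass
class Markable:
    """A span of tokens on one annotation level, with free-form attributes."""
    level: str                    # e.g. "coreference", "pos"
    start: int                    # token offsets into the base text
    end: int
    attrs: dict = field(default_factory=dict)

@dataclass
class Relation:
    """A typed link between two markables (e.g. anaphor -> antecedent)."""
    kind: str
    source: Markable
    target: Markable

# Two markables on a coreference level, linked by an anaphoric relation.
ante = Markable("coreference", 0, 2, {"type": "NP"})
ana  = Markable("coreference", 7, 8, {"type": "pronoun"})
link = Relation("anaphoric", ana, ante)
```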
We present an implemented XML data model and a new, simplified query language for multi-level annotated corpora. Queries in the new language are automatically converted into the underlying, more complicated MMAXQL query language. It supports queries over sequential and hierarchical relations as well as associative (e.g. coreferential) ones. The simplified query language has been designed with non-expert users in mind.
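The pattern of converting a compact user-facing query into a verbose underlying one can be sketched as below. Both syntaxes here are invented for illustration; neither the simplified language nor MMAXQL actually looks like this.

```python
import re

def convert(simple_query: str) -> str:
    """Convert a toy 'simplified' query into a lower-level query string.

    Illustrative only: parse a compact user-facing form and emit the
    verbose underlying form, as the paper's automatic conversion does.
    """
    # e.g. "pronoun -> np": a pronoun markable coreferent with an NP
    m = re.fullmatch(r"(\w+)\s*->\s*(\w+)", simple_query.strip())
    if not m:
        raise ValueError(f"cannot parse: {simple_query!r}")
    ana, ante = m.groups()
    return (f"select $a, $b where $a.type = '{ana}' "
            f"and $b.type = '{ante}' and coref($a, $b)")

print(convert("pronoun -> np"))
```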
We present an implemented machine learning system for the automatic detection of non-referential 'it' in spoken dialog. The system builds on shallow features extracted from dialog transcripts. Our experiments indicate a level of performance that makes the system usable as a preprocessing filter for a coreference resolution system. We also report the results of an annotation study dealing with the classification of 'it' by naive subjects.
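A sketch of what shallow, transcript-level feature extraction for an 'it' token might look like; the feature set below is invented for illustration and is not the one used in the paper.

```python
def shallow_it_features(tokens, i):
    """Shallow features for a token 'it' at position i in a dialog turn."""
    def tok(k):
        j = i + k
        return tokens[j].lower() if 0 <= j < len(tokens) else "<pad>"
    return {
        "prev_word": tok(-1),
        "next_word": tok(1),
        "next_next_word": tok(2),
        "position_in_turn": i,
        # extrapositional pattern such as "it seems that ..."
        "that_follows": "that" in (tok(1), tok(2), tok(3)),
    }

print(shallow_it_features("it seems that we agree".split(), 0))
```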