Computerlinguistik
Document Type
- Article (15)
- Conference Proceeding (15)
- Part of a Book (3)
- Other (2)
- Book (1)
- Master's Thesis (1)
- Part of Periodical (1)
- Report (1)
Has Fulltext
- yes (39)
Is part of the Bibliography
- no (39)
Keywords
- Deutsch (39)
Publication state
- Published version (Veröffentlichungsversion) (15)
- Postprint (4)
- Secondary publication (Zweitveröffentlichung) (2)
- First publication (Erstveröffentlichung) (1)
- Preprint (1)
Review state
Publisher
- Institut für Deutsche Sprache (8)
- European Language Resources Association (ELRA) (2)
- Universität Hamburg (2)
- Association for Computational Linguistics (1)
- CSLI Publications (1)
- European Language Resources Association (1)
- Gesellschaft für Sprachtechnologie und Computerlinguistik (1)
- Institute of Cybernetics, Institute of the Estonian Language (1)
- International Committee on Computational Linguistics (1)
- Kluwer (1)
This paper describes general requirements for evaluating and documenting NLP tools, with a focus on morphological analysers and the design of a Gold Standard. It is argued that any evaluation must be measurable and that its documentation must be made accessible to every user of the tool. The documentation must enable the user to compare different tools offering the same service; hence the descriptions must contain measurable values. A Gold Standard is a vital part of any measurable evaluation process; therefore, the corpus-based design of a Gold Standard, its creation, and the problems that arise are reported here. Our project concentrates on SMOR, a morphological analyser for German that is to be offered as a web service. We not only use this analyser to design the Gold Standard but also evaluate the tool itself at the same time. Note that the project is ongoing; therefore, we cannot present final results.
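As a rough illustration of the kind of measurable evaluation this abstract calls for, the sketch below scores an analyser's output against a gold standard in terms of precision, recall, and F1. The words, analyses, and tag format are invented for illustration; they are not SMOR output or the project's actual Gold Standard.

```python
# Minimal sketch: scoring a morphological analyser against a gold standard.
# All data here is hypothetical, not taken from SMOR or the project corpus.

def evaluate(gold, system):
    """Compute precision, recall, and F1 over sets of (word, analysis) pairs."""
    gold_pairs = {(w, a) for w, analyses in gold.items() for a in analyses}
    sys_pairs = {(w, a) for w, analyses in system.items() for a in analyses}
    tp = len(gold_pairs & sys_pairs)                 # correct analyses produced
    precision = tp / len(sys_pairs) if sys_pairs else 0.0
    recall = tp / len(gold_pairs) if gold_pairs else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Invented example: one correct analysis, one wrong tense tag.
gold = {"Häuser": {"Haus<NN><Pl>"}, "ging": {"gehen<V><Past>"}}
system = {"Häuser": {"Haus<NN><Pl>"}, "ging": {"gehen<V><Pres>"}}
p, r, f = evaluate(gold, system)
```

Reporting such figures alongside the tool, as the abstract argues, is what lets users compare different analysers offering the same service.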
Mehrsprachigkeit in linguistischen Daten. Theoretische und praktische Aspekte ihrer Erfassung [Multilingualism in linguistic data: theoretical and practical aspects of its capture]
(2008)
In order to determine priorities for improving timing in synthetic speech, this study examines the role of segmental duration prediction and of the phonological symbolic representation in the perceptual quality of a text-to-speech system. In perception experiments using German speech synthesis, two standard duration models (Klatt rules and CART) were tested. The input to these models was a symbolic representation derived either from a database or from a text-to-speech system. The results of the perception experiments show that different duration models can only be distinguished when the symbolic representation is appropriate. Given the relative importance of the symbolic representation, post-lexical segmental rules were investigated, with the outcome that listeners differ in their preferences regarding the degree of segmental reduction. In conclusion, before fine-tuning duration prediction, it is important to derive an appropriate phonological symbolic representation in order to improve timing in synthetic speech.
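The Klatt-style rule approach mentioned above can be caricatured in a few lines: each segment has an inherent duration whose stretchable part is scaled by contextual rule factors but never falls below a segment-specific minimum. All segment inventories, durations, and factors below are invented placeholders, not the parameters used in the study.

```python
# Toy illustration of rule-based segment duration in the spirit of the
# Klatt model. All numbers are hypothetical, not the study's parameters.

INHERENT_MS = {"a": 120, "t": 70, "n": 60}   # invented inherent durations (ms)
MIN_MS = {"a": 50, "t": 30, "n": 25}         # invented minimum durations (ms)

def klatt_duration(segment, factors):
    """Scale the stretchable part of the inherent duration by each rule factor."""
    d_inh, d_min = INHERENT_MS[segment], MIN_MS[segment]
    scale = 1.0
    for f in factors:      # each contextual rule contributes one factor
        scale *= f
    return d_min + (d_inh - d_min) * scale

# e.g. phrase-final lengthening (1.4) combined with unstressed shortening (0.7)
dur = klatt_duration("a", [1.4, 0.7])
```

A CART model, by contrast, would learn such context effects from annotated data instead of hand-written factors; either way, as the abstract notes, the model's input symbolic representation dominates the perceived quality.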
Various syntactic base structures have been proposed for coordinate constructions. All of these approaches share the shortcoming that they cannot plausibly explain the incremental processing of such constructions, although there is evidence that coordination is by no means a genuinely structural phenomenon, but rather one that emerges from the principles of incremental processing. The processing model sketched here is therefore based on the assumption that, in the case of coordination, syntactic structures are used multiple times and must be processed with respect to different so-called projections. This assumption makes it possible to trace the variety of deletion and reduction phenomena occurring in coordination back to the realisation of coordinate structures with respect to their different projections.
E-VALBU: Advanced SQL/XML processing of dictionary data using an object-relational XML database
(2008)
Contemporary practical lexicography uses a wide range of advanced technological aids, most prominently database systems for the administration of dictionary content. Since XML has become a de facto standard for the coding of lexicographic articles, integrated markup functionality – such as query, update, or transformation of instances – is of particular importance. Even the multi-channel distribution of dictionary data benefits from powerful XML database services. Exemplified by E-VALBU, the most comprehensive electronic dictionary on German verb valency, we outline an integrated approach for advanced XML storing and processing within an object-relational database, and for a public retrieval frontend using Web Services and AJAX technology.
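To give a flavour of the integrated markup functionality this abstract refers to, the sketch below queries an XML-coded dictionary article. The element names and content are invented (not the actual E-VALBU markup), and in the described system such retrieval would run as SQL/XML inside the object-relational database rather than in client code.

```python
# Minimal sketch of querying an XML-coded dictionary article.
# The markup below is hypothetical, not the actual E-VALBU schema.
import xml.etree.ElementTree as ET

article_xml = """
<article lemma="geben">
  <reading id="1"><valency>NP-nom NP-dat NP-acc</valency></reading>
  <reading id="2"><valency>NP-nom NP-acc</valency></reading>
</article>
"""

root = ET.fromstring(article_xml)
# Retrieve every valency frame of the article, analogous to an XPath-based
# XMLQuery call inside the database.
frames = [v.text for v in root.findall("./reading/valency")]
```

The same XPath expression could be embedded in an SQL/XML query, which is what makes a single XML-coded article usable for querying, updating, and multi-channel publication alike.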
The naturalness of synthetic speech depends strongly on the prediction of appropriate prosody. For the present study the original annotation of the German speech database “Kiel Corpus of Read Speech” was extended automatically with syntactic features, word frequency, and syllable boundaries. Several classification and regression trees for predicting symbolic prosody features, postlexical phonological processes, duration, and F0 were trained on this database. The perceptual evaluation showed that the overall perceptual quality of the German text-to-speech system MARY can be significantly improved by training all models that contribute to prosody prediction on the same database. Furthermore, it showed that the error introduced by symbolic prosody prediction perceptually equals the error produced by a direct method that does not exploit any symbolic prosody features.
The goal of the MULI (MUltiLingual Information structure) project is to empirically analyse information structure in German and English newspaper texts. In contrast to other projects in which information structure is annotated and investigated (e.g. in the Prague Dependency Treebank, which mirrors the basic information about the topic-focus articulation of the sentence), we do not annotate theory-biased categories like topic-focus or theme-rheme. Trying to be as theory-independent as possible, we annotate those features which are relevant to information structure and on the basis of which typical patterns, co-occurrences or correlations can be determined. We distinguish between three annotation levels: syntax, discourse and prosody. The data is based on the TIGER Corpus for German and the Penn Treebank for English, since the existing information on part-of-speech and syntactic structure can be re-used for our purposes. The actual annotation of an English example sequence illustrates our choice of categories on each level. Their combination offers the possibility to investigate how information structure is realised and can be interpreted.
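As a toy illustration of how typical patterns across the three annotation levels might be detected, the sketch below counts how a givenness feature co-occurs with pitch accents. The tokens and feature values are hypothetical, not MULI data, and the feature names are invented for illustration.

```python
# Toy sketch: counting co-occurrences of features across annotation levels
# (syntax, discourse, prosody). All tokens and values here are invented.
from collections import Counter

tokens = [
    {"function": "SBJ", "givenness": "given", "accent": None},
    {"function": "SBJ", "givenness": "new",   "accent": "H*"},
    {"function": "OBJ", "givenness": "new",   "accent": "H*"},
]

# Tally how often discourse-level givenness co-occurs with a prosodic accent.
pairs = Counter((t["givenness"], t["accent"] is not None) for t in tokens)
```

Precisely such co-occurrence tables, computed over theory-neutral features, are what allow correlations to be observed without committing to topic-focus or theme-rheme categories in advance.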