The public acceptance and impact of research in the natural and technical sciences depends fundamentally on whether its goals and findings can be communicated to the public. Yet the content of current research projects is often difficult for a lay audience to access and understand. With the aim of improving the societal discussion of research in the natural and technical sciences, the project PopSci – Understanding Science empirically examines and evaluates an important sector of popular-science discourse in Germany. To this end, we identify the linguistic features of German popular-science texts using corpus-based methods and examine their effect on lay readers' cognitive processing of these texts. For this we employ pre- and post-tests of knowledge. In addition, we record readers' eye movements while they read popular-science texts. From this combination of methods, we aim to derive first recommendations for improving the linguistic style and knowledge representation of popular-science texts.
This article reports on the ongoing CoRoLa project, which aims at creating a reference corpus of contemporary Romanian (from 1945 onwards), open for free online use by researchers in linguistics and language processing, teachers of Romanian, and students. We invest considerable effort in persuading large publishing houses and other holders of intellectual property rights to relevant language data to join us and contribute selections of their text and speech repositories to the project. The CoRoLa project is coordinated by two computer science institutes of the Romanian Academy, and it enjoys the cooperation of, and consultation with, professional linguists from other institutes of the Romanian Academy. We foresee a written component of more than 500 million word forms and a speech component of about 300 hours of recordings. The entire collection of texts (covering all functional styles of the language) will be pre-processed, annotated at several levels, and documented with standardized metadata. Pre-processing includes cleaning the data, harmonising the diacritics, sentence splitting, and tokenization. Annotation will include morpho-lexical tagging and lemmatization in a first stage, followed by syntactic, semantic, and discourse annotation in a later stage.
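One concrete instance of the "harmonising the diacritics" step mentioned above is the well-known legacy-encoding issue in Romanian text, where cedilla letters (ş, ţ) were historically used in place of the standard comma-below letters (ș, ț). The sketch below is a toy illustration of such a normalisation, not the CoRoLa pipeline itself:

```python
# Illustrative sketch: mapping legacy Romanian cedilla diacritics to the
# standard comma-below letters. Not the actual CoRoLa pre-processing code.
CEDILLA_TO_COMMA = str.maketrans({
    "\u015e": "\u0218",  # S-cedilla -> S-comma-below
    "\u015f": "\u0219",  # s-cedilla -> s-comma-below
    "\u0162": "\u021a",  # T-cedilla -> T-comma-below
    "\u0163": "\u021b",  # t-cedilla -> t-comma-below
})

def harmonise_diacritics(text):
    """Replace legacy cedilla diacritics with comma-below equivalents."""
    return text.translate(CEDILLA_TO_COMMA)

fixed = harmonise_diacritics("na\u0163ional \u015fi \u015etiin\u0163\u0103")
```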
To optimize the sharing and reuse of existing data, many funding organizations now require researchers to specify a management plan for research data. In such a plan, researchers are supposed to describe the entire life cycle of the research data they are going to produce, from data creation to formatting, interpretation, documentation, short-term storage, long-term archiving and data re-use. To support researchers with this task, we built DMPTY, a wizard that guides researchers through the essential aspects of managing data, elicits information from them, and finally, generates a document that can be further edited and linked to the original research proposal.
In a project called "A Library of a Billion Words", we needed an implementation of the CTS protocol capable of handling a text collection of at least one billion words. Because the existing solutions did not work at this scale or were still in development, I started an implementation of the CTS protocol using the methods that MySQL provides. Last year we published a paper introducing a prototype with the core functionalities, but without compliance with the CTS specification (Tiepmar et al., 2013). The purpose of this paper is to describe and evaluate the MySQL-based implementation now that it fulfils version 5.0 rc.1 of the specification, and to mark it as finished and ready to use. Further information, online CTS instances for all described datasets, and binaries can be accessed via the project's website.
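The CTS protocol addresses passages via canonical URNs of the general form `urn:cts:<namespace>:<textgroup>.<work>[.<version>]:<passage>`. As a rough illustration of that addressing scheme (not part of the MySQL implementation described above), a minimal URN parser might look like this:

```python
# Minimal sketch of splitting a CTS URN into its components.
# Illustrative only; not the implementation described in the paper.

def parse_cts_urn(urn):
    """Split a CTS URN into namespace, work hierarchy, and passage."""
    parts = urn.split(":")
    if len(parts) < 4 or parts[0] != "urn" or parts[1] != "cts":
        raise ValueError("not a CTS URN: %r" % urn)
    work_parts = parts[3].split(".")  # textgroup, work, optional version
    return {
        "namespace": parts[2],
        "textgroup": work_parts[0],
        "work": work_parts[1] if len(work_parts) > 1 else None,
        "version": work_parts[2] if len(work_parts) > 2 else None,
        "passage": parts[4] if len(parts) > 4 else None,
    }

example = parse_cts_urn("urn:cts:greekLit:tlg0012.tlg001.perseus-grc1:1.1")
```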
In a previous article (Faaß et al., 2012), a first attempt was made at documenting and encoding morphemic units of two South African Bantu languages, Northern Sotho and Zulu, with the aim of describing and storing the morphemic units of both languages in a single relational database structured as a hierarchical ontology. As a follow-up, the current article describes the implementation of our part-of-speech ontology. We give a detailed description of the morphemes and categories contained in the database, highlighting the need and reasons for a flexible ontology that provides for both language-specific and general linguistic information. By giving a detailed account of the methodology for populating the database, we provide linguists working on other Bantu languages with a road map for extending the database to include their languages of specialization.
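A hierarchical ontology stored in a relational database is commonly realized as an adjacency list, with each category row pointing to its parent. The schema and category names below are invented for illustration and do not reproduce the database described in the article:

```python
# Illustrative adjacency-list sketch of a hierarchical POS ontology.
# Schema and categories are hypothetical, not the article's database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE category (
    id INTEGER PRIMARY KEY,
    label TEXT NOT NULL,
    parent_id INTEGER REFERENCES category(id))""")
conn.executemany("INSERT INTO category VALUES (?, ?, ?)", [
    (1, "Morpheme", None),
    (2, "Root", 1),
    (3, "Affix", 1),
    (4, "Prefix", 3),
    (5, "NounClassPrefix", 4),  # hypothetical language-specific category
])

def ancestry(conn, label):
    """Walk parent links from a category up to the ontology root."""
    path = []
    row = conn.execute(
        "SELECT id, label, parent_id FROM category WHERE label = ?",
        (label,)).fetchone()
    while row is not None:
        path.append(row[1])
        row = conn.execute(
            "SELECT id, label, parent_id FROM category WHERE id = ?",
            (row[2],)).fetchone()
    return path

chain = ancestry(conn, "NounClassPrefix")
```

The adjacency list keeps general categories (Morpheme, Affix) shared across languages while language-specific ones attach lower in the hierarchy.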
We present a quantitative approach to disambiguating flat morphological analyses and producing more deeply structured analyses. Based on existing morphological segmentations, possible combinations of resulting word trees for the next level are filtered first by criteria of linguistic plausibility and then by weighting procedures based on the geometric mean. The frequencies for weighting are derived from three different sources (counts of morphs in a lexicon, counts of largest constituents in a lexicon, counts of token frequencies in a corpus) and can be used either to find the best analysis on the level of morphs or on the next higher constituent level. The evaluation shows that for this task corpus-based frequency counts are slightly superior to counts of lexical data.
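The geometric-mean weighting described above can be shown schematically: each candidate segmentation is scored by the geometric mean of its constituents' frequencies, and the highest-scoring analysis wins. The segmentations and frequency counts below are invented for illustration, not taken from the paper's lexicon or corpus:

```python
# Illustrative sketch of geometric-mean scoring over candidate
# morphological segmentations; all frequencies are invented.
import math

def geometric_mean_score(segmentation, freq):
    """Score a segmentation by the geometric mean of its parts' frequencies."""
    logs = [math.log(freq.get(part, 1)) for part in segmentation]
    return math.exp(sum(logs) / len(logs))

# hypothetical frequency counts for morphs
freq = {"un": 500, "break": 300, "able": 400, "unbreak": 2}

candidates = [("un", "break", "able"), ("unbreak", "able")]
best = max(candidates, key=lambda seg: geometric_mean_score(seg, freq))
```

Working in log space avoids overflow for long segmentations; the geometric mean also keeps analyses with different numbers of constituents comparable, since it normalizes by segment count.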
In this paper, I present the COW14 tool chain, which comprises a web corpus creation tool called texrex, wrappers for existing linguistic annotation tools, and an online query tool called Colibri2. Through detailed descriptions of the implementation and systematic evaluations of the software's performance on different types of systems, I show that the COW14 architecture is capable of handling the creation of corpora of up to at least 100 billion tokens. I also introduce our running demo system, which currently serves corpora of up to roughly 20 billion tokens in Dutch, English, French, German, Spanish, and Swedish.
This paper analyses positioning practices that the producers of a talk show use to present their guests to the television audience as relevant interlocutors on the topic of "tax evasion by celebrities". It examines how the show's producers manage, already at the guests' initial introduction, to present them as "prototypical representatives" and to position them relative to one another through the interplay of an off-screen voice and the camera work. Of the talk show's five participants, two of these initial introductions are analysed in detail. They concern the presentation of two guests who stand in a clearly antagonistic relationship to each other and who are introduced immediately one after the other. On the basis of all five guest presentations, which we reconstructed in detail but unfortunately cannot also present here for reasons of space, a structured web of positionings becomes apparent. At its centre lies the thematic and personal "opposition" we reconstructed; arranged at its periphery are four representatives of socially relevant positions on the show's topic: representatives of the judiciary, of politics, of everyday morality, and of psychology and theology. Theoretically, the analyses build on multimodal conceptions of positioning and recipient design; in methodological terms, they follow multimodal interaction analysis.
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of all linguistic theories that take language-based social interaction into account. However, determining communicative functions is a notoriously difficult task for both human interpreters and automated systems, as it involves an interpretative process that integrates various sources of information. Existing work on communicative function classification comes either from dialogue act tagging, where it is generally coarse-grained with respect to feedback phenomena, or it is token-based and does not address the variety of forms that feedback utterances can take. This paper introduces an annotation framework, the dataset, and the related annotation campaign (involving 7 raters annotating nearly 6,000 utterances). We present its evaluation not merely in terms of inter-rater agreement but also in terms of the usability of the resulting reference dataset, both from a linguistic research perspective and from a more application-oriented viewpoint.
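Inter-rater agreement for annotation campaigns of this kind is usually reported with a chance-corrected coefficient. The paper's own measure and data are not reproduced here; as an illustration, Cohen's kappa for two raters over invented toy labels can be computed like this:

```python
# Sketch: Cohen's kappa for two raters. The labels below are invented
# toy annotations, not the paper's data or its agreement measure.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two raters' label sequences."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    # expected agreement under independent labelling by each rater
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# hypothetical feedback-function labels from two raters
rater_a = ["ack", "ack", "eval", "ack", "other", "eval"]
rater_b = ["ack", "eval", "eval", "ack", "other", "ack"]
kappa = cohens_kappa(rater_a, rater_b)
```

With more than two raters, a generalization such as Fleiss' kappa or Krippendorff's alpha would be used instead.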
We present studies using the 2013 log files of the German version of Wiktionary. We investigate several lexicographically relevant variables and their effect on look-up frequency: corpus frequency of the headword seems to have a strong effect on the number of visits to a Wiktionary entry. We then consider the question of whether polysemous words are looked up more often than monosemous ones; here, we also have to take into account that polysemous words are more frequent in most languages. Finally, we present a technique for investigating the time course of look-up behaviour for specific entries. We exemplify the method by investigating the influence of the (temporary) social relevance of specific headwords.
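The relationship between corpus frequency and look-up frequency sketched above is a rank-correlation question. The toy computation below uses invented numbers, not the study's Wiktionary log data, to show how such a relation might be quantified with Spearman's rho:

```python
# Sketch: Spearman rank correlation between corpus frequency and
# look-up counts. All numbers are invented for illustration.

def rank(values):
    """Assign 1-based ranks (assumes no ties, for simplicity)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    """Spearman's rho via the squared rank-difference formula."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

corpus_freq = [12000, 800, 150, 40000, 3000]  # hypothetical corpus counts
lookups = [900, 60, 20, 300, 2500]            # hypothetical entry visits
rho = spearman(corpus_freq, lookups)
```

A rho near 1 would indicate that frequent headwords are indeed visited more often; real log data would additionally require handling ties in the ranking.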