This article presents contrastive analyses of the filling and frequency distribution of prefields in German and their French, Italian, Norwegian, Polish, and Hungarian equivalents, based on morphosyntactically annotated Wikipedia corpora. Using corpus-analytic methods, the study establishes quantitative patterns in the language-specific realization of prefields that are consistent with typical structural properties of the languages compared. The results suggest, however, that the prefield structures examined, despite the considerable size and thematic diversity of the Wikipedia corpora, are not sufficiently representative to support unqualified conclusions about general structural properties of the six languages. This is due in particular to the pronounced genre specificity of the (online) encyclopedia as a media type, as could be demonstrated with the help of further comparison corpora.
In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye-movements in a task-oriented dialogue paradigm to investigate when speakers start planning their responses. German speakers heard a confederate describe sets of objects in utterances that either ended in a noun [e.g., Ich habe eine Tür und ein Fahrrad (“I have a door and a bicycle”)] or a verb form [e.g., Ich habe eine Tür und ein Fahrrad besorgt (“I have gotten a door and a bicycle”)], while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own displays with utterances such as Ich habe ein Ei (“I have an egg”). The results show that speakers begin to plan their turns as soon as sufficient information is available to do so, irrespective of further incoming words.
Having found their way onto computer screens, comics soon branched into webcomics. These kept many of the characteristics of print comic books, but gradually adopted new, unexplored modes of representation. Three relatively new 'enhancements' to the medium of comics are presented in this article: webcomics enhanced through the use of the infinite canvas, as proposed by Scott McCloud; those enhanced with videos and/or sound; and lastly those enhanced with interactive and ludic elements. All of these push the medium of comics into new waters, and in doing so they add new layers of meaning and modify their structure based on the make-up of the implemented features. The infinite canvas manages to lift some limitations of print comics without changing the overall feel too drastically, while animated and voiced webcomics, as well as interactive or game comics, have a much higher inclination to transgress into the domains of other media and transform themselves in order to accommodate and integrate these novel foreign features.
The paper presents best practices and results from projects in four countries dedicated to the creation of corpora of computer-mediated communication and social media interactions (CMC). Even though there are still many open issues related to building and annotating corpora of this type, there already exists a range of accessible solutions which have been tested in projects and which may serve as a starting point for a more precise discussion of how future standards for CMC corpora may (and should) be shaped.
Converting and Representing Social Media Corpora into TEI: Schema and best practices from CLARIN-D
(2016)
The paper presents results from a curation project within CLARIN-D, in which an existing 1M-word corpus of German chat communication has been integrated into the DEREKO and DWDS corpus infrastructures of the CLARIN-D centres at the Institute for the German Language (IDS, Mannheim) and the Berlin-Brandenburg Academy of Sciences (BBAW, Berlin). The focus is on the solutions developed for converting and representing the corpus in a TEI format.
The paper reports the results of the curation project ChatCorpus2CLARIN. The goal of the project was to develop a workflow and resources for the integration of an existing chat corpus into the CLARIN-D research infrastructure for language resources and tools in the Humanities and the Social Sciences (http://clarin-d.de). The paper presents an overview of the resources and practices developed in the project, describes the added value of the resource after its integration and discusses, as an outlook, to what extent these practices can be considered best practices which may be useful for the annotation and representation of other CMC and social media corpora.
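The TEI conversion described in these abstracts can be sketched minimally: a single chat posting is wrapped in a TEI-style element carrying speaker and timestamp attributes. The element and attribute names below only loosely echo the TEI CMC proposals and are illustrative; they are not the project's actual schema.

```python
# Sketch: wrapping one chat posting in a TEI-style XML element.
# <post>, @who and @when are illustrative names, not the ChatCorpus2CLARIN schema.
import xml.etree.ElementTree as ET

def posting_to_tei(who: str, when: str, text: str) -> str:
    """Return a TEI-style <post> element for one chat message as a string."""
    post = ET.Element("post", attrib={"who": who, "when": when})
    p = ET.SubElement(post, "p")  # message body as a paragraph child
    p.text = text
    return ET.tostring(post, encoding="unicode")

xml_fragment = posting_to_tei("#A01", "2016-05-12T14:03:07", "hi all :)")
```

A real workflow would additionally validate such fragments against the project's TEI schema before ingestion into the corpus infrastructure.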
Brown clustering has been used to help increase parsing performance for morphologically rich languages. However, much of this work has focused on using clustering techniques to replace terminal nodes or as a feature for parsing. Instead, we examine how effective Brown clustering is for unlexicalized parsing by creating data-driven POS tagsets, which are then used with the Berkeley parser. We investigate cluster sizes as well as which information (e.g., words vs. lemmas) clustering should operate on to yield the best parser performance. Our results approach the current state-of-the-art results for the German TüBa-D/Z treebank when using parser-internal tagging.
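A data-driven tagset of the kind the abstract describes can be derived by truncating each word's Brown-cluster bit-string to a fixed prefix length, so that words in nearby clusters share a coarse tag. The sketch below assumes the common three-column cluster-file format (bitstring, word, count); the prefix length and the toy data are illustrative choices, not the paper's settings.

```python
# Sketch: deriving a coarse, data-driven POS tagset from Brown clusters
# by truncating each word's cluster bit-string to a fixed prefix length.
def load_cluster_tags(lines, prefix_len=4):
    """Map each word to the first `prefix_len` bits of its Brown cluster ID."""
    tags = {}
    for line in lines:
        bits, word, _count = line.rstrip("\n").split("\t")
        tags[word] = bits[:prefix_len]
    return tags

# Toy cluster file in wcluster-style format: bitstring \t word \t count.
clusters = [
    "0010110\tHaus\t42",
    "0010111\tTür\t17",
    "110010\tgeht\t88",
]
tags = load_cluster_tags(clusters, prefix_len=4)
# "Haus" and "Tür" fall into the same coarse class, usable as a preterminal.
```

Varying `prefix_len` trades tagset granularity against data sparsity, which is the kind of cluster-size experiment the abstract mentions.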
Many applications in Natural Language Processing require a semantic analysis of sentences in terms of truth-conditional representations, often with specific desiderata as to which information needs to be included in the semantic analysis. However, there are only very few tools that allow such an analysis. We investigate the representations of an automatic analysis pipeline consisting of the C&C parser and Boxer to determine whether Boxer's analyses, in the form of Discourse Representation Structures, can be successfully converted into a more surface-oriented event-semantic representation, which will serve as input for an algorithm for fusing hard and soft information. We use a data set of synthetic counter-intelligence messages for our investigation. We provide a basic pipeline for conversion and subsequently discuss areas in which ambiguities and differences between the semantic representations present challenges in the conversion process.
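The target of such a conversion can be illustrated with a toy flattening step: DRS-like conditions are rewritten as neo-Davidsonian event-semantic atoms. The condition format and role names below are invented for illustration; Boxer's actual DRS output is considerably richer, which is precisely where the ambiguities discussed in the paper arise.

```python
# Sketch: flattening toy DRS-like conditions into surface-oriented,
# neo-Davidsonian atoms. Predicate and role names are illustrative only.
def drs_to_events(conditions):
    """Turn (predicate, args) tuples into event-semantic atoms as strings."""
    return ["{}({})".format(pred, ",".join(args)) for pred, args in conditions]

drs = [
    ("meet", ("e1",)),        # event predicate introduced by the verb
    ("Agent", ("e1", "x1")),  # thematic role linking event and participant
    ("Theme", ("e1", "x2")),
]
atoms = drs_to_events(drs)
```

In a real pipeline this step would also have to resolve discourse referents and scope, which a flat atom list cannot express.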
This paper addresses communicative deviations in Ukrainian and German interviews. Deviations are examined both in press interviews and in the most popular video interviews on YouTube. They are classified according to whether they originate from the position of the addresser, the addressee, or the viewer. Particular attention is paid to the linguistic and communicative competence of the interlocutors as the main cause of deviations in interviews. Deviations are characterized as one of the preconditions of successful communication.