This contribution deals with the interplay of text and interaction on the Internet. Section 2 uses Wikipedia as an example to explain how the text-oriented work on articles and the interaction-oriented discussions complement each other functionally. Section 3 examines links as digital aids to coherence building and shows, with a case study, how they are used in written discussions to keep relevant information present in the "virtual" focus of attention and accessible for phoric and deictic reference. Section 4 discusses results from two comparative studies on the use of the connectors 'weil' as well as 'sprich' and 'd.h.' in Wikipedia articles and talk pages, carried out on the basis of the Wikipedia corpora in the DeReKo collection of the IDS.
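The kind of corpus comparison described in Section 4 could be sketched as a relative-frequency count of the connectors in two subcorpora. This is a minimal illustration with toy token lists, not the actual DeReKo data or the authors' method:

```python
# Minimal sketch: relative frequency (per 1,000 tokens) of the connectors
# 'weil', 'sprich', and 'd.h.' in two subcorpora (articles vs. talk pages).
# The token lists below are toy stand-ins, not real corpus data.
from collections import Counter

CONNECTORS = {"weil", "sprich", "d.h."}

def connector_freq(tokens, connectors=CONNECTORS):
    """Relative frequency per 1,000 tokens for each connector."""
    counts = Counter(t.lower() for t in tokens if t.lower() in connectors)
    scale = 1000 / max(len(tokens), 1)
    return {c: counts[c] * scale for c in connectors}

# Toy examples standing in for article text and talk-page text.
articles = ["Der", "Artikel", "erklärt", "das", "Thema", ",", "d.h.", "knapp"]
talk = ["Ich", "finde", "das", "falsch", ",", "weil", "die", "Quelle", "fehlt"]

freq_articles = connector_freq(articles)
freq_talk = connector_freq(talk)
```

On real data, such per-mille frequencies would be computed over the full article and discussion subcorpora and then compared per connector.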
A large database is a desirable basis for multimodal analysis. The development of more elaborate methods, databases, and tools for a stronger empirical grounding of multimodal analysis is a prevailing topic within multimodality research. A prerequisite for this are corpora of multimodal data. Our contribution develops a proposal for gathering and building multimodal corpora of audio-visual social media data, predominantly YouTube data.

The contribution has two parts. First, we outline a participation framework that can represent the complexity of YouTube communication. To this end we 'dissect' the different communicative and multimodal layers YouTube consists of: besides the video performance, YouTube also integrates comments, social media operators, commercials, and announcements of further YouTube videos. The data consist of various media and modes and are interactively engaged in various discourses. Hence it is rather difficult to decide what can be considered a basic communicative unit (or a 'turn') and how it can be mapped. Another decision to be made is which elements have higher priority than others and therefore have to be integrated into an adequate transcription format. We illustrate our conceptual considerations with the example of so-called Let's Plays, which present and comment on computer gaming sessions.

The second part is devoted to corpus building. Most previous studies either worked with ad hoc data samples or outlined data mining and data sampling strategies. Our main aim is to delineate systematically, based on the conceptual outline in the first part, the elements that should be part of a YouTube corpus. To this end we describe in a first step which components (e.g., the video itself, the comments, the metadata) should be captured. In a second step we outline why and which relations (e.g., screen appearances, hypertextual structures) are worth including in the corpus.
In sum, our contribution aims at outlining a proposal for gathering and systematizing multimodal data, specifically audio-visual social media data, in a corpus derived from a conceptual modeling of important communicative processes of the research object itself.
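The corpus components and relations named in the abstract (the video itself, comments, metadata, links to further videos) could be modelled, for instance, as a simple record structure. All field names here are illustrative assumptions, not the authors' schema:

```python
# Hedged sketch of a record structure for a YouTube corpus entry, covering
# the components mentioned in the abstract: the video, its comments, its
# metadata, and hypertextual relations. Field names are assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Comment:
    author: str
    text: str
    reply_to: Optional[str] = None  # threading as one hypertextual relation

@dataclass
class VideoRecord:
    video_id: str
    title: str
    transcript: str                                   # the video performance
    comments: list = field(default_factory=list)      # list of Comment
    metadata: dict = field(default_factory=dict)      # channel, upload date, ...
    related_videos: list = field(default_factory=list)  # announced/linked videos
```

A concrete corpus entry would then bundle, e.g., a Let's Play video with its comment thread and the videos it announces, so that both components and relations are queryable.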
We present a test suite for POS tagging of German web data. The test suite provides the original raw text as well as gold tokenisations, and is annotated with part-of-speech tags. It includes a new dataset of German tweets, with a current size of 3,940 tokens. To increase the size of the data, we harmonised the annotations in existing web corpora on the basis of the Stuttgart-Tübingen Tag Set. The current version of the corpus has an overall size of 48,344 tokens of web data, around half of it from Twitter. We also present experiments showing how different experimental setups (training set size, additional out-of-domain training data, self-training) influence tagger accuracy. All resources and models will be made publicly available to the research community.
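The self-training setup mentioned in the abstract can be sketched as follows: a tagger trained on gold data tags unannotated text, and its automatic output is added back to the training data. The trivial unigram tagger below is a stand-in for a real STTS tagger, not the authors' system:

```python
# Minimal self-training sketch for POS tagging. A unigram most-frequent-tag
# model stands in for a real tagger; the loop adds automatically tagged
# sentences to the training data, as in the self-training setup above.
from collections import Counter, defaultdict

def train_unigram_tagger(tagged_sents):
    """Map each word to its most frequent tag in the training data."""
    counts = defaultdict(Counter)
    for sent in tagged_sents:
        for word, tag in sent:
            counts[word][tag] += 1
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}

def tag(model, sent, default="NN"):
    """Tag a sentence; unknown words get a default tag."""
    return [(w, model.get(w, default)) for w in sent]

def self_train(gold_sents, unlabeled_sents, rounds=1):
    """Retrain after adding automatically tagged unlabeled data."""
    data = list(gold_sents)
    for _ in range(rounds):
        model = train_unigram_tagger(data)
        data += [tag(model, sent) for sent in unlabeled_sents]
    return train_unigram_tagger(data)
```

In a realistic setup the stand-in model would be replaced by a trained sequence tagger, and only high-confidence automatic annotations would be added back.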
The Internet is a medium in flux, and its conditions of publication and reception are changing with it. What opportunities do the two visions of its future currently being discussed in parallel, the Social Web and the Semantic Web, offer? To answer this question, the contribution examines the foundations of both models with regard to application and technology, but also highlights their shortcomings as well as the added value of combining them in a way suited to the medium. Using the grammatical online information system grammis as an example, it sketches a strategy for an integrative use of the respective strengths of both approaches.