In 1906, at the request of the German government, the Prussian Academy of Sciences in Berlin assumed responsibility for the work of completing the Deutsches Wörterbuch of Jacob Grimm and Wilhelm Grimm. In 1929/30 it founded the Berlin research unit. After the Second World War, during the decades of Germany's division, this foundational lexicographic work was brought to completion through the close cooperation of a Berlin and a Göttingen research unit. As early as the 1950s, the academies in Berlin and Göttingen decided to undertake, "initially", a complete revision of the oldest parts of the work, which the Brothers Grimm themselves had compiled between 1852 and 1863. This revision is now nearly finished. All the more clearly, however, it has become apparent that the remaining parts are also urgently in need of revision. The Brothers Grimm's monumental work, their most important joint linguistic achievement, used daily on the internet by thousands around the world and the foundation of all recent German lexical research, can fulfil its purpose only if it is not admired as a museum piece but carried forward, in thoroughly renewed form, as an up-to-date reference work. In this situation, the closure of the Berlin research unit in December 2012 sent the wrong signal.
Having found their way onto computer screens, comics soon branched into webcomics. These retained many characteristics of print comic books, but gradually adopted new, unexplored modes of representation. Three relatively new 'enhancements' to the medium of comics are presented in this article: webcomics enhanced through the use of the infinite canvas, as proposed by Scott McCloud; those enhanced with videos and/or sound; and lastly those enhanced with interactive and ludic elements. All of these push the medium of comics into new waters, and in doing so they add new layers of meaning and modify the medium's structure according to the make-up of the implemented features. The infinite canvas manages to lift some limitations of print comics without changing the overall feel too drastically, while animated and voiced webcomics, as well as interactive or game comics, are far more inclined to transgress into the domains of other media and to transform themselves in order to accommodate and integrate these novel foreign features.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information, which are directly or indirectly provided in the blog or can be construed by means of statistical analysis or computational tools. At this point, only directly available information (e.g., title of the blog post, name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions, as well as for the development of NLP methods and techniques (e.g., for authorship detection).
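The directly available information mentioned in the abstract (post titles, blogger names, comment threads) lends itself to a simple structured representation. The following is a minimal, hypothetical sketch of such a schema — the field names and structure are illustrative assumptions, not the project's actual annotation format:

```python
from dataclasses import dataclass, field

# Hypothetical schema for the "directly available" blog information
# described in the abstract: post title, blogger name, and comments.
@dataclass
class Comment:
    author: str   # name of the commentator
    text: str     # comment body

@dataclass
class BlogPost:
    title: str                          # title of the blog post
    blogger: str                        # name of the blogger
    text: str                           # post body
    comments: list = field(default_factory=list)  # attached comments

# Build one annotated post with a single comment.
post = BlogPost(title="Example post", blogger="Jane Doe", text="Post body ...")
post.comments.append(Comment(author="Reader", text="A comment."))
print(post.title, len(post.comments))
```

Such a nested structure makes the post–comment relations explicit, which is the kind of discourse-structure information the corpus aims to represent.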
Mediality and sociality are fundamental categories of a media-linguistic perspective on language and communication, and in what follows they serve as the starting points for an examination of the operativity of digital written characters. After a brief introduction, the concept of operativity is explained and then exemplified by means of a posting on the microblog Twitter.
Many applications in Natural Language Processing require a semantic analysis of sentences in terms of truth-conditional representations, often with specific desiderata as to which information needs to be included in the semantic analysis. However, only very few tools allow such an analysis. We investigate the representations of an automatic analysis pipeline based on the C&C parser and Boxer to determine whether Boxer's analyses, in the form of Discourse Representation Structures, can be successfully converted into a more surface-oriented event-semantic representation, which will serve as input for an algorithm for fusing hard and soft information. We use a data set of synthetic counter-intelligence messages for our investigation. We provide a basic pipeline for the conversion and subsequently discuss areas in which ambiguities and differences between the semantic representations present challenges in the conversion process.
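The conversion described here — from a Discourse Representation Structure to a flatter, surface-oriented event-semantic form — can be illustrated with a toy example. The sketch below is a hypothetical simplification: the data structures and predicate names are invented for illustration and do not reflect Boxer's actual output format.

```python
# Toy illustration of flattening a DRS-like (referents, conditions) pair
# into neo-Davidsonian event-semantic literals. All names are invented;
# this is not the Boxer output format or the paper's actual pipeline.
def drs_to_events(referents, conditions):
    """Render each DRS condition as a flat predicate-argument literal.

    `referents` is the DRS universe (discourse referents, including the
    event variable); `conditions` are (predicate, arguments) pairs.
    """
    return [f"{pred}({', '.join(args)})" for pred, args in conditions]

# A toy DRS for "John sees Mary": referents x, y and event variable e.
refs = ["x", "y", "e"]
conds = [("john", ["x"]), ("mary", ["y"]), ("see", ["e"]),
         ("agent", ["e", "x"]), ("theme", ["e", "y"])]

print(drs_to_events(refs, conds))
# → ['john(x)', 'mary(y)', 'see(e)', 'agent(e, x)', 'theme(e, y)']
```

In a real conversion, the hard part is exactly what this toy version hides: resolving scope, embedded DRSs, and role-labelling ambiguities, which is where the paper locates the main challenges.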