In 1906, at the request of the German government, the Prussian Academy of Sciences in Berlin assumed responsibility for the work to complete the Deutsches Wörterbuch of Jacob Grimm and Wilhelm Grimm. In 1929/30 it founded the Berlin office. After the Second World War, this foundational lexicographic work was brought to completion during the decades of Germany's division, yet in close cooperation between a Berlin and a Göttingen office. As early as the 1950s, the academies in Berlin and Göttingen decided to undertake, "initially", a complete revision of the oldest parts of the work, which the Brothers Grimm had compiled themselves between 1852 and 1863. This revision is now nearly complete. All the more clearly, however, it is now apparent that the remaining parts also urgently require revision. The Brothers Grimm's monumental work, their most important joint linguistic achievement, today used daily by thousands worldwide on the Internet and the foundation of all recent German lexical research, can fulfil its purpose only if it is not admired as a museum piece but continued, in thoroughly renewed form, as an up-to-date reference resource. In this situation, the closure of the Berlin office in December 2012 sent the wrong signal.
Having found their way onto computer screens, comics soon branched into webcomics. These retained many characteristics of print comic books but gradually adopted new, unexplored modes of representation. This article presents three relatively new 'enhancements' to the medium of comics: webcomics enhanced through the infinite canvas, as proposed by Scott McCloud; those enhanced with video and/or sound; and those enhanced with interactive and ludic elements. All of these push the medium of comics into new waters, and in doing so they add new layers of meaning and modify its structure according to the make-up of the implemented features. The infinite canvas lifts some limitations of print comics without changing the overall feel too drastically, while animated and voiced webcomics, as well as interactive or game comics, are much more inclined to transgress into the domains of other media and to transform themselves in order to accommodate and integrate these novel foreign features.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is to represent the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information that are directly or indirectly provided in the blog or can be derived by means of statistical analysis or computational tools. So far, only directly available information (e.g. the title of the blog post, the name of the blogger) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions, as well as for the development of NLP methods and techniques (e.g. for authorship detection).
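The post/comment/participant structure described in the abstract can be sketched as a small data model. This is a minimal illustration only; the class and field names below are assumptions, not the project's actual annotation schema:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Comment:
    commentator: str                                        # directly available metadata
    text: str
    replies: List["Comment"] = field(default_factory=list)  # nested comment thread

@dataclass
class BlogPost:
    title: str                                              # directly available metadata
    blogger: str
    text: str
    comments: List[Comment] = field(default_factory=list)

def participants(post: BlogPost) -> Set[str]:
    """Collect all discourse participants: the blogger plus every commentator."""
    names = {post.blogger}
    stack = list(post.comments)
    while stack:
        c = stack.pop()
        names.add(c.commentator)
        stack.extend(c.replies)
    return names

# A toy post with one comment and a nested reply
post = BlogPost(
    title="Ein Beispielpost",
    blogger="alice",
    text="...",
    comments=[Comment("bob", "Interessant!", replies=[Comment("alice", "Danke!")])],
)
print(sorted(participants(post)))  # → ['alice', 'bob']
```

Nesting replies inside comments mirrors the discourse-structure relations the annotation aims to capture.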
Mediality and sociality are fundamental categories of a media-linguistic perspective on language and communication, and in what follows they serve as the starting points for an examination of the operativity of digital written signs. After a brief introduction, the concept of operativity is explained and then exemplified using a posting on the microblog Twitter.
Many applications in Natural Language Processing require a semantic analysis of sentences in terms of truth-conditional representations, often with specific desiderata as to which information needs to be included in the semantic analysis. However, only very few tools allow such an analysis. We investigate the representations of an automatic analysis pipeline consisting of the C&C parser and Boxer to determine whether Boxer's analyses, in the form of Discourse Representation Structures, can be successfully converted into a more surface-oriented event-semantic representation, which serves as input for an algorithm for fusing hard and soft information. We use a data set of synthetic counter-intelligence messages for our investigation. We provide a basic pipeline for the conversion and subsequently discuss areas in which ambiguities and differences between the semantic representations present challenges in the conversion process.
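The core of such a conversion, flattening a discourse representation into surface-oriented, neo-Davidsonian event literals, can be sketched on a toy example. The input format below is invented for illustration and is not Boxer's actual DRS output:

```python
# Toy DRS-like structure (format assumed for illustration, not Boxer's output):
drs = {
    "referents": ["x1", "e1"],
    "conditions": [
        ("pred", "x1", "john"),          # x1 is named John
        ("pred", "e1", "walk"),          # e1 is a walking event
        ("role", "e1", "Agent", "x1"),   # x1 is the Agent of event e1
    ],
}

def to_event_semantics(drs):
    """Flatten DRS conditions into a conjunction of event-semantic literals."""
    literals = []
    for cond in drs["conditions"]:
        if cond[0] == "pred":            # unary predicate over a referent
            _, ref, pred = cond
            literals.append(f"{pred}({ref})")
        elif cond[0] == "role":          # thematic role linking event and referent
            _, event, role, arg = cond
            literals.append(f"{role}({event},{arg})")
    return " & ".join(literals)

print(to_event_semantics(drs))  # → john(x1) & walk(e1) & Agent(e1,x1)
```

Real DRSs add nested boxes, negation, and quantification, which is where the ambiguities and mismatches discussed in the paper arise.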
Brown clustering has been used to help increase parsing performance for morphologically rich languages. However, much of this work has focused on using clustering techniques to replace terminal nodes or as a feature for parsing. Instead, we examine how effective Brown clustering is for unlexicalized parsing by creating data-driven POS tagsets, which are then used with the Berkeley parser. We investigate cluster sizes as well as which information (e.g. words vs. lemmas) clustering should use to yield the best parser performance. Our results approach the current state-of-the-art results for the German TüBa-D/Z treebank when using parser-internal tagging.
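The idea of a data-driven POS tagset can be illustrated by tagging words with truncated Brown-cluster bit strings. The cluster paths below are made up for illustration; in practice they would be induced from a corpus with a Brown clustering tool:

```python
# Hypothetical Brown-cluster bit strings (real ones are induced from corpus data):
clusters = {
    "Haus": "0010", "Baum": "0011",      # nouns land in neighbouring clusters
    "geht": "1100", "läuft": "1101",     # verbs likewise
    "schnell": "1110",
}

def cluster_tag(word, prefix_len=2):
    """Use a truncated cluster path as a data-driven POS tag.

    Shorter prefixes merge clusters and give a coarser, smaller tagset;
    longer prefixes give a finer-grained one.
    """
    path = clusters.get(word, "")
    return "C" + path[:prefix_len] if path else "UNK"

sent = ["Haus", "geht", "schnell", "Xyz"]
print([cluster_tag(w) for w in sent])  # → ['C00', 'C11', 'C11', 'UNK']
```

Varying `prefix_len` is one simple way to explore the trade-off between tagset size and informativeness that the cluster-size experiments probe.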
We present the IUCL system, based on supervised learning, for the shared task on stance detection. Our official submission, the random forest model, reaches a score of 63.60 and is ranked 6th out of 19 teams. We also use gradient boosting decision trees and SVMs, and merge all classifiers into an ensemble method. Our analysis shows that random forest is good at retrieving minority classes and gradient boosting at majority classes. The strengths of the different classifiers with respect to precision and recall complement each other in the ensemble.
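One simple way to merge classifiers with complementary strengths is majority voting over their predicted labels. This is a minimal sketch in plain Python, not necessarily the ensembling scheme the IUCL system actually used; the tie-break rule (defer to the first classifier) is an assumption:

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-classifier label lists.

    `predictions` is a list of equally long label lists, one per classifier.
    Ties are broken in favour of the first classifier (e.g. the random forest).
    """
    merged = []
    for i in range(len(predictions[0])):
        votes = [p[i] for p in predictions]
        counts = Counter(votes)
        top, top_count = counts.most_common(1)[0]
        if sum(1 for c in counts.values() if c == top_count) > 1:
            top = votes[0]               # tie: defer to the first classifier
        merged.append(top)
    return merged

rf  = ["FAVOR", "AGAINST", "NONE"]       # toy stance predictions
gbd = ["FAVOR", "AGAINST", "AGAINST"]
svm = ["NONE",  "AGAINST", "NONE"]
print(ensemble_vote([rf, gbd, svm]))  # → ['FAVOR', 'AGAINST', 'NONE']
```

A vote like this lets the minority-class strength of one model and the majority-class strength of another offset each other's errors.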