Opinion holder extraction is one of the most important tasks in sentiment analysis. We briefly outline the importance of predicates for this task and categorize them according to part of speech and according to which semantic role they select for the opinion holder. For many languages, there are no semantic resources from which such predicates can easily be extracted. We therefore present alternative corpus-based methods to acquire such predicates automatically, including the use of prototypical opinion holders, i.e. common nouns, denoting for example experts or analysts, which describe particular groups of people whose profession or occupation is to form and express opinions towards specific items.
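The prototypical-opinion-holder idea above can be sketched in a few lines: verbs whose grammatical subject is a noun like "expert" or "analyst" become candidate opinion predicates. This is a minimal illustration, not the paper's actual extraction pipeline; the (subject, verb) dependency pairs are invented.

```python
# Minimal sketch: collect candidate opinion predicates as verbs whose
# subject is a prototypical opinion holder. Pairs are invented examples.

PROTOTYPICAL_HOLDERS = {"expert", "analyst", "critic"}

# Hypothetical (subject, verb) pairs, as if extracted from parsed text.
pairs = [
    ("expert", "criticize"), ("analyst", "predict"), ("dog", "bark"),
    ("critic", "praise"), ("expert", "argue"), ("table", "stand"),
]

predicates = sorted({v for s, v in pairs if s in PROTOTYPICAL_HOLDERS})
print(predicates)  # verbs selected by prototypical opinion holders
```

In a real setting, the pairs would come from a dependency-parsed corpus and candidates would be ranked by association strength rather than collected by simple membership.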
We explore the feasibility of contextual healthiness classification of food items. We present a detailed analysis of the linguistic phenomena that need to be taken into consideration for this task, based on a specially annotated corpus extracted from web forum entries. For automatic classification, we compare a supervised classifier with rule-based classification. Beyond linguistically motivated features, including sentiment information, we also consider the prior healthiness of food items.
We investigate the task of detecting reliable statements about food-health relationships from natural language texts. For that purpose, we created a specially annotated web corpus from forum entries discussing the healthiness of certain food items. We examine a set of task-specific features (mostly) based on linguistic insights that are instrumental in finding utterances that are commonly perceived as reliable. These features are incorporated in a supervised classifier and compared against standard features that are widely used for various tasks in natural language processing, such as bag-of-words, part-of-speech, and syntactic parse information.
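The contrast between standard and task-specific features can be sketched as follows. The cue lists and feature names here are hypothetical illustrations of the kind of linguistically motivated features described, not the features used in the paper.

```python
# Sketch (hypothetical cue lists): bag-of-words features vs. a few
# task-specific reliability cues such as hedging and appeal to authority.

from collections import Counter

HEDGES = {"might", "perhaps", "possibly", "reportedly"}     # invented list
AUTHORITY = {"study", "studies", "research", "doctor"}      # invented list

def bow_features(tokens):
    """Standard bag-of-words representation: token -> count."""
    return Counter(tokens)

def task_features(tokens):
    """Task-specific features: hedging, authority, first-person cues."""
    toks = set(tokens)
    return {
        "has_hedge": bool(toks & HEDGES),
        "has_authority": bool(toks & AUTHORITY),
        "is_first_person": "i" in toks or "we" in toks,
    }

tokens = "studies show that spinach is healthy".split()
print(task_features(tokens))
```

Either feature dictionary could then be fed to any standard supervised classifier; the point of the comparison is which representation better separates reliable from unreliable statements.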
Interested in formally modelling similarity between narratives, we investigate judgements of similarity between narratives in a small corpus of film reviews and book–film comparisons. A main finding is that judgements tend to concern multiple levels of story representation at once. As these texts are pragmatically related to reception contexts, we find many references to reception quality and optimality. We conclude that current formal models of narrative cannot capture the task of naturalistic narrative comparison as performed in the analysed reviews, and that models incorporating a more reception-oriented point of view will need to be developed.
The understanding of story variation, whether motivated by cultural currents or other factors, is important for applications of formal models of narrative such as story generation or story retrieval. We present the first stage of an experiment to elicit natural narrative variation data suitable for evaluation with respect to story similarity, for qualitative and quantitative analysis of story variation, and for data processing. We also present a few preliminary results from the first stage of the experiment, using Red Riding Hood and Romeo and Juliet as base texts.
Extending the possibilities for collaborative work with TEI/XML through the usage of a wiki system
(2013)
This paper presents and discusses an integrated project-specific working environment for editing TEI/XML-files and linking entities of interest to a dedicated wiki system. This working environment has been specifically tailored to the workflow in our interdisciplinary digital humanities project GeoBib. It addresses some challenges that arose while working with person-related data and geographical references in a growing collection of TEI/XML-files. While our current solution provides some essential benefits, we also discuss several critical issues and challenges that remain.
With the advent of mobile devices, mediatized political discourse has become more dynamic. I assume that the microblog Twitter can be considered a medium for spatial coordination during protests. I therefore analyse the case of neo-Nazi demonstrations and counter-protests that took place in the city of Dresden in February 2012. The data consist of microposts published during the event. Quantitative analyses of hashtag and retweet frequencies were performed, as well as a qualitative speech act pattern analysis and a tempo-spatial discourse analysis on selected subsets of microposts. Results show that verbal georeferencing, and thereby the discursive construction of space, is a common linguistic practice. Empirical analysis indicates a strong relation between communicational online space and physical offline place: protest participants permanently reconfigure the spatial context discursively, and the contested protest area thus becomes a temporarily meaningful place.
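The quantitative step of such a study (hashtag and retweet frequencies) amounts to simple counting over the micropost stream. The posts and hashtags below are invented examples, not corpus data.

```python
# Sketch of hashtag and retweet frequency counting over microposts.
# The posts are invented examples for illustration.

from collections import Counter

posts = [
    "RT @user1: Polizei sperrt die Brücke #dd1302 #nonazis",
    "Wasserwerfer am Hauptbahnhof #dd1302",
    "RT @user2: Blockade steht! #nonazis",
]

# Hashtag frequencies: every token starting with '#'.
hashtags = Counter(tag for p in posts for tag in p.split()
                   if tag.startswith("#"))

# Retweet count: posts using the conventional "RT " prefix.
retweets = sum(1 for p in posts if p.startswith("RT "))

print(hashtags.most_common(), retweets)
```

Frequencies like these then feed the qualitative step, e.g. selecting the subsets of microposts around the most frequent event hashtags for discourse analysis.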
On the basis of empirical research, this paper explores how patterns of interaction and argumentation in political discourse on Twitter evolve into translocal communities through the creative practice of “joint digital storytelling”. Joint storytelling embraces coordinated activities by multiple actors focusing on a shared topic. By adding personal information and evaluation, participants construct an open narrative format which can be inviting and inspiring for others, who then join in with their own narratives. This model is exemplified by analysing a large set of tweets (107,000) collected during a political conflict between proponents and adversaries of a local traffic project in Germany. The analysis is based on (1) the textual level, (2) the operative level (hashtags, @- and RT-symbols, hyperlinks, etc.) and (3) the visual level of storytelling (embedded photos, videos). The results show a new way of creating translocal online communities and political deliberation.
This paper contributes to the discussion on best practices for the syntactic analysis of non-canonical language, focusing on Twitter microtext. We present an annotation experiment in which we test an existing POS tagset, the Stuttgart-Tübingen Tagset (STTS), with respect to its applicability to new text from social media, in particular from Twitter microblogs. We discuss different tagset extensions proposed in the literature and test our extended tagset on a set of 506 tweets (7,418 tokens), where we achieve an inter-annotator agreement for two human annotators in the range of 92.7 to 94.4 (κ). Our error analysis shows that especially the annotation of Twitter-specific phenomena such as hashtags and at-mentions causes disagreements between the human annotators. Following up on this, we provide a discussion of the different uses of the @- and #-marker on Twitter and argue against analysing both on the POS level by means of an at-mention or hashtag label. Instead, we sketch a syntactic analysis which describes these phenomena by means of syntactic categories and grammatical functions.
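The agreement figure reported above is a chance-corrected coefficient; Cohen's kappa for two annotators can be computed as follows. The tag sequences are invented miniature examples, not the actual annotated tweets.

```python
# Minimal sketch of Cohen's kappa for two parallel POS tag sequences.
# The tag sequences are invented examples, not the paper's data.

from collections import Counter

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    # Expected agreement under independent annotators.
    expected = sum(ca[t] * cb[t] for t in ca) / (n * n)
    return (observed - expected) / (1 - expected)

ann1 = ["NN", "VVFIN", "HASHTAG", "NN", "ADJA", "NN"]
ann2 = ["NN", "VVFIN", "NN",      "NN", "ADJA", "NN"]
print(round(cohens_kappa(ann1, ann2), 3))
```

The single disagreement here is, fittingly, on a Twitter-specific token: one annotator assigns a dedicated HASHTAG label while the other tags it as a regular noun.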
We examine predicative adjectives as an unsupervised criterion for extracting subjective adjectives. We compare this criterion not only with a weakly supervised extraction method but also with gradable adjectives, i.e. another highly subjective subset of adjectives that can be extracted in an unsupervised fashion. In order to prove the robustness of this extraction method, we evaluate the extraction with the help of two different state-of-the-art sentiment lexicons (as a gold standard).
The perception of prosodic prominence is influenced by several factors, such as acoustic cues, linguistic expectations, and context. We use a generalized additive model and a random forest to model perceived prominence on a corpus of spoken German. Both models are able to explain over 80% of the variance. While the random forest gives us insights into the relative importance of the cues, the generalized additive model gives us insights into the interactions between different cues to prominence.
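"Variance explained" in this context is the coefficient of determination, R². A minimal sketch of the quantity behind the "over 80%" figure, with made-up prominence ratings and model predictions:

```python
# Sketch: proportion of variance explained (R^2) by a prominence model.
# Ratings and predictions are invented illustration values.

def r_squared(y_true, y_pred):
    """1 - residual sum of squares / total sum of squares."""
    mean = sum(y_true) / len(y_true)
    ss_tot = sum((y - mean) ** 2 for y in y_true)
    ss_res = sum((y - p) ** 2 for y, p in zip(y_true, y_pred))
    return 1 - ss_res / ss_tot

# Hypothetical perceived-prominence ratings vs. model predictions.
ratings = [0.1, 0.8, 0.4, 0.9, 0.2]
preds   = [0.15, 0.75, 0.5, 0.85, 0.25]
print(round(r_squared(ratings, preds), 3))
```

The same statistic applies to either model; what differs is what else each model exposes: variable importances for the random forest, fitted smooth terms and their interactions for the generalized additive model.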
A frequently replicated finding is that higher frequency words tend to be shorter and contain more strongly reduced vowels. However, little is known about potential differences in the articulatory gestures for high vs. low frequency words. The present study made use of electromagnetic articulography to investigate the production of two German vowels, [i] and [a], embedded in high and low frequency words. We found that word frequency differently affected the production of [i] and [a] at the temporal as well as the gestural level. Higher frequency of use predicted greater acoustic durations for long vowels; reduced durations for short vowels; articulatory trajectories with greater tongue height for [i] and more pronounced downward articulatory trajectories for [a]. These results show that the phonological contrast between short and long vowels is learned better with experience, and challenge both the Smooth Signal Redundancy Hypothesis and current theories of German phonology.
This paper addresses the task of finding antecedents for locally uninstantiated arguments. To resolve such null instantiations, we develop a weakly supervised approach that investigates and combines a number of linguistically motivated strategies that are inspired by work on semantic role labeling and coreference resolution. The performance of the system is competitive with the current state-of-the-art supervised system.
In this paper, we report on an effort to develop a gold standard for the intensity ordering of subjective adjectives. Rather than pursue a complete order as produced by paying attention to the mean scores of human ratings only, we take into account to what extent assessors consistently rate pairs of adjectives relative to each other. We show that different available automatic methods for producing polar intensity scores produce results that correlate well with our gold standard, and discuss some conceptual questions surrounding the notion of polar intensity.
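Correlating automatic intensity scores with a gold ordering is typically done with a rank correlation coefficient; Spearman's ρ is a common choice. The adjective scores below are invented for illustration (and the simple rank function ignores ties, which a real evaluation would need to handle).

```python
# Sketch: Spearman rank correlation between gold intensity ranks and
# automatically produced polar intensity scores. Values are invented.

def rank(values):
    """1-based ranks; ties broken by position (simplification)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0] * len(values)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(x, y):
    rx, ry = rank(x), rank(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

gold      = [1.0, 2.0, 3.0, 4.0, 5.0]   # hypothetical gold intensity order
automatic = [0.2, 0.5, 0.4, 0.8, 0.9]   # hypothetical automatic scores
print(spearman(gold, automatic))
```

A high ρ indicates the automatic method recovers the relative ordering even if its absolute scores are on a different scale, which matches the paper's focus on ordering rather than raw scores.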
Igel is a small XQuery-based web application for examining a collection of document grammars; in particular, for comparing related document grammars to get a better overview of their differences and similarities. In its initial form, Igel reads only DTDs and provides only simple lists of the constructs in them (elements, attributes, notations, parameter entities). Our continuing work aims at making Igel provide more sophisticated and useful information about document grammars and at building the application into a useful tool for the analysis (and the maintenance!) of families of related document grammars.
User feedback and its effects on the lexicographic process of online dictionaries
(2013)
In current online dictionaries, users are involved in the lexicographic work through a broad spectrum of options (e.g. error reports, user-contributed entries) and are bound to the dictionary by various means (e.g. newsletters, blogs). Between genuine bottom-up lexicography and the user-retention methods of commercial online dictionaries lies a wide range of user feedback, which in many cases influences the development process of the dictionary in question. This talk presents the points in the lexicographic process of online dictionaries at which “the public” can contribute and shows how this feedback affects the process of compiling such dictionaries. At the same time, the different phases of the lexicographic process of online dictionaries are presented and the production conditions specific to dictionaries in this medium are discussed.
Especially in older dictionaries with a philological orientation, the microstructure of the entries is often discursive and unsystematic. Automated digitization of such dictionaries with the aim of encoding their logical structure is not possible; in many cases, even a parser producing a raw digitization for manual post-editing is not a realistic goal, because the dictionary's information types cannot be clearly delimited from one another and cannot be unambiguously identified in the individual entries. In such cases, even a subsequent manual formalization of the microstructure raises major lexicographic problems. For more complex application scenarios, such as queries in web applications, it may nevertheless be indispensable to represent at least all relevant word forms discussed in the entries, together with basic diasystematic and morphological information and their relations to one another, in a structured, machine-readable format, for example as data-centric XML documents. This talk attempts to explore the lexicographic and technical possibilities and limits of such a partial, manual retro-digitization, drawing on experiences with an older dictionary of German loanwords in Slovene (Striedter-Temps 1963). The dictionary is to be integrated into a portal of loanword dictionaries with German as the common donor language. The individual entries are made available to the user as digitized images; the additional textual retro-digitization is, however, required for more complex search queries, in particular those spanning several dictionaries or the entire portal.
The use of online dictionaries has so far received little research attention. At the Institut für Deutsche Sprache in Mannheim, a project on dictionary use research set out to close this research gap at least in part (see www.benutzungsforschung.de). The empirical studies were conducted both as online questionnaires, which contained experimental as well as survey elements, and as a laboratory test (using eye tracking). The first study investigated in general the occasions and social situations in which online dictionaries are used, as well as the demands users place on online dictionaries. Almost 700 participants from around the world took part in this bilingual (German/English) online study. Owing to the strong response to the first study and the resulting wish to deepen the information gained empirically, the second study was also aimed at an international audience and followed up on the first study in terms of content. Later studies concentrated on monolingual German online dictionaries such as elexiko (studies 3 and 4) and on the dictionary portal OWID (study 5). The talk presents selected results of the various studies.
In mechanical speech synthesis, reed pipes were mainly used for the generation of the voice. The organ stop "vox humana" played a central role in this concept. Historical documents report that the "vox humana" sounded like human vowels. In this study, tones of four different "voces humanae" were recorded to investigate their similarity to human vowels. The acoustic and perceptual analysis revealed that some, though not all, tones show a high similarity to selected vowels.
The KorAP project (“Korpusanalyseplattform der nächsten Generation”, “corpus-analysis platform of the next generation”), carried out at the Institut für Deutsche Sprache (IDS) in Mannheim, Germany, has as its goal the development of a modern, state-of-the-art corpus-analysis platform, capable of handling very large corpora and opening perspectives for innovative linguistic research. The platform will facilitate new linguistic findings by making it possible to manage and analyse extremely large amounts of primary data and annotations, while at the same time allowing an undistorted view of the primary un-annotated text, and thus fully satisfying the expectations associated with a scientific tool. The project started in July 2011 and is funded until June 2014. The demo presentation in December will be the first version following a preliminary feature freeze and will open the alpha testing phase of the project.
Contexts of dictionary use
(2013)
To design effective electronic dictionaries, reliable empirical information on how dictionaries are actually being used is of great value for lexicographers. To my knowledge, no existing empirical research addresses the context of dictionary use, i.e. the extra-lexicographic situations in which a dictionary consultation is embedded. This is mainly due to the fact that data about these contexts is difficult to obtain. To take a first step towards closing this research gap, I incorporated an open-ended question (“In which contexts or situations would you use a dictionary?”) into an online survey (N = 684) and asked the participants to answer this question by providing as much information as possible. Instead of presenting well-known facts about standardized types of usage situation, this paper focuses on the more offbeat circumstances of dictionary use and aims of users, as they are reflected in the responses. Overall, the results indicate that there is a community whose work is closely linked with dictionaries and who, accordingly, deal very routinely with this type of text. Dictionaries are also seen as a linguistic treasure trove for games or crossword puzzles, and as a standard which can be referred to as an authority. While it is important to emphasize that the results are only preliminary, they do indicate the potential of empirical research in this area.
The web portal Lehnwortportal Deutsch (lwp.ids-mannheim.de), developed at the Institute for the German Language (IDS), aims to provide unified access to existing and possibly new dictionaries of German loanwords in other languages. Internally, the lexicographical information is represented as a directed acyclic graph of relations between words. The graph abstracts from the idiosyncrasies of the individual component dictionaries. This paper explores two different strategies to make complex graph-based cross-dictionary queries in such a portal more accessible to users. The first strategy effectively hides the underlying graph structure, but allows users to assign scopes (internally defined in terms of the graph structure) to search criteria. A second type of search strategy directly formulates queries in terms of the relational graph structure. In this case, search results are not entries but n-tuples of words (metalemmata, loanwords, etyma); a query consists of specifying properties of these words and relations between them. A working prototype of an easy-to-use human-readable declarative query language is presented and ways to interactively construct queries are discussed.
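The second search strategy described above, where results are word tuples rather than entries, can be sketched over a toy relation graph. The words, relation names, and edge layout below are invented for illustration and do not reflect the portal's actual data model.

```python
# Sketch (invented words and relation names): a loanword relation graph
# queried for (metalemma, etymon, loanword) triples.

# Edges as (source, relation, target).
EDGES = [
    ("Zucker*", "meta_of", "Zucker"),    # metalemma -> German etymon
    ("Zucker", "etymon_of", "cuker"),    # etymon -> loanword
    ("Zucker", "etymon_of", "cukr"),
]

def triples(edges):
    """All (metalemma, etymon, loanword) paths in the graph."""
    meta  = [(s, t) for s, r, t in edges if r == "meta_of"]
    loans = [(s, t) for s, r, t in edges if r == "etymon_of"]
    return [(m, e, l) for m, e in meta for e2, l in loans if e == e2]

for m, e, l in triples(EDGES):
    print(m, e, l)
```

A declarative query language over such a graph would let the user constrain any position of the tuple (e.g. all loanwords of a given etymon, or all etyma shared by two target languages) instead of enumerating paths by hand.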
Kommunikationsverben, an online reference work on German communication verbs and part of the dictionary portal OWID, describes the meaning of communication verbs on two levels: a lexical level, represented in the dictionary entries and by sets of lexical features, and a conceptual level, represented by different types of situations referred to by specific types of verbs. These two levels have each been implemented in special types of access structures. A first explorative access to the conceptual level provides the user with a list of the main classes of communication verbs, the subclasses of each of these, and the lexical fields pertaining to each subclass. Lexical fields are presented together with a characterisation of the situation type to which the verbs of that field are used to refer. Information about the conceptual level is additionally accessible by an advanced search option allowing the user to combine components of the characterisation of situation types to “create” any kind of situation and search for the verbs that correspond to it. Information about the lexical level of the meaning of communication verbs is accessible via the dictionary entries and by another advanced search option allowing the user to search for verbs with particular lexical features or combinations of these.