This paper consists of a short analysis of the sources and the treatment of the legal lexicon in the first dictionary published by the Spanish Royal Academy (1726–1739), followed by a longer commentary on the representation and treatment of the concept of judge, focusing on how extralinguistic factors are reflected in the definitions. The results highlight the relevance of the legal context of that era for the treatment of the lexicon related to the legal domain, but they also demonstrate how the lexicographic data reflects peculiarities of legal matters.
This paper reports on the efforts of twelve national teams in building the International Comparable Corpus (ICC; https://korpus.cz/icc), which will contain highly comparable datasets of spoken, written and electronic registers. The languages currently covered are Czech, Finnish, French, German, Irish, Italian, Norwegian, Polish, Slovak, Swedish and, more recently, Chinese, as well as English, which serves as the pivot language. The goal of the project is to provide much-needed data for contrastive corpus-based linguistics. The ICC is committed to re-using existing multilingual resources as much as possible, and its design is modelled, with various adjustments, on the International Corpus of English (ICE). As such, ICC will contain approximately the same balance of forty percent written language and sixty percent spoken language, distributed across 27 different text types and contexts. A number of issues encountered by the project teams are discussed, ranging from copyright and data sustainability to technical advances in data distribution.
The Component MetaData Infrastructure (CMDI) is a framework for the creation and usage of metadata formats to describe all kinds of resources in the CLARIN world. To better connect to the library world, and to allow librarians to enter metadata for linguistic resources into their catalogues, a crosswalk from CMDI-based formats to bibliographic standards is required. The general and rather fluid nature of CMDI, however, makes it hard to map arbitrary CMDI schemas to metadata standards such as Dublin Core (DC) or MARC 21, which have a mature, well-defined and fixed set of field descriptors. In this paper, we address the issue and propose crosswalks between CMDI-based profiles originating from the NaLiDa project and DC and MARC 21, respectively.
Sometimes in interaction, a speaker articulates an overt interpretation of prior talk. Such moments have been studied as involving the repair of a problem with the other’s talk or as formulating an understanding of the matter at hand. Stepping back from the established notions of formulations and repair, we examine the variety of actions speakers do with the practice of offering an interpretation, and the order within this domain. Results show half a dozen usage types of interpretations in mundane interaction. These form a largely continuous territory of action, with recognizably distinct usage types as well as cases falling between these (proto)typical uses. We locate order in the domain of interpretations using the method of semantic maps and show that, contrary to earlier assumptions in the literature, interpretations that formulate an understanding of the matter at hand are actually quite pervasive in ordinary talk. These findings contribute to research on action formation and advance our understanding of understanding in interaction. Data are video- and audio-recordings of mundane social interaction in the German language from a variety of settings.
The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. “It’s not allowed to do this”). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction.” The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
We examine moments in social interaction in which a person formulates what another thinks or believes. Such formulations of belief constitute a practice with specifiable contexts and consequences. Belief formulations treat aspects of the other person's prior conduct as accountable on the basis that it provided a new angle on a topic, or otherwise made a surprising contribution within an ongoing course of actions. The practice of belief formulations subjectivizes the content that the other articulated and thereby topicalizes it, mobilizing commitment to that position, an account, or further elaboration. We describe how the practice can be put to work in different activity contexts: sometimes it is designed to undermine the other's position as a subjective 'mere belief', at other times it serves to mobilize further topic talk. Throughout, belief formulations show themselves to be a method by which we get to know ourselves and each other as mental agents.
‘Can’ and ‘must’-type modal verbs in the direct sanctioning of misconduct across European languages
(2023)
Deontic meanings of obligation and permissibility have mostly been studied in relation to modal verbs, even though researchers are aware that such meanings can be conveyed in other ways (consider, for example, the contributions to Nuyts/van der Auwera (eds.) 2016). This presentation reports on an ongoing project that examines deontic meaning but takes as its starting point not a type of linguistic structure but a particular kind of social moment that presumably attracts deontic talk: the management of potentially 'unacceptable' or untoward actions (taking the last bread roll at breakfast, making a disallowed move during a board game, etc.). Data come from a multi-language parallel video corpus of everyday social interaction in English, German, Italian, and Polish. Here, we focus on moments in which one person sanctions another's behavior as unacceptable. Using interactional-linguistic methods (Couper-Kuhlen/Selting 2018), we examine similarities and differences across these four languages in the use of modal verbs as part of such sanctioning attempts. First results suggest that modal verbs are not as common in the sanctioning of misconduct as one might expect. Across the four languages, only between 10%–20% of relevant sequences involve a modal verb. Most of the time, in this context, speakers achieve deontic meaning in other ways (e.g., infinitives such as German nicht so schmatzen, 'no smacking'). This raises the question of what exactly modal verbs, on those relatively rare occasions when they are used, contribute to the accomplishment of deontic meaning. The reported study pursues this question in two ways: 1) by considering similarities across languages in the ways that modal verbs interact with other (verbal) means in the sanctioning of misconduct; 2) by considering differences across languages in the use of modal verbs.
Here, we find that the relevant modal verbs are used similarly in some activity contexts (enforcing rules during board games), but less so in others (mundane situations with no codified rules). In sum, the presented study adds to cross-linguistically grounded knowledge about deontic meaning and its relationship to linguistic structures.
In examples such as
(1) Du hast scheints / Weiß Gott nichts begriffen.
(2) It cost £200, give or take.
(3) Qu’est ce qu’il a dit?
verbal constructions (VC for short; here, in each case, the parts set in bold) are used in a way that runs counter to the grammar of verbal constructions. In (1) and (2), the verbal construction is used like an adverb or particle, i.e. like an expression functioning as an (adverbial) adjunct or supplement. In (3), the verbal construction has become part of a periphrastic interrogative construction. How are such 'refunctionalizations' (as I will label the phenomenon pre-theoretically for now) to be classified? Are they cases of lexicalization or of grammaticalization? Or a phenomenon of a third kind? The refunctionalization of verbal syntagms or constructions (I use the abbreviation RVC for 'refunctionalized verbal construction(s)') has so far been studied less thoroughly than, for instance, the refunctionalization of prepositional phrases, which across languages can develop into complex, "secondary" prepositions (compare GER auf Grund + genitive / von, ENG on top of, FRA à cause de).
Using a grammatical detail as a test case, descriptive and generative approaches to description are compared. Three different types of non-phoric es are described with respect to the grammatical dimensions 'position', 'fixedness' and 'complement association', and the profile of each type is established. Generative analyses are primarily concerned with the subject status of the es types and thus, more generally, with the contested assumption of a structural subject position in the German clause. It is shown that non-phoric es generally does not qualify as the filler of a structural subject position. The corresponding generative analyses contradict the descriptively established grammatical profile of es.
The grammar of a/s-nominals has not yet been sufficiently investigated. Their constituent status is judged differently in the literature; as syntactic functions, only the adnominal function and the function as verbal complement have been identified. It is shown that this reductionist approach does not do justice to a/s-nominals from a sentence-semantic perspective: dislocation out of the NP is associated with sentence-semantic changes that are to be understood as interpretations of a changed syntactic function in each case. The article argues for a total of four possible syntactic functions: in addition to the two already mentioned, the verb-related and the sentence-related adverbial functions.
In this article, a proposal for distinguishing parts of speech among nominal function words is developed, based on the principle of 'underspecification'. The feature in which nominal function words can be underspecified is 'independence'. Thus 'only-independent nominal function words' (genuine pronouns) are distinguished from 'only-adnominal' ones (genuine determiners) and 'non-independent' ones. Special attention is paid to the non-independent ones, such as German dieser, which are underspecified with respect to independence. Following English grammaticography, a typology of uses for this group is presented. Their competition with the only-independent ones is worked out cross-linguistically, above all in the contrast between English and German. From these observations, more general conclusions are drawn for the phenomenon of indeterminacy or adaptivity of linguistic expressions, its description by means of underspecification, and its different manifestations in inflectional morphology and in the lexicon of function and content words. The background of the article is the IDS project "Grammatik des Deutschen im europäischen Vergleich" (GDE).
In this article, a new, functionally motivated taxonomy is developed for the adnominal genitive and corresponding von-phrases, jointly referred to as 'possessive attributes'. It builds on findings from typological research and on comparison with other languages, above all Germanic ones. The descriptive framework for the NP, with the overarching 'functional domain' of reference and its associated subdomains, is presented. Possessive attributes can be identified as one form of expression of the subdomain of modification. It is shown that possessive attributes can realize different functional types of modification: referential-anchoring (der Hut meiner Schwester 'my sister's hat'), qualitative (ein Autor deutscher Herkunft 'an author of German origin') and classificatory (ein Mann der Tat 'a man of action'). Marginal possessive attributes such as the partitive genitive (eine Tasse heißen Tees 'a cup of hot tea') and the genitive of identity (das Laster der Unbescheidenheit 'the vice of immodesty') are also taken into account. The new ordering of possessive attributes by functional subdomains is preferable to the traditional classification in that it relies only on basic distinctions according to the reference-semantic status of the modifier (conceptual versus referential) and the modifier's contribution to the meaning composition of the NP (anchoring versus qualitative or classificatory). Moreover, it is supported by test procedures such as the pronominalization test.
The article pursues two aims, one descriptive and one methodological. At the level of grammatical description, the German relative clause construction is analyzed through comparison with corresponding constructions of other European languages, in particular those of English, French, Polish and Hungarian, the core contrast languages of the project "Grammatik des Deutschen im europäischen Vergleich". In doing so, the central project concepts of 'functional domain' and 'variance parameter' are drawn upon. The functional domain of the relative clause is determined as a contribution to the overarching function of nominal constructions, namely reference, and specifically as referential modification of the conceptual core by an anchoring state of affairs. Of the parametrizations that differentiate the languages, three are singled out and discussed with regard to their correlation. From a methodological point of view, the relative clause serves to show how typological generalizations, contrasts between languages (here mostly closely related or connected through language contact) and language-specific properties are to be related to one another, always in the service of a better insight into the workings of German.
The traditional classification of man as an indefinite pronoun is called into question, and other possible classifications are examined. To this end, the morphosyntax and semantics of man are worked out, with the dichotomy of 'generic' versus 'particular' use especially at issue. Finally, a brief look is taken at man from the learner's perspective and in cross-linguistic comparison.
The nominal possessor construction of the type dem Vater sein Hut ('the father his hat', i.e. 'father's hat'), widespread in spoken colloquial language and in dialects, is out of line in morphological, syntactic and semantic respects. Nevertheless, it persists stubbornly in these varieties and thus appears to be functionally adequate.
The article gives an overview of the German data and presents the proposed analyses with respect to morphology and syntactic and semantic structure. A look at other languages and at descriptive approaches in general language typology allows a new perspective that places this construction in the context of fundamental alternatives for the marking of syntactic relations ("head-marking" versus "dependent-marking"). The much-discussed topic of the construction's emergence via reanalysis or grammaticalization also gains new aspects under this overarching perspective. Finally, the question is pursued as to which properties make this construction attractive to speakers despite its grammatically deviant paths and its sanctioning by normative grammar.
Relational adjectives, i.e. adjectives derived from nouns which, in attributive construction with a head noun, express an unspecific relation between the concept of the head and the concept of the base, play an important role in the classical languages. Starting from silvestris musa, Virgil's muse of the woods, this article traces the after-effects of this pattern in European languages: French, English, and above all German. The semantic function of such adjectives is assigned to the functional domain of 'classificatory modification'. Cross-linguistic commonalities and differences are worked out. Relational adjectives in Polish and Hungarian, the further comparison languages of the project "Grammatik des Deutschen im europäischen Vergleich", are also briefly included. Against this background, the question of the relationship between universal, family-specific, areal and language-specific properties of the construction pattern, and of the degree of Latin influence, can be formulated more precisely.
From the perspective of social worlds, the article discusses the relationship between the interpretive patterns and stocks of knowledge that migrants draw on and the forms of their social participation. The empirical analysis is based on 'intra-ethnic' interaction processes in the social world of a 'Turkish' football club in Mannheim. It is shown that, in the case studied, ethnic self-organization and integration are paired in a specific way. The structural features of this local social world include, in particular, its embedding in a multitude of different contexts and its internal differentiation. Further features are the everyday-pragmatic use of 'Turkish' cultural patterns and the universalistic character of the symbolic legitimations in the everyday philosophy of the club's members. Finally, the dominance of demands for action and interpretations from the world of football over those from the 'ethnic' milieu, and the questioning of the categories 'German' and 'Turkish', are characteristic of the social world under study.
Preface
(2021)
This chapter starts out by giving a brief overview of the main priorities of international and German studies in the area of linguistic landscape research. The contributions to this volume are then embedded in current debates and developments in the field. Finally, we outline important desiderata of linguistic landscape research that focus on German and address challenges of knowledge transfer and application as well as possible contributions to international lines of research.
In the development of human language in children, action, conceptualization and social interaction co-develop, mutually scaffolding and supporting each other within a virtuous feedback cycle. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and, in particular, language acquisition in robots. Thus, we include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple different approaches that complement each other in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, while human learning relevant to developmental robotics has also contributed useful results.
Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods.
Research on syntactic ambiguity resolution in language comprehension has shown that subjects' processing decisions are influenced by a variety of heterogeneous factors such as syntactic complexity, semantic fit and the discourse frequency of the competing structures. The present paper investigates a further potentially relevant factor in such processes: effects of syntagmatic lexical chunking (or matching to a complex memorized prefab) whose occurrence would be predicted from usage-based assumptions about linguistic categorisation. Focusing on the widely studied so-called DO/SC-ambiguity, in which a post-verbal NP is syntactically ambiguous between a direct object and the subject of an embedded clause, potentially biasing collocational chunks of the relevant type are identified in a number of corpus-linguistic pretests and then investigated in a self-paced reading experiment. The results show a significant increase in processing difficulty from a collocationally neutral over a lexically biasing to a strongly biasing condition. This suggests that syntagmatically complex and partially schematic templates of the kind envisioned in usage-based Construction Grammar may impinge on speakers' online processing decisions during sentence comprehension.
Introduction
(2008)
Smooth turn-taking in conversation depends in part on speakers being able to communicate their intention to hold or cede the floor. Both prosodic and gestural cues have been shown to be used in this context. We investigate the interplay of pitch movements and hand gestures at locations at which speaker change becomes relevant, comparing their use in German and Swedish. We find that there are some shared functions of prosody and gesture with regard to turn-taking in the two languages, but that these shared functions appear to be mediated by the different phonological demands on pitch in the two languages.
Looking at gestures as a means for communication, they can serve conversational participants at several levels. As co-speech gestures, they can add information to the verbally expressed content, and they can serve to manage turn-taking. In order to look more closely at the interplay between these resources in face-to-face conversation, we annotated hand gestures, syntactic completion points and the related turn organisation, and measured the timing of gesture strokes and their lexical/phrasal referents. In a case study on German, we observe the trend that speakers vary less in gesture-lexis on- and offsets when keeping the turn after syntactic completions than at speaker changes, backchannels or other locations of a conversation. This indicates that timing properties of non-verbal cues interact with verbal cues to manage turn-taking.
Social media, as the fifth estate, increasingly influence public discourses and play a major role in shaping public opinion. Undoubtedly, they have the potential to promote participation and democracy. On the other hand, they also constitute a risk for democratic societies, as the spread of hate speech and fake news has shown. As a response, forms of counterspeech organised by civil society have emerged in social media to counter the normalisation of hate speech and democracy-threatening discourses. In order to influence discourse in social media in the sense of the fifth estate, counterspeech campaigns must also be quantitatively visible. In this ethnographic contrastive study, I analysed the activities of the German and Finnish Facebook groups of the network #iamhere international. The intensity and continuity of their activities is evidently influenced by their strategic organisation: conventionalised rules support them, whereas absent or inconsistently applied rules seemed to be counterproductive.
The aim of the article is to investigate silence and its linguistic realization in relation to the macro- and microstructure of the literary text. The theoretical background is formed by linguistic and literary studies that address communicative, pragmatic, semantic, cultural and literary-historical aspects of silence and emphasize its distinction from stillness, which is to be understood as a natural phenomenon. Starting from the model of literary communication, attention is drawn to the role of silence in the triad author-text-reader and to its possible realizations in the structure and language of the narrative text. Attention is directed not only to silence as non-speaking but also to empty talk, which actualizes the semantics of silence within the communicative situation. These two opposing forms of silence come to the fore in the Berlin novels of Robert Walser (1878-1956) and are subjected to close analysis from the perspective of macro- and microstylistics. The analysis examines the narrative principle of garrulousness in Geschwister Tanner (1907), the irony in Der Gehülfe (1908) and the fragmentary narrative mode in Jakob von Gunten (1909), through which silence is realized both at the thematic level and in the structure and language of the text. As a narrative strategy, silence shapes the form and content of Walser's Berlin novels and thus achieves the effect on the reader intended by the author.
Many studies on dictionary use presuppose that users do indeed consult lexicographic resources. However, little is known about what users actually do when they try to solve language problems on their own. We present an observation study where learners of German were allowed to browse the web freely while correcting erroneous German sentences. In this paper, we are focusing on the multi-methodological approach of the study, especially the interplay between quantitative and qualitative approaches. In one example study, we will show how the analysis of verbal protocols, the correction task and the screen recordings can reveal the effects of intuition, language (learning) awareness, and determination on the accuracy of the corrections. In another example study, we will show how preconceived hypotheses about the problem at hand might hinder participants from arriving at the correct solution.
In this article, we proceed from the premise that the appropriateness of linguistic forms is not to be judged across the board but with respect to the particular context. Using an online questionnaire study with subordinate clauses introduced by weil, we test the hypothesis that variants which do not conform to the written standard are perceived as (at least) equally appropriate as a written-standard variant, or at least differently, in forms of communication that are less oriented towards standard and written-language norms. We investigate this by means of three tasks: reception, production, and association with particular media and text types. We can show that the variant conforming to the written norm is consistently rated as the most acceptable. In all three tasks, however, there are also clear and consistent effects suggesting that the different variants are indeed assessed, produced and associated differently depending on the text type.
Wiktionary is increasingly gaining influence in a wide variety of linguistic fields such as NLP and lexicography, and has great potential to become a serious competitor for publisher-based and academic dictionaries. However, little is known about the "crowd" that is responsible for the content of Wiktionary. In this article, we want to shed some light on selected questions concerning large-scale cooperative work in online dictionaries. To this end, we use quantitative analyses of the complete edit history files of the English and German Wiktionary language editions. Concerning the distribution of revisions over users, we show that — compared to the overall user base — only very few authors are responsible for the vast majority of revisions in the two Wiktionary editions. In the next step, we compare this distribution to the distribution of revisions over all the articles. The articles are subsequently analysed in terms of rigour and diversity, typical revision patterns through time, and novelty (the time since the last revision). We close with an examination of the relationship between corpus frequencies of headwords in articles, the number of article visits, and the number of revisions made to articles.
Dictionaries have been part and parcel of literate societies for many centuries. They assist in communication, particularly across different languages, to aid in understanding, creating, and translating texts. Communication problems arise whenever a native speaker of one language comes into contact with a speaker of another language. At the same time, English has established itself as a lingua franca of international communication. This marked tendency gives lexicography of English a particular significance, as English dictionaries are used intensively and extensively by huge numbers of people worldwide.
We present ESDexplorer (https://owid.shinyapps.io/ESDexplorer), a browser application which allows the user to explore the data from a large European survey on dictionary use and culture. We built ESDexplorer with several target groups in mind: our cooperation partners, other researchers, and a more general public interested in the results. Also, we present in detail the architecture and technological realisation of the application and discuss some legal aspects of data protection that motivated some architectural choices.
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
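The fold-based analysis described in the abstract (tracking vocabulary growth and hapax legomena as more folds are included) can be sketched in a few lines of Python. This is only an illustration: the function name and the toy fold data are hypothetical, and the actual DeReKoGram file format is not specified here.

```python
from collections import Counter

def vocabulary_growth(folds):
    """Given an iterable of per-fold {lemma: frequency} mappings,
    return (vocab_sizes, hapax_counts): the cumulative vocabulary size
    and the number of hapax legomena after adding each fold."""
    cumulative = Counter()
    vocab_sizes, hapax_counts = [], []
    for fold in folds:
        cumulative.update(fold)  # merge this fold's frequencies
        vocab_sizes.append(len(cumulative))
        hapax_counts.append(sum(1 for f in cumulative.values() if f == 1))
    return vocab_sizes, hapax_counts

# Toy data standing in for two of the 16 DeReKoGram folds
folds = [
    {"haus": 3, "baum": 1},
    {"baum": 2, "fluss": 1},
]
sizes, hapax = vocabulary_growth(folds)
# sizes == [2, 3]; hapax == [1, 1]
```

Because the folds are merged incrementally, the same routine also shows how a subset of folds can serve as a smaller working sample, as suggested in the abstract.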
Neologisms, i.e., new words or meanings, are finding their way into everyday language use all the time. In the process, already existing elements of a language are recombined or linguistic material from other languages is borrowed. But are borrowed neologisms accepted as readily by the speech community as neologisms formed from “native” material? We investigate this question based on neologisms in German. Building on the corresponding results of a corpus study, we test the hypothesis that “native” neologisms are more readily accepted than those borrowed from English. To do so, we use a psycholinguistic experimental paradigm that allows us to estimate the degree of uncertainty of the participants based on the mouse trajectories of their responses. Unexpectedly, our results suggest that the neologisms borrowed from English are accepted more frequently, more quickly, and more easily than the “native” ones. These effects, however, are restricted to people born after 1980, the so-called millennials. We propose potential explanations for this mismatch between corpus results and experimental data and argue, among other things, for a reinterpretation of previous corpus studies.
Based on the privative derivational suffix -los, we test statements found in the literature on word formation using a – at least in this field – novel empirical basis: a list of affective-emotional ratings of base nouns and associated -los derivations. In addition to a frequency analysis based on the German Reference Corpus, we show that, in general, emotional polarity (so-called valence, positive vs. negative emotions) is reversed by suffixation with -los. This change is stronger for more polarized base nouns. The perceived intensity of emotion (so-called arousal) is generally lower for -los derivations than for base nouns. Finally, to capture the results theoretically, we propose a prototypical -los construction in the framework of Construction Morphology.
The public acceptance and impact of research in the natural and technical sciences depend fundamentally on whether its goals and findings can be communicated to the public. Yet the content of current research projects is often difficult for a lay audience to access and understand. With the aim of improving the public debate on research in the natural and technical sciences, the project PopSci – Understanding Science empirically investigates and evaluates an important sector of popular-science discourse in Germany. To this end, we identify the linguistic features of German popular-science texts using corpus-based methods and examine their effect on lay readers' cognitive processing of these texts. We employ pre- and post-tests of knowledge and record readers' eye movements while they read popular-science texts. From this combination of methods, we attempt to derive initial recommendations for improving the linguistic style and knowledge representation of popular-science texts.
Poster by the Text+ partner Leibniz-Institut für Deutsche Sprache Mannheim, presented at the workshop "Wohin damit? Storing and reusing my language data" on 22 June 2023 in Mannheim. The poster was produced in the context of the work of the association Nationale Forschungsdateninfrastruktur (NFDI) e.V. NFDI is funded by the Federal Republic of Germany and the 16 federal states; the Text+ consortium is funded by the Deutsche Forschungsgemeinschaft (DFG) – project number 460033370. The authors thank these bodies for their funding and support. Thanks also go to all institutions and individuals committed to the association and its goals.
The Leibniz-Institute for the German Language (IDS) was established in Mannheim in 1964. Since then, it has been at the forefront of innovation in German linguistics as a hub for digital language data. This chapter presents various lessons learnt from over five decades of work by the IDS, ranging from the importance of sustainability, through its strong technical base and FAIR principles, to the IDS’ role in national and international cooperation projects and its expertise on legal and ethical issues related to language resources and language technology.
This paper summarises the current state of discussion on the design of an organisational model for the institutional continuation of the collaborative research project TextGrid and consolidates the results so far from work package 3 – structural and organisational sustainability. The organisational model sketched here builds on the sustainability concepts developed in D-Grid and WissGrid and adapts the concept of the Virtual Organisation (VO) for TextGrid. Overall, TextGrid aims to institutionalise its activities beyond the end of the project period and intends to establish suitable paths and processes together with virtual research environments from other disciplines. On 24–25 February 2011, TextGrid hosted a strategy workshop in Berlin that brought together a panel of experts on the sustainability of virtual research environments. The discussion addressed how virtual research environments can be sustainable given today's financial and organisational structures, and which recommendations for TextGrid follow from this. The results of the expert discussion, together with the considerations in this paper, will feed into the design of a more comprehensive organisational model that will form the basis for the long-term continuation of TextGrid.
The actual or anticipated impact of research projects can be documented in scientific publications and project reports. While project reports are available at varying levels of accessibility, they may rarely be used or shared outside academia. Moreover, the connection between the outcomes of a research project and their potential secondary use may not be made explicit in a project report. This paper outlines two methods for classifying and extracting the impact of publicly funded research projects. The first method identifies impact categories and assigns them to research projects (and, by extension, to their reports) with the help of subject matter experts, without considering the content of the research reports; this process resulted in the classification schema we describe in this paper. The second method, which is still work in progress, extracts impact categories from the actual text data.
In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications.
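A minimal sketch of the multiple-annotation idea: several separately viewable layers over the same base text, with the text itself (via character offsets) as the implicit link, so overlapping structures pose no problem. The element and attribute names below are illustrative, not taken from the paper's actual format.

```python
# Two annotation layers over the same base text. The "syntax" span overlaps
# both "pos" spans, which a single inline XML tree could not express.
text = "Peter sleeps"

layers = {
    "pos":    [(0, 5, "NE"), (6, 12, "VVFIN")],
    "syntax": [(0, 12, "S")],
}

def layer_as_xml(name):
    """Render one layer as a stand-off XML fragment anchored to the text."""
    spans = "".join(
        f'<seg start="{s}" end="{e}" label="{l}"/>' for s, e, l in layers[name]
    )
    return f'<layer name="{name}">{spans}</layer>'

print(layer_as_xml("pos"))
print(layer_as_xml("syntax"))
```

Each layer can be viewed or validated on its own, and a new layer is added simply by adding another entry keyed to the same text.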
Precisely because the topic of this year's conference has repeatedly been the subject of various research traditions over the past decades, and is today studied in an equally wide variety of ways, the conference set out to present current projects from different disciplines and to discuss them from an interdisciplinary perspective. Its goal was to offer physicians, psychologists and conversation analysts a platform to get in touch with one another, to jointly discuss the approaches, research interests and methods presented, and to work out where these differ from their own.
Like many other linguistic expressions, German artefact terms exhibit context-dependent meaning variation that follows systematically recurring patterns. One goal of this study is to determine how this meaning variation arises and which semantic relations or features form the link between the individual variants of a word's meaning. This also allows the degree of systematicity or regularity of the polysemy to be determined more precisely. The meaning variations of artefact terms are treated here essentially as cases of metonymic meaning shift. The starting point of the analysis is an underspecified semantic form of the linguistic expressions, which is incrementally enriched and modelled using various inferential procedures, drawing on context and world knowledge.
This paper describes work directed towards the development of a syllable prominence-based prosody generation functionality for a German unit selection speech synthesis system. A general concept for syllable prominence-based prosody generation in unit selection synthesis is proposed. As a first step towards its implementation, an automated syllable prominence annotation procedure based on acoustic analyses has been performed on the BOSS speech corpus. The prominence labeling has been evaluated against an existing annotation of lexical stress levels and manual prominence labeling on a subset of the corpus. We discuss methods and results and give an outlook on further implementation steps.
We examine the new task of detecting derogatory compounds (e.g. curry muncher). Derogatory compounds are much more difficult to detect than derogatory unigrams (e.g. idiot) since they are more sparsely represented in lexical resources previously found effective for this task (e.g. Wiktionary). We propose an unsupervised classification approach that incorporates linguistic properties of compounds. It mostly depends on a simple distributional representation. We compare our approach against previously established methods proposed for extracting derogatory unigrams.
We present an approach for modeling German negation in open-domain fine grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units (and not only common negation words) and that different negation words have different scopes. Our approach is examined on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identify different types of negation words that have the same scopes. We show that already negation modeling based on these types largely outperforms traditional negation models which assume the same scope for all negation words and which employ a window-based scope detection rather than a scope detection based on syntactic information.
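The core idea above, that scope detection should depend on the negator's type rather than on a fixed window, can be sketched as follows. The word-to-type mapping and the token-level scope rules are crude illustrations of ours; the paper's model derives scopes from syntactic information.

```python
# Different negation words belong to different types, and each type has its
# own scope rule (the rules here are deliberately simplistic approximations).
NEGATION_TYPES = {
    "nicht": "clause",  # negates the rest of its clause
    "kein":  "np",      # negates the noun phrase it introduces
}

def scope(tokens, neg_index):
    """Token indices covered by the negation word at neg_index."""
    neg_type = NEGATION_TYPES.get(tokens[neg_index])
    if neg_type == "np":
        # crude NP approximation: just the following token
        return [neg_index + 1]
    if neg_type == "clause":
        return list(range(neg_index + 1, len(tokens)))
    return []

print(scope(["das", "ist", "nicht", "wirklich", "gut"], 2))  # [3, 4]
print(scope(["kein", "Erfolg", "heute"], 0))                 # [1]
```

A window-based model, by contrast, would return the same neighbourhood for both negators, which is exactly what the abstract argues against.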
We present the pilot edition of the GermEval Shared Task on the Identification of Offensive Language. This shared task deals with the classification of German tweets from Twitter. It comprises two tasks, a coarse-grained binary classification task and a fine-grained multi-class classification task. The shared task had 20 participants submitting 51 runs for the coarse-grained task and 25 runs for the fine-grained task. Since this is a pilot task, we describe the process of extracting the raw data for the data collection and the annotation schema. We evaluate the results of the systems submitted to the shared task. The shared task homepage can be found at https://projects.cai.fbi.h-da.de/iggsa/
We address the detection of abusive words. The task is to identify such words among a set of negative polar expressions. We propose novel features employing information from both corpora and lexical resources. These features are calibrated on a small manually annotated base lexicon which we use to produce a large lexicon. We show that the word-level information we learn cannot be equally derived from a large dataset of annotated microposts. We demonstrate the effectiveness of our (domain-independent) lexicon in the cross-domain detection of abusive microposts.
We discuss the impact of data bias on abusive language detection. We show that classification scores on popular datasets reported in previous work are much lower under realistic settings in which this bias is reduced. Such biases are most notably observed on datasets that are created by focused sampling instead of random sampling. Datasets with a higher proportion of implicit abuse are more affected than datasets with a lower proportion.
We examine predicative adjectives as an unsupervised criterion to extract subjective adjectives. We do not only compare this criterion with a weakly supervised extraction method but also with gradable adjectives, i.e. another highly subjective subset of adjectives that can be extracted in an unsupervised fashion. In order to prove the robustness of this extraction method, we will evaluate the extraction with the help of two different state-of-the-art sentiment lexicons (as a gold standard).
Implicitly abusive language – What does it actually look like and why are we not getting there?
(2021)
Abusive language detection is an emerging field in natural language processing which has recently received a large amount of attention. Still, the success of automatic detection is limited. In particular, the detection of implicitly abusive language, i.e. abusive language that is not conveyed by abusive words (e.g. dumbass or scum), does not work well. In this position paper, we explain why existing datasets make learning implicit abuse difficult and what needs to be changed in the design of such datasets. Arguing for a divide-and-conquer strategy, we present a list of subtypes of implicitly abusive language and formulate research tasks and questions for future research.
We propose to use abusive emojis, such as the “middle finger” or “face vomiting”, as a proxy for learning a lexicon of abusive words. Since it represents extralinguistic information, a single emoji can co-occur with different forms of explicitly abusive utterances. We show that our approach generates a lexicon that offers the same performance in cross-domain classification of abusive microposts as the most advanced lexicon induction method. That method, in contrast, depends on manually annotated seed words and expensive lexical resources for bootstrapping (e.g. WordNet). We demonstrate that the same emojis can also be effectively used in languages other than English. Finally, we also show that emojis can be exploited for classifying mentions of ambiguous words, such as “fuck” and “bitch”, into generally abusive and just profane usages.
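A minimal sketch of the proxy idea: score words by their association with abusive emojis across posts and keep the positively associated ones as a lexicon. The toy data and the choice of PMI as the association measure are our assumptions for illustration; the paper's induction may differ.

```python
import math
from collections import Counter

ABUSIVE_EMOJIS = {"\U0001F595", "\U0001F92E"}  # middle finger, face vomiting

def abusive_lexicon(posts, min_pmi=0.0):
    """Score words by PMI with posts containing an abusive emoji."""
    word_counts, joint_counts = Counter(), Counter()
    emoji_posts = 0
    for tokens in posts:
        has_emoji = any(t in ABUSIVE_EMOJIS for t in tokens)
        emoji_posts += has_emoji
        for w in set(tokens) - ABUSIVE_EMOJIS:
            word_counts[w] += 1
            joint_counts[w] += has_emoji
    n = len(posts)
    lex = {}
    for w, c in word_counts.items():
        if joint_counts[w]:
            pmi = math.log((joint_counts[w] / n) / ((c / n) * (emoji_posts / n)))
            if pmi > min_pmi:
                lex[w] = pmi
    return lex

posts = [
    ["you", "idiot", "\U0001F595"],
    ["nice", "day"],
    ["total", "idiot", "\U0001F92E"],
]
lex = abusive_lexicon(posts)
```

Because the emoji signal is extralinguistic, no seed words or lexical resources are needed, which is the advantage the abstract emphasises.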
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that are easy to retrieve. A typical relation type that meets these requirements are pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer has already in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
Knowledge Acquisition with Natural Language Processing in the Food Domain: Potential and Challenges
(2012)
In this paper, we present an outlook on the effectiveness of natural language processing (NLP) in extracting knowledge for the food domain. We identify potential scenarios that we think are particularly suitable for NLP techniques. As a source for extracting knowledge we will highlight the benefits of textual content from social media. Typical methods that we think would be suitable will be discussed. We will also address potential problems and limits that the application of NLP methods may yield.
In this paper, we examine methods to automatically extract domain-specific knowledge from the food domain from unlabeled natural language text. We employ different extraction methods ranging from surface patterns to co-occurrence measures applied on different parts of a document. We show that the effectiveness of a particular method depends very much on the relation type considered and that there is no single method that works equally well for every relation type. We also examine a combination of extraction methods and also consider relationships between different relation types. The extraction methods are applied both on a domain-specific corpus and the domain-independent factual knowledge base Wikipedia. Moreover, we examine an open-domain lexical ontology for suitability.
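One of the co-occurrence measures mentioned above can be sketched as sentence-level PMI between candidate food items. The measure and the toy data are illustrative choices of ours, not necessarily the exact setup evaluated in the paper.

```python
import math
from collections import Counter

def cooccurrence_pmi(sentences, pairs):
    """Sentence-level PMI for candidate item pairs.

    `sentences` is a list of token lists; `pairs` lists the candidate
    relation instances to score. Positive scores suggest the two items
    co-occur more often than chance.
    """
    n = len(sentences)
    items = {x for pair in pairs for x in pair}
    item_counts, pair_counts = Counter(), Counter()
    for tokens in sentences:
        present = items & set(tokens)
        for x in present:
            item_counts[x] += 1
        for a, b in pairs:
            if a in present and b in present:
                pair_counts[(a, b)] += 1
    return {
        (a, b): math.log((c / n) / ((item_counts[a] / n) * (item_counts[b] / n)))
        for (a, b), c in pair_counts.items()
    }

sentences = [["fish", "and", "chips"], ["fish", "soup"], ["chips", "alone"]]
scores = cooccurrence_pmi(sentences, [("fish", "chips")])
```

Changing the unit from sentences to paragraphs or whole documents gives the "different parts of a document" variants the abstract refers to.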
Automatic Food Categorization from Large Unlabeled Corpora and Its Impact on Relation Extraction
(2014)
We present a weakly-supervised induction method to assign semantic information to food items. We consider two categorization tasks: food-type classification and the distinction of whether a food item is composite or not. The categorizations are induced by a graph-based algorithm applied to a large unlabeled domain-specific corpus. We show that the usage of a domain-specific corpus is vital. We not only outperform a manually designed open-domain ontology but also prove the usefulness of these categorizations in relation extraction, outperforming state-of-the-art features that include syntactic information and Brown clustering.
Negation is an important contextual phenomenon that needs to be addressed in sentiment analysis. Next to common negation function words, such as not or none, there is also a considerably large class of negation content words, also referred to as shifters, such as the verbs diminish, reduce or reverse. However, many of these shifters are ambiguous. For instance, spoil as in spoil your chance reverses the polarity of the positive polar expression chance while in spoil your loved ones, no negation takes place. We present a supervised learning approach to disambiguating verbal shifters. Our approach takes into consideration various features, particularly generalization features.
We present a descriptive analysis on the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset for opinion role extraction of its size. Our analysis discusses the individual properties of the three components, subjective expressions, sources and targets and their relations towards each other. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
We examine the task of separating types from brands in the food domain. Framing the problem as a ranking task, we convert simple textual features extracted from a domain-specific corpus into a ranker without the need for labeled training data. Such a method should rank brands (e.g. sprite) higher than types (e.g. lemonade). Apart from that, we also exploit knowledge induced by semi-supervised graph-based clustering for two different purposes. On the one hand, we produce an auxiliary categorization of food items according to the Food Guide Pyramid and assume that a food item is a type when it belongs to a category unlikely to contain brands. On the other hand, we directly model the task of brand detection using seeds provided by the output of the textual ranking features. We also harness Wikipedia articles as an additional knowledge source.
In this paper, we explore different linguistic structures encoded as convolution kernels for the detection of subjective expressions. The advantage of convolution kernels is that complex structures can be directly provided to a classifier without deriving explicit features. The feature design for the detection of subjective expressions is fairly difficult and there currently exists no commonly accepted feature set. We consider various structures, such as constituency parse structures, dependency parse structures, and predicate-argument structures. In order to generalize from lexical information, we additionally augment these structures with clustering information and the task-specific knowledge of subjective words. The convolution kernels will be compared with a standard vector kernel.
In order to automatically extract opinion holders, we propose to harness the contexts of prototypical opinion holders, i.e. common nouns, such as experts or analysts, that describe particular groups of people whose profession or occupation is to form and express opinions towards specific items. We assess their effectiveness in supervised learning, where these contexts are regarded as labelled training data, and in rule-based classification, which uses predicates that frequently co-occur with mentions of the prototypical opinion holders. Finally, we also examine to what extent knowledge gained from these contexts can compensate for the lack of large amounts of labeled training data in supervised learning by considering various sizes of actually labeled training sets.
In opinion mining, there has been very little work investigating semi-supervised machine learning for document-level polarity classification. We show that semi-supervised learning performs significantly better than supervised learning when only few labelled examples are available. Semi-supervised polarity classifiers rely on a predictive feature set. (Semi-)manually built polarity lexicons are one option, but they are expensive to obtain and do not necessarily work in an unknown domain. We show that extracting frequently occurring adjectives and adverbs from an unlabeled set of in-domain documents is an inexpensive alternative which works equally well across different domains.
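The inexpensive feature-set alternative can be sketched as follows, assuming POS-tagged in-domain documents with Penn-Treebank-style tags; the frequency cut-off is an assumption of ours.

```python
from collections import Counter

def adjective_adverb_features(tagged_docs, top_n=100):
    """Build a feature vocabulary from the most frequent adjectives and
    adverbs in unlabeled in-domain documents.

    `tagged_docs` is a list of documents, each a list of (word, tag) pairs
    with Penn-Treebank-style tags (JJ* = adjectives, RB* = adverbs).
    """
    counts = Counter(
        word.lower()
        for doc in tagged_docs
        for word, tag in doc
        if tag.startswith("JJ") or tag.startswith("RB")
    )
    return [w for w, _ in counts.most_common(top_n)]

docs = [
    [("great", "JJ"), ("movie", "NN"), ("really", "RB")],
    [("great", "JJ"), ("plot", "NN")],
]
print(adjective_adverb_features(docs, top_n=2))  # ['great', 'really']
```

The resulting word list can then serve as the feature set of a semi-supervised polarity classifier, with no labelled data or hand-built lexicon required.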
Bootstrapping Supervised Machine-learning Polarity Classifiers with Rule-based Classification
(2010)
In this paper, we explore the effectiveness of bootstrapping supervised machine-learning polarity classifiers with the output of domain-independent rule-based classifiers. The benefit of this method is that no labeled training data are required. Still, the method allows us to capture in-domain knowledge by training the supervised classifier on in-domain features, such as bag of words.
We investigate how important the quality of the rule-based classifier is and which features are useful for the supervised classifier. The former addresses the question of to what extent constructions relevant for polarity classification, such as word sense disambiguation, negation modeling, or intensification, matter for this self-training approach. We not only compare how this method relates to conventional semi-supervised learning but also examine how it performs in more difficult settings in which classes are not balanced and mixed reviews are included in the dataset.
Though polarity classification has been extensively explored at the document level, there has been little work investigating feature design at the sentence level. Due to the small number of words within a sentence, polarity classification at the sentence level differs substantially from document-level classification in that the resulting bag-of-words feature vectors tend to be very sparse, which lowers classification accuracy.
In this paper, we show that performance can be improved by adding features specifically designed for sentence-level polarity classification. We consider both explicit polarity information and various linguistic features. A great proportion of the improvement that can be obtained by using polarity information can also be achieved by using a set of simple domain-independent linguistic features.
Opinion holder extraction is one of the important subtasks in sentiment analysis. The effective detection of an opinion holder depends on the consideration of various cues on various levels of representation, though they are hard to formulate explicitly as features. In this work, we propose to use convolution kernels for that task which identify meaningful fragments of sequences or trees by themselves. We not only investigate how different levels of information can be effectively combined in different kernels but also examine how the scope of these kernels should be chosen. In general relation extraction, the two candidate entities thought to be involved in a relation are commonly chosen to be the boundaries of sequences and trees. The definition of boundaries in opinion holder extraction, however, is less straightforward since there might be several expressions beside the candidate opinion holder to be eligible for being a boundary.
We investigate the task of detecting reliable statements about food-health relationships from natural language texts. For that purpose, we created a specially annotated web corpus from forum entries discussing the healthiness of certain food items. We examine a set of task-specific features (mostly) based on linguistic insights that are instrumental in finding utterances that are commonly perceived as reliable. These features are incorporated in a supervised classifier and compared against standard features that are widely used for various tasks in natural language processing, such as bag of words, part-of-speech and syntactic parse information.
We examine the task of detecting implicitly abusive comparisons (e.g. “Your hair looks like you have been electrocuted”). Implicitly abusive comparisons are abusive comparisons in which abusive words (e.g. “dumbass” or “scum”) are absent. We detail the process of creating a novel dataset for this task via crowdsourcing that includes several measures to obtain a sufficiently representative and unbiased set of comparisons. We also present classification experiments that include a range of linguistic features that help us better understand the mechanisms underlying abusive comparisons.
We address the task of distinguishing implicitly abusive sentences about identity groups (“Muslims contaminate our planet”) from other group-related negative polar sentences (“Muslims despise terrorism”). Implicitly abusive language consists of utterances whose abusiveness is not conveyed by abusive words (e.g. “bimbo” or “scum”). So far, the detection of such utterances could not be properly addressed, since existing datasets displaying a high degree of implicit abuse are fairly biased. Following the recently proposed strategy of solving implicit abuse by separately addressing its different subtypes, we present a new, focused and less biased dataset that consists of the subtype of atomic negative sentences about identity groups. For that task, we model components that each address one facet of such implicit abuse, i.e. depiction as perpetrators, aspectual classification and non-conformist views. The approach generalizes across different identity groups and languages.
A supervised learning approach for the extraction of opinion sources and targets from German text
(2019)
We present the first systematic supervised learning approach for the extraction of opinion sources and targets from German language data. A wide choice of different features is presented, particularly syntactic features and generalization features. We point out specific differences between opinion sources and targets. Moreover, we explain why implicit sources can be extracted even with fairly generic features. In order to ensure comparability, our classifier is trained and tested on the dataset of the STEPS shared task.
We present an approach to the new task of opinion holder and target extraction on opinion compounds. Opinion compounds (e.g. user rating or victim support) are noun compounds whose head is an opinion noun. We not only examine features known to be effective for noun compound analysis, such as paraphrases and semantic classes of heads and modifiers, but also propose novel features tailored to this new task. Among them, we examine paraphrases that jointly consider holders and targets, a verb detour in which noun heads are replaced by related verbs, a global head constraint allowing inferencing between different compounds, and the categorization of the sentiment view that the head conveys.
We report on the two systems we built for Task 1 of the German Sentiment Analysis Shared Task, the task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS). The first system is a rule-based system relying on a predicate lexicon specifying extraction rules for verbs, nouns and adjectives, while the second is a translation-based system that has been obtained with the help of the (English) MPQA corpus.
This paper presents a survey on the role of negation in sentiment analysis. Negation is a very common linguistic construction that affects polarity and, therefore, needs to be taken into consideration in sentiment analysis.
We will present various computational approaches modeling negation in sentiment analysis. We will, in particular, focus on aspects such as level of representation used for sentiment analysis, negation word detection and scope of negation. We will also discuss limits and challenges of negation modeling on that task.
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language, where segmentation can largely be derived from orthographic punctuation marks, the basis for segmenting spoken language corpora is not predetermined by the primary data but has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several ways of segmenting have been proposed,
some of which are based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German and show that these guidelines are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure, the word count per segment and unit type, and observed that the word count and the distribution of each unit type differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties that are intrinsic to the social interactions the participants are involved in. This can be used for further analysis of social interaction and opens up the possibility of automatic segmentation of transcripts.
The switch from classroom teaching to digital teaching and learning formats necessitated by the Covid-19 pandemic posed a challenge to teachers and students alike. Within a very short time, the use of platforms and digital tools had to be learned and tested. This contribution presents selected services and tools from CLARIAH-DE and explains how the digital research infrastructure can also support teachers and students in the context of digital teaching.
To make existing and new teaching and training materials in the digital humanities easier to find and access, they should be made available in a central registry. Within the CLARIAH-DE project – initially to fulfil a project milestone – a solution was sought that offers a unified search across freely accessible and reusable teaching and training materials on research methods, procedures and tools in the digital humanities, spread across different platforms and repositories.
Online Access Tools for Spoken German: The Resources of the Deutsches Spracharchiv in a Database
(2002)
This paper presents details of the modernization of the Deutsches Spracharchiv (DSAv) and explores future possibilities for linguistic documentation and analysis using the Web. The Institut für Deutsche Sprache (IDS) in Mannheim is the central institution for linguistic research in Germany, and the DSAv within the IDS is the centre for the documentation and study of spoken German. Its archives include the largest collection of sound recordings of spoken German (dialects and colloquial speech, including many extinct dialects of former German territories in Eastern Europe), altogether more than 15,000 recordings. The limited documentation and accessibility of this material has long been felt to be a significant deficit. The ability to edit the sound signal digitally offers much easier access to spoken language. By integrating the already existing information about the corpora and the transcribed texts into an information and full-text database, and by linking these data with the acoustic signal (alignment), a data pool emerges with considerably better documentation of the materials and fast, direct access to the recordings. The DSAv thereby opens up entirely new research questions, both for work at the IDS and for linguistics as a whole.
This contribution gives an overview of the development and tasks of the Fachverband Deutsch als Fremdsprache (FaDaF) since its foundation in 1989/90. It traces the lines along which the association has developed: as the successor organisation to the Arbeitskreis Deutsch als Fremdsprache beim DAAD (AkDaF), it took over, continued and further developed the latter's tasks.
Corpus researchers, like many other scientific disciplines, are under continual pressure to demonstrate accountability and reproducibility in their work. This is unsurprisingly difficult when the researcher is faced with a wide array of methods and tools: simply tracking the operations performed can be problematic, especially when toolchains are configured by their developers but remain largely a black box to the user. Here we present a scheme for encoding this metadata inside the corpus files themselves in a structured data format, along with a proof-of-concept tool to record the operations performed on a file.
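The idea of recording operations inside the file itself can be sketched as an append-only provenance log in a structured metadata block. The schema below (a `provenance` list of operation records) is a hypothetical illustration, not the format proposed in the paper.

```python
import json
import datetime

def record_operation(metadata, tool, version, params):
    """Append a provenance record to a corpus file's metadata block.

    Each record captures which tool ran, in which version, with which
    parameters, and when -- enough to replay or audit the toolchain.
    """
    entry = {
        "tool": tool,
        "version": version,
        "params": params,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    metadata.setdefault("provenance", []).append(entry)
    return metadata

meta = {"corpus": "example"}
record_operation(meta, "tokenizer", "1.2.0", {"lang": "en"})
print(json.dumps(meta, indent=2))
```

Because each processing step appends rather than overwrites, the full chain of operations travels with the data, even when the file moves between tools.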
Naming and titling have been discussed in sociolinguistics as markers of status or solidarity. However, these functions have not been studied on a larger scale or for social media data. We collect a corpus of tweets mentioning presidents of six G20 countries by various naming forms. We show that naming variation relates to stance towards the president in a way that is suggestive of a framing effect mediated by respectfulness. This confirms sociolinguistic theory of naming and titling as markers of status.
Data management is one of the core activities of all CLARIN centres providing data and services for academia. In PARTHENOS, European initiatives and projects in the humanities and social sciences have come together to compare policies and procedures; data management is one of the areas of interest. The data management landscape shows considerable proliferation, for which an abstraction level is introduced to help centres, such as CLARIN centres, provide the best possible services to users with data management needs.
This paper reports on the restructuring of a bilingual (Greek Sign Language, GSL – Modern Greek) lexicographic database with the use of the WordNet semantic and lexical database. The research was carried out by the Institute for Language and Speech Processing (ILSP) / Athena R.C. team within the framework of the European project Easier, which will produce a framework for intelligent machine translation to bring down language barriers among several spoken/written and sign languages. The paper describes the ILSP team's contribution to a multilingual repository of signs and their corresponding translations and, as its main focus, the organisation and enhancement of a bilingual dictionary (GSL – Modern Greek) resulting from this mapping. The methodology relies on WordNet and, more specifically, the Open Multilingual WordNet (OMW) tool to map content in GSL to WordNet synsets.
This contribution offers a qualitative and quantitative analysis of reformulations in interactional data. To build the study corpus, we rely on a tool for the automatic detection of other-repetitions, which are regarded as cues of reformulation. After presenting the considerations that guided the design of the tool, we describe the configuration of this resource, which we tested on four recordings from the CLAPI database. The study underlines the relevance of the interactional approach to the analysis of other-repetitions, showing their multiple functions, notably in reformulation practices in conversation.
Once a new word or a new meaning is added to a monolingual dictionary, the lexicographer has to provide a definition of this item. This paper focuses on the methodological challenges in writing such definitions. After a short discussion of the central terminology (method and definition), the article describes the factors which inform this process: linguistic theories, linguistic and lexicographical methods, and types of definitions. Using the example of elexiko, a dictionary project of the Institute for the German Language (IDS) in Mannheim, Germany, the paper finally showcases the compilation of definitions in a monolingual online dictionary of contemporary German.
For decades, numerous metalexicographers and lexicographers have repeatedly called for a more extensive engagement with dictionaries in German mother-tongue instruction, including at upper secondary level. Nevertheless, vocabulary work and the use of dictionaries play at best a marginal role in most curricula, didactic approaches and textbooks. Following a brief survey of this situation, the present contribution examines to what extent elexiko, an online dictionary of contemporary German, could usefully be integrated into German mother-tongue instruction at upper secondary level (Sekundarstufe II). Using the example of the meaning-explanation section, it examines whether upper secondary students are a plausible target group for elexiko, and for which linguistic topics vocabulary work with elexiko's semantic paraphrases lends itself.
This article reports on the ongoing CoRoLa project, which aims to create a reference corpus of contemporary Romanian (from 1945 onwards), open for free online use by researchers in linguistics and language processing, teachers of Romanian, and students. We are investing serious effort in persuading large publishing houses and other owners of IPR on relevant language data to join us and contribute selections of their text and speech repositories to the project. The CoRoLa project is coordinated by two computer science institutes of the Romanian Academy, but benefits from the cooperation and consultancy of professional linguists from other institutes of the Romanian Academy. We foresee a written component of more than 500 million word forms and a speech component of about 300 hours of recordings. The entire collection of texts (covering all functional styles of the language) will be pre-processed, annotated at several levels, and documented with standardized metadata. Pre-processing includes cleaning the data and harmonising the diacritics, sentence splitting and tokenization. Annotation will include morpho-lexical tagging and lemmatization in a first stage, followed by syntactic, semantic and discourse annotation in a later stage.
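"Harmonising the diacritics" in Romanian text typically means normalising the legacy cedilla characters (ş, ţ) that older encodings produced to the correct comma-below forms (ș, ț). As a minimal sketch of this pre-processing step (the exact normalisation rules CoRoLa applies are not specified in the abstract):

```python
# Map legacy cedilla forms to the correct Romanian comma-below forms.
CEDILLA_TO_COMMA = str.maketrans({
    "\u015F": "\u0219",  # ş -> ș (s with cedilla -> s with comma below)
    "\u015E": "\u0218",  # Ş -> Ș
    "\u0163": "\u021B",  # ţ -> ț (t with cedilla -> t with comma below)
    "\u0162": "\u021A",  # Ţ -> Ț
})

def harmonise_diacritics(text):
    """Normalise legacy cedilla diacritics in Romanian text."""
    return text.translate(CEDILLA_TO_COMMA)

print(harmonise_diacritics("Mo\u015F Cr\u0103ciun \u015Fi \u0163ara"))
```

Applying such a normalisation before tokenization ensures that word forms differing only in their diacritic encoding are counted as the same type.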
The Lehnwortportal Deutsch (LWPD) is an online information system on words borrowed from German into other languages. It is based on a growing number of lexicographic resources for various languages and offers a simple cross-resource search function. The poster presents an onomasiological search function for the LWPD that is currently under development.
The transfer of research data management from one institution to another infrastructural partner is anything but trivial, but can be required, for instance, when an institution faces reorganisation or closure. In a case study, we describe the migration of all research data, identify the challenges we encountered, and discuss how we addressed them. The study shows that moving research data management to another institution is a feasible but potentially costly enterprise. Being able to demonstrate the feasibility of research data migration supports the stance of data archives that users can expect high levels of trust and reliability when it comes to data safety and sustainability.
The Component MetaData Infrastructure (CMDI) is the dominant framework for describing language resources according to ISO 24622 (ISO/TC 37/SC 4, 2015). Within the CLARIN world, CMDI has become a huge success. The Virtual Language Observatory (VLO) now holds over 800,000 resources, all described with CMDI-based metadata. With the metadata being harvested from about thirty centres, there is a considerable amount of heterogeneity in the data. In part, controlled vocabularies are used to keep this heterogeneity in check, for example when describing the type of a resource or the country it originates from. However, when CMDI data refers to the names of persons or organisations, strings are used in a rather uncontrolled manner. Here, the CMDI community can learn from libraries and archives, which maintain standardised lists for all kinds of names. In this paper, we advocate the use of freely available authority files that support the unique identification of persons, organisations, and more. The systematic use of authority records enhances the quality of the metadata, thereby improving the faceted browsing experience in the VLO, and also prepares the ground for sharing CMDI-based metadata with the data in library catalogues.
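The core mechanism behind authority files is replacing free-form name strings with a persistent identifier pointing at a canonical record. A minimal sketch, assuming a toy in-memory authority table (real records would come from authority files such as GND, VIAF, or ISNI, and the identifier shown is purely illustrative):

```python
# Toy authority table: name variants map to one canonical record with a
# persistent identifier. The "gnd:..." ID here is used for illustration only.
AUTHORITY = {
    "goethe, johann wolfgang von": {
        "id": "gnd:118540238",
        "name": "Goethe, Johann Wolfgang von",
    },
    "j. w. goethe": {
        "id": "gnd:118540238",
        "name": "Goethe, Johann Wolfgang von",
    },
}

def resolve_name(raw):
    """Map an uncontrolled name string to an authority record, if known."""
    return AUTHORITY.get(raw.strip().lower())

rec = resolve_name("J. W. Goethe")
print(rec["id"] if rec else "unresolved")
```

Because both spelling variants resolve to the same identifier, faceted browsing can merge them into a single facet value instead of showing two apparently distinct persons.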
The Component MetaData Infrastructure (CMDI) provides a lego-brick framework for the creation, use and re-use of self-defined metadata formats. The design of CMDI can be a force for good, but history shows that it has often been misunderstood or badly executed. Consequently, it has led the community towards the dark ages of metadata clutter rather than the bright side of semantic interoperability. In this abstract, we report on the condition of CMDI but also outline an agenda to make the CMDI world a better place to use, share and profit from metadata.
To optimize the sharing and reuse of existing data, many funding organizations now require researchers to specify a management plan for research data. In such a plan, researchers are supposed to describe the entire life cycle of the research data they are going to produce, from data creation to formatting, interpretation, documentation, short-term storage, long-term archiving and data re-use. To support researchers with this task, we built DMPTY, a wizard that guides researchers through the essential aspects of managing data, elicits information from them, and finally, generates a document that can be further edited and linked to the original research proposal.
Lexicon schemas and their use are discussed in this paper from the perspective of lexicographers and field linguists. A variety of lexicon schemas have been developed, with goals ranging from computational lexicography (DATR) through archiving (LIFT, TEI) to standardization (LMF, FSR). A number of requirements for lexicon schemas are given. The schemas are introduced and compared in terms of convertibility and usability for this particular user group, using a common lexicon entry and providing examples for each schema under consideration. The formats are assessed, and the final recommendation to potential users is to request standards compliance from the developers of the tools they use. This paper should foster a discussion between the authors of standards, lexicographers and field linguists.