Verbs may be attributed higher agency than other grammatical categories. In Study 1, we confirmed this hypothesis with archival datasets comprising verbs (N = 950) and adjectives (N = 2115). We then investigated whether verbs (vs. adjectives) increase message effectiveness. In three experiments presenting NGO campaigns (Studies 2 and 3) or corporate campaigns (Study 4) in verb or adjective form, we demonstrate the hypothesized relationship. Across studies (overall N = 721), grammatical agency consistently increased message effectiveness. Semantic agency varied across contexts, either increasing (Study 2), not affecting (Study 3), or decreasing (Study 4) the effectiveness of the message. Overall, the experiments provide insights into the meta-semantic effects of verbs, demonstrating how grammar may influence communication outcomes.
We describe a simple procedure for the automatic creation of word-level alignments between printed documents and their respective full-text versions. The procedure is unsupervised, uses standard, off-the-shelf components only, and reaches an F-score of 85.01 in the basic setup and up to 86.63 when using pre- and post-processing. Potential areas of application are manual database curation (incl. document triage) and biomedical expression OCR.
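The abstract does not spell out the alignment algorithm, so as a hedged illustration only: token-level alignment of a noisy OCR stream against its clean full text can be sketched with Python's standard difflib (the function name and example tokens below are my own, not the paper's procedure, and its pre-/post-processing steps are omitted):

```python
from difflib import SequenceMatcher

def align_words(ocr_tokens, gold_tokens):
    """Return (ocr_index, gold_index) pairs for exactly matching tokens.

    Minimal sketch of word-level alignment between OCR output and the
    corresponding full-text version; OCR-damaged tokens are left unpaired.
    """
    sm = SequenceMatcher(a=ocr_tokens, b=gold_tokens, autojunk=False)
    pairs = []
    for block in sm.get_matching_blocks():
        for k in range(block.size):
            pairs.append((block.a + k, block.b + k))
    return pairs

ocr = "the patlent was exam1ned daily".split()
gold = "the patient was examined daily".split()
print(align_words(ocr, gold))  # [(0, 0), (2, 2), (4, 4)]
```

A real system would additionally match the unpaired, OCR-garbled tokens by edit distance, which is presumably where the reported pre- and post-processing gains come from.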
This paper addresses the challenge of creating a knowledge graph from a corpus of historical encyclopedias, with a special focus on word sense alignment (WSA) and disambiguation (WSD). More precisely, we examine WSA and WSD approaches based on article similarity to link messy historical data, utilizing Wikipedia as a ground-truth component, since the lack of a critical overlap in content, paired with the amount of variation between and within the encyclopedias, does not allow for choosing a “baseline” encyclopedia to align the others to. Additionally, we compare the disambiguation performance of conservative methods such as the Lesk algorithm to more recent approaches, i.e., using language models to disambiguate senses.
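As background on the "conservative" side of that comparison, the Lesk family of methods picks the sense whose gloss overlaps most with the target word's context. A textbook simplified-Lesk sketch (the sense ids and glosses below are invented for illustration; the paper works with historical encyclopedia articles and Wikipedia, not these toy glosses):

```python
def simplified_lesk(context, sense_glosses):
    """Pick the sense whose gloss shares the most word types with the context.

    sense_glosses: dict mapping sense id -> gloss string. Ties go to the
    first sense encountered; real implementations add stopword removal,
    lemmatization, and weighting.
    """
    ctx = set(context.lower().split())
    best, best_overlap = None, -1
    for sense, gloss in sense_glosses.items():
        overlap = len(ctx & set(gloss.lower().split()))
        if overlap > best_overlap:
            best, best_overlap = sense, overlap
    return best

glosses = {
    "bank/finance": "an institution that accepts deposits and lends money",
    "bank/river": "sloping land beside a body of water",
}
print(simplified_lesk("the boat ran aground on the muddy land beside the water",
                      glosses))  # bank/river
```

Language-model approaches replace the bag-of-words overlap with similarity between contextual embeddings of the target article and candidate senses.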
This article explores the relation between word order and response latency, focusing on responses to question-word questions. Qualitative (multimodal) and quantitative analyses of naturally occurring conversations in French—where question-words can occur in initial, medial, or final position within the question—show that variation in word order affects the timing of responses. It is argued that this is so because word order provides a differential basis for action ascription, creating different temporal opportunities for projecting the recipient’s next relevant action. The frequent occurrence of early responses to questions with an initial question-word, in particular, stresses the importance of the recognition point of an action under way for response timing and shows respondents’ pervasive orientation to sequential progressivity. Findings highlight how lexico-syntactic trajectories of emergent turns, prior talk and actions, material and bodily features of interaction, and participants’ shared expectations conspire in shaping the time-courses of action ascription and action projection.
Who is we? Disambiguating the referents of first person plural pronouns in parliamentary debates
(2021)
This paper investigates the use of first person plural pronouns as a rhetorical device in political speeches. We present an annotation schema for disambiguating pronoun references and use our schema to create an annotated corpus of debates from the German Bundestag. We then use our corpus to learn to automatically resolve pronoun referents in parliamentary debates. We explore the use of data augmentation with weak supervision to further expand our corpus and report preliminary results.
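Weak-supervision setups of the kind mentioned typically start from heuristic labeling functions that vote on a label or abstain. A deliberately toy sketch (the label names and lexical cues below are invented and do not reflect the paper's annotation schema):

```python
# Hypothetical lexical cues; the paper's schema and features are richer.
PARTY_CUES = {"fraktion", "partei", "koalition"}
COUNTRY_CUES = {"deutschland", "land", "bürger"}

def label_wir(window):
    """Assign a coarse referent label to 'wir' from its context window,
    or abstain when no cue fires (standard weak-supervision behavior)."""
    words = set(window.lower().split())
    if words & PARTY_CUES:
        return "PARTY"
    if words & COUNTRY_CUES:
        return "COUNTRY"
    return "ABSTAIN"

print(label_wir("wir als fraktion haben beschlossen"))  # PARTY
print(label_wir("wir in deutschland wissen"))           # COUNTRY
print(label_wir("wir sehen das anders"))                # ABSTAIN
```

Noisy labels produced this way are then aggregated and used to augment the manually annotated training corpus.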
Research on multimodal interaction has shown that simultaneity of embodied behavior and talk is constitutive for social action. In this study, we demonstrate different temporal relationships between verbal and embodied actions. We focus on uses of German darf/kann ich? (“may/can I?”) in which speakers initiate, or even complete, the embodied action that is addressed by the turn before the recipient’s response. We argue that through such embodied conduct, the speaker bodily enacts high agency, which is at odds with the low deontic stance they express through their darf/kann ich?-TCUs. In doing so, speakers presuppose that the intersubjective permissibility of the action is highly probable or even certain. Moreover, we demonstrate how the speaker’s embodied action, joint perceptual salience of referents, and the projectability of the action addressed with darf/kann ich? allow for a lean syntactic design of darf/kann ich?-TCUs (i.e., pronominalization, object omission, and main verb omission). Our findings underscore the reflexive relationship between lean syntax, sequential organization, and multimodal conduct.
N-grams are of utmost importance for modern linguistics and language technology. The legal status of n-grams, however, raises many practical questions. Traditionally, text snippets are considered copyrightable if they meet the originality criterion, but no clear indicators as to the minimum length of original snippets exist; moreover, the solutions adopted in some EU Member States (the paper cites German and French law as examples) are considerably different. Furthermore, recent developments in EU law (the CJEU's Pelham decision and the new right of press publishers) also provide interesting arguments in this debate. The paper presents the existing approaches to the legal protection of n-grams and tries to formulate some clear guidelines as to the length of n-grams that can be freely used and shared.
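For readers outside NLP, the object whose legal status is debated above is simple to state: an n-gram is any contiguous run of n tokens, and shared resources are often just frequency tables of such runs. A minimal extraction sketch (function names my own):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

toks = "to be or not to be".split()
print(ngrams(toks, 2))
# [('to', 'be'), ('be', 'or'), ('or', 'not'), ('not', 'to'), ('to', 'be')]
print(Counter(ngrams(toks, 2)).most_common(1))  # [(('to', 'be'), 2)]
```

The legal question is precisely at which n such snippets stop being unoriginal fragments and start reproducing protectable expression.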
The German e-dictionary documenting confusables Paronyme – Dynamisch im Kontrast contains lexemes which are similar in sound, spelling and/or meaning, e.g. autoritär/autoritativ, innovativ/innovatorisch. These can cause uncertainty as to their appropriate use. The monolingual guide could be easily expanded to become a multilingual platform for commonly confused items by incorporating language modules. The value of this visionary resource is manifold. Firstly, e-dictionaries of confusables have not yet been compiled for most European languages; consequently, the German resource could serve as a model of practice. Secondly, it would be able to explain the usage of false friends. Thirdly, cognates and loan word equivalents would be offered for simultaneous consultation. Fourthly, users could find out whether, for example, a German pair is semantically equivalent to a pair in another language. Finally, it would inform users about cases where a pair of semantically similar words in one language has only one lexical counterpart in another language. This paper is an appeal for visionary projects and collaborative enterprises. I will outline the dictionary’s layout and contents as shown by its contrastive entries. I will demonstrate potential additions, which would make it possible to build up a large platform for easily misused words in different languages.
Validating the Performativity Hypothesis to Neg-Raising using corpus data: Evidence from Polish
(2021)
Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus
(2021)
Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. There are some freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora, but they have quality issues, especially for low-resource languages, and recreating or updating them is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on more recent data. Also, unlike OSCAR, the metadata is available at the document level. We release our pipeline under an open-source license and publish the corpus under a research-only license.
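Ungoliant itself is implemented in Rust; purely to illustrate the "modular, parameterizable" filter-pipeline idea with document-level metadata, here is a Python sketch (function names, fields, and thresholds are invented, not Ungoliant's API):

```python
def length_filter(doc, min_chars=100):
    """Drop very short documents."""
    return len(doc["text"]) >= min_chars

def lang_filter(doc, wanted="en"):
    """Keep one target language. Real pipelines run a language
    identifier here; this sketch trusts a precomputed 'lang' field."""
    return doc.get("lang") == wanted

def run_pipeline(docs, filters):
    """Yield documents that pass every filter, with their metadata
    kept attached at the document level."""
    for doc in docs:
        if all(f(doc) for f in filters):
            yield doc

docs = [
    {"text": "x" * 200, "lang": "en"},
    {"text": "too short", "lang": "en"},
    {"text": "y" * 200, "lang": "fr"},
]
kept = list(run_pipeline(docs, [length_filter, lang_filter]))
print(len(kept))  # 1
```

Because each stage is an independent predicate, filters can be reordered, parameterized, or swapped without touching the rest of the pipeline, which is the property the abstract emphasizes.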