This paper offers a short analysis of the sources and the treatment of the legal lexicon in the first dictionary published by the Spanish Royal Academy (1726–1739), followed by a longer commentary on the representation and treatment of the concept of judge, focusing on how extralinguistic factors are reflected in the definitions. The results highlight the relevance of the legal context of that era for the treatment of the lexicon of the legal domain, but they also show how the lexicographic data reflects peculiarities of legal matters.
This paper reports on the efforts of twelve national teams in building the International Comparable Corpus (ICC; https://korpus.cz/icc), which will contain highly comparable datasets of spoken, written and electronic registers. The languages currently covered are Czech, Finnish, French, German, Irish, Italian, Norwegian, Polish, Slovak, Swedish and, more recently, Chinese, as well as English, which serves as the pivot language. The goal of the project is to provide much-needed data for contrastive corpus-based linguistics. The ICC is committed to re-using existing multilingual resources as much as possible, and its design is modelled, with various adjustments, on the International Corpus of English (ICE). As such, the ICC will contain approximately the same balance of 40 percent written and 60 percent spoken language, distributed across 27 different text types and contexts. A number of issues encountered by the project teams are discussed, ranging from copyright and data sustainability to technical advances in data distribution.
The Component MetaData Infrastructure (CMDI) is a framework for the creation and usage of metadata formats to describe all kinds of resources in the CLARIN world. To better connect to the library world, and to allow librarians to enter metadata for linguistic resources into their catalogues, a crosswalk from CMDI-based formats to bibliographic standards is required. The general and rather fluid nature of CMDI, however, makes it hard to map arbitrary CMDI schemas to metadata standards such as Dublin Core (DC) or MARC 21, which have a mature, well-defined and fixed set of field descriptors. In this paper, we address the issue and propose crosswalks between CMDI-based profiles originating from the NaLiDa project and DC and MARC 21, respectively.
Sometimes in interaction, a speaker articulates an overt interpretation of prior talk. Such moments have been studied as involving the repair of a problem with the other’s talk or as formulating an understanding of the matter at hand. Stepping back from the established notions of formulations and repair, we examine the variety of actions speakers do with the practice of offering an interpretation, and the order within this domain. Results show half a dozen usage types of interpretations in mundane interaction. These form a largely continuous territory of action, with recognizably distinct usage types as well as cases falling between these (proto)typical uses. We locate order in the domain of interpretations using the method of semantic maps and show that, contrary to earlier assumptions in the literature, interpretations that formulate an understanding of the matter at hand are actually quite pervasive in ordinary talk. These findings contribute to research on action formation and advance our understanding of understanding in interaction. Data are video- and audio-recordings of mundane social interaction in the German language from a variety of settings.
The present paper explores how rules are enforced and talked about in everyday life. Drawing on a corpus of board game recordings across European languages, we identify a sequential and praxeological context for rule talk. After a game rule is breached, a participant enforces proper play and then formulates a rule with an impersonal deontic statement (e.g. “It’s not allowed to do this”). Impersonal deontic statements express what may or may not be done without tying the obligation to a particular individual. Our analysis shows that such statements are used as part of multi-unit and multi-modal turns where rule talk is accomplished through both grammatical and embodied means. Impersonal deontic statements serve multiple interactional goals: they account for having changed another’s behavior in the moment and at the same time impart knowledge for the future. We refer to this complex action as an “instruction.” The results of this study advance our understanding of rules and rule-following in everyday life, and of how resources of language and the body are combined to enforce and formulate rules.
We examine moments in social interaction in which a person formulates what another thinks or believes. Such formulations of belief constitute a practice with specifiable contexts and consequences. Belief formulations treat aspects of the other person's prior conduct as accountable on the basis that it provided a new angle on a topic, or otherwise made a surprising contribution within an ongoing course of actions. The practice of belief formulations subjectivizes the content that the other articulated and thereby topicalizes it, mobilizing commitment to that position, an account, or further elaboration. We describe how the practice can be put to work in different activity contexts: sometimes it is designed to undermine the other's position as a subjective 'mere belief', at other times it serves to mobilize further topic talk. Throughout, belief formulations show themselves to be a method by which we get to know ourselves and each other as mental agents.
Discourse metaphors (2008)
The article introduces the notion of discourse metaphors: relatively stable metaphorical mappings that function as key framing devices within a particular discourse over a certain period of time. Discourse metaphors are illustrated by case studies from three lines of research: on the cultural imprint of metaphors, on the negotiation of metaphors, and on cross-linguistic occurrence. The source concepts of discourse metaphors refer to phenomenologically salient real or fictitious objects that are part of interactional space (i.e., can be pointed at, like MACHINES or HOUSES) and/or occupy an important place in cultural imagination. Discourse metaphors change both over time and across the discourses in which they are used. The implications of focussing on different types of source domains for our thinking about the embodiment and sociocultural situatedness of metaphor are discussed, with particular reference to recent developments in Conceptual Metaphor Theory. Research on discourse suggests that situatedness is a crucial factor in the functioning and dynamics of metaphor.
This chapter describes the resources that speakers of Polish use when recruiting assistance and collaboration from others in everyday social interaction. The chapter draws on data from video recordings of informal conversation in Polish, and reports language-specific findings generated within a large-scale comparative project involving eight languages from five continents (see other chapters of this volume). The resources for recruitment described in this chapter include linguistic structures from across the levels of grammatical organization, as well as gestural and other visible and contextual resources of relevance to the interpretation of action in interaction. The presentation of categories of recruitment, and elements of recruitment sequences, follows the coding scheme used in the comparative project (see Chapter 2 of the volume). This chapter extends our knowledge of the structure and usage of Polish with detailed attention to the properties of sequential structure in conversational interaction. The chapter is a contribution to an emerging field of pragmatic typology.
‘Can’ and ‘must’-type modal verbs in the direct sanctioning of misconduct across European languages (2023)
Deontic meanings of obligation and permissibility have mostly been studied in relation to modal verbs, even though researchers are aware that such meanings can be conveyed in other ways (consider, for example, the contributions to Nuyts/van der Auwera (eds.) 2016). This presentation reports on an ongoing project that examines deontic meaning but takes as its starting point not a type of linguistic structure but a particular kind of social moment that presumably attracts deontic talk: the management of potentially 'unacceptable' or untoward actions (taking the last bread roll at breakfast, making a disallowed move during a board game, etc.). Data come from a multi-language parallel video corpus of everyday social interaction in English, German, Italian, and Polish. Here, we focus on moments in which one person sanctions another's behavior as unacceptable. Using interactional-linguistic methods (Couper-Kuhlen/Selting 2018), we examine similarities and differences across these four languages in the use of modal verbs as part of such sanctioning attempts. First results suggest that modal verbs are not as common in the sanctioning of misconduct as one might expect. Across the four languages, only between 10% and 20% of relevant sequences involve a modal verb. Most of the time, in this context, speakers achieve deontic meaning in other ways (e.g., infinitives such as German nicht so schmatzen, 'no smacking'). This raises the question of what exactly modal verbs, on those relatively rare occasions when they are used, contribute to the accomplishment of deontic meaning. The reported study pursues this question in two ways: 1) by considering similarities across languages in the ways that modal verbs interact with other (verbal) means in the sanctioning of misconduct; 2) by considering differences across languages in the use of modal verbs. Here, we find that the relevant modal verbs are used similarly in some activity contexts (enforcing rules during board games), but less so in others (mundane situations with no codified rules). In sum, the presented study adds to cross-linguistically grounded knowledge about deontic meaning and its relationship to linguistic structures.
This article presents preliminary results indicating that speakers use a different pitch range when they speak a foreign language than when they speak their native language. To this end, a learner corpus with French and German speakers was analyzed. Results suggest that speakers indeed produce a smaller pitch range in their respective L2. This is true for both groups of native speakers. A possible explanation for this finding is that speakers are less confident in their productions and therefore concentrate more on segments and words, refraining from realizing a more native-like pitch range. For language teaching, the results suggest that learners should be trained extensively on the more pronounced use of pitch in the foreign language.
This study examines the pitch profiles of French learners of German and German learners of French, both in their native language (L1), and in their respective foreign language (L2). Results of the analysis of 84 speakers suggest that for short read sentences, French and German speakers do not show pitch range differences in their native production. Furthermore, analyses of mean f0 and pitch range indicate that range is not necessarily reduced in L2 productions. These results are different from results reported in prior research. Possible reasons for these differences are discussed.
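As an illustration of how such pitch-range comparisons are typically operationalized, here is a minimal Python sketch that measures a speaker's range as the semitone distance between low and high f0 percentiles. The percentile thresholds, the data layout and all values are assumptions for illustration, not the procedure used in these two studies.

```python
import math

def pitch_range_semitones(f0_hz, lo_pct=5, hi_pct=95):
    """Pitch range as the semitone distance between two f0 percentiles.

    Using percentiles rather than min/max makes the measure robust
    against octave errors and tracking glitches in the f0 contour.
    """
    voiced = sorted(f for f in f0_hz if f > 0)  # drop unvoiced frames
    if len(voiced) < 2:
        return 0.0
    lo = voiced[int(len(voiced) * lo_pct / 100)]
    hi = voiced[min(int(len(voiced) * hi_pct / 100), len(voiced) - 1)]
    return 12 * math.log2(hi / lo)

# Hypothetical f0 tracks per speaker and condition (L1 vs. L2).
tracks = {
    ("speaker01", "L1"): [210.0, 195.5, 250.3, 180.2, 0.0, 230.1],
    ("speaker01", "L2"): [205.0, 200.5, 215.3, 198.2, 0.0, 210.1],
}
for (speaker, condition), f0 in tracks.items():
    print(speaker, condition, round(pitch_range_semitones(f0), 2), "st")
```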
Speakers’ linguistic experience is for the most part experience with language as used in conversational interaction. Though highly relevant for usage-based linguistics, the study of such data is as yet often left to other frameworks such as conversation analysis and interactional linguistics (Couper-Kuhlen and Selting 2001). On the basis of a case study of salient usage patterns of the two German motion verbs kommen and gehen in spontaneous conversation, the present paper argues for a methodological integration of quantitative corpus-linguistic methods with qualitative conversation analytic approaches to further the usage-based study of conversational interaction.
Action, conceptualization and social interaction co-develop, mutually scaffolding and supporting each other within a virtuous feedback cycle in the development of human language in children. Within this framework, the purpose of this article is to bring together diverse but complementary accounts of research methods that jointly contribute to our understanding of cognitive development and, in particular, language acquisition in robots. Thus, we include research pertaining to developmental robotics, cognitive science, psychology, linguistics and neuroscience, as well as practical computer science and engineering. The different studies are not at this stage all connected into a cohesive whole; rather, they are presented to illuminate the need for multiple different approaches that complement each other in the pursuit of understanding cognitive development in robots. Extensive experiments involving the humanoid robot iCub are reported, and studies of human learning relevant to developmental robotics have also contributed useful results.
Disparate approaches are brought together via common underlying design principles. Without claiming to model human language acquisition directly, we are nonetheless inspired by analogous development in humans and consequently, our investigations include the parallel co-development of action, conceptualization and social interaction. Though these different approaches need to ultimately be integrated into a coherent, unified body of knowledge, progress is currently also being made by pursuing individual methods.
How (and when) do speakers generalise from memorised exemplars of a construction to a productive schema? The present paper offers a novel take on this issue through a corpus-based approach to semantic extension processes. Focusing on clusters of German ADJ N expressions involving the heavily polysemous adjective tief 'deep', it is shown that type frequency (a commonly used measure of productivity) needs to be relativised to distinct semantic classes within the overall usage spectrum of a given construction in order to predict the occurrence of novel types within a particular region of this spectrum. Some methodological and theoretical implications for usage-based linguistic model building are considered.
Localism (2017)
Research on syntactic ambiguity resolution in language comprehension has shown that subjects' processing decisions are influenced by a variety of heterogeneous factors, such as syntactic complexity, semantic fit and the discourse frequency of the competing structures. The present paper investigates a further potentially relevant factor in such processes: effects of syntagmatic lexical chunking (or matching to a complex memorized prefab), whose occurrence would be predicted from usage-based assumptions about linguistic categorisation. Focusing on the widely studied so-called DO/SC ambiguity, in which a post-verbal NP is syntactically ambiguous between a direct object and the subject of an embedded clause, potentially biasing collocational chunks of the relevant type are identified in a number of corpus-linguistic pretests and then investigated in a self-paced reading experiment. The results show a significant increase in processing difficulty from a collocationally neutral over a lexically biasing to a strongly biasing condition. This suggests that syntagmatically complex and partially schematic templates of the kind envisioned in usage-based Construction Grammar may impinge on speakers' online processing decisions during sentence comprehension.
Introduction (2008)
Localist hypothesis (2017)
Smooth turn-taking in conversation depends in part on speakers being able to communicate their intention to hold or cede the floor. Both prosodic and gestural cues have been shown to be used in this context. We investigate the interplay of pitch movements and hand gestures at locations at which speaker change becomes relevant, comparing their use in German and Swedish. We find that there are some shared functions of prosody and gesture with regard to turn-taking in the two languages, but that these shared functions appear to be mediated by the different phonological demands on pitch in the two languages.
As a means of communication, gestures can serve conversational participants at several levels. As co-speech gestures, they can add information to the verbally expressed content, and they can serve to manage turn-taking. In order to look more closely at the interplay between these resources in face-to-face conversation, we annotated hand gestures, syntactic completion points and the related turn organisation, and measured the timing of gesture strokes relative to their lexical/phrasal referents. In a case study on German, we observe the trend that speakers vary less in gesture-lexis on- and offsets when keeping the turn after syntactic completions than at speaker changes, backchannels or other locations in a conversation. This indicates that the timing properties of non-verbal cues interact with verbal cues to manage turn-taking.
Linguistic corpora have been annotated by means of SGML-based markup languages for almost 20 years. We can, very roughly, differentiate between three distinct evolutionary stages of markup technologies. (1) Originally, single SGML tree-based document instances were deemed sufficient for the representation of linguistic structures. (2) Linguists began to realize that alternatives and extensions to the traditional model were needed. Formalisms such as NITE were proposed: the NITE Object Model (NOM) consists of multi-rooted trees. (3) We are now on the threshold of the third evolutionary stage: even NITE's very flexible approach is not suited for all linguistic purposes. As some structures cannot be modeled even by multi-rooted trees, a still more flexible approach is needed in order to provide a generic annotation format able to represent genuinely arbitrary linguistic data structures.
Many studies on dictionary use presuppose that users do indeed consult lexicographic resources. However, little is known about what users actually do when they try to solve language problems on their own. We present an observation study where learners of German were allowed to browse the web freely while correcting erroneous German sentences. In this paper, we are focusing on the multi-methodological approach of the study, especially the interplay between quantitative and qualitative approaches. In one example study, we will show how the analysis of verbal protocols, the correction task and the screen recordings can reveal the effects of intuition, language (learning) awareness, and determination on the accuracy of the corrections. In another example study, we will show how preconceived hypotheses about the problem at hand might hinder participants from arriving at the correct solution.
Wiktionary is increasingly gaining influence in a wide variety of linguistic fields such as NLP and lexicography, and has great potential to become a serious competitor for publisher-based and academic dictionaries. However, little is known about the "crowd" that is responsible for the content of Wiktionary. In this article, we want to shed some light on selected questions concerning large-scale cooperative work in online dictionaries. To this end, we use quantitative analyses of the complete edit history files of the English and German Wiktionary language editions. Concerning the distribution of revisions over users, we show that — compared to the overall user base — only very few authors are responsible for the vast majority of revisions in the two Wiktionary editions. In the next step, we compare this distribution to the distribution of revisions over all the articles. The articles are subsequently analysed in terms of rigour and diversity, typical revision patterns through time, and novelty (the time since the last revision). We close with an examination of the relationship between corpus frequencies of headwords in articles, the number of article visits, and the number of revisions made to articles.
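To make the revision-concentration analysis concrete, the following Python sketch computes what share of all revisions the most active users account for, assuming the edit history has already been parsed into per-revision (user, article) records; the record format and the toy data are hypothetical.

```python
from collections import Counter

# Hypothetical revision records parsed from an edit-history dump:
# one (user, article) pair per revision.
revisions = [
    ("UserA", "Haus"), ("UserA", "Haus"), ("UserA", "gehen"),
    ("UserB", "Haus"), ("UserC", "laufen"), ("UserA", "laufen"),
]

by_user = Counter(user for user, _ in revisions)
total = sum(by_user.values())

# Share of all revisions contributed by the top 1% most active users
# (here: the single most active user, given the toy data size).
top_n = max(1, len(by_user) // 100)
top_share = sum(count for _, count in by_user.most_common(top_n)) / total
print(f"top {top_n} user(s) account for {top_share:.0%} of revisions")
```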
Dictionaries have been part and parcel of literate societies for many centuries. They assist in communication, particularly across different languages, to aid in understanding, creating, and translating texts. Communication problems arise whenever a native speaker of one language comes into contact with a speaker of another language. At the same time, English has established itself as a lingua franca of international communication. This marked tendency gives lexicography of English a particular significance, as English dictionaries are used intensively and extensively by huge numbers of people worldwide.
We present ESDexplorer (https://owid.shinyapps.io/ESDexplorer), a browser application which allows the user to explore the data from a large European survey on dictionary use and culture. We built ESDexplorer with several target groups in mind: our cooperation partners, other researchers, and a more general public interested in the results. Also, we present in detail the architecture and technological realisation of the application and discuss some legal aspects of data protection that motivated some architectural choices.
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
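In the spirit of the guidance scripts mentioned above, here is a minimal Python sketch of the vocabulary-growth case study: folds are added one at a time while cumulative vocabulary size and hapax counts are tracked. The file naming scheme and the two-column `lemma<TAB>freq` layout are assumptions for illustration, not the released format.

```python
import csv
from collections import Counter

def load_fold(path):
    """Read one hypothetical fold file: tab-separated `lemma<TAB>freq` lines."""
    freqs = Counter()
    with open(path, encoding="utf-8") as fh:
        for lemma, freq in csv.reader(fh, delimiter="\t"):
            freqs[lemma] += int(freq)
    return freqs

# Cumulative vocabulary size and hapax count as folds are added.
cumulative = Counter()
for i in range(1, 17):
    cumulative += load_fold(f"unigrams_fold{i:02d}.tsv")  # path scheme assumed
    vocab = len(cumulative)
    hapaxes = sum(1 for f in cumulative.values() if f == 1)
    print(f"folds 1-{i:2d}: vocabulary={vocab:,} hapax legomena={hapaxes:,}")
```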
Neologisms, i.e., new words or meanings, are finding their way into everyday language use all the time. In the process, already existing elements of a language are recombined or linguistic material from other languages is borrowed. But are borrowed neologisms accepted by the speech community as readily as neologisms formed from “native” material? We investigate this question based on neologisms in German. Building on the corresponding results of a corpus study, we test the hypothesis that “native” neologisms are more readily accepted than those borrowed from English. To do so, we use a psycholinguistic experimental paradigm that allows us to estimate the degree of uncertainty of the participants based on the mouse trajectories of their responses. Unexpectedly, our results suggest that the neologisms borrowed from English are accepted more frequently, more quickly, and more easily than the “native” ones. These effects, however, are restricted to people born after 1980, the so-called millennials. We propose potential explanations for this mismatch between corpus results and experimental data and argue, among other things, for a reinterpretation of previous corpus studies.
This paper presents a study using eye-tracking-while-reading data from participants reading German jurisdictional texts, with a particular interest in nominalisations. It can be shown that nominalisations are read significantly longer than other nouns and that this effect is quite strong. Furthermore, the results suggest that nouns are read faster in reformulated texts, in which nominalisations were transformed into verbal structures. The reformulations did not lead to increased processing times for the verbal constructions, and the reformulated texts were read faster overall. Where appropriate, results are compared to a previous study by Hansen et al. (2006) that used the same texts but different methodology and statistical analysis.
On the Lossless Transformation of Single-File, Multi-Layer Annotations into Multi-Rooted Trees (2007)
The Generalised Architecture for Sustainability (GENAU) provides a framework for the transformation of single-file, multi-layer annotations into multi-rooted trees. By employing constraints expressed in XCONCUR-CL, this procedure can be performed losslessly, i.e., without losing information, especially with regard to the nesting of elements that belong to multiple annotation layers. This article describes how different types of linguistic corpora can be transformed using specialised tools, and how constraint rules can be applied to the resulting multi-rooted trees to add an additional level of validation.
We describe a general two-stage procedure for re-using a custom corpus for spoken language system development involving a transformation from character-based markup to XML, and DSSSL stylesheet-driven XML markup enhancement with multiple lexical tag trees. The procedure was used to generate a fully tagged corpus; alternatively with greater economy of computing resources, it can be employed as a parametrised ‘tagging on demand’ filter. The implementation will shortly be released as a public resource together with the corpus (German spoken dialogue, about 500k word form tokens) and lexicon (about 75k word form types).
The Leibniz-Institute for the German Language (IDS) was established in Mannheim in 1964. Since then, it has been at the forefront of innovation in German linguistics as a hub for digital language data. This chapter presents various lessons learnt from over five decades of work by the IDS, ranging from the importance of sustainability, through its strong technical base and FAIR principles, to the IDS’ role in national and international cooperation projects and its expertise on legal and ethical issues related to language resources and language technology.
The actual or anticipated impact of research projects can be documented in scientific publications and project reports. While project reports are available at varying levels of accessibility, they may rarely be used or shared outside of academia. Moreover, the connection between the outcomes of a research project and their potential secondary use may not be made explicit in a project report. This paper outlines two methods for classifying and extracting the impact of publicly funded research projects. The first method is concerned with identifying impact categories and assigning them to research projects, and by extension to their reports, relying on subject-matter experts rather than on the content of the reports; this process resulted in the classification schema we describe in this paper. With the second method, which is still work in progress, impact categories are extracted from the actual text data.
In this paper, we present the Multiple Annotation approach, which solves two problems: the problem of annotating overlapping structures, and the problem that occurs when documents should be annotated according to different, possibly heterogeneous tag sets. This approach has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. The files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) are described. These representations serve as a base for several applications.
Overlap in markup occurs where some markup structures do not nest, such as where the structural division of the text into lists, sections, etc. differs from the syntactic division of the text into sentences and phrases. The Multiple Annotation solution to this problem (redundant encoding in multiple forms) has many advantages: it is based on XML, the modeling of alternative annotations is possible, each level can be viewed separately, and new levels can be added at any time. It has, however, the significant disadvantage that the separate files are independent of one another. These multiply annotated files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) can be programmatically derived and used together for editing, for inference, or for unification of the multiply annotated documents.
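The nesting-versus-overlap distinction at the heart of this problem can be made concrete over character offsets into the shared base text. The following Python sketch classifies span pairs from two annotation layers; the spans are assumed to have been extracted from the separate annotation files beforehand, and the example layers are hypothetical.

```python
def relation(a, b):
    """Classify two (start, end) character spans over the shared base text."""
    (s1, e1), (s2, e2) = a, b
    if e1 <= s2 or e2 <= s1:
        return "disjoint"
    if (s1 <= s2 and e2 <= e1) or (s2 <= s1 and e1 <= e2):
        return "nesting"      # representable within a single XML tree
    return "overlapping"      # requires multiple annotation layers

# Hypothetical spans from two layers: document structure vs. syntax.
structure = [(0, 40), (40, 95)]   # e.g. two list items
syntax = [(0, 55), (55, 95)]      # e.g. two sentences
for s in structure:
    for t in syntax:
        if relation(s, t) == "overlapping":
            print(f"structure {s} and syntax {t} cross; cannot nest in one tree")
```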
This paper describes work directed towards the development of a syllable prominence-based prosody generation functionality for a German unit selection speech synthesis system. A general concept for syllable prominence-based prosody generation in unit selection synthesis is proposed. As a first step towards its implementation, an automated syllable prominence annotation procedure based on acoustic analyses has been performed on the BOSS speech corpus. The prominence labeling has been evaluated against an existing annotation of lexical stress levels and manual prominence labeling on a subset of the corpus. We discuss methods and results and give an outlook on further implementation steps.
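One plausible shape for an acoustics-based prominence score is a combination of normalized duration, energy and f0 cues per syllable, as in the Python sketch below. The cue set, the equal weighting and the input format are assumptions for illustration; the actual procedure would be calibrated against the manual prominence labels described above.

```python
import numpy as np

def prominence_scores(syllables):
    """Combine z-scored acoustic cues into a per-syllable prominence score.

    `syllables` is a list of dicts with hypothetical keys: duration (s),
    energy (dB), f0_peak (Hz). Equal weighting of the cues is an assumption.
    """
    cues = np.array([[s["duration"], s["energy"], s["f0_peak"]]
                     for s in syllables])
    z = (cues - cues.mean(axis=0)) / cues.std(axis=0)  # normalize per cue
    return z.mean(axis=1)

utterance = [
    {"duration": 0.12, "energy": 62.0, "f0_peak": 190.0},
    {"duration": 0.21, "energy": 68.0, "f0_peak": 240.0},  # likely prominent
    {"duration": 0.10, "energy": 60.0, "f0_peak": 185.0},
]
print(prominence_scores(utterance).round(2))
```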
The present paper provides a new approach to the form-function relation in Latin declension. First, inflections are discussed from a functional point of view with special consideration to questions of syncretism. A case hierarchy is justified for Latin that conforms to general observations on case systems. The analysis leads to a markedness scale that provides a ranking of case-number-combinations from unmarked to most marked. Systematic syncretism always applies to contiguous sections of the case-number-scale (‘syncretism fields’). Second, inflections are analysed from a formal point of view taking into account partial identities and differences among noun endings. Theme vowels being factored out, endings are classified on the basis of their make-up, e.g., as sigmatic endings; as containing desinential (non-thematic) vowels; as containing long vowels; and so on. The analysis leads to a view of endings as involving more basic elements or ‘markers’. Endings of the various declensions instantiate a small number of types, and these can be put into a ranked order (a formal scale) that applies transparadigmatically. Third, the relationship between the independently substantiated functional and formal hierarchies is examined. In any declension, the form-function-relationship is established by aligning the relevant formal and functional scales (or ‘sequences’). Some types of endings are in one-to-one correspondence with bundles of morphosyntactic properties as they should be according to a classical morphemic approach, but others are not. Nevertheless, endings can be assigned a uniform role if the form-function-relationship is understood to be based on an alignment of formal and functional sequences. A diagrammatical form-function relationship is revealed that could not be captured in classical or refined morphemic approaches.
We examine the new task of detecting derogatory compounds (e.g. curry muncher). Derogatory compounds are much more difficult to detect than derogatory unigrams (e.g. idiot) since they are more sparsely represented in lexical resources previously found effective for this task (e.g. Wiktionary). We propose an unsupervised classification approach that incorporates linguistic properties of compounds. It mostly depends on a simple distributional representation. We compare our approach against previously established methods proposed for extracting derogatory unigrams.
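A hedged sketch of how a simple distributional representation might be put to work for this task: score a compound by the similarity of a crude compositional vector to a handful of seed derogatory unigrams. The vectors, seed list and threshold below are all placeholders, not the authors' exact method.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder word vectors; in practice these would be loaded from a
# pre-trained distributional model.
rng = np.random.default_rng(0)
vectors = {w: rng.standard_normal(50)
           for w in ["curry", "muncher", "idiot", "moron", "bread", "basket"]}

derogatory_seeds = ["idiot", "moron"]  # seed unigrams from a lexicon

def compound_score(modifier, head):
    """Score a compound via a crude compositional vector: the average of
    its parts, compared against known derogatory unigrams."""
    vec = (vectors[modifier] + vectors[head]) / 2
    return max(cosine(vec, vectors[s]) for s in derogatory_seeds)

# Classify by thresholding the score (threshold tuned on held-out data).
print(compound_score("curry", "muncher") > 0.4)
print(compound_score("bread", "basket") > 0.4)
```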
We present an approach for modeling German negation in open-domain, fine-grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units (not only common negation words) and that different negation words have different scopes. Our approach is examined on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identify different types of negation words that share the same scopes. We show that negation modeling based on these types alone largely outperforms traditional negation models, which assume the same scope for all negation words and employ window-based scope detection rather than scope detection based on syntactic information.
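As a rough illustration of type-based scope modeling, the sketch below groups negation words into scope types and flips the polarity of expressions that fall inside a scope. Real scopes are determined syntactically in the paper; the token-span approximation, the word lists and the labels here are simplifying assumptions.

```python
# Negation words grouped into scope types. In the paper, scopes are
# determined syntactically; here they are approximated by token spans
# to keep the example self-contained.
NEGATION_TYPES = {
    "nicht": "clausal",  # scope: the rest of the clause
    "kein": "local",     # scope: the following noun phrase
    "ohne": "local",     # scope: the following complement
}

def negated_positions(tokens):
    """Return token indices that fall inside some negation scope."""
    scoped = set()
    for i, tok in enumerate(tokens):
        kind = NEGATION_TYPES.get(tok.lower())
        if kind == "clausal":
            scoped.update(range(i + 1, len(tokens)))
        elif kind == "local":
            scoped.add(i + 1)  # next token only, a crude NP stand-in
    return scoped

tokens = "das Essen war nicht gut".split()
polar = {4: "positive"}  # hypothetical polar expression: "gut"
flipped = {i: ("negative" if i in negated_positions(tokens) else p)
           for i, p in polar.items()}
print(flipped)  # -> {4: 'negative'}
```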
We present the pilot edition of the GermEval Shared Task on the Identification of Offensive Language. This shared task deals with the classification of German tweets from Twitter. It comprises two tasks: a coarse-grained binary classification task and a fine-grained multi-class classification task. The shared task had 20 participants submitting 51 runs for the coarse-grained task and 25 runs for the fine-grained task. Since this is a pilot task, we describe the process of extracting the raw data for the data collection as well as the annotation schema. We evaluate the results of the systems submitted to the shared task. The shared task homepage can be found at https://projects.cai.fbi.h-da.de/iggsa/
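Shared tasks of this kind are commonly scored with macro-averaged F1, which weights all classes equally regardless of their frequency. A self-contained Python sketch of that metric for the coarse-grained binary task (the label names are assumed for illustration):

```python
def macro_f1(gold, pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight."""
    scores = []
    for lab in set(gold) | set(pred):
        tp = sum(g == lab and p == lab for g, p in zip(gold, pred))
        fp = sum(g != lab and p == lab for g, p in zip(gold, pred))
        fn = sum(g == lab and p != lab for g, p in zip(gold, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

gold = ["OFFENSE", "OTHER", "OTHER", "OFFENSE"]  # label names assumed
pred = ["OFFENSE", "OTHER", "OFFENSE", "OFFENSE"]
print(f"macro-F1: {macro_f1(gold, pred):.3f}")  # -> 0.733
```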