This contribution explores the relationship between the English CEFR (Common European Framework of Reference for Languages) vocabulary levels and user interest in English Wiktionary entries. User interest was operationalized as the number of views of these entries in Wikimedia server logs covering a period of four years (2019–2022). Our findings reveal a significant relationship between CEFR levels and user interest: entries classified at lower CEFR levels tend to attract more views, which suggests a greater user interest in more basic vocabulary. A multiple regression model controlling for other known or potential factors affecting interest (corpus frequency, polysemy, word prevalence, and age of acquisition) confirmed that lower CEFR levels attract significantly more views even after the other predictors are taken into account. These findings highlight the importance of CEFR levels in predicting which words users are likely to look up, with implications for lexicography and the development of language learning materials.
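As a rough illustration of such a regression, the following self-contained sketch fits ordinary least squares on synthetic data; all variable names, effect sizes, and distributions are invented for illustration and are not the study's data:

```python
import random

random.seed(0)
n = 400

# Synthetic data (invented for illustration; not the study's data)
rows, y = [], []
for _ in range(n):
    cefr = float(random.randint(1, 6))   # CEFR level coded 1 (A1) .. 6 (C2)
    freq = random.gauss(4, 1)            # log corpus frequency
    poly = float(random.randint(1, 5))   # number of senses (polysemy)
    prev = random.gauss(2, 0.5)          # word prevalence
    aoa = random.gauss(8, 2)             # age of acquisition
    views = (10 - 0.8 * cefr + 0.5 * freq + 0.2 * poly
             + 0.3 * prev - 0.1 * aoa + random.gauss(0, 0.5))
    rows.append([1.0, cefr, freq, poly, prev, aoa])  # leading 1.0 = intercept
    y.append(views)

# Ordinary least squares via the normal equations X'X b = X'y,
# solved with Gaussian elimination (no external libraries needed)
k = len(rows[0])
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
for i in range(k):
    piv = max(range(i, k), key=lambda r: abs(XtX[r][i]))   # partial pivoting
    XtX[i], XtX[piv] = XtX[piv], XtX[i]
    Xty[i], Xty[piv] = Xty[piv], Xty[i]
    for r in range(i + 1, k):
        f = XtX[r][i] / XtX[i][i]
        for c in range(i, k):
            XtX[r][c] -= f * XtX[i][c]
        Xty[r] -= f * Xty[i]
beta = [0.0] * k
for i in reversed(range(k)):                               # back substitution
    beta[i] = (Xty[i] - sum(XtX[i][c] * beta[c] for c in range(i + 1, k))) / XtX[i][i]

print(f"CEFR coefficient: {beta[1]:.2f}")  # negative: lower levels attract more views
```

Because the synthetic views were generated with a CEFR effect of -0.8, the fitted coefficient recovers a clearly negative value even in the presence of the other predictors, mirroring the structure of the reported analysis.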
The CLARIN infrastructure as an interoperable language technology platform for SSH and beyond
(2023)
CLARIN is a European Research Infrastructure Consortium developing and providing a federated and interoperable platform to support scientists in the field of the Social Sciences and Humanities in carrying out language-related research. This contribution provides an overview of the entire infrastructure, with a particular focus on tool interoperability, ease of access to research data, tools and services, the importance of sharing knowledge within and across (national) communities, and community building. By taking the FAIR principles into account from the very beginning, CLARIN has become a successful example of a research infrastructure that is actively used by its members. The benefits CLARIN members reap from their infrastructure secure a future for their common good that is both sustainable and attractive to partners beyond the original target groups.
Neologisms, i.e., new words or meanings, are finding their way into everyday language use all the time. In the process, already existing elements of a language are recombined, or linguistic material from other languages is borrowed. But are borrowed neologisms accepted by the speech community as readily as neologisms formed from “native” material? We investigate this question for neologisms in German. Building on the results of a corpus study, we test the hypothesis that “native” neologisms are more readily accepted than those borrowed from English. To do so, we use a psycholinguistic experimental paradigm that allows us to estimate the participants' degree of uncertainty from the mouse trajectories of their responses. Unexpectedly, our results suggest that the neologisms borrowed from English are accepted more frequently, more quickly, and more easily than the “native” ones. These effects, however, are restricted to people born after 1980, the so-called millennials. We propose potential explanations for this mismatch between corpus results and experimental data and argue, among other things, for a reinterpretation of previous corpus studies.
Theater rehearsals are (usually) confronted with the problem of having to transform a written text into an audio-visual, situated and temporal performance. Our contribution focuses on the emergence and stabilization of a gestural form as a solution for embodying a certain aesthetic concept which is derived from the script. This process involves instructions and negotiations, making the process of stabilization publicly and thus intersubjectively accessible. As scenes are repeatedly rehearsed, rehearsals are perspicuous settings for tracking interactional histories. Based on videotaped professional theater interactions in Germany, we focus on consecutive instances of rehearsing the same scene and trace the interactional history of a particular gesture. This gesture is used by the director to instruct the actors to play a particular aspect of a scene adopting a certain aesthetic concept. Stabilization requires the emergence of shared knowledge. We will show the practices by which shared knowledge is established over time during the rehearsal process and, in turn, how the accumulation of knowledge contributes to a change in the interactional practices themselves. Specifically, we show how a gesture emerges in the process of developing and embodying an aesthetic concept, and how this gesture eventually becomes a sign that refers to and evokes accumulated knowledge. At the same time, we show how this accumulated knowledge changes the instructional activities in the rehearsal process. Our study contributes to the overall understanding of knowledge accumulation in interaction in general and in theater rehearsals in particular. At the same time, it is devoted to the central importance of gestures in theater, which are both a means and a product of theatrical staging.
Background: The digital transformation is shaping societal systems worldwide. Digital health covers various areas, such as the availability and analysis of data, the possibility of networking within one's own professional or patient group, and the way patients, relatives, and practitioners communicate with one another.
Objective: Digital health is examined with regard to its effects on the relationship and communication between patients, relatives, and practitioners. Changes that are already apparent are described, and perspectives are outlined.
Methods: The topic is explored from socio-philosophical, linguistic, and medical perspectives in the following areas: digital vs. analog communication, narration vs. data collection, the internet and social media as sources of information, spaces for identity formation, and changes in the interaction of patients, relatives, and practitioners.
Results: Extending the interaction between patients and physicians to both digital and face-to-face formats, and to asynchronous as well as synchronous communication, increases complexity but also flexibility. A focus on "objective" data can obscure the view of the person with their individual biography, while digital spaces considerably expand the opportunities for identity formation on the part of patients and for interaction.
Discussion: Advantages of digitalization (e.g., better self-management) and disadvantages (a focus on data rather than on the person) are already apparent. For pediatric and adolescent medicine, there is a need to expand professional communicative competencies and professional health literacy, and to further develop the organization of its care facilities.
This article presents a discussion of the main linguistic phenomena which cause difficulties in the analysis of user-generated texts found on the web and in social media, and proposes a set of annotation guidelines for their treatment within the Universal Dependencies (UD) framework of syntactic analysis. Given, on the one hand, the increasing number of treebanks featuring user-generated content and, on the other, its somewhat inconsistent treatment across these resources, the aim of this article is twofold: (1) to provide a condensed, though comprehensive, overview of such treebanks (based on the available literature), along with their main features and a comparative analysis of their annotation criteria, and (2) to propose a set of tentative UD-based annotation guidelines to promote consistent treatment of the particular phenomena found in these types of texts. The overarching goal of this article is to provide a common framework for researchers interested in developing similar resources in UD, thus promoting cross-linguistic consistency, which is a principle that has always been central to the spirit of UD.
German subjectively veridical sicher sein ‘be certain’ can embed ob-clauses in negative contexts, while subjectively veridical glauben ‘believe’ and nonveridical möglich sein ‘be possible’ cannot. The Logical Form of F isn’t certain if M is in Rome is regarded as the negated disjunction of two sentences ¬(cf σ ∨ cf ¬σ) or ¬cf σ ∧ ¬cf ¬σ. Be certain can have this LF because ¬cf σ and ¬cf ¬σ are compatible and nonveridical. Believe excludes this LF because ¬bf σ and ¬bf ¬σ are incompatible in a question-under-discussion context. It follows from this incompatibility and from the incompatibility of bf σ and bf ¬σ that bf ¬σ and ¬bf σ are equivalent. Therefore believe cannot be nonveridical. Be possible doesn’t allow the LF either. Similar to believe, ¬pf σ and ¬pf ¬σ are incompatible. But unlike believe, pf σ and pf ¬σ are compatible.
The demo presents a minimalist, off-the-shelf author name disambiguation (AND) tool which provides a fundamental AND operation, the comparison of two publications with ambiguous authors, as an easily accessible HTTP interface. The tool implements this operation using standard AND functionality, but places particular emphasis on advanced methods from natural language processing (NLP) for comparing publication title semantics.
The use of digital resources and tools across humanities disciplines is steadily increasing, giving rise to new research paradigms and associated methods that are commonly subsumed under the term digital humanities. Digital humanities does not constitute a new discipline in itself, but rather a new approach to humanities research that cuts across different existing humanities disciplines. While digital humanities extends well beyond language-based research, textual resources and spoken language materials play a central role in most humanities disciplines.
The transfer of research data management from one institution to another infrastructural partner is anything but trivial, but can be required, for instance, when an institution faces reorganization or closure. In a case study, we describe the migration of all research data, identify the challenges we encountered, and discuss how we addressed them. The case study shows that moving research data management to another institution is a feasible, but potentially costly, enterprise. Being able to demonstrate the feasibility of research data migration supports the stance of data archives that users can expect high levels of trust and reliability when it comes to data safety and sustainability.
Just like most varieties of West Germanic, virtually all varieties of German use a construction in which a cognate of the English verb 'do' (standard German 'tun') functions as an auxiliary and selects another verb in the bare infinitive, a construction known as 'do'-periphrasis or 'do'-support. The present paper provides an Optimality Theoretic (OT) analysis of this phenomenon. It builds on a previous analysis by Bader and Schmid (An OT-analysis of 'do'-support in Modern German, 2006) but (i) extends it from root clauses to subordinate clauses and (ii) aims to capture all of the major distributional patterns found across (mostly non-standard) varieties of German. In so doing, the data are used as a testing ground for different models of German clause structure. At first sight, the occurrence of 'do' in subordinate clauses, as found in many varieties, appears to support the standard CP-IP-VP analysis of German. In actual fact, however, the full range of data turn out to challenge, rather than support, this model. Instead, I propose an analysis within the IP-less model by Haider (Deutsche Syntax - generativ. Vorstudien zur Theorie einer projektiven Grammatik, Narr, Tübingen, 1993 et seq.). In sum, the 'do'-support data will be shown to have implications not only for the analysis of clause structure but also for the OT constraints commonly assumed to govern the distribution of 'do', for the theory of non-projecting words (Toivonen in Non-projecting words, Kluwer, Dordrecht, 2003) as well as research on grammaticalization.
We present an approach for modeling German negation in open-domain, fine-grained sentiment analysis. Unlike most previous work in sentiment analysis, we assume that negation can be conveyed by many lexical units (not only by common negation words) and that different negation words have different scopes. Our approach is evaluated on a new dataset comprising sentences with mentions of polar expressions and various negation words. We identify different types of negation words that share the same scopes. We show that negation modeling based on these types alone already largely outperforms traditional negation models, which assume the same scope for all negation words and employ window-based scope detection rather than scope detection based on syntactic information.
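The core idea of type-based scopes can be sketched in a few lines; the word lists, scope sizes, and polarity lexicon below are invented stand-ins, not the paper's inventory:

```python
# Toy negation model: negation words grouped into types with different scopes
# (word lists and scope sizes are illustrative, not the paper's resources)
NEGATORS = {
    "nicht": "short",   # flips only the next polar expression
    "kein": "short",
    "ohne": "long",     # flips all polar expressions to the end of the clause
    "niemals": "long",
}
SCOPE = {"short": 1, "long": 100}

POLARITY = {"gut": 1, "toll": 1, "schlecht": -1, "schrecklich": -1}

def sentence_polarity(tokens):
    """Sum token polarities, flipping those inside an active negation scope."""
    total, remaining = 0, 0
    for tok in tokens:
        if tok in NEGATORS:
            remaining = SCOPE[NEGATORS[tok]]
            continue
        if tok in POLARITY:
            val = POLARITY[tok]
            if remaining > 0:
                val = -val      # inside a negation scope: polarity is reversed
            total += val
        if remaining > 0:
            remaining -= 1
    return total

print(sentence_polarity("das war nicht gut".split()))  # -1 (gut flipped by nicht)
```

A window-based baseline, by contrast, would apply the same `remaining` budget to every negator; assigning scopes per negation type is what the abstract identifies as the decisive improvement.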
This contribution analyzes the interplay of audible linguistic and visible kinesic practices employed in everyday storytelling. Within a conversation-analytic study of video recordings of everyday German conversations, the range of everyday narrative practices in face-to-face communication is demonstrated. This is done using two examples in which the opening, elaboration, and closing of the narrative are accomplished under different sequential and multimodal conditions. The study underscores, on the one hand, the indexicality of everyday narrative practices and, on the other, the need for an interactional narratology that analyzes and conceptualizes these practices as the product of linguistic, embodied, and spatial resources as well as of the collaboration of multiple participants.
In this contribution we discuss the implications one faces when investigating the smallest forms of situated sociation, which we call communicative minimal forms. Communicative minimal forms are brief, jointly constituted interaction events lasting only a few seconds. Despite their brevity, they exhibit a complex interactional structure. They also carry a clear social implication and a value of their own. In the case examined here, in which passers-by look through an open window into a private room and are caught doing so, this social implicativeness manifests itself as moral communication in the sense of the interactive handling of one's own misconduct.
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
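A toy version of the reconstruction step might look like this; the particle list and clause assumptions are drastically simplified for illustration (the method itself handles spoken-language structures the sketch ignores):

```python
# Toy reconstruction of separated German particle verbs, e.g.
# "sie macht die Tür auf" -> verb "aufmachen" split into "macht ... auf"
# (particle list and clause handling are simplified for illustration)
PARTICLES = {"auf", "an", "aus", "mit", "zu", "ab", "ein", "vor", "weg"}

def reconstruct(tokens):
    """If a clause ends in a verb particle, reattach it to the finite verb.

    Assumes a German V2 main clause: the finite verb is in second position
    and the separated particle, if any, is clause-final.
    """
    if len(tokens) >= 3 and tokens[-1] in PARTICLES:
        verb = tokens[1]
        return tokens[:1] + [tokens[-1] + verb] + tokens[2:-1]
    return tokens

print(reconstruct("sie macht die Tür auf".split()))
# particle 'auf' reattached: ['sie', 'aufmacht', 'die', 'Tür']
```

Real transcripts complicate every assumption here (hesitations, abandoned clauses, non-V2 orders), which is exactly the gap between written and spoken language that the study addresses.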
Multinomial processing tree (MPT) models are a class of measurement models that account for categorical data by assuming a finite number of underlying cognitive processes. Traditionally, data are aggregated across participants and analyzed under the assumption of independently and identically distributed observations. Hierarchical Bayesian extensions of MPT models explicitly account for participant heterogeneity by assuming that the individual parameters follow a continuous hierarchical distribution. We provide an accessible introduction to hierarchical MPT modeling and present the user-friendly and comprehensive R package TreeBUGS, which implements the two most important hierarchical MPT approaches to participant heterogeneity: the beta-MPT approach (Smith & Batchelder, Journal of Mathematical Psychology 54:167-183, 2010) and the latent-trait MPT approach (Klauer, Psychometrika 75:70-98, 2010). TreeBUGS reads standard MPT model files and obtains Markov chain Monte Carlo samples that approximate the posterior distribution. The functionality and output are tailored to the specific needs of MPT modelers and provide tests for the homogeneity of items and participants, individual and group parameter estimates, fit statistics, and within- and between-subjects comparisons, as well as goodness-of-fit and summary plots. We also propose and implement novel statistical extensions to include continuous and discrete predictors (as either fixed or random effects) in the latent-trait MPT model.
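TreeBUGS itself is an R package; as a language-neutral illustration of what an MPT model is, the following sketch computes the category probabilities of the classic two-high-threshold (2HT) recognition model, one of the simplest MPT models (parameter names follow common conventions in the MPT literature, not TreeBUGS syntax):

```python
# Two-high-threshold (2HT) recognition model: each response category's
# probability is a sum of products of process probabilities along tree branches.

def two_ht_probabilities(d_old, d_new, g):
    """Return P('old' | old item) and P('old' | new item).

    d_old: probability of detecting an old item as old
    d_new: probability of detecting a new item as new
    g:     probability of guessing 'old' when detection fails
    """
    p_hit = d_old + (1 - d_old) * g       # detect, or fail to detect and guess 'old'
    p_false_alarm = (1 - d_new) * g       # fail to reject, then guess 'old'
    return p_hit, p_false_alarm

hit, fa = two_ht_probabilities(d_old=0.6, d_new=0.6, g=0.5)
print(hit, fa)  # roughly 0.8 and 0.2
```

Hierarchical extensions such as those in TreeBUGS keep exactly these branch equations but let `d_old`, `d_new`, and `g` vary per participant according to a group-level distribution.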
Researchers interested in the sounds of speech or the physical gestures of speakers make use of audio and video recordings in their work. Annotating these recordings presents a different set of requirements from the annotation of text. Special-purpose tools have been developed to display video and audio signals and to allow the creation of time-aligned annotations. This chapter reviews the most widely used of these tools for both manual and automatic generation of annotations on multimodal data.
We present a method to identify and document a phenomenon on which there is very little empirical data: German phrasal compounds occurring as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora that are large and diverse enough to contain rarely seen phenomena. The method is based on word segmentation and morphological analysis and takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved with a combination of rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations seem to be linked to the degree of expressivity.
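The word-segmentation building block of such a pipeline can be sketched as a greedy dictionary-based splitter; the lexicon below is a toy stand-in, and the paper's method additionally relies on morphological analysis and data-driven learning:

```python
# Toy dictionary-based segmentation of a one-token compound
# (lexicon is illustrative; real pipelines use full morphological lexica)
LEXICON = {"auto", "bahn", "haus", "tür"}

def split_compound(word, lexicon=LEXICON):
    """Greedy longest-match segmentation; returns None if no full cover."""
    parts, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):       # try the longest match first
            if word[i:j].lower() in lexicon:
                parts.append(word[i:j])
                i = j
                break
        else:
            return None                          # unanalyzable remainder
    return parts if len(parts) > 1 else None

print(split_compound("Autobahn"))  # ['Auto', 'bahn']
```

A token that segments cleanly into several known words is a compound candidate; tokens that resist segmentation are either simplex words or the rare, expressively spelled phrasal compounds the study targets.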
We present a supervised machine learning AND system which tackles semantic similarity between publication titles by means of word embeddings. Word embeddings are integrated as external components, which keeps the model small and efficient while allowing for easy extensibility and domain adaptation. Initial experiments show that word embeddings can improve the recall and F-score of the binary classification sub-task of AND. Results for the clustering sub-task are less clear, but also promising, and overall show the feasibility of the approach.
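A minimal sketch of embedding-based title comparison: average the word vectors of each title and compare the averages by cosine similarity. The vectors below are invented toy values; a real system would load pretrained embeddings (e.g., word2vec or fastText) as the external component the abstract describes:

```python
import math

# Toy word vectors (invented for illustration)
VECTORS = {
    "neural":   [0.90, 0.10, 0.00],
    "networks": [0.80, 0.20, 0.10],
    "deep":     [0.85, 0.15, 0.05],
    "learning": [0.70, 0.30, 0.10],
    "medieval": [0.00, 0.20, 0.90],
    "poetry":   [0.10, 0.10, 0.80],
}

def title_vector(title):
    """Average the vectors of the known words in a title."""
    vecs = [VECTORS[w] for w in title.lower().split() if w in VECTORS]
    if not vecs:
        return None
    return [sum(col) / len(vecs) for col in zip(*vecs)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

sim = cosine(title_vector("Deep Learning"), title_vector("Neural Networks"))
dis = cosine(title_vector("Deep Learning"), title_vector("Medieval Poetry"))
print(round(sim, 2), round(dis, 2))  # semantically related titles score higher
```

In the AND setting, such a similarity score becomes one feature of the binary classifier that decides whether two ambiguous author mentions refer to the same person.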
In this article, we explore the feasibility of extracting suitable and unsuitable food items for particular health conditions from natural language text. We refer to this task as conditional healthiness classification. For that purpose, we annotate a corpus extracted from forum entries of a food-related website. We identify different relation types that hold between food items and health conditions, going beyond a binary distinction of suitability and unsuitability, and devise various supervised classifiers using different types of features. We examine the impact of different task-specific resources, such as a healthiness lexicon that lists the healthiness status of a food item, and a sentiment lexicon. Moreover, we also consider task-specific linguistic features that disambiguate a context in which mentions of a food item and a health condition co-occur, and compare them with standard features using bag of words, part-of-speech information and syntactic parses. We also investigate to what extent individual food items and health conditions correlate with specific relation types and try to harness this information for classification.
Large classes at universities (>1600 students) create their own challenges for teaching and learning. Audience feedback is lacking, and fine-tuning lectures, courses, and exam preparation to individual needs is very difficult to achieve. At RWTH Aachen University, a course concept and a knowledge-map learning tool, designed to support individual students in preparing for exams in information science through theme-based exercises, were developed and evaluated. The tool was grounded in the notion of self-regulated learning, with the goal of enabling students to learn independently.
In this article, we examine the effectiveness of bootstrapping supervised machine-learning polarity classifiers with the help of a domain-independent rule-based classifier that relies on a lexical resource, i.e., a polarity lexicon and a set of linguistic rules. The benefit of this method is that, although no labeled training data are required, it allows a classifier to capture in-domain knowledge by training a supervised classifier with in-domain features, such as bag of words, on instances labeled by the rule-based classifier. This approach can thus be considered a simple and effective method for domain adaptation. Among the components of this approach, we investigate how important the quality of the rule-based classifier is and which features are useful for the supervised classifier. In particular, the former addresses the extent to which linguistic modeling is relevant for this task. We not only examine how this method performs under more difficult settings, in which classes are not balanced and mixed reviews are included in the data set, but also compare how this linguistically driven method relates to state-of-the-art statistical domain adaptation.
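The bootstrapping pipeline can be sketched end to end in a few lines; the polarity lexicon, negation rule, and example sentences below are invented stand-ins for the paper's resources, and the supervised step is a minimal Naive Bayes rather than the paper's exact learner:

```python
import math
from collections import Counter

# Rule-based labeler: a tiny polarity lexicon plus one linguistic rule
# (lexicon, rule, and corpus are illustrative, not the paper's resources)
LEXICON = {"great": 1, "good": 1, "awful": -1, "boring": -1}

def rule_label(tokens):
    score = 0
    for i, tok in enumerate(tokens):
        if tok in LEXICON:
            polarity = LEXICON[tok]
            if i > 0 and tokens[i - 1] == "not":   # negation flips polarity
                polarity = -polarity
            score += polarity
    return "pos" if score > 0 else "neg" if score < 0 else None

# Step 1: label an unannotated in-domain corpus with the rule-based classifier
corpus = [s.split() for s in [
    "great plot and good acting",
    "not good at all",
    "awful and boring film",
    "good soundtrack",
]]
labeled = [(t, rule_label(t)) for t in corpus]
labeled = [(t, y) for t, y in labeled if y is not None]   # drop undecided cases

# Step 2: train a bag-of-words Naive Bayes on the rule-labeled instances,
# so it also picks up in-domain words the lexicon does not cover
counts = {"pos": Counter(), "neg": Counter()}
for toks, y in labeled:
    counts[y].update(toks)
vocab = {w for c in counts.values() for w in c}

def nb_classify(tokens):
    best, best_lp = None, float("-inf")
    for y, c in counts.items():
        total = sum(c.values())
        lp = sum(math.log((c[w] + 1) / (total + len(vocab))) for w in tokens)
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# 'soundtrack' was never in the lexicon, but the bootstrapped model learned it
print(nb_classify("nice soundtrack".split()))
```

The key property the sketch demonstrates is that step 2 generalizes beyond the lexicon: words that merely co-occurred with lexicon hits in the pseudo-labeled data become polarity cues themselves.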
In two eye-tracking experiments, we investigated the relationship between the subject preference in the resolution of subject-object ambiguities in German embedded clauses and semantic word order constraints (i.e., prominence hierarchies relating to the specificity/referentiality of noun phrases, case assignment and thematic role assignment). Our central research question concerned the time course with which prominence information is used, and particularly whether it modulates the subject preference. In both experiments, we replicated previous findings of reanalysis effects for object-initial structures. Our findings further suggest that noun phrase prominence does not alter initial parsing strategies (viz., the subject preference), but rather modulates the ease of later reanalysis processes. In Experiment 1, the object case assigned by the verb did not affect the ease of reanalysis. However, the syntactic reanalysis was rendered more difficult when the order of the two arguments violated the specificity/referentiality hierarchy. Experiment 2 revealed that the initial subject preference also holds for verbs favoring an object-initial base order (i.e., dative object-experiencer verbs). However, the advantage for subject-initial sentences is neutralized in relatively late processing stages when the thematic role hierarchy and the specificity hierarchy converge to promote scrambling.
The ISOcat registry reloaded
(2012)
The linguistics community is building a metadata-based infrastructure for the description of its research data and tools. At its core is the ISOcat registry, a collaborative platform to hold a (to be standardized) set of data categories (i.e., field descriptors). Descriptors have definitions in natural language and few explicit interrelations. With the registry growing to many hundred entries, authored by many, it is becoming increasingly apparent that the rather informal definitions and their glossary-like design make it hard for users to grasp, exploit and manage the registry’s content. In this paper, we take a large subset of the ISOcat term set and reconstruct from it a tree structure, following in the footsteps of schema.org. Our ontological re-engineering yields a representation that gives users a hierarchical view of linguistic, metadata-related terminology. The new representation adds to the precision of all definitions by making explicit information which is only implicitly given in the ISOcat registry. It also helps to uncover and address potential inconsistencies in term definitions as well as gaps and redundancies in the overall ISOcat term set. The new representation can serve as a complement to the existing ISOcat model, providing additional support for authors and users in browsing, (re-)using, maintaining, and further extending the community’s terminological metadata repertoire.
Although there is growing interest among policy makers in higher education issues (especially on an international scale), there is still a lack of theoretically well-grounded comparative analyses of higher education policy. Even broadly discussed topics in higher education research, such as the potential convergence of European higher education systems in the course of the Bologna Process, suffer from a thin empirical and comparative basis. This paper aims to address these problems by tackling theoretical questions concerning the domestic impact of the Bologna Process and the role national factors play in determining its effects on cross-national policy convergence. It develops a distinct theoretical approach for the systematic and comparative analysis of cross-national policy convergence. In doing so, it draws on insights from related research areas, namely the literature on Europeanization as well as studies dealing with cross-national policy convergence.
Situiertheit
(1993)