Refine
Year of publication
Document Type
- Article (485)
- Conference Proceeding (262)
- Part of a Book (193)
- Book (25)
- Review (18)
- Part of Periodical (12)
- Other (6)
- Working Paper (5)
- Image (1)
- Periodical (1)
Language
- English (590)
- German (381)
- French (14)
- Portuguese (6)
- Multiple languages (4)
- Russian (4)
- Ukrainian (4)
- Latvian (2)
- Italian (1)
- Polish (1)
Keywords
- Deutsch (319)
- Korpus <Linguistik> (232)
- Konversationsanalyse (85)
- Computerlinguistik (80)
- Interaktion (80)
- Gesprochene Sprache (61)
- Annotation (47)
- Forschungsdaten (47)
- Kommunikation (44)
- Englisch (41)
Publication state
- Veröffentlichungsversion (1008)
Review state
- Peer-Review (1008)
Publisher
- IDS-Verlag (91)
- de Gruyter (43)
- Association for Computational Linguistics (34)
- Schmidt (30)
- Institut für Deutsche Sprache (28)
- Verlag für Gesprächsforschung (22)
- Leibniz-Institut für Deutsche Sprache (IDS) (21)
- European Language Resources Association (ELRA) (19)
- Zenodo (19)
- European Language Resources Association (18)
Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus
(2021)
Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.
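The modular, parameterizable filtering stages such a pipeline chains together can be sketched as follows; this is a minimal illustration, and the function names, thresholds and document schema are invented for demonstration, not Ungoliant's actual API:

```python
# Hypothetical sketch of composable corpus-filtering stages; each stage is a
# parameterized predicate, and the pipeline keeps documents passing all stages.

def min_length_filter(min_chars=100):
    """Drop documents shorter than min_chars characters."""
    def apply(doc):
        return len(doc["text"]) >= min_chars
    return apply

def lang_confidence_filter(lang, min_conf=0.8):
    """Keep documents whose language ID matches `lang` with enough confidence."""
    def apply(doc):
        return doc["lang"] == lang and doc["confidence"] >= min_conf
    return apply

def run_pipeline(docs, filters):
    """Apply each filter stage in order; keep the survivors."""
    return [d for d in docs if all(f(d) for f in filters)]

docs = [
    {"text": "x" * 500, "lang": "de", "confidence": 0.95},
    {"text": "too short", "lang": "de", "confidence": 0.99},
    {"text": "y" * 500, "lang": "en", "confidence": 0.97},
]
kept = run_pipeline(docs, [min_length_filter(100), lang_confidence_filter("de")])
print(len(kept))  # 1
```

Because each stage is an independent value, stages can be added, removed or re-parameterized without touching the pipeline driver, which is the property the abstract emphasizes.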
The focus of this paper will be on lexical information systems and the framework guidelines for the definition of the curricula within the educational system of the Autonomous Province of Bolzano/Bozen (Italy). In Italy, the competences to be achieved at different school levels are published in the form of general guidelines. On this basis each school has to specify the general competency goals and to spell them out in a concrete curriculum. In this paper I will examine to what extent lexical information systems are represented in the framework guidelines within the German and the Italian educational system of the Autonomous Province, these being separate systems. In a second step, I will check the representations of the resources against the "Villa Vigoni Theses on Lexicography". Finally, I will discuss the results and give an outlook for further research.
More is being written today than ever before, and digital communication contributes substantially to this; a large share of today's writing is dialogic everyday writing. Consequently, online communication is increasingly becoming a topic in educational contexts and in German-language didactics. It remains an open question, however, how texts of interaction-oriented writing should be assessed, since they can differ in many respects from those of text-oriented writing. While norms for text-oriented writing exist and are recorded in language codices, it is not clear what the point of reference for interaction-oriented texts could be. In this contribution we analyse the use of connectives in online communication and the representation of online-specific features in language resources. The results show that specific online usages of connectives are hardly considered or described in language codices.
Communicating in social media and dealing with hypertexts is, in 2020, no longer a fringe phenomenon. The linguistic peculiarities of internet-based communication and social media are by now well researched and described, yet so far they are treated at best marginally in German grammars, with the exception of Hoffmann (2014). Even more recent approaches to text analysis, e.g. Ágel (2017), concentrate on stable, linearly organized written texts. The same applies to approaches developed primarily for assessing writing products in educational contexts.
The annual microcensus provides Germany’s most important official statistics. Unlike a census, it does not cover the whole population but a representative 1% sample of it. In 2017, the German microcensus asked a question on the language of the population: ‘Which language is mainly spoken in your household?’ Unfortunately, the question, its design and its position within the microcensus questionnaire feature several shortcomings, the main one being that multilingual repertoires cannot be captured. The paper therefore offers recommendations for improving the microcensus’ language question: first and foremost, the question (i.e. its wording, design, and answer options) should make it possible to count multilingual repertoires.
This paper explores how attitudes affect the seemingly objective process of counting speakers of varieties using the example of Low German, Germany’s sole regional language. The initial focus is on the basic taxonomy of classifying a variety as a language or a dialect. Three representative surveys then provide data for the analysis: the Germany Survey 2008, the Northern Germany Survey 2016, and the Germany Survey 2017. The results of these surveys indicate that there is no consensus concerning the evaluation of Low German’s status and that attitudes towards Low German are related to, for example, proficiency in the language. These attitudes are shown to matter when counting speakers of Low German and investigating the status it has been accorded.
To date there are no accurate, representative statistics on which languages are spoken in Germany. Although various surveys ask about mother tongues or languages spoken at home, due to several shortcomings in the survey design the results of the existing surveys do not adequately reflect the linguistic reality of the population living in Germany. Drawing on three surveys, this contribution shows that the very instruments used to collect language data are shaped by language attitudes, and that this severely limits the validity of the results. These shortcomings apply to language statistics concerning the entire population of Germany, children and adolescents included.
The Centre de Sociologie de l’Innovation (CSI) of the Ecole des Mines in Paris is a stronghold of the sociology of science, where the works of Bruno Latour and Michel Callon were produced. Their investigations triggered a series of analyses of scientific practices that are sometimes subsumed, especially in the Anglo-Saxon literature, under the term “Actor-Network Theory” (ANT). This fundamental contribution to the sociology of science is characterized by heightened attention both to the practices of scientists, to “science in action”, to objects, artefacts and technical devices, and to the networks in which humans and non-humans come together and circulate. A group of researchers at the CSI, Madeleine Akrich, Antoine Hennion and Vololona Rabeharisoa, kindly agreed to discuss quite freely, in the following text, the theme of the present issue of ZBBS and the way in which they position themselves, in their research fields and in their work, towards the questions raised by taking into account social interactions in scientific work processes.
Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT-curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.
In this paper, the author studies the role of the dictionary in first language acquisition, highlighting its didactic value. Based on two Romanian lexicographical works of the 19th century, Lexiconul de la Buda (Buda, 1825) [the Lexicon of Buda] and Vocabularu romano-francesu (Bucharest, 1870) [the Romanian-French Vocabulary], the author analyses the normative information recorded in the articles in order to determine which level of language (i.e. phonetic, morphological, syntactic or lexical) is concerned. Such an approach makes it possible to distinguish between possible changes at the level of perception, at the level of grammatical, lexical and semantic description (i.e. the settling of the word in the first language), and at the technical level (i.e. the crafting of the article and of the dictionary).
This paper aims to verify whether the most important online Brazilian Portuguese dictionaries include some of the neologisms identified in texts published from the 1990s to the 2000s, formed with the elements ciber-, e-, bio-, eco- and narco-, which we refer to as fractomorphemes (fracto-morphèmes). Three online dictionaries were analyzed (Aulete, Houaiss and Michaelis), as well as the Vocabulário Ortográfico da Língua Portuguesa (VOLP). We conclude that all three dictionaries and VOLP include neologisms with these elements; Michaelis and VOLP do not include separate entries for bound morphemes, whereas Houaiss includes entries for all of them and Aulete includes entries for bio-, eco- and narco-. Aulete also describes the neological meaning of eco- and narco-, whereas Houaiss does not.
Collaborative work in NFDI
(2023)
The non-profit association National Research Data Infrastructure (NFDI) promotes science and research through a national research data infrastructure. Its aim is to develop and establish overarching research data management (RDM) for Germany and to increase the efficiency of the entire German science system. After a two-and-a-half-year build-up phase, the process of adding new consortia, each representing a different data domain, ended in March 2023. NFDI now has 26 disciplinary consortia (and one additional basic-service collaboration). Now the full extent of cross-consortial interaction is beginning to show.
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose compositional meanings are hard to trace back to individual word semantics, there is still ample scope for improvement regarding computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard, derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a list of Wikipedia idioms, achieving state-of-the-art accuracy.
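One of the count-based collocation measures of the kind mentioned above can be illustrated with pointwise mutual information (PMI); the corpus counts below are invented for demonstration. A word pair that co-occurs far more often than chance predicts scores high, one possible signal of the formal fixedness that marks idiom candidates:

```python
import math

def pmi(pair_count, w1_count, w2_count, total):
    """Pointwise mutual information of a word pair, from raw corpus counts."""
    p_pair = pair_count / total           # observed co-occurrence probability
    p_independent = (w1_count / total) * (w2_count / total)  # chance baseline
    return math.log2(p_pair / p_independent)

# Invented counts: the pair appears 50 times although its parts are rare,
# so co-occurrence greatly exceeds the independence baseline.
score = pmi(pair_count=50, w1_count=120, w2_count=80, total=100_000)
print(round(score, 2))  # 9.02
```

In a feature-based setup like the one described, such scores would be one column of the feature matrix fed to the classifier, alongside predictive collocation and context-similarity measures.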
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms issued for KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset consisting of random ngram samples. The classification is done by implementing a rather shallow, statistics-based pipeline without intensive preprocessing and examinations on the morphosyntactic and semantic level. We describe the overall approach, the differences between the original dataset and the dataset of the KONVENS task, provide experimental classification results, and analyse the individual contributions of our feature sets.
This paper investigates the use in everyday spoken German of the present subjunctive (Konjunktiv I), a form traditionally labelled as a feature of standard written language and therefore as typical of communication genres based on it, such as press texts and reporting. Through an analysis of corpus data carried out according to the theory and method of Interactional Linguistics, and encompassing private, institutional and public interactional domains, the paper shows how this particular verb form expresses different epistemic stances according to its syntactic embedding.
Post-field syntax and focalization strategies in National Socialist political speech. This paper deals with a syntactic feature of spoken German, post-field filling, and with its occurrence in one specific discourse type, political speech, throughout one significant period of the history of the German language: National Socialism. On the basis of collected examples, the paper points out the communicative-pragmatic function of right dislocation in NS political speech.
This paper focuses on so-called syntactic projection phenomena in the German language. This term from German Gesprächsforschung (conversation analysis) describes the fact that an utterance, or part of it, foreshadows another one. The paper aims to show how such projection phenomena are consciously exploited for rhetorical purposes, observed on the basis of excerpts from the Stuttgart 21 mediation talks. The linguistic analysis focuses on syntactic projection phenomena involving the causal adverbial connectives deshalb and deswegen.
In this contribution we present the results of a study on the intonation of questioning activities in everyday German conversations. Our investigation explores the extent to which intonation contributes to the contextualization of conversational questions. In the analysis we draw on Peters’ autosegmental-metrical model and Selting’s taxonomic model from interactional prosody research. These models describe phonological and pragmatic aspects of question intonation, respectively; taken on their own, neither dimension can provide a complete description. On the basis of authentic conversational data from the FOLK corpus, we argue for the compatibility of Peters’ autosegmental-metrical model and Selting’s taxonomic model of question intonation. The features of the two models can be combined into bundles that make it possible to capture the intonation of questions.
An ongoing academic and research program, the “Vocabula Grammatica” lexicon, implemented by the Centre for the Greek Language (Thessaloniki, Greece), aims at lemmatizing all the philological, grammatical, rhetorical, and metrical terms in the written texts of scholars (philologists and scholiasts) who curated the ancient Greek literature from the beginning of the Hellenistic period (4th/3rd c. BC) until the end of the Byzantine era (15th c. AD). In particular, it aspires to fill serious gaps (a) in the study of ancient Greek scholarship and (b) in the lexicography of the ancient Greek language and literature. By providing specific examples, we will highlight the typical and methodological features of the forthcoming dictionary.
The kick-off workshop "Lexik des gesprochenen Deutsch: Forschungsstand, Erwartungen und Anforderungen an die Entwicklung einer innovativen lexikografischen Ressource" took place on 16 and 17 February 2017 at the Institut für Deutsche Sprache (IDS) in Mannheim. The project "Lexik des gesprochenen Deutsch" (= LeGeDe, Leibniz competition 2016, funding line "Innovative Vorhaben"), funded by the Leibniz Association, began its work at the IDS in September 2016. Its main goal is the creation of a corpus-based electronic resource on the lexis of spoken German, based on lexicological and conversation-analytic investigations of authentic spoken-language data.
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a loss-less standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard ’Transcription of spoken language’ with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the “Tsakorpus” search platform. This step allows us to make the corpora available through a web-based search interface. As an addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
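The second conversion step, from ISO/TEI to JSON records for the search platform, can be sketched as follows; this is a toy illustration, not the INEL project's actual converter, and the element and attribute names are simplified stand-ins for the ISO/TEI vocabulary:

```python
import json
import xml.etree.ElementTree as ET

# Simplified stand-in for an ISO/TEI transcription fragment: two time-aligned
# utterances (<u>) with tokenized words (<w>).
tei_fragment = """
<text>
  <u who="SPK1" start="0.00" end="1.20"><w>mon</w><w>čaŋka</w></u>
  <u who="SPK2" start="1.20" end="2.05"><w>äj</w></u>
</text>
"""

def tei_to_json(xml_string):
    """Convert each <u> utterance to one JSON-serializable sentence record."""
    root = ET.fromstring(xml_string)
    records = []
    for u in root.iter("u"):
        records.append({
            "speaker": u.get("who"),
            "start": float(u.get("start")),
            "end": float(u.get("end")),
            "words": [w.text for w in u.iter("w")],
        })
    return records

records = tei_to_json(tei_fragment)
print(json.dumps(records[0], ensure_ascii=False))
```

The real pipeline additionally carries morpheme segmentation and further annotation tiers, but the shape is the same: one standards-compliant XML source, one flat JSON representation per sentence for indexing.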
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on existing resources developed by the participating institutions earlier, QUEST develops tools that could be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible for researchers without substantial technical background and helping them produce high-quality data. The main tools we intend to provide are the depositors’ questionnaire and automatic quality assurance, both developed as web applications. They are accompanied by a Knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we split linguistic data into three resource classes (data deposits, collections and corpora). The class of a resource defines the strictness of the quality assurance it should undergo. This division is introduced so that too strict quality criteria do not prevent researchers from depositing their data.
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on existing resources developed by the participating institutions earlier, QUEST also develops tools that could be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible for researchers without substantial technical background and helping them produce high-quality data. The main tools we intend to provide are a questionnaire and automatic quality assurance for depositors of language resources, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we consider three main data maturity levels in order to decide on a suitable level of strictness of the quality assurance. This division has been introduced to avoid that a set of ideal quality criteria prevent researchers from depositing or even assessing their (legacy) data. The tools described in the paper are work in progress and are expected to be released by the end of the QUEST project in 2022.
The CMDI Explorer
(2020)
We present the CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. The CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
CMDI Explorer
(2021)
We present CMDI Explorer, a tool that empowers users to easily explore the contents of complex CMDI records and to process selected parts of them with little effort. The tool allows users, for instance, to analyse virtual collections represented by CMDI records, and to send collection items to other CLARIN services such as the Switchboard for subsequent processing. CMDI Explorer hence adds functionality that many users felt was lacking from the CLARIN tool space.
This paper addresses long-term archival for large corpora. It focuses on three aspects specific to language resources, namely (1) the removal of resources for legal reasons, (2) versioning of (unchanged) objects in constantly growing resources, especially where objects can be part of multiple releases but also part of different collections, and (3) the conversion of data to new formats for digital preservation. It is motivated why language resources may have to be changed, and why formats may need to be converted. As a solution, the use of an intermediate proxy object called a signpost is suggested. The approach is exemplified with the corpora of the Leibniz Institute for the German Language in Mannheim, namely the German Reference Corpus (DeReKo) and the Archive for Spoken German (AGD).
Signposts for CLARIN
(2020)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold et al. 2020 present Signposts as a solution to challenges in long-term preservation of corpora, especially corpora that are continuously extended and subject to modification, e.g., due to legal injunctions, but also may overlap with respect to constituents, and may be subject to migrations to new data formats. We describe the contribution Signposts can make to the CLARIN infrastructure and document the design for the CMDI profile.
Signposts for CLARIN
(2021)
An implementation of CMDI-based signposts and its use is presented in this paper. Arnold, Fisseni et al. (2020) present signposts as a solution to challenges in long-term preservation of corpora. Though applicable to digital resources in general, we focus on corpora, especially those that are continuously extended or subject to modification, e.g., due to legal injunctions, but also may overlap with respect to constituents, and may be subject to migrations to new data formats. We describe the contribution signposts can make to the CLARIN infrastructure, notably virtual collections, and document the design for the CMDI profile.
In this contribution we address the question of which steps must be taken to make scripts used in the preparation and/or analysis of research data as FAIR as possible. We focus both on reproducibility, i.e. the path from the (raw) data to the results of a study, and on reusability, i.e. the possibility of applying a study’s methods to other data by means of the script, and consider the following aspects: working environment, data validation, modularization, documentation, and licensing.
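The data-validation aspect named above can be sketched as follows; the schema and column names are invented for illustration. The idea is that an analysis script checks its input against an explicit, documented schema before any processing, so that reuse on other data fails early and with a precise message:

```python
# Hypothetical schema for an analysis script's input table: column name -> type.
EXPECTED_COLUMNS = {"speaker_id": str, "duration_s": float}

def validate_rows(rows):
    """Raise ValueError with an exact location if any row violates the schema."""
    for i, row in enumerate(rows):
        for col, typ in EXPECTED_COLUMNS.items():
            if col not in row:
                raise ValueError(f"row {i}: missing column {col!r}")
            if not isinstance(row[col], typ):
                raise ValueError(f"row {i}: column {col!r} is not {typ.__name__}")
    return True

# Valid input passes silently; invalid input fails before any computation.
assert validate_rows([{"speaker_id": "S1", "duration_s": 2.5}])
```

Keeping the validation in its own documented function also serves the modularization and documentation aspects: the schema doubles as machine-checked documentation of what data the script expects.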
Prominence has been widely studied on the word level and the syllable level. An extensive study comparing the two approaches is missing in the literature. This study investigates how word and syllable prominence relate to each other in German. We find that perceptual ratings based on the word level are more extreme than those based on the syllable level. The correlations between word prominence and acoustic features are greater than the correlations between syllable prominence and acoustic features.
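The kind of comparison reported here, correlating prominence ratings with an acoustic feature at two levels of annotation, can be sketched in a few lines; the ratings and acoustic values below are invented toy numbers, not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Invented toy data: the word-level ratings track the acoustic feature more
# tightly than the syllable-level ratings, mirroring the reported pattern.
acoustic = [0.2, 0.5, 0.9, 1.4, 2.0]
word_ratings = [1, 2, 4, 4, 5]
syllable_ratings = [2, 1, 4, 3, 5]
print(pearson_r(acoustic, word_ratings) > pearson_r(acoustic, syllable_ratings))  # True
```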
Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break down speech into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model trained on 20 hours of conversational speech that recognizes word meanings within the range of human performance (model 25%, native speakers 20–44%), without making use of phone or word form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a ‘wide’ yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
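The two-layer idea described here can be caricatured in a few lines: sparse binary input units (summaries of frequency-band changes) map linearly onto meaning output units, and comprehension picks the most strongly activated meaning. The cues, weights and vocabulary below are invented toy values; the actual model learns its weight matrix from 20 hours of speech rather than using hand-set numbers:

```python
meanings = ["hand", "sand"]

# weights[i][j]: connection strength from input cue i to meaning j (toy values).
weights = [
    [0.9, 0.1],   # cue 0 mostly supports "hand"
    [0.2, 0.8],   # cue 1 mostly supports "sand"
    [0.5, 0.5],   # cue 2 is uninformative
]

def comprehend(active_cues):
    """Sum the weights of the active (sparse) cues; return the best meaning."""
    activation = [0.0] * len(meanings)
    for i in active_cues:
        for j in range(len(meanings)):
            activation[j] += weights[i][j]
    return meanings[max(range(len(meanings)), key=activation.__getitem__)]

print(comprehend({0, 2}))  # hand
```

The point of the caricature is that no phone or word-form layer intervenes: acoustic cues activate meanings directly through one weight matrix.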
In our study we use the experimental framework of priming to manipulate our subjects’ expectations of syllable prominence in sentences with a well-defined syntactic and phonological structure. We show that it is possible to prime prominence patterns and that priming leads to significant differences in the judgment of syllable prominence.
Streefkerk defines prominence as the perceptually outstanding parts of spoken language. An optimal rating scale for syllable prominence has not yet been found. This paper evaluates a 4-point, an 11-point, a 31-point, and a continuous scale for rating syllable prominence and lends support to scales with a higher number of levels. The priming effects found by Arnold et al. could only be replicated using the 31-point scale.
We present contrastive analyses of the filling and frequency distribution of prefields (Vorfelder) in German and their French, Italian, Norwegian, Polish and Hungarian equivalents in morphosyntactically annotated Wikipedia corpora. Using corpus-analytic methods, the study demonstrates quantitative correlations in the language-specific realizations of prefields that are in line with typical structural properties of the contrast languages examined. The results suggest, however, that the prefield structures examined are, despite the considerable size and thematic diversity of the Wikipedia corpora, not sufficiently representative to draw unrestricted conclusions about general structural properties of the six contrast languages. This is due in particular to the pronounced genre specificity of the (online) encyclopedia as a medium, as could be shown with the help of further comparison corpora.
In many European languages, propositional arguments (PAs) can be realized as different types of structures. Cross-linguistically, complex structures with PAs show a systematic correlation between the strength of the semantic bond and the syntactic union (cf. Givón 2001; Wurmbrand/Lohninger 2023). Different languages also show similarities with respect to the (lexical) licensing of different PAs (cf. Noonan 1985; Givón 2001; Cristofaro 2003 on different predicate types). On a more fine-grained level, however, variation across languages can be observed both with respect to the syntactic-semantic properties of PAs and with respect to their licensing and usage. This presentation takes a multi-contrastive view of different types of PAs as syntactic subjects and objects by looking at five European languages: EN, DE, IT, PL and HU. Our goal is to identify the parameters of variation in the clausal domain with PAs and thereby to contribute to a better understanding of the individual language systems on the one hand and the nature of linguistic variation in the clausal domain on the other. Phenomena and methodology: we investigate the following types of PAs: direct object (DO) clauses (1), prepositional object (PO) clauses (2), subject clauses (3), and nominalizations (4, 5). Additionally, we discuss clause union phenomena (6, 7). The analyzed parameters include, among others, finiteness, the linear position of the PA, the (non-)presence of a correlative element, the (non-)presence of a complementizer, and the lexical-semantic class of the embedding verb. The phenomena are analyzed on the basis of corpus data (from mono- and multilingual corpora), experimental data (acceptability judgement surveys), and introspective data.
This article investigates mundane photo taking practices with personal mobile devices in the co-presence of others, as well as “divergent” self-initiated smartphone use, thereby exploring the impact of everyday technologies on social interaction. Utilizing multimodal conversation analysis, we examined sequences in which young adults take pictures of food and drinks in restaurants and cafés. Although everyday interactions are abundant in opportunities for accomplishing food photography as a side activity, our data show that taking pictures is also often prioritized over other activities. Through a detailed sequential analysis of video recordings and dynamic screen captures of mobile devices, we illustrate how photographers orient to the momentary opportunities for and relevance of photo taking, that is, how they systematically organize their photographing with respect to the ongoing social encounter and the (projected) changes in the material environment. We investigate how the participants multimodally negotiate the “mainness” and “sideness” (Mondada, 2014) of situated food photography and describe some particular features of participants’ conduct in moments of mundane multiactivity.
The classification of verbs in Levin's (1993) English Verb Classes and Alternations: A Preliminary Investigation, on the basis of both intuitive semantic grouping and their participation in valence alternations, is often used by the NLP community as evidence of the semantic similarity of verbs (Jing & McKeown 1998; Lapata & Brew 1999; Kohl et al. 1998). In this paper, we compare the Levin classification with the work of the FrameNet project (Fillmore & Baker 2001), where words (not just verbs) are grouped according to the conceptual structures (frames) that underlie them, and their combinatorial patterns are inductively derived from corpus evidence. This means that verbs grouped together in FrameNet (FN) might be semantically similar but have different (or no) alternations, and that verbs which share the same alternation might be represented in two different semantic frames.
Playing videogames is a popular social activity; people play videogames in different places, on different media, in different situations, alone or with partners, online or offline. Unsurprisingly, they thereby share space (physically or virtually) with other playing or non-playing people. This special issue investigates, across different contexts and settings, how non-players become participants in the gaming interaction and how players and non-players co-construct presence. The introduction provides a problem-related context for the individual contributions and then briefly presents them.
This paper investigates situations in French videogame interactions where non-players who share the same physical space as players participate in the gaming activities as spectators. Through a detailed multimodal and sequential analysis, we show that being a spectator is a local achievement of all co-present participants, players and non-players alike.
As the Web ought to be considered a series of sources rather than a source in itself, a problem facing corpus construction resides in meta-information and categorization. In addition, we need focused data to shed light on particular subfields of the digital public sphere. Blogs are relevant to that end, especially if the resulting web texts can be extracted along with metadata and made available in coherent and clearly describable collections.
If one wants to describe the content of heterogeneous research data via metadata, bibliographic information alone is not sufficient. Rather, additional descriptive means are needed that do justice to the nature and complexity of the given research resources. Different kinds of research data require different metadata profiles, which are defined via shared components. Such research data can be collected (e.g., via OAI-PMH harvesting) and explored through a uniform interface using faceted search. The application context described here can be generalized beyond linguistic data.
Linguistics faces the same challenge as many other sciences as it continues to grow into increasingly complex subfields, each with its own separate or overarching branches. While linguists are certainly aware of the overall structure of the research field, they cannot follow all developments outside their own subfields. It is thus important to help specialists and newcomers alike to bushwhack through evolved or unknown territory of linguistic data. A considerable amount of research data in linguistics is described with metadata. While studies described and published in archived journals and conference proceedings receive a quite homogeneous set of metadata tags (e.g., author, title, publisher), this does not hold for the empirical data and analyses that underlie such studies. Moreover, lexicons, grammars, experimental data, and other types of resources come in different forms; and to make things worse, their description in terms of metadata is also not uniform, if it exists at all. These problems are well known, and there are now a number of international initiatives (e.g., CLARIN, FLaReNet, META-NET, DARIAH) to build infrastructures for managing linguistic resources. The NaLiDa project, funded by the German Research Foundation, aims at facilitating the management of and access to linguistic resources originating from German research institutions. In cooperation with the German SFB 833 research center, we are developing a combination of faceted and full-text search to give integrated access through heterogeneous metadata sets. Our approach is supported by a central registry for metadata field descriptors and a component repository for structured groups of data categories as larger building blocks.
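The core idea of combining faceted search with filtering over heterogeneous metadata sets can be sketched in a few lines. This is only an illustrative toy, assuming invented records and field names, not the NaLiDa schema or its registry of field descriptors:

```python
from collections import Counter

# Hypothetical metadata records; field names and values are invented for
# illustration and are not taken from the NaLiDa component registry.
records = [
    {"type": "lexicon", "language": "German", "title": "Verb valency lexicon"},
    {"type": "corpus", "language": "German", "title": "Spoken German corpus"},
    {"type": "corpus", "language": "English", "title": "Learner corpus"},
]

def facet_counts(records, field):
    """Count how many records carry each value of a metadata field."""
    return Counter(r[field] for r in records if field in r)

def filter_records(records, **facets):
    """Narrow the record set to those matching all selected facet values."""
    return [r for r in records
            if all(r.get(f) == v for f, v in facets.items())]

print(facet_counts(records, "type"))
print(filter_records(records, type="corpus", language="German"))
```

A real faceted-search UI simply recomputes the facet counts after every filtering step, so that users always see how many resources remain behind each facet value.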
The long road to a historical dictionary of Lower Sorbian. Towards a lexical information system
(2022)
The Sorbian Institute has been taking preparatory steps for a historical-documentary vocabulary information system for Lower Sorbian for about 10 years. To this end, the entire extant written material (16th–21st centuries) of this severely endangered European minority language is to be systematically evaluated. An attempt made a few years ago to organise and finance the project as a long-term scientific project was ultimately not successful. Therefore, it can only be advanced step by step and via some detours. The article reports on the interim status of the project, especially with respect to the creation of a reliable database.
The term “pivot” usually refers to two overlapping syntactic units such that the completion of the first unit simultaneously launches the second. In addition, pivots are generally said to be characterized by the smooth prosodic integration of their syntactic parts. This prosodic integration is typically achieved by prosodic-phonetic matching of the pivot components. As research on such turns in a range of languages has illustrated, speakers routinely deploy pivots so as to be able to continue past a point of possible turn completion, in the service of implementing some additional or revised action. This article seeks to build on, and complement, earlier research by exploring two issues in more detail as follows: (1) what exactly do pivotal turn extensions accomplish on the action dimension, and (2) what role does prosodic-phonetic packaging play in this? We will show that pivot constructions not only exhibit various degrees of prosodic-phonetic (non-)integration, i.e., differently strong cesuras, but that they can be ordered on a continuum, and that this cline maps onto the relationship of the actions accomplished by the components of the pivot construction. While tighter prosodic-phonetic integration, i.e., weak(er) cesuring, co-occurs with post-pivot actions whose relationship to that of the pre-pivot tends to be rather retrospective in character, looser prosodic-phonetic integration, i.e., strong(er) cesuring, is associated with a more prospective orientation of the post-pivot’s action. These observations also raise more general questions with regard to the analysis of action.
In conversation, speakers need to plan and comprehend language in parallel in order to meet the tight timing constraints of turn taking. Given that language comprehension and speech production planning both require cognitive resources and engage overlapping neural circuits, these two tasks may interfere with one another in dialogue situations. Interference effects have been reported on a number of linguistic processing levels, including lexicosemantics. This paper reports a study on semantic processing efficiency during language comprehension in overlap with speech planning, where participants responded verbally to questions containing semantic illusions. Participants rejected a smaller proportion of the illusions when planning their response in overlap with the illusory word than when planning their response after the end of the question. The obtained results indicate that speech planning interferes with language comprehension in dialogue situations, leading to reduced semantic processing of the incoming turn. Potential explanatory processing accounts are discussed.
We present a collection of (currently) about 5,500 commands directed to voice-controlled virtual assistants (VAs) by sixteen initial users of a VA system in their homes. The collection comprises recordings captured by the VA itself and with a conditional voice recorder (CVR) selectively capturing recordings including the VA-directed commands plus some surrounding context. Alongside a description of the collection, we present initial findings on the patterns of use of the VA systems during the first weeks after installation, including usage timing, the development of usage frequency, distributions of sentence structures across commands, and (the development of) command success rates. We discuss the advantages and disadvantages of the applied collection-specific recording approach and describe potential research questions that can be investigated in the future based on the collection, as well as the merit of combining quantitative corpus linguistic approaches with qualitative in-depth analyses of single cases.
To ensure short gaps between turns in conversation, next speakers regularly start planning their utterance in overlap with the incoming turn. Three experiments investigate which stages of utterance planning are executed in overlap. E1 establishes effects of associative and phonological relatedness of pictures and words in a switch-task from picture naming to lexical decision. E2 focuses on effects of phonological relatedness and investigates potential shifts in the time-course of production planning during background speech. E3 required participants to verbally answer questions as a base task. In critical trials, however, participants switched to visual lexical decision just after they began planning their answer. The task-switch was time-locked to participants' gaze for response planning. Results show that word form encoding is done as early as possible and not postponed until the end of the incoming turn. Hence, planning a response during the incoming turn is executed at least until word form activation.
In conversation, turn-taking is usually fluid, with next speakers taking their turn right after the end of the previous turn. Most, but not all, previous studies show that next speakers start to plan their turn early, if possible already during the incoming turn. The present study makes use of the list-completion paradigm (Barthel et al., 2016), analyzing speech onset latencies and eye-movements of participants in a task-oriented dialogue with a confederate. The measures are used to disentangle the contributions to the timing of turn-taking of early planning of content on the one hand and initiation of articulation as a reaction to the upcoming turn-end on the other hand. Participants named objects visible on their computer screen in response to utterances that did, or did not, contain lexical and prosodic cues to the end of the incoming turn. In the presence of an early lexical cue, participants showed earlier gaze shifts toward the target objects and responded faster than in its absence, whereas the presence of a late intonational cue only led to faster response times and did not affect the timing of participants' eye movements. The results show that with a combination of eye-movement and turn-transition time measures it is possible to tease apart the effects of early planning and response initiation on turn timing. They are consistent with models of turn-taking that assume that next speakers (a) start planning their response as soon as the incoming turn's message can be understood and (b) monitor the incoming turn for cues to turn-completion so as to initiate their response when turn-transition becomes relevant.
Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre-recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence-final verbs evokes larger task-evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn-taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
In conversation, interlocutors rarely leave long gaps between turns, suggesting that next speakers begin to plan their turns while listening to the previous speaker. The present experiment used analyses of speech onset latencies and eye-movements in a task-oriented dialogue paradigm to investigate when speakers start planning their responses. German speakers heard a confederate describe sets of objects in utterances that either ended in a noun [e.g., Ich habe eine Tür und ein Fahrrad (“I have a door and a bicycle”)] or a verb form [e.g., Ich habe eine Tür und ein Fahrrad besorgt (“I have gotten a door and a bicycle”)], while the presence or absence of the final verb either was or was not predictable from the preceding sentence structure. In response, participants had to name any unnamed objects they could see in their own displays with utterances such as Ich habe ein Ei (“I have an egg”). The results show that speakers begin to plan their turns as soon as sufficient information is available to do so, irrespective of further incoming words.
Comprehending conditional statements is fundamental for hypothetical reasoning about situations. However, the online comprehension of conditional statements containing different conditional connectives is still debated. We report two self-paced reading experiments on German conditionals presenting the conditional connectives wenn (‘if’) and nur wenn (‘only if’) in identical discourse contexts. In Experiment 1, participants read a conditional sentence followed by the confirmed antecedent p and the confirmed or negated consequent q. The final, critical sentence was presented word by word and contained a positive or negative quantifier (ein/kein ‘one/no’). Reading times of the two quantifiers did not differ between the two conditional connectives. In Experiment 2, presenting a negated antecedent, reading times for the critical positive quantifier (ein) did not differ between conditional connectives, while reading times for the negative quantifier (kein) were shorter for nur wenn than for wenn. The results show that comprehenders form distinct predictions about discourse continuations due to differences in the lexical semantics of the tested conditional connectives, shedding light on the role of conditional connectives in the online interpretation of conditionals in general.
Having found their way onto the computer screen, comics soon branched into webcomics. These kept many of the characteristics of print comic books, but gradually adopted new, unexplored modes of representation. Three relatively new 'enhancements' to the medium of comics are presented in this article: webcomics enhanced through the use of the infinite canvas, as proposed by Scott McCloud, those enhanced with videos and/or sound, and lastly those enhanced with interactive and ludic elements. All of these push the medium of comics into new waters, and in doing so they add new layers of meaning and modify their structure based on the make-up of the implemented features. The infinite canvas manages to lift some limitations of print comics without changing the overall feel too drastically, while animated and voiced webcomics, as well as interactive or game comics, have a much higher inclination to transgress into the domains of other media and transform themselves in order to accommodate and integrate these novel foreign features.
We present a method for detecting and reconstructing separated particle verbs in a corpus of spoken German by following an approach suggested for written language. Our study shows that the method can be applied successfully to spoken language, compares different ways of dealing with structures that are specific to spoken language corpora, analyses some remaining problems, and discusses ways of optimising precision or recall for the method. The outlook sketches some possibilities for further work in related areas.
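The basic mechanism of such a reconstruction can be sketched as follows: find a finite verb and a later clause-final separated particle, then rejoin particle and verb lemma. This is a minimal sketch using STTS tags (VVFIN for finite verbs, PTKVZ for separated particles) and an invented example sentence, not the authors' actual detection method:

```python
# Minimal sketch: reconstructing separated particle verbs from POS-tagged
# tokens (STTS tagset: VVFIN = finite full verb, PTKVZ = separated verbal
# particle). The sentence and lemmas below are invented for illustration.

def reconstruct_particle_verbs(tagged):
    """tagged: list of (token, pos, lemma) triples. Returns rejoined lemmas."""
    results = []
    verb_lemma = None
    for token, pos, lemma in tagged:
        if pos == "VVFIN":
            verb_lemma = lemma               # remember the clause's finite verb
        elif pos == "PTKVZ" and verb_lemma:  # later separated particle
            results.append(lemma + verb_lemma)  # "an" + "fangen" -> "anfangen"
            verb_lemma = None
    return results

# "Der Kurs fängt morgen an." -> separated parts of "anfangen" (to begin)
sentence = [("Der", "ART", "der"), ("Kurs", "NN", "Kurs"),
            ("fängt", "VVFIN", "fangen"), ("morgen", "ADV", "morgen"),
            ("an", "PTKVZ", "an")]
print(reconstruct_particle_verbs(sentence))  # ['anfangen']
```

Spoken-language corpora complicate this picture with disfluencies, abandoned clauses, and overlapping turns between the verb and its particle, which is where the trade-off between precision and recall discussed in the paper comes in.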
The paper reviews the results of work done in the context of TEI-Lex0, a joint ENeL / DARIAH / PARTHENOS initiative aimed at formulating guidelines for the encoding of retrodigitized dictionaries by streamlining and simplifying the recommendations of the “Print Dictionaries” chapter of the TEI Guidelines. TEI-Lex0 work is performed by teams concentrating on each of the main components of dictionary entries. The work presented here concerns proposals for constraining TEI-based encoding of orthographic, phonetic, and grammatical information on written and spoken forms of the lemma (headword), including auxiliary inflected forms. We also adduce examples of handling various types of orthographic and phonetic variants, as well as examples of handling the representation of inflectional paradigms, which have received less attention in the TEI Guidelines but which are nonetheless essential for properly exposing data content to the various uses that digitized lexica may have.
It is well known that the distribution of lexical and grammatical patterns is size- and register-sensitive (Biber 1986, and later publications). This fact alone presents a challenge to many corpus-oriented linguistic studies focusing on a single language. When it comes to cross-linguistic studies using corpora, the challenge becomes even greater due to the lack of high-quality multilingual corpora (Kupietz et al. 2020; Kupietz/Trawiński 2022) that are comparable with respect to size and register. That was the motivation for the creation of the European Reference Corpus EuReCo, an initiative started in 2013 at the Leibniz Institute for the German Language (IDS) together with several European partners (Kupietz et al. 2020). EuReCo is an emerging federated corpus, with large virtual comparable corpora across various languages and with an infrastructure supporting contrastive research. The core of the infrastructure is KorAP (Diewald et al. 2016), a scalable open-source platform supporting the analysis and visualisation of properties of texts annotated by multiple and potentially conflicting information layers, and supporting several corpus query languages. Until recently, EuReCo consisted of three monolingual subparts: the German Reference Corpus DeReKo (Kupietz et al. 2018), the Reference Corpus of Contemporary Romanian Language (Barbu Mititelu/Tufiş/Irimia 2018), and the Hungarian National Corpus (Váradi 2002). The goal of the present submission is twofold. On the one hand, it reports on the new component of EuReCo: a sample of the National Corpus of Polish (Przepiórkowski et al. 2010). On the other hand, it presents the results of a new pilot study using the newly extended EuReCo. This pilot study investigates selected Polish collocations involving light verbs and their prepositional / nominal complements (Fig. 1) and extends the collocation analyses of German, Romanian and Hungarian (Fig. 2) discussed in Kupietz/Trawiński (2022).
In mid-2017, as part of our activities within the TEI Special Interest Group for Linguists (LingSIG), we submitted to the TEI Technical Council a proposal for a new attribute class that would gather attributes facilitating simple token-level linguistic annotation. With this proposal, we addressed community feedback complaining about the lack of a specific tagset for lightweight linguistic annotation within the TEI. Apart from @lemma and @lemmaRef, up till now TEI encoders could only resort to using the generic attribute @ana for inline linguistic annotation, or to the quite complex system of feature structures for robust linguistic annotation, the latter requiring relatively complex processing even for the most basic types of linguistic features. As a result, there now exists a small set of basic descriptive devices which have been made available at the cost of only very small changes to the TEI tagset. The merit of a predefined TEI tagset for lightweight linguistic annotation is the homogeneity of tagging and thus better interoperability of simple linguistic resources encoded in the TEI. The present paper introduces the new attributes, makes a case for one more addition, and presents the advantages of the new system over the legacy TEI solutions.
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars available for German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in terms of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring and adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation will focus on the legal aspects and consequences of this choice of language data.
The paper presents best practices and results from projects in four countries dedicated to the creation of corpora of computer-mediated communication and social media interactions (CMC). Even though there are still many open issues related to building and annotating corpora of that type, there already exists a range of accessible solutions which have been tested in projects and which may serve as a starting point for a more precise discussion of how future standards for CMC corpora may (and should) be shaped.
Converting and Representing Social Media Corpora into TEI: Schema and best practices from CLARIN-D
(2016)
The paper presents results from a curation project within CLARIN-D, in which an existing one-million-word corpus of German chat communication has been integrated into the DeReKo and DWDS corpus infrastructures of the CLARIN-D centres at the Institute for the German Language (IDS, Mannheim) and at the Berlin-Brandenburg Academy of Sciences (BBAW, Berlin). The focus is on the solutions developed for converting and representing the corpus in a TEI format.
The paper reports the results of the curation project ChatCorpus2CLARIN. The goal of the project was to develop a workflow and resources for the integration of an existing chat corpus into the CLARIN-D research infrastructure for language resources and tools in the Humanities and the Social Sciences (http://clarin-d.de). The paper presents an overview of the resources and practices developed in the project, describes the added value of the resource after its integration and discusses, as an outlook, to what extent these practices can be considered best practices which may be useful for the annotation and representation of other CMC and social media corpora.
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI-P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on the three schema versions, a fourth version has been created which takes into account the experiences from encoding our corpora and which is specifically designed for the submission of a feature request to the TEI Council. On our poster we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
In this paper, we describe a schema and models which have been developed for the representation of corpora of computer-mediated communication (CMC corpora) using the representation framework provided by the Text Encoding Initiative (TEI). We characterise CMC discourse as dialogic, sequentially organised interchange between humans and point out that many features of CMC are not adequately handled by current corpus encoding schemas and tools. We formulate desiderata for a representation of CMC in encoding schemes and argue why the TEI is a suitable framework for the encoding of CMC corpora. We propose a model of basic CMC units (utterances, posts, and nonverbal activities) and of the macro- and micro-level structures of interactions in CMC environments. Based on these models, we introduce CMC-core, a TEI customisation for the encoding of CMC corpora, which defines CMC-specific encoding features on the four levels of elements, model classes, attribute classes, and modules of the TEI infrastructure. The description of our customisation is illustrated by encoding examples from corpora by researchers of the TEI SIG CMC, representing a variety of CMC genres, i.e. chat, wiki talk, Twitter, blog, and Second Life interactions. The material described, i.e. schemata, encoding examples, and documentation, is available from the TEI CMC SIG wiki and will accompany a feature request to the TEI Council in late 2019.
Our paper describes an experiment aimed at assessing the lexical coverage of web corpora in comparison with traditional ones for two closely related Slavic languages, from the lexicographers' perspective. The preliminary results show that web corpora should not be considered "inferior", but rather "different".
We investigate the optional omission of the infinitival marker in a Swedish future tense construction. During the last two decades the frequency of omission has been rapidly increasing, and this process has received considerable attention in the literature. We test whether the knowledge which has been accumulated can yield accurate predictions of language variation and change. We extracted all occurrences of the construction from a very large collection of corpora. The dataset was automatically annotated with language-internal predictors which have previously been shown or hypothesized to affect the variation. We trained several models in order to make two kinds of predictions: whether the marker will be omitted in a specific utterance and how large the proportion of omissions will be for a given time period. For most of the approaches we tried, we were not able to achieve a better-than-baseline performance. The only exception was predicting the proportion of omissions using autoregressive integrated moving average models for one-step-ahead forecast, and in this case time was the only predictor that mattered. Our data suggest that most of the language-internal predictors do have some effect on the variation, but the effect is not strong enough to yield reliable predictions.
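Since time turned out to be the only predictor that mattered, the successful forecasting setup amounts to extrapolating the proportion series itself. The sketch below is only a random-walk-with-drift baseline in that spirit, with invented proportions, not the authors' fitted ARIMA model:

```python
# Toy one-step-ahead forecast of omission proportions per time period.
# Each forecast uses only data available up to that period (as in a true
# one-step-ahead evaluation); the proportions below are invented.

def one_step_ahead(series):
    """Forecast each next value as: last observed value + mean past change."""
    forecasts = []
    for t in range(1, len(series)):
        history = series[:t]
        if len(history) > 1:
            drift = (history[-1] - history[0]) / (len(history) - 1)
        else:
            drift = 0.0   # no trend estimable from a single observation
        forecasts.append(history[-1] + drift)
    return forecasts

proportions = [0.10, 0.14, 0.19, 0.25, 0.32]   # share of omitted markers per period
preds = one_step_ahead(proportions)
errors = [abs(p - o) for p, o in zip(preds, proportions[1:])]
print([round(p, 3) for p in preds])
```

The contrast the paper draws is exactly between a trend model like this, which works for aggregate proportions, and utterance-level classifiers using language-internal predictors, which failed to beat the baseline.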
This article addresses the question of the extent to which present-day Russian Germans (adults and adolescents of the first generation, from the 1990s immigration wave out of language islands) can be regarded as re-migrants, which changes are taking place in their variety repertoires, and which difficulties and problems, but also advantages, arise for the immigrated Russian Germans from this specific migration configuration. The special situation of re-migration, with its specific linguistic and sociolinguistic problems, is illustrated with examples from the current IDS project "Migrationslinguistik" (migration linguistics). On the one hand, there are particular variety-linguistic constellations that show generation-specific contours in the Russian-German migrant population. On the other hand, this gives rise to unique conditions of language contact that can influence linguistic-communicative integration and the maintenance of Russian as a migrant language in particular ways.
With recourse to a broader understanding of the concept of translation, the transfer of source texts in one variety into another variety of the same language can also be called translation. This paper focuses on the target language, or rather the target variety, "easy-to-read language", which is meant to make texts comprehensible for people with communication limitations. Considering its origins in the disability rights movement, the aim is to inform affected persons about their rights and democratic processes, i.e. to translate especially legal texts into the so-called easy-to-read language. Although there is a whole range of rules and guidelines for formulating in easy-to-read language, "none offers a sufficient approach for translation into easy-to-read language" (Bredel & Maaß, 2016a, p. 109). Standardization of the variety is also still a long way off. On the one hand, this contribution takes stock of legal regulations in easy-to-read language. On the other hand, four versions of the Federal Participation Law in easy-to-read language are analysed with regard to their external features and the constructions used to explain technical terminology. The analysis shows that legal texts in easy-to-read language are (still) quite limited in number and are also difficult to find. As for the second part, the constructions used exhibit great structural variance, both intra- and intertextually. It is therefore questionable whether the addressees can access the texts independently. It is also still necessary to make the rules, the formulations of the rules, and their implementations clearer so that the translations fulfil their function.
The European language world is characterized by an ideology of monolingualism and national languages. This language-related world view interacts with social debates and definitions about linguistic autonomy, diversity, and variation. For the description of border minorities and their sociolinguistic situation, however, this view reaches its limits. In this article, the conceptual difficulties with a language area that crosses national borders are examined. It deals with the minority in East Lorraine (France) in particular. On the language-historical level, this minority is closely related to the language of its (big) neighbor Germany. At the same time, it looks back on a conflictive history with this country, has never constituted a (subordinate) political-administrative unit, and has experienced very little public support. We want to address the questions of how speakers themselves reflect on their linguistic situation and what concepts and argumentative figures they bring up in relation to which (Germanic) variety. To this end, we look at statements from guideline-based interviews. In the paper, we present first observations gained through qualitative content analysis.
Almanca tuhfe / Deutsches Geschenk (1916) oder: Wie schreibt man deutsch mit arabischen Buchstaben?
(2022)
Versified dictionaries are bilingual/multilingual glossaries written in verse form to teach essential words in a foreign language. In Islamic culture, versified dictionaries were produced to teach the Arabic language to the young generations of Muslim communities whose native language was not Arabic. In the course of time, many bilingual/multilingual versified dictionaries were written in different languages throughout the Islamic world. The focus of this study is on the Turkish-German versified dictionary titled Almanca Tuhfe / Deutsches Geschenk [German Gift], published by Dr. Sherefeddin Pasha in Istanbul in 1916. This dictionary is the only dictionary in verse ever written combining these two languages. Moreover, the dictionary is one of the few texts containing German words written in Arabic letters (applying Ottoman spelling conventions). The study concentrates on the way the German words are spelled and tries to find out whether Sherefeddin Pasha applied something like fixed rules to write the German lexemes.
This paper discusses information-structural aspects of multiple prefield occupation in German. On the basis of a collection of attestations extracted largely from the IDS corpora, discourse givenness as well as the focus and topic status of (above all) the prefield material are described and related to corresponding claims in the literature. Besides information-structural factors, the final section addresses possible further factors that might favor multiple prefield occupation. In addition, for a limited segment of German, figures are presented for the first time that illustrate the ratio of multiple prefield occupation to the similar, but supposedly more "canonical", occupation of the prefield by a (possibly partial) verb phrase.
Language of Responsibility. The Influence of Linguistic Abstraction on Collective Moral Emotions
(2017)
Two experiments investigated the effects of linguistic abstractness on the experience of collective moral emotions. In Experiment 1 participants were presented with two scenarios about ingroup misbehavior, phrased using descriptive action verbs, interpretative action verbs, adjectives or nouns. The results show that participants experienced slightly more negative moral emotions with higher levels of linguistic abstractness. In Experiment 2 we also tested for the influence of national identification on the relationship between linguistic abstractness and emotional reactions. Additionally, we expanded the number of scenarios. Experiment 2 replicated the earlier pattern, but found larger differences between conditions. The strength of national identification did not moderate the observed effects. The results of this research are discussed within the context of the linguistic category model and psychology of collective moral emotions.
Linguistic Variation and Change in 250 Years of English Scientific Writing: A Data-Driven Approach
(2020)
We trace the evolution of Scientific English through the Late Modern period to modern times on the basis of a comprehensive corpus composed of the Transactions and Proceedings of the Royal Society of London, the first and longest-running English scientific journal, established in 1665. Specifically, we explore the linguistic imprints of specialization and diversification in the science domain, which accumulate in the formation of "scientific language" and field-specific sublanguages/registers (chemistry, biology etc.). We pursue an exploratory, data-driven approach using state-of-the-art computational language models and combine them with selected information-theoretic measures (entropy, relative entropy) for comparing models along relevant dimensions of variation (time, register). Focusing on selected linguistic variables (lexis, grammar), we show how we deploy computational language models to capture linguistic variation and change, and discuss the benefits and limitations of this approach.
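The relative-entropy comparison mentioned in the abstract above can be sketched in a few lines. This is a minimal illustration only: the function name, the smoothing scheme, and the two mini-"corpora" are invented for the example, and plain unigram counts stand in for the authors' actual setup, which uses computational language models over much larger time slices.

```python
import math
from collections import Counter

def relative_entropy(p_counts, q_counts, smoothing=1e-9):
    """Kullback-Leibler divergence D(P || Q) in bits between two
    word-frequency distributions. A tiny additive smoothing term keeps
    words unseen in Q from producing infinite divergence."""
    vocab = set(p_counts) | set(q_counts)
    p_total = sum(p_counts.values()) + smoothing * len(vocab)
    q_total = sum(q_counts.values()) + smoothing * len(vocab)
    d = 0.0
    for w in vocab:
        p = (p_counts.get(w, 0) + smoothing) / p_total
        q = (q_counts.get(w, 0) + smoothing) / q_total
        d += p * math.log2(p / q)
    return d

# Hypothetical mini-corpora standing in for two time slices:
early = Counter("the experiment was performed upon the air pump".split())
late = Counter("the data show a significant effect of the treatment".split())
print(relative_entropy(early, late))
```

Note that the measure is asymmetric (D(P||Q) generally differs from D(Q||P)), which is why studies of this kind must fix a direction of comparison, e.g. always measuring how surprising an earlier period looks from the vantage point of a later one.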
We report results from an exploratory study of college students’ conceptions of poetry in which we asked them to name three things they expect from a poem. Frequency- and list-based analyses of their responses revealed that they primarily expect poems to rhyme, but they also identified a number of form-, content-, and reception-related genre expectations, which we discuss in relation to relevant previous research. We propose that rhyme’s predominance in college students’ genre expectations reflects its perceptual and cognitive salience during incremental poetry comprehension rather than its frequency in contemporary poetic practice. Our results characterize the genre conceptions of the population that empirical studies of poetry comprehension typically investigate, and thus provide relevant background information for the interpretation of empirical findings in this field.
We examined genre-specific reading strategies for literary texts and hypothesized that text categorization (literary prose vs. poetry) modulates both how readers gather information from a text (eye movements) and how they realize its phonetic surface form (speech production). We recorded eye movements and speech while college students (N = 32) orally read identical texts that we categorized and formatted as either literary prose or poetry. We further varied the text position of critical regions (text-initial vs. text-medial) to compare how identical information is read and articulated with and without context; this allowed us to assess whether genre-specific reading strategies make differential use of identical context information. We observed genre-dependent differences in reading and speaking tempo that reflected several aspects of reading and articulation. Analyses of regions of interest revealed that word-skipping increased particularly as readers progressed through the texts in the prose condition; speech rhythm was more pronounced in the poetry condition irrespective of text position. Our results characterize strategic poetry and prose reading, indicate that adjustments of reading behavior partly reflect differences in phonetic surface form, and shed light on the dynamics of genre-specific literary reading. They generally support a theory of literary comprehension that assumes distinct literary processing modes and incorporates text categorization as an initial processing step.
The importance of research data management in science-policy discourse and in everyday research practice is steadily increasing. National and international research infrastructures, consortia, disciplinary data centers, and institutional competence centers approach the resulting challenges from different perspectives. This contribution presents the Data Center for the Humanities at the University of Cologne as an example of a university data center specializing in the humanities.
This paper describes the lexical database tool LOLA (Linguistic-Oriented Lexical database Approach), which has been developed for the construction and maintenance of lexicons for the machine translation system LMT. First, the requirements such a tool should meet are discussed; then LMT, the lexical information it requires, and some issues concerning vocabulary acquisition are presented. Afterwards, the architecture and components of the LOLA system are described, and it is shown how we tried to meet the requirements worked out earlier. Although LOLA was originally designed and implemented for the German-English LMT prototype, it aimed from the beginning at a representation of lexical data that can be reused for other LMT or MT prototypes, or even other NLP applications. A special point of discussion is therefore the adaptability of the tool and its components, as well as the reusability of the lexical data stored in the database for lexicon development for LMT or for other applications.
This article deals with the notion of reality. During the last twenty years, public discourse in western societies has identified the opposition between the real and the virtual as one of the key cultural questions. Taking concrete examples as a point of departure, the paper investigates the semantics of the polysemous terms virtual and real. A semiotic model of the relation between (human) organisms, concepts and signs is used to demonstrate that the virtual cannot be adequately described as something opposed to reality, but must be seen as an indispensable part of it. The way in which organisms constitute reality is discussed in the light of the basic cognitive operations of categorization and the formation of conceptual relations, and also of their linguistic counterparts. The apparent conflict between the real and the virtual, which has led many critics to develop apocalyptic visions of the end of civilization, is in fact a phantom, the product of an outdated theory of semantics.
Deutsch als Fremdsprache und die Grammatik des Deutschen. Über die Fruchtbarkeit des fremden Blicks
(2000)
This paper gives an overview of the types of subordinate clauses in German. It describes their syntactic functions and structures on the basis of the topological field model. Three aspects are treated in more detail: the clause-specific functions of (i) pronouns and (ii) comparative particles, as well as (iii) the role of subordinate clauses in clefting. The paper can be used in university teaching and as a basis for studies in comparative syntax.
Was ist Deixis?
(1995)
There are a number of linguistic elements whose deictic character is by and large uncontroversial, amongst them I, here and now (German ich, hier and jetzt). Recent theoretical and descriptive treatments, however, have based their definitions of deixis on divergent properties of such elements. In the first part of the present paper, twelve properties of deictic elements are compiled and discussed with particular reference to their general semiotic status. The second part focusses on the opposition of proximity and remoteness (as exemplified by German dies-/jen- and hier/dort) in a number of different types of deixis.
This article summarizes results of an empirical study on the use of so-called verbs of transportation in German and Brazilian Portuguese. Such verbs constantly cause difficulties and mistakes in the language production of non-native speakers. The paper presents four observations on the grammar (verb prefixes, prepositions), semantics (places and paths) and pragmatics (deixis) of verbs of transportation in the two languages. It leads to the conclusion that Brazilian learners tend to have more difficulties with the morphology and syntax of German transportation verbs, whereas German learners tend to have more difficulties with the pragmatics of the corresponding verbs in Brazilian Portuguese. Difficulties with the specification of places and paths can be observed in both directions, but they lead to unidiomatic usage rather than to outright mistakes.
This paper outlines a synopsis of the linking properties of German sentence connectives and a terminology for their description, illustrated with a selection of 24 causal and consecutive connectives. The first half deals with semantic and syntactic properties as well as properties of the syntax-semantics interface; the second half focuses on discourse-structural and information-structural properties. It turns out that the linking properties described do not combine freely with one another but form characteristic property profiles, by means of which five major classes of connectives can be defined and represented as an ordered subsystem of the grammar.
Most authors agree that modal particles, a class of function words widely considered characteristic of Modern German, cannot receive prosodic stress, though the reasons for this restriction have not yet been satisfactorily explained. This paper argues that unstressability follows from the general contribution of modal particles to compositional utterance meaning, which requires them to take scope over focus-background structures. Form and function of modal particle meanings are modelled and illustrated for five representative examples: the particles wohl, ja, eigentlich, eben and halt. It is argued that these as well as other particles, whenever they occur under prosodic stress, can preserve neither the meaning nor the syntactic behaviour of modal particles. All instances of stressed particles in German must therefore be assigned to other functional classes.
This paper addresses the question of whether generic reference is a semantic or a pragmatic phenomenon. The opposition between particular and generic reference is discussed against the background of four other oppositions: definite vs. indefinite DPs, count DPs vs. mass DPs, quantified vs. non-quantified DPs, and referential vs. attributive use. It is shown that none of these oppositions has a decisive influence on the interpreter's choice of a particular or generic reading of a DP. The only formal restriction appears to be that quantified mass DPs do not allow a generic interpretation. The analysis suggests that generic reference is not a semantic property of DPs. As an alternative to a formal-semantic approach, a context-sensitive cognitive search routine is proposed that selects suitable readings for determiner phrases from a fixed set of prefabricated interpretation patterns. Such a search routine can be assigned to the interface between semantics and pragmatics. Moreover, we can assume that the interpretation of pseudo-generic DPs is governed by purely pragmatic inference procedures following the two main types of metonymy (pars pro toto and totum pro parte).
This paper develops a theoretical model for the semantics of connectives, following central ideas of Reichenbachian tense semantics.
In a first step, the terminological and conceptual framework is presented and illustrated with German da. The meaning of a connective is modeled as a four-place relation between the situated object E, a reference object R, a discourse anchor S and the speaker O. The relata can belong to one of four different classes of entities: physical object, event, proposition or act. Correspondingly, the relations are divided into four cognitive domains: space, time, alethics/epistemics, and deontics. In each domain, relations can be treated under three different perspectives: situation, condition or causation. A cross-classification of relational domains and perspectives provides a typology of connectives which is more consistent than the ones available in traditional grammar.
In the second part of the article, the analytic apparatus is refined, using German so as the main example. Following Roman Jakobson, a distinction is made between contiguity and similarity relations. Contiguity relations are typically encoded by functional categories, whereas similarity relations are encoded by lexical categories. However, there are a few connectives like so which encode similarity relations. A structural isomorphism between similarity and contiguity relations makes it possible to reinterpret so in certain contexts as an indicator of contiguity. In these cases, so is semantically weakened, particularly in relation to its definiteness. The model is extended to German also, from which als descends etymologically.
The third part of the article contains the semantic characterization of als in its variants as an intransitive and transitive connective. Als is described paradigmatically, in terms of the semantic oppositions that distinguish it from da, so, wie and wenn. Like so, it originally encodes similarity relations, but in present-day German its use has been extended, so that it may indicate contiguity relations as well. With da and so it shares the abstract relational meaning O-S,R,E. The main difference from da is its lesser degree of definiteness; in contrast to so, its use is almost exclusively temporal. Wie and wenn are indefinites, i.e. they do not establish a deictic backlink to the speaker and discourse context. Als indicates that the situated event temporally overlaps with a specific event of reference, whose factivity is presupposed. The reference event must be categorically predictable in the context of utterance. Als does not indicate temporal antecedence of the reference event in relation to the speech event; it only requires the identifiability of the reference event and its non-coincidence with the speech event.
In the last section, so-called "peripheral temporal clauses" are examined with respect to the syntagmatic interaction between aspectuality, intonational focus, serialization of clauses and the abstract relational meaning of als. The proposed semantic formula is shown to be capable not only of clarifying the paradigmatic structure of a subset of German connectives but also of explaining the semantic and stylistic properties of complex sentences.
The present paper deals with grammaticalization as a comprehensive model of erosive processes in the history of natural languages, exemplified in German and Brazilian Portuguese. Grammaticalization is conceived of as the reduction of pragmatic versatility, semantic concreteness, syntactic liberty and phonetic substance of linguistic elements. It is subdivided into the processes of lexicalization, which transforms polylexematic into monolexematic elements, and deslexicalization, which reduces lexematic to sublexematic elements. In the middle of these processes stands the lexicon, which is seen as the central stock of linguistic elements. Within the lexicon, the process of grammaticalization continues, from lexical word classes through intermediate classes to grammatical word classes. The lower boundary of the lexicon is a critical threshold, down to which the process of grammaticalization is compensated for by linguistic recycling that leads lexematic elements back into the linguistic circuit, through the formation of new polylexematic units. Beyond this threshold, however, no recycling is possible any more, so that elements which have once lost their lexical character are condemned to disappear in the long run. The different stages of grammaticalization are introduced and illustrated by means of concrete examples, first from Brazilian Portuguese and afterwards from German.
Following the tradition of English grammar, some authors have distinguished between count and non-count nouns in Portuguese. The present paper resumes this discussion and develops the hypothesis that contemporary Brazilian Portuguese does not have count nouns, but only non-count nouns and nouns that are neutral in relation to countability.