Refine
Year of publication
- 2019 (148)
Document Type
- Article (86)
- Conference Proceeding (36)
- Review (11)
- Part of a Book (8)
- Other (2)
- Part of Periodical (2)
- Book (1)
- Report (1)
- Working Paper (1)
Language
- English (77)
- German (68)
- Ukrainian (2)
- Multiple languages (1)
Has Fulltext
- yes (148)
Keywords
- German (61)
- Corpus <linguistics> (39)
- Spoken language (17)
- Automatic language analysis (14)
- Conversation analysis (10)
- Interaction (9)
- corpus linguistics (9)
- Review (8)
- Annotation (7)
- Dictionary (7)
Publication state
- Published version (75)
- Secondary publication (67)
- Postprint (18)
Reviewstate
- Peer-Review (148)
Publisher
- de Gruyter (21)
- Erich Schmidt (15)
- Leibniz-Institut für Deutsche Sprache (12)
- German Society for Computational Linguistics & Language Technology und Friedrich-Alexander-Universität Erlangen-Nürnberg (9)
- Lexical Computing CZ s.r.o. (6)
- Editura Academiei Române (5)
- Leibniz-Institut für Deutsche Sprache (IDS) (5)
- Springer (4)
- The Association for Computational Linguistics (4)
- Verlag für Gesprächsforschung (4)
Since 2017, the German Microcensus has included a question on the language of the population. The last language survey in a German census dates from 1939; accordingly, there are currently no meaningful language statistics for Germany. The new Microcensus language question, however, has considerable shortcomings; it was evidently designed as a proxy question for measuring cultural integration. This text discusses the questions and analyzes their first results. It then presents other variants of language questions, with particular attention to the exemplary language questions in the Canadian census. Finally, the language question of the IDS Deutschland-Erhebung 2018 (Germany Survey 2018) is presented together with its results; besides the Microcensus, the Deutschland-Erhebung 2018 is so far the only representative language survey in Germany.
Language attitudes matter; they influence people’s behaviour and decisions. Therefore, it is crucial to learn more about patterns in the way that languages are evaluated. One means of doing so is using a quantitative approach with data representative of a whole population, so that results mirror dispositions at a societal level. This kind of approach is adopted here, with a focus on the situation in Germany. The article consists of two parts. First, I will present some results of a new representative survey on language attitudes in Germany (the Germany Survey 2017). Second, I will show how language attitudes penetrate even seemingly objective data collection processes by examining the German Microcensus. In 2017, for the first time in eighty years, the German Microcensus included a question on language use ‘at home’. Unfortunately, however, the question was clearly tainted by language attitudes instead of being objective. As a result, the Microcensus significantly misrepresents the linguistic reality of different migrant languages spoken in Germany.
Although the N400 was originally discovered in a paradigm designed to elicit a P300 (Kutas and Hillyard, 1980), its relationship with the P300 and how both overlapping event-related potentials (ERPs) determine behavioral profiles is still elusive. Here we conducted an ERP (N = 20) and a multiple-response speed-accuracy tradeoff (SAT) experiment (N = 16) on distinct participant samples using an antonym paradigm (The opposite of black is white/nice/yellow with acceptability judgment). We hypothesized that SAT profiles incorporate processes of task-related decision-making (P300) and stimulus-related expectation violation (N400). We replicated previous ERP results (Roehm et al., 2007): in the correct condition (white), the expected target elicits a P300, while both expectation violations engender an N400 [reduced for related (yellow) vs. unrelated targets (nice)]. Using multivariate Bayesian mixed-effects models, we modeled the P300 and N400 responses simultaneously and found that correlation between residuals and subject-level random effects of each response window was minimal, suggesting that the components are largely independent. For the SAT data, we found that antonyms and unrelated targets had a similar slope (rate of increase in accuracy over time) and an asymptote at ceiling, while related targets showed both a lower slope and a lower asymptote, reaching only approximately 80% accuracy. Using a GLMM-based approach (Davidson and Martin, 2013), we modeled these dynamics using response time and condition as predictors. Replacing the predictor for condition with the averaged P300 and N400 amplitudes from the ERP experiment, we achieved identical model performance. We then examined the piecewise contribution of the P300 and N400 amplitudes with partial effects (see Hohenstein and Kliegl, 2015). 
Unsurprisingly, the P300 amplitude was the strongest contributor to the SAT-curve in the antonym condition and the N400 was the strongest contributor in the unrelated condition. In brief, this is the first demonstration of how overlapping ERP responses in one sample of participants predict behavioral SAT profiles of another sample. The P300 and N400 reflect two independent but interacting processes and the competition between these processes is reflected differently in behavioral parameters of speed and accuracy.
This contribution deals with the use of connect-integrable connectives in spoken German. The analysis takes as its example the adverbial connectives deshalb and deswegen as correlates of the subordinator weil, starting from theoretical premises of traditional grammar and conversation research. The use of these connectives is observed in a selection of spoken-language corpus data covering several different genres of everyday and institutional communication.
This paper investigates the use of the present subjunctive (Konjunktiv I) in everyday spoken German; the form is traditionally labelled as a feature of standard written language and therefore as typically occurring in communication genres based on it, such as press texts and reporting. Through an analysis of corpus data carried out according to the theory and methods of Interactional Linguistics and encompassing private, institutional and public interactional domains, the paper shows how this verb form expresses different epistemic stances depending on its syntactic embedding.
This paper focuses on so-called syntactic projection phenomena in German. The term, from German Gesprächsforschung (conversation research), refers to the fact that an utterance or part of an utterance foreshadows another one. The paper aims to show how such projection phenomena are consciously exploited for rhetorical purposes, on the basis of excerpts from the Stuttgart 21 mediation talks. The linguistic analysis focuses on syntactic projection phenomena involving the causal adverbial connectives deshalb and deswegen.
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a loss-less standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard ’Transcription of spoken language’ with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the “Tsakorpus” search platform. This step allows us to make the corpora available through a web-based search interface. As an addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
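The second pipeline step described above — converting ISO/TEI transcripts into a JSON format for the search platform — can be sketched roughly as follows. This is a minimal illustration, not the project's converter: the element names (`u`, `seg`, `w`) follow the spirit of ISO 24624:2016 but the snippet omits namespaces and the richer annotation tiers, and the JSON layout is an invented stand-in for the Tsakorpus input format.

```python
import json
import xml.etree.ElementTree as ET

# Toy ISO/TEI fragment (simplified: real files carry the TEI namespace,
# time alignment, and morpheme-level annotation tiers).
TEI_SNIPPET = """<TEI>
  <text>
    <body>
      <annotationBlock>
        <u who="SPK1">
          <seg><w>word1</w><w>word2</w></seg>
        </u>
      </annotationBlock>
    </body>
  </text>
</TEI>"""

def tei_to_json(tei_string):
    """Convert utterances of a (simplified) ISO/TEI transcript into a
    JSON-serializable structure resembling a search-platform input."""
    root = ET.fromstring(tei_string)
    sentences = []
    for u in root.iter("u"):
        words = [{"wf": w.text} for w in u.iter("w")]
        sentences.append({"speaker": u.get("who"), "words": words})
    return {"sentences": sentences}

doc = tei_to_json(TEI_SNIPPET)
print(json.dumps(doc))
```

The first step (EXMARaLDA XML to ISO/TEI) would typically run as an XSL transformation before this stage, as the abstract describes.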
As the Web ought to be considered as a series of sources rather than as a source in itself, a problem facing corpus construction resides in meta-information and categorization. In addition, we need focused data to shed light on particular subfields of the digital public sphere. Blogs are relevant to that end, especially if the resulting web texts can be extracted along with metadata and made available in coherent and clearly describable collections.
Speech planning is a sophisticated process. In dialog, it regularly starts in overlap with an incoming turn by a conversation partner. We show that planning spoken responses in overlap with incoming turns is associated with higher processing load than planning in silence. In a dialogic experiment, participants took turns with a confederate describing lists of objects. The confederate’s utterances (to which participants responded) were pre-recorded and varied in whether they ended in a verb or an object noun and whether this ending was predictable or not. We found that response planning in overlap with sentence-final verbs evokes larger task-evoked pupillary responses, while end predictability had no effect. This finding indicates that planning in overlap leads to higher processing load for next speakers in dialog and that next speakers do not proactively modulate the time course of their response planning based on their predictions of turn endings. The turn-taking system exerts pressure on the language processing system by pushing speakers to plan in overlap despite the ensuing increase in processing load.
Since 2013, representatives of several French and German CMC corpus projects have developed three customizations of the TEI-P5 standard for text encoding in order to adapt the encoding schemas and models provided by the TEI to the structural peculiarities of CMC discourse. Based on these three schema versions, a fourth version has been created which takes into account the experience of encoding our corpora and which is specifically designed for the submission of a feature request to the TEI Council. On our poster we present the structure of this schema and its relations (commonalities and differences) to the previous schemas.
Most authors agree that modal particles - a class of function words widely considered characteristic of Modern German - cannot receive prosodic stress, though the reasons for this restriction have not yet been satisfactorily explained. This paper argues that unstressability follows from the general contribution of modal particles to compositional utterance meaning, which requires them to take scope over focus-background structures. Form and function of modal particle meanings are modelled and illustrated for five representative examples - the particles wohl, ja, eigentlich, eben and halt. It is argued that these as well as other particles, whenever they occur under prosodic stress, can preserve neither the meaning nor the syntactic behaviour of modal particles. All instances of stressed particles in German must therefore be assigned to other functional classes.
In this paper, we investigate the temporal interpretation of propositional attitude complement clauses in four typologically unrelated languages: Washo (language isolate), Medumba (Niger-Congo), Hausa (Afro-Asiatic), and Samoan (Austronesian). Of these languages, Washo and Medumba are optional-tense languages, while Hausa and Samoan are tenseless. Just like in obligatory-tense languages, we observe variation among these languages when it comes to the availability of so-called simultaneous and backward-shifted readings of complement clauses. For our optional-tense languages, we argue that a Sequence of Tense parameter is active in these languages, just as in obligatory-tense languages. However, for completely tenseless clauses, we need something more. We argue that there is variation in the degree to which languages make recourse to res-movement, or a similar mechanism that manipulates LF structures to derive backward-shifted readings in tenseless complement clauses. We additionally appeal to cross-linguistic variation in the lexical semantics of perfective aspect to derive or block certain readings. The result is that the typological classification of a language as tensed, optionally tensed, or tenseless, does not alone determine the temporal interpretation possibilities for complement clauses. Rather, structural parameters of variation cross-cut these broad classes of languages to deliver the observed cross-linguistic picture.
This article investigates the use of überhaupt and sowieso in German and Dutch. These two words are frequently classified as particles, if only because of their pragmatic functions. The frequent use of particles is considered a specific trait common to German and Dutch, and the description of their semantics and pragmatics is notoriously difficult. It is unclear whether both particles have the same meaning in Dutch (where they are loanwords) and German, whether they can fulfil the same syntactic functions, and to what extent the (semantic and pragmatic) functions of überhaupt and sowieso overlap. There has already been linguistic research on überhaupt and sowieso by Fisseni (2009) using the world-wide web and by Bruijnen and Sudhoff (2013) using the EUROPARL corpus. In the present study we critically evaluated the latter corpus study, integrating information on the original utterance language and discussing the adequacy of this corpus. Moreover, we conducted an experimental survey collecting subjective-intuitive judgements in three dimensions, thus gathering more data on sparse and informal constructions.
By using these complementary methods, we obtain a more nuanced picture of the use of überhaupt and sowieso in both languages: On the one hand, the data show where the use of both words is more similar and on the other hand, differences between the languages can also be discerned.
Kertész, András (2017): The historiography of generative linguistics. Tübingen: Narr. [Review]
(2019)
This contribution presents a quantitative approach to speech, thought and writing representation (ST&WR) and steps towards its automatic detection. Automatic detection is necessary for studying ST&WR in a large number of texts and thus identifying developments in form and usage over time and in different types of texts. The contribution summarizes results of a pilot study: First, it describes the manual annotation of a corpus of short narrative texts in relation to linguistic descriptions of ST&WR. Then, two different techniques of automatic detection – a rule-based and a machine learning approach – are described and compared. Evaluation of the results shows success with automatic detection, especially for direct and indirect ST&WR.
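A rule-based detector of the kind compared above might, in its simplest form, flag direct representation via quotation marks plus a nearby speech verb. The following is a toy sketch under that assumption — the verb list and the rule itself are invented for illustration and are not the study's actual rule set:

```python
import re

# Invented minimal rule: a sentence counts as direct representation if it
# contains a quoted span AND one of a few German speech verbs.
SPEECH_VERBS = {"sagte", "rief", "fragte"}  # "said", "cried", "asked"
QUOTE = re.compile(r'[„"].+?["“]')  # opening/closing German or straight quotes

def is_direct(sentence):
    """Toy rule-based check for direct speech representation."""
    if not QUOTE.search(sentence):
        return False
    return any(verb in sentence.lower() for verb in SPEECH_VERBS)

print(is_direct('Er sagte: „Ich komme morgen."'))
print(is_direct('Sie ging nach Hause.'))
```

A real system would of course need many more rules (colons, framing clauses, indirect and reported forms), which is precisely where the machine learning approach becomes attractive.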
In this paper, we present our work in progress on automatically identifying free indirect representation (FI), a type of thought representation used in literary texts. With a deep learning approach using contextual string embeddings, we achieve F1 scores between 0.45 and 0.5 (sentence-based evaluation for the FI category) on two very different German corpora, a clear improvement on earlier attempts at this task. We show how consistently marked direct speech can help in this task. In our evaluation, we also consider human inter-annotator scores and thus address measures of certainty for this difficult phenomenon.
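The sentence-based F1 for one category, as reported above, can be computed as in this minimal sketch (the label values and the toy gold/predicted sequences are invented for illustration):

```python
def f1_for_category(gold, predicted, category):
    """Sentence-based precision/recall/F1 for one category, given aligned
    per-sentence label lists (toy illustration, not the project's code)."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == category and p == category)
    fp = sum(1 for g, p in zip(gold, predicted) if g != category and p == category)
    fn = sum(1 for g, p in zip(gold, predicted) if g == category and p != category)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# "FI" marks sentences labelled as free indirect representation.
gold = ["FI", "other", "FI", "other", "FI"]
pred = ["FI", "FI", "other", "other", "FI"]
print(round(f1_for_category(gold, pred, "FI"), 2))
```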
This contribution presents the Redewiedergabe corpus (RW corpus), a historical corpus of fictional and non-fictional texts with detailed manual annotation of forms of speech, thought and writing representation. The corpus is being created as part of an ongoing DFG project and is not yet complete; a beta release is planned for spring 2019 and will be made available to the research community, with the final release to follow in spring 2020. The RW corpus constitutes a novel resource for research on speech representation that has not previously been available for German at this level of detail, and it can serve both for quantitative linguistic and literary studies and as training material for machine learning.
Introduction
(2019)
The present paper examines a variety of ways in which the Corpus of Contemporary Romanian Language (CoRoLa) can be used. A multitude of examples is intended to highlight the wide range of interrogation possibilities that CoRoLa opens up for different types of users. The querying of CoRoLa shown here is supported by the KorAP frontend, through the query language Poliqarp. Interrogations address annotation layers, such as the lexical, morphological and, in the near future, the syntactic layer, as well as the metadata. Other issues discussed are how to build a virtual corpus, how to deal with errors, and how to find and identify expressions.
Nearly all of the very large corpora of English are “static”, which allows a wide range of one-time, pre-processed data, such as collocates. The challenge comes with large “dynamic” corpora, which are updated regularly, and where preprocessing is much more difficult. This paper provides an overview of the NOW corpus (News on the Web), which is currently 8.2 billion words in size, and which grows by about 170 million words each month. We discuss the architecture of NOW, and provide many examples that show how data from NOW can (uniquely) be extracted to look at a wide range of ongoing changes in English.
We propose a Cross-lingual Encoder-Decoder model that simultaneously translates and generates sentences with Semantic Role Labeling annotations in a resource-poor target language. Unlike annotation projection techniques, our model does not need parallel data during inference time. Our approach can be applied in monolingual, multilingual and cross-lingual settings and is able to produce dependency-based and span-based SRL annotations. We benchmark the labeling performance of our model in different monolingual and multilingual settings using well-known SRL datasets. We then train our model in a cross-lingual setting to generate new SRL labeled data. Finally, we measure the effectiveness of our method by using the generated data to augment the training basis for resource-poor languages and perform manual evaluation to show that it produces high-quality sentences and assigns accurate semantic role annotations. Our proposed architecture offers a flexible method for leveraging SRL data in multiple languages.
In an earlier publication it was claimed that there is no useful relationship between Swahili-English dictionary look-up frequencies and the occurrence frequencies for the same wordforms in Swahili-English corpora, at least not beyond the top few thousand wordforms. This result was challenged using data for German by a different team of researchers using an improved methodology. In the present article the original Swahili-English data is revisited, using ten years’ worth of it rather than just two, and using the improved methodology. We conclude that there is indeed a positive relationship. In addition, we show that online dictionary look-up behaviour is remarkably similar across languages, even when, as in our case, one is dealing with languages from very dissimilar language families. Furthermore, online dictionaries turn out to have minimum look-up success rates, below which they simply cannot go. These minima are language-sensitive and vary depending on the regularity of the searched-for entries, but are otherwise constant no matter the size of randomly sampled dictionaries. Corpus-informed sampling always improves on any random method. Lastly, from the point of view of the graphical user interface, we argue that the average user of an online bilingual dictionary is better served with a single search box, rather than separate search boxes for each dictionary side.
How do people communicate in mobile settings of interaction? How does mobility affect the way we speak? How does mobility exert influence on the manner in which talk itself is consequential for how we move in space? Recently, questions of this sort have attracted increasing attention in the human and social sciences. This Special Issue contributes to the emerging body of studies on mobility and talk by inspecting an ordinary and ubiquitous phenomenon in which communication among mobile participants is paramount: participation in traffic. This editorial presents previous work on mobility in natural settings, as carried out by interactionally oriented researchers. It also shows how the investigation into traffic participation adds new perspectives to research on language and communication.
This paper asks whether, and in which ways, managing coordination tasks in traffic involves the accomplishment of intersubjectivity. Taking instances of coordinating the passing of an obstacle with oncoming traffic as the empirical case, four different practices were found.
1. Intersubjectivity can be presupposed by expecting others to stick to the traffic code and other mutually shared expectations.
2. Intersubjective solutions emerge step by step by mutual responsive-anticipatory adaptation of driving decisions.
3. Intersubjectivity can be accomplished by explicit interactive negotiation of passages.
4. Coordination problems can be solved without relying on intersubjectivity by unilateral, responsive-anticipatory adaptation to others’ behaviors.
This article examines a recurrent format that speakers use for defining ordinary expressions or technical terms. Drawing on data from four different languages - Flemish, French, German, and Italian - it focuses on definitions in which a definiendum is first followed by a negative definitional component (‘definiendum is not X’), and then by a positive definitional component (‘definiendum is Y’). The analysis shows that by employing this format, speakers display sensitivity towards a potential meaning of the definiendum that recipients could have taken to be valid. By negating this meaning, speakers discard this possible, yet unintended understanding. The format serves three distinct interactional purposes: (a) it is used for argumentation, e.g. in discussions and political debates, (b) it works as a resource for imparting knowledge, e.g. in expert talk and instructions, and (c) it is employed, in ordinary conversation, for securing the addressee's correct understanding of a possibly problematic expression. The findings contribute to our understanding of how epistemic claims and displays relate to the turn-constructional and sequential organization of talk. They also show that the much quoted ‘problem of meaning’ is, first and foremost, a participant's problem.
My contribution originated in my biographical and interaction-analytic study of the social and linguistic experiences of young "returnees", i.e. young women and men of Turkish origin who grew up in Germany or Austria and migrated to Turkey as adolescents or young adults. Furkan, the informant I present here, describes experiences of exclusion in Germany on account of his ethnic origin, and problems of adaptation in Turkey on account of linguistic and social conspicuousness. The aim of my analysis is to describe the various phases of his life story in both lifeworlds, to reconstruct the connection between experiences of exclusion, their interpretation, and their narrative coping, and to work out the differences between the accounts of the two lifeworlds. On this basis, the narrative coping with the experiences of childhood and early youth in Germany can be related to narrative forms for trauma.
Narrative construction of a positive self-category in different social and linguistic worlds
(2019)
This contribution originated in my biographical and interaction-analytic study of the social and linguistic experiences of young "returnees", i.e. young women and men of Turkish origin who grew up in Germany or Austria and migrated to Turkey as adolescents or young adults. Arda, the informant I present in the following, describes different social worlds in Germany and in Turkey. He devotes considerable space to describing two fundamentally different lifeworlds that shaped his childhood in Germany: on the one hand, the lifeworld of the Turkish quarter in Kreuzberg, where he was born and lived until starting school, and on the other, the German lifeworld to which his family later moved and in which he attended and completed primary school. After moving to Turkey, Arda experiences a modern Turkish lifeworld to which he has to adapt. In his new life he painfully loses German as his everyday language. In his descriptions he uses complex procedures of ethnic and social categorization and of negative and positive self-positioning towards the various worlds. The aim of my analyses is, after an overview of sociolinguistic research on social categorization, to reconstruct the categorization processes Arda employs, together with their characteristic properties and modes of action, and to describe the linguistic means and procedures used for positioning and for self- and other-categorization.
The user interfaces for corpus analysis platforms must provide a high degree of accessibility for ordinary users and at the same time provide the possibility to answer complex research questions. In this paper, we present the design concepts behind the user interface of KorAP, a corpus analysis platform that has evolved into the main gateway to CoRoLa, the Reference Corpus of Contemporary Romanian Language. Based on established principles of user interface design, we show how KorAP addresses the challenge of providing a user-friendly interface for heterogeneous corpus data to a wide range of users with different research questions.
The article is devoted to the study of communication failures in the speech genre of the video interview through the prism of Ukrainian national identity. It identifies the themes, types, and genre-linguistic specifics of the Ukrainian video interview as an example of dialogic speech. The specifics of communication failures in this genre (with athletes, politicians, and cultural figures) are established with regard to the communicants' positions, the structural levels of the genre under study, and the maxims of communication.
Poor recipient design in Ukrainian and German political YouTube interviews
(2019)
The article investigates Ukrainian and German YouTube interviews from the point of view of contrastive linguistics. The purpose of this paper is to separate out the interview as a communicative genre and to determine the main aspects of research on discrepancies in expectations among interview participants, in particular to clarify the role of poor recipient design as the cause of communication failures. Results indicate that poor recipient design is the most common source of communication failures in both languages.
The article deals with communicative failures of journalists in “YouTube” celebrity video interviews in the Ukrainian and German linguacultures from the point of view of social interaction and the theory of speech genres at all structural levels of the communicative genre construction, establishing common and distinctive features in both linguacultures. The analysis made it possible to conclude that behind a language (speech) failure there is a violation caused by a journalist, a respondent, or an external noise.
Central complements: good arguments are self-explanatory.
Together with their central complements, verbs model basic patterns of interaction. The constellations of these complements in turn correspond to central patterns of argument structure. Nominative and accusative complements formally occupy the first and second positions (subject and object), but they also show certain semantic preferences. The formal function of the dative is less pronounced; where it occurs (with ditransitive verbs), the semantic imprint of the frame ("transfer") is very strong. This corresponds to the meaning of a core group of such verbs. Other verbs that allow this pattern are more often used in other valence structures, and their ditransitive use appears as a systematic way of extending object-related activities to persons. This is discussed with reference to the verbs zeigen and (in a different way) lehren.
Curriculum vitae up to 2019
(2019)
Ulrich Engel recounts the stages of his life: as a child in pre-war Germany and as a young soldier, then his work as a teacher and his academic career, in particular his role as director of the Institut für Deutsche Sprache in Mannheim. He highlights his work as head of several projects on contrastive bilingual grammars and valency dictionaries, and describes his family background as a mirror of the social and political changes in pre- and post-war Germany.
We report on a new project building a Natural Language Processing resource for Zulu by making use of resources already available. Combining tagging results with the results of morphological analysis semi-automatically, we expect to reduce the amount of manual work when generating a finely-grained gold standard corpus usable for training a tagger. From the tagged corpus, we plan to extract verb-argument pairs with the aim of compiling a verb valency lexicon for Zulu.
Distributional models of word use constitute an indispensable tool in corpus-based lexicological research for discovering paradigmatic relations and syntagmatic patterns (Belica et al. 2010). Recently, word embeddings (Mikolov et al. 2013) have revived the field by making it possible to construct and analyze distributional models on very large corpora. This is accomplished by reducing the very high dimensionality of word co-occurrence contexts (the size of the vocabulary) to a few dimensions, such as 100-200. However, word use and meaning can vary widely along dimensions such as domain, register, and time, and word embeddings tend to represent only the most prevalent meaning. In this paper we thus construct domain-specific word embeddings to allow for systematically analyzing variations in word use. Moreover, we also demonstrate how to reconstruct domain-specific co-occurrence contexts from the dense word embeddings.
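The core idea — that nearest neighbours in a domain-specific embedding space reveal domain-specific word use — can be sketched with cosine similarity over toy vectors. The words, the 3-dimensional vectors, and the two "domains" below are invented for illustration (real models are trained on large domain subcorpora with 100-200 dimensions):

```python
import math

# Toy "domain-specific" embeddings: invented values chosen so that "bank"
# is financial in the news domain and riverine in the fiction domain.
domain_embeddings = {
    "news":    {"bank": [0.9, 0.1, 0.0], "finance": [0.8, 0.2, 0.1], "river": [0.1, 0.9, 0.0]},
    "fiction": {"bank": [0.2, 0.8, 0.1], "finance": [0.9, 0.1, 0.0], "river": [0.1, 0.9, 0.2]},
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def nearest(word, domain):
    """Most similar word to `word` within one domain's embedding space -
    a stand-in for reconstructing domain-specific co-occurrence contexts."""
    vecs = domain_embeddings[domain]
    target = vecs[word]
    others = [(w, cosine(target, v)) for w, v in vecs.items() if w != word]
    return max(others, key=lambda pair: pair[1])[0]

print(nearest("bank", "news"), nearest("bank", "fiction"))
```

The same query against two domain models yields different neighbours, which is exactly the kind of variation in word use the paper sets out to analyze systematically.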
The paper deals with the process of computer-aided transcription of Arabic-German data for interaction-based studies. It first addresses some major methodological challenges of conversation-analytic work: with current corpus technology, the reciprocity, linearity, and simultaneity of linguistic action cannot be reconstructed in an analytically adequate way when Arabic characters are used in multilingual, bidirectional transcripts. Transcribing Arabic talk is further complicated by the fact that its non-standard varieties and spoken-language phenomena have so far received insufficient (conversation-analytic) attention. The second part of the paper is therefore dedicated to the solutions developed and tested so far: a rigorous, conversation-analytically grounded transcription system for spoken Arabic.
We present web services implementing a workflow for transcripts of spoken language following TEI guidelines, in particular ISO 24624:2016 "Language resource management - Transcription of spoken language". The web services are available at our website and will be available via the CLARIN infrastructure, including the Virtual Language Observatory and WebLicht.
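For readers unfamiliar with ISO 24624:2016, the following hand-made snippet illustrates the general shape of a TEI-encoded transcript of spoken language (element names follow the TEI guidelines; the exact attributes and structure the services expect may differ) and how such a document can be read with Python's standard library:

```python
# Hedged illustration: a minimal TEI-style transcript fragment in the spirit
# of ISO 24624:2016, parsed with the standard library. This is an invented
# example, not output of the web services described above.
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

snippet = """
<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text>
    <body>
      <timeline unit="s">
        <when xml:id="T0" absolute="0.0"/>
        <when xml:id="T1" absolute="2.3"/>
      </timeline>
      <annotationBlock who="#SPK0" start="#T0" end="#T1">
        <u>so this is a short utterance</u>
      </annotationBlock>
    </body>
  </text>
</TEI>
"""

root = ET.fromstring(snippet)
# Collect the text of all utterance (<u>) elements, namespace-qualified.
utterances = [u.text for u in root.iter(TEI_NS + "u")]
print(utterances)
```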
Teaching languages for specific purposes is becoming ever more relevant in today's European society, which is characterised by 'movements' of various kinds. Yet learner groups are increasingly heterogeneous, and teachers, who are usually not experts in the subject area, find it difficult to design courses tailored to their learners, since opportunities for training and professional development are rare. Many questions remain open or only partially answered, and a uniform answer is not always possible; nevertheless, rather than presenting problem cases, we attempt to present experiments and solutions. We show how, and with which resources and tools, languages for specific purposes can be described, and what consequences this can have for teaching. After an overview of the various ways of defining 'language for specific purposes' (Fachsprache), we show the effects that different emphases can have in teaching. Finally, we present a small corpus-linguistic experiment (a corpus of the articles in the thematic issue 'Fachsprache', ZIF 2019-1) in order to suggest possible uses of corpora, since corpora can benefit both teachers and learners in all phases of teaching (before, during, and after).
Persuasionsstrategien in deutschen rechtsorientierten Zeitungen. Eine korpuslinguistische Studie
(2019)
Corpus linguistics has often proved fruitful for examining different types of discourse, including the discourse on refugees. The aim of the paper is to show how patterns of language use can be identified with techniques grounded in corpus linguistics, yielding information about themes and topoi. After showing which types of lexical units (keywords, collocations) and which phenomena (topoi, metaphors, and frames) are considered in the article, the focus shifts to the methodology and the criteria adopted. After presenting the primary corpus (articles from right-oriented newspapers) and the comparison corpus (articles from 'Die Zeit'), the main results of the analysis are presented and discussed.
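The keyword technique mentioned above can be sketched concretely. The following example is a hedged illustration, not the paper's data or code: it ranks words that are overrepresented in a toy target corpus relative to a toy reference corpus, using a simplified two-term form of the log-likelihood (G2) statistic commonly applied in keyword analysis:

```python
# Hedged sketch of keyword extraction: compare word frequencies in a target
# corpus against a reference corpus and rank by a simplified G2 score.
# Both toy corpora are invented for illustration.
import math
from collections import Counter

target = "refugees border crisis refugees asylum border wave".split()
reference = "election economy budget refugees parliament economy vote".split()

def log_likelihood(a, b, n1, n2):
    """Simplified two-term G2 for a word occurring a times in corpus 1
    (size n1) and b times in corpus 2 (size n2)."""
    e1 = n1 * (a + b) / (n1 + n2)  # expected count in corpus 1
    e2 = n2 * (a + b) / (n1 + n2)  # expected count in corpus 2
    ll = 0.0
    if a:
        ll += a * math.log(a / e1)
    if b:
        ll += b * math.log(b / e2)
    return 2 * ll

t, r = Counter(target), Counter(reference)
n1, n2 = len(target), len(reference)
keywords = sorted(
    ((w, log_likelihood(t[w], r.get(w, 0), n1, n2)) for w in t),
    key=lambda x: -x[1],
)
print(keywords[:3])
```

In a real study the corpora would be the right-oriented newspaper articles and the 'Die Zeit' comparison corpus, and the top-ranked keywords would then be read qualitatively for themes and topoi.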
Tourlex: ein deutsch-italienisches Fachwörterbuch zur Tourismussprache für italienische DaF-Lerner
(2019)
Tourlex is a specialized bilingual online dictionary under construction at the University of Mannheim, with a particular focus on collocations and multi-word units. The languages covered are German and Italian, but given the need for online dictionaries of tourism language (Flinz 2015: 56), the framework is open to the inclusion of further languages. Tourlex is a corpus-based dictionary: its primary sources are corpora, in particular a purpose-built bilingual comparable corpus analysed with the tools Sketch Engine and Lexpan, and the freely accessible corpus DeReKo. The aim of this paper is to give an overview of the main steps, both completed and planned, in the lexicographical process of a dictionary under construction. The description of each phase is enriched with examples from the project, showing how decisions taken to meet the needs of the intended user, the Italian learner of German as a foreign language, have influenced the microstructure of the entries. We conclude with a reflection on the data, findings, and open problems.
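One common way to obtain collocation candidates of the kind such a dictionary records is association scoring over corpus bigrams. The following sketch is an assumption for illustration, not the Tourlex workflow (which uses Sketch Engine and Lexpan): it ranks bigrams from a tiny invented tourism-language sample by pointwise mutual information (PMI), after the usual frequency filter that keeps PMI from favouring one-off pairs:

```python
# Hedged sketch: collocation candidates as bigrams ranked by PMI,
# with a minimum-frequency threshold. Toy corpus only.
import math
from collections import Counter

tokens = ("all inclusive holiday package all inclusive resort "
          "beach holiday package city break").split()

unigrams = Counter(tokens)
bigrams = Counter(zip(tokens, tokens[1:]))
n = len(tokens)

def pmi(bigram):
    """Pointwise mutual information of a bigram against unigram frequencies."""
    w1, w2 = bigram
    p12 = bigrams[bigram] / (n - 1)
    return math.log2(p12 / ((unigrams[w1] / n) * (unigrams[w2] / n)))

# Standard practice: discard hapax bigrams before ranking.
cands = [bg for bg, c in bigrams.items() if c >= 2]
ranked = sorted(cands, key=pmi, reverse=True)
print(ranked)
```

On real comparable corpora, the top-ranked pairs ('all inclusive', 'holiday package' in this toy run) would be candidates for lexicographic review rather than finished entries.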