In the 'Minimalist Program' (Chomsky 1995), A-movements (i.e. N-raising, V-raising, etc.) and A'-movements ('A-bar movements', i.e. extraposition, VP-adjunction, scrambling, etc.) are treated as very unequal operations. The present article examines the distinct properties of A-movements and A'-movements on the basis of three groups of arguments, namely topicalization (Section 2), the displacement of weak pronouns (Section 3), and verb-second under the symmetry assumption (Section 4). The conclusion is that the analysis advocated here offers a way of distinguishing A-movements from A'-movements without banishing the latter from the domain of grammar.
Contrasting and turn transition: Prosodic projection with the parallel-opposition constructions
(2009)
The parallel-opposition construction has not yet been widely described as an independent construction type. This article reports on its realization in everyday British-English conversation. In particular, it focusses on prosodic projection in the lexically and syntactically unmarked first component of this syntactic pattern, and thus adds to the body of research investigating the organization of turn-taking in the context of bi-clausal constructions whose first part lacks explicit lexical hints of continuation. It is shown that the parallel-opposition construction, alongside specific semantic–pragmatic, syntactic and lexical features, also exhibits a relatively fixed range of prosodic features in the first conjunct, among them narrow focus, continuing intonation and/or the avoidance of intonation-unit boundary signals. These are used to project continuation of an otherwise complete utterance and thus to secure the floor for the expression of contrast. In addition, the detailed analysis of apparently deviant cases, which takes into account the on-line production of syntax, shows that a lack of prosodically projective features in the first component of the parallel-opposition construction can be explained by the strategic, retrospective use of the construction to resolve problems in turn transition.
Metaphor and discourse
(2009)
Temporal frames of reference
(2010)
This paper discusses the technological and methodological challenges in creating and sharing HAMATAC, the Hamburg Map Task Corpus. The first version of the corpus, consisting of 24 recordings with orthographic transcriptions and metadata, is publicly available. A second version featuring different types of linguistic annotation is in progress. I will describe how the various software tools and data formats of the EXMARaLDA system were used for transcription and multi-level annotation, to compile recordings and transcriptions into a corpus and manage metadata, to publish the corpus, and how they can be used for carrying out corpus queries (KWIC) and analyses. Some recurrent issues in corpus building and sharing and the interaction of technological and methodological aspects will be illustrated using HAMATAC.
This article discusses questions concerning the creation, annotation and sharing of spoken language corpora. We use the Hamburg Map Task Corpus (HAMATAC), a small corpus in which advanced learners of German were recorded solving a map task, as an example to illustrate our main points. We first give an overview of the corpus creation and annotation process including recording, metadata documentation, transcription and semi-automatic annotation of the data. We then discuss the manual annotation of disfluencies as an example case in which many of the typical and challenging problems for data reuse – in particular the reliability of interpretative annotations – are revealed.
We present a novel NLP resource for the explanation of linguistic phenomena, built and evaluated by exploring very large annotated language corpora. For the compilation, we use the German Reference Corpus (DeReKo) with more than 5 billion word forms, the largest linguistic resource worldwide for the study of contemporary written German. The result is a comprehensive database of German genitive formations, enriched with a broad range of intra- and extralinguistic metadata. It can be used for the notoriously controversial classification and prediction of genitive endings (short endings, long endings, zero marker). We also evaluate the main factors influencing the use of specific endings. To get a general idea of a factor's influence and its side effects, we calculate chi-square tests and visualize the residuals with an association plot. The results are evaluated against a gold standard by implementing tree-based machine learning algorithms. For the statistical analysis, we applied the supervised LMT (Logistic Model Trees) algorithm, using the WEKA software. We intend to use this gold standard to evaluate GenitivDB, as well as to explore methodologies for a predictive genitive model.
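The chi-square step described above can be sketched as follows; the contingency table is invented for illustration and does not reproduce GenitivDB's actual factors or counts:

```python
import numpy as np
from scipy.stats import chi2_contingency

# hypothetical contingency table: genitive ending (rows) x one influencing
# factor, e.g. the noun's final sound (columns); the counts are made up
table = np.array([
    [120, 30, 15],   # short ending (-s)
    [40,  90, 25],   # long ending (-es)
    [10,  20, 60],   # zero marker
])

chi2, p, dof, expected = chi2_contingency(table)

# Pearson residuals: the per-cell deviations an association plot visualizes
residuals = (table - expected) / np.sqrt(expected)
```

A strongly positive residual in a cell would indicate that the ending occurs more often with that factor level than independence predicts.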
The article investigates the conditions under which the w-relativizer was appears instead of the d-relativizer das in German relative clauses. Building on Wiese 2013, we argue that was constitutes the elsewhere case that applies when identification with the antecedent cannot be established by syntactic means via upward agreement with respect to phi-features. Corpus-linguistic results point to the conclusion that this is the case whenever there is no lexical nominal in the antecedent that, following Geach 1962 and Baker 2003, supplies a criterion of identity needed to establish sameness of reference between the antecedent and the relativizer.
This contribution offers a fine-grained analysis of German and Romanian ditransitive and prepositional transfer constructions. The transfer construction (TC) is shown to be realised in German by 26 argument structure patterns (ASPs), which are conceived of as form-meaning pairings which differ only minimally. The mainstream constructionist view of the different types of TCs being related by polysemy links is rejected, the ASPs being argued instead to be related by family relationships. All but six of the ASPs identified for German are shown to possess a Romanian counterpart. For some ditransitive structures, German is shown to possess two prepositional variants, one with an (‘at’) and one with zu (‘to’) or auf (‘on’), while Romanian has only one. Due to the lack of a Romanian counterpart for the German zu and auf variants, Romanian lacks some of the dative alternations found in German. However, Romanian as well as German permits the double object pattern to interact with take-verbs, verbs of removal and add-verbs, which do not allow the ditransitive construction in English. Since these verb classes also permit at least one prepositional pattern in both languages, Romanian and German show a larger number of dative alternation types than English.
On ancient grammars of space
(2014)
This volume presents new research by the Topoi group "The Conception of Spaces in Language" on the expression of spatial relations in ancient languages. The six articles in this volume discuss static and dynamic aspects of the spatial grammars of Ancient to Medieval Greek, Akkadian, Hittite, and Hieroglyphic Ancient Egyptian, as well as field data on eight modern languages (Arabic, Hebrew, English, German, Russian, French, Italian, and Spanish). Among the grams discussed are spatial particles, motion verbs, case and, most prominently, spatial prepositions. All ancient language data are fully explained in linguistic word-by-word glosses and are therefore accessible to scholars who are not themselves experts on the respective languages. Taken together, these contributions extend the scope of research on spatial grammar back to the third millennium BCE.
German lexical items with similar or related morphological roots and similar meaning potential are easily confused by native speakers and language learners. These include so-called paronyms such as effektiv/effizient, sensitiv/sensibel, formell/formal/förmlich. Although these are generally not regarded as synonyms, empirical studies suggest that in some cases items of a paronym set have undergone meaning change and developed synonymous notions. In other cases, they remain similar in meaning but show subtle differences in definition and restrictions of usage. Whereas the treatment of synonyms has received attention from corpus linguists (cf. Partington 1998; Taylor 2003), the subject of paronyms has not been revisited with empirical, data-driven methods, either in terms of semantic theory or in terms of practical lexicography. As a consequence, we also need to search for suitable corpus methods for detailed semantic investigation. Lexicographically, some German paronyms have been documented in printed dictionaries (e.g. Müller 1973; Pollmann & Wolk 2010). However, there is no corpus-assisted reference guide describing paronyms empirically and enabling readers to find the correct contemporary usage. Therefore, solutions to some lexicographic challenges are required.
We start by trying to answer a question already asked by de Schryver et al. (2006): do dictionary users (frequently) look up words that are frequent in a corpus? Contrary to their results, ours, based on the analysis of log files from two different online dictionaries, indicate that users indeed look up frequent words frequently. When combining frequency information from the Mannheim German Reference Corpus and information about the number of visits in the Digital Dictionary of the German Language as well as the German language edition of Wiktionary, a clear connection between corpus and look-up frequencies can be observed. In a follow-up study, we show that another important factor for the look-up frequency of a word is its temporal social relevance. To make this effect visible, we propose a de-trending method in which we control for both frequency effects and overall look-up trends.
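The reported connection between corpus frequency and look-up frequency is, at its core, a rank correlation. A minimal sketch with invented per-word counts (not the actual DWDS or Wiktionary log data):

```python
from scipy.stats import spearmanr

# invented counts for six words: corpus frequency vs. number of look-ups
corpus_freq = [52000, 12000, 9000, 800, 120, 15]
lookups     = [900,   410,   350,  60,  11,  2]

rho, p = spearmanr(corpus_freq, lookups)
# a rho close to 1 indicates that frequent words are looked up frequently
```

In a real study the de-trending step would be applied to the look-up series before correlating, to separate stable frequency effects from bursts of temporal social relevance.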
This paper reports on an ongoing lexicographical project that investigates Polish loanwords from German that were further borrowed into the East Slavic languages Russian, Ukrainian, and Belorussian. The results will be published as three separate dictionaries in the Lehnwortportal Deutsch, a freely available web portal for loanword dictionaries having German as their common source language. On the database level, the portal models lexicographical data as a cross-resource directed acyclic graph of relations between individual words, including German 'metalemmata' as normalized representations of diasystemic variants of German etyma. Amongst other things, this technology makes it possible to use the web portal as an 'inverted loanword dictionary' to find loanwords in different languages borrowed from the same German etymon. The different possible pathways of German loanwords that went through Polish into the East Slavic languages can be represented directly as paths in the graph. A dedicated in-house dictionary editing software system assists lexicographers in producing and keeping track of these paths even in complex cases where, e.g., only a derivative of a German loanword in Polish has been borrowed into Russian. The paper concludes with some remarks on the particularities of the dictionary/portal access structure needed for presenting and searching borrowing chains.
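The graph model behind the 'inverted loanword dictionary' can be illustrated with a toy directed graph; the example words and the edge representation are illustrative assumptions, not the portal's actual schema:

```python
# each edge records one borrowing step: target word -> source word
borrowed_from = {
    "pol. ratusz": "de. Rathaus",
    "rus. ратуша": "pol. ratusz",
    "ukr. ратуша": "pol. ratusz",
}

def loanwords_of(etymon, graph):
    """All words reachable from a German etymon along borrowing edges,
    i.e. the 'inverted loanword dictionary' lookup."""
    # invert the edges, then walk the acyclic graph depth-first
    children = {}
    for target, source in graph.items():
        children.setdefault(source, []).append(target)
    found, stack = [], [etymon]
    while stack:
        node = stack.pop()
        for child in children.get(node, []):
            found.append(child)
            stack.append(child)
    return sorted(found)
```

A query such as `loanwords_of("de. Rathaus", borrowed_from)` would return the Polish loanword together with its East Slavic descendants, mirroring a borrowing chain as a path in the graph.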
Using online dictionaries
(2014)
The chapter provides a review of the research literature on the use of electronic dictionaries. Because the central terms electronic dictionary and research into dictionary use are sometimes used in different ways in the research, it is necessary first of all to examine them more closely in order to clarify their use in this research review. The main part of the chapter presents several individual studies in chronological order.
This article presents an approach that supports the creation of personal learning environments (PLEs) suitable for self-regulated learning (SRL). PLEs have become very popular in recent years, offering learners more personal freedom than traditional learning environments. However, creating and configuring PLEs demands specific meta-skills that not all learners have. This raises the challenge of how learners can be supported in creating PLEs that are useful for achieving their intended learning outcomes. The theory of SRL describes learners as self-regulated if they are capable of taking control of their own learning process. Building on that theory, a model has been elaborated that offers guidance for the creation of PLEs containing tools for cognitive and meta-cognitive learning activities. This approach has been implemented in the context of the ROLE infrastructure. A quantitative and qualitative evaluation with teachers describes advantages and ideas for improvement.
This paper gives an overview of recent developments in the German Reference Corpus DeReKo in terms of growth, maximising relevant corpus strata, metadata, legal issues, and its current and future research interface. Due to the recent acquisition of new licenses, DeReKo has grown by a factor of four in the first half of 2014, mostly in the area of newspaper text, and presently contains over 24 billion word tokens. Other strata, like fictional texts, web corpora, in particular CMC texts, and spoken but conceptually written texts have also increased significantly. We report on the newly acquired corpora that led to the major increase, on the principles and strategies behind our corpus acquisition activities, and on our solutions for the emerging legal, organisational, and technical challenges.
We present an approach to an aspect of managing complex access scenarios to large and heterogeneous corpora that involves handling user queries that, intentionally or due to the complexity of the queried resource, target texts or annotations outside of the given user’s permissions. We first outline the overall architecture of the corpus analysis platform KorAP, devoting some attention to the way in which it handles multiple query languages, by implementing ISO CQLF (Corpus Query Lingua Franca), which in turn constitutes a component crucial for the functionality discussed here. Next, we look at query rewriting as it is used by KorAP and zoom in on one kind of this procedure, namely the rewriting of queries that is forced by data access restrictions.
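The kind of access-driven query rewriting described above can be sketched roughly as follows; the function, field names, and rewrite record are invented for illustration and do not reproduce KorAP's actual API:

```python
def rewrite_query(query, user_licences):
    """Restrict a query's virtual collection to corpora the user may access,
    recording the rewrite so the response can report it transparently."""
    allowed = [c for c in query["collection"] if c["licence"] in user_licences]
    rewritten = dict(query, collection=allowed)
    if len(allowed) < len(query["collection"]):
        # note the forced restriction instead of silently failing the query
        rewritten["rewrites"] = [{"operation": "restrict",
                                  "source": "access policy"}]
    return rewritten

query = {"q": "Baum",
         "collection": [{"name": "corpusA", "licence": "ACA"},
                        {"name": "corpusB", "licence": "QAO"}]}
result = rewrite_query(query, user_licences={"ACA"})
```

The key design point carried over from the abstract is that a query touching restricted material is narrowed and annotated rather than rejected outright.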
Maximizing the potential of very large corpora: 50 years of big language data at IDS Mannheim
(2014)
Very large corpora have been built and used at the IDS since its foundation in 1964. They have been made available on the Internet since the early 1990s, currently to over 30,000 researchers worldwide. The Institute provides the largest archive of written German (Deutsches Referenzkorpus, DeReKo), which has recently been extended to 24 billion words. DeReKo has been managed and analysed by engines known as COSMAS and later COSMAS II, which is currently being replaced by a new, scalable analysis platform called KorAP. KorAP makes it possible to manage and analyse texts that are accompanied by multiple, potentially conflicting, grammatical and structural annotation layers, and is able to handle resources that are distributed across different, and possibly geographically distant, storage systems. The majority of texts in DeReKo are not licensed for free redistribution; hence, the COSMAS and KorAP systems offer technical solutions to facilitate research on very large corpora that are not available (and not suitable) for download. For the new KorAP system, it is also planned to provide sandboxed environments to support non-remote-API access "near the data", through which users can run their own analysis programs.
Hosting Providers play an essential role in the development of Internet services such as e-Research Infrastructures. In order to promote the development of such services, legislators on both sides of the Atlantic Ocean introduced “safe harbour” provisions to protect Service Providers (a category which includes Hosting Providers) from legal claims (e.g. of copyright infringement). Relevant provisions can be found in § 512 of the United States Copyright Act and in art. 14 of the Directive 2000/31/EC (and its national implementations). The cornerstone of this framework is the passive role of the Hosting Provider through which he has no knowledge of the content that he hosts. With the arrival of Web 2.0, however, the role of Hosting Providers on the Internet changed; this change has been reflected in court decisions that have reached varying conclusions in the last few years. The purpose of this article is to present the existing framework (including recent case law from the US, Germany and France).
This contribution presents the procedure used in the Handbuch deutscher Kommunikationsverben and in its online version Kommunikationsverben in the lexicographical internet portal OWID to divide sets of semantically similar communication verbs into ever smaller sets of ever closer synonyms. Kommunikationsverben describes the meaning of communication verbs on two levels: a lexical level, represented in the dictionary entries and by sets of lexical features, and a conceptual level, represented by different types of situations referred to by specific types of verbs. The procedure starts at the conceptual level of meaning where verbs used to refer to the same specific situation type are grouped together. At the lexical level of meaning, the sets of verbs obtained from the first step are successively divided into smaller sets on the basis of the criteria of (i) identity of lexical meaning, (ii) identity of lexical features, and (iii) identity of contexts of usage. The stepwise procedure applied is shown to result in the creation of a semantic network for communication verbs.
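The stepwise division into ever smaller synonym sets can be mimicked with a simple partitioning function. The verbs and feature values below are invented for illustration and do not reproduce the Handbuch's actual analysis:

```python
from collections import defaultdict

# illustrative feature descriptions of three communication verbs
verbs = {
    "behaupten":  {"situation": "assert", "speaker_commits": True,  "formal": False},
    "versichern": {"situation": "assert", "speaker_commits": True,  "formal": True},
    "mitteilen":  {"situation": "inform", "speaker_commits": False, "formal": True},
}

def partition(verbs, keys):
    """Group verbs by identity of the selected feature values."""
    groups = defaultdict(list)
    for verb, feats in verbs.items():
        groups[tuple(feats[k] for k in keys)].append(verb)
    return dict(groups)

# step 1: group by the situation type referred to (conceptual level)
by_situation = partition(verbs, ["situation"])
# step 2: split each group further by identity of lexical features
by_features = partition(verbs, ["situation", "speaker_commits", "formal"])
```

Each successive partitioning criterion refines the previous grouping, which is the structural idea behind the network of near-synonym sets described in the abstract.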
Part-of-speech tagging (POS-tagging) of spoken data requires different means of annotation than POS-tagging of written and edited texts. In order to capture the features of German spoken language, a distinct tagset is needed to respond to the kinds of elements which only occur in speech. In order to create such a coherent tagset, the most prominent phenomena of spoken language need to be analyzed, especially with respect to how they differ from written language. First evaluations have shown that the most prominent cause (over 50%) of errors in the existing automated POS-tagging of transcripts of spoken German with the Stuttgart Tübingen Tagset (STTS) and the TreeTagger was the inaccurate interpretation of speech particles. One reason for this is that this class of words is virtually absent from the current STTS. This paper proposes a recategorization of the STTS in the field of speech particles based on distributional factors rather than semantics. The ultimate aim is to create a comprehensive reference corpus of spoken German data for the global research community. It is imperative that all phenomena are reliably recorded in future part-of-speech tag labels.
Machine learning methods offer a great potential to automatically investigate large amounts of data in the humanities. Our contribution to the workshop reports about ongoing work in the BMBF project KobRA (http://www.kobra.tu-dortmund.de) where we apply machine learning methods to the analysis of big corpora in language-focused research of computer-mediated communication (CMC). At the workshop, we will discuss first results from training a Support Vector Machine (SVM) for the classification of selected linguistic features in talk pages of the German Wikipedia corpus in DeReKo provided by the IDS Mannheim. We will investigate different representations of the data to integrate complex syntactic and semantic information for the SVM. The results shall foster both corpus-based research of CMC and the annotation of linguistic features in CMC corpora.
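A minimal, hedged sketch of the SVM setup described above, with toy snippets and invented labels (the KobRA project's actual features, categories, and training data are not reproduced here):

```python
# classify short CMC-like snippets with a linear SVM over character n-grams
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# toy training data: invented talk-page snippets with invented labels
texts = [
    "lol das war gut",
    "vgl. Abschnitt 3 der Diskussion",
    "haha voll witzig",
    "siehe Beleg unten",
]
labels = ["informal", "formal", "informal", "formal"]

# character n-grams are one simple representation; the project compares
# richer syntactic and semantic representations against such baselines
clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LinearSVC(),
)
clf.fit(texts, labels)
```

In practice, the choice of representation (surface n-grams versus syntactic or semantic features) is exactly the dimension the workshop contribution investigates.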
Language resources are often compiled for the purpose of variational analysis, such as studying differences between genres, registers, and disciplines, regional and diachronic variation, influence of gender, cultural context, etc. Often the sheer number of potentially interesting contrastive pairs can become overwhelming due to the combinatorial explosion of possible combinations. In this paper, we present an approach that combines well-understood visualization techniques (heatmaps and word clouds) with intuitive exploration paradigms (drill-down and side-by-side comparison) to facilitate the analysis of language variation in such highly combinatorial situations. Heatmaps assist in analyzing the overall pattern of variation in a corpus, and word clouds allow for inspecting variation at the level of words.
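A heatmap over contrastive cells can be sketched in a few lines; the genres, decades, and frequencies below are invented placeholders for whichever variational dimensions a corpus actually offers:

```python
import os
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

# hypothetical relative frequencies of one feature across genre x decade cells
genres = ["news", "fiction", "academic"]
decades = ["1990s", "2000s", "2010s"]
freq = np.array([[3.1, 4.0, 5.2],
                 [1.2, 1.3, 1.5],
                 [0.8, 0.9, 1.1]])

fig, ax = plt.subplots()
im = ax.imshow(freq, cmap="viridis")
ax.set_xticks(range(len(decades)))
ax.set_xticklabels(decades)
ax.set_yticks(range(len(genres)))
ax.set_yticklabels(genres)
fig.colorbar(im, label="per 10k tokens")
fig.savefig("variation_heatmap.png")
saved_ok = os.path.exists("variation_heatmap.png")
```

Drilling down would then mean re-rendering the same kind of plot for a selected cell's subcorpus, e.g. with a word cloud of its most distinctive items.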
Newspapers became extremely popular in Germany during the 18th and 19th centuries, and thus increasingly influential for modern German. However, due to the lack of digitized historical newspaper corpora for German, this influence could not be analyzed systematically. In this paper, we introduce the Mannheim Corpus of Digital Newspapers and Magazines, which in its current release comprises 21 newspapers and magazines from the 18th and 19th century. With over 4.1 million tokens in about 650 volumes, it currently constitutes the largest historical corpus dedicated to newspapers in German. We briefly discuss the prospect of the corpus for analyzing the evolution of news as a genre in its own right and the influence of contextual parameters such as region and register on the language of news. We then focus on one historically influential aspect of newspapers: their role in disseminating foreign words in German. Our preliminary quantitative results indeed indicate that newspapers use foreign words significantly more frequently than other genres, in particular belles lettres.
Discourses of Helping Professions brings together cutting-edge research on professional discourses from both traditional helping contexts such as doctor-patient interaction or psychotherapy and more recent helping contexts such as executive coaching. Unlike workplace, professional and institutional discourse – by now well established fields in linguistic research – discourses of helping professions represent an innovative concept in its orientation to a common communicative goal: solving patients’ and clients’ physical, psychological, emotional, professional or managerial problems via a particular helping discourse. The book sets out to uncover differences, similarities and interferences in how professionals and those seeking help interactively tackle this communicative goal. In its focus on professional helping contexts and its inter-professional perspective, the current book is a primer, intended to spark off more interdisciplinary and (applied) research on helping discourses, a socio-cultural phenomenon that is of growing importance in our post-modern society. As such, it is of great relevance for discourse researchers and discourse practitioners, caretakers and social scientists of all shades as well as for everybody interested in helping professions.
In 2010, ISO published a standard for syntactic annotation, ISO 24615:2010 (SynAF). Back then, the document specified a comprehensive reference model for the representation of syntactic annotations, but no accompanying XML serialisation. ISO’s subcommittee on language resource management (ISO TC 37/SC 4) is working on making the SynAF serialisation ISOTiger an additional part of the standard. This contribution addresses the current state of development of ISOTiger, along with a number of open issues on which we are seeking community feedback in order to ensure that ISOTiger becomes a useful extension to the SynAF reference model.
FOLK is the "Forschungs- und Lehrkorpus Gesprochenes Deutsch" (English: research and teaching corpus of spoken German). The project has set itself the aim of building a corpus of German conversations which a) covers a broad range of interaction types in private, institutional and public settings, b) is sufficiently large and diverse and of sufficient quality to support different qualitative and quantitative research approaches, c) is transcribed, annotated and made accessible according to current technological standards, and d) is available to the scientific community on a sound legal basis and without unnecessary restrictions of usage. This paper gives an overview of the corpus design, the strategies for acquisition of a diverse range of interaction data, and the corpus construction workflow from recording via transcription and annotation to dissemination.
The Database for Spoken German (Datenbank für Gesprochenes Deutsch, DGD2, http://dgd.ids-mannheim.de) is the central platform for publishing and disseminating spoken language corpora from the Archive of Spoken German (Archiv für Gesprochenes Deutsch, AGD, http://agd.ids-mannheim.de) at the Institute for the German Language in Mannheim. The corpora contained in the DGD2 come from a variety of sources, some of them in-house projects, some of them external projects. Most of the corpora were originally intended either for research into the (dialectal) variation of German or for studies in conversation analysis and related fields. The AGD has taken over the task of permanently archiving these resources and making them available for reuse to the research community. To date, the DGD2 offers access to 19 different corpora, totalling around 9000 speech events, 2500 hours of audio recordings or 8 million transcribed words. This paper gives an overview of the data made available via the DGD2, of the technical basis for its implementation, and of the most important functionalities it offers. The paper concludes with information about the users of the database and future plans for its development.
One was a distinguished natural scientist and engineer, the other a self-taught scientist vilified as a conman: Christian Gottlieb Kratzenstein (1723–1795) and Wolfgang von Kempelen (1734–1804). Some of the former's postulations on human physiology and the articulation of speech proved wrong in later years. Most of the latter's theories are considered applicable even today. Perhaps the most contrasting approaches to speech synthesis during the 18th century are linked to their names. There are many essential differences between their approaches which show that these two researchers were not only representatives of different schools of thought, but also representatives of two different scientific eras: a speculative and philosophical approach on the one hand versus an empirical and logical approach on the other. Both Kratzenstein and Kempelen published books on their research. But while the "Tentamen" [4] of the physician Kratzenstein remains rather vague and imprecise in its descriptions of vowel production and synthesis, the "Mechanismus" [8] of the engineer Kempelen shows much more precision and correctness in almost every respect of human speech and language. The goal of this paper is to discuss the differences between these two contemporaneous researchers on speech synthesis and to compare their theories with present-day findings.
This paper presents some results from an online survey regarding the functions and presentation of lexicographically compiled and automatically compiled corpus citations in a general monolingual e-dictionary of German (elexico). Our findings suggest that dictionary users have a clear understanding of the functions of corpus citations in lexicography.
Reading corpora are text collections that are enriched with processing data. From a corpus linguist's perspective, they can be seen as an extension of classical linguistic corpora with human language processing behavior. From a psycholinguist's perspective, reading corpora allow psycholinguistic hypotheses to be tested on subsets of language and language processing as it is 'in the wild', in contrast to the strictly controlled language material in isolated sentences used in most psycholinguistic experiments. In this paper, we investigate a relevance-based account of language processing which states that linguistic structures that are syntactically more deeply embedded are read faster, because readers allocate less attention to these structures.
Using the Google Ngram Corpora for six different languages (including two varieties of English), a large-scale time series analysis is conducted. It is demonstrated that diachronic changes of the parameters of the Zipf–Mandelbrot law (and the parameter of the Zipf law, all estimated by maximum likelihood) can be used to quantify and visualize important aspects of linguistic change (as represented in the Google Ngram Corpora). The analysis also reveals that there are important cross-linguistic differences. It is argued that the Zipf–Mandelbrot parameters can be used as a first indicator of diachronic linguistic change, but more thorough analyses should make use of the full spectrum of different lexical, syntactical and stylometric measures to fully understand the factors that actually drive those changes.
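Maximum-likelihood estimation of the Zipf–Mandelbrot parameters can be sketched as follows; this is a minimal illustration on synthetic frequencies, not the paper's actual estimation pipeline:

```python
import numpy as np
from scipy.optimize import minimize

def zipf_mandelbrot_nll(params, counts):
    """Negative log-likelihood of p(r) proportional to (r + b)^(-a),
    normalized over the observed ranks 1..V."""
    a, b = params
    ranks = np.arange(1, len(counts) + 1)
    log_unnorm = -a * np.log(ranks + b)
    log_norm = np.log(np.sum(np.exp(log_unnorm)))
    return -np.sum(counts * (log_unnorm - log_norm))

def fit_zipf_mandelbrot(counts):
    res = minimize(zipf_mandelbrot_nll, x0=[1.0, 1.0],
                   args=(np.asarray(counts, dtype=float),),
                   bounds=[(0.1, 5.0), (0.0, 50.0)], method="L-BFGS-B")
    return res.x

# synthetic rank-frequency data generated exactly from a=1.2, b=2.5
true_a, true_b = 1.2, 2.5
ranks = np.arange(1, 2001)
probs = (ranks + true_b) ** -true_a
probs /= probs.sum()
counts = probs * 1_000_000

a_hat, b_hat = fit_zipf_mandelbrot(counts)
```

Tracking how the fitted `a` and `b` drift across yearly slices of a corpus is, in essence, the time-series signal the abstract describes.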
When formulating a request for an object, speakers can choose among different grammatical resources that would all serve the overall purpose. This paper examines the social contexts indexed and created by the choice of the turn format can I have x to request a shared good (the pepper grinder, a tissue from a box on the table, etc.) in British English informal interaction. The analysis is based on a video corpus of approximately 25 h of everyday interaction among family and friends. In its home environment, a request in the format can I have x treats the other as being in control over the relevant material object, a control that is the contingent outcome of ongoing courses of action. This contingent control over a shared good produces an obligation to make it available. This analysis is supported by an examination of similarly formatted request turns in other languages, of can I have x in another interactional environment (after a relevant offer has been made) in British English, and of deviant cases. The results highlight the intimate connection of request format selection to the present engagements of (prospective) request recipients.
Recipient design is a key constituent of intersubjectivity in interaction. Recipient design of turns is informed by prior knowledge about and shared experience with recipients. Designing turns so as to be maximally effective for the particular recipient(s) is crucial for accomplishing intersubjectively coordinated action. This paper reports on a specific pragmatic structure of recipient design, i.e. counterfactual recipient design, and how it impinges on intersubjectivity in interaction. Based on an analysis of video-recorded data from driving school lessons in German, two kinds of counterfactual recipient design of instructors' requests are distinguished: pedagogic and egocentric turn-design. Counterfactual, pedagogic turn-design is used strategically to diagnose student skills and to create opportunities for corrective instructions. Egocentric turn-design rests on private, non-shared knowledge of the instructor. Egocentrically designed turns imply expectations of how to comply with requests which cannot be recovered by the student and which lead to a breakdown of intersubjective cooperation. This paper identifies practices, sources and interactional consequences of these two kinds of counterfactual recipient design. In addition, the study enhances our understanding of recipient design in at least three ways. It shows that recipient design does not only concern referential and descriptive practices, but also the indexing of intelligible projections of next actions; it highlights the productive, other-positioning effects of recipient design; and it argues that recipient design should be analyzed in terms of temporally extended interactional trajectories, linking turn-constructional practices to interactional histories and consecutive trajectories of joint action.
Positioning
(2015)
Over the last two decades, “positioning” has become an established concept used to elucidate how identities are deployed and negotiated in narratives. This chapter first locates positioning in the larger field of research on identities and discourse. Commonalities and differences in conceptions of positioning are highlighted. In the following, the historical development of theoretical approaches to positioning and their methodological implications are reviewed in more detail. The article closes by taking up two current lines of debate concerning the future development of the concept of positioning.
In this paper, a method for measuring synchronic corpus (dis-)similarity put forward by Kilgarriff (2001) is adapted and extended to identify trends and correlated changes in diachronic text data, using the Corpus of Historical American English (Davies 2010a) and the Google Ngram Corpora (Michel et al. 2010a). This paper shows that this fully data-driven method, which extracts word types that have undergone the most pronounced change in frequency in a given period of time, is computationally very cheap and that it allows interpretations of diachronic trends that are both intuitively plausible and motivated from the perspective of information theory. Furthermore, it demonstrates that the method is able to identify correlated linguistic changes and diachronic shifts that can be linked to historical events. Finally, it can help to improve diachronic POS tagging and complement existing NLP approaches. This indicates that the approach can facilitate an improved understanding of diachronic processes in language change.
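The frequency-comparison idea at the core of such a method can be sketched in a few lines: for each word type, compare its observed counts in two periods against the counts expected if its relative frequency had not changed, and rank types by the resulting chi-square-style score. The sketch below is a minimal illustration of this technique class, not the paper's exact adaptation of Kilgarriff's measure; the toy corpora are invented.

```python
from collections import Counter

def rank_changes(corpus_a, corpus_b):
    """Rank word types by a chi-square-style score comparing their
    relative frequencies in two (diachronic) corpora."""
    fa, fb = Counter(corpus_a), Counter(corpus_b)
    na, nb = sum(fa.values()), sum(fb.values())
    scores = {}
    for w in set(fa) | set(fb):
        oa, ob = fa[w], fb[w]            # observed counts in each period
        ea = (oa + ob) * na / (na + nb)  # expected counts under the null
        eb = (oa + ob) * nb / (na + nb)  # hypothesis of no frequency change
        scores[w] = (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return sorted(scores, key=scores.get, reverse=True)

early = "the car was a novelty on the road".split()
late  = "the computer and the internet changed the road to work".split()
print(rank_changes(early, late)[:3])  # types with the most pronounced change
```

Word types that appear in only one period score highest, while stable high-frequency types such as *the* end up at the bottom of the ranking, which is the behaviour the data-driven trend extraction relies on.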
This article reports on ongoing work on a new version of the metadata framework Component Metadata Infrastructure (CMDI), central to the CLARIN infrastructure. Version 1.2 introduces a number of important changes based on the experience gathered during five years of intensive use of CMDI by the digital humanities community, addressing problems encountered but also introducing new functionality. Next to the consolidation of the structure of the model and schema sanity, new means for lifecycle management have been introduced, aimed at combating the observed proliferation of components; a new mechanism for the use of external vocabularies will contribute to more consistent use of controlled values; and cues for tools will allow improved presentation of metadata records to human users. The feature set has been frozen and approved, and the infrastructure is now entering a transition phase in which all tools and data need to be migrated to the new version.
Usenet is a large online resource containing user-generated messages (news articles) organised in discussion groups (newsgroups) which deal with a wide variety of different topics. We describe the download, conversion, and annotation of a comprehensive German news corpus for integration in DeReKo, the German Reference Corpus hosted at the Institut für Deutsche Sprache in Mannheim.
This paper discusses computational linguistic methods for the semi-automatic analysis of modality interdependencies (the combination of complex resources such as speaking, writing, and visualizing; MID) in professional cross-situational interaction settings. The overall purpose of the approach is to develop models, methods, and a framework for the description and analysis of MID forms and functions. The paper describes work in progress: the development of an annotation framework that allows annotating different data and file formats at various levels, relating annotation levels and entries independently of the given file format, and visualizing patterns.
Some structures in printed dictionaries also occur in online dictionaries, some do not, some need to be adapted, and new structures may be introduced in online dictionaries. This paper looks at one type of structure, known in printed dictionaries as outer texts. It is argued that the notions of a frame structure and of front and back matter texts do not apply to online dictionaries. The data distribution in online dictionaries does not target only the dictionary articles. There are components outside the word list section of the dictionary. These components are not always texts; they could, for example, also be video clips. Consequently, the notion of outer texts in printed dictionaries is substituted by the notion of outer features in online dictionaries. This paper shows how outer features help to constitute a feature compound. The outer features in eight online dictionaries are discussed. Whereas the user guidelines text is a compulsory outer text in printed dictionaries, an equivalent feature is often eschewed in online dictionaries. A distinction is made between dictionary-internal and dictionary-external outer features, illustrating that outer features can be situated in sources other than the specific dictionary. More research is needed to formulate models for outer features that can play a comprehensive role in online dictionaries.
Word-formation rules differ from syntactic rules in that they, apart from obeying morphological and semantic constraints, can also be − and often are − restricted phonologically. The present article includes an overview of the relevant phenomena in English and discusses the consequences for the representation of words in the mental lexicon and for grammar.
Bilingual Kindergarten programmes. The interaction of language management and language attitudes
(2015)
We analyze the linguistic evolution of selected scientific disciplines over a 30-year time span (1970s to 2000s). Our focus is on four highly specialized disciplines at the boundaries of computer science that emerged during that time: computational linguistics, bioinformatics, digital construction, and microelectronics. Our analysis is driven by the question whether these disciplines develop a distinctive language use—both individually and collectively—over the given time period. The data set is the English Scientific Text Corpus (scitex), which includes texts from the 1970s/1980s and early 2000s. Our theoretical basis is register theory. In terms of methods, we combine corpus-based methods of feature extraction (various aggregated features [part-of-speech based], n-grams, lexico-grammatical patterns) and automatic text classification. The results of our research are directly relevant to the study of linguistic variation and languages for specific purposes (LSP) and have implications for various natural language processing (NLP) tasks, for example, authorship attribution, text mining, or training NLP tools.
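The general pipeline the abstract describes, extracting n-gram feature profiles from texts and feeding them into an automatic classifier, can be sketched as follows. This is only an illustration of the technique class with invented toy texts; the paper's actual feature sets (aggregated part-of-speech features, lexico-grammatical patterns) and classifier are richer.

```python
from collections import Counter
from math import sqrt

def ngrams(tokens, n=2):
    """Represent a text as a frequency profile of token n-grams."""
    return Counter(zip(*(tokens[i:] for i in range(n))))

def cosine(p, q):
    """Cosine similarity between two sparse frequency profiles."""
    dot = sum(p[k] * q.get(k, 0) for k in p)
    norm = sqrt(sum(v * v for v in p.values())) * sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Invented one-sentence 'discipline profiles' standing in for per-register centroids.
profile_a = ngrams("the parser maps the input string to a tree".split())
profile_b = ngrams("the gene encodes a protein in the cell".split())
test_doc  = ngrams("the parser builds a tree for the input".split())

label = "comp-ling" if cosine(test_doc, profile_a) > cosine(test_doc, profile_b) else "bioinf"
print(label)  # → comp-ling
```

If such a classifier separates the disciplines reliably on held-out texts, that is evidence for a distinctive register, which is the logic behind using text classification to probe linguistic variation.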
Speakers’ linguistic experience is for the most part experience with language as used in conversational interaction. Though highly relevant for usage-based linguistics, the study of such data is as yet often left to other frameworks such as conversation analysis and interactional linguistics (Couper-Kuhlen and Selting 2001). On the basis of a case study of salient usage patterns of the two German motion verbs kommen and gehen in spontaneous conversation, the present paper argues for a methodological integration of quantitative corpus-linguistic methods with qualitative conversation analytic approaches to further the usage-based study of conversational interaction.
The book investigates the diachronic dimension of contact-induced language change based on empirical data from Pennsylvania German (PG), a variety of German in long-term contact with English. Written data published in local print media from Pennsylvania (USA) between 1868 and 1992 are analyzed with respect to semantic changes in the argument structure of verbs, the use of impersonal constructions, word order changes in subordinate clauses and in prepositional phrase constructions.
The research objective is to trace language change based on diachronic empirical data, and to assess whether existing models of language contact make provisions to cover the long-term developments found in PG. The focus of the study is thus twofold: first, it provides a detailed analysis of selected semantic and syntactic changes in Pennsylvania German, and second, it links the empirical findings to theoretical approaches to language contact.
Previous investigations of PG have drawn a more or less static, rather than dynamic, picture of this contact variety. The present study explores how the dynamics of language contact can bring about language mixing, borrowing, and, eventually, language change, taking into account psycholinguistic processes in (the head of) the bilingual speaker.
Metalinguistic awareness of standard vs standard usage. The case of determiners in spoken German
(2015)
Contents:
1. Michal Křen: Recent Developments in the Czech National Corpus, S. 1
2. Dan Tufiş, Verginica Barbu Mititelu, Elena Irimia, Stefan Dumitrescu, Tiberiu Boros, Horia Nicolai Teodorescu: CoRoLa Starts Blooming – An update on the Reference Corpus of Contemporary Romanian Language, S. 5
3. Sebastian Buschjäger, Lukas Pfahler, Katharina Morik: Discovering Subtle Word Relations in Large German Corpora, S. 11
4. Johannes Graën, Simon Clematide: Challenges in the Alignment, Management and Exploitation of Large and Richly Annotated Multi-Parallel Corpora, S. 15
5. Stefan Evert, Andrew Hardie: Ziggurat: A new data model and indexing format for large annotated text corpora, S. 21
6. Roland Schäfer: Processing and querying large web corpora with the COW14 architecture, S. 28
7. Jochen Tiepmar: Release of the MySQL-based implementation of the CTS protocol, S. 35
Frimer et al. (2015) claim that there is a linear relationship between the level of prosocial language and the level of public disapproval of the US Congress. A re-analysis demonstrates that this relationship is the result of a misspecified model that does not account for first-order autocorrelated disturbances. A Stata script to reproduce all presented results is available as an appendix.
The present thesis introduces KoralQuery, a protocol for the generic representation of queries to linguistic corpora. KoralQuery defines a set of types and operations which serve as abstract representations of linguistic entities and configurations. By combining these types and operations in a nested structure, the protocol may express linguistic structures of arbitrary complexity. It achieves a high degree of neutrality with regard to linguistic theory, as it provides flexible structures that allow for the setting of certain parameters to access several complementing and concurrent sources and layers of annotation on the same textual data. JSON-LD is used as a serialisation format for KoralQuery, which allows for the well-defined and normalised exchange of linguistic queries between query engines to promote their interoperability. The automatic translation of queries issued in any of three supported query languages to such KoralQuery serialisations is the second main contribution of this thesis. By employing the introduced translation module, query engines may also work independently of particular query languages, as their backend technology may rely entirely on the abstract KoralQuery representations of the queries. Thus, query engines may provide support for several query languages at once without any additional overhead. The original idea of a general format for the representation of linguistic queries comes from an initiative called Corpus Query Lingua Franca (CQLF), whose theoretic backbone and practical considerations are outlined in the first part of this thesis. This part also includes a brief survey of three typologically different corpus query languages, thus demonstrating their wide variety of features and defining the minimal target space of linguistic types and operations to be covered by KoralQuery.
This paper presents a dictionary writing system developed at the Institute for the German Language in Mannheim (IDS) for an ongoing international lexicographical project that traces the path of German loanwords into the East Slavic languages Russian, Belarusian and Ukrainian that were possibly borrowed via Polish. The results will be published in the Lehnwortportal Deutsch (LWP, lwp.ids-mannheim.de), a web portal for loanword dictionaries with German as the common donor language. The system described here is currently in use for excerpting data from a large range of historical and contemporary East Slavic monolingual dictionaries. The paper focuses on the tools that help in merging excerpts that are etymologically related to one and the same Polish etymon. The merging process involves eliminating redundancies and inconsistencies and, above all, mapping word senses of excerpted entries onto a common cross-language set of ‘metasenses’. This mapping may involve literally hundreds of excerpted East Slavic word senses, including quotations, for one ‘underlying’ Polish etymon.
The task-oriented and format-driven development of corpus query systems has led to the creation of numerous corpus query languages (QLs) that vary strongly in expressiveness and syntax. This is a severe impediment for the interoperability of corpus analysis systems, which lack a common protocol. In this paper, we present KoralQuery, a JSON-LD based general corpus query protocol, aiming to be independent of particular QLs, tasks and corpus formats. In addition to describing the system of types and operations that KoralQuery is built on, we exemplify the representation of corpus queries in the serialized format and illustrate use cases in the KorAP project.
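The shape of such a protocol, nested types and operations serialized as JSON-LD, can be sketched as below. All field names and the context URL are illustrative placeholders, not the actual KoralQuery schema; the point is only how a query becomes a nested, QL-independent structure.

```python
import json

# A hypothetical, simplified query in the spirit of KoralQuery: an ordered
# sequence of two annotated tokens, expressed as nested types/operations.
# Every key and value below is invented for illustration.
query = {
    "@context": "http://example.org/koralquery/context.jsonld",  # placeholder
    "query": {
        "@type": "operation:sequence",   # operation combining two operands
        "operands": [
            {"@type": "token",
             "wrap": {"@type": "term", "layer": "pos", "key": "ADJA"}},
            {"@type": "token",
             "wrap": {"@type": "term", "layer": "lemma", "key": "Haus"}},
        ],
    },
}

serialized = json.dumps(query, indent=2)
print(serialized)
```

Because annotation layers are named explicitly (`"layer": "pos"` vs. `"layer": "lemma"` here), the same structure can address several concurrent annotation sources on the same text, which is what makes the representation neutral with regard to linguistic theory.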
Introduction
(2015)
Learning from Errors. Systematic Analysis of Complex Writing Errors for Improving Writing Technology
(2015)
In this paper, we describe ongoing research on writing errors with the ultimate goal of developing error-preventing editing functions in word processors. Drawing on state-of-the-art error research from various fields, we propose the application of a general concept of action slips as introduced by Norman. We demonstrate the feasibility of this approach using a large corpus of writing errors in published texts. The concept of slips considers both the process and the product: some failure in a procedure results in an error in the product, i.e., is visible in the written text. In order to develop preventive functions, we need to determine the causes of such visible errors.
Natural Language Processing tools are mostly developed for and optimized on newspaper texts, and often show a substantial performance drop when applied to other types of texts such as Twitter feeds, chat data or Internet forum posts. We explore a range of easy-to-implement methods of adapting existing part-of-speech taggers to improve their performance on Internet texts. Our results show that these methods can improve tagger performance substantially.
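One easy-to-implement adaptation of the general kind explored here is token normalization: mapping Internet-specific tokens (URLs, @-mentions, hashtags, emoticons) to placeholder words before running a tagger trained on newspaper text. The sketch below illustrates the idea; the placeholder names and patterns are illustrative, not the paper's actual method.

```python
import re

# Normalization rules, applied in order; the first full match wins.
PATTERNS = [
    (re.compile(r"https?://\S+"), "URL"),        # web addresses
    (re.compile(r"@\w+"), "MENTION"),            # @-mentions
    (re.compile(r"#(\w+)"), r"\1"),              # keep hashtag word, drop '#'
    (re.compile(r"[:;]-?[)(DP]"), "EMOTICON"),   # a few common emoticons
]

def normalize(tokens):
    """Replace Internet-specific tokens with tagger-friendly placeholders."""
    out = []
    for tok in tokens:
        for pat, repl in PATTERNS:
            if pat.fullmatch(tok):
                tok = pat.sub(repl, tok)
                break
        out.append(tok)
    return out

print(normalize(["@user", "check", "#this", "out", ":)", "http://x.io/a"]))
# → ['MENTION', 'check', 'this', 'out', 'EMOTICON', 'URL']
```

After normalization, the input looks much more like the edited text the tagger was trained on, which is why such cheap preprocessing can already recover part of the performance drop.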
We present studies using the 2013 log files from the German version of Wiktionary. We investigate several lexicographically relevant variables and their effect on look-up frequency: Corpus frequency of the headword seems to have a strong effect on the number of visits to a Wiktionary entry. We then consider the question of whether polysemic words are looked up more often than monosemic ones. Here, we also have to take into account that polysemic words are more frequent in most languages. Finally, we present a technique to investigate the time-course of look-up behaviour for specific entries. We exemplify the method by investigating influences of (temporary) social relevance of specific headwords.
This paper shows how understanding in interaction is informed by temporality, and in particular, by the workings of retrospection. Understanding is a temporally extended, sequentially organized process. Temporality, namely, the sequential relationship of turn positions, equips participants with default mechanisms to display understandings and to expect such displays. These mechanisms require local management of turn-taking to be in order, i.e., the possibility and the expectation to respond locally and reciprocally to prior turns at talk. Sequential positions of turns in interaction provide an infrastructure for displaying understanding and accomplishing intersubjectivity. Linguistic practices specialized in displaying particular kinds of (not) understanding are adapted to the individual sequential positions with respect to an action-to-be-understood.
The authors establish a phenomenological perspective on the temporal constitution of experience and action. Retrospection and projection (i.e. backward as well as forward orientation of everyday action), sequentiality and the sequential organization of activities as well as simultaneity (i.e. participants’ simultaneous coordination) are introduced as key concepts of a temporalized approach to interaction. These concepts are used to capture that every action is produced as an inter-linked step in the succession of adjacent actions, being sensitive to the precise moment where it is produced. The adoption of a holistic, multimodal and praxeological perspective additionally shows that action in interaction is organized according to several temporal orders simultaneously in operation. Each multimodal resource used in interaction has its own temporal properties.
Temporality in interaction
(2015)
Corpus-assisted analyses of public discourse often focus on the level of the lexicon. This article argues in favour of corpus-assisted analyses of discourse, but also in favour of conceptualising salient lexical items in public discourse in a more determined way. It draws partly on non-Anglophone academic traditions in order to promote a conceptualisation of discourse keywords, thereby highlighting how their meaning is determined by their use in discourse contexts. It also argues in favour of emphasising the cognitive and epistemic dimensions of discourse-determined semantic structures. These points will be exemplified by means of a corpus-assisted, as well as a frame-based analysis of the discourse keyword financial crisis in British newspaper articles from 2009. Collocations of financial crisis are assigned to a generic matrix frame for ‘event’ which contains slots that specify possible statements about events. By looking at which slots are more, respectively less filled with collocates of financial crisis, we will trace semantic presence as well as absence, and thereby highlight the pragmatic dimensions of lexical semantics in public discourse. The article also advocates the suitability of discourse keyword analyses for systematic contrastive analyses of public/political discourse and for lexicographical projects that could serve to extend the insights drawn from corpus-guided approaches to discourse analysis.
Formal learning in higher education creates its own challenges for didactics, teaching, technology, and organization. The growing need for well-educated employees requires new ideas and tools in education. Within the ROLE project, three personal learning environments based on ROLE technology were used to accompany “traditional” teaching and learning activities at universities. The test beds at the RWTH Aachen University in Germany, the School of Continuing Education of Shanghai Jiao Tong University in China, and the Uppsala University in Sweden differ in learning culture, the number of students and their individual background, synchronous versus distant learning, etc. The broad range of test beds underlines the flexibility of ROLE technology. For each test bed, the learning scenario is presented and analyzed, as well as the particular ROLE learning environment. The evaluation methods are described and the research results discussed in detail. The lessons learned provide an easy way to benefit from the ROLE research work, which demonstrates the potential for new ideas based on flexible e-learning concepts and tools in “traditional” education.
Optimality theory (henceforth OT) models natural language competence in terms of interactions of universal constraints, notably markedness and faithfulness constraints. This article illustrates some of the major advances in the understanding of word-formation phenomena originating from this theory, including the prosodic organization of morphologically complex words, neutralization patterns in derivational affixes, allomorphy, and infixation.
In this paper, general problems with easily confused words in a language community are addressed. Serving as an example, the difficulties of semantic differentiation between the use of German sensibel and sensitiv are discussed. On the one hand, the question is raised as to how a speech community faces challenges of semantic shifts and how monolingual dictionaries document lexical items with similar semantic aspects. On the other hand, I will demonstrate the discrepancies of information on meaning as retrieved and interpreted from large corpus data. It will be shown how the semantics of words change and hence cause confusion among speakers. As a result, empirical evidence opens up several questions concerning the prescriptive vs. descriptive treatment of paronymic items such as sensibel/sensitiv, and it demands different approaches to the lexicographic description of such words in future reference works.
During the second half of the 19th century, extended regions of the South Pacific came to be part of the German colonial empire. The colonial administration included repeated and diverse efforts to implement German as the official language in several settings (administration, government, education) in the colonial areas. Due to unfamiliar sociological and linguistic conditions, to competition with English as a(nother) prestigious colonizer language, and to the short time-span of the German colonial rule, these efforts rendered only little language-related effect. Nevertheless, some linguistic traces remained, and these seem to reflect in what areas language implementation was organized most thoroughly. The study combines two directions of investigation: First, taking a historical approach, legal and otherwise official documents and information are considered in order to understand how the implementation process was planned and (intended to be) carried out. Second, from a linguistic perspective, documented lexical borrowings and other traces of linguistic contact are identified that can corroborate the historical findings by reflecting a greater effect of contact in such areas where the implementation of German was carried out most strictly. The goal of this paper is, firstly, to trace the political and missionary activities in language planning with regard to German in the colonial Pacific, rather similar to a modern language policy scenario when a new code of prestige or national unity is implemented. Secondly, these activities are evaluated in the face of the outcome that can be observed, in the historical practice as well as in long-term effects of language contact up until today.
In order to demonstrate why it is important to correctly account for the (serially dependent) structure of temporal data, we document an apparently spectacular relationship between population size and lexical diversity: for five out of seven investigated languages, there is a strong relationship between population size and the lexical diversity of the primary language in the respective country. We show that this relationship is the result of a misspecified model that does not consider the temporal aspect of the data by presenting a similar but nonsensical relationship between the global annual mean sea level and lexical diversity. Given the fact that in the recent past, several studies were published that present surprising links between different economic, cultural, political and (socio-)demographical variables on the one hand and cultural or linguistic characteristics on the other hand, but seem to suffer from exactly this problem, we explain the cause of the misspecification and show that it has profound consequences. We demonstrate how simple transformation of the time series can often solve problems of this type and argue that the evaluation of the plausibility of a relationship is important in this context. We hope that our paper will help both researchers and reviewers to understand why it is important to use special models for the analysis of data with a natural temporal ordering.
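The core of the argument, that two unrelated series which both trend over time correlate spuriously in levels, and that a simple transformation such as first differencing removes the artifact, can be demonstrated in a few lines. The simulated series below are invented for illustration and stand in for quantities like population size and lexical diversity.

```python
import random

def pearson(a, b):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

def diff(s):
    """First differences of a time series."""
    return [b - a for a, b in zip(s, s[1:])]

rng = random.Random(0)
# Two series with independent noise that both happen to trend upward.
x = [t + rng.gauss(0, 3) for t in range(200)]
y = [2 * t + rng.gauss(0, 3) for t in range(200)]

print(round(pearson(x, y), 2))              # levels: spuriously near 1
print(round(pearson(diff(x), diff(y)), 2))  # first differences: near 0
```

The shared time trend drives the levels correlation toward 1 even though the year-to-year fluctuations of the two series are completely independent; differencing strips the trend and exposes that independence.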
This thesis consists of the following three papers that all have been published in international peer-reviewed journals:
Chapter 3: Koplenig, Alexander (2015c). The Impact of Lacking Metadata for the Measurement of Cultural and Linguistic Change Using the Google Ngram Data Sets—Reconstructing the Composition of the German Corpus in Times of WWII. Published in: Digital Scholarship in the Humanities. Oxford: Oxford University Press. [doi:10.1093/llc/fqv037]
Chapter 4: Koplenig, Alexander (2015b). Why the quantitative analysis of diachronic corpora that does not consider the temporal aspect of time-series can lead to wrong conclusions. Published in: Digital Scholarship in the Humanities. Oxford: Oxford University Press. [doi:10.1093/llc/fqv030]
Chapter 5: Koplenig, Alexander (2015a). Using the parameters of the Zipf–Mandelbrot law to measure diachronic lexical, syntactical and stylistic changes – a large-scale corpus analysis. Published in: Corpus Linguistics and Linguistic Theory. Berlin/Boston: de Gruyter. [doi:10.1515/cllt-2014-0049]
Chapter 1 introduces the topic by describing and discussing several basic concepts relevant to the statistical analysis of corpus linguistic data. Chapter 2 presents a method to analyze diachronic corpus data and a summary of the three publications. Chapters 3 to 5 each represent one of the three publications. All papers are printed in this thesis with the permission of the publishers.