When working as a native speaker of German with corpora of spoken or written German, one rarely reflects on the wealth of culture-specific information codified in such texts, especially when these data are contemporary. In most cases one has no difficulty with the background knowledge that the data presuppose and treat as generally known. If, by contrast, one looks at corpora documenting other languages, above all non-Indo-European ones, one quickly becomes aware of how much culture-specific knowledge is needed to understand such data adequately. In my contribution I illustrate this observation with an example from my corpus of Kilivila, the Austronesian language of the Trobriand Islanders of Papua New Guinea. Using a short excerpt from a roughly 26-minute documentation of what and how six Trobrianders gossip with one another, I show what a hearer or reader of such a short data excerpt must know, not only to be able to follow the conversation at all, but also to understand what is going on in it and why a conversation that at first glance seems entirely everyday suddenly takes on enormous explosiveness and significance for a Trobriander. Against the background of this example, I conclude by pointing out how absolutely necessary it is, in all corpora, to make such culture-specific information explicit through so-called metadata when processing and annotating data materials.
Transdisciplinary research is research not only on, but also for and, most of all, with practitioners. In the research framework of transdisciplinarity, scholars and practitioners collaborate throughout research projects with the aim of mutual learning. This paper shows the value transdisciplinarity can add to media linguistics. It does so by investigating the digital literacy shift in journalism: the change, in the last two decades, from the predominance of a writing mode that we have termed focused writing to a mode we have called writing-by-the-way. Large corpora of writing process data have been generated and analyzed with the multimethod approach of progression analysis in order to combine analytical depth with breadth. On the object level of doing writing in journalism, results show that the general trend towards writing-by-the-way opens up new niches for focused writing. On a meta level of doing research, findings explain under what conditions transdisciplinarity allows for deeper insights into the medialinguistic object of investigation.
Cyberbullying is the deliberate attempt to deconstruct another person's face online. About one third of all adolescents have been confronted with this problem at least once. It reached a temporary peak with the appearance of the website Isharegossip.com (ISG), which quickly developed into a veritable bullying platform. Perpetrators found particularly drastic verbal means there to compromise their victims. Until now there has been no qualitative analysis of how victims and so-called virtual bystanders react to these verbal attacks. The aim of this essay is to use a typical discourse to identify six defence strategies that victims, but also so-called virtual bystanders, employ to reconstruct and stabilize the victim's face.
The traditional classification of man as an indefinite pronoun is called into question, and other possible classifications are examined. To this end, the morphosyntax and semantics of man are worked out, with the dichotomy of 'generic' versus 'particular' use a particular point of debate. Finally, a brief look is taken at man from the learner's perspective and in cross-linguistic comparison.
Eight authentic work meetings from companies form the basis for a detailed linguistic analysis. From micro-signals to rhetorical procedures, linguistic means are described with regard to their steering and manipulative functions. From the participants' conversational behaviour, a spectrum of social structures in business organizations unfolds in actu.
The paper presents a summary of an attempt to define the notion of “sentence mood”. It pursues the question of which phenomena can sensibly be subsumed under this term. It proposes to use “sentence mood” to capture one aspect of sentence (not clause!) meaning which can be seen as the basis of the traditional sentence type (Satzarten) distinction. This aspect of sentence meaning is a special kind of attitude towards the state of affairs denoted by the sentence. It is typically determined by supralexical factors and is to be interpreted under normal conditions.
"Sprachschrott" [Leserforum]
(1988)
"Systemrelevant" - eine sprachwissenschaftliche Betrachtung des Begriffs aus aktuellem Anlass
(2020)
"Themengebundene Verwendung(en)" als neuer Angabetyp unter der Rubrik "Besonderheiten des Gebrauchs"
(2011)
"Verschlampung". Zur Glosse von B. Strecker "Wem die Sprache gehört" (SPRACHREPORT 2/89, S. 4)
(1989)
The text genre of prayer has a clear formal structure, and some statements can also be made from a speech-act-theoretical perspective. About the content of prayers, however, we still know too little, and here linguists face above all methodological problems: the wording of private prayers is hardly accessible. This essay presents a questionnaire study, designed as a pretest, that addresses various aspects of praying. Tendencies are pointed out as to whether and how people verbalize emotions in prayer. Assumptions can also be derived about the conceptualization of God underlying this communication. In this context, the genre-specific features of prayer are discussed.
"wer ich bin? dein schlimmster alptraum, baby!" Cybermobbing - ein Thema für den Deutschunterricht
(2012)
From the beginning of media history, journalists have used more or less fixed phrases, mostly to give information about the sources of a news item, its background, and its transmission. This study examines the communicative, syntactic, and lexical forms of their verbalization with regard to the emergence and transmission of fixed phrases. Extensive unpublished material is documented and interpreted in the process.
"Wie Schule Sprache macht"
(2019)
"Wilde Pflanzen ohne nährende Frucht". Der politisch-soziale Wortschatz bei den Brüdern Grimm
(1990)
"Übergesetzliches Recht". Reflexionen nationalsozialistischen Unrechts in der frühen Nachkriegszeit
(2002)
Judicial opinions are documents of their time and thus of language history. They are psychograms of a state's society. In the first post-war decade they reflect the two forms of existence of the German polity up to and from 1949. Against this background, the article reconstructs the judicial self-profiles that determine the jurisprudence of the first post-war decade, and then uses three examples to pursue the question of how it is possible that one judge, in the course of a concise argumentation, rejects the very argument that another judge, arguing just as concisely, accepts. Following its research interest and above all the nature of the text genre 'court judgment' under investigation, the theoretical foundation of this study consists of an ensemble of argumentation-analytical and concept-analytical aspects: judicial opinions are, by their purpose, argumentative texts that realize concepts of guilt in their argumentation. Reflections on this point precede the study.
In his "Theorie der Gartenkunst", Christian Cay Lorenz Hirschfeld (1742-1792) described the position of man in nature and mirrored the social conditions of his time. In doing so, he established a connection between the art form of the landscape garden and the improvement of humanity. This volume demonstrates by example with which lexematic material he implemented the combination of differentiated description and intended aesthetic education with a moral purpose, within the lexical framework given by the language system, and which linguistic strategies resulted from these intentions.
This thesis is a corpus-linguistic investigation of the language used by young German speakers online, examining lexical, morphological, orthographic, and syntactic features and changes in language use over time. The study analyses the language in the Nottinghamer Korpus deutscher YouTube‐Sprache ("Nottingham corpus of German YouTube language", or NottDeuYTSch corpus), one of the first large corpora of German-language comments taken from the video-sharing website YouTube, built specifically for this project. The metadata-rich corpus comprises c. 33 million tokens from more than 3 million comments posted underneath videos uploaded by mainstream German-language youth-orientated YouTube channels between 2008 and 2018.
The NottDeuYTSch corpus was created to enable corpus linguistic approaches to studying digital German youth language (Jugendsprache), having identified the need for more specialised web corpora (see Barbaresi 2019). The methodology for compiling the corpus is described in detail in the thesis to facilitate future construction of web corpora. The thesis is situated at the intersection of Computer‐Mediated Communication (CMC) and youth language, which have been important areas of sociolinguistic scholarship since the 1980s, and explores what we can learn from a corpus‐driven, longitudinal approach to (online) youth language. To do so, the thesis uses corpus linguistic methods to analyse three main areas:
1. Lexical trends and the morphology of polysemous lexical items. For this purpose, the analysis focuses on geil, one of the most iconic and productive words in youth language, and presents a longitudinal analysis, demonstrating that usage of geil has decreased, and identifies lexical items that have emerged as potential replacements. Additionally, geil is used to analyse innovative morphological productiveness, demonstrating how different senses of geil are used as a base lexeme or affixoid in compounding and derivation.
2. Syntactic developments. The novel grammaticalization of several subordinating conjunctions into both coordinating conjunctions and discourse markers is examined. The investigation is supported by statistical analyses that demonstrate an increase in the use of non‐standard syntax over the timeframe of the corpus and compares the results with other corpora of written language.
3. Orthography and the metacommunicative features of digital writing. This analysis identifies orthographic features and strategies in the corpus, e.g. the repetition of certain emoji, and develops a holistic framework to study metacommunicative functions, such as the communication of illocutionary force, information structure, or the expression of identities. The framework unifies previous research that had focused on individual features, integrating a wide range of metacommunicative strategies within a single, robust system of analysis.
By using qualitative and computational analytical frameworks within corpus linguistic methods, the thesis identifies emergent linguistic features in digital youth language in German and sheds further light on lexical and morphosyntactic changes and trends in the language of young people over the period 2008‐2018. The study has also further developed and augmented existing analytical frameworks to widen the scope of their application to orthographic features associated with digital writing.
Positioning analysis, a variant of discourse analysis, was used to explore the narratives of 40 psychiatric patients (11 females and 29 males; mean age = 40 years) who had manifest difficulties with engagement with statutory mental health services. Positioning analysis is a qualitative method that captures how people linguistically position the roles and identities of themselves and others in their day-to-day lives and narratives. The language of disengagement incorporated the passive positioning of self in relation to their lives and treatment through the use of metaphor, the passive voice and ‘them and us’ attribution, while the discourse of engagement incorporated more active positioning of self, achieved through the use of the personal pronoun ‘we’ and metaphoric references to balanced relationships. The findings corroborate previous thematic analysis that highlighted the importance of identity and agency in the ‘making or breaking’ of therapeutic relationships (Priebe et al. 2005). Implications are discussed in relation to how positioning analysis may help signal and emphasize important life and therapeutic experiences in spoken narratives as well as clinical consultations.
The paper reports the results of the curation project ChatCorpus2CLARIN. The goal of the project was to develop a workflow and resources for the integration of an existing chat corpus into the CLARIN-D research infrastructure for language resources and tools in the Humanities and the Social Sciences (http://clarin-d.de). The paper presents an overview of the resources and practices developed in the project, describes the added value of the resource after its integration and discusses, as an outlook, to what extent these practices can be considered best practices which may be useful for the annotation and representation of other CMC and social media corpora.
This study explores how ‘gatherings’ turn into ‘encounters’ in a virtual world (VW) context. Most communication technologies enable only focused encounters between distributed participants, but in VWs both gatherings and encounters can occur. We present close sequential analysis of moments when, after a silent gathering, interaction among participants in a VW is gradually resumed, and also investigate the social actions in the verbal (re-)opening turns. Our findings show that, as in face-to-face situations, participants in VWs often use different types of embodied resources to achieve the transition, rather than relying on verbal means only. However, the transition process in VWs has distinctive characteristics compared to that in face-to-face situations. We discuss how participants in a VW use virtually embodied pre-beginnings to display what we call encounter-readiness, instead of displaying lack of presence by avatar stillness. The data comprise 40 episodes of video-recorded team interactions in a VW.
In recent years, formal semantic research on the meaning of tense and aspect has benefited from a number of studies investigating languages with graded tense systems. This paper contributes a first sketch of the temporal marking system of Awing (Grassfields Bantu), focusing on two varieties of remote past and remote future. We argue that the data support a "symmetric" analysis of past and future tense in Awing. In our specific proposal, Awing temporal remoteness markers are uniformly analyzed as quantificational tense operators, and both the past and the future paradigm include a form that prevents contextual restriction of this temporal quantifier.
Youth and scene languages are important resources for the lexical change of the standard language "from below", whose final stage is entry into a general-language dictionary. The aims of this article are to model the diffusion process of youth-language lexical innovations and to clarify the role of the mass media in lexical change from below. The discussion links the micro-perspective of sociolinguistic accommodation and network theory with the macro-perspective of the mass media as indicators of the societal reach of linguistic innovations. Three analytical dimensions are related to one another. First, an attempt is made to identify lexical categories that are open to innovation. The second step concerns the social diffusion of lexical innovations, addressing individual linguistic accommodation and imitation as well as the role of the mass media in the diffusion process. On this basis, the "careers" of youth-language expressions in public communication are examined quantitatively and qualitatively. The increasing frequency of occurrence of selected lexical items is traced in the IDS newspaper corpus. Using the example of chillen, a developmental path is worked out that leads from metalinguistic thematization via use as quotation to the expression's entry into the journalists' own perspective.
This conference booklet provides information about the 10th International Contrastive Linguistics Conference (ICLC-10), which took place in Mannheim, Germany, from 18 to 21 July 2023. It contains
– a description of the conference aims,
– details on the conference venue,
– information on committees,
– the conference program,
– the abstracts of the keynotes, oral and poster presentations, and
– an author index.
This paper focuses on the first Slavonic-Romanian lexicons, compiled in the second half of the 17th century, and their use(rs), proposing a method for investigating how the lexical information available in this corpus relates, if at all, to the vocabulary of texts from the same period. We chose to investigate their relation to an anonymous Old Testament translation made from Church Slavonic, also from the second half of the 17th century, which is thought to have been produced in the same geographical area, in the same Church Slavonic school, or even by the same author as the lexicons. After applying a lemmatizer to both the Biblical text (Books of Genesis and Daniel) and the Romanian material from the lexicons, we analyse the results and complement the statistical analysis with a series of case studies, focusing on some common lexemes that might be an indicator of the relatedness of the texts. Even though the analysis suggests that the lexicons might not have been compiled as a tool for the translation of religious texts, the method proves useful, revealing interesting data and providing the basis for more extensive approaches.
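The core of the comparison, lemmatize both texts and measure how much of the translation's vocabulary the lexicons cover, can be sketched minimally as follows. This is only an illustration of the overlap statistic, with invented placeholder lemmas, not data from the lexicons or the Old Testament translation:

```python
def lemma_overlap(lexicon_lemmas, text_lemmas):
    """Share of a text's lemma types that also occur in a lexicon,
    plus the shared lemmas themselves (a toy stand-in for the
    lexicon-vs-translation comparison described in the paper)."""
    lexicon, text = set(lexicon_lemmas), set(text_lemmas)
    common = lexicon & text
    return len(common) / len(text), sorted(common)

# Hypothetical lemma lists; real input would come from a lemmatizer.
coverage, shared = lemma_overlap(
    ["domn", "carte", "pamant", "cer"],        # lexicon entries
    ["domn", "cer", "apa", "domn", "lume"],    # lemmatized Bible text
)
# coverage is the fraction of the text's lemma *types* found in the lexicon.
```

Case studies on individual shared lemmas would then start from the `shared` list.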
The linguistic changes of the last 20 years are marked by two periods that could hardly have been more different with regard to vocabulary development. The first, short one is shaped by the Wende period, with conspicuous but mostly only temporary lexeme change, and by the GDR's accession to the Federal Republic, with the disappearance or replacement of most of the GDR-typical vocabulary. The second, considerably longer period is determined by developments in unified Germany, with vocabulary change that is inconspicuous by comparison because it is continuous.
Using the polyfunctional multi-word unit <was weiß ich> as an example, the interplay of pragmatic and phonetic differentiation in pragmaticalization processes is examined. For this purpose, spontaneous-speech tokens from the corpus "Deutsch heute" are analysed. The observed range of phonetic variation points to a complex relationship with the respective pragmatic functions.
This paper presents the application of the <tiger2/> format to various linguistic scenarios with the aim of making it the standard serialisation for the ISO 24615 [1] (SynAF) standard. After outlining the main characteristics of both the SynAF metamodel and the <tiger2/> format, as extended from the initial Tiger XML format [2], we show through a range of different language families how <tiger2/> covers a variety of constituency- and dependency-based analyses.
So-called "pragmaticalized multi-word units" are highly frequent in German and are at times subject to far-reaching phonetic reduction processes. These can produce realization variants that, in retrospect, can be traced back to more than one lexematic source form. The present study examines [ˈzɐmɐ], a particularly striking case of this kind, by means of a perception experiment.
This introductory tutorial describes a strictly corpus-driven approach for uncovering indications of aspects of use of lexical items. These aspects include ‘(lexical) meaning’ in a very broad sense and involve different dimensions; they are established in and emerge from the respective discourses. Using data-driven mathematical-statistical methods with minimal (linguistic) premises, a word’s usage spectrum is summarized as a collocation profile. Self-organizing methods are applied to visualize the complex similarity structure spanned by these profiles. These visualizations point to the typical aspects of a word’s use, and to the common and distinctive aspects of any two words.
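A minimal sketch of the collocation-profile idea, under simplifying assumptions (plain window co-occurrence counts and cosine similarity, whereas the tutorial's actual statistics are richer): each word's usage spectrum becomes a vector of its window-mates, and two words are compared via their profiles. The mini-corpus and all names are illustrative only:

```python
from collections import Counter
from math import sqrt

def collocation_profile(corpus, target, window=2):
    """Count co-occurring words within +/-window tokens of each
    occurrence of `target` - a toy collocation profile."""
    profile = Counter()
    for sent in corpus:
        for i, tok in enumerate(sent):
            if tok == target:
                lo, hi = max(0, i - window), i + window + 1
                profile.update(t for j, t in enumerate(sent[lo:hi], lo)
                               if j != i)
    return profile

def cosine(p, q):
    """Cosine similarity between two sparse count profiles."""
    dot = sum(p[w] * q[w] for w in p if w in q)
    norm = sqrt(sum(v * v for v in p.values())) * \
           sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

corpus = [
    "the hot coffee was strong".split(),
    "the hot tea was weak".split(),
    "strong coffee in the morning".split(),
]
sim = cosine(collocation_profile(corpus, "coffee"),
             collocation_profile(corpus, "tea"))
```

The self-organizing visualization step would then arrange many such profiles by their pairwise similarities.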
This manual introduces a conversation analytically informed coding scheme for episodes involving the direct social sanctioning of problem behavior in informal social interaction which was developed in the project Norms, Rules, and Morality across Languages (NoRM-aL) at the Leibniz-Institute for the German Language. It outlines the background for its development, delimits the phenomena to which the coding scheme can be applied and provides instructions for its use.
The scheme asks for basic information about the recording and the participants involved in the episode, before taking stock of different features of the sanctioning episode as a whole. This is followed by sets of specific coding questions about the sanctioning move itself (such as its timing and composition) and the reaction it engenders. The coding enables researchers to get a bird’s eye view on recurrent features of such episodes in larger quantities of data and allows for comparisons across different languages and informal settings.
Song lyrics can be considered as a text genre that has features of both written and spoken discourse, and potentially provides extensive linguistic and cultural information to scientists from various disciplines. However, pop songs play a rather subordinate role in empirical language research so far - most likely due to the absence of scientifically valid and sustainable resources. The present paper introduces a multiply annotated corpus of German lyrics as a publicly available basis for multidisciplinary research. The resource contains three types of data for the investigation and evaluation of quite distinct phenomena: TEI-compliant song lyrics as primary data, linguistically and literary motivated annotations, and extralinguistic metadata. It promotes empirically/statistically grounded analyses of genre-specific features, systemic-structural correlations and tendencies in the texts of contemporary pop music. The corpus has been stratified into thematic and author-specific archives; the paper presents some basic descriptive statistics, as well as the public online frontend with its built-in evaluation forms and live visualisations.
This report presents a corpus of articulations recorded with Schlieren photography, a recording technique that visualizes airflow dynamics, for two purposes: first, as a means to investigate aerodynamic processes during speech production without any obstruction of the lips or the nose; second, to provide material for lecturers of phonetics to illustrate these aerodynamic processes. Speech production was recorded at a 10 kHz frame rate for statistical video analyses. Downsampled videos (500 Hz) were uploaded to a YouTube channel for illustrative purposes. Preliminary analyses demonstrate the potential of applying Schlieren photography in research.
In this paper, we will present a first attempt to classify commonly confused words in German by consulting their communicative functions in corpora. Although the use of so-called paronyms causes frequent uncertainties due to similarities in spelling, sound and semantics, up until now the phenomenon has attracted little attention either from the perspective of corpus linguistics or from cognitive linguistics. Existing investigations rely on structuralist models, which do not account for empirical evidence. Still, they have developed an elaborate model based on formal criteria, primarily on word formation (cf. Lăzărescu 1999). Looking from a corpus perspective, such classifications are incompatible with language in use and cognitive elements of misuse.
This article sketches first lexicological insights into a classification model derived from semantic analyses of written communication. Firstly, a brief description of the project is provided. Secondly, corpus-assisted paronym detection is brought into focus. Thirdly, in the main section, the paper describes the datasets for paronym classification and the classification procedures. As this is work in progress, the insights will continually be extended once spoken and CMC data are added to the investigations.
This paper presents a short insight into a new project at the "Institute for the German Language” (IDS) (Mannheim). It gives an insight into some basic ideas for a corpus-based dictionary of spoken German, which will be developed and compiled by the new project "The Lexicon of spoken German” (Lexik des gesprochenen Deutsch, LeGeDe). The work is based on the "Research and Teaching Corpus of Spoken German” (Forschungs- und Lehrkorpus Gesprochenes Deutsch, FOLK), which is implemented in the "Database for Spoken German” (Datenbank für Gesprochenes Deutsch, DGD). Both resources, the database and the corpus, have been developed at the IDS.
This paper presents the prototype of a lexicographic resource for spoken German in interaction, which was conceived within the framework of the LeGeDe-project (LeGeDe=Lexik des gesprochenen Deutsch). First of all, it summarizes the theoretical and methodological approaches that were used for the initial planning of the resource. The headword candidates were selected by analyzing corpus-based data. Therefore, the data of two corpora (written and spoken German) were compared with quantitative methods. The information that was gathered on the selected headword candidates can be assigned to two different sections: meanings and functions in interaction.
Additionally, two studies on the expectations of future users towards the resource were carried out. The results of these two studies were also taken into account in the development of the prototype. Focusing on the presentation of the resource’s content, the paper shows both the different lexicographical information in selected dictionary entries, and the information offered by the provided hyperlinks and external texts. As a conclusion, it summarizes the most important innovative aspects that were specifically developed for the implementation of such a resource.
Ph@ttSessionz and Deutsch heute are two large German speech databases. They were created for different purposes: Ph@ttSessionz to test Internet-based recordings and to adapt speech recognizers to the voices of adolescent speakers, Deutsch heute to document regional variation of German. The databases differ in their recording technique, the selection of recording locations and speakers, elicitation mode, and data processing.
In this paper, we outline how the recordings were performed, how the data was processed and annotated, and how the two databases were imported into a single relational database system. We present acoustical measurements on the digit items of both databases. Our results confirm that the elicitation technique affects the speech produced, that f0 is quite comparable despite different recording procedures, and that large speech technology databases with suitable metadata may well be used for the analysis of regional variation of speech.
There have been several previous attempts to annotate utterances of verbal feedback in English with communicative functions. Here, we suggest an annotation scheme for verbal and non-verbal feedback utterances in French comprising the categories base, attitude, previous and visual. The data comprise conversations, map tasks and negotiations, from which we extracted ca. 13,000 candidate feedback utterances and gestures. 12 students were recruited for the annotation campaign of ca. 9,500 instances. Each instance was annotated by between 2 and 7 raters. The evaluation of the annotation agreement resulted in an average best-pair kappa of 0.6. While the base category, with the values acknowledgement, evaluation, answer, elicit and other, achieves good agreement, this is not the case for the other main categories. The data sets, which also include automatic extractions of lexical, positional and acoustic features, are freely available and will further be used in machine learning classification experiments to analyse the form-function relationship of feedback.
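The reported agreement figure can be illustrated with a small sketch of Cohen's kappa for one rater pair; a best-pair kappa averages such pairwise values over items. The labels and ratings below are invented, not taken from the French feedback data:

```python
from collections import Counter

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters labelling the same items:
    observed agreement corrected for chance agreement."""
    assert len(r1) == len(r2) and r1
    n = len(r1)
    po = sum(a == b for a, b in zip(r1, r2)) / n     # observed agreement
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum(c1[k] * c2[k] for k in c1) / (n * n)    # expected by chance
    return (po - pe) / (1 - pe)

# Hypothetical base-category labels from two raters.
r1 = ["ack", "eval", "ack", "answer", "ack", "eval"]
r2 = ["ack", "eval", "eval", "answer", "ack", "ack"]
kappa = cohens_kappa(r1, r2)   # 4/6 observed agreement, corrected for chance
```

Values around 0.6, as reported for the base category, are conventionally read as moderate-to-substantial agreement.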
The main objective of this article is to describe the current activities at the Mannheim Institute for German Language regarding the implementation of a domain-specific ontology for German grammar. We differentiate ontology bases from ontology management systems, point out the benefits of database-driven solutions, and go step by step through all phases of the ontology lifecycle. In order to demonstrate the practical use of our approach, we outline the interface between our ontology and the grammis web information system, and compare the ontology-based retrieval mechanism with traditional full-text search.
We present a descriptive analysis on the two datasets from the shared task on Source, Subjective Expression and Target Extraction from Political Speeches (STEPS), the only existing German dataset for opinion role extraction of its size. Our analysis discusses the individual properties of the three components, subjective expressions, sources and targets and their relations towards each other. Our observations should help practitioners and researchers when building a system to extract opinion roles from German data.
The present paper reports the first results of the compilation and annotation of a blog corpus for German. The main aim of the project is the representation of the blog discourse structure and the relations between its elements (blog posts, comments) and participants (bloggers, commentators). The data included in the corpus were manually collected from the scientific blog portal SciLogs. The feature catalogue for the corpus annotation includes three types of information which are directly or indirectly provided in the blog or can be construed by means of statistical analysis or computational tools. At this point, only directly available information (e.g. title of the blog post, name of the blogger, etc.) has been annotated. We believe our blog corpus can be of interest for the general study of blog structure and related research questions, as well as for the development of NLP methods and techniques (e.g. for authorship detection).
We present an implemented XML data model and a new, simplified query language for multi-level annotated corpora. The new query language involves automatic conversion of queries into the underlying, more complicated MMAXQL query language. It supports queries for sequential and hierarchical, but also associative (e.g. coreferential) relations. The simplified query language has been designed with non-expert users in mind.
Linguistic query systems are special purpose IR applications. We present a novel state-of-the-art approach for the efficient exploitation of very large linguistic corpora, combining the advantages of relational database management systems (RDBMS) with the functional MapReduce programming model. Our implementation uses the German DEREKO reference corpus with multi-layer linguistic annotations and several types of text-specific metadata, but the proposed strategy is language-independent and adaptable to large-scale multilingual corpora.
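The MapReduce model mentioned above can be illustrated independently of DEREKO's internal format, which is not public here. The following sketch shows the map and reduce phases over hypothetical token records (surface form, POS tag, document id); the record layout and key choice are illustrative assumptions, not the system's actual design.

```python
from collections import defaultdict
from functools import reduce

# Hypothetical token records: (surface form, POS tag, document id).
CORPUS = [
    ("Haus", "NN", "doc1"), ("das", "ART", "doc1"),
    ("Haus", "NN", "doc2"), ("laufen", "VVINF", "doc2"),
]

def map_phase(record):
    """Emit a ((form, POS), 1) pair for each token record."""
    form, pos, doc = record
    return ((form, pos), 1)

def reduce_phase(counts, pair):
    """Aggregate partial counts per (form, POS) key."""
    key, n = pair
    counts[key] += n
    return counts

freq = reduce(reduce_phase, map(map_phase, CORPUS), defaultdict(int))
# freq[("Haus", "NN")] == 2
```

In a real deployment the map phase would run in parallel over corpus shards stored in the RDBMS, and the reduce phase would merge the partial counts; the single-process version above only demonstrates the data flow.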
So far, there have been few descriptions of creating structures capable of storing lexicographic data, ISO 24613:2008 being one of the latest. Another one is by Spohr (2012), who designs a multifunctional lexical resource which is able to store data of different types of dictionaries in a user-oriented way. Technically, his design is based on the principle of a hierarchical XML/OWL (eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route, describing a model based on entities and relations between them; it is implemented in MySQL, a relational database management system based on the Structured Query Language (SQL), in which data are stored in tables together with definitions of the relations between them. The model was developed in the context of the project "Scientific eLexicography for Africa", and the lexicographic database built on it will be implemented with MySQL. The principles of the ISO model and of Spohr's model are adhered to with one major difference in the implementation strategy: we do not place the lemma in the centre of attention, but the sense description — all other elements, including the lemma, depend on the sense description. This article also describes the contained lexicographic data sets and how they have been collected from different sources. As our aim is to compile several prototypical internet dictionaries (a monolingual Northern Sotho dictionary, a bilingual learners' Xhosa–English dictionary and a bilingual Zulu–English dictionary), we describe the necessary microstructural elements for each of them and the principles we adhere to when designing different ways of accessing them. We plan to make the model and the (empty) database, with all graphical user interfaces that have been developed, freely available by mid-2015.
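The sense-centred design described above can be sketched as a relational schema in which the sense description is the hub and lemmas reference it via foreign keys. The table and column names below are invented for illustration (the project itself uses MySQL; SQLite is used here only to keep the sketch self-contained), and the sample entries are hypothetical.

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()
# Illustrative schema: the sense description, not the lemma, is the
# central entity; lemmas in any language point to a sense.
cur.executescript("""
CREATE TABLE sense (
    sense_id    INTEGER PRIMARY KEY,
    description TEXT NOT NULL
);
CREATE TABLE lemma (
    lemma_id INTEGER PRIMARY KEY,
    form     TEXT NOT NULL,
    language TEXT NOT NULL,
    sense_id INTEGER NOT NULL REFERENCES sense(sense_id)
);
""")
cur.execute("INSERT INTO sense VALUES (1, 'domesticated canine')")
cur.execute("INSERT INTO lemma VALUES (1, 'inja', 'zul', 1)")
cur.execute("INSERT INTO lemma VALUES (2, 'dog', 'eng', 1)")

# All lemmas attached to one sense description, across languages:
rows = cur.execute(
    "SELECT l.form FROM lemma l JOIN sense s ON l.sense_id = s.sense_id "
    "WHERE s.description = 'domesticated canine' ORDER BY l.lemma_id"
).fetchall()
# rows == [('inja',), ('dog',)]
```

Because both the Zulu and the English lemma depend on the same sense row, a bilingual view falls out of a single join rather than requiring parallel lemma-centred entries.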
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that is easy to retrieve. A typical relation type that meets these requirements is the pairing of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer already has in their shopping cart. Our gold standard comprises structured data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
We present a testsuite for POS tagging German web data. Our testsuite provides the original raw text as well as the gold tokenisations and is annotated for parts-of-speech. The testsuite includes a new dataset for German tweets, with a current size of 3,940 tokens. To increase the size of the data, we harmonised the annotations in already existing web corpora, based on the Stuttgart-Tübingen Tag Set. The current version of the corpus has an overall size of 48,344 tokens of web data, around half of it from Twitter. We also present experiments, showing how different experimental setups (training set size, additional out-of-domain training data, self-training) influence the accuracy of the taggers. All resources and models will be made publicly available to the research community.
One of the fundamental questions about human language is whether all languages are equally complex. Here, we approach this question from an information-theoretic perspective. We present a large scale quantitative cross-linguistic analysis of written language by training a language model on more than 6500 different documents as represented in 41 multilingual text collections consisting of ~ 3.5 billion words or ~ 9.0 billion characters and covering 2069 different languages that are spoken as a native language by more than 90% of the world population. We statistically infer the entropy of each language model as an index of what we call average prediction complexity. We compare complexity rankings across corpora and show that a language that tends to be more complex than another language in one corpus also tends to be more complex in another corpus. In addition, we show that speaker population size predicts entropy. We argue that both results constitute evidence against the equi-complexity hypothesis from an information-theoretic perspective.
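The entropy-as-complexity idea above can be illustrated with a deliberately crude stand-in for a trained language model: the empirical character entropy of a text. The study itself infers entropy from a full language model over billions of characters; the unigram estimate below only demonstrates the quantity being compared across languages.

```python
import math
from collections import Counter

def unigram_entropy(text):
    """Empirical character entropy in bits per character: a crude
    stand-in for the entropy rate of a trained language model."""
    counts = Counter(text)
    total = len(text)
    return -sum((n / total) * math.log2(n / total)
                for n in counts.values())

# A uniform four-symbol text has entropy log2(4) = 2 bits/character,
# while a one-symbol text is perfectly predictable (0 bits).
h_uniform = unigram_entropy("abcd" * 100)
h_trivial = unigram_entropy("a" * 400)
```

A language whose model yields a higher entropy rate is, on this index, harder to predict on average; the cross-corpus comparison in the study asks whether such rankings are stable across text collections.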
We apply a decision tree based approach to pronoun resolution in spoken dialogue. Our system deals with pronouns with NP- and non-NP-antecedents. We present a set of features designed for pronoun resolution in spoken dialogue and determine the most promising features. We evaluate the system on twenty Switchboard dialogues and show that it compares well to Byron’s (2002) manually tuned system.
Creating and maintaining metadata for various kinds of resources requires appropriate tools to assist the user. The paper presents the metadata editor ProFormA for the creation and editing of CMDI (Component Metadata Infrastructure) metadata in web forms. This editor supports a number of CMDI profiles currently being provided for different types of resources. Since the editor is based on XForms and server-side processing, users can create and modify CMDI files in their standard browser without the need for further processing. Large parts of ProFormA are implemented as web services in order to reuse them in other contexts and programs.
In this paper we present a new approach to lexicographical design for the description of German speech act verbs. This approach is based on an action-theoretical semantic conception. The various conditions for linguistic action provide the basis for the elaboration of the central semantic features. The systematic relationship of these features is reflected in the organization of a lexical database which allows various possibilities of access to different types of lexical information.
In the following paper we shall give an outline of the semantic framework for describing speech act verbs, i.e. verbs of communication, with the practical goal of a semantic database for a (dictionary of) synonymy of German speech act verbs, which enables the user not only to find a list of synonymous verbs but also to gain an insight into the semantic relations between the words.
The semantic framework is based on
(i) a set of conditions for performing speech acts as the relevant domain of reference
(ii) the introduction of a notion of situation, or better, a type of situation
The performative as well as the descriptive use of the verbs can be reduced to their fundamental dependency on the situations in which they are used: on the one hand with regard to the possibility of the action itself, and on the other hand with regard to the possibility of their designation. For both ways of use the relevant aspects of the situation constitute the necessary conditions.
One of the most popular techniques used in HPSG-based studies to describe linguistic phenomena is the raising mechanism. Besides ordinary raising verbs or adjectives, this tool has been applied for handling verbal complexes and discontinuous constituents, among other phenomena. In this paper, a new application for raising within the HPSG paradigm will be discussed, thereby investigating data from the prepositional domain. We will analyze linguistic properties of word combinations in German consisting of a preposition, a noun, and another preposition (such as auf Grund von (‘by virtue of’)), thus arguing that raising is the most appropriate method for satisfactorily describing the crucial syntactic features which are typical for those expressions. The objective of this paper is thus to demonstrate the efficiency of the raising mechanism as used in HPSG, and therefore, to emphasize the importance of designing a satisfactory uniform theory of raising within this grammar framework.
Classical null hypothesis significance tests are not appropriate in corpus linguistics, because the randomness assumption underlying these testing procedures is not fulfilled. Nevertheless, there are numerous scenarios where it would be beneficial to have some kind of test in order to judge the relevance of a result (e.g. a difference between two corpora) by answering the question whether the attribute of interest is pronounced enough to warrant the conclusion that it is substantial and not due to chance. In this paper, I outline such a test.
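The abstract does not spell out the proposed test, so the following is not the paper's procedure but a generic permutation test, shown only to make concrete what "judging whether a difference between two corpora is due to chance" can look like computationally. The input values are hypothetical per-document frequencies.

```python
import random

def permutation_p_value(a, b, trials=10000, seed=0):
    """Two-sided permutation test for the difference in means
    between two samples (e.g. per-document frequencies of some
    feature in two corpora)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        pa, pb = pooled[:len(a)], pooled[len(a):]
        if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
            hits += 1
    return hits / trials

# Clearly separated samples yield a small p-value.
p = permutation_p_value([1.0, 1.2, 0.9, 1.1], [2.0, 2.1, 1.9, 2.2])
```

Note that even a permutation test presupposes exchangeability of the pooled observations, which is itself a randomness assumption; the paper's point is precisely that such assumptions need scrutiny in corpus data.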
The understanding of story variation, whether motivated by cultural currents or other factors, is important for applications of formal models of narrative such as story generation or story retrieval. We present the first stage of an experiment to elicit natural narrative variation data suitable for evaluation with respect to story similarity, for qualitative and quantitative analysis of story variation, and also for data processing. We also present a few preliminary results from the first stage of the experiment, using Red Riding Hood and Romeo and Juliet as base texts.
XML has been designed for creating structured documents, but the information that is encoded in these structures is, by definition, out of scope for XML. Additional sources that are normally not easily interpretable by computers, such as documentation, are needed to determine the intention of specific tags in a tag-set. The Component Metadata Infrastructure (CMDI) takes a rather pragmatic approach to foster interoperability between XML instances in the domain of metadata descriptions for language resources. This paper gives an overview of this approach.
This paper presents the current results of an ongoing research project on corpus distribution of prepositions and pronouns within Polish preposition-pronoun contractions. The goal of the project is to provide a quantitative description of Polish preposition-pronoun contractions taking into consideration morphosyntactic properties of their components. It is expected that the results will provide a basis for a revision of the traditionally assumed inflectional paradigms of Polish pronouns and, thus, for a possible remodeling of these paradigms. The results of corpus-based investigations of the distribution of prepositions within preposition-pronoun contractions can be used for grammar-theoretical and lexicographic purposes.
The present paper examines the relationship between pragmatics, semantics and grammar as subdisciplines of linguistics from three different perspectives. The first section gives a historical survey of their development during the 20th century and classifies linguistic schools according to their interest in different fields of research. The second part presents a systematic model of the field of objects to be investigated by linguistics, aiming at a more precise delimitation of its subdisciplines. Finally, in the third section, the division of labour between pragmatics, semantics and grammar is discussed in the light of the concrete example of verb valence.
This paper presents the system architecture as well as the underlying workflow of the Extensible Repository System of Digital Objects (ERDO), which has been developed for the sustainable archiving of language resources within the Tübingen CLARIN-D project. In contrast to other approaches focusing on archiving experts, the described workflow can be used by researchers without prior knowledge of long-term storage to transfer data from their local file systems into a persistent repository.
This paper describes the lexical database tool LOLA (Linguistic-Oriented Lexical database Approach), which has been developed for the construction and maintenance of lexicons for the machine translation system LMT. First, the requirements such a tool should meet are discussed; then LMT, the lexical information it requires, and some issues concerning vocabulary acquisition are presented. Afterwards the architecture and the components of the LOLA system are described, and it is shown how we tried to meet the requirements worked out earlier. Although LOLA was originally designed and implemented for the German-English LMT prototype, it aimed from the beginning at a representation of lexical data that can be reused for other LMT or MT prototypes or even other NLP applications. A special point of discussion will therefore be the adaptability of the tool and its components as well as the reusability of the lexical data stored in the database for the lexicon development for LMT or for other applications.
Connectives are conjunctions, prepositions, adverbs and other particles which share the function of encoding semantic relations between sentences, or rather, between semantic objects some of which can be meanings of sentences. The relata linked by any such relation will fall into one of four distinct categories: they will be physical objects, states of affairs, propositions, or pragmatic options (the atoms of human interaction). Physical objects constitute the conceptual domain of space, states of affairs the domain of time, propositions the epistemic domain, and pragmatic options the deontic domain. The relations encodable in any of these domains can be divided into four basic types: similarity relations, situating relations, conditional relations, and causal relations. Conceptual domains and types of relations define the universe of possible connections between semantic objects.
Connectives differ as to the interpretations they permit in terms of conceptual domains and types of relations. Very few connectives are specialized on relata of one certain category and relations of one certain type. Possible examples in German are später (‘later on’) and zwischenzeitlich (‘in the meantime’), which encode situating relations between states of affairs. Other connectives are specialized on relata of one certain category, but are underspecified with respect to the type of relation. An example is German sobald (‘as soon as’), which can only connect states of affairs, but accepts situating, conditional and causal readings. Connectives of a third group are specialized on relations of a certain type, but are underspecified with respect to the category of the relata. Examples of this kind are German weil (‘because’) and trotzdem (‘nevertheless’), which encode causal relations, but accept states of affairs, propositions and pragmatic options as their relata. Connectives of a fourth group are underspecified both for the category of relata and the type of relation. An example is German da (‘there’), which accepts relata of any category and allows for situating, conditional and causal readings. Connectives like und (‘and’) and oder (‘or’) exhibit an even higher degree of underspecification, in that they allow for all kinds of relations and relata.
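The taxonomy above lends itself to a simple data-structure reading: each connective denotes the set of (domain, relation type) combinations it permits, and the degrees of specialization correspond to how that set is constructed. The sketch below is an illustrative model of the examples named in the text, not an exhaustive lexicon; the domain labels for each relatum category follow the mapping given in the preceding paragraph (states of affairs ~ time, propositions ~ epistemic, pragmatic options ~ deontic).

```python
# The four conceptual domains and four relation types of the taxonomy.
DOMAINS = {"space", "time", "epistemic", "deontic"}
RELATION_TYPES = {"similarity", "situating", "conditional", "causal"}

# Each connective is modelled as the set of (domain, relation type)
# pairs it permits.
CONNECTIVES = {
    # specialized on both domain and relation type:
    "später": {("time", "situating")},
    # fixed domain, underspecified relation type:
    "sobald": {("time", r)
               for r in ("situating", "conditional", "causal")},
    # fixed relation type, underspecified domain:
    "weil": {(d, "causal")
             for d in ("time", "epistemic", "deontic")},
    # maximally underspecified:
    "und": {(d, r) for d in DOMAINS for r in RELATION_TYPES},
}

def permits(connective, domain, relation):
    """Check whether a connective allows a given reading."""
    return (domain, relation) in CONNECTIVES[connective]
```

On this model, the degree of underspecification of a connective is simply the cardinality of its set, from 1 for später up to 16 for und.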
Feedback utterances are among the most frequent in dialogue. Feedback is also a crucial aspect of linguistic theories that take language-based social interaction into account. This paper introduces the corpora and datasets of a project scrutinizing such feedback utterances in French. We present the genesis of the corpora involved in the project (a total of about 16 hours of transcribed and phone force-aligned speech). We introduce the resulting datasets and discuss how they are being used in ongoing work, with a focus on the form-function relationship of conversational feedback. All the corpora created and the datasets produced in the framework of this project will be made available for research purposes.