Begegnungen mit neuen Wörtern: Zu lexikografischen Praktiken im Neologismenwörterbuch des IDS
(2017)
The project “Lexik des gesprochenen Deutsch” (LeGeDe, Leibniz Competition 2016, funding line I: “Innovative Vorhaben”), funded by the Leibniz Association, began its work at the Institut für Deutsche Sprache (IDS) in September 2016. Its main goal is to create a corpus-based lexicographic online resource on the lexis of spoken German, grounded in lexicological and conversation-analytic studies of authentic spoken-language data. As a cooperation project of the departments Lexik and Pragmatik, it brings together staff from lexicology, lexicography, interactional and conversational linguistics, corpus and computational linguistics, and empirical methods, with the aim of producing an innovative form of language description both from the perspective of spoken-language research and from a lexicographic perspective.
The kickoff workshop “Lexik des gesprochenen Deutsch: Forschungsstand, Erwartungen und Anforderungen an die Entwicklung einer innovativen lexikografischen Ressource” took place on 16–17 February 2017 at the Institut für Deutsche Sprache (IDS) in Mannheim. The project “Lexik des gesprochenen Deutsch” (= LeGeDe, Leibniz Competition 2016, funding line “Innovative Vorhaben”), funded by the Leibniz Association, began its work at the IDS in September 2016. Its main goal is to create a corpus-based electronic resource on the lexis of spoken German, grounded in lexicological and conversation-analytic studies of authentic spoken-language data.
Sound units play a pivotal role in cognitive models of auditory comprehension. The general consensus is that during perception listeners break speech down into auditory words and subsequently phones. Indeed, cognitive speech recognition is typically taken to be computationally intractable without phones. Here we present a computational model, trained on 20 hours of conversational speech, that recognizes word meanings within the range of human performance (model 25%, native speakers 20–44%) without making use of phone or word-form representations. Our model also successfully generates predictions about the speed and accuracy of human auditory comprehension. At the heart of the model is a ‘wide’ yet sparse two-layer artificial neural network with some hundred thousand input units representing summaries of changes in acoustic frequency bands, and proxies for lexical meanings as output units. We believe that our model holds promise for resolving longstanding theoretical problems surrounding the notion of the phone in linguistic theory.
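The abstract above describes a ‘wide’ but sparse two-layer linear network mapping acoustic cue summaries directly onto meaning units, with no intervening phone or word-form layer. A minimal toy sketch of that architecture, using delta-rule (Widrow–Hoff) updates and invented miniature dimensions (the actual model uses some hundred thousand input units trained on real conversational speech), might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions: stand-ins for the paper's much larger input layer.
n_cues = 50        # summaries of changes in acoustic frequency bands
n_meanings = 5     # proxies for lexical meanings (output units)

def make_token(active_cues):
    """Sparse binary cue vector: a 'token' activates a few acoustic cues."""
    x = np.zeros(n_cues)
    x[list(active_cues)] = 1.0
    return x

# One characteristic cue set per meaning (toy data, not real speech).
cue_sets = [rng.choice(n_cues, size=6, replace=False) for _ in range(n_meanings)]

# Two-layer linear network trained with delta-rule updates, mapping
# acoustic cue summaries directly onto meaning units.
W = np.zeros((n_cues, n_meanings))
lr = 0.1
for _ in range(300):
    m = int(rng.integers(n_meanings))
    x = make_token(cue_sets[m])
    t = np.eye(n_meanings)[m]
    W += lr * np.outer(x, t - x @ W)   # Widrow-Hoff update

def comprehend(x):
    """Comprehension = picking the most strongly activated meaning unit."""
    return int(np.argmax(x @ W))
```

After training, `comprehend(make_token(cue_sets[m]))` returns the index of the meaning associated with cue set `m` for (nearly) all meanings in this toy setup.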
We present a method to identify and document a phenomenon on which there is very little empirical data: German phrasal compounds occurring as a single token (without punctuation between their components). Relying on linguistic criteria, our approach requires an operational notion of compounds that can be applied systematically, as well as (web) corpora large and diverse enough to contain rarely seen phenomena. The method is based on word segmentation and morphological analysis and takes advantage of a data-driven learning process. Our results show that coarse-grained identification of phrasal compounds is best performed with empirical data, whereas fine-grained detection could be improved by combining rule-based and frequency-based word lists. Along with the characteristics of web texts, the orthographic realizations appear to be linked to the degree of expressivity.
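The detection approach described above rests on segmenting a single token into known words. A toy frequency-list-based segmenter illustrates the idea; the word list and the greedy longest-match strategy here are illustrative stand-ins, not the paper's actual resources or algorithm:

```python
# Hypothetical miniature frequency list (real systems would derive
# this from large web corpora).
WORD_FREQ = {
    "nicht": 500, "schon": 400, "wieder": 300, "gut": 450,
    "drauf": 120, "los": 200, "geht": 350,
}

def segment(token, wordlist=WORD_FREQ):
    """Greedy longest-match segmentation; returns None if the token
    cannot be exhaustively covered by known words."""
    parts, i = [], 0
    low = token.lower()
    while i < len(low):
        for j in range(len(low), i, -1):
            if low[i:j] in wordlist:
                parts.append(low[i:j])
                i = j
                break
        else:           # no known word starts at position i
            return None
    return parts

def looks_like_phrasal_compound(token):
    """A single token that decomposes into two or more known words."""
    parts = segment(token)
    return parts is not None and len(parts) >= 2

print(segment("nichtschonwieder"))   # → ['nicht', 'schon', 'wieder']
```

A real system would additionally need morphological analysis (linking elements, inflection) and a learned scoring of competing segmentations rather than a pure greedy match.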
In this paper we present the results of an automatic classification of Russian texts into three levels of difficulty. Our aim is to build a study corpus of Russian in which an L2 student can select texts of a desired complexity. We build on a pilot study in which we classified Russian texts into two levels of difficulty. In the current paper, we apply the classification to an extended corpus of 577 labelled texts. The best-performing combination of features achieves an accuracy of 0.74 when allowing at most one level of difference.
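As a purely illustrative sketch of difficulty classification (the paper's actual feature set and classifier are not reproduced here), two hypothetical surface features combined with a nearest-centroid rule could look like this:

```python
# Toy difficulty classifier: surface features + nearest centroid.
# Features, centroid values, and level names are invented stand-ins.

def features(text):
    raw = text.replace("!", ".").replace("?", ".")
    sentences = [s for s in raw.split(".") if s.strip()]
    words = text.split()
    avg_sent_len = len(words) / max(len(sentences), 1)
    avg_word_len = sum(len(w.strip(".,!?")) for w in words) / max(len(words), 1)
    return (avg_sent_len, avg_word_len)

# Hypothetical per-level feature centroids (would be estimated from
# labelled training texts in a real system).
CENTROIDS = {
    "basic": (6.0, 4.0),
    "intermediate": (12.0, 5.5),
    "advanced": (20.0, 7.0),
}

def classify(text, centroids=CENTROIDS):
    x = features(text)
    def dist(a, b):
        return sum((p - q) ** 2 for p, q in zip(a, b))
    return min(centroids, key=lambda lvl: dist(x, centroids[lvl]))

print(classify("The cat sat. The dog ran."))   # → basic
```

Evaluation with "at most one level difference" would then count a prediction as acceptable if it is adjacent to the gold level.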
The paper reviews the results of work done in the context of TEI-Lex0, a joint ENeL / DARIAH / PARTHENOS initiative aimed at formulating guidelines for the encoding of retrodigitized dictionaries by streamlining and simplifying the recommendations of the “Print Dictionaries” chapter of the TEI Guidelines. TEI-Lex0 work is performed by teams concentrating on each of the main components of dictionary entries. The work presented here concerns proposals for constraining TEI-based encoding of orthographic, phonetic, and grammatical information on written and spoken forms of the lemma (headword), including auxiliary inflected forms. We also adduce examples of handling various types of orthographic and phonetic variants, as well as examples of handling the representation of inflectional paradigms, which have received less attention in the TEI Guidelines but which are nonetheless essential for properly exposing data content to the various uses that digitized lexica may have.
CoMParS is a resource under construction in the context of the long-term project German Grammar in European Comparison (GDE) at the IDS Mannheim. The principal goal of GDE is to create a novel contrastive grammar of German against the background of other European languages. Alongside German, which is the central focus, the core languages for comparison are English, French, Hungarian and Polish, representing different typological classes. Unlike traditional contrastive grammars available for German, which usually cover language pairs and are based on formal grammatical categories, the new GDE grammar is developed in the spirit of functionalist typology. This implies that, instead of formal criteria, cognitively motivated functional domains in the sense of Givón (1984) are used as tertia comparationis. The purpose of CoMParS is to document the empirical basis of the theoretical assumptions of GDE-V and to illustrate the otherwise rather abstract content of grammar books with as many naturally occurring and adequately presented multilingual examples as possible, including information on their use in specific contexts and registers. These examples come from existing parallel corpora, and our presentation focuses on the legal aspects and consequences of this choice of language data.
In this chapter, a conversation-analytic approach is used to study medical recommendations as an essential part of medical advice. The analyses are based on renal treatment planning conversations in which physicians inform patients about an upcoming dialysis therapy. The data reveal that medical recommendations are marked throughout by strikingly tentative and relativizing phrasing, in which the conflict between the physician’s duty of care and the patient’s autonomy is evident. The observed discrepancy between what should be said and what patients and physicians want to be said - and heard - gives reason not only to challenge the ethical and legal requirements concerning medical recommendations and their implications for medical practice, but also to rethink current models of decision-making in medical communication.
Reden über Geld
(2017)
The paper presents best practices and results from projects dedicated to the creation of corpora of computer-mediated communication and social media interactions (CMC) in four different countries. Even though many issues related to building and annotating corpora of this type remain open, there already exists a range of tested solutions which may serve as a starting point for a comprehensive discussion of how future standards for CMC corpora could (and should) be shaped.
The paper reports on the results of a scientific colloquium dedicated to the creation of standards and best practices needed to facilitate the integration of language resources for CMC stemming from different origins and the linguistic analysis of CMC phenomena in different languages and genres. The key issue to be solved is that of interoperability – with respect to the structural representation of CMC genres, linguistic annotations, metadata, and anonymization/pseudonymization schemas. The objective of the paper is to convince more projects to take part in a discussion about standards for CMC corpora and about the creation of a CMC corpus infrastructure across languages and genres. In view of the broad range of corpus projects currently underway all over Europe, there is a great window of opportunity for creating such standards in a bottom-up approach.
This survey describes the practice of dictionary criticism in German philological periodicals. It focuses on reviews of general dictionaries of contemporary German as well as of historical dictionaries of German. Our results show that only a few reference works are reviewed, by only a small group of reviewers, and that different volumes of dictionaries are not necessarily reviewed systematically. One can only speculate about the criteria used to select the reviewed dictionaries. All in all, this rather unsystematic review practice can be interpreted as a disregard for the work of lexicographers, a neglect of the interests of potential dictionary users, and a regrettable lack of interest among philologists in lexicographic work.
The Palatine language island on the Lower Rhine, founded in 1641, is the last remaining German language island within Germany. It is under acute assimilation pressure, which manifests itself in functional change within the autochthonous dialect system; this process is reinforced by the decline in base-dialect competence observable in many places throughout Germany. The present study examines, on the one hand, developments in the structure of the language-island dialect and, on the other, the role of linguistic variants as markers of identity. To this end, speech samples from two generations are analysed by means of variable analysis and the results are compared. The comparison shows that dialect-competent speakers of the younger generation realise certain (formerly) dialectal features more intensively in order to mark their identity as Palatine language islanders.
This contribution aims to shed light on the structural development of Luxembourgish German in the 19th century. The fact that it was embedded in a multilingual context raises many research questions. The evidence comprises predominantly bilingual German/French public notices issued by the City of Luxembourg in this period. The analysis of two conjunctions suggests that processes of replication and interlingual transfer are sources of variation. It shows that the influence of French was particularly acute during the “French period” (1795–1814). However, rather than working in isolation, the language contact phenomena operate on the basis of similar constructions existing in the borrowing language. In addition, older German forms quickly disappeared, despite showing similarity to forms in the local dialect.
Rich metadata annotation is desirable for all kinds of corpora used in linguistic research. For large corpora (web corpora in particular), metadata must be generated automatically, which makes the accuracy of the annotation especially critical. We present an approach to automatic classification by topic domain based on the lexical material in texts. To this end, we use a supervised learning procedure to transform the hard-to-interpret output of a topic model into a more interpretable categorisation into 13 topic domains. Compared with (automatically generated) classifications by genre, text type, or register, which are mostly based on distributions of grammatical features, such a thematic classification appears better suited to providing additional control variables for studies of grammatical variation. We evaluate the procedure on web texts from DECOW14 and newspaper texts from DeReKo, for each of which separate gold-standard data sets were manually annotated.
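The pipeline sketched in the last abstract (unsupervised topic proportions mapped by a supervised learner onto 13 interpretable topic domains) can be illustrated on toy data. The Dirichlet-generated "topic mixtures" below stand in for real topic-model output, and the perceptron is a minimal stand-in for whatever supervised learner the project actually uses:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assume an upstream topic model (e.g. LDA) has already produced a
# K-dimensional topic-proportion vector per document; we map these
# mixtures onto interpretable topic domains with a supervised learner.
K, n_domains = 20, 13

def sample_doc(domain):
    """Toy document: its topic mixture concentrates on one
    domain-characteristic topic (invented data, not DECOW14/DeReKo)."""
    alpha = np.full(K, 0.1)
    alpha[domain % K] += 5.0
    return rng.dirichlet(alpha)

X = np.array([sample_doc(d) for d in range(n_domains) for _ in range(30)])
y = np.array([d for d in range(n_domains) for _ in range(30)])

# Multiclass perceptron as a minimal supervised mapper from topic
# proportions to one of the 13 domains.
W = np.zeros((K, n_domains))
for _ in range(20):
    for x, label in zip(X, y):
        pred = int(np.argmax(x @ W))
        if pred != label:
            W[:, label] += x
            W[:, pred] -= x

acc = np.mean(np.argmax(X @ W, axis=1) == y)
```

In a real setting, the learner would be trained on the manually annotated gold-standard documents and evaluated on held-out texts rather than on its own training data.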