Refine
Year of publication
- 2018 (90)
Document Type
- Part of a Book (90)
Language
Has Fulltext
- yes (90)
Keywords
- Deutsch (30)
- Korpus <Linguistik> (26)
- Gesprochene Sprache (9)
- Grammatik (6)
- Annotation (5)
- Interaktionsanalyse (5)
- Linguistik (5)
- Semantische Analyse (5)
- Visualisierung (5)
- Automatische Sprachverarbeitung (4)
Publication state
- Veröffentlichungsversion (90)
Review state
- (Verlags)-Lektorat (50)
- Peer-Review (39)
- Verlagslektorat (1)
Publisher
- de Gruyter (29)
- European language resources association (ELRA) (11)
- Heidelberg University Publishing (11)
- Znanstvena založba Filozofske fakultete Univerze v Ljubljani / Ljubljana University Press, Faculty of Arts (7)
- De Gruyter (3)
- Hungarian Academy of Sciences (3)
- Leibniz-Zentrum allgemeine Sprachwissenschaft (ZAS); Humboldt-Universität zu Berlin (3)
- The Association for Computational Linguistics (3)
- Institut für Deutsche Sprache (2)
- Lang (2)
Using the polyfunctional multi-word unit <was weiß ich> ('what do I know') as an example, this study examines the interplay of pragmatic and phonetic differentiation in pragmaticalization processes. To this end, spontaneous-speech tokens from the corpus "Deutsch heute" are analyzed. The observed range of phonetic variation points to a complex relationship with the respective pragmatic functions.
A syntax-based scheme for the annotation and segmentation of German spoken language interactions
(2018)
Unlike corpora of written language where segmentation can mainly be derived from orthographic punctuation marks, the basis for segmenting spoken language corpora is not predetermined by the primary data, but rather has to be established by the corpus compilers. This impedes consistent querying and visualization of such data. Several ways of segmenting have been proposed,
some of which are based on syntax. In this study, we developed and evaluated annotation and segmentation guidelines based on the topological field model for German and show that they are applied consistently across annotators. We also investigated the influence of various interactional settings with a rather simple measure, the word count per segment and unit type, and observed that both the word count and the distribution of unit types differ across interactional settings. In conclusion, our syntax-based segmentations reflect interactional properties that are intrinsic to the social interactions the participants are involved in. This can be used for further analysis of social interaction and opens up the possibility of automatic segmentation of transcripts.
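The word-count measure described in this abstract is simple enough to sketch. The unit-type labels and sample segments below are invented for illustration and are not taken from the study:

```python
from collections import defaultdict

def word_counts_by_unit_type(segments):
    """Mean word count per segment for each unit type.

    `segments` is a list of (unit_type, tokens) pairs, e.g. as produced
    by a topological-field-based segmenter.
    """
    totals = defaultdict(lambda: [0, 0])  # unit_type -> [word_sum, segment_count]
    for unit_type, tokens in segments:
        totals[unit_type][0] += len(tokens)
        totals[unit_type][1] += 1
    return {t: s / n for t, (s, n) in totals.items()}

# Invented toy segments from a transcript
segments = [
    ("clause", ["das", "finde", "ich", "auch"]),
    ("clause", ["ich", "weiß", "nicht", "was", "du", "meinst"]),
    ("fragment", ["ja"]),
    ("fragment", ["genau", "ja"]),
]
means = word_counts_by_unit_type(segments)
```

Comparing such means across recordings from different interactional settings is all the measure requires.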
The grammatical information system grammis combines descriptive texts on German grammar with dictionaries of specific word classes and grammatical terminology. In this paper, we describe the first attempts at analyzing user behavior for an online grammar of the German language and the implementation of an analysis and data extraction tool based on Matomo, a web analytics tool. We focus on the analysis of the keywords the users search for, either within grammis or via an external search platform like Google, and the analysis of the interaction between the text components within grammis and the integrated dictionaries. The overall results show that about 50% of the searches are for grammatical terms, and that the users shift from texts to dictionaries, mainly by using the integrated links to the dictionary of terminology within the texts. Based on these findings, we aim to improve grammis by extending its integrated dictionaries.
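The keyword analysis reported above can be illustrated with a minimal sketch; the term list and queries are invented, and the real pipeline works on Matomo exports rather than plain strings:

```python
def grammatical_search_share(queries, term_lexicon):
    """Fraction of search queries that exactly match a known grammatical
    term. `queries` are raw search strings; `term_lexicon` is a set of
    grammatical terms, lower-cased.
    """
    if not queries:
        return 0.0
    hits = sum(1 for q in queries if q.strip().lower() in term_lexicon)
    return hits / len(queries)

# Invented terminology and query log
terms = {"konjunktiv", "genitiv", "partizip"}
queries = ["Konjunktiv", "wegen dem oder wegen des", "Genitiv", "Partizip"]
share = grammatical_search_share(queries, terms)
```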
In this paper we use methods for creating a large lexicon of verbal polarity shifters and apply them to German. Polarity shifters are content words that can move the polarity of a phrase towards its opposite, such as the verb “abandon” in “abandon all hope”. This is similar to how negation words like “not” can influence polarity. Both shifters and negation are required for high precision sentiment analysis. Lists of negation words are available for many languages, but the only language for which a sizable lexicon of verbal polarity shifters exists is English. This lexicon was created by bootstrapping a sample of annotated verbs with a supervised classifier that uses a set of data- and resource-driven features. We reproduce and adapt this approach to create a German lexicon of verbal polarity shifters. Thereby, we confirm that the approach works for multiple languages. We further improve classification by leveraging cross-lingual information from the English shifter lexicon. Using this improved approach, we bootstrap a large number of German verbal polarity shifters, reducing the annotation effort drastically. The resulting German lexicon of verbal polarity shifters is made publicly available.
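The bootstrapping idea can be caricatured in a few lines: starting from annotated seeds, confidently classified verbs are folded into the lexicon and reused as training signal. The scoring rule and features below are invented stand-ins for the paper's supervised classifier:

```python
def bootstrap_shifters(seed_shifters, seed_nonshifters, candidates, features,
                       hi=0.75, lo=0.25):
    """Toy self-training loop: score each unlabeled verb by the overlap
    of its features with known shifters vs. non-shifters, and absorb
    confident candidates into the respective class until nothing changes.
    """
    shifters, nonshifters = set(seed_shifters), set(seed_nonshifters)
    remaining = set(candidates)
    changed = True
    while changed and remaining:
        changed = False
        pos_feats = set().union(*(features[v] for v in shifters))
        neg_feats = set().union(*(features[v] for v in nonshifters))
        for verb in sorted(remaining):  # sorted copy: safe to discard below
            f = features[verb]
            pos, neg = len(f & pos_feats), len(f & neg_feats)
            if pos + neg == 0:
                continue
            score = pos / (pos + neg)
            if score >= hi:
                shifters.add(verb); remaining.discard(verb); changed = True
            elif score <= lo:
                nonshifters.add(verb); remaining.discard(verb); changed = True
    return shifters

# Invented feature sets (the paper uses richer data- and resource-driven features)
features = {
    "aufgeben":  {"obj:hoffnung", "polar:neg"},  # seed shifter, "give up (hope)"
    "verlieren": {"obj:hoffnung", "polar:neg"},  # candidate
    "essen":     {"obj:brot"},                   # seed non-shifter
    "kaufen":    {"obj:brot"},                   # candidate
}
lexicon = bootstrap_shifters({"aufgeben"}, {"essen"}, {"verlieren", "kaufen"}, features)
```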
As has been customary for several years now, this year's IDS annual conference was again accompanied by a methods fair, at which application-oriented projects related to lexicon research presented themselves in keeping with the conference theme. The range of topics on display was broad: innovative methodological approaches in translation studies, tools for analyzing and describing lexical patterns or for detecting neologisms, new lexicographic resources, as well as infrastructure activities and a collaborative project between school students and researchers on vocabulary analysis. In the following, the individual projects presented at the fair are briefly introduced on the basis of the abstracts submitted by the participants.
The present submission reports on a pilot project conducted at the Institute for the German Language (IDS), aiming at strengthening the connection between ISO TC37SC4 “Language Resource Management” and the CLARIN infrastructure. In terminology management, attempts have recently been made to use graph-theoretical analyses to get a better understanding of the structure of terminology resources. The project described here aims at applying some of these methods to potentially incomplete concept fields produced over years by numerous researchers serving as experts and editors of ISO standards. The main results of the project are twofold. On the one hand, they comprise concept networks dynamically generated from a relational database and browsable by the user. On the other, the project has yielded significant qualitative feedback that will be offered to ISO. We provide the institutional context of this endeavour, its theoretical background, and an overview of data preparation and tools used. Finally, we discuss the results and illustrate some of them.
German is a language with complex morphological processes. Its long and often ambiguous word forms present a bottleneck problem in natural language processing. As a step towards morphological analyses of high quality, this paper introduces a morphological treebank for German. It is derived from the linguistic database CELEX, a standard resource for German morphology, in its refurbished, modernized and partially revised version. The derivation of the morphological trees is not trivial, especially for cases of conversion that are morpho-semantically opaque and merely of diachronic interest. We develop solutions and present exemplary analyses. The resulting database comprises about 40,000 morphological trees of a German base vocabulary whose format and level of detail can be chosen according to the requirements of the application. The Perl scripts for generating the treebank are publicly available on GitHub. In our discussion, we point out future directions for morphological treebanks; in particular, we aim at combination with other reliable lexical resources such as GermaNet.
Many studies on dictionary use presuppose that users do indeed consult lexicographic resources. However, little is known about what users actually do when they try to solve language problems on their own. We present an observation study where learners of German were allowed to browse the web freely while correcting erroneous German sentences. In this paper, we are focusing on the multi-methodological approach of the study, especially the interplay between quantitative and qualitative approaches. In one example study, we will show how the analysis of verbal protocols, the correction task and the screen recordings can reveal the effects of intuition, language (learning) awareness, and determination on the accuracy of the corrections. In another example study, we will show how preconceived hypotheses about the problem at hand might hinder participants from arriving at the correct solution.
This paper discusses changes in lexicographic traditions with respect to contrastive dictionary entries and dynamic, on-demand e-lexicographic descriptions. The new German online dictionary Paronyme – Dynamisch im Kontrast is concerned with easily confused words (paronyms), such as effektiv/effizient and sensibel/sensitiv. New approaches to the empirical analysis and lexicographic presentation of words such as these are required, and this dictionary is committed to overcoming the discrepancy between traditional practice and insights from language use. As a corpus-guided reference work, it strives to adequately reflect not only authentic use in situations of actual communication, but also cognitive ideas such as conceptual structure, categorization and knowledge. Looking up easily confused lexical items requires contrastive entries where users can instantly compare meaning, contexts and reference. Adaptable access to lexicographic details and variable search options offer different foci and perspectives on linguistic information, and authentic examples reflect prototypical structures. These are essential in order to meet all the different interests of users. This paper will illustrate the contrastive structure of the new e-dictionary and demonstrate which information can be compared. It also focusses on various dynamic modes of dictionary consultation, which enable users to shift perspectives on paronyms accordingly.
CorpusExplorer v2.0 is a freely available software package for corpus-hermeneutic analysis, offering more than 45 different analyses and visualizations for users' own corpus material. This practice report provides insights, points out pitfalls, and offers solutions to ease everyday visualization work. It first gives a brief account of the ideas that led to the development of CorpusExplorer, a corpus-linguistic software package that not only supports a wide range of research approaches but is also developed with a focus on university teaching. The middle section addresses one of the many pitfalls encountered during development, namely efficiency and adaptation problems: what happens when visualizations have to be adapted to new circumstances? Since the solution to this problem is part of CorpusExplorer v2.0, the report concludes by discussing how different visualizations of the same data sets affect the reception and interpretation of the data.
Except for some recent advances in spoken language lexicography (cf. Verdonik & Sepesy Maučec 2017, Hansen & Hansen 2012, Siepmann 2015), traditional lexicographic work is mainly oriented towards the written language. In this paper, we describe a method we used to identify relevant headword candidates for a lexicographic resource for spoken language that is currently being developed at the Institute for the German Language (IDS, Mannheim). We describe the challenges of headword selection for a dictionary of spoken language and, after laying out the considerations behind our headword concept, present the corpus-based procedures we used to facilitate the selection. After presenting the results for one-word lemmas, we discuss the opportunities and limitations of our approach.
Das deutsche Wort Frühstück
(2018)
Since the mid-1990s, the Institute for the German Language (IDS) in Mannheim has been investigating how the highly complex subject area of grammar can be conveyed in a way that is both scientifically sound and accessible, exploiting digital language resources and hypertextual navigation structures. The IDS's online grammar information systems address not only researchers and the interested public in Germany, but equally Germanists and learners of German throughout the world. This article describes the hopes and ambitions associated with this endeavor. It then discusses practical applications and outlines the functional and content-related development of the digital grammar resources.
Our corpus study is concerned with subject-verb agreement in contemporary German, more precisely with variation in verb number. We focus on subjects consisting of noun phrases coordinated by the conjunction und ('and') in which both nouns are singular. Number resolution, i.e., a plural verb despite the singular nouns, can be regarded as the default choice in contemporary German. However, our data show that eliding the second determiner in the subject increases the probability of a singular verb. This ellipsis effect is highly significant in German and Austrian texts; it seems to be weaker in Swiss texts. Regression analyses reveal that the ellipsis effect is stronger than both the highly significant influence of subject individuation and the significant effect of subject agentivity.
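The reported ellipsis effect is a difference in proportions, which can be summarized by an odds ratio over a 2x2 table of verb number by determiner ellipsis. The counts below are invented for illustration, not the study's data:

```python
def odds_ratio(table):
    """Odds ratio for a 2x2 contingency table:

        table = [[sg_with_ellipsis, pl_with_ellipsis],
                 [sg_no_ellipsis,   pl_no_ellipsis]]

    A value > 1 means determiner ellipsis raises the odds of a
    singular verb relative to the non-elided construction.
    """
    (a, b), (c, d) = table
    return (a * d) / (b * c)

# Invented counts: singular verbs are relatively more frequent with ellipsis
table = [[40, 60],
         [10, 90]]
effect = odds_ratio(table)
```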
We present evidence for the analysis of the vowels in English <say> and <so> as biphonemic diphthongs /ɛi/ and /əu/, based on neutralization patterns, regular alternations, and foot structure. /ɛi/ and /əu/ are hence structurally on a par with the so called “true diphthongs” /ɑi/, /ɐu/, /ɔi/, but also share prosodic organization with the monophthongs /i/ and /u/. The phonological evidence is supported by dynamic measurements based on the American English TIMIT database.
Calculations of F2 slopes proved especially well suited to distinguishing the relevant groups in accordance with their phonologically motivated prosodic organization.
Negation is an important contextual phenomenon that needs to be addressed in sentiment analysis. Next to common negation function words, such as not or none, there is also a considerably large class of negation content words, also referred to as shifters, such as the verbs diminish, reduce or reverse. However, many of these shifters are ambiguous. For instance, spoil as in spoil your chance reverses the polarity of the positive polar expression chance while in spoil your loved ones, no negation takes place. We present a supervised learning approach to disambiguating verbal shifters. Our approach takes into consideration various features, particularly generalization features.
We study German affixoids, a type of morpheme in between affixes and free stems. Several properties have been associated with them – increased productivity; a bleached semantics, which is often evaluative and/or intensifying and thus of relevance to sentiment analysis; and the existence of a free morpheme counterpart – but these have not been validated empirically. In experiments on a new data set that we make available, we put these key assumptions from the morphological literature to the test and show that, despite the fact that affixoids generate many low-frequency formations, we can classify instances as affixoid or non-affixoid with a best F1-score of 74%.
Einleitung
(2018)
Einleitung
(2018)
Einleitung
(2018)
Einleitung
(2018)
Einleitung
(2018)
Einleitung
(2018)
Einleitung
(2018)
The paper describes preliminary studies regarding the usage of Example-Based Querying for specialist corpora. We outline an infrastructure for its application within the linguistic domain. Example-Based Querying deals with retrieval situations where users would like to explore large collections of specialist texts semantically, but are unable to explicitly name the linguistic phenomenon they are looking for. As a way out, the proposed framework allows them to input prototypical everyday language examples or cases of doubt, which are automatically processed by a conditional random field (CRF) model and linked to appropriate linguistic texts in the corpus.
In this paper, we present our approach to automatically extracting German terminology in the domain of grammar, using texts from the online information system grammis as our corpus. We analyze existing repositories of German grammatical terminology and develop part-of-speech patterns for our extraction, thereby showing the importance of unigrams in this domain. We contrast the results of the automatic extraction with a manually extracted gold standard. By comparing the performance of well-known statistical measures, we show how measures based on corpus comparison outperform alternative methods.
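A standard corpus-comparison measure of the kind this abstract alludes to is Dunning's log-likelihood keyness score; whether the paper uses exactly this formula is an assumption, so treat the sketch as illustrative:

```python
import math

def log_likelihood(freq_a, total_a, freq_b, total_b):
    """Dunning's log-likelihood (G2) keyness score comparing a term's
    frequency in a domain corpus (a) against a reference corpus (b).
    Higher scores indicate stronger association with the domain corpus.
    """
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

# A term overrepresented in a small domain corpus scores high...
domain_term = log_likelihood(120, 10_000, 30, 1_000_000)
# ...while a term with exactly its expected frequency scores zero.
neutral_term = log_likelihood(10, 10_000, 1_000, 1_000_000)
```

Ranking part-of-speech-filtered candidates by such a score is the typical final step of a corpus-comparison extraction pipeline.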
We present the conceptual foundations and basic features of fLexiCoGraph, a generic software package for creating and presenting curated human-oriented lexicographical resources that are roughly modeled according to Měchura’s (2016) idea of graph-augmented trees. The system is currently under development and will be made accessible as open source software. As a sample use case we discuss an existing online database of loanwords borrowed from German into other languages which is based on a growing number of language-specific loanword dictionaries (Lehnwortportal Deutsch). The paper outlines the conceptual foundations of fLexiCoGraph’s hybrid graph/XML data model. To establish a database, XML-based resources may be imported or even input manually. An additional graph database layer is then constructed from these XML source documents in a freely configurable, but automated way; subsequently, the resulting graph can be manipulated and enlarged through a visual user interface in such a way that keeps the relationship to the source document information explicit at all times. We sketch the tooling support for different kinds of graph-level editing processes, including mechanisms for dealing with updated XML source documents and coping with duplicate or inconsistent information, and briefly discuss the browser interface for end users.
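The XML-to-graph import step described above can be sketched with the standard library; the element and attribute names below are invented and do not reflect the actual Lehnwortportal schema:

```python
import xml.etree.ElementTree as ET
from collections import defaultdict

# Hypothetical miniature loanword dictionary (German -> Polish)
XML = """<dictionary lang="pl">
  <entry id="e1" lemma="szlafrok" source="Schlafrock"/>
  <entry id="e2" lemma="ratusz" source="Rathaus"/>
</dictionary>"""

def build_graph(xml_text):
    """Construct a minimal graph layer over an XML loanword dictionary:
    nodes are German source words and borrowed lemmas, and each edge
    records the id of the XML element it came from, so the relationship
    to the source document stays explicit at all times.
    """
    graph = defaultdict(list)  # source word -> [(borrowed lemma, xml id), ...]
    root = ET.fromstring(xml_text)
    for entry in root.iter("entry"):
        graph[entry.get("source")].append((entry.get("lemma"), entry.get("id")))
    return dict(graph)

graph = build_graph(XML)
```

In the real system the graph layer additionally supports manual editing, merging of duplicates, and re-import of updated XML sources.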
Complement phrases are essential for constructing well-formed sentences in German. Identifying verb complements and categorizing complement classes is challenging even for linguists who are specialized in the field of verb valency. Against this background, we introduce an ML-based algorithm which is able to identify and classify complement phrases of any German verb in any written sentence context. We use a large training set consisting of example sentences from a valency dictionary, enriched with POS tagging, and the ML-based technique of Conditional Random Fields (CRF) to generate the classification models.
Unserdeutsch (Rabaul Creole German) emerged around 1900 at a Catholic mission station in Vunapope on the island of New Britain in the Bismarck Archipelago. Its dominant substrate language is Tok Pisin, the Melanesian Pidgin English; its superstrate language is German. This article attempts to determine the linguistic superstrate of Unserdeutsch more precisely, i.e., to answer the question of which German was spoken by the missionaries in Vunapope around 1900, at the place and time when Unserdeutsch emerged. To this end, the regionally marked structural features of Unserdeutsch that can be explained as superstrate transfer from German are examined and located geographically within the contiguous German-speaking area. This linguistic evidence is complemented by extra- and metalinguistic evidence from relevant contemporary sources. The results point to a predominantly Northwest German-Westphalian, yet overall heterogeneous, near-standard linguistic superstrate, thus refuting earlier claims on this matter in the literature. They also show that the analysis of colonial and other emigrant varieties, especially those that, like Unserdeutsch, completely lost contact with their linguistic homeland in the course of their later history, can provide valuable data for the reconstruction of historical spoken language.
This article deals with conversation corpora as a special type of spoken-language corpus. It first works out the essential properties of such corpora and introduces some of the most important German-language conversation corpora. The second part of the article then turns to the Research and Teaching Corpus of Spoken German (FOLK). FOLK aims to build a corpus of interaction data that is open to the research community and meets the current state of the art both methodologically and technically. The methodological and corpus-technological challenges that arise in building FOLK are discussed in the concluding section.
In recent years, the availability of large annotated and searchable corpora, together with a new interest in the empirical foundation and validation of linguistic theory and description, has sparked a surge of novel and interesting work using corpus-based methods to study the grammar of natural languages. However, a look at relevant current research on the grammar of the Germanic, Romance, and Slavic languages reveals a variety of different theoretical approaches and empirical foci, which can be traced back to different philological and linguistic traditions. Still, this current state of affairs should not be seen as an obstacle but as an ideal basis for a fruitful exchange of ideas between different research paradigms.
Grußwort
(2018)
Grußwort/Welcome address
(2018)
“To cleanse and at the same time enrich your mother tongue is the task of the brightest people.”
With this quote Goethe, the famous German poet, seemed to have described the work of EFNIL today. But is our task really that easy? Do we “cleanse” our language by deleting superfluous elements? Do we not lose the rich abundance of a language in so doing? Or is Goethe asking for other languages to be prevented from influencing his mother tongue? Would this even be feasible in a globalised world?
Rudi Carrell, a famous entertainer on German TV, once said:
“When I came to Germany I only spoke English. But the German language contains so many English words nowadays that I am now fluent in German!”
His opinion is probably shared by many people learning German.
My daily job is to support around 100,000 schools abroad that offer German as a foreign language. We ask ourselves daily: which German language should we be offering young people today? The classical German of literature? Or practical German which will enable young people to join the workforce of many German companies worldwide? And most of all: how do we motivate young people to learn German? Or any other foreign language?
Yes, English, French, German, Spanish – these languages are in competition in many schools. But the most important fact is: the benefit lies in learning a foreign language, no matter which. Because by learning a foreign language we start to understand foreign cultures and other people. And THAT is what matters.
The actual or anticipated impact of research projects can be documented in scientific publications and project reports. While project reports are available at varying levels of accessibility, they may be rarely used or shared outside academia. Moreover, the connection between the outcomes of a research project and their potential secondary use may not be made explicit in a project report. This paper outlines two methods for classifying and extracting the impact of publicly funded research projects. The first method identifies impact categories and assigns them to research projects (and, by extension, to their reports) with the help of subject-matter experts, without considering the content of the research reports; this process resulted in the classification schema we describe in this paper. The second method, which is still work in progress, extracts impact categories from the actual text data.
The sentiment polarity of a phrase does not only depend on the polarities of its words, but also on how these are affected by their context. Negation words (e.g. not, no, never) can change the polarity of a phrase. Similarly, verbs and other content words can also act as polarity shifters (e.g. fail, deny, alleviate). While individually more sparse, they are far more numerous. Among verbs alone, there are more than 1200 shifters. However, sentiment analysis systems barely consider polarity shifters other than negation words. A major reason for this is the scarcity of lexicons and corpora that provide information on them. We introduce a lexicon of verbal polarity shifters that covers the entirety of verbs found in WordNet. We provide a fine-grained annotation of individual word senses, as well as information for each verbal shifter on the syntactic scopes that it can affect.
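A sense-level shifter lexicon with scope information might be queried along the following lines; the entry format and field names are hypothetical and do not reflect the resource's actual format:

```python
# Hypothetical entries: (verb, sense number) -> shifter status plus the
# syntactic scopes that shifter sense can affect.
SHIFTER_LEXICON = {
    ("spoil", 1): {"shifter": True,  "scopes": ["dobj"]},           # "spoil your chance"
    ("spoil", 2): {"shifter": False, "scopes": []},                 # "spoil your loved ones"
    ("fail", 1):  {"shifter": True,  "scopes": ["dobj", "xcomp"]},  # "fail to win"
}

def shifts_scope(verb, sense, scope):
    """True if the given verb sense is a shifter affecting `scope`."""
    entry = SHIFTER_LEXICON.get((verb, sense))
    return bool(entry and entry["shifter"] and scope in entry["scopes"])
```

A sentiment system would consult such a lookup after word-sense disambiguation, flipping the polarity of polar expressions that fall within an affected scope.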
The European digital research infrastructure CLARIN (Common Language Resources and Technology Infrastructure) is building a Knowledge Sharing Infrastructure (KSI) to ensure that existing knowledge and expertise is easily available both for the CLARIN community and for the humanities research communities for which CLARIN is being developed. Within the Knowledge Sharing Infrastructure, so called Knowledge Centres comprise one or more physical institutions with particular expertise in certain areas and are committed to providing their expertise in the form of reliable knowledge-sharing services. In this paper, we present the ninth K Centre – the CLARIN Knowledge Centre for Linguistic Diversity and Language Documentation (CKLD) – and the expertise and services provided by the member institutions at the Universities of London (ELAR/SWLI), Cologne (DCH/IfDH/IfL) and Hamburg (HZSK/INEL). The centre offers information on current best practices, available resources and tools, and gives advice on technological and methodological matters for researchers working within relevant fields.
Dictionary usage research is a topic of increasing importance within the field of lexicography. At the beginning of the new millennium, the dictionary user was still relatively unknown. In the last ten years, however, more and more user studies have been published, and the methods, data and conclusions that can be drawn from them have been successively refined. New possibilities of web-based data collection, e.g., the analysis of log files, have also enriched this field of research. This contribution aims to describe the state of the art in dictionary usage research in the digital era. I begin by providing a short overview of methodological and terminological basics and then place a special focus on three different methods of collecting empirical data on dictionary use: online questionnaires, eye tracking and the analysis of log files. All of these methods are illustrated with user studies conducted at the Institute for the German Language in Mannheim.
Pediatric consultations differ from other doctor-patient conversations with regard to their conversational tasks and participation constellations. In a triadic constellation of physician, patient and parent(s), the different knowledge and responsibilities of all participants must be sufficiently aligned, and mutual understanding and conversational outcomes must be secured. This article first outlines the state of research and briefly presents the action schema of initial pediatric consultations. Subsequently, a case analysis is used to shed light on the multi-layered and complex tasks the participants face in initiating and carrying out the physical examination.
Kontexte und ihre Verteilung
(2018)
The typical linguistic contexts in which a word is used span the frame through which both speakers and researchers of a language infer and convey essential aspects of the word's meaning. Large corpora and the corresponding corpus-linguistic and computational-linguistic methods now provide systematic access to these typical usage patterns; at the Institute for the German Language, for example, collocation analysis (Kookkurrenzanalyse) has been available since 1995. Further methods operate on the results of this procedure, tracing semantic relations between words back to similarity relations in their contextual behavior. More recently, approaches from computational linguistics and information retrieval that pursue a similar goal have been discussed. This article offers a principled overview of how the various research strands interpret the notion of context, how they capture it systematically, and how they use it for comparison. In addition to semantic proximity, particular attention is paid to ambiguity.