Lexicography
Year of publication
- 2021 (16)
Document Type
- Article (7)
- Part of a Book (4)
- Conference Proceeding (4)
- Report (1)
Keywords
- Corpus (linguistics) (9)
- Lexicography (9)
- German (8)
- Online dictionary (7)
- Dictionary (7)
- Computer-assisted lexicography (4)
- Paronym (4)
- Monolingual dictionary (3)
- Contrastive linguistics (3)
- Lexik des gesprochenen Deutsch (LeGeDe) (3)
Publication state
- Published version (9)
- Secondary publication (6)
- Postprint (3)
Review state
- Peer review (9)
- (Publisher's) editing (5)
Publisher
- de Gruyter (4)
- Leibniz-Institut für Deutsche Sprache (IDS) (2)
- Lexical Computing CZ s.r.o. (2)
- Association for Computational Linguistics (1)
- Cambridge University Press (1)
- Democritus University of Thrace (1)
- Erich Schmidt (1)
- Karolinum (1)
- Oxford University Press (1)
- Universitäts- und Landesbibliothek Darmstadt (1)
In this paper we present an experimental semantic search function, based on word embeddings, for an integrated online information system on German lexical borrowings into other languages, the Lehnwortportal Deutsch (LWPD). The LWPD synthesizes an increasing number of lexicographical resources and provides basic cross-resource search options. Onomasiological access to the lexical units of the portal is a highly desirable feature for many research questions, such as estimating how likely lexical units with a given meaning are to be borrowed (Haspelmath & Tadmor, 2009; Zeller, 2015). The search technology is based on multilingual pre-trained word embeddings, and individual word senses in the portal are associated with word vectors. Users may select one or more terms from a very large inventory of search terms, and the database returns lexical items whose word-sense vectors are similar to those terms. We give a preliminary assessment of the feasibility, usability and efficacy of our approach, in particular in comparison to search options based on semantic domains or fields.
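The retrieval step described above, ranking portal senses by vector similarity to user-selected terms, can be sketched in a few lines of pure Python. The sense labels, vectors, and function names below are invented for illustration; they are not the portal's actual data or API:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy sense inventory: each portal word sense is tied to a vector,
# which in the real system would come from pre-trained embeddings.
SENSE_VECTORS = {
    "Grenze (border)": [0.9, 0.1, 0.0],
    "Grenze (limit)":  [0.2, 0.8, 0.1],
    "Arbeit (work)":   [0.0, 0.2, 0.9],
}

def semantic_search(query_vec, top_k=2):
    """Return the top_k senses whose vectors are most similar to the query."""
    ranked = sorted(SENSE_VECTORS.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [sense for sense, _ in ranked[:top_k]]
```

In practice the query vector would be the embedding of a user-selected search term (or an average over several terms), and the ranking would run over the full sense inventory of the portal.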
Alleviating pain is good and abandoning hope is bad. We instinctively understand how words like alleviate and abandon affect the polarity of a phrase, inverting or weakening it. When these words are content words, such as verbs, nouns, and adjectives, we refer to them as polarity shifters. Shifters are a frequent occurrence in human language and an important part of successfully modeling negation in sentiment analysis; yet research on negation modeling has focused almost exclusively on a small handful of closed-class negation words, such as not, no, and without. A major reason for this is that shifters are far more lexically diverse than negation words, but no resources exist to help identify them. We seek to remedy this lack of shifter resources by introducing a large lexicon of polarity shifters that covers English verbs, nouns, and adjectives. Creating the lexicon entirely by hand would be prohibitively expensive. Instead, we develop a bootstrapping approach that combines automatic classification with human verification to ensure the high quality of our lexicon while reducing annotation costs by over 70%. Our approach leverages a number of linguistic insights; while some features are based on textual patterns, others use semantic resources or syntactic relatedness. The created lexicon is evaluated both on a polarity shifter gold standard and on a polarity classification task.
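The bootstrapping idea, in which an automatic classifier pre-filters candidates so that human annotators only verify the most promising ones, can be caricatured as follows. The scorer, the gold set, and the threshold are invented stand-ins for the paper's linguistic features and annotation process:

```python
def classifier_score(word):
    """Stand-in scorer; the actual approach combines features from
    textual patterns, semantic resources, and syntactic relatedness."""
    SHIFTER_LIKE = {"alleviate": 0.9, "abandon": 0.85,
                    "enjoy": 0.1, "table": 0.05}
    return SHIFTER_LIKE.get(word, 0.0)

def human_verify(word):
    """Stand-in for manual annotation (here: a fixed toy gold set)."""
    return word in {"alleviate", "abandon"}

def bootstrap_lexicon(vocabulary, threshold=0.5):
    """Classify all words automatically; send only high-scoring
    candidates to the human verifier, saving annotation effort."""
    lexicon, checked = [], 0
    for word in vocabulary:
        if classifier_score(word) >= threshold:  # automatic pre-filter
            checked += 1                         # only these cost annotation time
            if human_verify(word):
                lexicon.append(word)
    return lexicon, checked

lexicon, checked = bootstrap_lexicon(["alleviate", "abandon", "enjoy", "table"])
```

Here only two of four words reach the annotator, which illustrates how the combination of classification and verification can cut annotation costs while keeping the final lexicon human-checked.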
This paper reports on an ongoing international project of compiling a freely accessible online Dictionary of German Loans in Polish Dialects. The dictionary will be the first comprehensive lexicographic compendium of its kind, serving as a complement to existing resources on German lexical loans in the literary or standard language. The empirical results obtained in the project will shed new light on the distribution of German loanwords among different dialects, also in comparison to the well-documented situation in written Polish. The dictionary will have a strong focus on the dialectal distribution of Polish dialectal variants for a given German etymon, accessible through interactive cartographic representations and corresponding search options. The editorial process is realized with dedicated collaborative web tools. The new resource will be published as an integrated part of an online information system for German lexical borrowings in other languages, the Lehnwortportal Deutsch, and is therefore highly cross-linked with other loanword dictionaries on Polish as well as Slavic and further European languages.
This introduction summarizes general issues at the intersection of lexicography and neology in the context of the Globalex Workshop on Lexicography and Neology series. We present each of the six papers composing this Special Issue, featuring two Slavic languages (Czech and Slovak) and two Romance ones (Brazilian Portuguese and Spanish in its European and Latin American varieties) and their diverse lexicographic research and representation: in specialized dictionaries of neologisms and in general-language ones, in monolingual, bilingual and multilingual lexical resources, and in print and digital dictionaries.
Coronaparty, Jo-jo-Lockdown und Mask-have – Wortschatzerweiterung während des Corona-Stillstands
(2021)
In a direct comparison with the current official rules, the 1901 set of orthographic rules is generally judged to be deficient. This assessment rests on the assumption that the rules section takes primacy. The present contribution starts from this point and, on the basis of the provisions on separate and compound spelling, determines the function and mutual relationship of the rules section and the word list of the first all-German set of orthographic rules within its historical context of origin. It emerges that the 1901 rules take a different path of codification: while the rules section presents regularities and provides criteria for arriving at a spelling, orthographically difficult cases are codified via the word list.
Die LeGeDe-Ressource: korpusbasierte lexikografische Einblicke und anwendungsorientierte Ausblicke
(2021)
This contribution presents the online lexicographic resource LeGeDe, the first corpus-based prototype for the particularities of German lexis in interaction. It addresses the challenges facing this innovative project, discusses possibilities for application-oriented use in teaching German as a foreign or second language (DaF/DaZ), and, as an outlook, identifies desiderata for further work on the lexicographic codification of features specific to spoken German.
The main aim of this contribution is to present the range of lexicographic information from LeGeDe, an electronic prototype for lexical and interactional features of spoken German. The focus lies on the detailed description of the different lexicographical information classes using illustrative examples and figures from the resource. In addition to highlighting the lexicographic microstructure and providing an overview of the outer texts and the multimedia information offer, the contribution also presents detailed background data on the conception of the LeGeDe resource. Innovative aspects and possible applications are outlined and forward-looking desiderata are offered.
The e-dictionary "Paronyme – Dynamisch im Kontrast" is the first to describe easily confusable expressions, so-called paronyms (e.g. autoritär / autoritativ, speziell / spezial), in contrastive and dynamic entries. On two levels of description, it interlinks lexical information with encyclopedic, conceptually oriented details. Corpus-analytic studies show how strongly the usage of some paronyms deviates from the descriptions in traditional textbooks and reference works. At the same time, corpus data also point to linguistic variation and change, which are recorded in dedicated sections. Besides presenting the dictionary, the contribution focuses on the question of how the information is systematically extracted from the data, analyzed, and editorially evaluated so that each headword can be described as precisely as possible in terms of meaning, collocations, constructions, reference, and domain.
While there is a large amount of research in the field of Lexical Semantic Change Detection, only a few approaches go beyond a standard benchmark evaluation of existing models. In this paper, we propose a shift of focus from change detection to change discovery, i.e., discovering novel word senses over time from the full corpus vocabulary. By heavily fine-tuning a type-based and a token-based approach on recently published German data, we demonstrate that both models can successfully be applied to discover new words undergoing meaning change. Furthermore, we provide an almost fully automated framework for both evaluation and discovery.
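A type-based discovery step of the kind described, flagging words whose vectors drift between two time slices of a corpus, might look like the minimal sketch below. The vectors are invented and assumed to be already aligned across slices; real type-based systems train embeddings per slice and align the spaces first:

```python
import math

def cosine_distance(u, v):
    """Cosine distance (1 - cosine similarity) between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return 1.0 - dot / norm

# Made-up, already-aligned vectors for two time slices of a corpus.
T1 = {"Maus": [0.9, 0.1], "Haus": [0.5, 0.5]}
T2 = {"Maus": [0.2, 0.9], "Haus": [0.5, 0.5]}  # "Maus" drifts (e.g. computing sense)

def discover_changes(t1, t2, threshold=0.3):
    """Flag words whose vectors drift more than `threshold` between slices."""
    return [w for w in t1.keys() & t2.keys()
            if cosine_distance(t1[w], t2[w]) > threshold]
```

Discovery in the sense of the paper then means running such a ranking over the full shared vocabulary rather than over a pre-selected benchmark word list, and inspecting the top-ranked candidates for novel senses.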