Lexicography
The official set of rules for German orthography (including the official word list) has been in force since 1996. It regulates spelling for public authorities and schools in Germany and in the six other member countries of the Council for German Orthography. For dictionary publishers and dictionary projects, the task is twofold: to apply this highly abstract rule set to every entry in the A-Z sections of their dictionaries, and, where necessary, to "translate" the rules themselves and thus make them accessible to a broad public.
The requirements for dictionaries of the contemporary language include adequately accounting for the lemma-related correct spellings when preparing lexical information in headword articles. The corresponding workflows in the editorial team of the Digital Dictionary of the German Language (Digitales Wörterbuch der deutschen Sprache, DWDS) range from establishing citation forms in all orthographic variants that may be permissible, to creating references to the relevant norm, to documenting selected corpus attestations that show frequent deviant and incorrect spellings. Gaps and room for interpretation in the official rules, as well as the discrepancies between orthographic norm and actual writing usage that come to light when searching for attestations in the DWDS text sources, regularly prove to be particular challenges for lexicographic practice.
This contribution explores the relationship between the English CEFR (Common European Framework of Reference for Languages) vocabulary levels and user interest in English Wiktionary entries. User interest was operationalized through the number of views of these entries in Wikimedia server logs covering a period of four years (2019–2022). Our findings reveal a significant relationship between CEFR levels and user interest: entries classified at lower CEFR levels tend to attract more views, which suggests a greater user interest in more basic vocabulary. A multiple regression model controlling for other known or potential factors affecting interest (corpus frequency, polysemy, word prevalence, and age of acquisition) confirmed that lower CEFR levels attract significantly more views even after taking the other predictors into account. These findings highlight the importance of CEFR levels in predicting which words users are likely to look up, with implications for lexicography and the development of language learning materials.
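The kind of "controlling for other predictors" described above can be illustrated with a minimal ordinary-least-squares fit. All numbers below are invented toy data (CEFR level coded 1=A1 to 6=C2, plus a log-frequency covariate), not the paper's data, and the solver is a bare-bones normal-equations implementation rather than the statistical software the authors would have used.

```python
# Toy OLS sketch: does CEFR level predict log view counts once log corpus
# frequency is controlled for? All data values are invented for illustration.

def ols(X, y):
    """Solve the normal equations (X^T X) beta = X^T y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    A = [[sum(X[r][i] * X[r][j] for r in range(n)) for j in range(k)] for i in range(k)]
    b = [sum(X[r][i] * y[r] for r in range(n)) for i in range(k)]
    for col in range(k):                      # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k                          # back substitution
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    return beta

# Rows: [intercept, CEFR level (1=A1 .. 6=C2), log corpus frequency]
X = [[1, 1, 5.0], [1, 1, 2.0], [1, 2, 4.0], [1, 3, 3.0],
     [1, 4, 5.0], [1, 5, 2.0], [1, 6, 4.0], [1, 6, 3.0]]
# Invented outcome (log views), built as 10 - 0.6*CEFR + 0.1*freq
y = [9.9, 9.6, 9.2, 8.5, 8.1, 7.2, 6.8, 6.7]

intercept, b_cefr, b_freq = ols(X, y)
print(round(b_cefr, 3))  # → -0.6 (lower CEFR levels attract more views)
```

The negative CEFR coefficient survives even though frequency is in the model, which is the shape of the result the abstract reports.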
In this article, we provide an insight into the development and application of a corpus-lexicographic tool for finding neologisms that are not yet listed in German dictionaries. As a starting point, we used the words listed in a glossary of German neologisms surrounding the COVID-19 pandemic. These words are lemma candidates for a new dictionary on COVID-19 discourse in German. They also provided the database used to develop and test the NeoRate tool. We report on the lexicographic work in our dictionary project, the design and functionalities of NeoRate, and describe the first test results with the tool, in particular with regard to previously unregistered words. Finally, we discuss further development of the tool and its possible applications.
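The core idea behind a tool for finding dictionary gaps can be sketched as a set difference between corpus vocabulary and registered lemmas. The snippet below is a deliberately simplified illustration with an invented English mini-corpus and word list; the actual NeoRate tool works on German data and is considerably more sophisticated.

```python
# Minimal sketch of neologism-candidate detection: words that are frequent
# in a recent corpus but absent from a reference lemma list. Corpus text
# and lemma list are invented; thresholds are arbitrary toy choices.
from collections import Counter
import re

known_lemmas = {"the", "pandemic", "rules", "were", "during",
                "discussed", "new", "and", "in", "forums"}

corpus = """
The lockdown rules were discussed during the pandemic.
New lockdown measures and superspreader events in forums.
Superspreader, lockdown, lockdown.
"""

tokens = re.findall(r"[a-zäöüß]+", corpus.lower())
freq = Counter(tokens)

# Candidates: unregistered words above a minimal frequency threshold
candidates = {w: c for w, c in freq.items()
              if w not in known_lemmas and c >= 2}
print(sorted(candidates, key=candidates.get, reverse=True))
# → ['lockdown', 'superspreader']
```

A real pipeline would of course add lemmatization, named-entity filtering, and frequency comparison against an older reference corpus.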
Any bilingual dictionary is contrastive by nature, as it documents linguistic information between language pairs. However, the design and compilation of most bilingual dictionaries often amounts to little more than lists of lexical or semantic equivalents. In internet forums, one can observe a huge interest in acquiring relevant knowledge about specific lexical items or pairs that invite more comprehensive comparison, as they may pose lexical-semantic challenges. In particular, these often concern easily confused pairs (e.g. false friends or paronyms) and new terms increasingly travelling between languages in news and social media (Šetka-Čilić/Ilić Plauc 2021). With regard to English and German, the fundamental comparative principles upon which contrastive guides should be built are either absent, or specialised contrastive dictionaries simply do not exist, e.g. comprehensive descriptive resources for false friends, paronyms, protologisms or neologisms (see Gouws/Prinsloo/de Schryver 2004). As a result, users turn to electronic resources such as Google Translate, blogs and language forums for help. English words such as muscular, for example, have two German translation options.
These are the two confusables muskulär and muskulös, each of which exhibits a different semantic profile. German sensitiv/sensibel and their English formal counterparts sensitive/sensible are false friends. However, these terms are highly polysemous in both languages and have semantic features in common. Bilingual dictionaries hardly capture their full meaning spectrum in a way that would allow a full comparison. Translating protologisms such as German Doppelwumms, as well as more established new words, is one of the most challenging problems. Currently, German neologisms such as Klimakleber are translated by online tools as climate glue (instead of climate activist gluing him-/herself onto objects), simply causing mistakes and contextual distortion. Most challenges users face today are well known (e.g. Rets 2016). New terms are often unregistered in dictionaries, and it is often impossible to make appropriate choices between two or more (commonly misused) words across two languages (e.g. Benzehra 2007). These are all relevant problems for translators and language learners alike (e.g. González Ribao 2019).
This paper calls for insights from contrastive lexicology to be incorporated into modern bilingual lexicography. To turn dictionaries into valuable resources and to create productive strategies in a learning environment, the practice of writing dictionaries requires a critical reassessment. Furthermore, the full potential of electronic contrastive resources needs to be recognised and put into practice. After all, monolingual German lexicography has started to reflect on how users' needs can be accounted for in specific comparative linguistic situations. Some of these ideas can be comfortably extended to bilingual reference guides. On the one hand, this paper will deliver a critical account of some English-German/German-English dictionaries and touch on the shortcomings of contemporary bilingual lexicography. On the other hand, with the help of fictitious resources I will demonstrate contrastive structures as focal points of consultations which answer some of the more frequent language questions more reliably. Among other things, I will explain how we need to build user-friendly dictionaries that allow false friends or easily confusable words to be translated efficiently from the source language into the target language. With regard to neologisms, I will show how more elaborate discursive descriptions and definitions can support language learners in acquiring the necessary extra-linguistic knowledge. Overall, this could improve the role of specialised dictionaries in the teaching or translating process (cf. Miliç/Sadri/Glušac 2019).
This volume contains the papers of a colloquium at the Institut für Deutsche Sprache, Mannheim, which honoured the complex and modern work and the systematic working methods of Johann Christoph Adelung. The contributors present Adelung's cultural-historical thinking, his lexicographic work, and his grammatical, orthographic, and stylistic writings from specific angles: Adelung's understanding of cultural history, inspired by Herder, forms, as it were, the guiding principle of his work. The reception of Adelung is described through examples, as is the significance of his work for present-day research on language history. That Adelung's work must be situated within fields of tension is made clear by those papers that present him as a traditionalist and as a representative of incipient modernity, as a language scholar with both prescriptive and descriptive concerns, and as a conservative thinker and an Enlightenment figure at the same time. Overall, the volume provides an overview of the complexity of Adelung's œuvre and of the current state of research.
Cases of linguistic doubt occur at all linguistic levels. They are usually classified by system level, by cause of origin, or by lexematic structure. Linguistic doubt can also be differentiated according to intra- and interlingual aspects. Where two or more lexical variants are available, uncertainty can arise about their appropriate use. Not only native speakers face such difficulties; cases of doubt are also a problem in foreign-language production.
This volume confines itself to lexical-semantic, inflectional, and word-formation-related cases of doubt and introduces interested readers to the specialist literature and reference works. It touches on questions of language didactics and of error and variation linguistics, for the study of typical cases of doubt also reveals the tension between general usage and codified norm, between the present state of the language and language change, and between dynamism, linguistic richness, and learned educational tradition.
The treatment of the euro crisis in the German press is typical of how the description of complex economic phenomena has developed over the last decade: technical reports are gradually giving way to new narrative forms in which rhetorical figures gain the upper hand. Prominent among these are metaphors, which are mainly conventional in nature but are also readily extended in creative ways. They usually play a central role at the text level by contributing substantially to the coherence of a passage or of an entire article. These innovative forms of communication may well arouse the broad public's interest in economic debates, but they often lead to gross simplification that leaves the technical aspects of the euro crisis entirely aside. Moreover, the images used are, as a rule, very negatively tinged, which certainly reinforces the public's fear of a worldwide collapse of the financial markets and hardly serves citizens' trust in Europe. The mass media's preference for gloomy scenarios thus reveals a deliberate strategy of dramatization that tends more and more toward "storytelling".
Ways out of the dictionary: hyperlinks to other sources in German and African online dictionaries
(2023)
This study examines a number of German and African online dictionaries to see how they make use of the possibility of linking to external sources (e.g. other dictionaries, encyclopaedias, or even corpus data). The article investigates which hyperlinks occur at which places in the word articles and how these are presented to the dictionary users. This is done against the background of metalexicographic considerations on the planning of outer features and the mediostructure in online dictionaries, as well as different categorizations of hyperlinks in online reference works. The results show that retro-digitized dictionaries make virtually no use of hyperlinks to external sources. Genuine online dictionaries, on the other hand, do, but often in a form that needs improvement, since, for example, explanations of dictionary-external links are not always found in the user guide and their design varies even within a single dictionary.
In many countries of the world, perspectives on gender equality and racism have changed in recent decades. One result has been more attention being devoted to traces of androcentric and racist language in society. This also affects dictionaries. In lexicography there are discussions about whether or to what extent social asymmetries are inscribed in dictionaries and if this is still acceptable. The issue of the nature of description plays an important role in this discussion. If sexist usages are often found in language use, i.e. in the corpus data on which the dictionary is based, does the dictionary also have to show them? How is this, in turn, compatible with the normative power of dictionaries? Do dictionaries contribute to the perpetuation of gender stereotypes by showcasing them under the banner of descriptive principles? And what roles do lexicographers play in this process? The article deals with these questions on the basis of individual lexicographical examples and current discussions in the lexicographic and public community.
Among neologisms there are expressions with identical meaning (synonyms in the broadest sense) which, under certain conditions, give rise to linguistic uncertainty. This is due, among other things, to their semantic-conceptual similarity and to lexicalization processes that are not yet complete, but doubt also arises because of differences between general and specialized language. It is also characteristic of some neologisms that several morphological variants enter the vocabulary at the same time, so that it is not always clear when which variant is preferred. That all these expressions are subject to lexical competition and to situation-bound conditions of use, and that they can lead to doubt, becomes visible in online forums. This contribution addresses the question of how such pairs or groups can be semantically analyzed with corpus support, and how they can be adequately described in descriptive dictionaries so as to make both commonalities and differences visible to dictionary users. To this end, concrete examples and a contrastive dictionary presentation format for neological synonyms are proposed.
This contribution focuses on the origin and genesis of the rule, laid down in the current official orthographic regulations, that prescribes solid spelling of adjective-verb combinations when a non-literal meaning is present. The starting point is formed by language theorists and actors such as Johann Christoph Adelung, Wilhelm Wilmanns, and Konrad Duden, who dominated the discussion and thereby played a decisive role in shaping the first all-German orthographic regulation of 1902. A further focus is on the implementation of the orthographic regulation in orthographic dictionaries. Only there does it become apparent to what extent the compromise that was reached holds, and to what extent the parties involved felt bound by it, in particular Duden, whose dictionaries soon attained a market-leading position and through whose Duden-Rechtschreibung the rule of meaning-differentiating solid spelling for adjective-verb combinations ultimately became binding for everyone.
This study aims to establish what lexical factors make it more likely for dictionary users to consult specific articles in a dictionary using the English Wiktionary log files, which include records of user visits over the course of 6 years. Recent findings suggest that lexical frequency is a significant factor predicting look-up behavior, with the more frequent words being more likely to be consulted. Three further lexical factors are brought into focus: (1) age of acquisition; (2) lexical prevalence; and (3) degree of polysemy operationalized as the number of dictionary senses. Age of acquisition and lexical prevalence data were obtained from recent published studies and linked to the list of visited Wiktionary lemmas, whereas polysemy status was derived from Wiktionary entries themselves. Regression modeling confirms the significance of corpus frequency in explaining user interest in looking up words in the dictionary. However, the remaining three factors also make a contribution whose nature is discussed and interpreted. Knowing what makes dictionary users look up words is both theoretically interesting and practically useful to lexicographers, telling them which lexical items should be prioritized in lexicographic work.
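The data-linking step described above, joining externally published norms to a lemma list, can be sketched as a simple inner join. The values below are invented toy numbers, not the study's data, and the field names are illustrative.

```python
# Sketch of linking external lexical norms to looked-up lemmas: age-of-
# acquisition (AoA) and prevalence ratings are joined to per-lemma view
# counts; lemmas not covered by both norm sets drop out of the model.
# All numbers are invented toy values.

views = {"dog": 12000, "cat": 9500, "ontology": 800, "sesquipedalian": 150}
aoa = {"dog": 2.5, "cat": 2.7, "ontology": 14.0}          # years (toy)
prevalence = {"dog": 2.4, "cat": 2.4, "ontology": 1.1}    # z-scores (toy)

linked = []
for lemma, n_views in views.items():
    if lemma in aoa and lemma in prevalence:   # keep only fully covered lemmas
        linked.append({"lemma": lemma, "views": n_views,
                       "aoa": aoa[lemma], "prevalence": prevalence[lemma]})

print(len(linked))  # → 3 (one lemma lost to norm coverage)
```

Coverage loss of this kind is a routine cost of linking behavioural norms to log data, since published norm sets cover only a subset of any dictionary's lemma list.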
The internationally renowned conference of the European Association for Lexicography (EURALEX) has taken place every two years for the past 39 years. Last year’s conference, held July 12th–16th, 2022, marked EURALEX’s 20th edition, and more than 200 international participants gathered at Mannheim Palace to discuss current developments, learn about new projects, and present their own work — either in lexicography or in one of the many applied or neighboring disciplines such as corpus and computational linguistics.
The present paper examines the rise and fall of Modern High German loanwords in English from 1600 until 2000, principally making use of the record of borrowing documented by the Oxford English Dictionary (OED) in its Third Edition (online version, in revision 2000-). Groups of loanwords are analysed by century, with reference to the changing social and cultural landscape characterising relationships between the relevant nations over this period. This is not a simple picture: each language grows over the period in different ways, and the speakers of English look to German at different times for different types of borrowing, as the political and intellectual balance alters.
Gerd Hentschel is one of the pioneers of today's computational lexicography and of IT-supported corpus exploitation. One of his first journal publications, entitled Einsatz von EDV und Mikrocomputer in einem lexikographischen Forschungsprojekt zum deutschen Lehnwort im Polnischen (Hentschel 1983), addresses the question of how, given the technology of the time, research and documentation work on Polish Germanisms could usefully be supported by the use of computers. This work later led to the online publication of the Wörterbuch der deutschen Lehnwörter in der polnischen Schrift- und Standardsprache (WDLP). From today's perspective it is remarkable what limitations work with computers still had to contend with 40 years ago. On this occasion, we may be permitted to illustrate this point in somewhat more detail.
OWID and OWIDplus - lexicographic-lexicological online information systems of the IDS Mannheim
(2023)
Lexicographic and lexical resources for German are compiled at many different institutions, e.g. at academies of sciences or at commercial publishing houses. Such materials are also produced at the Leibniz Institute for the German Language (IDS) in Mannheim and presented to the (specialist) public under the umbrella of OWID, the "Online-Wortschatz-Informationssystem Deutsch" (owid.de).
This article describes an English Zulu learners’ dictionary that is part of a larger set of information tools, namely an online Zulu course, an e-dictionary of possessives (which was implemented earlier) accompanied by training software offering translation tasks on several levels, and an ontology of morphemic items categorizing and describing all parts of speech of Zulu. The underlying lexicographic database contains the usual type of lexicographic data, such as translation equivalents and their respective morphosyntactic data, but its entries have been extended with data related to the lessons of the online course in order to enable the learner to link both tools autonomously. The ‘outer matter’ is integrated into the website in the form of several texts on additional web pages (how-to-use, typical outputs, grammar tables, information on morphosyntactic rules, etc.). The dictionary comprises a modular system, where each module fulfils one of the necessary functions.
Dictionary research for the language pair German-Spanish at the interface between phraseology and construction grammar has so far been practically nonexistent. The aim of the present study is therefore to contribute to closing this gap, using the example of the Idiomatik Deutsch-Spanisch (IDSP) (Schemann et al. 2013). Phraseology research has long been concerned with non-compositional constructions (which are named heterogeneously, e.g. sentence patterns, phraseological templates, phraseme constructions, schemas), but their empirical grounding is still rather unsystematic and, with respect to lexicography, still in its beginnings. The study shows, on the one hand, the considerable importance such patterns have in the Idiomatik Deutsch-Spanisch (ibid.). On the other hand, a proposal is made for classifying and grouping the phrasemes and patterns recorded in the dictionary from the perspective of entrenched patterns and schemas.
The following contribution deals with phenomena situated rather at the margins of fixed word combinations, but precisely where the (pseudo-)freedom is deceptive and can become a handicap for some speakers and writers. Foreign-language learners who are aware of the limits of their freedom and then turn to dictionaries usually encounter, when searching for definitions or for the "right word", imprecisions or flat equivalences that give them the impression of an often confusing, arbitrary, or even chaotic situation, and in any case rarely help them out of the labyrinth of synonymy. Using some adjectival examples, I want to show what this labyrinth looks like and how it soon becomes a vicious circle for the dictionary user, and then to discuss some parameters of adjective-noun combinations. My initial hypothesis is that, in the age of large corpora, dictionaries should base the description of individual lexemes firmly on today's concrete usage, i.e. that both the preferences of word combinations in meaning descriptions and their usuality in the examples cited should be taken into account. Through the examination of several problem cases, possible ways out are finally pointed out.
Neologisms, i.e., new words or meanings, are finding their way into everyday language use all the time. In the process, already existing elements of a language are recombined or linguistic material from other languages is borrowed. But are borrowed neologisms accepted by the speech community as readily as neologisms formed from "native" material? We investigate this question based on neologisms in German. Building on the corresponding results of a corpus study, we test the hypothesis that "native" neologisms are more readily accepted than those borrowed from English. To do so, we use a psycholinguistic experimental paradigm that allows us to estimate the degree of uncertainty of the participants based on the mouse trajectories of their responses. Unexpectedly, our results suggest that the neologisms borrowed from English are accepted more frequently, more quickly, and more easily than the "native" ones. These effects, however, are restricted to people born after 1980, the so-called millennials. We propose potential explanations for this mismatch between corpus results and experimental data and argue, among other things, for a reinterpretation of previous corpus studies.
This paper reports on an ongoing international project of compiling a freely accessible online Dictionary of German Loans in Polish Dialects. The dictionary will be the first comprehensive lexicographic compendium of its kind, serving as a complement to existing resources on German lexical loans in the literary or standard language. The empirical results obtained in the project will shed new light on the distribution of German loanwords among different dialects, also in comparison to the well-documented situation in written Polish. The dictionary will have a strong focus on the dialectal distribution of Polish dialectal variants for a given German etymon, accessible through interactive cartographic representations and corresponding search options. The editorial process is realized with dedicated collaborative web tools. The new resource will be published as an integrated part of an online information system for German lexical borrowings in other languages, the Lehnwortportal Deutsch, and is therefore highly cross-linked with other loanword dictionaries on Polish as well as Slavic and further European languages.
The lexicographic treatment of neologisms from the perspective of Hispanophone learners of German as a foreign language
(2019)
Using several verbs of media communication such as mailen or twittern, the lexicographic information available on neologisms is examined for its adequacy for foreign-language production. The study takes the perspective of a Spanish-speaking learner of German as a foreign language. Both neologism dictionaries and databases for German and popular bilingual online dictionaries for the language pair Spanish-German are drawn on for the analysis. The results of the lexicographic study are compared, by way of example, with corpus-based data from a doctoral dissertation. The findings demonstrate the need to optimize the lexicographic treatment of (verbal) neologisms in the Spanish-German context, with particular attention to foreign-language text production.
Electronic dictionaries should support dictionary users by giving them guidance in text production and text reception, alongside a user-definable offer of lexicographic data for cognitive purposes. In this article, we sketch the principles of an interactive and dynamic electronic dictionary aimed at text production and text reception guiding users in innovative ways, especially with respect to difficult, complicated or confusing issues. The lexicographer has to do a very careful analysis of the nature of the possible problems to suggest an optimal solution for a specific problem. We are of the opinion that there are numerous complex situations where users need more detailed support than currently available in e-dictionaries, enabling them to make valid and correct choices. For highly complex situations, we suggest guidance through a decision tree-like device. We assume that the solutions proposed here are not specific to one language only but can, after careful analysis, be applied to e-dictionaries in different languages across the world.
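The "decision tree-like device" proposed above can be sketched as a nested structure of questions that routes a user to concrete lexicographic guidance. The questions and advice strings below are invented placeholders, not the authors' actual design.

```python
# Hedged sketch of decision-tree guidance in an e-dictionary: internal
# nodes ask the user a question, leaves hold concrete advice. The wording
# of questions and advice is invented for illustration.

tree = {
    "question": "Are you writing (production) or reading (reception)?",
    "answers": {
        "production": {
            "question": "Is the problem spelling or word choice?",
            "answers": {
                "spelling": "Show orthographic variants, preferred form first.",
                "word choice": "Show near-synonyms with usage notes and examples.",
            },
        },
        "reception": "Show the sense inventory ordered by corpus frequency.",
    },
}

def consult(node, answers):
    """Walk the tree with the user's answers; return advice at a leaf."""
    for a in answers:
        node = node["answers"][a]
        if isinstance(node, str):      # reached a leaf: concrete guidance
            return node
    return node["question"]            # not yet at a leaf: ask the next question

print(consult(tree, ["production", "spelling"]))
# → Show orthographic variants, preferred form first.
```

In a real e-dictionary, each leaf would of course trigger a tailored data view rather than a static string, but the routing logic is the same.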
So far, there have been few descriptions of how to create structures capable of storing lexicographic data, ISO 24613:2008 being one of the latest. Another one is by Spohr (2012), who designs a multifunctional lexical resource able to store data of different types of dictionaries in a user-oriented way. Technically, his design is based on a hierarchical XML/OWL (eXtensible Markup Language/Web Ontology Language) representation model. This article follows another route, describing a model based on entities and the relations between them; MySQL, a relational database management system built on SQL (Structured Query Language), stores the data in tables together with definitions of the relations between them. The model was developed in the context of the project "Scientific eLexicography for Africa", and the lexicographic database derived from it will be implemented with MySQL. The principles of the ISO model and of Spohr's model are adhered to, with one major difference in the implementation strategy: we do not place the lemma at the centre of attention, but the sense description; all other elements, including the lemma, depend on the sense description. This article also describes the lexicographic data sets contained in the database and how they were collected from different sources. As our aim is to compile several prototypical internet dictionaries (a monolingual Northern Sotho dictionary, a bilingual learners' Xhosa–English dictionary and a bilingual Zulu–English dictionary), we describe the necessary microstructural elements for each of them and the principles we adhere to in designing different ways of accessing them. We plan to make the model and the (empty) database, with all the graphical user interfaces that have been developed, freely available by mid-2015.
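The key design decision, making the sense description rather than the lemma the hub of the model, can be sketched with three tables. The snippet uses Python's built-in sqlite3 as a stand-in for MySQL, and the table and column names are illustrative, not the project's actual schema.

```python
# Sketch of a sense-centred lexicographic schema: lemmas in any language
# attach to a central sense record, so translation equivalents are simply
# co-realisations of the same sense. sqlite3 stands in for MySQL here;
# all names and the sample entries are illustrative.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE sense (
    sense_id   INTEGER PRIMARY KEY,
    definition TEXT NOT NULL
);
CREATE TABLE lemma (
    lemma_id   INTEGER PRIMARY KEY,
    form       TEXT NOT NULL,
    language   TEXT NOT NULL
);
-- a lemma realises a sense; equivalents in other languages attach to
-- the same sense, which is what makes the resource multifunctional
CREATE TABLE realisation (
    sense_id INTEGER REFERENCES sense(sense_id),
    lemma_id INTEGER REFERENCES lemma(lemma_id)
);
""")
db.execute("INSERT INTO sense VALUES (1, 'domesticated canine')")
db.execute("INSERT INTO lemma VALUES (1, 'dog', 'en'), (2, 'inja', 'zu')")
db.execute("INSERT INTO realisation VALUES (1, 1), (1, 2)")

# All forms for one sense, across languages, via the central sense table
rows = db.execute("""
    SELECT l.form, l.language FROM realisation r
    JOIN lemma l ON l.lemma_id = r.lemma_id
    WHERE r.sense_id = 1 ORDER BY l.language
""").fetchall()
print(rows)  # → [('dog', 'en'), ('inja', 'zu')]
```

Inverting the dependency this way means a monolingual and a bilingual dictionary can be generated from the same database simply by filtering which realisations are shown.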
This contribution presents two corpora that serve as the data basis for determining the regional labels in the Digital Dictionary of the German Language (DWDS): the ZDL-Regionalkorpus and the Webmonitor corpus. These corpora were created at the Zentrum für digitale Lexikographie der deutschen Sprache (ZDL) and are available for research to all registered users of the DWDS platform. The ZDL-Regionalkorpus contains articles from the local and regional sections of German daily newspapers, enriched with areal metadata. It is complemented by regional internet sources in the Webmonitor corpus, which bring in additional areas and locations from the German-speaking world. The user interface of the linguistically annotated corpora not only allows complex linguistic queries but also offers statistical research tools for determining areal distributions.
So far, Sepedi negations have been considered more from the point of view of lexicographical treatment. Theoretical works on Sepedi have been used for this purpose, setting as an objective a neat description of these negations in a (paper) dictionary. This paper is from a different perspective: instead of theoretical works, corpus linguistic methods are used: (1) a Sepedi corpus is examined on the basis of existing descriptions of the occurrences of a relevant verb, looking at its negated forms from a purely prescriptive point of view; (2) a "corpus-driven" strategy is employed, looking only for sequences of negation particles (or morphemes) in order to list occurring constructions, without taking into account the verbs occurring in them, apart from their endings. The approach in (2) is only intended to show a possible methodology to extend existing theories on occurring negations. We would also like to try to help lexicographers to establish a frequency-based order of entries of possible negation forms in their dictionaries by showing them the number of respective occurrences. As with all corpus linguistic work, however, we must regard corpus evidence not as representative, but as tendencies of language use that can be detected and described. This is especially true for Sepedi, for which only few and small corpora exist. This paper also describes the resources and tools used to create the necessary corpus and also how it was annotated with part of speech and lemmas. Exploring the quality of available Sepedi part-of-speech taggers concerning verbs, negation morphemes and subject concords may be a positive side result.
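Strategy (2) above, scanning only for sequences of negation particles while ignoring the intervening verbs, can be sketched as a simple filter-and-count pass. The particle inventory (ga, sa, se) follows standard descriptions of Sepedi negation, but the toy "corpus" lines below are invented for illustration and should not be read as attested Sepedi.

```python
# Corpus-driven sketch: extract the sequence of negation particles from
# each clause, ignoring everything else, and count the resulting patterns.
# Particle set per standard Sepedi descriptions; example lines invented.
from collections import Counter

NEG = {"ga", "sa", "se"}

corpus = [
    "ga ke tsebe",        # invented toy lines, one clause per line
    "ga o tsebe",
    "ga ke sa tsebe",
    "se tsene",
]

def neg_sequence(tokens):
    """Return the tuple of negation particles occurring in a clause."""
    return tuple(t for t in tokens if t in NEG)

counts = Counter(neg_sequence(line.split()) for line in corpus)
for seq, n in counts.most_common():
    print(" + ".join(seq), n)
```

A frequency table of particle sequences like this is exactly what would let a lexicographer order negation forms in the dictionary by attested frequency, as the abstract suggests.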
This paper describes a first version of an integrated e-dictionary translating possessive constructions from English to Zulu. Zulu possessive constructions are difficult to learn for non-mother tongue speakers. When translating from English into Zulu, a speaker needs to be acquainted with the nominal classification of nouns indicating possession and possessor. Furthermore, (s)he needs to be informed about the morpho-syntactic rules associated with certain combinations of noun classes. Lastly, knowledge of morpho-phonetic changes is also required, because these influence the orthography of the output word forms. Our approach is a novel one in that we combine e-lexicography and natural language processing by developing a (web) interface supporting learners, as well as other users of the dictionary to produce Zulu possessive constructions. The final dictionary that we intend to develop will contain several thousand nouns which users can combine as they wish. It will also translate single words and frequently used multiword expressions, and allow users to test their own translations. On request, information about the morpho-syntactic and morpho-phonetic rules applied by the system are displayed together with the translation. Our approach follows the function theory: the dictionary supports users in text production, at the same time fulfilling a cognitive function.
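The generation task the dictionary automates, choosing a possessive concord by the noun class of the possessed noun and then applying vowel coalescence with the possessor, can be sketched in a few lines. The two concords and three coalescence rules below cover only this tiny example set; real Zulu morphology, as the abstract notes, involves many more classes and morpho-phonetic rules.

```python
# Toy sketch of Zulu possessive formation: the possessive concord depends
# on the class of the possessed noun, and its final vowel coalesces with
# the initial vowel of the possessor (a+u -> o, a+i -> e). Only a tiny
# fragment of the class system is covered here, for illustration.

CONCORD = {1: "wa", 9: "ya"}                     # noun class -> concord
COALESCE = {"a+u": "o", "a+i": "e", "a+a": "a"}  # vowel coalescence rules

def possessive(possessed, possessed_class, possessor):
    concord = CONCORD[possessed_class]
    merged = COALESCE[f"{concord[-1]}+{possessor[0]}"]
    return f"{possessed} {concord[:-1]}{merged}{possessor[1:]}"

# indlu (house, class 9) + umfana (boy) -> "the boy's house"
print(possessive("indlu", 9, "umfana"))  # → indlu yomfana
```

A learner-facing system would additionally surface the rule applied at each step alongside the output, which is the cognitive function the abstract describes.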
In this paper, the author studies the role of the dictionary in first language acquisition, highlighting its didactic value. Based on two Romanian lexicographical works of the 19th century, Lexiconul de la Buda (Buda, 1825) [the Lexicon of Buda] and Vocabularu romano-francesu (Bucarest, 1870) [the Romanian-French Vocabulary], the author analyses the normative information recorded in the articles in order to observe which level of language (i. e. phonetic, morphological, syntactic and lexical) is concerned. Such an approach makes it possible to distinguish between changes at the level of perception and of grammatical, lexical and semantic description, i. e. the settlement of the word in the first language, and those at a technical level, i. e. the making of the article and of the dictionary.
This paper presents the decisions behind the design of a maths dictionary for primary school children. Mexican children’s performance in maths has been a considerable problem for a long time and, far from improving, is getting worse. One of the probable causes is the lack of coordination between maths textbooks and teaching methods. Most maths textbooks used in primary schools include lots of activities and problem-solving techniques, but hardly any conceptual information in the form of definitions or explanations. Consequently, many children learn to do things, but have difficulty understanding mathematical concepts and applying them in different contexts. To help solve this problem, at least partially, the dictionary project was launched with the aim of helping children grasp and understand the maths concepts learned during those first six years of their formal education. The dictionary is a corpus-based terminographical product whose macrostructure, microstructure, typography, and additional information were specifically designed to help children understand mathematical concepts.
To effectively design online tools and develop sophisticated programs for the teaching of the Ancient Greek language, there is a clear need for lexical resources that provide semantic links with Modern Greek. This paper proposes a microstructure for an online Ancient Greek to Modern Greek thesaurus (AMGthes) that serves educational purposes. The terms of this bilingual thesaurus have been selected from reference Ancient Greek texts, taught and studied during lower and upper secondary education in Greece. The main objective here is to build a semantic map that helps students find relevant and semantically related terms (synonyms and antonyms) in Ancient Greek, and then provides a rich set of suitable translations and definitions in Modern Greek. Designed as an online resource, the thesaurus is being developed using web technologies, and will thus be available to every school and university student who pursues a degree in digital humanities.
The paper presents the results of empirical research conducted with students from the Faculty of Translation Studies of Ventspils University of Applied Sciences (VUAS) in Latvia. The study investigates the habits and practices of translation students concerning the use of dictionaries, as well as the types of dictionaries used, frequency of use, etc. It also offers an insight into how Latvian students evaluate the usefulness of dictionaries. The research describes the advantages and disadvantages of the dictionaries used by the respondents, and the importance of the preface and of the explanation of the terms and abbreviations used in dictionaries. The research conducted, as well as the insights, results and recommendations presented, will be relevant for the lexicographic community, as it reflects the experience of one Latvian university in improving the teaching of dictionary use and lexicographic culture in the country and complements dictionary use research with the Latvian experience.
Learning from students. On the design and usability of an e-dictionary of mathematical graph theory
(2022)
We created a prototype of an electronic dictionary for the mathematical domain of graph theory. We evaluate our prototype and compare its effectiveness in task-based tests with that of Wikipedia. Our dictionary is based on a corpus; the terms and their definitions were automatically extracted and annotated by experts (cf. Kruse/Heid 2020). The dictionary is bilingual, covering German and English; it gives equivalents, definitions and semantically related terms. For the implementation of the dictionary, we used LexO (Bellandi et al. 2017). The target group of the dictionary are students of mathematics who attend lectures in German and work with English resources. We carried out tests to understand which items the students search for when they work on graph-theoretical tasks. We ran the same test twice, with comparable student groups, allowing either Wikipedia or our dictionary as the information source. The dictionary seems to be especially helpful for students who already have a vague idea of a term, because they can use the resource to check if their idea is right.
This paper describes the results of an empirical investigation carried out within the project Lessico Multilingue dei Beni Culturali (LBC), whose aim is to create a multilingual online dictionary of the lexicon of the Italian artistic heritage. The dictionary, whose lexicographic process has already started, is intended for linguists and specialist translators as well as for professionals in the tourism sector and students of Foreign Languages and Literatures. The investigation, conducted through a questionnaire submitted to undergraduate students at the University of Milan and the University of Florence, has a double aim: to investigate the habits of possible users of the dictionary (Italian learners of German) in the use of lexicographic tools, and to identify preferences regarding macro-, medio- and microstructural features of the future LBC dictionary in order to create a user-friendly tool. After a brief introduction on the state of the art in the field of dictionary user studies, the article describes the questionnaire and the results obtained from the pilot study. A summary and a discussion of the future developments of the project conclude the work.
This paper gives an insight into a cross-media publishing process at different stages: from a printed bilingual syntagmatic dictionary for GFL to an online learner’s dictionary of German collocations to a German learner’s dictionary portal. On the basis of an SQL database specially developed for a corpus-guided dictionary of German collocations, the bilingual syntagmatic learner’s dictionary KolleX was published in 2014. The first part of the article describes this lexicographic process, focusing on the most relevant aspects of the dictionary concept, e. g. dictionary type, subject matter, corpus-guided data selection and microstructure. The second part introduces the first online version of KolleX from 2016 and the profound changes in the editing system – from a desktop version (2005) to a web-based editing system (2016) – which successively resulted in a prototype of a German learner’s dictionary portal, called E-KolleX DaF (2018–). Focusing on the aspects of dynamism and integration of different resources from a learner’s perspective, the paper shows the innovative features of this new online reference work. The contribution presents the solutions for the integration of new datatypes in the database of KolleX and the linking to different data in German monolingual dictionary platforms. The paper outlines the web design, functioning and technical improvements of E-KolleX DaF. The conclusions provide an outlook on the forthcoming challenges.
There is a growing interest in pedagogical lexicography, and more specifically in the study of dictionary users’ abilities and strategies (Prichard 2008; Gavriilidou 2010, 2011; Gavriilidou/Mavrommatidou/Markos 2020; Gavriilidou/Konstantinidou 2021; Chatjipapa et al. 2020). The purpose of this presentation is to investigate dictionary use strategies and the effect of an explicit, integrated dictionary awareness intervention program on upper elementary pupils’ dictionary use strategies according to gender and type of school. A total of 150 students from mainstream and intercultural schools, aged 10–12 years old, participated in the study. Data were collected before and after the intervention through the Strategy Inventory for Dictionary Use (SIDU) (Gavriilidou 2013). The results showed a significant effect of the intervention program on the dictionary use strategies employed by the experimental group and support the claim that increased dictionary use can be the outcome of explicit strategy instruction. In addition, the effective application of the program suggests that a direct and clear presentation of DUS is likely to be more successful than an implicit presentation. The present study contributes to the discussion concerning both the ‘teachability’ of dictionary use strategies and skills and the effective forms of intervention programs raising dictionary use awareness and culture.
Thesauri have long been recognized as valuable structured resources aiding Information Retrieval systems. A thesaurus provides a precise and controlled vocabulary which serves to coordinate data indexing and retrieval. The paper presents a bilingual Greek and English specialized thesaurus that is being developed as the backbone of a platform aimed at enhancing and enriching the cultural experiences of visitors in Eastern Macedonia and Thrace, Greece. The cultural component of the intended platform comprises textual data, images of artifacts and living entities (animals and plants in the area), as well as audio and video. The thesaurus covers the domains of Archaeology, Literature, Mythology, and Travel; therefore, it can be viewed as a set of inter-linked thesauri. Where applicable, terms and names in the database are also geo-referenced.
The EMLex Dictionary of Lexicography (= EMLexDictoL) is a plurilingual subject field dictionary (in German, English, Afrikaans, Galician, Italian, Polish and Spanish) that contains the basic subject field terminology of lexicography and dictionary research, in which the dictionary article texts are presented in a sophisticated but comprehensible form. The articles are supplemented by a complex cross-referencing system and the current subject field literature of the respective national languages. Following the lemma position, the dictionary articles contain items regarding morphology, synonymy, the position of the definiens, additional explanations, the cross-reference position, the position for literature, the equivalent terms in the other six languages of the dictionary as well as the names of the authors.
Given the relevance of interoperability, born-digital lexicographic resources as well as legacy retro-digitised dictionaries have been using structured formats to encode their data, following guidelines such as those of the Text Encoding Initiative or the newer TEI Lex-0. While this new standard takes a stricter approach than the original TEI dictionary schema, its reuse of element names for several types of annotation, as well as its highly detailed structure, makes it difficult for lexicographers to efficiently edit resources and focus on the real content. In this paper, we present the approach designed within LeXmart to facilitate the editing of TEI Lex-0 encoded resources, guaranteeing consistency throughout all editing processes.
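To make concrete what "focusing on the real content" beneath the markup means, the following sketch parses a minimal TEI Lex-0-style entry and pulls out the lemma and sense definitions. The sample entry is an assumption constructed for illustration; real TEI Lex-0 entries are considerably richer, though the `<entry>`/`<form>`/`<sense>` skeleton shown here follows the guidelines.

```python
import xml.etree.ElementTree as ET

TEI_NS = "http://www.tei-c.org/ns/1.0"

# Minimal illustrative entry (assumed sample, not from a real resource).
SAMPLE = f"""
<entry xmlns="{TEI_NS}" xml:id="horse" xml:lang="en">
  <form type="lemma"><orth>horse</orth></form>
  <sense n="1"><def>a large four-legged animal</def></sense>
</entry>
"""

def lemma_and_defs(xml_text):
    """Extract the lemma orthography and the sense definitions, i.e. the
    content a lexicographer actually edits beneath the markup."""
    root = ET.fromstring(xml_text)
    ns = {"tei": TEI_NS}
    orth = root.find("tei:form[@type='lemma']/tei:orth", ns).text
    defs = [d.text for d in root.findall("tei:sense/tei:def", ns)]
    return orth, defs

print(lemma_and_defs(SAMPLE))  # ('horse', ['a large four-legged animal'])
```

An editing tool built on such an extraction can present only the textual fields to the lexicographer while regenerating the surrounding markup consistently.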
An ongoing academic and research program, the “Vocabula Grammatica” lexicon, implemented by the Centre for the Greek Language (Thessaloniki, Greece), aims at lemmatizing all the philological, grammatical, rhetorical, and metrical terms in the written texts of the scholars (philologists and scholiasts) who curated ancient Greek literature from the beginning of the Hellenistic period (4th/3rd c. BC) until the end of the Byzantine era (15th c. AD). In particular, it aspires to fill serious gaps (a) in the study of ancient Greek scholarship and (b) in the lexicography of the ancient Greek language and literature. By providing specific examples, we will highlight the typical and methodological features of the forthcoming dictionary.
Basnage’s revision (1701) of Furetière’s Dictionnaire universel is profoundly different from Furetière’s work in several regards. One of the most noticeable features of the dictionary lies in his increased use of usage labels. Although Furetière already made use of usage labels (see Rey 1990), Basnage gives them a prominent role. As he states in the preface to his edition, a dictionary that aspires to the title of “universal” should teach how to speak politely (“poliment”), correctly (“juste”), and with the specific terminology of each art. He specifies, lemma by lemma, the diaphasic dimension by indicating the word’s register and context of use, the diastratic one by noting the differences in the use of the language within the social strata, the diachronic evolution by indicating both archaisms and neologisms, the diamesic aspect by highlighting the gaps between oral and written language, and the diatopic one by specifying either foreign borrowings or regionalisms.
After extracting the entries containing formulas such as “ce mot est...”, “ce terme est...” and similar ones, we compare the number of entries and the type of information provided by the two lexicographers. In this paper, we will focus on Basnage’s innovative contribution. Furthermore, we will try to identify the lexicographer’s sources, i. e. we will try to establish on which grammars, collections of linguistic remarks or contemporary dictionaries Basnage bases his judgements.
Wortgeschichte digital (‘digital word history’) is a new historical dictionary of New High German, the most recent period of German, reaching from approximately 1600 AD up to the present. In contrast to many historical dictionaries, Wortgeschichte digital has a narrated text – a “word history” – at the core of its entries. The motivation for choosing this format rather than traditional microstructures is briefly outlined. Special emphasis is put on the way these word histories interact with other components of the dictionary, notably with the quotation section. As Wortgeschichte digital is an online-only project, visualizations play an important role in the design of the dictionary. Two examples are presented: first, the “quotation navigator”, which is relevant for the microstructure of the entries, and, second, a timeline (“Zeitstrahl”), which is part of the macrostructure as it gives access to the lemma inventory from a diachronic point of view.
Since the beginning of the Covid-19 pandemic, about 2000 new lexical units have entered the German lexicon. These comprise a multitude of coinages and word formations (Kuschelkontakt, rumaerosolen, pandemüde) as well as lexical borrowings mainly from English (Lockdown, Hotspot, Superspreader). In a special way, these neologisms function as keywords and lexical indicators sketching the development of the multifaceted corona discourse in Germany. They can be detected systematically by corpus-linguistic investigations of reports and debates in contemporary public communication. Keyword analyses not only exhibit new vocabulary, they also reveal discursive foci, patterns of argumentation and topicalisations within the diverse narratives of the discourse. With the help of quickly established and dominant neologisms, this paper will outline typical contexts and thematic references, but it will also identify speakers' attitudes and evaluations.
In the currently ongoing process of retro-digitization of Serbian dialectal dictionaries, the biggest obstacle is the lack of machine-readable versions of the paper editions. One essential step is therefore needed before venturing into the dictionary-making process in the digital environment: OCRing the pages with the highest possible accuracy. OCR processing is not a new technology; many open-source and commercial software solutions can reliably convert scanned images of paper documents into digital documents. Available software solutions are usually efficient enough to process scanned contracts, invoices, financial statements, newspapers, and books. Where documents contain accented text and each character with diacritics must be extracted precisely, however, such solutions are not efficient enough. This paper presents the OCR software “SCyDia”, developed to overcome this issue. We demonstrate the organizational structure of “SCyDia” and the first results. “SCyDia” is a web-based software solution that relies on the open-source software “Tesseract” in the background and also contains a module for semi-automatic text correction. We have already processed over 15,000 pages, 13 dialectal dictionaries, and five dialectal monographs. At this point in our project, we have analyzed the accuracy of “SCyDia” by processing 13 dialectal dictionaries. The results were analyzed manually by an expert who examined a number of randomly selected pages from each dictionary. The preliminary results show great promise, spanning from 97.19% to 99.87% accuracy.
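Accuracy figures like the 97.19%–99.87% range reported above presuppose a way of scoring OCR output against a manually corrected transcription. A common choice is character-level accuracy; the sketch below is one minimal way to compute it (the abstract does not specify SCyDia's actual metric, so this is an assumption for illustration). Note that accented characters are encoded as extra combining code points, which is exactly where general-purpose OCR loses points.

```python
from difflib import SequenceMatcher

def char_accuracy(ocr_text, gold_text):
    """Character-level accuracy (%) of an OCR result against a manually
    corrected gold transcription, based on matching character blocks."""
    matcher = SequenceMatcher(None, ocr_text, gold_text, autojunk=False)
    matched = sum(block.size for block in matcher.get_matching_blocks())
    return 100.0 * matched / max(len(gold_text), 1)

# Dialectal Serbian with accent diacritics: dropping the combining accents
# leaves the base letters intact but still costs accuracy points.
gold = "о̏ко, вр̏х, гла́ва"
ocr = "око, врх, глава"  # accents lost by a generic OCR engine
print(round(char_accuracy(ocr, gold), 2))
```

Averaging such scores over randomly selected pages per dictionary reproduces the kind of per-dictionary evaluation described in the abstract.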
Lexical data API
(2022)
This API provides data from various dictionary resources of K Dictionaries across 50 languages. It is used by language service providers, app developers, and researchers, and returns data as JSON documents. A basic search result consists of an object containing partial lexical information on the entries that match the search criteria, but further in-depth information is also available. Basic search parameters include the source resource, source language, and text (lemma); the matching entries are returned as objects within the results array. It is also possible to look for words meeting specific syntactic criteria by specifying the part of speech, grammatical number, gender and subcategorization, or by restricting the search to monosemous or polysemous entries. When searching by parameters, each entry result contains a unique entry ID, and each sense has its own unique sense ID. Using these IDs, it is possible to obtain more data – such as syntactic and semantic information, multiword expressions, examples of usage, translations, etc. – for a single entry or sense. The software demonstration includes a brief overview of the API with practical examples of its operation.
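The two-step pattern described (a parameterised search returning a results array, then follow-up requests by entry or sense ID) can be sketched as follows. The base URL, parameter names, and JSON shape below are illustrative placeholders, not the documented endpoints or schema of the K Dictionaries API.

```python
import json
from urllib.parse import urlencode

# Placeholder endpoint; the real API's URL and parameter names may differ.
BASE_URL = "https://api.example.com/search"

def build_search_url(source, language, text, **filters):
    """Compose a basic search request from the source resource, source
    language and lemma text, plus optional filters such as pos='noun'."""
    params = {"source": source, "language": language, "text": text}
    params.update(filters)
    return BASE_URL + "?" + urlencode(params)

def entry_and_sense_ids(response_json):
    """Collect (entry ID, sense ID) pairs from a results array, for
    follow-up requests that fetch full entry or sense data."""
    doc = json.loads(response_json)
    pairs = []
    for entry in doc.get("results", []):
        for sense in entry.get("senses", []):
            pairs.append((entry["id"], sense["id"]))
    return pairs

url = build_search_url("global", "en", "bank", pos="noun")
canned = '{"results": [{"id": "EN123", "senses": [{"id": "EN123_1"}]}]}'
print(url)
print(entry_and_sense_ids(canned))
```

In a real client, the collected IDs would be sent to the entry- and sense-level endpoints to retrieve translations, examples of usage, multiword expressions, and so on.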