Refine
Year of publication
Document Type
- Part of a Book (362)
- Conference Proceeding (237)
- Article (189)
- Book (63)
- Other (31)
- Working Paper (21)
- Contribution to a Periodical (7)
- Review (7)
- Doctoral Thesis (6)
- Preprint (5)
Language
- German (514)
- English (416)
- French (5)
- Multiple languages (3)
Keywords
- Korpus <Linguistik> (938)
Publication state
- Veröffentlichungsversion (543)
- Zweitveröffentlichung (202)
- Postprint (51)
- Erstveröffentlichung (2)
- Ahead of Print (1)
- Preprint (1)
Review state
- (Verlags)-Lektorat (402)
- Peer-Review (304)
- Peer-review (12)
- Verlags-Lektorat (11)
- Qualifikationsarbeit (Dissertation, Habilitationsschrift) (8)
- Review-Status-unbekannt (5)
- Peer-Reviewed (4)
- Abschlussarbeit (Bachelor, Master, Diplom, Magister) (3)
- Zweitveröffentlichung (2)
- (Verlags-)Lektorat (1)
Publisher
- de Gruyter (137)
- Institut für Deutsche Sprache (57)
- Narr (47)
- European Language Resources Association (ELRA) (29)
- Leibniz-Institut für Deutsche Sprache (IDS) (29)
- IDS-Verlag (25)
- European Language Resources Association (23)
- Narr Francke Attempto (23)
- Association for Computational Linguistics (18)
- Leibniz-Institut für Deutsche Sprache (17)
Sprichwörter im Gebrauch
(2017)
Sprichwörter im Gebrauch
(2015)
This paper reports on the efforts of twelve national teams in building the International Comparable Corpus (ICC; https://korpus.cz/icc), which will contain highly comparable datasets of spoken, written and electronic registers. The languages currently covered are Czech, Finnish, French, German, Irish, Italian, Norwegian, Polish, Slovak, Swedish and, more recently, Chinese, as well as English, which serves as the pivot language. The goal of the project is to provide much-needed data for contrastive corpus-based linguistics. The ICC is committed to re-using existing multilingual resources as much as possible, and its design is modelled, with various adjustments, on the International Corpus of English (ICE). As such, the ICC will contain approximately the same balance of 40 percent written and 60 percent spoken language, distributed across 27 different text types and contexts. A number of issues encountered by the project teams are discussed, ranging from copyright and data sustainability to technical advances in data distribution.
This paper introduces a method for computer-based analyses of metaphor in discourse, combining quantitative and qualitative elements. This method is illustrated with data from research on German newspaper discourse concerning the ongoing system transformations of the late 1980s and early 1990s. Methodological aspects of the research procedure are discussed and it is argued that quantitative elements can enhance comparability in cross-cultural and cross-lingual research. Some basic findings of the research are presented. The peculiarities of the German Wende discourse - especially the salience of a passive perspective on the ongoing political and social changes - are outlined.
This article describes the German in Namibia corpus (DNam), which is freely accessible via the Database for Spoken German (DGD). The corpus is a new digital resource that comprehensively and systematically documents the language use of the German-speaking minority in Namibia as well as the associated language attitudes. We describe the data collection and the methods applied (free conversations, "language situations", semi-structured interviews), the data processing including transcription, normalization and tagging, the properties of the available corpus (size, available metadata, etc.), and some basic functionalities within the DGD. Initial research findings obtained with the new resource illustrate the corpus's versatility for research questions in contact linguistics, variationist linguistics and sociolinguistics.
With this image, Hermann Unterstöger, in a "Sprachlabor" column of the Süddeutsche Zeitung of 23 March 2013, describes the success story of the noun (das) Narrativ over the past 30 years. While Unterstöger subtly invokes the intertextual connection to Sebastian Brant's "Narrenschiff" or to Katherine Anne Porter's novel of the same name, Matthias Heine, the author of "Seit wann hat geil nichts mehr mit Sex zu tun? 100 deutsche Wörter und ihre erstaunlichen Karrieren", takes a blunter line in a WELT article of 13 November 2016, as that book title would lead one to expect. There we read: "Hinz und Kunz schwafeln heutzutage vom ,Narrativ'" ("every Tom, Dick and Harry prattles on about the 'narrative' these days").
This exploratory study uses corpus evidence to examine the cases in which clausal or infinitival propositional structures can be replaced by nominalizations without change of meaning. Indirectly, this is also meant to open up access to the meaning of propositional structures themselves. The study refutes the claim common in the literature that nominalization is possible for only a subset of the denotation types of propositional structures (ranging from events through facts to 'purely abstract objects'). This in turn raises the question of whether the standard version of the concept of proposition itself is tenable. The new view of propositions advocated by Friederike Moltmann, by contrast, appears to allow an analysis of nominalizations as well that avoids the contradictions encountered so far.
Speakers’ linguistic experience is for the most part experience with language as used in conversational interaction. Though highly relevant for usage-based linguistics, the study of such data is as yet often left to other frameworks such as conversation analysis and interactional linguistics (Couper-Kuhlen and Selting 2001). On the basis of a case study of salient usage patterns of the two German motion verbs kommen and gehen in spontaneous conversation, the present paper argues for a methodological integration of quantitative corpus-linguistic methods with qualitative conversation analytic approaches to further the usage-based study of conversational interaction.
Within cognitive linguistics, there is an increasing awareness that the study of linguistic phenomena needs to be grounded in usage. Ideally, research in cognitive linguistics should be based on authentic language use, its results should be replicable, and its claims falsifiable. Consequently, more and more studies now turn to corpora as a source of data. While corpus-based methodologies have increased in sophistication, the use of corpus data is also associated with a number of unresolved problems. The study of cognition through off-line linguistic data is, arguably, indirect, even if such data fulfils desirable qualities such as being natural, representative and plentiful. Several topics in this context stand out as particularly pressing issues. This discussion note addresses (1) converging evidence from corpora and experimentation, (2) whether corpora mirror psychological reality, (3) the theoretical value of corpus linguistic studies of ‘alternations’, (4) the relation of corpus linguistics and grammaticality judgments, and, lastly, (5) the nature of explanations in cognitive corpus linguistics. We do not claim to resolve these issues nor to cover all possible angles; instead, we strongly encourage reactions and further discussion.
Den Wald vor lauter Bäumen sehen - und andersherum: zum Verhältnis von 'Mustern' und 'Regeln'
(2011)
Construction Grammar counters the notion of a constructive rule with that of a complex pattern, which is extended analogically in processes of syntactic generalization. This paper presents such a pattern-based analysis of German constructions with a locative subject (Wiesen und Wälder wuchern vor Blumen und Kräutern) as an extension of a series of related constructions with causal and intensifying functions, from which the locative variant presumably emerged. The analysis argues that the surrounding 'ecology' of the target construction in speakers' linguistic knowledge plays a central role in explaining the attested variants, which in rule-based approaches must count as unmotivated 'exceptions' to general linking principles.
High word frequency and neighborhood density contribute to the accuracy and speed of word production in English adults (e.g., Vitevitch & Sommers 2003), and characterize early words in child English (e.g., Storkel 2004). The present study investigated a speech corpus of child German (ages 2;00-3;00) to further the understanding of the influence of frequency and density on production. Results for four children suggest that, contrary to English, words produced early are not from denser neighborhoods in an adult lexicon than later words. As in English, frequent words are produced before less frequent words. Implications for theory and methodology are discussed.
This paper presents EXMARaLDA, a system for the computer-assisted creation and analysis of spoken language corpora. The first part contains some general observations about technological and methodological requirements for doing corpus-based pragmatics. The second part explains the system's architecture and gives an overview of its most important software components: a transcription editor, a corpus management tool and a corpus query tool. The last part presents some corpora which have been or are currently being compiled with the help of EXMARaLDA.
Einleitung
(2021)
The second volume presents four new "building blocks" towards a corpus-linguistically grounded grammar of German. They cover the areas of determination, the syntactic functions of the noun phrase, and attribution. The analyzed language data and in-depth supplementary studies are also made available to the research community.
Grammatik - explorativ
(2015)
The large corpora built at the IDS make it possible to focus systematically on supposedly free variants of standard German, which are problematic from a grammaticographical perspective for precisely that reason. Using specific techniques and tools, corpus-linguistic work can provide a fairly theory-independent description of individual variants of grammatical phenomena and determine their frequency; in doing so, it also supplies a transparent quantitative-statistical basis for validating hypotheses advanced in the relevant literature. As this paper aims to show, the analysis of corpus data of considerable size with modern computational-linguistic and statistical methods is particularly well suited to identifying the grammatical and extralinguistic factors whose interaction determines the choice between the supposedly free alternatives.
The aim of this project is to collect language data as close to the present as possible and make it available for analysis. We want to enable as many people as possible, not only linguists, to explore and use language data. To this end, we are compiling a corpus, i.e. a curated collection of language data, from the RSS feeds of German-language online sources. We trace the development of the analysis tools from a prototype to the current form of the application, which is a complete reimplementation. We discuss the architecture, some analysis examples, and possibilities for extension. Questions of scalability and performance are the main focus. Our account can therefore be generalized to other data science projects.
Dictionaries have been part and parcel of literate societies for many centuries. They assist in communication, particularly across different languages, to aid in understanding, creating, and translating texts. Communication problems arise whenever a native speaker of one language comes into contact with a speaker of another language. At the same time, English has established itself as a lingua franca of international communication. This marked tendency gives lexicography of English a particular significance, as English dictionaries are used intensively and extensively by huge numbers of people worldwide.
We start by trying to answer a question already asked by de Schryver et al. (2006): do dictionary users (frequently) look up words that are frequent in a corpus? Contrary to their results, our results, which are based on the analysis of log files from two different online dictionaries, indicate that users do indeed look up frequent words frequently. When combining frequency information from the Mannheim German Reference Corpus with information about the number of visits in the Digital Dictionary of the German Language as well as the German language edition of Wiktionary, a clear connection between corpus and look-up frequencies can be observed. In a follow-up study, we show that another important factor for the look-up frequency of a word is its temporal social relevance. To make this effect visible, we propose a de-trending method in which we control for both frequency effects and overall look-up trends.
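The de-trending idea can be sketched in a few lines of Python. This is a minimal illustration under assumptions of my own (the function names, the share-of-traffic normalization, and the spike threshold are invented for the example, not the authors' actual method): normalize a word's daily look-up counts by the dictionary's total daily traffic to remove overall look-up trends, then flag days on which the word's traffic share far exceeds its mean share as candidates for temporal social relevance.

```python
def detrend(lookups, totals):
    """Per-day share of total dictionary traffic: dividing a word's daily
    look-up counts by overall traffic removes global look-up trends."""
    return [l / t for l, t in zip(lookups, totals)]

def spike_days(lookups, totals, factor=2.0):
    """Flag days on which a word's traffic share exceeds `factor` times
    its mean share: candidates for temporal social relevance."""
    shares = detrend(lookups, totals)
    mean = sum(shares) / len(shares)
    return [day for day, share in enumerate(shares) if share > factor * mean]

# A word with a one-day burst of attention against flat overall traffic
print(spike_days([10, 10, 80, 10], [1000, 1000, 1000, 1000]))  # → [2]
```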
We introduce DeReKoGram, a novel frequency dataset containing lemma and part-of-speech (POS) information for 1-, 2-, and 3-grams from the German Reference Corpus. The dataset contains information based on a corpus of 43.2 billion tokens and is divided into 16 parts based on 16 corpus folds. We describe how the dataset was created and structured. By evaluating the distribution over the 16 folds, we show that it is possible to work with a subset of the folds in many use cases (e.g., to save computational resources). In a case study, we investigate the growth of vocabulary (as well as the number of hapax legomena) as an increasing number of folds are included in the analysis. We cross-combine this with the various cleaning stages of the dataset. We also give some guidance in the form of Python, R, and Stata markdown scripts on how to work with the resource.
Neologisms, i.e., new words or meanings, are finding their way into everyday language use all the time. In the process, already existing elements of a language are recombined or linguistic material from other languages is borrowed. But are borrowed neologisms accepted as well by the speech community as neologisms that were formed from "native" material? We investigate this question based on neologisms in German. Building on the corresponding results of a corpus study, we test the hypothesis that "native" neologisms are more readily accepted than those borrowed from English. To do so, we use a psycholinguistic experimental paradigm that allows us to estimate the degree of uncertainty of the participants based on the mouse trajectories of their responses. Unexpectedly, our results suggest that the neologisms borrowed from English are accepted more frequently, more quickly, and more easily than the "native" ones. These effects, however, are restricted to people born after 1980, the so-called millennials. We propose potential explanations for this mismatch between corpus results and experimental data and argue, among other things, for a reinterpretation of previous corpus studies.
Based on the privative derivational suffix -los, we test statements found in the literature on word formation using a – at least in this field – novel empirical basis: a list of affective-emotional ratings of base nouns and associated -los derivations. In addition to a frequency analysis based on the German Reference Corpus, we show that, in general, emotional polarity (so-called valence, positive vs. negative emotions) is reversed by suffixation with -los. This change is stronger for more polarized base nouns. The perceived intensity of emotion (so-called arousal) is generally lower for -los derivations than for base nouns. Finally, to capture the results theoretically, we propose a prototypical -los construction in the framework of Construction Morphology.
This paper gives an overview of the methodological starting points of the project MIT. Qualität and presents some key findings on model building, corpus-linguistic analysis, and acceptability surveys in the speech community. We show how existing models of text quality can be extended on the basis of an analysis of relevant advice literature. Two empirical case studies were conducted, both focusing on the establishment of textual coherence by means of the causal connector weil. We first present a contrastive corpus analysis. We then show how various aspects of acceptability in the speech community can be tested with different task designs.
Reading corpora are text collections that are enriched with processing data. From a corpus linguist's perspective, they can be seen as an extension of classical linguistic corpora with human language processing behavior. From a psycholinguist's perspective, reading corpora make it possible to test psycholinguistic hypotheses on language and language processing 'in the wild', in contrast to the strictly controlled material of isolated sentences used in most psycholinguistic experiments. In this paper, we investigate a relevance-based account of language processing which states that linguistic structures that are embedded more deeply in the syntax are read faster because readers allocate less attention to them.
Poster by the Text+ partner Leibniz-Institut für Deutsche Sprache, Mannheim, presented at the workshop "Wohin damit? Storing and reusing my language data" on 22 June 2023 in Mannheim. The poster was produced in the context of the work of the association Nationale Forschungsdateninfrastruktur (NFDI) e.V. The NFDI is funded by the Federal Republic of Germany and the 16 federal states; the Text+ consortium is funded by the German Research Foundation (DFG), project number 460033370. The authors are grateful for this funding and support. Thanks are also due to all institutions and individuals committed to the association and its goals.
This article introduces the topic of "Multilingual language resources and interoperability". We start with a taxonomy and parameters for classifying language resources. We then provide examples of interoperability issues and resource architectures that address them. Finally, we discuss aspects of linguistic formalisms and interoperability.
Overlap in markup occurs where markup structures do not nest, for example where the structural division of the text into lists, sections, etc. differs from the syntactic division of the text into sentences and phrases. The Multiple Annotation solution to this problem (redundant encoding in multiple forms) has many advantages: it is based on XML, alternative annotations can be modelled, each level can be viewed separately, and new levels can be added at any time. Its significant disadvantage, however, is that the separate files are independent of one another. These multiply annotated files can be regarded as an interrelated unit, with the text serving as the implicit link. Two representations of the information contained in the multiple files (one in Prolog and one in XML) can be programmatically derived and used together for editing, for inference, or for unification of the multiply annotated documents.
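The "text as implicit link" idea can be illustrated with a small stand-off sketch in Python. The data layout and function here are hypothetical, not the paper's actual Prolog or XML representation: each layer annotates character spans of the same base text, so overlapping, non-nesting layers can still be combined programmatically.

```python
def merge_layers(text, *layers):
    """Combine stand-off annotation layers of (label, start, end) spans.
    The shared base text is the implicit link, so layers whose spans
    overlap without nesting can still be processed together."""
    spans = sorted((start, end, label)
                   for layer in layers
                   for (label, start, end) in layer)
    return [(label, text[start:end]) for start, end, label in spans]

text = "One two. Three"
sentences = [("s", 0, 8), ("s", 9, 14)]   # syntactic division
divs = [("div", 0, 14)]                   # structural division; does not nest
print(merge_layers(text, sentences, divs))
```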
The central task of the joint project TextTransfer (Pilot) was a feasibility study for the development of a text-mining method with which research results can be automatically examined for indications of transfer and impact potential. The subproject led by the project coordinator, the IDS, concentrated on developing the methodological foundations, while the project partner TIB was primarily responsible for providing a suitable dataset. Automated methods of this kind are mostly based on text data as the physical manifestation of scientific findings, and in TextTransfer (Pilot) such data served as the empirical basis. The machine-learning approach applied in the project drew exclusively on German-language final reports of publicly funded research projects. This text genre is particularly suitable because of its public availability at the responsible memory institutions and because of its relative structural and linguistic homogeneity compared to other formats of scientific publication. TextTransfer (Pilot) therefore started from the assumption of structural and linguistic similarity among report texts for which actual transfer could be demonstrated. In what follows, these cases are referred to as texts, or text-bound research results, with transfer and impact potential. It was further postulated that these indicators can be distinguished from the linguistic properties of texts on projects without demonstrable, or possibly never realized but potentially possible, transfer or impact. Once these assumptions were verified, it became possible to predict transfer or impact probabilities in large quantities of report data without close reading.
Maskierung
(2015)
For reasons of research ethics, the data from conversation recordings, the metadata and the transcripts must be masked. This paper presents the masking workflow, which is based on experience gained in preparing the data of the Research and Teaching Corpus of Spoken German (FOLK) for publication in the Database for Spoken German (DGD).
The following article shows how several verbal argument structure patterns can build clusters or families. Argument structure patterns are conceptualised as form-meaning pairings related by family relationships. These are based on formal and / or semantic characteristics of the individual patterns making up the family. The small family of German argument structure patterns containing vor sich her and vor sich hin is selected to illustrate the process whereby pattern meaning combines with the syntactic and semantic properties of the patterns’ individual components to constitute a higher-level family or cluster of argument structure patterns. The study shows that the patterns making up the family are similar with regard to some of their formal characteristics, but differ quite clearly with respect to their meaning. The article also discusses the conditions of usage of the individual patterns of the family, the contribution of verb meaning and prepositional meaning to the overall meaning of the patterns, coercion effects, and productivity issues.
This paper describes work directed towards the development of a syllable prominence-based prosody generation functionality for a German unit selection speech synthesis system. A general concept for syllable prominence-based prosody generation in unit selection synthesis is proposed. As a first step towards its implementation, an automated syllable prominence annotation procedure based on acoustic analyses has been performed on the BOSS speech corpus. The prominence labeling has been evaluated against an existing annotation of lexical stress levels and manual prominence labeling on a subset of the corpus. We discuss methods and results and give an outlook on further implementation steps.
Taking prepositional phrases with gegen as an example, this paper investigates, from a corpus-pragmatic perspective, variants of resistance during the National Socialist era. The focus is on leaflets, programmatic writings and newspaper articles that were written collaboratively under conditions of persecution, exile or desertion. A trail to these documents, which reflect the heterogeneity and lines of conflict of the resistance at the textual level, is provided by corpus analysis using the socio-pragmatic annotations from the Paderborn HetWik project. Methodologically, gegen phrases are assigned to individual action patterns on the basis of their filler profiles and collocates. The results reveal the solidarizing effect of situationally entrenched collocations as well as a (self-)critical reflection on Nazi-era enmities.
This paper investigates evidence for linguistic coherence in new urban dialects that evolved in multiethnic and multilingual urban neighbourhoods. We propose a view of coherence as an interpretation of empirical observations rather than something that would be "out there in the data", and argue that this interpretation should be based on evidence of systematic links between linguistic phenomena, as established by patterns of covariation between phenomena that can be shown to be related at linguistic levels. In a case study, we present results from qualitative and quantitative analyses for a set of phenomena that have been described for Kiezdeutsch, a new dialect from multilingual urban Germany. Qualitative analyses point to linguistic relationships between different phenomena and between pragmatic and linguistic levels. Quantitative analyses, based on corpus data from KiDKo (www.kiezdeutschkorpus.de), point to systematic advantages for the Kiezdeutsch data from the multiethnic and multilingual context of the main corpus (KiDKo/Mu), compared to complementary corpus data from a mostly monoethnic and monolingual (German) context (KiDKo/Mo). Taken together, this indicates patterns of covariation that support an interpretation of coherence for this new dialect: our findings point to an interconnected linguistic system, rather than to a mere accumulation of individual features. In addition to this internal coherence, the data also points to external coherence: Kiezdeutsch is not disconnected on the outside either, but fully integrated within the general domain of German, an integration that defies a distinction of "autochthonous" and "allochthonous" German, not only at the level of speakers, but also at the level of linguistic systems.
Opinion Holder and Target Extraction for Verb-based Opinion Predicates – The Problem is Not Solved
(2015)
We offer a critical review of the current state of opinion role extraction involving opinion verbs. We argue that neither the currently available lexical resources nor the manually annotated text corpora are sufficient to study this task appropriately. We introduce a new corpus focusing on the opinion roles of opinion verbs from the Subjectivity Lexicon and show the potential benefits of this corpus. We also demonstrate that state-of-the-art classifiers perform rather poorly on this new dataset compared to the standard dataset for the task, showing that significant research remains to be done.
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that are easy to retrieve. A typical relation type that meets these requirements are pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer has already in their shopping cart. Our gold standard comprises structural data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
In this paper, we examine methods to automatically extract domain-specific knowledge from the food domain from unlabeled natural language text. We employ different extraction methods ranging from surface patterns to co-occurrence measures applied on different parts of a document. We show that the effectiveness of a particular method depends very much on the relation type considered and that there is no single method that works equally well for every relation type. We also examine a combination of extraction methods and also consider relationships between different relation types. The extraction methods are applied both on a domain-specific corpus and the domain-independent factual knowledge base Wikipedia. Moreover, we examine an open-domain lexical ontology for suitability.
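A typical co-occurrence measure of the kind mentioned above is pointwise mutual information (PMI). The following sketch is illustrative only; the paper does not specify this exact formula or these counts, and the example figures are invented.

```python
from math import log2

def pmi(pair_count, count_a, count_b, total):
    """Pointwise mutual information of two items co-occurring in a corpus:
    positive if they appear together more often than chance predicts."""
    p_ab = pair_count / total
    p_a = count_a / total
    p_b = count_b / total
    return log2(p_ab / (p_a * p_b))

# Two food items co-occurring 2.5x more often than chance → positive score
print(round(pmi(10, 20, 20, 100), 2))  # → 1.32
```

Applying such a measure to different parts of a document (e.g. the same sentence versus the whole text) changes which counts feed into `pair_count`, which is one way the effectiveness can vary by relation type.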
Automatic Food Categorization from Large Unlabeled Corpora and Its Impact on Relation Extraction
(2014)
We present a weakly supervised induction method to assign semantic information to food items. We consider two categorization tasks: food-type classification and the distinction of whether or not a food item is composite. The categorizations are induced by a graph-based algorithm applied to a large unlabeled domain-specific corpus. We show that the use of a domain-specific corpus is vital. We not only outperform a manually designed open-domain ontology but also demonstrate the usefulness of these categorizations in relation extraction, outperforming state-of-the-art features that include syntactic information and Brown clustering.
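The graph-based algorithm is not specified in detail in the abstract; as a rough illustration of the general idea, a toy label-propagation sketch (graph, seed labels and function are all invented here) conveys how categories such as composite vs. non-composite might spread from a few labelled food items over a similarity graph.

```python
from collections import Counter

def propagate_labels(graph, seeds, iterations=10):
    """Spread seed labels over a similarity graph by majority vote of
    labelled neighbours (a toy stand-in for graph-based induction)."""
    labels = dict(seeds)
    for _ in range(iterations):
        updated = dict(labels)
        for node, neighbours in graph.items():
            if node in seeds:
                continue  # seed labels stay fixed
            votes = Counter(labels[n] for n in neighbours if n in labels)
            if votes:
                updated[node] = votes.most_common(1)[0][0]
        labels = updated
    return labels

# Invented toy graph: edges link food items with similar corpus contexts
graph = {
    "apple": ["pear"], "pear": ["apple"],
    "cake": ["tiramisu"], "tiramisu": ["cake"],
}
seeds = {"apple": "non-composite", "cake": "composite"}
print(propagate_labels(graph, seeds))
```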
We investigate the task of detecting reliable statements about food-health relationships in natural language texts. For that purpose, we created a specially annotated web corpus from forum entries discussing the healthiness of certain food items. We examine a set of task-specific features (mostly) based on linguistic insights that are instrumental in finding utterances commonly perceived as reliable. These features are incorporated in a supervised classifier and compared against standard features widely used for various tasks in natural language processing, such as bag of words, part-of-speech and syntactic parse information.