Computerlinguistik
Corpora with high-quality linguistic annotations are an essential component in many NLP applications and a valuable resource for linguistic research. Obtaining these annotations requires a large amount of manual effort, making the creation of these resources time-consuming and costly. One approach to speeding up the annotation process is to use supervised machine-learning systems to automatically assign (possibly erroneous) labels to the data and ask human annotators to correct them where necessary. However, it is not clear to what extent these automatic pre-annotations succeed in reducing human annotation effort, and what impact they have on the quality of the resulting resource. In this article, we present the results of an experiment in which we assess the usefulness of partial semi-automatic annotation for frame labeling. We investigate the impact of automatic pre-annotation of differing quality on annotation time, consistency and accuracy. While we found no conclusive evidence that pre-annotation speeds up human annotation, we found that it does increase the overall quality of the annotation.
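The experimental setup described above can be illustrated with a minimal sketch: gold labels are corrupted to simulate an automatic pre-annotator of a chosen accuracy, and the human effort is approximated by the number of labels that would have to be corrected. The label names and functions below are invented for illustration and are not the authors' code.

```python
import random

def simulate_preannotation(gold_labels, accuracy, label_set, seed=0):
    """Corrupt gold labels to simulate an automatic pre-annotator
    with the given per-label accuracy (hypothetical setup)."""
    rng = random.Random(seed)
    pre = []
    for gold in gold_labels:
        if rng.random() < accuracy:
            pre.append(gold)  # pre-annotator got it right
        else:
            # pick a wrong label to simulate a pre-annotation error
            pre.append(rng.choice([l for l in label_set if l != gold]))
    return pre

def corrections_needed(pre_labels, gold_labels):
    """Number of labels a human annotator would have to change."""
    return sum(p != g for p, g in zip(pre_labels, gold_labels))

# Invented frame labels for illustration.
gold = ["Arriving", "Motion", "Arriving", "Statement", "Motion"]
labels = {"Arriving", "Motion", "Statement"}
pre = simulate_preannotation(gold, accuracy=0.8, label_set=labels, seed=42)
print(corrections_needed(pre, gold))
```

Varying the `accuracy` parameter mimics the "pre-annotation of differing quality" condition of the experiment.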
This article discusses questions concerning the creation, annotation and sharing of spoken language corpora. We use the Hamburg Map Task Corpus (HAMATAC), a small corpus in which advanced learners of German were recorded solving a map task, as an example to illustrate our main points. We first give an overview of the corpus creation and annotation process including recording, metadata documentation, transcription and semi-automatic annotation of the data. We then discuss the manual annotation of disfluencies as an example case in which many of the typical and challenging problems for data reuse – in particular the reliability of interpretative annotations – are revealed.
Knowledge Acquisition with Natural Language Processing in the Food Domain: Potential and Challenges
(2012)
In this paper, we present an outlook on the effectiveness of natural language processing (NLP) in extracting knowledge for the food domain. We identify potential scenarios that we think are particularly suitable for NLP techniques. As a source for extracting knowledge, we highlight the benefits of textual content from social media. We discuss typical methods that we think would be suitable, and also address potential problems and limits that the application of NLP methods may entail.
We present a gold standard for semantic relation extraction in the food domain for German. The relation types that we address are motivated by scenarios for which IT applications present a commercial potential, such as virtual customer advice, in which a virtual agent assists a customer in a supermarket in finding those products that satisfy their needs best. Moreover, we focus on those relation types that can be extracted from natural language text corpora, ideally content from the internet, such as web forums, that is easy to retrieve. A typical relation type that meets these requirements is pairs of food items that are usually consumed together. Such a relation type could be used by a virtual agent to suggest additional products available in a shop that would potentially complement the items a customer already has in their shopping cart. Our gold standard comprises structured data, i.e. relation tables, which encode relation instances. These tables are vital in order to evaluate natural language processing systems that extract those relations.
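The role of such relation tables in evaluation can be sketched as follows: the gold standard is a set of relation instances, and a system's extracted pairs are scored against it with precision and recall. The relation name and the food-item pairs below are invented examples, not entries from the actual gold standard.

```python
# Hypothetical mini relation table for a "consumed together" relation.
SERVED_WITH = {
    ("bread", "butter"),
    ("spaghetti", "tomato sauce"),
    ("sausage", "mustard"),
}

def evaluate(extracted, gold):
    """Score an extraction system's output against a gold relation table."""
    tp = len(extracted & gold)  # correctly extracted relation instances
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# An invented system output: one correct pair, one spurious pair.
system_output = {("bread", "butter"), ("bread", "milk")}
p, r = evaluate(system_output, SERVED_WITH)
print(round(p, 2), round(r, 2))  # 0.5 0.33
```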
In this paper, we examine methods to automatically extract domain-specific knowledge for the food domain from unlabeled natural language text. We employ different extraction methods ranging from surface patterns to co-occurrence measures applied to different parts of a document. We show that the effectiveness of a particular method depends very much on the relation type considered and that there is no single method that works equally well for every relation type. We also examine a combination of extraction methods and consider relationships between different relation types. The extraction methods are applied both to a domain-specific corpus and to the domain-independent factual knowledge base Wikipedia. Moreover, we examine an open-domain lexical ontology for its suitability.
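A co-occurrence measure of the kind mentioned above can be illustrated with pointwise mutual information (PMI) computed over document units. This is a generic sketch of the technique, not the paper's implementation; the toy corpus is invented.

```python
import math
from collections import Counter
from itertools import combinations

# Toy corpus: each "document" is the set of food terms it mentions.
docs = [
    ["spaghetti", "tomato", "parmesan"],
    ["spaghetti", "tomato"],
    ["bread", "butter"],
    ["bread", "butter", "tomato"],
]

n = len(docs)
single = Counter()  # document frequency of each term
pair = Counter()    # document frequency of each term pair
for doc in docs:
    terms = set(doc)
    single.update(terms)
    pair.update(frozenset(p) for p in combinations(sorted(terms), 2))

def pmi(a, b):
    """Pointwise mutual information of two terms over document units."""
    joint = pair[frozenset((a, b))] / n
    if joint == 0:
        return float("-inf")  # never observed together
    return math.log2(joint / ((single[a] / n) * (single[b] / n)))

print(round(pmi("bread", "butter"), 2))  # 1.0
```

High PMI for a pair like ("bread", "butter") would signal a candidate instance of a relation such as "usually consumed together".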
In this paper, we compare three different generalization methods for in-domain and cross-domain opinion holder extraction: simple unsupervised word clustering, an induction method inspired by distant supervision, and the use of lexical resources. The generalization methods are incorporated into diverse classifiers. We show that generalization causes significant improvements and that the magnitude of the improvement depends on the type of classifier and on how much the training and test data differ from each other. We also address the less common case of opinion holders realized in patient position and suggest approaches, including a novel linguistically informed extraction method, for detecting those opinion holders without labeled training data, since standard datasets contain too few instances of this type.
We present an experimental approach to determining natural dimensions of story comparison. The results show that untrained test subjects generally do not privilege structural information. When asked to justify sameness ratings, they may refer to content, but when asked to state differences, they mostly refer to style, concrete events, details and motifs. We conclude that adequate formal models of narratives must represent such non-structural data.
This contribution attempts an assessment of the possibilities for applying automatic analysis methods from current computational linguistics research to cross-linguistic grammar research. As an illustration, the results of a computational-linguistic study on the comparative investigation of cleft-sentence constructions in various languages are presented and discussed in detail. Corpus access in this setting is based on a fully automatic syntactic analysis, which can additionally be examined contrastively on parallel corpora by means of statistical word alignment. Besides presenting the already existing automatic annotation options, which in my view open up promising avenues for linguistic corpus access, the hope is that this contribution, through its concluding discussion, raises awareness that a deeper, more organic connection between the two linguistic disciplines is possible: namely, when corpus access does not rely on static, predefined tools whose behavior cannot be influenced by the grammar researcher, but instead proceeds through interactive tool use that exploits the manifold adaptation possibilities of the underlying machine learning methods.
A formal narrative representation is a procedure assigning a formal description to a natural language narrative. One of the goals of the computational models of narrative community is to understand this procedure better in order to automate it. A formal framework suited for automation should allow for objective and reproducible representations. In this paper, we present empirical work focusing on the objectivity and reproducibility of the formal framework by Vladimir Propp (1928). The experiments consider Propp's formalization of Russian fairy tales and formalizations done by test subjects in the same formal framework; the data show that some features of Propp's system, such as the assignment of the characters to the dramatis personae and some of the functions, are not easy to reproduce.