Computerlinguistik
Preface (2019)

Preface (2020)
The automatic recognition of idioms poses a challenging problem for NLP applications. Whereas native speakers can intuitively handle multiword expressions whose meanings are hard to trace back to the semantics of the individual words, there is still ample scope for improvement in computational approaches. We assume that idiomatic constructions can be characterized by gradual intensities of semantic non-compositionality, formal fixedness, and unusual usage context, and we introduce a number of measures for these characteristics, comprising count-based and predictive collocation measures together with measures of context (un)similarity. We evaluate our approach on a manually labelled gold standard derived from a corpus of German pop lyrics. To this end, we apply a Random Forest classifier to analyze the individual contribution of the features for automatically detecting idioms, and study the trade-off between recall and precision. Finally, we evaluate the classifier on an independent dataset of idioms extracted from a list of Wikipedia idioms, achieving state-of-the-art accuracy.
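The count-based collocation measures mentioned in the abstract can be illustrated with pointwise mutual information (PMI), a classic association score for word pairs. The toy corpus and scoring function below are a minimal sketch for illustration only, not the measures or data used in the paper:

```python
import math
from collections import Counter

def pmi(bigram_counts, unigram_counts, total_bigrams, total_unigrams, pair):
    """Pointwise mutual information: log2( p(w1,w2) / (p(w1) * p(w2)) ).
    A high positive score means the pair co-occurs far more often than
    chance would predict -- one signal of formal fixedness."""
    w1, w2 = pair
    p_pair = bigram_counts[pair] / total_bigrams
    p_w1 = unigram_counts[w1] / total_unigrams
    p_w2 = unigram_counts[w2] / total_unigrams
    return math.log2(p_pair / (p_w1 * p_w2))

# Hypothetical toy corpus: "kick" is usually followed by "bucket".
tokens = ["kick", "bucket", "kick", "bucket", "kick", "ball", "red", "bucket"]
bigrams = list(zip(tokens, tokens[1:]))
uni = Counter(tokens)
bi = Counter(bigrams)
score = pmi(bi, uni, len(bigrams), len(tokens), ("kick", "bucket"))
print(round(score, 3))
```

In a setup like the one described above, scores of this kind would be one column in the feature matrix handed to the classifier, alongside the predictive collocation and context-(un)similarity measures.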
In order to differentiate between figurative and literal usage of verb-noun combinations for the shared task on the disambiguation of German Verbal Idioms issued for KONVENS 2021, we apply and extend an approach originally developed for detecting idioms in a dataset consisting of random ngram samples. The classification is done by implementing a rather shallow, statistics-based pipeline without intensive preprocessing or in-depth examination at the morphosyntactic and semantic levels. We describe the overall approach and the differences between the original dataset and the dataset of the KONVENS task, provide experimental classification results, and analyse the individual contributions of our feature sets.
New KARL (Knowledge Acquisition and Representation Language) makes it possible to specify all parts of a problem-solving method (PSM). It is a formal language with a well-defined semantics and thus allows PSMs to be represented precisely and unambiguously while abstracting from implementation detail. In this paper we show how the language KARL has been modified and extended to New KARL to better meet the needs of representing PSMs. Based on a conceptual structure of PSMs, new language primitives are introduced into KARL to specify such a conceptual structure and to support the configuration of methods. An important goal of this extension was to preserve three important properties of KARL: being (i) a conceptual, (ii) a formal, and (iii) an executable language.
The working group was constituted at the workshop „Querbezüge des Knowledge Engineering zu Methoden des Software Engineering und der Entwicklung von Informationssystemen" at the 2nd German Expert Systems Conference [AnS93]. Initially, ten different groups and individuals took part in the working group. To focus its work, the group decided to concentrate primarily on the topics of process models and methods. A process model was understood as the "specification of the work steps to be carried out in the development of a system, ... relationships between the work steps are to be defined, as are requirements for the results to be produced." [AL0+93]. A method was understood as a "systematic procedure for solving tasks of a certain kind." [AL0+93]. Accordingly, the working group used the term methodology in the sense of a collection of methods. The group also agreed to conduct its work on the basis of a comparative case study. As a variation of the frequently used IFIP example [0SV82], the development of a (knowledge-based) system for conference management was chosen as the task for the case study. In the course of its work, the group organized a further workshop, „Vorgehensmodelle und Methoden zur Entwicklung komplexer Softwaresysteme", held at the 18th German Annual Conference on Artificial Intelligence [KuS94]. Unfortunately, the group's ongoing work showed that it is very difficult, especially for members from industry, to participate actively in such a working group over a longer period of time. Thus only four groups remained for the final phase of the working group, and these are also represented in this final report.
It should therefore be clear that this final report cannot be an analysis covering all aspects; rather, it must be limited to the conclusions that the analysed methodologies permit. Nevertheless, in the authors' view these methodologies embody typical methodical approaches of the disciplines involved. To enable a systematic comparison of the methodologies, the working group developed a catalogue of criteria with which the characteristic properties of a methodology can be captured [Kri97]. This catalogue of criteria is used below to characterize each of the four methodologies in detail.
This poster summarizes the results of the CLARIAH-DE Work Package 3: Skills Training and Promotion of Junior Researchers.
For a research field that is characterised by rapid technical development, CLARIAH-DE must include, as part of its objective, the promotion of the data literacy necessary for the efficient use of this digital research infrastructure. To develop, consolidate and refine a common programme in this area, work package 3 set itself the following sub-goals:
- Consolidation of the activities from the previous projects into a joint service
- Cataloguing and reflecting on the methods and tools used in the research field, with the aim of identifying remaining gaps
- Skills training for, individual support of, and promotion of junior researchers
In this paper, we describe a data processing pipeline used for annotated spoken corpora of Uralic languages created in the INEL (Indigenous Northern Eurasian Languages) project. With this processing pipeline we convert the data into a lossless standard format (ISO/TEI) for long-term preservation while simultaneously enabling a powerful search in this version of the data. For each corpus, the input we are working with is a set of files in EXMARaLDA XML format, which contain transcriptions, multimedia alignment, morpheme segmentation and other kinds of annotation. The first step of processing is the conversion of the data into a certain subset of TEI following the ISO standard 'Transcription of spoken language', with the help of an XSL transformation. The primary purpose of this step is to obtain a representation of our data in a standard format, which will ensure its long-term accessibility. The second step is the conversion of the ISO/TEI files to a JSON format used by the "Tsakorpus" search platform. This step allows us to make the corpora available through a web-based search interface. In addition, the existence of such a converter allows other spoken corpora with ISO/TEI annotation to be made accessible online in the future.
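The second conversion step can be pictured as walking the TEI utterance/token structure and emitting JSON sentence objects. The fragment and output schema below are invented for illustration (only the TEI namespace and the `u`/`w` element names come from the TEI vocabulary); real INEL files carry multimedia alignment and morpheme-level annotation, and the actual Tsakorpus JSON schema is considerably richer:

```python
import json
import xml.etree.ElementTree as ET

TEI_NS = "{http://www.tei-c.org/ns/1.0}"

# Hypothetical, much-simplified ISO/TEI fragment: one utterance, two tokens.
tei = """<TEI xmlns="http://www.tei-c.org/ns/1.0">
  <text><body>
    <u who="#SPK0"><w>mat</w><w>tura</w></u>
  </body></text>
</TEI>"""

def tei_to_json(xml_string):
    """Collect each <u> utterance and its <w> tokens into a flat,
    Tsakorpus-style list of sentence objects (illustrative schema only)."""
    root = ET.fromstring(xml_string)
    sentences = []
    for u in root.iter(TEI_NS + "u"):
        words = [{"wf": w.text} for w in u.iter(TEI_NS + "w")]
        sentences.append({"speaker": u.get("who"), "words": words})
    return {"sentences": sentences}

print(json.dumps(tei_to_json(tei), ensure_ascii=False))
```

A converter along these lines is what makes the corpora searchable: once the ISO/TEI archive copy exists, the JSON view can be regenerated from it at any time without touching the preservation format.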
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on existing resources developed earlier by the participating institutions, QUEST develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are the depositors' questionnaire and automatic quality assurance, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we split linguistic data into three resource classes (data deposits, collections and corpora). The class of a resource defines the strictness of the quality assurance it should undergo. This division is introduced so that overly strict quality criteria do not prevent researchers from depositing their data.
This paper presents the QUEST project and describes concepts and tools that are being developed within its framework. The goal of the project is to establish quality criteria and curation criteria for annotated audiovisual language data. Building on existing resources developed earlier by the participating institutions, QUEST also develops tools that can be used to facilitate and verify adherence to these criteria. An important focus of the project is making these tools accessible to researchers without a substantial technical background and helping them produce high-quality data. The main tools we intend to provide are a questionnaire and automatic quality assurance for depositors of language resources, both developed as web applications. They are accompanied by a knowledge base, which will contain recommendations and descriptions of best practices established in the course of the project. Conceptually, we consider three main data maturity levels in order to decide on a suitable level of strictness for the quality assurance. This division has been introduced so that a set of ideal quality criteria does not prevent researchers from depositing, or even assessing, their (legacy) data. The tools described in the paper are work in progress and are expected to be released by the end of the QUEST project in 2022.