Computerlinguistik
The 2014 issue of KONVENS is even more of a forum for exchange: its main topic is the interaction between Computational Linguistics and Information Science, and the synergies that such interaction, cooperation, and integrated views can produce. This topic, at the crossroads of different research traditions that deal with natural language as a container of knowledge and with methods to extract and manage linguistically represented knowledge, is close to the heart of many researchers at the Institut für Informationswissenschaft und Sprachtechnologie of Universität Hildesheim: it has long been one of the institute's research topics, and it has received even more attention over the last few years.
The chapter on formats and models for lexicons deals with the different available data formats for lexical resources, elaborating on their structure and possible uses. Motivated by the difficulties of merging lexical resources based on widespread formalisms and international standards, a formal lexicon model is developed that is related to the graph structures used in annotation. For lexicons, this model is termed the Lexicon Graph. Within this model, the concepts of lexicon entries and lexical structures frequently described in the literature are formally defined, and examples are given. The article also addresses the problem of ambiguity in these formal terms. An implementation of the defined structures based on XML and XML technologies such as XQuery is presented, and the relation to international standards is discussed as well.
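The idea of a lexicon as a graph of lexical objects connected by typed relations can be sketched in a few lines. The following is a minimal illustration only; the class and relation names are invented here and are not the Lexicon Graph schema or the chapter's actual XML/XQuery implementation:

```python
# Minimal sketch of a lexicon-as-graph: nodes are lexical objects
# (word forms, senses), edges are typed relations between them.
# All names are illustrative, not the article's actual schema.

class LexiconGraph:
    def __init__(self):
        self.nodes = {}   # node id -> {"type": ..., "value": ...}
        self.edges = []   # (source id, relation label, target id)

    def add_node(self, node_id, node_type, value):
        self.nodes[node_id] = {"type": node_type, "value": value}

    def add_edge(self, source, relation, target):
        self.edges.append((source, relation, target))

    def related(self, node_id, relation):
        """All target nodes reachable from node_id via `relation`."""
        return [self.nodes[t] for s, r, t in self.edges
                if s == node_id and r == relation]

lg = LexiconGraph()
lg.add_node("f1", "form", "bank")
lg.add_node("s1", "sense", "financial institution")
lg.add_node("s2", "sense", "edge of a river")
# Ambiguity is represented naturally: one form node is simply
# linked to several sense nodes.
lg.add_edge("f1", "hasSense", "s1")
lg.add_edge("f1", "hasSense", "s2")

senses = lg.related("f1", "hasSense")
```

In such a model, a traditional "lexicon entry" is not a primitive but a view over the graph: the subgraph reachable from a chosen head node.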
In this chapter, we discuss steps toward extending CMDI's semantic interoperability beyond the Social Sciences and Humanities. We stress the need for an initial data curation step, in part supported by a relation registry that helps impose some structure on CMDI vocabulary; we describe the use of authority file information and other controlled vocabularies to help connect CMDI-based metadata to existing Linked Data; we show how significant parts of CMDI-based metadata can be converted to bibliographic metadata standards and hence entered into library catalogs; and finally we describe first steps toward converting CMDI-based metadata to RDF. The initial grassroots approach of CMDI (meaning that anybody can define metadata descriptors and components) mirrors the AAA slogan of the Semantic Web ("Anyone can say Anything about Any topic"). Ironically, this makes it hard to fully link CMDI-based metadata to other Semantic Web datasets. This paper discusses the challenges of this enterprise.
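The basic shape of a metadata-to-RDF conversion like the one described can be sketched as mapping each leaf element of an XML record to one subject–predicate–object triple. The record and element names below are invented for illustration; real CMDI components vary by profile, and a real conversion would also need namespace and datatype handling:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal CMDI-like record (not a real CMDI profile).
cmdi = """<CMD>
  <Components>
    <Resource>
      <title>A sample corpus</title>
      <creator>Jane Doe</creator>
    </Resource>
  </Components>
</CMD>"""

def to_triples(xml_text, subject):
    """Map each non-empty leaf element to one (s, p, o) triple,
    using the element tag as the predicate name."""
    root = ET.fromstring(xml_text)
    triples = []
    for elem in root.iter():
        if len(elem) == 0 and elem.text and elem.text.strip():
            triples.append((subject, elem.tag, elem.text.strip()))
    return triples

triples = to_triples(cmdi, "ex:resource1")
```

The hard part the chapter points to is not this mechanical step but choosing *shared* predicates: without curation, every self-defined CMDI descriptor becomes its own predicate, which is exactly what limits linking to other datasets.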
CorpusExplorer v2.0 is a freely available tool for corpus-hermeneutic analysis and offers more than 45 different analyses and visualizations for a user's own corpus material. This field report gives insights, points out pitfalls, and offers solutions to ease everyday visualization work. It first briefly sketches the ideas that led to the development of CorpusExplorer, a corpus-linguistics tool that not only supports a wide range of research approaches but is also developed with a focus on university teaching. The middle part addresses one of the many pitfalls encountered during development: efficiency and adaptation problems, that is, what happens when visualizations have to be adapted to new circumstances? Since the solution to this problem is part of CorpusExplorer v2.0, the report concludes by discussing how different visualizations of the same data sets affect the reception and interpretation of the data.
This paper describes a new approach to improving the analysis and categorization of web documents, using statistical methods for template-based clustering as well as semantic analysis based on terminological ontologies. A domain-specific environment serves as a proof of concept. In order to demonstrate the broad practical benefit of our approach, we outline a combined mathematical and semantic framework for information retrieval on internet resources.
Corpora with high-quality linguistic annotations are an essential component in many NLP applications and a valuable resource for linguistic research. Obtaining these annotations requires a large amount of manual effort, making the creation of such resources time-consuming and costly. One attempt to speed up the annotation process is to use supervised machine-learning systems to automatically assign (possibly erroneous) labels to the data and ask human annotators to correct them where necessary. However, it is not clear to what extent these automatic pre-annotations actually reduce human annotation effort, or what impact they have on the quality of the resulting resource. In this article, we present the results of an experiment in which we assess the usefulness of partial semi-automatic annotation for frame labeling. We investigate the impact of automatic pre-annotation of differing quality on annotation time, consistency, and accuracy. While we found no conclusive evidence that pre-annotation speeds up human annotation, we found that it does increase the overall quality of the resulting annotations.
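The pre-annotation workflow the abstract describes (automatic labels first, human correction second) can be sketched as a simple loop, together with one obvious measure of how much pre-annotation survived. The data and label names below are invented; this is not the experimental setup of the article itself:

```python
# Sketch of the pre-annotation workflow: an automatic tagger proposes
# labels, a human pass corrects the wrong ones, and we measure the
# fraction of labels the human did not have to touch. All data invented.

def correct(pre_labels, gold_labels):
    """Simulate the human correction pass: replace every wrong pre-label."""
    corrected = []
    changed = 0
    for pre, gold in zip(pre_labels, gold_labels):
        if pre != gold:
            corrected.append(gold)
            changed += 1
        else:
            corrected.append(pre)
    return corrected, changed

pre  = ["AGENT", "THEME", "AGENT", "GOAL"]   # automatic (noisy) labels
gold = ["AGENT", "THEME", "THEME", "GOAL"]   # what the human decides
final, n_changed = correct(pre, gold)

# Fraction of tokens the human left unchanged:
kept = 1 - n_changed / len(pre)
```

The open empirical question the article investigates is precisely whether a high `kept` fraction translates into time savings, or whether checking correct pre-labels costs as much as assigning labels from scratch.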
This paper deals with multiword lexemes (MWLs), focussing on two types of verbal MWLs: verbal idioms and support verb constructions. We discuss the characteristic properties of MWLs, namely nonstandard compositionality, restricted substitutability of components, and restricted morpho-syntactic flexibility, and we show how these properties may cause serious problems during the analysis, generation, and transfer steps of machine translation systems. In order to cope with these problems, MT lexicons need to provide detailed descriptions of MWL properties. We list the types of information which we consider the necessary minimum for a successful processing of MWLs, and report on some feasibility studies aimed at the automatic extraction of German verbal multiword lexemes from text corpora and machine-readable dictionaries.
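The kinds of information the paper argues an MT lexicon must record for a multiword lexeme (components, substitutability, morpho-syntactic flexibility, non-compositional meaning) can be sketched as a structured lexicon entry. The field names and the concrete example are invented for illustration; they are not the paper's actual lexicon schema:

```python
from dataclasses import dataclass, field

# Illustrative MT-lexicon entry for a verbal multiword lexeme.
# The fields mirror the property types discussed in the paper;
# the concrete schema here is invented.

@dataclass
class MWLEntry:
    lemma: str            # canonical citation form of the MWL
    mwl_type: str         # "idiom" or "support-verb construction"
    components: list      # lexical components in canonical order
    substitutable: dict = field(default_factory=dict)  # allowed swaps per slot
    passivizable: bool = False   # one morpho-syntactic flexibility flag
    gloss: str = ""              # non-compositional meaning, for transfer

kick_the_bucket = MWLEntry(
    lemma="kick the bucket",
    mwl_type="idiom",
    components=["kick", "the", "bucket"],
    substitutable={},      # "kick the pail" loses the idiomatic reading
    passivizable=False,    # the idiomatic reading resists passivization
    gloss="die",
)
```

An analysis or transfer component can then consult such flags instead of treating the expression compositionally, which is where the problems described above arise.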
The myth of "artificial intelligence" is propagated especially by the so-called "transhumanist" community in Silicon Valley, whose representatives, such as the physicist Ray Kurzweil, assume that within 30 years at most we will be able to communicate with AIs as we do with a human being (Kurzweil 2005). In 2017, Saudi Arabia already granted citizenship to Sophia, an anthropomorphic robot with a speech interface (Arab News 2017). Artificial intelligences such as Apple's assistant Siri or Amazon's Alexa are currently entering our everyday lives. Chatbots and social bots such as the Twitter bot Tay influence public discourse, and interactive toys with dialogue functions are already introducing our youngest to interaction with an artificial counterpart. A completely new form of dialogicity is emerging here, one that we still barely understand from a linguistic perspective. Independent studies of human-machine interaction are therefore a major desideratum.